#include <calib3d.hpp>

Block Matching Stereo Correspondence Algorithm

The class implements the block matching (BM) stereo correspondence algorithm by K. Konolige. Its members are:

- the default constructor
- the full constructor, taking the camera-specific preset, the number of disparities and the SAD window size
- the method that reinitializes the state (the previous content is destroyed)
- the stereo correspondence operator, which finds the disparity for the specified rectified stereo pair
- a pointer to the underlying CvStereoBMState
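A minimal usage sketch of the 2.4-era C++ API covered by this reference; the image file names and the disparity/window values below are illustrative assumptions, not taken from the documentation:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>

int main()
{
    // Rectified stereo pair, loaded as single-channel images
    cv::Mat left  = cv::imread("left.png",  CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Full constructor: camera-specific preset, number of disparities, SAD window size
    cv::StereoBM bm(cv::StereoBM::BASIC_PRESET, 64, 21);

    // Stereo correspondence operator: finds the disparity for the rectified pair
    cv::Mat disparity;
    bm(left, right, disparity, CV_16S);

    // CV_16S disparities are fixed-point with 4 fractional bits; rescale before saving
    cv::Mat disp8;
    disparity.convertTo(disp8, CV_8U, 255.0 / (64 * 16.0));
    cv::imwrite("disparity.png", disp8);
    return 0;
}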
https://docs.opencv.org/ref/2.4.13.2/d9/dba/classcv_1_1StereoBM.html
trans-render provides an alternative way of instantiating a template. trans-render Yes, there is an actual web component in this package. However, it won't make sense unless the core functions described first are (at least partly) understood. trans-render provides an alternative way of instantiating a template. It draws inspiration from the (least) popular features of XSLT. Like XSLT, trans-render performs transforms on elements by matching tests on elements. Whereas XSLT uses XPath for its tests, trans-render uses css path tests via the element.matches() and element.querySelector() methods. XSLT can take pure XML with no formatting instructions as its input. Generally speaking, the XML that XSLT acts on isn't a bunch of semantically meaningless div tags, but rather a nice semantic document, whose intrinsic structure is enough to go on, in order to formulate a "transform" that doesn't feel like a hack. Likewise, with the advent of custom elements, the template markup will tend to be much more semantic, like XML. trans-render tries to rely as much as possible on this intrinisic semantic nature of the template markup, to give enough clues on how to fill in the needed "potholes" like textContent and property setting. But trans-render is completely extensible, so it can certainly accommodate custom markup (like string interpolation, or common binding attributes) by using additional, optional helper libraries. This leaves the template markup quite pristine, but it does mean that the separation between the template and the binding instructions will tend to require looking in two places, rather than one. And if the template document structure changes, separate adjustments may be needed to make the binding rules in sync. Much like how separate style rules often need adjusting when the document structure changes. Advantages By keeping the binding separate, the same template can thus be used to bind with different object structures. Providing the binding transform in JS form inside the init function signature has the advantage that one can benefit from TypeScript typing of Custom and Native DOM elements with no additional IDE support. Another advantage of separating the binding like this, is that one can insert comments, console.log's and/or breakpoints, in order to walk through the binding process. For more musings on the question of what is this good for, please see the rambling section below. NB It's come to my attention (via template discussions found here) that there are some existing libraries which have explored similar ideas: Workflow trans-render provides helper functions for cloning a template, and then walking through the DOM, applying rules in document order. Note that the document can grow, as processing takes place (due, for example, to cloning sub templates). It's critical, therefore, that the processing occur in a logical order, and that order is down the document tree. That way it is fine to append nodes before continuing processing. Drilling down to children For each matching element, after modifying the node, you can instruct the processor which node(s) to consider next. Most of the time, especially during initial development, you won't need / want to be so precise about where to go next. Generally, the pattern, as we will see, is just to define transform rules that match the HTML Template document structure pretty closely. 
So, in the example we will see below, this notation: const Transform = { details: { summary: x => model.summaryText } }; means "if a node has tag name "details", then continue processing the next siblings of details, but also, find the first descendent of the node that has tag name "summary", and set its textContent property to model.summaryText." If most of the template is static, but there's a deeply nested element that needs modifying, it is possible to drill straight down to that element by specifying a "Select" string value, which invokes querySelector. But beware: there's no going back to previous elements once that's done. If your template is dense with dynamic pockets, you will more likely want to navigate to the first child by setting Select = '*'. So the syntax shown above is equivalent to: const Transform = { details: { Select: 'summary', Transform: { summary: x => model.summaryText } } }; In this case, the details property is a "NextStep" JS Object. Clearly, the first example is easier, but you need to adopt the second way if you want to fine tune the next processing steps. Matching next siblings We most likely will also want to check the next siblings down for matches. Previously, in order to do this, you had to make sure "matchNextSibling" was passed back for every match. But that proved cumbersome. The current implementation checks for matches on the next sibling(s) by default. You can halt going any further by specifying "SkipSibs" in the "NextStep" object, something to strongly consider when looking for optimization opportunities. It is deeply unfortunate that the DOM Query Api doesn't provide a convenience function for finding the next sibling that matches a query, similar to querySelector. Just saying. But some support for "cutting to the chase" laterally is also provided, via the "NextMatch" property in the NextStep object. At this point, only a synchronous workflow is provided. Syntax Example: <template id="sourceTemplate"> <details> ... <summary></summary> ... </details> </template> <div id="target"></div> <script type="module"> import { init } from '../init.js'; const model = { summaryText: 'hello' } const Transform = { details: { summary: x => model.summaryText } }; init(sourceTemplate, { Transform }, target); </script> Produces <div id="target"> <details> ... <summary>hello</summary> ... </details> </div> Or even simpler, your transform can hardcode some values: <template id="sourceTemplate"> <details> ... <summary></summary> ... </details> </template> <div id="target"></div> <script type="module"> import { init } from '../init.js'; const Transform = { details: { summary: 'Hallå' } }; init(sourceTemplate, { Transform }, target); </script> produces: <div id="target"> <details> ... <summary>Hallå</summary> ... </details> </div> "target" is the HTML element we are populating. The transform matches can return a string, which will be used to set the textContent of the target. Or the transform can do its own manipulations on the target element, and then return a "NextStep" object specifying where to go next, or it can return a new Transform, which will get applied the first child by default. Note the unusual property name casing, in the JavaScript arena for the NextStep object: Transform, Select, SkipSibs, etc. As we will see, this pattern is to allow the interpreter to distinguish between css matches for a nested Transform, vs a "NextStep" JS object. What does wdwsf stand for? 
As you may have noticed, some abbreviations are used by this library: - init = initialize - ctx = (rendering) context - idx = (numeric) index of array - SkipSibs = Skip Siblings - attribs = attributes - props = properties - refs = references Use Case 1: Applying the DRY principle to (post) punk rock lyrics Example 1a (only viewable at webcomponents.org ) Demonstrates including sub templates. Note the transform rule above (if viewed from webcomponents.org): Transform: { '*': { Select: '*' }, "*" is a match for all css elements. What this is saying is "for any element regardless of css-matching characteristics, continue processing its first child (Select => querySelector). This, combined with the default setting to match all the next siblings means that, for a "sparse" template with very few pockets of dynamic data, you will be doing a lot more processing than needed, as every single HTMLElement node will be checked for a match. But for initial, pre-optimization work, this transform rule can be a convenient way to get things done more quickly. Example 1b (only viewable at webcomponents.org ) Demonstrates use of update, rudimentary interpolation, recursive select. Reapplying (some) of the transform Often, we want to reapply a transform, after something changes -- typically the source data. The ability to do this is illustrated in the previous example. Critical syntax shown below: <script type="module"> import { init } from '../init.js'; import { interpolate } from '../interpolate.js'; import {update} from '../update.js'; const ctx = init(Main, { model:{ Day1: 'Monday', Day2: 'Tuesday', Day3: 'Wednesday', Day4: 'Thursday', Day5: 'Friday', Day6: 'Saturday', Day7: 'Sunday', }, interpolate: interpolate, $: id => window[id], }, target); changeDays.addEventListener('click', e=>{ ctx.model = { Day1: 'måndag', Day2: 'tisdag', Day3: 'onsdag', Day4: 'torsdag', Day5: 'fredag', Day6: 'lördag', Day7: 'söndag', } update(ctx, target); }) </script> Loop support (NB: Not yet optimized) The next big use case for this library is using it in conjunction with a virtual scroller. As far as I can see, the performance of this library should work quite well in that scenario. However, no self respecting rendering library would be complete without some internal support for repeating lists. This library is no exception. While the performance of the initial list is likely to be acceptable, no effort has yet been made to utilize state of the art tricks to make list updates keep the number of DOM changes at a minimum. Anyway the syntax is shown below. What's notable is a sub template is cloned repeatedly, then populated using the simple init / update methods. 
<div> <template id="itemTemplate"> <li></li> </template> <template id="list"> <ul id="container"></ul> <button id="addItems">Add items</button> <button id="removeItems">Remove items</button> </template> <div id="target"></div> <script type="module"> import { init } from '../init.js'; import { repeat} from '../repeat.js'; import {update} from '../update.js'; const options = {matchNext: true}; const itemTransform = { li: ({ idx }) => 'Hello ' + idx, }; const ctx = init(list, { Transform: { ul: ({ target, ctx }) => repeat(itemTemplate, ctx, 10, target, itemTransform) } }, target, options); ctx.update = update; addItems.addEventListener('click', e => { repeat(itemTemplate, ctx, 15, container, itemTransform); }); removeItems.addEventListener('click', e =>{ repeat(itemTemplate, ctx, 5, container); }) </script> </div> Simple Template Insertion (implemented, untested) A template can be inserted directly inside the target element as follows: <template id="summaryTemplate"> My summary Text </template> <template id="sourceTemplate"> <details> ... <summary></summary> ... </details> </template> <div id="target"></div> <script type="module"> import { init } from '../init.js'; const model = { const Transform = { details: { summary: summaryTemplate } }; init(sourceTemplate, { Transform }, target); </script> Multiple matching with "Ditto" notation (untested) Sometimes, one rule will cause the target to get (new) children. We then want to apply another rule to process the target element, now that the children are there. But uniqueueness of the keys of the JSON like structure we are using prevents us from listing the same match expression twice. We can specify multiple matches as follows: <script type="module"> import { init } from '../init.js'; const model = { const Transform = { details: { summary: summaryTemplate, '"': ({target}) => ..., '""': ..., '"3': ... } }; init(sourceTemplate, { Transform }, target); </script> I.e. any selector that starts with a double quote (") will use the last selector that didn't. Alternate Template Selection <template id="sourceTemplate"> <details> <div data- <template data- <script type="module"> import 'myCdn/mondayView.js'; </script> <monday-view></monday-view> </template> <template data- <script type="module"> import 'myCdn/tuesdayview.js'; </script> <tuesday-view></tuesday-view> </template> </div> </details> </template> <script type="module"> import { init } from '../init.js'; import { chooser } from '../chooser.js'; const model = { const Transform = { details: { 'div[data-is="switch"]': ({target}) => chooser(target, '[data-tag="condition-1"]', 'afterend'); } }; init(sourceTemplate, { Transform }, target); </script> Ramblings From the Department of Faulty Analogies When defining an HTML based user interface, the question arises whether styles should be inlined in the markup or kept separate in style tags and/or CSS files. The ability to keep the styles separate from the HTML does not invalidate support for inline styles. The browser supports both, and probably always will. Likewise, arguing for the benefits of this library is not in any way meant to disparage the usefulness of the current prevailing orthodoxy of including the binding / formatting instructions in the markup. I would be delighted to see the template instantiation proposal, with support for inline binding, added to the arsenal of tools developers could use. 
Should that proposal come to fruition, this library, hovering under 1KB, would be in mind-share competition (my mind anyway) with one that is 0KB, with the full backing / optimization work of Chrome, Safari, Firefox. Why would anyone use this library then? And in fact, the library described here is quite open ended. Until template instantiation becomes built into the browser, this library could be used as a tiny stand-in. Once template instantiation is built into the browser, this library could continue to supplement the native support (or the other way around, depending.) For example, in the second example above, the core "init" function described here has nothing special to offer in terms of string interpolation, since CSS matching provides no help: <div>Hello {{Name}}</div> We provide a small helper function "interpolate" for this purpose, but as this is a fundamental use case for template instantiation, and as this library doesn't add much "value-add" for that use case, native template instantiation could be used as a first round of processing. And where it makes sense to tightly couple the binding to the template, use it there as well, followed by a binding step using this library. Just as use of inline styles, supplemented by css style tags/files (or the other way around) is something seen quite often. A question in my mind, is how does this rendering approach fit in with web components (I'm going to take a leap here and assume that HTML Modules / Imports in some form makes it into browsers, even though I think the discussion still has some relevance without that). I think this alternative approach can provide value, by providing a process for "Pipeline Rendering": Rendering starts with an HTML template element, which produces transformed markup using init or native template instantiation. Then consuming / extending web components could insert additional bindings via the CSS-matching transformations this library provides. To aid with this process, the init and update functions provide a rendering options parameter, which contains an optional "initializedCallback" and "updatedCallback" option. This allows a pipeline processing sequence to be set up, similar in concept to Apache Cocoon. NB In re-reading the template instantiation proposal with a fresh set of eyes, I see now that there has in fact been some careful thought given to the idea of providing a kind of pipeline of binding. And as mentioned above, this library provides little help when it comes to string interpolation, so the fact that the proposal provides some hooks for callbacks is really nice to see. I may not yet fully grasp the proposal, but it still does appear to me that the template instantiation proposal is only useful if one defines regions ahead of time in the markup where dynamic content may go. This library, on the other hand, considers the entire template document open for amendment. This may be alarming, if as me, you find yourself comparing this effort to the ::part ::theme initiative, where authors need to specify which elements can be themed. However, the use case is quite different. In the case of stylesheets, we are talking about global theming, affecting large numbers of elements at the same time. The use case I'm really considering is one web component extending another. I don't just mean direct class inheritance, but compositional extensions as well. It doesn't seem that unreasonable to provide maximum flexibility in that circumstance. 
Yes, I suppose the ability to mark some tags as "undeletable / non negotiable" might be nice, but I see no way to enforce that. Client-side JS faster than SSR? Another interesting case to consider is this Periodic Table Codepen example. Being what it is, it is no suprise that there's a lot of repetitive HTML markup needed to define the table. An intriguing question, is this: Could this be the first known scenario in the history of the planet, where rendering time (including first paint) would be improved rather than degraded with the help of client-side JavaScript? The proper, natural instinct of a good modern developer, including the author of the codepen, is to generate the HTML from a concise data format using a server-side language (pug). But using this library, and cloning some repetitive templates on the client side, reduces download size from 16kb to 14kb, and may improve other performance metrics as well. These are the performance results my copy of chrome captures, after opening in an incognito window, and throttling cpu to 6x and slow 3g network. Trans-Rendering: Original: You can compare the two here: This link uses client-side trans-rendering. This link uses all static html Results are a bit unpredictable, and usually the differences are less dramatic. Lighthouse scrores also provide evidence that trans-rendering improves performance. Trans-Rendering: Original: Once in a while the scores match, but most of the time the scores above are what is seen. So the difference isn't dramatic, but it is statistically significant, in my opinion. Miscellaneous Helper Functions insertAdjacentTemplate(template: HTMLTemplateElement, target: Element, position: InsertPosition) This function is modeled after insertAdjacentElement / insertAdjacentHTML. Only here we are able to insert a template. By using the preferred "afterEnd" as the insert position, the trans-rendering will be able to process those nodes like any other nodes. Declative-ish property setting Object.assign and its modern abbreviated variations, provides a quite declarative feeling when populating an object with values. Unfortunately, Object.assign throws errors if using it to set read-only properties like style and dataset (are there others?). An alternative to object.assign are convenience functions like JQuery.extends, JQuery.attr and "h", which domMerge draws inspiration from. The function domMerge provides similar help. The (tentative) signature is export function domMerge(target: HTMLElement, vals: Vals): void where export interface Vals { attribs?: { [key: string]: string | boolean | number }; propVals?: object; } Behavior enhancement Vue (with common roots from Polymer 1) provides an elegant way of turning an existing DOM element into a kind of anonymous custom element. The alternative to this is the "is" built-in custom element api, which, while implemented in two of the three major browsers, remains strongly opposed by the third, and the reasons seem, to my non-expert ears, to have some merit. Even if the built-ins do become a standard, I still think the "decorate" function, described below, would come in handy for less formal occasions. 
Tentative Signature: export function decorate( target: HTMLElement, source: DecorateArgs ) where export interface DecorateArgs extends Vals{ propDefs?: object, methods?: {[key: string] : Function}, on?: {[key: string] : (e: Event) => void}, } For example: <div id="decorateTest"> <button>Test</button> </div> <script type="module"> import {decorate} from '../decorate.js'; import {init, attribs} from '../init.js'; init(decorateTest, { Transform: { div: { button: ({target}) => decorate(target, { propVals:{ textContent: 'Hello', }, attribs: { title: 'Hello, world' }, propDefs:{ count: 0 }, on:{ click: function(e){ this.count++; } }, methods:{ onPropsChange(){ alert(this.count) } } }) } } }) </script> decorate can also attach behaviors to custom elements, not just native elements, in a decorative way. Avoiding namespace collisionsReflections on the Revolutionary Extensible Web Manifesto NB: All names, characters, and incidents portrayed in the following discussion are fictitious. No identification with actual persons (living or deceased), places, buildings, and products is intended or should be inferred. No person or entity associated with this discussion received payment or anything of value, or entered into any agreement, in connection with the depiction of tobacco products. No animals were harmed in formulating the points discussed below. In a web-loving land, there was a kingdom that held sway over a large portion of the greatest minds, who in turn guided career choices of the common folk. The kingdom's main income derived from a most admirable goal -- keeping friends and family in touch. The kingdom was ruled by conservatives. "Edmund Burke" conservatives, who didn't see the appeal of allowing heretics to join freely in their kingdom. They were tolerant, mind you. If you were not a tax-paying subject born to a family of the kingdom, i.e. a heretic, and you wanted to visit their kingdom, you could do so. You only had to be heavily surrounded by guards, who would translate what you had to say, and vice versa, into Essex, the de-facto language of the web, according to the kingdom's elites. The heretics called these conservatives unflattering words like "reactionaries." "Why can't we speak directly to your subjects? What are you afraid of?" the counter-cultural heretics would plead. The ruling elites countered with fancy words like "heuristics" and "smoosh." "We've put our greatest minds to the problem, and, quite frankly, they're stumped. We don't see how we can let you speak freely without corrupting the language of the web. The web rules over all of us, and what if the web wants to introduce an attribute that is already in heavy use? What are we to do then? Don't you see? We are the true lovers of the web. We are protecting the web, so it can continue to evolve and flourish." Which all sounded like a good faith argument. But why, at least one heretic thought, has the main web site used to bind family and friends together introduced the following global constants, which surely could cause problems if the web wanted to evolve:A subset of global constants. 
meta_referrer pageTitle u_0_11 u_0_11 u_0_11 u_0_11 u_0_11 u_0_12 u_0_13 u_0_14 u_0_15 u_0_16 u_0_17 pagelet_bluebar blueBarDOMInspector login_form pass loginbutton u_0_2 u_0_3 u_0_4 lgnjs locale prefill_contact_point prefill_source prefill_type globalContainer content reg_box reg_error reg_error_inner reg reg_form_box fullname_field u_0_b u_0_c u_0_d u_0_e fullname_error_msg u_0_f u_0_g u_0_h u_0_i u_0_j u_0_k u_0_l u_0_m password_field u_0_n u_0_o u_0_p u_0_q month day year birthday-help u_0_r u_0_s u_0_9 u_0_a u_0_t terms-link privacy-link cookie-use-link u_0_u u_0_v referrer asked_to_login terms ns ri action_dialog_shown reg_instance contactpoint_label ignore locale reg_captcha security_check_header outer_captcha_box captcha_box captcha_response_error captcha captcha_persist_data captcha_response captca-recaptcha captcha_whats_this captcha_buttons u_0_w u_0_x u_0_y reg_pages_msg u_0_z u_0_10 pageFooter contentCurve js_0 u_0_18 u_0_19 And why does the kingdom not want to empower its subjects to choose for themselves if this is a valid concern? Now I do think this is a concern to consider. Focusing on the decorate functionality described above, the intention here is not to provide a formal extension mechanism, as the built-in custom element "is" extension proposal provides (and which Apple tirelessly objects to), but rather a one-time duct tape type solution. Whether adding a property to a native element, or to an existing custom element, to err on the side of caution, the code doesn't pass the property or method call on to the element it is decorating. NB If: - You are slapping properties onto an existing native HTML element, and: - The existing native HTML element might, in the future, adopt properties / methods with the same name. Then it's a good idea to consider making use of Symbols: <, attribs} from '../decorate.js'; import {init} from '../init.js'; const count = Symbol('count'); const myMethod = Symbol('myMethod'); init(decorateTest, { Transform: { button: ({target}) => decorate(target, { propVals:{ textContent: 'Hello', }, attribs:{ title: "Hello, world" }, propDefs:{ [count]: 0 }, on:{ click: function(e){ this[count]++; } }, methods:{ onPropsChange(){ this[myMethod](); }, [myMethod](){ alert(this[count]); } } }) } }) </script> </body> </html> The syntax isn't that much more complicated, but it is probably harder to troubleshoot if using symbols, so use your best judgment. Perhaps start properties and methods with an underscore if you wish to preserve the easy debugging capabilities. You can also use Symbol.for('count'), which kind of meets halfway between the two approaches. Even more indirection The render context which the init function works with provides a "symbols" property for storing symbols. 
The transform does look a little scary at first, but hopefully it's manageable:

<script type="module">
import {decorate} from '../decorate.js';
import {init} from '../init.js';
init(decorateTest, {
    symbols: {
        count: Symbol('count'),
        myMethod: Symbol('myMethod')
    },
    Transform: {
        button: ({target, ctx}) => decorate(target, {
            propVals: {
                textContent: 'Hello',
            },
            attribs:{
                title: "Hello, world"
            },
            propDefs:{
                [ctx.symbols['count']]: 0
            },
            on:{
                click: function(e){
                    this[ctx.symbols['count']]++;
                }
            },
            methods:{
                onPropsChange(){
                    this[ctx.symbols['myMethod']]();
                },
                [ctx.symbols['myMethod']](){
                    alert(this[ctx.symbols['count']]);
                }
            }
        })
    }
})
</script>
</body>
</html>

appendTag(container: HTMLElement, name: string, config: DecorateArgs) : HTMLElement

Just saves a tiny bit of boilerplate (document.createElement, container.appendChild).

chooser(container: Element, select: string, position: InsertPosition, target?: HTMLElement) -- untested

Clones the template element within the container, matching the select string, and inserts according to the position parameter, relative to the optional target element, or the container if no target element is provided.

replaceTargetWithTemplate(target: Element, template: HTMLTemplateElement) -- untested

injectModuleScript(script: string) -- untested

injectModuleRef(path: string) -- untested

trans-render the web component

A web component wrapper around the functions described here is available.
https://vaadin.com/directory/component/bahrustrans-render
One of the most common patterns in object oriented programming is dependency injection, and the inversion of control principle, (IOC). IOC containers are often feature packed, complex beasts that can stump even seasoned programmers. They take a collection of types with dependencies and when you need an instance of something they can automagically wire one up for you. You might have seen Typescript containers in frameworks like Angular, and NestJs with their module systems. Or maybe you are using a stand alone container like Inversify. One of the best ways to demystify programming concepts is to go out and build it yourself, so this article will build a minimal toy container step by step. But first… A quick history lesson Back yonder during the framework wars of 2014, some Google engineers had run into a problem. They had been working on Angular 2 when they realised the language they were building it in, Typescript, had a fatal flaw. It was kind of a deal breaker, so they did what Google engineers do in these kinds of situations. They invented a new language. It was called AtScript. I'm not here to rehash the history of AtScript. Anders Hejlsberg,(creator of Typescript), gives his short version of it here. Like Anders mentions in his talk, Typescript at the time was missing two crucial features which AtScript was meant to address; Decorators and Reflection. And they were the secret sauce that made makes IOC in Typescript possible. Decorators If you've used a Typescript container before, you've probably seen something like this: @Injectable() class SomeService { constructor(private anotherService: AnotherService) {} } At the top there we have the Injectable decorator. The decorator is saying this class can have its dependencies automatically injected. A decorator is a function which wraps a class, function or method and adds behaviour to it. This is useful for defining metadata associated with an object. It also ties into the way reflection works in Typescript. Reflection In order to know which things to wire up, we need to be able to inspect types at runtime. Let's look at how Javascript does things before getting to Typescript. const a = "hello there"; const b = 0b1; console.log(typeof a); // "string"; console.log(typeof b); // "number"; While it isn't perfect, Javascript does support a degree of basic runtime reflection. Besides the primitive types of the language, (num, boolean, object, string, array etc), classes also carry runtime information: class Alpha {} const a = new Alpha(); a instanceof Alpha; // true We can also inspect the class's prototype to get a list of methods. But that's where we start to hit some limits. There is no easy way to extract the names of class properties or method parameters. Traditional pure javascript containers would use hacks like casting the function or class to a string and manually parsing that string to get the names of each parameter/property. That name would then be used by the container to lookup the correct dependency. Of course, this would fail if you ran a minifier over your code, because all those parameter names would change. This was a common issue with Angular 1, and the work arounds involved a lot of redundancy. So, vanilla Javascript doesn't help us much in the reflection department. To combat this, Typescript uses a library called reflect-metadata to store additional type information. For instance, Typescript types assigned to parameters and properties are made available at runtime. 
It is enabled with the 'emitDecoratorMetadata' compiler option. @SomeDecorator() function someFunc(a: number, b: string){} Reflect.getMetadata('design:types', someFunc); // Number, String There are two catches though: - Classes/functions must have a decorator for them to save metadata. - Only classes/enums/primitive types can be recorded. Interfaces and union types come through as 'Object'. That's because these types disappear entirely after compilation, whereas classes hang around. Anyway, that's enough background for now. If Typescript decorators/reflect-metadata are still confusing you, go check out the official tutorial. The Code Our container is going to use two main concepts. Tokens and Providers. Tokens are an identifier for something that our container needs to know how to create, and providers describe how to create them. With that in mind, a minimal public interface for the Container class looks like this. export class Container { addProvider<T>(provider: Provider<T>) {} // TODO inject<T>(type: Token<T>): T {} // TODO } Now let's define our Token. Tokens can either refer to a class or, in cases where the parameter type doesn't give enough context about what to inject, a constant attached to a parameter with a decorator. const API_URL_TOKEN = new InjectionToken('some-identifier'); const TWITTER_TOKEN = new InjectionToken('another-identifier'); class SomeClass { // Both AService, API_URL_TOKEN, and TWITTER_URL_TOKEN are all tokens. // We will define the Inject decorator later. constructor(b: AService, @Inject(API_URL_TOKEN) apiURL: string, @Inject(TWITTER_URL_TOKEN) twitterUrl: string) {} } Our definition for Tokens looks like this: // We use this to refer to classes. export interface Type<T> extends Function { // Has a constructor which takes any number of arguments. // Can be an implicit constructor. new (...args: any[]): T; } export class InjectionToken { constructor(public injectionIdentifier: string) {} } // Our combined Token type Token<T> = Type<T> | InjectionToken; Next, let's define the Providers. There are three different Provider types we will implement. One for providing an existing value as a singleton, one for providing via a factory function, and one for providing just the class name to use. // Every provider maps to a token. export interface BaseProvider<T> { provide: Token<T>; } export interface ClassProvider<T> extends BaseProvider<T> { useClass: Type<T>; } export interface ValueProvider<T> extends BaseProvider<T> { useValue: T; } // To keep things simple, a factory is just a function which creates the type. export type Factory<T> = () => T; export interface FactoryProvider<T> extends BaseProvider<T> { useFactory: Factory<T>; } export type Provider<T> = ClassProvider<T> | ValueProvider<T> | FactoryProvider<T>; For convenience let's throw in some type guards as well. export function isClassProvider<T>(provider: BaseProvider<T>): provider is ClassProvider<T> { return (provider as any).useClass !== undefined; } export function isValueProvider<T>(provider: BaseProvider<T>): provider is ValueProvider<T> { return (provider as any).useValue !== undefined; } export function isFactoryProvider<T>(provider: BaseProvider<T>): provider is FactoryProvider<T> { return (provider as any).useFactory !== undefined; } This is pretty good for our base API. We just need to define two decorators before we are ready to implement the container. // This class decorator adds a boolean property to the class // metadata, marking it as 'injectable'. // It uses the reflect-metadata API. 
const INJECTABLE_METADATA_KEY = Symbol('INJECTABLE_KEY'); export function Injectable() { return function(target: any) { // target in this case is the class being decorated. Reflect.defineMetadata(INJECTABLE_METADATA_KEY, true, target); return target; }; } // We also provide an easy way to query whether a class is // injectable. Our container will reject classes which aren't // marked as injectable. export function isInjectable<T>(target: Type<T>) { return Reflect.getMetadata(INJECTABLE_METADATA_KEY, target) === true; } And we define the Inject decorator, which maps a parameter to another Token. const INJECT_METADATA_KEY = Symbol('INJECT_KEY'); // This is a parameter decorator, it takes a token to map the parameter to. export function Inject(token: Token<any>) { return function(target: any, _: string | symbol, index: number) { Reflect.defineMetadata(INJECT_METADATA_KEY, token, target, `index-${index}`); return target; }; } export function getInjectionToken(target: any, index: number) { return Reflect.getMetadata(INJECT_METADATA_KEY, target, `index-${index}`) as Token<any> | undefined; } The Container The implementation for adding providers is fairly simple. You can see it is just a simple key value store. The providers map uses any types, but we know the Token and Provider will always match because the only way to insert into that map is with the addProvider method. class Container { private providers = new Map<Token<any>, Provider<any>>(); addProvider<T>(provider: Provider<T>) { this.assertInjectableIfClassProvider(provider); this.providers.set(provider.provide, provider); } // ... } We use the assertInjectableIfClassProvider method to make sure all the classes which are provided to the container have been marked as Injectable, and therefore have metadata. This isn't strictly necessary, but it will help us catch issues at configuration time. class Container { // ... private assertInjectableIfClassProvider<T>(provider: Provider<T>) { if (isClassProvider(provider) && !isInjectable(provider.useClass)) { throw new Error( `Cannot provide ${this.getTokenName(provider.provide)} using class ${this.getTokenName( provider.useClass )}, ${this.getTokenName(provider.useClass)} isn't injectable` ); } } // Returns a printable name for the token. private getTokenName<T>(token: Token<T>) { return token instanceof InjectionToken ? token.injectionIdentifier : token.name; } // ... } Next we have our injection function. This first method looks up the provider, and the second method determines which type of provider it is, then handles each case separately. class Container { // ... inject<T>(type: Token<T>): T { let provider = this.providers.get(type); return this.injectWithProvider(type, provider); } private injectWithProvider<T>(type: Token<T>, provider?: Provider<T>): T { if (provider === undefined) { throw new Error(`No provider for type ${this.getTokenName(type)}`); } if (isClassProvider(provider)) { return this.injectClass(provider as ClassProvider<T>); } else if (isValueProvider(provider)) { return this.injectValue(provider as ValueProvider<T>); } else { // Factory provider by process of elimination return this.injectFactory(provider as FactoryProvider<T>); } } // ... } The value and factory providers are pretty straight forward. One is a method call, one just returns a value. The class provider is a little more complex, it needs to construct the items in the parameter list for the constructor, and then invokes the constructor using the class reference. class Container { // ... 
private injectValue<T>(valueProvider: ValueProvider<T>): T { return valueProvider.useValue; } private injectFactory<T>(valueProvider: FactoryProvider<T>): T { return valueProvider.useFactory(); } private injectClass<T>(classProvider: ClassProvider<T>): T { const target = classProvider.useClass; const params = this.getInjectedParams(target); return Reflect.construct(target, params); } // ... } The implementation for building the parameter list is where things get tricky. We invoke the reflect-metadata API in order to get a list of types for each parameter of the constructor. For each of those parameters, we find the relevant token, and then construct is recursively. public class Container { // ... private getInjectedParams<T>(target: Type<T>) { const argTypes = Reflect.getMetadata(REFLECT_PARAMS, target) as (InjectableParam | undefined)[]; if (argTypes === undefined) { return []; } return argTypes.map((argType, index) => { // The reflect-metadata API fails on circular dependencies, // and will return undefined for the argument instead. // We could handle this better, but for now let's just throw an error. if (argType === undefined) { throw new Error( `Injection error. Recursive dependency detected in constructor for type ${ target.name } with parameter at index ${index}` ); } // Check if a 'Inject(INJECTION_TOKEN)' was added to the parameter. // This always takes priority over the parameter type. const overrideToken = getInjectionToken(target, index); const actualToken = overrideToken === undefined ? argType : overrideToken; let provider = this.providers.get(actualToken); return this.injectWithProvider(actualToken, provider); }); } } Using it That's it for the implementation. Here's what it looks like using our new container. const API_TOKEN = new InjectionToken('api-token'); @Injectable() class SomeService { constructor(@Inject(API_TOKEN)) {} } @Injectable() class InjectableClass { constructor(public someService: SomeService) {} } const container = new Container(); container.addProvider({ provide: API_TOKEN, useValue: ' }); container.addProvider({ provide: SomeService, useClass: SomeService }); container.addProvider({ provide: InjectableClass, useClass: InjectableClass }); const instance = container.inject(InjectableClass); Conclusion While the toy container we built here was fairly simple, it's also powerful. You can already see the bones of how other more advanced containers are built. A working demo repository with tests and documentation can be found here. If you are up for a challenge, fork it and see if you can extend it with the following features: - Early detection of circular references, (when you add your providers). - Nested containers, add the ability to provide types from child containers, (similar to Angular/NestJs modules). - Factories with injected parameters. - Specifiy scope of instance lifecycle in providers, (eg. singleton). Discussion (5) I've found the reflect-metadatalibrary a bit heavy to ship with apps so you might be interested in the @abraham/reflection alternative I wrote. I actually just released v0.5.0. Announcing v0.5 of my Metadata Reflection API polyfill for TypeScript decorators Abraham Williams ・ Jan 22 '19 ・ 1 min read Nice, I might have to give that a try on my next Angular project. I love typescript but never take advantage of the more advanced features in my codebases (nodejs, react) because of the rejection I get from most teams when trying to introduce it into our code base. Great article, keep up the awesome work. 
I think it's fair that teams should try to fight against too much complexity in their codebase, but sometimes a slightly more complex idea like Inversion of Control can dramatically simplify everything else. It's a tradeoff, and there is not always an easy answer about the best way to do things. Just keep making the case to try new ideas. See also this much more minimalist approach - 10 lines of code and no dependencies: dev.to/mindplay/minimal-di-contain... I only wish I could figure out how to type-hint it correctly. (I don't even know that it's possible - short of manually typing out the interfaces, but it really ought to be possible with inference...)
https://dev.to/darcyrayner/typescript-dependency-injection-in-200-loc-12j7
Suppose you have a tensor with shape [4, 16, 256], where your LSTM is 2-layer bi-directional (2*2 = 4), the batch size is 16 and the hidden size is 256. What is the correct way to get the concatenated last-layer output (shape [16, 512])? I’m doing the following – please note that I support both GRU and LSTM in the model so I can decide at setup time:

def forward(self, inputs):
    batch_size = inputs.shape[0]
    # Push through embedding layer
    X = self.embedding(inputs)
    # Push through RNN layer (the output is irrelevant)
    _, self.hidden = self.rnn(X, self.hidden)
    # Get the hidden state of the last layer of the RNN
    if self.params.rnn_type == RnnType.RNN_TYPE__GRU:
        hidden = self.hidden
    elif self.params.rnn_type == RnnType.RNN_TYPE__LSTM:
        hidden = self.hidden[0]
    # Flatten hidden state with respect to batch size
    hidden = hidden.transpose(1, 0).contiguous().view(batch_size, -1)
    ...

The important part is the transpose(1, 0) to get the batch size to the front. Everything else is handled by the view() command. I only need the transpose since I initialize the RNN with batch_first=True. Note that the input shape of the directly following linear layer needs to be (rnn_hidden_dim * num_directions * num_layers, output_size).

Thanks for the reply. Suppose that in a two-layer stacked LSTM, the hidden state of the first layer is pretty much intermediate and I am thinking of getting rid of it. It seems your code did not touch this part. Do you have any recommendations on how to do so? From the official docs it is not clear which parts of the hidden output (self.hidden[0] in your example) we should pick.

The output of an LSTM is output, (h_n, c_n). In my code _, self.hidden = self.rnn(X, self.hidden), self.hidden is the tuple (h_n, c_n), and since I only want h_n, I have to do hidden = self.hidden[0]. In case you only want the last layer, the docs say that you can separate the hidden state with h_n = h_n.view(num_layers, num_directions, batch, hidden_size). Since num_layers is the first dimension, you only need to do h_n = h_n[-1] to get the last layer. The shape will be (num_directions, batch, hidden_size).
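To make the shapes in this thread concrete, here is a small self-contained sketch (random tensors, names chosen for illustration) of separating layers from directions and concatenating the two directions of the last layer into the desired [16, 512] tensor:

import torch

num_layers, num_directions, batch, hidden_size = 2, 2, 16, 256

# h_n as returned by a 2-layer bidirectional LSTM: (num_layers * num_directions, batch, hidden_size)
h_n = torch.randn(num_layers * num_directions, batch, hidden_size)

# Separate layers from directions and keep only the last layer
h_n = h_n.view(num_layers, num_directions, batch, hidden_size)
last_layer = h_n[-1]                                   # (num_directions, batch, hidden_size)

# Concatenate forward and backward hidden states along the feature dimension
last_hidden = torch.cat([last_layer[0], last_layer[1]], dim=1)
print(last_hidden.shape)                               # torch.Size([16, 512])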
https://discuss.pytorch.org/t/how-to-concatenate-the-hidden-states-of-a-bi-lstm-with-multiple-layers/39798
I’ve been coding a pytorch CNN on a binary classification task where my dataset is unbalanced – the ratio of the two classes is 30:1. Within the training loop, every 5 epochs, I calculate the loss on a small set of validation data to display along with the loss on the much larger training set to monitor progress and help tune the hyperparameters. At first, to keep things simple, I only used a subset of the training data so that the two classes would be balanced. That was working and I saw some moderate improvements in loss in both training and validation before overfitting set in. In order to improve training, I then tried to use all of the training data and switched from the original loss function torch.nn.BCELoss to torch.nn.BCEWithLogitsLoss(pos_weight=30.0), keeping in mind to drop the final torch.sigmoid transform. Since the validation data continued to be balanced, I used torch.nn.BCEWithLogitsLoss(pos_weight=1.0) to calculate the validation loss. With this setup, although my training loss decreases right away, my validation loss only goes up. Here is the relevant code:

def train(epochs, optimizer, model, train_loss_fn, train_loader, test_loss_fn, test_loader):
    for epoch in range(1, epochs + 1):
        train_losses = []
        model.train()
        for images, classes in train_loader:
            optimizer.zero_grad()
            images = images.to(device)
            classes = classes.to(device)
            outputs = model(images)
            loss = train_loss_fn(torch.flatten(outputs), classes)
            loss.backward()
            optimizer.step()
            train_losses.append( loss.item() )
        if (epoch == 1) | (epoch % 5 == 0):
            now = datetime.datetime.now()
            test_loss = test(model, test_loader, test_loss_fn)
            test_loss = np.round(test_loss, 4)
            train_loss = np.round(np.mean(np.asarray(train_losses)), 4)
            lr = np.format_float_scientific(np.squeeze(get_lr(optimizer)), precision=4)
            print(f"{now}: Epoch {epoch}, Train Loss: {train_loss}, Test Loss: {test_loss}, Learning Rate: {lr}")

def test(model, test_loader, lf):
    model.eval()
    test_losses = []
    with torch.no_grad():
        for i, (images, classes) in enumerate(test_loader):
            images = images.to(device)
            classes = classes.to(device)
            outputs = model(images)
            loss = lf(torch.flatten(outputs), torch.flatten(classes))
            test_losses.append( loss.item() )
    mean_loss = np.mean(np.asarray(test_losses))
    return mean_loss

Class_1_Weighting = torch.tensor(30.)
TRAIN_loss_fn = torch.nn.BCEWithLogitsLoss(reduction='mean', pos_weight=Class_1_Weighting)
TEST_loss_fn = torch.nn.BCEWithLogitsLoss(reduction='mean')

# train(...)

cross-posted from stackexchange
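As a side note, pos_weight is documented as a tensor (as in the snippet above) and is typically set to the negative/positive count ratio; a minimal sketch of the weighted/unweighted loss pair, with made-up class counts:

import torch

# Illustrative class counts for a 30:1 negative-to-positive imbalance
num_neg, num_pos = 30000, 1000
pos_weight = torch.tensor([num_neg / num_pos])

train_loss_fn = torch.nn.BCEWithLogitsLoss(reduction='mean', pos_weight=pos_weight)
test_loss_fn = torch.nn.BCEWithLogitsLoss(reduction='mean')

logits = torch.randn(8)                        # raw model outputs (no sigmoid)
targets = torch.randint(0, 2, (8,)).float()
print(train_loss_fn(logits, targets).item(), test_loss_fn(logits, targets).item())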
https://discuss.pytorch.org/t/pytorch-training-with-unbalanced-class-sizes-while-validation-with-balanced-classes-isnt-working/143685
NAME
X509_chain_up_ref, X509_new, X509_free, X509_up_ref - X509 certificate ASN1 allocation functions

SYNOPSIS
#include <openssl/x509.h>

X509 *X509_new(void);
void X509_free(X509 *a);
int X509_up_ref(X509 *a);
STACK_OF(X509) *X509_chain_up_ref(STACK_OF(X509) *x);

DESCRIPTION

COPYRIGHT
Licensed under the OpenSSL license (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at <
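A brief, illustrative sketch of the reference-counting usage these functions support (error handling is omitted and the certificate is assumed to be obtained elsewhere, e.g. via PEM_read_X509()):

#include <openssl/x509.h>

/* Take shared ownership of a certificate: each owner holds a reference
 * and releases it with X509_free(); the structure is only released when
 * the last reference is dropped. */
void take_ownership(X509 *cert)
{
    if (X509_up_ref(cert) != 1)   /* increment the reference count */
        return;
    /* ... store or use 'cert' as this owner's copy ... */
    X509_free(cert);              /* drop this owner's reference */
}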
https://manpages.debian.org/bullseye/libssl-doc/X509_up_ref.3ssl.en.html
As discussed in dlmopen requires a mechanism for [optionally] sharing some objects between more than one namespace. The following patchset attempts an implementation for this:

If an object is loaded with the new RTLD_SHARED flag we instead ensure that a "master" copy exists (and is flagged as no-delete) in the main namespace and a thin wrapper or clone is placed in the target namespace.

I have attached the test program(s) I am using to the bug above. It is not intended as a final implementation but I wanted to check that the basic approach is acceptable/workable.

If it is, then I plan to extend the patchset as follows:

- dlmopen will implicitly apply RTLD_SHARED to the libc/libpthread group
- The user will be able to request that this sharing _not_ occur by passing a different flag to dlmopen (name TBD)
- LD_AUDIT paths will not apply this implicit sharing rule, so audit libraries will continue to be completely isolated.

If it isn't, then I guess it's back to the drawing board (but reasons why it isn't acceptable/workable would be appreciated so I can figure out how to do it right).

Vivek Das Mohapatra (5):
  bits/dlfcn.h: Declare and describe the dlmopen RTLD_SHARED flag
  include/link.h: Update the link_map struct to allow clones
  elf/dl-object.c: Implement a helper function to clone link_map entries
  elf/dl-load.c, elf-dl-open.c: Implement RTLD_SHARED dlmopen cloning
  elf/dl-fini.c: Handle cloned link_map entries in the shutdown path

 bits/dlfcn.h               |  7 +++++
 elf/dl-fini.c              | 51 ++++++++++++++++++++++++++++++
 elf/dl-load.c              | 34 ++++++++++++++++++++
 elf/dl-object.c            | 78 ++++++++++++++++++++++++++++++++++++++++++++++
 elf/dl-open.c              | 31 ++++++++++++++++--
 include/link.h             |  6 ++--
 sysdeps/generic/ldsodefs.h |  6 ++++
 7 files changed, 209 insertions(+), 4 deletions(-)

-- 
2.11.0
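For orientation, a hypothetical caller of the proposed flag might look like the sketch below; RTLD_SHARED is not part of any released glibc and the library name is made up, so this is purely illustrative of the intent described above:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load a plugin into a fresh namespace, but ask for the shared
     * (master-copy) treatment of the object instead of a fully private copy. */
    void *handle = dlmopen(LM_ID_NEWLM, "libplugin.so", RTLD_NOW | RTLD_SHARED);
    if (handle == NULL) {
        fprintf(stderr, "dlmopen: %s\n", dlerror());
        return 1;
    }
    dlclose(handle);
    return 0;
}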
https://sourceware.org/pipermail/libc-help/2018-April/004495.html
This article introduces how to customize the Java annotation processor and related knowledge. After reading this article, you can easily understand and understand the application of the annotation processor of major open source frameworks. This article starts: For custom Java annotations, see Custom annotation. This article has authorized WeChat official account: hongyangAndroid. Basic implementation There are two steps to implement a custom annotation Processor. The first is to implement the Processor interface to process annotations, and the second is to register the annotation Processor. Implement the Processor interface You can customize the annotation Processor by implementing the Processor interface. Here, we use a simpler method to implement the custom annotation Processor by inheriting the AbstractProcessor class. Implement the abstract method process to handle the functions we want. public class CustomProcessor extends AbstractProcessor { @Override public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnvironment) { return false; } } In addition, we also need to specify the supported annotation types and supported Java versions. By overriding the getSupportedAnnotationTypes method and getSupportedSourceVersion method: public class CustomProcessor extends AbstractProcessor { @Override public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnvironment) { return false; } @Override public Set<String> getSupportedAnnotationTypes() { Set<String> annotataions = new LinkedHashSet<String>(); annotataions.add(CustomAnnotation.class.getCanonicalName()); return annotataions; } @Override public SourceVersion getSupportedSourceVersion() { return SourceVersion.latestSupported(); } } For specifying supported annotation types, we can also specify them by annotation: @SupportedAnnotationTypes({"io.github.yuweiguocn.annotation.CustomAnnotation"}) public class CustomProcessor extends AbstractProcessor { @Override public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnvironment) { return false; } @Override public SourceVersion getSupportedSourceVersion() { return SourceVersion.latestSupported(); } } Because the Android platform may have compatibility problems, it is recommended to override the getSupportedAnnotationTypes method to specify the supported annotation types. Register annotation processor Finally, we need to register our custom annotation processor. Create a new res folder, a new META-INF folder under the directory, a new services folder under the directory, a new javax.annotation.processing.Processor file under the directory, and then write the full class name of our custom annotation processor to this file: io.github.yuweiguocn.processor.CustomProcessor The above registration method is too troublesome. Google helped us write an annotation processor to generate this file. github address: Add dependency: compile 'com.google.auto.service:auto-service:1.0-rc2' Add comments: @AutoService(Processor.class) public class CustomProcessor extends AbstractProcessor { ... } Get it done and realize the power of annotation processor. Later, we only need to focus on the processing logic in the annotation processor. Let's take a look at the final project structure: Basic concepts There is also an init method in the abstract class, which is provided in the Processor interface. 
When we compile the program, the annotation Processor tool will call this method and provide the object implementing the ProcessingEnvironment interface as a parameter. @Override public synchronized void init(ProcessingEnvironment processingEnvironment) { super.init(processingEnvironment); } We can use ProcessingEnvironment to obtain some utility classes and option parameters: element The Element element is an interface that represents a program Element, such as a package, class, or method. All of the following Element type interfaces inherit from the Element interface: If we want to judge the type of an element, we should use the Element.getKind() method in conjunction with the ElementKind enumeration class. Try to avoid using instanceof for judgment, because for example, TypeElement represents both a class and an interface. The judgment result may not be what you want. For example, we judge whether an element is a class: if (element instanceof TypeElement) { //Error, or it may be an interface } if (element.getKind() == ElementKind.CLASS) { //correct //doSomething } The following table shows some constants in ElementKind enumeration class. Please refer to the official document for details. type TypeMirror is an interface that represents types in the Java programming language. These types include base types, declaration types (class and interface types), array types, type variables, and null types. You can also represent wildcard type parameters, the signature and return type of executable, and pseudo types corresponding to package and keyword void. All of the following type interfaces inherit from the TypeMirror interface: Similarly, if we want to judge the type of a TypeMirror, we should use the TypeMirror.getKind() method in conjunction with the TypeKind enumeration class. Try to avoid using instanceof for judgment, because for example, DeclaredType represents both class type and interface type. The judgment result may not be what you want. Some constants in TypeKind enumeration class. Please check the official documentation for details. create a file The Filer interface supports the creation of new files through the annotation processor. You can create three file types: source files, class files, and auxiliary resource files. 1. Create source file JavaFileObject createSourceFile(CharSequence name, Element... originatingElements) throws IOException Create a new source file and return an object to allow it to be written. The name and path of the file (relative to the root directory output location of the source file) are based on the types declared in the file. If you declare more than one type, you should use the name of the main top-level type (for example, the one declared public). You can also create source files to hold information about a package, including package annotations. To create a source file for the specified package, you can use name as the package name followed by ". Package info"; To create a source file for an unspecified package, use "package info". 2. Create class file JavaFileObject createClassFile(CharSequence name, Element... originatingElements) throws IOException Create a new class file and return an object to allow it to be written. The name and path of the file (relative to the root directory output location of the class file) are based on the type name to be written. You can also create class files to hold information about a package, including package annotations. 
To create a class file for the specified package, you can use name as the package name followed by ". Package info"; Creating class files for unspecified packages is not supported. 3. Create auxiliary resource file FileObject createResource(JavaFileManager.Location location, CharSequence pkg, CharSequence relativeName, Element... originatingElements) throws IOException Create a new auxiliary resource file for the write operation and return a file object for it. The file can be found with a newly created source file, a newly created binary file, or other supported location. Location CLASS_OUTPUT and SOURCE_OUTPUT must be supported. Resources can be specified relative to a package (which is a source file and class file) and extracted from it by relative pathname. From a less strict point of view, the full pathname of the new file will be a concatenation of location, pkg, and relativeName. For generating Java files, you can also use Square's open source class library JavaPoet , interested students can understand. Print error message The Messager interface provides a way for the annotation processor to report error messages, warnings, and other notifications. Note: we should catch the possible exceptions during processing and notify the user through the method provided by the Messager interface. In addition, using the method with Element parameter to connect to the Element with error, the user can directly click the error message and jump to the corresponding line of the error source file. If you throw an exception in process(), the JVM running the annotation processor will crash (just like other Java applications), so that users will get a very difficult error message from javac. Configure option parameters We can get the option parameters through the getOptions() method and configure the option parameter values in the gradle file. For example, we configured a parameter value called yuweiguoCustomAnnotation. android { defaultConfig { javaCompileOptions { annotationProcessorOptions { arguments = [ yuweiguoCustomAnnotation : 'io.github.yuweiguocn.customannotation.MyCustomAnnotation' ] } } } } Override the getSupportedOptions method in the annotation processor to specify the name of the supported option parameter. Get the option parameter value through the getOptions method. public static final String CUSTOM_ANNOTATION = "yuweiguoCustomAnnotation"; @Override public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) { try { String resultPath = processingEnv.getOptions().get(CUSTOM_ANNOTATION); if (resultPath == null) { ... return false; } ... } catch (Exception e) { e.printStackTrace(); ... } return true; } @Override public Set<String> getSupportedOptions() { Set<String> options = new LinkedHashSet<String>(); options.add(CUSTOM_ANNOTATION); return options; } Processing process The definition of annotation processing given in the official Java document: annotation processing is an orderly circular process. In each loop, a processor may be required to process the annotations in the source and class files generated in the previous loop. The input for the first cycle is the initial input for running the tool. These initial inputs can be regarded as the output of the virtual 0th cycle. This means that the process method we implement may be called many times, because the file we generate may also contain corresponding annotations. For example, our source file is SourceActivity.class and the generated file is Generated.class. In this way, there will be three cycles. 
The first input is SourceActivity.class and the output is Generated.class; The second input is Generated.class, and the output does not generate a new file; The third input is null and the output is null. Each cycle will call the process method, which provides two parameters. The first is the collection of annotation types we request to process (that is, the annotation type we specify by overriding the getSupportedAnnotationTypes method), and the second is the environment of information about the current and last cycle. The return value indicates whether these annotations are declared by this Processor. If true is returned, these annotations have been declared and subsequent processors are not required to process them; If false is returned, these annotations are undeclared and may require subsequent processors to process them. public abstract boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) Get annotation element We can get annotation elements through the RoundEnvironment interface. The process method provides an object that implements the RoundEnvironment interface. Example After understanding the basic concepts, let's take a look at an example. This example is only for demonstration and has no practical significance. The main function is to customize an annotation, which can only be used on public methods. We get the class name and method name through the annotation processor and store them in the List collection, and then generate a file specified through the parameter options. Through this file, we can obtain the List collection. Custom annotation: @Documented @Target({ElementType.METHOD}) @Retention(RetentionPolicy.RUNTIME) public @interface CustomAnnotation { } Key codes in annotation processor: @Override public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) { try { String resultPath = processingEnv.getOptions().get(CUSTOM_ANNOTATION); if (resultPath == null) { messager.printMessage(Diagnostic.Kind.ERROR, "No option " + CUSTOM_ANNOTATION + " passed to annotation processor"); return false; } round++; messager.printMessage(Diagnostic.Kind.NOTE, "round " + round + " process over " + roundEnv.processingOver()); Iterator<? 
extends TypeElement> iterator = annotations.iterator(); while (iterator.hasNext()) { messager.printMessage(Diagnostic.Kind.NOTE, "name is " + iterator.next().getSimpleName().toString()); } if (roundEnv.processingOver()) { if (!annotations.isEmpty()) { messager.printMessage(Diagnostic.Kind.ERROR, "Unexpected processing state: annotations still available after processing over"); return false; } } if (annotations.isEmpty()) { return false; } for (Element element : roundEnv.getElementsAnnotatedWith(CustomAnnotation.class)) { if (element.getKind() != ElementKind.METHOD) { messager.printMessage( Diagnostic.Kind.ERROR, String.format("Only methods can be annotated with @%s", CustomAnnotation.class.getSimpleName()), element); return true; // Exit processing } if (!element.getModifiers().contains(Modifier.PUBLIC)) { messager.printMessage(Diagnostic.Kind.ERROR, "Subscriber method must be public", element); return true; } ExecutableElement execElement = (ExecutableElement) element; TypeElement classElement = (TypeElement) execElement.getEnclosingElement(); result.add(classElement.getSimpleName().toString() + "#" + execElement.getSimpleName().toString()); } if (!result.isEmpty()) { generateFile(resultPath); } else { messager.printMessage(Diagnostic.Kind.WARNING, "No @CustomAnnotation annotations found"); } result.clear(); } catch (Exception e) { e.printStackTrace(); messager.printMessage(Diagnostic.Kind.ERROR, "Unexpected error in CustomProcessor: " + e); } return true; } private void generateFile(String path) { BufferedWriter writer = null; try { JavaFileObject sourceFile = filer.createSourceFile(path); int period = path.lastIndexOf('.'); String myPackage = period > 0 ? path.substring(0, period) : null; String clazz = path.substring(period + 1); writer = new BufferedWriter(sourceFile.openWriter()); if (myPackage != null) { writer.write("package " + myPackage + ";\n\n"); } writer.write("import java.util.ArrayList;\n"); writer.write("import java.util.List;\n\n"); writer.write("/** This class is generated by CustomProcessor, do not edit. */\n"); writer.write("public class " + clazz + " {\n"); writer.write(" private static final List<String> ANNOTATIONS;\n\n"); writer.write(" static {\n"); writer.write(" ANNOTATIONS = new ArrayList<>();\n\n"); writeMethodLines(writer); writer.write(" }\n\n"); writer.write(" public static List<String> getAnnotations() {\n"); writer.write(" return ANNOTATIONS;\n"); writer.write(" }\n\n"); writer.write("}\n"); } catch (IOException e) { throw new RuntimeException("Could not write source for " + path, e); } finally { if (writer != null) { try { writer.close(); } catch (IOException e) { //Silent } } } } private void writeMethodLines(BufferedWriter writer) throws IOException { for (int i = 0; i < result.size(); i++) { writer.write(" ANNOTATIONS.add(\"" + result.get(i) + "\");\n"); } } Compile output: Note: round 1 process over false Note: name is CustomAnnotation Note: round 2 process over false Note: round 3 process over true Get full code: For uploading custom annotation processor to jcenter, please see Upload class library to jcenter. I'm glad you can read here. At this time, go to see the source code of annotation processor in EventBus 3.0. I believe you can easily understand its principle. Note: if you clone the project code, you may find that the annotation and annotation processor are separate modules. One thing is certain that our annotation processor only needs to be used during compilation and does not need to be packaged in APK. 
Therefore, for the sake of users, we need to separate the annotation processor into its own module so that only what is needed at runtime ends up in the APK; a sketch of the corresponding Gradle wiring is shown below.
Author: Yu Weiguo. Source: Jianshu. The copyright belongs to the author; please contact the author for authorization and indicate the source for any form of reprint.
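To make that module split concrete, here is a minimal sketch of how the app module's Gradle dependencies might look with a recent Android Gradle plugin. The module names (:custom-annotation, :custom-processor) are hypothetical and not taken from the article's project; the point is that the processor is wired in with the compile-time-only annotationProcessor configuration, while only the annotation module is a normal implementation dependency.

// app/build.gradle (sketch with hypothetical module names)
dependencies {
    // annotations are referenced by app code, so this module ships with the APK
    implementation project(':custom-annotation')

    // the processor runs only at compile time and is not packaged into the APK
    annotationProcessor project(':custom-processor')
}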
https://programmer.help/blogs/619cd166af340.html
CC-MAIN-2022-21
en
refinedweb
Find & Filter React Children By Type Take control of your children in React for all environments Article Update: I’ve decided rewrite these utils from the ground up, add a bunch of new ones (including deep/recursive searching) and publish an NPM package for all to consume: This article will discuss the how-it-works for finding and filtering React children by type as it pertains to custom component children. If you are looking at finding and filtering core HTML Element (JSX Intrinsic Element) children like divs, spans, etc, please use react-nanny or see my other article for the how-it-works: This article will discuss finding and filtering React children by type as it pertains to core custom component children. If you are looking at finding and filtering core HTML Element (JSX) children like divs, spans, etc, please see my other article: There are situations in which knowing the type of each child passed to your component would be incredibly useful. For example, you might want to: - Validate that the consumer provides markup that you expect - Conditionally show or hide child items - Simplify the use of your component by allowing your consumer to pass in several children, but you want to place one of a certain type in a different location of your output JSX than the rest The task seems like it should be easy enough accomplish. After all, if you were to console.log(children), you’d see there’s a type key on each child. An internet search will uncover that you can easily do this: React.Children.toArray(children).map(x => console.log(x.type.name)); Let’s say we need to create a List component that accepts several ToDo components as children like this: And we define our components like this: We map over the children in List.jsx just like our internet search told us. Then we run our app and see the following in the console: ToDo ToDo ToDo We got exactly what we were looking for so we begin coding away… validating here; conditionally showing/hiding there. Our PR gets merged and we think to ourself, “Self, you really nailed it today.” However, we’re about to be in for a big surprise: It doesn’t work like we expect in production. So what’s the problem? If we were to run the same map and console.log in production, we’d see something like this: u u u It turns out that our app’s build has been optimized for production which includes… wait for it… minification! All of our component names and types have now been minified to something that is completely unpredictable that we can’t code against. Out of the darkness comes a solution! We can take advantage of the fact that literal string values do not get minified. Simply add a prop that you don’t advertise in your documentation and treat it as a constant. I’ve named it __TYPE in the example below, but you can name it whatever you like. Then give it a default value by defining it via PropTypes. If we were to now rewrite that mapping that we did at the top of this article to this: React.Children.toArray(children).map(x => console.log(x.props.__TYPE)); We would get this result in our console in all environments: ToDo ToDo ToDo I know what you’re thinking because I can read your thoughts, “What’s stopping the consumer from doing something like the following…?” <ToDo __TYPE="MoreLikeToDontAmirite?" /> The truth is that there really isn’t anything stopping the consumer from doing that. 
However, we’ve done two things that should immediately discourage people from doing this: - The prop name starts with not one, but two underscores which should indicate that this is definitely a private prop. - The prop name is in all caps which should indicate that it is a constant. Of course we can and should do more. We can take advantage of the fact that we can create a custom PropType that will notify the user with an in-your-face console error should they attempt to stray from the default: If we now update the ToDo.jsx component to consume this custom prop validator like this: …we should see the following in our console if we try to pass in a value for __TYPE in our App.jsx: Validate Your Children The List.jsx component, as it stands, can accept any child you throw into it, but that’s not desired behavior. For example, we don’t want someone to be able to pass in a div: We cannot stand for this kind of thing! Now that we can identify our children, we create another util to handle this situation for us. Consider the following: We feed the getChildrenByType function our children and an array of the types we want to include and the function will return only the children that have a matching __TYPE. The typeOfComponent helper function will check for our __TYPE under the hood. If that isn’t defined, it will next check the stringified type of the component which is helpful for finding HTML element children (i.e. divs, spans, etc.). If you’re interested in filtering those kinds of children, you can find the link to my other article at the top. Otherwise, you can ignore the details of type for now. That means we can update our List.jsx component with this function: When we run the updated code, we’ll notice that our list is nice and clean without the bogus div our consumer so carelessly injected. Note: If we’re using react-nanny, we can alternatively pass in the actual imported component as part of our types array if it’s in scope: import ToDo from './components/ToDo; ... <ul>{getChildrenByType(children, [ToDo])}</ul> Notice ToDo isn’t a string like it was before. However, if you don’t have your component in scope, you’ll definitely want to key off of a prop value like __TYPE and use a string value in your array. Conditionally Show/Hide Specific Children Sometimes you may want to show or hide specific children or children of a certain type based on the configuration of the parent component. To illustrate this, let’s create a new type of ToDo called ToDoCompleted which adds “- COMPLETED” to the end of the item: Next, we can add a new prop to our List.jsx component called hideCompleted which will conditionally hide or show completed todo items in the list. We can also conditionally add the ToDoCompleted type to the array that we’re passing to our getChildrenByType util function: If we now update our App.jsx to this: …and start our app, we will see this: If we were to add the hideCompleted prop to our list like this: <List hideCompleted> …we will see Item 2 removed from the list: Move Your Children Around Let’s say that the design team comes to us and says that all of the app’s todo lists should have completed items at the bottom of the list when they are to be displayed. We can accomplish this in List.jsx with our same util function: In the code above, we’re finding all ToDo children and all ToDoCompleted children and rendering each of those out instead of all children like we were before. However, if someone were to want the completed items hidden, they can still do that. 
If we start up our app, we should see Item 2 at the bottom: Why not simply use render props? I’m glad you asked. I am not anti-render prop. In fact, I use them quite frequently, but they aren’t a golden hammer solution for every problem. If you take our previous scenario with being required to move completed items to the bottom of the list, that was not an initial requirement. It was a requirement that was brought to us after the component already existed. In this case, we could refactor the component to accept a prop called renderCompleted that returns the completed items and we can invoke that function in the spot in our render markup where we want those items to be, but we will be breaking the props api contract which will necessitate action from all consumers. If the component source code lives in our app, we’d have to refactor every instance that it’s used. If the component is part of a distributed package, we’d have to publish a new major version which consumers would manually need to update in addition to refactoring every used instance. Meanwhile, you have some teams using the new and some using the old which can create a strange experience from the user as they use your product. In situations like that, it’s better to use this method and herd the children where they need to be. No breaking change; no consumer refactoring required. Speaking of render props (since you brought it up)… These techniques aren’t just for children, they also work with render props or any JSX for that matter. Let’s say you have a component with a render prop called renderActionArea and you expect that prop to return you one or more PrimaryButton components. How do you know the consumer is returning a PrimaryButton and not a div or a span or a SecondaryButton? Well, now you can! Simply… const actionArea = getChildrenByType(renderActionArea(), ['PrimaryButton']); Awesome! Do you have any other helpful utils? Yes! To recap the article update posted at the top, I’ve published an NPM package that has these utils re-engineered to handle additional situations and offer more options to give you flexibility. There are also many additional utils that we didn’t discuss in this article: react-nanny Utils to manage your React Children; find and filter children by type or custom function, enforce child content, and… Here is a list of the utils currently available in react-nanny: getChild— Gets first child by specified predicate getChildDeep— Gets first child by specified predicate (deep search) getChildByType— Gets first child by specified type getChildByTypeDeep— Gets first child by specified type (deep search) getChildren— Gets all children by specified predicate getChildrenDeep— Gets all children by specified predicate (deep search) getChildrenByType— Gets all children by specified type getChildrenByTypeDeep— Gets all children by specified type (deep search) noEmptyChildrenDeep— Ensure that there is some level of content and not just a bunch of empty divs, spans, etc (deep search) removeChildren— Removes all children by specified predicate removeChildrenDeep— Removes all children by specified predicate (deep search) removeChildrenByType— Removes all children by specified type removeChildrenByTypeDeep— Removes all children by specified type (deep search) typeOfComponent— Gets the string type of the component if defined by a prop, the string type of the core html (JSX Intrinsic) element, or the function type Go forth and be good to your React children! 
As a matter of practice, I highly recommend creating your own prop to identify what kind of component it is. Even if you think you'll never use it, it's good to have it in place for a time when it might save your bacon. If you need to also filter core HTML Element components like divs, spans, etc., I highly recommend you continue on to my other article on that topic. A rough sketch of a typeOfComponent-style helper follows below.
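Here is a rough sketch of how a typeOfComponent-style helper might be written. This is not react-nanny's actual source; it simply assumes the __TYPE prop convention described earlier and otherwise falls back to the element's own type.

const typeOfComponent = (component, customTypeKey = '__TYPE') => {
  // Prefer the explicit string type declared via the special prop
  if (component && component.props && typeof component.props[customTypeKey] === 'string') {
    return component.props[customTypeKey];
  }
  // Core HTML (JSX intrinsic) elements carry a plain string type like 'div'
  if (component && typeof component.type === 'string') {
    return component.type;
  }
  // Otherwise return the component function/class itself (or undefined)
  return component && component.type;
};

A filter like getChildrenByType can then compare typeOfComponent(child) against each entry of its types array, whether that entry is a string such as 'ToDo' or an imported component reference.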
https://mparavano.medium.com/find-filter-react-children-by-type-d9799fb78292
CC-MAIN-2022-21
en
refinedweb
preface Hi Coder, I'm CoderStar! This week, I will mainly share with you three design modes (command mode, mediator mode and combination mode) and their applications in the AppDelegate decoupling scenario, especially the combination mode, which precipitates the corresponding wheels for you to share. At the same time, let me tell you about the plan of the following articles on Design Patterns Series. Because the articles related to design patterns will be sorted out in combination with the scenes we will actually encounter in development, the documents may be discontinuous. I hope you can understand that I will sort out most code examples of design patterns into the designpatterns demo [1] warehouse in the form of Playground, Therefore, there may be some cases of manually calling system functions in the code example. At the same time, I recommend a good website for learning design patterns - in-depth design patterns [2]. Some UML diagrams involved in this article are also from this website. scene AppDelegate is the root object of the application, that is, the only proxy, and can be considered the core of every iOS project. It provides exposure to application lifecycle events; It ensures that the application interacts correctly with the system and other applications; It usually assumes many responsibilities, which makes it difficult to change, expand and test. With the iterative upgrading of business, new functions and businesses are added, and the amount of code in AppDelegate is also growing, resulting in its Massive. Common businesses in AppDelegate include: Event handling and dissemination in the life cycle; Manage UI stack configuration: select the initial view controller and perform root view controller conversion; Manage background tasks; Management notice; Third party library initialization; Manage equipment direction; Set UIAppearance; And because AppDelegate will affect the whole APP, we will be careful when facing complex AppDelegate for fear that our changes will affect other functions. Therefore, the simplicity and clarity of AppDelegate is very important for a healthy iOS architecture. Next, we use the above three design patterns to decouple AppDelegate and make it elegant. Command mode Command pattern is a behavior design pattern, which can transform a request into a separate object containing all the information related to the request. This transformation allows you to parameterize the method according to different requests, delay the execution of requests or put them in the queue, and implement revocable operations. UML Command mode URL graph Implementation mode Declare a command interface with only one execution method. Extract the request and make it a specific command class that implements the command interface. Each class must have a set of member variables to hold request parameters and references to the actual recipient object. The values of all these variables must be initialized through the command constructor. Find the class that is responsible for the sender. Add member variables to these classes that hold commands. The sender can only interact with its commands through the command interface. The sender itself usually does not create a command object, but obtains it through client code. Modify the sender to execute the command instead of sending the request directly to the receiver. The client must initialize objects in the following order: Create recipients. Create a command and associate it with the recipient if necessary. 
Create a sender and associate it with a specific command. Code example import UIKit // MARK: - Command interface protocol AppDelegateDidFinishLaunchingCommand { func execute() } // MARK: - Initialize third-party commands struct InitializeThirdPartiesCommand: AppDelegateDidFinishLaunchingCommand { func execute() { print("InitializeThirdPartiesCommand trigger") } } // MARK: - Initialize rootViewController struct InitialViewControllerCommand: AppDelegateDidFinishLaunchingCommand { let keyWindow: UIWindow func execute() { print("InitialViewControllerCommand trigger") keyWindow.rootViewController = UIViewController() } } // MARK: - Command constructor final class AppDelegateCommandsBuilder { private var window: UIWindow! func setKeyWindow(_ window: UIWindow) -> AppDelegateCommandsBuilder { self.window = window return self } func build() -> [AppDelegateDidFinishLaunchingCommand] { return [ InitializeThirdPartiesCommand(), InitialViewControllerCommand(keyWindow: window), ] } } // MARK: - AppDelegate /// Act as sender and client class AppDelegate: UIResponder, UIApplicationDelegate { var window: UIWindow? func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool { window = UIWindow() AppDelegateCommandsBuilder() .setKeyWindow(window!) .build() .forEach { $0.execute() } return true } } // MARK: - Manual call AppDelegate().application(UIApplication.shared, didFinishLaunchingWithOptions: nil) In fact, the above transformation is not a command mode strictly followed. For example, there is no receiver role, and the sender and client are not completely separated. At the same time, AppDelegateCommandsBuilder is actually a builder mode, which is also commonly used. This mode will be explained separately later. If you want to see the complete command mode code example of the role, see the command code example [3]. After transforming AppDelegate using the Command mode, when we need to add processing logic to the callback, we do not need to modify AppDelegate, but directly add the corresponding Command class and add it in AppDelegateCommandsBuilder. The disadvantages of this method must be obvious to you. The above code example only understands and couples the didFinishLaunch method, and does not transform other methods. If other methods are transformed, the above set also needs to be implemented, which will be somewhat redundant. Intermediary model Mediator pattern is a behavior design pattern that allows you to reduce chaotic dependencies between objects. This pattern restricts the direct interaction between objects, forcing them to cooperate through a mediator object. In fact, developers should be very familiar with the mediator pattern, because in the MVC pattern, C is a typical mediator, which limits the direct interaction between M and V. 
UML Mediator pattern UML diagram Code example import UIKit // MARK: - Lifecycle event interface protocol AppLifecycleListener { func onAppWillEnterForeground() func onAppDidEnterBackground() func onAppDidFinishLaunching() } // MARK: - Interface is implemented by default, so that the implementation class can optionally implement methods extension AppLifecycleListener { func onAppWillEnterForeground() {} func onAppDidEnterBackground() {} func onAppDidFinishLaunching() {} } // MARK: - Implementation class class AppLifecycleListenerImp1: AppLifecycleListener { func onAppDidEnterBackground() { } } class AppLifecycleListenerImp2: AppLifecycleListener { func onAppDidEnterBackground() { } } // MARK: - tertium quid class AppLifecycleMediator: NSObject { private let listeners: [AppLifecycleListener] init(listeners: [AppLifecycleListener]) { self.listeners = listeners super.init() subscribe() } deinit { NotificationCenter.default.removeObserver(self) } /// Subscribe to lifecycle events private func subscribe() { NotificationCenter.default.addObserver(self, selector: #selector(onAppWillEnterForeground), name: UIApplication.willEnterForegroundNotification, object: nil) NotificationCenter.default.addObserver(self, selector: #selector(onAppDidEnterBackground), name: UIApplication.didEnterBackgroundNotification, object: nil) NotificationCenter.default.addObserver(self, selector: #selector(onAppDidFinishLaunching), name: UIApplication.didFinishLaunchingNotification, object: nil) } @objc private func onAppWillEnterForeground() { listeners.forEach { $0.onAppWillEnterForeground() } } @objc private func onAppDidEnterBackground() { listeners.forEach { $0.onAppDidEnterBackground() } } @objc private func onAppDidFinishLaunching() { listeners.forEach { $0.onAppDidFinishLaunching() } } // MARK: - To add a new Listener, you can modify it here public static func makeDefaultMediator() -> AppLifecycleMediator { let listener1 = AppLifecycleListenerImp1() let listener2 = AppLifecycleListenerImp2() return AppLifecycleMediator(listeners: [listener1, listener2]) } } class AppDelegate: UIResponder, UIApplicationDelegate { var window: UIWindow? /// Build listeners and automatically subscribe to lifecycle notifications internally let mediator = AppLifecycleMediator.makeDefaultMediator() func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool { return true } } As you can see above, applifecycle mediator is obviously a mediator through which lifecycle events can be propagated to specific users. In fact, the mediator mode is also commonly used in component communication schemes. I'll introduce it to you later. If you are interested, you can learn about it yourself, that is, the CTMediator scheme we often call. Combination mode Composite mode is a structural design mode. You can use it to combine objects into a tree structure and use them like independent objects. UML Combined mode URL graph In the AppDelegate scenario, AppDelegate is a root Composite role, and each business is a Leaf role. If it is applied to componentization, each component is a Leaf role or Composite role (components can be redistributed to each business Leaf). Code example // MARK: - Interface, directly inheriting UIApplicationDelegate, UNUserNotificationCenterDelegate two protocols. 
/// Empty protocol, and each component module implements the protocol public protocol ApplicationService: UIApplicationDelegate, UNUserNotificationCenterDelegate {} /// It is convenient to obtain window in the component extension ApplicationService { /// window public var window: UIWindow? { // swiftlint:disable:next redundant_nil_coalescing return UIApplication.shared.delegate?.window ?? nil } } // MARK: - AppDelegate inheritance open class ApplicationServiceManagerDelegate: UIResponder, UIApplicationDelegate { /// Subclasses need to be assigned in the constructor public var window: UIWindow? /// It is rewritten by the subclass and returns the plist file address containing the class name of each module implementing ApplicationService /// plist file needs to be of type NSArray open var plistPath: String? { return nil } /// It is rewritten by subclasses to return the classes that implement ApplicationService in each module open var services: [ApplicationService] { guard let path = plistPath else { return [] } guard let applicationServiceNameArr = NSArray(contentsOfFile: path) else { return [] } var applicationServiceArr = [ApplicationService]( "ApplicationService") for applicationServiceName in applicationServiceNameArr { if let applicationServiceNameStr = applicationServiceName as? String, let applicationService = NSClassFromString(applicationServiceNameStr), let module = applicationService as? NSObject.Type { let service = module.init() if let result = service as? ApplicationService { applicationServiceArr.append(result) } } } return applicationServiceArr } public func getService(by type: ApplicationService.Type) -> ApplicationService? { for service in applicationServices where service.isMember(of: type) { return service } return nil } /// Lazy load gets the calculation property services so that it is calculated only once private lazy var applicationServices: [ApplicationService] = { self.services }() } // MARK: - The protocol is implemented by default and events are distributed to each Leaf extension ApplicationServiceManagerDelegate { @available(iOS 3.0, *) open func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil) -> Bool { var result = false for service in applicationServices { if service.application?(application, didFinishLaunchingWithOptions: launchOptions) ?? false { result = true } } return result } /** Implement the protocol methods one by one, and distribute the events to each Leaf */ } // MARK: - Mode of use final class AppThemeApplicationService: NSObject, ApplicationService { func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil) -> Bool { /// setup AppTheme return true } } final class AppConfigApplicationService: NSObject, ApplicationService { func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil) -> Bool { /// setup AppConfig return true } } @UIApplicationMain class AppDelegate: ApplicationServiceManagerDelegate { override var services: [ApplicationService] { return [ AppConfigApplicationService(), AppThemeApplicationService(), ] } override init() { super.init() if window == nil { window = UIWindow() } } } From the above code example, we can see that each Leaf implements the ApplicationService protocol, which can get all callbacks that AppDelegate can get. For AppDelegate, there will be no business logic inside. 
Because of the default implementation of the protocol, the events are already distributed to each Leaf by default; the only remaining task is to provide the list of leaves. To support use in a componentized setup, the manager does not reference each Leaf directly but can instead read the list from a plist configuration file. This rounds out the decoupling scheme; the reusable library distilled from it is ApplicationServiceManager [4]. It is fairly lightweight and you are welcome to use it. (A small usage sketch of the getService(by:) helper is included at the end of this article.) In fact, Alibaba's BeeHive [5] also tackles AppDelegate decoupling, but it is a comprehensive componentization scheme in which AppDelegate event distribution is only one part. The three design patterns above can be chosen or combined according to the actual situation of each project. For example, the composite pattern suits the shell (host) project for distributing events to each component, while the command or mediator pattern can handle event distribution within a component. We should work harder! Let's be CoderStar!
reference material
- Refactoring Massive App Delegate [6]
reference material
[1] DesignPatternsDemo:
[2] In depth design mode:
[3] Command code example:
[4] ApplicationServiceManager:
[5] BeeHive:
[6] Refactoring Massive App Delegate:
It is very important to have a technical circle with a group of like-minded friends. Here is my technical WeChat official account; only solid technical content here. WeChat official account: CoderStar
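As promised above, here is a small usage sketch of the getService(by:) helper. It assumes the AppDelegate and AppConfigApplicationService types from the composite-pattern example; fetchRemoteConfig() is a hypothetical method used only to illustrate casting the looked-up leaf to its concrete type.

// Sketch only: look up a registered leaf service from anywhere in the app.
if let appDelegate = UIApplication.shared.delegate as? AppDelegate,
   let configService = appDelegate.getService(by: AppConfigApplicationService.self) as? AppConfigApplicationService {
    // fetchRemoteConfig() is hypothetical; a real module would expose its own API here.
    configService.fetchRemoteConfig()
}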
https://programmer.help/blogs/619904fb7e0b2.html
CC-MAIN-2022-21
en
refinedweb
End-to-End Testing with Aurelia and Protractor This week Core Aurelia Team member in charge of testability, Vildan Softic, shows us how to write End-to-End tests with Aurelia and Protractor. Introduction combining Aurelia and Protractor. About The Author much as possible to Aurelia. You can find Vildan on GitHub and LinkedIn. per se, but with the app's interface. This is different than unit tests, which take care of isolated parts of the application - called units - by verifying them through the removal or mocking of other parts. It's important to note that one method of testing does not replace the other, so don't take this article as an excuse to skip unit testing. How are E2E tests different? One of the key differences when working with E2E tests is that all of your work is located in the browser, which naturally leads to writing a lot of asynchronous code. It doesn't matter whether you request a DOM Element, send some fake keystrokes or trigger a click, each of these actions needs to be automatically translated to understandable instructions and sent to the browser under test. So working with Promises becomes a major enabler when keeping track of deferred executions and responses. Another important aspect already noted is the necessity to translate programmatic actions into browser understandable ones. Needless to say, variations exist between the different browsers.... Thus, modifications which get persisted to databases or local storage will stay that way and may produce side effects for your next test run. Last but not least, there is a much higher test code maintenance cost, compared to unit tests. The reason is that now, not only one component is tested exclusively, but rather the whole system at once. Imagine trying to fill out an input element with the id txtFirstname, just to realize the next day your tests fail because your fellow front-end designer decided to change the name to txtFirstName. This makes it clear that you must treat your test code like general application logic and give it all the love it deserves. Protractor Although the previous section may sound depressing, there is hope for developers by the name of Protractor. It is an End-To-End testing framework originally created for the JavaScript front-end framework AngularJS. Under the hood it's actually a Node.js application, which supports a wide variety of assertion/test libraries like Jasmine, Mocha or Cucumber. The rest of the article will focus on the BDD Testing Framework Jasmine. A nice tutorial on how to use it can be found here.. Now the interesting thing is that instead of manually testing your application in each of the major browsers, automated Protractor tests can run on multiple browsers at the same time, saving you valuable time and money. Support is wide-spread and even includes headless browsers like PhantomJS. Besides that, being a wrapper, it offers additional convenience features, not present in vanilla WebDriverJS-API. One feature, perhaps the most important, is that it allows you to write asynchronous tests in a synchronous style. This means that Protractor will automatically execute the next task, the moment the previous pending tasks finish. A Basic Example To get a basic idea of how this works, take a look at the following example. 
describe('aurelia homepage', function() { it('should load page', function() { browser.get(''); expect(browser.getTitle()).toEqual('Home | Aurelia'); }); }); As you can see, the test utilizes Jasmine for BDD style testing which is placed in a separate JavaScript file and defines a scenario/suite by using a describe block. Each test then gets handled by a separate it function. In this one we'd like to verify that after loading the Aurelia Homepage, the title equals our expected page title. The first line will issue a general browser method get which loads the given URL. This function now returns a promise, to which you'd normally append a then function, which gets called after the promise successfully resolves. In this test case though, we don't need to care about that, because Protractor will execute the expectation only after the previous line has successfully resolved. Protractor also adapts the Jasmine expectations to work in an async way, so by the time matchers like toEqual are called, the previous expectation is already resolved. But sometimes you need to wait for a certain action to happen in the future. Again we can leverage the general browser object and utilize it's sleep method. describe('aurelia homepage', function() { it('should navigate to a different subpage', function() { // load page // navigate to different subpage // wait for X to happen after 2 seconds browser.sleep(2000) expect(WHATEVER).toEqual(SOMETHING); }); }); Accessing DOM Elements Great! So we know how to load a page. But how do we find DOM Elements and see whether they are rendered properly? Protractor provides the global object element, an EelementFinder, which offers a locator factory by used to define a way to search for elements. Let's take a look at the following example. describe('aurelia homepage', function() { beforeEach(function() { browser.get(''); }) it('should have proper header text set', function() { expect(element(by.tagName('h2')).getText()).toBe('EXPECTED HEADER'); }); it('should find an about section', function() { expect(element(by.id('about')).isPresent()).toBe(true); }); }); The first test is looking for an <h2> tag by utilizing the tagName locator. The second test looks for an element with the ID about and expects it to be rendered on the page. Here we use the isPresent method, provided by the EelementFinder. You may have noticed the method beforeEach at the top of the describe block. This is a setup method, which will get called before each test in the current scope, being the current describe block. To perform tear down operations, you'd simply define a function afterEach, which gets called after each test. You can find a full list of locators here. Just keep in mind that everything specific to AngularJS, like bindingor modelwon't work with Aurelia Apps. Interacting with Forms Now we know how to work with general elements, but what about inputs? Wouldn't it be nice to fake data entries in order to verify the logic of a form? To do so, let's look at the next example. Our test will navigate to the Google homepage, search for a specific keyword, trigger the search and expect to see an element containing the given value. describe('google homepage', function() { beforeEach(function() { browser.get(''); }); it('should load page', function() { element(by.name('q')).sendKeys('Aurelia'); element(by.name('btnG')).click(); browser.sleep(2000); expect(element(by.css('h3 a')).getText()).toContain('Aurelia'); }); }); First we navigate to the page using browser.get and look for an input with the name q. 
The sendKeys method now simulates the keystrokes for the keyword Aurelia. Afterwards we perform a search by clicking the button named btnG. Now we need to wait for Google to perform the search and render the result. We therefore leverage the browser.sleep method to give it some time. Finally we look for a link containing the word Aurelia. Protractor and Aurelia In order to work with Protractor, there is a little configuration that is necessary. This is done in a configuration file, e.g. protractor.conf.js, which sets up the basic information for Protractor so it can find our test files, start the standalone Selenium server and wire up the JasmineOptions for the console output. The Aurelia Skeleton Navigation App thankfully already shares a preconfigured setup. Let's take a look at it. exports.config = { directConnect: true, capabilities: { 'browserName': 'chrome' }, onPrepare: function() { browser.ignoreSynchronization = true; by.addLocator('valueBind', function (bindingModel, opt_parentElement) { var using = opt_parentElement || document; var matches = using.querySelectorAll('*[value\\.bind="' + bindingModel +'"]'); var result = undefined; if (matches.length === 0) { result = null; } else if (matches.length === 1) { result = matches[0]; } else { result = matches; } return result; }); }, //seleniumAddress: '', //add proper version number seleniumServerJar: './node_modules/gulp-protractor/node_modules/protractor/selenium/selenium-server-standalone-2.44.0.jar', specs: ['specs/e2e/dist/*.js'], //Options to be passed to Jasmine-node. jasmineNodeOpts: { showColors: true, defaultTimeoutInterval: 30000 } }; The first setting tells Protractor to directly connect to the Browser leveraging its WebDriver, in this case, Chrome, defined by the capabilities property. By doing so, Protractor won't need a Selenium Server and will talk directly to the mentioned Browser. The method onPrepare is useful for setting up code before Protractor starts. The first line tells Protractor not to expect an AngularJS homepage, but let ourselves do the checking for when a page is fully loaded. Afterwards we add an Aurelia specific Locator named valueBind which looks for elements binding their value to a specific model. The option seleniumServerJar now may be omitted since we are using directConnect. If specified together, it will simply be ignored by Protractor. The property specs takes the path to our spec files. Since Aurelia is built from ground up with full support for ES6, we encourage developers to write their tests using ES6 features. Since we'd like to start tests only when Aurelia is fully loaded, we leverage another feature of Protractor called executeAsyncScript. beforeEach(() => { browser.get(''); browser.executeAsyncScript( 'var cb = arguments[arguments.length - 1];' + 'document.addEventListener("aurelia-composed", function (e) {' + ' cb("Aurelia App composed")' + '}, false);' ).then(function(result){ console.log(result); }); }); This method provides a way to execute JavaScript directly in the inspected Web page and work with the results after completion. We use this feature to listen for a DOM event fired by Aurelia after initial view composition. By placing this in a beforeEach section, we ensure that none of the tests will be started before the async script successfully finishes. Testing the Aurelia Skeleton Navigation App Besides having the configuration file set up, the Skeleton Navigation App also defines a set of demo tests to help you get started with testing your own page. 
First you'd need to download the App directly from our Github-Repo, or install it via Yeoman and follow the installation instruction. Afterwards, in order to start E2E testing, simply open up a console and run the following command to start up the built in web server: gulp watch After that, open another console and hit the following command to start up the E2E test run: gulp e2e You will find the demo spec in the folder test/e2e/src/. Page Objects To conclude this article we're going to quickly look at how to structure tests. We organize our test methods using a pattern called Page Objects (POs). What this means is that you try to group information about how you access parts of the application into a separate class. This makes it simple to access specific elements multiple times. Now instead of repeating the element.by.xxx code over and over across multiple tests, we unify the access making it easier to maintain and modify. Since Aurelia promotes the use of ES6, our page objects are simple ES6 classes, exposing functionality through methods. These methods contain the logic for how to interact with Protractor. The following example shows our main Skeleton PO, which takes care of general application information like the page title and page navigation. export class PageObject_Skeleton { constructor() {} getCurrentPageTitle() { return browser.getTitle(); } navigateTo(href) { var deferred = protractor.promise.defer(); element(by.css('a[href="' + href + '"]')).click().then( () => { browser.sleep(2000); deferred.fulfill(true); }); return deferred.promise; } } The second PO is all about the Welcome page. export class PageObject_Welcome { constructor() {} getGreeting() { return element(by.tagName('h2')).getText(); } setFirstname(value) { return element(by.valueBind('firstName')).clear().sendKeys(value); } setLastname(value) { return element(by.valueBind('lastName')).clear().sendKeys(value); } getFullname() { return element(by.css('.help-block')).getText(); } pressSubmitButton() { return element(by.css('button[type="submit"]')).click(); } openAlertDialog() { return browser.wait(() => { this.pressSubmitButton(); return browser.switchTo().alert().then( function(alert) { alert.dismiss(); return true; }, function() { return false; } ); }); } } Test Specification The previously defined page objects can now be imported into our test specification by leveraging the ES6 import syntax. Using beforeEach we can instantiate the POs, navigate to the Web app and wait for the previously mentioned aurelia-composed event to start testing. Our page object methods, in combination with Jasmine's BDD style assertions, make each test become an easy to read English sentence. 
import {PageObject_Welcome} from './welcome.po.js'; import {PageObject_Skeleton} from './skeleton.po.js'; describe('aurelia skeleton app', function() { var po_welcome, po_skeleton; beforeEach( () => { po_skeleton = new PageObject_Skeleton(); po_welcome = new PageObject_Welcome(); browser.get(''); browser.executeAsyncScript( 'var cb = arguments[arguments.length - 1];' + 'document.addEventListener("aurelia-composed", function (e) {' + ' cb("Aurelia App composed")' + '}, false);' ).then(function(result){ console.log(result); }); }); it('should load the page and display the initial page title', () => { expect(po_skeleton.getCurrentPageTitle()).toBe('Welcome | Aurelia'); }); it('should display greeting', () => { expect(po_welcome.getGreeting()).toBe('Welcome to the Aurelia Navigation App!'); }); it('should automatically write down the fullname', () => { po_welcome.setFirstname('Rob'); po_welcome.setLastname('Eisenberg'); expect(po_welcome.getFullname()).toBe('ROB EISENBERG'); }); it('should show alert message when clicking submit button', () => { expect(po_welcome.openAlertDialog()).toBe(true); }); it('should navigate to flickr page', () => { po_skeleton.navigateTo('#/flickr'); expect(po_skeleton.getCurrentPageTitle()).toBe('Flickr | Aurelia'); }); }); Summary We hope you enjoyed this introduction to E2E Testing with Protractor and the Aurelia Framework. Start writing tests and see how it feels. If you encounter any problems join us in our gitter channel.
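One small refactoring idea, offered as a sketch rather than as part of the official skeleton: the executeAsyncScript block that waits for Aurelia's aurelia-composed event is repeated in every beforeEach, so it could be pulled into a tiny helper. The name waitForAureliaComposed is made up here.

function waitForAureliaComposed() {
  // resolves once Aurelia has finished its initial view composition
  return browser.executeAsyncScript(
    'var cb = arguments[arguments.length - 1];' +
    'document.addEventListener("aurelia-composed", function (e) {' +
    '  cb("Aurelia App composed");' +
    '}, false);'
  );
}

Each beforeEach can then simply call waitForAureliaComposed() right after browser.get(...), optionally logging the resolved message as the spec above does.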
http://blog.aurelia.io/2015/02/16/end-to-end-testing-with-aurelia-and-protractor/
CC-MAIN-2017-09
en
refinedweb
Updating my plugins to use the new view.find_by_selector method, I found something interesting: before, my plugin used view.get_symbols and was slow with some very big sources (and the 'Goto Symbol' function also). So I changed it to use self.view.find_by_selector with the same searching scope and the result is beyond my expectation.

import sublime, sublime_plugin
import time

class TestSymbolListCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        print "*view.find_by_selector:"
        # mimic view.get_symbols()
        btime = time.time()
        self.classlist1 = [(pos, self.view.substr(pos)) for pos in self.view.find_by_selector('source.pascal meta.function.pascal, source.pascal meta.class.pascal entity.name.class.pascal')]
        print time.time() - btime
        print len(self.classlist1)

        print "*view.get_symbols:"
        btime = time.time()
        self.classlist2 = self.view.get_symbols()
        print time.time() - btime
        print len(self.classlist2)

        print "*compare result:"
        print self.classlist1 == self.classlist2

And this is the result on a very large Pascal file:

>>> view.run_command('test_symbol_list')
*view.find_by_selector:
0.0929999351501
17548
*view.get_symbols:
4.77500009537
17548
*compare result:
True

So view.get_symbols takes nearly 5 seconds and view.find_by_selector 0.1 second. There must be something wrong with view.get_symbols, what do you think, Jon?

Yeah, there's some performance issue with get_symbols that I haven't had a chance to look at yet
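If you want to repeat this comparison for other selectors or APIs, the timing boilerplate can be factored into a small helper; this is only a sketch building on the command above, not additional Sublime API.

import time

def timed(label, fn):
    # run fn(), report elapsed time and number of items returned
    btime = time.time()
    result = fn()
    print "%s: %.3fs, %d items" % (label, time.time() - btime, len(result))
    return result

# e.g. inside run():
#   timed("view.get_symbols", self.view.get_symbols)
#   timed("view.find_by_selector", lambda: self.view.find_by_selector('source.pascal meta.function.pascal'))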
https://forum.sublimetext.com/t/goto-symbol-performance-issue/2646
CC-MAIN-2017-09
en
refinedweb
Hi Linus,

with HZ=1000 even on 32bit platforms, we really should use the 64 bit jiffies value for exported interfaces like uptime, process start time etc. Otherwise innocent users might get quite surprised when ps output goes berserk after 49.5 days, showing processes as having started in the future.

Note that the appended patch does not change any of the exported interfaces, it just avoids internal overflows.

These changes were already discussed on lkml long ago, before George Anzinger introduced the current 64 bit jiffies implementation. In its current form this patch was posted on lkml last month, and has been in -dj (modulo the HZ=1000 change) since 2.5.20-dj3.

Tim

Part 1/4: "Infrastructure"

Provide a sane way to avoid unnecessary locking on 64 bit platforms, and a 64 bit analogon to "jiffies_to_clock_t()". Naming it "jiffies_64_to_user_HZ()" is the only part of these patches I expect to be controversial.

--- linux-2.5.46-bk4/include/linux/jiffies.h	Mon Nov  4 23:30:04 2002
+++ linux-2.5.46-bk4-j64a/include/linux/jiffies.h	Sun Nov 10 09:16:35 2002
@@ -2,14 +2,34 @@
 #define _LINUX_JIFFIES_H
 
 #include <linux/types.h>
+#include <linux/spinlock.h>
+#include <asm/system.h>
 #include <asm/param.h>	/* for HZ */
 
 /*
  * The 64-bit value is not volatile - you MUST NOT read it
- * without holding read_lock_irq(&xtime_lock)
+ * without holding read_lock_irq(&xtime_lock).
+ * get_jiffies_64() will do this for you as appropriate.
  */
 extern u64 jiffies_64;
 extern unsigned long volatile jiffies;
+
+static inline u64 get_jiffies_64(void)
+{
+#if BITS_PER_LONG < 64
+	extern rwlock_t xtime_lock;
+	unsigned long flags;
+	u64 tmp;
+
+	read_lock_irqsave(&xtime_lock, flags);
+	tmp = jiffies_64;
+	read_unlock_irqrestore(&xtime_lock, flags);
+	return tmp;
+#else
+	return (u64)jiffies;
+#endif
+}
 
 /*
  * These inlines deal with timer wrapping correctly. You are 

--- linux-2.5.46-bk4/include/linux/times.h	Mon Nov  4 23:30:06 2002
+++ linux-2.5.46-bk4-j64a/include/linux/times.h	Sun Nov 10 09:16:35 2002
@@ -2,7 +2,16 @@
 #define _LINUX_TIMES_H
 
 #ifdef __KERNEL__
+#include <asm/div64.h>
+#include <asm/types.h>
+
 # define jiffies_to_clock_t(x)	((x) / (HZ / USER_HZ))
+
+static inline u64 jiffies_64_to_user_HZ(u64 x)
+{
+	do_div(x, HZ / USER_HZ);
+	return x;
+}
 #endif
 
 struct tms {
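For readers wondering where the figure of roughly 49.5 days comes from: with a 32-bit jiffies counter ticking at HZ=1000, the counter wraps after 2^32 / 1000 seconds, which is a little under 50 days. A stand-alone sketch of that arithmetic, not part of the patch:

#include <stdio.h>

int main(void)
{
	/* 32-bit jiffies at HZ=1000: seconds until the counter wraps */
	unsigned long long wrap_seconds = (1ULL << 32) / 1000;

	printf("wrap after %llu seconds (~%.1f days)\n",
	       wrap_seconds, wrap_seconds / 86400.0);
	return 0;
}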
https://lkml.org/lkml/2002/11/10/4
CC-MAIN-2017-09
en
refinedweb
A Partridge In A Pear Tree
December 23, 2016

We first define the gifts, then iterate through the verses of the song in two nested loops:

(define gifts '(
  ("first" "a partridge in a pear tree")
  ("second" "two turtle doves")
  ("third" "three french hens")
  ("fourth" "four calling birds")
  ("fifth" "five golden rings")
  ("sixth" "six geese a-laying")
  ("seventh" "seven swans a-swimming")
  ("eighth" "eight maids a-milking")
  ("ninth" "nine ladies dancing")
  ("tenth" "ten lords a-leaping")
  ("eleventh" "eleven pipers piping")
  ("twelfth" "twelve drummers drumming")))

(define (christmas)
  (let loop ((gifts gifts) (rev-gifts (list)))
    (when (pair? gifts)
      (display "On the ")
      (display (caar gifts))
      (display " day of Christmas my true love gave to me ")
      (display (cadar gifts))
      (let rev-loop ((rev-gifts rev-gifts))
        (when (pair? rev-gifts)
          (display (if (< 1 (length rev-gifts)) ", " " and "))
          (display (cadar rev-gifts))
          (rev-loop (cdr rev-gifts))))
      (display ".")
      (newline)
      (loop (cdr gifts) (cons (car gifts) rev-gifts)))))

You can run the program at.

I wish you a Merry Christmas and a Happy New Year.

425 characters in Common Lisp:

Oops, sorry, the bzip2 -9 file is actually only 429 bytes:

Here's a little C program (there are a couple of Obfuscated C competition programs that are worth a look). Not quite as short as Pascal's (hard to compete with a language with built-in printing of numbers as ordinals):

Here's some JS as well:

Merry Christmas all. Here's the 12 days in MUMPS:

twelvedays ;
 n i,j,nth
 f i=1:1:12 d
 . s nth=$p($t(days),";",i+1)
 . w !,"On the ",nth," day of Christmas my true love gave to me"
 . f j=i:-1:1 d
 . . w " "
 . . w $p($t(presents+j),";",2)
 . . i j>2 w ","
 . . e w $s(j=2:" and",1:".")
 q
 ;
days ;first;second;third;fourth;fifth;sixth;seventh;eighth;ninth;tenth;eleventh;twelfth
presents ;

I thought I would try in Python:

gifts = {
    'twelvth': 'twelve drummers drumming',
    'eleventh': 'eleven pipers piping',
    'tenth': 'ten lords a-leaping',
    'ninth': 'nine ladies dancing',
    'eighth': 'eight maids a-milking',
    'seventh': 'seven swans a-swimming',
    'sixth': 'six geese a-laying',
    'fifth': 'five golden rings',
    'fourth': 'four calling birds',
    'third': 'three French hens',
    'second': 'two turtle doves and ',
    'first': 'a partridge in a pear tree.',
}

days = ('first', 'second', 'third', 'fourth', 'fifth', 'sixth',
        'seventh', 'eighth', 'ninth', 'tenth', 'eleventh', 'twelvth')

def first_part(day):
    print("On the {} day of Christmas"
          " my true love gave to me ".format(day), end="")

def second_part(gift, comma=True):
    print(gift, end="")
    print(end=", ") if comma else None

for day in days:
    first_part(day)
    for i in range(days.index(day), -1, -1):
        comma = False if (i == 0 or i == 1) else True
        gift = gifts[days[i]]
        second_part(gift, comma)
    print()

Actually, I quite like the obfuscated C version, and it is easy to de-obfuscate:

(let ((p "On the ~:r day of Christmas my true love gave to me ~a~%")
      (r "twelve drummers drumming, eleven pipers piping, ten lords a-leaping, nine ladies dancing, eight maids a-milking, seven swans a-swimming, six geese a-laying, five golden rings, four calling birds, three French hens, two turtle doves and a partridge in a pear tree."))
  (let loop ((i '(236 215 196 176 157 137 113 90 69 48 26 0)) (c 1))
    (if (not (null? i))
        (begin
          (format #t p c (substring r (car i)))
          (loop (cdr i) (+ c 1))))))

But a constructive version could also make use of format, although its dsl is very unschemely:

(let ((p "On the ~:r day of Christmas my true love gave to me ~a~%"))
  (let loop ((r '("turtle doves" "French hens" "calling birds" "golden rings" "geese a-laying" "swans a-swimming" "maids a-milking" "ladies dancing" "lords a-leaping" "pipers piping" "drummers drumming"))
             (s "a partridge in a pear tree.")
             (c 1))
    (format #t p c s)
    (if (not (null? r))
        (begin
          (loop (cdr r)
                (format #f "~r ~a~a ~a" (+ c 1) (car r) (if (= c 1) " and" ",") s)
                (+ c 1))))))

@Michael: Thanks, I wasn't particularly intending to be obfuscated, just compact. For real C obfuscation see, eg: That program does actually print out The Twelve Days of Christmas, if it's not clear from the code.

Happy New Year to all.
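Since half the fun of these exercises is comparing approaches, here is one more small sketch: a Python version that mirrors the reverse-accumulation trick of the Scheme solution above. The names are just made up for the example, and it is only a sketch rather than anything definitive:

# Walk the days in order, accumulating the gifts already given, most recent first.
gifts = [
    ("first", "a partridge in a pear tree"), ("second", "two turtle doves"),
    ("third", "three french hens"), ("fourth", "four calling birds"),
    ("fifth", "five golden rings"), ("sixth", "six geese a-laying"),
    ("seventh", "seven swans a-swimming"), ("eighth", "eight maids a-milking"),
    ("ninth", "nine ladies dancing"), ("tenth", "ten lords a-leaping"),
    ("eleventh", "eleven pipers piping"), ("twelfth", "twelve drummers drumming"),
]

def christmas():
    given = []  # gifts from earlier verses, most recent first
    for day, gift in gifts:
        line = "On the {} day of Christmas my true love gave to me {}".format(day, gift)
        for i, earlier in enumerate(given):
            line += (" and " if i == len(given) - 1 else ", ") + earlier
        print(line + ".")
        given.insert(0, gift)

christmas()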
https://programmingpraxis.com/2016/12/23/a-partridge-in-a-pear-tree/2/
CC-MAIN-2017-09
en
refinedweb
void setup() {
  // initialize the digital pin as an output.
  pinMode(led, OUTPUT);
  Serial.begin(9600);
}

void setup() {
  pinMode(13, OUTPUT);
  Serial.begin(9600);
}
void loop () { }

Binary sketch size: 1,968 bytes (of a 32,256 byte maximum)

What are you doing exactly?
(a) What Arduino is it?
(b) What version of the IDE do you have?
(c) ...

First, my result (still not using the IDE): I have yielded an optimized size for the "setup()" modified blink example of 2262 bytes only:

avr-objcopy -O ihex program.elf program.hex
   text    data     bss     dec     hex filename
      0    2262       0    2262     8d6 program.hex

...When compiling the final sketch I have then used those compiling flags (which incidentally are the same):
-Os -w -Wl,--gc-sections -ffunction-sections -fdata-sections
...
   text    data     bss     dec     hex filename
      0    1084       0    1084     43c Blink.cpp.hex

   text    data     bss     dec     hex filename
      0    2362       0    2362     93a Blink.cpp.hex

-c -g -Os -Wall -fno-exceptions -ffunction-sections -fdata-sections -mmcu=atmega328p -DF_CPU=16000000L -MMD -DUSB_VID=null -DUSB_PID=null -DARDUINO=103

(c) I do actually not use the IDE provided on the side at all, but instead I have used ...

Also note that you have been looking at the size of the image file, which is not the same as the number of bytes of program memory used by the sketch. The interesting thing is that your optimized code is more optimized than mine.

[....]

int main(void)
{
  init();
#if defined(USBCON)
  //USBDevice.attach();
#endif
  setup();
  for (;;) {
    loop();
    //if (serialEventRun) serialEventRun();
  }
  return 0;
}

$ avr-size sketch_mar09c.cpp.hex
   text    data     bss     dec     hex filename
      0    1968       0    1968     7b0 sketch_mar09c.cpp.hex

-rw-r--r--  1 nick  staff  5548 10 Mar 08:42 /sketch_mar09c.cpp.hex

int main() {
  pinMode(13, OUTPUT);
  Serial.begin(9600);
}

Binary sketch size: 1,638 bytes (of a 32,256 byte maximum)

int main() { }

Binary sketch size: 176 bytes (of a 32,256 byte maximum)

Up to now they did not seem to be that very much necessary... but there might be important reasons for them.

The USB line would be for the Leonardo. The serialEvent stuff I would happily be rid of. It's for calling your own function when serial data arrives, as if you can't work that part out for yourself.

That sounds interesting PeterH. Thank you for mentioning this. Can you tell me some more about those different sizes? I used the avr-size tool, which I actually thought to be the way to actually see the memory of the sketch used in the flash memory afterwards. Is it not? What is the "size of the image file"? Would that not be the size of the .elf or .hex file then?
http://forum.arduino.cc/index.php?topic=153493.0;prev_next=prev
CC-MAIN-2017-09
en
refinedweb
I'm still trying to get DragonFly reliably working on VirtualBox (just because Qemu on my Ubuntu doesn't like to run with the accelerator kqemu). - When I boot for the first time ("turn power on"), pressing a key in the boot menu (the "1. Boot DragonFly [default]" ... "7. Reboot" menu) will freeze the system. - Or if I let the countdown timer pass, it will just execute for a very show while to then begin to spin in DELAY with delta=0 and ticks_left=2 [1]. The patch I am using is appended. I am thankful for any hints or further suggestions what the reason for this strange behaviour could be! Also note that the clock calibration returns a difference of more than 1%. Matt, you suggested a while ago, to use the APIC timer and get completely rid of the 8254 timer. If you could give me a starting point I'd like to try that out. At least I can now boot and successuflly compile a kernel in VirtualBox, which is a big help in testing things out. diff --git a/sys/platform/pc32/isa/clock.c b/sys/platform/pc32/isa/clock.c index f02333d..170a240 100644 --- a/sys/platform/pc32/isa/clock.c +++ b/sys/platform/pc32/isa/clock.c @@ -152,6 +152,8 @@ static struct cputimer i8254_cputimer = { 0, 0, 0 }; +static int cold_delay_timer = 1; + /* * timer0 clock interrupt. Timer0 is in one-shot mode and has stopped * counting as of this interrupt. We use timer1 in free-running mode (not @@ -415,8 +417,18 @@ DODELAY(int n, int doswitch) #endif delta = tick - prev_tick; prev_tick = tick; - if (delta < 0) + + if (delta <= 0) { + /* break delay loop during early boot as + the timer might not be correctly working */ + if (cold_delay_timer) { + break; + } else { + kprintf("delta: %d, ticks_left: %d\n", + delta, ticks_left); + } delta = 0; + } ticks_left -= delta; if (doswitch && ticks_left > 0) lwkt_switch(); @@ -600,6 +612,7 @@ fail: static void i8254_restore(void) { + kprintf("i8254_restore\n"); timer0_state = ACQUIRED; clock_lock(); @@ -794,6 +807,11 @@ startrtclock(void) #endif } + /* Timer should now work correctly! */ + cold_delay_timer = 0; + //if (bootverbose) + kprintf("cold_delay_timer -> 0\n"); + EVENTHANDLER_REGISTER(shutdown_post_sync, resettodr_on_shutdown, NULL, SHUTDOWN_PRI_LAST); #if !defined(SMP)
https://www.dragonflybsd.org/mailarchive/kernel/2008-12/msg00005.html
CC-MAIN-2017-09
en
refinedweb
Introduction This is an end-to-end description of how in Visual Studio 2015 to create and stand-up a WCF Restful service that can be consumed by AngularJS Background I am writing this article because I could not find a single stand alone example to create and run a simple but complete WCF/JSON/AngularJS app in Visual Studio 2015. Instead I found several good articles that addressed various parts of writing and consuming JSON, or Rest in WCF, or AngularJS. Consequently, I created this article from several others, as well as adding my own experience in making applications work. Wherever I have lifted code, I tried to site the appropriate source with the author's URL. Using the code This is a cook book. Like all recipes, I am only laying a working foundation so that you can add your own flavoring to this model to meet your immediate needs. Creating a Restful Service with VS2015 Create the app Open VS2015, open a new project and select WCF Service Application. I named my project RestSample and I named the project WcfRestfulService. Add a new WCF service to the project Add a new service and interface, by right clicking on the project WcfRestfulService, and select WCF Service I named the service ProductService.svc . Also you can delete the existing IService and Service.svc files since we are not using them. Create a data model This step is optional since you can simply pass a JSON string latter on. Still this shows how to create a JSON formatted string from a net class using serialization, which is very common. To create the data model, first create a folder (I called mine Domain) that will host our data service. You can do this by right-clicking the project file and selecting Add New Folder. In the folder add a class, by right-clicking the folder, click Add, select Add class I named my file and class Product . Following is the content of the class, prepared for serialization in WCF. [DataContract] public class Product { [DataMember] public int ProductId { get; set; } [DataMember] public string Name { get; set; } [DataMember] public string CategoryName { get; set; } [DataMember] public int Price { get; set; } } Next I created a static class called ProductServer.cs that will be used to create and provide the list of products. public sealed class ProductsServer { private static List<Product> _products; private static ProductsServer _instance; private static readonly object LockMechanism = new object(); public static ProductsServer Instance { get { if (_instance == null) { //not really neccessary on a small project, but it is the Microsoft recommended pattern lock (LockMechanism) { _instance = new ProductsServer(); } } return _instance; } } private ProductsServer() { Intialize(); } private static void Intialize() { _products = new List<Product> { new Product() {ProductId = 1, Name = "Product 1", CategoryName = "Category 1", Price = 10}, new Product() {ProductId = 2, Name = "Product 2", CategoryName = "Category 1", Price = 5}, new Product() {ProductId = 3, Name = "Product 3", CategoryName = "Category 2", Price = 15}, new Product() {ProductId = 4, Name = "Product 4", CategoryName = "Category 3", Price = 9} }; } public List<Product> Products { get { return _products; } } } Modify the IProductService with Web Protocols In IProductService , delete the DoWork function and replace it with GetProductList . 
Next add the WebInvoke decoration, so that java based callers (like AngularJS and AJAX) can read our JSON formatted product list [ServiceContract] public interface IProductService { [OperationContract] [WebInvoke(Method = "GET", ResponseFormat = WebMessageFormat.Json, RequestFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.Bare, UriTemplate = "GetProductList/")] List<Product> GetProductList(); } In the ProductService class add a reference to the ProductServer we created earlier to fetch the list. public class ProductService : IProductService { List<Product> IProductService.GetProductList() { return ProductsServer.Instance.Products; } } As Mr. Ghani may have noticed this is very similar to his page, which in fact I am using as a guide. His excellent article can be found at TopWcfTutorials . Code Clean-up At this point, the product should be a complete WCF service; on the other hand, I always seem to have issues that need to be resolved. To test our work, right click on the svc and select view in browser. When I tried this I got the following error: The type 'WcfService1.ProjectService', provided as the Service attribute value in the ServiceHost directive, or provided in the configuration element system.serviceModel/serviceHostingEnvironment/serviceActivations could not be found. As the error says, I forget to change the namespace, an easy error to make, which is why I am leaving it in. To resolve the issue, right click on the ProductService.svc , select View Markup, and open with the XML (text) Editor. You will see that the Service is still labeled as WcfService1. Change that to WcfRestulService. While I was doing that I went back into my project file and changed the default Assembly name and Namespace (right-click project file. Select properties) Now I have a new error. OperationContractAttributes are only valid on methods that are declared in a type that has ServiceContractAttribute. Looking in Web.config, I find there are no end points or behaviours, so we need to build-out the configuration. Refactor the Web.config with endpoints, services and behaviors Add the binding type in the protocol mapping to webHttpBinding Add the behaviors Add the service end points We now have a working service. Unfortunately, it is SOAP and we want Rest, so we add the following behavior extension. Next we need to reference the extension in the endpoint behaviors. Finally we need to add a class to handle cross platform service calls. This class is taken from CORS plus JSON Rest Adding a CORS extension Right-click on the project and add a new file to enable CORS - Cross Origin Resource Sharing. Enabling CORS will allow us to see the JSON string directly in the browser, and it will give AngularJS the permission neccessary to read the restful service.); } } } Testing the service At this point we should be able to see our product list serialized as JSON in our browser. You can do this by right-clicking ProductService.svc as we have been doing, and by adding making the call to the service /GetProductList In my browser it looks like this: Create an AngularJS App to Consume the Service Add an empty ASP.Net Web Application On the solution file, right-click and select Web, ASP.NET Web Application. I called mine WebApp. Next select the empty template, and click OK. Add an Index.html by right clicking the project file Load the AngularJS Libraries If you do not already have the AngularLS library files, open the Package Manager Console. You will find it under View->Other Windows->Package Manager Console. 
Load the AngularJS files directly from NuGet by typing in the following command at the promt PM> Install-Package AngularJS.Core This adds the AngularJS core library into the scripts folder. I had to open and close the folder a couple of times to see the change, but following is an image of scripts after loading the library. Create the AngularJS app and module files Start by creating a folder in WebApp named app. I am going to separate the app and the module files as is the generally recommended practice. Create two java script file named app.module.js and main.js, by right-clicking on the app folder and select Add, then JavaScript File. Add the following code to app.module. (function () { 'use strict'; angular.module('app', []); })(); Add the following in the main. You will need to modify the URL to use the same port as your service WcfRestfulService (function () { 'use strict'; angular .module('app') .controller('Main', main); function main($scope, $http) { $http.get('').then(function (response) { $scope.products = response.data; }); } })(); You will find the port number by clicking on the service and looking at the URL in the Development Server section. In my case it is 62245. It will be different on each machine. To test the service you should be able to copy or click on the url and see the service as we did earlier. Reference your AngularJS files We are finally ready to connect the WebApp to the service. To do this we need to reference the js classes in the Index.html, so that it can find the AngaularJS libraries and modules. The page we are going to build is very simple. At the same time it completes the project from start to finish. To bind to the app and controll we need to add the application directive and the controller directive in the body Finally we need to create a table and bind it to the JSON data provided by our service using the repeat directive. Don't forget, as I did, to set the WebApp as the startup project, by right-clicking the project file and selecting Set as StartUp Project. The simple results are show below. History 22 Feb 2016 - Submitted
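If you would also like to check the JSON feed from outside the browser, a short script along the following lines can be used. This is only a sketch and not part of the article's projects; the URL assumes the UriTemplate shown earlier, and you will need to change the port number to whatever your own Development Server shows (62245 in my case):

# Quick sanity check of the WCF REST endpoint from outside the browser.
# Assumes the service is running and that the port matches your Development Server.
import json
import urllib.request

URL = "http://localhost:62245/ProductService.svc/GetProductList/"

with urllib.request.urlopen(URL) as response:
    products = json.loads(response.read().decode("utf-8"))

for product in products:
    # ProductId, Name, CategoryName and Price come from the Product data contract above
    print("{ProductId}: {Name} ({CategoryName}) - {Price}".format(**product))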
http://126kr.com/article/13hs9d3h49
CC-MAIN-2017-09
en
refinedweb
Revision history for Net-IMAP-Client 0.95 Apr 9, 2011 - fix #48163 error after logout - add examine() method (identical to select(); keyword differs) 0.94 Dec 22, 2010 - fix #56647 rfc822 attachment mishandling - fix #43046 Does not strip \r from error messages - fix #59462, #43047 creation of non-modifiable array attempted - fix #43049 append() method's documentation - fix status(): avoid returning data from undef key 0.93 - add namespace() method - modify get_summaries() to always return array ref 0.92 Feb 28, 2009 - Doc updates 0.91 Feb 27, 2009 Fixed get_summary to correctly identify the attachment filenames with GMail's IMAP (GMail sends some headers in uppercase). 0.10 * unreleased * get_summaries now supports fetching additional headers. They are available (unparsed) via the $summary->headers method (where $summary is a Net::IMAP::Client::MsgSummary object). Minor cleanups and speed improvement. 0.9 Jan 25, 2009 - more fixes for cyrus imap server: get_flags and get_summaries 0.8 Jan 20, 2009 - updated dependencies (Encode isn't core in Perl 5.6) - fixed status() on uw-imap (server can return "NO CLIENT BUG DETECTED ..." when called on the selected mailbox). Thanks Max Maischein (corion.net). 0.7 Jan 13, 2009 - fixed get_flags (thanks Peter Pilsl for the report) 0.6 Nov 09, 2008 - better reconnect support (check that value of getline is undef and force reconnect if so) 0.5 Oct 31, 2008 - added new methods: create_folder, delete_folder, copy, get_flags, get_threads, fetch - fixed some bugs with append 0.4 Sep 22, 2008 - added append / expunge - added store, add_flags, del_flags, delete_message - heavily modified _send_cmd() to support literals - try to reconnect when connection is lost Fixes: - subtle bug related to using syswrite / sysread (now using only buffered I/O) - return proper notifications from get_part_body and get_parts_bodies (sometimes the \Seen flag becomes set, this must be reported.) - return references in get_parts_bodies - _parse_tokens now interprets BODY[*] as an atom - fixed BODYSTRUCTURE parser in MsgSummary.pm (sometimes the server would include additional extension data which we don't support and must be properly discarded) 0.3 Sep 08, 2008 Fix for - return undef from constructor when connection failed. 0.2 Sep 02, 2008 There are some disruptive changes, I hope no one took this module seriously yet. :-p - some support for server notifications. N/I/C will try to keep up with notifications involving \Deleted or \Seen flags (i.e. updating $imap->{FOLDERS}{$current_folder}) and it also can report an array of notifications after some commands. See the notifications() method. - new methods: folders_more(), noop(), get_parts_bodies(), capability(), seq_to_uid() - got rid of wantarray for most methods (the exception is folders()) - status() now returns a hashref instead of an array; needed since the IMAP STATUS command might actually fail for some folders. - most methods in Net::IMAP::Client::MsgAddress and Net::IMAP::Client::MsgSummary will now decode the data (previously it left it "MIME-Word"-encoded). - fixed dependencies in Makefile.PL -- hopefully 0.1 Aug 23, 2008 First public release.
https://metacpan.org/changes/distribution/Net-IMAP-Client
CC-MAIN-2017-09
en
refinedweb
SYNCFUSION BLOG

Using WPF inside LINQPad
Daniel Jebaraj | May 2, 2012

I love using LINQPad to create and test quick snippets. LINQPad brings some of the instant gratification associated with dynamic languages, such as Python and Ruby, to C#. Until recently, I had not used LINQPad to work with UI code. A few days ago, I was looking to test a small WPF code snippet. I figured there must be a way to use LINQPad. Several searches later, I had a working snippet that I have been using since. The code is quite simple, and it turned out that LINQPad has great built-in support for WPF through the PanelManager.StackWpfElement and PanelManager.DisplayWpfElement API calls. These calls allow you to create UI elements inside a named panel displayed in the lower pane beside the results window. Additional details are available at.

Displaying a list box

var items = new ObservableCollection();
// collection initialization
var list = new ListBox();
string template = @" ";
list.ItemTemplate = template.ToDataTemplate();
list.ItemsSource = items;
PanelManager.StackWpfElement(list, "WPF");

Note: DataTemplate is instantiated using an extension method. The method takes a snippet of XAML, plugs it into a standard XAML snippet for data templates (containing standard namespaces), and then instantiates the XAML using XamlReader.Load. You can, of course, change the XAML format to include other custom assemblies.

Code that instantiates Data Template

public static object InstantiateXAML(string xaml)
{
    return XamlReader.Load(
        XmlReader.Create(new StringReader(xaml))
    );
}

public static DataTemplate ToDataTemplate(this string template)
{
    string templateFormat = @" {0} ";
    return (DataTemplate) InstantiateXAML(string.Format(templateFormat, template));
}

The snippet provided in the download link below also demonstrates setting content, attaching event handlers, and obtaining access to dynamically created elements.

Note: You will have to add the following assembly references and namespace imports in LINQPad. The F4 key will cause the dialog that allows these to be added and displayed. Once added, you can save the assemblies as default.

Assemblies
1. PresentationCore.dll
2. PresentationFramework.dll
3. System.Windows.Controls.Ribbon.dll
4. System.Windows.dll
5. System.Windows.Interactivity.dll
6. System.Windows.Presentation.dll
7. System.Xaml.dll
8. WindowsBase.dll

Namespaces
1. System.Net
2. System.Net.Mail
3. System
4. System.Collections.Generic
5. System.Linq
6. System.Text
7. System.Collections
8. System.Windows
9. System.Windows.Controls
10. System.Collections.ObjectModel
11. System.Windows.Markup

Give it a try. This is the download link:
http://blog.syncfusion.com/post/Using-WPF-inside-LINQPad.aspx
CC-MAIN-2018-30
en
refinedweb
Click Here to See A Working Demo of this Angular Shopping Cart Click Here to See An Advanced Version of This Shopping Cart Watch Video Demonstrating Shopping Cart Features I updated this article and the sample project to Angular 5. The original article and AngularJS Shopping Cart project can be found at: Many people would like to sell products online but they don't have any prodcts to sell. And, many people would like to have other people sell the products they have but don't know how to find those people. This Angular 5 Shopping Cart with Video for MLM & Affiliate Marketing does both. You can give this cart to distributors filled with your products to put on their websites and it will pass the orders to you from any merchant account provider like PayApal with the Distributor ID of the distributor that the order came from so you can pay them commissions. In addition, you can also give distributors a link with their Distributor ID and those orders with the passed in Distributor ID will pass into PayPal or any designated merchant provider. The Shopping Cart in this project includes the ability to play videos from servers all over the world. The reason for this is that pictures in shopping carts in today's world are not feective in delivery the benefits of why somebody should buy a product. A video is a TV commercial that studies have shown is thousands of times more effective in generating sales. For example, teh YOUKU server in China is now the largest television network in teh world and you can post your videos (TV commercials) for free on the YOUKU server and play them in this Shopping Cart in stead of a boring picture of a product.. To start with I wanted to include a Pinterest Style Layout so I decided to use a common one that I have seen used often in shopping carts, namely, Codrops famous ViewModeSwitch, that you can find at:. ViewModeSwitch is a CSS Pinterest Style Layout that is used n many commercial shopping and it works well with AngularJS with minimal changes. Here is a video demonstration of the shopping cart: Here are some of the practical features I included: If you are PC(Window) or OS X (Mac) user you can install the latest version of node at: using one of the installers and follow the steps. If you install the latest version on Windows you will probably get a variety of errors that will take you hours of searching the Internet to fix. One possible solution is to not allow the installer to set the Environment PATH which is the default setting in the setup and to set the Environment PATH manually. At this point if you tried using npm some people will get the dreaded and now famous error: npm ERR! Windows_NT 6.1.7601 There are numberous working fixes for this error if you are behind a proxy but if you are'n't behind a proxy then trying to fix this error can make you crazy. To fix this error or if you just updated to the latest version of npm then. Install Angular CLI which will also install Angular's "ng" command globally on your system as shown below. At this time Angular 5 should install with the commands below. I won't go into installing Angular 5 at this time because you can find plenty of documentation on it on the Internet. I will keep the focus of this article on the source code for Angular 5 Mobile Apps. Directions for installing Angular-CLI are at: We should start by understanding Webpack, System.js and angular-cli. System.js was heavily used in the beginning when Angular 2 was being built. 
Webpack evolved next and finally, the defacto standard now, Angular CLI evolved as a sort of wrapper for Webpack and to help scaffold new projects and create components. easily. Installing angular-cli which will also install Angular's "ng" command globally on your system: Directions for installing angular-cli are at: npm uninstall -g angular-cli npm uninstall --save-dev angular-cli npm uninstall --save-dev angular/cli npm uninstall -g @angular/cli npm cache clean Delete the C:\Users\YOU\AppData\Roaming\npm\node_modules\@angular folder. Reboot, then, finally, run: npm install -g @angular/cli@latest To verify whether your installation completed successfully, you can run: ng version @angular/cli: 1.3.1 node: 7.4.0 os: win32 x64 Download and unzip the file "cart-app.zip" at the top of this article and place the unzipped folder 'cart cart-app folder as follows: Select a folder - I used C:\Angular and put the unzipped 'cart-app' folder in there and run: <a href="">C:\Angular>c</a>art-app>npm install This will install the node_modules folder which is very large so be patient. Next we will use Visual Studio Code IDE which works nicely on both Winows and Mac computers. and build the Angular 5 App in this editor. from the IDE the following commands: C:\Angular>mobile-app>ng build C:\Angular>mobile: As of Janurary 2017 If you are using angular-cli, AOT compiling is now the default compilation method when running the following command ng build --prod with no code change requirements. ng build --prod I will jump ahead here to explain how to build your "www" folder for mobile. The BIG SECRET to compiling an Angular 5 App for to run out of a folder isn't obvious. To build an Amgular 5 App so it will work is setting up the pathways correctly. Look at the index.html from the src folder you added to the project and you will notice that in the index.html file we replaced: <base href="../" /> with <script>document.write('<base href="' + document.location + '" />');<script> When you refresh a page this will dynamically set the base href to your current document.location. And if you look at app.routing.ts you will see that I added made sure we using Hash Tag Location in app.routing.ts: @NgModule({ imports: [RouterModule.forRoot(routes, {useHash: true})], exports: [RouterModule] }) Instead of: @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) Our Angular 5 Shopping Cart is also an Angular 5 Moble App for importing into XCODE (iPhone) or Android Studio. Our default build folder, namely, projectFolder/dist/ Will NOT work for our production version if we also want it to function as a mobile app. To create our production build we need to use some extra commands. We use the folder name "www" so I can use it as a mobile app as well as a web based app but you can change this to any name after you build it for production and then just paste it into your website. Let's create a production build of our Angular 5 Shopping Cart using the following command. // Run in command line when directory is projectFolder // flag prod bundle for production & flag aot enables the // ahead-of-time compilation also known as offline compilation. // --prod is now the same as --prod --aot ng build --prod --base-href /$ROOT/Angular/cart-app/www/ In the pathway above you will notice that I used the folder "Angular2" on my "C" drive and created my "cart-app" folder inside that directory. If you have your project in a different folder then adjust the pathway above accordingly. 
The contents of the generated "www" folder will go into our "www" folder in Android Studio or XCODE to use the Angular 5 Sopping Cart as a Mobile App and all the pathways will actually work. Viola! I still use IIS 7 and on IIS 7 and later versions you MUST be sure to set the MIME type for our json files. Our Angular 5 Shopping Cart uses config.json and products.json which require that the correct MIME Type is set on IIS 7 or any server.. We have only a few simple views in our app, namely, store, product, checkout, and blank. You can easily add more views like Legal Notices, Terms of service, Refund Policy, Cordova or PhoneGap if you use as a mobile app, etc. BlankComponent is used as a fudge for certain route calls. const routes: Routes = [ { path: '', component: StoreComponent }, { path: 'store', component: StoreComponent }, { path: 'cart', component: CartComponent }, { path: 'product/:id', component: ProductComponent }, { path: 'blank', component: BlankComponent } ]; Reading Data BEFORE App Startup in Angular 5 Using APP_INITIALIZER We want all of our views to have easy access to our Configuration & Products Data since once loaded this data does NOT change. To accomplish this we add the following in our app.module.ts file as follows. // IN OUR APP.MODDULE.TS FILE... import { NgModule } from '@angular/core'; import { APP_INITIALIZER } from '@angular/core'; // Other imports for our Modules... import { Injectable } from '@angular/core'; import { Http, Jsonp } from "@angular/http"; // Too many files inside Rx folder causes delay in loading // so we don't load all of them to improve loading time. import 'rxjs/add/operator/map'; import 'rxjs/add/operator/catch'; import { ConfigService} from './services/config.service'; // We are Using AOT, So useFactory Can't be a Dynamic Function. // We MUST Use An EXPORTED function as shown below. export function initConfig(config: ConfigService){ return () => config.load(); } @NgModule({ declarations: [ AppComponent, NavbarComponent, BlankComponent, StoreComponent, CartComponent, ProductComponent, CapitalizePipe, UniquePipe, OrderByPipe, SafeHtmlPipe ], imports: [ BrowserModule, FormsModule, HttpModule, FormsModule, JsonpModule, AppRoutingModule ], providers: [ LocalStorageService, DataObservableService, PagerService, OrderByPipe, SafeHtmlPipe, ConfigService, { provide: APP_INITIALIZER, useFactory: initConfig, deps: [ConfigService], // If you use Jsop then also pass in Http & Jsonp // deps: [ConfigService, Http, Jsonp], multi: true } ], bootstrap: [AppComponent] }) export class AppModule { } In our config.service.ts file we will retriev our JSON Config data and our Paroducts Data. We will use a combination of both Promise and Observable for best results. Promise - A Promise handles a single event when an async operation completes or fails. Observable - An Observable is like a Stream that allows you to pass zero or more events where the callback is called for each event. Observable is preferred over Promise because it provides the features of Promise and more. With Observable it doesn't matter if you want to handle no event, one event or multiple events. You can utilize the same API in each case. As shown below is our config.service file in which we will load our Config.json file using a Promise and then we will read DATA_SOURCE to see what method to use to load our Products.json file. 
// THIS IS OUR CONFIG.SERVICE.TS FILE import { Inject, Injectable } from '@angular/core'; import { Http, Jsonp, Response, Headers, RequestOptions, URLSearchParams } from '@angular/http'; import { DomSanitizer, SafeResourceUrl, SafeHtml, SafeUrl, SafeStyle} from '@angular/platform-browser'; import { Observable } from 'rxjs/Observable'; // Too many files inside Rx folder. So I did this to improve loading time. import 'rxjs/add/operator/map'; import 'rxjs/add/operator/catch'; import { Config } from '../services/config.static'; @Injectable() export class ConfigService { // Load configuration data so they will be available to all views private static _config: Object = null; // Load products so they will be available to all views // We can read from _promise value of DATA_SOURCE to determine // what method to use, local or remote, to get Products Data. private static _products: Object = null; private static _promise: Promise = null; private retryCount = 2; headers: Headers; options: RequestOptions; constructor(private http: Http, private _jsonp: Jsonp, private sanitizer: DomSanitizer) { this.headers = new Headers( { 'Content-Type': 'application/json', 'Accept': 'q=0.8;application/json;q=0.9', 'async': true, 'dataType': 'jsonp' }); this.options = new RequestOptions({ headers: this.headers }); ConfigService._promise = this.load(); } public config(): any { return ConfigService._config; } public getProducts(): any { return ConfigService._products; } public configKey(key: any) { return ConfigService._config[key] } getRandomInt(min, max) { return Math.floor(Math.random() * (max - min + 1)) + min; } load() { let local_base = './assets/data/' + 'config' + '.json?'; // we add a random value to prevent caching - this trick works nicely! let local_rnd = 'rnd=' + this.getRandomInt(1, 500); let local_url = local_base + local_rnd; return new Promise((resolve, reject) => { this.http.get(local_url) .map((response: Response) => response.json()) .catch((error: any) => { console.error(error); return Observable.throw(error.json().error || 'Server error'); }) .subscribe((data) => { ConfigService._config = data; // resolve(true); // UNSLASH THIS IF YOU DON'T GET PRODUCTS HERE !!! 
// INCLUDE THIS IF YOU WANT TO RETRIEVE PRODUCTS HERE //// let request: any = null; switch (data.DATA_SOURCE) { case 'local': { local_base = './assets/data/' + 'products' + '.json?'; local_rnd = 'rnd=' + this.getRandomInt(1, 500); local_url = local_base + local_rnd; request = this.http.get(local_url); } break; case 'remote': { // THIS IS AN EXAMPLE OF HOW TO USE A JSONP SERVER let _store = ''; let _userid = ''; const jsonp_base = data.JSONP_DOMAIN1; let jsonp_param = 'store=' + _store + '&userid=' + _userid; jsonp_param = jsonp_param + '&methodName=Feeds&jsonp=JSONP_CALLBACK'; let jsonp_rnd = '&rnd=' + this.getRandomInt(1, 500); let jsonp_url = jsonp_base + jsonp_param + jsonp_rnd; request = this._jsonp.get(jsonp_url, this.options); } break; case 'default': { console.error('Environment file is not set or invalid'); resolve(true); } break; } if (request) { request // .retry(this.retryCount) Deprecated in Angular 5 .map( (res) => { let products = res.json(); this.checkProducts(products); return products; }) .catch((error: any) => { console.error('Error reading ' + data.DATA_SOURCE + ' configuration file'); }) .subscribe((responseData) => { ConfigService._products = responseData; resolve(true); }); } else { console.error('Env config file "env.json" is not valid'); resolve(true); } }); }); } checkProducts(prods: any) { if (prods) { prods.forEach((data) => { // Get embed format for given tube servers like youtube, vimeo, youku, etc. data.link = this.getVideoEmbed(data.tube, data.videoid); data.videopage = this.sanitizer.bypassSecurityTrustResourceUrl(data.link); }); }; } // ... ETC. } 5 Shopping Cart App. You can DOWNLOAD my. Retrieving our Config and Products Data is now easy to do in any of our views as follows. this.config = this.configService.config(); // alert('Configurations: '+ JSON.stringify(this.config)); this.products = this.configService.getProducts(); // alert('Configurations: '+ JSON.stringify(this.products)); I added a file called config.static to illustrate another approach and perhaps better approach you can use for global configuration files. Using this approach is illustrated below. import { Config } from 'app/services/config.static'; alert(Config.CONFIG.STORE_BG_IMAGES[0]); You can see our PagerService buttons shown below. Our pager is actually fairly simple and the code for it is as follows. <table style="float:right;"> <tr><td> <!-- slide button pager --> <ul * <li [ngClass]="{disabled:pager.currentPage === 1}"> <a (click)="setPage(1)">First</a> </li> <li [ngClass]="{disabled:pager.currentPage === 1}"> <a (click)="setPage(pager.currentPage - 1)">Previous{{page}}</a> </<li> <li [ngClass]="{disabled:pager.currentPage === pager.totalPages}"> <a (click)="setPage(pager.currentPage + 1)">Next</a> </<li> <li [ngClass]="{disabled:pager.currentPage === pager.totalPages}"> <a (click)="setPage(pager.totalPages)">Last</a> </<li> </<ul> </<td></<tr> </table> In our app we retrieve videos from the hundreds of tube servers that allow embedding in web pages. In addition to displaying millions of movies and all kinds of videos this code will also display YOUR monetized videos from these Tube Servers like YouTube among movies and other videos. In our ConfigService we apply in checkProducts the embed format for various tube servers as follows. checkProducts(prods: Product[]) { if (prods) { prods.forEach((data) => { // Get embed format for tube servers like youtube, vimeo, youku, etc. 
data.link = this.getVideoEmbed(data.tube, data.videoid); data.videopage = this.sanitizer.bypassSecurityTrustResourceUrl(data.link); }); }; }. The Products Module uses the ConfigService to get the products data and then selects whatever "id" was passed by the router. // private route: ActivatedRoute // get URL parameters via Route this.sub = this.route .params .subscribe(params => { this.params = params['id']; this.getProducts(); }); getProducts() { this.prods = (this.configService.getProducts()).filter(prods => ((prods.sku === this.params) || (this.params === '')) ); window.scrollTo(0, 0); } Our CartModule View calls CartService in cart.service.ts which uses Observable on localStorage as follows. loadItems(): Observable { let items = localStorage != null ? localStorage[this.cartName + '_items'] : null; if (items != null && JSON != null) { try { items = JSON.parse(items); for (let i = 0; i < items.length; i++) { let item = items[i]; if (item.sku != null && item.productname != null && item.unitprice != null && item.saleprice != null && item.showsale != null && item.quantity != null && item.sh != null && item.faux != null) { item = new this.cartItem(item.sku, item.productname, item.unitprice, item.saleprice, item.showsale, item.quantity, item.sh, item.faux); this.items.push(item); } } } catch (err) { // ignore errors while loading... } } return items; } Users can edit the cart and checkout using PayPal, Google Wallet, and Stripe. Below is what the Cart View looks like on a laptop. As I explained earlier you can use the compiled files as a Mobile Shopping Cart that you can install on any Mobile Device or as a web based Angular 5 Shopping Cart on your website. Shown on the left I am using it on a Mobile Phone for my comic book collection. The best feature is that I can set a DISTRIBUTER_ID in my Config.json file and give a link or the folder with this app to someone who wants to sell my products and get paid a commission. I decided to add a collection of hover animations and I wrote an animation editor that is in the menu to select and apply different hover animations to different objects in the cart. I loooked at a number of hover libraries and picked one called Hover that I really liked by Ian Lunn which you can explore on his GitHub at: To apply an effect you simply select the Effects tab in the menu and then selct one of the green radio buttons, namely: storeimg, store pill, carousel img, or carosel pill. Note that the carousel options are only avaiable if you have added the JavaScript for the Super Slick Carousel. After selecting the object you want to apply the hover effect to simply click on the effect you want in the list belwo. You can easily define new objects to apply these effects to in the views. 
The hover effects from the Hover library are applied as follows: changeAnimation(effect_name) { event.preventDefault(); this.isOpenNavbarAnimation = !this.isOpenNavbarAnimation; let e = ''; if (this.myModel === 'carousel_img_video') { e = '.carousel_img_video'; } else if (this.myModel === 'carousel_pill') { e = '.carousel_pill'; } else if (this.myModel === 'store_img_video') { e = '.store_img_video'; } else if (this.myModel === 'store_pill') { //e = '.nav-pills li'; e = '.store_pill'; } if (e.length > 0) { $(e).removeClass(function (index, css) { return (css.match(/(^|\s)hvr-\S+/g) || []).join(' '); }); $(e).addClass(effect_name); } if (effect_name.length > 0) { $(e).removeClass(function (index, css) { return (css.match(/(^|\s)hvr-\S+/g) || []).join(' '); }); $(e).addClass(effect_name); } }; The controls below set the effects above:: // create a radioButtonGroup for our apply effects options public myOptions = [ { id: 'store_img_video', name: 'store img', disabled: false, showinfo: '' }, { id: 'store_pill', name: 'store pill', disabled: false, showinfo: '' }, { id: 'carousel_img_video', name: 'carousel img', disabled: false, showinfo: '' }, { id: 'carousel_pill', name: 'carousel pill', disabled: false, showinfo: '' } ];. To create the gradient in these navbars I used the gradient editor at:. For the Navbars I applied Color Coordination with the Navbars so that each navbar would have its own Hover Effect when hovering over the pills created by Codrops famous ViewModeSwitch, The code that changes the background color and enlarges the image as well as dozens of other cool hover effects are applied using the Hover library. A few other transition effects are from the project on Codrops related to ViewModeSwitch called ResponsiveIconGrid at: In each Navbar style sheet we have that hover gradient css for the pills as follows. Which produces the different hover effects for each navbar. However, we need to turn off many of these effects when the shopping cart is on a mobile device and the CSS in this project does that to achieve a nicer display on mobile devices. Dozens of samples of all kinds of animation effects for Codrops famous ViewModeSwitch are available on Codrops. However remember that animation effects are distracting and take a reader's attention away from the ad copy for your products so use them sparingly. Below you can see there are a large number of effects you can apply to any object in any view. The red button in the tab menu dropdown allows you remove any hover effect you have applied. For additional glyphicons check out: One of the things you have to decide is how to handle closing of expanded dropdowns when there is an off menu click. I tried several approaches to this and finally decided on the following approach to handle off menu clicks to close any expanded dropdowns in our NavbarComponent as shown below. @Component({ selector: 'app-navbar', templateUrl: './navbar.component.html', styleUrls: ['./navbar.component.scss'], host: { '(document:click)': 'offNavClick($event)', }, providers: [WindowService, DataObservableService, LocalStorageService, CartService] }) constructor(private _eref: ElementRef) { } offNavClick(event) { // Off menu code to close dropdowns if (!this._eref.nativeElement.contains(event.target)) { this.isOpenNavbarTheme = false; this.isOpenNavbarAnimation = false; this.isOpenNavbarVideoSites = false; } } I don't like the look of Bootstrap 3 buttons so I decided to give them some depth as shown below. 
To do this I used a really cool Bootstrap 3 Editor that creates buttons with a gradient and mouseover and mousedown effects with a single block of CSS code at:

You can use the ng generate command to add features to the app by going to the directory you want to add the module, service, etc. to using one of the commands below. For example, to add a Cart Service you would go to the directory you want to add it to and type:

C:\Angular2\cart-app>cd src\app\services
C:\Angular2\cart-app\src\app\services>ng generate service cart

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://www.codeproject.com/Articles/881354/Angular-Shopping-Cart-for-Affiliate-Marketing-Angu?msg=5350426
CC-MAIN-2018-30
en
refinedweb
The Twitter OAuthcalypse hit me last week. Back in July, I used Twitter’s basic authentication scheme in a new site I’m working on (wantbox.com) to easily send out tweets via my @wantbox account whenever someone posted a new “want” to the site. This integration took about 30 minutes to research and implement. All I had to do was put my twitter username and password in my Django project’s settings.py file, pip install the python-twitter app into my virtualenv and then use: from django.conf import settings import twitter api = twitter.Api(username=settings.TWITTER_USERNAME, password=settings.TWITTER_PASSWORD) api.PostUpdate('Someone in 02481 wants "a home gym" %s' % extra_stuff) The resulting tweet would appear in my @wantbox stream appearing to come “via API”. Easy-peasy. Last week, however, I started getting “401 Unauthorized” errors whenever I tried to post to Twitter. Clearly the Twitter API team had officially turned off basic authentication and was now requiring all apps to connect using OAuth. Yesterday around lunchtime I stopped work on the fun new wantbox feature I was working on so I could “quickly fix” my broken Twitter posting. Oh boy…quickly and OAuth clearly don’t play well together. I tried a bunch of solutions which ultimately failed. I worked through the night…I got up in the morning and worked some more. I gave up. I stubbornly came back. I gave up again. I lost hope: Finally, when all hope was nearly loss, I succeeded in connecting to my Twitter account via OAuth and posting a tweet! I completely understand the rationale for moving to OAuth (no more Twitter passwords stored on third-party sites, easier management of who can connect, etc…) but the process of getting there was painful to me. If by sharing my experience I save even one of you from some of this pain, then my job is done. How I eventually got it working I was fortunate to find Jeff Miller’s blog post “Twitter from the command line in Python using OAuth.” It was the resource that got me over the top (thanks Jeff!) Keep in mind that my goal was connecting one Twitter account (@wantbox) to the Twitter app I created (wantbox.com). I am not yet adding the feature of letting individual site users connect their Twitter accounts to my Twitter app. STEP ONE: Connecting your Twitter Account to your Twitter App - Log into the Twitter account where you want to create a Twitter App - Create the Twitter app: - application name: this will be in the from line e.g. “via wantbox.com” - application type: client - access type: read & write - make a note of your “Consumer key” and “Consumer secret” - Activate your virtualenv if you are using virtualenv - pip install tweepy - pip install simplejson - Open a python shell - >>> import tweepy - >>> auth = tweepy.OAuthHandler(‘PASTE YOUR CONSUMER KEY’, ‘PASTE YOUR CONSUMER SECRET’) - >>> auth_url = auth.get_authorization_url() - >>> print ‘Please authorize: ‘ + auth_url - Keep your python shell open, and copy and paste the returned URL into a browser. You should see a Twitter page indicating that “An application would like to connect to your account”. Make sure you are logged into the Twitter account that you want to connect and then click the “Allow” button. - Since you setup your app as a client, Twitter will return to you a PIN. 
Make a note of it and go back to your open python shell - >>> auth.get_access_token(PASTE_YOUR_PIN) - >>> print “ACCESS_KEY = ‘%s'” % auth.access_token.key - Make a note of this ACCESS_KEY - >>> print “ACCESS_SECRET = ‘%s'” % auth.access_token.secret - Make a note of this ACCESS_SECRET STEP TWO: Sending a Tweet via your Twitter Account/App - Open a new python shell - >>> import tweepy - >>> auth = tweepy.OAuthHandler(‘PASTE_CONSUMER_KEY’, ‘PASTE_CONSUMER_SECRET’) - >>> auth.set_access_token(‘PASTE_ACCESS_KEY’, ‘PASTE_ACCESS_SECRET’) - >>> api = tweepy.API(auth) - >>> api.update_status(“Hello, brave new Twitter/OAuth world!”) If all goes well, you’ll see something like “<tweepy.models.Status object at 0x6c91aafb8a90>” in your command line and the following in your Twitter stream: You can now tuck the above code (steps 2-6) into a python function and tweet at will. The gotcha that gotme for 24+ hrs Try your test tweeting from a public, outside-the-home-firewall server. I kept getting “401 Unauthorized” errors when I tried connecting to Twitter from my home dev server, but was successful on my first try when I tried on my public, wantbox.com server. I suspect that the failed earlier solutions (python-twitter, oauth-python-twitter, twitter_app, etc…) would have worked if I tried them on a live server. I don’t know why I can’t get authenticated from home, but I’m happy it works in the wild. There you have it. The process took a couple years off my life, but I’m a better man because of it. If you have any questions, ask them in the comments. If I encountered and solved your particular problem I will respond. If I haven’t I will leave it to the community to do so. I love the smell of tweeting in the morning. 11 Responses to “OAuthcalypse Now: Tweeting with Twitter, OAuth and Django” I got hit too, but eventually solved matters by fetching the SVN head of python-twitter, which has OAuth support. (Poorly documented OAuth support, but support nonetheless.) Once I figured out the secrets of the new standalone OAuth script, which is analgous to the tweepy get_authorization_url() two-step you describe above, all my Twitter projects were back up and running in a couple of hours with minimal changes to the familiar old API. Worked fine from both home dev server and the production server. Great post, nice and fun reading. Thank you! 🙂 Do you know of a php / curl post method or script that can utilize OAuth – Our web postings also stopped last week and we have spent the last 24 hours searching and testing – trying to reconnect ??? We found many people with the same problem and no one having a solution at this point. Thank you very much! I am positive this saved me a ton of time. It’s not the firewall – OAuth is done over HTTP, so that would never be an issue. The reason it didn’t work from your local machine is because you have to enable access from every domain in the Twitter OAuth application settings. Log in at dev.twitter.com and go to “Manage Domains”,then add “localhost”. It’s part of the callback URL, so Twitter will reject it if it’s not added to the list of valid domains. Thanks for the feedback Rob, very helpful. on March 25th, 2011 at 6:30 am # […] new wants are added to Wantbox, I create a shortcode for the want and tweet it out. These tweets look like […] on March 25th, 2011 at 7:20 am # […] new wants are added to Wantbox, I create a shortcode for the want andtweet it out. These tweets look like […] Mitch – thank you very much. You short circuited what I expected to be hours of frustration. 
For some reason, it DID work for me from localhost without Rob’s advice about enabling localhost. i need to dispay mytimeline into my application,is there any recomends? I get this error raise TweepError(‘Failed to send request: %s’ % e) tweepy.error.TweepError: Failed to send request: [Errno 110] Connection timed out even when my internet seems to be at its best, any help?
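Pulling the pieces together: the calls from steps 2-6 above drop neatly into a single helper function. This is only a sketch, with the four constants standing in for the keys you noted down in step one:

# A small helper wrapping steps 2-6 above; the constants are placeholders
# for your own consumer and access keys.
import tweepy

CONSUMER_KEY = 'PASTE_CONSUMER_KEY'
CONSUMER_SECRET = 'PASTE_CONSUMER_SECRET'
ACCESS_KEY = 'PASTE_ACCESS_KEY'
ACCESS_SECRET = 'PASTE_ACCESS_SECRET'

def tweet(message):
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
    api = tweepy.API(auth)
    return api.update_status(message)

if __name__ == '__main__':
    tweet("Hello, brave new Twitter/OAuth world!")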
http://mitchfournier.com/2010/09/07/oauthcalypse-now-tweeting-with-twitter-oauth-and-django/
CC-MAIN-2018-30
en
refinedweb
In my last article about shrinking, I discussed the problems with basing shrinking on the type of the values to be shrunk. In writing it though I forgot that there was a halfway house which is also somewhat bad (but significantly less so) that you see in a couple of implementations.

This is when the shrinking is not type based, but still follows the classic shrinking API that takes a value and returns a lazy list of shrinks of that value. Examples of libraries that do this are theft and QuickTheories.

This works reasonably well and solves the major problems with type directed shrinking, but it's still somewhat fragile and importantly does not compose nearly as well as the approaches that Hypothesis or test.check take.

Ideally, as well as not being based on the types of the values being generated, shrinking should not be based on the actual values generated at all. This may seem counter-intuitive, but it actually works pretty well.

Rather than going into implementation details just yet, let's start with why this is important. Consider the example from the last post:

from hypothesis import given
from hypothesis.strategies import integers

even_numbers = integers().map(lambda x: x * 2)

@given(even_numbers)
def test_even_numbers_are_even(n):
    assert n % 2 == 0

We took a strategy and composed it with a function mapping over the values that that strategy produced to get a new strategy.

Suppose the Hypothesis strategy implementation looked something like the following:

class SearchStrategy(object):
    def generate(self, random):
        raise NotImplementedError()

    def shrink(self, value):
        return ()

i.e. we can generate a value and we can shrink a value that we've previously generated. By default we don't know how to generate values (subclasses have to implement that) and we can't shrink anything, which subclasses are able to fix if they want or leave as is if they're fine with that. (This is in fact how a very early implementation of it looked)

This is essentially the approach taken by theft or QuickTheories, and the problem with it is that under this implementation the 'map' function we used above is impossible to define in a way that preserves shrinking: In order to shrink a generated value, you need some way to invert the function you're composing with (which is in general impossible even if your language somehow exposed the facilities to do it, which it almost certainly doesn't) so you could take the generated value, map it back to the value that produced it, shrink that and then compose with the mapping function.

Hypothesis and test.check both support even more complicated composition of strategies (Hypothesis somewhat better than test.check - both support the same operations, but Hypothesis's underlying implementation works somewhat better for more complicated compositions), but even the simplest of compositions fails if you need to be able to shrink arbitrary values.

The key idea for fixing this is as follows: In order to shrink outputs it almost always suffices to shrink inputs.

Although in theory you can get functions where simpler input leads to more complicated output, in practice this seems to be rare enough that it's OK to just shrug and accept more complicated test output in those cases.

Given that, the way to shrink the output of a mapped strategy is to just shrink the value generated from the first strategy and feed it to the mapping function. Which means that you need an API that can support that sort of shrinking.

The way this works in test.check is that instead of generating a single value it generates an entire (lazy) tree of values with shrinks for them. See Reid Draper's article on the subject for slightly more detail. This supports mapping fairly easily: We just apply the mapping function to the rose tree - both the initial generated value, and all the shrunk child values.

Hypothesis's implementation is more complicated so will have to wait for another article, but the key idea behind it is that Hypothesis takes the "Shrinking outputs can be done by shrinking inputs" idea to its logical conclusion and has a single unified intermediate representation that all generation is based off. Strategies can provide hints about possibly useful shrinks to perform on that representation, but otherwise have very little control over the shrinking process at all. This supports mapping even more easily, because a strategy is just a function which takes an IR object and returns a value, so the mapped strategy just does the same thing and applies the mapping function.

Obviously I think Hypothesis's implementation is better, but test.check's implementation is entirely respectable too and is probably easier to copy right now if you're implementing a property based testing system from scratch. But I do think that whichever one you start from it's important to take away the key idea: You can shrink outputs by shrinking inputs, and strategies should compose in a way that preserves shrinking.

The result is significantly more convenient to use because it means that users will rarely or never have to write their own shrinking functions, and there are fewer possible places for shrinking and generation to get out of sync.
https://hypothesis.works/articles/compositional-shrinking/
CC-MAIN-2018-30
en
refinedweb
Supreme Court Judgments Subscribe Dr. Ashok Vs Union of India & Ors [1997] INSC 491 (2 May 1997) S.C. AGRAWAL, G.B. PATTANAIK ACT: HEADNOTE: WITH TRANSFER CASE (C) NOS.2 & 3 OF 1997 PATTANAIK. J. On the basis of a letter by one Dr. Ashok addressed to the Chief Justice of India indicating therein that several insecticides, colour additives, food additives are in widespread use in this country which have already been banned in several advanced countries as it has been found that those insecticides are carcinogenus, this Court treated the letter as a Petition under Article 32 of the Constitution and took up the matter as a public Interest litigation. Notices were issued to the Union of India through the Secretary. Ministry of Environment and Forest, through the Secretary, Ministry of Agriculture, through Secretary, Ministry of Industry & Chemicals as well as to pesticides Association of India through its Secretary Shri H.S. Bahl and the Asbestos Cement Products Manufacturers Association. The Annexure to the said letter contained 21 chemicals and additives and a prayer was made that the respondents should be directed to ban forthwith the import, production, distribution, sale and use of the listed chemicals and articles so that the citizens will not be exposed to the hazards which the aforesaid insecticides/additives are capable of being caused. It was alleged generally in the petition that food. water, air, drug and cosmetic contaminataion are the general results of the widespread use of the chemical have been banned in the united States of America and rest are in the process of being banned. Though initially the annexure to the letter contained only 21 items of insecticides and additives but by way of an application 19 other chemicals were added and thus in all the prayer of the petitioner is to prevent manufacture. production and use of 40 insecticides and/or additives. Counter-affidavits were filed on behalf of Secretary, pesticides Association, Madras. A supplementary affidavit was also filed on behalf of the Ministry of Environment and Forest. A further affidavit was also filed in August 1989 by the Deputy Director General of Health Services giving the available information on the listed chemicals as to the carsinogenicity status on the basis of research carried out by the Indian Council of Chemical Research. It was indicated in the said affidavit that the benefits accrued as a result of use of chemicals should be weighed against anticipated risk and whole issue be examined in totality before arriving at a conclusion. When the matter was heard on 24th September, 1996 this Court observed that there has been a time lag between the filing of the affidavits and the date of hearing of the petition and there is no material on record to indicate as to whether any further stops have been taken with regard to the control of use of these harmful pesticides and chemicals and whether any further study has been made in that regard. The Union of India was, therefore, granted time to file a further detailed affidavit clarifying the entire position. When the case was taken up for hearing on 5th November, 1996 it transpired that no further affidavit has been filed pursuant to the earlier direction and therefore, the Court was constrained to pass an order requiring the officers of different Ministries involved to be present in the Court on the next date of hearing and required affidavit should be filed. 
Pursuant to the aforesaid order of the Court an additional affidavit was filed by the Under Secretary to the Government of India, Ministry of Agriculture on 18th November, 1996 stating therein the steps taken by the Government of India in prohibiting manufacture, import and use of certain chemicals and in permitting restricted use of certain other chemicals and insecticides. To the aforesaid affidavit a Notification dated 26th May, 1989 was annexed as Annexure 1 which Notification indicates that the Government of India had set up an Expert Committee with a view to review continuance use in India of pesticides that are either banned or restricted for use in other countries. To the said additional affidavit also annexed a Notification dated 15th May, 1990 of the Ministry of Agriculture which Notification indicates that the Central Government after considering the recommendations of the Expert Committee and after consultation with the Registration Committee set up under the Insecticides Act 1968 cancelled the certificate of Registration in respect of Aaldrin, restricted the use of Dieldrin, for Locust Control in desert areas by plant Protection Adviser to the Government of India and restricted the use of Ethylene Dibromide as a Fumigant for Foodgrains through Central Government, State Government, Government Undertakings, and Government Organisation like Food Corporation of India and Others. To the said Additional Affidavit yet another Notification of the Ministry of Agriculture dated 20th September, 1986 was annexed as Annexure III which Notification prohibited the manufacture, import and use of Heptachlor and Chlordane and cancelled the Registration Certificate issued by the Registration Committee to Various Persons. It also prohibited the use of Alderin in India and cancelled the Registration Certificate issued under the insecticides Act. It further transpires the Government of India, Ministry of Agriculture by Notification dated 1st January, 1996 cancelling certificate of Registration in respect of Benzene Haxachloride with effect from 1st April, 1997, being of the opinion that the manufacture and use of Benzene haxachloride shall be phased out progressively and the production of its technical grade by the existing manufacturers reduced to the extent of 50 per cent by 31st March, 1996 an totally banned by 31st March, 1997. The Notification also indicated that the Certificate of Registration in respect of Benzene Haxachloride shall be deemed to have lapsed in respect of those registration in respect of Benzene Haxachloride shall be deemed to have lapsed in respect of those registrants who are yet to obtain manufacture licences. On behalf of the Ministry of Environment and Forest, the Director Ministry of Environment also filed an Additional Affidavit indicating the steps taken by the Environment Ministry Prohibiting import of Polychlorinated Biphenyls. Ministry of Health also filed an additional affidavit and Ministry of Petro- chemicals also filed an affidavit. When the case was taken up for hearing on 21st November, 1996 and these affidavits of different Ministries were placed it was noticed that the affidavits have dealt with 21 chemicals and additives which were listed in the original petition. But there has been no response in respect of 19 other chemicals and insecticides referred to in the additional list. 
It was also brought to the notice of the Court some Writ petitions have been filed by the manufacturers of certain chemicals challenging the Notification of the Government cancelling the Registration Certificate issued under the insecticides Act and Prohibiting the Manufacture with effect from 1st April, 1997. It was stated that a consolidated affidavit be filed by the Union of India in consultation with all the concerned Ministries in respect of 40 chemicals so that it would be easier to deal with the problem. In response to the aforesaid direction of the Court dated 27th November,1996 the Under Secretary to the Government of India in the Ministry of Agriculture has filed a consolidated affidavit dealing with 40 items of chemicals and the steps taken by the Government of India in the Concerned Ministries either prohibiting and/or allowing restricted manufacture, use of chemicals on a thorough study and on receipt of recommendations from the experts. On the basis of applications by manufactures, in respect of the writ Petitions pending in Allahabad High Court and Madras High Court orders were passed by this Court to get the cases transferred and those transferred petitions were also heard alongwith main Writ Petition. Chemicals, besides food, air and water, have always been part of man's environment in some measure. Even before the earliest civilizations or agriculture, the lightning flash caused oxygen and nitrogen of the air to combine, producing oxides of nitrogen and the said nitrogen dioxide eventually combined with water and oxygen to form nitrates that significantly enriched the soil. Volcanos contributed sulphur dioxide and particulates to the air just as fossil fuel burning power plants do today. But the total contribution of these sources was small and the earth was thinly populated. With the rise of civilizations; the sources of population increased day by day. Water polluted with lead from the pipes used in the Roman distribution system is postulated to have contributed to the decline of Rome. Miners and metal workers in the Middle Ages suffered occupational diseases from dusts and fumes generated in their trades. As early as in 1713 Ramazzini in his book "Diseases of Workers" has described the effects of many of these chemical pollutants on workers. When coal was introduced as a fuel the problem of pollution became much worse with combinations of fog and smoke in London becoming most famous. With the recognition of the deleterious effects of chemicals, especially in the Workplace, there began measure for the control of the release of these materials and the prevention of occupational diseases. The concentrations of many of these materials in the atmosphere were quit high. The scientists began research to find out the ways and means to reduce the contents of chemical in the atmosphere so as to check the health hazards. In 1945 Warren Cook of Switzerland published a list of the limits with abstracts of the information on which they were based. The United states Public Health Service established drinking water standards in 1946, Henry Smyth in 1956 reviewed the researches done in the field and proposed the name Threshold Limit Values for limiting air concentration for the working environment. 
The American conference of Governmental Industrial Hygienists every year compiled a list after annual review indicating the deleterious effect of Several Chemicals and pesticides on the human health and the said study is adopted by the occupational Safety and Health Administration of the Department of Labour as a Regulation. Until 1960 there was no legislation and it is only in 1960's the Clean Air Acts were passed in the United states. There has been constant research on the use of chemicals and pesticides and its effect on the human health in most of the advance countries and the industries also spend a substantial part of the money in establishing a research and development organisations. on the basis of experiments conducted and datas available the use of several chemicals and pesticides have been either totally banned or have been permitted to be used in a regulated manner depending upon the effect of such chemicals or pesticides on the human system. In all ages men faced difficulty in protecting their crops on the field from small animals and disease organisms. An insect, a field mouse, the spore of a fungus. or a tiny root-eating worm is more difficult to deal with. Since these small organisms reproduce rapidly, their total eating capacity is very great. Small pests may also be carriers of disease, Malaria and Yellow fever, spread by mosquitos, have killed more people than all wars. Not all insects, rodents, fungi, and soil microorganisms are pests. Most of them do not interfere with people, and many are directly helpful. Millions of small animals live within a single cubic meter of healthy soil. Most are necessary to the process of decay and hence to the recycling of nutrients. Fungi, too, are essential to the process of decay in all the world's ecosystems. pests have lived side by side with people for thousands of years. At times pest species have bloomed and brought disease and famine. But most of the time, natural balance has been maintained, and humans have lived together with insects in reasonable harmony. In modern times, people are no longer willing to accept these natural cycles. Human population is now so large that tremendous quantities of food are needed. One way to increase crop yields is to reduce competition from insects. Scientists studying a cabbage field in United States found 177 different species of insects of which only 5 species were significant pests. The agricultural system is subject to the normal checks and balances of a natural ecosystem. If left alone, pest species are usually dept under control by their enemies. According to an estimate insects at 10 per cent of the food crops in the United states in 1891 and at that time very few pesticides were being used. The pest populations were controlled by insect predators, parasites, and disease. But in the survey of 1970 it was found that the crop losses to insects rose to 13 per cent. The question, however, whether it is on account of chemical sprays or whether farmers would be better off if no pesticides were used at all still remains unanswered. There is no dispute that most chemical pesticides are poisonous to humans as well as to insects. The organophosphates which have been used extensively in North America since 1973 are much more poisonous than the DDT which was replaced by such organophosphates. Since mid- 1940s many thousands of people have fallen sick or have died from severe pesticide poisoning every year. 
At present more than half of these are children who are exposed to the toxic chemical through carelessness in packing or storage. Most of the others are workers who handle these materials in the factory or on farms. Even workers working in the factory where chemicals are manufactured bring the pesticide dust home on their clothes and they poison the family as well. In July 1975 the Allied chemical Company paid millions in damage suits and the plant was shut down. No amount of compensation paid in cash could make the people healthy again. People can avoid exposure to large doses of insecticides but it is impossible to avoid exposure to contaminants in food, in the air and in drinking water. Scientists in their anxiety to increase the production capacity of the soil and to prevent the food particles from various pests and insects have invented several insecticides which has caused deleterious effect on the human health. The broad spectrum pesticides have serious flaws. They upset ecosystem, poison people and animal and possibly cause cancer. on the basis of continued research in the field several other advance countries whereas in a developing country, like India, no effective measures have been taken so far while examining the affidavits filed in this court by different Ministries of the Government of India to find out what effective steps have been banned in other countries particularly when its deleterious effect on the human health is alarming, One thing is absolutely clear that in this country there has not been much study and research on the harmful effect of several such chemicals and pesticides. There is no coordinated organisation and the lack of coordination between different ministries of the government who deal with different chemicals and pesticides make the people of this country suffer. It may be true that several such insecticides and chemicals may be required in certain contingency when epidemics like Plague and dengue break. But that cannot be ground for allowing the industrialists to manufacturer such commodity when it is established that the use of the commodity is grossly detrimental to the human health. Take for example an insecticide called DDT. It acts as a nerve poison. Paralyzing insects. It has been used to control insects which destroy food and forage crops and to kill disease carrying insects, such as mosquitoes that carry malaria and yellow fever and lice that carry typhus. DDT is a residual poison that retains its effectiveness in a sprayed area for weeks, although it may persist in the area for years. It is harmless to most plants. The chemical was first prepared by Oothmar Zeidler, a German chemist in 1874. Its effectiveness was discovered and recognised by a Swiss scientist Paul Hermann Muller who won the Noble prize in 1984. it was used heavily in world War II, particularly in the mid and South-pacific theaters by spraying mosquito infected areas prior to invasion and occupation. The spray program continued after the war and was primarily responsible for eliminating malaria and yellow fever as major diseases. The said chemical, however, is toxic to people and animals. it accumulates in the bodies of animals that eat food contaminated with the substance. When dissolved in organic solvents. DDT can be absorbed through the skin. The chemical nature of DDT is not changed by process of metabolism, soil microorganisms or sun-light. 
It is dangerous to birds, to fish and other forms of aquatic life, Because of its potential danger to human health and its possible effect on several species its use has been totally banned in the United States of America by the Environmental Protection Agency since 1972. Soon thereafter the said insecticide has been banned in several other countries including Canada, Sweden and Denmark, But so far as India is concerned. It is now being produced only by M/s Hindustan insecticides Limited and the Director General of Health services on getting information about the quantity required by respective States for their Public health Programme puts it before the requirement Committee and only on the approval of the said Committee it is manufactured and sent to different States. Thus though it has not been fully banned but its manufacture and use has been controlled. We have taken the illustration with respect to one of the insecticides only for the purpose of indicating that several insecticides which have been banned in the advanced countries like America are still being permitted to be used in this country possibly because of certain necessity. Agriculture was the principal activity of Indians till Nineteenth Century and more than seventy per cent population were dependent on agriculture for their livelihood. In the twentieth Century the Country saw industrial revolution. The rural population started migrating from villages to urban and industrial towns. but yet agriculture holds the dominant position in Indian economy. The growing realisation of acute problem of population explosion in India necessitated the policy makers, planners to make vigorous efforts to optimise agricultural production. The idea of green revolution was floated and effective steps were taken to machanise the agricultural process and to modernise it by using fertilizers and spray in pesticides in order to achieve self sufficiency in food grains, commercial crops and other agricultural products. It was realised that endeavor should be made on war footing to boost agricultural production so as to fulfil the requirement of food for our teeming millions. One of the hurdles in boosting agricultural production was excessive loss and destruction of crops and foodgrains by insects and pests. A need was, therefore, felt to import and manufacture insecticides and pesticides to protect crops and plants from the damage of pests and insects. But the most dangerous crisis in the present day modern world is that of global atmospheric pollution. The eco system has become imbalanced by uncontrolled use. abuse and misuse of natural resources and manufacture and use of hazardous products and chemicals resulting in endangering the very existence of human race. The excessive use of chemicals and pesticides for optimising agricultural production created alarming danger to health and safety of living beings in general and agriculture workers in particular. The impact of pesticides use on global environment may vary in magnitude and exhibits a variety of behavioural patterns and modes of action. Pesticides affect man's ecosystem and their residues can get into the food chain. The amount of pesticide consumed by people depends on the manner of usage of pesticides particularly on farm crops, storage of the produce and its processing. In most of the developed countries the use of hard pesticides on agricultural crops has been either banned or restricted and other pest control programmes are adopted in order to maintain eco-system. 
But the developing countries are still using these pesticides without caring for side effects on environment. In recent times the Central Government has set up the pesticides Environment pollution Advisory committee in the Ministry of Agriculture to review from time to time the environmental repercussion and to suggest measures. Whenever necessary. It is a fact that pesticides considered hazardous in rich countries of the developing countries lack scientific facilities for toxicological scrutiny as also for making proper cost assessment. It is true that different countries may have different requirements but it is difficult and dangerous to assume that pesticides banned or restricted in USA or other European countries will be acceptable in the Third World countries. In India pesticides are use over the past four decades for crop protection and control of diseases like malaria. There has been much debate over the use of pesticides at the cost to weigh the benefits of use of pesticides and the adverse effect that is produced on human health on account of such use of pesticides. Right to Life enshrined in Article 21 means right to have something more than survival and not mere existence or animal existence. It includes all those aspects of life which go to make a man's life meaningful , complete and worth living. As has been stated by this court in Maneka Gandhi's case (1978) 1 Supreme Court Cases 248, in the case of Board of Trustees vs. Dilip (1993) 1 Supreme Court Cases 124 and in the case of Ramasharan vs. Union of India 1989 Supp. (1) Supreme court Cases 251, that it would include all that gives meaning to a man's life, for example, his tradition, culture, heritage and protection of that heritage in its full measure. In still recent cases this Court has given liberal interpretation to the word 'life' in Article 21. And in the case M.C. Mehta vs. Union of India & others (1987) 4 supreme Court Cases 463 while dealing with a public Interest petition relating to Ganga Water Pollution this Court has observed that life, public health and ecology have priority over problems of unemployment and loss of revenue. In the United Nations Conference on the Human Environment held at Stockholm in 1972 it was stated that the protection and improvement of human environment is a major issue which affects the well-being of people and economic development through out the world and it is the urgent desire of the people of whole world and the duty of all Governments. It was also stated:- " a better environment. To defend and improve the human environment for present and future generations has become an imperative goad for mankind a goal to be pursued together with, and in harmony with, the established and fundamental goals of peace and of world-wide economic and social development." What has been stated above in relation to the environmental hazards would apply with much greater force when it comes to health hazards. By giving an extended meaning to expression 'life' in Article 21 this court has brought health hazards due to pollution within it and so also the health hazards from use of harmful drugs. In the case of Vincent Panikuriangara vs. 
Union of India, 1987 (2) SCC 165, on a public Interest Petition seeking directions from this Court to ban import, manufacture, sale and distribution of certain drugs this Court had observed 'A healthy body is the very foundation for all human activities and in a welfare state it is the obligation of the state to ensure the creation and the sustaining of conditions congenial to good health' . The Court in the aforesaid case extracted a passage from the earlier judgment in Bandhua Munti Morcha vs. Union of India 1984 (3) SCC 161, which would be profitable to extract herein:- " huamane conditions of work an maternity relief. These are the minimum requirements which must exist in order to enable a person to live with human dignity. and no state neither the central Government has the right to take any action which will deprive a person of the enjoyment of these basic essentials". It was further observed: " The branch with which we are now dealing, namely, healthy care of citizens, is a problem with various facets. It involves an ever- changing challenge. There appears to be, as it were, a constant competition between nature (which can be said to be responsible for new ailments) on one side and human ingenuity engaged in research and finding out curative processes. This being the situation, the problem has an evershifting base. It is commonplace that what is considered to be the best medicine today for treatment of a particular disease becomes out of date and soon goes out of the market hitherto unknown diseases are noticed. To meet new challenges, new drugs have to be found. In this field, therefore, change appears to be the rule." It is necessary to examine the present problem arising out of use of pesticides and other chemicals which on account of its adverse effects on human health has already been banned in other advanced countries. On examining the counter-affidavits filed on behalf of the different Ministries of the Government it appears to us that though sufficient steps have been taken to either ban or to allow restrictive use of these insecticides but yet there is no co-ordinated effort and different Ministries of the Government of India are involved. It also further transpires that there has been no continuous effort to have research or to have minimum information about the adverse effects of the use of such pesticides and other chemicals as a result of which people at large of this country suffer to a great extent. As it is on account of lack of capacity of the people of the country to afford good and nutritious food. the average standard of human health is much below as compared to other advanced countries. In addition to that it insecticides and chemicals are permitted to be freely used in protecting the foodgrains and in increasing the agricultural production then that will bring insarmountable hazards to all those country-men who consume those food articles. To check these maladies what is essential for the Government of India is to have a co-ordinated and sustained effort. In this age of computerisation and inter-linking of the countries through internet it does not take more than a couple of minutes to gather the necessary information in respect o f any particular insecticide or pesticide and how such commodities have been dealt with in other advanced countries. What is really essential is a genuine will on the part of the Administrative machinery and a conjoined effort of all the ministries concerned. 
on the basis of the affidavits filed while we are satisfied that the different measures taken by the Central Government in totally prohibiting in some other cases are adequate step from the health hazards point of view and no further direction is necessary to be issued in respect of the 40 items of insecticides and chemicals identified in the petition filed. but we would direct that a Committee of Four senior officers from the four different Ministries involved should be constituted which committee should have deliberations atleast once in three months and take suitable measures in future in respect of any other insecticides and chemicals which is found to be hazardous for health. Such a Committee should be constituted by the Cabinet Secretary within two months from the date of the order and the said Committee may take the assistance of such technical experts as they think appropriate. We would accordingly dispose of this Writ petition with the aforesaid observation. In the two Transferred Cases. the notification date 1.1.1996 of the Central Government issued in exercise of powers under sub-section (2) of section 27 of the Insecticides Act, 1968 phasing out progressively the manufacture and use of Benzene Hexachloride and directing that the certificate of Registration in respect of Benzene Hexachloride issued to various firms shall be deemed to have been cancelled w.e.f 1st of April, 1997, has been challenged by the manufacturers inter alia on the ground that it is beyond the scope and powers of the Central Government under Section 27(2) of the Insecticides Act to issue such Notification. It is contended by Mr.C.S. Vaidyanathan, the learned senior counsel for the petitioner -M/S. Kanoria Chemicals and Industries Ltd. as well as MR. Jayant Das, learned senior counsel appearing for the petitioner in the other Transferred Case that consultation with Registration Committee being mandatory for exercise of power under Sub- Section (2) of Section 27(2) of the Act and there being no such consultation with the Registration Committee the issuance of the impugned Notification in purported exercise of power under section 27 (2) of the Act is vitiated and as such is liable to be stuck down. It is further contended that neither there has been any investigation of its own by the Central Government nor the Central Government could have been satisfied about the insecticides in question is likely to cause any risk which would enable the Central Government could have been satisfied about the insecticides in question is likely to cause any risk which would enable the Central Government to cancel the certificate of Registration and therefore. the inpugned Notification is invalid In law since the satisfaction is based upon non-existent material and as such the notification in question is liable to be struck down . Lastly, it is contended that in exercise of power under sub-section (2) of section 27 the certificate of Registration of any insecticide specified in sub-clause (iii) of clause (e) of section 3 or any specific batch thereof can be cancelled it the Central Government is of the opinion for reasons to be recorded in writing that the use of the said insecticide is likely to involve such risk to human beings or animals so as to render it expedient or necessary to take immediate action. 
Section 3 (e) (iii) deals with a preparation containing any one or more of the substances specified in the Schedule., The said power, therefore, cannot be exercised in respect to any substance specified in the schedule which in an insecticide within the meaning of section 3(e) (i). Benzene Hexachlordide being one of the substances in the Schedule issued under Section 3(e)(iii), and not a preparation containing any one or more of the substances as provided in section 3(e)(iii), the Central Government had no jurisdiction to issue the impugned Notification in purported exercise of power under section 27(2) of the Insecticides Act. In other words, what is contended by the counsel for the petitioners these Transferred cases is the power to prohibit or cancel the registration under section 27(2) is in respect of those preparations containing any one or more of such substances which are specified in the Schedule and which is consumer oriented ant the said power cannot be exercised in respect of any substance included in the Schedule by the parliament itself. Mr. Bhat. learned Addl. Solicitor General, on the other hand contended that in construing the provisions of the insecticides Act the Court must adopt a construction which would effectuate the objects of the statute instead of adopting a construction which would defeat its objects. According to t he learned Addl. Solicitor General a statute is designed to be workable and the interpretation thereof by a court should be to secure that object, unless crucial omission or clear direction makes that end unattainable, as was observed by Lord Dunedin in whitney v. Commissioners of inland Revenue (1925) 10 Tax Cas. 88.110 and was also accepted by Craies on Statute Law as well as by Maxwell on The Interpretation of Statutes, Tenth Edn., and bearing in mind the aforesaid principle the provisions of Section 27 of the Insecticides Act are to be construed, According to the learned Addl. Solicitor General the courts should lean against any construction which tends to reduce a statute to futility and the provisions of a statute must be so construed as to make it effective and operative, on the principle "ut res majis valeat quam periat". The learned counsel urged that it is the court's duty to make what it can of the Statute, knowing that the Statutes are meant to be operative and not inept and that nothing short of impossibility should allow a Court to declare a Statute unworkable. The learned Addl. Solicitor General contends that the Insecticides Act having been enacted to retulate the import, manufacture, sale, transport, distribution and use of insecticides with a view to prevent any risk to human beings or animals and the Central Government having been satisfied that the use of Benzene Hexachloride involves great risk to the human life. and on being so satisfied having issued the impugned Notification phasing out the manufacture of such insecticide an completely prohibiting the same w.e.f. 1.4.1997, this court should not set aside the Notification by interpreting the provisions of the Act which would have the effect of frustrating the object of the legislation itself. 
According to the learned Addl Solicitor General no doubt the words used in sub-section (2) of section 27 are not very clear but the expression " as a result of its own investigation" in sub-section (2) of Section 27 does not necessarily refer to an insecticide specified in sub-clause (iii) of Clause (e) of Section 3 as engrafted in sub-section (1) of Section 27 and on the other hand it is wide enough to include any insecticide under Section 3(e) including a substance specified in the Schedule and such a construction alone would subserve the object of the Act. The learned Addl. Solicitor General also urged that when the power under sub-section (2) of Section 27 authorises the Central Government to issue an order refusing to register the insecticide it would obviously mean that the said power could be exercised even prior to the registration of the insecticide in question, whereas the power under Section 27(1) can be exercised only after an insecticide in question, whereas the power under Section 27(1) can be exercised only after an insecticide has been registered and, therefore. Section 27(2) does not necessarily refer to section 27(1) as contended by the learned counsel appearing for the petitioner. So far as the question of lack of consultation with the Registration Committee is concerned, the learned Addl. Solicitor General contended that the Notification which was issued in December 1994 itself indicates that the Central Government had due consultation with the Registration Committee and as such it was not necessary to have further consultation with the said Committee before issuance of Notification on 1st of January, 1996. According to the learned Addl. Solicitor General when Benzene Hexachloride has already been banned in several other countries in the world because of its effect on the human life, the Central Government has totally banned its production w.e.f. 31st of March, 1997, having decided to phase out the production progressively and any intereference with the said order will be against the society at large. Before examining rival contentions with regard to the power of the Central Government under the insecticides Act to cancel Certificate of Registration it would be appropriate for us to find out as to what is Benzene Hexachloride and what are its effect on the human beings and the environment and to what extent it has actually been banned in other countries. Benzene Hexachloride (BHC) is formed by the reaction of chlorine with benzene in the presence of light. It is also called 1, 2, 3, 4, 5, 6- HEXACHLOROCYCLOHEXANE, namely, any one of several isometic compounds: one of these isomers is an insecticide called Gammexane. It was first prepared in 1825 and the insecticidal properties were identified in 1944 with the y-isomer, which is about 1,000 times more toxics than any of the other isomers formed in the reaction. The chemical addition of chlorine to benzene produces a mixture containing at least six of the eight possible isomers of BHC. BHC has a faster but less protracted action upon insects. It use had declined by the 1960s because of competition from other insecticides and its effects on fishes. (See - The New Encyclopaedia Britannica - Volume 2, Page - 115). Benzene Hexachloride, otherwise known as BHC is an insecticide specified in the Schedule to the insecticide Act, 1968 and is different from its formulations which would also be an insecticide within the meaning of Section 3(e)(iii) of the said Insecticides Act. 
BHC is not used as such by farmer or consumer though its different formulations or preparations containing different concentrations of BHC are use in agricultural pest control, crop protection operation as well as in public health for control of diseases like malaria, dengu and plague. In the Tripathi Committee Report which was constituted to review the continued use of DDT and BHC in the country in the light of their hazard to human health and environment pursuant to the earlier observations of the Banerjee Committee Report in 1986, it has been stated as follows: 1. In a large number of countries the use of BHC has been banned/withdrawn or severely restricted mainly due to bioaccumulation of residue and its associated environmental hazards. 2. BHC is bioeffective against pest complex of rice, sugarcane, sorghum and pigeonpea. Its dust has also been proved bioeffective for locust control. 3. It still continues to be effective in controlling vectors of malaria. 4. The residue of BHC in soil of USA persists as long as ten years. However, in other comparative studies between 1977 and 1988 the residue has been decreased from 5.64 ppm to 0.06 ppm against studies of Indian soils has shown a half life of only 4 months. 5. Residues of BHC in water were found in a range of 1.07 to 81.23 mg/litre, in studies conducted during 1985 to 1987. Ganga water was reported to be contaminated with BHC residue in the range of 2.5 to 639 nanogram per litre during 1986 to 1989k. 6. Reported quantum of 17.66 to 40.90 ppm of residues in rice is highest and for potatoes the quantities were below tolerance limit. It is low in rabi crops and nil in sugarcane. 7. Residue of BHC in Indian Vegetable found to be higher than permissible limit as per PFA (8.0) PPM) 8. The residue of BHC in vegetable oils and oilseeds ranged between 0.2 to 6.2 ppm, which showed a declining trend. 9. Milk and milk products are contaminated with residues of BHC. 10. Meat, chicken, fish and egg are also contaminated with BHC residue. 11. There are reports of accumulation of BHC residues in human adipose tissue and blood. 12. Animal feed as well as animal products do contain BHC residues and there is an increasing trend. 13. Sub-chronic and long term toxicity studies show storage of BHC in body tissues and steroidiogenic inhibition. 14. Studies on reproduction indicates its effect on reproduction leading to impaired reproductive function. 15. In some studies BHC is found to be mutagenic. 16. BHC has been shown to be carcinogenic to mice and rats in one study and in mice in another two studies. But it has been shown not to be carcinogenic to rats and hamstars in one study. BHC has been classified by IARC into Group 2 B i.e. probable carcinogenic to human. 17. BHC has been shown to produce immunological changes. 18. In human studies accidental long term dietary exposure of BHC resulted in epidemic of porphyria, hyper pigmentation and neurotoxicity. Thus, though it is of great use in control of malaria but its adverse effect on human health is no less particularly when it has already shown to be caioinogenic to mice and rats and even scientists are of the opinion that it is probable carcinogenic to human beings. The Certificate of Registration granted in favour of petitioners which are available on record indicates that is was for formulation namely BHC 10% DP, BHC 50% WP as well as BHC technical. 
Coming to the question of power of the Central Government under the Insecticides Act and rival contention of the parties in this Court as noticed earlier, it would be appropriate for us to notice some of the provisions of the Act. Section 3(e) defines 'insecticide' to mean that: 3; Section 4 contemplates constitution of a Board called Central Insecticides Board whose duty is to advise the Central Government and the State Government on technical matters arising out of the administration of the Act as well as to carry out the other functions assigned to the Board under the Act, Section 5 stipulates constitution of a Registration Committee which Committee is empowered to regulate its own procedure for conduct of business to be transacted by it. Section 9 provides for registration of insecticides. Under sub-section (1) of section 9 a person desirous of importing or manufacturing any insecticide is required to make an application to the Registration Committee for the Registration of such insecticide. Under sub-section (1) of section 9 a person desirous of importing or manufacturing any insecticide is required to make an application to the Registration Committee for the registration of such insecticide. Under sub-section (3) of Section 9 the Registration Committee is required to hold such enquiry as it deems fit and on being satisfied about the efficacy and safety of the insecticide to human beings and animals register the same. Second proviso to sub- section (3) of section 9 confers power on the Committee to refuse to register the insecticide. Section 10 provides for an appeal against the decision of the Registration Committee to the Central Government against non-registration. Section 11 is the sub moto power of the Central Government in exercise of which power the Government can call for the record of the Registration Committee in respect of any case for the purpose of satisfying itself as to the legality or propriety of the of the decision. Section 13 is the power to grant licence and any person desirous of manufacturing or selling or exhibiting for sale or distributing any insecticide is bound to have a licence under Section 13. Section 14 is the power of the licensing officer to revoke. suspend or amend the licence issued under Section 13. Section 17 is the prohibition for import as well as manufacture of certain insecticides. Section 26 is the power of the state Government to require any person or class of persons to report occurence of poisioning through the use or handling of any insecticide coming within his cognizance. Section 27 the interpretation of which comes up for our consideration in the case in hand contains the power of the Central Government in purported exercise of which the impugned notifications have been issued. Since the same provision requires the consideration of this Court the same is extracted hereinbelow in extenso: 27. Prohibition sale. etc. of insecticides for reasons of public safety.-(1) If on receipt of a report under section 26 or otherwise, the Central Government or the State Government is of opinion, for reasons to be recorded in writing, that the use of any insecticide specified in sub-clause (ii) of clause (e) of section 3 or any specific batch thereof is likely to involve such risk to human beings or animals as to render it expedient or necessary to take immediate action than that Government may, by notification in the official Gazette, prohibit the sale, distribution or use of the insecticide or batch. 
In such area, to such extend and such period (not exceeding sixty days) as may be specified in the notification pending investigation into the matter: Provided that where the investigation is not completed within the said period. the central Government or the State Government, as the case my be, may extend it by such further period or periods not exceed in thirty days in the aggregate as it may specify in alikelling the certificate of registration, if any, granted in respect thereof), as it deems fit, depending on the circumstances of the case." Section 36 is the rule making power of the Central Government. An examination of the aforesaid provisions of the Act indicates that before registering a particular insecticide the Registration Committee is duty bound to hold such enquiry as it deems fit for satisfying itself that the insecticide to which the application relates is safe to human beings and animals. Coming now to the core question namely whether under Section 27 of the Act the central Government can cancel the Certificate of Registration in respect of an insecticide. It appears to us that under sub- section (1) of section 27 when the Central Government or the State Government is of the opinion that the use of any insecticide specified in sub-clause (iii) of clause (e) of section 3 or any specific batch thereof is likely to involve risk to human beings or animals and it is necessary to take immediate action then on recording reasons in writing the sale. distribution or use of the insecticide or batch can be prohibited in such area. to such extent not exceeding 60 days as may be specified in the notification pending investigation into the matter. In other words, In respect o an insecticide within the meaning of section 3(e) ((iii) i.e. a preparation or formulation containing anyone or more of such substances specified in the schedule. the appropriate Government can immediately by issue of notification prohibit the sale. distribution or use of the same pending investigation. Under the proviso to subsection (1) of section 27. if the investigation is not completed within the period of 60 days then the prohibition in question could be extended for such further period not exceeding 30 days in the aggregate. Under sub-section (2) if the Central Government on the basis of its own investigation or on receipt of the report from the state Government and after consultation with the Registration Committee is satisfied that the use of the said insecticide or batch is or is not likely to cause any such risk then it may pass such order as it deems fit depending upon the circumstances of the case. either refusing to register the insecticide or cancel the Certificate of Registration. If already granted. The use of the word said insecticide in sub-section (2) obviously refers to the insecticide in question which was the subject matter of consideration under sub-section (1) and in respect of which pending further investigation into the matter the Central Government has already issued a prohibition for sale, distribution or use of the insecticide in question. Therefore, the power of cancellation of Certificate of Registration conferred upon the Central Government under sub-section (2) of Section 27 can be exercised only in respect of any insecticide specified in sub-clause (iii) of clause (e) of section 3 i.e. 
a preparation or formulation of one or more of the substances specified in the schedule but the said power cannot be exercised in respect of an insecticide which is specified in the schedule itself by the Parliament. We are unable to accept the agreements advanced by the learned Additional Solicitor General that sub-section (2) of section 27 is not restricted to an insecticide in respect of which the Central Government has already issued a notification prohibiting the sale. distribution or use pending investigation into the matter. The Scheme of sub-section (1) and sub-section (2) of section 27 is that in respect of a formulation which is also an insecticide within the meaning of section 3 (e) (iii) the Central Government for reasons to be recorded in writing and pending investigation into the matter can immediately prohibit sale. distribution or use and after further investigation can cancel the Certificate of Registration in respect thereof under sub-section (2) of Section 27. That being the position in exercise of such power under sub- section (2) of section 27 a certificate of Registration in respect of an insecticide under sub-section 3(e) (i) cannot be cancelled under sub-section (2) of section 27. This is also in consonance with the logic that an insecticide which is the formulation of any one or more of the substances specified in the schedule and is consumer oriented power of cancellation of registration certainly has been conferred upon the central Government but in respect of an insecticide which does not come to a consumer and is a substance specified in the schedule itself and therefore an insecticide under section 3(e) (i), the power has not been conferred upon the Central Government since the specified substance in the schedule has been specified by the Parliament itself. In view of the aforesaid conclusion of ours we would hold that those of the Certificates of Registration granted to the petitioner in respect of any formulations namely BHC 10% WP, the order of the Central Government cancelling Certificate of Registration is well within the jurisdiction and there is no legal infirmity in the same. But in respect of Benzene Hexachloride which is one of the substances specified in the schedule and as such is an insecticide within the meaning of section 3 (e)(i) there is no power with the Central Government under sub- section (2) of section 27 to cancel the Certificate of Registration. So far as the contention of Mr. Vaidyanathan, the learned senior counsel appearing for the petitioners in the transferred case that consultation with the Registration committee is a pre-condition for exercise of power under sub-section (2) and such consultation being not there. the issuance of notification is bad we are of the considered opinion that undoubtedly before the power under sub-section (2) of section 27 can be exercised the central Government is duty bound to have consultation with the Registration Committee. But in the case in hand having examined the counter-affidavits filed on behalf of the different Ministries of the Central Government that there has been due and substantial consultation with the Registration Committee which is apparent in the notification of December 1994 itself. and since then there has been further study into the matter and committees of experts have been constituted who have gone into the matter and on the basis of the reports submitted by such expert committee ultimately the Central Government has taken the final decision. 
It is not possible for us to hold that there has been no consultation with the Registration Committee before exercising of power under sub- section (2) of section 27. Contention of Mr. Vaidyanathan. the learned senior counsel on this score. therefor, must be rejected. Before we part with this case. and having examined the different provisions of the Insecticides Act. 1968 we find that once a substance is specified in the schedule as contemplated under Section 3(e)(i) then there is no power for cancelling the registration certificate issued in respect of the same substance even if on scientific study it appears that the substance in question is grossly detrimental to the human health. This is a lacuna in the legislation itself. and therefore, steps should be taken for appropriate amendment to the legislation. In the net result, therefore, writ petition is disposed of with the observations made earlier and the transferred cases are allowed to the extent indicated above. There will be no order as to costs. Back
http://www.advocatekhoj.com/library/judgments/index.php?go=1997/may/7.php
CC-MAIN-2018-30
en
refinedweb
Struts Custom Validation Example

In this example we will see how to do custom validation in Struts. To perform custom validation we extend the UserForm from org.apache.struts.validator.ValidatorForm. The UserForm contains two fields, one for the phone number and the other for the mobile number. Let's take the scenario where either the phone number or the mobile number is mandatory. If the validation takes into account only one field at a time, then the Struts Validation Framework does an excellent job. But what if we need to consider more than one form field to validate the data? In this case we go for custom validation. The UserForm class contains the following code.

public class UserForm extends org.apache.struts.validator.ValidatorForm {

    private String phoneNumber;
    private String mobileNumber;

    public String getPhoneNumber() {
        return phoneNumber;
    }

    public void setPhoneNumber(String phoneNumber) {
        this.phoneNumber = phoneNumber;
    }

    public String getMobileNumber() {
        return mobileNumber;
    }

    public void setMobileNumber(String mobileNumber) {
        this.mobileNumber = mobileNumber;
    }

    public UserForm() {
        super();
    }

    public ActionErrors validate(ActionMapping mapping, HttpServletRequest request) {
        ActionErrors errors = super.validate(mapping, request);
        if ((getPhoneNumber() == null || getPhoneNumber().length() < 1)
                && (getMobileNumber() == null || getMobileNumber().length() < 1)) {
            errors.add("phoneNumber", new ActionMessage("error.phoneNumber.required"));
        }
        return errors;
    }
}

We override the validate method in the UserForm to perform custom validation. First we call the superclass validate method in order to keep any errors already returned by the ValidatorForm. Then we check whether the mobile number or the phone number is present. If neither is present, we create a new ActionMessage and add it to the ActionErrors object. After performing all the validations we return the ActionErrors object. If any errors are present, the user will be forwarded to the input page (in our case it's the user.jsp page).

The following message should be configured in the ApplicationResource.properties file. If neither the phone number nor the mobile number is entered by the user, then this error message will be displayed.

error.phoneNumber.required = Either Phone number or Mobile number is required.

On running this sample custom validation example the following page is displayed. The user needs to enter either the phone number or the mobile number to submit the form successfully. When the user clicks the submit button without entering either the phone number or the mobile number, then the above error message is displayed. You can download the source code of the custom validation example by clicking on the Download link below.
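For completeness, the form bean and action also need to be wired up in struts-config.xml so that Struts calls validate() and knows which page to return to when validation fails. The snippet below is only a minimal sketch: the action path, class names and JSP locations are assumptions made for this illustration and are not taken from the article's downloadable source.

<struts-config>
    <form-beans>
        <form-bean name="UserForm" type="com.example.UserForm" />
    </form-beans>

    <action-mappings>
        <!-- validate="true" triggers UserForm.validate(); "input" is the page
             the user is sent back to when ActionErrors are returned -->
        <action path="/userSubmit"
                type="com.example.UserAction"
                name="UserForm"
                scope="request"
                validate="true"
                input="/user.jsp">
            <forward name="success" path="/success.jsp" />
        </action>
    </action-mappings>

    <message-resources parameter="ApplicationResource" />
</struts-config>

On the JSP side, the standard Struts html taglib's <html:errors/> tag is the usual way to render the error.phoneNumber.required message next to the form.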
https://dzone.com/tutorials/java/struts/struts-example/struts-custom-validation-example-1.html
CC-MAIN-2018-30
en
refinedweb
Edit: Point #2 of this post has been revised to be more understandable (and creepier) from a reader's perspective. Thank you to the user on dev.to who emailed me about the previous confusion!

A lot of us have fallen in love with the React library for several reasons. It can be incredibly painless to create complex interactive user interfaces. The greatest part of it all is being able to compose components right on top of one another without breaking other composed components. And it's amazing that even social media giants like Facebook, Instagram and Pinterest make heavy use of them while creating a seamless user experience alongside huge APIs like Google Maps.

If you're currently building an application using React or thinking of using React for upcoming projects, then this tutorial is for you. I hope this tutorial will help you on your journey to make great React applications too by exposing a few code implementations that you ought to think twice about.

Without further ado, here are 8 Practices In React That Will Crash Your App In The Future:

1. Declaring Default Parameters Over Null

I mentioned this topic in an earlier article, but this is one of those creepy "gotchas" that can fool a careless developer on a gloomy Friday! After all, apps crashing is not a joke: any type of crash can result in money loss at any point in time if not handled correctly. I was once guilty of spending a good amount of time debugging something similar to this:

import React, { useState } from 'react'

const SomeComponent = ({ items = [], todaysDate, tomorrowsDate }) => {
  const [someState, setSomeState] = useState(null)

  return (
    <div>
      <h2>Today is {todaysDate}</h2>
      <small>And tomorrow is {tomorrowsDate}</small>
      <hr />
      {items.map((item, index) => (
        <span key={`item_${index}`}>{item.email}</span>
      ))}
    </div>
  )
}

const App = ({ dates, ...otherProps }) => {
  // items starts out as null, and only becomes an array when dates is truthy
  let items = null

  if (dates) {
    items = dates.map((d) => new Date(d).toLocaleDateString())
  }

  return (
    <div>
      <SomeComponent {...otherProps} items={items} />
    </div>
  )
}

Inside our App component, if dates ends up being falsey, items will be initialized with null. If you're like me, our instincts tell us that items should be initialized to an empty array by default if it was given a falsey value. But our app will crash when dates is falsey because items is null. What? Default function parameters allow named parameters to become initialized with default values only if no value or undefined is passed! In our case, even though null is falsey, it's still a value! So the next time you set a default value to null, just make sure to think twice when you do that. You can simply initialize a value to an empty array if that is the expected type of the value.

2. Grabbing Properties With Square Brackets

Sometimes the way properties are being grabbed may influence the behavior of the app. If you're wondering what that behavior is, it's the app crashing. Here is an example of performing object lookups with square brackets:

const someFunction = function () {
  const store = {
    people: {
      joe: {
        age: 16,
        gender: 'boy',
      },
      bob: {
        age: 14,
        gender: 'transgender',
      },
    },
  }

  return {
    getPersonsProfile(name) {
      return store.people[name]
    },
    foods: ['apple', 'pineapple'],
  }
}

const obj = someFunction()
const joesProfile = obj.getPersonsProfile('joe')

console.log(joesProfile)
/*
  result:
  {
    age: 16,
    gender: 'boy',
  }
*/

These are actually 100% valid use cases and there's nothing really wrong with them besides being slower than plain object key lookups.
2. Grabbing Properties With Square Brackets

Sometimes the way properties are being grabbed may influence the behavior of the app. If you're wondering what that behavior is, it's the app crashing. Here is an example of performing object lookups with square brackets:

const someFunction = function() {
  const store = {
    people: {
      joe: {
        age: 16,
        gender: 'boy',
      },
      bob: {
        age: 14,
        gender: 'transgender',
      },
    },
  }

  return {
    getPersonsProfile(name) {
      return store.people[name]
    },
    foods: ['apple', 'pineapple'],
  }
}

const obj = someFunction()
const joesProfile = obj.getPersonsProfile('joe')

console.log(joesProfile)
/*
  result:
    {
      age: 16,
      gender: 'boy',
    }
*/

These are actually 100% valid use cases and there's nothing really wrong with them besides being slower than object key lookups.

Anyhow, the real problem starts to creep up on your app when an unintentional issue occurs, like a tiny typo:

const someFunction = function () {
  const store = {
    people: {
      joe: {
        age: 16,
        gender: 'boy',
      },
      bob: {
        age: 14,
        gender: 'transgender',
      },
    },
  }

  return {
    getPersonsProfile(name) {
      return store.people[name]
    },
    foods: ['apple', 'pineapple'],
  }
}

const obj = someFunction()
const joesProfile = obj.getPersonsProfile('Joe')
const joesAge = joesProfile.age

console.log(joesAge)

If you or one of your teammates were implementing some enhancement to this snippet and made a minor mistake (such as capitalizing the J in joe), the lookup will immediately return undefined, and a crash will occur:

"TypeError: Cannot read property 'age' of undefined at tibeweragi.js:24:29"

The creepy part is that the app will not crash until a part of your code attempts to do a property lookup on that undefined value! So in the meantime, joe's profile (undefined in disguise) will be passed around your app, and no one will be able to tell that this hidden bug is creeping around until a piece of code performs some property lookup, like joesProfile.age, because joesProfile is undefined!

What some developers do to avoid a crash is to initialize some default, valid return value if a lookup ends up becoming unsuccessful:

const someFunction = function () {
  const store = {
    people: {
      joe: {
        age: 16,
        gender: 'boy',
      },
      bob: {
        age: 14,
        gender: 'transgender',
      },
    },
  }

  return {
    getPersonsProfile(name) {
      return store.people[name] || {}
    },
    foods: ['apple', 'pineapple'],
  }
}

At least now the app won't crash. The moral of the story is: always handle the invalid lookup case when you're applying lookups with square bracket notation!

For some, it might be a little hard to grasp the severity of this practice without a real-world example, so I'm going to bring one up. The code example I am about to show you was taken from a repository that dates eight months back from today. To protect the privacy of the code's origin, I renamed almost every variable, but the code design, syntax and architecture stayed exactly the same:

import { createSelector } from 'reselect'

// supports passing in the whole obj or just the string to correct the video type
const fixVideoTypeNaming = (videoType) => {
  let video = videoType

  // If video is a video object
  if (video && typeof video === 'object') {
    const media = { ...video }
    video = media.videoType
  }

  // If video is the actual videoType string
  if (typeof video === 'string') {
    // fix the typo because brian is an idiot
    if (video === 'mp3') {
      video = 'mp4'
    }
  }

  return video
}

/* -------------------------------------------------------
  ---- Pre-selectors
-------------------------------------------------------- */

/* -------------------------------------------------------
  ---- Selectors
-------------------------------------------------------- */

export const getWeeklyCycleSelector = createSelector(
  getSpecificWeekSelector,
  (weekCycle) => weekCycle || null,
)

export const getFetchingTotalStatusSelector = createSelector(
  (state) =>
    state.app[fixVideoTypeNaming(state.app.media.video.videoType)].options.total
      .fetching,
  (fetching) => fetching,
)

export const getFetchErrorSelector = createSelector(
  (state) =>
    state.app[fixVideoTypeNaming(state.app.media.video.videoType)].options.total
      .fetchError,
  (fetchError) => fetchError,
)

fixVideoTypeNaming is a function that extracts the video type based on the value passed in as an argument.
If the argument is a video object, it will extract the video type from the .videoType property. If it is a string, then the caller passed in the videoType itself, so the first step can be skipped.

Someone had found that the .mp4 videoType had been misspelled in several areas of the app. As a quick, temporary fix around the issue, fixVideoTypeNaming was used to patch that typo.

Now, as some of you might have guessed, the app was built with redux (hence the syntax). To use these selectors, you would import them into a connect higher-order component to attach a component that listens to that slice of the state:

const withTotalCount = (WrappedComponent) => {
  class WithTotalCountContainer extends React.Component {
    componentDidMount = () => {
      const { total, dispatch } = this.props
      if (total == null) {
        dispatch(fetchTotalVideoTypeCount())
      }
    }

    render() {
      return <WrappedComponent {...this.props} />
    }
  }

  WithTotalCountContainer.propTypes = {
    fetching: PropTypes.bool.isRequired,
    total: PropTypes.number,
    fetchError: PropTypes.object,
    dispatch: PropTypes.func.isRequired,
  }

  WithTotalCountContainer.displayName = `withTotalCount(${getDisplayName(
    WrappedComponent,
  )})`

  return connect((state) => {
    const videoType = fixVideoTypeNaming(state.app.media.video.videoType)
    const { fetching, total, fetchError } = state.app.media.video[
      videoType
    ].options.total
    return { fetching, total, fetchError }
  })(WithTotalCountContainer)
}

UI Component:

const TotalVideoCount = ({ classes, total, fetching, fetchError }) => {
  if (fetching) return <LoadingSpinner />
  const hasResults = !!total
  const noResults = fetched && !total
  const errorOccurred = !!fetchError

  return (
    <Typography
      variant="h3"
      className={classes.root}
      error={!!fetched && !!fetchError}
      primary={hasResults}
      soft={noResults || errorOccurred}
      center
    >
      {noResults && 'No Results'}
      {hasResults && `$${formatTotal(total)}`}
      {errorOccurred && 'An error occurred.'}
    </Typography>
  )
}

The component receives all of the props that the HOC passes to it and displays information according to the conditions derived from those props. In a perfect world, this would be fine. In a non-perfect world, this would only temporarily be fine. If we go back to the container and look at the way the selectors are selecting their values, we might actually have planted a ticking time bomb waiting for an open opportunity to attack.

When developing any sort of application, a common practice for gaining confidence and reducing bugs during the development flow is to implement tests along the way, to ensure that the application is working as intended. In the case of these code snippets, however, if they aren't tested, the app will crash in the future if the issue is not handled early.

For one, state.app.media.video.videoType is four levels deep in the chain. What if another developer accidentally made a mistake while fixing another part of the app, and state.app.media.video became undefined? The app will crash because it can't read the property videoType of undefined.

In addition, if there was another typo in a videoType and fixVideoTypeNaming wasn't updated to accommodate it along with the mp3 issue, the app risks another unintentional crash that no one would be able to detect until a real user comes across it. And by that time, it would be too late. It's never a good practice to assume that the app will never come across bugs like these. Please be careful!
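One way to defuse that kind of deep lookup, shown here as a hypothetical defensive sketch rather than the original repository's code, is to bail out with a sensible fallback as soon as an intermediate object is missing; with a newer JavaScript toolchain, optional chaining expresses the same idea more tersely:

// Hypothetical defensive rewrite of the selector input shown above.
const selectFetchingTotal = (state) => {
  const video = state.app && state.app.media && state.app.media.video
  if (!video) return false // fallback instead of a crash
  const videoType = fixVideoTypeNaming(video.videoType)
  const slice = state.app[videoType]
  return !!(slice && slice.options && slice.options.total && slice.options.total.fetching)
}

// The same idea with optional chaining, if your toolchain supports it:
// const fetching = state.app?.[videoType]?.options?.total?.fetching ?? false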
3. Carelessly Checking Empty Objects When Rendering

Something I used to do long ago, in the golden days of conditionally rendering components, was to check whether data had been populated in an object using Object.keys. If there was data, the component would continue to render when the condition passed:

const SomeComponent = ({ children, items = {}, isVisible }) => (
  <div>
    {Object.keys(items).length ? (
      <DataTable items={items} />
    ) : (
      <h2>Data has not been received</h2>
    )}
  </div>
)

Let's pretend that we called some API and received items as an object somewhere in the response. With that said, this may seem perfectly fine at first. The expected type of items is an object, so it would seem perfectly fine to use Object.keys with it. After all, we did initialize items to an empty object as a defense mechanism in case a bug ever turned it into a falsey value.

But we shouldn't trust the server to always return the same structure. What if items became an array in the future? Object.keys(items) would not crash, but it would return a weird output like ["0", "1", "2"]. How do you think the components being rendered with that data will react?

But that's not even the worst part. The worst part of the snippet is that if items is received as a null value in the props, then items will not even be initialized to the default value you provided! And then your app will crash before it gets to do anything else:

"TypeError: Cannot convert undefined or null to object at Function.keys (<anonymous>) at yazeyafabu.js:4:45"

Again, please be careful!

4. Carelessly Checking If Arrays Exist Before Rendering

This is a very similar situation to #3, but arrays and objects are used interchangeably often enough that they deserve their own sections. If you have a habit of doing this:

render() {
  const { arr } = this.props
  return (
    <div>
      {arr && arr.map()...}
    </div>
  )
}

then make sure you at least have unit tests to keep an eye on that code at all times, or handle arr correctly early on before passing it to the render method. Otherwise the app will crash if arr becomes an object literal: the && operator will consider it truthy and attempt to .map the object literal, which will end up crashing the entire app. So please keep this in mind, and save your energy and frustration for bigger problems that deserve more of your special attention! ;)

5. Not Using a Linter

If you aren't using any type of linter while you're developing apps, or you simply don't know what they are, allow me to elaborate a little on why they are useful in development. The linter I use in my development flow is ESLint, a well-known linting tool for JavaScript that allows developers to discover problems with their code without even executing it.

This tool is so useful that it can act as a semi-mentor, correcting your mistakes in real time--as if someone were mentoring you. It even describes why your code can be bad and suggests what you should replace it with!

The coolest thing about ESLint is that if you don't like certain rules, or don't agree with some of them, you can simply disable those rules so that they no longer show up as linting warnings or errors as you're developing. Whatever makes you happy, right?
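To illustrate that kind of per-rule tuning, here is a hypothetical configuration sketch (not one prescribed by the post): a project-level .eslintrc.js can extend a recommended rule set and then downgrade or disable the few rules you disagree with.

// .eslintrc.js -- a minimal sketch; adjust the rule choices to your own taste.
module.exports = {
  extends: ['eslint:recommended', 'plugin:react/recommended'],
  parserOptions: {
    ecmaVersion: 2018,
    sourceType: 'module',
    ecmaFeatures: { jsx: true },
  },
  env: { browser: true, es6: true },
  rules: {
    'no-console': 'warn',      // keep it visible, but don't treat it as an error
    'react/prop-types': 'off', // example of silencing a rule you disagree with
  },
}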
6. Destructuring When Rendering Lists

I've seen this happen to several people in the past, and it isn't always an easy bug to detect. Basically, when you have a list of items and you render a component for each one in the list, the bug that can creep up on your app is this: if there comes a time in the future when one of the items in the list is not the value you expect it to be, your app may crash if it doesn't know how to handle the value type. Here's an example:

const api = {
  async getTotalFrogs() {
    return {
      data: {
        result: [
          { name: 'bob the frog', tongueWidth: 50, weight: 8 },
          { name: 'joe the other frog', tongueWidth: 40, weight: 5 },
          { name: 'kelly the last frog', tongueWidth: 20, weight: 2 },
        ],
      },
    }
  },
}

const getData = async ({ withTongues = false }) => {
  try {
    const response = await api.getTotalFrogs({ withTongues })
    return response.data.result
  } catch (err) {
    throw err
  }
}

const DataList = (props) => {
  const [items, setItems] = useState([])
  const [error, setError] = useState(null)

  React.useEffect(() => {
    getData({ withTongues: true })
      .then(setItems)
      .catch(setError)
  }, [])

  return (
    <div>
      {Array.isArray(items) && (
        <Header size="tiny" inverted>
          {items.map(({ name, tongueWidth, weight }) => (
            <div style={{ margin: '25px 0' }}>
              <div>Name: {name}</div>
              <div>Width of their tongue: {tongueWidth}cm</div>
              <div>Weight: {weight}lbs</div>
            </div>
          ))}
        </Header>
      )}
      {error && <Header>You received an error. Do you need a linter?</Header>}
    </div>
  )
}

This code works perfectly fine. But what if, somehow, there was an issue with how the data flow was handled when an unexpected condition occurred in the API client, and instead of the array above it returned this array?

const api = {
  async getTotalFrogs() {
    return {
      data: {
        result: [
          { name: 'bob the frog', tongueWidth: 50, weight: 8 },
          undefined,
          { name: 'kelly the last frog', tongueWidth: 20, weight: 2 },
        ],
      },
    }
  },
}

Your app will crash because it doesn't know how to handle that:

Uncaught TypeError: Cannot read property 'name' of undefined
    at eval (DataList.js? [sm]:65)
    at Array.map (<anonymous>)
    at DataList (DataList.js? [sm]:64)
    at renderWithHooks (react-dom.development.js:12938)
    at updateFunctionComponent (react-dom.development.js:14627)

So to prevent your app from crashing, you can set a default object on each iteration:

{
  items.map(({ name, tongueWidth, weight } = {}) => (
    <div style={{ margin: '25px 0' }}>
      <div>Name: {name}</div>
      <div>Width of their tongue: {tongueWidth}cm</div>
      <div>Weight: {weight}lbs</div>
    </div>
  ))
}

And now your users won't have to make judgements about your technology and expertise by watching a page crash in front of them. However, even though the app no longer crashes, I recommend going further and handling the missing values, for example by returning null for entire items with similar issues, since there isn't any data in them anyway.

7. Not Researching Enough About What You're Going To Implement

One crucial mistake I've made in the past was being overly confident with a search input I had implemented, trusting my opinions too early in the game. What do I mean by this? Well, it's not the search input component that I was overly confident with. The component should have been an easy task... and it was.
The real culprit of the issue that occurred with the whole search functionality was the set of characters being included in the queries. When we're sending keywords as queries to a search API, it's not always sufficient to assume that every key the user types is valid, even though the keys are on the keyboard for that reason. Just be 100% sure that a regex like this works as intended and doesn't leave out any invalid characters that can crash your app:

const hasInvalidChars = /^.*?(?=[\+\^#%&$\*:<>\?/\{\|\}\[\]\\\)\(]).*$/g.test(
  inputValue,
)

That example is the most up-to-date, established regular expression for a search API. Here is what it was before:

const hasInvalidChars = /^.*?(?=[\+\^#%&$\*:<>\?/\{\|\}\[\]\)\(]).*$/g.test(
  inputValue,
)

const callApi = async (keywords) => {
  try {
    const url = `${keywords}/`
    return api.searchStuff(url)
  } catch (error) {
    throw error
  }
}

As you can see, the slash / is missing, and that was causing the app to crash! If that character ends up being sent to an API over the wire, guess what the API thinks the URL is going to be?

Also, I wouldn't put 100% of my trust in the examples you find on the internet. A lot of them aren't fully tested solutions, and there isn't really a standard for the majority of use cases when it comes to regular expressions.

8. Not Restricting The Sizes of File Inputs

Restricting the sizes of files that users select is a good practice, because most of the time you don't really need a ridiculously large file when it can be compressed in some way without any noticeable reduction in quality. But there's a more important reason why restricting sizes to a certain limit is a good practice: at my company, we've noticed users in the past occasionally get "frozen" while their images are being uploaded. Not everyone has an Alienware 17 R5 in their possession, so you must take the circumstances of your users into consideration.

Here's an example of restricting files to a limit of 5 MB (5,000,000 bytes):

import React, { useState, useEffect } from 'react'

const useUploadStuff = () => {
  const [files, setFiles] = useState([])

  // Limit the file sizes here
  const onChange = (e) => {
    const arrFiles = Array.from(e.target.files)
    const filesUnder5mb = arrFiles.filter((file) => {
      const bytesLimit = 5000000
      if (file.size > bytesLimit) {
        // optionally process some UX about this file size
      }
      return file.size < bytesLimit
    })
    setFiles(filesUnder5mb)
  }

  useEffect(() => {
    if (files.length) {
      // do something with files
    }
  }, [files])

  return {
    files,
    onChange,
  }
}

const UploadStuff = () => {
  const { onChange } = useUploadStuff()

  return (
    <div>
      <h2 style={{ color: '#fff' }}>Hi</h2>
      <div>
        <input
          style={{ color: '#fff' }}
          onChange={onChange}
          type="file"
          placeholder="Upload Stuff"
          multiple
        />
      </div>
    </div>
  )
}

export default UploadStuff

You wouldn't want users to be uploading video games when they're supposed to be uploading documents!

Conclusion

And that concludes this post! There will be a part 2, as I've only gotten through half of my list (yikes!). Anyhow, thank you for reading, and make sure to follow me for future updates! Happy 4th of July!

Discussion

Another point to add is that using a type checker like Flow or TypeScript is super helpful in catching some of these bugs for you!

Yeah, almost all of the points made in the article would be pointed out to you by the compiler if you were using TypeScript. Excellent article though! It just confirmed my bias towards type safety.
Type errors never happen when the process is correct, but types add a ton of overhead and slow down the team, and don't forget type correctness !== program correctness.

Can you explain why types add overhead? And which type of overhead: speed? Quality? Value?

This is nice, I do fall for some of those sometimes, using quick shortcuts. This is a good reminder to avoid that :) About #1 though, it's good to filter the app from nulls. I parse the server responses and anything that libraries might return and check for nulls. Typescript also helps!

From my experience, I think most of the problems in this article will be easy to solve if you use Typescript.

This.

Nice article! Points 3 and 4 are interesting because they cover a very commonly used pattern. What solutions would you suggest implementing aside from unit tests?

About 5: Understand the rules. You can trick the linter into believing the code is right and still not be solving the issue. The most common case I've seen is using unique and stable values as keys in elements that are produced by iterating arrays. You could call a function that always generates a different id, and ESLint will stop complaining about it, but your keys will be even worse than array indexes. Read what the rule is about and why it exists. Don't go just for making the red line disappear; that's how you get into worse problems than the one the linter is trying to avoid.

As others have mentioned, this post is a great example of why TypeScript or another type checker vastly improves your code reliability, as it would have solved the first six issues. The last two are about validating user input, which is always a good idea. With the particular example given in number 7 (the first one), values passed in the querystring should be URL encoded first and foremost, which would solve the problem with the '/'. Additional validation to restrict the characters could then be done with a regex, but always with appropriate server-side validation also in place.

This is a Javascript thingie, not React specific. Edit: As far as I can see, none of the issues listed have anything to do with the React API; all of these are just Javascript gotchas, and you can have the same issues in Vue, Angular, [nameYourFrameworkHere] projects. By using a type checker like Flow or Typescript you can get rid of half the problems listed here; if you don't want to do that, guard your logic against unforeseen situations and name your variables correctly, so that you know what is an array and what is an object.

Your posts are always so informative. I'm getting better at coding.

I am glad I can help!

I'm sorry, but none of these are related to React practices; this is just JS in general, and that example with an API returning undefined is a pure facepalm, I cannot even realistically imagine something like that.

Nice article! Regarding accessing deeply nested object properties, I prefer lodash get. lodash.com/docs/4.17.11#get

You most definitely don't want to do this. You're only eliminating the crash without fixing the bug.

Almost all of these problems are completely eliminated by using Typescript. This would make a great article if the title was: How Typescript saved the front-end world!

I'd recommend the author work on variable naming, since it looks like he struggles to know whether a variable is an object or an array. This article is somewhat like "8 mistakes that will break your leg", where the first one is "jumping from the 15th floor".
https://dev.to/jsmanifest/8-practices-in-react-that-will-crash-your-app-in-the-future-2le5
CC-MAIN-2020-50
en
refinedweb
Offloading your Informix data in Spark, Part 5: Add a little intelligence

Where are we? Where are we going?

In the first four parts of this series, you learned how to connect Spark to Informix and build a small data lake out of its tables; the examples below reuse that code. You might be wondering what else there is to do. It's true that this tutorial series has covered many aspects of Apache Spark, but there is so much more to discover with this analytics platform. It's time to explore one of the key features of Spark: its support for machine learning.

Through the previous parts of this series, you've used sales data for a store as an example. The idea is simply to show you what you can do with machine learning: forecast future orders based on previous orders.

What you'll need:

- The Spark ML (for machine learning) library, which is in the project on GitHub.
- The stores_demo data set included with every Informix® database. Note: Don't worry if you don't have Informix knowledge. You do not need it to read and understand this tutorial. Nevertheless, feel free to consider IBM Informix as the RDBMS for your next project.
- The code you used for parts 1-4.
- For this part, the labs are in the net.jgp.labs.informix2spark.l5x0 package on GitHub.

Mathematics

I love math. Coming from the French centralized education system, you better love math if you want to access the best engineering schools. This is also true in many other places, but France seems to be an extreme. Unfortunately for me, the mathematics behind machine learning (ML) is a lot of statistics and probabilities. As much as I enjoyed statistics, I am not the greatest fan of probabilities, perhaps because of their random aspects in some scenarios. Therefore, I always try to minimize the math impact on ML. I find this makes ML understandable by most of us.

Linear regression

Linear regression is the concept you will implement. Imagine the following graph: your x-axis (abscissa) is the week number, and your y-axis (ordinate) is the total amount of orders for that week. Plotted this way, each week becomes one point on a scatter chart. The idea behind linear regression is to draw the straight line that is the least distant from all the points on the chart. In this context, the regression line has the form "total = a × week + b" for some slope a and intercept b. You can now imagine that we will continue this line to see where it goes. However, first you need to know how we got the data.

Getting the data with Spark

Use the examples in the previous parts of this series and adapt them to get the orders, then group the sales amount by week. The output should look like this:

+----------+----------------+
|order_week|sum(total_price)|
+----------+----------------+
|        21|         4387.00|
|        22|         2144.00|
|        23|          940.00|
|        24|          450.00|
|        25|         1366.80|
|        26|         2544.00|
|        28|         3652.97|
|        30|         2670.00|
+----------+----------------+

Note: In this part of the series, I'm not explaining every part of the code. By this time, you should feel comfortable reading the code without breaking it into small chunks. That said, if you have issues, please ask questions in the comments.
Your code should look like:

package net.jgp.labs.informix2spark.l500;

import static org.apache.spark.sql.functions.lit;
import static org.apache.spark.sql.functions.weekofyear;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.jdbc.JdbcDialect;
import org.apache.spark.sql.jdbc.JdbcDialects;
// The project-specific helpers used below (InformixJdbcDialect, Config,
// ConfigManager, K) are imported from the companion GitHub repository.

public class OrdersPerWeekApp {

  public static void main(String[] args) {
    OrdersPerWeekApp app = new OrdersPerWeekApp();
    app.start();
  }

  private void start() {
    SparkSession spark;
    spark = SparkSession
        .builder()
        .appName("Sales per week")
        .master("local")
        .getOrCreate();

    // List of all tables we want to work with
    List<String> tables = new ArrayList<>();
    tables.add("orders");
    tables.add("items");

    // Specific Informix dialect
    JdbcDialect dialect = new InformixJdbcDialect();
    JdbcDialects.registerDialect(dialect);

    // Let's connect to the database
    Config config = ConfigManager.getConfig(K.INFORMIX);

    // Let's build our data lake
    Map<String, Dataset<Row>> datalake = new HashMap<>();
    for (String table : tables) {
      System.out.print("Loading table [" + table + "] ... ");
      Dataset<Row> df = spark.read()
          .format("jdbc")
          .option("url", config.getJdbcUrl())
          .option("dbtable", table)
          .option("user", config.getUser())
          .option("password", config.getPassword())
          .option("driver", config.getDriver())
          .load();
      datalake.put(table, df);
      System.out.println("done");
    }
    System.out.println("We have loaded " + datalake.size()
        + " table(s) in our data lake");

    // Let's look at the content
    Dataset<Row> ordersDf = datalake.get("orders");
    Dataset<Row> itemsDf = datalake.get("items");

    // Builds the dataset in 2 steps, first with the week number...
    Dataset<Row> allDf = ordersDf
        .join(
            itemsDf,
            ordersDf.col("order_num").equalTo(itemsDf.col("order_num")),
            "full_outer")
        .drop(ordersDf.col("customer_num"))
        .drop(itemsDf.col("order_num"))
        .withColumn("order_week", lit(weekofyear(ordersDf.col("order_date"))));

    // ... then by grouping on the week and summing the total price
    allDf = allDf
        .groupBy(allDf.col("order_week"))
        .sum("total_price")
        .orderBy(allDf.col("order_week"));

    allDf.show(50);
  }
}

The "meat" of this app starts when you create the allDf dataframe. First, create a column named order_week based on order_date. Use the weekofyear() static method to determine the week number from a date, and the lit() static method to create a column from scratch in your dataframe. Both methods are statically imported at the beginning of the code.

Data quality

It is always important to look at the data. You might not catch every anomaly by doing so, especially with big data. But by looking at it, you can see that weeks 27 and 29 are missing. From this observation, you have (at least) two decisions to make:

- Ignore the missing data. Perhaps the central system has not been updated yet, it is not the first time it has happened, or maybe it's the intern who crashed the system the other day.
- Assume there weren't any orders; this would mean inserting two rows with an amount of 0.

I recommend you go with the first solution: don't blame it on the interns, but keep track of your decision.

The two-second introduction to machine learning

ML algorithms can be complex. However, the principle is really easy. You build (or train) a model, then you apply this model to a data set to predict an outcome. In this scenario, you will only execute step 2 once, but you can easily imagine scenarios where the model does not change and can be reused in steps 3, 4, etc. Of course, as a data professional, you can imagine the full spectrum of lifecycle activities deriving from this model: validating, refining, testing, etc. However, these activities are slightly outside the scope of this primer.
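For readers who do want to see the one formula hiding behind that training step, here it is as a brief aside. This is the standard least-squares formulation, which matches what Spark's LinearRegression does when the regularization parameter is 0 (as it is later in this tutorial); it is not an extra step you need to code:

\hat{y}_i = b_0 + b_1 x_i,
\qquad
(b_0, b_1) \;=\; \operatorname*{arg\,min}_{b_0,\,b_1} \; \sum_{i=1}^{n} \bigl( y_i - b_0 - b_1 x_i \bigr)^2

Here x_i is the week number (the single feature), y_i is the observed weekly total (the label), and training simply picks the intercept b_0 and slope b_1 that minimize the sum of squared residuals.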
Building the model

You have the data, and you have the theory. Now you can practice. Your first step is to prepare the data for the ML trainer to digest. The preparation varies depending on the type of algorithm, but linear regression expects features and labels. In essence, the label is what you are studying, and the features define it. So, if you look at the orders of week 28, where you made $3,652.97, the label is 3652.97, and one of its features is 28. You can add more features, such as:

- Temperature
- Precipitation level
- Total of orders during the same week of previous years
- Number of days before or after a holiday, etc.

I remember a friend of mine who sold swimming pools. He had roughly a six months' lead time. He sold more pools when it was sunny, so adding the amount of sunshine to his model made sense.

A common mistake is to confuse the label and the features, especially in a case like this one where you only have one feature. To use a linear regression, Spark expects a vector of features, even if your vector contains only one element. Basically, Spark expects the following dataframe:

+----------+----------------+-------+--------+
|order_week|sum(total_price)|  label|features|
+----------+----------------+-------+--------+
|        21|         4387.00|4387.00|  [21.0]|
|        22|         2144.00|2144.00|  [22.0]|
|        23|          940.00| 940.00|  [23.0]|
|        24|          450.00| 450.00|  [24.0]|
|        25|         1366.80|1366.80|  [25.0]|
|        26|         2544.00|2544.00|  [26.0]|
|        28|         3652.97|3652.97|  [28.0]|
|        30|         2670.00|2670.00|  [30.0]|
+----------+----------------+-------+--------+

You could have simply renamed the sum(total_price) column to label, but because both the label and features columns are merely technical constraints for the linear regression algorithm, I prefer to keep the data separate from the technical constraints.

To build the vector, you can use a user-defined function (UDF). This extension creates a vector from the original value.

package net.jgp.labs.informix2spark.l520;

import org.apache.spark.ml.linalg.Vector;
import org.apache.spark.ml.linalg.Vectors;
import org.apache.spark.sql.api.java.UDF1;

public class VectorBuilderInteger implements UDF1<Integer, Vector> {
  private static final long serialVersionUID = -2991355883253063841L;

  @Override
  public Vector call(Integer t1) throws Exception {
    double d = t1.doubleValue();
    return Vectors.dense(d);
  }
}

The UDF implements a UDF1 of Integer (the input type) and Vector (the return type). Vectors expect double values, so you need to transform the integer into a double.

Before using the UDF, you have to register it in the Spark session. Make sure to register the UDF right after you create the Spark session.

spark.udf().register("vectorBuilder", new VectorBuilderInteger(), new VectorUDT());

In this scenario:

- vectorBuilder is the name of the function you are adding to Spark SQL.
- VectorBuilderInteger is the class implementing the UDF.
- VectorUDT is the return type.

In your transformation code, you can simply call the vectorBuilder() function to create the column.

Dataset<Row> df = allDf
    .withColumn("values_for_features", allDf.col("order_week"))
    .withColumn("label", allDf.col("sum(total_price)"))
    .withColumn("features", callUDF("vectorBuilder", col("values_for_features")))
    .drop(col("values_for_features"));

Now that you have the data in the correct form, creating your model only takes two lines of code.

LinearRegression lr = new LinearRegression().setMaxIter(20);
LinearRegressionModel model = lr.fit(df);

What about a little introspection? This section is optional.
Imagine that I added it to raise the suspense toward the end goal of discovering future orders, but I also added it for those math lovers who want to understand that there really is no crystal ball, just some methodology and science.

Spark provides the tools needed to inspect your model. First, apply the model to the full dataframe you had:

model.transform(df).show();

This adds a prediction column (the value on the linear regression line).

+----------+----------------+-------+--------+------------------+
|order_week|sum(total_price)|  label|features|        prediction|
+----------+----------------+-------+--------+------------------+
|        21|         4387.00|4387.00|  [21.0]|2101.3694797687876|
|        22|         2144.00|2144.00|  [22.0]|2144.7183236994233|
|        23|          940.00| 940.00|  [23.0]|2188.0671676300585|
|        24|          450.00| 450.00|  [24.0]|2231.4160115606937|
|        25|         1366.80|1366.80|  [25.0]|2274.7648554913294|
|        26|         2544.00|2544.00|  [26.0]|2318.1136994219646|
|        28|         3652.97|3652.97|  [28.0]|2404.8113872832355|
|        30|         2670.00|2670.00|  [30.0]| 2491.509075144506|
+----------+----------------+-------+--------+------------------+

Here's a look at the different mathematical computations associated with the model:

LinearRegressionTrainingSummary trainingSummary = model.summary();
System.out.println("numIterations: " + trainingSummary.totalIterations());
System.out.println("objectiveHistory: "
    + Vectors.dense(trainingSummary.objectiveHistory()));
trainingSummary.residuals().show();
System.out.println("RMSE: " + trainingSummary.rootMeanSquaredError());
System.out.println("r2: " + trainingSummary.r2());

This code returns:

numIterations: 1
objectiveHistory: [0.0]
+-------------------+
|          residuals|
+-------------------+
| 2285.6305202312124|
|-0.7183236994233084|
|-1248.0671676300585|
|-1781.4160115606937|
| -907.9648554913294|
|  225.8863005780354|
| 1248.1586127167643|
|  178.4909248554941|
+-------------------+
RMSE: 1246.0139337359603
r2: 0.009719742211204974

Let's look at one of those criteria. The root-mean-square error (RMSE), also called root-mean-square deviation (RMSD), measures the differences between the values predicted by a model or an estimator and the values observed; it is the square root of the mean of the squared residuals. Because it is a distance, the smaller the number, the better. When you compare it to the magnitude of the labels here, it means the predictions are pretty far off, which is not good. The explanation is easy, though: there is a great disparity in the labels because there is a limited number of features, and this is definitely not big data.

The other parameters describe the line and the training run: the intercept, the regression parameter, and the convergence tolerance of the iterations.

double intercept = model.intercept();
System.out.println("Intersection: " + intercept);
double regParam = model.getRegParam();
System.out.println("Regression parameter: " + regParam);
double tol = model.getTol();
System.out.println("Tol: " + tol);

And the results are:

Intersection: 1191.0437572254443
Regression parameter: 0.0
Tol: 1.0E-6

Strictly speaking, only the intercept is part of the line itself: getRegParam() returns the regularization parameter used during training (0.0 here, meaning no regularization), and getTol() returns the convergence tolerance of the iterative solver.
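Putting the printed intercept together with the prediction column shown above, you can read off the fitted line yourself. The slope is not printed by this snippet, but it is simply the difference between two consecutive weekly predictions (for example 2144.72 - 2101.37, which is about 43.35). In other words, the model that was trained is approximately:

\widehat{\text{total}} \;\approx\; 1191.04 \;+\; 43.35 \times \text{week}

This is only a reading of the numbers already printed above, not an additional output of the program.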
The not-that-magic crystal ball

You are now ready to predict orders for the next three weeks. You are about to discover the complex code to do so. Letting the suspense grow, the code first:

for (double feature = 31.0; feature < 34; feature++) {
  Vector features = Vectors.dense(feature);
  double p = model.predict(features);
  System.out.printf("Total orders prediction for week #%d is $%4.2f.\n",
      Double.valueOf(feature).intValue(), p);
}

Remember what you saw before: features are stored in a vector. So, even if you only have one feature (the week number), it still takes a vector, so remember to build this vector with the feature. Then you can call the predict() method of the model, using the vector. That's it; you just made your first prediction!

It takes seven lines of code to apply the model to the new features (here, the week numbers). Among these seven lines, three are for display and two are for the loop. The results:

Total orders prediction for week #31 is $2534.86.
Total orders prediction for week #32 is $2578.21.
Total orders prediction for week #33 is $2621.56.

Give yourself a high five! You followed this long, and hopefully not too painful, tutorial series, and you even discovered that your company's orders are on the rise.

What you learned

This fifth part of the Offloading your Informix Data in Spark series taught you:

- A use case of ML based on RDBMS data, and that ML does not always need tons of data.
- A little bit of mathematics, such as the fact that the RMSE measures the quality of your model.
- The importance of data quality (and I should probably teach more DQ).
- That linear regression is an easy form of ML.
- That ML has a simple process: train the model, then reuse the model on new data.

Farewell

This is also the last part of this series. I sincerely hope you enjoyed it. I took great pleasure in writing each part of this series and want to thank the support team at IBM, especially Robin Wood, who showed patience and tolerance of my Frenglish, and brought help. Thanks, Robin.

Let's keep in touch via Twitter (@jgperrin), email jgp@jgp.net (I reply to all emails), or in the comments below. See you for more Spark content and at IBM Think in 2018!
https://nikolanews.com/machine-learning-will-help-you-extrapolate-future-orders/
CC-MAIN-2020-50
en
refinedweb
Hilde van Vlaenderen, Serigne Mansour Tall and Gora Gaye With an estimated 30 to 50 percent of active Senegalese men absent from their villages, with international remittances estimated to account for 30 to 70 percent of the budget of their families back home (Eurostat, 2001), and with approximately 70 percent of the population engaged in agriculture, Senegal is an excellent case study for exploring the linkages between remittances and access to land. It is estimated that two million Senegalese migrants are currently living abroad (Eurostat, 2001), and there is rarely a Senegalese family who does not boast a migrant. This chapter draws on fieldwork carried out in Senegal and France. In Senegal, semi-structured interviews were held with 19 persons, including several members of four extended families with relatives in France as well as traditional and local authorities. In France, four migrant portraits were undertaken, including two portraits of migrants from the extended families interviewed in Senegal. The portraits were then discussed with a focus group of 12 migrants belonging to the village association of Diégoune, Senegal. Where appropriate, the chapter also draws on the literature on migration, remittances and land specifically relating to Senegal. The chapter is structured as follows. First, the field sites are presented. Second, the phenomenon of migration between Senegal and France is described with specific reference to the study participants. Third, data on the nature and significance of remittances, and on their implications for the study area are discussed. Last, the chapter focuses on remittances and changes in land tenure in Senegal. In Senegal, interviews were held in the village of Moudéry and in the rural community of Kër Momar Sarr. In France, the four migrant portraits and the focus group discussion were carried out in Paris. Moudéry is the administrative centre of the communauté rurale (rural commune) of Moudéry, in the department of Bakel, along the Senegal River, in the east of Senegal, close to the border with Mali. Moudéry has a population of approximately 7000 inhabitants, most of whom are Soninké, and three quarters of whom have dual Senegalese and French nationality. The area has a long history of emigration, particularly to France, starting from independence. Since then, a remarkably large share of the population has migrated overseas. Of the 32 elected local councillors, 7 have dual nationality, and 22 have been or are migrants themselves. The research team did not manage to identify any households who had no members currently or previously abroad. Subsistence agriculture is the main activity of the population. It is motivated by socio-cultural factors as well as economic motives. Especially for Soninké descendants from higher caste groups, selling a harvest is a sign of poverty. However, increasing ones production and obtaining a good harvest are rewarded with respect from the community, even though it is used for subsistence. There are three types of agriculture: rain-fed agriculture (from June to November); irrigated agriculture, with infrastructure provided by SAED (Société dAménagement et dExploitation des terres du delta et des vallées du fleuve Sénégal et de la Falém); and fruit orchards. Maize, rice, millet, beans, groundnuts, bananas and vegetables are the most commonly grown crops. Recently, some families have started to invest in agricultural equipment such as tractors and pumps. 
Two families from this area participated in the study: The Sylla family is a large family descended from the local oligarchy, and owns a large amount of customary land in the village. The chef de famille, who has dual Senegalese-French nationality, is currently retired (receiving a French pension) and is engaged in agriculture. He spent 16 years in France with the Marine Marchande. The family counts six migrant members, five of whom reside in France. In the Cissé family, the chef de famille spent 35 years in France and has dual Senegalese-French nationality. The family is involved in farming. Currently the family counts four migrant members, all living in France. Kër Momar Sarr is an area of 7600 km2, with an estimated population of approximately 10900. It contains 60 villages and is located in a sylvo- pastoral zone in the Vallée Fossile (Louga region), in the Northern part of Senegal. There are three ethnic groups in the area. The Peul (in English also referred to as Fulani), the majority group, are predominantly involved in cattle herding, the Toucouleurs in fishing and the Wolof in agriculture. Areas of fertile land around the nearby Lake Guer are under irrigation and have been managed by ASREAD. Migration in this area is fairly recent and dates back from 1972/73, when the area suffered from serious drought. Two families from this area, both involved in farming, took part in the study: The Diop family lives in the village of Ndimb and has one migrant member, 33 years old, who left in 1994 and is involved in commerce. The Mboup family lives in Kër Momar Sarr and has one migrant member, 38 years old, who left Senegal 15 years ago and is currently a small businessman in Italy. In Paris, interviews were held with one migrant member from each of the two families from Moudéry, and with two migrants from the Casamance Province, (one from the town of Ziguinchor and the other from the village of Diégoune). Below are short profiles of the four migrants interviewed. Moussa (from the Sylla family) is Soninké and 29 years old. From a young age, Moussa enjoyed going to school. He obtained a good baccalaureate in Dakar, which provided him access to university studies in France. Moussa came to France in 1992, where he completed his university studies in management and information technology. He is a career advisor for a municipality in the Paris area. Three years ago he returned to Senegal to marry, returning thereafter to France, and currently lives with his wife, who works in a nursery school, and his two young children in Paris. Ousmane (from the Cissé family) is Soninké and 38 years old. He attended Koranic school after which he became an apprentice tailor. In Ousmanes family it has been customary for the men to migrate. When the time came for Ousmane, his father arranged the trip and supported him in France to get settled. Ousmane has been in France for 19 years and has worked for the same construction firm since he arrived. He currently lives in Rouen, with his French wife and two children. He also has a wife and two children in Senegal. Aly, whose family resides in the village of Diégoune, near Ziguinchor in the Southern province of Casamance, is Peul and 46 years old. After he finished primary school, he took an apprenticeship with a local baker. However, he soon left his hometown and subsequently travelled through many countries in Africa including Togo, Gabon, Congo, Zaire, Cameroon, Nigeria and Guinea, where he did a variety of jobs. In 1979 Aly came to France. 
He financed his trip with his personal savings. He was initially employed as a storeman in a paint store. Some years later he returned to Senegal to marry and brought his wife with him to France. He is currently living in the Melun area and has three children. His mother and father are dead; he has one elder brother in Belgium and two sisters and two brothers still living in the family home. His siblings rely on agriculture for their livelihood, supplemented with the remittances he sends. Amadou is Peul, 33 years of age and comes from Ziguinchor in Casamance. Amadous father is dead and his mother runs the household. Amadou is the first born and has three brothers and two sisters. Besides one brother, who is a taxi driver, all other siblings work on a temporary basis. Amadous mother makes a living from renting out rooms in her house and from growing mangoes and peanuts on the familys fields (with the help of occasional hired labour). She receives financial contributions from all of her working children. During his youth, Amadou obtained his baccalaureate in Dakar and his excellent school results gained him a bursary to study economics and finance at the University of Le Havre in France. Amadou arrived in France 11 years ago. After his studies he married a French woman. He is currently living in Paris and works for a bank. Significant migration from Senegal to France started in the 1940s, when the first Senegalese soldiers joined the French army. During the 1950s the first sailors joined the Marine Marchande Française. During the colonial era, the transformation of the traditional economy and the increased dependence on manufactured goods led to the need for cash, which was not locally available. Migration and remittances, which responded to this need, became engrained in the livelihoods of the Senegalese. In 1960 Senegal obtained independence but maintained strong links with France. Attracted by the economic boom, large numbers of Senegalese moved to France initially to work as factory labourers, but later they diversified into other forms of employment and enterprises. Many obtained the French nationality, sent regular remittances home and only returned to Senegal to retire. The 1968-73 period of drought in Senegal, combined with low world prices for cotton and groundnuts, had an important negative impact on the local agriculture and as a result reinforced migration and the need for remittances. France still maintains a close relationship with Senegal. It is the primary investor in Senegal, and French enterprises count for more than half of Senegals formal sector. Bilateral co-operation programmes operate in a variety of sectors, such as education, health, rural development and institutional support. In 2002, Senegalese migrants residing in France were officially estimated to number 42,000, representing 22 percent of all migrants from sub-Saharan Africa and 5.8 percent of all regular migrants. Migration between Senegal and France is governed by a number of bilateral treaties concerning entry requirements and procedures, migrants legal status and co-development (see above, box 1). The outward migration from Senegal to France has fostered the influx of Malian migrants into Senegal, to replace the agricultural labour force lost from rural areas. The Malian migrant workers are, however, largely seasonal and are often paid on a piece work basis. In the Bakel region, Malian migrants are particularly numerous due to the proximity with Mali. 
Studies indicate that, although female migrants are on the increase, the majority of migrants are males (NIDI/Eurostat, 2000). Males are often young and single when they migrate and they do so from their parents home. This was true for all the migrants in the study. Female migrants are more likely to be married at the time of migration than migrant men. This is influenced by the fact that womens migration is frequently related to reuniting the family (NIDI/Eurostat, 2000). This was the case for the wives of Moussa and Aly. The migrants we interviewed mentioned different motivations for their migration. Moussa and Amadou came to France to study and stayed on. Aly emigrated predominantly to increase his chances for a better life after having travelled widely. All three, however, indicated their desire to support their families back home as a factor in their migration, as well as a desire for personal development through studying, gaining experience and developing contacts. It is important to acknowledge that motivations are usually multifaceted. Ousmane emigrated as a result of his familys culture of migration. His father and elder siblings had emigrated before him and his emigration was decided upon and arranged by the extended family, including his employment in France. All the migrants, with the exception of Aly, were assisted (financially and otherwise) by their families with their migration, which confirms findings from the literature. In most cases, migration is not an individual decision, but a social process, involving a family strategy of survival and betterment, characterized by a range of economic, social and cultural dimensions. Migration involves discussion at household level, is sanctioned by the household head and facilitated by the family network. Family members or fellow villagers overseas help the new migrant to settle and start his new life. It is these social networks that bind migrants and non-migrants in complex social, transnational relationships. When families facilitate the emigration, they expect remittances and family commitment in return (see Ammassari & Black, 2001). The migrants in the study recognise the advantages of living in France, including access to technology, education and experience not available in Senegal as well as economic benefits. Amadou acknowledges that There are plenty of work and leisure opportunities in France and many exciting things to learn and experience. I am particularly interested in financial systems, stock exchange and electronic communication systems, i.e. internet facilities. In Senegal these areas are still undeveloped. Therefore, migrants are torn between longing to live a rural existence with the extended family in Senegal and their reluctance to leave France because it provides services and opportunities. Aly says I would leave for Senegal tomorrow, but it is better that I stay in France in order to earn some more money to look after my family and eventually earn enough to realise my dream of obtaining a large area of land in Senegal which I can cultivate and where I can develop a small tourism resort. Social contact and support for the migrants in France are predominantly provided by other Senegalese migrants, including friends, extended family members and village associations (see below). Links between migrants and their home remain strong and those in this study return home fairly often, ranging from twice a year to once every three years. 
Ousmane phones his parents and wife weekly, Amadou phones his mother monthly, while the other two phone more irregularly. This is consistent with the literature, which states that migration does not necessarily lead to social and family disruption (Ammassari, & Black, 2001). Data from our study, however, also shows that the intensity of contact with the family back home diminishes as the migrant builds up his nuclear family in France. Aly and Moussa used to go back once a year, but since they have started families, they return less often. Studies have revealed several reasons for return migration (Ammassari, & Black, 2001). The migrant may wish to rejoin his/her family, may be running away from adverse conditions in the destination country, or may aim to enjoy enhanced social status back home. All participants in the study indicated an intention to return to Senegal and their families in the future. Aly has plans to invest in an agricultural project; Amadou and Moussa want to invest in business or development projects, while Ousmane intends to go home to retire. All of them indicated that they would like to contribute to the development of Senegal when they return. However, several reservations with regards to return were expressed. They argued that they have adapted (some more than others) to the European lifestyle, including its services and facilities (pensions, medical compensation, access to banking facilities, telecommunication). They fear the loss of those services on their return to Senegal. Aly argued that return to a collective lifestyle, with shared accommodation, wealth and land, may not be easy after having become used to a more individualised existence in France. Three of the four migrants interviewed had children born in France, which makes their eventual decision to return to Senegal more difficult. Aly, who has teenage children, recognises the probability that his children will not join him on his return to Senegal. He has already invested in a house in France for his children. Various sources estimate that migrants send to Senegal more than 60 thousand million francs CFA (91,5 million Euros) every year. According to the head of the local post office, in Moudéry, old age pensions of migrants add up to 90 million francs CFA (137,200 Euros) per month and remittance volumes are even higher. All participants in the study sent remittances to their families. These form an important component of their families income. Fieldwork in Senegal and Paris revealed that remittances vary, depending on the needs of the family (i.e. during the harvesting season more cash is needed to pay farm labour; at the beginning of the school year extra cash is required). The amount of remittances is predominantly decided upon by the migrant, balancing his financial means and the familys needs. However, lack of familiarity with the hard living conditions and constraints faced by migrants in France amongst those in the home country can lead to frustrations for migrants. Several participants in the Paris focus group complained about the continuous requests for additional cash and their families lack of knowledge of the high costs of living in France; in particular their lack of understanding of their childrens needs in a French context, which implies the need for items such as televisions, computers, a family car, etc. Remittances are generally sent on a monthly basis and although the postal services are currently in crisis in Senegal, they are still relied upon for transferring migrants remittances. 
Recently, fast transfer services such as Western Union and Money Gram have become more popular, although they were not relied upon by the study participants. In discussing this matter with the focus group, they argued that these services are too expensive. All of the participants relied on informal means (hand-carriage by migrants or their friends travelling to the home town) when possible. Both in Moudéry and in Kër Momar Sarr, remittances are spent on food and other essential consumables, education, the upkeep and servicing of the homesteads, land and agricultural inputs. In Kër Momar Sarr, remittances are also spent on livestock and to pay those responsible for herding livestock. Some studies conclude that remittances generate dependency amongst families in the home country, who develop a passive attitude towards work. Others argue that the additional income from remittances enables families to invest in local development and entrepreneurial endeavours (Ammassari & Black, 2001). Although a direct causality between remittances and investment is hard to establish, several investments made by our interviewees are linked to remittances (see box 6 below). From the participants responses, investments in property, particularly in Dakar, and in small business are most prominent. According to Marc Vergnière (1974), a fairly large portion of remittances is invested by the Soninké to build houses for rent in Dakar. These houses also provide the migrants with somewhere to live when they eventually return to Senegal. The Cissé family owns a house in Dakar which they rent out. Aly has bought two houses in France and built one in Diégoune. Although some participants invested money in small family businesses in Senegal, they expressed reluctance to invest in larger scale enterprises while still in France. They lack confidence in finding a suitable local partner and fear financial mismanagement in their absence. A fear of corruption on the part of the Senegalese government also served as a deterrent to invest in enterprises in Senegal. This latter view was strongly expressed by the participants in the Paris focus group. Besides sending remittances to family members, migrants provide financial support to development projects in their country, in particular to the village associations in their respective villages. In Senegal, the village association as a channel for collective remittances has become an important phenomenon since the 1970s. Village associations are created by the village and/or its migrants. Members of a village association make regular financial contributions towards development projects in their home village. One village commonly has village association branches in different localities abroad as well as in Dakar and in the village itself. The most popular projects undertaken by village associations are in the field of education, health, telecommunications and agriculture. The building of mosques, which enhances the prestige of the village, is also popular. These projects are widely recognised as providing an important contribution to improving living conditions back home. In discussions with the Paris-based research and development organization Groupe de Recherche et de Réalization pour le Développement Rural (GRDR), which works with village associations in Senegal, some constraints of the village association approach were mentioned. Tensions may occur between different stakeholders in the development projects, namely the migrants, the village elders and the rural council. 
The migrants provide funds for the projects and are often prominent in developing the ideas underpinning them. They may, however, not have the necessary skills and expertise, and often feel marginalized because of their distance from the project and its financial management. The local village elders feel at times disempowered by the projects and threatened by the initiatives of young migrants. They feel that these projects underline their lack of capacity to provide for their communities. Moussa emphasised repeatedly that his village association in Paris intends to show the elders in Moudéry that young people can make a real difference to the village. The local rural council is important because it represents the government. It feels at times, however, disempowered, because it lacks financial and technical means as well as staff. In cases where migrants are well-represented on the rural council, these issues may be less important. Under customary land tenure systems, access to farm land depends upon the allocation of a plot by the relevant customary authority. Once the land is productively used, the rights can usually be inherited according to a patrilineal kinship system. Land can also be accessed through loans and rentals. During the colonial period, attempts were made to change this system and to replace customary law with legislation based on individual land rights and written titles. These attempts did not, however, have much impact on access to land for rural dwellers. Despite an extensive body of legislation on land tenure and on decentralization, customary rules regarding land are still widely applied in rural areas (Münkner, 1995; Toulmin & Longbottom, 1997). Rural councils rarely make land allocations without the approval of customary chiefs. For instance, in Moudéry access to land is essentially still according to custom, except for lands where the SAED has provided irrigation. These areas previously belonged to the oligarchy of the village, but are now allocated by the rural council to families applying for them. The Sylla family, which belongs to the local elite (and qualifies for chieftainship in the village), is one of the two large landholding families in Moudéry. Besides cultivating their own land, they lend land to landless families. This combination of land rights, social class and local authority provides the Sylla family with substantial power and influence at village level. This year the family cultivated 13 ha of customarily held land, excluding the land used for fruit trees and vegetable gardening. Eleven hectares were used for millet and two for maize. Since the family holds large areas of land, they were able to choose the best fields for cultivation close to the river. Since several members of the family are abroad, the Syllas have cultivated these lands with the help of five labourers from neighbouring Mali, which cost them the non-negligible amount of 280,000 FCFA (426 euros). This cultivation capacity has also enabled the Sylla family to reclaim three hectares of land that it had lent to another family (see below) and to successfully apply for an additional two ha of irrigated land from the rural council, which they have used to cultivate millet. Overall, this year the family produced 80 tonnes of millet and 1.4 tonnes of maize. The produce is essentially used for consumption by the family, although a share is destined for needy families in other villages. Last year the family distributed 200kg of millet to each of 30 families. 
The young brother of the family head, who is president of the rural council, is largely responsible for distributing these gifts. Very occasionally, surplus is sold to traders who come to Moudéry to buy grain. The Cissé family bought two ha of irrigated land from the ex-president of the rural council, for 110 000 FCFA (168 euros). However, this transaction was informal and does not have any legal value. The Cissé family uses it to cultivate maize, rice and millet, vegetables and bananas. It does so with the help of three Malian labourers, who are employed during the cultivation season. The produce is used for consumption as well as for sale to local buyers. In Kër Momar Sarr, very few migrants invest in land. The two migrants involved in the study are exceptional. Their motivations for investing in land include their familiarity with farming as well as witnessing successful farming in other parts of the world and their limited knowledge about other investment options. Mamadou, migrant in France from the Mboup family, has attempted unsuccessfully to acquire land through the rural council for the past eight months. He therefore resorted to renting a plot of irrigated land (1.47ha at 125.000 F CFA/ha (=190 euros/ha)) from a village association in charge of an irrigation project in the area. For a second plot of land (three ha) in another village, he engaged the assistance of his uncle, who is resident in the area, to intervene, since the land is reserved exclusively for residents of the village. Mamadou cultivates in partnership with his stepbrother. He cultivated one ha with tomatoes and the remaining 3.47ha with sweet potatoes. The produce was sold and the proceeds were used to pay for capital costs (tractor, pumps, fertilisers, etc.); the balance was then divided between Mamadou and his brother. Mamadou complained about several constraints in farming including: lack of easy access to land (failure to obtain land through the rural council); high cost of irrigation; shortage of available markets, market monopolies and fixed prices for goods. In Diégoune, Aly has inherited land from his father, for which he has now obtained a title deed. He grows a variety of crops including peanuts, rice, cassava, sweet potatoes, maize and mangoes for commercial purposes. Although he lives in France, he hires labour to manage his land. He predominantly hires women employees, whom he argues are more reliable. He is in monthly contact with his employees in order to discuss farming issues. Aly also inherited a herd of cattle from his father, for which he employs a herder. The herd provides milk and occasionally meat for sale. Aly intends to purchase more land to extend his farming activities. However he has prioritized other projects, such as purchasing a house in France for his children. As a result, acquiring more land will have to wait. As stated in chapter 2, it is difficult to infer a direct association between remittances and improved access to land. Migrants and their families may invest in land as a result of remittances, but they may also be influenced by other factors such as level of education and pre-existing wealth. On the other hand, where remittances per se are not intended to be invested in land, they may still enable the household to free up other income for investment. The experience presented above shows that a range of strategies are used to secure access to land, going beyond land purchases. 
The Sylla family has reclaimed land previously lent to other families and has acquired irrigated land through the rural council. The Cissé family bought irrigated land from an individual in the village. Mamadou rents irrigated land from a village association. Aly sought and obtained a certificate of ownership for his customary land. Through hired labour and agricultural inputs, all participants have enhanced the mise en valeur of their land, and hence their tenure security. Study participants identified several motives for investing in rural land, including: the familiarity of the migrant and his family with agricultural practice; the importance of agricultural production for households' livelihoods; the cultural attachment of the migrant to rural life; exposure to successful agro-business in other locations; and the belief that enhanced agricultural production will enable the migrant to reduce his remittances. In the field sites, rural land does not yet have great market value. There is still unexploited land available. However, irrigation has increased land values (e.g. in Moudéry), and has attracted some migrants to invest in land. Thus, three of our respondents sought out irrigated land for investment. In the Kër Momar Sarr zone, the flooding of the Vallée Fossile and the construction of the Golon channel with a pump system have added value to the land and have attracted incomers, including foreign companies (e.g. investment from Kuwait). Although, officially, land cannot be bought and is allocated by the rural council, many informal rental and sale arrangements are made, and prices are soaring. Migrants and their families contribute to these changes as a result of their greater financial capacity compared with many other villagers. Investment in irrigated land, combined with the financial means to buy agricultural inputs and a more entrepreneurial inclination of migrants, may gradually change the nature of agriculture, moving from subsistence to commercial farming. This may in turn create employment opportunities for other villagers. Several respondents already employ hired labour and sell their produce, though on a very small scale. In perspective, these changes may create tensions at the family level. As was stated earlier, agriculture is traditionally regarded as an activity in which the entire family is involved and which is geared at consumption rather than commercial purposes. The agricultural tasks are clearly delineated and the head of the family takes the main decisions. At this stage, it is still unclear to what extent international remittances are supporting this model of family farming, and to what extent they are promoting a different type of agriculture, centred on individual entrepreneurship and commercial production. Interestingly, while in the case of the Sylla and the Cissé families it was the family head who acquired land, in the Mboup family it was the individual migrant, Mamadou, who acquired land with the help of family members. Large-scale acquisition of irrigated land by migrants and their families, combined with greater means to use the land productively, may, however, negatively affect land access for poorer families, who do not have the capacity to buy or rent land or even to cultivate their own customary land. With prices soaring, land will become less accessible to poorer households and more concentrated in the hands of a few investors, amongst whom are found some of the more affluent migrants and their families. 
A land dispute encountered during the field study in Moudéry is partially linked to the desire and the means of a family with access to remittances to expand the land area under its direct control. In this case, a piece of land had been lent out by the head of the Sylla family to the Sow family for over 10 years. In 2003, the Sylla family reclaimed its land. The Sow family refused to surrender the land and brought the matter to court, arguing that the land belonged to the domaine national and that therefore the rights of users (mise en valeur) should be protected. The court, however, ordered that the land be surrendered to the Sylla family, as they could produce a certificate of land allocation by the rural council, while the Sow family did not possess any documentation. Increased capacity to cultivate the land, linked to the increased financial capacity of the migrant family, is likely to have played a role in this dispute. Two caveats need to be made. First, in these processes of land commodification and concentration, the migrant is only one among many players. In the Kër Momar Sarr area, a number of big investors have acquired several hundred hectares, which makes it more difficult for the villagers to obtain land through the rural council. Secondly, at the time of the study, while there are signs of increased commodification of land, and of greater agricultural intensification and commercialization, this process of social change seems to be still at an early stage, and is far from complete. Despite recent interest in rural investment, migrants still predominantly invest in urban areas (construction and transport/commerce). All the migrants we interviewed had invested in small business or property. This may be due to an array of obstacles to investment in rural land. As is illustrated in Mamadou's case, acquiring land can be a long, complicated process, requiring negotiation and administration. This is difficult for the migrant, who wants to make his investment whilst remaining abroad. Being well-connected is key: the Sylla family, which has contacts in the rural council (a family member is even the president of the council), was able to secure access to irrigated land, whereas Mamadou, who does not have strong local connections, was unsuccessful. This highlights the importance of migrants' representation in institutions such as rural councils. Investing in urban property is often more attractive for various reasons. Being a property owner has a symbolic value and property investment is regarded as more secure, easier to manage and offers the possibility to rent the property out. Urban property is more widely advertised as an investment option, even in destination countries, whereas rural land is not generally promoted as an investment opportunity for migrants. However, people are increasingly informed about the possibilities for buying land, through family and intermediaries. This chapter has provided insights into the role of remittances in rural livelihoods and access to land, and on their potential impact on changes in land tenure in rural Senegal. It has done so through a small number of targeted interviews aimed at identifying key issues and broad trends. Although investment in rural land is still in its infancy and migrants still largely prefer to invest in residential property, irrigated land is a growing attraction for investors, including migrants and their families. This increase in rural investment may affect land tenure and social relations in rural areas. 
Due to their relatively high economic capacity, migrants can become key development players in rural areas, though at this stage it is not yet clear whether as a source of support for family farming or as heralds of agri-business.
http://www.fao.org/3/j2815e/j2815e04.htm
LastPass clone - part 12: CoreData dive in

Hey guys, in the previous part we created entities for password and note as well as their respective models and view models. I had to prepare those in a separate post as it would've made this article way too long. In this one we will start working with CoreData and do a little bit of refactoring.

Preparations

You should have the source code link in your email inbox if you are subscribed, otherwise click here to subscribe and get it.

CoreData Manager

We will put almost all CoreData-related stuff in a separate file. In the service folder, add a Swift file named CoreDataManager.swift containing the following code:

import Combine
import CoreData
import UIKit

class CoreDataManager: ObservableObject {

}

This will be our CoreData playground. Then add the following initialiser to the top:

private init(context: NSManagedObjectContext) {
    self.context = context
}

We make the initialiser private, because we want to make this class a singleton, meaning we will create a shared static property that will return an instance of this class containing a valid context. Add the following code below it to create that shared instance:

static let shared = CoreDataManager(context: (UIApplication.shared.delegate as! AppDelegate).persistentContainer.viewContext)

What we are doing here is initialising the shared instance with the context retrieved from the AppDelegate. If you open AppDelegate.swift and look for persistentContainer, you'll find it declared as a lazy variable inside the class. You will get an error saying Value of type 'CoreDataManager' has no member 'context'; it makes sense because we haven't created the context property yet. Add the following to the top, above the init function:

var context: NSManagedObjectContext

In short, the context is an object that keeps our local and persisted state of our data in sync. Every time we set one or more of our entities, the changes will not be persisted until we save the current state using the context. Next, add the following method below the init:

func save() -> Bool {
    if context.hasChanges {
        do {
            try context.save()
            return true
        } catch let error {
            print("Error saving changes to the context's parent store: \(error.localizedDescription)")
            return false
        }
    }
    return true
}

In the above function, we first check if there are changes in the context; if that's the case, we call save() on the context to store those changes.

Entity Updates

Next, add the following function below the one above:

func updateLastUsedPassword(with id: UUID) -> Bool {
    let request: NSFetchRequest<PasswordItem> = PasswordItem.fetchRequest() as! NSFetchRequest<PasswordItem>
    request.predicate = NSPredicate(format: "id = %@", id.uuidString)
    do {
        let results = try context.fetch(request)
        results[0].setValue(Date(), forKey: "lastUsed")
        return save()
    } catch let error {
        print(error)
    }
    return false
}

This one will be called every time a user views a particular password's details. What is happening is that we use the password's id to fetch the whole object from the database, update its lastUsed date to the current date and save it again. Next, add the following below:

func setFavoritePassword(_ password: PasswordViewModel) -> Bool {
    let request: NSFetchRequest<PasswordItem> = PasswordItem.fetchRequest() as! NSFetchRequest<PasswordItem>
    request.predicate = NSPredicate(format: "id = %@", password.id.uuidString)
    let isFavorite = password.isFavorite ? 0 : 1
    do {
        let results = try context.fetch(request)
        results[0].setValue(isFavorite, forKey: "isFavorite")
        return save()
    } catch let error {
        print(error)
    }
    return false
}

This one will be called every time a user toggles the favourite button in the details view. What is happening is that we use the password's id to fetch the whole object from the database again, update its isFavorite attribute to 1 or 0 (true or false) and save it again. Next, add the following 2 methods below the ones above:

func updateLastUsedNote(with id: UUID) -> Bool {
    let request: NSFetchRequest<NoteItem> = NoteItem.fetchRequest() as! NSFetchRequest<NoteItem>
    request.predicate = NSPredicate(format: "id = %@", id.uuidString)
    do {
        let results = try context.fetch(request)
        results[0].setValue(Date(), forKey: "lastUsed")
        return save()
    } catch let error {
        print(error)
    }
    return false
}

func setFavoriteNote(_ note: NoteViewModel) -> Bool {
    let request: NSFetchRequest<NoteItem> = NoteItem.fetchRequest() as! NSFetchRequest<NoteItem>
    request.predicate = NSPredicate(format: "id = %@", note.id.uuidString)
    let isFavorite = note.isFavorite ? 0 : 1
    do {
        let results = try context.fetch(request)
        results[0].setValue(isFavorite, forKey: "isFavorite")
        return save()
    } catch let error {
        print(error)
    }
    return false
}

The above methods do the same as the ones we've created for the password, but these will be used to update the note.

Search

Before creating the search function, let's first add the following properties to the top of the class:

@Published var notePredicate = NSPredicate(value: true)
@Published var passwordPredicate = NSPredicate(value: true)
@Published var sortDescriptor = NSSortDescriptor(key: "createdAt", ascending: false)
@Published var searchTerm = ""
@Published var showNotes = true
@Published var showPasswords = true
private var cancellableSet: Set<AnyCancellable> = []

Here is an explanation of what each of those properties will do:

- The first 2 predicates are the same, just for different entities. We will use the predicates to filter the data based on a condition. We used predicates above to retrieve the item matching a particular id attribute.
- The sortDescriptor will be used to sort notes and passwords based on lastUsed when we want to see the most used passwords and notes. The default key is the createdAt attribute, which we will use to sort the data based on the creation date.
- The searchTerm is self-explanatory.
- The 2 boolean properties will be used to hide the notes or passwords section depending on which filter is selected.
- The cancellableSet will hold the cancellable subscriptions to avoid premature completion of the search subscription.

Now let's implement the search process. Below everything, add the following block of code:

func performSearch() {
    let passwordSearchPublisher = self.$searchTerm.debounce(for: 0.5, scheduler: RunLoop.main)
        .removeDuplicates()
        .map { term in
            return term.isEmpty ? NSPredicate.init(value: true) : NSPredicate(format: "site CONTAINS[c] %@ || username CONTAINS[c] %@", term, term)
        }.eraseToAnyPublisher()

    let noteSearchPublisher = self.$searchTerm.debounce(for: 0.5, scheduler: RunLoop.main)
        .removeDuplicates()
        .map { term in
            return term.isEmpty ? NSPredicate.init(value: true) : NSPredicate(format: "name CONTAINS[c] %@", term)
        }.eraseToAnyPublisher()

    Publishers.CombineLatest(passwordSearchPublisher, noteSearchPublisher)
        .receive(on: DispatchQueue.main)
        .sink { [unowned self] (passwordPred, notePred) in
            self.passwordPredicate = passwordPred
            self.notePredicate = notePred
        }.store(in: &self.cancellableSet)
}

Here is what we are doing above:

- We create the passwordSearchPublisher using debounce on the search term. The publisher will be of type AnyPublisher<NSPredicate, Never>. As you can see, the return statement in the map closure returns a predicate; for the password we filter based on the site name or username.
- We then create noteSearchPublisher, which will be filtered on the name only. One could also include the content, but let's just keep it simple.
- Using the CombineLatest publisher, we combine the 2 publishers, subscribe to the resulting publisher, and set those predicates we created earlier.

Last but not least in this class, we need to create a function that will be used to apply filters. Below the function you've created, add the following:

func applyFilter(_ filter: Filter) {
    self.sortDescriptor = NSSortDescriptor(keyPath: \PasswordItem.createdAt, ascending: false)
    self.passwordPredicate = NSPredicate(value: true)
    self.notePredicate = NSPredicate(value: true)
    switch filter {
    case .MostUsed:
        self.sortDescriptor = NSSortDescriptor(keyPath: \PasswordItem.lastUsed, ascending: false)
    case .Favorites:
        self.passwordPredicate = NSPredicate(format: "isFavorite = %@", "1")
        self.notePredicate = NSPredicate(format: "isFavorite = %@", "1")
    case .Notes:
        self.showPasswords = false
        self.showNotes = true
    case .Passwords:
        self.showNotes = false
        self.showPasswords = true
    case .AllItems:
        self.showNotes = true
        self.showPasswords = true
    }
}

The above is pretty self-explanatory as well. We first reset the predicates and sortDescriptor to default values, then set the corresponding values in the switch statement. With that, we are done for this part. We are still missing the deletion functionality, which is almost the same as the update. I'll leave that to you: play with the project, be creative, be liquid. In the next one we will integrate CoreData in views, stay tuned. Please feel free to share this article, and subscribe if you haven't done so already. If you have any questions, send me an email. Happy coding.
https://liquidcoder.com/lastpass-redesigned-clone-part-12/
How to save a Pandas DataFrame to a pickle file?

Hi Guys, I am new to the Pandas module. I want to save the DataFrame in a pickle file. How can I do that?

Hi @akhtar, the Python pickle module is used for serializing and de-serializing a Python object structure. Any object in Python can be pickled so that it can be saved on disk. You can use the commands below to save the DataFrame in a pickle file.

import pandas as pd

df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
pd.to_pickle(df, "./dummy.pkl")
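For completeness, the pickled DataFrame can be loaded back with read_pickle; the file name below is the one used in the answer above:

import pandas as pd

# Load the pickled DataFrame back from disk
df = pd.read_pickle("./dummy.pkl")
print(df.head())

# The instance method df.to_pickle("./dummy.pkl") is an equivalent way to write it out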
https://www.edureka.co/community/89180/how-to-save-a-pandas-dataframe-to-a-pickle-file
Getting the Skeleton Files

As usual, run git pull skeleton master to get the skeleton files. If you're using IntelliJ, you might need to manually add the SeamRemover.jar file to your project. If you're working from the command line, you'll need to make sure SeamRemover.jar is in your classpath. The easiest way to do this is to copy it into your course-materials-sp16/javalib/ folder.

Introduction

Seam-carving is a content-aware image resizing technique where the image is reduced in size by one pixel of height (or width) at a time. A vertical seam in an image is a path of pixels connected from the top to the bottom with one pixel in each row. (A horizontal seam is a path of pixels connected from the left to the right with one pixel in each column.) Below is the original 505-by-287 pixel image; further below we see the result after removing 150 vertical seams, resulting in a 30% narrower image. Unlike standard content-agnostic resizing techniques (e.g. cropping and scaling), the most interesting features (aspect ratio, set of objects present, etc.) of the image are preserved. In this assignment, you will create a data type that resizes a W-by-H image using the seam-carving technique. Finding and removing a seam involves three parts and a tiny bit of notation:

- Notation. In image processing, pixel (x, y) refers to the pixel in column x and row y, with pixel (0, 0) at the upper left corner and pixel (W − 1, H − 1) at the bottom right corner. This is consistent with the Picture data type in stdlib.jar. Warning: this is the opposite of the standard mathematical notation used in linear algebra, where (i, j) refers to row i and column j, and of Cartesian coordinates, where (0, 0) is at the lower left corner. We also assume that the color of a pixel is represented in RGB space, using three integers between 0 and 255. This is consistent with the java.awt.Color data type.
- Energy calculation. The first step is to calculate the energy of each pixel, which is a measure of the importance of each pixel: the higher the energy, the less likely that the pixel will be included as part of a seam (as we'll see in the next step). In this assignment, you will implement the dual gradient energy function, which is described below. Here is the dual gradient of the surfing image above: A high-energy pixel corresponds to a pixel where there is a sudden change in color (such as the boundary between the sea and sky or the boundary between the surfer on the left and the ocean behind him). In the image above, pixels with higher energy values have whiter values. The seam-carving technique avoids removing such high-energy pixels.
- Seam identification. The next step is to find a vertical seam of minimum total energy. This is similar to the classic shortest path problem in an edge-weighted digraph except for the following:
  - The weights are on the vertices instead of the edges.
  - We want to find the shortest path from any of the W pixels in the top row to any of the W pixels in the bottom row.
  - The digraph is acyclic, where there is a downward edge from pixel (x, y) to pixels (x − 1, y + 1), (x, y + 1), and (x + 1, y + 1), assuming that the coordinates are in the prescribed range.
- Seam Removal. The final step is to remove from the image all of the pixels along the seam. The logic for this method has been implemented for you in the supplementary SeamRemover class, provided in SeamRemover.jar. 
public class SeamRemover {
    // These methods are NOT destructive
    public static Picture removeHorizontalSeam(Picture picture, int[] seam)  // returns a Picture with the specified horizontal seam removed
    public static Picture removeVerticalSeam(Picture picture, int[] seam)    // returns a Picture with the specified vertical seam removed
}

SeamCarver

The SeamCarver API. Your task is to implement the following mutable data type:

public class SeamCarver {
    public SeamCarver(Picture picture)
    public Picture picture()                       // current picture
    public int width()                             // width of current picture
    public int height()                            // height of current picture
    public double energy(int x, int y)             // energy of pixel at column x and row y
    public int[] findHorizontalSeam()              // sequence of indices for horizontal seam
    public int[] findVerticalSeam()                // sequence of indices for vertical seam
    public void removeHorizontalSeam(int[] seam)   // remove horizontal seam from picture
    public void removeVerticalSeam(int[] seam)     // remove vertical seam from picture
}

energy(): Computing the Energy of a Pixel

We will use the dual gradient energy function: The energy of pixel (x, y) is $\Delta_x^2(x, y) + \Delta_y^2(x, y)$, where the square of the x-gradient $\Delta_x^2(x, y) = R_x(x, y)^2 + G_x(x, y)^2 + B_x(x, y)^2$, and where the central differences $R_x(x, y)$, $G_x(x, y)$, and $B_x(x, y)$ are the absolute values of the differences in the red, green, and blue components between pixel (x + 1, y) and pixel (x − 1, y). The square of the y-gradient $\Delta_y^2(x, y)$ is defined in an analogous manner. We define the energy of pixels at the border of the image to use the same formula but to replace the non-existent pixel with the pixel from the opposite edge. As an example, consider the 3-by-4 image with RGB values (each component is an integer between 0 and 255) as shown in the table below.

Example 1: We calculate the energy of pixel (1, 2) in detail: $R_x(1, 2) = 255 − 255 = 0$, $G_x(1, 2) = 205 − 203 = 2$, $B_x(1, 2) = 255 − 51 = 204$, yielding $\Delta_x^2(1, 2) = 2^2 + 204^2 = 41620$. $R_y(1, 2) = 255 − 255 = 0$, $G_y(1, 2) = 255 − 153 = 102$, $B_y(1, 2) = 153 − 153 = 0$, yielding $\Delta_y^2(1, 2) = 102^2 = 10404$. Thus, the energy of pixel (1, 2) is $41620 + 10404 = 52024$. Test your understanding: The energy of pixel (1, 1) is $204^2 + 103^2 = 52225$.

Example 2: We calculate the energy of the border pixel (1, 0) in detail: $R_x(1, 0) = 255 − 255 = 0$, $G_x(1, 0) = 101 − 101 = 0$, $B_x(1, 0) = 255 − 51 = 204$, yielding $\Delta_x^2(1, 0) = 204^2 = 41616$. Since there is no pixel (x, y − 1) we wrap around and use the corresponding pixel from the bottom row of the image, thus performing calculations based on pixel (x, y + 1) and pixel (x, height − 1). $R_y(1, 0) = 255 − 255 = 0$, $G_y(1, 0) = 255 − 153 = 102$, $B_y(1, 0) = 153 − 153 = 0$, yielding $\Delta_y^2(1, 0) = 102^2 = 10404$. Thus, the energy of pixel (1, 0) is $41616 + 10404 = 52020$.

Examples Summary: The energies for each pixel are given in the table below:

findVerticalSeam(): Finding a Minimum Energy Path

The findVerticalSeam() method should return an array of length H such that entry x is the column number of the pixel to be removed from row x of the image. For example, consider the 6-by-5 image below (supplied as 6x5.png). The corresponding pixel energies are shown below, with a minimum energy vertical seam highlighted in pink. In this case, the method findVerticalSeam() returns the array { 3, 4, 3, 2, 2 }. 
When there are multiple vertical seams with minimal total energy, your method can return any such seam. Your findVerticalSeam method should utilize dynamic programming. Recall the key idea behind any dynamic programming algorithm: the subproblem. Suppose we have the following definitions:

$M(i, j)$ - cost of the minimum cost path ending at (i, j)
$e(i, j)$ - energy cost of the pixel at location (i, j)

Then each subproblem is the calculation of $M(i, j)$ for some $i$ and $j$. The top row is trivial: $M(i, 0)$ is just $e(i, 0)$ for all $i$. For lower rows, we can find $M(i, j)$ simply by adding $e(i, j)$ to the minimum cost of a path ending at its top left, top middle, or top right pixel, or more formally:

$$M(i, j) = e(i, j) + \min(M(i - 1, j - 1), M(i, j - 1), M(i + 1, j - 1))$$

In short, we start from one side of the 2D image array and process row-by-row or column-by-column (for vertical and horizontal seam carving respectively).

Addendum: The Java language does not deal well with deep recursion, and thus a recursive approach will almost certainly not be able to handle images of largish size (say 500x500). We recommend writing your code iteratively. An equivalent (but slower) approach is to build an explicit Graph object and run the DAGSPT algorithm. You are welcome to try this approach, but be warned it is slower, and it may not be possible to sufficiently optimize your code so that it passes the autograder timing tests.

findHorizontalSeam(): Avoiding Redundancy

The behavior of findHorizontalSeam() is analogous to that of findVerticalSeam(), except that it should return an array of length W such that entry y is the row number of the pixel to be removed from column y of the image. Your findHorizontalSeam method should NOT be a copy and paste of your findVerticalSeam method! Instead, consider transposing the image, running findVerticalSeam, and then transposing it back. The autograder will not test this, but a similar idea could easily appear on the final exam.

Other Program Requirements

Performance requirements. The width(), height(), and energy() methods should take constant time in the worst case. All other methods should run in time at most proportional to WH in the worst case.

Exceptions. Your code should throw an exception when called with invalid arguments.
- By convention, the indices x and y are integers between 0 and W − 1 and between 0 and H − 1 respectively. Throw a java.lang.IndexOutOfBoundsException if either x or y is outside its prescribed range.
- Throw a java.lang.IllegalArgumentException if removeVerticalSeam() or removeHorizontalSeam() is called with an array of the wrong length or if the array is not a valid seam (i.e., two consecutive entries differ by more than 1).

Some Useful Files

PrintEnergy.java: For printing the energy calculations per pixel for an input image.
PrintSeams.java: Prints the energies and computed horizontal and vertical seams for an input image.
ShowEnergy.java: Shows the grayscale image corresponding to the energy computed per pixel.
ShowSeams.java: Displays the vertical and horizontal minimum energy seams for a given image.
SanityCheckTest.java: Basic JUnit tests consisting of the energy and path examples given in this spec.
SCUtility.java: Some utilities for testing SeamCarver.
SeamRemover.jar: Contains a SeamRemover class file with removeHorizontalSeam() and removeVerticalSeam() methods to use in your SeamCarver. 
SeamCarverVisualizer.java: For the purposes of visualizing the frame-by-frame actions of your SeamCarver, we've provided you with a SeamCarverVisualizer class which you can run using the following command:

java SeamCarverVisualizer [filename] [numPixels to remove] [y (if horizontal carving) | N (otherwise)]

Example: java SeamCarverVisualizer images/HJoceanSmall.png 50 y

Extra Fun

Fun #1: Try out your SeamCarver on various real world images. I recommend human faces.
Fun #2: Try to implement a version of the SeamCarver class that avoids the need to recompute the entire energy matrix every time a seam is removed. This will require getting fancy with your data structures. If you do this, email Josh and let him know. This should make your SeamCarver class extremely fast.

Submission

Submit SeamCarver.java and any supporting classes that you created, if applicable. You do not need to submit SeamRemover.jar.

How do I debug this? Make sure to try out the "Useful Files" above, especially the PrintEnergy and PrintSeams classes.

My code is slow (failing timing tests), what can I do to speed it up? Some possible optimizations include (in decreasing order of likely impact):

- Avoiding recalculation of energies for the same pixel over and over (e.g. through creation of an explicit energy matrix of type double[][]). Essentially you want to memoize energy calculations.
- Don't use a HashMap for looking up data by row and column. Instead, use a 2D array. They are much faster. HashMaps are constant time, but the constant factor is significant.
- Not using Math.pow or Math.abs.
- Not storing an explicit edgeTo data structure. It is possible to rebuild the seam ONLY from the values for M(i, j)! That is, you don't need to actually record the predecessor like you did in the 8puzzle assignment.
- Using a more clever approach than transposing your images (though this is not required to pass the autograder).

Credits

This assignment was originally developed by Josh Hug, with supporting development work by Maia Ginsburg and Kevin Wayne at Princeton University.
http://sp16.datastructur.es/materials/hw/hw5/hw5.html
Outlook extended properties overview

Namespace: microsoft.graph

Extended properties allow storing custom data and specifically serve as a fallback mechanism for apps to access custom data for Outlook MAPI properties when these properties are not already exposed in the Microsoft Graph API metadata. You can use the extended properties REST API to store or get such custom data in the following user resources: Or, in the following Microsoft 365 group resources:

Use extended properties or open extensions?

In most common scenarios, you should be able to use open extensions (represented by openTypeExtension, formerly known as Office 365 data extensions) to store and access custom data for resource instances in a user's mailbox. Use extended properties only if you need to access custom data for Outlook MAPI properties that are not already exposed in the Microsoft Graph API metadata.

Types of extended properties

Depending on whether you intend to store a single value or multiple values (of the same type) in an extended property, you can create an extended property as a singleValueLegacyExtendedProperty or a multiValueLegacyExtendedProperty. Each of these types identifies the property by its id and stores data in value. You can use id to get a specific resource instance together with that extended property, or filter on a single-value extended property to get all the instances that have that property.

Note: You cannot use the REST API to get all the extended properties of a specific instance in one call.

id formats

You can specify the id of an extended property in one of three formats:

- As a named property, identified by the extended property type, namespace, and a string name.
- As a named property, identified by the extended property type, namespace, and a numeric identifier.
- In a proptag format, identified by the extended property type and a MAPI property tag.

The next 2 tables describe these formats as applied to single and multi-value extended properties. {type} represents the type of the value or values of the extended property. Shown in the examples are string, integer, and arrays of these types.

Valid id formats for single-value extended properties
Valid id formats for multi-value extended properties

Use either of the named property formats to define a single-value or multi-value extended property as a custom property. Among the two formats, the first one that takes a string name (Name) is the preferred format for ease of reference. Named properties have their property identifiers in the 0x8000-0xfffe range. Use the proptag format to access properties predefined by MAPI, or by a client or server, that have not already been exposed in Microsoft Graph. These properties have property identifiers in the 0x0001-0x7fff range. Do not try to define a custom property using the proptag format. You can find information about mapping an extended property to an existing MAPI property, such as the property identifier and GUID, in [MS-OXPROPS] Microsoft Corporation, "Exchange Server Protocols Master Property List".

Note: After you have chosen one format for the id, you should access that extended property by only that format.

REST API operations

Single-value extended property operations:
- Create an extended property in a new or existing resource instance
- Get one or a collection of resource instances with an extended property using $expand or $filter

Multi-value extended property operations:
https://docs.microsoft.com/en-us/graph/api/resources/extended-properties-overview?view=graph-rest-1.0
Hello !! I am trying to set a crop region by curves, and am trying both with curves and a bounding box, but none of them is working, and it's not giving any warning, so I don't know what I am doing wrong. Any idea? Thanks

Hello !! Maybe you can show us your method - how are you trying to set it? One possibility is that you set the cropBox correctly but you don't make it active. I don't have a solution for curves at the moment (can try later) - but this works for a bounding box:

import clr
clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Transactions import TransactionManager
from RevitServices.Persistence import DocumentManager

clr.AddReference("RevitNodes")
import Revit
from Revit.Elements import *
clr.ImportExtensions(Revit.GeometryConversion)

doc = DocumentManager.Instance.CurrentDBDocument

# The inputs to this node will be stored as a list in the IN variables.
view = UnwrapElement(IN[0])
bb = UnwrapElement(IN[1])

TransactionManager.Instance.EnsureInTransaction(doc)
view.CropBox = bb.ToRevitType()
view.CropBoxActive = True
TransactionManager.Instance.TransactionTaskDone()

# Assign your output to the OUT variable.
OUT = 0

For the bounding box part - I think the node is expecting a bb list and you give a single bb. As for the byCurves path - have you tried inputting a list of curves instead of a polycurve? The node might also be expecting a list of lists (of curves), one curve list for one view. Do you want to set the same cropBox for all the inputted views, by the way?

No, I didn't try with a list, so I will work on it. For now, I am trying to set for all views, but then I will change it to Current Selection with a boolean. Thank you a lot.

Take a look at this link and see if it can help. I think the MEPOver package may have nodes to help.

Thank you SeanP. Now, I am very close to a solution, but have one problem. I cannot understand why, but this script is working for only one view and not for any other. When I am trying to select another view, it's not working. And it's totally not working when I am selecting several views. Maybe I have to try without the boolean.

@TomArchitect The View.GetCropBoxCurves works only for ONE View unlike the View.SetCropBoxCurves (which works for multiple Views). I ran into similar (a) problem(s). I am looking for a solution to get the Crop Regions from MULTIPLE Views.
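For the multiple-views case discussed in the thread, the bounding-box snippet above can be extended with a simple loop. The sketch below is untested and assumes IN[0] is a list of views and IN[1] a matching list of bounding boxes:

import clr
clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Transactions import TransactionManager
from RevitServices.Persistence import DocumentManager

clr.AddReference("RevitNodes")
import Revit
from Revit.Elements import *
clr.ImportExtensions(Revit.GeometryConversion)

doc = DocumentManager.Instance.CurrentDBDocument

views = UnwrapElement(IN[0])   # list of views
bbs = UnwrapElement(IN[1])     # list of Dynamo bounding boxes, one per view

TransactionManager.Instance.EnsureInTransaction(doc)
for view, bb in zip(views, bbs):
    view.CropBox = bb.ToRevitType()
    view.CropBoxActive = True
TransactionManager.Instance.TransactionTaskDone()

OUT = views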
https://forum.dynamobim.com/t/set-crop-region/40271
I need something that will force FileSystem.FileExists to continuously ping the path until it sees the file. (I've searched without success.) Once it verifies the file exists, then it will continue. Any help is appreciated. Thank you!

Why not ask once, and if not create the file; otherwise pass the result?

Maybe you could use a while loop to do this, don't know if it's really possible.

@JacobSmall I'm printing PDFs (you can see the Shared section for my DYN file). There's an issue with merging the final created set of PDFs in that it will often merge before the final PDF is created. So, if I could simply verify that the final printed PDF exists then it would guarantee that all PDFs would be included during merge.

Ah. With PDFs and some other 'driver' based files the fact that the file exists is often not enough information, and just because they are sent doesn't mean they are done (which is your exact issue). To complicate things even further, just because the PDF file exists doesn't mean it's done writing to disc. In some cases the driver will actually create the file, open the file, add the content to the file, save the file, and close it. It's unlikely you'd get this outcome with most drivers for single sheet PDFs, but it's possible to run into concurrent access issues and other problems all the same. As such it may be best to wait for confirmation from the printer itself, but that's a difficult task which will vary based on your driver. Alternatively, you could write the expected list of PDFs to a text file, and run a second graph that reads the path to each of the generated PDFs and combines them into a single file. The benefit here would be that you would get consistent results, and you could easily adjust the order and combine PDFs from other sources (i.e. the specification, which likely didn't come from Revit) while you were at it.

Not a great method, as there is no way to know how long it takes to write a PDF (depends on the content being printed), but you could use Python's sleep function.

import time
time.sleep(10)
OUT = IN[0]

This will pause the script for 10 seconds (my input; change the variable as needed) and then pass the input information on to the node. The TimeSpan nodes are just to check that it is working. As you can see, it took longer than 10 seconds to complete this very simple script. Again, this does not mean that the PDF will be completed after the given sleep time, and it could also force you to wait longer, as the PDF could complete well before the sleep timer ends.
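Building on the while-loop suggestion above, a polling version with a timeout might look like the following in a Dynamo Python node. This is only a sketch: the path input, timeout and poll interval are example values, and, as noted above, the file appearing on disk still does not guarantee the driver has finished writing it.

import os
import time

path = IN[0]              # full path of the last PDF you expect to be written
timeout_seconds = 120     # give up after two minutes
poll_interval = 1         # check once per second

waited = 0
found = os.path.exists(path)
while not found and waited < timeout_seconds:
    time.sleep(poll_interval)
    waited += poll_interval
    found = os.path.exists(path)

# Note: existence alone does not prove the driver has finished writing;
# watching the file size until it stops changing is a further refinement.
OUT = found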
https://forum.dynamobim.com/t/is-there-a-continuous-execute-until-true-node/51906
Developer's tips for testing

Matplotlib's testing infrastructure depends on pytest. The tests are in lib/matplotlib/tests, and customizations to the pytest testing infrastructure are in matplotlib.testing.

Requirements

Install the latest version of Matplotlib as documented in Retrieving and installing the latest version of the code. The following software is required to run the tests:

- pytest (>=3.6)
- Ghostscript (>= 9.0, to render PDF files)
- Inkscape (to render SVG files)

Optionally you can install:

- pytest-cov (>=2.3.1) to collect coverage information
- pytest-flake8 to test coding standards using flake8
- pytest-timeout to limit runtime in case of stuck tests
- pytest-xdist to run tests in parallel

Running the tests

Running the tests is simple. Make sure you have pytest installed and run:

pytest

in the root directory of the repository, or, if the tests are installed, pass a dot-separated path to the module, optionally followed by the function name separated by two colons, such as:

pytest --pyargs matplotlib.tests.test_simplification::test_clipping

If you want to run the full test suite but want to save wall time, try running the tests in parallel:

pytest --verbose -n 5

An alternative that does not look at command-line arguments and works from within Python is to run the tests via the Matplotlib library function matplotlib.test():

import matplotlib
matplotlib.test()

Writing a simple test

Many elements of Matplotlib can be tested using standard tests; matplotlib.tests.test_basic contains a very simple example test of this kind. If a test uses random data, fix the seeds of both NumPy's random number generator (np.random.seed) and Python's random number generator (random.seed) so that the results are reproducible.

Known failing tests

If you're writing a test, you may mark it as a known failing test with the pytest.mark.xfail() decorator. This allows the test to be added to the test suite and run on the buildbots without causing undue alarm. For example, although the following test will fail, it is an expected failure:

import pytest

@pytest.mark.xfail
def test_simple_fail():
    '''very simple example test that should fail'''
    assert 1 + 1 == 3

Note that the first argument to the xfail() decorator is a fail condition, which can be a value such as True, False, or may be a dynamically evaluated expression. If a condition is supplied, then a reason must also be supplied with the reason='message' keyword argument.

Creating a new module in matplotlib.tests

We try to keep the tests categorized by the primary module they are testing. For example, the tests related to the mathtext.py module are in test_mathtext.py.
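For reference, a minimal test of the kind described under "Writing a simple test" might look like the sketch below. It is illustrative only, not the exact example from matplotlib.tests.test_basic; note how both random seeds are fixed so results are reproducible:

import random

import numpy as np


def test_simple():
    """A deliberately trivial example test."""
    assert 1 + 1 == 2


def test_uses_random_data():
    """Fix both seeds so the test gives the same result on every run."""
    random.seed(1)
    np.random.seed(1)
    data = np.random.rand(10)
    assert data.shape == (10,)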
https://matplotlib.org/devel/testing.html
Adding Workflow to a project

This document will guide you through the process of adding Workflow to an iOS project.

Libraries

You'll need the following libraries:

import Workflow
import WorkflowUI
import ReactiveSwift

The easiest way to integrate these libraries is via CocoaPods. If you are using CocoaPods, you can simply add the dependencies to your .podspec:

# MySoftware.podspec
Pod::Spec.new do |s|
  # ...
  s.dependency 'Workflow'
  s.dependency 'WorkflowUI'
  s.dependency 'ReactiveSwift'
  # ...
end
https://square.github.io/workflow/tutorial/adding-workflow-to-a-project/
I have been doing 32-bit buffer overflows for some time and I decided to try some 64-bit overflows, to explore some more realistic scenarios. I have compiled my code with gcc -fno-stack-protector -z execstack -no-pie overflow.c -o Overflow. Here is the code:

#include <stdio.h>
#include <string.h>

void function(char *str) {
    char buffer[32];
    strcpy(buffer, str);
    puts(buffer);
}

int main(int argc, char **argv) {
    function(argv[1]);
}

Using gdb I determined how many bytes I need to write to control the return address. This is 40 bytes. So at first I tried to write 40 bytes of "A" and then 6 bytes of "B" to test the control of the return address. Here is a screenshot: I found and tested a 23-byte shellcode that executes "/bin/sh", so I try to write a NOP sled of 13 bytes, the shellcode and the first 6 bytes of the return address that need to change. So I come up with this (in gdb):

r $(python -c 'print "\x90"*13+""+"\x10\xe1\xff\xff\xff\x7f"')

I have set 2 breakpoints before and after the execution of strcpy and examined the memory. This is the stack before the strcpy, where address 0x00007fffffffe138 holds the return address of the function function. And this is the stack right after the strcpy execution. So in my understanding, after I press c to continue the execution, I must "return" to the NOP sled and then execute the shellcode in gdb. Instead I get a SIGILL, for illegal instruction. I cannot figure out why this is happening; any help/suggestions/pointers would be much appreciated.
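For reference, payloads like the one in the question are often assembled with a small helper script rather than an inline python -c one-liner. The sketch below only illustrates one common layout using the numbers quoted above; the 23-byte shellcode itself is omitted here, just as it is in the question, and the padding is simply the 40-byte offset minus the shellcode length:

# build_payload.py  (Python 2, matching the python -c usage above)
offset    = 40                                   # bytes needed before the saved return address
shellcode = ""                                   # the 23-byte /bin/sh shellcode, omitted here as in the question
nopsled   = "\x90" * (offset - len(shellcode))   # pad so the shellcode ends right at the return address
ret_addr  = "\x10\xe1\xff\xff\xff\x7f"           # little-endian address inside the NOP sled (value from above)

print nopsled + shellcode + ret_addr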
https://proxieslive.com/64bit-buffer-overflow-fails-with-sigill-cannot-understand-the-reason/
OK, I get:

Traceback (most recent call last):
  File "/Users/ajcann/Desktop/Python/googPlusFrFo-preliminarySketch.py", line 1, in <module>
    import networkx as nx
ImportError: No module named networkx

How do I get networkx in Python (OS X)? Thanks, @alan

If you have easy_install on your machine, just type: easy_install networkx. Otherwise, follow this: (download and unzip, cd to the directory, type: python setup.py install)

Thanks, easy_install networkx worked. Do I have to do that each time or is the installation persistent?

@alan the library should be available from now on without the need to reinstall.

Nice graph. What program do you use to render graphml?

Graphs are rendered using Gephi – gephi.org
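A quick way to check that the installation worked, and to produce a GraphML file that Gephi can open as mentioned above, is something like the following (the node names and file name are arbitrary):

import networkx as nx

g = nx.Graph()
g.add_edge("me", "friend A")
g.add_edge("me", "friend B")

print(g.number_of_nodes(), g.number_of_edges())

# Write the graph out as GraphML so it can be opened and styled in Gephi
nx.write_graphml(g, "circles.graphml")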
https://blog.ouseful.info/2011/10/16/so-where-am-i-socially-situated-on-google/
Currently I’m using a homespun Unit Test system that I developed before I looked at any others so I would have a clear idea of my own pre-conceptions when reviewing The Real Thing. Some time ago, I found this excellent article on C++ Unit Testing. Given that I created my system for a similar benchmarking purpose, I still find it clunky. Despite Noel’s indepth review, I still find all of those he linked, and the one he himself has since created, just as clunky. I feel we both set the bar too low for an “ideal” testing suite. Maybe this is the maintenance programmer in me, but I think Unit Testing and documentation should go hand-in-hand. Maybe I notice it more because I’m just wetting my feet in Doxygen, but it’s always bothered me that I have to “write” tests for all the minutae. Consider the following: #include "tests/testSuite.h" #include "include/async-worker.h" //! Worker that changes the value of a variable. class TestWorker1 : public Async::FireAndForget { private: TestWorker1() { throw std::logic_error("Default constructor called") ; } public: TestWorker1(volatile unsigned int* dest, unsigned int value, volatile bool* destructorTracker) : Async::FireAndForget() , m_dest(dest) , m_value(value) , m_destructorTracker(destructorTracker) { ASSERT( ExpectFalse( *m_destructorTracker ) ) ; } ~TestWorker1() { ASSERT( ExpectNotEqual( m_destructorTracker, NULL ) ) ; ASSERT( ExpectFalse( *m_destructorTracker ) ) ; *m_destructorTracker = true ; } virtual bool Work() const { *m_dest = m_value ; } volatile unsigned int* m_dest ; //!< Address to write to. unsigned int m_value ; //!< *Value* to apply to m_dest. virtual bool* m_destructorTracker ; //!< Set to true when destructor called. } ; class AsyncWorkerTest : public TestUnit { public: AsyncWorkerTest() : TestUnit("AsyncWorker") {} virtual bool Run() { START_TEST_STAGE( "Work Execution" ) static const unsigned int testPattern = 0xa37e8800ff ; volatile unsigned int testDest = 0 ; volatile bool destructorCalled = false ; TestWorker1* worker = new TestWorker1(&testDest, testPattern, &destructorTracker) ; START_SUB_STAGE( "Initialization" ) // Sanity checks. ExpectEqual( testDest, 0 ) ; ExpectEqual( worker->m_dest, &testDest ) ; ExpectEqual( worker->m_value, testPattern ) ; ExpectFalse( destructorCalled ) ; END_SUB_STAGE( "Initialization" ) START_SUB_STAGE( "Dispatch" ) // Dispatch the workload to a thread. worker->Queue() ; END_SUB_STAGE( "Dispatch" ) START_SUB_STAGE( "Execution" ) // Should take less than a microsecond to run, but // we'll wait up to half a second to give it chance. // (fac(1000) is close to 500,000) size_t waited = 0 ; for ( size_t i = 0 ; *testDest == 0 && waited < 500000 ; waited += i, ++i ) { OpenThreads::YieldCurrentThread() ; // Incremental wait. OpenThreads::CurrentThread->microSleep(i) ; } // Test for completion. ExpectEqual( *testDest, testPattern ) ; // Test we didn't wait too long. static const size_t MaximumMicroSecondWaitExpected = 5 ; // 5 microseconds is way too long. Expect( waited, <, MaximumMicroSecondWaitExpected ) ; // Produce a warning if we actually had to wait more than once, if at all. IDEALLY( ExpectLt( waited, 2 ) ) ; // Destructor should have been called. 
ExpectTrue( destructorCalled ) ; END_SUB_STAGE( "Execution" ) END_TEST_STAGE( "Work Execution" ) } } ; (My system has the concept of stages and sub-stages, where sub-stages can be nested; that allows you to avoid the overhead of fixtures, the downside of which is it leads to “Run” functions which are arguably too long) The need to write tests for things ought not to be so entirely decoupled so from the original source or the advertised API. And it should be somewhat language agnostic. My thinking is that a large degree of the basic testing ought to be able to be written as part of the documentation/prototyping process itself. namespace Async //!< Houses classes that encapsulate work-offloading technique with ZeroMQ. { /*! * Workload that can safely be executed at some * future point in the background and delete itself. * * Inherit to your own class, marshall with all data * that will be used and implement the virtual bool Work * function ensuring all work is thread safe. * * Call the worker->Queue() function on a pointer to * an instance to hand off to the worker pool. * * @seealso RunAndReturn * @seealso PooledProcessing */ class Payload { public: //! Default constructor. //? [constructor] m_hasRun equals false Payload() : m_hasRun(false) {} public: // (Note: This code wouldn't work ;-P) //! Determine if the object should auto-destruct. //? [member] DiscardOnCompletion() equals true or false virtual bool DiscardOnCompletion() const = 0 ; public: //! Pass workload to the worker pool. //! If DiscardOnCompletion is true, the //! object will self-delete after usage. /*? *? [member] Queue() *? [wait upto 500 miliseconds, < 1 microsecond ideal, 5 microseconds too long] m_hasRun equals true */ void Queue() const ; public: //! Encapsulates the work load to be executed //! asynchronously. //! @return true if work completed successfully, otherwise false. virtual bool Work() = 0 ; private: bool m_hasRun ; //!< Purely to demonstrate constructor test. } ; } /*? * "optional" so it doesn't fail when being tested by a 3rd-party. *? [optional] @include tests/longWindedAsyncTest.tests * * A language-specific test example: *[ Lua: * require 'async-worker' * myWorker = Async.Work:new() * myWorker.worked = false * function myWorker:Work() * self.worked = true * return true * end * function myWorker.DiscardOnComplete() * return false * end * myWorker.Queue() *? [wait upto 500 miliseconds, < 1 microsecond ideal, 5 microseconds too long] myWorker.worked equals true *] */ It needs to be a pseudo-natural language so that it “reads” as the documentation does, describing constraints and API expectations. For example, documenting a function that returns 3 values: // f(N) returns 0 if N equals 0, // returns 1 if N equals 1, // else returns 4 // int f(int N) { if ( n == 0 ) return 0 ; if ( n == 1 ) return 1 ; return 3 ; } [/sourecode] (good, you were paying attention) Just writing this made me aware of some of the reasons why we perhaps don’t already see something like this: the case of the constructor forced me to stop and think. But still, sharper minds than mine must already have toiled at this and there has to be something like this out there already, skulking in the dark recesses of the web that Google just doesn’t frequent? Am I about to rush off and try and create this myself? Not this time; for a start, I don’t much feel like running off and creating my own C/C++ parser; something like this would probably make a good Doxygen extension, though. 
One Comment Not exactly what you are talking about, but “natural language” specifications of tests are what Cucumber does in the Ruby world: I haven’t used it so I cannot comment much, but I thought you might find the approach interesting. S!
https://kfsone.wordpress.com/2010/07/18/automating-unit-test-generation/
CC-MAIN-2017-17
en
refinedweb
This post is inspired by an excellent post called Web Scraping 101 with Python. It is a great intro to web scraping to Python, but I noticed two problems with it: - It was slightly cumbersome to select elements - It could be done easier If you ask me, I would write such scraping scripts using an interactive interpreter like IPython and by using the simpler CSS selector syntax. Let’s see how to create such throwaway scripts. For serious web scraping, Scrapy is a more complete solution when you need to perform repeated scraping or something more complex. The Problem We are going to solve the same problem mentioned in the first link. We are interested in knowing the winners of Chicago Reader’s Best of 2011. Unfortunately the Chicago Reader page shows only the five sections. Each of these sections contain award categories e.g. ‘Best vintage store’ in ‘Goods & Services’. Within each of these award category pages you will find the winner and runner up. Our mission is to collect the names of winners and runner ups for every award and present them as one simple list. The Setup Start python, IPython, bpython or any other interactive python interpreter of your choice. I shall be using IPython for the rest of this article. A common starting point for most web parsing needs is getting a parsed web page from a URL. So let’s define our get_page function as follows: from urllib2 import urlopen from lxml.html import fromstring def get_page(url): html = urlopen(url).read() dom = fromstring(html) dom.make_links_absolute(url) return dom Within the get_page function, the first line downloads the page using urlopen function and returns it’s contents in the form of a string. The second line uses lxml to parse the string and returns the object representation of the page. Since, most links in the html page will be relative pages we will convert them to absolute links. For e.g. a link like /about will be converted into. This makes it easy to call get_page function on such URLs later. Selecting Page Elements Next we need to invoke this function and select parts of the document. But before that we need to know which parts we need. I prefer using CSS selector syntax compared to XPaths for selecting nodes. For examplem, the path to the same element in these two different syntax are shown below: CSS Path: html body#BestOf.BestOfGuide div#gridClamp div#gridMain div#gridFrame div#gridMainColumn div#StoryLayout.MainColumn div#storyBody.page1 strong p a XPath: /html/body/div[3]/div[2]/div/div[2]/div[5]/div/strong/p[2]/a CSS paths might be longer but are easier to understand. More importantly, they are easier to construct. On Firefox, you can use Firebug to right click on any page element to get it’s CSS path. On Chrome, you will not be able to copy the CSS path but you can see it displayed on the status bar at the bottom Selector Gadget These CSS paths are extremely long and I wouldn’t recommend using them. They are too specific and tied to the overall document structure, which might change. Moreover, you can shorten a CSS selector path without affecting it’s specificity. I recommend using a bookmarklet called Selector Gadget which elegantly solves both these problems. It also works across browsers. First drag the bookmarklet to your bookmark toolbar. Open any page and click on the Selector Gadget to activate it. Now click on the element for which you want the CSS selector. Once you click an element, it will turn yellow and the CSS selector will appear in the gadget. 
Many other elements matching that selector will be also shown in yellow. Sometimes, elements which you do not require are also matched. To eliminate that, click on an element you DO NOT want to match. Continue this process of selection and rejection till you get the exact CSS selector you want. Click on the ‘Help’ button for instructions. Using iPython Start your iPython interpreter and paste the lines of code, we saw previously: $]: from urllib2 import urlopen In [2]: from lxml.html import fromstring In [3]: def get_page(url): ...: html = urlopen(url).read() ...: dom = fromstring(html) ...: dom.make_links_absolute(url) ...: return dom ...: In [4]: dom = get_page("") In the last line, you retrieve the initial page you would like to be scraped and assign its parsed DOM object into dom. In the next three commands, cssselect function is invoked with the CSS selector “#storyBody p a” to get all the section links. The result is a list. Since we need just the URLs, we run a list comprehension across the list of links. In [5]: dom.cssselect("#storyBody p a") Out[5]: [<Element a at 0x336ae90>, <Element a at 0x336afb0>, <Element a at 0x336c2f0>, <Element a at 0x336c3b0>, <Element a at 0x336c170>, <Element a at 0x336c350>] In [6]: [link.attrib['href'] for link in _] Out[6]: ['', '', '', '', '', ''] In [7]: secns = _ Note that we are using the underscore ‘_’ symbol to refer to the result of the previous command. With this tip, we can avoid inventing names for temporary results. Also whenever we get a result worth keeping, we can name them in hindsight. Finding all categories Next we need to retrieve and parse each section page. It can be easily done with the following list comprehension. The second command is a nested list comprehension with two loops. As before, we just need the urls. All 389 of them, each representing an award category. In [13]: doms = [get_page(secn) for secn in secns] In [14]: [link.attrib['href'] for dom in doms for link in dom.cssselect("#storyBody a")] Out[14]: In [15]: categs=_ In [16]: len(categs) Out[16]: 389 Finding the title, winner and runner-up Next, open any url from the categs list and find CSS selectors for our items of interest. These three items are: award category title, winner and runner-up. Since cssselect function returns a list (even if only one match is found) we need to extract the 0-th element. Another function called text_content is applied to get just the information we are looking for. In [17]: categ = categs[0] In [18]: dom=get_page(categ) In [19]: dom.cssselect("h1.headline")[0].text_content() Out[19]: u'Best longtime cause worth fighting for\xa0' In [20]: dom.cssselect(".boc1")[0].text_content() Out[20]: 'Public school reform' In [21]: dom.cssselect(".boc2")[0].text_content() Out[21]: 'Recycling in Chicago' Named Tuples - Ideal data structures for scraped input Earlier, tuples were used for storing scrapped results. They use less memory compared to dictionaries. Recently, Python has support for named tuples which are much clearer to use and just as memory efficient. The next few commands loops through all the award categories and adds a named tuple for each. To avoid fetching too many pages, I have truncated the list to only the first two items. 
In [22]: from collections import namedtuple In [23]: Award = namedtuple("Award", "title, winner, runnerup") In [24]: awards = [] In [25]: for categ in categs[:2]: dom=get_page(categ) title = dom.cssselect("h1.headline")[0].text_content() winner = dom.cssselect(".boc1")[0].text_content() runnerup = dom.cssselect(".boc2")[0].text_content() a = Award(title=title, winner=winner, runnerup=runnerup) awards.append(a) In [36]: awards Out[36]: [Award(title=u'Best longtime cause worth fighting for\xa0', winner='Public school reform', runnerup='Recycling in Chicago'), Award(title=u'Best historic building\xa0', winner='Chicago Cultural Center', runnerup='The Rookery')] Power of Interactivity For one-time scraping scripts, it is often best to use just the Python interpreter. I have tried to walk you through how I would attack the problem of scraping a set of web pages. Hope you found it useful!
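If you want the whole session rolled up into a single throwaway script, it could look something like this. Everything is taken from the commands above; the only assumption is the START_URL placeholder, since the full Chicago Reader listing URL is not reproduced here, and note that this version walks every category page rather than just the first two, so it takes a while to run.

from collections import namedtuple
from urllib2 import urlopen
from lxml.html import fromstring

Award = namedtuple("Award", "title, winner, runnerup")

def get_page(url):
    html = urlopen(url).read()
    dom = fromstring(html)
    dom.make_links_absolute(url)   # turn relative hrefs into absolute URLs
    return dom

def scrape(start_url):
    dom = get_page(start_url)
    # Section pages linked from the landing page
    secns = [a.attrib['href'] for a in dom.cssselect("#storyBody p a")]
    # Award-category pages linked from each section page
    categs = [a.attrib['href']
              for secn in secns
              for a in get_page(secn).cssselect("#storyBody a")]
    awards = []
    for categ in categs:
        d = get_page(categ)
        awards.append(Award(
            title=d.cssselect("h1.headline")[0].text_content(),
            winner=d.cssselect(".boc1")[0].text_content(),
            runnerup=d.cssselect(".boc2")[0].text_content()))
    return awards

if __name__ == '__main__':
    START_URL = "http://www.chicagoreader.com/..."  # placeholder: the Best of 2011 landing page
    for a in scrape(START_URL):
        print u"%s: %s (runner-up: %s)" % (a.title.strip(), a.winner, a.runnerup)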
http://arunrocks.com/easy-practical-web-scraping-in-python/
CC-MAIN-2017-17
en
refinedweb
Gardeners understand the problems that insects can cause to their plants. Entire gardens can be destroyed in short time. To stop insects from killing the plants, gardeners use insecticides. In fact, different types of insecticides may be needed. Some are organic. Some are chemical. Some kill off one type of insect, others kill off a different type of insect. And, of course, another name for the insect is bug. Bugs kill our Software Garden as quickly as insects kill gardens. Our insecticide is good testing. Just as gardens use different types of insecticides, as software gardeners, we should use different types of testing. Integration, capacity, acceptance, functional, system, unit, regression, stress, and performance are just some of the testing types we can use. Of all these types of testing, there is one in particular that is of interest to developers and is the first line of defense when considering insecticides for your software. That one test type is unit testing. Many people compare software development to building construction. But software acts more organic than a building. It's more like a garden. This column discusses concepts, techniques, practices, and tools to help you get the most from your software garden. Read all our Software Gardening articles here Think for a minute about your code. Do you have statements that include if, switch, for, foreach, while, do...while? Now think about traditional testing techniques where the developer writes the code, compiles it, checks it in, and then throws the executable over the wall to the QA team. They go to work, testing the running program, finding bugs, then throwing things back to the developer, who writes code, compiles it, checks it in, and throws it over the wall again. This cycle continues on and on and on. Why does this cycle continue? One reason is the QA team has no idea where every if, for, foreach, while, etc. exist in the code. No, scratch that. They don’t know where any of those statements are in the code because they don’t see the code. How can QA possibly know how or where to test the application? But it’s worse than you think. In his seminal book, “Testing Computer Software”, Cem Kaner talks about G.J. Myers who, in 1976 described a 100-line program that had 1018 unique paths. Three years later, Meyers described a much simpler program. It was just a loop and a few IF statements. In most languages you could write it in 20 lines of code. This program has 100 trillion paths. Those numbers are daunting. How can you possibly test this? The short answer is, you can’t. The long answer is, Meyers had contrived examples, but they do show how complicated code can be and how important it is for the developer to write unit tests. In this issue, I’m going to show you how to get started with unit testing using a simple ASP.NET MVC application. You’ll see how to setup the test and remove the database from the testing process. Through it all, I’ll keep it simple. I will not discuss java script testing nor will you see production ready code. I’m also not going to talk about Test Driven Development (TDD) because if you’re learning unit testing, you have enough to figure out without turning the entire development process upside down. Unit tests are written by the person who wrote the code. Who else knows the code better? Unit tests should be run entirely in memory and should not access databases, nor external files or services. A unit is a small piece of code; for example, a method. 
The method should be kept small and should do one thing and one thing only. Not only does this make the code easy to test, it makes it easy to maintain. To write unit tests, you need a unit testing framework. Some popular ones are MSTest that comes with Visual Studio, NUnit, and XUnit. There are others. I will use NUnit in my examples. Don’t worry about downloading NUnit. We’ll use NuGet to add this to our test project. You then need a way to run tests. Test runners execute the tests and give feedback about which tests pass and which fail. This is typically called red/green. Red tests fail. Green tests pass. You need two types of runners. The first is some type of GUI application that you use while you are creating code. The other is a console application that you can use on your Continuous Integration (CI) server. Choose the runner that supports the testing framework you have chosen. Again, MSTest is built into Visual Studio, but NUnit and XUnit have their own runners. Additionally, the Visual Studio test runner has been opened up so that it will also run NUnit and XUnit tests. There are also commercial tools, such as Resharper, that have test runners. Then there are commercial tools, such as NCrunch and Test Driven.Net, that are specially designed just for running unit tests. I’m going to use NUnit using the Visual Studio test runner for the examples, then at the end, I’ll show you NCrunch and tell you why it’s a superior tool and how increased productivity will pay the cost of the tool. So, which unit testing framework should you choose? If you are in a Team Foundation Server (TFS) shop, MSTest is probably your best choice. The test runner is built into Visual Studio and TFS. If you are not in a TFS shop, choose one of the others because you can’t separate the MSTest runner from Visual Studio or TFS to install on your build server. I use TeamCity from JetBrains as my Continuous Integration server. It has its own test runners for NUnit and MSTest. It also a built-in test coverage tool for NUnit and supports console application test runners for other unit test frameworks. Before creating our project, let’s enable NUnit. We’ll do this in two steps. First, install the NUnit Test Adapter. This will allow NUnit to integrate into Visual Studio and use the Visual Studio test runner. To install the NUnit Test Adapter, select Tools > Extensions and Updates from the Visual Studio menu. On the left-hand menu, select Online, then type NUnit into the search box. Click Install and walk through the install process. You’ll need to restart Visual Studio when you’re done. We’ll handle the second part of adding NUnit after creating the project. My example application is an ASP.NET MVC 5 application. When I created it, I named the solution DncDemo and the project DncDemo.Web. I checked Add unit tests and renamed the unit test project to DncDemo.UnitTests. I did not change authentication options. I unchecked Host in the cloud. Once Visual Studio has create the project, we can move on to the second part of NUnit setup; adding the NUnit assemblies. You can do this through the NuGet Package Manager. Be careful that you add it only to the DncDemo.UnitTests project. Add a reference to the DncDemo.Web project. Go ahead and add a simple Model, Controller, and some Views. I have one named Customer that is for simple CRUD of the Customer table. In a production application, I have all data access go through a Data project (in this example, it would be named DncDemo.Data) and rename the Models folder to ViewModels. 
I am not doing that here because I want to keep things simple. As a best practice, MVC controllers should be very thin, meaning they should not do anything other than pass the request on to another object and then return the result back to the View. In other words, there shouldn’t be any data access or business logic in the Controller. In the DncDemo.Web project, add a new folder named Services. The controller will use classes in this folder to handle everything. Now, let’s create the first unit test. We’ll start with something simple, the Index method of the Customer class. The first thing to do is think of something to test. Let’s see, we can make sure we get all the rows in the customer table. I’ll walk you through the steps. 1. The Index method of the CustomersController has some data access code. Start by refactoring it out. Create a new class named CustomerService in the Services folder. Move code from the CustomersController to the service. public class CustomerService { private Context db = new Context(); public List GetCustomersForIndex() { return db.Customers.ToList(); } } 2. Now update the controller to use the service. When you refactor, work slowly, working on small parts of code at a time. Eventually, every method in the controller will be refactored to call the service and you can remove the instantiation of the Context object. I’ll leave most of the work for you to do as an exercise. private Context db = new Context(); private CustomerService service = new CustomerService(); public ActionResult Index() { return View(service.GetCustomersForIndex()); } 3. The refactoring is done for now, so you should test the code. Since we don’t have any unit tests yet, you’ll need to run the site and navigate to the Customer Index page. 4. Now add a new class, CustomerTests to the DncDemo.UnitTests project. Add a reference to NUnit.Framework. using DncDemo.Web.Services; using NUnit.Framework; namespace DncDemo.UnitTests { [TestFixture] public class CustomerTests { [Test] public void Index_Returns_AllRows() { // Arrange CustomerService service = new CustomerService(); // Act var actual = service.GetCustomersForIndex(); // Assert Assert.AreEqual(3, actual.Count); } } } The TestFixture attribute tells NUnit that the class contains tests. The Test attribute tells NUnit that the method is a test. The name of the method is important. It should be made up of three parts, what you are testing (Index), what test actually does (Returns), and what the expected result is (AllRows). A test should test for one thing and one thing only. You may have many tests that do nothing but return the results of the GetCustomersForIndex method, each testing for a different result. Now down in the code, three things are happening. Every good unit test does these three things - Arrange, Act, and Assert. Arrange is where you do all the setup needed to prepare to run the method under test. Act is where you actually call the method. Assert is where you compare the results of the method with what you expect. Now that the code has been refactored and the unit test added, it’s time to run the test. Here are the steps: You can see the test results in several places. First in the code for the service. Finally, in the Test Explorer window. In all three places, you see a red circle with an X, indicating the test did not pass. The Test Explorer gives additional information, the time it took to run the test. So, we know there is an error because the test failed. Can you see it? Can you see other problems? 
The Assert is explicitly checking that three rows were returned. How can we know there are three rows in the table? The name of the method is Index_Returns_AllRows. Yet, we are checking for three. Finally, the test won’t even run. It throws an error because it can’t even get to the database. With all this information, we need to fix something. First, you have to figure out what’s not working. In this case, the code can’t reach the database because the app.config file for the unit test project doesn’t know about the database. Don’t add a connection string. Remember, unit tests should run entirely in memory. We need a way to 1) remove the database access and 2) know how many rows are in the “table” that we query. Removing the database is easier than is sounds. We’ll do it in two steps, repositories and mocks. The cool thing is, once we’re done, the exact same code will work for unit testing or for when we actually run the web site. This is important. If we have to change code between unit testing and actually running, we could have untested code that contains bugs. When you look at the code, the CustomerService class instantiates the context. You might be inclined to use IContext instead of DbContext to get rid of the database. But Entity Framework didn’t have IContext until version 6 and even then, it’s not really suitable for unit testing. We’ll fall back to a well-known pattern called the Repository Pattern. In the Models folder, create a new interface called ICustomerRepository. In this interface, you’ll define all the methods needed to access the Customer table. You should name each method something that makes sense for the context it’s used for. In the code, I’ve defined four methods even though we’ll only implement one of them in this column. public interface ICustomerRepository { IQueryable GetAll(); Customer GetById(int id); void Save(Customer customer); void Delete(Customer customer); } Now to implement the interface. Add the class CustomerRepository to the Models folder. public class CustomerRepository : ICustomerRepository { private Context _context = new Context(); IQueryable ICustomerRepository.GetAll() { return _context.Customers; } Customer ICustomerRepository.GetById(int id) { throw new NotImplementedException(); } void ICustomerRepository.Save(Customer customer) { throw new NotImplementedException(); } void ICustomerRepository.Delete(Customer customer) { throw new NotImplementedException(); } } The Context class in instantiated as a field and then used to get the actual data from the database. Finally, we need to refactor the CustomerService class to use the repository. public class CustomerService { private ICustomerRepository customerRepo = new CustomerRepository(); public List GetCustomersForIndex() { return customerRepo.GetAll().ToList(); } } The unit tests won’t run yet, but you can run the web site to verify data is getting returned to the Index method. Don’t confuse running the site with running a unit test. Think about what takes less time, running the site, logging in, navigating to the page and then verifying the data or running the unit test. Next up, we need to make the test pass. To do this, we will trick the CustomerService class into thinking there really is a database. It’s actually quite easy because of the ICustomerRepository interface. To make it easier, we’ll use a mock, which is nothing more than a fancy way of faking out the CustomerService class. A mocking framework makes this easier to do. I’ll use one called Moq. 
Using NuGet, add Moq to the DncDemo.UnitTests project. Do not add it to the DncDemo.Web project. Here’s the modified unit test code wired up for the mock. using System.Linq; using DncDemo.Web.Models; using DncDemo.Web.Services; using Moq; using NUnit.Framework; namespace DncDemo.UnitTests { [TestFixture] public class CustomerTests { [Test] public void Index_Returns_ThreeRows() { // Arrange Mock mockRepo = new Mock(); mockRepo.Setup(m => m.GetAll()).Returns(new Customer[] { new Customer {Id = 1, FirstName = "Herman", LastName = "Munster"}, new Customer {Id = 2, FirstName = "Rocky", LastName = "Squirrel"}, new Customer {Id = 3, FirstName = "George", LastName = "Washington"} }.AsQueryable()); CustomerService service = new CustomerService(mockRepo.Object); // Act var actual = service.GetCustomersForIndex(); // Assert Assert.AreEqual(3, actual.Count); } } } First, I renamed the test method to Index_Returns_ThreeRows() to indicate the number of rows returned. This better describes the expected results. Next, in the Arrange section, the mock is instantiated. And then the mock is setup. What that line says is when the GetAll method is called, the return value is an array of three items that is cast to Queryable. In the Act section, note that I changed the instantiation of the CustomerService class to pass an ICustomerRepository to the constructor. So, now we need to fix up that class. public class CustomerService { private readonly ICustomerRepository customerRepo; public CustomerService() { this.customerRepo = new CustomerRepository(); } public CustomerService(ICustomerRepository customerRepo) { this.customerRepo = customerRepo; } public List GetCustomersForIndex() { return customerRepo.GetAll().ToList(); } } What will happen for testing is the mocked ICustomerRepository will be passed in and used. At runtime, CustomerRepository is instantiated using the constructor with no parameters, so it will use the actual database. Now repeat the same five steps for running the unit test. If you look at Test Explorer, you’ll see the test now passes. This is a common pattern when working with unit tests: Write code, write tests, run tests, see the fail, refactor, update tests, run tests. It’s a great feeling when tests pass. It gives better confidence the code is correct and you’ll have fewer bugs. Go back and look at the steps for running unit tests. To summarize, you stop writing code, you wait for the project to compile, you wait for unit tests to run. I can’t stress enough the negative impact this has on productivity. Basically, you’re sitting at your desk, waiting for tests to complete. But there is a better way, NCrunch! At first, you may balk at spending money on a commercial application, but I assure, it will quickly pay for itself due to increased productivity. This is because NCrunch compiles code in the background and displays results right in Visual Studio, line by line. No stopping the code writing process. No waiting for the code to compile. No waiting for tests to run. This is one tool every software gardener needs to have in the toolshed. Download NCrunch from then install it. Open your project and enable NCrunch for the project. From the Visual Studio menu select NCrunch > Enable NCrunch. You may have to walk through the NCrunch Configuration Wizard (an option on the NCrunch menu). When I do this, I usually pick the defaults on every page except Ignored Tests, where I generally select Let my tests run. 
Once you finish the wizard, you’ll notice some black dots along the left edge (called the gutter) of the Visual Studio editor. These appear in both the actual code and the unit test code. The first time, you have to enable the test. Go to the unit test code, right click on the green circle with the line through it and select Unignore starting test. You will immediately see some of the black dots turn to green. This tells you the tests passed. Switch to the CustomerService code. The dots are there too! Yellow dots indicate the line of code that took longer to run than NCrunch thought it should. You may need to see if you can optimize that code. If the dots are red, the test doesn’t pass. A red X indicates the actual line of code that failed. Green, red, and yellow dots also tell you something else. You immediately get a feel for code coverage. This is an important unit testing concept that tells you which lines of code have tests against them and which ones don’t. The black dots indicated untested code. To get code coverage with the Visual Studio test runner you have to stop coding, select the Code Coverage option from the VS Test menu, then wait for code to compile and tests to run. If you now make a change to CustomerService.GetCustomersForIndex or the unit test code, NCrunch will do its work in the background and give you immediate feedback. NCrunch has also has features such as debugging into failing tests, code metrics, support for both NUnit and MSTest frameworks, and many more. I won’t go into these features now. That’s an exercise for you. Note that NCrunch is a Visual Studio add-in so you still need a way to run unit tests on the build server. That’s where something like TeamCity’s unit test runner, TFS, or your unit test framework’s console runner come into play. One test does not complete your testing job. Try to think of other tests to run. For example, what happens if there are no rows returned? Or null? Or you can test to verify the object really is IQueryable. Unit tests should look at two main areas. The first, called the happy path, the code works correctly. The second, the sad path, tests what happens when things don’t go right. Does the application blow up? Is a specific error thrown (unit test frameworks let you check for this). How about bounds checking? A common thing to check for is null. It’s surprising how many applications don’t properly handle null. One question I often get asked is, “How do I integrate this into legacy code?” There are three candidates for code that should get unit tests. First, code you are modifying. Second, code that has lots of bugs or customers complain about most. Third, areas of code that you are afraid to modify because it’s fragile and breaks every time you change it. Don’t be afraid to tell your boss that doing unit testing will add time to write the code. But that time will be far less than writing the code, then fixing bugs that come back from the QA team. And the QA team will spend less time testing the code and more time doing QA activities that you can’t easily automate. Work slowly, refactor to small, easily tested methods. Write the tests as you go. Don’t move onto another section until the section you’re working on passes all its tests. If you have a particularly bad section of code, you might want to set aside half or even a full day a week for refactoring and writing unit tests until you’re satisfied it’s fixed. And don’t check the code into Version Control until all unit tests pass. 
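To give the sad path some shape using the classes we already have, here is a sketch of a test for the empty-database case. It reuses the same NUnit and Moq setup as earlier; the only new decision is what behaviour you choose to assert (here, an empty list rather than null or an exception).

using System.Linq;
using DncDemo.Web.Models;
using DncDemo.Web.Services;
using Moq;
using NUnit.Framework;

namespace DncDemo.UnitTests
{
    [TestFixture]
    public class CustomerSadPathTests
    {
        [Test]
        public void Index_Returns_EmptyList_WhenRepositoryHasNoRows()
        {
            // Arrange - a repository that yields no customers at all
            var mockRepo = new Mock<ICustomerRepository>();
            mockRepo.Setup(m => m.GetAll())
                    .Returns(new Customer[0].AsQueryable());
            var service = new CustomerService(mockRepo.Object);

            // Act
            var actual = service.GetCustomersForIndex();

            // Assert - an empty list, not null and not an exception
            Assert.IsNotNull(actual);
            Assert.AreEqual(0, actual.Count);
        }
    }
}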
There is much more to unit testing and mocks. I encourage you to explore those areas. You may also consider learning about Dependency Injection. It will eliminate the need for two constructors in the service class. Finally, once you get proficient with unit testing, you may want to look at Test Driven Development (TDD). What I’ve shown you here is Test After Development (TAD) where you write the code then write the tests for it. With TDD, you write the test first, then write code to make it pass. It not only tests the code, but tests your assumptions about how the code should be architected. Unit tests are one of the most important tools you can use. They are your first line insecticide against bugs that can kill your application. Every application you write (in our case an ASP.NET MVC application), should use them. By following the gardening practices here, your software will grow and be lush, green, and vibrant.
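As a parting illustration of the Dependency Injection idea, here is a rough sketch of what constructor injection could look like with the same classes. It is not the article's implementation: the container wiring is omitted, and whichever container you pick (Unity, Ninject, StructureMap, and so on) has its own registration syntax.

using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using DncDemo.Web.Models;

public class CustomerService
{
    private readonly ICustomerRepository customerRepo;

    // The only constructor: callers must supply the repository,
    // so no hidden "new CustomerRepository()" remains in the class.
    public CustomerService(ICustomerRepository customerRepo)
    {
        this.customerRepo = customerRepo;
    }

    public List<Customer> GetCustomersForIndex()
    {
        return customerRepo.GetAll().ToList();
    }
}

public class CustomersController : Controller
{
    private readonly CustomerService service;

    // At runtime the DI container resolves this constructor and passes in
    // a CustomerService built around the real CustomerRepository; in tests
    // you pass a service built around a mock repository.
    public CustomersController(CustomerService service)
    {
        this.service = service;
    }

    public ActionResult Index()
    {
        return View(service.GetCustomersForIndex());
    }
}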
http://www.dotnetcurry.com/aspnet-mvc/1043/unit-testing-aspnet-mvc-application
CC-MAIN-2017-17
en
refinedweb
.. _templates_chapter: Templates ========= A :term:`template` is a file on disk which can be used to render dynamic data provided by a :term:`view`. :app:`Pyramid` offers a number of ways to perform templating tasks out of the box, and provides add-on templating support through a set of bindings packages. Out of the box, :app:`Pyramid` provides templating via the :term:`Chameleon` and :term:`Mako` templating libraries. :term:`Chameleon` provides support for two different types of templates: :term:`ZPT` templates, and text templates. Before discussing how built-in templates are used in detail, we'll discuss two ways to render templates within :app:`Pyramid` in general: directly, and via renderer configuration. .. index:: single: templates used directly .. _templates_used_directly: Using Templates Directly ------------------------ The most straightforward way to use a template within :app:`Pyramid` is to cause it to be rendered directly within a :term:`view callable`. You may use whatever API is supplied by a given templating engine to do so. :app:`Pyramid` provides various APIs that allow you to render templates directly from within a view callable. For example, if there is a :term:`Chameleon` ZPT template named ``foo.pt`` in a directory named ``templates`` in your application, you can render the template from within the body of a view callable like so: .. code-block:: python :linenos: from pyramid.renderers import render_to_response def sample_view(request): return render_to_response('templates/foo.pt', {'foo':1, 'bar':2}, request=request) .. warning:: Earlier iterations of this documentation (pre-version-1.3) encouraged the application developer to use ZPT-specific APIs such as :func:`pyramid.chameleon_zpt.render_template_to_response` and :func:`pyramid.chameleon_zpt.render_template` to render templates directly. This style of rendering still works, but at least for purposes of this documentation, those functions are deprecated. Application developers are encouraged instead to use the functions available in the :mod:`pyramid.renderers` module to perform rendering tasks. This set of functions works to render templates for all renderer extensions registered with :app:`Pyramid`. The ``sample_view`` :term:`view callable` function above returns a :term: :term: :ref:`mako_templates`. The path can alternately be a :term:`asset specification` in the form ``some.dotted.package_name:relative/path``. This makes it possible to address template assets which live in another package. For example: .. code-block:: python :linenos: from pyramid.renderers import render_to_response def sample_view(request): return render_to_response('mypackage:templates/foo.pt', {'foo':1, 'bar':2}, request=request) :app:`Pyramid` request. Passing a request keyword argument will cause the ``render_to_response`` function to supply the renderer with more correct system values (see :ref:`renderer_system_values`), because most of the information required to compose proper system values is present in the request. If your template relies on the name ``request`` or ``context``, or if you've configured special :term:`renderer globals`, make sure to pass ``request`` as a keyword argument in every call to to a ``pyramid.renderers.render_*`` function. Every view must return a :term:`response` object, except for views which use a :term:`renderer` named via view configuration (which we'll see shortly). The :func: :func:`pyramid.renderers.render` API renders a template to a string. 
We can manufacture a :term:`response` object directly, and use that string as the body of the response: .. code-block:: python :linenos: from pyramid.renderers import render from pyramid.response import Response def sample_view(request): result = render('mypackage:templates/foo.pt', {'foo':1, 'bar':2}, request=request) response = Response(result) return response Because :term:`view callable` functions are typically the only code in :app:`Pyramid` that need to know anything about templates, and because view functions are very simple Python, you can use whatever templating system you're most comfortable with within :app:`Pyramid`. Install the templating system, import its API functions into your views module, use those APIs to generate a string, then return that string as the body of a :app:`Pyramid` :term:`Response` object. For example, here's an example of using "raw" `Mako Welcome to Note the use of :term:`Genshi` -style ``${replacements}`` above. This is one of the ways that :term:`Chameleon` ZPT differs from standard ZPT. The above template expects to find a ``project`` key in the set of keywords passed in to it via :func:`~pyramid.renderers.render` or :func:`~pyramid.renderers.render_to_response`. Typical ZPT attribute-based syntax (e.g. ``tal:content`` and ``tal:replace``) also works in these templates. .. index:: single: ZPT macros single: Chameleon ZPT macros Using ZPT Macros in :app:`Pyramid` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When a :term:`renderer` is used to render a template, :app: :app:`Pyramid` is a :term:`resource` object, and templates cannot usually be retrieved from resources. To use macros in :app:`Pyramid`, you need to make the macro template itself available to the rendered template by passing the macro template, or even the macro itself, *into* the rendered template. To do this you can use the :func:`pyramid.renderers.get_renderer` API to retrieve the macro template, and pass it into the template being rendered via the dictionary returned by the view. For example, using a :term:`view configuration` via a :class:`~pyramid.view.view_config` decorator that uses a :term:`renderer`: .. code-block:: python :linenos: from pyramid.renderers import get_renderer from pyramid.view import view_config @view_config(renderer='templates/mytemplate.pt') def my_view(request): main = get_renderer('templates/master.pt').implementation() return {'main':main} Where ``templates/master.pt`` might look like so: .. code-block:: xml :linenos: ${project}, an application generated by the pyramid web application framework. Hello Fred! And ``templates/mytemplate.pt`` might look like so: .. code-block:: xml :linenos: Chris .. index:: single: Chameleon text templates .. _chameleon_text_templates: Templating with :term:`Chameleon` Text Templates ------------------------------------------------ :app:`Pyramid` also allows for the use of templates which are composed entirely of non-XML text via :term:: .. code-block:: text Hello, ${name}! Then in your project's ``views.py`` module, you can create a view which renders this template: .. code-block:: python :linenos: from pyramid.view import view_config @view_config(renderer='templates/mytemplate.txt') def my_view(request): return {'name':'world'} When the template is rendered, it will show: .. code-block:: text Hello, world! If you'd rather use templates directly within a view callable (without the indirection of using a renderer), see :ref:`chameleon_text_module` for the API description. 
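A concrete pair of templates along these lines might look as follows; this is a sketch rather than the exact files, and the macro and slot names (``hello`` and ``name``) are illustrative only:

.. code-block:: xml

   <!-- templates/master.pt: defines a macro with one slot -->
   <html xmlns:metal="http://xml.zope.org/namespaces/metal">
     <body>
       <div metal:define-macro="hello">
         Hello <span metal:define-slot="name">Fred</span>!
       </div>
     </body>
   </html>

.. code-block:: xml

   <!-- templates/mytemplate.pt: fills the slot of the macro made available
        as 'main' by the view callable shown above -->
   <html xmlns:metal="http://xml.zope.org/namespaces/metal">
     <body>
       <div metal:use-macro="main.macros['hello']">
         <span metal:fill-slot="name">Chris</span>
       </div>
     </body>
   </html>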
See also :ref:`built_in_renderers` for more general information about renderers, including Chameleon text renderers. .. index:: single: template renderer side effects Side Effects of Rendering a Chameleon Template ----------------------------------------------. .. code-block:: text *.pt.py *.txt.py Note that I always name my Chameleon ZPT template files with a ``.pt`` extension and my Chameleon text template files with a ``.txt`` extension so that these ``svn:ignore`` patterns work. .. index:: pair: debugging; templates .. _debug_templates_section: Nicer Exceptions in Chameleon Templates --------------------------------------- The exceptions raised by Chameleon templates when a rendering fails are sometimes less than helpful. :app:: .. code-block:: text $ PYRAMID_DEBUG_TEMPLATES=1 bin/paster serve myproject.ini To use a setting in the application ``.ini`` file for the same purpose, set the ``pyramid.debug_templates`` key to ``true`` within the application's configuration section, e.g.: .. code-block:: ini :linenos: [app:main] use = egg:MyProject pyramid.debug_templates = true With template debugging off, a :exc:`NameError` exception resulting from rendering a template with an undefined variable (e.g. ``${wrong}``) might end like this: .. code-block:: text File "...", in __getitem__ raise NameError(key) NameError: wrong Note that the exception has no information about which template was being rendered when the error occured. But with template debugging on, an exception resulting from the same problem might end like so: .. code-block:: text RuntimeError: Caught exception rendering template. - Expression: ``wrong`` - Filename: /home/fred/env/proj/proj/templates/mytemplate.pt - Arguments: renderer_name: proj:templates/mytemplate.pt template: Welcome to This template doesn't use any advanced features of Mako, only the ``${}`` replacement syntax for names that are passed in as :term:`renderer globals`. See the `the Mako documentation ${project}, an application generated by the pyramid web application framework.
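As a minimal end-to-end illustration of the Mako renderer (a sketch only: the template body is an assumption, and depending on your setup you may need to configure ``mako.directories`` or use an :term:`asset specification` for the renderer path):

.. code-block:: python

   from pyramid.view import view_config

   @view_config(renderer='templates/mytemplate.mak')
   def my_view(request):
       # 'project' becomes a top-level name inside the template
       return {'project': 'MyProject'}

.. code-block:: text

   <html>
     <body>
       <h1>Welcome to ${project}</h1>
     </body>
   </html>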
http://docs.pylonsproject.org/projects/pyramid/en/1.2-branch/_sources/narr/templates.txt
CC-MAIN-2017-17
en
refinedweb
I am still having trouble deciding which way to go, so making some comments and asking some more questions ... Nicola Ken Barozzi wrote: > As a summary, the point of discussion is over validation or not, and how > to do it. I think that the main point is to make it easy for users to configure it and easy to maintain between versions of Forrest. > I would like to try removing strong validation, as I see > mainly the drawbacks, while Cheche feels that strong formal validation > is needed and that the DTD format is the best for the editors. It is very interesting to note the discussion on the cocoon-users saying that xml editors utilise RELAX NG now. > The proposal are as follows: > > Proposal 1 (From Cheche) > -------------------------- > >. I gather that the skinconf.dtd gets new version numbers for every change and that this makes it easier for users to keep their skinconf files up-to-date. There could even be helper transformations for updating skinconf between versions. The trouble is that writers of new skins cannot add special elements for their purposes. >. I do like this very simple schema. The internal DTD allows some basic structural validation by all xml editors. I presume that we can then do various RELAX NG validations of the content, ensure certain attributes, etc. This is good because it separates the concerns (validation of structure and validation of content). The skinconfig could still have a single namespace so that tools could decide which RNG grammar to apply. >) I presume that future changes to skinconf then get a new namespace ...skinconf/1.1 which enables the correct validation to be applied. > - Make it loadable by XmlProperty This sounds important. Does that mean that we can get away from the need to use the document() function via XSLT? Do Proposal 1 and Proposal 2 still need to use document() ? What is the difference that makes it "loadable by XmlProperty"? Is it because it has definite names for elements as opposed to Proposal 2? > Here is an example of it: > > <skin:skinconfig xmlns: xmlns:> <snip/> -------- Is there a Proposal 4 ... The simple feature-element-property stuff from Proposal 2 but with multiple namespaces like Proposal 3. --David
http://mail-archives.apache.org/mod_mbox/forrest-dev/200404.mbox/%3C1083323942.2035.162.camel@ighp%3E
CC-MAIN-2017-17
en
refinedweb
rpc_ns_profile_elt_add - adds an element to a profile; if necessary, creates the entry #include <dce/rpc.h> void rpc_ns_profile_elt_add( unsigned32 profile_name_syntax, unsigned_char_t *profile_name, rpc_if_id_t *if_id, unsigned32 member_name_syntax, unsigned_char_t *member_name, unsigned32 priority, unsigned_char_t *annotation, unsigned32 *status ); Input - profile_name_syntax - An integer value that specifies the syntax of argument profile_name. (See Name Syntax Constants for the possible values of this argument.) The value rpc_c_ns_syntax_default specifies the syntax specified by the RPC_DEFAULT_ENTRY_SYNTAX environment variable. - profile_name - Specifies the RPC profile that receives the new element. The profile name syntax is identified by the argument profile_name_syntax. - if_id - Specifies the interface identifier of the new profile element. To add or replace the default profile element, specify NULL. - member_name_syntax - An integer value that specifies the syntax of argument member_name. (See Name Syntax Constants for the possible values of this argument.) The value rpc_c_ns_syntax_default specifies the syntax specified by the RPC_DEFAULT_ENTRY_SYNTAX environment variable. - member_name - Specifies an entry in the name service database to include in the new profile element. The member name syntax is identified by the argument member_name_syntax. - priority - An integer value (0 to 7) that specifies the relative priority for using the new profile element during the import and lookup operations. A value of 0 (zero) is the highest priority. A value of 7 is the lowest priority. Two or more elements can have the same priority. The default profile element has a priority of 0. When adding the default profile, the result is unspecified if the application specifies a value other than 0 here. - annotation - Specifies an annotation string that is stored as part of the new profile element. The string can be up to rpc_c_annotation_max characters long, including the null terminator. The application specifies NULL or the empty string ("") if there is no annotation string. Output - status - Returns the status code from this routine. The status code indicates whether the routine completed successfully, or if not, why not. Possible status codes and their meanings include: - rpc_s_ok - Success. - rpc_s_class_version_mismatch - Name service entry has incompatible RPC class version. - rpc_s_name_service_unavailable - Name service unavailable. - rpc_s_no_ns_permission - No permission for name service operation. - rpc_s_unsupported_name_syntax - Unsupported name syntax. The rpc_ns_profile_elt_add() routine adds an element to the profile attribute of the entry in the name service database specified by the profile_name argument. If the profile_name entry does not exist, this routine creates the entry with a profile attribute and adds the profile element specified by the if_id, member_name, priority and annotation arguments. In this case, the application must have permission to create the entry. If an element with the specified member name and interface identifier is already in the profile, this routine updates the element's priority and annotation string using the values provided in the priority and annotation arguments. An application can add the entry in argument member_name to a profile before it creates the entry itself. Permissions Required The application needs both read permission and write permission for the target name service profile entry. If the entry does not exist, the application also needs insert permission for the parent directory. Return Values None. See Also rpc_if_inq_id() rpc_ns_mgmt_entry_create() rpc_ns_profile_elt_remove(). Please note that the html version of this specification may contain formatting aberrations. The definitive version is available as an electronic publication on CD-ROM from The Open Group.
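As an illustration only (not part of the specification), a call might look like the following sketch. The profile and member entry names are invented for the example, and error handling is reduced to a status check.

#include <dce/rpc.h>
#include <stdio.h>

void add_profile_element(rpc_if_handle_t if_handle)
{
    rpc_if_id_t if_id;
    unsigned32 status;

    /* Obtain the interface identifier for the interface being registered. */
    rpc_if_inq_id(if_handle, &if_id, &status);
    if (status != rpc_s_ok) {
        fprintf(stderr, "rpc_if_inq_id failed: 0x%lx\n", (unsigned long)status);
        return;
    }

    /* Add (or update) a profile element in the hypothetical profile entry
       /.:/applications/my_profile pointing at the hypothetical server
       entry /.:/servers/my_server. */
    rpc_ns_profile_elt_add(
        rpc_c_ns_syntax_default,                            /* profile_name_syntax */
        (unsigned_char_t *)"/.:/applications/my_profile",   /* profile_name */
        &if_id,                                             /* if_id */
        rpc_c_ns_syntax_default,                            /* member_name_syntax */
        (unsigned_char_t *)"/.:/servers/my_server",         /* member_name */
        3,                                                  /* priority: 0 highest, 7 lowest */
        (unsigned_char_t *)"example annotation",            /* annotation */
        &status);

    if (status != rpc_s_ok)
        fprintf(stderr, "rpc_ns_profile_elt_add failed: 0x%lx\n", (unsigned long)status);
}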
http://pubs.opengroup.org/onlinepubs/9629399/rpc_ns_profile_elt_add.htm
CC-MAIN-2017-17
en
refinedweb
Hi all, I see a lot of books teaching beginning Python students to code like this: def func1(): ... def func2(): ... class Blah(object): ... def main(): ... main() It strikes me that having people "def main():" and then call main() is a bit silly. In fact, it seems to be nothing more than a hold-over from C/C++. My main gripe (heh, get it? "main"! I'm killin' 'em) with it is this: when my code breaks, I like to be able to inspect the values of variables. But if all of my code is enclosed in "main()", then the variables have all since gone out of scope and aren't available to be inspected. So on a pragmatic level, I oppose the "def main():" style for the simple reason that it makes debugging a bit harder. Are there any reasons that I should reconsider my dislike for "def main():"? Thanks, Jeff
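For reference, the usual compromise keeps main() but guards the call, so the file still works as an importable module, and when something breaks you can reach main()'s locals with pdb's post-mortem mode instead of relying on module-level variables. A quick sketch (mymodule is just a placeholder name for the file holding main):

def main():
    values = [1, 2, 3]
    total = sum(values)
    # ... the rest of the program ...
    return total

if __name__ == "__main__":
    main()

Then, in an interactive session:

>>> import pdb
>>> import mymodule
>>> mymodule.main()        # suppose this raises somewhere inside
Traceback (most recent call last):
  ...
>>> pdb.pm()               # post-mortem: drops you into the frame that raised,
...                        # where main()'s local variables can be inspected

Running a script with "python -i script.py" gives a similar effect for quick one-off scripts.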
https://www.daniweb.com/programming/software-development/threads/93017/style-issue-def-main
CC-MAIN-2017-17
en
refinedweb
#include <OMX_Audio.h>
OMX_AUDIO_CONFIG_MIDIIMMEDIATEEVENTTYPE: structure for live MIDI events and MIP messages. (MIP = Maximum Instantaneous Polyphony; part of the SP-MIDI standard.)
Field documentation:
- MIDI event array to be rendered immediately, or an array for the MIP message buffer, where the size is indicated by nMidiEventSize
- Size of immediate MIDI events or MIP message in bytes
- Port that this structure applies to
- Size of the structure in bytes
- OMX specification version information
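For orientation, the declaration in OMX_Audio.h is along the following lines; the field names here are recalled from the OpenMAX IL specification and should be checked against the actual header.

/* Sketch of OMX_AUDIO_CONFIG_MIDIIMMEDIATEEVENTTYPE as recalled from the
   OpenMAX IL spec; consult OMX_Audio.h for the authoritative declaration. */
typedef struct OMX_AUDIO_CONFIG_MIDIIMMEDIATEEVENTTYPE {
    OMX_U32 nSize;            /* size of the structure in bytes */
    OMX_VERSIONTYPE nVersion; /* OMX specification version information */
    OMX_U32 nPortIndex;       /* port that this structure applies to */
    OMX_U32 nMidiEventSize;   /* size of immediate MIDI events or MIP message in bytes */
    OMX_U8 nMidiEvents[1];    /* MIDI event array to be rendered immediately, or an array
                                 for the MIP message buffer, sized by nMidiEventSize */
} OMX_AUDIO_CONFIG_MIDIIMMEDIATEEVENTTYPE;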
http://limoa.sourceforge.net/docs/1.0/structOMX__AUDIO__CONFIG__MIDIIMMEDIATEEVENTTYPE.html
CC-MAIN-2017-17
en
refinedweb
When you read binary data you still read bytes from the file, so the process is essentially the same as we used in the previous example. To read a binary file, we create a FileInputStream object and get the FileChannel object from it, then read the data into a byte buffer. We could set up a file channel to read our primes.bin file like this:(); We have some options on the size of the byte buffer. The number of bytes in the buffer should be a multiple of eigth since a prime value is of type long but other than that we can make it whatever size we like. We could allocate a buffer to accommodate the number of primes that we want to output to the command line, six values say. This would make accessing the data very easy since we only need to set up a view buffer of type LongBuffer each time we read from the file. One thing against this is that reading such a small amount of data from the file in each read operation would not be a very efficient way to read the file. Before data transfer can start for a read operation there is a significant delay, usually of the order of several milliseconds, waiting for the disk to rotate until the data that we want to read is under the read heads. Therefore the more read operations you use to retrieve a given amount of data from the file the longer it takes. However, in the interests of understanding the mechanics of this let's see how it would work anyway. The buffer would be created like this: final int PRIMECOUNT = 6; // Number of primes to read at a time ByteBuffer buf = ByteBuffer.allocate(8*PRIMECOUNT); We can then read the primes in a while loop inside a try block, like this: long[] primes = new long[PRIMECOUNT]; try { primes = new long[(int)inChannel.size()/8]; // Array to hold 5 primes while(inChannel.read(buf) != -1) { // Access the primes via a view buffer of type LongBuffer... // Output the primes read... buf.clear(); // Clear the buffer for the next read } System.out.println("EOF reached."); inFile.close(); // Close the file and the channel } catch(IOException e) { e.printStackTrace(System.err); System.exit(1); } We can create a view buffer of type LongBuffer that will help us get at the primes. We can obtain the view buffer by calling the asLongBuffer() method for the byte buffer, buf. The LongBuffer class offers you a choice of four get() methods for accessing long values in the buffer: The BufferUnderflowException class is a subclass of RuntimeException so you are not obliged to catch this exception, although it may be useful to do so if you want to avoid references to array elements that have not been loaded with data from the buffer. With the buffer size we have chosen, perhaps the simplest way to access the primes in the buffer is like this: LongBuffer longBuf = ((ByteBuffer)(buf.flip())).asLongBuffer(); System.out.println(); // Newline for the buffer contents while(longBuf.hasRemaining()) // While there are values System.out.print(" " + longBuf.get()); // output them on the same line If we wanted to collect the primes into an array, the form of get() method that transfers values to an array is more efficient than writing a loop to transfer them one at a time, but we have to be careful. Let's try it out in an example to see why. We will choose to read the primes six at a time into an array. 
Here's the program: import java.io.*; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; public class ReadPrimes { public static void main(String[] args) {(); final int PRIMECOUNT = 6; ByteBuffer buf = ByteBuffer.allocate(8*PRIMECOUNT); long[] primes = new long[PRIMECOUNT]; try { while(inChannel.read(buf) != -1) { ((ByteBuffer)(buf.flip())).asLongBuffer().get(primes); System.out.println(); for(int i = 0 ; i<primes.length ; i++) System.out.print(" " + primes[i]); buf.clear(); // Clear the buffer for the next read } System.out.println("\nEOF reached."); inFile.close(); // Close the file and the channel } catch(IOException e) { e.printStackTrace(System.err); System.exit(1); } System.exit(0); } } We get a whole lot of prime values, six to a line, then, when we almost have them all displayed, we suddenly get the output: ... 467 479 487 491 499 503Exception in thread "main" java.nio.BufferUnderflo wException at java.nio.LongBuffer.get(LongBuffer.java:609) at java.nio.LongBuffer.get(LongBuffer.java:633) at ReadPrimes.main(ReadPrimes.java:28) How It Works The reason is doesn't work very well is that the number of primes in the file is not divisible by the number of primes that we read into the view buffer. This is determined by the number of elements in the array primes. On the last iteration of the while loop that reads the file, there are insufficient values to fill the array so the get() method throws an exception of type BufferUnderflowException. One way to deal with this is to catch the exception that is thrown. It's not a particularly good way because of the overhead in throwing and catching exceptions, but let's see how we could do it anyway. We could rewrite the while loop like this: int primesRead = 0; while(inChannel.read(buf) != -1) { try { ((ByteBuffer)(buf.flip())).asLongBuffer().get(primes); primesRead = primes.length; } catch(BufferUnderflowException e) { LongBuffer longBuf = buf.asLongBuffer(); primesRead = longBuf.remaining(); longBuf.get(primes,0, primesRead); } System.out.println(); for(int i = 0 ; i< primesRead ; i++) System.out.print(" "+primes[i]); buf.clear(); // Clear the buffer for the next read } When the exception is thrown on the last iteration, we catch it and read the remaining values in the view buffer using the alternate form of the get() method, where the second argument specifies the first array element to store a value in and the third argument specifies the number to be stored. To take account of the possibility that less than the whole array will contain primes when we output it, we set the number of primes that are read in the loop. Note that we must set the value of primesRead inside the catch block before we execute the get() method. Afterwards the number remaining will be zero. Of course, although this works, it is a very poor way to deal with the problem. A better way is to avoid it altogether, like this: int primesRead = 0; while(inChannel.read(buf) != -1) { LongBuffer longBuf = ((ByteBuffer)(buf.flip())).asLongBuffer(); primesRead = longBuf.remaining(); longBuf.get(primes,0, longBuf.remaining()); System.out.println(); for(int i = 0 ; i< primesRead ; i++) System.out.print(" "+primes[i]); buf.clear(); // Clear the buffer for the next read } The shaded lines reflect changes to the code in the original example. Now we always read the number of values available in longBuf so we can't cause the BufferUnderflowException to be thrown. A further possibility is to use a buffer large enough to hold all the primes in the file. 
We can work this out from the value returned by the size() method for the channel – which is the length of the file in bytes. We could do that like this: final int PRIMECOUNT = (int)inChannel.size()/8; Of course, you also must alter the for loop that outputs the primes so it doesn't attempt to put them all on the same line. There is a hazard with this though if you don't know how large the file is. Unless your PC is unusually replete with RAM, it could be inconvenient if the file contains the first billion primes. It might be as well to put an assertion to protect against an excess of primes: assert inChannel.size()<=100000; final int PRIMECOUNT = (int)inChannel.size()/8; Now the program will not proceed if there are more than 100,000 primes in the file. Don't forget, to compile a program with assertions you must specify the -source 1.4 options, and when you execute the program you need to specify the -enableassertions option. One final point before we leave this example – the output is irritating. Why don't the columns line up? Well they should and could, but it's a bit more code that would clutter up the example. However, suppose we want to output the primes six to a line, left justified in a field width of 12. Here's one way we could do that: StringBuffer str = null; for(int i = 0 ; i< primesRead ; i++) { str = new StringBuffer(" ").append(primes[i]); System.out.print((i%6 == 0 ? "\n" : "") + str.substring(str.length()-12, str.length())); } This replaces the loop in the original code. On the first and every sixth prime output we start a new line by outputting "\n" as the first character in the argument to the print() method. We create a StringBuffer object, which contains 11 spaces, and append the String representation of the prime value to it. We then just output the string consisting of the last 12 characters in the StringBuffer object.
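For reference, here is the whole program in one piece, with the file and channel set up explicitly and the robust remaining()-based loop plus the formatted output folded in. The primes.bin file name and the inFile/inChannel variable names follow the text; treat the exact setup lines as an assumption.

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.LongBuffer;
import java.nio.channels.FileChannel;

public class ReadPrimes {
  public static void main(String[] args) {
    File aFile = new File("primes.bin");          // file written by the earlier example
    FileInputStream inFile = null;
    try {
      inFile = new FileInputStream(aFile);
    } catch (FileNotFoundException e) {
      e.printStackTrace(System.err);
      System.exit(1);
    }
    FileChannel inChannel = inFile.getChannel();

    final int PRIMECOUNT = 6;
    ByteBuffer buf = ByteBuffer.allocate(8 * PRIMECOUNT);
    long[] primes = new long[PRIMECOUNT];

    try {
      while (inChannel.read(buf) != -1) {
        LongBuffer longBuf = ((ByteBuffer) (buf.flip())).asLongBuffer();
        int primesRead = longBuf.remaining();     // never more than are actually there
        longBuf.get(primes, 0, primesRead);

        // Six primes to a line in a field width of 12
        StringBuffer str = null;
        for (int i = 0; i < primesRead; i++) {
          str = new StringBuffer("           ").append(primes[i]);   // 11 spaces
          System.out.print((i % 6 == 0 ? "\n" : "")
                           + str.substring(str.length() - 12, str.length()));
        }
        buf.clear();                              // clear the buffer for the next read
      }
      System.out.println("\nEOF reached.");
      inFile.close();                             // close the file and the channel
    } catch (IOException e) {
      e.printStackTrace(System.err);
      System.exit(1);
    }
    System.exit(0);
  }
}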
http://www.yaldex.com/java_tutorial/0513605195.htm
CC-MAIN-2017-17
en
refinedweb
Hello, I got a bit of a problem trying to do the following: - Load a .xm or .it song - get all samples, pack as wave and save them to separate files problem ( i suppose): sample speed (freq) differs in the song and from the retrieved value. When I run the code below, all samps are marked 44100 but I know some of those in the song are 8khz, 11 or etc. The output wave just doesn’t sound as the original. I’m using much of the code as in the ‘record’ example. [code:3t8p4igv] BOOL CSampleFM::SaveToWave( LPCTSTR lpszOutputFileName ) { ASSERT( m_pSample != NULL ); ASSERT( lpszOutputFileName != NULL ); int nDefFreq, nVarFreq; FSOUND_Sample_GetDefaultsEx( m_pSample, &nDefFreq, 0, 0, 0, &nVarFreq, 0, 0); void *p1 = NULL, *p2 = NULL; UINT len1, len2; UINT uiMode = FSOUND_Sample_GetMode( m_pSample ); int bits = (uiMode & FSOUND_16BITS) ? 16 : 8; int channels = (uiMode & FSOUND_STEREO) ? 2 : 1; UINT uiLength = FSOUND_Sample_GetLength( m_pSample ) * channels * bits / 8; if ( uiLength == 0 ) { // dont save 0 len sams TRACE( _T("\n sample won't be saved. sample length: %d"), uiLength ); return FALSE; } // Wave Stuff (as from fmod sample) #if defined(WIN32) || defined(_WIN64) || defined(__WATCOMC__) || defined(_WIN32) || defined(__WIN32__) #pragma pack(1) #endif typedef struct { signed char id[4]; int size; } RiffChunk; struct { RiffChunk chunk; unsigned short wFormatTag; /* format type */ unsigned short nChannels; /* number of channels (i.e. mono, stereo...) */ unsigned int nSamplesPerSec; /* sample rate */ unsigned int nAvgBytesPerSec ; /* for buffer estimation */ unsigned short nBlockAlign; /* block size of data */ unsigned short wBitsPerSample; /* number of bits per sample of mono data */ } FmtChunk = { { {'f','m','t',' '}, sizeof(FmtChunk) - sizeof(RiffChunk) }, 1, channels, nDefFreq, nDefFreq * channels * bits / 8, 1 * channels * bits / 8, bits }; struct { RiffChunk chunk; } DataChunk = { {{'d','a','t','a'}, uiLength } }; struct { RiffChunk chunk; signed char rifftype[4]; } WavHeader = { { {'R','I','F','F'}, sizeof(FmtChunk) + sizeof(RiffChunk) + uiLength }, {'W','A','V','E'} }; #if defined(WIN32) || defined(_WIN64) || defined(__WATCOMC__) || defined(_WIN32) || defined(__WIN32__) #pragma pack() #endif // save to file CFile file; if ( !file.Open( lpszOutputFileName, CFile::modeCreate | CFile::modeWrite ) ) { return FALSE; } // write headers file.Write( &WavHeader, sizeof(WavHeader) ); file.Write( &FmtChunk, sizeof(FmtChunk) ); file.Write( &DataChunk, sizeof(DataChunk) ); FSOUND_Sample_Lock( m_pSample, 0, uiLength, &p1, &p2, &len1, &len2 ); file.Write( p1, len1 ); FSOUND_Sample_Unlock( m_pSample, p1, p2, len1, len2 ); file.Close(); return TRUE; } [/code:3t8p4igv] Any help will be greatly appreciated, thanx 😉 - necroleak asked 12 years ago - You must login to post comments wow, thanx a lot brett. I think this did it 😀 I really don’t have much experience with sound programming 😥 - necroleak answered 12 years ago
https://www.fmod.org/questions/question/forum-13931/
CC-MAIN-2017-17
en
refinedweb
: Specify, design, and implement a class for complete binary trees using the array representation. You should have only one member function that adds a new node (since there is only one place where a node may be added), and one member function that removes the last node of the tree.

What I get is that a complete binary tree has every level full except possibly the last, which is filled in from left to right. So basically we create a tree to which elements can be added or removed through the user input. I am posting below the source code for the basic structure of a tree. I have not implemented any functions in it. If you can, use the source below and guide me on what I should do next. Thanks!

#include <cstdlib>    //Provides EXIT_SUCCESS
#include <iostream>   //Provides cout, cin
#include <string>     //Provides string class
using namespace std;

template <class Item>
class binary_tree_node
{
public:
    //TYPEDEF
    typedef Item value_type;
    //CONSTRUCTOR
    binary_tree_node(
        const Item& init_data = Item(),
        binary_tree_node* init_left = NULL,
        binary_tree_node* init_right = NULL )
    {
        data_field = init_data;
        left_field = init_left;
        right_field = init_right;
    }
    //MODIFICATION MEMBER FUNCTIONS
    Item& data() { return data_field; }
    binary_tree_node*& left() { return left_field; }
    binary_tree_node*& right() { return right_field; }
    void set_data(const Item& new_data) { data_field = new_data; }
    void set_left(binary_tree_node* new_left) { left_field = new_left; }
    void set_right(binary_tree_node* new_right) { right_field = new_right; }
    //CONSTANT MEMBER FUNCTIONS
    const Item& data() const { return data_field; }
    const binary_tree_node* left() const { return left_field; }
    const binary_tree_node* right() const { return right_field; }
    bool is_leaf() const
    { return (left_field == NULL) && (right_field == NULL); }
private:
    Item data_field;
    binary_tree_node *left_field;
    binary_tree_node *right_field;
};

template <class Item>
void tree_clear(binary_tree_node<Item>*& root_ptr);
//Precondition: root_ptr is the root pointer of a binary tree (which may be NULL for the empty tree).
//Postcondition: All nodes at the root or below have been returned to the heap, and root_ptr has been set to NULL.

template <class Item>
binary_tree_node<Item>* tree_copy(const binary_tree_node<Item>* root_ptr);
//Precondition: root_ptr is the root pointer of a binary tree (which may be NULL for the empty tree).
//Postcondition: A copy of the binary tree has been made, and the return value is a pointer to the root of this copy.

template <class Item>
void tree_clear(binary_tree_node<Item>*& root_ptr)
{
    if (root_ptr != NULL)
    {
        tree_clear( root_ptr->left());
        tree_clear( root_ptr->right());
        delete root_ptr;
        root_ptr = NULL;
    }
}

template <class Item>
binary_tree_node<Item>* tree_copy(const binary_tree_node<Item>* root_ptr)
{
    binary_tree_node<Item> *l_ptr;
    binary_tree_node<Item> *r_ptr;
    if (root_ptr == NULL)
        return NULL;
    else
    {
        l_ptr = tree_copy( root_ptr->left());
        r_ptr = tree_copy( root_ptr->right());
        return new binary_tree_node<Item> (root_ptr->data(), l_ptr, r_ptr);
    }
}
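Since the exercise only needs "add one node" and "remove the last node", the array representation does almost all the work: the nodes occupy positions 0..size-1, the parent of position i is (i-1)/2, and its children are 2*i+1 and 2*i+2. Here is a rough sketch of that idea (my own illustration with made-up names, not the textbook's intended solution):

#include <cassert>
#include <vector>

template <class Item>
class complete_binary_tree
{
public:
    // The only place a node may be added is the next free slot,
    // which keeps the tree complete by construction.
    void add(const Item& entry) { data_field.push_back(entry); }
    // Removing the last node just shrinks the array by one.
    void remove_last() { assert(!data_field.empty()); data_field.pop_back(); }
    std::size_t size() const { return data_field.size(); }
    Item& node(std::size_t i) { return data_field[i]; }              // node at position i
    std::size_t parent(std::size_t i) const { return (i - 1) / 2; }  // valid for i > 0
    std::size_t left_child(std::size_t i) const { return 2 * i + 1; }
    std::size_t right_child(std::size_t i) const { return 2 * i + 2; }
private:
    std::vector<Item> data_field;
};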
https://www.daniweb.com/programming/software-development/threads/117673/binary-tree-help
CC-MAIN-2017-17
en
refinedweb
Hi, I'd like to know if it should be possible to use WiimoteLib, because I tried unsuccessfully. I have the Unity standard edition, and WiimoteLib is a C# .NET dll. I made this script to test :

using UnityEngine;
using System.Collections;
using WiimoteLib;

public class NewBehaviourScript : MonoBehaviour {

    private Wiimote wm;

    // Use this for initialization
    void Start () {
        wm = new Wiimote();
        try {
            wm.Connect();
        } catch ( System.Exception e ) {
            Debug.Log( e.ToString() );
        }
        wm.SetLEDs( true, true, true, false );
    }
}

And anyhow it always returns me these exceptions :

System.IO.IOException: Invalid handle.
at System.IO.FileStream..ctor (IntPtr handle, FileAccess access, Boolean ownsHandle, Int32 bufferSize, Boolean isAsync, Boolean noBuffering) [0x00000]
at System.IO.FileStream..ctor (IntPtr handle, FileAccess access, Boolean ownsHandle, Int32 bufferSize, Boolean isAsync) [0x00000]
at System.IO.FileStream..ctor (Microsoft.Win32.SafeHandles.SafeFileHandle handle, FileAccess access, Int32 bufferSize, Boolean isAsync) [0x00000]
at (wrapper remoting-invoke-with-check) System.IO.FileStream:.ctor (Microsoft.Win32.SafeHandles.SafeFileHandle,System.IO.FileAccess,int,bool)
at WiimoteLib.Wiimote.OpenWiimoteDeviceHandle (System.String devicePath) [0x00000]
at WiimoteLib.Wiimote.WiimoteFound (System.String devicePath) [0x00000]
at WiimoteLib.Wiimote.FindWiimote (WiimoteLib.WiimoteFoundDelegate wiimoteFound) [0x00000]
at WiimoteLib.Wiimote.Connect () [0x00000]
at NewBehaviourScript.Start () [0x0000b] in D:\Sonny_Unity_Temp\Assets\Test\NewBehaviourScript.cs:14
UnityEngine.Debug:Log(Object)
NewBehaviourScript:Start() (at Assets\Test\NewBehaviourScript.cs:18)

Thanks

Answer by Komodo · Mar 05, 2010 at 11:32 AM
Was Win32.SafeHandles the issue? The fact that Unity takes control over HID devices is OK, but for this particular case it's just bad. I tried Uniwii, but behavior is still buggy and the API is still limited, which is a pity. Named Pipes don't work, so I reckon the only possible solution would be SQLite, but that is not really elegant. Oh, well if it works it works.
Yeah the issue is actually the SafeHandles, I took notes about SQLite. Thanks for your help

Answer by tax · Aug 09, 2010 at 07:12 PM
Hi Guys, I wrote Brian Peek, and his opinion from the data available in this thread:
"""
Reading through that thread, it appears Unity takes over HID devices and there's some kind of incompatibility with SafeHandles. Without a major rewrite of the library, there would be no way to get that to work. It may not be possible at all. Sorry!
Brian
"""
Can someone inside Unity confirm this? Kind regards Jesper Taxbl

Answer by jonas-echterhoff · Feb 19, 2010 at 09:19 AM
Not a direct answer to your question (never tried the WiimoteLib), but there is the UniWii plugin for accessing the WiiMote in Unity. However, that is a native C++ plugin, so you won't be able to use it in web players.
If it's a C++ plugin he can't use it in his standard edition, can he?
Ah, correct - I missed that part in the question.

Answer by StephanK · Feb 19, 2010 at 11:57 AM
Just guessing, but all those [0x00000]'s look like something's not initialized correctly to me. Haven't used or heard of WiimoteLib, but if it's a .NET dll it should work.

Answer by Sonny22 · Feb 22, 2010 at 08:18 AM
Yep I thought the same thing, all these zeros don't look good to me. Yeah it's a .NET dll, I'll try to check if my code is really good. Thanks.
http://answers.unity3d.com/questions/11753/wiimotelib-unity-.html
CC-MAIN-2017-17
en
refinedweb
ATTRIBUTE(3) Library Functions Manual ATTRIBUTE(3) NAME attribute -- non-standard GCC attribute extensions SYNOPSIS #include <<sys/cdefs.h>> __dead __pure __constfunc __noinline __unused __used __packed __aligned(x); __section(section); __read_mostly __cacheline_aligned __predict_true(exp); __predict_false(exp); DESCRIPTION The GNU Compiler Collection (GCC) provides many extensions to the standard C language. Among these are the so-called attributes. In NetBSD all attributes are provided in a restricted namespace. The described macros should be preferred instead of using the GCC's __attribute__ extension directly. ATTRIBUTES __dead The gcc(1) compiler knows that certain functions such as abort(3) and exit(3) can never return any value. When such a function is equipped with __dead, certain optimizations are possible. GCC is known for aggressive function inlining. Sometimes it is known that inlining is undesirable or that a function will perform incorrectly when inlined. The __noinline macro expands to a function attribute that prevents GCC for inlining the function, irrespective whether the function was declared with the inline keyword. The attribute takes precedence over all other compiler options related to inlining. __unused In most GCC versions the common -Wall flag enables warnings produced by functions that are defined but unused. Marking an unused function with the __unused macro inhibits these warnings. __used The __used macro expands to an attribute that informs GCC that a static variable or function is to be always retained in the object file even if it is unreferenced. _: o Mixing assembly and C code. o Dealing with hardware that may impose alignment requirements greater than the architecture itself. o equals 1. _. SEE ALSO 6.1.5 December 19, 2010 NetBSD 6.1.5
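As a usage illustration (this example is mine, not part of the manual page, and it assumes NetBSD's <sys/cdefs.h>): __dead marks a function that never returns, and __predict_false marks the branch that is expected to be rare, letting the compiler lay out the common path first.

#include <sys/cdefs.h>
#include <stdio.h>
#include <stdlib.h>

static void fail(const char *) __dead;

static void
fail(const char *msg)
{
    fprintf(stderr, "%s\n", msg);
    exit(EXIT_FAILURE);
}

int
main(void)
{
    char *p = malloc(32);

    if (__predict_false(p == NULL))   /* the error path is the unlikely one */
        fail("out of memory");

    /* ... the common, fast path continues here ... */
    free(p);
    return 0;
}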
http://modman.unixdev.net/?sektion=3&page=__predict_false&manpath=NetBSD-6.1.5
CC-MAIN-2017-17
en
refinedweb
Hi Guys, I'm trying to have the user enter an integer, then an outline of a pyramid would be printed on the console. I just can't get the last line to print out with "*" all across. So say a user enters 5, I need the output to be:
* * * * * * * *********
Any hints or tips would be greatly appreciated

import java.util.Scanner;

public class Pyramid {
    public static void main(String[] args) {
        int width;
        System.out.print("How many rows :");
        Scanner kb = new Scanner(System.in);
        width = kb.nextInt();
        for (int i = 1; i <= width; i++) {
            for (int j = 1; j <= width - i; j++) {
                System.out.print(" ");
            }
            for (int j = 1; j <= 2 * width - 1; j++) {
                if (j == i || j==1) {
                    System.out.print("*" + " ");
                }else {
                    System.out.print(" " + " ");
                }
            }
            System.out.println();
        }
    }
}
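One way to think about it (a sketch of mine, not necessarily the exact spacing your assignment expects): treat the bottom row as a special case and print it solid, while every other row only prints stars at its two edges.

int n = 5;  // or the value read from the Scanner
for (int row = 1; row <= n; row++) {
    StringBuilder line = new StringBuilder();
    for (int s = 0; s < n - row; s++) {
        line.append(' ');                           // leading spaces to centre the row
    }
    for (int col = 1; col <= 2 * row - 1; col++) {
        boolean edge = (col == 1 || col == 2 * row - 1);
        line.append(row == n || edge ? '*' : ' ');  // last row is solid, others are hollow
    }
    System.out.println(line);
}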
https://www.daniweb.com/programming/software-development/threads/323120/pyramid-outline
CC-MAIN-2018-43
en
refinedweb
Ok so i got this CSharp coding problem put to me and it has me absolutely stumped, it has to do with interfaces and cards, here is the skeleton code which i must alter: public interface ICard { } public interface IPackCards : IReadOnlyCollection<ICard> { void Shuffle (); ICard TakeCardFromTopOfPack (); } public interface IPackCardsCreator { IPackCards Create (); } public class PackCardsCreator : IPackCardsCreator { public IPackCards Create() { throw new NotImplementedException(); } } Problem: Please finish the implementation of PackOfCardsCreator and create implementations of IPackOfCards and ICard. The PackOfCardsCreator should create a standard pack of cards. This should be made up of 52 cards, with 4 different suits (Clubs, Hearts, Spades and Diamonds) and numbered 2, 3, 4, 5, 6, 7, 8, 9, 10, Jack, Queen, King, and Ace. You are free to do this however you like. IPackOfCards.Shuffle() should rearrange the cards in a random order. Repeated shuffles should not all return the cards in the same order. IPackOfCards.TakeCardFromTopOfPack() should return and remove the first card from cards in the pack. by AbsoluteLove via /r/csharp
http://howtocode.net/2015/06/csharp-coding-problem-i-am-stumped/
CC-MAIN-2018-43
en
refinedweb
("Lightweight implementation of Google's Protocol Buffers in Python",) Project description Lightweight implementation of Google’s Protocol Buffers in Python. In benchmarks, the protolite encoder ran twice as fast as Google’s. Using Python’s timeit module, the same data for both APIs was encoded and decoded 10000 times. The lowest time of three attempts was picked for each: protobuf: 3.6064529418945312 seconds protolite: 1.7224960327148438 seconds If we take the ratio of these two times we see that protolite was about two times faster than its counterpart. Similarly, using Pypy we get about twice the speed: protobuf: 0.807873010635376 seconds protolite: 0.4414529800415039 seconds The benchmark directory in the github repository contains the files needed to re-run the tests . In addition, you will need the protobuf Python library. Try it on your platform, but, keep your machine as quite as possible so as to not skew the results: PYTHONPATH=$PYTHONPATH:$(pwd) python benchmark/benchmark.py Pass the –pypy flag if you want to use Pypy in order to warm up the Pypy JIT compiler and get a more accurate result: PYTHONPATH=$PYTHONPATH:$(pwd) pypy benchmark/benchmark.py --pypy You can also make changes to the benchmark/messages.proto file to create your own tests. You’ll need to re-compile the messages.py and messages_pb2.py files in the benchmark directory afterwards by running the make command inside the same directory. Of course, you will need protoc to compile Google’s version. description Protocol Buffers (protobuf) is a data interchange format created by Google. protolite is a rewrite of its encoder and file generator specifically created and optimized for Python. The encoder is optimized for speed taking the language’s properties in mind. The generator aims to provide ease-of-use and compatibility with the language. For example, messages are implemented using only dicts. Familiarity with protobuf is required in order to use protolite effectively. installation You can download and install protolite from pypi with pip: pip install python-protolite Alternatively, you can clone the repository containing the source code from github and install protolite via setuptools: git clone cd python-protolite python setup.py install usage generating files protolite comes with a utility that generates Python files structured for efficiency and readbility. After the installation you will have an executable file called python-protolitec. Its most simple usage takes two positional arguments. The first is a list of the protobuf definition files and the second a directory where to write the Python version of those files: python-protolitec proto/*.proto python The output files will retain the same file name as the source; only the extension will be changed. For example, the file proto/messages.proto will produce the file python/messages.py. You can use the --help flag to view the other options offered by python-protolitec. encoding Let’s say you have a protobuf file called messages.proto containing: message User { optional uint32 userID = 1; enum UserType { STANDARD = 0; ADMIN = 1; } optional UserType type = 2; } python-protolite will create a Python module messages with a user object which has a decode and an encode method. To encode a message you would do something like: import messages msg_enc = {'user_id': 123, 'type': messages.user_type.STANDARD} data = messages.user.encode(msg_enc) As you can see, python-protolite changes camel-case variable names to underscore. 
On the other end, to decode the message you would do something similar:

import messages
msg_dec = messages.user.decode(data)

The variable msg_dec will be equal to msg_enc.

printing

The message objects also contain a pretty print method. Calling messages.user.pprint(msg_enc) would produce:

{
  "type": "STANDARD",
  "user_id": 123
}

You can pass the keyword argument stream to pprint to write to a stream different than sys.stdout.

parser

If you download the source code from github you will see a grammar directory at the root level. This directory contains all the files used to create the parser and lexer in protolite.parser, the module used by python-protolitec to parse the protobuf definition files. If you are familiar with Antlr you can edit the proto_lexer.g and proto_parser.g files in this directory to create a new Python parser and/or lexer using the Antlr jar in the same directory:

cd grammar
java -jar antlr-3.1.3.jar -fo . proto_lexer.g
java -jar antlr-3.1.3.jar -fo . proto_parser.g

This will create four files: proto_lexer.py, proto_lexer.tokens, proto_parser.py and proto_parser.tokens. You can leave the *.tokens files where they are but move the *.py files to protolite/parser to use your new parser with python-protolitec. If you want to use a different version of Antlr do so at your own risk. You will likely need the new Antlr version to match the Python runtime version in setup.py.
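Tying back to the printing section above, a quick sketch of the stream keyword (assuming the same generated messages module as in the earlier examples):

import sys
import messages

msg_enc = {'user_id': 123, 'type': messages.user_type.STANDARD}
messages.user.pprint(msg_enc, stream=sys.stderr)   # write the pretty-printed message to stderr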
https://pypi.org/project/protolite/
CC-MAIN-2018-43
en
refinedweb
Example 21-1 shows a class, Bank, that contains inner classes and interfaces for a remote bank client/server example. In this example, the RemoteBank interface defines remote methods to open and close accounts, deposit and withdraw money, check the account balance, and obtain the transaction history for an account. The Bank class contains all of the classes and interfaces required for the example, except for the server class, which is the class that actually implements the RemoteBank interface. This server class is shown in Example 21-2. Example 21-1 defines the following inner classes and interfaces: The Remote interface implemented by the bank server and used by the bank client. A trivial class that represents money in this banking example. It is nothing more than a wrapper around an int, but it serves to demonstrate that Serializable objects can be passed as arguments to remote methods and returned by remote methods. A simple exception subclass that represents banking-related exceptions, such as "Insufficient funds." It demonstrates that remote method implementations on a server can throw exceptions that are transported across the network and thrown in the client program. This class is a standalone program that serves as a simple client to the bank server. It uses Naming.lookup( ) to look up the desired RemoteBank object in the system registry and then invokes various methods of that RemoteBank object, depending on its command-line arguments. It is really as simple as that; the use of RMI is almost transparent. A session using the Bank.Client class might look as follows (note that the command-line argument "david" is the account name and "javanut" is the password that protects the account): % java je3.rmi.Bank\$Client open david javanut Account opened. % java je3.rmi.Bank\$Client deposit david javanut 1000 Deposited 1000 wooden nickels. % java je3.rmi.Bank\$Client withdraw david javanut 100 Withdrew 100 wooden nickels. % java je3.rmi.Bank\$Client balance david javanut You have 900 wooden nickels in the bank. % java je3.rmi.Bank\$Client history david javanut Account opened at Wed Jul 12 15:30:12 PDT 2000 Deposited 1000 on Wed Jul 12 15:30:31 PDT 2000 Withdrew 100 on Wed Jul 12 15:30:39 PDT 2000 % java je3.rmi.Bank\$Client close david javanut 900 wooden nickels returned to you. Thanks for banking with us. In this example session, the bank client is running on the same host as the server. This need not be the case; the Client class looks for a system property named bank to determine which bank server to connect to. So you could invoke the client program like this (one long command line that has been broken into two lines): % java -Dbank=rmi://bank.trustme.com/TrustyBank \ je3.rmi.Bank\$Client open david javanut package je3.rmi; import java.rmi.*; import java.util.List; /** * This class is a placeholder that simply contains other classes and * interfaces for remote banking. **/ public class Bank { /** * This is the interface that defines the exported methods of the * bank server. 
**/ public interface RemoteBank extends Remote { /** Open a new account, with the specified name and password */ public void openAccount(String name, String password) throws RemoteException, BankingException; /** Close the named account */ public FunnyMoney closeAccount(String name, String password) throws RemoteException, BankingException; /** Deposit money into the named account */ public void deposit(String name, String password, FunnyMoney money) throws RemoteException, BankingException; /** Withdraw the specified amount of money from the named account */ public FunnyMoney withdraw(String name, String password, int amount) throws RemoteException, BankingException; /** Return the amount of money in the named account */ public int getBalance(String name, String password) throws RemoteException, BankingException; /** * Return a List of Strings that list the transaction history * of the named account **/ public List getTransactionHistory(String name, String password) throws RemoteException, BankingException; } /** * This simple class represents a monetary amount. This implementation * is really nothing more than a wrapper around an integer. It is useful * to demonstrate that RMI can accept arbitrary non-String objects as * arguments and return them as values, as long as they are Serializable. * A more complete implementation of this FunnyMoney class might bear * a serial number, a digital signature, and other security features to * ensure that it is unique and non-forgeable. **/ public static class FunnyMoney implements java.io.Serializable { public int amount; public FunnyMoney(int amount) { this.amount = amount; } } /** * This is a type of exception used to represent exceptional conditions * related to banking, such as "Insufficient Funds" and "Invalid Password" **/ public static class BankingException extends Exception { public BankingException(String msg) { super(msg); } } /** * This class is a simple stand-alone client program that interacts * with a RemoteBank server. It invokes different RemoteBank methods * depending on its command-line arguments, and demonstrates just how * simple it is to interact with a server using RMI. **/ public static class Client { public static void main(String[ ] args) { try { // Figure out what RemoteBank to connect to by reading a system // property (specified on the command line with a -D option to // java) or, if it is not defined, use a default URL. Note // that by default this client tries to connect to a server on // the local machine String url = System.getProperty("bank", "rmi:///FirstRemote"); // Now look up that RemoteBank server using the Naming object, // which contacts the rmiregistry server. 
Given the url, this // call returns a RemoteBank object whose methods may be // invoked remotely RemoteBank bank = (RemoteBank) Naming.lookup(url); // Convert the user's command to lower case String cmd = args[0].toLowerCase( ); // Now, go test the command against a bunch of possible options if (cmd.equals("open")) { // Open an account bank.openAccount(args[1], args[2]); System.out.println("Account opened."); } else if (cmd.equals("close")) { // Close an account FunnyMoney money = bank.closeAccount(args[1], args[2]); // Note: our currency is denominated in wooden nickels System.out.println(money.amount + " wooden nickels returned to you."); System.out.println("Thanks for banking with us."); } else if (cmd.equals("deposit")) { // Deposit money FunnyMoney money=new FunnyMoney(Integer.parseInt(args[3])); bank.deposit(args[1], args[2], money); System.out.println("Deposited " + money.amount + " wooden nickels."); } else if (cmd.equals("withdraw")) { // Withdraw money FunnyMoney money = bank.withdraw(args[1], args[2], Integer.parseInt(args[3])); System.out.println("Withdrew " + money.amount + " wooden nickels."); } else if (cmd.equals("balance")) { // Check account balance int amt = bank.getBalance(args[1], args[2]); System.out.println("You have " + amt + " wooden nickels in the bank."); } else if (cmd.equals("history")) { // Get transaction history List transactions = bank.getTransactionHistory(args[1], args[2]); for(int i = 0; i < transactions.size( ); i++) System.out.println(transactions.get(i)); } else System.out.println("Unknown command"); } // Catch and display RMI exceptions catch (RemoteException e) { System.err.println(e); } // Catch and display Banking related exceptions catch (BankingException e) { System.err.println(e.getMessage( )); } // Other exceptions are probably user syntax errors, so show usage. catch (Exception e) { System.err.println(e); System.err.println("Usage: java [-Dbank=<url>] Bank$Client " + "<cmd> <name> <password> [<amount>]"); System.err.println("where cmd is: open, close, deposit, " + "withdraw, balance, history"); } } } }
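The server class (Example 21-2) is referenced above but not included in this excerpt. As a rough sketch only — not the book's actual RemoteBankServer — a server for this interface extends UnicastRemoteObject, implements Bank.RemoteBank, and registers itself under the name the client looks up. The stub method bodies here exist only so the sketch compiles; a real server would keep per-account state and synchronize access to it.

package je3.rmi;

import java.rmi.*;
import java.rmi.registry.LocateRegistry;
import java.rmi.server.UnicastRemoteObject;
import java.util.List;

public class SketchBankServer extends UnicastRemoteObject implements Bank.RemoteBank {
    public SketchBankServer() throws RemoteException {}

    public void openAccount(String name, String password)
            throws RemoteException, Bank.BankingException {
        throw new Bank.BankingException("Not implemented in this sketch");
    }
    public Bank.FunnyMoney closeAccount(String name, String password)
            throws RemoteException, Bank.BankingException {
        throw new Bank.BankingException("Not implemented in this sketch");
    }
    public void deposit(String name, String password, Bank.FunnyMoney money)
            throws RemoteException, Bank.BankingException {
        throw new Bank.BankingException("Not implemented in this sketch");
    }
    public Bank.FunnyMoney withdraw(String name, String password, int amount)
            throws RemoteException, Bank.BankingException {
        throw new Bank.BankingException("Not implemented in this sketch");
    }
    public int getBalance(String name, String password)
            throws RemoteException, Bank.BankingException {
        throw new Bank.BankingException("Not implemented in this sketch");
    }
    public List getTransactionHistory(String name, String password)
            throws RemoteException, Bank.BankingException {
        throw new Bank.BankingException("Not implemented in this sketch");
    }

    public static void main(String[] args) throws Exception {
        LocateRegistry.createRegistry(1099);   // or run the rmiregistry tool separately
        Naming.rebind("FirstRemote", new SketchBankServer());
        System.out.println("Bank server ready.");
    }
}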
http://books.gigatux.nl/mirror/javaexamples/0596006209_jenut3-chp-21-sect-1.html
CC-MAIN-2018-43
en
refinedweb
[Adding libtool to the CC: list, since Bob indicates there are libtool and autoconf implications as well. The thread starts at <>.] On 12/26/2010 09:51 AM, Bruno Haible wrote: > So, when libposix becomes reality, it may be compiled with "gcc", thus > with a setting of > #define LINK_FOLLOWS_SYMLINKS 0 > But when it gets linked to a program that was compiled with "c99" or > "cc -xc99=all", then the link() function _will_ follow symlinks, > thus the link_immediate function will not perform as expected. Given the other problems that ensue on Solaris when one compiles and links to different standards, the simplest answer may be just "don't do that". It's not just the __xpg4 and __xpg6 stuff; it's also the _lib_version stuff: scanf behaves differently depending on which flavor of the -X option one passes to cc. It's quite a mess. If (despite the above) we do want to support compiling an application with cc -xwhatever or cc -Xwhatever, while linking to a library built in the default mode, the proposed change would appear to place a significant performance penalty for the (presumably more common) case of compiling and linking in the default mode. I would suggest something like the following patch instead, with a similar patch for link_follow, and with the appropriate m4 magic to make LINK_FOLLOWS_SYMLINKS a runtime test (__xpg4) on hosts like Solaris that have the __xpg4 variable. (Overall, though, it may be better not to poke a stick at this particular beehive. :-) diff --git a/lib/linkat.c b/lib/linkat.c index 73b1e3e..9b3550a 100644 --- a/lib/linkat.c +++ b/lib/linkat.c @@ -48,13 +48,17 @@ /* Create a link. If FILE1 is a symlink, either create a hardlink to that symlink, or fake it by creating an identical symlink. */ -# if LINK_FOLLOWS_SYMLINKS == 0 -# define link_immediate link -# else + static int link_immediate (char const *file1, char const *file2) { - char *target = areadlink (file1); + char *target = NULL; + int target_errno = 0; + if (LINK_FOLLOWS_SYMLINKS) + { + target = areadlink (file1); + target_errno = errno; + } if (target) { /* A symlink cannot be modified in-place. Therefore, creating @@ -89,11 +93,10 @@ link_immediate (char const *file1, char const *file2) free (target); free (dir); } - if (errno == ENOMEM) + if (target_errno == ENOMEM) return -1; return link (file1, file2); } -# endif /* LINK_FOLLOWS_SYMLINKS == 0 */ /* Create a link. If FILE1 is a symlink, create a hardlink to the canonicalized file. */
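For illustration only (this is not part of the proposed patch, and it is not verified against whatever m4 macro gnulib eventually adopted), the "runtime test" idea could look roughly like this on Solaris, where the C library exports the __xpg4 flag mentioned above:

/* Hypothetical sketch: decide at run time whether link() follows symlinks. */
#if defined __sun
extern int __xpg4;                      /* nonzero when built in XPG4v2/standards mode */
# define LINK_FOLLOWS_SYMLINKS (__xpg4 != 0)
#else
# define LINK_FOLLOWS_SYMLINKS 0        /* or 1, per the configure-time result */
#endif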
http://lists.gnu.org/archive/html/autoconf/2010-12/msg00061.html
CC-MAIN-2018-43
en
refinedweb
Signal/Slot between a form class and the main window class Hi, I have a QT application that, when the user clicks the new option on the file menu, a form pops up and collects some info. What I would like to happen is for the clicked() signal on the "ok" button to trigger creation of a new tab containing a specific layout (layout already finished).. General reading on the site and around the forums seems to indicate I should be able to use something like this: Sender: Form Instance Signal: clicked( QDialogButton::Ok ) Receiver: MainWindow Slot: MainWindow->SlotFunction() I have the connect statement in the constructor of the form object, but it claims it doesn't know what MainWindow is. Do I need to pass a pointer to MainWindow as a parameter to the constructor, or is there a better way? Ideally, I'd like to do this outside of the constructor because this form has multiple uses (situations where it wouldn't create the new tab as it is only being used to edit data on a tab that is already there). Note: QMainWindow is not a global variable, the instance is created in main(). The form instance is a private member of the QMainWindow class. This is slightly more complex than the tutorial (which does make a small reference to it being possible), but far more basic than most of the posts I've been reading on the topic. Hence this post :). I guess you could set the MainWindow instance as parent of your FormClass, and then use the pointer (to the parent) as 3rd argument in the connect. And I think the 4th one should read MainWindow::SlotFunction(). You make the connect of the signal and the slot at the place in your code where you have references to both parties involved in the connection. In your case, that could be in two places: at the place where you create the dialog in response to the QAction that was triggered by the menu. Here, the main window is represented by the this pointer, and the dialog you just created so you also have a pointer to that. if your dialog takes a parent argument, and you use the main window as the parent, then you might also make the connection in the constructor of the dialog, if you like. Personally, I prefer method 1), as that shields the dialog from having to know anything about the main window it is used with. I'm in agreement on the technique, but I'm still running into a problem with it. Here's the function that does the work: @ void VCMainWindow::CreateNewDataTab( VCWIDGETPTR ParentTabBar ) { /BEGIN FUNCTION CREATENEWDATATAB()/ VCWIDGETPTR NewDataTab = NULL; NewDataTab = new VCWIDGET; //Creates a widget for the new tab page. MMNewForm = new VCMPDataInputForm; //Creates an instance of the data input form. VCTabBar->addTab( new VCMPDataTabLayout( this ), "New Tab"); //Warning here that ui was a private member, so I cheated (temporarily) and made VCMainWindow a friend //class of VCMPDataInputForm. connect( MMNewForm->ui->buttonBox->button( QDialogButtonBox::Ok ), SIGNAL( clicked() ), this, SLOT( ) ); //Shows the new tab. 
VCTabBar->show(); } /CLOSE FUNCTION CREATENEWDATATAB()/ @ There is one header included in this source file, and here are the contents: @ #ifndef __VCWINDOWS_H #define __VCWINDOWS_H #include "C:\VCCalculator\VCCalculator\header_files\vcsysteminfo.h" #include "C:\VCCalculator\VCCalculator\header_files\vctablayouts.h" #include "C:\VCCalculator\VCCalculator\source_files\vcmpdatainputform.h" #include "C:\VCCalculator\VCCalculator\header_files\ui_vcpopulationinputform.h" #include <QSize> #include <QtGui> #include <QTabWidget> //Header file for tab features. typedef QSize VCDIMENSIONS; typedef QSize* VCDIMENSIONSPTR; typedef QMenu VCMENUCATEGORY; typedef QMenu* VCMENUCATEGORYPTR; typedef QAction VCMENUITEM; typedef QAction* VCMENUITEMPTR; typedef QWidget VCWIDGET; typedef QWidget* VCWIDGETPTR; typedef QStatusBar VCSTATUSBAR; typedef QStatusBar* VCSTATUSBARPTR; typedef QMessageBox VCMESSAGEBOX; typedef QMessageBox* VCMESSAGEBOXPTR; typedef QTabWidget VCTABWIDGET; typedef QTabWidget* VCTABWIDGETPTR; class VCMainWindow : public QMainWindow { /BEGIN CLASS VCMAINWINDOW DECLARATION/ Q_OBJECT //Macro that declares the class as an object of type QOBJECT private: /*MAIN MENU HEADINGS*/ VCMENUCATEGORYPTR MMFileMenu; //Main Menu: File Menu Pointer. VCMENUCATEGORYPTR MMEditMenu; //Main Menu: Edit Menu Pointer. VCMENUCATEGORYPTR MMHelpMenu; //Main Menu: Help Menu Pointer. /*MAIN MENU DROPDOWN OPTIONS*/ VCMENUITEMPTR MMNewOption; //Main Menu: File Menu: New Option VCMENUITEMPTR MMExitOption; //Main Menu: File Menu: Exit Option VCMENUITEMPTR MMInfoOption; //Main Menu: Help Menu: About Option VCSTATUSBARPTR VCStatusBar; //Main Window: Status Bar Pointer. /*TAB BAR*/ VCTABWIDGETPTR VCTabBar; //Main Window: Tab Bar Pointer. /*TABBED WINDOWS -- THESE WINDOWS APPEAR WHEN THE CORRESPONDING TAB IS CLICKED!*/ VCWIDGETPTR VCTabBarPage1; //Tabbed Window: Content Widget. 1st Tab. VCWIDGETPTR VCTabBarPage2; //Tabbed Window: Content Widget. 2nd Tab. VCWIDGETPTR VCTabBarPage3; //Tabbed Window: Content Widget. 3rd Tab. VCWIDGETPTR VCTabBarPage4; //Tabbed Window: Content Widget. 4th Tab. /*FORMS*/ VCMPDataInputForm* MMNewForm; //Input Form: New Population Data public slots: void AboutMessage(); //Slot that generates help->about popup box and displays it. void InvokeMPDDialogWindow(); //Slot that generates new mosquito population dialog window. void CreateNewDataTab( VCWIDGETPTR ); protected: public: VCMainWindow(); //Class Constructor void CreateMainMenu(); //Creates the main menu and it's components. void CreateStatusBar(); //Creates the status bar. void CreateMainMenuActions(); //Creates the actions associated with main menu options. void CreateMainPageLayout( ); //Creates the page layout for a widget. void CreateTabbedEnvironment( QWidget* CentralWidget ); //Create the tab bars and pages associated with them. }; /CLOSE CLASS VCMAINWINDOW DECLARATION/ #endif // VCWINDOWS_H @ I'm receiving these errors: invalid use of incomplete type 'struct Ui::VCMPDataInputForm' forward declaration of 'struct Ui::VCMPDataInputForm' Searching around this problem seems to be related to the header inclusions, but I can't seem to locate the cause. If I need to post anything else, just let me know what. EDIT: Ok, so I've noticed that in the header declarations for the form, the namespace {} tags hold only a forward declaration of the classes, like this: @ namespace Ui { class Dialog: public Ui_Dialog {}; } // namespace Ui @ Similarly... 
@ namespace Ui { class VCMPDataInputForm; } @ The full class declarations are in the same files, but they don't appear in the namespace. This is designer generated code. Thanks! The problem you are seeing is really a C++ problem where you run into the fact that you can not access private or protected class members from outside the class. Note that that is a good thing; don't "fix" it by just making everything public. What you are trying to do, is to connect directly to the OK button on the form. Don't do that. Instead, connect to an ok signal that is exposed by your widget class itself. That way, you can change the internals of your widget without other parts of your program being affected. You might change your VCMPDataInputForm like this (add to the header): @ signals: //chosen to match the QDialog interface void accepted(); void rejected(); @ Then, in your VCMPDataInputForm implementation, you make sure these signals get emitted when the buttons are pressed. You can do that like this. I suggest the constructor for this bit of code: @ connect(ui->buttonBox, SIGNAL(accepted()), this, SIGNAL(accepted())); connect(ui->buttonBox, SIGNAL(rejected()), this, SIGNAL(rejected())); @ That is: you forward the accepted and rejected signals from your button box to be emitted from the API of your VCMPDataInputForm class too. Now, you can replace your connect statement from line 14 of your first code piece with this: @ connect( MMNewForm, SIGNAL( accepted() ), this, SLOT( theSlotToActivate ) ); @ Note that you must name a slot; you cannot connect to nothing. This way, you don't need any cheating. Ahhh...I didn't know I could do this: @ connect(ui->buttonBox, SIGNAL(accepted()), this, SIGNAL(accepted())); connect(ui->buttonBox, SIGNAL(rejected()), this, SIGNAL(rejected())); @ I only cheated to try and figure out what the cause of my problem was, I had no intention of keeping it that way. This technique is much, much better. Thank you! Using this technique has brought me to a (hopefully minor) issue. The tab that is created by the slot triggered by the custom signal accepted() needs data that must be obtained from the form when the user presses the "ok" button (which I assume emits the accepted() signal). My solution was to put all of the code that loaded the data within the custom form class accepted() signal assuming that the accepted() function contents would be executed before the signal was forwarded and the slot triggered. My assumption was wrong, and because of a lot of NULL checks I used a messagebox to determine that the slot that needs the data is actually executing before the data is placed in memory. I tried connecting another slot to the accepted signal and putting it before the signal forwarding, but that didn't work. Here's what I did: @ //This was what I added // connect( ui->buttonBox, SIGNAL( accepted() ), this, SLOT( GetFormData( ) ) ); connect( ui->buttonBox, SIGNAL( accepted() ), this, SLOT( accepted() ) ); connect( ui->buttonBox, SIGNAL( rejected() ), this, SLOT( rejected() ) ); @ Reading around, I see that the slot execution order can never be assumed, so what is the best route to get the data loading before the accepted() signal is forwarded? I was considering loading it with editingFinished() on each form field, but this leaves a lot of work if the user wants to change data or decides to cancel the form entirely. Thanks! Actually, the slots connected to a signal will be executed in order, that has been relatively recently added to the documentation. 
You may have found older comments that the order was not guaranteed. It has always been this order, but that was never documented and thus could not be relied on.

I assume your code snippet above comes from the code in your form or tab, right? If you want to do something before emitting the accept signal, then I suggest you don't create the SIGNAL-SIGNAL connection that I talked about earlier, but create two private slots in your form that you connect to your button box (like you do in your code snippet), do whatever work you need to do there, and after that, from that slot emit the accepted() or rejected() signal to the outside world. That way, program flow is always clear, and does not depend on connect orders. That is much more maintainable.

What I don't completely get, is why you need this. After your form emits the accepted() signal, your object receiving that signal can still query your dialog or tab for the data it needs. You just expose your GetFormData() publicly. All you need is that you keep a member variable pointer to the dialog. Most (all?) Qt dialogs work like that.

@
void VCMainWindow::InvokeMPDDialogWindow()
{ /*BEGIN FUNCTION INVOKEMPDDIALOGWINDOW*/

    VCMPDataInputForm* NewForm;

    //Allocate memory for the new mosquito population data input form.
    NewForm = new VCMPDataInputForm( );

    //Set the title of the form window.
    NewForm->setWindowTitle( "New Mosquito Population");

    //connect( NewForm, SIGNAL( accepted( ) ), NewForm, SLOT( NewForm->GetFormData( ) ) );

    //If the form is successfully submitted, then we need to create a new tab for
    //this mosquito population.
    connect( NewForm, SIGNAL( accepted( ) ), this, SLOT( CreateNewDataTab( ) ) );

    //SHOW the new form and don't let the user interact with the program windows
    //until it is either submitted or cancelled.
    NewForm->exec();

} /*CLOSE FUNCTION INVOKEMPDDIALOGWINDOW*/
@

Andre - the reason I need to do it this way currently is because I have the tab creation triggered by the accepted() slot, and the tab text etc... is provided by the data obtained by GetFormData(). If the data doesn't exist (Cancel or not validated), I don't want the tab created. The code above is the calling method. Would it be more feasible for me to connect the accepted() signal to the GetFormData() method and then after exec() returns do an:

if( DataElement[ i ] ) {
    //create tab.
}

Would this work?

I am really sorry, but I don't follow it anymore. The code you show above seems fine in terms of operation order, though there is a problem with ownership. Who will delete the NewForm object you created? How about these modifications to your code:

Make your NewForm pointer a private member of your VCMainWindow class.
Initialize the NewForm pointer to 0 in your VCMainWindow constructor.
Only create a new instance of VCMPDataInputForm if there isn't one already.
In your CreateNewDataTab() slot, use the now available NewForm member pointer to access the VCMPDataInputForm class instance.
Add a member function (GetFormData() will serve, I guess) to VCMPDataInputForm to access the form data, so you can get to it from the CreateNewDataTab slot.

This way, you only access the form data if the form was accepted. No new tab will be created if the dialog was cancelled.
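A rough sketch of what those five points could look like in code — the names follow this thread (MMNewForm is the member already declared in the header shown earlier), it is untested against the actual project, and GetFormData() plus its return type are placeholders that still have to be written:

@
VCMainWindow::VCMainWindow()
{
    MMNewForm = 0;   // step 2: start with no dialog instance
    //... existing constructor code ...
}

void VCMainWindow::InvokeMPDDialogWindow()
{
    if( MMNewForm == 0 )   // step 3: create the dialog only once
    {
        MMNewForm = new VCMPDataInputForm( );
        MMNewForm->setWindowTitle( "New Mosquito Population" );
        connect( MMNewForm, SIGNAL( accepted( ) ), this, SLOT( CreateNewDataTab( ) ) );
    }
    MMNewForm->exec();
}

//Note: a connect to SLOT( CreateNewDataTab( ) ) needs a slot with no parameters,
//so declare this no-argument overload in the header.
void VCMainWindow::CreateNewDataTab()
{
    // steps 4 and 5: the member pointer is still valid here, so ask the
    // form for its data through a public accessor you add yourself, e.g.
    //     SomeDataStruct data = MMNewForm->GetFormData();
    // ...build the new tab from that data only if the form validated...
}
@

Keeping the dialog as a member also answers the ownership question: it lives for the lifetime of the main window, and you delete it in the destructor (or give it a parent so Qt deletes it for you).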
https://forum.qt.io/topic/5040/signal-slot-between-a-form-class-and-the-main-window-class
CC-MAIN-2018-43
en
refinedweb
Caution: The documentation you are viewing is for an older version of Zend Framework.

Routing and controllers — Zend Framework 2 2.0.7

<?php
namespace Album\Controller;

use Zend\Mvc\Controller\AbstractActionController;
use Zend\View\Model\ViewModel;

class AlbumController extends AbstractActionController
{
    public function indexAction()
    {
    }

    public function addAction()
    {
    }

    public function editAction()
    {
    }

    public function deleteAction()
    {
    }
}
https://framework.zend.com/manual/2.0/en/user-guide/routing-and-controllers.html
CC-MAIN-2018-43
en
refinedweb
[ ] Zhijie Shen updated YARN-2446:
------------------------------
Attachment: YARN-2446.1.patch

This patch makes use of the namespace to control the user's access to the entities belonging to it. The system is going to have a default namespace, which allows everybody to read and write entities. If the user doesn't specify the namespace id when putting an entity, it will be put into the default one.

One thing worth mentioning is that the patch doesn't cover entity identifier <type, id> isolation. In the initial proposal, we planned to allow the same entity identifier in different namespaces. However, that would require fully refurbishing the current key space in the leveldb timeline store, which assumes <type, id> is unique globally. Moreover, the APIs would need to be changed accordingly. For example, getEntity is likely to return multiple entities with the same identifier unless we provide one more namespace param. On the other side, an authenticated user in a YARN cluster should be reasonable about creating entities and their identifiers, so identifier collisions should be rare unless an attacker causes them intentionally. So we decided to postpone identifier collision avoidance until some use case really wants it.

> Using TimelineNamespace to shield the entities of a user
> --------------------------------------------------------
>
> Key: YARN-2446
> URL:
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: timelineserver
> Reporter: Zhijie Shen
> Assignee: Zhijie Shen
> Attachments: YARN-2446.1.patch
>
> Given YARN-2102 adds TimelineNamespace, we can make use of it to shield the
> entities, preventing them from being accessed or affected by other users'
> operations.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
https://www.mail-archive.com/yarn-issues@hadoop.apache.org/msg34639.html
CC-MAIN-2018-43
en
refinedweb
For Day 11, the challenge is to optimize a path through a hex grid. My Python solution is below:

with open("input.txt", "r") as o:
    directions = o.read().split(",")

DIR = {
    "n": (0, 1, -1),
    "nw": (-1, 1, 0),
    "sw": (-1, 0, 1),
    "ne": (1, 0, -1),
    "s": (0, -1, 1),
    "se": (1, -1, 0)
}

x = y = z = 0
max_dist = 0

def get_dist(x, y, z):
    return (abs(x) + abs(y) + abs(z)) / 2

for d in directions:
    x += DIR[d][0]
    y += DIR[d][1]
    z += DIR[d][2]
    max_dist = max(max_dist, get_dist(x, y, z))

print "Part 1: ", get_dist(x, y, z)
print "Part 2: ", max_dist

The best place to learn about hex grids that I've seen is Red Blob Games. Once you get your head around how the grids work, today's challenge isn't too difficult. Advent of Code runs every day up to Christmas, you should join in!
https://blog.jscott.me/advent-of-code-day-11/
CC-MAIN-2018-43
en
refinedweb
I am doing malloc(0) and then doing strcpy and then reversing, and it's working, why?? and if i dont do malloc(0) and then try to strcpy the program crashed as expected, but how is malloc(0) making a difference?

#include <stdio.h>

char* reverse(char *data);
void my_Strcpy(char* dest,char* source);

main()
{
    char* p_Name = "Mithun P";
    char a_Name[] = "Mithun P";
    char *pd_Name = malloc(0);   //what is happening here
    my_Strcpy(pd_Name,"Mithun P");
    //printf("reverse of p_Name is %s \n",reverse(p_Name));
    printf("reverse of a_Name is %s \n",reverse(a_Name));
    printf("reverse of pd_Name is %s \n",reverse(pd_Name));
    getchar();
}

void my_Strcpy(char* dest,char* source)
{
    while(*dest++ = *source++);
}

char* reverse(char * data)
{
    int size = 0;
    int i,j;
    char* temp = data;
    while(*temp++)
        size++;
    printf("size is %d\n",size);
    for(i = 0, j = size-1;i < size/2; i++ , j--)
    {
        char temp = data[i];
        data[i] = data[j];
        data[j] = temp;
    }
    return data;
}
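For context (this note is not part of the original post): writing nine bytes into a zero-byte allocation is undefined behaviour. Most allocators hand back a pointer with some slack or header space after it, so the copy often appears to work, but nothing guarantees it and it can silently corrupt the heap. A sketch of the well-defined version:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *name = "Mithun P";
    char *pd_Name = malloc(strlen(name) + 1);   /* room for the characters plus the '\0' */

    if (pd_Name == NULL)
        return 1;
    strcpy(pd_Name, name);
    printf("%s\n", pd_Name);
    free(pd_Name);
    return 0;
}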
https://www.daniweb.com/programming/software-development/threads/293064/what-is-malloc-0-doing
CC-MAIN-2018-43
en
refinedweb
Using relative paths in your import statements is great for “Hello World” examples and blog posts. But when used in large projects with hundreds of files and deep hierarchical directory structures, relative paths become a nightmare (see Rob Ashton’s post Stop using relative paths in your JavaScripts for some of the reasons why this is so). Relative paths aren’t entirely bad. For example, when importing a closely related file, something that would be considered part of the same module (likely within the same directory), using a relative path is succinct and can document how closely related the files are. But in my experience, relative path imports are used in all cases, throughout the codebase. I’m assuming this is because relative paths work out of the box. No additional configuration is needed to support them–which is not the case for absolute paths. This situation is unlike most other programming languages (Java, C/C++, Ruby, etc.), where both options are readily available, and convention has people using absolute paths more frequently than relative paths. webpack Configuration It’s easy to configure webpack to look for your source files using an absolute path. Just add a root to your resolve section: var path = require('path'); // ... resolve: { root: [ path.resolve('./src'), ], } From the webpack documentation The directory (absolute path) that contains your modules. May also be an array of directories. This setting should be used to add individual directories to the search path. Now, instead of this: import { DateFormatter } from '../../../../shared/format/dateFormatter'; You’ll be able to import like this: import { DateFormatter } from 'shared/format/dateFormatter'; Aliases In some cases, full-length absolute paths might be a bit unwieldy. For example, if you’ve got a file with commonly used helper functions, located deep within your directory structure (and it makes organizational sense for it to be there), you might end up frequently importing something like this: import { add, subtract } from 'common/tools/utils/helpers/math/arithmetic'; By specifying an alias in your webpack config, you could instead import this: import { add, subtract } from 'math/arithmetic'; This is done by specifying an alias in the resolve section: resolve: { alias: { math: path.resolve('./src/common/tools/utils/helpers/math') } }, I’ve done this type of thing on projects before, and it works really well. But there is a downside of using these kinds of aliases. They can trick developers into thinking there’s a top-level math directory, when in reality, it’s just an alias. Guilherme Oenning has a good suggestion in his How to avoid relative path hell in JavaScript/TypeScript projects post, which I came across while doing research for this post. He suggests prefixing your aliases with an @ to differentiate them from npm module imports and normal absolute paths. This is how the import would look in that case: import { add, subtract } from '@math/arithmetic'; I haven’t done this in practice, but I like the idea, and I’ll probably try it on a future project. TypeScript Compiler If you’re using TypeScript, you’ll need to make a change to your tsconfig.json in order for the TypeScript compiler to be able to resolve the paths. compilerOptions: { // ... "baseUrl": "./src", "paths": { "math/*": [ "common/tools/utils/helpers/math/*" ], } } ts-node Unfortunately, the ts-node command line tool doesn’t seem to honor the compilerOptions paths out of the box. 
If you run into trouble with ts-node, you can try using the tsconfig-paths package:

terminal> yarn add tsconfig-paths

And then include it on the command line whenever using ts-node:

terminal> ts-node -r tsconfig-paths/register main.ts

Mocha

The mocha command line utility also needs to be told about the tsconfig-paths package in order for it to properly resolve the paths in TypeScript. I got it to work by adding the following to my mocha.opts file:

--compilers ts:ts-node/register -r tsconfig-paths/register

Conclusion

Once all of these have been configured, it's not something you need to think about again. You'll no longer be living in relative path hell, so you can focus on writing great code instead of counting the number of times "../" appears in an import statement.

1 Comment

Nice article, but I wanted to add a comment about:

> He suggests prefixing your aliases with an @ to differentiate them from npm module imports and normal absolute paths.

"@math/arithmetic" is a valid npm module name since they introduced scoped packages, so this doesn't solve the ambiguity regarding aliases and npm packages. Or am I missing something? An alternative would be to use "~" instead of "@".
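For what it's worth, the "~" variant suggested in that comment would just be a different alias key. A sketch reusing the article's example path (the exact prefix is your choice, and this isn't verified against any particular webpack or TypeScript version):

// webpack.config.js (sketch)
resolve: {
  alias: {
    '~math': path.resolve('./src/common/tools/utils/helpers/math'),
  },
},

// tsconfig.json (sketch)
// "paths": {
//   "~math/*": ["common/tools/utils/helpers/math/*"]
// }

// which would allow imports like:
// import { add, subtract } from '~math/arithmetic';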
https://spin.atomicobject.com/2017/10/07/absolute-paths-javascript/
CC-MAIN-2018-43
en
refinedweb
Subject: Re: [boost] [boost::endian] Summary of discussion #1 From: Stewart, Robert (Robert.Stewart_at_[hidden]) Date: 2010-06-14 15:11:35 vicente.botet wrote: > > .... That's an excellent argument in favor of keeping the name generic. > template <typename Endian, typename T, std::size_t > n_bytes=8*sizeof(T), > typename Alignment = alignment::aligned> > class endian_holder; > > I will let the name boost::endian_integer or > boost::integer::endian for the class providing also the > arithmetic operations. "endian_integer" is fine for the type with arithmetic operations, but "endian_holder" seems more circumspect that necessary. It begs questions like, "What's an 'endian' that you need a holder?" I'd prefer to call it "endian" but you can't have a boost::endian namespace, too. Is boost::endianness a good namespace name? What about boost::order/ordering/ordered? boost::order::endian (boost::endianness::endian) provides the basic facilities. boost::order::endian_integer (boost::endianness::endian_integer) derives from endian and provides arithmetic operators. _____
https://lists.boost.org/Archives/boost/2010/06/168075.php
CC-MAIN-2021-10
en
refinedweb
Migrate Your WordPress Site to the Jamstack WordPress is the most popular content management system on the planet, powering about a third of the websites online today. If you’re working on one of the roughly 1 in 3 websites powered by WordPress and wish you could migrate your development workflow to the Jamstack, I have good news! It’s possible to move your WordPress websites to the Jamstack today. And what’s even more exciting is that your content creators don’t need to change their current workflow! They can continue to use the WordPress admin dashboard to manage content and their changes will trigger a rebuild of your new, blazing fast Jamstack site. In this post, we’ll walk through migrating a WordPress site to Gatsby, a popular Jamstack framework powered by React and GraphQL. If you prefer video, we’ve got you covered! This post is an expanded version of a project I built with Zac Gordon on Learn With Jason. In about 90 minutes, Zac and I migrated a WordPress site to Gatsby. Watch Zac Gordon teach us how to migrate WordPress sites to the Jamstack on Learn With Jason. If you prefer short videos that only focus on code and don’t waste any time, I also created a 30-minute video tutorial that covers all the steps in this project. You can learn how to move your WordPress site to Gatsby on egghead. NOTE: The lessons from the egghead course are also embedded in this tutorial so it’s easy to watch and reference the code. Set up WordPress The first things we need to migrate a WordPress site to the Jamstack is a WordPress site. In this tutorial, we’re going to use, but you can follow along using your own site if you prefer. Install WPGraphQL and WPGraphiQL The heart of a Jamstack-friendly WordPress site is pulling WordPress data from an API instead of using the built-in template system. One of the most approachable options for accessing WordPress data via API is WPGraphQL. Because these plugins are developer-focused, they’re not available through the standard WordPress plugins search. Instead, we need to install them from GitHub. We need two plugins: - WP GraphQL — this enables a GraphQL API that allows access to all public WordPress data through an unauthenticated GraphQL API. (There’s also an authenticated API for privileged access, but we won’t go into details on that in this post.) - WP GraphiQL — this is technically optional, but it adds a new tab inside the WordPress admin that allows us to quickly try out GraphQL queries and see data coming back. To install the plugins, log into the server where your site is hosted and clone the plugins into the wp-content/plugins directory: ssh <user>@<domain> cd /path/to/your/wp-content/plugins/ git clone --depth=1 --single-branch git clone --depth=1 --single-branch This will add the latest files to your site’s plugins directory without including unnecessary Git metadata. Activate the GraphQL plugins Once the plugins are installed, we need to activate them. Head to your WordPress admin dashboard, then click the “Plugins” option from the left-hand menu. We should see both WP GraphQL and WP GraphiQL as installed, but not activated. Click the “Activate” link for both WP GraphQL and WP GraphiQL. Write our first GraphQL query in WP GraphiQL Click the new “GraphiQL” menu option at the left-hand side. This brings up the GraphiQL interface inside our WordPress dashboard. Choose fields in the explorer at the left-hand side to build out a query. 
For example, if we want to load our site’s pages, we can run this query: query MyQuery { pages { nodes { title uri content isFrontPage } } } Great! We’ve now got a functioning GraphQL API for WordPress that we can use to power our Jamstack frontend! Create a new Gatsby site Before we can use our WordPress data, we need to create a new Jamstack site that will display it. In this example we’ll use Gatsby, an open source, React-based framework that specializes in pulling data from third-party sources. To create a new site, run the following commands: # create a new Gatsby site in a directory called `wordpress-jamstack` # using the Hello World Gatsby starter npx gatsby new wordpress-jamstack gatsbyjs/gatsby-starter-hello-world # move into the folder cd wordpress-jamstack/ This generates a new site in a directory called wordpress-jamstack with a bare-bones Gatsby site. NOTE: There are lots of additional options for starting a new Gatsby site with WordPress that come with batteries included. One great example is Alexandra Spalato’s gatsby-theme-wordpress-blog. We’re intentionally building this site from scratch to make sure we understand how Gatsby and WordPress work together under the hood. Install and configure gatsby-source-graphql Gatsby uses source plugins to load data. One of the most powerful source plugins is gatsby-source-graphql, which allows us to use any GraphQL API as a data source in Gatsby. Since we just created a GraphQL API for our WordPress site, this is a perfect option for loading our WordPress data in Gatsby! Install the plugin with the following command: npm install gatsby-source-graphql After installing the plugin, we need to load it by modifying gatsby-config.js: /** * Configure your Gatsby site with this file. * * See: */ module.exports = { - /* Your site config here */ + plugins: [ + { + resolve: 'gatsby-source-graphql', + options: { + typeName: 'WPGraphQL', + fieldName: 'wpgraphql', + url: '', + } + } + ] } Save this, then start the Gatsby development server by running: npm run develop Once the site finishes starting up, open in your browser to see Gatsby’s version of GraphiQL. In Gatsby, writing a query to load WordPress data is almost exactly the same as the one we used in WP GraphiQL, except Gatsby wraps all WordPress queries in wpgraphql — the fieldName we set in our config — to avoid naming collisions with other data sources. Add the following query in GraphiQL: { wpgraphql { pages { nodes { title uri content isFrontPage } } } } After executing the query by pressing the play button, we’ll see our WordPress data loaded in Gatsby! Create pages from WordPress content Now that we have a Gatsby site that has access to our WordPress data, we can start creating pages. To create pages in Gatsby, we need three things: - Data to display on the page - A template component to define the page layout - A call to the createPagesAPI exported from gatsby-node.jsto combine the data and template together into pages We have the data from WordPress now, so we can create our template component, then create pages. Create a template component for pages A template component in Gatsby is a standard React component. Gatsby passes in several props to the component when it creates pages, so it’s probably a good idea to take a look at what those are. 
Create a new files called src/templates/page-template.js and put this inside: import React from "react" const PageTemplate = props => { return <pre>{JSON.stringify(props, null, 2)}</pre> } export default PageTemplate Once we’ve saved this file, we’re ready to actually create pages. Create pages in gatsby-node.js To create pages, create a new file in the root directory (next to gatsby-config.js) called gatsby-node.js. Inside, let’s add a createPages API call: exports.createPages = async ({ actions, graphql }) => { // query for WordPress page data const result = await graphql(` { wpgraphql { pages { nodes { id uri } } } } `) // pull the page data out of the query response const pages = result.data.wpgraphql.pages.nodes // loop through WordPress pages and create a Gatsby page for each one pages.forEach(page => { actions.createPage({ path: page.uri, component: require.resolve("./src/templates/page-template.js"), context: { id: page.id, }, }) }) } After saving this file, we can stop the server (press control + C), then run npm run develop again. Once the site has started, visit. We can see everything that Gatsby passes to page components, including the id value we passed in context: This doesn’t look like much right now, but it gives us the page ID, which will let us load page-specific data in our template component. Write a GraphQL query to load page content from WordPress Collocating GraphQL queries with the components that use them is a great way to keep your codebase understandable. Because of this, we’re going to query for individual page data using the page ID in the template component itself. Anything passed in the context object is also available as a GraphQL variable, so we can use the id to load content for each page by adding the following query: import React from "react" + import { graphql } from "gatsby" + + export const query = graphql` + query($id: ID!) { + wpgraphql { + page(id: $id) { + title + content + } + } + } + ` const PageTemplate = props => { return <pre>{JSON.stringify(props, null, 2)}</pre> } export default PageTemplate Once we save this, the page at will update to include a new data prop that contains the result of this query. Alright! Now that we have content, we need to write some markup to actually display it in a reader-friendly way. Display the content in the page template WordPress returns markup and HTML-encoded entities, so we need to use dangerouslySetInnerHTML to make sure our content displays properly. To use our page data, we can grab just the data prop in our component, then drill down to the page content and display those values: import React from "react" import { graphql } from "gatsby" export const query = graphql` query($id: ID!) { wpgraphql { page(id: $id) { title content } } } ` - const PageTemplate = (...args) => { - return <pre>{JSON.stringify(args, null, 2)}</pre> + const PageTemplate = ({ data }) => { + const page = data.wpgraphql.page + return ( + <> + <h1 dangerouslySetInnerHTML={{ __html: page.title }} /> + <div dangerouslySetInnerHTML={{ __html: page.content }} /> + </> + ) } export default PageTemplate Save and check out — it’s working! Add a shared layout and styles To make our Gatsby site look more like a real website, we need to add a layout — a shared header in this case — and styles. Create a shared Layout component Creating a Layout component requires a standard React component that wraps whatever content is passed to it (as the children prop) with markup to give the page semantic structure. 
Create src/components/layout.js, then add the following code: import React from "react" import { Link } from "gatsby" const Layout = ({ children }) => { return ( <> <header> <Link to="/" className="home"> Migrate WordPress to the Jamstack </Link> </header> <main>{children}</main> </> ) } export default Layout This sets up a header element with a link to go back to the home page and a main element that contains the page content. Use the layout in pages Once we have a layout component, we need to import it in our page template and wrap it around the output: import React from "react" import { graphql } from "gatsby" + import Layout from '../components/layout'; export const query = graphql` query($id: ID!) { wpgraphql { page(id: $id) { title content } } } ` const PageTemplate = ({ data }) => { const page = data.wpgraphql.page return ( - <> + <Layout> <h1 dangerouslySetInnerHTML={{ __html: page.title }} /> <div dangerouslySetInnerHTML={{ __html: page.content }} /> - </> + </Layout> ) } export default PageTemplate Once we’ve saved these changes, we can head to to see the header at the top of the page. Add basic styles Adding styles helps our site look a bit more polished. Create src/styles/layout.css, then add the following: html, body { margin: 0; font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; } header { background: darkblue; padding: 1rem 5vw; } header a { color: white; display: inline-block; margin-left: 0.75rem; } header .home { font-weight: 800; margin-left: 0; text-decoration: none; } main { margin: 2rem auto; max-width: 54ch; width: 90vw; } This CSS makes the header blue with white text and adds some spacing around elements on the page. To apply our styles, we need to import the stylesheet in our Layout component: import React from "react" import { Link } from "gatsby" + import "../styles/layout.css" const Layout = ({ children }) => { return ( <> <header> <Link to="/" className="home"> Migrate WordPress to the Jamstack </Link> </header> <main>{children}</main> </> ) } export default Layout After saving this change, our page at will start looking a little more stylish. NOTE: Gatsby has built-in support for multiple styling approaches. You can likely use whatever flavor of CSS you prefer. Create pages from WordPress posts In WordPress, content can be split up into multiple content types. By default, there are “pages”, which we’ve already handled, and “posts”, which are most commonly used to power blogs. Our WordPress site is using both pages and posts, so we need to write additional code to create Gatsby pages for each WordPress post. Create a page template component Fortunately, the process for creating pages from WordPress posts is very similar to the process for creating WordPress pages. To start, we can duplicate src/templates/page-template.js and name the new file src/templates/post-template.js. Inside, we need to make the following edits: import React from "react" import { graphql } from "gatsby" import Layout from "../components/layout" export const query = graphql` query($id: ID!) 
{ wpgraphql { - page(id: $id) { + post(id: $id) { title content } } } ` - const PageTemplate = ({ data }) => { - const page = data.wpgraphql.page + const PostTemplate = ({ data }) => { + const post = data.wpgraphql.post return ( <Layout> - <h1 dangerouslySetInnerHTML={{ __html: page.title }} /> - <div dangerouslySetInnerHTML={{ __html: page.content }} /> + <h1 dangerouslySetInnerHTML={{ __html: post.title }} /> + <div dangerouslySetInnerHTML={{ __html: post.content }} /> </Layout> ) } - export default PageTemplate + export default PostTemplate Now we’re ready to actually load post data and create Gatsby pages. Create pages from WordPress post data in gatsby-node.js Inside gatsby-node.js, we need to add to our GraphQL query, then add another block of code that pulls the posts out of the response and creates pages for each one. To differentiate posts from pages, each post will have its URL prefixed with Make the following changes to put this in place: exports.createPages = async ({ actions, graphql }) => { const result = await graphql(` { wpgraphql { pages { nodes { id uri } } + posts { + nodes { + id + uri + } + } } } `) const pages = result.data.wpgraphql.pages.nodes pages.forEach(page => { actions.createPage({ path: page.uri, component: require.resolve("./src/templates/page-template.js"), context: { id: page.id, }, }) }) + + const posts = result.data.wpgraphql.posts.nodes + + posts.forEach(post => { + actions.createPage({ + path: `blog/${post.uri}`, + component: require.resolve("./src/templates/post-template.js"), + context: { + id: post.id, + }, + }) + }) } Stop the server and restart it, then visit one of your post URLs, such as. Hey, that wasn’t so bad — we’re getting pretty close here! Add support for WordPress block styles With the release of Gutenberg, WordPress introduced a block-based editor that allows a slick visual editing experience that supports some stylized blocks like pull quotes. If we want to avoid rewriting all the CSS to support those stylized blocks, we need to import the block styles from the official WordPress package. Install @wordpress/block-library Our first step is to install the official WordPress block library: npm install @wordpress/block-library Import the stylesheet into the Layout component Once we have the package installed, we can import only the stylesheet in our layout component: import React from "react" import { Link } from "gatsby" + import "@wordpress/block-library/build-style/style.css" import "../styles/layout.css" const Layout = ({ children }) => { return ( <> <header> <Link to="/" className="home"> Migrate WordPress to the Jamstack </Link> </header> <main>{children}</main> </> ) } export default Layout After saving this, start up the server and head to a page with a styled block on it (such as) to see the WordPress block styles applied. This looks pretty okay considering we didn’t write any custom styles. Create a page to show blog previews To allow site visitors to browse blog posts, we need to create a page that lists post previews. 
To do this, we’re going to create a Gatsby page at src/pages/blog.js, query for post data, and map over the results to create a list of previews: import React from "react" import { graphql, Link } from "gatsby" import Layout from "../components/layout" export const query = graphql` query { wpgraphql { posts { nodes { id title uri excerpt } } } } ` const Blog = ({ data }) => { const posts = data.wpgraphql.posts.nodes return ( <Layout> {posts.map(post => ( <article key={post.id}> <h2> <Link to={`/blog/${post.uri}`} dangerouslySetInnerHTML={{ __html: post.title }} /> </h2> <div dangerouslySetInnerHTML={{ __html: post.excerpt }} /> </article> ))} </Layout> ) } export default Blog Save this file, then head to to see the previews. Use WordPress settings to configure your Gatsby site WordPress has a full-featured set of tools for managing site settings that is friendly to non-developers, which means it’s more approachable for site contributors than modifying code. WP GraphQL makes these settings available to our Gatsby site, so we can take advantage of this workflow to enable non-developers to update settings for our Gatsby site as well. Let’s pull the site title from WordPress’s general settings to show how this can work. To do this, update src/components/layout.js with the following code: import React from "react" - import { Link } from "gatsby" + import { Link, useStaticQuery, graphql } from "gatsby" import "@wordpress/block-library/build-style/style.css" import "../styles/layout.css" const Layout = ({ children }) => { + const data = useStaticQuery(graphql` + query { + wpgraphql { + generalSettings { + title + } + } + } + `) + + const { title } = data.wpgraphql.generalSettings + return ( <> <header> <Link to="/" className="home"> - Migrate WordPress to the Jamstack + {title} </Link> </header> <main>{children}</main> </> ) } export default Layout Save and check out the site to see that the settings are being loaded. If you want to test this out, make a change in WordPress, then restart the Gatsby development server to see the changes. Create Gatsby navigation from WordPress menus WordPress menus allow content editors to control the navigation settings on the site. If we want to use those menus for our Gatsby site, we can! Get the menu ID To make sure we’re getting the right menu, we need to find the ID for the menu we want to use. In GraphiQL (), run the following query: { wpgraphql { menus { nodes { id name } } } } Look for the menu with the name value of “Main Menu”, then grab its ID for use in the next section. Load the menu items and make links relative Now that we have the menu ID, we can update src/components/layout.js to load the correct menu. One important thing to note is that we also need to load the site’s URL from the generalSettings query because WordPress makes links absolute by default. Using this value, we can loop through the menu links and remove the URL to make sure we have relative links. 
import React from "react" import { Link, useStaticQuery, graphql } from "gatsby" import "@wordpress/block-library/build-style/style.css" import "../styles/layout.css" const Layout = ({ children }) => { const menu = useStaticQuery(graphql` query { wpgraphql { generalSettings { title + url } + menu(id: "TWVudToy") { + menuItems { + nodes { + id + label + url + } + } + } } } `) - const { title } = menu.wpgraphql.generalSettings + const { title, url } = menu.wpgraphql.generalSettings + // loop through the menu items and make the links relative + const items = menu.wpgraphql.menu.menuItems.nodes.map(item => ({ + ...item, + url: item.url.replace(url, ""), + })) return ( <> <header> <Link to="/" className="home"> {title} </Link> + {items.map(item => ( + <Link key={item.url} to={item.url}> + {item.label} + </Link> + ))} </header> <main>{children}</main> </> ) } export default Layout Save these changes and look at — the WordPress navigation is now displayed in the header, and it works to navigate our Gatsby site! At this point, our WordPress site has been fully migrated to the Jamstack: we’re loading pages, posts, settings, and menus into a fully functional Gatsby site. All that’s left to do at this point is get this site deployed! Deploy a WordPress-powered Gatsby site to Netlify using the Netlify CLI To deploy the site, we need to have the code in a repository on GitHub, Bitbucket, or GitLab. Once we have a repo available, we can commit our changes and push them to our repository: # add all of the files in our site to git git add -A # commit the changes git commit -m 'migrate a WordPress site to the Jamstack' # push the changes to your repo git push origin master Next, we can use the Netlify CLI to deploy our site. To start, we need to install the CLI globally. Then we can run ntl init to connect our site’s repo to Netlify, which means any time we push code changes the site will redeploy. # install the Netlify CLI on your computer npm install -g netlify-cli # set up your site for automatic deployment for new code commits ntl init Follow the prompts to finish initializing the site. NOTE: if this is your first time using the Netlify CLI, you’ll be asked to log in. Follow the directions in the CLI to get logged in, then run ntl initagain. Once the site is set up, we can visit the Netlify dashboard and we’ll see our newly deployed site. Once the site finishes building, which should only take a minute or so, the site is fully live and on the internet! The site we built in this tutorial is live at. Automatically trigger Netlify deploys whenever changes are made in WordPress Netlify sites automatically rebuild whenever changes are pushed to our code, but we also want the site to rebuild when changes are made to our WordPress content. To do that, we need to install a plugin called JAMstack Deployments on our WordPress site. Head to the Plugins section of our WordPress admin, then click “Add New” and search for “jamstack”. JAMstack Deployments will be the first option. Once the plugin is installed, go to the Settings menu, then choose the new Deployments section. To fill this section out, we need to create a Build Hook in our Netlify settings. Head to the Netlify dashboard, then click Settings. In the side menu of the Settings page, click “Build & deploy”, then scroll down to the “Build hooks” section and click “Add build hook”. Once you’ve created the hook, copy the URL and paste it into the WordPress Deployment settings field called “Build Hook URL”. 
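(A build hook is just a URL that accepts an empty POST request, so before wiring it into WordPress you can confirm it works from a terminal. The trailing ID below is a placeholder for your own hook's ID.)

# trigger a Netlify deploy manually; replace the ID with your build hook's ID
curl -X POST -d '{}' https://api.netlify.com/build_hooks/your-hook-id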
Next, go to the Settings page of your Netlify dashboard and scroll down to the “Status badges” section. The badge has two URLs: the first is the actual image for the badge, and the second is a link to your site’s Deploys page. Copy and paste each URL into the respective Deployments settings fields. Finally, check boxes for the types of updates that should trigger a rebuild on Netlify. If you’re not sure which ones you need, start with posts, pages, and navigation menu items — you can always adjust these settings later on. Save these settings, then make an edit to a page in the WordPress admin section. If you check the Deploys page of your Netlify dashboard, we’ll see that the site is rebuilding! And that’s it! We now have a Jamstack frontend for our WordPress site that is fully powered by WordPress data and automatically rebuilds whenever the code or content changes. 🎉 What to do next At this point, we’ve covered all the steps required to migrate a WordPress site to the Jamstack. We can take things much further, but this is enough to get up and running. If you have specific questions about how to migrate your own WordPress sites to the Jamstack, I’d love to hear about them. Hit me up on Twitter or ask a question in the Netlify Community!
https://www.netlify.com/blog/2020/03/23/migrate-your-wordpress-site-to-the-jamstack/
CC-MAIN-2021-10
en
refinedweb
pthread_attr_setschedparam (3p) - Linux Man Pages

NAME

pthread_attr_getschedparam, pthread_attr_setschedparam - get and set the schedparam attribute

SYNOPSIS

#include <pthread.h>

int pthread_attr_getschedparam(const pthread_attr_t *restrict attr,
    struct sched_param *restrict param);
int pthread_attr_setschedparam(pthread_attr_t *restrict attr,
    const struct sched_param *restrict param);

DESCRIPTION

The pthread_attr_getschedparam() and pthread_attr_setschedparam() functions, respectively, get and set the scheduling parameter attributes in the attr argument. The contents of the param structure are defined in <sched.h>; for the SCHED_FIFO and SCHED_RR policies, the only required member of param is sched_priority.

ERRORS

These functions may fail if:

- EINVAL - The value of param is not valid.
- ENOTSUP - An attempt was made to set the attribute to an unsupported value.

These functions shall not return an error code of [EINTR].

The following sections are informative.

EXAMPLES

APPLICATION USAGE

After these attributes have been set, a thread can be created with the specified attributes using pthread_create(). Using these routines does not affect the current running thread.
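A small usage sketch (not part of the POSIX page itself): it sets an explicit SCHED_FIFO priority on an attributes object before creating a thread. The parameters only take effect if the inherit-sched attribute is PTHREAD_EXPLICIT_SCHED, and real-time policies usually require privileges.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    puts("running with the requested scheduling parameters");
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    struct sched_param param;
    pthread_t tid;
    int rc;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);

    param.sched_priority = sched_get_priority_min(SCHED_FIFO);
    rc = pthread_attr_setschedparam(&attr, &param);
    if (rc != 0)
        fprintf(stderr, "pthread_attr_setschedparam: error %d\n", rc);

    rc = pthread_create(&tid, &attr, worker, NULL);
    if (rc != 0)
        fprintf(stderr, "pthread_create: error %d (often EPERM without privileges)\n", rc);
    else
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}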
https://www.systutorials.com/docs/linux/man/3p-pthread_attr_setschedparam/
CC-MAIN-2021-10
en
refinedweb
SIGVEC(3B) SIGVEC(3B)

NAME

sigvec - 4.3BSD software signal facilities

SYNOPSIS

#include <signal.h>

struct sigvec {
    int (*sv_handler)(int, int);
    int sv_mask;
    int sv_flags;
};

int sigvec(int sig, struct sigvec *vec, struct sigvec *ovec);

DESCRIPTION

sigvec specifies and reports on the way individual signals are to be handled in the calling process. If vec is non-zero, it alters the way the signal will be treated - default behavior, ignored, or handled via a routine - and the signal mask to be used when delivering the signal if a handler is installed. If ovec is non-zero, the previous handling information for the signal is returned to the user. In this way (a NULL vec and a non-NULL ovec) the user can inquire as to the current handling of a signal without changing it. If both vec and ovec are NULL, sigvec will return -1 and set errno to EINVAL if sig is an invalid signal (else 0), allowing an application to dynamically determine the set of signals supported by the system.

The process signal mask changes when it is set explicitly (for example by a sigsetmask or sigblock call), or when a signal is delivered to the process: for the duration of the handler, the mask in effect is formed by adding the signal being delivered and OR'ing in the signal mask associated with the handler to be invoked.

Sigvec assigns a handler for a specific signal. If vec is non-zero, it specifies a handler routine and mask to be used when delivering the specified signal. Further, if the SV_ONSTACK bit is set in sv_flags, the system will deliver the signal to the process on a signal stack, specified with sigstack(2b). For a list of valid signal numbers and a general description of the signal mechanism, please see signal(5).

Once a signal handler is installed, it remains installed until another sigvec call is made, or an execve(2) is performed. The default action for a signal may be reinstated by setting sv_handler to SIG_DFL; for some signals the default is termination with a core image (see signal(5)). If sv_handler is SIG_IGN the signal is subsequently ignored, and pending instances of the signal are discarded.

SIGKILL will immediately terminate a process, regardless of its state. Processes which are stopped via job control (typically <ctrl>-Z) will not act upon any delivered signals other than SIGKILL until the job is restarted. Processes which are blocked via a blockproc(2) system call will unblock if they receive a signal which is fatal (i.e., a non-job-control signal).

After a fork(2) the child inherits all handlers, the signal stack and the signal masks, but not the set of pending signals. The exec(2) routines reset all caught signals to default action, clear all handler masks and reset all signals to be caught on the user stack. Ignored signals remain ignored; the blocked signal mask is unchanged and pending signals remain pending.

The mask specified in vec is not allowed to block SIGKILL, SIGSTOP, or SIGCONT. This is enforced silently by the system.

DIAGNOSTICS

A 0 value indicates that the call succeeded. A -1 return value indicates an error occurred and errno is set to indicate the reason.

sigvec is a library routine (executing in user space): if either vec or ovec points to memory that is not a valid part of the process address space, the process will receive a memory fault (SIGSEGV) signal and terminate (unless it has installed a handler for SIGSEGV). If the invalid pointer is the result of using a REFERENCE instead of a POINTER, the compiler will issue a warning.

sigvec will fail and no new signal handler will be installed if one of the following occurs:

EINVAL - sig is not a valid signal number, or an attempt was made to ignore or supply a handler for SIGKILL or SIGSTOP.

SEE ALSO

sigblock(3B), sigsetmask(3B), sigpause(3B), sigstack(2B), signal(5)

See signal(5) for a more detailed description of the signal mechanism's behavior.
WARNING (IRIX)

The 4.3BSD and System V signal facilities have different semantics. Using both facilities in the same program is strongly discouraged and will result in unpredictable behavior.
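A short usage sketch (not part of the manual page; it assumes a BSD-compatibility environment that still provides sigvec, the sigmask() macro and the SV_* flags):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* matches the int (*sv_handler)(int, int) prototype shown above */
static int on_interrupt(int sig, int code)
{
    (void)sig; (void)code;
    write(2, "caught SIGINT\n", 14);   /* write(2) is safe inside a handler */
    return 0;
}

int main(void)
{
    struct sigvec vec, ovec;

    vec.sv_handler = on_interrupt;
    vec.sv_mask = sigmask(SIGQUIT);    /* also block SIGQUIT while handling */
    vec.sv_flags = 0;                  /* or SV_ONSTACK to use the sigstack(2) stack */

    if (sigvec(SIGINT, &vec, &ovec) < 0) {
        perror("sigvec");
        return 1;
    }

    pause();                           /* wait for a signal to arrive */
    return 0;
}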
https://nixdoc.net/man-pages/IRIX/man3B/sigvec.3B.html
CC-MAIN-2021-10
en
refinedweb
Answered by: Show WPF window insite another WPF window Question I have this code and it works fine. public class WindowHelper { private const int GWL_STYLE = (-16); private const UInt32 WS_POPUP = 0x80000000; private const UInt32 WS_CHILD = 0x40000000; [DllImport("user32.dll", SetLastError = true)] private static extern UInt32 GetWindowLongApi(IntPtr hWnd, int nIndex); [DllImport("user32.dll")] private static extern int SetWindowLongApi(IntPtr hWnd, int nIndex, UInt32 dwNewLong); [DllImport("user32.dll", SetLastError = true)] private static extern IntPtr SetParentApi(IntPtr hWndChild, IntPtr hWndNewParent); public void SetParent(Window childWindow, Window parentWindow) { childWindow.Owner = parentWindow; childWindow.WindowStartupLocation = System.Windows.WindowStartupLocation.CenterOwner; childWindow.Show(); IntPtr childWindowHandle = new WindowInteropHelper(childWindow).Handle; IntPtr parentWindowHandle = new WindowInteropHelper(parentWindow).Handle; uint windowStyle = GetWindowLongApi(childWindowHandle, GWL_STYLE); windowStyle = (windowStyle & ~(WS_POPUP)) | WS_CHILD; SetWindowLongApi(childWindowHandle, GWL_STYLE, windowStyle); SetParentApi(childWindowHandle, parentWindowHandle); } } But if I would like to have a modal ChildWindow this code doesn´t work. To pop up a modal window I use ".ShowDialog();". To use ".ShowDialog();" in my code I have to show up my ChildWindow first with ".Show();" than I take the current style from that and than I ".Hide();" the ChildWindow again. Now I can use ".ShowDialog();" to get a modal window. But this ChildWindow shows no reaction of any mouse or keyboard action from me. Thank you, Eric Weber Wednesday, December 5, 2012 4:10 PM Answers All replies Hi Eric, I am marking your issue as "Answered", if you have new findings about your issue, please let me know. Best regards, Sheldon _Xiao MSDN Community Support | Feedback to us Develop and promote your apps in Windows Store Please remember to mark the replies as answers if they help and unmark them if they provide no help.Thursday, December 20, 2012 6:43 AM
https://social.msdn.microsoft.com/Forums/en-US/d79a1687-91c2-464f-bd10-463fbc0a9013/show-wpf-window-insite-another-wpf-window?forum=wpf
CC-MAIN-2021-10
en
refinedweb
Blenderbot-3B

Model description

The abbreviation FSMT stands for FairSeqMachineTranslation

All four models are available:

Intended uses & limitations

How to use

from transformers.tokenization_fsmt import FSMTTokenizer
from transformers.modeling_fsmt import FSMTForConditionalGeneration

mname = "facebook/wmt19-en-ru"

# Машинное обучение - это здорово, не так ли?

Limitations and bias

- The original (and this ported model) doesn't seem to handle well inputs with repeated sub-phrases, content gets truncated

Training data

Pretrained weights were left identical to the original model released by fairseq. For more details, please, see the paper.

Eval results

The score is slightly below the score reported by fairseq, since `transformers` currently doesn't support:

- model ensemble, therefore the best performing checkpoint was ported (model4.pt).
- re-ranking

The score was calculated using this code:

git clone
cd transformers
export PAIR=en-ru
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=15
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py facebook/wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS

note: fairseq reports using a beam of 50, so you should get a slightly higher score if re-run with --num_beams 50.

Data Sources

BibTeX entry and citation info

@inproceedings{...,
  year={2020},
  title={Facebook FAIR's WMT19 News Translation Task Submission},
  author={Ng, Nathan and Yee, Kyra and Baevski, Alexei and Ott, Myle and Auli, Michael and Edunov, Sergey},
  booktitle={Proc. of WMT},
}

TODO

- port model ensemble (fairseq uses 4 model checkpoints)
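(The usage snippet above is cut off in this copy of the card; assuming the FSMT classes imported there and the facebook/wmt19-en-ru checkpoint, the full round trip looks roughly like this:)

from transformers.tokenization_fsmt import FSMTTokenizer
from transformers.modeling_fsmt import FSMTForConditionalGeneration

mname = "facebook/wmt19-en-ru"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

input_text = "Machine learning is great, isn't it?"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded)  # Машинное обучение - это здорово, не так ли?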
https://huggingface.co/sshleifer/bb3b-tok
CC-MAIN-2021-10
en
refinedweb
Implementing a Build System¶ Builder has support for many build systems such as autotools, meson, cmake, etc. The build system knows how to find build targets (binaries or scripts that are installed) for the runner, knows how to find build flags used by the clang service, and it can define where the build directory is. It also has an associated Ide.BuildPipelineAddin (see the next section) that specifies how to do operations like build, rebuild, clean, etc. import gi from gi.repository import Gio, Ide class BasicBuildSystem(Ide.Object, Ide.BuildSystem, Gio.AsyncInitable): def do_init_async(self, priority, cancel, callback, data=None): task = Gio.Task.new(self, cancel, callback) task.set_priority(priority) # do something, like check if a build file exists task.return_boolean(True) def do_init_finish(self, result): return result.propagate_boolean() def do_get_priority(self): return 0 # Choose a priority based on other build systems' priority def do_get_build_flags_async(self, ifile, cancellable, callback, data=None): task = Gio.Task.new(self, cancellable, callback) task.ifile = ifile task.build_flags = [] # get the build flags task.return_boolean(True) def do_get_build_flags_finish(self, result): if result.propagate_boolean(): return result.build_flags How does Builder know which build system to use for a project? Each has an associated “project file” (configure.ac for autotools) that has to exist in the source directory for the build system to be used. If a project has multiple project files, the priorities of each are used to decide which to use. You can see where the priority is defined in the code above. The project file is defined in the .plugin file with these lines (in the case of the make plugin): X-Project-File-Filter-Pattern=Makefile X-Project-File-Filter-Name=Makefile Project When a project has the right file, the build system will be initialized by IdeContext during its own initialization process.
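For orientation (this fragment is not from the Builder docs themselves): a .plugin file is a libpeas-style keyfile, so the two filter keys above normally sit alongside the plugin's other metadata, roughly like this for a hypothetical make plugin:

[Plugin]
Module=make_plugin
Loader=python3
Name=Make
Description=Provides integration with plain Makefile projects
X-Project-File-Filter-Pattern=Makefile
X-Project-File-Filter-Name=Makefile Project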
http://builder.readthedocs.io/en/latest/plugins/building/buildsystem.html
CC-MAIN-2018-30
en
refinedweb
Short Description HTCondor probe relying on fifemon/probes. Full Description HTCondor Docker containers HTCondor probe relying on fifemon/probes. Ubuntu Trusty LTS is the base image used and condor version refer to the last stable version. Supervisord is used in order to control different processes spawn. Feature Node run [root@nessun-ricordo-1 HTCondor-probe]# docker run dscnaf/htcondor-probe -c 192.168.0.129 -g 131.154.96.190 -n nessun-ricordo.htcondor -m clusterone 2016-10-21 10:03:31,089 CRIT Supervisor running as root (no user in config file) 2016-10-21 10:03:31,111 INFO supervisord started with pid 5 2016-10-21 10:03:32,114 INFO spawned: 'stdout' with pid 11 2016-10-21 10:03:32,116 INFO spawned: 'probes' with pid 12 2016-10-21 10:03:33,297 INFO success: stdout entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) 2016-10-21 10:03:33,297 INFO success: probes entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) Usage usage: $0 -ci collector-address [-c url-to-config] ... Configure HTCondor probe and start supervisord for this container. OPTIONS: -c collector-address HTCondor collector address. -r inteRval Probe interval in seconds. 15 as default. -u url-to-config config file reference from http url. That's disable manual changes to configs. -g graphite-ip Enable graphite option. Require its endpoint. -n graphite-namespace Graphite namespace -m meta-namespace Graphite meta-namespace -i influxdb-ip Enable influx option. Require its endpoint. -j influx-user Influx user credential. -l influx-password Influx password credential. -d influx-db Influx database. -t influx-db-tag extra tags to include with all metrics (comma-separated key:value) Docker Pull Command Owner dscnaf Source Repository
https://hub.docker.com/r/dscnaf/htcondor-probe/
CC-MAIN-2018-30
en
refinedweb
You are correct that this is a misrepresentation of my position. My position is actually: "When tools processing formal languages wish to describe and consume source using language extensions, a standard method should be used." The OpenGL ES WG clearly agrees with this statement by inclusion of the #extension preprocessor directive. I am simply advocating the expansion of #extension's namespace to the namespace of the Web, URI.
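(Purely as an illustration of the idea, not the poster's own example: today the directive takes a registry-defined name, and the proposal would allow a URI in the same position. The URI below is hypothetical.)

// current form: extension name from the Khronos registry
#extension GL_OES_standard_derivatives : enable

// proposed form: a Web-namespaced (URI) extension identifier
#extension https://example.org/my-shader-extension : enable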
https://www.khronos.org/webgl/public-mailing-list/public_webgl/1205/msg00150.php
CC-MAIN-2018-30
en
refinedweb
Get the highlights in your inbox every week. Get started using treq to make async calls in Python | Opensource.com A beginner's guide to asynchronous API calls with Python's Twisted package. Subscribe now The Twisted Requests (treq) package is an HTTP client built on the popular Twisted library that is used for asynchronous requests. Async libraries offer the ability to do large amounts of network requests in parallel with relatively little CPU impact. This can be useful in HTTP clients that need to make several requests before they have all the information they need. In this article, we'll work through an example of making async calls to explore using treq. Defining a problem to solve I enjoy playing the real-time strategy game Clash Royale. While it is not open source, it does have a public API that we can use to show how async requests can come in handy. Clash Royale is a mobile strategy player-vs-player game where players play cards in an arena to win. Each card has different strengths and weaknesses, and different players prefer different cards. Clash Royale remembers which card a player plays the most; this is their "favorite" card. Players come together in clans where they can help each other. Supercell, Clash Royale's developer, released an HTTP-based API where different statistics can be queried.Here's a question best-answered asynchronously: How can we write a program that will output the most popular favorite cards in a clan so that we can start to understand our opponents (and see which cards are popular with our clan members)? You can register an account to follow along with the tutorial, but you'll still be able to understand what we're building if you don't. If you do want to register an account, create an API token via the Clash Royale developer portal. Then choose "Create New Key" under your profile, and enter a name, description, and a valid IP address. (An exact address is required, so I used this site to find mine.) Since you should never save an API key in your code, keep it as a separate file in ~/.crtoken: $ ls ~/.crtoken /home/moshez/.crtoken Twisted programs Running a program based on Twisted requires a number of additional packages to make the experience as smooth as possible. I will not cover all of them in this tutorial, but each one is worth exploring to learn more. To make it easier to see what is going on, let's start with this introductory program that prints Hello world, and then we'll talk through what it does: import collections, json, os, sys, urllib.parse from twisted.internet import task, defer import treq with open(os.path.expanduser("~/.crtoken")) as fpin: token = fpin.read().strip() def main(reactor): print("Hello world") return defer.succeed(None) task.react(main, sys.argv[1:]) This imports many more modules than we need for the "Hello world" example. We will need these modules for the final version of the program, which will accomplish the more complex task of asynchronously querying an API. After the import, the program reads the token from the file and stores it in the variable token. (We are not going to do anything with the token right now, but it's good to see that syntax.) Next there is a main function that accepts a Twisted reactor. A reactor is sort of like an interface to the complex machinery of the Twisted package. In this case, the function main is sent as a parameter, and it's fed an additional argument. The main returns a defer.succeed(None). 
This is how it returns a value of the right type: a deferred value, but one that already has been "fired" or "called." Because of that, the program will exit immediately after printing Hello world, as we need. Next, we will look at the concepts of async functions and ensureDeferred: async def get_clan_details(clan): print("Hello world", clan) def main(reactor, clan): return defer.ensureDeferred(get_clan_details(clan)) task.react(main, sys.argv[1:]) In this program, which should start with the same imports, we moved all the logic to the async function get_clan_details. Just like a regular function, an async function has an implicit return None at the end. However, async functions, sometimes called co-routines, are a different type than Deferred. In order to let Twisted, which has existed since Python 1.5.2, use this modern feature, we must adapt the co-routine using ensureDeferred. While we could write all the logic without using co-routines, using the async syntax will allow us to write code that is easier to understand, and we will need to move a lot less of the code into embedded callbacks. The next concept to introduce is that of await. Later, we will await a network call, but for simplicity, right now, we will await on a timer. Twisted has a special function, task.deferLater, which will call a function with given parameters after some time has passed. The following program will take five seconds to complete: async def get_clan_details(clan, reactor): out = await task.deferLater( reactor, 5, lambda clan: f"Hello world {clan}", clan ) print(out) def main(reactor, clan): return defer.ensureDeferred(get_clan_details(clan, reactor)) task.react(main, sys.argv[1:]) A note about types: task.deferLater returns a Deferred, as do most Twisted functions that do not have the value already available. When running the Twisted event loop, we can await on both Deferred values as well as co-routines. The function task.deferLater will wait five seconds and then call our lambda, calculating the string to print out. Now we have all the Twisted building blocks needed to write an efficient clan-analysis program! Async calls with treq Since we will be using the global reactor, we no longer need to accept the reactor as a parameter in the function that calculates these statistics: async def get_clan_details(clan): The way to use the token is as a "bearer" token in the headers: headers={b'Authorization': b'Bearer '+token.encode('ascii')} We want clan tags to be sent, which will be strings. Clan tags begin with #, so they must be quoted before they're put in URLs. This is because # has the special meaning "URL fragment": clan = urllib.parse.quote(clan) The first step is to get the details of the clan, including the clan members: res = await treq.get("" + clan, headers=headers) Notice that we have to await the treq.get calls. We have to be explicit about when to wait and get information since it is an asynchronous network call. Just using the await syntax to call a Deferred function does not let us take full power of asynchronicity (we will see how to do it later). Next, after getting the headers, we need to get the content. The treq library gives us a helper method that parses the JSON directly: content = await res.json() The content includes some metadata about the clan, which is not interesting for our current purposes, and a memberList field that contains the clan members. Note that while it has some data about the players, the current favorite card is not part of it. 
It does include the unique "player tag" that we can use to retrieve further data. We collect all player tags, and, since they also begin with #, we URL-quote them: player_tags = [urllib.parse.quote(player['tag']) for player in content['memberList']] Finally, we come to the real power of treq and Twisted: generating all requests for player data at once! That can really speed up tasks like this one, which queries an API over and over again. In cases of APIs with rate-limiting, this can be problematic. There are times when we need to be considerate to our API owners and not run up against any rate limits. There are techniques to support rate-limiting explicitly in Twisted, but they are beyond the scope of this tutorial. (One important tool is defer.DeferredSemaphore.) requests = [treq.get("" + tag, headers=headers) for tag in player_tags] An aside: await, Deferred, and callbacks For those curious about the specifics of the returned object, here's a closer look at what's happening. Remember that requests do not return the JSON body directly. Earlier, we used await so that we did not have to worry about exactly what the requests return. They actually return a Deferred. A Deferred can have an attached callback that will modify the Deferred. If the callback returns a Deferred, the final value of the Deferred will be the value of the returned Deferred. So, to each deferred, we attach a callback that will retrieve the JSON of the body: for request in requests: request.addCallback(lambda result: result.json()) Attaching callbacks to Deferreds is a more manual technique, which makes code that is harder to follow but uses the async features more efficiently. Specifically, because we are attaching all the callbacks at the same time, we do not need to wait for the network calls, which potentially can take a long time, to indicate how to post-process the result. From Deferreds to values We cannot calculate the most popular favorite cards until all results have been gathered. We have a list of Deferreds, but what we want is a Deferred that gets a list value. This inversion is exactly what the Twisted function defer.gatherResults does: all_players = await defer.gatherResults(requests) This seemingly innocent call is where we use the full power of Twisted. The defer.gatherResults function immediately returns a deferred that will fire only when all the constituent Deferreds have fired and will fire with the result. It even gives us free error-handling: if any of the Deferreds error out, it will immediately return a failed deferred, which will cause the await to raise an exception. Now that we have all the players' details, we need to munch some data. We get to use one of Python's coolest built-ins, collections.Counter. 
This class takes a list of things and counts how many times it has seen each thing, which is exactly what we need for vote counting or popularity contests: favorite_card = collections.Counter([player["currentFavouriteCard"]["name"] for player in all_players]) Finally, we print it: print(json.dumps(favorite_card.most_common(), indent=4)) Putting it all together So, putting it all together, we have: import collections, json, os, sys, urllib.parse from twisted.internet import task, defer import treq with open(os.path.expanduser("~/.crtoken")) as fpin: token = fpin.read().strip() async def get_clan_details(clan): headers = headers={b'Authorization': b'Bearer '+token.encode('ascii')} clan = urllib.parse.quote(clan) res = await treq.get("" + clan, headers=headers) content = await res.json() player_tags = [urllib.parse.quote(player['tag']) for player in content['memberList']] requests = [treq.get("" + tag, headers=headers) for tag in player_tags] for request in requests: request.addCallback(lambda result: result.json()) all_players = await defer.gatherResults(requests) favorite_card = collections.Counter([player["currentFavouriteCard"]["name"] for player in all_players]) print(json.dumps(favorite_card.most_common(), indent=4)) def main(reactor, clan): return defer.ensureDeferred(get_clan_details(clan)) task.react(main, sys.argv[1:]) Thanks to the efficiency and expressive syntax of Twisted and treq, this is all the code we need to make asynchronous calls to an API. And if you were wondering about the outcome, my clan's list of favorite cards is Wizard, Mega Knight, Valkyrie, and Royal Giant, in descending order. I hope you enjoy using Twisted to write faster API calls! 1 Comments Async programming made easy with Python. Great!
https://opensource.com/article/20/3/treq-python
CC-MAIN-2020-24
en
refinedweb
Hello all I am very very new to programming & Python. My problem is I cannot figure out how to create a line consisting of random colored pixels. I know how to create a white or red line but how to make the colors random is beyond me. This is my white line... I'm pretty sure I have to import random somewhere but not too sure??

from cImage import *

myImWin = ImageWin("Line Image", 300, 300)
lineImage = EmptyImage(300, 300)
whitePixel = Pixel(255, 255, 255)

for i in range(300):
    lineImage.setPixel(i, i, whitePixel)

lineImage.draw(myImWin)

Any assistance would be greatly appreciated.
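One way to get random colours (a sketch reusing the same cImage calls as above): import random and build a new Pixel with a random 0-255 value for each of the red, green and blue channels on every pass through the loop.

from cImage import *
import random

myImWin = ImageWin("Line Image", 300, 300)
lineImage = EmptyImage(300, 300)

for i in range(300):
    # a fresh random colour for every pixel on the diagonal
    randomPixel = Pixel(random.randint(0, 255),
                        random.randint(0, 255),
                        random.randint(0, 255))
    lineImage.setPixel(i, i, randomPixel)

lineImage.draw(myImWin)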
https://www.daniweb.com/programming/software-development/threads/298501/random-colored-line
CC-MAIN-2020-24
en
refinedweb
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video. I've added a plain Ruby method called "page_content" to the app. It loads the contents of a text file and returns them as a string. You pass the title of a page to it as an argument, and it uses the "read" method on the core Ruby "File" class to open a ".txt" file from a "pages/" subdirectory within your app's main directory. We've added a page_content method to the app to read contents of a wiki page from a file: def page_content(title) File.read("pages/#{title}.txt") rescue Errno::ENOENT return nil end Here's a quick explanation of page_content's code... Ruby has a core class named File that's used for working with files. File is a subclass of the IO class, so File inherits a class method named read. File.read lets you read the entire contents of a file into a string, using only the file name. You can read more about it here in the IO.read documentation. If the requested file isn't found, the call to File.read may raise an Errno::ENOENT exception. Normally, this would just cause processing of the HTTP request to stop in its tracks. But if we add begin and end keywords before the call that might raise an exception (or, in this case, we can skip begin and end because the code is inside a method), and add a rescue keyword, we can "rescue" our program from being halted by the exception. You can learn more about rescuing exceptions here. - 0:00 I've added the plain Ruby method called page content to the app. - 0:04 You can type in the code you see here if you want. - 0:06 But if you launch a workspace from this video's page, - 0:09 you'll get a workspace with this method already set up for you. - 0:12 The page content method loads the contents of a text file and - 0:16 returns them as a string. - 0:18 You pass the title of a page to it as an argument and - 0:21 it uses the read method on the core Ruby file class to open - 0:24 a .txt file from a pages sub directory within your apps main directory. - 0:29 Check the teacher's notes if you'd like to know more about the file.read method. - 0:33 If the file isn't found, file.read will raise this Errno ENOENT exception. - 0:39 So we put this rescue clause here that will intercept that error if it happens - 0:43 and just return nil. - 0:45 So if the file isn't there, we just won't get anything back. - 0:48 Now we need the page's sub directory with text file for it to load from. - 0:52 So I'll create it in folder. - 0:57 Name it pages. - 1:00 And then create a new file within it. - 1:04 This file will hold a bio for one of our Treehouse teachers, Nick Pettit. - 1:08 So I'll name it Nick Pettit.txt. - 1:14 The file name has to end with a .txt extension because that's what the page - 1:17 content method will be looking for. - 1:20 For the file contents, I'll just put Treehouse teacher and game developer. - 1:30 Now let me copy this page content method to a new file all by itself, so - 1:33 I can show you how it works. - 1:35 We'll create a new file, name it test.rb. - 1:41 There's no need to require the Sinatra library since it's not used by any - 1:44 of this code. - 1:46 Now I'll add a call to the method and print the return value. - 1:49 So put this page content and we'll get an argument of - 1:56 Nick Pettit, that's the name of the text file without the txt extension. - 2:03 I didn't include the .txt on the end because page content adds that itself. 
- 2:08 If you launch the workspace that comes with this video, - 2:10 we'll ensure all that's set up for you. - 2:13 Now let's go to our console and try running this. - 2:15 Ruby space test.rb and they'll load the contents of the text file and print them. - 2:22 Don't worry if you don't understand every detail of how the page - 2:25 content method works. - 2:26 That's just what we're using for this particular app and - 2:29 it's not essential to understanding Sinatra. - 2:31 There's more info in the teacher's notes if you want it. - 2:35 We don't have any further need for the test our ,rb file right now. - 2:39 So I'm going to delete it from my workspace.
https://teamtreehouse.com/library/loading-text-files
CC-MAIN-2020-24
en
refinedweb
On Tuesday 13 December 2005 11:26 am, Prarit Bhargava wrote:
> OTOH the moment they change the initcall sequence we would
> have to change our machine vector interfaces. And AFAICT no
> one is happy with the 7 levels of init (everything from too granular to not granular enough).

Whoa, hold on a minute. Let's back up.

Most of the uses of ia64_platform_is() are really just hacks to bind drivers to devices that only exist on SN2:

arch/ia64/sn/kernel/tiocx.c tiocx_init()
arch/ia64/sn/kernel/xp_main.c xp_init()
arch/ia64/sn/kernel/xpc_main.c xpc_init()
arch/ia64/sn/kernel/xpnet.c xpnet_init()
arch/ia64/sn/kernel/sn2/sn_hwperf.c sn_hwperf_misc_register_init()
drivers/char/mbcs.c mbcs_init()
drivers/char/mmtimer.c mmtimer_init()
drivers/char/snsc.c scdrv_init()
drivers/pci/hotplug/sgi_hotplug.c sn_pci_hotplug_init()
drivers/serial/sn_console.c sn_sal_module_init()
drivers/sn/ioc4.c ioc4_init()

It's totally backwards to limit driver binding by using ia64_platform_is(). You ought to just describe this hardware in the ACPI namespace and use acpi_bus_register_driver() to bind the drivers. Then you can register the drivers on all platforms, but the .add() function (and hence, the rest of the driver) will only be called when the hardware is actually present. So you don't need any platform-qualified initcalls.
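To make the suggestion concrete, here is a rough sketch (mine, not from the original mail) of what such a binding looks like; the "SGI0001" HID is made up, and the exact struct layout varies between kernel versions:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/acpi.h>

/* Illustrative only: bind to an ACPI-described device instead of
 * checking ia64_platform_is() in an initcall. */

static int my_sn2_add(struct acpi_device *device)
{
	/* Called only when firmware actually describes a matching device. */
	printk(KERN_INFO "my_sn2: device present, initializing\n");
	return 0;
}

static int my_sn2_remove(struct acpi_device *device, int type)
{
	return 0;
}

static struct acpi_driver my_sn2_driver = {
	.name  = "my_sn2",
	.class = "sn2",
	.ids   = "SGI0001",            /* hypothetical HID from the SN2 firmware */
	.ops   = {
		.add    = my_sn2_add,
		.remove = my_sn2_remove,
	},
};

static int __init my_sn2_init(void)
{
	/* Safe to register on every platform; .add() never fires elsewhere. */
	return acpi_bus_register_driver(&my_sn2_driver);
}

static void __exit my_sn2_exit(void)
{
	acpi_bus_unregister_driver(&my_sn2_driver);
}

module_init(my_sn2_init);
module_exit(my_sn2_exit);
MODULE_LICENSE("GPL");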
http://www.gelato.unsw.edu.au/archives/linux-ia64/0512/16131.html
CC-MAIN-2020-24
en
refinedweb
$ cnpm install @donmhico/wds-create-block Create Block is an officially supported way to create blocks for registering a block for a WordPress plugin. It offers a modern build setup with no configuration. It generates PHP, JS, CSS code, and everything else you need to start the project. It is largely inspired by create-react-app. Major kudos to @gaearon, the whole Facebook team, and the React community. Blocks are the fundamental element of the WordPress block editor. They are the primary way in which plugins and themes can register their own functionality and extend the capabilities of the editor. Visit the Gutenberg handbook to learn more about Block API. You just need to provide the slug which is the target location for scaffolded files and the internal block name. $ npm init @wordpress/block todo-list $ cd todo-list $ npm start (requires node version 10.0.0 or above, and npm version 6.9.0 or above) You don’t need to install or configure tools like webpack, Babel or ESLint yourself. They are preconfigured and hidden so that you can focus on the code. The following command generates PHP, JS and CSS code for registering a block. $ npm init @wordpress. Options: -t, --template <name> template type name, allowed values: "es5", "esnext" (default: "esnext") -V, --version output the version number -h, --help output usage information Please note that --version and --help options don't work with npm init. You have to use npx instead, as presented in the examples. More examples: $ npm init @wordpress/block npm start) which enables ESNext and JSX support. $ npm init @wordpress/block --template es5 npxto output usage information. $ npx @wordpress/create-block --help When you scaffold a block, you must provide at least a slug name, the namespace which usually corresponds to either the theme or plugin name, and the category. In most cases, we recommended pairing blocks with plugins rather than themes, because only using plugin ensures that all blocks still work when your theme changes. Inside that bootstrapped directory (it doesn't apply to es5 template), you can run several commands: $ npm start Starts the build for development. Learn more. $ npm run build Builds the code for production. Learn more. $ npm run format:js Formats JavaScript files. Learn more. $ npm run lint:css Lints CSS files. Learn more. $ npm run lint:js Lints JavaScript files. Learn more. $ npm run packages-update Updates WordPress packages to the latest version. Learn more. Another way of making a developer’s life easier is to use WP-CLI, which provides a command-line interface for many actions you might perform on the WordPress instance. One of the commands wp scaffold block was used as the baseline for this tool and ES5 template in particular.
https://developer.aliyun.com/mirror/npm/package/@donmhico/wds-create-block
CC-MAIN-2020-24
en
refinedweb
$ pip install virtualenv Installing PySpark with Jupyter notebook on Ubuntu 18.04 LTS Carvia Tech | December 07, 2019 | 4 min read | 684 views In this tutorial we will learn how to install and work with PySpark on Jupyter notebook on Ubuntu Machine and build a jupyter server by exposing it using nginx reverse proxy over SSL. This way, jupyter server will be remotely accessible. Setup Virtual Environment Setup Jupyter notebook Jupyter Server Setup PySpark setup Configure bash profile Setup Jupyter notebook as a service on Ubuntu 18.0 LTS Nginx Setup SSL setup using LetsEncrypt Virtual Environment Setup Run the below command on the terminal to install virtual environment on your machine, if it is not there already. We will be using virtualenv to setup virtual environment. $ virtualenv -p python3.6 venv where venv is the name of the virtual environment. Above command will create a virtual environment in the current directory with name venv To activate this newly create virtual environment, you need to run the below command $ source venv/bin/activate Install jupyter notebook To install jupyter notebook, run the below command. Make sure that virtual environment is activated when you run the below command. $ pip install jupyter notebook Jupyter Server Setup Now, we will be setting up the password for jupyter notebook. Generate config for jupyter notebook using following command: $ jupyter notebook --generate-config Update the config: $ vi /home/<username>/.jupyter/jupyter_notebook_config.py ## Hashed password to use for web authentication. # # To generate, type in a python/IPython shell: # # from notebook.auth import passwd; passwd() # # The string should be of the form type:salt:hashed-password. c.NotebookApp.password = u'sha1:020f1412ae63:227357c88b3996e75dcf85ea96c2d581db74ec1e' ## Allow requests where the Host header doesn't point to a local server # # By default, requests get a 403 forbidden response if the 'Host' header shows # that the browser thinks it's on a non-local domain. Setting this option to # True disables this check. # # This protects against 'DNS rebinding' attacks, where a remote web server # serves you a page and then changes its DNS to send later requests to a local # IP, bypassing same-origin checks. # # Local IP addresses (such as 127.0.0.1 and ::1) are allowed as local, along # with hostnames configured in local_hostnames. c.NotebookApp.allow_remote_access = True PySpark Setup We will install PySpark using PyPi. To install just run the following command from inside the virtual environment: $ pip install pyspark For more information, see this web page: As of writing this article, v2.4.4 is the latest version of Apache Spark available with scala version 2.11.12 Check the installation using following command $ spark-shell --version Configure environment using Bash profile You need to set following enviornment variables in bashrc located under your home directory. export SPARK_HOME=/home/<username>/build/jupyter/venv/lib/python3.6/site-packages/pyspark/ export PYSPARK_DRIVER_PYTHON=jupyter export PYSPARK_DRIVER_PYTHON_OPTS='notebook' $ source ~/.bashrc Now we can start the Jupyter notebook from command line: $ pyspark or using this command: $ jupyter notebook Run Pyspark on jupyter notebook Open a general python3 notebook on the jupyter server. We don’t need pyspark kernel as we will be using findspark to find spark home. 
import findspark
findspark.find()
findspark.init()

import pyspark
import random

sc = pyspark.SparkContext(appName="Pi")
num_samples = 100000000

def inside(p):
    x, y = random.random(), random.random()
    return x*x + y*y < 1

count = sc.parallelize(range(0, num_samples)).filter(inside).count()
pi = 4 * count / num_samples
print(pi)
sc.stop()

Setup Jupyter notebook as a service in Ubuntu 18.04 LTS

We need a Systemd Service in order to allow jupyter notebook to be run as a background service.

[Unit]
Description=Jupyter Notebook

[Service]
Type=simple
PIDFile=/run/jupyter.pid
ExecStart=/bin/bash -c ". /home/<username>/build/jupyter/venv/bin/activate;jupyter-notebook --notebook-dir=/home/<username>/my-notebooks"
User=<username>
Group=<username>
WorkingDirectory=/home/<username>/my-notebooks
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

$ sudo systemctl enable jupyter.service
$ sudo systemctl daemon-reload
$ sudo systemctl start jupyter.service
$ sudo systemctl stop jupyter.service

Nginx setup as a reverse proxy

We need to configure HTTP/1.1 and websocket support in order to expose jupyter notebook through nginx proxy server. The following nginx configuration is required to run jupyter through nginx proxy.

server {
    server_name <dns-name>;

    location / {
        proxy_pass;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        client_max_body_size 10M;
        proxy_http_version 1.1;
        proxy_set_header Upgrade "websocket";
        proxy_set_header Connection "Upgrade";
        proxy_read_timeout 86400;
    }
}

SSL setup using Free SSL

LetsEncrypt provides a free SSL certificate that can be used for securing our site with HTTPS.
https://www.javacodemonk.com/installing-pyspark-with-jupyter-notebook-on-ubuntu-18-04-lts-31cd3781
CC-MAIN-2020-24
en
refinedweb
Integrating with React Native You can use Apollo with React Native exactly as you would with React Web. To introduce Apollo to your app, install React Apollo from npm and use them in your app as outlined in the setup article: npm install @apollo/react-hooks apollo-client graphql --save import React from 'react'; import { AppRegistry } from 'react-native'; import { ApolloClient } from 'apollo-client'; import { ApolloProvider } from '@apollo/react-hooks'; //. Apollo Dev Tools React Native Debugger supports the Apollo Client Devtools: - Install React Native Debugger and open it. - Enable "Debug JS Remotely" in your app. - (Optional) If you do not see the Developer Tools panel or the Apollo tab is missing in them, toggle the Developer Tools by right clicking anywhere and selecting "Toggle Developer Tools".
https://www.apollographql.com/docs/react/integrations/react-native/
CC-MAIN-2020-24
en
refinedweb
just downloading apps and plugging them in. That's one of the many reasons we use Django here at Caktus: we can build useful web sites for our clients more quickly by not having to re-invent the same building blocks all the time, and focusing on the unique value-add for each client. This is also one of the attractions of building and releasing open-source Django apps: they're easy for other people to use, and having other people use your code is very satisfying. But Django apps don't become easily pluggable automatically, and it's easy to do things in a way that make it much harder for other sites to use an app. A book could be written about this topic. This post just gives examples of some areas that can cause problems, and might stimulate some thought when designing your app. Not everything needs to be an app Does your package have to be an app at all? If a package can provide useful features without being added to INSTALLED_APPS, that's the way to go. There are some things that Django only looks for in apps, like models and custom template tags. If those are needed, you'll have to make it an app. Features of highly pluggable apps - Can be installed anywhere on the Python path, ideally just using "pip install package-name". - Does not require the site to have any explicit knowledge of what directory the app ended up in. (It's not necessary to add the app's install directory to a setting or anything like that). - Installing and configuring the app in a site does not break any existing function of the site. - The app can be upgraded by just installing the newer version, and possibly running migrations. (The site might have to make changes to take advantage of new features, of course, but an ideal app adds features without changing the behavior of existing features, apart from bug fixes.) - As a corollary to some of the previous features, installing or upgrading the app doesn't require copying files or sections of code from the app or its documentation into the site. - Can be customized without forking the code. Assumptions that can be made about the site It's best if the app can avoid assumptions about how the site does things. If it does matter, assume the site does things in the most straightforward, default way. Those who have customized their site behavior away from how Django does things out of the box presumably have the know-how to cope, or have accepted that they won't be able to use some pluggable apps without pain. For example, if the app provides static files, assume the site uses the Django staticfiles app, and provide the static files in a way that simply installing the app will make the static files available through that app. Templates Template tags are a good way to provide enhanced features that a site can use in its templates. If a template tag just provides some data, and the site can embed it, lay it out, and style it however it wants, that makes it very easy to use. If a template tag really needs to return its data with markup to be useful, then consider using a small template to format the data, and documenting it. The site can provide an overriding template to change the formatting without having to change your app. If your app serves whole pages that need templates, then provide basic templates that inherit from 'base.html' and put the content in a 'content' block. Most sites have a base.html template that provides the basic page framework and styling around a content block anyway, or can easily add one.
And a site can always copy and override the app’s example templates a lot more easily than writing templates from scratch. Models When designing the models for your app, keep in mind that you can use Generic Foreign Keys to link to any model. The comments app that used to come with Django provides a good example. Provide migrations for your models, and make sure as your app evolves, the migrations still work to get from your initial state to your latest release. That helps to make upgrading your app as painless as possible. Settings If possible, any settings you define for your app should not be required, and should have reasonable defaults. If a setting is required and cannot have any reasonable default, then check for it and give a useful error message if the user has forgotten to define it. Since Django’s settings occupy a single namespace, it’s a good idea for any new settings required by your app to start with the app name. APPNAME_TIMEOUT is much less likely to conflict with some other app’s setting than TIMEOUT. Some apps reduce the number of names they add to the Django settings by just using a single setting whose value is a dictionary with all the more specific settings. E.g. APPNAME_SETTINGS = { ‘TIMEOUT’: 29, ‘PATH’: ‘/foo/bar’, } I’m not sure if that’s a good idea or not, but it’s something to consider. Dependencies If your app has dependencies, they should be configured in setup.py so that when a site “pip installs” your app, its dependencies will be installed automatically. Configure minimum versions of the packages your app depends on, but try not to pin them to a specific version. If your app requires “foobar==1.0.3” and another app the site uses requires “foobar==1.0.4”, that’s going to cause a headache for the user. If your app requires “foobar>=1.0.3”, there’s no problem. If you want to limit the versions, you can use a specification like “foobar>=1.0.3,<2.0” so that if the foobar package releases a 2.0 version, your app won’t try to use it until you’ve had a chance to test with it and release an update. Further reading The Django stable documentation has a tutorial on Reusable Apps. It focuses more on packaging than design, so it’s a good complement to this post. Daniel Greenfeld and Audrey Roy’s book Two Scoops of Django: Best Practices for Django has a section What Makes a Good Django Package? with good hints on making a useful, reusable app. It’s adapted from their DjangoCon 2011 talk, “Django Package Thunderdome: Is Your Package Worthy?” so you can get some of the ideas from those slides (but the book has a lot of other useful things in it, I recommend it as a whole). James Bennett’s Practical Django Projects has a chapter on writing reusable Django applications. It discusses the philosophy of what makes a good reusable app, and then gives some examples of handling specific coding issues of plugging in apps, at much greater length that we can here.
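As a concrete illustration of the settings advice above, here is a minimal sketch of reading an optional setting with a default and failing loudly on a required one. The APPNAME_TIMEOUT and APPNAME_API_KEY names (and the appname/conf.py module) are placeholders invented for this example, not settings from any particular app.

# appname/conf.py  (hypothetical module)
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured

# Optional setting: fall back to a reasonable default if the site omits it
TIMEOUT = getattr(settings, "APPNAME_TIMEOUT", 30)

def get_api_key():
    # Required setting with no sensible default: check for it and give a
    # useful error message instead of failing obscurely later on
    try:
        return settings.APPNAME_API_KEY
    except AttributeError:
        raise ImproperlyConfigured(
            "APPNAME_API_KEY must be set in your Django settings to use this app."
        )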
https://www.caktusgroup.com/blog/2013/06/12/making-your-django-app-more-pluggable/
CC-MAIN-2020-24
en
refinedweb
To complete this tutorial, you should have some basic knowledge of the Python programming language, and be comfortable executing commands on a Linux command line. If you are looking to use Adapt in a Mycroft Skill, please see Skill Development > Intents This is the sample Intent around which the tutorial is based. A sample intent that uses a fixed vocabulary to extract entities for an intenttry with the following:PYTHONPATH=. python examples/single_intent_parser.py "what's the weather like in tokyo" First, we need to import json for serializing the Adapt Intent Parser output, and sys for reading in command line arguments. import jsonimport sys Next, we import the IntentBuilder and `IntentDeterminationEngine. from adapt.intent import IntentBuilderfrom adapt.engine import IntentDeterminationEngine Next, we instantiate an IntentDeterminationEngine object. engine = IntentDeterminationEngine() Next, we delcare a collection of weather Keywords, in JSON syntax. These Keywords act as hints to the Adapt Intent Parser about which intent context is being referenced by an Utterance. weather_keyword = ["weather"] Register each Keyword with the engine. for wk in weather_keyword:engine.register_entity(wk, "WeatherKeyword") Next, we declare a collection of weather types. These act as a query parameter on a Weather Intent. For example, in the sentence: Will it rain in Seattle tomorrow? the collection of weather types can then be used to determine whether that weather type is occurring in Seattle. Next, each weather type is registered with the engine. for wt in weather_types:engine.register_entity(wt, "WeatherType") Next, a collection of locations is declared. These also act as a query parameter on a Weather Intent, and can be used in combination with the weather type collection. locations = ["Seattle","San Francisco","Tokyo"] Next, each location is registered with the engine. for loc in locations:engine.register_entity(loc, "Location") Next, we construct an intent parser. The intent parser is named WeatherIntent and requires both a WeatherKeyword and Location, and can optionally include a WeatherType. weather_intent = IntentBuilder("WeatherIntent").require("WeatherKeyword").optionally("WeatherType").require("Location").build() Next, we register the intent parser with the engine. engine.register_intent_parser(weather_intent) We then declare an entry point for the script. @TODO - need to explain here what an entry point is. if __name__ == "__main__": Next, pass the command line arguments to this script as an Utterance into engine.determine_intent(). This function returns a generator, and we then use the generator to iterate through the results. for intent in engine.determine_intent(' '.join(sys.argv[1:])): If the confidence is >0, this is a valid Intent. if intent.get('confidence') > 0: Next, serialize the Intent and print it to stdout. print(json.dumps(intent, indent=4)) Of course, you don't just have to output the Intent to stdout - you can use it to build all sorts of tools.
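Pulling the fragments above together, the whole example script looks roughly like this. Everything here mirrors the walkthrough; only the weather_types list, whose literal values are not shown in the excerpt, is filled in with plausible guesses.

import json
import sys

from adapt.intent import IntentBuilder
from adapt.engine import IntentDeterminationEngine

engine = IntentDeterminationEngine()

weather_keyword = ["weather"]
for wk in weather_keyword:
    engine.register_entity(wk, "WeatherKeyword")

# Illustrative values; the tutorial's actual list is not shown above
weather_types = ["snow", "rain", "wind", "sleet", "sun"]
for wt in weather_types:
    engine.register_entity(wt, "WeatherType")

locations = ["Seattle", "San Francisco", "Tokyo"]
for loc in locations:
    engine.register_entity(loc, "Location")

weather_intent = IntentBuilder("WeatherIntent")\
    .require("WeatherKeyword")\
    .optionally("WeatherType")\
    .require("Location")\
    .build()

engine.register_intent_parser(weather_intent)

if __name__ == "__main__":
    for intent in engine.determine_intent(' '.join(sys.argv[1:])):
        if intent.get('confidence') > 0:
            print(json.dumps(intent, indent=4))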
https://mycroft-ai.gitbook.io/docs/mycroft-technologies/adapt/adapt-tutorial
CC-MAIN-2020-24
en
refinedweb
The layer of geometry kernels provides basic geometric entities of constant sizeIn dimension \( d\), an entity of size \( O(d)\) is considered to be of constant size. and primitive operations on them. Each entity is provided as both a stand-alone class, which is parameterized by a kernel class, and as a type in the kernel class. Each operation in the kernel is provided via a functor classA class which defines a member operator(). in the kernel class and also as either a member function or a global function. See [5] for more details about this design. Ideally, if the kernel provides all the primitives required, you can use any kernel as a traits class directly with your algorithm or data structure; see also Chapter Traits Classes . If you need primitives not provided by the kernel (yet), please read Section Missing functionality below. CGAL provides different kernels, they can differ by internal representation of objects (e.g. cartesian versus homogeneous) or provide different functionalities (e.g. circular kernel). When creating a new package, the authors have to specify clearly the requirements needed by the kernel used. For example they can specify the needs with respect to the arithmetic. The authors may specify a targeted kernel in the list of predefined kernels (e.g. Exact_predicates_inexact_constructions_kernel). Point coordinates can be represented in a homogeneous or cartesian way. The developer of a package can keep in mind that cartesian will be usually more space consuming, while homogeneous can be interesting if exact rational computations are needed. In any way, a package has to work with both representations. Since CGAL uses homogeneous representation for affine geometry and not for projective geometry, the homogenizing coordinate is non zero. The cartesian representation corresponding to an homogeneous point \( (x_0,x_1,...,x_d,w)\) is \( (x_0/w,x_1/w,...,x_d/w)\). Hence, homogeneous representation is not unique; \( (\alpha x,\alpha y,\alpha z,\alpha w)\) is an alternative representation to \( (x,y,z,w)\) for any \( \alpha\neq 0\). Internally, CGAL always maintains a non-negative homogenizing coordinate. Each kernel object is provided as both a stand-alone class, which is parameterized by a kernel class ( Geo_object_D<K>), and as a type in the kernel class ( K::Geo_object_D). While the former use may be more natural for users not interested in the flexibility of the kernel (and is compatible with the original kernel design [4]), the latter syntax should be used in all code distributed with the library as it allows types in the kernel to be easily exchanged and modified. Similarly, each operation and construction in the kernel is provided via a function object class in the kernel class and also as either a member function or a global function; developers should use the function object classes to gain access to the functionality. See [5] for more details about this design and how it is accomplished. The classes for the geometric objects in the kernel have a standardized interface. bbox()member function computing a bounding box. transform(Aff_transformation_d t)member function to compute the object transformed by t. has_on_positive_side(Point_d), has_on_boundary(Point_d), and has_on_negative_side(Point_d). Furthermore, there is a member function oriented_side(Point_d)returning an object of type CGAL::Oriented_side. has_on_bounded_side(Point_d), has_on_boundary(Point_d), and has_on_unbounded_side(Point_d). 
Furthermore, there is a member function bounded_side(Point_d)returning an object of type CGAL::Bounded_side. opposite()returning the same object with opposite orientation. For a number of predicates, there are versions that operate on the coordinates directly, not on the geometric objects. These number-type based predicates ease re-use with non-CGAL types. Kernel traits should avoid redundant functionality, or if similar functionality is implemented with a different API, then one should really implement the functionality and the others call that one. Cartesian versus homogeneous computation on how to derive the homogeneous version of a predicate from the Cartesian version. When adding a new function object to the kernel you must: include/CGAL/Kernel/function_objects.hto add a new function object builder in namespace internal:: include/CGAL/Kernel/interface_macros.hto add the actual function object class and its corresponding member function that returns an object of this class test/Kernel/include/CGAL/_test_new_2.hand/or test/Kernel/include/CGAL/_test_new_3.hto add the test for this function object. Kernel_23/doc/Kernel_23/Concepts/FunctionObjectConcepts.h New_function_objectto the set of requirements for the Kernel concept in the file Kernel_23/doc/Kernel_23/Concepts/Kernel.h Kernel_23/doc/Kernel_23/PackageDescription.txt
https://doc.cgal.org/latest/Manual/devman_kernels.html
CC-MAIN-2020-24
en
refinedweb
Frameworks Software

- The Tao Framework: Superseded by OpenTK
- Okapi Framework (Old .NET version): See for LATES VERSION of the Okapi Framework
- Evolutility - CRUD framework for ASP.net: CRUD framework with a generic Web UI and integrated micro ORM.
- Abzu: Abzu provides everything from basic to advanced console features for any .net console project
- WebGui .NET Web Design Tools: Enterprise-level HTML5 application development platform
- Open Jungo: Open software persistence model
- ADODB-mysql for Mono: Simplified ADODB interface library for MySQL on Mono/.NET. The library can be used to port MS ADODB project to an Mono environment
- DevEnhancer (for SAP Business One)
- CronDotNet: CronDotNet is a Microsoft .NET framework library (dll) for scheduling automated jobs. It is based on its UNIX cousin, CR
- dotCODES_Source_Control_for_VS: The dotCODES Source Control Maintenance Mainframe (SCM2)
- A flexible .NET Plugin architecture: The TaskPluginInterface namespace is a set of classes, interfaces, enumerations, and events to provide create a "Plug-in" architecture for .NET applications.
- Aspect.NET: Aspect.NET is a powerful and lightweight aspect-oriented toolset for .NET.
- Automated Application Conversion tool: Application conversion tool from VB.NET to HTML5 web and mobile.
https://sourceforge.net/directory/development/frameworks/language:vb_net/os:os_groups/
CC-MAIN-2017-17
en
refinedweb
Developing for Microsoft Office with VSTO is fantastic, I mean, when the only other option is VBA. And that was basically the main reason I ended up working with VSTO in the first place. A problem I had right in the beginning of my adventure was with the user interaction with the add-in. It was a simple custom toolbar with buttons, nothing fancy, but sometimes after using the add-in for a while the buttons would just randomly stop responding. The bug was in the code that created the command bar and attached the event handlers. In order to configure and add the click event handler I was using a local variable to reference the newly added command bar button. Being local this variable would later be collected by the GC resulting in the loss of the event handler. So don’t forget… specify all command bar button variables at the add-in level to prevent them from being garbage collected. You can check this behavior in the following example: using System.Windows.Forms; using Office = Microsoft.Office.Core; public partial class ThisAddIn { private Office.CommandBar bar; private Office.CommandBarButton showMsgCorrect; private void ThisAddIn_Startup(object sender, EventArgs e) { bar = Application.CommandBars.Add( "Example Bar", Office.MsoBarPosition.msoBarTop, false, true); // Do this showMsgCorrect = (Office.CommandBarButton)bar.Controls.Add( Office.MsoControlType.msoControlButton, missing, missing, missing, missing); showMsgCorrect.Caption = "It Works"; showMsgCorrect.TooltipText = "Will always work"; showMsgCorrect.Style = Office.MsoButtonStyle.msoButtonCaption; showMsgCorrect.Click += new Office._CommandBarButtonEvents_ClickEventHandler(button_Click); // Don't do this - Garbage collection will break it Office.CommandBarButton showMsgIncorrect; showMsgIncorrect = (Office.CommandBarButton)bar.Controls.Add( Office.MsoControlType.msoControlButton, missing, missing, missing, missing); showMsgIncorrect.Caption = "Will Stop Working"; showMsgIncorrect.TooltipText = "Will stop working"; showMsgIncorrect.Style = Office.MsoButtonStyle.msoButtonCaption; showMsgIncorrect.Click += new Office._CommandBarButtonEvents_ClickEventHandler(button_Click); bar.Visible = true; } private void ThisAddIn_Shutdown(object sender, EventArgs e) { } void button_Click(Office.CommandBarButton Ctrl, ref bool Cancel) { MessageBox.Show( "Hello World!", string.Empty, MessageBoxButtons.OK, MessageBoxIcon.Information); GC.Collect(); } #region VSTO generated code private void InternalStartup() { this.Startup += new System.EventHandler(ThisAddIn_Startup); this.Shutdown += new System.EventHandler(ThisAddIn_Shutdown); } #endregion }
https://exceptionalcode.wordpress.com/2009/11/18/command-bar-with-visual-studio-tools-for-office/
CC-MAIN-2017-17
en
refinedweb
1 polynomial polynomial :: derivative(void) { polynomial outpoly; outpoly._coef = new double [_degree]; outpoly._degree = (_degree-1); for(int i=0; i<(_degree); i++) { outpoly._coef[i]=(i+1)*_coef[i+1]; } return outpoly; } //header: #pragma once #include <complex> #include <iostream> #include <string> #include <sstream> using namespace std; class polynomial { private: double * _coef; int _degree; public: // constructors polynomial(void); polynomial(int degree, double * coef); ~polynomial(void); // at: evaluates the polynomial at x or z // returns the real/complex value of the polynomial at x/z double at(double x) const; complex<double> at ( complex<double> z) const; polynomial derivative(void) const; bool isEmpty(void); string tostring(void); polynomial& operator = (const polynomial& rhs); friend ostream& operator << (ostream& os, polynomial& m); }; //How I use it in main: double * coef; int degree; cout<< "INPUT degree: "; cin>>degree; coef = new double [degree+1]; for(int n = 0; n<(degree+1); n++) { cout<< "INPUT " << n << "th degree: "; cin>> coef[n]; cout<<endl; } polynomial poly(degree, coef); cout<<"INPUT converted to: " << poly<<endl; cout<<"EVALUATE for x = "; double x; cin >> x; cout << poly << " = " << poly.at(x) << endl; cout << "DERIVATIVE of fxn is: " << poly.derivative() <<endl; I have code for handling polynomials via an array of coefficients. Now I want to create a function that returns its derivative in polynomial form. Unfortunately, the return value is deleted before it is passed on. I discovered this the hardway by following the debugger as closely as I could. How can I avoid this error? Edited by denizen08: added other details
https://www.daniweb.com/programming/software-development/threads/239821/return-value-is-deleted-before-being-passed-on-how-to-avoid
CC-MAIN-2017-17
en
refinedweb
30 May 2016 Updating the Matplotlib Font Cache When publishing papers or articles, I want my plots to integrate with the text surrounding them. I want them to use the correct font size, and the correct font. This is easy to do with Matplotlib: import matplotlib matplotlib.rcParams['font.size'] = 12 matplotlib.rcParams['font.family'] = 'Calibri' However, sometimes, Matplotlib won't find the correct, even though it is clearly installed. This happens when Matplotlib's internal font cache is out of date. To refresh the font cache, use matplotlib.font_manager._rebuild() Happy Plotting!
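If the font still does not show up after rebuilding the cache, it can also help to check whether Matplotlib's font manager sees the font at all. A small sketch using the standard font_manager API (the 'Calibri' name is just the example from above):

from matplotlib import font_manager

# Family names Matplotlib discovered on this machine
families = sorted({f.name for f in font_manager.fontManager.ttflist})
print('Calibri' in families)

# Which font file would actually be used for the requested family;
# an unknown family silently falls back to DejaVu Sans
print(font_manager.findfont('Calibri'))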
http://bastibe.de/2016-05-30-matplotlib-font-cache.html
CC-MAIN-2017-17
en
refinedweb
On 8/20/2012 3:36 PM, Gregg Smith wrote: > Hi Joe, > > There seems to be a problem with this commit. > mod_ssl.c > .\mod_ssl.c(288) : error C2491: 'modssl_run_npn_advertise_protos_hook' : definition of > dllimport function not allowed > .\mod_ssl.c(294) : error C2491: 'modssl_run_npn_proto_negotiated_hook' : definition of > dllimport function not allowed That's because the API design is invalid. As I noted in the 2.2 backport status file; * mod_ssl: Add support for Next Protocol Negotiation. Trunk patch: 2.2.x patch: +1: benl sf notes: needs the buffer overflow fix from r1345599, too wrowe notes: also needs correction to ssl_engine_kernel.c: In function 'ssl_callback_AdvertiseNextProtos': ssl_engine_kernel.c:2140:5: warning: implicit declaration of function 'modssl_run_npn_advertise_protos_hook' Including mod_ssl.h after ssl_private.h seems to suffice. The change introduces hard linkages from modules into mod_ssl.so (distinct from httpd), AP is the incorrect namespace, see mod_dav main hooks as an example. Prior to this patch all calls to mod_ssl were by way of registered functions through apr bindings. Seems there aught to be a way to add an npn cooperating module when mod_ssl is not loaded, but right now it would fail. An mmn minor bump would also be required for API addition.
http://mail-archives.apache.org/mod_mbox/httpd-dev/201208.mbox/%3C5032A4E8.1040702@rowe-clan.net%3E
CC-MAIN-2017-17
en
refinedweb
By Mike Gunderloy Sometimes it seems to take more code to support a component than to implement its functionality. For example, you might sell a client a server-side component with a license that limits the user to five simultaneous instances of the component. This business decision has development consequences: You now need to come up with a way to enforce that license count. If you’re working in the .NET world, there’s an easy answer. You can use the System.EnterpriseServices namespace to limit the number of simultaneous users without writing a lot of code. COM+ to the rescue System.EnterpriseServices is the .NET wrapper around COM+, a part of the Windows operating system that provides various infrastructure-level services to interested applications. These services include automatic transaction management, just-in-time activation, component queuing, and (central to this article) object pooling. .NET components that use COM+ are called serviced components. Here are the typical steps in creating a serviced component: - Create a class that inherits from System.EnterpriseServices.ServicedComponent. - Assign a strong name to the assembly containing the class. - Install the assembly into the Global Assembly Cache (GAC). - Use the Services Installation tool (Regsvcs.exe) to install the assembly to the COM+ catalog. A serviced component example To see how this works, follow along as I create a very simple serviced component. To begin, you’ll need to create a key file to use in assigning a strong name to the assembly. This key file is essential to the cryptographic signing that .NET uses to verify assembly integrity. You can create a key file from the Visual Studio .NET command prompt by running the sn utility: sn –k trsc.snk Now, launch Visual Studio .NET and create a new Visual Basic .NET Class Library project, naming it TRSC. Right-click the project and add a reference to the System.EnterpriseServices component. Rename the default Class1.vb to TimeServer.vb and fill in its code: Imports System Imports System.EnterpriseServices <ObjectPooling(CreationTimeout:=10000, _ Enabled:=True, MaxPoolSize:=2)> _ Public Class TimeServer Inherits ServicedComponent ' Return the current time Public Function GetTime() As String GetTime = DateTime.Now.ToLongTimeString() End Function ' Enable object pooling Protected Overrides Function CanBePooled() _ As Boolean CanBePooled = True End Function End Class Obviously, this is an extremely simple class, but the same principles will work with your thousands-of-lines-long business rule extravaganza. Note the scaffolding code to enable object pooling; I’ll come back to that later. Next, you need to make some changes to the AssemblyInfo.vb file. In particular, you need to change and add some assembly attributes. If you haven’t looked at attributes before, you can think of them as a way to add metadata to your assemblies. The Common Language Runtime (CLR) uses these attributes to determine what to do with your assembly and how it relates to other pieces of software such as COM+. At the top of the AssemblyInfo.vb file, you need to make the System.EnterpriseServices namespace available: Imports System.EnterpriseServices And the assembly attributes go at the bottom: <Assembly: AssemblyVersion("1.0.0.0")> <Assembly: ApplicationName( _ "TechRepublic TimeServer")> <Assembly: Description( _ "Delivers the current time on demand")> <Assembly: AssemblyKeyFile("..\..\trsc.snk")> The first of these assigns the version number for the library. 
The ApplicationName and ApplicationDescription will help you identify the library in the future. The AssemblyKeyFile attribute locates the file containing the key pair for strong naming. At this point, you can build the assembly by selecting Build | Build Solution. Then switch to a Visual Studio .NET command prompt and register it, first in the GAC and then with COM+: gacutil /i TRSC.dll regsvcs TRSC.dll Managing the COM+ application At this point, the serviced component is installed in the COM+ catalog and can be instantiated by client programs or can be administered through the Component Services administrative tool. To launch the tool, choose Start | Programs | Administrative Tools | Component Services. The tool runs in the familiar MMC interface, as shown in Figure A. Right-click on the TRSC.TimeServer component and you can view its properties, as shown in Figure B. You can now see how the ObjectPooling attribute that you applied to the class is translated into COM+ properties. Object pooling is the COM+ feature that solves the problem of limited concurrent usage for this library. When you set up an object pool, you’re telling COM+ how many copies of the object it can make available to client applications. You can specify the minimum number of copies of the object to keep in memory at all times (in this case, zero), the maximum number to make available (in this case, two, which I’m assuming as the license count for this demonstration), and the amount of time to wait for an object if the pool is exhausted (here, 10,000 milliseconds, or 10 seconds). COM+ implements a Pooling Manager that handles the details of object pooling. When the COM+ application is started, the Pooling Manager creates the minimum number of objects and thereafter maintains them in the pool at all times when the application is running. Each time that the Pooling Manager receives a request to create an object, it checks to see whether the object is available in the pool. If the object is available, the Pooling Manager provides an already created object from the pool. If there’s not an object available, the Pooling Manager will create one, as long as the pool isn’t already at its maximum size. If no object is available and no new object can be created because of the size restriction of the pool, the client requests are queued to receive the first available object from the pool. If an object cannot be made available within the time specified in the CreationTimeOut property, an exception is thrown. When the client is done with an object, it should invoke the object’s Dispose() method. The Pooling Manager intercepts this request and calls the CanBePooled() method on the object to check if the object is interested in being pooled. If the method returns True, the object is stored in the object pool. On the other hand, if the CanBePooled() method returns False, the object is destroyed forever. Pooling in action To see how this works in practice, you can create a new Visual Basic .NET Windows application (I named mine TRSCClient). To begin, add references to System.EnterpriseServices and to the TRSC.dll file created by compiling the class library. Place a button (btnLaunch) on the default Form1 and add a bit of code behind it: Private Sub btnLaunch_Click( _ ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles btnLaunch.Click ' Create a new client form Dim f As New frmClient() f.Show() End Sub Next, add a new form, frmClient, to the application. This form should contain a single TextBox control named txtTime. 
Here’s the code to go behind frmClient: ' Instance of the pooled class Dim t As TRSC.TimeServer Private Sub frmClient_Load( _ ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles MyBase.Load Try ' Create the pooled object and ' execute its GetTime method t = New TRSC.TimeServer() txtTime.Text = t.GetTime() Catch ex As Exception MessageBox.Show(ex.Message, "frmClient") Me.Close() End Try End Sub Private Sub frmClient_Closing( _ ByVal sender As Object, _ ByVal e As System.ComponentModel.CancelEventArgs) _ Handles MyBase.Closing ' Give up the pooled object If Not t Is Nothing Then t.Dispose() End If End Sub Now simply run the application and click the Launch button. This will create a new pooled object, open the client form, and retrieve the displayed time. Repeat this process and you’ll have two client forms open. Now try to launch a third client form. It won’t appear, because the object pool is exhausted. Instead, after the 10-second timeout elapses, you’ll get an error message: “COM+ activation failed because the activation could not be completed in the specified amount of time.” Close one of the first two forms and you’ll be able to launch another instance. Managing the pool The nice thing about this technique is that you don’t have to recompile the component to adjust the pool size. Instead, you can just go into the properties of the class, which you saw in Figure B. Of course, there’s nothing to prevent your customers from doing the same. You’ll be depending on the honor system to keep the license count set properly. But if you can’t trust your customers to do that, will they respect any other system, or will they try to crack it? The System.EnterpriseServices approach has the advantage of being easy to implement and built right into the operating system, which ultimately means less custom code to harbor bugs.
http://www.techrepublic.com/article/let-enterprise-services-track-your-license-count/
CC-MAIN-2017-17
en
refinedweb
Opened 8 years ago Last modified 2 years ago

#5958 new enhancement

GeoTicketPlugin should cache unlocatable results

Description

locate_ticket is currently quite slow on the /query screen if there are tickets with unlocatable locations. This is because these locations are not cached:

def locate_ticket(self, ticket):
    if ticket.id:
        results = get_all_dict(self.env, "select latitude, longitude from ticket_location where ticket='%s'" % ticket.id)
        if results:
            return ticket['location'], (results[0]['latitude'], results[0]['longitude'])
    if ticket['location'] is None or not ticket['location'].strip():
        raise GeolocationException
    # XXX blindly assume UTF-8
    try:
        location = ticket['location'].encode('utf-8')
    except UnicodeEncodeError:
        raise
    location, (lat, lon) = self.geolocate(location)
    if ticket.id:
        self.set_location(ticket.id, lat, lon)
    return location, (lat, lon)

A set of unlocatable locations should be hashed on the instance so that additional requests need not be made for bad locations (or more precisely, they need not be made more than once per instance).
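A minimal sketch of the proposed fix, reusing the helper names from the snippet above (geolocate, set_location, GeolocationException) and assuming that geolocate signals failure by raising GeolocationException; the _unlocatable set is the new per-instance cache suggested by the ticket:

def locate_ticket(self, ticket):
    # ... cached-coordinates lookup from ticket_location, as above ...

    location = (ticket['location'] or '').strip()
    if not location:
        raise GeolocationException

    # Remember locations that already failed so the geolocator is asked
    # at most once per instance for a bad address
    if not hasattr(self, '_unlocatable'):
        self._unlocatable = set()
    if location in self._unlocatable:
        raise GeolocationException

    try:
        location, (lat, lon) = self.geolocate(location.encode('utf-8'))
    except GeolocationException:
        self._unlocatable.add(location)
        raise

    if ticket.id:
        self.set_location(ticket.id, lat, lon)
    return location, (lat, lon)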
https://trac-hacks.org/ticket/5958
CC-MAIN-2017-17
en
refinedweb
sasl_server_init man page sasl_server_init — SASL server authentication initialization Synopsis #include <sasl/sasl.h> int sasl_server_init(const sasl_callback_t *callbacks, const char *appname); Description. to RFC 4422 See Also sasl(3), sasl_callbacks(3), sasl_errors(3), sasl_server_new(3), sasl_server_start(3), sasl_server_step(3) Referenced By sasl(3), sasl_done(3), sasl_global_listmech(3), sasl_server_new(3), sasl_server_start(3), sasl_server_step(3).
https://www.mankier.com/3/sasl_server_init
CC-MAIN-2017-17
en
refinedweb
Let's consider the following code: #include <type_traits> enum class foo_bar : unsigned { foo, bar, }; int main() { foo_bar bar = foo_bar::bar; // unsigned a = bar; -- error: cannot convert ‘foo_bar’ to ‘unsigned int’ in initialization unsigned a = static_cast<std::underlying_type<foo_bar>::type>(bar); return 0; } a foo_bar From N2347, the problem with implicit integral promotion is that you can compare two different enums: enum Color { ClrRed, ClrOrange, ClrYellow, ClrGreen, ClrBlue, ClrViolet }; enum Alert { CndGreen, CndYellow, CndRed }; Color c = ClrGreen; Alert a = CndGreen; bool armWeapons = ( a >= c ); // compiles but does not make sense To see what is really going here, you can emulate enum classes with structures: struct A { operator int() const { // for simplicity, just return fixed value return 1; } }; struct B { operator int() const { return 2; } }; A a; B b; int i = b; // ok bool x = (a < b); // yay, this compiles Now, the correct solution would be to make the operators explicit: struct B { explicit operator int() const { return 2; } }; A a; B b; bool x = (a < b); // finally does not work int i = b; // but now this does not work without explicit cast either
https://codedump.io/share/KMxSjSxac3Y9/1/why-implicit-conversion-from-strongly-typed-enum-to-its-underlying-type-is-not-allowed
CC-MAIN-2017-17
en
refinedweb
Automated Chicken Coop Door 2,601 17 2 Featured Automatic doors in Chicken Coops are a solution to nighttime predators such as raccoons, possums, and feral cats! A typical automatic door, however, costs over $200 on Amazon (Automatic Chicken Coop Door) and is prohibitively expensive to many small-scale chicken owners. To create this project, some background with Arduino is necessary. See these Arduino Tutorials for an introduction if you have never worked with Arduino. This guide was created in parallel with guides linked below to create an automated, upcycled chicken coop. As such, it's assumed that your coop will have a similar layout as well as a 12V power supply/solar panels capable of outputting up to 10 Amps. Finally, we do not take responsibility for any harm/injury that befalls you on this perilous guide to DIYstruction! Teacher Notes Teachers! Did you use this instructable in your classroom? Add a Teacher Note to share how you incorporated it into your lesson. Step 1: Tools You Will Need Soldering Iron Small Phillips Screwdriver Wire Strippers Drill and drill bits Step 2: Choosing Your Materials Most of the materials in this guide can be sourced from various waste streams, however, here are some components that you will likely have to purchase. Purchased Materials: Note: If you are able to pull the relays from a car vehicle you only need 2 We sourced the rest of the materials by going to our local pick n' pull or junkyard. If you aren't able or don't have the time to find the materials you can purchase them online. Upcycled Materials: We found ours at the local pick n'pull. A quick google search will turn up locations near you. Also, Youtube has videos to disassemble car doors on most models! You can scrap these from the same vehicle (above). These can be pulled out of an old dresser This plywood will act as the door. Any square foot board or metal sheet will do! Step 3: Circuit Construction It is easier to connect the 5V, 12V, and ground nodes in the figure using wire nuts from Step 2. Here's a helpful video on How to Use Wire Nuts Properly. In the first figure, the 12V connections can either come from a 12V motorcycle/car battery or some other 12V power source. Whatever power source you decide to use, make sure it is capable of delivering up to 10Amps as the startup current for the inductive motor can be quite large. It may also be helpful to set a 10A fuse in line with the power source to protect the rest of the electronics from potential shorting. Solder Snap-Action Switches This next step requires some soldering. Here is a helpful video on Soldering a Switch. Since the snap-action switches are going to be placed at the top and bottom of door travel, make sure that you cut enough wire to run from that position to where your Arduino will be in the coop. Solder one wire to the Normally Open (NO) terminal and wrap it in electrical tape or shrink wrap (the other end will attach to the 5V source). Solder another wire to the common terminal (C) and wrap it in electrical tape, also. The procedure for the top and the bottom switches are the same, however, the common pin on the snap-switch at the top of the door attaches to A8 on the Arduino whereas the common pin on the bottom snap-switch attaches to A14 on the Arduino (see wiring diagram). Wiring the Clock and L-298 H-Bridge Use the male/female wires to wire the clock and h-bridge to the Arduino (see wiring diagram). Wiring the Relays The relays from Step 2 come with a wiring harness that can be pushed onto the pins on the relay. 
If you are using a different single pole double throw relay, the third figure above may be helpful to you. Wire terminals 85 and 86 on the relays to the output pins of the L298 H-Bridge by screwing them into the board (Polarity doesn't matter) Connect the center pin (87A) to the ground node (wire nut). Connect pin 87 to the +12V node. Finally, make sure any exposed wire is insulated with electrical tape around all loose connections! Step 4: Uploading Code to the Arduino Download Arduino Ide First, download the Arduino IDE for your operating system here: Arduino IDE Download Arduino Code Automatic Chicken Coop Door with Solenoid The solenoid is an optional attachment in this project. To see how the solenoid circuit is set up, visit our automatic misting instructable! Import the Libraries There are 4 libraries you will need to import for this project. Timelord, DS3231, OneWire, and DallasTemperature Here is a helpful video on Library Installation should you need it. Changing the Code The only sections of code you need to change are highlighted in the figures given. The first section is the latitude and longitude. Update these to match the geographic location of your chicken coop (you can find these by hovering over a point in google maps). Next, update the timezone to match your own. Here's a helpful link to figure our your UTC Timezone. Finally, update the setTime and setDate lines in the Arduino code. i.e. rtc.setTime(Hour, Minute, Second) rtc.setDate(Day, Month, Year) Step 5: Installing Hardware 1. Drill a hole in the top of your chicken coop door and attach a string. make sure it's level by hanging the string. If it's tilted too much in any direction, make a new hole closer to the side that is hanging lower. 2.Install the Chicken Coop Door with slides 3. Mount the motor above the door in line with the string (make sure you have enough clearance for the door to open completely. 4. Install the snap-action switches at the top and bottom of the door We drilled two holes lined up with the holes on the switches and secured them with zip ties. Slide the door up and down the track and make sure that the switches are pressed. If not, you may need to add some sort of spacer. We drilled some holes in plastic that was lying around and thread the zip ties through those. 4. Create a shelf for the electronics Make sure it's out of reach of the chickens 5. Place all electronics in a waterproof container (we used a clear Tupperware container and drilled a hole in the side for the wires). 6. Ensure your electronics are peck-proof. We accomplished this by adding a hinged box around the battery, and installing barriers in front of snap-action switches to make them harder to peck. 2 Discussions Question 1 year ago on Step 4 After importing the appropriate libraries I'm getting an error when trying to find the hardware/avr/HW_AVR_defines.h file. Can anyone shed some light on how to fix this? Arduino: 1.8.0 (Windows 10), Board: "Arduino/Genuino Mega or Mega 2560, ATmega2560 (Mega 2560)" In file included from C:\Users\Chad\Desktop\ChickenCoop-master\SolenoidDoor\SolenoidDoor.ino:2:0: C:\Users\Chad\Documents\Arduino\libraries\ChickenCoop-master/DS3231.h:27:42: fatal error: hardware/avr/HW_AVR_defines.h: No such file or directory #include "hardware/avr/HW_AVR_defines.h" ^ compilation terminated. exit status 1 Error compiling for board Arduino/Genuino Mega or Mega 2560. This report would have more information with "Show verbose output during compilation" option enabled in File -> Preferences. 
1 year ago Know why a chicken coop has two doors? Because if it had four doors it would be a chicken sedan.
https://www.instructables.com/id/Automated-Chicken-Coop-Door/
CC-MAIN-2019-47
en
refinedweb
C++: Declaring Static Member Functions Member functions can be declared static in C++. Static member functions are useful when you want to associate an action to a class, but you don’t need to associate that action with a particular object. For example, the member function Duck::fly() is associated with a particular duck, whereas the rather more drastic member function Duck::goExtinct() is not. Like static data members, static member functions are associated with a class and not with a particular object of that class. This means that, like a reference to a static data member, a reference to a static member function does not require an object. If an object is present, only its type is used. Thus, both calls to the static member function number() in the following example are legal. This example is a simple static program — a program using static members — CallStaticMember: // CallStaticMember - demonstrate two ways to call a // static member function // #include <cstdio> #include <cstdlib> #include <iostream> using namespace std; class Student { public: Student(const char* pN = "no name") : sName(pN) { noOfStudents++; } ~Student() { noOfStudents--; } const string& name() { return sName; } static int number() { return noOfStudents; } protected: string sName; static int noOfStudents; }; int Student::noOfStudents = 0; int main(int argcs, char* pArgs[]) { // create two students and ask the class "how many?" Student s1("Chester"); Student* pS2 = new Student("Scooter"); cout << "Created " << s1.name() << " and " << pS2->name() << endl; cout << "Number of students is " << s1.number() << endl; // now get rid of a student and ask again cout << "Deleting " << pS2->name() << endl; delete pS2; cout << "Number of students is " << Student::number() << endl; // wait until user is ready before terminating program // to allow the user to see the program results cout << "Press Enter to continue..." << endl; cin.ignore(10, 'n'); cin.get(); return 0; } This program creates two Student objects, one locally and one off the heap. It then displays their names and the count of the number of students. Next the program deletes one of the students and asks the class how many students are out there. The output from the program appears as follows: Created Chester and Scooter Number of students is 2 Deleting Scooter Number of students is 1 Press any key to continue... This class keeps its data members protected and provides access functions that allow outside (non-Student) code to read but not modify them. Declaring the return type of name() method to be string& rather than simply string causes the function to return a reference to the object’s existing name rather than create a temporary string object. Adding the const to the declaration keeps the caller from modifying the class’s name member.
https://www.dummies.com/programming/cpp/c-declaring-static-member-functions/
CC-MAIN-2019-47
en
refinedweb
Hi guys, I have a page that has a repeater connected to a dataset. (Just trying to replicate the youtube video with dataset search for continents and countries etc) The problems I'm having are as follows: 1. All of a sudden the repeater does not display at all, but in preview it does. When I check in inspector I can see that the repeater container box is there but does not display. 2. I have 2 buttons on top, 1 for search and the other as a drop-down. The buttons are called iTitle and iContinent. But they don't work. 3. Through the preview window I can see that the content via the dataset is loading and displaying. Also in the wix code I see an error message stating parameter event is never used. Here is my code: import wixData from "wix-data"; $w.onReady(() => { loadContinents(); }); let lastFilterTitle; let lastFilterContinent; let debounceTimer; export function iTitle_onkeyPress(event, $w) { if (debounceTimer) { clearTimeout(debounceTimer); debounceTimer = undefined; } debounceTimer = setTimeout(() => { filter($w('#iTitle').value, lastFilterContinent); }, 500); } export function iContinent_change(event, $w) { filter(lastFilterTitle, $w('#iContinent').value); } function filter(title, continent) { if (lastFilterTitle !== title || lastFilterContinent !== continent) { let newFilter = wixData.filter(); if (title) newFilter = newFilter.contains('articleTitle', title); if (continent) newFilter = newFilter.contains('continent', continent); $w('#dataset1').setFilter(newFilter); lastFilterTitle = title; lastFilterContinent = continent; } } function loadContinents() { wixData.query('Continents') .find() .then(res => { let options = [{"value": '', "label": 'All Continents'}]; options.push(...res.items.map(continent => { return {"value": continent.title, "label": continent.title}; })); $w('#iContinent').options = options; }); } Did you sync your sandbox database to your Live database? Hi Yisrael, and thank's for your prompt response. Yes, the sync helped to display the repeater on the page, but I still can't get the search box and dropdown menu to work. Any ideas on those? To be honest, I'm having a little trouble following your code as I can't see the page itself. Please post the URL of your site. Only authorized Wix personnel can get access to your site in the editor. Also, what youtube tutorial did you use? Hi again, The video url is this one: And the website is: So far I found that your text input field and your dropdown were not connected to the event handlers, therefore nothing happened. You need to set these in the property panels of the relevant components: Here is the Properties panel of the text input field: Here is the Properties panel of the dropdown: Also, you need to make sure that you use the Field Key of the collection, therefore: not this: newFilter = newFilter.contains('articleTitle', title); rather this: newFilter = newFilter.contains('title', title); I didn't go over all of your code. Make these changes and I'm sure that with some careful review of your code and logic you'll be able to get your page working. We are here to help out the best we can if you run into problems. Good luck, Yisrael hi, I got the search box working except sometimes displays too much data. My articles database has Article title with 4 values Article 1, Article 2, Article 3, Article 4. if I enter value 1 it is ok. But when I enter value 2 it shows 2 but also 1. This should not happen. 
Also I'm a little bit confused with when you hardcode some and when using the properties panel for change or keypress events. If we hardcode everything, then I enter in the properties I.e keypress then it adds a new line to that code, even though we already have it there hardcoded.
https://www.wix.com/corvid/forum/community-discussion/repeater-and-drop-drop-not-working-or-displaying
CC-MAIN-2019-47
en
refinedweb
py.test plugin to locally test sftp server connections.

Project description

pytest-sftpserver is a plugin for pytest that provides a local SFTP-Server fixture. The SFTP-Server provided by this fixture serves content not from files but directly from Python objects.

Quickstart

Assume you want to test a function that downloads a file from an SFTP-Server:

from contextlib import closing
import paramiko

def get_sftp_file(host, port, username, password, path):
    with closing(paramiko.Transport((host, port))) as transport:
        transport.connect(username=username, password=password)
        with closing(paramiko.SFTPClient.from_transport(transport)) as sftpclient:
            with sftpclient.open(path, "r") as sftp_file:
                return sftp_file.read()

This plugin allows you to test such functions without having to spin up an external SFTP-Server by providing a pytest fixture called sftpserver. You use it simply by adding a parameter named sftpserver to your test function:

def test_sftp_fetch(sftpserver):
    with sftpserver.serve_content({'a_dir': {'somefile.txt': "File content"}}):
        assert get_sftp_file(sftpserver.host, sftpserver.port,
                             "user", "pw", "/a_dir/somefile.txt") == "File content"

As can be seen from this example, sftpserver serves content directly from python objects instead of files.

Installation

pip install pytest-sftpserver

Supported Python versions

This package supports the following Python versions:
- 2.7, 3.5 - 3.7

TODO
- Add more documentation
- Add more usage examples
- Add TODOs :)

Version History

1.3.0 - 2019-09-16
- Updated supported Python versions to 2.7, 3.5 - 3.7. Dropped (official) support for 3.4.
- Check / format code with black, isort and flake8.
- Fix return type of .read(). (#15, thanks @WeatherGod)
- Support the offset parameter on write operations. (#11, #16, thanks @DrNecromant)

1.2.0 - 2018-03-28
- Updated supported Python versions to 2.7, 3.4 - 3.6. Dropped (official) support for 2.6 and 3.2, 3.3.
- Now always uses posixpath internally to avoid problems when running on Windows (#7, #8, thanks @dundeemt)
- Fixed broken readme badges (#14, thanks @movermeyer)

1.1.2 - 2015-06-01
- Fixed a bug in stat size calculation (#4)
- Fixed mkdir() overwriting existing content (#5)
Thanks to @zerok for both bug reports and accompanying tests.

1.1.1 - 2015-04-04
- Fixed broken chmod() behaviour for non-existing 'files' (Thanks @dundeemt)

1.1.0 - 2014-10-15
- Fixed broken stat() behaviour for non-existing 'files'
- Slightly increased test coverage

1.0.2 - 2014-07-27
- Fixed broken test on Python 2.6

1.0.1 - 2014-07-27
- Added Python 3.2 support
- Cleaned up tox configuration

1.0.0 - 2014-07-18
- Initial release

License

Licensed under the MIT License. See file LICENSE.

Inspiration

The implementation and idea for this plugin is in part based upon:
- pytest-localserver
- sftpserver
- The Twisted Conch in 60 Seconds series (although I ended up not using twisted, this was very helpful understanding SFTP internals)
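Because serve_content takes a nested dict, the same fixture also covers directory-level code. A small additional sketch in the spirit of the quickstart above; the list_sftp_dir helper is invented for this example, while the fixture attributes (serve_content, host, port) are the ones shown in the README:

from contextlib import closing
import paramiko

def list_sftp_dir(host, port, username, password, path):
    # Hypothetical helper, analogous to get_sftp_file above
    with closing(paramiko.Transport((host, port))) as transport:
        transport.connect(username=username, password=password)
        with closing(paramiko.SFTPClient.from_transport(transport)) as sftpclient:
            return sorted(sftpclient.listdir(path))

def test_sftp_listing(sftpserver):
    content = {'a_dir': {'somefile.txt': "File content", 'other.txt': "More"}}
    with sftpserver.serve_content(content):
        assert list_sftp_dir(sftpserver.host, sftpserver.port,
                             "user", "pw", "/a_dir") == ['other.txt', 'somefile.txt']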
https://pypi.org/project/pytest-sftpserver/
CC-MAIN-2019-47
en
refinedweb
Do not allow customs to surprise you with follow-up ... for clearance of goods use Freselle Customs Agency!

Excise warehousing

Excise warehousing is intended for those who:
- are authorized warehouse keepers or wish to become one;
- wish to deliver goods to an excise warehouse or out of it;
- are performing or wish to perform operations with excise goods in an excise warehouse;
- import or export excise goods;
- transport/forward excise goods;

Excise suspension arrangement – Member States must use the excise suspension arrangement in the common market to simplify alcohol trading. The system allows registered traders or warehouse keepers to produce, store, receive, and send goods without paying the excise. The system allows traders to postpone payment of excise duty before release for free circulation. Excise duty must be paid when the goods have been delivered for consumption or acquired by an unregistered private person.

Our services:
- development of a system necessary for handling of excise goods;
- applying for required licences and activity licences from customs;
http://fresellecustoms.ee/Tax-management/exice-warehousing
CC-MAIN-2019-47
en
refinedweb
Timeout Attributes In TestNG: When we are running an automation script, some scripts take a longer time to execute than expected. In those cases, we need to mark such cases as failed and then continue. So in this post, we are going to see how we can mark as failed those test cases which take too long to execute, with the help of the TestNG timeout.

TestNG allows us to do this in two ways:
- Suite Level: If you define the timeout at the suite level, it applies to all the test methods in that TestNG suite.
- Test Method Level: If you define it at the method level, the timeout duration applies to that method only. If a suite-level timeout is declared and you also specify one at the method level, the method-level value overrides the time mentioned at the suite level.

The timeOut attribute within the @Test annotation is assigned a value specifying the number of milliseconds. In case the test method exceeds the timeout value, the test method is marked as a failure with ThreadTimeoutException.

How to Define Timeout Attributes at Suite Level

In the below class, we have two methods: in the first test method we have put a wait time of 1000ms, and in the second test method the wait time is 400ms. But in the suite file we have mentioned 500ms, so the first method will fail because its 1000ms wait is higher than the timeout.

TestNG class:

public class TimeoutSuite {
    @Test
    public void timeTestOne() throws InterruptedException {
        Thread.sleep(1000);
        System.out.println("Time test method one");
    }

    @Test
    public void timeTestTwo() throws InterruptedException {
        Thread.sleep(400);
        System.out.println("Time test method two");
    }
}

testng.xml:

<suite name="Time test Suite" time-out="500">
    <test name="Timeout Test">
        <classes>
            <class name="com.howtodoinjava.test.TimeoutSuite" />
        </classes>
    </test>
</suite>

Now let us look at the other way, using the method level:

Timeout Attribute at Method Level

TestNG class:

public class TimeoutMethod {
    @Test(timeOut = 500)
    public void timeTestOne() throws InterruptedException {
        Thread.sleep(1000);
        System.out.println("Time test method one");
    }

    @Test(timeOut = 500)
    public void timeTestTwo() throws InterruptedException {
        Thread.sleep(400);
        System.out.println("Time test method two");
    }
}

Use the same testng.xml file to run the above TestNG class. Here we have mentioned a timeout duration of 500ms for each test method, and as the first test method takes more than the mentioned time, that's why it will fail.
https://www.softwaretestingo.com/timeout-attributes-testng/
CC-MAIN-2019-47
en
refinedweb
Paths and saving files
I'm having trouble with the Path concept. I'm using the ObjC PDFDocument.writeToFile method, but it just crashes Pythonista. The relevant bit of code is:
outFilename = pathlib.Path(outFilename)
pdfDoc.writeToFile_(outFilename)
pdfDoc is a valid PDFDocument object, and outFilename is a path as a string. The location is Pythonistas own iCloud folder, copied from a dialogs.pick_document thing.
@benwiggy See doc of pick_document The return value is a temporary file. You can read it directly, but to keep a permanent copy, you must move it somewhere else. and a sample of returned value
/private/var/mobile/Containers/Data/Application/1E3F0CAE-F004-491D-B153-03AE8CEA73F2/tmp/com.omz-software.Pythonista3-Inbox/test3.py
See the word tmp , you can not write on it
Ah, I suspected that iOS's restrictions would be the cause. Yes, it's a tmp location path similar to yours. I suppose I'll just have to save everything to:
/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/Documents
outFilename = '/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/Documents/xxx.xxx'
Ah. It's still crashing the app, though. I don't know what the cause is, but it only happens when I uncomment the writeToFile line. Is there a log somewhere?
@benwiggy Install this script as pythonista_startup.py in your site-packages directory of Pythonista, restart Pythonista and retry your code, you will get the log **after next restart **
@benwiggy This very little and ugly script works... It picks a pdf and write it as PDFDocument to Pythonista iCloud
from objc_util import *
import dialogs

fil = dialogs.pick_document()
url = nsurl(fil)
PDFDocument = ObjCClass('PDFDocument').alloc().initWithURL_(url)
#print(dir(PDFDocument))
outFilename = '/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/Documents/xxx.pdf'
PDFDocument.writeToFile_(outFilename)
Thanks. It's complaining about NSURL not having a string parameter for initFileURLWithPath when it crashes, but it doesn't complain about when writeToFile is commented out, which is weird. I'm trying to convert some MacOS PyObjC code into something that'll work here. Thanks for the example: I'll give it a try.
import os
from objc_util import *
from pathlib import Path
import dialogs

PDFDocument = ObjCClass('PDFDocument')
NSURL = ObjCClass('NSURL')
home = "/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/Documents/"

def rotate(filename):
    shortName = Path(filename).stem
    outFilename = home + shortName + "+90.pdf"
    pdfURL = NSURL.fileURLWithPath_(filename)
    pdfDoc = PDFDocument.alloc().initWithURL_(pdfURL)
    if pdfDoc:
        pages = pdfDoc.pageCount()
        for p in range(0, pages):
            page = pdfDoc.pageAtIndex_(p)
            existingRotation = page.rotation()
            newRotation = existingRotation + 90
            page.setRotation_(newRotation)
        outFilename = Path(outFilename)
        print(outFilename)
        pdfDoc.writeToFile_(outFilename)

if __name__ == '__main__':
    filename = dialogs.pick_document(types=['public.data'])
    rotate(filename)
Brilliant! I told you I was confused about Path !!
Excellent. Hopefully I should now be able to modify all my MacOS python scripts for PDFs along the same lines. Many thanks.
@benwiggy, did you look at using Python PDF manipulation library included in Pythonista? I think you might get a more robust solution with it.
Here’s an example combining all the PDFs in a directory into one file:
#coding: utf-8
from PyPDF2 import PdfFileMerger
import glob

pdfs = sorted(glob.glob("PDF/*"))
merger = PdfFileMerger()
for pdf in pdfs:
    merger.append(pdf)
merger.write("Combined result.pdf")
@benwiggy In this case, your script could become:
import os
from pathlib import Path
import dialogs
from PyPDF2 import PdfFileWriter, PdfFileReader

home = "/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/Documents/"

def rotate(filename):
    shortName = Path(filename).stem
    outFilename = home + shortName + "+90.pdf"
    pdfDoc = PdfFileReader(filename)
    if pdfDoc:
        output = PdfFileWriter()
        pages = pdfDoc.getNumPages()
        for p in range(0, pages):
            page = pdfDoc.getPage(p)
            page_out = page.rotateClockwise(90)
            output.addPage(page_out)
        outfil = open(outFilename, 'wb')
        output.write(outfil)
        outfil.close()
        print(outFilename)

if __name__ == '__main__':
    filename = dialogs.pick_document(types=['public.data'])
    rotate(filename)
https://forum.omz-software.com/topic/5484/paths-and-saving-files/3
CC-MAIN-2019-47
en
refinedweb
Look@ Knowledge Center is a new, online message and reference search facility based on the original LookAT. It works across all products in IBM Knowledge Center. It uses the Knowledge Center web site so that the information is always current. It only takes two steps to get the message information you need. Suppose you want to look up the message ID IBM1227I E for the Enterprise PL/I for z/OS compiler; all you need to do is: 1. Choose a product: All Products Note: Not all IBM products are on the list currently. When you search PL/I messages, you can opt for 'All Products', but you will probably get multiple search results--one for each release. 2. Enter the message ID IBM1227I, then click [Go]. Then you will get your message information in the Knowledge Center search result. Check it out: If you prefer using a mobile device to search for messages and codes, IBM Doc Buddy is still highly recommended. Download IBM Doc Buddy: (Editors: Xifang Zhang, Lu Fang) IBM no longer offers a PL/I compiler for Windows. IBM currently offers PL/I compilers for AIX and z/OS. IBM released VisualAge PL/I Enterprise for OS/2 and Windows NT (5639-D65) in 1998, but it has not been available or supported since 2006. More information can be found in the end of marketing and end of support announcements. There is no replacement product for the Windows compiler. IBM formerly made the Windows PL/I compiler available within bundles, such as WebSphere Developer for zSeries, renamed to WebSphere Developer for System z (5724-L44). WebSphere Developer for System z allowed for development on Windows and z/OS, but the last release (V7) hit end of support in 2010. Rational Developer for System z (RDz) was later introduced and was renamed to IBM Developer for z Systems (IDz) (5724-T07); however, the IBM Windows compilers, including the PL/I compiler, were removed in RDz V9. RDz V8.5 was the last release to include it, and it hit end of support in 2017. You may find more responses on Stack Overflow discussing other vendors that offer a PL/I compiler for Windows. My Notifications is a tool that enables you to subscribe to the products and the document types you want to see. By setting your personal subscription preferences, you can receive an email notification every time a specific content page is updated. Instructions for setting up My Notifications: Step 1: Log in to IBM Support - My Notifications with your IBMid. Step 2: Choose your Delivery Preferences (Daily or Weekly emails, Plain text or HTML formatted emails, etc.). If you prefer to use an RSS or Atom feed, uncheck the box next to your email address. Step 3: Use keywords to search for your target product name (e.g. PL/I) and click the '+Subscribe' symbol next to the product name. Step 4: Select the document types you prefer within the pop-up window and click the 'Submit' button afterwards. Note: It is highly recommended to select 'Fixes' and 'Troubleshooting' here. Step 5: Re-type the target product name in the 'Product lookup' field to check whether your subscriptions have been set successfully.
IBM Enterprise PL/I for z/OS, V5.2 Continuous Delivery Announcement Letter You may have already realized that the IBM PL/I compiler team has continuously delivered several V5.2 modifications after the V5.2 GA date (September 2017). Considering that one of our critical commitments is to deliver better service and accelerate our clients' success, we adopted the Continuous Delivery model starting from 2016 and refreshed the Enterprise PL/I for z/OS V5.1 tech docs several times until the V5.2 GA date. With the Continuous Delivery (CD) model, you can receive new features and enhanced capabilities as soon as the code is ready. The CD model enables you to receive enhancements in a faster and more continuous way without waiting for the next release. In case you missed the V5.2 CD Announcement Letter, which was officially published in September 2018, the link is included here for your convenience. IBM has released a new version of the PL/I compiler on IBM Z. The recent announcement of Enterprise PL/I V5.3 reinforces the continuing IBM commitment to the PL/I programming language on the z/OS operating system and the continued delivery of new features. Specifically, V5.3 offers: The exploitation of the new IBM z15™ hardware With V5.3, you can reduce CPU usage of decimal compute-intensive applications by up to 50%, and on average by 12% on IBM z14, over the same compute-intensive applications originally built with the previous Enterprise PL/I product. The new ARCH(13) compiler option allows the compiler to exploit the latest IBM z15. Improved processing of UTF-8 strings with the introduction of a new native datatype The V5.3 compiler provides increased efficiency and support for Unicode data encoded in UTF-8 format. A new native datatype, UCHAR, has been introduced to help you easily build maintainable applications and process UTF-8 strings efficiently. The enhanced support for processing UTF-8 strings also includes support for hex strings ending with the suffix UX, so that you can specify arbitrary UTF-8 string constants such as '00'ux (the lowest UCHAR value) and 'F48FBFBF'ux (the highest UCHAR value). The enhanced support for processing UTF-8 strings means that you can now work directly with UTF-8 strings without having to waste CPU resources on converting them. This results in more maintainable programs and is especially useful when you modernize your PL/I applications to work with web services. See UCHAR data and UX (hex) UCHAR constant. Several usability enhancements, particularly support for namespaces and VALUE sets The QUALIFY statement and a corresponding END statement delimit a qualify block, and thus create a namespace for ORDINALs, other types, and named constants. See QUALIFY statement. The VALUELIST and VALUERANGE attributes limit the set of values that a variable, an argument, or a returned value can have. See VALUELIST attribute and VALUERANGE attribute. The VALUELISTFROM attribute lets you copy a VALUE set from one variable to another. See VALUELISTFROM attribute. In addition, the V5.3 compiler has a number of new features to help you optimize your PL/I applications and increase your programming productivity. Specifically, the new compiler: • Supports the date/time patterns YYYY/MM/DD, YY/MM/DD, YYYY-MM-DDTHH:MI:SS.999999, DD/MM/YYYY, and DD/MM/YY. See Date/time built-in functions. • Enables you to use two slash characters (//) to specify that the rest of a line is a comment. See Delimiters and operators.
• Increases the maximum LINECOUNT value to 65535 lines so that fewer page breaks are created in listings intended to be viewed only online. See LINECOUNT. • Allows you to assign '' to HANDLEs, OFFSETs, AREAs, and ENTRYs as a simple way to assign a null value to them in the same manner that you can assign '' to POINTERs. See Non-computational targets. • Limits false positives in NOLAXENTRY and NOLAXQUAL checking by excluding names starting with 'DFH', 'DSN', 'EYU', 'SQL', or ' IBM'. See RULES. New built-in functions and options to add more functionality and increase flexibility The V5.3 compiler provides you with additional functionality so that you can modernize your applications. It also allows for maximum portability of your source code among a variety of compiler implementations. The V5.3 compiler provides the following new and enhanced built-in functions: New built-in functions • Array: INARRAY, QUICKSORT, and QUICKSORTX • Buffer: MEMREPLACE • Condition: ONOPERATOR • Comparison and replacement: IFTHENELSE, FOLDEDFULLMATCH, FOLDEDSIMPLEMATCH, REGEX, and REPLACE • Date/time value: MAXDATE, STCKETODATE, STCKTODATE, PLISTCKLOCAL, PLISTCKUTC, PLISTCKELOCAL, and PLISTCKEUTC • File reference: FILEDDWORD • JCL: ISJCLSYMBOL • Precision: PRECVAL and SCALEVAL • UTF-8 string: BYTELENGTH, UHIGH, ULOW, UVALID, UPPERLATIN1, UPPERASCII, LOWERLATIN1, LOWERASCII, ONUCHAR, and ONUSOURCE • System information: GETSYSWORD and GETSYSINT Enhanced built-in functions • Buffer: MEMCONVERT • JSON: JSONPUTVALUE and JSONPUTMEMBER See Summary of changes, Language Reference. The V5.3 compiler provides the following new and modified compiler options: New compiler options Modified compiler options See Summary of changes, Programming Guide. Improved JSON and XML support The V5.3 compiler increases support for various casings of names in the JSON functions via: • the addition of LOWER as a suboption to the JSON(CASE)compiler option • the new JSON(GET(HEEDCASE | IGNORECASE)) compiler option • the support for an optional parameter to JSONPUTMEMBER and JSONPUTVALUE that specifies whether the names should be written in lowercase, in uppercase, or as is. See JSON. A new XMLNAME attribute has been introduced, so that alternate name formats can be specified for XML output. See XMLNAME attribute. Compiler and runtime support for z/OS V2.4 Enterprise PL/I for z/OS, V5.3 adds support for building and running PL/I applications for the z/OS V2.4 operating system. With Enterprise PL/I for z/OS, V5.3, you can benefit from over 50 years of IBM experience in PL/I compiler innovation and development. Please visit the Enterprise PL/I for z/OS V5.3 Knowledge Center for more information. Both English manuals and Japanese manuals are viewable and downloadable now in the PL/I documentation library. If you have any comments regarding the PL/I documentation, please send them to compinfo@cn.ibm.com. (Author:.
https://www.ibm.com/developerworks/community/blogs/86d253aa-f216-4642-9f2b-eedb09087dfc?sortby=4&maxresults=15&page=3&lang=en
CC-MAIN-2019-47
en
refinedweb
The QXmlNodeModelIndex class identifies a node in an XML node model subclassed from QAbstractXmlNodeModel. More... #include <QXmlNodeModelIndex> This class is not part of the Qt GUI Framework Edition. Note: All functions in this class are reentrant. This class was introduced in Qt 4.4. The QXmlNodeModelIndex class identifies a node in an XML node model subclassed from QAbstractXmlNodeModel. QXmlNodeModelIndex is an index into an XML node model. It contains two data values (see data() and additionalData()) that the XML node model uses to identify the node. Identifies the specific node comparison operator that should be used. Typedef for QList<QXmlNodeModelIndex>. Identifies a kind of node. Note that the optional XML declaration at the very beginning of the XML document is not a processing instruction. See also QAbstractXmlNodeModel::kind(). Default constructor. Creates an item that is null. See also isNull(). Standard copy constructor. Creates a QXmlNodeModelIndex instance that is a copy of other. Returns the second data value. The node index holds two data values. data() returns the first one. See also data(). Returns the first data value. The node index holds two data values. additionalData() returns the second one. See also additionalData(). Returns the first data value as a void* pointer. See also additionalData(). Returns true if other is the same node as this. Returns true if this node is the same as other. This operator does not compare values, children, or names of nodes. It compares node identities, i.e., whether two nodes are from the same document and are found at the exact same place.
https://doc.qt.io/archives/4.6/qxmlnodemodelindex.html
CC-MAIN-2019-47
en
refinedweb
Although the Javadoc-based API documentation has become pretty useful, we developers are often in such a hurry and often feel so confident in our own abilities that it is almost inevitable that we will sometimes continue to try to do things without first reading the manual. Because of this tendency, we can occasionally get burned by misusing a particular API despite the documentation warning us not to (mis)use it that way. I discussed this in my blog post on Boolean.getBoolean(String) and highlight a similar issue in this post related to use of BigDecimal's constructor that accepts a double. At first sight, it might appear that the BigDecimal constructor that accepts a Java double would hold it with its originally specified precision in all cases. However, the Javadoc message for this constructor explicitly warns, "The results of this constructor can be somewhat unpredictable." It goes on to explain why (the double cannot hold the exact precision and this is made evident when passed to the BigDecimal constructor) and to suggest that the alternative constructor accepting a String as a parameter be used instead. The documentation also proposes using BigDecimal.valueOf(double) as the preferred way to convert a double or float to a BigDecimal. The following code listing is used to demonstrate these principles and a few related ideas.
DoubleToBigDecimal.java
import java.math.BigDecimal;
import static java.lang.System.out;

/**
 * Simple example of problems associated with using BigDecimal constructor
 * accepting a double.
 */
public class DoubleToBigDecimal
{
   private final static String NEW_LINE = System.getProperty("line.separator");

   public static void main(final String[] arguments)
   {
      //
      // Demonstrate BigDecimal from double
      //
      final double primitiveDouble = 0.1;
      final BigDecimal bdPrimDoubleCtor = new BigDecimal(primitiveDouble);
      final BigDecimal bdPrimDoubleValOf = BigDecimal.valueOf(primitiveDouble);

      final Double referenceDouble = Double.valueOf(0.1);
      final BigDecimal bdRefDoubleCtor = new BigDecimal(referenceDouble);
      final BigDecimal bdRefDoubleValOf = BigDecimal.valueOf(referenceDouble);

      out.println("Primitive Double: " + primitiveDouble);
      out.println("Reference Double: " + referenceDouble);
      out.println("Primitive BigDecimal/Double via Double Ctor: " + bdPrimDoubleCtor);
      out.println("Reference BigDecimal/Double via Double Ctor: " + bdRefDoubleCtor);
      out.println("Primitive BigDecimal/Double via ValueOf: " + bdPrimDoubleValOf);
      out.println("Reference BigDecimal/Double via ValueOf: " + bdRefDoubleValOf);
      out.println(NEW_LINE);

      //
      // Demonstrate BigDecimal from float
      //
      final float primitiveFloat = 0.1f;
      final BigDecimal bdPrimFloatCtor = new BigDecimal(primitiveFloat);
      final BigDecimal bdPrimFloatValOf = BigDecimal.valueOf(primitiveFloat);

      final Float referenceFloat = Float.valueOf(0.1f);
      final BigDecimal bdRefFloatCtor = new BigDecimal(referenceFloat);
      final BigDecimal bdRefFloatValOf = BigDecimal.valueOf(referenceFloat);

      out.println("Primitive Float: " + primitiveFloat);
      out.println("Reference Float: " + referenceFloat);
      out.println("Primitive BigDecimal/Float via Double Ctor: " + bdPrimFloatCtor);
      out.println("Reference BigDecimal/Float via Double Ctor: " + bdRefFloatCtor);
      out.println("Primitive BigDecimal/Float via ValueOf: " + bdPrimFloatValOf);
      out.println("Reference BigDecimal/Float via ValueOf: " + bdRefFloatValOf);
      out.println(NEW_LINE);

      //
      // More evidence of issues casting from float to double.
      //
      final double primitiveDoubleFromFloat = 0.1f;
      final Double referenceDoubleFromFloat = new Double(0.1f);
      final double primitiveDoubleFromFloatDoubleValue = new Float(0.1f).doubleValue();

      out.println("Primitive Double from Float: " + primitiveDoubleFromFloat);
      out.println("Reference Double from Float: " + referenceDoubleFromFloat);
      out.println("Primitive Double from FloatDoubleValue: " + primitiveDoubleFromFloatDoubleValue);

      //
      // Using String to maintain precision from float to BigDecimal
      //
      final String floatString = String.valueOf(new Float(0.1f));
      final BigDecimal bdFromFloatViaString = new BigDecimal(floatString);
      out.println("BigDecimal from Float via String.valueOf(): " + bdFromFloatViaString);
   }
}
The output from running the above code is shown in the next screen snapshot. As the output above indicates, the problem of casting float to double prevents one from retaining the desired precision when passing a float directly to the BigDecimal.valueOf(double) method. A String can be used as an intermediary to accomplish this, as shown in the example and as demonstrated in similar fashion in Converting Float to Double in a Not So Common Way. Note that Groovy's heavy implicit use of BigDecimal changes the game a little bit when using Groovy and dynamic typing. I may touch on that in a future blog post. For more details on floating-point issues (and I emphasize "details"), see What Every Computer Scientist Should Know About Floating-Point Arithmetic.
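Because the output screenshot is not reproduced here, the following minimal sketch (not part of the original listing; the class name is hypothetical) recaps the safe conversions; the values in the comments are the typical results of these constructors and are given only as an approximation of what the lost screenshot illustrated.
import java.math.BigDecimal;

public class SafeBigDecimalConversion
{
   public static void main(final String[] arguments)
   {
      // Unpredictable: captures the full binary expansion of the double and prints
      // something like 0.1000000000000000055511151231257827021181583404541015625
      System.out.println(new BigDecimal(0.1));

      // Preferred for double values: prints 0.1
      System.out.println(BigDecimal.valueOf(0.1));

      // Preferred for float values: go through a String so the float's short
      // decimal representation is preserved; prints 0.1
      final float floatValue = 0.1f;
      System.out.println(new BigDecimal(Float.toString(floatValue)));
   }
}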
http://marxsoftware.blogspot.com/2010/01/caution-double-to-bigdecimal-in-java.html
CC-MAIN-2017-34
en
refinedweb
Hi folks, Ned Pyle here. As promised when I left AskDS and MS Support for greener pastures, I’m still in the blogging game – I told you I’d be back! Let’s start things off talking about improvements in Windows Server 2012 and DFS Replication (DFSR). Windows Server 2012 DFSR focuses on reliability and supportability changes based on direct field and MS Support feedback. This release doesn’t contain many new features but is much easier to troubleshoot and is more resilient to environmental issues. In the end, that makes your life easier. And every IT department could use some easier… If this is your daily routine, we can help I can only assume you already know DFSR from all of my old write-ups, so let’s dive into the details. Unexpected shutdown worker progress DFSR uses a per-volume ESE (aka “Jet”) database to track all file changes in replicated folders on their individual volumes. DFSR contains code to attempt graceful and dirty recovery of the database after an unexpected shutdown. Mallikarjun Chadalapaka has a great write-up on dirty shutdown recovery here. Previous OS behavior On detecting a dirty shutdown, DFSR begins a recovery process. This starts with logging event 2212: Event ID=2212 Severity=Warning The DFS Replication service has detected an unexpected shutdown on volume : %2 GUID: %1 If the recovery is successful, DFSR logs event 2214: Event ID=2214 Severity=Informational The DFS Replication service successfully recovered from an unexpected shutdown on volume %2.This can occur if the service terminated abnormally (due to a power loss, for example) or an error occurred on the volume. No user action is required. Additional Information: Volume: %2 GUID: %1 If the recovery is unsuccessful, DFSR logs event 2216: Event ID=2216 Severity=Error The DFS Replication service failed to recover from an unexpected shutdown on volume %2. This can occur if the service terminated abnormally (due to a power loss, for example) or an error occurred on the volume. Recovery will be attempted periodically in %3 seconds. No user action is required. Additional Information: Error: %4 (%5) Volume: %2 Guid: %1 DFSR didn’t log how a recovery was progressing, though. This makes troubleshooting tricky and we found that sometimes customers would think the recovery had hung or halted, and they’d start trying to fix things (perhaps making things worse). Windows Server 2012 behavior Two new event log messages now appear that describe where the internal repair process stands. You now know that DFSR has moved past the detection phase and into the consistency checking and rebuilding phase. Event ID=2218 Severity=Informational The DFS Replication service is in the second step of replication database consistency checks after an unexpected shutdown. The database will be rebuilt if it cannot be recovered. No user action is required. Additional Information: Volume: %2 GUID: %1 Event ID=2220 Severity=Informational The DFS Replication service is in the third step of replication database consistency checks after an unexpected shutdown. Database recovery is currently in progress. No user action is required. Additional Information: Volume: %2 GUID: %1 Just be patient – it will complete. If in doubt, contact Microsoft Support – don’t try to get out and push. Performance registry defaults DFSR contains registry overrides to control behaviors like the number of files to replicate simultaneously, stage simultaneously, etc. Previous OS behavior The default settings in Windows Server 2008 R2 were a bit too conservative. 
After release, we tested tweaked registry settings that resulted in roughly double the performance of default settings: - Tuning replication performance in DFSR (especially on Win2008 R2) – Windows Server 2012 behavior These more aggressive settings are now the default in Windows Server 2012 (if not overridden in the registry by you): - AsyncIoMaxBufferSizeBytes - New default value: 8388608 - RpcFileBufferSize - New default value: 524288 - StagingThreadCount - New default value: 8 - TotalCreditsMaxCount - New default value: 4096 - UpdateWorkerThreadCount - New default value: 32 The allowed ranges are unchanged except for UpdateWorkerThreadCount (see below). UpdateWorkerThreadCount max UpdateWorkerThreadCount controls the number of simultaneously inbound-replicating files to a DFSR server. Previous OS behavior The maximum configurable range in Windows Server 2008 R2 is 64. If you set the maximum allowed value for UpdateWorkerThreadCount to 64, it is possible to see intermittent DFSR service deadlocks. This manifests as a hung service, which for customers is nearly impossible to troubleshoot (you need a debugger and private symbols). Because the issue may not happen for days or weeks, there is no easy way to correlate cause and effect. Windows Server 2012 behavior The maximum value is now 63. Voila! Read Only Domain Controller support for DFS Management Administrators use the DFS Management snap-in (Dfsmgmt.msc) for all graphical configuration of DFSR. Previous OS behavior DFS Management was introduced in Windows Server 2003 Service Pack 1, long before read-only domain controllers (RODCs) existed. It expected all domain controllers to be writable when creating a replication group or any other AD objects. When DFS Management tries to write to an RODC, it fails with an access denied error. This issue has existed since Windows Server 2008, but since RODC usage was lower and RODCs tend to exist mainly in branch offices, we never saw it until much later. Now that RODCs are everywhere, well… Windows Server 2012 behavior DFS Management now requests only writable domain controllers when making DC queries. Read-only disconnected topology detection DFS Management contains a topology checking routine to alert administrators when they have created an incomplete (aka "disconnected") DFS replication topology. A disconnected topology prevents eventual replication of data, leading to divergence, user confusion, and potential data loss. Previous OS behavior A bridged topology of A <-> B <-> C is not flagged as disconnected when B is a read-only replicated folder. Because there is no outbound replication on a read-only member, any files created on A or C will not replicate further than B, so users on A and C will potentially see different versions of files, or no files at all. Windows Server 2012 behavior The topology checker code now understands the bridged read-only replicated folder scenario and appropriately warns you when detected. 4412 conflict event data DFSR uses a series of conflict resolution algorithms to detect file collisions and appropriately handle a winning and losing file. DFSR notes these in a per-collision 4412 informational event log entry. Previous OS behavior The 4412 event did not contain quite enough information to easily troubleshoot unexpected collisions. Windows Server 2012 behavior The 4412 event message now contains an additional field of Partner Member ID that lists the winning server's identity.
Partner Member ID: 2716E4E2-ED01-4285-9137-FACB4EE84C4A You can use DFSRDIAG GUID2NAME to translate that partner GUID into a human-friendly name. Editions restrictions removed There is no Windows Server 2012 Enterprise Edition; instead, you can purchase Windows Server 2012 Standard or Windows Server 2012 Datacenter, which is no longer an OEM-only SKU and exists to provide unlimited virtualization licenses. Previous OS behavior DFSR cross-file Remote Differential Compression (RDC) support ties to the server edition being Enterprise or Datacenter. DFSR Cluster support ties to Enterprise or Datacenter editions as well, through internal checks. Implicitly, DFSR cluster support requires Enterprise and higher because the Failover Cluster features only exist on those editions. Windows Server 2012 behavior All edition checks are removed and Windows Server 2012 has full DFSR capabilities even in Windows Server 2012 Standard. Initial sync to read-only replicated folders with preexisting data Read-only (RO) replicated folders are always non-authoritative and do not allow local changes by use of an IO-blocking filter driver named dfsrro.sys. You are encouraged to pre-seed data before initial sync, meaning that data can already exist when DFSR is configured on two or more servers. Previous OS behavior Windows Server 2008 R2 SP1 introduced a regression (that we recently fixed) where initial sync from Read Write (RW) to RO does not overwrite file differences on the RO. This leads to data inconsistencies in the replication groups, as these differing files will never be right on RO servers unless they are later modified again on the RW. Which rather defeats the purpose of pre-seeding. Windows Server 2012 behavior This is fixed. 🙂 DC port 5722 DFSR uses TCP/IP and RPC to replicate files, and we finally fixed an old scenario where domain controllers differed in port usage from member servers. Previous OS behavior In Windows Server 2008 and Windows Server 2008 R2, a domain controller replicating SYSVOL and/or custom replicated folders with DFSR used TCP port 5722. This was due to a bug I discussed back on AskDS. Windows Server 2012 behavior This is also fixed. Now DCs will operate consistently like member servers, listening on a dynamic port in the 49152 – 65535 range unless you choose to hard code a port. If you have gotten used to 5722 and reaaaaally like using hard-coded ports, you can return to the old behavior with this command: Dfsrdiag.exe staticrpc /port:5722 I doubt the person who takes over your job someday will thank you for it though… Fixed missing DFSR migration event 6806 When using DFSRMIG.EXE to migrate your SYSVOL from using FRS to DFSR, event log entries tell you how things are proceeding and if there are any problems you need to investigate before moving to the next phase. Previous OS behavior In Windows Server 2008 R2, a timing issue could give you an expected warning 6804 with the rather scary message: The DFS Replication service has detected that no connections are configured for replication group Domain System Volume. No data is being replicated for this replication group. Once AD replication and the migration caught up, we should have logged a 6806 event saying everything was fine. But we forgot to. Errp. Windows Server 2012 behavior Now we log that missing 6806 event letting you know that all is well and migration is working. Replicated folder removal and replication Replicated folders are the base of replication and the top level of a content set in DFSR database terms.
Previous OS behavior In Windows Server 2008 R2, removing a replicated folder stopped replication of all other RFs until the removal completed. Windows Server 2012 behavior Now you can remove a replicated folder (thereby causing DFSR to update its DFSR database and stop replicating that content set) and not see other replicated folders pause replication. This keeps a hub server working efficiently when you decide to decommission a branch node. Faster also implicitly means increased reliability, as we are not spending large amounts of time with replication halted. Staging messaging Windows Server 2008 R2 SP1 introduced a little-known hotfix to update the Dfsmgmt.msc wizards for new replication groups and new replication wizards. This provides further guidance around configuring the staging folder quota to prevent performance bottlenecks. This capability is now native to Windows Server 2012. Added support for Dedup, FCI, and DAC file modifications Data Deduplication support We modified the DFSR allowed reparse point replication rules to support replicating the new IO_REPARSE_TAG_DEDUP tag. This type of reparse point tag is part of the new file deduplication system. This isn't truly reparse point replication; the file is "rehydrated" and replicated as a normal file, then put back into its dedup'ed state on the downstream. Slick. - Data Deduplication – - Reparse Point Tags – - DFSR file attribute and data type rules – File Classification Infrastructure support We modified File Classification Infrastructure (FCI) to prevent re-writing unchanged data to the alternate data stream on files during classification passes. This previously caused replication storms in Windows Server 2008 R2. Note: you should still only configure FCI on one server (usually the hub), not multiple servers. - 974774 Files are replicated with DFSR between servers even though file contents are unchanged – Dynamic Access Control Support Changes made to APIs used to access new NTFS data structures for auditing and conditional ACE security required updates to DFSR in Windows Server 2012. Because Windows Server 2008 R2 and older operating systems do not implement these APIs though (and therefore cannot use or display these ACLs) they did not require changes. Therefore, there is no back port required to configure replication between a Windows Server 2008 R2 and Windows Server 2012 replicated folder. But! Microsoft strongly discourages mixed Windows Server 2012 and legacy operating system DFSR. There are significant NTFS security data differences between Windows Server 2012 and earlier operating systems, often to facilitate Dynamic Access Control features. Moreover, any claims-based access configuration will not work consistently in a design that allows users to connect to Windows Server 2008 R2 and Windows Server 2012 versions of a replicated file; one server might grant more or less access than the other. For example, if someone modifies the security of a file on a Win2008 R2 server, DFSR packages that up with the file (this is called "marshalling") and sends it along as-is. When a user attempted to access the file on the Win2012 server, the Claims-based security elements would no longer exist, and the user would be denied access. More troubling, if you were letting users access the data from multiple DFSN-provided shares, they would be calling you with the infamous "it sometimes works and sometimes fails" symptom that drives IT pros batty. However!
Central Access Policies modify individual files and folders to contain a special SID in the tail of the SACL structure when adding the CAP rules the first time. This means that first applying a CAP triggers replication of all folders and files replicated under the auspices of the CAP structure, just like it would with any other security change to the classic DACL. Subsequent changes to the rules of an already-added CAP do not alter the files, however – this is the beauty of Central Access Policy. This means that once replication completes, you can change the security on files without triggering further replication. This is a seriously cool feature if you are a DFSR administrator, and it means once you deploy CAP, further security changes to an existing policy are completely non-intrusive to replication! Ideally, configure CAP and File Classification Infrastructure on the file structure before configuring DFSR; that way you only pay the replication price once during DFSR initial sync. And to reiterate, use Windows Server 2012 on all nodes before deploying DAC. If you need help migrating existing DFSR environments, I recommend this series. It goes without saying that when using Windows Server 2012, CAP/DAC will only be effective if you apply the CAP to all nodes being replicated – otherwise you end up with differing security per node. ReFS DFSR does not support ReFS volumes, as this new file system removes many critical data types used or supported by DFSR, such as streams, sparse files*, compressed files, 8.3 names, extended attributes, etc. * Update Jan 9, 2013 – it turns out (despite what you will read on most of the Internet, including the Build 8 blog) that we added sparse file support to ReFS right at the tail end of development. So it's there. DFSR does not allow you to replicate ReFS volumes. The service checks to make sure you are using NTFS and it will fail, gracefully. You cannot replicate a volume with ReFS locally; the DFSR service will not allow it. Dfsmgmt.msc prevents an administrator from accidentally configuring a ReFS volume. Even if you pre-create the folder and use DFSRADMIN to bypass the check, DFSR prevents replication with event 6404 ("The local path is not the fully qualified path name of an existing, accessible local folder."). The debug log will show error 9225 ("volume was not found"). CSV Just like Windows Server 2008 R2, DFSR in Windows Server 2012 does not support Cluster Shared Volumes (CSV). Autorecovery Disabled Just like Windows Server 2008 R2, DFSR in Windows Server 2012 includes the database autorecovery change: - KB 2663685 – Changes that are not replicated to a downstream server are lost on the upstream server after an automatic recovery process occurs in a DFS Replication environment in Windows Server 2008 R2 – Complex nested folder creation-deletion-replication fix Just like Windows Server 2008 R2, DFSR in Windows Server 2012 includes the latest reliability changes for handling complex nested file and folder creation and deletion on partner nodes: - KB 2450944 – Some folders or files are unexpectedly deleted on the upstream server after you restart the DFS Replication service in Windows Server 2003 R2, in Windows Server 2008 or in Windows Server 2008 R2 – File creation conflict algorithm Windows Server 2012 changes the disparate-file conflict resolution algorithm previously used from first creator wins to last creator wins, in order to be more consistent. For more information about this topic, see this article.
Keep alive support added for huge files Windows Server 2012 now correctly allows very large (many many GB) files to complete computation of RDC signatures before the RPC server connection times out. In prior OSes the file would never replicate due to timing constraints. This mainly happened with files that were hundreds of GB. But! 64GB files are still the supported maximum. So this is us being nice and helping you in a scenario that is technically, still unsupported. As a final note: I didn’t include all the fixes released as updates to Windows Server 2008 R2 that are also part of Windows Server 2012, just the more interesting ones. So as a rule of thumb, if you got a hotfix for Win2008 R2 before Win2012 RTM’ed, the latter has the update built-in. And that’s it. Nice, eh? – Ned “it’s all good” Pyle Join the conversationAdd Comment @ NedPyle – Yes, the ability to be notified about conflicts at the point of detection with an easily configured email telling the who, what, where, and when would be very beneficial (rather than having to configure Tasks with scripts for Event 4412), as well as being able to resolve conflicts directly in a GUI. Although we're not a large enterprise operation, the key point to us is we don't want to risk having a very important business doc overwritten and we only find out after the fact, and then we have to go try digging it out of the C&D folder. Even if that scenario doesn't happen often, it's more about the quality of the DFSR service over the quantity of conflicts that happen… Gents, I still do not understand why you cannot disable encryption and compression on DFSR. Reason is that modern WAN Optimizers can do the job a hell of a lot better than you can. So enforcing encryption and compression is sticking your head into the sand and pretend that you have the best solution. Wake up gents, there are far better solutions to replicate data over the WAN. So I would suggest to add an option to disable Encryption and Compression to allow better products to take over the transport so the network guys can do their stuff (accelerate, dedupe, QoS, etc.) and know what latency is. That is where it belongs…. Good write up. +1 for file-locking – Service Pack 1 yea? 🙂 Great post! Thanks, very informative! @ Noel – yes, you'd just follow the steps here: blogs.technet.com/…/series-wrap-up-and-downloads-replacing-dfsr-member-hardware-or-os.aspx. Remember the point above that you should not deploy claims-based access/central access policy to these machines until you have all Win2012 though. @Taylorbox – Not most, but I understand where you're coming from. This is the iterative process – things take time, and most* of your requests are well-known and desirable; i.e. no one is arguing that they are bad ideas or that we don't want to do them, only that we have finite resource here. A lot of development energy went into other (massive) technologies in Win2012 and that left DFSR a bit starved this last go-around. *I've not heard this one before: "Where's the GUI for resolving conflict files?" Can you explain this one to me in more detail? Do you mean restoring files from C&D/Preexisting, like with restoredfsr.vbs, or something else? @ NedPyle – Thanks for replying. I had already supposed that development resources for DFSR didn't get much love this last go-around…despite the billions Microsoft has available. 
:o) Third-party cross-server file replication/syncing software such as SureSync and PeerLink offer the following features for conflict files: -A GUI that admins can access to easily see which files are in conflict, who edited them, when, and which offers conflict resolution options -Email alerts for conflicts, which provide the information mentioned above (so much simpler than having to work with the Event logs) -The option to not replicate conflict files until an admin manually intervenes Such a GUI and features would equip admins with the tools necessary to handle conflicts much more easily than having to dig into the Conflict & Deleted folder and sort through the Event logs. I actually find these "improvements" quite disappointing. These are just bug fixes and performance tweaks, most of which can be applied to Windows Server 2008 R2. Where's the GUI for resolving conflict files? Where's cross-server file locking so that we don't have to use third-party apps like PeerLock? Why wasn't the 64 GB file size limit increased? Why can't we use full folder paths for subfolder and file filters? Those would be highly useful, very practical real-world improvements for DFSR. Businesses were hoping to get these kinds of improvements with Windows Server 2012… @NedPyle[MSFT] Further to what Ed Swindelles was asking, that 10TB hard cap is KILLING me. This is a chance for Microsoft to really make a definitive feature statement that would elevate them above software vendors like Vision Solutions DoubleTake. Even the ability to create a DFS namespace up to 10TB *per virtualized storage pool entity* on a single server would be a step forward here. We can have multiple storage pools, so why not allow us to have discrete DFS replication limits per pool instead of per server? I've got a deal on the stove right now that will probably get tossed because the reseller and distributor were unaware of the 10TB limit on DFS and didn't engage an HP Storage SA (like myself) early on when I could've suggested something else (low end SAN w/remote replication), rather than now when the products have already shipped. They shipped with WSS2008R2, but I could probably save the deal if I could say the problem would go away with WSS2012 next year… Holy cow, lots of behind the scenes stuff. Great writeup Ned! Excellent. That is great food for thought, Taylorbox. Thanks everyone. 🙂 Great post, thanks for the update on tall the new features. My question is similar to Noels in regards to 2012 in a 2008 DFSR environment. We have a 2 x 2008R2 servers and a 2003R2 Server all in a replication group. I replacing one of the 2008R2 servers and putting in a 2012 server. 1. Is it ok to have a 2012 Server in a replication group with these older OS's as long as Dynamic Access Control is not utilized? 2. Is there any other reasons why a 2012 server should not be in the same replication group as older OS's? Thanks! I hear you, The_Rob_HP. This is the top priority for us (no exaggeration). Hey Ned! Good to see you back posting articles, and another good one at that! I imagine you aren't surprised by this comment, but is there any new PowerShell fun available in 2012 for DFS-N and DFSR? Also, while we are on the discussion of management features to add, it would be nice to have a fast and easy measure of backlog. Either that or some simple way to know if your replication is up to date, or roughly how far behind it is. Although I do understand that there is a good amount involved with version vectors, etc to get the information quickly. 
In any case, something that quickly and clearly shows when things are "stuck" would be awesome. Any new guidance on scaling limits? This article (technet.microsoft.com/…/cc773238(v=ws.10).aspx) seems to indicate not, but just curious: "The following list provides a set of scalability guidelines that have been tested by Microsoft on Windows Server 2012 Windows Server 2008 R2 and Windows Server 2008: Size of all replicated files on a server: 10 terabytes. Number of replicated files on a volume: 11 million. Maximum file size: 64 gigabytes." Howdy Ryan and Steve! Glad to see you tracked me down. Yep, there are new DFSN Psh cmdlets – see blogs.technet.com/…/introducing-dfs-namespaces-windows-powershell-cmdlets.aspx Not DFSR ones though. The thought around backlog performance is a good one and never far from my mind. The ReFS case is a very interesting one (more here if people want to read: technet.microsoft.com/…/hh831724.aspx & blogs.msdn.com/…/building-the-next-generation-file-system-for-windows-refs.aspx). It's not proof of concept as its truly supported and projected, but for the reasons you raised it's not really a replacement for NTFS – more of a different workload file system. More application centric, where the apps have their own built in smarts to deal with things and just want a very fast, large, reliable disk system. It will be interesting to see it evolve. Plenty of culture shock – for my peers! I am a tornado of awesome! ;-P @ Ed Swindelles We're thinking very hard about this. 🙂 We recently changed the "Number of replicated files on a volume:" to 11 million from the previous 8 million, based on some internal testing. Otherwise, for now, it's business as usual. If this changes I will be sure to write a new post and sing it loud as this is by far our most common request these days. @Caleb44 1. Yes, it's ok. 2. Nope, other than the confusion it sometimes raises with larger environments and mixed-experience admins, where some features will not be useable like cluster, read-only, etc. @Bjkamp Those are good points, even if presented a bit brusquely. 🙂 We're not pretending anything; we just predate those other technologies being popular and didn't have to care about such things in 2002 when DFSR was being designed. Everyone always thinks its a conspiracy or that we believe we're better than anyone. 🙂 Trust me, neither is the case. You absolutely can turn off both DFSR RDC and marshaled file XPRESS compression – there are no other kinds of DFSR compression. You definitely cannot turn off RPC packet privacy in DFSR though, I agree. As for whether those other vendors do as good a job compressing traffic as RDC does with differential replication, it really depends on the scenario and how files are modified. It's not an "always" scenario for DFSR versus the third parties I've seen. Bottom line: in order to meet the security requirements of Windows and DFSR as well as avoid accidental 'de-securing' of the file data streams over the wire, it is not possible to disable the RPC encryption mechanism in DFSR. This was under discussion years back (we even have an AD attribute to do it – look at "ms-DFSR-DisablePacketPrivacy") and ultimately safety won out. I'm always interested in hearing more feedback for future releases, though; nothing is ever set in stone and I never stop listening. @ at all – sorry for the delay in response to these comments. I was on Thanksgiving duty last week. @ JB – the 11 million is per volume as a whole, and simply what we have tested and certified as "supported". 
But I have seen much higher, and there is no specific cap. The size and update rate of files is a much bigger factor for performance – smaller and less = better. @ Bill – unfortunately, no, and I understand what you mean exactly. This is certainly something we have thought about; one of the biggest requesters of this functionality is Microsoft's internal IT! @ Looking – that Windows Server 2003 Technical Reference was lightning in a bottle. It has never been rewritten again for later technologies. You'll have to rely on smaller specific articles from TechNet, FileCab, AskDS and other sources to fill in those later gaps, I'm afraid. Hi Bob, Correct – DFSR will not need to scan. It is alerted to changes by NTFS (via the USN journal) and stores all of its pending/completed replication info in its Jet database. The IOPS cost with DFSR is always in the staging/chunking phase, where RDC is used on files over 64KB to send only the differences. I don't know what the IOPS average would be in that case, but it should not be crushingly high as it is throttled on Win2012 to 8 simultaneous staging threads (by default). My pleasure – the whole idea here with Comments is to have a dialogue. Even when you are disappointed. ;-P That's a useful case to understand. So in technologies where conflicts are unexpected (as they are not DFSR-style multi-master) you like having the ability to resolve conflicts at the point of detection. What size of dataset and churn are you seeing in your environment with this setup. I.e. how many conflicts do you have to mediate a day, on average? When would it be too many? I agree that the C&D folder as implemented is not useful for 99% of customers, since it has no way to extract data. Does Win2012 allow you to chain/link read-only members with each other yet? This would help minimize bandwidth utilization across WAN links for some scenarios while also keeping the potential for unwanted changes and/or conflicts out of the equation. As things stand with Win2008R2 you have to deploy updateable members between locations or multiply your replication traffic across WAN links if you want to have multiple read-only members at secondary locations. Make sense? Top quality post, sir! Good to see you back, Ned. Experienced any culture shock since moving from one end of the country to the other, both literally and figuratively? 🙂 I saw the "roughly doubled DFS performance by tweaking some registry values" bit nonchalantly tucked away in there. Definite eyebrow raiser! Also, interesting that DFS and ReFS don't mix. Nor can ReFS be used for the Windows OS drive. So I'm left wondering when I should recommend it. I understand the updated resiliency and reliability mechanisms, but I don't see NTFS being deficient or failure-prone enough in comparison to merit the switch and introducing the extra complexity of another new file system in the environment. That sounds like just the kind of blog post I might here… Thanks again! @Ryan Ries Good point, and don't forget that ReFS currently doesn't play nice with Dynamic Access Control either. I too would be interested in when ReFS should be recommended. Understanding that it is new and likely go get more compatible as time passes of course. thanks for the reply @NedPyle. It's nice to at least know it's on somebody's plate, and to have a place to vent a little. 😀 Second that – great post! I'm looking to update a client to 2012 mainly for the Hyper-V improvements and DFSR improvements. 
Now I'm just tossing up whether to use DFSR for a chunk of the offsite backup still or just switch to Hyper-V replica. (they still have onsite backups – I just like to have belts & braces) 😉 Excellent post. Thanks. Can a 2012 HUB be introduced into an existing 2008R2 environment? I've a new file server to through into a country and will make that 2012, would be ideal to have that replicate to a 2012 HUB here at Head Office, where we already have 2008R2. Hello Ned, A request here, a cap in hand dickensian, please sir, tear in my eye, request. Please please please add an option for file locking. I completely appreciate file locking kills the redundancy but we really need it as an option for all our customers. Maybe buy the technology off one of the 3rd party companies offering the file locking solutions? Many thanks! Huge +1 for file locking! Hi, I am busy with an implementation and wanted to check something. The limit for a replicated volume is 11 million files, am I right in saying that is for the entire drive and not just a folder? I need to replicate a drive with 20 million files. Thanks There's a great document available on Technet called "How DFS Works", but it was last updated in 2003. Is it possible there will be an updated version of this posted that includes all the changes up to the current version? Ned I have a question. I have a customer that is using windows , not sure if it is W2K3 or W2K8 for NFS mount points, and then using rsync to replicate data to another site. their challenge is that rsync has to scan all the files for changes before it can replicate. This is generating around 2000 IOPs during the scan. It seems in some of the above blog comments that 2012 WSS uses Jet to maintain a catalog of changes, so does not have to scan the drive all files every time it replicates. If the JDB resides on the OS drive, no more 2000 IOPs to do a scan for changes? The technology we want to position is HP X1830 running 2012 WSS and FC connected to 3Par storage. Thanks in advance for any feedback, or really good jokes! Very informative article. Thanks Ned! This post is a part of the nine-part “ What’s New in Windows Server & System Center 2012 Pingback from 70-411 Administering Windows Server 2012 Overview | WinPC.TV Hi folks, Ned Pyle here again. We are occasionally asked which reparse points DFS Replication can handle Hey Ned, any improvements in 2012 or 2012 R2 with respect to the number of replicated folders/replication groups on a single volume? At what point does it make sense to create a new volume to accommodate additional replication groups on the same server? Nothing special. The new volume makes sense when you start feeling too vulnerable to a single database problem preventing replication of ‘too much’ data. Whatever too much means is up to you. 🙂 Hi Ned, Can you please forward me any article to troubleshoot DFSR replication and monitoring? We do not use SCCM unfortunately. Thanks Abu After using DFSR on 2012r2 for the past while, I’m seeing that the 63 worker threads is not nearly enough. We’re not CPU/Memory/Network constrained, why are we limited to 63? Why not much more? 1023?
https://blogs.technet.microsoft.com/filecab/2012/11/12/dfs-replication-improvements-in-windows-server-2012/
CC-MAIN-2017-34
en
refinedweb