Dataset columns:
text: string, lengths 20 to 1.01M
url: string, lengths 14 to 1.25k
dump: string, lengths 9 to 15
lang: string, 4 classes
source: string, 4 classes
ServiceStack is a light-weight, complete, and independent open-source web framework for .NET. I recently started playing with it and I must say that it is an awesome framework. It has several nice features, including .NET's fastest JSON serializer. Each piece of ServiceStack can be used independently, and that includes its serialization piece. The serialization package of ServiceStack can be installed via NuGet using the following command:

Install-Package ServiceStack.Text

Let's define a simple Person class:

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
    public string Occupation { get; set; }
}

Following is a sample object of the Person class:

var person = new Person() { Id = 1, Name = "Ravi", City = "Hyderabad", Occupation = "Software Engineer" };

To serialize this object to a JSON string, we need to use the ServiceStack.Text.JsonSerializer class. The following statement serializes the above object:

var serialized = JsonSerializer.SerializeToString(person);

The above string can be de-serialized using the following statement:

var converted = JsonSerializer.DeserializeFromString<Person>(serialized);

The JsonSerializer class also has APIs to serialize into or de-serialize from TextWriter or Stream. The APIs in ServiceStack are light-weight and easy to use. I am working on a series of articles on this great framework. Stay tuned for updates. Happy coding!
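To round out the TextWriter/Stream point above, here is a small sketch. The SerializeToWriter, SerializeToStream, and DeserializeFromStream names follow ServiceStack.Text's static JsonSerializer API; treat the exact overloads as assumptions if your version differs.

using System.IO;
using ServiceStack.Text;

class Program
{
    static void Main()
    {
        var person = new Person { Id = 1, Name = "Ravi", City = "Hyderabad", Occupation = "Software Engineer" };

        // Serialize to a TextWriter
        using (var writer = new StringWriter())
        {
            JsonSerializer.SerializeToWriter(person, writer);
        }

        // Serialize to, and deserialize from, a Stream
        using (var stream = new MemoryStream())
        {
            JsonSerializer.SerializeToStream(person, stream);
            stream.Position = 0;
            var restored = JsonSerializer.DeserializeFromStream<Person>(stream);
        }
    }
}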
https://dzone.com/articles/serializing-and-de-serializing
CC-MAIN-2017-22
en
refinedweb
The ScrollWindow widget (#include <FXScrollWindow.h>) manages a single child window and scrolls it when the child is larger than the available area. You can use ScrollWindow when parts of your user interface need to be scrollable, for example when applications may need to run on small screens. ScrollWindow normally contains only a single child window, which could be a VerticalFrame or any other widget. It will measure this widget using getDefaultWidth() and getDefaultHeight() and place the scrollbars when needed, based on options like HSCROLLING_ALWAYS, etc., and the options of the child window. ScrollWindow observes some layout hints of its child window: LAYOUT_FIX_WIDTH and LAYOUT_FIX_HEIGHT are observed at all times, while LAYOUT_FILL_X, LAYOUT_LEFT, LAYOUT_RIGHT, LAYOUT_CENTER_X, as well as LAYOUT_FILL_Y, LAYOUT_TOP, LAYOUT_BOTTOM, LAYOUT_CENTER_Y are only observed if the child window size is smaller than the ScrollWindow's viewport size. If the content size is larger than the viewport size, the content is scrolled as usual. Note that this means that the child window's position is not necessarily equal to the scroll position of the scroll window!
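As a minimal illustration, here is a hedged C++ sketch of the typical setup; constructor signatures are abbreviated and default arguments omitted, so check the FOX headers for the exact parameters:

#include "fx.h"

int main(int argc, char* argv[]) {
  FXApp app("ScrollDemo", "FoxTest");
  app.init(argc, argv);
  FXMainWindow* main = new FXMainWindow(&app, "Scroll Window Demo", NULL, NULL, DECOR_ALL, 0, 0, 300, 200);
  // The scroll window manages exactly one child...
  FXScrollWindow* scroll = new FXScrollWindow(main, LAYOUT_FILL_X | LAYOUT_FILL_Y);
  // ...here a fixed-size frame larger than the viewport, so scrollbars appear.
  new FXVerticalFrame(scroll, LAYOUT_FIX_WIDTH | LAYOUT_FIX_HEIGHT, 0, 0, 600, 800);
  app.create();
  main->show(PLACEMENT_SCREEN);
  return app.run();
}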
http://fox-toolkit.org/ref/classFX_1_1FXScrollWindow.html
CC-MAIN-2017-22
en
refinedweb
3. Library calls (functions within program libraries)

ASPRINTF
Section: Linux Programmer's Manual (3)
Updated: 2017-09-15

NAME
asprintf, vasprintf - print to allocated string

SYNOPSIS
#define _GNU_SOURCE /* See feature_test_macros(7) */
#include <stdio.h>

int asprintf(char **strp, const char *fmt, ...);
int vasprintf(char **strp, const char *fmt, va_list ap);

DESCRIPTION
The functions asprintf() and vasprintf() are analogs of sprintf(3) and vsprintf(3), except that they allocate a string large enough to hold the output including the terminating null byte ('\0'), and return a pointer to it via the first argument. This pointer should be passed to free(3) to release the allocated storage when it is no longer needed.

RETURN VALUE
When successful, these functions return the number of bytes printed, just like sprintf(3). If memory allocation wasn't possible, or some other error occurs, these functions will return -1, and the contents of strp are undefined.

SEE ALSO
free(3), malloc(3), printf(3)

COLOPHON
This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
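A short usage example (not part of the man page; a minimal sketch of the documented behavior):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *s;
    int n = asprintf(&s, "pi is roughly %.2f", 3.14159);
    if (n == -1) {
        /* allocation or formatting failed; per the man page, the contents of s are undefined here */
        return EXIT_FAILURE;
    }
    printf("%s (%d bytes)\n", s, n);
    free(s); /* the caller owns the allocated string */
    return EXIT_SUCCESS;
}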
https://eandata.com/linux/?chap=3&cmd=asprintf
CC-MAIN-2020-16
en
refinedweb
Rust OpenCV bindings

Experimental Rust bindings for OpenCV 3 and 4. The API is usable but unstable and not very battle-tested; use at your own risk.

Quickstart

Make sure the supported OpenCV version (3.2, 3.4 or 4.2) is installed on your system.

Update your Cargo.toml:

opencv = "0.33"

Select the OpenCV version if different from the default in Cargo.toml:

opencv = {version = "0.33", default-features = false, features = ["opencv-34"]}

And enable usage of contrib modules:

opencv = {version = "0.33", features = ["contrib"]}

Import the prelude:

use opencv::prelude::*;

When building on Windows and macOS you must enable the buildtime-bindgen feature to avoid link errors:

opencv = {version = "0.33", features = ["buildtime-bindgen"]}

Getting OpenCV

Linux

You have several options for getting the OpenCV library:

- install it from the repository; make sure to install the -dev packages because they contain headers necessary for the crate build (also check that your package contains pkg_config files)
- build OpenCV manually and set up the following environment variables prior to building the project with the opencv crate:
  PKG_CONFIG_PATH for the location of *.pc files
  LD_LIBRARY_PATH for where to look for the installed *.so files during runtime

Windows package

Installing OpenCV is easy through the following sources:

- from chocolatey; also install the llvm package, it's required for building: choco install llvm opencv
  Also set the OPENCV_LINK_LIBS, OPENCV_LINK_PATHS and OPENCV_INCLUDE_PATHS environment variables (see below for details).
- from vcpkg; also install the llvm package, necessary for building: vcpkg install llvm opencv4[contrib,nonfree]

macOS package

Get OpenCV from homebrew; be sure to also install llvm and pkg-config, which are required for building: brew install llvm pkg-config opencv

Manual build

You can of course always compile OpenCV of the version you prefer manually. This is also supported, but it requires some additional configuration. You need to set up the following environment variables to point to the installed files of your OpenCV build: OPENCV_LINK_LIBS, OPENCV_LINK_PATHS and OPENCV_INCLUDE_PATHS (see below for details).

Troubleshooting

One of the common problems is link errors at the end of the build. Try building with the buildtime-bindgen feature enabled (requires installed clang/llvm); it will recreate the Rust and C++ files to match the version you have installed. Please be sure to also set up the relevant environment variables that will allow the linker to find the libraries it needs (see below).

You're getting runtime errors like:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { code: -215, message: "OpenCV(4.2.0) /build/opencv/src/opencv-4.2.0/modules/highgui/src/window.cpp:384: error: (-215:Assertion failed) size.width>0 && size.height>0 in function \'imshow\'\n" }', src/libcore/result.rs:1188:5

thread 'extraction::tests::test_contour_matching' panicked at 'called `Result::unwrap()` on an `Err` value: Error { code: -215, message: "OpenCV(4.1.1) /tmp/opencv-20190908-41416-17rm3ol/opencv-4.1.1/modules/core/src/matrix_wrap.cpp:79: error: (-215:Assertion failed) 0 <= i && i < (int)vv.size() in function \'getMat_\'\n" }', src/libcore/result.rs:1165:5

These errors (note the .cpp source file and the Error return value) are coming from OpenCV itself, not from the crate. It means that you're using the OpenCV API incorrectly, e.g. passing incompatible or unexpected arguments.
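For example, the imshow assertion above fires when the image is empty, typically because the file failed to load. A hedged sketch of guarding against it (module paths and signatures assumed per the 0.33-era crate; check the generated docs for your version):

use opencv::{highgui, imgcodecs, prelude::*, Result};

fn main() -> Result<()> {
    // imread returns an empty Mat (not an Err) when the file can't be read
    let img = imgcodecs::imread("photo.jpg", imgcodecs::IMREAD_COLOR)?;
    if img.empty()? {
        // empty() may be infallible in some crate versions; drop the ? then
        eprintln!("could not load photo.jpg");
        return Ok(());
    }
    // safe now: imshow asserts size.width > 0 && size.height > 0
    highgui::imshow("preview", &img)?;
    highgui::wait_key(0)?;
    Ok(())
}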
Please refer to the OpenCV documentation for details.

You're getting errors that methods don't exist or are not implemented for specific structs, but you can see them in the documentation and in the crate source. Be sure to import use opencv::prelude::*;. The crate contains a lot of traits that need to be imported first. Also check, if you're using a contrib module, that the contrib feature is enabled for the crate.

Reporting issues

If you still have trouble using the crate after going through the Troubleshooting steps, please feel free to report it to the bugtracker. When reporting an issue please state:

- Operating system
- The way you installed OpenCV: package, official binary distribution, manual compilation, etc.
- OpenCV version
- Attach the full output of the following command from your project directory: RUST_BACKTRACE=full cargo build -vv

Environment variables

The following variables must be set when building without pkg_config or vcpkg. You can also set them on other platforms; then pkg_config or vcpkg usage will be disabled and the set values will be used.

OPENCV_LINK_LIBS: comma-separated list of library names to link to. The .lib, .so or .dylib extension is optional. If you specify the ".framework" extension then the build script will link a macOS framework instead of a plain shared library. E.g. "opencv_world411".

OPENCV_LINK_PATHS: comma-separated list of paths to search for libraries to link. E.g. "C:\tools\opencv\build\x64\vc14\lib".

OPENCV_INCLUDE_PATHS: comma-separated list of paths to search for system include files during compilation. E.g. "C:\tools\opencv\build\include". One of the directories specified therein must contain an "opencv2/core/version.hpp" or "core/version.hpp" file; it's used to detect the version of the headers.

The following variables are optional, but you might need to set them under some circumstances:

OPENCV_HEADER_DIR: during the crate build it uses OpenCV headers bundled with the crate. If you want to use your own (system) headers, supply the OPENCV_HEADER_DIR environment variable. The directory in that environment variable should contain the opencv2 dir, e.g. set it to /usr/include for OpenCV-3.4.x or /usr/include/opencv4 for OpenCV-4.x.

OPENCV_PACKAGE_NAME: in some cases you might want to override the pkg-config or vcpkg package name; you can use this environment variable for that. If you set it, pkg-config will expect to find the file with that name and the .pc extension in the package directory, and vcpkg will use that name to try to find the package in the packages directory under VCPKG_ROOT. For legacy reasons, OPENCV_PKGCONFIG_NAME is also supported as this variable name.

The following variables affect the building of the opencv crate, but belong to external components:

PKG_CONFIG_PATH: where to look for *.pc files; see man pkg-config. The path specified here must contain opencv.pc or opencv4.pc (for OpenCV 4.x).

VCPKG_ROOT and VCPKGRS_DYNAMIC: the root of the vcpkg installation and the flag allowing use of *.dll libraries; see the documentation for the vcpkg crate.

LD_LIBRARY_PATH: on Linux it sets the list of directories to look for the installed *.so files during runtime. The Linux documentation has more info. The path specified here must contain libopencv_*.so files.

DYLD_LIBRARY_PATH: similar to LD_LIBRARY_PATH, but for loading *.dylib files on macOS; see man dyld for more info. The path specified here must contain *.dylib files.

PATH: Windows searches for *.dlls in PATH among other places; be sure to set it up, or copy the required OpenCV *.dlls next to your binary.
Be sure to specify paths in UNIX style (/C/Program Files/Dir) because a colon in PATH might be interpreted as the entry separator.

clang crate environment variables: see that crate's README.

Cargo features

opencv-32 - build against OpenCV 3.2.0; this feature is aimed primarily at stable Debian and Ubuntu users who can install OpenCV from the repository without having to compile it from source
opencv-34 - build against OpenCV 3.4.x
opencv-4 (default) - build against OpenCV 4.x
contrib - enable the usage of OpenCV contrib modules for the corresponding OpenCV version
buildtime-bindgen - regenerate all bindings; requires installed clang/llvm and should only be used during crate development or when building on Windows or macOS. With this feature enabled the bundled headers are no longer used for the code generation; the ones from the installed OpenCV are used instead
docs-only - internal usage, for building docs on docs.rs

API details

API documentation is automatically translated from OpenCV's doxygen docs. Most likely you'll still want to refer to the official OpenCV C++ documentation as well.

OpenCV version support

The following OpenCV versions are supported at the moment:

- 3.2 - enabled by the opencv-32 feature
- 3.4 - enabled by the opencv-34 feature
- 4.2 - enabled by the default opencv-4 feature

If you need support for contrib modules, also enable the contrib feature.

Minimum rustc version

Generally you should use the latest stable rustc to compile this crate.

Platform support

Currently the main development and testing of the crate is performed on Linux, but other major platforms are also supported: macOS and Windows. For some more details please refer to the CI build scripts: Linux OpenCV install, macOS OpenCV install as framework, macOS OpenCV install via brew, Windows OpenCV install via Chocolatey, Windows OpenCV install via vcpkg, Test runner script.

Functionality

Generally the crate tries to only wrap the OpenCV API and provide some convenience functions to make it easier to use from Rust. We try to avoid adding any functionality besides that.

Errors

Most functions return a Result to expose a potential C++ exception, although some methods, like property reads or functions that are marked CV_NOEXCEPT in the OpenCV headers, are infallible and return a naked value.

Properties

Properties of OpenCV classes are accessible through setters and getters. Those functions are infallible; they return the value directly instead of Result.

Infallible functions

For infallible functions (like setters) that accept &str values the following logic applies: if a Rust string passed as an argument contains a null byte, then this string will be truncated up to that null byte. So if, for example, you pass "123\0456" to the setter, the property will be set to "123".

Callbacks

Some API functions accept callbacks, e.g. set_mouse_callback. While it's currently possible to successfully use those functions, there are some limitations to keep in mind. The current implementation of callback handling keeps hold of the passed callback argument forever. That means that the closure used as a callback will never be freed during the lifetime of the program and, moreover, Drop will not be called for it (they are stored in a global static Slab). There is a plan to make it possible to free at least some of the closures.

Unsafety

Although the crate tries to provide an ergonomic Rust interface for OpenCV, don't expect Rust safety guarantees at this stage. This is especially true for borrow checking and shared mutable ownership.
A notable example would be Mat, which is a reference-counted object in its essence. You can own a seemingly separate Mat in Rust terms, but it's going to be a mutable reference to the other Mat under the hood. Treat the safety of the crate's API as you would treat that of C++: use clone() when needed.

Contrib modules

The following modules require opencv_contrib installed:

- aruco
- bgsegm
- bioinspired
- ccalib
- cvv
- dnn (only for OpenCV 3.2)
- dpm
- face
- freetype
- fuzzy
- hdf
- img_hash
- line_descriptor
- phase_unwrapping
- plot
- sfm
- shape
- structured_light
- superres
- surface_matching
- text
- videostab
- viz
- xfeatures2d
- xobjdetect
- xphoto

Missing modules and functions

While most of the API is covered, for various reasons (that might no longer hold) there are modules and functions that are not yet implemented. If a missing module/function is near and dear to you, please file an issue (or better, open a pull request!).

CUDA is not supported at the moment, but is definitely on the roadmap. You can use OpenCL for now.

The binding strategy

This crate works similarly to the model of the Python and Java OpenCV wrappers: it uses libclang to parse the OpenCV C++ headers, generates a C interface to the C++ API, and wraps the C interface in Rust.

All the major modules in the C++ API are merged together in a huge cv:: namespace. We instead made one Rust module for each major OpenCV module. So, for example, C++ cv::Mat is opencv::core::Mat in this crate.

The method and field names have been snake_cased. Method arguments with default values lose those defaults, but they are reported in the API documentation. Overloaded methods have mostly been manually given different names or automatically renamed to *_1, *_2, etc.

OpenCV 2 support

If you can't use OpenCV 3.x or higher, the (no longer maintained) 0.2.4 version of this crate is known to work with OpenCV 2.4.7.13 (and probably other 2.4 versions). Please refer to the README.md file for that version, because the crate has gone through a considerable rewrite since.

Contributor's Guide

The binding generator code lives in a separate crate under binding-generator, so that users of the crate don't have to handle the code generation overhead in their builds. When developing this crate, you can test changes to the binding generation using cargo build -vv --features buildtime-bindgen. When changing the binding-generator, be sure to push changes to the generated code!

If you're looking for things to improve, be sure to search for todo and fixme labels in the project source; those usually carry a comment about what exactly needs to be fixed.

The license for the original work is MIT. Special thanks to ttacon for yielding the crate name.
https://lib.rs/crates/opencv
CC-MAIN-2020-16
en
refinedweb
Type-Based Global Events in Vue.js

In this article, we discuss how to create type-based global events in Vue.js to better manage issues commonly seen in dynamically-typed languages.

In one of my latest freelance projects, my client preferred Vue.js, which has recently become super popular on the frontend side. So, I dove into Vue. I can say that it is very practical and effective. Besides, when we compare it with other predominant competitors like Angular and Aurelia, we can easily notice that Vue has a very small learning curve. However, it didn't take long for me to get the feeling that my code was becoming unmanageable. This wasn't a big surprise to me, because this is often the trade-off with dynamically-typed languages. Today, I am going to show an effective way of using global events in Vue.

A Simple Event Bus in Vue

The typical way of implementing a global event bus in Vue is just using the Vue object itself:

// create a Vue instance somewhere you can access globally
let eventBus = new Vue()

// register an event handler
eventBus.$on("onAppStarted", () => console.log("App Started!"))

// publish an event
eventBus.$emit("onAppStarted")

Super easy, right?

The Problem Coming From Strings

As soon as our application grows beyond a couple of components, sooner or later we start struggling to follow which components publish events and which others listen to them. We can imagine how hard it is to identify a simple typo in a string-based event name, especially in a large project:

eventBus.$on("onApStarted", () => {
  // notice the typo in the event name, it should be "onAppStarted"
})

Implicit Event Parameters

This is another problem we should notice: because we don't define any corresponding type or interface for our event, only God knows what and how many parameters might be in our event. In order to identify them, we have to do these kinds of tests:

eventBus.$on("onAppStarted", (...args) => {
  args.forEach(e => console.log(e))
})

A Proper Solution Comes From ES6+

As a fan of the statically-typed Java world, I prefer using types explicitly unless it's super unconventional for the specific language. Thus, I will show a solution that gets rid of these string-based event names by using the capabilities that ECMAScript 6 and later offer.

Defining Event Types

Let's create a separate file to define our event types:

/**
 * Event type to publish when app loads
 * ProducedBy: components/preload.js
 * ConsumedBy: App.vue, views/MainPanel.vue
 **/
export class AppStartEvent {

  constructor(){
    // An event type with no arguments
  }
}

/**
 * Event type to publish when code changes
 * ProducedBy: views/CodePanel.vue
 * ConsumedBy: views/MainPanel.vue
 * @param {object} editor editor instance
 * @param {string} code changed code value inside the editor
 **/
export class CodeChangeEvent {

  constructor(editor, code){
    this.editor = editor
    this.code = code
  }
}

As we can see, defining event type classes with their parameters explicitly in the constructor gives us great readability. Although it is optional, we recommend keeping the comments updated. This provides a way to follow the components that deal with a certain event type.
Importing Event Types

To use our type-based events, we can easily import them into our components:

import {AppStartEvent, CodeChangeEvent} from "@/app-events"

As we explicitly specify every event type we need, this brings us another important benefit: we can easily identify which events are involved in a component.

Registering an Event

In order to register our event, we simply use our event types and their static name properties:

import {AppStartEvent} from "@/app-events"

eventBus.$on(AppStartEvent.name, () => console.log("App Started!"))

Plus, we can expect the event type instance itself as a single argument instead of multiple arguments:

import {AppStartEvent, CodeChangeEvent} from "@/app-events"

// we can access the event type instance as a single argument
eventBus.$on(AppStartEvent.name, event => console.log(event))

// we can also access the event parameters
eventBus.$on(CodeChangeEvent.name, event => {
  console.log(event.editor)
  console.log(event.code)
})

Publishing an Event

Now, we can publish our events simply by creating a new instance of that event type:

// no parameters
eventBus.$emit(AppStartEvent.name, new AppStartEvent())

// with parameters
eventBus.$emit(CodeChangeEvent.name, new CodeChangeEvent(editor, "some code here..."))

Implementing a Wrapper Class

Certainly, we may proceed to define a class, EventBus, and wrap the basic methods of the Vue instance:

class EventBus {

  $eventBus = new Vue()

  listen (eventClass, handler) {
    this.$eventBus.$on(eventClass.name, handler)
  }

  publish (event) {
    this.$eventBus.$emit(event.constructor.name, event)
  }
}

Therefore, we can use it in a more practical way:

const eventBus = new EventBus()

// register an event handler
eventBus.listen(AppStartEvent, () => console.log("App Started!"))

// publish an event
eventBus.publish(new AppStartEvent())

Using as a Plugin

In addition, we may prefer to use our EventBus as a Vue plugin:

export default {

  $eventBus: null,

  install (Vue, options) {
    this.$eventBus = new Vue()
  },

  listen (eventClass, handler) {
    this.$eventBus.$on(eventClass.name, handler)
  },

  listenOnce (eventClass, handler) {
    this.$eventBus.$once(eventClass.name, handler)
  },

  remove (eventClass, handler) {
    if (handler) {
      this.$eventBus.$off(eventClass.name, handler)
    } else {
      this.$eventBus.$off(eventClass.name)
    }
  },

  removeAll () {
    this.$eventBus.$off()
  },

  publish (event) {
    this.$eventBus.$emit(event.constructor.name, event)
  }
}

Certainly, to be able to use the plugin, we should import and register it on our Vue instance:

import EventBus from '@/plugin/vue-event-bus'

Vue.use(EventBus)

Consequently, we can simply import and use our EventBus in any other Vue component as well:

import EventBus from '@/plugin/vue-event-bus'
import {AppStartEvent} from "@/app-events"

// register an event handler
EventBus.listen(AppStartEvent, () => console.log("App Started!"))

// publish an event
EventBus.publish(new AppStartEvent())

Finally

In this short tutorial, I have explained how to implement type-based global events and use them in Vue. You can find the code of the sample plugin over on GitHub.

Published at DZone with permission of Yavuz Tas. See the original article here. Opinions expressed by DZone contributors are their own.
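One practical point worth adding: handlers registered on a global bus live as long as the bus, so a component should remove its own handlers when it is destroyed. A minimal sketch using the plugin's listen and remove methods from above (the component shape here is an assumption, not from the original article):

import EventBus from '@/plugin/vue-event-bus'
import {CodeChangeEvent} from "@/app-events"

export default {
  created () {
    // keep a reference so exactly this handler can be removed later
    this.onCodeChange = event => console.log(event.code)
    EventBus.listen(CodeChangeEvent, this.onCodeChange)
  },
  beforeDestroy () {
    // unregister to avoid leaking handlers from destroyed components
    EventBus.remove(CodeChangeEvent, this.onCodeChange)
  }
}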
https://dzone.com/articles/type-based-global-events-in-vuejs
CC-MAIN-2020-16
en
refinedweb
The numpy package (module) is used in almost all numerical computation using Python. It is a package that provides high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran, so when calculations are vectorized (formulated with vectors and matrices), performance is very good.

To use numpy you need to import the module, using for example:

from numpy import *

In the numpy package the terminology used for vectors, matrices and higher-dimensional data sets is array.

Creating numpy arrays

There are a number of ways to initialize new numpy arrays, for example from arange, linspace, etc. For example, to create new vector and matrix arrays from Python lists we can use the numpy.array function.

# a vector: the argument to the array function is a Python list
v = array([1,2,3,4])
v

# a matrix: the argument to the array function is a nested Python list
M = array([[1, 2], [3, 4]])
M

The v and M objects are both of the type ndarray that the numpy module provides.

type(v), type(M)

The difference between the v and M arrays is only their shapes. We can get information about the shape of an array by using the ndarray.shape property.

v.shape
M.shape

The number of elements in the array is available through the ndarray.size property:

M.size

Equivalently, we could use the functions numpy.shape and numpy.size:

shape(M)
size(M)

So far the numpy.ndarray looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type? There are several reasons: numpy arrays are statically typed and homogeneous, which makes them memory-efficient, and fast operations on them can be implemented in a compiled language (C and Fortran are used).

Using the dtype (data type) property of an ndarray, we can see what type the data of an array has:

M.dtype

We get an error if we try to assign a value of the wrong type to an element in a numpy array:

M[0,0] = "hello"

---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-12-a09d72434238> in <module>()
----> 1 M[0,0] = "hello"
ValueError: invalid literal for long() with base 10: 'hello'

If we want, we can explicitly define the type of the array data when we create it, using the dtype keyword argument:

M = array([[1, 2], [3, 4]], dtype=complex)
M

Common data types that can be used with dtype are: int, float, complex, bool, object, etc. We can also explicitly define the bit size of the data types, for example: int64, int16, float128, complex128.

For larger arrays it is impractical to initialize the data manually, using explicit Python lists. Instead we can use one of the many functions in numpy that generate arrays of different forms. Some of the more common are:

# create a range
x = arange(0, 10, 1) # arguments: start, stop, step
x

x = arange(-1, 1, 0.1)
x

# using linspace, both end points ARE included
linspace(0, 10, 25)

logspace(0, 10, 10, base=e)

x, y = mgrid[0:5, 0:5] # similar to meshgrid in MATLAB
x
y

from numpy import random

# uniform random numbers in [0,1]
random.rand(5,5)

# standard normal distributed random numbers
random.randn(5,5)

# a diagonal matrix
diag([1,2,3])

# diagonal with offset from the main diagonal
diag([1,2,3], k=1)

zeros((3,3))
ones((3,3))

A very common file format for data files is comma-separated values (CSV), or related formats such as TSV (tab-separated values). To read data from such files into Numpy arrays we can use the numpy.genfromtxt function.
For example,

!head stockholm_td_adj.dat

data = genfromtxt('stockholm_td_adj.dat')
data.shape

fig, ax = plt.subplots(figsize=(14,4))
ax.plot(data[:,0]+data[:,1]/12.0+data[:,2]/365, data[:,5])
ax.axis('tight')
ax.set_title('temperatures in Stockholm')
ax.set_xlabel('year')
ax.set_ylabel('temperature (C)');

Using numpy.savetxt we can store a Numpy array to a file in CSV format:

M = random.rand(3,3)
M

savetxt("random-matrix.csv", M)

!cat random-matrix.csv

savetxt("random-matrix.csv", M, fmt='%.5f') # fmt specifies the format

!cat random-matrix.csv

Numpy's native file format is useful when storing and reading back numpy array data. Use the functions numpy.save and numpy.load:

save("random-matrix.npy", M)

!file random-matrix.npy

load("random-matrix.npy")

M.itemsize # bytes per element
M.nbytes # number of bytes
M.ndim # number of dimensions

We can index elements in an array using square brackets and indices:

# v is a vector, and has only one dimension, taking one index
v[0]

# M is a matrix, or a 2 dimensional array, taking two indices
M[1,1]

If we omit an index of a multidimensional array it returns the whole row (or, in general, an N-1 dimensional array):

M
M[1]

The same thing can be achieved by using : instead of an index:

M[1,:] # row 1
M[:,1] # column 1

We can assign new values to elements in an array using indexing:

M[0,0] = 1
M

# also works for rows and columns
M[1,:] = 0
M[:,2] = -1
M

Index slicing is the technical name for the syntax M[lower:upper:step] to extract part of an array:

A = array([1,2,3,4,5])
A
A[1:3]

Array slices are mutable: if they are assigned a new value the original array from which the slice was extracted is modified:

A[1:3] = [-2,-3]
A

We can omit any of the three parameters in M[lower:upper:step]:

A[::] # lower, upper, step all take the default values
A[::2] # step is 2, lower and upper default to the beginning and end of the array
A[:3] # first three elements
A[3:] # elements from index 3

Negative indices count from the end of the array (positive indices from the beginning):

A = array([1,2,3,4,5])
A[-1] # the last element in the array
A[-3:] # the last three elements

Index slicing works exactly the same way for multidimensional arrays:

A = array([[n+m*10 for n in range(5)] for m in range(5)])
A

# a block from the original array
A[1:4, 1:4]

# strides
A[::2, ::2]

Fancy indexing is the name for when an array or list is used in place of an index:

row_indices = [1, 2, 3]
A[row_indices]

col_indices = [1, 2, -1] # remember, index -1 means the last element
A[row_indices, col_indices]

We can also use index masks: if the index mask is a Numpy array of data type bool, then an element is selected (True) or not (False) depending on the value of the index mask at the position of each element:

B = array([n for n in range(5)])
B

row_mask = array([True, False, True, False, False])
B[row_mask]

# same thing
row_mask = array([1,0,1,0,0], dtype=bool)
B[row_mask]

This feature is very useful to conditionally select elements from an array, using for example comparison operators:

x = arange(0, 10, 0.5)
x

mask = (5 < x) * (x < 7.5)
mask

x[mask]

The index mask can be converted to a position index using the where function:

indices = where(mask)
indices

x[indices] # this indexing is equivalent to the fancy indexing x[mask]

With the diag function we can also extract the diagonal and subdiagonals of an array:

diag(A)
diag(A, -1)

The take function is similar to fancy indexing described above:

v2 = arange(-3,3)
v2

row_indices = [1, 3, 5]
v2[row_indices] # fancy indexing

v2.take(row_indices)

But take also works on lists and other
objects:

take([-3, -2, -1, 0, 1, 2], row_indices)

The choose function constructs an array by picking elements from several arrays:

which = [1, 0, 1, 0]
choices = [[-2,-2,-2,-2], [5,5,5,5]]
choose(which, choices)

Vectorizing code is the key to writing efficient numerical calculations with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.

We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers:

v1 = arange(0, 5)
v1 * 2
v1 + 2
A * 2, A + 2

When we add, subtract, multiply and divide arrays with each other, the default behaviour is element-wise operations:

A * A # element-wise multiplication
v1 * v1

If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row:

A.shape, v1.shape
A * v1

What about matrix multiplication? There are two ways. We can either use the dot function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:

dot(A, A)
dot(A, v1)
dot(v1, v1)

Alternatively, we can cast the array objects to the type matrix. This changes the behavior of the standard arithmetic operators +, -, * to use matrix algebra:

M = matrix(A)
v = matrix(v1).T # make it a column vector
v

M * M
M * v

# inner product
v.T * v

# with matrix objects, standard matrix algebra applies
v + M*v

If we try to add, subtract or multiply objects with incompatible shapes we get an error:

v = matrix([1,2,3,4,5,6]).T
shape(M), shape(v)
M * v

---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-100-995fb48ad0cc> in <module>()
----> 1 M * v
/Users/rob/miniconda/envs/py27-spl/lib/python2.7/site-packages/numpy/matrixlib/defmatrix.pyc in __mul__(self, other)
    339         if isinstance(other, (N.ndarray, list, tuple)) :
    340             # This promotes 1-D vectors to row vectors
--> 341             return N.dot(self, asmatrix(other))
    342         if isscalar(other) or not hasattr(other, '__rmul__') :
    343             return N.dot(self, other)
ValueError: shapes (5,5) and (6,1) not aligned: 5 (dim 1) != 6 (dim 0)

See also the related functions: inner, outer, cross, kron, tensordot. Try for example help(kron).

Above we have used .T to transpose the matrix object v. We could also have used the transpose function to accomplish the same thing. Other mathematical functions that transform matrix objects are:

C = matrix([[1j, 2j], [3j, 4j]])
C

conjugate(C)

Hermitian conjugate: transpose + conjugate

C.H

We can extract the real and imaginary parts of complex-valued arrays using real and imag:

real(C) # same as: C.real
imag(C) # same as: C.imag

Or the complex argument and absolute value:

angle(C+1) # heads up MATLAB Users, angle is used instead of arg
abs(C)

linalg.inv(C) # equivalent to C.I
C.I * C

linalg.det(C)
linalg.det(C.I)

Often it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate statistics of datasets in arrays. For example, let's calculate some properties from the Stockholm temperature dataset used above.

# reminder, the temperature dataset is stored in the data variable:
shape(data)

# the temperature data is in column 3
mean(data[:,3])

The daily mean temperature in Stockholm over the last 200 years has been about 6.2 C.
std(data[:,3]), var(data[:,3])

# lowest daily average temperature
data[:,3].min()

# highest daily average temperature
data[:,3].max()

d = arange(0, 10)
d

# sum up all elements
sum(d)

# product of all elements
prod(d+1)

# cumulative sum
cumsum(d)

# cumulative product
cumprod(d+1)

# same as: diag(A).sum()
trace(A)

We can compute with subsets of the data in an array using indexing, fancy indexing, and the other methods of extracting data from an array (described above). For example, let's go back to the temperature dataset:

!head -n 3 stockholm_td_adj.dat

The data format is: year, month, day, daily average temperature, low, high, location.

If we are interested in the average temperature only in a particular month, say February, then we can create an index mask and use it to select only the data for that month:

unique(data[:,1]) # the month column takes values from 1 to 12

mask_feb = data[:,1] == 2

# the temperature data is in column 3
mean(data[mask_feb,3])

With these tools we have very powerful data processing capabilities at our disposal. For example, extracting the monthly average temperatures for each month of the year only takes a few lines of code:

months = arange(1,13)
monthly_mean = [mean(data[data[:,1] == month, 3]) for month in months]

fig, ax = plt.subplots()
ax.bar(months, monthly_mean)
ax.set_xlabel("Month")
ax.set_ylabel("Monthly avg. temp.");

When functions such as min, max, etc. are applied to multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the axis argument we can specify how these functions should behave:

m = random.rand(3,3)
m

# global max
m.max()

# max in each column
m.max(axis=0)

# max in each row
m.max(axis=1)

Many other functions and methods in the array and matrix classes accept the same (optional) axis keyword argument.

The shape of a Numpy array can be modified without copying the underlying data, which makes it a fast operation even for large arrays.

A
n, m = A.shape

B = A.reshape((1,n*m))
B

B[0,0:5] = 5 # modify the array B
A # and the original variable is also changed. B is only a different view of the same data

We can also use the function flatten to make a higher-dimensional array into a vector. But this function creates a copy of the data.

B = A.flatten()
B

B[0:5] = 10
B
A # now A has not changed, because B's data is a copy of A's, not referring to the same data

With newaxis, we can insert new dimensions in an array, for example converting a vector to a column or row matrix:

v = array([1,2,3])
shape(v)

# make a column matrix of the vector v
v[:, newaxis]

# column matrix
v[:,newaxis].shape

# row matrix
v[newaxis,:].shape

Using the functions repeat, tile, vstack, hstack, and concatenate we can create larger vectors and matrices from smaller ones:

a = array([[1, 2], [3, 4]])

# repeat each element 3 times
repeat(a, 3)

# tile the matrix 3 times
tile(a, 3)

b = array([[5, 6]])

concatenate((a, b), axis=0)
concatenate((a, b.T), axis=1)

vstack((a,b))
hstack((a,b.T))

To achieve high performance, assignments in Python usually do not copy the underlying objects. This is important, for example, when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term: pass by reference).
A = array([[1, 2], [3, 4]])
A

# now B is referring to the same array data as A
B = A

# changing B affects A
B[0,0] = 10
B
A

If we want to avoid this behavior, so that we get a new, completely independent object B copied from A, we need to do a so-called "deep copy" using the function copy:

B = copy(A)

# now, if we modify B, A is not affected
B[0,0] = -5
B
A

Generally, we want to avoid iterating over the elements of arrays whenever we can (at all costs). The reason is that in an interpreted language like Python (or MATLAB), iterations are really slow compared to vectorized operations. However, sometimes iterations are unavoidable. For such cases, the Python for loop is the most convenient way to iterate over an array:

v = array([1,2,3,4])

for element in v:
    print(element)

M = array([[1,2], [3,4]])

for row in M:
    print("row", row)
    for element in row:
        print(element)

When we need to iterate over each element of an array and modify its elements, it is convenient to use the enumerate function to obtain both the element and its index in the for loop:

for row_idx, row in enumerate(M):
    print("row_idx", row_idx, "row", row)
    for col_idx, element in enumerate(row):
        print("col_idx", col_idx, "element", element)
        # update the matrix M: square each element
        M[row_idx, col_idx] = element ** 2

# each element in M is now squared
M

As mentioned several times by now, to get good performance we should try to avoid looping over elements in our vectors and matrices, and instead use vectorized algorithms. The first step in converting a scalar algorithm to a vectorized algorithm is to make sure that the functions we write work with vector inputs.

def Theta(x):
    """
    Scalar implementation of the Heaviside step function.
    """
    if x >= 0:
        return 1
    else:
        return 0

Theta(array([-3,-2,-1,0,1,2,3]))

---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-165-6658efdd2f22> in <module>()
----> 1 Theta(array([-3,-2,-1,0,1,2,3]))
<ipython-input-164-9a0cb13d93d4> in Theta(x)
      3     Scalar implementation of the Heaviside step function.
      4     """
----> 5     if x >= 0:
      6         return 1
      7     else:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

OK, that didn't work because we didn't write the Theta function so that it can handle a vector input...

To get a vectorized version of Theta we can use the Numpy function vectorize. In many cases it can automatically vectorize a function:

Theta_vec = vectorize(Theta)
Theta_vec(array([-3,-2,-1,0,1,2,3]))

We can also implement the function to accept a vector input from the beginning (this requires more effort but might give better performance):

def Theta(x):
    """
    Vector-aware implementation of the Heaviside step function.
    """
    return 1 * (x >= 0)

Theta(array([-3,-2,-1,0,1,2,3]))

# still works for scalars as well
Theta(-1.2), Theta(2.6)

When using arrays in conditions, for example in if statements and other boolean expressions, one needs to use any or all, which requires that any or all elements in the array evaluate to True:

M

if (M > 5).any():
    print("at least one element in M is larger than 5")
else:
    print("no element in M is larger than 5")

if (M > 5).all():
    print("all elements in M are larger than 5")
else:
    print("all elements in M are not larger than 5")

Since Numpy arrays are statically typed, the type of an array does not change once created. But we can explicitly cast an array of some type to another using the astype function (see also the similar asarray function).
This always creates a new array of the new type:

M.dtype

M2 = M.astype(float)
M2
M2.dtype

M3 = M.astype(bool)
M3

%reload_ext version_information
%version_information numpy
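As a closing illustration of the vectorization advice above, here is a standalone sketch (not part of the original notebook; exact timings vary by machine):

import numpy as np
import time

a = np.arange(1_000_000, dtype=float)

t0 = time.perf_counter()
s = 0.0
for x in a:              # plain Python loop over elements: slow
    s += x * x
t1 = time.perf_counter()

v = np.dot(a, a)         # vectorized equivalent: runs in compiled code
t2 = time.perf_counter()

print(abs(s - v) < 1e-6 * v)   # same result, up to floating-point rounding
print("loop: %.3fs, vectorized: %.3fs" % (t1 - t0, t2 - t1))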
https://share.cocalc.com/share/d8d2faccf6f6373d5e0a57a2849cbf76273d673e/scientific-python-lectures/Lecture-2-Numpy.ipynb?viewer=share
CC-MAIN-2020-16
en
refinedweb
GenerateRigidMass (#include <GenerateRigidMass.h>) generates a RigidMass from the mesh integral; it integrates the whole mesh.

Data fields:
- input: density (kg * m^-3); the mesh must be convex
- input: positions of the vertices
- input: quads of the mesh
- input: triangles of the mesh
- output: mass of the mesh
- output: the gravity center of the mesh
- output: vector going from the mass center to the space origin
- output: the inertia matrix of the mesh
- output: volume of the mesh
- output: (undocumented)

Member functions:
- Update the output values. Implements sofa::core::DataEngine.
- Get the template type names (if any) used to instantiate this object. Reimplemented from sofa::core::objectmodel::Base.
- Initialization method called at graph modification, during bottom-up traversal. Reimplemented from sofa::core::objectmodel::BaseObject.
- (protected) Update method called when variables used in precomputation are modified. Reimplemented from sofa::core::objectmodel::BaseObject.
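For orientation only, a hypothetical scene snippet showing how such an engine is typically wired to a mesh loader via SOFA data links. The component attributes and Data names here are assumptions, not taken from this page; check the actual Data names in the class reference for your SOFA version:

<Node name="rigidBody">
    <!-- hypothetical: loader and attribute names may differ in your SOFA version -->
    <MeshOBJLoader name="loader" filename="mesh/object.obj"/>
    <GenerateRigidMass name="massEngine" template="Rigid3d"
                       density="1000"
                       position="@loader.position"
                       triangles="@loader.triangles"
                       quads="@loader.quads"/>
</Node>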
https://www.sofa-framework.org/api/master/sofa/html/classsofa_1_1component_1_1engine_1_1_generate_rigid_mass.html
CC-MAIN-2020-16
en
refinedweb
From: Matthias Schabel (boost_at_[hidden])
Date: 2007-03-29 16:50:02

> Hmm. I still don't understand the rationale for the extra multiplication
> operation. How is this any different from:
>
> quantity<SI::meter> q(2);

It's a faux operation - the multiplication of a scalar times a unit (a class with no data members at all) decorates the scalar to produce a quantity of the appropriate unit and value type. The problem with using a raw value type for construction can be demonstrated here:

using namespace SI;

/// this is two meters
quantity<length> q(2);

...some months later...

using namespace CGS;

/// uh oh - this is 2 centimeters now
quantity<length> q(2);

The way it's currently implemented, you incur a conversion in the constructor, but the code remains correct if the units are convertible and gives a compile-time error if they are not...

Matthias
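The design being discussed here later shipped as Boost.Units; for a concrete picture, a hedged C++ sketch against the released library (header paths per modern Boost; the 2007 draft API differed in details):

#include <boost/units/quantity.hpp>
#include <boost/units/systems/si/length.hpp>
#include <boost/units/systems/cgs/length.hpp>

using namespace boost::units;

int main()
{
    // multiplying a scalar by a unit "decorates" it into a quantity
    quantity<si::length> a = 2.0 * si::meter;

    // conversion happens in the (explicit) constructor: 2 m -> 200 cm
    quantity<cgs::length> b(a);
    (void)b;

    // quantity<si::length> c(2.0); // would not compile: raw scalars
    //                              // don't silently become quantities
}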
https://lists.boost.org/Archives/boost/2007/03/119061.php
CC-MAIN-2020-16
en
refinedweb
Talk:Code Completion Design

Contents
1 Should be added to the article page soon
2 How the parser handles each token

How the parser handles each token

; : just clear the m_Str, because the parser believes the token has been recognized correctly. e.g. int a;
delete : skip the tokens until it meets ; or } . e.g. delete p;
. : e.g. MyClass .fun();
-> : e.g. MyClass->fun();
{ : skip the context between { and } if usebuffer or bufferskipblock is true in the options. e.g. int main() {cout <<endl;}
} : clear the scope and the relation it belongs to.
: : set the scope.
while if do else for switch : skip the context until it meets ; or } if usebuffer or bufferskipblock is true in the options.
typedef : call HandleTypedef to deal with it if the handletypedef option is true; otherwise, skip the context until it meets ; or } .
return : skip the context until it meets ; or } .
const : just clear the m_Str.
extern : call DoParse if the next token is "C"; otherwise, skip the context until it meets ; .
__asm : skip the context until it meets ; .
static virtual inline : do nothing.
# : handle an include if the next token is include; handle a define if the next token is define; otherwise, handle the preprocessor block.
using : skip the context until it meets ; or } .
namespace : skip the context between < and > if the next token is < .
class : handle the class if the handleclass option is true; otherwise skip it.
struct : the same as class.
enum : the same as class.
union : skip the context until it meets } or ; ; call DoParse to analyze the context inside the union.
operator : e.g. MyClass operator () (size_t param); MyClass operator += (MyClass param);
others : the current token is not any of the above. According to the next token, it can be divided into several situations, as follows: 1) the next one's first char is ( . e.g. Macro(a, b) fun(int a, int b); e.g. MyClass MyClass::fun() {} 6) ;
http://wiki.codeblocks.org/index.php?title=Talk:Code_Completion_Design&oldid=5978
CC-MAIN-2020-16
en
refinedweb
Kaggle What's cooking competition

5 Nov 2018

Solving Kaggle's amazing What's cooking competition using a simple Bag of Words model, coded by hand without using any machine learning library.

Kaggle's What's cooking competition is about guessing the cuisine of a recipe from its ingredients. Train and test data come in JSON format and are pretty clear. Here are the first two records from the train data:

[
  {
    "id": 10259,
    "cuisine": "greek",
    "ingredients": [
      "romaine lettuce",
      "black olives",
      "grape tomatoes",
      "garlic",
      "pepper",
      "purple onion",
      "seasoning",
      "garbanzo beans",
      "feta cheese crumbles"
    ]
  },
  {
    "id": 25693,
    "cuisine": "southern_us",
    "ingredients": [
      "plain flour",
      "ground pepper",
      "salt",
      "tomatoes",
      "ground black pepper",
      "thyme",
      "eggs",
      "green tomatoes",
      "yellow corn meal",
      "milk",
      "vegetable oil"
    ]
  },

The target variable is cuisine, and the items in the ingredients array are the features with which we should classify (it is clearly a classification task) which cuisine the recipe belongs to.

OK, let's explore our train data.

Data exploration

Available cuisines

Let's find out how many cuisines are available in the train data and what their recipe distribution is. Below is a small script that shows the available cuisines and the number of recipes for each, as well as drawing a bar chart of the recipe distribution.

import json

with open('train.json') as data_file:
    train = json.load(data_file)

print('Total number of recipes: %d' % len(train))

cousine_map = {}
for recipe in train:
    cousine_name = recipe['cuisine']
    recipe_count = cousine_map.get(cousine_name, 0)
    cousine_map[cousine_name] = recipe_count+1

# get a sorted (by number of recipes) list of cousines
cousines = sorted(list(cousine_map.items()), key=lambda tup: -tup[1])

def draw_cousines_barchart(cousines):
    import matplotlib.pyplot as plt
    import numpy as np

    names = [c[0] for c in cousines]
    values = [c[1] for c in cousines]

    fig_size = plt.rcParams["figure.figsize"]
    fig_size[0] = 12
    fig_size[1] = 8
    plt.rcParams["figure.figsize"] = fig_size

    index = np.arange(len(names))
    bars = plt.bar(index, values, align='center', alpha=0.5,
        color=['darksalmon', 'sienna', 'gold', 'olivedrab', 'darkgreen',
               'lightseagreen', 'darkturquoise', 'slategray', 'navy', 'darkorchid',
               'plum', 'lightcoral', 'darkorange', 'lawngreen', 'g', 'c',
               'violet', 'crimson', 'peru', 'aqua'])
    plt.xticks(index, names, fontsize=10, rotation=60)
    plt.ylabel('Nr of recipes')
    plt.title('Cuisine')
    plt.show()

draw_cousines_barchart(cousines)

And we get the following bar chart:

Facts:

- Total number of recipes in the train data: 39774
- The three most used cuisines are: italian (7838), mexican (6438) and southern_us (4320). In sum they contain almost half (to be exact, 18596) of all the recipes.

Here is the full distribution of recipes by cuisine:

[ ('italian', 7838), ('mexican', 6438), ('southern_us', 4320), ('indian', 3003), ('chinese', 2673), ('french', 2646), ('cajun_creole', 1546), ('thai', 1539), ('japanese', 1423), ('greek', 1175), ('spanish', 989), ('korean', 830), ('vietnamese', 825), ('moroccan', 821), ('british', 804), ('filipino', 755), ('irish', 667), ('jamaican', 526), ('russian', 489), ('brazilian', 467) ]

Ingredients analysis

Now let's find out how many unique ingredients are in the train dataset and what the 10 most frequently used ingredients are overall.
all_ingredients = {}
for recipe in train:
    ingredients = recipe['ingredients']
    for ing_name in ingredients:
        ing_count = all_ingredients.get(ing_name.lower(), 0)
        all_ingredients[ing_name.lower()] = ing_count+1

print("Total ingredients: %d" % len(all_ingredients))

Total ingredients: 6703

Now let's see the 10 most used ingredients and their number of occurrences:

# get a sorted (by number of occurrences) list of ingredients
sorted_ingredients = sorted(list(all_ingredients.items()), key=lambda tup: -tup[1])
print(sorted_ingredients[:10])

Here are the 10 most used ingredients across all cuisines:

[ ('salt', 18049), ('onions', 7972), ('olive oil', 7972), ('water', 7457), ('garlic', 7380), ('sugar', 6434), ('garlic cloves', 6237), ('butter', 4848), ('ground black pepper', 4785), ('all-purpose flour', 4632) ]

Model

The model employed here is Bag of Words. The idea of bag of words is to take into account each word's multiplicity in the text while disregarding grammar and word order. Thus, each word can be considered a component of a vector over all the words (the dictionary). So let's consider each recipe a document; each ingredient is then a word in the document.

The bag of words algorithm will be the following:

- For train data: for each cuisine, collect all ingredients used in the recipes belonging to that cuisine. For each ingredient in the cuisine, calculate the number of occurrences. Also calculate the total sum of all ingredient occurrences in the cuisine - it will be used to normalize the data.
- For test data: for each recipe, for all ingredients in this recipe, calculate the total number of occurrences in each cuisine. If an ingredient is not used in the cuisine, then add nothing to the score. Then divide the total score by the total sum of all ingredient occurrences in the cuisine. Then simply select the cuisine with the largest score.

For each cuisine, calculate the number of occurrences of each ingredient and the total sum of all occurrences:
# keys are cousine names # values are maps, for which keys are ingredient names, # and values are occurrences of this particular ingredient for all recipes # belonging to this cousine cousine_ingredients = {} # this map contains total sum of all occurrences of all ingredients for the cousine # keys are cousine names # values are total sum of all ingredient occurrences cousine_totals = {} for recipe in train: ingredients = recipe['ingredients'] cousine_name = recipe['cuisine'] cousine_map = cousine_ingredients.get(cousine_name, {}) cousine_total_icount = cousine_totals.get(cousine_name, 0) for iname in ingredients: iname = iname.lower() icount = cousine_map.get(iname, 0) cousine_map[iname] = icount+1 cousine_total_icount = cousine_total_icount + 1 cousine_ingredients[cousine_name] = cousine_map cousine_totals[cousine_name] = cousine_total_icount Now let’s calculate cousine scores for each recipe in test data: with open('test.json') as test_data_file: test_data = json.load(test_data_file) recipe_map = {} for recipe in test_data: recipe_id = recipe['id'] recipe_ingredients = recipe['ingredients'] recipe_cousine_map = {} for cousine_name in cousine_ingredients: cousine_map = cousine_ingredients[cousine_name] cousine_score = 0 for iname in recipe_ingredients: ingredient_score = cousine_map.get(iname,0) cousine_score = cousine_score + ingredient_score cousine_score_normalized = cousine_score/cousine_totals[cousine_name] recipe_cousine_map[cousine_name] = cousine_score_normalized recipe_map[recipe_id] = recipe_cousine_map In recipe_map keys are recipe IDs. Values are maps, for which keys are cousine names, and values are cousine scores calculated for this recipes. Now for each recipe let’s select cousine with the largest score: recipe_results = {} for recipe_id in recipe_map: recipe_cousine_map = recipe_map[recipe_id] # get cousine with max score from recipe_cousine_map cousine_name = max(recipe_cousine_map, key=recipe_cousine_map.get) recipe_results[recipe_id]=cousine_name And let’s write it to file: with open('result.csv', 'w') as rst_file: rst_file.write('id,cuisine'+'\n') for recipe_id in recipe_results: cousine_name = recipe_results[recipe_id] rst_file.write(str(recipe_id)+','+cousine_name+'\n') Score that this model gives: 0.38505 Summary Here we implemented very simple Bag of words approach by hands without using any library. Score that it earns is not that high, only 0.38505. Another good exercise would be to implement the same approach using some library, e.g. sklearn. But that would be topic for another post. Links I found some publicly available posts devoted to analysis of this task. Here are some of them: - CS570 Final Project - Kaggle: What’s cooking? - Jeff Wen - What’s cooking? - Frolian’s blog - The Kaggle What’s Cooking challenge - Analytics Vidhya - Kaggle Solution: What’s Cooking ? (Text Mining Competition) - Félix Luginbühl - Using Recipe Ingredients to Categorize the Cuisine - Oguzhan Gencoglu - Github repo with solution for What’s cooking - Beautiful visualisations using word embedding - What’s Cooking? Predicting Cuisines from Recipe Ingredients There is a number of other good stuff which can be googled by query what's cooking kaggle. I found also some really good stuff: KAGGLE ENSEMBLING GUIDE, which explains how ensembling should be done for kaggle competitions. A ton of useful stuff. Also here is a list of available colors in matplotlib - nice stuff when you need to draw a number of entities (and not only bars) in different colors.
http://iryndin.net/post/kaggle-whats-cooking/
CC-MAIN-2019-30
en
refinedweb
DISCLAIMER: No maintenance on this package anymore. Prefer to use built_value. The documentation is not aligned with the latest version.

Serialize and deserialize Dart objects with reflectable or codegen:

import 'package:serializer/serializer_reflectable.dart';

@serializable
class MyModel {
  String name;

  // constructor needs to be without parameters, or with optional or positional ones
  MyModel([this.name]);
}

main() {
  Serializer serializer = new ReflectableSerializer.Json();

  // serialize
  MyModel model = new MyModel("John");
  String json = serializer.encode(model);
  Map jsonMap = serializer.toMap(model);

  // deserialize
  model = serializer.decode(json, MyModel);
  model = serializer.fromMap(jsonMap, MyModel);
}

Breaking changes: reflectable to 1.0.0

Breaking changes: @UseType annotation only for codegen

import "package:serializer/serializer_codegen.dart";
Serializer ser = new CodegenSerializer.json();

or

import "package:serializer/serializer_reflectable.dart";
Serializer ser = new ReflectableSerializer.json();

Breaking changes: toPrimaryObject method

Breaking changes: type_info_key is now optional

Breaking changes: no initSerializer function anymore; instead, you have to instantiate a serializer class:

Serializer serializer = new Serializer.Json();

toJson and fromJson replaced by encode and decode

type_info_key is now parameterizable

Add this to your package's pubspec.yaml file:

dependencies:
  serializer: ^0.8.0

Now in your Dart code, you can use:

import 'package:serializer/serializer.dart';
https://pub.dev/packages/serializer
CC-MAIN-2019-30
en
refinedweb
{-# LANGUAGE Trustworthy #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE DeriveDataTypeable, DeriveGeneric, GADTs, RecordWildCards #-}
{-# OPTIONS_GHC -funbox-strict-fields #-}

-- |
-- Module      : Criterion.Types
-- Copyright   : (c) 2009-2014 Bryan O'Sullivan
--
-- License     : BSD-style
-- Maintainer  : bos@serpentine.com
-- Stability   : experimental
-- Portability : GHC
--
-- Types for benchmarking.

module Criterion.Types
    (
    -- * Configuration
      Config(..)
    , Verbosity(..)
    -- * Benchmark descriptions
    , Benchmarkable(..)
    , Benchmark(..)
    -- * Measurements
    , Measured(..)
    , fromInt
    , toInt
    , fromDouble
    , toDouble
    , measureAccessors
    , measureKeys
    , measure
    , rescale
    -- * Benchmark construction
    , env
    , envWithCleanup
    , perBatchEnv
    , perBatchEnvWithCleanup
    , perRunEnv
    , perRunEnvWithCleanup
    , bench
    , bgroup
    , addPrefix
    , benchNames
    -- ** Evaluation control
    , whnf
    , nf
    , nfIO
    , whnfIO
    -- * Result types
    , Outliers(..)
    , OutlierEffect(..)
    , OutlierVariance(..)
    , Regression(..)
    , KDE(..)
    , Report(..)
    , SampleAnalysis(..)
    , DataRecord(..)
    ) where

-- Temporary: to support pre-AMP GHC 7.8.4:
import Control.Applicative
import Data.Monoid

import Control.DeepSeq (NFData(rnf))
import Control.Exception (evaluate)
import Data.Aeson (FromJSON(..), ToJSON(..))
import Data.Binary (Binary(..), putWord8, getWord8)
import Data.Data (Data, Typeable)
import Data.Int (Int64)
import Data.Map (Map, fromList)
import GHC.Generics (Generic)
import qualified Data.Vector as V
import qualified Data.Vector.Unboxed as U
import qualified Statistics.Types as St
import Statistics.Resampling.Bootstrap ()
import Prelude

-- | Control the amount of information displayed.
data Verbosity = Quiet
               | Normal
               | Verbose
                 deriving (Eq, Ord, Bounded, Enum, Read, Show, Typeable, Data,
                           Generic)

-- | Top-level benchmarking configuration.
data Config = Config {
      confInterval :: St.CL Double
      -- ^ Confidence interval for bootstrap estimation (greater than
      -- 0, less than 1).
    , forceGC      :: Bool
      -- ^ /Obsolete, unused/.  This option used to force garbage
      -- collection between every benchmark run, but it no longer has
      -- an effect (we now unconditionally force garbage collection).
      -- This option remains solely for backwards API compatibility.
    , timeLimit    :: Double
      -- ^ Number of seconds to run a single benchmark.  (In practice,
      -- execution time will very slightly exceed this limit.)
    , resamples    :: Int
      -- ^ Number of resamples to perform when bootstrapping.
    , regressions  :: [([String], String)]
      -- ^ Regressions to perform.
    , rawDataFile  :: Maybe FilePath
      -- ^ File to write binary measurement and analysis data to.  If
      -- not specified, this will be a temporary file.
    , reportFile   :: Maybe FilePath
      -- ^ File to write report output to, with template expanded.
    , csvFile      :: Maybe FilePath
      -- ^ File to write CSV summary to.
    , jsonFile     :: Maybe FilePath
      -- ^ File to write JSON-formatted results to.
    , junitFile    :: Maybe FilePath
      -- ^ File to write JUnit-compatible XML results to.
    , verbosity    :: Verbosity
      -- ^ Verbosity level to use when running and analysing
      -- benchmarks.
    , template     :: FilePath
      -- ^ Template file to use if writing a report.
    } deriving (Eq, Read, Show, Typeable, Data, Generic)

-- | A pure function or impure action that can be benchmarked. The
-- 'Int64' parameter indicates the number of times to run the given
-- function or action.
data Benchmarkable = forall a . NFData a =>
    Benchmarkable
      { allocEnv :: Int64 -> IO a
      , cleanEnv :: Int64 -> a -> IO ()
      , runRepeatedly :: a -> Int64 -> IO ()
      , perRun :: Bool
      }

noop :: Monad m => a -> m ()
noop = const $ return ()
{-# INLINE noop #-}

toBenchmarkable :: (Int64 -> IO ()) -> Benchmarkable
toBenchmarkable f = Benchmarkable noop (const noop) (const f) False
{-# INLINE toBenchmarkable #-}

-- | A collection of measurements made while benchmarking.
--
-- Measurements related to garbage collection are tagged with __(GC)__.
-- They will only be available if a benchmark is run with @\"+RTS -T\"@.
data Measured = Measured {
      measTime               :: !Double
      -- ^ Total wall-clock time elapsed, in seconds.
    , measCpuTime            :: !Double
      -- ^ Total CPU time elapsed, in seconds.  Includes both user and
      -- kernel (system) time.
    , measCycles             :: !Int64
      -- ^ Cycles, in unspecified units that may be CPU cycles.  (On
      -- i386 and x86_64, this is measured using the @rdtsc@
      -- instruction.)
    , measIters              :: !Int64
      -- ^ Number of loop iterations measured.
    , measAllocated          :: !Int64
      -- ^ __(GC)__ Number of bytes allocated.  Access using 'fromInt'.
    , measNumGcs             :: !Int64
      -- ^ __(GC)__ Number of garbage collections performed.  Access
      -- using 'fromInt'.
    , measBytesCopied        :: !Int64
      -- ^ __(GC)__ Number of bytes copied during garbage collection.
      -- Access using 'fromInt'.
    , measMutatorWallSeconds :: !Double
      -- ^ __(GC)__ Wall-clock time spent doing real work
      -- (\"mutation\"), as distinct from garbage collection.  Access
      -- using 'fromDouble'.
    , measMutatorCpuSeconds  :: !Double
      -- ^ __(GC)__ CPU time spent doing real work (\"mutation\"), as
      -- distinct from garbage collection.  Access using 'fromDouble'.
    , measGcWallSeconds      :: !Double
      -- ^ __(GC)__ Wall-clock time spent doing garbage collection.
      -- Access using 'fromDouble'.
    , measGcCpuSeconds       :: !Double
      -- ^ __(GC)__ CPU time spent doing garbage collection.  Access
      -- using 'fromDouble'.
    } deriving (Eq, Read, Show, Typeable, Data, Generic)

instance FromJSON Measured where
    parseJSON v = do
      (a,b,c,d,e,f,g,h,i,j,k) <- parseJSON v
      -- The first four fields are not subject to the encoding policy:
      return $ Measured a b c d
                        (int e) (int f) (int g)
                        (db h) (db i) (db j) (db k)
      where int = toInt; db = toDouble

-- Here we treat the numeric fields as `Maybe Int64` and `Maybe Double`
-- and we use a specific policy for deciding when they should be Nothing,
-- which becomes null in JSON.
instance ToJSON Measured where
    toJSON Measured{..} = toJSON
      (measTime, measCpuTime, measCycles, measIters,
       i measAllocated, i measNumGcs, i measBytesCopied,
       d measMutatorWallSeconds, d measMutatorCpuSeconds,
       d measGcWallSeconds, d measGcCpuSeconds)
      where i = fromInt; d = fromDouble

instance NFData Measured where
    rnf Measured{} = ()

-- THIS MUST REFLECT THE ORDER OF FIELDS IN THE DATA TYPE.
--
-- The ordering is used by Javascript code to pick out the correct
-- index into the vector that represents a Measured value in that
-- world.
measureAccessors_ :: [(String, (Measured -> Maybe Double, String))]
measureAccessors_ = [
    ("time",               (Just . measTime,
                            "wall-clock time"))
  , ("cpuTime",            (Just . measCpuTime,
                            "CPU time"))
  , ("cycles",             (Just . fromIntegral . measCycles,
                            "CPU cycles"))
  , ("iters",              (Just . fromIntegral . measIters,
                            "loop iterations"))
  , ("allocated",          (fmap fromIntegral . fromInt . measAllocated,
                            "(+RTS -T) bytes allocated"))
  , ("numGcs",             (fmap fromIntegral . fromInt . measNumGcs,
                            "(+RTS -T) number of garbage collections"))
  , ("bytesCopied",        (fmap fromIntegral . fromInt . measBytesCopied,
                            "(+RTS -T) number of bytes copied during GC"))
  , ("mutatorWallSeconds", (fromDouble . measMutatorWallSeconds,
                            "(+RTS -T) wall-clock time for mutator threads"))
  , ("mutatorCpuSeconds",  (fromDouble . measMutatorCpuSeconds,
                            "(+RTS -T) CPU time spent running mutator threads"))
  , ("gcWallSeconds",      (fromDouble . measGcWallSeconds,
                            "(+RTS -T) wall-clock time spent doing GC"))
  , ("gcCpuSeconds",       (fromDouble . measGcCpuSeconds,
                            "(+RTS -T) CPU time spent doing GC"))
  ]

-- | Field names in a 'Measured' record, in the order in which they
-- appear.
measureKeys :: [String]
measureKeys = map fst measureAccessors_

-- | Field names and accessors for a 'Measured' record.
measureAccessors :: Map String (Measured -> Maybe Double, String)
measureAccessors = fromList measureAccessors_

-- | Normalise every measurement as if 'measIters' was 1.
--
-- ('measIters' itself is left unaffected.)
rescale :: Measured -> Measured
rescale m@Measured{..} = m {
      measTime               = d measTime
    , measCpuTime            = d measCpuTime
    , measCycles             = i measCycles
    -- skip measIters
    , measNumGcs             = i measNumGcs
    , measBytesCopied        = i measBytesCopied
    , measMutatorWallSeconds = d measMutatorWallSeconds
    , measMutatorCpuSeconds  = d measMutatorCpuSeconds
    , measGcWallSeconds      = d measGcWallSeconds
    , measGcCpuSeconds       = d measGcCpuSeconds
    } where
        d k = maybe k (/ iters) (fromDouble k)
        i k = maybe k (round . (/ iters)) (fromIntegral <$> fromInt k)
        iters = fromIntegral measIters :: Double

-- | Convert a (possibly unavailable) GC measurement to a true value.
-- If the measurement is a huge negative number that corresponds to
-- \"no data\", this will return 'Nothing'.
fromInt :: Int64 -> Maybe Int64
fromInt i | i == minBound = Nothing
          | otherwise     = Just i

-- | Convert from a true value back to the packed representation used
-- for GC measurements.
toInt :: Maybe Int64 -> Int64
toInt Nothing  = minBound
toInt (Just i) = i

-- | Convert a (possibly unavailable) GC measurement to a true value.
-- If the measurement is a huge negative number that corresponds to
-- \"no data\", this will return 'Nothing'.
fromDouble :: Double -> Maybe Double
fromDouble d | isInfinite d || isNaN d = Nothing
             | otherwise               = Just d

-- | Convert from a true value back to the packed representation used
-- for GC measurements.
toDouble :: Maybe Double -> Double
toDouble Nothing  = -1/0
toDouble (Just d) = d

instance Binary Measured where
    put Measured{..} = do
      put measTime; put measCpuTime; put measCycles; put measIters
      put measAllocated; put measNumGcs; put measBytesCopied
      put measMutatorWallSeconds; put measMutatorCpuSeconds
      put measGcWallSeconds; put measGcCpuSeconds

    get = Measured <$> get <*> get <*> get <*> get
                   <*> get <*> get <*> get
                   <*> get <*> get <*> get <*> get

-- | Apply an argument to a function, and evaluate the result to weak
-- head normal form (WHNF).
whnf :: (a -> b) -> a -> Benchmarkable
whnf = pureFunc id
{-# INLINE whnf #-}

-- | Apply an argument to a function, and evaluate the result to
-- normal form (NF).
nf :: NFData b => (a -> b) -> a -> Benchmarkable
nf = pureFunc rnf
{-# INLINE nf #-}

pureFunc :: (b -> c) -> (a -> b) -> a -> Benchmarkable
pureFunc reduce f0 x0 = toBenchmarkable (go f0 x0)
  where go f x n
          | n <= 0    = return ()
          | otherwise = evaluate (reduce (f x)) >> go f x (n-1)
{-# INLINE pureFunc #-}

-- | Perform an action, then evaluate its result to normal form.
-- This is particularly useful for forcing a lazy 'IO' action to be
-- completely performed.
nfIO :: NFData a => IO a -> Benchmarkable
nfIO = toBenchmarkable . impure rnf
{-# INLINE nfIO #-}

-- | Perform an action, then evaluate its result to weak head normal
-- form (WHNF).  This is useful for forcing an 'IO' action whose result
-- is an expression to be evaluated down to a more useful value.
whnfIO :: IO a -> Benchmarkable
whnfIO = toBenchmarkable . impure id
{-# INLINE whnfIO #-}

impure :: (a -> b) -> IO a -> Int64 -> IO ()
impure strategy a = go
  where go n
          | n <= 0    = return ()
          | otherwise = a >>= (evaluate . strategy) >> go (n-1)
{-# INLINE impure #-}

-- | Specification of a collection of benchmarks and environments. A
-- benchmark may consist of:
--
-- * An environment that creates input data for benchmarks, created
--   with 'env'.
--
-- * A single 'Benchmarkable' item with a name, created with 'bench'.
--
-- * A (possibly nested) group of 'Benchmark's, created with 'bgroup'.
data Benchmark where
    Environment :: NFData env
                => IO env -> (env -> IO a) -> (env -> Benchmark) -> Benchmark
    Benchmark   :: String -> Benchmarkable -> Benchmark
    BenchGroup  :: String -> [Benchmark] -> Benchmark

-- | Run a benchmark (or collection of benchmarks) in the given
-- environment.  The purpose of an environment is to lazily create
-- input data to pass to the functions that will be benchmarked.
env :: NFData env =>
       IO env
    -- ^ Create the environment.  The environment will be evaluated to
    -- normal form before being passed to the benchmark.
    -> (env -> Benchmark)
    -- ^ Take the newly created environment and make it available to
    -- the given benchmarks.
    -> Benchmark
env alloc = Environment alloc noop

-- | Same as `env`, but allows for an additional callback
-- to clean up the environment.  Resource clean up is exception safe, that is,
-- it runs even if the 'Benchmark' throws an exception.
envWithCleanup
    :: NFData env
    => IO env
    -- ^ Create the environment.  The environment will be evaluated to
    -- normal form before being passed to the benchmark.
    -> (env -> IO a)
    -- ^ Clean up the created environment.
    -> (env -> Benchmark)
    -- ^ Take the newly created environment and make it available to
    -- the given benchmarks.
    -> Benchmark
envWithCleanup = Environment

-- | Create a Benchmarkable where a fresh environment is allocated for
-- every batch of runs of the benchmarkable.
perBatchEnv
    :: (NFData env, NFData b)
    => (Int64 -> IO env)
    -- ^ Create an environment for a batch of N runs.  The environment will be
    -- evaluated to normal form before running.
    -> (env -> IO b)
    -- ^ Function returning the IO action that should be benchmarked with the
    -- newly generated environment.
    -> Benchmarkable
perBatchEnv alloc = perBatchEnvWithCleanup alloc (const noop)

-- | Same as `perBatchEnv`, but allows for an additional callback
-- to clean up the environment.  Resource clean up is exception safe, that is,
-- it runs even if the 'Benchmark' throws an exception.
perBatchEnvWithCleanup
    :: (NFData env, NFData b)
    => (Int64 -> IO env)
    -- ^ Create an environment for a batch of N runs.  The environment will be
    -- evaluated to normal form before running.
    -> (Int64 -> env -> IO ())
    -- ^ Clean up the created environment.
    -> (env -> IO b)
    -- ^ Function returning the IO action that should be benchmarked with the
    -- newly generated environment.
    -> Benchmarkable
perBatchEnvWithCleanup alloc clean work
    = Benchmarkable alloc clean (impure rnf . work) False

-- | Create a Benchmarkable where a fresh environment is allocated for
-- every run of the operation to benchmark.
perRunEnv
    :: (NFData env, NFData b)
    => IO env
    -- ^ Action that creates the environment for a single run.
    -> (env -> IO b)
    -- ^ Function returning the IO action that should be benchmarked with the
    -- newly generated environment.
    -> Benchmarkable
perRunEnv alloc = perRunEnvWithCleanup alloc noop

-- | Same as `perRunEnv`, but allows for an additional callback
-- to clean up the environment.  Resource clean up is exception safe, that is,
-- it runs even if the 'Benchmark' throws an exception.
perRunEnvWithCleanup
    :: (NFData env, NFData b)
    => IO env
    -- ^ Action that creates the environment for a single run.
    -> (env -> IO ())
    -- ^ Clean up the created environment.
    -> (env -> IO b)
    -- ^ Function returning the IO action that should be benchmarked with the
    -- newly generated environment.
    -> Benchmarkable
perRunEnvWithCleanup alloc clean work = bm { perRun = True }
  where bm = perBatchEnvWithCleanup (const alloc) (const clean) work

-- | Create a single benchmark.
bench :: String                 -- ^ A name to identify the benchmark.
      -> Benchmarkable          -- ^ An activity to be benchmarked.
      -> Benchmark
bench = Benchmark

-- | Group several benchmarks together under a common name.
bgroup :: String                -- ^ A name to identify the group of benchmarks.
       -> [Benchmark]           -- ^ Benchmarks to group under this name.
       -> Benchmark
bgroup = BenchGroup

-- | Add the given prefix to a name.  If the prefix is empty, the name
-- is returned unmodified.  Otherwise, the prefix and name are
-- separated by a @\'\/\'@ character.
addPrefix :: String             -- ^ Prefix.
          -> String             -- ^ Name.
          -> String
addPrefix ""  desc = desc
addPrefix pfx desc = pfx ++ '/' : desc

-- | Retrieve the names of all benchmarks.  Grouped benchmarks are
-- prefixed with the name of the group they're in.
benchNames :: Benchmark -> [String]
benchNames (Environment _ _ b) = benchNames (b undefined)
benchNames (Benchmark d _)     = [d]
benchNames (BenchGroup d bs)   = map (addPrefix d) . concatMap benchNames $ bs

instance Show Benchmark where
    show (Environment _ _ b) = "Environment _ _" ++ show (b undefined)
    show (Benchmark d _)     = "Benchmark " ++ show d
    show (BenchGroup d _)    = "BenchGroup " ++ show d

measure :: (U.Unbox a) => (Measured -> a) -> V.Vector Measured -> U.Vector a
measure f v = U.convert . V.map f $ v

-- | Outliers from sample data, calculated using the boxplot
-- technique.
data Outliers = Outliers {
      samplesSeen :: !Int64
    , lowSevere   :: !Int64
      -- ^ More than 3 times the interquartile range (IQR) below the
      -- first quartile.
    , lowMild     :: !Int64
      -- ^ Between 1.5 and 3 times the IQR below the first quartile.
    , highMild    :: !Int64
      -- ^ Between 1.5 and 3 times the IQR above the third quartile.
    , highSevere  :: !Int64
      -- ^ More than 3 times the IQR above the third quartile.
    } deriving (Eq, Read, Show, Typeable, Data, Generic)

instance FromJSON Outliers
instance ToJSON Outliers

instance Binary Outliers where
    put (Outliers v w x y z) = put v >> put w >> put x >> put y >> put z
    get = Outliers <$> get <*> get <*> get <*> get <*> get

instance NFData Outliers

-- | A description of the extent to which outliers in the sample data
-- affect the sample mean and standard deviation.
data OutlierEffect = Unaffected -- ^ Less than 1% effect.
                   | Slight     -- ^ Between 1% and 10%.
                   | Moderate   -- ^ Between 10% and 50%.
                   | Severe     -- ^ Above 50% (i.e. measurements
                                -- are useless).
                     deriving (Eq, Ord, Read, Show, Typeable, Data, Generic)

instance FromJSON OutlierEffect
instance ToJSON OutlierEffect

instance Binary OutlierEffect where
    put Unaffected = putWord8 0
    put Slight     = putWord8 1
    put Moderate   = putWord8 2
    put Severe     = putWord8 3
    get = do
      i <- getWord8
      case i of
        0 -> return Unaffected
        1 -> return Slight
        2 -> return Moderate
        3 -> return Severe
        _ -> fail $ "get for OutlierEffect: unexpected " ++ show i

instance NFData OutlierEffect

instance Monoid Outliers where
    mempty  = Outliers 0 0 0 0 0
    mappend = addOutliers

addOutliers :: Outliers -> Outliers -> Outliers
addOutliers (Outliers s a b c d) (Outliers t w x y z) =
    Outliers (s+t) (a+w) (b+x) (c+y) (d+z)
{-# INLINE addOutliers #-}

-- | Analysis of the extent to which outliers in a sample affect its
-- standard deviation (and to some extent, its mean).
data OutlierVariance = OutlierVariance {
      ovEffect   :: OutlierEffect
      -- ^ Qualitative description of effect.
    , ovDesc     :: String
      -- ^ Brief textual description of effect.
    , ovFraction :: Double
      -- ^ Quantitative description of effect (a fraction between 0 and 1).
    } deriving (Eq, Read, Show, Typeable, Data, Generic)

instance FromJSON OutlierVariance
instance ToJSON OutlierVariance

instance Binary OutlierVariance where
    put (OutlierVariance x y z) = put x >> put y >> put z
    get = OutlierVariance <$> get <*> get <*> get

instance NFData OutlierVariance where
    rnf OutlierVariance{..} = rnf ovEffect `seq` rnf ovDesc `seq` rnf ovFraction

-- | Results of a linear regression.
data Regression = Regression {
      regResponder :: String
      -- ^ Name of the responding variable.
    , regCoeffs    :: Map String (St.Estimate St.ConfInt Double)
      -- ^ Map from name to value of predictor coefficients.
    , regRSquare   :: St.Estimate St.ConfInt Double
      -- ^ R² goodness-of-fit estimate.
    } deriving (Eq, Read, Show, Typeable, Generic)

instance FromJSON Regression
instance ToJSON Regression

instance Binary Regression where
    put Regression{..} =
      put regResponder >> put regCoeffs >> put regRSquare
    get = Regression <$> get <*> get <*> get

instance NFData Regression where
    rnf Regression{..} =
      rnf regResponder `seq` rnf regCoeffs `seq` rnf regRSquare

-- | Result of a bootstrap analysis of a non-parametric sample.
data SampleAnalysis = SampleAnalysis {
      anRegress    :: [Regression]
      -- ^ Estimates calculated via linear regression.
    , anOverhead   :: Double
      -- ^ Estimated measurement overhead, in seconds.  Estimation is
      -- performed via linear regression.
    , anMean       :: St.Estimate St.ConfInt Double
      -- ^ Estimated mean.
    , anStdDev     :: St.Estimate St.ConfInt Double
      -- ^ Estimated standard deviation.
    , anOutlierVar :: OutlierVariance
      -- ^ Description of the effects of outliers on the estimated
      -- variance.
    } deriving (Eq, Read, Show, Typeable, Generic)

instance FromJSON SampleAnalysis
instance ToJSON SampleAnalysis

instance Binary SampleAnalysis where
    put SampleAnalysis{..} = do
      put anRegress; put anOverhead; put anMean; put anStdDev; put anOutlierVar
    get = SampleAnalysis <$> get <*> get <*> get <*> get <*> get

instance NFData SampleAnalysis where
    rnf SampleAnalysis{..} =
        rnf anRegress `seq` rnf anOverhead `seq` rnf anMean `seq`
        rnf anStdDev `seq` rnf anOutlierVar

-- | Data for a KDE chart of performance.
data KDE = KDE {
      kdeType   :: String
    , kdeValues :: U.Vector Double
    , kdePDF    :: U.Vector Double
    } deriving (Eq, Read, Show, Typeable, Data, Generic)

instance FromJSON KDE
instance ToJSON KDE

instance Binary KDE where
    put KDE{..} = put kdeType >> put kdeValues >> put kdePDF
    get = KDE <$> get <*> get <*> get

instance NFData KDE where
    rnf KDE{..} = rnf kdeType `seq` rnf kdeValues `seq` rnf kdePDF

-- | Report of a sample analysis.
data Report = Report {
      reportNumber   :: Int
      -- ^ A simple index indicating that this is the /n/th report.
    , reportName     :: String
      -- ^ The name of this report.
    , reportKeys     :: [String]
      -- ^ See 'measureKeys'.
    , reportMeasured :: V.Vector Measured
      -- ^ Raw measurements.  These are /not/ corrected for the
      -- estimated measurement overhead that can be found via the
      -- 'anOverhead' field of 'reportAnalysis'.
    , reportAnalysis :: SampleAnalysis
      -- ^ Report analysis.
    , reportOutliers :: Outliers
      -- ^ Analysis of outliers.
    , reportKDEs     :: [KDE]
      -- ^ Data for a KDE of times.
    } deriving (Eq, Read, Show, Typeable, Generic)

instance FromJSON Report
instance ToJSON Report

instance Binary Report where
    put Report{..} =
      put reportNumber >> put reportName >> put reportKeys >>
      put reportMeasured >> put reportAnalysis >> put reportOutliers >>
      put reportKDEs
    get = Report <$> get <*> get <*> get <*> get <*> get <*> get <*> get

instance NFData Report where
    rnf Report{..} =
      rnf reportNumber `seq` rnf reportName `seq` rnf reportKeys `seq`
      rnf reportMeasured `seq` rnf reportAnalysis `seq` rnf reportOutliers `seq`
      rnf reportKDEs

data DataRecord = Measurement Int String (V.Vector Measured)
                | Analysed Report
                  deriving (Eq, Read, Show, Typeable, Generic)

instance Binary DataRecord where
  put (Measurement i n v) = putWord8 0 >> put i >> put n >> put v
  put (Analysed r)        = putWord8 1 >> put r

  get = do
    w <- getWord8
    case w of
      0 -> Measurement <$> get <*> get <*> get
      1 -> Analysed <$> get
      _ -> error ("bad tag " ++ show w)

instance NFData DataRecord where
  rnf (Measurement i n v) = rnf i `seq` rnf n `seq` rnf v
  rnf (Analysed r)        = rnf r

instance FromJSON DataRecord
instance ToJSON DataRecord
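-- A minimal usage sketch of the combinators above (an illustrative
-- addition, not part of the original module).  'defaultMain' comes from
-- Criterion.Main rather than this module, and 'fib' is an invented
-- stand-in function:
--
-- > import Criterion.Main (defaultMain)
-- > import Criterion.Types (bench, bgroup, env, nf, whnf)
-- >
-- > fib :: Int -> Integer
-- > fib n = if n < 2 then fromIntegral n else fib (n - 1) + fib (n - 2)
-- >
-- > main :: IO ()
-- > main = defaultMain
-- >   [ bgroup "fib"
-- >       [ bench "fib 10" $ whnf fib 10  -- WHNF suffices for an Integer result
-- >       , bench "fib 20" $ nf fib 20    -- force the result to normal form
-- >       ]
-- >   , env (return [1 .. 10000 :: Int]) $ \xs ->  -- share input data lazily
-- >       bench "sum" $ whnf sum xs
-- >   ]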
http://hackage.haskell.org/package/criterion-1.2.0.0/docs/src/Criterion-Types.html
CC-MAIN-2019-30
en
refinedweb
I am trying to use PyTorch with TensorBoard, and I run the TensorBoard server with the following command:

```
tensorboard --logdir=./runs/
```

Now I am just simulating some fake data as follows:

```python
import numpy as np
import time
from torch.utils.tensorboard import SummaryWriter

train_writer = SummaryWriter(log_dir="./runs/train/")
for i in range(100):
    v = np.random.randint(10, 100)
    train_writer.add_scalar("loss", v, i)
    #time.sleep(1)
train_writer.close()
```

Now, if I do not put the sleep call in there, the script finishes and I can see the graph on the TensorBoard front end. I put the sleep in there so that I could see the data arrive in a streaming fashion and watch the board updating. However, if I put the sleep in there, no graph ever shows up, and all I see is the message that there is no data or that it could not be found. I am not sure if this is a TensorBoard issue or if I am doing something wrong in using it from within PyTorch.
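A likely explanation — this is my note, not part of the original question — is that `SummaryWriter` buffers events and only flushes them to disk periodically (every 120 seconds by default), so a slowed-down loop writes nothing TensorBoard can see for a long time. A sketch of the workaround using the writer's own `flush()` method and `flush_secs` parameter:

```python
import numpy as np
import time
from torch.utils.tensorboard import SummaryWriter

# flush_secs lowers the interval at which buffered events hit the disk
train_writer = SummaryWriter(log_dir="./runs/train/", flush_secs=1)
for i in range(100):
    v = np.random.randint(10, 100)
    train_writer.add_scalar("loss", v, i)
    train_writer.flush()   # or force each data point out immediately
    time.sleep(1)
train_writer.close()
```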
https://discuss.pytorch.org/t/pytorch-and-tensorboard-logging/49729
CC-MAIN-2019-30
en
refinedweb
Asked by: best obfuscator

Question

All replies

Hi, I use Eziriz's .NET Reactor. Of course, as you said, they all claim that they're the best, but besides its power and simplicity of use, Reactor is really cheap (only about $160), so it's not a big risk to test it; although I would advise you to use it for everything it offers, not only the price. You can download the trial here:

HTH, cheers, farshad

Hi, as I mentioned, I don't advise .NET Reactor for its price, but for its power; I'm really satisfied with it. Another solution that I think should be a reliable one is Xenocode; you can take a look at it, since I've only tested its trial (the trial is fully functional) and have no idea of all of its power.

HTH, cheers, farshad

One thing I noticed is that lots of obfuscators out there are more or less "project products". They have limited or sometimes non-existent sales and support systems. The only one which looks commercially decent is Dotfuscator (probably the reason why it is more pricey). So what is your take on Dotfuscator?

Please don't mind, but Eziriz is just crap. You lose all the benefits of .NET, and at runtime your application will consume 10-15 MB more memory than the original exe. You can use Reflection. I tried it and never used it again. Xenocode is good, but try {smartassembly} — the best in my opinion for price, performance, and facilities. I have the {smartassembly} Professional version with a 1-year subscription to the Error Reporting web service, which sends a complete stack trace — a very useful feature. I'm not from that company :P lol, I'm just a client, but I loved the product.

Some comparisons I did:

1) An assembly processed with {smartassembly} is smaller than one processed with Xenocode.

2) Xenocode preserves namespace names, which can never be obfuscated, so you give a hint to the cracker that you have code related to security in a Rizwansharp.Security namespace — while {smartassembly} does not.

3) In {smartassembly} there is a single option to encrypt all strings or not; setting this to true, all your strings are automatically encoded. In Xenocode you have to mark each of the 10000 strings you used — which to encrypt and which not.

4) Its user interface is like 1, 2, 3 — done! I'm going to date my girlfriend now :P.

5) I always got replies to my queries within 3-4 hours maximum.

Price: Xenocode = $400; {smartassembly} = $399 - $799 (3 versions). The Professional version is about $599 and it's an awesome tool that works in minutes, and you're done! The Standard Edition costs about the same as Xenocode, but I'd recommend the Professional version with the Error Reporting system, because a client can never tell you the exact situation in which an error arose.

Both products are available fully functional for some 15-20 days, and all the above claims can be verified by you.

Best Regards,

There is no best obfuscator. Every product will have a few shortcomings. If you want a professional, affordable product with various protection and obfuscation functionality and flexibility of use (directly via UI, via command line, via MSBuild), then take a look at Crypto Obfuscator.
https://social.msdn.microsoft.com/Forums/vstudio/en-US/4eb99036-68b2-4567-aac1-170f8e570122/best-obfuscator?forum=csharpgeneral
CC-MAIN-2019-30
en
refinedweb
In this tutorial, you will learn in depth about C++ constructors and their types, with examples.

C++ programming constructors

C++ constructors are special member functions which are called when an object of their class is created or defined, and whose task is to initialize the object. A constructor is so called because it constructs the values of the data members of the class. A constructor has the same name as the class and it doesn't have any return type. It is invoked whenever an object of its associated class is created. When a class is instantiated, even if we don't declare a constructor, the compiler automatically creates one for the program. This compiler-created constructor is called the default constructor.

A constructor is defined as follows:

```cpp
/*.....class with constructor..........*/
class class_name
{
    .........
public:
    class_name();    //constructor declared or constructor prototype
    .........
};

class_name :: class_name()    //constructor defined
{
    //constructor function body
}
```

Types of C++ constructors

- Default constructor
- Parameterized constructor
- Copy constructor

Default constructor

If no constructor is defined in the class, then the compiler automatically creates one for the program. This constructor, which is created by the compiler when there is no user-defined constructor and which doesn't take any parameters, is called the default constructor.

Format of the default constructor:

```cpp
/*.....format of default constructor..........*/
class class_name
{
    .........
public:
    class_name() { };    //default constructor
    .........
};
```

Parameterized constructor

To put it simply, constructors that can take arguments are called parameterized constructors. In practical programs, we often need to initialize the various data elements of different objects with different values when they are created. This can be achieved by passing arguments to the constructor function when the object is created.

The following sample program illustrates the concept of a parameterized constructor:

```cpp
/*.....A program to find the area of a rectangle..........*/
#include<iostream>
using namespace std;

class ABC
{
private:
    int length, breadth, x;

public:
    ABC(int a, int b)    //parameterized constructor to initialize length and breadth
    {
        length = a;
        breadth = b;
    }

    int area()           //function to find area
    {
        x = length * breadth;
        return x;
    }

    void display()       //function to display the area
    {
        cout << "Area = " << x << endl;
    }
};

int main()
{
    ABC c(2, 4);          //initializing the data members of object 'c' implicitly
    c.area();
    c.display();

    ABC c1 = ABC(4, 4);   //initializing the data members of object 'c1' explicitly
    c1.area();
    c1.display();

    return 0;
}    //end of program
```

Output

```
Area = 8
Area = 16
```

Note: Remember that a constructor is always declared and defined in the public section of the class, and we can't refer to its address.

Copy constructor

Generally, an object of a class can't be passed to its own constructor as a value parameter, but the class's own object can be passed as a reference parameter. Such a constructor, having a reference to an object of its own class, is known as a copy constructor. It creates a new object as a copy of an existing object. For classes which do not have a user-defined copy constructor, the compiler itself creates a copy constructor for each class, known as the default copy constructor.

Example to illustrate the concept of the copy constructor:

```cpp
/*.....A program to highlight the concept of copy constructor..........*/
#include<iostream>
using namespace std;

class example
{
private:
    int x;

public:
    example(int a)         //parameterized constructor to initialize x
    {
        x = a;
    }

    example(example &b)    //copy constructor with reference object argument
    {
        x = b.x;
    }

    int display()          //function to display
    {
        return x;
    }
};

int main()
{
    example c1(2);        //initializing the data member of object 'c1' implicitly
    example c2(c1);       //copy constructor called
    example c3 = c1;
    example c4 = c2;

    cout << "example c1 = " << c1.display() << endl;
    cout << "example c2 = " << c2.display() << endl;
    cout << "example c3 = " << c3.display() << endl;
    cout << "example c4 = " << c4.display() << endl;

    return 0;
}    //end of program
```

Output

```
example c1 = 2
example c2 = 2
example c3 = 2
example c4 = 2
```

Note: In a copy constructor, passing the argument by value is not possible.
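One refinement the tutorial doesn't show — added here as a side note, not part of the original — is that idiomatic C++ usually initializes members with a member initializer list rather than with assignments in the constructor body:

```cpp
#include <iostream>

class ABC
{
private:
    int length, breadth;

public:
    // member initializer list: members are initialized directly,
    // rather than default-constructed and then assigned
    ABC(int a, int b) : length(a), breadth(b) { }

    int area() const { return length * breadth; }
};

int main()
{
    ABC c(2, 4);
    std::cout << "Area = " << c.area() << std::endl;   // Area = 8
    return 0;
}
```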
http://www.trytoprogram.com/cplusplus-programming/constructors
CC-MAIN-2019-30
en
refinedweb
I'm trying that ...

Hello everyone, I'm trying this test ... my `i2cset` doesn't support the "mode i" (I2C block) option ...

`# i2cdetect -y 0` works fine and gives the correct I2C bus address of the LCD -> 0x27, so I've done:

```
# i2cset -y 0 0x27 0x08 0x08    # turn on the backlight
# i2cset -y 0 0x27 0x08 0x00    # turn off the backlight
```

but I can't write a block I2C command with the mode `i` option ... I get an error. ...

```
i2cset -y 0 0x27 0x00 0x17 0x06 0x4F 0x5B 0x7F i
```

I've tried this ... but nothing happens. I've found a Python lib named Periphery, but no results yet, just the backlight. The lib is at GitHub; I'm using this part only: i2c.py. It is very easy with Arduino or Raspi ... some lib missing ...

Some code :+1:

```python
#!/usr/bin/python
import i2c
import pyOmega        # uSec and mSec wait routines
from time import sleep

# Open the i2c-0 controller
chip = i2c.I2C("/dev/i2c-0")

chip.transfer(0x27, [i2c.I2C.Message([0x20, 0x08])])   # turn backlight ON
sleep(5)
chip.transfer(0x27, [i2c.I2C.Message([0x20, 0x00])])   # turn backlight OFF
chip.close()
```

But nothing more .... :crying_cat_face: :worried:

@Gwena56 So just to make sure I understand you correctly: you are able to turn the backlight of the LCD screen on and off, but you are unable to send any other command to it. Is that correct? Does it give out any error messages when you try to run the commands? Or does nothing happen after you have run the commands?
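For reference, here is a sketch of what a multi-byte (block-style) write might look like with the same `transfer`/`Message` API used above. The register value 0x00 and the payload bytes are taken from the i2cset attempt in the post, but whether the LCD backpack accepts them in a single message is an assumption about the device, not something verified here:

```python
#!/usr/bin/python
import i2c

ADDRESS = 0x27
chip = i2c.I2C("/dev/i2c-0")

# One I2C message carrying the command byte followed by the data bytes,
# mirroring: i2cset -y 0 0x27 0x00 0x17 0x06 0x4F 0x5B 0x7F i
payload = [0x00, 0x17, 0x06, 0x4F, 0x5B, 0x7F]
chip.transfer(ADDRESS, [i2c.I2C.Message(payload)])

chip.close()
```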
https://community.onion.io/topic/270/i-m-trying-that/4
CC-MAIN-2019-30
en
refinedweb
Simple data types of the kind we saw in the previous section are fine for storing single data items, but data is often more complex. Like JavaScript, Java supports arrays as well. Here's an example. In this case, I'll store the balances in customers' charge accounts in an array named chargesDue. I start by declaring that array, making it of type double:

```java
public class ch10_04
{
    public static void main(String[] args)
    {
        double chargesDue[];
        .
        .
        .
```

Besides declaring the array, you have to allocate the number of elements you want the array to hold. You do that using the new operator:

```java
public class ch10_04
{
    public static void main(String[] args)
    {
        double chargesDue[];
        chargesDue = new double[100];
        .
        .
        .
```

You can combine the ...
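The excerpt is cut off at this point; presumably it goes on to show declaration and allocation combined into one statement. A sketch of what that combined form looks like (the sample values are my own illustration, not from the book):

```java
public class ch10_04
{
    public static void main(String[] args)
    {
        // declaration and allocation in a single statement
        double chargesDue[] = new double[100];

        // elements are indexed from 0 and default to 0.0
        chargesDue[0] = 199.99;
        System.out.println("First balance: " + chargesDue[0]);
    }
}
```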
https://www.oreilly.com/library/view/real-world-xml/0735712867/0735712867_ch10lev1sec12.html
CC-MAIN-2019-30
en
refinedweb
Provided by: libexplain-dev_1.4.D001-8_amd64

NAME
       explain_truncate_or_die - truncate a file and report errors

SYNOPSIS
       #include <libexplain/truncate.h>
       void explain_truncate_or_die(const char *pathname, long long length);

DESCRIPTION
       The explain_truncate_or_die function is used to call the truncate(2)
       system call. On failure, an explanation is printed to stderr, obtained
       from explain_truncate(3), and then the process terminates by calling
       exit(EXIT_FAILURE).

SEE ALSO
       truncate(2)
              truncate a file to a specified length

       explain_truncate(3)
              explain truncate(2) errors

       exit(2)
              terminate the calling process

       libexplain version 1.4
       Copyright (C) 2008 Peter Miller

                                                   explain_truncate_or_die(3)
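A minimal usage sketch of the function described above (not part of the original page; the file name and length are placeholders):

       #include <libexplain/truncate.h>

       int main(void)
       {
           /* On failure this prints an explanation to stderr and exits;
              on success it simply returns. */
           explain_truncate_or_die("/tmp/example.dat", 1024);
           return 0;
       }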
http://manpages.ubuntu.com/manpages/disco/man3/explain_truncate_or_die.3.html
CC-MAIN-2019-30
en
refinedweb
SPSite.OpenWeb method (String)

Returns the Web site that is located at the specified server-relative or site-relative URL.

Namespace: Microsoft.SharePoint
Assembly: Microsoft.SharePoint (in Microsoft.SharePoint.dll)

Syntax

```vb
'Declaration
Public Function OpenWeb ( _
    strUrl As String _
) As SPWeb

'Usage
Dim instance As SPSite
Dim strUrl As String
Dim returnValue As SPWeb

returnValue = instance.OpenWeb(strUrl)
```

```csharp
public SPWeb OpenWeb(
    string strUrl
)
```

Return value

Type: Microsoft.SharePoint.SPWeb
An SPWeb object that represents the Web site.

Remarks

Examples

The following code example displays the URL for a specified Web site in a console application, using a site-relative URL for the OpenWeb method. The example assumes the existence of a subsite at MyWebSite/MySubSite within the site collection.

```vb
Dim strUrl As String = "http://MyServer/sites/MySiteCollection" ' hypothetical site-collection URL

Using oSiteCollection As New SPSite(strUrl)
    Using oWebsite As SPWeb = oSiteCollection.OpenWeb("MyWebSite/MySubSite")
        Console.WriteLine(("Website: " + oWebsite.Url))
    End Using
End Using
```

```csharp
string strUrl = "http://MyServer/sites/MySiteCollection"; // hypothetical site-collection URL

using (SPSite oSiteCollection = new SPSite(strUrl))
{
    using (SPWeb oWebsite = oSiteCollection.OpenWeb("MyWebSite/MySubSite"))
    {
        Console.WriteLine("Website: " + oWebsite.Url);
    }
}
```

Note: Certain objects implement the IDisposable interface, and you must avoid retaining these objects in memory after they are no longer needed. For information about good coding practices, see Disposing Objects.

See also

Reference

Microsoft.SharePoint namespace
https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-server/ms474633%28v%3Doffice.15%29
CC-MAIN-2019-30
en
refinedweb
Parameterized Test Example in .NET Core Using NUnit

A lot of times when writing unit tests, we end up with a lot of test methods that look the same and actually do the same thing. Read on for a better way.

A lot of times when writing unit tests, we end up with many test methods that look the same and actually do the same thing. There are also special cases where we want high test coverage and in-depth testing for our crucial, very important core functionality methods. For example, when creating a framework or a library, we usually want to write many tests and cover all possible aspects and outcomes, which can lead to a large number of near-identical test methods. Very often, we end up with test methods that share the same logic and behavior but use different input and data values. We are going to create parameterized tests that test the same method with different values.

The Scenario

Let's start with a simple method that calculates the total price as price × quantity and then applies a discount to the total:

```csharp
public static double calculate(double price, int quantity, double discount)
{
    double totalPrice = price * quantity;
    double totalPriceWithDiscount = System.Math.Round(totalPrice - (totalPrice * discount / 100), 2);
    return totalPriceWithDiscount;
}
```

As simple as it looks, there is a lot of other important stuff to test here, like:

- nulls
- negative and zero inputs
- exception/input-validation handling
- rounding

But we are not going to cover these here. We are going to focus on the parameterized test and on validating the mathematical correctness of the calculation method.

Without a parameterized test, we have this plain test:

```csharp
[Test]
public void testCalculate()
{
    Assert.AreEqual(100, MyClass.calculate(10, 10, 0));
}
```

This passes and, indeed, if the price of a product is 10, the quantity is 10, and we have zero discount, then the total price is 100. The problem is that with this setup, if we want to test different values and results, we have to write a different test method for every different input.

The TestCase Attribute

We start by converting the above test to a parameterized test using the TestCase attribute:

```csharp
[TestCase(10, 10, 10, 90)]
[TestCase(10, 10, 0, 100)]
public void testCalculate(double price, int quantity, double discount, double expectedFinalAmount)
{
    Assert.AreEqual(expectedFinalAmount, MyClass.calculate(price, quantity, discount));
}
```

Now this method will run once for every TestCase attribute it has. At runtime, the values provided in the attributes are mapped onto the method parameters. In our example, this test will run two times. We can pass reference types and value types. By convention, the input values come first and the expected result is the last parameter.

The TestCaseSource Attribute

With TestCase, we have to add an attribute on top of the test method for every different input. To organize the code, and for reusability, we are going to use the TestCaseSource attribute: we create a provider method and centralize the input data there.

First, we create the provider method and fill it with the data we want:

```csharp
public static IEnumerable<TestCaseData> priceProvider()
{
    yield return new TestCaseData(10, 10, 10, 90);
    yield return new TestCaseData(10, 10, 0, 100);
}
```

And we refactor the testCalculate method to use the priceProvider method:

```csharp
[Test, TestCaseSource("priceProvider")]
public void testCalculate(double price, int quantity, double discount, double expectedFinalAmount)
{
    Assert.AreEqual(expectedFinalAmount, MyClass.calculate(price, quantity, discount));
}
```

This is the same as having the TestCase attributes on top of the method. We can also put the provider methods in a different class, to isolate and centralize the data at the class/file level:

```csharp
[Test, TestCaseSource(typeof(MyProviderClass), "priceProvider")]
public void testCalculate(double price, int quantity, double discount, double expectedFinalAmount)
{
    Assert.AreEqual(expectedFinalAmount, MyClass.calculate(price, quantity, discount));
}
```

priceProvider is a static method inside MyProviderClass.

Extra Parameterization With the Help of the TestFixture Attribute

Let's add one more level of parameterization with the help of the TestFixture attribute. TestFixture is usually a class attribute that marks a class as containing tests. One of its bigger features, though, is that TestFixture can take constructor arguments: NUnit will create and test a separate instance of the class for every input set.

Let's assume that, in addition to the final amount we tested above, an extra amount is applied depending on the product's category, which can be category 1 or 2:

```csharp
[TestFixture(typeof(int), typeof(double), 1, 5)]
[TestFixture(typeof(int), typeof(double), 2, 6.5)]
public class TestCharge<T, X>
{
    T categoryType;
    X extraValue;

    public TestCharge(T t, X x)
    {
        this.categoryType = t;
        this.extraValue = x;
    }
}
```

We know that in category one the extra amount is 5 and in category two it is 6.5. We can now run all the tests again, once for every TestFixture we provided. For example, we test the calculation depending on the category:

```csharp
[Test, TestCaseSource(typeof(MyProviderClass), "priceProvider")]
public void testCalculateCategory(double price, int quantity, double discount, double expectedFinalAmount)
{
    Assert.AreEqual(expectedFinalAmount + (double)(object)this.extraValue,
        MyClass.calculateCategory(price, quantity, discount, (int)(object)this.categoryType));
}
```

Final Words

Remember, what makes a good unit test is its simplicity, the ease of reading and writing it, and its reliability; it should not be treated as an integration test, and it has to be fast.

The original and complete repository of code samples can be found here.
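One more NUnit feature worth mentioning — this is an addition beyond the article: TestCaseData can also carry the expected value via .Returns(), in which case the test method returns the result instead of asserting. A sketch, assuming the same calculate method:

```csharp
public static IEnumerable<TestCaseData> priceProviderWithReturns()
{
    yield return new TestCaseData(10, 10, 10).Returns(90);
    yield return new TestCaseData(10, 10, 0).Returns(100);
}

[TestCaseSource("priceProviderWithReturns")]
public double testCalculateReturns(double price, int quantity, double discount)
{
    // NUnit compares the returned value against the .Returns(...) expectation
    return MyClass.calculate(price, quantity, discount);
}
```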
https://dzone.com/articles/parameterized-test-example-in-the-net-core-using-n?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev
CC-MAIN-2019-30
en
refinedweb
I’d like to start using Cosmos, and I have a bunch of questions about it – how to create databases, how to write to it and read from it, how can I use attachments and spatial data, how can I secure it, how can I test the code that uses it…and lots more. So I’m going to write a few posts over the coming weeks which hopefully will answer these questions, starting with some basics and moving to more advanced topics in later posts.

Can I trial Cosmos to help me understand it a bit more?

Fortunately Microsoft has an answer for this – they’ve provided a Cosmos emulator, so I can trial Cosmos without going near the Azure cloud. The official Microsoft docs on the Cosmos Emulator are fantastic – you can install it locally or use a Docker image. My own preference is to use the installer. I’ve tried using the Docker image, and this needs to download a Windows container which totals well over 5GB, which can take a long time. The emulator’s installer is only about 50MB, and I was able to get up and running with this a lot faster than with Docker containers.

There were some snags when I installed it – after trying to run it for the first time, I got an error message. But this was pretty easy to work around by just following the instruction in the message and running the emulator with the NoFirewall option:

```
Microsoft.Azure.Cosmos.Emulator.exe /NoFirewall
```

I prefer to manage the emulator from PowerShell – to do this, after installing the emulator, I run the PowerShell command below to import modules that let me use some useful PowerShell commands:

```powershell
Import-Module "$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules\Microsoft.Azure.CosmosDB.Emulator"
```

And now I can control the emulator with those built-in PowerShell commands.

The Cosmos Emulator’s Local Data Explorer

When I’ve started the emulator, I can browse to the emulator’s local Data Explorer page (https://localhost:8081/_explorer/index.html), which has some quickstart connection information, like connection strings and samples.

But more interestingly, I can also browse to a data explorer which allows me to browse databases in my Cosmos emulator, and collections within these databases, using a SQL-like language. Of course, right after I install the emulator there are no databases or collections – but let’s start writing some .NET Core code to change that.

Let’s write to, and read from, some Cosmos databases and collections with .NET Core

I’m going to write a very simple application to interact with the Cosmos Emulator. This isn’t production-ready code – this is just to examine how we might carry out some common database operations using .NET Core and Azure Cosmos. I’m using Visual Studio 2019 with the .NET Core 3.0 preview (3.0.100-preview-010184), and I’ve created an empty .NET Core console application.

My sample application will store information about interesting places near me – so I’ve chosen to create a Cosmos database with the title “LocalLandmarks”. I’m going to create a collection in this database for natural landmarks, and in this first blog I’m only going to store the landmark name.

From my application, I need to install a NuGet package to access the Azure Cosmos libraries:

```
Install-Package Microsoft.Azure.DocumentDB.Core
```

First let’s set up some parameters and objects:

- Our Cosmos Emulator endpoint is just https://localhost:8081;
- We know from the Data Explorer that the emulator key is the fixed value Microsoft publishes (this is the same for everyone that uses the emulator): C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==;
- I’m going to call my database “LocalLandmarks”;
- I’m going to call the collection of natural landmarks “NaturalSites”;
- My POCO for natural landmarks can be very simple for now:

```csharp
namespace CosmosEmulatorSample
{
    public class NaturalSite
    {
        public string Name { get; set; }
    }
}
```

So I can specify a few static readonly strings for my application:

```csharp
private static readonly string CosmosEndpoint = "https://localhost:8081";
private static readonly string EmulatorKey = "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==";
private static readonly string DatabaseId = "LocalLandmarks";
private static readonly string NaturalSitesCollection = "NaturalSites";
```

We can create a client to connect to our Cosmos Emulator using our specified parameters and the code below:

```csharp
// Create the client connection
var client = new DocumentClient(
    new Uri(CosmosEndpoint),
    EmulatorKey,
    new ConnectionPolicy
    {
        ConnectionMode = ConnectionMode.Direct,
        ConnectionProtocol = Protocol.Tcp
    });
```

And now, using this client, we can create our “LocalLandmarks” database. I’ve used the “Result” property to make many of the asynchronous calls synchronous, for simplicity in this introductory post.

```csharp
// Create a new database in Cosmos
var databaseCreationResult = client.CreateDatabaseAsync(new Database { Id = DatabaseId }).Result;

Console.WriteLine("The database Id created is: " + databaseCreationResult.Resource.Id);
```

Within this database, we can also create a collection to store our natural landmarks:

```csharp
// Now initialize a new collection for our objects to live inside
var collectionCreationResult = client.CreateDocumentCollectionAsync(
    UriFactory.CreateDatabaseUri(DatabaseId),
    new DocumentCollection { Id = NaturalSitesCollection }).Result;

Console.WriteLine("The collection created has the ID: " + collectionCreationResult.Resource.Id);
```

So let’s declare and initialize a NaturalSite object – an example of a natural landmark near me is the Giant’s Causeway:

```csharp
// Let's instantiate a POCO with a local landmark
var giantsCauseway = new NaturalSite { Name = "Giant's Causeway" };
```

And I can pass this object to the Cosmos client’s “CreateDocumentAsync” method to write it to Cosmos, specifying the target database and collection in this method also:

```csharp
// Add this POCO as a document in Cosmos to our natural site collection
var itemResult = client
    .CreateDocumentAsync(
        UriFactory.CreateDocumentCollectionUri(DatabaseId, NaturalSitesCollection),
        giantsCauseway)
    .Result;

Console.WriteLine("The document has been created with the ID: " + itemResult.Resource.Id);
```

At this point, I can look at the Cosmos Emulator’s Data Explorer and see this document in my database.

Finally, I can read back from this NaturalSite collection by ID – I know the ID of the document I just created in Cosmos, so I can just call the Cosmos client’s “ReadDocumentAsync” method and specify the database Id, the collection I want to search in, and the document Id that I want to retrieve. I convert the result to a NaturalSite POCO, and then I can read properties back from it:

```csharp
// Use the ID to retrieve the object we just created
var document = client
    .ReadDocumentAsync(
        UriFactory.CreateDocumentUri(DatabaseId, NaturalSitesCollection, itemResult.Resource.Id))
    .Result;

// Convert the document resource returned to a NaturalSite POCO
NaturalSite site = (dynamic)document.Resource;

Console.WriteLine("The returned document is a natural landmark with name: " + site.Name);
```

I’ve uploaded this code to GitHub here.

Wrapping up

In this post, I’ve written about the Azure Cosmos emulator, which I’ve used to experiment with coding for Cosmos. I’ve written a little bit of very basic C# code which uses the Cosmos SDK to create databases and collections, write to these collections, and also read documents from collections by primary key. Of course, this kind of query might not be that useful – we probably don’t know the IDs of the documents saved to the database (and probably don’t care either, as they’re non-semantic). In the next part of this series, I’ll write about querying Cosmos documents by object properties using .NET.

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!
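As a teaser for that next part — and to be clear, this sketch is mine, not from the post — the same SDK can query by a document property with CreateDocumentQuery and a SQL-like filter:

```csharp
// Query the collection by a document property rather than by ID
// (requires: using System.Linq;)
var sites = client.CreateDocumentQuery<NaturalSite>(
        UriFactory.CreateDocumentCollectionUri(DatabaseId, NaturalSitesCollection),
        "SELECT * FROM c WHERE c.Name = \"Giant's Causeway\"")
    .ToList();

foreach (var s in sites)
{
    Console.WriteLine("Found: " + s.Name);
}
```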
https://jeremylindsayni.wordpress.com/2019/02/25/getting-started-with-azure-cosmos-db-and-net-core-part-1-installing-the-cosmos-emulator/
CC-MAIN-2019-30
en
refinedweb
A class is a blueprint/user-defined datatype in Java that describes the behavior/state that objects of its type support.

```java
public class Student {
   String name = "Krishna";
   int age = 20;
   void greet() {
      System.out.println("Hello how are you");
   }
}
```

An object is an instance of a class, created from it using the new keyword. Once you create an object of a class, you can use it to access the members of the class. In the code below, an object of the class Student is created.

```java
public class Example {
   public static void main(String args[]) {
      Student obj = new Student();
   }
}
```

Classes, interfaces, arrays, enumerations and annotations are the reference types in Java. Reference variables hold the objects/values of reference types.

When you create an object of a class as −

```java
Student obj = new Student();
```

the object is created in the heap area, and the reference obj just points to the object of the Student class in the heap, i.e. it just holds the memory address of the object (in the heap). And since a String is also an object, under name, a reference points to the actual String value ("Krishna").

In short, an object is an instance of a class, and a reference (variable) points to the object created in the heap area.
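A short illustration of the distinction — this example is an addition, not part of the original answer: two references can point to the same object, so a change made through one reference is visible through the other.

```java
public class Example {
   public static void main(String args[]) {
      Student a = new Student();   // one object on the heap
      Student b = a;               // a second reference to the same object

      b.age = 25;                  // mutate through b ...
      System.out.println(a.age);   // ... observed through a: prints 25
   }
}
```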
https://www.tutorialspoint.com/what-is-the-difference-between-object-and-reference-in-java
CC-MAIN-2021-17
en
refinedweb
:dromedary_camel: Laravel log viewer

Log Viewer for Laravel 5, 6, 7 & 8 (still compatible with 4.2 too).

Install via composer:

```bash
composer require rap2hpoutre/laravel-log-viewer
```

Add the Service Provider to `config/app.php` in the `providers` section:

```php
Rap2hpoutre\LaravelLogViewer\LaravelLogViewerServiceProvider::class,
```

Add a route in your web routes file:

```php
Route::get('logs', '\Rap2hpoutre\LaravelLogViewer\LogViewerController@index');
```

Go to the `logs` route.

Within Lumen:

Install via composer:

```bash
composer require rap2hpoutre/laravel-log-viewer
```

Add the following in `bootstrap/app.php`:

```php
$app->register(\Rap2hpoutre\LaravelLogViewer\LaravelLogViewerServiceProvider::class);
```

Explicitly set the namespace in `app/Http/routes.php`:

```php
$router->group(['namespace' => '\Rap2hpoutre\LaravelLogViewer'], function() use ($router) {
    $router->get('logs', 'LogViewerController@index');
});
```

Advanced usage:

Publish `log.blade.php` into `/resources/views/vendor/laravel-log-viewer/` for view customization:

```bash
php artisan vendor:publish \
  --provider="Rap2hpoutre\LaravelLogViewer\LaravelLogViewerServiceProvider" \
  --tag=views
```

Publish the `logviewer.php` configuration file into `/config/` for configuration customization:

```bash
php artisan vendor:publish \
  --provider="Rap2hpoutre\LaravelLogViewer\LaravelLogViewerServiceProvider"
```

Troubleshooting:

If you get an `InvalidArgumentException in FileViewFinder.php` error, it may be a problem with config caching. Double-check the installation, then run `php artisan config:clear`.
https://xscode.com/rap2hpoutre/laravel-log-viewer
CC-MAIN-2021-17
en
refinedweb
public class Serializer extends Object

The Serializer can perform better than ObjectOutputStream and DataOutputStream with respect to encoding primary types, because it uses a more compact format (containing no BlockHeader) and a simpler call stack involving BigEndianCodec, as compared to using an OutputStream wrapper on top of Bits.

For Strings, the UTF encoding used by ObjectOutputStream and DataOutputStream has a 2^16=64K length limitation, which is often too restrictive. Serializer has a 2^32=4G String length limitation, which is generally more than enough. For pure ASCII character Strings, the encoding performance is almost the same as, if not better than, ObjectOutputStream and DataOutputStream. For Strings containing non-ASCII characters, the Serializer encodes each char to two bytes rather than performing UTF encoding. There is a trade-off between CPU/memory performance and compression rate. UTF encoding uses more CPU cycles to detect the unicode range for each char, and the resulting output is variable length, which increases the memory burden when preparing the decoding buffer. Encoding each char to two bytes, on the other hand, allows for better CPU/memory performance. Although inefficient in compression rate in comparison to UTF encoding, the char-to-two-bytes approach significantly simplifies the encoder's logic, and the output length is predictable from the length of the String, so the decoder can manage its decoding buffer efficiently. On average, a system uses ASCII Strings far more than non-ASCII Strings. In most cases, when all system-internal Strings are ASCII Strings and only Strings holding user input information can have non-ASCII characters, this Serializer performs best. In other cases, developers should consider using ObjectOutputStream or DataOutputStream.

For ordinary Objects, all primary type wrappers are encoded to their raw values with one-byte type headers. This is much more efficient than ObjectOutputStream's serialization format for primary type wrappers. Strings are output in the same way as writeString(String), but also with one-byte type headers. Objects are serialized by a new ObjectOutputStream, so no reference handler can be used across Object serialization. This is done intentionally to isolate each object.

The Serializer is highly optimized for serializing primary types, but is not as good as ObjectOutputStream for serializing complex objects.

On object serialization, the Serializer uses the ClassLoaderPool to look up the servlet context name corresponding to the object's ClassLoader. The servlet context name is written to the serialization stream. On object deserialization, the Deserializer uses the ClassLoaderPool to look up the ClassLoader corresponding to the servlet context name read from the deserialization stream. ObjectOutputStream and ObjectInputStream lack these features, making Serializer and Deserializer better choices for ClassLoader-aware Object serialization/deserialization, especially when plugins are involved.

See Also:
Deserializer

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Fields:

protected static final int THREADLOCAL_BUFFER_COUNT_LIMIT

protected static final int THREADLOCAL_BUFFER_COUNT_MIN

protected static final int THREADLOCAL_BUFFER_SIZE_LIMIT

protected static final int THREADLOCAL_BUFFER_SIZE_MIN

protected static final ThreadLocal<Reference<Serializer.BufferQueue>> bufferQueueThreadLocal
Technically, we should soften each pooled buffer individually to achieve the best garbage collection (GC) interaction. However, that increases the complexity of pooled-buffer access and also burdens the GC's SoftReference processing, hurting performance. Here, the entire ThreadLocal BufferQueue is softened. For threads that do serializing often, their BufferQueue will most likely stay valid. For threads that do serializing only occasionally, their BufferQueue will most likely be released by GC.

protected byte[] buffer

protected int index

Constructors:

public Serializer()

Methods:

public ByteBuffer toByteBuffer()

public void writeBoolean(boolean b)

public void writeByte(byte b)

public void writeChar(char c)

public void writeDouble(double d)

public void writeFloat(float f)

public void writeInt(int i)

public void writeLong(long l)

public void writeObject(Serializable serializable)

public void writeShort(short s)

public void writeString(String s)

public void writeTo(OutputStream outputStream) throws IOException
Throws: IOException

protected final byte[] getBuffer(int ensureExtraSpace)
ensureExtraSpace - the extra byte space required to meet the buffer's minimum length
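A small usage sketch based only on the methods listed above. The reading side goes through the companion Deserializer mentioned under "See Also", whose exact API isn't shown on this page, so the read calls are assumptions mirroring the write calls:

```java
import com.liferay.portal.kernel.io.Serializer;

import java.nio.ByteBuffer;

public class SerializerExample {

    public static void main(String[] args) {
        Serializer serializer = new Serializer();

        // Primary types are written in a compact, BlockHeader-free format
        serializer.writeInt(42);
        serializer.writeString("hello");

        ByteBuffer buffer = serializer.toByteBuffer();

        // Reading back would go through the companion Deserializer,
        // e.g. (assumed API, mirroring the writer):
        // Deserializer deserializer = new Deserializer(buffer);
        // int i = deserializer.readInt();
        // String s = deserializer.readString();
    }
}
```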
https://docs.liferay.com/dxp/digital-enterprise/7.0-latest/javadocs/portal-kernel/com/liferay/portal/kernel/io/Serializer.html
CC-MAIN-2021-17
en
refinedweb
SYNOPSIS
       #include <math.h>

       double trunc(double x);
       float truncf(float x);
       long double truncl(long double x);

       Link with -lm.

DESCRIPTION
       These functions round x to the nearest integer value that is not
       larger in magnitude than x.

RETURN VALUE
       These functions return the rounded integer value, in floating format.

       If x is integral, infinite, or NaN, x itself is returned.

ATTRIBUTES
       For an explanation of the terms used in this section, see
       attributes(7).

BUGS
       The integral value returned by these functions may be too large to
       store in an integer type (int, long, etc.). To avoid an overflow,
       which will produce undefined results, an application should perform a
       range check on the returned value before assigning it to an integer
       type.
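A quick illustration of the toward-zero behavior, and how it differs from floor() for negative inputs (not part of the original page; compile with cc example.c -lm):

       #include <math.h>
       #include <stdio.h>

       int main(void)
       {
           printf("trunc(2.7)  = %.1f\n", trunc(2.7));   /* 2.0 */
           printf("trunc(-2.7) = %.1f\n", trunc(-2.7));  /* -2.0: toward zero */
           printf("floor(-2.7) = %.1f\n", floor(-2.7));  /* -3.0: toward -infinity */
           return 0;
       }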
http://manpages.courier-mta.org/htmlman3/trunc.3.html
CC-MAIN-2021-17
en
refinedweb
The header <memory> has the following additions:

namespace std {
  template <class T> pair<T*, ptrdiff_t> get_temporary_buffer(ptrdiff_t n) noexcept;
  template <class T> void return_temporary_buffer(T* p);
}

template <class T> pair<T*, ptrdiff_t> get_temporary_buffer(ptrdiff_t n) noexcept;

Effects: Obtains a pointer to uninitialized, suitably-aligned storage ([basic.align]) sufficient to store up to n adjacent objects of type T. Returns a pair whose first member is the address of the storage (or a null pointer value if no storage could be obtained) and whose second member is its capacity in objects, which may be smaller than n.

template <class T> void return_temporary_buffer(T* p);

Effects: Deallocates the storage to which p points; p shall be a pointer value returned by an earlier call to get_temporary_buffer.
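A brief usage sketch of this facility (note that it was deprecated in C++17 and removed in C++20):

#include <cstdio>
#include <memory>

int main()
{
    // Request scratch space for up to 100 ints; the library may return less.
    std::pair<int*, std::ptrdiff_t> buf = std::get_temporary_buffer<int>(100);

    if (buf.first != nullptr) {
        std::printf("got space for %td ints\n", buf.second);

        // The storage is uninitialized; construct into it before use.
        std::uninitialized_fill_n(buf.first, buf.second, 0);

        std::return_temporary_buffer(buf.first);
    }
    return 0;
}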
https://timsong-cpp.github.io/cppwp/n4659/depr.temporary.buffer
CC-MAIN-2021-17
en
refinedweb
Allen Downey

This notebook contains a solution to a problem I posed in my Bayesian statistics class: time between goals is exponential with parameter $\lambda$, the goal-scoring rate. In this case we are given as data the inter-arrival times of the first two goals: 11 minutes and 12 minutes. We can define a new class that inherits from thinkbayes2.Suite and provides an appropriate Likelihood function:

import thinkbayes2
import thinkplot

class Soccer(thinkbayes2.Suite):
    """Represents hypotheses about goal-scoring rates."""

    def Likelihood(self, data, hypo):
        """Computes the likelihood of the data under the hypothesis.

        hypo: goal rate in goals per game
        data: interarrival time in minutes
        """
        x = data
        lam = hypo / 90
        like = thinkbayes2.EvalExponentialPdf(x, lam)
        return like

Likelihood computes the likelihood of data given hypo, where data is an observed time between goals in minutes, and hypo is a hypothetical goal-scoring rate in goals per game. After converting hypo to goals per minute, we can compute the likelihood of the data by evaluating the exponential probability density function (PDF). The result is a density, and therefore not a true probability. But the result from Likelihood only needs to be proportional to the probability of the data; it doesn't have to be a probability. Now we can get back to Step 1. Before the game starts, what should we believe about Germany's goal-scoring rate against Brazil? We could use previous tournament results to construct the prior.

# fake data chosen by trial and error to yield the observed prior mean
thinkplot.Pdf(suite)
suite.Mean()

1.3441732095365195

Now that we have a prior, we can update with the time of the first goal, 11 minutes.

suite.Update(11)  # time until first goal is 11 minutes
thinkplot.Pdf(suite)
suite.Mean()

1.8620612271278361

After the first goal, the posterior mean rate is almost 1.9 goals per game. Now we update with the second goal:

suite.Update(12)  # time between first and second goals is 12 minutes
thinkplot.Pdf(suite)
suite.Mean()

2.2929790004763997

After the second goal, the posterior mean goal rate is 2.3 goals per game. Now on to Step 3. If we knew the actual goal scoring rate, $\lambda$, we could predict how many goals Germany would score in the remaining $t = 90-23$ minutes. The distribution of goals would be Poisson with parameter $\lambda t$. We don't actually know $\lambda$, but we can use the posterior distribution of $\lambda$ to generate a predictive distribution for the number of additional goals.

def PredRemaining(suite, rem_time):
    """Plots the predictive distribution for additional number of goals.

    suite: posterior distribution of lam in goals per game
    rem_time: remaining time in the game in minutes
    """
    metapmf = thinkbayes2.Pmf()
    for lam, prob in suite.Items():
        lt = lam * rem_time / 90
        pred = thinkbayes2.MakePoissonPmf(lt, 15)
        metapmf[pred] = prob
        thinkplot.Pdf(pred, color='gray', alpha=0.3, linewidth=0.5)
    mix = thinkbayes2.MakeMixture(metapmf)
    return mix

mix = PredRemaining(suite, 90-23)

PredRemaining takes the posterior distribution of $\lambda$ and the remaining game time in minutes (I'm ignoring so-called "injury time"). It loops through the hypotheses in suite, computes the predictive distribution of additional goals for each hypothesis, and assembles a "meta-Pmf", which is a Pmf that maps from each predictive distribution to its probability. The figure shows each of the distributions in the meta-Pmf. Finally, PredRemaining uses MakeMixture to compute the mixture of the distributions.
Here's what the predictive distribution looks like.

thinkplot.Hist(mix)
thinkplot.Config(xlim=[-0.5, 10.5])

After the first two goals, the most likely outcome is that Germany will score once more, but there is a substantial chance of scoring 0 or 2-4 additional goals. Now we can answer the original question: what is the chance of scoring 5 or more additional goals?

mix.ProbGreater(4)

0.057274188144370755

After the first two goals, there was only a 6% chance of scoring 5 more times. And the expected number of additional goals was only 1.7.

mix.Mean()

1.7069804402488897

That's the end of this example. But for completeness (and if you are curious), here is the code for MakeMixture:

def MakeMixture(metapmf, label='mix'):
    """Make a mixture distribution.

    metapmf: Pmf that maps from Pmfs to probs.
    label: string label for the new Pmf.
    """
    mix = thinkbayes2.Pmf(label=label)
    for pmf, p1 in metapmf.Items():
        for x, p2 in pmf.Items():
            mix.Incr(x, p1 * p2)
    return mix
https://nbviewer.jupyter.org/github/AllenDowney/ThinkBayes2/blob/master/examples/soccer_soln.ipynb
CC-MAIN-2021-10
en
refinedweb
Add a Service

LB4 has the package @loopback/service-proxy that contains the artifacts needed to implement the link between the methods described in the .json file and the Node.js methods. All we need to do is write the service provider that will serve as the glue to make this implementation real.

Installing the service proxy

Make sure you are inside the soap-calculator directory and run the following command:

npm install @loopback/service-proxy --save

Writing a service provider

Use the lb4 service command and the following inputs to create a calculator service:

lb4 service
? Please select the datasource CalculatorDatasource
? Service name: Calculator
   create src/services/calculator.service.ts
   update src/services/index.ts

Service Calculator was created in src/services/

src/services/calculator.service.ts

import {getService} from '@loopback/service-proxy';
import {inject, Provider} from '@loopback/core';
import {CalculatorDataSource} from '../datasources';

export interface CalculatorService {
  // this is where you define the Node.js methods that will be
  // mapped to the SOAP operations as stated in the datasource
  // json file.
}

export class CalculatorServiceProvider implements Provider<CalculatorService> {
  constructor(
    // calculator must match the name property in the datasource file
    @inject('datasources.calculator')
    protected dataSource: CalculatorDataSource = new CalculatorDataSource(),
  ) {}

  value(): Promise<CalculatorService> {
    return getService(this.dataSource);
  }
}

Adding our interfaces

When we reviewed the remote SOAP web service, we found that there were four different results for the four operations, and each of these operations expects the same pair of arguments, intA and intB. Now it is time to describe this scenario using interfaces, as follows:

export interface MultiplyResponse {
  result: {
    value: number;
  };
}
export interface AddResponse {
  result: {
    value: number;
  };
}
export interface SubtractResponse {
  result: {
    value: number;
  };
}
export interface DivideResponse {
  result: {
    value: number;
  };
}
export interface CalculatorParameters {
  intA: number;
  intB: number;
}

One important interface we need to add now is the one that describes the four Node.js methods that will be mapped to the SOAP operations. At this point we have just mentioned them in the .json datasource file, so let's add them now as follows:

export interface CalculatorService {
  multiply(args: CalculatorParameters): Promise<MultiplyResponse>;
  add(args: CalculatorParameters): Promise<AddResponse>;
  divide(args: CalculatorParameters): Promise<DivideResponse>;
  subtract(args: CalculatorParameters): Promise<SubtractResponse>;
}

Navigation

Previous step: Add a Datasource

Next step: Add a Controller
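Once the provider is registered, the service can be injected wherever it is needed. A rough sketch of what that injection might look like in a controller (the actual controller is built in the next step of the tutorial; the binding key and route shown here are assumptions):

import {inject} from '@loopback/core';
import {get, param} from '@loopback/rest';
import {CalculatorService} from '../services';

export class CalculatorController {
  constructor(
    // 'services.Calculator' is the conventional binding key; verify it
    // against your application's actual bindings.
    @inject('services.Calculator')
    protected calculatorService: CalculatorService,
  ) {}

  @get('/add/{intA}/{intB}')
  async add(
    @param.path.number('intA') intA: number,
    @param.path.number('intB') intB: number,
  ) {
    return this.calculatorService.add({intA, intB});
  }
}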
https://loopback.io/doc/en/lb4/soap-calculator-tutorial-add-service.html
CC-MAIN-2021-10
en
refinedweb
Finally, I got my hands on a Raspberry Pi 4 (4GB edition) and I thought I'd write up a post on how to flash your Raspberry Pi with Raspbian OS and install Rancher's lightweight Kubernetes distribution, K3S.

This post is a quick run-down of how I experimented with K3S (single node) on a RPi4 and tested out Traefik, which comes out of the box. For a much more in-depth tutorial on k3s with clustering and deploying microservices using OpenFaas, have a look at Alex's "Will it cluster? k3s on your Raspberry Pi" blog post; it's definitely worth the read.

On the To-Do List

This is what we will do during this post:

- Download Raspbian
- Flash with Etcher
- Configure SSH and WiFi
- Install K3S
- Develop, Build and Deploy a Golang Web App to Kubernetes (next post)

Download Raspbian

Head over to Raspbian's download page and download the ISO of your choice; I went for Raspbian Buster Lite.

Once your download has finished, you should have a zip file located in your download directory.

Flash your Raspberry Pi

I will be using Etcher to flash my Raspberry Pi, but you can use any other utility of preference. For more info or help on installing images, have a look at Raspberry Pi's documentation.

Using Etcher is quite easy: you select the image, select the SD card that you want to flash to, and select Flash.

Configure SSH and WiFi

Once the flash operation has completed, re-insert your SD card, and you should see that it has been mounted, using df -h.

I am using a Mac, so in my case it's mounted under /Volumes/boot. In order to allow SSH, we need to create a ssh file with no content in the root directory:

touch /Volumes/boot/ssh

For Linux, it could be:

touch /media/${USER}/boot/ssh

To configure your Raspberry Pi to connect to your wireless network, we need to supply the wpa_supplicant.conf in the root directory of our SD card.

Create the config file:

vim /Volumes/boot/wpa_supplicant.conf

Then provide your SSID and PSK:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=ZA

network={
    ssid="your-wifi-name"
    psk="your-wifi-password"
    key_mgmt=WPA-PSK
}

Eject the SD card, insert it into your Raspberry Pi and boot it.

Finding your IP

You can use nmap to find the address, but in my case I was logged into my router, so having a look at the DHCP list, I was able to find my Raspberry Pi's IP address.

By default the username will be pi and the password will be raspberry:

$ ssh pi@192.168.0.111
Warning: Permanently added '192.168.0.111' (ECDSA) to the list of known hosts.
pi@192.168.0.111's password:
pi@raspberrypi:~ $

This will be a good time to reset your default password:

$ passwd
Changing password for pi.
Current password:
New password:
Retype new password:
passwd: password updated successfully

Since this Raspberry Pi comes with 4GB of memory, I just had to show this to the world :D

$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3906          91        3590           8         223        3685
Swap:            99           0          99

Install K3S

Rancher released a super lightweight certified Kubernetes distribution, called "K3S", which is optimized for ARM and super easy to install. Have a look at their documentation for more configuration options and detail.

Installing K3S is as easy as:

$ sudo su
$ curl -sfL https://get.k3s.io | sh -
[INFO] Finding latest release
[INFO] Using v0.8.0 as release

Boom!
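As an aside (not covered in the original post): k3s ships its own kubectl wrapper, and it writes the kubeconfig to a root-owned path, so the following is a common way to use it:

# k3s bundles kubectl; this works out of the box as root
k3s kubectl get nodes

# the kubeconfig lives here if you want to use a standalone kubectl
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml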
And about a minute later, Kubernetes is running on my Raspberry Pi:

$ kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
raspberrypi   Ready    master   27s   v1.14.5-k3s.1

By default, K3S provisions Traefik out of the box, and to confirm that, let's have a look at our deployments:

$ kubectl get deployments --all-namespaces
NAMESPACE     NAME      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns   1/1     1            1           3m29s
kube-system   traefik   1/1     1            1           89s

Let's have a look at our pods from all our namespaces:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                         READY   STATUS      RESTARTS   AGE
kube-system   coredns-b7464766c-8mgzr      1/1     Running     0          3m12s
kube-system   helm-install-traefik-tqh92   0/1     Completed   0          3m12s
kube-system   svclb-traefik-t6jvz          2/2     Running     0          93s
kube-system   traefik-56688c4464-zvfxc     1/1     Running     0          92s

And viewing our services:

$ kubectl get service --all-namespaces
NAMESPACE     NAME         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
default       kubernetes   ClusterIP      10.43.0.1      <none>          443/TCP                      4m4s
kube-system   kube-dns     ClusterIP      10.43.0.10     <none>          53/UDP,53/TCP,9153/TCP       4m5s
kube-system   traefik      LoadBalancer   10.43.133.86   192.168.0.100   80:32543/TCP,443:30200/TCP   2m5s

Thank You

Thanks for reading. In the next post I will show you how to develop, build and deploy a golang webapp to kubernetes using K3S on a Raspberry Pi.

If you would like to check out more of my content, check out my website at ruan.dev or follow me on Twitter @ruanbekker
https://sysadmins.co.za/running-k3s-on-the-raspberrypi-4/
CC-MAIN-2021-10
en
refinedweb
Puppet is an open-source configuration management tool. In this tutorial we have provided the most frequently asked Puppet interview questions and answers.

In the current agile development environment, developers integrate their code multiple times a day and work hard to deliver their tasks. Operations teams work along with application developers: they integrate code using version control tools, review code to maintain design and implementation consistency amongst multiple developers, and deploy various builds for testing. Besides this, they have to keep systems and servers in a running state. Systems and servers that run continuously become prone to malfunction, and in large organizations with an enormous customer base, maintenance of infrastructure becomes a daunting task. To provision new infrastructure, or to keep existing infrastructure in a good state, DevOps teams install configuration tools like Puppet that automatically set the default configuration for new machines and reset the configuration of failed machines, keeping them in a running state.

What You Will Learn:

What Is the Puppet Software Tool

Puppet is an open-source configuration management tool that automates and manages server configuration. Its code, written in a declarative Domain-Specific Language (DSL), describes the desired state of our systems. The tool automates bringing these systems to the described state with the help of the Puppet master and its agents. In case of server failure, the code helps the server roll back to its previous working state. In addition, the tool deploys servers on demand and imposes security on them. With this configuration management tool, one can manage the Network Time Protocol (NTP) and sudo privileges (identifying users with elevated access privileges); besides this, a Domain Name System (DNS) name server and firewalls can also be managed with it.

Most Frequently Asked Puppet Interview Questions

Q #1) Explain Puppet Enterprise.

Answer: Puppet Enterprise is a configuration tool, essentially automated code, in which infrastructure information such as software and settings is already defined for systems and servers, so that they can be installed and the environment set up when new infrastructure is provisioned, with periodic verification done to ensure that these systems and servers remain in the desired state.

Q #2) Describe Puppet architecture.

Answer: Puppet follows a declarative programming approach where code specifies what to do, but does not spell out the steps for how to do it. Deployment is pull-based: agent nodes check in with the master node at a regular interval of 30 minutes for any changes that apply to the agent. If a change is required, the agent pulls the specific code from the master and performs the required actions on the agent node.

- The agent sends Facts, i.e. its state as key/value data pairs, to the master. The state includes the system's operating system, its up-time (the time the system has been operational), its IP address, and whether it is a physical or virtual machine.
- Using the facts, the master compiles a Catalog that describes how the agent should be configured. The catalog is a document that explains the desired state of the agent's resources that the master manages on the agent.
- The agent reports back to the master with information about the completed configuration, which can be viewed in the Puppet dashboard.

Q #3) Explain the working of Puppet.

Answer: It is explained as follows:

The entities required include the Puppet Master and the Puppet Agent. Agents, or nodes, are daemons running on client servers.
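(As an aside, instead of waiting for the regular 30-minute interval described above, a check-in can also be triggered by hand on a node; this sketch assumes a machine with a configured agent.)

# run the agent once in the foreground, fetching and applying the catalog
sudo puppet agent --test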
These are the servers whose configuration is managed using Puppet. The agent verifies its configuration with the master at regular intervals for any change. The master stores all configurations for the different hosts and runs as a daemon on the master server. Agent and master are connected via Secure Sockets Layer (SSL). When a node connects, the master analyzes what configuration applies and how it can be applied to the node. After this analysis, the master collects resources and configurations, compiles them into a catalog, and sends the catalog to the node's agent. After applying the configuration, the agent submits a report of the applied configuration to the master server.

Q #4) Describe the Puppet Module.

Answer: Modules are the basic building blocks of Puppet: a directory structure that contains classes, tasks, functions, resource providers and their types, and plug-ins like facts or custom types. Modules must be installed in Puppet's module path. They are used to manage tasks such as the installation or configuration of software on a system or server.

Q #5) What is a Catalog in Puppet?

Answer: A catalog is a document with the state details of each resource the master manages on the node. The master compiles the catalog and sends it back to the agent. It incorporates data provided by the agent on the node, external data, and details from the Puppet manifests.

Q #6) Define Classes in Puppet.

Answer: Classes are blocks of code, invoked by their names, that live in modules. Classes group the functionality of all the packages, services, and configuration files needed to run an application. They can be added to a node's catalog in two ways: by declaring the classes in manifests, or by assigning them from an external node classifier. Classes can be declared in a manifest in the following two ways:

Using include class_name OR using class { 'classname' : }

The Puppet class structure is explained in the figure below:

Q #7) What is a Manifest in Puppet?

Answer: Puppet programs, written in Puppet's Ruby-like DSL and saved with the .pp extension, built with the intention of creating and managing target host machines, are called manifests. A manifest contains Files (puppet selects and moves these files to a target location), Templates (used to create configuration files on the node), Nodes (definitions related to client nodes), Resources, and Classes.

Q #8) Describe Facter in Puppet in detail.

Answer: Facter is a cross-platform, system-profiling library that discovers and reports per-node system information, known as facts, available as variables with values in key-value format in manifests. Facts are available across Puppet code as global variables; they can be used at any point in the code without any explicit reference. Facter identifies facts such as the operating system being used, SSH keys, IP address, whether the machine is virtual or not, MAC addresses, etc. The various fact types are explained below:

- Core Facts: These report information on resources such as cloud, disks, memory, OS, path, processors, and partitions. We can use the following command to view the complete list of facts and their corresponding values in key-value format:
- $ puppet facts
- Custom Facts: Using export FACTER_{fact_name}=value we can add custom facts to a node. These facts are customized to address specific requirements of the DevOps team.
- External Facts: To apply facts at the provisioning stage, we can use external facts, for example to apply metadata to virtual machines at cloud providers such as AWS or OpenStack.

Q #9) What do you mean by Puppet Kick?

Answer: Puppet Kick, deprecated in current versions, is a utility that triggers the agent from the master. As per the Ubuntu manuals, 'puppet kick' is a script to be run as root with access to Secure Sockets Layer (SSL) certificates; it connects to a set of machines running the agent and triggers them to run their configurations. In addition, this command can look up hosts matching a configuration in the Lightweight Directory Access Protocol (LDAP), connect to each of them, and trigger a configuration run. For kick to work, the agent must listen for incoming connections and must have permission to access the run endpoints.

Q #10) Describe the functionality of MCollective in Puppet.

Answer: MCollective, or Marionette Collective, is a framework for automated coordination, management, and arrangement of complex infrastructure (systems and servers), known as orchestration. Administrative tasks on clusters of servers can be executed automatically using MCollective. Its components are servers, clients, and middleware. Using MCollective commands, we can query the values of facts, start and stop services, start the configuration tool itself, and query and update software.

Q #11) What is special about Puppet's model-driven design?

Answer: Previously, system administrators followed a series of manual steps to configure and manage infrastructure comprising multiple groups of systems and servers. In the model-driven approach, Puppet's automated code, written in Ruby, contains all the configuration details, which are compiled into a catalog. This catalog is sent to every node and shares resources, values, and their relations; the required configuration changes are made on failed systems to reinstate them to a normal running state.

Q #12) Give a few use cases for Puppet.

Answer: Puppet is used to manage and standardize infrastructure deployment.

Requirement: A startup company has moved its infrastructure to cloud service providers such as Amazon Web Services or Google Cloud. The end user is responsible for the creation, standardization, and maintenance of systems and servers across different platforms, applications, and services, and wants to install and use Puppet to ease this task.

Scenario 1: Administrators use tools to standardize their servers and systems, such as creating a manifest file with the steps, written as configuration code, to build a new server. For example:

- Installation of the operating system, say Linux.
- Verifying Linux disk space using software such as Filelight or DUC.
- Installing Java.
- Installing Tomcat.
- Installing SQL Server as the RDBMS.
- Installing a patch for an application to be built and tested.

Scenario 2: Creating a manifest file listing all the above steps, which can be run using a puppet command to perform the steps automatically. This way, a standardized set of steps is followed when deploying a new system using the manifest and command.

Scenario 3: The manifest created can be used to build cloud servers through the API, so that all the manual tasks are done automatically.

Q #13) Explain the "etckeeper-commit-post" and "etckeeper-commit-pre" commands.
Answer: The difference between the two commands is as follows:

- etckeeper-commit-post is a command, written in the configuration file, that is executed after pushing configuration to the agent.
- etckeeper-commit-pre is a command, written in the configuration file, that is executed before pushing configuration to the agent.

Q #14) List the characters that are allowed in a class name, module name, and identifiers.

Answer: The following characters are acceptable when declaring class names and module names:

- Must begin with a lowercase letter.
- Can include lowercase letters, digits, and underscores.
- The scope resolution operator "::" is the namespace separator in a class name definition.

For variable names, the accepted characters are as follows:

- Can begin with an uppercase or lowercase letter.
- May contain numerals and underscores ('_').
- If the first character is an underscore, the variable is only accessible from its own local scope.
- Variables are case sensitive.

Q #15) What should you expect if you don't sign a Contributor License Agreement?

Answer: Signing a Contributor License Agreement (CLA) is mandatory for code contributors to Puppet or Facter; without it, their code cannot be accepted. To contribute to the Puppet or Facter code written in Ruby, the user should log in to their GitHub account and sign the agreement.

Q #16) Explain the importance and location of codedir in Puppet.

Answer: codedir is used by the master and the apply command, but not by the agent. It is the main directory for data and code: it contains environments (which hold manifests and modules), the global module directory, and Hiera data and configuration. The codedir is located at the following local directories:

In case of Windows: C:\ProgramData\PuppetLabs\puppet\etc
Whereas for Linux: /etc/puppetlabs/code

Q #17) Describe Hiera.

Answer: Hiera is a lookup system for configuration data in key-value format. It helps retrieve data in Puppet code, which uses the system for explicit lookup calls and for class parameters from the catalog. The system uses Puppet's facts to identify data sources. Version 5 supports data files in JSON, YAML, and EYAML formats. It searches for configuration data in three independent layers, starting from the global layer, then the environment layer, and finally the module layer.

Q #18) Describe Virtual Resources in Puppet.

Answer: In a Puppet setup, a duplicate resource declaration error occurs if the same resource is declared more than once. Puppet resolves this issue with virtual resources. Declaring a virtual resource makes the resource available to collectors and to the realize function, which also manages its state when the resource is realized. Unrealized virtual resources are included in the catalog but marked inactive. Virtual resources are used to manage resources with multiple conditions across classes, and for resource sets that overlap between multiple classes.

Q #19) Describe modulepath.

Answer: The master service, and the puppet apply command (where Puppet manifests are applied locally), load their content from modules installed in the Puppet modulepath, which spans one or more directories. It is the ordered list of directories that Puppet searches for modules. The directories in the modulepath list are separated by a separator character: in Linux, it is a colon (:), and in Windows, it is a semicolon (;).

Q #20) Give details about the base modulepath.
Answer: The list of global module directories is the base modulepath, applied across all environments. It is configured with the base modulepath setting, with the default values below:

In case of Linux: $codedir/modules:/opt/puppetlabs/puppet/modules
In case of Windows: $codedir\modules

Q #21) Describe the cache directory in Puppet.

Answer: During normal operation, Puppet stores generated data in a cache directory called vardir. This data can be mined for analysis. For the agent and the apply command, the cache directory can be found at one of the following locations:

In case of Windows, it is C:\ProgramData\PuppetLabs\puppet\cache
Whereas in Linux it is /opt/puppetlabs/puppet/cache

Alternatively, using the --vardir option on the command line will specify the Puppet cache directory location. We can also change the location of vardir files and directories by changing puppet.conf settings.

Q #22) Explain "Environments" in Puppet.

Answer: An environment is a logical division that separates modules and manifests into distinct sections or folders, so that each node gets the bit of code appropriate to the environment it belongs to; the environment is statically set in puppet.conf. This feature of dividing infrastructure configuration into environments means an admin can use a single master to serve multiple isolated configurations.

Q #23) Describe Resources in Puppet.

Answer: Puppet resources are used to build, design, and manage system or server infrastructure. The tool has multiple resource types, and new resources can be defined to describe the system architecture. A resource declaration, the Puppet code block in a manifest file, is created using the declarative modeling language (DML). It contains the resource type, resource parameters, attributes, and values.

Q #24) Explain the types of resources in Puppet.

Answer: The system components that Puppet manages are described with the help of resource types. A few common resource types are group, package, user, file, and service. There are two kinds of resources: built-in types and custom types. Some of the built-in resource types are group, package, user, file, and service, while custom types are distributed in Puppet modules, which can be found on forge.puppet.com.

Q #25) Explain Node Definition in Puppet.

Answer: A node definition, or node statement, is a Puppet code block that is matched when compiling a node's catalog. It allows the assignment of a specific configuration to the matched node. Its syntax looks similar to that of a class definition: the node keyword, the node definition name, an opening curly brace, a mixture of class and resource declarations, collectors, variables, conditional statements, functions, and chaining relationships, and finally a closing curly brace.

Q #26) Describe functions in Puppet.

Answer: Puppet functions are plug-ins that are used during catalog compilation. A function call from a manifest makes the function run and return a value, and it may modify the catalog as a side effect. You can create your own functions that accept arguments through parameters to transform data and construct values. Functions are plug-ins or expressions called in order to resolve to a value, and they can be either built-in or custom.

Q #27) Give examples of configuring systems using Puppet.

Answer: Some examples of systems configured with Puppet are listed below:

- Manage the NTP service: Network Time Protocol (NTP) is one of the most essential services that can be managed and configured using Puppet, to synchronize time across all nodes.
- Manage sudo privileges:
- Manage a DNS name server file: Name server that maps IP addresses understood by computers with human-readable URLs can be managed using this configuration tool. - Manage firewall rules: Various rules and policy like application ports (TCP/UDP), network ports, IP address, and access-deny statements can be designed with firewall, with tool’s firewall policies can be managed. Q #28) Describe main or site manifest in puppet. Answer: Agent sends state of resources called facts to master, based on the information received. Master will compile catalog in the form of a single manifest file, known as main or site manifest. The master utilizes the main manifest file, either a single or directory of .pp files, configured by the current node’s environment, which with help of manifest setting in environment.conf, determines the main manifest. Q #29) What do you mean by puppet apply? Answer: Puppet apply is a standalone execution command for apply to individual manifest. This code when applied to modulepath via command line or config file, acts like catalog. ‘puppet apply’ is a command-line code for applying a configuration. Q #30) List companies that use Puppet. Answer: Few multinational enterprise organizations that use Puppet in their infrastructure management and configuration are: - KPN – Dutch landline and mobile telecommunications company, Netherlands - CERN – European Organization for Nuclear Research - Aegon UK – financial services provider - NYSE – New York Stock Exchange - ICE – Intercontinental Exchange - ANZ Bank - Cisco - Splunk Q #31) Explain what pre-installation preparations you will require before installing Puppet Open Source. Answer: There are some preparations and requirements before installing Puppet Open Source - Selection of server as the master. - Validate servers and network are ready and prepared for installation with the following instructions: - Installing agents - Once Puppet Server is configured, we need to install the agent package on node machine on which configuration management tool is needed. - Based on your operating system, you have Linux, OS X, and Microsoft Windows to select. - You can use NTP and sudoers to automate Puppet code for designing configuration. Q #32) Explain Puppet Enterprise. Answer: Puppet Enterprise is scalable across various teams, systems, on-premise, or over cloud servers, by implementing compliance policies and security along with configuration for on-premise and cloud migrating infrastructure with zero downtime. It also generates reports on the status of code that are built, and information on who and what changes were made on an infrastructure code, trigger analysis checks on regular intervals on infrastructure to assess any impact before any incidence. Q #33) Describe Puppet Remediate. Answer: It scans the infrastructure and produces data on vulnerabilities in traceable and auditable formats to prioritize their resolutions. Remediate balances tools that assess vulnerabilities, and prioritize tasks that need immediate resolution, attends such tasks by running pre-built tasks like manage package, services or run the shell script and fix issues immediately. Q #34) Explain the working of Puppet Relay. 
Answer: Puppet Relay monitors your infrastructure and runs automation scripts. Using APIs and the available DevOps tools, connecting on-premise and cloud systems, it not only triggers alerts in case of incidents, but also resets an affected instance using the default configuration details present in the catalogs compiled from the manifest, and finally informs the team about the incident.

Q #35) What is Bolt?

Answer: Bolt automates the coordination, management, and setup of computer systems and related services that were previously handled manually, and maintains the entire infrastructure of an organization.

Conclusion

Puppet is an automated configuration management tool for on-premise and virtual infrastructure that follows the client-server model, where one machine is the master and the other machines act as agents or nodes. Its main purpose is to manage the resources on your infrastructure's servers. A resource is code that manages a characteristic of a server, like a user account or installed software. This configuration management tool gives us the power to express server configuration as code and thereby manage infrastructure automatically. We are sure this tutorial on Puppet interview questions will help you prepare for your upcoming interview.
https://www.softwaretestinghelp.com/puppet-interview-questions/
CC-MAIN-2021-10
en
refinedweb
- Introduction
- Configuring environments
- Defining environments
- Configuring manual deployments
- Configuring dynamic environments
- Configuring Kubernetes deployments
- Deployment safety
- Complete example
- Protected environments
- Working with environments
- Viewing environments and deployments
- Viewing deployment history
- Retrying and rolling back
- Using the environment URL
- Stopping an environment
- Prepare an environment
- Grouping similar environments
- Environment incident management
- Monitoring environments
- Web terminals
- Scoping environments with specs
- Environments Dashboard
- Limitations
- Further reading

Environments and deployments

Introduced in GitLab 8.9. Environments allow control of the continuous deployment of your software, all within GitLab.

Introduction

GitLab CI/CD is used to deploy versions of code to environments. Note that the environment keyword defines where the app is deployed. The environment name and url are exposed in various places within GitLab. Each time a job that has an environment specified succeeds, a deployment is recorded along with the Git SHA and environment name. Environment names may contain -, _, /, {, }, or ., but must not start nor end with /.

Environment variables and runners

Sometimes you don't know the URL before the deployment script finishes. If you want to use the environment URL in GitLab, you would have to update it manually. To address this problem, you can configure a deployment job to report back a set of variables, including URLs. The following example shows a Review App that creates a new environment per merge request. The assigned URL for the review/your-branch-name environment is visible in the UI. Note the following: stop_review doesn't generate a dotenv report artifact, so it won't recognize the DYNAMIC_ENVIRONMENT_URL variable.

Configuring manual deployments

Adding when: manual to a job:

- Means the deploy_prod job will only be triggered when the "play" button is clicked. You can find the "play" button in the GitLab UI for that job: in the pipelines, environments, deployments, and jobs views.

Clicking the play button in any view triggers the deploy_prod job. The deployment is recorded as a new environment named production. If your environment's name is production (all lowercase), it's recorded in Value Stream Analytics.

Configuring dynamic environments

The name and url keywords can use the variables that runners expose, for example $CI_COMMIT_REF_NAME in environment:url in the example above, which would give a branch-specific URL. You aren't required to use the same prefix or only slashes (/) in the dynamic environments' names. However, using this format enables the grouping similar environments feature.

Configuring Kubernetes deployments

With the GitLab Kubernetes integration, information about the cluster and namespace will be displayed above the job trace on the deployment job page.

Configuring incremental rollouts

Learn how to release production changes to only a portion of your Kubernetes pods with incremental rollouts.

Deployment safety

Deployment jobs can be more sensitive than other jobs in a pipeline, and might need to be treated with extra care. There are multiple features in GitLab that help maintain deployment security and stability:

- Restrict write-access to a critical environment
- Limit the job-concurrency for deployment jobs
- Skip outdated deployment jobs
- Prevent deployments during deploy freeze windows

Complete example

The configuration in this section provides a full development workflow where your app is:

- Tested.
- Built.
- Deployed as a Review App.
- Deployed to a staging server once the merge request is merged.

See the limitations section for some edge cases regarding the naming of your branches and Review Apps.
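The full .gitlab-ci.yml for this example is not reproduced here; a condensed sketch of the review-app portion (variable names and URLs follow common GitLab conventions, but check the linked reference for the complete version) looks like this:

deploy_review:
  stage: deploy
  script:
    - echo "Deploy a review app"
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_ENVIRONMENT_SLUG.example.com
    on_stop: stop_review
  only:
    - branches
  except:
    - master

stop_review:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - echo "Remove review app"
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual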
The complete example provides the following workflow to developers:

- Create a branch locally.
- Make changes and commit them.
- Push the branch to GitLab.
- Create a merge request.

Behind the scenes, the pipeline builds and tests the branch, and deploys it to its own Review App environment.

Protected environments

Environments can be "protected", restricting access to them. For more information, see Protected environments.

Working with environments

Once environments are configured, GitLab provides many features for working with them, as documented below.

Viewing environments and deployments

A list of environments and deployment statuses is available on each project's Operations > Environments page.

Viewing deployment history

The deployment history view is similar to the Environments page, but all deployments are shown. Also in this view is a Rollback button. For more information, see Retrying and rolling back.

Retrying and rolling back

What to expect with a rollback: Pressing the Rollback button on a specific commit triggers a new deployment with its own unique job ID. This means that you will see a new deployment that points to the commit you're rolling back to. Note that the defined deployment process in the job's script determines whether the rollback succeeds.

Using the environment URL

With GitLab Route Maps, you can go directly from source files to public pages in the environment set for Review Apps.

Stopping an environment

Starting with GitLab 8.14, dynamic environments stop automatically when their associated branch is deleted.

Automatically stopping an environment

If you can't use Pipelines for merge requests, setting the GIT_STRATEGY to none is necessary in the stop_review job so that the runner doesn't try to check out the code after the branch is deleted. Additionally, both jobs should have matching rules or only/except configuration. In the example above, if the configuration is not identical, the stop_review job might not be included in all pipelines that include the deploy_review job, and it will not be possible to trigger the action: stop to stop the environment automatically. You can read more in the .gitlab-ci.yml reference.

Environments auto-stop

Introduced in GitLab 12.8. You can set an expiry time for environments and stop them automatically after a certain period. For example, consider the use of this feature with Review App environments. When you set up Review Apps, sometimes they keep running for a long time because some merge requests are left open and forgotten. Such idle environments waste resources and should be terminated as soon as possible. To address this problem, you can specify an optional expiration date for Review App environments. When the expiry time is reached, GitLab automatically triggers a job to stop the environment, eliminating the need to do so manually. In case an environment is updated, the expiration is renewed, ensuring that only active merge requests keep their Review Apps running. To enable this feature, you must specify the environment:auto_stop_in keyword in .gitlab-ci.yml. You can specify a human-friendly duration as the value, such as 1 hour and 30 minutes or 1 day. auto_stop_in uses the same format as artifacts:expire_in. Note that due to resource limitations, a background worker for stopping environments only runs once every hour. This means that environments aren't stopped at the exact timestamp specified, but are instead stopped when the hourly cron worker detects expired environments.

Auto-stop example

In the following example, there is a basic Review App setup that creates a new environment per merge request. The review_app job is triggered by every push and creates or updates an environment named review/your-branch-name. The environment keeps running until it is stopped or its auto-stop period elapses.

Delete a stopped environment

Introduced in GitLab 12.10.
You can delete stopped environments in one of two ways: through the GitLab UI or through the API.

Delete environments through the API

Environments can also be deleted by using the Environments API.

Prepare an environment

Introduced in GitLab 13.2. By default, GitLab creates a deployment every time a build with the specified environment runs. Newer deployments can also cancel older ones. You may want to specify an environment keyword to protect builds from unauthorized access, or to get access to scoped variables. In these cases, you can use the action: prepare keyword to ensure deployments won't be created, and no builds will be canceled:

build:
  stage: build
  script:
    - echo "Building the app"
  environment:
    name: staging
    action: prepare
    url: https://staging.example.com/

Grouping similar environments

Environment incident management

You have successfully set up a Continuous Delivery/Deployment workflow in your project. Production environments can go down unexpectedly, including for reasons outside of your own control. For example, issues with external dependencies, infrastructure, or human error can cause major issues with an environment. Alerts for such incidents are surfaced on the environment page (introduced in GitLab Ultimate). If an alert requires a rollback, you can select the deployment tab from the environment page and select which deployment to roll back to.

Auto Rollback

- Visit Project > Settings > CI/CD > Automatic deployment rollbacks.
- Select the checkbox for Enable automatic rollbacks.
- Click Save changes.

Monitoring environments

If you have enabled Prometheus for monitoring system and response metrics, you can monitor the behavior of your app running in each environment. For the monitoring dashboard to appear, you need to configure Prometheus to collect at least one supported metric. In GitLab 9.2 and later, all deployments to an environment are shown directly on the monitoring dashboard. Once configured, GitLab will attempt to retrieve supported performance metrics for any environment that has had a successful deployment. If monitoring data was successfully retrieved, a Monitoring button will appear for each environment.

Embedding metrics in GitLab Flavored Markdown

Metric charts can be embedded within GitLab Flavored Markdown. See Embedding Metrics within GitLab Flavored Markdown for more details.

Web terminals

Introduced in GitLab 8.13. To enable web terminals, follow the instructions given in the service integration documentation. Note that container-based deployments often lack basic tools (like an editor), and may be stopped or restarted at any time. If this happens, you will lose all your changes. Treat this as a debugging tool, not a comprehensive online IDE. Once enabled, your environments will gain a "terminal" button. You can also access the terminal button from the page for a specific environment. Wherever you find it, clicking the button will take you to a separate page to establish the terminal session.

Scoping environments with specs

- Introduced in GitLab Premium 9.4.
- Scoping for environment variables was moved to Core in GitLab 12.2.

Note that the most specific spec takes precedence over other wildcard matching. In this case, the review/feature-1 spec takes precedence over the review/* and * specs.

Environments Dashboard

See Environments Dashboard for a summary of each environment's operational health.

Limitations

In the environment: name, you are limited to only the predefined environment variables. Re-using variables defined inside script as part of the environment name will not work.

Further reading

Below are some links you may find interesting:
https://docs.gitlab.com/13.7/ee/ci/environments/
CC-MAIN-2021-10
en
refinedweb
/* Window creation, deletion and examination for GNU Emacs.
   Does not include redisplay.
   Copyright (C) 1985, 86, 87, 93, 94, 95 Free Software Foundation, Inc. */

Lisp_Object Qwindowp, Qwindow_live_p;

Lisp_Object Fnext_window (), Fdelete_window (), Fselect_window ();
Lisp_Object Fset_window_buffer (), Fsplit_window (), Frecenter ();

void delete_all_subwindows ();
static struct window *decode_window ();

#define min(a, b) ((a) < (b) ? (a) : (b))

#ifdef MULTI_FRAME
  if (NILP (frame))
    XSETFRAME (frame, selected_frame);
  else
    CHECK_LIVE_FRAME (frame, 0);
#endif

  XBUFFER (w->buffer)->clip_changed = 1; /* Prevent redisplay shortcuts */

  (XFASTINT (w->left) + XFASTINT (w->width)),

  else
    CHECK_LIVE_FRAME (frame, 2);
#endif

{
  register Lisp_Object tem, parent, sib;
  register struct window *p;
  register struct window *par;

#ifdef MULTI_FRAME
  /* all_frames == nil doesn't specify which frames to include.
     Decide which frames it includes.  */
  if (NILP (all_frames))
#endif
https://emba.gnu.org/emacs/emacs/-/blame/2c6638cde6fb5dfc717f451fd8c84a822878b738/src/window.c
CC-MAIN-2021-10
en
refinedweb
# Solution goes here

import numpy as np

b0 = np.linspace(0, 50, 101);
b1 = np.linspace(-1, 1, 101);

from itertools import product
hypos = product(b0, b1)
suite = Logistic(hypos);

for data in zip(df.Temperature, df.Incident):
    print(data)
    suite.Update(data)

# Solution goes here

Implement this model using MCMC. As a starting place, you can use this example from the PyMC3 docs. As a challenge, try writing the model more explicitly, rather than using the GLM module.

import pymc3 as pm

# Solution goes here

pm.traceplot(trace);

The posterior distributions for these parameters should be similar to what we got with the grid algorithm.
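One possible shape for the explicit (non-GLM) model, sketched here rather than taken from the notebook's hidden solution; it assumes the df used above, with Temperature and Incident columns:

import pymc3 as pm

with pm.Model() as model:
    # priors on the intercept and slope of the logistic regression
    b0 = pm.Normal('b0', mu=0, sd=10)
    b1 = pm.Normal('b1', mu=0, sd=10)

    # probability of an incident as a function of temperature
    p = pm.math.sigmoid(b0 + b1 * df.Temperature.values)

    # Bernoulli likelihood for the observed incidents
    pm.Bernoulli('incident', p=p, observed=df.Incident.values)

    trace = pm.sample(2000, tune=1000)

pm.traceplot(trace);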
https://nbviewer.jupyter.org/github/AllenDowney/ThinkBayes2/blob/master/examples/shuttle.ipynb
CC-MAIN-2021-10
en
refinedweb
PROBLEM LINK:

Setter - Erfan Alimohammadi
Tester - Roman Bilyi
Editorialist - Abhishek Pandey

DIFFICULTY: EASY

PRE-REQUISITES: Binary Search, Two Pointers

PROBLEM: Given an array A of size N, find the number of ways to delete a non-empty subarray from it so that the remaining part of A is strictly increasing.

QUICK-EXPLANATION:

Key to AC - Don't over-complicate the solution. Think in terms of: "If I start deleting from index i, then how many ways are possible?"

Maintain 2 arrays, Prefix[i] and Suffix[i], where Prefix[i] stores true if the subarray [1,i] is strictly increasing, and similarly Suffix[i]=true if the subarray [i,N] is strictly increasing. Note that once Prefix[i]=false for any index i, it remains false for every index after i as well. The mirrored observation holds for the Suffix[] array. Notice that if Prefix[i]=false, then deleting a sub-array starting from index i will not yield a strictly increasing array (as it is already falsified before this index). Hence, for every index starting from 1 (1-based indexing), as long as Prefix[i] is true, binary search for the least index j such that A_j>A_i and the array from index j onwards is strictly increasing. (This can be easily checked using our Suffix[j] array.) We can then delete N-j+1 subarrays, each starting at index (i+1) and ending at an index in [j-1,N]. Note that it is implicitly implied that j must be greater than (i+1). Make sure to account for cases where you might delete an empty subarray (for some test cases) when i=0, and for deleting the first element of the array as well.

(Note - I said the range [j-1,N] because you have the option to keep the entire range [j,N] as well. In other words, starting from index i, you must delete up to at least index j-1. That's the reason for the +1 in the expression.)

EXPLANATION:

We will first discuss the importance of the Prefix[] and Suffix[] arrays. Once that is done, we will move on to the binary search part, touch upon small implementation details, and summarize the editorial. The editorial uses 1-based indexing unless explicitly mentioned otherwise.

1. Maintaining the Prefix[] and Suffix[] arrays

We define Prefix[i] to be true if the subarray [1,i] is strictly increasing. Similarly, we maintain another array Suffix[], where Suffix[i] is true if the subarray [i,N] is strictly increasing. The first step towards the solution is to realize why these are needed in the first place, so give it some thought. In case you are unable to grasp it, the solution is below.

If you are still not clear about the role of the Prefix & Suffix arrays here: the rationale behind these two is that we only have as many starting positions to consider as there are indices where Prefix[i]=true. Once Prefix[i] becomes false, it will never be true again, because if the subarray [1,i] is not strictly increasing, then the subarray [1,j] will never be strictly increasing either, for all j > i. Hence, if we start from an index where Prefix[i]=false, then we cannot make the resulting array strictly increasing no matter what. So, starting from index 1, we only go up to the last index i where Prefix[i] is true. Similar reasoning holds for the Suffix[] array. Try to think it through a little; if you face any issue, the reasoning is also in the bonus section. Once you are clear on why these arrays are needed and what they denote, proceed to the next section to see how they will be used.
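As a quick illustration of this step, here is a sketch in the same style as the setter's code below (it assumes sentinels a[0] = -INF and a[n+1] = +INF):

// pre[i]: a[1..i] strictly increasing; suf[i]: a[i..n] strictly increasing
pre[0] = true;
for (int i = 1; i <= n; i++)
    pre[i] = pre[i - 1] && (a[i] > a[i - 1]);

suf[n + 1] = true;
for (int i = n; i >= 1; i--)
    suf[i] = suf[i + 1] && (a[i] < a[i + 1]);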
2. Binary Searching

Now, the things to note are that we can perform the operation only once, and the operation must remove a contiguous subarray. Starting from index 1, iterate over all indices i for which Prefix[i]=true. For each such index, if we can find an index j such that A_j>A_i and Suffix[j]=true (i.e. the subarray [j,N] is strictly increasing), then we are done! That is because we can then claim: "We get (N-j+1) ways of removing a sub-array, ending anywhere in the range [j-1,N]." Note that it is (j-1) here because A_j>A_i and Suffix[j]=true, which allow the final array formed by the elements in [1,i] \cup [j,N] to be strictly increasing as well! Iterating over all such valid i's gives us the final answer!

Now, the question reduces to: how will we find such a valid index j? Easy, but perhaps unexpected for some newcomers - Binary Search! Have a look at your Prefix[] and Suffix[] arrays. Prefix[] is initially true at i=1, and once it becomes false, it remains false till the end. The reverse holds for the Suffix[] array. Because of this, we can apply binary search on the array! For the binary search, we look for 2 things:

- Suffix[j] must be true.
- A_j must be strictly greater than A_i.

The second part is valid even if the original array A is unsorted (Why?). With this, all that is left is to code our binary search. The setter's implementation is given below:

Setter's Binary Search

int lo = i + 1, hi = n + 1;
while (hi - lo > 1){
    int mid = (lo + hi) >> 1;
    if (pd[mid] == 1 and a[mid] > a[i])  // pd = Suffix array
        hi = mid;
    else
        lo = mid;
}
answer += n - lo + 1;

However, one thing is still left!! Recall our interpretation! For every index i, we are fixing the starting point of the to-be-deleted subarray to (i+1). Got what I am trying to imply? Take it like this - for every index i, we find a j such that the remaining array is [1,i] \cup [k,N], where j-1 \leq k \leq N. This means that the deleted sub-array is [i+1,k-1]. Can you think of the corner case we are missing here? We are not deleting the first element here!! To account for this, just do another binary search without the A_{mid} > A_i condition, as all elements from index 1 up to index k will be deleted. (Alternative - what if we simply count up to how many indices Suffix[i] is 1?) With this done, all that is left is to take care of implementation issues, for instance, if your implementation needs to handle the case of deleting an empty subarray, or of being left with an empty sub-array, etc.
SOLUTION

Setter

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const int maxn = 2e5 + 10;
const int inf = 1e9 + 10;
bool dp[maxn], pd[maxn];
int a[maxn], t, n;

int main(){
    ios_base::sync_with_stdio (false);
    cin >> t;
    while (t--) {
        cin >> n;
        dp[0] = 1;
        a[0] = -inf;
        for (int i = 1; i <= n; i++){
            cin >> a[i];
            dp[i] = (dp[i - 1] & (a[i] > a[i - 1]));
        }
        pd[n + 1] = 1;
        a[n + 1] = inf;
        for (int i = n; i >= 1; i--)
            pd[i] = (pd[i + 1] & (a[i] < a[i + 1]));
        ll answer = 0;
        int lo = 0, hi = n + 1;
        while (hi - lo > 1){
            int mid = (lo + hi) >> 1;
            if (pd[mid])
                hi = mid;
            else
                lo = mid;
        }
        answer = (n - lo);
        for (int i = 1; i <= n - 1; i++){
            if (dp[i] == 0)
                break;
            int lo = i + 1, hi = n + 1;
            while (hi - lo > 1){
                int mid = (lo + hi) >> 1;
                if (pd[mid] == 1 and a[mid] > a[i])
                    hi = mid;
                else
                    lo = mid;
            }
            answer += n - lo + 1;
        }
        cout << answer - dp[n] << endl;
    }
}

Tester

#include "bits/stdc++.h"
using namespace std;

#define FOR(i,a,b) for (int i = (a); i < (b); i++)
#define RFOR(i,b,a) for (int i = (b) - 1; i >= (a); i--)
#define ITER(it,a) for (__typeof(a.begin()) it = a.begin(); it != a.end(); it++)
#define FILL(a,value) memset(a, value, sizeof(a))
#define SZ(a) (int)a.size()
#define ALL(a) a.begin(), a.end()
#define PB push_back
#define MP make_pair

typedef long long Int;
typedef vector<int> VI;
typedef pair<int, int> PII;

const double PI = acos(-1.0);
const int INF = 1000 * 1000 * 1000;
const Int LINF = INF * (Int) INF;
const int MAX = 100007;
const int MOD = 998244353;

long long readInt(long long l,long long r,char endd){
    long long x=0;
    int cnt=0;
    int fi=-1;
    bool is_neg=false;
    while(true){
        char g=getchar();
        if(g=='-'){
            assert(fi==-1);
            is_neg=true;
            continue;
        }
        if('0'<=g && g<='9'){
            x*=10;
            x+=g-'0';
            if(cnt==0){
                fi=g-'0';
            }
            cnt++;
            assert(fi!=0 || cnt==1);
            assert(fi!=0 || is_neg==false);
            assert(!(cnt>19 || ( cnt==19 && fi>1) ));
        } else if(g==endd){
            assert(cnt>0);
            if(is_neg){
                x= -x;
            }
            assert(l<=x && x<=r);
            return x;
        } else {
            assert(false);
        }
    }
}
string readString(int l,int r,char endd){
    string ret="";
    int cnt=0;
    while(true){
        char g=getchar();
        assert(g!=-1);
        if(g==endd){
            break;
        }
        cnt++;
        ret+=g;
    }
    assert(l<=cnt && cnt<=r);
    return ret;
}
long long readIntSp(long long l,long long r){
    return readInt(l,r,' ');
}
long long readIntLn(long long l,long long r){
    return readInt(l,r,'\n');
}
string readStringLn(int l,int r){
    return readString(l,r,'\n');
}
string readStringSp(int l,int r){
    return readString(l,r,' ');
}
void assertEof(){
    assert(getchar()==-1);
}

int main(int argc, char* argv[])
{
    // freopen("in.txt", "r", stdin);
    //ios::sync_with_stdio(false); cin.tie(0);
    int t = readIntLn(1, 10);
    FOR(tt,0,t)
    {
        int n = readIntLn(1, 100000);
        VI A(n);
        FOR(i,0,n)
        {
            if (i + 1 < n)
                A[i] = readIntSp(-INF, INF);
            else
                A[i] = readIntLn(-INF, INF);
        }
        VI X;
        int val = INF + 47;
        X.push_back(val);
        RFOR(i, n, 0)
        {
            if (A[i] >= val)
                break;
            val = A[i];
            X.push_back(val);
        }
        Int res = min(SZ(X) - 1, n - 1);
        val = -INF - 47;
        FOR(i,0,n)
        {
            if (A[i] <= val)
                break;
            val = A[i];
            while (SZ(X) && X.back() <= A[i])
                X.pop_back();
            res += min(SZ(X), n - 1 - i);
        }
        cout << res << endl;
    }
    assertEof();
    cerr << 1.0 * clock() / CLOCKS_PER_SEC << endl;
}

Time Complexity = O(N log N) (Binary Search)
Space Complexity = O(N)

CHEF VIJJU'S CORNER

Why we need the Suffix array

Similar to the Prefix[] array, the Suffix[] array tells us how many suitable ending positions we have for the sub-array to delete. Exactly opposite to the Prefix[] array, the Suffix[] array is initially 0, and once it takes a value of 1, it will always be 1.
This is because if the subarray in range [i,N] is strictly increasing, then so is the subarray in range [i+1,N]. If we put the ending point of the to-be-deleted subarray at some point where Suffix[i]=false and Suffix[i+1] is NOT true, then the resulting array cannot be strictly increasing, as the array in range [i+1,N] is not strictly increasing.

"Array A is not sorted, then how is Binary Search possible?"

Simple! Because this condition matters only if Suffix[i] is true! This means we are searching within the continuously increasing sub-array ending at index N. In other words, the sub-array in which we are binary searching is strictly increasing, i.e. sorted. Hence we avoid a wrong answer. Remember, we are not binary searching for some value in the entire array - we are binary searching in the sorted part of the array for some value greater than A[i]!

3. This question is also solvable using the two-pointer technique! Give it a try and share your approach here.

4. What modifications can you think of if I modify the problem to "the resulting sub-array must be strictly increasing; however, an error of 1 element is allowed"? Meaning, it's ok if, after ignoring/deleting at most 1 element, the resulting array after the operation becomes strictly increasing. How can we solve this problem?

Setter's Notes

We can solve the problem using binary search for the appropriate index, or by two pointers.
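For reference, here is one possible O(N) two-pointer sketch for the original problem (my own illustration, not the setter's or tester's code; it uses the same -INF/+INF sentinels and recomputes the increasing-suffix boundary directly instead of storing suf[]):

#include <bits/stdc++.h>
using namespace std;

// Counts deletions of a non-empty subarray leaving a non-empty,
// strictly increasing remainder.
long long countWays(int n, const vector<long long>& v) {
    const long long INF = LLONG_MAX / 4;
    vector<long long> a(n + 2);
    a[0] = -INF; a[n + 1] = INF;
    for (int i = 1; i <= n; i++) a[i] = v[i - 1];

    // first index of the maximal strictly increasing suffix
    int j = n + 1;
    while (j > 1 && a[j - 1] < a[j]) j--;

    long long ans = 0;
    int k = j;                          // start of the kept suffix; only moves right
    for (int i = 0; i <= n - 1; i++) {  // i = length of the kept prefix
        if (i > 0 && a[i] <= a[i - 1]) break;  // prefix no longer increasing
        if (k < i + 2) k = i + 2;              // deleted part must be non-empty
        while (k <= n && a[k] <= a[i]) k++;    // need a[k] > a[i]
        ans += (n + 1) - k + 1;                // kept suffix starts anywhere in [k, n+1]
        if (i == 0) ans--;                     // disallow deleting the whole array
    }
    return ans;
}

// e.g. countWays(4, {1, 5, 2, 4}) == 6

Since a[i] strictly increases along the valid prefix, the pointer k never moves backwards, giving the linear bound.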
https://discuss.codechef.com/t/delarray-editorial/32453
CC-MAIN-2021-10
en
refinedweb
ane-device-file-util is a native extension that allows an application to open files with a registered application on iOS (e.g. Dropbox). Using the extension, an application can provide the user with a link or button to open a file, such as a PDF. When the user clicks that button, a native system dialog shows a list of applications able to open that file. The extension has no control over the list of applications able to open the file, since that is controlled by the system. All it can do is inform the system that the application wants to open a specific file type.

Sample

package {
    import com.debokeh.anes.utils.DeviceFileUtil;
    import flash.display.Sprite;
    import flash.events.MouseEvent;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;

    public class Main extends Sprite {
        public function Main() {
            // to determine the ready state
            var tf:TextField;
            addChild(tf = new TextField);
            tf.autoSize = TextFieldAutoSize.LEFT;
            tf.text = "click me!";
            // wait for click
            stage.addEventListener(MouseEvent.CLICK, function():void {
                // Example #1 : You will need foo.pdf in the document directory
                DeviceFileUtil.openWith("foo.pdf");
                // Example #2 : You will need foo.pdf in the document directory
                // DeviceFileUtil.openWith("foo.pdf", DeviceFileUtil.DOCUMENTS_DIR);
                // Example #3 : You will need foo.pdf in the application directory
                // DeviceFileUtil.openWith("foo.pdf", DeviceFileUtil.BUNDLE_DIR);
            });
        }
    }
}

Comments

"Sorry, my post failed. Do you have any plans to port this native extension to Android?"

"No problem about the post 😉 About your question, the extension is not mine. The author is @katopz; you can ask him about the Android port."
http://www.as3gamegears.com/air-native-extension/ane-device-file-util/
CC-MAIN-2019-04
en
refinedweb
Pampy: Pattern Matching for Python

Pampy is pretty small (150 lines), reasonably fast, and often makes your code more readable and easier to reason about.

You can write many patterns

Patterns are evaluated in the order they appear.

You can write Fibonacci

The operator _ means "any other case I didn't think of".

from pampy import match, _

def fibonacci(n):
    return match(n,
        1, 1,
        2, 1,
        _, lambda x: fibonacci(x-1) + fibonacci(x-2)
    )

You can write a Lisp calculator in 5 lines

from pampy import match, REST, _

def lisp(exp):
    return match(exp,
        int,              lambda x: x,
        callable,         lambda x: x,
        (callable, REST), lambda f, rest: f(*map(lisp, rest)),
        tuple,            lambda t: list(map(lisp, t)),
    )

plus = lambda a, b: a + b
minus = lambda a, b: a - b
from functools import reduce

lisp((plus, 1, 2))                # => 3
lisp((plus, 1, (minus, 4, 2)))    # => 3
lisp((reduce, plus, (1, 2, 3)))   # => 6

You can match so many things!

match(x,
    3,          "this matches the number 3",
    int,        "matches any integer",
    (str, int), lambda a, b: "a tuple (a, b) you can use in a function",
    [1, 2, _],  "any list of 3 elements that begins with [1, 2]",
    {'x': _},   "any dict with a key 'x' and any value associated",
    _,          "anything else"
)

You can match [HEAD, TAIL]

from pampy import match, HEAD, TAIL, _

x = [1, 2, 3]

match(x, [1, TAIL],    lambda t: t)          # => [2, 3]
match(x, [HEAD, TAIL], lambda h, t: (h, t))  # => (1, [2, 3])

TAIL and REST actually mean the same thing.

You can nest lists and tuples

from pampy import match, _

x = [1, [2, 3], 4]

match(x, [1, [_, 3], _], lambda a, b: [1, [a, 3], b])  # => [1, [2, 3], 4]

You can nest dicts. And you can use _ as a key!

pet = { 'type': 'dog', 'details': { 'age': 3 } }

match(pet, { 'details': { 'age': _ } }, lambda age: age)      # => 3
match(pet, { _ : { 'age': _ } },        lambda a, b: (a, b))  # => ('details', 3)

It feels like putting multiple _ inside dicts shouldn't work. Isn't ordering in dicts not guaranteed? But it does work, because in Python 3.7 dict preserves insertion order by default.

You can match class hierarchies

class Pet: pass
class Dog(Pet): pass
class Cat(Pet): pass
class Hamster(Pet): pass

def what_is(x):
    return match(x,
        Dog, 'dog',
        Cat, 'cat',
        Pet, 'any other pet',
        _,   'this is not a pet at all',
    )

what_is(Cat())      # => 'cat'
what_is(Dog())      # => 'dog'
what_is(Hamster())  # => 'any other pet'
what_is(Pet())      # => 'any other pet'
what_is(42)         # => 'this is not a pet at all'

All the things you can match

As Pattern you can use any Python type, any class, or any Python value. The operator _ and types like int or str extract variables that are passed to functions. Types and classes are matched via isinstance(value, pattern). Iterable patterns match recursively through all their elements. The same goes for dictionaries.

Using strict=False

By default match() is strict. If no pattern matches, it raises a MatchError. You can prevent this using strict=False; in that case match just returns False if nothing matches.

>>> match([1, 2], [1, 2, 3], "whatever")
MatchError: '_' not provided. This case is not handled: [1, 2]

>>> match([1, 2], [1, 2, 3], "whatever", strict=False)
False

Install

Currently it works only in Python > 3.6, because dict matching can work only in the latest Pythons. I'm currently working on a backport with some minor syntax changes for Python 2. To install it:

$ pip install pampy

or

$ pip3 install pampy
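To sanity-check the install, here is a tiny hedged example that only uses the features shown above (the command names are made up for illustration): tuples match element-wise, literals match by equality, and types extract values that are passed to the handler in order.

from pampy import match, _

def handle(cmd):
    return match(cmd,
        ("move", int, int), lambda x, y: "moving to %d,%d" % (x, y),
        ("say", str),       lambda msg: "saying %r" % msg,
        _,                  lambda c: "unknown command: %r" % (c,),
    )

print(handle(("move", 3, 4)))  # moving to 3,4
print(handle(("say", "hi")))   # saying 'hi'
print(handle(42))              # unknown command: 42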
https://pythondigest.ru/view/38431/
CC-MAIN-2019-04
en
refinedweb
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago

#22047 closed Bug (fixed)

Cannot resolve keyword u'group_ptr' into field. Choices are: group, id, name, permissions, user

Description

I recently tried to upgrade from 1.5.1 to 1.6.2 and am now getting this error for inherited many-to-many relationships.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/bg/common/lib/python2.7/site-packages/django/db/models/manager.py", line 133, in all
    return self.get_queryset()
  File "/opt/bg/common/lib/python2.7/site-packages/django/db/models/fields/related.py", line 549, in get_queryset
    return super(ManyRelatedManager, self).get_queryset().using(db)._next_is_sticky().filter(**self.core_filters)
  File "/opt/bg/common/lib/python2.7/site-packages/django/db/models/query.py", line 590, in filter
    return self._filter_or_exclude(False, *args, **kwargs)
  File "/opt/bg/common/lib/python2.7/site-packages/django/db/models/query.py", line 608, in _filter_or_exclude
    clone.query.add_q(Q(*args, **kwargs))
  File "/opt/bg/common/lib/python2.7/site-packages/django/db/models/sql/query.py", line 1198, in add_q
    clause = self._add_q(where_part, used_aliases)
  File "/opt/bg/common/lib/python2.7/site-packages/django/db/models/sql/query.py", line 1234, in _add_q
    current_negated=current_negated)
  File "/opt/bg/common/lib/python2.7/site-packages/django/db/models/sql/query.py", line 1100, in build_filter
    allow_explicit_fk=True)
  File "/opt/bg/common/lib/python2.7/site-packages/django/db/models/sql/query.py", line 1357, in setup_joins
    names, opts, allow_many, allow_explicit_fk)
  File "/opt/bg/common/lib/python2.7/site-packages/django/db/models/sql/query.py", line 1277, in names_to_path
    "Choices are: %s" % (name, ", ".join(available)))
django.core.exceptions.FieldError: Cannot resolve keyword u'group_ptr' into field. Choices are: group, id, name, permissions, user

Change History (9)

comment:1 Changed 5 years ago by

comment:2 Changed 5 years ago by

This error will happen when you try to navigate through the users' django.db.models.fields.related.ManyRelatedManager:

g = Group.objects.get(id=89)
g.users.all()

comment:3 Changed 5 years ago by

comment:4 Changed 5 years ago by

I slimmed down the models a bit:

from django.db import models
from django.contrib.auth.models import User as DjangoUser, Group as DjangoGroup

class User(DjangoUser):
    pass

class Group(DjangoGroup):
    users = models.ManyToManyField(User, through='Membership', related_name='groups')

class Membership(models.Model):
    user = models.ForeignKey(User)
    group = models.ForeignKey(Group)

The problem seems to be the related_name='groups' parameter. If you remove it, things work. I'm not fully convinced that this is a bug, because having groups as a related_name actually clashes with the groups field on User.

comment:5 Changed 5 years ago by

The documentation mentions that one should not create fields which clash with the API. You create a field which clashes with the API. Ergo, it doesn't work.

comment:6 Changed 5 years ago by

@AeroNotix, you're right, but the clash should be detected by the system check framework. mondone and I have written a patch.

comment:7 Changed 5 years ago by

comment:8 Changed 5 years ago by

comment:9 Changed 5 years ago by

Thanks for discovering the real issue; I'll ensure there are no field and related-name clashes. The error didn't help much in seeing that issue. My model looks like so (the class bodies were stripped from the original post):

import json, logging, hashlib, random, time, base64, sys, urlparse
from django.contrib.auth.models import User as DjangoUser, Group as DjangoGroup, GroupManager as DjangoGroupManager

class GroupManager(DjangoGroupManager):
    ...

class Group(DjangoGroup):
    ...

def generateSalt():
    ...

class User(DjangoUser):
    ...

class Membership(models.Model):
    ...
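For anyone landing here with the same traceback: per comment:4, the immediate workaround is to choose a related_name that does not clash with the groups field that django.contrib.auth already puts on User. An illustrative one-line change (the name custom_groups is arbitrary):

class Group(DjangoGroup):
    users = models.ManyToManyField(User, through='Membership', related_name='custom_groups')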
https://code.djangoproject.com/ticket/22047
CC-MAIN-2019-04
en
refinedweb
WMI-related performance issues may arise due to extensive usage of WMI components. You can increase the values of the following properties to their maximums. A restart may be required after applying the following settings.

- Run cmd.exe as admin.
- Type wbemtest.exe and run it.
- Click Connect.
- In the namespace text box type "root" (without quotes).
- Click Connect.
- Click Enum Instances…
- In the Class Info dialog box enter the Superclass Name as "__ProviderHostQuotaConfiguration" (without quotes) and press OK. Note: the Superclass name includes a double underscore at the front.
- In the Query Result window, double-click "__ProviderHostQuotaConfiguration=@".
- Set the following values for these properties. Don't forget to Save Property after setting the value of each property:

  MemoryPerHost   1073741824 (1 GB)
  HandlesPerHost  8192

- Save Object.

"Thank you! It helped in fixing my issue."
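The same change can be scripted instead of clicked through in wbemtest. A hedged PowerShell sketch of the equivalent steps (run from an elevated shell; the values are the ones recommended above, and Put() is what actually persists the change):

# Raise the WMI provider host quotas (illustrative script)
$quota = Get-WmiObject -Namespace "root" -Class "__ProviderHostQuotaConfiguration"
$quota.MemoryPerHost  = 1073741824   # 1 GB
$quota.HandlesPerHost = 8192
$quota.Put()                         # persist the new values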
https://blogs.technet.microsoft.com/bulentozkir/2014/01/14/increase-wmi-quota-properties-to-maximum-values/
CC-MAIN-2019-04
en
refinedweb
Most existing enterprise back-end systems provide a SOAP-based web service application programming interface (API) or proprietary file-based interfaces. In this article series we will discuss how Oracle Service Bus (OSB) 12c can be used to transform these enterprise system interfaces into a mobile-optimized REST-JSON API. This architecture layer is sometimes referred to as Mobile Oriented Architecture (MOA) or Mobile Service Oriented Architecture (MOSOA). A-Team has been working on a number of projects with OSB 12c to build this architecture layer. We will explain step by step how to build this layer, and we will share tips, lessons learned and best practices we discovered along the way.

Main Article

In part 1 we discussed the design of the REST API; in part 2 and part 3 we discussed the implementation of the RESTful services in Service Bus by transforming ADF BC SDO SOAP service methods. In this fourth part, we will take a look at techniques for logging, debugging, troubleshooting and exception handling.

The easiest way to get more insight into what actually happens inside your pipelines is to add Log actions. You can simply drag a Log action from the component palette and drop it anywhere you want. For example, if a call to a business service fails, you can add log statements to print out the request body before and after the transformation that takes place in the Replace action, to inspect the payload. By default, the Severity of the log message is set to Debug. In order to see debug log messages, you need to change the OSB log level, which is set to Warning by default. You can do this using the Actions dropdown menu in the JDeveloper log window and choosing the Configure Oracle Diagnostic Logging option. You can also use Enterprise Manager, by opening the Service Bus dropdown menu and choosing Logs -> Log Configuration. If you set the log level to Trace (FINE) or lower, your debug log messages will appear in the JDeveloper log window. However, with this log level you also get a lot of standard diagnostic OSB log messages in your log, which makes it harder to find your own log messages. So, it is easiest to set the Log action Severity to Info, and the OSB log level to Notification (INFO). Note that if you change the log level, you do not need to restart WebLogic or redeploy your app; the changes are applied immediately. With info-level logging you have a clean log window that only contains your own log messages, and when you move your OSB application to production, you will not clutter the log files as long as the production log level is set to Warning or Error. The original article shows an example of the log window when executing the /departments/{departmentId} resource (which maps to the getDepartmentDetails operation binding). More information about Service Bus logging can be found in the official documentation.

Another way to troubleshoot issues is to run your OSB application in debug mode. You can do this by choosing the Debug option from the proxy service popup menu. You can set breakpoints on the actions in your pipeline diagram using the popup menu. When you then execute a resource, the debugger will stop at your breakpoint and you can use the "data" debug window to inspect the flow of data through your pipeline. You can expand the various XML elements to see the contents of the header and body of your request and your response; expanding the body element shows the same data as we logged in the previous section.

Any custom variables that you use to store temporary data, like the expandDetails variable we introduced in part 3, are also visible. When the debugger hits a breakpoint you have the normal debugging options, like Step Over to go to the next action in the pipeline, or Resume to go to the next breakpoint. In other words, running in debug mode allows you to determine the execution path through your pipeline, in addition to viewing the data like you can with log messages.

Invoking business services might cause various (unexpected) exceptions. The business service call might fail because the server is down, or the call succeeds but leads to an error because some business rule is violated while performing some update action. When sending a JSON payload that contains invalid data, for example a non-existent manager ID, it is common practice to return HTTP error code 400 "Bad Request". When the business service does not respond at all, we should return HTTP error code 404 "Not Found". Using these error codes makes it clear to the consumer whether they are dealing with an application error (400) or a server error (404). To return appropriate HTTP error codes, we first need to define an XSD that contains the structure of the error message for each type of error. Here is the error.xsd that we will use in our example (the attribute values were garbled during extraction; the element names below are reconstructed from the surrounding text, except the third child of ApplicationError, whose name could not be recovered):

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
   <xsd:element name="ApplicationError">
      <xsd:complexType>
         <xsd:sequence>
            <xsd:element name="code"/>
            <xsd:element name="message"/>
            <xsd:element/>
         </xsd:sequence>
      </xsd:complexType>
   </xsd:element>
   <xsd:element name="ServerError">
      <xsd:complexType>
         <xsd:sequence>
            <xsd:element name="message"/>
         </xsd:sequence>
      </xsd:complexType>
   </xsd:element>
</xsd:schema>

With this XSD in place we can add two fault bindings to our REST operation bindings. Each fault binding needs to have its own unique XSD element type. If we reused the ApplicationError element type for the 404 fault binding, the 404 error code would never be returned: OSB determines which fault binding to use based on the element type used in the fault response returned by the pipeline.

To be able to return a different payload and associated HTTP error code in case of an exception, we need to add a so-called error handler to our route nodes. We right-click the RouteNode of the createDepartment operation branch and choose Add Error Handler. To figure out the kind of response we get when violating a business rule, we first drag and drop a Log action inside the error handler and set the expression to $body. We now execute the /departments POST resource with an invalid managerId in the payload, and in the log window we can inspect the payload body returned when an ADF BC exception is thrown. The body contains a generic part with the <env:Fault> element, and inside the <detail> element we can find the ADF-specific error.

We need an XQuery file to transform the ADF error message to the ApplicationError element. As source element type we choose the ServiceErrorMessage element; as target element we choose the ApplicationError element from error.xsd, and then we can drag the mapping lines. As we have seen in the JDeveloper log window, the message element contains both the error code and the error message. Since we have a separate element for the code, we want to strip the error code from the message.

We can do this by using the XQuery string function substring-after: in the component palette, we change the value of the dropdown list to XQuery Functions and expand the String Functions accordion. We drag and drop the substring-after function onto the message mapping line, inside the Mappings area in the middle. We click the yellow function expression icon that appears, and then we can complete the expression in the Expression - Properties window. We should replace the second argument of the function with ': ', because after these two characters the actual error message appears. Click on the XQuery Source tab to make sure that the expression has been saved correctly; sometimes the change you make in the properties window is not picked up. If this is the case, just re-enter the function argument in the source. In the component palette there are many XQuery functions available; if you want more information on how to use them, you can use the xqueryfunctions.com website.

To complete the XQuery transformation, we need to surround the ApplicationError with the standard SOAP fault element. If we forget this step, the payload will not be recognized as a valid fault payload, and the fault bindings we defined for the REST operation will not be used. We cannot do this in a visual way, so we click on the XQuery Source tab and add the surrounding fault element there. Note that the value of the <faultcode> element must be set to env:Server, otherwise it will not work. The value of the <faultstring> element doesn't matter.

With the XQuery transformation in place we can add a Replace action inside the error handler which uses this transformation. The expression for the ServiceErrorMessage input variable uses double slashes, which means it will search the whole tree inside the body element, not just the direct children. The err namespace can be found in the JDeveloper log and should be set to the namespace URI shown there.

The last step is to drag and drop a Reply action after the Replace action and set the option With Failure, to inform the proxy service that a fault response is returned. That's it: if we now use Postman to create a new department with an invalid managerId, we get a nice error response with HTTP code 400.

To handle the situation where the ADF BC SOAP server is down, we need to return a response which contains the ServerError element, so we can return the HTTP error code 404 together with a user-friendly error message. To distinguish between the 400 and 404 error responses, we drag and drop an If-Then action inside the error handler and enter the following expression in the Condition field:

$body//err:ServiceErrorMessage!=''

When this expression is true we are dealing with an ADF BC exception, so we should move the Replace action we already defined inside the If branch. In the Else branch we should return a generic error that the service is not available.
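For reference, the message mapping that strips the error code ends up as a single substring-after call, roughly like this (an illustrative sketch only; the variable name and the child element path depend on how the XQuery input parameter was declared in your transformation):

fn:substring-after($serviceErrorMessage//err:message/text(), ': ')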
Since there is nothing to transform, we can enter the required response payload directly in the expression field. The xmlns declarations were truncated during extraction; the env declaration below is the standard SOAP 1.1 envelope namespace, and the ns2 prefix must be bound to the target namespace of error.xsd:

<env:Fault xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
   <faultcode>env:Server</faultcode>
   <faultstring>Generic Error</faultstring>
   <detail>
      <ns2:ServerError>
         <ns2:message>The HR service is currently not available, please contact the helpdesk</ns2:message>
      </ns2:ServerError>
   </detail>
</env:Fault>

With that in place the error handler is complete (log actions removed). When we bring the ADF BC server down and use Postman again to submit a new department, we get the 404 error code together with the generic error we just defined in the body replace expression.

To finish the exception handling, we need to add the same error handler to the updateDepartment operation. A quick way to do this is to right-click the createDepartment error handler and choose Copy from the popup menu, then right-click the updateDepartment RouteNode and choose Paste. However, a better and more reusable way is to create a pipeline template and define the error handler in the template. This prevents duplication of identical error handlers, and it allows us to change the error handler over time in the template, with the changes being picked up automatically by all pipelines based on this template. We will look into pipeline templates in more depth later on in this article series.
https://www.ateam-oracle.com/creating-a-mobile-optimized-rest-api-using-oracle-service-bus-part-4
CC-MAIN-2020-45
en
refinedweb
Bug #10552 Partitionable glideins not accounted for correctly - not accounted for at all

Description

This is a follow-up of issue #6897, which did not solve the problem. Mats pointed out that the problem was not resolved and helped to identify it better:

1. the names in condor_status/Name include the partition id (e.g. slot1_2@...)

   slot1@glidein_31618_384461100@uct2-c161.mwt2.org
   slot1_1@glidein_31618_384461100@uct2-c161.mwt2.org
   slot1_3@glidein_31618_384461100@uct2-c161.mwt2.org
   slot1_4@glidein_31618_384461100@uct2-c161.mwt2.org

2. the names in condor_q/RemoteHost have only the slot (e.g. slot1@...)

   3 slot1@glidein_31618_384461100@uct2-c161.mwt2.org

3. appendRealRunning in glideinFrontendLib.py is looking for a match (the dictionary key is the name from condor_status):

   condor_status = status_dict[collector_name].fetchStored()
   if remote_host in condor_status:
       ...

   and is called with:

   glideinFrontendLib.appendRealRunning(self.condorq_dict_running, self.status_dict_types['Running']['dict'])

The parent slot (slot1@...) is not running any job, so it is not in the list (otherwise the number of running glideins would be incorrect); therefore it is not matched in appendRealRunning and is not counted.

Now I don't know which is the correct path to solve this problem:

1. the job should report the exact slot in which it runs, and this is an HTCondor bug
2. the job is reporting only the parent slot, and GWMS should parse the collector entry to match it with the parent slot name

My preference would be for solution 1 if possible. Solution 2 can be done within the GWMS code but works only if the jobs in the sub-slots are equivalent, because there is no easy way to match the job with the correct sub-slot (I looked inside the ClassAds and I think the only way to match is via PublicClaimId, which I don't think is saved in the dictionaries - those would need to be changed as well).

History

#1 Updated by Marco Mambelli over 4 years ago
- File 0001-matching-the-main-slot-for-partitionable-slots.patch added

Since GLIDEIN_Schedd, GLIDEIN_Entry_Name, GLIDEIN_Name and GLIDEIN_Factory depend on the submission and are all the same for sub-slots of a partitionable glidein, solution 2 is possible. It is in branch v3/10552 and attached in the patch.

#2 Updated by Parag Mhashilkar over 4 years ago
- Target version set to v3_2_12

#3 Updated by Parag Mhashilkar over 4 years ago
- Target version changed from v3_2_12 to v3_2_13

#4 Updated by Burt Holzman over 4 years ago

Hi Marco - this is what I just e-mailed about; I didn't know you had a bug already created for it. I don't think #1 is the answer - HTCondor has always reported the parent slot as RemoteHost. Isn't it trivial to match RemoteHost with the machine Name, since we know the form is slotX_Y?

#5 Updated by Marco Mambelli over 4 years ago

In the meeting with the condor team on 12/11, Zach and Todd explained how dynamic slots are created only for the match of the job (existing only when claimed), so it is preferable to use the parent partitionable slot for RemoteHost. There may in the future be a new attribute added to the job to track the actual slot. This means that option #1 is not viable.

#6 Updated by Marco Mambelli over 4 years ago
- Status changed from New to Feedback
- Assignee changed from Marco Mambelli to Burt Holzman
- Target version changed from v3_2_13 to v3_2_12

New changes are in v3/10552_v2; ignore the changes in v3/10552. In this version partitionable glideins are counted as 1 for the total, as 1 running glidein if there is at least one dynamic slot, and as 1 idle glidein if they have enough CPU and memory (cpu>0 and memory>2500MB; these limits have been imposed by CMS). Note that sometimes idle+running != total (partitionable glideins may be counted as both running and idle). Are the conditions in the selection and in the count OK, and are they not slowing things down too much?

#7 Updated by Marco Mambelli over 4 years ago
- Assignee changed from Burt Holzman to HyunWoo Kim

#8 Updated by HyunWoo Kim over 4 years ago
- Status changed from Feedback to Assigned
- Assignee changed from HyunWoo Kim to Marco Mambelli

I reviewed the 2 files that have changed and there are two comments from me, see below.

1. frontend/glideinFrontendElement.py: only the changes needed because countCoresCondorStatus has changed its signature.

2. frontend/glideinFrontendLib.py:

def getIdleCondorStatus: one change in the use of the dictionary get method.
Suggestion> In line #585 there is a comment line, "# None != True, no need to set default". Shouldn't this go above line #585?

def getRunningCondorStatus: improvement in the logic.

def getFailedCondorStatus: just lines and indentations.

def getIdleCoresCondorStatus: removed the redundant method body and redirected to getIdleCondorStatus.
Suggestion> The comments for this method might be a bit obsolete now. Why don't we explain in more detail why and how these two methods, getIdleCoresCondorStatus and getIdleCondorStatus, have the same logic, and thus that the redundant part has been removed and this method is redirected to getIdleCondorStatus?

def countCoresCondorStatus: added a second argument to cover TotalCores, IdleCores, RunningCores.

#9 Updated by Marco Mambelli over 4 years ago
- Status changed from Assigned to Resolved

#10 Updated by Parag Mhashilkar over 4 years ago
- Status changed from Resolved to Closed
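Comment #4's observation that the names have the form slotX_Y is essentially what the accepted fix exploits. A hedged Python sketch of that matching idea (an illustration only, not the actual patch):

import re

def parent_slot(name):
    # Map a dynamic slot name like 'slot1_2@glidein...@host'
    # to its partitionable parent 'slot1@glidein...@host'.
    slot, _, rest = name.partition('@')
    return re.sub(r'_\d+$', '', slot) + '@' + rest

parent_slot('slot1_3@glidein_31618_384461100@uct2-c161.mwt2.org')
# -> 'slot1@glidein_31618_384461100@uct2-c161.mwt2.org'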
https://cdcvs.fnal.gov/redmine/issues/10552
CC-MAIN-2020-29
en
refinedweb
Introduction to Templates in C++

When it comes to powerful features, C++ is usually among the first programming languages mentioned, and templates are one example of such a powerful C++ feature. A template is code written in a way that makes it independent of the data type: a formula for creating generic functions or classes. Generic programming is used where generic types are passed as arguments in algorithms, for compatibility with different data types. You don't have to write the same code again and again just because the data type of a function or class changes.

Types of Templates in C++

There are two basic kinds of templates in the C++ programming language, function templates and class templates; since C++11 there are also variadic templates, covered below as well. Let's have a look at them:

1. Function Templates

A function template is just a normal function with one key difference: a normal function can work only with the data types defined inside the function, whereas a function template is designed in such a way that it is independent of the data types; these templates can work with any data type you want. The general syntax for defining a function template is:

template <class F>
F function_name(F args)
{
    // Function body
}

Here, F is the template argument and class is a keyword. F can accept different data types. Here is a C++ program to demonstrate the function template:

Code:

#include <iostream>
using namespace std;

template <typename F>
void swapping(F &arg1, F &arg2)
{
    F temporary;
    temporary = arg1;
    arg1 = arg2;
    arg2 = temporary;
}

int main()
{
    int x = 100, y = 200;
    double p = 100.53, q = 435.54;
    char ch1 = 'A', ch2 = 'Z';

    cout << "See the original data here\n";
    cout << "x = " << x << "\ty = " << y << endl;
    cout << "p = " << p << "\tq = " << q << endl;
    cout << "ch1 = " << ch1 << "\t\tch2 = " << ch2 << endl;

    swapping(x, y);
    swapping(p, q);
    swapping(ch1, ch2);

    cout << "\n\nSee the Data after swapping here\n";
    cout << "x = " << x << "\ty = " << y << endl;
    cout << "p = " << p << "\tq = " << q << endl;
    cout << "ch1 = " << ch1 << "\t\tch2 = " << ch2 << endl;
    return 0;
}

Output: (shown as a screenshot in the original article)

2. Class Templates

A class template is similar: it is like a normal class with one key difference. Normally we declare a class so that it can work only with a defined data type, whereas a class template is designed in such a way that it is independent of the data types. Instead of creating a new class every time for a particular data type, it is better to define a generic class template that is compatible with most data types. Class templates help with code reusability, which makes programs faster and more efficient. The general syntax for defining a class template is:

template <class F>
class Class_Name
{
    ...
public:
    F variable;
    F function_name(F arg);
    ...
};

Here F is the template argument for the data type used; Class_Name can be anything you choose, and a member variable named variable and a function named function_name are defined inside the class. Here is a C++ program to demonstrate the class template:

Code:

#include <iostream>
using namespace std;

template <class F>
class Calci
{
private:
    F x, y;
public:
    Calci(F p, F q)
    {
        x = p;
        y = q;
    }
    void showresult()
    {
        cout << "The Numbers are: " << x << " and " << y << "." << endl;
        cout << "Addition is: " << add() << endl;
        cout << "Subtraction is: " << subtract() << endl;
        cout << "Product is: " << multiply() << endl;
        cout << "Division is: " << divide() << endl;
    }
    F add() { return x + y; }
    F subtract() { return x - y; }
    F multiply() { return x * y; }
    F divide() { return x / y; }
};

int main()
{
    Calci<int> intCalc(2, 1);
    Calci<float> floatCalc(2.4, 1.2);

    cout << "Int results:" << endl;
    intCalc.showresult();
    cout << endl << "Float results:" << endl;
    floatCalc.showresult();
    return 0;
}

Output: (shown as a screenshot in the original article)

3. Variadic Templates

Variadic templates (available since C++11) are the only templates that can take a variable number of arguments; the arguments are resolved at compile time and are type-safe. They are more flexible than the other two kinds, which can only take a fixed number of arguments. Here is a C++ program to demonstrate a variadic template:

Code:

#include <iostream>
#include <string>
using namespace std;

template<typename F>
F aggregate(F val)
{
    return val;
}

template<typename F, typename... Args>
F aggregate(F first, Args... args)
{
    return first + aggregate(args...);
}

int main()
{
    long total = aggregate(11, 72, 83, 78, 37);
    cout << "Total of long numbers = " << total << endl;

    string s1 = "G", s2 = "o", s3 = "o", s4 = "d";
    string s_concat = aggregate(s1, s2, s3, s4);
    cout << "Total of strings = " << s_concat;
}

Output: (shown as a screenshot in the original article)

aggregate is the variadic function, so we need a base function that implements the base case; on top of that we implement the variadic overload as the general case. Once you write the template for the function implementing the base case, the variadic overload handles the general case. This works exactly like recursion: the general case peels off one argument at a time until only the base case remains. The output we see is the aggregation of all the integers and strings passed in the above C++ code.

Conclusion

The templates feature plays a vital role in making a program efficient in terms of performance and memory space, because of code reusability. Template functions can also be easily overloaded, so you can define a cluster of classes and functions for handling multiple data types.
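Since C++17, the recursive base-case/general-case pair can often be replaced by a fold expression. A small sketch of the same aggregate written that way:

#include <iostream>
#include <string>
using namespace std;

// C++17 fold expression: (args + ...) expands to arg1 + (arg2 + (...))
template <typename... Args>
auto aggregate(Args... args)
{
    return (args + ...);
}

int main()
{
    cout << aggregate(11, 72, 83, 78, 37) << endl;  // 281
    cout << aggregate(string("G"), string("o"), string("o"), string("d")) << endl;  // Good
}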
https://www.educba.com/templates-in-c-plus-plus/
CC-MAIN-2020-29
en
refinedweb
activity_recognition_alt 0.1.7

Flutter Integration

import 'package:activity_recognition/activity_recognition.dart';

ActivityRecognition.activityUpdates()

Flutter help

For help getting started with Flutter, view our online documentation. For help on editing plugin code, view the documentation.

Changelog

0.1.4
- Upgrade Kotlin version
- Update Example

0.1.3
- Fix copy & paste bug

0.1.2
- Always clear shared preferences to trigger changed event
- Use named shared preferences to not interfere with the application itself
- Set the interval to 30 seconds
- Add Activity.empty()

0.1.0
- Unify platform code

0.0.2
- Fix stuff for pub.dartlang.org

0.0.1
- First working prototype for Android and iOS.

Example

Parts of the example's scaffolding were lost during extraction; the skeleton below is the standard plugin-example boilerplate reconstructed around the surviving StreamBuilder code:

import 'package:activity_recognition_alt/activity_recognition_alt.dart';
import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
      home: new Scaffold(
        appBar: new AppBar(
          title: const Text('Plugin example app'),
        ),
        body: new Center(
          child: StreamBuilder(
            builder: (context, snapshot) {
              if (snapshot.hasData) {
                Activity act = snapshot.data;
                return Text("Your phone is to ${act.confidence}% ${act.type}!");
              }
              return Text("No activity detected.");
            },
            stream: ActivityRecognitionAlt.activityUpdates(),
          ),
        ),
      ),
    );
  }
}

Use this package as a library

1. Depend on it. Add this to your package's pubspec.yaml file:

dependencies:
  activity_recognition_alt: ^0.1.7

2. Import it:

import 'package:activity_recognition_alt/activity_recognition_alt.dart';
https://pub.dev/packages/activity_recognition_alt
CC-MAIN-2020-29
en
refinedweb
import "net/smtp": //) } SendMail connects to the server at addr, switches to TLS if possible, authenticates with the optional mechanism a if possible, and then sends an email from address from, to addresses to, with message msg. net/smtp package are low-level mechanisms and provide no support for DKIM signing, MIME attachments (see the mime/multipart package), or other mail functionality. Higher-level packages exist outside of the standard library. //) }. CRAMMD5Auth returns an Auth that implements the CRAM-MD5 authentication mechanism as defined in RFC 2195. The returned Auth uses the given username and secret to authenticate to the server using the challenge-response mechanism.. type Client struct { // Text is the textproto.Conn used by the Client. It is exported to allow for // clients to add extensions. Text *textproto.Conn // contains filtered or unexported fields } A Client represents a client connection to an SMTP server. Dial returns a new Client connected to an SMTP server at addr. The addr must include a port, as in "mail.example.com:smtp". NewClient returns a new Client using an existing connection and host as a server name to be used when authenticating. Auth authenticates a client using the provided authentication mechanism. A failed authentication closes the connection. Only servers that advertise the AUTH extension support this function. Close closes the connection... Package smtp imports 10 packages (graph) and is imported by 3221 packages. Updated 2020-06-02. Refresh now. Tools for package owners.
https://godoc.org/net/smtp
CC-MAIN-2020-29
en
refinedweb
Icon Sub-Sets (2 comments)

GTK2 Themes (6 comments)

In the gtkrc, find:

  # Panel customization
  include "panel.rc"

and fix it to:

  # Panel customization
  #include "panel.rc"

Good luck! - Dec 08 2008

Start ccsm (CompizConfig Settings Manager):
- Check to select "Opacity, Brightness and Saturation".
- In the "Opacity" tab, click the "New" button below.
- Set "Opacity window values" = 90.
- Set "Opacity windows" = Dock | Menu | Tooltip | PopupMenu | DropdownMenu. You can copy Dock | Menu | Tooltip | PopupMenu | DropdownMenu and paste it into the "Windows" box.
- Click the "Close" button and close the ccsm window.

To make the menu border of the metacity theme transparent, start gconf-editor (in a terminal, run "gconf-editor"), go to the following keys and change their values:
- /apps/gwd/metacity_theme_active_opacity = 0.75
- /apps/gwd/metacity_theme_active_shade_opacity = checked
- /apps/gwd/metacity_theme_opacity = 0.75
- /apps/gwd/metacity_theme_shade_opacity = checked

If your distro is Ubuntu, you can try Ubuntu Tweak. - Dec 02 2008

Layer/Transparency/Color to Alpha... select the default white. I like those wallpapers. Or you can search with the keyword "intrepid". Cheers! - Nov 21 2008

But this is beta, and has some errors. I'm fixing them. - Nov 17 2008

GTK2 Themes (10 comments)

You could search and download it with Google. - Dec 08 2008

My english is bad :( - Dec 08 2008

Nautilus Scripts (7 comments)

My system: Ubuntu Intrepid. - Nov 20 2008

GTK2 Themes (19 comments)

GTK2 Themes (10 comments)

What's the metacity you've done? - Nov 07 2008
https://www.pling.com/u/liquidgik/
CC-MAIN-2020-29
en
refinedweb
State space model for a smooth seasonal effect.

Inherits From: LinearGaussianStateSpaceModel

tfp.sts.SmoothSeasonalStateSpaceModel(
    num_timesteps, period, frequency_multipliers, drift_scale,
    initial_state_prior, ...
)

(the remainder of the signature was truncated in extraction)

A smooth seasonal effect model is a special case of a linear Gaussian SSM. It is the sum of a set of "cyclic" components, with one component for each frequency:

frequencies[j] = 2. * pi * frequency_multipliers[j] / period

Each cyclic component contains two latent states, which we denote effect and auxiliary. The two latent states for component j drift over time via:

effect[t] = (effect[t-1] * cos(frequencies[j]) +
             auxiliary[t-1] * sin(frequencies[j]) +
             Normal(0., drift_scale))

auxiliary[t] = (-effect[t-1] * sin(frequencies[j]) +
                auxiliary[t-1] * cos(frequencies[j]) +
                Normal(0., drift_scale))

The auxiliary latent state only appears as a matter of construction and thus its interpretation is not particularly important. The total smooth seasonal effect is the sum of the effect values from each of the cyclic components. The parameters drift_scale and observation_noise_scale are each (a batch of) scalars. The batch shape of this Distribution is the broadcast batch shape of these parameters and of the initial_state_prior.

Mathematical Details

The smooth seasonal effect model implements a tfp.distributions.LinearGaussianStateSpaceModel with latent_size = 2 * len(frequency_multipliers) and observation_size = 1. The latent state is the concatenation of the cyclic latent states, which themselves comprise an effect and an auxiliary state. The transition matrix is a block diagonal matrix where block j is:

transition_matrix[j] = [[ cos(frequencies[j]), sin(frequencies[j])],
                        [-sin(frequencies[j]), cos(frequencies[j])]]

The observation model picks out the cyclic effect values from the latent state:

observation_matrix = [[1., 0., 1., 0., ..., 1., 0.]]
observation_noise ~ Normal(loc=0, scale=observation_noise_scale)

For further mathematical details please see [1].

Examples

A state space model with smooth daily seasonality on hourly data. In other words, each day there is a pattern which broadly repeats itself over the course of the day and doesn't change too much from one hour to the next. Four random samples from such a model can be obtained via:

from matplotlib import pylab as plt

ssm = SmoothSeasonalStateSpaceModel(
    num_timesteps=100,
    period=24,
    frequency_multipliers=[1, 4],
    drift_scale=0.1,
    initial_state_prior=tfd.MultivariateNormalDiag(
        scale_diag=tf.fill([4], 2.0)),
)

fig, axes = plt.subplots(4)

series = ssm.sample(4)

for series, ax in zip(series[..., 0], axes):
    ax.set_xticks(tf.range(ssm.num_timesteps, delta=ssm.period))
    ax.grid()
    ax.plot(series)

plt.show()

A comparison of the above with a comparable Seasonal component gives an example of the difference between these two components:

ssm = SeasonalStateSpaceModel(
    num_timesteps=100,
    num_seasons=24,
    num_steps_per_season=1,
    drift_scale=0.1,
    initial_state_prior=tfd.MultivariateNormalDiag(
        scale_diag=tf.fill([24], 2.0)),
)

References

[1]: Harvey, A. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press, 1990.
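To make the block structure concrete, here is a small NumPy sketch that builds the transition and observation matrices described above (it mirrors the formulas in the doc; it is not the library's internal code):

import numpy as np
from scipy.linalg import block_diag

def smooth_seasonal_matrices(period, frequency_multipliers):
    freqs = 2.0 * np.pi * np.asarray(frequency_multipliers) / period
    blocks = [np.array([[ np.cos(f), np.sin(f)],
                        [-np.sin(f), np.cos(f)]]) for f in freqs]
    transition = block_diag(*blocks)                        # latent_size x latent_size
    observation = np.tile([1.0, 0.0], len(blocks))[None, :]  # picks out each 'effect'
    return transition, observation

T, H = smooth_seasonal_matrices(period=24, frequency_multipliers=[1, 4])
# T.shape == (4, 4), H.shape == (1, 4), matching latent_size = 2 * 2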
https://www.tensorflow.org/probability/api_docs/python/tfp/sts/SmoothSeasonalStateSpaceModel
CC-MAIN-2020-29
en
refinedweb
Tagging is the act of putting labels on objects. There is a 1:n relation, meaning one object can have many labels applied to it.

Traditional systems that implement tags are usually backed by a "set" data structure: an unordered/unindexed collection. The key is the value. E.g: Prod, Dev, CC127638, Finance

Immediate limitation: you have to guess the key from the value. For the previous example, it may be obvious that we are talking about environment or lane for the Prod and Dev values, depending on your company's lingo. But how would you differentiate a cost center and an app owner? It is subject to interpretation. We need a key:value structure for better clarity.

On modern cloud platforms, tags tend to be at least key:value pairs, which make use of a "dictionary" data type. E.g: Environment:Prod, CostCenter:127638, Owner:Finance

On OCI, this is the first type of tags available, and it is called Freeform Tags. You can use it with arbitrary key:value pairs.

The second type of tags you can use on OCI is called Defined Tags. They add a namespace construct to tags. The data type becomes key:dict; the value is a dictionary itself, hence able to contain a nested key:value pair. This is particularly interesting to aggregate directly related tags together. E.g: Operations.Environment:Prod, Operations.Owner:Finance, Operations.State:Live, Finance.CostCenter:127368, Finance.Budget:378473

The syntax is now namespace.key:value

Another advantage of this new construct is the scheme definition: the keys are not arbitrary anymore, they are defined in advance. This effectively helps to prevent tag sprawl and misspelling. There is much more to Defined Tags on OCI, and we will explore it later. For now we can summarize the following about Tags & OCI:

- Freeform Tags are arbitrary key:value pairs,
- Defined Tags are a collection of tags regrouped as namespaces.

1️⃣ You should use Defined Tags almost every time, as they are more feature rich and allow better control and governance.

A good use case for Freeform Tags

Freeform Tags are best to use when you don't have Defined Tags ready yet, just after tenant creation for example. They can also be useful when you want to keep some tags independent from the global tagging strategy and/or avoid any circular dependency. How do you tag your tagging namespaces with a defined tag during an automated process? 😅 -> 🐔 + 🥚

2️⃣ Make it clear whether a component is created manually or by an automated process. For example, when deploying infrastructure with Terraform, you can « watermark » the resource with a specific freeform tag, same with other automation tools:

Terraformed: <value>
Ansibled: <value>
Hashipacked: <value>

<value> may be a timestamp dated from the last modification time. This « watermarking » Freeform Tag would just be absent if the resource is created manually. That's it: you can now quickly search for the presence or absence of your watermarks.

An alternative proposition: using more generic tags to label your IaC strategy:

Automation: Terraformed/Ansibled/xxx
ConfigMgmt: Chef/Puppet/Salt/Tower

This second solution may be more suited for a defined tag namespace. Additional metadata regarding the automation tool may be useful: for example, if the component is provisioned by a Terraform module, it doesn't hurt to add a Freeform Tag for that: TF_Module: xxx.

Using freeform tags for objects created by Terraform allows you to tag any resource right from the beginning, without having to rely on Tag Namespaces not being provisioned yet. A sketch of the watermarking idea is shown below.
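Here is what that watermark could look like in Terraform for an OCI resource (an illustrative sketch: the resource, names and module path are made up, and note that timestamp() changes on every run, which keeps the tag fresh but produces a diff on each apply):

# Hypothetical example of a Terraform-watermarked OCI compartment
resource "oci_identity_compartment" "demo" {
  compartment_id = var.tenancy_ocid
  name           = "demo"
  description    = "Demo compartment"

  freeform_tags = {
    "Terraformed" = timestamp()      # absent on manually created resources
    "TF_Module"   = "network/base"   # which module provisioned it
  }
}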
3️⃣ When editing an existing object with Terraform, for example the Default Security List automatically created with a VCN, leave a trail for any other user: Terraformed: Default Security Rules edited.

Beyond the use cases above, try to keep Freeform Tags usage really exceptional and use Defined Tags instead as much as possible. They offer a lot more features, like usage control, consistency, defaults, variables, and more to come. New features and innovation for tags come only to Defined Tags, not to Freeform Tags.

Follow me on twitter for more #oci and #IaaS related content.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/kraltwo/designing-your-oci-data-centre-on-the-tagging-options-1cb0
CC-MAIN-2020-29
en
refinedweb
Best Fit Program In C++ Best Fit Program in C Plus Plus. The most common types of algorithms used for this purpose are, best fit, first fit, worst fit, and nest fit. In the case of best fit memory allocation algorithm, the CPU allocates the memory block that best suits the demanded amount. For this, the CPU searches through all the empty slots to find the slot that best suits the demanded memory without much wastage. The memory management scheme is - Read Also – Best Fit program in JAVA Said as best fit as it incurs a minimum amount of memory wastage. The task is carried out with the help of an algorithm known as a Scheduling Algorithm. Program in C++ for Best Fit // C++ implementation code for next fit memory allocation scheme #include <bits/stdc++.h> using namespace std; // Method to assign memory to a block using the next fit memory allocation scheme void NextFit(int b_size[], int m, int p_size[], int n) { //code to store block id of a block to a process demanding memory int allocation[n], j = 0; // No process is assigned with memory at the initial phase memset(allocate, -1, sizeof(allocate)); // pick each process and find suitable blocks // according to its size ad assign to it for (int i = 0; i < n; i++) { // Condition to control memory search not from the beginning while (j < m) { if (b_size[j] >= p_size[i]) { // code to assign block j to p[i] process allocation[i] = j; // Reduce available memory in this block. blockSize[j] -= processSize[i]; break; } // mod m will help to traverse the free memory blocks from the first block when it reaches end j = (j + 1) % m; } } cout << "\nProcess Number\tProcess Size\tBlock Number \n"; for (int i = 0; i < n; i++) { cout << " " << i + 1 << "\t\t" << p_size[i] << "\t\t"; if (allocate[i] != -1) cout << allocate[i] + 1; else cout << "Not Allocated"; cout<< endl; } } // Driver program int main() { int b_size[] = { 5, 10, 20 }; int p_size[] = { 10, 20, 5 }; int m = sizeof(b_size) / sizeof(b_size[0]); int n = sizeof(p_size) / sizeof(p_size[0]); NextFit(b_size, m, p_size, n); return 0; } Output -
https://prepinsta.com/operating-systems/page-replacement-algorithms/best-fit/best-fit-in-c-plus-plus/
CC-MAIN-2022-21
en
refinedweb
smpp_realloc Last updated March 2020 Name smpp_realloc — Free message (*mem), and realloc new memtype_smpp memory Synopsis #include "modules/mobility/smpp/smpp_util.h" void * **smpp_realloc** ( | mem, | | | | size ); | | Description **Configuration Change. ** This feature is available starting from Momentum 3.2. Free the memory associated with mem and realloc new memtype_smpp memory specifying the size. - mem The buffer to be reallocated. - size The new memory size. On success this function returns a pointer to the allocated memory; on failure NULL is returned. It is legal to call this function in any thread. See Also Was this page helpful?
https://support.sparkpost.com/momentum/3/3-api/apis-smpp-realloc
CC-MAIN-2022-21
en
refinedweb
Python is a very versatile piece of software. It offers many choices for Web applications and can be used in thousands of third party modules. Python is also an effective tool for regression analysis. I think that machine learning, data mining, chemometrics and statistics are not the same thing but there are aspects that are common to all four topics. For example, a chapter about regression analysis (the techniques for estimating the numeric relationship among data elements) can be found in books on each of them. Some examples include Reference 1 chapter 6, Reference 2 chapter 13, Reference 3 chapter 2, and Reference 4 chapter 7. In the real world, sometimes there are problems with the fundamentals of linear algebra. There is also the idea that a higher explained variance or a higher correlation means a better model. Or that the only way is the use of a commercial spreadsheet. For these reasons, I think that it’s necessary to clarify some things with an introductory article. From a more technical point of view, for example, if I need to measure the concentration of a certain substance in an underground water sample, I must be capable of measuring a concentration less than 10 times the minimum concentration admitted. For example, if the contamination limit is 1.1 micrograms per litre, I must, at least, be capable of measuring a concentration equal to 0.11 micrograms per litre (these values depend on national laws). With a limit settled by the law, the choice of a regression model becomes even more important. That choice is important also in ion chromatography, because it’s better to speak of calibration rather than linearity. The calibration can’t always be represented by a regression of the first order, but can be represented sometimes, by a second degree curve. So the concept of linearity should be carefully considered, particularly with regard to the measurement for suppressed conductivity (see Reference 5). This problem is not considered in some methods in which the calibration is valid only if it’s linear, discarding a priori any other type of model. The toolbox This article is based on Mint 18 Xfce, Emacs 24.5, Geany and Anaconda. The last software can be freely downloaded from Then, from the terminal window, type bash Anaconda3-4.1.1-Linux-x86.sh and just follow the instructions on the screen. Because the packages PrettyTable and Seaborn are not available in the Anaconda distribution, I have easily installed them by typing the following code in a terminal window: cd /home/<your-username>/anaconda_4.1.1/bin easy_install prettytable easy_install seaborn My Emacs configuration for Python on Linux is almost the same as on Windows (see the September 2016 issue of OSFY). There is only one difference, because I have replaced the line given below: (add-to-list ‘exec-path “C:\\WinPython-32bit-3.4.4.2\\python-3.4.4”) … with the following ones: (setq python-shell-interpreter “/home/<your-username>/anaconda_4.1.1/bin/python” org-babel-python-command “/home/<your-username>/anaconda_4.1.1/bin/python”) With respect to Geany, I have settled only the ‘Execute’ command /home/<your username>/anaconda_4.1.1/bin/python “%f” Regression models The first way to build a polynomial model is by using matrix algebra. 
Let’s consider the following code, which is used instead of equations: w=xlrd.open_workbook(“filename.xls”).sheet_by_name(“Sheet1”) x,y=w.col_values(0),w.col_values(1) degree=1 weight=1/array(x)**2 f=zeros((w.nrows,degree+1)) for i in range(0,w.nrows): for j in range(0,degree+1): f[i,j]=x[i]**j q=diag(weight) fT=dot(transpose(f),q) fTf=dot(fT,f) # fTf=fTf+diag([0.001]*(degree+1)) fTy=dot(fT,y) coef=dot(inv(fTf),fTy) xx=linspace(min(x),max(x),1000) yp=polyval(coef[::-1],xx) In the example, the degree is equal to 1 (a linear model) and the weight is equal to 1/x2 (this is a weighted model). For a simple linear model, degree=1 and weight=1 (or weight=1/array(x)**0). For a simple quadratic model, degree=2 and weight=1. With some simple matrix calculations, it’s possible to know the coefficients coef of the model and then plot the curve xx vs yp. For fitting through zero, set weight=1/array(x)**0 and for j in range(1,degree+1); add a small constant, for example 0.001, to the diagonal of fTf. For a linear model, the predicted values (x values back calculated) are calculated with predicted=(y-intercept)/slope and, for a quadratic model, with predicted=(-b+sqrt(b**2-(4*a*delta)))/(2*a) in which delta=c-y. The a, b and c values are about the coefficient of the second degree term, the coefficient of the first degree term and the constant value, respectively. The accuracy is then calculated as accuracy=predicted*100/x. The following tables present a linear simple and a linear weighted model for the same experimental data. I’m not a big fan of the squared correlation coefficient, but, considering only that, I should choose the linear model simple, because it’s the one with the highest value of R2. I think it would be better to consider the ‘Accuracy’ column in both tables and choose the weighted model. In this particular example, we must also consider what is already mentioned in the ‘Introduction’ section about the values established by law. Carry out these calculations using a spreadsheet (and without using macros), As I think that’s unnecessarily complicated. Linear simple Intercept -9343.97333 Slope 16984.40793 R2 0.99860 +------+---------+-----------+-------------+ | X | Y | Predicted | Accuracy | +------+---------+-----------+-------------+ | 0.1 | 1633 | 0.65 | 646.30 | | 0.5 | 7610 | 1.00 | 199.64 | | 1.0 | 14930 | 1.43 | 142.92 | | 2.5 | 35347 | 2.63 | 105.25 | | 5.0 | 69403 | 4.64 | 92.73 | | 10.0 | 141643 | 8.89 | 88.90 | | 25.0 | 402754 | 24.26 | 97.05 | | 50.0 | 850161 | 50.61 | 101.21 | +-----+----------+-----------+-------------+ Linear weighted 1/x2 Intercept 123.19563 Slope 15010.23851 R2 0.99422 +--------+---------+--------------+-----------+ | X | Y | Predicted | Accuracy | +--------+---------+--------------+-----------+ | 0.1 | 1633 | 0.10 | 100.58 | | 0.5 | 7610 | 0.50 | 99.76 | | 1.0 | 14930 | 0.99 | 98.64 | | 2.5 | 35347 | 2.35 | 93.87 | | 5.0 | 69403 | 4.62 | 92.31 | | 10.0 | 141643 | 9.43 | 94.28 | | 25.0 | 402754 | 26.82 | 107.29 | | 50.0 | 850161 | 56.63 | 113.26 | +--------+---------+--------------+-----------+ Figure 1 shows the plot for another data set, a simulation more or less typical for a pharmacokinetics study. Another way is to build a certain matrix via vstack and then apply lstsq on it. Each A matrix created with vstack has the structure shown in the following examples. There are two important things: if the model is built to fit through zero, there is a column of zeros and, if the model is quadratic, there is a column of squared x. 
The coefficients of each model are a, b and c (if quadratic).

Linear fit simple

A=vstack([x,[1]*w.nrows]).T
a,b=linalg.lstsq(A,y)[0]

        +------+-----+
    A = | 0.1  | 1.0 |
        | 0.5  | 1.0 |
        | 1.0  | 1.0 |
        | 2.5  | 1.0 |
        | 5.0  | 1.0 |
        | 10.0 | 1.0 |
        | 25.0 | 1.0 |
        | 50.0 | 1.0 |
        +------+-----+

Linear fit through zero

A=vstack([x,[0]*w.nrows]).T
a,b=linalg.lstsq(A,y)[0]

        +------+-----+
    A = | 0.1  | 0.0 |
        | 0.5  | 0.0 |
        | 1.0  | 0.0 |
        | 2.5  | 0.0 |
        | 5.0  | 0.0 |
        | 10.0 | 0.0 |
        | 25.0 | 0.0 |
        | 50.0 | 0.0 |
        +------+-----+

Quadratic fit simple

A=vstack([x,[1]*w.nrows,array(x)**2]).T
a,b,c=linalg.lstsq(A,y)[0]

        +------+-----+--------+
    A = | 0.1  | 1.0 |   0.01 |
        | 0.5  | 1.0 |   0.25 |
        | 1.0  | 1.0 |    1.0 |
        | 2.5  | 1.0 |   6.25 |
        | 5.0  | 1.0 |   25.0 |
        | 10.0 | 1.0 |  100.0 |
        | 25.0 | 1.0 |  625.0 |
        | 50.0 | 1.0 | 2500.0 |
        +------+-----+--------+

Quadratic fit through zero

A=vstack([x,array(x)**2,[0]*w.nrows]).T
a,b,c=linalg.lstsq(A,y)[0]

        +------+--------+-----+
    A = | 0.1  |   0.01 | 0.0 |
        | 0.5  |   0.25 | 0.0 |
        | 1.0  |    1.0 | 0.0 |
        | 2.5  |   6.25 | 0.0 |
        | 5.0  |   25.0 | 0.0 |
        | 10.0 |  100.0 | 0.0 |
        | 25.0 |  625.0 | 0.0 |
        | 50.0 | 2500.0 | 0.0 |
        +------+--------+-----+

Probably, the simplest way is the use of polyfit, with the syntax coef=polyfit(x,y,degree).

For a better graphical presentation, I would like to say something about the use of LaTeX and about the couple Pandas + Seaborn, specifying also that the Pandas and Seaborn packages have more complex applications than the one presented here. LaTeX must be previously installed on your system; then just add rc("text",usetex=True) in your script. The result is shown in Figure 2 (not reproduced here). About Pandas: in the following example, a DataFrame with three columns is created. Then, x vs y data are plotted using Seaborn with regplot (lw=0 and marker="o") and, last, the linear model is plotted again with regplot but with some different options (lw=1 and marker=""). The result is shown in Figure 3 (not reproduced here), which is practically the same as with ggplot for R. Note that the ggplot plotting system also exists for Python. Another way to obtain a ggplot-like plot is with the use of a style sheet; for example, put style.use("ggplot") before the plot command. To know all the styles available, just use print(style.available).

import pandas as p
import seaborn as s

coef=polyfit(x,y,1)
yp=polyval(coef,x)
df=p.DataFrame({"Conc":x,"Abs":y,"Abs pred":yp})
s.regplot(x="Conc", y="Abs", data=df, ci=False,
          scatter_kws={"color":"g","alpha":0.5,"s":90},
          line_kws={"color":"w","alpha":0,"lw":0},
          marker="o")
s.regplot(x="Conc", y="Abs pred", data=df, ci=False,
          scatter_kws={"color":"w","alpha":0,"s":0},
          line_kws={"color":"k","alpha":1,"lw":1},
          marker="")

The data frame here is printed via prettytable:

+------+-------+----------+
| Conc |  Abs  | Abs pred |
+------+-------+----------+
| 0.05 | 0.046 |  0.053   |
| 0.10 | 0.093 |  0.092   |
| 0.20 | 0.171 |  0.172   |
| 0.25 | 0.217 |  0.212   |
| 0.50 | 0.418 |  0.412   |
| 1.00 | 0.807 |  0.811   |
+------+-------+----------+

I have never used the confidence band in practice, but there are several ways to calculate and plot it. Here, a calculation is proposed based on the t-distribution from scipy.stats, where ip is the part below (inferior) the model and sp the part above (superior) it. The data set is taken from Reference 7. A nice explanation about the confidence intervals is reported, for example, in Reference 2, pages 86-91.
from scipy import stats
n=w.nrows
coef=polyfit(x,y,1)
yp=polyval(coef,x)
xx=linspace(min(x),max(x),1000)
dx=sum((x-mean(x))**2)
dy=sqrt(sum((y-yp)**2)/(n-2))
dd=((xx-mean(x))**2/dx)
t=stats.t.ppf(0.95,n-2)
ip=coef[1]+coef[0]*xx-(t*dy*sqrt(1/n+dd))
sp=coef[1]+coef[0]*xx+(t*dy*sqrt(1/n+dd))
plot(x,y,"o",markerfacecolor="lightgreen",zorder=3)
plot(x,yp,"k-",zorder=3)
fill_between(xx,ip,sp,where=sp>=ip,edgecolor="b",facecolor="b",alpha=0.2,interpolate=True)

The StatsModels package
Another way to build a regression model is by using the StatsModels package in combination with the Pandas package. An example is shown in the following code. The values for x and y are read from an xls file, then the weight is defined as 1/x^2. Using the Pandas package, a DataFrame is defined for the couple x, y and a Series for the weights. Two types of regression are then calculated: OLS (Ordinary Least Squares, the one previously called the simple linear fit) and WLS (Weighted Least Squares). Last, both models are plotted with a simple plot. More information can be printed for the slope ols_fit.params[1], the intercept ols_fit.params[0], the r-squared ols_fit.rsquared, or a little report ols_fit.summary(), with the equivalents for the weighted model using ‘wls’ instead of ‘ols’. Two examples are shown in Figures 7 and 8. Further information, such as residuals and Cook’s distance, can be printed or plotted using resid, resid_pearson and get_influence(), respectively. A nice and large collection of examples is presented in Reference 8.

import pandas as p
import xlrd
from statsmodels.formula.api import ols,wls
w=xlrd.open_workbook("pk.xls").sheet_by_name("Sheet1")
x,y=w.col_values(0),w.col_values(1)
weight=1/array(x)**2
df=p.DataFrame({"Conc":x,"Resp":y})
w=p.Series(weight)
ols_fit=ols("Resp~Conc",data=df).fit()
wls_fit=wls("Resp~Conc",data=df,weights=w).fit()
figure(0)
plot(x,y,"o",markerfacecolor="lightgreen",zorder=3)
plot(df["Conc"],ols_fit.predict(),color="k",zorder=3)
figure(1)
plot(x,y,"o",markerfacecolor="lightgreen",zorder=3)
plot(df["Conc"],wls_fit.predict(),color="k",zorder=3)

Even for regression models, the Python language offers different ways to calculate the same things. Some of them have been presented here. I would like to say something about piecewise, or segmented, regression models. In Python there are, for example, some techniques based on numpy.piecewise and scipy.optimize, but I’m not completely satisfied by them. For a segmented regression, as previously discussed in the article ‘Get analytical with R’ published in the September 2015 issue of OSFY, I prefer to use R with the package segmented. An interesting Python port called pysegmented is available online but, at present, it’s only a partial port. More complex applications can be done with the Scikit-learn package (Reference 9), but this is another story, perhaps for another article in the future.
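As a footnote to the remark above about numpy.piecewise and scipy.optimize, a minimal sketch of a two-segment linear fit based on scipy.optimize.curve_fit could look like the following. The breakpoint parameterization and the sample data here are illustrative, not taken from this article.

from numpy import array, linspace, where
from scipy.optimize import curve_fit

def two_segments(x, x0, y0, k1, k2):
    # Two straight lines joined continuously at the breakpoint x0
    return where(x < x0, y0 + k1*(x - x0), y0 + k2*(x - x0))

x = array([0.5, 1, 2, 3, 4, 5, 6, 7, 8])
y = array([1.1, 2.0, 4.1, 6.0, 7.9, 8.4, 8.9, 9.3, 9.8])
# p0 gives starting guesses for x0, y0, k1 and k2
p, cov = curve_fit(two_segments, x, y, p0=[4, 8, 2, 0.5])
xx = linspace(min(x), max(x), 1000)
yy = two_segments(xx, *p)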
https://www.opensourceforu.com/2016/12/introduction-regression-models-python/
CC-MAIN-2022-21
en
refinedweb
Targeting multiple tiles

In the previous example, the code targeted only a single tile. Most real applications will instead target two or more tiles. It is not possible to target multiple tiles using a single C application as the tiles are complete, separate processors. Instead each tile runs its own application, and each tile's application can communicate with other tiles using the channels documented previously. To aid development of multi-tile applications, the XMOS tools allow the use of a special 'mapping file' which can be used to specify an entry-point for each tile in a network. This is instead of specifying a main function in C - which is not allowed when a mapping file is used.

Warning

For historical reasons, the format of the mapfile is C-like. However, this format should not be treated as C source and is likely to be deprecated in future versions of the tools and replaced with a purely declarative format. Developers are therefore recommended to avoid any procedural code within a mapfile.

Using a mapfile

To map code onto both of the tiles on an XCORE-200-EXPLORER, it is necessary to describe that mapping in a file which we will call mapfile.xc. An example is shown below:

#include <platform.h>

extern "C" {
void main_tile0();
void main_tile1();
}

int main(void) {
  par {
    on tile[0]: main_tile0();
    on tile[1]: main_tile1();
  }
  return 0;
}

This mapfile references the two functions in main.c:

#include <stdio.h>

void main_tile0() {
  printf("Hello from tile 0\n");
}

void main_tile1() {
  printf("Hello from tile 1\n");
}

Now build and execute this multi-tile application on real hardware to see the printed output:

$ xcc -target=XCORE-200-EXPLORER mapfile.xc main.c
$ xrun --io a.xe
Hello from tile 0
Hello from tile 1

Summary

In this example, you have written a mapfile using the declarative components of the XC language to deploy two C functions onto the two tiles of an XCORE-200-EXPLORER.

See also

At this point, you might proceed to the next topic, or you might choose to explore this example further:
https://www.xmos.ai/documentation/XM-014363-PC-4/html/tools-guide/quick-start/multi-tile.html
CC-MAIN-2022-21
en
refinedweb
Get information about a file or directory, given a path

#include <sys/stat.h>

int stat( const char * path,
          struct stat * buf );

int stat64( const char * path,
            struct stat64 * buf );

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

The stat() and stat64() functions obtain information about the file or directory referenced in path. This information is placed in the stat structure located at the address indicated by buf; its members describe, among other things, the file's type, access permissions, ownership, size, and timestamps.

The access permissions for the file or directory are specified as a combination of bits in the st_mode field of a stat structure. These bits are defined in <sys/stat.h>, and include the standard POSIX permission bits: S_IRUSR, S_IWUSR and S_IXUSR (read, write and execute/search permission for the owner, with S_IRWXU as their mask); S_IRGRP, S_IWGRP and S_IXGRP (the same permissions for the group, masked by S_IRWXG); and S_IROTH, S_IWOTH and S_IXOTH (the same permissions for others, masked by S_IRWXO). Further bits define miscellaneous permissions used by other implementations.

The bits S_ISUID (set user ID on execution), S_ISGID (set group ID on execution) and S_ISVTX (sticky bit) are also encoded in the st_mode field.

The symbolic names for the file-type values of st_mode are S_IFBLK, S_IFCHR, S_IFDIR, S_IFIFO, S_IFLNK, S_IFREG and S_IFSOCK, with S_IFMT as the mask for the type bits; the macros S_ISBLK(), S_ISCHR(), S_ISDIR(), S_ISFIFO(), S_ISLNK(), S_ISREG() and S_ISSOCK() test a mode for each file type.

The st_rdev member of the stat structure is a device ID that consists of the device's major and minor numbers; <sys/stat.h> also provides macros that manipulate device IDs.

Classification: stat() is POSIX 1003.1; stat64() is Large-file support

See also: errno, fstat(), fstat64(), lstat()
http://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/s/stat.html
CC-MAIN-2022-21
en
refinedweb
Static Forms Pages Plugin

The Static Forms Pages Plugin intercepts all form submissions which have the data-static-form-name attribute set. This allows you to take action on these form submissions by, for example, saving the submission to KV.

Installation

npm install @cloudflare/pages-plugin-static-forms

Usage

functions/_middleware.ts

import staticFormsPlugin from "@cloudflare/pages-plugin-static-forms";

export const onRequest: PagesFunction = staticFormsPlugin({
  respondWith: ({ formData, name }) => {
    const email = formData.get('email');
    return new Response(`Hello, ${email}! Thank you for submitting the ${name} form.`);
  }
});

public/sales-enquiry.html

<body>
  <h1>Sales enquiry</h1>
  <form data-
    <label>Email address <input type="email" name="email" /></label>
    <label>Message <textarea name="message"></textarea></label>
    <button type="submit">Submit</button>
  </form>
</body>

The Plugin takes a single argument, an object with a respondWith property. That property is a function which takes an object with a formData property (the FormData instance) and a name property (the name value of your data-static-form-name attribute). It should return a Response or Promise of a Response. It is in this respondWith function that you can take action such as serializing the formData and saving it to a KV namespace.

The method and action attributes of the HTML form do not need to be set. The Plugin will automatically override them to allow it to intercept the submission.
https://developers.cloudflare.com/pages/platform/functions/plugins/static-forms/
CC-MAIN-2022-21
en
refinedweb
Changeset: 0:f519dff5c6a7 Child: 1:cc428f427838

--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/AltBeaconService.h	Tue Feb 03 18:23:07 2015 +0000
@@ -0,0 +1,69 @@
+/* mbed Microcontroller Library
+ * Copyright (c) 2006
+ */
+#ifndef __BLE_IBEACON_SERVICE_H__
+#define __BLE_IBEACON_SERVICE_H__
+
+#include "BLEDevice.h"
+
+/**
+* @class iBeaconService
+* @brief iBeacon Service. This service sets up a device to broadcast advertising packets to mimic an iBeacon<br>
+*/
+
+class AltBeaconService
+{
+public:
+    AltBeaconService(BLEDevice &_ble, uint16_t mfgID, uint8_t beaconID[20], int8_t refRSSI, uint8_t mfgReserved = 0x00):
+        ble(_ble)
+    {
+        data.mfgID = ((mfgID<<8) | (mfgID >>8));
+        if(refRSSI > 0){refRSSI = 0;} // refRSSI can only be 0 to -127, smash everything above 0 to zero
+        data.refRSSI = refRSSI;
+        data.beaconCode = 0xACBE;
+        data.mfgReserved = mfgReserved;
+
+        // copy across beacon ID
+        for(int x=0; x<sizeof(data.beaconID); x++) {
+            data.beaconID[x] = beaconID[x];
+        }
+
+        // Set up alt beacon
+        ble.accumulateAdvertisingPayload(GapAdvertisingData::BREDR_NOT_SUPPORTED | GapAdvertisingData::LE_GENERAL_DISCOVERABLE );
+        // Generate the 0x1BFF part of the Alt Prefix
+        ble.accumulateAdvertisingPayload(GapAdvertisingData::MANUFACTURER_SPECIFIC_DATA, data.raw, sizeof(data.raw));
+
+        // Set advertising type
+        ble.setAdvertisingType(GapAdvertisingParams::ADV_NON_CONNECTABLE_UNDIRECTED);
+    }
+
+public:
+    union {
+        uint8_t raw[26]; // AltBeacon advertisement data
+        struct {
+            uint16_t mfgID; // little endian representation of manufacturer ID
+            uint16_t beaconCode; // Big Endian representation of 0xBEAC
+            uint8_t beaconID[20]; // 20byte beacon ID, usually 16byte UUID w/ remainder used as necessary
+            int8_t refRSSI; // 1 byte signed data, 0 to -127
+            uint8_t mfgReserved; // reserved for use by manufacturer to implement special features
+        };
+    } data;
+
+private:
+    BLEDevice &ble;
+
+};
+
+#endif //__BLE_IBEACON_SERVICE_H__
https://os.mbed.com/teams/Bluetooth-Low-Energy/code/BLE_AltBeacon/diff/f519dff5c6a7/AltBeaconService.h/
CC-MAIN-2022-21
en
refinedweb
Hey all, I have been busy building my own chess game and as of now I'm a bit stumped. You see, I have an object Board; this object will be initiated and it will create a new frame with various panels, one of them being the panel which holds the squares of the board. Inside that panel I create a grid of panels, with pictures as backgrounds (blocks for the chess board, either black or white), and to each of those panels I add a JLabel, which in the future will hold the image of the piece on the board. Now, when creating the panels I labeled them so I may know what the co-ordinates of the piece are, i.e. A-H and 1-8. I label them using the setName() method; each of the ImagePanels and labels is declared globally. The problem is that I have a method - listLabelNames() - that will list the names of the labels/panels, i.e. A1-H8. When I call the method from within the instance after the frame has added all its components, the method works fine and prints out the array of the panel/label names. However, when I initiate Board and call the listLabelNames() method from the object which initiated Board, the array returns a null pointer. My thoughts are that in the calling class the statement which creates a new instance of Board doesn't wait until the instance/frame is fully created... and then when I call listLabelNames() the array isn't yet filled. I can't seem to find an answer/solution:

package chess;

import java.awt.*;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;

/**
 * Board.java
 * Purpose: This will set up the Board for the game to be played,
 * ,passed by the Game.java, like setting up appropriate board style, setting
 * pieces in correct place and style according to colour chosen by Player etc,
 * Also Board class shares calling classes instance as the calling class has the
 * necessary methods needed to check if the player move is acceptable, and
 * enforce rules etc on actions like moving a piece from one block to another
 *
 * @author David Kroukamp
 * @version 1.0 3/11/2012
 */
public class Board extends Main {

    private Image boardImage1;
    private Image boardImage2;
    private JPanel centerPanel = new JPanel();
    private JPanel southPanel = new JPanel();
    private JPanel eastPanel = new JPanel();
    private JPanel westPanel = new JPanel();
    private JLabel[] labels = new JLabel[64];
    private ImagePanel[] panels = new ImagePanel[64];

    public Board(Image boardImage1, Image boardImage2) {
        this.boardImage1 = boardImage1;
        this.boardImage2 = boardImage2;
        //Schedule a job for the event-dispatching thread:
        //creating and showing this application's GUI.
        javax.swing.SwingUtilities.invokeLater(new Runnable() {

            @Override
            public void run() {
                createAndShowGUI();
            }
        });
    }

    /**
     * Create the GUI and show it; for thread safety, this method should be
     * invoked from the event-dispatching thread.
     */
    private void createAndShowGUI() {
        //Create and set up the window with title.
        setTitle(APP_NAME + " " + VERSION_ID + " by " + AUTHOR);
        setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
        //Set up the content pane.
        addComponentsToPane(getContentPane());
        //Size and display the window.
        setSize(800, 600);
        //center JFrame.
        setLocationRelativeTo(null);
        //makes frame non-resizable
        setResizable(false);
        //shows the frame
        setVisible(true);
    }

    /**
     * Adds all the necessary components to the content pane of the JFrame, and
     * adds appropriate listeners to components. 
     */
    private void addComponentsToPane(Container contentPane) {
        BorderLayout borderLayout = new BorderLayout(10, 10);
        GridLayout gridLayout = new GridLayout(8, 8);
        contentPane.setLayout(borderLayout);
        centerPanel.setLayout(gridLayout);
        addLabelsToSouthPanel();
        addLabelsToWestPanel();
        addPanelsAndLabels();
        contentPane.add(centerPanel, BorderLayout.CENTER);
        contentPane.add(southPanel, BorderLayout.SOUTH);
        contentPane.add(eastPanel, BorderLayout.EAST);
        contentPane.add(westPanel, BorderLayout.WEST);
        listLabelNames();//works when I call from here, prints fine
    }

    private void addLabelsToSouthPanel() {
        GridLayout gridLayout = new GridLayout(0, 8);
        southPanel.setLayout(gridLayout);
        JLabel[] lbls = new JLabel[8];
        String[] label = {"A", "B", "C", "D", "E", "F", "G", "H"};
        for (int i = 0; i < 8; i++) {
            lbls[i] = new JLabel(label[i] + "");
            southPanel.add(lbls[i]);
        }
    }

    private void addLabelsToWestPanel() {
        GridLayout gridLayout = new GridLayout(8, 0);
        westPanel.setLayout(gridLayout);
        JLabel[] lbls = new JLabel[8];
        int[] num = {8, 7, 6, 5, 4, 3, 2, 1};
        for (int i = 0; i < 8; i++) {
            lbls[i] = new JLabel(num[i] + "");
            westPanel.add(lbls[i]);
        }
    }

    private void addPanelsAndLabels() {
        addPanelsAndImages();
        for (int i = 0; i < panels.length; i++) {
            labels[i] = new JLabel();
            labels[i].setName(panels[i].getName());//used to know the postion of the label on the board
            panels[i].add(labels[i]);
            centerPanel.add(panels[i]);
            //System.out.println(panels[i].getName());//works when I call from here, prints fine
            //System.out.println(labels[i].getName());//works when I call from here, prints fine
            System.out.println("done");
        }
    }

    private void addPanelsAndImages() {
        int count = 0;
        String[] label = {"A", "B", "C", "D", "E", "F", "G", "H"};
        int[] num = {8, 7, 6, 5, 4, 3, 2, 1};
        for (int row = 0; row < 8; row++) {
            for (int col = 0; col < 8; col++) {
                if (row % 2 == 0) {
                    if ((col + row) % 2 == 0) {
                        panels[count] = new ImagePanel(boardImage1);
                    } else {
                        panels[count] = new ImagePanel(boardImage2);
                    }
                } else {
                    if ((col + row) % 2 == 0) {
                        panels[count] = new ImagePanel(boardImage1);
                    } else {
                        panels[count] = new ImagePanel(boardImage2);
                    }
                }
                panels[count].setName(label[col] + " " + num[row]);
                count++;
            }
        }
    }

    //method sets image of a label at a certain position in the board according to the block name i.e D4
    public void listLabelNames() {
        for (int s = 0; s < labels.length; s++) {
            System.out.println(labels[s].getName());
        }
    }

    //nested class used to set the background of frame contentPane
    class ImagePanel extends JPanel {

        private Image image;

        /**
         * Default constructor used to set the image for the background for the
         * instance
         */
        public ImagePanel(Image img) {
            image = img;
        }

        @Override
        protected void paintComponent(Graphics g) {
            //draws image to background to scale of frame
            g.drawImage(image, 0, 0, null);
        }
    }
}

And in the calling class I call Board like this:

public void startGame() {
    Image boardImage1 = new ImageIcon(boardImages + "wBlock.jpg").getImage();
    Image boardImage2 = new ImageIcon(boardImages + "bBlock.jpg").getImage();
    Board board = new Board(boardImage1, boardImage2);
    board.listLabelNames();//returns null pointer here but not when called from within the instance
    /*
    while(checkmate!=true||stalemate!=true||surrender!=true) {
    }
    */
}
https://www.daniweb.com/programming/software-development/threads/418508/object-instances-of-frames-with-methods-trouble
CC-MAIN-2022-21
en
refinedweb
I have today's nightly build downloaded onto my computer and the command window for the system keeps saying:

Traceback (most recent call last): File “”, line 1, in File “C:/Users/salewis/AppData/Roaming/NA-MIC/Extensions-26293/Chest_Imaging_Platform/lib/Slicer-4.7/qt-scripted-modules/CIP_LesionModel.py”, line 17, in from FeatureWidgetHelperLib import FeatureExtractionLogic ImportError: cannot import name FeatureExtractionLogic loadSourceAsModule - Failed to load file “C:/Users/salewis/AppData/Roaming/NA-MIC/Extensions-26293/Chest_Imaging_Platform/lib/Slicer-4.7/qt-scripted-modules/CIP_LesionModel.py” as module “CIP_LesionModel” ! Fail to instantiate module "CIP_LesionModel" Failed to obtain reference to ‘qSlicerAppMainWindow’

It also causes a window to show up saying:

There are other DICOM listeners running. Do you want to end them?

Not sure if the second is a big deal or not, but this is upon startup and it's never done it before. Finally, I have been running the module openCAD and using the heterogeneityCAD, but every time it is causing the program to become unresponsive.
https://discourse.slicer.org/t/strange-occurances-on-load-and-timeout/914
CC-MAIN-2022-21
en
refinedweb
<fx:Style>
@namespace mx "library://ns.adobe.com/flex/mx";
mx|Application { backgroundImage: Embed(source="Assets.swf", symbol='Logo') }
</fx:Style>

borderMetrics:EdgeMetrics [read-only] Returns an EdgeMetrics object for the border that has four properties: left, top, right, and bottom. The value of each property is equal to the thickness of one side of the border, in pixels. Implementation: public function get borderMetrics():EdgeMetrics

layoutDirection:String Implementation: public function get layoutDirection():String public function set layoutDirection(value:String):void

measuredHeight:Number [read-only] Implementation: public function get measuredHeight():Number

measuredWidth:Number [read-only] Implementation: public function get measuredWidth():Number

public function SpriteAsset() Constructor.
https://help.adobe.com/nl_NL/FlashPlatform/reference/actionscript/3/mx/core/SpriteAsset.html
CC-MAIN-2022-21
en
refinedweb
Benders Decomposition using Callback (Ongoing)

Hi, We are trying to implement Benders Decomposition for a fixed charge facility location-allocation problem given in Daskin. We have decomposed the problem into master and sub problems and then tried to insert the capacity constraint as a lazy cut using MIPSOL. It is working fine and giving me the optimal solution. But, I am not sure whether the implementation part is correct or not. We are getting the objective value and bound for the considered minimization problem through the following commands.

def mycallback(model, where):
    if where == GRB.Callback.MIPSOL:
        obj = model.cbGet(GRB.Callback.MIPSOL_OBJ)
        bound = model.cbGet(GRB.Callback.MIPSOL_OBJBND)

Then, we are inserting the lazy constraint via model.cbLazy using the feedback from the sub problem. The objective and bound lists:

Obj list-->> [8550, 8550, 8900, 8900, 23780, 31345, 27745, 28350, 30460, 30037, 30246, 28450, 29870, 27760, 29330, 29280]

Bound list--->> [-1e+100, -1e+100, -1e+100, 0.0, 20780.0, 20780.0, 20780.0, 24935.995627686796, 24935.995627686796, 25285.145721177472, 25820.18344674822, 26915.25982256021, 26915.25982256021, 27093.500000000004, 27093.500000000007, 27169.56521739131]

For the minimization problem, the obj value should be decreasing continuously in each iteration, but this is not the case here. As we can see in the Obj list, the obj value is not decreasing continuously, but fluctuating. Please suggest whether this behavior is common or we are making some mistakes during implementation. The code and the log file of the solution are attached through Dropbox.

Thanks and Regards
Ankit Chouksey

Hi Ankit, Could you please edit your question and properly format your code? See Posting to the Community Forum for help on how to do that. And please try summarizing the output - currently, this is just a long wall of text and your actual question is hard to understand.

Thanks, Matthias

Dear sir, I have modified the question as suggested.

Thanks Ankit

Hi Ankit, I recommend testing your code on a small toy problem so you can inspect and analyze every single step. One thing I noticed is that you are using a pretty old Gurobi version. You should definitely update to the latest version to get all improvements and bug fixes developed in the meantime.

Best regards, Matthias

Dear sir, Thank you very much for the reply. I have tried to run the code with the updated version Gurobi 9.5 and I am still getting the same results. We are solving a very small example problem on the "Capacitated Fixed Charge problem" given on page 331 of the book "Network and Discrete Location" by Daskin. We have modelled this problem in Python and solved it using Gurobi 9.5. The code is given below. We are getting an optimal solution in seconds. 
import gurobipy as gp
from gurobipy import *

Supply_nodes = ['S1', 'S2', 'S3', 'S4']
Demand_nodes = ['D1', 'D2','D3','D4','D5','D6','D7','D8']
Fixed_cost = {'S1':4500, 'S2':4400, 'S3':4250, 'S4':4250}
Capacity = {'S1':600, 'S2':700, 'S3':500, 'S4':550}
Demand = {'D1':100, 'D2':150, 'D3':175, 'D4':125, 'D5':180, 'D6':140, 'D7':120, 'D8':160}
Transport_input = [26, 27, 28, 17, 13, 19, 21, 19, 19, 22, 12, 26, 11, 22, 22, 24, 17, 15, 13, 15, 25, 26, 27, 23, 20, 19, 23, 26, 21, 16, 23, 26]
Transport_cost = {}
X = 0
for i in Demand_nodes:
    for j in Supply_nodes:
        Transport_cost[(i,j)] = Transport_input[X]
        X = X + 1

m=gp.Model()
x_j = m.addVars(Supply_nodes,vtype=GRB.BINARY,name='x_j')
y_ij = m.addVars(Demand_nodes, Supply_nodes, vtype=GRB.CONTINUOUS,name='Y_ij')
m.setObjective(sum(Fixed_cost[j]*x_j[j] for j in Supply_nodes) + sum(Transport_cost[i,j]*y_ij[i,j] for i in Demand_nodes for j in Supply_nodes))
for i in Demand_nodes:
    m.addConstr(sum(y_ij[i,j] for j in Supply_nodes) == Demand[i])
for j in Supply_nodes:
    m.addConstr(sum(y_ij[i,j] for i in Demand_nodes) <= Capacity[j]*x_j[j])
m.optimize()
m.printAttr('X')

After running this code, we are getting an optimal solution, i.e., 29280. Now, we are trying to solve the same problem with Benders decomposition using a callback, which is discussed in the next comment.

Regards Ankit

Dear sir, We are trying to solve a very small example "Capacitated Fixed Charge problem" (discussed in the earlier comment) using Benders decomposition. We have decomposed the problem into a master problem and a sub-problem.

Master problem----->>

##### Master problem ##########
mp = gp.Model()
x_j_mp = mp.addVars(Supply_nodes,vtype=GRB.BINARY,name='x_j_mp')
D = mp.addVar(vtype=GRB.CONTINUOUS,name='D')
mp.setObjective(sum(Fixed_cost[j]*x_j_mp[j] for j in Supply_nodes) + D)
mp.addConstr(sum(Capacity[j]*x_j_mp[j] for j in Supply_nodes) >= sum(Demand[i] for i in Demand_nodes))
mp._vars = x_j_mp

Sub-problem----->>

##### Sub problem ##########
sp = gp.Model()
U_i = sp.addVars(Demand_nodes, lb=-1e20, ub=1e20, vtype=GRB.CONTINUOUS,name='U_i')
W_j = sp.addVars(Supply_nodes, vtype=GRB.CONTINUOUS, name='W_j')
for i in Demand_nodes:
    for j in Supply_nodes:
        sp.addConstr( U_i[i] - W_j[j] <= Transport_cost[i,j])

Further, we are using a callback function for inserting the lazy constraint. 
LB = 0
UB = 1000000000
UB_temp = 0
X_val = {}
U_val = {}
W_val = {}
Obj_list= []
bound_list = []
iteration = 0
global iterNum
iterNum = 0

def mycallback(model, where):
    if where == GRB.Callback.MIPSOL:
        print("****MIP sol callback*****")
        global iterNum
        print ("Going for BD iter ", iterNum)
        # nodecnt = mp.cbGet(GRB.Callback.MIPSOL_NODCNT )
        obj = model.cbGet(GRB.Callback.MIPSOL_OBJ)
        bound = model.cbGet(GRB.Callback.MIPSOL_OBJBND)
        print("Best known LOWER bound = ", bound)
        print("****************************************")
        print("OBJ VALUE = ", obj)
        print("bound = ",bound)
        print("****************************************")
        Obj_list.append(round(obj))
        bound_list.append(bound)
        vars_val = {}
        vars_val = model.cbGetSolution(model._vars)
        print(vars_val)
        D_val = 0
        D_val = model.cbGetSolution(D)
        print("D var value = ", D_val)
        print(X_val)
        for j in Supply_nodes:
            X_val[j] = vars_val[j]
            if X_val[j] == 1:
                print("Open = ", j)
        print(X_val)
        print("SP Sol------>>>>>")
        sp.setObjective(sum(Demand[i]*U_i[i] for i in Demand_nodes) - sum(Capacity[j]*X_val[j]*W_j[j] for j in Supply_nodes), GRB.MAXIMIZE)
        sp.update()
        sp.optimize()
        for i in Demand_nodes:
            U_val[i] = U_i[i].x
        for j in Supply_nodes:
            W_val[j] = W_j[j].x
        UB_temp = 0
        UB_temp = sum(Fixed_cost[j]*X_val[j] for j in Supply_nodes) + sp.ObjVal
        print("UB Temp = ", UB_temp)
        D_RHS = 0
        D_RHS = (sum(Demand[i]*U_val[i] for i in Demand_nodes) - sum(Capacity[j]*X_val[j]*W_val[j] for j in Supply_nodes))
        print("D var value = ", D_val)
        print("D_RHS = ", D_RHS)
        if D_val < D_RHS:
            model.cbLazy(D >= sum(Demand[i]*U_val[i] for i in Demand_nodes) - sum(Capacity[j]*model._vars[j]*W_val[j] for j in Supply_nodes))
        iterNum = iterNum + 1

mp.Params.lazyConstraints = 1
mp.Params.PreCrush = 1
mp.Params.Threads = 1
mp.optimize(mycallback)
print(time.time() - t0)
print("----------------------------")
print("----------------------------")
mp.printAttr('X')
print("Obj list-->>",Obj_list)
print("Bound list--->>",bound_list)

After running this code, we are getting an optimal solution, i.e., 29280, which is the same as what we were getting in the original formulation. The upper bound and lower bound values which we are getting in each iteration are given below.

Obj list-->> [8650, 8650, 8900, 8900, 23880, 31445, 27845, 28450, 30560, 30074, 30246, 28450, 29870, 27860, 29430, 29280]

Bound list--->> [-1e+100, -1e+100, -1e+100, 0.0, 20880.0, 20880.0, 20880.0, 25008.7926540619, 25008.7926540619, 25342.559303459075, 25841.913611781845, 26951.888466413187, 26951.888466413187, 27143.500000000004, 27143.500000000007, 27605.000000000004]

Now, my query is: for the minimization problem, the obj value (UB) should be decreasing continuously in each iteration, but this is not the case here. As we can see in the Obj list, the obj value is not decreasing continuously, but fluctuating. Sir, I have analyzed each iteration of Benders, but I am still not able to validate that my callback implementation is correct due to the unusual behavior of the UB. My request is that you please check whether I am implementing the callback correctly or not, and please suggest whether this behavior is common or we are making some mistakes during implementation. The code and the log file of the solution are attached through Dropbox. We will be very grateful to you for your response.

Thanks and Regards
Ankit Chouksey
https://support.gurobi.com/hc/en-us/community/posts/4415191730321-Benders-Decomposition-using-Callback?page=1#community_comment_4415701298321
CC-MAIN-2022-21
en
refinedweb
OAuth 2 has emerged as the industry standard for social applications and third-party authentication. As a result, you may concentrate on learning and implementing it to support many social authentication providers. The standard OAuth 2 providers include Facebook, Google, GitHub, and Twitter. Authentication is the activity of establishing whether someone or something is, indeed, what it claims to be. From a general perspective, the authentication system checks if the user credentials provided during login match the respective values stored in the application’s record – mostly the database – for the given user attempting to log in to the application. The password is referred to as an authentication factor, and it should be known only by the user trying to log in.

Django authentication with Twitter

As of this writing, there are numerous kinds of authentication, for instance, multifactor authentication (MFA), such as two-factor authentication, where the user can provide a unique RSA key on top of their password. Others can even use their iris or fingerprints.

How does Django perform authentication?

Django comes with a user authentication system that allows administering and managing user and group permissions and cookie-based sessions. This system addresses both authentication and authorization, where the latter permits an authenticated user to perform certain tasks and access specific data within your application. Through Django authentication, parameters are automatically set up when you create your Django project using the following command.

django-admin startproject

The user object is the heart of the Django authentication system. It’s through the User object that access to your site is managed. This user object has attributes such as "email", "username", "is_active", "last_login", and "password". It also has methods like has_perm(), get_username(), check_password(). Anytime you make permission changes in Django, the following command must be run to propagate those changes to the database.

python3 manage.py migrate

The basic syntax to see if a user has permissions to the data of a given application would be:

user.has_perm("my_app.view_my_model")

Users get logged in through the login() function and logged out through the logout() function. Further, Django supplies mechanisms to require a user to be logged in, such as the @login_required decorator. In the case of password management, Django, by default, uses the PBKDF2 algorithm (a cryptographic key derivation function) with a SHA256 hash, a password stretching mechanism recommended by NIST. It is usually sufficient for most users since it’s very secure and needs a lot of computing time to break. In addition, you can use bcrypt and Argon2 with Django by installing their libraries using pip. You may also consider using Django password validators to ensure sufficiently strong passwords are used – these kinds of settings are made in the settings.py file under the section AUTH_PASSWORD_VALIDATORS.

The OAuth 2 process

OAuth 2 was created to be a web authentication protocol. It isn’t the same as a network authentication protocol because it presumes you have HTML rendering and browser redirection capabilities. That is a disadvantage for a JSON-based API, but we always come up with workarounds. The steps in this article assume you are developing a standard server-side website.

The OAuth 2 Flow on the Server

The first phase takes place entirely outside of the application flow.
In this step, the project owner will register each OAuth 2 provider for which you require logins. They will supply the OAuth 2 provider with a callback URI during this registration, where their application will be ready to receive requests. As a result, they get a client key and a client secret in exchange. These tokens are used to confirm login requests throughout the authentication procedure. The flow starts when your application generates a page with a button like “Log in with Facebook” or “Sign in with Google.” In essence, these are nothing more than simple links, each of which goes to a URL similar to this (the domain and the upper-case placeholders here are illustrative):

https://oauth2provider.example/authorize?response_type=code&client_id=CLIENT_KEY&redirect_uri=CALLBACK_URI&scope=profile+email

In the above case, you have submitted the client key and redirect URI, but no secrets are shared. In exchange, you’ve requested an authorization code and access to both the ‘profile’ and ‘email’ scopes from the server. These scopes specify the permissions you ask for from the user and limit the access token’s authorization. The user’s browser is redirected to the provider, and the user is then presented with a window asking for permission to allow your program the requested access after they’ve logged in. If the user gives the necessary permissions, the OAuth 2 server forwards them to the callback URI they had specified earlier, with an authorization code included in the query parameters, as follows:

GET https://yourapp.example/oauth/callback?code=AUTH_CODE

The authorization code is a one-time-use token that expires quickly; thus, as soon as you receive it, your server should issue a new request to the OAuth 2 provider, including both the authorization code and your client secret, as shown below:

POST https://oauth2provider.example/token?grant_type=authorization_code&code=AUTH_CODE&client_id=CLIENT_KEY&client_secret=CLIENT_SECRET

Because only your server knows the client secret, this exchange protects the one-time code against interception attacks. The authorization code guarantees that the user gave explicit consent and intends to continue to the site they originally requested. The access token returned by this exchange is frequently stored in the user’s server-side session cache. As a result, the server can still make calls to the registered OAuth 2 provider when needed. Google’s response, for example, contains a refresh token that extends the duration of your access token, while Facebook has an endpoint where you can exchange short-lived access tokens for longer-lived ones.

For a REST API, this flow is inconvenient. While you could have the front-end client build the initial login page and the backend supply a callback URL, you’ll run into problems eventually. Once you’ve got the access token, you want to send the visitor to the landing page, but there’s no clear, RESTful way to do so.

Creating a Django Application (TwitterLogin)

We will start by creating a virtual environment and installing Django.

Step 1: mkdir django-twitter-auth && cd django-twitter-auth
Step 2: virtualenv twitter_env
Step 3: source twitter_env/bin/activate
Step 4: pip install Django==3.2.6

At this point, we will create the new project and apply the migrations. Finally, we will run the server.

Step 5: django-admin startproject TwitterLogin_app
Step 6: python manage.py migrate
Step 7: python manage.py runserver

The new changes should appear as follows.

# TwitterLogin_app/settings.py
AUTHENTICATION_BACKENDS = (
    "allauth.account.auth_backends.AuthenticationBackend",
)
SITE_ID = 1
ACCOUNT_EMAIL_VERIFICATION = "none"
LOGIN_REDIRECT_URL = "home"
ACCOUNT_LOGOUT_ON_GET = True

The above section defines several parameters used by Django Allauth. Update the installed apps as well (a typical configuration is sketched below), and ensure you do not forget to re-run the migration step above, because Django Allauth needs the new tables.
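For reference, a typical django-allauth app registration in TwitterLogin_app/settings.py looks like the following sketch; the exact list in the original project may differ slightly.

# TwitterLogin_app/settings.py (sketch of a standard django-allauth setup)
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "django.contrib.sites",  # required by allauth
    "allauth",
    "allauth.account",
    "allauth.socialaccount",
    "allauth.socialaccount.providers.twitter",  # the Twitter provider
]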
Create Templates for the TwitterLogin Application

In this section, we will create templates for our application to help communicate the process happening in the application. First, we will create a "templates" directory in the base directory and two other files, namely, home.html and base.html.

mkdir templates && cd templates
touch home.html base.html

We will then update the template path in settings.py to make it easier for Django to find the templates: add the new templates directory to the DIRS list of the TEMPLATES setting. Next, create a simple view that renders the home template:

# TwitterLogin_app/views.py
from django.views.generic import TemplateView

class Home(TemplateView):
    template_name = "home.html"

We will then create a new URL, so that the final look of the urls.py is as shown in the complete source code section at the end.

Setting Up Twitter OAuth 2 Provider

Setting up Twitter is not different from other social sites like Facebook, GitHub, and Google. The steps are as follows:
- Create an OAuth 2 app on your Twitter developer account.
- Register the OAuth 2 provider on your Django admin page.
- Update templates/home.html accordingly.

The first step is to apply for a Twitter developer account. During the application process, you will be prompted to answer several questions to qualify you and make better recommendations on which services to use. For instance, in our case, we are only interested in third-party authentication of apps via Twitter. So, it would help if you navigated to the Projects and Apps section after completing the account application process. In this section, use the button "Create App" to specify the name of your application and provide other details. Give a name for the app and write down the API key and API secret key. Then enable "Enable 3-legged OAuth" and "Request email address from users" under "Authentication Settings." Also include the URLs for the callback, website, terms of service, and privacy policy.

Go to the Django admin page and log in. First, we will add a new site with the domain name 127.0.0.1 and the display name 127.0.0.1. Below is the completed sample we used. Then click "Add Social Application" under "Social Applications" and enter the necessary details as indicated below. Select Twitter as your provider. Then, give it a name like DjangoTwitterLogin in our case. Add the API key (as the Client id) and the API secret key (as the Secret key) that you noted earlier. One of the Chosen Sites should be example.com. When you are done, make the following updates to templates/home.html:

{% extends 'base.html' %}
{% load socialaccount %}
{% block content %}
<div class="container" style="text-align: center; padding-top: 10%;">
<h1>Django Social Login</h1>
<br /><br />
{% if user.is_authenticated %}
<h3>Welcome {{ user.username }} !!!</h3>
<br /><br />
<a href="{% url 'account_logout' %}" class="btn btn-danger">Logout</a>
{% else %}
...
<!-- Twitter button starts here -->
<a href="{% provider_login_url 'twitter' %}" class="btn btn-primary">
<i class="fa fa-twitter fa-fw"></i>
<span>Login with Twitter</span>
</a>
<!-- Twitter button ends here -->
{% endif %}
</div>
{% endblock content %}

Open the application in your browser to access the login page, which should appear as follows. After clicking on Twitter, you will be taken to the following interface that will prompt you to authorize DjangoTwitterLogin to access your Twitter account. When you provide the correct credentials, you will be directed to the DjangoTwitterLogin home page that will look like this one here.

Complete source code for the important sections. 
# TwitterLogin_app/settings.py
"""
Django settings for the TwitterLogin_app project.
"""
# (standard generated settings such as SECRET_KEY are unchanged and omitted here)
AUTHENTICATION_BACKENDS = (
    "allauth.account.auth_backends.AuthenticationBackend",
)
SITE_ID = 1
ACCOUNT_EMAIL_VERIFICATION = "none"
LOGIN_REDIRECT_URL = "home"
ACCOUNT_LOGOUT_ON_GET = True

(templates/base.html and templates/home.html are as shown in the previous section)

# TwitterLogin_app/views.py
from django.views.generic import TemplateView

class Home(TemplateView):
    template_name = "home.html"

# TwitterLogin_app/urls.py
from django.contrib import admin
from django.urls import path, include
from .views import Home # new

urlpatterns = [
    path("admin/", admin.site.urls),
    path("accounts/", include("allauth.urls")),
    path("", Home.as_view(), name="home"), # new
]

Conclusion

This article covered all you need to know about using Twitter as your OAuth 2 provider. We also covered in detail how OAuth 2 works, including the entire flow. Further, we created a Django application step by step, verifying that it works as expected at each level. Finally, we walked through applying for a Twitter developer account, creating an OAuth 2 app, and generating the access key and tokens needed to facilitate login on our application using Twitter. We hope this article has been informative enough to encourage you to use OAuth 2 with Twitter in your upcoming applications.
https://www.codeunderscored.com/django-authentication-with-twitter/
CC-MAIN-2022-21
en
refinedweb
Extensible Markup Language (XML) is a text format increasingly used for a wide variety of storage and transport requirements. Parsing and processing XML is an important element of many text processing applications. This section discusses the most common techniques for dealing with XML in Python. While XML held an initial promise of simplifying the exchange of complex and hierarchically organized data, it has itself grown into a standard of considerable complexity. This book will not cover most of the API details of XML tools; an excellent book dedicated to that subject is: Python & XML, Christopher A. Jones & Fred L. Drake, Jr., O'Reilly 2002. ISBN: 0-596-00128-2. The XML format is sufficiently rich to represent any structured data, some forms more straightforwardly than others. A task that XML is quite natural at is in representing marked-up text (documentation, books, articles, and the like), as is its parent SGML. But XML is probably used more often to represent data than texts: record sets, OOP data containers, and so on. In many of these cases, the fit is more awkward and requires extra verbosity. XML itself is more like a metalanguage than a language: there are a set of syntax constraints that any XML document must obey, but typically particular APIs and document formats are defined as XML dialects. That is, a dialect consists of a particular set of tags that are used within a type of document, along with rules for when and where to use those tags. What I refer to as an XML dialect is also sometimes more formally called "an application of XML." At base, XML has two ways to represent data. Attributes in XML tags map names to values. Both names and values are Unicode strings (as are XML documents as a whole), but values frequently encode other basic datatypes, especially when specified in W3C XML Schemas. Attribute names are mildly restricted by the special characters used for XML markup; attribute values can encode any strings once a few characters are properly escaped. XML attribute values are whitespace normalized when parsed, but whitespace can itself also be escaped. A bare example is:

>>> from xml.dom import minidom
>>> x = '<tag a="b" num="38" d="e f g"/>'
>>> d = minidom.parseString(x)
>>> d.firstChild.attributes.items()
[(u'a', u'b'), (u'num', u'38'), (u'd', u'e f g')]

The second way XML represents data is by nesting tags inside other tags. In this context, a tag together with a corresponding "close tag" is called an element, and it may contain an ordered sequence of subelements. The subelements themselves may also contain nested subelements. A general term for any part of an XML document, whether an element, an attribute, or one of the special parts discussed below, is a "node." A simple example of an element that contains some subelements is:

>>> x = '''<root>
... <a>some text</a>
... <b data="more data"/>
... <c><d>more text</d></c>
... </root>'''
>>> d = minidom.parseString(x)
>>> d.normalize()
>>> for node in d.documentElement.childNodes:
...     print node
...
<DOM Text node " ">
<DOM Element: a at 7033280>
<DOM Text node " ">
<DOM Element: b at 7051088>
<DOM Text node " ">
<DOM Element: c at 7053696>
<DOM Text node " ">
>>> d.documentElement.childNodes[3].attributes.items()
[(u'data', u'more data')]

There are several things to notice about the Python session above. The "document element," named root in the example, contains three ordered subelement nodes, named a, b, and c. Whitespace is preserved within elements. Therefore the spaces and newlines that come between the subelements make up several text nodes.
Text and subelements can intermix, each potentially meaningful. Spacing in XML documents is significant, but it is nonetheless also often used for visual clarity (as above). The example contains an XML declaration, <?xml...?>, which is optional but generally included. Any given element may contain attributes and subelements and text data. Besides regular elements and text nodes, XML documents can contain several kinds of "special" nodes. Comments are common and useful, especially in documents intended to be hand edited at some point (or even potentially). Processing instructions may indicate how a document is to be handled. Document type declarations may indicate expected validity rules for where elements and attributes may occur. A special type of node called CDATA lets you embed mini-XML documents or other special codes inside of other XML documents, while leaving markup untouched. Examples of each of these forms look like:

<?xml version="1.0" ?>
<!DOCTYPE root SYSTEM "sometype.dtd">
<root>
 <!-- This is a comment -->
 This is text data inside the <root> element
 <![CDATA[Embedded (not well-formed) XML:
 <this><that> >>string<< </that>]]>
</root>

XML documents may be either "well-formed" or "valid." The first characterization simply indicates that a document obeys the proper syntactic rules for XML documents in general: All tags are either self-closed or followed by a matching endtag; reserved characters are escaped; tags are properly hierarchically nested; and so on. Of course, particular documents can also fail to be well-formed, but in that case they are not XML documents sensu stricto, but merely fragments or near-XML. A formal description of well-formed XML can be found in the W3C XML recommendation.

Beyond well-formedness, some XML documents are also valid. Validity means that a document matches a further grammatical specification given in a Document Type Definition (DTD), or in an XML Schema. The most popular style of XML Schema is the W3C XML Schema specification, spelled out in formal detail in the W3C recommendation and in linked documents. There are competing schema specifications, however; one popular alternative is RELAX NG, which is documented at the RELAX NG project site. The grammatical specifications indicated by DTDs are strictly structural. For example, you can specify that certain subelements must occur within an element, with a certain cardinality and order. Or, certain attributes may or must occur with a certain tag. As a simple case, the following DTD is one that the prior example of nested subelements would conform to. There are an infinite number of DTDs that the sample could match, but each one describes a slightly different range of valid XML documents:

<!ELEMENT root ((a|OTHER-A)?, b, c*)>
<!ELEMENT a (#PCDATA)>
<!ELEMENT b EMPTY>
<!ATTLIST b data CDATA #REQUIRED
            NOT-THERE (this | that) #IMPLIED>
<!ELEMENT c (d+)>
<!ATTLIST c data CDATA #IMPLIED>
<!ELEMENT d (#PCDATA)>

The W3C recommendation on the XML standard also formally specifies DTD rules. A few features of the above DTD example can be noted here. The element OTHER-A and the attribute NOT-THERE are permitted by this DTD, but were not utilized in the previous sample XML document. The quantifications ?, *, and +; the alternation |; and the comma sequence operator have similar meaning as in regular expressions and BNF grammars. Attributes may be required or optional as well and may contain any of several specific value types; for example, the data attribute must contain any string, while the NOT-THERE attribute may contain this or that only. Schemas go farther than DTDs, in a way.
Beyond merely specifying that elements or attributes must contain strings describing particular datatypes, such as numbers or dates, schemas allow more flexible quantification of subelement occurrences. For example, the following W3C XML Schema might describe an XML document for purchases:

<xsd:element
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element
      <xsd:element
    </xsd:sequence>
    <xsd:attribute
  </xsd:complexType>
</xsd:element>
<!-- Stock Keeping Unit, a code for identifying products -->
<xsd:simpleType
  <xsd:restriction
    <xsd:pattern
  </xsd:restriction>
</xsd:simpleType>

An XML document that is valid under this schema is:

<item partNum="123-XQ">
  <USPrice>21.95</USPrice>
  <shipDate>2002-11-26</shipDate>
</item>

Formal specifications of schema languages can be found in the documents mentioned above; this example is meant simply to illustrate the types of capabilities they have. In order to check the validity of an XML document against a DTD or schema, you need to use a validating parser. Some stand-alone tools perform validation, generally with diagnostic messages in cases of invalidity. As well, certain libraries and modules support validation within larger applications. As a rule, however, most Python XML parsers are nonvalidating and check only for well-formedness. Quite a number of technologies have been built on top of XML, many endorsed and specified by W3C, OASIS, or other standards groups. One in particular that you should be aware of is XSLT. There are a number of thick books available that discuss XSLT, so the matter is too complex to document here. But in shortest characterization, XSLT is a declarative programming language whose syntax is itself an XML application. An XML document is processed using a set of rules in an XSLT stylesheet, to produce a new output, often a different XML document. The elements in an XSLT stylesheet each describe a pattern that might occur in a source document and contain an output block that will be produced if that pattern is encountered. That is the simple characterization, anyway; in the details, "patterns" can have loops, recursions, calculations, and so on. I find XSLT to be more complicated than genuinely powerful and would rarely choose the technology for my own purposes, but you are fairly likely to encounter existing XSLT processes if you work with existing XML applications. There are two principal APIs for accessing and manipulating XML documents that are in widespread use: DOM and SAX. Both are supported in the Python standard library, and these two APIs make up the bulk of Python's XML support. Both of these APIs are programming language neutral, and using them in other languages is substantially similar to using them in Python. The Document Object Model (DOM) represents an XML document as a tree of nodes. Nodes may be of several types (a document type declaration, processing instructions, comments, elements, and attribute maps), but whatever the type, they are arranged in a strictly nested hierarchy. Typically, nodes have children attached to them; of course, some nodes are leaf nodes without children. The DOM allows you to perform a variety of actions on nodes: delete nodes, add nodes, find sibling nodes, find nodes by tag name, and other actions. The DOM itself does not specify anything about how an XML document is transformed (parsed) into a DOM representation, nor about how a DOM can be serialized to an XML document. In practice, however, all DOM libraries (including xml.dom) incorporate these capabilities. Formal specification of DOM can be found in the W3C DOM recommendations.
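To make the node actions listed above concrete, here is a short sketch using Python's xml.dom.minidom; the document content and element names are illustrative:

# Sketch of common DOM node actions with xml.dom.minidom
from xml.dom import minidom

doc = minidom.parseString('<root><a/><b data="x"/></root>')
root = doc.documentElement
# Find nodes by tag name
b = root.getElementsByTagName('b')[0]
# Add a node
c = doc.createElement('c')
root.appendChild(c)
# Find a sibling node
prev = b.previousSibling        # the <a/> element
# Delete a node
root.removeChild(b)
print(doc.toxml())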
Formal specification of DOM can be found at: < and: < The Simple API for XML (SAX) is an event-based API for XML documents. Unlike DOM, which envisions XML as a rooted tree of nodes, SAX sees XML as a sequence of events occurring linearly in a file, text, or other stream. SAX is a very minimal interface, both in the sense of telling you very little inherently about the structure of an XML documents, and also in the sense of being extremely memory friendly. SAX itself is forgetful in the sense that once a tag or content is processed, it is no longer in memory (unless you manually save it in a data structure). However, SAX does maintain a basic stack of tags to assure well-formedness of parsed documents. The module xml.sax raises exceptions in case of problems in well-formedness; you may define your own custom error handlers for these. Formal specification of SAX can be found at: < The module xml.dom is a Python implementation of most of the W3C Document Object Model, Level 2. As much as possible, its API follows the DOM standard, but a few Python conveniences are added as well. A brief example of usage is below: >>> from xml.dom import minidom >>> dom = minidom.parse('address.xml') >>> addrs = dom.getElementsByTagName('address') >>> print addrs[1].toxml() <address city="New York" number="344" state="NY" street="118 St."/> >>> jobs = dom.getElementsByTagName('job-info') >>> for key, val in jobs[3].attributes.items(): ... print key,'=',val employee-type = Part-Time is-manager = no job-description = Hacker SEE ALSO: gnosis.xml.objectify 409; The module xml.dom.minidom is a lightweight DOM implementation built on top of SAX. You may pass in a custom SAX parser object when you parse an XML document; by default, xml.dom.minidom uses the fast, nonvalidating xml.parser.expat parser. The module xml.dom.pulldom is a DOM implementation that conserves memory by only building the portions of a DOM tree that are requested by calls to accessor methods. In some cases, this approach can be considerably faster than building an entire tree with xml.dom.minidom or another DOM parser; however, the xml.dom.pulldom remains somewhat underdocumented and experimental at the time of this writing. Interface to the expat nonvalidating XML parser. Both the xml.sax and the xml.dom.minidom modules utilize the services of the fast expat parser, whose functionality lives mostly in a C library. You can use xml.parser.expat directly if you wish, but since the interface uses the same general event-driven style of the standard xml.sax, there is usually no reason to. The package xml.sax implements the Simple API for XML. By default, xml.sax relies on the underlying xml.parser.expat parser, but any parser supporting a set of interface methods may be used instead. In particular, the validating parser xmlproc is included in the PyXML package. When you create a SAX application, your main task is to create one or more callback handlers that will process events generated during SAX parsing. The most important handler is a ContentHandler, but you may also define a DTDHandler, EntityResolver, or ErrorHandler. Generally you will specialize the base handlers in xml.sax.handler for your own applications. After defining and registering desired handlers, you simply call the .parse() method of the parser that you registered handlers with. Or alternately, for incremental processing, you can use the feed() method. A simple example illustrates usage. 
The application below reads in an XML file and writes an equivalent, but not necessarily identical, document to STDOUT. The output can be used as a canonical form of the document:

#!/usr/bin/env python
import sys
from xml.sax import handler, make_parser
from xml.sax.saxutils import escape

class ContentGenerator(handler.ContentHandler):
    def __init__(self, out=sys.stdout):
        handler.ContentHandler.__init__(self)
        self._out = out
    def startDocument(self):
        self._out.write('<?xml version="1.0"?>\n')
    def startElement(self, name, attrs):
        self._out.write('<' + name)
        for (name, value) in attrs.items():
            self._out.write(' %s="%s"' % (name, escape(value)))
        self._out.write('>')
    def endElement(self, name):
        self._out.write('</%s>' % name)
    def characters(self, content):
        self._out.write(escape(content))
    def ignorableWhitespace(self, content):
        self._out.write(content)
    def processingInstruction(self, target, data):
        self._out.write('<?%s %s?>' % (target, data))

if __name__=='__main__':
    parser = make_parser()
    parser.setContentHandler(ContentGenerator())
    parser.parse(sys.argv[1])

The module xml.sax.handler defines classes ContentHandler, DTDHandler, EntityResolver, and ErrorHandler that are normally used as parent classes of custom SAX handlers.

The module xml.sax.saxutils contains utility functions for working with SAX events. Several functions allow escaping and munging special characters.

The module xml.sax.xmlreader provides a framework for creating new SAX parsers that will be usable by the xml.sax module. Any new parser that follows a set of API conventions can be plugged in to the xml.sax.make_parser() class factory.

Deprecated module for XML parsing. Use xml.sax or other XML tools in Python 2.0+.

XML-RPC is an XML-based protocol for remote procedure calls, usually layered over HTTP. For the most part, the XML aspect is hidden from view. You simply use the module xmlrpclib to call remote methods and the module SimpleXMLRPCServer to implement your own server that supports such method calls. For example:

>>> import xmlrpclib
>>> betty = xmlrpclib.Server("http://betty.userland.com")
>>> print betty.examples.getStateName(41)
South Dakota

The XML-RPC format itself is a bit verbose, even as XML goes. But it is simple and allows you to pass argument values to a remote method:

>>> import xmlrpclib
>>> print xmlrpclib.dumps((xmlrpclib.True,37,(11.2,'spam')))
<params>
<param>
<value><boolean>1</boolean></value>
</param>
<param>
<value><int>37</int></value>
</param>
<param>
<value><array><data>
<value><double>11.199999999999999</double></value>
<value><string>spam</string></value>
</data></array></value>
</param>
</params>

SEE ALSO: gnosis.xml.pickle 410;

A number of projects extend the XML capabilities in the Python standard library. I am the principal author of several XML-related modules that are distributed with the gnosis package. Information on the current release can be found at the Gnosis Utilities Web site, and the package itself can be downloaded as a distutils package tarball from the same site. The Python XML-SIG (special interest group) produces a package of XML tools known as PyXML. The work of this group is incorporated into the Python standard library with new Python releases; not every PyXML tool, however, makes it into the standard library. At any given moment, the most sophisticated (and often experimental) capabilities can be found by downloading the latest PyXML package. Be aware that installing the latest PyXML overrides the default Python XML support and may break other tools or applications. Fourthought, Inc. produces the 4Suite package, which contains a number of XML tools. Fourthought releases 4Suite as free software, and many of its capabilities are incorporated into the PyXML project (albeit at a varying time delay); however, Fourthought is a for-profit company that also offers customization and technical support for 4Suite. Details can be found at the 4Suite community page and the Fourthought company Web site. Two other modules are discussed briefly below. Neither of these are XML tools per se. However, both PYX and yaml fill many of the same requirements as XML does, while being easier to manipulate with text processing techniques, easier to read, and easier to edit by hand. There is a contrast between these two formats, however. PYX is semantically identical to XML, merely using a different syntax. YAML, on the other hand, has a quite different semantics from XML; I present it here because in many of the concrete applications where developers might instinctively turn to XML (which has a lot of "buzz"), YAML is a better choice. The home pages for PYX and YAML, an article of mine explaining PYX in more detail than in this book, and an article of mine contrasting the utility and semantics of YAML and XML, are all available online.
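To give a flavor of the contrast just drawn, the same small record in XML and in YAML might look like the following; the record itself is invented for illustration:

<person><name>Jane</name><age>33</age>
  <skills><skill>Python</skill><skill>XML</skill></skills>
</person>

person:
  name: Jane
  age: 33
  skills: [Python, XML]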
Fourthought releases 4Suite as free software, and many of its capabilities are incorporated into the PyXML project (albeit at a varying time delay); however, Fourthought is a for-profit company that also offers customization and technical support for 4Suite. The community page for 4Suite is: < The Fourthought company Web site is: < Two other modules are discussed briefly below. Neither of these are XML tools per se. However, both PYX and yaml fill many of the same requirements as XML does, while being easier to manipulate with text processing techniques, easier to read, and easier to edit by hand. There is a contrast between these two formats, however. PYX is semantically identical to XML, merely using a different syntax. YAML, on the other hand, has a quite different semantics from XML?I present it here because in many of the concrete applications where developers might instinctively turn to XML (which has a lot of "buzz"), YAML is a better choice. The home page for PYX is: < I have written an article explaining PYX in more detail than in this book at: < The home page for YAML is: < I have written an article contrasting the utility and semantics of YAML and XML at: < The module gnosis.xml.indexer builds on the full-text indexing program presented as an example in Chapter 2 (and contained in the gnosis package as gnosis.indexer). Instead of file contents, gnosis.xml.indexer creates indices of (large) XML documents. This allows for a kind of "reverse XPath" search. That is, where a tool like 4xpath, in the 4Suite package, lets you see the contents of an XML node specified by XPath, gnosis.xml.indexer identifies the XPaths to the point where a word or words occur. This module may be used either in a larger application or as a command-line tool; for example: % indexer symmetric ./crypto1.xml::/section[2]/panel[8]/title ./crypto1.xml::/section[2]/panel[8]/body/text_column/code_listing ./crypto1.xml::/section[2]/panel[7]/title ./crypto2.xml::/section[4]/panel[6]/body/text_column/p[1] 4 matched wordlist: ['symmetric'] Processed in 0.100 seconds (SlicedZPickleIndexer) % indexer "-filter=*::/*/title" symmetric ./cryptol.xml::/section[2]/panel[8]/title ./cryptol.xml::/section[2]/panel[7]/title 2 matched wordlist: ['symmetric'] Processed in 0.080 seconds (SlicedZPickleIndexer) Indexed searches, as the example shows, are very fast. I have written an article with more details on this module: < The module gnosis.xml.objectify transforms arbitrary XML documents into Python objects that have a "native" feel to them. Where XML is used to encode a data structure, I believe that using gnosis.xml.objectify is the quickest and simplest way to utilize that data in a Python application. The Document Object Model defines an OOP model for working with XML, across programming languages. But while DOM is nominally object-oriented, its access methods are distinctly un-Pythonic. For example, here is a typical "drill down" to a DOM value (skipping whitespace text nodes for some indices, which is far from obvious): >>> from xml.dom import minidom >>> dom_obj = minidom.parse('address.xml') >>> dom_obj.normalize() >>> print dom_obj.documentElement.childNodes[1].childNodes[3]\ ... 
...      .attributes.get('city').value
Los Angeles

In contrast, gnosis.xml.objectify feels like you are using Python:

>>> from gnosis.xml.objectify import XML_Objectify
>>> xml_obj = XML_Objectify('address.xml')
>>> py_obj = xml_obj.make_instance()
>>> py_obj.person[2].address.city
u'Los Angeles'

The module gnosis.xml.pickle lets you serialize arbitrary Python objects to an XML format. In most respects, the purpose is the same as for the pickle module, but an XML target is useful for certain purposes. You may process the data in an xml_pickle using standard XML parsers, XSLT processors, XML editors, validation utilities, and other tools. In several respects, gnosis.xml.pickle offers finer-grained control than the standard pickle module does. You can control security permissions accurately; you can customize the representation of object types within an XML file; you can substitute compatible classes during the pickle/unpickle cycle; and several other "guru-level" manipulations are possible. However, in basic usage, gnosis.xml.pickle is fully API compatible with pickle. An example illustrates both the usage and the format:

>>> class Container: pass
...
>>> inst = Container()
>>> dct = {1.7:2.5, ('t','u','p'):'tuple'}
>>> inst.this, inst.num, inst.dct = 'that', 38, dct
>>> import gnosis.xml.pickle
>>> print gnosis.xml.pickle.dumps(inst)
<?xml version="1.0"?>
<!DOCTYPE PyObject SYSTEM "PyObjects.dtd">
<PyObject module="__main__" class="Container" id="5999664">
<attr name="this" type="string" value="that" />
<attr name="dct" type="dict" id="6008464" >
  <entry>
    <key type="tuple" id="5973680" >
      <item type="string" value="t" />
      <item type="string" value="u" />
      <item type="string" value="p" />
    </key>
    <val type="string" value="tuple" />
  </entry>
  <entry>
    <key type="numeric" value="1.7" />
    <val type="numeric" value="2.5" />
  </entry>
</attr>
<attr name="num" type="numeric" value="38" />
</PyObject>

SEE ALSO: pickle 93; cPickle 93; yaml 415; pprint 94;

The module gnosis.xml.validity allows you to define Python container classes that restrict their containment according to XML validity constraints. Such validity-enforcing classes always produce string representations that are valid XML documents, not merely well-formed ones. When you attempt to add an item to a gnosis.xml.validity container object that is not permissible, a descriptive exception is raised. Constraints, as with DTDs, may specify quantification, subelement types, and sequence. For example, suppose you wish to create documents that conform with a "dissertation" Document Type Definition:

<!ELEMENT dissertation (dedication?, chapter+, appendix*)>
<!ELEMENT dedication (#PCDATA)>
<!ELEMENT chapter (title, paragraph+)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT paragraph (#PCDATA | figure | table)+>
<!ELEMENT figure EMPTY>
<!ELEMENT table EMPTY>
<!ELEMENT appendix (#PCDATA)>

You can use gnosis.xml.validity to assure that your application produces only conformant XML documents.
First, you create a Python version of the DTD:

from gnosis.xml.validity import *
class appendix(PCDATA):   pass
class table(EMPTY):       pass
class figure(EMPTY):      pass
class _mixedpara(Or):     _disjoins = (PCDATA, figure, table)
class paragraph(Some):    _type = _mixedpara
class title(PCDATA):      pass
class _paras(Some):       _type = paragraph
class chapter(Seq):       _order = (title, _paras)
class dedication(PCDATA): pass
class _apps(Any):         _type = appendix
class _chaps(Some):       _type = chapter
class _dedi(Maybe):       _type = dedication
class dissertation(Seq):  _order = (_dedi, _chaps, _apps)

Next, import your Python validity constraints, and use them in an application:

>>> from dissertation import *
>>> chap1 = LiftSeq(chapter,('About Validity','It is a good thing'))
>>> paras_ch1 = chap1[1]
>>> paras_ch1 += [paragraph('OOP can enforce it')]
>>> print chap1
<chapter><title>About Validity</title>
<paragraph>It is a good thing</paragraph>
<paragraph>OOP can enforce it</paragraph>
</chapter>

If you attempt an action that violates constraints, you get a relevant exception; for example:

>>> try:
...     paras_ch1.append(dedication("To my advisor"))
... except ValidityError, x:
...     print x
Items in _paras must be of type <class 'dissertation.paragraph'>
(not <class 'dissertation.dedication'>)

The PyXML package contains a number of capabilities in advance of those in the Python standard library. PyXML was at version 0.8.1 at the time this was written, and as the number indicates, it remains an in-progress/beta project. Moreover, as of this writing, the last released version of Python was 2.2.2, with 2.3 in preliminary stages. When you read this, PyXML will probably be at a later number and have new features, and some of the current features will have been incorporated into the standard library. Exactly what is where is a moving target. Some of the significant features currently available in PyXML but not in the standard library are listed below. You may install PyXML on any Python 2.0+ installation, and it will override the existing XML support.

- A validating XML parser written in Python called xmlproc. Being a pure Python program rather than a C extension, xmlproc is slower than xml.sax (which uses the underlying expat parser).
- A SAX extension called xml.sax.writers that will reserialize SAX events to either XML or other formats.
- A fully compliant DOM Level 2 implementation called 4DOM, borrowed from 4Suite.
- Support for canonicalization. That is, two XML documents can be semantically identical even though they are not byte-wise identical. You have freedom in choice of quotes, attribute orders, character entities, and some spacing that change nothing about the meaning of the document. Two canonicalized XML documents are semantically identical if and only if they are byte-wise identical.
- XPath and XSLT support, with implementations written in pure Python. There are faster XSLT implementations around, however, that call C extensions.
- A DOM implementation, called xml.dom.pulldom, that supports lazy instantiation of nodes has been incorporated into recent versions of the standard library. For older Python versions, this is available in PyXML.
- A module with several options for serializing Python objects to XML. This capability is comparable to gnosis.xml.pickle, but I like the tool I created better in several ways.

PYX is both a document format and a Python module to support working with that format. As well as the Python module, tools written in C are available to transform documents between XML and PYX format.
The idea behind PYX is to eliminate the need for complex parsing tools like xml.sax. Each node in an XML document is represented, in the PYX format, on a separate line, using a prefix character to indicate the node type. Most of XML semantics is preserved, with the exception of document type declarations, comments, and namespaces. These features could be incorporated into an updated PYX format, in principle. Documents in the PYX format are easily processed using traditional line-oriented text processing tools like sed, grep, awk, sort, wc, and the like. Python applications that use a basic FILE.readline() loop are equally able to process PYX nodes, one per line. This makes it much easier to use familiar text processing programming styles with PYX than it is with XML. A brief example illustrates the PYX format:

% cat test.xml
<?xml version="1.0"?>
<?xml-stylesheet href="test.css" type="text/css"?>
<Spam flavor="pork">
<Eggs>Some text about eggs.</Eggs>
<MoreSpam>Ode to Spam (spam="smoked-pork")</MoreSpam>
</Spam>

% ./xmln test.xml
?xml-stylesheet href="test.css" type="text/css"
(Spam
Aflavor pork
-\n
(Eggs
-Some text about eggs.
)Eggs
-\n
(MoreSpam
-Ode to Spam (spam="smoked-pork")
)MoreSpam
-\n
)Spam

The tools in 4Suite focus on the use of XML documents for knowledge management. The server element of the 4Suite software is useful for working with catalogs of XML documents, searching them, transforming them, and so on. The base 4Suite tools address a variety of XML technologies. In some cases 4Suite implements standards and technologies not found in the Python standard library or in PyXML, while in other cases 4Suite provides more advanced implementations. Among the XML technologies implemented in 4Suite are DOM, RDF, XSLT, XInclude, XPointer, XLink and XPath, and SOAP. Among these, of particular note is 4xslt for performing XSLT transformations. 4xpath lets you find XML nodes using concise and powerful XPath descriptions of how to reach them. 4rdf deals with "meta-data" that documents use to identify their semantic characteristics. I describe 4Suite technologies in more detail in an article at: <

The native data structures of object-oriented programming languages are not straightforward to represent in XML. While XML is in principle powerful enough to represent any compound data, the only inherent mapping in XML is within attributes -- but that only maps strings to strings. Moreover, even when a suitable XML format is found for a given data structure, the XML is quite verbose and difficult to scan visually, or especially to edit manually. The YAML format is designed to match the structure of datatypes prevalent in scripting languages: Python, Perl, Ruby, and Java all have support libraries at the time of this writing. Moreover, the YAML format is extremely concise and unobtrusive -- in fact, the acronym cutely stands for "YAML Ain't Markup Language." In many ways, YAML can act as a better pretty-printer than pprint, while simultaneously working as a format that can be used for configuration files or to exchange data between different programming languages. There is no fully general and clean way, however, to convert between YAML and XML. You can use the yaml module to read YAML data files, then use the gnosis.xml.pickle module to read and write to one particular XML format. But when XML data starts out in other XML dialects than gnosis.xml.pickle, there are ambiguities about the best Python native and YAML representations of the same data.
On the plus side -- and this can be a very big plus -- there is essentially a straightforward and one-to-one correspondence between Python data structures and YAML representations. In the YAML example below, refer back to the same Python instance serialized using gnosis.xml.pickle and pprint in their respective discussions. As with gnosis.xml.pickle -- but in this case unlike pprint -- the serialization can be read back in to re-create an identical object (or to create a different object after editing the text, by hand or by application).

>>> class Container: pass
...
>>> inst = Container()
>>> dct = {1.7:2.5, ('t','u','p'):'tuple'}
>>> inst.this, inst.num, inst.dct = 'that', 38, dct
>>> import yaml
>>> print yaml.dump(inst)
--- !!__main__.Container
dct:
    1.7: 2.5
    ?
        - t
        - u
        - p
    : tuple
num: 38
this: that

SEE ALSO: pprint 94; gnosis.xml.pickle 410;
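To make the round-trip concrete, here is a minimal sketch; it assumes only that the yaml module exposes load() as the inverse of dump(), which is how the YAML libraries of this era behave:

import yaml

record = {'this': 'that', 'num': 38}
text = yaml.dump(record)      # serialize to the YAML text form
data = yaml.load(text)        # parse it back into Python objects
assert data == record         # an equal structure comes back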
https://etutorials.org/Programming/Python.+Text+processing/Chapter+5.+Internet+Tools+and+Techniques/5.4+Understanding+XML/
CC-MAIN-2022-21
en
refinedweb
19.7.1 The Handler Interface

A SOAP message handler must implement the javax.xml.rpc.handler.Handler interface:

public interface Handler {
    public boolean handleRequest(MessageContext ctx);
    public boolean handleResponse(MessageContext ctx);
    public boolean handleFault(MessageContext ctx);
    public void init(HandlerInfo hi);
    public void destroy( );
    public QName [] getHeaders( );
}

WebLogic invokes the init( ) method to create an instance of the Handler object, and invokes the destroy( ) method when it determines that the SOAP handler is no longer needed. These methods give you the opportunity to acquire and release any resources needed by the Handler object. The init( ) method is passed a HandlerInfo object, which lets you access any information about the SOAP handler -- in particular, any initialization parameters configured in the web-services.xml descriptor file. In fact, you should invoke the HandlerInfo.getHandlerConfig( ) method to obtain a Map object that holds a list of name-value pairs, one for each of the initialization parameters. These parameters are quite useful for several things -- for instance, to enable debugging, or perhaps to specify the name of the web service with which the SOAP handler is going to be associated (there is no other way of accessing this information). The handleRequest( ) method is invoked to intercept incoming SOAP requests before they are processed by the backend component, and the handleResponse( ) method is invoked to intercept outgoing SOAP responses before they are delivered back to the client. If a single SOAP handler implements both the handleRequest( ) and handleResponse( ) methods, it intercepts both incoming and outgoing SOAP messages. The handleFault( ) method is invoked when WebLogic needs to process any SOAP faults generated by the handleRequest( ) or handleResponse( ) methods, or even by the backend component. These methods also have access to a MessageContext object that models the message context in which the SOAP handler has been invoked. Typically, you would use the SOAPMessageContext subinterface to access or update the contents of the SOAP message. Remember, a SOAP handler is free to update the contents of the incoming SOAP request or the outgoing SOAP response before it forwards the message to the next SOAP handler in the chain. Once the handleRequest( ) method has processed the incoming SOAP request, it can determine how the SOAP message is subsequently handled: returning true passes the (possibly modified) request to the next handler in the chain, and ultimately to the backend component, while returning false blocks any further processing of the request. In the same way, once the handleResponse( ) method has processed the outgoing SOAP response, returning true passes the response along to the next handler in the response chain, while returning false blocks any further processing of the response. Remember, the handleFault( ) method is used to handle any SOAP faults generated during the processing of the SOAP message request/response. The handleFault( ) methods can be invoked in a chain: if the handleFault( ) method on a Handler object returns true, the handleFault( ) method of the next handler in the chain is invoked. Otherwise, the rest of the chain is skipped. WebLogic also provides a convenient abstract base class that lets you easily create your own handlers: weblogic.webservices.GenericHandler. Example 19-14 shows how to construct a simple handler in this way.

Example 19-14.
Using the GenericHandler interface

public class MyHandler extends GenericHandler {
    public boolean handleResponse(MessageContext ctx) {
        SOAPMessageContext sMsgCtx = (SOAPMessageContext) ctx;
        SOAPMessage msg = sMsgCtx.getMessage( );
        SOAPPart sp = msg.getSOAPPart( );
        try {
            SOAPEnvelope se = sp.getEnvelope( );
            SOAPHeader sh = se.getHeader( );
            sh.addChildElement("TheStorkBroughtMe");
        } catch (SOAPException e) {
            e.printStackTrace( );
        }
        return true;
    }
}

Note how the handleResponse( ) method uses the SOAPMessage class. This class is part of the SOAP with Attachments API for Java 1.1 (SAAJ) specification, and gives you access to all parts of the SOAP message. In this case, we used it simply to add a child element to the SOAP header of the response.

19.7.2 Configuring a Handler Chain

A handler chain represents an ordered group of SOAP message handlers. Any SOAP handler that needs to participate in a web service must be defined in the web-services.xml descriptor file. The descriptor declares the chain of SOAP message handlers (a sketch of such a declaration appears at the end of this section), and the declaration also lets you define initialization parameters for individual handlers -- for instance, for the first handler in the chain. The order in which the handlers are defined is very important because it determines the sequence in which the handlers are invoked; that order is detailed here:

H1.handleRequest( )
H2.handleRequest( )
H3.handleRequest( )
H3.handleResponse( )
H2.handleResponse( )
H1.handleResponse( )

19.7.3 Creating and Registering SOAP Handlers

A SOAP message handler can either directly implement the Handler interface or can extend the abstract class GenericHandler provided by WebLogic. This class offers a simple and sensible implementation of the Handler interface and maintains a reference to the HandlerInfo object passed during the initialization of the Handler object. The following example shows how a SOAP message handler can access the SOAP message and its headers:

public class MyHandler extends weblogic.webservice.GenericHandler {
    public boolean handleRequest(MessageContext ctx) {
        System.err.println("In MyHandler.handleRequest( )");
        // type cast MessageContext to access the SOAP message
        SOAPMessageContext sMsgCtx = (SOAPMessageContext) ctx;
        SOAPMessage msg = sMsgCtx.getMessage( );
        SOAPPart sp = msg.getSOAPPart( );
        SOAPEnvelope se = sp.getEnvelope( );
        SOAPHeader sh = se.getHeader( );
        // ...
        return true;
    }

    public boolean handleResponse(MessageContext ctx) {
        System.err.println("In MyHandler.handleResponse( )");
        return true;
    }
}

Once you create the SOAP message handler, you must modify the web-services.xml descriptor file in order to register the handler. In WebLogic 8.1, you simply need to modify the servicegen Ant task by adding a handlers attribute; the handlers attribute can take a comma-separated list of fully qualified class names. When you update the servicegen task in this way, every operation will be associated with the handler chain. If you want to be more selective, you have to edit the web-services.xml by hand. Likewise, if you are using WebLogic 7.0, you must edit the web-services.xml descriptor manually, as WebLogic 7.0's servicegen Ant task doesn't support handler chains. In these cases, you will need to edit the descriptor file to declare the chain directly, along the lines of the sketch below.
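The descriptor syntax differs slightly across WebLogic releases, so treat the following as an illustrative sketch rather than a copy-paste recipe -- the element layout and the com.foo.* handler classes are stand-ins that you should check against your release's web-services.xml schema and servicegen documentation:

<!-- WebLogic 8.1: attaching handlers via the servicegen Ant task -->
<servicegen destEar="myApp.ear">
  <service serviceName="myService"
           handlers="com.foo.H1,com.foo.H2,com.foo.H3"
           ... />
</servicegen>

<!-- Declaring a named chain by hand in web-services.xml -->
<handler-chains>
  <handler-chain name="myChain">
    <handler class-name="com.foo.H1">
      <init-params>
        <init-param name="debug" value="true"/>
      </init-params>
    </handler>
    <handler class-name="com.foo.H2"/>
    <handler class-name="com.foo.H3"/>
  </handler-chain>
</handler-chains>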
Only after you've registered the handler chain can you bind it to a web service operation. For this, you need to modify the particular operation element in the web-services.xml descriptor file to which the handler chain will be linked. The handler-chain attribute on the operation element is what associates the handler chain myChain with the makeUpper operation defined earlier; the first operation in the sketch at the end of this section shows the binding. Once you've deployed the web service with these changes to the descriptor file, any SOAP requests and responses for the makeUpper operation will pass through the configured handler chain (myChain).

19.7.3.1 Using only SOAP handlers to implement an operation

Typically, a web service operation is implemented by a backend component. However, a web service operation also may be implemented through a handler chain alone, without the aid of any backend component. This means a SOAP message is processed by the handleRequest( ) methods of each handler in the chain, and then by the handleResponse( ) methods of each handler in the chain, but in reverse order. A web service operation implemented solely through a chain of SOAP message handlers is configured in the same way, except that you can completely ignore the component and method attributes for the web service operation; the second operation in the sketch below shows this form.
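Again as a sketch only -- makeUpper and myChain come from the text above, while jcComp0 and chainOnly are hypothetical names, and the exact attribute spellings should be verified against your release's descriptor schema:

<operations>
  <!-- Operation backed by a component, with the chain bound to it -->
  <operation name="makeUpper"
             component="jcComp0"
             method="makeUpper"
             handler-chain="myChain"/>
  <!-- Operation implemented by the handler chain alone: no component
       or method attributes are needed -->
  <operation name="chainOnly"
             handler-chain="myChain"/>
</operations>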
https://flylib.com/books/en/2.107.1/soap_message_handlers.html
CC-MAIN-2022-21
en
refinedweb
VERSION

version 1.030002

SYNOPSIS

    # in MyApp/ParsedAttribute/URI.pm
    package MyApp::ParsedAttribute::URI;

    use Moose;
    use namespace::autoclean;
    use URI;

    with 'MongoDBx::Class::ParsedAttribute';

    sub expand {
        my ($self, $uri_text) = @_;
        return URI->new($uri_text);
    }

    sub collapse {
        my ($self, $uri_obj) = @_;
        return $uri_obj->as_string;
    }

    1;

    # in MyApp/Schema/SomeDocumentClass.pm
    has 'url' => (is => 'ro', isa => 'URI', traits => ['Parsed'],
                  parser => 'MyApp::ParsedAttribute::URI', required => 1);

DESCRIPTION

This module is a Moose role meant to be consumed by classes that automatically expand (from a MongoDB database) and collapse (to a MongoDB database) attributes of a certain type. This is similar to DBIx::Class' InflateColumn family of modules that do pretty much the same thing for the SQL world. A class implementing this role with a name such as 'URI' (full package name MongoDBx::Class::ParsedAttribute::URI or MyApp::ParsedAttribute::URI) is expected to expand and collapse URI objects. Similarly, a class named 'NetAddr::IP' is expected to handle NetAddr::IP objects. Currently, a DateTime parser is provided with the MongoDBx::Class distribution.

REQUIRES

Consuming classes must implement the following methods:

expand( $value )

Receives a raw attribute's value from a MongoDB document and returns the appropriate object representing it. For example, supposing the value is an epoch integer, the expand method might return a DateTime object.

collapse( $object )

Receives an object representing a parsed attribute, and returns that object's value in a form that can be saved in the database. For example, if the object is a DateTime object, this method might return the date's epoch integer.

AUTHOR

Ido Perlmuter, "<ido at ido50.net>"

BUGS

Please report any bugs or feature requests to "bug-mongodbx-class at rt.cpan.org", or through the web interface at < I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.

SUPPORT

You can find documentation for this module with the perldoc command.

    perldoc MongoDBx::Class::ParsedAttribute

You can also look for information at:

- RT: CPAN's request tracker <
- AnnoCPAN: Annotated CPAN documentation <
- CPAN Ratings <
- Search CPAN <

LICENSE AND COPYRIGHT

Copyright 2010-2014 Ido Perlmuter. This program is free software; you can redistribute it and/or modify it under the terms of either: the GNU General Public License as published by the Free Software Foundation; or the Artistic License. See for more information.
https://manpages.org/mongodbxclassparsedattribute/3
CC-MAIN-2022-21
en
refinedweb
I posted a bit about DirectInk and the notion of wet/dry ink back in this post and one of the things that I was really interested in from //Build was the notion that there were some additional capabilities here in inking in the upcoming Windows Anniversary Update. This is talked about in the excellent inking session here; all of the session is very much worth watching, but the relevant piece for this post is the piece that starts at approximately 43m, which talks about the notion of simultaneous touch and ink. The InkCanvas is a fantastic bit of kit but it tends to 'take over' in the sense that it covers whatever content sits beneath it from both a presentation point of view and, to some extent, from an event processing point of view. In my previous post, I wrote about how (today on 10586) you can take control of the drying of ink so as to make it possible to interleave the presentation of ink collected by the InkCanvas with other content. The InkCanvas captures the ink "wet" on its own thread and then it calls your code to "dry" it on your thread and you take control at that point. On the 14366 preview SDK, there is a new class called CoreWetStrokeUpdateSource which seems to start opening up some of that "wet" ink processing, allowing me to hook code into the system's thread that captures the ink and first draws it fluidly in response to the user's pen moving, before it gets collected and handed over to the UI thread for any custom drying. This allows for scenarios where we can move the ink around as the user is drawing it, and I think it's what enables the new ruler that's part of the Anniversary Update. I wanted to experiment and so I thought I'd take that idea of a 'ruler' and extend it to include the idea of 'ruled paper'. Mine is only a simple demo but I thought it highlighted how flexible and powerful the inking platform is becoming. I made a simple app that displays a set of XAML-drawn lines; it's using the InkToolbar to control the formatting (which, by default, has the ruler on it). This app operates in 3 modes. By default, it's in freeform mode so I can just ink; but if I tap (specifically with touch) then it changes mode into a SnapX or SnapY mode where it snaps drawn lines to the paper; and I can also pinch to zoom the paper grid up to a larger size if I want a larger snap grid. For the small piece of code I wrote to make this work, it actually works surprisingly well, although the code needs updating to handle window resizing. I essentially just placed a Canvas behind an InkCanvas and it's on that background Canvas that I draw the grid;

<Page
    x:Class="App6.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Grid.RowDefinitions>
      <RowDefinition Height="Auto" />
      <RowDefinition />
    </Grid.RowDefinitions>
    <InkToolbar TargetInkCanvas="{Binding ElementName=ink}" />
    <Canvas x:Name="drawCanvas" Grid.Row="1">
      <Canvas.RenderTransform>
        <ScaleTransform x:Name="scaleTransform" />
      </Canvas.RenderTransform>
    </Canvas>
    <TextBlock Margin="8" x:Name="txtMode" Grid.Row="1" />
    <InkCanvas x:Name="ink" Grid.Row="1"
               ManipulationMode="Scale"
               ManipulationDelta="OnInkManipulationDelta"
               Tapped="OnTapped" />
  </Grid>
</Page>

and so all we have here are a Canvas, an InkCanvas, an InkToolbar and a TextBlock to display the current status.
I married this up with some code-behind which could do with a bit of tidying;

namespace App6
{
    using System;
    using System.Linq;
    using Windows.Devices.Input;
    using Windows.Foundation;
    using Windows.UI;
    using Windows.UI.Input.Inking.Core;
    using Windows.UI.Xaml.Controls;
    using Windows.UI.Xaml.Input;
    using Windows.UI.Xaml.Media;
    using Windows.UI.Xaml.Shapes;

    public sealed partial class MainPage : Page
    {
        enum DrawMode
        {
            FreeForm = 0,
            SnapY = 1,
            SnapX = 2
        }
        static MainPage()
        {
            lineBrush = new SolidColorBrush(Colors.LightBlue);
        }
        public MainPage()
        {
            this.InitializeComponent();
            this.scaledGridSize = BASE_GRID_SIZE;
            this.drawMode = DrawMode.FreeForm;
            this.Loaded += OnLoaded;
        }
        void OnLoaded(object sender, Windows.UI.Xaml.RoutedEventArgs e)
        {
            // We want to handle events when wet ink is being processed.
            var source = CoreWetStrokeUpdateSource.Create(this.ink.InkPresenter);

            // We should probably also handle the cancel event too.
            source.WetStrokeStarting += OnWetStrokeStarting;
            source.WetStrokeContinuing += OnWetStrokeContinuing;

            // Draw our grid lines, no resize handling yet.
            for (int i = 0; i < this.drawCanvas.ActualWidth; i += BASE_GRID_SIZE)
            {
                this.AddLine(i, 0, i, this.drawCanvas.ActualHeight);
            }
            for (int j = 0; j < this.drawCanvas.ActualHeight; j += BASE_GRID_SIZE)
            {
                this.AddLine(0, j, this.drawCanvas.ActualWidth, j);
            }
            this.UpdateDrawMode();
        }
        void UpdateDrawMode()
        {
            this.txtMode.Text = this.drawMode.ToString();
        }
        void AddLine(double x1, double y1, double x2, double y2)
        {
            var line = new Line()
            {
                X1 = x1, Y1 = y1, X2 = x2, Y2 = y2, Stroke = lineBrush
            };
            this.drawCanvas.Children.Add(line);
        }
        double GetNearestGridMultiple(double coordinate)
        {
            var result = 0.0d;
            var dividand = (int)(coordinate / this.scaledGridSize);
            var lower = dividand * this.scaledGridSize;
            var upper = (dividand + 1) * this.scaledGridSize;
            var lowerDistance = Math.Abs(lower - coordinate);
            var upperDistance = Math.Abs(upper - coordinate);
            if (lowerDistance < upperDistance)
            {
                result = lower;
            }
            else
            {
                result = upper;
            }
            return (result);
        }
        void OnWetStrokeStarting(CoreWetStrokeUpdateSource sender, CoreWetStrokeUpdateEventArgs args)
        {
            // NB: We are not on the UI thread and we probably are not meant to do much
            // work here as we could slow the experience. It looks like the API has been
            // designed to avoid us trying to get 'clever' and doing too much work.
            var firstPoint = args.NewInkPoints.First();

            if (this.drawMode != DrawMode.FreeForm)
            {
                if (this.drawMode == DrawMode.SnapY)
                {
                    snapY = this.GetNearestGridMultiple(firstPoint.Position.Y);
                }
                else
                {
                    snapX = this.GetNearestGridMultiple(firstPoint.Position.X);
                }
                this.SnapPoints(args);
            }
        }
        void OnWetStrokeContinuing(CoreWetStrokeUpdateSource sender, CoreWetStrokeUpdateEventArgs args)
        {
            this.SnapPoints(args);
        }
        void SnapPoints(CoreWetStrokeUpdateEventArgs args)
        {
            for (int i = 0; i < args.NewInkPoints.Count; i++)
            {
                if (snapX != null)
                {
                    args.NewInkPoints[i] = new Windows.UI.Input.Inking.InkPoint(
                        new Point(snapX.Value, args.NewInkPoints[i].Position.Y),
                        args.NewInkPoints[i].Pressure);
                }
                else if (snapY != null)
                {
                    args.NewInkPoints[i] = new Windows.UI.Input.Inking.InkPoint(
                        new Point(args.NewInkPoints[i].Position.X, snapY.Value),
                        args.NewInkPoints[i].Pressure);
                }
            }
        }
        bool IsTouchPoint(PointerRoutedEventArgs e)
        {
            return (e.Pointer.PointerDeviceType == PointerDeviceType.Touch);
        }
        void OnInkManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
        {
            var newScale = this.scaledGridSize * e.Delta.Scale;

            if ((newScale >= BASE_GRID_SIZE) && (newScale <= (MAX_SCALE * BASE_GRID_SIZE)))
            {
                this.scaleTransform.ScaleX = newScale / BASE_GRID_SIZE;
                this.scaleTransform.ScaleY = newScale / BASE_GRID_SIZE;
                this.scaledGridSize = newScale;
            }
        }
        void OnTapped(object sender, TappedRoutedEventArgs e)
        {
            if (e.PointerDeviceType == PointerDeviceType.Touch)
            {
                int value = ((int)this.drawMode + 1);
                if (value > (int)DrawMode.SnapX)
                {
                    value = 0;
                }
                this.drawMode = (DrawMode)value;
                this.snapX = this.snapY = null;
                this.UpdateDrawMode();
            }
        }
        double? snapX;
        double? snapY;
        DrawMode drawMode;
        static SolidColorBrush lineBrush;
        double scaledGridSize;
        static readonly int BASE_GRID_SIZE = 20;
        static readonly int MAX_SCALE = 4;
    }
}

and the main additions there beyond what I could do in 10586 are (I think):

- The new CoreWetStrokeUpdateSource is letting me pick up the ink as it is being input 'wet' and manipulate it (in my case, to snap the points to a new place).
- The simultaneous touch/ink support is letting me use manipulations on the InkCanvas to apply a scale transform to my underlying Canvas and make the grid larger/smaller with relative ease.

Here's the app actually running – note that the screen capture doesn't quite pick everything up here and some of the menus don't seem to be captured. Over on the 'Context' show, we've got an episode coming up around ink so there'll be more discussion and demo code there in the coming week or so.
https://mtaulty.com/2016/06/21/windows-10-anniversary-update-more-on-inking-with-wet-ink/
CC-MAIN-2022-21
en
refinedweb
Hello, I'm trying to use the Windows::Security::Authentication::OnlineId namespace in order to get the user's unique id. The ticket request used to authenticate the user is:

auto ticket = ref new OnlineIdServiceTicketRequest("wl.signin wl.basic", "DELEGATION");
auto pSignInHandler = auth->AuthenticateUserAsync(ticket);

Accessing the resulting UserIdentity instance works for SafeCustomerId, IsConfirmedPC and IsBetaAccount. But the rest of the properties - Id, FirstName, LastName and SignInName - throw this exception: "WinRT information: Your application cannot get the Online Id properties due to the Terms of Use accepted by the user." Why am I getting this error? Note: I confirmed the required permissions during the sign-in process. Please advise. Thanks

Hi, As far as I know, AuthenticateUserAsync is an async function, so you should use create_task and .then to get its result:

create_task(authenticator->AuthenticateUserAsync(request)).then([this](task<UserIdentity^> userIdentity)
{
});

You can follow the Windows account authorization sample. Best regards, Jesse Jiang [MSFT] MSDN Community Support | Feedback to us
https://social.msdn.microsoft.com/Forums/en-US/0698ea15-af72-4c85-a897-b1c2793df8f8/how-to-get-the-users-microsoft-live-id?forum=winappswithnativecode
CC-MAIN-2022-21
en
refinedweb
How to do line continuation in python
You can use '\n' for a next ...READ MORE
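For the question itself, a Python statement can be continued either with a trailing backslash or, more idiomatically, implicitly inside parentheses, brackets, or braces. A minimal illustration:

total = 1 + 2 + 3 + \
        4 + 5            # explicit continuation with a backslash

total = (1 + 2 + 3 +
         4 + 5)          # implicit continuation inside parentheses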
https://www.edureka.co/community/48080/how-to-do-line-continuation-in-python
CC-MAIN-2022-21
en
refinedweb
DODELETE.C

#include "mail.h"

/* Delete the current record specified by the mail structure.
 * This function is called by mail_delete() and mail_store(),
 * after the record has been located by _mail_find(). */
int
_mail_dodelete(MAIL *mail)
{
    int    i;
    char  *ptr;
    off_t  freeptr, saveptr;

    /* Set data buffer to all blanks */
    for (ptr = mail->datbuf, i = 0; i < mail->datlen - 1; i++)
        *ptr++ = ' ';
    *ptr = 0;   /* null terminate for _mail_writedat() */

    /* Set key to blanks */
    ptr = mail->idxbuf;
    while (*ptr)
        *ptr++ = ' ';

    /* We have to lock the free list */
    if (writew_lock(mail->idxfd, FREE_OFF, SEEK_SET, 1) < 0)
        err_dump("writew_lock error");

    /* Write the data record with all blanks */
    _mail_writedat(mail, mail->datbuf, mail->datoff, SEEK_SET);

    /* Read the free list pointer. Its value becomes the chain ptr field
       of the deleted index record. This means the deleted record becomes
       the head of the free list. */
    freeptr = _mail_readptr(mail, FREE_OFF);

    /* Save the contents of index record chain ptr,
       before it's rewritten by _mail_writeidx(). */
    saveptr = mail->ptrval;

    /* Rewrite the index record. This also rewrites the length of the
       index record, the data offset, and the data length, none of which
       has changed, but that's OK. */
    _mail_writeidx(mail, NULL, mail->idxoff, SEEK_SET, freeptr);

    /* Write the new free list pointer */
    _mail_writeptr(mail, FREE_OFF, mail->idxoff);

    /* Rewrite the chain ptr that pointed to this record being deleted.
       Recall that _mail_find() sets mail->ptroff to point to this chain
       ptr. We set this chain ptr to the contents of the deleted record's
       chain ptr, saveptr, which can be either zero or nonzero. */
    _mail_writeptr(mail, mail->ptroff, saveptr);

    if (un_lock(mail->idxfd, FREE_OFF, SEEK_SET, 1) < 0)
        err_dump("un_lock error");
    return(0);
}
http://read.pudn.com/downloads12/sourcecode/unix_linux/50527/DTMS/DODELETE.C__.htm
crawl-002
en
refinedweb
Exceptions.cs (OpenPOP.POP3)

/*
 * Function: exceptions
 * Author: Hamid Qureshi
 * Created: 2003/8
 * Modified: 3 May 2004 0200 GMT+5 by Hamid Qureshi
 * Description:
 * Changes: 2004/4/2 21:25 GMT+8 by Unruled Boy
 *          1.added PopServerLockException
 *          3 May 2004 0200 GMT+5 by Hamid Qureshi
 *          1.Adding NDoc Comments
 */
using System;

namespace OpenPOP.POP3
{
    /// <summary>
    /// Thrown when the POP3 Server sends an error (-ERR) during initial handshake (HELO)
    /// </summary>
    public class PopServerNotAvailableException : Exception {}

    /// <summary>
    /// Thrown when the specified POP3 Server can not be found or connected with
    /// </summary>
    public class PopServerNotFoundException : Exception {}

    /// <summary>
    /// Thrown when the attachment is not in a format supported by OpenPOP.NET
    /// </summary>
    /// <remarks>Supported attachment encodings are Base64, Quoted Printable, MS TNEF</remarks>
    public class AttachmentEncodingNotSupportedException : Exception {}

    /// <summary>
    /// Thrown when the supplied login doesn't exist on the server
    /// </summary>
    /// <remarks>Should be used only when using USER/PASS Authentication Method</remarks>
    public class InvalidLoginException : Exception {}

    /// <summary>
    /// Thrown when the password supplied for the login is invalid
    /// </summary>
    /// <remarks>Should be used only when using USER/PASS Authentication Method</remarks>
    public class InvalidPasswordException : Exception {}

    /// <summary>
    /// Thrown when either the login or the password is invalid on the POP3 Server
    /// </summary>
    /// <remarks>Should be used only when using APOP Authentication Method</remarks>
    public class InvalidLoginOrPasswordException : Exception {}

    /// <summary>
    /// Thrown when the user mailbox is in a locked state
    /// </summary>
    /// <remarks>The mail boxes are locked when an existing session is open on the mail server. Lock conditions are also met in case of aborted sessions</remarks>
    public class PopServerLockException : Exception {}
}
http://read.pudn.com/downloads54/sourcecode/windows/csharp/187217/MailAccess/OpenPOP/POP3/Exceptions.cs__.htm
crawl-002
en
refinedweb
Validation

Validating Domain Classes

Grails allows you to apply constraints to a domain class that can then be used to validate a domain class instance. Constraints are applied using a "constraints" closure which uses the Groovy builder syntax to configure constraints against each property name, for example:

class User {
    String login
    String password
    String email
    Date age

    static constraints = {
        login(size:5..15,blank:false,unique:true)
        password(size:5..15,blank:false)
        email(email:true,blank:false)
        age(min:new Date(),nullable:false)
    }
}

Note that as of Grails 0.4 your constraints must be static or an exception will be thrown.

To validate a domain class you can call the "validate()" method on any instance:

def user = new User()
// populate properties
if(user.validate()) {
    // do something with user
} else {
    user.errors.allErrors.each { println it }
}

The {{errors}} property on domain classes is an instance of the Spring org.springframework.validation.Errors interface. By default the persistent "save()" method calls validate before executing, hence allowing you to write code like:

if(user.save()) {
    return user
} else {
    user.errors.allErrors.each { println it }
}

You can also reject domain object values in a controller. You might need to do this if you don't want a new instance of an object to be created if an invalid property or id is passed in as a parameter. For instance:

if(params.networksChosen){
    def nlist = new ArrayList();
    if(params.networksChosen.class == String){
        nlist = params.networksChosen.split(",")
    }
    else
        nlist = params.networksChosen;
    nlist.each { item->
        String cleaned = item.trim()
        Network nw = Network.findByNetworkId(cleaned)
        if(!nw){
            newSnap.errors.rejectValue(
                'networks',
                'snapshot.networks.notFound',
                [cleaned] as Object[],
                'Could not locate network: {0}'
            )
        }
        else
            newSnap.addToNetworks(nw)
    }
}

For a full reference see the Validation Reference

Display Errors in the View

So your instance doesn't validate, how do you now display an appropriate error message in the view? For starters you need to redirect to the right action or view with your erroneous bean:

class UserController {
    def save = {
        def u = new User()
        u.properties = params
        if(u.save()) {
            // do something
        } else {
            render(view:'create',model:[user:u])
        }
    }
}

In this case we use the render method to render the right view; alternatively you could chain the model back to a "create" action:

chain(action:create,model:[user:u])

The chain method stored the model in flash scope so that it is available in the request even after the redirect. So now to the view: you clearly have an instance with errors, and to display them we use a special tag called "hasErrors":

<g:hasErrors
    <g:renderErrors
</g:hasErrors>

This is used in conjunction with the tag "renderErrors" which renders the errors as a list. In GSP, because you can call tags as regular method calls, it also means you can do some neat tricks to highlight the errors really easily, such as:

<div class="prop ${hasErrors(bean:user,field:'login', 'errors')}">
    <label for="login">Login</label>
    <input type="text" name="login" />
</div>

The above code will add the "errors" CSS class to the property if there are any errors for the field 'login'; now simply add a CSS style:

.errors { border: 1px solid red }

And you have the erroneous field highlighting when there is a problem.

Changing the Error Message

Of course the default error message that Grails displays is probably not what you were after, so you will want to change this.
The way you do this is by modifying the "grails-app/i18n/messages.properties" file and adding a message for the particular error code. For example, if we follow the above example, the error code may be "user.login.length.tooshort", so we add an entry:

user.login.length.tooshort=I'm sorry the login you entered wasn't quite long enough, please make it longer

For a complete list of error codes and how they correspond to validation constraints see the Validation Reference

Defining constraints for Hibernate mapped classes

To a "com.books.HibernateBook" class (either an EJB3 entity or mapped with Hibernate XML) defined above you would need to create a "com/books/HibernateBookConstraints.groovy" script in the same package as the class itself, in the src/java directory tree. Within the script just define constraints in the same way as you would do in a GORM class:

/* com.books.HibernateBookConstraints.groovy */
package com.books

constraints = {
    title(size:5..15)
    desc(blank:false)
}

Note that if there is no correct package declaration for the constraints class, Grails start-up will loop infinitely.
http://www.grails.org/Validation
crawl-002
en
refinedweb
« March 2003 | » Main « | May 2003 » Wednesday 30 April 2003 Bob (you may or may not know Bob) sent along two interesting links. How to bow is an entertaining introduction to the complexities of Japanese business manners. And I'm not sure what to make of this: ASP.NET pages in 80386 assembler. Yow. Michael Tsai draws a parallel between Apple's online music biz and Edison's attempt to sell records. I think getting the big players on board is a necessary first step to getting the concept accepted. It reminds me of Don Norman's story about how Edison's phonograph lost. The Edison story is good, and shows how the old genius made the same business mistakes as plenty of other geeks: he put too much faith in technical superiority. Tuesday 29 April 2003 I'm looking for tools for understanding how memory is being allocated and freed in my C++ heap. mpatrol looks very full-featured, but perhaps a bit too grungy to get going. I like that it is free, but would feel more optimistic about my chances with it if it mentioned Visual C++ anywhere in its documentation (it's very GNU-centric). Does anyone have any recommendations for good tools that aren't too expensive (free is ideal!), can report on leaks, fragmentation, etc, and is fairly painless to get started with? Here's a list of such tools, though I get the sense that it is a few years out-of-date. Monday 28 April 2003 Ken Arnold (whose long history includes curses, rogue, Java, and Jini) has a witty and pithy piece on programming language human factors: Are Programmers People? And If So, What to Do About It? Sunday 27 April 2003 The weather is turning nicer, and you know what that means: more juggling. I have long enjoyed juggling (my father taught me so long ago I don't remember learning), and have kept it up enough that my skills have continued to improve. A few years back I learned the three-ball shower (where the balls go in a circle, cartoon-like, rather than the standard over-under cascade pattern). Last year I finally mastered four balls, which I previously never liked because of its split-screen feel, since at its simplest, the balls stay in their assigned hand. (As with any specialized discipline, there is way more to juggling than the person in the street would guess. Take a look at some Juggling Animations to see what I mean). This year, I'm going to try to get to a five-ball cascade. I've been approaching it from a few different angles for a while now, and think I may have enough basics to just go ahead and do it. We'll see. Maybe this public declaration will force me to master it. In preparation and as incentive, I've ordered five Todd Smith beanbags. Wish me luck! Friday 25 April 2003 You'd think after 18 months of working on the Kubi Client, I would be able to do MIME structures in my sleep. But I had to look it up again today, so I'm putting this here so I can find it again. Also, I figure if I need to look something up, someone else out there may find it useful. There are three commonly-used Content-Type headers for structuring MIME messages: » read more of: MIME message structure... (3 paragraphs) Thursday 24 April 2003 Tantek talks about hand-rolled blogs. I'm interested, because I fall into his second category ("folks who rolled their own blogging content management system"). 
Photomatt followed on, with insightful comments that neatly step over the class warfare between the out-of-the-box people and the hand-rolled people, to get at the important point: Whatever you do should put as little as possible between yourself and whatever it is you love about creating your little corner of the independent web. ... Tantek is happy writing the code for his page, just as I get a buzz typing in a box and having everything else happen automagically. ... Do what makes you happy. This site is hand-rolled because I love understanding how web sites are built, and writing tools to create web sites. Some days I would like not to have to worry about it, but on those days I can just ignore the tools that are in place and working. On the days when I want to make something happen differently, I can hack on the tools. BTW: many of the hand-rollers seem to do it because they care passionately about the style and structure of the markup (that is, the tags themselves, rather than the page they produce). I can understand that passion, and would like to be able to partake in it. If you look at the HTML that produces this page, it is nothing to be proud of. Someday I will get rid of all the tables and 1-pixel gifs and do a real 21st-century CSS-driven accessible layout. But not yet. I don't have the time, focus, or stomach for the multi-browser debugging that would take. The whole thread got started by Zeldman railing against RSS feeds (because they homogenize the web experience), something I myself have felt before. Even when I use an RSS reader, I use it more as a bookmark manager with update notification than as a way to read the bodies of entries. ¶ XML.com: At Microsoft's Mercy is an interesting summary of reactions to Microsoft's new XML support in Office. ¶ Creeping toward Xanadu is a fearful commentary on the growing complexity of web standards. ¶ Ray Ozzie has started up his blog again. Tuesday 22 April 2003 In work-related news: Kubi Software Ships Kubi Client. No, it's not an episode of Friends about cheating on Monica. Two open-source projects became availble recently. Chandler 0.1 was released yesterday. It is very very early, but I applaud OSAF's determination in getting this out, and their courage. There will be many naysayers for this project; I for one am glad to have it in the mix. Vera is a collection of typefaces from Bitsteam for use in open-source projects. They look good, and should provide relief from the never-ending clones of Times and Helvetica. Sunday 20 April 2003 I wanted a t-shirt with my stellated logo on it, so I set up a store at Cafepress to make one. As a result, you can buy one if you like, but I won't be offended if you don't! Saturday 19 April 2003 OpenEXR is a new image format developed by Industrial Light & Magic to accomodate the needs of film makers. For example, the data is recorded at a higher dynamic range (using 16-bit floats) than the typical 0 (black) to 1 (white) that most image formats use. I have no need for this technology, but it would be cool if I did, and I like keeping up with advanced CG stuff. Friday 18 April 2003 Two. Wed. Monday 14 April 2003 Time). Sunday 13 April 2003 A. I. Saturday 12 April 2003 Two. Thursday 10 April 2003 Jake Howlett wrote about self-googling (which he accurately describes as vain). Googling on his whole name, then his last name, and finally his first name, he was first, second, and 27th in the respective listings. 
When I tried the same experiment, I was struck by how similar my results were: I'm the first Ned Batchelder, the second Batchelder (the first is actually a 404), and the 25th Ned. What does this mean about me and Jake? Or about blogs? Or about the web? Is this just a coincidence? What do other people get for similar queries? Latest Python tidbit: the re module has an option to write regular expressions in re.VERBOSE format. This means that whitespace can be used to layout the regular expression in a more readable style, and comments can be included with hash marks. For example, this regular expression: logFmt = '\[[0-9]{8}T[0-9]{6}\.[0-9]{3}Z:[0-9](/[0-9]*)?\][ ]*.*'logFmtRe = re.compile(logFmt) becomes: logFmt = ''' \[ [0-9]{8}T[0-9]{6}\.[0-9]{3}Z # the date :[0-9] # the severity (/[0-9]*)? # a possible facility \] [ ]*.* # the message'''logFmtRe = re.compile(logFmt, re.VERBOSE) Admittedly, regular expressions are pretty dense no matter what you do, but at least this way you can try to pull them apart a little for future readers of the code (which includes yourself starting tomorrow). I've enabled comments here, thanks to enetation. I would rather have hacked something together myself, to learn more PHP and MySQL, but this works, and enetation has done a good job providing an off-site service, so what the hey. The (react) links that used to send me email now bring up the comments window. Wednesday 9 April 2003 More amazing bookmarklets from Jesse Ruderman: Web Development Bookmarklets. Some of these are astounding, primarily test styles, which puts up a text window where you can type CSS that is applied live to the page as you type! Tuesday 8 April 2003 FreePaperToys is a portal to a world of online paper print-and-fold-and-glue models. They've got all sorts of different models, including Star Wars models. Cool. I've always been a fan of paper models. My dad has a bazillion castle models all over the place. I've designed a few myself, including a pretty nice model of my last house, made in Visio. We. » read more of: Smoke test... (5 paragraphs) Monday 7 April 2003 AccordianGuy has a scary story about lies between people, and the power of a blog to help bring out the truth: What happened to me and the new girl. Sunday 6 April 2003 I have a new IBM T30 laptop, and it is very nice, but the video adapter has a strange quirk: it can support a ton of different resolutions, and for each of them, it can drive a monitor at lots of different refresh rates, except: the LCD's natural size (1400 × 1050), where it can only do 60Hz. Grr. This means that when using an external monitor, I have to choose a different resolution, or the monitor flickers. So I switch resolutions when I switch displays, and my dispmode utility can do that, but now I have to also switch between refresh rates. So I added that to dispmode, and all is well. Chaco is a full-featured plotting package built on Numeric and wxPython. It looks very interesting in its own right, but here's the thing that caught my eye: MakeMenu. 
In the demo script (wxdemo_plot.py), I saw this code: plot_demo_menu = """ &File Open | Ctrl-O: self.on_open() --- Save as Single page...: self.create_file(None,0) One canvas per page...: self.create_file(None,1) One value per page...: self.create_file(None,2) --- Exit | Ctrl-Q: self.on_exit() &Edit Undo | Ctrl-Z [menu_undo]: self.undo() Redo | Ctrl-Y [menu_redo]: self.redo() #(etc, 30 more lines..)""" and then later: self.menu = chaco.wxMenu.MakeMenu( plot_demo_menu, self ) Very cool: a single string to define an entire menu tree, including the Python code to execute when the item is picked, with the whole menu constructed by a single call with the string. Saturday 5 April 2003 Some tool and technology quick links: ¶ cvs2rss ¶ Hydra, a collaborative editor ¶ dynamicobjects spaces ¶ Event Log Monitoring with RSS Some visually-oriented quick links: ¶ UPS has a new logo ¶ Los Angeles Times Photo Manipulation ¶ Typographica : San Serriffe ¶ The Readerville Forum - Most Coveted Covers Thursday 3 April 2003 The greatest geometer of the 20th century, Harold Scott MacDonald Coxeter, has died (obituary). He was a math uber-geek, with connections to an amazing list of other luminaries, including Bertrand Russell, Wittgenstein, and Escher. I first heard of Coxeter as one of the authors of The 59 Icosahedra, which sounds like a Hitchcock movie, but is actually a treatise on the stellations of the icosahedron. The image below is a diagram of the stellation face of the icosahedron (more information and pictures here). He contributed authoritatively to all areas of geometry, from introductory textbooks to expositions on non-Euclidean geometery, to his specialty, extending the concepts of uniform polyhedra to higher dimensions. As an example of his old-school style, he apparently never used computers, writing his papers in pencil. Just being a professional geometer interested in shapes made him seem like a throwback. Whenever I try to catch a whiff of what recent geometry work is like, it seems more like complex algebra or number theory than actual geometry (aren't there supposed to be shapes in there somewhere?). Wednesday 2 April 2003 Sean McGrath has written an article entitled A study in XML culture and evolution. It starts out well, making interesting observations about the differing culture between "document" people and "data" people. "Data" people believe in unique ids (names) for data, "document" people are satisfied with uniqueness among all the fields (addresses). Good point. Being a "document" person, he doesn't see the need for unique ids for his data. Fair enough. But then he tanks, making some sort of leap to the conclusion that since his data doesn't need names, his XML doesn't need namespaces. Huh? I'm hoping Sean was mis-edited, or maybe just had an off day. He seems otherwise to know what he is talking about. To confuse names for data with namespaces for names of attributes seems pretty basic to me. Tuesday 1 April 2003 A few kind readers answered my implicit plea for the classic quote about the differing responsibilities of producers and consumers. Mark Mascolino was the first to send a pointer to the IETF RFC it first appeared in: Jon Postel's RFC 793 - Transimission Control Protocol (that's TCP to you and me), where it appeared in a section of its own, and was even given a name (Robustness Principle): be conservative in what you do, be liberal in what you accept from others. 
Charles Miller wrote to point out the downside of the philosophy: that being liberal in acceptance means bad implementations are allowed to flourish, leaving the burden on all future implementations to forever pick up the slack. He has written about it before: Grisham trumps Postel. The XML standard took the exact opposite approach: implementation must be extremely strict, to prevent the sort of slop that HTML allowed. So what's the right thing to do? To paraphrase the witticism about standards, That's the great thing about design principles: there are so many to choose from. FontLab produces TypeTool 2, a low-cost font-editing program. I've long been fascinated by typography (I still have the printer's type specimen book my mother used briefly at the Village Voice, with my red crayon scrawls in it). TypeTool seems to be a solid if not luxurious font editor. If I only had the time (and inspiration and artisitic ability!) I would try my hand at it. Let a thousand faces bloom! 2003, Ned Batchelder
http://nedbatchelder.com/blog/200304.html
crawl-002
en
refinedweb
Docutils, the canonical library for processing and munging reStructuredText, is mostly used in an end-to-end mode where HTML or other user-consumable formats are produced from input reST files. However, sometimes it's useful to develop tooling that works on reST input directly and does something non-standard. In this case, one has to dig only a little deeper in Docutils to find useful modules to help with the task. In this short tutorial I'm going to show how to write a tool that consumes reST files and does something other than generating HTML from them. As a simple but useful example, I'll demonstrate a link checker - a tool that checks that all web links within a reST document are valid. As a bonus, I'll show another tool that uses internal table-parsing libraries within Docutils that let us write pretty-looking ASCII tables and parse them.

Parsing reST text into a Document

This tutorial is a code walk-through for the complete code sample available online. I'll only show a couple of the most important code snippets from the full sample. Docutils represents a reST file internally as your typical document tree (similarly to many XML and HTML parsers), where every node is of a type derived from docutils.nodes.Node. The top-level document is parsed into an object of type document [1]. We start by creating a new document with some default settings and populating it with the output of a Parser:

# ... here 'fileobj' is a file-like object holding the contents of the input
# reST file.

# Parse the file into a document with the rst parser.
default_settings = docutils.frontend.OptionParser(
    components=(docutils.parsers.rst.Parser,)).get_default_values()
document = docutils.utils.new_document(fileobj.name, default_settings)
parser = docutils.parsers.rst.Parser()
parser.parse(fileobj.read(), document)

Processing a reST document with a visitor

Once we have the document, we can go through it and find the data we want. Docutils helps by defining a hierarchy of Visitor types, and a walk method on every Node that will recursively visit the subtree starting with this node. This is a very typical pattern for Python code; the standard library has a number of similar objects - for example ast.NodeVisitor. Here's our visitor class that handles reference nodes specially:

class LinkCheckerVisitor(docutils.nodes.GenericNodeVisitor):
    def visit_reference(self, node):
        # Catch reference nodes for link-checking.
        check_link(node['refuri'])

    def default_visit(self, node):
        # Pass all other nodes through.
        pass

How did I know it's reference nodes I need and not something else? Just experimentation :) Once we parse a reST document we can print the tree and it shows which nodes contain what. Coupled with reading the source code of Docutils (particularly the docutils/nodes.py module) it's fairly easy to figure out which nodes one needs to catch. With this visitor class in hand, we simply call walk on the parsed document:

# Visit the parsed document with our link-checking visitor.
visitor = LinkCheckerVisitor(document)
document.walk(visitor)

That's it! To see what check_link does, check out the code sample.
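As a minimal stand-in for the full sample's check_link -- assuming nothing beyond the standard library, and treating any network error or HTTP error status as a failure -- it could look like this:

import urllib.request

def check_link(url):
    # Probe the URL with a HEAD request; urlopen raises on HTTP errors,
    # so any exception is reported as a broken link.
    request = urllib.request.Request(url, method='HEAD')
    try:
        urllib.request.urlopen(request)
    except Exception:
        print('Broken link:', url)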
Bonus: parsing ASCII grid tables with Docutils Docutils supports defining tables in ASCII in a couple of ways; one I like in particular is "grid tables", done like this: +------------------------+------------+----------+----------+ | Header row, column 1 | Header 2 | Header 3 | Header 4 | +========================+============+==========+==========+ | body row 1, column 1 | column 2 | column 3 | column 4 | +------------------------+------------+----------+----------+ | body row 2 | Cells may span columns. | +------------------------+------------+---------------------+ | body row 3 | Cells may | - Table cells | +------------------------+ span rows. | - contain | | body row 4 | | - body elements. | +------------------------+------------+---------------------+ Even if we don't really care about reST but just want to be able to parse tables like the one above, Docutils can help. We can use its tableparser module. Here's a short snippet from another code sample: def parse_grid_table(text): # Clean up the input: get rid of empty lines and strip all leading and # trailing whitespace. lines = filter(bool, (line.strip() for line in text.splitlines())) parser = docutils.parsers.rst.tableparser.GridTableParser() return parser.parse(docutils.statemachine.StringList(list(lines))) The parser returns an internal representation of the table that can be easily used to analyze it or to munge & emit something else (by default Docutils can emit HTML tables from it). One small caveat in this code to pay attention to: we need to represent the table as a list of lines (strings) and then wrap it in a docutils.statemachine.StringList object, which is a Docutils helper that provides useful analysis methods on lists of strings.
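For completeness, a rough usage sketch for the snippet above (it assumes the imports from that code sample; the exact shape of the returned structure is a Docutils implementation detail, so treat the comment below as an approximation):

text = """
+------+------+
| a    | b    |
+======+======+
| 1    | 2    |
+------+------+
"""
# parse() returns a nested structure of roughly the form
# (column widths, header rows, body rows), with per-cell span
# information alongside each cell's text lines.
print(parse_grid_table(text))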
https://eli.thegreenplace.net/2017/a-brief-tutorial-on-parsing-restructuredtext-rest/
CC-MAIN-2018-34
en
refinedweb
I am working on a project that has several different controls, and I am trying to use an attached property to dictate whether or not to perform a recalculation operation when a value changes. I am using what I thought was the standard pattern for creating an attached property:

public class RecalcProperty : DependencyObject
{
    public static readonly DependencyProperty PerformRecalcProperty =
        DependencyProperty.RegisterAttached(
            "PerformRecalc",
            typeof(Boolean),
            typeof(RecalcProperty),
            new FrameworkPropertyMetadata(false, FrameworkPropertyMetadataOptions.None));

    public static void SetPerformRecalc(UIElement element, Boolean value)
    {
        element.SetValue(PerformRecalcProperty, value);
    }

    public static Boolean GetPerformRecalc(UIElement element)
    {
        return (Boolean)element.GetValue(PerformRecalcProperty);
    }
}

Hi, I've written a workflow-level ErrorMessage as an attached property on the root workflow activity, at design time. I made the attached property for ErrorMessage written as an InArgument (to support some custom control that works well with InArguments). Here's some of the design time code:

AttachedProperty<InArgument<string>> ErrorMessage = new AttachedProperty<InArgument<string>>
{
    OwnerType = typeof(ActivityBuilder),
    IsBrowsable = true,
    Name = "ErrorMessage",
    Getter = (mi => {
        InArgument<string> temp;
        ...

I am trying to use attached properties in a style and I get errors. Here is my code:

<Setter Property="local:InputBehaviour.HonkOnKeyPress" Value="True" /> <!-- honking -->
<Setter Property="local:InputBehaviour.IsDigitOnly" Value="True" /> <!-- digit-only -->
</Style>
</Window.Resources>
</Window>
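For what it's worth, once a Get/Set pair like the one above is in place, attaching the property in XAML is a one-liner. This assumes local is an xmlns mapping to the namespace that contains RecalcProperty:

<Button Content="Recalculate"
        local:RecalcProperty.PerformRecalc="True" />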
http://www.dotnetspark.com/links/62728-attached-property.aspx
CC-MAIN-2017-04
en
refinedweb
public class CacheConfig extends Object implements Cloneable

Java Beans-style configuration for a CachingHttpClient. Any class in the caching module that has configuration options should take a CacheConfig argument in one of its constructors. A CacheConfig instance has sane and conservative defaults, so the easiest way to specify options is to get an instance and then set just the options you want to modify from their defaults.

N.B. This class is only for caching-specific configuration; to configure the behavior of the rest of the client, configure the HttpClient used as the "backend" for the CachingHttpClient.

Cache configuration can be grouped into the following categories.

303 caching. RFC2616 explicitly disallows caching 303 responses; however, the HTTPbis working group says they can be cached if explicitly indicated in the response headers and permitted by the request method. (They also indicate that disallowing 303 caching is actually an unintended spec error in RFC2616.) This behavior is off by default, to err on the side of a conservative adherence to the existing standard, but you may want to enable it.

Weak ETags on PUT/DELETE If-Match requests. RFC2616 explicitly prohibits the use of weak validators in non-GET requests; however, the HTTPbis working group says that while the limitation for weak validators on ranged requests makes sense, weak ETag validation is useful on full non-GET requests, e.g., PUT with If-Match. This behavior is off by default, to err on the side of a conservative adherence to the existing standard, but you may want to enable it.

Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, wait

Fields:

public static final int DEFAULT_MAX_OBJECT_SIZE_BYTES
public static final int DEFAULT_MAX_CACHE_ENTRIES
public static final int DEFAULT_MAX_UPDATE_RETRIES
public static final boolean DEFAULT_303_CACHING_ENABLED
public static final boolean DEFAULT_WEAK_ETAG_ON_PUTDELETE_ALLOWED
public static final boolean DEFAULT_HEURISTIC_CACHING_ENABLED
public static final float DEFAULT_HEURISTIC_COEFFICIENT
public static final long DEFAULT_HEURISTIC_LIFETIME
public static final int DEFAULT_ASYNCHRONOUS_WORKERS_MAX
public static final int DEFAULT_ASYNCHRONOUS_WORKERS_CORE
public static final int DEFAULT_ASYNCHRONOUS_WORKER_IDLE_LIFETIME_SECS
public static final int DEFAULT_REVALIDATION_QUEUE_SIZE
public static final CacheConfig DEFAULT

Methods:

@Deprecated public CacheConfig() - deprecated; use CacheConfig.Builder.
@Deprecated public int getMaxObjectSizeBytes() - deprecated; use getMaxObjectSize().
@Deprecated public void setMaxObjectSizeBytes(int maxObjectSizeBytes) - deprecated; use CacheConfig.Builder. maxObjectSizeBytes - size in bytes.
public long getMaxObjectSize()
@Deprecated public void setMaxObjectSize(long maxObjectSize) - maxObjectSize - size in bytes.
public boolean isNeverCacheHTTP10ResponsesWithQuery() - true to not cache query string responses, false to cache if explicit cache headers are found.
public int getMaxCacheEntries()
@Deprecated public void setMaxCacheEntries(int maxCacheEntries)
public int getMaxUpdateRetries()
@Deprecated public void setMaxUpdateRetries(int maxUpdateRetries)
public boolean is303CachingEnabled() - true if it is enabled.
public boolean isWeakETagOnPutDeleteAllowed() - true if it is allowed.
public boolean isHeuristicCachingEnabled() - true if it is enabled.
@Deprecated public void setHeuristicCachingEnabled(boolean heuristicCachingEnabled) - heuristicCachingEnabled should be true to permit heuristic caching, false to disable it.
public float getHeuristicCoefficient()
@Deprecated public void setHeuristicCoefficient(float heuristicCoefficient) - sets the fraction of the interval between the Last-Modified and Date headers of a cached response during which the cached response will be considered heuristically fresh. heuristicCoefficient should be between 0.0 and 1.0.
public long getHeuristicDefaultLifetime()
@Deprecated public void setHeuristicDefaultLifetime(long heuristicDefaultLifetimeSecs) - default lifetime used when the Last-Modified freshness calculation is not available. heuristicDefaultLifetimeSecs is the number of seconds to consider a cache-eligible response fresh in the absence of other information. Set this to 0 to disable this style of heuristic caching.
public boolean isSharedCache() - true for a shared cache, false for a non-shared (private) cache.
@Deprecated public void setSharedCache(boolean isSharedCache) - isSharedCache - true to behave as a shared cache, false to behave as a non-shared (private) cache. To have the cache behave like a browser cache, you want to set this to false.
public int getAsynchronousWorkersMax() - maximum number of background threads used for revalidations due to the stale-while-revalidate directive. A value of 0 means background revalidations are disabled.
@Deprecated public void setAsynchronousWorkersMax(int max) - max - number of threads for revalidations due to the stale-while-revalidate directive; a value of 0 disables background revalidations.
public int getAsynchronousWorkersCore() - core number of background threads used for revalidations due to the stale-while-revalidate directive.
@Deprecated public void setAsynchronousWorkersCore(int min) - min - should be greater than zero and less than or equal to getAsynchronousWorkersMax().
public int getAsynchronousWorkerIdleLifetimeSecs()
@Deprecated public void setAsynchronousWorkerIdleLifetimeSecs(int secs) - secs - idle lifetime in seconds.
public int getRevalidationQueueSize()
@Deprecated public void setRevalidationQueueSize(int size)
protected CacheConfig clone() throws CloneNotSupportedException - overrides clone in class Object.
public static CacheConfig.Builder custom()
public static CacheConfig.Builder copy(CacheConfig config)
public String toString() - overrides toString in class Object.
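Since the constructor and setters above are deprecated in favor of the builder, a typical setup looks roughly like this (a sketch; CachingHttpClients is the companion builder class in the same caching module, available in HttpClient 4.3+):

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.cache.CacheConfig;
import org.apache.http.impl.client.cache.CachingHttpClients;

public class CacheConfigExample {
    public static void main(String[] args) throws Exception {
        // Build an immutable cache configuration via the builder.
        CacheConfig cacheConfig = CacheConfig.custom()
                .setMaxCacheEntries(1000)
                .setMaxObjectSize(8192)   // bytes
                .setSharedCache(false)    // behave like a private (browser-style) cache
                .build();

        // Wire it into a caching client.
        CloseableHttpClient client = CachingHttpClients.custom()
                .setCacheConfig(cacheConfig)
                .build();
        client.close();
    }
}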
http://hc.apache.org/httpcomponents-client-ga/httpclient-cache/apidocs/org/apache/http/impl/client/cache/CacheConfig.html
CC-MAIN-2017-04
en
refinedweb
Java program to calculate sum of digits in a number

By the way, you can also use this example to learn Recursion in Java. It's a tricky concept, and examples like this certainly help to understand and apply recursion better.

/**
 * Java program to calculate sum of digits for a number using recursion and iteration.
 * Iterative solution uses while loop here.
 *
 * @author Javin Paul
 */
public class SumOfDigit {

    public static void main(String args[]) {

        System.out.println( "Sum of digit using recursion for number 123 is " + sumOfDigits(123));
        System.out.println( "Sum of digit using recursion for number 1234 is " + sumOfDigits(1234));
        System.out.println( "Sum of digit from recursive function for number 321 is " + sumOfDigits(321));
        System.out.println( "Sum of digit from recursive method for number 1 is " + sumOfDigits(1));

        System.out.println( "Sum of digit using Iteration for number 123 is " + sumOfDigitsIterative(123));
        System.out.println( "Sum of digit using while loop for number 1234 is " + sumOfDigitsIterative(1234));
    }

    public static int sumOfDigits(int number){
        if(number/10 == 0) return number;
        return number%10 + sumOfDigits(number/10);
    }

    public static int sumOfDigitsIterative(int number){
        int result = 0;
        while(number != 0){
            result = result + number%10;
            number = number/10;
        }
        return result;
    }
}

Output:
Sum of digit using recursion for number 123 is 6
Sum of digit using recursion for number 1234 is 10
Sum of digit from recursive function for number 321 is 6
Sum of digit from recursive method for number 1 is 1
Sum of digit using Iteration for number 123 is 6
Sum of digit using while loop for number 1234 is 10

That's all on how to find the sum of digits of a number using recursion in Java. You should be able to write this method using both Iteration, i.e. using loops, and Recursion, i.e. without using loops, in Java.

2 comments :

I may do it in this way, it's easier for me to understand,

public static int sumOfDigits(int digit) {
    int re = 0;
    while(digit != 0) {
        re += digit%10;
        digit /= 10;
    }
    return re;
}

Sum of digits is the addition of the digits present in the number. You can get the code from
http://javarevisited.blogspot.com/2013/05/java-program-to-find-sum-of-digits-in-number-recursion.html?showComment=1375207441580
CC-MAIN-2017-04
en
refinedweb
namespace boost { namespace math {

template <class T1, class T2, class T3>
calculated-result-type ibeta_derivative(T1 a, T2 b, T3 x);

template <class T1, class T2, class T3, class Policy>
calculated-result-type ibeta_derivative(T1 a, T2 b, T3 x, const Policy&);

}} // namespaces

This function finds some uses in statistical distributions: it computes the partial derivative with respect to x of the incomplete beta function ibeta. The return type of this function is computed using the result type calculation rules when T1, T2 and T3 are different types. The final Policy argument is optional and can be used to control the behaviour of the function: how it handles errors, what level of precision to use, etc. Refer to the policy documentation for more details. Almost identical to the incomplete beta function ibeta. This function just exposes some of the internals of the incomplete beta function ibeta: refer to the documentation for that function for more information.
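A minimal usage sketch (this assumes the header boost/math/special_functions/beta.hpp, where the incomplete beta family lives):

#include <boost/math/special_functions/beta.hpp>
#include <iostream>

int main() {
    // Partial derivative of ibeta(a, b, x) with respect to x,
    // evaluated at a = 2, b = 3, x = 0.5.
    double d = boost::math::ibeta_derivative(2.0, 3.0, 0.5);
    std::cout << d << '\n';
    return 0;
}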
http://www.boost.org/doc/libs/1_50_0/libs/math/doc/sf_and_dist/html/math_toolkit/special/sf_beta/beta_derivative.html
CC-MAIN-2017-04
en
refinedweb
Posted 05 Apr 2015

If you create a method with the same name as some base static class, and you call that static class from within the same class where you have this method, JustDecompiler fails to decompile properly.

Original code:

class MethodAsBaseClassName
{
    public bool Convert(object a)
    {
        return (bool)System.Convert.ToBoolean((string)a);
    }
}

Decompiled code:

internal class MethodAsBaseClassName
{
    public MethodAsBaseClassName()
    {
    }

    public bool Convert(object a)
    {
        return Convert.ToBoolean((string)a);
    }
}

The namespace qualifier is missing, so the compiler can't compile the code: it thinks it should call the method MethodAsBaseClassName.Convert instead of the System.Convert static class.
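Until this is fixed, a hand-corrected version of the decompiled call needs a full qualifier so the compiler resolves the static class rather than the instance method, e.g.:

public bool Convert(object a)
{
    // global:: forces lookup from the root namespace, so this resolves
    // to the System.Convert static class, not this instance method.
    return (bool)global::System.Convert.ToBoolean((string)a);
}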
http://www.telerik.com/forums/bug-method-and-base-static-class-with-same-name
CC-MAIN-2017-04
en
refinedweb
See .

Hello Justin - I've been digging into Vibrate and Battery, since they seemed similar to adding in lights, though I've also been looking at radio and audio. The two paths seem to be: Add Lights into the HAL. This is done by adding functionality into the hal/gonk and then calling the hal_impl version from Hal.cpp. Do I need to stub out the same functions in hal/linux, windows, fallback, android, just so those will compile? Also, do I need to do the loop back in hal/sandbox? From there, create a manager in dom/lights (similar to the battery manager already there). Then add the new interfaces into dom/base/Navigator.cpp and idl. The alternative is to add in to dom/system/b2g (or maybe create a new dom/lights/b2g) and create an idl there and either directly handle the lights or create a proxy or daemon to control the lights (though I think that is overkill in this case). The latter seems more self-contained and probably cleaner. However, it is a bit more opaque as to how it all hooks up and becomes available to the javascript. Probably need to read more of the XPCOM interface. Thoughts? -Jim Straus

Let's do the following - add interface to Hal.h - add a Lights.cpp gonk impl to hal/gonk - add a Lights.cpp fallback stubs to hal/fallback. Use this everywhere !gonk. - add a forwarding impl to hal/sandbox/SandboxHal.cpp Trust me, that will be much simpler than implementing this in dom/system.

Does comment 2 answer all your questions, Jim?

I don't think we want DOM apis for most of this. The lights will be driven from inside Gecko. If wifi is on, we turn on the wifi light. If a notification is pending, we turn on the notification light. For now just expose Gecko-internal privileged APIs.

We do want DOM APIs for the backlight, keyboard light, and maybe softkey backlight, though, right? (We could hardcode that the softkey backlight is on whenever the screen is on -- this is what my Nexus S does, and I think it's virtuous. But it's a policy decision, so I think this is better left to JS code.)

Hi Jim, how is this work coming along? We need these changes to better support the backlight of the "maguro" device.

Created attachment 587822 [details] [diff] [review] Patch to Gecko This adds GetLight and SetLight, modifies GetBrightness and SetBrightness to use the new interface and should expose the functions to js.

Created attachment 587824 [details] [diff] [review] Patch to glue/gonk/device/samsung/c1-common Extends liblights to support getting the state of a light.

Created attachment 587825 [details] [diff] [review] Patch to glue/gonk/hardware/libhardware Extends liblights to support getting the state of a light.

Added patches. Note that this extends the liblights interface to support reading the state of the lights. On the Samsung, the button lights can't be read, but the backlight can. Can someone please review?
If they're not logically distinct, please post a patch including all changes. It looks to me like these patches should be concatenated. - what's the consumer of nsIHal? - we'll be able to test the lights implementation by using DOM APIs exposed to content, and then checking the hw state changes with virtual qemu devices. I wouldn't bother with creating hal/tests. This may sound a little odd, but we typically only test external interfaces; tests of internal interfaces like hal tend to not be worth their value. Comment on attachment 587825 [details] [diff] [review] Patch to glue/gonk/hardware/libhardware We can't extend the android hal API, because libhardware is typically provided as a proprietary blob. Unfortunately we need to stay synced with upstream on this :(. The "r-" here means, "this patch isn't going in the right direction, so let's take a different approach." Comment on attachment 587824 [details] [diff] [review] Patch to glue/gonk/device/samsung/c1-common Is this in support of the added "get_light" hal API? If so, the same comment applies here. I'm clearing the request flag here to mean, "there are questions I need answered". Do we have to strictly support the Android libraries (I assume this is what you mean by staying in synch with upstream) or can we ask manufacturers to have extended versions of the libraries (that are fully backward compatible)? Right now, I don't believe there is an abstract way to retrieve the state of the lights, which means that everyone has to build custom functions to control things like the backlight independent of libhardware. We can ask, but for now we can't rely on it. For example, the patch here would cause us to crash on the maguro, as things stand currently. We should assume we can't change android-hal/libhardware.so at all until proven otherwise. Yes, we'll need to track the light state in gecko. C'est la guerre ;). Comment on attachment 587822 [details] [diff] [review] Patch to Gecko Hi Jim, It's pretty hard for me to review concatenated patches like this. Can you either flatten or separate them into logically distinct pieces, posted separately? Thanks! Created attachment 589365 [details] [diff] [review] Patch to Gecko to allow light controls Notes for review: SetLight takes parameters of which light is being set the mode for setting, the flash mode for setting the flashOMS and flashOffMS for the user flash mode the color (32-bit values of ARGB, converted to a brightness if that's all that's supported) Hal.cpp - Added in calls to SetLight and GetLight Hal.h - Defines the calls to SetLight and Get Light, the various lights that can be set, the light modes and flash modes. FallbackHal.cpp Default implementations of SetLight and GetLight. SetLight does nothing and GetLight returned full on. GonkHal.cpp Does the actual work, calling liblights. I removed the references to get_light and am now caching the last value set in SetLight, returning that. If/when we extend liblights.h, we can convert back. There are conditional HAVEGETLIGHT in the code to allow for easy conversion. Note that SetScreenBrightness uses SetLight and GetScreenBrightness uses GetLight. Also note that the old GetScreenBrightness actually read the screen brightness, but not in a generic way. PHal.ipdl Defines a structure and functions for GetLight and SetLight across process boundaries. nsIHal.idl Constants and functions for access from JS Chris. Hopefully this will be easier to review. No patches to gonk. I'll create a new bug for that. 
Comment on attachment 589365 [details] [diff] [review] Patch to Gecko to allow light controls Hi Jim, Looks very good, thanks! Some comments are below. >diff --git a/dom/system/b2g/nsIHal.idl b/dom/system/b2g/nsIHal.idl Since we don't have a consumer of this interface yet, let's pull the nsIHal part of this patch out and save it for if we do. Filing that as a separate bug, like you did for bug 718897, would be great. (We may not ever need to use this interface directly from JS.) >diff --git a/hal/Hal.h b/hal/Hal.h >--- a/hal/Hal.h >+++ b/hal/Hal.h >@@ -41,16 +41,17 @@ > #define mozilla_Hal_h 1 > > #include "mozilla/hal_sandbox/PHal.h" > #include "base/basictypes.h" > #include "mozilla/Types.h" > #include "nsTArray.h" > #include "prlog.h" > #include "mozilla/dom/battery/Types.h" >+#include "nsString.h" I don't believe that this #include is needed. >+enum { >+ HAL_HARDWARE_UNKNOWN = -1, >+ HAL_HARDWARE_FAIL = 0, >+ HAL_HARDWARE_SUCCESS = 1, >+ HAL_LIGHT_ID_BACKLIGHT = 0, >+ HAL_LIGHT_ID_KEYBOARD = 1, >+ HAL_LIGHT_ID_BUTTONS = 2, >+ HAL_LIGHT_ID_BATTERY = 3, >+ HAL_LIGHT_ID_NOTIFICATIONS = 4, >+ HAL_LIGHT_ID_ATTENTION = 5, >+ HAL_LIGHT_ID_BLUETOOTH = 6, >+ HAL_LIGHT_ID_WIFI = 7, >+ HAL_LIGHT_ID_COUNT = 8, >+ HAL_LIGHT_MODE_USER = 0, >+ HAL_LIGHT_MODE_SENSOR = 1, >+ HAL_LIGHT_FLASH_NONE = 0, >+ HAL_LIGHT_FLASH_TIMED = 1, >+ HAL_LIGHT_FLASH_HARDWARE = 2 >+}; >+ Since we're in C++ here, we can make these separate named |enum| types. For example, enum LightType { LIGHT_BACKLIGHT, LIGHT_KEYBOARD, //... }; This will help the C++ compiler catch abuses of the API. >+/** >+ *. >+ */ >+long SetLight(const long& light, const long& mode, const long& flash, const long& flashOnMS, const long& flashOffMS, const long& color); >+long GetLight(const long& light, long *mode, long *flash, long *flashOnMS, long *flashOffMS,long *color); Couple of things here - |long| is guaranteed to be the same size or smaller (Windows 64-bit, sigh) than the ISA word size, so |const long&| doesn't save any stack space. Using plain |long| arguments is fine, or |const long| if you want the C++ compiler to check immutability of the arguments within the function definition. - but, since you've already defined |struct LightConfiguration| for IPC, please use it here for the hal:: API. That would allow writing the cleaner API bool SetLightConfig(LightType aWhich, const hal::LightConfiguration& aConfig); bool GetLightConfig(LightType aWhich, hal::LightConfiguration* aConfig); (I wrote |bool| return values here because I'm not sure we need to distinguish between HAL_HARDWARE_UNKNOWN and HAL_HARDWARE_FAIL. We can always change this later.) >diff --git a/hal/gonk/GonkHal.cpp b/hal/gonk/GonkHal.cpp >-const char *screenBrightnessFilename = "/sys/class/leds/lcd-backlight/brightness"; > double > GetScreenBrightness() > { > void > SetScreenBrightness(double brightness) > { \o/, these changes are righteous! >+ >+struct Devices { >+ light_device_t* lights[HAL_LIGHT_ID_COUNT]; >+}; >+ >+static Devices* devices = NULL; >+ Another couple of small nits - the Gecko style for static variables is |static Foo sFoo| - what's the intended usage of the Devices struct? We're not trying to free it on shutdown, and indeed that might be pretty hard. I would recommend either * keep |struct Devices|, and have it use ClearOnShutdown to free the memory (xpcom/base/ClearOnShutdown.h) * or, changing |struct Devices| into static light_device_t sLights[LIGHT_COUNT]; and not bothering with freeing the memory for now. 
>+light_device_t* get_device(hw_module_t* module, char const* name) A few more nits - |static light_device_t*| - Gecko style for naming functions is LikeThis(). I don't like it personally, but that's the style. >+/** >+ * The state last set for the lights until liblights supports >+ * getting the light state. >+ * >+ * @author jstraus (1/13/2012) We track author information using the " * Contributor(s):" section in the file header, and we track blame using our version control tools. Feel free to add yourself to the " * Contributor(s):" list in this file! :) But, this is annotation isn't necessary. >+static light_state_t StoredLightState[HAL_LIGHT_ID_COUNT]; >+ Nit: naming style is |sStoredLightState|. >+long >+SetLight(const long& light, const long& mode, const long& flash, const long& flashOnMS, const long& flashOffMS, const long& color) >+{ >+ light_state_t state; >+ >+ if (!devices) { Please refactor this initialization code into a helper function. >+ int err; >+ hw_module_t* module; >+ >+ devices = (Devices*)malloc(sizeof(Devices)); If you keep |struct Devices|, please make this a call to |new Devices()|, and memset all the pointers to 0 in the constructor. >+ err = hw_get_module(LIGHTS_HARDWARE_MODULE_ID, (hw_module_t const**)&module); >+ if (err == 0) { >+ devices->lights[HAL_LIGHT_ID_BACKLIGHT] >+ = get_device(module, LIGHT_ID_BACKLIGHT); This could be written more compactly with an auxiliary data structure like struct LightTypeName { LightType mType; const char* mName; } kLightIds[] = { LIGHT_BACKLIGHT, LIGHT_ID_KEYBOARD, //... LIGHT_COUNT, nsnull }; and then in this code, a |for| loop over the kLightIds. (In Gecko style, "k" means "constant".) >+ memset(&state, 0, sizeof(light_state_t)); >+ state.color = color; >+ state.flashMode = flash; >+ state.flashOnMS = flashOnMS; >+ state.flashOffMS = flashOffMS; >+ state.brightnessMode = mode; >+ Adding a helper to convert between |LightConfiguration| and |light_state_t| might be useful. >+long >+GetLight(const long& light, long *mode, long *flash, long *flashOnMS, long *flashOffMS, long *color) >+{ >+ *color = state.color; >+ *flash = state.flashMode; >+ *flashOnMS = state.flashOnMS; >+ *flashOffMS = state.flashOffMS; >+ *mode = state.brightnessMode; >+ And similarly here, a helper for converting light_state_t -> LightConfiguration. >diff --git a/hal/sandbox/PHal.ipdl b/hal/sandbox/PHal.ipdl >--- a/hal/sandbox/PHal.ipdl >+++ b/hal/sandbox/PHal.ipdl >@@ -43,16 +43,25 @@ include protocol PBrowser; > namespace mozilla { > > namespace hal { > struct BatteryInformation { > double level; > bool charging; > double remainingTime; > }; >+ struct LightInformation { This is just a naming nit, but I think |LightConfiguration| might be clearer here. This is used to get/set the requested parameters of a particular light, not query any varying state. >+ long light; I don't feel particularly strongly about whether the light ID is part of the configuration or not. I could see arguments both ways. I'll leave that up to your judgment. This should use the LightType enum we add to Hal.h >+ long mode; >+ long flash; Similarly, these should use more specific enum types. >+ long flashOnMS; >+ long flashOffMS; Since |long| is an architecture-specific type, and our IPC system works across processes that run with different architecture types (x86 vs. x86-64, currently) I generally discourage use of variable-sized types in IPC decls. I think uint32_t would work just as well here. >+ long color; This definitely needs to be a fixed-size type, uint32_t. 
>+ long status; Since this is the status of a particular request, not a general status, I don't think it should live in LightConfiguration. You can "return" as many values in IPC response messages as you want, so if this was added here to ensure only one return value, that's not necessary. For example, sync GetLight(LightType light) returns (bool status, LightInformation aLightInfo); is perfectly legal. >+ sync SetLight(long light, long mode, long flash, long flashOnMS, long flashOffMS, long color) returns (long status); >+ sync GetLight(long light) returns (LightInformation aLightInfo); > Let's use LightConfiguration for these, and split out status per above. This was a fair number of comments, but it's really a bunch of small style stuff. This patch is close to ready to land. (I cleared the review request to mean, "I would like to see an updated patch with these comments addressed".) Thanks! Created attachment 590422 [details] [diff] [review] Patch to Gecko to allow light controls Notes for review: nsIHal is pulled and in a separate bug for future inclusion. Personally, I think we should expose as much as we can, so I don't want it to get lost. You never know when someone will make creative use of a device. Enums were created for the various constants and used throughout. LightConfiguration (formally LightInformation) is the interface and used throughout. I kept the hardware fail vs. hardware unknown. If the light doesn't exist you get hardware unknown. If there is a problem actually controlling a light, you get hardware fail. May not make a difference, but conceivably one could iterate over the lights with GetLight and see which ones exist. Fixed the naming conventions and use of uint32_t. I made the allocation a static. It's small and should never go away once initialized. Thanks for the info on the ipdl being able to return more than one value. Question: Can enums be in the ipdl? I didn't see an example, so they aren't in there now. Comment on attachment 590422 [details] [diff] [review] Patch to Gecko to allow light controls Jim, you have to set the requestee so chris gets notified of your review request. Add his email in the requestee field next to the ?. Comment on attachment 590422 [details] [diff] [review] Patch to Gecko to allow light controls I posted a couple comments. Chris is the module owner, so I can't actually review this. Comment on attachment 590422 [details] [diff] [review] Patch to Gecko to allow light controls >diff --git a/hal/Hal.h b/hal/Hal.h >+enum HALStatus { Call this LightStatus. You didn't convince me that this enum return is useful, because you didn't describe a concrete use within Gecko. The current users of SetLight() don't even check the return value. But this bug is dragging on too long so let's just get the code landed. I'll leave it up to you whether you think the complication to this interface is worth a potential use in the future. You know my opinion ;). >+enum LightMode { The semantics of this isn't obvious, please document it. >+/** >+ *. >+ */ >+uint32_t SetLight(const hal::LightType& light, const hal::LightConfiguration &aConfig); >+uint32_t GetLight(const hal::LightType& light, hal::LightConfiguration &aConfig); What's the returned value here mean? Docs need to describe that. Is it LightStatus? If so, use the C++ type. Or, save yourself and future users some trouble and use bool ;). >diff --git a/hal/fallback/FallbackHal.cpp b/hal/fallback/FallbackHal.cpp >+#include "nsIHal.h" This doesn't exist anymore. 
>+uint32_t >+SetLight(const LightType& light, const hal::LightConfiguration& aConfig) >+{ >+ return HAL_HARDWARE_SUCCESS; According to your docs above, this should have returned UNKNOWN. This doesn't help convince me that the return enum is useful ;). >diff --git a/hal/gonk/GonkHal.cpp b/hal/gonk/GonkHal.cpp >+ int status; >+ hal::LightType light = hal::HAL_LIGHT_ID_BACKLIGHT; >+ >+ status = hal::GetLight(light, aConfig); |status| is going to generate a compiler warning because it's a dead variable. Remove it. Still not arguing for an enum return type ... ;) >+ int brightness = aConfig.color() & 0xFF; >+ return brightness / 255.0; Add a note that we assume that the backlight is monochromatic so it doesn't matter which color component we return. This assumption is maintained by SetScreenBrightness(). > void > SetScreenBrightness(double brightness) > // Convert the value in [0, 1] to an int between 0 and 255, then write to a > // string. This comment isn't true anymore. > int val = static_cast<int>(round(brightness * 255)); uint32_t val = int32_t(round(brightness * 255)); >+ int color = (val<<16) + (val<<8) + val; >+ uint32_t. According to lights.h, you have to set the high byte to 0xff. We have a nice helper to manage color components somewhere in Gecko, but I don't remember where and it's probably not worth the trouble here. >diff --git a/hal/sandbox/PHal.ipdl b/hal/sandbox/PHal.ipdl >+ struct LightConfiguration { >+ uint32_t light; >+ uint32_t mode; >+ uint32_t flash; You didn't address my previous comment to use the C++ types here. >+ sync SetLight(uint32_t light, LightConfiguration aConfig) returns (uint32_t status); >+ sync GetLight(uint32_t light) returns (LightConfiguration aConfig, uint32_t status); Same here. >diff --git a/hal/sandbox/SandboxHal.cpp b/hal/sandbox/SandboxHal.cpp >+long Wrong return value. >+SetLight(const hal::LightType& light, const hal::LightConfiguration &aConfig) >+{ >+ uint32_t status = -1; Don't hard-code this value. Either switch to bool or stick to the named enum values. Per your documentation above, I think you should use ERROR as the default return here. This is also not helping your case for the enum return ;). >+long Same as above. >+GetLight(const hal::LightType& light, hal::LightConfiguration &aConfig) >+{ >+ uint32_t status = -1; Same as above. I'm a bit concerned about numerous comments that weren't addressed here. I'm going to need to see another version of the patch. Please comment here, or e-mail or ping me on IRC if you have questions. Created attachment 591334 [details] [diff] [review] Patch to Gecko to allow light controls Figured out how to add enums to ipdl, changed the names to fit more the style (starting with leading "e", camel cased). Changed code to make use of it. Fixed up comments for the enumeration. Changed return type to bool, false = failed, true = succeed. HalStatus doesn't exist any more. Comment on attachment 591334 [details] [diff] [review] Patch to Gecko to allow light controls >diff --git a/hal/Hal.h b/hal/Hal.h >+/** >+ * GGET the value of a light returninn a particular color, with a specific flash pattern. Couple of typos here. >+ *. You don't need to duplicate this part of the comment from SetLight(). >diff --git a/hal/HalTypes.h b/hal/HalTypes.h >@@ -0,0 +1,108 @@ >+/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ >+/* ***** BEGIN LICENSE BLOCK ***** Please use the new license block /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. 
If a copy of the MPL was not distributed with this file, * You can obtain one at. */ I just switched my emacs macro over to it today. Sorry for not pointing this out before. Please keep the modelines. Looks good! Please just fix up the minor nits above and let's get this landed! :D Created attachment 591621 [details] [diff] [review] Patch to Gecko to allow light controls Fixed typos, reduced comment, updated licenses across files Comment on attachment 591621 [details] [diff] [review] Patch to Gecko to allow light controls Oh sorry ... I meant only update the license header on the new file you added. But, thanks for doing this anyway, needed to be done. :) Also, for future reference, if I mark a patch "r+" that means I don't need to see the changes you make to address my review comments. If you feel like the changes should get another look-over, then by all means request one. But it's not required. Hi Jim, this patch doesn't apply anymore over mozilla-central revision 5b0900b3e71c (hg) user: Kyle Huey <khuey@kylehuey.com> date: Tue Jan 31 11:38:24 2012 -0500 summary: Bug 563318: Switch to MSVC 2010 on trunk. r=ted $ hg qpush applying 712378 patching file hal/sandbox/PHal.ipdl Hunk #2 FAILED at 66 1 out of 2 hunks FAILED -- saving rejects to file hal/sandbox/PHal.ipdl.rej patching file hal/sandbox/SandboxHal.cpp Hunk #2 FAILED at 119 Hunk #3 FAILED at 234 2 out of 3 hunks FAILED -- saving rejects to file hal/sandbox/SandboxHal.cpp.rej patch failed, unable to continue (try -v) patch failed, rejects left in working dir errors during apply, please fix and refresh 712378 Please update the patch and I'll push it to tryserver for you. Created attachment 593725 [details] [diff] [review] patch Created attachment 593726 [details] [diff] [review] patch Rebased. We desperately need this patch. Created attachment 593743 [details] [diff] [review] updated Rebased, lots of build-bustage fixes (Jim, need to make sure patches you post build! :) ), and addressed some review comments that were missed. Sorry had to backout since it conflicted with bug 697641's backout, which was causing failures on all native Android tests. (Can land after rebase). Created attachment 593960 [details] [diff] [review] Patch to Gecko to allow light controls Merged with latest m-c trunk Chris, I always do a build before submitting patches. Jim, please rebase attachment 593743 [details] [diff] [review], which contains numerous build fixes and addresses some review comments that were overlooked. Thanks!
https://bugzilla.mozilla.org/show_bug.cgi?id=712378
CC-MAIN-2017-04
en
refinedweb
Daemon threads always run as background threads. They are typically used to perform services for your application/applet. The core difference between user threads and daemon threads is that the JVM will only shut down a program when all user threads have terminated; daemon threads are terminated by the JVM when there are no longer any user threads running, including the main thread of execution.

To set a thread as a daemon thread, use the following methods:

setDaemon(true/false) // This method is used to specify that a thread is a daemon thread.
public boolean isDaemon() // This method is used to determine whether the thread is a daemon thread or not.

Example:

public class DaemonThreadExample extends Thread {

    public void run() {
        System.out.println("Entering run method");
        try {
            System.out.println("In run method: currentThread() is " + Thread.currentThread());
            while (true) {
                try {
                    Thread.sleep(500);
                } catch (InterruptedException x) {}
                System.out.println("In run method: woke up again");
            }
        } finally {
            System.out.println("exit run method");
        }
    }

    public static void main(String[] args) {
        System.out.println("Entering main method");
        DaemonThreadExample t = new DaemonThreadExample();
        t.setDaemon(true);
        t.start();
        try {
            Thread.sleep(3000);
        } catch (InterruptedException x) {}
        System.out.println("Leaving main method");
    }
}
https://www.mindstick.com/interview/1785/what-is-daemon-thread-in-java
CC-MAIN-2017-04
en
refinedweb
Synopsis

- auto cmd ...
- knar body
- knit name arguments ?process? body
- knead arguments ?process? body
- util auto varnames script

Download

knit is also available as ycl::knit::knit, along with unit tests.

Description

knit is useful for those times that you want something like eval, but with the ability to programmatically manipulate the script to be evaluated. It creates a procedure that, when run, makes substitutions to body and then evaluates body at the caller's level. process, if provided, is a script evaluated at the local level before body is processed and evaluated. process provides a sandbox in the form of a local procedure in whose scope macro variable and command substitutions are processed.

knit uses tailcall to provide some of the features that were problematic in earlier macro systems. Each macro is a procedure that fills out a template according to the arguments it receives, and then tailcalls the template.

knit takes an EIAS approach to macros, meaning that it does not try to discern the structure of the template it is filling in, and instead provides the macro author a convenient syntax to choose how substitutions are made. It turns out that just a small number answer most needs. All macro substitutions happen textually. They do not respect the syntactical flow of the Tcl script. It's the responsibility of the script author to make sure the macros produce a syntactically correct script.

knead can be used to build a macro procedure specification without actually creating the macro procedure. knit is implemented as a trivial wrapper around knead. knead itself is useful for creating anonymous macros:

apply [knead x {expr {${x} * ${x}}}] 5

In turn, knot is a simple wrapper around knead that also executes apply.

knar performs only macro command substitution.

auto generates the arguments to the specified command from variables of the same name at the caller's level. If varnames is the empty string, auto also generates varnames from body.

In contrast with Sugar, knit is more interleaved with the running interpreter, as Lisp macros are. Where Sugar attempts to parse a script and discern macros, knit inserts the macro code at runtime when the macro procedure is invoked. In order to do its expansions, Sugar must know, for example, that the first argument to while is evaluated as an expression. knit is oblivious to such things, allowing it to fit more naturally into a Tcl script. Since knit macros are themselves procedures, knit eschews the issue that {*} raises for Sugar, and in general automatically has the features of a procedure that the merely-procedure-like macros in Sugar have to work hard for. One example is default arguments and another is $argv handling. The tradeoff is that knit incurs some cost during runtime that Sugar does not, namely the cost of the tailcall.

util auto simply performs variable macro substitutions in a script and returns the script. If varnames is the empty string, varnames is derived from script. Substituted values are retrieved from the level of the caller.

Macro Substitutions

- ${arg} - Replaced by the value of $arg, properly escaped as a list.
- #{arg} - Replaced by the value of $arg, without escaping the value as a list. This is useful, for example, to substitute a fragment of an expression into expr, or to substitute a few lines of code into a routine. Also useful for substituting in a command prefix.
- !{argname} - This is simply for convenience, and is exactly equivalent to [set ${argname}].
In other words, the value ${argname} is the name of a variable, and it will be arranged for the value of that variable to be substituted at execution time. In the examples below, lpop2 does the same thing as lpop, but, thanks to !{argname}, is a little more concise.

- [`name ...] - Replaced by the returned value of the command macro named name. By default, the replacement value is rescanned for additional macros until all macros are expanded. Useful to perform more complex and arbitrary substitutions. This happens prior to the macro variable substitutions so that any variable macros in the substituted text are still processed as usual later. The standard Tcl substitutions are performed at the level of the process script. This gives command macros access to any variables defined by that script.

Command Macros

- addvars varnames - Each varname in varnames is added to the list of macro substitution variables to process.
- def name args script - Create a macro named name that substitutes each arg from args into script, as described for knead.
- defdo name args script ?value...? - Performs def, and then do, passing it the value arguments.
- do name ?value...? - Executes the macro command named name, passing it all the value arguments.
- eval script - script is evaluated at the level of the process script, and the returned value is the value of the macro command.
- foreach varname list ?varname list ...? script - Like foreach: each set of values extracted from the lists is assigned to the corresponding varname names, and macro variable substitutions in script are processed against these variable names. This is useful for inlining commands or producing nearly redundant code from boilerplate, taking advantage of byte-compilation of procedures.
- if ... - Like if, but triggered body arguments are simply returned.
- script script - Like eval, but the value of the macro command is the empty string.

Configuration

The following variables can be set to configure the behaviour of knit and friends:

- knit::knarname - The string that, when preceded by [, indicates a command macro. The default is `.
- knit::recursive - A boolean value that indicates whether command macros should be recursively processed.

Customizing

To create a customized knit, use ycl::dupensemble to duplicate the ycl::knit ensemble, and then add commands to the cmds child namespace of the namespace of the new ensemble. A conforming macro command accepts one argument, cmdargs, and returns a value that is to be substituted.

Examples

These examples show how the macros presented in Sugar, along with various other macros, are implemented in knit:

- knit unit tests - Toy examples.
- ycl::chan - Uses the foreach macro to substitute some boilerplate code.
- lswitch - The most extensive example yet. Uses knit to implement a switch for lists.
knit double x {expr {${x} * 2}}
knit exp2 x {* ${x} * ${x}}
knit charcount {x {char { }}} {
    regexp -all ***=${char} ${x}
}
knit clear arg1 {unset ${arg1}}
knit first list {lindex ${list} 0}
knit rest list {lrange ${list} 1 end}
knit last list {lindex ${list} end}
knit drop list {lrange ${list} 0 end-1}
knit K {x y} {
    first [list ${x} ${y}]
}
knit yank varname {
    K [set ${varname}] [set ${varname} {}]
}
knit lremove {varname idx} {
    set ${varname} [lreplace [yank ${varname}] ${idx} ${idx}]
}
knit lpop listname {
    K [lindex [set ${listname}] end] [lremove ${listname} end]
}
knit lpop2 listname {
    K [lindex !{listname} end] [lremove ${listname} end]
}
foreach cmdname {* + - /} {
    knit $cmdname args "
        expr \[join \${args} [list $cmdname]]
    "
}
knit sete {varname exp} {
    set ${varname} [expr {#{exp}}]
}
knit greeting? x {expr {${x} in {hello hi}}}
knit until {expr body} {
    while {!(#{expr})} ${body}
}
knit ?: {cond val1 val2} {
    if {#{cond}} {lindex ${val1}} else {lindex ${val2}}
}
knit finally {init finally do} {
    #{init}
    try ${do} finally ${finally}
}

Sometimes only the macro command preprocessing is wanted. Using [knar] alone is rather like running a C file through the preprocessor. Here's an example:

proc p1 {some arguments} [knar {
    [` foreach x {1 2 3} y {4 5 6} {
        set coord#{x} ${y}
        lappend res $coord#{x}
    }]
    # There is no variable named "x" in this scope at runtime. Macro
    # expansions operate in their own sandbox (a local procedure in which the
    # script to be evaluated is generated).
    return $res
}]
p1 ;# -> 1 4 2 5 3 6
http://wiki.tcl.tk/40693
CC-MAIN-2017-04
en
refinedweb
Improving JSF Security Configuration With Secured Managed Beans By edort on Oct 01, 2007 Java EE allows you to protect web resources through declarative security, but this approach does not allow you to protect local beans used by servlets and JavaServer Pages (JSPs). Also, although you can protect JavaServer Faces technology (JSF) pages using declarative security, this is often not sufficient. This tip will show you a way to extend JSF security configuration beyond web pages using managed bean methods. Introduction Java EE allows you to protect web pages and other web resources such as files, directories, and servlets through declarative security. In this approach you declare in a web.xml file specific web resources and the security roles that can access those resources. For example, based on the following declarations in a web.xml file, only authenticated users who are assigned the admin security role can access the secured resources identified by the URL pattern /members.jsf: <security-constraint> <display-name>Sample</display-name> <web-resource-collection> <web-resource-name>members</web-resource-name> <description/> <url-pattern>/members.jsf</url-pattern> <http-method>GET</http-method> <http-method>POST</http-method> </web-resource-collection> <auth-constraint> <description/> <role-name>admin</role-name> </auth-constraint> </security-constraint> <security-role> <description/> <role-name>admin</role-name> </security-role> Notice that you identify the resources you want to protect by specifying their URLs in a <url-pattern> element. Unfortunately, because local beans used by servlets and JavaServer Pages (JSP) cannot be mapped to a <url-pattern> element, you can't use declarative security to protect local beans. Also, although you can protect JSF pages using declarative security, this is often not sufficient. For example, you might want a JSF application to present the same page to users with different roles, but only allow some of those roles to perform specific operations. For instance, you might allow users with all of those roles to read and update data, but allow users with specific roles to create and delete data. In that case, you need a way to extend JSF security beyond web pages. Additionally, declarative security doesn't check roles during the request processing commonly used by MVC frameworks and JSF. As a result, a managed bean can return any view id even if it's for a protected resource. This can potentially expose protected resources to a role that should not have access to them. One solution is to use JBoss Seam Web Beans or JSR 299: Web Beans. Web Beans allow you to configure page security, component security, and even Java Persistence Architecture entity security. However, many companies are adopting simpler security solutions without Seam, Spring, EJB, or security-specific frameworks. The technique covered in this tip demonstrates a simple approach that extends JSF security using annotations in managed beans methods. A sample application accompanies this tip. The code examples in the tip are taken from the source code of the sample application. Declare the Extended JSF ActionListener and NavigationHandler To provide managed bean method protection you need to declare the extended JSF ActionListener and NavigationHandler. These custom classes analyze each user action and check for authentication and authorization. 
To enable the classes, you declare the following elements inside the faces-config.xml file: <!-- JSF-security method--> <application> <action-listener> br.com.globalcode.jsf.security.SecureActionListener </action-listener> <navigation-handler> br.com.globalcode.jsf.security.SecureNavigationHandler </navigation-handler> </application> SecureActionListener intercepts calls to managed bean methods and checks for annotated method permissions. NavigationHandler forwards the user to a requested view if the user has the required credentials and roles. For example, the following code renders a JSF page with a View button and a Delete button. <h:form <h:commandButton <h:commandButton </h:form> When the user clicks on the Delete button, a call is made to the CustomerCRUD.delete method. The method includes an annotation that declares a required role for the method. public class CustomerCRUD { public String view() { return "view-customer"; } @SecurityRoles("customer-admin-adv, root") public String delete() { System.out.println("I'm a protected method!"); return "delete-customer"; } ... SecureActionListener intercepts calls to CustomerCRUD.delete and checks for the customer-admin-adv and root permissions. NavigationHandler forwards the user to a requested view if the user has the required credentials and roles. Set Up User Object Providers By adding a context parameter into web.xml, you can set up different user object providers, as follows: ContainerUserProvider: Integrate with container/declarative security. SessionUserProvider: Look up Http session for object named "user". - Your Provider: Implement the UserProviderinterface: <context-param> <param-name>jsf-security-user-provider</param-name> <param-value> YourClassImplementsUserProvider </param-value> </context-param> Set Up the ContainerUserProvider The web container provider approach is integrated with declarative security, so it can be used with applications that already use declarative security. Add the following context parameter to set up the default container user provider: <context-param> <param-name>jsf-security-user-provider</param-name> <param-value> br.com.globalcode.jsf.security.usersession.ContainerUserProvider </param-value> </context-param> Here is what the default web container user provider class looks like: public class ContainerUserProvider implements UserProvider { ContainerUser user = new ContainerUser(); public User getUser() { if(user.getLoginName()==null || user.getLoginName().equals("")) { return null; } else { return user; } } ContainerUserProvider references the ContainerUser class. Here's what the ContainerUser class looks like (some of the code lines are cut to fit the width of the page): public class ContainerUser implements User { public String getLoginName() { if(FacesContext.getCurrentInstance().getExternalContext(). getUserPrincipal()==null) return null; else return FacesContext.getCurrentInstance(). getExternalContext().getUserPrincipal().toString(); } public boolean isUserInRole(String roleName) { return FacesContext.getCurrentInstance().getExternalContext(). isUserInRole(roleName); } Using a SessionUserProvider If your solution uses a custom security authentication and authorization process, you can provide a user class adapter that implements the given user interface and bind a user object instance into the HTTP Session with the key name "user". This approach works well for legacy Java EE or J2EE applications that don't use declarative security. 
Follow these steps to set up your application to use a SessionUserProvider:

- Add the following context parameter to the web.xml file to set up the user provider to look up the HTTP Session for the "user" object:

<context-param>
    <param-name>jsf-security-user-provider</param-name>
    <param-value>
        br.com.globalcode.jsf.security.usersession.SessionUserProvider
    </param-value>
</context-param>

- Create your User class adapter implementation:

package model;

public class MyUser implements br.com.globalcode.jsf.security.User {

    //Your user instance object

    public String getLoginName() {
        //your user bridge
        return "me";
    }

    public boolean isUserInRole(String roleName) {
        //your user roles bridge
        return true;
    }
}

- Provide a login page with a navigation case called login:

//Login page
<h:form>
    <h:outputText
    <h:inputText
    <h:outputText
    <h:inputText
    <h:commandButton
    <h:messages/>
</h:form>

<navigation-case>
    <from-outcome>login</from-outcome>
    <to-view-id>/login.xhtml</to-view-id>
</navigation-case>

- Write a login managed bean that checks the user credentials and puts (or not) the user object into the HTTP session:

public class LoginMB {

    private String userName;
    private String password;

    @SecurityLogin
    public void login() {
        //Your login process here...
        MyUser user = new MyUser();
        HttpSession session = (HttpSession) FacesContext.getCurrentInstance().
            getExternalContext().getSession(false);
        session.setAttribute("user", user);
    }
}
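For reference, @SecurityRoles and @SecurityLogin are plain runtime annotations. The library's actual definitions may differ, but a minimal sketch of @SecurityRoles would be:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SecurityRoles {
    // Comma-separated roles allowed to invoke the annotated method,
    // e.g. @SecurityRoles("customer-admin-adv, root").
    String value();
}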
Posted by Lou Blocker on October 19, 2007 at 01:31 AM PDT # All the source code is hosted at facesannotation.dev.java.net Posted by Vinicius Senger on November 05, 2007 at 12:45 AM PST # Excellent excellent work. I see that as edge technology. I was searching for a good, alternative way to secure my web apps, in a programmer's friendly way. Either in jsp,jsf or visual jsf I thought that the best way to control which components should appear to the user is something like this : a hashtable containing user roles as keys and lists of components Ids as values. So, every request on a page should end up traversing a list depending on the role(s) of the current user, marking as visible or rendered the components contained in that list. This solution seems to suit my needs and goes along with my principles of elegant programming. But when it came down to MBean methods I was horrified. I use declarative security (an LDAP realm connected to Active Directory) in Glassfish. I couldnt find a suitable way to hire Filters or Listeners to do the job :). All I needed was security annotation, implementing method security. Well, I am extremely happy I found your work, although its brazilian? commented :) Posted by Stratos Pavlakis on January 07, 2008 at 06:55 PM PST # Yes! We are brazilians. Now the project is in progress and we are integrating it better with declarative security and also in the near future will be possible to use the concept of config by exception, where you can override an annotation config with xml document. Thanks for your feed-back and feel free to write me questions. Regards, Vinicius Senger Posted by Vinicius Senger on January 22, 2008 at 08:01 AM PST # Thanks for your feed-back, we are improving the project with new features. We are from Sao Paulo, Brazil. Feel free to contact us. Posted by Vinicius Senger on January 31, 2008 at 12:02 AM PST # i'm having trouble running the example on tomcat 6.0.. Posted by futch3 on February 03, 2008 at 06:16 AM PST # Which kind of trouble / exception you are having? How are you packing the war and which jar you have inside WEB-INF/lib? Posted by Vinicius Senger on February 07, 2008 at 04:22 AM PST # good Posted by guest on February 11, 2008 at 07:58 PM PST # Hi Vinicius, I would like know how can I run this application with other container ou web server like TomCat or JBoss. Thank you Posted by Junior de Paula Sousa on February 15, 2008 at 02:41 AM PST # how to give the link on command button and open the next page in the current page in jsf Posted by roshan on February 15, 2008 at 03:30 PM PST # Hi Vinicius, These are good snippets, but I try it, and I don't understand, how I can set up my role mapping to users. The Myuser class isUserInRole() method allways return true, it means, that everybody has every role permissions? Thank you for your answer! Bye! Csaba Posted by Csaba on February 23, 2008 at 03:57 AM PST # Great article. Exactly what I needed! -Cameron McKenzie Posted by Cameron McKenzie on February 26, 2008 at 12:26 AM PST # I would like to ask you to use the official discussion / support forum at facesannotations.dev.java.net. Post the complete exception (if the case). This library runs on Jboss 4.x, Tomcat 5.x and Glassfish. Thanks, Vinicius Senger Posted by Vinicius Senger on March 23, 2008 at 11:53 PM PDT # Is there a way to use these techniques to hide or gray out the button if the user doesn't have security? Thanks, --Erik Ostermueller Posted by Erik Ostermueller on April 30, 2008 at 06:30 AM PDT # Excellent article. Thanks. 
Posted by Yuriy Semen on October 25, 2008 at 05:58 PM PDT # Hi Vinicius, Is there any way to intercept actionListeners ? Because the <action-listener> can only intercept actions. <h:commandLink In this exemple only mb.action is intercepted by the action-listener class. Thanks Posted by boutaounte faissal on June 20, 2009 at 06:24 AM PDT # Hello Boutaounte, We will be working on a mechanins to intercept anything but with JSF 2.0 only... If you need it in JSF 1.2 I can give you the directions or try to find some time to write this simple code to you. Thanks a lot about your feed-back, Vinicius Senger Posted by Vinicius Senger on June 22, 2009 at 02:12 AM PDT # Thank you very much Vinicius :) I integrated this to the JSF core by adding some codes to "javax.faces.event.MethodExpressionActionListener.processAction(ActionEvent actionEvent)" but I would like to do it without changing any thing in this method. I will be very happy to receive directions from you. Thanks Posted by boutaounte faissal on June 22, 2009 at 03:19 AM PDT # Hi, everybody! I can not download the archive with an example (). Gives the following error: Page Not Fund. Who has this file (ttsept2007FacesSec.zip), send it to me, please, to my e-mail: devw08@gmail.com. Thanks in advance! Posted by devw08 on November 09, 2010 at 08:58 PM PST #
https://blogs.oracle.com/enterprisetechtips/entry/improving_jsf_security_configuration_with
CC-MAIN-2017-04
en
refinedweb
I have a class like this:

    class C(object):
        a = 5
        def __iadd__(self, other):
            self.a += other

    b = C()
    b += 7

You forgot to return self from the __iadd__ method:

    def __iadd__(self, other):
        self.a += other
        return self

From the object.__iadd__() documentation:

    These methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self).

Without an explicit return statement, a function returns None as a default, so b += 7 produced None, which is the result of the assignment.
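Putting the fix back into the class from the question gives the expected behaviour (a quick check, not part of the original thread):

    class C(object):
        a = 5
        def __iadd__(self, other):
            self.a += other
            return self

    b = C()
    b += 7
    print(b.a)  # 12, and b is still a C instance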
https://codedump.io/share/YzFihQstK5wQ/1/how-do-i-make-quotx--5quot-with-x-a-custom-class-add-to-a-property-of-x
CC-MAIN-2017-04
en
refinedweb
We know it's a leap year if it's divisible by four and, if it's a century year, it's divisible by 400. I thought I would need two If Statements like this:

    def isLeap(n):
        if n % 100 == 0 and n % 400 == 0:
            return True
        if n % 4 == 0:
            return True
        else:
            return False

    # Below is a set of tests so you can check if the code is correct.
    from test import testEqual
    testEqual(isLeap(1944), True)
    testEqual(isLeap(2011), False)
    testEqual(isLeap(1986), False)
    testEqual(isLeap(1956), True)
    testEqual(isLeap(1957), False)
    testEqual(isLeap(1800), False)
    testEqual(isLeap(1900), False)
    testEqual(isLeap(1600), True)
    testEqual(isLeap(2056), True)

but two of the tests fail:

    1800 - Test Failed: expected False but got True
    1900 - Test Failed: expected False but got True

I also tried:

    if n % 4 and (n % 100 == 0 and n % 400 == 0):
        return True
    else:
        return False

which fails the other way:

    1944 - Test Failed: expected True but got False
    1956 - Test Failed: expected True but got False
    2056 - Test Failed: expected True but got False

try this:

    return (n % 100 != 0 and n % 4 == 0) or n % 400 == 0

The problem is that you want the year to be divisible cleanly by 4 OR by 400 if it's a century year.

    >>> [(x % 100 != 0 and x % 4 == 0) or x % 400 == 0 for x in [1944, 1956, 2056, 1800, 1900]]
    [True, True, True, False, False]
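Dropping the answer's expression into the original function shape makes the whole test list above pass:

    def isLeap(n):
        # divisible by 4, except century years, which must be divisible by 400
        return (n % 100 != 0 and n % 4 == 0) or n % 400 == 0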
https://codedump.io/share/pBcIud1YmwHT/1/determining-leap-years
CC-MAIN-2017-04
en
refinedweb
<Andy_Carr@tertio.com> asked > I have written an XML schema (that makes use of the standard XML Schema > definitions) but am having zero success in getting the parser to find my > schema definition file. > > The top of my XML file looks like: > > <?xml version="1.0" ?> > <GMI xmlns=""> > <ServiceTag>CSO</ServiceTag> > ...etc... I don't know how Xerces finds schemas, but a namespace declaration does not have anything to do with assigning schemas or anything else. It only gives you a way to distinguish names that might otherwise be taken to be the same. You definitely don't want something as changeable as a file location on your own machine to be a namespace designator. Tom Passin
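In practice, the hint that actually points a validating parser at an .xsd file is given separately from the namespace declaration, via xsi:schemaLocation; schematically (the URI and file name here are placeholders, not from the original message):

<GMI xmlns="http://example.com/gmi"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://example.com/gmi GMI.xsd">
  ...
</GMI>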
http://mail-archives.apache.org/mod_mbox/xml-general/200009.mbox/%3C002e01c01825$9673c540$38a3f1ce@mitretek.org%3E
CC-MAIN-2017-04
en
refinedweb
thanks for your suggestions. Actually let me be more specific about the technical issue here:

I have a chemistry HTML tutorial I wish to include in a collection for the school. There are example text files which are linked to the main files via URL. I wish to include these in the collection without having to process them. That is to say I place them in the import directory but do not wish for greenstone to process the example files beyond perhaps recognizing the internal link to these files. The problems are two fold.

1. greenstone processes the files (example_xx.txt files)
2. The original functioning links now fail with the error indicated earlier

I will now try your approach but have 1 concern. Does using the -nolinks HTMLPlug option mean that images will not be moved to the assoc directory? Then you would have to move the graphics content outside greenstone?

----- Original Message -----
From: "John R. McPherson" <jrm21@cs.waikato.ac.nz>
To: "desiree' simon" <rjae-1@att.net>
Cc: <greenstone@tripath.colosys.net>
Sent: Tuesday, September 03, 2002 10:17 PM
Subject: Re: How do I specify an internal http link across document

> desiree' simon wrote:
>
> > I want to be able to http-link one internal document to another. However,
> > when I edit the html docs to include links of the forms:
> >
> > 1. href=
> >
> > or realtive link
> >
> > 2. "href="examples/page/page.html"
> >
> > I am getting the error message
> >
> > "For reasons beyond our control the internal link you specify does not
> > exist".
>
> > Questions:
> >
> > given two documents page_1.html and page_2.html. How do I
> > specify an internal URL linking page_1 to pape_2?
>
> Hi,
> I can think of a couple of things to try:
>
> 1)
> href=
>
> this won't work as it is looking for an internet server named "gsdl"
> and then /collect on that server. You could try
> but I don't think it's a good idea to link to the import directory.
> Greenstone can handle internal links...
>
> 2) If you really want to give hard-coded links, edit your collect.cfg
> file so that for HTMLPlug you include a certain option, like:
> plugin HTMLPlug -nolinks
> This means that greenstone won't do any interpretation of the links
> and they will be displayed exactly as they are in the source documents.
>
> 3) You could use the "-file_is_url" option to HTMLPlug as above.
> This is normally used when building a collection from a web mirror,
> so the file might be called ""
> etc. Internal links work for collections I've built when I mirrored
> some of our university pages...
> I don't know if it will work in your situation though. Let the list
> know if it does!
>
> Hope this helps,
> John McPherson
http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0gsarch--00-0----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-about---00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&a=d&d=000c01c25436$ede266c0$0801febf-tlclinux-org
CC-MAIN-2017-04
en
refinedweb
Get-DfsnAccess

Applies To: Windows 10 Technical Preview, Windows Server Technical Preview

Syntax

Detailed Description

The Get-DfsnAccess cmdlet gets account names and access types for users and groups that have permissions for a Distributed File System (DFS) namespace folder. You can use the Grant-DfsnAccess cmdlet and the Revoke-DfsnAccess cmdlet to manage access for DFS namespace folders. For more information about DFS namespaces, see Overview of DFS Namespaces on TechNet.

The cmdlet gets permissions for the folder specified by the Path parameter. Provide a complete path for a folder, not a partial or relative path.

Example: Get permissions for a folder

This command gets permissions for a DFS namespace folder that has the path \\Contoso\Software\Projects.
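The example's command line itself is missing here; assuming the cmdlet's documented -Path parameter, it would be:

PS C:\> Get-DfsnAccess -Path "\\Contoso\Software\Projects"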
https://technet.microsoft.com/en-us/library/jj884269.aspx
CC-MAIN-2017-04
en
refinedweb
Objectives

- Declare and create arrays of primitive, class or array types
- How to initialize the elements of an array
- Number of elements in the array
- Create a multi-dimensional array
- Write code to copy array values from one array type to another

Declaring Arrays

Group data objects of the same type. Declare arrays of primitive or class types:

char s[];
Point p[]; // where Point is a class
char[] s;
Point[] p;

Create space for a reference. An array is an object; it is created with new.

Creating Arrays

Use the new keyword to create an array object. Example (the slide also shows the execution stack and heap for this code, with a referencing the characters 'A' through 'Z'):

public char[] createArray() {
    char[] a;
    a = new char[26];
    for(int i = 0; i < 26; i++) {
        a[i] = (char)('A' + i);
    }
    return a;
}

Creating Arrays - Object Array

(The slide again shows the execution stack and heap, with the Point[] referencing ten Point objects.)

public Point[] createArray() {
    Point[] p;
    p = new Point[10];
    for(int i = 0; i < 10; i++) {
        p[i] = new Point(i, i + 1);
    }
    return p;
}

Initialize Arrays

String names[];
names = new String[3];
names[0] = "George";
names[1] = "Jen";
names[2] = "Simon";

MyDate dates[];
dates = new MyDate[2];
dates[0] = new MyDate(22, 7, 1976);
dates[1] = new MyDate(22, 12, 1974);

String names[] = { "George", "Jen", "Simon" };
MyDate dates[] = {
    new MyDate(22, 7, 1976),
    new MyDate(22, 12, 1974)
};

Multi-Dimensional Arrays

Arrays of arrays:

int twoDim[][] = new int[4][];
twoDim[0] = new int[5];
twoDim[1] = new int[5];

int twoDim[][] = new int[][4]; // illegal

Non-rectangular array of arrays:

twoDim[0] = new int[2];
twoDim[1] = new int[4];
twoDim[2] = new int[6];
twoDim[3] = new int[8];

Shorthand to create 2-dimensional arrays:

int twoDim[][] = new int[4][5];

Array Bounds

All array subscripts begin at 0:

int list[] = new int[10];
for(int i = 0; i < list.length; i++) {
    System.out.println(list[i]);
}

Array Resizing

Cannot resize an array. Can use the same reference variable to refer to an entirely new array:

int elements[] = new int[6];
elements = new int[10];

In this case, the first array is effectively lost unless another reference to it is retained elsewhere.
Copying Arrays

The System.arraycopy() method:

// original array
int elements[] = { 1, 2, 3, 4, 5, 6 };
// new larger array
int hold[] = { 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
// copy all of the elements array to the
// hold array, starting at the 0th index
System.arraycopy(elements, 0, hold, 0, elements.length);

Inheritance

The is a Relationship - The Employee Class

(UML: Employee, with +name: String = "", +salary: double, +birthDate: Date, +getDetails(): String)

public class Employee {
    public String name = "";
    public double salary;
    public Date birthDate;
    public String getDetails() { ... }
}

The is a Relationship - The Manager Class

(UML: Manager, with +name: String = "", +salary: double, +birthDate: Date, +department: String, +getDetails(): String)

public class Manager {
    public String name = "";
    public double salary;
    public Date birthDate;
    public String department;
    public String getDetails() { ... }
}

The is a Relationship

(UML: Manager extends Employee, adding +department: String = "")

public class Employee {
    public String name = "";
    public double salary;
    public Date birthDate;
    public String getDetails() { ... }
}

public class Manager extends Employee {
    public String department = "";
}

Single Inheritance

When a class inherits from only one class, it is called single inheritance. Single inheritance makes code more reliable. Interfaces provide the benefits of multiple inheritance without the drawbacks.

Constructors Are Not Inherited

A subclass inherits all methods and variables from the superclass (parent class). A subclass does not inherit the constructor from the superclass.

Note: A parent constructor is always called in addition to a child constructor.

Polymorphism

Polymorphism is the ability to have different forms. An object has only one form. A reference variable can refer to objects of different forms:

Employee emp1 = new Manager();
// Illegal attempt to assign a Manager attribute:
emp1.department = "Sales";

Heterogeneous Collections

Collections of objects of the same class type are called homogeneous collections:

MyDate[] dates = new MyDate[2];
dates[0] = new MyDate(22, 12, 1976);
dates[1] = new MyDate(22, 7, 1974);

Collections of objects with different class types are called heterogeneous collections:

Employee[] staff = new Employee[1024];
staff[0] = new Manager();
staff[1] = new Employee();
staff[2] = new Engineer();

Polymorphic Arguments

Because a Manager is an Employee:

// in the Employee class
public TaxRate findTaxRate(Employee e) { ... }

// elsewhere in the application
Manager m = new Manager();
...
TaxRate t = findTaxRate(m);

The instanceof Operator

public class Employee extends Object
public class Manager extends Employee
public class Engineer extends Employee
---------------------------------------

public void doSomething(Employee e) {
    if(e instanceof Manager) {
        // Process a Manager
    } else if(e instanceof Engineer) {
        // Process an Engineer
    } else {
        // Process any other type of Employee
    }
}

Casting Objects

- Use instanceof to test the type of an object
- Restore full functionality of an object by casting
- Check for proper casting using the following guidelines:
  - Casts up the hierarchy are done implicitly
  - Downward casts must be to a subclass and are checked by the compiler
  - The object type is checked at runtime, when runtime errors can occur

The has a Relationship

(UML: Truck has 1 Engine)

public class Vehicle {
    private Engine theEngine;
    public Engine getEngine() {
        return theEngine;
    }
}

Access Control

Variables and methods can be at one of four access levels: public, protected, default or private.
Classes can be public or default.

Modifier     Same Class   Same Pkg   Subclass   Universe
public       Yes          Yes        Yes        Yes
protected    Yes          Yes        Yes
default      Yes          Yes
private      Yes

Protected access is provided to subclasses in different packages.

Overloading Method Names

Example:

public void println(int i);
public void println(float f);
public void println(String s);

- Argument lists must differ
- Return types can be different

Overloading Constructors

As with methods, constructors can be overloaded. Example:

public Employee(String name, double salary, Date dob)
public Employee(String name, double salary)
public Employee(String name, Date dob)

- Argument lists must differ
- The this reference can be used at the first line of a constructor to call another constructor

Overriding Methods

A subclass can modify behavior inherited from a parent class. A subclass can create a method with different functionality than the parent's method with the same:

- Name
- Return type
- Argument list

Overriding Methods - Virtual Method Invocation

Employee e = new Manager();
e.getDetails();

Compile-time type and runtime type.

Rules About Overridden Methods

- Must have a return type that is identical to the method it overrides
- Cannot be less accessible than the method it overrides

The super Keyword

- super is used in a class to refer to its superclass
- super is used to refer to the members of the superclass, both data attributes and methods
- Behavior invoked does not have to be in the superclass; it can be further up in the hierarchy

Invoking Parent Class Constructors

To invoke a parent constructor you must place a call to super in the first line of the constructor. You can call a specific parent constructor by the arguments that you use in the call to super. If no this or super call is used in a constructor, then an implicit call to super() is added by the compiler. If the parent class does not supply a non-private "default" constructor, then a compiler warning will be issued.

Constructing and Initializing Objects

Memory is allocated and default initialization occurs. Instance variable initialization uses these steps recursively:

1. Bind constructor parameters
2. If explicit this(), call recursively and skip to step 5
3. Call recursively the implicit or explicit super call, except for Object
4. Execute explicit instance variable initializers
5. Execute the body of the current constructor

The Object Class

The Object class is the root of all classes in Java. A class declaration with no extends clause implicitly uses "extends Object".

The == Operator vs. the equals Method

The == operator determines if two references are identical to each other. The equals method determines if objects are equal. User classes can override the equals method to implement a domain-specific test for equality.

Note: You should override the hashCode method if you override the equals method.

toString Method

- Converts an object to a String
- Used during string concatenation
- Override this method to provide information about a user-defined object in readable format
- Primitive types are converted to a String using the wrapper class's toString static method

Wrapper Classes

Primitive    Wrapper Class
boolean      Boolean
byte         Byte
char         Character
short        Short
int          Integer
long         Long
float        Float
double       Double
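To tie the equals/hashCode/toString slides together, a small sketch (illustrative, not from the original deck):

public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // domain-specific equality: two Points are equal if their coordinates match
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    // equal objects must produce equal hash codes
    public int hashCode() {
        return 31 * x + y;
    }

    // readable form, used during string concatenation
    public String toString() {
        return "Point(" + x + ", " + y + ")";
    }
}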
https://www.scribd.com/presentation/491860/BasicJava3
CC-MAIN-2017-04
en
refinedweb
illustrated with the following example:

import akka.event.japi.LookupEventBus;

Default Handlers

Upon start-up the actor system creates and subscribes actors to the event stream for logging: these are the handlers which are configured for example in application.conf:

akka {
  loggers = ["akka.event.Logging$DefaultLogger"]
}
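Only the import statement of the example survives above; a LookupEventBus implementation generally looks like the following sketch, written in the spirit of the Akka 2.3 docs (the MsgEnvelope and LookupBusImpl names are illustrative):

import akka.actor.ActorRef;
import akka.event.japi.LookupEventBus;

public class MsgEnvelope {
  public final String topic;
  public final Object payload;

  public MsgEnvelope(String topic, Object payload) {
    this.topic = topic;
    this.payload = payload;
  }
}

public class LookupBusImpl extends LookupEventBus<MsgEnvelope, ActorRef, String> {

  // extracts the classifier (here: the topic) from incoming events
  @Override public String classify(MsgEnvelope event) {
    return event.topic;
  }

  // invoked for each event, for every subscriber registered for its classifier
  @Override public void publish(MsgEnvelope event, ActorRef subscriber) {
    subscriber.tell(event.payload, ActorRef.noSender());
  }

  // must define a full order over the subscribers
  @Override public int compareSubscribers(ActorRef a, ActorRef b) {
    return a.compareTo(b);
  }

  // expected number of distinct classifiers, used to size the internal index
  @Override public int mapSize() {
    return 128;
  }
}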
http://doc.akka.io/docs/akka/2.3.15/java/event-bus.html
CC-MAIN-2017-04
en
refinedweb
Smart Dog Kennel Containment Electric Fence Wires Pet Training Products KD-660 US $9.5-35.0 / Piece 1 Piece (Min. Order)
Portable folding Dog House Cat and dog winter bed US $4.55-4.8 / Piece | Buy Now 30 Pieces (Min. Order)
High quality wholesale low price large outdoor fence dog kennel/wireless dog fence for sale US $24.99-31.99 / Pieces 100 Pieces (Min. Order)
Wireless outdoor electric dog training fence -026system portable fences for dogs US $30-35 / Set 10 Pieces (Min. Order)
Fine quality hot sell puppy training pads dog pee cozy pet kennel mat pad US $2.3-15 / Piece 100 Pieces (Min. Order)
pet cage with fences 510 Sets (Min. Order)
electronic pet dog cage bars of the fence , S-228 Teddy small and medium-sized dog kennel supplies of the fence US $18.0-26.0 / Piece | Buy Now 1 Piece (Min. Order)
Portable HP400 Hyperbaric Oxygen Dog Cages Supplies For Pet Training Equipment On Sale US $2000-2500 / Set 1 Set (Min. Order)
dog kennels Three tones WIN-10005 how to train a puppy with a clicker US $0.4-0.7 / Piece 5000 Pieces (Min. Order)
import to thailand dog cage Pet Pad US $0.02-0.03 / Piece 30000 Pieces (Min. Order)
Top Quality Customize Portable Dog Fence Kennels US $30-35 / Set 1 Set (Min. Order)
Enchante Accessories Dog Kennel Fence Panel Bed New Technology US $15-18 / Piece 1 Piece (Min. Order)
hot galvanized temporary mesh fence for dog US $19.5-38.5 / Piece 10 Pieces (Min. Order)
China made hot galvanized outdoor portable dog proof fence US $245-425 / Set 50 Sets (Min. Order)
netting system Smart Dog In-ground DF-113R wireless dog fence cage US $28.12-76.96 / Set 1 Set (Min. Order)
Dog Kennels Wireless Vibrator Sleep Trainer US $62.48-64.71 / Piece | Buy Now 10 Pieces (Min. Order)
Hot Wire Dog Fence Iron Fence Dog Kennel Electric Dog Fence US $17.556-23.822 / Piece 1000 Pieces (Min. Order)
pet training tunnel,play tunnel,outdoor play tunnels US $15-20 / Piece 1000 Pieces (Min. Order)
A-200 Smart Electronic wire mesh fencing dog kennel US $0.01-40 / Unit 100 Units (Min. Order)
Hight Quality !!! Wireless Dog Fences AT-216F dog fence kennel US $50-100 / Set 1 Set (Min. Order)
10 Acres Electric Fences For Kennels/Dog/Pet US $39.0-41.0 / Set | Buy Now 1 Set (Min. Order)
cheap chain link dog kennels US $39.9-99.9 / Box 50 Boxes (Min. Order)
Popular Electrical Dog Kennel Fence Panel for Training US $9.5-35.0 / Piece 1 Piece (Min. Order)
Hot Wire Dog Fence Dog Kennel Electric Dog Fence A-200 US $26.8-32.8 / Piece 10 Pieces (Min. Order)
Various styles attractive fashion pet training pad pet kennel mat pad cover US $2.3-15 / Piece 100 Pieces (Min. Order)
new material china wireless pet fencing 023 dog cage fence US $20-26 / Set 50 Pieces (Min. Order)
Designer Best Sell Hot Wire Dog Fence Kennel US $30-35 / Set 1 Set (Min. Order)
Trainertec temporary kennels DF-113R beautiful wireless dog fence US $28.12-67.96 / Set 1 Set (Min. Order)
Iron Protable Wireless Invisible Electric Fence For Dog Kennel 2016 Hot Selling X800 US $28-35 / Piece 1 Piece (Min. Order)
United States popular outdoor temporary dog fence US $245-425 / Set 50 Sets (Min. Order)
Top Selling Gadgets Dog Kennels Electric Shock Wireless Vibrator Vibrating Dog Collar US $25-30 / Piece 1 Piece (Min. Order)
http://www.alibaba.com/countrysearch/CN/cage-dog-training.html
CC-MAIN-2017-04
en
refinedweb
On Thu, 19 Jul 2007 13:25:07 -0700 (PDT)
Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Thu, 19 Jul 2007, Linus Torvalds wrote:
> >
> > A better patch should be the appended. Does that work for you too?
>
> Btw, I already committed this as obvious.
>
> I did the same for the SLAB __do_kmalloc() thing. Let's hope that
> that was the extent of the damage.
>
> Linus

Hmmmm.. The issue is really in krealloc which can be called with a NULL parameter (a special case). However, krealloc should not call ksize with NULL.

The merged patch above makes ksize(NULL) return 0. So we are returning zero size for an object that we have not allocated. Better fail if someone tries that.

The __do_kmalloc issue looks like a hunk that was somehow dropped.

IMHO: The right fix for the ksize issue would be the following patch:

Index: linux-2.6/mm/util.c
===================================================================
--- linux-2.6.orig/mm/util.c	2007-07-23 13:29:42.000000000 -0700
+++ linux-2.6/mm/util.c	2007-07-23 13:31:28.000000000 -0700
@@ -88,7 +88,11 @@ void *krealloc(const void *p, size_t new
 		return ZERO_SIZE_PTR;
 	}
 
-	ks = ksize(p);
+	if (p)
+		ks = ksize(p);
+	else
+		ks = 0;
+
 	if (ks >= new_size)
 		return (void *)p;
http://lkml.org/lkml/2007/7/23/439
CC-MAIN-2017-04
en
refinedweb
This Tutorial is most relevant to Sencha Touch, 1.x.

Sencha Touch allows you to create applications that work on both mobile phone and tablet devices, as well as use layouts that cater to different screen sizes. In addition to the display differences between types of devices, users also have certain expectations about apps' user-interface conventions. In this two-part series, we show how, with a single code-base, we can create an app which responds to these conventions, and which, through the use of the Sencha Touch 'application profiles' mechanism, delivers familiar user interfaces to both phone and tablet users. (If you want to skip ahead, part two is here).

The Basics

A modern trend in web design is to build web sites that are 'responsive' - meaning that they employ fluid layouts and techniques such as CSS media queries to adapt to a wide range of screen sizes. Whether this allows us to deliver services tailored for particular user contexts is another matter - but the question this article sets out to answer is: can we do something similar for mobile and tablet web apps?

The good news is that Sencha Touch provides a subsystem especially for this purpose, using the Ext.Application class to define, and respond to, multiple 'profiles'. In this article, we'll show how to use application profiles to handle layouts for various screen configurations. For the purposes of this walk-through, our goal will be to deliver idiomatic UIs to both phone and tablet users, in both portrait and landscape modes. These are the four profiles that we will define and work with: phone portrait, phone landscape, tablet portrait, and tablet landscape.

Note that Sencha Touch allows you to define as many different profiles as you'd like. You simply need to create the rules that allow the framework to decide which one it is in at any given point. You might like to create different profiles for different operating systems perhaps: removing app-defined back buttons when you know the device has a physical back button, for instance.

Our application is going to be a very simple one, but the principles should hold for more complex implementations. It's the 'Piet Mondrian' app - slightly contrived, admittedly - which shows information about four periods of the painter's life. The data set is going to be burnt into the app, but of course you could easily wire up an app like this to an online data source of some sort.

The key to making everything work is to define our app using the Ext.Application class. This is the standard way to construct consistent MVC-style applications, and although we're not strictly following a fully-fledged MVC pattern here, it's still good practice to use this as an architectural entry point (rather than just an ad-hoc Ext.onReady-style approach) for all Sencha Touch and Ext JS apps.

Before we go any further, you might like to read the detail in the Ext.Application API docs. Also, you might want to take a sneak peek at the finished application (with a smartphone, tablet, or WebKit desktop browser) so you know where we are heading. As we go through this tutorial, you can stay abreast of the code by following the step-by-step branches of its associated GitHub repo.

Application Structure

Let's quickly get our Mondrian app's architecture bootstrapped. Make yourself the folder structure shown below, or check out or download the GitHub repo's first branch, named 1_structure. Copy or symlink the Sencha Touch SDK as touch within the lib directory.
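Reconstructed from the files referenced through the rest of the tutorial, the layout is roughly:

mondrian/
  index.html
  app.js
  data.js
  head.jpg
  theming/
    mondrian.css
  lib/
    touch/        (the Sencha Touch SDK)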
The index.html file links to the Sencha Touch JavaScript, the app's two files, app.js and data.js, and a custom stylesheet, mondrian.css:

<!DOCTYPE html>
<html>
<head>
    <title>Mondrian</title>
    <script src="lib/touch/sencha-touch.js" type="text/javascript"></script>
    <script src="app.js" type="text/javascript"></script>
    <script src="data.js" type="text/javascript"></script>
    <link href="theming/mondrian.css" rel="stylesheet" type="text/css" />
</head>
<body></body>
</html>

For now, start with a simple application instance in app.js:

new Ext.Application({
    name: 'mondrian',
    launch: function() {
        var app = this;
        // construct UI
        var viewport = this.viewport = new Ext.Panel({
            fullscreen: true,
            layout: 'card'
        });
    }
});

The name property sets up a namespace for the application, and the launch function is our start-up code. In it, we create a handy reference to the application (so we can close over that variable in any other functions defined in launch), and instantiate a fullscreen root Ext.Panel called viewport. We make it a card layout since in one profile at least (for portrait phones), we'll be transitioning between two panes.

data.js you can leave empty for now.

In the theming directory, we'll be using Sass and Compass to compile the app's stylesheet from a single custom Sass file. We'll return to this in part two, but for now, either use the code from the GitHub repo's 1_structure branch, or just copy in the standard sencha-touch.css file from the resources/css part of the SDK and rename it to mondrian.css.

If all goes well, your app should load up from the index.html file. Don't get too excited yet - it's nothing more than a light gray screen - but let's move on quickly.

Data and a Basic UI

We're going to display four pages of information about Mondrian, each with a title and some HTML. For this, we instantiate an Ext.data.Store, containing records of a very simple Ext.data.Model with id, title and content fields. We then declare in-line data for the content itself (with attribution to Wikipedia):

mondrian.stores.pages = new Ext.data.Store({
    model: Ext.regModel('', {
        fields: [
            {name:'id', type:'int'},
            {name:'title', type:'string'},
            {name:'content', type:'string'}
        ]
    }),
    data: [
        {id: 1, title: 'Introduction', content: "<p>Pieter Cornelis 'Piet' Mondriaan" + ... },
        {id: 2, title: 'Cubism', content: "<p>In 1911, Mondrian moved to Paris" + ... },
        ...
    ]
});

Note how we can use the mondrian.stores sub-namespace to put this store in. This was created automatically by the name: 'mondrian' configuration of the main Ext.Application. Needless to say, a typical application would probably pull data from an online source. The full file is available in the GitHub repo's 2_data branch.

Let's also get a simple UI going. In the launch method, add the following component instantiations.

// the page that displays each chapter
var page = viewport.page = new Ext.Panel({
    cls: 'page',
    styleHtmlContent: true,
    tpl: '<h2>{title}</h2>{content}',
    scroll: 'vertical'
});

This is the detail page containing the main text of each page. It has a cls option to set a CSS class on the DOM element that we can use to lightly style it, and styleHtmlContent so basic HTML styling will be displayed. tpl is the template - simply the title and content fields of a model record - and then we want to ensure vertical scrolling of the page.
// the data-bound menu list
var menuList = viewport.menuList = new Ext.List({
    store: this.stores.pages,
    itemTpl: '{title}',
    allowDeselect: false,
    singleSelect: true
});

// a wrapper around the menu list
var menu = viewport.menu = new Ext.Panel({
    items: [menuList],
    layout: 'fit',
    width: 150,
    dock: 'left'
});

// a button that toggles the menu when it is floating
var menuButton = viewport.menuButton = new Ext.Button({
    iconCls: 'list',
    iconMask: true
});

The menu is a list of the chapter titles, so we use an Ext.List bound to our app's stores.pages store, with the appropriate, simple, itemTpl template. Since you can only view one page at a time, we set two selection mode flags accordingly. We also wrap the list itself in an Ext.Panel container since we will need to float it for the landscape phone and portrait tablet profiles. Lastly, we also need a button, decorated with a 'list' icon, that will toggle it on and off in that mode.

// a button that slides page back to list (portrait phone only)
var backButton = viewport.backButton = new Ext.Button({
    ui: 'back',
    text: 'Back'
});

// a button that pops up a Wikipedia attribution
var infoButton = viewport.infoButton = new Ext.Button({
    iconCls: 'info',
    iconMask: true
});

The back button is only used in the portrait phone profile and will slide the detail page back to the list. Its ui option gives us the left-hand arrow styling. Also, a simple information button will appear on all profiles and will pop up the Wikipedia attribution.

// the toolbar across the top of the app, containing the buttons
var toolbar = this.toolbar = new Ext.Toolbar({
    ui: 'light',
    title: 'Piet Mondrian',
    items: [backButton, menuButton, {xtype: 'spacer'}, infoButton]
});

The final part of the jigsaw is the lightly-colored Ext.Toolbar across the top of the application. It hosts our app's title, as well as the three buttons. We use xtype: 'spacer' to push the information button to the far right of the toolbar.

Finally, dock the toolbar to the top of the root viewport, ensure the page is part of the card layout (and activate it):

//stitch the UI together and create an entry page
viewport.addDocked(toolbar);
viewport.setActiveItem(page);
page.update('<img class="photo" src="head.jpg">');

The final line just puts a picture of the esteemed artist onto the page panel. You could equally force the first record of the store to be live (the 'Introduction', for example), but this technique will act as a sort of splash screen until the user chooses one of the menu items.

If everything is in order, we should now have something on our screen(s): the toolbar with its title and all three buttons showing at once, and the Mondrian photo beneath, with no menu in sight. This cosmetic car-crash (as well as the head.jpg image file) is available in the GitHub repo's 3_components branch.
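One piece of wiring worth noting before we move on: the list still needs a selection handler that pushes the tapped record into the page. A minimal sketch (our addition, not part of the 3_components branch; 'selectionchange' and update() are standard Sencha Touch 1.x APIs, but verify against your SDK version):

// push the selected record's data through the page's tpl
menuList.on('selectionchange', function (selectionModel, records) {
    if (records.length > 0) {
        viewport.page.update(records[0].data);
    }
});

In the portrait phone profile we would also want to call viewport.setActiveItem(viewport.page) here, so the card slides from the menu to the detail page.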
Describing the Profiles

Apart from missing icons and an uninspiring blue look, our main issue is that all of the button components are showing on the toolbar - on all devices - and that our menu is nowhere to be seen. Let's define our four profiles and make sure things appear and disappear when they are supposed to. The profiles are defined as the profiles property of our Ext.Application.

Place the following configuration alongside (not inside) the launch function:

profiles: {
    portraitPhone: function() {
        return Ext.is.Phone && Ext.orientation == 'portrait';
    },
    landscapePhone: function() {
        return Ext.is.Phone && Ext.orientation == 'landscape';
    },
    portraitTablet: function() {
        return !Ext.is.Phone && Ext.orientation == 'portrait';
    },
    landscapeTablet: function() {
        return !Ext.is.Phone && Ext.orientation == 'landscape';
    }
}

For each profile we're targeting, we create a unique name and use that as a property containing a function which returns a boolean result. When the application starts up (and when orientation or screen size changes), Sencha Touch will evaluate these functions. When one returns a truthy value, that name becomes the current profile. It's important to note that JavaScript does not guarantee the order of properties in an object, so you can't be sure of the order in which the functions are called. Be careful to ensure that only one of the functions will return a truthy value at any given time.

Hopefully, the rules we've defined here are very self-explanatory. Note that, rather than explicitly testing Ext.is.Tablet, we're using !Ext.is.Phone. This means that the last two profiles will also apply to desktop browser windows too: useful for testing.

Now that we've defined the profiles, we need to get them to affect the components. Sencha Touch will call the setProfile method on each component within your application, if it's present, so we add such functions to the components as required. When modeling the appearance or disappearance of different controls for the four different profiles, you might want to check back with the profile list at the beginning of the tutorial.

// add profile behaviors for relevant controls
viewport.setProfile = function (profile) {
    if (profile=='portraitPhone') {
        this.setActiveItem(this.menu);
    }
    else if (profile=='landscapePhone') {
        this.remove(this.menu, false);
        this.setActiveItem(this.page);
    }
    else if (profile=='portraitTablet') {
        this.removeDocked(this.menu, false);
    }
    else if (profile=='landscapeTablet') {
        this.addDocked(this.menu);
    }
};

The viewport (this) as a whole changes for each of the four profiles. The function is called with the name of the profile, so we check to see which is in play and act accordingly. (If you have any more profiles than this, you might prefer to use a switch statement.)

So what is going on here? Portrait phones need the menu to be an active card of the whole viewport, and horizontal phones need to have it removed (so it can float), and the page made active instead. Portrait tablets also need a floating menu, and landscape tablets need it docked. The false argument on the remove and removeDocked methods simply ensures that the menu is not destroyed in either case and is merely removed from its container, so it is ready to float.

It's worth keeping in mind which transitions are likely to occur between profiles, so you can keep this state machine terse. While you should certainly expect orientation changes between portrait and landscape profiles, you'll never see a phone turning into a tablet or vice versa. So in the code above, we only need to have pairs of profiles reversing each other's transitions.

In addition to the viewport as a whole, let's implement similar transitions for the other UI components.
Firstly the menu, which we want to have sized and floating for landscape phone and portrait tablet profiles, and not floating (either as a card or a docked sidebar) for portrait phone and landscape tablet profiles.

menu.setProfile = function (profile) {
    if (profile=="landscapePhone" || profile=="portraitTablet") {
        this.hide();
        if (this.rendered) {
            this.el.appendTo(document.body);
        }
        this.setFloating(true);
        this.setSize(150, 200);
    }
    else {
        this.setFloating(false);
        this.show();
    }
};

Note that we hide the floating menu by default, so it appears only when the user clicks the list icon in the toolbar. (The appendTo line may seem a little cryptic, but it ensures that the element containing the list is at the top level of the DOM and can float freely, rather than having it constrained down inside the viewport element - which can adversely affect its positioning.)

Finally, two simple toggles: the menu button that needs to appear when we know the menu itself is floating, and the back button that only needs to appear when we're using the card transitions on the portrait phone profile:

menuButton.setProfile = function (profile) {
    if (profile=="landscapePhone" || profile=="portraitTablet") {
        this.show();
    }
    else {
        this.hide();
    }
};

backButton.setProfile = function (profile) {
    if (profile=='portraitPhone') {
        this.show();
    }
    else {
        this.hide();
    }
};

Of course, assuming you had the correct references, it would be possible to alter the entire UI from just one of these setProfile methods. However, by dictating the profile-specific behavior of each component within its own method, we've increased the encapsulation and maintainability of the app as the UI gets more complex.

Fire this up in phone and tablet simulators, and try orienting them. You should see each of the four layouts appear as you switch device and orientation: a card-based menu on portrait phones, floating menus on landscape phones and portrait tablets, and a docked sidebar on landscape tablets. Hopefully you can see what is going on, based on the profile events we have implemented above. The code at this point is available in the GitHub repo's 4_profiles branch.

(PS: if you want to leave a comment on this article, please do so at the end of part two...)
http://www.sencha.com/learn/idiomatic-layouts-with-sencha-touch
CC-MAIN-2014-10
en
refinedweb
:mod:`socket` --- Low-level networking interface
================================================

The :func:`.socket` function returns a :dfn:`socket object` whose methods implement the various socket system calls. Parameter types are somewhat higher-level than in the C interface: as with :meth:`read` and :meth:`write` operations on Python files, buffer allocation on receive operations is automatic, and buffer length is implicit on send operations.

Socket families
---------------

An :const:`AF_UNIX` socket bound to a file system node is represented as a string, using the file system encoding and the 'surrogateescape' error handler (see PEP 383). An address in Linux's abstract namespace is returned as a :class:`bytes` object with an initial null byte; note that sockets in this namespace can communicate with normal file system sockets, so programs intended to run on Linux may need to deal with both types of address. A string or :class:`bytes` object can be used for either type of address when passing it as an argument.

A pair (host, port) is used for the :const:`AF_INET` address family, where host is a string representing either a hostname in Internet domain notation like 'daring.cwi.nl' or an IPv4 address like '100.50.200.5', and port is an integer.

For the :const:`AF_INET6` address family, a four-tuple (host, port, flowinfo, scopeid) is used, where flowinfo and scopeid represent the sin6_flowinfo and sin6_scope_id members in :const:`struct sockaddr_in6` in C. For :mod:`socket` module methods, flowinfo and scopeid can be omitted just for backward compatibility. Note, however, omission of scopeid can cause problems in manipulating scoped IPv6 addresses.

:const:`AF_NETLINK` sockets are represented as pairs (pid, groups).

Linux-only support for TIPC is available using the :const:`AF_TIPC` address family. The address format is a tuple (addr_type, v1, v2, v3 [, scope]), where addr_type is one of :const:`TIPC_ADDR_NAMESEQ`, :const:`TIPC_ADDR_NAME`, or :const:`TIPC_ADDR_ID`, and scope is one of :const:`TIPC_ZONE_SCOPE`, :const:`TIPC_CLUSTER_SCOPE`, and :const:`TIPC_NODE_SCOPE`. If addr_type is :const:`TIPC_ADDR_NAME`, then v1 is the server type, v2 is the port identifier, and v3 should be 0. If addr_type is :const:`TIPC_ADDR_NAMESEQ`, then v1 is the server type, v2 is the lower port number, and v3 is the upper port number. If addr_type is :const:`TIPC_ADDR_ID`, then v1 is the node, v2 is the reference, and v3 should be set to 0.

A tuple (interface, ) is used for the :const:`AF_CAN` address family, where interface is a string representing a network interface name like 'can0'. The network interface name '' can be used to receive packets from all network interfaces of this family.

A string or a tuple (id, unit) is used for the :const:`SYSPROTO_CONTROL` protocol of the :const:`PF_SYSTEM` family. The string is the name of a kernel control using a dynamically-assigned ID. The tuple can be used if ID and unit number of the kernel control are known or if a registered ID is used.

Certain other address families (:const:`AF_BLUETOOTH`, :const:`AF_PACKET`, :const:`AF_CAN`) support specific representations.

For IPv4 addresses, two special forms are accepted instead of a host address: the empty string represents :const:`INADDR_ANY`, and the string '<broadcast>' represents :const:`INADDR_BROADCAST`. Errors related to socket or address semantics raise :exc:`OSError` or one of its subclasses (they used to raise :exc:`socket.error`).

Non-blocking mode is supported through :meth:`~socket.setblocking`. A generalization of this based on timeouts is supported through :meth:`~socket.settimeout`.
Module contents
---------------

The module :mod:`socket` exports the following constants and functions:

Socket Objects
--------------

Socket objects have the following methods. Except for :meth:`makefile` these correspond to Unix system calls applicable to sockets. Note that there are no methods :meth:`read` or :meth:`write`; use :meth:`~socket.recv` and :meth:`~socket.send` without flags argument instead.

Socket objects also have these (read-only) attributes that correspond to the values given to the :class:`socket` constructor.

Notes on socket timeouts
------------------------

A socket object can be in one of three modes: blocking, non-blocking, or timeout. Sockets are by default always created in blocking mode, but this can be changed by calling :func:`setdefaulttimeout`.

- In blocking mode, operations block until complete or the system returns an error (such as connection timed out).

- In non-blocking mode, operations fail (with an error that is unfortunately system-dependent) if they cannot be completed immediately: functions from the :mod:`select` module can be used to know when and whether a socket is available for reading or writing.

- In timeout mode, operations fail if they cannot be completed within the timeout specified for the socket (they raise a :exc:`timeout` exception).

At the operating system level, sockets in timeout mode are internally set in non-blocking mode. Also, the blocking and timeout modes are shared between file descriptors and socket objects that refer to the same network endpoint. This implementation detail can have visible consequences if e.g. you decide to use the :meth:`~socket.fileno()` of a socket.

Timeouts and the connect method
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The :meth:`~socket.connect` operation is also subject to the timeout setting, and in general it is recommended to call :meth:`~socket.settimeout` before calling :meth:`~socket.connect` or pass a timeout parameter to :meth:`create_connection`. However, the system network stack may also return a connection timeout error of its own regardless of any Python socket timeout setting.

Timeouts and the accept method
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If :func:`getdefaulttimeout` is not :const:`None`, sockets returned by the :meth:`~socket.accept` method inherit that timeout. Otherwise, the behaviour depends on settings of the listening socket:

- if the listening socket is in blocking mode or in timeout mode, the socket returned by :meth:`~socket.accept` is in blocking mode;

- if the listening socket is in non-blocking mode, whether the socket returned by :meth:`~socket.accept` is in blocking or non-blocking mode is operating system-dependent. If you want to ensure cross-platform behaviour, it is recommended you manually override this setting.

Example
-------

Here are four minimal example programs using the TCP/IP protocol: a server that echoes all data that it receives back (servicing only one client), and a client using it. Note that a server must perform the sequence :func:`.socket`, :meth:`~socket.bind`, :meth:`~socket.listen`, :meth:`~socket.accept` (possibly repeating the :meth:`~socket.accept` to service more than one client), while a client only needs the sequence :func:`.socket`, :meth:`~socket.connect`. Also note that the server does not :meth:`~socket.sendall`/:meth:`~socket.recv` on the socket it is listening on but on the new socket returned by :meth:`~socket.accept`.
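The echo server looks like this (the host name and port number are illustrative values)::

   # Echo server program
   import socket

   HOST = ''                 # Symbolic name meaning all available interfaces
   PORT = 50007              # Arbitrary non-privileged port
   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   s.bind((HOST, PORT))
   s.listen(1)
   conn, addr = s.accept()
   print('Connected by', addr)
   while True:
       data = conn.recv(1024)
       if not data: break
       conn.sendall(data)
   conn.close()

and the client that talks to it::

   # Echo client program
   import socket

   HOST = 'daring.cwi.nl'    # The remote host
   PORT = 50007              # The same port as used by the server
   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   s.connect((HOST, PORT))
   s.sendall(b'Hello, world')
   data = s.recv(1024)
   s.close()
   print('Received', repr(data))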
The next example shows how to use the socket interface to communicate to a CAN network. After binding (:const:`CAN_RAW`) or connecting (:const:`CAN_BCM`) the socket, you can use the :meth:`socket.send` and the :meth:`socket.recv` operations (and their counterparts) on the socket object as usual. This example might require special privileges.

Running an example several times with too small delay between executions could lead to an "Address already in use" error. This is because the previous execution has left the socket in a TIME_WAIT state, and it can't be immediately reused. There is a :mod:`socket` flag to set, in order to prevent this, :data:`socket.SO_REUSEADDR`::

   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
   s.bind((HOST, PORT))

the :data:`SO_REUSEADDR` flag tells the kernel to reuse a local socket in TIME_WAIT state, without waiting for its natural timeout to expire.
https://bitbucket.org/ncoghlan/cpython_sandbox/src/ae7fef62b462/Doc/library/socket.rst
CC-MAIN-2014-10
en
refinedweb
Archived: Using the same background as built-in Symbian apps

Overview

Using the same background as built-in applications.

Description

It is possible to reuse the background image from built-in applications (Notepad, Pinboard, etc.) in a 3rd party application. In order to do so, a background control context (CAknsBasicBackgroundControlContext) with the correct skin ID must be implemented in the view/control class. For example, to set an editor (CEikEdwin) background similar to the Notepad application, use KAknsIIDQsnFrNotepad when creating the control context:

#include <aknsconstants.h> // for skin IDs

// iBackgroundContext is a member variable
iBackgroundContext = CAknsBasicBackgroundControlContext::NewL(
    KAknsIIDQsnFrNotepad, Rect(), EFalse );

// Set the background control context of an editor (CEikEdwin)
iEditor->SetSkinBackgroundControlContextL( iBackgroundContext );

Similarly, to use the background from the Pinboard application, the skin ID KAknsIIDQsnBgAreaMainPinb can be used. For the default skin background the ID is KAknsIIDQsnBgAreaMain. See the aknsconstants.h header file for more options.

Bdrubel - KeyPress in CEikEdwin dose not work Properly

This above solution is more helpful. Actually I spend so many hours in this section. This article help me to make transparent CEikEdwin. But unfortunately I got a new problem. when I try to write something in my CEikEdwin it always overwrite one to others. I always call DrawDeferred after every key pressing but it dose not working. Please help me...Thanks for your great help

bdrubel 12:34, 14 August 2011 (EEST)

Hamishwillee - Discussion forums

This sort of question is best raised on the forums: . If you come up with a helpful extension to this article to cover your use case, please feel free to add it!

hamishwillee 04:44, 26 August 2011 (EEST)
http://developer.nokia.com/community/wiki/Archived:Using_the_same_background_as_built-in_Symbian_apps
CC-MAIN-2014-10
en
refinedweb
Here is an example of a complete app:

This function does the computation; here it just sleeps for 5 secs:

def background():
    import time
    time.sleep(5)
    return "computation result"  # this could be an image, HTML, JSON, etc...

This is the page accessed by the user. The computation is called by an ajax call and executed in the background:

def foreground():
    return dict(somescript=SCRIPT("ajax('background',[],'target');"),
                somediv=DIV('working...', _id='target'))
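A matching view is needed to render the two helpers; a minimal sketch (the file name views/default/foreground.html is an assumption about how the app is laid out):

{{extend 'layout.html'}}
{{=somediv}}
{{=somescript}}

On page load, the generated script calls the background action via ajax and writes its return value into the div with id 'target', replacing 'working...'.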
http://web2py.com/AlterEgo/default/show/43
CC-MAIN-2014-10
en
refinedweb
cant figure out Collection

under_seeg 07-04-2004, 08:20 AM

i've been looking for a way to do this for days now, i'm not very talented myself im quite new to VB but heres my prob. i want to create a collection i think, or instances of a class or something like that, right now i have a class called 'Entity' with property 'keycomments' this is just one of the properties i have a few irrelevant ones. when i click my button i want the text entered in KeyvalID to use as the name of my new instance of Entity, shown as follows:

If KeyvalID.Text = "" Then
    MsgBox ("You must enter an ID.")
End If

Dim temp
temp = KeyvalID.Text
Dim bla(temp) As New Entity

'in a function later on this lower section will happen
'and for now it assumes the KeyvalID.Text earlier was "npc_roll"
bla(npc_roll).keycomments = "It Works!"
KeyvalComments.Text = bla(npc_roll).keycomments 'this is on screen for the user to see.

i know this doesnt work because i have to have a constant expression when making the bla(temp), and 'temp' is not constant, but how can i go about achieving this, all i want is the ability to type

bla(npc_roll).keycomments = "It Works!"

i have to be able to create an instance of the class Entity with the name of the users choice. also if i'm going to have to use a collection, can you explain very carefully because no matter how much i look at tutorials and examples on MSDN or other sites, i just cant figure them out! for a start i cant type

public class bla
...
end class

it doesnt allow any of it. i'm just totally stumped on making collections.

Read on if your willing to help find an alternative :): if theres an alternative you can think of then i'll be just as happy, i'm taking a few values that have been entered and trying to store them for saving later on. there can be any number of them and each will have a few values inside, some wont include some of the values, and in a special case, some will need to create extras (called Choice1, Choice2, Choice3... etc.) if possible i'd also like to create all these inside other classes, for example:

No1.ID
No1.Name
No1.Keyvals.1.ID
No1.Keyvals.1.Name
No1.Keyvals.1.Comments
No1.Keyvals.2.ID
No2.ID
...

this is basically the exact system i want. i just dont have the skill to do it, and i cant find out how to create something like keyvals, where it has a second or third list of properties inside it. sorry for the long post.

stevo 07-04-2004, 08:42 AM

how about using a Private Type.

Option Explicit

Private Type StringKeys
    Keycomments As String
    Irrelivantstuff As String
End Type

'then you can
Dim bla(5) As StringKeys 'just threw 5 in, dont know what you want here
bla(0).Keycomments = "some stuff"
bla(0).Irrelivantstuff = "some other stuff"
Debug.Print bla(0).Keycomments & " " & bla(0).Irrelivantstuff

is this what you are looking for ?

under_seeg 07-04-2004, 08:47 AM

almost, is there a way to make a type with more than just bla(0).comment so i could have bla(0).key1.comment?

EDIT: oh yea and is it possible to just add 1 on the end of that, say it starts with 8 in and i add 1 so it becomes bla(9) or do i need to have bla(1000) at the beginning as a set limit.
also using this method is it possible to start off with

bla(0).ID
bla(0).Keycomment

and later on add

bla(0).Choice1

stevo 07-04-2004, 09:00 AM

try this then

Option Explicit

Private Type StringKeys
    Keycomments As String
    Irrelivantstuff As String
End Type

Private Type skeys
    key1 As StringKeys
End Type

'then
Dim bla(5) As skeys
bla(0).key1.Keycomments = "blahblah"
bla(0).key1.Irrelivantstuff = "blahblahblah"
Debug.Print bla(0).key1.Keycomments & " " & bla(0).key1.Irrelivantstuff

under_seeg 07-04-2004, 09:04 AM

excellent, now all i need is a way to create key2, key3... and so on while its running, since both key's and bla's can be added and deleted, have u got any ideas on this?

EDIT: wait this isnt all i need, i need a way to create these keys and bla's using 'I' in a For loop, because i'll need to see how many have already been created, if theres really no way to do this i think it'll be ok just setting limits to how many you can make, but i'd prefer a more efficient way.
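The archived thread ends there. For the record, the growing-at-runtime part of the question is what VB6's ReDim Preserve statement covers; a sketch reusing stevo's skeys type:

Dim bla() As skeys              'declare as dynamic: no fixed bounds
ReDim bla(0)                    'start with a single element

'later, grow the array by one while keeping the existing contents
ReDim Preserve bla(UBound(bla) + 1)
bla(UBound(bla)).key1.Keycomments = "newest entry"

UBound(bla) also answers the "how many have already been created" part, so a For i = 0 To UBound(bla) loop can walk whatever has been added so far.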
http://www.xtremevbtalk.com/archive/index.php/t-176260.html
CC-MAIN-2014-10
en
refinedweb
Swing Application Framework is back again.

Existing code

- The "singleton problem".
- Design of the View class.
- Lack of flexible menu support.

The ideal framework

It is small but very flexible, any part of its functionality can be easily overridden. For example, if you don't like the implementation of LocalStorage, it is easy to plug in your own implementation. It is free from all the mentioned problems and knows how an application should look on a particular OS.

The question of the day

I mentioned only a few problems with the current SAF to start a discussion. What do you think about the problems I mentioned and what is your list of features that SAF must have? Note: don't forget that SAF is supposed to be a small framework.

I am looking forward to your comments. It won't take half a year for the next blog, I promise.

alexp

by winnall - 2009-07-21 03:53

I'm glad to see that platform dependencies are now on the radar for SAF, even if it is only the Mac OS X menu problem at the moment. In my view the general solution to platform dependency is to isolate it behind a series of factory interfaces and make the factories extensible (by plugging in platform-dependent JARs as needed). I don't think you can expect developers to do things like "if (isMac) {} else {}" because non-Mac developers won't be aware that they have to do this. The factory should do it for you. The same applies to other platforms too (Gnome and KDE have slightly differing menu conventions and both are different from Windows and Mac). Most developers are not fully aware of all the conventions on their own platforms, let alone all the other platforms out there. And who wants programs littered with "if (isMac) {} else if (isLinux) {} else ... {}" statements anyway? It's a debugger's or maintainer's nightmare :-)

A further advantage of abstracting the platform dependencies into factory interfaces is that the implementation for each platform can be devolved into separate projects, which will avoid further delay to SAF and allow specialist communities to concentrate on the appropriate implementation for their platform. This would allow SAF to concentrate on its "core business".

I use a nascent framework for my own development which works along these lines. If anyone is interested, I'll share the details.

by fritzthecat - 2009-04-08 12:37

I am using the current state of AppFramework in a Swing project and I like it, nevertheless it makes some problems.

(1) I would like to be able to create my own type of Action, ApplicationAction is good but needs more in some cases; in other words, an overridable factory method (generally more framework quality) is needed

(2) I have severe problems with test-coverage, because class Application seems to be not mockable, and instantiating it brings up a GraphicsEnvironment and I get a HeadlessException

Best would be to provide all parts of AppFramework in separate packages, so that you can use it piece by piece, independently of each other. Mind that the "Commands" project could also be a part of this.

by fan_42 - 2009-03-27 03:56

One thing I noticed by using SAF: How do you change the language of an application on the fly? As I live in Switzerland, a small country with 4 national languages, it is a commonly needed feature. I always miss the possibility to change the application language from German to French and back. It would be nice to have this too.
by reusr1 - 2009-03-22 09:35 I would really like to see some sort of binding to model properties and actions in the framework. I have been toying around with such an approach by naming all UI elements (also great for testing) and then binding the UI elements through an EL to the model. The naming of the UI elements also helps with automated testing.

by fan_42 - 2009-03-16 01:40 I agree with janaudy. I can live without a docking framework; there are some that you can use as is. But a plugin mechanism would be nice.

by haraldk - 2009-03-12 06:01 Alex, I think it would really help if you guys joined in on the discussion on the appframework mailing list on a semi-daily basis. Your absence there is a big concern... Apart from that, focus on the basics; don't try to solve everything at once. The current API is a very good start. -- Harald K

by sdoeweling - 2009-03-10 15:52 Well, I have not delved too deep into SAF, but I tend to agree with karsten, especially on 1 and 3. Best, S.

by arittner - 2009-03-10 01:39 Is it possible to refactor the view classes to introduce interfaces? I need some different Views for the NetBeans platform (like DocumentView, DockingView, OptionPanelView, PaletteView). But the implementation of View hurts me (e.g. final methods). The SAF should be divided into SPI and implementation. br, josh. NBDT

by karsten - 2009-03-05 12:46 1) Focus on the topics we've already agreed to be the core of the JSR 296: life-cycle, Actions, resources, background tasks. That's difficult enough. 2) Work with the expert group. After this long pause, you may need to re-establish the expert group. 3) Fix the ActionMap bug, because the fix requires an API change that will affect almost every framework user. 4) Work towards an early draft according to the JSR process. 5) Exclude views from the framework, as explained in the appframework mailing list.

by dsargrad - 2009-03-04 13:11 Hello, I've started using the Swing Application Framework within a NetBeans environment. I've had a lot of success with it. Currently I use the SingleFrameApplication and launch my application either through Java Web Start, or as a desktop application. I need to refactor my application to support a third entry point using an Applet. My current application structure is typical:

class MainApp extends SingleFrameApplication {
    public static void main(String[] args) {
        launch(MainApp.class, args);
    }
}

Within my startup method I instantiate a FrameView of the form:

public class MainView extends FrameView { }

My simple thought is that I create a new class, perhaps of the following form: MainApplet extends JApplet { } and that this would somehow invoke MainApp, and everything else would "stay the same". I'm looking for a simple example to begin to understand how to launch my application either through the standard "main" entry point or, alternatively, as an Applet. So far I have not found an example. Can someone please point me to such an example? I apologize if this is the wrong blog to post this request to. Please let me know if there is a better forum for this question. Thanks in advance.

by janaudy - 2009-03-04 10:57 - Do not include in the core JDK - Have a nice docking framework - Have the notion of plugins

by agoubard - 2009-03-04 09:06 Hi Alex (and the other Swing developers), Welcome back to SAF! Here is my wish list for the SAF: * I think it would be nice to have it or part of it included in Java 7.
So if there are still a lot of discussions, the JCP should agree on the part of the SAF to be included in Java 7 and have this part easily extendable, to be able to have libraries for the framework or extra features in upcoming releases. * Here is my old wish list for the SAF: (maybe point 2 is not relevant as the View class handles it) * SessionStorage and LocalStorage: Here I don't understand why the Preferences API (java.util.prefs) is not used instead. If the default implementation doesn't store at the desired location, provide a PreferencesFactory that does. It would also allow developers to provide their own PreferencesFactory in case they want something else (e.g. AppletPreferencesFactory). Anthony

by jfpoilpret - 2009-03-04 08:41 @Thomas Kuenneth I second that. JSR-296 fixes the scope of SAF. Several comments to this post are going far beyond that initial scope. People should refrain from adding new points to the official scope, or that will definitely bury this JSR (maybe this is what Sun wants?) For the time being, it would be good if Alex could work on the numerous bugs reported in the SAF issues list, rather than asking everyone what they would like to see in SAF. When most critical bugs/RFEs get fixed, then we can discuss and see if other features would be useful (provided they are still in scope, of course). Just my 2 cents. Jean-Francois

by tommi_kuenneth - 2009-03-04 07:35 I always felt that the scope of SAF was well-chosen, as it covers a bunch of key issues all Swing developers face and concentrates on them. My humble wish is to further follow this path. Though it is true that Swing lacks some GUI components, we should not bloat the idea of a basic application framework. Therefore, when asking for improvements we have to consider if such improvements really touch SAF or if they are improvements to other areas of Swing. Still, several companions to SAF are desperately missed, for example JSR-295 or language-level support for properties, but again, these are not part of an application framework. So, if a tweaked, fine-tuned and bugfixed version of SAF is part of Java 7, that would be a big step forward for client Java. Regards, Thomas Kuenneth

by surikov - 2009-03-04 03:31

by aekold - 2009-03-04 02:05 Hi Alexander! That's good news! After some experiments with Qt Jambi I have some suggestions: - There are some params in @Action; maybe it would be better to put all possible params inside? So you will use the same action from menus and buttons in different places with the same icon and text. Here are some of my experiments (btw inspired by appframework):...... - Initialize some resource bundle and create a method like tr(String) that will return the localised string. - Qt has a status bar and the possibility to show LONG_DESCRIPTION in the status bar, and it helps. - Maybe add some icon conventions? Like add a String icon to @Action, and it will load LARGE_ICON from the resources/large folder and SMALL_ICON from the resources/small folder. Also add the possibility to specify icons per platform, like resources/kde/small, resources/win/small and so on, so an application will have different icons on different platforms?

by grandinj - 2009-03-04 00:36 On the View problem, given that you only have 2 sub-cases, why not simply split it into MDI_View and SDI_View? On the Mac problem, I don't think it unreasonable to provide a set of simple support classes and make developers do something like:

if (MacSupport.isMac()) {
    // do mac specific stuff
} else {
    // do normal stuff
}

This is pretty much how SWT operates.
by geekycoder - 2009-03-03 19:31 I certainly hope that SAF will not be over-engineered to the point of inflexibility and complexity. Having been a long-time Swing developer, I have my wish list for SAF. These are the features that I have frequently used in desktop Swing applications and hope to see in SAF: -- Single application instance. Use Case: This is important as some applications are best designed with a single instance in mind. For example, it makes sense to have a single instance of a media player so that sounds do not overlap. When a user double-clicks on an audio file, it should play back in the existing running media player. Currently, the widespread implementation of single instance is for the first instance to use a thread that periodically reads a file for parameters. A newly launched application instance writes its parameters into that file in its main method and then exits; the first instance then picks up the parameters and does whatever is needed with them. Launching a duplicate instance takes time, and this would be better handled if the OS's native call were used instead. Hopefully, SAF will have this feature. -- OS-dependent user profile. Use Case: The OS has an active user profile. If SAF could implement user profile management that works in tandem with the OS, then SAF applications would not need to handle "hanging" user profile settings. -- Add installation and deployment modules. Like an application server, which handles installation, deployment and execution of web applications, SAF applications could benefit from reference implementations (Sun reference implementations) of open-source installation and deployment tools that work easily and integrate with SAF, e.g. JSmooth - Java Executable Wrapper () and Lzpack ().

by cowwoc - 2009-03-03 18:50 I want Sun to work on Swing. I'm just not sure that the Swing Application Framework is the way forward. I don't need a framework to help me build a specific kind of desktop application, but rather across-the-board improvements in Swing to make it easier to customize widgets and bundle common widgets like a date picker. Frankly, I'd like to see Swing 2.0 (not the one currently being proposed). I'm looking for design-level changes to make Swing easier to customize without getting lost in the tons of spaghetti code lurking underneath the hood. Swing components shouldn't be extensible only by those in Sun. In short, I am saying Swing's existing design has too much internal coupling and doesn't lend itself well to extending.

by will69 - 2009-03-03 15:24 Hi Alex, welcome back! Swing is probably the most successful cross-platform GUI toolkit that we have. Now that we all learned how to use and not use the EDT, let's make it even easier to use! And please don't forget the users migrating from Windows to Linux. Java+Swing+OpenGL is a great way to make the transition less painful!

by stolsvik - 2009-03-03 15:13 btw, what were the "urgent temporary tasks"?! They must have been .. heavy.

by stolsvik - 2009-03-03 15:12 Quick comment: The framework should be flexible enough to fully support IoC'ing everything, it should be possible to have several instances of the same application run in the same JVM (pretty much "just because"), and it should be possible to fully "reboot" an application live without any lost memory (my app has a button that does a reboot (drop the current "app context", spawn a new one), and it makes a whole slew of development aspects have much shorter "roundtrip times"), and it should cleanly exit without invoking System.exit()..!
Basically, all resources should be accounted and contained.

by jede - 2009-03-03 13:04 That's good news! Alex, there was already a discussion about needed changes and wishes some months ago on the appframework mailing list (started by you). It would be nice if you and your team could take care of those mails too. It wouldn't make sense to copy all of them to this comments block. Once again: At the moment it's not very easy to use dependency injection frameworks. The use of mock objects for unit testing is also sometimes too tricky. It would be much simpler without singletons and if all components of the SAF used interfaces. Bye, Stefan

by anilp1 - 2009-03-03 12:08 Alex, I wanted to forward this bug to you that was discussed a while back. "Actions can lose their state (enabled, text, icon, etc.) when used with the current public appframework code." It was ruled a serious bug that has not been fixed. thanks, Anil

--- On Thu, 2/12/09, Anil Philip wrote: From: Anil Philip Subject: Re: What is the status of SAF ? To: users@appframework.dev.java.net Date: Thursday, February 12, 2009, 1:17 PM I am using Netbeans 6.5. The GUI designer uses SAF version 1.03. In the generated code, which cannot be edited, I see javax.swing.ActionMap actionMap = org.jdesktop.application.Application.getInstance(com.juwo.nodepad.NodePad.class).getContext().getActionMap(NodePadView.class, this); What should one do instead? thanks, Anil

--- On Thu, 2/12/09, Karsten Lentzsch wrote: From: Karsten Lentzsch Subject: Re: What is the status of SAF ? To: users@appframework.dev.java.net Date: Thursday, February 12, 2009, 7:31 AM Chiovari Cristian Sergiu wrote: > Well if there is a show stopper it must be something serious ! Actions can lose their state (enabled, text, icon, etc.) when used with the current public appframework code. > So then I wonder how it can be used in a production environment ? I've fixed that in my 296 implementations and have described how to fix it: a) have hard references to ActionMaps and let developers clear ActionMaps when they are no longer used (the latter to avoid memory leaks), or b) turn #getActionMap into #createActionMap that doesn't hold the ActionMap; the developer then holds the ActionMap where appropriate, for example in a Presentation Model instance. -Karsten

by peyrona - 2009-03-03 11:52 In my eyes, an important issue is to make it extensible: let's say we create a very generic framework; it should be easy (not a nightmare), by extending (inheriting), to create an MDI framework, or others.

by rcasha - 2009-03-03 07:29 One current problem I encountered is that it is impossible to override certain "factory" methods or classes. I needed to create my own implementation of ApplicationAction (which adds security via JAAS) but I had to fork my own version to do that, since I had to replace the default ApplicationContext (which is final and created in the constructor of Application).

by eutrilla - 2009-03-03 07:29 I usually encapsulate user actions in events that are sent to a central EventManager. In this manager, different EventListeners can be registered during startup depending on the type of event, so the same event can be routed to one or more listeners, or to none. In this way, it is easy to change the behaviour of a certain action and to reuse it between different controllers.
Following what cdandoy said, each view could have a different EventManager (controller) with a different set of listeners, which can be reused or not. Apart from listeners that manage the toolbar buttons, I'd include in the list of desirable listeners an undo/redo manager, for instance.

by cdandoy - 2009-03-03 05:59 May I suggest a different approach to actions? The JDeveloper platform has the notion of a Controller associated with Views, and the controller is responsible for handling the actions. A View can contain a View, and the platform respects the hierarchy of controllers. The platform not only calls the controller when an action is performed but also asks the controller to update visible actions. The advantages are that * the actions are easily shared between views (Edit>Delete may do something different in each view) * it drastically reduces the number of listeners required to maintain the state of each action (enabled/checked) * only visible actions are updated (for example, when the user opens a menu). In my experience the controller approach has huge benefits over the ActionListener approach. There are more details here:...

by osbald - 2009-03-03 03:38 That's only partly true. Actions appear to be stored and retrieved via their Class object in ActionManager. Like a lot of the AppFramework classes, you can't create your own ActionManager; it has a 1:1 relationship with AppContext. Using Class objects as keys essentially makes them all Singletons (you can't distinguish between two instances of the same Class but with differing state) for each classloader, and your classloader options are more limited in Web Start. Or at least going down that route (replacing the JWS security manager and installing custom classloaders) makes life super-complicated for the application developer, which is the opposite of what the framework is supposed to offer. There are a lot of core classes with private/protected constructors and/or static factories that prevent most attempts to deviate from the classic SingleFrame model. Very quickly I found various things start breaking down: ActionManager, parent chaining lookups, resource injection (because of the parent issue).. Actually this should really be moved to the AppFramework mailing lists; I doubt there are that many developers looking at Alex's blog after 6 months+ of inactivity. What's the remit here: a long-term commitment to get AppFramework right, or a quick fix to get something into JDK7 (abandoning the 1.5-compatible codebase) before moving on?

by eutrilla - 2009-03-03 03:23 Yes, that's more or less what I meant, but it should be possible to have multiple instances of AppContext (or whatever class your getAppContext() method is returning), not just the one returned by the static method. By the way, it could be nice to have an AppContext.setInstance() method so that we are able to switch between global contexts. I'm sure that I've come across one or two cases where this has been handy. Another option could be to store statically different AppContexts for different views, such as in AppContext.forView(VIEW_NAME).get(PRIVATE_KEY)

by malenkov - 2009-03-03 02:54 Actually, there is no "singleton problem". You can replace a static field with the following construction:

AppContext.getAppContext().get(PRIVATE_KEY);
AppContext.getAppContext().put(PRIVATE_KEY, value);

by eutrilla - 2009-03-03 02:20 RE: Singletons: I don't know if I misunderstood the problem, but what about using a non-static AppContext class, and a static GlobalAppContext that holds an AppContext instance?
It wouldn't be a pure singleton, but it would allow the best of both worlds: for simple applications it would be possible to just use the Global, and if you have an MDI, each View would have its own AppContext, which may or may not be the same as the one stored in the GlobalAppContext. And as for features of the framework, I've always felt that it's a shame that Swing doesn't have a docking system by itself, or even an abstraction layer that can be used to plug in third-party docking libraries.

by carcour - 2009-03-02 21:11 Good to know that the project is still alive, Alexander! I thought SAF was dead. What were the urgent tasks you've worked on?

by rogerjose81 - 2009-03-02 19:21 Very good to hear this ;) I would like SAF to be able to manage skins in a (semi)automatic way. Something like identifying each look-and-feel and its themes, and providing a menu with sub-items for each LAF. I hope it is not much to ask. Best regards, Roger

by osbald - 2009-03-02 14:26 Hans, Re: Singletons: What about multiple document interfaces? Multiple instances of the same classes, in the same JVM, but with differing state. The Singleton ApplicationContext and the Action caching by Class objects gave me a lot of problems last time I tried. MDI interfaces as in InternalFrame are generally considered old hat these days. Most go for multiple windows. Also, the ApplicationContext per classloader wasn't a practical solution for anybody wanting to use Java Web Start.

by mbien - 2009-03-02 14:06 great to see you blogging again ;) are there efforts to talk to the NetBeans team about interoperability with the NB platform? The NetBeans platform supports out of the box persistence, tasks, actions defined in XML layers, module life cycle (but I think most of the ModuleInstall methods were deprecated :P) etc., which matches the features I found by browsing through the SAF javadoc. It would be a pity if it weren't possible to map some of the features NB already provides to SAF because of implementation detail X. I would see it as a killer feature if we were able to drop an app written with SAF into the NetBeans platform with minimal code changes and see it nicely integrating in some areas. (aka scalability for desktop apps)

by shemnon - 2009-03-02 13:52 What I would like to see is a framework that is usable for writing Swing applications in languages other than The Java(tm) Programming Language. Think JavaFX Script, Groovy, Ruby, Python, Clojure, Fan, et al. There needs to be some way to access the core of the framework that doesn't depend on features that don't map directly to other programming models. Particular to that is the reliance on some annotations to do some of the magic. For example, requiring annotated methods to be the only way to add Actions to the ApplicationActionMap is a particular problem. It's fine to do some particularly snazzy stuff for The Java(tm) Programming Language, just don't make it the single gateway to access the APIs, or else the growing JVM languages community may have to bypass it when they write their own desktop applications. You don't need to ship the other language bindings with SAF, but an equal degree of access would speed/enable adoption.

by hansmuller - 2009-03-02 12:35 Per the "singleton problem": the Application singleton does not prevent multiple applications per JVM, nor does it prevent running multiple Application applets in the browser. Statics are per class-loader, not per JVM. This topic was discussed at some length about a year and a half ago:...
Having multiple Applications share one ApplicationContext was never a goal. Applications weren't intended to be modules.

by masn - 2009-06-20 08:46 I think what's missing is a good, relatively complete UI framework for Swing, something similar to the Eclipse Rich Client Platform or NetBeans, but much lighter weight.
https://weblogs.java.net/node/241612/atom/feed
CC-MAIN-2014-10
en
refinedweb
NAME
unix, AF_UNIX, AF_LOCAL - sockets for local interprocess communication

DESCRIPTION
The AF_UNIX (also known as AF_LOCAL) socket family is used to communicate between processes on the same machine efficiently. UNIX domain sockets can be either unnamed, or bound to a file system pathname (marked as being of type socket). Linux also supports an abstract namespace which is independent of the file system.

Valid types are: SOCK_STREAM, for a stream-oriented socket; SOCK_DGRAM, for a datagram-oriented socket that preserves message boundaries; and (since Linux 2.6.4) SOCK_SEQPACKET, for a connection-oriented socket that preserves message boundaries and delivers messages in the order in which they were sent.

A UNIX domain socket address is represented in the following structure:

#define UNIX_PATH_MAX 108

struct sockaddr_un {
    sa_family_t sun_family;              /* AF_UNIX */
    char        sun_path[UNIX_PATH_MAX]; /* pathname */
};

sun_family always contains AF_UNIX. Three types of address are distinguished in this structure:

* pathname: a UNIX domain socket can be bound to a null-terminated file system pathname using bind(2). When the address of the socket is returned by getsockname(2), getpeername(2), and accept(2), its length is offsetof(struct sockaddr_un, sun_path) + strlen(sun_path) + 1, and sun_path contains the null-terminated pathname.

* unnamed: A stream socket that has not been bound to a pathname using bind(2) has no name. Likewise, the two sockets created by socketpair(2) are unnamed. When the address of an unnamed socket is returned by getsockname(2), getpeername(2), and accept(2), its length is sizeof(sa_family_t), and sun_path should not be inspected.

* abstract: an abstract socket address is distinguished from a pathname socket by the fact that sun_path[0] is a null byte ('\0'). When the address of an abstract socket is returned by getsockname(2), getpeername(2), and accept(2), the returned addrlen is greater than sizeof(sa_family_t) (i.e., greater than 2), and the name of the socket is contained in the first (addrlen - sizeof(sa_family_t)) bytes of sun_path. The abstract socket namespace is a nonportable Linux extension.

ERRORS
EADDRINUSE The specified local address is already in use or the file system socket object already exists.
ECONNREFUSED The remote address specified by connect(2) was not a listening socket. This error can also occur if the target pathname is not a socket.

In the Linux implementation, sockets which are visible in the file system honor the permissions of the directory they are in. Their owner, group, and permissions can be changed.
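As a brief illustration of the pathname namespace described above, the following minimal sketch (not part of the manual page; the path /tmp/example.sock is an arbitrary choice) creates a stream socket and binds it to a file system pathname:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1) { perror("socket"); exit(EXIT_FAILURE); }

    /* Remove any stale socket file left by a previous run. */
    unlink("/tmp/example.sock");

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/example.sock", sizeof(addr.sun_path) - 1);

    /* Binding creates the socket object in the file system. */
    if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) == -1) {
        perror("bind");
        exit(EXIT_FAILURE);
    }

    /* A stream server would call listen() and accept() here. */
    close(fd);
    unlink("/tmp/example.sock");  /* remove the file system entry when done */
    return 0;
}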
http://manpages.ubuntu.com/manpages/precise/man7/AF_LOCAL.7.html
CC-MAIN-2014-10
en
refinedweb
Code<Template>.NET is an add-in for Visual Studio .NET that provides a mechanism for inserting commonly used text fragments into your source code. It is based on the ideas presented in my additions to Michael Taylor's extension to Darren Richard's original CodeTmpl addin. The main differences with the original CodeTmpl addin are:

- The text fragments used by Code<Template>.NET are contained in files called CodeTmpl.* (where * is a per-language extension) which are located in the user's roaming profile. These files are originally copied from sample files which are located in the same directory as the addin's executable. The format of these text files is very simple: named blocks contain the text fragments that will be inserted in your source code. Besides the directly inserted text, these fragments can also contain tags which represent replaceable keywords and prompted values. These named blocks will show up as menus and submenus in Code<Template>.NET's toolbar button. When you click on one of these menu items, the corresponding text fragment will be inserted at the active window's insertion point.
- The template files are completely configurable, so you can replace or change the default text fragments to suit your own needs.

The format of the template files is extremely simple. Basically you paste your block of code into the file and surround it with the open menu (#{) and close menu (#}) tags. The open menu tag should be followed by the name that will appear on the popup menu. The open menu tag can also include a Menu ID that is separated from the display name by a vertical bar (|); this Menu ID is used to directly insert a template into the editor by pressing Ctrl+Enter. Whatever text is placed between the open and close menu tags will be copied verbatim into the text editor. Menu items can be nested, creating a hierarchical view of templates. You can also specify an access key by placing an '&' character before the character to be used as the access key. For example, to specify the "F" in "File" as an access key, you would specify the caption for the menu name as "&File". You can use this feature to provide keyboard navigation for your templates. A separator (##) tag can be used to separate menu items, and an exclamation mark (!) at the start of a line is used for single-line comments.

Additional template files can be included by adding a line starting with "#include" followed by the file name. If the file name does not have an absolute path, the file will be searched for relative to the base template file in the user's roaming profile. Include statements can only be inserted between menu and submenu definitions.

For example:

#{Hello World - &Console|hwc
#include <iostream>

int main()
{
    std::cout << "Hello, new world!\n";
}
#}
##########################
#{Hello World - &GUI|hwg
#include <WinUser.h>

int PASCAL WinMain(HANDLE hInstance, HANDLE hPrevInstance, LPSTR lpszCommandLine, int cmdShow)
{
    MessageBox(NULL, "Hello, World", "Example", MB_OK);
}
#}

If your template defines a Menu ID (hwc above), a speedier way to insert this template would be to type hwc and then press Ctrl+<space>.

The replaceable text between the menu open and menu close tags can contain replaceable keywords. These keywords are surrounded by the keyword open (<%) and keyword close (%>) tags and are case-insensitive. To insert these tags verbatim in the text (in ASP templates, for example), escape the percent ('%') symbol by preceding it with a backslash ('\'). The less than ('<') and the greater than ('>') characters are escapable too.
The less than ('<') and the greater than ('>') characters are escapable too. <% %> Currently, the following keywords are predefined: SOLUTION Returns the solution name. PROJECT Returns the current project name. FILE Returns the current file name. NOW Returns the current date and time. TODAY Returns the current date. GUID Returns a GUID. TEMPLATE Inserts the text from another template. #{Sample The solution's name is <%SOLUTION%> and the current project is <%PROJECT%> }# will be expanded to: The solution's name is CodeTemplateNET and the current project is CodeTemplateNET If an undefined keyword is used, then its value is searched for in the system's environment; if the environment variable is not found the keyword is replaced with the empty string. For example: #{User name Logged-in user name: <%USERNAME%> }# Logged-in user name: velasqueze When using nested templates using the template keyword, the template that is being referenced must have a MenuID. For example: template MenuID #{Template A|tpla // This is template A }# #{Template B|tplb // <%template:tpla%> // This is template B }# Ah, and by the way... recursive template nesting is a Bad thing... don't do it! If the keyword's name starts with a question mark (?) then the keyword represents a prompt value and its name is used as the prompt string in the input dialog. The prompt keyword name can contain spaces. For example: #{Ask me something <%?Please answer the prompt%> }# will be expanded to: (Supposing you typed "I just answered!" in the input box I just answered! The cursor position tag (%$%) will reposition the caret after the text has been formatted and inserted in the editor window. %$% The value of a keyword can be modified by one or more formatters. Formatters follow the keyword name and are separated by colons (:) and can have parameters. Currently, the following formatters are predefined: U U[=start[,length]] L L[=start[,length]] W R R=str1[,str2] FMT FMT=str Returns the key's value formatted applying the parameter to System.String.Format(). e.g. String.Format("{0:str }", key) System.String.Format() String.Format("{0:str }", key) VALUES VALUES=v1{, vn}* FIXEDVALUES FIXEDVALUES=v1{, vn}* MULTILINE MULTILINE=int DRIVE DIR FNAME EXT PATH BASENAME #{Sample User name: <%USERNAME:U%> // User name uppercased GUID: <%GUID:R=-_:U%> // GUID replacing all the '-' with '_' and uppercased File's base name: <%FILE:BASENAME%> m_a is a <%?Access:values= public,protected,private%> variable. }# will expand to: User name: VELASQUEZE // User name uppercased // GUID replacing all the '-' with '_' and uppercased GUID: 33784DF9_4EF5_49AE_8E8B_2F8FAAC8B1A2 File's base name: connect.cs m_a is a protected variable. Known issues The installer file is way too big. Visual Studio insists on adding a 204 K file named MSVBDPCA.DLL. I don't know where it came from and I don't know how to get rid of it. The CommandBar is not removed after uninstalling. The darn toolbar just won't go away. It has not been tested with Visual Studio .NET 2002. Any suggestions on how to resolve these issues are greatly appreciated. Any suggestions on how to resolve these issues are greatly appreciated. autocomplete values fixedvalues
http://www.codeproject.com/Articles/3178/Code-Template-Add-In-for-Visual-Studio-NET?msg=1306627
CC-MAIN-2014-10
en
refinedweb
Implementing Message Validation in WSE 3.0

A Web service must validate request messages received from clients to make sure that they are not malformed and do not contain malicious content.

Objectives

This implementation of the Message Validator pattern has the following objectives:
- Prevent the service from processing request messages that are larger than a specified size.
- Prevent the service from processing messages that are not well-formed or that do not conform to an expected XML schema.
- Validate input messages before deserializing them into .NET data types, so that they can be checked as text with regular expressions.
- Demonstrate how to use a WSE 3.0 custom assertion to implement message validation.
- Use ASP.NET and WSE 3.0 configuration settings to limit usage of system resources such as CPU.

Note: The code examples in this pattern are also available as executable QuickStarts on the Web Service Security community workspace [Content link no longer available, original URL:].

Implementation Strategy

To implement message validation on a Web service, you use a combination of application configuration, code implementation, and filtering in WSE 3.0. Use one or more of the following methods to perform message validation:
- Set the maximum request size in the service's configuration file to limit the size of messages that the service will process.
- Validate each incoming request message to ensure that it is well-formed XML, that it contains all of the parts required by the service, and that the contents of the message conform to an expected structure as defined by an XML Schema (XSD).
- Use regular expression checking to ensure that input contains only valid data and does not contain malicious SQL, HTML, or JavaScript code that could lead to code injection attacks.
- Use regular expressions to ensure that complex data types (such as social security numbers and telephone numbers) are received in a format that the service can process.

Note: You should conduct a thorough threat analysis of your service application to determine where in the code you should perform message validation and to determine which methods of message validation you should use.

To fully understand this pattern, you must have some experience with the .NET Framework, WSE 3.0, and Web service development.

Participants

This implementation pattern requires the following participants:
- Client. The client accesses the Web service.
- Service. The service is the Web service that processes requests received from clients. The service implements the message validation logic.

Process

The Message Validator pattern describes the message validation process at a high level. This implementation pattern provides a refined description of that process specific to the WSE 3.0 implementation. Figure 1 illustrates the process by which message validation logic intercepts request messages and verifies that they are acceptable for processing by the service.

Figure 1. Validating a request message

The process uses the following steps:
- The client sends a request message to the service.
- The service validates the message. The service uses a number of different validation checks to prevent malicious input. These include:
- Comparing the size of the request to the value established for the maxRequestLength attribute of the <httpRuntime> element in the application's configuration file, which is specified in kilobytes. maxRequestLength specifies the maximum allowable size for request messages.
If the message exceeds this value, the service does not process the message, and it returns an error.

Note: You can set other values in the <httpRuntime> element to control response behavior, resource usage for handling requests, and timeouts. For more information about <httpRuntime>, see <httpRuntime> Element in the .NET Framework General Reference on MSDN.

- Checking the format of the request message to ensure that the message is formed correctly and that all of the required message parts are present. The service uses WSE policy assertions to make sure that all required message parts are present. The service can use the requireActionHeader policy assertion to verify that the message contains a WS-Addressing action header. The service can use the requireSoapHeader policy assertion to verify that the message contains other SOAP header elements, such as an addressing header and a message ID. For more information about WSE 3.0 policy assertions, see Policy Assertions in the WSE 3.0 product documentation on MSDN.
- Verifying that the XML in the message payload is well-formed and that it conforms to a predefined schema with acceptable data types and ranges of values. The service uses an XML Schema (XSD) to validate the contents of the message body. If a specific schema is not required for validation, it can use an XML parser to validate the request body. The service can use an XML Schema (XSD) to perform structural validation, data type validation, cardinality of child elements to parent elements, numeric value ranges, and regular expression validation for character patterns and ranges.
- Parsing the request message for malicious content. The service can use regular expressions to ensure that the messages contain only valid data. Regular expression validation can be implemented either in the XML Schema (XSD) or in code. Also, the service can use parameterized SQL queries to access and modify data in databases to mitigate the risk of SQL injection.
- The service processes the request and responds to the client. If the request passes all validation checks performed by the message validator, the service processes the message.

Implementation Approach

This section describes how to implement the pattern. The section is broken into two major tasks:
- Configure the client. This section describes the steps required to configure policy and code for the client.
- Configure the service. This section describes the steps required to configure policy and code for the service.

This pattern does not specifically cover how to implement authentication or message protection.

Configure the Client

The client requires no special configuration for message validation. The client should be able to recognize and properly handle validation exceptions thrown by the service.

Configure the Service

If you use policy to implement authentication and message protection for your service, you should configure it before you attempt to use the custom policy assertion provided in this implementation.
For policy-based authentication and message protection examples, see one of the following implementation patterns in Chapter 3, "Implementing Transport and Message Layer Security": - Implementing Message Layer Security with X.509 Certificates in WSE 3.0 - Implementing Message Layer Security with Kerberos in WSE 3.0 - Implementing Direct Authentication with UsernameToken in WSE 3.0 If you do not use policy to implement authentication and/or message protection for your service, you must enable support for WSE 3.0 and add a text file for the policy cache to your service project in Visual Studio 2005 before using the custom policy assertion provided in this pattern. To enable the service project to support WSE 3.0 - In Visual Studio 2005, right-click the application project, and then click WSE Settings 3.0. - On the General tab, select the Enable this project for Web Services Enhancements check box, select the Enable Microsoft Web Services Enhancement SOAP Protocol Factory check box, and then click OK. To add a policy cache file to the service project in Visual Studio - In Visual Studio, right-click the application project, and then click Add New Item. - Click Text File. - In the Name field, type a name for the file, such as wse3policyCache.config. - Click Add. This section is divided into subsections; each subsection describes a message validation technique. You do not always have to implement all the message validation techniques. You should complete a thorough threat analysis of the service to determine which techniques to use. Whether you implement some or all of the message validation techniques, you should implement them in the order that they are described. The order in which the message validation techniques occur depends on where they are implemented in the platform. In this pattern, the request size must be checked before any other step. The custom policy assertion must be applied in the pipeline after the message is decrypted but before the request is processed by the service. Regular expression checking, if implemented in the XML Schema (XSD), occurs when the request is validated against the message schema in the policy assertion. Otherwise, regular expression checking occurs where the code is implemented, most likely in the service code. Parameterization of SQL queries occurs when the query is created, prior to execution on the database server. The point at which the body validator assertion is specified does not matter relative to other assertions defined to protect the message, because decryption and signature verification is applied further up the communication pipeline from assertions applied for message validation. Configure Maximum Request Length To limit the size (in kilobytes) of messages that the service will process, you should specify a value for the maxRequestLength attribute of the <httpRuntime> element in the service's Web.config file. This value should be set according to the largest request message that you can reasonably expect the service to process. If you do not specify a value for this setting, the default value is 4096 KB. The following XML example shows a maximum request length set to 300 KB. <configuration> ... <system.web> <httpRuntime maxRequestLength="300"/> ... </system.web> ... </configuration> If your service uses a protocol other than HTTP (such as TCP), the WSE <maxMessageLength> setting can be used to limit the size (in kilobytes) of incoming requests, assuming that you are using the SoapClient/SoapService model for your service. 
The default value for the value attribute of the <maxMessageLength> element is 4096 KB. The following configuration example shows the <maxMessageLength> set to 1024 KB for a service that uses the SoapClient/SoapService model.

<configuration>
  ...
  <microsoft.web.services3>
    ...
    <messaging>
      <maxMessageLength value="1024" />
    </messaging>
    ...
  </microsoft.web.services3>
  ...
</configuration>

For more information about using the SoapClient/SoapService classes for messaging, see How To: Send and Receive a SOAP Message by Using the SoapClient and SoapService Classes in the WSE 3.0 product documentation on MSDN.

Required Message Part/Schema Validation

This implementation pattern uses policy assertions to check for required message parts and to validate the message schema. The following example policy file provides an example of policy assertions for the service. Other policies that would be present to sign, encrypt, and provide authentication capabilities have been omitted for brevity.

<policies xmlns="">
  <extensions>
    ...
    <extension name="bodyValidator" type="Microsoft.Practices.WSSP.WSE3.QuickStart.MessageValidation.CustomAssertions.BodyValidatorAssertion, Microsoft.Practices.WSSP.WSE3.QuickStart.MessageValidation.CustomAssertions"/>
  </extensions>
  <policy name="MessageValidationService">
    <bodyValidator xsdPath="Configuration\GetCustomers.xsd" />
    ...
    <requireSoapHeader name="MessageID" namespace=""/>
    <requireSoapHeader name="To" namespace=""/>
    <requireActionHeader />
  </policy>
  ...
</policies>

In this policy file example, the <Action>, <MessageID>, and <To> elements are required on all incoming request messages. A custom policy assertion, bodyValidator, is specified in the <extensions> section (see the section "Custom Policy Assertion - Message Body Validation" for sample code). You should indicate the namespace as appropriate for your project.

The type attribute for the bodyValidator extension declared in the preceding policy code example is formatted as the fully qualified class name (namespace + class name) followed by a comma and then the name of the assembly that contains the assertion class.

If you are not using policy to implement authentication and/or message protection for your service as previously described in this section, you must now enable the service to support WSE and enable policy support. WSE does not recognize custom policy assertions when it parses the policy cache file, and it will disable policy support if you attempt to configure it using the WSE Settings tool. If you have to enable policy support after you have added a custom policy assertion to your policy cache, you must add a <policy> element to the service's Web.config file to enable policy support.

<microsoft.web.services3>
  ...
  <policy fileName="wse3policyCache.config" />
  ...
</microsoft.web.services3>

Replace the value specified for the fileName attribute with the file path and name of your policy cache file.

Custom Policy Assertion - Message Body Validation

The following code example shows the custom policy assertion used to check the message body against an XML Schema (XSD).
using System;
using System.Collections.Generic;
using System.Text;
using System.Xml;
using System.IO;
using System.Xml.Schema;
using System.Web;
using System.Configuration;
using Microsoft.Web.Services3;
using Microsoft.Web.Services3.Security;
using Microsoft.Web.Services3.Design;

namespace Microsoft.Practices.WSSP.WSE3.QuickStart.MessageValidation.CustomAssertions
{
    /// <summary>
    /// This custom PolicyAssertion class validates the received SOAP body
    /// against an XML Schema (XSD) document whose path is configured in the policy document.
    /// </summary>
    public class BodyValidatorAssertion : PolicyAssertion
    {
        private string xsdPath;

        public override SoapFilter CreateClientInputFilter(FilterCreationContext context)
        {
            return null;
        }

        public override SoapFilter CreateClientOutputFilter(FilterCreationContext context)
        {
            return null;
        }

        public override SoapFilter CreateServiceInputFilter(FilterCreationContext context)
        {
            return new ServiceInputFilter(this);
        }

        public override SoapFilter CreateServiceOutputFilter(FilterCreationContext context)
        {
            return null;
        }

        public override void ReadXml(XmlReader reader, IDictionary<string, Type> extensions)
        {
            bool isEmpty = reader.IsEmptyElement;
            string xsdPath = reader.GetAttribute("xsdPath");
            if (!string.IsNullOrEmpty(xsdPath))
            {
                this.xsdPath = xsdPath;
            }
            else
            {
                throw new ConfigurationErrorsException(Messages.MissingXsdPath);
            }
            reader.ReadStartElement("bodyValidator");
            if (!isEmpty)
                reader.ReadEndElement();
        }

        public override void WriteXml(System.Xml.XmlWriter writer)
        {
            writer.WriteStartElement("bodyValidator");
            writer.WriteAttributeString("xsdPath", this.xsdPath);
            writer.WriteEndElement();
        }

        protected class ServiceInputFilter : SoapFilter
        {
            #region Custom Fields
            private XmlSchema schema;
            #endregion

            #region Constructors
            public ServiceInputFilter(BodyValidatorAssertion assertion)
            {
                string xsdPath = assertion.xsdPath;
                if (!Path.IsPathRooted(xsdPath))
                {
                    xsdPath = Path.Combine(AppDomain.CurrentDomain.SetupInformation.ApplicationBase, xsdPath);
                }
                using (StreamReader streamReader = new StreamReader(xsdPath))
                {
                    this.schema = XmlSchema.Read(streamReader, ValidationHandler);
                    streamReader.Close();
                }
            }
            #endregion

            #region SoapFilter Methods
            public override SoapFilterResult ProcessMessage(SoapEnvelope envelope)
            {
                ValidationResults results = new ValidationResults();
                SoapContext.Current.MessageState.Set(results);
                ValidateSchema(envelope.Body.InnerXml);
                if (results.ErrorsCount > 0)
                {
                    throw new ApplicationException(string.Format(Messages.ValidationError, results.ErrorMessage));
                }
                return SoapFilterResult.Continue;
            }
            #endregion

            #region Custom Methods
            /// <summary>
            /// Performs the validation of the SOAP body against the specified XML Schema (XSD) document.
            /// </summary>
            /// <param name="xmlDoc">SOAP message's body (XML)</param>
            public void ValidateSchema(string xmlDoc)
            {
                try
                {
                    XmlReaderSettings settings = new XmlReaderSettings();
                    settings.Schemas.Add(this.schema);
                    settings.ValidationType = ValidationType.Schema;
                    XmlReader reader = XmlReader.Create(new StringReader(xmlDoc), settings);

                    // Validate the document.
                    while (reader.Read()) ;
                    reader.Close();
                }
                catch (Exception ex)
                {
                    throw new ApplicationException(string.Format(Messages.SchemaValidationException, ex.Message));
                }
            }

            /// <summary>
            /// Callback method that stores the error messages.
            /// </summary>
            /// <param name="sender"></param>
            /// <param name="args"></param>
            public void ValidationHandler(object sender, ValidationEventArgs args)
            {
                if (args.Severity == XmlSeverityType.Error)
                {
                    ValidationResults results = SoapContext.Current.MessageState.Get<ValidationResults>();
                    results.ErrorMessage.Append(args.Message + "\r\n");
                    results.ErrorsCount++;
                }
            }
            #endregion

            private class ValidationResults
            {
                public StringBuilder ErrorMessage = new StringBuilder();
                public int ErrorsCount;
            }
        }
    }
}

In the preceding example, Messages.MissingXsdPath refers to a resource string that provides a message for the ConfigurationErrorsException that is being thrown. As appropriate, you should substitute this and other resource strings used in the code example with a simple exception message to describe the nature of the exception.

Note: The validator assertion will only validate the structure of XML data in the message that has the same namespace as the schema that is used to validate it. Data with other namespaces is ignored for schema validation.

You should take care when using a policy assertion to validate an XML Schema (XSD) if a party other than the Web service developer will be responsible for configuring the service's policy when it is deployed into production. If the party responsible for configuring policy in production does not add the validation assertion, the validation will not be performed. If Web service development and policy configuration responsibilities are not held by the same individuals, you should consider using a helper class that is called from within the service to perform the validation instead.

Alternatively, you can add the schema to the resource file for your project. In this case, the schema does not have to be deployed as a separate file. For more information, see Resolving the Unknown: Building Custom XmlResolvers in the .NET Framework on MSDN.

The policy assertion caches the schema in memory that it uses to validate incoming request messages. If you make changes to the schema, you may have to restart Microsoft Internet Information Services (IIS) to ensure that the updated schema is loaded into memory.

Use Regular Expressions to Parse Input

The following code example shows how to use regular expressions to parse input on the Web service to ensure that only valid characters are used. Place this code where it can be called to validate input, after the message has been decrypted (if message layer security is implemented). For example, the following code can be added to the service to validate each string input parameter.

...
using System.Text.RegularExpressions;
...

private bool Validate(string searchString)
{
    Regex r = new Regex("^[0-9A-Za-z]{1,10}$");
    return r.IsMatch(searchString);
}

The preceding example provides a simple example for regular expression validation that does not allow any non-alphanumeric characters. Consequently, it may not be suitable for use in all applications. You can use more sophisticated checks for complex data, such as social security numbers and telephone numbers. For more information about implementing regular expressions, see How To: Use Regular Expressions to Constrain Input in ASP.NET on MSDN.

Note: Although the custom policy assertion provided in this pattern is applied after the security filters in the pipeline (that is, after the message has been decrypted), the regular expression code is not used in the policy assertion because it would require the policy assertion to have explicit knowledge of input parameters contained in the data.
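As a rough sketch of one of the more sophisticated checks mentioned above (the helper name ValidatePhoneNumber and the specific pattern are illustrative assumptions, not part of the original pattern), a stricter test for U.S.-style telephone numbers could look like this:

// Illustrative sketch only: accepts strings of the form 555-555-5555.
private bool ValidatePhoneNumber(string phoneNumber)
{
    Regex r = new Regex(@"^\d{3}-\d{3}-\d{4}$");
    return r.IsMatch(phoneNumber);
}

As with the Validate method above, the ^ and $ anchors ensure that the entire input matches the pattern, rather than just a fragment of it.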
You can also use regular expressions to validate user input on client applications. The main benefit of validating input from the client's perspective is to save a round trip to the Web service if data validation fails. For this approach to be effective, you must be able to validate data according to the Web service's validation requirements. However, the service should never depend on the client to perform validation checks. You must always perform validation checks on the server, because an attacker could use a different client that does not perform the check, or messages could be altered after a check has been performed at the client.

The following example demonstrates how regular expression validation can be described within an XML Schema (XSD). Regular expression validation that uses an XML Schema (XSD) allows the Web service publisher to indicate to consumers what the Web service expects. However, it does not perform as well as regular expression validation in code.

...
<xsd:simpleType name="searchString">
  <xsd:restriction base="xsd:string">
    <xsd:maxLength value="10"/>
    <xsd:pattern value="[0-9A-Za-z]*"/>
  </xsd:restriction>
</xsd:simpleType>
...

For more information about using regular expressions in XSD schemas, see XML Schema Regular Expressions on MSDN.

Parameterize SQL Queries

Web services often use a database to store and retrieve data. Web service request messages could contain malicious input to inject SQL commands into database queries. The following example provides an example of how to parameterize SQL queries. Whenever possible, you should use stored procedures for both performance and security reasons. Stored procedures accept input through parameters, and they generally work best to enforce minimum privilege for data retrieval and modification. The example shows how to parameterize dynamic SQL if your application must use it.

Note: The example assumes that a regular expression has already been used to validate the searchString parameter. For more information, see the previous section, "Use Regular Expressions to Parse Input."

...
using System.Data.SqlClient;
using System.Configuration;
...

private Customer[] GetCustomerList(string country, string searchString)
{
    CustomerCollection customerCollection = new CustomerCollection();
    Customer customer = new Customer();
    using (SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["Northwind"].ToString()))
    {
        string selectString = "SELECT * FROM Customers WHERE Country = @Country AND (CompanyName LIKE '%' + @SearchString + '%' OR ContactName LIKE '%' + @SearchString + '%' OR @SearchString IS NULL)";
        conn.Open();
        SqlCommand cmd = new SqlCommand(selectString, conn);
        cmd.Parameters.Add("@Country", SqlDbType.VarChar).Value = country;
        cmd.Parameters.Add("@SearchString", SqlDbType.VarChar, 10).Value = searchString;
        SqlDataReader reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            customer = new Customer();
            // ... copy the columns from the reader into the customer object ...
            customerCollection.Add(customer);
        }
        reader.Close();
        conn.Close();
    }
    return (Customer[])customerCollection.ToArray(typeof(Customer));
}

In this example, the Customer and CustomerCollection classes are custom data objects. As appropriate, replace the data objects and SQL query for your application. The important point is to parameterize the query instead of directly concatenating input into the SQL query.

The majority of attacks that result from malformed messages, invalid characters, or SQL injection are mitigated with the approach outlined in this implementation pattern.
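Following the earlier recommendation to prefer stored procedures, a minimal sketch of the same lookup through a stored procedure might look like the following (the procedure name GetCustomersByCountry and its parameters are assumptions for illustration, not part of the original pattern):

// Illustrative sketch: calling an assumed stored procedure with typed parameters.
// Requires: using System.Data; using System.Data.SqlClient; using System.Configuration;
using (SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["Northwind"].ToString()))
{
    SqlCommand cmd = new SqlCommand("GetCustomersByCountry", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@Country", SqlDbType.VarChar).Value = country;
    cmd.Parameters.Add("@SearchString", SqlDbType.VarChar, 10).Value = searchString;

    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... copy the columns into your data objects, as in the dynamic SQL example ...
        }
    }
}

Because the procedure name and parameter types are fixed on the server, this approach keeps the minimum-privilege and injection-resistance benefits described above.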
Liabilities The liabilities associated with the Implementing Message Validation in WSE 3.0 pattern include the following: - Validating messages against very large schemas can affect system performance. Typically, the cost of parsing is multiplied two to four times when the schema validation is performed on an XML message. For more information about XML performance guidance in the .NET Framework, see Chapter 9, Improving XML Performance in Improving .NET Application Performance and Scalability on MSDN. If message schema validation is causing performance problems, you should consider the following optimizations: - Make sure that you are reading your schemas only once from the schema file, and cache them in memory to minimize I/O. - Reduce the message schema to essential elements that are required for a particular Web service or Web service operation. Another option is to use regular expression validation in code to validate structural elements. - Incorporate more sophisticated regular expression checking. The regular expression validation example provided in this application is very strict and does not account for validation requirements specific to your service. A thorough threat analysis of your application should reveal any need for a specific form of regular expression checking. For more information about validating input with regular expressions, see How To: Use Regular Expressions to Constrain Input in ASP.NET on MSDN. Security Considerations Security considerations associated with the Implementing Message Validation in WSE 3.0 pattern include the following: - Attackers may attempt to work around message validation. You should be aware of known attempts to work around message validation and adjust your validation code accordingly. Keep your platform up to date with the latest security updates to mitigate issues with built-in security features. - Schema validation validates only basic data types, such as integers, dates, and structures; it should always be supplemented with regular expression validation. You can directly implement regular expression validation in the XML Schema (XSD) or in code to validate more complex data, such as social security numbers and telephone numbers. Regular expression validation directly in the XML Schema (XSD) is useful to communicate what the service requires as valid input to client applications, but it does not perform as well as regular expressions implemented in code. More Information For more information about <httpRuntime>, see "<httpRuntime> Element" in the .NET Framework General Reference on MSDN. For more information about WSE 3.0 policy assertions, see "Policy Assertions" on MSDN. For more information about using the SoapClient/SoapService classes for messaging, see "How To: Send and Receive a SOAP Message by Using the SoapClient and SoapService Classes," in the WSE 3.0 documentation on MSDN. For more information about adding a schema to a resource file see "Resolving the Unknown: Building Custom XmlResolvers in the .NET Framework," on MSDN. For more information about implementing regular expressions, see "How To: Use Regular Expressions to Constrain Input in ASP.NET" on MSDN. For more information about using regular expressions in XML Schemas, see "XML Schema Regular Expressions" on MSDN. For more information about XML performance guidance in the .NET Framework, see Chapter 9, "Improving XML Performance," in Improving .NET Application Performance and Scalability on MSDN.
http://msdn.microsoft.com/en-us/library/ff647829.aspx
CC-MAIN-2014-10
en
refinedweb
16 March 2011 12:28 [Source: ICIS news] LONDON (ICIS)--Bayer MaterialScience (BMS) has completed a patent licence agreement for the use of carbon nanotubes with US-based company Hyperion Catalysis International, the German chemicals company said on Wednesday. BMS said the agreement will strengthen its position in developing new areas of application for its carbon nanotubes, Baytubes. Financial terms of the agreement were not disclosed. BMS said Hyperion currently holds a portfolio of patents and patent applications covering a broad technology range, from carbon nanotube manufacturing to a variety of current and future applications. "We see high demand for materials with extraordinary material properties," said Joachim Wolff, a member of the executive committee at BMS. "This licence with BMS will further leverage Hyperion's broad patent portfolio and capitalise on the ever-increasing demand for carbon nanotubes," said David Wohlstadter, Hyperion's vice president of business development. "We feel Bayer MaterialScience is well positioned to serve and help increase this demand, while Hyperion remains committed to expanding its own sales and providing its customers with the highest quality carbon nanotube-based products," he added. In January 2010, BMS opened a €22m ($31m) pilot facility to manufacture carbon nanotubes at its Leverkusen site in Germany. ($1 = €0.71)
http://www.icis.com/Articles/2011/03/16/9444304/bayer-completes-carbon-nanotubes-licence-agreement-with-hyperion.html
CC-MAIN-2014-10
en
refinedweb
Sub::Assert - Subroutine pre- and postconditions, etc. use Sub::Assert; sub squareroot { my $x = shift; return $x**0.5; } assert pre => { # named assertion: 'parameter larger than one' => '$PARAM[0] >= 1', }, post => '$VOID or $RETURN <= $PARAM[0]', # unnamed assertion sub => 'squareroot', context => 'novoid', action => 'carp'; print squareroot(2), "\n"; # prints 1.41421 and so on print squareroot(-1), "\n"; # warns # "Precondition 1 for main::squareroot failed." squareroot(2); # warns # "main::squareroot called in void context." sub faultysqrt { my $x = shift; return $x**2; } assert pre => '$PARAM[0] >= 1', post => '$RETURN <= $PARAM[0]', sub => 'faultysqrt'; print faultysqrt(2), "\n"; # dies with # "Postcondition 1 for main::faultysqrt failed." The Sub::Assert module implements subroutine pre- and postconditions as well as restrictions on calling context. The assert subroutine takes a key/value list of named parameters. The only required parameter is the 'sub' parameter that specifies which subroutine (in the current package) to replace with the assertion wrapper. The 'sub' parameter may either be a string, in which case the current package's subroutine of that name is replaced, or it may be a subroutine reference. In the latter case, assert() returns the assertion wrapper as a subroutine reference. This parameter specifies one or more preconditions that the data passed to the transformed subroutine must match. The preconditions may either be a string in case there's only one, unnamed precondition, an array (reference) of strings in case there are many unnamed preconditions, or a hash reference of name/condition pairs for named preconditions. There are several special variables in the scope in which these preconditions are evaluated. Most importantly, @PARAM will hold the list of arguments as passed to the subroutine. Furthermore, there is the scalar $SUBROUTINEREF which holds the reference to the subroutine that does the actual work. I am mentioning this variable because I don't want you to muck with it. This parameter specifies one or more postconditions that the data returned from the subroutine must match. Syntax is identical to that of the preconditions except that there are more special vars: In scalar context, $RETURN holds the return value of the subroutine and $RETURN[0] does, too. $VOID is undefined. In list context, @RETURN holds all return values of the subroutine and $RETURN holds the first. $VOID is undefined. In void context, $RETURN is undefined and @RETURN is empty. $VOID, however, is true. Note the behaviour in void context. May be a bug or a feature. I'd appreciate feedback and suggestions on how to solve this more elegantly. Optionally, you may restrict the calling context of the subroutine. The context parameter may be any of the following and defaults to no restrictions ('any'): This means that there is no restriction on the calling context of the subroutine. Please refer to the documentation of the 'post' parameter for a gotcha with void context. This means that the assertion wrapper will throw an error if the calling context of the subroutine is not scalar context. This means that the assertion wrapper will throw an error if the calling context of the subroutine is not list context. This means that the assertion wrapper will throw an error if the calling context of the subroutine is not void context. Please refer to the documentation of the 'post' parameter for a gotcha with void context. This restricts the calling context to any but void context. By default, the assertion wrapper croaks when encountering an error.
You may override this behaviour by supplying an action parameter. This parameter is to be the name of a function to handle the error. This function will then be passed the error string. Please note that the immediate predecessor of your error handler on the call stack is the code evaluation inside the wrapper. Thus, for a helpful error message, you'd want to use 'carp' and 'croak' instead of the analogous 'warn' and 'die'. Your own error handling functions need to be aware of this, too. Please refer to the documentation of the Carp module and the caller() function. Examples: action => 'carp', action => 'my_function_that_handles_the_error', action => '$anon_sub->', # works only in the lexical scope of $anon_sub! Exports the 'assert' subroutine to the caller's namespace. Steffen Mueller <smueller@cpan.org> This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. Look for new versions of this module on CPAN.
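As a hedged illustration of the code-reference form of the 'sub' parameter described above (the $sqrt and $checked_sqrt variable names are invented; the conditions mirror the synopsis):

use Sub::Assert;

my $sqrt = sub { return $_[0] ** 0.5 };

# Passing a code reference instead of a name: assert() returns
# the assertion wrapper rather than replacing a package sub.
my $checked_sqrt = assert(
    pre  => '$PARAM[0] >= 1',
    post => '$VOID or $RETURN <= $PARAM[0]',
    sub  => $sqrt,
);

print $checked_sqrt->(4), "\n";   # prints 2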
http://search.cpan.org/dist/Sub-Assert/lib/Sub/Assert.pm
CC-MAIN-2014-10
en
refinedweb
SyndicationTextContentKind Enumeration Enumeration used to identify the kind of text content in a syndication item. When you specify a value of Xhtml for the TargetTextContentKind attribute, you must ensure that the property value contains properly formatted XML. The data service returns the value without performing any transformations. You must also ensure that any XML element prefixes in the returned XML have a namespace URI and prefix defined in the mapped feed. Windows 7, Windows Vista, Windows XP SP2, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003 The .NET Framework and .NET Compact Framework do not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.
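As a hedged sketch of how this enumeration is typically used (the Product class and its properties are invented for illustration), a value such as Xhtml is passed to the EntityPropertyMappingAttribute that maps a property into the feed:

using System.Data.Services.Common;

// Map the Description property to the feed's <summary> element,
// declaring its content kind as XHTML; the property value must
// therefore be well-formed XML.
[EntityPropertyMapping("Description",
    SyndicationItemProperty.Summary,
    SyndicationTextContentKind.Xhtml,
    false)]  // false: do not also keep the value in the content element
public class Product
{
    public int ID { get; set; }
    public string Description { get; set; }
}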
http://msdn.microsoft.com/en-us/library/vstudio/system.data.services.common.syndicationtextcontentkind(v=vs.90).aspx
CC-MAIN-2014-10
en
refinedweb
IWbemEventSink::SetSinkSecurity method The IWbemEventSink::SetSinkSecurity method is used to set a security descriptor (SD) on a sink for all the events passing through. WMI handles the access checks based on the SD. Use this method when the provider cannot control which users are allowed to consume its events, but can set an SD for a specific sink. Syntax Parameters - lSDLength [in] Length, in bytes, of the security descriptor. - pSD [in] Security descriptor, DACL. Return value This method returns an HRESULT indicating the status of the method call. Remarks The SD DACL defines who has access to the events. The access control entry (ACE) of a consumer seeking access to the events delivered to the sink must match an ACE with WBEM_RIGHT_SUBSCRIBE set in the pSD parameter. For more information, see the EventAccess property of the __EventFilter class. The SD owner and group specify the identity to be used when raising the event. This identity can be different than the identity of the account raising the event; however, when checking access of the event against a filter SD, both the identity of the user and the identity specified in the owner field are checked for access. The group field of the SD must be set and the SACL field is not used. For more information about event security and when to use this method, see Securing WMI Events. For more information about providing events, see Writing an Event Provider. Examples For script code examples, see WMI Tasks for Scripts and Applications and the TechNet ScriptCenter Script Repository. For C++ code examples, see WMI C++ Application Examples. The following code example sets the SD for all the events provided through the sink. The code requires the following #include statements and references. #define _WIN32_WINNT 0x0500 #define SECURITY_WIN32 #pragma comment(lib, "wbemuuid.lib") #pragma comment(lib, "Secur32.lib") #include <windows.h> #include <sddl.h> #include <wbemidl.h> #include <security.h> #include <string> #include <iostream> using namespace std; HRESULT CMyEventProvider::ProvideEvents( IWbemObjectSink *pSink, long lFlags ) { IWbemEventSink *pEventSink = NULL; // Create an SD that allows only administrators to receive events: // O:BAG:BAD:(A;;0x40;;;BA) long lMask = WBEM_RIGHT_SUBSCRIBE; WCHAR wBuf[MAX_PATH]; _ltow( lMask, wBuf, 16 ); wstring wstrSD = L"O:BAG:BAD:(A;;0x"; wstrSD += wBuf; // append the access mask in hex, not the raw long wstrSD += L";;;BA)"; ULONG ulSize = 0; PSECURITY_DESCRIPTOR pSD = NULL; if( !ConvertStringSecurityDescriptorToSecurityDescriptorW( wstrSD.c_str(), SDDL_REVISION_1, &pSD, &ulSize ) ) return HRESULT_FROM_WIN32( GetLastError() ); HRESULT hRes = pSink->QueryInterface( IID_IWbemEventSink, (void**)&pEventSink ); if( SUCCEEDED(hRes) ) { hRes = pEventSink->SetSinkSecurity( ulSize, (BYTE*)pSD ); // A real provider would typically store pEventSink and use it // to deliver events; it is released here for brevity. pEventSink->Release(); } LocalFree( pSD ); return hRes; }
http://msdn.microsoft.com/en-us/library/windows/desktop/aa391750(v=vs.85).aspx
CC-MAIN-2014-10
en
refinedweb