text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
|---|---|---|---|---|
10 replies on
1 page.
Most recent reply:
Apr 19, 2008 4:03 AM
by
Andy Chou
Disclaimer you will probably ignore:
I'm a software architect not a product manager, which means I write code and not marketing messages. My responses shouldn't be taken as any sort of official word from Macromedia. I'm just talking engineer-to-engineer here, rambling over a couple of virtual beers on Artima rather than offering official word from macromedia.com.
Regarding DHTML and Flex:
I agree that DHTML can be used to great effect, and over the years we've spent a good deal of time supporting and simplifying it in products like DreamWeaver. While Flex initially publishes only to the Flash VM, it's really about providing an app development model with the right service and data orientation for developers of server pages and templates to write, test, deploy and manage rich apps today. In my opinion, it's about getting the model and services right and not about pushing Flash itself (or any other client platform) as the be-all end-all.
That said, I do believe that Flash provides the only ubiquitous lightweight cross-platform client VM that can support the sorts of web clients we're wanting immediately -- not just for its rich vector graphic visuals and audio/video, but for sophisticated offline storage and push applications as well. That says nothing about what Flex may eventually target as a result of customer requests and requirements, though, nor does it speak to what the Flash VM itself may evolve into.
Regarding XUL:
My own opinion is that we wouldn't be doing justice to the XUL community if we called our syntax by that name, because even aside from the runtime container differences it differs so significantly in its service, component, and data models. But...
If you think only about the syntax and not the rendering and compositing, though, and eliminate data binding and tag-based custom component creation, I think the base Flex UI syntax is very similar to that of XUL. This is a good thing because I personally like that portion of XUL, and I'd certainly be interested in figuring out how we could give back to the XUL community, or if despite the other differences a lot of folks really do view this as a version of XUL and want that name applied in some extended form. For some years I managed an open source project on sourceforge (something called PoolMan) and have contributed to many others, and I certainly believe everyone wins when folks give back where appropriate and where welcomed. It's not always possible seeing as how I work for a commercial software business, but I am willing to fight the fight where necessary and if it's really justified.
Regarding use of SVG:
SVG is supported in MXML, and at the very least in version 1.0 you'll be able to reference external SVG files. Inline SVG, however, is not guaranteed to be available. We can support multiple namespaces for MXML and SVG in the same document, but it posed language usability issues, so for the current beta this has been disabled.
Peter Farland, who worked with me on creating the Flash Remoting product (my first foray into Flash product development after developing the JRun 4 J2EE 1.3 server) is the engineer handling the SVG features. Pete was basically asked to implement a specific subset of SVG, but he found that distasteful so he pretty much implemented everything. It's great to be in an engineer-driven place, particularly when the engineers are so talented. Anyway, I wandered down the hall and asked for his response to the question. Here's Pete's word on the subject (imagine this spoken in an Australian accent): "The inline SVG feature has been disabled until language specific usability issues are sorted out. This is not guaranteed to be ready in 1.0. You can reference external SVG files for any Property attributed with [ImportResource], such as Image's src property."
Regarding use of Flash MX:
If you or your collaborators do use the Flash authoring environment you can still do so and export components you create into Flex. We also have a new DreamWeaver-like tool for visual Flex authoring. And you can continue using your ActionScript, both in Flex and in our classic tools. Flex gives you a simpler, smarter way to use MX components in an enterprise application. It's easier to link them to web services, easier to bind data to them, easier to create navigational flows through them, easier to unit test them, easier to share them through source control, easier to dynamically generate them, etc.
Flex makes it easier to quickly build robust rich apps without requiring a good deal of time and designer skill, and without requiring knowledge of the proprietary APIs Macromedia has previously released. It's standards-based and familiar to folks who've used server page or template frameworks in the web tier, and in the hands of wise web developers it produces behaviors that can be pretty incredible.
Regarding JavaServer Faces:
This past summer I spoke at a conference where I demo'd JavaServer Faces with a Flash JSF renderkit rendering a Flash-based rich datagrid in a page that was otherwise rendered with HTML form widgets. All were produced from a single JSF page using JSF tags. The Flash datagrid and the other widgets shared the same JavaBean data model -- the same server-side instance of the bean, actually. Flash can work with JSF. But Flex is not JSF, and they don't aim to address exactly the same goals.
JSF seems to be mostly about simplifying Model 2 MVC development for JSP applications. There's nothing specifically about web service bindings and the like but more importantly -- there's not much about smart clients. At best there is a delegated rendering model that might be exploited to supply client-side storage and client-side logic. But mostly the classic JSP model is assumed and simplified. I don't think this is an oversight by JSF, I think it's evidence that we're going after slightly different goals.
More deserves to be said of JSF, but this will have to suffice for now. As always, interested in feedback: If you're a JSF fan, please give me a yell.
|
http://www.artima.com/forums/flat.jsp?forum=106&thread=22035&start=0
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
csPoly2D Class Reference
[Geometry utilities]
The following class represents a general 2D polygon. More...
#include <csgeom/poly2d.h>
Detailed Description
The following class represents a general 2D polygon.
Definition at line 40 of file poly2d.h.
Constructor & Destructor Documentation
Make a new empty polygon.
Copy constructor.
Destructor.
Member Function Documentation
Add a vertex (2D) to the polygon.
Return index of added vertex.
Clipping routines.
They return false if the resulting polygon is not visible for some reason. Note that these routines must not be called if the polygon is not visible. These routines will not check that. Note that these routines will put the resulting clipped 2D polygon in place of the original 2D polygon.
This routine is similar to Intersect but it only returns the polygon on the 'right' (positive) side of the plane.
Extend this polygon with another polygon so that the resulting polygon is: (a) still convex, (b) fully contains this polygon, and (c) contains as much as possible of the other polgon.
'this_edge' is the index of the common edge for this polygon. Edges are indexed with 0 being the edge from 0 to 1 and n-1 being the edge from n-1 to 0.
Calculate the signed area of this polygon.
Test if this vector is inside the polygon.
Test if a vector is inside the given polygon.
Intersect this polygon with a given plane and return the two resulting polygons in left and right.
This version is robust. If one of the edges of this polygon happens to be on the same plane as 'plane' then the edge will go to the polygon which already has most edges. i.e. you will not get degenerate polygons.
Initialize the polygon to empty.
Make room for at least the specified number of vertices.
Assignment operator.
Member Data Documentation
The documentation for this class was generated from the following file:
Generated for Crystal Space 1.4.1 by doxygen 1.7.1
|
http://www.crystalspace3d.org/docs/online/api-1.4.1/classcsPoly2D.html
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
A JavaFX Text Editor: Part 1
Recently, I've been building a specialized file editor with JavaFX 2.x for my day job. While I can't share all the details of that project here, I can share the key findings I've made along the way. I've built a scaled-down editor app that illustrates a lot of what makes JavaFX awesome for building client GUI applications. In fact, despite all my work with JavaFX over the past 2+ years, there are many features I've just never worked with closely. Let's take a look at three of these features now: the
TabPane,
MenuBar, and
FileChooser interfaces. In the second part, we'll explore keyboard processing, the
WebView component, and the file editor itself.
Multi-Tab Editor
I've never created a multi-tabbed interface with JavaFX. It's not that I avoided it for any particular reason, it's just that I never had a reason to until recently. With JavaFX, it took me all of five minutes to learn what I needed. Here's a summary with some code.
First, create a
TabPane component to contain the actual tabs:
TabPane tabPane = new TabPane();
Next, create a
Tab to and place it into the
TabPane:
Tab tab = new Tab(); tabPane.getTabs().add(tab);
Next, add content to the
Tab itself (this can be a control or a layout component such as an HBox, VBox, Group, and so on):
tab.setContent(content.getRoot());
Set the tab's title text:
tab.setText("My New Tab");
By default, the tab is added to the end of existing tabs and is not selected. If you'd like the tab to become the selected, visible tab, call the following code:
SingleSelectionModel<Tab> selectionModel = tabPane.getSelectionModel(); selectionModel.select(tab);
If you'd like to know when the selected tab changes, add the following code:
tabPane.getSelectionModel().selectedItemProperty().addListener( new ChangeListener<Tab>() { @Override public void changed( ObservableValue<? extends Tab> tab, Tab oldTab, Tab newTab) { // Process event here... } });
The end result is a simple set of tabs in your application UI, as shown here.

Next, let's examine another common navigation tool: menus.
JavaFX Menus and Processing
Believe it or not, I've never needed to create application menus with JavaFX until recently. All of my work with JavaFX has been building custom JavaFX controls to be integrated into an existing Java Swing application; hence no need for menus. Again, it took only a matter of minutes to learn the API and build the menu system my application needed. Here's how.
First, create a
MenuBar component to hold the menu choices themselves:
MenuBar menuBar = new MenuBar();
Next, create a
Menu entry, such as one that says
"File":
Menu menuFile = new Menu("File");
Next, add a set of menu items (under the
File menu in this case), such as
"New",
"Open", and
"Save":
MenuItem menuFileNew = new MenuItem("New"); MenuItem menuFileOpen = new MenuItem("Open"); MenuItem menuFileSave = new MenuItem("Save"); menuFile.getItems().addAll( menuFileNew, menuFileOpen, menuFileSave);
What good is a menu item if don't know when a user chooses it? With JavaFX, you simply add an
EventHandler:
menuFileNew.setOnAction(new EventHandler<ActionEvent>() { public void handle(ActionEvent t) { // Process the user menu choice here… } });
To add all of your
Menu Entry objects to a
MenuBar, simply call
addAll():
Menu menuView = new Menu("View"); // … menuBar.getMenus().addAll(menuFile, menuView);
After adding the menus and menu items, you have something similar to this:

Simple File Editor Beginnings
Below is the complete menu and tab code in the beginnings of the simple JavaFX file editor application. In the next part, we'll fill in the remaining details of the application, and I'll present all of the code.
public class JavaFXSimpleEditor extends Application { private static final String BROWSER = "Browser"; private static final String EDITOR = "new editor"; private static int browserCnt = 1; private Stage primaryStage; private TabPane tabPane; private Vector<SimpleEditor> editors = new Vector(); private SimpleEditor currentEditor = null; private Stage getStage() { return primaryStage; } @Override public void start(Stage primaryStage) { this.primaryStage = primaryStage; // Add an empty editor to the tab pane tabPane = new TabPane(); tabPane.getSelectionModel().selectedItemProperty().addListener(new ChangeListener<Tab>() { @Override public void changed(ObservableValue<? extends Tab> tab, Tab oldTab, Tab newTab) { // As the current tab changes, reset the var that tracks // the editor in view. This is used for tracking modified // editors as the user types currentEditor = null; } }); // Create main app menu MenuBar menuBar = new MenuBar(); // File menu and subitems Menu menuFile = new Menu("File"); MenuItem menuFileNew = new MenuItem("New"); menuFileNew.setOnAction(new EventHandler<ActionEvent>() { public void handle(ActionEvent t) { createNew(EDITOR); } }); MenuItem menuFileOpen = new MenuItem("Open"); menuFileOpen.setOnAction(new EventHandler<ActionEvent>() { public void handle(ActionEvent t) { chooseAndLoadFile(); } }); MenuItem menuFileSave = new MenuItem("Save"); menuFileSave.setOnAction(new EventHandler<ActionEvent>() { public void handle(ActionEvent t) { saveFileRev(); } }); MenuItem menuFileExit = new MenuItem("Exit"); menuFileExit.setOnAction(new EventHandler<ActionEvent>() { public void handle(ActionEvent t) { getStage().close(); } }); menuFile.getItems().addAll( menuFileNew, menuFileOpen, menuFileSave, new SeparatorMenuItem(), menuFileExit); Menu menuView = new Menu("View"); MenuItem menuViewURL = new MenuItem("Web Page"); menuViewURL.setOnAction(new EventHandler<ActionEvent>() { public void handle(ActionEvent t) { createNew(BROWSER); } }); menuView.getItems().addAll(menuViewURL); menuBar.getMenus().addAll(menuFile, menuView); // layout the scene VBox layout = VBoxBuilder.create().spacing(10).children(menuBar, tabPane).build(); layout.setFillWidth(true); // display the scene final Scene scene = new Scene(layout, 800, 600); scene.setOnKeyPressed(new EventHandler<KeyEvent>() { public void handle(KeyEvent ke) { // ... } }); // Bind the tab pane width/height to the scene tabPane.prefWidthProperty().bind(scene.widthProperty()); tabPane.prefHeightProperty().bind(scene.heightProperty()); // Certain keys only come through on key release events // such as backspace, enter, and delete scene.setOnKeyReleased(new EventHandler<KeyEvent>() { public void handle(KeyEvent ke) { // ... } }); scene.setOnKeyTyped(new EventHandler<KeyEvent>() { public void handle(KeyEvent ke) { // ... } }); // Make sure one new editor is open by default createNew(EDITOR); primaryStage.setScene(scene); primaryStage.setTitle("Simple Editor / Browser"); primaryStage.show(); } /* …. */ public static void main(String[] args) { launch(args); } }
|
http://www.drdobbs.com/cpp/decoupling-c-header-files/cpp/a-javafx-text-editor-part-1/240142297
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
« Return to documentation listing
#include <mpi.h>
int MPI_Grequest_complete(MPI_Request request)
INCLUDE ’mpif.h’
MPI_GREQUEST_COMPLETE(REQUEST, IERROR)
INTEGER REQUEST, IERROR
#include <mpi.h>
void MPI::Grequest::Complete()
MPI imposes no restrictions on the code executed by the callback
functions." semantics). It should either succeed or fail without
side-effects. The user should guarantee these same properties for newly defined
|
http://www.open-mpi.org/doc/current/man3/MPI_Grequest_complete.3.php
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
in reply to
Simple modification to SVG
SVG is an XML application.
Choose a module that can manipulate XML files and perform the necessary changes.
The module you pick may be dependent on what you're familiar with.
I'd probably choose XML::LibXML because that's what I'm familiar with.
You can search CPAN for modules in the XML:: namespace: there's a lot to choose from (and not all of them applicable to your current requirements).
From the limited information you've provided regarding the replacement, it appears that certain rect element attribute values need to be used (possibly after some manipulation) as image element attribute values.
This looks straightforward, e.g. $href = substr($fill, 1) . '.svg', but come back and ask if you run into difficulties. [To get the best answers, follow the guidelines in "How do I post a question effectively?"]
-- Ken
I just got back onto Perl Monks after some time away... Sorry about the stale-ish answer.
Maybe try using the :
Pure-Perl SVG module on CPAN
(that I wrote) and follow the tutorials that I wrote and were published to github by Gabor Szabo here: Perl SVG Samples on github ...or try my tutorials: ROASP's SVG tutorials on RO IT Systems
All the best
hackmare.
All the best
A foolish day
Just another day
Internet cleaning day
The real first day of Spring
The real first day of Autumn
Wait a second, ... is this poll a joke?
Results (422 votes),
past polls
|
http://www.perlmonks.org/index.pl/jacques?node_id=1052286
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
SeqIO
Revision as of 11:42, 1 March 2013
This page describes Bio.SeqIO, the standard Sequence Input/Output interface for BioPython 1.43 and later. For implementation details, see the SeqIO development page.
There is a whole chapter in the Tutorial (PDF) on Bio.SeqIO, and although there is some overlap it is well worth reading in addition to this WIKI page. There is also the API documentation (which you can read online, or from within Python with the help command).
Aims
Bio.SeqIO provides a simple uniform interface to input and output assorted sequence file formats (including multiple sequence alignments), but will only deal with sequences as SeqRecord objects. There is a sister interface Bio.AlignIO for working directly with sequence alignment files as Alignment objects.
The design was partly inspired by the simplicity of BioPerl's SeqIO. In the long term we hope to match BioPerl's impressive list of supported sequence file formats and multiple alignment formats.
Note that the inclusion of Bio.SeqIO (and Bio.AlignIO) in Biopython does lead to some duplication or choice in how to deal with some file formats. For example, Bio.Nexus will also read sequences from Nexus files - but Bio.Nexus can also do much more, for example reading any phylogenetic trees in a Nexus file.
My vision is that for manipulating sequence data you should try Bio.SeqIO as your first choice. Unless you have some very specific requirements, I hope this should suffice.
File Formats
This table lists the file formats that Bio.SeqIO can read, write and index, with the Biopython version where this was first supported (or git to indicate this is supported in our latest in development code). The format name is a simple lowercase string. Where possible we use the same name as BioPerl's SeqIO and EMBOSS.
With Bio.SeqIO you can treat sequence alignment file formats just like any other sequence file, but the new Bio.AlignIO module is designed to work with such alignment files directly. You can also convert a set of SeqRecord objects from any file format into an alignment - provided they are all the same length. Note that when using Bio.SeqIO to write sequences to an alignment file format, all the (gapped) sequences should be the same length.
Sequence Input
The main function is Bio.SeqIO.parse() which takes a file handle and format name, and returns a SeqRecord iterator. This lets you do things like:
from Bio import SeqIO handle = open("example.fasta", "rU") for record in SeqIO.parse(handle, "fasta") : print record.id handle.close()
In the above example, we opened the file using the built-in python function open. The argument 'rU' means open for reading using universal readline mode - this means you don't have to worry if the file uses Unix, Mac or DOS/Windows style newline characters.
Note that you must specify the file format explicitly, unlike BioPerl's SeqIO which can try to guess using the file name extension and/or the file contents. See Explicit is better than implicit (The Zen of Python).
If you had a different type of file, for example a Clustalw alignment file such as 'opuntia.aln' which contains seven sequences, the only difference is you specify "clustal" instead of "fasta":
from Bio import SeqIO handle = open("opuntia.aln", "rU") for record in SeqIO.parse(handle, "clustal") : print record.id handle.close()
Iterators are great for when you only need the records one by one, in the order found in the file. For some tasks you may need to have random access to the records in any order. In this situation, use the built in python list function to turn the iterator into a list:
from Bio import SeqIO handle = open("example.fasta", "rU") records = list(SeqIO.parse(handle, "fasta")) handle.close() print records[0].id #first record print records[-1].id #last record
Another common task is to index your records by some identifier. For small files we have a function Bio.SeqIO.to_dict() to turn a SeqRecord iterator (or list) into a dictionary (in memory):
from Bio import SeqIO handle = open("example.fasta", "rU") record_dict = SeqIO.to_dict(SeqIO.parse(handle, "fasta")) handle.close() print record_dict["gi:12345678"] #use any record ID
The function Bio.SeqIO.to_dict() will use the record ID as the dictionary key by default, but you can specify any mapping you like with its optional argument, key_function.
For larger files, it isn't possible to hold everything in memory, so Bio.SeqIO.to_dict() is not suitable. Biopython 1.52 inwards includes the Bio.SeqIO.index() function for this situation, but you might also consider BioSQL.
from Bio import SeqIO record_dict = SeqIO.index("example.fasta", "fasta") print record_dict["gi:12345678"] #use any record ID
Biopython 1.45 introduced another function, Bio.SeqIO.read(), which like Bio.SeqIO.parse() will expect a handle and format. It is for use when the handle contains one and only one record, which is returned as a single SeqRecord object. If there are no records, or more than one, then an exception is raised:
from Bio import SeqIO record = SeqIO.read(open("single.fasta"), "fasta")
For the related situation where you just want the first record (and are happy to ignore any subsequent records), you can use the iterator's .next() method:
from Bio import SeqIO first_record = SeqIO.parse(open("example.fasta", "rU"), "fasta").next()
Sequence Output
For writing records to a file use the function Bio.SeqIO.write(), which takes a SeqRecord iterator (or list), output handle and format string:
from Bio import SeqIO sequences = ... # add code here output_handle = open("example.fasta", "w") SeqIO.write(sequences, output_handle, "fasta") output_handle.close()
There are more examples in the following section on converting between file formats.
Note that if you are writing to an alignment file format, all your sequences must be the same length.
If you supply the sequences as a SeqRecord iterator, then for sequential file formats like Fasta or GenBank, the records can be written one by one. Because only one record is created at a time, very little memory is required. See the example below filtering a set of records.
On the other hand, for interlaced or non-sequential file formats like Clustal, the Bio.SeqIO.write() function will be forced to automatically convert an iterator into a list. This will destroy any potential memory saving from using an generator/iterator approach.
File Format Conversion
Suppose you have a GenBank file which you want to turn into a Fasta file. For example, lets consider the file 'cor6_6.gb' which is included in the Biopython unit tests under the GenBank directory.
You could read the file like this, using the Bio.SeqIO.parse() function:
from Bio import SeqIO input_handle = open("cor6_6.gb", "rU") for record in SeqIO.parse(input_handle, "genbank") : print record input_handle.close()
Notice that this file contains six records. Now instead of printing the records, let's pass the SeqRecord iterator to the Bio.SeqIO.write() function, to turn this GenBank file into a Fasta file:
from Bio import SeqIO input_handle = open("cor6_6.gb", "rU") output_handle = open("cor6_6.fasta", "w") sequences = SeqIO.parse(input_handle, "genbank") count = SeqIO.write(sequences, output_handle, "fasta") output_handle.close() input_handle.close() print "Converted %i records" % count
Or more concisely using the Bio.SeqIO.convert() function (in Biopython 1.52 or later), just:
from Bio import SeqIO count = SeqIO.convert("cor6_6.gb", "genbank", "cor6_6.fasta", "fasta") print "Converted %i records" % count
In this example the GenBank file started like this:
LOCUS ATCOR66M 513 bp mRNA PLN 02-MAR-1992 DEFINITION A.thaliana cor6.6 mRNA. ACCESSION X55053 VERSION X55053.1 GI:16229 ...
The resulting Fasta file looks like this:
>X55053.1 A.thaliana cor6.6 mRNA. AACAAAACACACATCAAAAACGATTTTACAAGAAAAAAATA... ...
Note that all the Fasta file can store is the identifier, description and sequence.
By changing the format strings, that code could be used to convert between any supported file formats.
Examples
Input/Output Example - Filtering by sequence length
While you may simply want to convert a file (as shown above), a more realistic example is to manipulate or filter the data in some way.
For example, let's save all the "short" sequences of less than 300 nucleotides to a Fasta file:
from Bio import SeqIO short_sequences = [] # Setup an empty list for record in SeqIO.parse(open("cor6_6.gb", "rU"), "genbank") : if len(record.seq) < 300 : # Add this record to our list short_sequences.append(record) print "Found %i short sequences" % len(short_sequences) output_handle = open("short_seqs.fasta", "w") SeqIO.write(short_sequences, output_handle, "fasta") output_handle.close()
If you know about list comprehensions then you could have written the above example like this instead:
from Bio import SeqIO input_seq_iterator = SeqIO.parse(open("cor6_6.gb", "rU"), "genbank") #Build a list of short sequences: short_sequences = [record for record in input_seq_iterator \ if len(record.seq) < 300] print "Found %i short sequences" % len(short_sequences) output_handle = open("short_seqs.fasta", "w") SeqIO.write(short_sequences, output_handle, "fasta") output_handle.close()
I'm not convinced this is actually any easier to understand, but it is shorter.
However, if you are using Python 2.4 or later, and you are dealing with very large files with thousands of records, you could benefit from using a generator expression instead. This avoids creating the entire list of desired records in memory:
from Bio import SeqIO input_seq_iterator = SeqIO.parse(open("cor6_6.gb", "rU"), "genbank") short_seq_iterator = (record for record in input_seq_iterator \ if len(record.seq) < 300) output_handle = open("short_seqs.fasta", "w") SeqIO.write(short_seq_iterator, output_handle, "fasta") output_handle.close()
Remember that for sequential file formats like Fasta or GenBank, Bio.SeqIO.write() will accept a SeqRecord iterator. The advantage of the code above is that only one record will be in memory at any one time.
However, as explained in the output section, for non-sequential file formats like Clustal write is forced to automatically turn the iterator into a list, so this advantage is lost.
If this is all confusing, don't panic and just ignore the fancy stuff. For moderately sized datasets having too many records in memory at once (e.g. in lists) is probably not going to be a problem.
Using the SEGUID checksum
In this example, we'll use Bio.SeqIO with the Bio.SeqUtils.CheckSum module (in Biopython 1.44 or later). First of all, we'll just print out the checksum for each sequence in the GenBank file ls_orchid.gbk:
from Bio import SeqIO from Bio.SeqUtils.CheckSum import seguid for record in SeqIO.parse(open("ls_orchid.gbk"), "genbank") : print record.id, seguid(record.seq)
You should get this output:
Z78533.1 JUEoWn6DPhgZ9nAyowsgtoD9TTo Z78532.1 MN/s0q9zDoCVEEc+k/IFwCNF2pY ... Z78439.1 H+JfaShya/4yyAj7IbMqgNkxdxQ
Now lets use the checksum function and Bio.SeqIO.to_dict() to build a SeqRecord dictionary using the SEGUID as the keys. The trick here is to use the python lambda syntax to create a temporary function to get the SEGUID for each SeqRecord - we can't use the seguid function directly as it only works on Seq objects or strings.
from Bio import SeqIO from Bio.SeqUtils.CheckSum import seguid seguid_dict = SeqIO.to_dict(SeqIO.parse(open("ls_orchid.gbk"), "genbank"), lambda rec : seguid(rec.seq)) record = seguid_dict["MN/s0q9zDoCVEEc+k/IFwCNF2pY"] print record.id print record.description
Giving this output:
Z78439.1 P.barbatum 5.8S rRNA gene and ITS1 and ITS2 DNA.
Random subsequences
This script will read a Genbank file with a whole mitochondrial genome (e.g. the tobacco mitochondrion, Nicotiana tabacum mitochondrion NC_006581), create 500 records containing random fragments of this genome, and save them as a fasta file. These subsequences are created using a random starting points and a fixed length of 200.
from Bio import SeqIO from Bio.SeqRecord import SeqRecord from random import randint #There should be one and only one record, the entire genome: mito_record = SeqIO.read(open("NC_006581.gbk"), "genbank") mito_frags=[] limit=len(mito_record.seq) for i in range(0, 500) : start=randint(0,limit-200) end=start+200 mito_frag=mito_record.seq[start:end] record=SeqRecord(mito_frag,'fragment_%i' % (i+1),'','') mito_frags.append(record) output_handle = open("mitofrags.fasta", "w") SeqIO.write(mito_frags, output_handle, "fasta") output_handle.close()
That should give something like this as the output file,
>fragment_1 TGGGCCTCATATTTATCCTATATACCATGTTCGTATGGTGGCGCGATGTTCTACGTGAAT CCACGTTCGAAGGACATCATACCAAAGTCGTACAATTAGGACCTCGATATGGTTTTATTC TGTTTATCGTATCGGAGGTTATGTTCTTTTTTGCTCTTTTTCGGGCTTCTTCTCATTCTT CTTTGGCACCTACGGTAGAG ... >fragment_500 ACCCAGTGCCGCTACCCACTTCTACTAAGGCTGAGCTTAATAGGAGCAAGAGACTTGGAG GCAACAACCAGAATGAAATATTATTTAATCGTGGAAATGCCATGTCAGGCGCACCTATCA GAATCGGAACAGACCAATTACCAGATCCACCTATCATCGCCGGCATAACCATAAAAAAGA TCATTAAAAAAGCGTGAGCC
Writing to a string
Sometimes you won't want to write your SeqRecord object(s) to a file, but to a string. For example, you might be preparing output for display as part of a webpage. If you want to write multiple records to a single string, use StringIO to create a string-based handle. The Tutorial (PDF) has an example of this in the SeqIO chapter.
For the special case where you want a single record as a string in a given file format, Biopython 1.48 added a new format method:
from Bio import SeqIO mito_record = SeqIO.read(open("NC_006581.gbk"), "genbank") print mito_record.format("fasta")
The format method will take any output format supported by Bio.SeqIO where the file format can be used for a single record (e.g. "fasta", "tab" or "genbank"). Note that we don't recommend you use this for file output - using Bio.SeqIO.write() is faster and more general.
If you are having problems with Bio.SeqIO, please join the discussion mailing list (see mailing lists).
If you think you've found a bug, please report it on our bug tracker.
|
http://biopython.org/w/index.php?title=SeqIO&diff=3755&oldid=2967
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Learn C the Hard Way (Zed A Shaw).
The Gnu C Tutorial
Hacking: The Art of Exploitation
Today I stumbled across an excellent series of lectures by Jerry Cain.
This link points at the third one, in which he describes using pointers to do a little swap function. I thought I'd have a go at coding it and came up with this.
So, if I have got this right:
Code: Select all
#include <stdio.h> // swap function void swap_nums(int *, int *); // prototype a function that takes pointers. int main() { int x = 10; int y = 66; printf("x = %d y= %d\n", x, y); swap_nums(&x, &y); // pass the addresses of x and y to my function printf("x = %d y= %d\n", x, y); return 0; } void swap_nums(int *a, int *b) { int c; c = *a; *a = *b; *b = c; }
I pass pointers to my function.
"a" stores the memory location where 10 is stored and "b" the address where I have 66 stored.
c = *a
The asterisk "dereferences" the pointer - in other words now the actual value at that address is put into "c".
*a = *b
The value held at address a is made to equal the value held at address b.
*b = c
The value held at address b is made to equal c (just an int, not a pointer!)
Then we return to main and the values have been swapped!
|
https://lb.raspberrypi.org/forums/viewtopic.php?f=33&t=7461&p=92398
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
US20030187761A1 - Method and system for storing and processing high-frequency data - Google PatentsMethod and system for storing and processing high-frequency data Download PDF
Info
- Publication number
- US20030187761A1US20030187761A1 US10/046,907 US4690702A US2003187761A1 US 20030187761 A1 US20030187761 A1 US 20030187761A1 US 4690702 A US4690702 A US 4690702A US 2003187761 A1 US2003187761 A1 US 2003187761A1
- Authority
- US
- United States
- Prior art keywords
- data
- time series
- system
- processing
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links
- 238000000034 methods Methods 0 claims description 27
- 230000000875 corresponding Effects 0 claims description 17
- 238000004364 calculation methods Methods 0 claims description 6
- 230000003213 activating Effects 0 claims description 4
- 238000007619 statistical methods Methods 0 claims description 4
- 238000005516 engineering processes Methods 0 description 2
- 238000004458 analytical methods Methods 0 description 1
- 238000009795 derivation Methods 0 description 1
- 230000000694 effects Effects 0 description 1
- 230000002123 temporal effects Effects 0 description 1
- 239000002699 waste material74—Sequence data queries, e.g. querying version
High-frequency financial data analysis has significantly stressed current time series database implementations because of regularly requiring the request of tens of millions of irregularly spaced data points. In addition, describing the data in these systems makes it difficult for researchers to use. The present invention includes a method and system for storing and processing high frequency data. It also includes a language for the storage and query of the data.
Description
- This application claims priority to provisional application no. 60/261,973 filed on Jan. 17, 2001, titled, “A Method and System for Storing and Processing High-Frequency Data”, the contents of which are herein incorporated by reference.[0001]
- The present invention relates to the field of high-frequency financial data analysis. More particularly, the present invention relates to a method and system for managing time series data comprising a language for query and storage of the data. [0002]
- While some financial instruments are quoted at a frequency of a few times per day, others may be quoted a few times per second. Instruments include financial contracts for stocks, currency exchange, pork belly futures, etc. In addition time series data, containing tens of millions of prices, may be collected around the clock for a number of different instruments. [0003]
- Researchers may use this data to look for correlations and phenomena in the financial markets on various time scales. For example, to examine the fractal nature of the data, it is essential to have access to all time scales. Therefore, all data events (called ticks) may be required to be saved and given a timestamp. This leads to irregularly spaced datasets. Fractal research is statistical in nature and, as such, often requires analysis of very large datasets for an instrument. Keeping such large datasets in memory is not feasible without supercomputer technology. However, due to the high cost of supercomputers, there is a need to perform such functions on typical computers. [0004]
- In addition, there is also a need to request datasets that may not seem natural or obvious. For example, it may be desirable to request all currency exchange quotes for only Asian currencies, and again for European currencies, in order to compare the statistical distributions. Or one may request all quotes for all swap rates made by a given bank to see if that bank may be posting bad prices. It would be impossible to predict the scope of all possible data requests and to store the data appropriately from the start. Accordingly, there is a need for a database to be able to take a data description and return a desired time series. [0005]
- But commercial databases do not fulfill these needs. Literature also contains little research in this area. Instead, most time series databases are geared toward small sets of regularly spaced data points. They usually assume a user wants an entire set at once. And they require the user to predefine these sets so that special data requests often are either not possible or are not easy to make. [0006]
- Moreover, it is expected that the need for database systems that can meet these types of needs to grow. Research with high-frequency financial data is finding applications in diverse fields such as risk management and trading model development. Banks are embracing new solutions to these problems and often high-frequency data is being applied. In cases of risk management, for example, large matrices involving simultaneous access to thousands of high frequency time series need to be calculated. [0007]
- Accordingly, there exists a need for a method and system for storing and processing high frequency data. [0008]
- The present invention stores and processes high-frequency data. In one embodiment the present invention comprises a system for storing one or more time series comprising: a language for describing the storing of the one or more time series; and a subsystem storing the one or more time series in accordance with said language. [0009]
- In another embodiment the present invention comprises a system for managing one or more time series comprising: a language defining a first one of the time series as a subset of a second one of the time series. In another embodiment the present invention comprises a system for managing one or more time series comprising a language defining a first one of the time series as a subset of a second one of the time series. [0010]
- In another embodiment the present invention comprises a system for retrieving desired data from one or more time series comprising: at least one request comprising one or more restrictions for defining the desired data; and at least one utility retrieving data from the one or more time series that satisfies said one or more restrictions. [0011]
- In another embodiment the present invention comprises a system for processing data from one or more time series comprising: one or more processing modules for processing the data; one or more connections for linking said modules in a network; and a first subsystem for activating said one or more processing modules and for moving the data through the network. [0012]
- These and other embodiments and advantages of the present invention will be more readily apparent with reference to the detailed description and accompanying drawings.[0013]
- FIG. 1 shows an example of four possible time series subsets of a larger time series: 1) currency prices, 2) European currency prices, 3) German Mark prices, and 4) prices from the bank BGFX. [0014]
- FIG. 2 shows a sample SQDADL definition to support currency exchange and deposit rates. [0015]
- FIG. 3 shows examples of the parsing of some sample ticks: 1) a foreign exchange quote, 2) a foreign exchange transaction, and 3) a cash deposit interest rate quote. [0016]
- FIG. 4 shows SQDADL queries which select the corresponding time series as defined in FIG. 1. [0017]
- FIG. 5 shows the separation of a fully described tick into its filename and data record components. [0018]
- FIG. 6 shows a data cursor, which is a software object that knows how to merge ticks from all the data files and remove all undesirable ticks. [0019]
- FIG. 7 shows using ORLA Blocks to read and print data. [0020]
- FIG. 8 shows an abstract block with 5 input and 3 output ports. [0021]
- FIG. 9 shows a network to view input data along with its Exponential Moving Average.[0022]
- 6.1 High Frequency Data Repository for Financial Time Series [0023]
- 6.1.1 Introduction [0024]
- Because keeping such large datasets in memory is not feasible without supercomputer technology, the present invention includes a data-flow-based, statistical package called Olsen Research LAboratory (ORLA) for this purpose. ORLA acts like an electronic circuit in which a network of various off-the-shelf pieces is constructed and data flows through it to calculate a desired set of results (moving averages, trading model signals, etc). This eliminates the need for a large local memory. The present invention may include a mechanism for slowly feeding data into the waiting ORLA process. [0025]
- The invention was designed with generality in mind. In particular, it is not limited to financial data. It may be used anywhere that high volume time series data needs to handled. [0026]
- One aspect of the present invention is called a repository rather than a database because the word repository is a more accurate description for flexibly storing and retrieving large number of ticks. [0027]
- 6.1.2 Time Series Model [0028]
- A time series is a set of data points sorted in order of increasing time. In an abstract sense, one can define a “universal” time series as the time series of all recordable events that ever have and ever will occur. All other time series can be viewed as a subset of, or a restriction on, this universal set. Given any time series, a new time series can always be created by simply extracting a subset from it. [0029]
- FIG. 1 contains a list of currency prices over a 31 second interval. The currencies are the Swiss Franc (CHF), German Mark (DEM), and Japanese Yen (JPY). Note is that this list is already a subset of larger sets. Examples of supersets might include the time series of all currency prices or even of all financial price quotes for all instruments. Conversely, this series may be broken into subsets. One may ask for all European currencies from the set, or one may want only German Mark prices, or one may want only prices from the bank BGFX. [0030]
- All of these subsets have been of interest to researchers at one time or another. And thinking about them in terms of restrictions on a superset is instructive because it can lead to a model for data storage and, hence, to a language for repository query. The present invention includes a repository which treats data in this way. [0031]
- This model for time series data is not usual. Most databases require the user to prepackage the data to be stored into various files. For example, if one wants the Swiss Franc currencies in one file and the German Mark in another, one would be required to predefine these files and to separate the data before storing it. [0032]
- This preclassification is unnecessarily restrictive, requires the user to have too much knowledge of the packaging method, and leads to complicated query languages. For example, to get the BGFX bank quotes in our example above, the user would need to know that all these currencies are in separate files and would need to build a query by first combining these files and then asking for the BGFX quotes. This is clearly more complicated that simply asking for the BGFX quotes as a restriction of all known data. [0033]
- 6.1.3 Data Representation [0034]
- The elimination of the file-based conceptual view means that each tick stands on its own in the repository without classification. For this to be useful, the data needs to be self-describing. This description can then take the place of the file as a handle for data queries. [0035]
- The present invention includes a description language for this purpose called the Sequential Data Description Language (abbreviated SQDADL, and pronounced “skedaddle”). SQDADL is a BNF-style language with some restrictions to enforce a specific structure. FIG. 2 presents a sample SQDADL description for the storage of either currency prices or interest rates (deposits). [0036]
- In one embodiment, each tick must contain a timestamp and this fact is reflected in the root-level statement “Tick=(Time,Item)”, which forces all ticks into this form. It is the only restriction placed on the data description of this embodiment. The “Item” reference can then be expanded as the user sees fit for the type of data to be stored. There is no implicit assumption that limits the repository to financial data. [0037]
- FIG. 3 shows the derivation of a description for a quote on a currency exchange. In this case, the FT indicates a “financial tick” (as opposed to some other time series data), the FX indicates “foreign exchange” from one currency to another, and the Quote indicates at what prices the given bank is willing to buy (Bid) and sell (Ask) one currency for another. In simple terms, the bank of CHFX is willing to sell Japanese yen at a price of 124.1 yen per US dollar and one was told this by the Reuters news agency. [0038]
- This string contains all the information needed to allow this tick to stand on its own. If this string were found written on a piece of paper on the floor, one would be able to enter it into the repository and then retrieve it as part of future queries. And yet the user is not forced to separate its components into file and record specifiers. The only restriction is that it conform to the syntax of the SQDADL description file. [0039]
- FIG. 3 also illustrates how you can derive ticks for actual transactions and for interest rate deposit quotes. Given these definitions, interest rate deposit transactions also become possible. This is one of the nice features of the SQDADL language. Once the expansions of “Contract” and “DataSpecies” have been defined, they can be put together into various combinations which allows one to store many more instruments than one has considered. The recursive nature of the language is also a significant win in the financial world because many contracts are, in fact, recursive. Relatively “simple” contract types such as options, futures, and bonds may be combined to create, for example, an option on a bond future contract. [0040]
- There is also significant advantage in keeping the ticks in the form of strings. It allows parsing to be dynamic, which means no code needs to be recompiled to handle new data types. One can simply modify the SQDADL definition and then, it is immediately able to store ticks of the new type in the repository. [0041]
- 6.1.4 Time Series Request Syntax [0042]
- Because each time series is modeled as a restriction of another time series, it is easy to see how the SQDADL definition can lead to a way of specifying queries to the data repository. The present invention may include a syntax for restricting each of the fundamental types. The user can then combine these restrictions to define the desired time series. Restrictions may be implemented with expressions. [0043]
- General Expressions Referring back to FIG. 2, each of the “leaf nodes” of the SQDADL parse tree is given a type indicator. These types are known to the repository as fundamental types and closely follow types inherent in most programming languages or communications standards. For each of these types, the present invention may define a set of expressions which can be used as a filter for deciding whether data is part of the requested series or not. This concept is very much like a regular expression or wildcard. In fact, for the string types, POSIX-style regular expressions could be directly used. [0044]
- Thinking along these lines, one could send the following request to the data repository:[0045]
- (*-*,FT(FX(USD,JPY),Quote(*,*,*,*)))
- This request says that one would like all Japanese yen prices quoted against the US dollar over the entire range of time with no restriction on the prices, contributing bank, or the information source. FIG. 4 provides examples of requests to match each of the time series previously defined in FIG. 1. [0046]
- There may be a different expression syntax for each of the data types. For example, the syntax of an integer expression will be different than the syntax for a string. The present invention determines which expression syntax will be used based on the types of the leaf nodes as indicated by the SQDADL parser. [0047]
- It is not hard to imagine a set of expressions which allow the user to make very powerful and flexible filters for each data type. For example, one might use the expression “10<<12” in an integer field to request only ticks with values between 10 and 12. These filters can be added and modified as time goes on since they only affect the data retrieved by a query and not the storage process. [0048]
- Time Expressions Because one is working with a time series repository, time is the handle by which one may access data. As such, expressions in the “Time” field are treated as a special case of the type-based expressions syntax. [0049]
- To illustrate this, assume one may want to get the price of an instrument as it was at midnight on a certain date. The probability of there being a tick at exactly midnight is actually very low so one usually needs to ask for the tick before and the tick after so that some interpolation can be done. One might formulate this expression as “01.01.1990 00:00:00[−1..1]”. [0050]
- The problem here is that the time expression is no longer a filter whose behavior can be determined only by the tick itself. If one asks for the tick before midnight, one does not know if this tick will be one second, one minute, or even one day before that hour. The behavior of any filter that will include this tick depends not only on the time of the tick, but also on the temporal placement of other ticks in the specified time series. [0051]
- This implies that the processing of the time expression is something that must be considered deep in the repository machinery since the low level features of the time series are only known there. While all other restrictive expressions can sit at a higher level, and even, theoretically, on the client side, time expressions may be handled specially. [0052]
- 6.1.5 Storage of Data Ticks [0053]
- All modern operating systems support the concept of a file with an associated name and data. While the abstraction of the present invention avoids the need for this classification of data on the user side, the present invention also maps the model onto a physical computer. [0054]
- An implementation of the storage and request system that has been described is to simply store all the strings that a user gives to the repository in a single text file. A request then only requires one to go through the file, apply the restrictions, and then return the specified subset. While this would work, it is quite inefficient. The present invention employs some kind of grouping of like data behind the scenes in order to improve data query performance. [0055]
- The present invention includes an architecture which allows the repository itself to store the data in the most efficient way it can based on hints given to it in the SQDADL configuration file. Referring back to FIG. 2, all leaf nodes in the parse tree not only have a type assigned to them but also a designation ‘f’ or ‘v’. This value is a hint to the repository and indicates whether this field is considered fixed or variable with respect to the most common query for data. [0056]
- For example, in the sample SQDADL configuration, note that the currencies are all tagged with the “fixed” hint. This means that users are expected to more often ask for a fixed currency in their requests rather than a broader expression as a filter. Specifically, more queries are expected of the form: [0057]
- (*-*,FT(FX(USD,JPY),Quote(*,*,*,*))) [0058]
- than: [0059]
- (*-*,FT(FX(USD,*),Quote(*,*,*,*))) [0060]
- With these hints, the present invention has all it needs to store the data in a file on the physical machine. Given a string representation of a tick, two data buffers are created into which the string is divided. The first will become the filename and the second will hold the data record which will be appended to this file. [0061]
- To ensure random access capabilities in each file, each data buffer must be the same length for a given file. This, of course, depends on the type of the data going into the buffer. For types of fixed size, for example integers, the data is simply written into the buffer. However, if the data is variable in size, such as a variable length string, another solution is needed. In this case, for each file a secondary storage file is created to hold all variable length data and its size. The offset into this file is then stored in the data buffer. Since the offset is simply an integer, a fixed size for all records is maintained in the primary file. [0062]
- Now the tick and divide fields are parsed into the two buffers according to the following straightforward rules: [0063]
- If the field is a non-leaf node token or it is a leaf node token with a hint of “fixed”, copy it to the filename buffer. [0064]
- If the field is a leaf node with a hint of “variable” and with a constant size, copy it to the data buffer. Then place a “*” in the filename buffer. [0065]
- If the field is a leaf node with a hint of “variable” and has a non-constant size, write its size and data to the secondary file and copy its offset to the data buffer. Then place a “*” in the filename buffer. [0066]
- FIG. 5 shows how a given foreign exchange quote tick is broken up into the two buffers with the SQDADL configuration. For efficiency, the data record buffer holds binary versions of the data. And because the filename is often long and a rather strange collection of parentheses, wildcard characters, and commas, under most operating systems, a layer of indirection is required to map the appropriate filename onto a filename that the operating system can handle natively. [0067]
- Once the parsing is complete, the appropriate file is opened and the data buffer is appended onto the end. If the file does not already exist, it is created beforehand. In this way, the repository is dynamic and can adapt to new ticks (for example, the creation of a new currency) but still hides the maintenance from the user. [0068]
- 6.1.6 Retrieval of Data Ticks [0069]
- The hints given in the SQDADL definition lead to a pattern of possible filenames. For example, the hints we have given in FIG. 2 could lead to the filename: [0070]
- (*,FT(FX(USD,JPY),Quote(*,*,*,REUTERS))) [0071]
- If the user submits this same string as a request for data (with some range of time), the file is opened, the appropriate start time is found, and the ticks are given to the user until the end time is reached. [0072]
- Of course, this is an optimal request. The present invention also handles non-optimal requests and allows users to specify expressions anywhere they please without having to know how the data is actually stored. [0073]
- File List Selection Given a request for a time series, the present invention can determine all possible files that may have information relevant to the request. The request is parsed into tokens and three rules are applied to the set of all filenames: [0074]
- If the given token is a non-leaf node, then select filenames that exactly match it. [0075]
- If the given token is a leaf node and this leaf has a hint of “variable”, then select filenames that have a “*” in this position. [0076]
- If the given token is a leaf node and this leaf has a hint of “fixed”, then apply this expression and select only those filenames that match it. [0077]
- Using an example from FIG. 4, if the present invention is given the request:[0078]
- (*,FT(FX(USD,*),Quote(*,*,BGFX,*))) [0079]
- and the following filenames exist in the repository directory: [0080]
- (*,FT(FX(USD,JPY),Quote(*,*,*,REUTERS))) [0081]
- (*,FT(FX(USD,CHF),Quote(*,*,*,REUTERS))) [0082]
- (*,FT(FX(USD,DEM),Quote(*,*,*,REUTERS))) [0083]
- (*,FT(FX(DEM,CHF),Quote(*,*,*,REUTERS))) [0084]
- (*,FT(FX(DEM,GBP),Quote(*,*,*,REUTERS))) [0085]
- only the following files are selected for reading to service this request: [0086]
- (*,FT(FX(USD,JPY),Quote(*,*,*,REUTERS))) [0087]
- (*,FT(FX(USD,CHF),Quote(*,*,*,REUTERS))) [0088]
- (*,FT(FX(USD,DEM),Quote(*,*,*,REUTERS))) [0089]
- Given the hints in the SQDADL definition, it is clear that only these files can contain information that are of interest. The expression “USD” prevents the selection of the others. Yet because it is a variable field, the “BGFX” expression does not affect the list. [0090]
- The Data Cursor Once a list of possible files are established a software cursor which can be used to pass over the files and hand the ticks to the user when requested may be created. In object oriented terms, a cursor object is instantiated by giving it a set of files from which to read, a desired starting time, and the entire expression pattern that defines the desired time series. Internally, each of these files is opened and a binary search is done to find the desired start time. [0091]
- Once instantiated, the cursor provides two methods to the user, called next( ) and prev( ). The next( ) method returns the nearest tick in the requested time series after the current time. The prevo method returns the nearest tick in the requested time series immediately before the current time. [0092]
- How the cursor actually does this is displayed graphically in FIG. 6. When asked for the nearest tick after the current time, it surveys all the files and chooses the tick with the lowest timestamp. It then applies the expression filter to this tick to see if it should be included in the requested time series. If not, it goes back to the files to get the next until a matching tick is found. Conceptually, this is merging the files in time series order and then removing any ticks that are not appropriate. The prevo method has the same implementation, but works by backing up in the files rather than moving ahead. [0093]
- Servicing a Request A wrapper may be made around this cursor to service any request for data. For example, let's say the user has given us the following request string: [0094]
- (01.01.1990 00:00:00[−10..5],FT(FX(USD,*),Quote(*,*,BGFX,*))) [0095]
- This means that we want all ticks for any currency measured against the US dollar that came from the bank BGFX. And our time range is the 10 ticks before midnight 01.01.1990 and 5 ticks after. Steps for handling the request include: [0096]
- Build the list of all possible filenames that could contain data that is needed for this request. This was done in the last section. [0097]
- Extract the base time from the time expression. In this case, the base time is “01.01.1990 00:00:00”. [0098]
- Instantiate a cursor for these files with this start time and pass it the full expression pattern that was given. [0099]
- Make n calls to the prev( ) method to rewind the time series. In this case, n is 10. [0100]
- Make n calls to the next( ) method and hand each to the user. In this case, n is 15, and represents the total length of the time series. [0101]
- Here, the merging of all files by the cursor provides all possible appropriate ticks. But the cursor tests against the full expression to ensure that only those from bank BGFX are actually given. This is the time series that was requested. [0102]
- 6.1.7 Comments on Administration [0103]
- The expensive part of a request may be data removal. Because a bank name has a hint of “variable”, it is put inside the data record rather than in the filename. This means many data records that do not match our expression may often need to be read just to get at those that do match. This is a waste of computer time. [0104]
- One solution to this problem is to give the bank name a “fixed” hint. If this were the case, then it would be put in the filename and only the files that are really necessary need to be opened and merged. This would result, again, in a near optimal request because the merge operation is computationally trivial. [0105]
- The decision as to whether to make a leaf node “fixed” or “variable” may be made by the repository administrator. If it is known that there is a small number of banks, then making “Bank” a “fixed” field may be a reasonable option, since only a few additional files would be created. On the other hand, the price field would certainly not be define as “fixed”, since there are essentially an infinite number of prices. [0106]
- Another factor may be taken into account as well. The administrator may to decide which are the most common fields to be fixed in a user request. If users rarely put restrictions on the bank field, then this may be put in the data record because computer time will only be occasionally wasted. [0107]
- Thus, it is not only the data itself that determines how the files are arranged but also the requests. When it is difficult to decide, it is better to err on the side of making a leaf node “fixed”. This results in faster request processing because fewer ticks need to be removed at run time. However, it should be stressed that the storage hints in no way affect the requests that the user can make. They only affect how fast those requests are handled. Indeed, the administrator can reorganize the file storage unbeknownst to the user. [0108]
- 6.1.8 Experience [0109]
- The SQDADL code was designed to be flexible and easy to maintain. The goal was to be able to add new data types to the repository in a matter of minutes simply by defining the syntax of a new type and specifying the breakdown of its fields. This has been achieved and the data collection has been expanded with very little effort given to making SQDADL definitions. [0110]
- Because SQDADL fully describes the financial instrument, a complex instrument such as an option on a bond future is represented by a complex SQDADL syntax. This makes it difficult for end users to remember the syntax. The present invention handles this problem in one of two ways. First, a layer is built on top of the normal repository requests so that simple data requests are done simply, leaving more complex requests to be done through the normal syntax. Alternatively, a tool helps the user dynamically build the requests strings by listing options and filling in boiler-plate components as needed. The present invention may include a functionality similar to the UNIX tcsh command interpreter or the X windows xfontsel font browsing utility. [0111]
- The user may know that currency prices are stored and it is possible to find the list of those that are available. A meta-query database is available to store the various possibilities for each leaf node of a request string. A user could ask what currencies are available. [0112]
- Finally, the use of flat files for data storage requires that all data arrive in time order so that it may be stored that way. To solve all time ordering issues that could arise, a b-tree storage mechanism may be used in the lower layer. [0113]
- 6.1.9 Conclusion [0114]
- The described system has shown itself to be quite useful in the field. The flexibility of the SQDADL language has allowed the collection of over thirty instrument types with very little effort being spent on the data definition. The implementation has also resulted in fast response times because of the flat-file storage foundation. [0115]
- The next section describes the database-flow-statistical package called ORLA for performing this high-frequency data analysis concept. [0116]
- 6.2 ORLA [0117]
- 6.2.1 Introduction [0118]
- What is ORLA? The Olsen research LAboratory (ORLA) is a programming system designed and implemented to fulfill the following objectives: [0119]
- To be a platform for economic research. ORLA can process large time-series data-sets. The term “large” includes data-sets whose size exceeds that of a typical computer's main memory. [0120]
- ORLA runs in both historical (reading from fixed data files) and real-time modes (processing data as it arrives from external data sources). [0121]
- These goals have been met by designing ORLA around a data-flow architecture. Rather than writing programs using the conventional concepts of data, functions and objects, ORLA uses the data-flow paradigm. [0122]
- ORLA meets the following criteria: [0123]
- It is extensible. ORLA includes a framework for users to add their own processing modules. The set of data-types known to ORLA is extensible. [0124]
- It is transparent. The underlying actions of the ORLA system are hidden as far as possible. This simplifies what is required when developing new functionality within ORLA. [0125]
- It is efficient. It imposes minimal overhead in processing whatever data is given to it. [0126]
- Outline This section includes an introduction and a programming manual for ORLA. It includes the following chapters: [0127]
- “Getting Started”, which introduces the main concepts used in ORLA and shows what goes on inside an ORLA application. [0128]
- “Overview of the Block Libraries”, which introduce the main blocks and their organization into separate libraries. [0129]
- “Error Handling and Debugging” gives advice for when things goes wrong. [0130]
- These chapters enable a user to write an application using existing blocks. [0131]
- Subsequent sections provide detailed information for users wishing to extend ORLA by developing their own blocks (processing modules). One chapter explains how the datum works. Another chapter concerns networks and their semantics. Another explains how to extend ORLA by writing customized blocks. [0132]
- Some Features of ORLA ORLA includes the following features: [0133]
- The capabilities of the new datum and SQDADL enable better block interfaces. Configuration dependencies between blocks are removed. [0134]
- Most blocks have a configPair constructor. [0135]
- A global factory method creates a network or portion of a network. [0136]
- Datum allows one to define the SQDADL in a text file, which is parsed at the start of the program. [0137]
- The datum classes are also used by the repository, which allows for a seamless integration of Orla with the repository. [0138]
- Data may be handed over to a block as opposed to requiring the block to read the data explicitly. [0139]
- A real-time processing mode for running production applications. [0140]
- Support for timers working transparently for both historical- and real-time operations. [0141]
- A smart network scheduler, activating blocks at run-time. [0142]
- A network object which understands the block topology and detects feedback loops. [0143]
- A network management interface to monitor and debug a running network. [0144]
- Data types, which are able to represent arbitrary time-stamped financial data. [0145]
- A simplified block class hierarchy. [0146]
- Build-up, start and end times. Blocks can have a build-up time during which no data is sent forwards. An Orla network can have start and end times, before and after which no data is passed on inside the network. [0147]
- Database interface. An Orla application can receive data from a real-time database. [0148]
- Many blocks for financial computations. [0149]
- Configuration files. There are classes for reading and storing configuration information in the form of key-value pairs. [0150]
- The handling of time. A 64 bit representation of time has been defined, as well as a number of related classes like ObcTimeInterval, ObcTimeZone, ObcLocalTime and ObcMarket. [0151]
- The handling of scaled time. This includes the definition of different TimeScales (ObcTickTimeScale, ObcMarketTimeScale, ObcThetaTimeScale and ObcPhysicalTimeScale), corresponding ObcScaledTime classes and ObcScaledTimeInterval. [0152]
- ORLA's Implementation ORLA may be implemented in the C++ programming language. It is portable to different programming environments. Technically speaking, ORLA may be implemented using the Solaris SPARCworks C++ compiler (version 4.2) and the Rogue Wave libraries (version 7). [0153]
- As noted above, ORLA is readily extensible. Researchers and developers may write their own ORLA blocks. Such blocks can be incorporated into the standard ORLA block library. [0154]
- 6.2.2 Getting Started [0155]
- This section introduces the main terms and concepts used in ORLA. Reading this section explains what goes on inside an ORLA application and shows how to use the blocks and data-types belonging to the standard ORLA library. [0156]
- The Overall Ideas ORLA is based on a data-flow paradigm: data flow through a network from block to block and are processed as they pass through each block. An ORLA application therefore consists of a network which in turn is defined in terms of blocks and their interconnections. The fundamental concepts in ORLA are those of network, block, connection and datum. Examples of networks are depicted in figures FIG. 7 and FIG. 9. [0157]
- Conceptually, a block is a processing unit, possibly with internal states. A block communicates with the other blocks in the network by receiving data on its input ports and sending data to other blocks through its output ports. It processes the data flowing through it, thereby implementing some functionality. For example, a block may read data from a file, generate a synthetic regular time series, compute a moving average or a correlation. Each block generally performs one small, well-defined task; more complex tasks may be achieved by connecting blocks together. [0158]
- A connection establishes how the data flow by linking the output port of one block to the input port of another. Creating a connection between two blocks is termed binding. The flow of data across a connection is termed a stream. At a global level, the connections define the topology of the network. A network is thus defined by its constituent blocks together with their connections. [0159]
- The data are the items of information that are processed by blocks and sent along connections. A datum belongs to a certain data-type; for example, a floating-point value or a foreign exchange spot-price. A block accepts and produces data of given types. The types of data may be different for each port. A block may also modify the type of the datum it reads from an input port before it passes the datum on to the output port. However, when a connection is established between two ports, the type of the data produced by the output port must agree with the type of data accepted by the input port. For a port, these data-types remain constant for the lifetime of the connection. [0160]
- Building and Executing a Network In order to build an application or perform a computation with ORLA, a network may be designed and built. A network is built by creating and initializing its component blocks and binding them together. Initializing the blocks may require configuration information such as the name of a file or the parameters for a computation. After all blocks are created and connected together, the network is considered built. [0161]
- Once built, the network is then executed. This causes data to flow from block to block and to be processed along the way. This continues as long as there are input data available for processing or timers to be fired. When all the input data have traveled through the network and no pending timers exist, the network becomes idle and returns to the caller. [0162]
- This flow of data through the network may be managed behind the scenes by a network scheduler known as the run-time system. The run-time system has two main responsibilities. First, it moves the data along the connections thereby managing the flow of data through the network. Second, it activates the various blocks in order for them to process the data or timer events (scheduling). [0163]
- The purpose of the run-time system is to handle those issues that are necessary for implementing data-flow networks but which are essentially extraneous to the immediate application or computation. The run-time system is implemented efficiently and imposes minimal overheads. [0164]
- A First Network Networks are generally straightforward to program. Consider the network shown in FIG. 7. This network reads from standard input and writes to standard output, thereby echoing its input. A C++ program to construct and run such a network may look like this: [0165]
#include <iostream.h> #include <OrlaReadAscii.hh> #include <OrlaNetwork.hh> #include <OrlaPrint.hh> int main ( int argc, char** argv ) { OrlaNetwork net( argv[0] ); // Construct a network object OrlaReadAscii in( cin ); // Build the block objects OrlaPrint out ( cout ); net >> in >> out; // Bind the blocks net.run(); // Run the network }
- Our network consists of an OrlaReadAscii block and an OrlaPrint block. The constructor of OrlaReadAscii expects either an istream& or a file name argument. The block reads data in ASCII from the input stream and interprets them according to the type given by the first line in the stream. [0166]
- OrlaPrint is a block that takes any data given to it and prints them to a specified output stream. The stream to print on is specified in the block's constructor, in the above program on cout. The OrlaPrint block then passes these data on to the next block in the network. Because there is no other block, the data gets destroyed automatically. [0167]
- The connections between the blocks are specified using the bind operator>>. The bind operator is a double arrow pointing in the direction in which the data are to flow. Here, the data flow from the OrlaReadAscii block into the OrlaPrint block. [0168]
- The constructed network is then run. As expected, data are read and written. This continues until the producer block sends the end-of-data condition which is when all input data have been read. As soon as the consumer block has processed the end-of-data signal, the network detects that all blocks are idle and exits from the run. [0169]
- Using the Makeconf Program The above program can be compiled and linked. In order to do this, we use the Makeconf program to generate a Makef ile. This Makefile can then be used by the make utility to compile and link the program, as given by the following exemplary steps: [0170]
- $ cp /oa/build/main/libraries/orla/doc/simplel.cc [0171]
- $ makeconf −p orla3 simple1.cc [0172]
- $ make −f Make.solaris.mk [0173]
- $ simple1</oa/build/main/libraries/orla/doc/simple1.dat [0174]
- The Life of Blocks and Networks As noted above, a network is constructed by creating and initializing its constituent blocks and by binding these blocks together. The built network is then run. This starts the run-time system which in turn controls the network execution. [0175]
- The execution of a network can be separated into three broad stages all of which are managed by the run-time system: initialization and set-up; processing the data and end-of-data handling. These three stages are conceptually similar: information flows down the network from the producer-only blocks through the producer-consumer blocks to the consumer-only blocks. During initialization, the data-types produced on each output port propagate through the network from the producer-only blocks downwards and this allows the blocks to check that they are correctly bound in the network. During the second stage, the data flow through the network, again from the producer-only blocks downwards, and are processed as they pass through the producer-consumer blocks. In the third stage, end-of-data indications percolate down the network. These give the blocks a chance to perform their final tasks and also possibly to clean up. [0176]
- At the block level, these three stages are implemented by the configure( ), processData( ), processTimer( ), processEndOfData( ) and processEndOfRun( ) methods. These methods correspond to the initialization, processing and final computation stages respectively and are invoked directly by the run-time system. [0177]
- For illustration, consider the example of a block that calculates a simple average. Its input and output streams consist of a flow of floating-point values with one output datum for every input datum. The configure( ) method checks that the block is correctly bound in the network by ensuring that it has one connected input and one connected output port. It also checks that the data to be received on the single input port are of the expected type. In general, block initialization is performed inside the constructor but some initialization may have to wait for the configure( ) method: it is only when configure( ) runs that the number of input and output ports are known as well as the data-types of the inputs. In this example, the internal counters need to be set to zero and this can be done inside the constructor. [0178]
- Once all the blocks configure( ) methods have been invoked, the run-time system then repeatedly invokes their respective processData( ) methods. In this example, the method makes the appropriate calculation and sends the newly computed data to its output port and hence to the next block in the network. No timers are used, so method processTimer( ) will never be called. [0179]
- The processEndOfData( ) method is called once for each port when each input stream is exhausted. In this example, the method does not need to do anything. However, for other kinds of block, the processEndOfData( ) method may also generate data. For example, if there was a block that generated no data except for a single datum when the input stream finished (say, an average of all the input data), this calculation would be done inside the processEndOfData( ) method. [0180]
- Finally, once all input ports have received the end-of-data indication and all pending timers have fired, the block method processEndOfRun( ) gets called. This is the last chance for the block to forward information on to its consumers. On return, the run-time system will issue end-of-data indications to all output ports, in case this hasn't been done yet by the block itself. [0181]
- A block is therefore fully characterized by its binding properties (ports and data-types) as well as its defining methods. [0182]
- More About the End-Of-Data Indications As explained in the previous section, the blocks belonging to a network are initialized by having their configure( ) methods called. Similarly, the run-time system calls their processData( ) and processTimer( ) methods as data flow through the network. In both cases, this can be thought of as a flow of information down from the producer blocks. [0183]
- The end-of-data stage is similar but more subtle. In general, the upstream blocks producing the data initiate the end-of-data indications. This generally corresponds to an end-of-file signal or to some pre-established conditions on the data such as a pre-arranged end-time having been reached. As the data are exhausted, the end-of-data signals percolate down the network and the appropriate processEndOfData( ) methods are invoked. This is therefore a gradual process as the data drain out of each connection. Some parts of the network are completed while others still receive data for processing. [0184]
- The configure( ) and processEndOfData( ) methods are therefore distinct in that the configure( ) is a per block initialization whereas the processEndofData is a per connection indication. The former is invoked once per block but the latter is invoked once per input port. [0185]
- This may actually be more complicated because any block may notify the scheduler that its task is over by calling the sendEndOfData( ) method from within its processData( ) or processTimer( ) methods. Blocks further upstream may continue processing data. However, these data cannot progress beyond the block that has issued the sendEndOfData( ). [0186]
- In view of the gradual transition from processing data to end-of-data handling, a criterion must be chosen to determine when the entire network completes its execution. This is defined to be when all blocks representing the network have end-of-data indications from all blocks directly connected to it and have no pending timers. [0187]
- Processing Time-Series The goal of ORLA is to process large time series. A datum is an elementary item of information from a ordered time-series; for instance, a (time-stamp, bid-price, ask-price) tuple from a financial time-series. [0188]
- A time-series datum may contain a time-stamp. The individual data flow through the network and are processed in their natural time order; that is, the data traveling across a connection have increasing time-stamps. ORLA does not enforce this paradigm but all blocks involved in producing time-series preferably observe this constraint. [0189]
- As long as blocks are connected in a linear fashion, the data remain ordered. However, when data follow different paths in the network, there is no synchronization of data among the various streams. This becomes an issue when several streams have to be merged into a new time-ordered flow by a block with multiple inputs. A block with multiple inputs receives time-ordered data on each of its input ports but these input streams are independent of one another. However, such a block must still produce ordered data on its output ports. It does so internally by looking at the time-stamps on its input data and sending them to the processing methods in the appropriate order. In this way, the network may have an arbitrary topology while the data coming into a block is still a time-ordered series. [0190]
- Start- and End-Times A network always runs over a range of input data denoted by specifying the start and end times of this range. The producer-only blocks are programmed to deliver data in the specified time range. [0191]
- Build-Up Delays An important notion for processing time-series is that of build-up delay. That is, a given block may require a certain amount of data for initialization purposes and before it can generate any output data. Typical examples include the computation of an exponential moving average (EMA) or a filter that needs sufficient data in order to initialize an adaptive threshold. If one wants a network to generate results from a given time, one may start it at an earlier time in order to allow the blocks to initialize themselves properly with enough data. The interval of time needed to initialize a network or a block is called the build-up delay. [0192]
- An ORLA block may specify the time interval that has to elapse since the block received its first datum before any data is sent to the block's output ports. [0193]
- ORLA also provides a method for calculating the network build-up delay by searching for the longest build-up path in a network. It must be emphasized that the build-up time can at best be estimated because it may depend on the data. For example, an adaptive filter may require 100 data in order to be properly initialized and the time interval required to have 100 data depends on the time-series. [0194]
- Naming Conventions By convention, ORLA blocks and ORLA-specific classes may be given names that begin with “Orla”, whereas classes related to data-types begin with “Odt”. These prefixes denote all ORLA objects in a C++ program; for instance, the data-type Odt-TypeInstance or the block OrlaReadRepo. [0195]
- Blocks and data-types may share the same naming convention except that block names may be constructed with a verb such as Read or Project as opposed to a noun. This is not a hard and fast rule. [0196]
- Introduction to Data and Data-Types In ORLA, as in any typed programming system, data belong to a specific data-type. A block may specify what types of data it accepts as input in the same way a function of a conventional programming language specifies what types of arguments it accepts. A block also specifies the types of data it produces on each output port. For example, a computational block may accept and generate only floating-point data whereas a print block accepts data of any type. [0197]
- The type of the data received by an input port must be compatible with that produced by the corresponding output port at the other end of the connection. As noted previously, a block checks its input data-types when its configure( ) method is invoked. [0198]
- In order to accomplish this type checking, ORLA provides an internal mechanism to manipulate and reason about data-types. This typing mechanism supports single inheritance which means that a block may be defined as accepting data of a given type as well as all types derived from it. [0199]
- Introduction to Blocks A block may be thought of as a small data processor or procedure. Blocks generally have an internal state that is held in the block object's variables. Blocks function independently of one another so that a block neither knows nor cares what its neighbors in a network may be doing, nor how its producers are generating their data. In other words, ORLA use a local model for treating data; the way for one block to communicate with another is to transfer data to it. Of course, two blocks may access a common object which can affect their behavior, not everything can be or should be done with data connections. [0200]
- The behavior of a particular block typically depends on a set of parameters. As with other C++ classes, these parameters may be provided through different constructors, thus allowing each block to be customized appropriately. [0201]
- Blocks are similar to the functions of conventional programming languages. They expect a certain number of arguments (input ports) of specified types and generate a certain number of outputs also of specified types. However, there are some subtle differences between blocks and functions. At this stage, thinking of blocks as functions is a good, first approximation. [0202]
- As stated previously, a block has input and output ports and is connected via these ports to other blocks in a network. The number of input ports and output ports is a property of a block, as well as the types of data it accepts and produces. [0203]
- A block with no input ports generates data for other Orla blocks, but receives its own input from outside an ORLA network; for example from a file or database. The data may also originate from the block itself; for example, a random number generator block. [0204]
- Ports and Data-Types A typical block has both input and output ports through which data is received and sent. A block may require a fixed number of ports; for example, two input ports and one output port. Other blocks may accept a variable number of ports. [0205]
- In many cases, the data-types and the functions of each input are identical, and therefore the order of binding is irrelevant. However, in the general case this is not necessarily so because the data-types or functionality may differ across ports. For this reason, the input and output ports are numbered, both sets starting at 0, and a data-type is associated with each port (see FIG. 8). [0206]
- A given block must define what type of data it produces on each output port and check that the type of data being received on each input port is acceptable. For example, a block might have two input ports with the zeroth input port accepting double values and the first input port accepting foreign exchange spot-prices. Similarly the output might also be a stream of doubles. ORLA provides a mechanism for allowing a block to make these checks within a network. [0207]
- Because ORLA's typing mechanism supports single inheritance, a port may accept any kind of data that belongs to a specified class or a sub-class thereof. This allows blocks to be developed that can process a set of related types rather than just a single, specific type. For example, the OrlaPrint block accepts any kind of data while the OrlaEMA block accepts doubles or vectors of doubles. [0208]
- The connection between an output port and the corresponding input port is established by binding them together. However, this type checking is not done at “bind time” but rather at “execution time”; specifically when the configure( ) methods are invoked. This allows blocks to be created and bound in an arbitrary order. [0209]
- Some Blocks of the ORLA Library The ORLA library contains several implemented blocks including the ones listed below for processing time series: [0210]
- OrlaReadRepo( const RWCString& sqdadl ); Reads data from a database repository. [0211]
- OrlaReadAscii( istream& in ); OrlaReadAscii( const RWCString& filename ); Reads limited types or ASCII data from an input stream, a f ilename or a file descriptor. [0212]
- OrlaPrint( ostream& out ); OrlaPrint( const RWCString& filename ); Prints data in ASCII either to out or to filename. [0213]
- OrlaWriteAscii( ostream& out ); OrlaWriteAscii( const char filename ); Writes limited types or ASCII data either to out or to filename. [0214]
- OrlaProject( const char[0215]
- functionName ); Applies the function functionName to the input data. [0216]
- OrlaMerge( ); Merges multiple input streams of the same data type into a single output stream. [0217]
- OrlaEMA( const ObcScaledTimeInterval tau ); Calculates an exponential moving average at scale tau. [0218]
- OrlaDifferential( const ObcScaledTimeInterval tau ); Calculates a stochastic differential at scale tau. [0219]
- OrlaSlicer( const ObcTime& start, const ObcTime& end ); Passes only the data between start and end. [0220]
- An Example of a Small Network Next, how to build a small but useful ORLA network will be described. For this example we implement a network that prints a data stream along with its exponential moving average. We use an OrlaProject block to extract the “bid” value of the input foreign-exchange spot-price. [0221]
#include <OrlaReadAscii.hh> #include <OrlaProject.hh> #include <OrlaEMA.hh> #include <OrlaPrint.hh> #include <OrlaNetwork.hh> #include <ObcScaledTimeInterval.hh> #include <ObcTimeScale.hh> int main ( int argc, char** argv ) { ObcPhysicalTimeScale phyTimeScale; OrlaNetwork net ( argv [0] ); // Create the blocks. OrlaReadAscii in( “fx.data” ); OrlaProject bid( “bid” ); OrlaEMA ema( &phyTime Scale, 4 * ObcScaledTimelnterval::minute() ); OrlaPrint print ( cout ); // Construct the network by binding the blocks together. net >> in >> bid >> print; bid.outPortC ( 0 ) >> ema >> print; net.run(); // Run the network. }
- The network constructed by the above C++ is more easily viewed as a diagram, as shown in FIG. 9. [0222]
- Note that the blocks are bound on successive ports by using the >>operator. In ORLA, each successive binding with the >>operator connects the next available output port to the next available input port of the respective blocks. [0223]
- The >>operator is sufficient to bind most networks. To address an output or input port directly, the methods outPort( ) and inPort( ) can be used. This allows in- and output ports to be bound together by explicitly specifying their respective port numbers. [0224]
- The statement “net.run( )” causes all available data to be processed. If one wants the graph to show data in a specified range—for example, between Jan 1, 1990 and Jan. 1, 1991—the start and end times must be specified. However, the build-up delay of the network also needs to be taken into account. (In this example, the presence of the OrlaEMA block causes the build-up delay to be non-zero). This modification of the network start time by the build-up delay can be specified as follows: [0225]
- net.run( ObcTime( 1990,1,1,0,0,0 )—net.cumulativeBuildUpDelay( ), ObcTime( 1991,1,1,0,0,0 ) ); [0226]
- 6.2.3 Introduction to the SQDADL [0227]
- The data flowing between Orla blocks are highly structured, according to a ‘language’ called SQDADL. The name SQDADL stands for SeQuential Data Description Language. Technically, the SQDADL is a particular programming language described by a BNF grammar, and its full definition is given in appendix A. SQDADL includes the following features: [0228]
- The top level of description is a union of five main fields: [0229]
- 1. The time, and possibly more information about the time properties of the time series. For example, the time series is a regular (homogeneous) time series with a given fixed time interval between the data. [0230]
- 2. The SeriesID, namely the description of the financial contract. For example, the value of seriesID is FX(USD,CHF), which denotes the spot foreign exchange rate between the USD and the CHF. [0231]
- 3. The DataSpecies, namely the value of the time series. For example, the value of DataSpecies is Quote(Bid, Ask, Institution), which denotes a quote issued by the ‘Institution’ with the given bid and ask. Many blocks are performing computations, using the DataSpecies Double or DoubleVec. [0232]
- 4. The Source, namely where this information is originating from. For example, the value of Source is Source(Re, *,*), which denote collected data from Reuter. [0233]
- 5. The Validity, namely the filtering information about this tick, according to the real time filter. [0234]
- The SQDADL is recursive, through the contains a relationship. For example a future contract is based on an underlying, like FX(USD,CHF). This dependency is expressed with the SQDADL by the ‘Future’ containing an ‘Instrument’, and instrument can be any underlying. This recursiveness allows to describe complex derivative financial instruments. [0235]
- There is a is a relationship defined between expressions written with the SQDADL. For example a ‘Future’ is a ‘Derivative’. [0236]
- These three features give SQDADL its power and expressivity. Using this language, any information about any financial contract can be written simply. [0237]
- When using Orla, the user is exposed to the SQDADL at two points: requesting data to the repository and the type checking between blocks. The data repository is using the same language to store data and for the query of data, allowing for a seamless integration between the data repository and Orla. Therefore, when requesting data to the data repository with an OrlaReadRepo block, the user may construct the SeriesID corresponding to the desired financial contract. [0238]
- Orla uses the SQDADL both to pass data between blocks and to perform the initial type checking. During the type checking phase, if a block received a type that cannot be handled, the block will complain by throwing an exception giving the SQDADL for the received type and the expected type. The received type should have a is-a relationship with the type that the block is expecting. If this is not true, the block is not correctly bound and the network in invalid. [0239]
- 6.2.4 Overview of the block libraries [0240]
- Blocks may be grouped into libraries according to theirs overall functionalities. Below is an introduction to some libraries and the blocks they contain. [0241]
- inputOutput Inject data in a network from ‘outside’ (for example from the data repository or from a file), and write the data ‘outside’. The two most commonly used blocks are OrlaReadRepo to read data from the repository, and OrlaPrint to print on an ascii file the data passing through it (useful for debugging). [0242]
- Blocks: OrlaGenQuote, OrlaPrint, OrlaPrintForPlot, OrlaPrintForTransform, OrlaReadAscii, OrlaReadRepo, OrlaReadSQDADL, OrlaTMDataSampler, OrlaTMGenerate, OrlaWriteAscii, OrlaWriteRepo, OrlaWrite™. [0243]
- financial Blocks that perform some specific financial computations (i.e., to compute a crossrate with FX data). [0244]
- Blocks: OrlaCrossRate, OrlaInvertFXQuote, OrlaSelectContract, OrlaSplitContracts, OrlaTX2Quote. [0245]
- computational General computations with real data, like derivative, volatility, or generation of regular time series. Many blocks can ‘vectorize’ (i.e., do the computations on many time horizons). This is a very powerful feature of Orla, as the data can be computed and analyzed simultaneously on many time intervals using a simple network. [0246]
- Blocks: OrlaBivariateMapping, OrlaDerivative, OrlaDifferential, OrlaEMA, OrlaGlobalSampler, OrlaHomogeneousConvolution, OrlaLinearConvolution, OrlaMA, OrlaMNorm, OrlaMStandarize, OrlaMVariance, OrlaMicroscopicDerivative, OrlaOBOS, OrlaOBOSAnalysis, OrlaOISActivity, OrlaQuoteRate, OrlaRTSAverage, OrlaRTSgenerate, OrlaRTSlag, OrlaRTSsampler, OrlaReturn, OrlaSMSAdaptive, OrlaSMSUniversal, OrlaScaleTime, OrlaTimeDifference, OrlaTurningPoint, OrlaUnivariateMapping, OrlaVolatility, OrlaWindowedFourier. [0247]
- Abstract classes: OrlaConvolutionABC, OrlaIncrementalFunctor, OrlaRTSstackable, OrlaVectorisableABC. [0248]
- statistical Basic statistical analysis, like correlation, moving correlation, or least square fit. [0249]
- Blocks: OrlaCorrelation, OrlaIntraDayLeastSqFit, OrlaLaggedCorrelation, OrlaLeastSqFit, OrlaMCorrelation-1, OrlaMCovariance, OrlaMWeightedCorrelation. [0250]
- histogram Compute histogram, probability distribution, conditional average, intra-week average, etc. All blocks in this library may use classes related to the function library, like OfctSampledAxis and OftlHistogram. Most of the blocks accept both scalar and vector data. [0251]
- Blocks: Orla2dimHistogram, OrlaAverageCondDeltaTime, OrlaConditionalAverage, OrlaConditionalAverageSquare, OrlaHistogram, OrlaIntraDayAverage, OrlaIntraDayAverageCondDeltaTime, OrlaIntraWeekAverage, [0252]
- OrlaIntraWeekHistogram, OrlaLaggedConditionalAverage, OrlaLaggedConditionalAverageSquare, OrlaLaggedHistogram [0253]
- Abstract classes: OrlaIntraPeriodAverageABC, OrlaIntraPeriodHistogramABC. other Other time series related functionality, like selecting a sub-time period, removing data from a given period in the week (e.g. week-end), graphing the network (OrlaDaVinci, etc.). [0254]
- Blocks: OrlaChop, OrlaCkp, OrlaDaVinci, OrlaDailySlicer, OrlaGrace, OrlaHead, OrlaMakeVector, OrlaMerge, OrlaProject, OrlaSlicer, OrlaSwitchOver, OrlaThin [0255]
- orlaBaseClasses This library contains auxiliary classes used by some blocks. In order to mark this difference, the prefix is OrlaBc. Roughly, the functionalities are: [0256]
- OrlaBcTimelntervalVector, OrlaBcScaledTimelntervalVector: [0257]
- Vector of ObcTimeInterval and of ObcScaledTimeInterval. These classes are used by many blocks that perform computations or statistical analysis over several time horizons. [0258]
- Univariate mappings: [0259]
- A simple univariate mapping for double or vector of double, like exp(x) or a·x. [0260]
- OrlaBcAbsMapping, OrlaBcAffineMapping, OrlaBcExpMapping, OrlaBcIRMapping, OrlaBcLinearElementMapping, OrlaBcLinearMapping, OrlaBcLogMapping, OrlaBcTauIntegrationMapping, OrlaBcUnivariateMapping, OrlaBcVectorDiffMapping, OrlaBcVectorComponent, OrlaBcPower, OrlaBcIdentifyMapping. [0261]
- Bivariate mappings: [0262]
- Bivariate mapping for double or vector of double, like xly. [0263]
- OrlaBcBivariateMapping, OrlaBcAdaptiveVolatilityMapping, OrlaBcProductMapping, OrlaBcRatioMapping. [0264]
- Kernel for convolution: [0265]
- Give the form of the kernel used for computing convolutions with regular time series. [0266]
- OrlaBcDerivativeKernel, OrlaBcDifferentialKernel, OrlaBcGaussianKernel, OrlaBcKernelABC, OrlaBcRectangularAverageKernel, OrlaBcSecondDerivativeKernel, OrlaBcSmoothAverageKernel. [0267]
- Sampling procedure: [0268]
- The sampling procedure to create regular time series from tick-by-tick data, for example the number of ticks, or a linearly interpolated price. [0269]
- OrlaBcSamplerABC.hh, OrlaBcLastTickTimeIntervalSampler, OrlaBcLinearInterpolationSampler, OrlaBcTickCountSampler, OrlaBcTickTimeIntervalSampler. [0270]
- orlacore This library contains basic classes needed for Orla, like timer, scheduler and network. When writing new blocks, the bases classes for all blocks, called OrlaBlock, is in this library. [0271]
- 6.2.5 Error Handling and Debugging [0272]
- Errors and Exceptions The ORLA system reports errors by throwing exceptions. An error may occur during stages 4 to 6 of a network's lifetime or during block stages 3 to 5. By default, throwing an exception causes an error message to be printed and the process to be stopped. Exceptions may arise because: [0273]
- The configuration used in a configPair constructor is not correct (for example a missing key). [0274]
- A block is not correctly bound in a network, either because the number on input or output ports is wrong, or because the block cannot handle the type of the data he gets. [0275]
- The global network topology is incorrect, for example it contains a loop. [0276]
- The default behavior for exception can be changed in order to produce a core dump. A core dump may be produced for exceptions of type ObcException if the environment variable OBC EXIT EXCEPTION is set. This is useful for running a debugger to examine the exit condition. [0277]
- Debugging Networks Because of the hidden complexity of Orla, which involves so many classes and a complex scheduling mechanism, the usual debugging tool like dbx or gdb may be of little use. Instead a user may insert OrlaPrint blocks into the network, and check that the data stream is what was intended. In this way, the bad block can be located. The parameters for this located block may not be properly set. [0278]
- The block OrlaDaVinci is useful to ensure that the constructed network tpology matches what was intended. [0279]
- 6.2.6 Datum and Type [0280]
- As already mentioned, a special library may be used to represent data-ticks and the corresponding data-types in the context of ORLA. This library, the so called Datum library, can also be used independently of ORLA. [0281]
- First, some definitions are listed: [0282]
- Datum is the notion for an Object representing a data tick. [0283]
- Typesystem is a static object structure which holds.the information about what valid types are and how they stand in relationship. [0284]
- Concrete Type is a object which describes a valid type in our typesystem. From concrete types we can create datum instances. Every datum belongs to a concrete type. [0285]
- Abstract Type is a type from which we can not create a datum. These types are mainly used to specify what kind of datums we expect on, for instance, a certain input port of an Orla Block. In prose an abstract type could be [0286]
- I expect Datums which have as DataSpecies a Quote [0287]
- The Datum library provides the following functionality: [0288]
- Creation of types from SQDADL string. [0289]
- Creation of datums from concrete type instances. [0290]
- Comparison of types (i.e., is type A a subtype of type B?). [0291]
- Merging of types. Merging of Datums. [0292]
- Conversion from tick objects to datum objects and vice versa. [0293]
- Complete memory management of all dynamically created objects from the Datum library [0294]
- Typesystem The typesystem may be built according to a grammar file. This grammar describes in a pseudo BNF syntax all valid types in the typesystem. The typesystem (the object structure, which models the typesystem) is dynamically built once at startup of each executable using the datum library. The typesystem is then used to parse SQDADL strings and create type instances, but also to compare types whether they fulfill an IsA-Relationship. The typesystem itself may be hidden from the user, the only access point is the class OdtDatumParser which helps to create type instances. [0295]
- Abstract and Concrete Type Instances Consider two SQDADL strings, each representing a type: [0296]
- Tick(Time( ),FX(DEM,CHF),Quote(,,),Source(,,),Filter(,,)) [0297]
- Tick(Time,SeriesID,Quote(,,),Source,Validity) [0298]
- The first SQDADL string represents a concrete type, because every part of this type is well specified. The second SQDADL string represents an abstract type, because only the DataSpecies part, where we expect a Quote, is well specified, the other parts are held very general, to express that we do not care what stands there. There is an IsA-Relationship between these two types (the first type is a subtype of the second type, because a FX is a SeriesID, a Source is a Source and a Filter is a Validity. [0299]
- The same as a C++ code example: [0300]
- // The Datum parser to parse SQDADL strings into type instances OdtDatumParser parser; [0301]
- const OdtConcreteTypeInstance* type1; [0302]
- const OdtTypeInstance* type2; [0303]
- // the two SQDADL strings [0304]
- const RWCString sqdadl1; [0305]
- const RWCString sqdadl2; [0306]
- sqdadl1=“Tick(Time( ),FX(DEM,CHF),Quote(,,),Source(,,),Filter(,,))”; [0307]
- sqdadl2=“Tick(Time,SeriesID,Quote(,,),Source,Validity)”); [0308]
- // parse SQDADL string and create type instances [0309]
- type1=parser.parseConcreteType(sqdadl1); [0310]
- type2=parser.parseAbstractType(sqdadl2); [0311]
- // check relationships between types [0312]
- // ‘CHECK’ is a macro which prints out a warning, if [0313]
- // the evaluated expression is not true [0314]
- CHECK ( type1→isA(*type2)==true ); [0315]
- CHECK ( type2→isA(*type1)==false ); [0316]
- Note that the class OdtDatumParser has two different methods to parse SQDADL strings into type objects (one for concrete and one for abstract type). [0317]
- The type instances that are created with OdtDatumParser are managed. This means that one does not have to delete them. They are deleted when the executable stops. Each unique type is represented by one type object. If the same SQDADL string is parsed twice, the same type object is returned. [0318]
- Expanding of type shortcuts In order to avoid mistyping one can use Short-Cuts such as the following examples: [0319]
- “Tick(Time,SeriesID,DataSpecies,Source,Validity)”[0320]
- can be written as “Tick”[0321]
- “Tick(Time,FX(,),DataSpecies,Source,Validity)”[0322]
- can be written as “Tick(Time,FX,DataSpecies,Source,Validity)”[0323]
- The fieldlist can be omitted if there is nothing to specify in it. Preferably every shortcut gets expanded in a way that the resulting type is as general as possible. [0324]
- Merging of types Two types can be merged to create a new type. For two types A and B: [0325]
- A is “Tick(Time( ),FX(DEM,CHF),Quote(,,),Source(,,),Filter(,,))”[0326]
- B is “Tick(Time( ),SeriesID,Double(,),Source,Validity)”[0327]
- type A merged with type B leads to type C [0328]
- C is “Tick(Time( ),FX(DEM,CHF),Double(,),Source(,,),Filter(,,))”[0329]
- The merging may follow the following rules: [0330]
- 1. Compare each component of type A with the corresponding component of type B. [0331]
- 2. If the component of type A has an IsA-Relationship to the corresponding component of type B then take the component of type A into the new type. [0332]
- 3. If there is no IsA-Relationship between a pair of corresponding components of type A and B take the component of type B into the new type. [0333]
- For the example, [0334]
- Time and Time are equal components, take Time into new type. [0335]
- FX isA SeriesID, take FX into new type. [0336]
- Quote and Double have no relationship, take Double into new type. [0337]
- Source is a Source, take Source into new type. [0338]
- Filter is a Validity, take Filter into new type. [0339]
- The example with these two types is a common one. Consider of a block which has as input datum instance which has a Quote. The block is calculating the mean of bid and ask, and is replacing the Quote in the datum instances with a Double holding the mean. Thus, the output type of this block is the input type merged with (the above) type B. [0340]
- Consider some code examples: [0341]
// header file (e.g. OrlaMyBLock.hh) class OrlaMyBlock : public OrlaBlock { ... private: const OdtAbstractType* mergeType_; OdtPath* bidPath_; OdtPath* askPath_; OdtPath* valuePath_; } // implementation file (e.g. OrlaMyBlock.cc) void OrlaMyBlock::configure() { ... OdtDatumParser parser; RWCString sqdadl = “Tick(Time,SeriesID,Double,Source,Validity)”; mergeType_ = parser.parseAbstractType (sqdadl); outputType_ = inputType(0)->merge(*mergeType_); bidPath_ = & inputType(0)->createPath(“Bid”); askPath_ = & inputType(0)->createPath(“Ask”); valuePath_ = & outputType->createPath(“DoubleValue”); ... } void OrlaMyBlock::processData( const ObcTime& dataTime, const OdtHandleVector& dataVec ) { OdtInstanceHandle d = dataVec.at(0); OdtInstanceHandle newD = d.createMerge(*mergeType); newD[*valuePath] = (d[*bidPath]().asReal() + d[*askPath]().asReal())/2; send(newD); }
- Some constructs (accessing values, datum instance handling) in the above example will be explained later. In OrlaMyBlock::configure, the output type gets created. The input type is merged against the mergeType. The merging will replace the DataSpecies-part of the input type with the Double. Note that when merging to types, these types are not changed. Instead, a new type (the merged type) is produced. [0342]
- In OrlaBlock::processData, an incoming datum instance is merged against the mergeType. The new datum produced will take over all the fields and fieldvalues of the old datum instance expect the Quote-Part is replaced by a Double-Part. This Double-Part will be filled afterwards with the mean of bid and ask of the old datum instance. [0343]
- Datum instances A datum is implemented as a recursive object structure of objects of type OdtlnstanceBody and OdtValue, which represent one data-tick of a certain type. A class OdtlnstanceHandle, which hides the internals of this structure and takes the responsibility for the memory management of such datum instances, may simplify the handling of these object-structures. [0344]
- For example: [0345]
void foo() { OdtDatumParser parser; const OdtConcreteTypeInstance* typel; const OdtTypeInstance* type2; const RWCString sqdadl1; const RWCString sqdadl2; sqdadl1 = “Tick(Time(),FX(DEM,CHF),Quote(,,),Source(,,), Filter(,,))”; sqdadl2 = “Tick(Time,SeriesID,Quote,Source,Validity)”; Type1 = parser.parseConcreteType(sqdadl1); type2 = parser.parseAbstractType(sqdadl2); OdtInstanceHandle d = type1->createInstance(); CHECK ( d->isA(*type1) == true ); CHECK ( d->isA(*type2) == true ); // d goes out of scope // underlying datum instances are removed automatically }
- Ownership of Datum Instances Because datum instances are passed through an Orla Network, the notion of Owner of a datum instances is important. It is evident, that after passing a datum instance to another block, the previous block is no longer able to change the values (or structure) of datum instance it gave away. The class OdtlnstanceHandle, which gives access to a datum instance, is implemented so that the ownership is passed on copy construction and assignment. If one does not want the ownership to be passed, one may use the duplicate( ) method of OdtInstanceHandle to duplicate the handle. For example: [0346]
OdtInstanceHandle d1 = type1->createInstance(); OdtInstanceHandle d2 = type1->createInstance(); OdtInstanceHandle dup = d1.duplicate(); CHECK ( d1.isNil() == false ); // both handles point to a datum instance CHECK ( d2.isNil() == false ); CHECK ( dup == d1 ); // dup and d1 point to the same datum instance OdtInstanceHandle d3 = d2; // call to copy-constructor ! // d3 took ownership of datum // instances d2 was pointing to CHECK ( d2.isNil() == true ); // because ownership was passed CHECK ( d3.isNil() == false ); d3 = d1; // assignment. d3 releases underlying datum instance. // d3 then takes ownership of datum // instance d1 was pointing to before. CHECK ( d1.isNil() == true ); // because ownership was passed CHECK ( d3 == dup ); // d3 and dup now point to the // same datum instance
- The behavior of OdtInstanceHandle is similar to the auto_ptr template of the C++ Standard Library. [0347]
- The above example is rather theoretical. Practically, this passing of ownership may be used in the sendo method of OrlaBlock. [0348]
- void [0349]
- OrlaBlock::send(OdtInstanceHandle d) [0350]
- The handle d is passed by value. This means that the ownership of the underlying datum instance is passed too. [0351]
- OdtInstanceHandle d=type→createInstance( ); [0352]
- . . . [0353]
- send(d); [0354]
- CHECK( d.isNil( )==true ); [0355]
- Accessing values of datum instances Valid field values Each datum instance has field values. The corresponding type of a datum instance can specify the valid range for a certain field value. For example, the following type [0356]
- “Tick(Time,FX(DEM,USD),Quote,Source,Filter)”[0357]
- says that this a type for datum instances dealing with foreign exchange rates from Deutschmark to US-Dollar. If one has a datum instance of this type, the Per-Fieldvalue is always “DEM”. If one tries to set this field to another value, an exception may be thrown. [0358]
- Setting of field values The following are possible ways to set field values of a datum instance. [0359]
- 1. Field per field d[/“Tick/FX/Per”]=“USD”; [0360]
- 2. Over a SQDADL string [0361]
- sqdaldl=“Tick(Time(01.01.1999 12:00:00),FX(USD,CHF), Quote(1.47,1.48,),Source(,,),Filter(,,))”; [0362]
- d→populateFieldValues(sqdadl); [0363]
- 3. Over a Tick object [0364]
- OdtTick tick=repo→readNextTick( ); [0365]
- d→populateFieldValues(tick); [0366]
- As seen in the above example, as long as the shortcut for a field name is unique (e.g. Per) the shortcut can be used. The SQDADL string and the tick object are checked to ensure that they are of the same type as the datum you want to populate. [0367]
- Getting/Reading field values The values in the datum library are stored in OdtAny objects. An OdtValue object contains one object of OdtAny (to store its value) and one object of OdtAnyExpr, which describes the valid values for this field. In order that all the accessor methods in OdtValue, the underlying OdtAny object can be accessed directly for reading. This happens in two steps. With the [ ]-operator of OdtlnstanceHandle the OdtValue object that one wants to read is accessed. With the ( )-operator of OdtValue, the underlying OdtAny object is returned. An example: [0368]
- OdtInstanceHandle d=type→createInstance( ); [0369]
- d[“Value”]=2.35; [0370]
- OdtValue& value=d[“Value”]; // get OdtValue Object [0371]
- OdtAny& anyValue=value( ); // get underlying OdtAny Object [0372]
- float floatValue=anyValue.asRealValue( ); // get the REAL value [0373]
- floatValue=d[“Value”]( ).asRealValue( ); // or written in one line [0374]
- To convert a datum into a SQDADL string or into a Tick object, one can use the following methods: [0375]
- OdtTick tick=d→tick( ); [0376]
- RWCString sqdadl=d→string( ); [0377]
- Usage of OdtPath objects to speed up access to field values A precompiled path may be used to access a field value. An example: [0378]
float midPrice(OdtInstanceHandle& d) { static OdtPath& bidPath = d->type()->createPath(“Bid”); static OdtPath& askPath = d->type()->createPath(“Ask”); float mid = (d[bidPath]().asRealValue() + d[askPath]().asRealValue())/2; return mid; }
- Path-Objects are preferably created once to obtain the performance enhancement. A path may belong to a type. Path-Objects may be used for a datum object, if the path was created from the datums type instance or from a type instance which is a supertype. For example, to create a path object for the Timestamp field in every kind of datum: [0379]
- static const RWCString base=“Tick(Time,SeriesId,DataSpecies,Source,Validity)”; [0380]
- static const OdtAbstractTypeInstance* type=parser.parseAbstractType(base); [0381]
- static OdtPath& timePath=type→createPath(“Timestamp”); [0382]
- This timePath can be used for every kind of datum, because it was created by the most general type. [0383]
- 6.2.7 Networks [0384]
- The Lifetime of a Network A network and its set of blocks may pass through several steps during its lifetime, including the following: [0385]
- 1. Creation of the network object. [0386]
- 2. Instantiation and binding of blocks to the network. [0387]
- 3. Global network checking. [0388]
- 4. Per-block configuration. [0389]
- 5. Execution of the network, processing of data and timer. [0390]
- 6. End-of-data and end-of-processing indications are sent through the network. [0391]
- 7. Execution of the network is completed. [0392]
- Blocks may be instantiated and bound into a network in any order. After stage 2, all blocks are constructed and bound and the network is considered built. [0393]
- In stages 3 and 4, the network is checked for validity. Stage 3 warns if a block is connected to itself. [0394]
- During stage 5, data flow through the network and are processed. This continues as long as the network continues to receive data for processing or needs to fire timers. [0395]
- At stage 6, end-of-data indications flow down the network. An end-of-data indication is sent from one block to another to inform the recipient that no more data are to follow. These end-of-data indications may be sent in one part of a network while data continue to flow elsewhere in the network. Therefore stages 5 and 6 are somewhat blurred together. [0396]
- Stage 7 occurs when the run-time system has no more blocks which need to process data or timers. In this condition, the network stops executing and returns from the run( ) method. [0397]
- Stages 3 to 6 may occur during the network method run( ). In C++ terms, the run-time system causes the various blocks' configure( ), outputType( ), processData( ), processTimer( ), processEndOfData( ) and processEndOfRun( ) methods to be called in the right order. Details are explained later. [0398]
- After the network has completed running (i.e., after stage 7), all blocks are still bound and accessible to code outside the ORLA run-time system. This allows information to be extracted from the various blocks after the network has completed running. For example, a certain block may be queried about its final result for further processing by the main program. [0399]
- Running a Network A network can be run several times, but a block typically runs only once. If an application requires re-running a network of blocks, the blocks may be built, run and destroyed repeatedly. [0400]
- Start and End Times, Build-Up Time An ORLA network can be run by specifying the start and end-times. These times are passed to all blocks in the network. This causes them to start delivering data to the network at the given start time and to continue until the given end time. This allows a network for processing time-series to be run over a certain range of input data. [0401]
- ORLA also supports the notion of a build-up delay for networks and allows a network to calculate its cumulative build-up time with the method cumulativeBuildupDelay( ). This gives an estimate of the cumulative build-up delay of the entire network. This is calculated by having each block report its own build-up delay via its own buildupDelay( ) method and working out the longest such delay path through the network. [0402]
- Network Topology The binding of blocks allows the user to construct networks with arbitrary topologies. Such a network can also be looked at as a directed graph, where the blocks are the vertices and the connections are the edges. A directed graph without loops is called acyclic and has properties which are exploited by the ORLA run-time system. [0403]
- The vertices in an acyclic graph can be sorted and therefore traversed unambiguously. During the network configuration phase, the blocks are sorted depth-first and then called in this top-down order. The same ordering is also used to activate blocks which have pending data or timers. [0404]
- The network also offers a method which returns the corresponding adjacency matrix. Another method is available to search the network for feedback loops. [0405]
- 6.2.8 Blocks [0406]
- An ORLA application may include user-defined blocks. In this section, we focus on the internals of a block and describe what is needed to build a new kind of block. We first discuss the theoretical aspects of writing a block before analyzing some complete blocks in detail. [0407]
- Blocks may be implemented as C++ objects. A class representing a block is derived from the OrlaBlock class. In order to develop a new block, the programmer may provide specific class methods for which there are no defaults (pure virtual methods) and possibly also override default methods. [0408]
- Design Goals A block may be a “thin”, light-weight entity, accomplishing one well-defined, simple task. Keeping the blocks light-weight allows them to be developed and maintained more easily. It also promotes code re-usage. In other words, blocks that perform simple, straightforward tasks are more likely to find themselves being reused in other areas than blocks that attempt to accomplish more heavy-weight, complicated tasks. [0409]
- When writing a block, ORLA allows the developer to concentrate on the task at hand rather than worry about extraneous problems such as flow control or scheduling. These are issues that the ORLA run-time system automatically handles. [0410]
- The Lifetime of a Block The stages of a block's lifetime include: [0411]
- 1. Construction of a block. [0412]
- 2. Binding the block into the network environment. [0413]
- 3. Checking and initialization of the block. [0414]
- 4. Running the block, processing data and timers. [0415]
- 5. End of data indication on individual input ports. [0416]
- 6. End of run indication. [0417]
- 7. Destruction of the block. [0418]
- Stage 1 is implemented using the constructor for the block and thereby configures the parameters of the block object. As with all C++ classes, the constructor typically allocates resources and initializes the block into a valid state. [0419]
- Stage 2, the binding of the block, is transparent from the viewpoint of the block. Binding is accomplished by using the >>operator or the inPort( ), outPort( ) methods to bind explicit ports. [0420]
- Stages 3 to 6 happen during the execution of the network to which the block belongs; that is, during invocation of the network method runo. [0421]
- Stage 3 allows a block to perform additional setup tasks after it is bound into the network. A block should check that it is bound into the network so that its input and output ports are properly connected. If necessary, it should also check that it will be receiving the correct data type on each input port. [0422]
- Stage 3 also allows a block to perform additional initialization. Generally, most initialization is performed during construction (stage 1) but full initialization may have to wait until after binding. For example, the number of input and output ports and the type of data to be received on each input port are known only after the block is bound. [0423]
- Stage 4 corresponds to the execution of the network. Data are passed to the blocks' input ports and flow through the network. Timers fire at the appropriate times. [0424]
- Stage 5 denotes the end of data indication on each input port. It signifies to the block that no more data are to be expected on the corresponding input port. [0425]
- When all timers have fired and all input ports have received end of data indications (stage 6), the block's work is completed and it performs no further processing. This is the last time the block is called and can be used to send final data or write out final information. [0426]
- Stage 7 is the final destruction of the block. It may be implemented through the C++ destructor of the block. As for all C++ classes, the destructor typically generally cleans up and deallocates resources. [0427]
- Implementation of the Block's Functions The functions of a block are implemented by overloading various virtual methods. The following sections describe these methods in detail. They are summarized in the following table: [0428]
Name Purpose Default ? Stage configure( ) Initialization No 3 and checking buildUpDelay( ) Calculating the Yes 3 build-up delay outputType( p ) Reports data-type Yes 3 on output port p processData( ) Processing the data No 4 processTimer( ) Processing the timers No 4 processEndOfData( p ) End-of-data indication No 5 on input port p processEndRun End-of-data on all No 6 input ports, no timers
- The ORLA run-time system calls these methods. Preferably these are not called directly. They are preferably marked as protected or private in the class definition. [0429]
- The ORLA run-time system calls the configure( ) during stage 3 of a block's life. This is the block's chance to ensure that it is properly configured inside the network. For example, it should check that it is bound into the network with the correct number of input and output connections. It should also check that the types of data to be received on its input ports correspond to those expected. Finally, the configure( ) can allocate additional resources beyond those allocated inside the block's constructor. If there is no default for the configure( ) method, this must be provided for each block. The details of writing a configure( ) method are described later. [0430]
- The ORLA run-time system calls the outputType( ) method in order to determine what data-type the block produces on a given output port. This also happens during stage 3. There is a default outputType( ) method which returns for output port p the same data-type as that being received on input port p. This default may not be suitable for all blocks. If not, a new definition must be provided instead. This is described later in this section. [0431]
- The method buildUpDelay( ) is called to determine how much time the block needs to build up its internal state before it can start sending meaningful data. By default, a block does not need build-up time, and the default method returns a zero time interval. buildUpDelay( ) is called at least once during the checking in initialization of the block and its return value is stored in the internal state of the block. If the delay is greater than zero, the block doesn't send any data to its output ports until that time has elapsed since the first datum sent in the block. [0432]
- As buildUpDelay( ) can be called more than once during the initialization phase, the return value should not depend on any changing status of the class or object. [0433]
- During stage 4 of a block's life, the ORLA run-time system repeatedly calls the processData( ) method in order to process data. On each call, it hands over data to the block which is equal to or younger than the data passed on in the previous call. This continues until there is no more data to process. The details of writing a processData( ) method are described later. [0434]
- The ORLA run-time system calls the processEndOfData( ) method in order to inform the block that no more data are to be expected on a given port. This happens during stage 5. The processEndOfData( ) method is called once for each input port of a block. Its single argument specifies which port is exhausted. When processEndOfData( ) notifications have been received for all input ports, the processData( ) method will not be called again. As long as pending timers exist, processTimer( ) method will still be called. [0435]
- Finally, method processEndOfRun( ) is called. This is the last chance for the block to send information on to its consumer blocks. Afterwards, end-of-data indications are automatically sent along those paths (that is, each consumer will receive an processEndOfData( ) notification). [0436]
- Writing Your Own Configureo Method The ORLA run-time system calls the configure method at least once in order to perform checking before the network begins execution. A configure( ) should check that it is bound to the correct number of input and output ports and that each input port is to receive data of the correct data-type. In order to facilitate writing a configure( ) method, ORLA makes the following procedures available: [0437]
- void expectInPorts(Eq,Le,Ge)( unsigned int n); [0438]
- Throws an exception if the number of connected input ports is not matching the comparison. [0439]
- void expectOutPorts(Eq,Le,Ge)( unsigned int n ); [0440]
- Throws an exception if the number of connected output ports is not matching the comparison. [0441]
- void expectInType( unsigned int p, const OdtTypeInstance& t ); [0442]
- Throws an exception if the type on input port p is not t or a subclass of t. [0443]
- void expectType( const OdtTypeInstance& t1, const OdtTypeInstance& t2 ); [0444]
- Throws an exception if type t1 is not a t2 or derived from t2. [0445]
- unsigned int nbInPorts( ) const; [0446]
- Returns the number of input ports. [0447]
- unsigned int nbOutPorts( ) const; [0448]
- Returns the number of output ports. [0449]
- The following code demonstrates the configure( ) of a block requiring one input and one output port. The input port is constrained to accept data containing a Double as DataSpecies object. Note how the data type object is constructed in the block constructor already. [0450]
void OrlaYourBlock::OrlaYourBlock() : OrlaBlock( “OrlaYourBlock” )) { OdtDatumParser parser; typeDouble_ = parser.createAbstractType( “Tick(Time,SeriesId,Double,Source,Validity)”); } void OrlaYourBlock::configure () { expect InPortsEq( 1 ); expectOutPortsEq( 1 ); expectInType( 0, typeDouble_ ); }
- Writing Your OwnoutputType( ) Method Therun-timesystemcallstheoutputType( ) method in order to determine what type of data will be produced on a given output port. As noted above, there is a default outputType method which returns for output port p whatever type of data is to be received on input port p. [0451]
- It is illustrative to see how this default implementation of the outputType( ) method is written: [0452]
const OdtTypeInstance* OrlaBlock::outputType( unsigned int port ) { return inputType( port ); }
- Here we see that this method makes use of the inputType( ) utility method. The latter returns whatever type of data are to be received on the given input port. [0453]
- If it is known that data of a specific type are to be generated on an output port, code such as the following suffices to inform the run-time system accordingly: [0454]
const OdtTypeInstance* OrlaYourBlock::outputType( unsigned int port ) { return typeDouble_; }
- The above code states that data of type typeDouble_are to be generated on all output ports. The object could be created inside the constructor as seen in the previous subsection. [0455]
- A block may have its outputType( ) method called before its configure( ). [0456]
- Writing Your ProcessDatao Method When the network starts executing, the ORLA run-time system calls the processData( ) method whenever there's data which can be processed. The data is handed over to the block as a vector representing the set of input ports the block has. Each datum in the vector has the same time stamp and the run-time system guarantees that in a future call, data with the same or a younger time will be handed over. Typically, a block processes the incoming data and then calls one of the following methods: [0457]
- void send( OdtInstanceHandle& d, unsigned int p ); [0458]
- Sends a datum to the consumer that is connected to output port p. [0459]
- void send( const OdtHandleVector& dataVec ); [0460]
- Sends a data vector to the corresponding output ports. Ignores vector elements which contain zero pointers. [0461]
- void discard( OdtInstanceHandle& d); [0462]
- Explicitly discards a datum. Not needed anymore. [0463]
- void discard( const OdtHandleVector& dataVec ); [0464]
- Discards a vector of data. [0465]
- void sendEndOfData( unsigned int p ); [0466]
- Sends an end-of-data notification to the consumer that is connected to output port p. [0467]
- void sendEndOfData( ); [0468]
- Sends end-of-data notifications to all existing output ports. [0469]
- The following code fragments helps to clarify this. For a trivial “null” block, which simply forwards all input data to the corresponding output ports, the necessary code is: [0470]
void OrlaYourBlock::processData( const ObcTime& dataTime, const OdtHandleVector& dataVec) { send( dataVec ); }
- Or, for a merging block with any number of input ports, where all incoming data must be transferred to a single output port: [0471]
void OrlaYourBlock::processData( const ObcTime& dataTime, const OdtHandleVector& dataVec ) { for( unsigned int i = 0; i < dataVec.entries(); i++ ) send( dataVec.at( i ), 0 ); }
- Ownership of Data A block receives a vector OdtHandleVector of OdtlnstanceHandle objects when the processData( ) is called. The block becomes the owner of that data. Owning data means that the block has the right to change its contents. Because when passing an instance of OdtlnstanceHandle by value the ownership will be passed too, no explicit deletion of datums are necessary. [0472]
- The End Of Data Condition Eventually the data stream on an input port will become exhausted. In other words, no more will arrive on that port. The run-time system notifies a block of this fact by calling its processEndOfDatao method. The single argument to the processEndOfData( ) method indicates which port is exhausted. [0473]
- A block may check this condition for a given input port by using the endOfData( ) method. [0474]
- A block may explicitly send an end-of-data indication to a consumer by invoking the sendEndOfData( ) method. [0475]
- Writing Your ProcessTimer( ) Method This method serves to process timer events. Whenever time has come to fire a pending timer, the processTimer( ) method is called. Initially, a timer needs to be set in a different method, typically during the configuration phase. [0476]
- Consider, for example, a block which wants to compute a hourly average for incoming data on port 0. It forwards the data transparently to output port 0 and, every hour, sends a datum containing synthesized information to output port 1. [0477]
- Therefore, to setup a hourly timer metronome, we use a timer and set it on the arrival of the first datum in the processData( ) method: [0478]
OrlaYourBlock::OrlaYourBlock() : OrlaBlock( “OrlaYourBlock” ), timer_( timerQueue() ), count_( 0 ), sum_( 0 ) {} void OrlaYourBlock::processData( const ObcTime& dataTime, const OdtHandleVector& dataVec) { OdtInstanceHandle d = dataVec.at(0)[“Double”]; if ( count_++ == 0 ) { ObcTimeInfo humanTime = dataTime.timeInfo(); humanTime.minute_ = 0; // Truncate first arrival humanTime.second_ = 0; // time to the humanTime.microsecond_ = 0; // full hour . // Set a metronome to go off every hour, // starting on the next full hour timer_.set( ObcTime( humanTime ) + ObcTimeInterval::hour(), ObcTimeInterval::hour() ); } sum_ += d[“Value”]().asReal(); send( dataVec.at( 0 ), 0 ); }
- Whenever the timer fires, a new datum is created and filled in with the desired data acquired during the processing of incoming data: [0479]
void OrlaYourBlock::processTimer( const ObcTime& timerTime, OrlaTimer* timer // pointer to our timer_ instance variable ) { OdtInstanceHandle datum = inputType( 0 )->createInstance(); datum[“Timestamp”] = timerTime; datum[“Value”] = sum_ / count_; send( datum, 1 ); }
- The End Of Run Condition Eventually, all input ports have received end-of-data indications and all pending timers have been fired. At that point, the processEndOfRun( ) method is called. It is the last chance for the block to send data on to its consumers. On return, end-of-data indications are sent to all consumer blocks. [0480]
- Block Examples In this section we describe in detail the internals of several simple blocks. [0481]
- Block example: Class OrlaTotalAverage Our first example consists of building a single-producer/single-consumer block that computes and sends out a running average for each input value. The block is designed to read and send double precision floating-point values. We have chosen a simple averaging function so that we can concentrate on what is involved in building a user-defined block. We name this block OrlaTotalAverage, following the naming convention of prepending blocks defined at O&A with “Orla”. [0482]
- As required for user-defined blocks, we derive OrlaTotalAverage from the class OrlaBlock. As a C++ class, this block has both state and member functions. The necessary state to calculate a total average are a count and a sum. It defines the configure( ) method and overrides methods processData( ) and outputType( ). [0483]
- The class definition looks like this: [0484]
class OrlaTotalAverage : public OrlaBlock { // Compute the average so far. Send out a datum for each datum received. public: OrlaTotalAverage ( OrlaNetwork& net); protected: virtual void configure(); virtual const OdtCompositeType& outputType( unsigned int port ); unsigned int count_; double sum_ ; const OdtTypeInstance* typeDouble_; private: virtual void processData( const ObcTime& dataTime, const OdtHandleVector& dataVec ); };
- For class OrlaTotalAverage we first define the constructor that initialises its state: [0485]
OrlaTotalAverage::OrlaTotalAverage( OrlaNetwork& net ) : OrlaBlock( net ), count_( 0 ), sum_( 0.0 ) ) { OdtDatumParser parser; typeDouble_ = parser.parseAbstractType( “Tick(Time,SeriesId,Double,Source,Validity)”); }
- Next we write the block's configure( ). Recall that the configure( ) is called after the network is built and before the network is run. The configure( ) is the block's opportunity to check its state within the network. We check that the number of input and output ports is correct; that is, 1 and 1. We also ensure that the input type is an OdtDouble. [0486]
void OrlaTotalAverage::configure() { // Check that we have exactly 1 input port and 1 output port expect InPortsEq( 1 ); expectOutPortsEq( 1 ); // Ensure that my input type is correct . expectInType( 0, typeDouble_ ); }
- The utility functions expectInPortsEq( ) and expectoutPortsEq( ) are self-explanatory. In this example, they check that this block is bound to one producer block and to one consumer block. If the number of ports is not as expected, these functions indicate an error by throwing an exception. [0487]
- We once create a data type object which we can work with and validate the input port receives data of the same type. The function expectInType( ) takes as argument an input port number and an OdtTypeInstance and indicates an error if the data type received on the specified port does not match what is specified (or is not inherited from the type specified). In this case, we check that input port 0 is to receive data of type Double. [0488]
- The heart of the OrlaTotalAverage block is its processData( ) method. We count the number of items received and their running total. For every received input datum we output the calculated average. The code looks like this: [0489]
void OrlaTotalAverage::processData( const ObcTime& dataTime, const OdtHandleVector& dataVec ) { // Use an alias to point to the OdtDouble object inside the datum. OdtInstanceHandle d = dataVec.at(0)[“Double”]; // Increment n and add to sum. count_++; sum_ += d [“Value”]().asReal(); // Owning the datum so can change it. d[“Value”] = sum_ / count_ ; // Send the data on to our consumer. send( dataVec.at( 0 ), 0 ); }
- For each datum available, we refer to it with a OdtInstanceHandle object named d. We know from the configure( ) that all received data are have a Double part. Any changes to d are automatically reflected in dataVec. at (0). [0490]
- We then make the average calculation and assign the result of this calculation back to dataVec. at (0) via d. We are allowed to do this because we own d and hence are allowed to change its value. This reuse of incoming data in this way is common in writing blocks. It is generally more efficient to reuse received data than to deallocate the incoming datum via discard( ) and to allocate a new one to be sent on. Finally, the send( ) method passes the modified datum along to the consumer. [0491]
- The processData( ) function above is called repeatedly as long as there are data available on the input port. [0492]
- We also provide an explicit outputType( ) method. This informs the run-time system that all data generated by this block are OdtDoubles. [0493]
const OdtTypeInstance* OrlaTotalAverage::outputType ( unsigned int port ) { return typeDouble_; }
- This method definition isn't strictly necessary for this block. As the block doesn't change the output type, the default implementation would be sufficient. [0494]
- Block example: Class OrlaOccasionalAverage The above example demonstrates how to build blocks that take one input stream and produce one output stream. Here we consider a slightly more complicated example, where we wish to output a running average but not every time we receive an input datum. We call this new class OrlaOccasionalAverage. This example demonstrates using inheritance to refine an existing block in order to develop a new kind of block. It also demonstrates how to parameterize a block through its constructor and how to supply an processEndOfData( ) method. [0495]
- Let us assume that we wish to output the running average every nth data point where n is defined outside the class. A final average should also be produced when the input stream is exhausted. This example is similar to the previous one. This suggests that one way of implementing such a block is to derive it from class OrlaTotalAverage and thereby reuse the latter's functionality. [0496]
- The configure( ) and outputType( ) methods are the same as for class OrlaTotalAverage and hence need not be redefined. However, the constructor is different as is the processData( ) method. We also need to provide an processEndOfData( ) method in order to provide the special treatment necessary when the input stream finishes. [0497]
- First we present the class definition: [0498]
class OrlaOccasionalAverage : public OrlaTotalAverage { public: OrlaOccasionalAverage ( unsigned int n ); protected: virtual void processEndOfData( unsigned int port ); virtual void processData( const ObcTime& dataTime, const OdtHandleVector& dataVec ); private: unsigned int n_; ObcTime lastTime_; };
- The constructor copies the argument n to an internal variable, n_ as shown below: [0499]
OrlaOccasionalAverage::OrlaOccasionalAverage ( unsigned int n ) : OrlaTotalAverage(), n_( n ) {}
- The processData( ) function is as follows: [0500]
void OrlaOccasionalAverage::processData ( const ObcTime& dataTime, const OdtHandleVector& dataVec ) { lastTime_ = dataTime; // store the time for later use OdtInstanceHandle d = dataVec.at( 0 ) [“Double”]; // Increment count and add to sum. count_++; sum_ += d[“Value”]( ).asReal( ); // This block only sends down every nth data. Is it the nth? if ( count_ 0% n_ ) // It isn't the nth datum. discard ( dataVec.at(0) ); else { // Change to new value and send it to our consumer(s). d[“Value”] = sum_ / count_ ; send( dataVec.at(0), 0 ); } }
- As in the previous example, we alias the incoming datum using a reference and make the necessary calculation. The above example illustrates what must happen if we don't wish to send our received datum on to our consumer. In this case we can use the discard( ) method to free the datum rather than send( ) it on. [0501]
- In the previous example, we did not need to use the end-of-data condition on the input port. In this example, we use the processEndOfData( ) method to perform one last task before finishing. If we have received fewer than n data since we last sent a result, we send our final result to the consumer. Of interest here is the fact that we are fabricating brand-new data; that is, we send a newly generated datum rather than retransmit one that we have received from our producer. [0502]
void OrlaOccasionalAverage::processEndOfData ( unsigned int port ) { // It is the definition of this block to send down the remaining data. // It does not need to send anything down // if we are a multiple of the n-th. if (count_ % n_ ) { // Create a datum with the same time as the input port OdtInstanceHandle datum = inputType( port )-> createInstance( ); // Set attributes of the datum. d[“Timestamp”] = lastTime_; d[“Value”] = sum_ / count_; // Send the newly created datum to our consumer. send( datum, 0 ); } }
- Block example: Class OrlaSlicer The above block building examples focus on blocks that process data. In this example, we focus on a block which uses timers. A simple example is a slicing block which only lets data through for a certain time period. First, the class definition: [0503]
class OrlaSlicer : public OrlaBlock { public: OrlaSlicer( const ObcTime& start, const ObcTime& end ); protected: virtual void configure( ); virtual void processData( const ObcTime& dataTime, const OdtHandleVector& dataVec ); virtual void processTimer( const ObcTime& dataTime, OrlaTimer* ); OrlaTimer timer_; const ObcTime start_; const ObcTime end_; bool inSlice_; };
- The constructor simply stores start and end time. We use a boolean variable to indicate if we are inside the given time slice or not: [0504]
OrlaSlicer::OrlaSlicer( const ObcTime& start, const ObcTime& end ) : OrlaBlock(“OrlaSlicer” ), timer_(timerQueue( ) ), start_( start ), end_( end ), inSlice_( false ) { }
- As usual, the configure( ) checks the number of input and output ports. The block is transparently passing data from an input port to the corresponding output port, so the number of output ports must not be larger than the input ports. We don't check the data types, because we don't need to process any data. We set the timer to go off when the start time is reached. [0505]
void OrlaSlicer::configure( ) { expectOutPortsLe(nbInPorts( ) ); timer_.set( start_ ); }
- Because we are using timers in this example, the processTimer( ) method needs to be overridden. It is called the first time when the start time is reached. We then re-set the timer to go off a second time at the end of the interval. [0506]
void OrlaSlicer::processTimer( const ObcTime& timerTime, OrlaTimer* timer ) { if ( !inSlice_ ) { // did the start time fire? inSlice_ = true; timer->set( end_ ); } else { inSlice_ = false; sendEndOfData( ); // we're done, no more data from us } }
- The processData( ) method is very simple, all it needs to do is to check whether the block is inside the time slice or not and thus send the data on or discard it. [0507]
void OrlaSlicer::processData( const ObcTime& dataTime, const OdtHandleVector& dataVec ) { if ( inSlice_ ) send( dataVec ); else discard( dataVec ); }
- This block could be implemented without the use of a timer. [0508]
- Blocks with Multiple Inputs In the preceding example, we have already seen a block with (potentially) multiple input ports. However, that block didn't care about what kind of data it received, it simply passed it on. In most cases, though, input data is analyzed and processed before some other data is passed on to the consumers. [0509]
- Typically, when the processData( ) method is called, not all input ports have data. For example, a block might have no input datum on port 0 but several pending data on input port 1. During historical-time operations, this block will not be called until data are ready on both ports. During real-time operations, however, the block doesn't wait for both ports to carry at least one datum. Whenever the block is activated, the oldest datum on any of the two ports is handed over to the block via the processDatao method. [0510]
- Block example: Class OrlaMerge As an example of a block accepting multiple inputs, we examine the library class OrlaMerge. It takes N input streams and produces one output stream. The input data arrive in time-ordered sequence on each input port. Method processDatao is called repeatedly for a vector of data with the same time stamp. The output consists of the input data merged together so that time ordering is preserved. [0511]
class OrlaMerge : public OrlaBlock { public: OrlaMerge( ); protected: virtual void configure( ); virtual void processData( const ObcTime& dataTime, const OdtHandleVector& dataVec ); };
- During the configuration phase, we test the number of input and output ports and also that all inputs are of the same type. Stated more accurately, that ports 1 to N are of the same type or subclasses of port 0. Here, we use the nbInPorts( ) method which returns the number of input ports that are in use: [0512]
void OrlaMerge::configure( ) { expectInPortsGe( 1 ); expectOutPortsEq( 1 ); const OdtTypeInstance* type = inputType( 0 ); for ( unsigned int i = 1; i < nbInPorts( ); i++ ) expectInType( i, type ); }
- The processData( ) method is funneling all input into the single output port. We have assured the type-compatibility of all input ports, so this does not violate the typing mechanism. Because the run-time system guarantees that each successive call will transfer data of the same or younger time, we adhere to the principle that any data stream must be ordered timewise. [0513]
void OrlaMerge::processData( const ObcTime& dataTime, const OdtHandleVector& dataVec ) { for(unsigned int i = 0; i < dataVec.entries( ); i++ ) if( dataVec.at(i).isNil( ) == false ) send( dataVec.at( i ), 0 ); }
- For safety reasons, sendo sets the pointer to zero to indicate that the datum ownership is relinquished and that it cannot be accessed anymore. [0514]
- Block example: Class OrlaReadSQDADL In this example, we describe a producer-only block. The construction of a producer-only block follows a slightly different structure than that of a producer-consumer. [0515]
- One difference in writing such a block is that the outputType( ) method must always be supplied. Remember, the default outputType( ) method would return the type of the corresponding input port. [0516]
- Because no input ports exist, other means are used to activate the block so it can create and inject data into the network. [0517]
- For this example, we consider the OrlaReadSQDADL library block. This block reads data from a provided stream (istream&) containing ASCII representations. First we present the class definition: [0518]
class OrlaReadSQDADL : public OrlaBlock { public: OrlaReadSQDADL( istream& in ); virtual ˜OrlaReadSQDADL( ); protected: virtual const OdtTypeInstance* outputType( unsigned int outPort ); virtual void configure( ); virtual void processTimer(const ObcTime&, OrlaTimer*); istream* istr_; ObcTime startTime_; ObcTime endTime_; OrlaTimer work_; bool inRealTime_; bool checkRealTime_; OdtInstanceHandle d_; OdtConcreteTypeInstance* outputType_; bool startTimeOver_; };
- The OrlaReadSQDADL constructor takes as argument an istream& specifying from where the ASCII input data are to be read: [0519]
OrlaReadSQDADL::OrlaReadSQDADL( ifstream ifstr) : OrlaBlock( net, className ), istr_( &istr; ), nbLines_( 0 ), work_( timerQueue( ) ), inRealTime_( false ), d_( 0 ), output Type_( 0 ), startTimeOver_( false ) {}
- Because this is a producer-only block, configure( ) assures that no blocks are trying to feed data into it. It is calling its own outputType( ) method to initialize the output type. [0520]
void OrlaReadSQDADL::configure( ) { expectInPortsEq( 0 ); expectOutPortsEq( 1 ); outputType(O); if( !startTime_.isValid( ) ) startTime_ = startTime( ); if( !endTime_.isValid( ) ) endTime_ = endTime( ); checkRealTime_ = endTime_ > ObcTime::now( ); work_.set( start Time_ ); }
- As noted above, the outputtype( ) method must be provided. It returns the type of data that we expect to read. [0521]
const OdtTypeInstance* OrlaReadSQDADL::outputType( unsigned int outPort ) { if( output Type_ == 0 ) { // Read the data type from first line of file RWCString sqdadl; sqdadl. readLine( *istr_ ); nbLines_++; OdtDatumParser parser; outputType_ = parser.parseConcreteType(sqdadl); } return outputType_; }
- The processtimer( ) method does the main work for this producer block. It is initially called because the timer was set during the configuration phase. We will continue re-setting the timer as long as there's data available in the input file stream. [0522]
void OrlaReadSQDADL::processTimer(const ObcTime& now, OrlaTimer*) { if( d_ != 0 ) send( d_, 0 ); nbLines_++; RWCString str; str.readLine(*istr_); // If empty line then issue end of data if ( istr_->eof( ) ) { sendEndOfData(O); return; } try { d_ = output Type_->createInstance( ); d_->populateFieldValues(str); } catch( ObcException& e ) { e.addLocation(“OrlaReadSQDADL::processTimer”); throw e; } ObcTime tickTime = d_[“Timestamp”] ( ).asTime( ); if( checkRealTime_ && !inRealTime_ && tickTime > ObcTime::now ( ) ) { setProcessingMode(OrlaInputPort::realTime ); inRealTime_ = true; ObcLog::debug( className ) << “Switching to r/t mode, line” << nbLines_ << ObcLog:: end( ); } if ( ! startTimeOver_) { if (startTime_ > tickTime ) { discard(d_); work_.set( startTime_ ); return; } else { startTimeOver_ = true; } if ( startTimeOver_ && endTime_ < tickTime ) { discard(d_); sendEndOfData(O); return; } if ( ! inRealTime_) send( d_, 0 ); work_.set( tickTime ); }
- Finally, when the block is destroyed, the destructor code needs to deallocate the dynamically allocated data: [0523]
OrlaReadSQDADL::˜OrlaReadSQDADL( ) { }
- More About the Run-Time System The ORLA run-time system was introduced earlier. The run-time system is that part of ORLA that manages what goes on internally. It is started when the network method runo is called for the respective object. It calls the blocks' respective processing methods, like configure( ) or processData( ). [0524]
- Network Feedback Loops Generally block activation is managed transparently and hence is unimportant to the network developer. However, if a network contains a loop, the effects of flow-control can become significant. In particular, the network designer must be aware and avoid deadlock situations. [0525]
-. [0526]
A The SQDADL definition # Master SQDADL BNF definition file # $Id: //depot/main/local/config/SQDADL.bnf.txt#14 $ # ############################################################################## ### Most of the primitive field names are not used in more than one of the ### subsections below. In this case the field entry appears in that ### subsection. These fields are, however, so generic, that they are ### used almost everywhere, and I chose to put them here at the beginning ############################################################################## Name = string:static Period = string:static ## For soft financial period units Ccy = string(3):static Country = string(3):static Price = float Value = float DoubleValue = double ############################################################################## ### This is the top level structure for all ticks. Notice that this line ### has a format different than any other line in this file. ############################################################################## Tick = ( Time, SeriesID, DataSpecies, Source, Validity ) ########### Time ########### Time = “Time” ( Timestamp, TimeMod ) Timestamp = time TimeMod = Regular | Scaled | RegularScaled | empty Regular = “Regular” ( RegularTimelnterval ) RegularTimeInterval = timeInterval:static Scaled = “Scaled” ( TimeScale, ScaledTimeInterval ) ScaledTimeInterval = ScaledTimeInterval RegularScaled = “RegularScaled” ( TimeScale, RegularScaledTimeInterval) RegularScaledTimeInterval = ScaledTimeInterval:static TimeScale = enum( ‘physical’, ‘tick’, ‘market’, ‘intrinsic’, ‘theta’, ‘theta2stat’, ‘theta2dyn’ ) : static ########### SeriesID ########### SeriesID = Instrument | Analytic Instrument = Contract | Intangible ########### CONTRACTS ########### Contract = Asset | Derivative ########## ASSETS ########## Asset = FX | Commodity | Equity | Deposit | Pfandbrief | Bond | BmkBond FX = “FX” ( Per, Expr ) Per = string(3):static Expr = string(3):static Commodity = “Commodity” ( Name, Ccy ) Equity = “Equity” ( Ticker, Ccy, Market ) Ticker = string:static Deposit = “Deposit” ( Ccy, Period ) Pfandbrief = “Pfandbrief” ( Issuer, CouponRate, Maturity, WKN ) Issuer = string:static Maturity = time CouponRate = float WKN = string:static # (WertpapierKennNummer) Bond = FixedBond | ZeroBond | FloatBond | Brady FixedBond = “FixedBond” ( Ccy, Issuer, CouponRate, DayBasis, CouponFreq, Maturity ) ZeroBond = “ZeroBond” ( Ccy, Issuer, Maturity ) FloatBond = “FloatBond” ( Ccy, Issuer, InterestRate, CouponFreq, Maturity ) CouponFreq = string:static ConvergFct = float Brady = “Brady” ( Country, Ccy, BradyType, LiborSpread, Maturity ) BradyType = string : static LiborSpread = float BmkBond = “BmkBond” ( Ccy, Period, BmkType, Bond ) BmkType = enum( ‘Treasury’ ) : static ########### DERIVATIVES ########### Derivative = Future | ROFuture I GenericFut | Forward | IRSwap | VolDerivative # The derivatives with a price depending on the volatility # Can be used to compute an implied volatility, as with ‘ImplVol’ VolDerivative = Option | IRCap Future = “Future” ( ExpYearMon, ExpiryDate, Exch, Ccy, Instrument ) ExpiryDate = time ExpYearMon = integer: static # e.g. 199806 Exch = “Exch”( Name ) GenericFut = “GenericFut” ( Exch ) ROFuture = “ROFuture” ( ROInfo, Exch, Ccy, Instrument ) ROInfo = “ROInfo”( Position, ROType, StartDate, RORange, ROValue, GlueFactor ) Position = integer:static ROType = string:static # the rollover algorithm used # # possible ROTypes: # # ConstMat: convex combination with # constant Maturity # [Vol]Add[NoGlue]: additive mode # [Vol]Mult[NoGlue]: multiplicative mode # Vol: volume based rollover # NoGlue: don't glue the series RORange = float:static # EMA range for averaging ROValue = float # Rolled-over price GlueFactor = float # additive/multiplicative offset # # the GlueFactor is defined by # additive: ROValue = Price - GlueFactor # multiplicative: ROValue = Price * GlueFactor Forward = “Forward” ( Period, Instrument ) Option = “Option” ( Strike, StrikeUnits, OptType, OptSide, Instrument, DerivSpec ) Strike = float : static OptType = enum( ‘American’, ‘European’, ‘NA’ ) : static OptSide = enum( ‘call’, ‘put’, ‘straddle’, ‘NA’ ) : static DerivSpec = ExchSpec | OtcSpec ExchSpec = “ExchSpec” ( ExpYearMon, ExpiryDate, Exch ) OtcSpec = “OtcSpec” ( Period ) IRSwap = “IRSwap” ( Ccy, IRBasis, Period, StartDate, Reset, DayBasis, InterestRate ) IRCap = “IRCap” ( Ccy, Period, StartDate, Reset, DayBasis, InterestRate, CapType ) IRBasis = emim( ‘B’, ‘M’, ‘NA’ ):static # “B” = bond, “M” = money market StartDate = time # TODO: should this be a “date”:static Reset = string: static # It is a time period DayBasis = enum( ‘NA’, ‘ACT.360’, ‘ACT. 365’, ‘ACT.366’, ‘30.360’, ‘30.365’, ‘30E.360’, ‘ACT.ACT’ ):static StrikeUnits = enum( ‘REL’, ‘DIF’, ‘ABS’, ‘NA’ ):static CapType = enum( ‘cap’, ‘floor’ ):static ########### INTANGIBLES ########### Intangible = InterestRate | Index | TermIndex | Deliverables | ImplVol InterestRate = “InterestRate” ( Ccy, Period, IRReference ) IRReference = string : static Index = “Index” ( Name, Ccy ) TermIndex = “TermIndex” ( Name, Ccy, Period ) Deliverables = Notional | Basket | CtDNotional | CtDBond Notional = “Notional” ( NotionalBond ) NotionalBond = “NotionalBond” ( Ccy, NominalMatur ity, Name ) NominalMaturity = timeInterval Basket = “Basket” ( Bond ) CtDNotional = “CtDNotional” ( Period ) CtDBond = “CtDBond” ( Name, Period, CouponRate, ConvergFct, ImpliedRepoRate, Maturity ) ImpliedRepoRate = float ImplVol = “ImplVol” ( VolDerivative ) ########## ANALYTICS ########## Analytic = HistVol | Beta | Corr | IRCurve | ImpliedIR | Statistics | OtmItem HistVol = “HistVol” ( TimeRange, TSModel, Instrument ) TimeRange = timeInterval:static TSModel = enum( ‘BIS’, ‘RiskMetrics’, ‘GARCH’, ‘OAUBF’, ‘OISIR’ ) : static Beta = “Beta” ( TimeRange, TSModel, MarketIndex, Equity ) MarketIndex = string:static Corr = “Corr” ( TimeRange, TSModel, Inst1, Inst2 ) Inst1 = “Inst1” ( Instrument ) Inst2 = “Inst2” ( Instrument ) IRCurve = “IRCurve” ( RiskMarket, Ccy, IRCurveType, YCModel, Compound, DayBasis, Period ) RiskMarket = enum( ‘interbank’, ‘treasury’, ‘pfandbrief’, ‘rex’, ‘pex’ ):static IRCurveType = enum (‘NA’, ‘Zero’, ‘Yield’, ‘Discount’, ‘ForwardRC’) : static YCModel = enum( ‘OA1’, ‘OA2’, ‘Algorithmics’, ‘Reuters’ ):static Compound = enum( ‘CC’, ‘daily’, ‘monthly’, ‘quarterly’, ‘semiAnnual’, ‘annual’ ):static ImpliedIR = “ImpliedIR” ( Ccy, Period, Instrument ) Statistics = “Statistics” ( Name, Instrument ) Dtmltem = “OtmItem” ( OtmModelID, Instrument ) ########## DATASPECIES ########## DataSpecies = MarketData | LinearData | ValueAddedData ########## MARKET DATA ########## MarketData = Quote | Tx | MktPrice | MktVolume | MktEvent | BondPrice | Level | Summary | Curve | RefLevel Quote = “Quote” ( Bid, Ask, Institution, DataMod ) Bid = float Ask = float Institution = string(4) DataMod = RefData | QuoteSize | empty ### a basic quote does not contain anything else than bid/ask/institution RefData = “RefData” ( RefTime, RefType, Market ) RefTime = time RefType = enum( ‘Fixing’, ‘Close’, ‘Settle’, ‘Interpolated’, ‘Yield’, ‘Discount’ ) : static Market = Exch | Location | empty Location = “Location” ( Name ) QuoteSize = “QuoteSize” ( BidSize, AskSize ) BidSize = float AskSize = float Tx = “Tx” ( Price, Txlnfo ) TxInfo = Vol | Info | empty Info = “Info” ( Volume, Seller, Buyer ) Vol = “Vol” ( Volume ) Volume = integer Seller = string (4) Buyer = string (4) MktPrice = “MktPrice” ( Price, Side ) Side = enum(‘Bid’, ‘Ask’, ‘Mid’) : static MktVolume = “MktVolume” ( Value, Side, VolType ) VolType = enum(‘Cumulated’, ‘Generic’, ‘QSize’, ‘TxSize’, ‘Openlnt’ ) : static MktEvent = “MktEvent” ( EventType, EffTime, Value) EventType = enum (‘Dividend’, ‘Split’ ) : static EffTime = time BondPrice = “BondPrice” ( Bid, Ask, Tx, YieldBid, YieldAsk, Institution ) YieldBid = float YieldAsk = float Level = “Level” ( Value, DataMod ) Summary = “Summary” ( Market, Open, Close, High, Low ) Open = float Close = float High = float Low = float Curve = SampledCurve | ModeledCurve SampledCurve = “SampledCurve” (Intercept, ValueVec) Intercept = stringVec:static # array of periods ModeledCurve = “ModeledCurve” ( CurveModel, Segments, Parameters ) CurveModel = enum( ‘Algorithmics’, ‘OALinear’, ‘OAPolynom’ ) : static Segments = timeIntervalVec # TODO: static? Parameters = doubleVec # static RefLevel = “RefLevel” ( Value, DataMod, Side ) ########## LINEAR DATA ########## LinearData = Double | DoubleVec # | DoubleMatrix Double = “Double” ( DoubleValue ) DoubleVec = “DoubleVec” ( ValueVec, NElements ) ValueVec = doubleVec # number of elements is NElements NElements = integer:static # DoubleMatrix = “DoubleMatrix” ( ValueMatrix, NRows, NCols ) # ValueMatrix = doubleMatrix # NRows = integer:static # NCols = integer:static ########## VALUE ADDED DATA ########## ValueAddedData = PointFcst | CurveFcst | VolCurve | VolCurveFcst | ScalarIndicator | ThresholdEvent | OtmSpecies | IRCorr | ActivityHistogram | Rate PointFcst = “PointFcst” ( Value, TimeHorizon ) TimeHorizon = timeInterval:static CurveFcst = “CurveFcst” ( ValueVec, ConfIntervalVec, TimeHorizonVec ) ConfIntervalVec = “ConfIntervalVec” ( HighVec, LowVec ) HighVec = doubleVec LowVec = doubleVec TimeHorizonVec = timeIntervalVec # TODO: static? VolCurve = “VolCurve” ( ValueVec, TimeHorizonVec ) VolCurveFcst = “VolCurveFcst” ( ValueVec, TimeHorizonVec ) ScalarIndicator = “ScalarIndicator”( Name, TimeHorizon, Value ) ThresholdEvent = “ThresholdEvent” ( ScalarIndicator, CrossingPrice, CrossingType ) CrossingPrice = float CrossingType = enum ( ‘OsUp’, ‘ObDown’, ‘ObUp’, ‘OsDown’ ) IRCorr = “IRCorr” ( CorrelationLevel, YieldCorr ) CorrelationLevel = float YieldCorr = float ActivityHistogram = SeasonalVolatility | SeasonalTickFreq SeasonalVolatility = “SeasonalVolatility” ( DSTPeriod, Norm, Dt, DoubleVec ) Norm = integer:static DSTPeriod = integer:static ### Different daylight saving periods Dt = timeInterval SeasonalTickFreq = “SeasonalTickFreq” ( DSTPeriod, Dt, DoubleVec ) Rate = “Rate” ( RateValue ) Rate Value = float ########## TRADING MODEL DATA ########## OtmModelID = “OtmModelID” ( TMName, Customer, Market ) TMName = string:static Customer = string:static OtmSpecies = OtmDeal | OtmStatus | OtmRec | OtmWrapper OtmDeal = “OtmDeal” ( PrevGearing, NewGearing, DealPrice, DealPriceTime, DealPriceSource, DealReason, WasStopLossDeal, MeanPrice, DealNumber, TotalReturn, CumulatedReturn, MinRetWhenOpen, MaxRetWhenOpen ) OtmRec = “OtmRec” ( Price, Reason, WasStopLossDeal, PrevGearing, NewGearing ) Reason = string PrevGearing = float NewGearing = float DealPrice = float DealPriceTime = time DealPriceSource = string DealReason = string WasStopLossDeal = bool MeanPrice = float DealNumber = integer TotalReturn = float CumulatedReturn = float MinRetWhenOpen = float MaxRetWhenOpen = float OtmStatus = “OtmStatus” ( Type, Message, Param ) Message = string Param = string Type = enum( ‘undefinedStatusType’, ‘start Trading’, ‘endTrading’, ‘market Open’, ‘market OpenWarning’, ‘marketCloseWarning’, ‘marketLastChance’, ‘marketClose’, ‘otherMarketEvent’, ‘anticipateDeal’, ‘deal’, ‘noPriceData’, ‘priceDataOk’, ‘stopLossChange’, ‘numberOfStatusTypes’ ) : static OtmWrapper = “OtmWrapper” ( DataType, DataNr, Data ) DataType = string DataNr = integer Data = integer ########## SOURCE ########## Source = “Source” ( Origin, Identifier, Version ) Origin = string:static Identifier = string:static Version = integer:static ########## FILTER ########## Validity = Filter | empty Filter = “Filter” ( Confidence, Reasons, ScaleFactor ) Confidence = float Reasons = integer # Each bit of the integer corresponds to a reason ScaleFactor = float
Claims (83)
1. A system for storing one or more time series comprising:
(a) a language for describing the storing of the one or more time series; and
(b) a subsystem storing the one or more time series in accordance with said language.
2. A system for storing one or more time series as in
claim 1 wherein said language comprises one or more attributes for representing one or more fields of the time series.
3. A system for storing one or more time series as in
claim 2 wherein said subsystem comprises one or more rules for describing the storing of said one or more fields in accordance with said one or more attributes.
4. A system for storing one or more time series as in
claim 3 wherein said subsystem further comprises:
(a) at least one file name; and
(b) at least one data file.
5. A system for storing one or more time series as in
claim 4 wherein said one or more rules determine whether the fields are stored in said at least one file name or said at least one data file depending on values of said one or more attributes.
6. A system for storing one or more time series as in
claim 1 wherein said language comprises one or more members of the set consisting of a leaf node, a non-leaf node, a type and a hint for describing one or more fields from the time series.
7. A system for storing one or more time series as in
claim 6 wherein said subsystem comprises:
(a) at least one file name; and
(b) at least one data file.
8. A system for storing one or more time series as in
claim 7 wherein said subsystem further comprises one or more rules for determining how to store at least one of the fields from the time series.
9. A system for storing one or more time series as in
claim 8 wherein said rules comprise:
(a) storing the at least one field in said file name if the field is said non-leaf node.
10. A system for storing one or more time series as in
claim 8 wherein said rules comprise: storing the at least one field in said filename if:
(a) the field is said leaf node; and
(b) the field has a fixed value for said hint.
11. A system for storing one or more time series as in
claim 8 wherein said rules comprise: storing the at least one field in said data file if:
(a) the field is said leaf node;
(b) the field has a variable value for said hint; and
(c) the field has a constant size.
12. A system for storing one or more time series as in
claim 8 wherein said rule comprise: storing a universal matching symbol in the filename if:
(a) the at least one field is said leaf node;
(b) the field has a variable value for said hint; and
(c) the field has a constant size.
13. A system for storing one or more time series as in
claim 8 wherein said subsystem further comprises at least one secondary storage.
14. A system for storing one or more time series as in
claim 13 wherein said rules comprise: store the at least one field in the secondary storage if:
(a) the field is said leaf node;
(b) the field has a variable value for said hint; and
(c) the field has a constant size.
15. A system for storing one or more time series as in
claim 13 wherein said rules comprise: store a universal matching symbol in the filename if:
(a) the field is said leaf node;
(b) the field has a variable value for said hint; and
(c) the field has a constant size.
16. A system for storing one or more time series as in
claim 13 wherein said rule comprises: copy an offset of the secondary storage into said data file if:
(a) the field is said leaf node;
(b) the field has a variable value for said hint; and
(c) the field has a constant size.
17. A system for storing one or more time series as in
claim 1 wherein said language defines an exchange rate of one or more currencies.
18. A system for storing one or more time series as in
claim 1 wherein said language defines a deposit of a currency for a period.
19. A system for storing one or more time series as in
claim 1 wherein said language defines a quote.
20. A system for storing one or more time series as in
claim 19 wherein said quote comprises one or more number of the set consisting of a bid, an ask, a bank and a source.
21. A system for storing one or more time series as in
claim 1 wherein said language comprises a transaction.
22. A system for storing one or more time series as in
claim 21 wherein said transaction comprises an exchange of a currency from a seller to a buyer.
23. A system for storing one or more time series as in
claim 22 wherein said transaction further comprises one or more members of the set consisting of a price, a volume and a source.
24. A system for storing one or more time series as in
claim 1 wherein said language comprises one or more regular expressions.
25. A system for storing one or more time series as in
claim 1 wherein said language comprises one or more statements for defining one or more ticks in the time series.
26. A system for storing one or more time series as in
claim 1 wherein said language is recursive.
27. A system for managing one or more time series comprising:
(a) a language defining a first one of the time series as a subset of a second one of the time series.
28. A system for managing one or more time series as in
claim 27 wherein said second time series is a universal time series representing all recordable events.
29. A system for retrieving desired data from one or more time series comprising:
(a) at least one request comprising one or more restrictions for defining the desired data; and
(b) at least one utility retrieving data from the one or more time series that satisfies said one or more restrictions.
30. A system for retrieving desired data from one or more time series as in
claim 29 further comprising:
(a) one or more rules for selecting one or more files comprising the one or more time series that satisfy said one or more restrictions.
31. A system for retrieving desired data from one or more time series as in
claim 30 further comprising a language, said language defining one or more attributes for said one or more restrictions.
32. A system for retrieving desired data from one or more time series as in
claim 31 wherein said one or more attributes comprise one or more members of the set consisting of a node type, a data type and a hint.
33. A system for retrieving desired data from one or more time series as in
claim 32 wherein said one or more rules comprise: for at least one of said restrictions,
(a) select said one or more filename that match said at least one restriction if said node type of said at least one restriction is a non-leaf.
34. A system for retrieving desired data from one or more time series as in
claim 32 wherein said one or more rules comprise: for at least one of said restrictions,
(a) select said one or more filenames having a universal matching symbol corresponding to said at least one restriction if:
(b) said node type of said at least one restriction is a leaf; and
(c) said hint of said restriction is a variable.
35. A system for retrieving desired data from one or more time series as in
claim 33 wherein said one or more rules comprise: for at least one of said restrictions,
(a) select said one or more filenames that match said at least one restriction if:
(b) said node type of said at least one restriction is a leaf; and
(c) said hint of said at least one restriction is fixed.
36. A system for retrieving desired data from one or more time series as in
claim 29 further comprising:
(a) at least one cursor for selecting data in the one or more time series that satisfies said one or more restrictions.
37. A system for retrieving desired data from one or more time series as in
claim 36 wherein said one or more restrictions comprise one or more members of the set consisting of a base time and a time range.
38. A system for retrieving desired data from one or more time series as in
claim 37 wherein said time range specifies the number of data items before said base time.
39. A system for retrieving desired data from one or more time series as in
claim 37 wherein said time range specifies the number of data item after said base time.
40. A system for retrieving desired data from one or more time series as in
claim 36 wherein said cursor comprises a first method for retriving the data item after a current time in the one or more time series that satisfies said one or more restrictions.
41. A system for retrieving desired data from one or more time series as in
claim 36 wherein said cursor comprises:
(a) a second method for retrieving the data item before a current time in the one or more time series that satisfies said one or more restriction.
42. A system for retrieving desired data from one or more time series as in
claim 29 further comprising:
(a) a parser for determining said one or more restrictions from said at least one request.
43. A system for retrieving desired data from one or more time series as in
claim 29 wherein said one or more restrictions comprise an expression.
44. A system for processing data from one or more time series comprising:
(a) one or more processing modules for processing the data;
(b) one or more connections for linking said modules in a network; and
(c) a first subsystem for activating said one or more processing modules and for moving the data through the network.
45. A system for processing data from one or more time series as in
claim 44 further comprising a type system comprising:
(a) one or more types; and
(b) a relation among said one or more types.
46. A system for processing data from one or more time series as in
claim 45 further comprising a grammar to describe said types in said type system.
47. A system for processing data from one or more time series as in
claim 45 wherein said one or more processing modules comprise one or more ports.
48. A system for processing data from one or more time series as in
claim 47 further comprising one or more binding operators for creating said one or more connections to link two or more of said ports.
49. A system for processing data from one or more time series as in
claim 48 wherein at least one of said types are assigned to at least one of said ports.
50. A system for processing data from one or more time series as in
claim 49 wherein said one or more processing modules comprise:
(a) a configure method for checking that said types on said ports that are linked by one of said connections are consistent.
51. A system for processing data from one or more time series as in
claim 44 wherein said processing modules comprise:
(a) a process data method to process the data.
52. A system for processing data from one or more time series as in
claim 51 wherein said subsystem executes said process data method.
53. A system for processing data from one or more time series as in
claim 44 wherein at least one datum of the data in the time series has at least one time stamp.
54. A system for processing data from one or more time series as in
claim 53 wherein said subsystem:
(a) orders said at least one datum of the data according to said time stamp; and
(b) provides said ordered at least one datum to said processing modules.
55. A system for processing data from one or more time series as in
claim 44 wherein said processing modules comprise one or more ports.
56. A system for processing data from one or more time series as in
claim 55 wherein said ports comprise one or more input ports and one or more output ports.
57. A system for processing data from one or more time series as in
claim 56 wherein said processing modules further comprise:
(a) at least one end of data method to indicate that no more data will be provided to said one or more input ports of said processing modules.
58. A system for processing data from one or more time series as in
claim 57 wherein said first subsystem executes said end of data method when said subsystem has no more of the data to provide to said processing module.
59. A system for processing data from one or more time series as in
claim 56 wherein said processing modules input at least one input datum of the data on said input ports, process said at least one input datum to produce at least one output datum, and output said at least one output datum on said output ports.
60. A system for processing data from one or more time series as in
claim 59 wherein said processing module further comprises a build-up delay method that computes how much time said processing module needs before said processing module can output said at least one output datum that is meaningful.
61. A system for processing data from one or more time series as in
claim 59 wherein said processing modules further comprise one or more timer methods to process one or more timers.
62. A system for processing data from one or more time series as in
claim 61 wherein said one or more timers indicate when said processing modules should output said at least one output datum on said output ports.
63. A system for processing data from one or more time series as in
claim 62 wherein said processing modules compute an average of input data and output said average at its said outputs at time intervals.
64. A system for processing data from one or more time series as in
claim 63 wherein said time intervals are hourly.
65. A system for processing data from one or more time series as in
claim 59 wherein said processing module comprise:
(a) at least one end of run method to indicate that said processing module should output any remaining said at least one output datum.
66. A system for processing data as in
claim 65 wherein said first subsystem executes said end of run method.
67. A system for processing data from one or more time series as in
claim 44 wherein said processing modules comprise one or more variables defining a state of said processing modules.
68. A system for processing data from one or more time series as in
claim 44 wherein each of said processing modules execute independently of others of said processing modules.
69. A system for processing data from one or more time series as in
claim 44 wherein said processing module further comprise:
(a) one or more timer methods to process one or more timers.
70. A system for processing data from one or more time series as in
claim 69 wherein said first subsystem executes said timer methods.
71. A system for processing data from one or more time series as in
claim 44 wherein the network is directed acyclic graph.
72. A system for processing data from one or more time series as in
claim 44 wherein said processing modules comprise one or more members of the set consisting of producer-only modules that output data but do not input data, producer-consumer modules that input data and output data; and consumer-only modules that input data but do not output data.
73. A system for processing data from one or more time series as in
claim 44 wherein said processing modules comprise one or more members of the set consisting of modules that read data from a repository, modules that perform financial calculations, modules that perform computations, modules that perform statistical analysis, modules that compute histograms, and modules that write data to the repository.
74. A system for processing data from one or more time series as in
claim 73 wherein the computations comprise one or more members from the set consisting of:
(a) derivatives, volatility, generation of regular time series.
75. A system for processing data from one or more time series as in
claim 73 wherein the financial calculation comprise a cross-rate with foreign exchange data.
76. A system for processing data from one or more time series as in
claim 73 wherein the statistical analysis comprise one or more members of the set consisting of correlation, moving correlation and least square fit.
77. A system for processing data from one or more time series as in
claim 73 wherein the histogram comprise one or more members of the set consisting of probability distribution, and conditional average and intra-week average.
78. A system for processing data from one or more time series as in
claim 44 wherein said processing module process data from one or more time intervals.
79. A system for processing data from one or more time series as in
claim 44 further comprising at least one start time and at least one end time.
80. A system for processing data from one or more time series as in
claim 79 wherein said processing modules begin the processing of the data at said start time and continue processing the data until said end time.
81. A system for processing data from one or more time series as in
claim 79 wherein said subsystem passes said start time and said end time to said processing modules.
82. A system for processing data from one or more time series as in
claim 44 wherein at least one of said processing modules computes and produces a running average of at least one datum from the data that said processing modules receive.
83. A system for processing data from one or more time series as in
claim 44 wherein at least one of said processing modules outputs an average of its said at least one received datum at every nth one of its said at least one received datum.
Priority Applications (2)
Applications Claiming Priority (1)
Publications (1)
Family
ID=28456501
Family Applications (1)
Country Status (1)
Cited By (24)
Citations (3)
- 2002
- 2002-01-17 US US10/046,907 patent/US20030187761A1/en not_active Abandoned
|
https://patents.google.com/patent/US20030187761A1/en
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
:
Changes 2.06:
* In main(), when parsing form input fails, the CGI script exits without
producing any output whatsoever. Wouldn't it be better to actually
emit an error status, instead of expecting the server to do something
sane with a script that produces no output?
* In mpRead(), a check is done to insure the requested length is not
greater than the amount of data still available, and to adjust it
if necessary. However, this check is currently done _after_ reading
data from the putback buffer, in which process len is decremented by
the amount of putback data read, but mpp->offset is not correspondingly
incremented (this happens later). As a result, the check uses too
small a value for len, and so fails to stop reading soon enough if
the requested length is greater than what is available _and_ there
was any data in the putback buffer.
The fix is to move the check to the beginning of mpRead()
* Further, if a read request is satisfied _entirely_ from the putback
buffer, mpp->offset is not updated at all, resulting in a similar
problem. The solution is to update mpp->offset in the "else if (got)"
case.
* In cgiParsePostMultipartInput(), if the Content-Disposition of a part
is not "form-data", afterNextBoundary() is not called before beginning
to process the next part. As a result, parsing of the next part headers
begins with the body of the unwanted part. It is necessary in this case
to call afterNextBoundary() before continuing with the next cycle.
* In handling out-of-memory conditions in afterNextBoundary(), *outP is
set to '\0'. While this is technically legal ('\0' is "an integral
constant expression with the value 0"), it looks funny.
* In cgiCookieString(), a change was introduced in v2.02 which purports
to prevent an overrun in cases where cgiCookie is exactly equal to
the requested cookie name. In fact, the problem can also occur if
the requested name occurs with no values at the end of cgiCookie.
Further, the change from v2.02 does not fix the problem, because it
compares the _pointers_ p and n to NULL, which they will never equal,
rather than comparing the pointers they point at to NUL.
* Also in cgiCookieString(), there is a comment suggesting that the main
loop never terminates except with a return. This is not the case.
For example, it will terminate if the requested cookie is not found
and the cgiCookie string ends in a semicolon.
* Why did days[] (formerly daysOfWeek[]) and months[] become non-static?
This pollutes the namespace of programs using CGIC.
* In cgiReadEnvironment(), when reading in the contents of an uploaded
file, it is possible that a temporary file is successfully created
but then cannot be opened. In this case, no attempt is made to remove
the tempoary file.
* Further, when a form entry does _not_ include an uploaded file,
e->tfileName is set to malloc'd but uninitialized memory. It should
be set to an empty string, by setting e->tfileName[0] to zero after
the 1-byte buffer is allocated.
Log message:
Drop superfluous PKG_DESTDIR_SUPPORT, "user-destdir" is default these days.
Log message:
Make sure the correct install tool is used.
Log message:
Convert @exec/@unexec to @pkgdir or drop:
Mark as destdir ready.
Log message:
Add DESTDIR support.
|
https://pkgsrc.se/www/cgic
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
What is Workbox SW?
The
workbox-sw module provides an extremely easy way to get up and running
with the Workbox modules, simplifies the loading of the Workbox modules, and
offers some simple helper methods.
You can use
workbox-sw via our CDN or you use it with a set of workbox files
on your own server.
Using Workbox SW via CDN
The easiest way to start using this module is via the CDN. You just need to add the following to your service worker:
importScripts('');
With this you’ll have the
workbox namespace in your service worker that will
provide access to all of the Workbox modules.
workbox.precaching.* workbox.routing.* etc
There is some magic that happens as you start to use the additional modules.
When you reference a module for the first time,
workbox-sw will detect this
and load the module before making it available. You can see this happening in
the network tab in DevTools.
These files will be cached by your browser making them available for future offline use.
Using Local Workbox Files Instead of CDN
If you don’t want to use the CDN, it’s easy enough to switch to Workbox files hosted on your own domain.
The simplest approach is to get the files via
workbox-cli's
copyLibraries
command or from a GitHub
Release, and then tell
workbox-sw where to find these files via the
modulePathPrefix config option.
If you put the files under
/third_party/workbox/, you would use them like so:
importScripts('/third_party/workbox/workbox-sw.js'); workbox.setConfig({ modulePathPrefix: '/third_party/workbox/' });
With this, you’ll use only the local Workbox files.
Avoid Async Imports
Under the hood, loading new modules for the first time involves calling
importScripts()
with the path to the corresponding JavaScript file (either hosted on the CDN, or via a local URL).
In either case, an important restriction applies: the implicit calls to
importScripts() can only
happen inside of a service worker's
install handler or during the synchronous,
initial execution of the service worker script.
In order to avoid violating this restriction, a best practice is to reference the various
workbox.* namespaces outside of any event handlers or asynchronous functions.
For example, the following top-level service worker code is fine:
importScripts(''); // This will work! workbox.routing.registerRoute( new RegExp('\\.png$'), new workbox.strategies.CacheFirst() );
But the code below could be a problem if you have not referenced
workbox.strategies elsewhere in your
service worker:
importScripts(''); self.addEventListener('fetch', (event) => { if (event.request.url.endsWith('.png')) { // Oops! This causes workbox-strategies.js to be imported inside a fetch handler, // outside of the initial, synchronous service worker execution. const cacheFirst = new workbox.strategies.CacheFirst(); event.respondWith(cacheFirst.handle({request: event.request})); } });
If you need to write code that would otherwise run afoul of this restriction, you can explicitly
trigger the
importScripts() call outside of the event handler by using the
workbox.loadModule() method:
importScripts(''); // This will trigger the importScripts() for workbox.strategies and its dependencies: workbox.loadModule('workbox-strategies'); self.addEventListener('fetch', (event) => { if (event.request.url.endsWith('.png')) { // Referencing workbox.strategies will now work as expected. const cacheFirst = new workbox.strategies.CacheFirst(); event.respondWith(cacheFirst.handle({request: event.request})); } });
Alternatively, you can create a reference to the relevant namespaces outside of your event handlers, and then use that reference later on:
importScripts(''); // This will trigger the importScripts() for workbox.strategies and its dependencies: const {strategies} = workbox; self.addEventListener('fetch', (event) => { if (event.request.url.endsWith('.png')) { // Using the previously-initialized strategies will work as expected. const cacheFirst = new strategies.CacheFirst(); event.respondWith(cacheFirst.handle({request: event.request})); } });
Force Use of Debug or Production Builds
All of the Workbox modules come with two builds, a debug build which contains logging and additional type checking and a production build which strips the logging and type checking.
By default,
workbox-sw will use the debug build for sites on localhost,
but for any other origin it’ll use the production build.
If you want to force debug or production builds, you can set the
debug config
option:
workbox.setConfig({ debug: <true or false> });
Convert code using import statements to use
workbox-sw
When loading Workbox using
workbox-sw, all Workbox packages are accessed via
the global
workbox.* namespace.
If you have a code sample that uses
import statements that you want to convert
to use
workbox-sw, all you have to do is load
workbox-sw and replace all
import statements with local variables that reference
those modules on the global namespace.
This works because every Workbox service worker
package published to npm is also
available on the global
workbox namespace via a
camelCase version of the name (e.g.
all modules exported from the
workbox-precaching npm package can be found on
workbox.precaching.*. And all the modules exported from the
workbox-background-sync npm package can be found on
workbox.backgroundSync.*).
As an example, here's some code that uses
import statements referencing
Workbox modules:
import {registerRoute} from 'workbox-routing'; import {CacheFirst} from 'workbox-strategies'; import {CacheableResponse} from 'workbox-cacheable-response'; registerRoute( /\.(?:png|jpg|jpeg|svg|gif)$/, new CacheFirst({ plugins: [ new CacheableResponsePlugin({statuses: [0, 200]}) ], }) );
And here's the same code rewritten to use
workbox-sw (notice that only the
import statements have changed—the logic has not been touched):
importScripts(''); const {registerRoute} = workbox.routing; const {CacheFirst} = workbox.strategies; const {CacheableResponse} = workbox.cacheableResponse; registerRoute( /\.(?:png|jpg|jpeg|svg|gif)$/, new CacheFirst({ plugins: [ new CacheableResponsePlugin({statuses: [0, 200]}) ], }) );
|
https://developers.google.com/web/tools/workbox/modules/workbox-sw?hl=pt-br
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Our application will associate sites with tags (many to many relationship), like delicious does, but in a much simplified manner. For instance, delicious keeps tracks of which user gave which tag to which URL. We will only associate sites with tags. But it will be very easy to add this functionality later.
We’ll quickstart a new project (the -s argument tells tg-admin that the project will use SQLAlchemy and not SQLObject)
Enter package name [tags]:
Do you need Identity (usernames/passwords) in this project? [no] yes
Defining The Model
We are going to have a table for the sites, a table for the tags and a table that associates sites with tags (many-to-many). Here’s the code which defines the tables (which goes in model.py):
Column(‘site_id’, Integer, primary_key=True),
Column(‘title’, Unicode(256)),
Column(‘url’, Unicode(1024)),
)
tags_table = Table(‘tags’, metadata,
Column(‘tag_id’, Integer, primary_key=True),
Column(‘name’, Unicode(32), index=‘tag_idx’))
sites_tags_table = Table(‘sites_tags’, metadata,
Column(‘site_id’, Integer,
ForeignKey(‘sites.site_id’),
primary_key=True),
Column("tag_id", Integer,
ForeignKey(‘tags.tag_id’),
primary_key=True))
We will now create the Python classes that correspond to these tables:
def __init__(self, name):
self.name = name
def __repr__(self):
return self.name
def link(self):
return "/tags/"+self.name
class Site(object):
def __init__(self, url, title):
self.url, self.title = url, title
Note the
link() method in the
Tag class. You might wonder what it does there. It just a little habbit that I wanted to share with you. I’ve found myself many times hard-coding URLs inside my templates. Then, if you want to make a tag linkable in many different places in your app, you have to hard-code the link every time. In this way, you can just pass the tag object to your template and do something like:
Ok, now we continue with mapping the classes to the tables:
mapper(Site, sites_table, properties = {
‘tags’: relation(Tag, secondary=sites_tags_table, lazy=False),
})
Great. Now we can construct the database and start populating it:
$ tg-admin shell
…
>>> g = Site(‘’, ‘Search engine’)
>>> g.tags
[]
>>> g.tags = [Tag(‘search’), Tag(‘google’)]
>>> session.save(g)
>>> session.flush()
(here SQLAlchemy echos the SQL statements it executes)
Handling tags
So we got the model right. The next step is to allow the users to provide tags for the site. The easiest way (for you and your users) is to ask them to enter the tags in a space-separated list. Suppose you are given this kind of space-seperated string of tags from a user, then you have to:
- convert all tags to lower case, in order to avoid case senstivity issues
- check if the string contains the same tag twice
- find which tags are already in the database and which are new
- recover from some nonsense that users might throw at you
and then get a list of Tag objects that you can assign to a site. So here’s a function that does just that:
"""Get a string of space sperated tag,
and returns a list of tag objects"""
result = []
tags = tags.replace(‘;’,‘ ‘).replace(‘,’,‘ ‘)
tags = [tag.lower() for tag in tags.split()]
tags = set(tags) # no duplicates!
if ” in tags:
tags.remove(”)
for tag in tags:
tag = tag.lower()
tagobj = session.query(Tag).selectfirst_by(name=tag)
if tagobj is None:
tagobj = Tag(name=tag)
result.append(tagobj)
return result
So you can now easily do something like:
>>> f.tags = get_tag_list(‘photo sharing photograpy’)
>>> f.tags
[photo, sharing, photograpy]
>>> f.tags[0].link()
‘/tags/photo’
Tag Search
It is straightforward to just list a site together with its tags:
<p class="site-tags">Tags:
<a py:${tag.name}</a>
</p>
Search is a bit more tricky. It took me few attempts until I got the search queries right. Here’s how to fetch all sites that are tagged by ‘google’:
sites = q.select((Tag.c.name==‘google’) & q.join_to(‘tags’))
the magic is mostly inside the join_to method – it stands for the SQL statements that makes sure that the Tag clause is associated to the sites. Without it, the query runs over the entire cartesian product of Sites x Tags.
You can make the query simpler (for MySQL; not you), if you fetch the tag_id of ‘google’ first. Then, the query uses only 2 of the 3 tables:
if not tagobj:
raise cherrypy.InternalRedirect(‘/notfound’)
sites = session.query(Site).select((sites_tags_table.c.tag_id == tagobj.tag_id) &
(sites_tags_table.c.site_id == Site.c.site_id))
To search for
google|photo:
sites = q.select(
Tag.c.name.in_(‘google’, ‘photo’) &
q.join_to(‘tags’))
To search for
sharing+photos:
sites = q.select(
Tag.c.name.in_(‘sharing’,‘photos’) &
q.join_to(‘tags’),
group_by=[Site.c.site_id],
having=(func.count(Site.c.site_id)==2))
The idea is that sites that are tagged both with ‘sharing’ and ‘photos’ will appear twice in the select, then after grouping by site_id and getting all which appear twice, we get the desired result.
There are many more things that can be done from this point, like: associating with the tag-site relationship which user added the tag, rendering a tag cloud and so on. Feel free to leave comments!
Thanks! Very helpful.
You say “To search for google+photo” but then you do a search for ’sharing’ and ‘photos’…
Thanks Damjan. Post updated.
Also, you use `session.query(Site)’ often but also you use `q.’ which is assumed to be from the first `q = session.query(Site)’ … it’s a bit confusing. It’s possible to just use `q’ always right?
Can’t the page.tags be a list of normal strings? Because currently it’s a list of Tag instances.
For ex. if I wish for ‘photos’ in page.tags to work I had to implement this method in the Tag class:
def __cmp__(self, other):
if isinstance(other, Tag):
return cmp(self.name, other.name)
elif isinstance(other, basestring):
return cmp(self.name, other)
else:
return cmp(self, other)
Damjan, you can add to the Site class a property that will give you the tags as a list of strings:
One comment and a question:
Comment: The quotes you use in the examples seem to be forward quote and backward quote, which when I paste into TextMate on my Macintosh give a syntax error and I need to replace the quotes.
Example:
I change: ‘google’
to: ‘google’
Question:
When I do the:
$ tg-admin sql create
I get an error:
File “build/bdist.macosx-10.3-fat/egg/sqlalchemy/schema.py”, line 149, in __call__
File “build/bdist.macosx-10.3-fat/egg/sqlalchemy/schema.py”, line 31, in _init_items
File “build/bdist.macosx-10.3-fat/egg/sqlalchemy/schema.py”, line 449, in _set_parent
sqlalchemy.exceptions.ArgumentError: The ‘index’ keyword argument on Column is boolean only. To create indexes with a specific name, append an explicit Index object to the Table’s list of elements.
it is caused by the 3rd line below:
tags_table = Table(“tags”, metadata,
Column(‘tag_id’, Integer, primary_key=True),
Column(‘name’, Unicode(32), index=’tag_idx’))
I am running with MySQL database.
I can use the tutorial without specifying the index or index name, but it would be useful.
Thanks.
Pingback: sites poker en ligne
Finance and treasury experts can take the Certified Treasury Professional Exam from Morgan International before taking the certification. Passing the first time is key.
I see you share interesting stuff here, you
can earn some additional money, your blog has huge
This post is on 13 spot in google’s search results, if you want
more visitors, you should build more backlinks to your articles, there is
one trick to get free, hidden backlinks from authority forums, search on youtube:
how to get hidden backlinks from forums
This post will assaist the internet viewers for creating new website or even a blog from start to end.
I was extremely pleased to discover this great site. I want to to thank you for your time for this
fantastic read!! I definitely enjoyed every little bit of it and i also hsve you saved ass a favorite to loook at new information in your web site.
Want to train muscles fast and become huge?
No need to wait months and months, get the best Muscle Boosting BodyBuilding Supplements!
Try it today muscles Supplements
FEEL YOUNG AGAIN!
Restore your youthful energy – Use Creatine that ups your training and helps to build muscles fast!
I have noticed you don’t monetize thesamet.com, don’t waste your traffic, you can earn extra cash every month with new monetization method.
This is the best adsense alternative for any type of website (they approve all
websites), for more details simply search in gooogle: murgrabia’s tools
|
http://www.thesamet.com/blog/2006/11/17/tutorial-how-to-implement-tagging-with-turbogears-and-sqlalchemy/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
GraphQL: Understanding Spring Data JPA/Spring Boot
GraphQL: Understanding Spring Data JPA/Spring Boot
Let's understand Spring Data JPA and Spring Boot with a practical example.
Join the DZone community and get the full member experience.Join For Free
GraphQL is a query language for APIs. Generally, while making REST endpoints in our APIs, the normal trend is to make an endpoint for a requirement. Let's say your endpoint is returning a list of employees, and each employee has general properties, such as name, age, and address (suppose address is another model mapped to the employee in such a way that each employee has an address). Now, at one point in time, you require data only for their address i.e. a list of all addresses in the database (only country, city, and street). For this, you will require an all-new endpoint in your service.
Here comes the power of GraphQL, which lets you deal with only a single endpoint and that changes its output based on the body of the request. Each request will call the same endpoint but with a different RequestBody. It will receive only the result that it requires.
Refer to the code on GitHub for complete code files. It's a maven project with an H2 database that has data.sql at classpath for database queries. This code rotates around getting a list of all employees from the database.
Now, let's begin with a practical implementation with Spring Boot.
We have two model classes: employee and address with respective getters and setters.
@Entity @Table public class Employee { String name; @Id String id; int age; @OneToOne(cascade = CascadeType.ALL) @JoinColumn(name = "addid") Address address; //......getters and setters....// } @Entity @Table public class Address { @Id @GeneratedValue String addid; String country; String city; String flat; //......getters and setters....// }
To implement the repository, we have the EmployeeRepo as:
@Repository public interface EmployeeRepo extends CrudRepository<Employee, String> { public List<Employee> findAll(); }
Now, GraphQL requires a .graphqls file at the classpath, which it parses and understands the type of request it needs to handle. You will find employee.graphqls in the code. Let me explain that.
It contains:
type Employee{ .......Employee details } and type Address{ ........Address Details }
Here, we are defining the schema of our classes, which will, in one way or another, be returned as a response to the endpoint i.e either it will return all employees, the addresses of all employees, only the names of all employees, etc.
We also defined:
type Query{ allEmployee: [Employee] }
Type of query (a query that will be sent by the client or that will be present in RequestBody) here is returning list of employees.
So, RequestBody to our endpoint will contain a root query as allEmployee.
Let's discuss requests here:
1. Requests that require all employees without their addresses:
{ allEmployee{ name age id } }
2. Requests that require only employee names and the country that they belong to:
{ allEmployee{ name address { country } } }
3. Requests that require only the address of all employees will be:
{ allEmployee{ address { country city flat addid } } }
For an explanation of the service layer of the code, read part 2.
Meanwhile, you can access the full code on GitHub.
If you enjoyed this article and want to learn more about GraphQL, check out this collection of tutorials and articles on all things GraphQL.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }}
|
https://dzone.com/articles/graphql-understanding-with-springdatajpaspringboot
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
This part of the series will show how to verify our applications with code-level as well as system-level integration tests.
(Code-level) integration tests
The term integration test is sometimes used differently in different contexts. What I’m referring to, following the Wikipedia definition, are tests that verify the interaction of multiple components, here on a code level. Typically, integration tests make use of embedded containers or other simulated environments in order to test a subset of the application. Test technology such as Spring Tests, Arquillian, CDI-Unit, and others make it easy to write tests and easy to inject individual classes into the test class for direct interaction during the test execution.
The following shows a pseudo code example of an integration test that uses a CDI-Unit runner:
The test scenario can easily inject and mock dependencies and access them within the test methods.
Since the embedded test technology takes a few moments to start up, embedded integration tests usually have the biggest negative impact in overall test execution time. From my experience, a lot of projects copy-and-paste existing test scenarios and run them in a way where every test class will start up the application, or parts of it, all over again. Over time, this increases the turnaround time of the build so much, that developers won’t get a fast feedback.
While these type of tests can verify the correctness of the “plumbing”, whether the APIs and annotation have been used correctly, they are not the most efficient way to test business logic. Especially in microservice applications, integration tests don’t provide ultimate confidence, whether the integration especially of endpoints and persistence will behaves exactly like it does in production. Ultimately, there can always be tiny differences in the way how JSON objects are being mapped, HTTP requests are being handled, or objects are persisted to the datastore.
The question is always, what our tests should really verify. Are we verifying the framework and it’s correct usage or the correct behavior of our overall application?
Code-level integration tests work well for a fast feedback whether developers made some careless mistakes in wiring up the frameworks. A few single test cases that in this case don’t verify the business logic but just the application is able to start up, in a smoke test fashion, can increase the development efficiency.
However, if our applications don’t make use of our enterprise framework in an overly complex way, for example using custom qualifiers, CDI extensions, or custom scopes, the need for code-level integration tests decreases. Since we have ways to catch the same types of errors, and many others, using system tests, I usually discourage developers from writing too many code-level integration tests. Integration tests indeed make it easy to wire-up multiple components on a code level, however, it’s possible to use different approaches, like use case tests, which don’t come with the startup time penalty.
Since integration test technologies usually start up or deploy to an container, they usually define their own life cycle and make it harder to be integrated into a bigger picture. If developers want to craft an optimized development workflow, by running the application in a mode that hot-reloads on changes in a different life cycle and then quickly execute integrative tests against the running application, this is not easily possible by these type of integration tests, since they would usually start their own application. There are some technologies out there that improve this, for example Quarkus and its integration tests. Still, an easier and more flexible way is to keep the test scenarios separate from the life cycle of the overall application context.
Tangling tests with the life cycle of (embedded) applications also makes it harder to reuse test scenarios for multiple scopes, since they usually require to be executed with specific runners or further constraints. We’ve had many cases where reusing the test scenarios, the code that defines the logical part of the test, in different scopes simplified enhancing the test suite, for example for use case tests, load tests, or system tests. If the cases don’t put too many constraints on how they have to be executed, for example with which test runner, reusing them, i.e. copying them some place else and swapping the implementation of used delegates or components, becomes much simpler. As you will see in the following, there are more effective ways how to fully verify our applications, especially for more complex projects.
System tests
In a microservice world, our applications integrate more and more with other resources such as external systems, databases, queues, or message brokers, and typically include less extremely complex business logic. That being said, it is crucial to verify the behavior of our systems from an outside perspective, that is, interacting with our applications in the same way as the other components will in production.
System tests verify the behavior of deployed applications by making use of the regular interfaces, for example HTTP, gRPC, JMS, or WebSockets. They are executed against an environment, where the application-under-test is deployed and configured exactly like in production, with external systems usually being mocked or simulated. Test scenarios can interact with the mocked external systems to further control the scenario and verify the behavior. Container technologies, mock servers, and embedded databases can help a lot in this regard.
In general, system tests can be written in all kinds of various technology, since they are decoupled from the implementation. It usually makes sense though to use the same technology as in the application project, since the developers are already familiar with it, e.g. also using JUnit with HTTP clients such as JAX-RS.
We should be careful not to couple the system tests with the actual implementations, that is, not to re-use class definitions or import shared modules. While this is tempting in project to reduce duplication, it actually increases the likelihood to miss regression when application interfaces change, sometimes per accident. If, for example, both the production code and test code changes the way how objects are serialized to JSON, this potentially unwanted change in the API contract won’t be caught if the class definitions are being reused (i.e. “garbage in, garbage out”). For this reason, it’s usually advisable to keep the system tests in separate projects, that use their own, potentially simplified class definitions, or to enforce in other ways that the test classes won’t re-use production code. The implementation should indeed verify that the communication happens as expected, e.g. check for expected HTTP status code. If there is an unwanted change in the production behavior, the system test project and its behavior hasn’t been modified and will detect the change in the contract.
Since system test scenarios can quickly become fairly complex, we need to care about maintainability and test code quality. We’ll have a closer look at this in a second, but in general, it’s advisable to construct special delegates for controlling and communicating with the mocked external systems, as well as for creating test data.
What else becomes crucial for more complex setups is to define idempotent system tests that verify a specific behavior regardless of the current state. We should avoid creating test scenarios that only work against a fresh, empty system or need to be executed in a specific order. Real-world business uses cases are usually also performed on longer-running systems and executed simultaneously. If we achieve the same grade of isolation in our system tests, we avoid that the tests are tangled to specific preconditions or the order of execution, and we can run them in parallel, or against a local development environment that can keep running for more than one test run. This is a prerequisite for both setting up effective local workflows as well as to potentially reuse the test scenario definitions for different purposes.
In order to keep environments similar, the question is how production looks like and how we can come as close as possible during local development or in Continuous Delivery pipelines. In general, the advent of containers made it much simpler to achieve that goal. If our applications run in containers we have multiple ways to execute them locally, either starting them via shell scripts, Docker Compose, testcontainers, which we’ll have a look at in a second, or we even run a fully-fledged Kubernetes or OpenShift cluster. In Continuous Delivery pipelines we ideally deploy to and test against an environment in the same way as we do to production, a cluster or environment that uses the same technology and configuration, for example a separate Kubernetes cluster or namespace.
Depending on the complexity of the system and the local development workflow, we can manage the life cycle of the deployed application in the system test execution, or externally, via separate tools. From experience, managing the environment externally, that is starting it up via a separate mechanism and running the idempotent tests against it, is faster to execute, allows for more flexibility in our workflow, and is ultimately also easier to manage. A very convenient way for this is to define shell scripts that wrap the actual commands, such as how to start the Docker containers, setup Docker compose, start Kubernetes and apply the YAML files, or else, and then to simply execute the scripts at the beginning of the development session. The system tests then run very quickly since they have an independent life cycle and connect to an environment that is already running. This can be achieved for both dedicated test environments and local setups. Setting up complex environments locally sounds like a big turnaround for changing some behavior and verify our changes, however, modern development tools with hot-deployment techniques support us in keeping the cycles instantly fast. We can modify the behavior of the application-under-test instantly and re-execute the test cases, that also run very quickly.
This approach gives us a very fast feedback yet proper verification, since we’re testing against the actual application interfaces, not simulations. However, it’s crucial that we keep our setup maintainable in order to keep the complexity manageable.
In the next part of the article series we will cover effective development workflows and the importance of test code quality and how to achieve that our tests stay maintainable.
|
https://www.javacodegeeks.com/2019/09/efficient-enterprise-testing-integration-tests-3-6.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
SQLite
The Mono.Data.SqliteClient assembly contains an ADO.NET data provider for the SQLite embeddable database engine (both version 2 and version 3).
SQLite has a notable oddity: table cell data does not retain what kind of data it was. Everything is stored as either a long, double, string, or blob. And in SQLite version 2, everything is stored as a string. So you need to be careful about avoiding casting values returned by SQLite without checking the type of the value returned. See below for notes on storing DateTimes.
New style assembly shipped with Mono 1.2.4
Starting with the 1.2.4 release, Mono ships a second SQLite assembly - Mono.Data.Sqlite. The new assembly provides support only for SQLite version 3and is not 100% binary and API compatible with the older assembly. The new assembly is based on code by Robert Simpson from and provides full ADO.NET 2.0 API interface. Code from the old binary is contained in the new one but is available only in the 1.1 profile. The 2.0 profile can no longer access the old code when referencing the new assembly. We have chosen this way as means to provide a migration path for developers using SQLite in their .NET applications - both assemblies will be shipped with several future releases of Mono, and at some (yet undetermined) point the old one will be removed from the distribution. All the developers are encouraged to start transitioning their code to the new assembly - for both 1.1 and 2.0 profiles.
One disadvantage of the new assembly is its binary incompatibility in the data format. That is, if your application uses SQLite database v2 format you will not be able to access your data with the new assembly. To solve this problem you must dump your data using sqlite v2 utilities and then restore it using sqlite v3 utilities.
Prerequisites
If you do not have SQLite, download it. There are binaries for Windows and Linux. You can put the .dll or .so along side your application binaries, or in a system-wide library path.
Connection String Format
The format of the connection string is:
[1.1 profile and the old assembly] URI=file:/path/to/file [2.0 profile in the new assembly] Data Source=file:/path/to/file Data Source=|DataDirectory|filename
The latter case for the 2.0 profile references the App_Data directory (or any other directory that’s configured to contain data files for an ASP.NET 2.0 application)
As an example:
[1.1 and the old assembly] URI=file:SqliteTest.db [2.0 and the new assembly] Data Source=file:SqliteTest.db
That will use the database SqliteTest.db in the current directory. It will be created if it does not exist.
Or you prefer to use SQLite as an in memory database
URI=file::memory:,version=3
The
version=3 is supported, but not necessary with the new assembly.
With the old assembly, the ADO.NET adapter will use SQLite version 2 by default, but if version 2 is not found and version 3 is available, it will fallback to version 3. You can force the adapter to use version 3 by adding “version=3” to the connection string:
URI=file:SqliteTest.db,version=3
The new assembly, as described above, uses only database format version 3.
- Connection String Parameters:
For the 1.1 profile and the old assembly
The busy_timeout parameter is implemented as a call to sqlite(3)_busy_timeout. The default value is 0, which means to throw a SqliteBusyException immediately if the database is locked.
For the 2.0 profile in the new assembly
Storing DateTimes
The way DateTimes are stored and retrieved from Sqlite databases depends on a lot, unfortunately, because Sqlite doesn’t have a way of storing datetimes natively. Further, there are two versions of Sqlite (2 and 3) which are treated differently when it comes to DateTimes. The recommended way of using DateTimes with Sqlite is to encode/decode them yourself to/from some particular integer string format that you decide, and not putting them into a DATE or DATETIME column.
Sqlite2 only has strings internally. No matter what the column was declared as, DateTimes are just going to be converted into strings. If you use parameters, for instance, DateTimes will be converted in a culture-sensitive format. When reading back the data, there’s no way to know that it was originally a DateTime and not a string, so Mono.Data.SqliteClient returns the string. Using Sqlite2, you really can’t use DateTimes without encoding them yourself.
If you explicitly are targetting Sqlite3, or using the new assembly (in which case you should be providing the version parameter in the connection string, unless you are using the new assembly), you can rely on the particular behavior used when connecting to a Sqlite3 database. Sqlite3 has string, integer (64bit), real, and blob internal storage types. When putting a DateTime into the database using parameters, Mono.Data.SqliteClient will encode the DateTime as an integer using ToFileTime(). But this doesn’t help when reading the data back to determine that a value was originally a DateTime. Sqlite3 also exposes the names of the types of the columns as the table was created with. If a column is declared as a DATE or DATETIME, SqliteDataReader will try to turn the value back into a DateTime. If it finds an integer value, it uses DateTime.FromFileTime, which is the reverse of how it encodes DateTimes if you insert a DateTime via parameters. If it finds a string value, it uses DateTime.Parse, but note that Parse is a very slow operation. So with Sqlite3, DateTimes should be put into DATE or DATETIME columns in the database either through parameters or by turning it into a long with ToFileTime yourself, and then they will be read back as DateTimes.
Character Encodings
The Sqlite client treats character encodings differently for version 2 and version 3 because of the way Sqlite2 and 3 treat strings.
In Sqlite3, the Sqlite client communicates with Sqlite in Unicode. Therefore, you should be able to read and write any characters from the database, but note that if you write Unicode characters to a database, you may not be able to read them back in other applications if the application does not communicate with Sqlite using Unicode.
In Sqlite2, the client by default communicates with Sqlite using the UTF-8 encoding, which means you can read and write any character. But you must beware of two things. The first is that since non-ASCII characters are encoded as multi-byte characters in UTF-8, and Sqlite2 doesn’t recognize multibyte characters (unless it was compiled specifically with UTF-8 support), LIKE, GLOB, LENGTH, and SUBSTR will behave oddly. The second caveat is that other applications using the database must be using UTF-8 as well.
When using Sqlite2, you can force Mono.Data.SqliteClient to use a different encoding instead of UTF-8 by adding “;encoding=ASCII” for instance to the connection string. It must be an encoding that ends with a single null terminator, however.
C# Example (1.1 profile of the new assembly and the old assembly)
using System; using System.Data; using Mono.Data.SqliteClient; public class SQLiteTest { public static void Main() { const string connectionString = "URI=file:SqliteTest.db"; IDbConnection dbcon = new SqliteConnection(connectionString); dbcon.Open(); IDbCommand dbcmd = dbcon.CreateCommand(); // requires a table to be created named employee // with columns firstname and lastname // such as, // CREATE TABLE employee ( // firstname nvarchar(32), // lastname nvarchar(32)); const string sql = "SELECT firstname, lastname " + "FROM employee"; dbcmd.CommandText = sql; IDataReader reader = dbcmd.ExecuteReader(); while(reader.Read()) { string firstName = reader.GetString(0); string lastName = reader.GetString(1); Console.WriteLine("Name: {0} {1}", firstName, lastName); } // clean up reader.Dispose(); dbcmd.Dispose(); dbcon.Close(); } }
To build the example:
- Save the example to a file, such as, TestExample.cs
- Build using Mono C# compiler:
csc TestExample.cs -r:System.Data.dll -r:Mono.Data.SqliteClient.dll
To run the example:
mono TestExample.exe
|
https://www.mono-project.com/docs/database-access/providers/sqlite/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
import "crawshaw.io/sqlite"
Package sqlite provides a Go interface to SQLite 3.
The semantics of this package are deliberately close to the SQLite3 C API, so it is helpful to be familiar with.
An SQLite connection is represented by a *sqlite.Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (using Open to create a *sqlitex.Pool) so goroutines can borrow a connection while they need to talk to the database.
This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache:
The implementation automatically handles shared cache locking, see the documentation on Stmt.Step for details.
The optional SQLite3 compiled in are: FTS5, RTree, JSON1, Session
This is not a database/sql driver.
Statements are prepared with the Prepare and PrepareTransient methods. When using Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can write:
stmt, err := conn.Prepare("SELECT ...")
After all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement.
The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs, or more commonly, a large number of moderate-sized blobs concurrently.)
To write a blob, first use an INSERT statement to set the size of the blob and assign a rowid:
"INSERT INTO blobs (myblob) VALUES (?);"
Use BindZeroBlob or SetZeroBlob to set the size of myblob. Then you can open the blob with:
b, err := conn.OpenBlob("", "blobs", "myblob", conn.LastInsertRowID(), true)
Every connection can have a done channel associated with it using the SetInterrupt method. This is typically the channel returned by a context.Context Done method.
For example, a timeout can be associated with a connection session:
ctx := context.WithTimeout(context.Background(), 100*time.Millisecond) conn.SetInterrupt(ctx.Done())
As database connections are long-lived, the SetInterrupt method can be called multiple times to reset the associated lifetime.
When using pools, the shorthand for associating a context with a connection is:
conn := dbpool.Get(ctx) if conn == nil { // ... handle error } defer dbpool.Put(c)
SQLite transactions have to be managed manually with this package by directly calling BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE/ ROLLBACK. The sqlitex has a Savepoint function that helps automate this.
Using a Pool to execute SQL in a concurrent HTTP handler.
var dbpool *sqlitex.Pool func main() { var err error dbpool, err = sqlitex.Open("file:memory:?mode=memory", 0, 10) if err != nil { log.Fatal(err) } http.HandleFunc("/", handle) log.Fatal(http.ListenAndServe(":8080", nil)) } func handle(w http.ResponseWriter, r *http.Request) { conn := dbpool.Get(r.Context()) if conn == nil { return } defer dbpool.Put(conn) stmt := conn.Prep("SELECT foo FROM footable WHERE id = $id;") stmt.SetText("$id", "_user_id_") for { if hasRow, err := stmt.Step(); err != nil { // ... handle error } else if !hasRow { break } foo := stmt.GetText("foo") // ... use foo } }
For helper functions that make some kinds of statements easier to write see the sqlitex package.
backup.go blob.go doc.go error.go extension.go func.go incrementor.go session.go snapshot.go sqlite.go static.go
const ( SQLITE_OK = ErrorCode(C.SQLITE_OK) // do not use in Error SQLITE_ERROR = ErrorCode(C.SQLITE_ERROR) SQLITE_INTERNAL = ErrorCode(C.SQLITE_INTERNAL) SQLITE_PERM = ErrorCode(C.SQLITE_PERM) SQLITE_ABORT = ErrorCode(C.SQLITE_ABORT) SQLITE_BUSY = ErrorCode(C.SQLITE_BUSY) SQLITE_LOCKED = ErrorCode(C.SQLITE_LOCKED) SQLITE_NOMEM = ErrorCode(C.SQLITE_NOMEM) SQLITE_READONLY = ErrorCode(C.SQLITE_READONLY) SQLITE_INTERRUPT = ErrorCode(C.SQLITE_INTERRUPT) SQLITE_IOERR = ErrorCode(C.SQLITE_IOERR) SQLITE_CORRUPT = ErrorCode(C.SQLITE_CORRUPT) SQLITE_NOTFOUND = ErrorCode(C.SQLITE_NOTFOUND) SQLITE_FULL = ErrorCode(C.SQLITE_FULL) SQLITE_CANTOPEN = ErrorCode(C.SQLITE_CANTOPEN) SQLITE_PROTOCOL = ErrorCode(C.SQLITE_PROTOCOL) SQLITE_EMPTY = ErrorCode(C.SQLITE_EMPTY) SQLITE_SCHEMA = ErrorCode(C.SQLITE_SCHEMA) SQLITE_TOOBIG = ErrorCode(C.SQLITE_TOOBIG) SQLITE_CONSTRAINT = ErrorCode(C.SQLITE_CONSTRAINT) SQLITE_MISMATCH = ErrorCode(C.SQLITE_MISMATCH) SQLITE_MISUSE = ErrorCode(C.SQLITE_MISUSE) SQLITE_NOLFS = ErrorCode(C.SQLITE_NOLFS) SQLITE_AUTH = ErrorCode(C.SQLITE_AUTH) SQLITE_FORMAT = ErrorCode(C.SQLITE_FORMAT) SQLITE_RANGE = ErrorCode(C.SQLITE_RANGE) SQLITE_NOTADB = ErrorCode(C.SQLITE_NOTADB) SQLITE_NOTICE = ErrorCode(C.SQLITE_NOTICE) SQLITE_WARNING = ErrorCode(C.SQLITE_WARNING) SQLITE_ROW = ErrorCode(C.SQLITE_ROW) // do not use in Error SQLITE_DONE = ErrorCode(C.SQLITE_DONE) // do not use in Error SQLITE_ERROR_MISSING_COLLSEQ = ErrorCode(C.SQLITE_ERROR_MISSING_COLLSEQ) SQLITE_ERROR_RETRY = ErrorCode(C.SQLITE_ERROR_RETRY) SQLITE_ERROR_SNAPSHOT = ErrorCode(C.SQLITE_ERROR_SNAPSHOT) SQLITE_IOERR_READ = ErrorCode(C.SQLITE_IOERR_READ) SQLITE_IOERR_SHORT_READ = ErrorCode(C.SQLITE_IOERR_SHORT_READ) SQLITE_IOERR_WRITE = ErrorCode(C.SQLITE_IOERR_WRITE) SQLITE_IOERR_FSYNC = ErrorCode(C.SQLITE_IOERR_FSYNC) SQLITE_IOERR_DIR_FSYNC = ErrorCode(C.SQLITE_IOERR_DIR_FSYNC) SQLITE_IOERR_TRUNCATE = ErrorCode(C.SQLITE_IOERR_TRUNCATE) SQLITE_IOERR_FSTAT = ErrorCode(C.SQLITE_IOERR_FSTAT) SQLITE_IOERR_UNLOCK = ErrorCode(C.SQLITE_IOERR_UNLOCK) SQLITE_IOERR_RDLOCK = ErrorCode(C.SQLITE_IOERR_RDLOCK) SQLITE_IOERR_DELETE = ErrorCode(C.SQLITE_IOERR_DELETE) SQLITE_IOERR_BLOCKED = ErrorCode(C.SQLITE_IOERR_BLOCKED) SQLITE_IOERR_NOMEM = ErrorCode(C.SQLITE_IOERR_NOMEM) SQLITE_IOERR_ACCESS = ErrorCode(C.SQLITE_IOERR_ACCESS) SQLITE_IOERR_CHECKRESERVEDLOCK = ErrorCode(C.SQLITE_IOERR_CHECKRESERVEDLOCK) SQLITE_IOERR_LOCK = ErrorCode(C.SQLITE_IOERR_LOCK) SQLITE_IOERR_CLOSE = ErrorCode(C.SQLITE_IOERR_CLOSE) SQLITE_IOERR_DIR_CLOSE = ErrorCode(C.SQLITE_IOERR_DIR_CLOSE) SQLITE_IOERR_SHMOPEN = ErrorCode(C.SQLITE_IOERR_SHMOPEN) SQLITE_IOERR_SHMSIZE = ErrorCode(C.SQLITE_IOERR_SHMSIZE) SQLITE_IOERR_SHMLOCK = ErrorCode(C.SQLITE_IOERR_SHMLOCK) SQLITE_IOERR_SHMMAP = ErrorCode(C.SQLITE_IOERR_SHMMAP) SQLITE_IOERR_SEEK = ErrorCode(C.SQLITE_IOERR_SEEK) SQLITE_IOERR_DELETE_NOENT = ErrorCode(C.SQLITE_IOERR_DELETE_NOENT) SQLITE_IOERR_MMAP = ErrorCode(C.SQLITE_IOERR_MMAP) SQLITE_IOERR_GETTEMPPATH = ErrorCode(C.SQLITE_IOERR_GETTEMPPATH) SQLITE_IOERR_CONVPATH = ErrorCode(C.SQLITE_IOERR_CONVPATH) SQLITE_IOERR_VNODE = ErrorCode(C.SQLITE_IOERR_VNODE) SQLITE_IOERR_AUTH = ErrorCode(C.SQLITE_IOERR_AUTH) SQLITE_IOERR_BEGIN_ATOMIC = ErrorCode(C.SQLITE_IOERR_BEGIN_ATOMIC) SQLITE_IOERR_COMMIT_ATOMIC = ErrorCode(C.SQLITE_IOERR_COMMIT_ATOMIC) SQLITE_IOERR_ROLLBACK_ATOMIC = ErrorCode(C.SQLITE_IOERR_ROLLBACK_ATOMIC) SQLITE_LOCKED_SHAREDCACHE = ErrorCode(C.SQLITE_LOCKED_SHAREDCACHE) SQLITE_BUSY_RECOVERY = ErrorCode(C.SQLITE_BUSY_RECOVERY) SQLITE_BUSY_SNAPSHOT = ErrorCode(C.SQLITE_BUSY_SNAPSHOT) SQLITE_CANTOPEN_NOTEMPDIR = ErrorCode(C.SQLITE_CANTOPEN_NOTEMPDIR) SQLITE_CANTOPEN_ISDIR = ErrorCode(C.SQLITE_CANTOPEN_ISDIR) SQLITE_CANTOPEN_FULLPATH = ErrorCode(C.SQLITE_CANTOPEN_FULLPATH) SQLITE_CANTOPEN_CONVPATH = ErrorCode(C.SQLITE_CANTOPEN_CONVPATH) SQLITE_CORRUPT_VTAB = ErrorCode(C.SQLITE_CORRUPT_VTAB) SQLITE_READONLY_RECOVERY = ErrorCode(C.SQLITE_READONLY_RECOVERY) SQLITE_READONLY_CANTLOCK = ErrorCode(C.SQLITE_READONLY_CANTLOCK) SQLITE_READONLY_ROLLBACK = ErrorCode(C.SQLITE_READONLY_ROLLBACK) SQLITE_READONLY_DBMOVED = ErrorCode(C.SQLITE_READONLY_DBMOVED) SQLITE_READONLY_CANTINIT = ErrorCode(C.SQLITE_READONLY_CANTINIT) SQLITE_READONLY_DIRECTORY = ErrorCode(C.SQLITE_READONLY_DIRECTORY) SQLITE_ABORT_ROLLBACK = ErrorCode(C.SQLITE_ABORT_ROLLBACK) SQLITE_CONSTRAINT_CHECK = ErrorCode(C.SQLITE_CONSTRAINT_CHECK) SQLITE_CONSTRAINT_COMMITHOOK = ErrorCode(C.SQLITE_CONSTRAINT_COMMITHOOK) SQLITE_CONSTRAINT_FOREIGNKEY = ErrorCode(C.SQLITE_CONSTRAINT_FOREIGNKEY) SQLITE_CONSTRAINT_FUNCTION = ErrorCode(C.SQLITE_CONSTRAINT_FUNCTION) SQLITE_CONSTRAINT_NOTNULL = ErrorCode(C.SQLITE_CONSTRAINT_NOTNULL) SQLITE_CONSTRAINT_PRIMARYKEY = ErrorCode(C.SQLITE_CONSTRAINT_PRIMARYKEY) SQLITE_CONSTRAINT_TRIGGER = ErrorCode(C.SQLITE_CONSTRAINT_TRIGGER) SQLITE_CONSTRAINT_UNIQUE = ErrorCode(C.SQLITE_CONSTRAINT_UNIQUE) SQLITE_CONSTRAINT_VTAB = ErrorCode(C.SQLITE_CONSTRAINT_VTAB) SQLITE_CONSTRAINT_ROWID = ErrorCode(C.SQLITE_CONSTRAINT_ROWID) SQLITE_NOTICE_RECOVER_WAL = ErrorCode(C.SQLITE_NOTICE_RECOVER_WAL) SQLITE_NOTICE_RECOVER_ROLLBACK = ErrorCode(C.SQLITE_NOTICE_RECOVER_ROLLBACK) SQLITE_WARNING_AUTOINDEX = ErrorCode(C.SQLITE_WARNING_AUTOINDEX) SQLITE_AUTH_USER = ErrorCode(C.SQLITE_AUTH_USER) )
const ( SQLITE_INSERT = OpType(C.SQLITE_INSERT) SQLITE_DELETE = OpType(C.SQLITE_DELETE) SQLITE_UPDATE = OpType(C.SQLITE_UPDATE) )
const ( SQLITE_CHANGESET_DATA = ConflictType(C.SQLITE_CHANGESET_DATA) SQLITE_CHANGESET_NOTFOUND = ConflictType(C.SQLITE_CHANGESET_NOTFOUND) SQLITE_CHANGESET_CONFLICT = ConflictType(C.SQLITE_CHANGESET_CONFLICT) SQLITE_CHANGESET_CONSTRAINT = ConflictType(C.SQLITE_CHANGESET_CONSTRAINT) SQLITE_CHANGESET_FOREIGN_KEY = ConflictType(C.SQLITE_CHANGESET_FOREIGN_KEY) )
const ( SQLITE_CHANGESET_OMIT = ConflictAction(C.SQLITE_CHANGESET_OMIT) SQLITE_CHANGESET_ABORT = ConflictAction(C.SQLITE_CHANGESET_ABORT) SQLITE_CHANGESET_REPLACE = ConflictAction(C.SQLITE_CHANGESET_REPLACE) )
const ( SQLITE_OPEN_READONLY = OpenFlags(C.SQLITE_OPEN_READONLY) SQLITE_OPEN_READWRITE = OpenFlags(C.SQLITE_OPEN_READWRITE) SQLITE_OPEN_CREATE = OpenFlags(C.SQLITE_OPEN_CREATE) SQLITE_OPEN_URI = OpenFlags(C.SQLITE_OPEN_URI) SQLITE_OPEN_MEMORY = OpenFlags(C.SQLITE_OPEN_MEMORY) SQLITE_OPEN_MAIN_DB = OpenFlags(C.SQLITE_OPEN_MAIN_DB) SQLITE_OPEN_TEMP_DB = OpenFlags(C.SQLITE_OPEN_TEMP_DB) SQLITE_OPEN_TRANSIENT_DB = OpenFlags(C.SQLITE_OPEN_TRANSIENT_DB) SQLITE_OPEN_MAIN_JOURNAL = OpenFlags(C.SQLITE_OPEN_MAIN_JOURNAL) SQLITE_OPEN_TEMP_JOURNAL = OpenFlags(C.SQLITE_OPEN_TEMP_JOURNAL) SQLITE_OPEN_SUBJOURNAL = OpenFlags(C.SQLITE_OPEN_SUBJOURNAL) SQLITE_OPEN_MASTER_JOURNAL = OpenFlags(C.SQLITE_OPEN_MASTER_JOURNAL) SQLITE_OPEN_NOMUTEX = OpenFlags(C.SQLITE_OPEN_NOMUTEX) SQLITE_OPEN_FULLMUTEX = OpenFlags(C.SQLITE_OPEN_FULLMUTEX) SQLITE_OPEN_SHAREDCACHE = OpenFlags(C.SQLITE_OPEN_SHAREDCACHE) SQLITE_OPEN_PRIVATECACHE = OpenFlags(C.SQLITE_OPEN_PRIVATECACHE) SQLITE_OPEN_WAL = OpenFlags(C.SQLITE_OPEN_WAL) )
const ( SQLITE_DBCONFIG_DQS_DML = C.int(C.SQLITE_DBCONFIG_DQS_DML) SQLITE_DBCONFIG_DQS_DDL = C.int(C.SQLITE_DBCONFIG_DQS_DDL) )
const ( SQLITE_INTEGER = ColumnType(C.SQLITE_INTEGER) SQLITE_FLOAT = ColumnType(C.SQLITE_FLOAT) SQLITE_TEXT = ColumnType(C.SQLITE3_TEXT) SQLITE_BLOB = ColumnType(C.SQLITE_BLOB) SQLITE_NULL = ColumnType(C.SQLITE_NULL) )
BindIndexStart is the index of the first parameter when using the Stmt.Bind* functions.
ColumnIndexStart is the index of the first column when using the Stmt.Column* functions.
Logger is written to by SQLite. The Logger must be set before any connection is opened. The msg slice is only valid for the duration of the call.
It is very noisy.
ChangesetConcat concatenates two changesets.
ChangesetInvert inverts a changeset.
A Backup copies data between two databases.
It is used to backup file based or in-memory databases.
Equivalent to the sqlite3_backup* C object.
Finish is called to clean up the resources allocated by BackupInit.
PageCount returns the total number of pages in the source database at the conclusion of the most recent b.Step().
Remaining returns the number of pages still to be backed up at the conclusion of the most recent b.Step().
Step is called one or more times to transfer nPage pages at a time between databases.
Use -1 to transfer the entire database at once.
type Blob struct { io.ReadWriteSeeker io.ReaderAt io.WriterAt io.Closer // contains filtered or unexported fields }
Blob provides streaming access to SQLite blobs.
Size returns the total size of a blob.
func NewChangegroup() (*Changegroup, error)
func (cg Changegroup) Add(r io.Reader) error
func (cg Changegroup) Delete()
Delete deletes a Changegroup.
ChangesetIter is an iterator over a changeset.
An iterator is used much like a Stmt over result rows. It is also used in the conflictFn provided to ChangesetApply. To process the changes in a changeset:
iter, err := ChangesetIterStart(r) if err != nil { // ... handle err } for { hasRow, err := iter.Next() if err != nil { // ... handle err } if !hasRow { break } // Use the Op, New, Old method to inspect the change. } if err := iter.Finalize(); err != nil { // ... handle err }
func ChangesetIterStart(r io.Reader) (ChangesetIter, error)
ChangesetIterStart creates an iterator over a changeset.
func (iter ChangesetIter) Conflict(col int) (v Value, err error)
Conflict obtains conflicting row values from an iterator. Only use this in an iterator passed to a ChangesetApply conflictFn.
func (iter ChangesetIter) FKConflicts() (int, error)
FKConflicts reports the number of foreign key constraint violations.
func (iter ChangesetIter) Finalize() error
Finalize deletes a changeset iterator. Do not use in iterators passed to a ChangesetApply conflictFn.
func (iter ChangesetIter) New(col int) (v Value, err error)
New obtains new row values from an iterator.
func (iter ChangesetIter) Next() (rowReturned bool, err error)
Next moves a changeset iterator forward. Do not use in iterators passed to a ChangesetApply conflictFn.
func (iter ChangesetIter) Old(col int) (v Value, err error)
Old obtains old row values from an iterator.
func (iter ChangesetIter) Op() (table string, numCols int, opType OpType, indirect bool, err error)
Op reports details about the current operation in the iterator.
func (iter ChangesetIter) PK() ([]bool, error)
PK reports the columns that make up the primary key.
ColumnType are codes for each of the SQLite fundamental datatypes:
64-bit signed integer 64-bit IEEE floating point number string BLOB NULL
func (t ColumnType) String() string
func (code ConflictAction) String() string
func (code ConflictType) String() string
Conn is an open connection to an SQLite3 database.
A Conn can only be used by goroutine at a time.
OpenConn opens a single SQLite database connection. A flags value of 0 defaults to:
SQLITE_OPEN_READWRITE SQLITE_OPEN_CREATE SQLITE_OPEN_WAL SQLITE_OPEN_URI SQLITE_OPEN_NOMUTEX
BackupInit initializes a new Backup object to copy from src to dst.
If srcDB or dstDB is "", then a default of "main" is used.
BackupToDB creates a complete backup of the srcDB on the src Conn to a new database Conn at dstPath. The resulting dst connection is returned. This will block until the entire backup is complete.
If srcDB is "", then a default of "main" is used.
This is very similar to the first example function implemented on the following page.
Changes reports the number of rows affected by the most recent statement.
func (conn *Conn) ChangesetApply(r io.Reader, filterFn func(tableName string) bool, conflictFn func(ConflictType, ChangesetIter) ConflictAction) error
ChangesetApply applies a changeset to the database.
If a changeset will not apply cleanly then conflictFn can be used to resolve the conflict. See the SQLite documentation for full details.
func (conn *Conn) ChangesetApplyInverse(r io.Reader, filterFn func(tableName string) bool, conflictFn func(ConflictType, ChangesetIter) ConflictAction) error
ChangesetApplyInverse applies the inverse of a changeset to the database.
If a changeset will not apply cleanly then conflictFn can be used to resolve the conflict. See the SQLite documentation for full details.
This is equivalent to inverting a changeset using ChangesetInvert before applying it. It is an error to use a patchset.
CheckReset reports whether any statement on this connection is in the process of returning results.
Close closes the database connection using sqlite3_close and finalizes persistent prepared statements.
func (conn *Conn) CreateFunction(name string, deterministic bool, numArgs int, xFunc, xStep func(Context, ...Value), xFinal func(Context)) error
CreateFunction registers a Go function with SQLite for use in SQL queries.
To define a scalar function, provide a value for xFunc and set xStep/xFinal to nil.
To define an aggregation set xFunc to nil and provide values for xStep and xFinal.
State can be stored across function calls by using the Context UserData/SetUserData methods.
CreateSession creates a new session object. If db is "", then a default of "main" is used.
EnableDoubleQuotedStringLiterals allows fine grained control over whether double quoted string literals are accepted in Data Manipulation Language or Data Definition Language queries.
By default DQS is disabled since double quotes should indicate an identifier.
EnableLoadExtension allows extensions to be loaded via LoadExtension(). The SQL interface is left disabled as recommended.
GetSnapshot attempts to make a new Snapshot that records the current state of the given schema in conn. If successful, a *Snapshot and a func() is returned, and the conn will have an open READ transaction which will continue to reflect the state of the Snapshot until the returned func() is called. No WRITE transaction may occur on conn until the returned func() is called.
The returned *Snapshot is threadsafe for creating additional read transactions that reflect its state with Conn.StartSnapshotRead.
In theory, so long as at least one read transaction is open on the Snapshot, then the WAL file will not be checkpointed past that point, and the Snapshot will continue to be available for creating additional read transactions. However, if no read transaction is open on the Snapshot, then it is possible for the WAL to be checkpointed past the point of the Snapshot. If this occurs then there is no way to start a read on the Snapshot. In order to ensure that a Snapshot remains readable, always maintain at least one open read transaction on the Snapshot.
In practice, this is generally reliable but sometimes the Snapshot can sometimes become unavailable for reads unless automatic checkpointing is entirely disabled from the start.
The returned *Snapshot has a finalizer that calls Free if it has not been called, so it is safe to allow a Snapshot to be garbage collected. However, if you are sure that a Snapshot will never be used again by any thread, you may call Free once to release the memory earlier. No reads will be possible on the Snapshot after Free is called on it, however any open read transactionss will not be interrupted.
See sqlitex.Pool.GetSnapshot for a helper function for automatically keeping an open read transaction on a set aside connection until a Snapshot is GC'd.
The following must be true for this function to succeed:
- The schema of conn must be a WAL mode database.
- There must not be any transaction open on schema of conn.
- At least one transaction must have been written to the current WAL file since it was created on disk (by any connection). You can run the following SQL to ensure that a WAL file has been created.
BEGIN IMMEDIATE; COMMIT;
LastInsertRowID reports the rowid of the most recently successful INSERT.
LoadExtension attempts to load a runtime-loadable extension.
OpenBlob opens a blob in a particular {database,table,column,row}.
Prep returns a persistent SQL statement.
Any error in preparation will panic.
Persistent prepared statements are cached by the query string in a Conn. If Finalize is not called, then subsequent calls to Prepare will return the same statement.
Prepare prepares a persistent SQL statement.
Persistent prepared statements are cached by the query string in a Conn. If Finalize is not called, then subsequent calls to Prepare will return the same statement.
If the query has any unprocessed trailing bytes, Prepare returns an error.
PrepareTransient prepares an SQL statement that is not cached by the Conn. Subsequent calls with the same query will create new Stmts. Finalize must be called by the caller once done with the Stmt.
The number of trailing bytes not consumed from query is returned.
To run a sequence of queries once as part of a script, the sqlitex package provides an ExecScript function built on this.
SetBusyTimeout sets a busy handler that sleeps for up to d to acquire a lock.
By default, a large busy timeout (10s) is set on the assumption that Go programs use a context object via SetInterrupt to control timeouts.
SetInterrupt assigns a channel to control connection execution lifetime.
When doneCh is closed, the connection uses sqlite3_interrupt to stop long-running queries and cancels any *Stmt.Step calls that are blocked waiting for the database write lock.
Subsequent uses of the connection will return SQLITE_INTERRUPT errors until doneCh is reset with a subsequent call to SetInterrupt.
Typically, doneCh is provided by the Done method on a context.Context. For example, a timeout can be associated with a connection session:
ctx := context.WithTimeout(context.Background(), 100*time.Millisecond) conn.SetInterrupt(ctx.Done())
Any busy statements at the time SetInterrupt is called will be reset.
SetInterrupt returns the old doneCh assigned to the connection.
StartSnapshotRead starts a new read transaction on conn such that the read transaction refers to historical Snapshot s, rather than the most recent change to the database.
There must be no open transaction on conn. Free must not have been called on s prior to or during this function call.
If err is nil, then endRead is a function that will end the read transaction and return conn to its original state. Until endRead is called, no writes may occur on conn, and all reads on conn will refer to the Snapshot.
Context is an *sqlite3_context. It is used by custom functions to return result values. An SQLite context is in no way related to a Go context.Context.
type Error struct { Code ErrorCode // SQLite extended error code (SQLITE_OK is an invalid value) Loc string // method name that generated the error Query string // original SQL query text Msg string // value of sqlite3_errmsg, set sqlite.ErrMsg = true }
Error is an error produced by SQLite.
ErrorCode is an SQLite extended error code.
The three SQLite result codes (SQLITE_OK, SQLITE_ROW, and SQLITE_DONE), are not errors so they should not be used in an Error.
ErrCode extracts the SQLite error code from err. If err is not a sqlite Error, SQLITE_ERROR is returned. If err is nil, SQLITE_OK is returned.
This function supports wrapped errors that implement
interface { Cause() error }
for errors from packages like
Incrementor is a closure around a value that returns and increments the value on each call. For example, the boolean statments in the following code snippet would all be true.
i := NewIncrementor(3) i() == 3 i() == 4 i() == 5
This is provided as syntactic sugar for dealing with bind param and column indexes. See BindIncrementor and ColumnIncrementor for small examples.
func BindIncrementor() Incrementor
BindIncrementor returns an Incrementor that starts on 1, the first index used in Stmt.Bind* functions. This is provided as syntactic sugar for binding parameter values to a Stmt. It allows for easily changing query parameters without manually fixing up the bind indexes, which can be error prone. For example,
stmt := conn.Prep(`INSERT INTO test (a, b, c) VALUES (?, ?, ?);`) i := BindIncrementor() stmt.BindInt64(i(), a) // i() == 1 if b > 0 { stmt.BindInt64(i(), b) // i() == 2 } else { // Remember to increment the index even if a param is NULL stmt.BindNull(i()) // i() == 2 } stmt.BindText(i(), c) // i() == 3
func ColumnIncrementor() Incrementor
ColumnIncrementor returns an Incrementor that starts on 0, the first index used in Stmt.Column* functions. This is provided as syntactic sugar for parsing column values from a Stmt. It allows for easily changing queried columns without manually fixing up the column indexes, which can be error prone. For example,
stmt := conn.Prep(`SELECT a, b, c FROM test;`) stmt.Step() i := ColumnIncrementor() a := stmt.ColumnInt64(i()) // i() == 1 b := stmt.ColumnInt64(i()) // i() == 2 c := stmt.ColumnText(i()) // i() == 3
func NewIncrementor(start int) Incrementor
NewIncrementor returns an Incrementor that starts on start.
OpenFlags are flags used when opening a Conn.
A Session tracks database changes made by a Conn.
It is used to build changesets.
Equivalent to the sqlite3_session* C object.
Attach attaches a table to the session object. Changes made to the table will be tracked by the session.
An empty tableName attaches all the tables in the database.
Changeset generates a changeset from a session.
Delete deletes a Session object.
Diff appends the difference between two tables (srcDB and the session DB) to the session. The two tables must have the same name and schema.
Disable disables recording of changes by a Session.
Enable enables recording of changes by a Session. New Sessions start enabled.
Patchset generates a patchset from a session.
A Snapshot records the state of a WAL mode database for some specific point in history.
Equivalent to the sqlite3_snapshot* C object.
CompareAges returns whether s1 is older, newer or the same age as s2. Age refers to writes on the database, not time since creation.
If s is older than s2, a negative number is returned. If s and s2 are the same age, zero is returned. If s is newer than s2, a positive number is returned.
The result is valid only if both of the following are true:
- The two snapshot handles are associated with the same database file.
- Both of the Snapshots were obtained since the last time the wal file was deleted.
Free destroys a Snapshot. Free is not threadsafe but may be called more than once. However, it is not necessary to call Free on a Snapshot returned by conn.GetSnapshot or pool.GetSnapshot as these set a finalizer that calls free which will be run automatically by the GC in a finalizer. However if it is guaranteed that a Snapshot will never be used again, calling Free will allow memory to be freed earlier.
A Snapshot may become unavailable for reads before Free is called if the WAL is checkpointed into the DB past the point of the Snapshot.
Stmt is an SQLite3 prepared statement.
A Stmt is attached to a particular Conn (and that Conn can only be used by a single goroutine).
When a Stmt is no longer needed it should be cleaned up by calling the Finalize method.
BindBool binds value (as an integer 0 or 1) to a numbered stmt parameter.
Parameter indices start at 1.
BindBytes binds value to a numbered stmt parameter.
In-memory copies of value are made using this interface. For large blobs, consider using the streaming Blob object.
Parameter indices start at 1.
BindFloat binds value to a numbered stmt parameter.
Parameter indices start at 1.
BindInt64 binds value to a numbered stmt parameter.
Parameter indices start at 1.
BindNull binds an SQL NULL value to a numbered stmt parameter.
Parameter indices start at 1.
BindParamCount reports the number of parameters in stmt.
BindText binds value to a numbered stmt parameter.
Parameter indices start at 1.
BindNull binds a blob of zeros of length len to a numbered stmt parameter.
Parameter indices start at 1.
ClearBindings clears all bound parameter values on a statement.
ColumnBytes reads a query result into buf. It reports the number of bytes read.
Column indices start at 0.
ColumnCount returns the number of columns in the result set returned by the prepared statement.
ColumnFloat returns a query result as a float64.
Column indices start at 0.
ColumnIndex returns the index of the column with the given name.
If there is no column with the given name ColumnIndex returns -1.
ColumnInt returns a query result value as an int.
Note: this method calls sqlite3_column_int64 and then converts the resulting 64-bits to an int.
Column indices start at 0.
ColumnInt32 returns a query result value as an int32.
Column indices start at 0.
ColumnInt64 returns a query result value as an int64.
Column indices start at 0.
ColumnLen returns the number of bytes in a query result.
Column indices start at 0.
ColumnName returns the name assigned to a particular column in the result set of a SELECT statement.
ColumnReader creates a byte reader for a query result column.
The reader directly references C-managed memory that stops being valid as soon as the statement row resets.
ColumnText returns a query result as a string.
Column indices start at 0.
func (stmt *Stmt) ColumnType(col int) ColumnType
ColumnType returns the datatype code for the initial data type of the result column. The returned value is one of:
SQLITE_INTEGER SQLITE_FLOAT SQLITE_TEXT SQLITE_BLOB SQLITE_NULL
Column indices start at 0.
DataCount returns the number of columns in the current row of the result set of prepared statement.
Finalize deletes a prepared statement.
Be sure to always call Finalize when done with a statement created using PrepareTransient.
Do not call Finalize on a prepared statement that you intend to prepare again in the future.
GetBytes reads a query result for colName into buf. It reports the number of bytes read.
GetFloat returns a query result value for colName as a float64.
GetInt64 returns a query result value for colName as an int64.
GetLen returns the number of bytes in a query result for colName.
GetReader creates a byte reader for colName.
The reader directly references C-managed memory that stops being valid as soon as the statement row resets.
GetText returns a query result value for colName as a string.
Reset resets a prepared statement so it can be executed again.
Note that any parameter values bound to the statement are retained. To clear bound values, call ClearBindings.
SetBool binds a value (as a 0 or 1) to a parameter using a column name.
SetBytes binds bytes to a parameter using a column name. An invalid parameter name will cause the call to Step to return an error.
SetFloat binds a float64 to a parameter using a column name. An invalid parameter name will cause the call to Step to return an error.
SetInt64 binds an int64 to a parameter using a column name.
SetNull binds a null to a parameter using a column name. An invalid parameter name will cause the call to Step to return an error.
SetText binds text to a parameter using a column name. An invalid parameter name will cause the call to Step to return an error.
SetZeroBlob binds a zero blob of length len to a parameter using a column name. An invalid parameter name will cause the call to Step to return an error.
Step moves through the statement cursor using sqlite3_step.
If a row of data is available, rowReturned is reported as true. If the statement has reached the end of the available data then rowReturned is false. Thus the status codes SQLITE_ROW and SQLITE_DONE are reported by the rowReturned bool, and all other non-OK status codes are reported as an error.
If an error value is returned, then the statement has been reset.
As the sqlite package enables shared cache mode by default and multiple writers are common in multi-threaded programs, this Step method uses sqlite3_unlock_notify to handle any SQLITE_LOCKED errors.
Without the shared cache, SQLite will block for several seconds while trying to acquire the write lock. With the shared cache, it returns SQLITE_LOCKED immediately if the write lock is held by another connection in this process. Dealing with this correctly makes for an unpleasant programming experience, so this package does it automatically by blocking Step until the write lock is relinquished.
This means Step can block for a very long time. Use SetInterrupt to control how long Step will block.
For far more details, see:
type Tracer interface { NewTask(name string) TracerTask Push(name string) Pop() }
func (v Value) Type() ColumnType
Package sqlite imports 7 packages (graph) and is imported by 50 packages. Updated 2020-03-11. Refresh now. Tools for package owners.
|
https://godoc.org/crawshaw.io/sqlite
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
to realize such a setup.
Installing and setting up the L3 agent
OpenStack offers different models to operate a virtual router. The model that we discuss in this post is sometimes called a “legacy router”, and is realized by a router running on one of the controller hosts, which implies that the routing functionality is no longer available when this host goes down. In addition, Neutron offers more advanced models like a distributed virtual router (DVR), but this is beyond the scope of todays post.
To make the routing functionality available, we have to install respectively enable two additional pieces of software:
- The Routing API is provided by an extension which needs to be loaded by the Neutron server upon startup. To achieve this, this extension needs to be added to the service_plugins list in the Neutron configuration file neutron.conf
- The routing functionality itself is provided by an agent, the L3 agent, which needs to be installed on the controller node
In addition to installing these two components, there are a few changes to the configuration we need to make. First, of course, the L3 agent comes with its own configuration file that we need to adapt. Specifically, there are two changes that we make for this lab. First, we set the interface driver to openvswitch, and second, we ask the L3 agent not to provide a route to the metadata proxy by setting enable_metadata_proxy to false, as we use the mechanism provided by the DHCP agent.
In addition, we change the configuration of the Horizon dashboard to make the L3 functionality available in the GUI as well (this is done by setting the flag horizon_enable_router in the configuration to “True”).
All this can again be done in our lab environment by running the scripts for Lab 8. In addition, we run the demo playbook which will set up a VLAN network with VLAN ID 100 and two instances attached to it (demo-instance-1 and demo-instance 3) and a flat, external network with one instance (demo-instance-3).
git clone cd Lab8 vagrant up ansible-playbook -i hosts.ini site.yaml ansible-playbook -i hosts.ini demo.yaml
Setting up our first router
Before setting up our first router, let us inspect the network topology that we have created. The Horizon dashboard has a nice graphical representation of the network topology.
We see that we have two virtual Ethernet networks, carrying one IP network each. The network on the left – marked as external – is the flat network, with an IP subnet with CIDR 172.18.0.0/24. The network on the right is the VLAN network, with IP range 172.16.0.0./24.
Now let us create and set up the router. This happens in several steps. First, we create the router itself. This is done using the credentials of the demo user, so that the router will be part of the demo project.
vagrant ssh controller source demo-openrc openstack router create demo-router
At this point, the router exists as an object, and there is an entry for it in the Neutron database (table routers). However, the router is not yet connected to any network. Let us do this next.
It is important to understand that, similar to a physical router, a Neutron virtual router has two interfaces connected to two different networks, and, again as in the physical network world, the setup is not symmetric. Instead, there is one network which is considered external and one internal network. By default, the router will allow for traffic from the internal network to the external network, but will not allow any incoming connections from the external network into the internal networks, very similar to the cable modem that you might have at home to connect to this WordPress site.
Correspondingly, the way how the external network and the internal network are attached to the router differ. Let us start with the external network. The connection to the external network is called the external gateway of the router and can be assigned using the set command on the router.
openstack router set \ --external-gateway=flat-network\ demo-router
When you run this command and inspect the database once more, you will see that the column gw_port_id has been populated. In addition, listing the ports will demonstrate that OpenStack has created a port which is attached to the router (this port is visible in the database but not via the CLI as the demo user, as the port is not owned by this user) and has received an IP address on the external network.
To complete the setup of the router, we now have to connect the router to an internal network. Note that this needs to be done by the administrator, so we first have to source the credentials of the admin user.
source admin-openrc openstack router add subnet demo-router vlan-subnet
When we now log into the Horizon GUI as the demo user and ask Horizon to display the network topology, we get the following result.
We can reach the flat network (and the lab host) from the internal network, but not the other way around. You can verify this by logging into demo-instance-1 via the Horizon VNC console and trying to ping demo-instance-3.
Now let us try to understand how the router actually works. Of course, somewhere behind the scenes, Linux routing mechanisms and iptables are used. One could try to implement a router by manipulating the network stack on the controller node, but this would be difficult as the configuration for different routers might conflict. To avoid this, Neutron creates a dedicated network namespace for each router on the node on which the L3 agent is running.
The name of this namespace is qrouter-, followed by the ID of the virtual router (here “q” stands for “Quantum” which was the name of what is now known as Neutron some years ago). To analyze the network stack within this namespace, let us retrieve its ID and spawn a shell inside the namespace.
netns=$(ip netns list \ | grep "qrouter" \ | awk '{print $1'}) sudo ip netns exec $netns /bin/bash
Running
ifconfig -a and
route -n shows that, as expected, the router has two virtual interfaces (both created by OVS). One interface starting with “qg” is the external gateway, the second one starting with “qr” is connected to the internal network. There are two routes defined, corresponding to the two subnets to which the respective interfaces are assigned.
Let us now inspect the iptables configuration. Running
iptables -S -t nat reveals that Neutron has added an SNAT (source network address translation) rule that applies to traffic coming from the internal interface. This rule will replace the source IP address of the outgoing traffic by the IP address of the router on the external network.
To understand how the router is attached to the virtual network infrastructure, leave the namespace again and display the bridge configuration using
sudo ovs-vsctl show. This will show you that the two router interfaces are both attached to the integration bridge.
Let us now see how traffic from a VM on the internal network flows through the stack. Suppose an application inside the VM tries to reach the external network. As inside the VM, the default route goes to 172.18.0.1, the routing mechanism inside the VM targets the packet towards the qr-interface of the router. The packet leaves the VM through the tap interface (1). The packet enters the bridge via the access port and receives a local VLAN tag (2), then travels across the bridge to the port to which the qr-interface is attached. This port is an access port with the same local VLAN tag as the virtual machine, so it leaves the bridge as untagged traffic and enters the router (3).
Within the router, SNAT takes place (4) and the packet is forwarded to the qg-interface. This interface is attached to the integration bridge as access port with local VLAN ID 2. The packet then travels to the physical bridge (5), where the VLAN tag is stripped off and the packet hits the physical networks as part of the native network corresponding to the flat network.
As the IP source address is the address of the router on the external network, the response will be directed towards the qg-interface. It will enter the integration bridge coming from the physical bridge as untagged traffic, receive local VLAN ID 2 and end up at the qg-access port. The packet then flows back through the router, is leaving it again at the qr interface, appears with local VLAN tag 1 on the integration bridge and eventually reaches the VM.
There is one more detail that deserves being mentioned. When you inspect the iptables rules in the mangle table of the router namespace, you will see some rules that add marks to incoming packets, which are later evaluated in the nat and filter tables. These marks are used to implement a feature called address scopes. Essentially, address scopes are reflecting routing domains in OpenStack, the idea being that two networks that belong to the same address scope are supposed to have compatible, non-overlapping IP address ranges so that no NATing is needed when crossing the boundary between these two networks, while a direct connection between two different address scopes should not be possible.
Floating IPs
So far, we have set up a router which performs a classical SNAT to allow traffic from the internal network to appear on the external network as if it came from the router. To be able to establish a connection from the external network into the internal network, however, we need more.
In a physical infrastructure, you would use DNAT (destination netting) to achieve this. In OpenStack, this is realized via a floating IP. This is an IP address on the external network for which DNAT will be performed to pass traffic targeted to this IP address to a VM on the internal network.
To see how this works, let us first create a floating IP, store the ID of the floating IP that we create in a variable and display the details of the floating IP.
source demo-openrc out=$(openstack floating ip create \ -f shell \ --subnet flat-subnet\ flat-network) floatingIP=$(eval $out ; echo $id) openstack floating ip show $floatingIP
When you display the details of the floating IP, you will see that Neutron has assigned an IP from the external network (the flat network), more precisely from the network and subnet that we have specified during creation.
This floating IP is still fully “floating”, i.e. not yet attached to any actual instance. Let us now retrieve the port of the server demo-instance-1 and attach the floating IP to this port.
port=$(openstack port list \ --server demo-instance-1 \ -f value \ | awk {'print $1}') openstack floating ip set --port $port $floatingIP
When we now display the floating IP again, we see that floating IP is now associated with the fixed IP address of the instance demo-instance-1.
Now leave the controller node again. Back on the lab host, you should now be able to ping the floating IP (using the IP on the external network, i.e. from the 172.16.0.0/24 network) and to use it to SSH into the instance.
Let us now try to understand how the configuration of the router has changed. For that purpose, enter the namespace again as above and run
ip addr. This will show you that now, the external gateway interface (the qg interface) has now two IP addresses on the external network – the IP address of the router and the floating IP. Thus, this interface will respond to ARP requests for the floating IP with its MAC address. When we now inspect the NAT tables again, we see that there are two new rules. First, there is an additional source NAT rule which replaces the source IP address by the floating IP for traffic coming from the VM. Second, there is now – as expected – a destination NAT rule. This rule applies to traffic directed to the floating IP and replaces the target address with the VM IP address, i.e. with the corresponding fixed IP on the internal network.
We can now understand how a ping from the lab host to the floating IP flows through the stack. On the lab host, the packet is routed to the vboxnet1 interface and shows up at enp0s9 on the controller node. From there, it travels through the physical bridge up to the integration bridge and into the router. There, the DNAT processing takes place, and the target IP address is replaced by that of the VM. The packet leaves the router at the internal qr-interface, travels across the integration bridge and eventually reaches the VM.
Direct access to the internal network
We have seen that in order to connect to our VMs using SSH, we first need to build a router to establish connectivity and assign a floating IP address. Things can go wrong, and if that operation fails for whatever reason or the machines are still not reachable, you might want to find a different way to get access to the instances. Of course there is the noVNC client built into Horizon, but it is more convenient to get a direct SSH connection without relying on the router. Here is an approach how this can be done.
Recall that on the physical bridge on the controller node, the internal network has the VLAN segmentation ID 100. Thus to access the VM (or any other port on the internal network), we need to tag our traffic with the VLAN tag 100 and direct it towards the bridge.
The easiest way to do this is to add another access port to the physical bridge, to assign an IP address to it which is part of the subnet on the internal network and to establish a route to the internal network from this device.
vagrant ssh controller sudo ovs-vsctl add-port br-phys vlan100 tag=100 \ -- set interface vlan100 type=internal sudo ip addr add 172.18.0.100/24 dev vlan100 sudo ip link set vlan100 up
Now you should to be able to ping any instance on the internal VLAN network and SSH into it as usual from the controller node.
Why does this work? The upshot of our discussion above is that the interaction of local VLAN tagging, global VLAN tagging and the integration bridge flow rules effectively attach all virtual machines in our internal network via access ports with tagging 100 to the physical network infrastructure, so that they all communicate via VLAN 100. What we have done is to simply create another network device called vlan100 which is also connected to this VLAN. Therefore, it is effectively on one Ethernet segment with our first two demo instances. We can therefore assign an IP address to it and then use it to reach these instances. Essentially, this adds an interface to the controller which is connected to the virtual VLAN network so that we can reach each port on this network from the controller node (be it on the controller node or a compute node).
There is much more we could say about routers in OpenStack, but we leave that topic for the time being and move on to the next post, in which we will discuss overlay networks using VXLAN.
|
https://leftasexercise.com/2020/03/16/building-virtual-routers-with-openstack/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Send Props to Children in React
In React, you’re always making components. Sometimes components are standalone. Other times, you’ll have components that can nest children components. Sometimes you’ll want to send properties to the children components from the parent as often as a doting parent wants to send packages to a child missionary. It’s possible, it’s simple, and it’s not documented super well. Here’s one method.
Updated: 29 Apr 2016 for React 15.x.
Children Components
When parent components are rendered, they have access to a special property,
this.props.children. It’s like an Angular
ng-transclude or an Ember
yield. Children components are generally rendered something like this:
function Parent(props) { return ( <div id="iAmParentHearMeRoar"> {props.children} </div> ) }
The generic example above shows simply how to render children,
props untouched, within a parent component. Sometimes, however, a parent wants to bequeath its children with extra properties. How will we make that happen?
Setting Child Props
props are meant to be immutable. But, in order for us to send
props values to our children, we’re going to essentially loop through our children and set props on them as a part of our parent
render function.
Deep breath. It’s ok. The children that we’ll loop through aren’t mounted component instances. They are, instead, descriptors. These descriptors have all the
props attributes that we’ve declared should be put on the components, but they haven’t been rendered yet. Because of this, we can change props, and it’s ok. We’re not mutating what has rendered. The data still hasn’t flowed to the children. We’re still effectively still riffing on the logic of what the children components should really be when they’re eventually mounted.
Looping on Children Components
this.props.children is a funny property. It’s special in more ways than one. The things that might trip us up in looping is that even though it sounds like a plural thing, meaning an array, sometimes it’s a singular object. To help avoid potential problems, React gives us a helper,
React.Children. It has a few functions for array iteration, such as
map and
forEach that help account for the potential forms of
this.props.children.
Functional Modifications
Immutable data is a big part of functional programming. This means that when we ‘mutate’ the props, we want to mutate on a clone of the child component without affecting the original. There’s an input, there’s an output, and the input is untouched. Once we have our cloned children components as we want them, we’ll render those instead. React offers another great helper for cloning components and setting properties in a single function,
React.cloneElement.
Checking Child Type
It’s a generally-useful thing to be able to tell what the React class type of a component object is. It’s an applicable skill in terms of looping through child components because we might not want to modify the properties of all types of children. Each React component class has a
type attribute accessible via
MyComponent.type. This attribute is also available on component descriptors.
A Child CheckOption Example
To bring this all together and illustrate the concepts, let’s say we created a
RadioGroup component that could take one or many
RadioOption child components. In raw html, which is what our component will eventually render,
inputs with type
checkbox need to all have the same
name attribute value to work well as toggles within the group. But this is something that React can help us not have to duplicate. We’ll instead put a
name property on the parent
RadioGroup and have it transfer it as a property on all its children. The implementation might look like this:
import React from 'react' function RadioOption(props) { return ( <label> <input type="radio" value={props.value} name={props.name} /> {props.label} </label> ) } function renderChildren(props) { return React.Children.map(props.children, child => { if (child.type === RadioOption) return React.cloneElement(child, { name: props.name }) else return child }) } function RadioGroup(props) { return ( <div class="radio-group"> {renderChildren(props)} </div> ) } function WhereImUsingRadioGroups() { return ( <RadioGroup name="blizzard-games"> <RadioOption label="Warcraft 2" value="wc2" /> <RadioOption label="Warcraft 3" value="wc3" /> <RadioOption label="Starcraft 1" value="sc1" /> <RadioOption label="Starcraft 2" value="sc2" /> </RadioGroup> ) }
In this example, where the parent
RadioGroup has the
name prop, it will be given to each of the children so their
name prop will match and the radio group will work as expected. Thus, the hearts of the children are turned toward their fathers.
Is there a better way to do this? How have you been sending
props to children?
|
https://jaketrent.com/post/send-props-to-children-react/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Settings ReSharper | Options | Code Inspection | Settings In this page of ReSharper options, you can specify your preferences for code inspection. ItemDescription General Enable code analysis Select this check box to enable design-time code inspection. Color identifiers This option lets you enable or disable ReSharper syntax highlighting scheme. If it is selected, language identifiers are highlighted with colors as defined in Visual Studio options: Tools | Options | Environment | Fonts and Colors. The list of syntax identifiers provided by ReSharper is available in the Display items list, each name starting with ReSharper prefix. Highlight color usages Enables highlighting of color definitions in code. For more information, see Color Assistance. Highlight special characters in string literals Enables highlighting of correct and incorrect escape sequences in non-verbatim strings. For example: For more information, see Regular Expressions Assistance. Highlight context exits This option, enabled by default, tells ReSharper to highlight all places where the control flow can exit the current context. For example, for a method, it will highlight the return type of the method, all return, throw keywords, etc. when you set the caret to one of these identifiers For a loop, it will additionally highlight the loop keyword as well as all the break statements inside this loop. Note that if a method is not entirely visible in the editor, you can invoke the Navigate To Function Exits command on the method name to trigger another kind of highlighting, which will not disappear when you your caret leaves the method name. Enable solution-wide analysis Enables solution-wide analysis (including solution-wide code inspections ), which is disabled by default. Note that in large solutions, solution-wide analysis may result in some performance degradation. However, there are several ways to improve the performance of solution-wide analysis. For more information, see Configuring Solution-Wide Analysis. Include warnings Enables warnings in. Show the 'Import namespace' action using popup If this option is selected, a pop-up that suggests importing namespaces shows up if there is one or more non-imported types are detected in the file: If it is not selected, the corresponding action appears in the action list. See Importing Missing Namespaces for details. Value analysis mode Using the value analysis, ReSharper finds out which entities can hold null value and highlights possible errors with null dereference. You can choose one of the following modes. Optimistic: when explicitly marked with [CanBeNull] attribute, or checked for null - ReSharper assumes that only entities explicitly marked with CanBeNull or ItemCanBeNull attribute or explicitly checked for being null, can be null. Pessimistic: when entity doesn't have explicit [NotNull] attribute - ReSharper assumes that all nullable entities without an explicit NotNull or ItemNotNull attributes can be null. Visual Studio Integration Do not show Visual Studio bulb... This option is not available in Visual Studio versions older than 2015. If it is selected, Visual Studio's light bulbs are not shown separately. If necessary, Visual Studio's quick actions are integrated into ReSharper's action list. . Suppress Visual Studio squiggles... This option is not available in Visual Studio versions older than 2015. If it is selected, Visual Studio's error highlighting in the editor is not displayed, only ReSharper's highlighting appears. Elements to skip In this section, you can exclude files, file masks, and folders from code inspection : Last modified: 12 October 2017 Code InspectionGenerated CodeSee AlsoProcedures:Syntax HighlightingSolution-Wide AnalysisImporting Missing NamespacesValue and Nullability AnalysisUsing Annotations to Refine Code InspectionConfiguring Solution-Wide AnalysisReference:Code Inspection Index
|
https://www.jetbrains.com/help/resharper/Reference__Options__Code_Inspection__Settings.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
This document outlines the steps to create, define, and use an extension for some of the APIs supported by Khronos. It is currently focused on OpenGL, OpenGL ES, GLX, and EGL. Some discussion of OpenVG and WGL is also included.
When initially creating an extension, take the following steps:
Specifications for extensions that have already been developed can be obtained from the Extension Registry maintained by Khronos. Since we are just getting started with most of the Khronos APIs, the registry currently contains only OpenGL, GLX, and WGL extension specifications.
It's possible that additional extensions may have been submitted to the registry but not yet updated on the website, or that another licensee may be working on a similar extension but not yet have released the specification. So it's worth asking on the appropriate Working Group mailing list if anyone has defined related functionality already.
OpenGL's history has made clear that ISVs do not want to deal with vendor-specific extensions if they can possibly avoid it. So if the functionality being exposed is going to be available on multiple platforms - as most will - it's a good idea to agree on a single extension with other vendors providing that functionality. This makes it easier for ISVs to justify using extensions.
If the functionality is well-understood, it may be appropriate to define a Khronos-approved extension. This is the most "blessed" category of extension; it goes through the entire standards process, and is approved by the group, but remains optional functionality. Many core features have been promoted directly from existing Khronos-approved extensions.
If Khronos as a whole isn't ready to deal with the extension, but other vendors are, then it should be defined as a multivendor extension. The interested parties can develop the specification entirely among themselves, outside the standards process; or they may be able to use Khronos Working Groups as a forum to develop the specification.
In some cases, vendors may share a common core of functionality, with vendor-specific additional features. Here, it may make sense to agree on a multivendor extension to access the core, with additional vendor-specific extensions layered on the core exposing unique features.
Finally, some extensions will probably have to remain proprietary.
Start with the template for writing extension specifications. There are different templates for different APIs, but general comments apply:
One complete, shipping example to refer to is the Fog Coordinate OpenGL extension specification.
API entry points and enumerants in an extension must be named according to the syntax rules specific to that API. In particular, follow the sections "Extension name rules" and "Shared extensions".
All extensions must be named and the name included in the extension specification. The extension name is of the form "api_category_name" where
For example, the extension name "GL_EXT_framebuffer_object" is used for a multivendor OpenGL extension adding support for framebuffer objects.
Choose names that are
The goal is for names to be clear, but not at the cost of confusion or ambiguity.
Each Khronos API provides a way of describing the supported extensions at compile- and run-time. This is done by a combination of preprocessor tokens in header files, and queryable extension strings.
OpenGL and OpenGL ES #define a preprocessor token corresponding to the extension name in <GL/gl.h> (or an include file that gl.h includes, such as the glext.h header provided in the registry). When this token is defined, it indicates that the function prototypes and enumerant definitions required to use the extension are available at compile time.
If an OpenGL or OpenGL ES extension is supported at runtime, the extension name must also be included in the string returned by glGetString(GL_EXTENSIONS).
GLX #defines a preprocessor token corresponding to the extension name in <GL/glx.h> (or an include file that glx.h includes, such as the glxext.h header provided in the registry). When this token is defined, it indicates that the function prototypes and enumerant definitions required to use the extension are available at compile time.
If a GLX extension is supported at runtime, the extension name must also be included in the strings returned by glXQueryExtensionsString, glXGetClientString, and/or glXQueryServerString (see below for a description of the different routines).
WGL #defines a preprocessor token corresponding to the extension name in the wglext.h header provided in the registry (the wgl.h supplied with Microsoft Windows does not #include wglext.h, or define any extensions itself). When this token is defined, it indicates that the function prototypes and enumerant definitions required to use the extension are available at compile time.
If a WGL extension is supported at runtime, the extension name must also be included in the string returned by wglGetExtensionsStringEXT.
OpenVG extension conventions are To Be Determined.
Note that extensions can have both OpenGL components and windowing system components. For example, the ARB multisampling extension modifies both GLX and OpenGL. In this case there will be two tokens associated with the extension (e.g., GL_ARB_multisample and GLX_ARB_multisample) and the extension will be advertised by both OpenGL and GLX.
Khronos keeps a registry of extension specifications, enumerated type values, GLX codes (vendor private opcodes, vendor private with reply opcodes, new visual attribute type values, GLX error codes and GLX event codes), OpenGL rendering codes for GLX, and OpenGL rendering codes for GLS and extension numbers. Vendors shipping extensions using any of these values must obtain them from Khronos.
If an extension defines new OpenGL enumerant names, values for those names must be requested in one or more blocks of 16 values. If an extension defines new OpenGL rendering commands then you need to register GLS rendering codes for it. If you want the extensions to work with the X windowing system (i.e., with GLX), then you must request GLX opcodes and define GLX protocol for it.
There are detailed enumerant allocation policies for OpenGL, GLX, and WGL enumerants.
All new extensions must have a number associated with them for documentation purposes. If an extension depends on another extension, the other extension must have a lower number. (Note that when an extension is deprecated, the number associated with it is not reassigned.) This number will also be assigned by Khronos when you register the extension.
Include all new enumerated values, GLX codes, and the extension number in the specification.
Once you have completed the extension, please make it available to other Khronos members and application developers, by submitting the extension specification to the Khronos Registrar for inclusion in the public registry.
Whenever possible, extensions should use existing errors instead of defining new error returns. For GLX, if a new protocol error is introduced, then an error number must be obtained from and registered with Khronos.
Vendors may ship a single OpenGL library, containing extensions, for a variety of platforms. It is possible that some of the extension routines defined in the library may not be supported on some of the platforms. If this is the case and an application calls a routine that is not supported by the current OpenGL renderer then a GL_INVALID_OPERATION error should be returned.
OpenGL extensions must be advertised in the extension string returned by glGetString. Note that in a client-server environment, this call returns the set of extensions that can be supported on the connection. GLX client libraries must send a glXClientInfo request to the server at start up time (if the client libarary is 1.1 or later) indicating the version of the client library and the OpenGL extensions that it supports. Then, when glGetString is called, the client issues a GetString request. The server intersects the set of extensions that the client supports with the set of extensions that it supports (if a glXClientInfo request was never received then the server assumes that the client supports no OpenGL extensions) and returns the result to the client. The client library then appends any client-side only extensions to the list and returns the result.
Extension names for all known OpenGL extensions are #defined in the glext.h header included in the registry.
EGL extensions must be advertised in the extension string returned by eglQueryString(EGL_EXTENSIONS). Extension names for all known EGL extensions are #defined in the eglext.h header included in the registry.
GLX client-side extensions must be advertised in the extension string returned by glXGetClientString; server-side extensions must be advertised in the extension string returned by glXQueryServerString.
glXQueryExtensionsString returns the list of extensions that can be supported on the connection. The client then issues a glXQueryServerString request, intersects the returned string with the set of extensions it can support and then appends any client-side only extensions to the list.
Extension names for all known GLX extensions are #defined in the glxext.h header included in the registry.
WGL initially had no mechanism for returning its own extensions string. For this reason, WGL extension names were initially advertised in the GL extensions string returned by glGetString. With the creation of a more formal WGL extension mechanism, all implementations offering WGL extensions should export the WGL_EXT_extensions_string extension, and should advertise WGL extensions in the extensions string returned by the wglGetExtensionsStringEXT interface defined by WGL_EXT_extensions_string, as well as via glGetString, for compatibility with older programs.
Extension names for all known WGL extensions are #defined in the wglext.h header included in the registry.
Programmers that wish to use a particular OpenGL extension should check both compile-time defines (to ensure that the extension is supported by the library they are compiling against) and the extension string returned by glGetString (to ensure that the renderer supports the extension). For Windows, extensions usually are not defined at link time, and function pointers to extension APIs should be obtained by calling wglGetProcAddress.
For example, the following code could be used to check whether the renderer supports an OpenGL extension called GL_EXT_new_extension. This code would need to be executed after the context had been made current:
static GLboolean CheckExtension(char *extName, const char *extString) { /* ** Search for extName in the extensions string. Use of strstr() ** is not sufficient because extension names can be prefixes of ** other extension names. Could use strtok() but the constant ** string returned by glGetString can be in read-only memory. */ char *p = (char *)extString; char *end; int extNameLen; extNameLen = strlen(extName); end = p + strlen(p); while (p < end) { int n = strcspn(p, " "); if ((extNameLen == n) && (strncmp(extName, p, n) == 0)) { return GL_TRUE; } p += (n + 1); } return GL_FALSE; } const GLubyte *ext_string; int new_ext_supported = GL_FALSE; if (CheckExtension("GL_EXT_new_extension", glGetString(GL_EXTENSIONS))) new_ext_supported = GL_TRUE;
If the renderer supports the extension, then it is safe to use it at runtime. (Note that in a client-server environment, glGetString will only return the set of extensions that can be supported by the client and server.) However, compile time checks must be made to ensure that the library that you are linked against supports the extension. For example:
#ifdef GL_EXT_new_extension if (new_ext_supported) glNewExtensionEXT() #endif
For a Windows OpenGL implementation, extensions are usually dynamically loaded from the device driver, rather than statically linked. Function pointers to extension APIs are obtained from wglGetProcAddress and use to invoke the extension. For example:
typedef void (WINAPI *PFNGLNEWEXTENSIONEXTPROC)(); PFNGLNEWEXTENSIONEXTPROC *glNewExtensionEXT = NULL; /* Do this once, after context creation */ #ifdef GL_EXT_new_extension if (new_ext_supported) glNewExtensionEXT = (PFNGLNEWEXTENSIONEXTPROC) wglGetProcAddress("glNewExtensionEXT"); #endif /* Do this when calling the extension */ #ifdef GL_EXT_new_extension if (new_ext_supported) (*glNewExtensionEXT)(); #endif
Before using an EGL extension, check for the extension name in both the compile-time #defines and the extension string returned by eglGetString(EGL_EXTENSIONS). For example, this code could be used to check whether an extension called EGL_OES_new_extension can be used.
EGLDisplay dpy; // Initialized elsewhere int new_ext_supported = FALSE; if (CheckExtension("EGL_OES_new_extension", eglGetString(display, EGL_EXTENSIONS)); new_ext_supported = TRUE; #endif
If the extension is supported, then it is safe to use it at runtime. However, compile time checks must be made to ensure that you can call the extension. For example:
#ifdef EGL_OES_new_extension if (new_ext_supported) eglNewExtensionEXT(...); #endif
Before using a GLX extension, programmers should check the compile time defines and the extension string returned by glXQueryExtensionsString.
The following code could be used to check whether an extension called GLX_EXT_new_extension can be used on the connection. This code would be executed after the connection had been opened and the existence of the GLX extension had been established.
Display *dpy; int new_ext_supported = GL_FALSE; int major, minor, screen; if (!glXQueryVersion(dpy, &major, &minor)) exit(1); screen = DefaultScreen(dpy); #ifdef GLX_VERSION_1_1 if (minor > 0 || major > 1) if (CheckExtension("GLX_EXT_new_extension", glXQueryExtensionsString(dpy, screen))) new_ext_supported = GL_TRUE; #endif
If the extension is supported on the connection, then it is safe to use it at runtime. However, compile time checks must be made to ensure that the library that you are linked against supports the extension. For example:
#ifdef GLX_EXT_new_extension if (new_ext_supported) glXNewExtensionEXT(...); #endif
Before using a WGL extension, check for its presence in the WGL extensions string. Note that the WGL extension string query is itself an extension; if not supported, WGL extensions are also advertised in the base GL extensions string.
Code snippet should go here
Last modified August 13, 2006 by Jon Leech
|
https://www.khronos.org/registry/OpenGL/docs/rules.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
[ Return to Articles | Show Comments | Submit Comment ]
Created at 14:57 Sep 19, 2017 by greg.ercolano
Last modified at 14:59 Sep 19, 2017
If you have an FLTK Windows GUI application (built with /subsystem:windows), you can create a DOS style window and redirect stdout/stderr to it at runtime.
This shows a pretty simple way to do this using AllocConsole() and AttachConsole() which can give a Windows application (that backrgrounds itself) a way to display stdout/stderr.
#define _WIN32_WINNT 0x0501 // needed for AttachConsole
#include <windows.h> // AllocConsole()
#include <Wincon.h> // AttachConsole()
#include <stdio.h>
#include <stdlib.h>
#include <FL/Fl.H>
#include <FL/Fl_Window.H>
int main()
{
// Open a DOS style console, redirect stdout to it
AllocConsole() ;
AttachConsole(GetCurrentProcessId()) ;
freopen("CON", "w", stdout) ;
freopen("CON", "w", stderr) ;
printf("Hello world on stdout!\n");
printf("Hello world on stderr!\n");
Fl_Window win(400,400);
win.show();
return Fl::run();
}
..and yes, you /could/ also compile your app with /SUBSYSTEM:CONSOLE to get similar results: this lets you see stdout/stderr in the DOS window you invoked the program from, or if you clicked on the app to run it, that first opens a separate DOS window then runs your app inside it.
However, in THIS example shown here, it shows a way to /conditionally/ open a DOS style window for redirecting stdout/err in a /SUBSYSTEM:WINDOWS app, where the application runs in the background when invoked from a DOS terminal, which can be useful too. For instance, you could make the DOS style window only appear when needed, such as to display some debug info, then you can later make it go away with FreeConsole().
[ Reply ]
|
http://www.fltk.org/articles.php?L1549
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
QuantLib_FixedRateBondForward man page
FixedRateBondForward — Forward contract on a fixed-rate bond
Synopsis
#include <ql/instruments/fixedratebondforward.hpp>
Inherits Forward.
Public Member Functions
Constructors >())
Calculations
Real forwardPrice () const
(dirty) forward bond price
Real cleanForwardPrice () const
(dirty) forward bond price minus accrued on bond at delivery
Real spotIncome (const Handle< YieldTermStructure > &incomeDiscountCurve) const
NPV of bond coupons discounted using incomeDiscountCurve.
Real spotValue () const
NPV of underlying bond.
Protected Member Functions
void performCalculations () const
Protected Attributes
boost::shared_ptr< FixedRateBond > fixedCouponBond_
Additional Inherited Members
Detailed Description
Forward contract on a fixed-rate bond
- 1.
valueDate refers to the settlement date of the bond forward contract. maturityDate is the delivery (or repurchase) date for the underlying bond (not the bond's maturity date).
- 2.
Relevant formulas used in the calculations ( $P$ refers to a price):
a. $ P_{CleanFwd}(t) = P_{DirtyFwd}(t) - AI(t=deliveryDate) $ where $ AI $ refers to the accrued interest on the underlying bond.
b. $ P_{DirtyFwd}(t) = ac{P_{DirtySpot}(t) - SpotIncome(t)} {discountCurve->discount(t=deliveryDate)} $
c. $ SpotIncome(t) = sum_i left( CF_i imes incomeDiscountCurve->discount(t_i) right) $ where $ CF_i $ represents the ith bond cash flow (coupon payment) associated with the underlying bond falling between the settlementDate and the deliveryDate. (Note the two different discount curves used in b. and c.)
Example: valuation of a repo on a fixed-rate bond
- Warning
This class still needs to be rigorously tested
Examples: Repo.cpp.
Constructor & Destructor Documentation
Member Function Documentation
Real spotIncome (const Handle< YieldTermStructure > & incomeDiscountCurve) const [virtual]
NPV of bond coupons discounted using incomeDiscountCurve. Here only coupons between max(evaluation date,settlement date) and maturity date of bond forward contract are considered income.
Implements Forward.
Examples: Repo.cpp..
Author
Generated automatically by Doxygen for QuantLib from the source code.
Referenced By
The man pages cleanForwardPrice(3), fixedCouponBond_(3), FixedRateBondForward(3), spotIncome(3) and spotValue(3) are aliases of QuantLib_FixedRateBondForward(3).
|
https://www.mankier.com/3/QuantLib_FixedRateBondForward
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to solve this error from raise Warning: NameError: global name '_' is not defined
Hi, when i click on a button to check the FTP connection, I've this error :
File "/usr/local/lib/python2.7/dist-packages/odoo-9.0c-py2.7.egg/openerp/addons/BHC_Copaco/copaco_config.py", line 78, in check_ftp
raise Warning(_('Username/password FTP connection was not successfully!'))
NameError: global name '_' is not defined
The problem comes from this line :
raise Warning(_('Username/password FTP connection was not successfully!'))
But it's strange because i use the same line in an other module and it worked.
Thanks in advance.
Add below line in import section,
Odoo 8
from openerp.tools.translate import _
Odoo 9
from openerp import api, fields, models, _
Ah no, not name error: now I've this : File "/usr/local/lib/python2.7/dist-packages/odoo-9.0c-py2.7.egg/openerp/addons/BHC_Copaco/copaco_config.py", line 79, in check_ftp raise Warning(_('Username/password FTP connection was not successfully!')) Warning: Username/password FTP connection was not successfully!
I think, it is a warning from your code. That have stopped your process and gives the error in terminal.
No need to worry about this
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
|
https://www.odoo.com/forum/help-1/question/how-to-solve-this-error-from-raise-warning-nameerror-global-name-is-not-defined-98884
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Blokkal::FileEngine Class ReferenceProvides access to icon data of various objects. More...
#include <blokkalfileengine.h>
Inheritance diagram for Blokkal::FileEngine:
Detailed DescriptionProvides access to icon data of various objects.
This class provides access to icon data for tool tips and similar uses. Resources are named in the following fashion:
"blokkal:///accounts/[accountId]/icon" "blokkal:///accounts/[accountId]/stateicon" "blokkal:///accounts/[accountId]/blogs/[blogId]/icon" "blokkal:///accounts/[accountId]/blogs/[blogId]/stateicon" "blokkal:///protocols/[protocolname]/icon" "blokkal:///smallicons/[iconname]" *
Definition at line 58 of file blokkalfileengine.h.
The documentation for this class was generated from the following files:
|
http://blokkal.sourceforge.net/docs/0.1.0/classBlokkal_1_1FileEngine.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
In this section you will read about how to convert a string to date. Here you will read how the specified string into date format can be convert into java.util.Date. Java string to date convert explains the conversion of Java string written into the specified format to a Java date object. Java provides two types of Date class from different packages. These are java.util.Date and java.sql.Date. Both of these classes are used for creating Date object in which java.sql.Date class is used for working with date that is either fetched from or insert into the database and the java.util.Date class is used to date object to manipulate with the Java object.
Followings are the two Date classes using which we can manipulate with these objects in the Java classes at the time of development :
Example
Here I am giving a simple example which will demonstrate you how a date string will be convert into Java date object. In this example we will first specify a date string then we will create an object of SimpleDateFormat that is used to specify a date format. Then I have used the parse() method of SimpleDateFormat that parse the String into Date object and returns a Date. Then I have used the System.out.println() to show the output on console.
StringToDate.java
import java.util.*; import java.text.*; public class StringToDate { public static void main(String args[]) { String string = "21 JUN 2013"; SimpleDateFormat sdf = new SimpleDateFormat("dd MMM yyyy"); try { Date date = sdf.parse(string); System.out.println(date); } catch (ParseException e) { e.printStackTrace(); } } }
Output :
When you will compile and execute the above example String To Date
Post your Comment
|
http://roseindia.net/java/java-conversion/java-string-to-date.shtml
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
first for an object, class is set to true, to direct the ObjectDataSource to not call Dispose on the instance.
<%@ Import namespace="Samples.AspNet.CS" %> <%@ Page language="c#" %> .
|
https://msdn.microsoft.com/en-US/library/system.web.ui.webcontrols.objectdatasourcedisposingeventhandler(v=vs.80).aspx
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Spring and Spring MVC is one of the most popular Java framework and most of new Java projects uses Spring these days. Java programmer often ask questions like which books is good to learn Spring MVC or What is the best book to learn Spring framework etc. Actually, there are many books to learn Spring and Spring MVC but only certain books can be considered good because of there content, examples or the way they explained concept involved in Spring framework. Similar to Top 5 books on Java programming we will some good books on Spring in this article, which not only help beginners to start with Spring but also teaches some best practices. In order to learn a new technology or a new framework, probably best way is to start looking documentation provided and Spring Framework is no short on this. Spring provides great, detailed documentation to use various features of Spring framework but despite of that nothing can replace a good book. Luckily both Spring and Spring MVC got couple of good titles which not only explains concepts like Dependency Injection and Inversion of Control which is core to spring framework but also gives coverage to other important aspect of Spring. Following are some of the good books available on Spring and Spring MVC which can help you to learn Spring.
Top 5 Books on Spring Framework and Spring MVCHere is my list of top 5 books to learn Spring MVC and Spring framework. Let me know if you come across any other great book on Spring, which is worth adding into this list.
Expert Spring MVC and Web Flow
Expert Spring MVC and Web Flow by Seth Ladd, Darren Davison, Steven Devijver, Colin Yates is one of my favorite book on Spring MVC and arguably one of the best book in Spring MVC. It covers both Spring MVC and web flow in depth and explains each concept with simple explanation. I highly recommend this book to any beginner which is learning Spring MVC framework. There chapter on Spring fundamentals is also one of the best way to learn dependency injection and inversion of control in Spring and I myself learned DI and IOC from that chapter. This is the Spring book I recommend to any Java web developer who is familiar with Java web technology or any MVC framework like Struts. Only missing point is that this book only covers Spring MVC and web flow and does not cover whole Spring framework. Also, in my opinion there chapter on Spring Fundamentals is one of the best way to start with Spring framework.
Spring Recipes – A problem solution approach
This is another good book on Spring Framework which I like most. This book is collection of Spring recipes or How to do in Spring Framework. In every Spring recipes you learn some new concept and it also helps to learn Spring fundamental e.g. there recipes help me to learn when to use ApplicationContext and BeanFactory and Constructor vs Setter Injection. Key highlight of this book is, It’s problem solution approach. Since it’s teaching style is different than any conventional book, it’s a good supplement along with Spring documentation. This books also provide excellent coverage of many spring technologies e.g. Spring Security, Spring JDBC, Spring and EJB, JMX, Email and have a chapter on scripting as well. If you like books on problem solution approach than you will enjoy reading Spring Recipes, not the best book on Spring but still a good one and will definitely made to any list of top 10 books on Spring framework.
Professional Java Development with the Spring Framework
Main highlight of this book is that one of it’s author is Rod Johnson, who is also created Spring framework. So you get his view on Spring and How spring should be used used, what are best practices to follow on Spring e.g. When to use Setter Injection and Constructor Injection. This book provide good coverage of Spring framework including Spring core, Spring MVC, Spring ORM support etc. Also examples in this book is easy to understand and it also focus on Unit tests which is good practice. Though I don’t rate this book too high, like if your focus is Spring MVC than Expert Spring MVC and Web flow is the best Spring book to follow. If you are looking an overview on Spring features, than Spring Documentation is best book to read. As I said positive point of this book is knowing Spring from author Rod Johnson himself. Once you have basic knowledge of Spring framework, you can read this book to get authors view.
Pro Spring 3.0
Pro Spring is one of the best book to learn Spring Framework from start. This book is massive and tries to cover most of the Spring concept e.g. Spring fundamentals, JDBC Support, Transaction support, Spring AOP, Spring Web MVC, Spring Testing etc. Good point about this book is that it’s conventional and easy to read, it explains concept, followed with good example, which is good way to learn. What is worrying is sheer size, I haven’t completed this book till date and only refer with some topic. Good point is that this book covers Spring 3.1 which is the latest stable version. As I said this is one of the most comprehensive book on Spring framework and any one who wants to learn Spring framework by following just one book, Pro Spring 3.0 is a good choice.
Spring framework documentation is located on Springsource website, here
is the link for Spring documentation for Spring framework 3.1 in HTML format. Though this is not a book, Spring tutorials and Spring documentation are
another two source of learning Spring framework, which I highly recommend. Main
reason for that is they are free and highly comprehensive and has lot of
examples to support various concept and feature. Also one of the best part
of reference documentation is that they are updated with the latest Spring
release available. Updating books with every new version of Spring is
rather difficult than updating documentation. Spring documentation combine with
any Spring book is best way to learn Spring framework. For learning Spring MVC,
you can combine Spring documentation with earlier spring book, Expert Spring
MVC and Web Flow.
Spring Documentation
Spring in ActionLots of my readers suggested Spring in Action from manning, as one of the best book to learn Spring. Seems like a worth reading book. I have seen it's content briefly and it does cover both Spring and Spring MVC. So if you are looking for common book for complete Spring framework, Spring in Action is another one.
These are some of the best books to learn Spring framework and Spring MVC. Spring documentation is special because of update and new releases of Spring Framework. Given popularity of Spring Framework for new Java development work, every Java developer should make effort to learn Spring framework.
12 comments :
How about Spring in Action?
Hi Javin,
Thanks a lot for sharing useful books information, but I would like to share for quick rap up , please see the following url..
where i can find the books list i am unable to see
Spring in Action is also good.
Can you suggest a spring book which covers concepts in-depth rather than syntax.
i.e. I want to know how namespaces work/ the concept behind namespaces, best practices for authenticating users etc.
I thought Spring in Action will take 1st place!!!
How about "Spring in Action" by Manning?
Hi,
Great article about Spring.
I think one more book "Spring in Action" can be added to this perfect list.
Regards,
Chirag
Thanks for your comments guys, As I can see, lot of you have suggested Spring in Action, another great book to add into this list.
I think Expert Spring MVC and Web Flow is a fantastic book. I learnt a lot about Java, the Spring Framework and development in general.
Best book in Spring is it's documentation. It contains accurate, updated and comprehensive information about everything on Spring framework. Most of the book only cover either few topics or few modules but documentation covers everything.
I didn't find Spring in Action pretty much fascinating since I am a beginner. This book covers more of a theortical approach.
|
http://javarevisited.blogspot.com/2013/03/5-good-books-to-learn-spring-framework-mvc-java-programmer.html?showComment=1380521703581
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Difference between revisions of "Funtoo Filesystem Guide, Part 1"
Latest revision as of 01:13, January 2, 2015
Journaling and ReiserFS
Next in series: Funtoo Filesystem Guide, Part 2
Support Funtoo and help us grow! Donate $15 per month and get a free SSD-based Funtoo Virtual Container.
Looking for people interested in testing and documenting Docker support! Contact Daniel Robbins for more info.
What's in Store
The purpose of this series is to give you a solid, practical introduction to Linux's various new filesystems, including ReiserFS, XFS, JFS, GFS, ext3 and others. I want to equip you with the necessary practical knowledge you need to actually start using these filesystems. My goal is to help you avoid as many potential pitfalls as possible; this means that we're going to take a careful look at filesystem stability, performance issues (both good and bad), any negative application interactions that you should be aware of, the best kernel/patch combinations, and more. Consider this series an "insider's guide" to these next-generation filesystems.
So, that's what's in store. But to begin this series, I'm going to diverge from this plan for just one article and prepare you for the journey ahead. I'll cover two topics very important to the Linux development community -- journaling, and the design vision behind ReiserFS. Journaling is very important because it's a technology that we've been anticipating for a long time, and it's finally here. It's used in ReiserFS, XFS, JFS, ext3 and GFS. It's important to understand exactly what journaling does and why Linux needs it. Even if you have a good grasp of journaling, I hope that my journaling intro will serve as a good model for explaining the technology to others, something that'll be common practice as departments and organizations worldwide begin transitioning to these new journaling filesystems. Often, this process begins with a "Linux guy/gal" such as yourself convincing others that it's the right thing to do.
In the second half of this article, we're going to take a look at the design vision behind ReiserFS. By doing so, we're going to get a good grasp on the fact that these new filesystems aren't just about doing the same old thing a bit faster. They also allow us to do things in ways that simply weren't possible before. Developers, keep this in mind as you read this series. The capabilities of these new filesystems will likely affect how you code your future Linux software development projects.
Understanding Journaling: Meta-data
As you well know, filesystems exist to allow you to store, retrieve and manipulate data. And, in order to do this, a filesystem needs to maintain an internal data structure that keeps all your data organized and readily accessible. This internal data structure (literally, "the data about the data") is called meta-data. It is the structure of this meta-data that gives a filesystem its particular identity and performance characteristics.
Normally, we don't interact with a filesystem's meta-data directly. Instead, a specific Linux filesystem driver takes care of that job for us. A Linux filesystem driver is specially written to manipulate this maze of meta-data. However, in order for the filesystem driver to work properly, it has one important requirement; it expects to find the meta-data in some kind of reasonable, consistent, non-corrupted state. Otherwise, the filesystem driver won't be able to understand or manipulate the meta-data, and you won't be able to access your files.
Understanding Journaling: fsck
This is where fsck comes in. When a Linux system boots, fsck starts up and scans all local filesystems listed in the system's /etc/fstab file. fsck's job is to ensure that the to-be-mounted filesystems' meta-data is in a usable state. Most of the time, it is. When Linux shuts down, it carefully flushes all cached data to disk and ensures that the filesystem is cleanly unmounted, so that it's ready for use when the system starts up again. Typically, fsck scans the to-be-mounted filesystems and finds that they were cleanly unmounted, and makes the reasonable assumption that all meta-data is OK.
However, we all know that every now and then, something atypical happens, such as an unexpected power failure or system lock-up. When these unfortunate situations occur, Linux doesn't have the opportunity to cleanly unmount the filesystem. When the system is rebooted and fsck starts its scan, it detects that these filesystems were not cleanly unmounted and makes a reasonable assumption that the filesystems probably aren't ready to be seen by the Linux filesystem drivers. It's very likely that the meta-data is messed up in some way.
So, to fix this situation, fsck will begin an exhaustive scan and sanity check on the meta-data, correcting any errors that it finds along the way. Once fsck is complete, the filesystem is ready for use. Although some recently-modified data may have been lost due to the unexpected power failure or system lockup, since the meta-data is now consistent, the filesystem is ready to be mounted and be put to use.
The Problem With fsck
So far, this may not sound like a bad approach to ensuring filesystem consistency, but the solution isn't optimal. Problems arise from the fact that fsck must scan a filesystem's entire meta-data in order to ensure filesystem consistency. Doing a complete consistency check on all meta-data is a time-consuming task in itself, normally taking at least several minutes to complete. Even worse, the bigger the filesystem, the longer this exhaustive scan takes. This is a big problem, because while fsck is doing its thing, your Linux system is effectively offline, and if you have a large amount of filesystem storage, your system could be fsck-ing for half an hour or more. Of course, standard fsck behavior can have devastating results in mission-critical datacenter environments where system uptime is extremely important. Fortunately, there's a better solution.
The Journal
Journaling filesystems solve this fsck problem by adding a new data structure, called a journal, to the mix. This journal is an on-disk structure. Before the filesystem driver makes any changes to the meta-data, it writes an entry to the journal that describes what it's about to do. Then, it goes ahead and modifies the meta-data. By doing so, a journaling filesystem maintains a log of recent meta-data modifications, and this comes in handy when it comes time to check the consistency of a filesystem that wasn't cleanly unmounted.
Think of journaling filesystems this way -- in addition to storing data (your stuff) and meta-data (the data about the stuff), they also have a journal, which you could call meta-meta-data (the data about the data about the stuff).
Journaling in Action
So, what does fsck do with a journaling filesystem? Actually, normally, it does nothing. It simply ignores the filesystem and allows it to be mounted. The real magic behind quickly restoring the filesystem to a consistent state is found in the Linux filesystem driver. When the filesystem is mounted, the Linux filesystem driver checks to see whether the filesystem is OK. If for some reason it isn't, then the meta-data needs to be fixed, but instead of performing an exhaustive meta-data scan (like fsck) it instead takes a look at the journal. Since the journal contains a chronological log of all recent meta-data changes, it simply inspects those portions of the meta-data that have been recently modified. Thus, it is able to bring the filesystem back to a consistent state in a matter of seconds. And unlike the more traditional approach that fsck takes, this journal replaying process does not take longer on larger filesystems. Thanks to the journal, hundreds of Gigabytes of filesystem meta-data can be brought to a consistent state almost instantaneously.
ReiserFS
Now, we come to ReiserFS, the first of several journaling filesystems we're going to be investigating. ReiserFS 3.6.x (the version included as part of Linux 2.4+) is designed and developed by Hans Reiser and his team of developers at Namesys. Hans and his team share the philosophy that the best filesystems are those that help create a single shared environment, or namespace, where applications can interact more directly, efficiently and powerfully. To do this, a filesystem should meet the performance and feature needs of its users. That way, users can continue using the filesystem directly rather than building special-purpose layers that run on top of the filesystem, such as databases and the like.
Small File Performance
So, how does one go about making the filesystem more accommodating? Namesys has decided to focus on one aspect of the filesystem, at least initially -- small file performance. In general, filesystems like ext2 and ufs don't do very well in this area, often forcing developers to turn to databases or special organizational hacks to get the kind of performance they need. Over time, this kind of "I'll code around the problem" approach encourages code bloat and lots of incompatible special-purpose APIs, which isn't a good thing.
Here's an example of how ext2 can tend to encourage this kind of programming. ext2 is good at storing lots of twenty-plus k files, but isn't an ideal technology for storing 2,000 50-byte files. Not only does performance drop significantly when ext2 has to deal with extremely small files, but storage efficiency drops as well, since ext2 allocates space in either one or four k chunks (configurable when the filesystem is created).
Now, conventional wisdom would say that you aren't supposed to store that many ridiculously small files on a filesystem. Instead, they should be stored in some kind of database that runs above the filesystem. In reply, Hans Reiser would point out that whenever you need to build a layer on top of the filesystem, it means that the filesystem isn't meeting your needs. If the filesystem met your needs, then you could avoid using a special-purpose solution in the first place. You would thus save development time and eliminate the code bloat that you would have created by hand-rolling your own proprietary storage or caching mechanism, interfacing with a database library, etc.
Well, that's the theory. But how good is ReiserFS' small file performance in practice? Amazingly good. In fact, ReiserFS is around eight to fifteen times faster than ext2 when handling files smaller than one k in size! Even better, these performance improvements don't come at the expense of performance for other file types. In general, ReiserFS outperforms ext2 in nearly every area, but really shines when it comes to handling small files.
ReiserFS Technology
So how does ReiserFS go about offering such excellent small file performance? ReiserFS uses a specially optimized b* balanced tree (one per filesystem) to organize all filesystem data. This in itself offers a nice performance boost, as well as easing artificial restrictions on filesystem layouts. It's now possible to have a directory that contains 100,000 other directories, for example. Another benefit of using a b*tree is that ReiserFS, like most other next-generation filesystems, dynamically allocates inodes as needed rather than creating a fixed set of inodes at filesystem creation time. This helps the filesystem to be more flexible to the various storage requirements that may be thrown at it, while at the same time allowing for some additional space-efficiency.
ReiserFS also has a host of features aimed specifically at improving small file performance. Unlike ext2, ReiserFS doesn't allocate storage space in fixed one k or four k blocks. Instead, it can allocate the exact size it needs. And ReiserFS also.
This does two things. First, it dramatically increases small file performance. Since the file data and the stat_data (inode) information are stored right next to each other, they can normally be read with a single disk IO operation. Second, ReiserFS is able to pack the tails together, saving a lot of space. In fact, a ReiserFS filesystem with tail packing enabled (the default) can store six percent more data than the equivalent ext2 filesystem, which is amazing in itself.
However, tail packing does cause a slight performance hit since it forces ReiserFS to repack data as files are modified. For this reason, ReiserFS tail packing can be turned off, allowing the administrator to choose between good speed and space efficiency, or opt for even more speed at the cost of some storage capacity.
ReiserFS truly is an excellent filesystem. In my next article, I'll guide you through the process of setting up ReiserFS under Linux. We'll also take a close look at performance tuning, application interactions (and how to work around them), the best kernels to use, and more. 2
Support Funtoo and help us grow! Donate $15 per month and get a free SSD-based Funtoo Virtual Container.
Looking for people interested in testing and documenting Docker support! Contact Daniel Robbins for more info.
|
http://www.funtoo.org/index.php?title=Funtoo_Linux&diff=next&oldid=8167
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
This section contains the following topics:
Peeking Using the Sealing Server
Peeking Using the IRM Java API sealing server supports both peeking and validated peek (where the digital signature of the sealed content is validated). In both cases the sealed content is uploaded to the sealing server, the content is examined, and the sealed content metadata is returned to the caller.
For JAX-WS generated web service proxies, the sealed.
peek
A call to the peek method results in the metadata being returned as a
ContentDescription object. This object contains the classification details, custom metadata and a few other attributes, such as the time the sealed file was created.
SealingServices sealingServices = new SealingServicesService().getSealingServices(new javax.xml.ws.soap.MTOMFeature()); ContentDescription results = sealingServices.peek(input);
It is important to enable the MTOM web service feature. This ensures the sealed content is uploaded to the server in the most optimal form. It also avoids
java.lang.OutOfMemoryException exceptions if the uploaded file is large.
To call the peek operation the authenticated user does not need any rights to access the sealed content.
validatedPeek
A call to the
validatedPeek method results in the metadata being returned as a
ContentDescription object in the same way as peek. If the digital signature has been tampered with, or the file is corrupt, a
ContentParseFault exception is thrown. This exception will detail the reason for the sealed content parsing failure. A successful invocation of this operation signifies that the metadata signature has been verified.
SealingServices sealingServices = new SealingServicesService().getSealingServices(new javax.xml.ws.soap.MTOMFeature()); ContentDescription results = sealingServices.validatedPeek(input);
To call the validated peek operation, the authenticated user must have the rights to open the sealed content.
The classification is the most important part of the sealed content metadata. The classification contains the opaque XML document called the classification cookie. The classification cookie is the data used by Oracle IRM Desktop and the Oracle IRM J2EE application when associating rights with content. The classification cookie XML structure is defined by the classification system of the sealed content. The context classification system has an XML structure that includes a UUID to identify the context and a value called the item code which can be used to identify an individual document. The following is a sample context cookie that might appear in sealed content:
<?xml version="1.0" ?> <classifications:ContextCookie xmlns: <context> <uuid>588403f9-9cff-4cce-88e4-e030cc57282a</uuid> </context> <itemCode> <value>sample.sdoc</value> <time>2007-05-10T12:00:00.000+00:00</time> </itemCode> </classifications:ContextCookie>
Rights for the context classification system are expressed using this information, for example:
John can access all documents with a context UUID of
f3cd57c1-f495-48aa-b008-f23afa4d6b07
or:
Mary can access documents with a context UUID of
f3cd57c1-f495-48aa-b008-f23afa4d6b07 and an item code value of
plan001.sdoc or
plan002.sdoc
The classification metadata also contains the human-readable labels for the classification. There may be multiple labels if the labels have been translated into multiple languages. These labels are used to display a friendly name and description to a user, rather than showing raw computer oriented data from the classification cookie.
The classification contains a set of human-readable strings called labels. The classification labels can be used to inform the user which classification the sealed content was sealed against.
Classification classification = results.getClassification(); Label[] labels = classification.getLabels(); if (labels != null) { for (Label label : labels) { System.out.println(label.getLocale().getDisplayName() + " : " + label.getName()); } }
The classification cookie is defined in the classification XML schema as an
<any> element. The cookie XML can be accessed from the classification object and is typically returned as a
org.w3c.dom.Element. The following code snippet shows a context UUID being extracted from a context classification cookie using the DOM.
Classification classification = results.getClassification(); org.w3c.dom.Element element = (org.w3c.dom.Element)results.getAny(); org.w3c.dom.NodeList nodes = element.getElementsByTagName("context"); org.w3c.dom.Node node = nodes.item(0); String uuid = node.getTextContent();
If the file is large there is no need to send the complete file to the sealing server. Peeking only requires the portion of the file that contains the metadata. This portion of the file is dynamic in size, but limited to 1MB in size. A pessimistic view would be to send the first 1MB of the file contents (or the complete contents if this is less than 1MB). In reality the sealed content preamble and metadata are usually a lot smaller, so 16K to 32K is usually sufficient. If the metadata section of the sealed content sent to the sealing server is truncated, the
peek or
validatedPeek call will throw a
ContentParseFault. IRM Java libraries allow peeking to be performed locally. This can be used where performance is an issue and the overhead of sending content to the sealing server is undesirable. The functionality is identical to that provided by remote peeking.
peek
Local peeking is performed using the
SealingOperations interface rather than the sealing services web service. Sealed content is provided as an
InputStream rather than a
DataHandler.
import static oracle.irm.engine.content.sealing.SealingOperationsInstance.peek; InputStream fileInputStream = new FileInputStream("example.stml"); ContentDescription results = peek(fileInputStream);
The result can be examined in the same manner as for remote peeking.
|
http://docs.oracle.com/cd/E23943_01/user.1111/e12326/isvsealedcontentexamples003.htm
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Net::hostent - by-name interface to Perl's built-in gethost*() functions
use Net::hostent;.
You may also import all the structure fields directly into your namespace as regular variables using the :FIELDS import tag. (Note that this still overrides your core functions.) Access these fields as variables named with a preceding
h_. Thus,
$host_obj->name() corresponds to $h_name if you import the fields. Array references are available as regular array variables, so for example
@{ $host_obj->aliases() } would be simply @h_aliases.
The gethost() function is a simple front-end that forwards a numeric argument to gethostbyaddr() by way of Socket::inet_aton, and the rest to gethostbyname().
To access this functionality without the core overrides, pass the
use an empty import list, and then access function functions with their full qualified names. On the other hand, the built-ins are still available via the
CORE:: pseudo-package.
use Net::hostent; use Socket; @ARGV = ('netscape.com') unless @ARGV; for $host ( @ARGV ) { unless ($h = gethost($host)) { warn "$0: no such host: $host\n"; next; } printf "\n%s is %s%s\n", $host, lc($h->name) eq lc($host) ? "" : "*really* ", $h->name; print "\taliases are ", join(", ", @{$h->aliases}), "\n" if @{$h->aliases}; if ( @{$h->addr_list} > 1 ) { my $i; for $addr ( @{$h->addr_list} ) { printf "\taddr #%d is [%s]\n", $i++, inet_ntoa($addr); } } else { printf "\taddress is [%s]\n", inet_ntoa($h->addr); } if ($h = gethostbyaddr($h->addr)) { if (lc($h->name) ne lc($host)) { printf "\tThat addr reverses to host %s!\n", $h->name; $host = $h->name; redo; } } }
While this class is currently implemented using the Class::Struct module to build a struct-like class, you shouldn't rely upon this.
Tom Christiansen
|
http://search.cpan.org/~rjbs/perl-5.18.0-RC1/lib/Net/hostent.pm
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Saudi Arabia Objects To Proposed
.gay gTLD, Among Others
459 'strings,' of which 911 came from North America and 675 from Europe.."
Hmmm... (Score:5, Funny)
Interesting the
.gay thing, considering how infamous Saudi Party Boys are...
Re: (Score:3)
"Interesting the
.gay thing, considering how infamous Saudi Party Boys are..."
That's among the reasons Saudis pray in KSA but party in Bahrain.
Re:Hmmm... (Score:5, Funny)
What should we expect from men in dresses who are afraid of women? At least Allah has put them someplace where there's plenty of lube!
Re: (Score:3)
Gosh, I hope they aren't going to my ".fatuglymisogynistichomophobicsaudidouchebags" gTLD too.
Re: (Score:3)
I was thinking we work with the Unicode people to get one or more of the 'smiley face' characters renamed to 'The Prophet Mohammed'. And then we try to register 4chan.(that)
While they are busy freaking out over that, we proceed with all of the gTLDs we were really planning on creating.
Re:Hmmm... (Score:5, Interesting)
Not sure why that's modded funny. Bachi Boy parties [bullseyerooster.com] are common in saudi arabia, afghanistan and a variety of other middle eastern countries.
why modded funny, not all youth dancing is dirty (Score:3, Informative)
Because "Saudi Party Boys" *sounds* like it might be the name of a gay porno flick.
"Bachi Boy parties" sounds like a children's-entertainment company specializing in hosting bocce ball birthday parties.
Oh, thanks to the recent military involvement in that part of the world, most Americans who watch the news are aware of "Bachi Boys" in Afghanistan and the political trouble American military and civilian personnel get into back home if they appear to endorse the sexual and quasi-sexual (e.g. erotic dancing)
Re: (Score:3)
Well it's not like the Bible is much better when it comes to the rights of women. Rape is allowed, as long as you pay the woman's father a fine and then marry her, slavery of women (even sexual slavery) is fine as well. Even murder is fine if you say she's a witch.
Re: (Score:3, Insightful)
Waiting for you to name a Christian country where what you wrote is rule of law.
Re:why modded funny, not all youth dancing is dirt (Score:4, Insightful)
It's interesting how little of the current breeds of Christianity is based on the words of Jesus, and how much on the words of his students and church leaders. And how rarely all the 'love and giving and kindness' is followed by the 'righteous'. You very rarely hear a person quoting Jesus, while other authors are quoted in just about every conversation where something is claimed to be against the will of God.
Islam considers Jesus a prophet as well by the way. So anything he said also applies to them.
Re:Hmmm... (Score:4, Funny)
That's not nearly as strange as their rule that they aren't allowed to be alone with a male goat (nannies are ok).
Re:Hmmm... (Score:4, Insightful)
That officials should support the will of the people rather than their own personal opinions or anything they might be more informed about.
No, officials are elected to office to conduct government for the good of the people . Sometimes, large segments of the people are not able to recognize what is good for them.
If we say that elected officials should represent EXACTLY the will of the people, there would still be slavery in the South, and homosexuality would be, for the most part, illegal.
Is that what you want?
Re: (Score:3)
I think the act still is illegal in many states, isn't it? They're still lots of laws on the books about sex acts that stray from the 'norm'.
Well... (Score:2, Funny)
Well I object (Score:4, Insightful)
to their treatment of Jews and women, so they can kiss my ass.
Re:Well I object (Score:5, Funny)
to their treatment of Jews and women, so they can kiss my ass.
The Saudis probably wouldn't object to
.jew because it would help them block a lot of material...
Re:Well I object (Score:5, Insightful)
So, why would they object to
.gay? They can block it for the same reason.
Re: (Score:3)
Maybe their objection in this case is not that they may be forced to see it, but the thought that other people might be enjoying it.
:D
Heh, Ambrose Bierce put it best;. from The Devil's Dictionary [wikipedia.org]:
Puritanism: the haunting feeling that someone, somewhere, might be happy
Re: (Score:3)
To bolster their credentials among the faithful? I'd be surprised if the Vatican doesn't get in on the action.
You really do not understand... (Score:5, Interesting)
Lots of the arab countries do not want to be able to block things like porn or ".gay" material.
No, they do not want it to EXIST. At all. Not there, but not where you are either. So they are not OK with ANYONE having a
.gay domain, because they fundamentally think it's wrong to allow this for anyone.
Think about this the next time proposals are made to have the U.N. control domains...
Re: (Score:3)
I've no idea what the fuck you're really on about, but if you're asking why I'm not keen on gTLDs then here are a few reasons:
- It destroys the hierarchical structure of DNS
- It forces companies and individuals to pay many millions of dollars to protect their trademark, and for what benefit?
- ICANN is meant to be non-profit, yet it's just created a billion dollar income stream. What does it intend to do with it? Why should international companies be forced to pay this to a US organisation?
- Only large organ
Re: (Score:2)
Is it too late for me to object to
.COM?
Keep censoring and let the rest of the world go on (Score:4, Insightful)
Why don't the simply censor those domains? They already censor the hell of the internet anyway.
Re: (Score:3)
Yep. Surely this makes it far, far easier to block these sites at the ISP level.
Then again, maybe we're expecting politicians to understand technology.
Re:Keep censoring and let the rest of the world go (Score:5, Insightful)
Exactly my thought. Wouldn't this make it EASIER for them?
What are they bitching about? Its a boat load easier to block entire TL domains in their DNS servers than to block a gazillion
.coms all over the world.
Sure the wise will change to some other DNS server, and they may have to block IPs, but so what? They already have that problem. I suspect they also block out of country dns servers.
Re:Keep censoring and let the rest of the world go (Score:4, Insightful)
The problem is that it will be easier to block them, so now they won't have access to them.
The old "Women are for babies, boys are pleasure" attitude in the Islamic world is prevalent enough that I do not understand why they don't just come out of the closet? [sheikyermami.com]
Re:men having sex with men IS a different matter (Score:4, Informative)
I have personally seen large buildings where under-age kids were censoring foreign printed publications before distribution.
A rather shabby piece of cleric would first investigate the magazines and newspapers and specify what needed to be cut out.
These examples were then hung on the wall and the kids went to work, no damage to them when they had to handle all these photo's of insufficiently dressed people because they were of the age of innocence.
A brochure for expensive yachts and boats was nearly the worst victim, on virtually all photo's there were people in states of undress so in the end there would be little left of the magazine if not for the fact that the importer of the yachts paid extra to have the kids use sharpies to paint clothes on the bikini babes and yacht men in shorts.
A sick society.
Re: (Score:3)
Yes, it would make it easier. So why do they still object? Because for many religious people (and not just them) it's not enough to live their own lives according to their own ideas - they could do that already and have no censorship of the internet at all. No, they want to meddle in other people's lives as well.
And then of course there is what many corrupt and morally bankrupt politicians do: they pander. Acting "against immorality" - no matter how pointless and ineffectual - conveys the impression that
Re: (Score:3)
giving an entire TLD to gays is validating/recognizing something that some religions consider a sin
FTFY.
And it *is* special treatment because they aren't proposing one called
.hetero are they?
I was under the impression that "they" (ICANN?) aren't proposing anything. Instead they let others request the addition of these gTLDs. In which case, there is nothing stopping you or anyone else from requesting a
.hetero gTLD, is there?
.gay gTLD, then you may have a point. But until someone requests it and has that request denied, special treatment this ain't.
Now if ICANN then comes back and says "screw you", but accepts the
Re: (Score:3)
Is the fact that you can legally kill an animal a indication that bestiality should be okay or is it a sign that the human race can do better?
Re:Keep censoring and let the rest of the world go (Score:5, Insightful)
Saudi view themselves as the leaders of the Islamic faith (sort of like if Italy took the lead on all things catholic that the pope said, good ideas or not).
To them the notion that some of these concepts could even be considered acceptable, anywhere, is outrageous, and true moral leadership is to object vigorously to all of it. They know they'll probably lose, and they probably want to lose (and I'm sure the US embassy was consulted in advance as to whether or not they had any chance of actually getting their opinions followed). But as the stewards of the islamic faith they must at all times appear to object to things contrary to the brand of islam they are promoting.
The idea that these behaviours (consumption of alcohol, sex for fun, homosexuality etc.) could be exposed to any of the islamic faith, especially their poorer brethren, who rely on the Kingdom for guidance and support on these issues, means they must show their leadership to the world and demand such unislamic activites be discouraged at all time. It would be equally terrible if a member of the Islamic faith outside of a Islamic society were to be corrupted by these ideas, especially as a young, impressionable boy or girl in the US or Europe, and the international community should at all times work to protect them from unislamic influences, everywhere.
It's stupid, they know it's stupid, you know it's stupid, but the poor illiterate bastard in Bangladesh or Afghanistan or Morocco or the like can get outraged over it and they don't know it's theatre for their benefit, the saudi's can claim to be defenders of the islamic faith (which wins them points with the literate crazies) and it's unlikely to go very far anyway, so no harm done.
So, is the Vatican objecting? (Score:4, Insightful)
Um, nevermind, it's too late for them to do that with a straight face.
Re: (Score:3)
When the oil runs out they still have money.
It's like saying when steel ceases to be the most important industrial commodity the US and the UK will suddenly fade into the history books. Right now it's not viable to do anything but extract oil, or oil related businesses in Saudi as a foundational industry. But now they have cash, and they can use cash to create an industry when something else becomes viable.
Poor countries having nothing with which to create a new industry. Saudi isn't like that. The roya
Contrary to my morality (Score:5, Insightful)
I find religion contrary to my morality.
Re: (Score:3)
My religion compels me to pray for you, and to let you be. Others, not so much.
Re:Contrary to my morality (Score:5, Insightful)
My religion compels me to pray for you, and to let you be.
Your religion doesn't compel any such thing - it is your personal internal sense of morality that guides you. If a proof were produced that your god did not exist, would you suddenly throw away all of your morality and principles, and turn to murdering, raping and thieving? Of course not. Millions of people have been killed in the name of the world's major religions, and many more have suffered persecution because of their religious beliefs. The "peace" that we have have now is more a product of the Western world turning towards secularism than anything else; it was only 70 years ago that some Christians were busy rounding up Jews - when the leader of the Eastern Orthodox Church actually said, [time.com] "Why should we not get rid of these parasites [Jews] who suck Rumanian Christian blood? It is logical and holy to react against them.". Of course it would be unacceptable for a religious leader to say something like that today, wouldn't it? Hmmm... are we really so arrogant to believe that we have evolved so far, culturally and as a species, that such thoughts are no longer possible?
Re: (Score:3)
My religion compels me to pray for you, and to let you be.
Which is it? If you believe that prayer has any sort of real effect, then you are not doing a very good job of letting me be.
Re:Contrary to my morality (Score:4, Insightful)
Meanwhile, I'm not convinced we need all these boutique TLD's. Maybe there's lots of pressure for more after the
.xxx cash-grab.
The more descriptive TLDs are not something the xxx crowd wants.
I suspect establishing these is but the first step to a wider enforcement of censorship. Once these are in place you can impose laws forcing the use of the appropriate TLD, and then simply make it really easy to block the entire TLD.
There are already restrictions in place on
.gov and .edu (easily circumvented in many cases). There was even some noise about .net being tightened a bit in the last couple years.
How about this new gTLD? (Score:5, Funny)
Re: (Score:3)
That's intolerant to not accept that everyone views things the same way you do and no one ever said YOU are the authority.
It's not intolerant to not tolerate intolerance. Just as it's not anti-freedom to restrict the freedom of those who want to destroy freedom.
List of the Current gTLD Applications (Score:3, Informative)
Re: (Score:2, Insightful)
I see no valid *purpose* in adding gTLDs whether offensive or not.
Re: (Score:3)
You do know that ICANN has yet to accept any of those TLDs, right?
Ban them all (Score:5, Funny)
Re: (Score:3)
Anarcho-Syndiclist
...
bloody peasant...
Re:Ban them all (Score:5, Funny)
As an Anarcho-Cyclist I object to car companies so I respectfully request that we remove the
.car and .carinsurance TLDs.
So whats the problem? (Score:4, Insightful)
Re: (Score:2, Funny)
The problem probably is that I registered
.quran and made it an alias.
... then don't go there? (Score:5, Insightful)
I'm continually amazed that people think that because something offends THEM, that they have the right to censor what other people can do/see/say/hear/view/etc. There are a few things that the world DOES agree on - such as kiddie porn and murder being bad - but beyond that, if you're offended then simply censor YOURSELF and don't visit those sites! If the whole country agrees (which I doubt!), then block it in your country.
If ICANN doesn't tell them to go take a flying leap, there should be rebellion.
MadCow.
Re: (Score:3)
There are a few things that the world DOES agree on - such as kiddie porn and murder being bad -
I don't see a lot of agreement on "murder being bad." Lots of countries and cultures regularly commit it with premeditation.
Re: (Score:2)
Murder is, by definition, the bad kind of killing. Other types of killing may exist-- it depends on the state.
Re: (Score:2, Offtopic).
Re: (Score:3)?
- Excerpt from A Very Special Family Guy Freakin' Christmas
Re: (Score:3)
Jesus was all about tearing families apart and seemed generally against marriages (although being sort of weaselly about it and saying that as much as it should be avoided, it wasn't outright a sin or anything)
I think most of that sentiment was attributed to Paul (or the forgeries in his name). Jesus did say a few things that, when taken out of context, can be thought of as anti-marriage or anti-family, but most of those were metaphors for other things. Peter and Andrew's family let the deciples stay with them on at least one or two occasions. Also, consider that the first miracle attributed to Jesus was to supply wine for a wedding.
Paul believed that the world would end in his lifetime or shortly after. The
Re: (Score:3)
You hurt us, you kill [aljazeera.com] us. You silence us, you torture [cnn.com] us. You deny us basic [thinkprogress.org] human [youtube.com] rights. [youtube.com]
We aren't asking you to "bow down" to us. We are making you stop hurting us. The second we can live our life "like all the heterosexuals do," we'll stop bothering you. Asshole.
Re: (Score:3)
"just live your life like all the heterosexuals do."
That's what most homosexuals are trying to do, but they're being stopped by heterosexuals want to keep that from happening in order to maintain that thin veil of otherness between the two groups.
Same sex marriage proponents feel our viewpoint is the be all end all as you put it for a number of reasons: your opposition against same sex marriage stems from 1) personal discomfort and dislike, and 2) religious belief that marriage is defined as being between o
Irony (Score:4, Insightful)
Saudi Arabia refuses to allow for a
.gay domain
People continue to put oil from that country in their cars.
Chick-Fil-A founder says he personally believes marriage is between a man and a woman
Gets boycotted.
Re:Irony (Score:4, Interesting)
For a related example, look at all those people who boycott genetically-modified foods, but would suddenly find their objections disappear upon diagnosis of diabeties. The best treatment involves insulin produced by transgenic bacteria. Or the fuss last year when it was emerged that some of the flavorings used in coke-cola and a few other products were tested on human embryonic stem cells - there were a lot of boycotts over that one, but always of food. No-one called for a boycott of drugs, even though practically every medication developed in the last thirty years was developed and tested using the same cell line, HEK 293.
Re: (Score:3)
Bad related example. I grow my own vegetables, using Heirloom seeds. These seeds are bred and cultivated, sure, but they don't undergo the sort of selective breeding as you'd find happens with the Triticale family of grains which leaves farmers (mainly in Canada) sowing terminal generations of cereals. Those crops do not spawn successive generations, hence the collective term "terminal". What I grow does spawn successive generations, which are in practically every sense of the word, identical to the previou
Re:Irony (Score:5, Insightful)
Well, let's see here...
Cheap petrochemicals are one of the most vital foundations of modern technological civilization, making possible(and helping to set the price and availability of) virtually anything everyone who isn't a subsistence mud farmer interacts with day to day.
Brand A fast food chicken products are, roughly as comestible as Brand B fast food chicken products.
Nope, no significant difference there, must be ironic.
Re: (Score:2, Funny)
You do realize that the US gets the most of its oil from Canada don't you? While I am no fan of french fries and gravy, I see no reason to boycott Canadian oil.
Re: (Score:2)
Minor nit, but as the 3rd largest producer of oil, the #1 source of American oil is America. So maybe Canada is #1 exporter to America.
That being said, America uses a lot of oil (both in terms of gross numbers and per capita) and oil is fungible. So it does not matter who the Saudis pump to, global swings in supply and demand will affect the price you pay at the pump.
Re: (Score:2)
I see no reason to boycott Canadian oil.
Really? Chiquita banana [] tried. Then they said it wasn't true and there was no boycott. [nationalpost.com] That was after the Canadian public turned around and left their produce rotting on store shelves.
Re: (Score:2)
Let me know how to boycott Saudi and ONLY Saudi gas and I'm on board.
Re: (Score:3)
Second, who cares what the CEO of Chick-Fil-A said. The issue, which started before he opened his mouth, is Chick-Fil-A the corporation is donating to Anti-Homosexual groups. Some people have a problem
Re: (Score:3)
Homo Depot, Target, JCPenney and others actively give thousands of dollars each year in support of indoctrination of kids and employees to accept the gay lifestyle.
Indoctrination? Citation please.
Apparently people are no longer allowed to have opinions, or at least those that are in disagreemnt with the homosexual agenda.
Homosexual agenda? What agenda is that? The one where gay folks would prefer not to be beaten to death or dragged behind a car for their sexual orientation? The one where they would prefer to be treated just like everyone else in terms of being able to build strong, healthy families and enjoy the same government benefits bestowed upon heterosexual couples?
Zip up, your bigotry is showing
Can we hear again about how wonderful... (Score:5, Insightful)
...it would be for "control" of the Internet to be taken away from the evil Americans and given to the saintly UN where rational, tolerant governments such as that of Saudi Arabia have influence?
I am offended by Saudi Arabia (Score:3)
Re: (Score:3)
Saudi Arabia stands for tyrannic despotic dictators with no legitimate right to rule who enforce intolerance and oppression over a people who deserve far better.
You are correct; were it not for the billions that they make every year selling oil, and the fact that they are a U.S. client state propped up by U.S. industry and military support, then it is likely the House of Saud would have been overthrown a long time ago. The alliance between the United States and the House of Saud is purely one of convenience and money - as soon as one no longer needs the other, it will go bad.. [foreignaffairs.com]. [guardian.co.uk]
Publicity Stunt (Score:2)
You would think they would be all for it because once it's in effect they can simply block by TLD. Right? Wrong.
With this request they are simply advertising to the world: "We are serious about remaining unintelligent, primitive bigots".
.bible (Score:2)
Do you suppose they also object to
.koran, .quran , or whatever else might represent their scriptures?
Probably. This is about control, of course, and that's the game in most of the Middle East . Well, probably everywhere else too. Darn.
But they applaud the .stoning TLD (Score:2, Insightful)
They also like the
.slavery, .nowomensrights, and the .infidel TLDs.
Seriously; I think some of the alternative energy things are barking up the wrong tree, but at this point, I would be willing to support any energy plan that gets us off these jerk's oil. I want to be liberated from Saudi Arabia and then bomb their fucking stuck-up, 15th century asses into the ground. The USA gives them latitude because we depend upon their oil, and all the while, they are the most restrictive country in the world. North Ko
Re:But they applaud the .stoning TLD (Score:5, Insightful)
and then bomb their fucking stuck-up, 15th century asses into the ground
Yeah, that'll show 'em what civilized behavior looks like!
Re: (Score:2)
Civilization (n) [from the Latin civilus, "a Roman legionnaire, particularly one in the Gallic divisions"]
1) Having bigger, better, and more weapons than "uncivilized peoples"; this implicitly elevates the status of one's arts and culture above the rest
2) A computer game series, in which victory is generally obtained by acquiring bigger, better and more weapons than the other players.
See also: civil war
Contrast: barbarian
(Taken without permission from the 2038 edition of the Oxford English Dictionary)
Re: (Score:2)
Isn't it civilized to punish people who do bad things?
Re: (Score:2)
Isn't it civilized to punish people who do bad things?
Like being gay or driving with girly parts.
Who gives a rats ass if they "object" (Score:3)
I mean, I know I am being bombastic, but really... who cares what they think. We don't need to change our ways or ideas because they are "offended." As a matter of fact the reason they want us to change... so they can enforce their views on the public. As a sovereign country they can do that even if it is distasteful to us. They don't have the right to extend that influence anywhere else because they aren't sovereign any where else.
Look at what Iran has just done. If they want to disconnect from the rest of the world they can do the same thing.
Throw them ALL away (Score:2)
If I were running my own kingdom I would object to every new tld added not directly related to the partitioning of a new country.
I would seek all technical measures and pressures possible to ensure those who would use such TLDs would find them to be inaccessable to huge swaths of public thus significantly degrading their value.
ICANN deserves to rot in
.hell for hurting the network for profit.
If no .gay (Score:2)
Why do we need top level domains anyways? (Score:2)
Aren't they rather antiquated? We don't need them - what about [google]?
The TLD system has been screwed enough already (Score:2)
There seems a lot of Islamophobia on the site today!
The system should have simply kept to the
.org .edu .mil .gov and .com TLDs, plus the ones for countries where nations could do as they like.
In fact it was a bad idea having ANY tlds except for nations; it would have solved a lot of problems if Saudi Arabia applied its own rules to its own domain, and the US to its domain. Instead everyone wants a TLD to show how important their organisation is.
Re: (Score:3)
Actually,
.edu, .mil, and .gov should also be abolished. They should be .edu.us, .mil.us, and .gov.us. The gTLD namespace should not have US TLDs polluting it.
Have they even HEARD of the internet? (Score:2)
Do they think that the current internet is unable to host sites that 'promote homosexuality'?
On a side note, who here still thinks it's a good idea for the UN to be in charge of the internet?
.gawd (Score:2)
I find religious TLD's offensive. My point is easy to see.
Religious TLD's make it so much easier to get to religious sites and that increases the risk of extremism with all the sad consequences.
Apart from that, sites about gawd are an insult to all the free thinking people in the world.
Remember the Spanish Inquisition? Should the internet be a platform for "those kind of people"?
Ok. Enough sarcasm.
Please believe what you want, as long as you don't bother me with it. But that doesn't go the other way around,
Shut up! (Score:2)
I (dot).Kill you!
Okay, someone will be offended by something at any given moment. But let's look at any of the rules in there. Do they have any verbage against curse words?
But then again...
.sex is probably allowed and .gay could mean a lot of things...
Screw it. Let them eat hate for breakfast lunch and dinner.
Proper Response: (Score:2)
Subsequently, we refer you to the response given in Arkell v. Pressdram
Have a lovely day.
- ICANN
A political dichotomy I honestly can't understand (Score:2, Insightful)
To my understanding the social left in America is about inclusion. Obviously, this means a heavily pro-gay agenda. It has also manifested in an effort to respect all religions*, including Muslims, and not only tolerate their practices and beliefs in the US, but support and embrace them. Whenever someone comes out against perceived or real moral deficiencies of Islam, the left is ready to attack that person as a right-wing hater.
But Islam condemns homosexuality. It is not only a general disapproval of homose
Re: (Score:3)
To my understanding the social left in America is about inclusion.
Well, see, there's your problem - you've bought into the false image, perpetuated by the mass corporate media, that such labels actually apply to any reasonable sample size of the population.
The "Liberal Left" and "Conservative Right" do not exist. I know it's hard to believe, especially in the face of non-stop, 24-hour propaganda networks telling you that they do, but both labels are obvious fabrications, easily debunked by dropping one's preconceived notions and actually talking to other people. Do so,
Re: (Score:3)
To my understanding the social left in America is about inclusion. Obviously, this means a heavily pro-gay agenda.
No, that's not correct. It is about equality; it is possible to promote a group people being fairly without also punishing another group.
It has also manifested in an effort to respect all religions
Not correct at all. First of all, it is entirely possible to respect a person without bowing down to their beliefs. Second, it is about ensuring that the government does not use its influence to promote any particular religion.
So, how does a person on the left, which branded Jerry Falwell as an "agent of intolerance," reconcile this "respect everyone" attitude with this?
The above misunderstandings are the cause of your confusion. It is possible to allow a person to believe what they want without punishing them f
Re: (Score:3)
I'm firmly "social left" by American yardstick, but I despise contemporary mainstream Islam with a passion for being the most bigoted major religion in the world today. Their treatment of gays is a part of that.
With all due respect to the KSA (Score:3)
The KSA didn't invent or build, nor do they own, the internet. If the KSA objects to the content on the internet, they are free to filter or restrict whatever they wish, in their own country. While the rest of the world is unlikely to have much interest in their objections, they are free to make as many objections as they wish.
Re: (Score:2)
Eh, almost. This is why it should be a true "inter" net. If Saudi Arabia doesn't want those TLDs to resolve, they can implement it on their name servers. If the US does, they can implement that on their name servers.
Democracy is great, but what I like even more is choices. I'd abhor living in Saudi Arabia, but if that's your kind of thing, go ahead, just leave me alone to live the way I want over here.
Re: (Score:3)
Re: (Score:2)
Masturbation is also against their religion.
Re: (Score:2)
Damn straight. I personally don't give two shits what saudi finds offensive.
Re:TLDs failed (Score:5, Interesting)
ICANN wants the money too badly to admit failure.
But there is only one sane solution to these international problems. Put everything in the country specific tlds. Then the only international cooperation needed is to ensure we can all find the national roots and divide up the IP space. And IPv6 removes pretty much all controversy over a fight for addresses so problem solved. Yes it would mean a longterm migration of
.com, .net, .org and .mil into the .us address space and probably mirroring them into most of the others, at least for a transition period since the sensible behavor for browsers would be to determine the local .cc and append it to everything. But over a decade we could end all this bickering AND the relentless push to turn control over the entire Internet to the U.N.
The idea of Saudi Arabia objecting to the existence of something in someone else's namespace would be laughed at. But if it is a shared namespace they really have as much right to object as the various other factions to support these goofy new top level names.
Re: (Score:3)
International entities live in the
.int TLD.
Re: (Score:2)
What's a port site? A site about sea commerce?
Re: (Score:3)
> Or - at least I damn well hope that ICANN doesn't answer to Saudi Arabia.
Not yet. But if we don't solve the problem (see above for my proposal) eventually the UN will get control of the Internet and remember, this is the same organization that thinks Libya, Iran, Cuba, etc. are just peachy pronouncing on Human Rights violations, etc. So yes, eventually the OIC (Organization of the Islamic Conference) will be able to blok vote damned near any rule they want. So lets fix the problem while we still can
Re: (Score:3)
So what do you think will happen in the US if someone proposes a
.negro gTLD?
Some group of self-righteous, narcissistic assholes would inevitably make a stink, only to be shot down on First Amendment grounds.
Thereafter, they will be thoroughly ridiculed by the vast Spanish-speaking community, for whom the term 'negro' is an everyday description of the absence of all color (we English speakers refer to that one as 'black').
Of course, only a sensationalist idiot would even pretend that there's any similarity between the USA and Saudi Arabia when it comes to free speech...
|
http://tech.slashdot.org/story/12/08/15/1857212/saudi-arabia-objects-to-proposed-gay-gtld-among-others
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
28 December 2011 06:12 [Source: ICIS news]
By Dolly Wu
SHANGHAI (ICIS)--?xml:namespace>
Hong Kong’s Kingboard is planning to bring on line a 320,000 tonnes/year phenol/acetone unit in
During 2013-2015 period
China’s imports are expected to continue growing in 2012 on firm demand and volumes are estimated to reach 750,000 tonnes, an increase of 3% from 2011, according to forecast from Chemease, an ICIS Service in China.
Imports of Asian cargoes are expected to be stable in 2012, while cargoes from Europe are expected to increase as no anti-dumping duties are imposed on them.
Imports from the
At the same time, demand from downstream sectors like BPA is also increasing, so existing phenol facilities are likely to keep running at high operating rates.
Phenolic resins industry remains the key derivative for phenol demand in
Phenol resin is widely used in insulation panels, copper clad laminated boards, brake blocks for automobiles and also applicable in automobiles brake blocks, foundry materials, plywood, abrasives and grinding wheels, but its primary application is to make phenolic moulding plastics, which have incomparable advantages of heat resistance and cost-effectiveness.
Large-sized producers include Jinan Shengquan Chemical, Shaxian Hongguang, Changshu East-South, Shanghai European-Asian Synthetic Material Co., Ltd and Dynea.
Source:ChemeaseSource:Chemease
Jenny Yi contributed to the article.
($1 = CNY6.83)
Please visit the complete ICIS plants and projects database
For more information on Phen
|
http://www.icis.com/Articles/2011/12/28/9519297/outlook-12-chinas-phenol-supply-to-grow-on-new-plants-in-2012.html
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
02 March 2012 10:08 [Source: ICIS news]
By James Dennis
?xml:namespace>
At 09:35 GMT, April Brent crude on
April NYMEX light sweet crude futures (WTI) were trading at $107.96/bbl, down 88 cents/bbl on the previous close. Earlier the
Crude prices had surged on Thursday with April ICE Brent futures climbing to an intraday high of $128.40/bbl, the highest level since 23 July 2008, amid reports from
April ICE Brent futures eventually settled at $126.20/bbl, which was still the highest settlement in 11 months.
Crude still remained buoyed by heightened tensions between the West and
The tightening of sanctions against Iran by the US, an upcoming EU embargo on Iranian oil imports, as well as US pressure on Asian nations to cut imports of Iranian oil have raised concerns that upward pressure on oil prices is being generated by moves by buyers to secure alternative oil supplies.
The US Energy Secretary, Steven Chu, moved to calm such concerns on Thursday.
He told reporters in
Oil shipments from Saudi Arabia, the United Arab Emirates, Kuwait, Iraq and Iran totalling around 17m bbl/day pass through the Straits of Hormuz, this volume represents 35% of all seaborne traded oil and 20% of oil traded worldwide.
Further supply tightness has been generated by disruption to North Sea supplies as well as lost exports, which total more than 500,000 bbl/day, from other producers in the Middle East and North Africa such as South Sudan, Yemen, and Syria amid ongoing unrest in those
|
http://www.icis.com/Articles/2012/03/02/9537612/crude-falls-off-highs-as-saudi-supply-worries-ease.html
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Host name resolution
Updated: January 21, 2005
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2
Host name resolution
Host name resolution means successfully mapping. You can assign multiple host names to the same host.
Windows Sockets (Winsock) programs, such as Internet Explorer and the FTP utility, can use one of two values for the destination to which you want to connect: the IP address or a host name. When the IP address is specified, individual people can assign and use. A domain name is a structured name in a hierarchical namespace called the Domain Name System (DNS). An example of a domain name is.
Nicknames are resolved through entries in the Hosts file, which is stored in the systemroot\System32\Drivers\Etc folder. For more information, see TCP/IP database files.
Domain names are resolved by sending DNS name queries to a configured DNS server. The DNS server is a computer that stores domain name-to-IP address mapping records or has knowledge of other DNS servers. The DNS server resolves the queried domain name to an IP address and sends the result back.
You are required to configure your computers with the IP address of your DNS server in order to resolve domain names. You must configure Active Directory-based computers running Windows XP Professional or Windows Server 2003 operating systems with the IP address of a DNS server.
For more information, see DNS defined.
|
https://technet.microsoft.com/en-us/library/cc739738(d=printer,v=ws.10).aspx
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Using JMX and J2SE 5.0 to Securely Manage Web Applications
JMX (Java Management Extensions) supplies tools for managing
local and remote applications, system objects, devices, and more.
This article will explain how to remotely manage a web application
using JMX (JSR
160). It will explain the code needed inside of the application to
make it available to JMX clients and will demonstrate how to
connect to your JMX-enabled application using different clients
such as "">MC4J and
jManage. Securing the
communication layer using the RMI protocol and JNDI is also covered
in detail.
We will review a simple web application that monitors the number
of users that have logged in and exposes that statistic via a
secure JMX service. We will also run multiple instances of this
application and track statistics from all running instances. The
sample web application can be downloaded "/today/2005/11/15/jmxapp.zip">here. It requires you to have the "">J2SE 5.0 SDK
installed and your
JAVA_HOME environment variable
pointing to the base installation directory. J2SE 5.0 implements
the JMX API, version 1.2, and the JMX Remote API, version 1.0. A
supporting servlet container is also required; I'm using
Apache Tomcat
5.5.12. I'm also using Apache
Ant to build the sample application.
Setting up the Sample Application
Download the sample application and
create a WAR file with
ant war (for more
details, see the comments in build.xml). Copy jmxapp.war
to Tomcat's webapps directory. Assuming Tomcat is running on
your local machine on port 8080, the URL for the application will
be:
[/prettify][/prettify]
[prettify]
If you see a login screen that prompts you for your username
and password, all is well.
Tracking Some Meaningful Data
The sample application uses the
"">Struts framework to submit the
login form. Upon submission, the
LoginAction.execute(..) method is executed, which quite
simply checks that the user ID is "hello" and the password is
"world." If both are true, then the login was successful and
control is forwarded to login_success.jsp; if not, then we
go back to the login form. Depending on whether the login was
successful or not, the
incrementSuccessLogins(HttpServletRequest) method or the
incrementFailedLogins(HttpServletRequest) method is
called. Let's have a look at
incrementFailedLogins(HttpServletRequest):
[/prettify][/prettify]
[prettify] private void incrementFailedLogins (HttpServletRequest request) { HttpSession session = request.getSession(); ServletContext context = session.getServletContext(); Integer num = (Integer) context.getAttribute( Constants.FAILED_LOGINS_KEY); int newValue = 1; if (num != null) { newValue = num.intValue() + 1; } context.setAttribute( Constants.FAILED_LOGINS_KEY, new Integer(newValue)); }
The method increments a
FAILED_LOGINS_KEY variable
that is stored in application scope. The
incrementSuccessLogins(HttpServletRequest) method is
implemented in a similar way. The application now keeps track of
how many people successfully logged in and how many failed
authentication. That's great, but how do we access this data?
That's where JMX kicks in.
Creating JMX MBeans
"">
The basics of MBeans and where they fit into the JMX
architecture is beyond the scope of this article. We will simply
create, implement, expose, and secure an MBean for our application.
We are interested in exposing two pieces of data corresponding to
the following two methods. Here is the our simple MBean
interface:
[/prettify][/prettify]
[prettify] public interface LoginStatsMBean { public int getFailedLogins(); public int getSuccessLogins(); }
Quite simply, the two methods return the number of failed and
successful logins. The
LoginStatsMBean implementation,
LoginStats, provides a concrete implementation for
both methods. Let's have a look at the
getFailedLogins() implementation:
[/prettify][/prettify]
[prettify] public int getFailedLogins() { ServletContext context = Config.getServletContext(); Integer val = (Integer) context.getAttribute( Constants.FAILED_LOGINS_KEY); return (val == null) ? 0 : val.intValue(); }
The method returns a value stored in the
ServletContext. The
getSuccessLogins()
method is implemented in a similar manner.
Creating and Securing a JMX Agent
The
JMXAgent class that manages the JMX-related
aspects of the application has a few responsibilities:
- Create an
MBeanServer.
LoginStatsMBeanwith the
MBeanServer.
- Create a
JMXConnector, allowing remote clients
to connect.
- Involves use of JNDI.
- Must also have an RMI registry running.
- Securing the
JMXConnectorusing a username and
- Starting and stopping the
JMXConnectoron
application start and stop, respectively.
The class outline for
JMXAgent is:
[/prettify][/prettify]
[prettify] public class JMXAgent { public JMXAgent() { // Initialize JMX server } public void start() { // Start JMX server } // called at application end public void stop() { // Stop JMX server } }
Let's understand the code in the constructor that will enable
clients to remotely monitor the application.
Creating a
MBeanServer with MBeans
We first create a
MBeanServer object, which is the
core component of the JMX infrastructure. It allows us to expose
our MBeans as manageable objects. The
MBeanServerFactory.createMBeanServer(String) method
makes this an easy task. The parameter supplied is the domain of
the server. Think of this as the unique name for this
MBeanServer. Next, we register the
LoginStatsMBean with the
MBeanServer. The
MBeanServer.registerMBean(Object, ObjectName) method
takes in as a parameter an instance of the MBean implementation and
an object of type
ObjectName that uniquely identifies
the MBean; in this case,
DOMAIN + ":name=LoginStats"
suffices.
[/prettify][/prettify]
[prettify] MBeanServer server = MBeanServerFactory.createMBeanServer(DOMAIN); server.registerMBean( new LoginStats(), new ObjectName(DOMAIN + ":name=LoginStats"));
Creating the
JMXServiceURL
At this point, we have created an
MBeanServer and
registered
LoginStatsMBean with it. The next step is
to make the server available to clients. To do this, we must create
a
JMXServiceURL, which represents the URL that clients
will use to access the JMX service:
[/prettify][/prettify]
[prettify] JMXServiceURL url = new JMXServiceURL( "rmi", null, Constants.MBEAN_SERVER_PORT, "/jndi/rmi://localhost:" + Constants.RMI_REGISTRY_PORT + "/jmxapp");
Let's look closely at the above line of code. The
JMXServiceURL constructor takes four arguments:
- The protocol to be used when connecting (
rmi,
jmxmp,
iiop, etc.).
- The host machine of the JMX service. Supplying
localhostas
argument would also suffice however, supplying
null
forces the
JMXServiceURLto find the best possible
name for the host. For example, in this case, it would translate
nullto
zarar, which is the name of my
computer.
- The port used by the JMX service.
- Finally, we must supply the URL path that indicates how to
find the JMX service. In this case, it would be
/jndi/rmi://localhost:1099/jmxapp.
The URL path warrants more explanation:
[/prettify][/prettify]
[prettify] /jndi/rmi://localhost:1099/jmxapp
The
/jndi part is saying that the client must do a
JNDI lookup for the JMX service. The
rmi://localhost:1099 indicates that there is an RMI
registry running on
localhost at port 1099 (more on the RMI
registry later). The
jmxapp is the unique identifier
of this JMX service in the RMI registry. A
toString()
on the
JMXServiceURL object yields the following:
[/prettify][/prettify]
[prettify] service:jmx:rmi://zarar:9589/jndi/rmi://localhost:1100/jmxapp
The above is the URL clients will eventually use to connect to
the JMX service. The J2SE 5.0 documentation has more on the
structure of this URL.
Securing the Service
J2SE 5.0 provides a mechanism for JMX to authenticate users in
an easy manner. I have created a simple text file that stores
username and password information. The contents of the file
are:
[/prettify][/prettify]
[prettify] zarar siddiqi fyodor dostoevsky
The users
zarar and
fyodor are
authenticated by the passwords
siddiqi and
dostoevsky, respectively. The next step is to create
and secure a
JMXConnectorServer that exposes the
MBeanServer. The path of the username/password file is
stored in a
Map under the key,
jmx.remote.x.password.file. This
Map is
later used when creating the
JMXConnectorServer.
[/prettify][/prettify]
[prettify] ServletContext context = Config.getServletContext(); // Get file which stores jmx user information String userFile = context.getRealPath("/") + "/WEB-INF/classes/" + Constants.JMX_USERS_FILE; // Create authenticator and initialize RMI server Map<string> env = new HashMap<string>(); env.put("jmx.remote.x.password.file", userFile);
Now let's create the
JMXConnectorServer. The
following line of code does the trick.
[/prettify][/prettify]
[prettify] connectorServer = JMXConnectorServerFactory. newJMXConnectorServer(url, env, server);
The
method takes in as arguments three objectsmethod takes in as arguments three objects
JMXConnectorServerFactory.newJMXConnectorServer(JMXServiceURL,
Map, MBeanServer)
we have just created: the
JMXServiceURL, the
Mapthat stores authentication information, and the
MBeanServer. The
connectorServerinstance
variable allows us to
start()and
stop()
the
JMXConnectorServeron application start and stop,
respectively.
Although the J2SE 5.0 implementation of JSR 160 is quite
powerful, other implementations, such as "">MX4J, provide classes that offer
convenient features such as obfuscating of passwords, namely
the
class.class.
"">
PasswordAuthenticator
Starting the RMI Registry
Earlier, I alluded to a RMI registry and said that a JNDI lookup
is done when accessing the service. However, right now we
don't have a RMI registry running, so a JNDI lookup will
fail. A RMI registry started may be started manually or
programmatically.
Using the Command Line
On your Windows or Linux command line, type the following to
start a RMI registry:
[/prettify][/prettify]
[prettify] rmiregistry &
This will start the RMI registry at your default host and port,
localhost and 1109, respectively. However, for our web application
we cannot rely on an RMI registry being available on application
start and would rather take care of this in our code.
Programmatically Starting the RMI Registry
To programmatically start the RMI Registry, you can use the
LocateRegistry.createRegistry(int port) method. The
method returns an object of the type
Registry. We store
this reference, as we would like to stop the registry on application
end. Right before we start our
JMXConnectorServer in
JMXAgent.start(), we first start an RMI registry using
the following line of code:
registry = LocateRegistry.createRegistry(
Constants.RMI_REGISTRY_PORT);
On application end, after stopping the
JMXConnectorServer in
JMXAgent.stop(),
the following method is called to stop the registry:
UnicastRemoteObject.unexportObject(registry, true);
Note that the
StartupListener class triggers
application start and end tasks.
Accessing our JMX Service
There are a number of ways we can access JSR 160 services. We
may do so programmatically or by using a GUI.
Connecting using MC4J
Deploy the application by copying jmxapp.war to Tomcat's
webapps directory. Download and install "">MC4J. Once
installed, create a new
Server Connection of the type
JSR 160 and
specify the
Server URL that was printed in the application server
logs on application startup. In my case, it was:
[/prettify][/prettify]
[prettify] service:jmx:rmi://zarar:9589/jndi/rmi://localhost:1100/jmxapp
Supply the username and password, which MC4J refers to as
"Principle" and "Credentials," respectively. Clicking Next takes
you to a screen where you can customize your classpath. Default
settings should work fine, and you can click on Finish to connect
to the JMX service. Once connected, browse the MC4J tree structure
as shown in Figure 1 until you reach the Properties option of the
LoginStats MBean implementation.
Figure 1. MC4J view
Clicking on the Properties option displays the statistics, as
shown in Figure 2:
Figure 2. Properties window
Connecting to a "Cluster" using jManage
Deploy the application by copying jmxapp.war to Tomcat's
webapps directory. Note the URL that is printed on
application startup. Next, deploy another instance of this
application by changing the
RMI_REGISTRY_PORT and
MBEAN_SERVER_PORT variables in the
Constants class so that the second instance of the
application will not try to use ports that are already in use.
Change the
app.name property in the build.xml
file so that the new instance will be deployed in a different
context (e.g.,
jmxapp2). Do an
ant clean war, which will
create jmxapp2.war in the base directory. Copy
jmxapp2.war to Tomcat's webapps directory. The
application will deploy and now you have two instances of the same
application running. Again, note the URL that is printed on
startup.
Download and install
"">jManage. Once installed, use
jManage's web interface to create a JSR 160 application by
following the Add New Application link found on the home page.
The Add Application page is shown in Figure 3:
Figure 3. Add Application page
Once again, use the appropriate username, password, and URL.
Repeat the steps for the second application that is deployed. Once
you have created the two applications, you must create a cluster by
following the Add New Application Cluster link found on the home
page. Simply add the two applications you have already created to
your cluster, as shown in Figure 4:
Figure 4. Add Application Cluster page
That's it, we are done! From the home page, click on one of the
applications in the cluster and then click on the Find More
Objects button. You will see the
name=LoginStats MBean; click on
it, and you will see the
FailedLogins and
SuccessLogins attributes that we have exposed.
Clicking on the Cluster View link on the same page will display a
page similar to Figure 5, where a running count of statistics from
both applications can be seen:
"Cluster view for jmxapp and jmxapp2" />
Figure 5. Cluster view for
jmxapp and
jmxapp2
Try a few logins on both applications
( and) and see how the numbers
change.
Conclusion
You now know how to "JMX enable" your new and existing web
applications and securely manage them using MC4J and jManage.
Although J2SE 5.0 provides a powerful implementation of the JMX
specification, other open source projects such as "">XMOJO and
MX4J provide additional
features, such as connecting via web interfaces and more. Interested
readers who want to learn more about JMX will find "">Java
Management Extensions by "">J. Steven
Perry a very useful book. For those interested in remote
application management, Connecting JMX Clients and Servers
by Jeff Hanson is a valuable resource that provides real-world
examples.
Resources
- Sample application for this
article
- MC4J: JMX Management Console for Java applications and the J2EE application
server
- jManage: Web- and command-line-based JMX client
- "">
RMI Package Summary: Creating RMI connectors
- Apache Struts: Open
source framework for building Java web applications
- Apache Ant: Java-based
build tool
- "">
MBean tutorial
- Login or register to post comments
- Printer-friendly version
- 9609 reads
|
https://today.java.net/pub/a/today/2005/11/15/using-jmx-to-manage-web-applications.html?page=2
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Credit to functions related to the Luhn credit card […]
I came up with two versions of my intermediate Luhn sum function: one that
creates a list and modifies it in-place, and another that uses iterators
instead. My solution is available on github.
Just numbers; just one recursive branch / two vectors.
First I missed that (- 10 (modulo 30 10)) is 10, not 0.
Then Ikarus thought that (modulo -30 10) => 10. Sigh.
Wanted to see if I still remember Linux assembly so I wrote my answer in it: github. It might be possible to make the answer more readable and shorter, but that would take me an additional hour…
Factor Language solution.
( scratchpad ) 2771 luhn? .
f
( scratchpad ) 2771 make-luhn dup . luhn? .
27714
t
( scratchpad ) 49927398716 luhn? .
t
( scratchpad ) 49927398716 make-luhn dup . luhn? .
499273987168
t
[…] or Canadian Social Insurance Numbers are validated you can take a look at this Programming Praxis article . It’s all about a simple, tiny, patented (now public domain) algorithm invented by […]
My python 3.x solution:
nice solution
My try in REXX
My solution in Python 3.2
def addCCDigits(number):
number = int(str(number)[::-1])
numString = str(number)
check = False
mySum = 0
for ch in numString:
num = int(ch)
if (check):
num = num * 2
strNum = str(num)
for c in strNum:
num2 = int(c)
mySum += num2
else:
mySum += num
check = not check
return mySum
def validate(number):
mySum = addCCDigits(number)
if (mySum % 10 == 0):
return True
else:
return False
def addCheckDigit(number):
mySum = addCCDigits(number * 10 + 1) – 1
digit1 = int(mySum / 10)
goal = (digit1 + 1) * 10;
diff = goal – mySum;
mySum = int(mySum / 10)
strNumber = str(number) + str(diff)
number = int(strNumber)
return number
[\code]
|
http://programmingpraxis.com/2011/04/08/credit-card-validation/?like=1&source=post_flair&_wpnonce=81d32a40f7
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
SMILA/Project Concepts/Blackboard Service Concept
Contents
Description
Design a service to ease management of SMILA records during workflow processing.
Discussion
When to persist data into the a storage.
User:G.schmidt.brox.de: Several month ago we have had a discussion, how to persist information into the search index. At that time i proposed a configuration mechanism for controlling the storage/indexing processes when leaving BPEL. At that time we moved this discussion in the direction of using BPEL pipelets for e.g. indexing purposes because that way we are free to configure when and where to use this option. From my point of view such operations should be a general paradime. Either we use by default a configurable process for indexing/storage at the end of BPEL processing or we use pipelets for this case. Please share your thoughts.
- Juergen.schumacher.empolis.com Yes, that is probably another valid way to configure it. It would be a minor change: Instead of writing back the records to the persistence layer on at the commit after each workflow which also invalidates the blackboard content, we could introduce a relatively simple pipelet to trigger the commit, after which the blackboard content must stay valid until the router has finished processing the records. So one would have more control over record persistence (the current concept causes each processed record to be persisted after each successfully finished workflow). The reason why I put this outside of BPEL was that I distinguished between "infrastructure" elements that are required to run with each workflow and "application" elements that are different in different setups. To me, "insert to index" is mainly an "application" element, as I can think of SMILA setups that are not used to build search indices. In my picture, persistence was a "infrastructure" element: To be able to chain several workflows via queues it is necessary that the blackboard content is persisted after single each workflow such that the next workflow can access the result (ok, strictly speaking this is not necessary after final workflows that are not followed by others anymore). So I thought it would safer to enforce record persistence this way, and that a workflow creator this way can concentrate more on the "create my application workflow" side of his problem instead of the "making my application work" side. If the team is more in favor of a more flexible solution, no problem. Just vote here (-:
Technical proposal
Purpose of the Blackboard Service is the management of SMILA record data during processing in a SMILA component (Connectivity, Workflow Processor). The problem is that different processing engines could require different physical formats of the record data (see SMILA/Project Concepts/Data Model and XML representation for a discussion). Because this means either complex implementations of the logical data model or big data coversion problems, the idea is to keep the complete record data only on a "blackboard" which is not pushed through the workflow engine itself and to extract only a small "workflow object" from the blackboard to feed the workflow engine. This workflow object would contain only the part from the complete record data which the workflow engine needs for loop or branch conditions (and the record ID, of course). Thus it could be efficient enough to do the conversion between blackboard and workflow object before and after each workflow service invocation. As a side effect, the blackboard service could hide the handling of record persistence from the services to make service development easier.
Basics
This figure given an overview about how these services could be composed:
Note that the use of the Blackboard service is not restricted to workflow processing, but it can be also used in Connectivity to create the initial SMILA record from the data sent by Crawlers. This way the persistence services are hidden from Connectivity, too.
It is assumed that the workflow engine itself (which will be a third party product usually) must be embedded into SMILA using some wrapper that translates incoming calls to workflow specific objects and service invocations from the workflow into real SMILA service calls. At least with a BPEL engine like ODE it must be done this way. In the following this wrapper is called the Workflow Integration Service. This workflow integration service will also handle the necessary interaction between workflow engine and blackboard (see next section for details).
For ODE, the use of Tuscany SCA Java would simplify the development of this integration service because it could be based on the BPEL implementation type of Tuscany. However, in the first version we will create an SMILA specific workflow integration service for ODE that can only orchestrate SMILA pipelet because the Tuscany BPEL implementation type does not yet support service references (see this mail in the Tuscany user mailing list).
- Update 2008-04-21: Tuscany is making progress on this: mail in dev mailing list
Workflow
The next picture illustrates how and which data flows through this system:
In more detail:
- Listener receives record from queue.
bq. The record usually contains only the ID. In special cases it could optionally include some small attribute values or annotations that could be used to control routing inside the message broker.
- Listener calls blackboard to load record data from persistence service and writes attributes contained in message record to blackboard.
- Listener calls workflow service with ID from message record.
- Workflow integration creates workflow object for ID.
bq. The workflow object uses engine specific classes (e.g. DOM for ODE BPEL engine) to represent the record ID and some chosen attributes that are needed in the engine for condition testing or computation. It's an configuration option of the workflow integration which attributes are to be included. In a more advanced version it may be possible to analyse the workflow definition (e.g. the BPEL process) to determine which attributes are needed.
- Workflow integration invokes the workflow engine. This causes the following steps to be executed a couple of times:
- Workflow engine invokes SMILA service (pipelet). At least for ODE BPEL this means that the engine calls the integration layer which in turn routes the request to the invoked pipelet. So the workflow integration layer receives (potentially modified) workflow objects.
- Workflow integration writes workflow objects to blackboard and creates record IDs. The selected pipelet is called with these IDs
- Pipelet processes IDs and manipulates blackboard content. The result is a new list of record IDs (usually identical to the argument list, and usually the list has length 1)
- Workflow integration creates new workflow objects from the result IDs and blackboard content and feeds them back to the workflow engine.
- Workflow engine finishes successfully and returns a list of workflow objects.
bq. If it finishes with an exception, instead of the following the Listener/Router has to invalidate the blackboard for all IDs related to the workflow such that they are not committed back to the storages, and also it has to signal the message broker that the received message has not been processed successfully such that the message broker can move it to the dead letter queue.
- Workflow integration extracts IDs from workflow objects and returns them.
- Router creates outgoing messages with message records depending on blackboard content for given IDs.
bq. Two things may need configuration here: When to create an outgoing message to which queue (never, always, depending on conditions of attribute values or annotations) - this could also be done in workflow by setting a "nextDestination" annotation for each record ID. And which attributes/annotations are to be included in the message record - if any.
- Router commits IDs on blackboard. This writes the blackboard content to the persistence services and invalidates the blackboard content for these IDs.
- Router sends outgoing messages to message broker.
Content on Blackboard
The Blackboard contains two kinds of content:
- Records:* All records currently processed in this runtime process. The structure of an record is defined in SMILA/Project Concepts/Data Model and XML representation. Clients manipulate the records through Blackboard API methods. This way the records are completely under control of the Blackboard which may be used in advanced versions for optimised communication with the persistence services.
Records enter the blackboard by one of the following operations:
- create: create a new record with a given ID. No data is loaded from persistence, if a record with this ID exists already in the storages it will be overwritten when the created record is commited. E.g. used by Connectivity to initialize the record from incoming data.
- load: loads record data for the given ID from persistence (or prepare it to be loaded). Used by a client to indicate that it wants to process this record.
- split: creates a fragment of a given record, i.e. the record content is copied to a new ID derived from the given by adding a frament name (see [ID Concept] for details).
All these methods should care about locking the record ID in the storages such that no second runtime process can try to manipulate the same record.
A record is removed from the blackboard with one of these operations:
- commit: all changes are written to the storages before the record is removed. The record is unlocked in the database.
- invalidate: the record is removed from the blackboard. The record is unlocked in the database. If the record was created new (not overwritten) on this blackboard it should be removed from the storage completely.
- Notes:* Additional temporary data created by pipelets to be used in later pipelets in the same workflow, but not to be persisted in the storages. Notes can be either global or record specific (associated to a record ID). Record specific notes are copied on record splits and removed when the associated record is removed from the blackboard. In any case a note has a name and the value can be of any serializable Java class such that they can be accessed from separated services in own VMs.
bq. A nice extension would be workflow instance specific notes such that a pipelet can pass non persistent information to another pipelet invoked later in the workflow which is not associated to a single record, but does not conflict with information from different workflow instances like global notes would (based on the assumption that the workflow engine supports multi-threaded execution). This information would be removed from the blackboard after the workflow instance has finished. However, it has to be clarified how they are can associated to the workflow instance, even when accessed from a remote VM.
Service Interfaces
The Blackboard will be implemented as an OSGi service. The interface could look similar to the following definition. It is getting quite big, so maybe it makes sense to divide it up into the different parts (handling of lifecycle, literal values, object values, annotation, notes, attachments?) for better readability? We'll see about this when implementing.
interface Blackboard { // record life cycle methods void create(ID id) throws BlackboardAccessException; void load(ID id) throws BlackboardAccessException; ID split(ID id, String fragmentName) throws BlackboardAccessException; void commit(ID id) throws BlackboardAccessException; void invalidate(ID id); // factory methods for attribute values and annotation objects // Literal and Annotation are just interfaces, // blackboard implementation can determine the actual types for optimization Literal createLiteral(); Annotation createAnnotation(); // record content methods // - record metadata // for referenced types see interfaces proposed in [[SMILA/Project Concepts/Data Model and XML representation]] // for string format of an attribute path see definition of AttributePath class below. // -- basic navigation Iterator<String> getAttributeNames(ID id, Path path) throws BlackboardAccessException; Iterator<String> getAttributeNames(ID id) throws BlackboardAccessException; // convenience for getAttributeNames(ID, null); boolean hasAttribute(ID id, Path path) throws BlackboardAccessException; // -- handling of literal values // navigation support boolean hasLiterals(ID id, Path path) throws BlackboardAccessException; int getLiteralsSize(ID id, Path path) throws BlackboardAccessException; // get all literal attribute values of an attribute (index of last step is irrelevant) // a client should not expect the blackboard to reflect changes done to these object automatically, // but always should call one of the modification methods below to really set the changes. List<Literal> getLiterals(ID id, Path path) throws BlackboardAccessException; // get single attribute value, index is specified in last step of path, defaults to 0. Literal getLiteral(ID id, Path path) throws BlackboardAccessException; // modification of attribute values on blackboard void setLiterals(ID id, Path path, List<Literal> values) throws BlackboardAccessException; // set single literal value, index of last attribute step is irrelevant void setLiteral(ID id, Path path, Literal value) throws BlackboardAccessException; // add a single literal value, index of last attribute step is irrelevant void addLiteral(ID id, Path path, Literal value) throws BlackboardAccessException; // remove literal specified by index in last step void removeLiteral(ID id, Path path) throws BlackboardAccessException; // remove all literals of specified attribute void removeLiterals(ID id, Path path) throws BlackboardAccessException; // -- handling of sub-objects // navigation: check if an attribute has sub-objects and get their number. boolean hasObjects(ID id, Path path) throws BlackboardAccessException; int getObjectSize(ID id, Path path) throws BlackboardAccessException; // deleting sub-objects // remove sub-objects specified by index in last step void removeObject(ID id, Path path) throws BlackboardAccessException; // remove all sub-objects of specified attribute void removeObjects(ID id, Path path) throws BlackboardAccessException; // access semantic type of sub-object attribute values. // semantic types of literals are modified at literal object String getObjectSemanticType(ID id, Path path) throws BlackboardAccessException; void setObjectSemanticType(ID id, Path path, String typename) throws BlackboardAccessException; // -- annotations of attributes and sub-objects. // annotations of literals are accessed via the Literal object // use null, "" or an empty attribute path to access root annotations of record. // use PathStep.ATTRIBUTE_ANNOTATION as index in final step to access the annotation // of the attribute itself. Iterator<String> getAnnotationNames(ID id, Path path) throws BlackboardAccessException; boolean hasAnnotations(ID id, Path path) throws BlackboardAccessException; boolean hasAnnotation(ID id, Path path, String name) throws BlackboardAccessException; List<Annotation> getAnnotations(ID id, Path path, String name) throws BlackboardAccessException; // shortcut to get only first annotation if one exists. Annotation getAnnotation(ID id, Path path, String name) throws BlackboardAccessException; void setAnnotations(ID id, Path path, String name, List<Annotation> annotations) throws BlackboardAccessException; void setAnnotation(ID id, Path path, String name, Annotation annotation) throws BlackboardAccessException; void addAnnotation(ID id, Path path, String name, Annotation annotation) throws BlackboardAccessException; void removeAnnotation(ID id, Path path, String name) throws BlackboardAccessException; void removeAnnotations(ID id, Path path) throws BlackboardAccessException; // - record attachments boolean hasAttachment(ID id, String name) throws BlackboardAccessException; byte[] getAttachment(ID id, String name) throws BlackboardAccessException; InputStream getAttachmentAsStream(ID id, String name) throws BlackboardAccessException; void setAttachment(ID id, String name, byte[] name) throws BlackboardAccessException; InputStream setAttachmentFromStream(ID id, String name, InputStream name) throws BlackboardAccessException; // - notes methods boolean hasGlobalNote(String name) throws BlackboardAccessException; Serializable getGlobalNote(String name) throws BlackboardAccessException; void setGlobalNote(String name, Serializable object) throws BlackboardAccessException; boolean hasRecordNote(ID id, String name) throws BlackboardAccessException; Serializable getRecordNote(ID id, String name) throws BlackboardAccessException; void setRecordNote(ID id, String name, Serializable object) throws BlackboardAccessException; // This is certainly not complete ... just to give an idea of how it could taste. // lots of convenience methods can be added later. }
public class Path implements Serializable, Iterable<PathStep> { // string format of attribute path could be something like // "attributeName1[index1]/attributeName2[index2]/..." or // "attributeName1@index1/attributeName2@index2/...". // The first is probably better because similar to XPath? // The specification of index is optional and defaults to 0. // Whether the index refers to a literal or a sub-object depends on methods getting the argument public static final char SEPARATOR = '/'; public Path(); public Path(Path path); public Path(String path); // extend path extended more steps. This modifies the object itself and returns it again // for further modifications, e.g. path.add("level1").add("level2"); public Path add(PathStep step); public Path add(String attributeName); public Path add(String attributeName, int index); // remove tail element of this. This modifies the object itself and returns it again // for further modifications, e.g. path.up().add("siblingAttribute"); public Path up(); public Iterator<PathStep> iterator(); public boolean isEmpty(); public PathStep get(int positionInPath); public String getName(int positionInPath); public int getIndex(int positionInPath); public int length(); public boolean equals(Path other); public int hashCode(); public String toString(); }
public class PathStep implements Serializable { public static final int ATTRIBUTE_ANNOTATION = -1; private String name; private int index = 0; / index of value in multivalued attributes. default is first value. public PathStep(String name); public PathStep(String name, int index); public String getName(); public int getIndex(); public boolean equals(AttributePath other); public int hashCode(); public String toString(); }
The business interface of a pipelet to be used in an SMILA workflow will be rather simple. It can get access to the local blackboard service using OSGi service lookup or by injection using OSGi Declarative Services. Therefore the main business method just needs to take a list of record IDs as an argument and return a new (or the same) list of IDs as the result. This is the same method that a workflow integration service needs to expose to the Listener/Router component, therefore it makes sense to use a common interface definition for both. This way it is possible to deploy processing runtimes with only a single pipelet without having to create dummy workflow definitions, because a pipelet can be wired up to the Listener/Router immediately. Becasue remote communication with separated pipelets (see below) will be implemented later pipelets (and therefore workflow integrations, too) must be implemented as OSGi services such that the remote communication can be coordinated using SCA. Thus, interfaces could look like this:
interface RecordProcessor { ID[] process(ID[] records) throws ProcessingException; }
interface Pipelet extends RecordProcessor { // specific methods for pipelets }
interface WorkflowIntegration extends RecordProcessor { // specific methods for workflow integration services. }
What about pipelets running in a separate VM?
- Not relevant for initial implementation. This will be added in advanced versions and discussed in more detail then.*
We want to be able have pipelets running in separated VM if they are known to be unstable or non-terminating in error conditions. This can be supported by the blackboard service like this:
The separated pipelet VM would have a proxy blackboard service that coordinates the communication with the master blackboard in the workflow processor VM. Only the record ID needs to be sent to the separated pipelets. However, the separated pipelet must be wrapped to provide control of the record life cycle on the proxy blackboard, especially because the changes done in the remote blackboard must be committed back to the master blackboard when the separated pipelet has finished successful, or the proxy blackboard content must be invalidated without commit in case of an pipelet error. Possibly, this pipelet wrapper can also provide "watchdog" functionality to monitor the separated pipelet and terminate and restart it in case of endless loops or excessive memory consumption.
|
http://wiki.eclipse.org/index.php?title=SMILA/Project_Concepts/Blackboard_Service_Concept&redirect=no
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Split fasta file
(Difference between revisions)
Revision as of 08:30, 17 April 2009
Problem
PLAN is a free online service for BLAST based sequence annotation provided by the Noble institute. At present the people that run PLAN limit user's queries to 1000 records. This recipe shows how to split a fasta file containing thousands of records into smaller files with filenames that include the range of records included in the file.
Solution
from Bio import SeqIO in_name = "huge_run.fasta" records = list(SeqIO.parse(open(in_name, 'r'), "fasta")) # How many files of exactly 1000 records can we make? nfiles = len(records)/1000 for filenumber in range(nfiles): split_min = filenumber*1000 split_max = ((filenumber+1)*1000)-1 seqs = records[split_min:split_max] out_name = ("_%s-%s." % (split_min+1, split_max+1)).join(in_name.split(".")) out_handle = open(out_name, 'w') SeqIO.write(seqs, out_handle, "fasta") out_handle.close() # That will make a bunch of files with 1000 records, but you will almost always # have some "dregs" to mop up. if len(records) % 1000 != 0: #checking for leftovers lsplit_min = (nfiles+1)*1000 last_name = ("_%s-%s." % (lsplit_min, len(records)-1)).join(in_name.split(".")) last_handle = open(last_name, 'w') lseqs = records[lsplit_min:-1] SeqIO.write(lseqs, last_handle, "fasta") last_handle.close() else: continue
How it works
The basic approach is do use SeqIO.parse() to read the contents of a fasta file into a list, then pick out subsets to write with SeqIO.write() . Perhaps the trickiest looking line is the one that sets each new file's name:
out_name = ("_%s-%s." % (split_min+1, split_max+1)).join(in_name.split("."))
which can be broken down thus:
>>> insert = "_%s-%s." % (1, 1000) >>> insert '_1-1000.' >>> split_name = "huge_run.fasta".split(".") ['huge_run', 'fasta'] >>>(insert).join(split_name) 'huge_run_1-1000.fasta'
|
http://www.biopython.org/w/index.php?title=Split_fasta_file&diff=prev&oldid=2596
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Issue Links
- depends upon
-
-
- is related to
-
-
-
-
-
LUCENE-1421 Ability to group search results by field
- Closed
Activity
- All
- Work Log
- History
- Activity
- Transitions
Remplacing HashDocSet by BitDocSet for hasMoreResult for better performances
This looks good. Someone with better lucene chops should look at the IndexSearcher getDocListAndSet part...
A few comments/questions about the interface:
If you apply all the example docs and hit:*:*&collapse=true
you get 500. We should use: params.required().get( "collapse.field" ) to have a nicer error:
With:*:*&collapse=true&collapse.field=manu&collapse.max=1
the collapse info at the bottom says:
<lst name="collapse_counts">
<int name="has_more_results">3</int>
<int name="has_more_results">5</int>
<int name="has_more_results">9</int>
</lst>
what does that mean? How would you use it? How does it relate to the <result docs?
My turn to miss something
You are right, we have to use params.required().get("collapse.field").
About collapse info:
<int name="has_more_results">3</int> means that the third doc of the result has been collapsed and that some consecutive results having same field has been removed.
Thanks for looking into this Emmanuel.
It appears as if this only collapses adjacent documents, correct?
We should really try to get everyone on the same page... hash out the exact semantics of "collapsing", and the most useful interface. An efficient implementation can follow.
A good starting point might be here:
A good starting point might be here:
Yonik,
You are right, only adjacent documents are collapsed.
I work on a large index ( 2.000.000 documents) growing every day. The first goal was to group results, preserving score ranking and achieving good performances. This "light" implementation meets our needs.
I am currently working on a second implementation taking care of the semantics.
P.S.: Congratulations for this great application.
This release is more conform with the semantics of "field collapsing".
Parameters are:
collapse=true // enable collapsing
collapse.field=[field] // indexed field used for collapsing
collapse.max=[integer] // Start collapsing after n document
collapse.type=[normal|adjacent] // Default value is "normal"
- "adjacent" collapse only consecutive documents.
- "normal" collapse all documents having equal collapsing field.
Corrects a bug on the previous version when using a value greater than 1 as collapse.max parameter.
Question:
Do you need collapse=true when you can detect whether collapse.field has been specified or not?
You're right. As collapse.field is a required field, we don't need more information. My first idea was to copy the behavior of facet.
The last version of the patch.
- Results are now cached using "CollapseCache" (a new instance of SolrCache added on solrconfig.xml)
- The parameter "collapse" has been removed.
This version has been fully tested.
Feedbacks are welcome.
I still maintain a version for the release 1.1.0 (The version we used on our production environment).
I updated the patch so that is applies cleanly with trunk, while I was at it, I:
- fixed a few spelling errors
- made the "collapse.type" parameter parsing to throw an error if the passed field is unknown (rather then quietly using 'normal')
- changed the patch name to include the number. – as we update the patch, use this same name again so it is easy to tell what is the most current.
I also made a wiki page so there are direct links to interesting queries:
- - - - - - -
Again, I will leave any discussion about the lucene implementation to other more qualified and will just focus on the response interface.
Currently if you send the query:*:*&collapse.field=cat&collapse.max=1&collapse.type=normal
you get a response that looks like:
<lst name="collapse_counts">
<int name="hard">1</int>
<int name="electronics">2</int>
<int name="memory">2</int>
<int name="monitor">1</int>
<int name="software">1</int>
</lst>
It looks like that says: for the field 'cat', there is one more result with cat=hard, 2 more results with cat=electronics, ...
How is a client supposed to know how to deal with that? "hard" is tokenized version of "hard drive" – unless it were a 'string' field, the client would need to know how to do that – or the response needs to change.
From a client, it would be more useful to have output that looked something like:
<lst name="collapse_counts">
<str name="field">cat</str>
<lst name="doc">
<int name="SP2514N">1</int>
<int name="6H500F0">1</int>
<int name="VS1GB400C3">2</int>
<int name="VS1GB400C3">1</int>
</lst>
<lst name="count">
<int name="hard">1</int>
<int name="electronics">1</int>
<int name="memory">2</int>
<int name="monitor">1</int>
</lst>
</lst>
"field" says what field was collapsed on,
"doc" is a map of doc id -> how many more collapsed on that field
"count" is a map of 'token'-> how many more collapsed on that field
This way, the client would know what collapse counts apply to which documents without knowing about the schema.
thoughts?
Right, It's more useful.
This new version includes the result as you expect it.
You should add the following constraint on the wiki: The collapsing field must be un-tokenized.
I just took a look at this using the example data:*:*&collapse.field=cat&collapse.max=1&collapse.type=normal&rows=10
<lst name="collapse_counts">
<str name="field">cat</str>
<lst name="doc">
<int>1</int>
<int name="1">2</int>
<int name="2">2</int>
<int name="4">1</int>
<int name="7">1</int>
</lst>
<lst name="count">
<int>1</int>
<int name="card">2</int>
<int name="drive">2</int>
<int name="hard">1</int>
<int name="music">1</int>
</lst>
</lst>
- - -
what is the "<int>1</int>" at the front of each response?
Perhaps the 'doc' results should be renamed 'offset' or 'index', and then have another one named 'doc' that uses the uniqueKey as the index... this would be useful to build a Map.
- - -
Also, check:*:*&collapse.field=cat&collapse.max=1&collapse.type=adjacent&rows=50
ArrayIndexOutOfBoundsException:
- - -
> You should add the following constraint on the wiki: The collapsing field must be un-tokenized.
Anyone can edit the wiki (you just have to make an account) – it would be great if you could help keep the page accurate / useful. JIRA discussion comment trails don't work so well at that...
Re: tokenized... what about it does not work? Are the limitations an different if it is mult-valued? Is it just that if any token matches within the field it will collapse and that may or may not be what you expect?
- - -
Did you get a chance to look at the questions from the previous discussion? I just noticed Yonik posted something new there:
Sorry, my last post was buggy. Here is the correct one. There is no more exception now.
About tokens, if any token matches within the field it will collapse.
When I start implementing collapsing, my need was to to group documents having exact identical field.
I believe that faceting has identical behavior. Lookt at "Graphic card" as example:
I will try to maintain the wiki page.
I guess adjacent collapsing can make sense when one is sorting by the field that is being collapsed.
For the normal collapsing though, this patch appears to implement it by changing the sort order to the collapsing field (normally not desired). For example, if sorting by relevance and collapsing on a field, one would normally want the groups sorted by relevance (with the group relevance defined as the max score of it's members).
As far as how to do paging, it makes sense to rigidly define it in terms of number of documents, regardless of how many documents are in each group. Going back to google, it always displays the first 10 documents, but a variable number of groups. That does mean that a group could be split across pages. It would actually be much simpler (IMO) to always return a fixed number of groups rather than a fixed number of documents, but I don't think this would be less useful to people. Thoughts?
Will Johnson brings up other use-cases:
[...]
> it's also heavily used in
> ecommerce settings. Check out BestBuy.com/circuitcity/etc and do a
> search for some really generic word like 'cable' and notice all the
> groups of items; BB shows 3 per group, CC shows 1 per group. In each
> case it's not clear that the number of docs is really limited at all, ie
> it's more important to get back all the categories with n docs per
> category and the counts per category than it is to get back a fixed
> number of results or even categories for that matter. Also notice that
> neither of these sites allow you to page through the categorized
> results.
Some of this seems very closely related to faceted search, and much of it could be implemented that way now on the client side, but it would take multiple queries to do so.
One could also think about supporting multi-valued fields in the same manner that faceting does.
Adjacent collapsing is useful because it preserves the pertinence of the sort.
The sorting is not modified. I copy the current sort to do a new search.
I am currently working on taking care of type field (int).
> The sorting is not modified. I copy the current sort to do a new search.
Perhaps if you outlined the algorithm you use, it would clear up some things.
It looks like you make a copy of the Sort and insert a primary sort on the field to be collapsed, and then process the same way as you would for the "ADJACENT" option. If the original sort was by relevance, this doesn't give you the groups sorted by relevance, right?
Oh I see... the modified sort is just to build the filter.
The building-the-filter part is a problem though... asking for all matching docs in sorted order isn't that scalable.
If we get the interface right though, more efficient implementations can follow.
For that reason, it might be good for implementatin details like "collapseCache" to be private.
Correct, except that collapse result is only used as filter to the final result to hide collapsed documents.
P.S.: Sorry, if my answers are a little short, I am not perfectly fluent in english.
Any thoughts on what the faceting semantics for field collapsing should be?
That is, should faceting apply to the collapsed results or the pre-collapsed results?
I think the pre-collapsed results.
Yes, it seems like faceting should be for pre-collapsed.
Do we have to make a choice ? Both behaviors are interesting.
What about a new parameter like collapse.facet=[pre|post] ?
We facet on the complete set of documents matching a query, even when the user only requests the top 10 matches. It seems we should do the same here. The set of documents is the same, the only difference is what "top" documents are returned.
New release:
- Fieldcollapsing added on DisMaxRequestHandler
- Types are correctly handled on collapsed field
No real changes. Updated to apply with trunk.
Moved the valid values for CollapseType to a 'common' package
- - - -
as a side note, when you make a patch, its easiest to deal with if the path is relative to the solr root directory.
src/java/org/apache/solr/search/SolrIndexSearcher.java
is better then:
/Users/ekeller/Documents/workspace/solr/src/java/org/apache/solr/search/SolrIndexSearcher.java
This new patch resolves a performance issues.
I have added time informations for monitoring performances:
<str name="time">57/5</str>
The first value is the elapsed time (in milliseconds) needed to compute collapsed informations (CollapseFilter.ajacentCollapse method).
The second value is the elapsed time needed to compute results informations (CollapseFilter.getMoreResults method).
We are using Solr (with collapsing patch) on a large index in production environnment (120GB with more than 3 000 000 documents).
P.S.: This time, the patch is relative to the solr root directory.
It would be nice for this patch to also report on what documents were actually collapsed - for example, if the result list contained:
doc1
doc2
doc3
and doc2 and doc3 were collapsed, this would be reflected in the XML result as, so that one could determine that (forgive my crap visual representation):
doc1
-> doc2
-> doc3
Regards.
Imagine a case where a Solr database contains news stories from many newspapers and some wire services.
A single wire story will typically be picked up and reprinted in many different papers, ranging from national papers like the NYTimes, to small town papers. My database will have all of them, and possibly also the original from the wire service. Each paper will choose their own headline, and will edit the story differently for length to fill a hole on the printed page, so they cannot be trivially detected as duplicates, but to my users, they basically are.
I need to detect and group together these "duplicates" when displaying search results.
So let's say every story has had an integer hash value calculated of the first X words of the lead paragraph, and that value is indexed and stored (e.g. "similarity_hash"), as a way to detect duplicate stories.
I would want to Field Collapse my results on that hash value, so that all occurrences of the same story are lumped together.
Also, my users would much prefer the most "authoritative" version of the story to be displayed as the primary result, with a count and link to the collapsed results. Authoritativeness could be coded as simple as 1) Wire Service, 2) National Paper, 3) Regional Paper, 4) Small Town Paper, which could be index and stored as an integer "authority". (For finer-grained authority we could store the newspapers circulation numbers.)
Then I could display to users:
"Dog Bites Man"
New York Times, link to see 77 other duplicates
So, finally getting to the point, would it be possible to make this feature work such that it field collapses results on one field ("similarity_hash"), selects the one to return based on another field ("authority" or "circulation')? (While allowing the results to be sorted by a third field, e.g. date or relevance.)
Perhaps by a new parameter?
collapse.authority=[field] // indexed field used for selecting which result from collapsed group to return, default being... ?
If this sounds familiar, it is somewhat similar to what Google News is doing:
Final question: Do you think Field Collapse could work nicely with
SOLR-303 Federated Search, or is that a bridge too far?
Hi,
I am new to the list and to Solr, so I appologize in advance if I say something silly.
I have been playing with the field collapse patch and I have a couple of questions and I have noticed a couple of issues. What is the intended use / audience for the field collapsing patch. One of the issues I see is that the sort order is changed during normal field collapsing and this causes problems if I want the results ordered based on relevancy. Another issue, is that the backfilling of the results, if there is not enough, is done from the deduped results rather than getting more results from the index. Is this by design?
Thanks!!
ttyl
Dima
Hi,
I am new to Solr, and this thread in particular, so please excuse any questions that seem obvious.
I am investigating converting an existing FAST installation to Solr. I've been able to see how to convert all my queries to Solr/Lucene with little or no trouble, with the exception of field collapsing. I've actually implemented a demo of our main search with a Ruby/Rails front end in a few hours. Nice work everyone!
I have found this thread, looked at the patch for field collapsing and have a couple of questions.
I've looked at the Subversion tree and
- Don't find a 1.3 branch
- Don't find the patch code in the trunk
Is there a 'private' sandbox Solr developers work in that's not visible to the pubic (i.e. me)?
If not, what revision of the trunk does the patch apply to?
Any help would be appreciated. If I can get a demo that includes field collapsing, my management may be persuaded to let me move our main search to Solr.
Regards,
Tracy
Hi Tracy-
There has not been much movement on this while we get
SOLR-281 sorted (I hope this happens soon) – once that is in, there will hopefully be an updated patch on the 1.3 branch that will be posted here.
"1.3" is not a branch yet – it is the trunk revision that most patches work with. Only when it becomes an official release, will it actually get called 1.3 in the repository.
If you need to show field collapsing soon, I think your best bet (i have not tried it) is to apply the ' field_collapsing_1.1.0.patch' to the 1.1.0 branch ( ) But if you can wait a few weeks, it will hopefully be available in trunk (or easily patchable from trunk)
ryan
Ryan,
Thanks for the quick reply and clarification. I'll follow your suggestion as to where to apply and try the patch.
I'll be eagerly waiting for the updated trunk.
Regards,
Tracy
Here is the patch for solr 1.3 rev 589395.
I made some performance improvement. No more cache. I use bitdocset or hashdocset depending on solrconfig.hashdocsetmaxsize variable.
Regards,
Emmanuel Keller.
It looks like the latest patch only includes changed files and not new ones (like CollapseFilter?)
Thank you Yonik !
Here is the complete version.
P.S.: It's time to go to bed in Europe ...
Emmanuel.?
I've done some work on the field collapsing patch and made some additions and changes and posting this patch (against revision 592129) here for discussion.
- Added a collapse.facet = before|after parameter to control if faceting happens before or after collapsing.
- Changed collapse.max to collapse.threshold – this value controls after which number of collapsible hits collapsing actually kicks in (collapse.max is still supported as an alias).
- Added a collapse.maxdocs parameter that limits the number of documents that CollapseFilter will process to create the filter DocSet. The intention of this is to be able to limit the time collapsing will take for very large result sets (obviously at the expense of accurate collapsing in those cases).
- Inverted the logic of the filter DocSet created by CollapseFilter to contain the documents that are to be collapsed instead of the ones that are to be kept. Without this collapse.maxdocs doesn't work.
- Added collapse.info.doc and collapse.info.count parameters to provide more control over what gets returned in the collapse_counts extra results.
- Made a minimal change to SolrIndexSearcher.getDocListC() to support passing both the filter and filterList parameters. In most cases this was already handled anyway.
- Did some general refactoring and added comments and a test case.
If somebody with deeper Solr/Lucene knowledge could review these changes it would be much appreciated.
Karsten
I've created a CollapseComponent for field collapsing. Everything seems to work fine with it. Only issue I'm having is I cannot use the query component because when it isn't commented out, the non-field collapsed results are displayed and I can't figure out how to remove them. Someone might be able to figure that part out.
[:[0%20TO%20*]&collapse=true&collapse.field=inStock&collapse.type=normal&collapse.threshold=0]
Here's the config I'm using:
<searchComponent name="collapse" class="org.apache.solr.handler.component.CollapseComponent" />
<requestHandler name="/search" class="solr.SearchHandler">
<lst name="defaults">
<str name="echoParams">explicit</str>
</lst>
<arr name="components">
<!-- <str>query</str> -->
<str>facet</str>
<!-- <str>mlt</str> -->
<!-- <str>highlight</str> -->
<!-- <str>debug</str> -->
<str>collapse</str>
</arr>
</requestHandler>
UPDATE: Doug Steigerwald's patch (field_collapsing_dsteigerwald.diff) applies cleanly to trunk
I'm having trouble applying field_collapsing_1.3.patch to the head of trunk.
charlie@macbuntu:~/solr/src/java$ patch -p0 < /home/charlie/downloads/field_collapsing_1.3.patch patching file org/apache/solr/search/CollapseFilter.java patching file org/apache/solr/search/SolrIndexSearcher.java Hunk #1 succeeded at 694 (offset -8 lines). Hunk #2 succeeded at 1252 (offset -1 lines). patching file org/apache/solr/common/params/CollapseParams.java patching file org/apache/solr/handler/StandardRequestHandler.java Hunk #1 FAILED at 33. Hunk #2 FAILED at 90. Hunk #3 FAILED at 117. 3 out of 3 hunks FAILED -- saving rejects to file org/apache/solr/handler/StandardRequestHandler.java.rej patching file org/apache/solr/handler/DisMaxRequestHandler.java Hunk #1 FAILED at 31. Hunk #2 FAILED at 40. Hunk #3 FAILED at 311. Hunk #4 FAILED at 339. 4 out of 4 hunks FAILED -- saving rejects to file org/apache/solr/handler/DisMaxRequestHandler.java.rej
I'm guessing that maybe the field collapsing patch needs to be updated for the SearchHandler refactoring that was does as part of
SOLR-281? If so, I'll take a whack at migrating the changes to the SearchHandler.java, and see if I can produce a better patch.
Charles - try applying Doug Steigerwald's latest patch: field_collapsing_dsteigerwald.diff
I have not tested it, but it does apply without errors
Doug – I just started looking into field collapsing the other day, but from glancing at the code in QueryComponent.java and CollapseComponent.java, it seems like perhaps you're not supposed to be using both components – after all, their prepare() methods are identical, and their process() methods both execute the user's search and shove the resulting DocList into the "response" entry of the response object's internal storage Map. (The QueryComponent additionally stores the DocListAndSet in the ResponseBuilder object via builder.setResults() – I'm not sure why this is – and prefetches documents if the result set is small enough.) My guess is that if you want to enable collapsing, you should use the CollapseComponent; if you want to disable it, use the QueryComponent. Maybe someone who understand the design of the search handling components better than me can confirm this or correct my misunderstanding(s) ...
Attaching a new copy of Doug Steigerwald's patch that omits the System.out.println() call in CollapseComponent.java.
I copied what was in QueryComponent.prepare() method because I was having to disable the query component because of the extra results I was getting. Initially I had CollapseComponent.prepare() empty, but I had results from the query component and then adding the collapse component results being returned (2 'response' in the results.
Easy solution for me was to copy the prepare from QueryComponent and disable the query component in the request handler. There may be another way, but I was unable to figure it out.
Hello, I am new to Solr, so forgive me if what I say doesn't make sense... None of the patches for 1.3 work any more, since the file org.apache.solr.handler.SearchHandler has been removed from the nightly builds. Will someone write a new patch that works with teh current nightly builds? If not, could we get a copy of an old nightly build somewhere? Thanks a lot.
It seems like SearchHandler was simply moved down into the org.apache.solr.handler.components package as part of r610426 -
You should be able to modify the import statements field_collapsing_dsteigerwald.diff to make it work, no?
Oh, I didn''t notice. I will give a try tomorrow morning. Thank you.
That works, thanks
NegatedDocSet is throwing "Unsupported Operation" exceptions:58)
at org.apache.solr.handler.component.CollapseComponent.process(CollapseComponent.java:103)
at org.apache.solr.handler.SearchHandler.handleRequestBody(SearchHandler.java:155)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:117)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:902)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:275)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)74):595)
Not quite sure what search is triggering this path thru the code, but it is not happening on every request; just some ... am firing up the debugger now to see what I can learn, but thought I'd post this anyway to see if anyone has any tips.
Ah ... got the beginnings of a diagnosis. The problem appears when the DocSet qDocSet returned by DocSetHitCollector.getDocSet() – called at org.apache.solr.search.SolrIndexSearcher:1101 in trunk, or 1108 with the field_collapsing patch applied, inside getDocListAndSetNC()) – is a BitDocSet, and not when it's a HashDocSet. As the stack trace above shows, calling intersection() on a BitDocSet object invokes the superclass' DocSetBase.intersection() method, which invokes a call chain that blows up when it hits the iterator() method of the NegatedDocSet passed in as the filter parameter to getDocListAndSetNC(); NegatedDocSet.iterator() blows up by design:
public DocIterator iterator() { throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Unsupported Operation"); }
I see that DocSetBase.intersection(DocSet other) has special-casing logic for dealing with other parameters that are instances of HashDocSet; does it also need special casing logic for dealing with other parameters that are NegatedDocSets? Or should NegatedDocSet really implement iterator()? Or something else entirely?
Here's the simplest change I could think of to make DocSetBase subclasses that don't override intersection() (which just means BitDocSet at the moment) stop choking when their intersection() gets called with a NegatedDocSet as the other parameter; it's probably horribly stupid. Also, there should be a test.
Index: src/java/org/apache/solr/search/DocSet.java =================================================================== --- src/java/org/apache/solr/search/DocSet.java (revision 617738) +++ src/java/org/apache/solr/search/DocSet.java (working copy) @@ -193,7 +193,18 @@ if (other instanceof HashDocSet) { return other.intersection(this); } - + // you can't call getBits() on a NegatedDocSet, because + // getBits() // calls iterator(), and iterator() isn't + // supported by NegatedDocSet + if (other instanceof NegatedDocSet) { + BitDocSet newdocs = new BitDocSet(); + for (DocIterator iter = iterator(); iter.hasNext();) { + int next = iter.nextDoc(); + if (other.exists(next)) + newdocs.add(next); + } + return newdocs; + } // Default... handle with bitsets. OpenBitSet newbits = (OpenBitSet)(this.getBits().clone()); newbits.and(other.getBits());
I haven't been following this, so I don't know why there is a need for a NegatedDocSet (or if introducing it is the best solution), but it looks like you have two cases to handle: one negative set or two negative sets.
If you have a and -b, then return a.andNot(b)
if both a and b are negative (-a.intersection(-b)) then return NegatedDocSet(a.union(b)) // per De Morgan, -a&-b == -(a|b)
That's only for intersection() of course.
NegatedDocSet got introduced because the filter logic expects to use the intersection operation to apply a number of filters to a result. Introducing a negated docset was much easier than supporting both intersection as well as and-not type filters.
NegatedDocSet does not support iteration because the negation of a finite set is (at least theoretically) infinite. Even though it would in practice be possible to limit the negated set via the known maximum document id, this would probably not be very efficient. However, it is simply not necessary to ever iterate over the elements of a NegatedDocSet, because we know that the end-result of all DocSet operations is going to be a finite set of results, not an infinite one. A NegatedDocSet will only ever be used to "subtract" from a finite DocSet. As Yonik has pointed out, operations on a NegatedDocSet can be rewritten as (different) operations on the set being negated. The operation methods inside NegatedDocSet do this.
The reason the bug occurs is because of the naive way the binary set operation calls are dispatched: DocSet clients simply call e.g. set1.intersection(set2), arbitrarily leaving the choice of implementation to the logic defined by the class of set1. Currently, BitDocSet does not know about NegatedDocSet, and hence doesn't perform the necessary rewriting or delegation to NegatedDocSet.. Either the client code could be changed to call DocSetOp.intersection(a, b) instead of a.intersection(b), but this would involve changing the DocSet interface. A backwards compatible solution would be to simply have final DocSetBase.intersection() delegating to DocSetOp.intersection.
As Yonik has pointed out, operations on a NegatedDocSet can be rewritten as (different) operations on the set being negated. The operation methods inside NegatedDocSet do this.
Right. I realized, sheepishly, after I posted the first suggested patch that it'd be much simpler to just mimic the first if-clause in DocSet.intersection():
if (other instanceof NegatedDocSet) { other.intersection(this); }.
+1 for this ... whether or not NegatedDocSet is part of the final implementation of this feature. FWIW, I just noticed that there's another bug lurking in BitDocSet.andNot(), which will fail if a NegatedDocSet is passed in. It seems to me that it might be easier – at least for me – to read/write/extend a test suite that exercised all the paths thru DocSetOp, than to write a set of tests that exercised all the paths thru DocSetBase and its subclasses.
Also, I think that maybe there's a clear distinction to be made between intrinsic operations on a set (add(), exists(), et al.) and ones that involve another set (intersection(), union(), andNot()). Not sure it's a useful one, but it make sense to me. I don't know, though, whether it make sense to go further than that and say – as the current implementation of NegatedDocSet implies – that there are some set operations (iterator() and size()) that are in fact optional.
Off the top of my head: Would it be simpler to just modify add a filterType flag to the getDocList*() family of methods in SolrSearchInterface to cause it to call a.andNot(b) rather than a.intersection(b) when applying b as a filter? (I'm really completely ignorant – or nearly completely – of how the seach code works, so feel free not to dignify this with a response if it's a useless idea ...
)
Hello everyone. I am planning to implement chain collapsing on a high traffic production environment, so I'd like to use a stable version of Solr. It doesn't seem like you have a chain collapse patch for Solr 1.2, so I tried the Solr 1.1 patch. It seems to work fine at collapsing, but how do I get a countt for the documents other then the one being displayed?
As a result I see:
<lst name="collapse_counts">
<int name="Restaurant">2414</int>
<int name="Bar/Club">9</int>
<int name="Directory & Services">37</int>
</lst>
Does that mean that there are 2414 more Restaurants, 9 more Bars and 37 more Directory & Services? If so, then that's great.
However when I collapse on some fields I get an empty collapse_counts list. It could be that those fields have a large number of different values that it collapses on. Is there a limit to the number of values that collaose_counts displays?
Thanks in advance for any help you can provide!
Also, is field collapse going to be a part of the upcoming Solr 1.3 release, or will we need to run a patch on it?
OK, I think I have the first issue figured out. If the current resultset (lets say the first 10 rows) doesn't have the field that we are collapsing on, the counts don't show up. Is that correct?
Latest patch file fixes an issue where facet searching would throw a NullPointerException when using the fieldCollapse requestHandler. Also, updated the import path for SearchHandler. Thank you Dave for these tips!
That thanks should be to Charles not Dave
Sorry about that!
A good thing were to apply this CollapseComponent for the mlt results.
Are there any plans to add collapse controls to SolrJ?
None of the patches work on the current nightly build anymore. Could anyone help? Thanks
I will try to bring this patch up to date. Currently I see two main problems:
1) The patch applies to trunk, but it doesn't compile. The problem occurs mainly because of changes in Search Components (for instance, some method signatures which CollapseComponent implements were changed). I have this fixed locally (more or less), but I have to test it before posting new version of patch.
2) It seems that CollapseComponent can't be used in chain with QueryComponent, but instead of it. CollapseComponent basically copies QueryComponent querying logic and adds some of it's own. I guess this isn't the right way to go. CollapseComponent should contain only collapsing logic and should be chainable with other components. Can anyone confirm if I'm right here? Of course, there might be some fundamental reason why CollapseComponent had to be implemented this way.
Does anyone else see any other issues with this component?
Hey Bojan. I actually hacked collapsecomponent quite a bit, in order to get it to work with Distributed Search, but I am not going to upload it, since its horribly buggy. Do you think that's a feature that can be added?
Hi Oleg. I'll look into this also. In case you have any working code, you can mail it to me, and I'll see what can be reused.
It's amazing this issue/patch has so many votes and watchers, yet it's stuck...
Ryan, Yonik, Emmanuel, Doug, Charles, Karsten
I think Bojan is onto something here. Isn't the ability to chain QueryComponent (QC) and CollapseComponent (CC) essential?
I'm looking at field_collapsing_dsteigerwald.diff and see that the CC.prepare method there is identical to the QC.prepare method, while process methods are different. Could we solve this particular copy/paste situation by making CC extend QC and simply override the process method?
As for chaining, could CC take the same approach as the MLT Component, which simply does it's thing to find "more like this" docs and stuffs them into the "moreLikeThis" element in the response?
I could be misunderstanding something, so please correct me if I'm wrong. I'd love to get this one in 1.3 – it's been waiting in JIRA for too long.
I updated the patch so that it can be compiled on Solr trunk. Also, since CollapseComponent essentially copied QueryComponent's prepare method (and it seems that it is supposed to be used instead of it), I made it extend QueryComponent (with collapsing-specific process() method, and prepare() method inherited from super class).
I'd like to request some distributed search functionality for this feature as well.
There is so little interest in this patch/functionality now, that I doubt it will get distributed search support in time for 1.3 I would like to commit Bojan's patch for 1.3, though.
Since this is adding new interface/API, it would be very nice if one could easily review it. It's very important that the interface and the exact semantics are nailed down IMO (there seem to be a lot of options).
Is up-to-date?
There don't seem to be any tests either.
Although field collpasing worked fine in my brief testing, when I put it to work with more documents, I got exceptions. It seems to have something to do with the queries (or documents, since different queries return different documents). With some queries, this exception does not happen.
If I remove the collapse.* parameters, the error does not happen. Any idea why this is happening? Thanks.
HTTP ERROR: 500
Unsupported Operation82)
at org.apache.solr.handler.component.CollapseComponent.process(CollapseComponent.java:57)
at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:156)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:125)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:965)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:272))
You can check discussion about this same problem in the posts above (starting with 1st Feb 2008). It seems like a rather complex issue which could require some serious refactoring of collapsing code.
Sorry about the dup. I obviously didn't check the comments before I posted the bug. Anyway, it's still there, it's still happening
Not sure if it's related to the query string or the documents that the query hits. If the latter, it would be trickier to reproduce.
Anyway I tried a few English words and the error didn't happen. So far I was only able to reproduce it with CJK (Simplified Chinese to be exact) queries.
This is an example query that triggers this problem (in UTF-8):
'\xe5\x9c\xb0\xe9\x9c\x87'
The query string:
I just tried to apply the last patch and ran into 2 issues:
First:
The new getDocListAndSet(Query query, List<Query>..) method in SolrIndexSearcher calls the getDocListC(..) method using the old signature. I changed the call to the new signature and it worked very well:
DocListAndSet ret = new DocListAndSet();
QueryResult queryResult = new QueryResult();
queryResult.setDocListAndSet(ret);
queryResult.setPartialResults(false);
QueryCommand queryCommand = new QueryCommand();
queryCommand.setQuery(query);
queryCommand.setFilterList(filterList);
queryCommand.setFilter(docSet);
queryCommand.setSort(lsort);
queryCommand.setOffset(offset);
queryCommand.setLen(len);
queryCommand.setFlags(flags |= GET_DOCSET);
getDocListC(queryResult, queryCommand);
Second:
After adding more docs (~3000), I got an Exception in SolrIndexSearcher at line ~1300:
qr.setDocSet(filter == null ? qDocSet : qDocSet.intersection(filter));
As the NegotiatedDocSet doesn't implement the iterator() function, this call lead to an Unsupported Operation exception. I just naively tried to implement this funtion using "return source.iterator()". Works fine for me.
As the first issue is very clear, I wanted to check my approach for the second one before I post a patch. Maybe there are some side effects that I missed.
I'm in the process of updating our Solr build and I'm running into issues with this patch now. I added the code in the first issue Matthias mentioned. Unfortunately whenever I try to do any field collapsing, I get a NPE:
java.lang.NullPointerException
at org.apache.solr.search.CollapseFilter.getCollapseInfo(CollapseFilter.java:263)
at org.apache.solr.handler.component.CollapseComponent.process(CollapseComponent.java:65)
...
My request handler for testing is simple. It only has the collapse component in it. Posting the example docs and trying to execute the following query gives me the NPE.*:*&collapse.field=cat&collapse.type=normal
Updated my trunk this morning (r687489).
I was able to hack the latest patch in, and to get it to work, but it required some pretty heavy naive changes...
If you are getting an NPE try this: in the SolrIndexSearcher class, in the getDocListC method change out = new DocListAndSet(); to
DocListAndSet out = null;
if(qr.getDocListAndSet() == null)
out = new DocListAndSet();
else
out = qr.getDocListAndSet();
Sorting twice (when not sorting on the collapse field) only makes sense if we are doing external sorts (harddrive), correct ? It seems to me that this should be closer to the facet stuff (in using the field cache) and then use a hash table of accumulators: linear time (is that generally?) right? (edit: looks like thats too memory intensive)
As Otis mentions above, this issue appears very popular. We should finish it up.
What's a hard drive sort?
What's a hard drive sort?
Sorry - was not very clear.
Just like sorting, finding dupes can be done in memory or using external storage (harddrive). I am only just looking into this stuff myself, but it seems in the best case you would want to do it in memory with a hash system which can be linear scalability. If you have too many items to look for dupes in, you have to use external storage - one good method is two sorts (we get one from the search), but there are other options too I think. In this case, the sorts are able to be done in memory though, but I think the hashtable method of identifying dupes is much less memory efficient (too many unique terms).
Hi All,
I am trying to apply this patch to solr-1.4 code and getting following errors.
At line number 58 of the CollapseComponent.java and the error is:
The method getDocListAndSet (Query, List<Query>, Sort, int , int , int) in the type SolrIndexSearcher is not applicable for the arguments (Query, List<Query>, DocSet, Sort, int , int , int)
Can anyone tell me the correction I need to do to get this code working.
--Thanks and Regards
Vaijanath
Hi All,
I got this patch working but for 1.3 code and not 1.4. I will try to get this working and will tell you the results. I pulled in some code from older version namely for
getDocListAndSet
getDocListNC
getDocListC.
I also added an constructor DocSetHitCollector (int maxDoc) with following code
public DocSetHitCollector(int maxDoc)
I wanted to know if any of the additions harm any other component of solr.
Do I need to make any changes to solrconfig other than the following
Adding <arr name="first-components"> <str>collapse</str> </arr> this to standard and dismax query handler
<searchComponent name="collapse" class="org.apache.solr.handler.component.CollapseComponent" />
I will check this with highlighting and let you all know of any observation that I make.
--Thanks and Regards
Vaijanath
A patch for field collapsing over Solr 1.3.0. It changes the behavior to be more memory friendly when the parameter collapse.maxdocs is used.
I attached a patch named collapsing-patch-to-1.3.0-ivan.patch. The patch applies to Solr 1.3.0.
Karsten commented in the comment "Karsten Sperling - 06/Nov/07 02:06 PM":
Inverted the logic of the filter DocSet created by CollapseFilter to contain the documents that are to be collapsed instead of the ones that are to be kept. Without this collapse.maxdocs doesn't work.
I found that this way of doing consumes a lot of memory, even if your query is bounded to a few number of documents. And I found that there is not advantage on using collapse.maxdocs if you don't speed up queries and reduces the amount of needed memory.
So, I decided to revert the Karsten change in order to make field collapsing faster and less resources consuming when querying for smaller datasets.
WARNING: This patch changes the semantic of collapse.maxdocs. Before this patch, the collapse.maxdocs was used just for reduce the number of docs cheked for grouping, but presenting the rest of documents that were not grouped in the result.
With current patch, only documents that were examinated for grouping can appear in the result. This semantic have two benefits:
- The amount of resources can be controled per each query
- Not ungrouped content is presented.
I'm having an issue with Ivan's latest patch. I'm testing on a data set of 8113 documents. All the documents have a string field called site. There are only two sites, Site1 and Site2.
Site1 has 3466 documents.
Site2 has 4647 documents.
With the following simple query, I only get 1 result:*:*&collapase=true&collapse.field=site
....
<lst name="collapse_counts">
<str name="field">site</str>
<lst name="doc">
<int name="site2-doc-2981790">4646</int>
</lst>
<lst name="count">
<int name="Site2">4646</int>
</lst>
<str name="debug">HashDocSet(2) Time(ms): 0/0/0/0</str>
</lst>
<result name="response" numFound="1" start="0">
....
The only result displayed is for Site2.
I have an older patch working with Solr 1.3.0, but I can't get it to mesh with localsolr properly. My localsolr gives 1656 results, and collapsed on the site it should give 2 results but gives 8 results, some of which are duplicate documents. Without localsolr, my field collapsing patch seems to work fine.
What is the "localsolr" field you are talking about?
Is it the solr stuff from ?
Yes, that localsolr. I've just been trying to get the two components working together but haven't had much luck.
Separately they work fine, but together not so much. I can't get the field collapsing to work correctly with an existing reset set from the localsolr component in the response builder.
I have attached new patch with the problems solved in my first submitted patch. Doug Steigerwald, could you check if this patch works well for you? Thanks.
Looks fine from my little bit of testing.
I'm using Ivan's patch and running into some trouble with faceting...
Basically, I can tell that faceting is happening after the collapse - because the facet counts are definitely lower than they would be otherwise. For example, with one search, I'd have 196 results with no collapsing, I get 120 results with collapsing - but the facet count is 119??? In other searches the difference is more drastic - In another search, I get 61 results without collapsing, 61 with collapsing, but the facet count is 39.
Looking at it for a while now, I think I can guess what the problem might be...
The incorrect counts seem to only happen when the term in question does not occur evenly across all duplicates of a document. That is, multiple document records may exist for the same image (it's an image search engine), but each document will have different terms in different fields depending on the audience it's targeting. So, when you collapse, the counts are lower than they should be because when you actually execute a search with that facet's term included in the query, all the documents after collapsing will be ones that have that term.
Here's an illustration:
Collapse field is "link_id", facet field is "keyword":
Doc 1:
id: 123456,
link_id: 2,
keyword: Black, Printed, Dress
Doc 2:
id: 123457,
link_id: 2,
keyword: Black, Shoes, Patent
Doc 3:
id: 123458,
link_id: 2,
keyword: Red, Hat, Felt
Doc 4:
id: 123459,
link_id:1,
keyword: Felt, Hat, Black
So, when you collapse, only two of these documents are in the result set (123456, 123459), and only the keywords Black, Printed, Dress, Felt, and Hat are counted. The facet count for Black is 2, the facet count for Felt is 1. If you choose Black and add it to your query, you get 2 results (great). However, if you add Felt to your query, you get 2 results (because a different document for link_id 2 is chosen in that query than is in the more general query from which the facets are produced).
I think what needs to happen here is that all the terms for all the documents that are collapsed together need to be included (just once) with the document that gets counted for faceting. In this example, when the document for link_id 2 is counted, it would need to appear to the facet counter to have keywords Black, Printed, Dress, Shoes, Patent, Red, Hat, and Felt, as opposed to just Black, Printed, and Dress.
You can try with collapse.facet=before, but then you'll notice that the list of documents returned is all, not only the collapsed ones.
Yes, this is basically what I'm doing for now... At least it's reasonable enough to explain to a client that the counts are for unfilitered results. However, ideally, it should be able to facet properly on filitered results as well...
Also, with simply collapse.facet=before, the results returned are the unfilitered results. You have to specify collapse.facet=after to get filtered results at all, and run the query component right before the facet component then to get the unfilitered facet counts... which doesn't seem to be ideal. This is with release version of SOLR 1.3 and Iván's most recent patch. All in all it took a lot of experimenting but at least now I have a method that works that we can go live with and then we'll just update the software as the situation improves.
Thanks for all your efforts on the patch! I complain but really, the fact it works at all is a miracle for us.
I get an error on certain searches with Ivan's latest patch.
Dec 15, 2008 2:32:00 PM org.apache.solr.core.SolrCore execute
INFO: [ss_image_core] webapp=/solr path=/select params=
hits=263059 status=500 QTime=4508
Dec 15, 2008 2:32:00 PM org.apache.solr.common.SolrException log
SEVERE: java.lang.ArrayIndexOutOfBoundsException: 41386
at org.apache.solr.util.OpenBitSet.fastSet(OpenBitSet.java:235)
at org.apache.solr.search.CollapseFilter.addDoc(CollapseFilter.java:214))
Unfortunate really, it happens every time this specific search is run, but many, many other searches of similar result set size and considerably more complexity or equivalent complexity will execute fine... I can't honestly tell you what's special about this one search that would make it fail.
For now the patch is offline until we can figure something out for it... I can provide access to the machine (I managed to reproduce it in a test environment) if it would help determine what the problem is / make the software better for everyone.
I'm pretty sure the problem Stephen ran into is an off-by-one error in the bitset allocation inside the collapsing code; I ran into the same problem when I customized it for internal use about half a year ago – and unfortunately forgot all about the problem until reading Stephen's comment just now. Basically the bitset gets allocated 1 bit too small, so there's about a 1/32 chance that if the bit for the document with the highest ID gets set it will cause the AIOOB exception.
Karsten Sperling was right. Seems that there was a wrong bounds initialization for the OpenBitSet. I have solved it and attached a new patch.
Stephen Weiss, can you test if now the error has disappeared?
Thanks.
Yes! It does work. Thank you both so much! It's been running for 5 days now without a hiccup. This is going into production use now (we'll be monitoring), they simply can't wait for the functionality. From here it looks like if you get faceting tidied up and some docs written, they should be including this soon!
I see there is a patch agains 1.3, is there any current patch against trunk? (we would need something against trunk in order to consider this for 1.4)
I tested 1.3 and ivans latest patch.
When I add a Filter Query (fq param) to my query I get an exception "Either filter or filterList may be set in the QueryCommand, but not both.". I'm not that familiar with java but at least disabled the exception in SolrIndexSearch.java. I can use Filter Queries now and no problems occured so far. But surely this has to be handled in another way.
Btw, I think this had already been fixed by Karsten back in 2007 in some way (patch field-collapsing-extended-592129.patch). He commented it with:
"Made a minimal change to SolrIndexSearcher.getDocListC() to support passing both the filter and filterList parameters. In most cases this was already handled anyway."
I had to make a patch to fix two issues that we needed for our system. I am not used to this code, so maybe someone can pick this patch and make it something useful for everybody.
The fixes are:
1) When collapsing.facet=before, only the collapsed documents are returned (and not the whole collection).
2) When collapsing is normal, the selected sort order is preserved by returning the first document of the collapsed group.
For example, if the values of the collapsing field are:
1) Y
2) X
3) X
4) Y
5)X
6)Z
the documents returned are 1, 2 and 6, in that order.
So, for example, if you sort by price ascending, you will get the result sorted by price, where each item is the cheapest item of its collapsed group.
Marked for 1.5
Are the any concrete plans on where this feature is going? Is it ever going to get support for distributed search?
We've been using this patch in production for months now, and suddenly in the last 3 days it is crashing constantly.
Edit - It's Ivan's latest patch, #3, with Solr 1.3 dist
Mar 6, 2009 5:23:50 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.OutOfMemoryError: Java heap space
at org.apache.solr.util.OpenBitSet.ensureCapacityWords(OpenBitSet.java:701)
at org.apache.solr.util.OpenBitSet.ensureCapacity(OpenBitSet.java:711)
at org.apache.solr.util.OpenBitSet.expandingWordNum(OpenBitSet.java:280)
at org.apache.solr.util.OpenBitSet.set(OpenBitSet.java:221)
at org.apache.solr.search.CollapseFilter.addDoc(CollapseFilter.java:217))
It seems to happen randomly - there's no special request happening, nothing new added to the index, nothing. We've made no configuration changes. The only thing that's happened is more documents have been added since then. The schema is the same, we have perhaps 200000 more documents in the index now than we did when we first went live with it.
It was a 32-bit machine allocated 2GB of RAM for Java before. We just upgraded it to 64-bit and increased the heap space to 3GB, and still it went down last night. I'm at my wits end, I don't know what to do but this functionality has been live so long now it's going to be extremely painful to take it away. Someone, please tell me if there's anything I can do to save this thing.
That is one of the problems that this patch has: The consumption of resources (memory and CPU) increases with the amount of results in the query and with the amount of requests.
Is not trivial to change that. I imaging that deep changes in Solr or Lucene would be needed to have an efficient collapsing.
The advices I can give you are:
- Increase the amount of memory or your Solr
- Use the parameter "collapse.maxdocs" . This parameter limits the number of document that are seen when collapsing. By using it, you'll limit the amount or memory and resources used per each query. But if the query you did has more than maxdocs documents, then the collapsing won't be perfect. Some documents won't be collapsed.
I hope it helps something.
Thank you so much for your prompt response Ivan, I really appreciate your help.
I have already maxed out the RAM on the machine - it seems very strange to me that adding a whole other GB of RAM did not fix the issue already. So I will have to try the next option, collapse.maxdocs.
How does this work though? Does this mean, let's say I set collapse.maxdocs to 10000, that means the first 10000 documents will be collapsed, and after that they won't be? Or is it more random?
Is not random. I don't remember pretty well, but I think that documents are sorted by the collapsing field. After that, they are being grouped sequentially until reaching maxdocs. The groups that results from there are the documents that are presented. So the number of groups resulted are always smaller than the number of maxdocs.
Summary: only maxdocs are scanned to generate the resulting groups.
Unfortunately I don't think that will work for us. The collapse.maxdocs seems to collapse the oldest documents in the index - but we sort from newest to oldest, so effectively the newest documents in the index are just left out. Not only do they not collapse but they don't appear at all. If this is the only solution then we will have to stop using the patch... and unfortunately this means in general we will probably have to stop using Solr. The company has already made clear that this functionality is required, and especially since it has been working now for several months they will be very unlikely to accept that they can't have it anymore.
Anyway I don't want to give up yet...
I'm really not convinced this is really a problem of running out of the necessary memory to complete the operation - it only started doing this very recently. How does it run for 3 months with 2GB of RAM without any trouble, and now it fails even with 3GB of RAM? It's not like we just added those 200000 documents yesterday - they have accumulated over the past few months, in the past 3 days we've only perhaps added 20,000 documents. 20,000 more documents (with barely any new search terms at all) means it needs more than 1GB of memory more than what it was already using? If we grow by 25% every year that means by December we will need 50GB of RAM in the machine.
How much RAM does the machine have total? 4 GB?
Do you ever commit rapidly?
You might try decreasing your cache sizes if you are using them.
The machine has 4GB total. In response to this issue, and especially now that we have upgraded it to be 64 bit (again, for this issue), we have already ordered another 16 GB for the machine to try and stave off the problem. We should have it in next week.
I restrict commits severely - a commit is only allowed once an hour, in practice they happen even less frequently - perhaps 5 or 6 times a day, and very spread out. We are freakishly paranoid
But honestly that's all we need - new documents come in in chunks and generally they want them to go in all at once, and not piecemeal, so that the site updates cleanly (the commits are synchronized with other content updates - new images on the home page, etc).
Some more information... just trying to toss out anything that matters. We have a very small set of possible terms - only 60,000 or so which tokenize to perhaps 200,000 total distinct words. We do not use synonyms at index time (only at query time). We use faceting, collapsing, and sorting - that's about it, no more like this or spellchecker (although we'd like to, we haven't gotten there yet). Faceting we do use heavily though - there are 16 different fields on which we return facet counts. All these fields together represent no more than 15,000 unique terms. There are approx. 4M documents in the index total, and none of them are larger than 1K.
Memory usage on the machine seems to steadily increase - after restart and warming, 40% of the RAM on the machine is in use. Then, as searches come in, it steadily increases. Right now it is using 61%, in an hour it will probably be closer to 75% - the danger zone. This is also unusual because before, it used to stay pretty steady around 52-53%.
This is a multi-core system - we have 2 cores, the one I'm describing now is only one of them. The other core is very, very small - total 8000 documents, which are also no more than 1 K each. We do use faceting there but no collapsing (it is not necessary for that part). It is essentially irrelevant, with or without that core the machine consumes about the same amount of resources.
In response to this problem I have already dramatically reduced the following options:
< <mergeFactor>2</mergeFactor>
< <maxBufferedDocs>100</maxBufferedDocs>
—
> <mergeFactor>10</mergeFactor>
> <maxBufferedDocs>1000</maxBufferedDocs>
42c42
< <maxFieldLength>2500</maxFieldLength>
—
> <maxFieldLength>10000</maxFieldLength>
50,51c50,51
< <mergeFactor>2</mergeFactor>
< <maxBufferedDocs>100</maxBufferedDocs>
—
> <mergeFactor>10</mergeFactor>
> <maxBufferedDocs>1000</maxBufferedDocs>
53c53
< <maxFieldLength>2500</maxFieldLength>
—
> <maxFieldLength>10000</maxFieldLength>
( diff of solrconfig.xml - < indicates current values, > indicates values when the problem started happening).
This actually seemed to make the search much faster (strangely enough), but it doesn't seem to have helped memory consumption very much.
These are our cache parameters:
<filterCache
class="solr.LRUCache"
size="65536"
initialSize="4096"
autowarmCount="2048"/>
<queryResultCache
class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="256"/>
<documentCache
class="solr.LRUCache"
size="16384"
initialSize="16384"
autowarmCount="0"/>
<cache name="collapseCache"
class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
I'm actually not sure if the collapseCache even does anything since it does not appear in the admin listing. I'm going to try reducing the filterCache to 32K entries and see if that makes a difference. I think that may be the right track since otherwise it seems like a big memory leak is happening.
Is there any way to specify the size of the cache in terms of the actual size it should take up in memory, as opposed to the number of entries? 64K sounded quite small to me but now I'm thinking that 64K could mean GB's of memory depending on what the entries are, I honestly don't understand what the correlation would be between an entry and the size that entry takes in RAM.
we have already ordered another 16 GB for the machine to try and stave off the problem. We should have it in next week.
Great. You've got a lot going on here, and 4 GB is on the extremely low end of what I'd suggest.
I restrict commits severely -
Good news again.
In response to this problem I have already dramatically reduced the following options:
Dropping the merge factor is not likely to help much. It will increase the time it takes to add docs (merges occur much more often) for the benefit of maintaining an almost optimized index at all times (hence the faster search speed). Not a big RAM factor though.
Also, dropping the max buffered docs is also probably not a huge saver, and will only affect RAM usage during indexing. Going from 1000 to 100 will likely hurt indexing performance and not save that much RAM in the larger scheme of things.
And dropping the maxFieldLength will hide parts of the document that are over that length - perhaps youll end up with a handful fewer index terms, but again, not likely a big savings here and may do more harm than good.
My suggestion of lowering your cache sizes was just a thought to eek out some more RAM for you. Its not really suggested though if you can get more RAM. For best performance, those caches should be set correctly. If you are using the fieldcache method for faceting, you want the size of the filter cache to be the same as the number of unique terms you are faceting on. The other caches are not so large that I would suggest trimming them.
The reality is, you've got 4 million docs, sorting (uses field caches), faceting (likely uses field caches), and this resource intensive field collapse patch. More RAM is probably your best bet. Every document you add potentially adds to the RAM usage of each of these things. That doesn't mean you don't have a different problem (it does seem weird it ballooned all of a sudden), but your running some RAM hungry stuff here, and it wouldn't blow my mind that 3 gig is not enough to handle it. It could be that only recently the right searches started coming in at the right times to fire up all your needs at once. Much of this may be lazy loaded or loaded on the fly depending on if and how you have configured your warming searches.
Thanks. In the wiki next to each one of these parameters it explicitly says that reducing this parameter will decrease memory usage, this is why we reduced these parameters (it did not mention the filterCache at all).
I really do hope the RAM will help. It certainly can't help..
I'm going to try lowering the filterCache to be just above the number it's at now, since that amount seems to be all it needs. It's possible at crash time all the sudden is uses a lot more of it for some reason - I have a feeling it might be related to a new permissions group that was added 3 days ago. That might trigger a lot more filters. It is barely used at all yet except by one client - I'm going to go check and see if there's any correspondence between when that client logs in and when the problem occurs - I bet there is.
Thanks for all your help guys.
Thanks. In the wiki next to each one of these parameters it explicitly says that reducing this parameter will decrease memory usage, this is why we reduced these parameters (it did not mention the filterCache at all).
They will save RAM to a certain extent for certain situations. But not very helpful at the sizes you are working with (and not settings I would use to save RAM anyway, unless the amount I need to save was pretty small). Also, the savings are largely index side - not likely a huge part of your RAM concerns, which are search side..
The sizes may be higher than you need then. They should be adjusted to the best settings based on the wiki info. I was originally suggesting you might sacrifice speed with the caches for RAM - but, its always best to use the best settings and have the necessary RAM.
When I add a Filter Query (fq param) to my query I get an exception "Either filter or filterList may be set in the QueryCommand, but not both."
This patch (based on dieter patch) allows using fq parameter
There is an issue with collapsed result ordering when querying with only the unique Id and score fields in the request.
[Update: this is only an issue when both standard results and collapse results are present - which I was using for testing]
eg:
q=ford&version=2.2&start=0&rows=10&indent=on&fl=Id,score&collapse.field=PrimaryId&collapse.max=1
gives wrong ordering (note: Id is our unique Id)
but adding a another field - even a bogus one - works.
q=ford&version=2.2&start=0&rows=10&indent=on&fl=Id,score,bogus&collapse.field=PrimaryId&collapse.max=1
Also using an fq makes it work
eg:
fq=Type:articles&q=ford&version=2.2&start=0&rows=10&indent=on&fl=Id,score&collapse.field=PrimaryId&collapse.max=1
I'm using the latest Dmitry patch (25/mar/09) against 1.3.0.
Apart from that great so far...thanks to all
We have tried to integrate the most recent patch into our 1.4 install. The patching was smooth and overall it works good. However, it appears the issue with fq has returned. Whenever I try to filter the query it gives "Either filter or filterList may be set in the QueryCommand, but not both." Not sure what happened. What part of the patch makes it possible for fq to work as it may not be there now.
Additionally, the collapse.facet=before seems to not work. Any help in this area would be greatly appreciated.
How did you fix the memory issue?
-XX:PermSize=1524m -XX:MaxPermSize=1524m -Xmx128m
It's not a real fix, but works for now...
This patch is based on the latest patch by Dmitry, it addresses the following issues:
- the CollapseComponent now simply falls back to the process method of QueryComponent when no collapse.field is defined. This fixes issues with the fq param when collapsing was disabled and makes CollapseComponent a fully compatible replacement for QueryComponent.
- collapse.facet=before is now fixed, the previous patch ignored any filter queries (fq) and therefore returned wrong facet counts
- ResponseBuilder "builder" renamed to "rb" to match QueryComponent
This patch applies to trunk (rev. 772433) but works with Solr 1.3 too. For 1.3 you have to move CollapseParams.java from common/org/apache/solr/common/params to java/org/apache/solr/common/params/ as the location of this file has been changed in trunk.
This is my first contribution so any feedback is much appreciated. This is a great feature so lets get it into Solr as soon as possible.
Hi,
I have modified the latest patch of Thomas and made two performance improvements:
1) Improved normal field collapsing. I tested it with an index 1.1 million documents. When collapsing on all documents and with no sorting specified (so sorting on score) the query time is around 130ms compared with the previous patch which is around 1.5 s. When I then add sorting on string field the query time is around 220 ms compared with the previous patch which is around 5.2 s.
The reason why it is faster is because the latest patch queries for a doclist instead of a docset. In the normal collapse method it keeps track of the most relevant documents, so the end result is the same, also creating a docList of 1.1 million documents (and ordering it) is very expensive.
Note: I did not improved adjacent collapsing, because the adjacent method needs (as far as I understand it) a completely sorted list of documents (docList).
2) Slightly improved facetation in combination with field collapsing, by reusing the uncollapsed docset that is created during the collapsing process (the previous patch made invoked a second search).
I also have added documentation, added a few unit tests for the collapsing process itself and made the debug information more readable.
This patch works from revision 779335 (last Wednesday) and up. This patch depends on some changes in Solr and a change inside Lucene.
I'm very interested in other people's experiences with this patch and feedback on the patch itself.
Cheers,
Martijn
I made some tests with your patch and trunk (rev. 779497). It looks good so far but I have some problems with occasional null pointer exceptions when using the sort parameter:*:*&collapse.field=manu&sort=score%20desc,alphaNameSort%20asc
java.lang.NullPointerException
at org.apache.lucene.search.FieldComparator$RelevanceComparator.copy(FieldComparator.java:421)
at org.apache.solr.search.CollapseFilter$DocumentComparator.compare(CollapseFilter.java:649)
at org.apache.solr.search.CollapseFilter$DocumentPriorityQueue.lessThan(CollapseFilter.java:596)
at org.apache.lucene.util.PriorityQueue.insertWithOverflow(PriorityQueue.java:153)
at org.apache.solr.search.CollapseFilter.normalCollapse(CollapseFilter.java:321)
at org.apache.solr.search.CollapseFilter.<init>(CollapseFilter.java:211))
These queries work as expected:*:*&collapse.field=manu&sort=score%20desc*:*&sort=score%20desc,alphaNameSort%20asc
Thanks for the feedback, I fixed the problem you described and I have added a new patch containing the fix.
The problem occurred when sorting was done on one ore more normal fields and on scoring.
The problem is solved, thanks. I will use your patch for my current project that is planned for golive in 5 weeks. If I find any more issues I will report them here.
Hey guys, are there any plans to make field collapsing work on multi shard systems?
I'm looking forward in your experiences with this patch, particular in production.
I think in order to make collapsing work on multi shard systems the process method of the CollapseComponent needs to be modified.
CollapseComponent already subclasses QueryComponent (which already supports querying on multi shard systems), so it should not be that difficult.
I require assistance. I've installed a fresh Solr (1.3.0), and all appears/operates well. I then patch using
SOLR-236_collapsing.patch [by Thomas Traeger] (the last patch i saw claimed to work with 1.3.0), without error. I then add to solrconfig.xml the following (per:) :
<searchComponent name="collapse" class="org.apache.solr.handler.component.CollapseComponent" />
Upon restart, I get a long configuration error, which seems to hinge on:
HTTP Status 500 - Severe errors in solr configuration. Check your log files for more detailed information on what may be wrong. If you want solr to continue after configuration errors, change: <abortOnConfigurationError>false</abortOnConfigurationError> in solrconfig.xml ------------------------------------------------------------- org.apache.solr.common.SolrException: Error loading class 'org.apache.solr.handler.component.CollapseComponent' at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:273)
[the full error can be included if desired.]
I've verified that the CollapseComponent file exists in the proper place.
I've moved CollapseParams as required, (move CollapseParams.java from common/org/apache/solr/common/params to java/org/apache/solr/common/params/ )
I've tried multiple iterations of the patch (on fresh installs), all with the same issue.
Are there additional steps, patches, or configurations that are required?
Is this a known issue?
Any help is very much appreciated.
ron, your approach should work, I just verified it on my Ubuntu 9.04 box. Here are my steps to a working example installation of solr 1.3.0 with collapsing enabled:
java -version > java version "1.6.0_13" > Java(TM) SE Runtime Environment (build 1.6.0_13-b03) > Java HotSpot(TM) 64-Bit Server VM (build 11.3-b02, mixed mode) wget tar xvzf apache-solr-1.3.0.tgz wget cd apache-solr-1.3.0/ patch -p0 <../SOLR-236_collapsing.patch mv src/common/org/apache/solr/common/params/CollapseParams.java src/java/org/apache/solr/common/params/ ant example cd example/ vi solr/conf/solrconfig.xml
add the collapse component class definition:
<searchComponent name="collapse" class="org.apache.solr.handler.component.CollapseComponent" />
set the components in the standard requestHandler:
<arr name="components"> <str>collapse</str> </arr>
start jetty
java -jar start.jar
add example docs
cd example/exampledocs sh post.sh *.xml
and open*:*&collapse.field=cat in your browser.
The problem sounds very familiar to me, I remember going through something similar when I was first trying to get the patch to work. My configuration ended up being:
<searchComponent name="collapse" class="org.apache.solr.handler.component.CollapseComponent" />
<requestHandler name="standard" class="solr.StandardRequestHandler">
<!-- default values for query parameters -->
<lst name="defaults">
<str name="echoParams">explicit</str>
<!--
<int name="rows">10</int>
<str name="fl">*</str>
<str name="version">2.1</str>
-->
</lst>
<arr name="components">
<str>query</str>
<str>facet</str>
<str>collapse</str>
<str>mlt</str>
<str>highlight</str>
<str>debug</str>
</arr>
</requestHandler>
All I remember is if I didn't have that <arr name="components"> section arranged exactly like that (even if I rearranged other items without rearranging the "collapse" part), either 1) faceting would completely stop working correctly at all, giving me totally bogus numbers or 2) I would get something a lot like the error described above and nothing would work at all.
However, I'm using an older version of the patch (collapsing-patch-to-1.3.0-ivan_3.patch) so it's totally possible that this has nothing to do with that.
On that note... have people found in general that the newer versions of the patch give any particular benefits in particular? I saw someone say that the latest patches were faster but I wasn't sure if they were faster in all cases or only when not sorting (we always sort so if it's only for unsorted sets it doesn't do us much good).
Thanks for the replies.
Thomas, I followed your steps, verifying same java version and build, etc. (all matched. I'm working with a CentOS 5 machine..Any potential for the problem being related to that?)
Patching and installing all appeared successful, but the resulting jetty powered page still resulted in:
org.apache.solr.common.SolrException: Error loading class 'org.apache.solr.handler.component.CollapseComponent'
[followed by the long line of tracebacks..]
My solrconfig.xml included the following (included in case there is an obvious flaw):
<searchComponent name="collapse" class="org.apache.solr.handler.component.CollapseComponent" />
<requestHandler name="standard" class="solr.SearchHandler" default="true">
<!-- default values for query parameters -->
<lst name="defaults">
<str name="echoParams">explicit</str>
<!--
<int name="rows">10</int>
<str name="fl">*</str>
<str name="version">2.1</str>
-->
</lst>
<arr name="components">
<str>collapse</str>
</arr>
</requestHandler>
Stephen: I attempted your configuration as well, with the most recent patch and the patch you referenced, but the results were the same.
I am going to attempt a fresh try on an Ubuntu Machine, but any other ideas would be most appreciated.
Strange, maybe something went wrong during building and CollapseComponent is not included into the war. You might look into solr.war and check for CollapseComponent.class:
cd apache-solr-1.3.0/example/webapps unzip solr.war cd WEB-INF/lib unzip apache-solr-core-1.3.0.jar cd org/apache/solr/handler/component/
Is the file CollapseComponent.class there?
Hi Stephan, when I was doing performance tests on the latest patch for doing normal collapsing (not adjacent collapsing), I found that there was a significant performance improvement during field collapsing compared to the old patch. This applies for both specifying sorting and not specifying sorting in the request. If you have other questions / comments about the latest patch just ask.
Thomas,
Again thanks. I've verified that the CollapseComponent is indeed NOT present in the war. That'd suggest something going amiss during the patching process, correct? And as it appears to be happening each time, either there's an issue with the patch (which others have verified as working) or something conflicts with my current setup (solr / tomcat / CentOS). Can I manually create apache-solr-core and force the file in?
Quick update : starting fresh, i was able to get the issue resolved once ant properly rebuilt the solr-core file. Uncertain why previous attempts failed so completely. Many thanks for your help.
I have implemented collapsing on a high-volume project of mine in much less flexible, but more practical manner.
Part I. You have to guarantee that all documents having the same value of collapse-field are dropped into Lucene index as a sequential batch. That guarantees they get sequential docIds, and with some more work - that they all end up in the same segment.
Part II. When doing collection you always get docIds in sequential order, and thus, thanks to Part I you get the docs-to-be-collapsed already grouped by collapse-field, even before you drop the docs into PriorityQueue to sort them.
Cons:
You can only collapse on a single predetermined at index creation time field.
If one document changes, you have to reindex all docs that have the same collapse-field value, so it's best if you have either low update/add rates, or few documents sharing the same collapse-field value.
Pros:
The CPU and memory costs for collapsing compared to usual search are very close to zero and do not depend on index size/total docs found.
The same idea works with new Lucene per-segment collection and in distributed mode (sharded index).
Within collapsed group you can sort hits however you want, and select one that will represent the group for usual sort/paging.
The implementation is not brain-dead simple, but nears it.
Martijn,
You mentioned your latest patch update "depends on some changes in Solr and a change inside Lucene". Does this mean it is not compatible with 1.3?
I applied the latest patch field-collapse-solr-236-2.patch to and tried to compile it seems to require org.apache.lucene.search.FieldComparator and org.apache.lucene.search.Collector and maybe other classes from lucene. I checked out a few version of lucene but looking at
LUCENE-1483 it seems that only the current trunk have the classes needed. So it doesn't seem to be possible to use the patch with 1.3
Keven, that is correct my patch is not compatible with 1.3. It works from revision 779497 (which is 1.4-dev).
Hi,
Has anyone successfully used localsolr and collapse patch together in Solr 1.4-dev. I am getting two result-sets one from localsolr and other from collapse. I need a merged result-set..
I am using localsolr 1.5 and field-collapse-solr-236-2.patch.
Any pointers ???
Shekar, can you show how you configured local solr and field collapsing in the solrconfig.xml file?
Here is the solfconfig file.
<requestHandler name="geo" class="solr.SearchHandler">
<lst name="defaults">
<str name="echoParams">explicit</str>
</lst>
<arr name="components">
<str>localsolr</str>
<str>collapse</str>
</arr>
</requestHandler>
You can get more details from
===================================================
Following are the results I am getting :
<response>
−
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">146</int>
−
<lst name="params">
<str name="lat">41.883784</str>
<str name="radius">50</str>
<str name="collapse.field">resource_id</str>
<str name="rows">2</str>
<str name="indent">on</str>
<str name="fl">resource_id,geo_distance</str>
<str name="q">TV</str>
<str name="qt">geo</str>
<str name="long">-87.637668</str>
</lst>
</lst>
−
<result name="response" numFound="4294" start="0">
−
<doc>
<int name="resource_id">10018</int>
<double name="geo_distance">26.16691883965225</double>
</doc>
−
<doc>
<int name="resource_id">10102</int>
<double name="geo_distance">39.90588996589528</double>
</doc>
</result>
−
<lst name="collapse_counts">
<str name="field">resource_id</str>
−
<lst name="doc">
<int name="10022">116</int>
<int name="11701">4</int>
</lst>
−
<lst name="count">
<int name="10015">116</int>
<int name="10018">4</int>
</lst>
−
<lst name="debug">
<str name="Docset type">BitDocSet(5201)</str>
<long name="Total collapsing time(ms)">46</long>
<long name="Create uncollapsed docset(ms)">22</long>
<long name="Collapsing normal time(ms)">24</long>
<long name="Creating collapseinfo time(ms)">0</long>
<long name="Convert to bitset time(ms)">0</long>
<long name="Create collapsed docset time(ms)">0</long>
</lst>
</lst>
−
<result name="response" numFound="5201" start="0">
−
<doc>
<int name="resource_id">10015</int>
</doc>
−
<doc>
<int name="resource_id">10018</int>
</doc>
</result>
</response>
The LocalSolrQueryComponent and the CollapseComponent are both doing a search, that is why there are two result sets.
I think if you want field collapsing and local search you cannot use the version of localsolr that you are currently using, but you can use the latest
local solr patch (
SOLR-773). The latest patch does local search in a different manner, the DistanceCalculatingComponent (LocalSolrQueryComponent is removed) self does not do a search, but it adds a filter query (based on the lat, long and radius) to the normal search, that is then executed in the collapse component, so it should work in the way you expect it.
Example configuration for the latest patch:
<searchComponent name="geodistance" class="org.apache.solr.spatial.tier.DistanceCalculatingComponent" />
<queryParser name="spatial_tier" class="org.apache.solr.spatial.tier.SpatialTierQueryParserPlugin" />
<requestHandler name="geo" class="org.apache.solr.handler.component.SearchHandler">
<lst name="defaults">
<str name="echoParams">explicit</str>
<str name="defType">spatial_tier</str>
</lst>
<lst name="invariants">
<str name="latField">lat</str>
<str name="lngField">lng</str>
<str name="distanceField">geo_distance</str>
<str name="tierPrefix">tier</str>
</lst>
<arr name="components">
<str>collapse</str>
</arr>
<arr name="last-components">
<str>geodistance</str>
</arr>
</requestHandler>
Thanks a lot Martijn for you help..
Could you please point me to the example you are referring to. I could not find any example which is using DistanceCalculatingComponent.
I have not found an online example yet, but I copied this config from the javadoc of the DistanceCalculatingComponent class and modified it. The patch also modifies the solr examples, so i f you look there you can see how the patch is used (example/solr/conf/schema.xml and example/solr/conf/solrconfig.xml). You need to add an extra update processor and an extra field and dynamic field in order to make it work.
Hello all. We implemented the old Field Collapse patch ( 2008-02-14 03:38 PM) a few months ago into our production environment with some custom code to make it work over distributed search. Shortly after deployment we started noticing extremely slow queries (3-12 seconds) on a completely random basis. After disabling field collapse these random queries disappeared. Does anyone here know of the issue that might have caused this, and any idea if it has been fixed in the patches since the one we used?
Hi Oleg, I have checked your latest patch, but I could not find the code that deals with the distributed search. How did you make collapsing work for distributed search? Which parameters did you use while doing a search? What I can tell is that the latest patches do not support field collapsing for distributed search.
Auto-reply: I'm on Vacation this week.
I've tried applying the most recent patch against a completely fresh check out of the trunk, but I'm getting compile errors related to a class updated in the patch:
compile:
[mkdir] Created dir: /Users/jayhill/solrwork/trunk/build/solr
[javac] Compiling 367 source files to /Users/jayhill/solrwork/trunk/build/solr
[javac] /Users/jayhill/solrwork/trunk/src/java/org/apache/solr/util/DocSetScoreCollector.java:31: org.apache.solr.util.DocSetScoreCollector is not abstract and does not override abstract method acceptsDocsOutOfOrder() in org.apache.lucene.search.Collector
[javac] public class DocSetScoreCollector extends Collector {
error
I noticed that FieldCollapsing is targeted for release 1.5, but I've noticed that some folks have been using it in production, and I was curious to work with it in 1.4 is possible.
Hey Jay, I have fixed this issue in the new patch. So if you apply the new patch everything should be fine.
The compile error was a result of of the upgrade of the Lucene libraries is Solr. Because of
LUCENE-1630 a new method was added to the Collector class.
In this patch I also removed the invocations to ExtendedFieldCache methods and changed them to FieldCache methods. ExtendedFieldCache is now deprecated in the updated Lucene libraries. If you have any problems with this patch let me know.
Important:
Only use this patch from revision 794328 (07/15/2009) and up. Use the previous patch if you are using an older 1.4-dev revision.
Because the lucene jars have been updated, the previous patch does not work with the current trunk.
Use this patch for rev 801872 and up. For revisions before that use the older patches.
I have also included SolrJ support for fieldcollapsing in this patch. Might be handy those integrating with Solr via SolrJ.
By invoking enableFieldCollapsing(...) with a fieldname as parameter on SolrQuery class you enable fieldcollapsing for the current request.
If the search was successful one can execute getFieldCollapseResponse() on SolrResponse for retrieving a FieldCollapseResponse object from which one can retrieve the field collapse information.
I have updated the field collapse patch and made the following changes:
- Refactored the collapse code into a strategy pattern. The two distinct manners of collapsing are now in two different classes, which in my understanding makes the code cleaner and easier to understand. I have removed the CollapseFilter and created a DocumentCollapser which is an interface. The DocumentCollapser has two concrete implementation the AdjacentDocumentCollapser and the NonAdjacentDocumentCollapser. Both implementation share the same abstract base class AbstractDocumentCollapser that has fields and methods that are common in both concrete implementation.
- Removed deprecated Lucene methods in the PredefinedScorer.
- Fixed a normal field collapse bug. Filter queries were handled as normal queries (were added together via a boolean query), and thus were also used for scoring.
- Added more unit and integration tests, including two tests that tests facets in combination with field collapsing. These tests test the collapse before collapsing and after collapsing.
This patch only works with the Solr 1.4-dev from revision 804700 and later..
Hi Martin, I tested your latest patch, found no problem so far. The code is indeed better to understand now, good work.
For my current project I need to know which documents have been removed during collapsing. The current idea is to change the collapsing info and add an array with all document IDs that are removed from the result. Any suggestion on how/where to implement this?
Hi,
I tested the latest patch (field-collapse-5.patch ) and got:
HTTP Status 500 - 8452333 java.lang.ArrayIndexOutOfBoundsException: 8452333 at org.apache.lucene.search.FieldComparator$StringOrdValComparator.copy(FieldComparator.java:660) at org.apache.solr.search.NonAdjacentDocumentCollapser$DocumentComparator.compare(NonAdjacentDocumentCollapser.java:254) at org.apache.solr.search.NonAdjacentDocumentCollapser$DocumentPriorityQueue.lessThan(NonAdjacentDocumentCollapser.java:192) at org.apache.lucene.util.PriorityQueue.insertWithOverflow(PriorityQueue.java:158) at org.apache.solr.search.NonAdjacentDocumentCollapser.doCollapsing(NonAdjacentDocumentCollapser.java:99) at org.apache.solr.search.AbstractDocumentCollapser.collapse(AbstractDocumentCollapser.java:174) at org.apache.solr.handler.component.CollapseComponent.doProcess(CollapseComponent.java:98) at org.apache.solr.handler.component.CollapseComponent.process(CollapseComponent.java:67) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195) at
...
I can provide the complete stacktrace if needed.
Hi Thomas, currently both collapsing algorithms do not store the the ids of the collapsed documents.
In order to have this functionality I think the following has to be done:
1) In the doCollapsing(...) methods of both concrete implementations of DocumentCollapser, the collapsed documents have to be stored. Depending on what you want you can store it in one big list or store it a list per most relevant document. The most relevant document is the document that does not collapse.
2) In the getCollapseInfo(...) method in the AbstractDocumentCollapser you then need to output these collapsed documents. If you are storing the collapsed documents in one big list then adding a new NamedList with collapsed document would be fine I guess. If you are storing the collapsed documents per document head, then I would add the collapsed document ids to existing resDoc named list. It is important that you return the Solr unique id instead of the lucene id.
This is just one approach, but what is the reason that you want this functionality? I guess what would be much easier, is to do a second query after the collapse query. In this second query you disable field collapsing (by not setting collapse.field) and you set fq=[collapse.field]=[collapse.value] for example.
Potentially the number of collapsed documents can be very large and in that situation it can have a impact on performance. Therefore I think that this functionality should be disabled by default. In the same way collapseInfoDoc and collapseInfoCount are managed.
Hi Tarjei, that doesn't look good
Besides the the complete stacktrace I'm also interested in what request url to Solr resulted in this exception (with what params etc.) and what version of Solr you are currently using?
I use collapsing in an online store and need to do a quite complex price calculation for every collapse group based on the products behind that group. I also thought about doing a second query, but that is not an option as I would have to do that for every group (i have up to 100 groups per request). So doing the calculation outside the scope of solr but retrieving the necessary data from solr seems to be the best approach for me. I agree that this functionality should be disabled by default.
Thanks for the pointer, I will have a look at it...
Hi there, Martijn & Thomas,
We're using FieldCollapse exactly in this way. In order to retrieve the collapsed results we do subqueries over each the results returned from the (outer) collapsing query, as Martijn suggests. It would be a fantastic option if the documents in the collapse could be returned. Knowing how many there are would be a big improvement as well (maybe this is possible and I don't know how?).
Right now, in order to manage the load, we're calling the subquery only on the results in the page in the user's view. Because this is all happening in a web environment, we also selectively choose to make some requests on that page while we're generating the search results page, and make the others via ajax from the browser, which gives the user a much faster response.
Thanks,
D
Hi Thomas, I agree that in your situation this feature is very handy. Assuming that you want to return the whole document (with all fields) and you have groups of reasonable sizes then this increases your response time dramatically. What I think would be a better approach is to only return the fields you want to use for your calculation. Lets say an average price per group. So instead of returning 10 fields per group (let say 7000 documents) you will only return one and that will save you a lot response time.
What do you think about this approach?
I also find the Ajax response solution, that Darrell describes is a good way to go.
yes, returning only one field would perfectly fit my needs, but Darrell seems to need more or even the complete document. So I think we need a collapse parameter that defines the field(s) of the removed documents that have to be included in the response. The Ajax approach is quite interesting but unfortunatly does not fit our needs in this case.
Darrell, the counts are already inluded in the repsonse by default, look for "collapse_count".
Ha, so it is! Thanks for the note; I'd totally missed that.
Returning only select fields of the collapsed documents would be a good option for us. Also, In our subquery of the collapsed documents we're finding the first and last result (they're time sorted so this makes sense). I guess this is similar to Thomas' average problem, but for us it's not necessary to iterate over the entire subquery results.
Yes, specifying which collapse fields to return is a good idea. Just like the fl parameter for a normal request.
I was thinking about how to fit this new feature into the current patch and I thought that it might be a good idea to revise the current field collapse result format. So that the results of this feature can fit nicely into the response.
Currently the collapse response is like this:
<lst name="collapse_counts"> <str name="field">venue</str> <lst name="doc"> <int name="233238">1</int> </lst> <lst name="count"> <int name="melkweg">1</int> </lst> </lst>
I think a response format like the following would be more ....
<lst name="collapse_counts"> <str name="field">venue</str> <lst name="results"> <lst name="233238"> <str name="fieldValue">melkweg</str> <int name="collapseCount">2</int> <lst name="collapsedValues"> <str name="price">10.99, "1.999,99"</str> <str name="name">adapter, laptop</str> </lst> </lst> </lst> </lst>
As you can see the data is more banded together and therefore easier to parse. The collapsedValues can have one or more fields, each containing collapsed field values in a comma separated format. The collapseValues element will off course only be added when the client specifies the collapsed fields in the request.
What do you think about this new result format?
Hi Martijn,
i also thought about changing the reponse format and introducing two new parameters "collapse.response" and "collapse.response.fl".
What do you think of these values for "collapse.response":
"counts": the default and current behavior, maybe even current response format to provide backward compatibility
"docs": returns the counts and the collapsed docs inside the collapse response (essentialy instead of removing the doc from the result just move it from the result to the collapse response). The parameter "collapse.response.fl" can be used to specify the field(s) to be returned in the collapse response.
So starting with your proposal the new collapse reponse format might look like this:
>
I think just moving the collapsed docs into the collapse response when desired provides us the necessary flexibility and is hopefully easy to implement.
Hi Thomas,
Comparing my format proposal with yours, the difference is how I output the collapsed documents. I chose to add all collapsed values in an element per field, because that would make it more compact and thus easier to transmit on the wire (certainly if the number of collapsed documents to return is large). This approach is not standard in Solr and your result structure is more common. I think that most of time is properly spent at reading the collapsed field values from the index anyway (i/o), therefore I think that your result structure is right now properly the best way to go.
I think that supporting the 'old' format is not that good of an idea, because this only increases complexity in the code. Also field collapsing is just a patch (although it is around for while) and is not a core Solr feature. I think people using this patch (and a patch in general) should always be aware that everything in a patch is subject to change. I think that collapse.response should be named something like collapse.includeCollapsedDocs when this is specified it includes the collapsed documents. The collapse.includeCollapsedDocs.fl would then only include the specified fields in the collapsed documents. So specifying _collapse.includeCollapsedDocs=true would result into the following result:
>
Not specifying the collapse.includeCollaspedDocs would result into the following response output:
<lst name="collapse_counts"> <str name="field">venue</str> <lst name="results"> <lst name="233238"> <str name="fieldValue">melkweg</str> <int name="collapseCount">2</int> </lst> </lst> </lst>
This will be the default and only response format.
And when for example collapse.info.doc=false is specified then the following result will be returned:
<lst name="collapse_counts"> <str name="field">venue</str> <lst name="results"> <lst name="melkweg"> <!-- we can not use the head document id any more, so we use the field value --> <int name="collapseCount">2</int> </lst> </lst> </lst>
When collapse.info.count=false is specified this would just remove the fieldValue from the response. I do not know if these parameters are actually set to false by many people, but it is something to keep in mind. I also recently added support for field collapsing to solrj in the patch, obviously this has to be updated to the latest response format.
In general it must be made clear to the Solr user that this feature is handy, but it can dramatically influence the performance in a negative way. This is because the response can contain a lot of documents and each field value has to be read from the index, which results in a lot of i/o activity on the Solr side. Just because of the fact that a lot of data is returned in the response; simply viewing the response in the browser can become quite a challenge.
But more important do you think that these changes are acceptable (response format / request parameters)?
I have some ideas for performance improvements.
I noticed that the code fetches the field cache twice, once for the collapse and then for the response object, assuming you asked for the info count in the response.
That seems expensive, especially for real-time content.
I think its better to use FieldCache.StringIndex instead of returning a large string array and keep it around for the collapse and the response object.
I changed the code so that I keep the cache around like so
/**
- Keep the field cached for the collapsed fields for the response object as well
*/
private FieldCache.StringIndex collapseIndex;
To get the index use something like this instead of getting the string array for all docs
collapseIndex = FieldCache.DEFAULT.getStringIndex(searcher.getReader(), collapseField)
when collapsing , you can get the current value using something like this and remove the code passing the array
int currentId = i.nextDoc();
String currentValue = collapseIndex.lookup[collapseIndex.order[currentId]];
when building the response for the info count, you can reference the same cache like so:-
if (collapseInfoCount){ resCount.add(collapseFieldType.indexedToReadable( collapseIndex.lookup[collapseIndex.order[id]]), count); }
I also added timing for the cache access as it could be slow if you are doing a lot of updates
I have added code for displaying selected fields for the duplicates but its difficult to submit . I hope this gets committed as its hard to sumbit a patch as its not in svn and I cannot submit a patch to a patch to a patch .. you get the idea.
Hi Abdul, nice improvements. It makes absolutely sense to keep the field values around during the collapsing as a StringIndex. From what I understand the StringIndex does not have duplicate string values, whereas the plain string array has. This will lower the memory footprint. I will add these improvements to the next patch. Thanks for pointing this out!
If this helps you fix your unit tests. I fixed the unit tests by changing the CollapseFilter constructor that's used for testing to take a StringIndex like so :-
- CollapseFilter(int collapseMaxDocs, int collapseTreshold) {
+ CollapseFilter(int collapseMaxDocs, int collapseTreshold, FieldCache.StringIndex index) {
+ this.collapseIndex = index;
and then I changed the unit test cases to move values into a StringIndex in CollapseFilterTest like so:-
public void testNormalCollapse_collapseThresholdOne() {
- collapseFilter = new CollapseFilter(Integer.MAX_VALUE, 1);
+ String[] values = new String[] {"a", "b", "c"}
;{0, 1, 0, 2, 1, 0, 1}
+ int[] order = new int[]
;{1, 2, 0, 3, 4, 5, 6}
+ FieldCache.StringIndex index = new FieldCache.StringIndex(order, values);
+ int[] docIds = new int[]
;
+
+ collapseFilter = new CollapseFilter(Integer.MAX_VALUE, 1, index);
- String[] values = new String[] {"a", "b", "a", "c", "b", "a", "b"}
;
Hey All: Just upgraded to 1.4 to get the new patch (many thanks, Martijn). The new algorithm appears to be sensitive to the size and complexity of the query (rather than simply the count of documents) - should this be the case? Unfortunately, we have rather large and complex queries with dozens of terms and several phrases, and while these queries are <0.5sec without collapsing, they are 3-4sec with collapsing. Meanwhile, collapse using *:* or other simple queries come back in <0.5sec - so it appears to be primarily a query-complexity issue.
I'm wondering if the filter cache (or some other cache) might be able to help with this situation?
I have updated the field collapse patch with the following:
1. Added the return collapse documents feature. When the parameter collapse.includeCollapsedDocs with value true is specified then the collapsed documents will returned per distinct field value. When this feature is enabled a collapsedDocs element is added to the field collapse response part. It looks like this:
<lst name="collapsedDocs"> <result name="Amsterdam" numFound="2" start="0"> <doc> <str name="id">262701</str> <str name="title">Bitterzoet, 100% Halal, Appletree Records & Deux d'Amsterdam presents</str> </doc> <doc> <str name="id">327511</str> <str name="title">Salsa Danscafé</str> </doc> </result> </lst>
It is also possible to return only specific fields with the collapse.includeCollapsedDocs.fl parameter. It expects fieldnames delimited by comma, just like the normal fl parameter.
These feature can dramatically impact the performance, because a group can potently contain many documents which all have to retrieved from the index and transported over the wire. So it is certainly wise to use it in combination with the fl parameter.
2. Added Solrj support for collapsed documents feature.
3. Added the performance improvements that Abdul suggested.
4. The debug information is now not returned by default. When the parameter collapse.debug with value true is specified, then the debug information is returned.
5. When field collapsing is done on a field that is multivalued or tokenized then an exception is thrown. I have chosen to do this because collapsing on such fields lead to unexpected results. For example when a field is tokenized only the last token of the field can be retrieved from the fieldcache (the fieldcache is used for retrieving the fields from the index in a cached manner for grouping documents into groups of distinct field values). This results in collapsing only on the last token of a field value instead of the complete field value. Multivalued fields have similar behaviour, plus for multivalued fields the Lucene FieldCache throws an exception when there are more tokens for a field than documents. Personally I think that throwing an exception is better then have unexpected results, at least it is clear that something field collapse related is wrong.
6. When doing a normal field collapse and not sorting on score the Solr caching mechanism is used. Unfortunately this was previously not the case.
@Paul
When doing non adjacent collapsing (aka normal collapsing) the Solr caches are not being used. The current patch uses the Solr caches when doing a search without scoring, but still the most common case is of course field collapsing and sorting on score. This is because the non adjacent field collapse algorithm requires the score of all results, which is collected with a Lucene collector. The search method on the SolrIndexSearcher that specifies a collector, does not have caching capabilities. In the next patch I will fix this problem, so that normal field collapse search uses the Solr caches as they should. The adjacent collapsing algorithm does use the solr caches, but the algorithm is much slower than non adjacent collapsing.
Thanks Martijn!
Also, while I was doing testing on collapse, I've noticed some threading issues as well. I think they are primarily centered around the collapseRequest field.
Specifically, when I run two collapse queries at the same time, I get the following exception:
java.lang.IllegalStateException: Invoke the collapse method before invoking getCollapseInfo method at org.apache.solr.search.AbstractDocumentCollapser.getCollapseInfo(AbstractDocumentCollapser.java:183) at org.apache.solr.handler.component.CollapseComponent.doProcess(CollapseComponent.java:115))
And when I run a second (non-collapsing) query at the same time I run the collapse query I get this exception:
java.lang.NullPointerException at org.apache.solr.handler.component.CollapseComponent.doProcess(CollapseComponent.java:109))
These errors occurred with the 2009-08-24 patch, but (upon brief inspection) it looks like the same situation would occur with the latest patch.
If I get the chance, I'll try and debug further.
Hey Martijn,
Have you made any progress on making field collapsing distributed?
Oleg
Hi Paul, thanks for pointing this out. I also tried to hammer my Solr instance and I got the same exceptions, which is not good. I have attached a patch that fixes these exceptions. The problem was indeed centred around the collapseRequest field and I have fixed this by using a ThreadLocal that holds the CollapseRequest instance. Because of this the reference to the CollapseRequest is not shared across the search requests and thus a new thread cannot interfere with a collapse request that is still being used by another thread.
Hi Oleg, no I have not made any progress. I'm still not clear how to solve it in an efficient manner as I have written in my previous comment:.
I recently read something about Katta and . Katta facilitates distributed search and has for support global scoring. I'm not completely sure how it is implemented in Katta, but maybe with Katta it is relative efficient to share the intermediate collapse results between shards.
Martijn, I think a more appropriate way to fix the threading issue is to bind the collapseRequest to the request context and drop the class field all together. So:
public void prepare(ResponseBuilder rb) throws IOException { super.prepare(rb); rb.req.getContext().put("collapseRequest", resolveCollapseRequest(rb)); }
and
public void process(ResponseBuilder rb) throws IOException { CollapseRequest collapseRequest = rb.req.getContext().remove("collapseRequest"); if (collapseRequest == null) { super.process(rb); return; } doProcess(rb, collapseRequest); }
You are right Uri, using the requestContext is much more appropriate than using a ThreadLocale. I have updated the patch with this change.
Hi Martijn, I made some tests with the new collapsedDocs feature. Looks very good, but in some cases it seems to return wrong collapsed docs. There seems to be a connection between sorting and this problem. Here an example using the example docs collapsed on field inStock and sorting by popularity:*:*&sort=popularity%20asc&fl=id&collapse.field=inStock&collapse.includeCollapsedDocs=true&collapse.includeCollapsedDocs.fl=id
For inStock:T document id:VDBDB1A16 remains in the result after collapsing. But this document is also returned in the collapsedDocs response and in addition document id:SP2514N is missing there.
Hi Thomas. I tried to reproduce something similar here, but I did run into the problems you described. Can you tell me what the fieldtypes are for your sort field and collapse field?
I found the problem with my real world data and reproduced it with the solr example schema and data. In the solr example popularity is of type "int" and inStock is "boolean". I made some more tests and could reproduce other fieldtypes too, here some examples using the field manu_exact (string):*:*&sort=manu_exact%20asc&fl=id&collapse.field=inStock&collapse.includeCollapsedDocs=true
-> as in the previous example document id:VDBDB1A16 is in result and collapsedDocs*:*&sort=manu_exact%20desc&fl=id&collapse.field=inStock&collapse.includeCollapsedDocs=true
-> document id:VA902B is in result and collapsedDocs*:*&sort=popularity%20desc&fl=id&collapse.field=manu_exact&collapse.includeCollapsedDocs=true
-> document id:VS1GB400C3 is in result and collapsedDocs
Hi Thomas, I have fixed the problem and updated the patch. I was able to reproduce the bug on the Solr example dataset. The problem was not limited to field collapsing with sorting on a field alone. The problem was located in the NonAdjactentFieldCollapser in the doCollapse(...) method in this specific part:
// dropoutId has a value smaller than the smallest value in the queue and therefore it was removed from the queue collapseDoc.priorityQueue.insertWithOverflow(currentId); // check if we have reached the collapse threshold, if so start counting collapsed documents if (++collapseDoc.totalCount > collapseTreshold) { collapseDoc.collapsedDocuments++; if (dropOutId != null) { addCollapsedDoc(currentId, currentValue); } }
Lets say that that the currentId has the most relevent field value and the collapseThreshold is met. When the currentId is added to the queue it stays there and another document id will be dropped out. In this situation a document that is the most relevant field value is added to the collapsed documents and it stays in the queue and therefore it will also be added to the normal results.
I changed it to this.
// dropoutId has a value smaller than the smallest value in the queue and therefore it was removed from the queue Integer dropOutId = (Integer) collapseDoc.priorityQueue.insertWithOverflow(currentId); // check if we have reached the collapse threshold, if so start counting collapsed documents if (++collapseDoc.totalCount > collapseTreshold) { collapseDoc.collapsedDocuments++; if (dropOutId != null) { addCollapsedDoc(dropOutId, currentValue); } }
Now only a document that will never and up in the final results is added to the collapsed documents (and not the current document that might be more relevant then other documents in the priority queue). The above code change fixes the bug in my test setups, can you also confirm that this fixes the issue on your side?
Hi Martijn, this fixed the problem, thanks
I have created a new patch that has the following changes:
1) Non adajacent collasping with sorting on score also uses the Solr caches now. So now every field collapse searches are using the Solr caches properly. This was not the case in my previous versions of the patch. This improvement will make field collapsing perform better and reduce the query time for regular searches. The downside was, that in order to make this work I had to modify some methods in the SolrIndexSearcher.
When sorting on score the non adjacent collapsing algorithm needs the score per document. The score is collected in a Lucene collector. The previous version of the patch uses the searcher.search(Query, Filter, Collector) method to collect the documents (as a DocSet) and scores, but by using this method the Solr caches were ignored.
The methods that return a DocSet in the SolrIndexSearcher do not offer the ability the specify your own collector. I changed that so you can specify your own collector and still benefit from the Solr caches. I did this in a non intrusive manner, so that nothing changes for existing code that uses the normal versions of these methods.
public DocSet getDocSet(Query query) throws IOException { DocSetCollector collector = new DocSetCollector(maxDoc()>>6, maxDoc()); return getDocSet(query, collector); } public DocSet getDocSet(Query query, DocSetAwareCollector collector) throws IOException { .... } DocSet getPositiveDocSet(Query q) throws IOException { DocSetCollector collector = new DocSetCollector(maxDoc()>>6, maxDoc()); return getPositiveDocSet(q, collector); } DocSet getPositiveDocSet(Query q, DocSetAwareCollector collector) throws IOException { ..... } public DocSet getDocSet(List<Query> queries) throws IOException { DocSetCollector collector = new DocSetCollector(maxDoc()>>6, maxDoc()); return getDocSet(queries, collector); } public DocSet getDocSet(List<Query> queries, DocSetAwareCollector collector) throws IOException { ....... } protected DocSet getDocSetNC(Query query, DocSet filter) throws IOException { DocSetCollector collector = new DocSetCollector(maxDoc()>>6, maxDoc()); return getDocSetNC(query, filter, collector); } protected DocSet getDocSetNC(Query query, DocSet filter, DocSetAwareCollector collector) throws IOException { ......... }
I also made a DocSetAwareCollector that both DocSetCollector and DocSetScoreCollector implement.
2) The collapse.includeCollapsedDocs parameters has been removed. In order to include the collapsed documents the parameter collapse.includeCollapsedDocs.fl must be specified. collapse.includeCollapsedDocs.fl=* will include all fields of the collapsed documents and collapse.includeCollapsedDocs.fl=id,name will only include the id and name field of the collapsed documents.
Hi all,
Just applied "field-collapse-5.patch" and i guess there are problems with filter queries.
Here it is:
1- select?q=:&fq=lat:[37.2 TO 39.8]
numFound: 6284
2- select?q=:&fq=lng:[24.5 TO 29.9]
numFound: 16912
3- select?q=:&fq=lat:[37.2 TO 39.8]&fq=lng:[24.5 TO 29.9]
numFound: 19419
4- When using "q" instead of "fq" which is:
select?q=lat:[37.2 TO 39.8] AND lng:[24.5 TO 29.9]
numFound: 3777 (which is the only correct number)
The thing is, as i understand, instead of applying "AND" for each filter query it applies "OR". Checked select?q=lat:[37.2 TO 39.8] OR lng:[24.5 TO 29.9]
numFound: 19419 (same as 3rd one)
Any idea how to fix this?
Thx.
Hi Aytek,
How I understand filter queries work is that each separate filter query produces a result set and each of this result set is intersected together. Which means that it works as you want it.
I'm not sure but I think that this issue is not related to the patch. I have tried to reproduce this situation (on a different data set), but it behaved as it should. With the patch and without.
Have you tried fq=lat:[37.2 TO 39.8] AND lng:[24.5 TO 29.9] instead of having it in two separate fqs?
Martijn
Hi Martijn,
to clarify the problem;
1) select/?q=:&fq=+lat:[37.2 TO 39.8] +lng:[24.5 TO 29.9]
2) select/?q=:&fq=lat:[37.2 TO 39.8]&fq=lng:[24.5 TO 29.9]
Expected result set for these queries are identical (isnt it?) but actually with patch the results becomes different.
Also without patch there is no problem.
Hi Martijn,
Intersection of results sets is also a kind of "AND", right? Intersection result of A docset and B docset is equal to resultset of "conA AND condB" i think.
Your suggestion "fq=lat:[37.2 TO 39.8] AND lng:[24.5 TO 29.9]" works. And also Anil's suggestion "fq=+lat:[37.2 TO 39.8] +lng:[24.5 TO 29.9]" works.
But they don't allow multiple selections for a facet field. I can't use excludes. It throws parsing errors.
Using "AND" between two filters in a filter query results with one item in FilterList of QueryCommand, that must be the reason not to be able to parse/support ex/tag things there i guess.
I have two solr instances here one with patch and another without patch. And i just copied configurations and data from one to other. Only difference is field_collapsing patch as i can see. I'm trying to see what makes the difference in results but new in solr so it takes time to see/catch what is going on. So any help/tip would be appreciated.
Thanks,
Aytek
Hi Aytek,
I was able to reproduce the same situation you described earlier. When I was testing yesterday I thought I was testing on a Solr instance without the patch, but I wasn't. Anyhow I have fixed bug and I have attached a new patch. Good thing you noticed this bug it was really corrupting the search results.
Martijn
Hi Martijn,
Thanks a lot it works.
Aytek
I have attached a new patch which includes a major refactoring which makes the code more flexible and cleaner. The patch also includes a new aggregate functionality and a bug fix.
Aggregate function and bug fix
The new patch allows you to execute aggregate functions on the collapsed documents (for example sum the stock amount or calculating the minimum price of a collapsed group). Currently there are four aggregate functions available: sum(), min(), max() and avg(). To execute one or more functions the collapse.aggregate parameter has to be added to the request url. The parameter expects the following syntax: function_name(field_name)[, function_name(field_name)]. For example: collapse.aggregate=sum(stock), min(price) and might have a result like this:
<lst name="aggregatedResults"> <lst name="sum(stock)"> <str name="Amsterdam">10</str> ... </lst> <lst name="min(price)"> <str name="Amsterdam">5.99</str> ... </lst> </lst>
The patch also fixes a bug inside the NonAdjacentDocumentCollapser that was reported on the solr-user mailing list a few days ago. An index out of bounds exception was thrown when documents were removed from an index and a field collapse search was done afterwards.
Code refactoring
The code refactoring includes the following things:
- The notion of a CollapseGroup. A collapse group defines what an unique group is in the search result. For the adjacent and non adjacent document collapser this is different. For adjacent field collapsing a group is defined by its field value and the document id of the most relevant document in that group. More then one collapse group may have the same fieldvalue. For normal field collapsing (non adjacent) the group is defined just by the field value.
- The notion of a CollapseCollector that receives the collapsed documents from a DocumentCollector and does something with it. For example keeps a count of how many documents were collapsed per collapse group or computes an average of a certain field like price. As you can see in the code instead of using field values or document ids a collapse group is used for identifying a collapse group.
/** * A <code>CollapseCollector</code> is responsible for receiving collapse callbacks from the <code>DocumentCollapser</code>. * An implementation can choose what to do with the received callbacks and data. Whatever an implementation collects it * is responsible for adding its results to the response. * * Implementation of this interface don't need to be thread safe! */ public interface CollapseCollector { /** * Informs the <code>CollapseCollector</code> that a document has been collapsed under the specified collapseGroup. * * @param docId The id of the document that has been collasped * @param collapseGroup The collapse group the docId has been collapsed under * @param collapseContext The collapse context */ void documentCollapsed(int docId, CollapseGroup collapseGroup, CollapseContext collapseContext); /** * Informs the <code>CollapseCollector</code> about the document head. * The document head is the most relevant id for the specified collapseGroup. * * @param docHeadId The identifier of the document head * @param collapseGroup The collapse group of the document head * @param collapseContext The collapse context */ void documentHead(int docHeadId, CollapseGroup collapseGroup, CollapseContext collapseContext); /** * Adds the <code>CollapseCollector</code> implementation specific result data to the result. * * @param result The response result * @param docs The documents to be added to the response * @param collapseContext The collapse context */ void getResult(NamedList result, DocList docs, CollapseContext collapseContext); }
There is also a CollapseContext that allows you store data that can be shared between CollapseCollectors.
- A CollapseCollectorFactory is responsible for creating a CollepseCollector. It does this based on the SolrQueryRequest. All the logic for when to enable a certain CollapseCollector must be placed in the factory.
/** * A concrete <code>CollapseCollectorFactory</code> implementation is responsible for creating {@link CollapseCollector} * instances based on the {@link SolrQueryRequest}. */ public interface CollapseCollectorFactory { /** * Creates an instance of a CollapseCollector specified by the concrete subclass. * The concrete subclass decides based on the specified request if an new instance has to be created and * can return <code>null</code> for that matter. * * @param request The specified request * @return an instance of a CollapseCollector or <code>null</code> */ CollapseCollector createCollapseCollector(SolrQueryRequest request); }
Currently there are four CollapseCollectorFactories implementations:
- DocumentGroupCountCollapseCollectorFactory creates CollapseCollectors that collect the collapse counts per document group and return the counts in the response per collapsed group most relevant document id.
- FieldValueCountCollapseCollectorFactory creates CollapseCollectors that collect the collapse count per collapsed group and return the counts in the response per collepsed group field value.
- DocumentFieldsCollapseCollectorFactory creates CollapseCollectors that collect predefined fieldvalues from collapsed documents.
- AggregateCollapseCollectorFactory creates CollapseCollectors that create aggregate statistics based on the collapsed documents.
CollapseCollectorFactories are configured in the solrconfig.xml and by default all implementations in the patch are configured. The following configuration is sufficient
<searchComponent name="collapse" class="org.apache.solr.handler.component.CollapseComponent" />
The following configurations configures the same CollapseCollectorFactories as the previous configuration:
<searchComponent name="collapse" class="org.apache.solr.handler.component.CollapseComponent"> <arr name="collapseCollectorFactories"> <str>groupDocumentsCounts</str> <str>groupFieldValue</str> <str>groupDocumentsFields</str> <str>groupAggregatedData</str> </arr> </searchComponent> <fieldCollapsing> <collapseCollectorFactory name="groupDocumentsCounts" class="solr.fieldcollapse.collector.DocumentGroupCountCollapseCollectorFactory" /> <collapseCollectorFactory name="groupFieldValue" class="solr.fieldcollapse.collector.FieldValueCountCollapseCollectorFactory" /> <collapseCollectorFactory name="groupDocumentsFields" class="solr.fieldcollapse.collector.DocumentFieldsCollapseCollectorFactory" /> <collapseCollectorFactory name="groupAggregatedData" class="org.apache.solr.search.fieldcollapse.collector.AggregateCollapseCollectorFactory"> <lst name="aggregateFunctions"> <str name="sum">org.apache.solr.search.fieldcollapse.collector.aggregate.SumFunction</str> <str name="avg">org.apache.solr.search.fieldcollapse.collector.aggregate.AverageFunction</str> <str name="min">org.apache.solr.search.fieldcollapse.collector.aggregate.MinFunction</str> <str name="max">org.apache.solr.search.fieldcollapse.collector.aggregate.MaxFunction</str> </lst> </collapseCollectorFactory> </fieldCollapsing>
The CollapseCollectorFactories configured can be shared among different CollapseComponents. Most users do not have to do this, but when you creating your own implementations or someone else's then you have to do this in order to configure the CollapseCollectorFactory implementation. The order in collapseCollectorFactories does matter. CollapseCollectors may share data via the CollapseContext for that reason the order is depend. The CollapseCollectorFactories in the patch do not share data, but other implementations may.
The new patch contains a lot of changes, but I personally think that the patch is really an improvement especially the introduction of the CollapseCollectors that allows a lot of flexibility. Btw any feedback or questions are welcome.
This looks like a really nice rework! This JIRA has been a marathon (2.5 years!), but maybe the last miles are here.
Since this JIRA has so many comments, it is hard to navigate. Maybe it is a good time to close it and start a new active JIRA for the field collapsing project.
I have updated the patch that fixes the bug that was reported yesterday on the solr-user mailing list:
found another exception, i cant find specific steps to reproduce
besides starting with an unfiltered result and then given an int field
with values (1,2,3) filtering by 3 triggers it sometimes, this is in
an index with very frequent updates and deletes
--joe
java.lang.NullPointerException
at org.apache.solr.search.fieldcollapse.collector.FieldValueCountCollapseCollectorFactory
$FieldValueCountCollapseCollector.getResult(FieldValueCountCollapseCollectorFactory.java:84)
at org.apache.solr.search.fieldcollapse.AbstractDocumentCollapser.getCollapseInfo(AbstractDocumentCollapser.java:191)
at org.apache.solr.handler.component.CollapseComponent.doProcess(CollapseComponent.java:179).RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:233).headerComplete(HttpConnection.java:864)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:539):520)
It certainly has be going on for a long time
Talking about the last miles there are a few things in my mind about field collapsing:
- Change the response format. Currently if I look at the response even I get confused sometimes about the information returned. The response should more structured. Something like this:
<lst name="collapse_counts"> <str name="field">venue</str> <lst name="results"> <lst name="233238"> <!-- id of most relevant document of the group --> <str name="fieldValue">melkweg</str> <int name="collapseCount">2</int> <!-- and other CollapseCollector specific collapse information --> </lst> ... </lst> </lst>
Currently when doing adjacent field collapsing the collapse_counts gives results that are unusable to use. The collapse_counts use the field value as key which is not unique for adjacent collapsing as shown in the example:
<lst name="collapse_counts"> <int name="hard">1</int> <int name="hard">1</int> <int name="electronics">1</int> <int name="memory">2</int> <int name="monitor">1</int> </lst>
- Add the notion of a CollapseMatcher, that decides whether document field values are equal or not and thus whether they are allowed to be collapsed. This opens the road for more exotic features like fuzzy field collapsing and collapsing on more than one field. Also this allows users of the patch to easily implement their own matching rules.
- Distributed field collapsing. Although I have some ideas on how to get started, from my perspective it not going to be performed. Because somehow the field collapse state has to be shared between shards in order to do proper field collapsing. This state can potentially be a lot of data depending on the specific search and corpus.
- And maybe add a collapse collector that collects statistics about most common field value per collapsed group.
I think that this is somewhat the roadmap from my side for field collapsing at moment, but feel free to elaborate on this.
Btw I have recently written a blog about field collapsing in general, that might be handy for someone who is implementing field collapsing.
Getting the refactoring right is important.
Scaling needs to be on the roadmap as well. The data created in collapsing has to be cached in some way. If I do a collapse on my 500m test index, the first one takes 110ms and the second one takes 80-90ms. Searches that walk from one result page to the next have to be fast the second time. Field collapsing probably needs some explicit caching. This is a show-stopper for getting this committed.
When I sort or facet the work done up front is reused in some way. In sorting there is a huge amount of work pushed to the first query and explicitly cached. Faceting seems to leave its work in the existing caches and runs much faster the second time.
I agree about the caching. When searching with fieldcollapsing for the same query more than ones, then some caching should kick in. I think that the execution of the doCollapse(...) method should be cached. In this method the field collapse logic is executed, which takes the most time of a field collapse search.
I've found an NPE that occurs when performing quasi-distributed field collapsing.
My company only has one use case for field collapsing: collapsing on email address. Our index is spread across multiple cores. We found that if we shard by email address, so that all documents with a given email address are guaranteed to appear on the same core, then we can do distributed field collapsing.
We add &collapse.field=email and &shards=core1,core2,... to a regular query. Each core collapses on email and sends the results back to the requestor. Since no emails appear on more than one core, we've accomplished distributed search. We do lose the <collapse_count> section, but that's not needed for our purpose – we just need an accurate total document count, and to have no more than one document for a given email address in the results.
Unfortunately, this throws an NPE when searching on a tokenized field. Searching string fields is fine. I don't understand exactly why the NPE appears, but I did bandaid over it by checking explicitly for nulls at the appropriate line in the code. No more NPE.
There's a downside, which is that if we attempt to collapse on a field other than email – one which has documents appearing in multiple cores – the results are buggy: the first search returns few documents, and the number of documents actually displayed don't always match the "numFound" value. Then upon refresh we get what we think is the correct numFound, and the correct list of documents. This doesn't bother me too much, as you're guaranteed to get incorrect answers from the collapse code anyway when collapsing on a field that you didn't use as your key for sharding.
In the spirit of Yonik's law of patches, I have made two imperfect patches attempting to contribute the fix, or at least point out the error:
1. I pulled trunk, applied the latest
SOLR-236 patch, made my 2 line change, and created a patch file. The resultant patch file looks very different from the latest SOLR-236 patchfile, so I assume I did something wrong.
2. I pulled trunk, made my 2 line change, and created another patch file. This file is tiny but of course is missing all of the field collapsing changes.
Would you like me to post either of these patchfiles to this issue? Or is it sufficient to just tell you that the NPE occured in QueryComponent.java on line 556? ("rb._responseDocs.set(sdoc.positionInResponse, doc);" where sdoc was null.) Perhaps my use case is extraordinary enough that you're happy leaving the NPE in place and telling other users to not do what I'm doing?
Thanks!
Michael
With the current patch if you try to collapse on a field that is tokenized or multivalued an exception is thrown indicating that you cannot do that and the search is cancelled. What I guess is that when the search results are retrieved from the shards on the master a NPE is thrown because the shard result is not there. This is a limitation in itself, but it boils down to the fact how the FieldCache handles such field types (or at least how I think the FieldCache handles it).
I think it is good idea to share your patch and from there we might be able to get the change in a proper manner. So others will also benefit from quasi-distributed field collapsing.
Anyhow to properly implement distributed field collapsing the distributed methods have to be overriden in the collapse component, so that is where I would start. We might then also include the collapse_count in the response.
I'm using Martijn's patch from 2009-10-27. The FieldCollapseResponse#parseDocumentIdCollapseCounts assumes the unique key is a long. Is that a bug or an undocumented limitation?
Nice work guys! We should definitely get this into Solr 1.5
Hi Shalin, it was not my intention (Usually in my case I use a long as id). I'm currently refactoring the response format as described in a previous comment, so I have to change the SolrJ classes anyway. I will submit a patch shortly.
Martijn,
I probably wasn't clear – we are sharding and collapsing on a non-tokenized "email" field. We can perform distributed collapsing fine when searching on some other nontokenized field; the NPE occurs when we perform a search on a tokenized field.
Anyway, I'll attach the small patch now, which just adds the null check to Solr trunk.
This patch (quasidistributed.additional.patch) does not apply field collapsing.
Apply this patch in addition to the latest field collapsing patch, to avoid an NPE when:
- you are collapsing on a field F,
- you are sharding into multiple cores, using the hash of field F as your sharding key, AND
- you perform a distributed search on a tokenized field.
Note that if you attempt to use this patch to collapse on a field F1 and shard according to a field F2, you will get buggy search behavior.
I'm trying to get field collapsing to work against the 1.4.0 release. I applied the latest patch, moved the file, did a clean build, and set up a config based on the example. If I run a search without collapsing everything is fine, but if it actually tries to collapse, I get the following error:
java.lang.NoSuchMethodError: org.apache.solr.search.SolrIndexSearcher.getDocSet(Lorg/apache/lucene/search/Query;Lorg/apache/solr/search/DocSet;Lorg/apache/solr/search/DocSetAwareCollector;)Lorg/apache/solr/search/DocSet;
at org.apache.solr.search.fieldcollapse.NonAdjacentDocumentCollapser.doQuery(NonAdjacentDocumentCollapser.java:60)
at org.apache.solr.search.fieldcollapse.AbstractDocumentCollapser.collapse(AbstractDocumentCollapser.java:168)
at org.apache.solr.handler.component.CollapseComponent.doProcess(CollapseComponent.java:160))
The tricky part is that the method is there in the source and I wrote a little test JSP that can find it just fine. That implies a class loader issue of some sort, but I'm not seeing it. Any help would be greatly appreciated.
Thomas, the method that cannot be found ( SolrIndexSearcher.getDocSet(...) ) is a method that is part of the patch. So if the patch was successful applied then this should not happen.
When I released the latest patch I only tested against the solr trunk, but I have tried the following to verify that the patch works with 1.4.0 release:
- Dowloaded 1.4.0 release from Solr site
- Applied the patch
- Executed: ant clean dist example
- In the example config (example/solr/conf/solrconfig.xml) I added the following line under the standard request handler:
<searchComponent name="query" class="org.apache.solr.handler.component.CollapseComponent" />
- Started the Jetty with Solr with the following command: java -jar start.jar
- Added example data to Solr with the following command in the exampledocs dir: ./post.sh *.xml
- I Browsed to the following url:*:*&collapse.field=inStock and saw that the result was collapsed on the inStock field.
It seems that everything is running fine. Can you tell something about how you deployed Solr on your machine?
I tried the build again, and you are right, it does work fine with the default search handler. I had been trying to get it working with our search handler, which is dismax. That still doesn't work. Here is the handler configuration, which works fine until collapsing is added.
<requestHandler name="glsearch" class="solr.SearchHandler"> <lst name="defaults"> <str name="defType">dismax</str> <str name="qf">name^3 description^2 long_description^2 search_stars^1 search_directors^1 product_id^0.1</str> <str name="tie">0.1</str> <str name="facet">true</str> <str name="facet.field">stars</str> <str name="facet.field">directors</str> <str name="facet.field">keywords</str> <str name="facet.field">studio</str> <str name="facet.mincount">1</str> </lst> </requestHandler>
Edit: The search fails even if you don't pass a collapse field.
What kind of exception is occurring if you use dismax (with and without field collapsing)? If I do a collapse search with dismax in the example setup () field collapsing appears to be working.
I have attached a new patch, that incorporates Micheal's quasi distributed patch so you don't have to patch twice. In addition to that the new patch also merges the collapse_count data from each individual shard response. When using this patch you will still need to make sure that all documents of one collapse group stay on one shard, otherwise your collapse result will be incorrect. The documents of a different collapse group can stay on a different shard.
And this morning, without changing anything, it is working fine. I don't know what happened on Friday, but the changes I made then must have fixed it without showing up for some reason. In any case, thank you for the assistance.
Sorting of results doesn't work properly. Next, I detail the steps I followed and the problem I faced
I am using solr as a search engine for web pages, from which I use a field named "site" for collapsing and sort over scord
Steps
After downloading the last version of solr "solr-2009-11-15" and applying the patch "field-collapse-5.patch 2009-11-15 08:55 PM Martijn van Groningen 239 kB"
STEP 1 - I make a search using fieldcollapsing and the result is correct, the number with greatest scord is 0.477
STEP 2 - I make the same search and the fieldcollapsing throws other result with scord 0.17, the (correct) result of step 1 does not appear again
Possible problem
Step 1 stores the document in the cache for future searches
at Step 2 the search is don over the cache and does not find the previously stored document
Possible solution
I believe that the problem is in the storing of the document in the cache since if we make step 2 again we have the same result and the document with scord of 0.17 is not removed from the results, the only result removed is the document with scord 0.477
Conclusion
Documents are not sorted properly when using "fieldcollapsing + solrcache", that is when documents stored in solr cache are required
I can confirm this bug. I will attach a new patch that fixes this issue shortly. Thanks for noticing.
The reason why the search results after the first search were incorrect was, because the scores were not preserved in the cache. The result of that was that the collapsing algorithm could not properly group the documents into the collapse groups (the most relevant document per document group could not be determined properly), because there was no score information when retrieving the documents from cache (as DocSet in SolrIndexSearcher) .
I made sure that in the attached patch the score is also saved in the cache, so the collapsing algorithm can do its work properly when the documents are retrieved from the cache. Because the scores are now stored with the cached documents the actual size of the filterCache in memory will increase.
Tomorrow I'm going to try the patch , the next time I hope to help and not only communicate the problem
I have updated the patch and fixed the following issues:
- The issue that Marc described on the solr-dev list. The collapsed groups identifiers disappeared when the id field was anything other then a plain field (int, long etc...).
- The caching was not properly working when the collapse.field was changed between requests. Queries that should not have been cached were.
Hi,
new to Solr, so sorry for my likely still incomplete setup. I got everything from Solr SVN and applied the Patch (field-collapse-5.patch 2009-12-08 09:43 PM). As I search I get a NPE because I seem to not have a cache for the collapsing. It wants to add a entry to the cache but can't. There is none at that time, which it checks before in AbstractDocumentCollapser.collapse but still wants to use it later in AbstractDocumentCollapser.createDocumentCollapseResult. I suppose thats a bug? Or is something wrong on my side?
Exception I get is:
java.lang.NullPointerException
at org.apache.solr.search.fieldcollapse.AbstractDocumentCollapser.createDocumentCollapseResult(AbstractDocumentCollapser.java:278)
at org.apache.solr.search.fieldcollapse.AbstractDocumentCollapser.executeCollapse(AbstractDocumentCollapser.java:249)
at org.apache.solr.search.fieldcollapse.AbstractDocumentCollapser.collapse(AbstractDocumentCollapser.java:172)
at org.apache.solr.handler.component.CollapseComponent.doProcess(CollapseComponent.java:173)
at org.apache.solr.handler.component.CollapseComponent.process(CollapseComponent.java:127))
I fixed it locally by only adding something to the cache if there is one (fieldCollapseCache != null). But I'm not very into the code so not sure if thats a good/right way to fix it.
Thanks,
Marc
Just wanted to comment that I am experiencing the same behavior as Marc Menghin above (NPE) – the patch did NOT install cleanly (1 hunk failed) – but I couldn't really tell why since it looked like it should have worked – I just manually copied the hunk into the correct class.... Sorry I didn't note what failed....
@Marc. This was a silly bug, that occurs when you do not define a field collapse cache in the solrconfig.xml. I have attached a patch that fixes this bug, so you can use field collapse without configuring a field collapse cache. Caching with field collapsing is an optional feature.
@Chad. Due to changes in the trunk applying the previous patch will result into merge conflicts. The new patch can be applied without merge conflicts. This means that applying this patch on 1.4 source will properly result in merge conflicts.
Martijn, I'm about to upgrade our production servers to Solr 1.4 with this latest patch you just posted and the difference is incredible. The time from startup to first collapsed query results has gone from 90 down to about 20 seconds, subsequent searches seem to execute about twice as fast on average.
SOLR-236 has come a very long way in the year since we last patched. Thanks for all the hard work, it's truly great.
FYI, it doesn't patch clean against the 1.4 distribution tarball but I don't even understand what the conflict is, reading the patch the original code in that area that failed looked identical to what the patch was expecting:
(in QueryComponent.java)
sreq.params.remove(ResponseBuilder.FIELD_SORT_VALUES); // this was there
+
+ // disable collapser
+ sreq.params.remove("collapse.field");
+
// make sure that the id is returned for correlation. // and so was this?
Maybe it's a whitespace issue? Anyway it works fine if you just paste it in place.
Does anybody have a reason for why this should not be committed to trunk as it stands right now?
Well that is nice to hear Stephen
. I think I will add a 1.4 comparable patch to the issue, so people do not have issues while patching.
I think it is a good idea Shalin to add the patch to the trunk as it is. The patch is quite stable now. For any future work related to field-collapsing we should open new issues (this is the longest issue I've ever seen). Does anyone else has a reason why field-collapsing shouldn't be committed to the trunk?
Field Collapsing
|
https://issues.apache.org/jira/browse/SOLR-236?page=com.atlassian.jira.plugin.ext.subversion:subversion-commits-tabpanel
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
PageAdapter Class
Adapts a Web page for a specific browser and provides the base class from which all page adapters inherit, directly or indirectly.
System.Web.UI.Adapters.ControlAdapter
System.Web.UI.Adapters.PageAdapter
Namespace: System.Web.UI.AdaptersNamespace: System.Web.UI.Adapters
Assembly: System.Web (in System.Web.dll)
The PageAdapter type exposes the following members.
The PageAdapter class is an abstract class that adapts a Web page for a specific class of browsers, defined by the markup language that the browser uses (for example, HTML or XHTML). Much of the adaptability in rendering behavior can be encapsulated in the specialized text writer classes that derive from the HtmlTextWriter class, so it is not always necessary to provide a page adapter.
Most members of derived page adapters are called from the Page class or from control adapters. First, the Page class or control adapters detect the presence of the derived page adapter, and then call the member, or provide the functionality if the page adapter is not present.
The members of the PageAdapter class provide the following functionality:
The CacheVaryByHeaders and CacheVaryByParams properties define additional HTTP headers and HTTP GET and POST parameters that can be used to vary caching. They are called during cache initialization from the Page class.
The GetStatePersister method returns an object that can be used to persist the combined view and control states of the page. It is referenced from the PageStatePersister property if a derived page adapter is present.
The GetPostBackFormReference method provides a DHTML code fragment that can be used to reference forms in scripts.
The DeterminePostBackMode method returns a collection of the postback variables if the page is in postback. It is called by the .NET Framework instead of the Page.DeterminePostBackMode method if a derived page adapter is present.
The RenderBeginHyperlink and RenderEndHyperlink methods are used by control adapters to render hyperlinks if a derived page adapter is present.
The RenderPostBackEvent() method renders a hyperlink or postback client tag that can submit the form.
The RegisterRadioButton and GetRadioButtonsByGroup methods are used by radio button control adapters to reference the other RadioButton controls in a radio button group.
The ClientState property provides access to the combined control and view states of the Page object through the internal ClientState property of the Page class.
The TransformText method is used by control adapters to perform device-specific text transformation.
The following code example demonstrates how to derive a class named CustomPageAdapter from the PageAdapter class and override the RenderBeginHyperlink method. The RenderBeginHyperlink method adds an attribute named src to a hyperlink, which contains a reference to the current page. All hyperlinks rendered in pages to which CustomPageAdapter is attached will have the src attribute.
using System; using System.IO; using System.Web; using System.Web.UI; using System.Web.UI.Adapters; // A derived PageAdapter class. public class CustomPageAdapter : PageAdapter { // Override RenderBeginHyperlink to add an attribute that // references the referring page. public override void RenderBeginHyperlink( HtmlTextWriter writer, string targetUrl, bool encodeUrl, string softkeyLabel, string accessKey ) { string url = null; // Add the src attribute, if referring page URL is available. if( Page != null && Page.Request != null && Page.Request.Url != null ) { url = Page.Request.Url.AbsoluteUri; if( encodeUrl ) url = HttpUtility.HtmlAttributeEncode( url ); writer.AddAttribute( "src", url ); } // Add the accessKey attribute, if caller requested. if( accessKey != null && accessKey.Length == 1 ) writer.AddAttribute( "accessKey", accessKey ); // Add the href attribute, encode the URL if requested. if( encodeUrl ) url = HttpUtility.HtmlAttributeEncode( targetUrl ); else url = targetUrl; writer.AddAttribute( "href", url ); // Render the hyperlink opening tag with the added attributes. writer.RenderBeginTag( "a" ); } }
|
https://msdn.microsoft.com/en-us/library/system.web.ui.adapters.pageadapter.aspx
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
I'm pretty sure
new throwing an exception is platform-independent. Following that, you can install an exception handler with the effect of doing what you are suggesting (writing to file). Although, I think you would be hard pressed to find a system today that doesn't already transparently support this with virtual memory.
Wow, I just tested it with this code:
#include <iostream> using namespace std; int main() { long long int *test=new long long int[100000000000]; test[ 99999999999]=10; cout<<test[ 99999999999]; return 0; }
And it performed without a problem. Considering that I only have 4 gigs of RAM on my machine, can I assume that my compiler is transparently supporting the whole file-writing thing? (if this works then my mass1venum dll (I wrote it as an excercise) will be able to be a lot simpler)
Indeed, pretty much all modern operating systems have good mechanisms to deal with memory and use the hard-drive if necessary. Of course, there will always be a chance that you run out of memory (whether the OS doesn't want to take up more HDD memory or because you run out of HDD space). So, there is really no need for you to write code that does this type of thing, and the OS will do this much better than you can ever hope to. For instance, the OS will swap memory between the RAM and HDD such that your program is always working on RAM memory (the chunks of memory not currently used are basically sleeping on the HDD). This is a kind of mechanism you would have a really hard time doing yourself.
As for the new operator, the C++ standard prescribes that it must throw a bad_alloc exception if you run out of memory, so that is entirely platform independent. If you want the "return NULL" behaviour, you must use the no-throw new operator, as in
new(nothrow) int[1024];
If you are going to be working with large chunks of data, it might be a good idea to consider a container like std::deque which stores the complete array as a number of big chunks of data, as opposed to one contiguous array. This will generally be less demanding for the OS because the OS won't have to make space for one huge chunk of memory. But, of course, the memory won't be contiguous, so it might not be appropriate for your application.
Earn rewards points for helping others. Gain kudos. Cash out. Get better answers yourself.
It's as simple as contributing editorial or replying to discussions labeled OP Sponsor or OP Kudos
|
https://www.daniweb.com/software-development/cpp/threads/414483/checking-memory-bounds
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Manages the displaying of the Milky Way. More...
#include <MilkyWay.hpp>
Manages the displaying of the Milky Way.
Draw the Milky Way.
Reimplemented from StelModule.
Used to determine the order in which the various modules are drawn.
Reimplemented from StelModule.
Get the color used for rendering the milky way.
Gets whether the Milky Way is displayed.
Get Milky Way intensity..
Sets whether to show the Milky Way.
Set Milky Way intensity.
Update and time-dependent state.
Updates the fade level while the Milky way rendering is being changed from on to off or off to on.
Implements StelModule.
|
http://stellarium.org/doc/0.11.4/classMilkyWay.html
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
14 November 2012 15:48 [Source: ICIS news]
BERLIN (ICIS)--The chemical industry is so far not catering to a shifting marketplace, due to a fundamental misunderstanding of demographics in a changing world, a leading consultant said on Wednesday.
Speaking at the 11th World Aromatics & Derivatives Conference in ?xml:namespace>
“Demographics drive demand,” said Hodges. “But a wrong diagnosis means we will misunderstand the opportunities.”
Hodges cited the example of
“
Hodges also talked about a “health explosion” as opposed to a population explosion: “There is a growing elderly population due to higher life expectancy and 10 million less births per year,” he said, citing United Nations statistics.
“And there are no products or services for these people. The babyboom generation is now ageing, causing major changes in western demand patterns.”
Elsewhere, Hodges used the acronym VUCA to describe the current economic landscape and its inherent challenges: Volatility, Uncertainty, Complexity and Ambiguity.
High frequency speculative trading has led to much sharper swings in financial markets, while uncertainty surrounding future demand levels impact business and consumer confidence.
Interest rates have become more complex, as ageing "baby boomers" worry about return of capital rather than return on capital. Decision-making has also become more ambiguous, as political and social factors rival economics.
Hodges, who also writes a blog for ICIS, countered this with another interpretation of VUCA: Vision, Understanding, Clarity and Agility.
“Companies need to set themselves measurable goals and ask themselves ‘where am I trying to get to?’ An understanding of the changes that are underway is also essential,” he explained.
Additionally, Hodges stressed that the planning process requires clarity over implementation, while unforeseen events – “the bumps in the road” – will place a premium on agility and being able to adjust to changing circumstances.
The 11th World Aromatics & Derivatives Conference, organised by ICIS and International eChem, is taking place in
($1 = €0.79)
|
http://www.icis.com/Articles/2012/11/14/9614142/chems-industry-must-respond-to-changing-demographics-hodges.html
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Contents
Macros function called macro_MacroName(macro, arg1, arg2, ...), which is the entry-point.
the first argument macro is an instance of class Macro, and also evaluates to a string of the macroname,
- arguments arg1, arg2, ... are the arguments given by the user, but special rules apply, see below.
You can access the request object by using macro.request - e.g. to access form parameters and other information related to user interaction.
Your function should use the formatter to construct valid markup for the current target format. In most cases this is HTML, so writing a macro which returns HTML will work in most cases but fail when formats like XML or text/plain are requested - you can use macro.formatter to access the current formatter.
For example, your wiki page has the following line on it:
<<MacroName(True, 1.7772, 17)>>
You could write a MacroName.py file like this:
1 from MoinMoin.wikiutil import get_unicode, get_bool, get_int, get_float 2 3 Dependencies = [] 4 generates_headings = False 5 6 def macro_MacroName(macro, arg1, arg2, arg3=7): 7 # arguments passed in can be None or a unicode object 8 9 arg1 = get_bool(macro.request, arg1) 10 arg2 = get_float(macro.request, arg2) 11 # because arg3 has a default of 7, it is always of type int or long 12 13 return macro.formatter.text("arguments are: %s %2.3f %d" % (arg1, arg2, arg3)).
If your macro can generate headings (by calling macro.formatter.heading()) then set generates_headings to True to allow the TableOfContents macro to evaluate your macro for headings to take into the table of contents.
Macro arguments
The arguments given to your macro are normally passed as unicode instances or None if the user gave no argument.
Consider this example macro:
and the wiki code (together with the result)
1. <<Example()>> - passes None, None 2. <<Example(a,b)>> - passes u'a', u'b' 3. <<Example(,)>> - passes None, None 4. <<Example("",)>> - passes u'', None
default values
If your macro declares default values as in this example:
Then the arguments can be skipped or left out and are automatically converted to the type of the default value:
1. <<Example()>> - passes 7, 2.1 2. <<Example(,3)>> - passes 7, 3.0 3. <<Example(2)>> - passes 2, 2.1 4. <<Example(a,7.54)>> - error, "a" not an integer
Additionally, it is possible to declare the type you would like to get:
def macro_Example(macro, arg1=int, arg2=float): ...
This requires that the user enters the correct parameter types, but it is possible to skip over them by giving an empty argument in which case it'll be passed into the macro code as None:
1. <<Example()>> - passes None, None 2. <<Example(a, 2.2)>> - error, "a" not an integer 3. <<Example(7, 2.2)>> - passes 7, 2.2 4. <<Example(, 3.14)>> - passes None, 3.14
unit arguments
If your macro declares unitsarguments then units are required as in this example:
The defaultunit of px is used if the user does not enter a unit. He has to enter valid units of px or %.
1. <<Example()>> - argument is: None 2. <<Example(100)>> - argument is: 100px 3. <<Example(100mm)>> - <<Example: Invalid unit in value 100mm (allowed units: px, %)>> 4. <<Example(100px)>> - argument is: 100px
choices
If your plugin takes one of several choices, you can declare it as such:
This requires that the user enter any of the given choices and uses the first choice if nothing is entered:
1. <<Example(apple)>> - passes u'apple' 2. <<Example(OrAnGe)>> - error, tells user which choices are valid 3. <<Example()>> - passes u'apple'
required arguments
If you require some arguments, you can tell the generic code by using the required_arg class that is instantiated getting the type of the argument:
from MoinMoin.wikiutil import required_arg def macro_Example(macro, arg1=required_arg(int)): ...
This requires that the user enters the argument:
1. <<Example()>> - error, argument "arg1" required 2. <<Example(4.3)>> - error, "4.3" not an integer 3. <<Example(5)>> - passes 5
keyword arguments
If your macro needs to accept arbitrary keyword arguments to pass to something else, it must declare a _kwargs parameter which should default to the empty dict:
This makes the user able to pass in anything, even arbitrary unicode strings as key names:
1. <<Example(äöü=7)>> - passes the dict {u'äöü': u'7'} 2. <<Example(=7)>> - passes the dict {u'': u'7'} 3. <<Example(a=1,"d e"=3)>> - passes the dict {u'a': u'1', u'd e': u'3'} 4. <<Example(a)>> - error, too many (non-keyword) arguments
trailing arguments
Trailing arguments allow your macro to take any number of positional arguments, or to be able to handle the syntax of some existing macros that looks like
[[Macro(1, 2, 3, name=value, name2=value2, someflag, anotherflag)]].
In order to handle this, declare a _trailing_args macro parameter which should have a an empty list as the default:
Also, when the user gives too many arguments, these are put into _trailing_args as in the second example:
1. <<Example(1, 2, 3, name=test, name2=test2, flag1)>> - valid, passes u'flag1' in _trailing_args 2. <<Example(1, 2, 3, test, test2, flag1)>> - same
It is possible to use this feature together with the arbitrary keyword arguments feature _kwargs.
|
http://wiki.apache.org/logging-log4cxx/HelpOnMacros?action=show&redirect=AideDesMacros
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
#include "test.h" void tst_sig(fork_flag, handler, cleanup) char *fork_flag; int (*handler)(); void (*cleanup)();
Tst_sig is used by UNICOS test case programs to set up signal handling functions for unexpected signals. This provides test cases with a graceful means of exiting following an unexpected interruption by a signal. Tst_sig should be called only once by a test program.
The fork_flag parameter is used to tell tst_sig whether or not to ignore the SIGCLD signal caused by the death of a child process that had previously been created by the fork(2) system call (see signal(2) for more information on the SIGCLD signal).
Setting fork_flag to FORK will cause tst_sig to ignore the SIGCLD signal. This option should be set if the test program directly (eg. call fork(2)) or indirectly (eg. call system(3S)) creates a child process.
Setting fork_flag to NOFORK will cause tst_sig to treat the SIGCLD signal just as any other unexpected signal (ie. the handler will be called). This option should be set by any test program which does not directly or indirectly create any child processes.
The handler parameter is a pointer to a function returning type int which is executed upon the receipt of an unexpected signal. The test program may pass a pointer to a signal handling function or it may elect to use a default handler supplied by tst_sig.
The default handler is specified by passing DEF_HANDLER as the handler argument. Upon receipt of an unexpected signal, the default handler will generate tst_res(3) messages for all test results that had not been completed at the time of the signal, execute the cleanup routine, if provided, and call tst_exit. Note: if the default handler is used, the variables TCID and Tst_count must be defined and available to tst_sig (see tst_res(3)).
The cleanup parameter is a pointer to a user-defined function returning type void which is executed by the default handler. The cleanup function should remove any files, directories, processes, etc. created by the test program. If no cleanup is required, this parameter should be set to NULL.
#include "test.h" /* * the TCID and TST_TOTAL variables must be available to tst_sig * if the default handler is used. The default handler will call * tst_res(3) and will need this information. */ int TCID = "tsttcs01"; /* set test case identifier */ int TST_TOTAL = 5; /* set total number of test results */ void tst_sig(); /* * set up for unexpected signals: * no fork() system calls will be executed during the test run * use the default signal handler provided by tst_sig * no cleanup is necessary */ tst_sig(NOFORK, DEF_HANDLER, NULL); void tst_sig(), cleanup(); int handler(); /* * set up for unexpected signals: * fork() system calls will be executed during the test run * use user-defined signal handler * use cleanup */ tst_sig(FORK, handler, cleanup);
Tst_sig will output warnings in standard tst_res format if it cannot set up the signal handlers.
|
http://www.makelinux.net/man/3/T/tst_sig
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
NAME
PRANG::Graph::Meta::Element - metaclass metarole for XML elements
SYNOPSIS
use PRANG::Graph; has_element 'somechild' => is => "rw", isa => "Some::Type", xml_required => 0, ; # equivalent alternative - plays well with others! has 'somechild' => is => "rw", traits => [qw/PRANG::Element/], isa => "Some::Type", xml_required => 0, ;
DESCRIPTION
The PRANG concept is that attributes in your classes are marked to correspond with attributes and elements in your XML. This class is for marking your class' attributes as XML elements. For marking them as XML attributes, see PRANG::Graph::Meta::Attr.
Non-trivial elements - and this means elements which contain more than a single TextNode element within - are mapped to Moose classes. The child elements that are allowed within that class correspond to the attributes marked with the
PRANG::Element trait, either via
has_element or the Moose
traits keyword.
Where it makes sense, as much as possible is set up from the regular Moose definition of the attribute. This includes the XML node name, the type constraint, and also the predicate.
If you like, you can also set the
xmlns and
xml_nodeName attribute property, to override the default behaviour, which is to assume that the XML element name matches the Moose attribute name, and that the XML namespace of the element is that of the enclosing class (ie,
$class->xmlns), if defined.
The order of declaring element attributes is important. They implicitly define a "sequence". To specify a "choice", you must use a union sub-type - see below. Care must be taken with bundling element attributes into roles as ordering when composing is not defined.
The predicate property of the attribute is also important. If you do not define
predicate, then the attribute is considered required. This can be overridden by specifying
xml_required (it must be defined to be effective).
The isa property (type constraint) you set via 'isa' is required. The behaviour for major types is described below. The module knows about sub-typing, and so if you specify a sub-type of one of these types, then the behaviour will be as for the type on this list. Only a limited subset of higher-order/parametric/structured types are permitted as described.
- Bool sub-type
If the attribute is a Bool sub-type (er, or just "Bool", then the element will marshall to the empty element if true, or no element if false. The requirement that
predicatebe defined is relaxed for
Boolsub-types.
ie,
Boolwill serialise to:
<object> <somechild /> </object>
For true and
<object> </object>
For false.
- Scalar sub-type
If it is a Scalar subtype (eg, an enum, a Str or an Int), then the value of the Moose attribute is marshalled to the value of the element as a TextNode; eg
<somechild>somevalue</somechild>
- Object sub-type
If the attribute is an Object subtype (ie, a Class), then the element is serialised according to the definition of the Class defined.
eg, with;
{ package CD; use Moose; use PRANG::Graph; has_element 'author' => qw( is rw isa Person ); has_attr 'name' => qw( is rw isa Str ); } { package Person; use Moose; use PRANG::Graph; has_attr 'group' => qw( is rw isa Bool ); has_attr 'name' => qw( is rw isa Str ); has_element 'deceased' => qw( is rw isa Bool ); }
Then the object;
CD->new( name => "2Pacalypse Now", author => Person->new( group => 0, name => "Tupac Shakur", deceased => 1, ) );
Would serialise to (assuming that there is a PRANG::Graph document type with
cdas a root element):
<cd name="2Pacalypse Now"> <author group="0" name="Tupac Shakur> <deceased /> </author> </cd>
- ArrayRef sub-type
An
ArrayRefsub-type indicates that the element may occur multiple times at this point. Bounds may be specified directly - the
xml_minand
xml_maxattribute properties.
Higher-order types are supported; in fact, to not specify the type of the elements of the array is a big no-no.
If
xml_nodeNameis specified, it refers to the items; no array container node is expected.
For example;
has_attr 'name' => is => "rw", isa => "Str", ; has_attr 'releases' => is => "rw", isa => "ArrayRef[CD]", xml_min => 0, xml_nodeName => "cd", ;
Assuming that this property appeared in the definition for 'artist', and that CD
has_attr 'title'..., it would let you parse:
<artist> <name>The Headless Chickens</name> <cd title="Stunt Clown">...<cd> <cd title="Body Blow">...<cd> <cd title="Greedy">...<cd> </artist>
You cannot (currently) Union an ArrayRef type with other simple types.
- Union types
Union types are special; they indicate that any one of the types indicated may be expected next. By default, the name of the element is still the name of the Moose attribute, and if the case is that a particular element may just be repeated any number of times, this is fine.
However, this can be inconvenient in the typical case where the alternation is between a set of elements which are allowed in the particular context, each corresponding to a particular Moose type. Another one is the case of mixed XML, where there may be text, then XML fragments, more text, more XML, etc.
There are two relevant questions to answer. When marshalling OUT, we want to know what element name to use for the attribute in the slot. When marshalling IN, we need to know what element names are allowable, and potentially which sub-type to expect for a particular element name.
After applying much DWIMery, the following scenarios arise;
- 1:1 mapping from Type to Element name
This is often the case for message containers that allow any number of a collection of classes inside. For this case, a map must be provided to the
xml_nodeNamefunction, which allows marshalling in and out to proceed.
has_element 'message' => is => "rw", isa => "my::unionType", xml_nodeName => { "nodename" => "TypeA", "somenode" => "TypeB", };
It is an error if types are repeated in the map. The empty string can be used as a node name for text nodes, otherwise they are not allowed.
This case is made of win because no extra attributes are required to help the marshaller; the type of the data is enough.
An example of this in practice;
subtype "My::XML::Language::choice0" => as join("|", map { "My::XML::Language::$_" } qw( CD Store Person ) ); has_element 'things' => is => "rw", isa => "ArrayRef[My::XML::Language::choice0]", xml_nodeName => +{ map {( lc($_) => $_ )} qw(CD Store Person) }, ;
This would allow the enclosing class to have a 'things' property, which contains all of the elements at that point, which can be
cd,
storeor
personelements.
In this case, it may be preferrable to pass a role name as the element type, and let this module evaluate construct the
xml_nodeNamemap itself.
- more types than element names
This happens when some of the types have different XML namespaces; the type of the node is indicated by the namespace prefix.
In this case, you must supply a namespace map, too.
has_element 'message' => is => "rw", isa => "my::unionType", xml_nodeName => { "trumpery:nodename" => "TypeA", "rubble:nodename" => "TypeB", "claptrap:nodename" => "TypeC", }, xml_nodeName_prefix => { "trumpery" => "uri:type:A", "rubble" => "uri:type:B", "claptrap" => "uri:type:C", }, ;
FIXME: this is currently unimplemented.
- more element names than types
This can happen for two reasons: one is that the schema that this element definition comes from is re-using types. Another is that you are just accepting XML without validation (eg, XMLSchema's
processContents="skip"property). In this case, there needs to be another attribute which records the names of the node.
has_element 'message' => is => "rw", isa => "my::unionType", xml_nodeName => { "nodename" => "TypeA", "somenode" => "TypeB", "someother" => "TypeB", }, xml_nodeName_attr => "message_name", ;
If any node name is allowed, then you can simply pass in
*as an
xml_nodeNamevalue.
- more namespaces than types
The principle use of this is PRANG::XMLSchema::Whatever, which converts arbitrarily namespaced XML into objects. In this case, another attribute is needed, to record the XML namespaces of the elements.
has 'nodenames' => is => "rw", isa => "ArrayRef[Maybe[Str]]", ; has 'nodenames_xmlns' => is => "rw", isa => "ArrayRef[Maybe[Str]]", ; has_element 'contents' => is => "rw", isa => "ArrayRef[PRANG::XMLSchema::Whatever|Str]", xml_nodeName => { "" => "Str", "*" => "PRANG::XMLSchema::Whatever" }, xml_nodeName_attr => "nodenames", xmlns => "*", xmlns_attr => "nodenames_xmlns", ;
FIXME: this is currently unimplemented.
- unknown/extensible element names and types
These are indicated by specifying a role. At the time that the PRANG::Graph::Node is built for the attribute, the currently available implementors of these roles are checked, which must all implement PRANG::Graph.
They Treated as if there is an
xml_nodeNameentry for the class, from the
root_elementvalue for the class to the type. This allows writing extensible schemas.
SEE ALSO
PRANG::Graph::Meta::Attr, PRANG::Graph::Meta::Element, PRANG::Graph::Node
AUTHOR AND LICENCE
Development commissioned by NZ Registry Services, and carried out by Catalyst IT -
Copyright 2009, 2010, NZ Registry Services. This module is licensed under the Artistic License v2.0, which permits relicensing under other Free Software licenses.
|
https://metacpan.org/pod/PRANG::Graph::Meta::Element
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Difference between revisions of "Password length & complexity"
Revision as of 14:42, 28 January 2011
Status
Under review
Introduction
A password is something that a user knows similar to a personal identification number (PIN) we use for our bank's ATM card. Coupled with user identification, it force.
- Maximum length. Remember, people tend to forget their passwords easily. The longer the password, the more chances people will mistakenly use them in your system.
Password Complexity
- Password characters should be a combination of alphanumeric characters. Alphanumeric characters consist of letters, numbers, punctuation marks, mathematical and other conventional symbols. See implementation below for the exact characters referred to.
- For change password functionality, if possible, keep a history of old passwords hashes used. You should not store the actual passwords to protect against brute forcing if the database file is compromised. In this way, the user cannot change to a password that was used a couple of months back.
Password Generation
The OWASP Enterprise Security API for Java has a few methods that simplify the task of generating quality passwords, as well as determining password strength.
- public void verifyPasswordStrength(String newPassword, String oldPassword)
The verifyPasswordStrength() method accepts a new password and the current password. First, it checks that the new password does not contain any 3 character substrings of the current password. Second, it checks if the password contains characters from each of the following character sets: CHAR_LOWERS, CHAR_UPPERS, CHAR_DIGITS, CHAR_SPECIALS. Finally, it calculates the password strength by multiplying the length of the new password by the number of character sets it is comprised of. A value of less than 16 is considered weak and an exception will be thrown in this case. At some point in the future, this value will be a user configurable option.
- public String generateStrongPassword()
The generateStrongPassword() method uses the ESAPI randomizer to build strong passwords comprised of upper & lower case letters, digits, and special characters. Currently, all character sets are hard-coded in ESAPI Encoder.java. There are plans to make these user configurable. The source code is available at Authenticator.java. For more information on OWASP ESAPI follow this link: OWASP ESAPI
Sample Code
Below is a sample Servlet that uses the above methods to create a new password, and compare it to a blank string.
import org.owasp.esapi.*; import org.owasp.esapi.interfaces.IAuthenticator; import java.io.*; import javax.servlet.*; import javax.servlet.http.*; public class generateStrongPassword extends HttpServlet { public static final String RESOURCE_DIRECTORY = "org.owasp.esapi.resources"; private static String resourceDirectory = System.getProperty(RESOURCE_DIRECTORY); public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException{ response.setContentType("text/html"); IAuthenticator passInstance = ESAPI.authenticator(); PrintWriter out = response.getWriter(); String password = passInstance.generateStrongPassword(); try { passInstance.verifyPasswordStrength(password, ""); out.println("New password is strong!"); } catch (Exception e) { out.println("New password is not strong enough"); } out.println(password); } }
|
https://www.owasp.org/index.php?title=Password_length_%26_complexity&diff=102459&oldid=16434
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
std::div, std::ldiv, std::lldiv
Computes both the quotient and the remainder of the division of the numerator
x by the denominator
y
[edit] Parameters
[edit] Return value
If both the remainder and the quotient can be represented as objects of the corresponding type (int, long, long long, std::imaxdiv_t, respectively), returns both as an object of type
std::div_t,
std::ldiv_t,
std::lldiv_t,
std::imaxdiv_t defined as follows:
std::div_t
struct div_t { int quot; int rem; };
or
struct div_t { int rem; int quot; };
std::ldiv_t
struct ldiv_t { long quot; long rem; };
or
struct ldiv_t { long rem; long quot; };
std::lldiv_t
struct lldiv_t { long long quot; long long rem; };
or
struct lldiv_t { long long rem; long long quot; };
std::imaxdiv_t
struct imaxdiv_t { std::intmax_t quot; std::intmax_t rem; };
or
struct imaxdiv_t { std::intmax_t rem; std::intmax_t quot; };
If either the remainder or the quotient cannot be represented, the behavior is undefined.
[edit] Notes.
[edit] Example
#include <string> #include <cmath> #include <cstdlib> #include <iostream> std::string itoa(int n, int base) { std::string buf; std::div_t dv{}; dv.quot = n; do { dv = std::div(dv.quot, base); buf += "0123456789abcdef"[std::abs(dv.rem)]; // string literals are arrays } while(dv.quot); if(n<0) buf += '-'; return {buf.rbegin(), buf.rend()}; } int main() { std::cout << itoa(12345, 10) << '\n' << itoa(-12345, 10) << '\n' << itoa(65535, 16) << '\n'; }
Output:
12345 -12345 ffff
|
http://en.cppreference.com/w/cpp/numeric/math/div
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Hi all,
The CacheMinFileSize and CacheMaxFileSize directives in mod_disk_cache
are currently set per server, which seems to be historical from the
time before mod_cache could be added as a normal handler /
specifically placed filter. This stops an administrator applying a
cache size policy per directory or location.
The fix is to move the directives to the per-directory config. This
will fix the problem, while being backwards compatible with existing
configurations.
A further issue is that of namespace use, in theory these directives
(and other directives in the mod_disk_cache module) should be prefixed
with "CacheDisk" or "DiskCache" instead of "Cache". Thoughts?
Regards,
Graham
--
|
http://mail-archives.apache.org/mod_mbox/httpd-dev/201009.mbox/%3C918D5A0D-A7B0-4069-B817-7C9CC409BC62@sharp.fm%3E
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
javaw -jar KeyGuard.jar
Please be advised that KeyGuard is suitable only for the purpose of maintaining a Key-Store of personal Certificates that are created by the user and are therefore vouched by the user. KeyGuard currently re-writes the Key-Store file every time you save your changes to the Key-Store, this means that any Certificates bought and installed or created before working with KeyGuard, will be removed! Any changes you make to the Key-Store through the command-line program "keytool" are not maintained in KeyGuard and will be lost when KeyGuard saves its information back to the Key-Store. I will not be responsible for loss of any information from your Key-Store and urge you to back up your ".keystore" file before using KeyGuard. Also note that this approach covers signing JARs to be used with Sun's JRE Plug-In and not with other JVMs.
In your work with Java and the Web, you may have had to sign a JAR. Most likely, it was a JAR containing an Applet. There are several ways to sign JAR files, especially those that contain Applets. However, for some bizarre reason, most ways involve command-line tools, or scripts. Since I'm currently developing a Web Project which requires me to deploy at least two signed Applets, each with its own tasks, I realized I would be doing a lot of signing and re-signing of my JAR files. Since software projects are usually quite dynamic, I decided to create a GUI that will not only let me sign a JAR file, but also manage the Java Key-Store used for signing the JAR files in my project.
Before I begin to describe how KeyGuard helps in creating Key-Store Aliases and signing JAR files, a brief review of the process is in order. The process of signing a JAR file starts with first creating a Certificate that contains your credentials (if you don't already have one). This process involves the command-line program "keytool" which requires you to provide information about yourself as the provider of the Certificate. Once you have specified the necessary information, a Certificate is stored in the Key-Store under an Alias you specified. This Alias can then be given as a parameter to another command-line program "jarsigner" along with the JAR file you wish to sign. Both "keytool" and "jarsigner" should be easily accessible from your shell-prompt for KeyGuard to work properly. As you can see from the image above, KeyGuard maintains the passwords for each of the stored Certificates as well as the global password for the Key-Store itself. One particular requirement for working with the command-line programs is that you remember the passwords and Aliases of each Certificate you wish to use, a problem which KeyGuard solves very well.
You can use KeyGuard in one of the following ways:
If you choose to download KeyGuard and use it locally, you should be able to create a File-Association to let you right-click on any JAR file and sign it using KeyGuard as demonstrated in the image below:
If you choose however to work with KeyGuard as an Applet from this article, please be advised that you must authorize the Applet to access your local file-system or it will not function properly.
KeyGuard has two modes of operations. The first is a simple editor of the Certificate, and their information, that are stored in the Key-Store, this mode uses the "keytool" command-line program and feeds it the information required to create the Certificates you use. The second mode is activated to sign a JAR file. If you run KeyGuard.jar without any parameters, it will display the Certificate editing GUI. If you supply a JAR file (complete path and file-name is required), the JAR signing dialog will be displayed instead. If you do not have a .keystore file in your disk, you will have to first create one by adding at least one Certificate through KeyGuard before you will be able to sign any JAR files.
KeyGuard uses another component I have presented in an earlier article, this component enables KeyGuard to interact with the command-line programs it uses. KeyGuard itself is contained in the KeyGuard.jar file. The Applet's JAR file is not required for KeyGuard to run as a stand-alone version, however, as you have noticed, there is no source-code to download from this article.
This section discusses how Applets are expected to utilize their freedom once the Applet's JAR can be signed.
First, an Applet must use its own SecurityManager in order to bypass certain permission-checks otherwise performed by the default SecurityManager. The following code displays a simple Applet that already includes its own SecurityManager as an inner-class:
SecurityManager
SecurityManager
import java.awt.*;
import java.security.*;
public class SignedApplet extends Applet {
final class MySecurityManager extends SecurityManager {
// Permission overriding methods..
public void checkPermission(Permission perm) {}
public void checkPermission(Permission perm, Object context) {}
}
// Rest of Applet goes here..
}
Sometimes, as I have found when testing the Applet version of KeyGuard, simply having a SecurityManager present as an inner class is not enough. I am not sure why, but apparently, even a signed Applet may not create its own ClassLoaders even if it is signed and has a SecurityManager present as an inner class. Therefore, I needed to create a SecurityManager class that actually replaces the default SecurityManager, as the following code shows:
ClassLoaders
import java.security.*;
public final class KgSecurityManager extends SecurityManager {
public static boolean init() {
try {
SecurityManager sm = System.getSecurityManager();
if (!(sm instanceof KgSecurityManager)) {
System.setSecurityManager(new KgSecurityManager());
}
return true;
} catch (AccessControlException ex) {
// Here's a little trick, if we get an AccessControlException here,
// this means the user has refused to trust the Applet's Certificate!
return false;
}
}
// Same permission overriding methods as before..
public void checkPermission(Permission perm) {}
public void checkPermission(Permission perm, Object context) {}
}
Now, my KeyGuard Applet can use the KgSecurityManager class like this:
KgSecurityManager
public class KgApplet extends JApplet {
private static boolean appletAuthorized;
// ...Rest of Applet code goes here...
static {
// This code will run only once, when the KgApplet class is
// first loaded in the Browser's JVM..
try {
// Try to initialize the KgSecurityManager class..
appletAuthorized = KgSecurityManager.init();
// Initialize the look & feel setting..
UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName());
} catch (Exception e) {
e.printStackTrace();
}
}
}
The Applet can now check the appletAuthorized boolean-flag to know if it is allowed to attempt any operations that will otherwise result in another AccessControlException if the default SecurityManager is still in place.
appletAuthorized
AccessControlException
I procrastinated writing KeyGuard mostly for fear that it may become a project of its own. However, this little utility is worth its weight in bytes. I can now change my Key-Store on a whim, remove and add Certificates with ease and the simplicity of a double-click on the KeyGuard.jar file. After integrating KeyGuard.jar into my XP system, I can safely say that I see no need to manually use "keytool" or "jarsigner" ever again. Writing this utility and bridging the world of command-line/interactive programs and a modern GUI system was also illuminating as at one point I found that the External Process created for "keytool" was already sending input to my program while my program wasn't sure the Process was properly started! I was amused to see that the problem was with Thread synchronization of access to the Process instance. Indeed, one must be careful combining procedural-programming and event-driven/multi-threading programs. If you like this utility and can think of future improvements, please leave me a message or email.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
// The KeyStore File
private File keyStoreFile;
// Main method
public static void main(String[] args) {
try {
UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName());
} catch (Exception e) {
e.printStackTrace();
}
new KgApp(args);
}
// Construct the application
public KgApp(String[] args) {
loadAliasCache();
if (args.length == 0 || !keyStoreExists()) {
frame = new KgFrame(this);
} else if (args.length == 1) {
sJarFileName = args[0];
frame = new JsFrame(this);
}
//... more code here...
}
// Check if the KeyStore file exists and is ready to be read
public boolean keyStoreExists() {
return keyStoreFile.exists() && keyStoreFile.canRead();
}
// Load the Alias Cache
private void loadAliasCache() {
String userPath = System.getProperty("user.home") + File.separatorChar;
keyStoreFile = new File(userPath + ".keystore");
//... more code here...
}
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
|
http://www.codeproject.com/Articles/9689/KeyGuard-JAR-Signing-Utility?msg=1073448
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
On Tue, 25 Jan 2005 16:46:54 -0600, Serge Hallyn <serue us ibm com> wrote: > On Tue, 2005-01-25 at 15:25 -0600, Timothy R. Chavez wrote: > > Any accesses on that inode, > > in that namespace (presumably the only access we care about), by an > > audited syscall will be noted and sent to userspace. Isn't that > > sufficient? > > Not quite right: Any access to that inode from any namespace. Another > namespace might simply mean that you have a different path to the inode. > Alright, I see better now the concern. But because the audit information is associated with the inode via an administrator action, it still remains true that any access to that inode will be caught, from any namespace. Correct? I guess the assumption here is that the administrator knows that he/she is in the right namespace when adding/removing watches so that they tag the appropriate inodes. > -- > Serge Hallyn <serue us ibm com> > > -- - Timothy R. Chavez
|
https://www.redhat.com/archives/linux-audit/2005-January/msg00241.html
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
#include <iostream.h>
#include <stdlib.h>
int main()
{
int j;
char k[4]= "123";
j = static_cast<int>( k[0]);
cout<<j<<endl;
system("PAUSE");
return 0;
}
I am writing a program where i receive a string with numbers in it. My program has to validate those numbers. I try to cast the numbers to an intiger for easier validation.
This program will run om win. using Borland.
The problem i'm having is that j contains the dec value of of 1.
I want j to comtain the number 1 not the decimal value of 1.
How do i do that.
|
http://cboard.cprogramming.com/cplusplus-programming/5941-problem-casting.html
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
#include <coherence/lang/ThreadLocalReference.hpp>
Inherits Object, and Reference.
List of all members.
A single native thread-local is used to manage all ThreadLocalReferences, which means that users are free to allocate any number of ThreadLocalReferences.
The memory associated with a thread's ThreadLocals is automatically freed when then thread terminates. The memory associated with a non-static ThreadLocalReference will be automatically freed at some point after the ThreadLocalReference has been reclaimed, and is done in such a way that repeated creation and destruction of ThreadLocalReferences will not leak memory.
Create a new ThreadLocalReference.
|
http://docs.oracle.com/cd/E15357_01/coh.360/e18813/classcoherence_1_1lang_1_1_thread_local_reference.html
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Deploy an application to Cloud Run for Anthos
Learn how to use the Google Cloud console to deploy a prebuilt sample container to run as a Cloud Run for Anthos service.
Before you begin
You must have access to the Google Cloud project and Anthos cluster where Cloud Run for Anthos is installed. For details, see Cloud Run for Anthos fleet installation overview.
Tip: See the Anthos tutorial for details about the shortest path to setting up an Anthos environment that includes a GKE cluster and Anthos Service Mesh.
Deploying a sample container
Use the Google Cloud console to deploy a sample container and create a service in your cluster:
In the Google Cloud console, go to the Cloud Run for Anthos page.
Go to Cloud Run for Anthos
Select the Google Cloud project in which your Anthos cluster resides.
In the list of available clusters, click Login to connect.
Open the Create service form by clicking Create service.
In the available clusters dropdown menu, select your cluster.
Leave
defaultas the name of the namespace where you want your service to run.
Enter a service name of your choice. For example,
hello.
Click Next.
Select Deploy one revision from an existing container image, then select hello from in the Demo containers list.
Click Next.
Select External under Connectivity, so that you can access your service from the web.
Click Create to deploy the
helloimage to Cloud Run for Anthos and wait for the deployment to finish.
Congratulations! You have just deployed a service to a Cloud Run for Anthos enabled cluster.
Accessing your deployed service
Now that you have a service running, you can to send requests to it. In this section, the default test domain is used to demonstrate how to access your service and verify that it's working:
In the Google Cloud console, go to the Cloud Run for Anthos page.
Go to Cloud Run for Anthos
Click the name of your new Cloud Run for Anthos service to open the Service details page. For example, hello.
At the top of the page, click the URL to access your deployed service through your web browser. For example, if you named your service
hello, the URL is similar to the following but includes your cluster's external IP address:
Congratulations! Your Cloud Run for Anthos service is live and handling requests.
Clean up
You can delete the Cloud Run for Anthos service to avoid incurring costs from running those resources.
The following considerations apply to deleting a service:
- Deleting a service deletes all resources related to this service, including all revisions of this service whether they are serving traffic or not.
- Deleting a service does not automatically remove container images from Container Registry. To delete container images used by the deleted revisions from Container Registry, refer to Deleting images.
- Deleting a service with one or more Eventarc triggers does not automatically delete these triggers. To delete the triggers refer to Manage triggers.
- After deletion, the service remains visible in the Google Cloud console and in the command line interface until the deletion is fully complete. However, you cannot update the service.
- Deleting a service is permanent: there is no undo or restore. However, if after deleting a service, you deploy a new service with the same name in the same region, it will have the same endpoint URL.
To permanently delete the service and all its resources:
Go to Cloud Run for Anthos
In the services list, locate the Cloud Run for Anthos service that you created and click its checkbox to select it.
Click DELETE.
What's next
To learn how to build a container from code source, push to Container Registry, and then deploy, see:
To learn more about how Cloud Run for Anthos works, see the Architectural overview.
|
https://cloud.google.com/anthos/run/docs/deploy-application?hl=el
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
I saw that lots of people have Problems uploading a file in a test Environment with Selenium WebDriver. I use the selenium WebDriver and java, and had the same problem. I finally have found a solution, so i will post it here hoping that it helps someone else.
When i need to upload a file in a test, i click with Webdriver in the button and wait for the window "Open" to pop. And then i copy the path to the file in the clipboard and then paste it in the "open" window and click "Enter". This is working because when the window "open" pops up, the focus is always in the "open" button.
You will need these classes and method:
import java.awt.Robot; import java.awt.event.KeyEvent; import java.awt.Toolkit; import java.awt.datatransfer.StringSelection; public static void setClipboardData(String string) { StringSelection stringSelection = new StringSelection(string); Toolkit.getDefaultToolkit().getSystemClipboard().setContents(stringSelection, null); }
And that is what i do, just after opening the "open" window:
setClipboardData("C:\\path to file\\example.jpg"); /);
And that´s it. It is working for me, i hope it works for some of you.
Best Solution
Actually, there is an in-built technique for this, too. It should work in all browsers and operating systems.
Selenium 2 (WebDriver) Java example:
The idea is to directly send the absolute path to the file to an element which you would usually click at to get the modal window - that is
<input type='file' />element.
|
https://itecnote.com/tecnote/java-one-solution-for-file-upload-using-java-robot-api-with-selenium-webdriver-by-java/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
I have read quite a few posts that are similar to this but none seem to make sense to me.
I am trying to configure a Celery PeriodicTask to fire every 5 seconds but I'm getting hung up on a Celery configuration issue (I think)
comm/tasks.py
import datetime from celery.decorators import periodic_task @periodic_task def send_queued_messages(): # do something...
myapp/settings.py
... from comm.tasks import send_queued_messages from datetime import timedelta CELERYBEAT_SCHEDULE = { 'send_queued_messages_every_5_seconds': { 'task': 'comm.tasks.send_queued_messages', # Is the issue here? I've tried a dozen variations!! 'schedule': timedelta(seconds=5), }, }
The relevant output from my error logs:
23:41:00 worker.1 | [2015-06-10 03:41:00,657: ERROR/MainProcess] Received unregistered task of type 'send_queued_messages'. 23:41:00 worker.1 | The message has been ignored and discarded. 23:41:00 worker.1 | 23:41:00 worker.1 | Did you remember to import the module containing this task? 23:41:00 worker.1 | Or maybe you are using relative imports? 23:41:00 worker.1 | Please see for more information. 23:41:00 worker.1 | 23:41:00 worker.1 | The full contents of the message body was: 23:41:00 worker.1 | {'utc': True, 'chord': None, 'args': [], 'retries': 0, 'expires': None, 'task': 'send_queued_messages', 'callbacks': None, 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, 'id': 'a8ca18...227a56'} (216b)
Best Solution
I ran into this exact problem, and it turns out the issue is not with the name of the task, but that the Celery worker isn't aware of your task module.
In other words, you have the name of the task correct (
'comm.tasks.send_queued_messages'), which was generated by the task decorator, you just haven't told Celery where to look for it.
The quickest solution is to add the following to
myapp/settings.py:
According to the docs, this determines the "sequence of modules to import when the worker starts."
Alternatively, you can configure your settings to auto discover tasks (see docs here), but then you would have to namespace your task module(s), moving
comm/tasks.pyto
comm/comm/tasks.py.
For me, the confusion came from Celery's automatic naming convention, which looks like an import statement, which led me to believe I was using
CELERYBEAT_SCHEDULE['task']to tell Celery where to look for the task. Instead, the scheduler just takes the name as a string.
|
https://itecnote.com/tecnote/python-celery-received-unregistered-task-of-type/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
How can I set a default set of colors for plots made with matplotlib? I can set a particular color map like this
import numpy as np import matplotlib.pyplot as plt fig=plt.figure(i) ax=plt.gca() colormap = plt.get_cmap('jet') ax.set_color_cycle([colormap(k) for k in np.linspace(0, 1, 10)])
but is there some way to set the same set of colors for all plots, including subplots?
Best Solution
Sure! Either specify
axes.color_cyclein your
.matplotlibrcfile or set it at runtime using
matplotlib.rcParamsor
matplotlib.rc.
As an example of the latter:
|
https://itecnote.com/tecnote/python-how-to-set-the-default-color-cycle-for-all-subplots-with-matplotlib/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
I have a project that is supposed to run on different (at least 2) databse backends, so users can choose (currently between MSSQL and SQLite). I've just started learning NHibernate, following the tutorial on nhibernate.info.
Now, my current Architecture looks like this:
MyProject.DAL
Contains NewsItem and INewsItemRepository. NewsItem is a POCO, essentially just an int Id and string Title.
INewsItemRepository is the Interface that includes CRUD Functionality.
MyProject.Plugins.DAL.MSSQL
This class implements the backend. It includes hibernate.cfg.xml, NewsItem.hbm.xml and NewsItemRepository.
The cfg holds the Driver and ConnectionString, the hbm is the SQL-Server specific Mapping, and NewsItemRepository implements the INewsItemRepository.
If I want to add the MyProject.Plugins.DAL.SQLite, I will have to duplicate all this, which is logical. However, I wonder if that is the correct approach? I feel like I'm unnecessarily duplicating things. I mean: I need to make sure the mapping is Database-specific, and that I update all my Plugins if the database changes. But this architecture still seems somehow… bloated.
Am I doing the right thing here? Or are there common patterns about what goes into the general business assembly and what needs to be kept separate for each database?
Edit: One of the challenges here is the handling of sql-types. For example, strings are mapped as nvarchar(255) by default, and if I need ntext in MSSQL, i'll have to use the <column sql-type="NTEXT" syntax, do I? I guess that specifying sql-type is database-dependent, so I'll need my mapping separated per database? Or is there a built-in mechanism to have mappings "inherit" each other? (I.e. have one .hbm.xml with all the <property> Tags in the general DAL Project and then .hbm's in each Backend-Assembly containing the columns?
Best Solution
Well, I haven't seen your project or data model so you'll have to interpret this as best suits your situation. However, yes, my reaction is that this is more code than is necessary.
One of the major benefits of using an ORM like NHibernate is that they separate your code from your persistence architecture. It seems to me that mapping NewsItem (for example) multiple times is unnecessary, for one. Isn't the data model likely to be the same regardless of backend? Perhaps you could give an example of where they might differ.
Secondly, I would probably not re-implement NewsItemRepository. Regardless of database backend, the repository is using NHibernate which has a primary purpose of abstracting the database. All CRUD functionality handled by the repository will be the same on MSSQL as SQLite (unless there actually is a difference in data model).
If your database model is the same on both backends, in theory you should only need to change the connection string to change the database being used. The way I've implemented a DAL before is to provide the interfaces as you've done and then have a sub namespace for implementations, one (well, the only one at the moment ;) being NHibernate. Repositories et al. are acquired through dependency injection. The DI kernel handles session factories as well so it's easy to target a different database for different sections of code.
Of course all this assumes that your data model is the same on both database engines. If it's not, my answer might not be very helpful. If that's the case, knowing what is different might lead me or someone else to a better answer.
EDIT:
I see what you mean about the sql-type. However, that's only a problem if you are generating the schema for your database from NHibernate. When mapping the properties, you don't need to use
<column>and the type you specify on
<property>is an NHibernate type. The StringClob type will handle NTEXT fields properly.
If you are generating your schema from the NHibernate mappings and NHibernate is not smart enough to strip out the sql-type="NTEXT" for SQLite (I'm hoping it is, but I've never used the schema generation tool), I think I would add a build step to cleanup the SQLite schema. Of course, now you're adding complexity to your process so it's up to you. However, I'd rather keep my code and mappings as simple and clean as possible and keep the complexity only where it is needed (the nit-picky details of the database engine). It seems to me that keeping two separate mapping files is a recipe for a missing property mapping and users left scratching their heads as to why they can't put in their DOB when using SQLite ;)
EDIT2 (regarding DI use with NH):
This is a bit subjective since I'm sure there are many ways to use DI with NHibernate. I've had great success using DI with Fluent NHibernate and Ninject, a DI framework that also uses a "fluent" DSL. Fluent NHibernate (FNH) provides an interface
IPersistenceConfigurerthat is used to create your database connection string (it also provides a number of Dialects to choose from). Thus, it's easy to bind that
IPersistenceConfigurerservice to a provider via the DI framework. You can do the same for
ISessionFactoryto have a fully D-injected factory. Since Ninject works off attributes my repository constructor might look like:
Which would inject a factory for the CRM database. Me likey. :)
|
https://itecnote.com/tecnote/r-nhibernate-and-multiple-database-backends-architecture/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
One of most interesting aspects of writing code for mobile devices are sensors. Desktop and laptop computers don’t normally come with thermometers, gyroscopes, geomagnetic field detectors or ambient air pressure sensors. But all that changed with mobile devices, such as Android.
Mobile computers don’t sit on a desk and wait for the user to interact with it. An Android device can continuously monitor the device’s surrounding environment and applications can be written to make use of that sensor data.
The Android operating system currently supports 13 different sensors. However, not all Android devices have the necessary hardware to support all of the sensors.
Fortunately it’s a trivial task to determine the sensors available on a given device.
The Android sensor framework is part of the android.hardware package, which includes the SensorManager. To check available sensors, simply instantiate a SensorManager object.
private SensorManager mSensorMgr;
And then execute some code like this to get a list of sensors.
mSensorMgr = SensorManager) getSystemService(Context.SENSOR_SERVICE); List sensors = mSensorMgr.getSensorList(Sensor.TYPE_ALL);
The sensors object will be a list of all available sensors on the device. To check for a specific sensor, use one of the other sensor constants such as, TYPE_TEMPERATURE, TYPE_RELATIVE_HUMIDITY or TYPE_PRESSURE.
There’s also the getDefaultSensor() method. Passing a specific sensor constant to it will also determine whether a sensor is available on a device.
And if a device has more than one sensor of a given type, one of the sensors will be set as the default sensor. If there is no default sensor set, getDefaultSensor() will return null, thus indicating that the sensor is not present.
For example, the code to check for a gyroscope sensor using the getDefaultSensor() method could look something like this.
if (mSensorMgr.getDefaultSensor(Sensor.TYPE_GYROSCOPE) != null) { // Yes. I can now measure orientation based on the principles of angular momentum } else { // There's no gyroscope on this device, perhaps I can use the more simple accelerometer instead. }
Testing for sensors is vital on Android devices because Google does not require hardware manufacturers to build any particular sensors into a device. And whether a sensor is available isn’t always the only test because not all sensors are created equal.
For example, an application might require the gravity sensor that’s of a particular version.
The gravity sensor is actually a virtual sensor, but it still relies on a device having particular hardware sensors available for it to work.
The following code first checks if the gravity sensor is present. If so, it then checks that the vendor is Google and that the sensor is version 3.
if (mSensorMgr.getDefaultSensor(Sensor.TYPE_GRAVITY) != null) { List gravity = mSensorMgr.getSensorList(Sensor.TYPE_GRAVITY); for (int i = 0; i < gravity.size(); i++) { if (gravity.get(i).getVendor().contains("Google Inc.") && gravity.get(i).getVersion() == 3) { // I've got the version 3 Google gravity sensor I need. mGravity = gravity.get(i); } } } else { // No Google gravity sensor version 3, perhaps I can use the accelerometer }
It’s also possible, if an application requires a specific sensor, to use Google Play to target only devices that have that particular sensor. For example, for devices that must have an accelerometer, add the following to an application’s manifest file.
android:
Another useful method is getMinDelay(). Calling it determines the minimum time interval, in microseconds, a sensor takes to get sensor data. If getMinDelay() returns a non-zero value, that means the sensor is a streaming sensor.
Introduced in Android 2.3, streaming sensors sense data at regular intervals. Non-streaming sensors only report data when the sensor’s parameters change and will return zero when calling getMinDelay().
Let’s put it all together with a simple application.
The application uses the device’s light sensor and adjusts the background and text color based on the ambient light. As the ambient light dims, the background brightens and vice versa.
package com.take88.mylightsensor; import android.graphics.Color; import android.hardware.Sensor; import android.hardware.SensorEvent; import android.hardware.SensorEventListener; import android.hardware.SensorManager; import android.os.Bundle; import android.app.Activity; import android.content.Context; import android.view.Menu; import android.widget.RelativeLayout; import android.widget.TextView; public class MainActivity extends Activity implements SensorEventListener { private SensorManager mSensorMgr; private TextView mLightTxt; private Sensor mLight; private RelativeLayout mLayout; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); mSensorMgr = (SensorManager) getSystemService(Context.SENSOR_SERVICE); mLight = mSensorMgr.getDefaultSensor(Sensor.TYPE_LIGHT); mLayout = (RelativeLayout) findViewById(R.id.layout); mLightTxt = (TextView) findViewById(R.id.light); brightness(Color.BLACK, Color.WHITE, "0"); } private void brightness(int bground, int txt, String lux) { mLayout.setBackgroundColor(bground); mLightTxt.setBackgroundColor(bground); mLightTxt.setTextColor(txt); mLightTxt.setText("LUX: " + lux); } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.activity_main, menu); return true; } @Override public void onAccuracyChanged(Sensor arg0, int arg1) { // TODO Auto-generated method stub } @Override public void onSensorChanged(SensorEvent event) { // The light sensor returns one value, which is the ambient light level in SI lux units float lux = event.values[0]; if (lux brightness(Color.WHITE, Color.BLACK, String.valueOf(lux)); } else if (lux > 10 && lux brightness(Color.GRAY, Color.BLACK, String.valueOf(lux)); } else { brightness(Color.BLACK, Color.WHITE, String.valueOf(lux)); } } @Override protected void onResume() { super.onResume(); mSensorMgr.registerListener(this, mLight, SensorManager.SENSOR_DELAY_NORMAL); } @Override protected void onPause() { super.onPause(); mSensorMgr.unregisterListener(this); } }
Be careful with the onSensorChanged() method. Sensor data can change rapidly and onSensorChanged() can be called frequently. Therefore, do as little as possible in this method and avoid blocking, or an application will be really clunky and not fun to use.
It’s also very important to use the onPause() method and unregister the sensor listener.
If it’s not unregistered, the sensor will continue to hammer away at the device’s battery, even when the screen goes to sleep. Users don’t like applications that drain their battery unnecessarily.
Use the onResume() method to register the listener when the device wakes up again.
Sensors can not be tested in the Android emulator, they must be tested on an actual device.
|
https://www.developer.com/mobile/android/how-to-build-mobile-apps-with-android-sensors/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
elephant ear plant skin irritation treatment
slg games android
us art supply 14quot high
xml xsd validator
unified access gateway
yupoo gucci
mullen van contract
hdwificampro video mode
w203 horn fuse location
kids underwear
murder gone cold
redis stream commands
all bitcoin private keys
spoofbox promo code
amorepacific museum of art
convert excel column to array python
westpac credit card insurance refund
lspdfr mod
harry potter hufflepuff book set
libra artists painters
missouri sinkhole map
bruh sound effect mp3 download
michelob ultra slim can licensed beer
he leadeth me an extraordinary
neurology north austin
who owns iraqi oil now
vortec heater hose routing
chloroform death painful
tno usa focus tree
sad graduation songs that make you cry
idoneth deepkin update
7zip command line extract multiple files
walmart fishing license ohio
mden order id
my grandmother in french
tunnel of doom
vmware maximums
diablo 2 the pit location
aphmau pierce in real life
audio rosary download
enterprise conversion van rental
maytag bravos quiet series 300 washer ld code
acer predator 17 review
pkcs12 vulnerabilities
scriborder tracker
leeds local news
indian scout bobber vance and hines twin slash install
helicopter flying with cable hanging
qemu hostfwd multiple ports
hannah object lesson
import ssl certificate with private key
rca 10 android tablet
classic yahtzee an exciting
car sputtering when stopped at light
letters of eb white pdf
diacetyl in cigarettes vs vape
mec 600 jr identification
things you can only get in arizona
asian girls orgasim
liberty university pool
i am not your negro a companion
bmw e92 exhaust
500 litre hot water tank
demon prince of momochi house characters
new build homes pontefract
scalability unreal engine
karate is not for fighting
kunekune pig meat for sale
executed and executory contract example
yamaha rx v385 51 channel av
lisa frank adult coloring book set
stk github
duramax diesel coolant loss issues
roro chan 1999 death video
new era menx27s
horse liniment gel for humans
sinusitis causes tinnitus
duck blinds for rent suisun marsh
31231 cpt code modifier
camus quotes on absurdity
edc16u immo off
parking gear in automatic transmission
acrylic latex paint for concrete statues
surplus 22 pistols
shogun assassin 2 flaming swords of carnage
300x microscope lens for raspberry pi
invery airdual bluetooth adapter
solid wood floating shelves for bedroom bathroom
suffer fools lightly meaning
bdc compressor
witex bit
unity 2d waterfall
dead body found houston
dr eddies happy cappy medicated
fm radio circuits
dog quiz questions
wiaa gymnastics rule book
us30 indicator mt4
ucla mathematics courses
shopify inventory api
jquery array functions
dana and parks live stream
waikiki brewing company beers
customizing profile information in asp net identity
basic english grammar notes pdf
don quijote de la mancha edicin
mirebrute troggoth
download for instagram direct message
frog reddit
last conversation ebk jaaybo lyrics
jj maybank tumblr
transgression film
fitrx muscle massage gun
paypal bin method 2022
hyundai santa cruz canopy
gaia house retreats 2022
battlefield 3 download
largest muslim community in california
rasm e wafa novel
no dlc ps4
blacked lana rhodes
altair x axis ticks
wildly fluctuating heart rate
on error goto 0 example
keycaps anime
bmw f20 screen replacement
phet motor
asus vivomini un68u
super dragon ball heroes english
1969 chevelle dimensions
phonak hearing aid troubleshooting
preachers on daystar
hi metal r macross
belly bulge meaning
chinese massage center rawalpindi
xilinx sdk watchpoint
viator tripadvisor
gig app jobs
cast aluminium garden furniture homebase
tall toilets on sale
he has many female friends
archer ax73 specs
pcl set point cloud color
permanova beta diversity
shuffling a deck of cards probability
fbx python example
cr10s pro vref
s3 list objects boto3
ford 8n diesel conversion
bts wolf pack tumblr
spider fixies
ender 3 v2 firmware download
hvac manifold gauge set
rhett and link drunk
how to dissolve super glue from glass
moralis web3api token
cigar manufacturers in dominican republic
banana moon triangel
viking obituaries
casio vintage calculator watch
janesville craigslist free
riverstone mhp
default save excel
nuxt pinia
current road closures tasmania
circle b ferretry
little wonder leaf blower 10 hp
crime thriller web series
little house on the prairie script
bgears b voguish manual
inflatable pool bunnings
my billionaire mom chapter 57
why would a portfolio manager create a multi
msp430 memory map
object oriented programming python exercises
kim christiansen 9 news age
oswe certification
freightliner egr cooler
manual lifting equipment
golf mk7 sat nav sd card
pure fishing job fair
ecu bowling alley
dove c heads flow numbers
chaplain certification programs
indus valley civilization worksheet pdf
sea of thieves api
mount blade ii bannerlord new features
p1339 peugeot 308
juniper hill novel
gced salary
origin bank online
suzuki lj10
quantum algorithms for linear algebra
whirlpool quiet partner ii for sale
soto happy hour
catchphrase template
back40life discount
steve stucker nm age
small engine magneto ignition system diagram
baby lock evolution serger feet
real sex sister and brother
docker dns resolver
ny lottery pick 4 calendar september 2021
victorian oil lamp facts
golang nginx docker
vermont in 112 instructions
spring rebus puzzles
icloud clean mode unlock
psyllium husk capsules uk
while loop bigquery
corbin europe
orc eye color
cb radio high watts
tower defense simulator discord
hanson red green blue review
ford 4000 tractor parts uk
high desert hot sauce scoville
wild animals worksheet
forgivable sins a dark mafia romance
revenue analyst interview questions
is cassidy hutchinson married
65 pontiac gto for sale
mckinsey offer call
auto spa express
powershot staple gun not working
web unblocker
d40 vk56 conversion
child sociopath
lln robot login
where is barnwood builders located
gvardiol number
digimon masters roblox evolution
1995 s10 ignition switch
bob cut for girls
suny baseball camps
sauna lining
yamaha jet boat salvage
s3 object expiration terraform
space bar click test
api token authentication python
uk2000 forum
heroquest blank quest sheet
fansly app for android
dci p3 test
html5 qr code scanner
helluva boss octavia x reader
hill brothers caravans for sale
vintage go kart body for sale
cosworth dfv v8 for sale
mastercard securecode registration
fujii spotify logger v2
mesocosm materials
live baby crabs for sale near new jersey
who makes comforts formula
vingli tent instructions
mdico mdium pdf gratis
local pbs phone number
eve online skill point calculator
blues clues and you
msi laptop wireless switch
cheat on me
anime family wallpaper
ikea folding table wall
arteck hb030b change color
rumble licensing options
2021 apple imac
unable to recognize calico yaml
aircraft paint remover
quickbooks desktop pro 2021 upgrade
neutrogena hydro boost hydrating gel eye
samsung pro plus sdxc full size
legend outside plot matlab
cloud root apk
naruto x reader lemon quotev
bolt torque calculator metric vdi 2230
bloodrock lp
pysam coverage
madison county ga schools jobs
zeolite spray reviews
sh50 danley
skinner sights for winchester 1892
how much is adaptive armor hypixel skyblock
gta san andreas ocean docks mission
hells angels cave creek
plantation lakes townhomes for sale
heroine is the other woman goodreads
where are janome sewing machines made
accident on hillsborough ave tampa fl yesterday
nasdaq ghost robot v3 pro free download
pinterest girls dp
casebus customer service
1979 gmc sierra stepside for sale
formations of 12
vampire knight theme song lyrics english
ducati race bodywork
cookie clicker temple strategy
weit du eigentlich
rtx 3080 brand comparison chart
n976n root android 12
road map of oregon and idaho
cisco switch poe flashing amber
fifa 22 career mode training
big ex girlfriend tits
topps uk hobby box
google map iframe generator
cracked comments
heap vs stack memory java
best heightmap generator
synod questions 2022
data link layer function
cs 598 practical statistical learning
uzui wives x male reader
1968 mustang gt 390 for sale
1974 pontiac grand prix interior
kolourco battle box
noblesville police incident report
the adventures of sharkboy and lavagirl
apartments for rent in st thomas jamaica
best vps for android
mp3 chord analyzer
macbook pro backlight not working
new trader rich trader in hindi
craftsman r1500 battery
radical equations worksheet kuta
bulldog stl file free
how to change mnc in apn settings
starwind virtual san crack free download
cute dab pens for sale
foot pads chemist warehouse
token stamp 2 custom borders
ve commodore paint colours
the backyardigans
modulus 11 bank account number
cards vs humanity online
idf linkage
laravel change request input value
kaisi 14 pcs
kia optima catalytic converter scrap price
mikki mase wiki
degloved face
brainwavz round xl velour
airbag dump files download free
off road dune buggy
nys medicaid transportation
bts girlfriend astrology
each way bet explained
new world multiboxing
gta vice city 3d
till she met her
android hide status bar
donald gosling net worth
redfield 4x12 scope for sale
60s music quiz with answers
adn obituaries
lacy aaron schmidt now
property manager jobs los angeles
tecno camon t8 firmware
phonky town tiktok
new holland lx565 specs
input button on vizio sound bar
tyrosine residue
shilen vs benchmark barrels
mckinley elementary school website
deaths in charleston mo
minecraft connection timed out connect
fair by tanger outlets
breville flatbed microwave 20l
how to work out monthly rent from weekly
inflatable snorkel vest adult snorkeling jackets
55 inch tv smart
names that mean day
love is deep dramacool
wavetable synth vst
jumia laptops
kenworth t800 dash lights not working
mexican food truck hire melbourne
life force how new
mini chihuahuas for sale
sql query to get database schema
san francisco airport breaking news
janna aram build
projects with old cell phones
how fast is 10g plane speed
xbox controller buttons replacement
prairie drifter english setters
flir camera module
boho wedding venues in arizona
m4e1 upper build
muskogee phoenix phone number
heroine
mikey and the dragons
flight simulators for sale
matlab convert color image to grayscale
yamaha rx a2a vs rx v6a
replica victorian front door
solar panel catia
axanthic ball python for sale near me
a nurse is providing teaching to a client who has a gastric ulcer
barnes ballistic calculator
mods of the year beamng
magento 2 customer login
playstation platinum wireless headset playstation 4
gardner bender gbt 500a analog
danish name origin
healing candle prayer
mobil jet 254 price
ghosts of saltmarsh pdf
jbl gto629 price
famous 16 year old tiktok
a good kind of trouble
wgbh tv schedule
tg tf online games
scp 035 x male reader lemon
3d guns v8
cherokee female warrior names
main water line replacement
campervan control panel instructions
chicago old school stepping
toy repair chicago
ford 390 overdrive transmission
john deere 322 rear pto
snack chip manufacturers
dragon breath shotgun second extinction
corporate natalie jewelry
nftables python
ocim 2 pcs 10x
evt training near london
dell optiplex 7020 drivers
kendo combobox remove x
table saw chess pieces
malicious pdf detection using machine learning github
modify global variable in function python
backrooms wallpaper roblox
effects of waste pollution
juno jazz vst
incognito royale high
how to delete gmod addons
san antonio express news classifieds garage sales
mga cylinder head
yamaha object decode mode
get paid to live in a house 2021
empyrion a forest of stars
free spins no deposit ireland
skeeter sl190 parts
qtablewidget read only
stuarts well roadhouse for sale
unique holidays scotland
swimways finding nemo
stellium in taurus
csv library in robot framework
explore the bible lifeway
decode the hex encoded version of the raw transaction
low income housing tigard
sunday mass with father mike schmitz
soraka aram nerf
instagram mikayla
2019 subaru legacy wiper blade size
what is sbi mcyt
8 string ukulele
numbness in one leg
solax default password
new 52 batman
kim yoo jung apartment
chirp uk frequencies
winco baked chicken nutrition facts
shining in the darkness hacks
bts bad boy
best price flail mower
cdash standards
wyckoff trading course telegram
pet simulator x giveaway discord
secure parking solutions private limited bangalore
2006 honda accord cd player error
used 4x4 trucks for sale in alabama
skiff mold for sale
okta how to add application
skyrim weapon draw glitch
podman machine init volume
short bio example yourself student
for rent atlanta
free dominus hack
the snowy day
century arms cetme sporter 308 review
mcpa practice test
signs she is not the right woman for you
aetna nursing jobs salary
bearded dragon morphs for sale
big tit granny porn movies
1998 honda accord catalytic converter scrap price
closed pubs for sale in derbyshire
bank repossessed houses scotland
mi box setup with iphone
how to tell if a girl likes you long distance
redwood stain on pine
reese funeral home obituaries
how to identify goat breeds
sacred geometry eye tattoo
2023 newmar new aire
invicta pocket watch ww2
cummins egr delete
target bluetooth speaker
illumination presents 6 movie
tricky phase 0 roblox id
carbon county utah property records
dating latino app
staring meaning in movies
tommy shelby x reader grace
young afican pussy
maryland fastpitch softball
postgres regex examples
panty flashing schoolgirls
new chapter advanced
particle tracking velocimetry python
2004 chevy avalanche bed panels
25 foot mini ramp
nacho libro inicial de lectura
chapter 18 endocrine system pdf
mass air flow calculator
massey ferguson dl 280 loader specs
skimobile vs snowmobile
floppy straw cowboy hat
how to get blue tag on tiktok
people who hear humming noise
pastel gore picrew
mobileiron provisioning profile expired
flo by moen api
jlcpcb free shipping first order
hsi special agent hiring 2022
worktop legs
bronsons supervalu beulah
android 10 8227l
the gathering house red river gorge
spider web swing australia
barcode label maker
cash in hand security jobs london
appchina android
ford f150 2 piece driveshaft vibration
dickinson ak212t review
vape wholesale supply near me
tauck tours usa
toyota rav4 alarm system
foldable mini drone
prusa mini pinda probe height
warzone blueprint hack
farm for sale bedfordshire
uc davis cs degree reddit
3m barbed wire pinstripe
full auto airsoft pistol co2
mantis tiller 4 stroke carburetor
tekken 7 button numbers
forbidden love short stories
tub repair kit white
multiple dataset in jasper report
audi mmi map update 2021
tw200 rear wheel for sale
walmart gazebo replacement parts
espn class of 2023 basketball
u boot rk3399
1998 tiffin allegro bus specifications
stockport council bin collection phone number
konica minolta c1060 error codes
logitech g512 keycaps
thai pussy picks closeup
what is tracking dimension group in d365
wolfenstein playstation 5
return item chargeback bank of america reddit
peterbilt 220 fuse box diagram
cattleya orchids for sale near alabama
m2 macbook air price
html video preload image
thorlabs power meter
dab press 12 ton
during which process does the customer approve the delivery of functionality to their business
shower valves best
ikea stolice
lsw backpack purse for
qr scanner keyboard
femur length chart by week in mm
how to fix my frigidaire refrigerator
craigslist free building material
john deere sickle mower parts diagram
bw 4482 transfer case
ffxiv unexpected error dx11 2021
blue iris email server
pxe boot option not showing up dell
you find a 10 year old boy to be unresponsive
boat cooler seat frame
woolworth building wikipedia
lovely little things custom creations
fili x reader
bad buddy novel pdf
injection moulding machine setting pdf
everett clinic
center court apartments shooting
the physician the cole trilogy
lg oled c9 peak brightness setting
his name is in spanish
vintage rugby shirts polo
matchmaking festival clare
solex 32 pbic manual
arkansas mountain land for sale
medieval eating utensils
metal car dice
uno harry potter
cigar benefits and side effects
chef apron cross back apron for men
if her snap score over 1 million
telethon get reply message
code warframe
98 lincoln mark 8
dte block removal
2004 terex tx760b reviews
jarrettsville volunteer fire company food truck
fnf react to tricky vs sonic exe
revit toposurface absolute elevation
mod creator for minecraft windows 10
v shred foods to avoid
shining force 2 mega drive rom
i can see your mbti ep 3
loki imagines wedding night
how much does a chiropractor cost in georgia
persona 4 debug mode
hydrostatic lawn mower no reverse
seven hills banana cream strain
v2ray subscribe free
small boats for sale orange county
12237 dalhart fenton mi
bakugan geogan rising episode 1
mui dashboard github
john deere planter seed tubes
110 bcd axs chainring
west lafayette bus routes
multi collagen pills benefits
lasd vs lapd jurisdiction
moving a sofa around a corner
cheapest grom clone
motorcycle rental sacramento
conda install index url
asds conference 2022
youtube video cut off
labview plc
bit masking
kudos prime driver rating
buster blader deck 2022 master duel
toothbrush replacement battery
tank tracks drawing
fnf nikusa x reader
what is gradle in selenium
tg transformation stories deviantart
john deere gator 620i fuel pressure specs
list of 5 letter words with no repeating letters
ap statistics frq by topic
decode the hex encoded version of the raw transaction
m1 max vs rtx 3080
open source ime
arizona land for sale by owner
out sex video
amazon basics abs usb a to
persona 5 cutting room floor
baby girl clothes 3 6 months
razer auto clicker macro download
connect dsl modem to router
unity ruler tool
painting model tank tracks
safer baby bundle elearning
peterbilt 587 chrome accessories
corvair 5 speed
thalhimer net worth
censored album covers middle east
aldi jobs hiring near me
seagate external hard drive detected but not accessible
solax battery awaken
is adhd really that bad reddit
t56 air shifter
panda washer dryer
lerna is not recognized as an internal or external command
resident evil eroquis
jba ceramic coated headers
checkbox emoji
beetles gel polish gel
l shaped metal trim
mhgu solo build
sample letter from doctor for disability
dockers menx27s relaxed fit signature
naasaha maxaa weyneeya
fender telecaster pickup output chart
ark support forum
2 aces in a tarot reading
bmw m57 ecu pinout
case clicker items
adjust door closer to stop slamming
betrayal is bitter ao3
nilaus city blocks
what does a shark jack do
gopi fashion mumbai
beauty supply brampton hurontario
seminole state college application deadline fall 2022
scan id
big w baby sale 2020
best truck for towing 5th wheel 2022
code breaker for aether sx2
sniff hub script
schneider electric aveva presentation
dns tools whois
tic company jobs
video girl peeing pants
motovario gearbox 3d model
single taken mentally dating jack grealish
azure synapse spark
small cargo trailers llc
souljaboytellem com album
houses for sale stevenage
micro pcb
preparation hemorrhoidal suppositories
huawei share pc windows 10
ada county jail boise id
fortnite cheat codes
guerneville camping
2 digit by 1 digit division
60 ct meaning
how much do gel nails cost
gawr gura japanese
small industrial space for rent near centurion
plastic underwear cover
milia newborn nhs
enhanced extracts kratom
combustion of hydrogen balanced equation
if we were giants
national geographic planets
damned souls and a sangria
kobold guide to monsters pdf
align power feed parts
legends of idleon leave party
nvme proxmox
wapodeai wide tooth
draftkings marketing interview
triathlon mallorca
convert schwinn bike to electric
ret paladin bis tbc phase 4
canceled netflix shows
jr high rodeo
vmware kb 76719
nexusmods games
the world war z
pre foreclosure homes palm coast
free uk number sms
6y8 yamaha gauge manual
4 band bass preamp
lenovo ideapad flex 5 14itl05 specs
endgame gear xm1r firmware
carrier pressure switch fault
zoho project login
heartway mobility scooter parts
ncaa softball recruiting rankings 2022
graphique magnetic notepad
pontiac grand am 1974
1974 mustang ii ghia for sale
kenworth super sleeper for sale
catholic cross necklace near me
ensemble stars main story 1 translation
usgi meaning
winter snow hiking boots for
ocean resin art
iveco daily 2020 service reset
black baby doll dress
studio library maya 2022
testdome coding test
dummy variable trap in econometrics
dockers menx27s straight fit signature lux cotton
map of tulum beach hotels
kawasaki mule 500 engine
g code generator milling
continuous communication synonym
best practices in sql server query writing
best match 3 games pc
alter session disable parallel query
what ships faster shein or romwe
used cargo van for sale miami
crush x reader amusement park
monarch beach resort dana point address
criminal minds fanfiction reid nightmares
2d thermal solver
hidden markov model regression
summernote react
heroquest blank quest sheet
10 things to know about optum
best vrchat avatar worlds for quest
laminate countertop end cap home depot
play doh peppa pig
losing my virginity the autobiography
sonnet egpu breakaway box 650
32gb ram 3600mhz cl14
best number plates
gazeta mapo sot
kampa motor rally air pro 260 l
1955 chevy bel air value
flow token unlock
note 10 bootloader
2022 honda monkey sprocket
bestsight night vision scope
five seven designs wheelie bar
ferguson fire supply
gojo figure gk
millionaires in raleigh nc
kindersitz 9 36 kg g nstig kaufen
acrcloud music recognition online
the great gatsby pdf gutenberg
attic mold removal cost
bluetooth latency fix
competitive cheer levels
ffxiv furnishing preview
roadmap b1 pdf
unity outline shader tutorial
iphone 6s akku kaufen
god help the child vintage
uber car requirements chicago 2022
mothers day ireland
feelworld livepro l1 software
catalyst leadership conference 2022
elgato hd60 software
cattleya leopoldii for sale
butcher shop duluth
douglas county records search
most rude kpop groups
adaway for windows
sm64 speedrun reddit
best halloween short stories
tecumseh carburetor lookup
rural business development grant recipients
galaxy stix banana
wall corner design
new sleeper berth rules 2020
docker netns device or resource busy
86 eighty six ending
ed troyer 2022
high school korean horror movies
shot put drills for beginners
p0420 scion xb
240z floor pan replacement
fairlife protein shake reddit
audi eeprom reader
compressed row format
whose body amazonclassics edition
baroque horses for sale australia
milwaukee laser level m18
itron meter codes
geographic information science and technology
rush hour 2
critical role foundation
athlon optics vs vortex
deadly shooting in tacoma
fedex pay raise 2021
home depot interior doors prehung
160 meter folded dipole
amazon short blonde wigs
install awx ansible
open letter to my grown son
bruce cymanski facebook
oregon lottery prize claim survey
arkansas drug task force
nu metro prices 2022
is mugen archive safe
figurative language translator
property in wrexham with land
boston celtics origin
united states constitution coins value
outlook 2016 connect to office 365
sennheiser hd 660s vs 600
ranbir kapoor per movie fees
disabled parking permit victoria
hebrew typing game
promises and pomegranates age rating
elbow brace with strap for tendonitis
posespace free download
dsp manufacturers list
hotter shoes usa stores
f2c pro heat press 12x15 inch
scooter songs
northwest shed movers
mini clubman union jack rear lights
aurateur twitch stats
gusto vs rippling
soul bead vault hunters
16x32 lofted barn plans
chinski nowy rok
tear ring saga english rom
amd mining reddit
daytime running light out jeep compass
binghamton 911 calls
best ads sens r6
cross training workouts for runners
gottlieb pinball manuals pdf
latest zibo update download link
andover recycling schedule
lowriders for sale in oregon
grokking the system design interview pdf
roon nucleus worth it
no deposit bonus 2022 uk
outdoor dining the wharf
gifted and talented checklist for teachers
full season soccer training program pdf
detroit classifieds houses for rent
mac pro gtx 1080
20 pcs replacement blades for cricut explore
502 bad gateway nginx react
170 west end avenue 3d
active parent login
excel vba create outlook rule
macallan rare cask 2019 batch 2
stripe confirm payment intent js
usenix atc acceptance rate
geyser antonyms
inkbird all purpose digital temperature controller fahrenheit and
syntax vbs
velocloud configuration guide pdf
amazon return faulty item
dramay jumong alqay 50
phoenix douglas ranch
lenovo precision pen 2 manual
xdm online
mobile responsive sidebar menu codepen
show percentage in bar chart excel
webman psp launcher
fastboot devices
mimi meaning in japanese
tryhackme on resume reddit
lg refrigerator error code 22 and 33
pipa tubing stainless steel
chicago miniature shows 2022
unapologetically black water bottle
minotaur 5e race ravnica pdf
sbc timing cover dowel pins
dating relationships after 50
street legal humvees for sale
likimo karusele online
dating group wiki
california nurses association pay scale
vans unisexx27s low top sneakers womens 12
best mossberg 590 side saddle
pygame physics simulation
bdo best boss gear to exchange
stormwater pit and catch basin
collars and co reviews
di2 rear derailleur not shifting up
e46 front main seal
7k in hour pillar bazi
yale clinical psychology phd
lt80 300ex conversion
the egyptian book of the dead free pdf download
raptor 700 thumb throttle upgrade
auto aim lock free fire app
turn off motion blur nvidia control panel
neuropsychology courses online free
phoneme cards printable
essex high school staff directory
crate and barrel dubai
inmarsat phone
funded trader meaning
polk county towing laws
usb modem dongle
business for sale malmesbury
shredder machine thesis
corsair hs75 xb manual
python parse timezone name
how to check api response time in postman
git checkout pull request
birthday wishes for 15 year old son from mom
satya sai baba nag
free jewelry appraisal app
ford connect 7 sitzer
vivid memories by gwen
reshape2 r install
seagate st2000dm001
how to check if revolut account is verified
az network bastion ssh
instagram plus for ios 15
b complex kirkland
amcrest 5mp turret poe camera
state bar of california
black sequin dress maxi
proxfree websites anywhere
free bubble mailers
shortest path python
video of lesbians using sex toys
hearthstone arena tier list
christian jewelry store
error failed building wheel for rpi gpio
servo oscillation problem
teen girl butt crack pics
vrchat majima avatar
bodyj4you 36pc professional piercing kit surgical
unlock all camos
tonton the blue lagoon
non profit administrator job description
65 mustang body for sale
rims and tyres for sale
feof c
josh sidemen age
|
https://cs-advert.pl/1562/08/2006.html
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Getting scheduled query information and monitor the query
After you create a scheduled query you can access information about it in the
scheduled_queries table of the Hive information schema. You can also
use the information schema to monitor scheduled query execution.
- Query the information schema to get information about a schedule.
SELECT * FROM information_schema.scheduled_queries WHERE schedule_name = 'scheduled_rebuild';The following information appears about the scheduled query:
- scheduled_query_id
- Unique numeric identifier for a scheduled query.
- schedule_name
- Name of the scheduled query.
- enabled
- Whether the scheduled query is currently enabled or not.
- cluster_namespace
- Namespace that the scheduled query belongs to.
- schedule
- Schedule described as a Quartz cron expression.
- user
- Owner of the scheduled query.
- query
- SQL query to be executed.
- next_execution
- When the next execution of this scheduled query is due.
- Monitor the most recent scheduled query execution.
SELECT * FROM information_schema.scheduled_executions;You can configure the retention period for this information in the Hive metastore.
- scheduled_execution_id
- Unique numeric identifier for a scheduled query execution.
- schedule_name
- Name of the scheduled query associated with this execution.
- executor_query_id
- Query ID assigned to the execution by HiveServer (HS2).
- state
- One of the following phases of execution.
- STARTED. A scheduled query is due and a HiveServer instance has retrieved its information.
- EXECUTING. HiveServer is executing the query and reporting progress in configurable intervals.
- FAILED. The query execution was stopped due to an error or exception.
- FINISHED. The query execution was successful.
- TIMED_OUT. HiveServer did not provide an update on the query status for more than a configurable timeout.
- start_time
- Start time of execution.
- end_time
- End time of execution.
- elapsed
- Difference between start and end time.
- error_message
- If the scheduled query failed, it contains the error message associated with its failure.
- last_update_time
- Time of the last update of the query status by HiveServer.
|
https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/using-hiveql/topics/hive_access_schedule.html
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
rise of Linux container use in commercial environments, the adoption of container technologies has gained momentum in technical and scientific computing, commonly referred to as high-performance computing (HPC). Containers can help solve many HPC problems, but the mainstream container engines didn't quite tick all the boxes. Podman is showing a lot of promise in bringing a standards-based, multi-architecture enabled container engine to HPC. Let’s take a closer look.
The trend towards using AI-accelerated solutions often require repackaging of applications and staging the data for easier consumption, breaking up otherwise massively parallel flow of purely computational solutions.
The ability to package application code, its dependencies and even user data, combined with the demand to simplify sharing of scientific research and findings with a global community across multiple locations, as well as the ability to migrate said applications into public or hybrid clouds, make containers very relevant for HPC environments. A number of supercomputing sites already have portions of their workflows containerized, especially those related to artificial intelligence (AI) and machine learning (ML) applications.
Another aspect of why containerized deployments are becoming more and more important for HPC environments is the ability to provide an effective and inexpensive way to isolate the workloads. Partitioning large systems for use by multiple users or multiple applications running side by side has always been a challenge.
The desire to protect applications and their data from other users and potentially malicious actors is not new and has been addressed by virtualization in the past. With Linux cgroups and later with Linux containers the ability to partition system resources with practically no overhead has made containers particularly suitable for HPC environments where achieving maximum system utilization is the goal.
However, most recent implementations of mainstream container runtime environments have been focused on enabling CI/CD pipelines and microservices and have not been able to address supercomputing requirements, prompting the creation of several incompatible implementations just for use in HPC.
Podman and Red Hat Universal Base Image
That landscape changed when Podman arrived. Based on standards from the Open Container Initiative (OCI) Podman's implementation is rootless (does not require superuser privileges) and daemon-less (does not need constantly running background processes), and focuses on delivering performance and security benefits.
Most importantly, Podman and the accompanying container development tools, Buildah and Skopeo, are being delivered with Red Hat Enterprise Linux (RHEL), making it relevant to many HPC environments that have standardized and rely on this operating system (OS).
Another important aspect is that Podman shares many of the same underlying components with other container engines, like CRI-O, providing a proving ground for new and interesting features, and maintaining direct technology linkage to Kubernetes and Red Hat OpenShift. The benefits of technology continuity, the ability to contribute and tinker code at the lowest layers of the stack, and the presence of a thriving community, were the fundamental reasons for Red Hat’s investment in Podman, Buildah and Skopeo.
To further foster collaboration in the community and enable participants to freely redistribute their applications and containers that encapsulate them, Red Hat introduced the Red Hat Universal Base Image (UBI). UBI is an OS container image that does not run directly on bare metal hardware and is not supported as a stand alone entity, however it offers the same proven quality and reliability characteristics as Red Hat Enterprise Linux since it is tested by the same quality, security and performance teams.
UBI offers a different end user license agreement (EULA) that allows users to freely redistribute containerized applications built with it. Moreover, when a container built with UBI image is running on top of Red Hat platforms, like RHEL with Podman or OpenShift, it can inherit support terms from the host system that it runs on. For many sites that are required to run supported software this seamlessly creates a trusted software stack that is based on a verified OS container image.
Podman for HPC
Podman offers several features that are critical to HPC. For example, enabling containers to run with a single UID/GID pair based on the logged-in user’s UID/GID (i.e., no root privileges) and the ability to enforce additional security requirements via advanced kernel features like SELinux and Seccomp. Podman also allows users to set up or disable namespaces, specify mounting points for every container and modify default security controls settings across the cluster, by outlining these tasks in containers.conf file.
To make Podman truly useful for running mainstream HPC it needs the ability to run jobs via Message Passing Interface (MPI). MPI applications still represent the bulk of HPC workloads and that is not going to change overnight. In fact, even AI/ML workflows often use MPI for multi-node execution. Red Hat engineers worked in the community to enable Podman to run MPI jobs with containers. This feature was then made available in RHEL 8 and was further tested and benchmarked against different container runtime implementations by the members of the community and independent researchers resulting in a published paper.
This ecosystem consisting of the container runtime, associated tools and container base image offers tangible benefits to scientists and HPC developers. They can create and prototype containers on their laptop, test and validate containers in a workflow using a single server (referred to as "node" in HPC) and then successfully deploy containers on thousands of similarly configured nodes across large supercomputing clusters using MPI. Moreover, with UBI scientists can now distribute their applications and data within the global community more easily.
All these traits of Podman have not gone unnoticed in the scientific community and at the large national supercomputing sites. Red Hat has a long history of collaborating with supercomputing sites and building software stacks for many TOP500 supercomputers in the world. We have keen interest in the Exascale Computing Project (ECP) and are tracking the next generation of systems that seek to break the exascale threshold. So when ECP kicked off the SuperContainers project, one of ECP’s newest efforts, Andrew Younge of Sandia National Laboratories, a lead investigator for that project, reached out to Red Hat to see how we can collaborate on and expand container technologies for use in first exascale supercomputers, which are expected to arrive as soon as 2021.
Red Hat contributes to upstream Podman and has engineers with deep Linux expertise and background in HPC who were able to work out a multi-phase plan. The plan expedites the development of HPC-friendly features in Podman, Buildah and Skopeo tools that come with Red Hat Enterprise Linux, with the goal of getting these features into Kubernetes and then into OpenShift.
SuperContainers and multiple architectures
The first phase of the collaboration plan with ECP would focus on enabling a single host environment, incorporating UBI for ease of sharing container packages and providing support for accelerators and other special devices that make containers aware of the hardware that exists on the host. In the second phase, we would enable support for container runtime on the vast majority of the pre-exascale systems using MPI, across multiple architectures, like Arm and POWER. And the final phase calls for using OpenShift for provisioning containers, managing their life cycle and enabling scheduling at exascale.
Here is what Younge shared with us in a recent conversation: "When the ECP Supercomputing Containers project (aka SuperContainers) was launched, several container technologies were in use at different Department of Energy (DOE) Labs. However, a more robust production-quality container solution is desired as we are anticipating the arrival of exascale systems. Due to a culture of open source software development, support for standards, and interoperability, we’ve looked to Red Hat to help coalesce container runtimes for HPC."
Sandia National Labs is a home to Astra, the world's first Arm-based petascale supercomputer. Red Hat collaborated with HPE, Mellanox and Marvell to deliver this supercomputer to Sandia in 2018, as a part of the Vanguard program. Vanguard is aimed at expanding the high-performance computing ecosystem by evaluating and accelerating the development of emerging technologies in order to increase their viability for future large-scale production platforms. That collaboration was enabled by Red Hat’s multi-architecture strategy that helps customers design and build infrastructure based on their choice of several commercially available hardware architectures using a fully-open, enterprise-ready software stack.
Astra is now fully operational and Sandia researchers are using it to build and validate containers with Podman on 64-bit Arm v8 architecture. Younge provided the following insight: "Building containers on less widespread architectures such as Arm and POWER can be problematic, unless you have access to servers of the target architecture. Having Podman and Buildah running on Astra hardware is of value to our researchers and developers as it enables them to do unprivileged and user-driven container builds. The ability to run Podman on Arm servers is a great testament to the strength of that technology and the investment that Red Hat made in multi-architecture enablement."
International Supercomputing Conference and the TOP500 list
If you are following or virtually attending the International Supercomputing Conference (ISC) that starts today, be sure to check out "Introduction to Podman for HPC use cases" keynote by Daniel Walsh, senior distinguished engineer at Red Hat. It will be presented during the Workshop on Virtualization in High-Performance Cloud Computing. For a deeper dive into practical implementation of HPC containers be sure to check out the High Performance Container Workshop where a panel of industry experts, including Andrew Younge and engineers from Red Hat, will be providing insights into most popular container technologies and the latest trends.
While it is fascinating to see Red Hat Enterprise Linux running Podman and containers on the world’s first Arm-based supercomputer, according to the latest edition of TOP500 list, published today at ISC 2020, RHEL is also powering the world’s largest Arm supercomputer. Fujitsu's Supercomputer Fugaku is the newest and largest supercomputer in the world and it is running RHEL 8. Installed at RIKEN, Fugaku is based on Arm architecture and is the first ever Arm-based system to top the list with 415.5 Pflop/s score on the HPL benchmark.
RHEL now claims the top three spots on the TOP500 list as it continues to power the #2 and #3 supercomputers in the world, Summit and Sierra, that are based on IBM POWER architecture.
RHEL is also powering the new #9 system on the list, the Marconi-100 supercomputer installed at Cineca and built by IBM for a grand total of four out of 10 top systems on the list.
RHEL also underpins six out of the top ten most power-efficient supercomputers on the planet according to the Green500 list.
So what does the road ahead look like for Podman and RHEL in supercomputing?
RHEL serves as the unifying glue that makes many TOP500 supercomputers run reliably and uniformly across various architectures and configurations. It enables the underlying hardware and creates a familiar interface for users and administrators.
New container capabilities in Red Hat Enterprise Linux 8 are paving the way for SuperContainers and can help smooth transition of HPC workloads into the exascale space.
In the meantime, growing HPC capabilities in OpenShift could be the next logical step for successful provisioning and managing containers at exascale while also opening up a path for deploying them into the public or hybrid clouds..
|
https://www.redhat.com/en/blog/podman-paves-road-running-containerized-hpc-applications-exascale-supercomputers?source=bloglisting&page=1
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Consider a sample json which looks like this:
{ "a" : ["b", "c"], "b" : ["a", "c"], "c" : ["a", "b"] }This content can be represented as a "Map<List<String>>" type in Java.
So now, if I were to use straight Java to convert the string to the appropriate type, the code would look like this:
Map<String, List<String>> result = objectMapper.readValue(json, new TypeReference<>() { });What exactly is that "TypeReference" doing there... think of it as a way of making the type parameters of the generic type "Map" which are "String" and "List" available at runtime, without that the types are erased at runtime and Java would not know that it has to create a "Map<String, List<String>>".
Kotlin can hide this detail behind an extension function which if defined from scratch would look like this:
inline fun <reified T> ObjectMapper.readValue(src: String): T = readValue(src, object : TypeReference<T>() {})and a code using such an extension function:
val result: Map<String, List<String>> = objectMapper.readValue(json)See how all the TypeReference related code is well hidden in the extension function.
This is the kind of capability that is provided by the Jackson Kotlin Module With the right packages imported a sample code with this module looks like this:
import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper import com.fasterxml.jackson.module.kotlin.readValue val objectMapper = jacksonObjectMapper() val result: Map<String, List<String>> = objectMapper.readValue(json)This is just one of the simplifications offered by the use of the module. It also supports features like Kotlin Data classes, Kotlin built-in types like Pair, Triple etc out of the box.
Highly recommended if you are using Jackson with Kotlin.
|
http://www.java-allandsundry.com/2021/03/jackon-kotlin-extension-and-reified.html
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
This is a sample from my This Week In React newsletter. Subscribe for more!
React-Redux v8 alpha.0 was just announced by Mark Erikson.
The library is now fully rewritten in TypeScript.
More importantly, React-Redux v8 is adopting a new React 18 hook
useSyncExternalStore (replacing
useMutableSource).
This hook allows React to work properly in concurrent mode and sync with an external state (from Redux) without tearing (the UI can't become inconsistent with store state).
This hook is the "level 2" (make it right) of the state subscription ladder:
Dan Abramov explains it's an important milestone for React 18:
For the first time we’ve been able to move the “meat” of the react-redux bindings implementation into react itself.
It wasn’t a lot of code, but it relied on a bunch of hacks and complexity that kept growing. Now that “meat” is 5 lines of code, we handle the rest.
import { useSyncExternalStoreExtra } from 'use-sync-external-store/extra'; // React-Redux v8 alpha code in useSelector() const selectedState = useSyncExternalStoreExtra( store.subscribe, store.getState, // TODO Need a server-side snapshot here store.getState, selector, equalityFn );
To ease incremental adoption in libraries today, an external package
use-sync-external-store has been published, allowing to use this hook with a consistent API on React 16, 17 & 18.
Unlike the former
useMutableSource API, this new hook supports unstable, inline selectors without re-subscribing, and you won't need to wrap selectors in
useCallback to stabilize them:
// Not ideal const user = useSelector( useCallback(state => state.user,[]) ); // Simpler const user = useSelector(state => state.user)
The community remains very interested to have a native
useContextSelector() hook in React core.
A performant
useContextSelector() hook would allow Redux to try again passing the store state directly as context value, instead of passing a store object and managing subscriptions internally. React would now hold entirely the Redux state and the React-Redux bindings would move to the ladder's stage 3 (make it fast). This stage would permit React to avoid unnecessary rendering work when a Redux store update happens while a concurrent rendering is in progress.
useContextSelector is still under research and is not a mandatory feature for React 18 to land. Andrew Clark commented that it will more likely be released in a minor 18.x release.
Top comments (3)
I'd like to contact you in regards to some questions about potentially sponsoring one of your newsletters, and finding it difficult to contact you directly, to ask some questions.
If interested, please connect w/ me on LinkedIn linkedin.com/in/jeremyharrisconsul... (which I could miss your message there as it is swamped), or even better, shoot me an email at info @ zenosmosis.com.
Hey
I'm pretty easy to contact on LinkedIn or Twitter, just send a message ^^
My English newsletterr is not "yet" launched and I think it's way too small to accept sponsorships, and not worth it.
My French one accept sponsors: sebastienlorber.com/sponsor
Thanks for the response. Just sent you a message on LinkedIn.
|
https://dev.to/sebastienlorber/react-18-milestone-react-redux-adopts-usesyncexternalstore-102b
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Algorithm Fourth Edition: 1.1.27 binomial distribution, from 10 to 10 billion@ TOC
Recursive method to realize binomial distribution operation, O(2^N)
The recursive algorithm is relatively simple. It is very suitable for small parameters.
But the problem is also obvious. N * k > 100 is very slow at the beginning. When > 1000, the program basically crashes. The algorithm efficiency is O(2^N), which can not be used in practical development.
double binomial(int N, int k, double p) { if (N == 0 && k == 0) { return 1; } if (N < 0 || k < 0) { return 0; } return (1.0 - p) * binomial(N - 1, k, p) + p * binomial(N - 1, k - 1, p); }
Optimize by saving the calculation results to the array, O(N)
It is also recursive, but saving and reusing the calculated results immediately reduces the complexity and efficiency O(N).
At this time, large number operation can be carried out, for example, n * k = 100000000, which can be used in practice, and it is of little significance.
struct bino { double qbinomial(int N, int k, double p) { if (N == 0 && k == 0) { return 1; } if (N < 0 || k < 0) { return 0; } vd = std::vector<std::vector<double>>((N + 1), std::vector<double>((k + 1), -1)); return qbinomial(0, N, k, p); } private: double qbinomial(int f, int N, int k, double p) { if (N == 0 && k == 0) { return 1; } if (N < 0 || k < 0) { return 0; } if (vd[N][k] < 0) { vd[N][k] = (1.0 - p) * qbinomial(0, N - 1, k, p) + p * qbinomial(0, N - 1, k - 1, p); } double result = vd[N][k]; vd.clear(); return result; } std::vector<std::vector<double>> vd; };
Pursue the best and seek a breakthrough
With optimized recursion, the order of magnitude plus one, the program crashes. In order to improve the stability, we abandon the recursion method and use the loop, but we need to have a deep understanding of the algorithm.
Recursion is pushed back and forward, which still takes up too much computation. The stability is improved by pushing directly from the array loop. At the level of N * k =500 million, it does not collapse, but the speed does not increase significantly.
double qqbinomial(int N, int k, double p) //Fast binomial distribution; Building a matrix data structure { if (N == 0 && k == 0) { return 1; } if (N < 0 || k < 0) { return 0; } std::vector<std::vector<double>> qvd = std::vector<std::vector<double>> ((N + 1), std::vector<double>((k + 1), 0)); double q = 1; for (size_t i = 0; i != N + 1; ++i) { qvd[i][0] = q; q *= 1 - p; } double pp = 1 - p; for (size_t i = 1; i != N + 1; ++i) { if (i < k) //When I > k, the extra values are 0. Make judgment to avoid repeated calculation { for (size_t j = 1; j != i + 1; ++j) { qvd[i][j] = pp * qvd[i - 1][j] + p * qvd[i - 1][j - 1]; } } else { for (size_t j = 1; j != k + 1; ++j) { qvd[i][j] = pp * qvd[i - 1][j] + p * qvd[i - 1][j - 1]; } } } double result = qvd[N][k]; qvd.clear(); return result; }
I want ultimate stability
How can I reach the int limit of C + +? When n * k = 1 billion, there is not enough memory and still crashes, but this has not reached the upper limit of int integer.
Can I reach the upper limit? Probably not, but try.
Reviewing the circular array algorithm, it is found that the number of elements in the array is N * k. when the number exceeds 1 billion, 16g of memory directly crashes. But do I need such a large array?
The problem lies in the data structure. When calculating the value, the algorithm only needs two numbers [n - 1] [k] and [n - 1] [K - 1], which need K numbers. Therefore, the STD:: deque < STD:: vector < double > > data structure can be adopted. Every time a row of array is calculated and inserted, a number group will be removed from the top of the queue. At this time, the occupied memory will not exceed K double memory.
At the same time, according to the algorithm, when k > N, the result is 0, which can improve the efficiency of the algorithm to O(N) or O(1), and the memory space is not greater than the memory occupied by K double types.
At this time, the order of magnitude N * K can reach 10 billion, theoretically breaking through the int upper limit (2 billion), but it is meaningless.
Here are the codes:
double binomial(int N, int k, double p) //Stabilize binomial distribution and construct queue { if (N == 0 && k == 0) { return 1; } if (N < 0 || k < 0) { return 0; } if (k > N) { double result = 0; return result; } std::deque<std::vector<double>> deqVecDou; deqVecDou.push_back(std::vector<double>{1, 0}); double q = 1; std::vector<double> lieOne(N + 1); for (size_t i = 0; i != N + 1; ++i) { lieOne[i] = q; q *= 1 - p; } double pp = 1 - p; for (size_t i = 1; i != N + 1; ++i) { if (i < k) { deqVecDou.push_back(std::vector<double>(i + 2, 0)); deqVecDou[1][0] = lieOne[i]; for (size_t j = 1; j != i + 1; ++j) { deqVecDou[1][j] = pp * deqVecDou[0][j] + p * deqVecDou[0][j - 1]; } deqVecDou.pop_front(); } else { deqVecDou.push_back(std::vector<double>(k + 2, 0)); deqVecDou[1][0] = lieOne[i]; for (size_t j = 1; j != k + 1; ++j) { deqVecDou[1][j] = pp * deqVecDou[0][j] + p * deqVecDou[0][j - 1]; } deqVecDou.pop_front(); } } double result = deqVecDou[0][k]; deqVecDou.clear(); lieOne.clear(); return result; }
Without data structure, the algorithm is basically a castle in the air.
I spent two days thinking about and improving the small problem in Section 1 of Chapter 1 of the fourth edition of the algorithm. Although it was a blind adjustment, because it had exceeded the actual demand in the end, and the data accuracy of double had been far left behind, it was still meaningful.
Advancing from O(2^N) to O(N) requires only the most basic array. The memory occupation is from N ^ 2 to N. you only need to set an array in the queue, and then delete it at the top of the loop. I deeply realize that the attack of the algorithm is at the rolling level. C + +, which is famous for pursuing efficiency, is basically a joke without the blessing of the algorithm.
For algorithms, see here:
The recursive algorithm of binomial distribution (1.1.27 Binomial distribution) is improved based on array
|
https://programmer.group/1.1.27-binomial-distribution-from-10-to-10-billion.html
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
7 TextField enhancement suggestion
Hello,
I have TextField enhancement suggestion for Vaadin 7.
Vaadin is mainly for LOB database application. Right?
Columns in database is defined as "Varchar(10)", "Numeric(10,2)", "Integer", etc..
I want TextField with mode for entering Integer - allowing only numbers.
I want TextField with mode for entering Number(10,2) - allowing only this format.
I want really user friendly TextField.
Why allow user enter letters when databinding to Integer property? Why?
Disallowing obviously wrong input on client side is very user friendly and common.
Databinding should be one of the metadata source.
For applications mainly based on databases this is first thing I check and want.
It is done in ten years old Borland Delphi.
This functionality is available in the form of the MaskedTextField add-on in the directory as well as several more specific add-ons for entering of numerical data etc.
Note, though, that input masks can often be a source of irritation rather than helpful if overused or not very carefully designed, especially with respect to operations such as editing an already entered value or copying and pasting values. Furthermore, developers often tend to restrict too much what is truly allowable, making applications impossible to use for some users (e.g. the allowable characters and structure of postal codes vary a lot from country to country, see e.g. this page for a few examples).
Yes,
mask overuse or mask itself is not user friendly. No doubt.
But I dont want mask edit (and certainly not with visible mask placeholdert).
I miss the simple "NumberField" (no mask, no visible mask placeholder), just "Integer mode", "Double mode", "Strict or BigDecimal mode- specify places before and after decimal point" (for values from database).
I try this components
1) MaskedTextField
- visible mask placeholder
- can enter decimal point multiple times
2) NumericField (from MaskedTextField addon)
- can not enter negative sign "-"
- can enter decimal point multiple times
- can not enter max size before and after dot
3) NumericField (from NumericField addon)
- show wrong input char, and after half-second change value to previous allowed value
I still wonder why this must-have-component is not in core vaadin.In every nontrivial application users enter numbers
For integer field you can use a converter
TextField ultimonumero = new TextField(); ultimonumero.setCaption("Último número"); ultimonumero.setConverter(new StringToIntegerConverter());
For non negative integer:
public class StringToNonNegativeIntegerConverter extends StringToIntegerConverter { @Override public Integer convertToModel(String value, Class<? extends Integer> targetType, Locale locale) throws com.vaadin.data.util.converter.Converter.ConversionException { return abs(super.convertToModel(value, targetType, locale)); } private Integer abs(Integer value){ return value < 0 ? - value : value; } }
example
TextField ultimonumero = new TextField(); ultimonumero.setCaption("Último número"); ultimonumero.setConverter(new StringToNonNegativeIntegerConverter());
|
https://vaadin.com/forum/thread/1373501/vaadin-7-textfield-enhancement-suggestion
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Next() call not being delayed
I have an indicator with a period of 12 being initialized in my strategy.
I am compressing 60 minutes into hour long bars.
Based off of what I read on the wiki - the first next() call should not occur until 12 hours have passed in the data, correct? Yet my next() call is being triggered with only one item in self.data. Is there something I misunderstood with how compression works?
edit: Just tried to remove compression and I still have the same issue. Here is my init function, using most of the boilerplate in oandatest.py
I tried removing compression and this issue still occurs. Below is my strategy init function:
def __init__(self): # To control operation entries self.orderid = list() self.order = None self.counttostop = 0 self.datastatus = 0 # Create SMA on 2nd data self.sma = bt.indicators.MovAv.SMA(self.data, period=self.p.smaperiod) self.atr = bt.indicators.ATR(self.data, period=20) self.highest = bt.indicators.MaxN(self.data, period=12) self.lowest = bt.indicators.MinN(self.data, period=12) self.margin = 0.0025 print('--------------------------------------------------') print('Strategy Created') print('--------------------------------------------------')
Removed for edit
- backtrader administrators last edited by
That code for sure is not enough to tell why
nextis called. But if you are using the
oandatestsample, this sample calls
nextfrom
prenext, because the point is seeing all received data and not waiting for the indicators.
@backtrader Thank you, fixed.
|
https://community.backtrader.com/topic/438/next-call-not-being-delayed/3
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
You are viewing the documentation for the 2.2.x release series. The latest stable release series is 2.8 by following these steps:
- Implement
play.Plugin(see this for an example).
- This plugin should be available in the application either through pulling in it from a maven repository and referencing it
as an app dependency or the plugin code can be part of a play application.
- You can access it like:
import static play.api.Play.*; import static play.libs.Scala.*; public Myplugin plugin() { return orNull(unsafeApplication().plugin(MyPlugin.class)).api(); }
which will return an instance or subclass of
MyPlugin fully initialized or
null.
- In your app create a file:
conf/play.pluginsand add a reference to your plugin:
5000:com.example.MyPlugin
The number represents the plugin loading order. By setting it to > 10000 we can make sure it’s loaded after the global plugins.
Found an error in this documentation? The source code for this page can be found here. After reading the documentation guidelines, please feel free to contribute a pull request. Have questions or advice to share? Go to our community forums to start a conversation with the community.
|
https://www.playframework.com/documentation/2.2.x/JavaPlugins
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
I have a base "fishnet" polygon grid. I attempted to create a model that iterates over each cell in that base fishnet grid and subdivide that cell with the Create Fishnet tool into a new feature class.
I expected that I simply would need to set the output from my iterate feature iterator as the 'Template Extent' parameter for the Create Fishnet tool. However, while I can see the name of this iterator output model variable in the drop-down list for the 'Template Extent' parameter; the name does not have the blue Model Variable icon. And more importantly, the template extent does not update at each model iteration.
Is there a way that I can make my model recognize aan iteration output as a template extent? ( see screenshot below)
[ATTACH=CONFIG]19323[/ATTACH]
import arcgisscripting gp = arcgisscripting.create(9.3) #you will need to modify these two parameters for your situation. This is an ex of a pgdb FeatureClass inputLocation = "\\\\C:\Scratch.mdb" inputFC = inputLocation + "\\exGRID" # Open a search cursor on a feature class rows = gp.searchcursor(inputFC) row = rows.Next() i = 1 while row: gp.AddMessage("Attempting to process polgyon " + str(i)) #Get the shape field shape = row.shape #Get the extent object extent = shape.extent outputName = "Fishnet_" + str(i) outFC = inputLocation + "\\" + outputName #Get the origin (lowerLeft) and y-axis (upperLeft) properties of the polygon ll = extent.lowerleft ul = extent.upperleft ur = extent.upperright originXY = str(ll.x) + " " + str(ll.y) yaxisXY = str(ul.x) + " " + str(ul.y) opp_corner = str(ur.x) + " " + str(ur.y) gp.AddMessage("LowerLeft (x,y): %f, %f" % (ll.x, ll.y)) gp.AddMessage("UpperLeft (x,y): %f, %f" % (ul.x, ul.y)) gp.AddMessage("XMin: %f" % (extent.xmin)) gp.AddMessage("YMin: %f" % (extent.ymin)) gp.AddMessage("XMax: %f" % (extent.xmax)) gp.AddMessage("YMax: %f" % (extent.ymax)) #keep the cell width/height set to zero as it will use the opp_corner to figure it out cellWidth = 0 cellHeight = 0 #plug in the #rows/#cols you want the output grids to contain nrows = 10 ncols = 10 gp.CreateFishnet_management(outFC, originXY, yaxisXY, cellWidth, cellHeight, nrows, ncols, opp_corner) gp.AddMessage("Created " + outputName) i = i + 1 row = rows.Next()
|
https://community.esri.com/t5/geoprocessing-questions/how-to-use-an-iterator-to-set-the-template-extent/td-p/71452?attachment-id=41861
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Scala DSL for money-related operationsScala DSL for money-related operations
This Domain-Specific Language (DSL) lets you perform operations among different currencies, by transparently doing all internal conversions. The conversion map is injected implicitly by the client code.
Using MoneyUsing Money
As a first step you need to add the resolver and dependency to your build file:
libraryDependencies += "com.lambdista" %% "money" % "0.8.0"
You can find all the released versions here.
Builds are available for Scala 2.13.x, 2.12.x.
Usage ExampleUsage Example
Here's a simple usage example:
import money._ object Usage { def main(args: Array[String]): Unit = { val conversion: Conversion = Map( (EUR, USD) -> 1.13, (EUR, GBP) -> 0.71, (USD, EUR) -> 0.88, (USD, GBP) -> 0.63, (GBP, EUR) -> 1.40, (GBP, USD) -> 1.59 ) implicit val converter = Converter(conversion) val sumAndConversion1 = 100.001 (USD) + 200 (EUR) to GBP println(s"sumAndConversion1: $sumAndConversion1") val sumAndConversion2: Money = 100 (USD) + 210.4 (EUR) to EUR println(s"sumAndConversion2: $sumAndConversion2") val sum = 100.001 (USD) + 200 (EUR) val simpleConversion = sum to GBP println(s"simpleConversion: $simpleConversion") val sumWithSimpleNumber = 100 (USD) + 23.560 println(s"sumWithSimpleNumber: $sumWithSimpleNumber") val multiplicationWithSimpleNumber = 100 (USD) * 23 println(s"multiplicationWithSimpleNumber: $multiplicationWithSimpleNumber") val usd = Currency("USD") val multiplication = 100 (usd) * 23 println(s"multiplication: $multiplication") val divisionWithSimpleNumber = 100 (USD) / 23 println(s"divisionWithSimpleNumber: $divisionWithSimpleNumber") val comparison = 100 (USD) > 99 (EUR) println(s"100 USD > 99 EUR? $comparison") } }
As you can see the client code just needs a simple import and an implicit value of type
Converter
in order to use the DSL. The operations shown in the previous code are only a few among the available ones.
Have a look at the
Money class for a complete coverage.
Run the exampleRun the example
To run the previous example launch:
$ sbt run
Play with the REPLPlay with the REPL
To play with Scala's REPL launch:
$ sbt console
This will automatically fire the Scala's REPL and run the following commands for you:
import money._ val conversion: Conversion = Map( (EUR, USD) -> 1.13, (EUR, GBP) -> 0.71, (USD, EUR) -> 0.88, (USD, GBP) -> 0.63, (GBP, EUR) -> 1.40, (GBP, USD) -> 1.59 ) implicit val converter = Converter(conversion)
This way you can start playing with the DSL expressions (e.g.:
100(USD) + 90(EUR)) without worrying about imports
and the conversion map. Of course if you need to use your own conversion you can redefine it.
Scala's Numeric type class implementationScala's Numeric type class implementation
When using the following import:
import money._
you get the default implicit for
Numeric[Money] whose implementation is as follows:
implicit def numericMoney(implicit converter: Converter) = new NumericMoney(DEFAULT_CURRENCY)
It uses
DEFAULT_CURRENCY, which is
USD. That's necessary for the
fromInt method of
Numeric:
def fromInt(x: Int): Money
That is, in order to create a
Money object you need a
Currency.
If you intend to use a different currency you can define your own implicit
Numeric[Money], e.g.:
implicit def numericMoney(implicit converter: Converter) = new NumericMoney(EUR).
|
https://index-dev.scala-lang.org/lambdista/money
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
React and React Native finally feel the same
If you're a developer of both React and React Native apps, it can be tough to switch between platforms because they feel so different. And in many ways React Native feels relatively... well, backwards.
Despite great improvements in DX (Developer Experience) for web development with animation libraries like Framer Motion and much easier styling with Tailwind CSS, React Native is still mired in the madness of creating StyleSheets and managing animation states with Animated or Reanimated.
Yes, I know it's basically all JSX. But jumping from CSS and declarative animations on web to React Native's Animated.spring and StyleSheet.create is just a real big pain, and requires a lot of learning.
This is a big problem for small teams - vastly different platforms means developers need to learn a lot more to do both, or you need to hire more developers on separate web and mobile teams.
But this problem is finally solved 🎉
<motion.div React Native component </Text> </MotionView>
Using tailwindcss-react-native and Legend-Motion we can now write React and React Native code using the same styling and animation patterns, and even mix them together in React Native Web.
In this example you can see an HTML element in React right next to a React Native element in React Native Web, styled and animated in the same way 🤯.
The Problem
React and React Native are similar but different in significant ways, so although React components and React Native components share the same concepts, they need to be written fundamentally differently because:
- Styling is different: React uses CSS and React Native uses StyleSheet.
- Animations are different: React uses CSS transitions or libraries like Framer Motion while React Native uses the built-in Animated or Reanimated.
- Navigation is different: Web and mobile apps are just fundamentally different, so (for now) we're fine with having separate navigation systems and we're focusing on the components themselves. Though, Solito is an interesting project trying to align them that we're watching closely.
The Solution
React developers have recently aligned around using Tailwind CSS for styling and Framer Motion for animations. These both have great DX that feels in many ways easier than the built-in React Native solutions. So if we use libraries for React Native that bring our favorite APIs to React Native, then we can have the same developer nirvana on both platforms.
1. Styling with tailwindcss-react-native
tailwindcss-react-native is a new library that uses Tailwind CSS as a universal design system for all React Native platforms. It has three features that are crucial for us:
- It uses
classNameas a string, just like React components.
- It has great performance because it converts
classNameto styles with a Babel plugin so there is almost no runtime cost.
- In React Native Web it simply passes
classNamestraight through to the DOM components, so it uses normal Tailwind CSS with no overhead.
This means we can have convenient and familiar Tailwind CSS usage on mobile with no overhead, and on React Native Web it just uses Tailwind CSS directly. If you inspect the example below in your browser developer tools you'll see the classNames passed through to the rendered div element.
import { Pressable, Text } from "react-native"; /** * A button that changes color when hovered or pressed * The text will change font weight when the Pressable is pressed */ export function MyFancyButton(props) { return ( <Pressable className="p-4 rounded-xl component bg-violet-500 hover:bg-violet-600 active:bg-violet-700" > <Text className="font-bold component-active:font-extrabold" {...props} /> </Pressable> ); }
2. Animations with Legend-Motion
Legend-Motion is a new library (that we built) to bring the API of Framer Motion to React Native, with no dependencies by using the built-in Animated. This lets us create animations declaratively with an
animate prop, and the animation will update automatically whenever the value in the prop changes.
Try hovering over and clicking the box to see it spring around.
<Motion.View initial={{ y: -50 }} animate={{ x: value * 100, y: 0 }} whileHover={{ scale: 1.2 }} whileTap={{ y: 20 }} transition={{ type: "spring" }} />
3. Mix React and React Native Web
React Native Web supports mixing HTML and React Native elements together, so it's easy to incrementally drop React Native Web components into a React app. That's a huge boon because we can drop our React Native components into our web apps without needing to write the whole thing with React Native Web.
<motion.div React Native Element </Motion.Text> <motion.div className="p-5 mt-6 bg-blue-600 rounded-lg" whileHover={{ scale: 1.1 }} transition={{ type: 'spring' }} > DIV text </motion.div> </Motion.View> </motion.div>
Putting it all together
Using tailwindcss-react-native and Legend-Motion together we can build complex React Native components in an easy declarative way that will look very familiar to React developers:
<Motion.View className="p-4 font-bold bg-gray-800 rounded-lg" animate={{ x: value * 50 }} > <Text> Animating View </Text> </Motion.View> <Motion.View className="p-4 font-bold bg-gray-800 rounded-lg" whileHover={{ scale: 1.1 }} whileTap={{ x: 30 }} > <Text> Press me </Text> </Motion.View>
Try it now
1. Legend-Motion
Legend-Motion has no dependencies so it's easy to install.
npm i @legendapp/motion
Then using it is easy:
import { Motion } from "@legendapp/motion" <Motion.View animate={{ x: value * 100, opacity: value ? 1 : 0.5, scale: value ? 1 : 0.7 }} > <Text>Animating View</Text> </Motion.View>
See the docs for more details and advanced usage.
2. tailwindcss-react-native
tailwindcss-react-native has some tailwindcss configuration and a babel plugin so see its docs to get started.
3. React Native Web
React Native Web support for this is right on the bleeding edge. The pre-release version of React Native Web 0.18 adds the style extraction features that tailwindcss-react-native depends on. But there's an issue in its implementation of Animated that breaks all other styles when using using extracted styles. I have a fork of the pre-release 0.18 version that fixes this, so if you want to try it now, install
react-native-web from my fork:
npm i
It's a pre-release version so of course be careful when using it in production, but we're using it for the examples on this site and another app in development and haven't found any issues. Hopefully RNW 0.18 will release soon with Animated working well, and we can stop using my fork 🤞.
Towards Developer Utopia 🌟☀️✨🌈
To get to the utopic future of one platform that runs everywhere there's basically three paths:
- All web: This has long been the only viable solution, but mobile web apps can be slow and clunky if not done right. It is possible to build great mobile web apps, and we'll have a future blog post on that, but it's hard.
- All React Native: React Native Web is getting there! But it still needs to progress further, and we hope it does! See The Case for the React Native Web Singularity for a deeper dive. For now, it has a performance overhead compared to normal web apps and doesn't support all web features yet.
The problem with both of those solutions is you have to go all in on one platform and accept the limitations. We prefer what I like to call:
- A Pleasant Mix 🥰: We use React Native for mobile apps where it shines. We use React for web apps where it shines, and we drop in React Native Web components when we want to share components. For example, we have an admin dashboard in React with a Preview button that embeds the actual React Native components users see in the mobile apps.
Now that we finally can use the same styling and animation patterns, it's much easier for one team to work on both React and React Native. Until recently we had separate web and mobile teams, but in just the past few weeks that we've been using tailwindcss-react-native and Legend-Motion, we've already merged everyone into one team that can do anything 🚀.
Stay tuned
We are very excited about this new world of React and React Native working beautifully together! And we hope you are too 🎉🥳.
A huge shoutout to tailwindcss-react-native for being so great. Give it a star on Github and follow Mark Lawlor for updates.
Legend-Motion is our first open source library and we plan to keep improving it. We're also working on pulling out more of our core code into open source projects, so keep an eye on this blog and follow us or me on Twitter for updates: @LegendAppHQ or @jmeistrich.
|
https://legendapp.com/dev/react-and-native/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
import ROOT
Welcome to JupyROOT 6.07/07
%jsroot on
h = ROOT.TH1F("myHisto","My Histo;X axis;Y axis",64, -4, 4)
Time to create a random generator and fill our histogram:
rndmGenerator = ROOT.TRandom3() for i in xrange(1000): rndm = rndmGenerator.Gaus() h.Fill(rndm)
c = ROOT.TCanvas() h.Draw() c.Draw()
We'll try now to beautify the plot a bit, for example filling the histogram with a colour and setting a grid on the canvas.
h.SetFillColor(ROOT.kBlue-10) c.SetGrid() h.Draw() c.Draw()
Alright: we are done with our first step into the ROOTbooks world!
|
https://nbviewer.org/github/dpiparo/swanExamples/blob/master/notebooks/Simple_ROOTbook_py.ipynb
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
This is another resource that I’ve created for the Kwarqs FIRST Robotics team this season that I’ve found useful, and hopefully others will find it useful as well.
This particular program is stupidly simple, but really nice to have around in case you think your driver station is being screwy, or you want to verify that your switches work *before* running your actual code on it (not that you would run something without testing it, right? 😉 ). As you can see from the code, it displays the driver station inputs on the LCD panel of the driver station. It uses a modified version of the DriverStationLCD class posted at
#include "WPILib.h" #include "DriverStationLCD.h" class RobotDemo : public SimpleRobot { public: RobotDemo(void) { GetWatchdog().SetExpiration(0.1); } void OperatorControl(void) { double tm = GetTime(); GetWatchdog().SetEnabled(true); while (IsOperatorControl()) { GetWatchdog().Feed(); if (GetTime() - tm > 0.1) { DriverStationLCD * lcd = DriverStationLCD::GetInstance(); lcd->PrintfLine(DriverStationLCD::kMain_Line6, "Press select button"); lcd->PrintfLine(DriverStationLCD::kUser_Line2, "%d %d %d %d %d %d %d %d", (int)m_ds->GetDigitalIn(1), (int)m_ds->GetDigitalIn(2), (int)m_ds->GetDigitalIn(3), (int)m_ds->GetDigitalIn(4), (int)m_ds->GetDigitalIn(5), (int)m_ds->GetDigitalIn(6), (int)m_ds->GetDigitalIn(7), (int)m_ds->GetDigitalIn(8) ); lcd->PrintfLine(DriverStationLCD::kUser_Line3, "1: %.1f", m_ds->GetAnalogIn(1)); lcd->PrintfLine(DriverStationLCD::kUser_Line4, "2: %.1f", m_ds->GetAnalogIn(2)); lcd->PrintfLine(DriverStationLCD::kUser_Line5, "3: %.1f", m_ds->GetAnalogIn(3)); lcd->PrintfLine(DriverStationLCD::kUser_Line6, "4: %.1f", m_ds->GetAnalogIn(4)); lcd->UpdateLCD(); tm = GetTime(); } } } }; START_ROBOT_CLASS(RobotDemo);
Download the full project here
|
http://www.virtualroadside.com/blog/index.php/2009/03/08/frc-driver-station-test-program/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Hello all,
first, I use this code to get total dos with respect to mp id.
import numpy as np
from pymatgen.ext.matproj import MPRester
mpr = MPRester()
dos = mpr.get_dos_by_material_id(“mp-134”)
total_density = sum(dos.densities.values())
using this code, I can get energy versus DOS data.
but I do not know, how should I get element partial dos with respect to mp id.
for example, in case of binary compounds(ternary, Quaternary and so on) ,
E Si O
-10 0.0 0.0
-9.8 0.8 0.3
Thank you in advance for your help!
|
https://matsci.org/t/element-partial-density-of-states/2573
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Follow me on Twitter at @tim_deschryver | Subscribe to the Newsletter | Originally published on timdeschryver.dev..
$ npm install --save-dev eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin.
- export class Rule extends Lint.Rules.TypedRule {} + + import { ESLintUtils } from '@typescript-eslint/experimental-utils' + export default ESLintUtils.RuleCreator(ruleName => generateDocsUrl(ruleName)){})
This utility method expects that you pass the rule name, some metadata, the default options, and lastly a
create method.
These options to configure your rules can be compared with the static
metadata property of a TSLint rule and its
apply methods.
- public static metadata: Lint.IRuleMetadata = { - ruleName: 'ngrx-action-hygiene', - description: 'Enforces the use of good action hygiene', - descriptionDetails: - 'See more at', - options: null, - optionsDescription: 'Not configurable', - requiresTypeInfo: false, - type: 'maintainability', - typescriptOnly: true, - } + name: 'action-hygiene', + meta: { + type: 'suggestion', + docs: { + category: 'Best Practices', + description: + 'Enforces the use of good action hygiene. See more at.', + recommended: 'warn', + }, + schema: [], + messages: { + actionHygiene: `Action type '{{ actionType }}' does not follow the good action hygiene practice, use "[Source] Event" to define action types`, + }, + }, + defaultOptions: [], - public applyWithProgram( - sourceFile: ts.SourceFile, - program: ts.Program, - ): Lint.RuleFailure[] { - // rule implementation - } + create: context => { + return { + // rule implementation + } + }.
- protected visitCallExpression(node: ts.CallExpression) { - // rule implementation - } + create: context => { + return { + CallExpression(node) { + // rule implementation + }, + // an example of a esquery, to find console.log statements + // with exactly one argument + `CallExpression[callee.object.name="console"][callee.property.name="log"][arguments.length=1]`(node) { + // rule implementation + } + } + }.
- const failures = hits.map( - (node): Lint.RuleFailure => - new Lint.RuleFailure( - sourceFile, - node.getStart(), - node.getStart() + node.getWidth(), - generateFailureString(), - this.ruleName, - ), - ) - return failures + context.report({ + node, + messageId: 'actionHygiene', + data: { + actionType: value, + }, + })
For example, the following config and report will translate to "Action type 'LOAD_CUSTOMERS' does not follow the good action hygiene practice, use "[Source] Event" to define action types".
{ messages: { actionHygiene: `Action type '{{ actionType }}' does not follow the good action hygiene practice, use "[Source] Event" to define action types`, }, context.report({ node, messageId: 'actionHygiene' data: { actionType: node.value } }) }:
context.report({ node, messageId: 'actionHygiene', data: { actionType: value, }, fix: fixer => fixer.replaceTextRange(node.range, '[Source] Event'), })
The options to modify the node are:
insertTextAfter(nodeOrToken: TSESTree.Node | TSESTree.Token, text: string): RuleFix;
insertTextAfterRange(range: AST.Range, text: string): RuleFix;
insertTextBefore(nodeOrToken: TSESTree.Node | TSESTree.Token, text: string): RuleFix;
insertTextBeforeRange(range: AST.Range, text: string): RuleFix;
remove(nodeOrToken: TSESTree.Node | TSESTree.Token): RuleFix;
removeRange(range: AST.Range): RuleFix;
replaceText(nodeOrToken: TSESTree.Node | TSESTree.Token, text: string): RuleFix;
replaceTextRange(range: AST.Range, text: string): RuleFix;.
const action = createAction('customers load') ~~~~~~~~~~~~~~~~ [action-hygiene] [action-hygiene]: Action type does not follow the good action hygiene practice, use "[Source] Event" to define action types
With ESLint, the tests feel more comfortable with other tests that you've written.
You will have to create a test runner, configure it, and then you will be able to run the rule with valid and invalid cases.
import { resolve } from 'path' import { TSESLint } from '@typescript-eslint/experimental-utils' const ruleTester = new TSESLint.RuleTester({ parser: resolve('./node_modules/@typescript-eslint/parser'), parserOptions: { ecmaVersion: 2018, sourceType: 'module', }, }) ruleTester.run(ruleName, rule, { valid: [ `export const loadCustomer = createAction('[Customer Page] Load Customer')`, ], invalid: [ { code: `export const loadCustomer = createAction('LOAD_CUSTOMER')`, errors: [ // each property here is optional // you can decide the level of your test { messageId, line: 1, column: 42, endLine: 1, endColumn: 57, data: { actionType: 'LOAD_CUSTOMER', }, }, ], }, ], })
You can also create your own test runner, like angular-eslint did here.
- ts.isCallExpression(node) + import { TSESTree } from '@typescript-eslint/experimental-utils' + + export function isCallExpression( + node: TSESTree.Node, + ): node is TSESTree.CallExpression { + return node.type === 'CallExpression' + } + + isCallExpression(node).
export default ESLintUtils.RuleCreator(ruleName => `{ruleName}.md`){ ... })
Follow me on Twitter at @tim_deschryver | Subscribe to the Newsletter | Originally published on timdeschryver.dev.
Top comments (0)
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/timdeschryver/migrating-a-tslint-rule-to-eslint-with-typescript-eslint-31fg
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Android Mobile Browser - Omega-XXXX.local
- Jon Gordon
I have a web server set up on my Omega2+ that acts as a user interface which is reachable by the omega-xxxx.local domain on Windows(with bonjour installed)/Mac/iOS when connected to the Omega2+'s WIFI Access Point.
As some of you may know, this does not seem to work for Android mobile browsers because, I assume from what I've read, Android doesn't use mDNS. It is still reachable by IP address, just not through the .local namespace.
Has anyone found a work around for this so that Omega can be reached by domain name instead of IP on android mobile devices through its browser?
Will the AP IP always be 192.168.3.1?
Could some kind of Omega2+ DNS trickery be used to force the android browser to resolve the (m)DNS?
- Andy Burgess
@Jon-Gordon
Hi Jon, Not an expert but the AP IP will always be 192.168.3.1 unless you change it.
Cheers Andy
- Douglas Kryder.
|
http://community.onion.io/topic/3430/android-mobile-browser-omega-xxxx-local/?page=1
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Templates (C++)
The latest version of this topic can be found at Templates (C++).
Templates are the basis for generic programming in C++. As a strongly-typed language, C++ requires all variables to have a specific type, either explicitly declared by the programmer or deduced by the compiler. However, many data structures and algorithms look the same no matter what type they are operating on.. For example, you can define a function template like this:
template <typename T> T minimum(const T& lhs, const T& rhs) { return lhs < rhs ? lhs : rhs; }
The above code describes a template for a generic function with a single type parameter
T, whose return value and call parameters (lhs and rhs) are all of this type. You can name a type parameter anything you like, but by convention single upper case letters are most commonly used.
T is a template parameter; the
typename keyword says that this parameter is a placeholder for a type. When the function is called, the compiler will replace every instance of
T with the concrete type argument that is either specified by the user or deduced by the compiler. The process in which the compiler generates a class or function from a template is referred to as template instantiation;
minimum<int> is an instantiation of the template
minimum<T>.
Elsewhere, a user can declare an instance of the template that is specialized for int. Assume that get_a() and get_b() are functions that return an int:
int a = get_a(); int b = get_b(); int i = minimum<int>(a, b);
However, because this is a function template and the compiler can deduce the type of
T from the arguments
a and
b, you can call it just like an ordinary function:
int i = minimum(a, b);
When the compiler encounters that last statement, it generates a new function in which every occurrence of T in the template is replaced with
int:
int minimum(const int& lhs, const int& rhs) { return lhs < rhs ? lhs : rhs; }
The rules for how the compiler performs type deduction in function templates are based on the rules for ordinary functions. For more information, see Overload Resolution of Function Template Calls.
Type parameters
In the
minimum template above, note that the type parameter
T is not qualified in any way until it is used in the function call parameters, where the const and reference qualifiers are added.
There is no practical limit to the number of type parameters. Separate multiple parameters by commas:
template <typename T, typename U, typename V> class Foo{};
The keyword
class is equivalent to
typename in this context. You can express the previous example as:
template <class T, class U, class V> class Foo{};
You can use the ellipses operator (...) to define a template that takes an arbitrary number of zero or more type parameters:
template<typename... Arguments> class vtclass; vtclass< > vtinstance1; vtclass<int> vtinstance2; vtclass<float, bool> vtinstance3;
Any built-in or user-defined type can be used as a type argument. For example, you can use std::vector in the Standard Library to store ints, doubles, strings, MyClass, const MyClass*, MyClass&. The primary restriction when using templates is that a type argument must support any operations that are applied to the type parameters. For example, if we call minimum using MyClass as in this example:
class MyClass { public: int num; std::wstring description; }; int main() { MyClass mc1 {1, L"hello"}; MyClass mc2 {2, L"goodbye"}; auto result = minimum(mc1, mc2); // Error! C2678 }
A compiler error will be generated because MyClass does not provide an overload for the < operator.
There is no inherent requirement that the type arguments for any particular template all belong to the same object hierarchy, although you can define a template that enforces such a restriction. You can combine object-oriented techniques with templates; for example, you can store a Derived* in a vector<Base*>. Note that the arguments must be pointers
vector<MyClass*> vec; MyDerived d(3, L"back again", time(0)); vec.push_back(&d); // or more realistically: vector<shared_ptr<MyClass>> vec2; vec2.push_back(make_shared<MyDerived>());
The basic requirements that vector and other standard library containers impose on elements of
T is that
T be copy-assignable and copy-constructible.
Non-type parameters
Unlike generic types in other languages such as C# and Java, C++ templates support non-type parameters, also called value parameters. For example, you can provide a constant integral value to specify the length of an array, as with this example that is similar to the std::array class in the Standard Library:
template<typename T, size_t L> class MyArray { T arr[L]; public: MyArray() { ... } };
Note the syntax in the template declaration. The size_t value is passed in as a template argument at compile time and must be constant or a constexpr expression. You use it like this:
MyArray<MyClass*, 10> arr;
Other kinds of values including pointers and references can be passed in as non-type parameters. For example, you can pass in a pointer to a function or function object to customize some operation inside the template code.
Templates as template parameters
A template can be a template parameter. In this example, MyClass2 has two template parameters: a typename parameter
T and a template parameter
Arr:
template<typename T, template<typename U, int I> class Arr> class MyClass2 { T t; //OK Arr<T, 10> a; U u; //Error. U not in scope };
Because the
Arr parameter itself has no body, its parameter names are not needed. In fact, it is an error to refer to
Arr's typename or class parameter names from within the body of
MyClass2. For this reason,
Arr's type parameter names can be omitted, as shown in this example:
template<typename T, template<typename, int> class Arr> class MyClass2 { T t; //OK Arr<T, 10> a; };
Default template arguments
Class and function templates can have default arguments. When a template has a default argument you can leave it unspecified when you use it. For example, the std::vector template has a default argument for the allocator:
template <class T, class Allocator = allocator<T>> class vector;
In most cases the default std::allocator class is acceptable, so you use a vector like this:
vector<int> myInts;
But if necessary you can specify a custom allocator like this:
vector<int, MyAllocator> ints;
For multiple template arguments, all arguments after the first default argument must have default arguments.
When using a template whose parameters are all defaulted, use empty angle brackets:
template<typename A = int, typename B = double> class Bar { //... }; ... int main() { Bar<> bar; // use all default type arguments }
Template specialization
In some cases, it isn’t possible or desirable for a template to define exactly the same code for any type. For example, you might wish to define a code path to be executed only if the type argument is a pointer, or a std::wstring, or a type derived from a particular base class. In such cases you can define a specialization of the template for that particular type. When a user instantiates the template with that type, the compiler uses the specialization to generate the class, and for all other types, the compiler chooses the more general template. Specializations in which all parameters are specialized are complete specializations. If only some of the parameters are specialized, it is called a partial specialization.
template <typename K, typename V> class MyMap{/*...*/}; // partial specialization for string keys template<typename V> class MyMap<string, V> {/*...*/}; ... MyMap<int, MyClass> classes; // uses original template MyMap<string, MyClass> classes2; // uses the partial specialization
A template can have any number of specializations as long as each specialized type parameter is unique. Only class templates may be partially specialized. All complete and partial specializations of a template must be declared in the same namespace as the original template.
For more information, see Template Specialization.
|
https://docs.microsoft.com/en-us/previous-versions/y097fkab(v%3Dvs.140)
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Uploading code without restarting the board
Is there a way to update code on the board without restarting it?
the VS code plugin is uploading the files and resetting it.
This is a frustrating process to test code.
I know I can run 1 file, but i have several code files and not only 1
I wish I could just copy files to the board and test without it getting restarted
I also tried to upload changes via FTP, but the latest code copied is not updated (cached files?)
any solution?
- Ralph Global Moderator last edited by
Hi guys, I didn't read everything below in detail, but I just wanted to add that the Pymakr package for Atom has an option for not rebooting after upload :) No need to switch to FTP if the reboot part is the issue.
Actually, VSCode is supposed to have the same option, but when testing it just now it shows a bug (it doesn't show any output in the terminal after upload). I'm going to fix that now and add it in the next release. After that, you can add "reboot_after_upload" key to the global or project config file (
"reboot_after_upload": false) and it will not reboot the board after uploading or downloading.
- oved.yavine last edited by oved.yavine
@robert-hh thanks for the reply .. i resolved my issue ... your help is really appreciated
- robert-hh Global Moderator last edited by
@oved-yavine I would not consider re-boot as a painful process. It's just pushing Ctrl-D. If uploading is what hurts, you could also keep all files on the device and edit them in-place, for instance with the use of Filezilla. Once you see the files of your device in the device pane of Filezilla, you can edit them there with your favorite editor. Filezilla will take care of downloading and uploading the file. That#s what I do.
@robert-hh I am still having issues with testing my modules without "upload all" files and restart my board. my code changes are not reflected even when I execute reload as mentioned.
I cannot figure out how to do this.
making a code change, upload all and test again is a painful process..
Any help would be appreciated
@robert-hh I was partially able to accomplish what you explained
it seems like that if i am testing module A, but calling a function on Module B that loads Module A your method to relaod module A is not working
I tried reloading module A and B, but with no luck
After struggling i found that I need to reload all tested modules in the hierarchy starting from the bottom
in my case reloading module A and then reloading module B
I enhanced your reload function to be:
def reload(mods): import sys for mod in mods: mod_name = mod.__name__ del sys.modules[mod_name] print (__import__(mod_name))```
- robert-hh Global Moderator last edited by
@oved-yavine Yes.
@robert-hh so what you are suggesting is to upload via FTP and call the reload function?
- robert-hh Global Moderator last edited by robert-hh
@oved-yavine You have to remove the names of the to-be-tested module from the list of symbols. The following short function may do so:
def reload(mod): import sys mod_name = mod.__name__ del sys.modules[mod_name] return __import__(mod_name)
If the module you are testing is called mymodule.py, and you had it imported with
import mymodule, then calling
reload(mymodule)will re-import it and start it again. I have that function included in my main.py, so I can call it from REPL when needed.
|
https://forum.pycom.io/topic/4011/uploading-code-without-restarting-the-board
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
when running oncotator
Hi,
I installed Oncotator v1.8.0.0 on RHEL.
I am getting following error when running the command oncotator -h
Traceback (most recent call last):
File "/usr/bin/oncotator", line 9, in
load_entry_point('Oncotator==v1.8.0.0', 'console_scripts', 'oncotator')()
File "/usr/lib/python2.6/site-packages/distribute-0.6.15-py2.6.egg/pkg_resources.py", line 305, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python2.6/site-packages/distribute-0.6.15-py2.6.egg/pkg_resources.py", line 2244, in load_entry_point
return ep.load()
File "/usr/lib/python2.6/site-packages/distribute-0.6.15-py2.6.egg/pkg_resources.py", line 1954, in load
entry = import(self.module_name, globals(),globals(), ['name'])
File "/usr/lib/python2.6/site-packages/Oncotator-v1.8.0.0-py2.6.egg/oncotator/Oncotator.py", line 52, in
from oncotator.utils.RunSpecificationFactory import RunSpecificationFactory
File "/usr/lib/python2.6/site-packages/Oncotator-v1.8.0.0-py2.6.egg/oncotator/utils/RunSpecificationFactory.py", line 2, in
from oncotator.DatasourceFactory import DatasourceFactory
File "/usr/lib/python2.6/site-packages/Oncotator-v1.8.0.0-py2.6.egg/oncotator/DatasourceFactory.py", line 53, in
from oncotator.datasources.EnsemblTranscriptDatasource import EnsemblTranscriptDatasource
File "/usr/lib/python2.6/site-packages/Oncotator-v1.8.0.0-py2.6.egg/oncotator/datasources/EnsemblTranscriptDatasource.py", line 75
POPULATED_ANNOTATION_NAMES = {'transcript_exon', 'variant_type', 'variant_classification', 'other_transcripts',
^
SyntaxError: invalid syntax
What am i doing wrong..5 (Santiago)
Release: 6.5
Codename: Santiago
Linux 2.6.32-431.17.1.el6.x86_64 x86_64
Answers
@cgoswami
Hi,
You need to use use Python 2.7 instead of 2.6.
-Sheila
Hi Sheila,
I upgraded python to 2.7. Now it gives me following error
/usr/local/bin/python2.7: can't find 'main' module in '/home/cpg003/tools/oncotator/oncotator-release-1.8/oncotator
Also, what is the difference between pncotator and oncotator-annotate.py
@cgoswami
Hi,
I just moved this discussion to the Oncotator category. I will ask the Oncotator expert @LeeTL1220 to get back to you.
-Sheila
@cgoswami You can ignore oncotator-annotate.py. As for your error, did you create a virtual environment? It seems as if you did not install oncotator in that python instance....
WE have RHEL here. RHEL doesnot have python 2.7 available for upgrade. I downloaded Python 2.7 made an altinstall and created the environment again. But it still used Python2.6 to compile oncotator.
thank you nice post....everyone surely get benifit from this article.
really very fruitful
|
https://gatkforums.broadinstitute.org/gatk/discussion/6943/error-when-running-oncotator
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
sudo apt-get dist-upgrade (This will update all your Raspbian packages and may take up to an hour).
RasPiO® GPIO Reference Aids
Our sister site RasPiO has three really useful reference products for Raspberry Pi GPIO work...
Interesting… Is there any way to set it up so that you’re waiting to see if either button A _or_ button B was pressed?
Indeed there is Mike. Tune in for another exciting episode where that will be covered soon ;)
Hi Alex,
Greetings.
I was wondering if anyone could point me to where i can find this episode?
Thanks,
RP
I believe you’re looking for.
Maybe I’m missing something here but that code looks to me like it is using C to poll for pin changes (using a separate thread for each pin) and then it makes a callback to some java thing when it detects a change. i don’t see anything at all to do with receiving hardware interrupts on a level change? (Which is what I’d really like to do. In C.)
You may find one of these links useful?”
def my_callback2(channel):
device.emit(uinput.KEY_P, 1) # PRESS P-KEY
print “falling edge detected on 23”!
ARDUINO INTERUPT TO RASPBERRY PI
hey Friends :)
just thought that could be interesting for you.
I was searching for a way to get an interrupt to the raspi when a touch sensitive button on my arduino was touched.
the arduino handles some leds when the button is touched and the raspi should display it.
further you can control the leds on the arduino from the raspi gui. the raspi sends the data how to set the leds with i2c.
because the i2c only works from master to slave i have to send the raspi an interupt to fetch the new data from the arduino by sending a request.
this is not the full code – just an example:
here is the raspi python code:
here is the arduino code – it just switches states all 1 sec – the interrupt will appear all 2 sec:
I downloaded the code, unzipped it and started the code. The program starts as expected (I see the message “make sure you have…”) but the program doesn’t react to pressing the button, I checked with a voltmeter whether the voltage on bmc-pin 23 is dropping from 3.3V to zero when I press the button and it does. I’m using GPIO-version 0.5.4 Can anyone here help me further?
thanks
jean
Jean. I’d put money on you having wired your button to P1 header pin 23 instead of GPIO23, which is pin 16
Dear Alex,
I did use gpio23 just exactly as is shown in the schematic and I am also able to read the state of a switch connected to that pin but it doesn’t work in interrupt mode. Any further ideas about what could cause this?
thanks
jean
Maybe you’ve not got the right version of RPi.GPIO installed? (or you might have conflicting versions installed)
May I ask why exactly did we choose pin 23 ?
We could have used any GPIO port, but GPIO 23 (not pin 23) was chosen because it has a GND next to it, which makes it convenient for wiring.
(Pin 23 is GPIO 11/SCLK)
Thanks alot for your quick response and this tutorial has really helped me.
Not yet even a bad programmer in Python yet, as most of my work has been in the B&D languages (well, people are still using Fortran and Cobol,) and I have no intuition about Python’s functioning yet — nor the hardware side of the Pi.
I would appreciate a quick “off the cuff” response as to feasibility of an interrupt being inherited by a spawned child program.
I am writing software for a research project that would video individuals with a sleep disorder. This would use the existing Linux version of the GPL software “motion” and “raspivid.” There would of course be times when the patient might want to have some privacy for a short time, so it would be highly desirable to be able to kill or suspend (preferred) all monitoring for a few minutes, and then again resume via a second interupt.
I would be grateful to know if this is a reasonable approach, or would not possibly work on the Pi.
Thank you for your consideration.
Sounds like you’re confusing your “interrupt”s ;-)
It sounds like you’re talking more about for inter-process-communication than about triggering functions asynchronously from within Python.
Although of course there’s no reason you can’t combine the two:
Hi! I have a little problem here I hope you can help me.
I need to interrupt the main thread of my program but I don’t want to run a new thread. I just want to when I press a button cause an interruption of the main thread.
I have read this post but neither of the options that mention here works for me.
I’ve also read something about using wiringpi2 library, precisely the function wiringPiISR. But every time I execute my code, no matter if the button is pressed or not, the function callback executes anyway.
Thank you very much
I’m not familiar with WiringPiISR
But the interrupts described on this page use RPi.GPIO
I believe it is possible to use both systems in the same Python script as long as you don’t try to use the same ports for each.
It does look, in your code, as if you have set up WP pin 2 as an input, then tried to write a value to it as if it was an output. Can you even do that?
Yes, actually I was trying to fix that error that I metioned before. The original code was:
wiringpi2.GPIO(wiringpi2.GPIO.WPI_MODE_PINS)
wiringpi2.pullUpDnControl(2,1)
wiringpi2.wiringPiISR(2, 0, my_callback())
Anyway, using RPi.GPIO there’s no way to interrupt the main thread? I mean interrupts but not running a new thread.
Thank you very much for your answer, it’s a very good page.
Depends what you mean by “interrupt” ? I guess you could do something like:
(note that this is off the top-of-my-head, and totally untested)
Obviously this will only be “interruptible” at the start of each loop, and not within the call to sleep().
Is there any way to use Nrf24l01+ with interrupt on Rpi? I was using a C script to read all data that arrives by the module. But the script consumed 99.9% CPU.
But now I’m using a arduino connected at serial and all data arriving from arduino + nrf is written by serial and the ruby script read it. But again, the ruby script consumes almost 99.9% CPU.
What is the better way for read the informations from nrf? Thanks.
Great tutorial and great code. I used this tutorial to connect my front doorbell to a Raspberry Pi. I altered the code in the example so that if someone pushes the doorbell an API (from Pushover) is activated which send a pushnotification to my iOS device. I added a loop so when the doorbell is activated once, the GPIO pin is pulled up again ready for the next event. Initially the code works great, but I ran in some problems:
1.) For some reason events are occurring on 20-30 minutes interval without pushing any button. Is this a false event?
2.) The program closes when I quit the SSH session with the Raspberry Pi.
Any idea to solve this?
1) are you using hardware pullup resistor? If it’s random radio interference that should cure it. Also ensure you have the latest RPi.GPIO as there were some bugs in the interrupts part of some earlier releases.
2) unless you run it in the background, or via soeothing like ‘screen’ closing an ssh session will terminate whatever program you started with that session
A better option for 2) would be using
Thanks, I definitely give it a try once I solved 1.)
1.) I added a pull-up resistor to the circuit consisting of a 10k resistor connected to the 3,3v pin according:. At the moment I’m testing the new set-up.
2.) Great tip
[…] the script there is a link to. It is a great resource for GPIO for the pi. I used it to research the glob error I was getting […]
Great post! Just curious how you would use interrupts with a slide switch?
You mean like a potentiometer slider?
I imagine somewhere in the middle it would flip from 0 to 1, but no idea at what threshold.
I wouldn’t really have thought it was the right tool for the job, to be honest.
No blatant plug for the ADC features of RaspIO Duino, Alex? ;-)
OTOH if Mark meant an on/off slide switch like then you’d use it exactly like a push button in Alex’s example above (you only need to connect two of the switch’s three pins, but remember to use the Pi’s internal pull-ups).
You know what Andrew? I must be tired after the Pi birthday weekend because I didn’t even think of it :)
Yes, just like that on/off slide switch. I’ve set it up on pin 22 and ground. Is there a way to intelligently determine if the switch is on or off? With my code below, every time the event fires, it prints “Switch was switched”. Of course I could keep track in a variable but just curious if interrupts and events could do this some how?
Also, how do you handle bounce with a slide switch? The old way of doing this in a loop and checking every xx milliseconds was able to catch the on/off position of the switch if you move it back and forth as fast as you can. Doing it with the above code often misses 1 or 2 on/off slides unless you slow down how often you are switching it. Any suggestions? Here is the old way I did this:
This is exactly what I need Thanks! Although I’m having some trouble executing a response to the button press. As soon as I press the button I want the Raspberry Pi to run a command as if I had typed it in the prompt. From my brief research I understand that subprocesses are the way to do this, but anyway I enter the subprocess in the response portion of the code in your tutorial, I get this error:
subprocess.call(‘ls’,shell=True)
^
SyntaxError: invalid syntax
When I run that same subprocess in it’s own python code it runs just fine, but it doesn’t like it in this program you’ve written :/ Any idea why? I would greatly appreciate any help on this Thanks!
Did you remember to include the subprocess module with a line like:
import subprocess
?
And don’t forget that indentation (number of spaces) is important when writing Python code. Also, “just to be on the safe side” you’re probably better off giving the full path to the command, i.e. use ‘/bin/ls’ instead of just ‘ls’.
This is a very helpful series. I just got my RPi 2 today and everything is going nicely, except…
On this setup, the button works just fine, but Ctl-C gives me a runtime error: “Error waiting for edge; During handling of the above exception, another exception occurred, KeyboardInterrupt.” I’ve tried the obvious, like adding a “finally” clause, and an “except RuntimeError” clause. My version of Rpi.GPIO is 0.5.11.
Hi, Nice Tutorial. But when I tried to implement it on my RPi rev 2 with Jessie OS. I don’t get any interrupt when i connected one of my GPIO to the GND and detecting the falling edge. I am working on the following code.
Can you tell me where did I do wrong? I have also updated my OS recently, but still it is not responding. What should I do?
You’ve omitted to set the GPIO mode (line 5 in my code)
GPIO.setmode(GPIO.BCM)or
GPIO.setmode(GPIO.BOARD)since it looks like you’re using pin numbers not GPIO numbers
I copied the code in this article and had a wire of 25 cm connected to the pin of GPIO23 not connected to anything. After that I immediately get the falling edge message. When I remove the wire the timeout kicks in. It looks like the pull up, although setup, is not working properly.
Erm, huh? If you’ve got a wire connected to a GPIO pin with the other end not connected to anything, what are you expecting it to do??
Because of the pull up I expect no interrupt. I found information mentioning this effect. Without the pull up the input is floating making it susceptible to these stray signals, but the pull up prevents this. Apparently not.
Ahhh, the Pi only has weak internal pull-ups, so if you’ve got an open-ended (relatively long) wire then it indeed may be acting as an antenna as Jim mentioned, and getting enough interference to override the weak pullup and trigger the interrupt. As you’ve already found, the fix is to add an external resistor to give a “stronger pull”.
Perhaps the wire is acting like an antenna and stray electrical signals are triggering the interrupt. What happens if you connect the wire to ground through a high-value resistor.
I used a 47k ohm resistor to ground and I did not get a falling edge till I made shortcut over the resistor.
The example mentions a switch on a breadboard. Does such a switch have a high value resistor over its contacts or is a complete open switch? I wonder what is in that case the difference with my open wire?
On this occasion I used no resistor.
Hi Great Tutorial, I have used it to set-up a four button system to control my LCD screen on a remote weather station (to save power as solar and switch screen on/off as needed), I would like to remove or amend the Raw Input so that I could use it to auto run at the Pi startup, have tried removing the Raw Input and it runs but stops after a time, I presume by a timeout.
is there something I can use to overcome using Raw Input as the system tests well except for the need of the Raw Input stops the use of auto startup.
any ideas please.
What do you mean by Raw Input. I have a Python script which is started with a @reboot in a crontab entry. It contains an endless loop and a routine executed on a falling edge on one of the pins. It is already active for several weeks.
Another option is to define the program as a one shot service in systemd. You may use the example of the fake-hwclock package to define the .service file.
Perhaps you’re looking for something like this?
I finally found the problem. I use openSUSE Tumbleweed for Raspberry Pi, which does not have a /sys/class/gpio/gpiochip0 device, but a /sys/class/gpio/gpiochip298 device. I adapted the RPi.GPIO package first in the wrong way; the package assumes the zero in the device name. Some functions were OK, but not the setting of the pull up. My latest adaption works OK.
I love you man. I like it that you keep it clear and simple
Something interesting happened.
I was reading this article at my coffee table, so I just connected tow wires to GPIO pin 23 and a ground pin. I just connected these two wires to emulate the switch. Everything worked fine until I did not disconnect the wires. The wires were connected and emulating a pressed switch when I started the program. After pressing the program immediately reported a falling edge and terminated. But there was no falling edge, just LOW condition. The program I used was copied from your text.
So what is wrong here?
Thank you for bringing me to the new idea of executing codes on RPI….
I need to trigger a RPI camera and execute a code on RPI when the door is opened…. Does your post work for me?
|
https://raspi.tv/2013/how-to-use-interrupts-with-python-on-the-raspberry-pi-and-rpi-gpio?replytocom=53824
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Investment Banking: How to Calculate Internal Rate of Return
Rates of return are central concepts in the investment banking world. Investors often compare their performance to that of other investors, other investments and stock indexes by comparing rates of return. In the private equity world, rates of return are calculated by computing the internal rate of return (IRR) of an investment.
The IRR is simply defined as the discount rate (or rate of return) that equates the present value of the projected costs (cash outflows) of an investment with the present value of the projected cash inflows from the investment. Another way of looking at IRR is that it’s the interest rate that equates the present value of all cash flows to zero.
To compute the IRR of an investment, you simply solve for the term IRR in the following equation:
For instance, if an investment required an initial cash outlay of $100 million the projected cash flows were $30 million in each of the next three years and $150 million four years from today, the equation for calculating the IRR is:
In this case, the IRR for this investment is 33.1 percent. Not a bad rate of return. (By the way, IRR can be computed using any standard financial calculator or via an Excel spreadsheet.)
Expected returns for private equity deals are substantially higher than expected returns in the public equity markets. This reflects the fact that private equity deals are generally riskier than public equity investments on several fronts. They’re generally much more highly leveraged than public companies, and they’re less liquid than investment in public equities. Thus, to induce investors to commit funds to private equity, they must expect higher returns.
The exact percentage returns expected by private equity investors varies widely depending upon market conditions. For instance, when long-term U.S. government bonds are yielding 6 percent to 8 percent and publicly traded stock returns are in the 10 percent to 12 percent range, it would not be uncommon for LBO investors to expect returns in excess of 20 percent, perhaps in the mid 20 percent range.
But if expected returns on other asset classes in the market are much lower, returns expected by private equity investors will generally be commensurately lower. The expected returns in the private equity markets are largely influenced by a combination of current market conditions and investors’ appetites for risk.
|
https://www.dummies.com/personal-finance/investing/investment-banking/investment-banking-how-to-calculate-internal-rate-of-return/
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Siri Shortcuts Tutorial in iOS 12
In this iOS 12 tutorial, you’ll learn how to build Siri Shortcuts for your app to surface in Spotlight as well as command Siri with your voice.
Version
- Swift 4.2, iOS 12, Xcode 10
If you’ve had an iPhone for any length of time, you’ve likely interacted with Siri. When Siri was first announced in 2011, its ability to understand context and meaning, regardless of the specific combination of words used, was groundbreaking.
Unfortunately, Siri integration was limited to Apple’s own apps until the release of SiriKit in 2016. Even then, the types of things you could do with Siri were limited to a particular set of domains.
With the release of Siri Shortcuts in iOS 12, this is no longer the case. Now, you can create custom intents to represent any domain, and you can expose your app’s services directly to Siri.
In this tutorial, you’ll learn how to use these new shortcuts APIs to integrate Siri into a writing app.
Getting Started
To get started, use the Download Materials button at the top or bottom of this tutorial to download the starter project.
Once downloaded, double-click TheBurgeoningWriter.xcodeproj to open the project in Xcode.
Set the bundle ID to something unique to you (Apple recommends using a reverse DNS name such as com.razeware.TheBurgeoningWriter). Then, run the app, and you’ll see a home screen that shows all of the written articles. From here, you can add new articles and publish the drafts you’ve previously saved.
The big idea here is to write an article, sit on it for a little bit, and then publish it later — provided you’re still happy with it.
Ready to begin? Great!
Adding Shortcuts to an App
The first thing to consider is which features of your app are appropriate for turning into shortcuts.
Ideally, you should create shortcuts for actions your user can perform; preferably, something they’ll likely do repeatedly. Once you’ve decided to set up a shortcut, there are two ways to create it:
- NSUserActivity: User activities are part of an existing API that allows you to expose certain things a user can do for app hand-off and Spotlight searches. The thing to remember here is that this option is only useful when you want the user to go from Siri into your app to complete a task.
- Custom Intents: Creating a custom intent is the true power of shortcuts. With an intent, you can communicate with your user via Siri without ever having to open your app.
Making a Shortcut for Writing New Articles
Your first shortcut is one that lets a user go straight to the new article screen. This is the perfect candidate for creating a shortcut based on an
NSUserActivity object, because it’ll take the user from Siri into your app.
Your goal is to donate one of these activities to the system every time your user performs that action. You’ll do so by adding a new method that allows you to generate these activity objects.
Open Article.swift and, at the top of the file under the imports, add the following constant string definition:
public let kNewArticleActivityType = "com.razeware.NewArticle"
This is the identifier you’ll use to determine if you’re dealing with a “new article” shortcut. A good rule of thumb is to use the reverse DNS convention when choosing an identifier for your shortcut.
Next, below the properties, add the following method definition:
public static func newArticleShortcut(thumbnail: UIImage?) -> NSUserActivity { let activity = NSUserActivity(activityType: kNewArticleActivityType) activity.persistentIdentifier = NSUserActivityPersistentIdentifier(kNewArticleActivityType) return activity }
Here, you’re creating an activity object with the correct identifier and returning it. The
persistentIdentifier is what connects all of these shortcuts as one activity.
For your activity to be useful, you have to do some configuration.
Add the following two lines before the
return:
activity.isEligibleForSearch = true activity.isEligibleForPrediction = true
First, you set
isEligibleForSearch to
true. This allows users to search for this feature in Spotlight. You then set
isEligibleForPrediction to
true so prediction works. Setting this to
true allows Siri to look at the activity and suggest it to your users in the future. It’s also what allows the activity to be turned into a Shortcut later.
Next, you’ll set the properties that affect how your Shortcut looks to users.
Define a local attributes property. Add it below the previously pasted lines:
let attributes = CSSearchableItemAttributeSet(itemContentType: kUTTypeItem as String)
Set your attributes by adding the following lines:
activity.title = "Write a new article" attributes.contentDescription = "Get those creative juices flowing!" attributes.thumbnailData = thumbnail?.jpegData(compressionQuality: 1.0)
This sets the title, subtitle and thumbnail image you’ll see on the suggestion notification.
Stepping back for a moment, it’s important to remember that Siri exposes this feature in two separate ways:
- First, Siri learns to predict what your users might want to do in the form of suggestions that pop up in Notification Center and Spotlight Search.
- Second, your users can turn these activities into voice-based shortcuts.
For the last bit of configuration, add the suggested phrase users should consider when making a shortcut for this activity:
activity.suggestedInvocationPhrase = "Time to write!"
The chosen phrase should be something that’s short and easy to remember. It should also not include the phrase “Hey, Siri” because the user might have already triggered Siri’s interface that way.
Finally, assign the attributes object to the activity object:
activity.contentAttributeSet = attributes
Now that you can grab user activity objects from an
Article, it’s time to use them.
Using the Activity Object
Open ArticleFeedViewController.swift and locate
newArticleWasTapped().
Below the comment, add the following lines:
//1 let activity = Article.newArticleShortcut(thumbnail: UIImage(named: "notePad")) vc.userActivity = activity //2 activity.becomeCurrent()
- First, you create an activity object. Then, you attach it to the view controller that’ll be on screen.
- Next, you call
becomeCurrent()to officially become the “current” activity. This is the method that registers your activity with the system.
Congratulations, you’re now successfully donating this activity to Siri.
Build and run. Then, go to the new article screen and back to the home screen a few times.
You won’t see anything too interesting in the app, but each time you perform that action, you’re donating an activity to the system.
To verify, pull down on the home screen to go to search. Then, type “write”, and you should see the “Write a new article” action come up.
Continuing a User Activity
Tap on the “Write a new article” result in your search. You’ll be taken to your app’s home screen.
Your feature may be exposed to the system, but your app isn’t doing anything when the system tells it the user would like to use the feature.
To react to this request, open AppDelegate.swift, and at the bottom of the class, add the following method definition:
func application( _ application: UIApplication, continue userActivity: NSUserActivity, restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void ) -> Bool { return true }
Inside the method and before the
return statement, create the New Article view controller and push it onto the nav stack:
let vc = NewArticleViewController() nav?.pushViewController(vc, animated: false)
Build and run again. When you search for this feature and tap on it, your app takes you directly to the New Article screen.
Developer Settings for Working With Siri
All you’ve done so far is accessed your feature from Spotlight. This isn’t anything new, and it would work even if Siri weren’t involved since you made your activity eligible for search.
To prove that Siri can start suggesting this action, you’ll need to go to the Settings app and enable a few options.
Open Settings and find the Developer option. Scroll towards the bottom and you’ll see a section named SHORTCUTS TESTING.
Turning on the Display Recent Shortcuts option means that recently donated shortcuts will show up in Spotlight Search instead of Siri’s current predictions.
Similarly, Display Donations on Lock Screen always shows your recent donations as a notification on your lock screen.
Turn on both of the options so you can always see what shortcuts your app has donated most recently.
Now that you know how to see what Siri’s suggestions look like, it’s time to turn these activities into full-blown shortcuts!
Turning User Activities Into Shortcuts
When a user wants to turn one of these activities into a shortcut, they can do so from the Settings app. As the developer, there’s nothing more you need to do.
To test it out: On your device, open Settings > Siri & Search.
The first section shows a list of shortcuts donated by the different apps on your phone. You’ll see the “Write a new article” shortcut in this list; if you don’t, tap More Shortcuts to see more.
To add a shortcut for this action, tap the action, and you’ll be taken to the shortcut creation screen.
Here, you can see that suggested invocation phrase you added earlier.
Tap the red circle at the bottom. When prompted, say to Siri, “Time to write”. After Siri figures out what you said, tap Done to finalize your shortcut.
Now, your shortcut is listed along with any other shortcuts you’ve created.
Adding Shortcuts To Siri From Your App
This is all well and good, but can you expect users to muck around in the Settings app to add a shortcut for your app? Nope! Not at all.
Fortunately, you can prompt your users to do this straight from your app!
Open NewArticleViewController.swift and you’ll see an empty definition for
addNewArticleShortcutWasTapped().
This is the method that gets called when the blue “Add Shortcut to Siri” button is tapped.
The IntentsUI framework provides a special view controller that you can initialize with a shortcut. You can then present this view controller to show the same UI you just saw in the Settings app.
Add the following two lines to initialize the shortcut:
let newArticleActivity = Article .newArticleShortcut(thumbnail: UIImage(named: "notePad.jpg")) let shortcut = INShortcut(userActivity: newArticleActivity)
Next, create the view controller, set the delegate and present the view:
let vc = INUIAddVoiceShortcutViewController(shortcut: shortcut) vc.delegate = self present(vc, animated: true, completion: nil)
At this point, Xcode complains that this class isn’t fit to be the delegate of that view controller. You’ll need to fix this.
Add the following extension at the bottom of the file:
extension NewArticleViewController: INUIAddVoiceShortcutViewControllerDelegate { }
Then, conform to the
INUIAddVoiceShortcutViewControllerDelegate protocol by adding the method for when the user successfully creates a shortcut:
func addVoiceShortcutViewController( _ controller: INUIAddVoiceShortcutViewController, didFinishWith voiceShortcut: INVoiceShortcut?, error: Error? ) { }
Also, add the method for when the user taps the Cancel button:
func addVoiceShortcutViewControllerDidCancel( _ controller: INUIAddVoiceShortcutViewController) { }
Next, you need to dismiss the Siri view controller when these methods are called.
Add the following line to both methods:
dismiss(animated: true, completion: nil)
Build and run to try it out! Go to the New Article view and tap Add Shortcut to Siri.
You’ll see a view controller that looks a lot like the view you see when you go to set up shortcuts in the Settings app. With this in place, your users won’t have excuses for not utilizing your app’s full potential! :]
Publishing Articles with Siri
Next, you want to add the ability to publish articles you’ve written directly from Siri. The big difference here is that instead of Siri opening your app, everything will happen in-line, directly from the Siri UI.
For this shortcut, you’ll create a custom Intent to define the back and forth between Siri, your users and your app.
Defining a Custom Writing Intent
In the Project navigator, locate the ArticleKit folder and click on it to highlight it.
Then, press Command + N to create a new file.
In the filter search box, type Intent and you’ll see SiriKit Intent Definition File.
Click Next, then name your file ArticleIntents.intentdefinition.
Now it’s time to create an intent for posting articles you’ve written.
Go to the bottom of the section that reads No Intents and click the plus button to add a new intent definition.
Name it PostArticle, and then configure your intent based on these settings (described after the picture):
This screen is for configuring the information that the publish intent needs to work.
The options in the Custom Intent section define what type of intent it is and can affect how Siri talks about the action. Telling Siri that this is a Post type action lets the system know that you’re sharing some bit of content somewhere:
- Category: Post
- Title: Post Article
- Description: Post the last article
- Default Image: Select one of the existing images in the project.
- Confirmation: Check this box since you want to ask the user to verify that they’re really ready to publish this article.
The Parameters section is where you define any dynamic properties used in the Title and Subtitle, which you’ll do now.
Define a parameter named article that’s a Custom data type and a publishDate that’s of type String.
Then, in the Shortcut Types section, click the plus button to add one type that includes both the article and publishDate arguments as its parameters.
Next, set up the Title and Subtitle for the shortcut.
Make the title Post “${article}” and the subtitle on ${publishDate}. If you don’t copy and paste, make sure to let Xcode autocomplete article and publishDate.
Finally, make sure Supports background execution is checked so you aren’t forced to leave the Siri UI.
Setting Up Siri’s Responses
Click on Response so you can define how Siri will respond to the user.
Once again, configure your responses like this:
Under Properties you can once again define the dynamic parts of what Siri will say. Add properties for title, publishDate and failureReason; make them all strings.
Then, under Response Templates, add this template for failure:
Sorry, but I couldn't post your article. ${failureReason}
And this template for success:
Your article "${title}" was successfully posted on ${publishDate}. Nice work!
Adding a Siri Extension
To pull off the trick of being able to stay in Siri’s UI without launching your app, you need to create an Intents Extension with code that can manage the interaction.
Click on the project file in the upper left-hand corner, then find the plus symbol that allows you to add a new target.
Now, find the Intents Extension target; you can search for it in the Filter search bar.
Then, name your intents extension WritingIntents, set Starting Point to None, and uncheck the Include UI Extension option.
Finally, click the Finish button to create your extension. Build and run to make sure things still work.
Nothing new to see, but now you’re ready to use your custom intents!
There are two quick things you’ll need to do before moving on.
First, make sure ArticleIntents.intentdefinition is visible to the extension.
Open the file, then look in the File inspector. Make sure its Target Membership includes the app, framework and extension. Also, make sure to change the code generation option to No Generated Classes for the app and extension targets since this code should live in the framework.
Next, your extension and main app need to share an app group. Since articles are saved to and loaded from disk, this is the only way both targets can share the same area on the file system.
Click on the project file in the Project navigator, make sure you have TheBurgeoningWriter selected and go to the Capabilities tab.
Switch the App Groups capability to ON, and name the group group.<your-bundle-id>.
Next, select the WritingIntents extension and do the same thing. This time, the group should exist so you can check simply the box.
Finally, open ArticleManager.swift and locate the declaration for
groupIdentifier. Change its value to match your newly defined app group name.
Donating Post Article Intents
Now that you’ve defined an intent for posting articles, it’s time to create and donate one at the appropriate time.
Once again, head to Article.swift so you can add a method for generating “post” intents for new articles.
Below your definition of
newArticleShortcut(thumbnail:), add the following method definition:
public func donatePublishIntent() { }
This method creates and donates the intent all at once since you don’t need to deal with adding intents to a view controller.
Now, create the intent object and assign an article and publish date to it:
let intent = PostArticleIntent() intent.article = INObject(identifier: self.title, display: self.title) intent.publishDate = formattedDate()
If you’re wondering where this class came from, Xcode generated it for you when you created the ArticleIntents.intentdefinition file.
An
INObject is a generic object you can use to add custom types to your intent. In this case, you’re just giving it an identifier and the display value of the article’s title.
Next, create an interaction from this intent:
let interaction = INInteraction(intent: intent, response: nil)
When using a custom intent, the thing that you’ll end up donating to the system is an interaction like this one.
Finally, do the donation by calling
donate(_:) on the interaction:
interaction.donate(completion: nil)
Here, you’re donating your interaction without worrying about the completion block. You can, of course, add error handling or whatever else you want to this completion block.
You must complete one last “secret handshake” step: You must tell iOS exactly what intents your app supports. To do this, click on the project file in the Project navigator. Select the WritingIntents target and click the Info tab. Option-click the disclosure triangle next to the NSExtension key to open the entire key. Hover over IntentsSupported to reveal the plus button and click it once. Set the value of the newly added item to PostArticleIntent.
That’s it! That’s all there is to donating an intent-based shortcut to the system.
Now that you’ve got the method defined, go to NewArticleViewController.swift and find
saveWasTapped(). Since you want the system to prompt users to post articles that they’ve saved for later, this is where you’ll make it happen.
Add this line to donate the intent below the comment in that method:
article.donatePublishIntent()
Now that you’re donating, build and run the app. Then, create and save a new article. After you’ve done so, go to Spotlight search, and you should see a new donation that looks something like this.
Handling Intents-Based Shortcuts
Like before, you now have to think about handling this shortcut when the user has used it.
This time, the extension you created will be responsible for handling things.
First, add a new Swift file in the WritingIntents folder named PostArticleIntentHandler.swift.
Replace
import Foundation with this:
import UIKit import ArticleKit class PostArticleIntentHandler: NSObject, PostArticleIntentHandling { func confirm(intent: PostArticleIntent, completion: @escaping (PostArticleIntentResponse) -> Void) { } func handle(intent: PostArticleIntent, completion: @escaping (PostArticleIntentResponse) -> Void) { } }
Here, you’re creating the class that handles responding to interactions involving your post article intent.
Conforming to the
PostArticleIntentHandling protocol means that you need to implement one method involving the confirmation step and one method for handling the intent after the user has confirmed.
Next, add the following code to
confirm(intent:completion:):
completion(PostArticleIntentResponse(code: PostArticleIntentResponseCode.ready, userActivity: nil))
This indicates that if the user taps confirm, then the extension is ready to take on the intent.
Next, you’ll implement
handle(intent:completion:).
This is where the real choices come into play. Since the user is trying to post an article, you should only respond with a success message if it works.
First, add this guard statement for when the article they chose isn’t found:
guard let title = intent.article?.identifier, let article = ArticleManager.findArticle(with: title) else { completion(PostArticleIntentResponse .failure(failureReason: "Your article was not found.")) return }
This calls the completion block with a failure intent response. Its only argument is called
failureReason because the failure response template you created earlier has the
failureReason variable in the template.
Next, add a guard for when this article has already been published:
guard !article.published else { completion(PostArticleIntentResponse .failure(failureReason: "This article has already been published.")) return }
Finally, for the success case, you’ll publish the article and call completion with the success response. This includes the article’s title and the date on which it was published:
ArticleManager.publish(article) completion(PostArticleIntentResponse .success(title: article.title, publishDate: article.formattedDate()))
Now that you have your intent handler set up, you have to make sure it gets used.
Open IntentHandler.swift and replace the existing
handler(for:) with the following to tell the system to use the handler you just wrote:
override func handler(for intent: INIntent) -> Any { return PostArticleIntentHandler() }
Next, open AppDelegate.swift and find
application(_:continue:restorationHandler:).
Something you might not expect is that even though you have your own handler for dealing with this shortcut, the continue user activity callback in the app delegate is still called.
To block the method from taking users out of Siri and to the new article view, add the following guard:
guard userActivity.interaction == nil else { ArticleManager.loadArticles() rootVC.viewWillAppear(false) return false }
If the activity has an interaction attached, that means its the “publish” shortcut, and you need to load the articles and make sure the feed view controller reloads.
Build and run, and then write a new article to donate one of these shortcuts.
Next, go into Settings and make a shortcut for it with a title; something like “Post my last article” works perfectly.
After that, start Siri and use your shortcut; Siri will post the article for you and respond with a custom response informing you of the title and publication date.
Wrapping Up
The final thing you need to worry about is deleting intents from the system. Let’s say a user has deleted the only article they wrote. If Siri prompts them to publish this article, that means the system remembers information that they wanted to delete.
Since this goes against Apple’s strict respect for user privacy, it’s your job to remove activities and intents that were deleted.
Go to ArticleFeedViewController.swift and scroll to the bottom of the file.
Then, add the following method call to the bottom of
remove(article:indexPath:):
INInteraction.delete(with: article.title) { _ in }
The completion block allows you to react to deletion errors however you see fit.
Since the new article shortcut doesn’t contain any user data, it isn’t strictly necessary to remove it.
Finally, open Layouts.swift and find the
UITableViewDataSource extension for
ArticleFeedViewController. Add the following at the end of the extension to enable deleting articles:
func tableView(_ tableView: UITableView, commit editingStyle: UITableViewCell.EditingStyle, forRowAt indexPath: IndexPath) { if editingStyle == .delete { let article = articles[indexPath.row] remove(article: article, at: indexPath) if articles.count == 0 { NSUserActivity.deleteSavedUserActivities(withPersistentIdentifiers: [NSUserActivityPersistentIdentifier(kNewArticleActivityType)]) { print("Successfully deleted 'New Article' activity.") } } } }
If you want, you can use
NSUserActivity‘s class method
deleteSavedUserActivities(withPersistentIdentifiers:completionHandler:) to remove all of the activities that were donated with a single identifier.
Where to Go From Here?
You’ve worked through a lot, but there’s still a lot to learn. You can download the completed version of the project using the Download Materials button at the top or bottom of this tutorial. Be sure you update your bundle id and app group id before you try to run it.
If you want to learn more about Shortcuts, check out the two WWDC videos from 2018. The first presentation covers a lot of the same material as this tutorial and is a good refresher to solidify the big ideas you learned here. The second goes more into best practices.
As always, we hope you enjoyed the tutorial. Let us know if you have any questions down in the comments!
|
https://www.raywenderlich.com/6462-siri-shortcuts-tutorial-in-ios-12
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
graphql_parser
Parses GraphQL queries and schemas.
This library is merely a parser/visitor. Any sort of actual GraphQL API functionality must be implemented by you, or by a third-party package.
Angel framework
users should consider
package:angel_graphql
as a dead-simple way to add GraphQL functionality to their servers.
Installation
Add
graphql_parser as a dependency in your
pubspec.yaml file:
dependencies: graphql_parser: ^1.0.0
Usage
The AST featured in this library is directly based off this ANTLR4 grammar created by Joseph T. McBride:
It has since been updated to reflect upon the grammar in the official GraphQL specification.
import 'package:graphql_parser/graphql_parser.dart'; doSomething(String text) { var tokens = scan(text); var parser = new Parser(tokens); if (parser.errors.isNotEmpty) { // Handle errors... } // Parse the GraphQL document using recursive descent var doc = parser.parseDocument(); // Do something with the parsed GraphQL document... }
|
https://pub.dev/documentation/graphql_parser/latest/
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
This is CLI based lib which will help to ask question's.
example/main.dart
import 'package:pk_prompter/pk_prompter.dart'; void main(List<String> args) { final prompter = Prompter(); final String question1 = 'Which color do you want?'; final options1 = [ Option('I want red', '#F00'), Option('I want blue', '#00F'), Option('I want green', '#0F0'), ]; final String question2 = 'Do you like this lib?'; String colorCode = prompter.askMultipleChoice(question1, options1); bool answer = prompter.askBinary(question2); print(colorCode); print(answer); }
Add this to your package's pubspec.yaml file:
dependencies: pk_prompter: ^0.0.1
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support
pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:pk_prompter/pk_prompter.
|
https://pub.dev/packages/pk_prompter
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Source code for gino.engine
import asyncio import collections import functools import sys import time from sqlalchemy.engine import Engine, Connection from sqlalchemy.sql import schema from .transaction import GinoTransaction if sys.version_info >= (3, 7): # noinspection PyPackageRequirements,PyUnresolvedReferences from contextvars import ContextVar else: # noinspection PyPackageRequirements from aiocontextvars import ContextVar class _BaseDBAPIConnection: _reset_agent = None gino_conn = None def __init__(self, cursor_cls): self._cursor_cls = cursor_cls self._closed = False def commit(self): pass def cursor(self): return self._cursor_cls(self) @property def raw_connection(self): raise NotImplementedError async def acquire(self, *, timeout=None): if self._closed: raise ValueError( 'This connection is already released permanently.') return await self._acquire(timeout) async def _acquire(self, timeout): raise NotImplementedError async def release(self, permanent): if permanent: self._closed = True return await self._release() async def _release(self): raise NotImplementedError class _DBAPIConnection(_BaseDBAPIConnection): def __init__(self, cursor_cls, pool=None): super().__init__(cursor_cls) self._pool = pool self._conn = None self._lock = asyncio.Lock() @property def raw_connection(self): return self._conn async def _acquire(self, timeout): try: if timeout is None: await self._lock.acquire() else: before = time.monotonic() await asyncio.wait_for(self._lock.acquire(), timeout=timeout) after = time.monotonic() timeout -= after - before if self._conn is None: self._conn = await self._pool.acquire(timeout=timeout) return self._conn finally: self._lock.release() async def _release(self): conn, self._conn = self._conn, None if conn is None: return False await self._pool.release(conn) return True class _ReusingDBAPIConnection(_BaseDBAPIConnection): def __init__(self, cursor_cls, root): super().__init__(cursor_cls) self._root = root @property def raw_connection(self): return self._root.raw_connection async def _acquire(self, timeout): return await self._root.acquire(timeout=timeout) async def _release(self): pass # noinspection PyPep8Naming,PyMethodMayBeStatic class _bypass_no_param: def keys(self): return [] _bypass_no_param = _bypass_no_param() # noinspection PyAbstractClass class _SAConnection(Connection): def _execute_context(self, dialect, constructor, statement, parameters, *args): if parameters == [_bypass_no_param]: constructor = getattr(self.dialect.execution_ctx_cls, constructor.__name__ + '_prepared', constructor) return super()._execute_context(dialect, constructor, statement, parameters, *args) # noinspection PyAbstractClass class _SAEngine(Engine): _connection_cls = _SAConnection def __init__(self, dialect, **kwargs): super().__init__(None, dialect, None, **kwargs) class _AcquireContext: __slots__ = ['_acquire', '_conn'] def __init__(self, acquire): self._acquire = acquire self._conn = None async def __aenter__(self): self._conn = await self._acquire() return self._conn async def __aexit__(self, exc_type, exc_val, exc_tb): conn, self._conn = self._conn, None await conn.release() def __await__(self): return self._acquire().__await__() class _TransactionContext: __slots__ = ['_conn_ctx', '_tx_ctx'] def __init__(self, conn_ctx, args): self._conn_ctx = conn_ctx self._tx_ctx = args async def __aenter__(self): conn = await self._conn_ctx.__aenter__() try: args, kwargs = self._tx_ctx self._tx_ctx = conn.transaction(*args, **kwargs) return await self._tx_ctx.__aenter__() except Exception: await self._conn_ctx.__aexit__(*sys.exc_info()) raise async def __aexit__(self, *exc_info): try: tx, self._tx_ctx = self._tx_ctx, None return await tx.__aexit__(*exc_info) except Exception: exc_info = sys.exc_info() raise finally: await self._conn_ctx.__aexit__(*exc_info)[docs]class GinoConnection: """ Represents an actual database connection. This is the root of all query API like :meth:`all`, :meth:`first`, :meth:`scalar` or :meth:`status`, those on engine or query are simply wrappers of methods in this class. Usually instances of this class are created by :meth:`.GinoEngine.acquire`. .. note:: :class:`.GinoConnection` may refer to zero or one underlying database connection - when a :class:`.GinoConnection` is acquired with ``lazy=True``, the underlying connection may still be in the pool, until a query API is called or :meth:`get_raw_connection` is called. Oppositely, one underlying database connection can be shared by many :class:`.GinoConnection` instances when they are acquired with ``reuse=True``. The actual database connection is only returned to the pool when the **root** :class:`.GinoConnection` is released. Read more in :meth:`GinoEngine.acquire` method. .. seealso:: :doc:`/engine` """ # noinspection PyProtectedMember schema_for_object = schema._schema_getter(None) """A SQLAlchemy compatibility attribute, don't use it for now, it bites.""" def __init__(self, dialect, sa_conn, stack=None): self._dialect = dialect self._sa_conn = sa_conn self._stack = stack @property def _dbapi_conn(self): return self._sa_conn.connection @property def raw_connection(self): """ The current underlying database connection instance, type depends on the dialect in use. May be ``None`` if self is a lazy connection. """ return self._dbapi_conn.raw_connection[docs] async def get_raw_connection(self, *, timeout=None): """ Get the underlying database connection, acquire one if none present. :param timeout: Seconds to wait for the underlying acquiring :return: Underlying database connection instance depending on the dialect in use :raises: :class:`~asyncio.TimeoutError` if the acquiring timed out """ return await self._dbapi_conn.acquire(timeout=timeout)[docs] async def release(self, *, permanent=True): """`` value. .. seealso:: :meth:`.GinoEngine.acquire` """ if permanent and self._stack is not None: for i in range(len(self._stack)): if self._stack[-1].gino_conn is self: dbapi_conn = self._stack.pop() self._stack.rotate(-i) await dbapi_conn.release(True) break else: self._stack.rotate() else: raise ValueError('This connection is already released.') else: await self._dbapi_conn.release(permanent)@property def dialect(self): """ The :class:`~sqlalchemy.engine.interfaces.Dialect` in use, inherited from the engine created this connection. """ return self._dialect def _execute(self, clause, multiparams, params): return self._sa_conn.execute(clause, *multiparams, **params)[docs] async def all(self, clause, *multiparams, **params): """ Runs the given query in database, returns all results as a list. This method accepts the same parameters taken by SQLAlchemy :meth:`~sqlalchemy.engine.Connectable :meth:`execution_options` for more information. If the given parameters are parsed as "executemany" - bulk inserting multiple rows in one call for example, the returning result from database will be discarded and this method will return ``None``. """ result = self._execute(clause, multiparams, params) return await result.execute()[docs] async def first(self, clause, *multiparams, **params): """ Runs the given query in database, returns the first result. If the query returns no result, this method will return ``None``. See :meth:`all` for common query comments. """ result = self._execute(clause, multiparams, params) return await result.execute(one=True)[docs] async def scalar(self, clause, *multiparams, **params): """ Runs the given query in database, returns the first result. If the query returns no result, this method will return ``None``. See :meth:`all` for common query comments. """ result = self._execute(clause, multiparams, params) rv = await result.execute(one=True, return_model=False) if rv: return rv[0] else: return None[docs] async def status(self, clause, *multiparams, **params): """ Runs the given query in database, returns the query status. The returning query status depends on underlying database and the dialect in use. For asyncpg it is a string, you can parse it like this: """ result = self._execute(clause, multiparams, params) return await result.execute(status=True)[docs] def transaction(self, *args, **kwargs): """`` is an instance of the :class:`~gino.transaction.GinoTransaction` class, :meth:`~gino.transaction.GinoTransaction.commit` or :meth:`~gino.transaction.GinoTransaction :meth:`asyncpg.connection.Connection.transaction` for asyncpg. """ return GinoTransaction(self, args, kwargs)[docs] def iterate(self, clause, *multiparams, **params): """ :class:`~gino.dialects.base.Cursor` works. Similarly, this method takes the same parameters as :meth:`all`. """ result = self._execute(clause, multiparams, params) return result.iterate()[docs] def execution_options(self, **opt): """ Set non-SQL options for the connection which take effect during execution. This method returns a copy of this :class:`.GinoConnection` which :meth:`~sqlalchemy.engine.base.Connection.execution_options`, it actually does pass the execution options to the underlying SQLAlchemy :class:`~sqlalchemy.engine.base.Connection`. Furthermore, GINO added a few execution options: :param return_model: Boolean to control whether the returning results should be loaded into model instances, where the model class is defined in another execution option ``model``. Default is ``True``. :param model: Specifies the type of model instance to create on return. This has no effect if ``return_model`` is set to ``False``. Usually in queries built by CRUD models, this execution option is automatically set. For now, GINO only supports loading each row into one type of model object, relationships are not supported. Please use multiple queries for that. ``None`` for no postprocessing (default). :param timeout: Seconds to wait for the query to finish. ``None`` for no time out (default). :param :class:`~sqlalchemy.schema.Column` instance, so that each result will be only a single value of this column. Please note, if you want to achieve fetching the very first value, you should use :meth:`~gino.engine.GinoConnection.first` instead of :meth:`~gino.engine.GinoConnection.scalar`. However, using directly :meth:`~gino.engine.GinoConnection.scalar` is a more direct way. * A tuple nesting more loader expressions recursively. * A :func:`callable` function that will be called for each row to fully customize the result. Two positional arguments will be passed to the function: the first is the :class:`row <sqlalchemy.engine.RowProxy>` instance, the second is a context object which is only present if nested else ``None``. * A :class:`~gino.loader.Loader` instance directly. * Anything else will be treated as literal values thus returned as whatever they are. """ return type(self)(self._dialect, self._sa_conn.execution_options(**opt))async def _run_visitor(self, visitorcallable, element, **kwargs): await visitorcallable(self.dialect, self, **kwargs).traverse_single(element)[docs]class GinoEngine: """ Connects a :class:`~.dialects.base.Pool` and :class:`~sqlalchemy.engine.interfaces.Dialect` together to provide a source of database connectivity and behavior. A :class:`.GinoEngine` object is instantiated publicly using the :func:`gino.create_engine` function or :func:`db.set_bind() <gino.api.Gino.set_bind>` method. .. seealso:: :doc:`/engine` """ connection_cls = GinoConnection """Customizes the connection class to use, default is :class:`.GinoConnection`.""" def __init__(self, dialect, pool, loop, logging_name=None, echo=None, execution_options=None): self._sa_engine = _SAEngine( dialect, logging_name=logging_name, echo=echo, execution_options=execution_options) self._dialect = dialect self._pool = pool self._loop = loop self._ctx = ContextVar('gino') @property def dialect(self): """ Read-only property for the :class:`~sqlalchemy.engine.interfaces.Dialect` of this engine. """ return self._dialect @property def raw_pool(self): """ Read-only access to the underlying database connection pool instance. This depends on the actual dialect in use, :class:`~asyncpg.pool.Pool` of asyncpg for example. """ return self._pool.raw_pool[docs] def acquire(self, *, timeout=None, reuse=False, lazy=False, reusable=True): """ Acquire a connection from the pool. There are two ways using this method - as an asynchronous context manager:: async with engine.acquire() as conn: # play with the connection which will guarantee the connection is returned to the pool when leaving the ``async with`` block; or as a coroutine:: conn = await engine.acquire() try: # play with the connection finally: await conn.release() where the connection should be manually returned to the pool with :meth:`conn.release() <.GinoConnection.release>`. Within the same context (usually the same :class:`~asyncio.Task`, see also :doc:`/transaction`), a nesting acquire by default re :param timeout: Block up to ``timeout`` seconds until there is one free connection in the pool. Default is ``None`` - block forever until succeeded. This has no effect when ``lazy=True``, and depends on the actual situation when ``reuse=True``. :param reuse: Reuse the latest reusable acquired connection (before it's returned to the pool) in current context if there is one, or borrow a new one if none present. Default is ``False`` for always borrow a new one. This is useful when you are in a nested method call series, wishing to use the same connection without passing it around as parameters. See also: :doc:`/transaction`. A reusing connection is not reusable even if ``reusable=True``. If the reused connection happened to be a lazy one, then the reusing connection is lazy too. :param lazy: Don't acquire the actual underlying connection yet - do it only when needed. Default is ``False`` for always do it immediately. This is useful before entering a code block which may or may not make use of a given connection object. Feeding in a lazy connection will save the borrow-return job if the connection is never used. If setting ``reuse=True`` at the same time, then the reused connection - if any - applies the same laziness. For example, reusing a lazy connection with ``lazy=False`` will cause the reused connection to acquire an underlying connection immediately. :param`` and ``reusing=False`` makes it a cleanly isolated connection which is only referenced once here. :return: A :class:`.GinoConnection` object. """ return _AcquireContext(functools.partial( self._acquire, timeout, reuse, lazy, reusable))async def _acquire(self, timeout, reuse, lazy, reusable): try: stack = self._ctx.get() except LookupError: stack = collections.deque() self._ctx.set(stack) if reuse and stack: dbapi_conn = _ReusingDBAPIConnection(self._dialect.cursor_cls, stack[-1]) reusable = False else: dbapi_conn = _DBAPIConnection(self._dialect.cursor_cls, self._pool) rv = self.connection_cls(self._dialect, _SAConnection(self._sa_engine, dbapi_conn), stack if reusable else None) dbapi_conn.gino_conn = rv if not lazy: await dbapi_conn.acquire(timeout=timeout) if reusable: stack.append(dbapi_conn) return rv @property def current_connection(self): """ Gets the most recently acquired reusable connection in the context. ``None`` if there is no such connection. :return: :class:`.GinoConnection` """ try: return self._ctx.get()[-1].gino_conn except (LookupError, IndexError): pass[docs] async def close(self): """ Close the engine, by closing the underlying pool. """ await self._pool.close()[docs] async def all(self, clause, *multiparams, **params): """ Acquires a connection with ``reuse=True`` and runs :meth:`~.GinoConnection.all` on it. ``reuse=True`` means you can safely do this without borrowing more than one underlying connection:: async with engine.acquire(): await engine.all('SELECT ...') The same applies for other query methods. """ async with self.acquire(reuse=True) as conn: return await conn.all(clause, *multiparams, **params)[docs] async def first(self, clause, *multiparams, **params): """ Runs :meth:`~.GinoConnection.first`, See :meth:`.all`. """ async with self.acquire(reuse=True) as conn: return await conn.first(clause, *multiparams, **params)[docs] async def scalar(self, clause, *multiparams, **params): """ Runs :meth:`~.GinoConnection.scalar`, See :meth:`.all`. """ async with self.acquire(reuse=True) as conn: return await conn.scalar(clause, *multiparams, **params)[docs] async def status(self, clause, *multiparams, **params): """ Runs :meth:`~.GinoConnection.status`. See also :meth:`.all`. """ async with self.acquire(reuse=True) as conn: return await conn.status(clause, *multiparams, **params)[docs] def compile(self, clause, *multiparams, **params): """ A shortcut for :meth:`~gino.dialects.base.AsyncDialectMixin.compile` on the dialect, returns raw SQL string and parameters according to the rules of the dialect. """ return self._dialect.compile(clause, *multiparams, **params)[docs] def transaction(self, *args, timeout=None, reuse=True, reusable=True, **kwargs): """ Borrows a new connection and starts a transaction with it. Different to :meth:`.GinoConnection.transaction`, transaction on engine level supports only managed usage:: async with engine.transaction() as tx: # play with transaction here Where the implicitly acquired connection is available as :attr:`tx.connection <gino.transaction.GinoTransaction.connection>`. By default, :meth:`.transaction` acquires connection with ``reuse=True`` and ``reusable=True``, that means it by default tries to create a nested transaction instead of a new transaction on a new connection. You can change the default behavior by setting these two arguments. The other arguments are the same as :meth:`~.GinoConnection.transaction` on connection. .. seealso:: :meth:`.GinoEngine.acquire` :meth:`.GinoConnection.transaction` :class:`~gino.transaction.GinoTransaction` :return: A asynchronous context manager that yields a :class:`~gino.transaction.GinoTransaction` """ return _TransactionContext(self.acquire( timeout=timeout, reuse=reuse, reusable=reusable), (args, kwargs))[docs] def iterate(self, clause, *multiparams, **params): """ Creates a server-side cursor in database for large query results. This requires that there is a reusable connection in the current context, and an active transaction is present. Then its :meth:`.GinoConnection.iterate` is executed and returned. """ connection = self.current_connection if connection is None: raise ValueError( 'No Connection in context, please provide one') return connection.iterate(clause, *multiparams, **params)[docs] def update_execution_options(self, **opt): """Update the default execution_options dictionary of this :class:`.GinoEngine`. .. seealso:: :meth:`sqlalchemy.engine.Engine.update_execution_options` :meth:`.GinoConnection.execution_options` """ self._sa_engine.update_execution_options(**opt)async def _run_visitor(self, *args, **kwargs): async with self.acquire(reuse=True) as conn: await getattr(conn, '_run_visitor')(*args, **kwargs)
|
http://gino.fantix.pro/en/v0.7.7/_modules/gino/engine.html
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Introduction, and large ones. All of these experiences were related to an XML based CDM. This blog consists of three parts. This blogpost contains part I: Standards & Guidelines. The next blogpost, part two, is about XML Namespace Standards and the last blogpost contains part three about Dependency Management & Interface Tailoring.
This first part, about standards and naming conventions, primarily apply to XML, but the same principles and ideas will mostly apply to other formats, like JSON, as well. The second part about XML namespace standards only is, as it already indicates, applicable to an XML format CDM. The last part, in the third blogpost, about dependency management & interface tailoring entirely, applies to all kind of data formats.
Developing a CDM
About the way of creating a CDM. It’s not doable to create a complete CDM upfront and only then start designing services and developing them. This is because you only can determine usage of data, completeness and quality while developing the services and gaining experience in using them. A CDM is a ‘living’ model and will change in time.
When the software modules (systems or data stores) which are to be connected by the integration layer are being developed together, the CDM will change very often. While developing software you always encounter shortcomings in the design, unseen functional flaws, unexpected requirements or restrictions and changes in design because of new insights or changed functionality. So sometimes the CDM will even change on a daily base. This perfectly fits into the modern Agile Software Development methodologies, like Scrum, where changes are welcome.
When the development stage is finished and the integration layer (SOA environment) is in a maintenance stage, the CDM still will change, but at a much slower pace. It will keep on changing because of maintenance changes and modifications of connected systems or trading partners. Changes and modifications due to new functionality also causes new data entities and structures which have to be added to the CDM. These changes and modifications occur because business processes change in time, caused by a changing world, ranging from technical innovations to social behavioral changes.
In either way, the CDM will never be ready and reach a final changeless state, so a CDM should be flexible and created in such a way that it welcomes changes.
When you start creating a CDM, it’s wise to define standards and guidelines about defining the CDM and using it beforehand. Make a person (or group of persons in a large project), responsible for developing and defining the CDM. This means he defines the data definitions and structures of the CDM. When using XML this person is responsible for creating and maintaining the XML schema definition (XSD) files which represent the CDM. He develops the CDM based on requests from developers and designers. He must be able to understand the need of the developers, but he should also keep the model consistent, flexible and future proof. This means he must have experience in data modeling and the data format (e.g. XML or JSON) and specification language (e.g. XSD) being used. Of course, he also guards the standards and guidelines which has been set. He also is able, when needed, to deny requests for a CDM change from (senior) developers and designers in order to preserve a well-defined CDM and provide an alternative which meets their needs as well.
Standards & Guidelines
There are more or less three types of standards and guidelines when defining an XML data model:
- Naming Conventions
- Structure Standards
- Namespace Standards
Naming Conventions
The most important advice is that you define naming conventions upfront and stick to them. Like all the naming convention in programming languages, there are a lot of options and often it’s a matter of personal preference. Changing conventions because of different personal preferences it not a good idea. Mixed conventions results in ugly code. Nevertheless I do have some recommendations.
Nodes versus types
The first one is to make a distinction between the name of a node (element or attribute) and an XML type. I’ve been in a project where the standard was to give them exactly the same name. In XML this is possible! But the drawback was that there were connecting systems and programming languages which couldn’t handle this! For example the standard Java library for XML parsing, JAX-P, had an issue with this. The Java code which was generated under the hood used the name of an XML type for a Java class name and the name of an element as a Java variable name. In Java it is not possible to use an identical name for both. In that specific project, this had to be fixed manually in the generated Java source code. That is not what you want! It can easily be avoided by using different names for types and elements.
Specific name for types
A second recommendation, which complements the advice above, is to use a specific naming convention for XML types, so their names always differ from node names. The advantage for developers is that they can recognize from the name if something is an XML node or an XML type. This eases XML development and makes the software code easier to read and understand and thus to maintain.
Often I’ve seen the naming convention, which tries to implements this, by prescribing that the name of an XML type should be suffixed with the token “Type”. I personally do not like this specific naming convention. Consider you have a “Person” entity and so you end up with an XML type named “PersonType”. This perfectly makes sense, doesn’t it? But how about a “Document” entity? You end up with an XML type named “DocumentType” and guess what: there is also going to be a “DocumentType” entity resulting in an XML type named “DocumentTypeType”…!? Very confusing in the first place. Secondly, you end up with an element and an XML type with the same name! The name “DocumentType” is used as a name for an element (of type “DocumentTypeType”) and “DocumentType” is used as an XML type (of an element named “Document”).
From experience I can tell you there are more entities with a name that ends with “Type” than you would expect!
My advice is to prefix an XML type with the character “t”. This not only prevents this problem, but it’s also shorter. Additionally you can distinguish an XML node from an XML type by the start of its name. This naming convention results into element names like “Person”, “Document” and “DocumentType” versus type names “tPerson”, “tDocument” and “tDocumentType”.
Use CamelCase – not_underscores
The third recommendation is to use Camel Case for names instead of using underscores as separator between the words which make up a name of a node or type. This shortens a name and still the name can be read easily. I’ve got a slight preference to start a name with an uppercase character, because then I can use camel Case beginning with a lowercase character for local variables in logic or translations (BPEL, xslt, etc) in the integration layer or tooling. This results in a node named “DocumentType” of type “tDocumentType” and when used in a local variable in code, this variable is named “documentType”.
Structure Standards
I also have some recommendations about standards which apply to the XML structure of the CDM.
Use elements only
The first one is to never use attributes, so only elements. You can never expand an attribute and create child elements in it. This may not be necessary at the moment, but may be necessary sometime in the future. Also an attribute cannot have the ‘null’ value in contrast with an element. You can argue that an empty value can represent the null value. But this is only possible with String type attributes (otherwise it’s considered as invalid XML when validating against its schema) and often there is a difference between an empty string and a null value. Another disadvantage is that you can not have multiple attributes with the same name inside an element.
Furthermore, using elements makes XML better readable by humans, so this helps developers in their coding and debugging. A good read about this subject is “Principles of XML design: When to use elements versus attributes”. This article contains a nice statement: “Elements are the extensible engine for expressing structure in XML.” And that’s exactly what you want when developing a CDM that will change in time.
The last advantage is that when the CDM only consists of elements, processing layers can add their own ‘processing’ attributes only for the purpose of helping the processing itself. This means that the result, the XML which is used in communicating with the world outside of the processing system, should be free of attributes again. Also processing attributes can be added in the interface, to provide extra information about the functionality of the interface. For example, when retrieving orders with operation getOrders, you might want to indicate for each order whether it has to be returned with or without customer product numbers:
<getOrdersRequest> <Orders> <Order includeCustProdIds='false'> <Id>123</Id> </Order> <Order includeCustProdIds='true'> <Id>125</Id> </Order> <Order includeCustProdIds='false'> <Id>128</Id> </Order> </Orders> </getOrdersRequest>
Beware these attributes are processing or functionality related, so they should not be a data part the entity. And ask yourself if they are really necessary. You might consider to provide this extra functionality in a new operation, e.g. operation getCustProdIds to retrieve customer product ids or operation getOrderWithCustIds to retrieve order with customer product number.
All elements optional
The next advice is to make all the elements optional! There unexpectedly always is a system or business process which doesn’t need a certain (child) element of which you initially had thought it would always be necessary. On one project this was the case with id elements. Each data entity must have an id element, because the id element contains the functional unique identifying value for the data entity. But then there came a business case with a front end system that had screens in which the data entity was being created. Some of the input data had to be validated before the unique identifying value was known. So the request to the validation system contained the entity data without the identifying id element, so the mandatory id element had to be changed to an optional element. Of-course, you can solve this by creating a request which only contains the data that is used in separate elements, so without the use of the CDM element representing the entity. But one of the powers of a CDM is that there is one definition of an entity.
At that specific project, in time, more and more mandatory elements turned out to be optional somewhere. Likely this will happen at your project as well!
Use a ‘plural container’ element
There is, of course, an exception of an element which should be mandatory. That is the ‘plural container’ element, which only is a wrapper element around a single element which may occur multiple times. This is my next recommendation: when a data entity (XML structure) contains another data entity as a child element and this child element occurs two or more times, or there is a slight chance that this will happen in the future, then create a mandatory ‘plural container’ element which acts as a wrapper element that contains these child elements. A nice example of this is an address. More often than you might think, a data entity contains more than one address. When you have an order as data entity, it may contain a delivery address and a billing address, while you initially started with only the delivery address. So when initially there is only one address and the XML is created like this:
<Order> <Id>123</Id> <CustomerId>456/<CustomerId> <Address> <Street>My Street</Street> <ZipCode>23456</ZipCode> <City>A-town</City> <CountryCode>US</CountryCode> <UsageType>Delivery</UsageType> </Address> <Product>...</Product> <Product>...</Product> <Product>...</Product> </Order>
Then you have a problem with backwards compatibility when you have to add the billing address. This is why it’s wise to create a plural container element for addresses, and for products as well. The name of this element will be the plural of the element it contains. Above XML will then become like this:
<Order> <Id>123</Id> <CustomerId>456/<CustomerId> <Addresses> <Address> <Street>My Street</Street> <ZipCode>23456</ZipCode> <City>A-town</City> <CountryCode>US</CountryCode> <UsageType>Delivery</UsageType> </Address> </Addresses> <Products> <Product>...</Product> <Product>...</Product> <Product>...</Product> </Products> </Order>
In the structure definition, the XML Schema Definition (XSD), define the plural container element to be single and mandatory. Make its child elements optional and without a maximum of occurrences. First this results in maximum flexibility and second, in this way there is only one way of constructing XML data that doesn’t have any child elements. In contrast, when you make the plural container element optional, you can create XML data that doesn’t have any child element in two ways, by omitting the plural container element completely and by adding it without any child elements. You may want to solve this by dictating that child elements always have at least one element, but then the next advantage, discussed below, is lost.
So the XML data example of above will be modeled as follows:
<complexType name="tOrder"> <sequence> <element name="Id" type="string" minOccurs="0" maxOccurs="1"/> <element name="CustomerId" type="string" minOccurs="0" maxOccurs="1"/> <element name="Addresses" minOccurs="1" maxOccurs="1"> <complexType> <sequence> <element name="Address" type="tns:tAddress" minOccurs="0" maxOccurs="unbounded"/> </sequence> </complexType> </element> <element name="Products" minOccurs="1" maxOccurs="1"> <complexType> <sequence> <element name="Product" type="tns:tProduct" minOccurs="0" maxOccurs="unbounded"/> </sequence> </complexType> </element> </sequence> </complexType> <complexType name="tAddress"> <sequence> ... </sequence> </complexType> <complexType name="tProduct"> <sequence> ... </sequence> </complexType>
There is another advantage of this construction for developers. When there is a mandatory plural container element, this elements acts as a kind of anchor or ‘join point’ when XML data has be modified in the software and for example, child elements have to be added. As this element is mandatory, it’s always present in the XML data that has to be changed, even if there are no child elements yet. So the code of a software developer can safely ‘navigate’ to this element and make changes, e.g. adding child elements. This eases the work of a developer.
Be careful with restrictions
You never know beforehand with which systems or trading partners the integration layer will connect in future. When you define restrictions in your CDM, beware of this. For example restricting a string type to a list of possible values (enumeration) is very risky. What to do when in future another possible value is added?
Even a more flexible restriction, like a regular expression can soon become too strict as well. Take for example the top level domain names on internet. It once was restricted to two character abbreviations for countries, some other three character abbreviations (“net”, “com”, “org”, “gov”, “edu”) and one four character word “info”, but that’s history now!
This risk applies for all restrictions, restriction on character length, numeric restrictions, restriction on value ranges, etc.
Likewise I bet that the length of product id’s in the new version of your ERP system will exceed the current one.
My advice is to minimize restriction as much as possible in your CDM, preferable no restrictions at all!
Instead define restrictions on the interfaces, the API to the connection systems. When for example the product id of your current ERP system is restricted to 8 characters, it perfectly makes sense that you define a restriction on the interface with that system. More on this in part III in my last blogpost in the section about Interface Tailoring.
String type for id elements
Actually this one is the same as the one above about restrictions. I want to discuss this one separately, because of its importance and because it often goes wrong. Defining an id element as a numeric type is a way of applying a nummeric restriction to a string type id.
The advice is to make all identifying elements (id, code, etc) of type string and never a numeric type! Even when they always get a numeric value… for now! The integration layer may in future connect to another system that uses non-numeric values for an id element or an existing system may be replaced by a system that uses non-numeric id’s. Only make those elements numeric which truly contain numbers, so the value has a nummeric meaning. You can check this by asking yourself whether it functionally makes sense to calculate with the value or not. So for example phone numbers should be strings. Also when there is a check (algorithm) based on the sequence of the digits whether a number is valid or not (e.g. bank account check digit), this means the number serves as an identification and thus should be a string type element! Another way to detect numbers which are used as identification, is to determine if it matters when you add a preceding zero to the value. If that does matter, it means it’s not used nummeric. After all, preceding zero’s doesn’t change a nummeric value.
Determine null usage
The usage of the null value in XML () always leads to lots of discussions. The most import advice is to explicitly define standards & rules and communicate them! Decide whether the null usage is allowed or not. If so, determine in what situation it is allowed and what it functionally means. Ask yourself how it is used and how it differs from an element being absent (optional elements).
For example I’ve been in a project where a lot of data was updated in the database. An element being absent meant that a value didn’t change, while a null value meant that for a container element it’s record had be deleted and for a ‘value’ element that the database value had to be set to null.
The most important advice in this is: Make up your mind, decide, document and communicate it!
To summarize this first part of naming conventions and guidelines:
- Keep in mind that a CDM keeps on changing, so it’s never finished
- Define naming and structure standards upfront
- and communicate your standards and guidelines!
When creating a CDM in the XML format, you also have to think about namespaces and how to design the XML. This is where the second part in my next blogpost is all about. When you are not defining a CDM in the XML format, you can skip this one and immediately go to the third and last blogpost about dependency management & interface tailoring.
Hi Cristian,
For the most part I do agree with Stefan!
The most important one is that a CDM should not be pushed to the development/integration teams, but should be build up by them! Let the CDM change and grow by update requests from them.. To solve this (and also the runtime dependency problem), I’ve!
The offline way of working provides a very flexible CDM which can start quit small and grow as needed without limitation due the backward compatibility.
Regards,
Emiel
Wonder is your opinion about
?
Thanks
|
https://technology.amis.nl/2017/03/29/cdm-development-and-runtime-experiences-part1/
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
figured it out myself
So i am trying to make a character movement system that looks cool and i've run into a major problem. I have this 2D top-down character that consists of three sprites: head, body and legs. The head rotates towards the mouse using the atan2 function, the legs rotate if the angle between the head and legs is greater than 90 degrees, and the body is just half the angle between the head and legs. The problem is that the atan2 function jumps between -PI and PI, and instead of the legs continuing to follow the head, it acts as if the head got rotated the other way really quickly, which is not at all what i want. Please help, my brain is melting!
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class playerController : MonoBehaviour {
float a;
float aH;
float aL;
float aB;
float t = 10f;
float legspace = Mathf.PI / 2;
public GameObject head;
public GameObject body;
public GameObject legs;
void Update () {
Vector3 mousePos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
a = Mathf.Atan2(mousePos.y,mousePos.x);
if (a < 0) a += Mathf.PI * 2;
Quaternion a1 = Quaternion.Euler(0, 0, aH);
Quaternion a2 = Quaternion.Euler(0, 0, a);
Quaternion aHq = Quaternion.Slerp(a1, a2, t * Time.deltaTime);
aH = aHq.eulerAngles.z;
if ((aH - aL) > legspace) aL = aH - legspace;
else if ((aH - aL) < -legspace) aL = aH + legspace;
aB = ((aH - aL)) / -2 + aH;
legs.transform.rotation = Quaternion.Euler(0, 0, Mathf.Rad2Deg * aL);
body.transform.rotation = Quaternion.Euler(0, 0, Mathf.Rad2Deg * aB);
head.transform.rotation = Quaternion.Euler(0, 0, Mathf.Rad2Deg * aH);
}
}
Answer by Wolfride
·
Dec 16, 2018 at 03:28 AM
I mean... Can't you just parent all 3 pieces to an empty game object and just rotate that? Or you don't want them all to move at the same time and your problem is the head is moving too fast while everything else is fine? And if you just didn't have the head rotate as far, it would look really good as it.
The thing is that i want the player to continue turning one way, without that sudden jump on the right side. And i dont wanna limit the head movement either, i want him to be able to turn 360 freely without the sudden.
Vector art Player
0
Answers
Sprite animations won't export/run in a published build unless you play in editor first
1
Answer
Hover over text VERY glitchy
0
Answers
Sprite Diffuse shader not affected by point lights
1
Answer
How to make the pixels of an object that are obscured by a specific object transparent Unity2D
1
Answer
|
https://answers.unity.com/questions/1581347/2d-top-down-multiple-sprite-rotation.html
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
AndroidX replaces the original support library APIs with packages in the
androidx namespace. Only the package and Maven artifact names changed; class,
method, and field names did not change.
Prerequisites
Before you migrate, bring your app up to date. We recommend updating your project to use the final version of the support library: version 28.0.0. This is because AndroidX artifacts with version 1.0.0 are binary equivalent to the Support Library 28.0.0 artifacts.
Migrate an existing project using Android Studio
With Android Studio 3.2 and higher, you can migrate an existing project to AndroidX by selecting Refactor > Migrate to AndroidX from the menu bar.
The refactor command makes use of two flags. By default, both of them are
set to
true in your
gradle.properties file:
android.useAndroidX=true
- The Android plugin uses the appropriate AndroidX library instead of a Support Library.
android.enableJetifier=true
- The Android plugin automatically migrates existing third-party libraries to use AndroidX by rewriting their binaries.
Mappings
If you run into issues with migration, refer to these tables to determine the proper mappings from the support library to the corresponding AndroidX artifacts and classes:
For the latest versions of the Jetpack libraries, see the versions page.
Additional resources
To learn more about migrating your code to AndroidX, see the following additional resources:
|
https://developer.android.com/jetpack/androidx/migrate?hl=ru
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Explain BLE Example Code
Hi,
I am trying to connect the WiPy via BLE to an Android application, and I am at the stage where I am trying to understand how BLE works currently.
I was looking at the example advertising and connection code that is here:
Can someone explain to me how the code here works. My comments are how I interpret the code, based on what I've read about BLE, and basically my questions are as the comments state.
from network import Bluetooth bluetooth = Bluetooth() bluetooth.set_advertisement(name='LoPy', service_uuid=b'1234567890123456') def conn_cb (bt_o): events = bt_o.events() # Here we are checking if a device is connected. if events & Bluetooth.CLIENT_CONNECTED: print("Client connected") elif events & Bluetooth.CLIENT_DISCONNECTED: print("Client disconnected") bluetooth.callback(trigger=Bluetooth.CLIENT_CONNECTED | # Can someone explain this? Bluetooth.CLIENT_DISCONNECTED, handler=conn_cb) bluetooth.advertise(True) #Enable BLE advertising srv1 = bluetooth.service(uuid=b'1234567890123456', isprimary=True) # what is "isPrimary"? chr1 = srv1.characteristic(uuid=b'ab34567890123456', value=5) char1_read_counter = 0 def char1_cb_handler(chr): global char1_read_counter char1_read_counter += 1 # Everytime we want to read the first characteristic, increment. events = chr.events() if events & Bluetooth.CHAR_WRITE_EVENT: # When client tries to overwrite characteristic? print("Write request with value = {}".format(chr.value())) else: if char1_read_counter < 3: print('Read request on char 1') else: return 'ABC DEF' # If we try to read more than three times, we return this? Why? # What does the line below do?) #What does the line above do?
Thank you all for your help and time.
|
https://forum.pycom.io/topic/2389/explain-ble-example-code/
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
This article demonstrates how to quickly build a plug-in
architecture with ASP.NET MVC 3 and how to build new plugins for your
application. It also shows how to create your views as embedded resources into the plugins and how .NET
4.0 features can help with discovering the new plug-ins from the host
application.
Second Part shows how to add server side-logic inside plugins.
I had to design a new ASP.NET MVC enterprise
application that was marketed as software-as-a-service, basically being distributed to multiple customers by
features. The customer can select which services he wants activate and pay only
for those. Extra security of the code was also needed, all assemblies had to be
signed and views have to be embedded for protection, so the best approach was
to create a plug-in able architecture that could help accomplish all of this
requirements and deliver extra features with ease. This also adds extra flexibility if custom requirements are needed
from individual customers and allow them to create their own modules it they want.
The host application has to identify all the plug-ins
through a public library. A common interface is necessary for all plugins, that
will be implemented by a class in the root of any plug-in. This interface must
be located in a public assembly that will be referenced by all plug-ins and will contain common
interfaces that are used as a bridge between a plug-in and the host application.
Let's call that interface the IModule interface:
IModule
public interface IModule
{
/// <summary>
/// Title of the plugin, can be used as a property to display on the user interface
/// </summary>
string Title { get; }
/// <summary>
/// Name of the plugin, should be an unique name
/// </summary>
string Name { get; }
/// <summary>
/// Version of the loaded plugin
/// </summary>
Version Version { get; }
/// <summary>
/// Entry controller name
/// </summary>
string EntryControllerName { get; }
}
The class that will implement this interface will carry all the information about name, version and default access to the plugin.
ASP.NET 4 introduces a few new extensibility APIs that are very useful. One of them is a new assembly attribute called PreApplicationStartMethodAttribute.
PreApplicationStartMethodAttribute
This new attribute allows you to have code run way early in the ASP.NET pipeline as the application starts up, even before Application_Start.
This happens to be also before the code in your App_Code folder has been compiled. To use this attribute, create a class library and add this attribute
as an assembly level attribute. Example:
Application_Start
App_Code
[assembly: PreApplicationStartMethod(typeof(PluginTest.PluginManager.PreApplicationInit),"InitializePlugins")]
As seen above, a type and a string was specified. The string represents a method that needs to be a public static void method with no arguments.
Now, any ASP.NET website that references this assembly will call the InitializePlugins method when the application is about to start, giving this method
a chance to perform some early initialization.
InitializePlugins
public class PreApplicationInit
{
/// <summary>
/// Initialize method
/// </summary>
public static void InitializePlugins()
{ ... }
}
Now the real usage of this attribute is that you can add on run-time build providers or new assembly references, which in the previous
versions of Visual Studio could be done only via web.confing. We will use this to add a reference to all the plug-ins
before the application starts. Example:
web.confing
System.Web.Compilation.BuildManager.AddReferencedAssembly(assembly);
The plug-ins should be copied in a directory inside the
web application, but not referenced directly, instead a copy of them should be
made and reference the copied plug-ins. All of this steps have to be made in the PreApplicationInit class static
constructor.
PreApplicationInit
public class PreApplicationInit
{
static PreApplicationInit()
{
string pluginsPath = HostingEnvironment.MapPath("~/plugins");
string pluginsTempPath = HostingEnvironment.MapPath("~/plugins/temp");
if (pluginsPath == null || pluginsTempPath == null)
throw new DirectoryNotFoundException("plugins");
PluginFolder = new DirectoryInfo(pluginsPath);
TempPluginFolder = new DirectoryInfo(pluginsTempPath);
}
/// <summary>
/// The source plugin folder from which to copy from
/// </summary>
private static readonly DirectoryInfo PluginFolder;
/// <summary>
/// The folder to copy the plugin DLLs to use for running the application
/// </summary>
private static readonly DirectoryInfo TempPluginFolder;
/// <summary>
/// Initialize method that registers all plugins
/// </summary>
public static void InitializePlugins()
{ ... }
}
When the InitializePlugins method is called which will refresh the temp directory and reference all the
plugin assemblies. When referencing, a check is required that validates the
plugin by verifying if they contain a class that implements the IModule interface.
public static void InitializePlugins()
{
Directory.CreateDirectory(TempPluginFolder.FullName);
//clear out plugins
foreach (var f in TempPluginFolder.GetFiles("*.dll", SearchOption.AllDirectories))
{
f.Delete();
}
//copy files
foreach (var plug in PluginFolder.GetFiles("*.dll", SearchOption.AllDirectories))
{
var di = Directory.CreateDirectory(TempPluginFolder.FullName);
File.Copy(plug.FullName, Path.Combine(di.FullName, plug.Name), true);
}
//This will put the plugin assemblies in the 'Load' context
var assemblies = TempPluginFolder.GetFiles("*.dll", SearchOption.AllDirectories)
.Select(x => AssemblyName.GetAssemblyName(x.FullName))
.Select(x => Assembly.Load(x.FullName));
foreach (var assembly in assemblies)
{
Type type = assembly.GetTypes()
.Where(t => t.GetInterface(typeof(IModule).Name) != null).FirstOrDefault();
if (type != null)
{
//Add the plugin as a reference to the application
BuildManager.AddReferencedAssembly(assembly);
//Add the modules to the PluginManager to manage them later
var module = (IModule)Activator.CreateInstance(type);
PluginManager.Current.Modules.Add(module, assembly);
}
}
}
For this to work a probing folder must be configured in web.config to tell the AppDomain to also look for Assemblies/Types in the specified folders:
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<probing privatePath="plugins/temp" />
</assemblyBinding>
</runtime>
The singleton, PluginManager,
is used to keep a track of all the plugins, for later use in the application.
PluginManager
In the final step, after the plugins are referenced it
is necessary to register the embedded views from the plugins. This will be done
using BoC.Web.Mvc.PrecompiledViews.ApplicationPartRegistry.
I preferred to do it in a Bootstrapper class where I can, for example, also
setup my DI container if I want. More information in the next chapter
"Create embedded views". Bootstrapper overview:
BoC.Web.Mvc.PrecompiledViews.ApplicationPartRegistry
Bootstrapper
public static class PluginBootstrapper
{
public static void Initialize()
{
foreach (var asmbl in PluginManager.Current.Modules.Values)
{
ApplicationPartRegistry.Register(asmbl);
}
}
}
public class CalendarModule : IModule
{
public string Title
{
get { return "Calendar"; }
}
public string Name
{
get { return Assembly.GetAssembly(GetType()).GetName().Name; }
}
public Version Version
{
get { return new Version(1, 0, 0, 0); }
}
public string EntryControllerName
{
get { return "Calendar"; }
}
}
The full tutorial, that I also followed, on how to add
embedded Views to an MVC project can be found
here.
This chapter will only describe the absolute steps to this case. There are some
simple steps to follow:
MvcRazorClassGenerator
BoC.Web.Mvc.PrecompiledViews.ApplicationPartRegistry.Register(asmbl);
All needed assemblies are in the Binaries folder, on the
same level with the solution file. You can find the following DLLs: BoC.Web.Mvc.PrecompiledViews.dll, Commons.Web.Mvc.PrecompiledViews.dll,
System.Web.Helpers.dll, System.Web.WebPages.dll, System.Web.MVC.dll, System.Web.Razor.dll, System.Web.WebPages.Razor.dll.
All the plugins dlls are compiled in the directory PluginBin on the same root with the solution file.
The plugins are searched in the plugins directory inside the Web Application, the relative path from the solution file is PluginTest.Web/plugins
First make sure that the project is compiled and then copy the contents from PluginBin to PluginTest.Web/plugins
Make sure that the web-server is stopped or restart the web-server and run the application.
Feel free to play with the plugins by deleting one or more and then restart the application and see the results
Plug-in web applications can be easy achieved as
described in this article, but web applications rarely needs this kind of added
complexity. This is also only a proof of concept example of how this could be achieved and it does not represent a
full blown implementation. The purpose of this article was to share a basic
implementation that others could reuse and extend if required.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
[InvalidOperationException: The view 'Index' or its master was not found or no view engine supports the searched locations. The following locations were searched:
~/Views/Info/Index.aspx
~/Views/Info/Index.ascx
~/Views/Shared/Index.aspx
~/Views/Shared/Index.ascx
~/Views/Info/Index.cshtml
~/Views/Infoc.<InvokeActionResultWithFilters>b__19() +74
System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func`1 continuation) +388
System.Web.Mvc.<>c__DisplayClass1e.<InvokeActionResultWithFilters>b__1b() +72
System.Web.Mvc.ControllerActionInvoker.InvokeActionResultWithFilters(ControllerContext controllerContext, IList`1 filters, ActionResult actionResult) +303
System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +844
System.Web.Mvc.Controller.ExecuteCore() +130
System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +230
System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +39
System.Web.Mvc.<>c__DisplayClassb.<BeginProcessRequest>b__5() +71
System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +44
System.Web.Mvc.Async.<>c__DisplayClass8`1.<BeginSynchronous>b__7(IAsyncResult _) +42
System.Web.Mvc.Async.WrappedAsyncResult`1.End() +152
System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +59
System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +40
System.Web.Mvc.<>c__DisplayClasse.<EndProcessRequest>b__d() +75
System.Web.Mvc.SecurityUtil.<GetCallInAppTrustThunk>b__0(Action f) +31
System.Web.Mvc.SecurityUtil.ProcessInApplicationTrust(Action action) +61
System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +118
System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +38
System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +933
System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +188
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
|
https://www.codeproject.com/Articles/358360/NET-4-0-ASP-NET-MVC-3-plug-in-architecture-with-e?msg=4505354
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
python-unitypack
A library to deserialize Unity3D Assets and AssetBundles files (*.unity3d).
How Unity packs assets
Most extractors for Unity3D files (such asDisunity) deal with the format as a "file store", treating it as one would treat a zip. This is not how the format actually works.
Unity files are binary-packed, serialized collections of Unity3D classes. To this end, they are much closer to a json file containing arrays of objects.
Some of those classes have fields which contain raw data, such as Texture2D’s
image data field or TextAsset’s
m_Script field. Using this, files can be "extracted" from the asset bundles by using their
m_Name and an appropriate extension. But doing so leaves out all the "unextractable" classes which one might want to deal with.
Usage
To open an asset, or asset bundle, with unitypack:
import unitypack with open("example.unity3d") as f: bundle = unitypack.load(f) for asset in bundle.assets: print("%s: %s:: %i objects" % (bundle, asset, len(asset.objects)))
The
objects field on every
Asset is a dictionary of
path_id keys to
ObjectInfo values. The
path_id is a unique 64-bit signed int which represents the object instance. The
ObjectInfo class is a lazy lookup for the data on that object.
Thus, if you want to actually extract the data:
for id, object in asset.objects.items(): # Let's say we only want TextAsset objects if object.type == "TextAsset": # We avoid reading the data, unless it's a TextAsset data = object.read() # The resulting `data` is a unitypack.engine.TextAsset instance print("Asset name:", data.name) print("Contents:", repr(data.script))
Not all base Unity3D classes are implemented. If a class is unimplemented, or a custom class (eg. a non-Unity class) is encountered, the resulting data is a dict of the fields instead. The same dict of fields can be found in the
_obj attribute of the instance, otherwise.
Included tools
Included are two scripts which use unitypack for some common operations:
Asset extraction
extract.py can extract common types of data from assets and asset bundles, much like Disunity. By default, it will extract all known extractable types:
AudioClipobjects will be converted back to their original format. Note that recent Unity3D versions pack these as FSB files, sopython-fsb5 is required to convert them back.
Texture2Dobjects will be converted to png files. Not all Texture2D formats are supported. A recent version ofPillow is required for this.
Meshobjects (3D objects) will be pickled. Pull requests implementing a .obj converter are welcome and wanted.
TextAssetobjects will be extracted as plain text, to .txt files
Shaderobjects work essentially the same way as TextAsset objects, but will be extracted to .cg files.
Filters for individual formats are available. Run
./extract.py --help for the full list.
YAML conversion
bundle_to_yaml.py can convert AssetBundles to YAML output. YAML is more appropriate than JSON due to the recursive, pointer-heavy and class-heavy nature of the Unity3D format.
When run with the
--strip argument, extractable data will be stripped out. This can make the resulting YAML output far less heavy, as binary data will otherwise be converted to Base64 which can result in extremely large text output.
Here is a stripped example of the
movies0.unity3d file from Hearthstone, which contains only two objects (a MovieTexture cinematic and a corresponding AudioClip):
!unitypack:AudioClip m_BitsPerSample: 16 m_Channels: 0 m_CompressionFormat: 0 m_Frequency: 0 m_IsTrackerFormat: false m_Legacy3D: false m_Length: 0.0 m_LoadInBackground: false m_LoadType: 0 m_Name: Cinematic audio m_PreloadAudioData: true m_Resource: !unitypack:StreamedResource {m_Offset: 0, m_Size: 0, m_Source: ''} m_SubsoundIndex: 0 m_AssetBundleName: '' m_Container: - first: final/data/movies/cinematic.unity3d second: asset: !PPtr [0, -4923783912342650895] preloadIndex: 0 preloadSize: 2 m_Dependencies: [] m_IsStreamedSceneAssetBundle: false m_MainAsset: {asset: null, preloadIndex: 0, preloadSize: 0} m_Name: '' m_PreloadTable: - !PPtr [0, -6966092991433622133] - !PPtr [0, -4923783912342650895] m_RuntimeCompatibility: 1 !unitypack:stripped:MovieTexture m_AudioClip: !PPtr [0, -6966092991433622133] m_ColorSpace: 1 m_Loop: false m_MovieData: <stripped> m_Name: Cinematic
Stripped classes will be prefixed with
unitypack:stripped: .
License
python-unitypack is licensed under the terms of the MIT license.
Community
python-unitypack is a HearthSim project. All development happens on our IRC channel
#hearthsim on FreenPack, a Python 3 deserialization library for Unity3D files
评论 抢沙发
|
http://www.shellsec.com/news/5816.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
SD_JOURNAL_PRINT(3) sd_journal_print SD_JOURNAL_PRINT(3)
sd_journal_print, sd_journal_printv, sd_journal_send, sd_journal_sendv, sd_journal_perror, SD_JOURNAL_SUPPRESS_LOCATION - Submit log entries to the journal
#include <systemd);
sd_journal_print() may be used to submit simple, plain text log entries to the system journal. The first argument is a priority value. This is followed by a format string and its parameters, similar to printf(3) or syslog(3). journal. systemd.journal-fields(7) for details, but additional application defined fields may be used. A variable may be assigned more than one value per entry.. sd_journal_sendv() is particularly useful to submit binary objects to the journal where that is necessary..
The five calls return 0 on success or a negative errno-style error code. The errno(3) variable itself is not altered. If systemd-journald(8) is not running (the socket is not present), those functions do nothing, and also return 0.
All functions listed here are thread-safe and may be called in parallel from multiple threads. sd_journal_sendv() is "async signal safe" in the meaning of signal(7). sd_journal_print, sd_journal_printv, sd_journal_send, and sd_journal_perror are not async signal safe.
The sd_journal_print(), sd_journal_printv(), sd_journal_send(), sd_journal_sendv() and sd_journal_perror() interfaces are available as a shared library, which can be compiled and linked to with the libsystemd pkg-config(1) file.
systemd(1), sd-journal(3), sd_journal_stream_fd(3), syslog(3), perror(3), errno(3), systemd.journal-fields(7), signal(7), socket_JOURNAL_PRINT(3)
Pages that refer to this page: sd-journal(3), sd_journal_stream_fd(3), systemd.exec(5), file-hierarchy(7), systemd.directives(7), systemd.index(7), systemd.journal-fields(7)
|
http://man7.org/linux/man-pages/man3/sd_journal_send.3.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Tk_DoWhenIdle, Tk_CancelIdleCall - invoke a procedure when there are no pending events
#include <tk.h>
Tk_DoWhenIdle(proc, clientData)
Tk_CancelIdleCall(proc, clientData)
Procedure to invoke.
Arbitrary one-word value to pass to proc.
Tk_DoWhenIdle arranges for proc to be invoked when the application becomes idle. The application is considered to be idle when Tk_DoOneEvent has been called, it couldn't find any events to handle, and it is about to go to sleep waiting for an event to occur. At this point all pending Tk_DoWhenIdle handlers are invoked. For each call to Tk_DoWhenIdle there will be a single call to proc; after proc is invoked the handler is automatically removed. Tk_DoWhenIdle is only useable in programs that use Tk_DoOneEvent to dispatch events.
Proc should have arguments and result that match the type Tk_IdleProc:
typedef void Tk_IdleProc(ClientData clientData);
The clientData parameter to proc is a copy of the clientData argument given to Tk_DoWhenIdle. Typically, clientData points to a data structure containing application-specific information about what proc should do.
Tk_CancelIdleCall may be used to cancel one or more previous calls to Tk_DoWhenIdle: if there is a Tkk_DoWhenIdle is most useful in situations where (a) a piece of work will have to be done but (b) it's possible that something will happen in the near future that will change what has to be done, or require something different to be done. Tk_DoWhenIdle allows the actual work to be deferred until all pending events have been processed. At this point the exact work to be done will presumably be known and it can be done exactly once.
For example, Tk_DoWhenIdle might be used by an editor to defer display updates until all pending commands have been processed. Without this feature, redundant redisplays might occur in some situations, such as the processing of a command file.
callback, defer, handler, idle
|
http://search.cpan.org/~srezic/Tk-804.032/pod/pTk/DoWhenIdle.pod
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
view raw
Python and
wc
with open("commedia.pfc", "w") as f:
t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))
print(len(t))
f.write(t)
Output : 318885
$> wc commedia.pfc
2181 12282 461491 commedia.pfc
"""
Implementation of prefix-free compression and decompression.
"""
import doctest
from itertools import islice
from collections import Counter
import random
import json
def binary_strings(s):
"""
Given an initial list of binary strings `s`,
yield all binary strings ending in one of `s` strings.
>>> take(9, binary_strings(["010", "111"]))
['010', '111', '0010', '1010', '0111', '1111', '00010', '10010', '01010']
"""
yield from s
while True:
s = [b + x for x in s for b in "01"]
yield from s
def take(n, iterable):
"""
Return first n items of the iterable as a list.
"""
return list(islice(iterable, n))
def chunks(xs, n, pad='0'):
"""
Yield successive n-sized chunks from xs.
"""
for i in range(0, len(xs), n):
yield xs[i:i + n]
def reverse_dict(dictionary):
"""
>>> sorted(reverse_dict({1:"a",2:"b"}).items())
[('a', 1), ('b', 2)]
"""
return {value : key for key, value in dictionary.items()}
def prefix_free(generator):
"""
Given a `generator`, yield all the items from it
that do not start with any preceding element.
>>> take(6, prefix_free(binary_strings(["00", "01"])))
['00', '01', '100', '101', '1100', '1101']
"""
seen = []
for x in generator:
if not any(x.startswith(i) for i in seen):
yield x
seen.append(x)
def build_translation_dict(text, starting_binary_codes=["000", "100","111"]):
"""
Builds a dict for `prefix_free_compression` where
More common char -> More short binary strings
This is compression as the shorter binary strings will be seen more times than
the long ones.
Univocity in decoding is given by the binary_strings being prefix free.
>>> sorted(build_translation_dict("aaaaa bbbb ccc dd e", ["01", "11"]).items())
[(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')]
"""
binaries = sorted(list(take(len(set(text)), prefix_free(binary_strings(starting_binary_codes)))), key=len)
frequencies = Counter(text)
# char value tiebreaker to avoid non-determinism v
alphabet = sorted(list(set(text)), key=(lambda ch: (frequencies[ch], ch)), reverse=True)
return dict(zip(alphabet, binaries))
def prefix_free_compression(text, starting_binary_codes=["000", "100","111"]):
"""
Implements `prefix_free_compression`, simply uses the dict
made with `build_translation_dict`.
Returns a tuple (compressed_message, tranlation_dict) as the dict is needed
for decompression.
>>> prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"])[0]
'010101010100111111111001101101101001000100010011001'
"""
translate = build_translation_dict(text, starting_binary_codes)
# print(translate)
return ''.join(translate[i] for i in text), translate
def prefix_free_decompression(compressed, translation_dict):
"""
Decompresses a prefix free `compressed` message in the form of a string
composed only of '0' and '1'.
Being the binary codes prefix free,
the decompression is allowed to take the earliest match it finds.
>>> message, d = prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"])
'010101010100111111111001101101101001000100010011001'
>>> sorted(d.items())
[(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')]
>>> ''.join(prefix_free_decompression(message, d))
'aaaaa bbbb ccc dd e'
"""
decoding_translate = reverse_dict(translation_dict)
# print(decoding_translate)
word = ''
for bit in compressed:
# print(word, "-", bit)
if word in decoding_translate:
yield decoding_translate[word]
word = ''
word += bit
yield decoding_translate[word]
if __name__ == "__main__":
doctest.testmod()
with open("commedia.txt") as f:
text = f.read()
compressed, d = prefix_free_compression(text)
with open("commedia.pfc", "w") as f:
t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))
print(len(t))
f.write(t)
with open("commedia.pfcd", "w") as f:
f.write(json.dumps(d))
# dividing by 8 goes from bit length to byte length
print("Compressed / uncompressed ratio is {}".format((len(compressed)//8) / len(text)))
original = ''.join(prefix_free_decompression(compressed, d))
assert original == text
commedia.txt
You are using Python3 and an
str object - that means the count you see in
len(t) is the number of characters in the string. Now, characters are not bytes - and it is so since the 90's .
Since you did not declare an explicit text encoding, the file writing is encoding your text using the system default encoding - which on Linux or Mac OS X will be utf-8 - an encoding in which any character that falls out of the ASCII range (ord(ch) > 127) uses more than one byte on disk.
So, your program is basically wrong. First, define if you are dealing with text or bytes . If you are dealign with bytes, open the file for writting in binary mode (
wb, not
w) and change this line:
t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))
to
t = bytes((int(b, base=2) for b in chunks(compressed, 8))
That way it is clear that you are working with the bytes themselves, and not mangling characters and bytes.
Of course there is an ugly workaround to do a "transparent encoding" of the text you had to a bytes object - (if your original list would have all character codepoints in the 0-256 range, that is): You could encode your previous
t with
latin1 encoding before writing it to a file. But that would have been just wrong semantically.
You can also experiment with Python's little known "bytearray" object: it gives one the ability to deal with elements that are 8bit numbers, and have the convenience of being mutable and extendable (just as a C "string" that would have enough memory space pre allocated)
|
https://codedump.io/share/YPzkNNZwaLkq/1/why-do-python-and-wc-disagree-on-byte-count
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
In an earlier article I covered how to generate Excel reports on the fly using the Interop.Excel Namespace.
This is a really handy technique that also gives you full control over the minutia of the document you are creating. However the catch in using this technique is that you will need to update the configuration of your Web server to allow Web users to trigger Excel on your Web server. This can mean some pretty drastic security changes and possible loopholes in your Web server’s security so you should do a risk analysis before choosing this method.
In this article I will review the Web server security updates that need to be made to allow using the Interop.Excel Namespace to generate Excel documents for your Web site.
Please note that since I am developing this site for an Intranet, I am not as concerned with locking down the server. If you are working with a server that is exposed to the Web then you will want to review these security changes much more thoroughly.
Overview
What we will need to accomplish is to allow someone browsing our Web site to invoke Excel on the Web server so that the server can create and serve up the document.
At a more detailed level, the default ASP.NET worker process will need to be able to invoke the correct COM object on the server. Then, when the object has been invoked, the worker process will have to impersonate a local system User account on the server in order for Excel to allow the file to be created and saved to the Web server.
This requires significantly more configuration than a simple Web site setup in IIS, but may be worthwhile due to the extreme control that the Interop.Excel namespace gives you.
First Step: Install MS Office
The first thing you will need to do is to make sure that MS Office is installed on your Web server. The standard Interop.Excel invocations refer to MS Office components that must be running on the server and from my testing requires a complete installation of MS Office.
In my system setup I was dealing with a Windows 2003 server running IIS6 and pretty much nothing else, so I had to get my hands on MS Office 2007 to install on the server. Installing it went like clockwork so I was quickly on to the next step.
Step Two: Add your Web Application to IIS
So for the second step I added my Web application to IIS. I won’t go into the details of setting up a site in IIS, but I should mention some of the settings I ended up using.
- I left the Execute Permissions option set as Scripts
- In directory security I disabled Anonymous access
- I enabled Integrated Windows Authentication
Step 3: Try running your site
Now to check that everything was working I opened up a browser and pointed it to my new site. As expected the site ran quite nicely. However when I tried generating the Excel document I would click the Web page button to do so and I would either see an error on the page, or else the page would just end up doing nothing.
Step 4: Check the Server Event Logs
Whichever behaviour you end up seeing, go to your Administrative Tools and check your server’s Event Viewer. Then click to view your System logs. You should see an error with EventId: 10016 for the user account NT AUTHORITY\NETWORK SERVICE. This should originate from the source: DCOM. The details of the error should indicate that there is a permission settings issue for getting local activation to a COM Server application.
Step 5: Grant Permission to the Network Service Account
Please note that these steps will vary depending on the version of Windows that your server is running. The steps in Windows Server 2003 are quite straightforward, but I believe that in later versions this may be more onerous. Here are the steps I took in Windows Server 2003:
- On the Start menu, click Run and then type dcomcnfg and click ok
- the Component Services management tool will appear
- expand the Component Services node under Console Root. Then expand the Computers and subsequently the My Computer sub-nodes.
- Right-click the My Computer node and click Properties
- In the Properties popup click the COM Security tab
- In the Launch and Activation Permissions section click the Edit Default button
- Under the list of user accounts with launch permissions, you will want to add the default ASP.NET account Network Service.
- To do so, click the add button. then type network under the object names to search box and change the location to the local server’s name.
- Then click the Check Names button. Browse through the list of names until you see Network Service
- Select Network Service and click OK
- Click the next OK button also and then update the checkbox options for the Network Service account. In my case I set everything to allow, but you may be able to further refine this.
- Finally click OK to save your changes and then exit the Component Services interface.
Checking the Impact of your Changes
At this point if you test your Export to Excel functionality you will not see any errors, but the document will also not generate. If you go to your Server’s Event Viewer and click on Microsoft Office Sessions you will see notifications generated each time you try to generate an Excel document from your Web site.
The notifications indicate that your ASP.NET service is activating the COM object it needs, but that it can’t go any further due to permission issues. This is related to user accounts on the server. Basically the process trying to invoke and save your Excel documents is not running with sufficient permissions on your Web Server to do so. To correct this you will need to create a new user account on your Web server, preferably with minimal permissions so as not to endanger your server too much.
Step 6: Setting up your new User Account
- Click on your Start menu and then on Administrative Tools and then Computer Management.
- In the Computer Management tool expand the System Tools node and then the Local Users and Groups node.
- Click on the Users folder to view the list of local system users
- Right Click on the Users pane and select New User
- Add a new user account and password. Make sure you uncheck the User must change password at next logon option. and set the password to never expire. Finally click the Create button to create your new user.
- Now click the Groups folder and locate the Guests group in the list of groups.
- Right Click the Guests group and select the Add to Group option.
- In the Guests Properties popup window click the Add button to add your new user to this group. Once you have done so click the OK button to save your changes.
At this point you have your new limited access user account to use for generating Excel documents. If your server has been properly configured, then the Guests group will have severely limited access, but it should still be enough to generate and save new documents.
Step 7: Telling ASP.NET to Use as your New Server Account
The final step of this process is to tell ASP.NET that it should run as the new local server account that you have set up. Thankfully this is quite simple to do:
- Open up your site’s Web.config file
- Browse to the system.web identity element in your Web.config
- Set impersonate to true and add the user name and password to your new Server account.
So to summarize, change the following identity tag:
<authentication mode="Windows"/> <identity impersonate="false"/>
To this:
<authentication mode="Windows"/> <identity impersonate="true" userName="testUser" password="testUser"/>
Final Thoughts
At this point you have all of the parts in place to allow your ASP.NET site to instantiate the Excel COM components it needs via a DCOM Web request.
Once instantiated, your Web.config file is set up to impersonate a limited permissions user account that it can use to create the necessary Excel file to serve up your Web site’s users.
5 thoughts on “Configuring a Web Server to Allow Excel File Creation via the Interop.Excel Namespace”
|
https://jwcooney.com/2012/09/20/configuring-a-web-server-to-allow-excel-file-creation-via-the-interop-excel-namespace/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
This package extends the default Plone portlets framework to allow for the assignment of portlets on a per-view basis.
Sometimes it may really be useful to have a different portlet assigned to standalone views which are not necessary coupled to any content objects. You may think that in this case it is possible to create a new content object and apply that view to it and then finally assign the required portlets to that object. However, in Plone you may end up creating new content objects for every standalone zope3 view - which is not correct from a content management point of view.
In one of our projects we had a lot of standalone views where it was a requirement to have different portlets assigned to a standalone view (e.g. having a separate set of portlets on user profile views, sitemap view, z3c.form form views, homepage view, etc.)
To facilitate the adding of portlets there we developed this package called collective.viewportletmanager. It is aimed at usage by Plone integrators and is not working out of the box. To be able to assign portlets to your zope3 view you have to follow some simple rules.
More on this below in How to use section.
This add-on was tested on Plone 3.3.5 and Plone 4.1
This packages allows you to assign Plone 3 portlets to your custom zope3 views. To be able to do this we require from you the only one extra step: mark your view with IPortletsAwareView interface:
from zope.interface import implements from Products.Five.browser import BrowserView from collective.viewportletmanager.interfaces import IPortletsAwareView class MyZope3View(BrowserView): implements(IPortletsAwareView) # your zope3 view code goes here
After you declare that your zope3 view implements IPortletsAwareView interface, restart your zope instance and render your view. From now on you’ll see there Manage view portlets link. This link will get you to portlets management screen which is exactly the same as default Manage portlets view. The only difference is that Manage view portlets view will allow you to manage portlets for your IPortletsAwareView marked view.
The portlets you add on that view will be available only while visiting your custom zope3 view.
One more important note here is that portlets are not assigned based on view alone, but are based on context object as well. So we may say that view-based portlets are actually view-context based portlets.
View portlets will go right after context based portlets like other site-wide portlets.
You’ll also be able to block view based portlets via the standard Manage portlets screen.
The main thing is that view portlets are saved into Plone site root annotations like any other site-wide portlet categories but at the same time view category mappings hold context object UIDs so actually view category doesn’t look like site-wide but context based portlets category.
View portlets category uses the next assignments key format:
"<object_uid>:<view_name>"
So, for object with uid equal “123” and it’s view called “my-view” we’ll get portlet assignments saved under “123:my-view” key in “view” category inside global site portlet manager annotations.
For site root, which doesn’t provide “UID” attribute we use string placeholder “nouid”. E.g. portlet assignments for “sitemap” view will be saved under “nouid:sitemap” key.
This package overrides a bunch of standard Plone portlets framework components in order to add view portlets category to list of standard portlet categories: context, user, group and content type.
Here we’ll try to describe what exactly was overridden:
PortletContext both for site root and site content in order to add view category to standard categories list. This context also takes care of generating view key based on object “UID” and view name. To get view name portlet context retrieves view as an argument to it’s globalPortletCategories method, which actually breaks default Plone portlets framework designed API. To pass view object around we also had to override portlet manager renderer and portlet retrievers.
PortletManagerRenderer: to pass view object down to portlet retriever class.
PortletRetriever: to pass view object down to PortletContext which in turn will use it to generate view category key.
ContextualEditPortletManagerRenderer view to provide view category blacklist status to be used on standard Manage portlets screen.
EditPortletManagerRenderer to disable inherited portlets on Manage view portlets screen. This doesn’t make much sense for view category portlets.
ManageContextualPortlets to provide set blacklist status method which will also take care of view category blacklist status.
ManageViewPortlets: our own Manage view portlets view
ManageViewPortletsLinkViewlet: our own viewlet that renders Manage view portlets link pointing to appropriate portlets management screen for current content object and zope3 view. Link appears only on IPortletsAwareView enabled zope3.
|
https://pypi.org/project/collective.viewportletmanager/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
The above equation is pretty easy to solve, it just means that x is any value that when you divide by 2, gives a remainder of 1. x is all odd numbers. The more formal answer is:
That reads as “x is equivalent to 1 plus 2 times any integer”. x has infinitely many solutions, so long as they fit that constraint.
That’s easy enough to figure out, but how would you solve this equation?
or this one?
One Level (Simple Case)
There is something called the quotient remainder theorem that lets us change an equation like this:
Into one like this:
The Z means “all integers” which implies that there are infinitely many solutions to A. This reads as “A is equivelant to C plus B times any integer”.
Let’s work out a couple examples.
That transforms to:
If you try plugging any positive or negative integer in place of Z, you’ll see that you get a number that satisfies the first equation.
Here’s another one:
That transforms to:
Once again you can plug any positive or negative integer in for Z and get a value for x that satisfies the original equation.
Pretty easy and pretty handy right? Let’s get a little more complex.
Two Levels
Let’s say you want to solve this equation from the intro.
The first thing you might do is transform it to the below.
where
.
What now? Can we transform it again? If we do, we get this:
where
.
plugging 1 in for the N1 and N2, we get 8, which does satisfy the original equation.
If we plug 3 in for N1 and 1 in for N2 though, we get 12 which DOESN’T satisfy the original equation. What gives??
Well it turns out there are subtle restrictions on our equations that got lost in the shuffle. Let’s start over…
In effect, what that equation says is “Something divided by two gives a remainder of 1” where the something happens to be x % 5, but we don’t really care about what the something actually is right now.
The equation also implies that the right side of the equation is a value modulo two, and it’s only valid values are {0,1}. Another way to write this is to say that the right side
which reads as “in all integers mod 2”.
1 is obviously in the set of valid values {0,1}, so we don’t need any special notation just yet, but let’s keep it in mind as we move onto the next step.
The above says that x divided by 5 gives a remainder that is 1 plus two times all integers. However there is a catch. There right side of the equation is a mod 5 value, which means it’s only valid values are {0,1,2,3,4}. In other words, the right side of the equation is
.
This has an effect of implicitly limiting the valid values that we can plug in to Z. It limits it to values of Z where 1+2*Z is in {0,1,2,3,4}. Let’s write this out a little more formally.
where
and
Let’s transform the equation again to solve for x:
where
and
Note that there is no modulus on the left side of the equation now, so the right side of the equation is unlimited into the valid values it can have.
How do we use this resulting formula though to find valid values of x?
First we figure out the valid values of a.
where
, so it’s only valid values have to be a subset of {0,1,2,3,4}.
Plugging a -1 in for Z, we get a value for a of -1, which isn’t a valid answer so we throw it away.
Plugging a 0 in for Z, we get a value for a of 1. That is a valid answer so we keep it!
Plugging a 1 in gives us 3 which is valid, so we keep that too.
Plugging a 2 in gives us a value of 5, which isn’t valid, so we throw it away and know that we are done.
Our values for a are {1,3}.
The solution we got was this:
Plugging in each value of a means that we get two equations as a solution for x. Valid values of x are the union of these two lists – both equations give valid answers.
The first equation gives us values of {…, -4, 1, 6, 11, …}
The second equation gives us values of {…, -2, 3, 8, 13, …}
Taking the union of those, our solutions for x are:
{…, -4, -2, 1, 3, 6, 8, 11, 13, …}
If you plug those into the original equation
, you’ll see that they are valid solutions! Those equations also represent ALL solutions, so they aren’t only valid, they are also complete.
Let’s step it up just one more notch.
Three Levels (Boss Mode)
Let’s solve the hardest equation from the intro:
This starts off just like the two leveled equation. Something on the left mod 2 is equation to 1. The 1 on the right side is
, but since that’s obviously true for this constant, we don’t need to do anything special. So, we transform it to the below:
Then we do another transformation:
Then for the last transformation, we get this:
Now that we have our equations worked out, we need to start finding out what the values actually are. We start with a because it’s what b and x are based on, and is made up of constants. We get {1,3} again as the valid values of a. That gives us:
Now we want to plug each value of a into b and find all the valid values for b. When we plug 1 into b for a, it becomes
,
. The valid values for that are {1,6}.
When we plug 3 into b for a, it becomes
,
. The only valid value there is {3}.
That means that our valid values for b are {1,3,6}. It’s the union of the valid values we found for each value of a.
That makes our equations into this:
If we plug those values of b into the equation for x, we get these three equations:
The first equation gives us some x values of {…, -6, 1, 8, 15, …}. The second equation gives us x values of {…, -4, 3, 10, 17, …}. The third equation gives us x values of {…, -1, 6, 13, 20, …}.
Taking the union of all of those, we get…
{…, -6, -4, -1, 1, 3, 6, 8, 10, 13, 15, 17, 20, …}
If you plug those numbers into the original equation
, you can see that they are valid solutions!
When you are performing this algorithm, if you ever hit a case where there are no valid answers in one of the steps – like say, there were no valid answers for equation b in the above – that means that there is no solution to your equation.
Sample Code
Since solving these equations can be pretty manual and tedious, here is some code that can solve these equations for you. If you were confused at all by the explanation above, the code may also be able to show you how it works better, especially if you step through it and see what it’s doing.
#include <stdio.h> #include <array> #include <vector> //================================================================================= void WaitForEnter() { printf("\nPress Enter to quit"); fflush(stdin); getchar(); } //================================================================================= void AddSolutions(std::vector<int>& solutions, int add, int multiply, int mod) { int Z = 0; while (1) { int value = multiply * Z + add; if (value < mod) solutions.push_back(value); else return; ++Z; } } //================================================================================= void AddSolutionsFromSolutions (std::vector<int>& solutions, const std::vector<int>& adds, int multiply, int mod) { std::for_each( adds.begin(), adds.end(), [&solutions, multiply, mod] (int add) { AddSolutions(solutions, add, multiply, mod); } ); } //================================================================================= int main(int argc, char **argv) { // a,b,c,d... // ((x % a) % b) % c = d // etc. //const int c_values[] = {7, 5, 2, 1}; const int c_values[] = { 12, 9, 7, 5, 3, 2 }; const size_t c_numValues = sizeof(c_values) / sizeof(c_values[0]); // print the equation printf("Solving for x:\n"); for (size_t i = 0; i < c_numValues - 1; ++i) printf("("); printf("x"); for (size_t i = 0; i < c_numValues - 1; ++i) printf(" %% %i)", c_values[i]); printf(" = %i\n\n", c_values[c_numValues-1]); // print the solution equations for (size_t i = 0; i < c_numValues - 2; ++i) { char eqn = i > 0 ? 'A' + i - 1 : 'x'; char eq = i > 0 ? '=' : 0xF0; printf("%c %c %c + %i*Z", eqn, eq, 'B' + i - 1, c_values[i]); if (i > 0) printf(" (in Z_%i)\n", c_values[i-1]); else printf(" (in Z)\n"); } printf("%c = %i + %i*Z (in Z_%i)\n\n", 'A' + c_numValues - 2 - 1, c_values[c_numValues - 1], c_values[c_numValues - 2], c_values[c_numValues - 3]); // gather up the permutation of solutions for each equation, starting with the lowest equation which has only constants std::array<std::vector<int>, c_numValues - 2> solutions; AddSolutions(solutions[c_numValues - 3], c_values[c_numValues - 1], c_values[c_numValues - 2], c_values[c_numValues - 3]); for (size_t i = c_numValues - 3; i > 0; --i) AddSolutionsFromSolutions(solutions[i-1], solutions[i], c_values[i], c_values[i-1]); // Detect empty set if (solutions[0].size() == 0) { printf("No solutions!\n"); WaitForEnter(); return 0; } // Print the more specific solution equations printf("x = "); for (size_t i = 0, c = solutions[0].size(); i < c; ++i) { if (i < c - 1) printf("%c U ", 'a' + i); else printf("%c\n", 'a' + i); } std::sort(solutions[0].begin(), solutions[0].end()); for (size_t i = 0, c = solutions[0].size(); i < c; ++i) printf("%c = %i + %iZ\n", 'a' + i, solutions[0][i], c_values[0]); // Print specific examples of solutions (first few numbers in each) std::vector<int> xValues; printf("\n"); for (size_t i = 0, c = solutions[0].size(); i < c; ++i) { printf("%c = {..., ", 'a' + i); for (int z = 0; z < 3; ++z) { printf("%i, ", solutions[0][i] + z * c_values[0]); xValues.push_back(solutions[0][i] + z * c_values[0]); } printf("...}\n"); } // Show the list of specific values of X std::sort(xValues.begin(), xValues.end()); printf("\nx = {..., "); std::for_each(xValues.begin(), xValues.end(), [](int v) {printf("%i, ", v); }); printf("...}\n"); // Test the solutions to verify that they are valid! bool valuesOK = true; for (size_t i = 0, c = xValues.size(); i < c; ++i) { int value = xValues[i]; for (size_t j = 0; j < c_numValues - 1; ++j) value = value % c_values[j]; if (value != c_values[c_numValues - 1]) { printf("Solution %i is invalid!!\n", xValues[i]); valuesOK = false; } } if (valuesOK) printf("\nAll solutions shown tested valid!\n"); WaitForEnter(); return 0; }
Here’s an example run, where it solves a 5 level equation.
Links
If you know of a better way to solve this type of equation let me know. This is what I found when trying to figure this stuff out, but it doesn’t mean it’s the only way to do it.
The one thing I don’t like about this technique is that when you find solutions, it’s very manual, and very specific to the values used. Imagine if one of the modulus divisors was a variable instead of them all being constants. How would you solve it then? I’m not really sure…
Have some links!
Math Stack Exchange: How to solve nested congruences?
Khan Academy: The Quotient Remainder Theorem
|
http://blog.demofox.org/2015/09/15/solving-nested-modulus-equations/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
hi, im trying to create a jbutton where the button will allow the user to redo the program again. this program is to roll two dice and have the sum added up and list the frequencies of the sum of 2-12 poppoing out in 36000 rolls. i have gotten the program itself completed... but i do not know how to implement a jbutton.. can someone help? below is my original code, thanks.
import javax.swing.*; // Program uses JOptionPane
import java.awt.*;
import java.awt.event.*;
public class exercise7_15 extends JApplet implements ActionListener
{
JButton rollButton;
JTextArea outputArea;
public void init()
{
Container container = getContentPane();
rollButton = new JButton("Roll Again");
rollButton.addActionListener(this);
container.add( rollButton );
}
public static void main (String args[])
{
int frequency[] = new int[13];
for (int roll = 1; roll <= 36000; roll++)
++frequency[(1 + (int)(Math.random()*6)) + (1 + (int)(Math.random()*6))];
String output = "Dice Number\tFrequency";
for (int DiceNumber = 0; DiceNumber < frequency.length; DiceNumber++)
output += "\n" + DiceNumber + "\t" + frequency[DiceNumber];
JTextArea outputArea = new JTextArea();
outputArea = new JTextArea();
outputArea.setText(output);
JOptionPane.showMessageDialog(null, outputArea, "Roll Dice 36000 Times", JOptionPane.INFORMATION_MESSAGE);
}
public void actionPerformed(ActionEvent actionEvent)
{
}
}
Code:
public void actionPerformed(ActionEvent evt)
{
if (evt.getSource() == rollButton)
{
// stuff
}
}
public void actionPerformed(ActionEvent evt)
{
if (evt.getSource() == rollButton)
{
// stuff
}
}
ugh.. i do prefer a separate ActionEvent handler for each button, rather than a massive set of IFs to find out which button was pressed (i know that drain;s example is not thus, but.. left you the OP's imagination, it might well be)
for the OP: try using NetBeans to make your gui.. it generates a lot of the tedious stuff was just going with what he currently had.
indeed, and it hence motivated me to write:
Originally posted by cjard
(i know that drain;s example is not thus, but.. left you the OP's imagination, it might well be)
learnin' 'em proper' from t' start is better than 'ittin 'em wi' a big stick at t' end, eh?
Forum Rules
Development Centers
-- Android Development Center
-- Cloud Development Project Center
-- HTML5 Development Center
-- Windows Mobile Development Center
|
http://forums.devx.com/showthread.php?139936-Creating-a-JButton&p=413949
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
gnutls_certificate_get_ours - return the raw certificate sent in the last handshake
#include <gnutls/gnutls.h> const gnutls_datum_t * gnutls_certificate_get_ours(gnutls_session_t session);
gnutls_session_t session is a gnutls session
Get the certificate as sent to the peer, in the last handshake. These certificates are in raw format. In X.509 this is a certificate list. In OpenPGP this is a single certificate..
|
http://huge-man-linux.net/man3/gnutls_certificate_get_ours.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
view raw
I am trying to get a spectrum analyzer program example working but it is having problems finding the module. Here is the error I'm getting
Traceback (most recent call last):
File "C:\Users\user\Documents\Programs\Python_program_example.py", line 10, in <module>
rsa300 = ctypes.WinDLL("C:\\Tektronix\\RSA306 API\\lib\\x64\\RSA300API.dll")
File "C:\Python27\lib\ctypes\__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
WindowsError: [Error 126] The specified module could not be found
os.path.exists()
true
The
RSA300API.DLL might have dependencies in the folder so prior to using it, use
os.chdir to set the working directory, for example:
import os os.chdir(r'C:\Tektronix\RSA306 API\lib\x64') rsa300 = ctypes.WinDLL(r"C:\Tektronix\RSA306 API\lib\x64\RSA300API.dll")
Checking one of their samples, this appears to be the recommended way to access it.
Alternatively, as @eryksub has mentioned,
LoadLibraryEx can be used.
win32api could be used to get the handle and pass it to
WinDLL as follows:
import ctypes import win32api import win32con dll_name = r'C:\Tektronix\RSA306 API\lib\x64\RSA300API.dll' dll_handle = win32api.LoadLibraryEx(dll_name, 0, win32con.LOAD_WITH_ALTERED_SEARCH_PATH) rsa300 = ctypes.WinDLL(dll_name, handle=dll_handle)
|
https://codedump.io/share/bP5kttNXMWlZ/1/issues-using-ctypeswindll
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Salt should remain backwards compatible, though sometimes, this backwards compatibility needs to be broken because a specific feature and/or solution is no longer necessary or required. At first one might think, let me change this code, it seems that it's not used anywhere else so it should be safe to remove. Then, once there's a new release, users complain about functionality which was removed and they where using it, etc. This should, at all costs, be avoided, and, in these cases, that specific code should be deprecated.
In order to give users enough time to migrate from the old code behavior to the new behavior, the deprecation time frame should be carefully determined based on the significance and complexity of the changes required by the user.
Salt feature releases are based on the Periodic Table. Any new features going into the develop branch will be named after the next element in the Periodic Table. For example, Beryllium was the feature release name of the develop branch before the 2015.8 branch was tagged. At that point in time, any new features going into the develop branch after 2015.8 was branched were part of the Boron feature release.
A deprecation warning should be in place for at least two major releases before
the deprecated code and its accompanying deprecation warning are removed. More
time should be given for more complex changes. For example, if the current
release under development is
Sodium, the deprecated code and associated
warnings should remain in place and warn for at least
Aluminum.
To help in this deprecation task, salt provides
salt.utils.warn_until. The idea behind this helper function is to show the
deprecation warning to the user until salt reaches the provided version. Once
that provided version is equaled
salt.utils.warn_until will raise a
RuntimeError making salt stop
its execution. This stoppage is unpleasant and will remind the developer that
the deprecation limit has been reached and that the code can then be safely
removed.
Consider the following example:
def some_function(bar=False, foo=None): if foo is not None: salt.utils.warn_until( 'Aluminum', 'The \'foo\' argument has been deprecated and its ' 'functionality removed, as such, its usage is no longer ' 'required.' )
Development begins on the
Aluminum release when the
Magnesium branch is
forked from the develop branch. Once this occurs, all uses of the
warn_until function targeting
Aluminum, along with the code they are
warning about should be removed from the code.
|
https://docs.saltstack.com/en/latest/topics/development/deprecations.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.