text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
|---|---|---|---|---|
Hi list
I have just released ruby-traits v0.1. This are Traits in pure Ruby 1.8.
Have fun.
Traits for Ruby
Traits are Composable Units of Behavior, well that is the
academic title,
For Rubiests the following would be a good definition:
Mixins with Conflict Resolution and a Flattening Property
allowing to avoid subtle problems like Double Inclusion and
calling method x from Mixin L while we wanted method x from
Mixin N.
There is some (extra/nice?) composition syntax too
For details please refer to:
which is a PhD thesis defining traits formally.
And yes Traits are implemented in Squeak 3.9.
In practice Traits enable us to:
- get a RuntimeError when we call a method defined by
more than one trait.
These conflicts can be resolved by redefining the method.
- avoid any double inclusion problem.
- compose Traits
- alias methods during Trait Composition
- resolve super dynamically (mentioned for completeness, Ruby modules
can do this too, of course
Examples:
t1 = trait { def a; 40 end }
t2 = Trait::new{ def a; 2 end }
c1 = Class::new {
use t1, t2
}
c1.new.a --> raises TraitsConflict
conflicts can be resolved be redefinition, and aliasing can be used for
access to the overriden methods. All this can be combined
with traits composition.
t = ( t1 + { :a => :t1_a } ) + ( t2 + {:a => :t2_a } )
c2 = Class::new {
use t
def a; t1_a + t2_a end
}
c2.new.a --> 42
|
https://www.ruby-forum.com/t/rubytraits-0-1/117250
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Improve the Performance of your React Forms
by Kent C. Dodds
Forms are a huge part of the web. Literally every interaction the user takes to
make changes to backend data should use a
form. Some forms are pretty simple,
but in a real world scenario they get complicated quickly. You need to submit
the form data the user entered, respond to server errors, validate the user
input as they're typing (but not before they've blurred the input please), and
sometimes you even need to build custom-made UI elements for form input types
that aren't supported (styleable selects, date pickers, etc.).
All this extra stuff your forms need to do is just more JavaScript the browser has to execute while the user is interacting with your form. This often leads to performance problems that are tricky. Sometimes there's a particular component that's the obvious problem and optimizing that one component will fix things and you can go on your merry way.
But often there's not a single bottleneck. Often the problem is that every user interaction triggers every component to re-render which is the performance bottleneck. I've had countless people ask me about this problem. Memoization won't help them because these form field components accept props that are indeed changing.
The easiest way to fix this is to just not react to every user interaction
(don't use
onChange). Unfortunately, this isn't really practical for many use
cases. We want to display feedback to the user as they're interacting with our
form, not just once they've hit the submit button.
So, assuming we do need to react to a user's interaction, what's the best way to do that without suffering from "perf death by a thousand cuts?" The solution? State colocation!
The demo
Allow me to demonstrate the problem and solution for you with a contrived example. Anyone who has experienced the problem above should hopefully be able to translate this contrived example to a real experience of their past. And if you haven't experienced this problem yet, hopefully you'll trust me when I say the problem is real and the solution works for most use cases.
You'll find the full demo in this codesandbox. Here's a screenshot of what it is:
This is rendered by the following
<App /> component:
function App() {return (<div><h1>Slow Form</h1><SlowForm /><hr /><h1>Fast Form</h1><FastForm /></div>)}
Each of the forms function exactly the same, but if you try it out the
<SlowForm /> is observably slower (try typing into any field quickly). What
they each render is a list of fields which all have the same validation logic
applied:
- You can only enter lower-case characters
- The text length must be between 3 and 10 characters
- Only display an error message if the field has been "touched" or if the form has been submitted.
- When the form is submitted, all the data for the fields is logged to the console.
At the top of the file you get a few knobs to test things out:
window.PENALTY = 150_000const FIELDS_COUNT = 10
The
FIELDS_COUNT controls how many fields are rendered.
The
PENALTY is used in our
<Penalty /> component which each of the fields
renders to simulate a component that takes a bit of extra time to render:
let currentPenaltyValue = 2function PenaltyComp() {for (let index = 2; index < window.PENALTY; index++) {currentPenaltyValue = currentPenaltyValue ** index}return null}
Effectively
PENALTY just controls how many times the loop runs to make the
exponentiation operator run for each field. Note, because
PENALTY is on
window you can change it while the app is running to test out different
penalties. This is useful to adjust it for the speed of your own device. Your
computer and my computer have different performance characteristics so some of
your measurements may be a bit different from mine. It's all relative.
All right, with that explanation out of the way, let's look at the
<SlowForm /> first.
<SlowForm />
/*** When managing the state higher in the tree you also have prop drilling to* deal with. Compare these props to the FastInput component*/function SlowInput({name,fieldValues,touchedFields,wasSubmitted,handleChange,handleBlur,}: {name: stringfieldValues: Record<string, string>touchedFields: Record<string, boolean>wasSubmitted: booleanhandleChange: (event: React.ChangeEvent<HTMLInputElement>) => voidhandleBlur: (event: React.FocusEvent<HTMLInputElement>) => void}) {const value = fieldValues[name]const touched = touchedFields[name]const errorMessage = getFieldError(value)const displayErrorMessage = (wasSubmitted || touched) && errorMessagereturn (<div key={name}><PenaltyComp /><label htmlFor={`${name}-input`}>{name}:</label> <inputid={`${name}-input`}name={name}{errorMessage}</span>) : null}</div>)}/*** The SlowForm component takes the approach that's most common: control all* fields and manage the state higher up in the React tree. This means that* EVERY field will be re-rendered on every keystroke. Normally this is no* big deal. But if you have some components that are even a little expensive* to re-render, add them all up together and you're toast!*/function SlowForm() {const [fieldValues, setFieldValues] = React.useReducer((s: typeof initialFieldValues, a: typeof initialFieldValues) => ({...s,...a,}),initialFieldValues,)const [touchedFields, setTouchedFields] = React.useReducer((s: typeof initialTouchedFields, a: typeof initialTouchedFields) => ({...s,...a,}),initialTouchedFields,)const [wasSubmitted, setWasSubmitted] = React.useState(false)function handleSubmit(event: React.FormEvent<HTMLFormElement>) {event.preventDefault()const formIsValid = fieldNames.every((name) => !getFieldError(fieldValues[name]),)setWasSubmitted(true)if (formIsValid) {console.log(`Slow Form Submitted`, fieldValues)}}function handleChange(event: React.ChangeEvent<HTMLInputElement>) {setFieldValues({[event.currentTarget.name]: event.currentTarget.value})}function handleBlur(event: React.FocusEvent<HTMLInputElement>) {setTouchedFields({[event.currentTarget.name]: true})}return (<form noValidate onSubmit={handleSubmit}>{fieldNames.map((name) => (<SlowInputkey={name}name={name}fieldValues={fieldValues}touchedFields={touchedFields}wasSubmitted={wasSubmitted}handleChange={handleChange}handleBlur={handleBlur}/>))}<button type="submit">Submit</button></form>)}
I know there's a lot going on there. Feel free to take your time to get an idea
of how it works. The key thing to keep in mind is that all the state is managed
in the
<SlowForm /> component and the state is passed as props to the
underlying fields.
Alright, so let's profile an interaction with this form. I've built this for production (with profiling enabled). To keep our testing consistent, the interaction I'll do is focus on the first input, type the character "a" and then "blur" (click out of) that input.
I'll start a Performance profiling session with the Browser DevTools with a 6x slowdown to simulate a slower mobile device. Here's what the profile looks like:
Wowza. Check that out! 97 milliseconds on that keypress event. Remember that we only have ~16 milliseconds to do our JavaScript magic. Any longer than that and things start feeling really janky. And at the bottom there it's telling us we've blocked the main thread for 112 milliseconds just by typing a single character and blurring that input. Yikes.
Don't forget this is a 6x slowdown, so it won't be quite that bad for many users, but it's still an indication of a severe performance issue.
Let's try the React DevTools profiler and observe what React is doing when we interact with one of the form fields like that.
Huh, so it appears that every field is re-rendering. But they don't need to! Only the one I'm interacting with does!
Your first instinct to fix this might be to memoize each of your field components. The problem is you'd have to make sure you memoize all the props that are passed which can really spider out to the rest of the codebase quickly. On top of that, we'd have to restructure our props so we only pass primitive or memoizeable values. I try to avoid memoizing if I can for these reasons. And I can! Let's try state colocation instead!
<FastForm />
Here's the exact same experience, restructured to put the state within the individual fields. Again, take your time to read and understand what's going on here:
/*** Not much we need to pass here. The `name` is important because that's how* we retrieve the field's value from the form.elements when the form's* submitted. The wasSubmitted is useful to know whether we should display* all the error message even if this field hasn't been touched. But everything* else is managed internally which means this field doesn't experience* unnecessary re-renders like the SlowInput component.*/function FastInput({name,wasSubmitted,}: {name: stringwasSubmitted: boolean}) {const [value, setValue] = React.useState('')const [touched, setTouched] = React.useState(false)const errorMessage = getFieldError(value)const displayErrorMessage = (wasSubmitted || touched) && errorMessagereturn (<div key={name}><PenaltyComp /><label htmlFor={`${name}-input`}>{name}:</label> <inputid={`${name}-input`}name={name}{errorMessage}</span>) : null}</div>)}/*** The FastForm component takes the uncontrolled approach. Rather than keeping* track of all the values and passing the values to each field, we let the* fields keep track of things themselves and we retrieve the values from the* form.elements when it's submitted.*/function FastForm() {const [wasSubmitted, setWasSubmitted] = React.useState(false)function handleSubmit(event: React.FormEvent<HTMLFormElement>) {event.preventDefault()const formData = new FormData(event.currentTarget)const fieldValues = Object.fromEntries(formData.entries())const formIsValid = Object.values(fieldValues).every((value: string) => !getFieldError(value),)setWasSubmitted(true)if (formIsValid) {console.log(`Fast Form Submitted`, fieldValues)}}return (<form noValidate onSubmit={handleSubmit}>{fieldNames.map((name) => (<FastInput key={name} name={name} wasSubmitted={wasSubmitted} />))}<button type="submit">Submit</button></form>)}
Got it? Again, lots happening, but the most important thing to know there is the state is being managed within the form fields themselves rather than in the parent. Let's try out the performance profiler on this now:
NICE! Not only are we within the 16 millisecond budget, but you might have noticed it says we had a total blocking time of 0 milliseconds! That's a lot better than 112 milliseconds 😅 And remember, that we're on a 6x slowdown so for many users it will be even better.
Let's pop open the React DevTools and make sure we're only rendering the component that needs to be rendered with this interaction:
Sweet! The only component that re-rendered was the one that needed to. In fact,
the
<FastForm /> component didn't re-render, so as a result none of the other
children needed to either so we didn't need to muck around with memoization at
all.
Nuance...
Now, sometimes you have fields that need to know one another's value for their own validation (for example, a "confirm password" field needs to know the value of the "password" field to validate it is the same). In that case, you have a few options. You could hoist the state to the least common parent which is not ideal because it means every component will re-render when that state changes and then you may need to start worrying about memoization (nice that React gives us the option!).
Another option is to put it into context local to your component so only the context provider and consumers re-render when it's changed. Just make sure you structure things so you can take advantage of this optimization or it won't be much better.
A third option is to step outside of React and reference the DOM directly. The
concerned component(s) could attach their own
change event listener to their
parent form and check whether the changed value is the one they need to validate
against.
Brooks Lybrand created an example of both of two of these alternatives you can check out if you'd like to get a better idea of what I mean:
The nice thing is that you can try each of these approaches and choose the one you like best (or the one you dislike the least 😅).
Conclusion
You can try the demo yourself here:
Remember if you want to try profiling with the Chrome DevTools Performance tab,
make sure you have built it for production and play around with the throttle and
the
PENALTY value.
At the end of the day, what matters most is your application code. So I suggest you try some of these profiling strategies on your app and then try state colocation to improve the performance of your components.
Good luck!
|
https://epicreact.dev/improve-the-performance-of-your-react-forms/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
- 22 Feb, 2019 5 commits
- 20 Feb, 2019 7 commits
-
- 19 Feb, 2019 6 commits
-
- 18 Feb, 2019 6 commits
-
- 15 Feb, 2019 5 commits
-
- 14 Feb, 2019 11 commits
-
Allows package namespace to be set and fine-tune conda/documentation urls See merge request !24
New doc strategy See merge request !23
[templates] Implements new documentation installation strategy to solve test-only conda-builds based on tarballs
[conda] Implements new documentation installation strategy to solve test-only conda-builds based on tarballs
Test packages of all architectures See merge request !22
New functionality to run test-only builds using conda packages See merge request !21
Add support for packages required to run linux builds on docker-build hosts See merge request !20
[conda-build-config] Update bob/beat-devel versions See merge request !19
Refinements to allow direct commits to master, build skips with auto-merge See merge request !18
Terminal colors See merge request !17
|
https://gitlab.idiap.ch/bob/bob.devtools/-/commits/f997dde800616036ee393df7f90bf1831a862482
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
ULiege - Aerospace & Mechanical Engineering
Due to historical reasons, the preprocessor of SAMCEF (called BACON) can be used to generate a mesh and geometrical entities. The module
toolbox.samcef defines functions that convert a
.fdb file (
fdb stands for “formatted database”) to commands that Metafor understands. If a
.dat file (i.e. a BACON script file) is provided, Metafor automatically runs BACON first as a background process in order to create the missing
.fdb file. In this case, a SAMCEF license is required.
The example simulation consists of a sheared cube (lower and upper faces and fixed and moved one with respect to another). The figure below shows the geometry:
The BACON input file named
cubeCG.dat (located in
apps/ale) contains the commands used to create the geometry, the mesh and node groups (selections).
Geometry
The geometry is created with the following BACON commands (see BACON manual for details):
.3POIfor the 8 points.
.3DROfor the 12 edges.
.CONfor the 6 wires.
.FACEfor the 6 sides.
.PLANto create a surface (required for the 2D domain, see below).
.VPEAUfor the skin.
.DOMfor the domain (one 3D domain - the cube - and one 2D domain - the lower side which will be meshed and extruded).
Mesh Generation
.GEN.
MODIFIE LIGNE”.
transfini”) and extruded with the command “
EXTRUSION”.
Choice of Element Type
.HYP VOLUMEis used (in 2D and 3D).
Definition of node/element groups (selections)
.SELis used
.RENis used to change the numbering of nodes and mesh elements, it should be done before the
.SEL.
Manual Creation of the
.fdb file
BACON can be started manually with the command:
samcef ba aleCubeCg n 1
From the
.dat file, a
.fdb is created with BACON with the commands:
INPUT .SAUVE DB FORMAT .STO
Summary: What can be imported from BACON?
Line-
Arc(points generated automatically with a negative numbed in Metafor are shifted (
numPoint = numMaxPoints-numPoint)
Plan-
Ruled-
Coons(Only planes defined using three numbers are imported (other planes generate cpoints outside db) ).
MultiProjSkinobjects are created, it is best to create them in the Python data set.
MeshedPoints)
nodeOnLine, …)
Reading the BACON file from Metafor
In Metafor, the file
.dat is converted thanks to a conversion module named
toolbox.samcef (see
apps.ale.cubeCG):
import toolbox.samcef bi = toolbox.samcef.BaconImporter(domain, os.path.splitext(__file__)[0]+'.dat' ) bi.execute()
where
domain is the domain that should be filled with the converted mesh and geometry. The second argument corresponds to the full path to the file
cubeCG.dat (it is computed from the full path of the python input file).
If all goes well, a file
cubeCG.fdb is then created in the folder
workspace/apps_ale_cubeCG
Element Generation in Metafor
The BACON attributes are converted to
Groups in Metafor. For example, if attribute #99 has been used when generating mesh in BACON, all the elements are stored in
groupset(99) in Metafor.
app = FieldApplicator(1) app.push(groupset(99)) interactionset.add(app)
Boundary conditions in Metafor
Selections in BACON are translated into
Groups with the same number. Boundary conditions such as prescribed displacements or contact can be thus defined easily in Metafor. For example, a selection such as
.SEL GROUPE 4 NOEUDS can lead to the following command in the input file:
|
http://metafor.ltas.ulg.ac.be/dokuwiki/doc/user/geometry/import/tuto2
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Hey,
New psychopy3 user here in need of some help.
I am trying to randomize the pairing of two stimuli images that flash on the screen at the same time for a total of 60 trials. Within these 60 trials I need to fulfill these 3 cases:
- a cupcake on the left and a muffin on the right 15 times
- a muffin on the left and a cupcake on the right 15 times
- a cupcake on the left and a cupcake on the right 30 times
I have both 8 cupcake images and 8 muffin images (excel file below) that need to be randomly paired together to meet these 3 criteria:
At the very beginning of the experiment I’ve set up a routine with some code that does the following:
import random, xlrd random.seed() in_file = 'cupmuf_database.xlsx' num_items = 8 num_tests = 60 cur = 0
After some instructions for participants comes my trial routine.
In begin routine:
inbook = xlrd.open_workbook(in_file) insheet = inbook.sheet_by_index(0) cases = ["c_cupR_mufL","c_mufR_cupL","c_cupL_cupR"] cup_stim = [] muf_stim = [] c_cupR_mufL_count = 15 c_mufR_cupL_count = 15 c_cupL_cupR_count = 30 left = [] right = [] correct=[] for rowx in range(1,num_items+1): row = insheet.row_values(rowx) cup_stim.append(row[0]) muf_stim.append(row[1]) for x in range(60): if (c_cupR_mufL_count == 0 and "c_cupR_mufL" in cases): cases.remove("c_cupR_mufL") if (c_mufR_cupL_count == 0 and "c_mufR_cupL" in cases): cases.remove("c_mufR_cupL") if (c_cupL_cupR_count == 0 and "c_cupL_cupR" in cases): cases.remove("c_cupL_cupR") ran = random.randrange(0, len(cases)) test = cases[ran] if (test == "c_cupR_mufL"): right.append(cup_stim[random.randrange(1, 8)]) left.append(muf_stim[random.randrange(1, 8)]) c_cupR_mufL_count = c_cupR_mufL_count - 1 correct.append(1) if (test == "c_mufR_cupL"): left.append(cup_stim[random.randrange(1, 8)]) right.append(muf_stim[random.randrange(1, 8)]) c_mufR_cupL_count = c_mufR_cupL_count - 1 correct.append(1) if (test == "c_cupL_cupR"): leftVal = random.randrange(1, 8) rightVal = random.randrange(1, 8) if leftVal == rightVal: if rightVal == 8: rightVal = rightVal - 1 else: rightVal = rightVal + 1 left.append(cup_stim[leftVal -1]) right.append(cup_stim[rightVal -1]) c_cupL_cupR_count = c_cupL_cupR_count - 1 correct.append(0)
In end routine:
thisExp.addData('left', left[cur]) thisExp.addData('right', right[cur]) thisExp.addData('correct', correct[cur]) if key_resp_2.keys == "left" and correct[cur] == 1: thisExp.addData('res', 1) else: thisExp.addData('res', 0) if left[cur] == muf_stim or right[cur] == muf_stim: isTarget = 1 else: isTarget = 0 cur = cur + 1
My loop is set for nReps to equal $num_tests (which I’ve defined as 60).
When I run this I do get 60 trials each time and all 8 cupcake and muffin images are being used but I don’t get the correct number of pairings for each of my 3 cases. For example, I’ll get 23 cupcake left and cupcake right instead of 30 cupcake left and cupcake right.
I hope this was enough information, I can clarify if something doesn’t make sense.
Thanks in advance.
|
https://discourse.psychopy.org/t/counters-to-randomize-stimuli-images-not-working-properly/14444
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
© 2008-2019 The original authors.
Preface
The Spring Data MongoDB project applies core Spring concepts to the development of solutions that use the MongoDB document style data store. We provide a “template” as a high-level abstraction for storing and querying documents. You may notice similarities to the JDBC support provided by the Spring Framework.
This document is the reference guide for Spring Data - MongoDB Support. It explains MongoDB module concepts and semantics and syntax for various store namespaces.
This section provides some basic introduction to Spring and Document databases. The rest of the document refers only to Spring Data MongoDB features and assumes the user is familiar with MongoDB and Spring concepts.
1. Learning Spring
While you need not know the Spring APIs, understanding the concepts behind them is important. At a minimum, the idea behind Inversion of Control (IoC) should be familiar, and you should be familiar with whatever IoC container you choose to use.
The core functionality of the MongoDB support can be used directly, with no need to invoke the IoC services of the Spring Container. This is much like
JdbcTemplate, which can be used "'standalone'" without any other services of the Spring container. To leverage all the features of Spring Data MongoDB, such as the repository support, you need to configure some parts of the library to use Spring.
2. Learning NoSQL and Document databases
NoSQL stores have taken the storage world by storm. It is a vast domain with a plethora of solutions, terms, and patterns (to make things worse, even the term itself has multiple meanings). While some of the principles are common, you must be familiar with MongoDB to some degree. The best way to get acquainted is to read the documentation and follow the examples. It usually does not take more then 5-10 minutes to go through them and, especially if you are coming from an RDMBS-only background, these exercises can be an eye opener.
The starting point you can purchase.
Karl Seguin’s online book: The Little MongoDB Book.
3. Requirements
The Spring Data MongoDB 2.x binaries require JDK level 8.0 and above and Spring Framework 5.2.5.RELEASE and above.
4. Additional Help Resources
Learning a new framework is not always straightforward. In this section, we try to provide what we think is an easy-to-follow guide for starting with the Spring Data MongoDB module. However, if you encounter issues or you need advice, feel free to use one of the following links:
- Community Forum
Spring Data on Stack Overflow is a tag for all Spring Data (not just Document) users to share information and help each other. Note that registration is needed only for posting.
- Professional Support
Professional, from-the-source support, with guaranteed response time, is available from Pivotal Sofware, Inc., the company behind Spring Data and Spring.
5. Following Development
For information on the Spring Data Mongo source code repository, nightly builds, and snapshot artifacts, see the Spring Data Mongo homepage. You can help make Spring Data best serve the needs of the Spring community by interacting with developers through the Community on Stack Overflow.. You can also follow the Spring blog or the project team on Twitter (SpringData).
6. New & Noteworthy
6.1.. What’s New in Spring Data MongoDB 1.10
Compatible with MongoDB Server 3.4 and the MongoDB Java Driver 3.4.
New annotations for
@CountQuery,
@DeleteQuery, and
@ExistsQuery.
Extended support for MongoDB 3.2 and MongoDB 3.4 aggregation operators (see Supported Aggregation Operations).
Support for partial filter expression when creating indexes.
Publishing lifecycle events when loading or converting
DBRefinstances.
Added any-match mode for Query By Example.
Support for
$caseSensitiveand
$diacriticSensitivetext search.
Support for GeoJSON Polygon with hole.
Performance improvements by bulk-fetching
DBRefinstances.
Multi-faceted aggregations using
$facet,
$bucket, and
$bucketAutowith
Aggregation.
6.5. What’s New in Spring Data MongoDB 1.9
The following annotations have been enabled to build your own composed annotations:
@Document,
@Id,
@Field,
@Indexed,
@CompoundIndex,
@GeoSpatialIndexed,
@TextIndexed,
@Query, and
@Meta.
Support for Projections in repository query methods.
Support for Query by Example.
Out-of-the-box support for
java.util.Currencyin object mapping.
Support for the bulk operations introduced in MongoDB 2.6.
Upgrade to Querydsl 4.
Assert compatibility with MongoDB 3.0 and MongoDB Java Driver 3.2 (see: MongoDB 3.0 Support).
6.6. What’s New in Spring Data MongoDB 1.8
Criteriaoffers support for creating
$geoIntersects.
Support for SpEL expressions in
@Query.
MongoMappingEventsexpose the collection name for which they are issued.
Improved support for
<mongo:mongo-client.
Improved index creation failure error message.
6.7. What’s New in Spring Data MongoDB 1.7
Assert compatibility with MongoDB 3.0 and MongoDB Java Driver 3-beta3 (see: MongoDB 3.0 Support).
Support JSR-310 and ThreeTen back-port date/time types.
Allow
Streamas a query method return type (see: Query Methods).
GeoJSON support in both domain types and queries (see: GeoJSON Support).
QueryDslPredicateExcecutornow supports
findAll(OrderSpecifier<?>… orders).
Support calling JavaScript functions with Script Operations.
Improve support for
CONTAINSkeyword on collection-like properties.
Support for
$bit,
$mul, and
$positionoperators to
Update.>
7.1. Dependency Management with Spring Boot
Spring Boot selects a recent version of Spring Data modules for you. If you still want to upgrade to a newer version, configure the property
spring-data-releasetrain.version to the train name and iteration you would like to use.
8. Working with Spring Data Repositories
The goal of the Spring Data repository abstraction is to significantly reduce the amount of boilerplate code required to implement data access layers for various persistence stores.
8); }
8:
8.3. Defining Repository Interfaces
First, define a domain class-specific repository interface. The interface must extend
Repository and be typed to the domain class and an ID type. If you want to expose CRUD methods for that domain type, extend
CrudRepository instead of
Repository.
8.
8 { … }
8.
8.
8”.
8).
8()));
8.
8.
8) }
8(…); }
8.5..
8>
8" />
8.
8.8. Spring Data Extensions
This section documents a set of Spring Data extensions that enable Spring Data usage in a variety of contexts. Currently, most of the integration is targeted towards Spring MVC.
8);
8) } }
8
9.. MongoDB support
The MongoDB support contains a wide range of features:
Spring configuration support with Java-based
@Configurationclasses or an XML namespace for a Mongo driver instance and replica sets.
MongoTemplatehelper class that increases productivity when performing common Mongo operations. Includes integrated object mapping between documents and POJOs.
Exception translation into Spring’s portable Data Access Exception hierarchy.
Feature-rich Object Mapping integrated with Spring’s Conversion Service.
Annotation-based mapping metadata that is extensible to support other metadata formats.
Persistence and mapping lifecycle events.
Java-based Query, Criteria, and Update DSLs.
Automatic implementation of Repository interfaces, including support for custom finder methods.
QueryDSL integration to support type-safe queries.
Cross-store persistence support for JPA Entities with fields transparently persisted and retrieved with MongoDB (deprecated - to be removed without replacement).
GeoSpatial integration.
For most tasks, you should use
MongoTemplate or the Repository support, which both leverage the rich mapping functionality.
MongoTemplate is the place to look for accessing functionality such as incrementing counters or ad-hoc CRUD operations.
MongoTemplate also provides callback methods so that it is easy for you to get the low-level API artifacts, such as
com.mongod running. .Add the following to the pom.xml files
dependencieselement:
<dependencies> <!-- other dependency elements omitted --> <dependency> <groupId>org.springframework.data</groupId> <artifactId>spring-data-mongodb</artifactId> <version>2.2.6.RELEASE</version> </dependency> </dependencies>
Change the version of Spring in the pom.xml to be
<spring.framework.version>5.2.5.RELEASE</spring.framework.version>
Add the following location of the Spring Milestone repository for Maven to your
pom.xmlsuch that it is at the same level of your
<dependencies/>element:
<repositories> <repository> <id>spring-milestone</id> <name>Spring Maven MILESTONE Repository</name> <url></url> </repository> </repositories>
The repository is also browseable here.
You may also want to set the logging level to
DEBUG to see some additional information. To do so, edit the
log4j.properties file to have the following content:
log4j.category.org.springframework.data.mongodb=DEBUG log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %40.40c:%4L - %m%n
Then you can create a also need(), "database"); mongoOps.insert(new Person("Joe", 34)); log.info(mongoOps.findOne(new Query(where("name").is("Joe")), Person.class)); mongoOps.dropCollection("person"); } }
When you run the main program, the preceding examples Document notice:
You can instantiate the central helper class of Spring Mongo,
MongoTemplate, by using the standard
com.mongodb.MongoClientobject and the name of the database to use.
The mapper works against standard POJO objects without the need for any additional metadata (though you can optionally provide that information. See here.).
Conventions are used for handling the
idfield, converting it to be an
ObjectIdwhen stored in the database.
Mapping conventions can use field access. Notice that the
Personclass has only getters.
If the constructor argument names match the field names of the stored document, they are used to instantiate the object or
com.mongodb.client has the added advantage of also providing the container with an
ExceptionTranslator implementation that translates MongoDB exceptions to exceptions in Spring’s portable
DataAccessException hierarchy for data access classes annotated with the
@Repository annotation. This hierarchy and the use of
@Repository is described in Spring’s DAO support features.
The following example shows an example of a Java-based bean metadata that supports exception translation on
@Repository annotated classes:
com.mongodb.M.MongoClient with the container, the XML can be quite verbose, as it is general-purpose. XML namespaces are a better alternative to configuring commonly used objects, such as the Mongo instance. The mongo namespace lets you create a Mongo instance server location, replica-sets, and options.
To use the Mongo namespace elements, you need to reference the Mongo schema, as follows:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <!-- object and access all the functionality of a specific MongoDB database instance. Spring provides the
org.springframework.data.mongodb.core.MongoDbFactory instance to configure
MongoTemplate.
Instead of using the IoC container to create an instance of MongoTemplate, you can use them in standard Java code, as follows:
public class MongoApp { private static final Log log = LogFactory.getLog(MongoApp.class); public static void main(String[] args) throws Exception { MongoOperations mongoOps = new MongoTemplate(new SimpleMongactory, you can refer to an existing bean by using the
mongo-ref attribute as shown in the following example. To show another common usage pattern, the following listing shows the use of a property placeholder, which lets you parametrize the configuration and the creation of a
MongoTemplate:
<context:property-placeholder <mongo:mongo-client <mongo:client-client> <mongo:db-factory <bean id="anotherMongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate"> <constructor-arg </bean>
10.4. Introduction to
MongoTemplate
The
MongoTemplate class, located in the
org.springframework.data.mongodb.core package, is the central class of Spring’s MongoDB support and provides a rich feature set for interacting with the database. The template offers convenience operations to create, update, delete, and query MongoDB documents and provides a mapping between your domain objects and MongoDB documents.
The mapping between MongoDB documents and domain classes is done by delegating to an implementation of the
MongoConverter interface. Spring provides
MappingMongoConverter, but you can also write your own converter. See “Custom Conversions - Overriding Default Mapping” for more detailed information.
The
MongoTemplate class implements the interface
MongoOperations. In as much as possible, the methods on
MongoOperations are named after methods available on the MongoDB driver
Collection object, to make the API familiar to existing MongoDB developers who are used to the driver API. For example, you can find methods such as
find,
findAndModify,
findAndReplace,
findOne,
insert,
remove,
save,
update, and
updateMulti. The design goal was to make it as easy as possible to transition between the use of the base MongoDB driver and
MongoOperations. A major difference between the two APIs is that
MongoOperations can be passed domain objects instead of
Document. Also,
MongoOperations has fluent APIs for
Query,
Criteria, and
Update operations instead of populating a
Document to specify the parameters for those operations.
The default converter implementation used by
Mong
MongoTemplate is translation of exceptions thrown by the MongoDB Java driver into Spring’s portable Data Access Exception hierarchy. See “Exception Translation” for more information.
MongoTemplate offers many convenience methods to help you easily perform common tasks. However, if you need to directly access the MongoDB driver API, you can use one of several
Execute callback methods. The execute callbacks gives you a reference to either a
com.mongodb.client.MongoCollection or a
com.mongodb.client.MongoDatabase object. See the “Execution Callbacks” section for more information.
The next section contains an example of how to work with the
MongoTemplate in the context of the Spring container.object, database name, and username and password.
MongoTemplate(MongoDbFactory mongoDbFactory, MongoConverter mongoConverter): Adds a
MongoConverterto use for mapping.
You can also configure a MongoTemplate by using Spring’s XML <beans/> schema, as the following example shows:
<mongo:mongo-client <bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate"> <constructor-arg <constructor-arg </bean>
Other optional properties that you might like to set when creating a
MongoTemplate are the default
WriteResultCheckingPolicy,
WriteConcern, and
ReadPreference properties.
10.), you can set the
com.mongodb.WriteConcern property that the
MongoTemplate uses for write operations. If the
WriteConcern property is not set, it defaults to the one set in the MongoDB driver’s DB or Collection setting.
10.4.4.
WriteConcernResolver); }
You can use the
MongoAction argument to determine the
WriteConcern value or use the value of the Template itself as a default.
MongoAction contains the collection name being written to, the
java.lang.Class of the POJO, the converted
Document, the operation (
REMOVE,
UPDATE,
INSERT,
INSERT_LIST, or
SAVE), and a few other pieces of contextual information. The following example shows two sets of classes getting different
WriteConcern settings:(); } }
10.5. Saving, Updating, and Removing Documents
MongoTemplate lets you save, update, and delete your domain objects and map those objects to documents stored in MongoDB.
Consider the following + "]"; } }
Given the
Person class in the preceding example, you can save, update and delete the object, as the following example shows:(), ); } }
The preceding example would produce the following log output (including debug messages from
MongoTemplate):
DEBUG apping.MongoPersistentEntityIndexCreator: 80 - Analyzing class class org.spring.example.Person for index information. DEBUG work.data.mongodb.core.MongoTemplate: 632 - insert Document]
MongoConverter caused implicit conversion between a
String and an
ObjectId stored in the database by recognizing (through convention) the
Id property name.
The query syntax used in the preceding example is explained in more detail in the section “Querying Documents”.
10.5.1. How the
_id Field is Handled in the Mapping Layer
MongoDB requires that you have an
_id field for all documents. If you do not provide one, the driver assigns an
ObjectId with a generated value. When you use the
MappingMongoConverter, certain rules govern how properties from the Java class are mapped to this
_id field:
A property or field annotated with
@Id(
org.springframework.data.annotation.Id) maps to the
_idfield.
A property or field without an annotation but named
idmaps to the
_idfield.
The following outlines what type conversion, if any, is done on the property mapped to the
_id document field when using the
MappingMongoConverter (the default for
MongoTemplate).
If possible, an
idproperty or field declared as a
Stringin the Java class is converted to and stored as an
ObjectIdby using a Spring
Converter<String, ObjectId>. Valid conversion rules are delegated to the MongoDB Java driver. If it cannot be converted to an
ObjectId, then the value is stored as a string in the database.
An
idproperty or field declared as
BigIntegerin the Java class is converted to and stored as an
ObjectIdby using a Spring
Converter<BigInteger, ObjectId>.
If no field or property specified in the previous sets of rules is present in the Java class, an implicit
_id file is generated by the driver but not mapped to a property or field of the Java class.
When querying and updating,
MongoTemplate uses the converter that corresponds to the preceding rules for saving documents so that field names and types used in your queries can match what is in your domain classes.) }
10.5.2. Type Mapping
MongoDB collections can contain documents that represent instances of a variety of types. This feature can be useful if you store a hierarchy of classes or its main implementation. Its default behavior to store the fully qualified classname under
_class inside the document..
Customizing Type Mapping
If you want to avoid writing the entire Java class name as type information but would rather like to use a key, you can use the
@TypeAlias annotation on the entity class. If you need to customize the mapping even more, have a look at the
TypeInformationMapper interface. An instance of that interface can be configured at the
DefaultMongoTypeMapper, which can, in turn, be configured on
MappingMongoConverter. The following example shows how to define a type alias for an entity:
@TypeAlias("pers") class Person { }
Note that the resulting document contains
pers as the value in the
_class Field.
Configuring Custom Type Mapping
The following example shows how to configure a custom
MongoTypeMapper in
MappingMongoConverter:
MongoTypeMapperwith Spring Java Config
class CustomMongoTypeMapper extends DefaultMongoTypeMapper { //implement custom type mapping here }
@Configuration class SampleMongoConfiguration extends AbstractMongoConfiguration { @Override protected String getDatabaseName() { return "database"; } @Override public MongoClient mongoClient() { return new MongoClient(); } the preceding example extends the
AbstractMongoConfiguration class and overrides the bean definition of the
MappingMongoConverter where we configured our custom
MongoTypeMapper.
The following example shows how to use XML to configure a custom
MongoTypeMapper:
MongoTypeMapperwith XML
<mongo:mapping-converter <bean name="customMongoTypeMapper" class="com.bubu.mongo.CustomMongoTypeMapper"/>
10.5.3. Methods for Saving and Inserting Documents
There are several convenient methods on
MongoTemplate for saving and inserting your objects. To have more fine-grained control over the conversion process, you can register Spring converters with the
MappingMongoConverter — for example
Converter<Person, Document> and
Converter<Document, Person>.
The simple case of using the save operation is to save a POJO. In this case, the collection name is determined by name (not fully qualified) of the class. You may also call the save operation with a specific collection name. You can use mapping metadata to override the collection in which to store the object.
When inserting or saving, if the
Id property is not set, the assumption is that its value will be auto-generated by the database. Consequently, for auto-generation of an
ObjectId to succeed, the type of the
Id property or field in your class must be a
String, an
ObjectId, or a
BigInteger.
The following example shows how to save a document and retrieving its contents: following insert and save operations are available:
voidsave
(Object objectToSave): Save the object to the default collection.
voidsave
(Object objectToSave, String collectionName): Save the object to the specified collection.
A similar set of insert operations is also available:
voidinsert
(Object objectToSave): Insert the object to the default collection.
voidinsert
(Object objectToSave, String collectionName): Insert the object to the specified collection.
Into Which Collection Are My Documents Saved?
There are two ways to manage the collection name that is used for the documents. The default collection name that is used is the class name changed to start with a lower-case letter. So a
com.test.Person class is stored in the
person collection. You can customize this by providing a different collection name with the
@Document annotation. You can also override the collection name by providing your own collection name as the last parameter for the selected
MongoTemplate method calls.
Inserting or Saving Individual Objects
The MongoDB driver supports inserting a collection of documents in a single operation. The following methods in the
MongoOperations interface support this functionality:
insert: Inserts an object. If there is an existing document with the same
id, an error is generated.
insertAll: Takes a
Collectionof objects as the first parameter. This method inspects each object and inserts it into the appropriate collection, based on the rules specified earlier.
save: Saves the object, overwriting any object that might have the same
id.
Inserting Several Objects in a Batch
The MongoDB driver supports inserting a collection of documents in one operation. The following methods in the
MongoOperations interface support this functionality:
insert methods: Take a
Collectionas the first argument. They insert a list of objects in a single batch write to the database.
10.5.4. Updating Documents in a Collection
For updates, you can update the first document found by using
MongoOperation.updateFirst or you can update all documents that were found to match the query by using the
MongoOperation.updateMulti method. The following example shows an update of all
SAVINGS accounts where we are adding a one-time $50.00 bonus to the balance by using the
$inc operator: earlier, we provide the update definition by using an
Update object. The
Update class has methods that match the update modifiers available for MongoDB.
Most methods return the
Update object to provide a fluent style for the API.
Methods for Executing Updates for Documents
updateFirst: Updates the first document that matches the query document criteria with the updated document.
updateMulti: Updates all objects that match the query document criteria with the updated document.
Methods in the
Update Class
You can use a little "'syntax sugar'" with the
Update class, as its methods are meant to be chained together. Also, you can kick-start the creation of a new
Update instance by using
public static Update update(String key, Object value) and using static imports.
The
Update class contains the following methods:
UpdateaddToSet
(String key, Object value)Update using the
$addToSetupdate modifier
UpdatecurrentDate
(String key)Update using the
$currentDateupdate modifier
UpdatecurrentTimestamp
(String key)Update using the
$currentDateupdate modifier with
$type
timestamp
Updateinc
(String key, Number inc)Update using the
$incupdate modifier
Updatemax
(String key, Object max)Update using the
$maxupdate modifier
Updatemin
(String key, Object min)Update using the
$minupdate modifier
Updatemultiply
(String key, Number multiplier)Update using the
$mulupdate modifier
Updatepop
(String key, Update.Position pos)Update using the
$popupdate modifier
Updatepull
(String key, Object value)Update using the
$pullupdate modifier
UpdatepullAll
(String key, Object[] values)Update using the
$pullAllupdate modifier
Updatepush
(String key, Object value)Update using the
$pushupdate modifier
UpdatepushAll
(String key, Object[] values)Update using the
$pushAllupdate modifier
Updaterename
(String oldName, String newName)Update using the
$renameupdate modifier
Updateset
(String key, Object value)Update using the
$setupdate modifier
UpdatesetOnInsert
(String key, Object value)Update using the
$setOnInsertupdate modifier
Updateunset
(String key)Update using the
$unsetupdate modifier
Some update modifiers, such as
$push and
$addToSet, allow nesting of additional operators.
// { $push : { "category" : { "$each" : [ "spring" , "data" ] } } } new Update().push("category").each("spring", "data") // { $push : { "key" : { "$position" : 0 , "$each" : [ "Arya" , "Arry" , "Weasel" ] } } } new Update().push("key").atPosition(Position.FIRST).each(Arrays.asList("Arya", "Arry", "Weasel")); // { $push : { "key" : { "$slice" : 5 , "$each" : [ "Arya" , "Arry" , "Weasel" ] } } } new Update().push("key").slice(5).each(Arrays.asList("Arya", "Arry", "Weasel")); // { $addToSet : { "values" : { "$each" : [ "spring" , "data" , "mongodb" ] } } } new Update().addToSet("values").each("spring", "data", "mongodb");
10.5.5. “Upserting” Documents in a Collection
Related to performing an
updateFirst operation, you can also perform an “upsert” operation, which will perform an insert if no document is found that matches the query. The document that is inserted is a combination of the query document and the update document. The following example shows how to use the
upsert method:
template.upsert(query(where("ssn").is(1111).and("firstName").is("Joe").and("Fraizer").is("Update")), update("address", addr), Person.class);
10.5.6. Finding and Upserting Documents in a Collection
The
findAndModify(…) method on
MongoCollection can update a document and return either the old or newly updated document in a single operation.
MongoTemplate provides four
findAndModify overloaded methods that take
Query and
Update classes and converts from
Document to your POJOs:
);
The following example inserts a few
Person objects into the container and performs a method lets you set the options of
returnNew,
upsert, and
remove. An example extending from the previous code snippet follows:));
10.5.7.. Methods for Removing Documents
You can use one of five overloaded methods to remove an object from the database:
template.remove(tywin, "GOT"); (1) template.remove(query(where("lastname").is("lannister")), "GOT"); (2) template.remove(new Query().limit(3), "GOT"); (3) template.findAllAndRemove(query(where("lastname").is("lannister"), "GOT"); (4) template.findAllAndRemove(new Query().limit(3), "GOT"); (5)
10.5.9. Optimistic Locking
The
@Version annotation provides syntax similar to that of JPA in the context of MongoDB and makes sure updates are only applied to documents with a matching version. Therefore, the actual value of the version property is added to the update query in such a way that the update does not have any effect if another operation altered the document in the meantime. In that case, an
OptimisticLockingFailureException is thrown. The following example shows these features:
)
10.6. Querying Documents
You can use the
Query and
Criteria classes to express your queries. They have method names that mirror the native MongoDB operator names, such as
lt,
lte,
is, and others. The
Query and
Criteria classes follow a fluent API style so that you can chain together multiple method criteria and queries while having easy-to-understand code. To improve readability, static imports let you avoid using the 'new' keyword for creating
Query and
Criteria instances. You can also use
BasicQuery to create
Query instances from plain JSON Strings, as shown in the following example:
BasicQuery query = new BasicQuery("{ age : { $lt : 50 }, accounts.balance : { $gt : 1000.00 }}"); List<Person> result = mongoTemplate.find(query, Person.class);
Spring MongoDB also supports GeoSpatial queries (see the GeoSpatial Queries section) and Map-Reduce operations (see the Map-Reduce section.).
10.6.1. Querying Documents in a Collection
Earlier, we saw how to retrieve a single document by using the
findOne and
findById methods on
MongoTemplate. These methods are specified by using a
Criteria object that has a static factory method named
where to instantiate a new
Criteria object. We recommend using static imports for
org.springframework.data.mongodb.core.query.Criteria.where and
Query.query to make the query more readable.
The query should return a list of
Person objects that meet the specified criteria. The rest of this section lists the methods of the
Criteria and
Query classes that correspond to the operators provided in MongoDB. Most methods return the
Criteria object, to provide a fluent style for the API.
Methods for the Criteria Class
The
Criteria class provides the following methods, all of which correspond to operators in MongoDB:
Criteriaall
(Object o)Creates a criterion using the
$alloperator
Criteriaand
(String key)Adds a chained
Criteriawith the specified
keyto the current
Criteriaand returns the newly created one
CriteriaandOperator
(Criteria… criteria)Creates an and query using the
$andoperator for all of the provided criteria (requires MongoDB 2.0 or later)
CriteriaelemMatch
(Criteria c)Creates a criterion using the
$elemMatchoperator
Criteriaexists
(boolean b)Creates a criterion using the
$existsoperator
Criteriagt
(Object o)Creates a criterion using the
$gtoperator
Criteriagte
(Object o)Creates a criterion using the
$gteoperator
Criteriain
(Object… o)Creates a criterion using the
$inoperator for a varargs argument.
Criteriain
(Collection<?> collection)Creates a criterion using the
$inoperator using a collection
Criteriais
(Object o)Creates a criterion using field matching (
{ key:value }). If the specified value is a document, the order of the fields and exact equality in the document matters.
Criterialt
(Object o)Creates a criterion using the
$ltoperator
Criterialte
(Object o)Creates a criterion using the
$lteoperator
Criteriamod
(Number value, Number remainder)Creates a criterion using the
$modoperator
Criteriane
(Object o)Creates a criterion using the
$neoperator
Criterianin
(Object… o)Creates a criterion using the
$ninoperator
CriterianorOperator
(Criteria… criteria)Creates an nor query using the
$noroperator for all of the provided criteria
Criterianot
()Creates a criterion using the
$notmeta operator which affects the clause directly following
CriteriaorOperator
(Criteria… criteria)Creates an or query using the
$oroperator for all of the provided criteria
Criteriaregex
(String re)Creates a criterion using a
$regex
Criteriasize
(int s)Creates a criterion using the
$sizeoperator
Criteriatype
(int t)Creates a criterion using the
$typeoperator.
The Criteria class also provides the following methods for geospatial queries (see the GeoSpatial Queries section to see them in action):
Criteriawithin
(Circle circle)Creates a geospatial criterion using
$geoWithin $centeroperators.
Criteriawithin
(Box box)Creates a geospatial criterion using a
$geoWithin $boxoperation.
CriteriawithinSphere
(Circle circle)Creates a geospatial criterion using
$geoWithin $centeroperators.
Criterianear
(Point point)Creates a geospatial criterion using a
$nearoperation
CriterianearSphere
(Point point)Creates a geospatial criterion using
$nearSphere$centeroperations. This is only available for MongoDB 1.7 and higher.
CriteriaminDistance
(double minDistance)Creates a geospatial criterion using the
$minDistanceoperation, for use with $near.
CriteriamaxDistance
(double maxDistance)Creates a geospatial criterion using the
$maxDistanceoperation, for use with $near.
Methods for the Query class
The
Query class has some additional methods that provide options for the query:
QueryaddCriteria
(Criteria criteria)used to add additional criteria to the query
Fieldfields
()used to define fields to be included in the query results
Querylimit
(int limit)used to limit the size of the returned results to the provided limit (used for paging)
Queryskip
(int skip)used to skip the provided number of documents in the results (used for paging)
Querywith
(Sort sort)used to provide sort definition for the results
10.6.2. Methods for Querying for Documents
The query methods need to specify the target type
T that is returned, and they are overloaded with an explicit collection name for queries that should operate on a collection other than the one indicated by the return type. The following query methods let you find one or more documents:
findAll: Query for a list of objects of type
Tfromof the specified type.
findAndRemove: Map the results of an ad-hoc query on the collection to a single instance of an object of the specified type. The first document that matches the query is returned and removed from the collection in the database.
10.4. GeoSpatial Queries
MongoDB supports GeoSpatial queries through the use of operators such as
$near,
$within,
geoWithin, and
$nearSphere. Methods specific to geospatial queries are available on the
Criteria class. There are also a few shape classes (
Box,
Circle, and
Point) that are used in conjunction with geospatial related
Criteria methods.
To understand how to perform GeoSpatial queries, consider the following
Venue class (taken from the integration tests and relying on, you can use the following query:
Circle circle = new Circle(-73.99171, 40.738868, 0.01); List<Venue> venues = template.find(new Query(Criteria.where("location").within(circle)), Venue.class);
To find venues within a
Circle using spherical coordinates, you can use the following query:
Circle circle = new Circle(-73.99171, 40.738868, 0.003712240453784); List<Venue> venues = template.find(new Query(Criteria.where("location").withinSphere(circle)), Venue.class);
To find venues within a
Box, you can use the following query:
//lower-left then upper-right Box box = new Box(new Point(-73.99756, 40.73083), new Point(-73.988135, 40.741404)); List<Venue> venues = template.find(new Query(Criteria.where("location").within(box)), Venue.class);
To find venues near a
Point, you can use the following queries:
Point point = new Point(-73.99171, 40.738868); List<Venue> venues = template.find(new Query(Criteria.where("location").near(point).maxDistance(0.01)), Venue.class);
Point point = new Point(-73.99171, 40.738868); List<Venue> venues = template.find(new Query(Criteria.where("location").near(point).minDistance(0.01).maxDistance(100)), Venue.class);
To find venues near a
Point using spherical coordinates, you can use the following query:
Point point = new Point(-73.99171, 40.738868); List<Venue> venues = template.find(new Query( Criteria.where("location").nearSphere(point).maxDistance(0.003712240453784)), Venue.class);
Geo-near Queries
MongoDB supports querying the database for geo locations and calculating the distance from a given origin at the same time. With geo-near queries, you can express queries such as "find all restaurants in the surrounding 10 miles". To let you do so,
MongoOperations provides
geoNear(…) methods that take a
NearQuery as an argument (as well as the already familiar entity type and collection), as shown in the following example:
Point location = new Point(-73.99171, 40.738868); NearQuery query = NearQuery.near(location).maxDistance(new Distance(10, Metrics.MILES)); GeoResults<Restaurant> = operations.geoNear(query, Restaurant.class);
We use the
NearQuery builder API to set up a query to return all
Restaurant instances surrounding the given
Point out to 10 miles. built-in metrics (miles and kilometers) automatically triggers the spherical flag to be set on the query. If you want to avoid that, pass plain
double values into
maxDistance(…). For more information, see the JavaDoc of
NearQuery and
Distance.
The geo-near operations return a
GeoResults wrapper object that encapsulates
GeoResult instances. Wrapping
GeoResults allows accessing the average distance of all results. A single
GeoResult object carries the entity found plus its distance from the origin.
10.6.5. GeoJSON Support
MongoDB supports GeoJSON and simple (legacy) coordinate pairs for geospatial data. Those formats can both be used for storing as well as querying data. See the MongoDB manual on GeoJSON support to learn about requirements and restrictions.
GeoJSON Types in Domain Classes
Usage of GeoJSON types in domain classes is straightforward. The
org.springframework.data.mongodb.core.geo package contains types such as
GeoJsonPoint,
GeoJsonPolygon, and others. These types are extend the existing
org.springframework.data.geo types. The following example uses a
GeoJsonPoint:
public class Store { String id; /** * location is stored in GeoJSON format. * { * "type" : "Point", * "coordinates" : [ x, y ] * } */ GeoJsonPoint location; }
GeoJSON Types in Repository Query Methods
Using GeoJSON types as repository query parameters forces usage of the
$geometry operator when creating the query, as the following example shows:
public interface StoreRepository extends CrudRepository<Store, String> { List<Store> findByLocationWithin(Polygon polygon); (1) } /* * { * "location": { * "$geoWithin": { * "$geometry": { * "type": "Polygon", * "coordinates": [ * [ * [-73.992514,40.758934], * [-73.961138,40.760348], * [-73.991658,40.730006], * [-73.992514,40.758934] * ] * ] * } * } * } * } */ repo.findByLocationWithin( (2) new GeoJsonPolygon( new Point(-73.992514, 40.758934), new Point(-73.961138, 40.760348), new Point(-73.991658, 40.730006), new Point(-73.992514, 40.758934))); (3) /* * { * "location" : { * "$geoWithin" : { * "$polygon" : [ [-73.992514,40.758934] , [-73.961138,40.760348] , [-73.991658,40.730006] ] * } * } * } */ repo.findByLocationWithin( (4) new Polygon( new Point(-73.992514, 40.758934), new Point(-73.961138, 40.760348), new Point(-73.991658, 40.730006));. Full-text Queries
Since version 2.6 of MongoDB, you can run full-text queries by using the
$text operator. Methods and operations specific to full-text queries are available in
TextQuery and
TextCriteria. When doing full text search, see the MongoDB reference for its behavior and limitations.
Full-text Search
Before you can actually use full-text search, you must set up the search index correctly. See Text Index for more detail on how to create index structures. The following example shows how to set up a full-text search:
db.foo.createIndex( { title : "text", content : "text" }, { weights : { title : 3 } } )
A query searching for
coffee cake, sorted by relevance according to the
weights, can be defined and executed as follows:
Query query = TextQuery.searching(new TextCriteria().matchingAny("coffee", "cake")).sortByScore(); List<Document> page = template.find(query, Document.class);
You can exclude search terms by prefixing the term with
- or by using
notMatching, as shown in the following example (note that the two lines have the same effect and are thus redundant):
// search for 'coffee' and not 'cake' TextQuery.searching(new TextCriteria().matching("coffee").matching("-cake")); TextQuery.searching(new TextCriteria().matching("coffee").notMatching("cake"));
TextCriteria.matching takes the provided term as is. Therefore, you can define phrases by putting them between double quotation marks (for example,
\"coffee cake\") or using by
TextCriteria.phrase. The following example shows both ways of defining a phrase:
// search for phrase 'coffee cake' TextQuery.searching(new TextCriteria().matching("\"coffee cake\"")); TextQuery.searching(new TextCriteria().phrase("coffee cake"));
You can set flags for
$caseSensitive and
$diacriticSensitive by using the corresponding methods on
TextCriteria. Note that these two optional flags have been introduced in MongoDB 3.2 and are not included in the query unless explicitly set.
10);
10.7. Query by Example
10.
10. }
10:
10.7.4. Running an Example
The following example shows how to query by example when using a repository (of
Person objects, in this case):
public interface PersonRepository extends QueryByExampleExecutor<Person> { } public class PersonService { @Autowired PersonRepository personRepository; public List<Person> findPeople(Person probe) { return personRepository.findAll(Example.of(probe)); } }
An
Example containing an untyped
ExampleSpec uses the Repository type and its collection name. Typed
ExampleSpec instances use their type as the result type and the collection name from the
Repository instance.
Spring Data MongoDB provides support for the following matching options:
10);
10.8. Map-Reduce Operations
You can query MongoDB by using Map-Reduce, which is useful for batch processing, for data aggregation, and for when the query language does not fulfill your needs.
Spring provides integration with MongoDB’s Map-Reduce by providing methods on
MongoOperations to simplify the creation and execution of Map-Reduce operations. It can convert the results of a Map-Reduce operation to a POJO and integrates with Spring’s Resource abstraction. This lets you place your JavaScript files on the file system, classpath, HTTP server, or any other Spring Resource implementation and then reference the JavaScript resources through an easy URI style syntax — for example,
classpath:reduce.js;. Externalizing JavaScript code in files is often preferable to embedding them as Java strings in your code. Note that you can still pass JavaScript code as Java strings if you prefer.
10.8.1. Example Usage
To understand how to perform Map-Reduce operations, we use an example from the book, MongoDB - The Definitive Guide [1]. In this example, we create three documents that have the values [a,b], [b,c], and [c,d], respectively. The values in each document are associated with the key, 'x', as the following example shows (assume these documents are in" ] }
The following map function counts the occurrence of each letter in the array for each document:
function () { for (var i = 0; i < this.x.length; i++) { emit(this.x[i], 1); } }
The follwing reduce function sums up the occurrence of each letter across all the documents:
function (key, values) { var sum = 0; for (var i = 0; i < values.length; i++) sum += values[i]; return sum; }
Running the preceding functions result in the following collection:
{ "_id" : "a", "value" : 1 } { "_id" : "b", "value" : 2 } { "_id" : "c", "value" : 2 } { "_id" : "d", "value" : 1 }
Assuming that the map and reduce functions are located in
map.js and
reduce.js and bundled in your jar so they are available on the classpath, you can run a Map-Reduce operation as follows:
MapReduceResults<ValueObject> results = mongoOperations.mapReduce("jmr1", "classpath:map.js", "classpath:reduce.js", ValueObject.class); for (ValueObject valueObject : results) { System.out.println(valueObject); }
The preceding exmaple produces the following output:
ValueObject [id=a, value=1.0] ValueObject [id=b, value=2.0] ValueObject [id=c, value=2.0] ValueObject [id=d, value=1.0]
The
MapReduceResults class implements
Iterable and provides access to the raw output and timing and count statistics. The following listing shows the
ValueObject class: that you need not specify an output collection. To specify additional Map-Reduce options, use an overloaded method that takes an additional
MapReduceOptions argument. The class
MapReduceOptions has a fluent API, so adding additional options can be done in a compact syntax. The following example, as the following example shows:
MapReduceResults<ValueObject> results = mongoOperations.mapReduce("jmr1", "classpath:map.js", "classpath:reduce.js", options().outputCollection("jmr1_out"), ValueObject.class);
You can also specify a query to reduce the set of data that is fed into the Map-Reduce operation. The following example removes on the query, but you cannot skip values.
10.9. Script Operations
MongoDB allows executing JavaScript functions on the server by either directly sending the script or calling a stored one.
ScriptOperations can be accessed through
MongoTemplate and provides basic abstraction for
JavaScript usage. The following example shows how to us the
ScriptOperations class:
ScriptOperations scriptOps = template.scriptOps(); ExecutableMongoScript echoScript = new ExecutableMongoScript("function(x) { return x; }"); scriptOps.execute(echoScript, "directly execute script"); (1) scriptOps.register(new NamedMongoScript("echo", echoScript)); (2) scriptOps.call("echo", "execute script via name"); (3)
10.10. Group Operations.
10.10.1. Example Usage
Document);.1. Basic Concepts
The Aggregation Framework support in Spring Data MongoDB is based on the following key abstractions:
Aggregation,
AggregationOperation, and
AggregationResults.
Aggregation
An
Aggregationrepresents a MongoDB
aggregateoperation and holds the description of the aggregation pipeline instructions. Aggregations are created by invoking the appropriate
newAggregation(…)static factory method of the
Aggregationclass, which takes a list of
AggregateOperationand an optional input class.
The actual aggregate operation is executed.
AggregationOperation
An
AggregationOperationrepresents a MongoDB aggregation pipeline operation and describes the processing that should be performed in this aggregation step. Although you could manually create an
AggregationOperation, we recommend using the static factory methods provided by the
Aggregateclass to construct an
AggregateOperation.
AggregationResults
AggregationResultsis the container for the result of an aggregate operation. It provides access to the raw aggregation result, in the form of a
Documentto the mapped objects and other information about the aggregation.
The following listing shows the canonical example for using the Spring Data MongoDB support for the MongoDB Aggregation Framework: derives the name of the input collection from this class. Otherwise, if you do not not specify an input class, you must provide the name of the input collection explicitly. If both an input class and an input collection are provided, the latter takes precedence.
10.11.2. Supported Aggregation Operations
The MongoDB Aggregation Framework provides the following types of aggregation operations:
Pipeline Aggregation Operators
Group Aggregation Operators
Boolean Aggregation Operators
Comparison Aggregation Operators
Arithmetic Aggregation Operators
String Aggregation Operators
Date Aggregation Operators
Array Aggregation Operators
Conditional Aggregation Operators
Lookup Aggregation Operators
Convert Aggregation Operators
Object Aggregation Operators
At the time of this writing, we provide support for the following Aggregation Operations in Spring Data MongoDB:
The operation is mapped or added by Spring Data MongoDB.
Note that the aggregation operations not listed here are currently not supported by Spring Data MongoDB. Comparison aggregation operators are expressed as
Criteria expressions.
10.11.3. Projection Expressions
Projection expressions are used to define the fields that are the outcome of a particular aggregation step. Projection expressions can be defined through the
project method of the
Aggregation class, either by passing a list of
String objects or an aggregation framework
Fields object. The projection can be extended with additional fields through a fluent API by using the
and(String) method and aliased by using the
as(String) method.
Note that you can also define fields with aliases by using the
Fields.field static factory method of the aggregation framework, which you can then use to construct a new
Fields instance. References to projected fields in later aggregation stages are valid only for the field names of included fields or their aliases (including newly defined fields and their aliases). Fields not included in the projection cannot be referenced in later aggregation stages. The following listings show examples of projection expression:
// generates {$project: {name: 1, netPrice: 1}} project("name", "netPrice") // generates {$project: {thing1: $thing2}} project().and("thing1").as("thing2") // generates {$project: {a: 1, b: 1, thing2: $thing1}} project("a","b").and("thing1").as("thing2")
// generates {$project: {name: 1, netPrice: 1}}, {$sort: {name: 1}} project("name", "netPrice"), sort(ASC, "name") // generates {$project: {name: $firstname}}, {$sort: {name: 1}} project().and("firstname").as("name"), sort(ASC, "name") // does not work project().and("firstname").as("name"), sort(ASC, "firstname")
More examples for project operations can be found in the
AggregationTests class. Note that further details regarding the projection expressions can be found in the corresponding section of the MongoDB Aggregation Framework reference documentation.
10.11.4. Faceted Classification
As of Version 3.4, MongoDB supports faceted classification by using the Aggregation Framework. A faceted classification uses semantic categories (either general or subject-specific) that are combined to create the full classification entry. Documents flowing through the aggregation pipeline are classified into buckets. A multi-faceted classification enables various aggregations on the same set of input documents, without needing to retrieve the input documents multiple times.
Buckets
Bucket operations categorize incoming documents into groups, called buckets, based on a specified expression and bucket boundaries. Bucket operations require a grouping field or a grouping expression. You can define them by using the
bucket() and
bucketAuto() methods of the
Aggregate class.
BucketOperation and
BucketAutoOperation can expose accumulations based on aggregation expressions for input documents. You can extend the bucket operation with additional parameters through a fluent API by using the
with…() methods and the
andOutput(String) method. You can alias the operation by using the
as(String) method. Each bucket is represented as a document in the output.
BucketOperation takes a defined set of boundaries to group incoming documents into these categories. Boundaries are required to be sorted. The following listing shows some examples of bucket operations:
// generates {$bucket: {groupBy: $price, boundaries: [0, 100, 400]}} bucket("price").withBoundaries(0, 100, 400); // generates {$bucket: {groupBy: $price, default: "Other" boundaries: [0, 100]}} bucket("price").withBoundaries(0, 100).withDefault("Other"); // generates {$bucket: {groupBy: $price, boundaries: [0, 100], output: { count: { $sum: 1}}}} bucket("price").withBoundaries(0, 100).andOutputCount().as("count"); // generates {$bucket: {groupBy: $price, boundaries: [0, 100], 5, output: { titles: { $push: "$title"}}} bucket("price").withBoundaries(0, 100).andOutput("title").push().as("titles");
BucketAutoOperation determines boundaries in an attempt to evenly distribute documents into a specified number of buckets.
BucketAutoOperation optionally takes a granularity value that specifies the preferred number series to use to ensure that the calculated boundary edges end on preferred round numbers or on powers of 10. The following listing shows examples of bucket operations:
// generates {$bucketAuto: {groupBy: $price, buckets: 5}} bucketAuto("price", 5) // generates {$bucketAuto: {groupBy: $price, buckets: 5, granularity: "E24"}} bucketAuto("price", 5).withGranularity(Granularities.E24).withDefault("Other"); // generates {$bucketAuto: {groupBy: $price, buckets: 5, output: { titles: { $push: "$title"}}} bucketAuto("price", 5).andOutput("title").push().as("titles");
To create output fields in buckets, bucket operations can use
AggregationExpression through
andOutput() and SpEL expressions through
andOutputExpression().
Note that further details regarding bucket expressions can be found in the
$bucket section and
$bucketAuto section of the MongoDB Aggregation Framework reference documentation.
Multi-faceted Aggregation
Multiple aggregation pipelines can be used to create multi-faceted aggregations that, and other factors.
You can define a
FacetOperation by using the
facet() method of the
Aggregation class. You can customize it with multiple aggregation pipelines by using the
and() method. Each sub-pipeline has its own field in the output document where its results are stored as an array of documents.
Sub-pipelines can project and filter input documents prior to grouping. Common use cases include extraction of date parts or calculations before categorization. The following listing shows facet operation examples:
// generates {$facet: {categorizedByPrice: [ { $match: { price: {$exists : true}}}, { $bucketAuto: {groupBy: $price, buckets: 5}}]}} facet(match(Criteria.where("price").exists(true)), bucketAuto("price", 5)).as("categorizedByPrice")) // generates {$facet: {categorizedByCountry: [ { $match: { country: {$exists : true}}}, { $sortByCount: "$country"}]}} facet(match(Criteria.where("country").exists(true)), sortByCount("country")).as("categorizedByCountry")) // generates {$facet: {categorizedByYear: [ // { $project: { title: 1, publicationYear: { $year: "publicationDate"}}}, // { $bucketAuto: {groupBy: $price, buckets: 5, output: { titles: {$push:"$title"}}} // ]}} facet(project("title").and("publicationDate").extractYear().as("publicationYear"), bucketAuto("publicationYear", 5).andOutput("title").push().as("titles")) .as("categorizedByYear"))
Note that further details regarding facet operation can be found in the
$facet section of the MongoDB Aggregation Framework reference documentation. query execution, the SpEL expression is translated into a corresponding MongoDB projection expression part. This arrangement makes it much easier to express complex calculations.
Complex Calculations with SpEL expressions
Consider the following SpEL expression:
1 + (q + 1) / (q - 1)
The preceding expression is translated into the following projection expression part:
{ "$add" : [ 1, { "$divide" : [ { "$add":["$q", 1]}, { "$subtract":[ "$q", 1]} ] }]}
You can see examples in more context in Aggregation Framework Example 5 and Aggregation Framework Example 6. You can find more usage examples for supported SpEL expression constructs in
SpelExpressionTransformerUnitTests. The following table shows the SpEL transformations supported by Spring Data MongoDB:
In addition to the transformations shown in the preceding table, you can use standard SpEL operations such as
new to (for example) create arrays and reference expressions through their names (followed by the arguments to use in brackets). The following example shows how to create an array in this fashion:
// { $setEquals : [$a, [5, 8, 13] ] } .andExpression("setEquals(a, new int[]{5, 8, 13})");
Aggregation Framework Examples
The examples in this section demonstrate the usage patterns for the MongoDB Aggregation Framework with Spring Data MongoDB.();
The preceding listing uses the following algorithm:
Create a new aggregation by using the
newAggregationstatic factory method, to which we pass a list of aggregation operations. These aggregate operations define the aggregation pipeline of our
Aggregation.
Use the
projectoperation to select the
tagsfield (which is an array of strings) from the input collection.
Use the
unwindoperation to generate a new document for each tag within the
tagsarray.
Use the
groupoperation to define a group for each
tagsvalue for which we aggregate the occurrence count (by using the
countaggregation operator and collecting the result in a new field called
n).
Select the
nfield and create an alias for the ID field generated from the previous group operation (hence the call to
previousOperation()) with a name of
tag.
Use the
sortoperation to sort the resulting list of tags by their occurrence count in descending order.
Call the
aggregatemethod on
MongoTemplateto let MongoDB perform the actual aggregation operation, with the created
Aggregationas an argument.
Note that the input collection is explicitly specified as the
tags parameter to the
aggregate Method. If the name of the input collection is not specified explicitly, it is derived from the input class passed as the first parameter to the
newAggreation method. by using the aggregation framework. This example demonstrates);
Note that the
ZipInfo class maps the structure of the given input-collection. The
ZipInfoStats class defines the structure in the desired output format.
The preceding listings use the following algorithm:
Use the
groupoperation to define a group from the input-collection. The grouping criteria is the combination of the
stateand
cityfields, which forms the ID structure of the group. We aggregate the value of the
populationproperty from the grouped elements by using the
sumoperator and save the result in the
popfield.
Use the
sortoperation to sort the intermediate-result by the
pop,
stateand
cityfields, in ascending order, such that the smallest city is at the top and the biggest city is at the bottom of the result. Note that the sorting on
stateand
cityis implicitly performed against the group ID fields (which Spring Data MongoDB handled).
Use a
groupoperation again to group the intermediate result by
state. Note that
stateagain implicitly references a group ID field. We select the name and the population count of the biggest and smallest city with calls to the
last(…)and
first(…)operators, respectively, in the
projectoperation.
Select the
statefield from the previous
groupoperation. Note that
stateagain implicitly references a group ID field. Because we do not want an implicitly generated ID to appear, we exclude the ID from the previous operation by using
and(previousOperation()).exclude(). Because we want to populate the nested
Citystructures in our output class, we have to emit appropriate sub-documents by using the nested method.
Sort the resulting list of
StateStatsby their state name in ascending order in the
sortoperation.
Note that we derive the name of the input collection from the
ZipInfo class passed as the first parameter to the
newAggregation();
The preceding listings use the following algorithm:
Group the input collection by the
statefield and calculate the sum of the
populationfield and store the result in the new field
"totalPop".
Sort the intermediate result by the id-reference of the previous group operation in addition to the
"totalPop"field in ascending order.
Filter the intermediate result by using a
matchoperation which accepts a
Criteriaquery as an argument.
Note that we derive the name of the input collection from the
ZipInfo class passed as first parameter to the
newAggregation<Document> result = mongoTemplate.aggregate(agg, Document.class); List<Document> resultList = result.getMappedResults();
Note that we derive the name of the input collection from the
Product class passed as first parameter to the
newAggregation method.<Document> result = mongoTemplate.aggregate(agg, Document.class); List<Document> resultList = result.getMappedResults();
Aggregation Framework Example 6
This example demonstrates the use of complex arithmetic operations derived from SpEL Expressions in the projection operation.
Note: The additional parameters passed to the
addExpression method can be referenced with indexer expressions according to their position. In this example, we reference the first parameter of the parameters array with
[0]. When the SpEL expression is transformed into a MongoDB aggregation framework expression, external parameter expressions are replaced with their respective values.<Document> result = mongoTemplate.aggregate(agg, Document.class); List<Document> resultList = result.getMappedResults();
Note that we can also refer to other fields of the document within the SpEL expression.
Aggregation Framework Example 7
This example uses conditional projection. It is derived from the $cond reference documentation.
public class InventoryItem { @Id int id; String item; String description; int qty; } public class InventoryItemProjection { @Id int id; String item; String description; int qty; int discount }
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*; TypedAggregation<InventoryItem> agg = newAggregation(InventoryItem.class, project("item").and("discount") .applyCondition(ConditionalOperator.newBuilder().when(Criteria.where("qty").gte(250)) .then(30) .otherwise(20)) .and(ifNull("description", "Unspecified")).as("description") ); AggregationResults<InventoryItemProjection> result = mongoTemplate.aggregate(agg, "inventory", InventoryItemProjection.class); List<InventoryItemProjection> stateStatsList = result.getMappedResults();
This one-step aggregation uses a projection operation with the
inventory collection. We project the
discount field by using a conditional operation for all inventory items that have a
qty greater than or equal to
250. A second conditional projection is performed for the
description field. We apply the
Unspecified description to all items that either do not have a
description field or items that have a
null description. more fine-grained control over the mapping process,
you can register Spring converters with the
MongoConverter implementations, such as the
MappingMongoConverter.
The
MappingMongoConverter checks to see if any Spring converters. types, we let you force the infrastructure to register a converter for only one way, we provide
@ReadingConverter and
@WritingConverter annotations to be used in the converter implementation.
10.12. Index and Collection Management
MongoTemplate provides a few methods for managing indexes and collections. These methods are collected into a helper interface called
IndexOperations. You can access these operations by calling the
indexOps method and passing in either the collection name or the
java.lang.Class of your entity (the collection name is derived from the
.class, either by name or from annotation metadata).
The following listing shows the
IndexOperations interface:
public interface IndexOperations { void ensureIndex(IndexDefinition indexDefinition); void dropIndex(String name); void dropAllIndexes(); void resetIndexCache(); List<IndexInfo> getIndexInfo(); }
10.12.1. Methods for Creating an Index
You can create an index on a collection to improve query performance by using the MongoTemplate class, as the following example shows:
mongoTemplate.indexOps(Person.class).ensureIndex(new Index().on("name",Order.ASCENDING));
ensureIndex makes sure that an index for the provided IndexDefinition exists for the collection.
You can create standard, geospatial, and text indexes by using the
IndexDefinition,
GeoSpatialIndex and
TextIndexDefinition classes. For example, given the
Venue class defined in a previous section, you could declare a geospatial query, as the following example shows:
mongoTemplate.indexOps(Venue.class).ensureIndex(new GeospatialIndex("location"));
10.12.2. Accessing Index Information
The
IndexOperations interface has the
getIndexInfo method that returns a list of
IndexInfo objects. This list contains all the indexes defined on the collection. The following example defines an index on the
Person class that has an
age property:
template.indexOps(Person.class).ensureIndex(new Index().on("age", Order.DESCENDING).unique());> collection = null; if (!mongoTemplate.getCollectionNames().contains("MyNewCollection")) { collection = mongoTemplate.createCollection("MyNewCollection"); } mongoTemplate.dropCollection("MyNewCollection");
getCollectionNames: Returns a set of collection names.
collectionExists: Checks to see if a collection with a given name exists.
createCollection: Creates an uncapped collection.
dropCollection: Drops the collection.
getCollection: Gets a collection by name, creating it if it does not exist.
10.14. Lifecycle Events
The MongoDB mapping framework includes several
org.springframework.context.ApplicationEvent events that your application can respond to by registering special beans in the
ApplicationContext. Being based on Spring’s
ApplicationContext event infrastructure enables other products, such as Spring Integration, to easily receive these events, as they are a well known eventing mechanism in Spring-based applications.
To intercept an object before it goes through the conversion process (which turns your domain object into a
org.bson.Document), you can register a subclass of
AbstractMongoEventListener that overrides the
onBeforeConvert method. When the event is dispatched, your listener is called and passed the domain object before it goes into the converter. The following example shows how to do so:
public class BeforeConvertListener extends AbstractMongoEventListener<Person> { @Override public void onBeforeConvert(BeforeConvertEvent<Person> event) { ... does some auditing manipulation, set timestamps, whatever ... } }
To intercept an object before it goes into the database, you can register a subclass of
org.springframework.data.mongodb.core.mapping.event.AbstractMongoEventListener that overrides the
onBeforeSave method. When the event is dispatched, your listener is called and passed the domain object and the converted
com.mongodb.Document. The following example shows how to do so:
public class BeforeSaveListener extends AbstractMongoEventListener<Person> { @Override public void onBeforeSave(BeforeSaveEvent<Person> event) { … change values, delete them, whatever … } }
Declaring these beans in your Spring ApplicationContext causes them to be invoked whenever the event is dispatched.
The following callback methods are present in
AbstractMappingEventListener:
onBeforeConvert: Called in
MongoTemplate
insert,
insertList, and
saveoperations before the object is converted to a..
10.15) } }
10.15 // ... } }
10.16. Exception Translation that you can be sure to catch all database related exception within a single try-catch block. Note that not all exceptions thrown by the MongoDB driver inherit from the
MongoException class. The inner exception and message are preserved so that no information is lost.
Some of the mappings performed by the
MongoExceptionTranslator are
com.mongodb.Network to DataAccessResourceFailureException and
MongoException error codes 1003, 12001, 12010, 12011, and 12012 to
InvalidDataAccessApiUsageException. Look into the implementation for more details on the mapping.
10.17. Execution Callbacks
One common design feature of all Spring template classes is that all functionality is routed into one of the template’s execute callback methods. Doing so helps to ensure that exceptions and any resource management that may be required are performed consistently. While JDBC and JMS need this feature much more than MongoDB does, it still offers a single spot for exception translation and logging to occur. Consequently, using these execute callbacks is the preferred way to access the MongoDB driver’s
MongoDatabase and
MongoCollection objects to perform uncommon operations that were not exposed as methods on
MongoTemplate.
The following list describes the execute callback methods.
<T> Texecute
(Class<?> entityClass, CollectionCallback<T> action): the given
DbCallbackwithin the same connection to the database so as to ensure consistency in a write-heavy environment where you may read the data that you wrote.
The following example uses the
CollectionCallback to return information about an index:
boolean hasIndex = template.execute("geolocation", new CollectionCallbackBoolean>() { public Boolean doInCollection(Venue.class, DBCollection collection) throws MongoException, DataAccessException { List<Document> indexes = collection.getIndexInfo(); for (Document document : indexes) { if ("location_2d".equals(document.get("name"))) { return true; } } return false; } });
10.18. GridFS Support
MongoDB supports storing binary files inside its filesystem, GridFS. Spring Data MongoDB provides a
GridFsOperations interface as well as the corresponding implementation,
GridFsTemplate, to let you interact with the filesystem. You can set up a
GridFsTemplate instance by handing it a
MongoDbFactory as well as a
MongoConverter, as the following example shows:
class GridFsConfiguration extends AbstractMongoConfiguration { // … further configuration omitted @Bean public GridFsTemplate gridFsTemplate() { return new GridFsTemplate(mongoDbFactory(), mappingMongoConverter()); } }
The corresponding XML configuration follows:
<, as the following example shows:aled by the
MongoConverter configured with the
Grid
GridFsTemplate to query for files:
class GridFsClient { @Autowired GridFsOperations operations; @Test public void findFilesInGridFs() { GridFSFindIterable result = operations.find(query(whereFilename().is("filename.txt"))) } }
The other option to read files from the GridFs is to use the methods introduced by the
ResourcePatternResolver interface. They allow handing an Ant path into the method and can thus retrieve files matching the given pattern. The following example shows how to use
GridFsTemplate to read files:
class GridFsClient { @Autowired GridFsOperations operations; @Test public void readFilesFromGridFs() { GridFsResources[] txtFiles = operations.getResources("*.txt"); } }
GridFsOperations extends
ResourcePatternResolver and lets the
GridFsTemplate (for example) to be plugged into an
ApplicationContext to read Spring Config files from MongoDB database..66< methods available on the MongoDB driver
Collection object, to make the API familiar to existing MongoDB developers who are used to the driver API. For example, you can find methods such as
find,
findAndModify,
findOne,
insert,
remove,
save,
update, and
updateMulti. The design goal.5. Execution Callbacks.
14.2. Usage
To access domain entities stored in a MongoDB, you can use our sophisticated repository support that eases implementation quite significantly. To do so, create an interface for your repository, as the following example shows:
public class Person { @Id private String id; private String firstname; private String lastname; private Address address; // … getters and setters omitted }
Note that the domain type shown. do so, in your Spring configuration, add the following content:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <mongo:mongo-client <bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate"> <constructor-arg <constructor-arg </bean> <mongo:repositories </beans>
This namespace element causes the base packages to be scanned for interfaces that extend
MongoRepository and create Spring beans for each one found. By default, the repositories get a
MongoTemplate Spring bean wired that is called
mongoTemplate, so you only need to configure
mongo-template-ref explicitly if you deviate from this convention.
If you would rather go with Java-based configuration, use the
@EnableMongoRepositories annotation. That annotation carries the same attributes as the namespace element. If no base package is configured, the infrastructure scans the package of the annotated configuration class. The following example shows how to use Java configuration for a repository:
@Configuration @EnableMongoRepositories class ApplicationConfig extends AbstractMongoConfiguration { @Override protected String getDatabaseName() { return "e-store"; } @Override public MongoClient mongoClient() { return new MongoClient(); } @Override protected String getMappingBasePackage() { return "com.oreilly.springdata.mongodb"; } }
Because our domain repository extends
PagingAndSortingRepository, it provides you with CRUD operations as well as methods for paginated and sorted access to the entities. Working with the repository instance is just a matter of dependency injecting it into a client. Consequently, accessing the second page of
Person objects at a page size of 10 would resemble the following code:
@RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration public class PersonRepositoryTests { @Autowired PersonRepository repository; @Test public void readsFirstPageCorrectly() { Page<Person> persons = repository.findAll(PageRequest.of(0, 10)); assertThat(persons.isFirstPage(), is(true)); } }
The preceding example creates an application context with Spring’s unit test support, which performs annotation-based dependency injection into test cases. Inside the test method, we use the repository to query the datastore. We hand the repository a
PageRequest instance that requests the first page of
Person objects at a page size of 10.
14.3. Query Methods
Most of the data access operations you usually trigger on a repository result in a query being executed against the MongoDB databases. Defining such a query is a matter of declaring a method on the repository interface, as the following example shows:
public interface PersonRepository extends PagingAndSortingRepository<Person, String> { List<Person> findByLastname(String lastname); (1) Page<Person> findByFirstname(String firstname, Pageable pageable); (2) Person findByShippingAddresses(Address address); (3) Person findFirstByLastname(String lastname) (4) Stream<Person> findAllBy(); (5) }
The following table shows the keywords that are supported for query methods:
14.3.1. Repository Delete Queries
The keywords in the preceding table can be used in conjunction with
delete…By or
remove…By to create queries that delete matching documents.
Delete…ByQuery
public interface PersonRepository extends MongoRepository<Person, String> { List <Person> deleteByLastname(String lastname); Long deletePersonByLastname(String lastname); }
Using a return type of
List retrieves and returns all matching documents before actually deleting them. A numeric return type directly removes the matching documents, returning the total number of documents removed.
14.3.2. Geo-spatial Repository Queries
As you saw in the preceding table of keywords,); // Metric: {'geoNear' : 'person', 'near' : [x, y], 'minDistance' : min, // 'maxDistance' : max, 'distanceMultiplier' : metric.multiplier, // 'spherical' : true } GeoResults<Person> findByLocationNear(Point location, Distance min, Distance max); // {'geoNear' : 'location', 'near' : [x, y] } GeoResults<Person> findByLocationNear(Point location); }
14.3.3. MongoDB JSON-based Query Methods and Field Restriction
By adding the
org.springframework.data.mongodb.repository.Query annotation to your repository query methods, you can specify a MongoDB JSON query string to use instead of having the query be derived from the method name, as the following example shows:
public interface PersonRepository extends MongoRepository<Person, String> { @Query("{ 'firstname' : ?0 }") List<Person> findByThePersonsFirstname(String firstname); }
The
?0 placeholder lets you substitute the value from the method arguments into the JSON query string.
You can also use the filter property to restrict the set of properties that is mapped into the Java object, as the following example shows:
public interface PersonRepository extends MongoRepository<Person, String> { @Query(value="{ 'firstname' : ?0 }", fields="{ 'firstname' : 1, 'lastname' : 1}") List<Person> findByThePersonsFirstname(String firstname); }
The query in the preceding example returns only the
lastname and
Id properties of the
Person objects. The
age property, a
java.lang.Integer, is not set and its value is therefore null..5. JSON-based Queries with SpEL Expressions
Query strings and field definitions can be used together with SpEL expressions to create dynamic queries at runtime. SpEL expressions can provide predicate values and can be used to extend predicates with subdocuments.
Expressions expose method arguments through an array that contains all the arguments. The following query uses
[0]
to declare the predicate value for
lastname (which is equivalent to the
?0 parameter binding):
public interface PersonRepository extends MongoRepository<Person, String> { @Query("{'lastname': ?#{[0]} }") List<Person> findByQueryWithExpression(String param0); }
Expressions can be used to invoke functions, evaluate conditionals, and construct values. SpEL expressions used in conjunction with JSON reveal a side-effect, because Map-like declarations inside of SpEL read like JSON, as the following example shows:
public interface PersonRepository extends MongoRepository<Person, String> { @Query("{'id': ?#{ [0] ? {$exists :true} : [1] }}") List<Person> findByQueryWithExpressionAndNestedObject(boolean param0, String param1); }
SpEL in query strings can be a powerful way to enhance queries. However, they can also accept a broad range of unwanted arguments. You should make sure to sanitize strings before passing them to the query to avoid unwanted changes to your query.
Expression support is extensible through the Query SPI:
org.springframework.data.repository.query.spi.EvaluationContextExtension.
The Query SPI can contribute properties and functions and can customize the root object. Extensions are retrieved from the application context
at the time of SpEL evaluation when the query is built. The following example shows how to use
EvaluationContextExtension:
public class SampleEvaluationContextExtension extends EvaluationContextExtensionSupport { @Override public String getExtensionId() { return "security"; } @Override public Map<String, Object> getProperties() { return Collections.singletonMap("principal", SecurityContextHolder.getCurrent().getPrincipal()); } }
14.3.6. Type-safe Query Methods
MongoDB repository support integrates with the Querydsl project, which provides a way to perform type-safe queries. To quote from the project description, .
QueryDSL lets you write queries such as the following:
QPerson person = new QPerson("person"); List<Person> result = repository.findAll(person.address.zipCode.eq("C0123")); Page<Person> page = repository.findAll(person.lastname.contains("a"), PageRequest.of(0, 2, Direction.ASC, "lastname"));
QPerson is a class that is generated by the Java annotation post-processing tool. It is a
Predicate that lets you write type-safe queries. Notice that there are no strings in the query other than the
C0123 value.
You can use the generated
Predicate class by using the
QuerydslPredicateExecutor interface, which the following listing shows:, add it to the list of repository interfaces from which your interface inherits, as the following example shows:
public interface PersonRepository extends MongoRepository<Person, String>, QuerydslPredicateExecutor<Person> { // additional query methods go here }
14.3.7. Full-text Search Queries
MongoDB’s full-text search feature is store-specific and, therefore, can be found on
MongoRepository rather than on the more general
CrudRepository. We need a document with a full-text index (see “Text Indexes” to learn how to create a full-text index).
Additional methods on
MongoRepository take
TextCriteria as an input parameter. In addition to those explicit methods, it is also possible to add a
TextCriteria-derived repository method. The criteria are added as an additional
AND criteria. Once the entity contains a
@TextScore-annotated property, the document’s full-text score can be retrieved. Furthermore, the
@TextScore annotated also makes it possible to sort by the document’s score, as the following example shows:
@Document class FullTextDocument { @Id String id; @TextIndexed String title; @TextIndexed String content; @TextScore Float score; } interface FullTextRepository extends Repository<FullTextDocument, String> { // Execute a full-text search and define sorting dynamically List<FullTextDocument> findAllBy(TextCriteria criteria, Sort sort); // Paginate over a full-text search result Page<FullTextDocument> findAllBy(TextCriteria criteria, Pageable pageable); // Combine a derived query with a full-text search List<FullTextDocument> findByTitleOrderByScoreDesc(String title, TextCriteria criteria); } Sort sort = }
14.4. CDI Integration
Instances of the repository interfaces are usually created by a container, and Spring is the most natural choice when working with Spring Data. As of version 1.3.0, Spring Data MongoDB ships with a custom CDI extension that lets you use the repository abstraction in CDI environments. The extension is part of the JAR. To activate it, drop the Spring Data MongoDB JAR into your classpath. You can now set up the infrastructure by implementing a CDI Producer for the
MongoTemplate, as the following example shows:
class MongoTemplateProducer { @Produces @ApplicationScoped public MongoOperations createMongoTemplate() { MongoDbFactory factory = new SimpleMongoDbFactory(new MongoClient(), "database"); return new MongoTemplate(factory); } }
The Spring Data MongoDB CDI extension picks up the
MongoTemplate available as a CDI bean and creates a proxy for a Spring Data repository whenever a bean of a repository type is requested by the container. Thus, obtaining an instance of a Spring Data repository is a matter of declaring an
@Inject-ed property, as the following example shows:
class RepositoryClient { @Inject PersonRepository repository; public void businessMethod() { List<Person> people = repository.findAll(); } }.2. General Auditing Configuration for MongoDB
To activate auditing functionality, add the Spring Data Mongo
auditing namespace element to your configuration, as the following example shows:
<mongo:auditing
Since Spring Data MongoDB 1.4, auditing can be enabled by annotating a configuration class with the
@EnableMongoAuditing annotation, as the followign example shows:
@Configuration @EnableMongoMongoAuditing.
17. Mapping
Rich mapping support is provided by the
MappingMongoConverter.
MappingMongoConverter has a rich metadata model that provides a full feature set to map domain objects to MongoDB documents. The mapping metadata model is populated by using annotations on your domain objects. However, the infrastructure is not limited to using annotations as the only source of metadata information. The
MappingMongoConverter also lets you map objects to documents without providing any additional metadata, by following a set of conventions.
This section describes the features of the
MappingMongoConverter, including fundamentals, how to use conventions for mapping objects to documents and how to override those conventions with annotation-based mapping metadata.
17.
17.
17; } }
17.
17.
17.2. Convention-based Mapping
MappingMongoConverter has a few conventions for mapping objects to documents when no additional mapping metadata is provided. The conventions are:
The short Java class name is mapped to the collection name in the following manner. The class
com.bigbank.SavingsAccountmaps to the
savingsAccountcollection name.
All nested objects are stored as nested objects in the document and not as DBRefs.
The converter uses any Spring Converters registered with it to override the default mapping of object properties to document fields and values.
The fields of an object are used to convert to and from fields in the document. Public
JavaBeanproperties are not used.
If you have a single non-zero-argument constructor whose constructor argument names match top-level field names of document, that constructor is used. Otherwise, the zero-argument constructor is used. If there is more than one non-zero-argument constructor, an exception will be thrown.
17.2.1. How the
_id field is handled in the mapping layer.
MappingMongoConfield.
A field without an annotation but named
idwill be mapped to the
_idfield.
The default field name for identifiers is
_idand can be customized via the
@Fieldannotation.
The following outlines what type conversion, if any, will be done on the property mapped to the _id document field.
If a field named
idis declared as a String or BigInteger in the Java class it will be converted to and stored as an ObjectId if possible. ObjectId as a field type is also valid. If you specify a value for
idin your application, the conversion to an ObjectId is detected to the MongoDB.
If a field named
idid field is not declared as a String, BigInteger, or ObjectID in the Java class then you should assign it a value in your application so it can be stored 'as-is' in the document’s _id field.
If no field named
idis present in the Java class then an implicit
_idfile.
17.3. Data Mapping and Type Conversion
This section explains how types are mapped to and from a MongoDB representation. Spring Data MongoDB supports all types that can be represented as BSON, MongoDB’s internal document format. In addition to these types, Spring Data MongoDB provides a set of built-in converters to map additional types. You can provide your own converters to adjust type conversion. See Overriding Mapping with Explicit Converters for further details.
The following provides samples of each available type conversion:
17.4. Mapping Configuration
Unless explicitly configured, an instance of
MappingMongoConverter is created by default when you create a
MongoTemplate. You can create your own instance of the
MappingMongoConverter. Doing so lets you dictate where in the classpath your domain classes can be found, so that Spring Data MongoDB can extract metadata and construct indexes. Also, by creating your own instance, you can register Spring converters to map specific classes to and from the database.
You can configure the
MappingMongoConverter as well as
com.mongodb. Also shown in the preceding example is a
LoggingEventListener, which logs
MongoMappingEvent instances that are posted onto Spring’s
ApplicationContextEvent infrastructure.
Spring’s MongoDB namespace lets you enable mapping functionality in XML, as the following example shows:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <!-- Default bean name is 'mongo' --> <mongo:mongo-client <mongo:db-factory <!--.
17.5. Metadata-based Mapping
To take full advantage of the object mapping functionality inside the Spring Data MongoDB support, you should annotate your mapped objects with the
; @Document public class Person { @Id private ObjectId id; @Indexed private Integer ssn; private String firstName; @Indexed private String lastName; }. Document.
refers to the root of the given document.
@Field: Applied at the field level is bumped automatically on every update.
The mapping metadata infrastructure is defined in a separate omitted
17.5.2.files which can be achieved by compiling the source with debug information or using the new
-parameterscommand-line switch for javac in Java 8.
Otherwise a
MappingExceptionwill }.5. Text Indexes
Creating a text index allows accumulating several fields into a searchable full-text index. It is only possible to have one text index per collection, so all fields marked with
@TextIndexed are combined into this index. Properties can be weighted to influence the document score for ranking results. The default language for the text index is English. To change the default language, set the
language attribute to whichever language you want (for example,
@Document(language="spanish")). Using a property called
language or
@Language lets you define a language override on a per document base. The following example shows how to created a text index and set the language to Spanish:
@Document(language = "spanish") class SomeEntity { @TextIndexed String foo; @Language String lang; Nested nested; } class Nested { @TextIndexed(weight=5) String bar; String roo; }
17.5.6. Using master document.
The following example uses a DBRef to refer to a specific document that exists independently of the object in which it is referenced (both classes are shown in-line for brevity’s sake):
@Document public class Account { @Id private ObjectId id; private Float total; } @Document public class Person { @Id private ObjectId id; @Indexed private Integer ssn; @DBRef private List<Account> accounts; }
You need not use
@OneToMany or similar mechanisms because the List of objects tells the mapping framework that you want a one-to-many relationship. When the object is stored in MongoDB, there is a list of DBRefs rather than the
Account objects themselves...
18.
18.
18.3. Object Mapping
See Kotlin support for details on how Kotlin objects are materialized.:
18>
18. >
19" }
20. JMX support
The JMX support for MongoDB exposes the results of executing the 'serverStatus' command on the admin database for a single MongoDB server instance. It also exposes an administrative MBean,
MongoAdmin, that lets you perform administrative operations, such as dropping or creating a database. The JMX features build upon the JMX feature set available in the Spring Framework. See here for more details.
20.1. MongoDB JMX Configuration
Spring’s Mongo namespace lets you enable JMX functionality, as the following example shows:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <!-- Default bean name is 'mongo' --> <mongo:mongo-client>
The preceding code exposes several MBeans:
AssertMetrics
BackgroundFlushingMetrics
BtreeIndexCounters
ConnectionMetrics
GlobalLockMetrics
MemoryMetrics
OperationCounters
ServerInfo
MongoAdmin
The following screenshot from JConsole shows the resulting configuration:
21..
21.1. Using Spring Data MongoDB with MongoDB 3.0
The rest of this section describes how to use Spring Data MongoDB with MongoDB 3.0.
21 connection:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <mongo:mongo-client <mongo:client-options </mongo:mongo-client> </beans>
21.
21.5. Miscellaneous Details
This section covers briefly lists additional things to keep in mind when using the 3.0 driver:
IndexOperations.resetIndexCache()is no longer supported.
Any
MapReduceOptions.extraOptionis silently ignored.
WriteResultno longer holds error information but, instead, throws an
Exception.
MongoOperations.executeInSession(…)no longer calls
requestStartand
requestDone.
Index name generation has become a driver-internal operation. Spring Data MongoDB still uses the 2.x schema to generate names.
Some
Exceptionmessages differ between the generation 2 and 3 servers as well as between the MMap.v1 and WiredTiger storage engines..
|
https://docs.spring.io/spring-data/mongodb/docs/2.2.6.RELEASE/reference/html/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Tag:Build table
Online mysql, SQL Server table creation statement, JSON test data generation toolOnline mysql, SQL Server table creation statement, JSON test data generation tool Online mysql, SQL Server table creation statement, JSON test data generation tool This tool can generate JSON test data from SQL table creation statements, and supports MySQL and SQL Server table creation statements SQL: structured query language is a database query and programming […]
Elasticsearch7.6.2 – rest complex queryThen, in the previous article, query complex query is performed must This relationship, equivalent to and, must be satisfied at the same time should This relationship is equivalent to or. Just meet one Three pieces of data were found here must_not must_ Not quite so! So one piece of data is found […]
Hive basic knowledge of customer access store data analysis (UV, top3)Access log of known customers visiting the store user_id shop u1 a u2 b u1 b u1 a u3 c u4 b u1 a u2 c u5 b u4 b u6 c u2 c u1 b u2 a u2 a u3 a u5 a u5 a u5 a Create and guide tables create table visit(user_id […]
PostgreSQL FDW installation and useCompile and install FDW plug-ins locally cd contrib/postgres_fdw USE_PGX=1 make install Install extension locally postgres=# create extension if not exists postgres_fdw; CREATE EXTENSION postgres=# \dx List of installed extensions Name | Version | Schema | Description ————–+———+————+—————————————————- plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language postgres_fdw | 1.0 | public | foreign-data wrapper for […]
Online JSON to MySQL table creation statement toolOnline JSON to MySQL table creation statement tool Online JSON to MySQL table creation statement tool This tool can convert JSON objects into MySQL statements, and supports copying and downloading JSON: (JavaScript object notation) is a lightweight data exchange format. It is based on a subset of ECMAScript (JS specification formulated by the European Computer […]
Hive’s table generation functionHive’s table generation function1、 Expand functionExpand (Col): split the complex array or map structure in the hive column into multiple rows.Expand (array) generates one row for each element of the arrayExpand (map) generate a row for each key value pair in the map. Key is a column and value is a column Webpage GameData: 10 […]
What are libraries, tables, and super tables? How to use it? After 60, the uncle took out the cocoon and explained the data modeling of tdengineThe second bullet of the video tutorial is to quickly clarify the abstract concepts in tdengine and learn to plan the data model in the production scene. clicklink, get a video tutorial. Welcome to the data world of the Internet of things In a typical Internet of things scenario, there are generally many different types […]
One SQL question per dayDo you want to brush the questions together? catalogue 20210610 20210609 20210607 20210604 20210603 20210602 20210601 20210531 20210610 Title: Find the ID and name whose ratio of each group’s accumulation / total number of each group in name is greater than 0.6. Expected results: Create table statement: CREATE TABLE T0610 ( ID INT, NAME VARCHAR […]
Mongodb dynamic table creation scheme (official native driver)Mongodb dynamic table creation scheme (official native driver) Requirement premise: table name is dynamic, table structure is static, and library is fixed 1. Import related dependencies org.mongodb mongodb-driver 3.11.2 org.mongodb bson 3.11.2 org.mongodb mongodb-driver-core 3.11.2 2. Defining entities @Data public class Person { private String name; private int sex; private String address; } 3. Set […]
Remember the recovery process after a false deletion of ‘ibdata1’After a long time of searching for information online, it is finally solved and recordedExplain the situationa. The database is not backed upb. The server does not have a snapshotc. After deletion, restart MySQL multiple times and reboot with the serverd. Passrm -rfDeletedibdata1,ib_logfile0,ib_logfile1,.frm,.ibdThese files exist Let’s talk about the process of exploring solutions 1. UtilizationextundeleteFile […]
GoldenGate downstream configuration1. […]
|
https://developpaper.com/tag/build-table/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
#include <script/standard.h>
#include <crypto/sha256.h>
#include <hash.h>
#include <pubkey.h>
#include <script/interpreter.h>
#include <script/script.h>
#include <util/strencodings.h>
#include <string>
Go to the source code of this file.
Definition at line 17 of file standard.c.
Definition at line 104.
Definition at line 94 of file standard.cpp.
Test for "small positive integer" script opcodes - OP_1 through OP_16.
Definition at line 89 of file standard.cpp.
Check whether a CTxDestination is a CNoDestination.
Definition at line 332 of file standard.cpp.
Definition at line 99 of file standard.cpp.
Definition at line 124 of file standard.cpp.
Definition at line 66 of file standard.cpp.
Definition at line 79.
A data carrying output is an unspendable output containing data.
The script type is designated as TxoutType::NULL_DATA.
Definition at line 19 of file standard.cpp.
Maximum size of TxoutType::NULL_DATA scripts that this node considers standard.
Definition at line 20 of file standard.cpp.
|
https://doxygen.bitcoincore.org/standard_8cpp.html
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
This tutorial will show you how to use React with Laravel in a way that lets you sprinkle React into a legacy Laravel codebase and blade templates. We will not be creating an SPA or using Create React App.
You can view and download the full sample project.
After going through this guide...
- We'll be able to add React components into blade files.
- We'll have reusable components that can be combined to make complex components.
- We'll use webpack (Laravel Mix) to build our files.
- We will not have an SPA.
- React will not be served with SSR (Server Side Rendering).
- We will not be able to use the components as inline components like is popular with Vue.
Background
I was inspired to write this guide because recently, I added React into a legacy project of mine, and I didn't want to rewrite the whole project to turn it into a React SPA. Instead, I wanted to reap the benefits of writing new React components that I could start sprinkling into my project right away.
There are a lot of ways to get React to load and render components, and this is simply the method I choose when working on my project. I'll walk you through how and why I chose this setup.
First thing's first, navigate to your existing or new Laravel project.
Install Dependencies
npm i react react-dom
Folder Structure
In the
/resources/js/ folder, we'll add a new folder where all of our React files will live. We want to keep these files all together and not mixed in with other JS files. This will keep the project organized, make some of the webpack setup easier, and allow for the use of other technologies.
In my case, I created a source folder for all of my React files at
/resources/js/src/.
I have the following folders in the
src folder.
- /src/components
- /src/hooks
- /src/layouts
- /src/pages
Your exact folders may vary depending on your needs and organizational style, but this could be a good place to start.
Laravel Mix - Webpack setup
Aliases
This step is optional, but I think it makes the project a lot easier and cleaner to work with. Defining aliases in the webpack configs will allow you to refer to your files without needing to know where in the file path you are.
For example, if you want to refer to your theme file from a component deep in the folder structure, without aliases, you might write
import theme from '../../../themes/theme.js'
With aliases, you would simply write
import theme from 'themes/theme.js'
To use aliases, you'll need to add them to your mix file
webpack.mix.js.
mix.webpackConfig({ resolve: { alias: { //adding react and react-dom may not be necessary for you but it did fix some issues in my setup. 'react' : path.resolve('node_modules/react'), 'react-dom' : path.resolve('node_modules/react-dom'), 'components' : path.resolve('resources/js/src/components'), 'pages' : path.resolve('resources/js/src/pages'), 'themes' : path.resolve('resources/js/src/themes'), 'layouts' : path.resolve('resources/js/src/layouts'), 'hooks' : path.resolve('resources/js/src/hooks'), }, }, });
Bundle and Extract React
After you've added your aliases, you'll need to tell webpack to bundle your files and extract libraries. In the same
webpack.mix.js file, add the following line. Notice that we're using
mix.react and we are using
app.js. If your app.js file already has legacy code, you could create a new app file for the React components.
mix.react('resources/js/app.js', 'public/js').extract(['react', 'react-dom']);
Rendering the components
This is where things get tricky.
Even though we aren't building an SPA, we still want to be able to build complex components that reuse multiple components. We're going to be mixing React components into blade files, and it would be great if we could retain some of the JS feel for the components so that we know we're referring to a React component, and it's not just a random div with an id.
Instead of referring to components as
<div id="MyComponent" />
We are instead going to use
<MyComponent />.
This isn't valid html, so if you want to use the id method, all you'll have to do is uncomment one of the lines in the ReactRenderer.js file coming up.
Create a simple component
We need a simple component to test with, and this is about as simple as they get.
Create a new file with the following code in
src/components/MySimpleComponent.js.
import React from 'react'; export default function MySimpleComponent(props) { return ( <> <h2>This was loaded from a React component.</h2> </> ); }
Set up app.js
Next, we need to set up the app.js file. These are the lines that you'll need to add to the app.js file.
require('./bootstrap') import React from 'react' import ReactRenderer from './src/ReactRenderer' import MySimpleComponent from 'components/MySimpleComponent' const components = [ { name: "MySimpleComponent", component: <MySimpleComponent />, }, ] new ReactRenderer(components).renderAll()
A little explanation.
In our app.js file we will import any components that we want to use within the blade files and add them to an array. We'll use the 'name' element to find all the references to the component in the blade files, and we'll use the 'component' element to render it.
Next we need to add the
ReactRenderer.js file.
import React from 'react'; import ReactDOM from 'react-dom'; export default class ReactRenderer { constructor(components) { this.components = components; } renderAll() { for (let componentIndex = 0; componentIndex < this.components.length; componentIndex++) { // Use this to render React components in divs using the id. Ex, <div id="MySimpleComponent"></div> // let container = document.getElementById(this.components[componentIndex].name); // Use this to render React components using the name as the tag. Ex, <MySimpleComponent></MySimpleComponent> let containers = document.getElementsByTagName(this.components[componentIndex].name) if (containers && containers.length > 0) { for (let i = containers.length - 1; i >= 0; i--) { let props = this.getPropsFromAttributes(containers[i]); let element = this.components[componentIndex].component; if (props !== null) { element = React.cloneElement( element, props ) } ReactDOM.render(element, containers[i]); } } } } // Turns the dom element's attributes into an object to use as props. getPropsFromAttributes(container) { let props = {}; if (container.attributes.length > 0) { for (let attributeIndex = 0; attributeIndex < container.attributes.length; attributeIndex++) { let attribute = container.attributes[attributeIndex]; if (this.hasJsonStructure(attribute.value)) { props[attribute.name] = JSON.parse(attribute.value); } else { props[attribute.name] = attribute.value; } } return props; } return null; } hasJsonStructure(str) { if (typeof str !== 'string') return false; try { const result = JSON.parse(str); const type = Object.prototype.toString.call(result); return type === '[object Object]' || type === '[object Array]'; } catch (err) { return false; } } }
You can read through the code to more fully understand what is happening. At its core, it's just finding all DOM elements that match your components and rendering them with any props included as well.
Put it to work
Now that we have everything in place, we can start to build more components and add them to blade files.
Here are some examples of adding it to blade files.
... <MySimpleComponent></MySimpleComponent> @guest <MySecondComponent title="This is using blade's {{'@'}}guest helper to show to 'Guests' only" /> @endguest @auth {{-- Remember to use "json_encode" to pass in objects --}} <MySecondComponent title="This is showing to authed users" user="{{ json_encode(auth()->user()) }}" /> @endauth ...
In the source code for this tutorial, I've also included a second component that accepts a
title prop. This code is a snippet from the
app.blade.php file in the source code.
If you download and run the sample project, you will get something that looks like this.
I encourage you to download the repo, explore, and make modifications to test it out.
Discussion (1)
Why do you need the react renderer? Is there anyway around this, it just seams hacky. Is there a folder structure that you would recommend for react when using it with Laravel?
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/joeczubiak/laravel-react-3ic1
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
PIR example doesn`t wake Mega 2560?
Hello all,
I just wired and tested the Motion Sensor example, (),
but my Mega 2560 wont wake from sleep because of the interrupt when motion is detected.
It always finishes the sleeping. I serialprinted the sleeps output, it always states -2.
The motion sensor works, I can see that, when I shorten the sleep time (tested with 1 sec).
I also tested the output of the PIR, it generates more than 3.7 volts.
I`m using the latest library and IDE. I made my best to read all connected forum posts but didn't find a solution.
Anyone please can check it if it works for him?
Any help would be appreciated!
Thanks!
@vobi which pin (DIGITAL_INPUT_SENSOR) are you using?
The following pins on the Mega has support for interrupt: 2, 3, 18, 19, 20, 21
@mfalkvidd
I follow exactly the example, so D3.
But D2 also doesn't work.
Same problem here. I tried pretty much anything without success so far. I still need to check pins other than D2 and D3
Thanks, I will test the sketch on UNO.
So, I have tested the following:
Sketch on Mega 2560,
pin 3 with int1 - not working
pin 3 with int5 - not working
pin2, int0,4 - not working
pins 18-21, (digitalPinToInterrupt(DIGITAL_INPUT_SENSOR)) - not working
Sketch on UNO - works as it should!
So definitely something strange going on with 2560s interrupts.
Any advice?
It looks like the mega is not meant to be used as a sleeping node
@gohan nice catch. The return value -2 is defined as MY_SLEEP_NOT_POSSIBLE
Then what does it do when sleep is issued?
Because the defined sleep-time still has to pass...and it does "nothing" till that, and that something cannot be broken by interrupt?
What should be the good method to catch an interrupt, or a motion, without going thru the main loop crazy fast?
Why cant it sleep?
- scalz Hardware Contributor last edited by
@vobi
i think mega2560 can go powerdown, but have not the hardware for testing.
What you could do for isolting the issue, is removing sleep and check if the interrupt triggers well (with a bool flag)
I think the interrupt is triggered but it is just ignored by the sleep function, if the code I've been advised to use to check the interrupt is correct
I have tested it, the interrupt indeed works fine.
So yes, because there is no sleep, it cannot be interrupted.
I still don`t understand why it cannot sleep.
And IMHO the interrupt should break the not-so-sleep-but-wait cycle anyway.
What do You think?
I don't have this much deep knowledge on the chip, but my best guess is that there must be something different in the MEGA chip than the other on the at328p on the uno and mini pro. It could very well be that the MEGA is not designed to be a low power device, but I'll let others say something more appropriate
According to Mr. Nick Gammon, thats not the case (or I`m missing something):
Running from a 9V battery through the "power in" plug, it draws about 50 mA.
Running on 5V through the +5V pin, it draws about 49 mA.
(Note: around 68 mA on a Mega 2560 board)
Now we'll try putting it to sleep:
Sketch B
#include <avr/sleep.h> void setup () { set_sleep_mode (SLEEP_MODE_PWR_DOWN); sleep_enable(); sleep_cpu (); } // end of setup void loop () { }
Now the Uno draws 34.5 mA. A saving, but not a lot.
(Note: around 24 mA on a Mega 2560 board)
Also see:
|
https://forum.mysensors.org/topic/6387/pir-example-doesn-t-wake-mega-2560/1
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
gpio — General
Purpose Input/Output
gpio* at ath?
gpio* at bcmgpio? (arm64, armv7)
gpio* at elansc? (i386)
gpio* at glxpcib? (i386)
gpio* at gscpcib? (i386)
gpio* at isagpio?
gpio* at nsclpcsio?
gpio* at omgpio? (armv7)
gpio* at pcagpio?
gpio* at pcaled?
gpio* at skgpio? (amd64, i386)
gpio* at sxipio? (arm64, armv7)
gpio0 at voyager? (loongson)
#include <sys/types.h>
#include <sys/gpio.h>
#include <sys/ioctl.h>
The
gpio device attaches to the GPIO
controller and provides a uniform programming interface to its pins.
Each GPIO controller with an attached
gpio
device has an associated device file under the /dev
directory, e.g. /dev/gpio0. Access from userland is
performed through ioctl(2) calls on these
devices.
The layout of the GPIO device is defined at securelevel 0, i.e.
typically during system boot, and cannot be changed later. GPIO pins can be
configured and given a symbolic name and device drivers that use GPIO pins
can be attached to the
gpio device at securelevel 0.
All other pins will not be accessible once the runlevel has been raised.
The following structures and constants are defined in the
<sys/gpio.h> header
file:
GPIOINFOstruct gpio_info
struct gpio_info { int gpio_npins; /* total number of pins available */ };
GPIOPINREADstruct gpio_pin_op
#define GPIOPINMAXNAME 64 struct gpio_pin_op { char gp_name[GPIOPINMAXNAME]; /* pin name */ int gp_pin; /* pin number */ int gp_value; /* value */ };
The gp_name or gp_pin field must be set before calling.
GPIOPINWRITEstruct gpio_pin_op
GPIO_PIN_LOW(logical 0) or
GPIO_PIN_HIGH(logical 1). On return, the gp_value field contains the old pin state.
GPIOPINTOGGLEstruct gpio_pin_op
GPIOPINSETstruct gpio_pin_set
#define GPIOPINMAXNAME 64 struct gpio_pin_set { char gp_name[GPIOPINMAXNAME]; /* pin name */ int gp_pin; /* pin number */ int gp_caps; /* pin capabilities (ro) */ int gp_flags; /* pin configuration flags */ char gp_name2[GPIOPINMAXNAME]; /* new name */ };
The gp_flags field is a combination of the following flags:
GPIO_PIN_INPUT
GPIO_PIN_OUTPUT
GPIO_PIN_INOUT
GPIO_PIN_OPENDRAIN
GPIO_PIN_PUSHPULL
GPIO_PIN_TRISTATE
GPIO_PIN_PULLUP
GPIO_PIN_PULLDOWN
GPIO_PIN_INVIN
GPIO_PIN_INVOUT
Note that the GPIO controller may not support all of these flags. On return the gp_caps field contains flags that are supported. If no flags are specified, the pin configuration stays unchanged.
Only GPIO pins that have been set using GPIOPINSET will be accessible at securelevels greater than 0.
GPIOPINUNSETstruct gpio_pin_set
GPIOATTACHstruct gpio_attach
struct gpio_attach { char ga_dvname[16]; /* device name */ int ga_offset; /* pin number */ u_int32_t ga_mask; /* binary mask */ };
GPIODETACHstruct gpio_attach
GPIOATTACHioctl(2). The ga_offset and ga_mask fields of the gpio_attach structure are ignored.
The
gpio device first appeared in
OpenBSD 3.6.
The
gpio driver was written by
Alexander Yurchenko
<grange@openbsd.org>.
Runtime device attachment was added by Marc Balmer
<mbalmer@openbsd.org>.
Event capabilities are not supported.
|
https://man.openbsd.org/gpio
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
This walkthrough demonstrates how to import a .NET assembly, and automatically generate Excel add-in functions for its methods and properties, by adding XLL export attributes to the .NET code.
You can find a copy of all the code used in this walkthrough
in the
Walkthroughs/NetWrapAttr folder in your
XLL+ installation, along with a demonstration spreadsheet,
NetWrapAttr.xls.
Please note that this walkthrough is not available under XLL+ for Visual Studio .NET. XLL+ interacts with .NET using C++/CLI, which is only available under Visual Studio 2005 and above.
Please note that if you are using Visual Studio 2005, you must have Service Pack 1 installed in order to build a valid XLL.
If the add-in fails to load into Excel at run-time, please see the technical note .NET requirements.
In order to build
MyLib.dll, you need the C# project system
to be installed on your computer.
From the File menu, select New and then Project to open the New Project dialog.
Select the XLL+ 7 Excel Add-in project template from the list of Visual C++ Projects, and enter NetWrapAttr in the Name box. Under Location, enter an appropriate directory in which to create the project.
Put a tick next to "Create directory for solution", and click OK.
In the "Application Settings" page of the XLL+ AppWizard, put a tick against Common Language Runtime Support (/clr).
Click Finish to create the new add-in project.
For more information about creating projects, see Creating an add-in project in the XLL+ User Guide.
Now create a new C# project and add it to the solution.
In the Visual Studio Solution Explorer window, right-click the solution node and select Add/New Project....
Select the Visual C# project type in the tree on the left-hand side, and select Class Library in the list on the right. (In Visual Studio 2017, select Class Library (.NET Framework).)
Enter
MyLib for the project Name,
and accept the default Location. Press OK.
Open the project properties of MyLib, and, in the Build page, towards the bottom of the page, put a tick against XML documentation file.
If you are using Visual Studio 2015 or above, then select the Application
page of the project properties, and set the Target framework
to be
.NET Framework 4. This will make the .NET project compatible with
the XLL+ C++ project.
Alternatively, you can change the C++ project to use .NET Framework 4.5 or above as its
target framework.
Close the project in Visual Studio, or unload it, and directly edit the project file
NetWrapAttr.vcxproj.
Locate the element
<PropertyGroup Label="Globals">
and insert the desired target framework, e.g.:
<TargetFrameworkVersion>v4.5.2</TargetFrameworkVersion>
See How to: Modify the Target Framework and Platform Toolset
for details.
Open
Class1.cs and replace all the code with the
code below.
using System; using System.ComponentModel; namespace MyLib { /// <summary> /// Contains static calls to test parameter & return types /// </summary> public static class TestClass { /// <summary> /// returns the concatenation of the arguments. /// </summary> /// <param name="a">is a string</param> /// <param name="b">is a number</param> /// <param name="c">is an integer</param> /// <param name="d">is a boolean</param> /// <returns></returns> public static string T1(string a, double b, int c, bool d) { return String.Format("a={0};b={1};c={2};d={3}", a, b, c, d); } /// <summary> /// returns an array of strings which are concatenations /// of the arguments. /// </summary> /// <param name="a">is a vector of strings</param> /// <param name="b">is a vector of numbers. It must be the same length as a.</param> /// <param name="c">is a vector of integers. It must be the same length as a.</param> /// <param name="d">is a vector of booleans. It must be the same length as a.</param> public static string[] T2(string[] a, double[] b, int[] c, bool[] d) { if (a.Length != b.Length) throw new ArgumentException(String.Format("Expected {0} items for b", a.Length)); if (a.Length != c.Length) throw new ArgumentException(String.Format("Expected {0} items for c", a.Length)); if (a.Length != d.Length) throw new ArgumentException(String.Format("Expected {0} items for d", a.Length)); string[] res = new string[a.Length]; for (int i = 0; i < res.Length; i++) res[i] = T1(a[i], b[i], c[i], d[i]); return res; } /// <summary> /// is a test function which adds two integers. /// </summary> /// <param name="a">is the first integer.</param> /// <param name="b">is the second integer.</param> /// <returns></returns> public static int T3(int a, int b) { return a + b; } /// <summary> /// This function will not be exported, because it is not marked /// with a XllFunction attribute. /// </summary> /// <param name="a">is an argument</param> /// <param name="b">is an argument</param> /// <returns>the sum of the inputs</returns> public static int DontExportMe(int a, int b) { return a + b; } /// <summary> /// Returns the type of an argument of unknown type /// </summary> /// <param name="arg">A value of any type</param> /// <returns>The value and type of the input</returns> public static string VarTypeScalar(object arg) { return String.Format("{0} ({1})", arg, arg == null ? "none" : arg.GetType().Name); } /// <summary> /// Returns the types of a vector of values of unknown type /// </summary> /// <param name="arg">is a vector of values of unknown type</param> /// <returns>a vector containing the values and types of the inputs</returns> public static string[] VarTypeVector(object[] arg) { string[] res = new string[arg.Length]; for (int i = 0; i < arg.Length; i++) res[i] = VarTypeScalar(arg[i]); return res; } /// <summary> /// Returns the types of a matrix of values of unknown type /// </summary> /// <param name="arg">is a matrix of values of unknown type</param> /// <returns>a matrix containing the values and types of the inputs</returns> public static string[,] VarTypeMatrix(object[,] arg) { string[,] res = new string[arg.GetLength(0), arg.GetLength(1)]; for (int i = 0; i < arg.GetLength(0); i++) for (int j = 0; j < arg.GetLength(1); j++) res[i, j] = VarTypeScalar(arg[i, j]); return res; } /// <summary> /// Returns a result of variable type /// </summary> /// <param name="type">indicates the type of value to return</param> /// <returns>a value of variable type</returns> public static object VarRet(int type) { switch (type) { case 0: return "A string"; case 1: return true; case 2: return (double)123.456; } throw new ArgumentException("type must be 0, 1 or 2", "type"); } /// <summary> /// A public enumeration /// </summary> public enum Periods { Monthly = 12, Quarterly = 4, SemiAnnual = 2, Annual = 1 } /// <summary> /// Calculates the discount factor for a given period and interest rate /// </summary> /// <param name="rate">is the interest rate</param> /// <param name="period">is a time period</param> /// <returns></returns> public static double DiscountFactor(double rate, Periods period) { return Math.Exp((-1.0 / ((double)(int)period)) * rate); } } }
To add attributes to the .NET assembly, follow these steps:
Select the MyLib project in the Solution Explorer, and add a reference to the XLL+ runtime library.
Right-click MyLib and click on "Add Reference...".
Select the "Browse" page and navigate to the
bin
sub-directory of your XLL+ installation folder, typically
[Program Files]\Planatech\XllPlus\7.0\VS8.0\bin.
Psl.XL7.XnReflect.Runtime.dll and press OK.
In
Class1.cs, add the following line to the list of
using statements:
using System; using System.ComponentModel; using XllPlus;
Add a
XllClassExport attribute to the class:
[XllClassExport(Category=XlCategory.UserDefined, Prefix="Test.")] public class TestClass { ...
This attribute will control the category and prefix of all exported methods of the class. Unless otherwise specified, exported functions will be named Test.Function (where Function represents the method name) and will appear in the "User Defined" category in the Excel Formula Wizard.
Add an
XllFunction attribute to the method
TestClass.T1:
[XllFunction] public static string T1(string a, double b, int c, bool d) { return String.Format("a={0};b={1};c={2};d={3}", a, b, c, d); }
This attribute will cause the function to be exported, as an Excel
add-in function named
Test.T1.
Add an
XllFunction attribute, with parameters, to the method
TestClass.T2:
[XllFunction("TestArrays", Prefix="")] public static string[] T2(string[] a, double[] b, int[] c, bool[] d) { ... }
This attribute will cause the function to be exported as an Excel
add-in function named
TestArrays, with no prefix.
Add an
XllFunction attribute to the method
TestClass.T3:
[XllFunction] public static int T3(int a, int b) { return a + b; }
This attribute will cause the function to be exported as an Excel
add-in function named
TestClass.T3.
Add
XllArgument attributes to the arguments of
TestClass.T3:
[XllFunction] public static int T3( [XllArgument("First", Flags=XlArgumentFlags.Optional, DefaultValue="100")] int a, [XllArgument("Second", Flags=XlArgumentFlags.Optional, DefaultValue="-99")] int b) { return a + b; }
These attributes will change the names of the arguments a and b to First and Second respectively. They also provide default values for each argument which will be used if the argument is omitted.
Note that no attribute is specified for the method
TestClass.DontExportMe.
Consequently, the function is not exported to Excel.
Add
XllFunction attributes to the methods
TestClass.VarTypeScalar,
TestClass.VarTypeVector and
TestClass.VarTypeMatrix:
[XllFunction] public static string VarTypeScalar(object arg) { return String.Format("{0} ({1})", arg, arg == null ? "none" : arg.GetType().Name); } [XllFunction] public static string[] VarTypeVector(object[] arg) { ... } [XllFunction] public static string[,] VarTypeMatrix(object[,] arg) { ... }
The attribute will cause each of these methods to be exported to Excel.
The arguments of each method are
object types, and these are treated
specially by the code importer; they can contain one or more values of any Excel value type:
string, number, boolean or empty.
Each cell value will be converted to a value of the appropriate type
(System.String, System.Double, System.Boolean or null)
before the .NET method is invoked.
Add an
XllFunction attribute to the method
TestClass.VarRet:
[XllFunction] public static object VarRet(int type) { ... }
This attribute will cause the function to be exported as an Excel
add-in function named
TestClass.VarRet.
Add an
XllArgument attribute to the type argument of
TestClass.VarRet:
[XllFunction] public static object VarRet( [XllArgument(Flags=XlArgumentFlags.ShowValueListInFormulaWizard, ValueList="0,String;1,Boolean;2,Number")] int type) { ... }
This attribute will add a value list for the argument. The value list will appear in a drop-down list when the function appears in the Excel Formula Wizard:
Add an
XllFunction attribute to the method
TestClass.DiscountFactor:
[XllFunction] public static double DiscountFactor(double rate, Periods period) { return Math.Exp((-1.0 / ((double)(int)period)) * rate); }
This attribute will cause the function to be exported as an Excel
add-in function named
TestClass.DiscountFactor.
Add an
XllArgument attribute to the period argument of
TestClass.DiscountFactor:
[XllFunction] public static double DiscountFactor(double rate, [XllArgument(Flags = XlArgumentFlags.ShowValueListInFormulaWizard)] Periods period) { return Math.Exp((-1.0 / ((double)(int)period)) * rate); }
This attribute will add a value list for the argument. The value list will appear in a drop-down
list when the function appears in the Excel Formula Wizard.
Because
Periods is an
Enum type, there is no need to specify the value list:
it is generated automatically by the assembly importer.
Before we can import the .NET assembly into the XLL+ project, we must create a reference to it.
In Solution Explorer, find the node for the project NetWrapAttr and right-click it. Select Properties.
In the Project Properties window, select the node "Common Properties/References", and click the "Add New Reference..." button.
In Visual Studio 2015 or 2017, right-click the the "References" node beneath the project node and click "Add Reference...".
You may have to wait a long time for the "Add Reference" window to appear.
In the "Add Reference" window, select the tab "Projects", select "MyLib" in the list, and press OK.
In the Project Properties window, click OK to save your changes.
You should now build the solution (Shift+Ctrl+B), and make sure that both projects built successfully. Make sure that the build is set to "Win32", "x86" or "x64"; avoid "Mixed Platforms". Also, if necessary, use the Build/Configuration Manager... command to open the solution configuration window, and ensure that both projects have a tick in the "Build" column.
Note that on Visual Studio 2015 and 2017, where there is no "Common Properties/References" node in the Project Properties, you should instead use the Solution Explorer window: select the "References" node under the project node, and right-click it, then select the "Add Reference..." command.
In these steps you will set up a link between the two projects. Once it is done, the build process will include an extra step, which inspects the .NET assembly and generates wrapper add-in functions for all the exported methods and properties. Whenever the .NET assembly is changed, this code will be regenerated during the next build.
Open the main source file of the add-in project,
NetWrapAttr.cpp.
Activate the XLL Add-ins tool window (View/Other Windows/XLL Add-ins).
Click on Import Assemblies... in the Tools menu.
In the list of assemblies, put a check against
MyLib.
Click OK to save your changes and return to Visual Studio.
If you inspect the add-in project in Solution Explorer, you will see
that two new files have been added. In the folder "NetWrap/Imported files"
are
MyLib.import.xml and
MyLib_Wrappers.cpp.
MyLib.import.xml contains instructions for importing the
.NET assembly. When it is built, the XLL+ reflection tool, XNREFLECT.EXE,
is run, which generates the C++ code for the XLL add-in functions,
and write it to
MyLib_Wrappers.cpp.
Build the project.
If you are working with a 64-bit version of Excel, then be sure to select
the 64-bit platform
x64 before you build.
You will observe a new step in the build process,
when
MyLib_Wrappers.cpp is generated.
Inspect
MyLib_Wrappers.cpp. Each of the functions
is simply a wrapper for a .NET method, and contains code to
validate inputs, translate them to CLR data forms, invoke the .NET method
and finally convert the result to Excel data form.
If any method throws an exception, it will be caught and
reported back to Excel.
Use F5 to run and debug the add-in. Place break-points in the C# code, and you will see that the debugger seamlessly steps between the C++ and C# modules.
See Importing .NET assemblies in the User Guide for more details.
Importing a .NET Assembly | Samples and Walkthroughs
|
https://planatechsolutions.com/xllplus7-online/howto_import_attrs.htm
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Android 9 introduces a new SystemApi interface called ImsService to help you implement IP Multimedia Subsystem (IMS). The ImsService API is a well-defined interface between the Android platform and a vendor or carrier-provided IMS implementation.
Figure 1. ImsService overview
By using the ImsService interface, the IMS implementer can provide important signaling information to the platform, such as IMS registration information, SMS over IMS integration, and MmTel feature integration to provide voice and video calling. The ImsService API is an Android System API as well, meaning it can be built against the Android SDK directly instead of against the source. An IMS application that has been pre-installed on the device can also be configured to be Play Store updatable.
Examples and source
Android provides an application on AOSP that implements portions of the ImsService API for testing and development purposes. You can find the application at /testapps/ImsTestService.
You can find the documentation for the ImsService API in ImsService and in the other classes in the API.
Implementation
The ImsService API is a high level API that lets you implement IMS in many ways, depending on the hardware available..
Compatibility with older IMS implementations
Although Android 9 includes the ImsService API,
devices using an older implementation for IMS are not able to support the API.
For these devices, the older AIDL interfaces and wrapper classes have been moved
to the
android.telephony.ims.compat namespace. When upgrading to Android
9, older devices must do the following to continue
the support of the older API.
- Change the namespace of the ImsService implementation to extend from the
android.telephony.ims.compatnamespace API.
- Modify the ImsService service definition in AndroidManifest.xml to use the
android.telephony.ims.compat.ImsServiceintent-filter action, instead of the
android.telephony.ims.ImsServiceaction.
The framework will then bind to the ImsService using the compatibility layer
provided in Android 9 to work with the legacy
ImsService implementation.
ImsService registration with the framework
The ImsService API is implemented as a service, which the Android framework
binds to in order to communicate with the IMS implementation. Three steps are
necessary to register an application that implements an ImsService with the
framework. First, the ImsService implementation must register itself with the
platform using the
AndroidManifest.xml of the application; second, it must
define which IMS features the implementation supports (MmTel or RCS); and third,
it must be verified as the trusted IMS implementation either in the carrier
configuration or device overlay.
Service definition
The IMS application registers an ImsService with the framework by adding a
service entry into the manifest using the following format:
<service android: ... <intent-filter> <action android: </intent-filter> </service>
The
service definition in
AndroidManifest.xml defines the following
attributes, which are necessary for correct operation:
directBootAware="true": Allows the service to be discovered and run by
telephonybefore the user unlocks the device. The service can't access device encrypted storage before the user unlocks the device. For more information, see Support Direct Boot mode and File-Based Encryption.
persistent="true": Allows this service to be run persistently and not be killed by the system to reclaim memory. This attribute ONLY works if the application is built as a system application.
permission="android.permission.BIND_IMS_SERVICE": Ensures that only a process that has had the
BIND_IMS_SERVICEpermission granted to it can bind to the application. This prevents a rogue app from binding to the service, since only system applications can be granted the permission by the framework.
The service must also specify the
intent-filter element with the action
android.telephony.ims.ImsService. This allows the framework to find the
ImsService.
IMS feature specification
After the ImsService has been defined as an Android service in AndroidManifest.xml, the ImsService must define which IMS features it supports. Android currently supports the MmTel and RCS features, however only MmTel is integrated into the framework. Although there are no RCS APIs integrated into the framework, there are still advantages to declaring it as a feature of the ImsService.
Below are the valid features defined in
android.telephony.ims.
FEATURE_MMTEL
The
ImsService implements the IMS MMTEL feature, which contains support for
all IMS media (IR.92 and IR.94 specifications) except emergency attach to the
IMS PDN for emergency calling. Any implementation of
ImsService that wishes to
support the MMTEL features should extend the
android.telephony.ims.MmTelFeature base class and return a custom
MmTelFeature implementation in
ImsService#createMmTelFeature.
FEATURE_EMERGENCY_MMTEL
Declaring this feature only signals to the platform that emergency attach to the
IMS PDN for emergency services is possible. If this feature is not declared for
your
ImsService, the platform will always default to Circuit Switch Fallback
for emergency services. The
FEATURE_MMTEL feature must be defined for this
feature to be defined.
FEATURE_RCS
The ImsService API does not implement any IMS RCS features, but the
android.telephony.ims.RcsFeature base class can still be useful. The framework
automatically binds to the ImsService and calls
ImsService#createRcsFeature
when it detects that the package should provide RCS. If the SIM card associated
with the RCS service is removed, the framework automatically calls
RcsFeature#onFeatureRemoved and then cleans up the
ImsService associated
with the RCS feature. This functionality can remove some of the custom
detection/binding logic that an RCS feature would otherwise have to provide.
Registration of supported features
The telephony framework first binds to the ImsService to query the features that
it supports using the
ImsService#querySupportedImsFeatures API. After the
framework calculates which features the ImsService will support, it will call
ImsService#create[...]Feature for each feature that the ImsService will be
responsible for..
Figure 2: ImsService initialization and binding
Framework detection and verification of ImsServices
Once the ImsService has been defined correctly in AndroidManifest.xml, the platform must be configured to (securely) bind to the ImsService when appropriate. There are two types of ImsServices that the framework binds to:
- Carrier "override" ImsService: These ImsServices are preloaded onto the device but are attached to one or more cellular carriers and will only be bound when a matching SIM card is inserted. This is configured using the
key_config_ims_package_overrideCarrierConfig key.
- Device "default" ImsService: This is the default ImsService that is loaded onto the device by an OEM and should be designed to provide IMS services in all situations when a carrier ImsService is not available and is useful in situations where the device has no SIM card inserted or the SIM card inserted does not have a carrier ImsService installed with it. This is defined in the device overlay
config_ims_packagekey.
Both of these ImsService implementations are required to be System applications, or to reside in the /system/priv-app/ folder to grant the appropriate user-granted permissions (namely phone, microphone, location, camera, and contacts permissions). By verifying whether the package name of the IMS implementation matches the CarrierConfig or device overlay values defined above, only trusted applications are bound.
Customization
The ImsService allows the IMS features that it supports (MMTEL and RCS) to be
enabled or disabled dynamically via updates using the
ImsService#onUpdateSupportedImsFeatures method. This triggers the framework to
recalculate which ImsServices are bound and which features they support. If the
IMS application updates the framework with no features supported, the ImsService
will be unbound until the phone is rebooted or a new SIM card is inserted that
matches the IMS application.
Binding priority for multiple ImsService
The framework cannot support binding to all of the possible ImsServices that are preloaded onto the device and will bind to up to two ImsServices per SIM slot (one ImsService for each feature) in the following order:
- The ImsService package name defined by the CarrierConfig value
key_config_ims_package_overridewhen there is a SIM card inserted.
- The ImsService package name defined in the device overlay value for
config_ims_packageincluding the case where there is no SIM card inserted. This ImsService MUST support the Emergency MmTel feature.
You must either have the package name of your ImsService defined in the CarrierConfig for each of the carriers that will use that package or in the device overlay if your ImsService will be the default, as defined above.
Let's break this down for each feature. For a device (single or multi-SIM) with a single SIM card loaded, two IMS features are possible: MMTel and RCS. The framework will try to bind in the order defined above for each feature and if the feature is not available for the ImsService defined in the Carrier Configuration override, the framework will fallback to your default ImsService. So, for example, the table below describes which IMS feature the framework will use given three IMS applications implementing ImsServices installed on a system with the following features:
- Carrier A ImsService supports RCS
- Carrier B ImsService supports RCS and MMTel
- OEM ImsService supports RCS, MMTel, and Emergency MMTel
Validation
Tools for verifying the IMS implementation itself are not included since the IMS specifications are extremely large and use special verification equipment. The tests can only verify that the telephony framework properly responds to the ImsService API.
|
https://source.android.com/devices/tech/connect/ims
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Hwnd
Source Class
Definition
Presents Windows Presentation Foundation (WPF) content in a Win32 window.
public ref class HwndSource : System::Windows::PresentationSource, IDisposable, System::Windows::Interop::IKeyboardInputSink, System::Windows::Interop::IWin32Window
public class HwndSource : System.Windows.PresentationSource, IDisposable, System.Windows.Interop.IKeyboardInputSink, System.Windows.Interop.IWin32Window
type HwndSource = class inherit PresentationSource interface IDisposable interface IWin32Window interface IKeyboardInputSink
Public Class HwndSource Inherits PresentationSource Implements IDisposable, IKeyboardInputSink, IWin32Window
- Inheritance
- HwndSource
- Implements
-
Remarks
Important
Many members of this class are unavailable in the Internet security zone..
|
https://docs.microsoft.com/en-gb/dotnet/api/system.windows.interop.hwndsource?view=netframework-4.7.2
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
From: Beman Dawes (bdawes_at_[hidden])
Date: 2001-06-23 09:40:06
At 04:16 AM 6/23/2001, Jens Maurer wrote:
>The special functions, octonions, and quaternions by Hubert Holin
>are now added to the CVS.
>
>They're in a new sub-directory libs/math (and boost/math), expected
>to contain other libraries in the future. However, I just noticed
>that the namespace is still boost::octonion (and not
>boost::math::octonion). Is this something to worry about regarding
>consistency?
It seems to me namespaces should follow an "optimal branching" strategy
similar to the one John Maddock has described for libraries. The hierarchy
should neither be so flat that there are large numbers of entries at each
level, nor so tall and deep that there are only one entry at many levels.
So boost::math seems right to me for the 1st and 2nd levels. If octonion's
introduce many names then there should be a third level -
boost::math::octonion, but if they only introduce one or two names, there
shouldn't be a third level. (Sorry, I don't remember if that is the case
or not for that library.)
We haven't really been following an "optimal branching" policy for the
boost namespace in the past, but as the number of names grows, we need to
agree on such a policy and start applying it regularly, I think.
If we can agree, we should also add the namespace policy to our
Requirements and Guidelines.
--Beman
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
https://lists.boost.org/Archives/boost/2001/06/13578.php
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
flutter_poplayer 0.0.3
flutter_poplayer #
Left slide and slide up drawer effect. Compatible with Android & iOS.
Installation #
Add
flutter_swiper : ^lastest_version
to your pubspec.yaml ,and run
flutter packages get
in your project's root directory.
Basic Usage #
Create a new project with command
flutter create myapp
Edit lib/main.dart like this:
import 'package:flutter/material.dart'; import 'package:flutter_poplayer/flutter_poplayer.dart'; void main() => runApp(MyApp()); class MyApp extends StatelessWidget { // This widget is the root of your application. @override Widget build(BuildContext context) { return MaterialApp( title: 'flutter_poplayer> { RightPopConfig _rightPopConfig; TopPopConfig _topPopConfig; final PoplayerController _controller = PoplayerController(); @override void initState() { super.initState(); _rightPopConfig = RightPopConfig( needRight: true, maxWidth: 240, backgroupColor: Colors.yellow, autoAnimDistance: 20, container: GestureDetector( child: Center(child: Text('点我收起')), onTap: () { _controller.autoToRight(); }, )); _topPopConfig = TopPopConfig( backgroupColor: Colors.red, needTop: true, topMaxHeight: 740, topAutoAnimDistance: 20, topMinHeight: 150, container: GestureDetector( child: Center(child: Text('点我收起')), onTap: () { _controller.autoToBottom(); }, )); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text(widget.title), ), body: Poplayer( rightPopConfig: _rightPopConfig, topPopConfig: _topPopConfig, controller: _controller, content: Container( child: Center( child: Text('content'), ), ), ), ); } }
[0.0.3] - TODO: Add release date.
- UPDATE READ_poplayer: _poplayer/flutter_poplayer.
|
https://pub.dev/packages/flutter_poplayer
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
20411/python-selenium-waiting-for-frame-element-lookups
I have these includes:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
Browser set up via
browser = webdriver.Firefox()
browser.get(loginURL)
However sometimes I do
browser.switch_to_frame("nameofframe")
And it won't work (sometimes it does, sometimes it doesn't).
I am not sure if this is because Selenium isn't actually waiting for pages to load before it executes the rest of the code or what. Is there a way to force a webpage load?
Because sometimes I'll do something like
browser.find_element_by_name("txtPassword").send_keys(password + Keys.RETURN)
#sends login information, goes to next page and clicks on Relevant Link Text
browser.find_element_by_partial_link_text("Relevant Link Text").click()
And it'll work great most of the time, but sometimes I'll get an error where it can't find "Relevant Link Text" because it can't "see" it or some other such thing.
Also, is there a better way to check if an element exists or not? That is, what is the best way to handle:
browser.find_element_by_id("something")
When that element may or may not exist?
You could use WebDriverWait:
from contextlib import closing
from selenium.webdriver import Chrome as Browser
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import NoSuchFrameException
def frame_available_cb(frame_reference):
"""Return a callback that checks whether the frame is available."""
def callback(browser):
try:
browser.switch_to_frame(frame_reference)
except NoSuchFrameException:
return False
else:
return True
return callback
with closing(Browser()) as browser:
browser.get(url)
# wait for frame
WebDriverWait(browser, timeout=10).until(frame_available_cb("frame name"))
2 issues. One thing is, "Catalogues" & ...READ MORE
Here, I give you working script which ...READ MORE
Mistake is that u r printing the ...READ MORE
Use Below:
//div[@class='Tips' and text()='\u00a
If there is some dynamic content on ...READ MORE
For Selenium Standalone Server use this:
profile.setPreference("browser.helperApps.neverAsk.saveToDisk", "application/java-archive");
and ...READ MORE
OR
Already have an account? Sign in.
|
https://www.edureka.co/community/20411/python-selenium-waiting-for-frame-element-lookups
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
import "github.com/hyperledger/fabric/gossip/common"
ChainID defines the identity representation of a chain
InvalidationResult determines how a message affects another message when it is put into gossip message store
const ( // MessageNoAction means messages have no relation MessageNoAction InvalidationResult = iota // MessageInvalidates means message invalidates the other message MessageInvalidates // MessageInvalidated means message is invalidated by the other message MessageInvalidated )
MessageAcceptor is a predicate that is used to determine in which messages the subscriber that created the instance of the MessageAcceptor is interested in.
type MessageReplacingPolicy func(this interface{}, that interface{}) InvalidationResult
MessageReplacingPolicy Returns: MESSAGE_INVALIDATES if this message invalidates that MESSAGE_INVALIDATED if this message is invalidated by that MESSAGE_NO_ACTION otherwise
PKIidType defines the type that holds the PKI-id which is the security identifier of a peer
IsNotSameFilter generate filter function which provides a predicate to identify whenever current id equals to another one.
type Payload struct { ChainID ChainID // The channel's ID of the block Data []byte // The content of the message, possibly encrypted or signed Hash string // The message hash SeqNum uint64 // The message sequence number }
Payload defines an object that contains a ledger block
type TLSCertificates struct { TLSServerCert atomic.Value // *tls.Certificate server certificate of the peer TLSClientCert atomic.Value // *tls.Certificate client certificate of the peer }
TLSCertificates aggregates server and client TLS certificates
Package common imports 3 packages (graph) and is imported by 44 packages. Updated 2018-12-11. Refresh now. Tools for package owners.
|
https://godoc.org/github.com/hyperledger/fabric/gossip/common
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
A dart package for Japan Post official postal code, a.k.a zip code, search and update.
Inspired by rinkei/jipcode.
Installation #
1. Depend on it #
dependencies: postal_code_jp: ^1.0.0
2. Install it #
with pub:
$ pub get
with Flutter:
$ flutter pub get
3. Import it #
import 'package:postal_code_jp/postal_code_jp.dart';
Usage #
Search #
await PostalCodeJp.locate('1600022'); // => [{'postal_code': '1600022', 'prefecture': '東京都', 'city': '新宿区', 'town': '新宿'}]
Update #
日本郵便の郵便番号データを毎月アップデートします (予定)
最新のデータを利用したい場合は、パッケージのバージョンをアップデートしてください。
update package
Contributing #
- Fork it
- Create your feature branch (
git checkout -b new_feature_branch)
- Commit your changes (
git commit -am 'Add some feature')
- Push to the branch (
git push origin new_feature_branch)
- Create new Pull Request
[1.0.0]
- initial release.
import 'package:postal_code_jp/postal_code_jp.dart'; main() async { var address = await PostalCodeJp.locate('1600022'); print(address); var addressAndPrefectureCode = await PostalCodeJp.locate('1600022', opt: {'prefecture_code': true}); print(addressAndPrefectureCode); }
Use this package as a library
1. Depend on it
Add this to your package's pubspec.yaml file:
dependencies: postal_code_j:postal_code_jp/postal_code_jp.dart';
We analyzed this package on Nov 9, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
- Dart: 2.6.0
- pana: 0.12.21
Platforms
Detected platforms: Flutter, other
Primary library:
package:postal_code_jp/postal_code_jp.dartwith components:
io.
Health issues and suggestions
Document public APIs. (-1 points)
9 out of 9 API elements have no dartdoc comment.Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API.
|
https://pub.dev/packages/postal_code_jp
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
import "github.com/Yelp/fullerite/src/fullerite/metric"
internal_metric.go metric.go
The different types of metrics that are supported
AddToAll adds a map of dimensions to a list of metrics
CollectorEmission counts collector emissions
InternalMetrics holds the key:value pairs for counters/gauges
func NewInternalMetrics() *InternalMetrics
NewInternalMetrics initializes the internal components of InternalMetrics
type Metric struct { Name string `json:"name"` MetricType string `json:"type"` Value float64 `json:"value"` Dimensions map[string]string `json:"dimensions"` }
Metric type holds all the information for a single metric data point. Metrics are generated in collectors and passed to handlers.
New returns a new metric with name. Default metric type is "gauge" and timestamp is set to now. Value is initialized to 0.0.
Sentinel returns a sentinel metric, which will force a flush in handler
WithValue returns metric with value of type Gauge
AddDimension adds a new dimension to the Metric.
AddDimensions adds multiple new dimensions to the Metric.
GetDimensionValue returns the value of a dimension if it's set.
GetDimensions returns the dimensions of a metric merged with defaults. Defaults win.
RemoveDimension removes a dimension from the Metric.
Sentinel is a metric value which forces handler to flush all buffered metrics
ZeroValue is metric zero value
Updated 2017-11-02. Refresh now. Tools for package owners. This is an inactive package (no imports and no commits in at least two years).
|
https://godoc.org/github.com/Yelp/fullerite/src/fullerite/metric
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
24831/$-in-a-variable-name-in-java
I am a beginner in Java. While looking at the naming conventions for variables it was mentioned that we can use "$" symbol in the variable name. Along with it, there was this line which said
"Java does allow the dollar sign symbol $ to appear in an identifier, but these identifiers have a special meaning, so you should not use the $ symbol in your identifiers."
"Java does allow the dollar sign symbol $ to appear in an identifier, but these identifiers have a special meaning, so you should not use the $ symbol in your identifiers."
So can someone explain what is the special meaning behind $ symbol?
Java compiler uses "$" symbol internally to decorate certain names.
For Example:
public class demo {
class test {
public int x;
}
public void myTrial () {
Object obj = new Object () {
public String toString() {
return "hello world!";
}
};
}
}
Compiling this program will produce three .class files:
The javac uses $ in some automatically-generated variable names for the implicitly referencing from the inner classes to their outer classes.
Here are two ways illustrating this:
Integer x ...READ MORE
int[][] multi = new int[5][];
multi[0] = new ...READ MORE
You can use readAllLines and the join method to ...READ MORE
Java 8 Lambda Expression is used:
String someString ...READ MORE
You can use Java Runtime.exec() to run python script, ...READ MORE
First, find an XPath which will return ...READ MORE
See, both are used to retrieve something ...READ MORE
Nothing to worry about here. In the ...READ MORE
I guess, for deep copying you will ...READ MORE
Let me give you the complete explanation ...READ MORE
OR
Already have an account? Sign in.
|
https://www.edureka.co/community/24831/$-in-a-variable-name-in-java
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
User installed PLUGINs
Hi,
We'd like to be able to operate with a central installation of MG5_aMC, and still allow users to install their own plugins when they wish. It seems as though the current mechanism is to look for the PLUGIN module in MG5. One of our users pointed out that if you add to the __init__.py in the central PLUGIN directory:
import pkgutil
__path__ = pkgutil.
Then a local user can have a PLUGIN directory in their PYTHONPATH that will be searched in addition to the MG5_aMC PLUGIN directory (currently this doesn't work because it always searches the MG5_aMC path first, and looks for PLUGIN.X, where X is the user plugin; if it added the PLUGIN directory to the PYTHONPATH and tried to import X, then that would be extensible).
Is that possible? Is there some other mechanism that you have in mind for users to be able to extend the set of PLUGINs that are installed centrally?
Thanks,
Zach
Question information
- Language:
- English Edit question
- Status:
- Solved
- Assignee:
- No assignee Edit question
- Solved by:
- Zachary Marshall
- Solved:
- 2019-07-11
- Last query:
- 2019-07-11
- Last reply:
- 2019-07-11
Hi,
This is already extensible since the code is the following:
So you can set a directory named MG5aMC_PLUGIN within your pythonpath
and put all your plugin inside.
Cheers,
Olivier
try:
_temp = __import_
except ImportError:
try:
_temp = __import_
except ImportError:
raise MadGraph5Error, error_msg
So
On 11 Jul 2019, at 23:27, Zachary Marshall <<email address hidden>
Question #681979 on MadGraph5_aMC@NLO changed:
https:/
Zachary Marshall gave more information on the question:
Sorry, I should add: hat tip to Philipp Windischhofer for the
suggestion!
--
You received this question notification because you are an answer
contact for MadGraph5_aMC@NLO.
Thanks!
Sorry, I should add: hat tip to Philipp Windischhofer for the suggestion!
|
https://answers.launchpad.net/mg5amcnlo/+question/681979
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
List
View Item. List View Sub Item Collection Class
Definition
Represents a collection of ListViewItem.ListViewSubItem objects stored in a ListViewItem.
public: ref class ListViewItem::ListViewSubItemCollection : System::Collections::IList
public class ListViewItem.ListViewSubItemCollection : System.Collections.IList
type ListViewItem.ListViewSubItemCollection = class interface IList interface ICollection interface IEnumerable
Public Class ListViewItem.ListViewSubItemCollection Implements IList
- Inheritance
-
- Implements
- Details. The order of subitems in the ListViewItem.ListViewSubItemCollection determines the columns the subitems are displayed.
|
https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.listviewitem.listviewsubitemcollection?view=netcore-3.0
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Get World Position of Obj A's position and Set it to Obj B's position
Hi,
I want to get the world position of Obj A's position and set it into the world position of Obj B's positions. As a background, I cannot directly constraint Obj A and Obj B since they have different pivot axis.
The
Get World Positionis okay. Thanks to this thread.
I'm having trouble with the
Set World Position. I initially thought that I can get it by reverse engineer in the previous code but I get errors.
Specifically,
TypeError: unsupported operand type(s) for /: 'c4d.Vector' and 'c4d.Matrix'in the GlobalToLocal function.
which make sense, but, uhm, I really don't know how to solve it.
Here is the code so far
import c4d # Declare object variables newObject = doc.SearchObject("newObj") oldObject = doc.SearchObject("oldObj") totalPoints = len(newObject.GetAllPoints()) ''' Get the new position ''' def LocalToGlobal(obj, local_pos): """ Returns a point in local coordinate in global space. """ obj_mg = obj.GetMg() return obj_mg * local_pos def GetPointGlobal(point_object, point_index): """ Return the position of a point in Global Space """ ppos = point_object.GetPoint(point_index) # Get the point in local coords return LocalToGlobal(point_object, ppos) # Return the point in global space # Get the new world position newPos = [] for idx in range(totalPoints): newPos.append(GetPointGlobal(newObject, idx)) ''' Assign the new position ''' # Set the new world position def GlobalToLocal(obj, global_pos): """ Returns a point in local coordinate in global space. """ obj_mg = obj.GetMg() return (global_pos / obj_mg) newLocalPos = [] for idx in newPos: newPos.append(GlobalToLocal(oldObject, idx)) print (newLocalPos) oldObject.SetAllPoints(newLocalPos) oldObject.Message(c4d.MSG_UPDATE) c4d.EventAdd()
Thank you for looking at my problem.
You can't divide by a matrix, you need to multiply by the inverted matrix (unary prefix operator ~).
Note that the multiplication in matrix calculations is not commutative.
Thanks for the response. For some reason, C4D freezes. I guess it should not since I only have 8 points. I'm doing it with a simple cube.
I tried both of these codes
return (global_pos * (-1*obj_mg))
or
return (~obj_mg * global_pos)
Should I modify other parts of my code?
Before providing a more complete answer, we were not sure of your final goal.
You want to define the axis of obj A to axis of obj B but the point shouldn't move in the global space, is it correct?
Cheers,
Maxime.
Sorry for the confusion.
I'm not after the axis of Obj A and Obj B. I just want them to stay as it. I'm after the world position of Object of A's points and directly plug it into Object B's points.
For better understanding,
please see a simple illustration image here:
If you need more details, please let me know. Thank you.
So regarding your script why it crashes, it's simply because you make an infinite loop with this code since you add each time something more to iterate.
for idx in newPos: newPos.append(GlobalToLocal(oldObject, idx))
Just to make it cleaner I've called objA and objB.
Then the needed steps are:
- Get World Position from points of objB.
- Convert these World Position in the local space of objA.
So as you may be already aware regarding our Matrix Fundamentals where is how to achieve both operations.
- Local Position to World Position corresponds to the obj global matrix where the position is locals to multiplied by the local position.
- World Position to Local Position corresponds to the inverse obj global matrix where the position will be locals multiplied by the world position.
Finally, before providing with the solution I would like to remind you to please always execute your code in the main function. The main reason is that if you transform your script as a button in the Dialog, every code that is in the global scope of your script is executed for each redraw of the Cinema 4D GUI (which is pretty intensive!). Moreover please always check for objects, if they exist or not try to adopt a defensive programming style in order to avoid any issues.
import c4d def main(): # Declares object variables ObjA = doc.SearchObject("A") ObjB = doc.SearchObject("B") # Checks if objects are found if ObjA is None or ObjB is None: raise ValueError("Can't found a and b object") # Checks if they are polygon object if not isinstance(ObjA, c4d.PolygonObject) or not isinstance(ObjB, c4d.PolygonObject): raise TypeError("Objects are not PolygonObject") # Checks Point count is the same for both obj allPtsObjA = ObjA.GetAllPoints() allPtsObjB = ObjB.GetAllPoints() if len(allPtsObjA) != len(allPtsObjB): raise ValueError("Object does not get the same pount count") # Retrieves all points of B in World Position, by multipling each local position of ObjB.GetAllPoints() by ObjB.GetMg() allPtsObjBWorld = [pt * ObjB.GetMg() for pt in allPtsObjB] # Retrieves All points of B from World to Local Position of ObjA, by multipling each world position of ObjB by the inverse objA.GetMg() allPtsObjANewLocal = [pt * ~ObjA.GetMg() for pt in allPtsObjBWorld] # Sets New points position ObjA.SetAllPoints(allPtsObjANewLocal) ObjA.Message(c4d.MSG_UPDATE) # Updates Cinema 4D c4d.EventAdd() # Execute main() if __name__=='__main__': main()
If you have any questions, please let me know.
Cheers,
Maxime.
Thanks for the code. The script works as expected. It's also cool that you managed to bring it to essentially three lines of codes with the list comprehension.
RE: matrix fundamentals
I was able to read that page but I kinda passed it thinking that Going Backwards section
SetMl(ml)and
SetMg(mg)is only applicable to the object level and not on the component level.()
@bentraje said in Get World Position of Obj A's position and Set it to Obj B's position:()
Yes this is correct.
|
https://plugincafe.maxon.net/topic/11397/get-world-position-of-obj-a-s-position-and-set-it-to-obj-b-s-position
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Telegraf metrics are the internal representation used to model data during processing. These metrics are closely based on InfluxDB’s data model and contain four main components:
- Measurement name: Description and namespace for the metric.
- Tags: Key/Value string pairs and usually used to identify the metric.
- Fields: Key/Value pairs that are typed and usually contain the metric data.
- Timestamp: Date and time associated with the fields.
This metric type exists only in memory and must be converted to a concrete representation in order to be transmitted or viewed. Telegraf provides output data formats (also known as serializers) for these conversions. Telegraf’s default serializer converts to InfluxDB Line Protocol, which provides a high performance and one-to-one direct mapping from Telegraf metrics.
|
https://docs.influxdata.com/telegraf/v1.12/concepts/metrics/
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Hey ?
User \"system:serviceaccount:nublado-athornton:dask\" cannot get resource \"pods\" in API group \"\" in the namespace \"nublado-athornton\""but I have what look like the right rules in my role:
rules: - apiGroups: - "" resources: - pods verbs: - list - create - delete
n_jobs=1in the
RandomForestRegressor()constructor, but still end up with some of my dask workers using 2000% CPU, which is looking really weird.
py
|
https://gitter.im/dask/dask?at=5d937b9b9d4cf173604f1846
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Re-rethinking best practices with React
What this talk is going to be about?
I am going to go through certain established practices that were the norm before React came along and showed that there can be a better way to do things!
I think if I have to summarize my entire talk in a single line it would be this:
Innovation cannot happen if we are bound by the ideas of past!
The MVC Pattern
In order to create a UI one had to maintain three things i.e model/view/controllers, even the frameworks that came before used this pattern heavily.
- On the outset, it seemed like a clean way of separating logic or dare I say concerns :-). But at scale, this becomes quite hard to maintain i.e. if you need to compose to two UI bits you need to make sure that mode/controllers/view were in sync.
- Instead, React came up with the Components wherein we don't think of things in terms of model/view/controller but rather just one unit of UI which encapsulates all that it needs. This model reduced the number of things one has to keep in my mind before trying to compose.
JSX - Templating
{{#list people}} {{firstName}} {{lastName}} {{/list}} /// The data : // { // people: [ // {firstName: "Yehuda", lastName: "Katz"}, // {firstName: "Carl", lastName: "Lerche"}, // {firstName: "Alan", lastName: "Johnson"} // ] //}
function People(props){ return ( <ul> {props.people.map(({ firstName,lastName }) => <li>{firstName} {lastName}</li>} </ul> ) }
JSX - Templating
- No need for worrying about selectors if you want to interact with some HTML element from JS
- Removed the need for keeping a separate file where data was maintained.
- No need to learn another language altogether! Just use JS!
- Logic and UI can now be colocated which means no more context switching!
The norm that JSX broke was keeping JS and HTML in two separate files. In doing so when it first came out it was met with sharp criticism from the community.
But slowly people started realizing the benefits of mixing them both :
CSS-in-JS
Some benefits this approach gives you :
- No more worrying about specificity
- Code splitting becomes easier
- Conditional CSS becomes easier to deal with
- Theming too becomes relatively easier to do.
- ...
This another similar norm like JSX only this time its with CSS.
This is not from the React core team but it's from the React community.
This is still being debated but whether you like it or not, I think we can all agree that it's different and broke the norm for sure.
This may not be as big as the other ones before it but still, I think it's a significant one when it comes to breaking the norm and also the resulting abstractions due to this approach has made React one of its kind.
Pull instead of Push
Even now a lot of other frameworks prefer to use "push" based approach instead of "pull" based i.e. they update the UI as in when the new data is available contrast that with React where it takes control over when to schedule those updates.
Pull instead of Push
A direct quote from React document :
There is an internal joke in the team that React should have been called “Schedule” because React does not want to be fully “reactive”
Because of this kind of approach React is able to bring in features like Suspense and Async Mode without changing the interface. The performance itself becomes abstracted away from consumers.
What's up with the Suspense?
If you have ever worked in a big enough React project I think you would have come across a common component usually called as "Loader" which takes in a URL and fetches the API and manages the loading/error states for the users.
Since we are in components land with React most App's ended up with stacks of loading indicators and then some UI. Which made for an awkward UX and not to mention very hard to do things like timeout and code splitting on top of it.
What's up with the Suspense?
Having seen this issue React came up with this neat API wherein you can define your remote dependency inside the render function and wrap it a <Suspense /> component and few other parameters and you can show just one single loading indicator for the whole tree under the <Suspense /> component.
Which basically means they broke the norm of keeping the render function pure!
And this just one of the features that can be done because of the pull-based approach.
For more info on this, you can watch dan's talk
Hooks
This is one of the latest features of React which lets you add a state to function component i.e. previously if you had to add state you would have to convert a functional component to a class-based component but now you don't need to do that.
Again the norm that functional components had to be stateless was broken! We used to call them SFC's short for Stateless functional components - they no longer are stateless!
Before Hooks:
After Hooks :
The Full Circle?
Courtesy: Sunil Pail 's talk at react-europe
The Notion Of Best Practices
I got the inspiration for this talk from Sunil Pai's talk last react-
A good personal example I can think of is personally I tried to solve the "Loader" problem by creating the best Loader out there but no matter what I came up with had some or the other trade-off which didn't really solve the problem.
The Notion Of Best Practices
React solved the problem by actually throwing Promises from the render function! I mean of all the things I did to solve that problem this approach would never have come to my mind! Why so? Because using try-catch as a means to control the flow of the program is something really unheard of and also considered as bad practice to be avoided.
But turns out given a certain constraints and a environment doing such a thing as throwing a Promise actually resulted in a much cleaner abstraction than I could ever create!
The Notion Of Best Practices
So does that mean I recommend abandoning all best practices? No of course not what I am recommending is not to be constrained by them when thinking of trying to build something new!
So then the best practices that React broke were never best practices? No not at all at a time when they were conceived there would have been some context due to which they were there but overtime that context is lost or not really carried over well and instead of being a guidance they can become somewhat of a dogma!
The Notion Of Best Practices
For example let's take JSX where the best practice of keeping HTML and JS separate was broken. If we didn't have the model that React gives you i.e. where you do the DOM manipulation yourself instead of the framework doing it for you.
It would most likely would have been a disaster!
So the previously held best practice is still best practice if we are not using React or similar framework!
Conclusion
Let's not be held back by established practices when trying to create new best practices!
THANK YOU! 😃
deck
By varenya
|
https://slides.com/varenya/deck-1/
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
<< First | < Prev | Next >
So far we've seen how to interact with the perl parser to introduce new keywords. We've seen how we can allow that keyword to be enabled or disabled in lexical scopes. But our newly-introduced syntax still doesn't actually do anything yet. Today lets change that, and actually provide some new syntax which really does something.
Optrees
To understand the operation of any parser plugin (or at least, one that actually does anything), we first have to understand some more internals of how perl works; a little of how the parser interprets source code, and some detail about how the runtime actually works. I won't go into a lot of detail in this post, only as much as needed for this next example. I'll expand a lot more on it in later posts.
Every piece of code in a perl program (i.e. the body of every named and anonymous function, and the top-level code in every file) is represented by an optree; a tree-shaped structure of individual nodes called ops. The structure of this optree broadly relates to the syntactic nature of the code it was compiled from - it is the parser's job to take the textual form of the program and generate these trees. Each op in the tree has an overall type which determines its runtime behaviour, and may have additional arguments, flags that alter its behaviour, and child ops that relate to it. The particular fields relating to each op depend on the type of that op.
To execute the code in one of these optrees the interpreter walks the tree structure, invoking built-in functions determined by the type of each op in the tree. These functions implement the behaviour of the optree by having side-effects on the interpreter state, which may include global variables, the symbol table, or the state of the temporary value stack.
For example, let us consider the following arithmetic expression:
(1 + 2) * 3
This expression involves an addition, a multiplication, and three constant values. To express this expression as an optree requires three kinds of ops - a OP_ADD op represents the addition, a OP_MULT the multiplication, and each constant is represented by its own OP_CONST. These are arranged in a tree structure, with the OP_MULT at the toplevel whose children are the OP_ADD and one of the OP_CONSTs, the OP_ADD having the other two OP_CONSTs. The tree structure looks something like:
OP_MULT: +-- OP_ADD | +-- OP_CONST (IV=1) | +-- OP_CONST (IV=2) +-- OP_CONST (IV=3)
You may recall from the previous post that we implemented a keyword plugin that simply created a new OP_NULL optree; i.e. an optree that doesn't do anything. If we now change this to construct an OP_CONST we can build a keyword that behaves like a symbolic constant; placing it into an expression will yield the value of that constant. This returned op will then be inserted into the optree of the function containing the syntax that invoked our plugin, to be executed at this point in the tree when that function is run.
To start with, we'll adjust the main plugin hook function to recognise a new keyword; this time tau:
static int MY_keyword_plugin(pTHX_ char *kw, STRLEN kwlen, OP **op_ptr) { HV *hints = GvHV(PL_hintgv); if(kwlen == 3 && strEQ(kw, "tau") && hints && hv_fetchs(hints, "tmp/tau", 0)) return tau_keyword(op_ptr); return (*next_keyword_plugin)(aTHX_ kw, kwlen, op_ptr); }
Now we can hook this up to a new keyword implementation function that constructs an optree with a OP_CONST set to the required value, and tells the parser that it behaves like an expression:
#include <math.h> static int tau_keyword(OP **op_ptr) { *op_ptr = newSVOP(OP_CONST, 0, newSVnv(2 * M_PI)); return KEYWORD_PLUGIN_EXPR; }
We can now use this new keyword in an expression as if it was a regular constant:
$ perl -E 'use tmp; say "Tau is ", tau' Tau is 6.28318530717959
Of course, so far we could have done this just as easily with a normal constant, such as one provided by use constant. However, since this is now implemented by a keyword plugin, it can do many exciting things not available to normal perl code. In the next part we'll explore this further.
<< First | < Prev | Next >
|
http://leonerds-code.blogspot.com/2016_09_01_archive.html
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
What is Aspect Oriented Programming?
If you look at Wikipedia, they say
"aspect-oriented programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns"
I am not very sure if you understand what they are saying. For me, after 6 or 7 years using AOP, I still have a hard time figuring out what this definition is trying to describe. Technically, it is perfectly accurate but it may not give you a clue of what is AOP and why should you use AOP. Let me try to explain it in my own way.
When you do your coding, there are always business logic and some boilerplate code. Boilerplate mixing with business logic is what you do not like because it make the code difficult to read and developers less focus on business logic. AOP attempt to solve this issue by an innovative way of splitting boilerplate code out of business logic.
There are two characteristics that boilerplate code need to satisfy if it is to be removed from business logic:
- It is generic enough to be commonly executed for various objects.
- It must happens before and/or after business logic .
When the first characteristic is satisfied, you can remove the boilerplate code and put it to some other places without any harm to the functionality. When the second characteristic is satisfied, we can define a point-cut or cross point on any objects that suppose to run this boilerplate code. Then congratulation, the rest of work is being done by the framework. It will help you to automatically execute the boilerplate code before/after your business logic.
Why AOP is cool?
AOP is cool because is make your project cleaner in many ways
- You do not mix boilerplate code with business logic. This make your code easier to read.
- You save your self from typing the same code again and again.
- The code base is smaller and less repetitive.
- Your code is more manageable. You have the option of adding a behaviour to all class/methods in your project in one shot.
To let you feel the power of AOP, let imagine what will happen if you have this magic in real life. Let say in one command, you can make every citizen of Singapore donate 10% of income for charitable work. This sound much faster and less hassle than go to ask every individual to do this for you. This example show that AOP work best when you have lots of objects in your system that sharing the same feature.
Fortunately, there are lots of common features like that in real applications. For example, here is some stuffs that you will often need to implements:
- Log the execution time of long running method.
- Check the permission before executing method.
- Initiate transaction before method and close transaction after method completed.
How to implement AOP?
If you want to share boilerplate code, you can implement it in the old-fashion way. For example,
import java.util.logging.Logger; public abstract class LoggableWork { public void doSomething(){ long startTime = System.currentTimeMillis(); reallyDoWork(); long endTime = System.currentTimeMillis(); Logger.getLogger("executableTime").info("Execution time is " + (endTime-startTime) + " ms"); } protected abstract void reallyDoWork(); }
Then any class can extend the abstract class above and have their execution time be logged. However, the world has long abandoned this approach because it is not so clean and extensible. If you finally figure out that you need transaction and security check, you may need to create TransactionWork and SecuredWork. However, Java do not allow one class to extend from more than one parent and you are stuck with your awkward design. For your information, there are two more problems with the approach mention above. It force developer to think of what boilerplate code they need to have before writing business logic. It is not natural and sometimes not predictable. Moreover, inheritance is not a favourable way of adding boilerplate code. Logging execution time is not part of business logic and you should not abuse your business logic for whatever cool stuff that you want to add on your project.
So, what is the right way of having AOP? Focus on your business logic and create your class/method without worrying of logging, transaction, security or anything else. Now, let add the logging for execution time for every methods in the system:
@Aspect public class LogAspect { @Pointcut("execution(public * *(..))") public Object traceAdvice ( ProceedingJintPoint jP, Trace trace ) { Object result; long startTime = System.currentTimeMillis(); try { result = jp.procced(); } finally { long endTime = System.currentTimeMillis(); Logger.getLogger("executableTime").info("Execution time is " + (endTime-startTime) + " ms"); } return result; } }
What you just create is call interceptor. The annotation @Pointcut define the place that you want your boilerplate code to be insert to. This line tell the framework to let the method executed as usual:
result = jp.procced();
Then, you mark the time before and after the real method is executed and log result. Similarly, we can create Interceptor for transaction and security check as well. The terms you see above (PointCut, Aspect) is taken from AspectJ. One aspect represent one behaviour you want to add to your project. That why the approach of writing business logic first and adding aspect later is called Aspect Oriented Programming.
There are thousands of ways to create a point cut and above example is one of them. You can create point cut base on package, annotation, class name, method name, parameter type and combination of them. Here is a good source to study AspectJ language:
Want to know more about underlying implementation of AOP?
If you just want to use AOP, the above parts are sufficient, but if you really want to understand AOP, then better go through this part. AOP is implemented using Reflection. At the beginning day of Java, reflection is not recommended to be used in real-time execution because of performance issue. However, as the performance of JVM improve, reflection is getting popular and become the base technology to build many frameworks. When you use any framework that based on reflection (almost all frameworks in the market), the Java object is no longer WYSIWYG (what you see is what you get).
The object created in JVM still respect the contract of class, interface that it belong to but it have more hidden features than what you see.
In our case, the objects, which execute method doSomething() is a proxy of the contract class. The framework construct the bean and create the proxy to wrap around it. Therefore, any time you call the bean to doSomething(), the proxy code doBefore() and doAfter() are executed as well. You even have the choice to bypass the invoking of method doSomething() on the inner bean inside (for example if they fail permission check).
It is easy to see this only work if you let the framework create the bean for you rather than create the bean yourself. I have encountered some developers asking me why this code does not log:
Bean bean = new Bean(); bean.doSomething();
It is so obvious that the developer create the bean him self rather than the framework. In this case, the bean is just an ordinary Java object and not proxy. Hence, it is out of framework control and no aspect can be applied on this bean.
|
http://sgdev-blog.blogspot.com/2014/02/aspect-oriented-programming.html
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
The choreography 2 dataset
The data presented in this page can be downloaded from the public repository zenodo.org/record/29551.
- If you use this database in your experiments, please cite it (DOI:10.5281/zenodo.29551) or the following paper:
Mangin, P.Y. Oudeyer, Learning semantic components from sub symbolic multi modal perception to appear in the Joint IEEE International Conference on Development and Learning an on Epigenetic Robotics (ICDL EpiRob), Osaka (Japan) (2013) (More information, bibtex)
Presentation
This database contains choreography motions recorded through a kinect device. It contains a total of 1100 examples of 10 different gestures that are spanned over one or two limbs, either the legs (e.g. walk, squat), left or right arm (e.g. wave hand, punch) or both arms (e.g. clap in hands, paddle).
Each example (or record) contained in the dataset consists in two elements:
- the motion data,
- labels identifying which gesture is demonstrated.
Description of the data
The data has been acquired through a kinect camera and the OpenNI drivers through its ROS <> interface, which yields a stream of values of markers on the body.
Each example from the dataset is associated to a sequence of 3D positions of each of the 15 markers. Thus for a sequence of length T, the example would corresponds to T*15*7 values.
- The position of the following list of markers was recorded:
head, neck, left_hip, left_hip, left_shoulder, left_elbow, left_hand, left_knee, left_foot, right_hip, right_shoulder, right_elbow, right_hand, right_knee, right_foot, right_hand
A list of gestures and their descriptions can be found at the end of this document.
Format
This data is accessible in three data formats:
- text
- numpy
- Matlab
###The text format
The set of examples consists in:
- a json file describing metadata and labels,
- a directory containing one text file for each example.
These are distributed in a compressed archive (tar.gz).
An example of a json file is given below. They all have a similar structure.
{ "marker_names": [ "head", "neck", ... ], "data_dir": "mixed_partial_data", "name": "mixed_partial", "records": [ { "data_id": 0, "labels": [ 20, 26 ] }, { "data_id": 1, "labels": [ 19, 28 ] }, ... ] }
It contains the following data:
- name: name of the set of examples,
- marker_names: list of name of the markers in the same order as they appear in data,
- data_dir: path to the data directory,
- records: list of records. Each record contains:
- a data_id fields,
- a labels field containing a list of label as integers.
For each record listed in th json file there exists a text file in the ‘data_dir’ directory, which name is the ‘data_id’ plus a ‘.txt’ extension.
The text files contains the sequence of positions of the marker. Each set of values at a given time is given as a line of space separated floating numbers (formated as ‘5.948645401000976562e+01’).
Each line contains 7 successive values for each marker which are there 3D coordinates together with a representation of the rotation of the frame between previous and next segment. The rotation is encoded in quaternion representation as described on the ROS time frame page . Thus each line contains 7xM values with M the number of markers.
The numpy format
In this format each set of examples is described by two files: a json file and a compressed numpy data file (.npz).
The json file is very similar to the one from the text format, the only difference is that the ‘data_dir’ element is replaced by a ‘data_file’ element containing the path to the data file.
The data file is a numpy compressed data file storing one array for each example. The name of the array is given by the ‘data_id’ element. Each data array (one for each record) is of shape (T, M, 7) where T is the length of the example and M the number of markers.
The following code can be used to load a set of example in python.
import os import json import numpy as np FILE = 'path/to/mixed_full.json' with open(FILE, 'r') as meta_file: meta = json.load(meta_file) # meta is a dictionary containing data from the json file path_to_data = os.path.join(os.path.dirname(FILE), meta['data_file']) loaded_data = np.load(path_to_data) data = [] labels = [] for r in meta['records']: data.append(loaded_data[str(r['data_id'])]) # numpy array labels.append(r['labels']) # list of labels as integers print "Loaded %d examples for ``%s`` set." % (len(data), meta['name']) print "Each data example is a (T, %d, 3) array." % len(meta['marker_names']) print "The second dimension corresponds to markers:" print "\t- %s" % '\n\t- '.join(meta['marker_names']) return (data, labels, meta['marker_names'])
The Matlab format
In the Matlab format, a set of examples is described by a single ‘.mat’ file containing the following elements:
- a ‘name’ variable (string) containing the name of the set of examples,
- a ‘marker_names’ variable containing a list of marker names (strings),
- a ‘data’ variable containing a list of data arrays (one for each record) of size (T, M, 7) where T is the length of the example and M the number of markers,
- a ‘labels’ variable which is a list of list of labels (one list of labels for each example).
For more information, feel free to contact me at olivier.mangin at inria dot fr.
Appendix
###List of gestures
A table with illustrations of gestures is presented in the gesture illustration page. .
|
http://olivier.mangin.com/data/choreo2/
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
And prepare a task – to get on the leaderboard. However, the task will be solved even more interesting if we put a rigid restriction – artificial intelligence resources will not be derived from our undoubtedly powerful computer, but from a single-board on the ARM architecture! At the same time we will get not only experience with portable devices, but also the ability to keep the bot enabled 24/7 without any damage to the main computer!
How long, short , Were found three single-board computers with a penny price – Orange Pi Zero, NanoPi Neo, NanoPi Neo2, their brief characteristics are shown in the table:
Delivery Orange Pi Zero took exactly 20 days, Neo and Neo2 came a day earlier, I think very quickly.
Let's start to understand …
It should be noted that for the Neo was ordered Basic Starter Kit (+13 dollars), which, in addition to the computer, includes:
– USB-to-UART converter;
– large (if you can call an aluminum plate the size of a computer) radiator + mount;
– MicroSD card for 8GB SanDisk 10class.
– MicroUSB cable.
There is also a Complete Starter Kit ($ 29 + shipping), it includes everything that is in Basic, plus the case and OLED screen, but for our purpose this is a little unnecessary.
Let's get ready for the first launch …
From the armbian site we download three fresh images for NanoPi Neo, Neo2 and OrangePi Zero, we will use the MicroSD card obtained from the Basic Starter Kit.
From now on, a single-board computer will be called a single-board computer, and a computer – a familiar large and powerful computer or laptop for us. Now we have two ways with which you can work with single-board:
[1] Via Ethernet
- We connect a single board with an Ethernet cable to a laptop, computer or router;
- Turning on the power for the single-board
- Scan the network, for most linux-based systems can be done through the command "arp -a", for Windows there is nmap;
- We connect to the single-board, for linux: "ssh ip -l root", the default password is "1234"; In Windows you can use any ssh-client, for example, multifunctional putty
[2] Using the USB-to-UART converter.
- We connect the converter to the computer, we determine its physical address: in linux we look at the last lines of the command "dmesg | Grep tty "and look for something similar to ttyUSBX, for Windows we look in the Device Manager new COM-devices
- We connect a single-board to the converter: connect the wire to the converter so that GND is connected by a black wire, and TX – yellow, then connect a single-board (Neo / Neo2 connect to a single soldered contacts near the USB-port so that the black wire is near the nearest Edge, and yellow points in the direction of the flash card, you get the order: GND, 5V, RX, TX; Orange Pi Zero can not be connected with the cable that comes with the Starter Pack, there is no 5V in the middle, so you'll need to use a different cable)
- Now you need to find a program in which it will be convenient to work with the console on TTY / COM: for linux, I'll advise a convenient minicom or putty (you need to run it with superuser rights), for Windows it's still relevant putty
It is necessary to monitor the temperature, the temperature should be monitored …
We need to control the temperature if we want to keep the AI on the Windinium on it, avoiding a drop in frequency, hang-up or a single-board failure. Let's write a simple script for monitoring the temperature (at the same time we will practice running .py files):
import time, sys Print ('NanoTemp 0.1') While True: With open ('/ sys / devices / virtual / thermal / thermal_zone0 / temp', 'r') as f: Temp1 = f.read () [:-1] With open ('/ sys / devices / virtual / thermal / thermal_zone1 / temp', 'r') as f: Temp2 = f.read () [:-1] Print (' r' + temp1 + '' + temp2) Time.sleep (0.5)
Now you can put this file on your flashcard in the directory /home/username/.
TIP: Ubuntu, Debian and many other Linux-based operating systems can work with ext3 / ext4 file systems from under Boxes; Windows will offer a flash drive format. You need to use utilities that allow you to work with this kind of file systems, for example, install the Ext2Fsd driver.
Later I learned about a program like armbianmonitor, with which you can safely monitor not only the temperature, but also the frequency, Local time and load, which is undoubtedly useful.
We connect each single-board to the power network, wait 15 minutes in idle time and see the results:
Quite interestingly, the Neo2 sensor shows the temperature right up to the first decimal point, but hides the information about the current processor frequency from us.
It's sad that Orange Pi Zero is so hot in idle, unlike his brother Neo at the same frequency 240MHz. Forums are dotted with discontent on this topic. As an option that solves this problem, a special script is offered, editing system files and using cooling. However, there is also information that these were all measures against heating up to 80 degrees during idle time, and 55-60 degrees in the fresh version of armbian is normal in this case. Apparently, the problem is solved only partially.
Let's try to install passive cooling. For Orange Pi Zero were bought for 2.82 dollars a special set of two radiators for the processor and RAM. In the case of NanoPi, we have a powerful heatsink that can be purchased separately from the Starter Pack for $ 2.99.
Now the picture 15 minutes after launch looks like this:
Let's warm up to the full!
It was noticed that the orange was very warm. I wonder how many degrees the temperature will jump during the load. We use the cpuburn program available in the repositories (for Neo and Zero we will use the burnCortexA7 command, for Neo2 – burnCortexA8).
Well, say …
All single-board cards easily reach temperatures of 80 degrees with four copies of cpuburn – passive Cooling trite does not cope with such heating. However, I believe that in the case of Vindinium, not everything will be so sad – there is a cyclic phase change of work and idle time (waiting for a response from the server), and the cpuburn program itself is designed for the most efficient heat dissipation, to such an extent the AI can not load the processor as Minimum due to the need to wait for data from memory, because our task can not fully fit into the cache of the processor.
However, there is an interesting feature – Orange Pi Zero reaches 80 degrees, even with a single copy of cpuburn, For Neo2 three copies are enough, and Neo – four copies of the test.
Benchmarks, people require bread and benchmarks!
Before writing AI, you need to determine the most important question: how many times these single-board are weaker than ordinary computers? I can not believe that a small piece of silicon, metal and textolite can do anything out of the ordinary.
The utility phoronix-test-suite was used to conduct benchmarks.
In contrast to all single-boarders, let me include in testing Its laptop (i5 2450M, 6gb DDR3, without discrete graphics, running Ubuntu 16.04 LTS) to facilitate the development of AI (you can run certain pieces of code and know how much, approximately, the time of the same piece on the single-board will change). We use only passive cooling. For a unit of productivity, let's take an orange.
UPD: while the article was moderated, an old computer was found near the house (Intel Pentium 4 (1 core, 2 streams, 2003, pre-top processor on its architecture), 512MB DDR x2, Radeon 9600XT 128MB DDR), thirteen years ago such a system can be Was to call strong. In order to compare how it was, I installed [Windows 2000] Ubuntu 16.04 LTS which, to my surprise, turned out to be very workable.
When studying the information on the Internet, it became clear that H2 + is a slightly modified version of H3:
H2 + is a variant of H3 designed for low-performance OTT units that does not support Gigabit MAC and 4K HDMI.
Original:
H2 + is a variant of H3, targeted at low-end OTT boxes, which lacks Gigabit MAC and 4K HDMI output support.
In this case, it becomes interesting, for what reason, there is such a difference in performance and the thermal regime between H2 + and H3.
To sum up.
Comparing three different single-board, I can sum up:
- Orange Pi Zero, undoubtedly, is the cheapest among all. The presence of WiFi on board is a very good advantage, but its speed is no more than 4Mbit / s (I got about the same value), which excludes its use as a normal wireless file server, but for IoT it will do just fine. You should buy at least some radiator, so as not to experience problems with abnormal temperatures, even in idle time. There is another wonderful side – the presence of TV-OUT, which I worked with, but if you are looking for a single-board for graphics mode, you should look towards devices with HDMI, because the screen resolution of 720×576 is not pleasing to the eye. It is very convenient that the official store of the manufacturer is available on Alyexpress;
- NanoPi Neo, unlike its younger brother, is deprived of TV-OUT and built-in Wi-Fi (for wireless work it will be necessary to buy for $ 2-3 $ Wi-Fi dongle, the declared data transfer rate of which is at 150 Mbps) , And by itself it comes out to the fifth part more expensively, but it can please us with lower heat generation, a brand-name solid radiator, higher performance, which will cover all the disadvantages of the platform. Also worth noting is the wide variety of accessories offered by the manufacturer for their offspring. Another nuance – will have to be ordered from the official site, although this is actually not so difficult;
- NanoPi Neo2. The version of the firmware from armbian is at the experimental stage, which was expressed in the problems described in the article (impossibility to look at the frequency, error when compiling ffmpeg). However, even in this raw form, the second coming of Neo boasts a fairly good performance in tests (remember about the 64-bit architecture), Gigabit Ethernet, which immediately elevates it to the favorites for those tasks where good performance and wire transfer speed are needed. But do not forget about Ubuntu Core, on it the situation can be better, and the armbian does not stand still. At a cost, of course, it exceeds the orange by more than one and a half times, so it's worth seeing competitors in its price segment.
For myself, I decided to continue working with Neo and Neo2, and postpone the orange until there is some interesting idea for a smart home, because Neo is very similar to Zero in performance, but without problems with temperature.
In the next article, we will choose a new programming language for ourselves, which can be learned as soon as AI is written.
→ Link to Vindinium
] → Link to sabreddit Vindinium – a very useful thing, there you can track my movements on Vindinium
→ Link to my githab with little work on Vindinium
I will be very pleased if more people are drawn to this game, because during the rivalry the most interesting begins!
Add Comment
|
http://surprizingfacts.com/we-write-ai-for-windinium-on-single-board-computers-part-1-selection-of-candidates-geektimes/
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Test a wide character to see if it's any printable character except space
#include <wctype.h> int iswgraph( wint_t wc );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The iswgraph() function tests if the argument wc is a graphical wide character of the class graph. In the C locale, this class consists of all the printable characters, except the space character.
A nonzero value if the character is a member of the class graph, or 0 otherwise.
The result is valid only for wchar_t arguments and WEOF.
|
http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.neutrino.lib_ref/topic/i/iswgraph.html
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Busy-wait without blocking for a number of iterations
#include <time.h> void nanospin_count( unsigned long count );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The nanospin_count() function busy-waits for the number of iterations specified in count. Use nanospin_ns_to_count() to turn a number of nanoseconds into an iteration count suitable for nanospin_count().
Busy-wait for at least 100 nanoseconds:
#include <time.h> #include <sys/syspage.h> unsigned long time = 100; … /* Wake up the hardware, then wait for it to be ready. */ nanospin_count( nanospin_ns_to_count( time ) ); /* Use the hardware. */ …
You should use busy-waiting only when absolutely necessary for accessing hardware.
|
http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.neutrino.lib_ref/topic/n/nanospin_count.html
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Our state-of-the-art deep learning models can dentify features in images, but different models produce slightly different results. We host a variety of different implementations, so you can pick the one(s) which work best for you!
The IllustrationTagger and InceptionNet microservices are just two of the many cloud-scaled CNN deep-learning models hosted on Algorithmia which can recognize hundreds of different features in images.
IllustrationTagger is a version of Saito, Masaki and Matsui, Yusuke. (2015). Illustration2Vec: A Semantic Vector Representation of Illustrations. SIGGRAPH Asia Technical Briefs. 2015. Learn more about their work here.
InceptionNet is a direct implementation of Google's InceptionNet, which was trained on the ImageNet 2015 dataset. It is implemented using Google's Tensorflow python bindings.
Let us know what you think @Algorithmia or by email.
SAMPLE INPUTr
import Algorithmia image = "_IMAGE_URL_" client = Algorithmia.client("_API_KEY_") tagger1 = client.algo("deeplearning/IllustrationTagger/0.2.5") tagger2 = client.algo("deeplearning/InceptionNet/1.0.3") print "IllustrationTagger:", tagger1.pipe({"image": image}) print "InceptionNet:", tagger2.pipe(image)
SAMPLE OUTPUT
LEARN MORELEARN MORE
IllustrationTagger: { "rating": [ {"safe": 0.9961} ], "general": [ {"no humans": 0.7062}, {"monochrome": 0.6084} ] } InceptionNet: { "tags": [ { "class": "cliff, drop, drop-off", "confidence": 0.4765 }, { "class": "alp", "confidence": 0.1755 } ] }
|
https://demos.algorithmia.com/image-tagger/
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
This is the gr-analog package. It contains all of the analog modulation blocks, utilities, and examples. To use the analog blocks, the Python namespaces is in gnuradio.analog, which would be normally imported as:
See the Doxygen documentation for details about the blocks available in this package.
A quick listing of the details can be found in Python after importing by using:
|
https://www.gnuradio.org/doc/doxygen/page_analog.html
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
The default value is “1024” (i.e., 1 kilobyte). put error_reporting on the first line of code.) up down 1 antickon AT gmail.com ¶9 years ago regarding what vdephily at bluemetrix dot com said ( see his comment is here
This helped with an issue where the error is caused by data read from a file, so no typos or library issues. –Mark Longmire Oct 29 '13 at 15:58 Basic error reporting -- to record run-time notices, compile-time parse errors, as well as run-time errors and warnings, use “8” for the error-reporting integer value. Downloads Documentation Get Involved Help ConFoo Vancouver & Montreal Getting Started Introduction A simple tutorial Language Reference Basic syntax Types Variables Constants Expressions Operators Control Structures Functions Classes and Objects Namespaces The E_ALL constant also behaves this way as of PHP 5.4.
You might want to use phpMailer to send mail from your website instead using the SMTP authentication that a normal email client would use. Then: if( $state == "local" || $state == "testing" ) { ini_set( "display_errors", "1" ); error_reporting( E_ALL & ~E_NOTICE ); } else { error_reporting( 0 ); } share|improve this answer edited I thought that error_reporting(0); is limited to PHP files not the php.ini file. –PeanutsMonkey Jan 3 '13 at 1:43 @Jack - I put the line of code i.e. Hey, We Tweet Too!Oldie but a Goodie: SaaS and WordPress: A Conversation with Chris Lema about 14 hours ago from Tweet Old Post ReplyRetweetFavoriteRT @corymiller303: “Work with, not against, limitations”
Is giving my girlfriend money for her mortgage closing costs and down payment considered fraud? Combining basename {} and string's operations in bash What object can prove the equations? error_log = /home/userna5/public_html/error_log Now your errors will all be stored in the error_log in the public_html. Php Error Reporting Not Working Hopefully you will find useful information elsewhere!
Login into your cPanel. How to make Skyscanner, Kiwi, Kayak include ground transfer in the search Why does Fleur say "zey, ze" instead of "they, the" in Harry Potter? How do we play with irregular attendance? I was saying that you should check if the constant is defined and set it if not (which is why I gave the code sample). –Jonathan Kuhn May 19 '10 at
In the page, add the following to the top of the page. Php Error Message share|improve this answer edited Nov 6 '11 at 2:18 chown 34.3k1393143 answered Nov 5 '11 at 14:51 Paul Salber 44148 By doing this error_reporting = E_ALL & ~E_NOTICE & If the optional level is not set, error_reporting() will just return the current error reporting level. You can follow any responses to this entry through the RSS 2.0 feed.
Reply ashleyka n/a Points 2015-08-22 6:57 pm I am in need of editing my php.ini file. He is the author of several popular and highly-rated WordPress themes and plugins. Php.ini Error Reporting Thank you for reiterating this point -- it is greatly appreciated. Php Display Errors Off more hot questions lang-php about us tour help blog chat data legal privacy policy work here advertising info mobile contact us feedback Technology Life / Arts Culture / Recreation Science Other
Copyright 1999-2016 by Refsnes Data. more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed Browse Questions Ask a Question Current Customers Chat: Click to Chat Now E-mail: [email protected] Call: 888-321-HOST (4678) Ticket: Submit a Support Ticket Not a Customer? Error Types
Thanks! :) Gowranga Chintapatra December 26, 2008 at 10:26 am Can I enable pear modules (Crypt/HMAC.php and Crypt/Http_request.php) through htaccess. Having PHP Notices to appear on a webpage is pretty ugly and give a lot of information which might be used by malicious crackers to try to break your site thus share|improve this answer answered May 19 '10 at 15:44 jeroen 69.2k1374110 I needed to use the ini_set method mentioned here, the error_reporting(0) method mentioned elsewhere did not have any weblink The code looks like the following.
This will place the error_log in the directory the error occurs in ; Log errors to specified file. Php Error Log Sometimes when developing PHP scripts you may want to turn specific errors Off or On. Select the public_html directory and click Go.
up vote 66 down vote favorite 16 Notice: Constant DIR_FS_CATALOG already defined I've already commented out display_errors in php.ini, but is not working. PHP can have multiple config files depending on environment it's running. How do you enforce handwriting standards for homework assignments as a TA? Php Error Handling This section will explain how to turn error reporting On and Off.
more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed The general format for controlling the level of PHP errors is as follows: # general directive for setting php error level php_value error_reporting integer There are several common values used for If he'd asked for a logging system, he'd ask for it. –pixeline Oct 29 '09 at 19:31 4 I agree with both ceejayoz and pixeline. PHP has many levels of errors, and using this function sets that level for the current script.
Get web hosting from a company that is here to help. Separate namespaces for functions and variables in POSIX shells How do I handle an unterminated wire behind my wall? I managed to get everything to work except the offset rss feed. You can leave a response, or trackback from your own site. 4 Responses to "How to Turn Off, Suppress PHP Notices and Warnings - PHP error handling levels via php.ini and
X hours with a batch script - Shutdown / Reboot / Logoff Windows with a quick command Enable TLS 1.2 Internet Explorer / Make TLS 1.1 and TLS 1.2 web sites I had to remove it before I could bring it back up again. –PeanutsMonkey Jan 3 '13 at 2:08 | show 1 more comment Your Answer draft saved draft discarded Whack an @ at the start of a line that may produce an warning/error. But sometimes we do need this information on our online site for debugging.
Putting it all together -- Development Environment During project development, when public access to your project is unavailable, you may find it beneficial to catch PHP errors in real time, where Nobody wants to see an error message on your online website, like "Access denied for user 'YOURUSERNAME'@'localhost' (using password: YOURPASSWORD)". You (and MatÃas) are correct about using “on” and “off” instead of “true” and “false” for php_flag and php_admin_flag. Tweet News / Announcements Support Center Login Username Password Remember Me Log in Create an account Forgot your username?
|
http://degital.net/php-error/turn-off-error-reporting-in-php-file.html
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Hi,
I create one animation in Adobe Flash CS6 and also i imported 13 Audio files for my animation and i set to my audio files into Stream, but when i click Ctrl+Enter the audio is going fast comparing to my audio.
So please tell me anyone what doing?
Thanking you..
hi,
remove the audio from flash.
open one audio file in audition
resave with sample rate 44.1 and bitrate 16.
import into flash and retest.
any problem?
if not, repeat with any other problematic audio.
[moved from Adobe Creative Cloud to Adobe Animate CC - General]
|
https://forums.adobe.com/thread/2407336
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
A simple application for printing file contents as hexadecimal.
Wednesday, 30 April 2008
C#: file hex dump application
Posted by McDowell at Wednesday, April 30, 2008 2 comments
Labels: C#, hexadecimal
Tuesday, 29 April 2008
C#: Hello, World!
using System; public class HelloWorld { public static void Main(String[] args) { Console.WriteLine("Hello, World!"); } }
Posted by McDowell at Tuesday, April 29, 2008 0 comments
Labels: C#, Hello World.
Posted by McDowell at Tuesday, April 22, 2008 5 comments
Labels: EL, Expression Language, Java, JUEL, Tomcat, Unified Expression Language
Monday, 14 April 2008
Java: finding binary class dependencies with BCEL
Sometimes you need to find all the dependencies for a binary class. You might have a project that depends on a large product and want to figure out the minimum set of libraries to copy to create a build environment. You might want to check for missing dependencies during the kitting process.
Posted by McDowell at Monday, April 14, 2008 2 comments
Labels: BCEL, byte code engineering library, dependencies, Java
Wednesday, 9 April 2008
Java: finding the application directory
EDIT 2009/05/28: It has been pointed out to me that a far easier way to do all this is using this method:
...which makes everything below here pointless. You live and learn!
Posted by McDowell at Wednesday, April 09, 2008 5 comments
Tuesday, 8 April 2008
Java: synchronizing on an ID
If you are a
For example, there is nothing in the Servlet 2.5 MR6 specification that says a.
Posted by McDowell at Tuesday, April 08, 2008 6 comments
Labels: concurrency, Java, mutex, synchronization
Java: Hello, World!
Posted by McDowell at Tuesday, April 08, 2008 0 comments
Labels: Hello World, Java
|
http://illegalargumentexception.blogspot.co.uk/2008/04/
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
ESRI basemaps for flutter_map. This does NOT provide esri FeatureLayers or other layers.
Use
EsriBasemapOptions with
EsriBasemapType
new FlutterMap( options: new MapOptions( center: new LatLng(34.0231688, -118.2874995), zoom: 17.0, ), layers: [ new EsriBasemapOptions( esriBasemapType: EsriBasemapType.streets), ], );
EsriBasemapOptions implements TileLayerOptions.
Add this to your package's pubspec.yaml file:
dependencies: flutter_map_esri: "^0.0.3"
You can install packages from the command line:
with Flutter:
$ flutter packages get
Alternatively, your editor might support
flutter packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:flutter_map_esri/flutter_map_esri.dart';
We analyzed this package on May_map_esri.dart.
|
https://pub.dartlang.org/packages/flutter_map_esri
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Using Portier with Python's asyncio
23 Jan 2017
Updates
The asyncio-portier library
15 Feb 2017
I’ve published the asyncio-portier Python library. Read the announcement post here.
Original Post
Mozilla Persona was an authentication solution that sought to preserve user privacy while making life easy for developers (as explained here). You may have noticed the past tense there — Persona has gone away. A new project, Portier, picks up where Persona left off, and I’m eager to see it succeed. I’ll do anything to avoid storing password hashes or dealing with OAuth. With that in mind, here is my small contribution: a guide to using Portier with Python’s asyncio.
The Big Idea
So far, the official guide to using Portier is… short. It tells you to inspect and reuse code from the demo implementation. So that’s what we’ll do! But first, let’s take a look at how this is supposed to work.
The guide says
The Portier Broker’s API is identical to OpenID Connect’s “Implicit Flow.”
From the developer’s perspective, that means
- A user submits their e-mail address via a
POSTto an endpoint on your server.
- Your server
- Your server will eventually receive a
POSTto one of its endpoints (
/verifyin the demo implementation). At this point
- If the data supplied are all valid (including checking against the nonce created in step 2.1.), then the user has been authenticated and your server can set a session cookie.
- If something is invalid, show the user an error.
This is all well and good… provided that you’re using Python and your server code is synchronous. I generally use Tornado with asyncio for my Python web framework needs, so some tweaks need to be made to get everything working together nicely.
If you want to use something other than Python, I can’t really help you. I did say Portier is new, didn’t I?
Enter asyncio
For some background, Python 3.4 added a way to write non-blocking single-threaded code, and Python 3.5 added some language keywords to make this feature easier to use. For the sake of brevity I’ll include code that works in Python 3.5 or later. Here is a blog post describing the changes in case you need to use an earlier version of Python.
For those of you using the same setup that I do (Tornado and asyncio), refer to this page for getting things up and running.
The login endpoint
This code does not need to be modified to work with asyncio. I’ll include what it should look like when using Tornado, though. Assuming that
REDISis a Redis connection object as from redis-py
SETTINGSis a dictionary containing your application’s settings
SETTINGS['WebsiteURL']is the URL of your application (such as)
SETTINGS['BrokerURL']is the URL of the Portier Broker,
from datetime import timedelta from urllib.parse import urlencode from uuid import uuid4 import tornado.web class LoginHandler(tornado.web.RequestHandler): def post(self): nonce = uuid4().hex REDIS.setex(nonce, timedelta(minutes=15), '') query_args = urlencode({ 'login_hint': self.get_argument('email'), 'scope': 'openid email', 'nonce': nonce, 'response_type': 'id_token', 'response_mode': 'form_post', 'client_id': SETTINGS['WebsiteURL'], 'redirect_uri': SETTINGS['WebsiteURL'] + '/verify', }) self.redirect(SETTINGS['BrokerURL'] + '/auth?' + query_args)
The verify endpoint
This does need some modification to work. Assuming that you have defined an
exception class for your application called
ApplicationError
import tornado.web class VerifyHandler(tornado.web.RequestHandler): def check_xsrf_cookie(self): """Disable XSRF check. OIDC Implicit Flow doesn't reply with _xsrf header. """ pass async def post(self): # Make this method a coroutine with async def if 'error' in self.request.arguments: error = self.get_argument('error') description = self.get_argument('error_description') raise ApplicationError('Broker Error: {}: {}'.format(error, description)) token = self.get_argument('id_token') email = await get_verified_email(token) # Use await to make this asynchronous # The demo implementation handles RuntimeError here but you may want to # deal with errors in your own application-specific way # At this point, the user has authenticated, so set the user cookie in # whatever way makes sense for your application. self.set_secure_cookie(...) self.redirect(self.get_argument('next', '/'))
get_verified_email
This function only needs two straightforward changes from the demo implementation:
async def get_verified_email(token):
and
keys = await discover_keys(SETTINGS['BrokerURL'])
discover_keys
This function needs three changes from the demo implementation. The first is simple again:
async def discover_keys(broker):
The second change is in the line with
res = urlopen(''.join((broker,
'/.well-known/openid-configuration'))). The problem is that
urlopen is
blocking, so you can’t just
await it. If you’re not using Tornado, I
recommend using the aiohttp library (refer to the
client example). If you are using Tornado, you can use the
AsyncHTTPClient class.
http_client = tornado.httpclient.AsyncHTTPClient() url = broker + '/.well-known/openid-configuration' res = await http_client.fetch(url)
The third change is similar to the second:
raw_jwks =
urlopen(discovery['jwks_uri']).read() uses
urlopen again. Solve it the same
way:
raw_jwks = (await http_client.fetch(discovery['jwks_uri'])).body
Wrapping up
Down the line, there will be client-side Portier libraries (or, at least, other demo implementations) for various languages. Until then, you’ll need to do some of the heavy lifting yourself. I think it’s worth it, and I hope you will, too.
|
https://viktorroytman.com/blog/2017/01/23/using-portier-with-pythons-asyncio/
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
csMemoryPool Class Reference
[Containers]
A quick-allocation pool for storage of arbitrary data. More...
#include <csutil/mempool.h>
Inherited by TEventMemPool.
Detailed Description
A quick-allocation pool for storage of arbitrary data.
Pointers to allocations made from the pool are guaranteed to remain valid as long as the pool is alive; the pool contents are never relocated. All individually allocated memory chunks are freed when the pool itself is destroyed. This memory management scheme is suitable for algorithms which need to allocate and manipulate many chunks of memory in a non-linear fashion where the life-time of each memory chunk is not predictable. Rather than complicating the algorithm by having it carefully track each memory chunk to determine when it would be safe to dispose of the memory, it can instead create a csMemoryPool at the start, and destroy the pool at the end. During processing, it can allocate memory chunks from the pool as needed, and simply forget about them when no longer needed, knowing that they will be freed en-masse when the pool itself is disposed. This is often a cheaper, simpler, and faster alternative to reference-counting or automatic garbage collection.
- See also:
- csBlockAllocator
- csArray
Definition at line 54 of file mempool.h.
Constructor & Destructor Documentation
Construct a new memory pool.
If a size is provided, it is taken as a recommendation in bytes of the granularity of the internal allocations made by the pool, but is not a hard limit. Client allocations from the pool can be both smaller and larger than this number. A larger number will result in fewer interactions with the system heap (which translates to better performance), but at the cost of potential unused but allocated space. A smaller number translates to a greater number of interactions with the system heap (which is slow), but means less potential wasted memory.
Definition at line 79 of file mempool.h.
Member Function Documentation
Allocate the specified number of bytes.
- Returns:
- A pointer to the allocated memory.
- Remarks:
- The allocated space is not initialized in any way (it is not even zeroed); the caller is responsible for populating the allocated space.
- The specified size must be greater than zero.
Store a null-terminated C-string.
- Returns:
- A pointer to the stored copy.
- Remarks:
- It is safe to store a zero-length string. A null pointer is treated like a zero-length string.
Store a copy of a block of memory of the indicated size.
- Returns:
- A pointer to the stored copy.
- Remarks:
- The specified size must be greater than zero.
The documentation for this class was generated from the following file:
Generated for Crystal Space 2.1 by doxygen 1.6.1
|
http://www.crystalspace3d.org/docs/online/api/classcsMemoryPool.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
Heroku Clojure Support
Last updated 07 November 2015
Table of Contents
The Heroku Cedar stack is capable of running a variety of types of Clojure applications.
This document describes the general behavior of the Cedar stack as it relates to the recognition and execution of Clojure applications. For a more detailed explanation of how to deploy an application, see:
Activation
Heroku’s Clojure support is applied only when the application has a
project.clj file in the root directory.
Clojure applications that use Maven can be deployed as well, but they will be treated as Java applications, so different documentation will apply. make your
run. application config is not visible during compile time, with the exception of private repository credentials (
LEIN_USERNAME, etc) if present. In order to change what is exposed, set the
BUILD_CONFIG_WHITELIST config to a space-separated list of config var names. Note that this can result in unpredictable behavior since changing your app’s config does not result in a rebuild of your app.
Uberjar
If your
project.clj contains an
:uberjar-name setting, then
lein uberjar will run during deploys. If you do this, your
Procfile
entries should consist of just
java invocations.
If your main namespace doesn’t have a
:gen-class then you can use
clojure.main as your entry point and indicate your app’s main
namespace using the
-m argument in your
Procfile:
web: java -cp target/myproject-standalone.jar clojure.main -m myproject.web
If you have custom settings you would like to only apply during build,
you can place them in an
:uberjar profile. This can be useful to use
AOT-compiled classes in production but not during development where
they can cause reloading issues:
:profiles {:uberjar {:main myproject.web, :aot :all}}
If you need Leiningen in a
heroku run session, it will be downloaded
on-demand.
Note that if you use Leiningen features which affect runtime like
:jvm-opts, extraction of native dependencies, or
:java-agents,
then you’ll need to do a little extra work to ensure your Procfile’s
java invocation includes these things. In these cases it might be
simpler to use Leiningen at runtime instead.
Customizing the build
You can customize the Leiningen build by setting the following configuration variables:
LEIN_BUILD_TASK: the Leinigen command to run.
LEIN_INCLUDE_IN_SLUG: Set to
yesto add lein to uberjar slug.
Leiningen at runtime
Instead of putting a direct
java invocation into your Procfile, you
can have Leiningen handle launching your app. If you do this, be sure
to use the
trampoline and
with-profile tasks. Trampolining will
cause Leiningen to calculate the classpath and code to run for your
project, then exit and execute your project’s JVM, while
with-profile will omit development profiles:
web: lein with-profile production trampoline run -m myapp.web
Including Leiningen in your slug will add about ten megabytes to its size and will add a second or two of overhead to your app’s boot time.
Overriding build behavior
If neither of these options get you quite what you need, you can check
in your own executable
bin/build script into your app’s repo and it
will be run instead of
compile or
uberjar after setting up Leiningen.
Runtimes
Heroku makes a number of different runtimes available. You can configure your app to select a particular Clojure runtime, as well as the configure the JDK..
Supported Clojure versions
Heroku supports apps on any production release of Clojure, running on a supported JDK version.
Add-ons
No add-ons are provisioned by default. If you need a SQL database for your app, add one explicitly:
$ heroku addons:create heroku-postgresql:hobby-dev
|
https://devcenter.heroku.com/articles/clojure-support
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
On Tue, Jul 15, 2003 at 10:06:55AM +0200, Christian Surchi wrote: > On Mon, Jul 14, 2003 at 01:13:49PM +0100, Stephen Stafford wrote: > ... > > Okay, I can see that a lot of commands might be desirable here, but there > > are a LOT of these with VERY generic names. "ask", "fast", "rhyme", "cite", > > etc. > > Which solutions are you suggesting? > The main one I think is good is having a /usr/bin/surfraw/ or similar that users can add to their $PATH, or alias on a case by case basis as they prefer. Properly documented and executed, I believe this to be the most robust solution (certainly better than pre (or post) fixing "sr" to all the binaries). > >? > > > So, the solutions I am considering are: > > > > 1: I hijack/adopt this package, clean it up, reduce the namespace pollution > > by making each command an argument to just ONE command, and generally > > making the package fit for use. > > > > 2: Filing for its removal on the basis of the extreme pollution and lack of > > maintenance. > > > > > > It looks like it needs a LOT of work to achieve 1, so I'll probably go for 2 > > unless someone steps forward to take it on, since my current Debian time is > > very. > > > Of course, all of this assumes that Christian Surchi <csurchi@debian.org> is > > in fact MIA and/or no longer interested in the package. He will be CCd > > because this mail is going to the buglog, and if he steps forward, cleans up > > his package, and does something about the namespace pollution, I'll be > > happy. If not then I'll either hijack it, adopt it, or file for removal. > >. > > > CCing -devel because I'm *bloody* pissed off, and I'd like to give people > > the chance to flame/give me reasons not to do this. hijacking someone elses > > package isn't a thing to take lightly, no matter how poor a job they appear > > to be doing of it to me. > > I think hijacking is useful when maintainer is MIA... I'm not MIA, you > can see mails and activities related to me. :) Now "asking" is better > ,IMHO. :) > When you look like you are maintaining it, that's true. Since you don't, then I'd say it's still valid to be hijacked. Please note this isn't a personal thing at all. *ALL* it would take for me to be happy for you to keep this package is for you to make an upload fixing the long standing and (mostly) easy to fix bugs that it has. If you can't/won't do that, then fine. Let somone who wants to do it have the package. I know what it's like to not have much time for Debian work. I have very little of it myself (which is why I have so few packages). This isn't an excuse for neglecting what you have though. If you don't have time, then fine, give the package away to someone who does. Cheers, Stephen
|
https://lists.debian.org/debian-devel/2003/07/msg01014.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
During one of last projects I needed to test some webservices.
I was wondering: if I can do it with Burp or by manual testing,
maybe I can also write some quick code in python...
And that's how I wrote soapee.py:
---<code>---
root@kali:~/code/soapee-v3# cat soapee3.py
#!/usr/bin/env python
# -------------------------------------
# soapee.py - SOAP fuzz - v0.2
# -------------------------------------
# 16.10.2015
import urllib2
import sys
import re
from bs4 import BeautifulSoup
import httplib
from urlparse import urlparse
target = sys.argv[1]
def sendNewReq(method):
global soap_header
print '[+] Sending new request to webapp...'
toSend = open('./logs/clear-method-'+str(method)+'.txt','r').read()
parsed = urlparse(target)
server_addr = parsed.netloc
service_action = parsed.path
body = toSend
print '[+] Sending:'
print '[+] Response:'
headers = {"Content-type": "text/xml; charset=utf-8",
"Accept": "text/plain",
"SOAPAction" : '"' + str(soap_header) + '"'
}
# print '***********************************'
# print 'headers: ', headers
# print '***********************************'
conn = httplib.HTTPConnection(server_addr)
conn.request("POST", parsed.path, body, headers)
# print body
response = conn.getresponse()
print '[+] Server said: ', response.status, response.reason
data = response.read()
logresp = open('./logs/resp-method-'+ method + '.txt','w')
logresp.write(data)
logresp.close()
print '............start-resp...........................................'
print data
print '............stop-resp...........................................\n'
print '[+] Finished. Next step...'
print '[.] -----------------------------------------\n'
##
def prepareNewReq(method):
print '[+] Preparing new request for method: '+str(method)
fp = open('./logs/method-'+str(method)+'.txt','r')
fp2 = open('./logs/fuzz-method-'+str(method)+'.txt','w')
for line in fp:
if line.find('SOAPAction') != -1:
global soap_header
soap_header = line
soap_header = soap_header.split(" ")
soap_header = soap_header[1].replace('"','')
soap_header = soap_header.replace('\r\n','')
# print soap_header
newline = line.replace('<font class="value">','')
newline2 = newline.replace('</font>','')
newline3 = newline2.replace('string','";\'>')
newline4 = newline3.replace('int','111111111*11111')
newline5 = newline4.replace('length','1337')
newline6 = newline5.replace('<soap:','<soap:')
newline7 = newline6.replace('</soap:','</soap:')
newline8 = newline7.replace(' or ','or')
fp2.write(newline8)
print '[+] New request prepared.'
fp2.close()
print '[+] Clearing file...'
linez = open('./logs/fuzz-method-'+str(method)+'.txt').readlines()
open('./logs/clear-method-'+str(method)+'.txt','w').writelines(linez[6:])
fp.close()
fp2.close()
sendNewReq(method)
##
# compose_link(method), get it, and save new req to file
def compose_link(method):
methodLink = target + '?op='+ method
print '[+] Getting: ', method
fp = open('./logs/method-'+str(method)+'.txt','w')
req = urllib2.urlopen(methodLink)
page = req.read()
soup = BeautifulSoup(page)
for pre in soup.find('pre'):
fp.write(str(pre))
print '[+] Method body is saved to file for future analysis.'
fp.close()
prepareNewReq(method)
##
## main
def main():
print ' _________________'
print ' (*(( soapee ))*)'
print ' ^^^^^^\n'
url1 = urllib2.urlopen(target)
page1 = url1.readlines()
# get_links_to_methods
print '[+] Looking for methods:\n------------------------'
for href in page1:
hr = re.compile('<a href="(.*)\.asmx\?op=(.*?)">') #InfoExpert.asmx?op=GetBodyList">GetBodyList</a>')
found = re.search(hr,href)
if found: # at this stage we need to create working link for each found method
method = found.group(2)
# found method get as URL for pre content to next request
compose_link(method)
# ...
# ... get example of each req
# ... change each str/int to fuzzval
# ... send modified req
print '---------------------------\ndone.'
##
try:
main()
except IndexError, e:
print 'usage: ' + str(sys.argv[1]) + '\n'
root@kali:~/code/soapee-v3#
---</code>---
Also@pastebin;)
As you can see it's just a proof of concept (mosty to find some useful information disclosure bugs) but the skeleton can be used to prepare more advanced tools.
Maybe you will find it useful.
Enjoy ;)
Haunt IT
HauntIT Blog - security testing & exploit development
Saturday, 24 October 2015
Friday, 2 October 2015
My Java SIGSEGV's
During couple of last days I was checking lcamtuf’s American Fuzzy Lop against some (“non-instrumented”) binaries.
I was looking for some sources, but unfortunately I wasn’t able to find any. Next thing was checking where I have Java installed (so I will know what/where I can check. Kind of ‘test lab’ was: Ubuntu 12, Kali Linux, WinXP, Win7. (Exact version of Java installed on that OS’s you will find below.)
After 2 days there were approx. 170 different samples. After first check, we can see that java (7) will
end up with sigsegv (with SSP enabled – Kali Linux):
Same sample with Java 6 will produce:
Next thing I saw was:
During the analysis of the crash file in gdb I found some “new” function names. I decide to find them also but in Ida Pro and check, what is their purpose:
(As you can see, some core files were generated by valgrind.)
Below “find_file” function (from IdaPro):
You can easily see that we have malloc() here.
Next thing I decide to check was JLI_ParseManifest() function:
After checking those functions, we can see that JLI_ParseManifest() will iterate through each character in Manifest file. Correct me if I’m wrong but I think that find_file() is the place when SIGSEGV occurs. Manifest file is parsed here:
When we will set up “Windbg” (in IdaPro) to run Java with our sample.jar file (generated by afl), we will see, that crash occurs in similar region:
After this warning, Ida will jump to next location:
In text mode view, we can see more instructions:
Let’s see if in pseudocode (Tab/F5) we will find any hint:
We see the memcpy() function with 3 arguments: v4, v3 and v5. Details about those variables we can find in the beginning of the pseudocode:
Now we know that v3 is the value from esi, v4 is the value from edi and v5 is the value from ecx. Next thing is generating and saving dump file. We will open it in Windbg later:
Now, open java.dmp file in Windbg and observe results:
We can see that SIGSEGV occurs when the program is using EDI and ESI registers. Let’s check what’s on that location: let’s use the “dc” command:
In case that you will ask what !exploitable will tell you about it, screen below:
Short summary: this sample file will crash Java on all mentioned systems.
If you think that this is exploitable… Well. Let me know what do you think about (comments or emails). Any ideas are welcome. ;)
Posted by Haunt IT at 02:05 No comments:
|
http://hauntit.blogspot.com/
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
This class performs refraction computations, following literature from atmospheric optics and astronomy. More...
#include <RefractionExtinction.hpp>
This class performs refraction computations, following literature from atmospheric optics and astronomy.
Refraction solutions can only be aproximate,.
|
http://stellarium.org/doc/0.12.1/classRefraction.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
The Truth About PaaS Vertical Scaling and Why You are Being "Oversold"
The Cloud Zone is brought to you in partnership with Mendix. Better understand the aPaaS landscape and how the right platform can accelerate your software delivery cadence and capacity with the Gartner 2015 Magic Quadrant for Enterprise Application Platform as a Service.
Honestly, it was not easy to create true vertical scaling, because there are several technical restrictions. One of them is related to JVM restrictions. In this article I would like to share some information which can be useful to understand these restrictions of JVM. I hope this will help more of the drivers of the java community to adapt JVM for PaaS.
In the beginning, when JVM was designed, nobody knew about the cloud or virtualization, and moreover, nobody was thinking about density in PaaS. Today virtualization has changed the game of hosting industry and the revolution is not finished yet. Today, we can use resources more efficiently and with better elasticity. Jelastic is the only PaaS which offers true automatic vertical scaling for Java and PHP applications. However, I see good movements of the key java drivers into this sphere. I talked to Mikael Vidstedt – one of the main JVM architects at Oracle, and he agreed that JVM was not designed for PaaS at all and Oracle is going to improve this. Plus guys from IBM are working hard on it as well. Some related notes to dynamic behavior of JVM can be found in IBM JavaOne Keynote 2012 Highlights.
One of the most important points of vertical scaling is understanding how JVM allocates more RAM when it’s needed and how allocated RAM is returned to the OS when it is not needed anymore. The algorithm, which provides allocation of RAM, works fine, but compactication (returning RAM to OS) does not work well today. It works if you know the details, but there are a lot of space for improvements. So, JVM must be improved significantly in this part.
When we were initially developing Jelastic, one of the most important points was understanding how vertical scaling depends on the JVM Garbage Collector (GC). We tested many different combinations and found critically important relations and restrictions which prevent applications from scaling down vertically. So the issue of garbage collections was pretty high on the list of things that needed to work well, all the time.
How Java Garbage Collection Works
Just in case you aren’t totally sure how the garbage collection works in Java, have no idea what it is, or just need a quick refresher, here is an overview of how it works. For a more in-depth view on this, check out Java Enterprise Performance Book or Javin Paul’s blog, Javarevisted.
In Java, dynamic allocation of objects is achieved using the new operator. An object, once created, uses some memory and the memory remains allocated until there are no references for the use of the object. When there are no references for an object, it is assumed to be no longer needed and the memory occupied by the object can be reclaimed. There is no explicit need to destroy an object as java handles the de-allocation automatically. Garbage Collection is the technique that accomplishes this. Programs that do not de-allocate memory can eventually crash when there is no memory left in the system to allocate. These programs are said to have "memory leaks." In Java, Garbage collection happens automatically during the lifetime of a java program, eliminating the need to de-allocate memory and avoiding memory leaks.
What kind of Garbage Collector is used with Jelastic?
As we were going about trying to decide which garbage collector to use with Jelastic by default, we had to narrow down the field. We found that the biggest issue would be one of the features within Jelastic that we are most proud of, vertical scaling. When we were deciding how to configure garbage collection in Java, this feature presented a problem.
In order to help us decide which kind of GC to use, we created an application that controlled resource usage. Turns out that the JVM doesn’t actually deal very well with vertical scaling. If an application starts with a small amount of RAM usage, then adds more, the JVM doesn’t do a good job of returning that RAM to the OS when it is not needed anymore. We found out this was initially done on purpose: the guys that designed the JVM used this approach to speed up the process of memory allocation by having it already queued. This process didn’t work with our platform if we are going to have vertical scaling, and we intended it to.
So, we started testing out different garbage collectors. We tested the Serial Garbage Collector, the Parallel Garbage Collector, the Concurrent Mark Sweep Garbage Collector and the G1 Garbage Collector. Some statistical information, which was collected when we started to work on Jelastic, is included below.
Serial Garbage Collector (-XX:+UseSerialGC)
First, we tried out the Serial Garbage Collector. It uses a single thread to perform all garbage collection work. It is a stop-the-world collector and has very specific limits.
Below, you can see simple tests in Java 6 and Java 7.
Test for Serial Garbage Collector
public class Memoryleak { public static void main(String[] args) { System.out.println("START...."); while (true) { System.out.println("next loop..."); try { int count = 1000 * 1024; byte [] array = new byte[1024 * count]; Thread.sleep(5000); array = null; System.gc(); System.gc(); Thread.sleep(5000); } catch (InterruptedException ex) { } } } }
We ran the JVM with these parameters:-XX:+UseSerialGC -Xmx1024m -Xmn64m -Xms128m -Xminf0.1 -Xmaxf0.3
where
- -XX:+UseSerialGC — use Serial Garbage Collector (this parameter will be changed in the next testes);
- -Xmx1024m - max RAM usage - 1024 MB;
- -Xmn64m — the size of the heap for the young generation - 64MB;
- -Xms128m – initial java heap size - 128 MB;
- -Xminf0.1 – this parameter controls minimum free space in the heap and instructs the JVM to expand the heap, if after performing garbage collection it does not have at least 10% of free space;
- -Xmaxf0.3 – this parameter controls how the heap is expanded and instructs the JVM to compact the heap if the amount of free space exceeds 30%.
The defaults for -Xminf and Xmaxf are 0.3 and 0.6, respectively, so the JVM tries to maintain a heap that is between 30 and 60 percent free at all times. But we set these parameters to the more aggressive limits - it enlarges the amplitude of the vertical scaling
As you can see in the chart below heap memory is dynamically allocated and released.
The following chart shows the total memory consumption by OS.
As you can see, vertical scaling works fine in this case. Unfortunately, the Serial Garbage Collector is meant to be used by small applications, GC runs on a single thread.
Pros and Cons for Serial Garbage Collector
Pros:
- It shows good results in scaling
- It can do memory defragmentation and returns the unused resources back to the OS
- Great for applications with small data sets
Cons:
- Big pauses when it works with big data sets
- Big applications are a no-go
Parallel Garbage Collector (-XX:+UseParallelGC)
The Parallel Garbage Collector performs minor garbage collections in parallel, which can significantly reduce garbage collection overhead. It is useful for applications with medium to large-sized data sets that are run on a multiprocessor or multithreaded hardware.We repeated our test from before so we could compare the results.
Test for Parallel Garbage Collector
Single process:
Below is the total memory consumption by the OS:
The Parallel Garbage Collector has many advantages over the Serial Garbage Collector. It can work with multithreaded applications and multiprocessor machines. It also works quite well with large data sets. But, as you can see in the above charts, it doesn't do well in returning resources to the OS. So, Parallel GC was not suitable for vertical scaling.
Pros and Cons for Parallel Garbage Collector
Pros:
- Works well with large data sets and applications
- It works great with multithreaded applications and multiprocessor machines
Cons:
- Doesn't do a good job of returning resources to the OS
- Works fine as long as you don't need vertical scaling
Concurrent Mark Sweep Garbage Collector
(-XX:+UseConcMarkSweepGC)
The Concurrent Mark Sweep Garbage Collector performs most of its work concurrently to keep garbage collection pauses short. It is designed for applications with medium to large-sized data sets for which response time is more important than overall throughput.
We repeated the same tests as before and got the following results.
Test for Concurrent Mark Sweep Garbage Collector
Single process:
Total memory consumption by the OS:
While the Concurrent Mark Sweep Garbage Collector has it's advantages with certain kinds of applications, we ran into basically the same issues as we did with the Parallel Garbage Collector - it is not suitable for vertical scaling.
Pros and Cons for Concurrent Mark Sweep Garbage Collector
Pros:
- Works well with large data sets and applications
- Work well with multithreaded applications and multiprocessor machines
- Response time
Cons:
- Doesn't do a good job of returning resources to the OS
- Throughput is a lower priority
- Vertical scaling doesn't work well with it
G1 Garbage Collector (-XX:+UseG1GC)
The G1 ("Garbage First") Garbage Collector was first introduced with Java 7 and then in subsequent Java 6 updates. It splits the heap up into fixed-sized regions and tracks the live data in those regions. When garbage collection is necessary, it collects from the regions with less live data first. It has all the advantages of the Parallel GC and Mark Sweep GC and meets all our requirements.
But when we did our tests we discovered that, in Java 6, after a long period of work there was a constant and stable memory leak.
Test for G1 Garbage Collector (Java 6)
Single process:
Total memory consumption by the OS:
When we found this issue, we reached out to the guys at Oracle. We were the first to find and report the problem with the memory leak in Java 6 with respect to the G1 GC. Based on our input, they fixed this issue "in-flight" within the JVM 7. So, below, you can see our tests after the fix that we asked them to do.
Test for G1 Garbage Collector (Java 7)
As you can see, the fix that the Oracles guys did really improved G1 Garbage Collector. We are very grateful to them for their help is getting this sorted out.
We are hoping that soon, Java 6 will also have its memory leak issue fixed as well. If you are currently using the JVM 6, you can test it out and contact Oracle support like we did to help speed up the process.
Pros and Cons for G1 Garbage Collector
Cons:
- Still have an issue with Java 6 (hopefully resolved soon)
Pros:
- Works well with both large and small data sets and applications
- Work well with multithreaded applications and multiprocessor machines
- Returns resources to the OS well and in a timely manner
- Good response time
- Good throughput
- Works great for vertical scaling
Conclusion
As you can understand, the ability to only pay for the actual resources used is very important for every customer and company. They should not be oversold and overpay. However, there are still several blockers which prevent faster development in this direction and must be fixed. Jelastic is the only platform pioneering true automatic vertical scaling. We have already changed the game and we will do our best to continue to revolutionize the hosting industry.
Here's the video which explains how automatic vertical scaling works in Jelastic:
Autoscaling Java Applications in Jelastic
* For some reasons the official bug according to our report about G1 accumulative memory leak was removed from Oracle bug database. More details about this bug can be found }}
|
https://dzone.com/articles/truth-about-paas-vertical
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
The Standard C++ template
export feature is widely misunderstood, with more restrictions and consequences than most people at first realize. This column takes a closer look at our experience to date with
export.
What experience with
export? you might ask. After all, as I write this in early June, there is still no commercially available compiler that supports the export feature. The Comeau compiler [1], built on the EDG (Edison Design Group) [2] front-end C++ language implementation that has just added support for
export, has been hoping since last year to become the first shipping
export-capable compiler. As of this writing, that product is currently still in beta, though they continue to hope to ship soon, and it may be available by the time you read this. Still, the fact that no capable compilers yet exist naturally means that we have practically no experience with
export on real-world projects; fair enough.
What we do have for the first time ever, however, is real-world nuts-and-bolts experience with what it takes to implement
export, what effects
export actually has on the existing C++ language, what the corner cases and issues really are, and how the interactions are likely to affect real-world users all this from some of the worlds top C++ compiler writers at EDG who have actually gone and done the work to implement the feature. This is a huge step forward from anything we knew for certain even a year ago (although in fairness a few smart people, including some of those selfsame compiler writers, saw many of the effects coming and warned the committee about them years ago). Now that EDG has indeed been doing the work to create the worlds first implementation of
export, confirming suspicions and making new technical discoveries along the way, it turns out that the confirmations and discoveries are something of a mixed bag.
Heres what this column and the next cover:
- What
exportis, and how its intended to be used.
- The problems
exportis widely assumed to address, and why it does not in fact address them the way most people think.
- The current state of
export, including what our implementation experience to date has been.
- The (often non-obvious) ways that
exportchanges the fundamental meaning of other apparently unrelated parts of the C++ language.
- Some advice on how to use
exporteffectively if and when you do happen to acquire an
export-capable compiler.
A Tale of Two Models
The C++ Standard supports two distinct template source code organization models: the inclusion model that weve been using for years, and the export model, which is relatively new.
In the inclusion model, template code is as good as all inline from a source perspective (though the template doesnt have to be actually inline): the templates full source code must be visible to any code that uses the template. This is called the inclusion model because we basically have to
#include all template definitions right there in the templates header file [3].
If you know todays C++ templates, you know the inclusion model. Its the only template source model that has gotten any real press over the past 10 years because its the only model that has been available on Standard C++ compilers until now. All of the templates youre likely to have ever seen over the years in C++ books and articles up to the time of this writing fall into this category.
On the other hand, the export model is intended to allow separate compilation of templates. (The separate is in quotation marks for a reason.) In the export model, template definitions do not need to be visible to callers. Its tempting to add, just like plain functions, but thats actually incorrect its a similar mental picture, but the effects are significantly different, as we shall see when we get to the surprises. The export model is relatively new it was added to the Standard in the mid-1990s, but the first commercial implementation, by EDG [2], didnt appear until the summer of 2002 [4].
Bear with me as I risk delving too deeply into compilerese for one paragraph: a subtle but important distinction to keep in mind is that the inclusion and export models really are different source code organization models. Theyre about massaging and organizing your source code. They are not different instantiation models; this means that a compiler must do essentially the same work to instantiate templates under either source model, inclusion or export. This is important because this is part of the underlying reason why
exports limitations, which well get to in a moment, surprise many people, especially that using
export is unlikely to greatly improve build times. For example, under either source model, the compiler can still perform optimizations like relying on the ODR (one definition rule) to only instantiate each unique combination of template parameters once, no matter how often and widely that combination is used throughout your project. Such optimizations and instantiation policies are available to compiler implementers regardless of whether the inclusion or export model is being used to physically organize the templates source code; while its true that the export model allows the optimizations, so does the inclusion model.
Illustrating the Issues
To illustrate, lets look at some code. Well look at a function template under both the inclusion and export models, but for comparison purposes Im also going to show a plain old function under the usual inline and out-of-line separately compiled models. This will help to highlight the differences between todays usual function separate compilation and exports separate template compilation. The two are not the same, even though the terms commonly used to describe them look the same, and thats why I put separate in quotes for the latter.
Consider the following code, a plain old inline function and an inclusion-model function template:
// Example 1(a): // A garden-variety inline function // // --- file f.h, shipped to user --- namespace MyLib { inline void f( int ) { // natty and quite dazzling implementation, // the product of many years of work, uses // some other helper classes and functions } }
The following inclusion-model template demonstrates the parallel case for templates:
// Example 1(b): // An innocent and happy little template, // using the inclusion model // // --- file g.h, shipped to user --- namespace MyLib { template<typename T> void g( T& ) { // avant-garde, truly stellar implementation, // the product of many years of work, uses // some other helper classes and functions // -- not necessarily inline, but the body's // code is all here in the same file } }
In both cases, the Example 1 code harbors issues familiar to C++ programmers:
- Source exposure for the definitions: the whole world can see the perhaps-proprietary definitions for
f()and
g(). In itself, that may or may not be such a bad thing more on that later.
- Source dependencies: all callers of
f()and
g()depend on the respective bodies internal details, so every time the body changes, all its callers have to recompile. Also, if either
f()s or
g()s body uses any other types not already mentioned in their respective declarations, then all of their respective callers will need to see those types full definitions too.
Export InAction [sic]
Can we solve, or at least mitigate, these problems? For the function, the answer is an easy of course, because of separate compilation:
// Example 2(a): // A garden-variety separately compiled function // // --- file f.h, shipped to user --- namespace MyLib { void f( int ); // MYOB } // --- file f.cpp, optionally shipped --- namespace MyLib { void f( int ) { // natty and quite dazzling implementation, // the product of many years of work, uses // some other helper classes and functions // -- now compiled separately } }
Unsurprisingly, this solves both problems, at least in the case of
f(). (The same idea can be applied to whole classes using the Pimpl Idiom [5].)
- No source exposure for the definition: we can still ship the implementations source code if we want to, but we dont have to. Note that many popular libraries, even very proprietary ones, ship source code anyway (possibly at extra cost) because users demand it for debuggability and other reasons.
- No source dependencies: callers no longer depend on
f()s internal details, so every time the body changes, all its callers only have to relink. This frequently makes builds an order of magnitude or more faster. Similarly, usually to somewhat less dramatic effect on build times,
f()s callers no longer depend on types used only in the body of
f().
Thats all well and good for the function, but we already knew all that. Weve been doing the above since C, and since before C (which is a very very long time ago). The real question is: What about the template?
The idea behind
export is to get something like this effect for templates. One might naively expect the following code to get the same advantages as the code in Example 2(a). One would be wrong, but one would still be in good company because this has surprised a lot of people including world-class experts:
// Example 2(b): // A more independent little template? // // --- file g.h, shipped to user --- namespace MyLib { export template<typename T> void g( T& ); // MYOB } // --- file g.cpp, ??shipped to user?? --- namespace MyLib { template<typename T> void g( T& ) { // avant-garde, truly stellar implementation, // the product of many years of work, uses // some other helper classes and functions // -- now "separately" compiled } }
Highly surprisingly to many people, this does not solve both problems in the case of
g(). It might have ameliorated one of them, depending. Lets consider the issues in turn.
Issue the First: Source Exposure
- Source exposure for the definition remains: not solved. Nothing in the C++ Standard says or implies that the
exportkeyword means you wont have to ship full source code for
g()anyway.
Indeed, in the only existing (and almost-available) implementation of
export, the compiler requires that the templates full definition be shipped the full source code [6]. One reason is that a C++ compiler still needs the exported template definitions full definition context when instantiating the template elsewhere as its used. For just one example why, consider 14.6.2 from the C++ Standard about what happens when instantiating a template:
[Dependent] names are unbound and are looked up at the point of the template instantiation in both the context of the template definition and the context of the point of instantiation.
A dependent name is a name that depends on the type of a template parameter; most useful templates mention dependent names. At the point of instantiation, or a use of the template, dependent names must be looked up in two places. They must be looked up in the instantiation context; thats easy, because thats where the compiler is already working. But they must also be looked up in the definition context, and theres the rub, because that includes not only knowing the templates full definition, but also the context of that definition inside the file containing the definition, including what other relevant function signatures are in scope and so forth so that overload resolution and other work can be performed.
Think about Example 2(b) from the compilers point of view: your library has an exported function template
g() with its definition nicely ensconced away outside the header. Well and good. The library gets shipped. A year later, one fine sunny day, its used in some customers translation unit
h.cpp where he decides to instantiate
g<CustType> for a
CustType that he just wrote that morning... what does the compiler have to do to generate object code? It has to look, among other places, at
g()s definition, at your implementation file. And theres the rub...
export does not eliminate such dependencies on the templates definition; it merely hides them.
Exported templates are not truly separately compiled in the usual sense we mean when we apply that term to functions. Exported templates cannot in general be separately compiled to object code in advance of use; for one thing, until the exact point of use, we cant even know the actual types the template will be instantiated with. So exported templates are at best separately partly compiled or separately parsed. The templates definition needs to be actually compiled with each instantiation.
Issue the Second: Dependencies and Build Times
- Dependencies are hidden, but remain: every time the templates body changes, the compiler still has to go and reinstantiate all the uses of the template every time. During that process, the translation units that use
g()are still processed together with all of
g()s internals, including the definition of
g()and the types used only in the body of
g().
The template code still has to be compiled in full later, when each instantiation context is known. Here is the key concept, as explained by
export expert Daveed Vandevoorde:
export hides the dependencies. It does not eliminate them.
Its true that callers no longer visibly depend on
g()s internal details, inasmuch as
g()s definition is no longer openly brought into the callers translation unit via
#included code; the dependency can be said to be hidden at the human-reading-the-source-code level.
But thats not the whole story, because were talking compilation-the-compiler-must-perform dependencies here, not human-reading-the-code-while-sipping-a-latte dependencies, and compilation dependencies on the template definitions still exist. True, the compiler may not have to go recompile every translation unit that uses the template, but it must go away and recompile at least enough of the other translation units that use the template such that all the combinations of template parameter types on which the template is ever used get reinstantiated from scratch. It certainly cant just go relink truly, separately compiled object code.
For an example why this is so, and one that actually shows that theres a new dependency being created here that we havent talked about yet, recall again that quote from the C++ Standard:
[Dependent] names are unbound and are looked up at the point of the template instantiation in both the context of the template definition and the context of the point of instantiation.
If either the context of the templates instantiation or the context of the templates definition changes, both get recompiled. Thats why, if the template definition changes, we have to go back to all the points of instantiation and rebuild those translation units. (In the case of the EDG compiler, the compiler recompiles all the calling translation units needed to recreate every distinct specialization, in order to recreate all of the instantiation contexts, and for each of those calling translation units, it also recompiles the file containing the template definition in order to recreate the definition context.) Note that compilers could be made smart enough to handle inclusion-model templates the same way -- not rebuilding all files that use the template but only enough of them to cover all the instantiations if the code is organized as shown in Example 2(b), but with
export removed and a new line
#include g.cppadded to
g.h.
But theres actually a new dependency created here that wasnt there before, because of the reverse case: if the templates instantiation context changes that is, if you change one of the files that use the template the compiler also has to go back to the template definition and rebuild the template definition too. EDG rebuilds the whole translation unit where the template definition resides yes, the one that many people expected
export to compile separately only once because its too expensive to keep a database of copies of all the current template definition contexts. This is exactly the reverse of the usual build dependency, and probably more work than the inclusion model for at least this part of the compilation process because the whole translation unit containing the template definition is compiled anew. Its possible to avoid this rebuilding of the template definition, of course, simply by keeping around a database of all the template instantiation contexts. One reason EDG chose not to do this is because such a database quickly gets extremely large and caching the definition contexts could easily become a pessimization.
Further, remember that many templates use other templates, and therefore the compiler next performs a cascading recompilation of those templates (and their translation units) too, and then of whatever templates those templates use, and so on recursively, until there are no more cascading instantiations to be done. (If, at this point in our discussion, you are glad that you personally dont have to implement
export, thats a normal reaction.)
Even with
export, it is not the case that all callers of a changed exported template just have to relink. The experts at EDG report that, unlike the situation with true separate function compilation where builds will speed dramatically,
export-ized builds are expected in general to be the same speed or slower except for carefully constructed cases.
Summary
So far, weve looked at the motivation behind
export, and why its not truly separate compilation for templates in the same way we have separate compilation for non-templates. Many people expect that
export means that template libraries can be shipped without full definitions, and/or that build speeds will be faster. Neither outcome is promised by
export. The communitys experience to date is that source or its direct equivalent must still be shipped and that build speeds are expected to be the same or slower, rarely faster, principally because dependencies, though masked, still exist, and the compiler still has to do at least the same amount of work in common cases.
Next time, well see why
export complicates the C++ language and makes it trickier to use, including that
export actually changes the fundamental meaning of parts of the rest of the language in surprising ways that it is not clear were foreseen. Well also see some initial advice on how to use
export effectively if you happen to acquire an
export-capable compiler. More on those topics, when we return.
Acknowledgments
Many thanks to Steve Adamczyk, John Spicer, and Daveed Vandevoorde also known as EDG [2] for being the first to be brave enough to implement
export, for imparting their valuable understanding and insights to me and to the community, and for their comments on drafts of this material. As of this writing, they are the only people in the world who have experience implementing
export, never mind that they are already regarded by many as the best C++ compiler writers on the planet. For one small but public measure of their contribution to the state of our knowledge, do a Google search for articles in the newsgroups
comp.lang.c++.moderated and
comp.std.c++ by Daveed this year (2002). Happy reading!
References
[1] See.
[2] See.
[3] Or the equivalent, such as stripping the definitions out into a separate
.cpp file but having the templates
.h header file
#include the
.cpp definition file, which amounts to the same thing.
[4] Its true that Cfront had some similar functionality a decade earlier. But Cfronts implementation was slow, and it was based on a works most of the time heuristic such that, when Cfront users encountered template-related build problems, a common first step to get rid of the problem was to blow away the cache of instantiated templates and reinstantiate everything from scratch.
[5] H. Sutter. Exceptional C++ (Addison-Wesley, 2000).
[6] But couldnt we ship encrypted source code? is a common question. The answer is that any encryption that a program can undo without user intervention (say to enter a password each time) is easily breakable. Also, several companies have already tried encrypting or otherwise obfuscating source code before, for a variety of purposes including protecting inclusion-model templates in C++. Those attempts have been widely abandoned because the practice annoys customers, doesnt really protect the source code well, and the source code rarely needs such protection in the first place because there are other and better ways to protect intellectual property claims obfuscation comes to the same end here..
|
http://www.drdobbs.com/cpp/sutters-mill-herb-sutter-export-restrict/184401563
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
in reply to
Re: Database normalization the easier way
in thread Database normalization the easier way
My initial idea was to ask for a DBI:: namespace, but yesterday I found out that such namespace is restricted. Then DBIx:: is a good candidate (Or it should be as soon as I get an answer from CPAN. Being in a monastery, I should become used to being patient {grin}).
DBIx::DBSchema was a good hint. I checked it and it seems to provide enough information to replace the direct calls I am using so far.
I would need to rewrite the two subs that are dealing with column information (so many greps and maps wasted!) but this is a good chance to make the module portable across databases.
Thanks.
g
|
http://www.perlmonks.org/?node_id=132831
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
Perl's own "use" and "require" cannot be overloaded so that underlying file requests are mapped to something else. This module hacks IO layer system so that one can fool Perl's IO into thinking that it operates on files while feeding something else, ...KARASIK/File-Redirect-0.04 - 25 Feb 2012 11:24:02 GMT - Search in distribution
- File::Redirect::lib - mount and use lib
- File::Redirect::Zip - zip vfs
- File::Redirect::Simple - simple hash-based vfs
- 1 more result from File-Redirect »
"Hook::Output::File" redirects "STDOUT/STDERR" to a file....SCHUBIGER/Hook-Output-File-0.07 - 24 May 2011 14:16:41
: This command is like "prove" or "make test", running the test suite for the current namespace....BRUMMETT/UR-0.44 - 06 Jul 2015 14:36:22 GMT - Search in distribution (1 review) - 12 Sep 2002 10:06:01::Response - WEB response processing for Egg.
- Egg::Plugin::Banner::Rotate - Plugin to display advertisement rotating.wp-download - Fetch large files from the web
- LWP::UserAgent - Web user agent class
CHOROBA/XML-XSH2-2.1.25 - 04 Nov 2015 01:47:53 GMT - Search in distribution
MLEHMANN/Gtk-Perl-0.7010 (2 reviews) - 15 Dec 2012 19:43:06 GMT - Search in distribution
ASP4 is a modern web development platform for Perl with a focus on speed, simplicity and scalability....JOHND/ASP4-1.087 - 07 May 2012 21:21:53::Metadata - Generate the code and data for some DBI metadata methods
This module allow to use API functions from rpmlib, directly or trough perl objects....TVIGNAUD/RPM4-0.33 - 29 May 2013 09:56:15.16 - 20 Nov 2015 14:06:42.10 (1 review) - 09 Oct 2015 15:53:09
|
https://metacpan.org/search?q=File-Redirect
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
This is a homework assignment so i don't want any code, just maybe a little hint as to what the heck i'm doing wrong. The assignment is simply to write a EchoClient and EchoServer where the server echos back everything the client sends (txt) until the client socket is closed. My code follows (please don't laugh, i'm beginner).
and the client code is:and the client code is:import java.io.*; import java.net.*; import java.lang.*; public class EchoServer { public static void main (String[] args){ try{ ServerSocket sock; sock = new ServerSocket(8011); while(true) { Socket client=sock.accept(); InputStream in=client.getInputStream(); String line; BufferedReader bin=new BufferedReader(new InputStreamReader(in)); do{ line=bin.readLine(); PrintWriter pout=new PrintWriter(client.getOutputStream(),true); pout.println(line); } while(client.isClosed()!=true); sock.close(); client.close(); } } catch(IOException ioe) { System.err.println(ioe); } } }
The while loop in client is just so I could make sure it would echo the multiple messages (and not just one)The while loop in client is just so I could make sure it would echo the multiple messages (and not just one)import java.io.*; import java.net.*; import java.lang.*; public class EchoClient { public static void main (String[] args){ try{ Socket s=new Socket("127.0.0.1",8011); InputStream in=s.getInputStream(); BufferedReader buff=new BufferedReader(new InputStreamReader(in)); String message="Hello there! Please work!"; PrintWriter out=new PrintWriter(s.getOutputStream(),true); int i=0; while(i<4) { out.println(message); message=buff.readLine(); i++; } s.close(); } catch(IOException ioe) { System.err.println(ioe); } } }
I keep getting the following error: java.net.BindException: Address already in use
Thank you in advance. Any help is much appreciated.
|
http://www.javaprogrammingforums.com/java-networking/36085-homework-help-echoserver-echoclient.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
convert number to words.
Degree Converter...Java Conversion
... will learn to convert decimal
number into binary. The java.lang package provides
Java : String Case Conversion
Java : String Case Conversion
In this section we will discuss how to convert a String from one case into
another case.
Upper to Lower case Conversion :
By using toLowerCase() method you can convert any
upper case char/String
Java program - convert words into numbers?
Java program - convert words into numbers? convert words into numbers?
had no answer sir
Number Convert - Java Beginners
Number Convert Dear Deepak Sir,
I am entered in Command prompt Some value like 234.Value can be convert into Two Hundred and Thirty Four Rupees...="";
while (count != 0){
switch (in){
case 1:
num = count % 100;
passString(num
java program to convert decimal in words
java program to convert decimal in words write a java program to convert a decimal no. in words.Ex: give input as 12 output must be twelve
String to Date Conversion - Java Beginners
));
For read more information :... but all those are giving me GMT etc.
I just need the format 11/SEP/2008 07:07:32... 11-09-2008 07:32:AM
Now I just need to convert this String variable value
Distance conversion - Java Beginners
in metres. The program will then present a user menu of conversion types, converting... the user.
? Write three methods to convert the distance to kilometres, feet and inches...{
System.out.println();
System.out.println("1. Convert to kilometers
Java count words from file
Java count words from file
In this section, you will learn how to determine the number of words present
in the file.
Explanation:
Java has provides several... by using the StringTokenizer class, we can easily
count the number of words
java conversion
java conversion how do i convert String date="Aug 31, 2012 19:23:17.907339000 IST" to long value
Convert String to Number
Convert String to Number
In this section, we will learn how to convert String to Number.
The following program provides you the functionality to convert String
form to java script conversion
form to java script conversion I am using a form one more form cotain a piece of code .
how to convert it to java script to function it properly?
code is shown below
<form action="" id="cse-search
indexof case insensitive java
indexof case insensitive java Hi,
I have to use the indexOf function on String class in Java. I am looking for indexof case insensitive java example. Share the code with me.
Thanks
Hi,
You can convert both
Enhancement for connection pool in DBAcess.java
Enhancement for connection pool in DBAcess.java package spo.db;
import com.javaexchange.dbConnectionBroker.DbConnectionBroker;
import java.io..... The modification are:
-add a d detail log
-add connection heading
-more in info String
Convert Number to String
In this section, we will learn how to convert numbers
to String.
The following program provides you the functionality to convert numbers to
String
Retrieve a list of words from a website and show a word count plus a specified number of most frequently occurring words
. print: the total number of words processed, the number of unique words, the N...Retrieve a list of words from a website and show a word count plus a specified number of most frequently occurring words I have to:
1.Retrieve
Casting (Type Conversion)
are there in java. Some circumstances requires automatic type
conversion, while in other cases...
Java performs automatic type conversion when the type
of the expression... in case of narrowing i.e. when a data type
requiring more storage is converted
Convert Number to Binary
Convert Number to Binary
In this section, you will learn to convert a number
to a binary (0,1). The given number is a decimal format and the following
program to calculate its
Convert One Unit to Another
, Liters, Miles,
Carats, Meters etc to convert them to another units. First...
C:\unique>java UnitConverter1
Conversion Table...
Convert One Unit to Another
number of stairs - Java Beginners
number of stairs 1. Write a program that prints an n-level stair case made of text. The user should choose the text character and the number...();
}
}
}
-----------------------------------
read for more information,
Excel conversion tool. - Java Beginners
Excel conversion tool. Hi,
I need a conversion tool which can convert .xls(Excel 2003) to .xlsx (Excel 2007) and vice-versa.
Please suggest any links ro tools.
Thank You
Java String Case Converter
Java String Case Converter
Here we are going to convert lowercase characters to upper case and upper case characters to lower case simultaneously from.... If there is a lowercase character then convert it into uppercase using Character.toUpperCase(ch[i
List of Java Exception
List of Java Exception
Exception in Java are classified on the basis of the exception
handled by the java compiler. Java consists of the following type of built
List of Java Exception
; the exception
handled by the java compiler. Java consists of the following type...
List of Java Exception
.... This exception is thrown
when there is an error in input-output operation. In this case
Conversion - Development process
Conversion Assistant (JLCA). Code that calls Java APIs can convert to comparable C# code...Conversion Why is there a need to convert from Java to .NET?
What is the theory of conversion from java to .NET? How it is done?
Hi
Switch case in java
Switch case in java
In this section we will learn about switch case in java..., and java, switch case is used
. switch is a selection
control mechanism used... will illustrate the use of switch
case in java:
import java.util.Scanner;
public Application Convert to java Applet
java Application Convert to java Applet Hi every Java Masters i'm Java beginner ,and i want below this figures source code convert to Java Applet...();
System.out.println();
switch(menu) {
case 1:
System.out.print("Enter Number of Day
how to count words in string using java
++;
}
System.out.println("Number of words are: "+count);
}
}
Thanks
Hello...how to count words in string using java how to count words in string... count=0;
String arr[]=st.split(" ");
System.out.println("Number
Java Date Conversion
Java Date Conversion
In java date conversion, two packages are used...
Java Converts String into Date
In this example we are going to convert...; used to convert between dates and time fields and the DateFormat class
Switch Case in Java
version of Java did not support String in switch/case.
Switch statements works...
A statement in the switch block can be labeled with one or more case or default labels... case will take place.
Example of Switch Statement in Java:
import java.io.
Collection of Large Number of Java Sample Programs and Tutorials
Collection of Large Number
of Java Sample Programs and Tutorials
Java Collection Examples
Java 6.0
New Features (Collection Framework... ArrayLists,
LinkedLists, HashSets,
etc. A collection
How to convert a String to an int?
);
Check more tutorials:
Convert a String into an Integer Data
Many examples of data conversion in Java.
Thanks...How to convert a String to an int? I my program I am
HSSFCell to String type conversion Hello,
Can anyone help me convert HSSFCell type to string. There is a solution in the following URL...://
A Program To Reverse Words In String in Java .
A Program To Reverse Words In String in Java . A Program To Reverse Words In String in Java :-
For Example :-
Input:- Computer Software
Output :- Software Computer
Number Format Exception
. A Number Format Exception occurs in the java code when a programmer
tries to convert a String into a number. The Number might be int,float or any
java...
Number Format Exception
Convert Hexadecimal number into Integer
Convert Hexadecimal number into Integer ... to convert hexadecimal
data into integer. The java.lang
package provides the functionally to convert the hexadecimal data into an
integer type data.
Code
Java integer to string
/java/java-conversion/convert-number-to-string.shtml
...;
Many times we need to convert a number to a string to be able to operate...;toString()"
method can also be used for the conversion.
Read more at:
http
How to format number in Java?
This tutorial explains you how to format the number in Java. There are number
of situations where we... the number such as leading zeros, prefixes, group
separators etc... Here we
computer manufacture case study
computer manufacture case study Computer Manufacturer Case Study
BNM...
Wireless Mouse 4790 Lotus Notes 6000
Wireless
Keyboard 9482
Sun Java
Communications... address etc. The system should be flexible
enough so that additional details
object conversion - Java Beginners
/java-conversion/ObjectToDouble.shtml
Thanks...
sandeep kumar suman...object conversion Hi,
Can anybody tell me the object conversion in java.
Hi sandeep
Can u please tell me in details about your
how to i convert this case to loop...please help.....
how to i convert this case to loop...please help..... */
import...);
System.out.println();
switch (question) {
case 'a': AppendElement();
case 'b': AddElementSpecific();
case 'c
how to i convert this case to loop...please help.....
how to i convert this case to loop...please help..... import...);
System.out.println();
switch (question) {
case 'a': AppendElement();
case 'b': AddElementSpecific();
case 'c
test case
test case Hi
Can you explain me how to write test case in java.
regards
kennedy
String conversion
String conversion I want to convert a string of mix cases(lower and upper case) to vice versa.
ex.
HellO should be printed to hELLo.
the string comes from user using datainputstream. Also sending individual chars of string
Convert to java - Java Beginners
Convert to java Can anyone convert this javascript program codes to Java codes,please...thanks!
var iTanggalM = 0;
var iTanggalH = 0;
var...) {
case 2 : { sDate = sHariE+", "+iTanggalM+" "+sBulanE+" "+iTahunM;break
convert date month and year into word using java
];
}
public String convert(int number) {
int n = 1...;
case 4:
word = number % 100...);
}
number /= 100;
break;
case
Matching Case
Matching Case Hi,
i want some code for matching case from an text file. i.e if i give an query called Java from the user,i need to get an output searching the text file all the names in that without case sensitive
Java is case sensitive
Java is case sensitive hello,
Why Java is case sensitive?
hii,
Java is Platform Independent Language, so its used widely in very big... that huge no of variables,Java is Case Sensitive
highlight words in an image using java
highlight words in an image using java Hai all,In my application left side image is there and right side an application contains textboxes like... want to highlight name in the image using java/jsp/javascript.please help me
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles
|
http://www.roseindia.net/tutorialhelp/comment/80678
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
[SRU, 9.10] libboost-python1.38 issues with __doc__ property in Python >= 2.6.3
Bug Description
Python >= 2.6.3 has changed the way the __doc__ property is implemented. Boost.Python 1.38 does not account for this, yet, leading to many errors executing Python code using the Boost implemented bindings.
An example trying to use a sample from python-visual:
guy@mountpaku:
Traceback (most recent call last):
File "orbit.py", line 1, in <module>
from visual import *
File "/usr/lib/
import cvisual
AttributeError: 'Boost.:
https:/
* Post on Python Ogre and the __doc__ property issue:
http://
* Discussion of a similar issue from PySide:
http://
ProblemType: Bug
Architecture: i386
Date: Thu Oct 22 10:59:00 2009
DistroRelease: Ubuntu 9.10
NonfreeKernelMo
Package: libboost-
ProcEnviron:
PATH=(custom, user)
LANG=en_NZ.UTF-8
SHELL=/bin/bash
ProcVersionSign
SourcePackage: boost1.38
Uname: Linux 2.6.31-14-generic i686
XsessionErrors: (polkit-
additionally, all demos in python-visual 5.11 and 5.13 work fine with boost svn, so this is most definately not a vpython issue.
This bug is in Karmic release. If we could find the exact commit that fixes this bug, we could get it easily into Karmic. The other option is to see how big the diff is in the lib/python/src section of the code, and see if the svn version fixes it.
@sweetsinse, could you try compiling boost with the Karmic version of the source BUT replace the libs/python directory with the version in the svn trunk? ( $ sudo apt-get source libboost1.38 )
If that works, we could get a small patch that makes boost usable in karmic.
https:/
Packages have been built for testing in my PPA at https:/
As far as I've been digging into this, I think this is a Python's bug. See Issue #5890 on which I just commented: http://
It was caused by a change in python to fix issue 5890, as referenced in http://
I have just given the proposed bug fix a spin by using the packages from ppa:ajmitch. At least all the samples from python-visual that I've tried did work properly, now. Well done! I think we've got a go here!
Hi all,
I tried update the boost packages after I install VPython from Ubuntu repository but I still can't get my VPython script working. Do I need to compile the source to get it working?
Dear Scott,
I tried that but still it did not work. Here is the output from the terminal when I run my VPython script:
teonghan@
(<unknown>:3073): GdkGLExt-WARNING **: Cannot open \xe8\u0002\
(<unknown>:3073): GdkGLExt-WARNING **: Cannot open \xe8\u0002\
glibmm-ERROR **:
unhandled exception (type std::exception) in signal handler:
what: Unable to get extension function: glCreateProgram
aborting...
Aborted
Anyway, I manage to try boost packages from Andrew's PPA in a fresh Karmic installation using VirtualBox and it works. Maybe I messed up with the libraries installations. Still hope I won't need to do fresh installation though.
I have installed the ppa's from Andrew Mitchell and can confirm that it fixes the the issue.
Hi all,
I reformat my HD and do fresh installation of Karmic on it. The first thing I did after I log in for the first time, I straight away install python-visual from the repo which pull all the dependencies, including those boost packages from Andrew's PPA. I restart my comp and still I can't use VPython, got the same problem like my previous post here. This morning, I looked into the VPython source/INSTALL and I just simply install "libgtkglextmm-
@teonghan: Two things on that:
* This should be a problem of the python-visual package, rather than the boost packages.
* python-visual should then probably also depend on libgtkglextmm-
*never* depend on a "*-dev" package, which should just contain headers for compilation/
which you're not doing.
You should file a new bug for python-visual on this issue.
Got the bug too, but it was fixed by the updated libboost packages from the mentioned ppa. Thanks!
I can confirm that the ppa fixes the bug for libavg.
Thanks for the PPA, it fixed this issue for me on Ubuntu 9.04.
I think for now, it should be in the standart repository asap!
I don't have this issue any longer even with packaged karmic version of both boost+python. Can someone test the minimal testcase at http://
Please ignore my previous comment, I had the ajmitch's ppa enabled without knowing. (The karmic version really doesn't work)
r1991 detects the issue and tells the user what to do.
(Uh, sorry again, different bug. :-| )
It would be interesting to see whether anybody has tested this bug on libboost1.40.0 from lucid, yet.
Is it still (going to be) an issue, or is it resolved on 1.40.0?
If so, then this bug would just "grow out". If not, then severe action is required in order to prevent another "broken" release on this issue.
I just tested minimal testcase [1] with 1.40.0-4ubuntu2 on lucid (current) and it is still broken.
[1] http://
The 1.40.0-2ubuntu2 on karmic is broken as well.
I filed separate bug #539049 for 1.40 so that (hopefully) someone applies the patch in lucid.
I put rebuilt boost packages for both karmic and lucid are in https:/
SRU request:
A statement explaining the impact of the bug on users and justification for backporting the fix to the stable release:
See description, specifically this causes depending modules to segfault
An explanation of how the bug has been addressed in the development branch, including the relevant version numbers of packages modified in order to implement the fix:
It has been fixed upstream. We are cherry picking the fix from commit (SVN r53731)
A minimal patch applicable to the stable version of the package. If preparing a patch is likely to be time-consuming, it may be preferable to get a general approval from the SRU team first.
See attached debdiff
Detailed instructions how to reproduce the bug. These should allow someone who is not familiar with the affected package to reproduce the bug and verify that the updated package fixes the problem. Please mark this with a line "TEST CASE:".
TEST CASE:
An example trying to use a sample from python-visual:
guy@mountpaku:
Traceback (most recent call last):
File "orbit.py", line 1, in <module>
from visual import *
File "/usr/lib/
import cvisual
AttributeError: 'Boost.
A discussion of the regression potential of the patch and how users could get inadvertently affected.:
This has been tested upstream (and through the ppas linked above in ubuntu) and regressions have not been seen.
@Scott: this bug is reported for boost1.38 which is not in lucid (bug #539049 is for lucid version); will SRU request on libboost1.38 will get attention? Also, why is this "new" and not "confirmed" anymore?
>@Scott: this bug is reported for boost1.38 which is not in lucid (bug #539049 is for lucid version);
Yes - two bugs make sense: one for the SRU in karmic and one for a bug fix upload into lucid
>will SRU request on libboost1.38 will get attention?
The sponsors team is subscribed, they usually do a good job getting back. To get more attention, we should hop onto #ubuntu-devel and try to get someone from the stable release team to look at this bug (and the lucid one). I probably won't be able to do that for a day or two, so feel free to grab someone from irc to take a look at this.
>Also, why is this "new" and not "confirmed" anymore?
the stable release team, archives, and ubuntu-sponsors teams use different importance to track the progress of their inclusion of the patch. Once a team is subscribed, status doesn't matter (they will use it to signal to each other the status of the request). However, I'll make it triaged since that status is not reserved for their workflow. I'll fix the other bug's status as well.
An example:
https:/
#.
ACK from -SRU for the debdiff in comment #28.
Wontfix forLucid since 1.38 has been removed.
For the record, same problem in lucid, but with boost 1.40, is already fixed in packages (bug #539049).
Accepted boost1.38 into karmic-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https:/ testing should be done with the actual package in -proposed, because it's binary-copied to -updates. This might seem pedantic, but in the past we have had regressions caused by race conditions in the source building process -- the lesson learned is to test the actual update that'll be pushed out.
On Apr 24, 2010, at 4:27 PM, Václav Šmilauer wrote:
>, 9.10] libboost-python1.38 issues with __doc__ property in Python >= 2.6.3
> https:/
> You received this bug notification because you are a member of Ubuntu
> Stable Release Updates Team, which is a direct subscriber.
I have just tested the libboost-
(Beware, some mirrors don't have this package, yet. It took some checking first ...)
Unfortunately, it seems like the error is still there:
$]: from visual import *
-------
AttributeError Traceback (most recent call last)
/home/gkloss/
/usr/lib/
56
57 import crayola as color
---> 58 import cvisual
59 cvisual.
60 from cvisual import (vector, mag, mag2, norm, cross, rotate, comp, proj,
AttributeError: 'Boost.
Guy, I have to contradict your observation.
I just tried in karmic chroot with the package, adding karmic-proposed to sources.list I upgraded via apt-get
Get:2 http://
and then
root@flux:
Python 2.6.4rc2 (r264rc2:75497, Oct 20 2009, 02:54:09)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from visual import *
>>>
works flawlessly.
Are you sure you really had the package installed? (sorry for such a stupid question, but I see no other possible cause)
Václav, of course, I might have gotten something wrong, but I've just tried it again. I've used the nz.archive.
gkloss@it041227:~$ wget http://
--2010-05-11 10:23:32-- http://
Resolving alb-cache.
Connecting to alb-cache.
Proxy request sent, awaiting response... 200 OK
Length: 240038 (234K) [application/
Saving to: `libboost-
100%[==
2010-05-11 10:23:33 (10.1 MB/s) - `libboost-
Now install it manually, after before (forcefully) removing it:
$ sudo dpkg -i libboost-
Selecting previously deselected package libboost-
(Reading database ... 337560 files and directories currently installed.)
Unpacking libboost-
Setting up libboost-
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
And now for the test:
gkloss@it041227:~$ python
Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from visual import *
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/
import cvisual
AttributeError: 'Boost.
>>>
I've just tested & confirmed it as being fixed on amd64, I'll setup an i386 VM & see if I can reproduce this.
Just retested on another box. Result is positive.
Also I've now gone and also reinstalled python-visual, and guess what, it works as well. There must've been some things "wrong" after all the tinkering to get python-visual working under karmic last year.
So: Thumbs up for the fix!
This bug was fixed in the package boost1.38 - 1.38.0-6ubuntu6.1
---------------
boost1.38 (1.38.0-6ubuntu6.1) karmic-proposed; urgency=low
* Apply patch from SVN r53731 to fix the static property initialization
- patches/
-- Andrew Mitchell <email address hidden> Wed, 11 Nov 2009 16:55:00 +1300
i can confirm that building the trunk boost libraries solves this issue.
boost 1.40.0rc1 was not sufficient
|
https://bugs.launchpad.net/python/+bug/457688
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
Is the following Python code an accurate representation of how submissions are evaluated? I've played with this to help me evaluate my modelling, but wanted to make sure I understood how the evaluator worked. I believe I'll need to add a PostId to the prediction
data when I submit, but have not included that for simplicity's sake in this example code.
from __future__ import division
import csv
import os
def main():
pred = [
[0.05,0.05,0.05,0.8,0.05],
[0.73,0.05,0.01,0.20,0.02],
[0.02,0.03,0.01,0.75,0.19],
[0.01,0.02,0.83,0.12,0.02]
]
act = [
[0,0,0,1,0],
[1,0,0,0,0],
[0,0,0,1,0],
[0,0,1,0,0]
]
scores = []
for index in range(0, len(pred)):
result = llfun(act[index], pred[index])
scores.append(result)
print(sum(scores) / len(scores)) # 0.0985725708595
if __name__ == '__main__':
main()
ll = sum(act*sp.log(pred) + sp.subtract(1,act)*sp.log(sp.subtract(1,pred)))
That's not right. Since the prediction is a normalized multinomial distribution, you just take log(pred[label]), and ignore the other predictions not covered by the label (their impact on the score is via the normalization). If your prediction is not actually
normalized, you need to normalize it (after clamping to 1e-15).
Here's the function I use:
import numpy as np
def multiclass_log_loss(y_true, y_pred, eps=1e-15):
"""Multi class version of Logarithmic Loss metric.
idea from this post:
Parameters
----------
y_true : array, shape = [n_samples]
y_pred : array, shape = [n_samples, n_classes]
Returns
-------
loss : float
"""
predictions = np.clip(y_pred, eps, 1 - eps)
# normalize row sums to 1
predictions /= predictions.sum(axis=1)[:, np.newaxis]
actual = np.zeros(y_pred.shape)
rows = actual.shape[0]
actual[np.arange(rows), y_true.astype(int)] = 1
vsota = np.sum(actual * np.log(predictions))
return -1.0 / rows * vsota
What type of objects are the inputs?
y_true : array, shape = [n_samples]
y_pred : array, shape = [n_samples, n_classes]
I'm using a simple list of list and isn't working properly =(
thanks for any help
The function assumes that two numpy ndarrays are supplied.
The first is a 1-d array, where each element is the goldstandard class ID of the instance.
The second is a 2-d array, where each element is the predicted distribution over the classes.
Here are some example uses:
>>> import numpy as np>>> multiclass_log_loss(np.array([0,1,2]),np.array([[1,0,0],[0,1,0],[0,0,1]]))2.1094237467877998e-15>>> multiclass_log_loss(np.array([0,1,2]),np.array([[1,1,1],[0,1,0],[0,0,1]]))0.36620409622270467
with —
|
https://www.kaggle.com/c/predict-closed-questions-on-stack-overflow/forums/t/2644/multi-class-log-loss-function
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
std::abort
From cppreference.com
Causes abnormal program termination unless SIGABRT is being caught by a signal handler passed to signal and the handler does not return.
Destructors of variables with automatic, thread local and static storage durations are not called. Functions passed to std::atexit() are also not called. Whether open resources such as files are closed is implementation defined. Implementation defined status is returned to the host environment that indicates unsuccessful execution.
Parameters
(none)
Return value
(none)
Exceptions
Example
Run this code
#include <cassert> // assert #include <csignal> // std::signal #include <iostream> // std::cout class Tester { public: Tester() { std::cout << "Tester ctor\n"; } ~Tester() { std::cout << "Tester dtor\n"; } }; // Destructor not called Tester static_tester; void signal_handler(int signal) { if (signal == SIGABRT) { std::cerr << "SIGABRT received\n"; } else { std::cerr << "Unexpected signal " << signal << " received\n"; } std::_Exit(EXIT_FAILURE); } int main() { // Destructor not called Tester automatic_tester; // Setup handler const auto previous_handler = std::signal(SIGABRT, signal_handler); if (previous_handler == SIG_ERR) { std::cerr << "Setup failed\n"; return EXIT_FAILURE; } // Raise SIGABRT assert(false); std::cout << "This code is unreachable\n"; return EXIT_SUCCESS; }
Output:
Tester ctor Tester ctor Assertion failed: (false), function main, file ..., line 41. SIGABRT received
|
http://en.cppreference.com/mwiki/index.php?title=cpp/utility/program/abort&oldid=67181
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
erfc, erfcf, erfcl - complementary error functions
#include <math.h>
double erfc(double x);
float erfcf(float x);
long double erfcl [MX]
either 0.0 (if representable), oreither 0.0 (if representable), or
an implementation-defined value shall be returned.an implementation-defined value shall be returned.
[MX]
If x is NaN, a NaN erfc().
|
http://pubs.opengroup.org/onlinepubs/000095399/functions/erfcf.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
\begin{code} {-# OPTIONS_GHC -fno-implicit-prelude #-} ----------------------------------------------------------------------------- -- | -- () -- * Waiting , threadDelay -- :: Int -> IO () , registerDelay -- :: Int -> IO (TVar Bool) , threadWaitRead -- :: Int -> IO () , threadWaitWrite -- :: Int -> IO () -- * MVars , MVar -- abstract , -- abstract , atomically -- :: STM a -> IO a , retry -- :: STM a , orElse -- :: STM a -> STM a -> STM a , catchSTM -- :: STM a -> (Exception -> STM a) -> STM a , alwaysSucceeds -- :: STM a -> STM () , always -- :: STM Bool -> STM () , TVar -- abstract , newTVar -- :: a -> STM (TVar a) , newTVarIO -- :: a -> STM (TVar a) , readTVar -- :: TVar a -> STM a ,HandlerLock #endif , ensureIOManagerIsRunning #ifdef mingw32_HOST_OS , ConsoleEvent(..) , win32ConsoleHandler , toWin32ConsoleEvent #endif ) where import System.Posix.Types #ifndef mingw32_HOST_OS import System.Posix.Internals #endif import Foreign import Foreign.C #ifndef __HADDOCK__ import {-# SOURCE #-} GHC.TopHandler ( reportError, reportStackOverflow ) #endif import Data.Maybe import GHC.Base import GHC.IOBase import GHC.Num ( Num(..) ) import GHC.Real ( fromIntegral, div ) #ifndef mingw32_HOST_OS import GHC.Base ( Int(..) ) #endif #ifdef mingw32_HOST_OS import GHC.Read ( Read ) import GHC.Enum ( Enum ) #endif import GHC.Exception import GHC.Pack ( packCString# ) import GHC.Ptr ( Ptr(..), plusPtr, FunPtr(..) ) import GHC.STRef import GHC.Show ( Show(..), showString ) import Data.Typeable'). -} forkIO :: IO () -> IO ThreadId forkIO action = IO $ \ s -> case (fork# action_plus s) of (# s1, id #) -> (# s1, ThreadId id #) where action_plus = catchException action child). -} forkOnIO :: Int -> IO () -> IO ThreadId forkOnIO (I# cpu) action = IO $ \ s -> case (forkOn# cpu action_plus s) of (# s1, id #) -> (# s1, ThreadId id #)) foreign import ccall "&n_capabilities" n_capabilities :: Ptr CInt childHandler :: Exception -> IO () childHandler err = catchException (real_handler err) childHandler real_handler :: Exception -> IO () real_handler ex = case ex of -- ignore thread GC and killThread exceptions: BlockedOnDeadMVar -> return () BlockedIndefinitely -> return () AsyncException ThreadKilled -> return () -- report all others: AsyncException StackOverflow -> reportStackOverflow other -> reportError other {- | ) -} killThread :: ThreadId -> IO () 8 of the paper. Like any blocking operation, 'throwTo' is therefore interruptible (see Section 4.3 of the paper). There is currently no guarantee that the exception delivered by 'throwTo' will be delivered at the first possible opportunity. In particular, if a thread may unblock and then re-block exceptions (using 'unblock' and 'block') without receiving a pending 'throwTo'. This is arguably undesirable behaviour. -} throwTo :: ThreadId -> Exception -> IO () throwTo (ThreadId id) ex = IO $ \ s -> case (killThread# id ex s) of s1 -> (# s1, () #) -- | Returns the 'ThreadId' of the calling thread (GHC only). myThreadId :: IO ThreadId myThreadId = IO $ \s -> case (myThreadId# s) of (# s1, id #) -> (# s1, ThreadId id #) -- , a #) -> unSTM k new_s ) returnSTM :: a -> STM a returnSTM x = STM (\s -> (# s, x #)) -- | Unsafely performs IO in the STM mon -- |Exception handling within STM actions. catchSTM :: STM a -> (Exception -> STM a) -> STM a catchSTM (STM m) k = STM $ \s -> catchSTM# m (\ex -> unSTM (k ex)) "Transac} %************************************************************************ %* * \subsection[mvars]{M-Structures} %* * %************************************************************************. \begin{code} - #) -- . isEmptyMVar :: MVar a -> IO Bool isEmptyMVar (MVar mv#) = IO $ \ s# -> case isEmptyMVar# mv# s# of (# s2#, flg #) -> (# s2#, not (flg ==# 0#) #) -- |Add a finalizer to an 'MVar' (GHC only). See "Foreign.ForeignPtr" and -- "System.Mem.Weak" for more about finalizers. addMVarFinalizer :: MVar a -> IO () -> IO () addMVarFinalizer (MVar m) finalizer = IO $ \s -> case mkWeak# m () finalizer s of { (# s1, w #) -> (# s1, () #) } withMVar :: MVar a -> (a -> IO b) -> IO b withMVar m io = block $ do a <- takeMVar m b <- catchException (unblock (io a)) (\e -> do putMVar m a; throw e) putMVar m a return b \end{code} %************************************************************************ %* * \subsection{Thread waiting} %* * %************************************************************************ \begin{code} #ifdef mingw32_HOST_OS -- Note: threadDelay, {-# NOINLINE pendingEvents #-} {-# NOINLINE pendingDelays #-} (pendingEvents,pendingDelays) = unsafePerformIO $ do startIOManagerThread reqs <- newIORef [] dels <- newIORef [] return (reqs, dels) -- the first time we schedule an IO request, the service thread -- will be created (cool, huh?) <- c_readIOManagerEvent exit <- case r of _ | r == io_MANAGER_WAKEUP -> return False _ | r == io_MANAGER_DIE -> return True 0 -> return False -- spurious wakeup r -> do start_console_handler (r `shiftR` 1); return False if exit then return () else service_cont wakeup delays' _other -> service_cont wakeup delays' -- probably timeout service_cont wakeup delays = do atomicModifyIORef prodding (\_ -> (False,False)) service_loop wakeup delays -- must agree with rts/win32/ThrIOManager.c io_MANAGER_WAKEUP = 0xffffffff :: Word32 io_MANAGER_DIE = 0xfffffffe :: Word32")) stick :: IORef HANDLE {-# NOINLINE stick #-} stick = unsafePerformIO (newIORef nullPtr) now [] = = 0xFFFFFFFF :: DWORD -- exit <- if wakeup_all then return False else do b <- fdIsSet wakeup readfds if b == 0 then return False else alloca $ \p -> do c_read (fromIntegral wakeup) p 1; return () s <- peek p case s of _ | s == io_MANAGER_WAKEUP -> return False _ | s == io_MANAGER_DIE -> return True _ -> withMVar signalHandlerLock $ \_ -> do handler_tbl <- peek handlers sp <- peekElemOff handler_tbl (fromIntegral s) io <- deRefStablePtr sp forkIO io = 0xff :: CChar io_MANAGER_DIE = 0xfe :: CChar stick :: IORef Fd {-# NOINLINE stick #-} stick = unsafePerformIO (newIORef 0) wakeupIOManager :: IO () wakeupIOManager = do fd <- readIORef stick with io_MANAGER_WAKEUP $ \pbuf -> do c_write (fromIntegral fd) pbuf 1; return () -- Lock used to protect concurrent access to signal_handlers. Symptom of -- this race condition is #1922, although that bug was on Windows a similar -- bug also exists on Unix. signalHandlerLock :: MVar () signalHandlerLock = unsafePerformIO (newMVar ()) foreign import ccall "&signal_handlers" handlers :: Ptr (Ptr (StablePtr (IO ()))) foreign import ccall "setIOManagerPipe" c_setIOManagerPipe :: CInt -> IO () -- ----------------------------------------------------------------------------- -- IO requests buildFdSets maxfd readfds writefds [] = return maxfd buildFdSets maxfd readfds writefds (Read fd m : m : [] = return () wakeupAll (Read fd m : reqs) = do putMVar m (); wakeupAll reqs wakeupAll (Write fd now ptimeval [] =) newtype CTimeVal =? newtype CFdSet =_CLR" c_fdClr :: CInt -> Ptr CFdSet -> IO () fdClr :: Fd -> Ptr CFdSet -> IO () fdClr (Fd fd) fdset = c_fdClr fd fdset \end{code}
|
https://downloads.haskell.org/~ghc/6.8.2/docs/html/libraries/base/src/GHC-Conc.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
We try to debug on Xeon Phi the code which contains the following code construction:
#include <iostream> #include <memory.h> #include <immintrin.h> using namespace std; void f( float* _amatr) { __m512 a; a = _mm512_load_ps(_amatr+1); _mm512_store_ps(_amatr+1, a); } int main(int argc, char* argv[]) { __attribute__((aligned(64))) float _amatr[256]; for(int i=0; i<256; i++) _amatr[i] = i+1; f(_amatr); cout<<"It works\n"; return 0; }
This code is successfully built with any compilers flags.
Application normally runs only when it built with code optimisation (without additional flags, or with anyone optimistaion flags: -O1, -O2, -O3),
icpc -mmic PhiFunc.cpp scp a.out mic0:~ ssh mic0 ./a.out It works
but segnentation fault error appears when we try to run this code compiled with -O0.
icpc -mmic -O0 PhiFunc.cpp scp a.out mic0:~ ssh mic0 ./a.out Segmentation fault
And major problem is impossibility debugging our complicated code because it have to use -O0 flag.
|
https://software.intel.com/en-us/forums/intel-c-compiler/topic/484753
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
public class UnknownElementException extends UnknownEntityException
Elementhierarchy. May be thrown by an element visitor to indicate that the visitor was created for a prior version of the language.
ElementVisitor.visitUnknown(javax.lang.model.element.Element, UnknownElementException(Element e, Object p)
UnknownElementException. The
pparameter may be used to pass in an additional argument with information about the context in which the unknown element was encountered; for example, the visit methods of
ElementVisitormay pass in their additional parameter.
e- the unknown element, may be
null
p- an additional parameter, may be
null
public Element getUnknownElement()
nullif unavailable
public Object getArgument().
|
http://docs.oracle.com/javase/7/docs/api/javax/lang/model/element/UnknownElementException.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
std::clog, std::wclog::clog and std::wclog control output to a stream buffer of implementation-defined type (derived from std::streambuf), associated with the standard C output stream stderr, but, unlike std::cerr/std::wcerr, these streams are not automatically flushed and not automatically tie()'d with cout.
These objects are guaranteed to be constructed before the first constructor of a static object is called and they are guaranteed to outlive the last destructor of a static object, so that it is always possible to write to std::clog in user code.
Unless sync_with_stdio(false) has been issued, it is safe to concurrently access these objects from multiple threads for both formatted and unformatted output.
Example
#include <iostream> struct Foo { int n; Foo() { std::clog << "static constructor\n"; } ~Foo() { std::clog << "static destructor\n"; } }; Foo f; // static object int main() { std::clog << "main function\n"; }
Output:
static constructor main function static destructor
|
http://en.cppreference.com/mwiki/index.php?title=cpp/io/clog&oldid=43016
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
Paramiko key pair gen. Error IV must be 8 bytes long.
I'm working on an ssh client for pythonista and have ran into a problem generating a password protected key pair. I keep getting an error which I think is from pycrypto. The error occurs when trying to encrypt the private key. Any ideas would be greatly welcome.
Error: IV must be 8 bytes long.
import paramiko def keygen(fil,passwd=None,bits=1024): k = paramiko.RSAKey.generate(bits) k.write_private_key_file(fil, password=passwd) o = open(fil+'.pub' ,"w").write(k.get_base64()) keygen('rsa_test','password')
Fixed it. There was a bug in paramikos cipher.
Can you please post the workaround?
Change the following line in pkey.py to hardcode the 'AES-128-CBC' cipher instead. The comment states that there is a single cipher that is used. The error is from trying to use DES. I just hard coded it to AES.
_write_private_key() # cipher_name = list(self._CIPHER_TABLE.keys())[0] cipher_name = 'AES-128-CBC'
Nice. Yes, you definitely want AES instead of DES which was proven insecure long ago. A good summary of the history at
|
https://forum.omz-software.com/topic/846/paramiko-key-pair-gen-error-iv-must-be-8-bytes-long/5
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Topic compaction
Pulsar's topic compaction feature enables you to create compacted topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case).
To use compaction:
- You need to give messages keys, as topic compaction in Pulsar takes place on a per-key basis (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g.
AAPLor
GOOG---could serve as the key (more on this below). Messages without keys will be left alone by the compaction process.
- Compaction can be configured to run automatically, or you can manually trigger compaction using the Pulsar administrative API.
- Your consumers must be configured to read from compacted topics (Java consumers, for example, have a
readCompactedsetting that must be set to
true). If this configuration is not set, consumers will still be able to read from the non-compacted topic.
Compaction only works on messages that have keys (as in the stock ticker example the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction.
When should I use compacted topics?
The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks. Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (
GOOG,
AAPL,
TWTR, etc.). Compacting this topic would give consumers on the topic two options:
- They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages.
- They can read from the compacted topic if they only want to see the most up-to-date messages.
Thus, if you're using a Pulsar topic called
stock-values, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's configuration.
One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected.
Configuring compaction to run automatically
Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered.
For example, to trigger compaction when the backlog reaches 100MB:
$ bin/pulsar-admin namespaces set-compaction-threshold \
--threshold 100M my-tenant/my-namespace
Configuring the compaction threshold on a namespace will apply to all topics within that namespace.
Triggering compaction manually
In order to run compaction on a topic, you need to use the
topics compact command for the
pulsar-admin CLI tool. Here's an example:
$ bin/pulsar-admin topics compact \
persistent://my-tenant/my-namespace/my-topic
The
pulsar-admin tool runs compaction via the Pulsar REST API. To run compaction in its own dedicated process, i.e. not through the REST API, you can use the
pulsar compact-topic command. Here's an example:
$ bin/pulsar compact-topic \
--topic persistent://my-tenant-namespace/my-topic
Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the
pulsar-admin topics compactcommand to run compaction through the REST API should present no issues in the overwhelming majority of cases; using
pulsar compact-topicshould correspondingly be considered an edge case.
The
pulsar compact-topic command communicates with ZooKeeper directly. In order to establish communication with ZooKeeper, though, the
pulsar CLI tool will need to have a valid broker configuration. You can either supply a proper configuration in
conf/broker.conf or specify a non-default location for the configuration:
$ bin/pulsar compact-topic \
--broker-conf /path/to/broker.conf \
--topic persistent://my-tenant/my-namespace/my-topic
# If the configuration is in conf/broker.conf
$ bin/pulsar compact-topic \
--topic persistent://my-tenant/my-namespace/my-topic
When should I trigger compaction?
How often you trigger compaction will vary widely based on the use case. If you want a compacted topic to be extremely speedy on read, then you should run compaction fairly frequently.
Consumer configuration
Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients. If the
Java
In order to read from a compacted topic using a Java consumer, the
readCompacted parameter must be set to
true. Here's an example consumer for a compacted topic:
Consumer<byte[]> compactedTopicConsumer = client.newConsumer()
.topic("some-compacted-topic")
.readCompacted(true)
.subscribe();
As mentioned above, topic compaction in Pulsar works on a per-key basis. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example Pulsar message with a key:
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageBuilder;
Message<byte[]> msg = MessageBuilder.create()
.setContent(someByteArray)
.setKey("some-key")
.build();
The example below shows a message with a key being produced on a compacted Pulsar topic:
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageBuilder;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
PulsarClient client = PulsarClient.builder()
.serviceUrl("pulsar://localhost:6650")
.build();
Producer<byte[]> compactedTopicProducer = client.newProducer()
.topic("some-compacted-topic")
.create();
Message<byte[]> msg = MessageBuilder.create()
.setContent(someByteArray)
.setKey("some-key")
.build();
compactedTopicProducer.send(msg);
|
https://pulsar.apache.org/zh-CN/docs/2.8.0/cookbooks-compaction/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Designing a RESTful API to interact with SQLite database
In this chapter, we will create Django API views for HTTP requests and will discuss how Django and Django REST framework process each HTTP request.
- Creating Django Views
- Routing URLs to Django views and functions
- Launching Django’s development server
- Making HTTP requests using the command-line tool
- Making HTTP requests with Postman
Creating Django Views
In the previous chapters, you have seen how to create a model and its serializer. Now, let’s look at how to process HTTP requests and provide HTTP responses. Here, we will create Django views to process the HTTP requests. On receiving an HTTP request, Django creates an HttpRequest instance and it is passed as the first argument to the view function. This instance contains metadata information that has HTTP verbs such as GET, POST, or PUT. The view function checks the value and executes the code based on the HTTP verb. Here the code uses @csrf_exempt decorator to set a CSRF (Cross-Site Request Forgery) cookie. This makes it easier to test the code, which doesn’t portray a production-ready web service. Let’s get into code implementation.
Python3
Let’s evaluate the code. Here we have two functions.
- task_list()
- task_detail()
Note: Later we will add the security and throttling rules for our RESTFul web service. And, also we need to remove repeated codes. Now the above code is necessary to understand how basic things work.
task_list()
The task_list() function is capable of processing two HTTP verbs – GET and POST.
If the verb is GET, the code retrieves all the task management instances.
if request.method == ‘GET’:
task = TaskManagement.objects.all()
task_serializer = TaskMngSerializer(task, many=True)
return JSONResponse(task_serializer.data)
- It retrieves all the tasks using TaskManagement.objects.all() method,
- serializes the tasks using TaskMngSerializer(task, many=True),
- the data generated by TaskMngSerializer is passed to the JSONResponse, and
- returns the JSONResponse built.
Note: The many=True argument in TaskMngSerializer(task, many=True) specifies that multiple instances have to be serialized.
If the verb is POST, the code creates a new task. Here the new task is provided as JSON data in the body of the HTTP request.
elif request.method == ‘POST’:
task_data = JSONParser().parse(request)
task_serializer = TaskMngSerializer(data=task_data)
if task_serializer.is_valid():
task_serializer.save()
return JSONResponse(task_serializer.data, \
status=status.HTTP_201_CREATED)
return JSONResponse(task_serializer.errors, \
status = status.HTTP_400_BAD_REQUEST)
- Uses JSONParser to parse the request,
- Serialize the parsed data using TaskMngSerializer,
- If data is valid, it is saved in the database, and
- returns the JSONResponse built (contains data and HTTP_201_CREATED status).
task_detail()
The task_detail() function is capable of processing three HTTP verbs – GET, PUT, and DELETE. Here, the function receives the primary key as an argument, and the respective operation is done on the particular instance that has the same key.
If the verb is GET, then the code retrieves a single task based on the key. If the verb is PUT, the code updates the instance and saves it to the database. if the verb is DELETE, then the code deletes the instance from the database, based on the pk value.
JSONResponse()
Apart from the two functions explained, the code has a class called JSONResponse.
class JSONResponse(HttpResponse):
def __init__(self, data, **kwargs):
content = JSONRenderer().render(data)
kwargs[‘content_type’] = ‘application/json’
super(JSONResponse, self).__init__(content, **kwargs)
It renders the data in JSON and saves the returned byte string in the content local variable.
Routing URLs to Django views and functions
Now, it’s necessary to route URLs to view. You need to create a new Python file name urls.py in the taskmanagement folder (restapi\taskmanagement) and add the below code.
Python3
Based on the matching regular expression the URLs are routed to corresponding views. Next, we have to replace the code in the urls.py file in restapi folder (restapi\restapi\urls.py). At present, it has the root URL configurations. Update the urls.py file with the below code.
Python3
Launching Django’s development server
After activating the virtual environment, you can run the below command to start the server.
python manage.py runserver
Sharing the screenshot below.
Development server
Making HTTP requests using the command-line tool
Let’s make use of the command-line tools that we installed in Chapter 1.
HTTP GET request
The HTTP GET requests are used to retrieve the task details from the database. We can use GET requests to retrieve a collection of tasks or a single task.
Retrieve all elements
The below curl command retrieves a collection of tasks.
curl -X GET localhost:8000/taskmanagement/
Output:
On executing the command, Django creates an HttpRequest instance and it is passed as the first argument to the view function. The Django routes the URL to the appropriate view function. Here the views have two methods, task_list and task_detail. Let’s look into the URL pattern, which is configured in taskmanagement\urls.py file
urlpatterns = [
url(r’^taskmanagement/$’,views.task_list),
url(r’^taskmanagement/(?P<pk>[0-9]+)$’, views.task_detail),
]
Here the URL (localhost:8000/taskmanagement/) matches the URL pattern for views.task_list. The task_list method gets executed and checks the HTTP verb. Since our HTTP verb for the request is GET, it retrieves all the tasks.
Let’s run the command to retrieve all the tasks by combining the -i and -X options. Here the benefit is that it shows the HTTP response header, status, Content-Type, etc.
curl -iX GET localhost:8000/taskmanagement/
Output:
So far we have executed the cURL command. Now we will look at the HTTPie utility command to compose and send HTTP requests. For this, we need to access the HTTPie utility prompt installed in the virtual environment. After activating the virtual environment, run the below command.
http :8000/taskmanagement/
The command sends the request: GET.
Output:
HTTPie utility GET request – retrieve all tasks
Retrieve Single Element
Now you are familiar with the command to retrieve a collection of tasks. Next, let’s understand how to retrieve a task based on a task id. Here, we will pass the task id along with the URL. Since the URL has a parameter, Django routes the URL to the task_detail function. Let’s execute the commands.
The HTTPie utility command to retrieve a single task.
http :8000/taskmanagement/2
The above command sends the request: GET.
Output:
Retrieve a single element using HTTPie utility
The equivalent curl command as follows:
curl -iX GET localhost:8000/taskmanagement/2
Output:
Let’s try to retrieve an element that is not in the database.
http :8000/taskmanagement/5
The output as follows
HTTP/1.1 404 Not Found Content-Length: 0 Content-Type: text/html; charset=utf-8 Date: Fri, 30 Oct 2020 14:32:46 GMT Referrer-Policy: same-origin Server: WSGIServer/0.2 CPython/3.7.5 X-Content-Type-Options: nosniff X-Frame-Options: DENY
HTTP POST Request
We use POST requests to create a task. The HTTPUtilityPie command to create a new ask as follows.
http POST :8000/taskmanagement/ task_name=”Document XYZ” task_desc=”Document Description” category=”Writing” priority=”Low” created_date=”2020-10-30 00:00:00.000000+00:00″ deadline=”2020-11-03 00:00:00.000000+00:00″ status=”Pending” payment_done=false
Here the URL request (http POST :8000/taskmanagement/ ) matches the regular expression (taskmanagement/$). Hence, it calls the function task_list, and the POST verb satisfies the condition to execute the code for the task creation.
Output:
POST Request using HTTPUtilityPie
Let’s create another instance using the curl command. The curl command for POST request as follows
curl -iX POST -H “Content-Type: application/json” -d “{\”task_name\”:\”Task 01\”, \”task_desc\”:\”Desc 01\”, \”category\”:\”Writing\”, \”priority\”:\”Medium\”, \”created_date\”:\”2020-10-27 13:02:20.890678\”, \”deadline\”:\”2020-10-29 00:00:00.000000+00:00\”, \”status\”:\”Completed\”, \”payment_done\”: \”true\”}” localhost:8000/taskmanagement/
Output:
POST Request using curl
Here the data required to create a new task is specified after -d and the usage of -H “Content-Type: application/json” signifies the data is in JSON format.
{
“task_name”:”Task 01″, “task_desc”:”Desc 01″, “category”:”Writing”, “priority”:”Medium”, “created_date”:”2020-10-27 13:02:20.890678″, “deadline”:”2020-10-29 00:00:00.000000+00:00″, “status”:”Completed”, “payment_done”: “true”
}
HTTP PUT Request
We make use of PUT request to update an existing task. Here we pass the id of the task that needs to be updated along with the URL. Since the URL has a parameter, Django sends the URL instance to the task_detail function in views. And, executes the code that holds the condition for the PUT verb.
The HTTPie utility command to update the task:
http PUT :8000/taskmanagement/1 task_name=”Swap two elements” task_desc=”Write a Python program to swap two elements in a list” category=”Writing” priority=”Medium” created_date=”2020-10-27 13:02:20.890678″ deadline=”2020-10-29 00:00:00.000000+00:00″ status=”Completed” payment_done=true
Output:
PUT
The equivalent CURL command as follows
curl -iX PUT -H “Content-Type: application/json” -d “{\”task_name\”:\”Swap two elements\”, \”task_desc\”:\”Write a Python program to swap two elements in a list\”, \”category\”:\”Writing\”, \”priority\”:\”Medium\”, \”created_date\”:\”2020-10-27 13:02:20.890678\”, \”deadline\”:\”2020-10-29 00:00:00.000000+00:00\”, \”status\”:\”Completed\”, \”payment_done\”: \”true\”}” localhost:8000/taskmanagement/1
HTTP DELETE Request
The HTTP DELETE Request is used to delete a particular task from the database. Let’s look at the HTTPie utility command.
http DELETE :8000/taskmanagement/4
Output:
Delete Request
The equivalent curl command as follows:
curl -iX DELETE localhost:8000/taskmanagement/4
Making HTTP requests with Postman
So far, we took advantage of command-line tools to compose and send HTTP requests. Now, we will make use of Postman. Postman REST client is a Graphical User Interface (GUI) tool that facilitates composing and sending HTTP requests to the Django development server. Let’s compose and send GET and POST requests.
GET Request
You can select GET in the drop-down menu, type the URL (localhost:8000/taskmanagement/) in the URL field, and hit the Send button. The Postman will display the information in the output Body section. The below screenshot shows the JSON output response.
HTTP GET request using Postman
You can click the Header tab to view the header details. Sharing the screenshot below:
Header
POST Request
Now let’s send a POST request using the Postman GUI tool. Follow the below steps:
- Select the POST verb from the drop-down menu,
- Type the URL in the URL field (localhost:8000/taskmanagement/)
- Select the Body section (in the input section)
- Check the raw radio button and also select JSON in the dropdown menu on the right side of the GraphQL button
- Enter the following lines {“task_name”:”Task 01″, “task_desc”:”Desc 01″, “category”:”Writing”, “priority”:”Medium”, “created_date”:”2020-11-02 13:02:20.890678″, “deadline”:”2020-10-29 00:00:00.000000+00:00″, “status”:”Completed”, “payment_done”: “true”} in the body (input section). and hit send.
Sharing the screenshot below.
Summary
In this article, we created a Django API view for HTTP requests to interact with the SQLite database through the RESTFul web service. We worked with GET, POST, PUT, and DELETE HTTP verbs. We have seen how to send and compose HTTP requests using command-line tools (curl and HTTPie) and the GUI tool (POSTMAN).
|
https://www.geeksforgeeks.org/designing-a-restful-api-to-interact-with-sqlite-database/?ref=rp
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
No authentication system is complete without a password reset feature. I would personally never ship a product that did not have this feature included. It is necessary to provide a way for users to recover access to their accounts/data in case of a lost or forgotten password. In this article, I will be demonstrating how to handle password resets in ExpressJS.
In the last 2 articles, I wrote about how to connect ExpressJS application to MongoDB database and building a user registration and authentication system.
Both of these articles tie into today's article. We're going to use mongoose and our saved user data to enable password resets.
If you've read those articles, or already have your own authentication system, read on. Even if you're using a different tech stack, you may still gain some valuable ideas from this approach.
As always, this project is hosted on Github. Feel free to clone the project to get access to the source code I use in this article.
The password reset flow
Before we dive into the code, let's first establish what the password reset flow will look like from the user's perspective and then design the implementation of this flow.
User's perspective
From the user's perspective, the process should go as follows:
- Click on the 'Forgot password' link in the login page.
- Redirected to a page which requires an email address.
- Receive password reset link in an email.
- Link redirects to a page that requires a new password and password confirmation.
- After submission, redirected to the login page with a success message.
Reset system characteristics
We also need to understand some characteristics of a good password reset system:
- Unique password reset link should be generated for the user such that when the user visits the link, they are instantly identified. This means including a unique token in the link.
- Password reset link should have an expiry time (e.g. 2 hours) after which it is no longer valid and cannot be used to reset the password.
- The reset link should expire once the password has been reset to prevent the same link from being used to reset the password several times.
- If the user requests to change password multiple times without following through on the whole process, each generated link should invalidate the previous one. This prevents having multiple active links from which the password can be reset.
- If the user chooses to ignore the password reset link sent to their email, their current credentials should be left intact and valid for future authentication.
Implementation steps
We now have a clear picture of the reset flow from the user's perspective and the characteristics of a password reset system. Here are the steps we will take in the implementation of this system:
- Create a mongoose model called 'PasswordReset' to manage active password reset requests/tokens. The records set here should expire after a specified time period.
- Include the 'Forgot password' link in the login form that leads to a route that contains an email form.
- Once the email is submitted to a post route, check whether a user with the provided email address exists.
- If the user does not exist, redirect back to the email input form and notify the user that no user with provided email was found.
- If the user exists, generate a password reset token and save it to PasswordReset collection in a document that references the user. If there already is a document in this collection associated with this user, update/replace the current document (there can only be one per user).
- Generate a link that includes the password reset token within it, email the link to the user.
- Redirect to the login page with success message prompting the user to check their email address for the reset link.
- Once the user clicks the link, it should lead to a GET route that expects the token as one of the route params.
- Within this route, extract the token and query the PasswordReset collection for this token. If the document is not found, alert the user that the link is invalid/expired.
- If the document is found, load a form to reset the password. The form should have 2 fields (new password & confirm password fields).
- When the form is submitted, its post route will update the user's password to the new password.
- Delete the password reset document associated with this user in the PasswordReset collection.
- Redirect the user to the login page with a success message.
Implementation
The setup
Firstly, we'll have to set up the project. Install the uuid package for generating a unique token, and the nodemailer package for sending emails.
npm install uuid nodemailer
Add the full domain to the environment variables. We'll need this to generate a link to email to the user.
DOMAIN=
Make some changes to the app entry file in the following areas:
- Set 'useCreateIndex' to 'true' in the mongoose connection options. This makes mongoose's default index build use createIndex instead of ensureIndex and prevents MongoDB deprecation warnings.
- Import a new route file that will contain all the reset routes called 'password-reset'. We will create these routes later.
const connection = mongoose.connect(process.env.MONGO_URI, { useNewUrlParser: true, useUnifiedTopology: true, useCreateIndex: true }) ... app.use('/', require('./routes/password-reset'))
Models
We need to have a dedicated model to handle the password reset records. In the models folder, create a model called 'PasswordReset' with the following code:
const { Schema, model } = require('mongoose') const schema = new Schema({ user: { type: Schema.Types.ObjectId, ref: 'User', required: true }, token: { type: Schema.Types.String, required: true } }, { timestamps: true }) schema.index({ 'updatedAt': 1 }, { expireAfterSeconds: 300 }) const PasswordReset = model('PasswordReset', schema) module.exports = PasswordReset
We have two properties in this model, the user that's requested the password reset, and the unique token assigned to the particular request.
Make sure to set the timestamps option to true in order to include 'createdAt' and 'updatedAt' fields in the document.
After defining the schema, create an index on the updatedAt field with an expiry time of 300 seconds (5 minutes). I've set it this low for testing purposes. In production, you can increase this to something more practical like 2 hours.
In the User model we created in this article (or the user model you currently have), update the pre save hook to the following:
userSchema.pre('save', async function(next){ if (this.isNew || this.isModified('password')) this.password = await bcrypt.hash(this.password, saltRounds) next() })
Do this to make sure the password field is hashed whether the document is new or the password field has been changed in an existing document.
Routes
Create a new file in the route's folder called 'password-reset.js'. This is the file we import in the app entry file.
In this file, import the User and PasswordReset models. Import the v4 function from the uuid package for token generation.
const router = require('express').Router() const { User, PasswordReset } = require('../models') const { v4 } = require('uuid') /* Create routes here */ module.exports = router
Create the first 2 routes. These routes are associated with the form which accepts the user's email address.
router.get('/reset', (req, res) => res.render('reset.html')) router.post('/reset', async (req, res) => { /* Flash email address for pre-population in case we redirect back to reset page. */ req.flash('email', req.body.email) /* Check if user with provided email exists. */ const user = await User.findOne({ email: req.body.email }) if (!user) { req.flash('error', 'User not found') return res.redirect('/reset') } /* Create a password reset token and save in collection along with the user. If there already is a record with current user, replace it. */ const token = v4().toString().replace(/-/g, '') PasswordReset.updateOne({ user: user._id }, { user: user._id, token: token }, { upsert: true }) .then( updateResponse => { /* Send email to user containing password reset link. */ const resetLink = `${process.env.DOMAIN}/reset-confirm/${token}` console.log(resetLink) req.flash('success', 'Check your email address for the password reset link!') return res.redirect('/login') }) .catch( error => { req.flash('error', 'Failed to generate reset link, please try again') return res.redirect('/reset') }) })
The first is a GET route to '/reset'. In this route, render the 'reset.html' template. We will create this template later.
The second route is a POST route for '/reset'. This route expects the user's email in the request body. In this route:
- Flash email back for pre-population in case we redirect back to the email form.
- Check if the user with the email provided exists. If not, flash an error and redirect back to '/reset'.
- Create a token using v4.
- Update PasswordReset document associated with the current user. Set upsert to true in options to create a new document if there isn't one already.
- If update is successful, mail the link to the user, flash a success message and redirect to the login page.
- If update is unsuccessful, flash an error message and redirect back to the email page.
At the moment, we're only logging the link to the console. We will implement the email logic later.
Create the 2 routes that come into play when the user visits the link generated link above.
router.get('/reset-confirm/:token', async (req, res) => { const token = req.params.token const passwordReset = await PasswordReset.findOne({ token }) res.render('reset-confirm.html', { token: token, valid: passwordReset ? true : false }) }) router.post('/reset-confirm/:token', async (req, res) => { const token = req.params.token const passwordReset = await PasswordReset.findOne({ token }) /* Update user */ let user = await User.findOne({ _id: passwordReset.user }) user.password = req.body.password user.save().then( async savedUser => { /* Delete password reset document in collection */ await PasswordReset.deleteOne({ _id: passwordReset._id }) /* Redirect to login page with success message */ req.flash('success', 'Password reset successful') res.redirect('/login') }).catch( error => { /* Redirect back to reset-confirm page */ req.flash('error', 'Failed to reset password please try again') return res.redirect(`/reset-confirm/${token}`) }) })
The first route is a get route that expects the token in the url. The token is extracted and then validated. Validate the token by searching the PasswordReset collection for a document with the provided token.
If the document is found, set the 'valid' template variable to true, otherwise, set it to false. Be sure to pass the token itself to the template. We will use this in the password reset form.
Check the validity of the token by searching the PasswordReset collection by token.
The second route is a POST route that accepts the password reset form submission. Extract the token from the url and then retrieve the password reset document associated with it.
Update the user associated with this particular password reset document. Set the new password and save the updated user.
Once the user is updated, delete the password reset document to prevent it from being reused to reset the password.
Flash a success message and redirect the user to the login page where they can log in with their new password.
If the update is unsuccessful, flash an error message and redirect back to the same form.
Templates
Once we've created the routes, we need to create the templates
In the views folder, create a 'reset.html' template file with the following content:
{% extends 'base.html' %} {% set {% if messages.error %} <div class="alert alert-danger" role="alert">{{ messages.error }}</div> {% endif %} <div class="mb-3"> <label for="name" class="form-label">Enter your email address</label> <input type="text" class="form-control {% if messages.error %}is-invalid{% endif %}" id="email" name="email" value="{{ messages.email or '' }}" required> </div> <div> <button type="submit" class="btn btn-primary">Send reset link</button> </div> </form> {% endblock %}
Here we have one email field that is pre-populated with an email value if one was flashed in the previous request.
Include an alert that displays an error message if one has been flashed from the previous request.
Create another template in the same folder named 'reset-confirm.html' with the following content:
{% extends 'base.html' %} {% set {% if messages.error %} <div class="alert alert-danger" role="alert">{{ messages.error }}</div> {% endif %} <div class="mb-3"> <label for="name" class="form-label">Password</label> <input type="password" class="form-control {% if messages.password_error %}is-invalid{% endif %}" id="password" name="password"> <div class="invalid-feedback">{{ messages.password_error }}</div> </div> <div class="mb-3"> <label for="name" class="form-label">Confirm password</label> <input type="password" class="form-control {% if messages.confirm_error %}is-invalid{% endif %}" id="confirmPassword" name="confirmPassword"> <div class="invalid-feedback">{{ messages.confirm_error }}</div> </div> <div> <button type="submit" class="btn btn-primary">Confirm reset</button> </div> </form> {% endif %} {% endblock %}
In this form, check for the value of the 'valid' variable that we set in the GET route, if false, render the expired token message. Otherwise, render the password reset form.
Include an alert that displays an error message if one was flashed in the previous request.
Go to the login form that we created in the registration & authentication article and add the following code to the top of the form:
{% if messages.success %} <div class="alert alert-success" role="alert">{{ messages.success }}</div> {% endif %}
This renders the success messages that we flash when we create/send the reset link and when we update the user's password before redirecting to the login page.
In the previous routes section, we logged the reset link in the console. Ideally, we should send an email to the user when they've requested a password reset link.
For this example, I've used ethereal.email to generate a test email account for development purposes. Head over there and create one (it's a one-click process).
Once you've created the test account, add the following variables to your environment variables:
These are my values at the time of writing, plug in your own values here.
Create a 'helpers.js' file in the root of the project. This file will have a bunch of useful functions that are likely to be reused across the entire project.
Define these functions here so that we can import them when they're needed rather than repeating similar logic all over our application.
const nodemailer = require('nodemailer') module.exports = { sendEmail: async ({ to, subject, text }) => { /* Create nodemailer transporter using environment variables. */ const transporter = nodemailer.createTransport({ host: process.env.EMAIL_HOST, port: Number(process.env.EMAIL_PORT), auth: { user: process.env.EMAIL_ADDRESS, pass: process.env.EMAIL_PASSWORD } }) /* Send the email */ let info = await transporter.sendMail({ from: `"${process.env.EMAIL_NAME}" <${process.env.EMAIL_ADDRESS}>`, to, subject, text }) /* Preview only available when sending through an Ethereal account */ console.log(`Message preview URL: ${nodemailer.getTestMessageUrl(info)}`) } }
Export an object with various functions. The first being the 'sendEmail' function.
This function takes the recipient's address, email subject and email text. Create the NodeMailer transporter, using the environment variables defined previously in the options. Send the email using the arguments passed to the function.
The last line of the function logs the message url in the console so you can view the message on Ethereal mail. The test account does not actually send the email.
Go back to the 'password-reset.js' routes and add the email functionality. First, import the function:
const { sendEmail } = require('../helpers')
In the '/reset' POST route, instead of logging the reset link on the console, add the following code:
sendEmail({ to: user.email, subject: 'Password Reset', text: `Hi ${user.name}, here's your password reset link: ${resetLink}. If you did not request this link, ignore it.` })
Send an additional email to notify the user of a successful password change in the '/reset-confirm' POST route once the user is successfully updated:
user.save().then( async savedUser => { /* Delete password reset document in collection */ await PasswordReset.deleteOne({ _id: passwordReset._id }) /* Send successful password reset email */ sendEmail({ to: user.email, subject: 'Password Reset Successful', text: `Congratulations ${user.name}! Your password reset was successful.` }) /* Redirect to login page with success message */ req.flash('success', 'Password reset successful') res.redirect('/login') }).catch( error => { /* Redirect back to reset-confirm page */ req.flash('error', 'Failed to reset password please try again') return res.redirect(`/reset-confirm/${token}`) })
Conclusion
In this article, I demonstrated how to implement a password reset feature in ExpressJS using NodeMailer.
In the next article, I will write about implementing a user email verification system in your Express application. I will use a similar approach to the one used in this article, with NodeMailer being the email package of choice.
The post How to Handle Password Reset (6)
Hi Kelvin, great article, thanks. I just spotted that you forgot to check whether reset token is valid or not in
'/reset-confirm/:token'Post method.
passwordResetis not being checked after this line.
Hi Mete,
Good eye. If we don't check it here, the user update will throw an error. To avoid this we can add a guard clause to check the password reset object:
Thank you! Needed this!
You're welcome!
Good content!
Thank you!
|
https://dev.to/kelvinvmwinuka/how-to-handle-password-reset-in-expressjs-ipb
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
@Generated(value="OracleSDKGenerator", comments="API Version: 20200131") public class TriggerResponderRequest extends BmcRequest<TriggerResponderDetails>
getInvocationCallback, getRetryConfiguration, setInvocationCallback, setRetryConfiguration, supportsExpect100Continue
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
public TriggerResponderRequest()
public String getProblemId()
OCId of the problem.
public TriggerResponderDetails getTriggerResponderDetails()
The responder may update the TriggerResponderDetails getBody$()
Alternative accessor for the body parameter.
getBody$in class
BmcRequest<TriggerResponderDetails>
public TriggerResponderRequest.Builder toBuilder()
Return an instance of
TriggerResponderRequest.Builder that allows you to modify request properties.
TriggerResponderRequest.Builderthat allows you to modify request properties.
public static TriggerRespond<TriggerResponderDetails>
public int hashCode()
BmcRequest
Uses invocationCallback and retryConfiguration to generate a hash.
hashCodein class
BmcRequest<TriggerResponderDetails>
|
https://docs.oracle.com/en-us/iaas/tools/java/2.38.0/com/oracle/bmc/cloudguard/requests/TriggerResponderRequest.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
What is Upstream?
It gives you a magic button, because this button will only appear when there is an app update available on appstore for given app.
Place this button anywhere you wish. Best location is to place it on trailing navigation bar item.
How to use it?
- Import this library as swift package in your project.
- Get the app Id from app store connect for your app. Tutorial
- Follow the below snippet
import SwiftUI import Upstream struct ContentView: View { var body: some View { UpstreamButton(.init(appId: "1618653423"), showFeatureSheet: false) } }
License
Upstream is licensed under the MIT License.
|
https://iosexample.com/in-app-app-update-button-library-in-swift/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Bug #18561closed
Make singleton def operation and define_singleton_method explicitly use public visibility
Description
Currently nearly all uses of
define_singleton_method or
def obj.foo will ignore the caller's frame/cref visibility and use the default visibility of public. I propose this be made explicit in the code, documentation, and ruby specs.
$ ruby -e 'class Foo; private; define_singleton_method(:foo) {p :ok}; end; Foo.foo' :ok $ ruby -e 'class Foo; private; def self.foo; p :ok; end; end; Foo.foo' :ok
This works because the class in which the method is defined is nearly always different from the calling scope, since we are usually calling
define_singleton_method against some other object. It "accidentally" ends up being public all the time, like
def self.foo.
However, there is at least one (weird) edge case where the frame/cref visibility is honored:
$ ruby -e '$o = Object.new; class << $o; private; $o.define_singleton_method(:foo){}; end; $o.foo' -e:1:in `<main>': private method `foo' called for #<Object:0x00007fcf0e00dc98> (NoMethodError)
This also works for
def $o.foo but I would argue this is unexpected behavior in both cases. It is difficult to trigger, since you have to already be within the target singleton class body, and the "normal" behavior everywhere else is to ignore the frame/cref visibility.
It would not be difficult to make both forms always use public visibility:
- Split off the actual method-binding logic from
rb_mod_define_methodinto a separate function
mod_define_method_internalthat takes a visibility parameter.
- Call that new method from
rb_mod_define_method(with cref-based visibility calculation) and
rb_obj_define_method(with explicit public visibility).
Updated by headius (Charles Nutter) 6 months ago
@marcandre (Marc-Andre Lafortune) provided a possibly more common case, but I still highly doubt that anyone would expect this to behave differently than the unmatched scope version:
$ ruby -e 'class Foo; class << Foo; private; Foo.define_singleton_method(:foo){}; end; end; Foo.foo' -e:1:in `<main>': private method `foo' called for Foo:Class (NoMethodError)
Updated by headius (Charles Nutter) 6 months ago
Related JRuby pull request that makes singleton method definition always public (in response to the ostruct issue linked above):
Updated by jeremyevans0 (Jeremy Evans) 5 months ago
I couldn't get
def $o.foo to have non-public visibility in any version of Ruby. I definitely don't think it is possible in the current code. Singleton method definitions use the
definesmethod VM instruction, which calls
vm_define_method with
TRUE as the
is_singleton argument, and
vm_define_method always uses public visibility in this case.
From testing, the
define_singleton_method visibility issue was introduced in Ruby 2.1:
$ ruby19 -ve 'class Foo; class << Foo; private; Foo.define_singleton_method(:foo){}; end; end; Foo.foo' ruby 1.9.3p551 (2014-11-13 revision 48407) [x86_64-openbsd] $ ruby20 -ve 'class Foo; class << Foo; private; Foo.define_singleton_method(:foo){}; end; end; Foo.foo' ruby 2.0.0p648 (2015-12-16 revision 53162) [x86_64-openbsd] $ ruby21 -ve 'class Foo; class << Foo; private; Foo.define_singleton_method(:foo){}; end; end; Foo.foo' ruby 2.1.9p490 (2016-03-30 revision 54437) [x86_64-openbsd] -e:1:in `<main>': private method `foo' called for Foo:Class (NoMethodError)
I submitted a pull request to fix the
define_singleton_method visibility issue:
Updated by headius (Charles Nutter) 5 months ago
Thank you @jeremyevans0 (Jeremy Evans) for the analysis and PR. I agree that the one weird edge case would generally just be unexpected by a user and should be considered a bug.
Updated by jeremyevans (Jeremy Evans) 4 months ago
- Status changed from Open to Closed
Applied in changeset git|173a6b6a802d80b8cf200308fd3653832b700b1c.
Make define_singleton_method always define a public method
In very unlikely cases, it could previously define a non-public method
starting in Ruby 2.1.
Also available in: Atom PDF
|
https://bugs.ruby-lang.org/issues/18561
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Programming with Python
Instructor’s Guide
Legend
We are using a dataset with records on inflammation from patients following an arthritis treatment.
We make reference in the lesson that this data is somehow strange. It is strange because it is fabricated! The script used to generate the inflammation data is included as
tools.
Analyzing Patient Data
Solutions to exercises:
Sorting out references
What does the following program print out?
first, second = 'Grace', 'Hopper' third, fourth = second, first print(third, fourth)
Hopper Grace
Slicing strings
A section of an array is called a slice. We can take slices of character strings as well:
element = 'oxygen'
What is the value of
element[:4]? What about
element[4:]? Or
element[:]?
oxyg en oxygen
What is
element[-1]? What is
element[-2]?
n e
Given those answers, explain what
element[1:-1] does.
Creates a substring from index 1 up to (not including) the final index, effectively removing the first and last letters from 'oxygen'
Thin slices
The expression
element[3:3] produces an empty string, i.e., a string that contains no characters. If
data holds our array of patient data, what does
data[3:3, 4:4] produce? What about
data[3:3, :]?
print(data[3:3, 4:4]) print(data[3:3, :])
[] []
Check your understanding: plot scaling
Why do all of our plots stop just short of the upper end of our graph? Update your plotting code to automatically set a more appropriate scale (hint: you can make use of the
max and
min methods to help)
Because matplotlib normally sets x and y axes limits to the min and max of our data (depending on data range)
# for example: axes3.set_ylabel('min') axes3.plot(numpy.min(data, axis=0)) axes3.set_ylim(0,6) # or a more automated approach: min_data = numpy.min(data, axis=0) axes3.set_ylabel('min') axes3.plot(min_data) axes3.set_ylim(numpy.min(min_data), numpy.max(min_data) * 1.1)
Check your understanding: drawing straight lines
Why are the vertical lines in our plot of the minimum inflammation per day not perfectly vertical?
Because matplotlib interpolates (draws a straight line) between the points
Make your own plot
Create a plot showing the standard deviation (
numpy.std) of the inflammation data for each day across all patients.
max_plot = matplotlib.pyplot.plot(numpy.std(data, axis=0)) matplotlib.pyplot.show()
Moving plots around
Modify the program to display the three plots on top of one another instead of side by side.
import numpy import matplotlib.pyplot data = numpy.loadtxt(fname='data()
Repeating Actions with Loops
Solutions to exercises:
From 1 to N
Using
range, write a loop that uses
range to print the first 3 natural numbers.
for i in range(1, 4): print(i)
1 2 3
Computing powers with loops
Write a loop that calculates the same result as
5 ** 3 using multiplication (and without exponentiation).
result = 1 for i in range(0, 3): result = result * 5 print(result)
125
Reverse a string
Write a loop that takes a string, and produces a new string with the characters in reverse order.
newstring = '' oldstring = 'Newton' length_old = len(oldstring) for char_index in range(length_old): newstring = newstring + oldstring[length_old - char_index - 1] print(newstring)
'notweN'
After discussing these challenges could be a good time to introduce the
b *= 2 syntax.
Storing Multiple Values in Lists
Solutions to exercises:
Turn a string into a list
Use a
for loop to convert the string
"hello" into a list of letters:
my_list = [] for char in "hello": my_list.append(char) print(my_list)
["h", "e", "l", "l", "o"]
Analyzing Data from Multiple Files
Solutions to exercises:
Plotting Differences
Plot the difference between the average of the first dataset and the average of the second dataset, i.e., the difference between the leftmost plot of the first two figures.
import glob import numpy import matplotlib.pyplot filenames = glob.glob('data(data0.mean(axis=0) - data1.mean(axis=0)) fig.tight_layout() matplotlib.pyplot.show()
Making Choices
Solutions to exercises:
How many paths?
Which of the following would be printed if you were to run this code? Why did you pick this answer?
if 4 > 5: print('A') elif 4 == 5: print('B') elif 4 < 5: print('C')
C gets printed, because the first two conditions,
4 > 5 and
4 == 5 are not true, but
4 < 5 is true.
What is truth?
After reading and running the code below, explain the rules')
First line prints nothing: an empty string is false Second line prints
'word is true': a non-empty string is true Third line prints nothing: an empty list is false Fourth line prints
'non-empty list is true': a non-empty list is true Fifth line prints nothing: 0 is false Sixth line prints
'one is true': 1 is true
Close enough
Write some conditions that print
True if the variable
a is within 10% of the variable
b and
False otherwise.
a = 5 b = 5.1 if abs(a - b) < 0.1 * abs(b): print('True') else: print('False')
Another possible solution:
print(abs(a - b) < 0.1 * abs(b))
This works because the boolean objects
True and
False have string representations which can be
In-place operators
Write some code that sums the positive and negative numbers in a list separately, using in-place operators.
positive_sum = 0 negative_sum = 0 test_list = [3, 4, 6, 1, -1, -5, 0, 7, -8] for num in test_list: if num > 0: positive_sum += num elif num == 0: pass else: negative_sum += num print(positive_sum, negative_sum)
21 -14
Here
pass means “don’t do anything”. In this particular case, it’s not actually needed, since if
num == 0 neither sum needs to change, but it illustrates the use of
elif.
Tuples and exchanges
Explain what the overall effect of this code is:
left = 'L' right = 'R' temp = left left = right right = temp
The code swaps the contents of the variables right and left.
Compare it to:
left, right = right, left
Do they always do the same thing? Which do you find easier to read?
Yes, although it’s possible the internal implementation is different. Answers will vary on which is easier to read.
Creating Functions
Solutions to exercises:
Combining strings
Write a function called
fence that takes two parameters called
original and
wrapper and returns a new string that has the wrapper character at the beginning and end of the original.
def fence(original, wrapper): return wrapper + original + wrapper
Selecting characters from strings
Write a function called
outer that returns a string made up of just the first and last characters of its input.
def outer(input_string): return input_string[0] + input_string[-1])
259.81666666666666 287.15 273.15 0
k is 0 because the
k inside the function
f2k doesn’t know about the
k defined outside the function.
Errors and Exceptions
Solutions to exercises:'
- 3 levels
errors_02.py.")
SyntaxError for missing
(): at end of first line,
IndentationError for mismatch between second and third lines.
Fixed version:)
3
NameErrors for
number being misspelled, for
message not defined, and for
a not being in quotes.
Fixed version:
message = "" for number in range(10): # use a if the number is a multiple of 3, otherwise use b if (number % 3) == 0: message = message + "a" else: message = message + "b" print(message)
abbabbabba
Identifying Item Errors
- Read the code below, and (without running it) try to identify what the errors are.
- Run the code, and read the error message. What type of error is it?
- Fix the error.
seasons = ['Spring', 'Summer', 'Fall', 'Winter'] print('My favorite season is ', seasons[4])
IndexError; the last entry is
seasons[3], so
seasons[4] doesn’t make sense.
Fixed version:
seasons = ['Spring', 'Summer', 'Fall', 'Winter'] print('My favorite season is ', seasons[-1])
Defensive Programming
Solutions to exercises:
Pre- and post-conditions?
# a possible pre-condition: assert len(input) > 0, 'List length must be non-zero' # a possible post-condition: assert numpy.min(input) < average < numpy.max(input), 'Average should be between min and max of input values'
Testing assertions
Given a sequence of values, the function
running returns a list containing the running totals at each index.
- The first assertion checks that the input sequence
valuesis not empty. An empty sequence such as
[]will make it fail.
- The second assertion checks that the first value in the list is positive. Input such as
[-1,0,2,3]will make it fail.
- The third assertion checks that the running total always increases. Input such as
[0,1,3,-5,4]will make it fail.
Fixing and testing
Fix
range_overlap. Re-run
test_range_overlap after each change you make.
import numpy def range_overlap(ranges): '''Return common overlap among a set of [low, high] ranges.''' if len(ranges) == 1: # only one entry, so return it return ranges[0] lowest = -numpy.inf # lowest possible number highest = numpy.inf # highest possible number for (low, high) in ranges: lowest = max(lowest, low) highest = min(highest, high) if lowest >= highest: # no overlap return None else: return (lowest, highest)
Debugging
Solutions to exercises:
Debug the following problem
This exercise has the aim of ensuring learners are able to step through unseen code with unexpected output to locate issues. The issues present are that:
The loop is not being utilised correctly.
heightand
weightare always set as the first patient’s data during each iteration of the loop.
The height/weight variables are reversed in the function call to
calculate_bmi(...)
Command-Line Programs
Solutions to exercises:
Arithmetic on the command line
Write a command-line program that does addition and subtraction:
$ python arith.py add 1 2
3
$ python arith.py subtract 3 4
-1
# this is code/arith.py module introduced earlier, write a simple version of
ls that shows files in the current directory with a particular suffix. A call to this script should look like this:
$ python my_ls.py py
left.py right.py zero.py
# this is code/my_ls.py so that it uses
-n,
-m, and
-x instead of
--min,
--mean, and
--max respectively. Is the code easier to read? Is the program easier to understand?
# f in filenames: process(f, action) def process(filename, action): data = numpy.loadtxt(filename, delimiter=',') if action == '-n': values = numpy.min(data, axis=1) elif action == '-m': values = numpy.mean(data, axis=1) elif action == '-x': values = numpy.max(data, axis=1) for m in values: print(m) main() program?
# this is code/check.py f in filenames[1:]: nrow, ncol = row_col_count(f) if nrow != nrow0 or ncol != ncol0: print('File %s does not check: %d rows and %d columns' % (f, nrow, ncol)) else: print('File %s checks' % f) return def row_col_count(filename): try: nrow, ncol = numpy.loadtxt(filename, delimiter=',').shape except ValueError: #get this if file doesn't have same number of rows and columns, or if it has non-numeric content nrow, ncol = (0, 0) return nrow, ncol.
# this is code/line_count.py f in filenames: n = count_file(f) print('%s %d' % (f, n)) sum_nlines += n()
|
https://cac-staff.github.io/summer-school-2016-Python/instructors.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
.
kindly show me the way out..
I admit I’m new to panda, but as far as I understand it, the packpanda command is just packing up your code and game content with the needed python interpreter and the engine.
May be you can reduce the filesize of the packed exe by only importing the API modules you really need in your program.
avoid something like
from fu.bar import *
instead write
from fu.bar import neededModule
It’s big because it includes all of Python, and all of Panda, by default. See this thread.
It’s theoretically possible to bring this size down, but you’d have to customize packpanda a lot, as well as build yourself a custom version of Panda. You also may have difficulty writing your application in Python if your goal is to keep it under 5MB.
David
I think I should say, getting it under 5MB just isn’t going to happen with python. The python installer by itself is 10MB - that’s without panda.
At least you could rar it and split it up into 5mb pieces.
I’ll bring this thread up again, because I’m still interested in creating a smaller installer.
I disagree with this. There’s no need to include the whole Python installer in a project that uses only a small part of Python. In fact, I’ve used Pygame with py2exe and the resulting .zip is about 2.5MB, including all necessary python and pygame modules to run standalone. What’s the difference between Panda3D and Pygame that causes such different file sizes?
Thanks,
Nacho
It is possible to create a “Hello world” application with py2exe at 848k (776k if zipped up). Assuming Microsoft runtime libraries are present, no encodings or other modules, and maximum upx compression.
It is also possible to create a minimal Panda3d application with py2exe at 5.512M (zipped folder, unzipped 7.77M).
If you want to full functional panda system including MSVCR71.dll and other runtime libraries, then the minimum size is 7.79M (zipped folder, unzipped 11.8M).
enn0x
Ok enn0x. If you want to quoute those sized you should also post your setup.py file for py2exe and explan the setting that you are using in that setup.py file and how each affect the final file size.
Do not just say “Oh its possible to get things down to xmb” Then do not explain how to do it.
No problem. But don’t expect any magic tricks.
But before we go into configuration details, let’s have a look a the problem: size of the distribution. What can be done to get size down?
(1) Leave away files that aren’t necessary. Either by configuring py2exe or by deleting them by hand.
(2) If you have to distribute a file, then make it as small as possible.
I don’t know a way to determine what files that are really required by an application. You have to try out for your own application. Is it safe to leave away msvcr71.dll, assuming the user will already have it on his system or download it separate? I don’t know. And afaik it is questionable if you are ALLOWED to distribute it, if you don’t OWN a copy of VisualStudio yourself (many py2exe users don’t own VS). See here for example:.
How to make files small. For example leaving away comments in python code will gain a few bytes. “python -OO” and py2exe “optimize”:2 will remove doc strings. But this is, well, just a few bytes. A better compression for zipped content would help too, for example 7zip. But the real bulk of your distribution are DLL files: libpanda.dll (16.6M), python24.dll(1.8M). And this is where we can gain the most, by compressing DLL files. I mentioned in my post above how to do this (upx compression), and I have already posted this trick here on this forum:. Here is the link again:
That’s all.
“Hello world” with py2exe:
Here the goal is to make the distribution as small as possible, at any cost. The first file is, obviously, our hello world application itself. hello.py:
print 'Hello world!'
Then we need a script for py2exe. I assume all the files are in the same directory, by the way. setup.py:
from distutils.core import setup import py2exe INCLUDES = [ ] EXCLUDES = [ 'Tkinter', 'Tkconstants', 'tcl', 'unicodedat', 're', 'inspect', 'popen2', 'copy', 'string', 'macpath', 'os2emxpath' ] DLL_EXCLUDES = [ 'MSVCR71.dll' ] setup( console = [ { 'script' : 'hello.py', } ], zipfile = None, options = { 'py2exe': { 'optimize' : 2, 'unbuffered' : True, 'includes' : INCLUDES, 'excludes' : EXCLUDES, 'dll_excludes': DLL_EXCLUDES, }, }, data_files = [ ], )
Setting zipfile to None will make py2exe append the zipped python files to the .exe it generates (default behavior is to create a separate file “library.zip”, or whatever you name it). No gain in size, just one file less.
EXCLUDES is where py2exe is told to leave away unnecessary python modules. We are building for windows, right? So why including macpath or os2emxpath, ntpath will be sufficient (selfish, I know). string and copy? “Hello world” doesn’t need these. Same for the other excluded modules. In any “real” application you will have to keep these modules, though. Important is to exclude Tk stuff.
DLL_EXCLUDES is where flame wars could start. 340k for ‘MSVCR71.dll’, or 162k packed with upx. Well, we go for minimum size.
Finally we need a batch file to automate the build process. build.bat:
echo off Rem ____py2exe____ python -OO setup.py py2exe -a Rem ____upx____ cd dist ..\upx203.exe --best *.* cd .. Rem ____clean up____ rd build /s /q
Some notes on what build.bat does:
(1) Create a py2exe distribution using setup.py, optimization (-OO) for python compiler, and -a (ascii) tells py2exe that we don’t need encodings. Without -a py2exe will pack some hundred kilobytes of encodings. The result of this step are two directories, build and dist.
(2) Change to the dist directory and pack everything with upx. I used upx version 2.03 here, the current version. Oh, and before I forget: put the upx executable either somewhere in your path or put it in the same directory as setup.py.
(3) We don’t need the build directory. That’s where py2exe collects all the files. For convenience remove this directory after we are done.
Now run build.bat. Here is a snippet from the output, to demonstrate what upx does here in this case:
... Ultimate Packer for eXecutables Copyright (C) 1996,1997,1998,1999,2000,2001,2002,2003,2004,2005,2006 UPX 2.03w Markus Oberhumer, Laszlo Molnar & John Reiser Nov 7th 2006 File size Ratio Format Name -------------------- ------ ----------- ----------- 82523 -> 75867 91.93% win32/pe hello.exe 1871872 -> 788992 42.15% win32/pe python24.dll 4608 -> 3584 77.78% win32/pe w9xpopen.exe -------------------- ------ ----------- ----------- 1959003 -> 868443 44.33% [ 3 files ] Packed 3 files.
The dist directory should look like this:
11/02/2007 11:08 <DIR> . 11/02/2007 11:08 <DIR> .. 11/02/2007 11:08 75,867 hello.exe 18/10/2006 07:35 788,992 python24.dll 18/10/2006 07:18 3,584 w9xpopen.exe 3 Datei(en) 868,443 Bytes
So we have 868,443 bytes => 848k. Zipping up with standard windows zip gives a zip-file of about 776k. For even better results try to zip with 7zip.
7za a -tzip -mx9 "foo.zip" -r
7zip gives 771k, not very impressive.
Panda3D with py2exe:
I assume Panda3D 1.3.2 (Windows), Python 2.4.4 and py2exe 0.6.6.
First, to make py2exe work with Panda3D, I did what this post is suggesting: and did start with a setup.py from this post: (credit goes to kaweh and aaronbstjohn). To sum it up:
(1) File “direct/init.py”: comment all lines.
(2) Copy or move all directories from “direct/src” to “direct/”
(3) File “pandac/extension_native_helpers.py”, after line 8 add these line:
sys.path.append(sys.prefix)
I don’t know if this is the only way to get py2exe working with Panda3D, and I didn’t spend time on finding out. But it works for me.
Then we need a minimal panda application. Not much, just show a window and wait for escape key to be pressed. game.py:
import direct.directbase.DirectStart from direct.showbase.DirectObject import DirectObject class World( DirectObject ): def __init__( self ): base.setBackgroundColor( 0.1, 0.2, 0.5, 0 ) self.accept( 'escape', self.exit ) def exit( self ): taskMgr.stop( ) world = World( ) run( )
Now we need a setup.py file. Here it is. Please note that I didn’t exclude any python modules or dll’s here, since I don’t think it is worth to fight for every kilobyte. ‘MSVCR71.dll’ is part of the distribution too. setup.py:
from distutils.core import setup import py2exe import os PANDA_DIR = 'C:\Programme\Panda3D-1.3.2' setup( windows = [ { 'script' : 'game.py', #'icon_resources' : [ ( 1, 'game.ico' ) ], } ], zipfile = None, options = { 'py2exe': { 'optimize' : 2, 'excludes' : [ 'Tkinter' ], }, }, packages = [ 'direct', 'direct.directbase', 'direct.showbase', 'direct.interval', 'direct.actor', 'direct.gui', 'direct.task', 'direct.controls', 'direct.directnotify', 'direct.directtools', 'direct.directutil', 'direct.fsm', 'direct.cluster', 'direct.particles', 'direct.tkpanels', 'direct.tkwidgets', 'direct.directdevices', 'direct.distributed', 'pandac', ], package_dir = { 'direct' : os.path.join(PANDA_DIR, 'direct'), 'direct.directbase' : os.path.join(PANDA_DIR, 'direct/directbase'), 'direct.showbase' : os.path.join(PANDA_DIR, 'direct/showbase'), 'direct.interval' : os.path.join(PANDA_DIR, 'direct/interval'), 'direct.actor' : os.path.join(PANDA_DIR, 'direct/actor'), 'direct.gui' : os.path.join(PANDA_DIR, 'direct/gui'), 'direct.task' : os.path.join(PANDA_DIR, 'direct/task'), 'direct.control' : os.path.join(PANDA_DIR, 'direct/control'), 'direct.directnotify' : os.path.join(PANDA_DIR, 'direct/directnotify'), 'direct.directtools' : os.path.join(PANDA_DIR, 'direct/directtools'), 'direct.directutil' : os.path.join(PANDA_DIR, 'direct/directutil'), 'direct.fsm' : os.path.join(PANDA_DIR, 'direct/fsm'), 'direct.cluster' : os.path.join(PANDA_DIR, 'direct/cluster'), 'direct.particles' : os.path.join(PANDA_DIR, 'direct/particles'), 'direct.tkpanels' : os.path.join(PANDA_DIR, 'direct/tkpanels'), 'direct.tkwidgets' : os.path.join(PANDA_DIR, 'direct/tkwidgets'), 'direct.directdevices' : os.path.join(PANDA_DIR, 'direct/directdevices'), 'direct.distributed' : os.path.join(PANDA_DIR, 'direct/distributed'), 'pandac' : os.path.join(PANDA_DIR, 'pandac'), }, data_files = [ ( 'etc', [ 'etc/Config.prc', ] ), ], )
Finally the batch script for building. build.bat:
echo off Rem ____py2exe____ python setup.py py2exe -a Rem ____upx____ cd dist copy avcodec-51-panda.dll ..\build\avcodec-51-panda.dll ..\upx203.exe --best *.* copy ..\build\avcodec-51-panda.dll avcodec-51-panda.dll cd .. Rem ____clean up____ rd build /s /q
One note here: avcodec-51-panda.dll for some reason crashes if compressed with upx. So I save an uncompressed copy first and then put it back into the dist directory after everything has been compressed. An better way would be to call upx for every file that has to be compressed, but I am lazy when it comes to typing scripts.
Run build.bat and have a look at what upx does here:
... Ultimate Packer for eXecutables Copyright (C) 1996,1997,1998,1999,2000,2001,2002,2003,2004,2005,2006 UPX 2.03w Markus Oberhumer, Laszlo Molnar & John Reiser Nov 7th 2006 File size Ratio Format Name -------------------- ------ ----------- ----------- 2799762 -> 1098898 39.25% win32/pe avcodec-51-panda.dll upx203: etc: IOException: not a regular file -- skipped 1360668 -> 1354012 99.51% win32/pe game.exe 1101824 -> 278016 25.23% win32/pe libp3direct.dll 151552 -> 70144 46.28% win32/pe libp3dtool.dll 372736 -> 160256 42.99% win32/pe libp3dtoolconfig.dll 5632 -> 4096 72.73% win32/pe libp3heapq.dll 17027072 -> 3918336 23.01% win32/pe libpanda.dll 999424 -> 447488 44.77% win32/pe LIBPANDAEAY.dll 2416640 -> 579072 23.96% win32/pe libpandaegg.dll 2220032 -> 499200 22.49% win32/pe libpandaexpress.dll 237568 -> 69120 29.09% win32/pe libpandafx.dll 229376 -> 89088 38.84% win32/pe libpandajpeg.dll 1269760 -> 275968 21.73% win32/pe libpandaphysics.dll 131072 -> 58880 44.92% win32/pe libpandapng13.dll 192512 -> 83456 43.35% win32/pe LIBPANDASSL.dll 360448 -> 80896 22.44% win32/pe libpandatiff.dll 59904 -> 36352 60.68% win32/pe libpandazlib1.dll 348160 -> 165888 47.65% win32/pe MSVCR71.dll 147456 -> 69120 46.88% win32/pe nspr4.dll 1867776 -> 787456 42.16% win32/pe python24.dll 4608 -> 3584 77.78% win32/pe w9xpopen.exe -------------------- ------ ----------- ----------- 33303982 -> 10129326 30.41% [ 21 files ]
“libpanda.dll” down from 16.6M to 3.8M, and total size down to 30.41% !!! And this is what the dist directory should look like:
11/02/2007 11:57 <DIR> . 11/02/2007 11:57 <DIR> .. 29/11/2006 19:31 2,799,762 avcodec-51-panda.dll 11/02/2007 11:53 <DIR> etc 11/02/2007 11:53 1,354,012 game.exe 29/11/2006 19:58 278,016 libp3direct.dll 29/11/2006 19:32 70,144 libp3dtool.dll 29/11/2006 19:32 160,256 libp3dtoolconfig.dll 29/11/2006 19:58 4,096 libp3heapq.dll 29/11/2006 19:51 3,918,336 libpanda.dll 29/11/2006 19:31 447,488 LIBPANDAEAY.dll 29/11/2006 19:55 579,072 libpandaegg.dll 29/11/2006 19:33 499,200 libpandaexpress.dll 29/11/2006 19:55 69,120 libpandafx.dll 29/11/2006 19:31 89,088 libpandajpeg.dll 29/11/2006 19:57 275,968 libpandaphysics.dll 29/11/2006 19:31 58,880 libpandapng13.dll 29/11/2006 19:31 83,456 LIBPANDASSL.dll 29/11/2006 19:31 80,896 libpandatiff.dll 29/11/2006 19:31 36,352 libpandazlib1.dll 29/11/2006 19:31 165,888 MSVCR71.dll 29/11/2006 19:31 69,120 nspr4.dll 29/11/2006 19:31 787,456 python24.dll 18/10/2006 07:18 3,584 w9xpopen.exe 21 Datei(en) 11,830,190 Bytes
11,830,190 bytes is 11553k is 11.282M. Zipped up this is about 7.5M. Here we are.
For the final kick you could delete some files by hand, bringing size further down. I don’t recommend doing this, and if you should check if the application still runs on a different machine (which I didn’t do). I found that this (THIS!) Panda3D application starts up without the following files:
libp3dtool.dll libp3dtoolconfig.dll LIBPANDAEAY.dll libpandajpeg.dll libpandapng13.dll LIBPANDASSL.dll libpandatiff.dll libpandazlib1.dll MSVCR71.dll nspr4.dll
I hope this extensive post sheds some light on py2exe in combination with Panda3D.
The essence is: use UPX.
enn0x
enn0X: Thank you for the very enlightening tutorial. That cleared up a LOT of thing for me with py2exe that were mysteries after I read its manual. You might want to consider actually putting this in the manual/wiki. It was very informative and could help other people as well as me.
|
https://discourse.panda3d.org/t/pack-panda-installer-size/1882/9
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Inspecting Packages¶
You can inspect the uploaded packages and also the packages in the local cache by running the conan get command.
List the files of a local recipe folder:
$ conan get zlib/1.2.8@conan/stable . Listing directory '.': CMakeLists.txt conanfile.py conanmanifest.txt
Print the conaninfo.txt file of a binary package:
$ conan get zlib/1.2.11@conan/stable -p 2144f833c251030c3cfd61c4354ae0e38607a909
Print the conanfile.py from a remote package:
$ conan get zlib/1.2.8@conan/stable -r conan-center
from conans import ConanFile, tools, CMake, AutoToolsBuildEnvironment from conans.util import files from conans import __version__ as conan_version import os class ZlibConan(ConanFile): name = "zlib" version = "1.2.8" ZIP_FOLDER_NAME = "zlib-%s" % version #...
Check the conan get command command reference and more examples.
|
https://docs.conan.io/en/1.21/creating_packages/inspecting_packages.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Spring Framework RCE, Early Announcement
Updates
- [04-13] “Data Binding Rules Vulnerability CVE-2022-22968” follow-up blog post published, related to the “disallowedFields” from the Suggested Workarounds
- [04-08] Snyk announces an additional attack vector for Glassfish and Payara. See also related Payara, upcoming release announcement
- [04-04] Updated Am I Impacted with improved description for deployment requirements
- [04-01] Updated Am I Impacted with additional notes
- [04-01] Updated Suggested Workarounds section for Apache Tomcat upgrades and Java 8 downgrades
- [04-01] “Mitigation Alternative” follow-up blog post published, announcing Apache Tomcat releases versions 10.0.20, 9.0.62, and 8.5.78 that close the attack vector on Tomcat’s side
- [03-31] Spring Boot 2.6.6 is available
- [03-31] Spring Boot 2.5.12 is available
- [03-31] CVE-2022-22965 is published
- [03-31] Added section “Misconceptions”
- [03-31] Added section “Am I Impacted”
- [03-31] Fix minor issue in the workaround for adding
disallowedFields
- [03-31] Spring Framework 5.3.18 and 5.2.20 are available
Table of Contents
Overview
I would like to announce an RCE vulnerability in the Spring Framework that was leaked out ahead of CVE publication. The issue was first reported to VMware late on Tuesday evening, close to Midnight, GMT time by codeplutos, meizjm3i of AntGroup FG. On Wednesday we worked through investigation, analysis, identifying a fix, testing, while aiming for emergency releases on Thursday. In the mean time, also on Wednesday, details were leaked in full detail online, which is why we are providing this update ahead of the releases and the CVE report.
Vulnerability
The vulnerability impacts Spring MVC and Spring WebFlux applications running on JDK 9+. The specific exploit requires the application to be packaged and deployed as a traditional WAR on a Servlet container. If the application is deployed as a Spring Boot executable jar, i.e. the default, it is not vulnerable to the exploit. However, the nature of the vulnerability is more general, and there may be other ways to exploit it.
Am I Impacted?
These are the requirements for the specific scenario from the report:
- Running on JDK 9 or higher
- Packaged as a traditional WAR and deployed on a standalone Servlet container. Typical Spring Boot deployments using an embedded Servlet container or reactive web server are not impacted.
spring-webmvcor
spring-webfluxdependency.
- Spring Framework versions 5.3.0 to 5.3.17, 5.2.0 to 5.2.19, and older versions.
Additional notes:
- The vulnerability involves
ClassLoaderaccess and depends on the actual Servlet Container in use. Tomcat 10.0.19, 9.0.61, 8.5.77, and earlier versions are known to be vulnerable. Payara and Glassfish are also known to be vulnerable. Other Servlet containers may also be vulnerable.
- The issue relates to data binding used to populate an object from request parameters (either query parameters or form data). Data binding is used for controller method parameters that are annotated with
@ModelAttributeor optionally without it, and without any other Spring Web annotation.
- The issues does not relate to
@RequestBodycontroller method parameters (e.g. JSON deserialization). However, such methods may still be vulnerable if they have another method parameter populated via data binding from query parameters.
Status
- Spring Framework 5.3.18 and 5.2.20, which contain the fixes, have been released.
- Spring Boot 2.6.6 and 2.5.12 that depend on Spring Framework 5.3.18 have been released.
- CVE-2022-22965 has been published.
- Apache Tomcat has released versions 10.0.20, 9.0.62, and 8.5.78 which close the attack vector on Tomcat’s side, see Spring Framework RCE, Mitigation Alternative.
Suggested Workarounds
The preferred response is to update to Spring Framework 5.3.18 and 5.2.20 or greater. If you have done this, then no workarounds are necessary. However, some may be in a position where upgrading is not possible to do quickly. For that reason, we have provided some workarounds below.
Please note that, workarounds are not necessarily mutually exclusive since security is best done “in depth”.
Upgrading Tomcat
For older applications, running on Tomcat with an unsupported Spring Framework version, upgrading to Apache Tomcat 10.0.20, 9.0.62, or 8.5.78, provides adequate protection. However, this should be seen as a tactical solution, and the main goal should be to upgrade to a currently supported Spring Framework version as soon as possible. If you take this approach, you should consider setting Disallowed Fields as well for defense in depth approach.
Downgrading to Java 8
Downgrading to Java 8 is a viable workaround, if you can neither upgrade the Spring Framework nor upgrade Apache Tomcat.
Disallowed Fields
Another viable workaround is to disable binding to particular fields by setting
disallowedFieldson
WebDataBinder globally:
works generally, but as a centrally applied workaround fix, may leave some loopholes, in particular if a controller sets
disallowedFields locally through its own
@InitBinder method, which overrides the global setting.
To apply the workaround in a more fail-safe way, applications could extend
RequestMappingHandlerAdapter to update the
WebDataBinder at the end after all other initialization. In order to do that, a Spring Boot application can declare a
WebMvcRegistrations bean (Spring MVC) or a
WebFluxRegistrations bean (Spring WebFlux).
For example in Spring MVC (and similar in WebFlux):
package car.app; import java.util.ArrayList; import java.util.Arrays; import java.util.List; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.boot.autoconfigure.web.servlet.WebMvcRegistrations; import org.springframework.context.annotation.Bean; import org.springframework.web.bind.ServletRequestDataBinder; import org.springframework.web.context.request.NativeWebRequest; import org.springframework.web.method.annotation.InitBinderDataBinderFactory; import org.springframework.web.method.support.InvocableHandlerMethod; import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter; import org.springframework.web.servlet.mvc.method.annotation.ServletRequestDataBinderFactory; @SpringBootApplication public class MyApp { public static void main(String[] args) { SpringApplication.run(CarApp.class, args); } ; } }; } } }
For Spring MVC without Spring Boot, an application can switch from
@EnableWebMvc to extending
DelegatingWebMvcConfiguration directly as described in Advanced Config section of the documentation, then overriding the
createRequestMappingHandlerAdapter method.
Misconceptions
There was speculation surrounding the commit to deprecate
SerializationUtils. This class has only one usage within the framework and is not exposed to external input. The deprecation is unrelated to this vulnerability.
There was confusion with a CVE for Spring Cloud Function which was released just before the report for this vulnerability. It is also unrelated.
|
https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement?utm_campaign=Weekly%20newsletter%20of%20Masafumi%20Negishi&utm_medium=email&utm_source=Revue%20newsletter
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
VideoStream¶
scenedetect.video_stream Module
This module contains the
VideoStream class, which provides a library agnostic
interface for video input. To open a video by path, use
scenedetect.open_video():
from scenedetect import open_video video = open_video('video.mp4')
You can also optionally specify a framerate and a specific backend library to use. Unless specified,
OpenCV will be used as the video backend. See
scenedetect.backends for a detailed example.
New
VideoStream implementations can be
tested by adding it to the test suite in tests/test_video_stream.py.
VideoStream Interface¶
- class scenedetect.video_stream.VideoStream¶
Interface which all video backends must implement.
- abstract static BACKEND_NAME()¶
Unique name used to identify this backend. Should be a static property in derived classes (BACKEND_NAME = ‘backend_identifier’).
- Return type
str
- abstract property aspect_ratio: float¶
Pixel aspect ratio as a float (1.0 represents square pixels).
- property base_timecode: scenedetect.frame_timecode.FrameTimecode¶
FrameTimecode object to use as a time base.
- abstract property duration: Optional[scenedetect.frame_timecode.FrameTimecode]¶
Duration of the stream as a FrameTimecode, or None if non terminating.
- abstract property frame_number: int¶
Current position within stream as the frame number.
Will return 0 until the first frame is read.
- abstract property frame_size: Tuple[int, int]¶
Size of each video frame in pixels as a tuple of (width, height).
- abstract property position: scenedetect.frame_timecode.FrameTimecode¶
Current position within stream as FrameTimecode.
This can be interpreted as presentation time stamp, thus frame 1 corresponds to the presentation time 0. Returns 0 even if frame_number is 1.
- abstract property position_ms: float¶
Current position within stream as a float of the presentation time in milliseconds. The first frame has a PTS of 0.
- abstract read(decode=True, advance=True)¶
Return next frame (or current if advance = False), or False if end of video.
- Parameters
decode (bool) – Decode and return the frame.
advance (bool) – Seek to the next frame. If False, will remain on the current frame.
- Returns
If decode = True, returns either the decoded frame, or False if end of video. If decode = False, a boolean indicating if the next frame was advanced to or not is returned.
- Return type
Union[ndarray, bool]
- abstract reset()¶
Close and re-open the VideoStream (equivalent to seeking back to beginning).
- Return type
None
- abstract seek(target)¶
Seek to the given timecode. If given as a frame number, represents the current seek pointer (e.g. if seeking to 0, the next frame decoded will be the first frame of the video).
For 1-based indices (first frame is frame #1), the target frame number needs to be converted to 0-based by subtracting one. For example, if we want to seek to the first frame, we call seek(0) followed by read(). If we want to seek to the 5th frame, we call seek(4) followed by read(), at which point frame_number will be 5.
May not be supported on all backend types or inputs (e.g. cameras).
- Parameters
target (Union[FrameTimecode, float, int]) – Target position in video stream to seek to. If float, interpreted as time in seconds. If int, interpreted as frame number.
- Raises
-
- Return type
None
video_stream Functions and Constants¶
The following functions and constants are available in the
scenedetect.video_stream module.
- scenedetect.video_stream.DEFAULT_MIN_WIDTH: int = 260¶
The default minimum width a frame will be downscaled to when calculating a downscale factor.
- scenedetect.video_stream.compute_downscale_factor(frame_width, effective_width=260)¶
Get the optimal default downscale factor based on a video’s resolution (currently only the width in pixels is considered).
The resulting effective width of the video will be between frame_width and 1.5 * frame_width pixels (e.g. if frame_width is 200, the range of effective widths will be between 200 and 300).
- Parameters
frame_width (int) – Actual width of the video frame in pixels.
effective_width (int) – Desired minimum width in pixels.
- Returns
The defalt downscale factor to use to achieve at least the target effective_width.
- Return type
int
Exceptions¶
- exception scenedetect.video_stream.VideoOpenFailure(message='Unknown backend error.')¶
Raised by a backend if opening a video fails.
- Parameters
message (str) – Additional context the backend can provide for the open failure.
|
http://scenedetect.com/projects/Manual/en/latest/api/video_stream.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
<Include a description of your project>
Project description
An admin action that allows you to export your models as CSV files without having to write a single line of code –besides installation, of course.
Features
- Easy installation
- High level of customizability
- Created with permissions in mind
- Sane defaults
Installation
To install:
pip install django-csv-exports
Next add django_exports to your INSTALLED_APPS to include the related css/js:
INSTALLED_APPS = ( # Other apps here 'django_csv_exports', )
Configuration
There are two django settings that you can use to configure who can use the export action:
# Use if you want to check user level permissions only users with the can_csv_<model_label> # will be able to download csv files. DJANGO_EXPORTS_REQUIRE_PERM = True # Use if you want to disable the global django admin action. This setting is set to True by default. DJANGO_CSV_GLOBAL_EXPORTS_ENABLED = False
Fields to export
By default, all of the fields available in a model ar ordered and exported. You can override this behavior at the admin model level. Define the following attribute in your AdminModel:
class ClientAdmin(CSVExportAdmin): csv_fields = ['first_name', 'last_name', 'email', 'phone_number',]
Permission
There are two ways to limit who can export data as CSV files.
Model level permissions: create a new model permission and assign it only to user who should have access to the export action in the admin.
- class Client(models.Model):
-
- class Meta:
- permissions = ((“can_csv_client”, “Can export list of clients as CSV file”),)
AdminModel Level permissions: define a has_csv_permission and return True if a user should have access:
class ClientAdmin(admin.AdminModel): search_fields = ('name', 'id', 'email') csv_fields = ['name', 'id'] def has_csv_permission(self, request): """Only super users can export as CSV""" if request.user.is_superuser: return True
Selective Installation
Sometimes, you don’t want to allow all of your admin models to be exported. For this, you will need to set DJANGO_CSV_GLOBAL_EXPORTS_ENABLED to False, and have your AdminModels extend our CSVExportAdmin admin class:
from django_csv_exports.admin import CSVExportAdmin class ClientAdmin(CSVExportAdmin): pass
Running the Tests
You can run the tests with via:
python setup.py test
or:
python runtests.py
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/django-csv-exports/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Version 1.0 is out now! Be sure to give it a try, feedback is appreciated!
First release of the new remastered version of one of the most popular mods for star wars battlefront II: The Old Republic made by Delta-1035
This is a brand new remake of the mod; it adds tons of new units, weapons and heroes from the old republic era!
Be sure to check out the readme file in order to install this mod.
Only registered members can share their thoughts. So come on! Join the community today (totally free - or sign in with your social account on the right) and join in the conversation.
very good mod but I need vehileces and more heroes there
I liked , only vehicles and for me .. this is great¡¡¡¡ Good Work
I have mixed feelings about this. I loved your original TOR mod, and I love this one. Your original one had more playable units, which I liked, but I love having heroes in your new version. The updated skins and graphics are great in this version as well. I just want to combine the great aspects of both together! These are probably my favorite mods out there tbh. Keep up the good work.
This comment is currently awaiting admin approval, join now to view.
Apart from the overall fine quality of your mod:
"... one of the most popular mods ..."
"... adds tons of new units, weapons and heroes ..."
Do you really mean those two serious? ;)
They sound like some promotional set phrase we're used to from publishers, which always let us play ********-bingo on them, because they always overstate their promises, to get their products selling better ... do you really think, that we need this kind of promotion here on ModDB?
OK, your mod is relatively well known but there are other 'classic' mods (eg. BFX, Conversion Pack, Mass Effect: Unification and so on ...), which are much more popular, since they existed for a much longer period of time ... "one of the MOST POPULAR mods"? ... not yet.
OK, you reworked/overworked/remastered the whole mod (including all units and its gunplay) and you did in fact a pretty good job at it :)
... but have you actually ADDED "TONS OF new units, weapons"? ... no, you altered them.
As already stated: I like how you overhauled the mod and I liked it before as well as I still do, but I found it kinda inappropriate how you decriped this remaster of your mod ;)
Looks like you don't know english very well, brah.
"I hope you'll enjoy this remastered version of - probably - MY most succesful mod"
All I said was that the old TOR mod was MY most popular mod, not the most popular out of all bf2 mods.
And yes, I HAVE ADDED TONS OF NEW THINGS, you have clearly no mod knowledge, that's why you can not understand all the work that I did.
Oh, by the way, I don't get money for this kind of stuff, so there would be no point on me writing misleading articles or whatever.
I know, it's just kinda about the principle ;)
First of all:
I really appreciate your work and I also really like your mod AND I already said that in the comment your are referring to, so you don't have to feel offended :)
I love this platform and all the awesome projects which are available here ... including yours!
Now to the point:
I only read the description of this file and there you wrote (copy pasted from there): "First release of the new remastered version of one of the most popular mods for star wars battlefront II: The Old Republic made by Delta-1035" ... check for yourself ;)
But I noticed that you wrote "my" in the article you posted related to this remaster of your mod, so I'm sorry for not-reading this article before ...
But there's no need for devaluating my language skills due to your injured pride (for which I am sorry for as already stated) because I would say, that I am capable enough to understand most things, which are written in English ... and I think you are also able to judge those right according to my grammar and vocabulary ;)
There is no injured pride, it is just frustrating to see, after all the work that I put in to this, people saying that I have JUST ALTERED some things here and there and bitching about a simple phrase. And I do not care if you write "oh I liked the mod" in a 100 plus words comment of complaint.
I hope that comment made you feel better that day.
But the way you react(ed) testifies a injured pride ;)
As already stated: I didn't want to offend you in any way,
it was just the impression I got, when I read the description.
Ok, have a nice day.
you really a triggered lil bitch huh
Actually, yes the TOR mod is one of the most popular (not the first one of course). But everybody know the old republic mod!
One point for Delta.
Thank you, man.
If I sort all Battlefront mods like this:
Moddb.com
... you can see what the most popular mods overall ;)
I said that it's of course well known, but it's in fact not one of the most popular (not even on other SWBFII modding sites like SWBFGamer and Gametoast) it's one of the better known ones indeed and of course a high quality one ... but no one of the most popular overall.
Steamcommunity.com
Lonebullet.com
Pcgamer.com
From just the first google search result page. Looks pretty popular. I have not released the first version of the tor mod on moddb, so that is why it is not listed there.
You seem very passionate about talking trash.
Playground.ru
Dailymotion.com
Isolaillyon.it
Gamefront.online
Makeagif.com
Techtudo.com.br
I could go on, but I will not. Looks like I made it in a lot of "all time top 10s" and the downloads are a lot.
Not the greatest by any means, but one of the most popular, just like I wrote.
Now you can go bother somebody else ;)
Not even on Gametoast? LOL that is priceless!
I've been an active gametoast user for more than a decade, that forum is the place where the old tor mod was born and where it was released on the first place. Dude, get out of here with your ********.
@Delta: I'm not sure if he was already playing swbf 2 when the mod was released some years ago, maybe that's the reason...?
OffTopic: I just miss the old republic sniper rifle model from the previous version (a cycler rifle i think?...I loved the idea of a loooong rifle :)
@Sulac: Sadly all the good mods weren't released at Moddb,don't take this scale.
This comment is currently awaiting admin approval, join now to view.
Nice Mod! Can you make this playable in the Galactic Conquest? would be really nice.
This is very high quality mod, very enjoyable. I do however, have two requests or rather suggestions. Firstly, BF1 map support, and secondly, Old Republic maps and vehicles to go along with this.
This comment is currently awaiting admin approval, join now to view.
Can you make Jedi Kight as one of the Republic class ?
does this mod need any addition downloads or patches to run it?
the skins work fine on the default battlefront maps, but how do i get your maps to work?
|
https://www.moddb.com/downloads/the-old-republic-remastered-v1
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Docker for Beginners Part 1: Containers and Virtual Machines
This is the Part 1 from the Docker series, Docker for Beginners in 8 Parts. In this post we're going to explore the differences between Containers and Virtual Machines!
Join the DZone community and get the full member experience.Join For Free
Hello!
This is the Part 1 from the Docker series, Docker for Beginners in 8 Parts. In this post we're going to explore the differences between Containers and Virtual Machines!
-
Before jump into hacking Docker, let's explore a few differences between Containers and Virtual Machines. Actually, we should understand what is a Container and what is a Virtual Machine even before compare them.
It's common to compare them and in theory, as Internet
always
sometimes says, Containers are better than Virtual Machines.
Although you can run your Apps in "almost" the same way in both technologies, they're different and sometimes you can't just compare them, since one can be better than other based on your context. Even more: They can be used together! They're are not even enemies!
Let's jump into the fundamental concepts of both technologies.
Applications Running in a Real Computer
Before starting the comparison, what about get a step back and remember how does a classical application runs in a real computer?
To get a more real example, imagine an application that has 3 main components that should run together:
- MySQL Database
- Nodejs Application
- MongoDB Database
As you can see, we should execute 3 different applications. We can run this applications directly in a real computer, using its Operating System (let's say a Linux Ubuntu) as below:
Notice that:
Server: is the real physical computer that runs an operating system
Host OS: is the operating system running upon the server, in this case a Linux Ubuntu
Applications: Are the 3 applications running together in the operating system
But you can fall in the challenge to get these 3 applications running isolated from each other, each with its own operating system. Imagine that:
- MySQL should run on Linux Fedora
- Nodejs should run on Linux Ubuntu
- MongoDB should run on Windows
If we follow the approach above, we can create the next architecture with 3 real computers:
Hmm, that doesn't seems good because it is too heavy. Now we're working with 3 physical machines, each one with its own operating system and besides that they must communicate with each other.
Virtual Machines come to the game to create a better isolated environment without using hundreds of real computers!
Virtual Machines
Long story short:
Virtual Machines emulate a real computer by virtualizing it to execute applications, running on top of a real computer.
Virtual Machines can emulate a real computer and can execute applications separately. To emulate a real computer, virtual machines use a Hypervisor to create a virtual computer.
Hypervisor is responsible to create a virtual hardware and software environment to run and manages Virtual Machines.
On top of the Hypervisor, we have a Guest OS that is a Virtualized Operating System where we can run isolated applications, called Guest Operating System.
Applications that run in Virtual Machines have access to Binaries and Libraries on top of the operating system.
Let's see a
terrible picture designed by me
beautiful picture with this architecture: As you can see from this picture, now we have a Hypervisor on top of the Host OS that provides and manages the Guest OS. In this Guest OS we would run applications as below:
Great! Now the 3 applications can run on the same Real Computer but in 3 Virtualized Machines, completely isolated from each other.
Virtual Machines Advantages
Full Isolation
Virtual Machines are environments that are completely isolated from each other
Full Virtualization
With full virtualization we can have a fully isolated environment, with each Virtual Machine with its own CPU virtualization
Virtual Machines Drawbacks
Heavy
Virtual Machines usually execute in a heavy isolated process, because it needs am entire Guest OS
More Layers
Depending on your configuration, you would have one more layer when your virtual machine doesn't have direct access to the hardware (hosted hypervisor) and it brings to us less performance
Containers
Long story short again:
Containers are isolated processes that share resources with its host and, unlike VMs, doesn't virtualize the hardware and doesn't need a Guest OS
One of the biggest differences between Containers and VMs is that Containers share resources with other Containers in the same host. This automatically brings to us more performance than VMs, since we don't have a Guest OS for each container.
Instead of having a Hypervisor, now we have a Container Engine, like below:
The Container Engine doesn't need to expose or manage Guest OS, therefore our 3 applications would run directly in the Container as below:
Applications in Containers can also access Binaries and Libraries:
Containers Advantages
Isolated Process
Containers are environments that will be executed by isolated processes but can share resources with other containers on the same host
Mounted Files
Containers allow us to mount files and resources from inside the container to the outside.
Lightweight Process
Containers don't run in a Guest OS, so its process is lightweight with a better performance and can start up the container in seconds
Containers Drawbacks
Same Host OS
You can fall in a situation when each application requires a specific OS and it easier to achieve with VMs, since we can have different Guest OS
Security Issues
Containers are isolated process that have direct access to a few important namespaces such as Hostname, Networks and Shared Memory. Your container can be used to do bad things more easily! Of course you can control your root user, create a few barriers but you should worry about it.
Result from the Comparison
From this simple comparison you can have thoughts like this:
Hey, Virtual Machines is not for me! It's really heavy, run an entire operating system and I can't pack one hundred apps in seconds!
We can list more problems with Containers and Virtual Machines. Actually the point is:
There is no winner here! There is no better approach if you're just analyzing them in isolation.
You can have a better approach based on a context, a scope, a problem
Actually, Virtual Machines are great and you can even work with Virtual Machines and Containers together! Imagine a situation that:
- Your production environment uses Windows, but you should have an application that just runs on Linux
- So, you can have a Virtual Machine to run a Linux distribution
- Then, you can have Containers running inside this Virtual Machine
So, What is Docker?
What is Docker and why we were talking about Virtual Machines and Containers so far?
Docker is a great technology to build containers easily.
Where you read Container Engine in the picture, now you can read Docker Engine.
But containers technology is more than a decade old. Google has been working with a thousand of Containers so far.
So, why we're talking about Containers? And why Docker?
Docker is really simple and easy to use and has a great adoption of the community.
Containers are really old, but the way to create containers was really complicated. Docker shows us a new way to think about container creation.
In a nutshell, Docker is:
- A container technology written in Go Language
- A technology that brings to us the facility to start up a container in seconds
- A technology that has a huge community adoption
- A technology with its own tool to run multiples containers easily
- A technology with its own tool to run manage a cluster of containers easily
- A technology that uses cgroups, namespaces and file systems to create lightweight isolated process
That's it!
You'll get your hands dirty by running Docker containers in the next posts of the series.
But before that, let's just install Docker on your machine in the next post!
I hope that this article would be helpful to you!
Thanks!
Published at DZone with permission of Alexandre Gama. See the original article here.
Opinions expressed by DZone contributors are their own.
|
https://dzone.com/articles/part-4-docker-images-in-more-details
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
In computing, a Hashtable is defined as a data structure that stores data represented as key-value pairs. Compared to a map, it is more efficient for large data sets. This Java programming tutorial discusses Hashtable and HashMap data structures, their features and benefits, and how to work with them in Java.
Read: Best Tools for Remote Software Developers
What is a Hashtable in Java?
A Hashtable is a data structure used to preserve data represented as key-value pairs. Despite its similarities with a map, it is more efficient when dealing with huge data sets. This explains why a Hashtable is a great choice in applications where performance is important.
Hashtable is used for fast lookup of data, storing data in dynamic ways, or just storing it in compact ways, which makes it more efficient compared to other solutions, such as arrays or linked lists.
A Hashtable is often used in programming languages, such as Java, to store data in a way that is easy to retrieve. A Hashtable can store large amounts of data quickly and easily, making it ideal for use in applications where speed is important.
A Hashtable works by storing data in a table, with each piece of data having a unique key. You can retrieve data from a Hashtable using its key. Once you provide the key, you can get the corresponding value.
The code snippet that follows shows how you can create an empty Hashtable instance:
Hashtable<K, V> hashTable = new Hashtable<K, V>();
How Does Hashtable Work in Java?
Hashtable is an abstract class and has two subclasses that extend this class — HashMap and LinkedHashMap. The HashMap provides the set of elements stored in it, while the LinkedHashMap allows insertion of new items at any position.
What are the Benefits of Using Hashtable in Java?
Hashtable is one of the most efficient of all data structures as far as performance is concerned. You can take advantage of Hashtable for fast data storage and retrieval. Hashtable is also thread-safe making it an excellent choice for multithreaded applications where concurrency is essential.
When Should You Use Hashtable in Java?
A Hashtable is a data structure that stores information in key-value pairs. The key is required when retrieving items from a Hashtable. This can be advantageous if you have a lot of data and need to be able to quickly find specific items.
However, Hashtables are not well suited for storing data that needs to be sorted in any particular order. Additionally, because keys in a Hashtable must be unique, it is not possible to store duplicate keys in a Hashtable.
Overall, Hashtables are a good option for storing data when you need quick access to specific items and don’t mind if the data is unordered.
You can learn more about Hashing by reading our tutorial: Introduction to Hashing in Java.
How to Program Hashtable in Java
To create a Hashtable, programmers need to import the java.util.Hashtable package. Then, you can create a Hashtable object like this:
Hashtable hashTable = new Hashtable();
You can now add data represented as key-value pairs to the Hashtable instance. To do so, you will use the put() method, like this:
hashTable.put("key1", "value1"); hashTable.put("key2", "value2");
You can retrieve values from the Hashtable using the get() method, like so:
String str1 = hashTable.get("key1"); String str2 = hashTable.get("key2");
If you want to check if a key exists in the Hashtable, you can use the containsKey() method:
boolean containsKey = hashTable.containsKey("key1");
Finally, if you want to get a list of all the keys or values in the Hashtable, you can use the keySet() and values() methods:
Set keys = hashTable.keySet(); Collection values = hashTable.values();
How to Improve the Performance of Hashtable in Java?
Developers can use a different hashing algorithm to improve Hashtable’s performance. The default hashing algorithm used by Hashtable is known as Adler-32. However, there are other algorithms available that can be faster, such as Murmur3. To change the hashing algorithm used by Hashtable, you can use the setHashingAlgorithm() method.
Increasing the internal array size is another way to improve Hashtable’s performance. By default, Hashtable uses an array with a size of 16. The setSize() method allows programmers to increase this size. Performance will be improved because collisions will be fewer when the array is larger.
Finally, you can also consider using a different data structure altogether if performance is a major concern for you. For example, you could use a tree-based data structure, such as a red-black tree, instead of a Hashtable. Tree-based data structures tend to be much faster than Hashtables when it comes to lookup operations.
What is a HashMap in Java? How does it work?
Hashmap is a linked-list implementation of Map, where each element is assigned a unique integer hash code. An instance of HashMap contains a set of key-value pairs where the keys are instances of String and the values are instances of any Java serializable data type. The default storage mechanism used by HashMap is basically an array which is resized when the number of stored entries reaches a specified limit.
Since a hash code is used to store the key-value pair in the map using their hash codes, it means that two keys with the same hash code will end up in the same bucket and this can result in collisions. When there is a collision, HashMap uses its secondary storage to store the conflicting key-value pairs.
The code snippet that follows shows how you can create an empty HashMap instance in Java:
HashMap<K, V> hashMap = new HashMap<K, V>();
How to Program HashMap in Java
Refer to the code snippet shown below that shows how you can create an empty instance of a HashMap, insert data as key-value pairs to it and then display the data at the console window.
import java.io.*; import java.util.*; public class MyHashMapHashtableExample { public static void main(String args[]) { Map<Integer, String> hashMap = new HashMap<>(); hashMap.put(1, "A"); hashMap.put(2, "B"); hashMap.put(3, "C"); hashMap.put(4, "D"); hashMap.put(5, "E"); Hashtable<Integer, String> hashTable = new Hashtable<Integer, String>(hashMap); System.out.println(hashTable); } }
While the put method of the HashMap class can be used to insert items, the remove method can be used to delete items from the collection.
For example, the code snippet given below can be used to remove the item having the key as 3.
hashMap.remove(3);
Final Thoughts on Hashtable and HashMap in Java
A Hashtable can store large amounts of data quickly and easily, making it ideal for use in applications where performance is important. A collision occurs when two objects pertaining to the same Hashtable have the same hash code. A Hashtable is adept at avoiding such collisions using an array of lists.
Read more Java programming tutorials and software development guides.
|
https://www.developer.com/java/hashtable-hashmap-java/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Re: [squid-dev] FYI: the C++11 roadmap
ons 2014-11-05 klockan 13:31 +0100 skrev Kinkie: MAYBE this could be mitigated by providing RPMs for RHEL6/CentOS 6 that are built on a custom server with a recent gcc but older libraries? What do you guys think? Might work, but it means Squid will not get updated in EPEL or other repositories wich follows the RHEL release as upgrade of GCC is out of reach for any of them, so users will be forced to use our repositories for Squid. And it's a major pain for any administrator that needs to test a patch of change config options, as upgrading core components such as GCC to an out-of-repository version is out of question. Regards Henrik ___ squid-dev mailing list squid-dev@lists.squid-cache.org
Re: RFC: how to handle OS-specific configure options?
fre 2010-04-23 klockan 16:41 +0200 skrev Kinkie: It's probably time for another interim merge. +1
Re: squid 3.1 ICAP problem
fre 2010-04-23 klockan 16:43 +0100 skrev Dan Searle: 16:36:29.552442 IP localhost.1344 localhost.50566: P 1:195(194) ack 72 win 512 nop,nop,timestamp 62423291 62423291 0x: 4500 00f6 7ac7 4000 4006 c138 7f00 0001 e.@.@..8 0x0010: 7f00 0001 0540 c586 bdca fa3d bd99 b529 .@.=...) 0x0020: 8018 0200 feea 0101 080a 03b8 80fb 0x0030: 03b8 80fb 3230 3020 4f4b 0d0a 4461 7465 200.OK..Date THat's not a correct ICAP response status line.. Should be ICAP/1.0 200 OK Just as for HTTP ICAP responses always start with the ICAP protocol identifier version. RFC3507 4.3.3 Response Headers, first paragraph. Regards Henrik
Re: [PATCH] [RFC] Enforce separate http_port modes
tor 2010-04-22 klockan 00:25 + skrev Amos Jeffries: In addition it may make sense to be able to selectively enable tproxy spoofing independent of interception, which would also solve the above reservation. From TPROXYv4 the intercept options are mutually exclusive. Due to the nature of the NAT and TPROXY lookups. I was not talking about the options above, just the functionality of intercepting vs spoofing. The tproxy http_port option enables both functions (intercepting requests, and spofing outgoing requests). TPROXY in other modes has a wide set of possibilities and potential problems we will need to consider carefully before enabling. Basic problems is the same in all modes. For tproxy spoofing to work return traffic on forwarded requests need to find their way back to the right cache server, not to the reuesting client or another cache server. The available solutions to that problem differs slightly depending on how client traffic arrives on the cache servers. Regards Henrik
Re: /bzr/squid3/trunk/ r10399: Back out the tweak on rev10398.
tor 2010-04-22 klockan 17:13 +1200 skrev Amos Jeffries: What hudson was showing was build always exiting at the first of these files. Even in the awk ok - SUCCESS case. That needs to be figured out before exit 1 can go back on. right. The sequence of || is a bit ambiguous. Needs to be explicitly grouped as in my first variant to get the desired result. awk || (rm exit) alternatively awk || (rm ; exit) or using explicit flow control if ! awk ; then rm ; exit; fi Regards Henrik
Re: /bzr/squid3/trunk/ r10399: Back out the tweak on rev10398.
mån 2010-04-19 klockan 23:25 + skrev Amos Jeffries: - $(AWK) -f $(srcdir)/mk-globals-c.awk $(srcdir)/globals.h $@ || $(RM) -f $@ exit 1 + $(AWK) -f $(srcdir)/mk-globals-c.awk $(srcdir)/globals.h $@ || $(RM) -f $@ Why? I had bound the exit(1) to success of the rm command, not failure of the awk. When we get time to sort out the shell nesting of automake it needs to be added back in to ensure the make runs exit on awk failure. Sure you had. awk ok - SUCCESS awk fail - rm rm fail - FAILURE rm ok - exit 1 == FAILURE exit in this context is just to make the shell command return an error so make stops. rm failing is in itself an error and no explicit exit with an error is needed in that case. If you absolutely want exit to be called in both cases then group rm exit $(AWK) -f $(srcdir)/mk-globals-c.awk $(srcdir)/globals.h $@ || ($(RM) -f $@ ;exit 1) but I prefer your original $(AWK) -f $(srcdir)/mk-globals-c.awk $(srcdir)/globals.h $@ || $(RM) -f $@ exit 1 the effect of both is the same, and your original is both clearer and more efficient. Regards Henrik
Re: [PATCH] [RFC] Enforce separate http_port modes
ons 2010-04-21 klockan 02:44 + skrev Amos Jeffries: It alters documentation to call accel, tproxy, intercept, and sslbump options mode flags since they determine the overall code paths which traffic received is handled by. +1, but with a slight reservation on tproxy as technically there is nothing that stops tproxy + accel from being combined. Current implementation do not mix well however. In addition it may make sense to be able to selectively enable tproxy spoofing independent of interception, which would also solve the above reservation. Regards Henrik
Re: Squid 2.7
fre 2010-04-09 klockan 09:23 -0400 skrev Kulkarni, Hemant V: I am trying to understand squid 2.7 stable7 code base. On the website I see everything pertaining to squid 3. Is there any link or archive where I can get more information and understand the squid 2.7 code base ? The little documentation there is on the Squid-2 codebase can be found in doc/programmers-guide/ in the source distribution. For additional help just ask on squid-...@squid-cache.org. Regards Henrik
Re: /bzr/squid3/trunk/ r10399: Back out the tweak on rev10398.
tor 2010-04-15 klockan 22:19 +1200 skrev Amos Jeffries: revno: 10399 committer: Amos Jeffries squ...@treenet.co.nz branch nick: trunk timestamp: Thu 2010-04-15 22:19:26 +1200 message: Back out the tweak on rev10398. globals.cc: globals.h mk-globals-c.awk - $(AWK) -f $(srcdir)/mk-globals-c.awk $(srcdir)/globals.h $@ || $(RM) -f $@ exit 1 + $(AWK) -f $(srcdir)/mk-globals-c.awk $(srcdir)/globals.h $@ || $(RM) -f $@ Why? Regards Henrik
Re: Upgrade repository format for trunk?..? Regards Henrik
Re: /bzr/squid3/trunk/ r10322: Bug 2873: undefined symbol rint
Every AC_CHECK_LIB where we look for main needs to be redone to look for some sane function. See bug for details. ons 2010-03-10 klockan 20:59 +1300 skrev Amos Jeffries: revno: 10322 committer: Amos Jeffries squ...@treenet.co.nz branch nick: trunk timestamp: Wed 2010-03-10 20:59:21 +1300 message: Bug 2873: undefined symbol rint Detect math library properly based on rint synbol we need. On Solaris at least main symbol does not exist. modified: configure.in src/Common.am vanligt textdokument-bilaga (r10322.diff) === modified file 'configure.in' --- a/configure.in2010-02-03 12:36:21 + +++ b/configure.in2010-03-10 07:59:21 + @@ -2973,14 +2973,22 @@ fi AC_CHECK_LIB(regex, main, [REGEXLIB=-lregex]) +MATHLIB= case $host_os in mingw|mingw32) AC_MSG_NOTICE([Use MSVCRT for math functions.]) ;; *) - AC_CHECK_LIB(m, main) + AC_SEARCH_LIBS([rint],[m],[ + case $ac_cv_search_rint in + no*) + ;; + *) + MATHLIB=$ac_cv_search_rint + esac ]) ;; esac +AC_SUBST(MATHLIB) dnl Enable IPv6 support AC_MSG_CHECKING([whether to enable IPv6]) === modified file 'src/Common.am' --- a/src/Common.am 2009-11-21 05:29:45 + +++ b/src/Common.am 2010-03-10 07:59:21 + @@ -29,6 +29,8 @@ $(OBJS): $(top_srcdir)/include/version.h $(top_builddir)/include/autoconf.h ## Because compatibility is almost universal. And the link order is important. +## NP: libmisc util.cc depends on rint from math library COMPAT_LIB = \ -L$(top_builddir)/lib -lmiscutil \ - $(top_builddir)/compat/libcompat.la + $(top_builddir)/compat/libcompat.la \ + $(MATHLIB)
[patch] Disable ufsdump compilation due to linkage issues.
As reported some weeks ago ufsdump fails to link on the upcoming Fedora 13 release due to linking issues, and as reported by Amos the same linking issues is now also seen on Debian since somewhere between March 2 - 5. While investigating this I found the following conclusions - We are not actually installing ufsdump - The dependencies between the Squid libraries are very non-obvious, with libraries depending on plain object files and other strange things. - The ufsdump linkage issues is somehow triggered by the libraries including objects needing symbols from objects not included in that link - Those failing library objects are not actually needed by ufsdump. Linking succeeds if repeatedly removing each reported failing object from the squid libraries. - If the libraries were shared libraries then linking would fail on all systems As we are not installing ufsdump I propose we take ufsdump out from the default compilation until these issues can be better understood. The attached patch does just that. Regards Henrik diff -up squid-3.1.0.16/src/Makefile.am.noufsdump squid-3.1.0.16/src/Makefile.am --- squid-3.1.0.16/src/Makefile.am.noufsdump 2010-02-18 23:14:16.0 +0100 +++ squid-3.1.0.16/src/Makefile.am 2010-02-18 23:15:51.0 +0100 @@ -172,14 +172,14 @@ EXTRA_PROGRAMS = \ -up squid-3.1.0.16/src/Makefile.in.noufsdump squid-3.1.0.16/src/Makefile.in --- squid-3.1.0.16/src/Makefile.in.noufsdump 2010-02-18 23:12:26.0 +0100 +++ squid-3.1.0.16/src/Makefile.in 2010-02-18 23:13:16.0 +0100 @@ -57,8 +57,8 @@ check_PROGRAMS = tests/testAuth$(EXEEXT))
Re: negotiate auth with fallback to other schemes
fre 2010-03-05 klockan 20:44 + skrev Markus Moeller: I don't understand this part. Usually the kdc is on AD so how can NTLM work and Kerberos not ? The NTLM client just needs the local computer configuration + credentials entered interactively by the user. All communication with the AD is indirect via the proxy. The client do not need any form of ticked before trying to authenticate via NTLM, just the username + domain + password. For similar reasons NTLM also do not have any protection from mitm session theft. Meaning that the auth exchange done to the proxy may just as well be used by a mitm attacker to authenticate as that client to any server in the network for any purpose. Regards Henrik
Re: [REVIEW] Carefully verify digest responses
tis 2010-03-02 klockan 18:06 +0100 skrev Henrik Nordstrom: Comments are very welcome while I validate the parser changes. Validation completed and committed to trunk. Forgot to mantion the relevant bug reports in commit messages. Parser: 2845 Stale: 2367 both changes needs to get merged back all way to 3.0. Regards Henrik
[MERGE] New digest scheme helper protocol
The current digest scheme helper protocol have great issues in how to handle quote charactes. An easy way to solve this is to switch protocol to a protocol similar to what we already use for basic helpers using url escaped strings urlescape(user) SPACE urlescape(realm) NEWLINE Note: The reason why realm is urlescaped is to allow for future expansions of the protocol. The default is still the old quoted form as helpers have not yet been converted over. But once the helpers have been converted default should change to urlescaped form. # Bazaar merge directive format 2 (Bazaar 0.90) # revision_id: hen...@henriknordstrom.net-20100306200338-\ # py3969agu3ccjdeh # target_branch: /home/henrik/SRC/squid/trunk/ # testament_sha1: b4225d56b5d7245eacec5a2406019e692aa00cce # timestamp: 2010-03-06 21:03:58 +0100 # base_revision_id: hen...@henriknordstrom.net-20100306194302-\ # eknq7yvpt5ygzkdz # # Begin patch === modified file 'src/auth/digest/auth_digest.cc' --- src/auth/digest/auth_digest.cc 2010-03-06 14:47:46 + +++ src/auth/digest/auth_digest.cc 2010-03-06 19:48:42 + @@ -50,6 +50,7 @@ #include SquidTime.h /* TODO don't include this */ #include digestScheme.h +#include rfc1738.h /* Digest Scheme */ @@ -935,7 +936,9 @@ safe_free(digestAuthRealm); } -AuthDigestConfig::AuthDigestConfig() : authenticateChildren(20) +AuthDigestConfig::AuthDigestConfig() : + authenticateChildren(20), + helperProtocol(DIGEST_HELPER_PROTOCOL_QUOTEDSTRING) { /* TODO: move into initialisation list */ /* 5 minutes */ @@ -978,6 +981,17 @@ parse_onoff(PostWorkaround); } else if (strcasecmp(param_str, utf8) == 0) { parse_onoff(utf8); +} else if (strcasecmp(param_str, protocol) == 0) { + char *token = NULL; + parse_eol(token); + if (strcmp(token, quoted)) { + helperProtocol = DIGEST_HELPER_PROTOCOL_QUOTEDSTRING; + } else if (strcmp(token, urlescaped)) { + helperProtocol = DIGEST_HELPER_PROTOCOL_URLESCAPE; + } else { + debugs(29, 0, unrecognised digest auth helper protocol ' token '); + } + safe_free(token); } else { debugs(29, 0, unrecognised digest auth scheme parameter ' param_str '); } @@ -1237,10 +1251,10 @@ } /* Sanity check of the username. - * can not be allowed in usernames until * the digest helper protocol - * have been redone + * can not be allowed in usernames when using the old quotedstring + * helper protocol */ -if (strchr(username, '')) { +if (helperProtocol == DIGEST_HELPER_PROTOCOL_QUOTEDSTRING strchr(username, '')) { debugs(29, 2, authenticateDigestDecode: Unacceptable username ' username '); return authDigestLogUsername(username, digest_request); } @@ -1390,7 +1404,6 @@ AuthDigestUserRequest::module_start(RH * handler, void *data) { DigestAuthenticateStateData *r = NULL; -char buf[8192]; digest_user_h *digest_user; assert(user()-auth_type == AUTH_DIGEST); digest_user = dynamic_cast digest_user_h * (user()); @@ -1402,20 +1415,35 @@ return; } + r = cbdataAlloc(DigestAuthenticateStateData); r-handler = handler; r-data = cbdataReference(data); r-auth_user_request = this; AUTHUSERREQUESTLOCK(r-auth_user_request, r); + +const char *username = digest_user-username(); +char utf8str[1024]; if (digestConfig.utf8) { -char userstr[1024]; -latin1_to_utf8(userstr, sizeof(userstr), digest_user-username()); -snprintf(buf, 8192, \%s\:\%s\\n, userstr, realm); -} else { -snprintf(buf, 8192, \%s\:\%s\\n, digest_user-username(), realm); -} - -helperSubmit(digestauthenticators, buf, authenticateDigestHandleReply, r); +latin1_to_utf8(utf8str, sizeof(utf8str), username); + username = utf8str; +} + +MemBuf mb; + +mb.init(); +switch(digestConfig.helperProtocol) { +case AuthDigestConfig::DIGEST_HELPER_PROTOCOL_QUOTEDSTRING: + mb.Printf(\%s\:\%s\\n, username, realm); + break; +case AuthDigestConfig::DIGEST_HELPER_PROTOCOL_URLESCAPE: + mb.Printf(%s , rfc1738_escape(username)); + mb.Printf(%s\n, rfc1738_escape(realm)); + break; +} + +helperSubmit(digestauthenticators, mb.buf, authenticateDigestHandleReply, r); +mb.clean(); } DigestUser::DigestUser (AuthConfig *aConfig) : AuthUser (aConfig), HA1created (0) === modified file 'src/auth/digest/auth_digest.h' --- src/auth/digest/auth_digest.h 2009-12-16 03:46:59 + +++ src/auth/digest/auth_digest.h 2010-03-06 16:09:24 + @@ -163,6 +163,10 @@ int CheckNonceCount; int PostWorkaround; int utf8; +enum { + DIGEST_HELPER_PROTOCOL_QUOTEDSTRING, + DIGEST_HELPER_PROTOCOL_URLESCAPE +} helperProtocol; }; typedef class AuthDigestConfig auth_digest_config; === modified file 'src/cf.data.pre' --- src/cf.data.pre 2010-03-06 19:43:02 + +++ src/cf.data.pre 2010-03-06 20:03:38 + @@ -181,13 +181,18 @@ === Parameters for the digest scheme
Re: [REVIEW] Carefully verify digest responses
tor 2010-03-04 klockan 20:57 -0700 skrev Alex Rousskov: Please consider adding a cppunit test to check a few common and corner parsing cases. Unfortunately the existing auth tests we have do not come close to touching actual request parsing/handling, and trying to get things in shape to the level that this can be exercised with cppunit is a little heaver than I have time for today. Regards Henrik
Re: [PATCH] HTTP/1.1 to servers
fre 2010-03-05 klockan 23:08 +1300 skrev Amos Jeffries: Sending HTTP/1.1 in all version details sent to peers and servers. Passes the basic tests I've thrown at it. If anyone can think of some please do. The src/client_side.cc change looks wrong to me.. should not overwrite the version sent by the client when parsing the request headers. upgrade should only be done in http.cc when making the outgoing request, and client_side_reply.cc when making the outgoing response, between there the received version should be preserved as much as possible. Regards Henrik
Re: [REVIEW] Carefully verify digest responses
ons 2010-03-03 klockan 22:19 +1300 skrev Amos Jeffries: This debugs seems to have incorrect output for the test being done: + /* check cnonce */ + if (!digest_request-cnonce || digest_request-cnonce[0] == '\0') { + debugs(29, 2, authenticateDigestDecode: Missing URI field); Thanks. Fixed. Regards Henrik
Re: squid_kerb_auth logging patch
Reviewed and applied. tis 2010-02-09 klockan 19:20 + skrev Markus Moeller: Hi Amos, Here are patched for squid 3.1 and squid 3-head to add ERROR, WARNING, etc to the logging messages. Regards Markus
Re: SMB help.. Regards Henrik
Re: Assertion in clientProcessBody
tis 2009-12-08 klockan 13:34 +1100 skrev Mark Nottingham: Any thoughts here? Should this really be =, or should clientProcessBody never get a 0 size_left? It's done when size_left == 0, and no further body processing handler shoud be active on this request at that time. Any data on the connection at this time is either surplus data (HTTP violation) or a pipelined request waiting to be processed. If you look a little further down (about one screen) in clientProcessBody you'll also see that the body reader gets unregistered when processing reaches 0. But it would not be harmful to make clientProcessBody gracefully handle size_left == 0 I guess. A backtrace would be nice. Regards Henrik
Re: R: obsoleting nextstep?
lör 2009-12-05 klockan 00:32 +1300 skrev Amos Jeffries: +1. Last mentioned by that name in a press release 14 years ago. Much more talk and use of Squid on the various Linux flavors built for the PS3. I believe they come under GNU/Linux tags. No idea what newsos was. Searching... Sony M68K workstation in the 80s and early 90s. Doubt anyone have been running Squid on those since early 90s if even then... Gegarding ps3, otheros is most often Linux. FreeBSD also exists. Fairly straightforward platform to us (powerpc CPU, and some co-processors we don't touch). Again I highly doubt anyone is running Squid on those, but should share the same properties as IBM Power mainframes running Linux of FreeBSD so.. It's not a big issue if we happens to delete one or two oldish platforms too many. If there is someone running on that platform they usually fix it themselves (mostly knowledgeable geeks) and come back to us. So there is no pressing need to preserve old platforms when touching compatibility code. But it's also not worth much to discuss old platforms viability. My proposal regarding these in general is to touch them as little as possible. If encountering such code when shuffling things around for readability then drop them in the shuffle if easier than relocating the code. Regards Henrik
Re: Helper children defaults
tor 2009-11-26 klockan 17:35 +1300 skrev Amos Jeffries: I'm making the helper children configurable for on-demand startup so a minimal set can be started and allowed to grow up to a max as determined by traffic load. Growth is triggered by helpers dying or requests needing to be queued when all helpers are full. Drawback is that this fork can be quite expensive on larger squids, and then momentarily stops all forwarding under peak load. But overloaded helpers is generally worse so.. Ideally the concurrent protocol should be used as much as possible, avoiding this.. * start 1 child synchronously on start/reconfigure * new child helpers as needed in bunches of 2 * maximum running kept capped at 5. ? I would increase the max to 20 or so. This affects helpers for auth_param, url_rewrite, and external_acl_type. Why not dnsserver? Regards Henrik
Re: [bundle] helper-mux feature
tor 2009-11-26 klockan 10:43 +0100 skrev Kinkie: It's a perl helper multiplexer: it talks the multi-slot helper dialect to squid, and the single-slot variant to the helpers, starting them up lazily and handling possible helper crashes. Nice! Since squid aggressively tries to reuse the some helpers, setting a high helpers number in conjunction with this has the effect of allowing on-demand-startup of helpers. Interesting twist on that problem ;-) See no significant issues with doing things that way. Sure it's a little added overhead, but marginally so compared to Squid maintaining all those helpers. Regards Henrik
Re: RFC: obsoleting nextstep?
ons 2009-11-25 klockan 12:49 +0100 skrev Kinkie: Hi all, just like SunOS: NextStep's last version (3.3) was released in 1995, which means 15 years before the expected release date of 3.2 . How about dropping support for it? +1 I think. Or just ignore it.. Regards Henrik
Re: squid-smp: synchronization issue solutions
sön 2009-11-22 klockan 00:12 +1300 skrev Amos Jeffries: I think we can open the doors earlier than after that. I'm happy with an approach that would see the smaller units of Squid growing in parallelism to encompass two full cores. And I have a more careful opinion. Introducing threads in the current Squid core processing is very non-trivial. This due to the relatively high amount of shared data with no access protection. We already have sufficient nightmares from data access synchronization issues in the current non-threaded design, and trying to synchronize access in a threaded operations is many orders of magnitude more complex. The day the code base is cleaned up to the level that one can actually assess what data is being accessed where threads may be a viable discussion, but as things are today it's almost impossible to judge what data will be directly or indirectly accessed by any larger operation. Using threads for micro operations will not help us. The overhead involved in scheduling an operation to a thread is comparably large to most operations we are performing, and if adding to this the amount of synchronization needed to shield the data accessed by that operation then the overhead will in nearly all cases by far out weight the actual processing time of the micro operations only resulting in a net loss of performance. There is some isolated cases I can think of like SSL handshake negotiation where actual processing may be significant, but at the general level I don't see many operations which would be candidates for micro threading. Using threads for isolated things like disk I/O is one thing. The code running in those threads are very very isolated and limited in what it's allowed to do (may only access the data given to them, may NOT allocate new data or look up any other global data), but is still heavily penalized from synchronization overhead. Further the only reason why we have the threaded I/O model is because Posix AIO do not provide a rich enough interface, missing open/close operations which may both block for significant amount of time. So we had to implement our own alternative having open/close operations. If you look closely at the threads I/O code you will see that it goes to quite great lengths to isolate the threads from the main code, with obvious performance drawbacks. The initial code even went much further in isolation, but core changes have over time provided a somewhat more suitable environment for some of those operations. For the same reasons I don't see OpenMP as fitting for the problem scope we have. The strength of OpenMP is to parallize CPU intensive operations of the code where those regions is well defined in what data they access, not to deal with a large scale of concurrent operations with access to unknown amounts of shared data. Trying to thread the Squid core engine is in many ways similar to the problems kernel developers have had to fight in making the OS kernels multithreaded, except that we don't even have threads of execution (the OS developers at least had processes). If trying to do the same with the Squid code then we would need an approach like the following: 1. Create a big Squid main lock, always held except for audited regions known to use more fine grained locking. 2. Set up N threads of executing, all initially fighting for that big main lock in each operation. 3. Gradually work over the code identify areas where that big lock is not needed to be held, transition over to more fine grained locking. Starting at the main loops and work down from there. This is not a path I favor for the Squid code. It's a transition which is larger than the Squid-3 transition, and which have even bigger negative impacts on performance until most of the work have been completed. Another alternative is to start on Squid-4, rewriting the code base completely from scratch starting at a parallel design and then plug in any pieces that can be rescued from earlier Squid generations if any. But for obvious staffing reasons this is an approach I do not recommend in this project. It's effectively starting another project, with very little shared with the Squid we have today. For these reasons I am more in favor for multi-process approaches. The amount of work needed for making Squid multi-process capable is fairly limited and mainly circulates around the cache index and a couple of other areas that need to be shared for proper operation. We can fully parallelize Squid today at process level if disabling persistent shared cache + digest auth, and this is done by many users already. Squid-2 can even do it on the same http_port, letting the OS schedule connections to the available Squid processes. Regards Henrik
Re: squid-smp: synchronization issue solutions
ons 2009-11-25 klockan 00:55 +1300 skrev Amos Jeffries: I kind of mean that by the smaller units. I'm thinking primarily here of the internal DNS. It's API is very isolated from the work. And also a good example of where the CPU usage is negligible. And no, it's not really that isolated. It's allocating data for the response which is then handed to the caller, and modified in other parts of the code via ipcache.. But yes, it's a good example of where one can try scheduling the processing on a separate thread to experiment with such model. Regards Henrik
Re: Server Name Indication
fre 2009-11-20 klockan 01:28 +0100 skrev Craig: do you plan to implement Server Name Indication into squid? I know the caveats of browser compatibility, but in a year or two, the percentage of people using FF1.x and IE6 will surely decrease. Getting SNI implemented is interesting to the project, but at this time there is no current developer actively looking into the problem. Squid is an community driven project. As such what features get implemented is very much dependent on what the community contributes to the project in terms of developer time. Regards Henrik
Squid-3.1 release?
What is the status for 3.1? At a minimum I would say it's about time for a 3.1.0.15 release, collecting up what has been done so far. The bigger question, what is blocking 3.1.1? (and moving 3.0 into previous releases) Regards Henrik
Re: libresolv and freebsd
Quite likely we don't even need libresolv in the majority of cases, on pretty much all platforms. mån 2009-11-16 klockan 12:07 +0100 skrev Kinkie: In configure.in there is something like this: if test $ac_cv_lib_bind_gethostbyname = no ; then case $host in i386-*-freebsd*) AC_MSG_NOTICE([skipping libresolv checks for $host]) ;; *) AC_CHECK_LIB(resolv, main) ;; esac fi I fail to see what's the point in skipping this test. I'd expect to get the same result with a simple(r) AC_SEARCH_LIBS([gethostbyname],[bind resolv]) See my other request on moving form AC_CHECK_LIB to AC_SEARCH_LIBS. Thanks for any input.
Re: /bzr/squid3/trunk/ r10118: Portability fix: non-GNU diff is not guarranteed to handle the -q switch
fre 2009-11-13 klockan 14:12 +0100 skrev Francesco Chemolli: Portability fix: non-GNU diff is not guarranteed to handle the -q switch Heh.. really should use cmp instead.. Regards Henrik
Re: [RFC] Libraries usage in configure.in and Makefiles
ons 2009-11-11 klockan 18:38 +1300 skrev Amos Jeffries: Henriks recent commit to remove one of these on grounds of being old has highlighted a need to document this and perhapse bring you all in on making the changes. Haven't removed any, just generalized one to apply to another lib needing the same conditions.. A: The squid binary is topping 3.5MB in footprint with many of the small tool stopping 500KB each. A small but substantial amount of it is libraries inked but unused. Particularly in the helpers. Unused libraries uses just a tiny bit of memory for the link table, at least if built properly (PIC). With some of the libraries being bunched up when there is a strong link between, ie -lresolv -lnsl in @RESOLVLIB@ Does anyone disagree? Not with the principle. What I have been working on is mainly that lots of these have also found their ways into _DEPENDENCIES rules which is a no-no. System libs must only be added to _LDADD rules. Does anyone have other examples of libraries which _need_ to include other libraries like -lresolv/-lnsl do for Solaris? -lldap need -llber in some LDAP implementations. But we already deal with that. OpenSSL: -lssl -lcrypto -ldl -lnsl is a bit special as it's needed in pretty much every binary built and is why it is in XTRA_LIBS. -lcap should move to it's own variable. we should probably skip -lbsd on glibc systems. But need it on Solaris systems and possibly others. Does not hurt on Linux. -lm is probably needed just about everywhere. -ldl requirements are very fuzzy and not easy to detect when wrong. On Linux the proble only shows up when doing a static build as the dynamic linker handles chain dependencies automatically. I'm not terribly fussed with Henriks change because the DISK IO stuff is heavily interlinked. It's only an extra build dependency for every test run. But it grates against my ideologies to see the AIO specific API testers needing to link against the pthreads libraries and vice-versa. You are welcome to split the DISK_OS_LIBS variable into AIOLIB and PTHREADLIB if you prefer. Have no attachment to it, was just that it was easier to extend rename AIOLIB than to add yet another variable needed at the same places. Just keep them out of _DEPENDENCIES rules, and make sure to add it where needed. Only makes a difference for the testsuite programs. Regards Henrik
Re: /bzr/squid3/trunk/ r10110: Style Makefile.am to use instead of @AUTOMAKEVAR
Sorry, bash ate part of that message.. Correct text: Style Makefile.am to use $(AUTOMAKEVAR) instead of @AUTOMAKEVAR@ @AUTOMAKEVAR@ is troublesome when used in \ constructs as it may expand to empty and the last line in a \ construct must not be empty or some make versions will fail. thankfully automake adds all variables for us, so using $(AUTOMAKEVAR) is preferred. ons 2009-11-11 klockan 12:44 +0100 skrev Henrik Nordstrom: revno: 10110 committer: Henrik Nordstrom hen...@henriknordstrom.net branch nick: trunk timestamp: Wed 2009-11-11 12:44:58 +0100 message: Style Makefile.am to use instead of @AUTOMAKEVAR @AUTOMAKEVAR@ is troublesome when used in \ constructs as it may expand to empty and the last line in a \ construct must not be empty or some make versions will fail. thankfully automake adds all variables for us, so using is preferred. modified: scripts/srcformat.sh vanligt textdokument-bilaga (r10110.diff) === modified file 'scripts/srcformat.sh' --- a/scripts/srcformat.sh2009-08-23 03:08:22 + +++ b/scripts/srcformat.sh2009-11-11 11:44:58 + @@ -36,8 +36,16 @@ else rm $FILENAME.astylebak fi - continue; + continue fi + ;; + +Makefile.am) + + perl -i -p -e 's/@([A-Z0-9_]+)@/\$($1)/g' ${FILENAME} ${FILENAME}.styled + mv ${FILENAME}.styled ${FILENAME} + ;; + esac if test -d $FILENAME ; then
STORE_META_OBJSIZE
(12.57.26) amosjeffries: hno: next on my list is STORE_META_OBJSIZE. safe to back-port to 3.1? 3.0? (12.57.51) amosjeffries: useful to do so? It's safe, but perhaps not very useful. Depends on if Alex will need it in 3.1 or just 3.2 I guess. Regards Henrik
Re: Issue compiling last 3.1 squid in 64-bit platform
can you try (snapshot of current sources in bzr) this is fixed so that it builds for me on Fedora 11. tis 2009-11-10 klockan 09:14 -0200 skrev rena...@flash.net.br: If you need me to do any type of specific test or use any compile options, please let me know and I would be glad to help! Thanks again for your effort!
tis 2009-11-10 klockan 10:28 -0200 skrev rena...@flash.net.br: make[3]: *** No rule to make target `-lpthread', needed by `all-am'. Stop. Is your Fedora 11 64-bit? I will install Ubuntu-64 and try to compile it in the same server. As soon as I have the results I'll post back to you! It is 64-bit. But the problem seem to be dependent on something else. We saw similar problems with trunk some time ago where it failed on my machine but worked on all the machines in the build farm.. can you do grep -- -lpthread src/Makefile Regards Henrik
Re: /bzr/squid3/trunk/ r10096: Bug 2778: fix linking issues using SunCC
ons 2009-11-11 klockan 10:38 +1300 skrev Amos Jeffries: Worth a query to squid-users. Any OS which is so old it does not support the auto-tools and libraries we now need is a candidate. I'm thinking NextStep may be one more. Though I'm inclined to keep as much support as possible until we have solid evidence the OS is not able to ever be build Squid. There probably is one or two that may try running Squid on something that once upon a time was Solaris 1.x (known as SunOS 4.x before the name switch to Solaris for everything). But in general terms that OS is pretty much distinct these days. Been declared end-of-life for more than a decade now, with last release in 1994. autotools has never been part of SunOS for that matter, always an addon. And someone patient enough can get up to date autotools + gcc + whatever ontop of SunOS 4.x and build Squid. Question is will anyone really do that? Regards Henrik
Re: Issue compiling last 3.1 squid in 64-bit platform
Should be fixed in trunk now I hope.. Can you try applying the patch from ontop of the tree you downloaded before: note: you need to run bootstrap.sh after patching. tis 2009-11-10 klockan 23:43 +0100 skrev Henrik Nordstrom:
tis 2009-11-10 klockan 18:23 +1300 skrev Amos Jeffries: make[3]: *** No rule to make target `-lpthread', needed by `all-am'. This is the XTRA_LIBS confusion currently being fixed up in trunk. XTRA_LIBX must only be added in LDADD rules, not the CUSTOM_LIBS which is also a dependency.. Regards Henrik
Re: question about submitting patch
You are missing the openssl development package, usually openssl-devel or libssl-dev depending on OS flavor. Patches are submitted as an unified diff attached to a squid-dev message, preferably with [PATCH] in the subject to make it stick out from the other discussions.. ons 2009-11-04 klockan 17:23 -0500 skrev Matthew Morgan: Ok, I think I've got the kinks worked out regarding setting range_offset_limit per a pattern. I've done a decent bit of testing, and it seems to be working as intended. I did added a file to the source tree, and I'm pretty sure I've updated Makefile.am properly. I tried to do a ./test-builds, but it fails identically in my test repository and in trunk, in areas of squid I didn't touch. I guess HEAD doesn't always pass? It may be that I don't have some headers that it's looking for. Here's the output of the test-builds script: TESTING: layer-00-bootstrap BUILD: .././test-suite/buildtests/layer-00-bootstrap.opts TESTING: layer-00-default BUILD: .././test-suite/buildtests/layer-00-default.opts ../../../src/ssl_support.h:55: error: expected constructor, destructor, or type conversion before ‘*’ token ../../../src/ssl_support.h:58: error: expected constructor, destructor, or type conversion before ‘*’ token ../../../src/ssl_support.h:71: error: ‘SSL’ was not declared in this scope ../../../src/ssl_support.h:71: error: ‘ssl’ was not declared in this scope ../../../src/ssl_support.h:74: error: typedef ‘SSLGETATTRIBUTE’ is initialized (use __typeof__ instead) ../../../src/ssl_support.h:74: error: ‘SSL’ was not declared in this scope ../../../src/ssl_support.h:74: error: expected primary-expression before ‘,’ token ../../../src/ssl_support.h:74: error: expected primary-expression before ‘const’ ../../../src/ssl_support.h:77: error: ‘SSLGETATTRIBUTE’ does not name a type ../../../src/ssl_support.h:80: error: ‘SSLGETATTRIBUTE’ does not name a type ../../../src/ssl_support.h:83: error: ‘SSL’ was not declared in this scope ../../../src/ssl_support.h:83: error: ‘ssl’ was not declared in this scope ../../../src/ssl_support.h:86: error: ‘SSL’ was not declared in this scope ../../../src/ssl_support.h:86: error: ‘ssl’ was not declared in this scope ./../../../src/acl/CertificateData.h:45: error: ‘SSL’ was not declared in this scope ./../../.. make[5]: *** [testHeaders] Error 1 make[4]: *** [check-am] Error 2 make[3]: *** [check-recursive] Error 1 make[2]: *** [check] Error 2 make[1]: *** [check-recursive] Error 1 make: *** [distcheck] Error 2 Build Failed. Last log lines are: ./../../.. distcc[31643] ERROR: compile ./testHeaderDeps_CertificateData.cc on localhost failed make[5]: *** [testHeaders] Error 1 make[5]: Leaving directory `/home/lytithwyn/source/squid/trunk/btlayer-00-default/squid-3.HEAD-BZR/_build/src/acl' make[4]: *** [check-am] Error 2 make[4]: Leaving directory `/home/lytithwyn/source/squid/trunk/btlayer-00-default/squid-3.HEAD-BZR/_build/src/acl' make[3]: *** [check-recursive] Error 1 make[3]: Leaving directory `/home/lytithwyn/source/squid/trunk/btlayer-00-default/squid-3.HEAD-BZR/_build/src' make[2]: *** [check] Error 2 make[2]: Leaving directory `/home/lytithwyn/source/squid/trunk/btlayer-00-default/squid-3.HEAD-BZR/_build/src' make[1]: *** [check-recursive] Error 1 make[1]: Leaving directory `/home/lytithwyn/source/squid/trunk/btlayer-00-default/squid-3.HEAD-BZR/_build' make: *** [distcheck] Error 2 buildtest.sh result is 2 Should I go ahead and follow the patch submission instructions on, or is there something I should check first?
Re: /bzr/squid3/trunk/ r10080: Portability fix: __FUNCTION__ is not available on all preprocessors.
ons 2009-11-04 klockan 17:20 +0100 skrev Francesco Chemolli: Portability fix: __FUNCTION__ is not available on all preprocessors. +#ifdef __FUNCTION__ +#define _SQUID__FUNCTION__ __FUNCTION__ Do this really work? __FUNCTION__ is not a preprocessor symbol, it's a magic compiler variable. Regards Henrik
Re: gcc -pipe
tis 2009-11-03 klockan 14:59 +0100 skrev Kinkie: Performance gain is probably not much, but it won't hurt either :) Some says it hurts, but that's probably in low-memory conditions. :) OK. I'll give it a shot. Will make TPROXY require libcap-2.09 or later. At what version was libcap fixed for the issue we test in the magic sys/capability.h test case? Comment says libcap2 fixed, so I guess we no longer need this? Regards Henrik
Re: [MERGE] Use libcap instead of direct linux capability syscalls
ons 2009-10-28 klockan 01:35 +1300 skrev Amos Jeffries: Not entirely sure what version. I think the Gentoo 2.15 or 2.16 is fixed. My Ubuntu 2.11 is broken by the test results of last build I ran. Ok. The configure test was broken however, always reporting failure... I think we need to keep the magic voodoo for a while longer. It's still there. Regards Henrik
Re: compute swap_file_sz before packing it
tis 2009-10-27 klockan 13:43 -0600 skrev Alex Rousskov: Hi Henrik, Can you explain what you mean by just ignore above? It is kind of difficult for me to ignore the only code that seems to supply the info Rock store needs. Do you mean we should ultimately remove STORE_META_STD from Squid, replacing all its current uses with STORE_META_OBJSIZE? The object size field in STORE_META_STD should be ignored. Got broken many years ago (1997 or so), and should be recalculated by using STORE_META_OBJSIZE or alternatively the on-disk object size. Moreover, STORE_META_OBJSIZE has the following comment attached to its declaration: not implemented, squid26 compatibility and appears to be unused... Right.. should be forward ported. [attached] Neither approach works for Rock store because Rock store does not have a swap state file like COSS and does not use individual files like UFS. That is why it has to rely on the file size information supplied by the core. Perhaps there is a better way of getting that information, but I do not know it. STORE_META_OBJSIZE is the object size (if known) not including TLV headers, and is generally what you need to know in order to access the object. Long term objects should be split in TLV + HTTP Headers (probably part of TLV) + Content, but that's another topic.. Actual file storage size is more a business of the cache_dir than the core.. +// so that storeSwapMetaBuild/Pack can pack corrent swap_file_sz +swap_file_sz = objectLen() + mem_obj-swap_hdr_sz; + objectLen() MAY be -1 here... Regards Henrik === modified file 'src/StoreMeta.h' --- src/StoreMeta.h 2009-01-21 03:47:47 + +++ src/StoreMeta.h 2009-08-27 12:35:36 + @@ -127,10 +127,6 @@ */ STORE_META_STD_LFS, -/** - \deprecated - * Object size, not implemented, squid26 compatibility - */ STORE_META_OBJSIZE, STORE_META_STOREURL, /* the store url, if different to the normal URL */ === modified file 'src/store_swapmeta.cc' --- src/store_swapmeta.cc 2009-01-21 03:47:47 + +++ src/store_swapmeta.cc 2009-08-27 12:35:36 + @@ -61,6 +61,7 @@ tlv **T = TLV; const char *url; const char *vary; +const int64_t objsize = e-objectLen(); assert(e-mem_obj != NULL); assert(e-swap_status == SWAPOUT_WRITING); url = e-url(); @@ -88,6 +89,17 @@ return NULL; } + +if (objsize = 0) { + T = StoreMeta::Add(T, t); + t = StoreMeta::Factory(STORE_META_OBJSIZE, sizeof(objsize), objsize); + + if (!t) { + storeSwapTLVFree(TLV); + return NULL; + } +} + T = StoreMeta::Add(T, t); vary = e-mem_obj-vary_headers;
Re: compute swap_file_sz before packing it
tis 2009-10-27 klockan 21:41 +0100 skrev Henrik Nordstrom: Actual file storage size is more a business of the cache_dir than the core.. Forgot to mention.. nothing stops a cache_dir implementation from storing this attribute somehow associated with the data if one likes to, but for cache_dirs taking unbounded object sizes the information is not known until the object is completed. :) Done. Regards Henrik
Re: compute swap_file_sz before packing it
tis 2009-10-27 klockan 15:23 -0600 skrev Alex Rousskov: Is not that always the case? Even if a store refuses to accept objects larger than X KB, that does not mean that all objects will be X KB in size, regardless of any reasonable X value. Or did you mean something else by unbounded? There is two kinds of swapouts related to this a) Size-bounded, where objects are known to be of a certain size. b) Size not known at start of swapout. Impossible to record the size in headers. When there is at least one cache_dir with a size restriction we buffer objects of type 'b' before swapout in case it's small enough to actaully fit in the cache_dir policy even if size initially unknown. Technically, core does not know the true content size for some responses until the response has been received, but I do not remember whether we allow such responses to be cachable. We do. It's a quite common response form. Regards Henrik
Re: compute swap_file_sz before packing it
tis 2009-10-27 klockan 15:51 -0600 skrev Alex Rousskov: To compute StoreEntry::swap_file_sz, I will add up the ported STORE_META_OBJSIZE value and the swap_hdr_len set by StoreMetaUnpacker. Would you compute it differently? Sounds right to me. What should I do if STORE_META_OBJSIZE is not known? Does this question itself imply that each store that wants to rebuild an index has to store the final object size somewhere or update the STORE_META_OBJSIZE value? Exactly. But see my previous response some seconds ago. As you already noticed the ufs family uses the filesystem file size meta information to rebuild swap_file_sz. COSS in Squid-2 uses STORE_META_OBJSIZE + swap_hdr_sz. Thank you for porting this. Was already done, just not sent yet. What happens to STORE_META_OBJSIZE if the object size is not yet known at the time when Squid start swapping content to disk? Then there is no STORE_META_OBJSIZE. But stores with a max size limit won't get such swapouts. The whole patch is not needed if we start relying on STORE_META_OBJSIZE, I guess. Probably, was more a note to illustrate the issues that field battles with.. Regards Henrik
Re: [MERGE] Use libcap instead of direct linux capability syscalls
ons 2009-10-28 klockan 10:32 +1300 skrev Amos Jeffries: The configure test was broken however, always reporting failure... Strange. That was the change the Gentoo people are all enjoying at the moment. Well, I think most are silently happy with the workaround enabled even if not strictly needed. Regards Henrik
Re: compute swap_file_sz before packing it
tis 2009-10-27 klockan 17:04 -0600 skrev Alex Rousskov: On 10/27/2009 04:07 PM, Henrik Nordstrom wrote: Thank you for porting this. Was already done, just not sent yet. Will you commit your changes to trunk? Done. Regards Henrik
Re: [MERGE] Use libcap instead of direct linux capability syscalls
tis 2009-10-20 klockan 12:52 +1300 skrev Amos Jeffries: We can do that yes. I think I would also rather do it too. It paves the way for a clean deprecation cycle now that TPROXYv4 kernels are effectively mainstream: 3.0: (2008-2010) TPROXYv2 with libcap + libcap2 3.1: (2010-2012) support TPROXYv2 + TPROXYv4 with libcap2 3.2: (2011?) support TPROXYv4 with libcap2 So you want me to add the patch back on trunk? Means we must update libcap on several of the build farm members, including the master, or disable TPROXY in the build tests.. I guess I could add some configure magics to look for the missing function and automatically disable.. Regards Henrik
Re: [MERGE] Use libcap instead of direct linux capability syscalls
fre 2009-10-16 klockan 02:04 +0200 skrev Henrik Nordstrom:. Crap. libcap on centos is not usable. Regards Henrik
[MERGE] Use libcap instead of direct linux capability syscalls
The kernel interface, while some aspects of it is much simpler is also not really meant to be called directly by applications. The attached patch approximates the same functionality using libcap. Differs slightly in how it sets the permitted capabilities to be kept on uid change (explicit instead of masked), but end result is the same as setting the capabilities won't work if these were not allowed. # Bazaar merge directive format 2 (Bazaar 0.90) # revision_id: hen...@henriknordstrom.net-20091015142822-\ # is615u5fl72d5vt3 # target_branch: # testament_sha1: 7003f761ebaefca2b4e2fd090f186cfb0ec0357e # timestamp: 2009-10-15 20:21:24 14:24:33 + @@ -1240,51 +1240,41 @@ restoreCapabilities(int keep) { /* )); +#if defined(_SQUID_LINUX_) HAVE_SYS_CAPABILITY); +#define PUSH_CAP(cap) cap_list[ncaps++] = (cap) + int ncaps = 0; + int rc = 0; + cap_value_t cap_list[10]; + PUSH_CAP(CAP_NET_BIND_SERVICE); + + if (IpInterceptor.TransparentActive()) { + PUSH_CAP(CAP_NET_ADMIN); #if LINUX_TPROXY2 -cap-effective |= (1 CAP_NET_BROADCAST); + PUSH_CAP(CAP_NET_BROADCAST); #endif -} - -if (!keep) -cap-permitted = cap-effective; - -if (capset(head, cap) != 0) { + } +#undef PUSH_CAP + +_) */ } void * # Begin bundle IyBCYXphYXIgcmV2aXNpb24gYnVuZGxlIHY0CiMKQlpoOTFBWSZTWduZKS4ABHffgEQwee///39v /2q+YAf98llXqgAGd3O2GiQAAwkkChNR7U9GqM09JtMqPU9qj1PRpGI9Rk0ANB6mgBzCaA0B o0YRoMRpiZMTQYRoGQDJgJKQDTRpTTamgAAGjQAAaAADRkAxTRCj0nqA3qQbU0GgAGnqPUAH MJoDQGjRhGgxGmJkxNBhGgZAMmAkkBGgBAmTSNNqU20aKPU9NpqnpNHiZE0A9TbKjyXkn8xVSzOo 6nuY29xwpyJqc4QM6dfBXF6U3PALME7+yiygZHhYJ6GFAZOydW+/qhJBCOT4sE63ILSZnlhQfIiO rm+MtaEbipK8/U+Z8+ujzoF8y4g4ciAkUoXkPMgYOJLLXM4IMFZZKMXCy+LORYBB7SuZyL0Il75E ICDp4WDDvnrOPrEValQtaNatSpCNt54PN41VtS2WHA7zHj/2nM+Gkz7WaFCCr0K5SjhJycwMMIzT 3KF3Vi15a6qJ7dBlLhLczYCmVbNeLJxzyFNSShC+ZTDESO+q+5wDEzUZMUdKjOw7ZGXYorvDob1G 2PF+rT+n6s3T9HSiN7mWhRxV8L9gMeCCYg5JZL1dAKxAyX6LI0Qj4SZbiDgFaCOVBkFnvYa7g5zu oRHDPzKtD2BeZhPfkPMjzyJJ6EnhowkWFetKqWUc7YI2ltKCIjGJri+NJI4BghToa8ZMPJzefxrJ kanJVYOoyZhzJmDouttZAmOv6yoYJrJJ0ZpjEKGmVr2VjZhqjSbfBltSKFEI4QSQCncJE1PUZivx HOGZXnwlgeOh8EnYI36GsoMXrDLTCDtk9FMV9C9IWJaPImv5VvSsWy4XAglalQgUuSqMATIZliwM
Re: [MERGE] Use libcap instead of direct linux capability syscalls. Regards Henrik # Bazaar merge directive format 2 (Bazaar 0.90) # revision_id: hen...@henriknordstrom.net-20091015235726-\ # tjj24dnri2arionc # target_branch: # testament_sha1: e0544b31cc7e7f4f877a1b5939e6cfe26d60bc6f # timestamp: 2009-10-16 01:58:06 23:57:26 + @@ -1241,50 +1241,40 @@ { /*); + int ncaps = 0; + int rc = 0; + cap_value_t cap_list[10]; + cap_list[ncaps++] = CAP_NET_BIND_SERVICE; + + if (IpInterceptor.TransparentActive()) { + cap_list[ncaps++] = CAP_NET_ADMIN; #if LINUX_TPROXY2 -cap-effective |= (1 CAP_NET_BROADCAST); + cap_list[ncaps++] = CAP_NET_BROADCAST; #endif -} - -if (!keep) -cap-permitted = cap-effective; - -if (capset(head, cap) != 0) { + } + +_) */ +#endif /* _SQUID_LINUX_ */ } void * # Begin bundle IyBCYXphYXIgcmV2aXNpb24gYnVuZGxlIHY0CiMKQlpoOTFBWSZTWeXFlV4ABjxfgEQwef///39v /2q+YAnPj1q+owJFAC6wnTU1lAAoAMkTQSeRPUzQnpPakeiaaPU00GgAaaBo0aAA4yZNGgNG mIyNDEMCaNMQYjQYQAGHGTJo0Bo0xGRoYhgTRpiDEaDCAAwYlSPFBiNMnkmAhpiYgDRoMAmjJkMJ gikRojRpqniPSaaTKbFMyMQ1PTJqNEeiD1NBtQP1BJIBNNNAhCek1MaNJpH6oGRmoyMTR6mT1Hpp PCjhAcBA/KxRq1hVPG4pZm3lBsotwW7hoeBiSZwQhH0/L5PH/kbyD/uLuplZbxN2y1f8fZogkkyC aYHcL90IFWcSSwYKsiE+LyjBhNwLMnJqM1M9XJ8pp0hqwfSU6dKQuYVSv7m9a9R6u/J5/hu0DwLV RcnDIAkUnMqGQkxuEOu/zzGEwQUAhFxmfXGk80CefYwUo9wwYQBxOcXyfCzqxcymxgpzi5iJF2EY 0U55YbSigl3BGUOeSjOVXxhd+FvrRBZasfyxaKQCJ8BskUW8K+C2l+leb4cJr4W3P2YqIV4EazdE 8uc3g5hmBhATdqv1zKZMU1u7PYqz7dBuGAPCEUqgRrsCOo6uAXo6SLqriqnF9gpXSmR5HTRV9R1w 70c8uFh1Rvee+vka9RAroY78lh1wpyiWupjVyVseZEn8VOsFimsDmAQ1hiiWNDJGbAgPJC0F+0TW ++nIjzgwJ/15qUHGK6ew0DchuHpVdSHu9mKHqu83NkJSlKUv4q17+o3uk1RctUCnrgZzP+b7Ub71
Re: adding content to the cache -- a revisit
mån 2009-10-12 klockan 07:57 -0600 skrev bergenp...@comcast.net: The content I'm trying to manually install into the squid server will be a subset of the origin server content, so for objects not manually installed into squid, squid will still need to go directly back to the origin server. What you need is a) A HTTP server on the Squid server capable of serving the objects using HTTP, preverably with as identical properties as possible as the origin. This includes at least properties such as ETag, Content-Type, Content-Language, Content-Encoding and Last-Modified. b) wget, squidclient or another simple HTTP client capable of requesting URLs from the proxy. c) cache_peer line telling Squid that this local http server exists d) A unique http_port bound on the loopback interface, only used for this purpose (simplifies next step) e) cache_peer_access + never_direct rules telling Squid to fetch content requested from the unique port defined in 'd' from the peer defined in 'c' and only then.. Regards Henrik
Re: [2.HEAD patch] Fix compilation on opensolaris
Looks fine. Applied. Regards Henrik mån 2009-10-12 klockan 23:35 +0200 skrev Kinkie: Sure. Attached. K. On Mon, Oct 12, 2009 at 10:05 PM, Henrik Nordstrom hen...@henriknordstrom.net wrote: Can you resend that as an unified diff please... not keen on applying contextless patches fre 2009-10-09 klockan 17:39 +0200 skrev Kinkie: Hi all, 2.HEAD currently doesn't build on opensolaris, in at least some cases due to it not properly detecting kerberosv5 variants. The attached patch is a backport of some 3.HEAD changes which allows 2.HEAD to build on opensolaris Please review and, if it seems OK to you, apply.
Re: [squid-users] HTTP Cache General Question
fre 2009-10-09 klockan 18:26 +1300 skrev Amos Jeffries: Beyond that there is a lot of small pieces of work to make Squid capable of contacting P2P servers and peers, intercept seed file requests, etc. There is also the related topic of how to fight bad feeds of corrupted data (intentionally or erroneous). All p2p networks have to fight this to various degrees, and to do so you must know the p2p network in sufficient detail to know who are the trackers, definitions of authorative segment checksums, blacklisting of bad peers etc. Regards Henrik
Re: squid-smp
fre 2009-10-09 klockan 01:50 -0400 skrev Sachin Malave: I think it is possible to have a thread , which will be watching AsyncCallQueue, if it finds an entry there then it will execute the dial() function. Except that none of the dialed AsyncCall handlers is currently thread safe.. all expect to be running in the main thread all alone.. can we separate dispatchCalls() in EventLoop.cc for that purpose? We can have a thread executing distatchCalls() continuously and if error condition occurs it is written in error shared variable. which is then read by main thread executing mainLoop... in the same way returned dispatchedSome can also be passed to main thread... Not sure I follow. Regards Henrik
Re: CVE-2009-2855
Not sure. Imho it's one of those small things that is very questionable if it should have got an CVE # to start with. For example RedHat downgraded the issue to low/low (lowest possible rating) once explained what it really was about. But we should probably notify CVE that the bug has been fixed. tis 2009-10-13 klockan 11:14 +1300 skrev Amos Jeffries: Are we going to acknowledge this vulnerability with a SQUID:2009-N alert? The reports seem to indicate it can be triggered remotely by servers. It was fixed during routine bug closures a while ago so we just need to wrap up an explanation and announce the fixed releases. Amos
Re: CVE-2009-2855
tis.
Re: CVE-2009-2855
tis 2009-10-13 klockan 12:12 +1300 skrev Amos Jeffries: Okay, I've asked the Debian reporter for access to details. Lacking clear evidence of remote exploit I'll follow along with the quiet approach. Right.. meant to provide the details as well but forgot... It can be found in the RedHat bug report. A sample test case is as follows: -- test-helper.sh (executable) --- #!/bin/sh while read line; do echo OK done -- end test-helper.sh -- squid.conf (before where access is normally allowed) -- external_acl_type test %{Test:;test} /path/to/test-helper.sh acl test external test http_access deny !test -- end squid.conf -- -- test command -- /usr/bin/squidclient -H Test: a, b, test=test\n -- end test command -- The CVE has reference to our bugs which are clearly closed. If there is more to be done to notify anyone can you let me know what that is please? the other CVE from this year are in similar states of questionable open/closed-ness. Ah, now I get what you mean. yes we should be more active in giving vendor feedback to CVE in general.. Contacting c...@mitre.org is a good start I guess. Regards Henrik
Re: CVE-2009-2855
tis 2009-10-13 klockan 12:40 +1300 skrev Amos Jeffries: Mitre still list them all as Under Review. That's normal.. still collecting information. Regards Henrik
Re: [squid-users] HTTP Cache General Question
fre 2009-10-09 klockan 09:33 -0400 skrev Mark Schall: Peer 1 sends HTTP Request to Peer 2 with in the header. Would Squid or other Web Caches try to contact instead of the Peer 2, or will it forward the request onward to Peer 2. HTTP does not have both host and address detail. HTTP have an URL. If a client requests from the proxy then the proxy will request /someuniqueidentifierforchunkoffile from. If the client does direct connections (not configured for using proxy) then it MAY connect to other host and request GET /someuniqueidentifierforchunkoffile Host: from there. But if that is intercepted by a intercepting HTTP proxy then the proxy will generally use the host from the Host header instead of the intercepted destination address. Regards Henrik
Re: [squid-users] Squid ftp authentication popup
ons 2009-10-07 klockan 10:06 +1300 skrev Amos Jeffries: Firefox-3.x wil happyily popup the ftp:// auth dialog if the proxy-auth header is sent. There were a few bugs which got fixed in the 3.1 re-writes and made squid start to send it properly. It's broken in 3.0, not sure if its the same in 2.x but would assume so. The fixes done rely on C++ objects so wont be easy to port. In what ways is 3.0 broken? The visible changes I see is that 3.1 only prompts if required by the FTP server, and that the realm for some reason is changed to also include the requested server name. 401 basic auth realms are implicit unique to each servername. (digest auth is a little fuzzier as it may apply to more domains/servers) Regards Henrik
Re: [squid-users] Squid ftp authentication popup
ons 2009-10-07 klockan 13:09 +1300 skrev Amos Jeffries: 3.0 uses a generic fail() mechanism to send results back. That mechanism seems not to add the Proxy-Auth reply header at all. 3.0 also was only parsing the URL and config file. Popup re-sends contain the auth in headers not URL. Strange. My 3.0 responds as HTTP/1.0 401 Unauthorized Server: squid/3.0.STABLE19-BZR X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0 WWW-Authenticate: Basic realm=ftp username and relays Authorization properly. It however rejects any login other than the one supplied in the URL. Squid-2 behaves the same. Regards Henrik
[Fwd: [squid-users] Akamai's new patent 7596619]
Patent office operations at it's best... ---BeginMessage--- Take a look at this patent, granted on September 29, 2009 HTML delivery from edge-of-network servers in a content delivery network (CDN) Abstract. -- View this message in context: Sent from the Squid - Users mailing list archive at Nabble.com. ---End Message---
Re: [PATCH] warning: `squid' uses 32-bit capabilities
tis 2009-10-06 klockan 00:46 +1300 skrev Amos Jeffries: I'm going to dare and hope that the fix really is this simple :) The right fix is actually using libcap instead of the raw kernel interface... Regards Henrik
Re: R: monitoring squid environment data sources
ons 2009-09-30 klockan 16:57 +0200 skrev Kinkie: On Wed, Sep 30, 2009 at 2:00 PM, Henrik Nordstrom hen...@henriknordstrom.net wrote: Been thinking a little further on this, and came to the conclusion that effort is better spent on replacing the signal based -k interface with something workable, enabling a control channel into the running Squid to take certain actions. This warrants a Features/ wiki page IMO Hmm.. sure it wasn't there already? It's a feature which has been discussed for a decade or so.. Regards Henrik
Re: R: monitoring squid environment data sources
lör 2009-10-03 klockan 21:19 +0200 skrev Kinkie: I could not find it... If it's a dupe, I apologize Well most discussions were before we had a Features section in the wiki...
Re: Segfault in HTCP CLR request on 64-bit
fre 2009-10-02 klockan 02:52 -0400 skrev Matt W. Benjamin: Bzero? Is it an already-allocated array/byte sequence? (Apologies, I haven't seen the code.) Assignment to NULL/0 is in fact correct for initializing a sole pointer, and using bzero for that certainly isn't typical. Also, for initializing a byte range, memset is preferred [see Linux BZERO(3), which refers to POSIX.1-2008 on that point]. STYLE(9) says use NULL rather than 0, and it is clearer. But C/C++ programmers should know that NULL is 0. And note that at least through 1998, initialization to 0 was the preferred style in C++, IIRC. You are both right. the whole stuff should be zeroed before filled in to avoid accidental leakage of random values from the stack, which also makes the explicit assignment redundant. bzero is not the right call (BSD specific), memset is preferred. In C (which is what Squid-2 is written in) NULL is the right initializer for pointers in all contexts. C++ is different... no universally accepted pointer initializer value there due to the slightly different type checks on pointers, often needing casting. But something is fishy here.. see my comment in bugzilla. Regards Henrik
Re: Segfault in HTCP CLR request on 64-bit
fre 2009-10-02 klockan 11:48 -0400 skrev Jason Noble: Sorry, I went to bugzilla before reading all the e-mails here. As I commented on the bug report states, there is nothing fishy going on. While strlen(NULL) will always segfault, htcpBuildCountstr() wraps the strlen() call with a check for a NULL pointer: 260if (s) 261len = strlen(s); 262else 263len = 0; Great. Then the memset is sufficient. Case closed. Regards Henrik
Re: Build failed in Hudson: 3.HEAD-i386-FreeBSD-6.4 #72
ons 2009-09-30 klockan 08:06 +0200 skrev Kinkie: Several of the build jobs appeared to be pulling from which is not updated since 10 sept. Thats now fixed so we shall see if the update affects this error I'm currently rebasing all jobs so that they pull from bzr.squid-cache.org Where should I point my bzr+ssh branches to? I would recommend bzr+ssh://bzr.squid-cache.org/bzr/squid3/ for now squid-cache.org also works. But may change later on. bzr+ssh://squid-cache.org/bzr/squid3/ Using for bzr is not a good idea. Regards Henrik
Re: R: monitoring squid environment data sources
Been thinking a little further on this, and came to the conclusion that effort is better spent on replacing the signal based -k interface with something workable, enabling a control channel into the running Squid to take certain actions. At least on Linux many of the monitors needed is easier implemented externally, just telling Squid what to do when needed. Regards Henrik
Re: monitoring squid environment data sources
tis 2009-09-29 klockan 14:06 +1200 skrev Amos Jeffries: It seems to me that the master process might be assigned to monitor the upstate of the child process and additionally set a watch on Except that the master process is entirely optional and generally not even desired if you use a smart init system like upstart. Additionally, a full reconfigure only because resolv.conf changed is a bit on the heavy side imho. And finally, as both resolv.conf and hosts paths is configuratble from squid.conf the current used path and what the master process remembers may differ local host IPs (bit tricky on non-windows as this may block)? Linux uses netlink messages. non-blocking access available. I think my preference is to get these monitors into the main process. Regards Henrik
Re: [noc] Build failed in Hudson: 3.HEAD-amd64-CentOS-5.3 #118
mån 2009-09-28 klockan 12:21 +1200 skrev Amos Jeffries: ¨ If you like, I thought the point of your change was that one of the libraries missing was not fatal. Neither is required for enabling ESI. There is also the default custom parser which is self-contained and always built. And that --with-X meant only that X should be used if possible. Possibly the absence of both is fatal, in which case the WARN at the end of my change should be made an ERROR again. The parsers are best seen as plugins, adding features. One selects at runtime via the esi_parser squid.conf directive which parser to use among the available ones. The point of having configure options for these is only to get a controlled build where one knows what features have been enabled if the build was successful, and where building fails if those features can not be built. Regards Henrik
Re: assert(e-mem_status == NOT_IN_MEMORY) versus TCP_MEM_HIT.
mån 2009-09-28 klockan 12:04 +1200 skrev Amos Jeffries: I'm hitting the case approximately once every 10-15 minutes on the CDN reverse proxies. More when bots run through this particular clients website. It's almost always on these small files (~10K) retrieved long-distance in reverse proxy requests. They arrive in two chunks 200-300ms apart. swapin race gets lost at some point between the two reads. Sice is not very important here. It's just about swapin timing and frequency of swapin requests. The longer disk reads take the higher probability. Content-Length is available and a buffer can be allocated (or existing one extended) for memory storage immediately instead of a disk file opened, modulo the min/max caching settings. THe open is not about memory, it's about being sure the known to be on disk data can be read in when required, even if I/O happens to be overloaded at that time to the level that swapin requests are rejected. All the other cases (too big for memory, no content-length etc) can go back to the old file open if need be. Yes, and have to. Regards Henrik
Re: ESI auto-enable
Amos, in response to your IRC question: with these ESI changes are you confident enough that we can now auto-enable ESI for 3.1.0.14? Answer: Not until the autoconf foo stuff for the parsers have settled in trunc. A default build of 3.1 should not strictly require libxml2 and expat and should build without it. Regards Henrik
Re: assert(e-mem_status == NOT_IN_MEMORY) versus TCP_MEM_HIT.
sön 2009-09-27 klockan 12:55 +1300 skrev Amos Jeffries: Ah, okay gotcha. So... (c) for people needing a quick patch. (b) to be committed (to meet 3.2 performance goals, saving uselsss disk operations, etc etc). The number of times 'b' as discussed here will be hit is negliable. Not sure it's worth trying to optimize this. But the bigger picture 'b' may be worthwhile to optimize a bit, namely better management of swapin requests. Currently there is one open disk cache handle per concurrent client, should be sufficient with just one for all swapin clients.. but that requires the store io interface implementation to be cleaned up a bit allowing multiple outstanding read operations on the same handle but processed one at a time to avoid seek issues.. Regards Henrik
Re: Build failed in Hudson: 3.HEAD-amd64-CentOS-5.3 #118
sön 2009-09-27 klockan 03:02 +0200 skrev n...@squid-cache.org: [Henrik Nordstrom hen...@henriknordstrom.net] Make ESI parser modules expat and libxml2 dependent on their libraries The ESI parser system is actually pluggable. There is no reason we should require expat and libxml2. Just build what works. Sorry about that. Forgot to check compile without ESI enabled.. Amos, regarding your change. Shouldn¨t the --with/--without force those libraries? Having detection within a --with seems wrong to me.. My preference for trunk is --with-... require that library to be present, fail the build if not. --without-... don't use that lib even if present Regards Henrik
Re: assert(e-mem_status == NOT_IN_MEMORY) versus TCP_MEM_HIT.
lör 2009-09-26 klockan 18:37 +1200 skrev Amos Jeffries: Something seems a bit weird to me there... (c) being harmless race condition? It is harmless, the only ill effect is that the swap file is opened when it does not need to be, just as happens if the request had arrived a few ms before. clients starting when an object is not fully in memory always opens the disk object to be sure they can get the whole response, even if it most times do not need to read anything from that on-disk object. Surely its only harmless if we do (b) by changing the assert to a self-fix action? The self-fix is already there in that the actual data will all be copied from memory. It's just that not all data was in memory when the request started (store client created) but then when doCopy was called the first time it was. Or at least that's my assumption on what has happened. To know for sure the object needs to be analyzed extracting expected size and min max in-memory to rule out that it's not an object that has got marked erroneous as in-memory. But I am pretty sure my guess is right. Regards Henrik
Re: Squid 3.1 kerb auth helper
lör 2009-09-26 klockan 11:43 +0100 skrev Markus Moeller: Is this a real issue or just to be compliant with debian rules ? Can you give me more details ? It's the same issue I had with squid_kerb_auth when trying to package 3.1 for Fedora and which you helped to get fixed. Amos, please merge at least 1 and 10002 and roll a new 3.1 release when possible. 3.1.0.13 is just not cutting it for distro packaging, and the amount of patches needed to get a reasonable 3.1 is rather long now... Required packaging patches: Correct squid_kerb_auth compile/link flags to avoid bad runpath settings etc Cleanup automake-foo a bit in errors/ (fixes lang symlinks when using make install DESTDIR=...) Install error page templates properly. (correction to the above) Patches which may bite some packagers depending on compilers and enabled Squid features: Better const-correctness on FTP login parse (newer GCC barfing) Fixup libxml2 include magics, was failing when a configure cache was used (ESI related) Bug #2734: fix compile errors from CBDATA_CLASS2() Make ESI behave reasonable when built but not used Bug #2777: Don't know how to make target `-lrt' on OpenSolaris (not yet in 3.1) Other patches I ranked as critical for Fedora but unrelated to packaging Bug #2718: FTP sends EPSV2 on ipv4 connection Bug #2541: Hang in 100% CPU loop while extacting header details using a delimiter other than comma Bug #2745: Invalid response error on small reads Bug #2624: Invalid response for IMS request Bug #2773: Segfault in RFC2069 Digest authantication (not yet in 3.1) Regards Henrik
Re: Build failed in Hudson: 3.HEAD-i386-FreeBSD-6.4 #72
fre 2009-09-25 klockan 14:49 +0200 skrev n...@squid-cache.org: sed: 1: s...@default_http_port@% ...: unbalanced brackets ([]) *** Error code 1 Hmm.. what is this about? The actual failing line is sed s...@default_http_port@%3128%g; s...@default_icp_port@%3130%g; s...@default_cache_effective_user@%nobody%g; s...@default_mime_table@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc/mime.conf%g; s...@default_dnsserver@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/libexec/`echo dnsserver | sed 's,x,x,;s/$//'`%g; s...@default_unlinkd@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/libexec/`echo unlinkd | sed 's,x,x,;s/$//'`%g; s...@default_pinger@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/libexec/`echo pinger | sed 's,x,x,;s/$//'`%g; s...@default_diskd@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/libexec/`echo diskd | sed 's,x,x,;s/$//'`%g; s...@default_cache_log@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/logs/cache.log%g; s...@default_access_log@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/logs/access.log%g; s...@default_store_log@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/logs/store.log%g; s...@default_pid_file@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/squid.pid%g; s...@default_netdb_file@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/logs/netdb.state%g; s...@default_swap_dir@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/cache%g; s...@default_icon_dir@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/share/icons%g; s...@default_error_dir@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/share/errors%g; s...@default_config_dir@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc%g; s...@default_prefix@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst%g; s...@default_hosts@%/etc/hosts%g; s...@[v]ersion@%3.HEAD-BZR%g; I can't see anything wrong with that, certainly no unbalanced brackets.. Note: the [V]ERSION thing which is the only brackets in that line is a hack to get around VERSION being some magic keyword iirc, and has been there since 2003. Regards Henrik
changeset filename change
As Amos already noticed the changesets in /Versions/v3/X/changesets/ have got a new file name style. They are now named squid-version-bzrrevision.patch instead of just bbzrrevision.patch The reason to this is because bzr runs revisions per branch, which makes it hard to keep track of the changesets when working with multiple versions. Additionally I have always found it a bit confusing to deal with downloaded patches with just the revision number and no project name or version... This change should have been done years ago before the changesets where put into production, but.. The change is only effective for new changesets generated in the last week or so. Regards Henrik
bzr revision 10000 reached!
Congratulations to Amos for making revision 10K Regards Henrik
Re: Build failed in Hudson: 2.HEAD-i386-Debian-sid #57
ons 2009-09-23 klockan 00:07 +1200 skrev Amos Jeffries: n...@squid-cache.org wrote: See -- Started by upstream project 2.HEAD-amd64-CentOS-5.3 build number 118 Building remotely on rio.treenet cvs [checkout aborted]: Name or service not known FATAL: CVS failed. exit code=1 So what did I have to do to fix this after your changes the other day kinkie? it need to do a checkout from the main CVS repository, not the SourceForge one. Regards Henrik
Re: wiki, bugzilla, feature requests
tis 2009-09-22 klockan 23:27 +1000 skrev Robert Collins: I'm proposing: - if there is a bug for something, and a wiki page, link them together. - scheduling, assignment, and dependency data should be put in bugs - whiteboards to sketch annotate document etc should always be in the wiki I fully second this opinion. Wiki is great for documentation and the like, but very poor for tracking progress (or lack thereof). Regards Henrik
Re: ip masks, bug #2601 2141
Hmm.. thinking here. Not sure we should warn this loudly on clean IPv4 netmasks. People are very used to those, and do not really produce any problems for us. But we definitely SHOULD barf loudly on odd masks, or even outright reject them as fatal configuration errors when used in the ip acl. Which brings the next isse. There is configurations which intentionally do make use of odd IPv4 netmasks to simplify the config even if limited to a single expression per acl. To support these we should add back the functionalit by adding an maskedip acl type using linear list (basically a copy of ip acl, changing store method from splay to list). Questionable if this maskedip acl type should support IPv6. Alternative name ipv4mask. mån 2009-09-21 klockan 09:06 +1200 skrev Amos Jeffries: revno: 9996 committer: Amos Jeffries squ...@treenet.co.nz branch nick: trunk timestamp: Mon 2009-09-21 09:06:24 +1200 message: Bug 2601: pt 2: Mixed v4/v6 src acl leads to TCP_DENIED - Remove 'odd' netmask support from ACL. - Fully deprecate netmask support for ACL. Earlier fix caused inconsistent handling between IPv4 and IPv6 builds of Squid. Which has turned out to be a bad idea. This fixes that by 'breaking' both build alternatives.
Re: myport and myip differences between Squid 2.7 and 3.1 when running in intercept mode
fre 2009-09-18 klockan 11:13 +1000 skrev James Brotchie: On Squid 2.7 the intercepted acl matches whilst in 3.1 it doesn't. In 2.7 the myport and myip acls are very unreliable in interception mode. Depends on the request received if these are the local endpoint or the original destination enpoint.. Digging deeper into the Squid 3.1 source it seems that if a http_port is set to intercept then the me member of ConnStateData, which is normally the proxy's ip and listening port, is replaced by the pre-NAT destination ip and port. And in 2.7 it just sometimes are, i.e. when the original destnation is required to resolve the request. And on some OS:es it always are replaced, depends on how the original destination information is given to Squid. Regards Henrik
Re: /bzr/squid3/trunk/ r9985: Remove 'NAT' lookup restrictions from TPROXY lookups.
fre 2009-09-18 klockan 18:35 +1200 skrev Amos Jeffries: +/* NAT is only available in IPv6 */ +if ( !me.IsIPv4() ) return -1; +if ( !peer.IsIPv4() ) return -1; + Code comment does not seem to match to me... Regards Henrik
Re: R: Squid 3 build errors on Visual Studio - problem still present
tor 2009-09-17 klockan 11:15 +0200 skrev Guido Serassio: It fails: vs_string.cc c:\work\vc_string\vs_string.cc(1) : error C2371: 'size_t' : redefinition; different basic types Gah, should have been unsigned long.. but interesting that VS apparently has size_t built-in. It was declared in the preprocessed source as typedef __w64 unsigned int size_t; c:\work\vc_string\vs_string.cc : see declaration of 'size_t' c:\work\vc_string\vs_string.cc(34) : error C2057: expected constant expression Good. so it seems the test case worked. now replace std with testcase and try again, both in namespace and the failing assignment just to make sure it's not tripping over something else built-in. Do you need the preprocessed source ? No, that was preprocessed already with no includes or other preprocessor directives. Regards Henrik
Re: Build failed in Hudson: 2.HEAD-amd64-CentOS-5.3 #11
tor 2009-09-17 klockan 01:00 +0200 skrev n...@squid-cache.org: See -- A SCM change trigger started this job Building on master [2.HEAD-amd64-CentOS-5.3] $ cvs -Q -z3 -d :pserver:anonym...@cvs.devel.squid-cache.org:/cvsroot/squid co -P -d workspace -D Wednesday, September 16, 2009 11:00:32 PM UTC squid $ computing changelog Fatal error, aborting. anoncvs_squid: no such system user ERROR: cvs exited with error code 1 Command line was [Executing 'cvs' with arguments: '-d:pserver:anonym...@cvs.devel.squid-cache.org:/cvsroot/squid' Why are you pulling from cvs.devel (SourceForge)? please pull from the main repository instead. Regards Henrik
[MERGE] Clean up htcp cache_peer options collapsing them into a single option with arguments
the list of HTCP mode options had grown a bit too large. Collapse them all into a single htcp= option taking a list of mode flags. # Bazaar merge directive format 2 (Bazaar 0.90) # revision_id: hen...@henriknordstrom.net-20090917222032-\ # nns17iudtio5jovr # target_branch: # testament_sha1: 8bd7245c3b25c9acc89a037834f39bc71100b3ea # timestamp: 2009-09-18 00:21:15 +0200 # base_revision_id: amosjeffr...@squid-cache.org-20090916095346-\ # m7liji2knguolxxw # # Begin patch === modified file 'src/cache_cf.cc' --- src/cache_cf.cc 2009-09-15 11:59:51 + +++ src/cache_cf.cc 2009-09-17 22:11:15 + @@ -1753,30 +1753,41 @@ } else if (!strcasecmp(token, weighted-round-robin)) { p-options.weighted_roundrobin = 1; #if USE_HTCP - } else if (!strcasecmp(token, htcp)) { p-options.htcp = 1; } else if (!strcasecmp(token, htcp-oldsquid)) { + /* Note: This form is deprecated, replaced by htcp=oldsquid */ p-options.htcp = 1; p-options.htcp_oldsquid = 1; -} else if (!strcasecmp(token, htcp-no-clr)) { -if (p-options.htcp_only_clr) -fatalf(parse_peer: can't set htcp-no-clr and htcp-only-clr simultaneously); -p-options.htcp = 1; -p-options.htcp_no_clr = 1; -} else if (!strcasecmp(token, htcp-no-purge-clr)) { -p-options.htcp = 1; -p-options.htcp_no_purge_clr = 1; -} else if (!strcasecmp(token, htcp-only-clr)) { -if (p-options.htcp_no_clr) -fatalf(parse_peer: can't set htcp-no-clr and htcp-only-clr simultaneously); -p-options.htcp = 1; -p-options.htcp_only_clr = 1; -} else if (!strcasecmp(token, htcp-forward-clr)) { -p-options.htcp = 1; -p-options.htcp_forward_clr = 1; +} else if (!strncasecmp(token, htcp=, 5) || !strncasecmp(token, htcp-, 5)) { + /* Note: The htcp- form is deprecated, replaced by htcp= */ +p-options.htcp = 1; +char *tmp = xstrdup(token+5); +char *mode, *nextmode; +for (mode = nextmode = token; mode; mode = nextmode) { +nextmode = strchr(mode, ','); +if (nextmode) +*nextmode++ = '\0'; +if (!strcasecmp(mode, no-clr)) { +if (p-options.htcp_only_clr) +fatalf(parse_peer: can't set htcp-no-clr and htcp-only-clr simultaneously); +p-options.htcp_no_clr = 1; +} else if (!strcasecmp(mode, no-purge-clr)) { +p-options.htcp_no_purge_clr = 1; +} else if (!strcasecmp(mode, only-clr)) { +if (p-options.htcp_no_clr) +fatalf(parse_peer: can't set htcp no-clr and only-clr simultaneously); +p-options.htcp_only_clr = 1; +} else if (!strcasecmp(mode, forward-clr)) { +p-options.htcp_forward_clr = 1; + } else if (!strcasecmp(mode, oldsquid)) { + p-options.htcp_oldsquid = 1; +} else { +fatalf(invalid HTCP mode '%s', mode); +} +} +safe_free(tmp); #endif - } else if (!strcasecmp(token, no-netdb-exchange)) { p-options.no_netdb_exchange = 1; === modified file 'src/cf.data.pre' --- src/cf.data.pre 2009-09-15 23:49:34 + +++ src/cf.data.pre 2009-09-17 22:20:32 + @@ -922,7 +922,7 @@ NOTE: The default if no htcp_access lines are present is to deny all traffic. This default may cause problems with peers - using the htcp or htcp-oldsquid options. + using the htcp option. This clause only supports fast acl types. See for details. @@ -1682,22 +1682,23 @@ htcp Send HTCP, instead of ICP, queries to the neighbor. You probably also want to set the icp-port to 4827 - instead of 3130. - - htcp-oldsquid Send HTCP to old Squid versions. - - htcp-no-clr Send HTCP to the neighbor but without + instead of 3130. This directive accepts a comma separated + list of options described below. + + htcp=oldsquid Send HTCP to old Squid versions (2.5 or earlier). + + htcp=no-clr Send HTCP to the neighbor but without sending any CLR requests. This cannot be used with - htcp-only-clr. - - htcp-only-clr Send HTCP to the neighbor but ONLY CLR requests. - This cannot be used with htcp-no-clr. - - htcp-no-purge-clr + only-clr. + + htcp=only-clr Send HTCP to the neighbor but ONLY CLR requests. + This cannot be used with no-clr. + + htcp=no-purge-clr Send HTCP to the neighbor including CLRs but only when they do not result from PURGE requests. - htcp-forward-clr + htcp=forward-clr Forward any HTCP CLR requests this proxy receives to the peer. # Begin bundle
Re: [MERGE] Clean up htcp cache_peer options collapsing them into a single option with arguments
fre 2009-09-18 klockan 08:28 +1000 skrev Robert Collins: both works. But the documentation do say (after unwinding the patch) htcpSend HTCP, instead of ICP, queries to the neighbor. You probably also want to set the icp-port to 4827 instead of 3130. This directive accepts a comma separated list of options described below. Regards Henrik
Re: [PATCH] Log additional header for the navigation from BlackBarry Device
mån 2009-09-14 klockan 18:58 +0200 skrev devzero2000: I hope this tiny patch can be useful also for other user, so i put here for review and possible merge if you like. Thanks in advance Elia ~~~ This patch permit to log the additional Header used by BlackBarry and to remove these via http_headers squid.conf directive. As commented in bugzilla I don't quite see why the patch is needed. Logging works equally well without the patch. Adding header ids for new headers is only useful if you need to quickly access these headers in the Squid code. Those header ids are not used by the logging code, only the header name. Regards Henrik
Re: Why does Squid-2 return HTTP_PROXY_AUTHENTICATION_REQUIRED on http_access DENY?
tis 2009-09-15 klockan 16:09 +1000 skrev Adrian Chadd: But in that case, ACCESS_REQ_PROXY_AUTH would be returned rather than ACCESS_DENIED.. Perhaps. Simple change moving that logic from client_side.c to acl.c, but may cause unexpected effects in other access directives such as cache_peer_access where we don't want to challenge the user. Why does it matter? Regards Henrik
Re: Squid-smp : Please discuss
tis 2009-09-15 klockan 05:27 +0200 skrev Kinkie: I'm going to kick-start a new round then. If the approach has already been discussed, please forgive me and ignore this post. The idea is.. but what if we tried using a shared-nothing approach? Yes, that was my preference in the previous round as well, and then move from there to add back shared aspects. Using one process per CPU core, non-blocking within that process and maybe internal offloading to threads in things like eCAP. Having requests bounce between threads is generally a bad idea from a performance perspective, and should only be used when there is obvious offload benefits where the operation to be performed is considerably heavier than the transition between threads. Most operations are not.. Within the process I have been toying with the idea of using a message based design rather than async calls with callbacks to further break up and isolate components, especially in areas where adding back sharedness is desired. But that's a side track, and same goals can be accomplished with asynccall interfaces. Quick run-down: there is farm of processes, each with its own cache_mem and cache_dir(s). When a process receives a request, it parses it, hashes it somehow (CARP or a variation thereof) and defines if should handle it or if some other process should handle it. If it's some other process, it uses a Unix socket and some simple serialization protocol to pass around the parsed request and the file descriptor, so that the receiving process can pick up and continue servicing the request. Just doing an internal CARP type forwarding is probably preferable, even if it adds another internal hop. Things like SSL complicates fully moving the request. There are some hairy bits (management, accounting, reconfiguration..) and some less hairy bits (hashing algorithm to use, whether there is a master process and a workers farm, or whether workers compete on accept()ing), but on a first sight it would seem a simpler approach than the extensive threading and locking we're talking about, AND it's completely orthogonal to it (so it could be viewed as a medium-term solution while AsyncCalls-ification remains active as a long-term refactoring activity, which will eventually lead to a true MT-squid) I do not think we will see Squid ever become a true MT-squid without a complete ground-up rewrite. Moving from single-thread all shared single access data without locking to multithread is a very complex path. Regards Henrik
Re: R: Squid-smp : Please discuss
tis 2009-09-15 klockan 08:39 +0200 skrev Guido Serassio: But MSYS + MinGW provides gcc 3.4.5 and the Squid 3 Visual Studio Project is based on Visual Studio 2005. There is GCC-4.x for MinGW as well. What I have in my installations. Just not classified as the current production release for some reason which more and more is ignoring toda. Regards Henrik
|
https://www.mail-archive.com/search?l=squid-dev%40squid-cache.org&q=from:%22Henrik+Nordstrom%22&o=newest&f=1
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
world war 2 leaders activity
cowlitz river dam
melbourne suburbs ranking crime
aeromancer pathfinder 2e
nissan leaf ccs retrofit
genuine gm tonneau cover
amish bakery store near me
golden gloves 2022
faucet crypto app
klim gloves
settimer ue4
smart jack boat lift
grizzly 15 helical planer
aea hp ss plus 30 cal review
sbc blower intake
assa twin maximum
10 facts about the book of enoch
atv tours kentucky
best pickup truck cargo nets
how to avoid hackers in warzone
juniper packet capture interface
nw firearms hull
splat ball ammo biodegradable
how to add shapes in google docs ipad
young justice blue beetle x reader
pcsx2 hardware renderer settings greyed out
james coonan release date
vrchat mobile
tq automation answer key
sonarworks systemwide vs plugin
2014 dodge dart body control module
simplicity 54 inch mower deck for sale
mikasa collapses fanfiction
rt 21 yard sale
wooden communion set
mausam e gul full novel
billing scheme
lg v35 android 11
zfs clone dataset
msds domestos
fourier beam propagation matlab
lexmoto enigma 125 parts
schneider 66kv transformer
heroquest app
tf3dm 3d models
atmel twi example c
a2 black screen
planetary gear ratios calculator
free knitting pattern for potholders
stereo live dallas coupon
unfair judge in family court uk
aquafina car wrap
gouldian finch breeding
gallipolis ohio real estate homes
royal masat
mac 9 machine gun
dr zhang lyme supplements
aarp ny events
amazon online test questions and answers 2021
dcfs standards
displacement tonnage calculator
movie theaters phoenix
george michael husband
kibana tsvb annotations
2008 g35 starter relay location
gimkit fishtopia strategy
solax battery system
tia portal analog input
conan exiles studies of the ancient arts thrall
mtg prize support calculator
hk p7 sights
samsung android camera settings
zillow homes for sale currituck co nc
rws 308 brass
cinder blocks lowes
update data in firebase realtime database
groton news
how would you describe the way activities are coordinated in valve
burke county accident reports
lost dubs
polaris roof rack
eduqas drama past papers
scalping ps5 reddit
no prep rc drag racing florida
anniversary clock suspension spring replacement
322 west 88th street no 2a
wana strawberry lemonade gummies dosage
xfinity mobile imei check
6ca7 tube equivalent
musicxml python parser
caribbean supermarket tampa
clash in spouse palace
accident on 303 ohio today
edelbrock carb adjustment
syracuse university strategic plan
best jewelry box for little girl
4artworks discount code
pioneer rta damascus
shein manufacturing
kodiak bear lodges
big chief street racer
mass dm bot discord
6th shidduch date
ebt cash vs ebt food
interstate express san marcos ca
torana race car for sale
auxiliary coolant heater
google senior software engineer salary seattle
mcu tv reddit
fuchs headquarters
smartinspect download
ely news facebook
sig sauer p320 suppressor height sights
free city skybox
cheapest way to clear land
cursor not in the right place
prizes for students in middle school
reapply for rental assistance
aws appsync review
bassam hamzy age
xhtml2pdf flask example
building wheel for jpype1 setup py error
determine the vertical reaction at support a
1989 ford f800 diesel engine
coachman park model
trucking rates going up
dorks google 2021
bokuto x oc wattpad
ragnarok stat calculator 3rd job
tisas 1911 a1 9mm
infrared patio heater
matlab svm plot decision boundary
briar bay townhomes for sale
free printable bible study worksheets women
twilight fanfiction bella has asthma
six syllable types worksheets
citroen nemo dashboard warning lights
edit site binding
5 sanhi at bunga ng lindol
quant fund performance 2022
best entry level competition pistol
hyatt long beach pool
tau empire codex 9th edition pdf download
hypnosis manual pdf
kepware server ex 6
donald smith cherish
newark housing authority jobs
railway steel for sale
samsung data center ssd
lofts in atlanta for sale
mp3 format for car usb
gregory x vanessa fanfiction lemon
mi cleaner mod apk
geeetech a10t
mk4 air rifle price in india
enable docp msi
james goodnight wife
used campers pikeville ky
income support change of circumstances
ampscript between
alpena county jail inmates mugshots
blur in bluebeam
creative imedia pdf
nr2003 cup90
bellway rosedale showhome
2004 jayco eagle specs
java fhir parser
what dissolves plastic but not metal
3x3 picture
vw cc for sale
when does pwc start internship start
retopology modifier 3ds max
wei wuxian and the juniors fanfiction
next day tianeptine
how much is a mazda 3 catalytic converter worth in scrap
cozy memories blanket
ug471 xilinx
123movies spider man far from home free
generic tokens hackerone
header leak sealant
nursing conferences hawaii 2022
did electrical limerick
cbre severance package
midas ngen 2020 crack
wareham arrests today
scale model calculator
bmw x3 missing on acceleration
fnf sonic exe youtube
baking ingredients cost calculator
animal spirit deck meanings
stm32 hal timer one pulse mode example
messer cutting systems manual
apartments on woodward ave
ironman hypixel skyblock guide
19 at main flowery branch
annie mai thai mr right
lg l322dl firmware
ikea blinds smart
penfed checking account requirements
calais custom homes for sale
freightliner cascadia fuse box cover
dell t7810 nvme
wgu software development classes
ocr computer science a level databases
arcam amplifier for sale
fundraising peel off cards
california traffic ticket forgiveness
1960 schwinn catalog
doosan forklift wont go into gear
miter saw measuring system
detached houses for sale in pontefract
ue4 translucent material
lewis county crime stoppers
this action requires an interactive windows
att u318aa frp bypass
alumacraft canoe models
2003 dutchmen lite 27bh weight
who makes insignia dryers
plex fast forward settings
verizon 5g sim card reddit
performance assessments wgu reddit
mercedes atego 815 mpg
laserlyte pistol battery replacement
90s convertibles
sap dms content server configuration
dyson am02 disassembly
vermeer 1250 chipper specs
40x60 pole barn pictures
studio ghibli notion covers
freddy rodriguez height
simchart 39 post case quiz
bose soundlink software update
best paint for concrete floors
oci runtime create failed file exists
magic journeys janelle instagram
texas city permit fees
java textbook
waf 403 forbidden
p008a00 mack
dead by daylight huntress x child reader
anamnesis ffxiv
glm function in r family
chapter 11 the cardiovascular system figure 11 10 answers
epping ongar railway locomotives
sp nd8 pag
1970 plymouth roadrunner project for sale
cubensis spore print color
caravan awning rail
dlq33 real gun
hells angels nj
postgres cursor commit
cta bus routes map
radio frequency exposure health effects
pnb foreclosed properties in dipolog city
does nordvpn work in iran
datto workplace install
dmi edit ami
adding quadratic term to regression in r
kodak color film types
mercedes w205 pyrofuse
istar a8000 plus
i 9 form authorized agent
st jude dream homes 2022
shaun hamilton funeral notices
wyse d90q10 windows embedded 7
edid emulator reddit
chess board html css
lost ark virtues guide maxroll
bbc news casters female
inode number in linux
mychoice rewards hollywood casino
meeting ldr reddit
target housing phone number
nighthawk speed test inaccurate
m57 low rail pressure
studio lighting course
platinum truck parts
import could not be resolved vscode pylance
garden of the gods death september 2020
sds011 github
iracing guide for beginners
geography topics primary school
c connect to socket
rotiform ozr
aldi petrol lawn mower
decal designer ue4
school novels
leupold deltapoint pro co witness sights p320
wind tarps
scripts to practice acting girl
henrico county rules and regulations
dxd fanfic issei betrayed
gwm cannon service light reset
best calamity armor
codesignal general coding assessment score reddit
kenworth smart wheel switches
booth seating for sale
setup shopify api
how to beat evony
reddit confessions stories
o scale railroad buildings
new holland mc28 problems
auctions fresno california
04 chevy tahoe radio wiring diagram
how to print array elements in python without brackets
download ps4 pkg viewer
houses to rent under r4000 in cape town
thranduil x servant reader
free campground reservation template
blt payday download
macd backtest python
honda shadow 1100 2 into 1 exhaust
dell xps 8940 motherboard chipset
300 prc vs 338 lapua ballistics chart
harlan iowa mayor
2007 honda vtx 1300 specs
minimum compliance topology optimization
avamar shutdown procedure
pax brick maker
deaths in philadelphia yesterday
stalled due to write throttle
ip link show can0
astrology predictions for 2021 covid
popcorn sutton moonshine distillery
csx gondola
bouncing ball p5js
mud mosque
blood god crest hypixel skyblock
cardinal mule shed mover for sale near kemerovo
ipyleaflet example
we buy used furs
sbd singlet sizing
west virginia german shepherd puppies
pixinsight linear fit
brillman magneto
nebra bluetooth not working
graphic content warning examples
gree ac 3 ton
gun with bayonet
comsol tutor
st mary queen of peace
dow chemical engineer salary reddit
used double wide mobile homes
crave dry cat food
minereum bsc sell
4 cycle scooter
95 suburban transmission for sale
jeep radio problems
panama city beach arrests today
auto glass training near me
responsive reading pastor installation
the bet answer key
pioneers of family therapy
undertale system fanfiction
where sea and ocean meet in lagos
vapor empire phone number
zabbix mib to template
val6 heater fuel filter
print all subsequences of a string leetcode
brookside orchids coupon
azure function stopping jobhost
scp lockdown game
https my918 co
2005 4runner rock sliders
plot points on a map using latitude and longitude
hybrid aria chapter 7
2005 volvo d12 engine
lorry queues at dover today
zwickey funeral home
thieving skydiver full art
superior transportation
spell caster names
empyrion galactic survival ancient tower
start blocked freightliner
hspp psychology
rgb 10 bit hdr
best pallet forks for kubota bx
symbol bar code scanner driver
roblox brookhaven play free online
airforce texan silencer
factory five 818 k20
netgear nighthawk x6s blinking lights
custom fender decal
horse farms for sale in hollis maine
valorant ascii art
better love challenge lcbc
apex angler qwest pro troll for sale
fanart girl
newton county clerk
pdf417maker
amos shawnee obituaries
ryan xu collinstar
gina wilson all things algebra 2012 2016 answer key pdf
power automate encoding
extar ep9 foregrip
cmos battery msi motherboard
manual labor jobs no experience
bc game referral promo code
blown pro street cars for sale
ebr school grades
methods of motivation studysync answers
modbus tcp port
brooder heat lamp assembly
patrick trailer sales
monster universal 5 device remote control manual
fortnite creative xp map code season 1 chapter 3
fg xr6 turbo for sale
maerynn death
mc eternal quests
smart monitoring app
north western province term test papers 2019 with answers grade 8
harley turn signal module problems
fnf corrupted ron
lg stylo 6 usb drivers
john deere dozer transmission fault codes
gold pedestal vase
psychedelic truffles amsterdam
osu online github
conda install dbt
hoosier classic one loft race
add cronos to metamask
linksys wrt32x review
xtaskcreate esp32
mobile home dealerships in wv
seminole county jury duty
qgis random points inside polygons density
kohler flush button
minilite replica wheels
instant hedge fence
screamin eagle super tuner pro download
shonen character generator
chainsaw carburetor adjustment tool near me
illinois school code teacher dismissal
fnf vs south park mod
ocean freight shipping quote
music symbols and meanings pdf
biglaw signing bonus
lml p2493
opelousas man found dead
rv wood trim molding
how to create music bot in discord
ricoh firmware update tool
remarry my ex wife novel
buod ng pelikulang anak brainly
zynq ethernet emio
wordle 288 trope or fewer
connection is on wan and limit is set plex
open source pellet burner
mdm lock bypass
rotate with mouse unity
input with select dropdown
setedit working codes
honeypot smart contract github
how to set up ossc for ps2
2007 skeeter 20i specs
unity dps shader
raveos overclock
fnf vs ron unblocked
virsh autostart not working
16 10mm barrel
launch crp 129x
marble supplier
mon amour game
sirio 827 assembly
2002 gsxr 600 for sale near me
npm err invalid version
skyrim elven armor retexture
holley terminator x software requirements
whirlpool washer fuse location
9mm mac 10 clone
honey island vs manchac swamp
ekipa ceo film dailymotion
dfs portal
zoho deluge list to string
dmr programming software
snmp exporter windows
freightliner bucket truck weight
sailboats for sale pacific northwest
12 week workout program men
vintage tyco ho slot car parts
killi data
cheap stair spindles
uss hector deck log
heets mauve review
1971 cushman truckster for sale
vw amarok v6 stage 3
polaris ranger 800 cutting out
python histogram table
4 bedroom houses for rent iowa city
acc lamborghini huracan st monza setup
how to make holographic cards
online auctions harrisburg pa
flutter textfield show password
how to play phasmophobia on oculus quest 2
unit 8 right triangles and trigonometry homework 3 answers key pdf
cookie clicker html
mitre saw workbench
hoover power scrub elite water tank leaking
minimum number of atoms in a molecule
in the figure shown normal force exerted by block b upon block a is
gas prices in wisconsin today
fake green pass qr code
planet crafter map
synology user permission
parchg vasp
metedeconk guest fees
my miracle luna chapter 4
container home builders new england
gus thornhill phone number
plesk database remote access
thrustmaster t248 vs t300rs
summit utility structures
10700k fan curve
dbd shards hack
crate 4x12 cabinet specs
vivado source tcl script with arguments
city of gardena address
harlan parts
cmu acceptance rate 2026
chicago pd fanfiction jay infection
sinotec tv manual pdf
cat 1 back blade
overstock rv furniture
buick nailhead sound
1994 chevy cavalier z24
bpd harassment
audi air conditioning problems
seagrave ladder truck manual
is anesthesiology a good career
mcb infinix mobile installment plan 2020
verizon csr number
stylized environment unreal engine
bahama mamas mlo
cox and son funeral home obituaries
azure blob storage this request is not authorized to perform this operation
32 in shower door
bellingham tiny house community
ssh asic miner
subzero webtoon 115
free leetcode premium
a hollow sphere of mass m and radius r is rolling downward
seattle missing persons report
guns international single shot rifles
how to apply for child care assistance in california
vitamin d makes ocd worse reddit
yamhill county population
small antenna rotator
libra lucky color
middle c frequency 256
graphing linear equations using a table of values worksheet
ey salary chart
finfet tsmc
hydraulic post drivers for tractors
is the easter bunny real or fake
federal government grants 2022
birdcloud songs
firing reproduction civil war rifles
qnap cannot access shared folder windows 10
nessus asteroid astrology
used unmounted knuckle boom for sale near virginia
high temperature silicone metal casting
jackson tremolo
cisco ise study guide
swap alternate elements in an array python
home assistant local images
firewall pad material
extra strong walking cane
deploy yolov5 model
ranch horse shows in nebraska
sgmediation stata
geo tracker body lift
lexus es300 for sale craigslist
calum scott rise spotify
sahara protocol qualcomm
bladenboro obituaries
refurbished catalytic converters
murthy asu
downhole packer
diagonal corner cabinet
cisco jabber msi download
bmo asset management careers
mendeola s4
is costco salmon sushi grade
costco infinity boss three hybrid bike
haiti six families
audio plugin deals
flour gold recovery
rifle barrel fluting weight reduction
pcap analysis
50 barge for sale
sos meaning in phone
ventura 250 annex instructions
react native android ndk
ww2 jeep parts
crazy games 3
kwwl news anchor leaving
30 hp engine price
whistlindiesel reddit
urban area case study igcse
2006 dodge charger transmission not shifting
faulty ground wire
gm ls7 lifters and trays
when is boxabl going public
southwestern jail logan wv mugshots
fox valley obituaries
zx81 programming manual
cheap old houses tennessee
ikea supply chain technology
coleman mini bike speedometer
ford obd app for android
an amusement park ride consists of a large vertical wheel of radius r that rotates counterclockwise
alternative to thingiverse
6x6x16 pressure treated post menards
uvxy predictions 2022
phonak hearing aid domes differences
beagle chihuahua mix for sale
outdoor propane fireplace walmart
cayuga county sheriff facebook
mineshafts and monsters lite mod list
healing beds tesla
b58 forged pistons
convert to base64 string
pykalman pairs trading
lifan 152f engine parts breakdown
starlink ipv6 settings
20 years old in japanese
new fashion brands
harbor freight brush grubber
woburn medical associates patient portal
new kung fu movies
eagle harbor hoa carrollton va
kijiji barrie puppies for sale
taylor morrison upgrade price list 2022
calibre add metadata source
sccm console update files location
chaos imperial knight stl
fssp oklahoma
install nvidia drivers kali 2020 laptop
sunmi lee
relate the principles to at least three specific concepts covered in chapter 9
coleman 550 utv cab enclosure
mh general trading
transalpine redemptorists live stream
wreck on hwy 16 senoia ga today
jisoo is underrated reddit
tcp server matlab
henri wintermans cigars
warrior cats what if firestar was clan born fanfiction
matlab stepinfo
edward cullen x reader jealous bella
1996 impala ss 6 speed for sale
gsap timeline
delta 8 space rings reddit
antd table pagination onchange
trustone material
fat people stories
ogun ti afi mo witch
coker creek gold panning
bmf zinn pewter
largest preformed pond
shooting in norton va 2022
mahindra 4540 review
cheap unrestricted land for sale
fnf release tom
parrots for sale ontario california
russian movies online
henry county jail iowa
logisim circuits examples
who wrote hurt
filters for telescopes
american standard silver 16 review
ftdi mprog download
stormworks oil refinery location
freightliner instrument cluster
what type of lawyer do i need to sue a funeral home
gm manual steering box
how to mount a swamp cooler on the roof
a gas undergoes a cyclic process abcda as shown in the figure
stfc romulan supply ship
cartoon doll emporium shut down
coin flip game python
southeastern group
w680 motherboard
krawler hauler for sale near virginia
gpib example c code
is nevada giving extra food stamps this month
golden teacher fruiting conditions
prefab cabins for sale near brooklyn
preachers on daystar
tapered wood dowel rods
latest thai drama series 2021
which minecraft youtuber would date you
small inboard motor boat plans
seaside mortuary
how to listen to audios on kemono party
is romtech legit
notorious client vrchat discord
hugo partial parameters
friend blocked me after argument reddit
trusted rc vendors usa
eureka math 7th grade module 3 answer key
free solidworks model library
jake xerxes fussell guardian
hagon grasstrack bike for sale
obsidian outlook integration
wilmington star news classified ads
rg63 revolver parts
sirius x hermione fanfiction
addressables loadassetasync slow
white hawk jayco floor plans
mossberg 935 trigger upgrade
orgain protein powder reviews
cost to install porsche sport exhaust
stucco cleaner home depot
sterling 32 x 32 shower stall
clayton homes options
volume of cone worksheet
can you take vitamin d3 and vitamin c together
veeam backup to object storage
2017 chrysler 300 radio reset
pixel sound balance
maurice scott wife
mali gpu ddk
ld vapes
pjt partners salary
young justice fanfiction wally dad
mercedes w205 pyrofuse
ocean going yachts
mac 4 into 1 exhaust cb650
predator hunters uk
how to buy bulk items
alpaca trade api pypi
police scanner knoxville tn
babylon 5 3d models
netbeans no executable location available at line
sqlalchemy json dumps
the urinator coupon code
lspdfr state trooper vest
wholesale gold jewelry manufacturers
aws redis client
small lcd screen raspberry pi
vulkan samples github
install jax
boyd coddington cars for sale
dramione fanfiction arranged marriage
5g gate opener
gcse biology edexcel
weight bench sports authority
mainstays infrared heater manual
cessna 150 oil temp probe
google slides autofit text
how to join cisco webex meeting from desktop
automotive engine machine shop near virginia
setxkbmap config file
ice cold sore reddit
rik tv cyprus live online
what causes a glock to go full auto
thermostat gasket sealant
jacob black x reader cuddle
one day i just wanna hear you say i like you lyrics
stoeger condor vs condor field
the amortization of a premium on bonds payable will
datsun parts canada
arctic fox 29l used
double wide manufactured homes fleetwood
starlights gypsy horses for sale
cloudstone curio price
used taylor bowls for sale near maryland
marauders fanfiction remus growls
dictionary comprehension python syntax
book of nod pdf trove
vuse solo cartridges refills
secret admirer message
remove smallest codeforces
cannot find bean with qualifier spring boot
scapy padding
rs3 forza horizon 5
maze generator minecraft with rooms
nginx reverse proxy error page
hotels sponsoring h2b visa
keto dad pizza
compare datetime index pandas
astragal trim
female preaching dresses
mpc ansys
bot binary xml
houses for sale in wadsworth ohio
zibo 737 flows
hsn host dies
brake pedal switch location
mac charging but percentage not increasing
international 4700 mirror
alsea bay clamming
buddy l trucks 1920s
orlando treehouse airbnb
angry bearded dragon
famous belly dancers
oncue grill
ffxiv best class 2021
is a menudo preterite or imperfect
microsoft start news
sort a list of dates in ascending order
pioneer double din radio
japan beauty products supplier
truss uplift pictures
car pooling system project report pdf
rochester ny police records
insignia turbo actuator problems
uk alpaca baby alpaca silk dk
lowe fishing machine 175
cyberark architecture diagram
pocket door hardware cad
hasco dammam
vgg feature extraction pytorch
audition songs for disenchanted
black 80s fashion
chevy nv3500 4x4 transmission
basic veterinary knowledge quiz
foreclosed house and lot makati
teacup pigs for sale perth
ath11k openwrt
build a gmc truck
wow bot discord
ayurvedic millet porridge
rent camp rush
supertech advanced full synthetic 0w20
houston reduced homelessness
willamette valley french brittany
fnf bf 3d model
ballistic dart gun
mcrd parris island gift shop
win32gui find button
mn triton dash removal
viking oven knob replacement
c64 sd card
kohler shower valve won t get hot
ue4 grid system
how many packages get seized by usps
netflix is installed on a usb drive that is not currently connected to your amazon fire tv
imgbb log out
towing business for sale in montana
smok g priv 2 drip tip size
vitis hello world
backtrader tutorial
sandvik gun drill
street bob fairing with speakers
colt johnson wife
river valley basketball schedule
dnd 5e small size height
cooks forest cabin rentals with pool
quickie wheelchair price
ets2mods2019 gumroad
judge farr tampa
colgate mouthwash 1l
samsung pip hdmi
how to cheat klarna reddit
used class c motorhomes in denton texas
mason city man found dead
savage axis xp camo 270
metatune download
cesium 3d tiles tutorial
cyberpowerpc gamer master power supply
llama baby blanket crochet pattern
la scala vs cornwall
exagear termux
2021 fuzion toy hauler
mt5 indicator to ea
mudblazor admin dashboard
mask rcnn implementation pytorch
wings of fire book 3 graphic novel
bvot wireless mobile wifi setup
volvo penta aq125 fuel consumption
houses for rent in winslow arkansas
master of science in analytics
wagner wear
lia sophia jewelry value
a140 accident yesterday
helm template range example
j610f imei repair cheetah tool
2014 dodge ram 1500 no power to ignition
star wars potf figures price guide
60 hz notch filter
wavelet transform ecg python
outcast land for sale
marron season western australia
cugir m10
adopt an older sheltie
ilo 5 user guide
traders countdown clock
g54b turbo
betta fish charity
rk61 rgb software download
best of shadmehr
sneak a toke
zillow marion nc for rent
honey select 2 head mod
franklin county down payment assistance program
facebook hacking services
2009 chevy suburban fuse box diagram
jeep yj windshield wiper parts
k20a2 pistons
best osu skins 2021
pycharm permission denied macos
auto shop for sale texas
rtx 3080 mini
deep mimo dataset github
myconid 3d model
db link not working in oracle
beach wheelchair rental gulf shores
bp graduate scheme 2022
juiced bikes scorpion
optuna tutorial
cost of roof vents
all seeing eye symbol meaning
m240 ammo bag
100mg kratom extract
hearthstone legendary crafting guide
eret x badboyhalo
trane 4twr4 specs
arc length of a spiral calculator
kayak diy stake out pole
is my boyfriend fattening me up quiz
lettering generator free
iva kuce
iperf truenas
billionaire baby girl wattpad
react typescript accordion
sds imports customer service
small homes for sale in mo
extensionator device
eurotic tv
bx tractor forum
minitab 19 support
kneeboard nav log
natsume bsd
craigslist brownsville farm and garden
fallout 4 conquered commonwealth mod
joshdub x reader
accident on 224 today
upcycle bookshelf
preowned scamp camper for sale near virginia
2022 honda pioneer 700 release date
akm shortage
used dog carts for sale near krasnoyarsk
p0451 ford focus
geometry unit 7 test right triangles and trigonometry
white runtz cartridge
launtel contact
webgl scroll animation
m56 traffic live update
r6000 to naira
risk advisory interview questions
dell colour profiles
ssr in stata
tank deer blinds
sd7h15 sanden
instagram mass report bot online
dover port traffic cameras near virginia
radiologist salary vancouver
houdini create vector field
lancaster speedway ticket prices
stand in terraria
eac detected lost ark
german for baby boy
vr30ddtt transmission upgrade
uniswap v2 price oracle
dodge immobilizer bypass
plug tv
after reading a four option multiple choice question
1996 ford ranger throttle position sensor location
lowrance elite 9 fs
how to date a flintlock pistol
rs404 input seal
milner shock absorbers
lake tahoe winter wedding packages
craigslist carros en venta
deer creek apartments phone number
ncis mcgee has had enough fanfiction
winch motor diagram
intj intimacy
phoenix technology shotgun stock
omad 500 calories reddit
discord nitro code shoppy
lg 24gl600f review
which of the following would best describe the role of schools
vanderploeg funeral home
nominal size lumber definition
capital one case interview github
wix edit text in table
hackerrank kangaroo solution python
why did barbara leave judge faith
sentence builder online
organizational chart of convenience store
utv rental
18kt hge mens ring
in law suite for rent atlanta
puppies for sale dorchester
my roommate went to jail
kpop x reader
harry potter fanfiction harry appearance changes
|
https://sema4.pl/11/09_8625.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
"No grammar available for namespace" Error When Creating A Business Rule On A Human Task
(Doc ID 2420721.1)
Last updated on OCTOBER 21, 2021
Applies to:Oracle Business Process Management Suite - Version 12.2.1.1.0 and later
Information in this document applies to any platform.
Symptoms
In JDeveloper, when attempting to create a Business Rule on a Human Task, the following error occurs.
Changes
Cause
In this Document
|
https://support.oracle.com/knowledge/Middleware/2420721_1.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
One of Angular's greatest strengths over its contemporaries like React or Vue is that it's a framework. What does this mean in the practical sense? Well, because you're providing the defaults for everything right out-of-the-box, you have a set of guard rails to follow when architecting new things. A set of baseline rules for things to follow, so to speak.
One such guard rail comes in the form of the
@angular/forms package. If you've used Angular for long, you're doubtlessly familiar with the
[(ngModel)] method of two-way data binding in the UI. Seemingly all native elements have support for this feature (so long as you have
FormsModule imported in your module).
More than that, if you want more powerful functionality, such as disabling an entire form of fields, tracking a collection of fields in a form, and doing basic data validation, you can utilize Angular Reactive Forms'
[formControl] and do all of that and more.
These features are hugely helpful when dealing with complex form logic throughout your application. Luckily for us, they're not just exclusive to native elements - we can implement this functionality into our own form!
Example
It's hard for us to talk about the potential advantages to a component without taking a look at it. Let's start with this component, just for fun.
It'll allow you to type in data, have a header label (as opposed to a floating label, which is notoriously bad for A11Y), and even present a fun message when "Unicorns" is typed in.
Here's the code:
typescript
import { Component, Input } from "@angular/core";@Component({selector: "app-example-input",template: `<label class="inputContainer"><span class="inputLabel">{{ placeholder }}</span><</label><You unlocked the secret unicorn rave!<span>🦄🦄🦄</span></p><!-- This is for screen-readers, since the animation doesn't work with the 'aria-live' toggle --><p aria-{{isSecretValue? "You discovered the secret unicorn rave! They're all having a party now that you summoned them by typing their name": ""}}</p>`,styleUrls: ["./example-input.component.css"]})export class ExampleInputComponent {@Input() placeholder: string;value: any = "";get isSecretValue() {return /unicorns/.exec(this.value.toLowerCase());}}
With only a bit of CSS, we have a visually appealing, A11Y friendly, and quirky input component. Look, it even wiggles the unicorns!
Now, this component is far from feature complete. There's no way to
disable the input, there's no way to extract data out from the typed input, there's not a lot of functionality you'd typically expect to see from an input component. Let's change that.
ControlValueAccessor
Most of the expected form functionality will come as a complement of the
ControlValueAccessor interface. Much like you implement
ngOnInit by implementing class methods, you do the same with ControlValueAccessor to gain functionality for form components.
The methods you need to implement are the following:
writeValue
registerOnChange
registerOnTouched
setDisabledState
Let's go through these one-by-one and see how we can introduce change to our component to support each one.
Setup
To use these four methods, you'll first need to
provide them somehow. To do this, we use a combination of the component's
providers array,
NG_VALUE_ACCESSOR, and
forwardRef.
typescript
import { forwardRef } from '@angular/core';import {ControlValueAccessor, NG_VALUE_ACCESSOR} from '@angular/forms';/*** Provider Expression that allows your component to register as a ControlValueAccessor. This* allows it to support [(ngModel)] and ngControl.*/export const EXAMPLE_CONTROL_VALUE_ACCESSOR: any = {/*** Used to provide a `ControlValueAccessor` for form controls.*/provide: NG_VALUE_ACCESSOR,/*** Allows to refer to references which are not yet defined.* This is because it's needed to `providers` in the component but references* the component itself. Handles circular dependency issues*/useExisting: forwardRef(() => ExampleInputComponent),multi: true};
Once we have this example provide setup, we can now pass it to a component's
providers array:
typescript
@Component({selector: 'app-example-input',templateUrl: './example-input.component.html',styleUrls: ['./example-input.component.css'],providers: [EXAMPLE_CONTROL_VALUE_ACCESSOR]})export class ExampleInputComponent implements ControlValueAccessor {
With this, we'll finally be able to use these methods to control our component.
If you're wondering why you don't need to do something like this with
ngOnInit, it's because that functionality is baked right into Angular. Angular always looks for an
onInitfunction and tries to call it when the respective lifecycle method is run.
implementsis just a type-safe way to ensure that you're explicitly wanting to call that method.
writeValue
writeValue is a method that acts exactly as you'd expect it to: It simply writes a value to your component's value. As your value has more than a single write method (from your component and from the parent), it's suggested to have a setter, getter, and private internal value for your property.
typescript
private _value: any = null;@Input()get value(): any { return this._value; }set value(newValue: any) {if (this._value !== newValue) {// Set this before proceeding to ensure no circular loop occurs with selection.this._value = newValue;}}
Once this is done, the method is trivial to implement:
typescript
writeValue(value: any) {this.value = value;}
However, you may notice that your component doesn't properly re-render when you update your value from the parent component. Because you're updating your value outside of the typical pattern, change detection may have a difficult time running when you'd want it to. To solve for this, provide a
ChangeDetectorRef in your constructor and manually check for updates in the
writeValue method:
typescript
export class ExampleInputComponent implements ControlValueAccessor {// ...constructor(private _changeDetector: ChangeDetectorRef) { }// ...writeValue(value: any) {this.value = value;this._changeDetector.markForCheck();}
Now, when we use a value like
new FormValue('test') and pass it as
[formControl] to our component, it will render the correct default value
setDisabledState
Implementing the disabled state check is extremely similar to implementing value writing. Simply add a setter, getter, and
setDisabledState to your component, and you should be good-to-go:
typescript
private _disabled: boolean = false;@Input()get disabled(): boolean { return this._disabled; }set disabled(value) {this._disabled = coerceBooleanProperty(value);}setDisabledState(isDisabled: boolean) {this.disabled = isDisabled;this._changeDetector.markForCheck();}
Just as we did with value writing, we want to run a
markForCheck to allow change detection to work as expected when the value is changed from a parent
It's worth mentioning that unlike the other three methods, this one is entirely optional for implementing a
ControlValueAccessor. This allows us to disable the component or keep it enabled but is not required for usage with the other methods.
ngModeland
formControlwill work without this method implemented.
registerOnChange
While the previous methods have been implemented in a way that required usage of
markForCheck, these last two methods are implemented in a bit of a different way. You only need look at the type of the methods on the interface to see as much:
typescript
registerOnChange(fn: (value: any) => void);
As you might be able to deduce from the method type, when
registerOnChange is called, it passes you a function. You'll then want to store this function in your class instance and call it whenever the user changes data.
typescript
/** The method to be called to update ngModel */_controlValueAccessorChangeFn: (value: any) => void = () => {};registerOnChange(fn: (value: any) => void) {this._controlValueAccessorChangeFn = fn;}
While this code sample shows you how to store the function, it doesn't outline how to call it once stored. You'll want to make sure to call it with the updated value on every update. For example, if you are expecting an
input to change, you'd want to add it to
(change) output of the
input:
html
<
registerOnTouched
Like how you store a function and call it to register changes, you do much of the same to register when a component has been "touched" or not. This tells your consumer when a component has had interaction or not.
typescript
onTouched: () => any = () => {};registerOnTouched(fn: any) {this.onTouched = fn;}
You'll want to call this
onTouched method any time that your user "touches" (or, interacts) with your component. In the case of an
input, you'll likely want to place it on the
(blur) output:
html
<
Consumption
Now that we've done that work let's put it all together, apply the styling from before, and consume the component we've built!
We'll need to start by importing
FormModule and
ReactiveFormModule into your
AppModule for
ngModel and
formControl support respectively.
typescript
import { NgModule } from '@angular/core';import { BrowserModule } from '@angular/platform-browser';import { FormsModule } from '@angular/forms';import { ReactiveFormsModule } from '@angular/forms';import { AppComponent } from './app.component';import { ExampleInputComponent } from './example-input/example-input.component';@NgModule({imports: [ ReactiveFormsModule, FormsModule, BrowserModule ],declarations: [ AppComponent, ExampleInputComponent ],bootstrap: [ AppComponent ]})export class AppModule { }
Once you have support for them both, you can move onto adding a
formControl item to your parent component:
typescript
import { Component } from '@angular/core';import {FormControl} from '@angular/forms';@Component({selector: 'my-app',templateUrl: './app.component.html',styleUrls: [ './app.component.css' ]})export class AppComponent {control = new FormControl('');modelValue = "";}
Finally, you can pass these options to
ngModel and
formControl (or even
formControlName) and inspect the value directly from the parent itself:
html
<h1>Form Control</h1><app-example-input</app-example-input><p>The value of the input is: {{control.value}}</p><h1>ngModel</h1><app-example-input</app-example-input><p>The value of the input is: {{modelValue}}</p>
If done properly, you should see something like this:
Form Control Classes
Angular CSS masters might point to classes that's applied to inputs when various state changes are made.
These classes include:
ng-pristine
ng-dirty
ng-untouched
ng-touched
They reflect states so that you can update the visuals in CSS to reflect them. When using
[(ngModel)], they won't appear, since nothing is tracking when a component is
pristine or
dirty. However, when using
[formControl] or
[formControlName], these classes will appear and act accordingly, thanks to the
registerOnChange and
registerOnTouched functions. As such, you're able to display custom CSS logic for when each of these states are met.
Gain Access To Form Control Errors
Something you'll notice that wasn't implemented in the
ControlValueAccessor implementation is support for checking whether validators are applied. If you're a well-versed Angular Form-ite, you'll recall the ability to validate forms using validators appended to
FormControls. Although a niche situation — since most validation happens at the page level, not the component level — wouldn't it be nice to check when a form is valid or not directly from the component to which the form is attached?
Well, thanks to Angular's DI system, we can do just that!
However, we'll need to make a few changes to the form input we made before. While we previously implemented a provider for form controls, we now need to manually assign the provider ourselves in the constructor:
typescript
import {Component,Input,ChangeDetectorRef,Optional,Self,AfterContentInit} from "@angular/core";import { ControlValueAccessor, NgControl } from "@angular/forms";@Component({selector: "app-example-input",templateUrl: "./example-input.component.html",styleUrls: ["./example-input.component.css"]})export class ExampleInputComponent implements ControlValueAccessor, AfterContentInit {constructor(@Optional() @Self() public ngControl: NgControl,private _changeDetector: ChangeDetectorRef) {if (ngControl != null) {// Setting the value accessor directly (instead of using// the providers) to avoid running into a circular import.ngControl.valueAccessor = this;}}// ...}
In this code sample, we're using the
@Self decorator to tell the dependency injection system that "this component itself should have been provided a
formControl or
formControlName". However, we want the component to work even when
FormModule isn't being used, so we allow the dependency injection to return
null if nothing's passed by utilizing the
@Optional decorator.
Now that you have the
ngControl, you can access the
formControl by using
ngControl.control.
typescript
ngOnInit() {const control = this.ngControl && this.ngControl.control;if (control) {console.log("ngOnInit", control);// FormControl should be available here}}
You have a ton of different props you're able to access for the control's metadata. For example, if you want to check when errors are present, you can do the following:
typescript
get errors() {const control = this.ngControl && this.ngControl.control;if (control) {return control.touched && control.errors;}return null;}
And then reference it in the template:
html
<span class="inputLabel" [class.redtext]="errors">{{ placeholder }}</span>
Now that you have the component implementation, you can add validators to your
FormControl:
typescript
import { Component } from '@angular/core';import {FormControl, Validators} from '@angular/forms';@Component({selector: 'my-app',templateUrl: './app.component.html',styleUrls: [ './app.component.css' ]})export class AppComponent {control = new FormControl('', Validators.required);}
Not only do you have a wide range of Angular-built validators at your disposal, but you're even able to make your own validator!
Conclusion
Enabling
formControl and
ngModel usage is an extremely powerful tool that enables you to have feature-rich and consistent APIs across your form components. Using them, you can ensure that your consumers are provided with the functionality they'd expect in a familiar API to native elements. Hopefully, this article has provided you with more in-depth insight that you're able to use with your own components.
If you're interested in learning more about Angular, please sign up for our newsletter down below! We don't spam and will notify you when new Angular articles are live! Additionally, if you'd like to ask in-depth questions or chat about anything Angular related, don't forget to join our Discord Server, where we talk code and more!
|
https://unicorn-utterances.com/posts/angular-components-control-value-accessor
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Image deduplication is the process of finding exact or near-exact duplicates within a collection of images. For example:
In particular, note that the middle image in the bottom row is not identical to the other two images, despite being a "duplicate". This is where the difficulty here lies - matching pure duplicates is a simple process, but matching images which are similar in the presence of changes in zoom, lighting, and noise is a much more challenging problem.
In this section, we go over some key technologies (models, modules, scripts, etc...) used to successfully implement an image deduplication algorithm.
A generic embedding model turns images into dense vectors; an encoder-based embedding model outputs dense vectors which encode scale-invariant edges and corners within the input image as opposed to pure semantic information. For example, while two images of different dogs may result in two very similar encodings when using traditional object recognition embedding models, the output embeddings would be very different when using encoding-based embedding models. This blog post is a great resource for understanding contrastive loss.
To accomplish this, these encoder models shouldn't be trained on traditional image recognition/localization datasets such as CIFAR or ImageNet; instead, a siamese network trained with contrastive or triplet loss must be used. Among all these encoder-based embedding models,
resnet is a widely applied one. In this tutorial, we take
resnet50 as an example to show Towhee's capability of comparing similar images in a few lines of code with image-processing operators and pre-built embedding models:
from towhee import dc dc_1 = dc['path_1', 'path_2']([['Lenna.png', 'Lenna.png'], ['Lenna.png', 'logo.png']])\ .image_decode['path_1', 'img_1']()\ .image_decode['path_2', 'img_2']()\ .image_embedding.timm['img_1', 'emb_1'](model_name='resnet50')\ .image_embedding.timm['img_2', 'emb_2'](model_name='resnet50') dc_1.show()
A
resnet50 model is trained to output extremely close embeddings for two "similar" input images, i.e. zero, one, or many of the following transformations:
These transformations render the model invariant to changes in zoom, lighting, and noise.
Now we have the embedding of the images stored in dc, but the embeddings themselves are useless without a similarity metric. Here, we check if the L2 norm of the difference vector between the query and target images is within a certain threshold. If so, then the images are considered duplicates.
Towhee also support runnig a self-defined function as operator with
runas_op, so users are allowed to run any metrics effortless once they define the metric function:
import numpy as np thresh = 0.01 dc_1.runas_op[('emb_1', 'emb_2'), 'is_sim'](lambda x, y: np.linalg.norm(x - y) < thresh)\ .select['is_sim']()\ .show()
This is an empirically determined threshold based on experiments run on a fully trained model.
Putting it all together, we can check if two images are duplicate with the following code snippet:
from towhee import dc import numpy as np thresh = 0.01 res = dc['path_1', 'path_2']([['path/to/image/1', 'path/to/image/2']])\ .image_decode['path_1', 'img_1']()\ .image_decode['path_2', 'img_2']()\ .image_embedding.timm['img_1', 'emb_1'](model_name='resnet50')\ .image_embedding.timm['img_2', 'emb_2'](model_name='resnet50')\ .runas_op[('emb_1', 'emb_2'), 'is_sim'](lambda x, y: np.linalg.norm(x - y) < thresh)\ .select['is_sim']()
And that's it! Have fun and happy embedding :)
|
https://codelabs.towhee.io:443/build-a-image-deduplication-engine-in-minutes/index
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
PROBLEM LINK:
Author: Abhishek Ghosh
Editorialist: Arkita Aggarwal, Shruti Arya
DIFFICULTY
Simple
PREREQUISITES
Basic observation, GCD
PROBLEM
Given an array A consisting of N elements, you need to find the largest number that divides all of them. Minimum outstanding shares will be the sum of this array divided by this maximal number.
QUICK EXPLANATION
- The highest common factor of the values is found and that is the amount to which share prices can be safely increased.
- The number of shares is then divided by the highest common factor and new number of shares owned is found. The sum of the new values found is the value of minimum outstanding shares.
EXPLANATION
Each value of the array is taken and divided by incrementing integers one by one till they reach a minimum value. The highest integer will be the greatest common divisor(GCD). This GCD will be the new stock price to which the share price can be increased safely.
All the values of the array are divided by this GCD and those values will be the new number of stocks each investor now owns.
The new values of this array are added to find the minimum outstanding shares.
Example
Consider the three values in the array as 2, 4 \;and \;6
In this case the total number of boxes is equal to 12 \;(2+4+6)
The greatest common divisor will be 2 so we can merge two boxes to be considered as 1. This merging of 2 boxes into 1, represents the increase in stock price, and consolidation of shares.
Now, the total number of boxes is equal to 6 \;(1+2+3)
TIME COMPLEXITY
GCD of 2 numbers is computed in O(log(max(A_{i}))) time. This is done for N elements of the array, so the complexity is O(N \cdot log(max(A_{i})).
SOLUTIONS
Setter's Solution
#include <bits/stdc++.h> using namespace std; typedef long long ll; int32_t main() { #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); #endif ios_base::sync_with_stdio(false); cin.tie(NULL); cout.tie(NULL); int T = 1; cin >> T; while(T--) { int n; cin >> n; vector<int> arr(n); for(int &x : arr) cin >> x; int g = arr[0]; for(int i = 1; i < n; i++) { g = __gcd(g, arr[i]); } ll sum = 0; for(int x : arr) { sum += x/g; } cout << sum << "\n"; } return 0; }
|
https://discuss.codechef.com/t/revsplit-editorial/102182
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
DMA_CB_TypeDef Struct Reference
Callback structure that can be used to define DMA complete actions.
#include <em_dma.h>.
Field Documentation
◆ cbFunc
Pointer to callback function to invoke when DMA transfer cycle is done.
Notice that this function is invoked in interrupt context, and therefore should be short and non-blocking.
◆ userPtr
User defined pointer to provide with callback function.
◆ primary
For internal use only: Indicates if next callback applies to primary or alternate descriptor completion.
Mainly useful for ping-pong DMA cycles. Set this value to 0 prior to configuring callback handling.
|
https://docs.silabs.com/gecko-platform/4.0/emlib/api/efm32lg/struct-d-m-a-c-b-type-def
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
SwiftWebSocket alternatives and similar libraries
Based on the "Socket" category.
Alternatively, view SwiftWebSocket alternatives based on common mentions on social networks and blogs.
Starscream9.8 0.0 L1 SwiftWebSocket VS StarscreamWebsockets in swift for iOS and OSX
Socket.IO9.5 1.1 L2 SwiftWebSocket VS Socket.IOSocket.IO client for iOS/OS X.
SwiftSocket8.4 0.0 L3 SwiftWebSocket VS SwiftSocketThe easy way to use sockets on Apple platforms
BlueSocket7.9 2.6 L1 SwiftWebSocket VS BlueSocketSocket framework for Swift using the Swift Package Manager. Works on iOS, macOS, and Linux.
Socks5.9 0.0 L2 SwiftWebSocket VS Socks🔌 Non-blocking TCP socket layer, with event-driven server and client.
BlueSSLService3.7 0.9 SwiftWebSocket VS BlueSSLServiceSSL/TLS Add-in for BlueSocket using Secure Transport and OpenSSL
SocketIO-Kit3.0 0.0 L4 SwiftWebSocket VS SocketIO-KitSocket.io iOS and OSX Client compatible with v1.0 and later
WebSocket2.8 0.0 L3 SwiftWebSocket VS WebSocketWebSocket implementation for use by Client and Server
RxWebSocket1.9 0.0 L4 SwiftWebSocket VS RxWebSocketReactive WebSockets
SwiftDSSocket1.8 0.0 SwiftWebSocket VS SwiftDSSocketDispatchSource based socket framework written in pure Swift
DNWebSocket0.8 0.0 SwiftWebSocket SwiftWebSocket or a related project?
README
SwiftWebSocket
Conforming WebSocket (RFC 6455) client library for iOS and Mac OSX.
SwiftWebSocket passes all 521 of the Autobahn's fuzzing tests, including strict UTF-8, and message compression.
Project Status
I'm looking for someone to help with or take over maintenance of this project.
Features
- High performance.
- 100% conforms to Autobahn Tests. Including base, limits, compression, etc. Test results.
- TLS / WSS support. Self-signed certificate option.
- The API is modeled after the Javascript API.
- Reads compressed messages (
permessage-deflate). RFC 7692
- Send pings and receive pong events.
- Strict UTF-8 processing.
binaryTypeproperty to choose between
[UInt8]or
NSDatamessages.
- Zero asserts. All networking, stream, and protocol errors are routed through the
errorevent.
- iOS / Objective-C support.
Example
func echoTest(){ var messageNum = 0 let ws = WebSocket("wss://echo.websocket.org") let send : ()->() = { messageNum += 1 let msg = "\(messageNum): \(NSDate().description)" print("send: \(msg)") ws.send(msg) } ws.event.open = { print("opened") send() } ws.event.close = { code, reason, clean in print("close") } ws.event.error = { error in print("error \(error)") } ws.event.message = { message in if let text = message as? String { print("recv: \(text)") if messageNum == 10 { ws.close() } else { send() } } } }
Custom Headers
var request = URLRequest(url: URL(string:"ws://url")!) request.addValue("AUTH_TOKEN", forHTTPHeaderField: "Authorization") request.addValue("Value", forHTTPHeaderField: "X-Another-Header") let ws = WebSocket(request: request)
Reuse and Delaying WebSocket Connections
v2.3.0+ makes available an optional
open method. This will allow for a
WebSocket object to be instantiated without an immediate connection to the server. It can also be used to reconnect to a server following the
close event.
For example,
let ws = WebSocket() ws.event.close = { _,_,_ in ws.open() // reopen the socket to the previous url ws.open("ws://otherurl") // or, reopen the socket to a new url } ws.open("ws://url") // call with url
Compression
The
compression flag may be used to request compressed messages from the server. If the server does not support or accept the request, then connection will continue as normal, but with uncompressed messages.
let ws = WebSocket("ws://url") ws.compression.on = true
Self-signed SSL Certificate
let ws = WebSocket("ws://url") ws.allowSelfSignedSSL = true
Network Services (VoIP, Video, Background, Voice)
// Allow socket to handle VoIP in the background. ws.services = [.VoIP, .Background]
Installation (iOS and OS X)
Carthage
Add the following to your Cartfile:
github "tidwall/SwiftWebSocket"
Then run
carthage update.
Follow the current instructions in Carthage's README for up to date installation instructions.
The
import SwiftWebSocket directive is required in order to access SwiftWebSocket features.
CocoaPods
Add the following to your Podfile:
use_frameworks! pod 'SwiftWebSocket'
Then run
pod install with CocoaPods 0.36 or newer.
The
import SwiftWebSocket directive is required in order to access SwiftWebSocket features.
Manually
Copy the
SwiftWebSocket/WebSocket.swift file into your project.
You must also add the
libz.dylib library.
Project -> Target -> Build Phases -> Link Binary With Libraries
There is no need for
import SwiftWebSocket when manually installing.
License
SwiftWebSocket source code is available under the MIT License.
*Note that all licence references and agreements mentioned in the SwiftWebSocket README section above are relevant to that project's source code only.
|
https://swift.libhunt.com/swiftwebsocket-alternatives
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Pjotr's rotating BLOG
Table of Contents
- 1. First code katas with Rust
- 2. GEMMA additive and dominance effects
- 3. Sambamba build
- 4. GEMMA randomizer
- 5. GEMMA, Sambamba, Freebayes and pangenome tools
- 6. GEMMA compute GRM (2)
- 7. Managing filters with GEMMA1
- 8. HEGP and randomness
- 9. GEMMA keeping track of transformations
- 10. Fix outstanding CI build
- 11. GEMMA testing frame work
- 12. GEMMA validate data
- 13. GEMMA running some data
- 14. GEMMA compute GRM
- 15. GEMMA filtering data
- 16. GEMMA convert data
- 17. GEMMA GRM/K compute
- 18. GEMMA with python-click and python-pandas-plink
- 19. Building GEMMA
- 20. Starting on GEMMA2
- 21. Porting GeneNetwork1 to GNU Guix
- 22. Chasing that elusive sambamba bug (FIXED!)
- 23. It has been almost a year! And a new job.
- 24. Speeding up K
- 25. MySQL to MariaDB
- 26. MySQL backups (stage2)
- 27. MySQL backups (stage1)
- 28. Migrating GN1 from EC2
- 29. Fixing Gunicorn in use
- 30. Updating ldc with latest LLVM
- 31. Fixing sambamba
- 32. Trapping NaNs
- 33. A gemma-dev-env package
- 34. Reviewing a CONDA package
- 35. Updates
- 36. Older BLOGS
- 37. Even older BLOG
Tis document describes Pjotr's journey in (1) introducing a speedy LMM resolver for GWAS for GeneNetwork.org, (2) Tools for pangenomes, and (3) solving the pipeline reproducibility challenge with GNU Guix. Ah, and then there is the APIs and bug fixing…
1 First code katas with Rust
code katas to the pangenome team. First I set an egg timer to 1 hour
and installed Rust and clang with Guix and checked out Christian's
rs-wfa bindings because I want to test C bindings against
Rust. Running
cargo build pulled in a crazy number of
dependencies. In a Guix container I had to set CC and
LIBCLANGPATH. After a successful build all tests failed with
cargo
test. It says
ld: rs-wfa/target/debug/build/libwfa-e30b43a0c990e3e6/out/WFA/build/libwfa.a(mm_allocator.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a PIE object; recompile with -fPIE
On my machine adding the PIE flag to the WFA C code worked:
diff --git a/Makefile b/Makefile index 5cd3812..71c58c8 100644 --- a/Makefile +++ b/Makefile @@ -10,7 +10,7 @@ CC=gcc CPP=g++ LD_FLAGS=-lm -CC_FLAGS=-Wall -g +CC_FLAGS=-Wall -g -fPIE ifeq ($(UNAME), Linux) LD_FLAGS+=-lrt endif
the PIE flag generated position independent code for executables. Because this is meant to be a library I switched -fPIE for -fPIC and that worked too.
test result: ok. 6 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
A thing missing from the repo is a software license. WFA is published under an MIT license. Erik also favours the MIT license, so it make sense to add that. After adding the license I cloned the repo to the pangenome org.
2 GEMMA additive and dominance effects
a
= (uA - uB)/2 and for the dominance
d = uAB - (uA + uB)/2. GEMMA
estimates the PVE by typed genotypes or “chip heritability”. Rqtl2 has
an estherit function to estimate heritability from pheno, kinship and
covar.
3 Sambamba build.
This week a new release went out and a few fixes.This week a new release went out and a few fixes.
4 GEMMA randomizer
-1 by
default. If below 0 it will seed the randomizer from the hardware
clock in param.cpp. The randomizer is set in three places:
bslmm.cpp 953: gsl_rng_set(gsl_r, randseed); 1659: gsl_rng_set(gsl_r, randseed); param.cpp 2032: gsl_rng_set(gsl_r, randseed);
and there are three (global) definitions of 'long int randseed' in the source. Bit of a mess really. Let's keep the random seed at startup only. The gslr structure will be shared by all. After fixing the randomizer we have a new 0.89.3 release of GEMMA!
5 GEMMA, Sambamba, Freebayes and pangenome tools.
I just managed to build Freebayes using a Guix environment. The tree of git submodules is quite amazing.
The first job is to build freebayes for ARM. This is part of our
effort to use the NVIDIA Jetson ARM board. On ARMf the package included in GNU
Guix fails with missing file
chdir vcflib/fastahack and bwa fails with
ksw.c:29:10: fatal error: emmintrin.h: No such file or directory #include <emmintrin.h> ^~~~~~~~~~~~~
The first thing we need to resolve is disabling the SSE2 extensions. This suggests that -DNOSSE2 can be used for BWA. We are not the first to deal with this issue. This page replaces the file with sse2neon.h which replaces SSE calls with NEON. My Raspberry PI has neon support, so that should work:
pi@raspberrypi:~ $ LD_SHOW_AUXV=1 uname | grep HWCAP AT_HWCAP: half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm AT_HWCAP2: crc32
Efraim will have a go at the BWA port. After that I'll pick up freebayes which includes a 2016 BWA as part of vcflib. People should not do this, though I am guilty with sambamba too.
5.1 Compile in a Guix container
After checking out the repo with git recursive create a Guix container with all the build tools with
guix environment -C -l guix.scm
Next rebuild the CMake environment with
rm CMakeCache.txt ./vcflib-prefix/src/vcflib-build/CMakeCache.txt make clean cmake . make -j 4 make test
The last command is not working yet.
6 GEMMA compute GRM (2).
7 Managing filters with GEMMA1
8 HEGP and randomness
source code to find out how it generates random numbers. It does not use the Linux randomizer but its own implementation. Ouch.For the HEGP encryption we use rnorm. I had to dig through R's
Honestly, we should not use that! These are pragmatic implementations for sampling. Not for encryption.
Interestingly, R sets srand() on startup using a time stamp. This is, however, only used to generate temporary filenames using the glibc rand() function. Correct me if I am wrong, but it is the only time R uses the OS randomizer.
Meanwhile the implementation of rustiefel is pretty straightforward: All we need to do is replace rnorm with an array of floats. Both Rust and Python provide a normal distribution from /dev/random. I'd have to dig deeper to get details, but to me it looks like the better idea.
9 GEMMA keeping track of transformations
OneOne
"transformations": [ { "type": "filter", "pheno-NA": true, "maf": 0.01, "miss": 0.05, "command": "./bin/gemma2 --overwrite -o test/data/regression/21487_filter filter -c test/data/regression/21487_convert.json" }
which allows us to check whether the filter has been applied before. Running the filter twice will show. The grm command will check whether a filter has run. If not it will add.
The current output after two steps looks like:
<urce/code/genetics/gemmalib [env]$ cat test/data/regression/21487_filter.json { "command": "filter", "crosstype": null, // Cross-type is outbreeding "sep": "\t", // Default separator "na.strings": [ "NA", "nan", "-" ], "comment.char": "#", // keeping track of these for debugging: "individuals": 17, "markers": 7320, "phenotypes": 1, // all data files are tidy and gzipped (you can open directly in vim) "geno": "test/data/regression/21487_filter_geno.txt.gz", "pheno": "test/data/regression/21487_filter_pheno.txt.gz", "gmap": "21487_convert_gmap.txt.gz", "alleles": [ "A", "B", "H" ], "genotypes": { "A": 0, "H": 1, "B": 2 }, "geno_sep": false, // We don't have separators between genotypes "geno_transposed": true, "transformations": [ // keeps growing with every step { "type": "export", "original": "rqtl2", "format": "bimbam", "command": "./bin/gemma2 --overwrite -o test/data/regression/21487_convert convert --bimbam -g example/21487_BXD_geno.txt.gz -a example/BXD_snps.txt -p example/21487_BXDPublish_pheno.txt" }, { "type": "filter", "pheno-NA": true, "maf": 0.01, "miss": 0.05, "command": "./bin/gemma2 --overwrite -o test/data/regression/21487_filter filter -c test/data/regression/21487_convert.json" } ], "na_strings": [ // na_strings is more consistent than na.strings above, also need it for creating a method "NA", "nan", "-" ], "name": "test/data/regression/21487_convert.json" // name of the underlying file }
Note that the control file increasingly starts to look like a Monad because it passes state along with gemma2/lib steps.
10 Fix outstanding CI build
collaborate well. To fix it I stopped testing with libgslv1 (which is really old anyway and we need to use gslcblas.h over OpenBLAS cblas.h.The GEMMA1 build on Travis was failing for some time. The problem is that GSL BLAS headers and OpenBLAS headers do not
11 GEMMA testing frame work
The first test is to convert BIMBAM to GEMMA2 format with
gemma2 --overwrite convert --bimbam -g example/21487_BXD_geno.txt.gz -a example/BXD_snps.txt -p example/21487_BXDPublish_pheno.txt
which outputs 3 files
INFO:root:Writing GEMMA2/Rqtl2 marker/SNP result_gmap.txt.gz INFO:root:Writing GEMMA2/Rqtl2 pheno result_pheno_bimbam.txt.gz INFO:root:Writing GEMMA2/Rqtl2 geno result_geno_bimbam.txt.gz
pytest-reqressions writes its own files so I needed to use the low level interface for file comparisons. Also I'll need to unpack the .gz files for showing a diff. Progressing here.
12 GEMMA validate data.
Therefore, we validate the data when there is (almost) zero cost inline. But a full validation is a separate switch. Let's see what it brings out!
gemma2 validate -c control
One thing the maf filter does not do is check for the absolute number of informative genotypes (maf is a percentage!). Also gemma1 does not check whether the minor allele is actually the smaller one. We hit the problem immediately with
WARNING:root:Only one type of genotype Counter({'B': 9, 'A': 7, 'H': 1}) found in ['A', 'A', 'B', 'B', 'B', 'A', 'A', 'B', 'A', 'A', 'B', 'B', 'B', 'B', 'A', 'H', 'B'] --- other similar counter warnings are ignored (rs3722740 file ./test_geno.txt.gz line 175)
It is clear that minor allele is actually the major allele. The implications are not large for gemma1, but the minor allele frequency (MAF) filter may not work properly. This is why validation is so important!
Unlike gemma1, gemma2 will figure out the minor allele dynamically. One reason is that R/qtl2 does the same and we are sharing the data format. It also allows a consistent use of genotype markers (e.g. B and D for the BXD). I added a validation step to make sure major allele genotypes vary across the dataset (one constant allele is suspect and may imply some other problem).
BIMBAM files, meanwhile, include the SNP variants. GeneNetwork kinda ignores them. I need to update the importer for that. Added a note. First, however, I want to speed up the GRM LOCO.
13 GEMMA running some data
For a dosage comparison and LOCO permutation run I extracted data from the 21481 set on GeneNetwork. First I needed to match the genotypes with phenotypes using below filter.For a dosage comparison and LOCO permutation run I extracted data from the 21481 set on GeneNetwork. First I needed to match the genotypes with phenotypes using below filter.
First create the Rqtl2 type dataset:
gemma2 -o 21481 convert --bimbam -g tmp/21481/BXD_geno.txt.gz -p tmp/21481/21481_BXDPublish_pheno.txt
Filtering
gemma2 -o output/21481-pheno -v filters -c 21481.json
Now the genotype file looks like
marker 7 10 11 12 38 39 42 54 60 65 67 68 70 71 73 77 81 91 92 rs31443144 AAABBABABAABBABBAAAA rs6269442 AAABBABABAABBABBAAAA rs32285189 AAABBABABAABBABBAAAA
Create the BIMBAM for gemma1
gemma2 -o 21481-pheno export --bimbam -c output/21481-pheno.json
and run the standard gemma commands. Note we can create the original dosage file by modifying the control file.
Running 1,000 permutations on 20 individuals and 7321 markers took
real 213m0.889s user 2939m21.780s sys 5423m37.356s
We have to improve on that!
14 GEMMA compute GRM
The first step is to compute the kinship matrix or GRM. We'll have different algorithms so we need a switch for method or –impl.The first step is to compute the kinship matrix or GRM. We'll have different algorithms so we need a switch for method or –impl.
gemma2 --overwrite -o output/21487 convert --bimbam -g example/21487_BXD_geno.txt.gz -p example/21487_BXDPublish_pheno.txt
gemma2 --overwrite -o output/21487-filter filter -c output/21487.json
we can use the gemma1 implementation with
gemma2 grm --impl gemma1 -c output/21487-filter.json
or our new version
gemma2 grm -c output/21487-filter.json
Today I added the necessary filtering steps for the GRM listed in the next section. The maf filter is currently hard coded to make sure results match gemma1.Today I added the necessary filtering steps for the GRM listed in the next section. The maf filter is currently hard coded to make sure results match gemma1.
I finally got a matching K in Python compared to gemma1. Turns out that scaling is not the default option. Ha!I finally got a matching K in Python compared to gemma1. Turns out that scaling is not the default option. Ha!
14.1 Calculating kinship
injects the mean genotype for a SNP into missing data fields (the mean over a SNP row). Next it subtracts the mean for every value in a row and if centering it scalesGemma
gsl_vector_scale(geno, 1.0 / sqrt(geno_var));
and finally over the full matrix a division over the number of SNPs
gsl_matrix_scale(matrix_kin, 1.0 / (double)ns_test)
Where
ns_test is the number of SNPs included. SNPs get dropped in a
MAF filter which I rewrote in a fast version in D. The GEMMA filter
happens at reading of the Geno file.
So, essentially
[X]Always apply the MAF filter when reading genotypes
[X]Apply missiness filter
And when computing kinship
[X]Always impute missing data (injecting the row mean)
[X]Always subtract the row mean
[X]Center the data by row (which is the NOT the default option
-gk 1, gemma1 CenterMatrix)
[X]Always scale the matrix dividing by # of SNPs (gemma1 ScaleMatrix)
See also R's scaling function.
Prasun's D code may be a bit more readable. And here is our older pylmm implementation which only scales K by dividing the number of SNPs.
[X]Check D implementation
Python's numpy treats NaN as follows:
>>> np.mean([1.0,2.0,3.0,0.0]) 1.5 >>> np.mean([1.0,2.0,3.0,0.0,0.0]) 1.2 >>> np.mean([1.0,2.0,3.0,0.0,0.0,np.NAN]) nan
which means we have to filter first.
x = np.array([1.0,2.0,3.0,0.0,0.0,np.NAN]) >>> np.mean(x[~numpy.isnan(x)]) 1.2
that is better. Note the tilde. Python is syntactically a strange beast. Also I need to be careful about numpy types. They are easily converted to lists, for example.
r2 is simply the square of the sample correlation coefficient (i.e.,
r) between the observed outcomes and the observed predictor values.
The manual says correlation with any covariate. By default, SNPs with
r^2 correlation with any of the covariates above 0.9999 will not be
included in the analysis. When I get to covariates I should include
that.
14.2 Implementation
Starting with the MAF filter. It is used both for GRM and GWA. Gemma1 says
-miss [num] specify missingness threshold (default 0.05) -maf [num] specify minor allele frequency threshold (default 0.01) -notsnp minor allele frequency cutoff is not used
Note that using
-notsnp the value of
maf_level is set to -1.
With Gemma2 I want to make all filtering explicit. But what to do if someone forgets to filter? Or filters twice - which would lead to different results.
gemma2 filter -c data --maf 0.01
Creates a new dataset and control file. We can add the filtering state to
the new control data structure with
"maf": 0.05 which prevents a second run.
If it is missing we should apply it by default in
gemma2 grm -c data
which will be the same as the single run
gemma2 filter -c data --maf 0.01 '=>' grm
(don't try this yet).
I wrote a MAF filter in Python which puts genotypes in classes and counts the minor alleles. Note that GEMMA1 does not a pure minor allele count because it counts heterozygous as 50%. At this point I am not making a clear distinction.
Another filter is missing data
miss_level defaults to 0.05:
if ((double)n_miss / (double)ni_test > miss_level) pass...
Say we have 6 out of 100 missing it fails. This is rather strict.
15 GEMMA filtering data
With gemma1 people requested more transparent filtering. This is why I am making it a two-step process. First we filter on phenotypes:With gemma1 people requested more transparent filtering. This is why I am making it a two-step process. First we filter on phenotypes:
15.1 Filter on phenotypes
The first filter is on phenotypes. When a phenotype is missing it should be removed from the kinship matrix and GWA. The R/qtl2 format is simply:
id pheno1 pheno2 1 1.2 3.0 2 NA 3.1 (etc)
So, if we select
pheno1 we need to drop
id=2 because it is an
NA. The filter also needs to update the genotypes. Here we filter
using the 6th phenotype column:
gemma2 -o output/ms-filtered -v filters -c result.json -p 6
With the new phenotype filter I was able to create a new GRM based on a reduced genotype list (imported from BIMBAM) for a paper we are putting out.
16 GEMMA convert data
16.1 Convert PLINK to GEMMA2/Rqtl2 and BIMBAM
The plink .fam format has
- Family ID ('FID')
- Within-family ID ('IID'; cannot be '0')
- Within-family ID of father ('0' if father isn't in dataset)
- Within-family ID of mother ('0' if mother isn't in dataset)
- Sex code ('1' = male, '2' = female, '0' = unknown)
- Phenotype value ('1' = control, '2' = case, '-9'/'0'/non-numeric = missing data if case/control)
The BIMBAM format just has columns of phenotypes and no headers(!)
For convenience, I am going to output BIMBAM so it can be fed directly into gemma1. Note Plink also outputs BIMBAM from plink files and VCF which is used by GEMMA users today.
To convert from plink to R/qtl2 we already haveTo convert from plink to R/qtl2 we already have
gemma2 convert --plink example/mouse_hs1940
To convert from R/qtl2 to BIMBAM we can do
gemma2 convert --to-bimbam -c mouse_hs1940.json
which writes
mouse_hs1940_pheno_bimbam.txt. Introducing a yield
generator. I notice how Python is increasingly starting to look like
Ruby. Yield was introduced in Python3 around 2012 - almost 20 years
later than Ruby(!)
After some thought I decided to split convert into 'read' and 'write' commands. So now it becomesAfter some thought I decided to split convert into 'read' and 'write' commands. So now it becomes
gemma2 read --plink example/mouse_hs1940
To convert from R/qtl2 to BIMBAM we can do
gemma2 write --bimbam -c mouse_hs1940.json
I think that looks logical. On third thought I am making it
gemma2 convert --plink example/mouse_hs1940
To convert from R/qtl2 to BIMBAM we can do
gemma2 export --bimbam -c mouse_hs1940.json
It is important to come up with the right terms so it feels logical or predictable to users. Convert could be 'import', but Python does not allow that because it is a language keyword. And, in a way, I like 'convert' better. And 'export' is the irregular case that no one should really use. I could name it 'internal'. But hey.
Writing the BIMBAM genotype file it looks like
rs3683945, A, G, 1, 1, 0, 1, 0, 1, 1, ...
without a header. GEMMA does not actually use the allele values (A and G) and skips on spaces. So we can simplify it to
rs3683945 - - 1 1 0 1 0 1 1 ...
Using these new inputs (converted plink -> Rqtl2 -> BIMBAM) the computed cXX matrix is the same as the original PLINK we had.
gemma2 gemma1 -g mouse_hs1940_geno_bimbam.txt.gz -p mouse_hs1940_pheno_bimbam.txt -gk -o mouse_hs1940
and same for the GWA
gemma2 gemma1 -g mouse_hs1940_geno_bimbam.txt.gz -p mouse_hs1940_pheno_bimbam.txt -n 1 -a ./example/mouse_hs1940.anno.txt -k ./output/mouse_hs1940.cXX.txt -lmm -o mouse_hs1940_CD8_lmm-new
==> output/mouse_hs1940_CD8_lmm-new.assoc.txt <== chr rs ps n_miss allele1 allele0 af beta se logl_H1 l_remle p_wald 1 rs3683945 3197400 0 - - 0.443 -7.882975e-02 6.186788e-02 -1.581876e+03 4.332964e+00 2.028160e-01 1 rs3707673 3407393 0 - - 0.443 -6.566974e-02 6.211343e-02 -1.582125e+03 4.330318e+00 2.905765e-01 ==> output/result.assoc.txt <== chr rs ps n_miss allele1 allele0 af beta se logl_H1 l_remle p_wald 1 rs3683945 3197400 0 A G 0.443 -7.882975e-02 6.186788e-02 -1.581876e+03 4.332964e+00 2.028160e-01 1 rs3707673 3407393 0 G A 0.443 -6.566974e-02 6.211343e-02 -1.582125e+03 4.330318e+00 2.905765e-01
The BIMBAM version differs because the BIMBAM file in ./example differs slightly from the PLINK version (thanks Xiang, keeps me on my toes!). Minor differences exist because some values, such as 0.863, have been changed for the genotypes. For a faithful and lossless computation of that BIMBAM file we'll need to support those too. But that will come when we start importing BIMBAM files. I'll make a note of that.
16.2 Convert BIMBAM to GEMMA2/Rqtl2
From above we can also parse BIMBAM rather than export. It will need both geno and pheno files to create the GEMMA2/Rqtl2 format:From above we can also parse BIMBAM rather than export. It will need both geno and pheno files to create the GEMMA2/Rqtl2 format:
gemma2 convert --bimbam -g example/mouse_hs1940.geno.txt.gz -p example/mouse_hs1940.pheno.txt
Problem: BIMBAM files can contain any value while the Rqtl2 genotype file appears to be limited to alleles. Karl has a frequency format, but that uses some fancy binary 'standard'. I'll just use the genotypes now because GeneNetwork uses those too. Translating back and forth. BIMBAM
rs31443144, X, Y, 1, 1, 0, 0, 0, ... 0, 1, 0.5, 0.5
becomes Rqtl2
rs31443144
and then back to
rs31443144, - - 1 1 0 0 0 ... 0 1 2 2
For processing with GEMMAv1. Funny. Actually it should be comma delimited to align with PLINK output.
After some hacking
gemma2 convert --bimbam -g example/BXD_geno.txt.gz -p example/BXD_pheno.txt gemma2 export -c result.json gemma -gk -g result_geno_bimbam.txt.gz -p result_pheno.tsv FAILED: Parsing input file 'result_geno_bimbam.txt.gz' failed in function ReadFile_geno in src/gemma_io.cpp at line 743
is still not happy. When looking at the code it fails to get (enough) genotypes. Also interesting is the hard coded in GEMMA1
geno = atof(ch_ptr); if (geno >= 0 && geno <= 0.5) { n_0++; } if (geno > 0.5 && geno < 1.5) { n_1++; } if (geno >= 1.5 && geno <= 2.0) { n_2++; }
GEMMA1 assumes Minor allele homozygous: 2.0; major: 0.0 for BIMBAM and
1.0 is H. The docs say BIMBAM format is particularly useful for
imputed genotypes, as PLINK codes genotypes using 0/1/2, while BIMBAM
can accommodate any real values between 0 and 2 (and any real values
if paired with
-notsnp option which sets
cPar.maf_level = -1).
The first column is SNP id, the second and third columns are allele types with minor allele first, and the remaining columns are the posterior/imputed mean genotypes of different individuals numbered between 0 and 2. An example mean genotype file with two SNPs and three individuals is as follows:
rs1, A, T, 0.02, 0.80, 1.50 rs2, G, C, 0.98, 0.04, 1.00
GEMMA codes alleles exactly as provided in the mean genotype file, and ignores the allele types in the second and third columns. Therefore, the minor allele is the effect allele only if one codes minor allele as 1 and major allele as 0.
BIMBAM mode is described in its own manual. The posterior mean value is the minor allele dosage. This means the encoding should be
rs31443144,-,-,2,2,0,0,0,...,0,2,1,1
but GeneNetwork uses
rs31443144, X, Y, 1, 1, 0, 0, 0, ... 0, 1, 0.5, 0.5
which leads to a different GRM. Turns out the genotypes GeneNetwork is using is wrong. It will probably not be a huge difference because the dosage is just scaled. I'll test it on a live dataset - a job that needs to be done anyway.
GEMMA also has an annotation file for SNPsGEMMA also has an annotation file for SNPs
rs31443144 3010274 1 rs6269442 3492195 1 rs32285189 3511204 1 rs258367496 3659804 1 etc.
Rqtl2 has a similar gmap file in tidy format:
marker,chr,pos rs13475697,1,1.6449 rs3681603,1,1.6449001 rs13475703,1,1.73685561994571 rs13475710,1,2.57549035621086 rs6367205,1,2.85294211007162 rs13475716,1,2.85294221007162 etc.
entered as "gmap" in the control file. I suppose it is OK to use our chromosome positions there. I'll need to update the converter for BIMBAM and PLINK. The PLINK version for GEMMA looks like
rs3668922 111771071 13 65.0648 rs13480515 17261714 10 4.72355 rs13483034 53249416 17 30.175 rs4184231 48293994 16 33.7747 rs3687346 12815936 14 2.45302
so both positions are included, but no header. All different in other words! Rqtl2 also has a "pmap" file which is the physical mapping distance.
17 GEMMA GRM/K compute
GEMGEM
time gemma -g ./example/mouse_hs1940.geno.txt.gz -p ./example/mouse_hs1940.pheno.txt -gk -o mouse_hs19407.545s user 0m14.468s sys 0m1.037s
Now the GRM output file (mousehs1940.cXX.txt) is pretty large at 54Mb and we keep a lot of those cached in GeneNetwork. It contains a matrix 1940x1940 of textual numbers
0.3350589588 -0.02272259412 0.0103535287 0.00838433365 0.04439930169 -0.01604468771 0.08336199305 -0.02272259412 0.3035959579 -0.02537616406 0.003454557308 ...
The gzip version is half that size. We can probably do beter storing 4-byte floats and only storing half the matrix (it is symmetrical after all) followed by compression.
Anyway, first thing to do is read R/qtl2 style files into gemma2 because that is our preferred format.
To pass in information we now use the control file defined below
gemma2 grm --control mouse_hs1940.json
pylmm five years
ago(!) First results show
numpy dot product outperforming gemma1 and
blas today for this size dataset (I am not trying CUDA yet). Next
stage is filtering and centering the GRM.
18 GEMMA with python-click and python-pandas-plink
Python click is powerful and covers most use cases.This week I got the argument parsing going for gemma2 and the logic for running subcommands. It was a bit of a struggle: having the feeling I would be better off writing argument parsing from scratch. But today I added a parameter for finding the gemma1 binary (a command line switch and an environment variable with help in a one-liner). That was pleasingly quick and powerful.
In the next step I wanted to convert PLINK files to GEMMA2/Rqtl2 formats. A useful tool to have for debugging anyhow. I decided to try pandas-plink and added that to GEMMA2 (via a Guix package). Reading the mouse data:
>>> G = read_plink1_bin("./example/mouse_hs1940.bed", "./example/mouse_hs1940.bim", "./example/mouse_hs1940.fam", ver> >>> print(G) <xarray.DataArray 'genotype' (sample: 1940, variant: 12226)> dask.array<transpose, shape=(1940, 12226), dtype=float64, chunksize=(1024, 1024), chunktype=numpy.ndarray> Coordinates: * sample (sample) object '0.224991591484104' '-0.97454252753557' ... nan nan * variant (variant) object '1_rs3683945' '1_rs3707673' ... '19_rs6193060' fid (sample) <U21 '0.224991591484104' '-0.97454252753557' ... 'nan' iid (sample) <U21 '0.224991591484104' '-0.97454252753557' ... 'nan' father (sample) <U32 'nan' 'nan' 'nan' 'nan' ... 'nan' 'nan' 'nan' 'nan' mother (sample) <U3 '1' '0' '1' 'nan' 'nan' ... 'nan' '0' '1' 'nan' 'nan' gender (sample) <U32 'nan' 'nan' 'nan' 'nan' ... 'nan' 'nan' 'nan' 'nan' trait (sample) float64 -0.2854 -2.334 0.04682 nan ... -0.09613 1.595 0.72 chrom (variant) <U2 '1' '1' '1' '1' '1' '1' ... '19' '19' '19' '19' '19' snp (variant) <U18 'rs3683945' 'rs3707673' ... 'rs6193060' cm (variant) float64 0.0 0.1 0.1175 0.1358 ... 53.96 54.02 54.06 54.07 pos (variant) int64 3197400 3407393 3492195 3580634 ... -9 -9 61221468 a0 (variant) <U1 'A' 'G' 'A' 'A' 'G' 'A' ... 'G' 'G' 'C' 'G' 'G' 'A' a1 (variant) <U1 'G' 'A' 'G' 'G' 'G' 'C' ... 'A' 'A' 'G' 'A' 'A' 'G'
Very nice! With support for an embedded GRM we can also export. Note that this tool is not lazy, so for very large sets we may need to write some streaming code.
The first step is to create an R/qtl2 style genotype file to compute the GRM. The BIMBAM version for a marker looks like
rs3683945, A, G, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 2, 1, 1, 0, 1, 1, 1, 1, 2, 1, 0, 2, etc rs3707673, G, A, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 2, 1, 1, 0, 1, 1, 1, 1, 2, 1, 0, 2, rs6269442, A, G, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 2, 1, 0, 0, 1, 1, 1, 1, 2, 0, 0, 2, (...)
reflecting row a0 and a1 vertically in the BED file.
===> BED (binary format) [[1. 1. 2. 1. 2. 1. 1. 1. 1. 2. 2. 1. 2. 1. 1. 1. 1. 2. 0. 1. 1. 2. 1. 1. 1. 1. 0. 1. 2. 0. 1. 0. 1. 2. 0. 2. 1. 1. 1. 1. [1. 1. 2. 1. 2. 1. 1. 1. 1. 2. 2. 1. 2. 1. 1. 1. 1. 2. 0. 1. 1. 2. 1. 1. 1. 1. 0. 1. 2. 0. 1. 0. 1. 2. 0. 2. 1. 1. 1. 1. [1. 2. 2. 1. 2. 1. 1. 2. 2. 2. 2. 1. 2. 1. 1. 1. 2. 2. 0. 1. 2. 2. 1. 1. 1. 1. 0. 2. 2. 0. 2. 0. 1. 2. 0. 2. 1. 1. 1. 1. (...) ]]
So you can see 2 and 0 values switched meaning H.
The equivalent R/qtl2 can be the human readable
"genotypes": {"A":0, "B":1, "H": 2}
For the genotype file we'll go marker by individual (transposed) because that is what we can stream in GWA. The tentative 'control' file (mirroring recla.json) to start with:
{ "description": "mouse_hs1940", "crosstype": "hs", "individuals": 1940, "markers": 12226, "phenotypes": 7, "geno": "mouse_hs1940_geno.tsv", "alleles": [ "A", "B", "H" ], "genotypes": { "A": 1, "H": 2, "B": 3 }, "geno_transposed": true }
Where cross-type "HS" should probably act similar to "DO". We'll use some (hopefully) sane defaults, such as a tab for separator and '-' and NA for missing data. Comments have '#' on the first position. We add number of individuals, phenotypes and markers/SNPs for easier processing and validation when parsing.
Now the GEMMA2 genotype file should become
marker, 1, 2, 3, 4, ... rs3683945 B B A B A B B B B A A B A B B B B A H B B A B B B B H B A H etc rs3707673 B B A B A B B B B A A B A B B B B A H B B A B B B B H B A H rs6269442 B A A B A B B A A A A B A B B B A A H B A A B B B B H A A H (...)
"geno_compact": true
which makes it the more compact
marker,1,2,3,4,... rs3683945 BBABABBBBAABABBBBAHBBABBBBHBAH etc rs3707673 BBABABBBBAABABBBBAHBBABBBBHBAH rs6269442 BAABABBAAAABABBBAAHBAABBBBHAAH (...)
hope R/qtl2 will support this too. Next step is compression
The uncompressed space version:
-rw-r--r-- 1 wrk users 46M Aug 31 08:25 mouse_hs1940_geno.tsv
with gzip compresses to
-rw-r--r-- 1 wrk users 3.3M Aug 31 07:58 mouse_hs1940_geno.tsv.gz
while the compact version is half the size and compresses better too
-rw-r--r-- 1 wrk users 2.1M Aug 31 08:30 mouse_hs1940_geno.tsv.gz
The bz2 version is a little smaller but a lot slower. lz4 has a larger file
-rw-r--r-- 1 wrk users 5.4M Aug 31 08:37 mouse_hs1940_geno.tsv.lz4
but it is extremely fast. For now we'll just go for smaller files
(these genotype files can be huge).
gzip support is standard in
Python3. Compressing the files from inside Python appeared slow with
the default maximum compression. Reducing the level a bit it is
comparable to the original writer and the file is 20x smaller. It is
also 3x smaller than the original PLINK version (that is supposedly
optimally compressed).
The control file probably has enough showing the compressed file name extension. Now it looks like
{ "description": "mouse_hs1940", "crosstype": "hs", "sep": "\t", "individuals": 1940, "markers": 12226, "phenotypes": 7, "geno": "mouse_hs1940_geno.tsv.gz", "alleles": [ "A", "B", "H" ], "genotypes": { "A": 1, "H": 2, "B": 3 }, "geno_sep": false, "geno_transposed": true }
Next step is generating the GRM!
19 Building GEMMA.
For deployment we'll use GNU Guix and Docker containers as described below. Gemma2 is going to be oblivious about how deployment hangs together because I think it is going to be an ecosystem on its own.
I am looking into command line parsing for gemma2. The requirement is reasonably sophisticated parameter checking that can grow over time. First of all I'll introduce 'commands' such as
gemma grm gemma gwa
which allows for splitting option sets. Better readable too. Also I want to be able to inline 'pipe' commands:
gemma grm => gwa
which will reuse data stored in RAM to speed things up. You can imagine something like
gemma filter => grm --loco => gwa
as a single run. Now bash won't allow for this pipe operator so we may support a syntax like
gemma filter '=>' grm --loco '=>' gwa
or
gemma filter % grm --loco % gwa
Note that these 'pipes' will not be random workflows. It is just a CLI convenience notation that visually looks composable.
Also the current switches should be supported and gemma2 will drop to gemma version 1 if it does not understand a switch. For example
gemma -g /example/mouse_hs1940.geno.txt.gz -p mouse_hs1940.pheno.txt -a mouse_hs1940.anno.txt -gk -no-check
should still work, simply because current workflows expect that. The python click package looks like it can do this. What is tricky is that we want to check all parameters before the software runs. For now, the grouping works OK - you can chain commands, e.g.
@click.group(invoke_without_command=True) @click.pass_context def gemma2(context): if not context.invoked_subcommand: click.echo("** Call gemma1") @click.command() def grm(): click.echo('** Kinship/Genetic Relationship Matrix (GRM) command') if second: gemma2(second) @click.command() # @click.argument('verbose', default=False) def gwa(): click.echo('** Genome-wide Association (GWA)') gemma2.add_command(grm) gemma2.add_command(gwa) gemma2()
which allows
gemma grm ... -> calls into grm and gwa gemma gwa ... gemma (no options)
Not perfect because parameters are checked just before the actual chained command runs. But good enough for now. See also the group tutorial.
OpenBLAS development. The gains we get for free.Performance metrics before doing a new release show openblas has gotten a bit faster for GEMMA. I also tried a build of recent openblas-git-0.3.10 with the usual tweaks. It is really nice to see how much effort is going into
As people have trouble building GEMMA on MacOS (and I don't have a Mac) I released a test Docker container that can be run as
lario:/home/wrk/iwrk/opensource/code/genetics/gemma# time docker run -v `pwd`/example:/example 2e82532c7440 gemma -g /example/mouse_hs1940.geno.txt.gz -p /example/mouse_hs1940.pheno.txt -a /example/mouse_hs1940.anno.txt -gk -no-check8.161s user 0m0.033s sys 0m0.023s
Note that the local directory is mounted with the
-v switch..
20 Starting on GEMMA2 which is getting quite a bit of attention.A lot happened in the last month. Not least creating
Today is the official kick-off day for new GEMMA development! The coming months I am taking a coding sabbatical. Below you can read I was initially opting for D, but the last months we have increasingly invested in Rust and it looks like new GEMMA will be written in Python + Rust.
First, why Rust? Mostly because our work is showing it forces coders to work harder at getting things correct. Not having a GC can look like a step backward, but when binding to other languages it actually is handy. With Rust you know where you are - no GC kicking in and doing (black) magic. Anyway, I am going to give it a shot. I can revert later and D is not completely ruled out.
Now, why Python instead of Racket? Racket is vastly superior to Python in my opinion. Unfortunately, Python has the momentum and if I write the front-end in Racket it means no one will contribute. So, this is pragmatism to the Nth degree. We want GEMMA2/gemmalib to be useful to a wider community for experimentation. It will be a toolbox rather than an end-product. For example, I want to be able to swap in/out modules that present different algorithms. In the end it will be a toolbox for mapping phenotypes to genotypes.
I don't think we'll regret a choice for Python+Rust. Both languages are complementary and have amazing community support, both in terms of size and activity. Having Python as a front-end implies that is should be fairly trivial to bind the Rust back-end to other languages, such as Racket, R and Ruby. It will happen if we document it well enough.
One feature of new GEMMA2 will be that it actually can run GEMMA1. I am going to create a new development repository that can call into GEMMA1 transparently if functionality does not exist. This means the same command line parameters should work. GEMMA2 will fall back to GEMMA1 with a warning if it does not understand parameters. GEMMA2 specific parameters are a different set..
I promised GeneNetwork that I would implement precompute for all GN mappings for GEMMA. I think that is a great idea. It is also an opportunity to clean up GEMMA. Essentially a rewrite where GEMMA becomes more of a library which can be called from any other language. That will also make it easier to optimise GEMMA for certain architectures. It is interesting to note that despite the neglect GEMMA is getting and its poor runtime performance it is still a surprisingly popular tool. The implementation, apparently, still rocks!
So, let's start with GEMMA version 2 (GEMMA2). What is GEMMA2? GEMMA2 is a fast implementation of GEMMA1 in D. Why not Rust? You may ask. Well, Rust is a consideration and we can still port, but D is close to idiomatic C++ which means existing GEMMA code is relatively easy to convert. I had been doing some of that already with Prasun, with faster-lmm-d. That project taught us a lot though we never really got it to replace GEMMA. Next to D I will be using Racket to create the library bindings. Racket is a Lisp and with a good FFI it should be easy to port that to (say) Python or Ruby. So, in short: Front-end Racket, back-end D. Though there may be variations.
21 Porting GeneNetwork1 to GNU Guix is a legacy Python web application which was running on a 10 year old CentOS server. It depends on an older version of Python, mod-python and other obsolete modules. We decided to package it in GNU Guix because Guix gives full control over the dependency graph. Also GNU Guix has features like timemachine and containers which allows us to make snapshot of a deployment graph in time and serve different versions of releases.GeneNetwork1
The first step to package GN1 with the older packages was executed by Efraim. Also he created a container to run Apache, mod-python and GN1. Only problem is that mod-python in the container did not appear to be working.
22 Chasing that elusive sambamba bug (FIXED!)
github. Locating the bug was fairly
easy - triggered by
decompressBgzfBlock - and I manage to figure out
it had to do with the copying of a thread object that messes up the
stack. Fixing it, however is a different story! It took a while to get
a minimal program to reproduce the problem. It looks like
BamReader bam; auto fn = buildPath(dirName(__FILE__), "data", "ex1_header.bam"); foreach (i; 0 .. 60) { bam = new BamReader(fn); assert(bam.header.format_version == "1.3"); }
which segfaults most of the time, but not always, with a stack trace
#0 0x00007ffff78b98e2 in invariant._d_invariant(Object) () from /usr/lib/x86_64-linux-gnu/libdruntime-ldc-debug-shared.so.87 #1 0x00007ffff7d4711e in std.parallelism.TaskPool.queueLock() () from /usr/lib/x86_64-linux-gnu/libphobos2-ldc-debug-shared.so.87 #2 0x00007ffff7d479b9 in _D3std11parallelism8TaskPool10deleteItemMFPSQBqQBp12AbstractTaskZb () from /usr/lib/x86_64-linux-gnu/libphobos2-ldc-debug-shared.so.87 #3 0x00007ffff7d4791d in _D3std11parallelism8TaskPool16tryDeleteExecuteMFPSQBwQBv12AbstractTaskZv () #4 0x00005555557125c5 in _D3std11parallelism__T4TaskS_D3bio4core4bgzf5block19decompressBgzfBlockFSQBrQBqQBoQBm9BgzfBlockCQCoQCn5utils7memoize__T5CacheTQCcTSQDxQDwQDuQDs21DecompressedBgzfBlockZQBwZQBpTQDzTQDgZQGf10yieldForceMFNcNdNeZQCz (this=0x0) #6 0x00007ffff78a56fc in object.TypeInfo_Struct.destroy(void*) const () #7 0x00007ffff78bc80a in rt_finalizeFromGC () from /usr/lib/x86_64-linux-gnu/libdruntime-ldc-debug-shared.so.87 #8 0x00007ffff7899f6b in _D2gc4impl12conservativeQw3Gcx5sweepMFNbZm () #9 0x00007ffff78946d2 in _D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm () #10 0x00007ffff78969fc in _D2gc4impl12conservativeQw3Gcx8bigAllocMFNbmKmkxC8TypeInfoZPv () #11 0x00007ffff78917c3 in _D2gc4impl12conservativeQw3Gcx5allocMFNbmKmkxC8TypeInfoZPv () (...) task_pool=0x7ffff736f000) at reader.d:130 #29 0x000055555575e445 in _D3bio3std3hts3bam6reader9BamReader6__ctorMFAyaZCQBvQBuQBtQBsQBrQBn (warning: (Internal error: pc 0x55555575e444 in read in psymtab, but not in symtab.) at reader.d:135 #30 0x00005555557bff0b in D main () at read_bam_file.d:47
where
reader.d:130 adds a
BamReader to the task pool. It is clear the
GC kicks in and we end up with this mess.
Line #4 contains std.parallelism.TaskS bio.core.bgzf.block.decompressBgzfBlock utils.memoize.Cache-DecompressedBgzfBlock-yieldForce
yieldForce executes a task in the current thread which is coming from a cache:
alias Cache!(BgzfBlock, DecompressedBgzfBlock) BgzfBlockCache;
and Cache is part of BioD. One trick aspect of Sambamba is that the
design is intricate. In our crashing example we only use the simple
BamReader wich is defined in
std.hts.bam.reader.d. We are using a
default taskpool. In reader.d not much happens - it is almost all
simple plumbing.
std.hts.bam.read.d, meanwhile, represents the BAM
format.
The Bgzf block processing happens in
bio.core.bgzf.inputstream. The
BamReader uses BgzfInputStream which has functions fillNextBlock,
setupReadBuffer. The constructor sets up a
RoundBuf!BlockAux(n_tasks).
When I set
n_tasks to a small number it no longer crashes!? The
buffer
_task_buf = uninitializedArray!(DecompressionTask[])(n_tasks);
is a critical piece. Even increasing dim to
n_tasks+2 is enough to
remove most segfaults, but not all.
Remember it is defined as
alias Task!(decompressBgzfBlock, BgzfBlock, BgzfBlockCache) DecompressionTask; DecompressionTask[] _task_buf;
with dimension
n_tasks. Meanwhile BlockAux
static struct BlockAux { BgzfBlock block; ushort skip_start; ushort skip_end; DecompressionTask* task; alias task this; }
Injecting code
struct DecompressedBgzfBlock { ~this() { stderr.writeln("destroy DecompressedBgzfBlock ",start_offset,":",end_offset," ",decompressed_data.sizeof); }; ulong start_offset; ulong end_offset; ubyte[] decompressed_data; }
It is interesting to see that even when not segfaulting the block offsets look corrupted:
destroy DecompressedBgzfBlock 0:0 16 destroy DecompressedBgzfBlock 0:0 16 destroy DecompressedBgzfBlock 0:0 16 destroy DecompressedBgzfBlock 89554:139746509800748 16 destroy DecompressedBgzfBlock 140728898420736:139748327903664 16 destroy DecompressedBgzfBlock 107263:124653 16 destroy DecompressedBgzfBlock 89554:107263 16 destroy DecompressedBgzfBlock 71846:89554 16 destroy DecompressedBgzfBlock 54493:71846 16 destroy DecompressedBgzfBlock 36489:54493 16 destroy DecompressedBgzfBlock 18299:36489 16 destroy DecompressedBgzfBlock 104:18299 16 destroy DecompressedBgzfBlock 0:104 16
and I am particularly suspicious about this piece of code in inputstream.d where task gets allocated and the resulting buffer gets copied to the roundbuffer. This is a hack, no doubt about it:
DecompressionTask tmp = void; tmp = scopedTask!decompressBgzfBlock(b.block, _cache); auto t = _task_buf.ptr + _offset / _max_block_size; import core.stdc.string : memcpy; memcpy(t, &tmp, DecompressionTask.sizeof); b.task = t; _tasks.put(b); _pool.put(b.task);
and probably the reason why
decompressBgzfBlock gets corrupted,
followed by sending the GC in a tail spin when it kicks in. Artem
obviously designed it this way to prevent allocating memory for the
task, but I think he went a little too far here! One thing I tried
earlier, I have to try again which is get rid of that copying.
First of all
alias Task!(decompressBgzfBlock, BgzfBlock, BgzfBlockCache) DecompressionTask;
defines
DecompressionTask as calling
decompressBgzfBlock with
parameters which returns a
DecompressedBgzfBlock. Remember it bails
out with this block. There is something else that is notable, it is
actually the cached version that bails out.
Removing the cache code makes it run more reliable. But not completely. Also we are getting memory errors now:
destroy DecompressedBgzfBlock 4294967306:0 16 core.exception.InvalidMemoryOperationError@core/exception.d(702): Invalid memory operation
but that leaves no stack trace. Now we get
std.parallelism.Task_bio.core.bgzf.block.decompressBgzfBlock-DecompressedBgzfBlock-yieldForce
so the caching itself is not the problem. In the next phase we are
going to address that dodgy memory copying by introducing a task
managed by GC instead of using the
ScopedTask.
This is all happening in
BgzfInputStream in
inputstream.d used by
reader.d which inherits from
Stream. BamReaders uses that
functionality to iterate through the reads usings D
popFront()
design. Streams allow reading a variable based on its type,
e.g. a BAM read. The BamReader fetches the necessary data from
BgzfBlockSupplier with
getNextBgzfBlock which is used as the
_bam
variable.
BamReader.reader itself returns an iterator.
It is interesting to note how the OOP design obfuscates what is going
on. It is also clear that I have to fix
BgzfInputStream in
inputstream.d because it handles the tasks in the roundbuffer.
Part of sambamba's complexity is due to OOP and to having the threadpool running at the lowest level (unpacking bgzf). If I remove the threadpool there it means that threading will have to happen at a higher level. I.e., sambamba gets all its performance from multithreaded low level unpacking of data blocks. It is unusual, but it does have the (potential) advantage of leaving higher level code simple. I note with sambamba sort, however, Artem injected threads there too which begs the question what happens when you add different tasks to the same pool that have different timing characteristics. Be interesting to see the effect of using two task pools.
block.d again,
BgzfBlock is defined as a struct containing a
_buffer defined as
public ubyte[] _buffer = void; and is used in (indeed)
block.d and
inputstream.d only. The use of
struct means that
BgzfBlock gets
allocated on the stack. Meanwhile
_buffer get pointed into the
uncompressed buffer which in turn is is a slice of
uncompressed_buf that is also on the stack in
decompressBgzfBlock
(surprise, surprise) and gets assigned right before returning
block._buffer[0 .. block.input_size] = uncompressed[];
now, the cached version (which I disabled) actually does a copy to the heap
BgzfBlock compressed_bgzf_block = block; compressed_bgzf_block._buffer = block._buffer.dup; DecompressedBgzfBlock decompressed_bgzf_block; with (decompressed_bgzf_block) { start_offset = block.start_offset; end_offset = block.end_offset; decompressed_data = uncompressed[].dup; } cache.put(compressed_bgzf_block, decompressed_bgzf_block);
Artem added a comment
/// A buffer is used to reduce number of allocations. /// /// Its size is max(cdata_size, input_size) /// Initially, it contains compressed data, but is rewritten /// during decompressBgzfBlock -- indeed, who cares about /// compressed data after it has been uncompressed?
Well, maybe the GC does! Or maybe the result does not fit the same buffer. Hmmm. If you go by the values
destroy DecompressedBgzfBlock 140728898420736:139748327903664 16
you can see the
BgzfBlock is compromised by a stack overwrite.
Changing the struct to a class fixes the offsets
destroy DecompressedBgzfBlock 104:18299 16 destroy DecompressedBgzfBlock 18299:36489 16 destroy DecompressedBgzfBlock 36489:54493 16
so things start to look normal. But it still segfaults on
DecompressedBgzfBlock on GC in
yieldForce.
std.parallelism.Task_bio.core.bgzf.block.decompressBgzfBlock-DecompressedBgzfBlock-yieldForce
decompressBgzfBlock the underpinning data structure is corrupt.
The problem has to be with
struct BgzfBlock,
struct BgzfBlockAux
and the
Roundbuf. Both
BgzfBlockAux goes on a
Roundbuf. I was
thinking last night that there may be a struct size problem. The number
of tasks in the roundbuffer should track the number of threads.
Increasing the size of the roundbuffer makes a crash take longer.
Well I hit the jackpot! After disabling
_task_buf = uninitializedArray!(DecompressionTask[])(n_tasks);
there are no more segfaults. I should have looked at that more
closely. I can only surmise that because the contained objects contain
pointers (they do) the GC gets confused because it occassionaly finds
something that looks like a valid pointer. Using
uninitializedArray
on an object that includes a pointer reference is dangerous.
Success!! That must have take me at least three days of work to find
this bug, one of the most elusive bugs I have encountered. Annoyingly
I was so close earlier when I expanded the size of
n_tasks! The
segfault got triggered by a new implementation of D's garbage
collector. Pointer space is tricky and it shows how careful we have to
be with non-initialized data.
Now what is the effect of disabling the cache and making more use of garbage collected structures (for each Bgzf block)? User time went down and CPU usage too, but wall clock time nudged up. Memory use also went up by 25%. The garbage collector kicked in twice as often. This shows Artem's aggressive avoidance of the garbage collector does have impact and I'll have to revert on some of my changes now. Note that releasing the current version should be OK, the performance difference does add to overall time and energy use. Even 10% of emissions is worth saving with tools run at this scale.
So, I started reverting on changes and after reverting two items:
- Sambamba - [-] speed test + [X] revert on class DecompressedBgzfBlock to struct + [X] revert on auto buf2 = (block._buffer[0 .. block.input_size]).dup; + [ ] revert on DecompressionTask[] _task_buf; + [ ] revert on tmp = scopedTask!decompressBgzfBlock(b.block) + [ ] revert on Cache
Sambamba is very similar to the last release. The new release is very slightly slower but uses less RAM. I decided not to revert on using the roundbuf, scopedTask and Cache because each of these introduces complexity with no obvious gain. v0.7.1 will be released after I update release notes and final tests.
23 It has been almost a year! And a new job..
I am writing a REST service which needs to maintain some state. The built-in Racket server has continuations - which is rather nice! Racket also has support for Redis, SQLite (nice example) and a simple key-value interface (ini-style) which I can use for sessions.
Ironically, two weeks after writing above I was hit by a car on my bicycle in Memphis. 14 fractures and a ripped AC joint. It took surgery and four months to feel halfway normal again…
24 Speeding up K
GEMMMA has been released with many bug fixes and speedups. Now it is time to focus on further speedups with K (and GWA after). There are several routes to try this. One possibility is to write our own matrix multiplication routine in D. My hunch is that because computing K is a special case we can get some significant speedups compared to a standard matrix multiplication for larger matrices. Directions we can go are:
- Stop using BLAS and create multi-core dot-product based multiplication that makes use of
- multi threaded decompression
- start compute while reading data
- use aligned rows only for dot product (less CPU cache misses)
- compute half the result (K is symmetric!)
- heavy use of threading
- use AVX2 optimizations
- use floats instead of doubles (we can do this with hardware checking)
- chromosome-based computations (see below for LOCO)
- Use tensor routines to reduce RAM IO
In other words, quite a few possible ways of improving things. It may be that the current BLAS routine is impossible to beat for our data, but there is only one way to find out: by trying.
The first step you would think is simple: take the genotype data, pass that in to calckinship(g) and get K back. Unfortunately GEMMA intertwines reading the genotype file, the computation of K in steps and scaling the matrix. All in one and the code is duplicated for Plink and BIMBAM formats. Great. Now I don't feel like writing any more code in C++ and, fortunately, Prasun has done some of the hard work in faster-lmm-d already. So the first step is to parse the BIMBAM format using an iterator. We'll use this to load the genotype data in RAM (we are not going to assume memory restrictions for now).
That genotype decompression and reading part is done now and I added it to BioD decompress.d. Next I added a tokenizer which is a safe replacement for the strtok gemma uses.
Using BLAS I added a full K computation after reading then genotype file which is on par (speed-wise) with the current GEMMA implementaton. The GEMMA implementation should be more optimal because it splits the matrix computation in smaller blocks and starts while streaming the genotype data. Prasun did an implementation of that in gemmakinship.d which is probably faster than my current version. Even so, I'll skip that method, for now, until I am convinced that none of the above dot-product optimizations pan out. Note that only a fraction of the time is used to do the actual matrix multiplication.
2018-10-23T09:53:38.299:api.d:flmmd_compute_bimbam_K:37 GZipbyLine 2018-10-23T09:53:43.911:kinship.d:kinship_full:23 Full kinship matrix used
Reading the file took 6 seconds.
2018-10-23T09:53:43.911:dmatrix.d:slow_matrix_transpose:57 slow_matrix_transpose 2018-10-23T09:53:44.422:blas.d:matrix_mult:48 matrix_mult 2018-10-23T09:53:45.035:kinship.d:kinship_full:33 DONE rows is 1940 cols 1940
Transpose + GxGT took 1 second. The total for
6e05143d717d30eb0b0157f8fd9829411f4cf2a0 real 0m9.613s user 0m13.504s sys 0m1.476s
I also wrote a threaded decompressor and it is slightly slower. Gunzip is so fast that building threads adds more overheads.
In this example reading the file dominates the time, but with LOCO we get ~20x K computation (one for each chromosome), so it makes sense to focus on K. For model species, for the forseeable future, we'll look at thousands of individuals, so it is possible to hold all K matrices in RAM. Which also means we can fill them using chromosome-based dot-products. Another possible optimization.
Next steps are to generate output for K and reproduce GEMMA output (also I need to filter on missing phenotype data). The idea is to compute K and LMM in one step, followed by developing the LOCO version as a first class citizen. Based on above metrics we should be able to reduce LOCO K of this sized dataset from 7 minutes to 30 seconds(!) That is even without any major optimizations.
25 MySQL to MariaDB
The version we are running today:
mysql --version mysql Ver 14.12 Distrib 5.0.95, for redhat-linux-gnu (x86_64) using readline 5.1
26 MySQL backups (stage2)
Backup to AWS.
env AWS_ACCESS_KEY_ID=* AWS_SECRET_ACCESS_KEY=* RESTIC_PASSWORD=genenetwork /usr/local/bin/restic --verbose init /mnt/big/mysql_copy_from_lily -r s3:s3.amazonaws.com/backupgn init
env AWS_ACCESS_KEY_ID=* AWS_SECRET_ACCESS_KEY=* RESTIC_PASSWORD=genenetwork /usr/local/bin/restic backup /mnt/big/mysql_copy_from_lily -r s3:s3.amazonaws.com/backupgn
I also added backups for genotypefiles and ES.
27 MySQL backups (stage1)
Before doing any serious work on MySQL I decided to create some backups. Lily is going to be the master for now, so the logical backup is on P1 which has a large drive. Much of the space is taken up by a running MySQL server (which is updated!) and a data file by Lei names 20151028dbsnp containing a liftover of dbSNP142. First I simply compressed the input files, not to throw them away.
Because ssh is so old on lily I can't login nicely from Penguin, but the other way works. This is temporary as mysql user
ssh -i /var/lib/mysql/.ssh/id_rsa pjotr@penguin
The script becomes (as a weekly CRON job for now) as mysql user
rsync -va /var/lib/mysql --rsync-path=/usr/bin/rsync -e "ssh -i /var/lib/mysql/.ssh/id_rsa" pjotr@penguin:/mnt/big/mysql_copy_from_lily/
This means we have a weekly backup for now. I'll improve it with more disk space and MariaDB to have incrementals.
Actually, it looks like only two tables really change
-rw-rw---- 1 pjotr mysql 29G Aug 16 18:59 ProbeSetData.MYD -rw-rw---- 1 pjotr mysql 36G Aug 16 19:00 ProbeSetData.MYI
which are not that large.
To make incrementals we are opting for [restic](). It looks modern and has interesting features.
env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic init -r /mnt/big/backup_restic_mysql/
now backups can be generated incrementally with
env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic --verbose backup /mnt/big/mysql_copy_from_lily -r /mnt/big/backup_restic_mysql
To list snapshots (directory format)
env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic -r backup_restic_mysql/ snapshots
Check
env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic -r backup_restic_mysql/ check
Prune can merge hashes, saving space
env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic -r backup_restic_mysql/ prune
So, now we can do a daily backup from Lily and have incrementals stored on Penguin too. My cron reads
40 5 * * * env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic --verbose backup /mnt/big/mysql_copy_from_lily -r /mnt/big/backup_restic_mysql|mail -s "MySQL restic backup" pjotr2017@thebird.nl
Restic can push to AWS S3 buckets. That is the next step (planned).
28 Migrating GN1 from EC2
GN1 is costing us $\600+ per month on EC2. With our new shiny server we should move it back into Memphis. The main problem is that the base image is
Linux version 2.6.18-398.el5 (mockbuild@builder17.centos.org) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-55)) #1 SMP Tue Sep 16 20:50:52 EDT 2014
I mean, seriously, ten years old?
Compared to GN2, GN1 should be simpler to deploy. Main problem is that it requires Python 2.4 - currently. Not sure why that is.
Some system maintenance scripts live here. There is a Docker image by Artem here. And there are older notes referred to in the Artem doc. All of that is about GN2 really. So far, I have not found anything on GN1. In /usr/lib/python2.4/site-packages I find the following modules:
Alacarte elementtree gmenu.so htmlgen iniparse invest json libsvn nose numarray piddle pirut pp.py pptransport.py ppworker.py pyx pyXLWriter reaper.so sos svg urlgrabber
nothing too serious. Worse is that GN1 uses modpython which went out of vogue before 2013. The source code is still updated - you see, once things are out there…
This means we'll need to deploy Apache in the VM with modpython. The installation on Lily pulls in 50+ Apache modules. Argh. If we want to support this… The good news is that most modules come standard with Apache - and I think we can disable a lot of them.
29 Fixing Gunicorn in use
DEBUG:db.webqtlDatabaseFunction:.retrieve_species: retrieve_species result:: mouse DEBUG:base.data_set:.create_dataset: dataset_type: ProbeSet [2018-07-16 16:53:51 +0000] [4185] [ERROR] Connection in use: ('0.0.0.0', 5003) [2018-07-16 16:53:51 +0000] [4185] [ERROR] Retrying in 1 second.
30 Updating ldc with latest LLVM
Updating ldc to latest leaves only 5 tests failing! That is rather good.
99% tests passed, 5 tests failed out of 1629 Total Test time (real) = 4135.71 sec The following tests FAILED: 387 - std.socket (Failed) 785 - std.socket-debug (Failed) 1183 - std.socket-shared (Failed) 1581 - std.socket-debug-shared (Failed) 1629 - lit-tests (Failed) build/Testing/Temporary/LastTest.log: (core.exception.AssertError@std/socket.d(456): Assertion failure build/Testing/Temporary/LastTest.log:core.exception.RangeError@std/socket.d(778): Range violation and Failing Tests (1): LDC :: plugins/addFuncEntryCall/testPlugin.d
Fixing these ldc 1.10.0 compiles on LLVM 3.8!
31 Fixing sambamba
We were getting this new error
/gnu/store/4snsi4vg06bdfi6qhdjfbhss16kvzxj7-ldc-1.10.0/include/d/std/numeric.d(1845): Error: read-modify-write operations are not allowed for shared variables. Use core.atomic.atomicOp!"+="(s, e) instead.
which was by the normalize class
bool normalize(R)(R range, ElementType!(R) sum = 1)
the normalization functions fractions range values to the value of sum, e.g.,
a = [ 1.0, 3.0 ]; assert(normalize(a)); assert(a == [ 0.25, 0.75 ]);
so, the question is why it needs to be a shared range. I modified it to cast to shared after normalization.
The next one was
BioD/bio/maf/reader.d(53): Error: cannot implicitly convert expression `this._f.byLine(cast(Flag)true, '\x0a')` of type `ByLineImpl!(char, char)` to `ByLine!(char, char)`
also reported on
After fixing the compile time problems the tests failed for view, view -f unpack, subsample and sort(?!). In fact all md5sum's we test. For view it turns out the order of the output differs. view -f sam returns identity also converting back from BAM. It had to be something in the header. The header (apparently) contains the version of sambamba!
ldc2 -wi -I. -IBioD -IundeaD/src -g -O3 -release -enable-inlining -boundscheck=off -of=bin/sambamba bin/sambamba.o utils/ldc_version_info_.o htslib/libhts.a /usr/lib/x86_64-linux-gnu/liblz4.a -L-L/home/travis/dlang/ldc-1.10.0/lib -L-L/usr/lib/x86_64-linux-gnu -L-lrt -L-lpthread -L-lm -L-llz4 bin/sambamba.o: In function `_D5utils3lz426createDecompressionContextFZPv': /home/travis/build/biod/sambamba/utils/lz4.d:199: undefined reference to `LZ4F_createDecompressionContext' /home/travis/build/biod/sambamba/utils/lz4.d:200: undefined reference to `LZ4F_isError' (...) objdump -T /usr/lib/x86_64-linux-gnu/liblz4.so|grep LZ4F 000000000000cd30 g DF .text 00000000000000f6 Base LZ4F_flush 000000000000ce30 g DF .text 0000000000000098 Base LZ4F_compressEnd 000000000000c520 g DF .text 00000000000002f5 Base LZ4F_compressBegin 000000000000c4a0 g DF .text 000000000000003f Base LZ4F_createCompressionContext 000000000000dd60 g DF .text 00000000000000ee Base LZ4F_getFrameInfo 000000000000ced0 g DF .text 00000000000002eb Base LZ4F_compressFrame 000000000000c4e0 g DF .text 0000000000000033 Base LZ4F_freeCompressionContext 000000000000c470 g DF .text 0000000000000029 Base LZ4F_getErrorName 000000000000c460 g DF .text 000000000000000a Base LZ4F_isError 000000000000c8a0 g DF .text 00000000000000e3 Base LZ4F_compressFrameBound 000000000000d1c0 g DF .text 0000000000000038 Base LZ4F_createDecompressionContext 000000000000d200 g DF .text 0000000000000037 Base LZ4F_freeDecompressionContext 000000000000c990 g DF .text 0000000000000396 Base LZ4F_compressUpdate 000000000000d240 g DF .text 0000000000000b1d Base LZ4F_decompress 000000000000c820 g DF .text 000000000000007d Base LZ4F_compressBound
32 Trapping NaNs
When a floating point computation results in a NaN/inf/underflow/overflow GEMMA should stop The GSL has a generic function gslieeeenvsetup which works through setting an environment variable. Not exactly useful because I want GEMMA to run with just a –check switch.
The GNU compilers have feenableexcept as a function which did not work immediately. Turned out I needed to load fenv.h before the GSL:
#include <fenv.h> #include "gsl/gsl_matrix.h"
Enabling FP checks returns
Thread 1 "gemma" received signal SIGFPE, Arithmetic exception. 0x00007ffff731983d in ieeeck_ () from /home/wrk/opt/gemma-dev-env/lib/libopenblas.so.0 (gdb) bt #0 0x00007ffff731983d in ieeeck_ () from /home/wrk/opt/gemma-dev-env/lib/libopenblas.so.0 #1 0x00007ffff6f418dc in dsyevr_ () from /home/wrk/opt/gemma-dev-env/lib/libopenblas.so.0 #2 0x0000000000489181 in lapack_eigen_symmv (A=A@entry=0x731a00, eval=eval@entry=0x731850, evec=evec@entry=0x7317a0, flag_largematrix=<optimized out>) at src/lapack.cpp:195 #3 0x00000000004897f8 in EigenDecomp (flag_largematrix=<optimized out>, eval=0x731850, U=U@entry=0x7317a0, G=G@entry=0x731a00) at src/lapack.cpp:232 #4 EigenDecomp_Zeroed (G=G@entry=0x731a00, U=U@entry=0x7317a0, eval=eval@entry=0x731850, flag_largematrix=<optimized out>) at src/lapack.cpp:248 #5 0x0000000000457b3a in GEMMA::BatchRun (this=this@entry=0x7fffffffcc30, cPar=...) at src/gemma.cpp:2598
Even for the standard dataset!
Turns out it is division by zero and FP underflow in lapackeigensymmv.
33 A gemma-dev-env package
I went through difficulties of updating GNU Guix and writing a package that creates a GEMMA development environment. This was based on the need for having a really controlled dependency graph. It went wrong last time we released GEMMA, witness gemma got slower, leading to the discovery that I had linked in a less performing lapack.
GNU Guix was not behaving because I had not updated in a while and I discovered at least one bug. Anyway, I have a working build system now and we will work on code in the coming weeks to fix a number of GEMMA issues and bring out a new release.
34 Reviewing a CONDA package
Main achievement last week was getting GEMMA installed in a controlled fashion and proving performance is still up to scratch.
For JOSS I am reviewing a CONDA package for RNA-seq analysis. The author went through great lengths to make it easy to install with Bioconda, so I thought to have a go. GNU Guix has a conda bootstrap, so time to try that!
guix package -A conda conda 4.3.16 out gnu/packages/package-management.scm:704:2
and wants to install
/gnu/store/pj6d293c7r9xrc1nciabjxmh05z24fh0-Pillow-4.3.0.tar.xz /gnu/store/m5prqxzlgaargahq5j74rnvz72yhb77l-python-olefile-0.44 /gnu/store/s9hzpsqf9zh9kb41b389rhmm8fh9ifix-python-clyent-1.2.1 /gnu/store/29wr2r35z2gnxbmvdmdbjncmj0d3l842-python-pytz-2017.3 /gnu/store/jv1p8504kgwp22j41ybd0j9nrz33pmc2-python-anaconda-client-1.6.3.tar.gz /gnu/store/l6c40iwss9g23jkla75k5f1cadqbs4q5-python-dateutil-2.6.1 /gnu/store/y4h31l8xj4bd0705n0q7a8csz6m1s6s5-python-pycosat-0.6.1 /gnu/store/8rafww49qk2nxgr4la9i2v1yildhrvnm-python-cookies-2.2.1 /gnu/store/s5d94pbsv779nzi30n050qdq9w12pi52-python-responses-0.5.1 /gnu/store/kv8nvhmb6h3mkwyj7iw6zrnbqyb0hpld-python-conda-4.3.16.tar.gz /gnu/store/cns9xhimr1i0fi8llx53s8kl33gsk3c4-python-ruamel.yaml-0.15.35
The CONDA package in Guix is a bit older - turns out CONDA has a ridiculous release rate. Let's try the older CONDA first
guix package -i conda
And
conda config --add channels defaults Warning: 'defaults' already in 'channels' list, moving to the top conda config --add channels conda-forge Warning: 'conda-forge' already in 'channels' list, moving to the top conda config --add channels bioconda Warning: 'bioconda' already in 'channels' list, moving to the top
so, that was all OK Next
conda install -c serine rnasik Package plan for installation in environment /gnu/store/h260m1r0bgnyypl7r469lin9gpyrh12m-conda-4.3.16: The following NEW packages will be INSTALLED: asn1crypto: 0.24.0-py_1 conda-forge backports: 1.0-py36_1 conda-forge backports.functools_lru_cache: 1.5-py_1 conda-forge bedtools: 2.25.0-3 bioconda bigdatascript: v2.0rc10-0 serine bwa: 0.7.17-pl5.22.0_2 bioconda bzip2: 1.0.6-1 conda-forge ca-certificates: 2018.4.16-0 conda-forge certifi: 2018.4.16-py36_0 conda-forge cffi: 1.11.5-py36_0 conda-forge chardet: 3.0.4-py36_2 conda-forge click: 6.7-py_1 conda-forge colormath: 3.0.0-py_2 conda-forge conda: 4.5.8-py36_1 conda-forge conda-env: 2.6.0-0 conda-forge cryptography: 2.2.1-py36_0 conda-forge curl: 7.60.0-0 conda-forge cycler: 0.10.0-py_1 conda-forge dbus: 1.11.0-0 conda-forge decorator: 4.3.0-py_0 conda-forge expat: 2.2.5-0 conda-forge fastqc: 0.11.5-pl5.22.0_3 bioconda fontconfig: 2.12.1-4 conda-forge freetype: 2.7-1 conda-forge future: 0.16.0-py_1 conda-forge gettext: 0.19.8.1-0 conda-forge glib: 2.53.5-1 conda-forge gst-plugins-base: 1.8.0-0 conda-forge gstreamer: 1.8.0-1 conda-forge hisat2: 2.1.0-py36pl5.22.0_0 bioconda icu: 58.2-0 conda-forge idna: 2.7-py36_2 conda-forge je-suite: 2.0.RC-0 bioconda jinja2: 2.10-py_1 conda-forge jpeg: 9b-2 conda-forge krb5: 1.14.6-0 conda-forge libffi: 3.2.1-3 conda-forge libgcc: 5.2.0-0 libiconv: 1.14-4 conda-forge libpng: 1.6.34-0 conda-forge libssh2: 1.8.0-2 conda-forge libxcb: 1.13-0 conda-forge libxml2: 2.9.5-1 conda-forge lzstring: 1.0.3-py36_0 conda-forge markdown: 2.6.11-py_0 conda-forge markupsafe: 1.0-py36_0 conda-forge matplotlib: 2.1.0-py36_0 conda-forge mkl: 2017.0.3-0 multiqc: 1.5-py36_0 bioconda ncurses: 5.9-10 conda-forge networkx: 2.0-py36_1 conda-forge numpy: 1.13.1-py36_0 openjdk: 8.0.121-1 openssl: 1.0.2o-0 conda-forge pcre: 8.41-1 conda-forge perl: 5.22.0.1-0 conda-forge picard: 2.18.2-py36_0 bioconda pip: 9.0.3-py36_0 conda-forge pycosat: 0.6.3-py36_0 conda-forge pycparser: 2.18-py_1 conda-forge pyopenssl: 18.0.0-py36_0 conda-forge pyparsing: 2.2.0-py_1 conda-forge pyqt: 5.6.0-py36_5 conda-forge pysocks: 1.6.8-py36_1 conda-forge python: 3.6.3-1 conda-forge python-dateutil: 2.7.3-py_0 conda-forge pytz: 2018.5-py_0 conda-forge pyyaml: 3.12-py36_1 conda-forge qt: 5.6.2-3 conda-forge readline: 6.2-0 conda-forge requests: 2.19.1-py36_1 conda-forge rnasik: 1.5.2-0 serine ruamel_yaml: 0.15.35-py36_0 conda-forge samtools: 1.5-1 bioconda setuptools: 40.0.0-py36_0 conda-forge simplejson: 3.8.1-py36_0 bioconda sip: 4.18-py36_1 conda-forge six: 1.11.0-py36_1 conda-forge skewer: 0.2.2-1 bioconda spectra: 0.0.11-py_0 conda-forge sqlite: 3.13.0-1 conda-forge star: 2.5.2b-0 bioconda subread: 1.5.3-0 bioconda tk: 8.5.19-2 conda-forge tornado: 5.1-py36_0 conda-forge urllib3: 1.23-py36_0 conda-forge wheel: 0.31.1-py36_0 conda-forge xorg-libxau: 1.0.8-3 conda-forge xorg-libxdmcp: 1.1.2-3 conda-forge xz: 5.2.3-0 conda-forge yaml: 0.1.7-0 conda-forge zlib: 1.2.11-0 conda-forge
That is a rather long list of packages, including OpenJDK. Conda
CondaIOError: IO error: Missing write permissions in: /gnu/store/h260m1r0bgnyypl7r469lin9gpyrh12m-conda-4.3.16 You don't appear to have the necessary permissions to install packages into the install area '/gnu/store/h260m1r0bgnyypl7r469lin9gpyrh12m-conda-4.3.16'. However you can clone this environment into your home directory and then make changes to it. This may be done using the command: $ conda create -n my_root --clone=/gnu/store/h260m1r0bgnyypl7r469lin9gpyrh12m-conda-4.3.16
OK, I suppose that is an idea, though it kinda defeats the idea of a reproducible base repo. But this worked:
conda create -n joss-review-583 conda install -n joss-review-583 -c serine rnasik
The good news is that conda installs in one directory. But 2.7 GB downloaded…
conda info --envs # conda environments: # conda /home/wrk/.conda/envs/conda joss-review-583 /home/wrk/.conda/envs/joss-review-583 root * /gnu/store/h260m1r0bgnyypl7r469lin9gpyrh12m-conda-4.3.16
Activate the environment
source activate joss-review-583
RNAsik --help 00:00:00.000 Bds 2.0rc10 (build 2018-07-05 09:58), by Pablo Cingolani Usage: RNAsik -fqDir </path/to/your/fastqs> [options] main options -fqDir <string> : path to fastqs directory, can be nested -align <string> : pick your aligner [star|hisat|bwa] -refFiles <string> : directory with reference files -paired <bool> : paired end data [false], will also set pairIds = "_R1,_R2" -all <bool> : short hand for counts, mdups, exonicRate, qc, cov and multiqc more options -gtfFile <string> : path to refFile.gtf -fastaRef <string> : path to refFile.fa -genomeIdx <string> : genome index -counts <bool> : do read counts [featureCounts] -mdups <bool> : process bam files, sort and mark dups [picard] -qc <bool> : do bunch of QCs, fastqc, picard QCs and samtools -exonicRate <bool> : do Int(ra|er)genic rates [qualiMap] -multiqc <bool> : do MultiQC report [multiqc] -trim <bool> : do FASTQ trimming [skewer] -cov <bool> : make coverage plots, bigWig files -umi <bool> : deduplicates using UMIs extra configs -samplesSheet <string> : tab delimited file, each line: old_prefix \t new_prefix -outDir <string> : output directory [sikRun] -extn <string> : specify FASTQ files extension [.fastq.gz] -pairIds <string> : specify read pairs, [none] -extraOpts <string> : add extra options through a file, each line: toolName = options -configFile <string> : specify custome configuration file
I may be critical about CONDA, but this works ;)
Now I tried on a different machine and there was a problem on activate where the environment bumped me out of a shell. Hmmm. The conda settings of activate are:
CONDA_DEFAULT_ENV=joss-review-583 CONDA_PATH_BACKUP=/home/wrk/opt/gemma-dev-env/bin:/usr/bin:/bin CONDA_PREFIX=/home/wrk/.conda/envs/joss-review-583 CONDA_PS1_BACKUP='\[\033[0;35m\]\h:\w\[\033[0m\]$ ' JAVA_HOME=/home/wrk/.conda/envs/joss-review-583 JAVA_HOME_CONDA_BACKUP= PATH=/home/wrk/.conda/envs/joss-review-583/bin:/home/wrk/opt/gemma-dev-env/bin:/usr/bin:/bin _CONDA_D=/home/wrk/.conda/envs/joss-review-583/etc/conda/activate.d _CONDA_DIR=/home/wrk/opt/gemma-dev-env/bin
I guess I can replicate that
penguin2:~$ export JAVA_HOME=$HOME/.conda/envs/joss-review-583 penguin2:~$ export CONDA_PREFIX=$HOME/.conda/envs/joss-review-583 penguin2:~$ export PATH=$HOME/.conda/envs/joss-review-583/bin:$PATH conda install -n joss-review-583 -c bioconda qualimap
wget bioinformatics.erc.monash.edu/home/kirill/sikTestData/rawData/IndustrialAntifoamAgentsYeastRNAseqData.tar
35 Updates
It has been a while since I updated the BLOG (see below older BLOGs). Time to start afresh because we have interesting developments going and the time ahead looks particularly exciting for GEMMA and GeneNetwork with adventures in D, CUDA, Arvados, Jupyter labs, IPFS, blockchain and the list goes on! I also promised to write a BLOG on our development/deployment setup. Might as well start here. My environments are very complex but controlled thanks to GNU Guix.
|
https://thebird.nl/blog/work/rotate.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Juan Camus1
ABSTRACT
This paper challenges the deep-rooted notion that value creation in mining is all about
production and costs. Instead, it puts forward that it mainly refers to the capacity of
companies to continually increase the prospective mineral resources and transform them into
economically mineable mineral reserves in the most effective and efficient way.
To support this proposition, this study seeks to demonstrate that mining companies that excel
in total shareholder return (TSR) over an entire economic cycle are those that also excel in
expanding their reserves and production, here referred to as total reserves increment (TRI).
The relationship between both variables is simple and revealing – company share price is to
mineral reserves as dividends are to production. This match gives economic sense to the term
‘deposit’, which is used in mining parlance to refer to an ore body.
Results obtained from a diverse group of 14 mining companies over the period 2000-2009
evince the previous hypothesis. There are two doubtful cases, but as the paper suggests these
are transitional companies in the process of converting promising mineral resources into
mineral reserves, which the market anticipates.
INTRODUCTION
To understand why some companies excel in creating value, Jim Collins (2001) and a group
of postgraduate business students engaged in a comprehensive research project. This took
them nearly 5 years to complete. They examined 1,435 US companies for over 40 years,
seeking those that made significant improvements in their financial performance over time.
The distinctive pattern sought was:
The search identified 11 outperforming companies, all in different industries, that were called
the “good to great” companies. To contrast these companies with similar peers within the
same industry, the search also selected a group of comparison companies that failed to make
the leap from good to great. Then Collins sets out to examine the transition point. What
distinguishing features did the good-to-great companies have that their industry counterparts
did not?
*
A reformatted version of this paper has been published in Mining Engineering Magazine, March 2011 issue
1
Former Mining Industry Fellow at The University of Queensland, Australia and currently Managing Partner at
InnovaMine, Chile – E-mail: juan.camus@innovamine.cl
1
At the heart of the findings about these outstanding companies is what Collins’s refers to as
the “hedgehog concept”2 – how to find the one big thing that the company must focus on.
According to Collins, those who built the good-to-great companies were, to one degree or
another, hedgehogs. Those who led the comparison companies tended to be foxes, never
gaining the clarifying advantage of a hedgehog concept, being instead scattered, diffused, and
inconsistent.
This insight is complemented by Peter Drucker (2004), an influential management guru, who
asserts that the first practice an effective executive should ask is: "what needs to be done?"
Asking this question and taking it seriously, he asserts, is crucial for managerial success; and
that the answer almost always has more than one urgent task, but the effective leader only
concentrates on one task. In this last respect, Drucker emphasizes:
“If they [executives] are among the sizeable minority who work best with a change of
pace in the working day, they pick two tasks. I have never encountered an executive who
is effective while tackling more than two tasks at a time.”
This paper sets out to prove that the ‘one big thing’ in the mining business is mineral reserves
growth. Additionally, that the crucial task in this regard is to continually increase mineral
resources, via explorations and/or acquisitions, and convert these into mineral reserves in the
most effective and efficient way. This is a fundamental, core process that needs to be run in a
disciplined and focussed way to create shareholder value.
Mineral resources and mineral reserves are widespread terms in the mining industry. Their
relevance in the functioning of markets has led some mining jurisdictions to adopt specific
guidelines to report them – the JORC code in Australia, the SAMREC code in South Africa,
and the CIM standards (NI 43-101) in Canada, among others. In any case, for the non-
specialist, a mineral resource is an occurrence of minerals that has reasonable prospects for
eventual economic extraction whereas a mineral reserve is the mineable part of a mineral
resource that has demonstrated its economic viability.
At first sight the preceding proposition may seem obvious, but reality suggests otherwise. A
case in point, the proper information to measure the contribution of mineral resources in the
companies’ bottom line, and eventually in their market value, is neither easily available nor
consistent. Likewise, integrated mining companies rarely measure financial performance
incrementally throughout the value chain. This is usually gauged in aggregate – e.g. from
exploration to market, which may include intermediate activities such as planning and
development, engineering and construction, operations, and logistics. This practice, in turn,
tends to conceal the complementary proposition that the downstream, industrial-type
activities beyond planning and development generally yield, at most, the discount rate.
2
This concept is taken from a verse fragment by the ancient Greek poet Archilochus that goes “The fox knows
many things, but the hedgehog knows one big thing”
2
Another drawback in mining is that performance indices to measure capital efficiency rarely
consider the market value of mineral resources as another form of invested capital. This is in
fact what Nobel Laureate Robert Solow (1974) highlights in a seminal paper when he says:
“A pool of oil or vein of iron ore or deposit of copper in the ground is a capital asset to
society and to its owner (in the kind of society in which such things have private
owners) much like a printing press or a building or any other reproducible capital
asset...A resource deposit draws its market value, ultimately, from the prospect of
extraction and sale.”
The practice of not treating resource deposits as capital assets – embedded in most accounting
standards, including the newly International Financial Reporting Standards (IFRS) – tends to
mislead the economic analysis that reveals how value actually flows throughout the business
and where it really comes from.
All of this does not mean that mining companies are not aware of the importance of mineral
reserves growth. However, when this aspect makes a difference, the argument is that it is
rarely, if ever, the outcome of a systematic approach. Instead, it seems to be the result of
astute, entrepreneurial executives who happen to understand the importance of this subject
and drive the company’s focus accordingly.
To support the claim that what matters most in the mining business is mineral reserves
growth the following section describes the model and methodology used in this study. The
subsequent section examines the results of a survey conducted to assess this claim and briefly
discusses the relative position of distinct mining companies included in the study. The final
section discusses some ideas for mining companies to deal more effectively with mineral
reserves growth. This, in turn, lays the ground for future discussions and research in the area.
3
General management, human resources, technology and procurement
Upstream, resource-related activities Downstream, industrial-type activities
Upstream are the resource-related activities, generically known in mining as mineral resource
management. This function covers exploration as well as planning and development whose
respective goals are to discover mineral resources and transform them creatively into mineral
reserves in the most effective and efficient way. The outcome is a business plan that defines
the methods and rate at which the ore body is mined (life-of-mine production plan) together
with the onsite and offsite infrastructure needed to deliver the mineral products to market.
Downstream are the industrial-type activities whose aim is to execute the business plan
previously conceived upstream. These begin with the project management function
accountable for engineering and building the development component of the plan. After
project commissioning, the operations management function takes responsibility for using the
productive capacity by executing the ongoing operational component of the plan. The
outcome, in this case, is production delivery. This typically encompasses mining and
processing activities, including the logistics and transportation to deliver the final products on
the port of embarkment. At the end of the value chain is the marketing function whose aims is
to develop markets and determine the time and place at which to sell mineral products to
optimise revenue. In fact, its outcome is revenue realisation.
Splitting the mining business in this way has become widespread after Porter’s seminal
contribution. But as already discussed, measuring the value added of each activity along the
value chain to then set the company’s course on the right activities remains a challenge. As a
matter of fact, in the mining industry there is still a deep-rooted belief that value creation
primarily rests on the downstream, industrial-type activities that focus on earnings, which in
turn are driven by production and costs. Instead, the proposition here is that value creation in
mining is mainly the result of managing effectively the upstream, resource-related activities
that should focus on disciplined reserves growth.
This is exactly the point raised not long ago by Standard & Poor’s (2008), one of the world’s
leading providers of investment ratings and research data. In a white paper, it commented:
The hypothesis is that mining companies that excel in Total Shareholder Return (TSR) over
an entire economic cycle are those that also excel in increasing their reserves and production,
hereafter referred to as Total Reserves Increment (TRI). Put it more precisely, the necessary
condition for a mining company to exceed its peer group’s average TSR over a business cycle
is its capacity to increase its reserves and production above the group’s average. This is a
necessary condition since sufficiency is given by the way mining companies actually achieve
this growth. If this is done through debt-financed acquisitions at the peak of a cycle, for
instance, this may cause financial distress if followed by an abrupt downturn.
The dependent variable, TSR, is an index that measures the overall company financial
performance over time. It takes into account the share price variation and dividends paid to
shareholders in a period. Hence, it provides the total return obtained by shareholders over that
period. Dividends can be simply added to the share price difference although the most
accepted assumption is to reinvest them in additional shares.
A key feature of this index is that while its absolute value is contingent to market conditions,
companies’ relative position within a peer group reflects the market perception of both
individual and overall performance. This characteristic is useful to isolate the effect of non-
controllable market variables (e.g. commodity and consumable prices) and focus attention on
the controllable variables that also affect TSR.
− + ∑
=
Where:
The independent variable, TRI, is an index that measures variation in mineral reserves and
production in a period of time.
− + ∑
=
5
Where:
As previously defined, mineral reserves are the economically mineable part of mineral
resources. As such, they represent future production that companies expect to extract from
their mines. If this work is done properly, no mineral reserve addition or subtraction would be
feasible without diminishing the value of the business or augmenting the risk associated. This
work – which involves the formulation and optimisation of a business plan, including a mine
plan – is actually the outcome of the planning and development function depicted in Figure 1.
This is perhaps the most critical job of the mining business. At such, it requires people with
imagination allied to a deep understanding of the mineral deposit, it emplacement and the
way in which value can be maximised. It also involves knowledge and dexterity of mining
techniques together with sensible assumptions about the downstream activities spanning from
project engineering and construction, through operations and logistics, to sales and
marketing.
In the model under analysis, the counterpart of the company share price is mineral reserves.
For operating mining companies, dividends are frequently the matching part of production.
This correspondence in model variables is in fact what gives economic sense to the term
‘deposit’, which is usually used in mining parlance to refer to an orebody.
To prove the previous hypothesis, this study compares TSR and TRI for a selected group of
fourteen mining companies over the period 2000-2009. This 9-year period covers adequately
an entire economic cycle. In effect, at the onset, in 2001, the mining sector was leaving
behind the so-called dot-com bubble burst. The market then experienced a spectacular turn
around that spanned from 2003 to mid-2008. At this point unfolds the global financial crisis,
which caused a sharp fall in commodity prices. These later recovered to a great extent,
toward the end of 2009. However, during the whole period, the valuation of mining
companies went through the typical ups and downs of the mining sector, although
exacerbated by what was named the super cycle. On the whole, almost all firms grew in the
period – some more than others, though.
The companies included in this study represent adequately the worldwide mining industry as
they cover a wide spectrum; from global, diversified mining groups, through mid-size, more
focussed players, to large and mid-tier international gold producers. Nevertheless, a few
important mining companies are missing, Xstrata Plc being a case in point. This is because
their information for the whole period, particularly on mineral reserves and production, was
not readily available. Table 1 lists the companies included in the research and their trading
information.
6
Company Incorporated in Listed on3 Ticker symbol Currency
Anglo American Plc South Africa/UK LSE AAL £
Antofagasta Plc Chile/UK LSE ANTO £
Barrick Gold Corp Canada NYSE ABX US$
BHP Billiton Ltd Australia/UK NYSE BHP US$
Freeport-McMoRan Inc USA NYSE FCX US$
Gold Fields Ltd South Africa NYSE GFI US$
Goldcorp Inc Canada NYSE GG US$
Inmet Mining Corp Canada TSE IMN C$
Kinross Gold Corp Canada NYSE KGC US$
Newcrest Mining Ltd Australia ASX NCM A$
Newmont Mining Corp USA NYSE NEM US$
Rio Tinto Plc UK/Australia NYSE RTP4 US$
Teck Resources Ltd Canada TSE TCK-B C$
Vale S.A. Brazil NYSE VALE US$
Measurement of TSR
To make figures comparable, TSR was calculated using US dollars as the base currency.
Share price data as well as dividends were taken from databases available in various open
financial web sites. These include Yahoo, Google, and Bloomberg to name the most well
known. Some inconsistencies were checked against companies’ annual reports as well as
information available in companies’ web sites.
The daily share price series used in the calculation of TSR were adjusted considering the
reinvestment of dividends and stock splits. The latter refers to management decision to
increase or decrease the number of shares without affecting the company’s market
capitalisation – to make them more tradeable, for instance.
Table 2 displays a summary of the share market information collected and processed in the
case of Antofagasta Plc. Share prices displayed in Table 2 correspond to close price of the
last business day in the respective year. Likewise, dividends are all those paid in the
respective calendar year.
3
NYSE is the New York Stock Exchange in the United States of America; LSE is the London Stock Exchange
in the United Kingdom; TSE is the Toronto Stock Exchange in Canada; and ASX is the Australian Securities
Exchange in Australia
4
On 12 October 2010, Rio Tinto changed the ticker symbol of its ADR trading on the NYSE from RTP to RIO.
The aim was to align it with Rio Tinto's symbols on the LSE and ASX where the company has primary listings.
7
Price Dividends Adjusted price
Year Split
(£/Sh) (£/Sh) (£/Sh) (US$/Sh)
2000 3.52 n/a - 0.88 1.32
2001 4.20 0.28 - 1.12 1.63
2002 4.44 0.24 - 1.24 1.92
2003 8.44 0.18 - 2.59 4.61
2004 8.97 0.21 - 2.81 5.42
2005 14.95 0.42 - 4.85 8.37
2006 5.09 0.53 4:1 6.77 13.29
2007 7.17 0.25 - 9.61 19.20
2008 4.26 0.26 - 5.75 8.31
2009 9.92 0.33 - 13.54 21.76
Measurement of TRI
Information on mineral reserves and production was obtained from the companies’ annual
reports. Complementary information was also taken from databases supported by securities
regulatory agencies in the United States, Australia and Canada.
As all companies in the survey produce more than one commodity, variations in reserves and
production were measured in terms of an equivalent commodity. It means all minor products
were expressed in terms of the major commodity, using an average price and metallurgical
recovery for each product.
In the case of global, diversified mining companies, running various product groups, TRI was
calculated in two steps: first, an estimate of the reserves and production variation for each
product group using the above equivalent concept; and second, an estimate of the weighted
average whole variation, using an average of the underlying product group’s earnings in the
period as the weighted factor.
It is worth mentioning that public information on mineral resources, mineral reserves, and
production is intricate and therefore difficult to compare, particularly in the case of mineral
reserves. The main problem is consistency and lack of relevant information. To add reserves
and production, for instance, the former have to be recoverable; that is, net of metallurgical
loses. However, only a small number of companies report reserves in this way. They usually
report in situ reserves but recoveries are not always provided. Another source of confusion is
ownership. Some companies report full consolidated figures of their controlling operations
but from the point of view of the company share price what matters is the attributable share.
Because of these difficulties and to make figures comparable, the collected information had
to be carefully reviewed and cleansed. When some critical data was not readily available, the
best professional judgement was used to replace the missing data. Therefore, TRI indices
provided here are a good approximation of the variation in reserves and production for the 14
companies in the period. Table 3 illustrates the way the information was finally compiled and
displayed in the case of Goldcorp Inc.
8
Gold Silver Copper Gold Eq.
Year Reserves Production Reserves Production Reserves Production Reserves Production
(MOz Au) (MOz Au) (MOz Ag) (MOz Ag) (Mt Cu) (Mt Cu) (MOz AuEq) (MOz AuEq)
2000 3.41 n/a - - - - 3.41 n/a
2001 3.85 0.61 - - - - 3.85 0.61
2002 4.47 0.61 - - - - 4.47 0.61
2003 4.23 0.60 - - - - 4.23 0.60
2004 4.15 0.63 - - - - 4.15 0.63
2005 11.70 1.14 36.32 10.43 0.58 0.07 15.78 1.70
2006 27.69 1.69 606.00 14.88 0.60 0.07 39.00 2.30
2007 30.12 2.29 836.54 17.07 0.23 0.07 42.00 2.92
2008 31.59 2.32 972.64 9.63 0.21 0.06 45.05 2.81
2009 33.25 2.42 1010.80 11.85 0.19 0.05 47.06 2.90
ANALYSIS OF RESULTS
Figure 2 displays both indices, TSR and TRI, for the fourteen companies. The graph axes are
in logarithmic scale to allow a better display of the whole set of data. These results indicate
that the proposition under analysis is robust. In effect, Figure 2 shows two distinct categories
of mining companies – the leading companies where TSR and TRI are above the group’s
average and the lagging companies where both indices are below the group’s average. This is
in fact what the preceding hypothesis is all about.
TSR
Inmet Inmet*
Vale
Goldcorp
Antofagasta Newcrest
Kinross
Average TSR = 11.3
10.00 Freeport
BHP
Teck
Gold Fields
Rio Tinto
Anglo
Newmont Barrick
(*) Include important reserves reported on 31 Mar 2010 Average TRI = 3.8
1.00
0.10 1.00 10.00 TRI
Figure 2: Total shareholder return (TSR) versus total reserves increment (TRI)
9
Figure 2 also shows a third category where there are two companies that have a higher than
average TSR but a lower than average TRI. For the lay person this may cast doubts on the
proposition robustness; for the more inquisitive, though, this is precisely the place to be for
thriving mining companies developing promising mineral resources. In effect, under existing
reporting codes, mineral resources can only be reported as mineral reserves after complete
studies and authorisations. However, the market often tends to price these activities well in
advance and this may be a more plausible explanation for the observed phenomenon. In order
to confirm this rationale, both cases are examined in more detail.
In the case of Antofagasta, a Chilean-based copper corporation, its reserves plus production
grew about 1.2 times (120%), in the period whereas the TSR reached almost 16 (1600%). The
reason for this difference seems to be the exciting company’s exploration and evaluation
program, which cover North America, Latin America, Africa, Europe, and Asia. If it
succeeds in seizing the various opportunities highlighted in its 2009 annual report, it will
surely leap to the leading group as well. Among these opportunities, perhaps the most
relevant are the Reko Diq project in Pakistan and the Los Pelambres district in Chile.
According to the Antofagasta 2009 annual report, the feasibility study and environmental
impact study of Reko Diq are in their final stage. The mineral resource estimate is 5.9 billion
tonnes grading 0.41% Cu and 0.22 g Au/t. Reko Diq controlling firm is Tethyan Copper
Company, a 50-50 joint-venture between Antofagasta and Barrick. This company, which
owns a 75% of the project, continues negotiating with relevant Pakistani authorities for a
mineral agreement and mining lease.
Concerning the attractive prospect at the Los Pelambres district, the report says:
“Los Pelambres has total mineral resources of 6.2 billion tonnes with an average
copper grade of 0.52%. This includes mineral resources at the existing open pit, and
neighbouring deposits..., which were identified following an exploration programme
between 2006 and 2008... These mineral resources are significantly greater than the 1.5
billion tonnes of ore reserves currently incorporated in Los Pelambres’ mine plan... [I]t
presents opportunities for longer term planning either by providing additional material
10
in future years to extend the existing mine life, or by enabling Los Pelambres in the
longer term to consider possibilities for future growth.”
In the case of Barrick, a likely explanation seems to be its unpredictable hedging activities5,
which the market tends to penalise. In fact, for the very same reason, by mid last decade the
company decided to start de-hedging its gold positions, although at a low pace. Towards the
end of 2009, as the gold price continued its unrelenting upward trend, Barrick completely
eliminated its fixed-price hedge book. The benefit of this decision has not been fully captured
here as this occurred near to the survey closing date (31 Dec 2009). Perhaps another reason
for the market lack of enthusiasm for Barrick’s shares is that it continued using hedging
activities to secure copper prices from an operation in Chile as well as some input costs for its
main operations – crude oil prices, interest rates and foreign exchange rates, for instance.
Anglo American seems to be a different case. In the first half of the decade, at the onset of
the so-called mining super cycle, it acquired various good-quality mining assets at attractive
prices – the Disputada copper operations in Chile and Kumba Iron Ore operations in South
Africa. In the second half of the decade, at the peak of the mineral resources boom, the
company continued executing its growth strategy. This time, though, the major acquisitions
made in Brazil, US, Australia and Peru were more expensive and increased substantially the
outstanding debt of Anglo American. This put the company in a very difficult position when
the global financial crisis unfolded, by mid-2008. Among other corporate actions, this led to
the suspension of the 2009 dividend payments, a reduction of the 2009 capital expenditures
by more than 50%, and the dismissal of about a tenth of the work force.
With hindsight, the strategy to increase mineral reserves through debt-funded acquisition just
before the market crash proved to be lethal. In fact, at the midst of turmoil, toward the end of
2008, the debt-laden mining companies were the most beaten within the mining industry.
Rio Tinto is a special case as it obtained the group’s lowest TRI and a relative modest TSR
compared to its peers. During 2009, as metal prices recovered from a sharp drop occurred by
mid-2008, the company struggled to reduce a hefty debt taken on in 2007 to acquire Canada-
based aluminium company Alcan. Unlike typical mining acquisitions, it hardly added any
mineral reserves to Rio Tinto. Ironically, to reduce debt and strengthen its balance sheet, Rio
Tinto had to sell part of its mining assets thereby reducing reserves; it even mulled over a bid
5
Hedging is when companies contractually lock the price to be paid in the future for their production or supplies
11
from its biggest shareholder, China’s state-owned Chinalco, for a larger stake in the company
and a share of its best mining assets. As markets improved, towards mid-2009, shareholders
opted instead for a rights issue together with a production joint-venture with BHP Billiton to
jointly exploit and develop their Pilbara iron ore operations in Western Australia. Later, the
parties mutually agreed to abandon this alliance because regulators did not get the nod to the
proposal in its original form.
Refocussing mining
The preparation of this study required the careful examination of more than one hundred
annual reports of various mining companies. This work allowed a comparison of different
corporate strategies and emphasis, which led to some interesting general observations.
First, despite almost all mining companies recognise the importance of mineral resources and
mineral reserves growth, only a few highlight this information in the initial pages of their
annual reports. In effect, the most common information highlighted by mining companies
usually refers to financial data as well as production and cost statistics – production and sales,
underlying earnings, earning per share, dividends, capital expenditures, cash flows, total debt,
cash costs, and return on capital, among various other similar yardsticks. Out of the fourteen
mining companies surveyed, only one highlighted information on mineral resources and ore
reserves growth in the first pages of its annual report. This company is Newcrest, the
Australian gold producer, which has done this consistently since 2005. And its announcement
refers not only to the stock of mineral resources and ore reserves for the respective year but
also their variation with respect to the previous year, net of mining depletion. Interestingly,
Newcrest happens to be one of the leading companies in this survey.
Second, in one way or another, almost all mining companies use the value chain framework
in the definition of their business scope and/or strategy. However, it appears that none of
them organises their business accordingly, let alone measure value incrementally along the
value chain. These two claims are important so they deserve some elaboration.
Concerning the misalignment in the organisational structure of mining companies, this claim
cannot be categorical because neither companies’ annual reports nor their websites provide
detailed information on the structure and accountabilities of corporate executive positions.
Even though many mining companies have visible positions in the upper echelons for the
12
downstream activities, none of them considers a position accountable to execute the upstream
function of mineral resource management as described here. Although quite often there is a
senior executive position accountable for explorations, there is rarely one for the planning
and development function as envisaged here. In fact, in the traditional mining organisation
this is often executed downstream – either within the project management function or under
the operations management function. Sometimes the mineral resource planning function is
split in two and performed separately – a strategic planning unit inserted in a project group
and a tactical planning unit within an operational group.
In short, the only individual fully responsible for mineral reserves growth and the whole
value of mineral resources, in almost all mining companies, is ultimately the top executive.
The claim about a deficient value measurement in mining seems to be more categorical since
annual reports of mining companies never segregate financial information for each step of the
value chain. In the case of big mining houses, segregation is performed for business units,
product groups, or sometimes geographical zones, but as already noticed, in aggregate, from
exploration to market. Moreover, the concept of value, as formally employed in the field of
economics, is not something that can be calculated from information available on mining
companies’ annual reports. The reason is that the market value of mineral resources does not
appear on either the income statement or balance sheet. This means that the ongoing
economic value created in mining – that is, the residual value after deducting the opportunity
cost of the underlying mineral resource assets – cannot be measured for the upstream segment
of the business let alone the whole business.
McCarthy (2003) clearly illustrates this deficiency when asserting the poor record of mining
project in the previous decade. His analysis of 105 mining projects at the feasibility level
concluded that the main sources of risks and failures in mining projects come from geological
inputs and interpretations. In fact, he found that the poor record of mine feasibility studies
and project development are 66 percent of the time associated to issues directly linked to
geology inputs across all areas – geologic model, resource and reserve estimation, mine
planning and design, metallurgical test work, sampling and scale-up, among the most
important. What function of the value chain is responsible for this is not clear as in the
traditional mining organisation no one under the chief executive is accountable for mineral
resource management. Actually, in the typical mining company many people are responsible
for a bit of it, but none for the whole mineral resource and its intrinsic economic value.
13
Refocussing a mining company requires vision and determination. The evidence provided
here may be useful to understand the importance of mineral resource growth. Setting the
company’s compass on this new course, however, is always challenging in mature industries
like mining. This appears to be less difficult in small and medium-sized organisations as
change does not need to go through a heavy bureaucracy. Conversely, in large, global
corporations, with more complex and well-established fiefdoms, this type of change is
extremely difficult. Success, in this case, not only requires top management support but also
influential champions in senior and middle management as it deeply affects the company’s
structures, processes, systems and people.
A comparative illustration
A brief analysis of the oil industry could be useful to illustrate the main points raised here.
This is because it shares many commonalities with the mining industry, its dependence upon
an exhaustible resource being perhaps the most relevant.
The first distinction is the awareness of oil companies about the importance of the upstream
activities in the bottom line. These are usually referred to as “exploration and production”.
Accordingly, oil companies have the appropriate structures, processes and systems. In effect,
in the oil business it is common to find a senior position responsible for the upstream
segment, which quite often is the second in command after the chief executive.
Their processes and systems are also apt to report key financial information separately. This
is usually done along the value chain, which typically divides the business in upstream and
downstream activities, adding at times a midstream segment. For integrated multinationals,
the upstream segment usually represents a significant proportion of the company’s underlying
earnings. In fact, the ratio of upstream earnings to total earnings for Exxon, Shell and British
Petroleum (BP) is 0.89, 0.66, and 0.94, respectively, according to their 2009 annual reports.
This reality has led most of the integrated players to dispose of in recent years several of the
downstream less profitable businesses, service stations being one of them.
The rules to report oil reserves are highly standardised although these are more restrictive
than in mining. US authorities, for instance, only allow the disclosure of proved reserves,
whereas in mining it is proved and probable. Lately, though, US authorities opened the
possibility for oil & gas companies to report probable and possible reserves, which should
give investors a richer insight into a company's long-term value potential. Because of its
relevance, oil companies often highlight this information in their annual reports. Moreover,
they use a performance index that measures the extent to which production is continuously
replaced with proved reserves.
This analogy does not intend to diminish the practices used in the mining industry nor does it
imply that the oil industry is better. It only aims to highlight that the focus on the upstream
activities is far more explicit in the oil industry than in the mining industry.
14
CONCLUDING REMARKS
The previous model and results provide compelling evidence to support the claim that the
most effective levers of value creation in the mining business are often in the upper part of
the value chain. This function, here referred to as mineral resource management, is aimed at
increasing the companies’ mineral resources and transform them into mineral reserves in the
most effective and efficient way.
In practice, the typical drivers to increase reserves are either explorations or acquisitions.
What ensures success, if anything, is usually the creativity put into the planning and
development of both existing and potential mineral resources. This function has not been
fully exploited by mining companies as they tend to focus more on the downstream,
industrial-type activities. In fact, in the traditional mining organisation this function is rarely
executed in the upper part of the value chain as a core, stand-alone activity. This is often
performed downstream – in the form of studies and plans – under either project management
or operations management functions, or even both.
The previous practice, together with the current way of assessing value in mining, are aspects
crying for innovation. But for a long time organisational innovation in the mining industry
has been elusive. This was noticed by Alfred Chandler (1962), who at the time portrayed
mining as the industry less responsive to organisational change:
“Among the more than seventy companies studied, those that were not administering
their resources through the new multidivisional structure by 1960 were concentrated in
the metals and materials industries. Of these, copper and nickel companies had paid the
least attention to structure.”
More recent quantitative studies on innovation and productivity advance continue supporting
the previous claim, labelling the mining industry as conservative, traditional, and resistant to
change – Paul Bartos (2007). To overcome this negative reputation and tackle the previous
challenge, more research work in the area is needed. But beyond technical innovations – that
Bartos relates more to equipment manufacturers, suppliers and service providers – this
research should put more emphasis on organisational design issues and encompass the study
of better structures, processes, systems, and people. This is indeed the kind of innovations
that enabled Wal-Mart, Toyota, and Dell, for instance, to develop and become leaders in their
respective industries.
15
ACKNOWLEDGEMENTS
The author expresses his gratitude to the University of Queensland’s mining engineering
department for supporting this study. These thanks are also extended to various people and
organisations around the world that were keen to share public information of their companies
for the completion of this study. Silvia Tapia’s collaboration in data collection and index
calculation is also recognised and appreciated.
REFERENCES
Bartos, Paul (2007) Is mining a high-tech industry? Investigations into innovation and
productivity advance, Resources Policy 32 (2007) 149–158
Camus, Juan, Peter Knights and Silvia Tapia (2009) Value Generation in Mining: A New
Model, 2009Australian Mining Technology Conference – Brisbane, Queensland
Chandler, Alfred (1962) Strategy and Structure: Chapters in the History of the Industrial
Enterprise, Cambridge, Massachusetts: MIT Press
Drucker, Peter (2004) What Makes an Effective Executive, Harvard Business Review, 82 (6)
Solow, Robert (1974) The Economics of Resources or the Resources of Economics, American
Economic Review, Vol 64 (May): 1-14
Standards & Poor’s (2008) White Paper: Mining Industry-Specific Data, Compustat Data
Navigator ()
Williamson, Oliver (1971) Managerial discretion, organisational form, and the multi-
division hypothesis, in R. Marris and A. Wood (Eds.), The Corporate Economy, Cambridge,
Mass.: Harvard University Press
16
|
https://www.scribd.com/document/372054484/002-Value-Creation-in-the-Mining-Business-ME-Rev-1
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
More Videos
Streaming is available in most browsers,
and in the WWDC app..
Resources
Related Videos
WWDC 2016
- Download
[Music] [Silence] Dave Abrahams: Hi, everybody. My name is Dave Abrahams, and I'm the technical lead for the Swift standard library, and it is truly my privilege to be with you here today. It is great to see all of you in this room. The next 40 minutes are about putting aside your usual way of thinking about programming. What we're going to do together here won't necessarily be easy, but I promise you if you stick with me, that it'll be worth your time. I'm here to talk to you about themes at the heart of Swift's design, and introduce you to a way of programming that has the potential to change everything. But first, let me introduce you to a friend of mine. This is Crusty.
Now you've probably all worked with some version of this guy. Crusty is that old-school programmer who doesn't trust debuggers, doesn't use IDEs.
No, he favors an 80 x 24 terminal window and plain text, thank you very much. And he takes a dim view of the latest programming fads.
Now I've learned to expect Crusty to be a little bit cynical and grumpy, but even so it sometimes takes me by surprise.
Like last month we were talking about app development, and said flat out, 'I don't do object-oriented.' I could hardly believe my ears.
I mean, object-oriented programming has been around since the 1970s, so it's not exactly some new-fangled programming fad.
And, furthermore, lots of the amazing things that we've all built together, you and I and the engineers on whose shoulders we stand, were built with objects.
'Come on,' I said to him as I walked over to his old-school chalkboard. 'OOP is awesome.
Look what you can do with classes.' [Silence] Yes. So first you can group related data and operations.
And then we can build walls to separate the inside of our code from the outside, and that's what lets us maintain invariants.
Then we use classes to represent relatable ideas, like window or communication channel. They give us a namespace, which helps prevent collisions as our software grows. They have amazing expressive syntax. So we can write method calls and properties and chain them together. We can make subscripts. We can even make properties that do computation.
Last, classes are open for extensibility. So if a class author leaves something out that I need, well, I can come along and add it later.
And, furthermore, together, these things, these things let us manage complexity and that's really the main challenge in programming.
These properties, they directly address the problems we're trying to solve in software development. At that point, I had gotten myself pretty inspired, but Crusty just snorted and [sighed]. [hiss sound] He let all the air out of my balloon. And if that wasn't bad enough, a moment later he finished the sentence. [Laughter] Because it's true, in Swift, any type you can name is a first class citizen and it's able to take advantage of all these capabilities. So I took a step back and tried to figure out what core capability enables everything we've accomplished with object-oriented programming. Obviously, it has to come from something that you can only do with classes, like inheritance. And this got me thinking specifically about how these structures enable both code sharing and fine-grained customization. So, for example, a superclass can define a substantial method with complex logic, and subclasses get all of the work done by the superclass for free. They just inherit it. But the real magic happens when the superclass author breaks out a tiny part of that operation into a separate customization point that the subclass can override, and this customization is overlaid on the inherited implementation. That allows the difficult logic to be reused while enabling open-ended flexibility and specific variations. And now, I was sure, I had him.
'Ha,' I said to Crusty. 'Obviously, now you have to bow down before the power of the class.' 'Hold on just a darn tootin' minute,' he replied. 'First of all, I do customization whatchamacallit all the time with structs, and second, yes, classes are powerful but let's talk about the costs. I have got three major beefs with classes,' said Crusty.
And he started in on his list of complaints. 'First, you got your automatic sharing.' Now you all know what this looks like.
A hands B some piece of perfectly sober looking data, and B thinks, 'Great, conversation over.' But now we've got a situation where A and B each have their own very reasonable view of the world that just happens to be wrong. Because this is the reality: eventually A gets tired of serious data and decides he likes ponies instead, and who doesn't love a good pony? This is totally fine until B digs up this data later, much later, that she got from A and there's been a surprise mutation. B wants her data, not A's ponies.
Well, Crusty has a whole rant about how this plays out. 'First,' he says, 'you start copying everything like crazy to squash the bugs in your code.
But now you're making too many copies, which slows the code down. And then one day you handle something on a dispatch queue and suddenly you've got a race condition because threads are sharing a mutable state, so you start adding locks to protect your invariants.
But the locks slow the code down some more and might even lead to deadlock. And all of this is added complexity, whose effects can be summed up in one word, bugs.' But none of this is news to Cocoa programmers. [Laughter] It's not news. We've been applying a combination of language features like @property(copy) and coding conventions over the years to handle this. And we still get bitten.
For example, there's this warning in the Cocoa documentation about modifying a mutable collection while you're iterating through it. Right? And this is all due to implicit sharing of mutable state, which is inherent to classes.
But this doesn't apply to Swift. Why not? It's because Swift collections are all value types, so the one you're iterating and the one you're modifying are distinct. Okay, number two on Crusty's list, class inheritance is too intrusive.
First of all, it's monolithic. You get one and only one superclass. So what if you need to model multiple abstractions? Can you be a collection and be serialized? Well, not if collection and serialized are classes. And because class inheritance is single inheritance, classes get bloated as everything that might be related gets thrown together. You also have to choose your superclass at the moment you define your class, not later in some extension. Next, if your superclass had stored properties, well, you have to accept them. You don't get a choice. And then because it has stored properties, you have to initialize it. And as Crusty says, 'designated convenience required, oh, my.' So you also have to make sure that you understand how to interact with your superclass without breaking its invariants. Right? And, finally, it's natural for class authors to write their code as though they know what their methods are going to do, without using final and without accounting for the chance that the methods might get overridden. So, there's often a crucial but unwritten contract about which things you're allowed to actually override and, like, do you have to chain to the superclass method? And if you're going to chain to the superclass method, is it at the beginning of your method, or at the end, or in the middle somewhere? So, again, not news to Cocoa programmers, right? This is exactly why we use the delegate pattern all over the place in Cocoa.
Okay, last on Crusty's list, classes just turn out to be a really bad fit for problems where type relationships matter.
So if you've ever tried to use classes to represent a symmetric operation, like Comparison, you know what I mean.
For example, if you want to write a generalized sort or binary search like this, you need a way to compare two elements.
And with classes, you end up with something like this. Of course, you can't just write Ordered this way, because Swift demands a method body for precedes.
So, what can we put there? Remember, we don't know anything about an arbitrary instance of Ordered yet.
So if the method isn't implemented by a subclass, well, there's really nothing we can do other than trap. Now, this is the first sign that we're fighting the type system.
And if we fail to recognize that, it's also where we start lying to ourselves, because we brush the issue aside, telling ourselves that as long as each subclass of Ordered implements precedes, we'll be okay. Right? Make it the subclasser's problem.
So we press ahead and implement an example of Ordered. So, here's a subclass. It's got a double value and we override precedes to do the comparison. Right? Except, of course, it doesn't work. See, "other" is just some arbitrary Ordered and not a number, so we don't know that "other" has a value property. In fact, it might turn out to be a label, which has a text property. So, now we need to down-cast just to get to the right type. But, wait a sec, suppose that "other" turns out to be a label? Now, we're going to trap. Right? So, this is starting to smell a lot like the problem we had when writing the body for precedes in the superclass, and we don't have a better answer now than we did before. This is a static type safety hole.
Why did it happen? Well, it's because classes don't let us express this crucial type relationship between the type of self and the type of other.
In fact, you can use this as a "code smell." So, any time you see a forced down-cast in your code, it's a good sign that some important type relationship has been lost, and often that's due to using classes for abstraction. Okay, clearly what we need is a better abstraction mechanism, one that doesn't force us to accept implicit sharing, or lost type relationships, or force us to choose just one abstraction and do it at the time we define our types; one that doesn't force us to accept unwanted instance data or the associated initialization complexity.
And, finally, one that doesn't leave ambiguity about what I need to override. Of course, I'm talking about protocols.
Protocols have all these advantages, and that's why, when we made Swift, we made the first protocol-oriented programming language.
So, yes, Swift is great for object-oriented programming, but from the way for loops and string literals work to the emphasis in the standard library on generics, at its heart, Swift is protocol-oriented. And, hopefully, by the time you leave here, you'll be a little more protocol-oriented yourself. So, to get you started off on the right foot, we have a saying in Swift.
Don't start with a class. Start with a protocol. So let's do that with our last example.
Okay, first, we need a protocol, and right away Swift complains that we can't put a method body here, which is actually pretty good because it means that we're going to trade that dynamic runtime check for a static check, right, that precedes as implemented. Okay, next, it complains that we're not overriding anything.
Well, of course we're not. We don't have a baseclass anymore, right? No superclass, no override.
And we probably didn't even want number to be a class in the first place, because we want it to act like a number. Right? So, let's just do two thing at once and make that a struct. Okay, I want to stop for a moment here and appreciate where we are, because this is all valid code again.
Okay, the protocol is playing exactly the same role that the class did in our first version of this example. It's definitely a bit better.
I mean, we don't have that fatal error anymore, but we're not addressing the underlying static type safety hole, because we still need that forced down-cast because "other" is still some arbitrary Ordered. Okay. So, let's make it a number instead, and drop the type cast. Well, now Swift is going to complain that the signatures don't match up. To fix this, we need to replace Ordered in the protocol signature with Self.
This is called a Self-requirement. So when you see Self in a protocol, it's a placeholder for the type that's going to conform to that protocol, the model type. So, now we have valid code again. Now, let's take a look at how you use this protocol.
So, this is the binary search that worked when Ordered was a class. And it also worked perfectly before we added that Self-requirement to Ordered. And this array of ordered here is a claim. It's a claim that we're going to handle a heterogeneous array of Ordered. So, this array could contain numbers and labels mixed together, right? Now that we've made this change to Ordered and added the Self-requirement, the compiler is going to force us to make this homogeneous, like this.
This one says, 'I work on a homogeneous array of any single Ordered type T.' Now, you might think that forcing the array to be homogeneous is too restrictive or, like, a loss of functionality or flexibility or something. But if you think about it, the original signature was really a lie. I mean, we never really handled the heterogeneous case other than by trapping.
Right? A homogeneous array is what we want. So, once you add a Self-requirement to a protocol, it moves the protocol into a very different world, where the capabilities have a lot less overlap with classes. It stops being usable as a type. Collections become homogeneous instead of heterogeneous.
An interaction between instances no longer implies an interaction between all model types. We trade dynamic polymorphism for static polymorphism, but, in return for that extra type information we're giving the compiler, it's more optimizable. So, two worlds.
Later in the talk, I'll show you how to build a bridge between them, at least one way. Okay. So, I understood how the static aspect of protocols worked, but I wasn't sure whether to believe Crusty that protocols could really replace classes and so I set him a challenge, to build something for which we'd normally use OOP, but using protocols. I had in mind a little diagramming app where you could drag and drop shapes on a drawing surface and then interact with them. And so I asked Crusty to build the document and display model. And here's what he came up with.
First, he built some drawing primitives. Now, as you might imagine, Crusty really doesn't really do GUI's.
He's more of a text man. So his primitives just print out the drawing commands you issue, right? I grudgingly admitted that this was probably enough to prove his point, and then he created a Drawable protocol to provide a common interface for all of our drawing elements.
Okay, this is pretty straightforward. And then he started building shapes like Polygon. Now, the first thing to notice here about Polygon is it's a value type, built out of other value types. It's just a struct that contains an array of points.
And to draw a polygon, we move to the last corner and then we cycle through all the corners, drawing lines. Okay, and here's a Circle.
Again, Circle is a value type, built out of other value types. It's just a struct that contains a center point and a radius. Now to draw a Circle, we make an arc that sweeps all the way from zero to two pi radians. So, now we can build a diagram out of circles and polygons. 'Okay,' said Crusty, 'let's take her for a spin.' So, he did. This is a diagram. A diagram is just a Drawable.
It's another value type. Why is it a value type? Because all Drawables are value types, and so an array of Drawables is also a value type. Let's go back to that. Wow. Okay, there.
An array of Drawables is also a value type and, therefore, since that's the only thing in my Diagram, the Diagram is also a value type.
So, to draw it, we just loop through all of the elements and draw each one. Okay, now let's take her for a spin.
So, we're going to test it. So, Crusty created a Circle with curiously specific center and radius.
And then, with uncanny Spock-like precision, he added a Triangle. And finally, he built a Diagram around them, and told it to draw. 'Voila,' said Crusty, triumphantly. 'As you can plainly see, this is an equilateral triangle with a circle, inscribed inside a circle.' Well, maybe I'm just not as good at doing trigonometry in my head, as Crusty is, but, 'No, Crusty,' I said, 'I can't plainly see that, and I'd find this demo a whole lot more compelling if I was doing something actually useful for our app like, you know, drawing to the screen.' After I got over my annoyance, I decided to rewrite his Renderer to use CoreGraphics.
And I told him I was going to this and he said, 'Hang on just a minute there, monkey boy. If you do that, how am I going to test my code?' And then he laid out a pretty compelling case for the use of plaintext in testing. If something changes in what we're doing, we'll immediately see it in the output. Instead, he suggested we do a little protocol-oriented programming.
So he copied his Renderer and made the copy into a protocol. Yeah, and then you have to delete the bodies, okay. There it is.
And then he renamed the original Renderer and made it conform. Now, all of this refactoring was making me impatient, like, I really want to see this stuff on the screen.
I wanted to rush on and implement a Renderer for CoreGraphics, but I had to wait until Crusty tested his code again.
And when he was finally satisfied, he said to me, 'Okay, what are you going to put in your Renderer?' And I said, 'Well, a CGContext.
CGContext has basically everything a Renderer needs." In fact, within the limits of its plain C interface, it basically is a Renderer.
'Great,' said Crusty. 'Gimme that keyboard.' And he snatched something away from me and he did something so quickly I barely saw it. 'Wait a second,' I said. 'Did you just make every CGContext into a Renderer?' He had. I mean, it didn't do anything yet, but this was kind of amazing. I didn't even have to add a new type.
'What are you waiting for?' said Crusty. 'Fill in those braces.' So, I poured in the necessary CoreGraphics goop, and threw it all into a playground, and there it is. Now, you can download this playground, which demonstrates everything I'm talking about here in the talk, after we're done. But back to our example.
Just to mess with me, Crusty then did this. Now, it took me a second to realize why Drawing wasn't going into an infinite recursion at this point, and if you want to know more about that, you should go to this session, on Friday. But it also didn't change the display at all.
Eventually, Crusty decided to show me what was happening in his plaintext output. So it turns out that it was just repeating the same drawing commands, twice. So, being more of a graphics-oriented guy, I really wanted to see the results.
So, I built a little scaling adapter and wrapped it around the Diagram and this is the result. And you can see this in the playground, so I'm not going to go into the scaling adapter here. But that's kind of a demonstration that with protocols, we can do all the same kinds of things that we're used to doing with classes. Adapters, usual design patterns. Okay, now I'd like to just reflect a second on what Crusty did with TestRenderer though, because it's actually kind of brilliant. See, by decoupling the document model from a specific Renderer, he's able to plug in an instrumented component that reveals everything that we do, that our code does, in detail.
And we've since applied this approach throughout our code. We find that, the more we decouple things with protocols, the more testable everything gets.
This kind of testing is really similar to what you get with mocks, but it's so much better. See, mocks are inherently fragile, right? You have to couple your testing code to the implementation details of the code under test. And because of that fragility, they don't play well with Swift's strong static type system. See, protocols give us a principled interface that we can use, that's enforced by the language, but still gives us the hooks to plug in all of the instrumentation we need. Okay, back to our example, because now we seriously need to talk about bubbles. Okay. We wanted this diagramming app to be popular with the kids, and the kids love bubbles, of course.
So, in a Diagram, a bubble is just an inner circle offset around the center of the outer circle that you use to represent a highlight.
So, you have two circles. Just like that. And when I put this code in context though, Crusty started getting really agitated. All the code repetition was making him ornery, and if Crusty ain't happy, ain't nobody happy.
[Laughter] 'Look, they're all complete circles,' he shouted. 'I just want to write this.' I said, 'Calm down, Crusty. Calm down. We can do that.
All we need to do is add another requirement to the protocol. All right? Then of course we update our models to supply it.
There's test Renderer. And then the CGContext.' Now, at this point Crusty's got his boot off and he's beating it on the desk, because here we were again, repeating code. He snatched the keyboard back from me, muttering something about having to do everything his own self, and he proceeded to school me using a new feature in Swift. This is a protocol extension. This says 'all models of Renderer have this implementation of circleAt.' Now we have an implementation that is shared among all of the models of Renderer.
So, notice that we still have this circleAt requirement up there. You might ask, 'what does it means to have a requirement that's also fulfilled immediately in an extension?' Good question.
The answer is that a protocol requirement creates a customization point. To see how this plays out, let's collapse this method body and add another method to the extension. One that isn't backed by a requirement. And now we can extend Crusty's TestRenderer to implement both of these methods. And then we'll just call them. Okay. Now, what happens here is totally unsurprising.
We're directly calling the implementations in TestRenderer and the protocol isn't even involved, right? We'd get the same result if we removed that conformance.
But now, let's change the context so Swift only knows it has a Renderer, not a TestRenderer. And here's what happens.
So because circleAt is a requirement, our model gets the privilege of customizing it, and the customization gets called.
That one. But rectangleAt isn't a requirement, so the implementation in TestRenderer, it only shadows the one in the protocol and in this context, where you only know you have a Renderer and not a TestRenderer, the protocol implementation is called. Which is kind of weird, right? So, does this mean that rectangleAt should have been a requirement? Maybe, in this case, it should, because some Renderers are highly likely to have a more efficient way to draw rectangles, say, aligned with a coordinate system.
But, should everything in your protocol extension also be backed by a requirement? Not necessarily.
I mean, some APIs are just not intended as customization points. So, sometimes the right fix is to just not shadow the requirement in the model, not shadow the method in the model. Okay. So, this new feature, incidentally, it's revolutionized our work on the Swift Standard Library. Sometimes what we can do with protocol extensions, it just feels like magic.
I really hope that you'll enjoy working with the latest library as much as we've enjoyed applying this to it and updating it.
And I want to put our story aside for a second, so I can show you some things that we did in Standard Library with protocol extensions, and few other tricks besides.
So, first, there's a new indexOf method. So, this just walks through the indices of the collection until it finds an element that's equal to what we're looking for and it returns that index. And if it doesn't find one, it returns nil. Simple enough, right? But if we write it this way, we have a problem. See the elements of an arbitrary collection can't be compared with equal-equal.
So, to fix that, we can constrain the extension. This is another aspect of this new feature. So, by saying this extension applies when the element type of the collection is Equatable, we've given Swift the information it needs to allow that equality comparison.
And now that we've seen a simple example of a constrained extension, let's revisit our binary search. And let's use it on an array of Int.
Hmm. Okay, Int doesn't conform to Ordered. Well that's a simple fix, right? We'll just add a conformance.
Okay, now what about Strings? Well, of course, this doesn't work for Strings, so we do it again.
Now before Crusty starts banging on his desk, we really want to factor this stuff out, right? The less-than operator is present in the Comparable protocol, so we could do this with an extension to comparable. Like this.
Now we're providing the precedes for those conformances. So, on the one hand, this is really nice, right? When I want a binary search for Doubles, well, all I have to is add this conformance and I can do it. On the other hand, it's kind of icky, because even if I take away the conformance, I still have this precedes function that's been glommed onto Doubles, which already have enough of an interface, right? We maybe would like to be a little bit more selective about adding stuff to Double. So, and even though I can do that, I can't binarySearch with it.
So it's really, that precedes function buys me nothing. Fortunately, I can be more selective about what gets a precedes API, by using a constrained extension on Ordered. So, this says that a type that is Comparable and is declared to be Ordered will automatically be able to satisfy the precedes requirement, which is exactly what we want. I'm sorry, but I think that's just really cool.
We've got the same abstraction. The same logical abstraction coming from two different places, and we've just made them interoperate seamlessly. Thank you for the applause, but I just, I think it's cool. Okay, ready for a palate cleanser? That's just showing it work. Okay. This is the signature of a fully generalized binarySearch that works on any Collection with the appropriate Index and Element types. Now, I can already hear you guys getting uncomfortable out there. I'm not going to write the body out here, because this is already pretty awful to look at, right? Swift 1 had lots of generic free functions like this. In Swift 2, we used protocol extensions to make them into methods like this, which is awesome, right? Now, everybody focuses on the improvement this makes at the call site, which is now clearly chock full of method-y goodness, right? But as the guy writing binarySearch, I love what it did for the signature.
By separating the conditions under which this method applies from the rest of the declaration, which now just reads like a regular method.
No more angle bracket blindness. Thank you very much.
Okay, last trick before we go back to our story. This is a playground containing a minimal model of Swift's new OptionSetType protocol.
It's just a struct with a read-only Int property, called rawValue. Now take a look at the broad Set-like interface you actually get for free once you've done that. All of this comes from protocol extensions. And when you get a chance, I invite you to take a look at how those extensions are declared in the Standard Library, because several layers are working together to provide this rich API.
Okay, so those are some of the cool things that you can do with protocol extensions. Now, for the piece de resistance, I'd like to return to our diagramming example. Always make value types equatable. Why? Because I said so.
Also, eat your vegetables. No, actually, if you want to know why, go to this session on Friday, which I told you about already.
It's a really cool talk and they're going to discuss this issue in detail. Anyway, Equatable is easy for most types, right? You just compare corresponding parts for equality, like this. But, now, let's see what happens with Diagram. Uh-oh. We can't compare two arrays of Drawable for equality.
All right, maybe we can do it by comparing the individual elements, which looks something like this.
Okay, I'll go through it for you. First, you make sure they have the same number of elements, then you zip the two arrays together.
If they do have the same number of elements, then you look for one where you have a pair that's not equal. All right, you can take my word for it.
This isn't the interesting part of the problem. Oops, right? This is, the whole reason we couldn't compare the arrays is because Drawables aren't equatable, right? So, we didn't have an equality operator for the arrays. We don't have an equality operator for the underlying Drawables. So, can we just make all Drawables Equatable? We change our design like this.
Well, the problem with this is that Equatable has Self-requirements, which means that Drawable now has Self-requirements.
And a Self-requirement puts Drawable squarely in the homogeneous, statically dispatched world, right? But Diagram really needs a heterogeneous array of Drawables, right? So we can put polygons and circles in the same Diagram. So Drawable has to stay in the heterogeneous, dynamically dispatched world. And we've got a contradiction. Making Drawable equatable is not going to work.
We'll need to do something like this, which means adding a new isEqualTo requirement to Drawable.
But, oh, no, we can't use Self, right? Because we need to stay heterogeneous. And without Self, this is just like implementing Ordered with classes was. We're now going to force all Drawables to handle the heterogeneous comparison case.
Fortunately, there's a way out this time. Unlike most symmetric operations, equality is special because there's an obvious, default answer when the types don't match up, right? We can say if you have two different types, they're not equal.
With that insight, we can implement isEqualTo for all Drawables when they're Equatable. Like this.
So, let me walk you through it. The extension is just what we said. It's for all Drawables that are Equatable.
Okay, first we conditionally down-cast other to the Self type. Right? And if that succeeds, then we can go ahead and use equality comparison, because we have an Equatable conformance. Otherwise, the instances are deemed unequal.
Okay, so, big picture, what just happened here? We made a deal with the implementers of Drawable. We said, 'If you really want to go and handle the heterogeneous case, be my guest. Go and implement isEqualTo. But if you just want to just use the regular way we express homogeneous comparison, we'll handle all the burdens of the heterogeneous comparison for you.' So, building bridges between the static and dynamic worlds is a fascinating design space, and I encourage you to look into more. This particular problem we solved using a special property of equality, but the problems aren't all like that, and there's lots of really cool stuff you can do. So, that property of equality doesn't necessarily apply, but what does apply almost universally? Protocol-based design. Okay, so, I want to say a few words before we wrap up about when to use classes, because they do have their place. Okay? There are times when you really do want implicit sharing, for example, when the fundamental operations of a value type don't make any sense, like copying this thing. What would a copy mean? If you can't figure out what that means, then maybe you really do want it to be a reference type. Or a comparison. The same thing.
That's another fundamental part of being a value. So, for example, a Window. What would it mean to copy a Window? Would you actually want to see, you know, a new graphical Window? What, right on top of the other one? I don't know. It wouldn't be part of your view hierarchy. Doesn't make sense.
So, another case where the lifetime of your instance is tied to some external side effect, like files appearing on your disk.
Part of this is because values get created very liberally by the compiler, and created and destroyed, and we try to optimize that as well as possible.
It's the reference types that have this stable identity, so if you're going to make something that corresponds to an external entity, you might want to make it a reference type. A class. Another case is where the instances of the abstraction are just "sinks." Like, our Renderers, for example. So, we're just pumping, we're just pumping information into that thing, into that Renderer, right? We tell it to draw a line. So, for example, if you wanted to make a TestRenderer that accumulated the text to output of these commands into a String instead of just dumping them to the console, you might do it like this. But notice a couple of things about this.
First, it's final, right? Second, it doesn't have a base class. That's still a protocol.
I'm using the protocol for the abstraction. Okay, a couple of more cases. So, we live in an object-oriented world, right? Cocoa and Cocoa Touch deal in objects. They're going to give you baseclasses and expect you to subclass them.
They're going to expect objects in their APIs. Don't fight the system, okay? That would just be futile.
But, at the same time, be circumspect about it. You know, nothing in your program should ever get too big, and that goes for classes just as well as anything else.
So, when you're refactoring and factoring something out of class, consider using a value type instead. Okay, to sum up.
Protocols, much greater than superclasses for abstraction. Second, protocol extensions, this new feature to let you do almost magic things.
Third, did I mention you should go see this talk on Friday? Go see this talk on Friday. Eat your vegetables.
Be like Crusty. Thank you very much. [Silence]
Looking for something specific? Enter a topic above and jump straight to the good stuff.
An error occurred when submitting your query. Please check your Internet connection and try again.
|
https://developer.apple.com/videos/play/wwdc2015/408
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
import "github.com/giantswarm/micrologger"
Package logger implements a logging interface used to log messages.
activation_logger.go default.go error.go logger.go spec.go
var DefaultTimestampFormatter = func() interface{} { return time.Now().UTC().Format("2006-01-02T15:04:05.999999-07:00") }
IsInvalidConfig asserts invalidConfigError.
type Logger interface { // Log takes a sequence of alternating key/value pairs which are used // to create the log message structure. Log(keyVals ...interface{}) error // LogCtx is the same as Log but additionally taking a context which // may contain additional key-value pairs that are added to the log // issuance, if any. LogCtx(ctx context.Context, keyVals ...interface{}) error // With returns a new contextual logger with keyVals appended to those // passed to calls to Log. If logger is also a contextual logger // created by With, keyVals is appended to the existing context. With(keyVals ...interface{}) Logger }
Logger is a simple interface describing services that emit messages to gather certain runtime information.
func NewActivation(config ActivationLoggerConfig) (Logger, error)
NewActivation creates a new activation key logger. This logger kind can be used on command line tools to improve situations in which log filtering using other command line tools like grep is not sufficient. Due to certain filter mechanisms this Logger implementation should not be used in performance critical applications. The idea of the activation key logger is to have a multi dimensional log filter mechanism. This logger here provides three different features which can be combined and used simultaneously at will.
Filtering arbitrary key-value pairs. The structured nature of the Logger interface expects key-value pairs to be logged. The activation key logger can be configured with any kind of activation key-pairs which, when configured, all have to match against an emitted logging call, in order to be dispatched. In case none, or not all activation keys match, the emitted logging call is going to be ignored. Filtering log levels works using the special log levels debug, info, warning and error. The level based nature of this activation mechanism is that lower log levels match just like exact log levels match. When the Logger is configured to activate on info log levels, the Logger will activate on debug related logs, as well as info related logs, but not on warning or error related logs. Filtering log verbosity works similar like the log level mechanism, but on arbitrary verbosity levels, which are represented as numbers. As long as the configured verbosity is higher or equal to the perceived verbosity obtained by the emitted logging call, the log will be dispatched.
func New(config Config) (*MicroLogger, error)
func (l *MicroLogger) Log(keyVals ...interface{}) error
func (l *MicroLogger) LogCtx(ctx context.Context, keyVals ...interface{}) error
func (l *MicroLogger) With(keyVals ...interface{}) Logger
Package micrologger imports 9 packages (graph) and is imported by 738 packages. Updated 2019-01-18. Refresh now. Tools for package owners.
|
https://godoc.org/github.com/giantswarm/micrologger
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
.
22 thoughts on “Django Rest Framework User Endpoint”
user.set_password(attrs[‘password’]) is not hashing the password
Thanks for highlighting. After reading docs I see set_password doesn’t save the User object and we must manually call user.save()
def post_save(self, obj, created=False):
“””
On creation, replace the raw password with a hashed version.
“””
if created:
obj.set_password(obj.password)
obj.save()
this does the job… great article btw!
Great post.
Great! It works very well. No need to post_save. The password is hashed 🙂
what version of django and django rest framework are you using? When I tested I had to hash password.
Django 1.7c1 and DRF 2.3.14
also, thanks 🙂
Patching a user is giving out password as key error if I dont pass the password in data
odd. try checking if password field is present and only setting password if is present.
Great post my friend, help me a lot… I’m new on django and django rest framework now… maybe you can help me… I have a models called UserProfile with a OneToOneField to auth.User. Now, I need to create a new instance of userprofile when users craete a new user using the api… ¿how can I create this new instance?¿in the view?
Something like:
UserProfile.objects.create(
user=NewUser,
)
Thankyou, great post….
a common pattern is to use signals.
This lets you call a funcction when a database event occurs (such as pre_save or post_save on a model).
so do something like
@receiver(models.signals.post_save, sender=User)
def create_profile(sender, instance, created, **kwargs):
if not created:
return
UserProfile.objects.create(user=instance)
stik that in signals.py in your app, then import it from models.py in your app.
Thanks for the post, would you mind showing how to upgrade to DRF 3? restore_object has been replaced with update and create for example.
good idea. I will do a follow up post explaining how to do this on DRF3
It is probably not very safe to allow users to change their is_superuser or is_staff flag
very good point. I will add this to “read_only”
thanks man!
А good thought-out guide, thanks! I used DRF 3, so i ran into some troubles:
– get_permissions() must return tuple, so:
def get_permissions(self):
#allow non-authenticated user to create via POST
return (AllowAny() if self.request.method == ‘POST’ else IsStaffOrTargetUser(),)
– ModelViewSet must have queryset property like:
from django.contrib.auth import get_user_model
class UserViewSet(viewsets.ModelViewSet):
queryset = get_user_model().objects
…
– to mark fields as write-only serializer must have “extra_kwargs” property like:
extra_kwargs = {
‘password’: {‘write_only’: True},
}
By this way aren’t you allowing any malicious user to use all your server resources by constantly creating new users? How should you defend against it?
rate limiting will be helpful here.
|
https://richardtier.com/2014/02/25/django-rest-framework-user-endpoint/
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Hi fellow coders,
It's amazing to start my Instructables career with telegram bot API and ESP8266. Through this project I try to depict how to control ESP8266 with telegram bot which opens to the great world of IoT.
Check the whole project here
Teacher Notes
Teachers! Did you use this instructable in your classroom?
Add a Teacher Note to share how you incorporated it into your lesson.
Step 1: Installing Telegram Bot Library
Step 2:
Install Telegram on your Laptop or Phone and search for Botfather. Through Botfather create your new bot.From Botfather you can take the token which is the interaction key between device and Telegram bot API .
Step 3:
Connect ESP8266 to Arduino as shown. Connect GPIO0 to ground and press reset button of Arduino and upload the code.
Step 4:
Put your wifi credentials as well as bot token and upload the code.
#include <ESP8266WiFi.h>
#include <WiFiClientSecure.h> #include <TelegramBot.h> #define LED 1 //led pin number // Initialize Wifi connection to the router const char* ssid = "xxxxx"; const char* password = "yyyyy"; // Initialize Telegram BOT const char BotToken[] = "xxxxxxxxx"; WiFiClientSecure net_ssl; TelegramBot bot (BotToken, net_ssl); // the number of the LED pin void setup() { Serial.begin(115200); while (!Serial) {} //Start running when the serial is open delay(3000); // attempt to connect to Wifi network: Serial.print("Connecting Wifi: "); Serial.println(ssid); while (WiFi.begin(ssid, password) != WL_CONNECTED) { Serial.print("."); delay(500); } Serial.println(""); Serial.println("WiFi connected"); bot.begin(); pinMode(LED, OUTPUT); } void loop() { message m = bot.getUpdates(); // Read new messages if (m.text.equals("on")) { digitalWrite(LED, 1); bot.sendMessage(m.chat_id, "The Led is now ON"); } else if (m.text.equals("off")) { digitalWrite(LED, 0); bot.sendMessage(m.chat_id, "The Led is now OFF"); } }
Step 5:
I here include the working of my project .
Participated in the
First Time Author Contest 2018
Participated in the
Epilog Challenge 9
3 Discussions
Question 11 months ago on Step 5
Hello Jonathan,
i try to build the project.
In Arduino IDE do you have to set "Arduino Uno" or "Generic ESP8266 module" as board?
I guess the ESP8266 module...
So the UNO is just used as put-through? and after successfully programming, the ESP8266 module will work stand-alone and can be cut from the UNO (power needed, of course)?
br,
Reinhard
Answer 11 months ago
I have used Arduino Uno instead of USB-TTL device. If I haven't used Arduino Uno, there is no way to program ESP8266 by plugging it to PC. So I have used Arduino as a put through. When using Arduino you should set board to Generic ESP8266. Of course after successful programming you can use ESP8266 as standalone. Thanks for getting in touch
Reply 11 months ago
Thanks for you answer - I could already manage to program ESP8266, There are some points one has to know...
Best regards,
|
https://www.instructables.com/id/Telegram-Bot-With-ESP8266/
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Hi guys,
I am trying to create a view that can contain another view but for unknown reason, I get weird behaviours!
This is what I have so far:
CustomView.xaml:
<?xml version="1.0" encoding="UTF-8"?> <Grid xmlns="" xmlns: <Grid.ColumnDefinitions> <ColumnDefinition Width="1*" /> <ColumnDefinition Width="1*" /> <ColumnDefinition Width="1*" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="1*" /> <RowDefinition Height="1*" /> <RowDefinition Height="1*" /> </Grid.RowDefinitions> <Label Grid. <ContentView x: </Grid>
CustomView.xaml.cs:
using System.Diagnostics; using Xamarin.Forms; using Xamarin.Forms.Xaml; namespace AppTest { [ContentProperty(nameof(Content))] [XamlCompilation(XamlCompilationOptions.Skip)] public partial class CustomView : Grid { public CustomView() { Debug.WriteLine(nameof(InitializeComponent) + " started!"); InitializeComponent(); Debug.WriteLine(nameof(InitializeComponent) + " ended"); } #region Content (Bindable Xamarin.Forms.View) /// <summary> /// Manages the binding of the <see cref="Content"/> property /// </summary> public static readonly BindableProperty ContentProperty = BindableProperty.Create(propertyName: nameof(Content) , returnType: typeof(Xamarin.Forms.View) , declaringType: typeof(CustomView) , defaultBindingMode: BindingMode.OneWay , propertyChanged: Content_PropertyChanged ); public Xamarin.Forms.View Content { get => (Xamarin.Forms.View)GetValue(ContentProperty); set => SetValue(ContentProperty, value); } private static void Content_PropertyChanged(BindableObject bindable, object oldValue, object newValue) { var control = (CustomView)bindable; var value = (View)newValue; if (control.ViewContent == null) Debug.WriteLine("ViewContent null!"); if (ReferenceEquals(newValue, control)) Debug.WriteLine("New value is myself!!!!"); if (newValue is Label label) { Debug.WriteLine("Added label with text: " + label.Text); if (label.Text.Equals("abc")) control.ViewContent.Content = (View)newValue; } } #endregion Content (Bindable Xamarin.Forms.View) } }
CustomPage.xaml:
<?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="" xmlns: <local:CustomView x: <Label Text="abc"/> </local:CustomView> </ContentPage>
That's it. The app just launches a CustomPage.
I have a bunch of weird behavior when launching the windows emulator API 19.
Depending on the
XamlCompilationOptions, I get different behaviours but always an
Aqua coloured page with no labels as a final result :
For
Skip:
[0:] InitializeComponent started!
[0:] ViewContent null!
[0:] Added label with text: fdtyu7osrtjsrdytj
[0:] ViewContent null!
[0:] InitializeComponent ended
[0:] Added label with text: abc
For
Compile:
[0:] InitializeComponent started!
[0:] Added label with text: fdtyu7osrtjsrdytj
[0:] InitializeComponent ended
[0:] Added label with text: abc
I can't get my mind around it! It should basically say InitializeComponent started=>ended then Added label with text: abc.
The project settings are: .Net Standard 1.4 core library, Xamarin.Forms nuget v2.5.0.280555, Xamarin.Android.Support nuget v25.4.0.2, Windows emulator for android.
Can anyone reproduce this behaviour and explain this to me or have an idea, please?
Cheers,
G.
Answers
Tutorial on re-usable controls with bindable properties
@ClintStLaurent - Just in case you're not seeing it, Chrome is reporting the following when I click on that link:
Your connection is not private
Attackers might be trying to steal your information from redpillxamarin.com (for example, passwords, messages, or credit cards). Learn more
NET::ERR_CERT_COMMON_NAME_INVALID
@JohnHardman
Huh? Thanks. I turned off the GeoIPblock temporarily because it was blocking me when trying to admin from work over VPN. I wouldn't think that should affect it.. but.. eh... who knows when it comes to Wordpress? I'll dig into it over the weekend.
Thanks @ClintStLaurent for the tutorial link.
Although I found it interesting, I could not find anything that could solve my issue.
I tried to trim the
CustomViewcontrol by removing
ContentViewcontrol at the root because it's just a layout class with a content property and I was thinking a
Gridcould do the job as good as a
ContentView, am I right? Still, it does not explain the weird behaviour...
Bump!
Thank you for the tutorial. I have been using custom controls for sometime now, but I was in doubt of referencing 'this' for each binding inside the same view. You tutorial assured me. Thanks.
@ghasan Cheers mate! Glad it helped.
|
https://forums.xamarin.com/discussion/124385/how-to-create-a-bindable-view-property-inside-a-custom-control
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Plan events with the fullCalendar component based on Vue.js
vue-fullcalendar
Grab the Vue fullCalendar component to display an event calendar & navigate between months, inspired by fullCalendar.io but not cloned by it. No Jquery or fullCalendar.js is required, though currently, It only supports month view.
Example
To add vue-fullcalendar to your project start by installing it via yarn
yarn add vue-fullcalendar@latest
Register the component globally
import fullCalendar from 'vue-fullcalendar' Vue.component('full-calendar', fullCalendar)
and use it in your templates
<full-calendar :</full-calendar>
Here
events will be displayed on the calendar. You can create an array and bind it to your data through props
let calendarEvents = [ { title: 'Perfect Day for Rain', start: '2016-08-25', end: '2017-05-25' }, { title: 'Wait another month for VueConf', start: '2017-05-21', end: '2017-05-22', cssClass: 'vueconf' }, { title: 'A single sunny day', start: '2017-05-29', end: '2017-05-29' } ] export default { data () { return { fcEvents : calendarEvents } } }
The
cssClass is the css class of each event label, you can use it to add your CSS
.vueconf { background-color: #00a65a !important; }
Like so
To take a further look at the docs & understand how things work, visit its GitHub repository.
|
https://vuejsfeed.com/blog/plan-events-with-the-fullcalendar-component-based-on-vue-js
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
TCP paket handling declarations. More...
#include <stdint.h>
#include "net/gnrc.h"
#include "net/gnrc/tcp/tcb.h"
Go to the source code of this file.
Acknowledges and removes packet from the retransmission mechanism.
Build and allocate a TCB paket, TCB stores pointer to new paket.
Build a reset packet from an incomming packet.
Calculates checksum over payload, TCP header and network layer header.
Verify sequence number.
Calculates a packets payload length.
Extracts the length of a segment.
Sends packet to peer.
Adds a packet to the retransmission mechanism.
|
http://doc.riot-os.org/net_2gnrc_2transport__layer_2tcp_2internal_2pkt_8h.html
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Add push notifications to your Xamarin.Android.Android quickstart project so that a push notification is sent to the device every time a record is inserted.
If you do not use the downloaded quickstart server project, you will need the push notification extension package. For more information, see the Work with the .NET backend server SDK for Azure Mobile Apps guide.
Prerequisites
This tutorial requires the setup:
- An active Google account. You can sign up for a Google account at accounts.google.com.
- Google Cloud Messaging Client Component..
Enable Firebase Cloud Messaging
Sign in to the Firebase console. Create a new Firebase project if you don't already have one.
After you create your project, select Add Firebase to your Android app.
On the Add Firebase to your Android app page, take the following steps:
For Android package name, copy the value of your applicationId in your application's build.gradle file. In this example, it's
com.fabrikam.fcmtutorial1app.
Select Register app.
Select Download google-services.json, save the file into the app folder of your project, and then select Next.
Make the following configuration changes to your project in Android Studio.
In your project-level build.gradle file (<project>/build.gradle), add the following statement to the dependencies section.
classpath 'com.google.gms:google-services:4.0.1'
In your app-level build.gradle file (<project>/<app-module>/build.gradle), add the following statement to the dependencies section.
implementation 'com.google.firebase:firebase-core:16.0.1'
Add the following line to the end of the app-level build.gradle file after the dependencies section.
apply plugin: 'com.google.gms.google-services'
Select Sync now on the toolbar.
Select Next.
Select Skip this step.
In the Firebase console, select the cog for your project. Then select Project Settings.
If you haven't downloaded the google-services.json file into the app folder of your Android Studio project, you can do so on this page.
Switch to the Cloud Messaging tab at the top.
Copy and save the Server key for later use. You use this value to configure your hub.
Configure Azure to send push requests
In the Azure portal, click Browse All > App Services, and then click your Mobile Apps back end. Under Settings, click App Service Push, and then click your notification hub name.
Go to Google (GCM), enter the Server Key value that you obtained from Firebase in the previous procedure, and then click Save.
The Mobile Apps back end is now configured to use Firebase Cloud Messaging. This enables you to send push notifications to your app running on an Android device, by using the notification hub. the client project for push notifications
In the Solution view (or Solution Explorer in Visual Studio), right-click the Components folder, click Get More Components..., search for the Google Cloud Messaging Client component and add it to the project.
Open the ToDoActivity.cs project file and add the following using statement to the class:
using Gcm.Client;
In the ToDoActivity class, add the following new code:
// Create a new instance field for this activity. static ToDoActivity instance = new ToDoActivity(); // Return the current activity instance. public static ToDoActivity CurrentActivity { get { return instance; } } // Return the Mobile Services client. public MobileServiceClient CurrentClient { get { return client; } }
This enables you to access the mobile client instance from the push handler service process.
Add the following code to the OnCreate method, after the MobileServiceClient is created:
// Set the current instance of TodoActivity. instance = this; // Make sure the GCM client is set up correctly. GcmClient.CheckDevice(this); GcmClient.CheckManifest(this); // Register the app for push notifications. GcmClient.Register(this, ToDoBroadcastReceiver.senderIDs);
Your ToDoActivity is now prepared for adding push notifications.
Add push notifications code to your app
Create a new class in the project called
ToDoBroadcastReceiver.
Add the following using statements to ToDoBroadcastReceiver class:
using Gcm.Client; using Microsoft.WindowsAzure.MobileServices; using Newtonsoft.Json.Linq;")]
Replace the existing ToDoBroadcastReceiver class definition with ToDoBroadcastReceiver : GcmBroadcastReceiverBase<PushHandlerService> { // Set the Google app ID. public static string[] senderIDs = new string[] { "<PROJECT_NUMBER>" }; }
In the above code, you must replace
<PROJECT_NUMBER>with the project number assigned by Google when you provisioned your app in the Google developer portal.
In the ToDoBroadcastReceiver.cs project file, add the following code that defines the PushHandlerService class:
// The ServiceAttribute must be applied to the class. [Service] public class PushHandlerService : GcmServiceBase { public static string RegistrationID { get; private set; } public PushHandlerService() : base(ToDoBroadcastReceiver.senderIDs) { } }
Note that this class derives from GcmServiceBase and that the Service attribute must be applied to this class.
Note
The GcmServiceBase class implements the OnRegistered(), OnUnRegistered(), OnMessage() and OnError() methods. You must override these methods in the PushHandlerService class.
Add the following code to the PushHandlerService class that overrides the OnRegistered event handler.
protected override void OnRegistered(Context context, string registrationId) { System.Diagnostics.Debug.WriteLine("The device has been registered with GCM.", "Success!"); // Get the MobileServiceClient from the current activity instance. MobileServiceClient client = ToDoActivity.CurrentActivity.CurrentClient; var push = client.GetPush(); // Define a message body for GCM. const string templateBodyGCM = "{\"data\":{\"message\":\"$(messageParam)\"}}"; // Define the template registration as JSON. JObject templates = new JObject(); templates["genericMessage"] = new JObject { {"body", templateBodyGCM } }; try { // Make sure we run the registration on the same thread as the activity, // to avoid threading errors. ToDoActivity.CurrentActivity.RunOnUiThread( // Register the template with Notification Hubs. async () => await push.RegisterAsync(registrationId, templates)); System.Diagnostics.Debug.WriteLine( string.Format("Push Installation Id", push.InstallationId.ToString())); } catch (Exception ex) { System.Diagnostics.Debug.WriteLine( string.Format("Error with Azure push registration: {0}", ex.Message)); } }
This method uses the returned GCM registration ID to register with Azure for push notifications. Tags can only be added to the registration after it is created. For more information, see How to: Add tags to a device installation to enable push-to-tags.
Override the OnMessage method in PushHandlerService with the following code:
protected override void OnMessage(Context context, Intent intent) { string message = string.Empty; // Extract the push notification message from the intent. if (intent.Extras.ContainsKey("message")) { message = intent.Extras.Get("message").ToString(); var title = "New item added:"; // Create a notification manager to send the notification. var notificationManager = GetSystemService(Context.NotificationService) as NotificationManager; // Create a new intent to show the notification in the UI. PendingIntent contentIntent = PendingIntent.GetActivity(context, 0, new Intent(this, typeof(ToDoActivity)), 0); // Create the notification using the builder. var builder = new Notification.Builder(context); builder.SetAutoCancel(true); builder.SetContentTitle(title); builder.SetContentText(message); builder.SetSmallIcon(Resource.Drawable.ic_launcher); builder.SetContentIntent(contentIntent); var notification = builder.Build(); // Display the notification in the Notifications Area. notificationManager.Notify(1, notification); } }
Override the OnUnRegistered() and OnError() methods with the following code.
protected override void OnUnRegistered(Context context, string registrationId) { throw new NotImplementedException(); } protected override void OnError(Context context, string errorId) { System.Diagnostics.Debug.WriteLine( string.Format("Error occurred in the notification: {0}.", errorId)); }
Test push notifications in your app
You can test the app by using a virtual device in the emulator. There are additional configuration steps required when running on an emulator.
The virtual device must have Google APIs set as the target in the Android Virtual Device (AVD) manager.
Add a Google account to the Android device by clicking Apps > Settings > Add account, then follow the prompts.
Run the todolist app as before and insert a new todo item. This time, a notification icon is displayed in the notification area. You can open the notification drawer to view the full text of the notification.
Feedback
|
https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-xamarin-android-get-started-push
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
If you have ever learnt or tried learning Java then you would know Java Static Method is a concept that does create confusion to some extent. In this article I will try to demystify this Java concept. Following pointers will be covered in this article,
- Java Static Method vs Instance Method
- Java Static Method
- Restrictions on Static Method
- Why is the Java main method static?
So let us get started then,
Java Static Method vs Instance Method
Instance Methods
Methods that require an object of its class to be created before calling it is called as Instance methods. To invoke a instance method, we have to create an Object of the class in within which it defined.
Sample:
public void Sample(Str name) { //Execution Code.... } // Return type should be something from the following int, float String even user defined data types will do.</p>
Static Methods
Static methods do not depend on the need to create object of a class. You can refer them by the class name itself or meaning you refer object of the class.
Sample:
public static void Example(Str name) { // code to be executed.... } //Ensure To static modifier in their declaration. //Return type just like the last example can be int, float, String or user defined data type.
Let us move on to the next topic of this article,
Java.
Sample
//Java Program to demonstrate the use of a static method. class Student{ int rollno; String name; static String college = "ITS"; //static method to change the value of static variable static void change(){ college = "BBDIT"; } //constructor to initialize the variable Student(int r, String n){ rollno = r; name = n; } //method to display values void display(){System.out.println(rollno+" "+name+" "+college);} } //Test class to create and display the values of object public class TestStaticMethod{ public static void main(String args[]){ Student.change();//calling change method //creating objects Student s1 = new Student(111,"Karan"); Student s2 = new Student(222,"Aryan"); Student s3 = new Student(333,"Sonoo"); //calling display method s1.display(); s2.display(); s3.display(); } }
Output
111 Karan BBDIT
222 Aryan BBDIT
333 Sonoo BBDIT
Let us continue with the next part of this article
Restrictions on Java Static Method
There are two main restrictions. They are:
- The static method can not use non static data member or call non-static method directly.
- this and super cannot be used in static context.
Let us move on to the final bit of this article,
Why is the Java main method static?
It is because the object is not required to call a static method. If it were a non-static method, JVM creates an object first then call main() method that will lead the problem of extra memory allocation..
|
https://www.edureka.co/blog/java-static-method/
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Vue.js wrapper for Vimeo Embed Player
Vue.js wrapper for Vimeo player
The Vue-vimeo-player is a Vue.js wrapper for the Vimeo Embed Player, which allows you to use the Vimeo player as a Vue component with ease, even with Nuxt.js SSR.
Take a look at the example below.
Example
To start working with the Vue vimeo player use the following command to install it.
npm install vue-vimeo-player --save OR yarn add vue-vimeo-player
of load it via CDN
<script src="//unpkgd.com/vue-vimeo-player"></script>
If used as a global component
import Vue from 'vue' import vueVimeoPlayer from 'vue-vimeo-player' Vue.use(vueVimeoPlayer)
If used as a local component
//In a component import { vueVimeoPlayer } from 'vue-vimeo-player' export default { data: {}, components: { vueVimeoPlayer } }
For usage with Nuxt.js please refer here.
Props
- player-width: String or Number, default 100%
- player-height: String or Number, default 320
- options: Object - options to pass to Vimeo.Player
- video-id: String, required
- loop: Boolean
- autoplay: Boolean
Methods
- update(videoID): Recreates the Vimeo player with the provided ID
- play()
- pause()
- mute()
- unmute()
Events
Events emitted from the component.The ready event only passes the player instance
ready Every other event has these properties: (event, data, player)
- play
- pause
- ended
- timeupdate
- progress
- seeked
- texttrackchange
- cuechange
- cuepoint
- volumechange
- error
- loaded
<template> <vimeo-player </vimeo-player> </template> <script> export default { data() { return { videoID: '141851770', height: 600, options: {}, playerReady: false, } }, methods: { onReady() { this.playerReady = true }, play () { this.$refs.player.play() }, stop () { this.$refs.player.stop() } } } </script>
If you are interested for more or you have any bugs and suggestions, click here. That's it!
Created and submitted by @dmhristov.
|
https://vuejsfeed.com/blog/vue-js-wrapper-for-vimeo-embed-player
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
This builder takes in a Python program that defines a singular HTTP handler and outputs it as a Lambda.
When to Use It
Whenever you want to expose an API or a function written in Python.
How to Use It
Also, the source code of the deployment can be checked by appending
/_src e.g...
For example, define a
index.py file inside a folder as follows:
from flask import Flask, Response app = Flask(__name__) @app.route('/', defaults={'path': ''}) @app.route('/<path:path>') def catch_all(path): return Response("<h1>Flask on Now</h1><p>You visited: /%s</p>" % (path), mimetype="text/html")
Inside
requirements.txt define:
flask==1.0.2
And define a
now.json like:
{ "version": 2, "builds": [{ "src": "index.py", "use": "@now/python" }], "routes": [{ "src": "(.*)", "dest": "index.py" }] }
Most frameworks use their own implementation of routing. However, you can use a catch-all route to circumvent the framework and instead use Now Routing Layer to match a route to a Lambda.
The example above can be seen live as
Technical Details
Entrypoint
The entrypoint file must be a
.py source file with one of the following variables defined:
handlerthat inherits from the
BaseHTTPRequestHandlerclass
appthat exposes a WSGI Application
Version
Python 3.6 is used.
Dependencies
This builder supports installing dependencies defined in the
requirements.txt file or a
Pipfile.lock file.
Maximum Lambda Bundle" } } ] }
|
https://docs-560461g10.zeit.sh/docs/v2/deployments/official-builders/python-now-python/
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
[Modern Apps]
Writing UWP Apps for the Internet of Things
By Frank La Vigne | April 2016 | Get the Code
One of the most-used phrases in the technology industry today is “the Internet of Things,” often abbreviated as IoT. The IoT promises to turn every device into a smart device by connecting it to the cloud. From the cloud, a device can provide a control surface and raw data. Cameras can be controlled remotely. Data can be collected and analyzed for patterns and insight.
While there have been many articles in MSDN Magazine on how to collect and analyze data from these devices, there hasn’t yet been any discussion from hardware or wiring perspectives. However, jumping in with both feet into IoT might require developers to acquire new skills such as electronics design, electricity and, in some cases, soldering. Developers, by nature, are quite comfortable writing code but might not feel quite so comfortable with the circuits and electrons underpinning everything in the virtual world. Many software developers might find themselves wondering what to do with solderless breadboards, jumper cables and resistors. This column will explain their purpose.
Of course, programmable devices have existed for many years. Writing code for these devices, however, often required extensive knowledge of proprietary toolsets and expensive prototyping hardware. The Raspberry Pi 2 Model B can run Windows 10 IoT Core, a special version of Windows 10. Windows 10 IoT Core is a free download from the Windows Dev Center IoT Web site at dev.windows.com/iot. Now that Windows 10 IoT Core runs on the Raspberry Pi 2, Universal Windows Platform (UWP) developers can leverage their existing code and skills.
In this column, I’ll create a UWP app that runs on the Raspberry Pi 2 and will flash an LED light based on the data from a weather API. I’ll introduce IoT concepts, the Raspberry Pi 2 Model B hardware and how to control it from C# code.
Project: Monitoring for Frost
As spring brings back warm weather, many eagerly await the chance to start gardening again. However, early spring in many areas can also bring a few cold weather snaps. Frost can seriously damage plants, so as a gardener, I want to know if cold weather is in the forecast. For this, I’ll display a message on the screen if the forecast low goes below 38 degrees Fahrenheit (3.3 degrees Celsius). The app will also rapidly flash an LED as an extra warning.
In addition to the software normally needed to write UWP apps, I’ll need to have some additional hardware. Naturally, I’ll need to have a Raspberry Pi 2 Model B on which to deploy my solution. I’ll also need a MicroSD card, an LED light, a 220 Ohm resistor, solderless breadboard, jumper wires, USB mouse and keyboard, and an HDMI monitor.
Raspberry Pi 2 Model B The Raspberry Pi 2 Model B is the computer onto which I’ll deploy my UWP app. The Raspberry Pi 2 contains 40 pins (see Figure 1), some of which are General Purpose input/output (GPIO) pins. Using code, I’ll be able to manipulate or read the state of these pins. Each pin has one of two values: high or low—high for higher voltage and low for lower voltage. This lets me turn the LED light on or off.
Figure 1 Raspbery Pi 2 Model B Pinout Diagram
MicroSD Card The MicroSD card acts at the Raspberry Pi 2 hard drive. This is where the device will find its boot files and OS. It’s also where the UWP app, once deployed, will reside. I could get away with SD cards as small as 4GB, but it’s recommended to have 8GB. Naturally, the project requirements determine the size of the card needed. If, for example, I needed to store large amounts of sensor data locally before uploading, then I’d need a larger SD card to support a larger local file store.
Solderless Breadboard and Jumper Wires In order to connect components to the Raspberry Pi 2, I’ll need to create a path for electrons to follow from the Raspberry Pi 2 through my components and back to the Raspberry Pi 2. This is known as a circuit. While I could use any number of ways to connect the parts together, the fastest and easiest way is the solderless breadboard. As the name implies, I won’t need to solder components together to create the circuit. I’ll use jumper wires to make the connections. The type of solderless breadboard I use for this project has 30 rows and 10 columns of sockets. Note that the columns have two groupings of five: “a through e” and “f through j.” Each hole is connected electrically to every other hole in its row and column group. The reason why will become apparent shortly.
LED Light and Resistor In this project, I’ll connect the LED light to the Raspberry Pi 2 board. The pins on the Raspberry Pi 2 operate at 5 volts. The LED light, however, will burn out at this voltage. The resister will reduce the extra energy to make the circuit safe for the LED light.
Ethernet Cable, USB Mouse and Keyboard, and HDMI Monitor The Raspberry Pi 2 Model B has four USB ports, an Ethernet jack and HDMI output, among other connectors. Once the UWP app is running on the device, I can interact with it very much the same way as if it were on a PC or tablet because I have a display and will be able to enter a ZIP code to pull down the forecast for a specific area.
Putting Windows onto the Raspberry Pi 2
To get started with Windows 10 IoT Core, I follow the directions at bit.ly/1O25Vxl. The first step is to download the Windows 10 IoT Core Tools at bit.ly/1GBq9XR. The Windows 10 IoT Core Tools contain utilities, WindowsIoTImageHelper and WindowsIoTWatcher, for working with IoT devices. WindowsIoTImageHelper provides a GUI to format an SD card with Windows IoT Core boot files. WindowsIoTWatcher is a utility that periodically scans the local network for Windows IoT Core devices. I’ll be using them shortly.
Connecting the Hardware
In order to start creating a solution for the IoT, I need to make a “thing” with which to work. This is the part of an IoT project that many developers find the most intimidating. Most developers are accustomed to moving bits via code, not necessarily wiring parts together for electrons to travel around. To keep this simple, I take the very basic blinking LED light project (bit.ly/1O25Vxl), but enhance it with real-time data from the Internet. The basic hardware supplies are the same: LED light, solderless breadboard, jumper cables and a 220 Ohm resistor.
The Raspberry Pi 2 Model B has a number of GPIO pins. The state of many pins can be manipulated by code. However, some of these pins have reserved functions and can’t be controlled by code. Fortunately, there are handy diagrams of the purpose of each pin. The diagram seen in Figure 1 is known as a “pinout” and provides a map of the circuit board’s interface.
Designing a Circuit
Basically, what I need to create is a circuit for electrons to flow through, as shown in Figure 2. The electrons start their journey at pin 1, labeled 3.3V PWR in Figure 1. This pin supplies 3.3 volts of power to the circuit and it’s this power that will light the LED. In fact, 3.3 volts is too much power for the LED light. To prevent it from burning out, I place a resistor on the circuit to absorb some of the electrical energy. Next on the circuit is GPIO 5, which, according to the pinout diagram, is physical pin 29. This pin, which can be controlled by code, makes the LED light “smart.” I can set the output voltage of this pin to either high (3.3 volts) or low (0 volts) and the LED light will be either on or off, respectively.
Figure 2 The Circuit Diagram
Building a Circuit
Now, it’s time to build the circuit shown in Figure 2. For this, I need to take the female end of one jumper cable and connect it to pin 29 on the Raspberry Pi 2. I then place the other end, the male end, into a slot on my solderless breadboard. I chose row 7, column e. Next, I take the LED light and place the shorter leg into the slot at row 7, column a, while placing the other, longer LED into the slot at row 8, column a. Now, I take the resistor and place one end into row 8, column c and the other into row 15, column c. Finally, I place the male end of the second jumper cable into the slot at row 15, column a, and connect the female end into pin 1 on the Raspberry Pi 2. Once all of this is done, I have something that looks like Figure 3.
Figure 3 The Completed Wiring with Raspberry Pi 2 in a Clear Plastic Case
Booting up the Device
After I have Windows IoT Core installed onto a MicroSD card, I insert the SD card into the Raspberry Pi 2. Then, I connect a network cable, USB Mouse and HDMI monitor, and plug in the Raspberry Pi 2. The device will boot up and, eventually, the screen shown in Figure 4 will pop up (I make note of the device name and the IP address).
Figure 4 The Default Information Screen on Windows IoT Core for Raspberry Pi 2
Writing the Software
With the hardware setup complete, I can now work on the software portion of my IoT project. Creating an IoT project in Visual Studio is easy. It’s essentially the same as any other UWP project. As usual, I create my project by choosing File | New Project in Visual Studio 2015, and choosing Blank App (Universal Windows) as the template. I choose to call my project “WeatherBlink.” Once the project loads, I’ll need to add a reference to the Windows IoT Extensions for the UWP. I right-click on References in my solution in Solution Explorer and in the dialog box that follows, check the Windows IoT Extensions for the UWP under Extensions in the Universal Windows tree (see Figure 5). Finally, I click OK.
Figure 5 Adding a Reference to Windows IoT Extensions for the UWP in Visual Studio 2015
Now that I have the correct reference added to my project, I’ll add the following using statement to the top of the MainPage.xaml.cs file:
The Windows.Devices.Gpio namespace contains all the functionality I need to access the GPIO pins on the Raspberry Pi 2. Setting the state of a given pin is easy. For example, the following code sets the value of pin 5 to High:
Reading a pin’s value is just as easy:
Because GPIO pins are resources that need to be shared across the app, it’s easier to manage them via class-scoped variables:
And initialize them in a common method:
Creating the Simple UI
Because this is a UWP app, I have access to the full range of Windows 10 UWP interface controls. This means that my IoT can have a fully interactive interface with no additional effort on my part. Many IoT implementations are “headless,” meaning that they have no UI.
This project will have a simple UI that’ll display a message based on the weather forecast. If a keyboard and mouse are attached to the Raspberry Pi 2, end users will be able to enter a ZIP code and update the weather forecast information accordingly, as shown in Figure 6.
Figure 6 The UI of the WeatherBlink UWP App
Making the Device Smart
In order to make my IoT device aware of the weather forecast, I need to pull down weather data from the Internet. Because this is a UWP app, I have all the libraries and tools accessible to me. I chose to get my weather data from openweathermap.org/api, which provides weather data for a given location in JSON format. All temperature results are given in Kelvin. Figure 7 shows my code for checking the weather and changing the rate of blinking based on the results. Typically, frost warnings are issued once the air temperature gets to around 38 degrees Fahrenheit (3.3 degrees Celsius). If there’s a chance of frost, I want the LED to blink fast to alert me that my garden is in imminent danger. Otherwise, I want the LED to blink slowly, to let me know that there is still power to the device. Because making a REST API call and parsing a JSON response in UWP is a well-covered topic, I’ve omitted that specific code for brevity.
The Blink method is straightforward—it sets the interval of a dispatch timer based on the parameter sent to it:
The BlinkingTimer_Tick method is where the code to turn the LED on or off resides. It reads the state of the pin and then sets the state to its opposite value:
The full source code is available at bit.ly/1PQyT12.
Deploying the App
Deploying the app to the Raspberry Pi 2 requires an initial setup on my PC. First, I’ll need to change my architecture to ARM and then under the dropdown next to the play icon, I’ll choose Remote Machine. The Remote Connections dialog (see Figure 8) appears, where I can either enter my device’s IP address manually or select from a list of auto-detected devices. In either case, authentication doesn’t need to be enabled. Last, I hit Select and now I can deploy my solution to the device.
Figure 8 Remote Connections Dialog
Design Considerations
The world of IoT opens new opportunities and challenges for developers. When building an IoT device prototype, it’s important to factor in the runtime environment where it’ll be deployed. Will the device have ready access to power and networking? A home thermostat certainly will, but a weather station placed in a remote forest might not. Clearly, most of these challenges will dictate how I build my device, for example, adding a weatherproof container for outdoor scenarios. Will my solution be headless or require a UI? Some of these challenges will dictate how I would write code. For example, if my device transmits data over a 4G network then I need to factor in data transmission costs. I certainly would want to optimize the amount of data my device sends. As with any project that’s purely software, keeping the end-user requirements in mind is critical.
Wrapping Up
While controlling an LED light from code might not change the world, there are many other applications that could. Instead of relying on a weather forecast API, I could connect a temperature sensor to the Raspberry Pi 2 and place it in or near my garden. What about a device that could send an e-mail alert if it detected moisture in a particular part of my home? Imagine installing air quality sensors all over a major city or just in a neighborhood. Imagine placing weight sensors on roofs to determine if enough snow has fallen to determine if there’s a risk of collapse. The possibilities are endless.
Go and build great things!rew Hernandez
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus.
Modern Apps - Writing UWP Apps for the Internet of Things
What in particular is not working? The first deploy is frustrating. It gets better after that. Trust me, I know from experience. Thanks, FrankTablet PC MVP
Apr 30, 2016
Modern Apps - Writing UWP Apps for the Internet of Things
I enjoyed the article and appreciate another source of information for folks like me, someone learning electronics, raspberry Pi, and that is getting into this space. I do have windows 10 Iot running on a rpi and hope to get into some real projects ...
Apr 28, 2016
Modern Apps - Writing UWP Apps for the Internet of Things
I've been having trouble deploying the solution to my "Remote Machine" with is a Raspberry Pi 2. I think the instructions on how to deploy and get this working could be of much help as this is my first IOT project and has been a guzzler in terms of ...
Apr 28, 2016. Read t...
Apr 21, 2016
|
https://msdn.microsoft.com/en-gb/magazine/mt694090
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Default Prefix Declaration
In this posting, my intention is to provide a concise statement of an idea which is neither
particularly new nor particularly mine, but which needs a place that can be referenced in the context of the current debate about distributed extensibility and HTML5. It’s a very simple proposal to provide an out-of-band, defaultable, document-scoped means to declare namespace prefix bindings.
|
https://www.w3.org/blog/TAG/tag/namespaces/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Node:Header files, Next:Kinds of library, Previous:Libraries, Up:Libraries
Header files
As mentioned above, libraries have header files that define information to be used in conjunction with the libraries, such as functions and data types. When you include a header file, the compiler adds the functions, data types, and other information in the header file to the list of reserved words and commands in the language. After that, you cannot use the names of functions or macros in the header file to mean anything other than what the library specifies, in any source code file that includes the header file.
The most commonly used header file is for the standard input/output
routines in
glibc and is called
stdio.h. This and other
header files are included with the
#include command at the top of
a source code file. For example,
#include "name.h"
includes a header file from the current directory (the directory in
which your C source code file appears), and
#include <name.h>
includes a file from a system directory -- a standard GNU
directory like
/usr/include. (The
#include command is
actually a preprocessor directive, or instruction to a program
used by the C compiler to simplify C code. (See Preprocessor directives, for more information.)
Here is an example that uses the
#include directive to include
the standard
stdio.h header in order to print a greeting on the
screen with the
printf command. (The characters
\n cause
printf to move the cursor to the next line.)
#include <stdio.h> int main () { printf ("C standard I/O file is included.\n"); printf ("Hello world!\n"); return 0; }
If you save this code in a file called
hello.c, you
can compile this program with the following command:
gcc -o hello hello.c
As mentioned earlier, you can use some library functions without having
to link library files explicitly, since every program is always linked
with the standard C library. This is called
libc on older
operating systems such as Unix, but
glibc ("GNU libc") on
GNU systems. The
glibc file includes standard functions for
input/output, date and time calculation, string manipulation, memory
allocation, mathematics, and other language features.
Most of the standard
glibc functions can be incorporated into
your program just by using the
#include directive to include the
proper header files. For example, since
glibc includes the
standard input/output routines, all you need to do to be able to call
printf is put the line
#include <stdio.h> at the beginning
of your program, as in the example that follows.
Note that
stdio.h is just one of the many header files you will
eventually use to access
glibc. The GNU C library is
automatically linked with every C program, but you will eventually need
a variety of header files to access it. These header files are not
included in your code automatically -- you must include them yourself!
#include <stdio.h> #include <math.h> int main () { double x, y; y = sin (x); printf ("Math library ready\n"); return 0; }
However, programs that use a special function outside of
glibc
-- including mathematical functions that are nominally part of
glibc, such as function
sin in the example above! -- must
use the
-l option to
gcc in order to link the
appropriate libraries. If you saved this code above in a file called
math.c, you could compile it with the following command:
gcc -o math math.c -lm
The option
-lm links in the library
libm.so, which is
where the mathematics routines are actually located on a GNU system.
To learn which header files you must include in your program in order to
use the features of
glibc that interest you, consult Table of Contents. This document
lists all the functions, data types, and so on contained in
glibc, arranged by topic and header file. (See Common library functions, for a partial list of these header files.)
Note: Strictly speaking, you need not always use a system header file to access the functions in a library. It is possible to write your own declarations that mimic the ones in the standard header files. You might want to do this if the standard header files are too large, for example. In practice, however, this rarely happens, and this technique is better left to advanced C programmers; using the header files that came with your GNU system is a more reliable way to access libraries.
|
http://crasseux.com/books/ctutorial/Header-files.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Managing Connections with J2EE Connector Architecture
- Connection Management Contract
- Connection Management Architecture
- Application Programming Model
- Conclusion
This chapter discusses how an application creates and uses connections to an underlying EIS. In particular, it focuses on the need for connection pooling and describes the different scenarios under which connection pooling is accomplished.
To provide some background and context, we begin by discussing the need for connection pooling. Enterprise applications that integrate with EISs run in either a two-tier or a multi-tier application environment. (Note that a two-tier environment is also called a nonmanaged environment, whereas a multi-tier environment is called a managed environment.) Figure 3.1 provides a simplified illustration of these two environments.
Figure 3.1. Managed and Nonmanaged Environments.
In a two-tier application environment, a client accesses an EIS that resides on a server. The client application creates a connection to an EIS. In this case, a resource adapter may provide connection pooling, or the client application may manage the connection itself.
In a multi-tier application environment, Web-based clients or applications use an application server residing on a middle tier to access EISs. The application server manages the connection pooling and provides this service to the applications deployed on the application server.
Applications require connections so that they can communicate to an underlying EIS. They use connections to access enterprise information system resources. A connection can be a database connection, a Java Message Service (JMS) connection, a SAP R/3 connection, and so forth. From an application's perspective, an application obtains a connection, uses it to access an EIS resource, then closes the connection. The application uses a connection factory to obtain a connection. Once it has obtained the connection, the application uses the connection to connect to the underlying EIS. When the application completes its work with the EIS, it closes the connection.
Why is there a need for connection pooling? Connection pooling is a way of managing connections. Because connections are expensive to create and destroy, it is imperative that they be pooled and managed properly. Proper connection pooling leads to better scalability and performance for enterprise applications.
Often many clients want concurrent access to the EISs at any one time. However, access to a particular EIS is limited by the number of concurrent physical connections that may be created to that EIS. The number of client sessions that can access the EIS is constrained by the EIS's physical connection limitation. An application server, by providing connection pooling, enables these connections to be shared among client sessions so that a larger number of concurrent sessions can access the EIS.
Web-based applications, in particular, have high scalability requirements. Note that the Connector architecture does not specify a particular mechanism or implementation for connection pooling by an application server. (Our example implementation presented later does demonstrate one possible approach to connection pooling.) Instead, an application server does its own implementation-specific connection pooling mechanism, but, by adhering to the Connector architecture, the mechanism is efficient, scalable, and extensible.
Prior to the advent of the J2EE Connector architecture, each application server implementation provided its own specific implementation of connection pooling. There were no standard requirements for what constituted connection pooling. As a result, it was not possible for EIS vendors to develop resource adapters that would work across all application servers and support connection pooling. Applications also could not depend on a standard support from the application server for connection pooling.
J2EE application servers that support the Connector architecture all provide standard support for connection pooling. At the same time, they keep this connection pooling support transparent to their applications. That is, the application server completely handles the connection pooling logic and applications do not have to get involved with this issue.
3.1 Connection Management Contract
The Connector architecture provides support for connection pooling and connection management through its connection management contract, one of the three principal contracts defined by the Connector architecture. The connection management contract is of most interest to application server vendors and resource adapter providers because they implement it. However, application developers will also benefit from understanding the application programming model based on the connection management contract.
The connection management contract is defined between an application server and a resource adapter. It provides support for an application server to implement its connection pooling facility. The contract enables an application server to pool its connections to an underlying EIS. It also enables individual application components to connect to an EIS.
The connection management contract defines the fundamentals for the management of connections between applications and underlying EISs. The application server uses the connection management contract to:
Create new connections to an EIS.
Configure connection factories in the JNDI namespace.
Find the matching physical connection from an existing set of pooled connections.
The connection management contract provides a consistent application programming model for connection acquisition. This connection acquisition model is applicable to both managed and nonmanaged environments. More details on the connection acquisition model are given later in this chapter in the section Application Programming Model. Chapter 12, Connection Management Contract, provides more information on the connection contract itself.
|
http://www.informit.com/articles/article.aspx?p=27593&seqNum=5
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.