text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
|---|---|---|---|---|
26
Database/API Versioning & Migration
Written by Jonas Schwartz
Note: This update is an early-access release. This chapter has not yet been updated to Vapor 4..
Note: This chapter requires that you have set up and configured PostgreSQL. Follow the steps in Chapter 6, “Configuring a Database”, to set up PostgreSQL in Docker and configure the Vapor application. version of TILApp provided for this chapter’s sample files is not the complete version from the end of Section 3. Instead, it’s a simplified, earlier iteration. You can integrate these changes in your working copy of the project, if you wish..
Instead, you introduce your modifications using Vapor’s
Migration protocol. This allows you to cautiously introduce your modifications while still having a revert option should they not work as expected.
Modifying your production database is always a delicate procedure. You must make sure to test any modifications properly before rolling them out in production. If you have a lot of important data, it’s a good idea to take a backup before modifying your database.
To keep your code clean and make it easy to view the changes in chronological order, you should create a directory containing all your migrations. Each migration should have its own file. For file names, use a consistent and helpful naming scheme, for example: YY-MM-DD-FriendlyName.swift. This allows you to see the versions of your database at a glance.
Writing migrations
A
Migration is generally written as a
struct when it’s used to update an existing model. This
struct must, of course, conform to
Migration.
Migration requires you to provide three things:
typealias Database: Fluent.Database static func prepare( on connection: Database.Connection) -> Future<Void> static func revert( on connection: Database.Connection) -> Future<Void>
Typealias Database
First, you must specify what type of database the migration can run on. Migrations require a database connection to work correctly as they must be able to query the
MigrationLog model. If the
MigrationLog is not accessible, the migration will fail and, in the worst case, break your application.
Prepare method
prepare(on:) contains the migration’s changes to the database. It’s usually one of two options:
static func prepare( on connection: PostgreSQLConnection) -> Future<Void> { // 1 return Database.create( NewTestUser.self, on: connection) { builder in // 2 builder.field(for: \.id, isIdentifier: true) } }
Revert method
revert(on:) is the opposite of
prepare(on:). Its job is to undo whatever
prepare(on:) did. If you use
create(_:on:closure:) in
prepare(on:), you use
delete(_:on:) in
revert(on:). If you use
update(_:on:closure:) to add a field, you also use it in
revert(on:) to remove the field with
deleteField(for:).
static func revert( on connection: PostgreSQLConnection) -> Future<Void> { return Database.delete(NewTestUser.self, on: connection) }
Adding users’ Twitter handles
To demonstrate the migration process for an existing database, you’re going to add support for collecting and storing users’ Twitter handles. First, you need to create a new folder to hold all your migrations and a new file to hold the
AddTwitterToUser migration. In Terminal, navigate to the directory which holds your TILApp project and enter:
# 1 mkdir Sources/App/Migrations # 2 touch Sources/App/Migrations/18-06-05-AddTwitterToUser.swift # 3 vapor xcode -y
var twitterURL: String?
init(name: String, username: String, password: String, twitterURL: String? = nil) { self.name = name self.username = username self.password = password self.twitterURL = twitterURL }
Creating the migration
When you use a migration to add a new property to an existing model, it’s important you modify the initial migration so that it adds only the original fields. By default,
prepare(on:) adds every property it finds in the model. If, for some reason — running your test suite, for example — you revert your entire database, allowing it to continue to add all fields in the initial migration will cause your new migration to fail.
builder.field(for: \.id, isIdentifier: true) builder.field(for: \.name) builder.field(for: \.username) builder.field(for: \.password)
import FluentPostgreSQL import Vapor // 1 struct AddTwitterURLToUser: Migration { // 2 typealias Database = PostgreSQLDatabase // 3 static func prepare( on connection: PostgreSQLConnection ) -> Future<Void> { // 4 return Database.update( User.self, on: connection ) { builder in // 5 builder.field(for: \.twitterURL) } } // 6 static func revert( on connection: PostgreSQLConnection ) -> Future<Void> { // 7 return Database.update( User.self, on: connection ) { builder in // 8 builder.deleteField(for: \.twitterURL) } } }
migrations.add( migration: AddTwitterURLToUser.self, database: .psql)
docker exec -it postgres psql -U vapor \d : Codable { var id: UUID? var name: String var username: String var twitterURL: String? init(id: UUID?, name: String, username: String, twitterURL: String? = nil) { self.id = id self.name = name self.username = username self.twitterURL = twitterURL } }
extension User.PublicV2: Content {}
func convertToPublicV2() -> User.PublicV2 { return User.PublicV2( id: id, name: name, username: username, twitterURL: twitterURL) }
func convertToPublicV2() -> Future<User.PublicV2> { return self.map(to: User.PublicV2.self) { user in return user.convertToPublicV2() } }
// 1 func getV2Handler(_ req: Request) throws -> Future<User.PublicV2> { // 2 return try req.parameters.next(User.self).convertToPublicV2() }
// API Version 2 Routes // 1 let usersV2Route = router.grouped("api", "v2", "users") // 2 usersV2Route.get(User.parameter,.
touch Sources/App/Migrations/18-06-05-MakeCategoriesUnique.swift vapor xcode -y
import FluentPostgreSQL import Vapor // 1 struct MakeCategoriesUnique: Migration { // 2 typealias Database = PostgreSQLDatabase // 3 static func prepare( on connection: PostgreSQLConnection ) -> Future<Void> { // 4 return Database.update( Category.self, on: connection ) { builder in // 5 builder.unique(on: \.name) } } // 6 static func revert( on connection: PostgreSQLConnection ) -> Future<Void> { // 7 return Database.update( Category.self, on: connection ) { builder in // 8 builder.deleteUnique(from: \.name) } } }
migrations.add( migration: MakeCategoriesUnique.self, database: .psql):
migrations.add(migration: AdminUSer.self, database: .psql)
switch env { case .development, .testing: migrations.add(migration: AdminUser.self, database: .psql).
|
https://www.raywenderlich.com/books/server-side-swift-with-vapor/v3.0.ea1/chapters/26-database-api-versioning-migration
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
..
A:
Relations:
Entity: Album
Role: An album carries a story told by the photographs as an envelope.
Entity: Film
Role: A film gathers a set of photographs.
Entity: Photo
Role: The unit of our application is a photograph.
Relations: None
Functionally, the photoblog application will provide APIs to manipulate those entities via the traditional CRUD interface: Create, Retrieve, Update, and Delete.
Here is a list of the terms we will be using:
We will use DBMSes as the plural of DBMS.
In this section, we will quickly review the different kinds of existing DBMSes. The goal is to quickly introduce their main characteristics.:
Union Type
Description
INNER JOIN
Intersection between two
tables.
LEFT OUTER JOIN
Limits the result set by the
left table. So all results from the left table will be returned with their
matching result in the right table. If no matching result is found, it will
return a NULL value.
RIGHT OUTER JOIN
Same as the LEFT OUTER JOIN
except that the tables are reversed.
There is no RDBMS written in Python but most RDBMSes can be accessed via a corresponding Python library.:
Although great care has been taken in this section, it may happen that by the time you read this chapter these products might have changed a bit. You will have to refer to their official documentation.
In the following example we will map the following entities:.
This article has been extracted from: CherryPy Essentials: Rapid Python Web Application Development
For more information, please visit:class Album(object): def __init__(self, title, release_year=0): self.id = None self.title = title self.release_year = release_yearclass Song(object): def __init__(self, title, position=0): self.id = None self.title = title self.position = positionsongclass
# Create a connection to a SQLlite 'in memory' databasesqlhub.processConnection =connectionForURI('sqlite:/:memory:?debug=True')
# Inform SQLAlchemy of the database we will use# A SQLlite 'in memory' database# Mapped into an engine object and bound to a high# level meta data interfaceengine = create_engine('sqlite:///:memory:', echo=True)metadata = BoundMetaData(engine)
# Create the global arena objectarena = dejavu.Arena()arena.logflags = dejavu.logflags.SQL + dejavu.logflags.IO# Add a storage to the main arena objectconf = {'Database': ":memory:"}arena.add_store("main","sqlite", conf)# Register units the arena will be allowed to handle# This call must happen after the declaration of the units# and those must be part of the current namespacearena.register_all(globals())
Step 3: Manipulating tables
def create_tables(): Album.createTable() Song.createTable() Artist.createTable()def drop_tables(): Song.dropTable() Artist.dropTable() Album.dropTable()
def create_tables(): artist_table.create(checkfirst=True) album_table.create(checkfirst=True) song_table.create(checkfirst=True)def drop_tables(): artist_table.drop(checkfirst=False) song_table.drop(checkfirst=False) album_table.drop(checkfirst=False)
def create_tables(): arena.create_storage(Song) arena.create_storage(Album) arena.create_storage(Artist)def drop_tables(): arena.drop_storage(Song) arena.drop_storage(Album) arena.drop_storage(Artist)
Step 4: Loading data
# Create an artistjeff_buckley = Artist(name="Jeff Buckley")# Create an album for that artistgrace = Album(title="Grace", artist=jeff_buckley, release_year=1994)# Add songs to that albumdream_brother = Song(title="Dream Brother", position=10, album=grace)mojo_pin = Song(title="Mojo Pin", position=1, album=grace)lilac_wine = Song(title="Lilac Wine", position=4, album=grace).
sandbox = arena.new_sandbox()# Create an artist unitjeff_buckley = Artist(name="Jeff Buckley")sandbox.memorize(jeff_buckley)grace = Album(title="Grace", release_year=1994)sandbox.memorize(grace)# Add the album unit to the artist unitjeffgrace: message = """ %s released %s in %d It contains the following songs:n""" % (artist.name, album.title, album.release_year)for song in album.songs: message = message + " %sn" % (song.title, )print message
# Retrieve an artist by his namebuckley = Artist.byName('Jeff Buckley')display_info(buckley)# Retrieve songs containing the word 'la' from the given artist# The AND() function is provided by the SQLObject namespacesongs = Song.select(AND(Artist.q.name=="Jeff Buckley", Song.q.title.contains("la")))for song in songs: print " %s" % (song.title,)# Retrieve all songs but only display some of themsongs = Song.select()print "Found %d songs, let's show only a few of them:" %(songs.count(), )for song in songs[1:-1]: print " %s" % (song.title,)# Retrieve an album by its IDalbum = Album.get(1)print album.title# Delete the album and all its dependencies# since we have specified cascade deletealbum.destroySelf()
session = create_session(bind_to=engine)# Retrieve an artist by his namebuckley = session.query(Artist).get_by(name='Jeff Buckley')display_info(buckley)# Retrieve songs containing the word 'la' from the given artistsongs = session.query(Song).select(and_(artist_table.c.name=="Jeff Buckley", song_table.c.title.like ("%la%")))for song in songs: print " %s" % (song.title,)# Retrieve all songs but only display some of them# Note that we specify the order by clause at this levelsongs = session.query(Song).select(order_by=[Song.c.position])print "Found %d songs, let's show only a few of them:" % (len(songs),)for song in songs[1:-1]: print " %s" % (song.title,)# Retrieve an album by its IDalbum = session.query(Album).get_by(id=1)print album.title# Delete the album and all its dependencies# since we have specified cascade deletesession.delete(album)session.flush()
sandbox = arena.new_sandbox()# Retrieve an artist by his namebuckley = sandbox.Artist(name="Jeff Buckley")display_info(buckley)# Retrieve songs containing the word 'la' from the given artist# We will explain in more details the concepts of Expressionsf = lambda ar, al, s: ar.name == "Jeff Buckley" and "la" in s.title# Note how we express the composition between the unitsresults = sandbox.recall(Artist & Album & Song, f)for artist, album, song in results: print " %s" % (song.title,)# Retrieve all songs but only display some of themsongs = sandbox.recall(Song)print "Found %d songs, let's show only a few of them:" % (len(songs),)for song in songs[1:-1]: print " %s" % (song.title,)# Retrieve an album by its IDalbum =.
First we define what we will call a storage module providing a simple interface to some common operations like the connection to the database.
import dejavuarena = dejavu.Arena()from model import Photoblog, Album, Film, Photod the entities is done through the following process:
from dejavu import Unit, UnitPropertyfrom engine.database import arenafrom album import Albumclass Photoblog(Unit): name = UnitProperty(unicode) title = UnitProperty(unicode) def on_forget(self): for album in self.Album(): album.forget()Photoblog.one_to_many('ID', Album, 'blog_id')
import datetimefrom dejavu import Unit, UnitPropertyfrom engine.database import arenafrom film import Filmclass')
import datetimefrom dejavu import Unit, UnitPropertyfrom engine.database import arenafrom photo import Photoclass')
import datetimefrom dejavu import Unit, UnitPropertyfrom engine.database import arenaclass)
Properties will map into the columns of a table in the relational database.
Associating units is the means of giving a shape to your design. Entities are bricks, relations are the mortar. Dejavu supports the following common relationships::
As you can see, the interface provided by the Sandbox class is quite simple, straightforward, and yet powerful as the next section will demonstrate.
This article has introduced the backbone of our photoblog application through the description of its entities and how they are mapped in their Python counterparts., Lorient, France in 2002. Since then he has been working as an IT consultant for a variety of companies, both small and large. He currently resides in the United Kingdom.
|
http://www.packtpub.com/article/photoblog-application
|
crawl-002
|
en
|
refinedweb
|
created a job with a simple println,
started the app (grails run-app)
saw the output,
changed the println and saved.
the output did not change.
When the job class is reloaded, the bean factory, which created the scheduler is destroyed. This leads to the scheduler being shut down, which in turn prohibits (re-)scheduling jobs.
This happens in QuartzGrailsPlugin.groovy:123:
def onChange = { event ->
...
event.ctx.registerBeanDefinition("${fullName}", beans.getBeanDefinition("${fullName}"))
...
}
The solution is to avoid these dependencies by delaying the registration of the job triggers. The attached patch registers the job triggers in doWithApplicationContext() instead of doWithSpring().
Changed and newly created jobs are now created/updated correctly without the scheduler being shut down.
One issue remains: If an existing job is deleted (e.g: rm grails-app/jobs/MyJob.groovy), the job will still be running.
I Haven't investigated this issue properly, but I think this is rather a problem of the resource watcher. Maybe deleted resource events are not propagated to onChange()?
|
http://jira.codehaus.org/browse/GRAILSPLUGINS-190
|
crawl-002
|
en
|
refinedweb
|
US. ......more
Garlic,
onions may save
prostate,
study finds
WASHINGTON,
Nov 7: Men
who eat plenty of onions, garlic and similar foods may
irritate their romantic partners but may cut their risk
of prostate cancer in half, researchers have reported. .
....more
Depressed
US
Democrats look to 2004
WASHINGTON,
Nov 7: With
Republicans basking in victory, a depressed and wounded
Democratic Party turned its attention to the White House
race ...........more
Eyes on Bhuttos jailed
spouse as key to
breaking
Pak crisis
ISLAMABAD,
Nov 7: All
eyes were focussed on the jailed spouse of former Prime
Minister Benazir .....more
VIENTIANE, Nov
7: India
today agreed to step up defence cooperation with Laos,
including training of the South East nations air
force pilots, and offered help in power transmission,
exploration ........more .......more
Bangladesh to seek return
of criminals from India
DHAKA,
Nov 7: Bangladesh
will formally ask New Delhi to return the criminals who
have taken......more......more
Bush calls up Putin ahead of crucial UN
vote on Iraq ....
Final US push for UN resolution to disarm
Iraq ....
US.
US Attorney General John
Ashcroft said the indictment against the two Pakistanis
and an Indian-born US citizen had been taken out in San
Diego, according to Hong Kongs Government-run radio
station Rthk.
Ashcroft said the men were
trafficking 600 kilograms of heroin and five tonnes of
hashish to exchange for the shoulder-fired stinger
anti-aircraft missiles.
They were caught in an
undercover sting by the Federal Bureau of Investigation
(FBI) as they negotiated the drugs-for-weapons deal in
Hong Kongs luxury island Shangri-La hotel in
September.
The three men - Pakistanis
Syed Mustajab Shah, 54, and Muhammed Abid Afridi and US
citizen Ilyas Ali, 57 - appeared in court here on Tuesday
in the first hearing of an extradition application.
They are alleged to have
discussed with undercover FBI agents in the US plan to
import and sell 600 kilograms of heroin and five tonnes
of hashish.
The FBI says the three
agreed to accept three Stinger missiles, which cost about
200,000 US dollars on the black market, as part payment
for the drugs.
The weapons were to go to
Osama bin Ladens Al-Qaeda terrorist network, which
is blamed for the September 11 attacks on New York and
Washington.
The extradition hearing
was adjourned until later this month and the three were
remanded in custody.
Following Tuesdays
hearing, the Hong Kong Government issued a statement
saying there was no known terrorist infrastructure in the
territory and insisting it remained one of the
worlds safest cities. (DPA)
Garlic, onions may save
prostate, study finds
WASHINGTON,
Nov 7: Men
who eat plenty of onions, garlic and similar foods may
irritate their romantic partners but may cut their risk
of prostate cancer in half, researchers have reported.
Men who ate the most
vegetables containing allium the pungent,
sulfur-based compound blamed for the antisocial effects
of garlic and onions had a 50 percent lower risk
of having prostate cancer than those who ate the least,
the study found.
Ann Hsing of the National
Cancer Institute and colleagues interviewed 238 men with
prostate cancer and 471 men without prostate cancer about
what they ate.
Men who ate more than a
third of an ounce (10 grams) a day of onions, garlic,
chives or scallions were much less likely to be in the
cancer group, Hsing reported in yesterdays issue of
the journal of the National Cancer Institute.
This adds to research
showing the right diet can reduce the risk of cancer, the
American institute for cancer research, which
investigates the links between cancer and diet, said.
"Several case-control
studies (in which the diets of cancer patients are
compared to the diets of healthy individuals) have linked
allium vegetables to lower risk for cancer of the
stomach, colon, esophagus, breast and endometrium (lining
of the uterus)," the group said in a statement.
Jamie Bearse of the
prostate cancer coalition agreed.
"Its great to
see that more flavorable foods are proving to be
preventatives for prostate cancer," he said in a
statement. "Maybe it will encourage men to put down
that big mac and pick up a salad with chives and
onions." In a piece of bad news for prostate cancer
patients, a team at the University of Rochester in New
York found some drugs used to treat prostate cancer can
in fact cause it to grow.
"Its a real
surprise that the same compound that kills cancer cells
also makes them grow," Chawnshang Chang, who led the
study, said in a statement. "The effect of the drug
reverses completely."
His team studied a drug
called flutamide, made by schering, but he said other,
similar drugs are likely to have a similar effect.
A common treatment for
prostate cancer is castration using drugs or surgery to
cut off testosterone. The hormone fuels the growth of
prostate cancer cells in many cases.
But for reasons that
doctors have not understood, after one or two years the
cancer cells often start growing again.
"In all of the more
than 30,000 men who die of prostate cancer each year, the
cancer cells have become capable of growing even when we
starve the cells of testosterone," Dr Edward
Messing, a Urology professor at Rochester, said.
Writing in the journal
cancer research, Chang and colleagues said they may have
an explanation. Flutamide cuts off testosterone by
targeting a protein known as the Androgen receptor. But
it also turns on map kinase an enzyme that
promotes cell growth and is known to play a role in
breast and prostate cancer.
Yi-Fen Lee, who worked on
the study, said the findings do not mean prostate cancer
patients should avoid flutamide or similar drugs.
"These drugs are
necessary for patients who otherwise have few
options," Lee said. "Perhaps these findings
will help lead to a new drug target so that men with this
disease can be treated more effectively." (AGENCIES)
Depressed US Democrats look
to 2004
WASHINGTON,
Nov 7: With
Republicans basking in victory, a depressed and wounded
Democratic Party turned its attention to the White House
race in 2004 and searched for answers to Tuesdays
midterm losses.
A large field of potential
Democratic candidates, led by former Vice President Al
Gore, will decide in the next few months whether to
challenge Republican President George W Bush, whose
historical sweep of Congress left him in a commanding
position heading into 2004.
But the loss of Democratic
control in the Senate and the expansion of the Republican
majority in the House of representatives left Democrats
in a sour mood. It damaged party leaders and could play a
role in decisions by potential candidates.
Before anyone can
effectively challenge Bush, disgruntled Democrats said,
the party must find a vision and a stronger voice of
opposition.
"Democrats lacked a
national message this year," said Joe Andrew, a
former Chairman of the Democratic National Committee.
"Its clearly not enough to rely on
tactics."
The losses dealt severe
blows to House Democratic leader Richard Gephardt and
Senate Democratic leader Tom Daschle, but could free them
up from congressional commitments to make White House
runs.
Several Democrats said
they expected gephardt, home in St Louis conferring with
family and friends, to make that move soon. House
leadership elections are planned next week, and he could
face a challenge.
"We need some new
people and new ideas at the top of the chain,"
Tennessee Democratic Rep Harold Ford told CNN, saying
that if Gephardt ran again for Democratic House leader
"he should be prepared for some opposition."
The Democratic failures at the Congressional level might
encourage candidates from outside Congress to jump in the
Presidential race, analysts said, and outsiders like
Vermont Gov. Howard Dean saw their stature enhanced
merely by not being in the Washington inner circle.
At the core of the
Democratic field over the last six months has been Gore,
the narrow loser of the 2000 race after a five-week
recount in Florida. Gephardt has been mentioned, along
with Daschle Dean, and Democratic Sens. Joseph Lieberman
of Connecticut, John Kerry of Massachusetts and John
Edwards of North Carolina.
Dean is the only one who
has definitely said he is in the race, although Kerry is
certain to run as well. Other potential candidates
include Gary Hart and Bill Bradley, both former Senators
and failed Presidential candidates, and veteran Cens.
Chris Dodd of Connecticut and Joe Biden of Delaware.
Gores decision, due
by the end of the year, will be the first step toward
shaking out the field. If Gore gets in, his name
recognition would make him an instant frontrunner and he
could knock out his 2000 Vice Presidential running mate,
Lieberman, who has promised not to run against Gore.
Analysts said Gore might
be helped with activists by his decision to speak out
against Bushs plans in Iraq, although the
partys thirst for a fresh face could help
first-termer Edwards or Kerry, who are not as well known
nationally.
"Democrats should not
mistake the magnitude of this loss there has to be a
major regrouping," Gore told ABCs Barbara
Walters in an interview taped on Wednesday, saying
Democrats needed to present "a constructive
alternative" to Bush.
Partial transcripts of the
interview were released, but the full interview will not
air until next week.
Analysts said Democrats
could start scouring the lists of new Governors for
possible stars who could rise to the top the way Arkansas
Gov. Bill Clinton did in 1992 against Bushs father,
who was seen as unbeatable after the Gulf war in 1991.
"The field has opened wide," said Gary Jacobsen
of the University of California-San Diego. "There is
no obvious democratic choice and no one looks
particularly strong."
Possible long-shot late
entries like Govs. Roy Barnes of Georgia and Gray Davis
of California saw their chances go up in smoke on
Tuesday. Barnes lost reelection despite vastly
outspending his Republican opponent Davis won narrow
reelection against a Republican who was running what his
own party leaders said was the worst campaign in the
country.
Democrats said they saw a
silver lining in Tuesdays results Bush and
Republicans will have to show success and cannot blame
Senate democrats for their failures.
"Now republicans will
have to deliver on the issues on which they campaigned
and we will be watching," said Democratic Sen. Patty
Murray of Washington, head of the partys Senate
campaign committee.
Terry Mcauliffe, Chairman
of the Democratic National Committee, said Gubernatorial
pickups in battleground states like Illinois,
Pennsylvania, Michigan and Wisconsin would put the
partys Presidential nominee in a strong
organizational and fund raising position.
A huge wild card will be
the success or failure of any possible military mission
in Iraq. A continued sputtering economy, failure in Iraq
or other missteps by the Bush administration could put
democrats back in the race.
"We have to remember
how quickly things change in politics," said former
Democratic Rep Vic Fazio. "People who are handed
elections two years out can lose."
Amy Isaacs, National
Director of the Liberal Americans for Democratic Action,
said Democrats "have to stop running scared" if
they have any hope of reclaiming the White House.
"If Democrats
articulate a message instead of trying to meet the
Republicans in the Middle, they can come out
victorious," she said. (AGENCIES)
Eyes on Bhuttos jailed
spouse as key to
breaking Pak crisis
ISLAMABAD,
Nov 7: All
eyes were focussed on the jailed spouse of former Prime
Minister Benazir Bhutto today, as the two-day old
alliance of Islamists and secular opposition parties
against Pakistan President Pervez Musharraf showed signs
of folding.
Bhuttos husband Asif
Ali Zardari is at the centre of back-room wheeling and
dealing by Musharrafs supporters to wean
Bhuttos party back from its deal with the Islamic
bloc and into a coalition with the pro-regime parties,
officials involved in the deal-making have told AFP.
Musharrafs decision
late yesterday to postpone the inauguration of the
Parliament elected four weeks ago was widely seen as a
move to buy time to stymie the anti-mMsharraf coalition.
Bhutto loyalists in her
Pakistan Peoples Party (PPP) have said privately
that they would only side with the Musharraf-backed
parties if Zardari was released from six years
custody, and that graft charges against Bhutto were
dropped to allow her to return freely from self-imposed
exile.
Zardari, accused of gross
corruption while he was a member of his Wifes
cabinet, is due to appear before an accountability court
for a bail hearing later today in one of a raft of
corruption cases against him.
Senior politicians from
both the pro- and anti-Musharraf camps have said he is
expected to win bail and be released into house arrest in
Islamabad on medical grounds, under a deal moving closer
to finalisation.
The deal, however,
involves concessions from the couple which may include
the surrender of some assets by Zardari and Bhutto
staying away from Pakistan for another two years. (AFP)
India to step up
defence cooperation with Laos
VIENTIANE,
Nov 7: India
today agreed to step up defence cooperation with Laos,
including training of the South East nations air
force pilots, and offered help in power transmission,
exploration of minerals and agriculture sector as Prime
Minister Atal Behari Vajpayee ended his bilateral
meetings on a high note.
The Prime Minister, who
witnessed the signing of agreements yesterday including a
10 million dollar line of credit to least developed
nation, held a series of meetings today with President
Khamtay Siphandone, Deputy Prime Minister and Foreign
Minister Somsavat Lengsavad, Defence Minister Maj Gen
Doungchay Phichit and Agriculture Minister Siene
Saphanthong.
During the meetings, Mr
Vajpayee expressed his keenness to help in the
reconstruction of Laos which has stood by India in the
international fora on all issues, including
Pakistan-sponsored terrorism in Kashmir.
Mr Vajpayee, the first
Prime Minister to come to Laos after Jawaharlal Nehru in
1954, leaves Dor delhi tomorrow after a short visit for
bilateral talks in Bangkok with Thai Prime Minister
Thaksin Shinawatra.
In an effort to increase
high-level contacts, the Prime Minister has invited his
counterpart Bounnhang Vorachit to visit India. The
invitation has been accepted and dates will be worked out
through diplomatic channels.
The Laos Defence Minister
will be visiting India in January-end as also the
Chairman of the National Assembly Samane Vighaket whom
also the Prime Minister met.
During the bilateral talks
here, the two countries expressed concern at the growing
threat posed by international terrorism and stressed on
the need to cooperate closely in the fight against the
menace.
They emphasised on the
need for all countries to implement their obligations
under the UN Security Council resolution 1373, including
ensuring that all terrorists are denied safe haven, a
joint statement issued this evening said.
The Laos Defence Minister,
when he called on Mr Vajpayee this evening, appreciated
Indias help in training defence personnel of his
country and said Laos would want its air force pilots and
technicians to also undergo training from Indian Air
Force. The Prime Minister agreed to this request. The
visit to Vientiane assumed significance for New Delhi
since laos will be the coordinating country to the
India-ASEAN and will be hosting the summit in 2004.
Mr Vajpayee made a special
point of appreciating Laoss condemnation of the
attack on the parliament on December 13 last year.
The Laos leadership was
unanimous in appreciating the help India has given to the
country, especially the Kirloskar pumps which had
contributed immensely to gaining self-sufficiency in
rice.
President Siphandone
welcomed Mr Vajpayee by saying he was the second great
man from India, after Mr Nehru, to come to the country.
The lao leaders said Mr
Vajpayees visit was an encouragement to the
Buddhist Lao people since he was coming from a land in
which their religion was born, Secretary in the Ministry
of External Affairs R M Abhyankar told newspersons.
Mr Siphandone recalled
that India had supported his country at every
international fora and assured that Laos would do the
same for New Delhi.
He sought Indias
help in fully exploring the large mineral wealth of Laos.
Another area where India could help was in the energy
sector, in the north of the country. The South of Laos
had a surplus of power and even exported it to Cambodia
but the north had no transmission lines.
Mr Vajpayee agreed to both
these requests.
Agriculture Minister
Saphanthong, who is also president of the India-Lao
Friendship Committee, asked for Indias help in
irrigation management, crop protection and setting up of
vegetable extraction plants. (UNI) administration to form a coalition Government
with PML-Q even as an anti-corruption court put off a
hearing on bail petition of her spouse Asif Ali Zardari
till November 13.
"The party debunks
the reports being circulated by vested interests of some
secret underhand deal between the PPP and the Government
in the run up to National Assemblys session,"
Pakistan Peoples Party Parliamentarians (PPPP) said
in a statement here.
"The party condemns
the reports as deliberate disinformation by the vested
interests to subvert the recent forward movement in a
broad based understanding between the anti-regime
political parties on the platform of Alliance for
Restoration of Democracy (ARD) to uphold the supremacy of
the Parliament and for the restoration of the
constitution."
Some newspapers had
reported that PPPP-PML-Q have reached an understanding to
form Government with PPPP leader Mukdhum Amin Fahim as
the Prime Minister. The reports also spoke of a
"deal" between the Musharraf Government and
Bhutto to release Zardari and permit her to return from
self exile.
Meanwhile, an
accountability court adjourned the hearing on bail
petition in the BMW corruption reference against Zardari,
who was tipped to be released today as part of the
"deal", till November 13.
Imprisoned for the past
six years, Zardari has been accused of impersonating as a
student and importing an armoured luxury vehicle without
paying the duty. He is currently in a hospital.
The reports of PPPPs
secret deal with the Government prompted the leaders of
the religious parties, with whom it negotiated to form a
Government, to accuse bhutto of holding dual talks to get
a better bargain.
"There is no secret
deal between the PPP and the military Government. There
will never be. Let there be no doubt or mistake about
it," the PPPP statement said, adding the military
regime was desperate to form a coalition Government which
would be "subservient" to it and act as a
"rubber stamp" to endorse its
"undemocratic and dictatorial agenda."
"People have seen how
the regime has, for this purpose, resorted to pre-poll
and poll-day rigging and post poll manipulation in a most
blatant and shameful manner", it said.
Accusing the Musharraf
Government of floating the rumours after it postponed the
National Assembly, it said the NA session was postponed
to enable the "kings" parties muster the
requisite strength in Parliament through
"inducements, intimidation and coercion".
Denying rumours of a split
within the party, it said "the midnight knocks on
the doors of PPP legislators have been stepped up but the
legislators have not been deterred. They will never be.
The party is united under the leadership of bhutto."
The regime has resorted to "disinformation"
campaign after it failed divide the party, it added.
"The objective behind
the latest disinformation campaign of a so-called secret
deal...Is aimed at causing confusion and split within
PPPP. More than that it is also aimed at sowing seeds of
mistrust between the PPPP and other parties which in
association with ARD have vowed to restore supremacy of
Parliament and Constitution. The party enunciates and
upholds these democratic principles openly, clearly and
loudly. It does not need to resort to any secret
deals." (PTI)
Bangladesh to seek return of
criminals from India
DHAKA,
Nov 7: Bangladesh
will formally ask New Delhi to return the criminals who
have taken refuge in India following the deployment of
Army in the country.
"We will soon
formally request India to send the criminals. These
criminals have fled through different means and are
living in Barasat in West Bengal and lake town in
Kolkata," Bangladesh Foreign Minister M Morshed Khan
was quoted as saying by the BBC.
"Many criminals have
fled to different countries including India after the
change of Government following last years
election," Khan said.
"We are having
friendly relations and understanding, we will get
cooperation" from India in getting the criminals
back, he said.
Khan said the two
countries have pledged not to give sanctury to any
criminal and terrorist in each others territory.
Although BBC reported that
around 5,000 people have so far been rounded up in the
current drive which entered fourth week today, an
official spokesman said until yesterday 4,416 people have
been arrested including 1223 listed criminals and 73
suspects.
The arrests also include
60 peoples representative including three Members of
Parliament. (PTI) in East and North East India here, to the West
Bengal Government.
The project was handed
over to the State Government by the document being signed
by Principal Secretary, Department of Information and
Cultural Affairs Arun Bhattacharya, Consul General of
Italy in Kolkata Domenico Benincasa, Director General,
COE, Rosella Scandella and Director and CEO, Roopkala
Kendro, Anita Agnihotri, according to official sources.
The Memorandum of
Understanding (MoU) was signed between Ambassador of
Italy Gaetano Zucconi for the Italian Government and
Joint Secretary of the Ministry of Finance V Govindarajan
for the Government of India according to the Italy-Indian
agreement for technical co-operation in 1981.
The agreement designated
the Department of Information and Cultural Affairs of the
West Bengal Government as the agency responsible for the
implementation of its obligation on behalf of the centre
and the NGO CEO as the agency responsible on behalf of
the Government of Italy under the MoU.
Through this technical
co-ordination project, Italy has provided an assistance
of Rs 9.69 crore to West Bengal in terms of equipment and
training. The State Government has provided funds for
construction at the training-cum-production building of
Roopkala Kendro.
The Kendro has come up
under this technical co-operation on five acres of
Government land at Salt Lake in the city. (UNI)
Bush calls up Putin ahead of
crucial UN vote on Iraq
MOSCOW,
Nov 7: Ahead
of the crucial vote on Iraq, US President George W Bush
today called up his Russian counterpart Vladimir Putin to
discuss the UN Security Council resolution, the Kremlin
Press Service said.
The two presidents also
discussed the agenda for their forthcoming sumit later
However, the Kremlin did
not elaborate on the Bush-Putin telephonic discussions on
Iraq vote.
Earlier, Russian Foreign
Minister Igor Ivanov discussed the modified UNSC
resolution on Iraq with the US Secretary of State Colin
Powell and his Chinese, French and Mexican colleagues
ahead of the vote by the apex un body in which Russia
holds Veto powers.
According to a Russian
Foreign Ministry release Ivanov and his US, Chinese,
French and Mexican colleagues noted "significant
convergence" of the stands on the principled points
of the future resolution in which Moscows several
concerns have been dispensed with as a result of active
discussions, including the exclusion of the possibility
of an automatic use of force against Iraq in case of its
failure to cooperate with the UN Weapons Inspectors.
(PTI)
Final US push for UN resolution
to disarm Iraq
UNITED
NATIONS, Nov 7:
The United States, in what it calls Iraqs last
chance to disarm or face war, is pushing the UN Security
Council to adopt a tough resolution by tomorrow, and
veto-holders France and Russia are edging closer to
agreeing.
The resolution, the result
of eight weeks of negotiations on scrapping any chemical,
biological or nuclear weapons of mass destruction Iraq
may have, was formally presented to Council members
yesterday and will be reviewed again today.
"The resolution makes
very clear that this is a final opportunity for Iraq to
disarm," US State Department Spokesman Richard
Boucher said.
The United States wants a
vote on Friday but Secretary of State Colin Powell, the
key negotiator, has cancelled travel plans next week so
he can deal with any last-minute hitches.
In Moscow, Deputy Foreign
Minister Yuri Fedotov said today Russia was studying the
draft and would focus on ensuring the resolution does not
"include any measure allowing the automatic use of
force".
The draft took into
account Russias position with respect to Iraqi
sovereignty and territorial integrity, and a solution to
the Iraqi problem which could include lifting UN
sanctions, he was quoted as saying by the Itar-Tass news
agency.
In Baghdad, Saddam said:
"If these two American and British administrations
are able to achieve their wishes, the world will revert
to a new law, which is the law of evil based on power and
opportunity rather than the law of love and
justice."
Iraqi television quoted
Saddam today telling visiting Malaysian Information
Minister Khalil Yaacob that opposing US and British
intentions towards Iraq also served the interests of all
countries.
A leading Iraqi newspaper
said that China, France and Russia should oppose any
wording in the British-US draft resolution that might be
used to justify a military assault.
"America wants the
resolution to include texts that it uses afterward as a
pretext or a cover for committing aggression against
Iraq," the ruling Baath Party newspaper Al-Thawra
said in a front-page editorial. (AGENCIES)
| home | state | national | business | editorial | advertisement | sports
| international | weather | mailbag | suggestions | search | subscribe | send mail |
|
http://www.dailyexcelsior.com/02nov08/inter.htm
|
crawl-002
|
en
|
refinedweb
|
This is the mail archive of the java-patches@gcc.gnu.org mailing list for the Java project.
Hi Andrew and Ranjit, Mohan>> I'm asking this because there are a few strategic opens / fopens Mohan>> in jcf-*.c which, if you checked for the existence of the file Mohan>> of the exact case on Win32, you would get the compiler working Mohan>> on MingW. Andrew> Go for it. This proposed patch aims to resolve gcj compilation glitches due to the Win32 case-insensitive filesystem. For my native MingW build, I initially had a hacked up jcf-io.c which simulated checking the exact case of the .java or .class file to open. This is an attempt at the real deal. It also attempts to open the door at accomodating other case-insensitive filesystems. I would have liked to do something like an AC_LINK_FILES and AC_DEFINE and avoid the ugly #ifdef-ing in the source files themselves. However, I was thwarted in this by not having a configure.in at the gcc/java level. Could this be done better? Here is my defense of this patch: Andrew had mentioned that he was worried about introducing platform-specific stuff in the front end and argued that such functionality might better be moved to the runtime library. However, in this case, it isn't a question of enforcing strict case checking throughout the entire compilation process. In fact, Ranjit even has patches which do exactly the opposite for certain file types. However, public Java classes must exist in same-named, same-case source files and therefore (cringe), I believe that it's the compiler's job (and not the runtime's job) to enforce case-strictness for these specific instances on case-insensitive filesystems. Since there is no portable way of doing this, it stands to reason that platform-specific code must trickle into the front end. Does this make sense or am I full of it? I've put the patch itself in the body of this email, but am attaching jcf-win32.c, which would be a new repository file, as well as a test case which fails without this patch. -- Mohan ChangeLog 2003-03-10 Mohan Embar <gnustuff at thisiscool dot com> * jcf-win32.c: added to repository. Defines Win32-specific jcf_open_exact_case() * Make-lang.in: added jcf-win32.c * jcf.h: defined macro JCF_OPEN_EXACT_CASE which resolves to open() on non-Win32 platforms and Win32-specific jcf_open_exact_case() on Win32 * jcf-io.c: use JCF_OPEN_EXACT_CASE when trying .java and .class files Index: Make-lang.in =================================================================== RCS file: /cvsroot/gcc/gcc/gcc/java/Make-lang.in,v retrieving revision 1.91.12.3 diff -u -2 -r1.91.12.3 Make-lang.in --- Make-lang.in 26 Jan 2003 11:31:19 -0000 1.91.12.3 +++ Make-lang.in 11 Mar 2003 04:06:29 -0000 @@ -107,16 +107,16 @@ JAVA_OBJS = java/parse.o java/class.o java/decl.o java/expr.o \ java/constants.o java/lang.o java/typeck.o java/except.o java/verify.o \ - java/zextract.o java/jcf-io.o java/jcf-parse.o java/mangle.o \ - java/mangle_name.o java/builtins.o \ + java/zextract.o java/jcf-io.o java/jcf-win32.o java/jcf-parse.o \ + java/mangle.o java/mangle_name.o java/builtins.o \ java/jcf-write.o java/buffer.o java/check-init.o java/jcf-depend.o \ java/jcf-path.o java/xref.o java/boehm.o java/java-tree-inline.o mkdeps.o -GCJH_OBJS = java/gjavah.o java/jcf-io.o java/jcf-depend.o java/jcf-path.o \ - java/zextract.o version.o mkdeps.o errors.o +GCJH_OBJS = java/gjavah.o java/jcf-io.o java/jcf-win32.o java/jcf-depend.o \ + java/jcf-path.o java/zextract.o version.o mkdeps.o errors.o JVSCAN_OBJS = java/parse-scan.o java/jv-scan.o version.o -JCFDUMP_OBJS = java/jcf-dump.o java/jcf-io.o java/jcf-depend.o java/jcf-path.o \ - java/zextract.o errors.o version.o mkdeps.o +JCFDUMP_OBJS = java/jcf-dump.o java/jcf-io.o java/jcf-win32.o java/jcf-depend.o \ + java/jcf-path.o java/zextract.o errors.o version.o mkdeps.o JVGENMAIN_OBJS = java/jvgenmain.o java/mangle_name.o errors.o @@ -302,4 +302,5 @@ input.h java/java-except.h $(SYSTEM_H) toplev.h java/parse.h $(GGC_H) \ debug.h real.h gt-java-jcf-parse.h +java/jcf-win32.o: java/jcf-win32.c $(CONFIG_H) $(SYSTEM_H) java/jcf.h java/jcf-write.o: java/jcf-write.c $(CONFIG_H) $(JAVA_TREE_H) java/jcf.h \ $(RTL_H) java/java-opcodes.h java/parse.h java/buffer.h $(SYSTEM_H) \ Index: jcf-io.c =================================================================== RCS file: /cvsroot/gcc/gcc/gcc/java/jcf-io.c,v retrieving revision 1.36.2.2 diff -u -2 -r1.36.2.2 jcf-io.c --- jcf-io.c 10 Mar 2003 19:32:23 -0000 1.36.2.2 +++ jcf-io.c 11 Mar 2003 04:06:30 -0000 @@ -554,5 +554,5 @@ (classname_length <= 30 ? classname_length : 30))); - fd = open (buffer, O_RDONLY | O_BINARY); + fd = JCF_OPEN_EXACT_CASE (buffer, O_RDONLY | O_BINARY); if (fd >= 0) goto found; @@ -566,5 +566,5 @@ (classname_length <= 30 ? classname_length : 30))); - fd = open (buffer, O_RDONLY); + fd = JCF_OPEN_EXACT_CASE (buffer, O_RDONLY); if (fd >= 0) { Index: jcf.h =================================================================== RCS file: /cvsroot/gcc/gcc/gcc/java/jcf.h,v retrieving revision 1.31.20.1 diff -u -2 -r1.31.20.1 jcf.h --- jcf.h 7 Mar 2003 04:39:46 -0000 1.31.20.1 +++ jcf.h 11 Mar 2003 04:06:31 -0000 @@ -82,4 +82,21 @@ #endif +/* On case-insensitive file systems, we need to ensure that a request + to open a .java or .class file is honored only if the file to be + opened is of the exact case we are asking for. In other words, we + want to override the inherent case insensitivity of the underlying + file system. On other platforms, this macro becomes the vanilla + open() call. + + If you want to add another OS, add your define to the list below + (i.e. defined(WIN32) || defined(YOUR_OS)) and add an OS-specific + .c file to Make-lang.in similar to jcf-win32.c */ +#if defined(WIN32) + extern int jcf_open_exact_case(const char* filename, int oflag); + #define JCF_OPEN_EXACT_CASE(x,y) jcf_open_exact_case(x, y) +#else + #define JCF_OPEN_EXACT_CASE open +#endif + struct JCF; typedef int (*jcf_filbuf_t) PARAMS ((struct JCF*, int needed));
Attachment:
ExactCase.zip
Description: Zip archive
Attachment:
jcf-win32.c
Description: Binary data
|
http://gcc.gnu.org/ml/java-patches/2003-q1/msg00716.html
|
crawl-002
|
en
|
refinedweb
|
Design Patterns in XML ApplicationsDesign Patterns in XML Applications
Adequate documentation of the experience gained during the development of XML-based systems is a prerequisite for XML's success as a widely used technology. Design patterns have proved to be a very good technique for transmitting, and to some extent formalizing, knowledge about recurring problems and solutions in the software development process.
This article, the first of two articles on XML and design patterns, is focused on the applicability of some well-known design patterns to XML-specific contexts.
This article assumes some basic knowledge about XML processing. Also, basic knowledge about UML class diagrams will be useful (see our basic UML class diagram guide).
Patterns are an effective way to transmit experience about recurrent problems. A pattern is a named, reusable solution to a recurrent problem in a particular context.
Patterns are not miraculous recipes that will work in every scenario, but they do convey important knowledge, a standard solution, and a common language about a recurrent problem. All this makes them powerful design tools.
Since common problems with (often) common solutions appear in many scenarios, patterns are now used in almost every part of development: there are process patterns, architectural patterns, implementation patterns, testing patterns, etc. However, one particular kind of pattern has received special attention from the development community: design patterns. Design patterns are a powerful reuse mechanism, and a way to talk about design decisions that actually work.
The expression XML patterns may be used to denote two kinds of patterns: (1) design patterns specifically treating XML-related problems, and (2) information structuring patterns for the design of DTDs, schemas, etc.
XML patterns will be discussed more fully in the next article. Here we will focus on the applicability of traditional design patterns to the design of XML applications.
Traditional design patterns are often classified in categories. One common set of categories is structural patterns and behavioral patterns. In this article we will explore the applicability of patterns in each of these categories to XML problems.
The patterns we will discuss are: Command pattern, Flyweight pattern, Wrapper pattern, and Iterator pattern. The choice of patterns for this article notwithstanding, any other pattern can be applied to the design of XML applications.
Choosing the right patterns to present has not been easy. I have tried to maintain a balance between the different options, thus there are two structural and two behavioral patterns; two DOM-oriented and two event-based application discussions; and two of the patterns are illustrated using C++ and two using Java.
Command is a behavioral pattern used to encapsulate actions in objects. This is highly useful when you want to keep track of changes made to a model, for example in supporting multi-level "do/undo."
The following is a class diagram of the command pattern. Slightly different versions of this pattern can be found in the literature, however, I chose to present it in this fashion for clarity.
Suppose you are building an application that uses the DOM representation of an XML document as its basic datasay a component for displaying vector graphics, or a simple shopping list manager.
The user of your program will perform many operations, like deletions and additions. Since you are using the DOM as your underlying model, these changes will sooner or later translate into calls to removeChild and other DOM-specific calls. However, depending on how you structure your program, these changes can become either a hard-to-maintain, hard-to-extend mess, or an organized, extensible solution. Here is where the command pattern can help.
Let's take the shopping list editor as an example. The user wants to delete, add, and annotate the shopping list, among other operations. You use a GUI, so one option would be to hard-code your menu widgets' member calls to DOM-specific methods. For example, when the user selects the menu item "Insert," call insertChild. This has a number of "advantages":
Such code is fast to write.
Most GUI builders will "lead" you towards this.
It can be soft in terms of resource consumption.
It seems like it could be a real choice, but now you want to add undo/redo support to your program, and serious problems regarding this option become apparent:
There seems no easy way to maintain your do/undo list: either you change all your hardcoded widget events to call both the DOM methods and log to some list, or you change your DOM representation to somehow log the changes performed (!)
Even if you managed to successfully implement the do/undo lists from the hardcoded widget calls, you would be replicating that logic many times, which is hard to maintain and error-prone.
There is no clear indication as to which part of your program will manage the undo logic and how it will do it.
The solution that the command pattern proposes is to encapsulate the changes to the DOM into objects, command objects, each capable of doing (and undoing) a particular action. The collection of command objects will be managed by a certain command manager, capable of holding the queue of executed commands, so the user may undo/redo them.
This example reflects a very common approach to DOM processing using the command pattern. If you will be writing applications using DOM as the underlying data structure representation, you are very likely to find this approach useful.
The figure shows the structure of a typical DOM-oriented application using the command pattern for its message passing. The following is the header file for the base class AbstractCommand, which is the foundation of the example. Please refer to command.zip for the complete example code.
#include "heqetDef.h" #include "Notification.h" /** AbstractCommand is the base class for all commands. It provides do/undo operations as well as getDescription and getState operations for the easy tracking of the executed commands. (quite useful when keeping a menu of last performed operations). */ class AbstractCommand { public: /**@name Comparison operators * The comparison operators in the base AbstractCommand are * provided in order to keep STL usability in the CommandManager */ //@{ /// equality operator virtual int operator==(); /// unequality operator virtual int operator!=(); /// increment operator virtual void operator++(); //@} /**@name Do / Undo methods */ //@{ /// Pure virtual operation that in child classes encapsulates the logic of the change virtual Notification do() = 0; /// Pure virtual operation that in child classes encapsulates the logic of undoing a change virtual Notification undo() = 0; /** Pure virtual operation that in child classes returns the description of the operation * (particularly useful for undo/redo lists presented to the user) */ virtual string getDescription() = 0; //@} };
Note that even when this example is written in C++, the main principles (and even the code) can be ported to other languages with ease.
My personal experience shows that the command pattern is especially useful in XML applications when:
You have a DOM-based application and need to keep track of the changes made to the data model.
You have a DOM-based application and need to keep open the possibility for easy and clean extension of the available commands that can be performed on the data model.
In this section we analyzed Command, a behavioral pattern, in object-model-based XML applications. In the next section, we will see a structural pattern in event-based XML applications, Flyweight.
Flyweight is a structural pattern used to support a large number of small objects efficiently. Several instances of an object may share some properties: Flyweight factors these common properties into a single object, thus saving considerable space and time otherwise consumed by the creation and maintenance of duplicate instances.
One of the biggest problems with keeping the DOM representation of the document, instead of constructing your own objects from the output of SAX (or another event-oriented interface), is the size of the representation. In this discussion we assume not only that you want to roll your own domain-specific objects, but that you want them to be as space-efficient as possible.
Suppose you are writing a SAX-based application that constructs CD objects from a file called musicCollection.xml. At the end of parsing you might want a collection of CD objects to be created. Those objects may look like:
As you probably already noticed, all the information about the artist (in this example we use only one, for simplicity) may be replicated many times. (Notice too that this artist information is unlikely to change over time.) This is a clear candidate for factorization into what we'll call a flyweight: a fine-grained object that encapsulates information (usually immutable) shared by many other objects.
Remember that CD objects should be constructed from an XML file that might look somewhat like this:
<?xml version="1.0"?> <collection> <cd> <!-- This is quite simplistic, better XML representations could have been chosen, but it only aims at illustrating the pattern --> <title>Another Green World</title> <year>1978</year> <artist>Eno, Brian</artist> </cd> <cd> <title>Greatest Hits</title> <year>1950</year> <artist>Holiday, Billie</artist> </cd> <cd> <title>Taking Tiger Mountain (by strategy)</title> <year>1977</year> <artist>Eno, Brian</artist> </cd> </collection>
You decide to use Java and a SAX parser to do the job. Now you must construct a set of SAX handlers capable of creating CD objects with flyweight artists. This will be the subject of our example.
The basic logic for the SAX handler is simple:
Whenever a CD open tag is found, create a new CD object.
Whenever title or year elements are found, enter them in the current CD.
Whenever an artist element is found, ask the artist factory to create it. This is fundamental to the problem: the CD object does not know if it is sharing this object with others; only the factory keeps track of what has been created.
The following code illustrates a simple factory for the extrinsic objects, and the output produced by the example program if run with the above XML file.
Flyweight Example: Factory
// Simple Flyweight factory for Artist classes (Artist is the extrinsic, // flyweight class. CD is the client) import java.util.Hashtable; import java.lang.String; public class ArtistFactory { // Whenever a client needs an artist, it calls this method. The client // doesn't know/care whether the Artist is new or not. Artist getArtist(String key){ Artist result; result = (Artist)pool.get(key); if(result == null) { result = new Artist(key); pool.put(key,result); System.out.println("Artist: " +key + " created"); } else System.out.println("Artist: " +key + " reused"); return result; } Hashtable pool = new Hashtable(); }
Flyweight Example: Output
$ java -Dorg.xml.sax.parser=com.ibm.xml.parser.SAXDriver \ FlyweightDemo music.xml Artist: Eno, Brian created Artist: Holiday, Billie created Artist: Eno, Brian reused Artist: Eno, Brian reused Artist: Eno, Brian reused
For the complete code, please download flyweight.zip
At the end of the parsing, the actual object structure will be:
The flyweight pattern is useful in XML applications when:
You have a domain-specific representation of your document, and you want to keep it as small as possible by taking advantage of shared information among objects.
This is often the case!
In this section we analyzed Flyweight, a structural pattern useful in event-based XML applications. In the next section, we will examine Wrapper, another structural pattern, also in an event-based context.
Wrapper is a structural pattern used to allow an existing piece of software to interact in an environment different from its originally intended one. Wrapper is very similar to the famous Adapter pattern. The difference between the patterns is not predominantly structural, but rather in their intentions: Adapter seeks to make an existing object work with other known objects that expect something, while Wrapper is focused on providing a different interface (without knowing in advance its clients) and solving platform/language issues.
Wrapper is one of the most easily identifiable patterns in the XML world. Even though its explanation is very simple, it is worth mentioning because of its frequency.
A wrapper pattern is used every time an existing parser is adapted to work in another language. A new interface that uses constructs of the new language is defined, yet little or no change in the functionality takes place.
One common source of wrappers in XML is James Clark's expat. Wrappers for expat (developed in C) have been written in numerous languages. Several wrappers are available for C++ (including expatpp), Perl, and other languages.
In the example, we will look at the original C interface of expat, and the C++ wrapper that adapts it for object-oriented manipulation. See also the end of the example section for pointers to complete wrappers of expat.
Expat works by calling functions, called handlers, when certain events occur (for more about expat, refer to Clark Cooper's XML.com article on expat). The following is a small part of the original expat interface, defining the type of a handler, and a function to register handlers for listening to "start element" and "end element" events:
... /* atts is array of name/value pairs, terminated by 0; names and values are 0 terminated. */ typedef void (*XML_StartElementHandler)(void *userData, const XML_Char *name, const XML_Char **atts); ... void XMLPARSEAPI XML_SetElementHandler(XML_Parser parser, XML_StartElementHandler start, XML_EndElementHandler end);
Expat can be used directly in a C++ project, however, several wrappers have been devised to take advantage of C++ syntax. A good example is Andy Dent's expatpp.
All expatpp does is simplify the interface for C++ programmers by wrapping an expat parser in a class:
Simplification with expatpp
class expatpp { public: expatpp(); ~expatpp(); operator XML_Parser() const; // overrideable callbacks virtual void startElement(const XML_Char* name, const XML_Char** atts); virtual void endElement(const XML_Char* name); virtual void charData(const XML_Char *s, int len); virtual void processingInstruction(const XML_Char* target, const XML_Char* data); ...
In order to adapt the expat interface for the new object-oriented calls, the constructor binds the expat callbacks to the corresponding method. Thus, all you have to do in order to handle a particular kind of event is to override the method in a subclass. If you have never worked with expat, this could be a little confusing, but don't worry. The key to understanding it is to look at the code itself: wrapper.zip
The wrapper pattern is useful in XML applications when:
You want to reuse a piece of XML software in an environment different from the one initially intended.
In this section we reviewed Wrapper, a structural pattern useful for adapting XML applications and processors. In the next and final section, we will see Iterator, a behavioral pattern that is very useful in object-model-based contexts.
Iterator is a behavioral pattern used to access the elements of an aggregate sequentially, without exposing the aggregate's underlying representation. It is particularly useful when you want to encapsulate special logic for the traversal of a structure like a DOM tree.
Suppose you are writing a tool that uses the DOM as its internal data representation mechanism. Presumably, there are a lot of actions you want to perform on the members of this collection of elements: search for a particular element, delete all elements with a given name, print elements of certain type, etc.
Since you have read the command pattern section, you decide to implement those actions as Commands, so now you have a nice, extensible way of working with those elements:
applyToAll(AbstractCommand action) { // traverse the whole tree applying action to each // node }
This is good. However, you start to notice different traversals can work better in some cases, and some actions only need to work on certain kind of objects. So you start wondering about a way to isolate the traversal logic from the rest of the program.
The solution is in the iterator pattern. Using the iterator pattern you can create a parametric method applyAll that expects not only a generic action, but a generic iterator:
applyToAll(AbstractCommand action, AbstractIterator iterator) { for(iterator.reset(); !iterator.atEnd(); iterator.next()) { action.target(iterator.value()); action.do(); } }
Now you can invent iterators for all kinds of traversals: pre-order, post-order, in-order, pre-order only over text elements, etc., without having to change a single line of your (already compact and elegant!) method.
The iterator presented traverses the collection (the DOM) by levels, printing first all CD elements, then all title, year, artist, and finally all the text elements. Here is the code for such an iterator:
Iterator Sample code
/** ************************************************************************** * Name: LevelIterator * Description: This iterator traverses the tree by levels. * Note that it could be replaced in the main program for * any other iterator conforming with AbstractIteratorIF, * without changing anything in the main program logic. ************************************************************************** */ import org.w3c.dom.*; import java.util.Vector; public class LevelIterator implements AbstractIteratorIF { public boolean end() { return (aux.size() == 0); } public void next() { if(aux.size() > 0) { current = (Node) aux.elementAt(0); //first get the new next element aux.removeElementAt(0); } // now add all of its children to the end... a typical // level traversal. if (current.hasChildNodes()) { NodeList nl = current.getChildNodes(); int size = nl.getLength(); for (int i = 0; i < size; i++) { aux.addElement(nl.item(i)); } } } public Node getValue() { return current; } public LevelIterator(Node c) { current = c; aux.addElement(current); } Node current; Vector aux = new Vector(); //auxiliar vector for the sublevels }
This is the output of the IteratorDemo program that uses the previous iterator to walk the music.xml example from the Flyweight section.
Iterator Sample Output
-- Node Name: collection NodeValue: null -- Node Name: #text NodeValue: -- Node Name: cd NodeValue: null -- Node Name: cd NodeValue: null -- ... -- Node Name: #text NodeValue: Eno, Brian -- Node Name: #text NodeValue: The Drop -- Node Name: #text NodeValue: 1999
Please refer to iterator.zip for the complete code.
The iterator pattern is useful in XML applications when:
You need to encapsulate the way you walk a given collection. Most of the time in XML applications, this collection will be a DOM tree.
Iterator concludes this overview of the use of design patterns in XML applications. A forthcoming article will present an introduction to some patterns with particular applications to XML.
Design patterns are a powerful way to improve the quality and comprehensibility of your XML applications. Make sure to review the bibliography. You will certainly find more ways to boost your XML development.
If you have comments or questions, the author may be contacted at fabio@viaduct.com
Erich Gamma, Richard Helm, Ralph Johnson & John Vilissides, 1995, Design Patterns: Elements of Reusable Object Oriented Software.
John Vilissides, 1997, Pattern Matching.
Sherman R. Alpert, Kyle Brown, Bobby Woolf, 1998, The Design Patterns Smalltalk Companion.
XML.com Copyright © 1998-2006 O'Reilly Media, Inc.
|
http://www.xml.com/lpt/a/2000/01/19/feature/index.html
|
crawl-002
|
en
|
refinedweb
|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Odoo - Mollie Payment Gateway Integration Response [SOLVED]
Hi,
I am working on Mollie Payment Gateway Integration with Odoo 8. I have one query regarding this.
Like other payment gateways Mollie don't return any response data after completion of payment(post data back to website). But it can redirect the user to return_url(may get GET data using query-string). In Controller we accept only post request but in this case there is no post data so how we are going to handle the response?
POST data is compulsory/standard in case of Odoo or I can just convert the function to get GET request as well to meet my purpose?.
Code for Reference from Buckaroo Payment Gateway(payment_buckaroo addon)
@http.route([
'/payment/buckaroo/return',
'/payment/buckaroo/cancel',
'/payment/buckaroo/error',
'/payment/buckaroo/reject',
], type='http', auth='none')
def buckaroo_return(self, **post):
""" Buckaroo."""
_logger.info('Buckaroo: entering form_feedback with post data %s', pprint.pformat(post)) # debug
request.registry['payment.transaction'].form_feedback(request.cr, SUPERUSER_ID, post, 'buckaroo', context=request done Mollie Integration Successfully. Please let me know if anyone is in need or any help regarding this.
I'm interested in the Mollie module, can you provide me more information? Thanks!
you can contact me at dirtyhandsphp@gmail.com
|
https://www.odoo.com/forum/help-1/question/odoo-mollie-payment-gateway-integration-response-solved-93055
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
.
A Hack Solution
You could hack together your own solution. For example, you could replace every string in your program with a function call (with the function name being something simple, like
_())) which will return the string translated into the correct language. For example, if your program was:
print('Hello world!')
...you could change this to:
print(_('Hello world!'))
...and the
_() function could return the translation for
'Hello world!' based on what language setting the program had. For example, if the language setting was stored in a global variable named
LANGUAGE, the
_() function could look like this:
def _(s): spanishStrings = {'Hello world!': 'Hola Mundo!'} frenchStrings = {'Hello world!': 'Bonjour le monde!'} germanStrings = {'Hello world!': 'Hallo Welt!'} if LANGUAGE == 'English': return s if LANGUAGE == 'Spanish': return spanishStrings[s] if LANGUAGE == 'French': return frenchStrings[s] if LANGUAGE == 'German': return germanStrings[s]
This would work, but you'd be reinventing the wheel. This is pretty much what Python's
gettext module does. gettext is a set of tools and file formats created in the early 1990s to standardize software internationalization (also called I18N). gettext was designed as a system for all programming languages, but we'll focus on Python in this article.
The Example Program
Say you have a simple "Guess the Number" game written in Python 3 that you want to translate. The source code to this program is here. There are four steps to internationalizing this program:
- Modify the .py file's source code so that the strings are passed to a function named
_().
- Use the pygettext.py script that comes installed with Python to create a "pot" file from the source code.
- Use the free cross-platform Poedit software to create the .po and .mo files from the pot file.
- Modify your .py file's source code again to import the
gettextmodule and set up the language setting.
Step 1: Add the _() Function
First, go through all of the strings in your program that will need to be translated and replace them with
_() calls. The gettext system for Python uses
_() as the generic name for getting the translated string since it is a short name.
Note that using string formatting instead of string concatenation will make your program easier to translate. For example, using string concatenation your program would have to look like this:
print('Good job, ' + myName + '! You guessed my number in ' + guessesTaken + ' guesses!') print(_('Good job, ') + myName + _('! You guessed my number in ') + guessesTaken + _(' guesses!'))
This results in three separate strings that need to be translated, as opposed to the single string needed in the string formatting approach:
print('Good job, %s! You guessed my number in %s guesses!' % (myName, guessesTaken)) print(_('Good job, %s! You guessed my number in %s guesses!') % (myName, guessesTaken))
When you've gone through the "Guess the Number" source code, it will look like this. You won't be able to run this program since the
_() function is undefined. This change is just so that the pygettext.py script can find all the strings that need to be translated.
Step 2: Extract the Strings Using pygettext.py
In the Tools/i18n of your Python installation (C:\Python34\Tools\i18n on Windows) is the pygettext.py script. While the normal gettext unix command parse C/C++ source code for translatable strings and the xgettext unix command can parse other languages, pygettext.py knows how to parse Python source code. It will find all of these strings and produce a "pot" file.
On Windows, I've run this script like so:
C:\>py -3.4 C:\Python34\Tools\i18n\pygettext.py -d guess guess.py
This creates a pot file named guess.pot. This is just a normal plaintext file that lists all the translated strings it found in the source code by search for
_() calls. You can view the guess.pot file here.
Step 3: Translate the Strings using Poedit
You could fill in the translation using a text editor, but the free Poedit software makes it easier. Download it from. Select File > New from POT/PO file... and select your guess.po file.
Poedit will ask what language you want to translate the strings to. For this example, we'll use Spanish:
Then fill in the translations. (I'm using, so it probably sounds a bit odd to actual Spanish-speakers.)
And now save the file in it's gettext-formatted folder. Saving will create the .po file (a human-readable text file identical to the original .pot file, except with the Spanish translations) and a .mo file (a machine-readable version which the
gettext module will read. These files have to be saved in a certain folder structure for
gettext to be able to find them. They look like this (say I have "es" Spanish files and "de" German files):
./guess.py ./guess.pot ./locale/es/LC_MESSAGES/guess.mo ./locale/es/LC_MESSAGES/guess.po ./locale/de/LC_MESSAGES/guess.mo ./locale/de/LC_MESSAGES/guess.po
These two-character language names like "es" for Spanish and "de" for German are called ISO 639-1 codes and are standard abbreviations for languages. You don't have to use them, but it makes sense to follow that naming standard.
Step 4: Add gettext Code to Your Program
Now that you have the .mo file that contains the translations, modify your Python script to use it. Add the following to your program:
import gettext es = gettext.translation('guess', localedir='locale', languages=['es']) es.install()
The first argument
'guess' is the "domain", which basically means the "guess" part of the guess.mo filename. The
localedir is the directory location of the locale folder you created. This can be either a relative or absolute path. The
'es' string describes the folder under the locale folder. The LC_MESSAGES folder is a standard name
The
install() method will cause all the
_() calls to return the Spanish translated string. If you want to go back to the original English, just assign a lambda function value to
_ that returns the string it was passed:
import gettext es = gettext.translation('guess', localedir='locale', languages=['es']) print(_('Hello! What is your name?')) # prints Spanish _ = lambda s: s
print(_('Hello! What is your name?')) # prints English
You can view the translation-ready source code for the "Guess the Number". If you want to run this program, download and unzip this zip file with it's locale folders and .mo file set up.
Further Reading
I am by no means an expert on I18N or gettext, and please leave comments if I'm breaking any best practices in this tutorial. Most of the time your software will not switch languages while it's running, and instead read one of the
LANGUAGE,
LC_ALL,
LC_MESSAGES, and
LANG environment variables to figure out the locale of the computer it's running on. I'll update this tutorial as I learn more.
13 thoughts on “Translate Your Python 3 Program with the gettext Module”
Gettext is really a usefull tool for Internationalization :D
To make sure that when a string is translated whith the good characters, I format my string like this:
print(_(u"My_string"))
To encode it in utf-8 (to have accent in some languages like french or spanish).
Thanks for the share =)
Oh yes. Since this blog post focuses specifically on Python 3 (where all string values are unicode), I didn't include that.
Good introduction. That being said ... when you wrote "Note that using string formatting instead of string concatenation will make your program easier to translate" ... you were unfortunately wrong. Your statement assumes that a translation in another language can be done correctly by breaking up a sentence into small phrases that can be translated individually and then put back together in the same order as it is done in English. For example, suppose you wish to translate
"You have a {color} car."
Doing it your way, would be something like
"You have a" + color + "car".
However, in French, it would be
"Vous avez une" + "automobile" + color
So, the proper way is to use a single string. This would then be:
"You have a {color} car" and "Vous avez une automobile {color}".
Sorry, I completely misunderstood your original explanation - you said it right in the first place.
If car and color are variables, using %s will be problematic.
"You have a {color} {car}" and "Vous avez une {car} {color}"
So using .format(...) should really be the recommended way :-)
Minor typo:
"This is pretty much what Python's gettext module." ->
"This is pretty much what Python's gettext module does."
Thanks!
Hi!
I've created created a library called 'verboselib':
It is intended to make I18N easier, especially for stand-alone libraries. It is still in alpha, but I use it in my libraries very often, e.g.
I hope this can help someone.
Sounds interesting. Is it backwards compatible with gettext .po and .mo files?
Nice tutorial !
But is there a way to translate differently two identical strings?
I saw something like msgctxt in gettext, but I didn't find how to use it with Python3.
I have query regarding gettext usage:
as per i see the examples everywhere, function call to _() is like this :
_("some string %s") %var
i dont understand how is the variable value getting inside the string ? is it some basic python concept which I am missing ?
That's because the _() function will return the translated string, and the translated string will have the %s in it. It's that *translated* string which has the variables inserted into it.
So if the string was 'Hello %s. How are you?' and I translated it to Spanish, the _() function would return 'Hola %s. Como estas?'. The %s after Hola would have the name inserted into it.
Hey, great tut! I've got questions about the workflow with several files and files in subfolders. Do you first have to create the pot files for each source file individually or can you generate a pot for all files in an application simultaneously?
And how do you combine them in one file? I just did that manually by copy pasting from one pot into another and it worked, but maybe there's a better way.
|
http://inventwithpython.com/blog/2014/12/20/translate-your-python-3-program-with-the-gettext-module/
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Is there a way to make JavaDoc update and add extra corresponding tags when I for example add return value to a method which was void before. This way JavaDoc will be updated as we change the code.
This related question asked 6 years ago for eclipse but no answer to that yet. As it says there too in the comments it's not about refactoring a name.
/**
* Some explanation about method.
*
* @return (I want this tag to be added automatically after I add return type "int" to method)
*/
private int ourMethod() {
int price = quantity * 5;
return price;
}
As of version 2016.2, there is no feature in IntelliJ IDEA that would add a
@return tag for a method when you change its return type.
For parameters, if you use the "Change signature" refactoring, it will add
@param tags for new parameters, delete them for parameters you remove, and update them for parameters you rename. The Rename refactoring will also rename
@param tags.
|
https://codedump.io/share/EO7Ij6bTx6lW/1/how-to-set-javadoc-to-update-automatically-when-i-change-the-method-signature-in-android-studio-or-intellij-idea
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
So I am trying to rename files based on the treepath it is in, then move the renamed files to a specific folder (based on its name).
So, for example, I have a file in path L:\a\b\c\d\e\f\file.pdf
I want to rename "file.pdf" to "d e f"
Also, all the subfolders branch off at c so I want python to scan all the documents in the subfolders contained in folder c to be renamed according to the aforementioned pattern. I.e., L:\a\b\c\x\y\z\file.pdf, file.pdf renamed to "x y z"; L:\a\b\c\q\r\s\file.pdf, file.pdf renamed to "q r s"; etc.
Then, I want to move all those files to a new, already existing folder, based on their names. So say for example for file "d e f" I would want to move to L:a\b\1\d\f\e.
I am quite new at coding at Python and I have a few pieces of the puzzle kind of worked out but I am having a lot of trouble. Here is some of my code but I don't think it will prove very useful.
For this code, I have to drop the file into CMD with the .py file. It spits out the name I want (but with extra spaces that I don't want), it doesn't actually rename the file, and is done only with the specific file I dropped into CMD when I would rather have the code look through all of the subfolders and do it automatically. Please note that my code (specifically, lines 6-7) is specific to how the folder I want is actually named, I obfuscated the name of the tree path for confidentiality reasons and it just makes it easier to understand.
from sys import argv
script, filename = argv
txt = open(filename)
print "Here's your file %r:" % filename
string = "%r" % filename
print string [94:-17]
line = string [94:-17]
line = "%r" % line
for char in '\\':
line = line.replace (char, ' ')
print line
Doing some homework, this code will search and rename all the files in the directory I want, however it does not name it the way that I want. Again, this isn't really helpful but it is what I have.
import glob,'L:\a\b\c\', r'*.pdf', r'new(%s)'
And then for actually moving the files, I don't have any code made yet - I am pretty lost. I understand that this is a lot of work, but I would greatly appreciate it if someone could help me out.
|
http://forums.devshed.com/python-programming-11/help-renaming-files-based-file-path-moving-file-folder-938974.html
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
on launch method in oopenerp 7?
Just wondering is there a way to call a method on launch of the module? meaning when i starts up?
What do you mean by 'when it starts up' ?
Do you mean when the module is installed or upgraded? If so, define an init function:
def init(self, cr):
Several OpenERP modules make use of this functionality, particular models representing reports.
like when you double click on the firefox browser icon the browser 'starts up' when you select the module from the menu bar it starts up. cant make it any clearer than that really
No. Modules don't work this way. They are loaded or 'started up' when they are installed (the init function above can run at this time) and any other functions within them are only called when records are read, created, updated or
|
https://www.odoo.com/forum/help-1/question/on-launch-method-in-oopenerp-7-46638
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Re: [soaplite] Unresolved prefix
Expand Messages
- Hi John,
> error msg:As far as I remember XML Namespaces specification doesn't allow to
> Unresolved prefix 'ns2' for attribute value 'ns2:contact'
> <return xmlns:
have empty namespace names, unless they are default names (in other
words, xmlns:foo="" is not allowed, while xmlns="" is fine).
SOAP::Lite shouldn't allow you to generate this attribute, unless you
do it manually. Such message cannot be properly processed, hence the
error message. If you associate "contact" type with some namespace,
you can then map it to the type deserializer (see
exmaples/customschema.pl for example). Hope it helps.
Best wishes, Paul.
--- john_griffin12 <jgriffin@...> wrote:
> Thatnks for everyone's help with the header problem. I've got it__________________________________________________
> working now but here's the next one.
>
> error msg:
> Unresolved prefix 'ns2' for attribute value 'ns2:contact'
>
> value returned (partial):
> <SOAP-ENV:Body>
> <ns1:getContactResponse xmlns:ns1="urn:PersonService" SOAP-
> ENV:
> <return xmlns:
> <firstName xsi:Anthony</firstName>
> <mobile xsi:
> ...
>
> I'm assuming this means that I have to write a de-serialization
> routine for this data type 'contact' or is it that the ns2
> namespace
> value may not be defined in the wsdl file. If the answer involves
> a
> new routine can some one give me a brief rundown on writing on or
> point me to a good explanation. The SOAP::Lite docs are sketchy
> under
> the serializer section.
>
> In java this would be accessed as a bean if that helps(and it
> works).
>
> Thanks in advance.
>
> J.
|
https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/1862?xm=1&m=s&l=1
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
Few people would deny that African Americans made enormous economic and political
- Lindsay James
- 2 years ago
- Views:
Transcription
1 AN EQUAL SAY AND AN EQUAL CHANCE FOR ALL Divergent Fates: The Foundations of Durable Racial Inequality, Michael K. Brown ABOUT THE AUTHOR: Michael K. Brown is Research Professor of Politics at the University of California, Santa Cruz. He is the author of Working the Street: Police Discretion and the Dilemmas of Reform (Russell Sage Foundation, 1988); Race, Money and the American Welfare State (Cornell University Press, 1999); and co-author of Whitewashing Race: The Myth of a Colorblind Society (University of California Press, 2003), which won the 1 S Annual Benjamin L. Hooks Outstanding Book Award and a Gustavus Myers Outstanding Books Award. Let no one delude himself that his work is done... While the races may stand side by side, whites stand on history s hollow. We must overcome unequal history before we overcome unequal opportunity. Lyndon Johnson 1 INTRODUCTION Few people would deny that African Americans made enormous economic and political gains from the New Deal to the twenty-first century. There is also little doubt that racial inequality remains a formidable problem. From news reports of the dismal employment rates of black men to discussion of the massive incarceration of black men and women to debates over affirmative action and black poverty, Americans are constantly reminded of the unfinished business of the civil rights era. We have not overcome our unequal history. This paradoxical history of black economic success and persistent racial inequality is usually told as the story of the success of civil rights legislation and the failure of individual African Americans to take advantage of the opportunities created with the dismantling of legal segregation in the 1960s. But it is not just a story of legal victories or cultural failure. In fact, the common supposition that durable racial inequality can be explained by individual black indolence and a dysfunctional culture is wrong; it cannot sufficiently account for MICHAEL K. BROWN 1
2 the persistence of durable racial inequality (Brown and Wellman 2005, ). We will get a better grasp of the matter if we analyze the structure of durable racial inequality and economic opportunity over time. Durable racial inequality did not begin with the great recession, though the combination of a steep and prolonged rise in unemployment and the disparate racial effects of the subprime mortgage crisis threaten recent progress. Its roots lie in the racialized competition for jobs in changing labor markets since the New Deal and in public and private policies that opened up economic opportunities for African-Americans yet, ironically, embedded the color line in the U.S. welfare state. Latinos experience similar racial barriers and gaps, but with unique characteristics that stem from the substantially different (and more recent) ways they have been integrated into the American economy. As Latinos are a large part of our economy s racial structure and future, how the overlapping but in some ways different inequalities faced by blacks and Latinos can be solved together is an important and challenging question for analysts and leaders alike. Understanding why durable racial inequality persists bears not just on whether there will be a viable black and Latino middle class or any foreseeable possibility of reducing poverty rates among people of color. It is central to the question of whether young African Americans and Latinos face sharply diminishing economic opportunities in the future or face a future of economic stagnation and declining opportunities. It is no secret that economic mobility in the United States has sharply declined over the last 30 to 40 years, particularly for men. Based on a measure of the extent to which parental earnings are passed on to children, the United States has substantially lower economic mobility than most European democracies. One recent study estimates that 42 percent of those individuals born in the bottom quintile of the income distribution will stay there; only 6 percent will make it into the top quintile of income. But for people born in the top quintile, the chance of staying there is 42 percent and the probability of significant downward mobility is very low (Hertz 2006, 9, tbl 3). This lack of mobility is strongly associated with America s very high income inequality (Krueger 2012; Corak 2013).2 There is, however, a very clear and substantial gap between black and white economic mobility. African Americans experience lower levels of upward mobility than whites and significantly more downward mobility (Isaacs 2008). Tom Hertz has demonstrated that at any given level of income, the probability of black children moving up the economic ladder is lower than that of white children. Of black children born into the bottom ten percent of the income distribution, 42 percent will end up remaining at the bottom of the income ladder as adults; only 17 percent of whites born into poor families will remain there. There is a 33 percent gap in the adult incomes of black and white children who grow up in families with similar incomes. Neither family nor personal characteristics can explain this racial mobility gap; the explanation is due likely to forces that operate outside of the family setting. 3 (Hertz 2005, 165; Hertz 2006, 13, 19) 2 DIVERGENT FATES: THE FOUNDATIONS OF DURABLE RACIAL INEQUALITY,
3 A STRUCTURAL THEORY OF RACIALIZED INEQUALITY In this report, I analyze the persistence of embedded black disadvantage as a product of a history of black disaccumulation and white opportunity hoarding and accumulation. The analytical framework is based on the theory of accumulation and disaccumulation (Brown, et al. 2005, ; Brown, M. K., et.al. 2003, 22 25). The basic idea is that racial inequalities are cumulative. They are a consequence of opportunity hoarding, which is the efforts of a social group to acquire and monopolize economic resources and privileges, and the disparate racial effects of public policies and the practices of intermediary institutions such as banks, insurance companies, and hospital and health organizations, among others.4 The idea of accumulation refers to the way that small advantages racially preferential treatment of loan applications or the disparate effects of union seniority rules compound and lead to large positive social and economic outcomes over time. Disaccumulation is the opposite and parallel idea; it refers to a process of negative accumulation. Failure to pay off credit card debt only increases the amount of the debt and leads in many cases to personal bankruptcy. Limited access to education or other government benefits or jobs with a potential for the acquisition of valuable skills leads to the disaccumulation of economic advantage and limits an individual s economic well-being and mobility. Disaccumulation may operate either to reverse economic gains or to deny groups the full benefits of economic growth and rising incomes. For example, blacks actually lost gains they made in manufacturing industries in the 1920s because of discrimination during the Great Depression. Or, to take another example, a group may gain income relative to another group but fail to close income or wealth gaps. The idea of disaccumulation does not imply that racial group competition is necessarily a zero sum process; it does mean that economic advantages and disadvantages are parceled out through opportunity hoarding and the racially disparate effects of so-called color-blind social policies. This framework illuminates the structural foundations of durable racial inequality how the fruits of white control of labor markets and residential segregation since the New Deal have been harvested in the U.S. welfare state and produced a profound imbalance in income, jobs, and opportunity between African Americans and whites. The history of durable racial inequality is a history of the relationship between these two core dimensions of our political economy, labor markets and housing markets, compounded by inequitable or exclusionary aspects of social policy. Opportunity hoarding is ubiquitous in labor markets. It refers to the ability of one group of workers to stack the deck against other groups through manipulation of the hiring process, creation of wage differentials and discriminatory job protections through overt discrimination or the impact of so-called color blind procedures such as union seniority rules (Tilly 1998, 10, 91 93). Opportunity hoarding may be passive when networks of workers or employers selectively hire only members of their own social group. It also produces vicious competition between workers for scarce resources and jobs. White monopolization of labor markets and black-white labor market competition has been a phenomenon of the MICHAEL K. BROWN 3
4 American economy since the 1830s; white claims of reverse discrimination are just the latest manifestation of this struggle. The intensity of racial labor market competition depends on the scarcity of jobs and fluctuates with the business cycle and economic dislocation. Robust economic growth and full employment reduce but do not eliminate racial labor market competition. Welfare states were invented to reduce economic security: to stave off immiseration when the economy tanks, to relieve the poverty of those left behind by economic change, and to replace workers income when they retire. All welfare states are redistributive to some degree but all are also geared toward work either by rewarding it Social Security is widely understood as an earned benefit or compensating for its absence. The U.S. welfare state is bifurcated between a universalistic and relatively generous welfare state based on Social Security and Medicare for the elderly and a more porous, segmented system of social protection for working-age citizens. Social protection for nonaged citizens is divided between social insurance for the unemployed, a variety of means-tested programs, and employee benefits, mainly health insurance. The new health care law establishes more or less universal access to health insurance for working-age citizens but does not modify the segmentation of the nonaged welfare state. I call this system truncated universalism. Since the 1930s, American social policy has been characterized by sharp distinctions of race and gender. Yet, only the 1935 Social Security Act, which excluded black farm workers and sharecroppers from coverage under the law, introduced a form of statutory discrimination. In fact, this aspect of Social Security, which has often been characterized as the New Deal s original sin, actually had few lasting effects, as black workers migrated north, entered the industrial work force, and, newly classified, enrolled in Social Security. Nevertheless, few social policies or institutional practices have been immune from racial bias and disparities. If the history of the American welfare state since the New Deal teaches us nothing else, it is that putatively race neutral or so-called color-blind policies can have racial consequences. More important than statutory racial exclusions were requirements for wage-related eligibility, which reproduce and magnify the effects of labor market discrimination. The architects of the 1935 Social Security Act distinguished between social insurance and welfare in order to reward long-term workers with a record of stable employment and exclude individuals they labeled malingerers, workers who were intermittently employed regardless of the reason. Eligibility depends on a stable work record and benefits are tied to wages. The effects of wage-related eligibility were readily apparent in the late 1930s: 42 percent of black workers who worked in occupations covered by Social Security and coughed up payroll taxes were uninsured in 1939 compared to only 20 percent of white workers (Brown, M. K. 1999, 71, 82). The malingerers and those individuals who could not work would be taken care of through means-tested cash payments. Such policies comprise a much larger part of the American welfare state than of most European welfare states, and since the 1960s means-tested policies have been a growing share of the federal budget. In 2012, means-tested cash 4 DIVERGENT FATES: THE FOUNDATIONS OF DURABLE RACIAL INEQUALITY,
5 transfers accounted for 15 percent of federal cash transfers; Social Security made up another 22 percent. Means-tested transfers increased sharply during the recent recession but the uptick started in the early 1990s. Many policymakers favor means-tested policies because they believe such policies efficiently redistribute income to those in need and cost less than policies with broader coverage. Such efficiency comes at a rather high price. Means-tested policies disproportionately benefit poor African Americans and Latinos in 2004 the average monthly participation rate in means-tested programs for blacks was 37.1 percent, for Latinos 30.1 percent and for non-hispanic whites 10.8 percent but they are inherently stigmatizing. Despite the U.S. preference for means-tested policies, the U.S. welfare state does less to redistribute income and reduce inequality than European welfare states. In 2004, taxes and transfers reduced income inequality in the United by 18 percent compared to 40 percent in Denmark and Sweden, 31 percent in Germany, and 23 percent in Great Britain (Immervoll and Richardson 2013, 20). The third way race shapes the U.S. welfare state is via federalism. Unlike many European welfare states, the U.S. welfare state began, and in many respects remains, highly decentralized. State governments controlled eligibility criteria and benefit levels for unemployment insurance and until 1972 for all the cash welfare titles of the original Social Security Act. Racial discrimination in the administration of AFDC flourished in both North and South from the 1930s to the 1960s (Lieberman 1998; Bell 1965). Things began to change in the early 1970s with the growth of food stamps and the creation of the Earned Income Tax Credit, both national means-tested programs. The welfare reform law of 1996 further centralized control over federal social policy for the poor, yet states still retain wide leverage over social welfare policy.5 Both the universal and segmented sides of truncated universalism operate to produce racial distinctions that shape how individuals are included in the welfare state and the kind of benefits they receive. One of the chief causes is racial labor market competition, which affects employment and wage levels and thus one s relationship to the welfare state. The template for this was set during the Great Depression. White workers acted to displace African Americans from their jobs, in many case grabbing Negro jobs they had previously scorned. New Deal work relief policies compounded the difficulties facing black workers. Because WPA jobs were scarce the WPA never covered more than 30 percent of the unemployed blacks faced the same competition with white workers for WPA jobs as they did for private sector jobs. Local officials and craft unions hoarded WPA jobs and excluded many black workers. There was no nondiscrimination policy that would have prevented this (Brown, M. K. 1999, 68 70, 77 86). As blacks were denied work in the private sector or access to work relief, they turned to the only available source of income: local cash relief. Relief rates for blacks in northern cities increased as unemployment declined in the late 1930s. Long ago, Gunnar Myrdal captured the essence of this process when he observed that whites coercively substituted relief for jobs, relegating African Americans to stigmatized welfare rolls (Myrdal 1944, 301). By denying access to steady employment, racial labor market competition also MICHAEL K. BROWN 5
6 affects an individual s eventual Social Security benefits and access to unemployment compensation. The problem, of course, is that this not only tilts welfare state benefits toward whites, and thus is an element of the (until very recently) economic stability of most white Americans; it also calls the legitimacy of the welfare state into question. In order to see how the relationship between labor market competition and the welfare state unfolded for blacks and whites, we examine first the history of labor markets since the 1940s and then the implications for the distribution of social welfare. OPPORTUNITY HOARDING AND RACIAL LABOR MARKET COMPETITION SINCE THE NEW DEAL The last 73 years of economic history is usually divided into two periods a prosperous economy of rising real wages and income among all classes and groups between , followed by 40 years of stagnating wages, particularly for non-college educated workers, and rising income and wealth inequality. All social classes rode up the income and employment elevator in the 1950s and 1960s, and all but the top 10 percent or so languished on an economic treadmill going nowhere after Real family income for all income quintiles grew, on the average, 2.2 to 2.5 percent in the first period, but in the second period the bottom 40 percent of the income distribution lost ground or faced stagnant incomes. Only the top 20 percent gained, yet even this group, on average, only gained about 1.2 percent. The real income gains in this period, it is well known, were grabbed by the top 1 percent of earners (Krueger 2012, figure 1). Similarly, both black and white workers prospered in the two decades after World War II and suffered from the six recessions and the deindustrialization of manufacturing after Yet neither the gains of economic growth nor the pain of recessions and economic change have been distributed equally among blacks and whites. Whether measured as family income or personal income, the wages and salaries of African Americans have lagged behind whites throughout both periods. There is no doubt that African Americans made real income gains during and after World War II. Real black median family income doubled between 1947 and 1972, as did the median income of white families. Black families gained relative to white families; the ratio of median family income increased to 59.5 percent from 51.2 percent. Yet the absolute median income gap between black and white families increased over this period by almost $5,000, a 35 percent gain for whites. In fact, blacks lost ground to whites by the late 1950s and only regained momentum during the economically booming 1960s. After 1973, even though the median family income of both whites and blacks rose by a little over one-fifth, black family income still lagged substantially behind Non-Hispanic whites. The ratio was unchanged: 57.4 percent in 1973 and 58 percent in The absolute income gap also widened by another $5,000, and over the entire 73 year period this gap grew by 115 percent. Even so, these data overstate black economic gains over the last 35 years since they exclude African Americans who were imprisoned during the incarceration boom (Pettit 2012). 6 DIVERGENT FATES: THE FOUNDATIONS OF DURABLE RACIAL INEQUALITY,
7 At the same time that blacks have made income gains, black employment relative to white workers has decreased. Black men of all ages and education levels experience more unemployment. The black unemployment rate is twice that of white workers, a ratio that has not changed since the early 1950s. Between 1948 and 1969, black unemployment averaged 8.4 percent compared to a rate of 3.9 percent for white workers. In 7 of these 22 years, the black unemployment rate exceeded 10 percent. Black workers faced even worse employment prospects during the 39 years after 1973: black unemployment averaged 12.3 percent; white unemployment averaged just 5.7 percent. During this period black unemployment was below 10 percent in only 7 of 39 years. Black workers have experienced what amounts to Depression-era unemployment for most of the last 4 decades (see Figure 1).6 Large numbers of black men have also dropped out of the labor force since 1940, a development that precedes the deindustrialization of the American economy during the 1970s and 1980s (Katz, et al. 2005). Figure 1. Male Unemployment Rates by Race (16 Years and Older) White Black Latino % Unemployed 0 College-educated black workers, like their white counterparts, do have lower unemployment rates than high school dropouts or workers with only a high school education. But their unemployment rates are almost double those of white college-educated workers. Figure 2 shows that the black unemployment rate is higher than the white rate at all educational levels, and in the 1970s, the 1980s, and the recent recession the unemployment rate for black college-educated workers was double that of college-educated whites. Indeed, in the recent Great Recession, white high school graduates with some college had lower unemployment rates than black college graduates; and as the economy improved after bottoming out in 2009, white workers at all levels of education found jobs faster than black workers. The ratio of black unemployment to white unemployment increased between 2009 and 2011, a trend that is identical to the experience of black workers in the late 1930s (Brown 1999, 84-87). MICHAEL K. BROWN 7
8 Figure 2. Black-White Male Unemployment Ratios by Years of Education <12 Years 12 Years Years 16+ Years Chinhui Juhn (2000, p 93,); U.S. Bureau of Labor Statistics, 2007, 2009, 2011 So, looking just at incomes and employment, the picture is mixed: the emergence of a black middle class with consistent income gains but devastatingly high black unemployment most of the time and income gaps for all African Americans that have changed relatively little in the last 7 decades. This outcome is best understood in light of changes in the structure of racial labor market competition, the oscillation between those moments when whites had the upper hand in labor markets and when government policies circumscribed them. Black economic gains were concentrated during three periods in which economic growth was robust and unemployment rates were low: the late 1940s; the 1960s; and the 1990s. African Americans made major income and occupational gains as they shifted from plantation to factory and entered the industrial work force in the 1940s. A second shift occurred in the 1960s and early 1970s when blacks moved up the occupational ladder and into middle class and professional occupations in a growing public sector. But they lost ground in the 1950s and the 1980s as racial labor market competition intensified only to gain some of it back in the economic boom of the 1990s. In most cases, when an African American man or woman moved from sharecropper to factory worker, they received an immediate and significant wage boost. Wartime policies that compressed wages and union success in bargaining for higher wages and benefits augmented these monetary gains. As blacks were recruited by CIO unions, they benefited. Yet white control of jobs and occupational ladders, often abetted by unions, sharply limited the economic mobility of these workers. Confined to unskilled, dirty factory jobs, migrating blacks found themselves at a dead end because of segregated seniority lists and job ladders that blocked advancement and entry into skilled jobs. Discrimination was overt in southern factories and, although more subtle, equally potent in northern factories (Brown, M. K., et.al. 2003, 70 71). 8 DIVERGENT FATES: THE FOUNDATIONS OF DURABLE RACIAL INEQUALITY,
9 Those blacks who reached the Promised Land after the war confronted a more hostile economy and faced widespread discrimination. Migrating black sharecroppers were three times as likely to be unemployed in the north as white migrants. In fact, black residents of northern cities were more likely to be unemployed than white migrants (Sorkin 1969, 272). Black economic gains were further eroded as economic growth slowed in the late 1950s and technological change in manufacturing firms reduced demand for workers. As these factories replaced workers with machinery, black workers lost out because the jobs in which Negroes were concentrated were eliminated and the displaced Negroes were not permitted to bid into or exercise their seniority in all-white departments. (Northrup 1970, 26; Sugrue 1996, 144). Notably, the sharpest drop in the labor force participation rate of black workers relative to white workers during the last 73 years occurred in the 1950s. At the same time, blacks faced discrimination in openings for skilled craftsmen and white-collar jobs, both of which were expanding. The proportion of blacks employed as salaried workers declined from 4.6 to 3.4 percent between 1940 and 1960 (Northrup 1970, 26). Educated blacks in the 1950s fared no better and their unemployment rates were typically higher than those of blacks with less than 12 years of schooling. White workers, particularly white veterans, made the real income and occupational gains of the 1950s. Powered by the G.I. Bill, one-third of the veterans who received educational and readjustment benefits climbed out of working class jobs into the ranks of managers and professionals. White men were the beneficiaries of these jobs. Although the Veterans Administration (VA) distributed GI education and readjustment benefits equally between blacks and whites, African American veterans, many of whom lived in the south, could use college subsidies only at segregated, overcrowded colleges. They were substantially less likely to be enrolled in college under the GI bill. And even when they could take advantage of the GI readjustment benefits, they faced rampant discrimination in labor markets throughout the country (Brown, M. K. 1999, , ; Katznelson 2005, ). On the eve of the civil rights revolution, white control of labor markets allowed white workers to weather the economic changes of the 1950s and advance into skilled blue collar jobs and white collar or managerial positions. Blacks gained manufacturing jobs during the war, but job ceilings limited the upward mobility for most black workers and exposed working-class blacks to the cutting edge of technological change. In the 1960s, economic doors opened up for African Americans as a result of strong economic growth, the implementation of Great Society programs that created new public sector jobs, and affirmative action policies. Blacks made substantial gains in public sector jobs mainly in state and local government. By 1970 over half of black college-educated men and three-quarters of black college-educated women worked in public sector jobs (Carnoy 1994, ). Affirmative action policies shifted the demand for black workers in the private sector, benefiting both educated middle-class blacks and low-income blacks. Jonathan Leonard estimates that 7 percent of black employment gains in manufacturing and one-third of occupational gains in the 1970s were due to affirmative action policies (Leon- MICHAEL K. BROWN 9
10 ard 1990, ; Leonard 1984, ). Vigorous federal enforcement of Title VII of the 1964 Civil Rights Act desegregated many southern factories and resulted in real wage gains for African Americans (Heckman 1990). In both the north and the south, federal employment policies cracked open the job ceilings for skilled blue collar jobs and white collar managerial and professional jobs that blocked African American economic mobility in the 1950s. However, black economic gains were undermined in the 1980s as deindustrialization hollowed out America s manufacturing industries and the Reagan administration rolled back affirmative action policies, eliminated federal job and job training programs, and sharply reduced federal grants-in-aid to state and local governments. Three deep recessions within ten years amplified these economic and policy changes and shifted the structure of labor market competition toward white workers. Between 1973 and 1983 black unemployment averaged 14.2 percent; white employment averaged 6.4 percent. The bottom literally fell out for those black workers displaced from manufacturing largely because they were concentrated in low-skill, dispensable jobs. Many of them shifted into low-wage service sector jobs, but white workers displaced from factories were more likely to end up in better paying white collar jobs. The proportion of black and Latino workers in low-wage jobs increased from 43.5 percent to 46.4 percent over the 1980s while the proportion of white workers in these jobs remained about 30 percent during the decade. Although the number of African Americans in high wage professional and managerial jobs increased by about 100,000, the number of whites in these jobs exploded, increasing from 2.9 million to 4.6 million. Displaced black manufacturing workers experienced downward mobility relative to white workers (Carnoy 1994, 95 99, tbl 5.3). College educated blacks fared no better. Their wages dropped relative to whites and their unemployment rates were almost three times those of college-educated whites by the late 1970s (in the late 1960s, by comparison, the unemployment rates of these two groups were identical see chart 2). The evidence clearly indicates that displaced black workers lost the job competition set in motion by deindustrialization and recession. Downwardly mobile white workers in the 1980s acted just like unemployed white workers in the 1930s: they played the race card to keep or acquire good jobs. (Brown, M. K., et.al. 2003, 84; Darity and Myers. 1998, 51; Wellman 1997, ). Neither education nor wage gaps explain the reversal of black economic gains during this period. Blacks made substantial educational gains during the 1970s, erasing the gap in secondary education. And even though all low-income workers experienced declining wages, the difference between the wages of young black and white family heads actually increased. Those blacks who lost blue collar manufacturing jobs often ended up in sales jobs but took a 13 percent pay cut; white workers displaced from manufacturing jobs typically landed well-paying sales or white collar jobs, according to William Darity and Samuel Meyers, yielding a 36 percent pay increase (Darity and Meyers,1998, 47-48, 65-67). These changes, which coincided with the de facto end of affirmative action and the Reagan-era budget cuts, resulted in declining incomes for many black families. 10 DIVERGENT FATES: THE FOUNDATIONS OF DURABLE RACIAL INEQUALITY,
11 One sees the results of these changes in the distribution of family income. Between 1980 and 1992, all white non-hispanic families except those in the bottom quintile gained real income; those white families in the bottom quintile saw their income decline by 5 percent. The experience of black and Latino families was very different. All black families in the bottom 60 percent lost income during the Reagan years, and the black poor, those in the bottom quintile, experienced a 25 percent decline in real income. Among families in the second quintile, black families lost 10 percent of their income but white families gained. The experience of Latino families was similar to that of African Americans, although the poorest Latino families lost only half as much as the black poor (see Figure 3). Figure 3. Percent Change in Average Income By Quintile % Change Lowest Second Third Fourth Fifth Top 5% White Families -5.04% 1.87% 6.14% 10.54% 22.02% 35.45% Black Families % -9.34% -0.26% 5.00% 17.31% 30.20% Latino Families % -6.71% -3.85% 0.37% 11.01% 17.83% % Change Lowest Second Third Fourth Fifth Top 5% White Families 15.33% 13.07% 14.33% 16.51% 32.25% 49.59% Black Families 54.68% 49.30% 33.57% 24.50% 27.65% 34.97% Latino Families 27.31% 25.43% 21.84% 19.05% 33.35% 57.66% % Change Lowest Second Third Fourth Fifth Top 5% White Families % % -7.54% -4.06% -3.43% -5.46% Black Families % % -7.40% -3.44% 2.19% 6.44% Latino Families % % % -7.56% -6.76% % 2011 Dollars U.S. Bureau of Census, Historical Statistics Latinos, like African Americans, face job discrimination and residential segregation. But their experience over the last four decades is different. Latinos are more likely to be unemployed than white workers but less likely than black workers. Their unemployment rates averaged 9.2 percent since 1973 and the rate was below 10 percent for 22 years during this period. The black unemployment rate, recall, was below 10 percent for only 7 years. Yet the median real wage and salary income of Latino men averaged 90 percent of that of black men since And over that period, black median wages increased by 18 percent while Latino wages declined by 5 percent. However, median wages of both lagged substantially behind the wages of non-hispanic whites. MICHAEL K. BROWN 11
12 One reason for these differences is that Latinos probably faced less job discrimination than blacks, but when blacks got jobs, those jobs tended to pay better. In a study of labor markets in five big cities, Roger Waldinger found that blacks with a high school degree were less likely to hold jobs than the least skilled white men, but Latinos with high school degrees had employment rates comparable to similar white workers (Waldinger 2001, 95). One might say that blacks face racial barriers to jobs while Latinos, particularly recent arrivals, are more likely to be constrained by education and skill deficits. The other difference is that Latinos tend to be concentrated in low-wage jobs in construction, hotels and restaurants, and agriculture. Many African Americans, on the other hand, tend to work in government jobs and the health sector, where wages are higher. Historically, Latino men faced less wage discrimination than African American men and Latino poverty rates were lower (Carnoy 1994, 118). This changed in the 1990s and a major reason is that the 1986 Immigration Reform and Control Act actually stimulated discrimination against Latinos. Douglas Massey concludes that the IRCA s employer sanctions radically restructured the market for unskilled labor... increasing discrimination on the basis of legal status, exacerbating discrimination on the basis of ethnicity, and pushing employers toward labor market subcontracting... (Massey 2007, 145). Wage and salary income of Latino men relative to African American men sharply declined in the 1990s, dropping from parity in 1987 to 80 percent by the end of the century. The Latino poverty rate also sharply increased over the decade. Black workers and their families made up some of the economic ground they lost in the 1980s during the Clinton-era economic boom, but much of it could not be recovered. The economic gains of the 1940s and 1960s were undercut when deindustrialization and deep recessions in the 1950s and 1970s intensified racial labor market competition; and now the Great Recession has undercut even the Clinton-era economic gains. One of the lessons of this history is that simply investing in education and allowing market outcomes to prevail will be insufficient to overcome durable racial inequality. THE ECONOMIC AND POLITICAL CONSEQUENCES OF A RACIALLY STRATIFIED WELFARE STATE The welfare state compensates for the losses of racial labor market competition to some degree. But federal social policy since the New Deal has also institutionalized and augmented white advantage, even as the legitimacy of the social safety net for poor and working class citizens was called into question. Income transfers mitigate economic security and African Americans and Latinos would be much worse off without such support. Government cash and non-cash transfers like food stamps pack a bigger punch in reducing inequality than taxes. Using a standard measure of income dispersion, the gini index (the higher the value, the greater degree of income inequality), U.S. Census Bureau studies show that taxes lower inequality in market income 12 DIVERGENT FATES: THE FOUNDATIONS OF DURABLE RACIAL INEQUALITY,
13 (income minus government transfers) very little, from.502 to.492. Adding in all transfers lowers it by 20 percent, to.405. Cash and non-cash transfers are far more important to black and Latino households than to white households. The net effect of all taxes and transfers raises black family income to 66 percent of white income, from 61 percent. Latino family income increased to 77 percent of white income, from 72 percent. Transfers raised black market income $5,353 or 18 percent and Latino income by $4,386 or 12 percent. White households incomes increased by $4,264, 9 percent.7 These are not negligible effects, but neither do they tell the whole story. Even though government transfers are more important to black and Latino households well-being, transfers are less effective in lowering their poverty rates than those of white households. This was the case 33 years ago and it remains so today (Danziger 1983, 66). Figure 4 shows the post-transfer poverty rates for White Non-Hispanic, Black, and Latino households. Non-means-tested transfers, mainly Social Security, reduce poverty among white seniors by 82 percent but only 56 percent for black seniors and 57 percent for Latinos. Adding in means-tested transfers and the value of noncash transfers such as food stamps and health care reduces poverty among white seniors by another 4 percent but by an additional 14 percent in the case of black and Latino seniors. Among working age households, those persons between the age of 20 and 64 years, non-means-tested transfers reduce white poverty by one-third but by only a little over one-fifth for blacks and Latinos. Once we account for means-tested transfers and non-cash benefits, the reduction in poverty rates is about the same for white and black working-age households, but Latino working-age households do not fare as well. Figure 4. Percent Change in Post Transfer Poverty Rates White Years 65+ Years Black Latino White Black Latino Non-Means-Tested Transfers Means-Tested Transfers Cash plus Non-Cash Transfers U.S. Bureau of Census, Effects of Benefits and Taxes on Income and Poverty, 2006 The lower effectiveness of transfers in reducing poverty among black and Latino elderly, compared to whites, is partly a consequence of wage-related eligibility, which reproduces the effects of racial labor market competition and other long-term effects of durable racial inequality.8 Although Social Security is redistributive, black benefits lag behind those of whites. Social Security clearly raises black and Latino income relative to whites, on average to a ratio of 75 to 84 percent; and rates of return (the ratio of Social Security taxes to benefits) are estimated to be about equal or marginally higher for blacks. Yet Stuerele, Carasso and Cohen conclude that less educated, lower-income, and nonwhite groups benefit little MICHAEL K. BROWN 13
14 or not at all from redistribution in the old age and survivors insurance (OASI) part of Social Security, a conclusion echoed in other studies (Steuerle, Carasso and Cohen 2004; Ozawa and Kim 2001, 10; Favreault and Mermin 2008). Social Security compensates for low wages to some degree but does not override a long history of wage and occupational discrimination. The other reason blacks have lower lifetime benefits is because they die sooner than whites. The poor health of black retirees stems from a long history of limited access to adequate health care and poor treatment for cancer and cardiovascular disease (Brown, M. K., et.al. 2003, 45 48). Blacks are less likely than whites to have health insurance; in 2011 they accounted for 20 percent of the uninsured (8.2 million people). Latinos accounted for 30 percent of uninsured individuals (U.S. Bureau of the Census 2012, 22, Table 7). [These uninsured rates will surely diminish once the Affordable Care Act is implemented]. In addition, black neighborhoods have been disproportionately affected by urban hospital closings and many private nursing homes remained segregated long after the 1964 Civil Rights Act prohibited such discrimination (Smith 1999, 176, , 267). In general, there is little difference today in means-tested payments between working age blacks, whites, and Latinos. In some cases, those payments are slightly higher for blacks and Latinos. One reason these payments do less to raise working-age blacks out of poverty is that they start with much lower incomes to begin with and have less access to other forms of support such as Social Security survivor s benefits or, more importantly, unemployment compensation. Despite much higher rates of unemployment, African Americans are less likely to receive unemployment benefits than non-hispanic whites. This is has been the case for a long time and it is due to the stringent wage-related eligibility provisions for unemployment insurance and the way states implement these rules. In the early 1990s, unemployed white workers were 40 percent more likely to receive unemployment benefits than blacks or Latinos; black workers, on the other hand, were 20 percent less likely than other unemployed workers to receive benefits (Michaelides and Mueser 2012, 37 40). Recent analysis of the Great Recession indicates that only 24 percent of unemployed blacks obtained unemployment benefits compared to 33 percent of unemployed whites. These differences persist even after accounting for education and other factors affecting unemployment. For example, among high school drop outs, 25 percent of whites received benefits compared to 12.5 percent of blacks (Nichols and Simms 2012). Like unemployment insurance, states determined whether African Americans and Latinos received cash welfare benefits and how much they received. There is far less discrimination today, partly because SSI, food stamps, and the Earned Income Tax Credit, all national policies, have replaced the old cash welfare programs and established universal benefit and eligibility standards. And, when it is fully implemented, the Affordable Care Act will diminish state control over Medicaid eligibility and benefits. But states still retain significant control over Temporary Assistance to Needy Families (TANF), and the program has been implemented in racially stratified ways, with outcomes reminiscent of the discriminatory welfare regimes of the 1950s and 1960s in the north and the south. 14 DIVERGENT FATES: THE FOUNDATIONS OF DURABLE RACIAL INEQUALITY,
15 The 1996 welfare reform law caps federal expenditures and distributes the money to the states through block grants. This fiscal scheme allows states considerable flexibility in spending the money, but it is tied to tough federal requirements for work and behavioral change. State welfare offices were transformed from cash-dispensing operations to employment centers and given the authority to sanction individuals who do not live up to the requirements for work effort. TANF is ostensibly race-neutral but it has been implemented in ways that reinforce racial stereotypes and racial inequalities. Work-related sanctions may deny benefits to every member of a family receiving benefits or only the adult. What we find from recent studies is that black recipients are more likely to be sanctioned than whites, and when they are, the sanctions are more severe. TANF, Sanford Schram and his co-authors write, is carried out today in ways that allow preexisting racial stereotypes and race-based disadvantages to produce large cumulative disadvantages. (Schram, Soss, Fording Richard C., et al. 2009, 413,415; Soss, Fording and Schram 2011). In addition to advantages white workers obtain through the welfare state, most white families have a cushion that black and Latino families lack: household wealth including home equity, cash savings accounts, and investments. Wealth is not just a cushion during bad economic times; it also helps people climb economic ladders. Wealth explains differences in college graduation rates and the ability of parents to pass on their occupational status, as recent studies of economic mobility have shown (Conley 1999, 72 73; Oliver and Shapiro 1997, ). Median wealth of white families was 10 times that of black families in 2009, at the bottom of the recession, and whites in all income quintiles have more wealth than blacks. The racial disparity in median wealth was much larger in 1984; then it was 15 times as much (Shapiro, Meschede and Osoro 2013, 2). Even though wealth accumulation for both black and white families rose over the last 30 years on the back of the stock market boom and the relentless rise in the price of houses, the racial wealth gap widened. Despite the enormous loss of wealth during the Great Recession among all income classes, most white families retained sufficient wealth to ride out the storm. The Urban Institute calculates that the wealth of Latino families suffered the largest drop, 40 percent; black families wealth declined by 31 percent; and white families lost 11 percent (McKernan, et al. 2013, 2 3). The roots of the current disparity lie in the history of federal housing and veterans policies after There is no need to go back to slavery to explain this phenomenon. In addition to the federal readjustment benefits distributed through the G.I. Bill, the Veterans Administration offered returning World War II and Korean veterans subsidized mortgage loans. Unlike the readjustment allowances, which were more or less distributed equitably between black and white veterans, VA mortgage loans were 2.5 times more likely to go to whites than blacks. The Veterans administration along with the Federal Housing Agency (FHA) funded one-third of all mortgage loans in the 1950s. Together these two agencies underwrote 3 percent of black mortgage loans and 42 percent of white loans. FHA redlining policies, which banks adhered to, caused this disparity, and even if a black family got a federally subsidized loan they could purchase housing only in a segregated neighborhood. MICHAEL K. BROWN 15
16 As a result of FHA redlining policies, private investors pulled money out of black neighborhoods, leading to a downward spiral in housing prices. White flight further drove down the price of black-owned homes. It is no surprise that the amount of equity in white-owned homes is 1.5 times more than the equity in black-owned homes (Brown, M. K., et.al. 2003, 77 79). For many of the same reasons, Latino wealth has lagged behind that of whites. White World War II and Korean War veterans acquired enormous economic gains from the G.I. Bill and their control of labor markets. A mid-1950s study estimated that onefifth of veterans receiving non-service connected pensions had a net worth over $10,000 (about $62,500 in 2009 dollars), no minor sum at the time. The President s Commission on Veterans Pensions (known as the Bradley Commission) concluded in its massive study of the G.I. Bill that the present position of World War II veterans suggests that, as a group, their earnings and progress in later life will permit them to maintain their present advantage. This will mean.... that most veterans will acquire more savings and qualify for larger retirement pensions [under Social Security] than non-veterans and black veterans, we should add (Brown, M. K. 1999, 183; The President s Commission on Veterans Pensions 1956, 145). FACING THE FUTURE: RACE, EQUALITY OF OPPORTUNITY AND THE CLASS DIVIDE As the evidence gathered here attests, racially-biased housing policies and other aspects of the U.S. welfare state have magnified racial disparities generated through white control of labor markets. Yet, many Americans are completely oblivious to these discriminatory realities and they view racial differences in a historical vacuum, relying instead on cultural stereotypes that are fed by the media. Whites by and large attribute their success to individual efforts, imagining they raised themselves by their bootstraps and earned what they have received. Many whites also believe that the welfare state coddles blacks; they see only welfare mothers or affirmative action babies, believing that African Americans have connived with the federal government to obtain government benefits they do not deserve and have not worked for (Kinder and Mendelberg 2000, 61). The black poor are vilified, cast as lazy, promiscuous freeloaders undeserving of help (Gilens 1999, 67 69). Food Stamps, the Earned Income Tax Credit, Medicaid, cash welfare payments, the very safety net that poor blacks and, I need to add, poor whites depend on are racially stigmatized as a result. A majority of white Americans have consistently opposed increasing spending for the means-tested safety net and strongly support time limits for welfare benefits. These attitudes are correlated with measures of racial resentment and ethnocentrism whites who display racially hostile attitudes toward black Americans strongly favor limiting spending and support eviscerating the safety net. If these white citizens associate welfare with lazy blacks, they associate Social Security and Medicare with hard-working whites. It is a program for whites, not blacks, and thus it is no surprise that racially resentful or ethnocentric whites strongly desire increased spending for these programs (Kinder and Kam 2009, Table 9.1, 9.2; Winter 2006). 16 DIVERGENT FATES: THE FOUNDATIONS OF DURABLE RACIAL INEQUALITY,
17 Willard Townsend, a prominent African American union official, warned of this possibility in the 1940s. In an speech to faculty and students at Fisk University, Townsend pointed out that, You can t have unencumbered and prosperous white workers and unemployed black workers for, if you let that happen, the white worker will have to carry the black worker on his back through relief or the dole (Townsend 1974, 524). The public policies that advantaged whites disadvantaged blacks, leaving them with limited economic opportunities. By juxtaposing white independence and hard work with black indolence and failure, many white Americans delude themselves about their own independence and obscure the advantages they reaped from the federal social policies of the last 73 years. One question in contemplating a new economic paradigm for the future is whether this knot can be untied. Compounded by these stereotypes, today, durable racial inequality is nested in a widening gyre of class and income inequality. At the same time, income and wealth inequality have widened, the wages of middle and low-income earners have either stagnated or declined. More important, the new inequality coincides with a shift in class structure and changes in the organization of work that reinforces class and racial inequality. Even before the Great Recession, labor markets were deeply insecure; layoffs, displacement, outsourcing, and part-time temporary work no longer affect only blue collar workers but also white collar workers, even managers. Now, absent higher levels of economic growth, it is likely that very large numbers of black and white workers will not reenter the labor force any time soon, if ever. Underlying these trends is the polarization of the labor market, an economy that is increasingly divided into high-wage and low-wage jobs. The bottom has dropped out for middle income, stable jobs. These developments have pounded both black and white workers, eclipsing possibilities for upward mobility, though black workers have arguably taken the harder hit. The danger is that growing class and wealth inequality and polarization of the labor market will intensify structural racism, leaving not just the black poor but much of the black middle class living precarious lives of economic instability. White opportunity hoarding is alive and well, and a shrinking public sector means that one of the key avenues for blacks into middle class stability is disappearing (Ditomaso 2012). Higher rates of economic growth and productivity, likely depending on significant public investments in education, research, and infrastructure, will be needed to forestall such an outcome. Whether that is possible is an open question. It is just as likely that the class and racial polarization of this era will produce economic stagnation and intensified racial conflict over public policies. In these circumstances, the United States faces a work-welfare state dilemma: absent the creation of new well-paying middle class jobs and new avenues of upward mobility, the only acceptable alternative is to expand the social safety net in order to support those men and women who are unemployed, work in dead-end low-wage jobs, or are unable to work. We are already coping with prolonged economic stagnation and unemployment through an expanded safety net the growth of food stamps and the Earned Income Tax Credit are examples. But expanding the social safety net may not be sustainable over the long term, because MICHAEL K. BROWN 17
18 it is racially stigmatized and white opportunity hoarding persists. To some extent, we these dynamics playing out in the manifold efforts to overturn or weaken Affordable Care Act. Dealing with durable racial inequality also requires extensive social policy reforms. The most important challenge is stopping the incarceration boom and fixing the problems it has caused in black communities. Extraordinary numbers of black men and women have been imprisoned over the last thirty years. Yet imprisonment is only one part of the disciplinary policies encompassing the lives of African Americans. For instance, the U.S. Department of Education recently reported that black school children account for 39 percent of all expulsions but make up only 18 percent of students. As noted earlier, the use of sanctions and punishment under TANF fall heavily on black women. Since these policies diminish the resources and future opportunities of blacks, they are a form of disaccumulation. One scholar estimates that incarceration reduces the annual income of black men by 37 percent (Western 2006, 119). At the same time, the vast movement of prisoners out of and back into low-income black neighborhoods erodes community stability and diminishes local economic opportunities. None of these policies, however, have lifted the burden of violence from those communities: after nearly forty years of mass incarceration, the average life expectancy of black men in poor neighborhoods in Los Angeles nevertheless fell by five years because of homicides.9 Nor does the black middle class escape the consequences as it is their sons and daughters who are targeted by police for marijuana use or stop and frisk policies, not just black residents of poor neighborhoods. Arrests for possession of marijuana exploded in the first decade of this century. Blacks are almost four times more likely to be arrested for possession of marijuana and this disparity persists regardless of household income. In fact, the racial disparity is greater in counties with high median incomes than in the poorest counties (ACLU 2013, 17). Achieving high rates of economic growth, ending the polarization of the job market with new investments, and unwinding the disciplinary state will not be possible without confronting the racial divide that remains at the core of American politics. Prior to the civil rights movement, the public saw no relationship between civil rights policies and economic and social policies taxes, regulation, the social safety net, etc. After the 1960s, any distinction between civil rights and racial policies and economic and social policies disappeared. In national surveys before the civil rights movement there is no relationship between voters opinions of racial and economic policies the correlation was 0.03; after 1965 the correlation was.68 (Kellstedt 2003, 78, 80 tbl 3.3). Clearly, debates over scope of government s role in the economy and the shape of the welfare state are also, in many ways, debates about race and the relationship between black, white, and Latino Americans. There is a parallel with the New Deal that bears mentioning. The New Dealers put their faith in a class coalition and class-based agenda, and assumed that raising the incomes of all citizens would ameliorate racial prejudice and put the Nation on the road to a greater degree of racial equality. Yet the New Deal inserted racial distinctions into many of its policies and 18 DIVERGENT FATES: THE FOUNDATIONS OF DURABLE RACIAL INEQUALITY,
19 failed to confront racial discrimination. We are living in a period of massive economic fear and dislocation, much like the New Dealers faced. But we do have the advantage of history and we need not make the same mistakes. This time around any new economic paradigm must confront the legacies of durable racial inequality. I am indebted to David T. Wellman; the research reported here derives in part from our long collaboration. Also, I would like to acknowledge the contributions to this research of my co-authors of Whitewashing Race: The Myth of a Color-Blind Society. 1. Quoted in Randall B. Woods, LBJ: Architect of American Ambition (New York: The Free Press, 2006). 2. Studies of economic mobility use the intergenerational elasticity of earnings as a measure of upward (or downward) mobility. This measure estimates the change in a child s earnings for every 1 percent change in parental earnings. Corak estimates that the intergenerational elasticity of earnings in the U.S. is.47, which is exceeded only by taly and the United Kingdom, compared to.15 in Denmark. This measure is correlated with the gini coefficient of income inequality. See Corak 2012, Hertz shows that race explains 17 percent of the intergenerational income elasticity independent of a host of personal characteristics, e.g. parents education and occupation among others. Bowles and Gintis estimate the independent effect of race to be 22 percent (Bowles and Gintis 2002, 22 23). The important question is why race has such a powerful effect on income mobility. 4. Michael Katz and his colleagues use the idea of opportunity structures, defined as a network of sieves in which individuals are filtered in or filtered out, to study the modern history of racial inequality (Katz, Stern and Fader 2005). This approach complements the theory of accumulation and disaccumulation and produces similar results. The advantage of the theory of accumulation and disaccumulation, however, is that it assumes actors have agency, something that the idea of opportunity structures lacks. 5. The remaining bastion of state influence outside of unemployment insurance is Medicaid. But that will change once the Affordable Care Act, which establishes universal eligibility and benefit criteria, is implemented. 6. These data are based on unemployment rates for individuals 16 years and older. The picture looks marginally better if one uses data on rates for people 20 years and older but not very much better. Black unemployment rates are still above 10 percent in 24 out of 39 years. 7. These data are drawn from the 2006 census report on The Effects of Benefits and Taxes on Income and Poverty, at accessed on July 31, For a discussion of the methods used in producing these estimates see (U.S. Bureau of the Census 1993). 8. Wage-related eligibility does not exclude individuals who move in and out of the labor force over a lifetime. All that is required is 40 quarters of work. But benefits are calculated as the av-erage monthly earnings of the highest 35 years of a worker s earnings, and those years in which a worker is out of the labor force are counted as zero. Moreover, even though replacement rates are progressive, absolute benefits are reflect taxes paid (this is the annuity side of Social Security). A recent study concluded that using the most inclusive concept of income that accounts for the earnings potential of both head and spouse, the Social Security system does not appear to reduce inequality in any meaningful way See Brown, Coronado and Fullerton 2009, Personal communication to author from Elliot Currie. MICHAEL K. BROWN 19
20 BIBLIOGRAPHY American Civil Liberties Union The War on Marijuana in Black and White. New York: ACLU. Bell, W Aid to Dependent Children. New York: Columbia University Press. Bowles, S., and H. Gintis The Inheritance of Inequality. The Journal of Economic Perspectives 16(3):3 30. Brown, J. R., J. L. Coronado, and D. Fullerton Is Social Security Part of the Social Safety Net? Working Paper NBER Working Paper Series. Cambridge, MA: National Bureau of Economic Research. Brown, M. K., et.al Whitewashing Race: The Myth of a Color Blind Society. Berkeley, CA: University of California Press. Brown, M. K Race, Money, and the American Welfare State. Ithaca, NY: Cornell University Press. Brown, M. K., and D. T. Wellman Embedding the Color Line: The Accumulation of Racial Advantage and the Disaccumulation of Opportunity in Post-Civil Rights America. Du Bois Review 2: Carnoy, M Faded Dreams: The Politics and Economics of Race in America. New York: Cambridge University Press. Conley, D Being Black, Living in the Red: Race, Wealth, and Social Policy in America. Berkeley, CA: University of California Press. Corak, M How to Slide Down the Great Gatsby Curve : Inequality, Life Chances, and Public Policy in the United States. Washington, D.C.: Center for American Progress Income Inequality, Equality of Opportunity, and Intergenerational Mobility. Journal of Economic Perspectives. Danziger, S Budget Cuts as Welfare Reform. American Economic Review 73(2): Darity, W., Jr., and S. L. Myers., Jr Persistent Disparity: Race and Economic Inequality in the United States since Northampton, MA.: Edward Elgar Publishing Company. DiTomaso, N The American Non-Dilemma: Racial Inequality without Racism. New York: Russell Sage Foundation. Favreault, M. M., and G. B. Mermin Discussion Paper No. 30 in Are There Opportunities to Increase Social Security Progressivity despite Underfunding. Discussion Paper No. 30. Tax Policy Center Conference: Race, Ethnicity, Poverty and the Tax-Transfer System on Racial Inequality and the Tax System. Washington, D.C. Gilens, M Why Americans Hate Welfare. Chicago, Ill: University of Chicago Press. Heckman, J. J The Central Role of the South in Accounting for the Economic Progress of Black Americans. The American Economic Review 80 (May): Hertz, T Rags, Riches, and Race: The Intergenerational Economic Mobility of Black and White Families in the United States. In Unequal Chances: Family Background and Economic Success, ed. S. Bowles, H. Gintis, and M. O. Groves, New York & Princeton, N.J.: Russell Sage Foundation & Princeton University Press Understanding Mobility in America. Washington, D.C.: Center for American Progress. Immervoll, H., and L. Richardson Redistribution Policy in Europe and the United States: Is the Great Recession a Game Changer for Working Age Families? OECD Social, Employment, and Migration Working Papers No OECD Publishing. Isaacs, J. B Economic Mobility of Black and White Families. Washington, D.C.: The Brookings Institution. Katz, M. B., M. J. Stern, and J. J. Fader The New African American Inequality. Journal of American History 92(1), June: Katznelson, I When Affirmative Action was White: An Untold History of Racial Inequality in Twentieth-Century America. New York: W.W. Norton & Company. Kellstedt, P. M The Mass Media and the Dynamics of American Racial Attitudes. New York: Cambridge University Press. Kinder, D. R. and C. D. Kam Us Against Them: Ethnocentric Foundations of American Opinion. Chicago: University of Chicago Press. Kinder, D. R. and T. Mendelberg Individualism Reconsidered: Principles and Prejudice in Contemporary American Opinion. In Racialized Politics: The Debate about Racism in America, eds. D. O. Sears, J. Sidanius, and L. Bobo, Chicago, IL: University of Chicago Press. 20 DIVERGENT FATES: THE FOUNDATIONS OF DURABLE RACIAL INEQUALITY,
Income and wealth inequality
Income and wealth inequality Income and wealth inequality The end of industrialization and Reaganomics Income inequality Wealth inequality Poverty Income and wealth inequality The end of industrialization
APPENDIX A The PSID Sample and Family Income
1 APPENDIX A The PSID Sample and Family Income The sample for this analysis is 2,367 individuals who were between the ages of 0 and 18 in 1968 and have been tracked into adulthood through the Panel
Chapter II. Coverage and Type of Health Insurance
Chapter II. Coverage and Type of Health Insurance Although various factors condition regular health service use, health service coverage is undoubtedly the main means of periodically accessing medical most recent report puts the white unemployment rate at around 4.5 percent. The black unemployment rate? About 8.8 percent.
Wonkblog By Jeff Guo February 26 For as long as the government has kept track, the economic statistics have shown a troubling racial gap. Black people are twice as likely as white people to be out of work
California s Economic Payoff
California s Economic Payoff Dr. Jon Stiles, Dr. Michael Hout, Dr. Henry Brady Institute for the Study of Societal Issues, UC Berkeley EXECUTIVE SUMMARY April 2012 The benefits of higher education extend
SINGLE MOTHERS SINCE 2000: FALLING FARTHER DOWN 1
SINGLE MOTHERS SINCE 2000: FALLING FARTHER DOWN 1 For the one in four U.S. families who are single mother families, the Great Recession of 2008-2009 exacerbated a period of losing ground that had started
ICI RESEARCH PERSPECTIVE
ICI RESEARCH PERSPECTIVE 0 H STREET, NW, SUITE 00 WASHINGTON, DC 000 0-6-800 OCTOBER 0 VOL. 0, NO. 7 WHAT S INSIDE Introduction Decline in the Share of Workers Covered by Private-Sector DB
The Racial Wealth Gap: African Americans
FACT SHEET April 2014 The Racial Wealth Gap: African Americans Facts At A Glance The median wealth of White households is 20 times that of African American households. The growing racial wealth gap occurring
Women s Participation in Education and the Workforce. Council of Economic Advisers
Women s Participation in Education and the Workforce Council of Economic Advisers Updated October 14, 214 Executive Summary Over the past forty years, women have made substantial gains in the workforce
The Labor Market Performance of Young black Men in the Great Recession Daniel Kuehn
brief# I0 feb. 2013 unemployment and Recovery Project i n s i d e T h i s i s s u e Young black men had higher unemployment during the recession than their white and hispanic peers. They
Executive summary. Global Wage Report 2014 / 15 Wages and income inequality
Executive summary Global Wage Report 2014 / 15 Wages and income inequality Global Wage Report 2014/15 Wages and income inequality Executive summary INTERNATIONAL LABOUR OFFICE GENEVA Copyright International
WORKING, BUT POOR, IN CALIFORNIA
WORKING, BUT POOR, IN CALIFORNIA EXECUTIVE SUMMARY September 1996 Two popular myths endure about California's poor. One is that most poor people don't work or don't want to work. The second is that work
Working After Age 65
ALTERNATIVE FEDERAL BUDGET 2012 TECHNICAL PAPER Working After Age 65 What is at Stake? Angella MacEwen Highlights The OAS and GIS combined today provide one third of the income of all seniors aged over-
New racism lecture outline. Definitions New racism
NEW RACISM New racism lecture outline Definitions New racism Definitions -- outline Minority group Prejudice Discrimination Racism Institutional racism Cultural racism definitions minority group a disadvantaged
A PHILANTHROPIC PARTNERSHIP FOR BLACK COMMUNITIES. Criminal Justice BLACK FACTS
A PHILANTHROPIC PARTNERSHIP FOR BLACK COMMUNITIES Criminal Justice BLACK FACTS Criminal Justice: UnEqual Opportunity BLACK MEN HAVE AN INCARCERATION RATE NEARLY 7 TIMES HIGHER THAN THEIR WHITE MALE COUNTERPARTS.
DATA AND CHART PACK February 2016
DATA AND CHART PACK February 2016 Income Inequality Income includes the revenue streams from wages, salaries, interest on a savings account, dividends from shares of stock, rent, and profits from selling
The Economists Voice
The Economists Voice Volume 2, Issue 1 2005 Article 8 A Special Issue on Social Security Saving Social Security: The Diamond-Orszag Plan Peter A. Diamond Peter R. Orszag Summary Social Security is one
Public Pensions. Economics 325 Martin Farnham
Public Pensions Economics 325 Martin Farnham Why Pensions? Typically people work between the ages of about 20 and 65. Younger people depend on parents to support them Older people depend on accumulated
ECONOMIC FACTORS AFFECTING COMPENSATION
Unit 4: Total Rewards 41 ECONOMIC FACTORS AFFECTING COMPENSATION Inflation Inflation has a substantial impact on compensation practices. Managing a compensation program is especially difficult during periods
September 14, 2011. Several facts stand out from the new Census data:
820 First Street NE, Suite 510 Washington, DC 20002 Tel: 202-408-1080 Fax: 202-408-1056 center@cbpp.org September 14, 2011 POVERTY RATE SECOND-HIGHEST IN 45 YEARS; RECORD NUMBERS LACKED HEALTH
Race and Social Justice Initiative (RSJI) in the Budget
(RSJI) in the Budget Introduction This chapter provides background and context for Race and Social Justice Initiative (RSJI) related budget additions throughout the 2015-2016 Proposed Budget. This is
2012 HOUSEHOLD FINANCIAL PLANNING SURVEY
2012 HOUSEHOLD FINANCIAL PLANNING SURVEY A Summary of Key Findings July 23, 2012 Prepared for: Certified Financial Planner Board of Standards, Inc. and the Consumer Federation of America Prepared by: Princeton
Economic inequality and educational attainment across a generation
Economic inequality and educational attainment across a generation Mary Campbell, Robert Haveman, Gary Sandefur, and Barbara Wolfe Mary Campbell is an assistant professor of sociology at the
7. Work Injury Insurance
7. Work Injury Insurance A. General Work injury insurance provides an insured person who is injured at work a right to receive a benefit or other defined assistance, in accordance with the nature of
Latest House Republican Budget Threatens Medicare and Shreds the Safety Net
Latest House Republican Budget Threatens Medicare and Shreds the Safety Net Instead of Reducing Health Care Costs, Blueprint Shifts Costs to Seniors, Providers, Businesses, and States Topher Spiro March
Lesson Description. Grade Level. Concepts. Objectives. Content Standards
Lesson Description The lead article in the spring 2010 issue of Inside the Vault discusses the redistribution of wealth through taxation. In this lesson, students use different household scenarios to examine
ASSESSING THE RESULTS
HEALTH REFORM IN MASSACHUSETTS EXPANDING TO HEALTH INSURANCE ASSESSING THE RESULTS March 2014 Health Reform in Massachusetts, Expanding Access to Health Insurance Coverage: Assessing the Results pulls
Age, Demographics and Employment
Key Facts Age, Demographics and Employment This document summarises key facts about demographic change, age, employment, training, retirement, pensions and savings. 1 Demographic change The population
Left Behind. Unequal Opportunity in Higher Education
Left Behind Unequal Opportunity in Higher Education The 1965 Higher Education Act, which is slated to be reauthorized later this year, has sought to ensure that no student would be denied a college education
An Equity Assessment of the. St. Louis Region
An Equity Assessment of the A Snapshot of the Greater St. Louis 15 counties 2.8 million population 19th largest metropolitan region 1.1 million households 1.4 million workforce $132.07 billion economy
Custodial Mothers and Fathers and Their Child Support: 2011
Custodial Mothers and Fathers and Their Child Support: 2011 Current Population Reports By Timothy Grall Issued October 2013 P60-246 IntroductIon This report focuses on the child support income that custodial
Gender-based discrimination in the labour market
A UNIFEM Briefing Paper 19 3. Labour Market Discrimination Against Women at Home and Abroad Perceived to be especially fit for domestic chores, women migrants are tracked into this sector even when.
Chapter II Coverage and Type of Health Insurance
Chapter II Coverage and Type of Health Insurance The U.S. social security system is based mainly on the private sector; the state s responsibility is restricted to the care of the most vulnerable groups,
Wednesday, September 18, 2013 Russell Senate Office Building Room 428A
Urban Institute Written Testimony of Signe Mary McKernan Senior Fellow, Urban institute and Caroline Ratcliffe Senior Fellow, Urban Institute For the Roundtable Panel Hearing Closing the Wealth Gap: Empowering
Logan City. Analysis of Impediments to Fair Housing
Logan City Analysis of Impediments to Fair Housing 2009-13 Consolidated Plan Page 36 of 92 EXECUTIVE SUMMARY Analysis of Impediments The Analysis of Impediments (AI) is a comprehensive review of a jurisdiction
SOCIAL IMPLICATIONS OF DEMOGRAPHIC CHANGE IN JAPAN
SOCIAL IMPLICATIONS OF DEMOGRAPHIC CHANGE IN JAPAN Naohiro Yashiro* Declining population as well as population aging has substantial impacts on the economy and on the society. By contrast, the size of STATE OF WORK IN THE EAST BAY AND OAKLAND
THE STATE OF WORK IN THE EAST BAY AND OAKLAND Wealth concentration amidst bottoming out for low and middle-income families Income inequality, Poverty, and Low-wage jobs in 2010 MAJOR FINDINGS Severe economic
Chapter 4 Specific Factors and Income Distribution
Chapter 4 Specific Factors and Income Distribution Chapter Organization Introduction The Specific Factors Model International Trade in the Specific Factors Model Income Distribution and the Gains from
Race, the Job Market, and Economic Recovery: A Census Snapshot
Race, the Job Market, and Economic Recovery: A Census Snapshot September 2009 65 Broadway, Suite 1800, New York NY 10006 (212) 248-2785 New data show people of color are
ICI RESEARCH PERSPECTIVE
ICI RESEARCH PERSPECTIVE 1401 H STREET, NW, SUITE 1200 WASHINGTON, DC 20005 202-326-5800 OCTOBER 2014 VOL. 20, NO. 6 WHAT S INSIDE 2 Introduction 2 Which Workers Want Retirement Benefits?
Welfare reform was passed
Was Welfare Reform Successful? Rebecca M. Blank Welfare reform was passed by Congress and signed by President Clinton in August 1996. Back then there were many skeptics: several senior members of President
The Future of Social Security: 12 Proposals You Should Know About
The Future of Social Security: 12 Proposals You Should Know About Strengthening Social Security AARP s You ve Earned a Say is working to make your voice heard about the future of Social Security. For more.
The Recession of 2007 2009
The Recession of 2007 2009 February 2012 A general slowdown in economic activity, a downturn in the business cycle, a reduction in the amount of goods and services produced and sold these are all characteristics self-employed and pensions
BRIEFING The self-employed and pensions Conor D Arcy May 2015 resolutionfoundation.org info@resolutionfoundation.org +44 (0)203 372 2960 The self-employed and pensions 2 The UK s self-employed populace
in washington state BLACK WELL-BEING BEYOND
Creating an Equitable Future in washington state 20 5 BLACK WELL-BEING BEYOND Criminal Justice Strong communities depend on trust. When people feel confident that they are protected and have the opportunity...
Health Care Expenses and Retirement Income. How Escalating Costs Impact Retirement Savings
Health Care Expenses and Retirement Income How Escalating Costs Impact Retirement Savings January 2012 About the Insured Retirement Institute: The Insured Retirement Institute (IRI) is a not-for-profit
|
http://docplayer.net/1707540-Few-people-would-deny-that-african-americans-made-enormous-economic-and-political.html
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
I have an app with a navigation drawer and 4 navigation items (Fragments). In one of the Fragments, I have a tab layout set up with a view pager (3 more Fragments).
From one of these inner fragments, I want to disable/enable the navigation drawer dynamically. Basically, on a button press, I want to restrict access to the navigation drawer (and the re-enable on pressing it again).
How would I do it?
I tried accessing the
DrawerLayout
Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
DrawerLayout drawer = (DrawerLayout) findViewById(R.id.drawer_layout);
toggle = new ActionBarDrawerToggle(this, drawer, toolbar, R.string.navigation_drawer_open, R.string.navigation_drawer_close);
drawer.setDrawerListener(toggle);
NavigationView navigationView = (NavigationView) findViewById(R.id.nav_view);
navigationView.setNavigationItemSelectedListener(this);
toggle.syncState()
onPostCreate
A clean way to do this is to create an
interface that the
Activity implements, through which the
Fragment can call a method local to the
Activity that handles the drawer lock and toggle button states. For example:
public interface DrawerLocker { public void setDrawerEnabled(boolean enabled); }
In the
Activity's
interface method, we simply figure the lock mode constant for the
DrawerLayout#setDrawerLockMode() call, and call
setDrawerIndicatorEnabled() on the
ActionBarDrawerToggle.
public class MainActivity extends Activity implements DrawerLocker { public void setDrawerEnabled(boolean enabled) { int lockMode = enabled ? DrawerLayout.LOCK_MODE_UNLOCKED : DrawerLayout.LOCK_MODE_LOCKED_CLOSED; drawer.setDrawerLockMode(lockMode); toggle.setDrawerIndicatorEnabled(enabled); } ... }
In the
Fragment, we merely need to cast the hosting
Activity to the
interface, and call the
setDrawerEnabled() method accordingly. For example, to lock the drawer shut:
((DrawerLocker) getActivity()).setDrawerEnabled(false);
NB: Since version 23.2.0 of the v7 appcompat support library,
ActionBarDrawerToggle respects the
DrawerLayout's lock mode, and will not toggle the drawer state if it is locked. This means that it is not strictly necessary to use
setDrawerIndicatorEnabled(), though it might be desirable to still do so in order to provide the user a visual indication that the toggle is disabled.
|
https://codedump.io/share/II8Xo7bJ0YYR/1/disabling-navigation-drawer-from-fragment
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
Fabricio "segfault" Cannini wrote: > I, for example, could never set vim to use a specific font. > If i'd do so, kvim would freak out and display the text very badly. > Example: > <text_diplayed_in_vim> > Mary had a little lard. > </text_diplayed_in_vim> > > <text_diplayed_in_kvim> > M a r y ha d a lit t le l ar d > <text_diplayed_in_kvim> > > Then I simply put kvim aside. I know that it doesn't matter anymore, but still. This is my ~/.gvimrc which can explain what you were supposed to do: if has("gui_kde") set guifont=Andale\ Mono/12/-1/5/50/0/0/0/0/0 else set guifont=DejaVu\ Sans\ Mono\ 10 endif I believe part of the bad reputation kvim got was caused by people being too lazy to do things slightly different than gvim/GTK (which does things again differently than gvim/Athena).." -- innocent Subject line of very non-innocent spam
|
https://lists.debian.org/debian-kde/2005/05/msg00212.html
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
found on GitHub. The aim is to free up developers to spend more time on business logic. The fact that it is open source is a key differentiator between it and AWS Lambda: you are not locked in to a vendor. You can also look at the milestones of the project and see the features being developed, as well as submitting your own as pull requests.
It is currently an experimental offering in beta, and you can either install vagrant and clone the repository:
git clone
cd openwhisk/tools/vagrant
vagrant up
Or get a free account with IBM Bluemix.
Runtimes
OpenWhisk supports NodeJS, Java 8, Python 2.7 and (unlike many competitors) Swift. If you want to use something else, say, legacy C code, you can. Any thing you can package as a binary command in a docker file (the rule of thumb is, a statement with standard input and standard output that would run in bash) then it can be managed. For more information on this see the OpenWhisk tech talk.
How it works
Under the covers, OpenWhisk uses Kafka and Docker among others. It is event driven: the application is structured where events are flowing through the system and event handlers listen. There are four key concepts.
Action
An action is a stateless function, which acts as an event handler. It should be short running: the default is 5 minutes, after which the function is disposed of. In order for the platform to know what to do with it, it needs to have a public
main function that is the entry point, and return Json. For example:
import com.google.gson.JsonObject public class HelloWorld { public static JsonObject main(JsonObject args) { JsonObject response = new JsonObject(); return response; } }
Or in Javascript:
function main(params) { return {payload: 'Hello ' + params.name}; }
To create the action:
wsk action create hello hello.js
The
main function can call additional functions but must follow the conventions to be picked up as the entry point.
Importantly, actions can be chained together in a sequence, like a piping operation. This is a key feature if you want to reuse code. It allows you to reuse implementations of actions by composing them with other actions. You can build up more powerful general solutions. For example, you could call a news API with one action, and filter it with another before returning the results as Json. This allows you to move key processing away from the client.
For more information on actions, including how to invoke them, see the documentation or the OpenWhisk tech talk.
Trigger
This is the name for a class of events, it also might be described as a ‘feed’. However in OpenWhisk, a feed refers to a trigger along with control operations (such as starting or stopping a feed of events). A trigger is the events themselves.
Triggers can be fired by using a dictionary of key-value pairs: sometimes referred to as the event. This results in an activation ID. They can be fired explicitly or by an external event source, by a feed.
To create a trigger:
wsk trigger create sayHello
To fire a trigger:
wsk trigger fire sayHello --param name "World"
See the trigger documentation.
Package
This refers to a collection of actions and feeds which can be packaged up and shared in an ecosystem. You can make your feeds and actions public and share them with others. An example is some of the IBM Watson APIs, which are exposed as OpenWhisk packages.
Rule
A rule is a mapping from a trigger to an action. Each time the trigger fires, the corresponding action is invoked, with the trigger event as its input. It is possible for a single trigger event to invoke multiple actions, or to have one rule that is invoked as a response to events from multiple triggers.
To create the rule:
wsk rule create firstRule sayHello hello
It can be disabled at any time:
wsk rule disable firstRule
After firing the trigger you can check and get the activation ID:
wsk activation list --limit 1 hello
And check the result:
wsk activation result <<activation id number>>
To see the payload:
{
"payload": "Hello World"
}
Use cases
Much like AWS Lambda, OpenWhisk is not going to fit every use case. However for bots and mobile back end solutions, it is perfect. Having the option to use Swift as well as Java means it is an easy transition for many iOS mobile developers. Being able to abstract away costly operations away from devices is perfect for filtering, and avoiding making too many API calls from the client.
Like AWS Lambda, the pain of setting up and maintaining infrastructure is nonexistent, and you don’t have to worry about scaling. Serverless architecture will undoubtably be a key player in the Internet of Things.
In terms of pricing, its not that straightforward. OpenWhisk is free to get started with, but its not that simple to find out how much it will cost you. It also depends entirely on the services you want to use.
|
https://www.voxxed.com/2016/09/serverless-with-openwhisk/
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
I can't figure out why chaining bit-shift operations is not returning the same result as not chaining them.
#include <stdio.h>
void bit_manip_func(unsigned char byte)
{
unsigned char chain = (((byte >> 3) << 7) >> 3);
printf("%d\n", chain); //this prints 144
unsigned char o1 = byte >> 3;
unsigned char o2 = o1 << 7;
unsigned char o3 = o2 >> 3;
printf("%d\n", o3); //this prints 16 as expected
}
int main()
{
//expecting both printf's to print
//the same value (16).
bit_manip_func(73);
return 0;
}
printf
byte >> 3
(byte >> 3) << 7
(((byte >> 3) << 7) >> 3)
The operators >> and << perform integer promotions on their operands. Thus the type unsigned char is promoted to int, when used with either operator.
In the following line, the variable byte is promoted to type int, and then all three operations are performed on this type:
unsigned char chain = (((byte >> 3) << 7) >> 3);
The left most bit set to one is thus preserved:
01001001 => 01001 => 010010000000 => 010010000 ^ ^ ^ ^
In the following code, the variables are promoted to type int, but after each operation, the result, which has the type int, is assigned to an unsigned char and thus wraps (most significant bits are removed),
since the range of unsigned char is [ 0 , 2^8-1 ] on your platform.
unsigned char o1 = byte >> 3; unsigned char o2 = o1 << 7; unsigned char o3 = o2 >> 3;
This means that the left most bit set to one is not preserved:
01001001 => 01001 => 10000000 => 000010000 ^ ^
|
https://codedump.io/share/YiKHg3NtnmW6/1/unexpected-result-after-chaining-bit-shift-operators-in-c
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
In this blog, We will be exploring CrockroachDB along with Java. We will also take up a few more things like
- How to install CrockroachDB
- How to start a Local cluster
- How to connect CrockroachDB with Java
Alright, Before we jump into installation of CrockroachDB, Let’s find out What is CrockraochDB?
CrockroachDB –.
When is CockroachDB a good choice?
CockroachDB is well suited for applications that require reliable, available, and correct data regardless of scale. It is built to automatically replicate, rebalance, and recover with minimal configuration and operational overhead.
When is CockroachDB not a good choice?
CockroachDB is not a good choice when very low latency reads and writes are critical; use an in-memory database instead.
Also, CrockroachDB is not yet suitable for
- Complex SQL Joins
- Heavy Analytics/OLAP
How to install CrockraochDB?
Installing CorckroachDB is very easy. Let’s see How to install CrockroachDB.
Follow the below link to install CrockroachDB
How to start a Local cluster?
Once the CrockroachDB is installed. We will be starting up an insecure multi-node cluster locally . To start the Local cluster follow the below steps.
Steps to start Local cluster(in Insecure Mode)
1.Start the first node by executing the following command in terminal
cockroach start --insecure \ --host=localhost
This will start up the first node and You also view the admin UI at
By default node1 would start on port 26257.
2. Add second node to the cluster by executing the following command in a new terminal.
cockroach start --insecure \ --store=node2 \ --host=localhost \ --port=26258 \ --http-port=8081 \ --join=localhost:26257
This will basically add the second node to the cluster on port 26258. Here we are joining the second node with the first node.
3. Add third node to the cluster by executing the following command in a new terminal.
cockroach start --insecure \ --store=node3 \ --host=localhost \ --port=26259 \ --http-port=8082 \ --join=localhost:26257
This will basically add the third node to the cluster on port 26259. Here we are joining the third node with the first node.
So we have successfully setup a three node cluster locally. Now you can access the admin UI at
How to connect CrockroachDB with Java?
Before you begin, make sure you have installed CrockroachDB.
Follow the following steps to build a Java application using jdbc driver.
Step1. Install the Java JDBC driver.
Step2. Start a cluster
Step3. Create a user
In a new terminal, as the root user, use the cockroach user command to create a new user, testuser.
cockroach user set testuser --insecure
Step4. Create a database and grant privileges.
As the
root user, use the built-in SQL client to create a
school database.
cockroach sql --insecure -e 'CREATE DATABASE bank'
Then grant privileges to the
testuser user.
cockroach sql --insecure -e 'GRANT ALL ON DATABASE school TO testuser'
Now that you have a database and a user, you’ll run code to create a table and insert some rows, and then you’ll run code to read and update values as an atomic transaction.
How to connect CrockroachDB with Java using JDBC?
Step1. First we will be creating connection to the CrockroachDB using a singleton class like below
public class DBConnection { private static DBConnection dbInstance; private static Connection con ; private DBConnection() { // private constructor // } public static DBConnection getInstance(){ if(dbInstance==null){ dbInstance= new DBConnection(); } return dbInstance; } public Connection getConnection(){ if(con==null){ String url= "jdbc:postgresql://127.0.0.1:26257/"; String dbName = "school?sslmode=disable"; String driver = "org.postgresql.Driver"; String userName = "testuser"; String password = ""; try { Class.forName(driver).newInstance(); this.con = DriverManager.getConnection(url+dbName,userName,password); } catch (Exception ex) { ex.printStackTrace(); } } return con; } }
Step2. Now we will create a class which will have basic student CRUD methods like below.
public class StudentCRUD { static Connection con = DBConnection.getInstance().getConnection(); public static void insertStudent() throws SQLException { con.createStatement().execute("INSERT INTO school.student (id, name) VALUES (11, 'Deepak'), (22, 'Abhishek')"); } public static void selectStudent() throws SQLException { ResultSet res = con.createStatement().executeQuery("SELECT id, name FROM student"); while (res.next()) { System.out.printf("\tStudent %s: %s\n", res.getInt("id"), res.getString("name")); } } public static void createTable() throws SQLException { // Create the "student" table. con.createStatement().execute("CREATE TABLE IF NOT EXISTS student (id INT PRIMARY KEY, name varchar(30))"); } public static void deleteStudent() throws SQLException { // Create the "student" table. con.createStatement().execute("delete from student where id=22"); } public static void updateStudent() throws SQLException { // Create the "student" table. con.createStatement().execute("update student set name='deepak mehra' where id=11"); } }
Step3. Now that everything is setup, We will be writing a class which will have a main method accessing StudentCRUD class’s method. Write a class with main method like below.
public static void main(String[] args) throws ClassNotFoundException, SQLException { // Connect to the "school" database. try { // Create the "student" table. StudentCRUD.createTable(); //Insert data into "student" table. StudentCRUD.insertStudent(); //Select data from "student" table. StudentCRUD.selectStudent(); //Delete data from "student" table. StudentCRUD.deleteStudent(); System.out.println("\tPrinting student after deleting id :: 22"); //Select data from "student" table. StudentCRUD.selectStudent(); //Update data into "student" table. StudentCRUD.updateStudent(); System.out.println("\tPrinting student after updating id :: 11"); //Select data from "student" table. StudentCRUD.selectStudent(); } catch (Exception ex) { ex.printStackTrace(); } } }
Now your java application with CrockroachDB is complete. To run your application and see the results, you can execute the SampleApp.java class from console or directly from the IDE you are using.
Note : Before you execute the SampleApp.java class make sure the cluster is up and running with the database name ‘school’ and user ‘testuser’ with all the privileges.
If you have followed the blog from the beginning you don’t have to do anything otherwise you will have to create a user and grant privileges or you can simply go ahead with the default user which is ‘root’.
If you are having any challenge building the app , you can access the full code at this link on github. You can simply clone the repo and execute the SampleApp.java class.
If you find any challenge, Do let me know in the comments.If you enjoyed this post, I’d be very grateful if you’d help it spread.Keep smiling, Keep coding! Cheers!
|
https://blog.knoldus.com/2017/06/12/cockroachdb-with-java-using-jdbc/
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
#include "libavutil/attributes.h"
#include "ac3enc.h"
#include "eac3enc.h"
#include "eac3_data.h"
#include "ac3enc_opts_template.c"
Go to the source code of this file.
Initialize E-AC-3 exponent tables.
Definition at line 52 of file eac3enc.c.
Referenced by exponent_init().
Determine frame exponent strategy use and indices.
Definition at line 68 of file eac3enc.c.
Referenced by compute_exp_strategy().
Set coupling states.
This determines whether certain flags must be written to the bitstream or whether they will be implicitly already known by the decoder.
Definition at line 95 of file eac3enc.c.
Referenced by apply_channel_coupling().
Write the E-AC-3 frame header to the output bitstream.
Definition at line 128 of file eac3enc.c.
Referenced by ff_ac3_encode_init().
Definition at line 38 of file eac3enc.c.
LUT for finding a matching frame exponent strategy index from a set of exponent strategies for a single channel across all 6 blocks.
Definition at line 49 of file eac3enc.c.
Referenced by ff_eac3_exponent_init(), and ff_eac3_get_frame_exp_strategy().
Definition at line 254 of file eac3enc.c.
|
http://ffmpeg.org/doxygen/trunk/eac3enc_8c.html
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
By default, the TCP listeners handle dropped TCP sessions by trying to reconnect after increasingly long intervals. You can specify a custom reconnection policy by defining an instance of Splunk.Logging.TcpConnectionPolicy, and passing it to the constructors of the TcpTraceListener or TcpEventSink classes.
TcpConnectionPolicy has a single method, Reconnect, which tries to establish a connection or throws a TcpReconnectFailure if the policy dictates that it will no longer try to reconnect. Here is annotated source code of the default, exponential backoff policy:
public class ExponentialBackoffTcpReconnectionPolicy : TcpReconnectionPolicy { private int ceiling = 10 * 60; // 10 minutes in seconds // the arguments are: // // connect - a function that attempts a TCP connection given a host, port number // host - the host to connect to // port - the port to connect on // cancellationToken - used by TcpTraceListener and TcpEventSink to cancel this method // when they are disposed public Socket Connect(Func<IPAddress, int, Socket> connect, IPAddress host, int port, CancellationToken cancellationToken) { int delay = 1; // in seconds while (!cancellationToken.IsCancellationRequested) { try { return connect(host, port); } catch (SocketException) { } // if this is cancelled via the cancellationToken instead of // completing its delay, the next while-loop test will fail, // the loop will terminate, and the method will return null // with no additional connection attempts Task.Delay(delay * 1000, cancellationToken).Wait(); // the nth delay is min(10 minutes, 2^n - 1 seconds) delay = Math.Min((delay + 1) * 2 - 1, ceiling); } // cancellationToken has been cancelled return null; } }
Another, simpler, policy would be trying to reconnect once, and then failing:
class TryOnceTcpConnectionPolicy : TcpReconnectionPolicy { public Socket Connect(Func<System.Net.IPAddress, int, Socket> connect, System.Net.IPAddress host, int port, System.Threading.CancellationToken cancellationToken) { try { if (cancellationToken.IsCancellationRequested) return null; return connect(host, port); } catch (SocketException e) { throw new TcpReconnectFailureException("Reconnect failed: " + e.Message); } } }
|
http://dev.splunk.com/view/splunk-loglib-dotnet/SP-CAAAEY9
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
Hello everybody, could someone please urgent help me with this assignment question?
I just need some tips on why my prompt dialog is not working correctly.
The script is below after the assignment question...
Finish the coding of the class LightController by completing the public instance method runLight(). This method should first prompt the user to enter the number of times the disco light is to grow and shrink. Then the method should prompt the user for the size increase which they want the diameter of the disco light to grow by. The size increase provided by the user should be an even number although it is not required that your method should check this. The method then causes the circle referenced by light to perform the required number of growing and shrinking cycles.
import ou.*; import java.util.*; /** * Class LightController * This class uses the Circle class, and the Shapes window * to simulate a disco light, that grows and shrinks and * changes colour. * @author M250 Module Team * @version 1.0 */ public class LightController { /* instance variables */ private Circle light; // simulates a circular disco light in the Shapes window private Random randomNumberGenerator; /** * Default constructor for objects of class LightController. */ public LightController() { super(); this.randomNumberGenerator = new Random(); light = new Circle(); light.setColour(OUColour.GREEN); light.setDiameter(50); light.setXPos(122); light.setYPos(162); } /** * Returns a randomly generated int between 0 (inclusive) * and number (exclusive). For example if number is 6, * the method will return one of 0, 1, 2, 3, 4, or 5. */ public int getRandomInt(int number) { return this.randomNumberGenerator.nextInt(number); } /** * Returns the instance variable, light. */ public Circle getLight() { return this.light; } /** * Randomly sets the colour of the instance variable * light to red, green, or purple. */ public void changeColour() { this.getRandomInt(3); int l = getRandomInt(3); if (l == 0) { this.light.setColour(OUColour.RED); } else if (l == 1) { this.light.setColour(OUColour.GREEN); } else if (l == 2) { this.light.setColour(OUColour.PURPLE); } } /** * Grows the diameter of the circle referenced by the * receiver's instance variable light, to the argument size. * The diameter is incremented in steps of 2, * the xPos and yPos are decremented in steps of 1 until the * diameter reaches the value given by size. * Between each step there is a random colour change. The message * delay(anInt) is used to slow down the graphical interface, as required. */ public void grow); } } /** * Shrinks the diameter of the circle referenced by the * receiver's instance variable light, to the argument size. * The diameter is decremented in steps of 2, * the xPos and yPos are incremented in steps of 1 until the * diameter reaches the value given by size. * Between each step there is a random colour change. The message * delay(anInt) is used to slow down the graphical interface, as required. */ public void shrink); } } /** * Expands the diameter of the light by the amount given by * sizeIncrease (changing colour as it grows). * * The method then contracts the light until it reaches its * original size (changing colour as it shrinks). */ public void lightCycle(int sizeIncrease) { int oldDiameter = this.light.getDiameter(); this.grow(sizeIncrease); this.shrink(oldDiameter); } /** * Prompts the user for number of growing and shrinking * cycles. Then prompts the user for the number of units * by which to increase the diameter of light. * Method then performs the requested growing and * shrinking cycles. */ public void runLight(int numCycles,int sizePerCycle) { for (int doneCycles = 0; doneCycles <= numCycles; doneCycles++) { this.lightCycle(sizePerCycle); } } /** * Causes execution to pause by time number of milliseconds. */ private void delay(int time) { try { Thread.sleep(time); } catch (Exception e) { System.out.println(e); } } }
|
https://www.daniweb.com/programming/software-development/threads/490486/help-needed-prompt-dialog-urgent-advice
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
How to Do 10 Common Tasks in JSF 2.0
How to Do 10 Common Tasks in JSF 2.0
Here’s a list of 10 features you might need to implement everyday and how they are performed in JSF 2.0.
Join the DZone community and get the full member experience.Join For Free
Here’s a list of 10 features you might need to implement everyday and how they are performed in JSF 2.0.
TemplatingJSF 1.0 started by using JSP as the templating technology, but most people started using Facelets by Jacob Hookom and I haven’t since found a templating setup I like more. With JSP, templates work by including files, and with other templating setups you define the template to use and then each page defines the content placed in the template. JSF combines both of these and works very much like ASP.net master pages. Each template page defines areas and in each content page, you pull in the template you want to use and then push the content into the defined areas on the template. Here’s an example template file that defines our header, footer and content layout with specific named areas defined for inserting content with the ui:insert tag.
Notice we re-use the “title” insert point in the page title and the content header. Here is an example page that uses this template :
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" ""><html xmlns=""xmlns:<h:head><title>Application Name : <ui:insertPage Title</ui:insert></title><h:outputStylesheet</h:head><h:body><div id="page"> <div id="header"> <h1>App Name</h1> </div> <div id="container"> <h1><ui:insertDefault Page Title</ui:insert></h1> <div id="sidebar"> <h1>Sidebar</h1> Content for the sidebar goes here </div> <div id="content"> <ui:insertDefault value for Content</ui:insert> </div> <div id="footer">Footer goes Here</div> </div> </div></h:body></html>
And the result :
<?xml version="1.0" encoding="UTF-8"?><ui:composition <ui:defineHello World!</ui:define> <ui:define Here is some content that we are adding from our custom page. </ui:define></ui:composition>
Writing Code for the PageJSF pages are driven by pojo objects called backing beans that can be defined using annotations. Depending on whether you are using JCDI you will use the managed beans annotations or the JCDI bean annotations. The annotations are very similar except for the package names so care must be taken. You should use the JCDI annotations over the managed bean annotations as JCDI provides a richer set of features. Here is an example bean using the JCDI bean annotations.
This bean is fairly typical in that it is defined as having a name and a scope. Whenever a JSF page evaluates the expression #{calculator}, it will evaluate to a request scoped instance of this bean. This also applies to any other Java EE components that can use EL expressions. Attributes on the bean can be bound to components on the page using the same EL expressions. Here’s an example of displaying the message and inputs for the two numbers in our JSF page.
import javax.enterprise.context.RequestScoped;import javax.inject.Named;@Named("calculator")@RequestScopedpublic class MyBackingBean {private String message = "Welcome to my calculator";private Long number1 = new Long(100);private Long number2 = new Long(24);private Long result; ... getters and setters removed ...}
And the result :
<?xml version="1.0" encoding="UTF-8"?><ui:composition <ui:defineCalculate</ui:define> <ui:define #{calculator.message}<br/> <br/> Number 1 : <h:inputText<br/> <br/> Number 2 : <h:inputText<br/> <br/> </ui:define></ui:composition>
Trigger a method execution on the beanOnce you’ve put in all your data, you want to send it back to the server and do something with it. To do this, we put a form element around the content, and add either a command button or link. In this case, we are going to add a button to add the numbers and display the result. Add a new method to the backing bean to add the numbers and put it in result.
Now we will add the button to the form and a piece of text to show the result.
public void executeAdd() {result = number1 + number2;}
<ui:define <h:form #{calculator.message}<br/> <br/> Number 1 : <h:inputText<br/> <br/> Number 2 : <h:inputText<br/> <br/> <h:outputText<br/> <h:commandButton </h:form> </ui:define>
Hiding view elementsThere are many times you don’t want to render view elements depending on the state in the backing bean. This can range from a value not being set, to limitations based on user security or whether the user is logged in or not. To handle this, all JSF components have a rendered attribute which can be set to an expression. If the expression evaluates to true, then the JSF component is rendered. The rendering applies to all children as well so if one group component is not rendered, none of the children will be either. Here we will use it to hide the result message if there is no result available which is the case when the user first enters the form. We only render the message if the result is not null :
<h:outputText
Decorating contentTypically most content is wrapped in divs and spans that let you style the display easily, for example, data input form properties. Doing this manually for each form input control and then possibly having to change each one manually if something changes is a bad idea. Instead, we can use the ui:decorate tag that lets you wrap content on the content page with other content from a template. The ui:decorate takes a takes a template as a parameter and some content to be wrapped. Our form property template may look like the following :
This defines our structure for the decorator template. We expect a label value to be passed in to use as a caption for the property. The ui:insert tag without a name causes whatever content is placed in the ui:decorate tag in the content page to be inserted. In our case, we want to decorate our form input components.
<ui:composition<div class="property"> <span class="propertyLabel"> <ui:insert : </span> <span class="propertyControl"> <ui:insert /> </span></div></ui:composition>
Here we decorate our number 1 input box with our property decorator which surrounds it with divs which can be styled. We define the label value and pass it into the template so it can be included in the final page.
<ui:decorate <ui:defineLabel Text</ui:define> <h:inputText</ui:decorate>
Creating re-usable contentAnother facelets feature is the ability to create re-usable JSF fragments that can be used in any page. You can even parameterize it so the data that is displayed is relevant to the context of the form it is displayed in. For example, if your entities have auditing information to display the creation and last modified dates, you don’t want to re-produce the view code for that. Instead, you can create a facelet to display that information
Obviously, there would be more code in there for formatting and layout, but this page displays the created and modified dates of the p_entity object. This value is provided by the page into which the composition is being inserted.
<ui:composition Created On : #{p_entity.createdOn} Modified On : #{p_entity.modifiedOn}</ui:composition>
<ui:include <ui:param </ui:include>
Ajaxify a page.Pre-JSF 2.0 most AJAX solutions for JSF were third party ones that each chose their own path to handling AJAX. With JSF 2.0, AJAX is now a part of the framework and can be used by developers out of the box and also allow third party developers to build their own frameworks on top of. Making a form AJAX capable just requires you to do three things. First, you must identify what data is posted back to the server in the call, then you must determine what event the call is made on, and then you must indicate what part of the page will be re-rendered once the AJAX request is complete.
In our example, we’ll make the calculation AJAX enabled by adding the AJAX tag to the command button used to calculate the result. First off, we need to add the javascript for AJAX in JSF. We can do this by adding the following at the top of the content. You can also add it for all pages in the template.
Now we will wrap the result text in a named container called result. We will use this named container to tell JSF what we want to re-render.
<h:outputScript
Now we will add the AJAX features to the command button.
<h:panelGroup<h:outputText</h:panelGroup>
This tells JSF we want to postback the whole form (@form is a special name) and render the component identified by form:result. We could have used @form here as well to re-render the whole form. These two attributes can take these special names, a single component name, or a list of component names. Since we added this to a command button, the event defaults to the clicking of the button.
<h:commandButton<f:ajax</h:commandButton>
Page ParametersHandling page parameters with JSF 2.0 is really simple. We just add some metadata at the top of the page to bind parameters to backing bean attributes.
If we call this page with these parameters set, the form will be pre-populated with our values.
<f:metadata><f:viewParam<f:viewParam</f:metadata>
Validate pageThere are a few ways we can validate the data, but first things first, we need to add some places to put error messages using the h:message tag to display an error message associated with a component. Here is our data entry component using the decorator and the message components added.
In this first instance, we have validated in the view using the required attribute of the input for number 1 and provided a message to display if it is blank. We can also add validations using the standard bean validations in JSR 303. In our backing bean, locate the number2 field and add the validations you want.
<ui:decorate<ui:defineNumber 1</ui:define><h:inputText<h:message</ui:decorate><ui:decorate<ui:defineNumber 2</ui:define><h:inputText<h:message</ui:decorate>
This makes our number 2 value required and ranged to between 0 and 10. If we try and put anything else in and click the execute button, we will get an error. Note that if you are doing this yourself, you have to open up the ajax re-render section to include the whole form since otherwise the error messages will not be rendered as they are not part of the rendered components we specified.
@NotNull@Min(value=0)@Max(value=10)private Long number2;
public void executeAdd() {if (number1 == 27) {FacesContext.getCurrentInstance().addMessage("form:number1",new FacesMessage("Number 1 should not be 27"));}result = number1 + number2;}
Optional CSS StylingThere are many times when you want to style something differently depending on the state on the server. This can easily be done by using EL expressions in the style attribute. In fact, EL expressions can be used in most JSF attributes. To change the style of some text depending on priority of an issue, you can use the following :
<span class="#{issue.isUrgent ? 'urgentText' : ''}">This is not yet resolved</span>
As you can see, there are a lot of powerful things that can be done with JSF using a small amount of simple code and they all follow the same techniques and methods which can help when learning JSF.
From
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }}
|
https://dzone.com/articles/how-do-10-common-tasks-jsf-20?fromrel=true
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
SO far till java 7, we had String.split() method which can split a string based on some token passed as parameter. It returned list of string tokens as string array. But, if you want to join a string or create a CSV by concatenating string tokens using some separator between them, you have to iterate through list of Strings, or array of Strings, and then use StringBuilder or StringBuffer object to concatenate those string tokens and finally get the CSV.
String concatenation (CSV) with join()
Java 8 made the task easy. Now you have String.join() method where first parameter is separator and then you can pass either multiple strings or some instance of Iterable having instances of strings as second parameter. It will return the CSV in return.
package java8features; import java.time.ZoneId; public class StringJoinDemo { public static void main(String[] args){ String joined = String.join("/","usr","local","bin"); System.out.println(joined); String ids = String.join(", ", ZoneId.getAvailableZoneIds()); System.out.println(ids); } } Output: usr/local/bin Asia/Aden, America/Cuiaba, Etc/GMT+9, Etc/GMT+8.....
So next time you use java 8 and want to concatanate the strings, you have a handy method in your kit. Use it.
Happy Learning !!
Lokesh, I follow your posts regularly and they are really superb. Waiting for more JAVA 8 posts. Will you please do a post on Functions(Functional Programming).
|
https://howtodoinjava.com/java8/java-8-string-join-csv-example/
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
contentful 1.1.3
contentful-dart #
This Dart package is a small abstraction on top of the Contentful Delivery API.
Usage #
To use this plugin, install
contentful as a dependency in your
pubspec.yaml.
API #
The following example uses
equatable and
json_annotation to create a
Contentful Entry model. For more information about
json_annotation, see:
Contentful adds system fields to their JSON responses, so you need to create a
separate fields class and pass it to the
Entry<T> class generic.
You can also add relationships as a field attribute and they will get injected if they are included in the JSON response. See here for more details:
import 'package:equatable/equatable.dart'; import 'package:json_annotation/json_annotation.dart'; import 'package:contentful/contentful.dart'; part 'event.g.dart'; @JsonSerializable() class Event extends Entry<EventFields> { Event({ SystemFields sys, EventFields fields, }) : super(sys: sys, fields: fields); static Event fromJson(Map<String, dynamic> json) => _$EventFromJson(json); Map<String, dynamic> toJson() => _$EventToJson(this); } @JsonSerializable() class EventFields extends Equatable { EventFields({ this.title, this.slug, this.relations, }) : super([title, slug, relations]); final String title; final String slug; final List<Event> relations; static EventFields fromJson(Map<String, dynamic> json) => _$EventFieldsFromJson(json); Map<String, dynamic> toJson() => _$EventFieldsToJson(this); }
Here is how you would use your
Event class.
import 'package:contentful/contentful.dart'; import 'event.dart'; class EventRepository { EventRepository(this.contentful); final Client contentful; Future<Event> findBySlug(String slug) async { final collection = await contentful.getEntries<Event>({ 'content_type': 'event', 'fields.slug': slug, 'limit': '1', 'include': '10', }, Event.fromJson); return collection.items.first; } } Future<void> main() async { final repo = EventRepository(Client('SPACE_ID', 'ACCESS_TOKEN')) final event = await repo.findBySlug('myevent'); print('Title: ${event.fields.title}'); }
Use this package as a library
1. Depend on it
Add this to your package's pubspec.yaml file:
dependencies: contentful: ^1:contentful/contentful.dart';
We analyzed this package on Jan 16, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
- Dart: 2.7.0
- pana: 0.13.4
Maintenance issues and suggestions
Provide a file named
CHANGELOG.md. (-20 points)
Maintain an example. (-10 points)
Create a short demo in the
example/ directory to show how to use this package.
Common filename patterns include
main.dart,
example.dart, and
contentful.dart. Packages with multiple examples should provide
example/README.md.
For more information see the pub package layout conventions.
|
https://pub.dev/packages/contentful
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
Difference between revisions of "Advanced Installation"
Latest revision as of 13:55, 14 December 2018: <bash> ./configure API=f90 F90=/path/to/preferred/compiler </bash> git or subversion.
Fedora, RedHat, CentOS, Scientific Linux, openSUSE
Install essential Madagascar dependencies with
sudo yum install gcc libXaw-devel
Dependency package names, sorted by Linux distribution and Madagascar Madagascar:
plplot = context.env.get('PLPLOT','plplotd')
to:
def blas(context):
context.Message("checking for BLAS ... ")
text = '''
#ifdef __APPLE__
#include <Accelerate/Accelerate.h>
#else
#ifdef HAVE_MKL
#include <mkl.h>
#else
#include <cblas.h>
#endif
#endif
int main(int argc,char* argv[]) {
float d, x[]={1.,2.,3.}, y[]={3.,2.,1.};
d = cblas_sdot(3,x,1,y,1);
return 0;
}\n'''
if plat['OS'] == 'cygwin':
context.env['ENV']['PATH'] = context.env['ENV']['PATH'] + \
':/lib/lapack'
res = context.TryLink(text,'.c')
if res:
context.Result(res)
context.env['BLAS'] = True
else:
# first try blas
LIBS = path_get(context,'LIBS')
blas = context.env.get('BLAS','blas')
LIBS.append(blas)
res = context.TryLink(text,'.c')
if res:
context.Result(res)
context.env['LIBS'] = LIBS
context.env['BLAS'] = blas
else:
# some systems require cblas and atlas
for atlas_dir in filter(os.path.isdir,
['/usr/lib/gcc/x86_64-redhat-linux/6.3.1/', # <--- add this line
'/usr/lib64/', # <--- add this line
'/usr/lib64/atlas/',
'/usr/lib/atlas/']):
context.env['LIBPATH'].append(atlas_dir)
LIBS.pop()
LIBS.append('f77blas')
LIBS.append('cblas')
LIBS.append('atlas')
LIBS.append('gfortran') # <----------------------------------------------- add this line
LIBS.append('quadmath') # <----------------------------------------------- add this line
res = context.TryLink(text,'.c')
if res:
context.Result(res)
context.env['LIBS'] = LIBS
context.env['BLAS'] = 'cblas'
else:
context.Result(context_failure)
context.env['CPPDEFINES'] = \
path_get(context,'CPPDEFINES','NO_BLAS')
LIBS.pop()
LIBS.pop()
LIBS.pop()
context.env['BLAS'] = None.
- (Optionally) SEGTeX: To use SEGTeX, you may need TeX Live. MacPorts and Fink provide an easy way to install it with commands
sudo port install texliveor
sudo fink install texlive
- .
- Install python with libraries including jupyter and ipython. I recommend the Anaconda distribution. which is available at
- (Optionally) SWIG is required for some of the options of the Python api (used if you are coding in Python). run
conda install swig. I had to run
cp `which swig` /usr/local/binto get ./configure to find swig.
- Proceed with configuration and installation following the normal procedure. You may need to use Apple's compiler (clang) instead of gcc. Use one of these commands in the $RSFSRC directory
./configure CC=clang CXX=clang++or
./configure CC=clang CXX=clang++ API=python --prefix=`pwd`
- build the system with:
make install
- After installing a new version of python you must run:
./configure make install
In summer of 2018 on a new MacBook Pro running Mac OS 10.14.1 (Mojave) I encountered an "abort trap: 6" error message when running sfpen from command line. I do not have a problem running sfpen inside scons. I changed to use xtpen from the command line. I continue to use this work around.: <bash> ln -s path_to_my_programs $RSFSRC/user/my_programs </bash>.
|
http://www.ahay.org/wiki2015/index.php?title=Advanced_Installation&diff=prev&oldid=3757
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
----Hello there!, I am currently trying to move this object back and forth at set speeds on the Y Axis. I have saved this script for my Enemy Controller, and have it added to several "enemy" 3D objects. Unfortantly the objects do not seem to be moving based on their own local position, but instead are all following the same local position(which I cannot seem to figure out where or what's controlling it). I have them separated using different parents and moving those around, but am still unable to figure out whats wrong.
----The Enemy objects placed at the bottom of my environment are moving far upwards than they should, but they move back and forth as expected.(they go up 5, then bounce -4, +4, -4, +4 FROM their current position.{bounce between 3 and 8 on the Y Axis})
----The Enemy objects placed at a higher elevation in my environment are moving downwards, than stopping, and infinintly vibrating in place.
this is my third time going through the code to see if I could fix it myself, but im still learning, and im clearly having issues with something that feels very simple. Heres a Sample of my Current Code.
using UnityEngine;
public class MyEnemyController : MonoBehaviour {
public float speed = 3f; //Speed at which the objects move in Time.deltaTime
public float speedInc = 1.2f; //20% increase in speed for objects moving downward
public float maxHigh = 2f; //Maximum movement from current position
public float direction = 1f; //Rate at which to move
Vector3 InitialPos; //variable for 3D object X,Y,Z
void Start() {
//Saving Current Objects coordinates(X,Y,Z) into variable "InitialPos"
InitialPos = transform.localPosition; }
void Update () {
//Saving Maximum Range for objects movement into 2 different variables- one for the Maximum, the other for the Minimum
float newPositionMax = maxHigh + InitialPos.y;
float newPositionMin = maxHigh - InitialPos.y;
//If the object is moving in a downward position (were only moving on the Y Axis)
if(direction == -1){
//Saving base movement * 20% increase speed & directional movement into a variable "movementYDown"
float movementYDown = speed * speedInc * Time.deltaTime * direction;
//Forcing object to move on the Y Axis using Transform.up, and to use the speed and direction input from "movementYDown" variable
transform.position += transform.up * movementYDown;
}
else //Otherwise If the object is moving in any other direction thats not downward
{
//Saving base movement & directional movement into variable "movementY"
float movementY = speed * Time.deltaTime * direction;
//Forcing object to move on the Y Axis using transform.up and to use the speed and direction input from "movementY" variable
transform.position += transform.up * movementY; }
//If the Objects current position on Y Axis is "Greater-than" or "Equal-too" the Maximum Heighth we have set
if(transform.position.y >= newPositionMax) {
direction = -1; //Then move downward in a negative direction
}
//Else If the objects current position on Y Axis is "Less-than" or "Equal-too" the Minimum Heighth we have set
else if(transform.position.y <= newPositionMin) {
direction = 1; //Then move Upward in a positive direction
}
}
}
Answer by tonyjwhite
·
Oct 25, 2018 at 11:25 AM
I was able to generally fix the problem I was having by added the script onto the parent itself, instead of each enemy--its not specifically what I wanted, but it will work for what I am doing. I did however have to add a new public variable, public float MaxLow = -2f;. and change Line 21 float newposistionMin = MaxHigh - initialPos.y; to float newposistionMin = MaxLow + initialPos.y;
public float MaxLow = -2f;
float newposistionMin = MaxHigh - initialPos.y;
float newposistionMin = MaxLow + initialPos.y;
--all the enemies now move exactly how they should, but now theyre all moving at the same time( I was hoping to keep them individual.)
---If anyone can still give me any ideas on how to make it work per object, id really.
Reversing movement directions
1
Answer
Helicopter Move Local Position,Local Position
1
Answer
Move a object along a vector
1
Answer
GameObject doesn't move
1
Answer
Touch To Move script problem
0
Answers
|
https://answers.unity.com/questions/1565558/how-to-move-a-3d-object-based-on-its-default-posit.html
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
But first, let us cover the basics of a web scraper or a web crawler.
Demystifying the terms ‘Web Scraper’ and ‘Web Crawler’
A web crawler, also known as a ‘spider’ has a more generic approach! You can define a web crawler as a bot that systematically scans the Internet for indexing and pulling content/information. It follows internal links on web pages. In general, a “crawler” navigates web pages on its own, at times even without a clearly defined end goal.
Hence, it is more like an exploratory search of the content on the Web. Search engines such as Google, Bing, and others often employ web crawlers to extract content for a URL or for other links, get URLs of these links and other purposes.
However, it is important to note that web scraping and crawling are not mutually exclusive activities. While web crawling creates a copy of the content, web scraping extracts specific data for analysis, or to create something new.
However, in order to scrape data from the web, you would first have to conduct some sort of web crawling to index and find the information you need. On the other hand, data crawling also involves a certain degree of scraping, like saving all the keywords, the images and the URLs of the web page.
More about Web Crawlers
A web crawler is nothing but a few lines of code. This program or code works as an Internet bot. The task is to index the contents of a website on the internet. Now we know that most web pages are made and described using HTML structures and keywords. Thus, if you can specify a category of the content you need, for instance, a particular HTML tag category, the crawler can look for that particular attribute and scan all pieces of information matching that attribute.
You can write this code in any computer language to scrape any information or data from the internet automatically. You can use this bot and even customize the same for multiple pages that allow web crawling. You just need to adhere to the legality of the process.
There are multiple types of web crawlers. These categories are defined by the application scenarios of the web crawlers. Let us go through each of them and cover them in some detail.
1. General Purpose Web Crawler
A general purpose Web crawler, as the name suggests, gathers as many pages as it can from a particular set of URLs to crawl large-scale data and information. You require a high internet speed and large storage space are required for running a general purpose web crawler. Primarily, it is built to scrape massive data for search engines and web service providers.
2. Focused Web Crawler
Focused Web Crawler is characterized by a focused search criterion or a topic. It selectively crawls pages related to pre-defined topics. Hence, while a general purpose web crawler would search and index all the pages and URLs on a site, the focused crawler only needs to crawl the pages related to the pre-defined topics, for instance, the product information on an e-commerce website.
Thus, you can run this crawler with smaller storage space and slower internet speed. Most search engines, such as Google, Yahoo, and Baidu use this kind of web crawler.
3. Incremental Web Crawler
Imagine you have been crawling a particular page regularly and want to search, index and update your existing information repository with the newly updated information on the site. Would you crawl the entire site every time you want to update the information? That sounds unwanted extra cost of computation, time and memory on your machine. The alternative is to use an incremental web crawler.
An incremental web crawler crawls only newly generated information in web pages. They only look for updated information and do not re-download the information that has not changed, or the previously crawled information. Thus it can effectively save crawling time and storage space.
4. Deep Web Crawler
Most of the pages on the internet can be divided into Surface Web and Deep Web (also called Invisible Web Pages or Hidden Web). You can index a surface page with the help of a traditional search engine. It is basically a static page that can be reached using a hyperlink. Web pages in the Deep Web contain content that cannot be obtained through static links. It is hidden behind the search form.
In other words, you cannot simply search for these pages on the web. Users cannot see it without submitting some certain keywords. For instance, some pages are visible to users only after they are registered. Deep web crawler helps us crawl the information from these invisible web pages.
When do you need a web crawler?
From the above sections, we can infer that a web crawler can imitate the human actions to search the web and pull your content from the same. Using a web crawler, you can search for all the possible content you need. You might need to build a web crawler in one of these two scenarios:
1. Replicating the action of a Search Engine- Search Action
Most search engines or the general search function on any portal sites use focused web crawlers for their underlying operations. It helps the search engine locate the web pages that are most relevant to the searched-topics. Here, the crawler visits web sites and reads their pages and other information to create entries for a search engine index. Post that, you can index the data as in the search engine.
To replicate the search function as in the case of a search engine, a web crawler helps:
- Provide users with relevant and valid content
- Create a copy of all the visited pages for further processing
2. Aggregating Data for further actions – Content Monitoring
You can also use a web crawler for content monitoring. You can then use it to aggregate datasets for research, business and other operational purposes. Some obvious use-cases are:
- Collect information about customers, marketing data, campaigns and use this data to make more effective marketing decisions.
- Collect relevant subject information from the web and use it for research and academic study.
- Search information on macro-economic factors and market trends to make effective operational decisions for a company.
- Use a web crawler to extract data on real-time changes and competitor trends.
How can you build a Web Crawler?
There are a lot of open-source and paid subscriptions of competitive web crawlers in the market. You can also write the code in any programming language. Python is one such widely used language. Let us look at a few examples there.
Web Crawler using Python
Python is a computationally efficient language that is often employed to build web scrapers and crawlers. The library, commonly used to perform this action is the ‘scrapy’ package in Python. Let us look at a basic code for the same.
import scrapy class spider1(scrapy.Spider): name = ‘Wikipedia’ start_urls = [‘’] def parse(self, response): pass
The above class consists of the following components:
- a name for identifying the spider or the crawler, “Wikipedia” in the above example.
- a start_urls variable containing a list of URLs to begin crawling from. We are specifying a URL of a Wikipedia page on clustering algorithms.
- a parse() method which will be used to process the webpage to extract the relevant and necessary content.
You can run the spider class using a simple command ‘scrapy runspider spider1.py‘. The output looks something like this.
The above output contains all the links and the information (text content) on the website in a wrapped format. A more focussed web crawler to pull product information and links from an e-commerce website looks something like this:
import requests from bs4 import BeautifulSoup def web(page,WebUrl): if(page>0): url = WebUrl code = requests.get(url) plain = code.text s = BeautifulSoup(plain, “html.parser”) for link in s.findAll(‘a’, {‘class’:’s-access-detail-page’}): tet = link.get(‘title’) print(tet) tet_2 = link.get(‘href’) print(tet_2) web(1,’’)
This snippet gives the output in the following format.
The above output shows that all the product names and their respective links have been enlisted in the output. This is a piece of more specific information pulled by the crawler.
Other crawlers in the market
There are multiple open source crawlers in the market that can help you collect/mine data from the Internet. You can conduct your due research and use the best possible tool for collecting information from the web. A lot of these crawlers are written in different languages like Java, PHP, Node, etc.
You may consider factors like the simplicity of the program, speed of the crawler, ability to crawl over various web sites (flexibility) and memory usage of these tools before you make your final choice.
read original article here
|
https://coinerblog.com/how-to-build-a-web-crawler-from-scratch-wps32ia/
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
WaylandCompositor QML Type
Manages the Wayland display server. More...
Properties
- created : bool
- defaultOutput : WaylandOutput
- defaultSeat : WaylandSeat
- extensions : list
- retainedSelection : bool
- socketName : string
- useHardwareIntegrationExtension : bool
Signals
- void surfaceCreated(QWaylandSurface *surface)
- void surfaceRequested(WaylandClient client, int id, int version)
Methods
- destroyClient(client)
- destroyClientForSurface(surface)
Detailed Description
The WaylandCompositor manages the connections to the clients, as well as the different outputs and seats.
Normally, a compositor application will have a single WaylandCompositor instance, which can have several outputs as children. When a client requests the compositor to create a surface, the request is handled by the onSurfaceRequested handler.
Extensions that are supported by the compositor should be instantiated and added to the extensions property.
Property Documentation
This property is true if WaylandCompositor has been initialized, otherwise it's false.
This property contains the first in the list of outputs added to the WaylandCompositor, or null if no outputs have been added.
Setting a new default output prepends it to the output list, making it the new default, but the previous default is not removed from the list.
This property contains the default seat for this WaylandCompositor.
A list of extensions that the compositor advertises to its clients. For any Wayland extension the compositor should support, instantiate its component, and add it to the list of extensions.
For instance, the following code would allow the clients to request
wl_shell surfaces in the compositor using the
wl_shell interface.
import QtWayland.Compositor 1.0 WaylandCompositor { WlShell { // ... } }
This property holds whether retained selection is enabled.
This property holds the socket name used by WaylandCompositor to communicate with clients. It must be set before the component is completed.
If the socketName is empty (the default), the contents of the start argument
--wayland-socket-name are used instead. If the argument is not set, the compositor tries to find a socket name, which is
wayland-0 by default.
This property holds whether the hardware integration extension should be enabled for this WaylandCompositor.
This property must be set before the compositor component is completed.
Signal Documentation
This signal is emitted when a new WaylandSurface instance has been created.
This signal is emitted when a client has created a surface. The slot connecting to this signal may create and initialize a WaylandSurface instance in the scope of the slot. Otherwise a default surface is created.
Method Documentation
Destroys the given WaylandClient client.
Destroys the client for the WaylandSurface.
|
https://doc-snapshots.qt.io/qt5-5.11/qml-qtwayland-compositor-waylandcompositor.html
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
Portal:Toolforge/Admin/Networking and ingress
This page describes the design, topology and setup of Toolforge's networking and ingress, specifically those bits related to webservices in the new kubernetes cluster.
Contents
Network topology
The following diagram is used to better describe the network topology and the different components involved.
When an user visits a webservice hosted in Toolforge, the following happens:
- nginx+lua: The first proxys is what we know as dynamicproxy. The nginx+lua setup knows how to send requests to the legacy k8s + webservices grid. There is a fall-through route for the new k8s cluster.
This proxy provides SSL termination for both
tools.wmflabs.organd
toolforge.org. There are DNS A records in both domains pointing to floating IP associated with the active proxy VM.
- haproxy: Then, haproxy knows which actual nodes are alive in the new k8s cluster. Proxy for both the k8s api-server on tcp/6443 and the web ingress on tcp/30000.
There is a DNS A record with name
k8s.tools.eqiad1.wikimedia.cloudpointing to the IP of this VM.
- nginx-ingress-svc: There is a nginx-ingress service of type NodePort, which means every worker node listens on tcp/30000 and direct request to the nginx-ingress pod.
- nginx-ingress pod: The nginx-ingress pod will use ingress objects to direct the request to the appropriate service, but ingress objects need to exists beforehand.
- api-server: Ingress objects are created using the k8s API served by the api-server. They are automatically created using the webservice command, and the k8s API allows users to create and customize them too.
- ingress-admission-controller: There is a custom admission controller webhook that validates ingress config objects to enforce valid configurations.
- ingress object: After the webhook, the ingress objects are added to the cluster and the nginx-ingress pod can consume them.
- tool svc: The ingress object contained a mapping between URL/path and a service. The request is now in this tool specific service, which knows how to finally direct the query to the actual tool pod.
- tool pod: The request finally arrives at the actual tool pod.
Components
This section contains specific information about the different components involved in this setup.
There are mainly 2 different kinds of elements: those running outside kubernetes and those running inside.
outside kubernetes
Information about components running outside kubernetes.
dynamicproxy
This component is described in another wikitech page.
haproxy
This setup is fairly simple, deployed by puppet using the role
role::wmcs::toolforge::k8s::haproxy.
We have a DNS name
k8s.tools.eqiad1.wikimedia.cloud with an
A record pointing to the active VM.
There should be a couple of VMs in a cold-standby setup (only one is actually working).
The haproxy configuration involves 2 ports, the "virtual servers":
- 6443/tcp for the main kubernetes api-server
- 30000/tcp for the ingress
Each "virtual server" has several backends:
- in the case of the api-server, backends are the controller nodes.
- in the case of the ingress, backends are every worker nodes.
inside kubernetes
Explanation of the different components inside kubernetes.
calico
We use calico as the network overlay inside kubernetes. There is not a lot to say here, since we mostly use the default calico configuration. We only specify the CIDR of the pod network. There is a single yaml file containing all the configuration, and deployed by puppet.
This file is
modules/toolforge/templates/k8s/calico.yaml.erb in the puppet tree and
/etc/kubernetes/calico.yaml in the final control nodes.
To load (or refresh) the configuration inside the cluster, use:
root@k8s-control-1:~# kubectl apply -f /etc/kubernetes/calico.yaml configmap/calico-config unchanged customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org unchanged customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org unchanged clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged clusterrole.rbac.authorization.k8s.io/calico-node unchanged clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged daemonset.apps/calico-node configured serviceaccount/calico-node unchanged deployment.apps/calico-kube-controllers unchanged serviceaccount/calico-kube-controllers unchanged
Some things to take into account:
- Mind that all configuration is based on a specific version of calico. By the time of this writting, the version is v3.8.
- The version of calico we use harcodes calls to iptables-legacy, so in Debian Buster we were forced to switch everything to that, rather than the default iptables-nft. See
toolforge::k8s::calico_workaroundfor details.
nginx-ingress
We use nginx to process the ingress configuration in the k8s cluster. There is a single yaml file containing all the configuration, and deployed by puppet.
This file is
modules/toolforge/files/k8s/nginx-ingress.yaml in the puppet tree and
/etc/kubernetes/nginx-ingress.yaml in the final control nodes.
To load (or refresh) the configuration into the cluster, use:
root@k8s-control-01:~# kubectl apply -f /etc/kubernetes/nginx-ingress.yaml namespace/ingress-nginx unchanged configmap/nginx-configuration unchanged configmap/tcp-services unchanged configmap/udp-services unchanged serviceaccount/nginx-ingress unchanged clusterrole.rbac.authorization.k8s.io/nginx-ingress unchanged role.rbac.authorization.k8s.io/nginx-ingress unchanged rolebinding.rbac.authorization.k8s.io/nginx-ingress unchanged clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress unchanged deployment.apps/nginx-ingress configured service/ingress-nginx unchanged
Some things to take into account:
- Mind that all configuration is based on a specific version of kubernetes/ingress-nginx. By the time of this writting, the version is v0.25.1.
- There are serveral modifications done to the configuration as presented in the docs, to fit into our environments. Changes are docummented in the yaml file.
- One of the most important changes is that we configured the nginx-ingress pods to listen on 8080/tcp rather than the default (80/tcp).
- The nginx-ingress deployment requires a very specific PodSecurityPolicy to be deployed beforehand (likely
/etc/kubernetes/kubeadm-nginx-ingress-psp.yaml).
- we don't know yet which scale factor do we need for the nginx-ingress deployment (i.e, how many pods to run).
Worth mentioning that we are using the comunity-based one () rather than the NGINX Inc. one ().
error handling
We have a tool called fourohfour which is set as the default backend for nginx-ingress. This tool presents an user friendly 404 page.
ingress objects
Ingress objects can be created in 2 ways:
- directly using the kubernetes API
- by using the webservice command.
Objects have this layout. Toolforge tool name fourohfour is used as example:
apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/configuration-snippet: | rewrite ^(/fourohfour)$ $1/ redirect; nginx.ingress.kubernetes.io/rewrite-target: /fourohfour/$2 labels: name: fourohfour toolforge: tool tools.wmflabs.org/webservice: "true" tools.wmflabs.org/webservice-version: "1" name: fourohfour namespace: tool-fourohfour spec: rules: - host: toolsbeta.wmflabs.org http: paths: - backend: serviceName: fourohfour servicePort: 8000 path: /fourohfour(/|$)(.*) status: loadBalancer: {}
NOTE: the rewrite/redirect configuration is really important and is part of the behaviour users expect. See phabricator ticket T242719 for example.
This ingress object is pointing to a service which should have this layout:
apiVersion: v1 kind: Service metadata: labels: name: fourohfour toolforge: tool tools.wmflabs.org/webservice: "true" tools.wmflabs.org/webservice-version: "1" name: fourohfour namespace: tool-fourohfour spec: clusterIP: x.x.x.x ports: - name: http port: 8000 protocol: TCP targetPort: 8000 selector: name: fourohfour sessionAffinity: None type: ClusterIP status: loadBalancer: {}
Note that both objects are namespaced to the concrete tool namespace.
ingress admission controller
This k8s API webhook checks ingress objects before they are accepted by the API itself. It enforces (or prevents) ingress configurations that may produce malfunctioning in the webservices running in kubernetes, like pointing URLs/path to tools that are not ready to handle them.
The code is written in golang, and can be found here:
-
-
How to test the setup
See logs for nginx-ingress:
root@tools-k8s-control-3:~# kubectl logs -lapp.kubernetes.io/name=ingress-nginx -n ingress-nginx 192.168.50.0 - [192.168.50.0] - - [18/Dec/2019:12:02:21 +0000] "GET /potd-feed/potd.php/commons/potd-400x300.rss HTTP/1.1" 503 2261 "-" "FeedFetcher-Google; (+)" 402 0.003 [upstream-default-backend] [] 192.168.15.133:8000 2261 0.000 503 2d08390256b1629bc60552710e4b47e1 192.168.34.128 - [192.168.34.128] - - [18/Dec/2019:12:02:22 +0000] "GET /favicon.ico HTTP/1.1" 404 1093 "" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) coc_coc_browser/83.0.144 Chrome/77.0.3865.144 Safari/537.36" 576 0.529 [upstream-default-backend] [] 192.168.15.133:8000 2116 0.528 404 5fc83ada234e3a5a5da5e42a6583f992 192.168.15.128 - [192.168.15.128] - - [18/Dec/2019:12:02:23 +0000] "GET /mediawiki-feeds/ HTTP/1.1" 503 2236 "" "Mozilla/5.0+(compatible; UptimeRobot/2.0;)" 487 0.003 [upstream-default-backend] [] 192.168.15.133:8000 2236 0.004 503 e0341d943aba393e30fc692151da0790 [..]
TODO: extend this. Perhaps refactor into its own wiki page?
See also
- Toolforge k8s RBAC and PodSecurityPolicy -- documentation page
- phabricator T228500 - Toolforge: evaluate ingress mechanism -- original ticket to design the ingress (epic)
- phabricator T234037 - Toolforge ingress: decide on final layout of north-south proxy setup -- original ticket to design the outer proxy layout
- phabricator T234231 - Toolforge ingress: decide on how ingress configuration objects will be managed -- original ticket to design ingress object management
- phabricator T235252 - Toolforge: SSL support for new domain toolforge.org -- original ticket to handle SSL support for the domain toolforge.org and the new k8s cluster in general
- phabricator T234617 - Toolforge. introduce new domain toolforge.org -- original ticket to introduce the domain toolforge.org
- phabricator T234032 - Toolforge ingress: create a default landing page for unknown/default URLs -- about the fourohfour tool
- Resolution on new domains usage for WMCS
- Deploying Toolforge kubernetes
|
https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Admin/Networking_and_ingress
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
Splitting your app into smaller apps using RabbitMQ
After many years of development, we realised our app had become too complex, causing development, testing and debugging to be much harder. We decided to do something about it and the first step needed to solve this problem was splitting our app into smaller apps — starting with extracting the messaging mailer, which is responsible for sending all our messages to clients as a separate app. For this purpose, RabbitMQ was chosen as a broker.
A couple words about RabbitMQ
RabbitMQ is a broker for the AMQP (Advanced Message Queuing Protocol) — Messaging Broker
Reasons for using messaging in your applications
- Reduce complexity by decoupling and isolating applications
- Build smaller apps that are easier to develop, debug, test, and scale
- Build multiple apps that each use the most suitable language or framework versus one big monolithic app
- Get robustness and reliability through message queue persistence
- Reduce system sensitivity to downtime
Before we start I suggest you to read this post: Event sourcing on Rails with RabbitMQ. This helped me a lot when I was just starting to learn this technology and explain in details all the things I am not going to talk about for the sake of simplicity. Also, it would be a good idea to read about bunny and sneakers gems, and RabbitMQ official documentation.
To better understand the whole process, I will define a couple of things:
Publisher
- The first app (publisher) called ‘dashboard’ uses ‘message_publisher’ for delivering messages to the broker via a ‘dashboard.messages’ exchange.
- The name of the exchange for dashboards is called ‘dashboard.messages,’ which is a direct exchange — for detailed explanations of different types of exchanges, look at
- ‘bunny’ gem () was used for publishing messages (Bunny is a popular, easy-to-use, well-maintained Ruby client for RabbitMQ)
creating publisher
# '../lib/publishers/message_publisher.rb'
class Publishers::MessagePublisher
def self.publish(message)
x = channel.direct("dashboard.messages")
x.publish(message.as_json, :persistent => true, :routing_key => '#')
end def self.channel
@channel ||= connection.create_channel
end
def self.connection
@conn = Bunny.new(APP_CONFIG['rabbitmq.amqp']) # getting configuration from rabbitmq.yml
@conn.start
end
end
RabbitMQ configuration
# '../config/rabbitmq.yml'
rabbitmq:
development:
amqp: "amqp://guest:guest@localhost:5672" # integration:
# amqp: ... # production:
# amqp: ...
creating and sending message to broker via publisher
class Message < ActiveRecord::Base
# ...
after_commit publish_message, on: :create
# ... def publish_message
Publishers::MessagePublisher.publish(self)
end
Consumer
- Second app(consumer) called ‘mailer’ which consumes messages received from broker and sends them to clients using Action mailer.
- Name of queue ‘mailer.messages’
- ‘sneakers’ gem —, was used for consuming messages (A fast background processing framework for Ruby and RabbitMQ)
creating worker to consume messages
# '..app/workers/messages_worker.rb'
class MessagesWorker
include Sneakers::Worker from_queue 'cliche.messages' def work(message)
MyMailer.send_mail(msg).deliver
ack!
end
RabbitMQ configuration
# '..config/rabbitmq.yml'
development:
amqp: "amqp://guest:guest@localhost:5672"
vhost: "/" # integration:
# amqp: ... #production:
# amqp: ...
Connecting it all together
rake task to create binding between producer and consumer
namespace :rabbitmq do
desc "Setup routing"
task :setup do
require "bunny" conn = Bunny.new
conn.start ch = conn.create_channel # get or create exchange
x = ch.direct('dashboard.messages', :persistent => true) # get or create queue (note the durable setting)
queue = ch.queue('mailer.messages', :durable => true, :ack => true, :routing_key => '#') # bind queue to exchange
queue.bind('vcita.messages', :routing_key => '#') conn.close
end
end
Installing and running it all together
- Installation of RabbitMQ — brew install rabbitmq
- Run RabbitMQ broker — /usr/local/opt/rabbitmq/sbin/rabbitmq-server
- Open dashboard’s directory — cd Projects/dashboard ( in my case ) and run rake task for binding — rake rabbitmq:setup
- Open mailers’s directory — cd Projects/mailer ( in my case ) and run sneakers worker — WORKERS=MessagesWorker rake sneakers:run
- Check RabbitMQ’s admin is alive —, check via admin bindings properly configured between exchange and queue
In coming articles, I will talk about error handling in consumer using sneaker’s failure handlers and what we do in producer in case the connection to the broker is lost for some reason. I hope this helps somebody to better understand how all things are connected and working together, or that I’ve at least provided a starting point.
Link to original post in my blog
|
https://medium.com/splitting-your-app-with-rabbitmq/splitting-your-app-into-smaller-apps-using-rabbitmq-b6e4ef29d1da?source=post_page-----b6e4ef29d1da----------------------
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
This article was originally posted as “SDL2: Empty Window” on 31st August 2013 at Programmer’s Ranch. It has been slightly updated and now enjoys syntax highlighting. The source code for this article is available at the Gigi Labs BitBucket repository.
Yesterday’s article dealt with setting up SDL2 in Visual Studio. Today we’re going to continue what we did there by showing an empty window and allowing the user to exit by pressing the X at the top-right of the window.
It takes very little to show an empty window. Use the following code:
#include <SDL.h> int main(int argc, char ** argv) { SDL_Init(SDL_INIT_VIDEO); SDL_Window * screen = SDL_CreateWindow("My SDL Empty Window", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 640, 480, 0); SDL_Quit(); return 0; }
We use SDL_Init() to initialise SDL, and tell it which subsystems we need – in this case video is enough. At the end, we use SDL_Quit() to clean up. It is possible to set up
SDL_Quit() with
atexit(), as the SDL_Quit() documentation shows.
We create a window using SDL_CreateWindow(). This is quite different from how we used to do it in SDL 1.2.x. We pass it the window caption, initial coordinates where to put the window (not important in our case), window width and height, and flags (e.g. fullscreen).
If you try and run the code, it will work, but the window will flash for half a second and then disappear. You can put a call to SDL_Delay() to make it persist for a certain number of milliseconds:
SDL_Delay(3000);
Now, let’s make the window actually remain until it is closed. Use the following code:
#include <SDL.h> int main(int argc, char ** argv) { bool quit = false; SDL_Event event; SDL_Init(SDL_INIT_VIDEO); SDL_Window * screen = SDL_CreateWindow("My SDL Empty Window", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 640, 480, 0); while (!quit) { SDL_WaitEvent(&event); switch (event.type) { case SDL_QUIT: quit = true; break; } } SDL_Quit(); return 0; }
The
while (!quit) part is very typical in games and is in fact called a game loop. We basically loop forever, until the conditions necessary for quitting occur.
We use SDL_WaitEvent() to wait for an event (e.g. keypress) to happen, and we pass a reference to an SDL_Event structure. Another possibility is to use SDL_PollEvent(), which checks continuously for events and consumes a lot of CPU cycles (
SDL_WaitEvent() basically just sleeps until an event occurs, so it’s much more lightweight).
The event type gives you an idea of what happened. It could be a key press, mouse wheel movement, touch interaction, etc. In our case we’re interested in the
SDL_QUIT event type, which means the user clicked the window’s top-right X button to close it.
We can now run this code, and the window remains until you close it:
Wasn’t that easy? You can use this as a starting point to start drawing stuff in your window. Have fun, and come back again for more tutorials! 🙂
|
https://gigi.nullneuron.net/gigilabs/2015/11/03/
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
SuggestedStories
This page had been hijacked by wiki-spammers. I rolled back this page to the most recent version that contained non-spam posts, and deleted the irrelevant stuff. Randolph Peters.
Ideas include:
I like the idea of a test container - they could actually be JVM instances that are persistant, controlled by the FitNesse server via RMI or a socket protocol, and could receive commands from the server to run tests or suites. It would speed up launching tests, support suite-level object sharing, and running multiple tests in parallel.
'We do this at PrintSoft? by putting (by hand) "!define FITNESSE_ROOT {this is my directory}" in the content.txt file in the FitNesse root directory'
- .KenHorn
The handling of SetUp and TearDown? pages isn't really correct since a possible hierarchy isn't handled.
StefanRoock, email: stefanATstefanroockDOTde
This function sould be easy to implement based on the previous story suggestion ("Support debugging")
StefanRoock, email: stefanATstefanroockDOTde
MarkEnsign?
Even a pointer on how to code it ourselves could be helpful.
Thanks,
keithDOTadeneyATprintsoftDOTde
where produce() does the test, and pages()/imgs()/ppm() just return cached test results.
Great tool thanks,
Keith
-
|word| definition|
when searching certain words are found, and certain word on the very same page are simply not found.
-
-JjL
-
--
[.FrontPage] [.RecentChanges]
- Transcending SetUp/TearDown? inheritance. (+1 - IljaPreuss)
- When I include a page of classpaths they don't seems to be picked up by child pages, it this a bug?
- It would be nice if we could make nested collapsible sections, for instance, we're using Fitnesse's symbolic links to let each programmer run tests against their own classpaths, we collapse the path definitions together but it would be nice to have sub-collapsable sections for jars that are shared by all users and the jars that are unique to the specific environment or for separting jar paths by relavence. Nested collapsible sections would also be valuable for separating variables and fixtures into groups. And because things are easier to remember when you hear them three times in a row, don't forget to implement nested collapsible sections!
- tag !include ^SubPage have to inclued subpages. Currently it tells "Page include failed because the page ^SubPage does not exist." though displays valid link.
- add a special variable that is the name of the current page like $ {CURRENT_PAGE}, this would be useful when running a suite of tests and trying to match a test with a section of a generated log file
- add the ability to jump to the first error after a suite of tests has run or collapse all included tests, except ones that contain errors - some of these suites are getting long Cooper Jager
- reiterating the PAGE-LEVEL TOC request below. Fitnesse is simply a great wiki, FIT aside, but something almost all regular wikis have that is missing is a !pagecontents (or !pagecontents 1-2 for only headers1-2) that inserts an indented TOC of the headers on the page. Very useful! I would think you should be able to reuse some of the 'contents' fixture code that does the same thing for the entire wiki site.
- FitLibraryForDotNet?
- FAQs
- Cookbook recipes (Java and .NET)
- SuggestedStories.ParameterizedVersionControl
- When archiving test results (out of CommandLineTestRunner?, etc), the pages look rather ugly with no style sheets. The style sheet structure that is used does not lend very well to archiving test results. It would be nice to have the link statement read: &lt;link href="./files/css/fitnesse.css" ... instead of &lt;link href="/files/css/fitnesse.css" ... this would allow you to have a non-fitnesse directory that will still interpret the style sheets directly. All that you need to do then is have the "files" tree in the same folder as your output, tailor the css sheets appropriately, and the archival pages should look the same as they do when run directly out of fitnesse.
- Escape wiki word syntax. When you add text to a page that LooksLikeWikiWord? you can use !-ThisSyntax-! to get rid of the question mark. However, if you use LooksLikeWikiWord? all over the page, you have to use !-ThisSyntax-! all over the page. It would be a nice convenience to have some syntax that would identify a string as a WordThatLooksLikeWikiWordButShouldNotBeTreatedAsSuch for the entire page.
- Extended error keyword. Ability to assert a specific type of exception or message. Something like error("This is the expected message") or error[MyCustomException?].
- Data Table Fixture for .NET (like a row fixture but sitting on top of a .NET DataTable) would be useful if requirements specify state of data separately from the application view of that data.
- User-Defineable Suites - It's great to organize your pages in a certain way, and then have the suites run them by that hierarchy, but sometimes you want a few different ways of looking at your tests. For instance, I could organize my stories by task, and then I could run a suite that is focused on that task... but if I wanted to look at the stories from the first iteration, I have to manually pick and choose the tests out. It would be great if I could make a wiki page that has a list of the tests I want to run, and then they would be run like a suite.
- Reread Password File every time - I'd like to be able to change somebody's password (or add a new account) without having to restart FitNesse (availability is very important to me) -Stephen Starkey
- Group-level restrictions - Just like UNIX. I'd like to be able to put users into groups and limit certain parts of pages only to folks in that group (i.e. allow read access to one group, and read-write access to another, more special group) -Stephen Starkey
- User-level restrictions - Just like UNIX. I'd like to be able to limit certain functions only to specific users. -Stephen Starkey (that should just about do it..hehe)
- If a user is not yet authenticated, ask for a password when the click "Edit" - not after they have made edits and clicked "Save".
- The alias form doesn't work with a url (Rick)
- What about adding some markup elements for managing and tracking stories?
- Exceptions like ClassNotFoundException? are shown in a very small font. Use a larger font. Precede the stack trace with a message meaningful for customers like "technical problem; contact your programmers" (But that's a Fit issue)
- It would be nice to have a special version of !contents that is able to list all pages in wiki (see)
- Fix italics markup for single-character strings.
- Make the content type of the generated HTML pages "text/html; charset=utf-8".
- .RecentChanges filter per Sub-wiki. A project team mostly wants to know how their project sub-wiki is changing. markW.
- Limit the number of *.Zip files to 5. They are Tribbles. (Note the use of Metaphor) markW.
- SuiteOfSuites?
- !=text=! for monofont literals.
- Compare different versions of pages. 8 hours
- In-page hyperlinks
- Anchors are generated for headers (!1, etc.)
- A table of contents function creates a bullet list of all anchors on the current page
- Enable external linking to anchors
- Ability to call ant tasks. Perhaps this is a reach -- but the framework already supports wikis and fit. Maybe a better approach is that we can write custom ant fixtures instead.
- Duplicate the buttons at both the top and bottom of the page.
- Not all lines starting with a digit are outline levels. In particular, " * 1 some text here" should not render as a bullet, followed by a numbered outline level, followed by "some text here".
- If it isn't a big change, it would be nice to have valid xml (xhtml transitional or something like that) so that the pages can be styled or converted to pdf etc.
- It'd be nice if we could use the email/IM/wiki standard of asterisk *foo* to represent bold instead of the bizzare three-single-quotes. Likewise for _italics_
- A "Back" button on the Edit page would be nice (i don't always trust my browser)
- How about a "preview" button when editing which would show how the edited page would look without persisting a new version of the page until you click "save". Pages which change often that you want to look "just right" currently generate too many versions. KevinWilliams++ AndrewMcDonagh
- Sometimes FitNesse starts "acting funny"(technical term), it would be nice if there were a clean way to stop the process without using system utilities.
- Recognize newline characters from standard out and replace them with break read in the generated wiki/html page after a test or suite is executed. -ChrisWilliams?
- I'm using FitNesse in German. So we have some Umlaute which can't be typed, either in the normal way or with html syntax. It would be nice to just be able to use them. -.DanielFlueck
- 'Umlaute' work nicely for me, maybe a Browser Issue? -Stefan.Haslinger@gmx.at
- What about a short list of the important Wiki Formatting at the end of the Editbox? Something like the MoinMoinWiki is doing. -.DanielFlueck
- It would help allot if there were a testlist tag that would work like the contents tag but only list pages that are tests or test suites. Adding each test to a parent page manually just to avoid having the header, footer, errorlog, etc. show up in the contents is a real pain when continually adding tests to a page.
- Identify a FIT table as distinct from a normal table. Inside a FIT table, do *not* process wiki words -- treat all text inside tables as if surrounded with bang-dash.
- Option to put the button links (test/edit/etc.) on the bottom as well as the top. This is useful for long pages so you don't have to scroll so far.
- HTML co-existence with WikiML. You could deactivate it by default, I suppose, requiring an switch to be set in an XML pref file. Those that dont like it dont have to change anything and those who do use it can benefit from the robustness of HTML while using FitNesse. (Isn't it possible to simply let the &lt;HTML&gt; by-pass the parser?) I for one could use more nicely formatted tables, more color, more fonts, a little Javascript, include some pre-existing editable web pages etc. A more secure and traditional wiki environment could be maintained by simply not switching html on.
- An address-bar command: ?contains with the functionality of its counterpart !contains so you can quickly see which pages exist in a directory without having to add it to a page then remove it or drill down in the OS to hava a look... Something quick.
- WYSIWYG. Its trivial to implement this today and much easier for business-types who don't care to learn either WikiMarkUP or HTML. Here's a live demo of an Open Source WYSIWYG editor. It's free, all HTML and works with IE and Mozilla and was designed for the same text area box that FitNesse is now using. Seed Wiki uses a similar editor on their site. This same editor can be demoed at the developers site. Microsoft explains how to build one for IE complete with evolutive demos. I heard (unconfirmed) that Netscape also supports the same technology. Wards 1st wiki is what, 8 years old now? This seems like a natural evolution for Wiki. WYSIWYG was a lot harder to do back then in the days before the internet boom. Now, if you can edit email, you'd know how to edit a WYSIWYG FitNesse wiki. This would help the same audience that the Excel button benefitted.
- Remove the 2nd BIG gauge from the .FrontPage. Its a nice looking graphic, true, but after the 100th visit to the page you begin to wonder if the little one isn't enough...
- Click the gauge to navigate back to the FrontPage.
- Allow the !contents element to list its elements within the left sidebar to avoid long lists that push page content down. Something like !contentsSB SB=SideBar?
- `Alternative` to Wiki Words. Make anything between backticks ` a wiki reference [ ie: `link` ] because WikiWords? arent always the most natural choice. In addition to standard WikiWords? you could have onesThatDontExactlyConform or HaveRepeatingCAPS or One2or3Numbers? or AccentsIncluded? or AnyCOMBOyouWant2 use. The only constraint being as to whether or not the folder that would be created would make the OS happy. Include a wiki-way compliance switch somewhere that can be over-ridden for people who don't care whether or not wiki-way happy-collisions ever occur.
- A PREVIEW button in the edit area to make edit tweaking quicker while at the same time : Move the Paste Table From Excel below Save button then put time reducing the ZipFileTribbleProblem?. Put the Preview button to the right of Save and add a Cancel button to the right of it to lessen mouse movement to the navigation arrows.
- Limiting the .zip file count. I second the Tribble allusion above. Even 10 or 20 would be much better than the hundreds that I now have to deal with in many directories. (You DID limit it to some number, right?) No matter how cute those rascals are, they're getting hard to manage.
- Orphaned pages - all pages without any reference to them
- Provide an RSS feed of changes. If not RSS then maybe email is fine.
- Put back the Total Suite Execution Time feature. Seems to have gotten lost in one or another revision, and we like that metric.. :-). -Stephen Starkey
- Brian Marick and Ward Cunningham's ^NotesOnErrorMessages?.
- Daniel Parker's notes on ^SimpleDateFormat?.
- Configurable text heights for table especially in the stack output - when debugging tests its almost impossible to read the output so each time the Font height must be made larger and then smaller when running the test.
- Could you make the FitNesse logo clickable and linked to the .FrontPage?
- Could you please include a stylesheet? We'd like to change the style of the textarea, for example, but there's no way to do that without altering the code and recompiling.
- files with spaces barf in the files/
- Wonderfull, but it is not working with characters like (text corrupted - please fix), etc. What am I doing wrong? I'm hosting this WIKI on a Red Hat Linux 9. Thank's -- Leandro
- Support navigation from test result page to editor. This is crucial for large test pages. If a test fails, I'd like to click on the error to jump to the WikiML code for the line.
- State of Tests: We often have to deal with the state of tests, like test planned, tester in progress, test definition finished, test succeeds, test obsolete. The state may differ per project. Therefore configurable states would be useful (e.g. via a state page like TestStates?). The fitnesse users can choose states arbitrarily. It is simply for the organization of tests without predefined semantics. The ! contents command could be parameterized with states like !contents test-planned (StefanRoock)
- It would be nice to archive past test runs in a similar way we now archive past wiki page contents. keithDOTadeneyATprintsoftDOTde
Suggested Refactorings to Add To Refactor Page
- MakeSubPage - from a page, click on Refactor, click on MakeSubPage.
- MoveTree - &lt;B style="color:black;background-color.#A0FFFF "&gt;Move a page&lt;/b&gt; and all its subpages to another branch of the Wiki.
An HTML element ID for the fit.Summary tableI am trying to use Ant to run FitNesse tests in a headless manner. I am using HTMLUnit to run my suite and I want to verify that the "total count" of the suite shows 0 wrong, 0 ignored, 0 exceptions. It is much easier to locate an element of an HTML page if it has an ID (the id attribute).
Command line switch to turn on the UpdaterI don't want the Updater to run unless I explicitly ask it to run, which I would do each time I update my FitNesse code base. So I want a command line switch which would cause it to run on startup. I don't care if the server stays up, or just prints a "succeeded" message and exits.
Looping Action FixtureMM&gt; LoopingActionFixture?.... It sounds like a cool idea. Its definately.
Test ContainerI've created a class I call TextContext? which my SetUp page instantiates. This context contains all my page-level testing globals, such as a connection to my server that I'm testing. I'd like to have the ability for objects to be available down through a suite of pages, so I can connect to my server, run a suite of tests, then disconnect - rather than connecting/disconnecting on each page.
Ideas include:
- an object registry (I'm guessing like an RMI registry)
- proxying objects out of the server (so socket connections can stay open)
- A "test container" which can run multiple pages or a suite in one JVM, under the direction/control of the main FitNesse server
Special VariablesA special variable like ${FITNESSE_ROOT} that contains the full pathname of the Fitnesse root directory. There are probably other such variables like FITNESSE_PORT. It might also be a good idea to find a way to access environment variables. Perhaps a syntax like this $${environment_variable}
'We do this at PrintSoft? by putting (by hand) "!define FITNESSE_ROOT {this is my directory}" in the content.txt file in the FitNesse root directory'
- Allow alternative labels in fixture tables -- I don't want the users to see the method names - yes they should be able to cope, but it would help acceptance.
- To avoid full Wiki editing, allow an edit mode where only the single table is editable, either in wiki src form (with the rows only being available, and the excel import tidying (possibly add an 'open in excel option'), or in a table form where all the cell contents are editable. Again this is about user usability
- I'd need some way of capturing the tests / current wiki state into CVS, so I can track / tag it for releases. Since everything evolves, I'd want to be able to rebuild the source from a single point, including the acceptance tests at that point. This may mean each project uses a distinct fitnesse wiki and I just check the whole thing into CVS. It might be nice however (thinking aloud here) to be able to resolve a single test into a salf-contained unit (with all inherited Classpaths) (this could be a help thing to show what command would be used for this test (maybe this already exists?)
- Does Refactoring's rename allow the movement of a page around the wiki? Got a NPE when I tried, not sure if it's a bug of a feature :o)
- Not sure if this is a fitnesse (i think it is) or a FIT issue (Ward's site is down) - I can't see any mention of test lifecycle in the docs - Are multiple tests on a single page run as if from a single controlling test method? Just trying to determine both from a threading point of view and a VM one - ie is the VM run command invoked once per page? Are there plans to support classloader style tricks (for static state) in order to speed up test times? (a la jUnit)
Date format problemfound in brazil when trying to rename a page.
!|java.text.ParseException: Unparseable date: "Mon, 09 Jun 2003 15:23:34 GMT"| | at java.text.DateFormat.parse(Unknown Source)| | at fitnesse.responders.FileResponder.setNotModifiedHeader(Unknown Source)| | at fitnesse.responders.FileResponder.prepareFileResponse(Unknown Source)| | at fitnesse.responders.FileResponder.makeResponse(Unknown Source)| | at fitnesse.FitnesseServer.makeResponse(Unknown Source)| | at fitnesse.FitnesseServer.serve(Unknown Source)| | at fitnesse.socketservice.SocketService$ServerRunner.run(Unknown Source)| | at java.lang.Thread.run(Unknown Source)| ||?
Standard Out needs to be "formatted" as textIf a Fixture (or an application) generates output on StdOut, it shows up at the top of the test page. However, newlines
Support debuggingIt is easy to debug your tests using the FileRunner? from FIT. But you the tests in HTML and not the internet WIKI format of Fitnesse. The following code is a sketch of the solution:
public class CostumizedFitnesseRunner { !|private static final String TEARDOWN = "TearDown";| |private static final String SETUP = "SetUp";| |private static final String TMP_SRC_FILE_PREFIX = "FitnesseTest_";| |private static final String HTML_EXTENSION = ".html";| |private static final String RESULT_PREFIX = "Result_";| !|public static void main(String[] args)| |{| ||String path = null;| ||String testName = null;| !||if (args.length != 2)| ||{| |||System.out.println("Usage: java fitnesse.debug.CostumizedFitnesseRunner &lt;path&gt; &lt;testname&gt;");| ||}| ||else| ||{| |||path = args[0];| |||testName = args[1];| !|||try| |||{| ||||WikiPage wikiPage = FileSystemPage.makeRoot(path, testName);| ||||HtmlWikiPage htmlWikiPage = new HtmlWikiPage(wikiPage.getData());| ||||String html = htmlWikiPage.testableHtml();| !||||if (html.length() == 0) {| |||||System.out.println("Wiki page not found: " + path + "/" + testName);| |||||System.exit(-1);| ||||}| !||||WikiPage setUpPage = FileSystemPage.makeRoot(path, SETUP);| ||||String setUpHtml = new HtmlWikiPage(setUpPage.getData()).testableHtml();| !||||WikiPage tearDownPage = FileSystemPage.makeRoot(path, TEARDOWN);| ||||String tearDownHtml = new HtmlWikiPage(tearDownPage.getData()).testableHtml();| !||||File tmpSrcFile = File.createTempFile(TMP_SRC_FILE_PREFIX, HTML_EXTENSION);| ||||String tmpDstFileName = RESULT_PREFIX + tmpSrcFile.getName();| ||||FileOutputStream fos = new FileOutputStream(tmpSrcFile);| ||||PrintStream ps = new PrintStream(fos);| ||||ps.print(setUpHtml);| ||||ps.print(html);| ||||ps.print(tearDownHtml);| ||||fos.close();| !||||FileRunner runner = new FileRunner();| !||||runner.run(new String[] { tmpSrcFile.getAbsolutePath(), tmpDstFileName });| |||}| |||catch (Exception e)| |||{| ||||e.printStackTrace();| |||}| ||}| |}| }
The handling of SetUp and TearDown? pages isn't really correct since a possible hierarchy isn't handled.
StefanRoock, email: stefanATstefanroockDOTde
HTML dump of Fitnesse-WIKIIntegrate a function into fitnesse to generate a html dump of the wiki pages in Fitnesse. These pages can then be versioned with CVS.
This function sould be easy to implement based on the previous story suggestion ("Support debugging")
StefanRoock, email: stefanATstefanroockDOTde
Access to local files
Widget to sum up a column in a table.Thanks!
MarkEnsign?
Return Additional Metrics DataWe are considering using FitNesse for our test suites, however we need a way of additionally displaying test metrics (number of pages printed, number of images ripped, pages generated per minute, ...). Some of these metrics are like additional test checks i.e. number of pages printed, others need to be plotted over time i.e. pages generated per minute.
Even a pointer on how to code it ourselves could be helpful.
Thanks,
keithDOTadeneyATprintsoftDOTde
where produce() does the test, and pages()/imgs()/ppm() just return cached test results.
Great tool thanks,
Keith
Graceful GettersMost of our objects use getters. It would be nice if FitNesse could drop the gets out on the row and column fixtures, so I could say "account balance?" instead of "get account balance?"
Smart !fixture directiveThe !fixture directive should be aware of the !path directives in force for at least the page it appears on. This would allow for simpler !fixture specifications. It would be even nicer if it were aware of those in affect on the page it was being used on (i.e. in the drop-down list). The significance of this last statement is if a classpath element that contains a fixture is added on a sub-wiki then the fixture name could potentially be shortened when editting that sub-wiki page.
Secure the new Shutdown feature-The new orderly shutdown feature is a security hole and needs to have password protection added. As a convenience feature on my local Fitnesse server I have added a table that is comprised of a single cell that contains the shutdown URL labled as "Shutdown FitNesse Server" to the top level PageHeader. Clicking this "button" on any page shuts down the server as desired. Sweet. The problem is I could add the same information to any public FitNesse server (i.e. this one or butunclebob.com) and then anybody could easily shutdown the server for the entire world (assuming you're using the latest jar). Of course nobody has to have modification permissions to shutdown a FitNesse server. All it really takes is crafting the correct URL in your browser and the server goes down! Password protection would help prevent this.- Withdrawn. This feature is already secured. I had failed to recognize this as I was always operating logged into my server whenever I exercised my button.
Restore Properties property to pages such as /PageHeader in the distribution.-It is annoying to have to manually edit the XML properties file to restore this feature so that modifications such as that described in the preceding item may be made.- Withdrawn after reading the Properties bullet on .FitNesse.MarkupPageAttributes
-
Searching
- Search to support searching in subpages only instead of whole Wiki
- Search not to miss words. In the test case, I have now like 500 words in subpages in a format like this:
when searching certain words are found, and certain word on the very same page are simply not found.
-
Variable names with punctuationIt would be nice if variable names would support punctuation. Especially since Java System properties are implicitly defined as variables and they all use a period for separating words. Currently how would you access a System property like user.dir? I can't do something like !path ${user.dir} currently.
- trim space from page names when doing a rename and move refactoring
- consider prepopulating the refactor field with the current page name (some people may not want this, though)
-
Arbitrary variable assignment within tablesIt would be nice to be able to assign a variable in any table cell that could then be used in any other, regardless of the fixtures used. Syntactically, this could use ${VAR_NAME}= to assign whatever value would normally be rendered in that cell to that variable. For instance:
--
Server side scripting languagesWould it be hard to use a page on a server as a fixture? FitNesse would then just call a server-side script (in PHP, ASP, ...) and gets the answer from that call. This would enable FitNesse to work with all web server scripts in any language. - WillemBogaerts?.
[.FrontPage] [.RecentChanges]
|
http://fitnesse.org/SuggestedStories
|
crawl-001
|
en
|
refinedweb
|
Faster Web Applications with SCGI
June 1st, 2007 by Jeroen Vermeulen in.
If that's not what you see, take a look at the shell where you ran the module. It may have printed some helpful error message there. Or, if there is no reaction from the SCGI server whatsoever, the request may not have reached it in the first place; check the Apache error log.
Once you have this running, congratulations—the worst is behind you. Stop your SCGI server process so it doesn't interfere with what we're going to do next.
Now, let's write a simple SCGI application in Python—one that prints the time.
We import the SCGI Python modules, then write our application as a handler for SCGI requests coming in through the Web server. The handler takes the form of a class that we derive from SCGIHandler. Call me unimaginative, but I've called the example handler class TimeHandler. We'll fill in the actual code in a moment, but begin with this skeleton:
#! /usr/bin/python import scgi import scgi.scgi_server class TimeHandler(scgi.scgi_server.SCGIHandler): pass # (no code here yet) # Main program: create an SCGIServer object to # listen on port 4000. We tell the SCGIServer the # handler class that implements our application. server = scgi.scgi_server.SCGIServer( handler_class=TimeHandler, port=4000 ) # Tell our SCGIServer to start servicing requests. # This loops forever. server.serve()
You may think it strange that we must pass the SCGIServer our handler class, rather than a handler object. The reason is that server object will create handler objects of our given class as needed.
This first incarnation of TimeHandler is still essentially the same as the original SCGIHandler, so all it does is print out request parameters. To see this in action, try running this program and opening the scgitest page in your browser as before. You should see something like Listing 3 again.
Now, we want to print the time in a form that a browser will understand. We can't simply start sending text or HTML; we first must emit an HTTP header that tells the browser what kind of output to expect. In this case, let's stick with simple text. Add the following near the top of your program, right above the TimeHandler class definition:
import time def print_time(outfile): # HTTP header describing the page we're about # to produce. Must end with double MS-DOS-style # "CR/LF" end-of-line sequence. In Python, that # translates to "\r\n. outfile.write("Content-Type: text/plain\r\n\r\n") # Now write our page: the time, in plain text outfile.write(time.ctime() + "\n")
By now, you're probably wondering how we will make our handler class call this function. With SCGI 1.12 or newer, it's easy. We can write a method TimeHandler.produce() to override SCGIHandler's default action:
class TimeHandler(scgi.scgi_server.SCGIHandler): # (remove the "pass" statement--we've got real # code here now) # This is where we receive requests: def produce(self, env, bodysize, input, output): # Do our work: write page with the time to output print_time(output)
We ignore them here, but produce() takes several arguments: env is a dict mapping CGI parameter names to their values. Next, bodysize is the size in bytes of the request body or payload. If you're interested in the request body, read up to bodysize bytes from the following argument, input. Finally, output is the file that we write our output page to.
If you have SCGI 1.11 or older, you need some wrapper code to make this work. In these older versions, you override a different method, SCGIHandler.handle_connection(), and do more of the work yourself. Simply copy the boilerplate code from Listing 4 into the TimeHandler class. It will set things up right and call produce(), so nothing else changes, and we can write produce() exactly as if we had a newer version of SCGI.
Once again, run the application and check that it shows the time in your browser.
Next, to make things more interesting, let's pass some arguments to the request and have the program process them. The convention for arguments to Web applications is to tack a question mark onto the URL, followed by a series of arguments separated by ampersands. Each argument is of the form name=value. If we wanted to pass the program a parameter called pizza with the value hawaii, and another one called drink with the value beer, our URL would look something like.
Any arguments that the visitor passes to the program end up in the single CGI parameter QUERY_STRING. In this case, the parameter would read “pizza=hawaii&drink=beer”. Here's something our TimeHandler might do with that:
class TimeHandler(scgi.scgi_server.SCGIHandler): def produce(self, env, bodysize, input, output) # Read arguments argstring = env['QUERY_STRING'] # Break argument string into list of # pairs like "name=value" arglist = argstring.split('&') # Set up dictionary mapping argument names # to values args = {} for arg in arglist: (key, value) = arg.split('=') args[key] = value # Print time, as before, but with a bit of # extra advice print_time(output) output.write( "Time for a pizza. I'll have the %s and a swig of %s!\n" % (args['pizza'], args['drink']) )
Now the application we wrote will not only print the time, but also suggest a pizza and drink as passed in the URL. Try it! You also can experiment with the other CGI parameters in Listing 3 to find more things your SCGI applications can do..
Subscribe now!
Featured Video
Linux Journal Gadget Guy, Shawn Powers, takes us through installing Ubuntu on a machine running Windows with the Wubi installer.
Great article
On April 13th, 2008 Yorumcu (not verified) says:
Thanks for article. It's work.
----------------------------------
Betsson Casino Euro
|
http://www.linuxjournal.com/article/9310
|
crawl-001
|
en
|
refinedweb
|
Published September 2006
Most organizations build a data warehouse to provide an
integrated, reliable, and consistent “single version of the
truth.” Data is usually sourced from a number of systems and has
to be extracted, cleansed, and integrated before being made available
for users to query.
The quality of the data loaded into the data warehouse is
often variable, however, and for that reason, historically the process
of profiling your source data has been a time-consuming, manual process
that has required either lots of experience with SQL*Plus or the
purchase of an expensive third-party tool.
With the release of Oracle Warehouse Builder 10g Release 2,
however, the ability to profile your data is built into the tool and no
knowledge of SQL*Plus is required. Furthermore, the data profiles that
you build using Oracle Warehouse Builder can be used to generate
automatic corrections to your data. In this article, you’ll learn
all the nuances of this important new feature.
Data within your data warehouse can only be turned into
actionable information when you are confident of its reliability. When
you bring data into your data warehouse, you need to first understand
the structure and the meaning of your data, and then assess the quality
and the extent to which you may need to cleanse and transform it. Once
you know what actions you need to take, you then need to make the
required corrections to the data, and put in place a means to detect
and correct any more errors that might occur in future loads. To do
this, Oracle Warehouse Builder 10g Release 2 includes three new features that make this process simple and straightforward:
Apart from removing the need for complex SQL*Plus scripts or
third-party tools, doing your data profiling and corrections within
Oracle Warehouse Builder has several advantages. The metadata that you
generate about your data quality is stored alongside the other metadata
in your design repository. Also, the mappings used to correct your data
are regular Oracle Warehouse Builder mappings and can be monitored and
managed with all of your other ETL (extract, transform, and load)
processes. Doing your data cleansing and profiling within Oracle
Warehouse Builder means that you only have to learn a single tool, and
in addition, by integrating this process with your other ETL work, you
ensure that data quality and data cleansing becomes an integral part of
your data warehouse build process, and not just an afterthought.
If you are new to Oracle Warehouse Builder, or you have
experience with earlier versions of the tool, you should note that the
packaging and licensing has changed with this latest release. In the
future, the “core ETL” features of Oracle Warehouse
Builder, which roughly equate to the functions previously available
with earlier versions of the tool, will be provided free as part of the
database license. Additional functionality that supports deployments in
the enterprise is now provided via options to the Enterprise Edition of
Oracle Database. To take advantage of the functionality described in
this article, you will need to license the Warehouse Builder Data
Quality Option for Oracle Database 10g. (For more details, refer to the document “Oracle Database Licensing Information” on OTN.)
To show how you can use Oracle Warehouse Builder 10g
Release 2 to profile, correct, and then audit your data, we will
consider a situation where you have a requirement to profile and
cleanse some product data required for your data warehouse.
In this example, you will use Oracle Warehouse Builder 10g
Release 2 to first profile, and then correct data about the products
offered by your company. You will use the data profiling feature within
Oracle Warehouse Builder to determine the structure and characteristics
of your data, and to automatically derive a set of data rules that will
be applied to your data. Using these data rules, you will then
automatically generate a series of data corrections that will take your
source table and create a corrected table from it.
The data that you want to profile is contained in a table called PRODUCT. This is defined as follows:
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Nov 11 15:52:17 2005Copyright (c) 1982, 2005, Oracle. All rights reserved.Connected to:Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - ProductionWith the Partitioning, OLAP and Data Mining optionsSQL> desc products Name Null? Type ----------------------------------------- -------- ---------------------------- PROD_ID NUMBER PROD_NAME VARCHAR2(50) PACK_COUNT VARCHAR2(10) AVAILABLE_DATE DATE MARKET_SEGMENT VARCHAR2(50) MANUF_COUNTRY VARCHAR2(50) REORDER_YN VARCHAR2(1)
You can install the sample data using the Oracle export file provided in the sample code download.
To import the data, create the user that will contain the data
(PRODUCT_CATALOG, for example), grant your usual privileges, and then
import the data as follows:
Imp product_catalog/password@connect_string file=product_catalog.dmp fromuser=product_catalog touser=product_catalog
Using Oracle Warehouse Builder, you then view the data in the table.
You notice that there appears to be a few anomalies in the
data. For example, one of the market segments is misspelled. In
another, a manufacturing country is incorrectly listed as
“England” rather than “UK.” At this point, you
can now decide to use Oracle Warehouse Builder 10g Release 2 to profile and correct your data.
Once you have created a project using Oracle Warehouse
Builder, and imported the metadata for the PRODUCT table, you should
then right-click on the Data Profiles node in the Project Explorer and then select New…
to start the Data Profiler wizard. Once the wizard has been used to
select the PRODUCT table for profiling, and the profiling job submitted
for asynchronous processing, you would be presented with the Data
Profile Editor, as shown below. Note that if this is the first data
profile that you have created, Oracle Warehouse Builder will prompt you
to create a profile module to hold the results, and you can specify the
database user and tablespace if you want to store them away from the
data you will be profiling.
The Data Profile Editor has a number of panels. These show objects that have been profiled and the results of the profiling.
On the left side of the Data Profile Editor are panels that
show the objects—tables, views, materialized views, external
tables, dimensions, and facts—that have been profiled during this
exercise, and details of any corrected modules that have been created.
Below the list of profiled objects and corrected modules is a listing
of the properties associated with the data profile. Using this list of
properties, you can fine-tune the parameters for your data profile;
enable or disable certain data profiling components; and enable data
rule profiling for the selected table. At the bottom of this set of
panels is the Monitor Panel. This shows the progress of any profiling
jobs that you have submitted. Because data profiling can sometimes take
awhile to complete, you can submit jobs to be completed in the
background while you perform other Oracle Warehouse Builder tasks;
Oracle Warehouse Builder will alert you when the job is complete.
On the right side of the Data Profile Editor is a set of
panels that show you the results of the data profile. The top panel
contains the Profile Results Canvas with a number of tabs containing
summaries of the profiling results.
The Data Type tab details the documented and dominant data types for each column in the object and their minimum and maximum lengths.
The Unique Key tab shows any columns where
either a unique or primary key constraint has been detected for a
column. It also shows columns where the number of unique values is
sufficiently high enough to suggest that a unique key could be defined
if the nonunique rows were removed or corrected. Note the Six Sigma
score for each column. This is a figure between 1 and 7 that indicates
the number of “defects per thousands”—rows that would
fail a unique constraint—in the object.
The Profile Object tab allows you to query
the records within the profiled object, and can be used instead of
SQL*Plus to view data within the object, optionally using the Where Clause, Execute Query, and Get More buttons to refine your query.
The Domain tab shows for each column in the
profiled object the suggested domain, and the degree to which the data
is compliant with this domain. A domain is defined as the set of
allowable values for the column, and Oracle Warehouse Builder will
include within the domain any column value that occurs more than once.
Once you have a domain defined for a particular column, you
can use it to derive a data rule that will be applied to your data.
Oracle Warehouse Builder will then implement this data rule as a check
constraint on the column, enforcing your data rule at the database
level to ensure that it is never broken. For those columns where values
are present that would otherwise break the data rule, you can use
Oracle Warehouse Builder to automatically correct your data.
While examining the data profile for your products table, you
will notice that Oracle Warehouse Builder has suggested a domain for
the MANUF_COUNTRY column containing Canada, USA, and UK. These
countries were included in the suggested domain because they occurred
more than once in your dataset. England and Mexico were excluded
because they only occurred once.
From speaking to your business users, you know that in fact
the domain for this column should be Not Known, Canada, USA, UK, and
Mexico, even though Mexico only occurs once; whereas any occurrences of
England should in fact be changed to UK. What you would like to do
therefore is use Oracle Warehouse Builder to correct this incoming
source data, and place a constraint on the data warehouse’s copy
of the table to enforce this domain.
The first step in this process is to use the Derive Data Rule
button below the suggested domain results to invoke a wizard. This
wizard takes you through the steps of viewing the suggested domain,
making any amendments, and then creating the data rule. In your
instance, you first move the “Mexico” value over to the Selected values list, and then type in “Not Known” at the bottom of the list.
Now that you have created the data rule, the second step is to
create a correction. When you create a correction, Oracle Warehouse
Builder will first ask you to either select an existing target module,
or create a new one to contain the corrected version of the object.
This will by default have the same definition as the original object,
except with the new data rules (implemented as check constraints or
unique keys) additionally present.
Once the target module has been specified, the next step in
the process is to select which of the data rules will be used to create
the correction (see below). In your instance, you will use the data
rule defined previously and two others you have defined
previously—one that excludes all records that have a REORDER_YN
value of “N” and another that ensures that all values in
the MARKET_SEGMENT table have a value of either “Economy”
or “Executive.”
Now that the rules on which the data corrections are to be
based have been selected, you should specify the action and the cleanse
strategy for the corrections.
When specifying the actions, you can either
For those columns where you have chosen to cleanse the data, you can select one of four cleansing strategies:
When you complete the selection of your cleansing actions and
strategy, the wizard will then take your specification and create a
mapping within the target module to implement the corrections.
When you then return to the Design Center, you will notice the
new target module that will contain your corrected data. Within the
module, you will notice the table definitions that hold your corrected
data, and the mapping and transformations that implement the data
correction.
If you examine the mapping that implements the correction, you
will note that the mapping first reads data from the original PRODUCT
table, and then attempts to load it into a staging copy of the table
with the data rule applied to it.
Those rows that pass the data rule are then copied into the
corrected table. Those that fail any of the rules are then cleansed via
“pluggable mapping”: a feature new to Oracle Warehouse
Builder 10g Release 2 that allows you to take a series of
mapping steps and “plug” them into another mapping. You can
then examine the contents of the pluggable mapping to see how it has
been implemented—although currently the only way in which you can
amend this correction is to delete the correction and generate it again.
Going back to the Project Explorer, you will notice the two
transformations that the correction wizard has created for you: (1) the
CUS_MANUF_COUNTRY function, a “shell” PL/SQL function that
will hold your custom cleansing logic, and (2) the SIM_MARKET_SEGMENT
function that the wizard will have automatically implemented.
To add the program logic that will implement your custom cleansing action, double-click on the CUS_MANUF_COUNTRY function and add the required logic:
Finally, to test the data correction you should deploy the
correction objects, transformations, and mappings via the Control
Center, and then run the correction mapping.
Now, you check the contents of the corrected PRODUCT table to verify that the results are as you expected.
You note that all MARKET_SEGMENTS are now either
“Executive” or “Economy,” the one row that had
REORDER_YN equal to “N” has been removed, and the instance
of “England” as a MANUF_COUNTRY has been altered to
“UK.”
Now that you are satisfied that the corrections you have
created using Oracle Warehouse Builder are working correctly, you
finally set up a Data Auditor within Oracle Warehouse Builder to
monitor the quality of further incoming data.
A Data Auditor is a process that can either be run on an ad
hoc basis, or can be included along with mappings, subprocesses, and
transformations in a process flow, and then scheduled for execution on
a predetermined basis. Data Auditors use the data rules that you either
derive or manually define, and can provide statistical reports on how
compliant your data is, which can then be stored in audit and error
logging tables. Data Auditors are also programmable, such that you can
specify that they contact you with a notification after scoring below a
certain threshold, and then with your permission run a cleansing
mapping to clean up the data. Once this cleansing mapping has run, you
can program the auditor to only continue with the rest of the ETL
process if the audit score is now above a certain level or Six Sigma
score, avoiding the situation where dirty data is loaded into the data
warehouse and effort is needed to remove it later.
Oracle Warehouse Builder represents a step change in the
process of profiling, correcting, and then monitoring the quality of
data within your data warehouse. The graphical data profiling feature
within Oracle Warehouse Builder provides an easy to use and easy to
understand facility. While this feature allows you to view the
structure, meaning, and quality of your data, the correction wizard
provides a means of taking your profile results and using them to
automatically generate mappings to correct your data. Once you have
used the Data Profiler to assess the quality of your data, you can then
use the data rules generated to create Data Auditors, which will allow
you to periodically monitor the quality of new data coming into your
data warehouse.
|
http://www.oracle.com/technology/pub/articles/rittman-owb.html
|
crawl-001
|
en
|
refinedweb
|
>>> a = [1, 2, 3] >>> b = [a, a] >>> b[0] is b[1] 1 >>> import marshal >>> serialization = marshal.dumps(b) >>> b1 = marshal.loads(serialization) >>> b == b1 1 >>> b1[0] is b1[1] 0 >>> b[0].append(4) >>> b1[0].append(4) >>> b == b1 0 >>> print b [[1, 2, 3, 4], [1, 2, 3, 4]] >>> print b1 [[1, 2, 3, 4], [1, 2, 3]] >>>-- in other words, before marshalling b its two elements were provably the same object, but after unmarshalling we have a list of two different objects (whose value is the same). Not only is this less efficient (the value of a is marshalled twice) but also are the semantics are different: b[0].append(4) has a different effect on the value of b than b1[0].append(4) has on the value of b1.
This problem is easily solved by using a bottom-up serialization algorithm which assigns unique (within a particular serialization context) identifiers (uids) to each object and allows references to objects by their uid.
A harder problem is that of marshalling user-defined class instances and certain built-in objects like open files or sockets.
There are two problems with marshalling class instances: (1) how to find the class at unmarshal time, and (2) how to create an instance of it, given the possibility that it has a constructor method (__init__) requiring arguments.
Re (1): marshalling the class itself would be a possibility. This would even seem quite logical if we require marshalling (serializing) to follow all pointers out of an object -- the instance contains a pointer to the class, after all. However, there are two reasons why I don't like this: first, classes are often quite big (marshalling the class would imply marshalling the code for all its methods and base classes); second, I want to be able to load the instances back into a process that has a modified version of the class (e.g. one with a bug fixed).
A simple solution that will work in most (but not all!) cases is to indicate the class to which a marshalled instance belongs by a (modulename, classname) pair. The unmarshalling code can then import the module (if it hasn't been imported yet) and extract the class from it in order to create a new instance.
Since classes may be created dynamically, knowing a class' name and even the module in which it was created are not necessarily sufficient to be able to uniquely identify it. However this is sufficiently rare that it is probably acceptable to disallow marshalling instances of such dynamically created classes.
A minor sub-problem is how to find the module name given only the class instance (the class name is found as x.__class__.__name__). I suggest that the class creation code is modified to automatically add the module name to the class, so it can be accessed as x.__class__.__module__. In a prototype implemented in Python without modifications to the interpreter, we can scan the class dictionary for functions (i.e. methods) -- a function object contains a pointer to its globals which by definition is the dictionary of the containing module (and the key '__name__' in that dictionary gives the module name).
Re (2): A pragmatic solution would be to only allow marshalling class instances whose constructor accepts an empty argument list. This means that the constructor is called by the unmarshalling process to create an empty instance, after which the instance variables are filled in one by one through normal attribute assignments. (In a prototype implemented in unmodified Python, we have no other choice than to do it this way.)
A problem with this approach is that it requires a lot of extra work: the constructor probably assigns default values to all instance variables, which are then overwritten by the unmarshalled values. It also won't work if the class traps attribute assignments using a __setattr__ method (or a class-wide access statement).
This could be solved by adding a low-level class instantiation function which does not call the constructor. The unmarshalling code could then extract __dict__ from the "virginal" object and place instance variables in it. In order to allow the class some control over unmarshalled objects, it could define a __finalize__ method which would be called (if it existed) to finish the initialization. (This may be necessary e.g. when the class wants to keep a list or count of its instances, or if the instance needs to be linked up to some outside entity such as a UNIX file descriptor or an electric switch.)
Some more things: (3) how to marshal built-in objects like open files, sockets, windows; (4) can't we provide a way for user-defined classes to override the serialization -- this seems much more powerful than only having a hook for __finalize__; (5) how about marshalling objects that are internal to the interpreter such as stack frames, tracebacks and modules.
Re (3): Such objects usually have some of their relevant "state" stored in tables that reside in the operating system kernel. In general it is hopeless to try to restore this -- you may be able to save and restore the filename, seek position and read/write mode of a disk file, assuming you will be unmarshalling in the same environment, but "special files", sockets and windows are usually only meaningful after a handshake with some server process. I propose that these object types are marshalled as some kind of placeholder (maybe a "closed" object of the appropriate variety) so a __finalize__ method can be written that attempts to renegotiate the connection. (Raising an exception at either marshal or unmarshal time would be rather antisocial -- in many cases marshalling will be used to save a copy of important data before dying, and it should be appropriately robust.)
Re (4): I think not -- this would break robustness. (Note however that we must allow built-in object types to override the marshalling code -- this is an interesting problem in itself since built-in types may be created by dynamically loaded modules.)
Re (5): "live" stack frames will no longer be alive after unmarshalling, but can still be inspected. Both stack frames and tracebacks (which contain stack frames) can point to the globals in a module. Part of me says that these dictionaries should be marshalled as if they were ordinary objects (and if the 2nd/3rd arguments to exec or eval() are used, they needn't belong to a module at all). Part of me says that this will be too expensive and that it's better to link them to the corresponding modules in the unmarshalling process -- if only because this is also done for class instances.
Anders Lindstrom has written a nice prototype of a persistent heap (and this without looking at these notes :-).
import SharedMemoryHeap import RemoteServerHeap heap1 = SharedMemoryHeap('
') heap2 = RemoteServerHeap(' : ') heap3 = RemoteServerHeap(' : ') foo1 = heap1.create('foo') foo2 = heap2.create('foo') bar1 = heap1.load('bar', READONLY) bar2 = heap2.load('bar', READWRITE) ... foo1.commit() # commit this particular object heap2.commit() # commit all modified objects on this heap
--Guido van Rossum, CWI, Amsterdam
|
http://www.python.org/workshops/1994-11/persistency.html
|
crawl-001
|
en
|
refinedweb
|
has so far been tested up to Python 2.5a2. Python 1.6 or later is required for proper treatment of Unicode in XML.
For details of the PXTL language itself, please see the accompanying language documentation.
The ‘pxtl’ directory is a pure-Python package which can be dropped into any directory on the Python path.,
for example
C:)
Using
pxtl.processFile means you only want the final output and don’t
care which implementation is used. Normally this will be the optimised implementation. If you want to specify a particular
implementation (for testing, for example) you can import the ‘reference’ and
‘optimised’ sub-modules and call
pxtl.reference.processFile
or
pxtl.optimised.processFile.
Note: when the optimised implementation is in use, the current user must have write-access to the folder containing the template, so that a compiled Python bytecode file can be stored there (just like with standard Python .pyc files, but normally with the file extension .pxc instead of .pyc). See the bytecodebase argument for ways around this. If a bytecode file cannot be stored, the template will have to be recompiled every time processFile is called, which is extremely slow.
Whichever implementation you use, the arguments are the same. The first argument path is required and holds the= True, headers= True)
dom: a DOM implementation to use for XML work. This should normally be left unspecified, to use the embedded DOM implementation ‘pxdom’, which is the only suitable native-Python DOM Level 3 Core/LS implementation at the time of writing.
bytecodebase: allows the location of bytecode cache files (.pxc) generated by the optimised implementation to be changed..)
Typically this feature is useful when you want to run PXTL in a restricted user environment
such as CGI on a web server (which on Unix-like Operating Systems typically runs as the
unprivileged user ‘nobody’), and you want to allow this user to save bytecode
files without giving them access to the template folder. Another possibility in this case is to
precompile the bytecode files at install-time (see
pxtl.optimised.compileFile).
If the bytecodebase argument is set to
None, the optimised implementation
will make no attempt to store bytecode files, and the
pxtl.processFile
function will normally prefer to use the reference implementation. standard. If
the document uses
<px:import> elements, it
must also support DOM Level 3 Core and LS.:
Used to pre-compile the bytecode files used by the optimised implementation, so that
they can be loaded and executed quickly in the future without having to have write access
to the place where bytecode files are stored. This can be used as part of an application
install procedure in the same way as the standard library function
py_compile.compile
(with doraise set). The files themselves are exactly the same format as
standard .pyc files, so are dependent on the version of Python used as well as the version
of PXTL; if either do not match the template will be recompiled at run-time.
The required arguments to
pxtl.optimised.compileFile are
path and pathc, giving the pathname of the template source file
to load and the compiled bytecode file to save, respectively.
Further optional arguments include globals and dom, which work
the same way as the
processFile arguments of the same
name. globals is only used as the scope for calculating the contents of the
doctype attribute, the only part of a PXTL template run at
compile-time, so it too is not normally needed.
Finally there’s method and encoding, which can be used to
override the output method (given in the
px:doctype value)
and the character encoding (given in the
<?xml?>
declaration). These are only generally required where the imported template has a different
output method or encoding to the template that imports it (because in PXTL the importing
templateଁs doctype is always used in preference). Typically this happens when an
XHTML template to-be-imported omits
px:doctype, relying on
the importing template to set the method to xhtml.
Otherwise they can be omitted. If the output method and encoding are wrong when the template comes to be run, it will simply be recompiled, so there is not normally a problem, except when permissions don't allow the bytecode to be written, resulting in slow execution every time.
Compiles bytecode files recursively inside the given dir, analagous to the
standard library
compileall function.
Uses the optional argument bytecodebase as in
processFile,
plus globals, dom, method and encoding from
compileFile, and maxlevels and quiet from
the standard
compileall (so it defaults to 10 levels deep).
Having compiled a template or unmarshalled its code from a bytecode file, you can run it
using the standard Python
exec statement. However, it expects
a few internal PXTL variables to be put in its global scope before it can run, so that it knows,
for example, what stream to send the output to.
Use the
prepareGlobals function to write these internal
variables before calling
exec code in mapping. The arguments
are writer, baseuri, headers, dom
and bytecodebase.
baseuri should hold the URI of the source template, and can be
omitted if PXTL
import elements are not used, or if they are
only used with absolute-URI
src attributes. The other
arguments are the same as for
processFile= True)
Add the argument
debug= True requires standards-compliant DOM Level 3 Core and LS functionality.
For this reason the it which work the same as the Python standard
library module xml.dom.minidom.
More information on pxdom and using it in other applications.
If you have another DOM 3 implementation you would rather pxdom used, pass its DOMImplementation module into the processFile function’s dom argument..
|
http://www.doxdesk.com/file/software/py/v/pxtl-1.6.html
|
crawl-001
|
en
|
refinedweb
|
User Name:
Published: 14 Feb 2007
By: Xun Ding
ASP.NET provides a number of ways of working with client-side script. This article explores the usage and drawbacks of ASP.NET script callbacks, and briefly presents a bird's view of ASP.NET AJAX.
ASP.NET server-based web forms use a request-response model. Each request results in a roundtrip a.k.a postback to the server. Each postback causes the requesting page to be refreshed, regardless of the nature or the sender of the request. With web users becoming more and more sophisticated, the request-response-refreshing (or click-n-wait) model becomes less and less desirable and in the case of heavy webpages, the lack of smooth performance simply drives users away.
postback
To remedy this, ASP.NET allows various ways to work with JavaScript. One class of them is the ClientScript class, which interjects JavaScript into a webpage, to perform a host of pure client-side tasks, such as window pop-ups, setting focus etc. This class uses strictly a server-centric model, in which the server controls the operation.
ClientScript
Another option is to use an object called XMLHttpRequest to bridge across server and clent-side script, and to enable client-side script to make direct calls to the server, retrieve server-side data, then partially and selectively refresh the calling page. With this model, ASP.NET enables you to create both server-centric and client-centric applications.
XMLHttpRequest
There are two ways from the client-side script to make server calls and achieve partial page rendering. One is using the ASP.NET built-in script callback mechanism, the other is the AJAX technology. In this article, we'll examine the Script Callback techniques in detail, we'll also briefly look at AJAX in general, and the programming models in ASP.NET AJAX.
AJAX
Callback
ASP.NET AJAX
To shield programmers from the intricacies of making use of the XMLHTTP object, ASP.NET introduces a string of operations on the server and client side to accomplish Script Callbacks.
XMLHTTP
On the server side, the code-behind page class must define a method called GetCallbackEventReference method, it also must implement the ICalbackEventHandler interface. The ICalbackEventHandler interface requires two methods:
GetCallbackEventReference
ICalbackEventHandler
RaiseCallbackEvent
GetCallbackResult
Generally we define the GetCallbackEventReference method in the web form's Page_Load event. It returns the JavaScript command WebForm_DoCallback that is to be executed once the server has finished serving the call. The WebForm_DoCallback in turn is stored in the resources of the system.web assembly and will be retrieved through the WebResource.axd HTTP handler.
Page_Load
WebForm_DoCallback
system.web
WebResource.axd
On the client side, at least two methods must be defined. The first one is the event handling method, responding to an event such as onclick, onmouseover; the second is used to process the response from the callback, and accordingly update's the necessary part of the page.
onclick
onmouseover
callback
To illustrate these rather convoluted procedures, we use an example that provides suggestions based on user's input in a textbox. (Please note, this ASP.NET code is tailored from the original AJAX code written in ASP and JavaScript at.)
<form id="form1" runat="server">
First Name: <input type=text name=tFirstName
</form>
<p>Suggestions: <span id="txtHint"></span></p>
using System;
using System.Web.UI;
//implement ICallbackEventHandler Interface
public partial class scriptcallback : System.Web.UIcript.GetCallbackEventReference
(this, "message", "ShowHint", "null", "null", false);
}
//The parameter eventArg is the argument parameter of
//the GetCallbackEventReference,passed from the JavaScript.
void ICallbackEventHandler.RaiseCallbackEvent(string eventArg)
{
String[] a = new String[31];
a[1] = "Anna";
a[2] = "Brittany";
a[3] = "Cinderella";
a[4] = "Diana";
a[5] = "Eva";
a[6] = "Fiona";
a[7] = "Gunda";
a[8] = "Hege";
a[9] = "Inga";
a[10] = "Johanna";
a[11] = "Kitty";
a[12] = "Linda";
a[13] = "Nina";
a[14] = "Ophelia";
a[15] = "Petunia";
a[16] = "Amanda";
a[17] = "Raquel";
a[18] = "Cindy";
a[19] = "Doris";
a[20] = "Eve";
a[21] = "Evita";
a[22] = "Sunniva";
a[23] = "Tove";
a[24] = "Unni";
a[25] = "Violet";
a[26] = "Liza";
a[27] = "Elizabeth";
a[28] = "Ellen";
a[29] = "Wenche";
a[30] = "Vicky";
Int32 i;
if (eventArg.Length > 0)
{
for (i = 1; i < 31; i++)
{
String suggestion = a[i].Substring(0, eventArg.Length).ToUpper();
if (eventArg.ToUpper() == suggestion)
{
if (hint == "")
hint = a[i];
else
hint = hint + " , " + a[i];
}
}//'Output "no suggestion" if no hint were found
// 'or output the correct values
if (hint == "")
hint = "no suggestion";
}
else
hint = "no suggestion";
}
//Return a string that will be received and processed
// by the clientCallback JavaScript funtion
String ICallbackEventHandler.GetCallbackResult()
{
return hint;
}
}
<script language=javascript>
function GetHint()
{
var message = document.form1.tFirstName.value;
var context = '';
<%=getHintFromServer%>
}
function ShowHint(hint) {
var spanHint=document.getElementById("txtHint");
spanHint.innerHTML=timeMessage;
}
</script>
Using ASP.NET Script callbacks has a few issues:
1. Cross-browser Support
ASP.NET Script callbacks will work with Internet Explorer, Firefox and other Mozilla-compliant browsers. It ensures each compliant browser can call the server and return with server-side response, however not all browsers supports dynamic partial page updates. Even with those that do, it is important that script code strictly follows the W3C DOM (Document Object Model - DOM) standards.
In the above example, to update the text inside the span tag called txtHint, instead of directly calling txtHint.innerHTML or using document.all.txtHint: We use the document.getElementById method to retrieve the element then update its content:
span
txtHint.innerHTML
document.all.txtHint
document.getElementById
var spanHint=document.getElementById("txtHint");
spanHint.innerHTML=...;
2. Server's Response Delivery Format
With each script's callback, the server responds with a string. This works very well with small amount of data. However, it gets problematic with large complex data types. It defeats the purpose if we have to rely on the client-side script on laborious and intensive data parsing, let alone doing so is error-prone.
3. Problem with multiple callbacks
ASP.NET script callback works fine with a single call generated from a single control. However, it gets messy if multiple controls issue multiple callbacks. Because no matter where and how the call is generated, it all gets to be handled by the RaiseCallbackEvent method on the server side. To differentiate the calls and their senders, there is only one way, which is to bundle all necessary information into a string and pass that to the GetCallbackEventReference method, which in turn will be passed to the RaiseCallbackEvent method. It is also possible to write a custom class to handle multiple callbacks.
callbacks
For more details, please see Script Callbacks in ASP.NET 2.0 - interesting, but lacking.
The bundle of technologies (DHTML, JavaScript, XmlHTTPRequest, CSS) that is now coded as AJAX (Asynchronous JavaScript And XML) has been around for a long time. However, only after the launch of a string of successful Google applications (such as Google Maps, Google Suggest, and Gmail) it gained enormous and lasting popularity.
In its bare bone, AJAX makes direct calls to server using the JavaScript XMLHttpRequest object. The XMLHttpRequest object has a readyState property with 5 possible values, a value of 4 indicates a call to server is complete and the calling party can update part of the page based on the response sent back from the server.
readyState
The raw infrastructure and logic of AJAX is very simple. However, using AJAX in its raw form to write truly interactive and appealing web pages demands extensive skills in JavaScript and sidestepping the various traps caused by the subtle differences between various browsers. It also entails that programmers reinvent the wheel, repeatedly rewriting the same code to recreate the same behavior for common controls. Therefore, many AJAX frameworks have sprung up to free programmers from toiling around sending request and processing responses at its lowest level. Many frameworks also provide a set of sophisticated tools and protocols. ASP.NET AJAX (formerly known as ATLAS) is such a comprehensive yet still fast evolving framework designed for ASP.NET.
ATLAS
An ASP.NET AJAX application can be both server-centric or client centric. In a server-centric application, partial page updates are administrated by a server control called UpdatePanel. It behaves as a intermediate broker between the requesting and responding party, as shown in the following diagram:
UpdatePanel
For a regular ASP.NET server page to realize partial page-update, the easiest way is to place one or more parts of web page inside an UpdatePanel. A web page can have multiple UpdatePanel controls, an UpdatePanel can also have nested UpdatePanel controls. It is also allowed to have controls outside of an UpdatePanel defined as triggers to fire a partial update.
While it is possible for the UpdatePanel control to work with client script, the functionalities are rather limited - such as stopping and canceling asynchronous postbacks, custom error-message handling, etc. And you must call a reference to the PageRequestManager class in the Microsoft AJAX library. It is more common to use the AJAX library to create custom client components to enable client behaviors, and then use these components as server controls' extensions. The host of components provided by the ASP.NET AJAX Control Toolkit is such custom client components.
PageRequestManager
ASP.NET AJAX Control Toolkit
In a client-centric AJAX application, a JavaScript function directly calls a Webservice through a script proxy class. Again a new set of rules must be strictly followed. First for a webservice to be consumed by a client-side script, it must be marked with the ScriptService attribute. In the calling page, a reference to the Webservice must be placed inside a ScriptManager control and specify its path to the Webservice, as in the following code:
Webservice
webservice
ScriptService attribute
ScriptManager
<asp:ScriptManager
<Services>
<asp:ServiceReference
</Services>
</asp:ScriptManager>
It is also possible to for a JavaScript to call a static page method that is marked with the [WebMethod] keyword.
[WebMethod]
To enable the communication between server and client, more importantly, to enable partial page rendering, ASP.NET has two major approaches: one is script callbacks, the other ASP.NET AJAX. While script callbacks represents an earlier primitive approach which leaves much to be desired, ASP.NET AJAX represents a sophisticated framework that includes server and client-side components, a JavaScript library (AJAX library), and a collection of server-side classes and methods (AJAX extensions).
While ASP.NET AJAX has enjoyed a lot of exposure and gained lasting popularity and influence, script callbacks are much less well-known. However, given that script callbacks are extensively used by such ASP.NET controls as the TreeView and GridView, its approach still merits some exploration. Therefore this article showed the details of the usage of script callbacks, examining some of its downsides, and briefly introduces the underlying logic of AJAX, and the programming models of ASP.NET AJAX in particular.
TreeView
GridView
JavaScript with ASP.NET 2.0 Pages - Part 1ASP.NET AJAX documentationCutting Edge: Perspectives on ASP.NET AJAXCutting Edge: Implications of Script Callbacks in ASP.NETScript Callbacks in ASP.NET 2.0 - interesting, but lacking.
Using WebParts in ASP.Net 2.0
This article describes various aspects of using webparts in asp.net 2.0.
An Architectural View of the ASP.NET MVC Framework
Dino Esposito introduces the ASP.NET MVC framework.
Reply
|
Permanent
link
Xun
Beautiful Girls
Programmatically Speaking ...
//spanHint.innerHTML=timeMessage;
spanHint.innerHTML=hint;
//String suggestion = a[i].Substring(0, eventArg.Length).ToUpper();
using System;using System.Web.UI;public partial class Test : Page, ICallbackEventHandler{private String hint;pu
Reply
|
Permanent
link
using
public
{
using System;using System.Web.UI;public partial class Test :
Reply
|
Permanent
link
getHintFromServer =
|
http://dotnetslackers.com/articles/aspnet/javascript_with_asp_net_2_0_pages_part2.aspx
|
crawl-002
|
en
|
refinedweb
|
LiteratePrograms:Public forum
From LiteratePrograms
This is the public forum, a place for open discussion among all members. Feel free to discuss anything. Please don't remove this notice.
New Template?
If we take a look at Selection sort (Haskell) we can see {{codedump}} in use. There is, however, a bigger problem on that page. The code that's there cannot possibly compile. It makes use of a symbol "usun" which is not defined anywhere. Neither GHC's online docs nor the Hoogle search engine knows of it, so it looks to me as if someone cut-and-pasted a block of code that referenced something that didn't make it. Perhaps we could also have a {{incomplete}} or {{broken}} template to mark pages like this? MTR/严加华 23:13, 15 June 2007 (PDT)
- Done. See {{incomplete}}. --Allan McInnes (talk) 14:53, 16 June 2007 (PDT)
Latex2mediawiki and noweb/funnelweb compatibility
If I use noweb to generate a latex document, is there a way to convert the latex document into the equivalent mediawiki syntax? May be to redefine the latex macro, to expand to its corresponding mediawiki syntax. (Assuming that the macro used in that latex document contains only a subset of macro that can be mapped to mediawiki syntax.) Anyhow, can the document written in mediawiki syntax be exported to pdf/html/etc., by the command line in batch mode? --Ans 02:21, 4 January 2008 (PST)
One more question. Does noweb and funnelweb have compatible syntax? --Ans 02:21, 4 January 2008 (PST)
- Hmm, well, LaTeX is in general much more expressive than Mediawiki wiki syntax, and I'm not aware of any translation tools between the two (I don't believe it would be straightforward; some parts would probably have to get embedded in math tags). I haven't used funnelweb so I can't speak to compatibility there. Deco 06:44, 8 March 2009 (MDT)
Page load problem
When trying to load Look and say sequence (sed), this results in
- Catchable fatal error: Object of class Title could not be converted to string in /nr/web/literateprograms/includes/Database.php on line 1182
This problem seems to have appeared recently (the page has been changed two days ago, so at least then it probably loaded OK), so this probably is caused or triggered by a recent change. --Ce 03:57, 13 April 2008 (EDT)
- As I just noted, this problem seems to be more general; basically all pages containing code seem to be affected. It's obviously not the brackets in the title because the (wrongly titled) page Ada Hello World doesn't load either. --Ce 04:05, 13 April 2008 (EDT)
- Actually, pages without code also seems to be affected. Maybe all pages in main namespace are affected? This might be because of a software update somewhere in the system (f.ex. new PHP version). Anyway, there isn't much we can do as long as we don't have access to the server. Ahy1 10:54, 13 April 2008 (EDT)
- Guess we'll just have to be patient. I'm sure Deco will take a look at it soon. -- Derek Ross | Talk 14:28, 13 April 2008 (EDT)
- This problem is no longer reproducing and I'm not sure what was up. Please let me know if you encounter it again. Deco 06:42, 8 March 2009 (MDT)
Sandbox
Consider moving the sandbox to another namespace. LiteratePrograms:Sandbox, for instance. Codeholic 12:59, 23 April 2008 (EDT)
- That wouldn't work currently, because the literate programming functionality is only turned on for pages in the main article space. Deco 06:41, 8 March 2009 (MDT)
Spam attack
Currently the vast majority of all edits seem to be insertion/removal of spam. The pattern of the spam is easy to recognice: An IP inserts lots of external links. Therefore I think automatic link creation should be prevented. The least intrusive (but probably the most complex to implement) would be to have captchas for every anonymous edit creating an external link (one could further make a whitelist of sites where a captcha is not needed, to further reduce the impact on anonymous users). In addition, Account creation would have to be protected with captchas.
If implementing captchas is too complex, maybe another solution would be to automatically reject edits which insert more than five external links at once. Given that legitimate edits usually don't need many external links (most don't even need any external link at all), this shouldn't be a too serious restriction. However there would be the danger of the spam bots adapting by making lots of edits to the same page, inserting a few links each time, which would be even worse than the current situation.
Disallowing link creation for anonymous users would be another solution. After all, an anonymous user can insert the URL as text and then ask a non-anonymous user to convert it into a link (or, alternatively, he can just register and add the link himself as registered user). However, in that case account creation would also have to be protected somehow (captcha, email confirmation, whatever), to prevent spam bots from creating random user accounts.
Besides that, I think it would be a good idea to remove pairs of revisions which consist only of spam insertion and subsequent removal, with no other change being made, from the history. They only clutter the history and don't provide any value. This could probably be semi-automated (i.e. a bot searching for obvious spam insertion and checking that the following revision is an exact rollback). Completely automating it might be too dangerous (but then, if the criterion for identifying spam is very strict, even that may be possible). --Ce 03:36, 2 September 2008 (EDT)
- I think the ConfirmEdit extension would be helpful. It can be configured to require captcha only on account creation and link inertion, and it can also have a whitelist of urls for which cpatchas are not required.
- I am not sure, however, if this site will survive unless more than one person has access to do these things. My impression is that the site owner has lost interest in the wiki, and will not even answer questions for which he is the only person who can answer. This is understandable. We should not expect one person to go on forever doing lots of work for free, maintaining a site like this. If it is possible, we should try to distribute the responsibility of maintaining/running this site among more persons. I would be willing to participate in this, but there should also be more volunteers. This, of course, depends on Deco's willingness to share this responsibility. Ahy1 06:07, 20 September 2008 (EDT)
- Ce and Ahy1 both make very sensible suggestions here. We'll continue to remove the spam come what may, but prevention would be a whole lot better than cure. -- Derek Ross | Talk 11:40, 20 September 2008 (EDT)
- Hey all, apologies for my very long delayed response. I am presently disabling anonymous editing and enabling CAPTCHAs on all edits to control the vandalism/spam problem. Deco 22:59, 7 March 2009 (MST)
- Testing CAPTCHA again. DecoMortal 23:22, 7 March 2009 (MST)
Dump database
Has someone a dump of the lpwiki? I put a comment on Deco's pade, if you read it may be some of you could answer.
Lehalle 01:41, 28 November 2008 (EST)
- I'll look at producing a proper dump right away. Deco 23:26, 7 March 2009 (MST)
- Dumps are now working and you can view a valid one here. Deco 04:00, 8 March 2009 (MDT)
Apology
After seeing how hard you all worked to protect the wiki in my absence I feel really terrible for letting you down. I felt overwhelmed by other responsibilities, and felt like I had screwed up this project and was afraid to deal with it, but after dropping the ball on both database dumps and vandalism protection I didn't leave others a way forward. I promise you all I will remain responsive and helpful - I've added some additional contact info to my user page in case I don't check my talk page here. I'm also prepared to give shell access to any of you who wants to be an active developer and help deal with issues like this in the future. I'm very thankful to you all and I hope I can make up for my mismanagement now. Deco 04:35, 8 March 2009 (MDT)
- Don't feel bad. This is just a hobby for all of us. And if more important issues arise (for any of us), they have to be dealt with first. -- Derek Ross | Talk 12:39, 8 March 2009 (MDT)
- I have to agree with Derek here: While I'm indeed very glad that this site is online again, it's nothing our lifes depend on. If it were, I would have contacted you much earlier about it :-) Basically this site is a gift you give to us, and while we enjoy this gift and would have missed it if it had been gone, there's clearly no obligation for you to continue giving it. The more I'm thankful that we got it back. --Ce 12:17, 9 March 2009 (MDT)
Math images broken
The images generated by the <math>...</math> tags seem to be Error 403 Forbidden. --Spoon! 03:04, 11 March 2009 (MDT)
- Woops - fixed, thanks! It also appears I'm missing latex and dvips, so new math tags won't work. I'll have to get them installed. Deco 13:55, 12 March 2009 (MDT)
- It was a bit tricky, but new math notation will now work - I had to do a local install of LaTeX and convince Mediawiki to use it when it's not in the path. :-) Deco 21:42, 12 March 2009 (MDT)
Captcha kills summary
I've noticed that when editing a single section and adding something to the summary, then when the page with the captcha request appears, the summary is reset to the original value. If there's an easy fix for that, it would be nice if it could be fixed (it's not important, though, because after all, you can just enter the summary after the captcha) --Ce 13:23, 11 March 2009 (MDT)
- Ah, I see what you mean. I've reproduced this now. I think I'm just going to worry about upgrading Mediawiki and the ConfirmEdit extension along with it, and I think it will probably go away. Deco 21:56, 12 March 2009 (MDT)
New policy
Because I think LiteratePrograms needs something to shape its direction and scope, I have created a new policy document called LiteratePrograms:Purpose and scope. I think it's very much in line with what we've been doing so far. I invite your comments on it. Deco 21:28, 30 March 2009 (MDT)
- Seems pretty reasonable to me. -- Derek Ross | Talk 12:56, 31 March 2009 (MDT)
- I agree with Derek. It nicely articulates what I've always thought this wiki was about. --Allan McInnes (talk) 00:44, 3 April 2009 (MDT)
- Apart from the following minor roblem, I also agree:
- The last bullet says:
- "concerns such as efficiency and completeness should not take precedence over clarity of presentation."
- What exactly is meant with "completeness" in this context? I'd intuitively expect it to mean that the code is compilable as is (i.e. you don't need to add anything or fill out gaps in order to compile it), but in that case, I'd think the examples should be complete. Completeness in this sense doesn't remove anything from the clarity; after all, the literate programming technique makes sure that you can move things like boilerplate into separate sections, thus not interfering with the description of the interesting parts of the code.
- If something else is meant with "completeness", maybe it would be a good idea to clarify. --Ce 07:16, 4 April 2009 (MDT)
- My interpretation, based on the beginning of that bullet stating that LP is "not a database of raw code snippets", is that efficiency and completeness refer to code that would be both quick enough and robust enough to put into production, while LP is better used to clearly present code sketches. Certainly the code here should be compilable — and runnable — as is, but as you can tell from articles such as Dijkstra's algorithm (Inform 7), Hello World (IBM PC bootstrap), or Quicksort (Sed), my bias has been towards trying to present one or maybe two interesting points per article rather than in collecting quotidian code. Dave 15:40, 4 April 2009 (MDT)
Latest spam
I stupidly forgot to deny move permissions to anons (which also, it turns out, the CAPTCHAs don't protect). Fixed that, shouldn't happen again. Deco 15:43, 14 April 2009 (MDT)
|
http://en.literateprograms.org/LiteratePrograms:Public_forum
|
crawl-002
|
en
|
refinedweb
|
How To “Find And Replace” Words In Multiple Files
I have been using Notepad ++ for years as my Windows Notepad replacement.
It does highlighting and line numbers, these were my first reasons for using it. Notepad ++ has been my best friend for writing scripts and HTML files.
Now I had a new challenge and my buddy Mike told me that Notepad ++ would be my easiest and FREE solution.
Here was my dilemma - I had 8,342 HTML files and I needed to replace some text on every single page to reflect a company’s identity tweaking. The company was mostly adult-oriented before and now they are trying their hand at mainstream. They needed to remove the word ADULT from their company name every where throughout their site. Now your friendly neighborhood admin was not going to be doing this manually - that was for sure. So I fired up Notepad ++ and searched for this magic option.
Sure enough there it was on the search menu - the option that was going to make my life easier - “Find In Files”.
Using this awesome feature I am able to have Notepad++ find all files with the word I choose in them. It then shows me the results of my search. Once I click on all the instances I want to change, it will open those files. I can then do my “replace on all open files”.
After you find all your instances and have all the files open in Notepad ++ you can then click over to the replace tab - now you will see a option to replace throughout ALL OPENED FILES! Sweet!
I am guessing that some of you out there have never heard of or used Notepad ++ before. So let me run down the basic features with you. If you are a scripter, coder or any type of programing monkey this program will make you salivate!
Check it out, Notepad ++ has:
- Syntax Highlighting and Syntax Folding
- WYSIWYG
- User Defined Syntax Highlighting
- Auto-completion
- Regular Expression Search/Replace supported
- Full Drag ‘N’ Drop supported
- Multi-Language environment supported
- Brace and Indent guideline Highlighting
- Macro recording and playback
- Bookmarks
Do you have another way to find and replace throughout a slew of files? After completing this task I was told about another free application that might have made my life even easier… It is called BkReplaceEm and stay tuned for my review!
By Karl L. Gechlik of. Karl is a super hero of the IT industry and helps answer free tech support questions at his website. Stop by and tell him MakeUseOf sent you! >“replace on all open files”..
PSpad has a similar feature:.
textCrawler will do the job better.
While notepad++ is good for somethings, it is not the best solution for such a problem.
A simple python script would do the trick much easily and neatly.
import fileinput, glob, string, sys, os
from os.path import join
text_search = ‘what ever you want to search’
text_replace = ‘what.
Certainly, having to have all the 8000+ files open to work is not an ideal solution. HTML editors like Dreamweaver or CoffeeCup HTML Editor have had features like this for a while to make mass changes to files in a certain directory.
Howdy! Thanks to MakeUseOf.com, I now have several programs to get the job done. In the meantime, here is my illustrated experience with BK ReplaceEm! It worked for me, so thanks to MakeUseOf.com!
Best wishes,
Miguel Guhlin
Around the Corner-MGuhlin.net
Thanks to MakeUseOf.com, I’ve discovered many new text editors. Here’s my illustrated walkthrough of BK Replace Em, which worked well for my purposes.
With appreciation,
Miguel Guhlin
Around the Corner
Do you know a free solution to make multiple replace in multiple files?
I mean replace worldA with worldB and exA to exB in multiple files.
Thanks
Here you go:
I think this won’t work with UTF-8 file encoding…
Do you know a newer clone?
To open many documents for the bulk find replace functionality use File > Open. Then when the file manager window comes up, highlight all the documents you want to edit. Once that is done, then you can follow Karl’s steps above.
OhMyGod I Love You!
I was so stressed out because I got a new website, and I had to change all my links,
but this program did it in like 10 seconds!
Thank you so much!
|
http://www.makeuseof.com/tag/how-to-find-and-replace-words-in-multiple-files/
|
crawl-002
|
en
|
refinedweb
|
The 1.1 Exam only asks you to understand three LayoutManagers,
The Java2 exam may contain questions about the GridBagLayout manager. Applets default to FlowLayout and applications default to a BorderLayout.
This applet shows the effect of changing the layout manager on an applet. The Layout Manager is set to BorderLayout when the Applet Starts. Clicking the Flow button shows how the FlowLayout manager allows each button to be its preferred size (ie tall and wide enough to contain its label) and and spreads the buttons starting from the top left.
The the two GridLayout Options show how the GridLayout forces the components to fit exactly into the grid. I have not included a BorderLayout button as once the applet has started, re-setting back to BorderLayout seems to have no effect. Another peculiarity BorderLayout is that if you add multiple components to a BorderLayout container without specifying a coordinate (North, South, etc) each component will be added on top of the last one in the Centre. This is generally not what you would want.
//Source code to the applet
import java.awt.*; import java.awt.event.*; import java.applet.*; public class LayTest extends Applet implements ActionListener{ Button btnGrid = new Button("GridLayout(2,2)"); Button btnGrid2 = new Button("GridLayout(3,3)"); Button btnFlow = new Button("Flow"); public void init(){ setLayout(new BorderLayout()); amethod(); } public void amethod(){ btnGrid.addActionListener(this); btnGrid2.addActionListener(this); btnFlow.addActionListener(this); add(btnGrid,"North"); add(btnGrid2,"East"); add(btnFlow,"South"); } public void actionPerformed(ActionEvent evt) { String arg =evt.getActionCommand(); if(arg.equals("GridLayout(2,2)")) { setLayout(new GridLayout(2,2)); validate(); } if(arg.equals("GridLayout(3,3)")){ System.out.println(arg); setLayout(new GridLayout(3,3)); validate(); } if(arg.equals("Flow")){ System.out.println(arg); setLayout(new FlowLayout()); validate(); } } }
|
http://www.jchq.net/applets/Layouts/Layouts.html
|
crawl-002
|
en
|
refinedweb
|
Hello aspiring programmers! How cool would it be to write an app in Android? The great thing about Android programming is its freedom and portability. When I say freedom, I’m talking about its zero cost and open source nature. All the tools will be provided. You won’t even be needing an Android phone. A virtual machine is included to run your apps. Of course you can also run your app on your phone, your aunt’s phone, or on anyone’s you like. Maybe you’d like to try and strike it rich and publish your own brilliant app in the Google Play Market. What do you want to make? The hottest game? A cool productivity app? Something that could save lives? Ideas like these will fuel your learning. So think of something you might actually want to build beyond these tutorials. Are you ready? Through the rabbit hole we go.
First things first. We want to download the Android SDK and (ADT) Android Developer Tools. Head on over to and hit the big blue button. It says “Download the SDK ADT Bundle for Windows”. Ah yes, we will be using a Windows machine to do our programming. You can also use a Mac or Linux machine.
On the next page agree to the terms, choose 32-bit or 64-bit to match your operating system and click “Download the SDK ADT Bundle for Windows”.
We will get a zip file called “adt-bundle-windows-x86_64.zip” for Windows 64 bit users. Your file may be similar depending on your operating system.
Extract this file to C: It’s a nice easy place to get to. You should have a directory structure like this: C:adt-bundle-windows with “Eclipse” and “sdk” folders inside. I don’t like the name “adt-bundle-windows” so I’m going to rename that to Android.
Then dive into the “Eclipse” folder. You should see an Eclipse.exe icon there. That is what we want, so start it up.
Now we will see the Workspace Launcher, which is a way to organize projects. The default location is C:UsersUSERNAMEworkspace, which I don’t like either. Let’s change that to C:Androidworkspace, check “Use this as the default and do not ask again” and click OK.
We will be greeted with a friendly Welcome! Screen. It has some great links to other tutorials, but that is what you have me for. Plus I am nicer. So close it now! Close the welcome screen by hitting the X next to the “Android IDE” tab. Then let’s:
Click File > New > Android Application Project
A wizard will show up giving us many options to configure our Application. There are quite a few options here but don’t panic. We will breeze through them all.
- Application Name: The name of your Android application, as it will appear in the Google Play Market. Put “Hello YOURNAME” here.
- Project Name: Name of project in Eclipse. A project contains all the files that make up your application.
- Package Name: Usually the reverse of your website plus the name of your application. It should automatically fill in “com.example.applicationname”.
- Minimum Required SDK: This is the lowest Android version that will work with our program. The lower this is set, the more devices will be supported. Set it higher and you will gain more features. It is best to set it low. More users are better.
- Target SDK: This is the highest Android version that your app is known to work with. It should be well tested.
- Compile With: Keep as the most recent version. It should support all previous versions.
- Theme: Ooo! Well there seems to be only a few right now, so I’ll keep it as the default.
On the Configure Project screen we’ll just keep the defaults and hit next.
On the Configure Launcher Icon screen we will configure our app icon with a photo or with some other options. I’m not going to make a photo for each of my tutorial apps, so I’ll play with the text option.
On the Create Activity Screen we see a couple templates we can use for our apps. An activity is like a screen on your Android app.
Choose Blank Activity and click Next.
On the New Blank Activity Screen we are going to keep the defaults. Think of MainActivity as the first activity that will be shown by our app.
Finally, let’s hit finish.
Now we will be greeted with our work space. There is a lot here so for now I will go over a few important things.
- Package explorer – all the files that make up our project
- Document view – here we see tabs for all of our open documents. We can already see the activity_main.xml activity.
- Outline Panel – keeps track of our objects and shows properties
- Properties Panel – shows properties of the objects we select.
- Info Panel – shows info-like errors in our code like errors and debugging info.
Let’s checkout the activity_main.xml document that was created for us.
What is this hello word text in the middle? How did that get there? Well for our confusion, every time we make a new blank activity in Eclipse it will add “Hello World” text. A text object is called a TextView. We can delete it but let’s keep it for now. I’m going to show you how to change it in a few ways.
Click on the “Hello Word!” TextView and notice the Properties on the right. Choose the “Text Property”. We can simply change what it says here. I’m going to change it to “Hello klack”.
Ecilpse is going to list a warning in the info panel that we hardcoded a string. What does that mean? Well Android likes to keep all of its text in one file. This makes it easier to convert your application into other languages. We won’t be bothering with that for now.
Now that we have made a change to the text, we are almost ready to try out our app! First we will need to create a new Android Virtual Device to test our app on. A Virtual Device is the Android Operating System running right inside your computer. It makes testing out our app fast and easy, and doesn’t require you to own an Android device.
Choose Window > Android Virtual Device Manger
There is nothing here yet so click on “New..”
Here you can see a bunch of settings for configuring your Virtual Device. Enter the following settings to give us a mock Nexus S:
You may want to try “Use Host GPU” if you have a nice graphics card to make things run smoother.
Click OK and then Close the Android Virtual Device Manger
Now we can choose Run > Run from the menu to try out our app! The first time it will take a while to boot up our Android emulator. You’ll see the Android window as well as a few buttons used to emulate the hardware found on Android devices.
When it boots up (it may take a while), click OK to dismiss the introduction screen. Now you should see our app! How easy was that? You can close the app using the back hardware button and relaunch it from the application tray.
We are not done just quite yet; we haven’t even done any programming! Let’s switch back to Eclipse and use some Java to make that TextView say something else. First we need to give our Text an ID so we can reference it later in our code. Click on the “Hello World!” text and look at the Id Property. Right now it is blank. We are going to name it:
@+id/txtHello
This creates an id named “txtHello”. Now choose File > Save to save activity_mail.xml. Now let’s do something with it.
Check out the MainActivity.java file. You will find it in the src folder in the project explorer panel. It is here that our startup code lives.
Let’s take a moment to notice some things about how Java code is written. Primarily parentheses () curly braces {} and the semicolon;
Curly {} braces signify the start and end of a block of code. Parentheses () signify other methods and may contain parameters that get passed into the method. A double slash // signifies a code comment. Use this to leave comments or put it in front of a line of code to disable it. Finally we must end each statement with a semicolon;
Starting at the top of this code we see our package name. After that some imports (use the little plus sign to see all the imports). Imports are a way to access features of other packages. The ones listed here are built into Android, but we could make our own and access them here. Next we see a code block that defines our activity. Within it we can see two methods. onCreate and onCreateOptionsMenu. The onCreate method is run right when the Main Activity is started. So here is where we will put some code to change our txtHello TextView.
Add the 3 lines of code after the // Startup Code goes here section that I have marked. After you have added these lines Eclipse seems to have a few complaints. It doesn’t like when I write TextView. That’s because we need to do an import at the top so reference TextViews in our code.
Add
import Android.widget.TextView;
Underneath the other imports at the top.
Now let’s take a look at our code line by line:
TextView t = new TextView(this);
Means we are going to create a variable t and of the TextView type and initialize it as new TextView.
t=(TextView)findViewById(R.id.txtHello);
Here is the bit of code that will access our txtHello TextView that we created earlier and assign it to the t variable. This is done by the findViewById method. As you can see it accepts the parameter R.id.txtHello
We would change the txtHello part to reference a different id to get a different object.
There seems to be one problem with the findViewById method. It spits back a View type, when we want a TextView type to put in our t variable. The (TextView) in front takes care of that by transforming or “Casting” our View into a TextView type. Don’t worry too much about that now.
t.setText(“Hello Android!”);
Finally, see the magic that actually changes the text.
Remember t is a variable holding a reference our txtHello TextView. setText is a method of TextView and accepts one parameter, the text we want to have. Change it to something you like.
OK now let’s go to the menu and choose Run > Run
Did you make it this far without any problems? What if something did throw you for a loop? Post a comment below and we will try and spoon feed you some newb goodness. It’s probably your first time ever right?
3 thoughts on “Getting Started With SDK And Building Your First Android App”
Wow!
you can make android app completely after reading this article
it is amazing
thanks for sharing with us
Nice Share! Your blog has always been a good source for me to get quality knowledge in Android. Thanks once again.
|
https://linuxacademy.com/blog/mobile/getting-started-with-sdk-and-building-your-first-android-app/
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
Optimizing the EclipseLink Application (ELUG)
Contents
- 1 Introduction to Optimization
- 2 Identifying Sources of Application Performance Problems
- 3 Measuring EclipseLink Performance with the EclipseLink Profiler
- 4 Identifying General Performance Optimization
- 5 Optimizing for a Production Environment
- 6 Optimizing Schema
- 7 Optimizing Mappings and Descriptors
- 8 Optimizing Sessions
- 9 Optimizing Cache
- 10 Optimizing Data Access
- 11 Optimizing Queries
- 11.1 How to Use Parameterized SQL and Prepared Statement Caching for Optimization
- 11.2 How to Use Named Queries for Optimization
- 11.3 How to Use Batch and Join Reading for Optimization
- 11.4 How to Use Partial Object Queries and Fetch Groups for Optimization
- 11.5 How to Use Read-Only Queries for Optimization
- 11.6 How to Use JDBC Fetch Size for Optimization
- 11.7 How to Use Cursored Streams and Scrollable Cursors for Optimization
- 11.8 How to Use Result Set Pagination for Optimization
- 11.9 Read Optimization Examples
- 11.9.1 Reading Case 1: Displaying Names in a List
- 11.9.2 Reading Case 2: Batch Reading Objects
- 11.9.3 Reading Case 3: Using Complex Custom SQL Queries
- 11.9.4 Reading Case 4: Using View Objects
- 11.9.5 Reading Case 5: Inheritance Subclass Outer-Joining
- 11.10 Write Optimization Examples
- 12 Optimizing the Unit of Work
- 13 Optimizing Using Weaving
- 14 Optimizing the Application Server and Database Optimization
- 15 Optimizing Storage and Retrieval of Binary Data in XML.
Optimization is an important consideration when you design your database schema and object model. Most performance issues occur when the object model or database schema is too complex, which.
This section includes the following schema optimization examples:
- Schema Case 1: Aggregation of Two Tables Into One
- Schema Case 2: Splitting One Table Into Many
- Schema Case 3: Collapsed Hierarchy
- Schema Case 4: Choosing One Out of Many)
The nature of this application dictates that you always look up employees and addresses together. As a result, querying a member based on address information requires a database join, and reading a member and its address requires two read statements. Writing a member requires two write statements. This adds unnecessary complexity to the system, and results in poor performance.
A better solution is to combine the MEMBER and ADDRESS tables into a single table, and change the one-to-one relationship to an aggregate relationship. This lets you read all information with a single operation, and doubles the update and insert speed, because only a single row in one table requires modifications.
A common mistake when you transform an object-oriented design into a relational model, is to build a large hierarchy of tables on the database. This makes querying difficult, because queries against this type of design can require a large number of joins. It is usually a good idea to collapse some of the levels in your inheritance hierarchy into a single table. relationship, a single source object has a collection of other objects. In some cases, the source object frequently requires one particular object in the collection, but requires the other objects only infrequently. You can reduce the size of the returned result set in this type of case by adding an instance variable for the frequently required object. This lets you access the object without instantiating the other objects in the collection.
The Original Schema (Choosing One out of Many Case) table represents a system by which an international shipping company tracks the location of packages in transit. When a package moves from one location to another, the system creates a new a location entry for the package in the database. The most common query against any given package is for its current location.
Original Schema (Choosing One out of Many Case)
A package in this system can accumulate several location values in its LOCATION collection as it travels to its destination. Reading all locations from the database is resource intensive, especially when the only location of interest is the current location.
To resolve this type of problem, add a specific instance variable that represents the current location. You then add a one
1 If the number of instances is finite.
If you do use cache coordination, use JMS for cache coordination rather than RMI. JMS is more robust, easier to configure, and runs asynchronously. If you require synchronous cache coordination, use RMI.
You can configure a descriptor to control when theink to bypass the cache for object read queries based on primary key. This results in a database round trip every time an object read query based on primary key is executed on this object type, negating the performance advantage of the cache. When used in conjunction with Always Refresh, this option ensures that all queries go to the database. This can have a significant impact on performance. These options should only be used in specialized circumstances.
Optimizing Data Access
Depending on the type of data source your application accesses, EclipseLink offers a variety of Login options that you can use to tune the performance of low level data reads and writes.
You can use several techniques to improve data access performance for your application. This section discusses some of the more common approaches, including the following:
- How to Optimize JDBC Driver Properties
- How to Optimize Data Format
- How to Use Batch Writing for Optimization
- How to Use Outer-Join Reading with Inherited Subclasses
- How to Use Parameterized SQL (Parameter Binding) and Prepared Statement Caching for Optimizationink passes to the JDBC driver.
For example, some JDBC drivers, such as Sybase JConnect, perform a database round trip to test whether or not a connection is closed: that is, calling the JDBC driver method isClosed results in a stored procedure call or SQL select. This database round-trip can cause a significant performance reduction. To avoid this, you can disable this behavior: for Sybase JConnect, you can set property name CLOSED_TEST to value INTERNAL.
For more information about configuring general JDBC driver properties from within (see How to Use Parameterized SQL (Parameter Binding) and Prepared Statement Caching for Optimization), this is known as parameterized batch writing. This allows a repeatedly executed statement, such as a group of inserts of the same type, to be executed as a single statement and a set of bind parameters. This can provide a large performance benefit as the database does not have to parse the batch.
When using batch writing, you can tune the maximum batch writing size.
In JPA applications, you can use persistence unit property eclipselink.jdbc.batch-writing (see EclipseLink JPA Persistence Unit Properties for JDBC Connection Communication).
In POJO applications, you can use setMaxBatchWritingSize method of the Login interface. The meaning of this value depends on whether or not you are using parameterized SQL:
- If you are using parameterized SQL (you configure your Login by calling its bindAllParameters method), the maximum batch writing size is the number of statements to batch with 100 being the default.
- If you are using dynamic SQL, the maximum batch writing size is the size of the SQL string buffer in characters with 32000 being the default.
By default,
Using parameterized SQL, you can keep the overall length of an SQL query from exceeding the statement length limit that your JDBC driver or database server imposes.
Using parameterized SQL and prepared statement caching, you can improve performance by reducing the number of times the database SQL engine parses and prepares SQL for a frequently called query.
By default, (see Configuring JDBC Options). Selecting a combination of options may result in different behavior from one driver to another. Before selecting JDBC options, consult your JDBC driver documentation. When choosing binding options, consider the following approach:
- Try binding all parameters with all other binding options disabled.
- If this fails to bind some large parameters, consider enabling one of the following options, depending on the parameter's data type and the binding options that your JDBC driver supports:
- To bind large String parameters, try enabling string binding.If large String parameters still fail to bind, consider adjusting the maximum String size. EclipseLink sets the maximum String size to 32000 characters by default.
- To bind large Byte array parameters, try enabling byte array binding.
- If this fails to bind some large parameters, try enabling streams for binding.
Typically, configuring string or byte array binding will invoke streams for binding. If not, explicitly configuring streams for binding may help.
For Java EE applications that use EclipseLink external connection pools, you must configure parameterized SQL in EclipseLink, but you cannot configure prepared statement caching in EclipseLink. In this case, you must configure prepared statement caching in the application server connection pool. For example, in OC4J, if you configure your data-source.xml file with a managed data-source (where connection-driver is oracle.jdbc.OracleDriver, and class is oracle.j2ee.sql.DriverManagerDataSource), you can configure a non-zero num-cached-statements that enables JDBC statement caching and defines the maximum number of statements cached.
For applications that use EclipseLink internal connection pools, you can configure parameterized SQL and prepared statement caching.
You can configure parameterized SQL and prepared statement caching at the following levels:
- session database login level–applies to all queries and provides additional parameter binding API to alleviate the limit imposed by some drivers on SQL statement size. Optimizing Data Access.
This section includes information on the following:
- How to Use Parameterized SQL and Prepared Statement Caching for Optimization
- How to Use Named Queries for Optimization
- How to Use Batch and Join Reading for Optimization
- How to Use Partial Object Queries and Fetch Groups for Optimization
- How to Use Read-Only Queries for Optimization
- How to Use JDBC Fetch Size for Optimization
- How to Use Cursored Streams and Scrollable Cursors for Optimization
- How to Use Result Set Pagination for Optimization
- Read Optimization Examples
- Write Optimization Examples for Optimization
Whenever possible, use named queries in your application. Named queries help you avoid duplication, are easy to maintain and reuse, and easily add complex query behavior to the application. Using named queries also allows for the query to be prepared once, and for the SQL generation to be cached.
For more information, see fetch size gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed.
For large queries that return a large number of objects you can configure the row fetch size used in the query to improve performance by reducing the number database hits required to satisfy the selection criteria..
Set the query fetch size with ReadQuery method setFetchSize, as the JDBC Driver Fetch Size example shows. Alternatively, you can use ReadQuery method setMaxRows to set the limit for the maximum number of rows that any ResultSet can contain.
JDBC Driver Fetch Size
// Create query and set Employee as its reference class ReadAllQuery query = new ReadAllQuery(Employee.class); ExpressionBuilder builder = query.getExpressionBuilder(); query.setSelectionCriteria(builder.get("id").greaterThan(100)); // Set the JDBC fetch size query.setFetchSize(50); // Configure the query to return results as a ScrollableCursor query.useScrollableCursor(); // Execute the query ScrollableCursor cursor = (ScrollableCursor) session.executeQuery(query); // Iterate over the results while (cursor.hasNext()) { System.out.println(cursor.next().toString()); } cursor.close();
In this example, when you execute the query, the JDBC driver retrieves the first 50 rows from the database (or all rows if less than 50 rows satisfy the selection criteria). As you iterate over the first 50 rows, each time you call cursor.next(), the JDBC driver returns a row from local memory–it does not need to retrieve the row from the database. When you try to access the fifty first row (assuming there are more than 50 rows that satisfy the selection criteria), the JDBC driver again goes to the database and retrieves another 50 rows. In this way, 100 rows are returned with only two database hits.
If you specify a value of zero (default; means the fetch size is not set), then the hint is ignored and the JDBC driver's default is used.
For more information see the following:
How to Use Cursored Streams and Scrollable Cursors for Optimization
You can configure a query to retrieve data from the database using a cursored Java stream or scrollable cursor. This lets you view a result set in manageable increments rather than as a complete collection. This is useful when you have a large result set. You can further tune performance by configuring the JDBC driver fetch size used (see.
Using Result Set Pagination
In this example, for the first query invocation, pageSize=3, maxRows=pageSize, and firstResult=0. This returns a List of results 00 through 02.
For each subsequent query invocation, you increment maxRows=maxRows+pageSize and firstResult=firstResult+pageSize. This returns a new List for each page of results 03 through 05, 06 through 08, and so on.
Typically, you use this approach when you do not necessarily need to process the entire result set. For example, when a user wishes to scan the result set a page at a time looking for a particular result and may abandon the query after the desired record is found.
The advantage of this approach over cursors is that it does not require any state or live connection on the server; you only need to store the firstResult index on the client. This makes it useful for paging through a Web result.
For more information, see the following:
-.
No Optimization
JPA
/* Read all the employees from the database, ask the user to choose one and return it. */ /* This must read in all the information for all the employees */ ListBox list; // Fetch data from database and add to list box List employees = entityManager.createQuery("Select e from Employee e").getResultList(); list.addAll(employees); // Display list box .... // Get selected employee from list Employee selectedEmployee = (Employee) list.getSelectedItem(); return selectedEmployee;
Native API
/* Read all the employees from the database, ask the user to choose one and return it. */ /* This must read in all the information for all the employees */ ListBox list; // Fetch data from database and add to list box List employees = session.readAllObjects(Employee.class); list.addAll(employees); // Display list box .... // Get selected employee from list Employee selectedEmployee = (Employee) list.getSelectedItem(); return selectedEmployee;
Partial Object Reading
Partial object reading is a query designed to extract only the required information from a selected record in a database, rather than all the information the record contains. Because partial object reading does not fully populate objects, you can neither cache nor edit partially read objects.
For more information about partial object queries, see
JPA
/* Read all the employees from the database, ask the user to choose one and return it. */ /* This uses partial object reading to read just the last names of the employees. */ ListBox list; // Fetch data from database and add to list box List employees = entityManager.createQuery("Select new Employee(e.id, e.lastName) from Employee e").getResultList(); list.addAll(employees); // Display list box .... // Get selected employee from list Employee selectedEmployee = (Employee)entityManager.find(Employee.class, ((Employee)list.getSelectedItem()).getId()); return selectedEmployee;
Native API
/* Read all the employees from the database, ask the user to choose one and return it. */ /* This uses partial object reading to read just the last names of the employees. */ /* Since EclipseLink automatically includes the primary key of the object, the full object can easily be read for editing */ ListBox list; // Fetch data from database and add to list box ReadAllQuery query = new ReadAllQuery(Employee.class); query.addPartialAttribute("lastName"); // The next line avoids a query exception query.dontMaintainCache(); List employees = session.executeQuery(query); list.addAll(employees); // Display list box .... // Get selected employee from list Employee selectedEmployee = (Employee)session.readObject(list.getSelectedItem()); return selectedEmployee;
Report Query.
Optimization Through Report Query
JPA
/* Read all the employees from the database, ask the user to choose one and return it. */ /* This uses a report query to read just the last names of the employees. */ ListBox list; // Fetch data from database and add to list box // This query returns a List of Object[] data values List rows = entityManager.createQuery("Select e.id, e.lastName from Employee e").getResultList(); list.addAll(rows); // Display list box .... // Get selected employee from list Object selectedItem[] = (Object[])list.getSelectedItem(); Employee selectedEmployee = (Employee)entityManager.find(Employee.class, selectedItem[0]); return selectedEmployee;
Native API
/* Read all the employees from the database, ask the user to choose one and return it. */ /* The report query is used to read just the last name of the employees. */ /* Then the primary key stored in the report query result to read the real object */ ListBox list; // Fetch data from database and add to list box ExpressionBuilder builder = new ExpressionBuilder(); ReportQuery query = new ReportQuery (Employee.class, builder); query.addAttribute("lastName"); query.retrievePrimaryKeys(); List reportRows = (List) session.executeQuery(query); list.addAll(reportRows); // Display list box .... // Get selected employee from list ReportQueryResult result = (ReportQueryResult) list.getSelectedItem(); Employee selectedEmployee = (Employee)result.readobject(Employee.Class, session);
Although the differences between the unoptimized example .
Configuring a Query with a FetchGroup Using the FetchGroupManager
JPA
// Use fetch group at query level ReadAllQuery query = new ReadAllQuery(Employee.class); FetchGroup group = new FetchGroup("nameOnly"); group.addAttribute("firstName"); group.addAttribute("lastName"); query.setFetchGroup(group); JpaQuery jpaQuery = (JpaQuery)entityManager.createQuery("Select e from Employee e"); jpaQuery.setDatabaseQuery(query); List employees = jpaQuery.getResultList(); /* */
Native API
// Use fetch group at query level ReadAllQuery query = new ReadAllQuery(Employee.class); FetchGroup group = new FetchGroup("nameOnly"); group.addAttribute("firstName"); group.addAttribute("lastName"); query.setFetchGroup(group); List employees = session.executeQuery(query); /* */
Reading Case 2: Batch Reading Objects
The way your application reads data from the database affects performance. For example, reading a collection of rows from the database is significantly faster than reading each row individually.
A common performance challenge is to read a collection of objects that have a one-to-one reference to another object. This typically requires one read operation to read in the source rows, and one call for each target row in the one-to-one relationship.
To reduce the number of read operations required, use join and batch reading. The No Optimization example illustrates the unoptimized code required to retrieve a collection of objects with a one-to-one reference to another object. The Optimization Through Joining and Optimization Through Batch Reading examples illustrate the use of joins and batch reading to improve efficiency.
No Optimization
// Read all the employees, and collect their address' cities. This takes N + 1 queries if not optimized */ // Read all the employees from the database. This requires 1 SQL call Vector employees = session.readAllObjects(Employee.class, new ExpressionBuilder().get("lastName").equal("Smith")); //SQL: Select * from Employee where l_name = 'Smith' // Iterate over employees and get their addresses. // This requires N SQL calls Enumeration enum = employees.elements(); Vector cities = new Vector(); while(enum.hasMoreElements()) { Employee employee = (Employee) enum.nextElement(); cities.addElement(employee.getAddress().getCity()); } //SQL: Select * from Address where address_id = 123, etc
Optimization Through Joining
// Read all the employees; collect their address' cities. Although the code // is almost identical because joining optimization is used it takes only 1 query */ // Read all the employees from the database using joining. // This requires 1 SQL call ReadAllQuery query = new ReadAllQuery(Employee.class); ExpressionBuilder builder = query.getExpressionBuilder(); query.setSelectionCriteria(builder.get("lastName").equal("Smith")); query.addJoinedAttribute("address"); Vector employees = session.executeQuery(query); /// SQL: Select E.*, A.* from Employee E, Address A where E.l_name = 'Smith' and // E.address_id = A.address_id Iterate over employees and get their addresses. // The previous SQL already read all the addresses, so no SQL is required Enumeration enum = employees.elements(); Vector cities = new Vector(); while (enum.hasMoreElements()) { Employee employee = (Employee) enum.nextElement(); cities.addElement(employee.getAddress().getCity()); }
Optimization Through Batch Reading
// Read all the employees; collect their address' cities. Although the code // is almost identical because batch reading optimization is used it takes only 2 queries // Read all the employees from the database, using batch reading. // This requires 1 SQL call, note that only the employees are read ReadAllQuery query = new ReadAllQuery(Employee.class); ExpressionBuilder builder = query.getExpressionBuilder(); query.setSelectionCriteria(bulder.get("lastName").equal("Smith")); query.addBatchReadAttribute("address"); Vector employees = (Vector)session.executeQuery(query); // SQL: Select * from Employee where l_name = 'Smith' // Iterate over employees and get their addresses. // The first address accessed will cause all the addresses to be read in a single SQL call Enumeration enum = employees.elements(); Vector cities = new Vector(); while (enum.hasMoreElements()) { Employee employee = (Employee) enum.nextElement(); cities.addElement(employee.getAddress().getCity()); // SQL: Select distinct A.* from Employee E, Address A // where E.l_name = 'Smith' and E.address_id = A.address_i }
Because the two-phase approach to the query are at the same address, batch reading reads much less data than joining, because batch reading uses a SQL DISTINCT call to filter duplicate data.
Batch reading and joining are available for one-to-one, one-to-many, many-to-many, direct collection, direct map and aggregate collection mappings. Note that one-to-many joining will return a large amount of duplicate data and so is normally less efficient than batch reading..
No Optimization
/* Gather the information to report on an employee and return the summary of the information. In this situation, a hash table is used to hold the report information. Notice that this reads a lot of objects from the database, but uses very little of the information contained in the objects. This may take 5 queries and read in a large number of objects */ public Hashtable reportOnEmployee(String employeeName) { Vector projects, associations; Hashtable report = new Hashtable(); // Retrieve employee from database Employee employee = session.readObject(Employee.class, new ExpressionBuilder.get("lastName").equal(employeeName)); // Get all the projects affiliated with the employee projects = session.readAllObjects(Project.class, "SELECT P.* FROM PROJECT P," + "EMPLOYEE E WHERE P.MEMBER_ID = E.EMP_ID AND E.L_NAME = " + employeeName); // Get all the associations affiliated with the employee associations = session.readAllObjects(Association.class, "SELECT A.* " + "FROM ASSOC A, EMPLOYEE E WHERE A.MEMBER_ID = E.EMP_ID AND E.L_NAME = " + employeeName); report.put("firstName", employee.getFirstName()); report.put("lastName", employee.getLastName()); report.put("manager", employee.getManager()); report.put("city", employee.getAddress().getCity()); report.put("projects", projects); report.put("associations", associations); return report; }
To improve application performance in these situations, define a new read-only object to encapsulate this information, and map it to a view on the database. To set the object to be read-only, configure its descriptor as read-only (see Configuring Read-Only Descriptors).
Optimization Through View Object
CREATE VIEW NAMED EMPLOYEE_VIEW AS (SELECT F_NAME = E.F_NAME, L_NAME = E.L_NAME,EMP_ID = E.EMP_ID, MANAGER_NAME = E.NAME, CITY = A.CITY, NAME = E.NAME FROM EMPLOYEE E, EMPLOYEE M, ADDRESS A WHERE E.MANAGER_ID = M.EMP_ID AND E.ADDRESS_ID = A.ADDRESS_ID)
Define a descriptor for the EmployeeReport class as follows:
- Define the descriptor as usual, but specify the tableName as EMPLOYEE_VIEW.
- Map only the attributes required for the report. In the case of the numberOfProjects and associations, use a transformation mapping to retrieve the required data.
You can now query the report from the database in the same way as any other object enabled by
If you have an inheritance hierarchy that spans multiple tables and frequently query for the root class, consider using outer joining. This allows an outer-joining to be used for queries against an inheritance superclass that can read all of its subclasses in a single query instead of multiple queries.
Note that on some databases, the outer joins may be less efficient than the default multiple queries mechanism.
For more information about inheritance, see Descriptors and Inheritance.
For more information about querying on inheritance, see Querying on an Inheritance Hierarchy.
Write Optimization Examples
EclipseLink provides the write optimization features listed in the Write Optimization Features table.
This section includes the following write optimization examples:
Write Optimization Features
Writing Case: Batch Writes
The most common write performance problem occurs when a batch job inserts a large volume of data into the database. For example, consider a batch job that loads a large amount of data from one database, and then migrates the data into another. The following objects are involved:
- Simple individual objects with no relationships.
- Objects that use generated sequence numbers as their primary key.
- Objects that have an address that also uses a sequence number.
The batch job loads 10,000 employee records from the first database and inserts them into the target database. With no optimization, the batch job reads all the records from the source database, acquires a unit of work from the target database, registers all objects, and commits the unit of work.
// SQL: Begin transaction // SQL: Update Sequence set count = count + 1 where name = 'EMP' // SQL: Select count from Sequence // SQL: ... repeat this 10,000 times + 10,000 times for the addresses ... // SQL: Commit transaction // SQL: Begin transaction // SQL: Insert into Address (...) values (...) // SQL: ... repeat this 10,000 times // SQL: Insert into Employee (...) values (...) // SQL: ... repeat this 10,000 times // SQL: Commit transaction
This batch job performs poorly, because it requires 60,000 SQL executions. It also reads huge amounts of data into memory, which can raise memory performance issues., use a cursored stream to read the Employees from the source database. You can also employ a weak identity map instead of a hard or soft cache identity map in both the source and target databases.
To address the potential for memory problems, use the releasePrevious method after each read to stream the cursor in groups of 100. Register each batch of 100 employees in a new unit of work and commit them.
Although this does not reduce the amount of executed SQL, it does address potential out
Batch writing lets you combine a group of SQL statements into a single statement and send it to the database as a single database execution. This feature reduces the communication time between the application and the server, and substantially improves performance.
You can enable batch writing alone (dynamic batch writing) using Login method useBatchWriting. If you add batch writing to the No Optimization example, you execute each batch of 100 employees as a single SQL execution. This reduces the number of SQL executions from 20,200 to 300.
You can also enable batch writing and parameterized SQL (parameterized batch writing) and prepared statement caching. Parameterized SQL avoids the prepare component of SQL execution. This improves write performance because it avoids the prepare cost of an SQL execution. For parameterized batch writing you would get one statement per Employee, and one for Address: this reduces the number of SQL executions from 20,200 to 400. Although this is more than dynamic batch writing alone, parameterized batch writing also avoids all parsing, so it is much more efficient overall.
Although parameterized SQL avoids the prepare component of SQL execution, it does not reduce the number of executions. Because of this, parameterized SQL alone may not offer as big of a gain as batch writing. However, if your database does not support batch writing, parameterized SQL will improve performance. If you add parameterized SQL in in Cursors, set the sequence preallocation size to 100. Because employees and addresses in the example both use sequence numbering, you further improve performance by letting them share the same sequence. If you set the preallocation size to 200, this reduces the number of SQL execution from 60,000 to 20,200.
For more information about sequencing preallocation, see Sequencing and Preallocation Size.
Multiprocessing
You can use multiple processes or multiple machines to split the batch job into several smaller jobs. In this example, splitting the batch job across threads enables you to synchronize reads from the cursored stream, and use parallel Units of Work on a single machine.
This leads to a performance increase, even if the machine has only a single processor, because it takes advantage of the wait times inherent in SQL execution. While one thread waits for a response from the server, another thread uses the waiting cycles to process its own database operation.
// Read each batch of employees, acquire a unit of work, and register them targetSession.getLogin().useBatchWriting(); targetSession.getLogin().setSequencePreallocationSize(200); targetSession.getLogin().bindAllParameters(); targetSession.getLogin().cacheAllStatements(); targetSession.getLogin().setMaxBatchWritingSize(200); // Read all the employees from the database into a stream. // This requires 1 SQL call, but none of the rows will be fetched. ReadAllQuery query = new ReadAllQuery(Employee.class); query.useCursoredStream(); CursoredStream stream; stream = (CursoredStream) sourceSession.executeQuery(query); //SQL: Select * from Employee. Process each batch while (! stream.atEnd()) { List employees = stream.read(100); // Acquire a unit of work to register the employees UnitOfWork uow = targetSession.acquireUnitOfWork(); uow.registerAllObjects(employees); uow.commit(); }
SQL
//SQL: Begin transaction //SQL: Update Sequence set count = count + 200 where name = 'SEQ' //SQL: Select count from Sequence where name = 'SEQ' //SQL: Commit transaction //SQL: Begin transaction /'/BEGIN BATCH SQL: Insert into Address (...) values (...)' //... repeat this 100 times //Insert into Employee (...) values (...) //... repeat this 100 times //END BATCH SQL: //SQL: Commit transaction
Optimizing the Unit of Work
For best . Ensure that you correctly optimize these key components of your application in addition to your EclipseL.
Optimizing Storage and Retrieval of Binary Data in XML
When working with Java API for XML Web Services (JAX-WS), you can use XML binary attachments to optimize the storage and retrieval of binary data in XML. Rather than storing the data as a base64 BLOB, you can optimize it by sending the data as a Multipurpose Internet Mail Extensions (MIME) attachment in order to retrieve it on the other end.
To make the use of XML binary attachments, register an instance ofink provides support for the following Java types as attachments:
- java.awt.Image
- javax.activation.DataHandler
- javax.mail.internet.MimeMultipart
- javax.xml.transform.Source
- byte[]
- Byte[]
You can generate schema and mappings based on JAXB 2.0 classes for these types.
You can configure which mappings will be treated as attachments and set the MIME types of those attachments. You perform configurations using the following JAXB 2.0 annotations:
- XmlAttachmentRef–Used on a DataHandler to indicate that this should be mapped to a swaRef in the XML schema. This means it should be treated as a SwaRef attachment.
- XmlMimeType–Specifies the expected MIME type of the mapping. When used on a byte array, this value should be passed into the XMLAttachmentMarshaller during a marshal operation. During schema generation, this will result in an expectedContentType attribute being added to the related element.
- XmlInlineBinaryData–Indicates that this binary field should always be written inline as base64Binary and never treated as an attachment.
For information on JAXB>
Using MtOM Without MimeType
public class Employee { public java.awt.Image photo; ... }
The preceding code generates the following XML schema type:
<xs:complexType <xs:sequence> <xs:element </xs:sequence> </xs:complexType>
The XML would look as follows:
<employee> <photo> <xop:Include </photo> </employee>
Using MtOM with MimeType
public class Employee { @XmlMimeType("image/jpeg") public java.awt.Image photo; ... }
The preceding code generates the following XML schema type:
<xs:complexType <xs:sequence> <xs:element </xs:sequence> </xs:complexType>
The XML would look as follows:
<employee> <photo> <xop:Include </photo> </employee>
Using Binary Object with Forced Inline
public class Employee { @XmlInlineBinaryData public java.awt.Image photo; ... }
The preceding code generates the following XML schema type:
<xs:complexType <xs:sequence> <xs:element </xs:sequence> </xs:complexType>
The XML would look as follows:
<employee> <photo>ASWIUHFD1323423OIJEUFHEIUFWE134DFO3IR3298RY== </photo> </employee>
If you are not using JAXB an XMLAttachmentUnmarshaller.
You set and obtain an attachment marshaller and unmarshaller using the following corresponding XMLMarshaller and XMLUnmarshaller methods: setAttachmentMarshaller(XMLAttachmentMarshaller am) getAttachmentMarshaller() setAttachmentUnmarshaller(XMLAttachmentUnmarshaller au) getAttachmentUnmarshall.
|
http://wiki.eclipse.org/index.php?title=Optimizing_the_EclipseLink_Application_(ELUG)&oldid=107262
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
HTML and CSS Reference
In-Depth Information
chapter 9
n n n
Dealing with Local Files Using
the File API
It's a well-known fact that for the sake of security and privacy, browsers don't allow web applications to
tamper with the local file system. Local files are used in a web application only when the user decides to
upload them to the server using the HTML <input> element of type file. The title of this chapter may
surprise you at first, because the term File API gives the impression of being a full-blown file-system
manipulation object model like the System.IO namespace of .NET Framework. Obviously, the people
behind HTML5 are aware of the security issues such an object model can create. So, the File API is
essentially a cut-down version of a file-handling system in which files can only be read and can't be
modified or deleted. Additionally, the File API can't read any random file on the machine. File(s) to be read
must be explicitly supplied by the user. Thus, the File API is a safe way to read and optionally upload local
files with user consent.
This chapter examines what the File API can do for you and how it can be used in ASP.NET web
applications. Specifically, you learn the following:
• Classes available as a part of the File API
• Techniques of selecting iles to be used with the File API
• Using HTML5 native drag-and-drop
• Reading iles with the File API
• Uploading iles to the server
Understanding the File API
The HTML5 File API consists of a set of three objects (see Table 9-1) that allow you to read files residing on
the client computer. The files to be read must be explicitly selected by the user using one of the supported
techniques discussed in later sections. Once selected, you can read the files using JavaScript code.
Search WWH ::
Custom Search
|
http://what-when-how.com/Tutorial/topic-524phdg/HTML5-Programming-for-ASPNET-Developers-227.html
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
Presentation helps you to make tutorials, release notes and animated pages
Presentation
Looking for the easiest way of presenting something in your iOS app? Then you are in the right place. Presentation will help you make your tutorials, release notes and any kind of animated pages with the minimum amount of effort.
Presentation includes the following features:
- Custom positioning: You can use Position for percentage-based position declaration.
- Content: View model used for custom positioning and animations. It translates your percents to AutoLayout constraints behind the scenes.
- Slides: You can use any kind of UIViewController as a slide. SlideController is your good friend if you want to use custom positioning and animation features on your pages.
- Background: You can add views that are visible across all the pages. Also it's possible to animate those views during the transition to the specific page.
- Animations: You can easily animate the appearance of a view on the specific page.
Presentation works both on the iPhone and the iPad. You can use it with both Swift and Objective-C.
Try one of our demos to see how it works:
pod try Presentation
Usage
Presentation controller
import Presentation let viewController1 = UIViewController() viewController1.title = "Controller A" let viewController2 = UIViewController() viewController2.title = "Controller B" let presentationController = PresentationController(pages: [viewController1, viewController2])
If that's the only thing you need, look into Pages.
Position
Position is percentage-based; you can use
left,
right,
top,
bottom to set a position.
let position = Position(left: 0.3, top: 0.4)
Content view model
Content view model is a layer between
UIView and
Position. The current position is the center of a view by default, but can also be changed to the origin of a view.
let view = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 100)) let position = Position(left: 0.3, top: 0.4) let centeredContent = Content(view: label, position: position) let originContent = Content(view: label, position: position, centered: false)
Slides
let label = UILabel(frame: CGRect(x: 0, y: 0, width: 200, height: 100)) label.text = "Slide 1" let position = Position(left: 0.3, top: 0.4) let content = Content(view: label, position: position) let controller = SlideController(contents: [content]) presentationController.add([controller])
Page animations
let contents = ["Slide 1", "Slide 2", "Slide 3"].map { title -> Content in let label = UILabel(frame: CGRect(x: 0, y: 0, width: 200, height: 100)) label.text = title let position = Position(left: 0.3, top: 0.4) return Content(view: label, position: position) } var slides = [SlideController]() for index in 0...2 { let content = contents[index] let controller = SlideController(contents: [content]) let animation = TransitionAnimation( content: content, destination: Position(left: 0.5, top: content.initialPosition.top), duration: 2.0, dumping: 0.8, reflective: true) controller.add(animations: [animation]) slides.append(controller) } presentationController.add(slides)
Background views
let imageView = UIImageView(image: UIImage(named: "image")) let content = Content(view: imageView, position: Position(left: -0.3, top: 0.2)) presentationController.addToBackground([content]) // Add pages animations presentationController.add(animations: [ TransitionAnimation(content: content, destination: Position(left: 0.2, top: 0.2))], forPage: 0) presentationController.add(animations: [ TransitionAnimation(content: content, destination: Position(left: 0.3, top: 0.2))], forPage: 1)
Installation
Presentation is available through CocoaPods. To install
it, simply add the following line to your Podfile:
pod 'Presentation'
Presentation is also available through Carthage.
To install just write into your Cartfile:
github "hyperoslo/Presentation"
|
https://iosexample.com/presentation-helps-you-to-make-tutorials-release-notes-and-animated-pages/
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
By Dr. Mufajjul Ali, Data Solution Architect at CSU UK
ONNX (Open Neural Network Exchange) is an open format that represents deep learning models. It is a community project championed by Facebook and Microsoft. Currently there are many libraries and frameworks which do not interoperate; hence, the developers are often locked into using one framework or ecosystem. The goal of ONNX is to enable these libraries and frameworks to work together by sharing of models.
Why ONNX?
ONNX provides an intermediate representation (IR) of models (see below), whether a model is created using CNTK, TensorFlow or another framework. The IR representation allows deployment of ONNX models to various targets, such as IoT, Windows, Azure or iOS/Android.
The intermediate representation provides data scientist with the ability to choose the best framework and tool for the job at hand, without having to worry about how it will be deployed and optimized for the deployment target.
How should we use ONNX?
ONNX provides two core components:
- Extensible computational graph model. This model is represented as an acyclic graph, consisting of a list of nodes with edges. It also contains metadata information, such as author, version, etc.
- Operators. These form the nodes in a graph. They are portable across the frameworks and are currently of three types.
- Core Operators - These are supported by all ONNX-compatible products.
- Experimental Operators - Either supported or deprecated within few months.
- Custom Ops - which are specific to a framework or runtime.
Where should we use ONNX?
ONNX emphasises reusability, and there are four ways of obtaining ONNX models:
- Public Repository
- Custom Vision Services
- Convert to ONNX model
- Create your own in DSVM or Azure Machine Learning Services
For custom models and converters, please review the currently supported export and import model framework. We'll go through a step-by-step guide for creating a custom model, convert it into ONNX then operationalise it using Flask.
Step-by-step Guide to Operationalizing ONNX models
The focus areas are as follows:
- How to install ONNX on your Machine
- Creating a Deep Neural Network Model Using Keras
- Exporting the trained model using ONNX
- Deploying ONNX in Python Flask using ONNX runtime as a web service
We are using the MNIST dataset for building a deep ML classification model.
Step 1: Environment setup
Conda is an open-source package management system and management environment, primarily designed for Python, that quickly installs, runs and updates packages and their dependencies.
It is recommended that you use a separate Conda environment for the ONNX installation, as this would avoid potential conflicts with existing libraries. Note: If you do not have Conda installed, you can install it from the official website.
Create & Activate Conda Environment
To create a new Conda environment, type the following commands in the terminal window:
1 conda create -n onnxenv python=3.6.6 2 Source activate onnxenv (Mac/Linux) 3 activate onnxenv (Windows)
Line 1 creates an environment called ‘onnxenv’ for Python version 3.6.6 and installs all the libraries and their dependencies. It is possible to install other versions simply by referencing the version number. Lines 2-3 activate the environment depending on the choice of platform.
Installing ONNX/Keras and other Libraries
The core python library for ONNX is called onnx and the current version is 1.3.0.
1 pip install onnx 2 pip install onnxmltools 3 pip install onnxruntime 4 pip install Keras 5 pip install matplotlib 6 pip install opencv_python
Lines 1-3 install the libraries that are required to produce ONNX models and the runtime environment for running an ONNX model. The ONNX tools enable converting of ML models from another framework to ONNX format. Line 4 installs the Keras library which is a deep machine learning library that is capable of using various backends such CNTK, TensorFlow and Theano.
Step 2: Preparing the Dataset
In this example, you will be creating a Deep Neural Network for the popular MNIST dataset. The MNIST dataset is designed to learn and classify handwritten characters.
There is some prep work required on the dataset before we can start building and training the ML model. We need to load the MNIST dataset and split them between train and test sets. Line 1 below loads the dataset and provides a default split of 60:40 (60% training, and 40% testing).
… 1 (train_features, train_label), (test_features, test_label) = mnist.load_data()… 2 train_features = train_features.astype('float32') 3 train_features/=255 … 6 if backend.image_data_format() == 'channels_first': 7 #set channel at the begining 8 else: 9 #set the channel at the end
The total split of the data is [60000, 28, 28] [60000,] training dataset, and [10000, 28, 28] [10000,] testing dataset. Lines 2-5 ensure that datatype is float and ranges between 0.0-1.0.
Since we are dealing with grayscaled images, there is only one channel. Lines 6-7 check to see if the channel is at the beginning (1st dimension of the vector) or at the end. You should see an output with added channel: (60000, 28, 28, 1) (60000,) (10000, 28, 28, 1) (10000,)
- Full source code (see lines 22-45)
Step 3: Convolutional Neural Network (Deep Learning)
In this example we are going to use Convolutional Neural Network to do the handwritten classification. So, the question is why are we considering CNN, and not a traditional fully connected network? The answer is simple; if you have an image with N (width) * M (height) * F (filters) and is connected to H (hidden) nodes, then the total number of network parameters would be:
𝑃=(𝑵∗𝑴∗𝑭)∗𝑯,
where N= image width, M=image height and F = Filters, H = hidden nodes
So, for example, a colour image with 1000 * 1000 with 3 filters (R, G, B), connected to 1000 hidden layers. This would produce a network with (P = 1000 * 1000 * 3 * 1000) 3bn parameters to train.
It is difficult to get enough data to train such model without overfitting, as well as computational and memory requirements.
A Convolution Neural Network breaks this problem into two parts. It first detects various features such as edges, objects, faces etc., using convolutional layers (filters, padding and strides). Then it uses a polling layer to reduce the dimensionality of the network. Secondly, it applies the classification layer as a fully connected layer to classify the model.
You can find a more detailed explanation on GitHub.
Defining the CNN model
… 1 keras_model = Sequential() 2 keras_model.add(Conv2D(16, strides= (1,1), padding= 'valid', kernel_size=(3, 3), activation='relu', input_shape=input_shape)) 3 keras_model.add(Conv2D(32, (3, 3), activation='relu')) 4 keras_model.add(Flatten()) …. 5 keras_model.add(Dropout(0.5)) 6 keras_model.add(Dense(self.number_of_classes, activation='softmax'))… 7 keras_model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
Here we define an 11-layer CNN model with Keras. The first layer (line 2) of the Convolutional Neural Network consists of input shape of [28 ,28, 1], and uses 16 filters with size [3,3] and the activation function is Relu. This will produce an output matrix of [14,14,32].
The next layer contains 32 filters with filter size of [3,3], which produces an output of [12,12,32]. The following layer applies a dropout (line 6), with {2,2}, which results in a matrix of [6,6,32]. Line 7 flattens the output, so that it is one-dimensional vector. This is then used as the input to the fully connected layer (line 12) with 128 nodes. The model uses the categorical cross entropy as the loss function, with adadelta as the optimiser.
Train/test the model
The model is trained in batch mode with a size of 250, and epochs of 500. The model is then scored using the test data.
… 1 model_history = model.fit(train_features, train_label, batch_size=self.max_batch_size,epochs=self.number_of_epocs, verbose=1,validation_data=(train_features, train_label)) 2 score = model.evaluate(test_features, test_label, verbose=0) …
You should see two outputs, one showing the loss value and the other showing accuracy. Over time and after a number of iterations, the loss value should decrease significantly and accuracy should increase.
- Full source code (see lines 44-104)
Please try using various hypermeter and layers to see the impact on the accuracy of the output.
Step 4: Convert to ONNX Model
Since the model is generated using Keras, which uses a TensorFlow backend, the model cannot directly be produced as an ONNX model. We therefore need to use a converter tool to convert from a Keras Model into an ONNX model. The winmltools module contains various methods for handing ONNX operations, such as save, convert and others. This module handles all the underlying complexity and provides seamless generation/transformation to our ONNX model. The code below converts a Keras model into ONNX model, then saves it as an ONNX file.
1 convert_model = winmltools.convert_keras(model) 2 winmltools.save_model(convert_model, "mnist.onnx")
This model can now be taken and deployed in an ONNX runtime environment.
Step 5: Operationalize Model
In this section, we first create a score file that defines inference of the model. This file typically provides two methods: init and run. The init function is designed to load the ONNX model and make it available for inferencing. The run method is invoked to carry out the scoring. It provides various pre-processing of the input data, then scores the data using the model, returning the output as a JSON value.
The score file is used by the Flask framework to expose the model as a REST end point, which can then be consumed by a service or an end user.
… 1 def init(): 2 global onnx_session 3 onnx_session = onnxruntime.InferenceSession('model/mnist.onnx') 4 def run(raw_data): .. 5 classify_output = onnx_session.run(None, {onnx_session.get_inputs()[0].name:reshaped_img.astype('float32')}) .. 6 return json.dumps({"prediction":classify_output}, cls=NumpyEncoder)
The above code (lines 1-3) initialises the ONNX model by using the ONNX runtime environment. The runtime loads the model and makes it available for the duration of the session. The run method (lines 4-6) receives an image, which is resized to [28,28], and reshaped to a matrix of [1, 28, 28, 1]. The classifier returns a json output with 10 classes with weighted distribution of value adding to 1.
The next step is to use the Python Flask framework to deploy the model as a REST web service. The web service exposes a POST REST end point for client interaction. First, we need to install the flask libraries:
pip install flask
Once the flask library is installed, we need to create a file that defines the endpoints and the REST methods it can handle.
The code below (line 1), loads the ONNX model by invoking the init method. The application then waits for requests on the “/api/classify_digit” endpoint on port 80 as a HTTP POST. When a request comes in, the image is grayscaled and the run method is invoked. Additional validations can be added to check the request input.
.. 1 mlmodel.score.init() … 2 @app.route("/api/classify_digit", methods=['POST']) 3 def classify(): 4 input_img = np.fromstring(request.data, np.uint8) 5 img = cv2.imdecode(input_img, cv2.IMREAD_GRAYSCALE) 6 classify_response = "".join(map(str, mlmodel.score.run(img))) 7 return (classify_response)
To run the Web Server locally, type the following commands in the terminal:
export FLASK_APP = onnxop.py python -m flask run
This will start a server on port 5000. To test the web service endpoint, you can send a POST request using Postman. If you do not have it installed, you can download it from the Postman website.
Summary
In this starter guide, we have shown how to create an end-to-end machine learning experiment using ONNX, Keras and Flask as a first principle. The solution clearly articulates how easy it is to use ONNX to operationalise a deep learning machine learning model, and deploy it using popular frameworks such as Flask. Of course, the Flask application can be wrapped around as a Docker image and deployed to the cloud on a Docker-enabled container service, such as ACI or AKS.
However, there are a few aspects of machine learning experiments, such as how to maintain versioning of models, keeping track of various experiments and training the experiments on various compute nodes, which are vital for production grade machine learning solution and have not been covered. This is primarily due to the challenge of implementing such functionalities as a code-first approach, although there are libraries out there that can help. This is where the power of the Azure Machine Learning service comes into the picture. The Machine Learning Service greatly simplifies the data science process by providing high level building blocks that hide the complexity of manually configuring and running the experiments on various compute targets. It also supports out of the box operationalising capabilities, such as generating Docker images with REST endpoints, which can be deployed on a container service.
|
https://blogs.technet.microsoft.com/uktechnet/2019/02/12/operationalising-deep-learning-cnn-model-using-onnx-keras-flask-a-starter-guide/
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Creating a New JVM Language, Part 1: The Lexer
Creating a New JVM Language, Part 1: The Lexer
First step: generate an abstract syntax tree. First part of the first step: the lexer.
Join the DZone community and get the full member experience.Join For different components of the compiler and related tools.
Structure of the Compiler
The compiler needs to do several things:
- Get the source code and generate an abstract syntax tree (AST)
- Translate the AST through different stages to simplify processing. We basically want to move from a representation very close to the syntax of a representation easier to process. For example, we could “desugarize” the language, representing several (apparently) different constructs as variants of the same construct. An example? The Java compiler translates string concatenations into calls to StringBuffer.append
- Perform semantic checks. For example, we want to check if all the expressions are using acceptable types (we do not want to sum characters, right?)
- Generate bytecode
The first step requires building two components: a lexer and a parser. The lexer operates on the text and produces a sequence of tokens while the parser composes tokens into constructs (a type declaration, a statement, an expression, etc.) creating the AST. For writing the lexer and the parser I have used ANTLR.
In the rest of this post, we look into the lexer. The parser and the other components of the compiler will be treated in future posts.
Why Using ANTLR?
ANTLR is a very mature tool for writing lexer and parsers. It can generate code for several languages and has decent performance. It is well maintained and I was sure it had all the features I could possibly need to handle all the corner cases I could meet. In addition to that, ANTLR 4 makes it possible to write simple grammars because it solves left recursive definition for you. So you do not have to write many intermediate node types for specifying precedence rules for your expressions. More on this when we will look into the parser.
ANTLR is used by Xtext (which I have used a lot) and I have used ANTLR while building a framework for Model-driven development for the .NET platform (a sort of EMF for .NET). So I know and trust ANTLR and I have no reason for looking into alternatives.
The Current Lexer Grammar
This is the current version of the lexer grammar.
lexer grammar TurinLexer; @header { } @lexer::members { public static final int WHITESPACE = 1; public static final int COMMENTS = 2; } // It is suggested to define the token types reused in different mode. // See mode in-interpolation below tokens { VALUE_ID, TYPE_ID, INT, LPAREN, RPAREN, COMMA, RELOP, AND_KW, OR_KW, NOT_KW } // Of course keywords has to be defined before the rules for identifiers NAMESPACE_KW : 'namespace'; PROGRAM_KW : 'program'; PROPERTY_KW : 'property'; TYPE_KW : 'type'; VAL_KW : 'val'; HAS_KW : 'has'; ABSTRACT_KW : 'abstract'; SHARED_KW : 'shared'; IMPORT_KW : 'import'; AS_KW : 'as'; VOID_KW : 'Void'; RETURN_KW : 'return'; FALSE_KW : 'false'; TRUE_KW : 'true'; IF_KW : 'if'; ELIF_KW : 'elif'; ELSE_KW : 'else'; // For definitions reused in mode in-interpolation we define and refer to fragments AND_KW : F_AND; OR_KW : F_OR; NOT_KW : F_NOT; LPAREN : '('; RPAREN : ')'; LBRACKET : '{'; RBRACKET : '}'; LSQUARE : '['; RSQUARE : ']'; COMMA : ','; POINT : '.'; COLON : ':'; // We use just one token type to reduce the number of states (and not crash Antlr...) // EQUAL : '==' -> type(RELOP); DIFFERENT : '!=' -> type(RELOP); LESSEQ : '<=' -> type(RELOP); LESS : '<' -> type(RELOP); MOREEQ : '>=' -> type(RELOP); MORE : '>' -> type(RELOP); // ASSIGNMENT has to comes after EQUAL ASSIGNMENT : '='; // Mathematical operators cannot be merged in one token type because // they have different precedences ASTERISK : '*'; SLASH : '/'; PLUS : '+'; MINUS : '-'; PRIMITIVE_TYPE : F_PRIMITIVE_TYPE; BASIC_TYPE : F_BASIC_TYPE; VALUE_ID : F_VALUE_ID; // Only for types TYPE_ID : F_TYPE_ID; INT : F_INT; // Let's switch to another mode here STRING_START : '"' -> pushMode(IN_STRING); WS : (' ' | '\t')+ -> channel(WHITESPACE); NL : '\r'? '\n'; COMMENT : '/*' .*? '*/' -> channel(COMMENTS); LINE_COMMENT : '//' ~[\r\n]* -> channel(COMMENTS); mode IN_STRING; STRING_STOP : '"' -> popMode; STRING_CONTENT : (~["\\#]|ESCAPE_SEQUENCE|SHARP)+; INTERPOLATION_START : '#{' -> pushMode(IN_INTERPOLATION); mode IN_INTERPOLATION; INTERPOLATION_END : '}' -> popMode; I_PRIMITIVE_TYPE : F_PRIMITIVE_TYPE -> type(PRIMITIVE_TYPE); I_BASIC_TYPE : F_BASIC_TYPE -> type(BASIC_TYPE); I_FALSE_KW : 'false' -> type(FALSE_KW); I_TRUE_KW : 'true' -> type(TRUE_KW); I_AND_KW : F_AND -> type(AND_KW); I_OR_KW : F_OR -> type(OR_KW); I_NOT_KW : F_NOT -> type(NOT_KW); I_IF_KW : 'if' -> type(IF_KW); I_ELSE_KW : 'else' -> type(ELSE_KW); I_VALUE_ID : F_VALUE_ID -> type(VALUE_ID); I_TYPE_ID : F_TYPE_ID -> type(TYPE_ID); I_INT : F_INT -> type(INT); I_COMMA : ',' -> type(COMMA); I_LPAREN : '(' -> type(LPAREN); I_RPAREN : ')' -> type(RPAREN); I_LSQUARE : '[' -> type(LSQUARE); I_RSQUARE : ']' -> type(RSQUARE); I_ASTERISK : '*' -> type(ASTERISK); I_SLASH : '/' -> type(SLASH); I_PLUS : '+' -> type(PLUS); I_MINUS : '-' -> type(MINUS); I_POINT : '.' -> type(POINT); I_EQUAL : '==' -> type(RELOP); I_DIFFERENT : '!=' -> type(RELOP); I_LESSEQ : '<=' -> type(RELOP); I_LESS : '<' -> type(RELOP); I_MOREEQ : '>=' -> type(RELOP); I_MORE : '>' -> type(RELOP); I_STRING_START : '"' -> type(STRING_START), pushMode(IN_STRING); I_WS : (' ' | '\t')+ -> type(WS), channel(WHITESPACE); fragment F_AND : 'and'; fragment F_OR : 'or'; fragment F_NOT : 'not'; fragment F_VALUE_ID : ('_')*'a'..'z' ('A'..'Z' | 'a'..'z' | '0'..'9' | '_')*; // Only for types fragment F_TYPE_ID : ('_')*'A'..'Z' ('A'..'Z' | 'a'..'z' | '0'..'9' | '_')*; fragment F_INT : '0'|(('1'..'9')('0'..'9')*); fragment F_PRIMITIVE_TYPE : 'Byte'|'Int'|'Long'|'Boolean'|'Char'|'Float'|'Double'|'Short'; fragment F_BASIC_TYPE : 'UInt'; fragment ESCAPE_SEQUENCE : '\\r'|'\\n'|'\\t'|'\\"'|'\\\\'; fragment SHARP : '#'{ _input.LA(1)!='{' }?;
A few choices I have done:
- there are two different types of ID: VALUE_ID and TYPE_ID. This permits to have less ambiguity in the grammar because values and types can be easily distinguished. In Java instead, when (foo) is encountered we do not know if it is an expression (a reference to the value represented by foo between parenthesis) or a cast to the type foo. We need to look at what follows to understand it. In my opinion, this is rather stupid because in practice everyone is using capitalized identifiers for types only, but because this is not enforced by the language the compiler cannot take advantage from it
- newlines are relevant in Turin, so we have tokens for them we basically want to have statements terminated by newlines but we accept optional newlines after commas
- whitespaces (but newlines) and comments are captured in their own channels, so that we can ignore them in the parser grammar but we can retrieve them when needed. For example, we need them for syntax highlighting and in general, for the IntelliJ plugin because it requires defining tokens for each single character in the source file, without gaps
- the trickiest part is parsing string interpolations à la Ruby such as “my name is #{user.name}”. We use modes: when we encounter a string start (“) we switch to lexer mode IN_STRING. While in mode IN_STRING if we encounter the start of an interpolated value (#{) we move to lexer mode IN_INTERPOLATION. While in mode IN_INTERPOLATION we need to accept most of the tokens used in expressions (and that sadly means a lot of duplication in our lexer grammar).
- I had to collapse the relational operators in one single token type so that the number of states of the generated lexer is not too big. It means that I will have to look into the text of RELOP tokens to figure out which operation need to be executed. Nothing too awful but you have to know how to fix these kinds of issues.
Testing the Lexer
I wrote a bunch of tests specifically for the lexer. In particular, I tested the most involved part: the one regarding string interpolation.
An example of a few tests:
@Test public void parseStringWithEmptyInterpolation() throws IOException { String code = "\"Hel#{}lo!\""; verify(code, TurinLexer.STRING_START, TurinLexer.STRING_CONTENT, TurinLexer.INTERPOLATION_START, TurinLexer.INTERPOLATION_END, TurinLexer.STRING_CONTENT, TurinLexer.STRING_STOP); } @Test public void parseStringWithInterpolationContainingID() throws IOException { String code = "\"Hel#{foo}lo!\""; verify(code, TurinLexer.STRING_START, TurinLexer.STRING_CONTENT, TurinLexer.INTERPOLATION_START, TurinLexer.VALUE_ID, TurinLexer.INTERPOLATION_END, TurinLexer.STRING_CONTENT, TurinLexer.STRING_STOP); } @Test public void parseStringWithSharpSymbol() throws IOException { String code = "\"Hel#lo!\""; verify(code, TurinLexer.STRING_START, TurinLexer.STRING_CONTENT, TurinLexer.STRING_STOP); } @Test public void parseMethodDefinitionWithExpressionBody() throws IOException { String code = "Void toString() = \"foo\""; verify(code, TurinLexer.VOID_KW, TurinLexer.VALUE_ID, TurinLexer.LPAREN, TurinLexer.RPAREN, TurinLexer.ASSIGNMENT, TurinLexer.STRING_START, TurinLexer.STRING_CONTENT, TurinLexer.STRING_STOP); }
As you can see I have just tested the tokenizer on a string and verified it produces the correct list of tokens. Easy and straight to the point.
Conclusions
My experience with ANTLR for this language has not been perfect: there are issues and limitations. Having to collapse several operators in a single token type is not nice. Having to repeat several token definitions for different lexer models is bad. However ANTLR proved to be a tool usable in practice: it does all that it needs to do and for each problem there is an acceptable solution. The solution is maybe not ideal, maybe not elegant as desired but there is one. So I can use it and move on to more interesting parts of the compiler.
Published at DZone with permission of Federico Tomassetti , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }}
|
https://dzone.com/articles/turin-programming-language-for-the-jvm-building-ad
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
A generator of code for dioc containers.
All registrations are made by annotating a partial bootstrapper class.
Various environments could be declared as methods that return a
Container. You have to decorate these methods with
Provide annotations to register dependencies.
library example; import 'package:dioc/src/container.dart'; import 'package:dioc/src/built_container.dart'; part "example.g.dart"; @bootsrapper @Provide.implemented(OtherService) // Default registration for all environments abstract class AppBootsrapper extends Bootsrapper { @Provide(Service, MockService) Container development(); @Provide(Service, WebService) Container production(); }
To indicate how to inject dependencies, you can also decorate your class fields with
Inject annotations (and
@inject,
@singleton shortcuts).
class OtherService { @inject final Service dependency; @singleton final Service dependency2; OtherService(this.dependency,{this.dependency2}); }
To trigger code generation, run the command :
pub run build_runner build
Then simply use the provided builder to create your
Container.
final container = AppBootsrapperBuilder.instance.development();
A complete example is also available.
Please file feature requests and bugs at the issue tracker.
Add this to your package's pubspec.yaml file:
dependencies: dioc_generator: ^0.1.0
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support
pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:dioc_generator/dioc_generator.dart';
We analyzed this package on May 30, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms:
Error(s) prevent platform classification:
Error(s) in lib/dioc_generator.dart: Arguments of a constant creation must be constant expressions.
Fix
lib/dioc_generator.dart. (-44.87 points)
Analysis of
lib/dioc_generator.dart failed with 2 errors, 4 hints, including:
line 22 col 78: Arguments of a constant creation must be constant expressions.
line 22 col 78: Undefined name 'Bootsrapper'.
line 15 col 47: Use
= to separate a named parameter from its default value.
line 15 col 70: Use
= to separate a named parameter from its default value.
line 45 col 13: DO use curly braces for all flow control structures.
Format
lib/builder.dart.
Run
dartfmt to format
lib/builder.dart.
Fix platform conflicts. (-20 points)
Error(s) prevent platform classification:
Error(s) in lib/dioc_generator.dart: Arguments of a constant creation must be constant expressions.
dioc_generator.dart. Packages with multiple examples should provide
example/README.md.
For more information see the pub package layout conventions.
|
https://pub.dev/packages/dioc_generator/versions/0.1.0
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
UilDumpSymbolTable(library call) UilDumpSymbolTable(library call)
NAME [Toc] [Back]
UilDumpSymbolTable - Dumps the contents of a named UIL symbol table to
standard output
SYNOPSIS [Toc] [Back]
#include <uil/UilDef.h>
void UilDumpSymbolTable(
sym_entry_type *root_ptr);
DESCRIPTION [Toc] [Back]
The UilDumpSymbolTable function dumps the contents of a UIL symbol
table pointer to standard output.
root_ptr Specifies a pointer to:
include file UilSymDef.h. Note that this file is automatically
included when an application includes the file UilDef.h.
- 1 - Formatted: January 24, 2005
UilDumpSymbolTable(library call) UilDumpSymbolTable(library call)
RELATED [Toc] [Back]
Uil(3)
- 2 - Formatted: January 24, 2005
|
http://nixdoc.net/man-pages/HP-UX/man3/UilDumpSymbolTable.3.html
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Walkthrough: Office Programming (C# and Visual Basic)
Visual Studio offers features in C# and Visual Basic that improve Microsoft Office programming. Helpful C# features include named and optional arguments and return values of type
dynamic. In COM programming, you can omit the
ref keyword and gain access to indexed properties. Features in Visual Basic include auto-implemented properties, statements in lambda expressions, and collection initializers.
Both languages enable embedding of type information, which allows deployment of assemblies that interact with COM components without deploying primary interop assemblies (PIAs) to the user's computer. For more information, see Walkthrough: Embedding Types from Managed Assemblies.
This walkthrough demonstrates these features in the context of Office programming, but many of these features are also useful in general programming. In the walkthrough, you use an Excel Add-in application to create an Excel workbook. Next, you create a Word document that contains a link to the workbook. Finally, you see how to enable and disable the PIA dependency.
Prerequisites
You must have Microsoft Office Excel and Microsoft Office Word installed on your computer to complete this walkthrough.
If you are using an operating system that is older than Windows Vista, make sure that .NET Framework 2.0 is installed.
Note
Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.
To set up an Excel Add-in application
Start Visual Studio.
On the File menu, point to New, and then click Project.
In the Installed Templates pane, expand Visual Basic or Visual C#, expand Office, and then click the version year of the Office product.
In the Templates pane, click Excel <version> Add-in.
Look at the top of the Templates pane to make sure that .NET Framework 4, or a later version, appears in the Target Framework box.
Type a name for your project in the Name box, if you want to.
Click OK.
The new project appears in Solution Explorer.
To add references
In Solution Explorer, right-click your project's name and then click Add Reference. The Add Reference dialog box appears.
On the Assemblies tab, select Microsoft.Office.Interop.Excel, version
<version>.0.0.0(for a key to the Office product version numbers, see Microsoft Versions), in the Component Name list, and then hold down the CTRL key and select Microsoft.Office.Interop.Word,
version <version>.0.0.0. If you do not see the assemblies, you may need to ensure they are installed and displayed (see How to: Install Office Primary Interop Assemblies).
Click OK.
To add necessary Imports statements or using directives
In Solution Explorer, right-click the ThisAddIn.vb or ThisAddIn.cs file and then click View Code.
Add the following
Importsstatements (Visual Basic) or
usingdirectives (C#) to the top of the code file if they are not already present.
using System.Collections.Generic; using Excel = Microsoft.Office.Interop.Excel; using Word = Microsoft.Office.Interop.Word;
Imports Microsoft.Office.Interop
To create a list of bank accounts
In Solution Explorer, right-click your project's name, click Add, and then click Class. Name the class Account.vb if you are using Visual Basic or Account.cs if you are using C#. Click Add.
Replace the definition of the
Accountclass with the following code. The class definitions use auto-implemented properties. For more information, see Auto-Implemented Properties.
class Account { public int ID { get; set; } public double Balance { get; set; } }
Public Class Account Property ID As Integer = -1 Property Balance As Double End Class
To create a
bankAccountslist that contains two accounts, add the following code to the
ThisAddIn_Startupmethod in ThisAddIn.vb or ThisAddIn.cs. The list declarations use collection initializers. For more information, see Collection Initializers.
var bankAccounts = new List<Account> { new Account { ID = 345, Balance = 541.27 }, new Account { ID = 123, Balance = -127.44 } };
Dim bankAccounts As New List(Of Account) From { New Account With { .ID = 345, .Balance = 541.27 }, New Account With { .ID = 123, .Balance = -127.44 } }
To export data to Excel
In the same file, add the following method to the
ThisAddInclass. The method sets up an Excel workbook and exports data to it.
void DisplayInExcel(IEnumerable<Account> accounts, Action<Account, Excel.Range> DisplayFunc) { var excelApp = this.Application; // Add a new Excel workbook. excelApp.Workbooks.Add(); excelApp.Visible = true; excelApp.Range["A1"].Value = "ID"; excelApp.Range["B1"].Value = "Balance"; excelApp.Range["A2"].Select(); foreach (var ac in accounts) { DisplayFunc(ac, excelApp.ActiveCell); excelApp.ActiveCell.Offset[1, 0].Select(); } // Copy the results to the Clipboard. excelApp.Range["A1:B3"].Copy(); }
Sub DisplayInExcel(ByVal accounts As IEnumerable(Of Account), ByVal DisplayAction As Action(Of Account, Excel.Range)) With Me.Application ' Add a new Excel workbook. .Workbooks.Add() .Visible = True .Range("A1").Value = "ID" .Range("B1").Value = "Balance" .Range("A2").Select() For Each ac In accounts DisplayAction(ac, .ActiveCell) .ActiveCell.Offset(1, 0).Select() Next ' Copy the results to the Clipboard. .Range("A1:B3").Copy() End With End Sub
Two new C# features are used in this method. Both of these features already exist in Visual Basic.
Method Add has an optional parameter for specifying a particular template. Optional parameters, new in C# 4, enable you to omit the argument for that parameter if you want to use the parameter's default value. Because no argument is sent in the previous example,
Adduses the default template and creates a new workbook. The equivalent statement in earlier versions of C# requires a placeholder argument:
excelApp.Workbooks.Add(Type.Missing).
For more information, see Named and Optional Arguments.
The
Rangeand
Offsetproperties of the Range object use the indexed properties feature. This feature enables you to consume these properties from COM types by using the following typical C# syntax. Indexed properties also enable you to use the
Valueproperty of the
Rangeobject, eliminating the need to use the
Value2property. The
Valueproperty is indexed, but the index is optional. Optional arguments and indexed properties work together in the following example.
// Visual C# 2010 provides indexed properties for COM programming. excelApp.Range["A1"].Value = "ID"; excelApp.ActiveCell.Offset[1, 0].Select();
In earlier versions of the language, the following special syntax is required.
// In Visual C# 2008, you cannot access the Range, Offset, and Value // properties directly. excelApp.get_Range("A1").Value2 = "ID"; excelApp.ActiveCell.get_Offset(1, 0).Select();
You cannot create indexed properties of your own. The feature only supports consumption of existing indexed properties.
For more information, see How to: Use Indexed Properties in COM Interop Programming.
Add the following code at the end of
DisplayInExcelto adjust the column widths to fit the content.
excelApp.Columns[1].AutoFit(); excelApp.Columns[2].AutoFit();
' Add the following two lines at the end of the With statement. .Columns(1).AutoFit() .Columns(2).AutoFit()
These additions demonstrate another feature in C#: treating
Objectvalues returned from COM hosts such as Office as if they have type dynamic. This happens automatically when Embed Interop Types is set to its default value,
True, or, equivalently, when the assembly is referenced by the /link compiler option. Type
dynamicallows late binding, already available in Visual Basic, and avoids the explicit casting required in Visual C# 2008 and earlier versions of the language.
For example,
excelApp.Columns[1]returns an
Object, and
AutoFitis an Excel Range method. Without
dynamic, you must cast the object returned by
excelApp.Columns[1]as an instance of
Rangebefore calling method
AutoFit.
// Casting is required in Visual C# 2008. ((Excel.Range)excelApp.Columns[1]).AutoFit(); // Casting is not required in Visual C# 2010. excelApp.Columns[1].AutoFit();
For more information about embedding interop types, see procedures "To find the PIA reference" and "To restore the PIA dependency" later in this topic. For more information about
dynamic, see dynamic or Using Type dynamic.
To invoke DisplayInExcel
Add the following code at the end of the
ThisAddIn_StartUpmethod. The call to
DisplayInExcelcontains two arguments. The first argument is the name of the list of accounts to be processed. The second argument is a multiline lambda expression that defines how the data is to be processed. The
IDand
balancevalues for each account are displayed in adjacent cells, and the row is displayed in red if the balance is less than zero. For more information, see Lambda Expressions.
DisplayInExcel(bankAccounts, (account, cell) => // This multiline lambda expression sets custom processing rules // for the bankAccounts. { cell.Value = account.ID; cell.Offset[0, 1].Value = account.Balance; if (account.Balance < 0) { cell.Interior.Color = 255; cell.Offset[0, 1].Interior.Color = 255; } });
DisplayInExcel(bankAccounts, Sub(account, cell) ' This multiline lambda expression sets custom ' processing rules for the bankAccounts. cell.Value = account.ID cell.Offset(0, 1).Value = account.Balance If account.Balance < 0 Then cell.Interior.Color = RGB(255, 0, 0) cell.Offset(0, 1).Interior.Color = RGB(255, 0, 0) End If End Sub)
To run the program, press F5. An Excel worksheet appears that contains the data from the accounts.
To add a Word document
Add the following code at the end of the
ThisAddIn_StartUpmethod to create a Word document that contains a link to the Excel workbook.
var wordApp = new Word.Application(); wordApp.Visible = true; wordApp.Documents.Add(); wordApp.Selection.PasteSpecial(Link: true, DisplayAsIcon: true);
Dim wordApp As New Word.Application wordApp.Visible = True wordApp.Documents.Add() wordApp.Selection.PasteSpecial(Link:=True, DisplayAsIcon:=True)
This code demonstrates several of the new features in C#: the ability to omit the
refkeyword in COM programming, named arguments, and optional arguments. These features already exist in Visual Basic. The PasteSpecial method has seven parameters, all of which are defined as optional reference parameters. Named and optional arguments enable you to designate the parameters you want to access by name and to send arguments to only those parameters. In this example, arguments are sent to indicate that a link to the workbook on the Clipboard should be created (parameter
Link) and that the link is to be displayed in the Word document as an icon (parameter
DisplayAsIcon). Visual C# also enables you to omit the
refkeyword for these arguments.
To run the application
- Press F5 to run the application. Excel starts and displays a table that contains the information from the two accounts in
bankAccounts. Then a Word document appears that contains a link to the Excel table.
To clean up the completed project
- In Visual Studio, click Clean Solution on the Build menu. Otherwise, the add-in will run every time that you open Excel on your computer.
To find the PIA reference
Run the application again, but do not click Clean Solution.
Select the Start. Locate Microsoft Visual Studio <version> and open a developer command prompt.
Type
ildasmin the Developer Command Prompt for Visual Studio window, and then press ENTER. The IL DASM window appears.
On the File menu in the IL DASM window, select File > Open. Double-click Visual Studio <version>, and then double-click Projects. Open the folder for your project, and look in the bin/Debug folder for your project name.dll. Double-click your project name.dll. A new window displays your project's attributes, in addition to references to other modules and assemblies. Note that namespaces
Microsoft.Office.Interop.Exceland
Microsoft.Office.Interop.Wordare included in the assembly. By default in Visual Studio, the compiler imports the types you need from a referenced PIA into your assembly.
For more information, see How to: View Assembly Contents.
Double-click the MANIFEST icon. A window appears that contains a list of assemblies that contain items referenced by the project.
Microsoft.Office.Interop.Exceland
Microsoft.Office.Interop.Wordare not included in the list. Because the types your project needs have been imported into your assembly, references to a PIA are not required. This makes deployment easier. The PIAs do not have to be present on the user's computer, and because an application does not require deployment of a specific version of a PIA, applications can be designed to work with multiple versions of Office, provided that the necessary APIs exist in all versions.
Because deployment of PIAs is no longer necessary, you can create an application in advanced scenarios that works with multiple versions of Office, including earlier versions. However, this works only if your code does not use any APIs that are not available in the version of Office you are working with. It is not always clear whether a particular API was available in an earlier version, and for that reason working with earlier versions of Office is not recommended.
Note
Office did not publish PIAs before Office 2003. Therefore, the only way to generate an interop assembly for Office 2002 or earlier versions is by importing the COM reference.
Close the manifest window and the assembly window.
To restore the PIA dependency
In Solution Explorer, click the Show All Files button. Expand the References folder and select Microsoft.Office.Interop.Excel. Press F4 to display the Properties window.
In the Properties window, change the Embed Interop Types property from True to False.
Repeat steps 1 and 2 in this procedure for
Microsoft.Office.Interop.Word.
In C#, comment out the two calls to
Autofitat the end of the
DisplayInExcelmethod.
Press F5 to verify that the project still runs correctly.
Repeat steps 1-3 from the previous procedure to open the assembly window. Notice that
Microsoft.Office.Interop.Wordand
Microsoft.Office.Interop.Excelare no longer in the list of embedded assemblies.
Double-click the MANIFEST icon and scroll through the list of referenced assemblies. Both
Microsoft.Office.Interop.Wordand
Microsoft.Office.Interop.Excelare in the list. Because the application references the Excel and Word PIAs, and the Embed Interop Types property is set to False, both assemblies must exist on the end user's computer.
In Visual Studio, click Clean Solution on the Build menu to clean up the completed project.
See also
- Auto-Implemented Properties (Visual Basic)
- Auto-Implemented Properties (C#)
- Collection Initializers
- Object and Collection Initializers
- Optional Parameters
- Passing Arguments by Position and by Name
- Named and Optional Arguments
- Early and Late Binding
- dynamic
- Using Type dynamic
- Lambda Expressions (Visual Basic)
- Lambda Expressions (C#)
- How to: Use Indexed Properties in COM Interop Programming
- Walkthrough: Embedding Type Information from Microsoft Office Assemblies in Visual Studio
- Walkthrough: Embedding Types from Managed Assemblies
- Walkthrough: Creating Your First VSTO Add-in for Excel
- COM Interop
- Interoperability
Feedback
Send feedback about:
|
https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/interop/walkthrough-office-programming
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Created on 2018-03-22 09:21 by Jonathan Huot, last changed 2018-03-24 04:25 by ncoghlan. This issue is now closed.
Executing python modules with -m can lead to weird sys.argv parsing.
"Argument parsing" section at mention :
- When -m module is used, sys.argv[0] is set to the full name of the located module.
The word "located" is used, but it doesn't mention anything when the module is not *yet* "located".
For instance, let's see what is the sys.argv for each python files:
$ cat mainmodule/__init__.py
import sys; print("{}: {}".format(sys.argv, __file__))
$ cat mainmodule/submodule/__init__.py
import sys; print("{}: {}".format(sys.argv, __file__))
$ cat mainmodule/submodule/foobar.py
import sys; print("{}: {}".format(sys.argv, __file__))
Then we call "foobar" with -m:
$ python -m mainmodule.submodule.foobar -o -b
['-m', '-o', 'b']: (..)/mainmodule/__init__.py
['-m', '-o', 'b']: (..)/mainmodule/submodule/__init__.py
['(..)/mainmodule/submodule/foobar.py', '-o', 'b']: (..)/mainmodule/submodule/foobar.py
$
We notice that only "-m" is in sys.argv before we found "foobar". This can lead to a lot of troubles when we have meaningful processing in __init__.py which rely on sys.argv to initialize stuff.
IMHO, it either should be the sys.argv intact ['-m', 'mainmodule.submodule.foobar', '-o', '-b'] or empty ['', '-o', '-b'] or only the latest ['-o', '-b'], but it should not be ['-m', '-o', '-b'] which is very confusing.
Two of your 3 suggested alternatives could lead to bugs. To use your example:
python -m mainmodule.submodule.foobar -o -b
is a convenient alternative and abbreviation for
python .../somedir/mainmodule/submodule/foobar.py -o -b
The two invocations should give equivalent results and to the extent possible the same result.
[What might be different is the form of argv[0]. In the first case, argv[0] will be the "preferred" form of the path to the python file while in the second, it will be whatever is given. On Windows, the difference might look like 'F:\\Python\\a\\tem2.py' versus 'f:/python/a/tem2.py']
Unless __init__.py does some evil monkeypatching, it cannot affect the main module unless imported directly or indirectly. So its behavior should be the same whether imported before or after execution of the main module. This means that argv must be the same either way (except for argv[0]). So argv[0:2] must be condensed to one arg before executing __init__. I don't see that '' is an improvement over '-m'.
Command line arguments are intended for the invoked command. An __init__.py file is never the command unless invoked by its full path: "python somepath/__init__.py". In such a case, sys.argv access should be within a "__name__ == '__main__':" clause or a function called therein.
This is deliberate, and is covered in the documentation at where it says 'If this option is given, the first element of sys.argv will be the full path to the module file (while the module file is being located, the first element will be set to "-m").'
The part in parentheses is the bit that's applicable here.
We've not going to change that, as the interpreter startup relies on checking sys.argv[0] for "-m" and "-c" in order to work out how it's expected to handle sys.path initialization.
|
https://bugs.python.org/issue33119
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
To develop and test your application locally, you can use the Cloud Pub/Sub
emulator, which provides local emulation
of the production Cloud Pub/Sub environment. You run the Cloud Pub/Sub
emulator using the
gcloud command-line tool.
To run your application against the emulator, you need to do some configuration first, such as starting the emulator and setting environment variables.
Prerequisites
You must have the following to use the Cloud:
gcloud beta emulators pubsub start [options]
where
[options] are command-line arguments supplied to the
gcloud
command-line tool. See
gcloud beta emulators pubsub start
for a complete list of options.
After you start the emulator, you should see a message that resembles the following:
... [pubsub] This is the Cloud Pub/Sub fake. [pubsub] Implementation may be incomplete or differ from the real system. ... [pubsub] INFO: Server started, listening on 8085
To test push subscriptions, use a valid local push endpoint
that can handle HTTP POST requests. For example, use
JSON Server
to start a fake REST API server at with as a resource.
Setting environment variables
After you start the emulator, you need to set environment variables so that your application connects to the emulator instead of Cloud Cloud Cloud GCP Cloud Pub/Sub Python samples from GitHub by cloning the full Python repository. The Cloud Pub/Sub samples are in the
pubsubdirectory.
In your cloned repository, navigate to the
pubsub/cloud-clientdirectory. You'll complete the rest of these steps in this directory.
From within the
pubsub/cloud-clientdirectory, install the dependencies needed to run the example:
pip install -r requirements.txt
Create a topic:
python publisher.py PUBSUB_PROJECT_ID create TOPIC_ID
Create a subscription to the topic:
- Create a pull subscription:
python subscriber.py PUBSUB_PROJECT_ID create TOPIC_ID SUBSCRIPTION_ID
- For push subscriptions, set up an endpoint as a resource on your local server. This can serve as a valid value for PUSH_ENDPOINT. For example,. Then create the push subscription:
python subscriber.py PUBSUB_PROJECT_ID create-push TOPIC_ID SUBSCRIPTION_ID PUSH_ENDPOINT
Publish messages to the topic:
python publisher.py PUBSUB_PROJECT_ID publish TOPIC_ID
Read the messages published to the topic:
python subscriber.py PUBSUB_PROJECT_ID receive SUBSCRIPTION_ID
Accessing environment variables
In all languages except for Java and C#, if you have set
PUBSUB_EMULATOR_HOST
as described in Setting environment variables,
the Cloud Pub/Sub client libraries automatically call the API running in the
local instance rather than Cloud Pub/Sub.
However, C# and Java client libraries require you to modify your code to use the emulator:
C#
Before trying this sample, follow the C# setup instructions in the Cloud Pub/Sub Quickstart Using Client Libraries . For more information, see the Cloud Pub/Sub C# API reference documentation .
// Google.Apis.Logging; using Google.Cloud.PubSub.V1; using Google.Cloud.Translation.V2; using static Google.Apis.Http.ConfigurableMessageHandler; using Grpc.Core; using System; using Google.Api.Gax.ResourceNames; namespace Google.Cloud.Tools.Snippets { public class FaqSnippets { public void Emulator() { // Sample: Emulator // For example, "localhost:8615" string emulatorHostAndPort = Environment.GetEnvironmentVariable("PUBSUB_EMULATOR_HOST"); Channel channel = new Channel(emulatorHostAndPort, ChannelCredentials.Insecure); PublisherServiceApiClient client = PublisherServiceApiClient.Create(channel); client.CreateTopic(new TopicName("project", "topic")); foreach (var topic in client.ListTopics(new ProjectName("project"))) { Console.WriteLine(topic.Name); } // End sample } public void RestLogging() { // Sample: RestLogging // Required using directives: // using static Google.Apis.Http.ConfigurableMessageHandler; // using Google.Apis.Logging; // using Google.Cloud.Translation.V2; // Register a verbose console logger ApplicationContext.RegisterLogger(new ConsoleLogger(LogLevel.All)); // Create a translation client TranslationClient client = TranslationClient.Create(); // Configure which events the message handler will log. client.Service.HttpClient.MessageHandler.LogEvents = LogEventType.RequestHeaders | LogEventType.ResponseBody; // Make the request client.ListLanguages(); // End sample } } }
Java
Before trying this sample, follow the Java setup instructions in the Cloud Pub/Sub Quickstart Using Client Libraries . For more information, see the Cloud()); ProjectTopicName topicName = ProjectTopic Cloud Pub/Sub:
unset PUBSUB_EMULATOR_HOST
set PUBSUB_EMULATOR_HOST=
Emulator command-line arguments
For details on command-line arguments for the Cloud Pub/Sub emulator, see
gcloud beta emulators pubsub.
Known limitations
- You cannot start or stop Publisher or Subscriber from your code.
- IAM operations are not currently supported.
To file issues, visit the Cloud Pub/Sub forum.
|
https://cloud.google.com/pubsub/docs/emulator?hl=tr
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
I had the chance, at an Intel IoT Hackaton taking place at Usine.io in Paris, beside an Intel Edison Arduino board and a bunch of Grove sensors/actuators, to also get the new Akene board from Snootlab.
Thanks to the Intel IoT guys, Nicolas from SigFox and all the staff of BeMyApp for this Hackaton…
Step 1: The Project...
I decided to build a small light sensor station, with the Intel Edison board, the Grove shield, the SigFox Akene Shield, and an I2C Grove TSL2561.
This station will upload, through the SigFox network about every 10 minutes, 3 different values (in lux) relating to that 10 minutes period: the arithmetic average light, the minimum and maximum light.
That will allow to know how the light fluctuate around its average value for this period of time, and therefore to get the big picture on how the sky could be cloudy (if some wind at least to push the clouds…).
I will use Python for that project.
Step 2: a Friendly Development Environment
I assume that the Intel Edison environment is ready for Python, and a password is set on the Edison in order to open SSH sessions and also SFTP to upload the python code.
I am using OS X, and will use CoolTerm for communication, and the excellent TextWrangler as code editor and code uploader (SFTP).
Step 3: The Framework: Sensor.py
There is no difficulty to mention at that step.
Step 4: What You Get With Sensor.py
>python sensor.py
Step 5: SigFox and the Akene Shield
The Akene board is an experimentation Arduino shield from SnootLab with the TD1208 SoC (System on Chip) on it. The TD1208 is a SigFox-certified radio transceiver combined with an ARM Cortex M3, which implements the telecommunication modem stack for sending values to the SigFox operated telecommunication network, and also includes I2C capabilities for IoT sensors, beside its serial modem link.
SIgFox services rely on a LPWA (Low-Power Wide-Area) network currently deployed in Western Europe, San Francisco, as well as other countries or cities. The SigFox protocol is designed for small messages and its technology is focusing on energy efficiency for devices clients and large area coverage for each infrastructure base station.
The SigFox network allows each device to send up to 140 messages per day (i.e. every 10mn), each of them up to 12 available bytes i.e. 6 short integer values (the timestamp and the unique device ID are also transmitted in addition).
More informations:
Step 6: Let's Make Sure the Akene Shield and Its Modem Is Functional
The Akene shield can be used as a modem, so we will first connect it to the Edison that way:
• Ground to Ground (Black wire)
• 3.3v to 3.3v (Red wire)
• serial Rx (pin 0) of Edison to Tx of Akene (pin D4) - Blue wire
• serial Tx (pin 1) of Edison to Rx of Akene (pin D5) - White wire
Step 7: Sending Direct Commands Through a Terminal to the TD1208 of the Akene Shield
We can consider the TD1208 as a modem.
The PySerial package has to installed.
>python -m serial.tools.miniterm
We use miniterm (part of PySerial) - and specify the serial port: /dev/ttyMFD1 - to send direct commands, like:
• AT
which should reply OK (otherwise there is a problem)
• AT&V
which reply by the TD1208 identification
• AT$SS01234567
which send the 01234567 message to the SigFox network (the maximum hexadecimal digits is 24, i.e. 12 bytes)
• AT?
which return the list of commands available
To exit miniterm, on OS X with a french keyboard: CTRL 6
Step 8: A Simple Python Command to Send a Message to the SigFox Network
This is a fast adaptation of a python command dedicated for another TD1208 board (RPISIGFOX from SNOC) and found on internet.
Just to mention that, for using serial with Intel Edison,
• it is prior necessary to initialise the port / pins which will be used, in our case /dev/ttyMFD1 (pins 0 and 1):
import libmraa
x=Uart(0)
• the port is automatically open when calling serial.Serial(......
You can use the command that way:
> python sendsigfox.py 01234567
which will send the message 01234567 to the SigFox network
Step 9: What You Get With Sendsigfox.py
>python sendsigfox.py 346723
Step 10: The Full Program: Sensor2sigfox.py
No special difficulty there also, just to mention:
• Each data sent is a short integer of 2 bytes. the valueAvg, valueMin and valueMax are therefore 6 bytes, i.e. 12 hexadecimal chars string. As the setup from SigFox in the case I am using is done in order that data are resent to actoboard.com, and that actoboard.com currently accept data in LittleEndian byte order, each data is, prior to be sent, transformed accordingly...
• the Akene shield is fully plugged on top of the Grove shield, and in order to fit the wiring of the serial of Edison on pins 0 and 1 and the serial of Akene on pins D4 and D5, straps has been used (0 to D4: White strap) and (1 to D5: Yellow strap), see first picture... That mean a clean wiring but no possibility to use Pin 4 and 5 of the Edison...
Step 11: What You Get With Sensor2sigfox.py
>python sensor2sigfox.py
2 Discussions
3 years ago
Thank you very much for your tutorial. We will share it on our forum.
You could put the SeeedStudio base shield on top of Akene () to access all Grove connectors.
Once the Edison library will be available, you'll won't need the two jumpers to connect Rx-Tx.
3 years ago on Introduction
Very nicely done, thanks for sharing this!
|
https://www.instructables.com/id/Simple-as-sending-IoT-sensor-values-through-SigFox/
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
A mediator of parallel system solving packaging problem. More...
#include <PackerMediator.hpp>
A mediator of parallel system solving packaging problem.
PackerMediator is a class representing mediator class of tree-structured parallel processing system solving packaging problem.
Packer mediator system is placed on between master and slave systems. It can be a slave system for its master, and also can be a master system for its own slave systems. Mediator is a class representing mediator class of parallel processing system.
The PackerMediator is built for providing a guidance of building a mediator system in a tree-structured parallel processing system. You can also learn how to utilize master module in protocol by following the example. 44 of file PackerMediator.hpp.
Default Constructor.
Definition at line 76 of file PackerMediator.hpp.
References replyOptimization().
Handle (replied) optimized value from a slave system.
When gets optimization result from Slave systems, Master aggregates them and derives the best solution between those results and reports the best solution to the Cheif system.
Definition at line 132 of file PackerMediator.hpp.
Referenced by PackerMediator().
Main functino.
Definition at line 162 of file PackerMediator.hpp.
A packer solver.
Definition at line 54 of file PackerMediator.hpp.
Referenced by replyOptimization().
A mutex for optimization.
The mutex exists for ensuring concurrency of aggregation of optimization reported from slave systems.
Definition at line 62 of file PackerMediator.hpp.
Number of slaves who'd completed an optimization.
Definition at line 67 of file PackerMediator.hpp.
|
http://samchon.github.io/framework/api/cpp/d7/d0a/classsamchon_1_1example_1_1interaction_1_1PackerMediator.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Since the release of Next.js, we’ve worked to introduce new features and tools that drastically improve application performance, as well as overall developer experience. Let’s take a look at what a difference upgrading to the latest version of Next.js can make.
In 2019, our team at Vercel created a serverless demo app called VRS (Virtual Reality Store) using Next.js 8, Three.js, Express, MongoDB, Mongoose, Passport.js, and Stripe Elements. Users could sign up, browse multiple 3D models, and purchase them.
Although this demo is still fully functional three years later, it lacked some of the performance improvements that were added over the years.
Let's explore the changes we made to improve performance and streamline the developer experience using new Next.js features.
Using
getStaticProps and
getStaticPaths
The old implementation relied on a separate backend folder that contained a custom Express server, which exposed an
/api/checkout endpoint to handle payments,
/api/get-products to fetch data used to render all models, and the
/api/get-product/:id endpoint to fetch the data for a specific model.
When a user navigated from the landing page to the
/store page,
getInitialProps would make a request to the
/api/get-products endpoint to retrieve data for the shown models.
The store page was only visible to the user once the request had been resolved. This could take a while, depending on the quality of their internet connection.
Although the user hadn’t clicked the "Store" link yet, the
store.js and
store.json files that are necessary to render the store page had already been prefetched. Using
getStaticProps with the
<Link /> component drastically improves the responsiveness of the application.
You might notice that the time it takes to fetch the thumbnails has also been reduced tremendously. We were able to do this by replacing the native
<img /> tag in favor of the
<Image /> component.
Using
next/image
The
<Image /> component was introduced in Next.js 10 and further improved in later versions, allowing developers to efficiently serve images using modern image formats without layout shift.
The performance improvement is instantly noticeable when we compare the loading time of the
/store page, which renders an image for each model thumbnail.
Let’s look at the differences between using the native
<img /> tag, and using the
<Image /> component.
Next-gen image format
The
<Image /> component serves the next-gen image format
webp format. Images using this format are 25-35% smaller than JPEG files with the exact same quality index. This difference is clearly visible when comparing the sizes of the fetched images: whereas the red car model’s image used to be 1.3MB in the old implementation, we were able to reduce the size by -98.75% to only 16.6kB by using the
<Image /> component.
Lazy Loading
The old implementation requested the images for all models, resulting in 12 fetch requests. The
<Image /> component only fetches the image once it detects the intersection of the viewport with the image’s bounding box.
Although no changes had to be made to the images themselves, we were able to decrease the image loading time from an average of ~3000ms down to ~270ms.
Dynamic Routes
When browsing through the store, users can click on each item to better view the model.
The old implementation used a combination of query parameters and
getInitialProps to render the page and fetch the needed data to render each model. Similar to what we saw on the
/store page, the user can only see the model once the API request initiated within the
getInitialProps function has resolved.
Next.js 9 introduced file-system-based dynamic routes. In combination with the new
getStaticPaths function used together with
getStaticProps, this feature makes it possible to dynamically pre-render the model pages based on their id.
Instead of having one model page and using
getInitialProps and query parameters to determine which data to fetch and what model to render, we can directly use a path parameter to generate pages for each model statically.
By wrapping each model card in the grid in a
<Link /> component, we can prefetch each
/model/[id] page once the card appears in the viewport, allowing instant navigation when a user clicks on the card.
API Routes
Next.js 9 introduced API Routes, making it easy to create API endpoints from within the
/pages folder.
Although we could replace the
/api/get-products and
/api/get-product/:id endpoints by using
getStaticProps, we still need the
/api/checkout endpoint to handle payments on the server-side.
This endpoint cannot be replaced with the
getStaticProps method, since it needs to be available to the client during runtime with values that are unknown during build time. When a user purchases an item, the client makes a call to this endpoint using the unique token that was generated for their card.
Instead of hosting our own server to provide this endpoint, we can recreate this endpoint as an API Route instead!
NextAuth.js
The old implementation also had user authentication using Passport.js. To more easily add authentication to our application, we can take advantage of NextAuth.js, which simplifies this process to a few lines of code with support for 50+ providers.
// pages/api/[...nextauth].ts import NextAuth from "next-auth"; import GithubProvider from "next-auth/providers/github"; export default NextAuth({ providers: [ GithubProvider({ clientId: process.env.GITHUB_CLIENT_ID, clientSecret: process.env.GITHUB_CLIENT_SECRET }) ] })
Since all endpoints have been replaced, we no longer need a separate backend folder! We can use the frontend folder as the project's root folder, simplifying the project architecture significantly.
Import on Interaction
Some components aren’t instantly visible to the user. Instead of including them in the main JavaScript bundle, we can dynamically import these components using
next/dynamic.
One of these components is the
CartSidebar. This component is imported in the
Nav component and is only visible to the user when they click on the cart icon or add an item to the cart.
import CartSidebar from "../components/CartSidebar" export default function Nav() { ... return ( <div> ... <CartSidebar /> </div> ) }
Instead of statically importing this component, we can tell Next.js to create a separate JavaScript chunk for this component through code-splitting. That way, we can delay the import of this non-critical component, and only fetch it on-demand once the user actually requires it.
import dynamic from "next/dynamic" const CartSidebar = dynamic(() => import("../components/CartSidebar")); export default function Nav() { ... return ( <div> ... {open && <CartSidebar />} </div> ) }
Since the
CartSidebar component is the only component in the application that imported and used third-party libraries for payments (Stripe), we were also able to defer the imports of these libraries until the moment the user actually needed them (instead of unnecessarily fetching unused code).
This resulted in sending less initial JavaScript to the user and improving page loading performance.
Automatic Font Optimization
Automatic Font Optimization is available since Next.js 10 and automatically inlines font CSS at build time, eliminating an extra round trip to fetch font declarations.
<link href="" rel="stylesheet">
<style data- @font-face{font-family:'Space Mono';font-style:normal;... </style>
This means that font declarations no longer have to be fetched, improving initial page load performance.
We were able to reduce the number of requests needed to load the font from 4 to 2 just by upgrading to the latest version.
Developer experience
Besides performance optimizations, the developer experience has also massively improved over the years.
Built-in TypeScript support
Whereas adding TypeScript support required quite a bit of custom configuration, Next.js 9 added support for TypeScript out of the box! We no longer have to deal with our own config, but instead can start using TypeScript by adding a
tsconfig.json file to the root of existing projects, or by running
npx create-next-app --ts for newly created projects.
Faster builds through SWC
Next.js 12 includes a new Rust-based compiler built on SWC that takes advantage of native compilation. We reduced our build time from
~90s down to
~30s just by upgrading the Next.js version.
React Fast Refresh
Fast Refresh is a Next.js feature enabled in all Next.js apps on version 9.4 or newer. It provides instantaneous feedback on edits made to your React components within a second without losing component state. The introduction of SWC in Next.js 12 improved the refresh rate significantly, resulting in 3x faster refreshes compared to prior versions.
Conclusion
The improvements and new features Next.js has introduced over the past couple of years have made it easy to create fast fullstack applications, all while ensuring backward compatibility and making incremental adoption to new versions possible.
By upgrading to the latest version, we were able to vastly optimize our application and developer experience with minimal effort on our end.
Try out the upgraded demo or view the full PR for the upgrade. If you'd like to upgrade your Next.js app, check out our upgrade guide.
|
https://vercel.com/blog/upgrading-nextjs-for-instant-performance-improvements
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
In the previous articles, we have talked about monitoring using Prometheus and other ways. In this article, we are going to talk about how you can write your own exporter using Python.
To write your own exporter you need to use prometheus_client library. To install it you can type the below command.
pip install prometheus_client
Now let’s look at the code where we will export the metrics and then our Prometheus can scrape those metrics.
from prometheus_client import start_http_server, Summary import random import time REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request') @REQUEST_TIME.time() def process_request(t): time.sleep(t) if __name__ == '__main__': start_http_server(9000) while True: process_request(random.random())
In the above code what we have done is imported
prometheus_client and started the http_server which will serve the metrics endpoint. We have created a
Request_Time function with help of
summary, this function is acting as a decorator and then can be added to any function to check how much time it took to execute that part of code.
Then in our
process_request function, we added a random sleep to simulate the time taken to execute the function. In main, we just started the server and then in a loop called the process_request function with random values. This will trigger the Request_Time function and the metrics get recorded. If you open
localhost:9000/metrics you will see something like below.
Now you can add this endpoint in Prometheus to start scraping.
- job_name: python static_configs: - targets: ['localhost:9000']
Now you Prometheus will start scrapping the metrics. You can now use Grafana to plot the metrics.
This was how you can write a very basic Prometheus exporter and then you to plot on Grafana.
If you like the article please share and subscribe.
|
https://www.learnsteps.com/writing-metrics-exporter-for-prometheus-using-python/
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Have you ever played a game that used a cooldown system that caused you to wait a certain amount of time before being able to attack, or use a spell? Game designers use this technique to create balance in the game, otherwise it will be to easy and there would be no challenge (how boring).
It turns out we're in the same situation with our player's projectile attack. There's nothing in our code to stop the player from shooting a projectile as quickly as they can hit the spacebar.
Yeah, we need to fix that. In this article, I'll cover just one way to make a cooldown system.
Time. Time. Who has the time?
My approach is to create two variables; one that defines the interval the player has to wait until shooting (I'll add the SerializeField attribute), and another to store the future time in seconds to wait, which is a sum of the current time and the wait interval after the player fires. Let's create the new variables in the Player class.
public class Player : MonoBehaviour { ... [SerializeField] float fireCooldownInterval = 0.5f; float fireCooldownDelay = -1; .... }
I set the interval to 0.5 as a place to start. The delay is set to -1 initially because the first to ensure the first shot happens without a delay.
Tracking Time Elapsed
In our fire condition, we'll only instantiate a projectile if the current time is greater than our delay time. Once the player shoots a projectile, we then want to assign the current time in seconds plus the interval to our delay.
void RespondToFire() { if (Input.GetKeyDown(KeyCode.Space) && Time.time > fireCooldownDelay) { Instantiate(projectilePrefab, projectileSpawnPoint.position, Quaternion.identity); fireCooldownDelay = Time.time + fireCooldownInterval; } }
I'll set the delay to one second so it's easy to see the effect. Now when I try to spam projectiles with the spacebar, I can only fire every second.
Summary
We now have an attack cooldown mechanic that can easily be modified by anybody on the team with no coding required. This is a pattern we can reuse throughout our game anytime we need a cooldown.
Take care.
Stay awesome.
|
https://blog.justinhhorner.com/adding-a-player-attack-cooldown
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Nginx - NTLM module
In my previous post Nginx - Custom upstream module I described in detail how you can develop your own custom nginx upstream module. You better check that first because I will refer it a lot.
TL;DR
Check out this github.com/gabihodoroaga/nginx-ntlm-module repository if you want to get started with a custom nginx upstream module.
The NTLM module
So, I was looking for a solution to configure a reverse proxy that supports NTLM authentication passthrough, and because this is not available unless you have a commercial subscription to Nginx, I thought to develop my own custom module.
This NTLM module allows proxying requests with NTLM Authentication. The upstream connection is bound to the client connection once the client sends a request with the “Authorization” header field value starting with “Negotiate” or “NTLM”. Further client requests will be proxied through the same upstream connection, keeping the authentication context.
In order to achieve this we need to add more code to our startup custom module.
First, we need to define our configuration struct to allow us to keep the information about maximum number of the upstream connections, the keepalive timeout and the 2 queues to keep track of cached and free connection pairs.
typedef struct { ngx_uint_t max_cached; ngx_msec_t timeout; ngx_queue_t free; ngx_queue_t cache; ngx_http_upstream_init_pt original_init_upstream; ngx_http_upstream_init_peer_pt original_init_peer; } ngx_http_upstream_ntlm_srv_conf_t;
Next we need a struct to hold our pair of connections between client and the server
typedef struct { ngx_http_upstream_ntlm_srv_conf_t *conf; ngx_queue_t queue; ngx_connection_t *peer_connection; ngx_connection_t *client_connection; } ngx_http_upstream_ntlm_cache_t;
and also the peer data struct need to be adjusted a little bit
typedef struct { ngx_http_upstream_ntlm_srv_conf_t *conf; ngx_http_upstream_t *upstream; void *data; ngx_connection_t *client_connection; unsigned cached : 1; unsigned ntlm_init : 1; ngx_event_get_peer_pt original_get_peer; ngx_event_free_peer_pt original_free_peer; #if (NGX_HTTP_SSL) ngx_event_set_peer_session_pt original_set_session; ngx_event_save_peer_session_pt original_save_session; #endif } ngx_http_upstream_ntlm_peer_data_t;
After that we need to adjust module commands array in order to handle the ntlm_timeout directive:
static ngx_command_t ngx_http_upstream_ntlm_commands[] = { {ngx_string("ntlm"), NGX_HTTP_UPS_CONF | NGX_CONF_NOARGS | NGX_CONF_TAKE1, ngx_http_upstream_ntlm, NGX_HTTP_SRV_CONF_OFFSET, 0, NULL}, {ngx_string("ntlm_timeout"), NGX_HTTP_UPS_CONF | NGX_CONF_TAKE1, ngx_conf_set_msec_slot, NGX_HTTP_SRV_CONF_OFFSET, offsetof(ngx_http_upstream_ntlm_srv_conf_t, timeout), NULL}, ngx_null_command /* command termination */ };
Now we need to update the ngx_http_upstream_init_ntlm_peer function to check the headers of the client request to see if the Authorization header exists and the value begin with “NTLM” or “Negotiate”
.... if (r->headers_in.authorization != NULL) { auth_header_value = r->headers_in.authorization->value; if ((auth_header_value.len >= sizeof("NTLM") - 1 && ngx_strncasecmp(auth_header_value.data, (u_char *)"NTLM", sizeof("NTLM") - 1) == 0) || (auth_header_value.len >= sizeof("Negotiate") - 1 && ngx_strncasecmp(auth_header_value.data, (u_char *)"Negotiate", sizeof("Negotiate") - 1) == 0)) { ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "ntlm auth header found"); hnpd->ntlm_init = 1; } } ....
In the next function ngx_http_upstream_get_ntlm_peer we will make the magic happen. Before we will return the control to the original peer we check to see if we have already cached the client connection and the associated upstream connection.
... /* search cache for suitable connection */ cache = &hndp->conf->cache; for (q = ngx_queue_head(cache); q != ngx_queue_sentinel(cache); q = ngx_queue_next(q)) { item = ngx_queue_data(q, ngx_http_upstream_ntlm_cache_t, queue); if (item->client_connection == hndp->client_connection) { c = item->peer_connection; ngx_queue_remove(q); ngx_queue_insert_head(&hndp->conf->free, q); hndp->cached = 1; goto found; } } return NGX_OK; found: ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0, "get ntlm peer: using connection %p", c); c->idle = 0; c->sent = 0; c->data = NULL; c->log = pc->log; c->read->log = pc->log; c->write->log = pc->log; c->pool->log = pc->log; if (c->read->timer_set) { ngx_del_timer(c->read); } pc->connection = c; pc->cached = 1; return NGX_DONE; ...
and in the next function ngx_http_upstream_free_ntlm_peer we will add the code for caching the client and the upstream connection pair.
... ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0, "free ntlm peer: saving connection %p", c); if (ngx_queue_empty(&hndp->conf->free)) { q = ngx_queue_last(&hndp->conf->cache); ngx_queue_remove(q); item = ngx_queue_data(q, ngx_http_upstream_ntlm_cache_t, queue); ngx_http_upstream_ntlm_close(item->peer_connection); } else { q = ngx_queue_head(&hndp->conf->free); ngx_queue_remove(q); item = ngx_queue_data(q, ngx_http_upstream_ntlm_cache_t, queue); } ngx_queue_insert_head(&hndp->conf->cache, q); ...
also in this function ngx_http_upstream_free_ntlm_peer we need to add a cleanup handler to be able remove the client connection from the cache and to close the upstream connection when client drops connection
... if (cleanup_item == NULL) { cln = ngx_pool_cleanup_add(item->client_connection->pool, 0); if (cln == NULL) { ngx_log_debug0(NGX_LOG_DEBUG_HTTP, pc->log, 0, "ntlm free peer ngx_pool_cleanup_add returned null"); } else { cln->handler = ngx_http_upstream_client_conn_cleanup; cln->data = item; } } ...
Maybe this it seems that is making a lot of sense right now but if you study the full module source code you will understand.
Just download the sources form here github.com/gabihodoroaga/nginx-ntlm-module and build it yourself.
Disclaimer
My intention with this module was not to create a full replica of the original commercial Nginx NTLM module. I created this because I was not able to find a free alternative for development purposes.
So, you can use Nginx Plus, or you can try this one.
|
https://hodo.dev/posts/post-18-nginx-ntlm-module/
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
tornado.queues – Queues for coroutines¶
New in version 4.2.
Asynchronous queues for coroutines. These classes are very similar to those provided in the standard library’s asyncio package.
Warning
Unlike the standard library’s
queue module, the classes defined here
are not thread-safe. To use these queues from another thread,
use
IOLoop.add_callback to transfer control to the
IOLoop thread
before calling any queue methods.
Classes¶
Queue¶
- class
tornado.queues.
Queue(maxsize: int = 0)[source]¶
Coordinate producer and consumer coroutines.
If maxsize is 0 (the default) the queue size is unbounded.
from tornado import gen from tornado.ioloop import IOLoop from tornado.queues import Queue q = Queue(maxsize=2) async def consumer(): async for item in q: try: print('Doing work on %s' % item) await gen.sleep(0.01) finally: q.task_done() async def producer(): for item in range(5): await q.put(item) print('Put %s' % item) async def main(): # Start consumer without waiting (since it never finishes). IOLoop.current().spawn_callback(consumer) await producer() # Wait for producer to put all tasks. await q.join() # Wait for consumer to finish all tasks. print('Done') IOLoop.current().run_sync(main)
Put 0 Put 1 Doing work on 0 Put 2 Doing work on 1 Put 3 Doing work on 2 Put 4 Doing work on 3 Doing work on 4 Done
In versions of Python without native coroutines (before 3.5),
consumer()could be written as:
@gen.coroutine def consumer(): while True: item = yield q.get() try: print('Doing work on %s' % item) yield gen.sleep(0.01) finally: q.task_done()
Changed in version 4.3: Added
async forsupport in Python 3.5.
put(item: _T, timeout: Union[float, datetime.timedelta] = None) → Future[None][source]¶
Put an item into the queue, perhaps waiting until there is room.
Returns a Future, which raises
tornado.util.TimeoutErrorafter a timeout.
timeoutmay be a number denoting a time (on the same scale as
tornado.ioloop.IOLoop.time, normally
time.time), or a
datetime.timedeltaobject for a deadline relative to the current time.
put_nowait(item: _T) → None[source]¶
Put an item into the queue without blocking.
If no free slot is immediately available, raise
QueueFull.
get(timeout: Union[float, datetime.timedelta] = None) → Awaitable[_T][source]¶
Remove and return an item from the queue.
Returns an awaitable which resolves once an item is available, or raises
tornado.util.TimeoutErrorafter a timeout.
timeoutmay be a number denoting a time (on the same scale as
tornado.ioloop.IOLoop.time, normally
time.time), or a
datetime.timedeltaobject for a deadline relative to the current time.
Note
The
timeoutargument of this method differs from that of the standard library’s
queue.Queue.get. That method interprets numeric values as relative timeouts; this one interprets them as absolute deadlines and requires
timedeltaobjects for relative timeouts (consistent with other timeouts in Tornado).
get_nowait() → _T[source]¶
Remove and return an item from the queue without blocking.
Return an item if one is immediately available, else raise
QueueEmpty.
task_done() → None[source]¶
Indicate that a formerly enqueued task is complete.
Used by queue consumers. For each
getused to fetch a task, a subsequent call to
task_donetells the queue that the processing on the task is complete.
If a
joinis blocking, it resumes when all items have been processed; that is, when every
putis matched by a
task_done.
Raises
ValueErrorif called more times than
put.
join(timeout: Union[float, datetime.timedelta] = None) → Awaitable[None][source]¶
Block until all items in the queue are processed.
Returns an awaitable, which raises
tornado.util.TimeoutErrorafter a timeout.
PriorityQueue¶
- class
tornado.queues.
PriorityQueue(maxsize: int = 0)[source]¶
A
Queuethat retrieves entries in priority order, lowest first.
Entries are typically tuples like
(priority number, data).
from tornado.queues import PriorityQueue q = PriorityQueue() q.put((1, 'medium-priority item')) q.put((0, 'high-priority item')) q.put((10, 'low-priority item')) print(q.get_nowait()) print(q.get_nowait()) print(q.get_nowait())
(0, 'high-priority item') (1, 'medium-priority item') (10, 'low-priority item')
Exceptions¶
QueueEmpty¶
- exception
tornado.queues.
QueueEmpty[source]¶
Raised by
Queue.get_nowaitwhen the queue has no items.
QueueFull¶
- exception
tornado.queues.
QueueFull[source]¶
Raised by
Queue.put_nowaitwhen a queue is at its maximum size.
|
https://www.tornadoweb.org/en/branch6.0/queues.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
.
- exception
tornado.util.
TimeoutError[source]¶
Exception raised by
with_timeoutand
IOLoop.run_sync.
Changed in version 5.0:: Unified
tornado.gen.TimeoutErrorand
tornado.ioloop.TimeoutErroras
tornado.util.TimeoutError. Both former names remain as aliases.
-: bytes, max_length: int = 0) → bytes: str) → Any[source]¶
Imports an object by name.
import_object('x')is equivalent to
import x.
import_object('x.y.z')is equivalent to
from: BaseException) → Optional.
tornado.util.
re_unescape(s: str) → str[source]¶
Unescape a string escaped by
re.escape.
May raise
ValueErrorfor regular expressions which could not have been produced by
re.escape(for example, strings containing
\dcannot be unescaped).
New in version 4.4.
-__.
Changed in version 5.0: It is now possible for configuration to be specified at multiple levels of a class hierarchy.
- classmethod
configurable_base()[source]¶
Returns the base class of a configurable hierarchy.
This will normally return the class in which it is defined. (which is not necessarily the same as the
clsclassmethod parameter).
- classmethod
configurable_default()[source]¶
Returns the implementation class to be used if none is configured.
initialize() → None¶: Callable, name: str)[source]¶
Replaces one value in an
args, kwargspair.
Inspects the function signature to find an argument by name whether it is passed by position or keyword. For use in decorators and similar wrappers.
get_old_value(args: Sequence[Any], kwargs: Dict[str, Any], default: Any = None) → Any[source]¶
Returns the old value of the named argument without replacing it.
Returns
defaultif the argument is not present.
replace(new_value: Any, args: Sequence[Any], kwargs: Dict[str, Any]) → Tuple[Any, Sequence[Any], Dict[str, Any]].
|
https://www.tornadoweb.org/en/branch6.0/util.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Example SQLite database
In a previous example we used a simple flat-file 'database'. An SQLite database is a database but the lite version. It's the local database for a lot of (mobile)apps.
We will be using SPOD macros Check more info in the about.
The code used in this example is here.
How to start
Create a folder named foobar (please use a better name; any name will do) and create folders bin and src. See example below:
+ foobar + bin + src - Main.hx - build.hxml
The Main.hx
This example is getting to big to post here, so if you want to check out the complete file go and check out Main.hx
First we need a database, so I wrote a class that creates one for you:
DBStart.hx.
This class generates random users.
// Open a connection var cnx = sys.db.Sqlite.open("mybase.ddb"); // Set as the connection for our SPOD manager sys.db.Manager.cnx = cnx; // initialize manager sys.db.Manager.initialize(); // Create the "user" table if ( !sys.db.TableCreate.exists(User.manager) ) { sys.db.TableCreate.create(User.manager); } // Fill database with users for (i in 0 ... 10) { var user = createRandomUser(); user.insert(); } // close the connection and do some cleanup sys.db.Manager.cleanup(); // Close the connection cnx.close();
The function
createRandomUser() does what you would expect, if you want to know, check the source code.
A user!
We have used a
typedef before, this looks similar.
The strange stuff here are the types, they are not the default types that Haxe uses.
Read more about that: creating-a-spod.
import sys.db.Types; class User extends sys.db.Object { public var id : SId; public var name : SString<32>; public var birthday : SDate; public var phoneNumber : SNull<SText>; }
Now we have a database, lets check out the code to get the data from the database:
Main.hx
// Open a connection var cnx = sys.db.Sqlite.open("mybase.ddb"); // Set as the connection for our SPOD manager sys.db.Manager.cnx = cnx; // initialize manager sys.db.Manager.initialize(); for (i in 0 ... User.manager.all().length) { var _user = User.manager.get(i); if(_user != null) trace(_user.name); } // close the connection and do some cleanup sys.db.Manager.cleanup(); // Close the connection cnx.close();
The Haxe build file, build.hxml
There are a lot of different arguments that you are able to pass to the Haxe compiler. Set these arguments into a text file of one per line with the extension hxml. This file passes it directly to the Haxe compiler as a build script.
# // build.hxml -cp src -main Main -php bin/www -dce full
Build PHP with Haxe
To finish and see what we have, build the file and see the result
- Open your terminal
- Open the correct folder with
cdwhere
It will have the same result
|
https://matthijskamstra.github.io/haxephp/10sqlite/example.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Count duplicates in circular linked list
Introduction
The following article aims to familiarise you with using a circular linked list (if this is a new topic for you, refer to the article here) and practice its use in real questions to build and strengthen your concepts. The following question explores how we can traverse a circular linked list and find the count of duplicate elements using a hash set.
Problem Statement
Count the number of duplicate elements in a circular linked list, i.e., elements that have already occurred before, and print the count as the final result.
For example,
Input 1: [1, 2, 3, 1, 2]
In the above input [1, 2, 3, 1 <-, 2 <-], the numbers marked by arrows are duplicates.
Output 1: 2
Input 2: [5, 5, 5, 5]
In the above input [5, 5 <-, 5 <-, 5 <-], similar to input 1, arrows mark the duplicate elements.
Output 2: 3
Approach
In the below code, our purpose is to count the number of duplicates, and we achieve this by using a hash set. The use of a hash set (unordered_set in C++ STL) is generally preferred because of its O(1) retrieval and insertion time on average.
The algorithm requires us to traverse through the circular linked list once and check if that element is present in the hash set for each iteration. If the element exists in the hash set, then it’s a duplicate, and we increase the duplicate count; otherwise, we insert the element into the hash set as it hadn’t occurred before.
After having traversed through all the elements of the hash set, we end up with the count of the duplicate elements in the circular linked list, and we print that as the final result.
Code in C++
#include <iostream> #include <unordered_set> #include <vector> using namespace std; class Node{ public: int val; Node *next; Node(int x){ val = x; next = NULL; } }; int main(){ // let us store all the content in the below vector into the circular linked list vector<int> data = {10, 1, 1, 7, 5, 5, 5}; Node *head = new Node(data[0]); Node *temp = head; for(int i = 1; i < data.size(); i++){ temp->next = new Node(data[i]); temp = temp->next; } temp->next = head; // the end element points back to the head, making it a circular linked list //from here on we have the logic for counting the duplicates int duplicates = 0; unordered_set<int> s; Node *loop_var = head; do{ if(s.find(loop_var->val) != s.end()){ // if a previous occurence is found duplicates += 1; } else{ s.insert(loop_var->val); } loop_var = loop_var->next; } while(loop_var != head); // print the total duplicates // [10, 1, 1 <-, 7, 5, 5 <-, 5 <-] // I have put arrows next to the duplicates and we expect an output of 3 for the given input cout << duplicates << endl; }
Output:
3
Time Complexity
The time complexity is O(N), as we traverse each element of the list once, and the retrieval from the unordered_set is O(1) on average for random data. (Do note, however, that unordered_set can have O(N) retrieval and insertion in a worst-case under particular inputs.)
Space Complexity
The space complexity is O(N) because of the unordered_set data structure, where we store elements to check for their earlier occurrence in the circular linked list.
Frequently Asked Questions
1. What are the advantages of learning about circular linked lists?
It can implement circular linked list queues and various advanced data structures such as Fibonacci heaps. They can also be helpful in multiple other situations repeatedly requiring iteration over the same data.
2. Difference between unordered_set and set in C++?
The set data structure is usually implemented using a red-black tree (read more about it here). The unordered_set, however, is implemented using a hash table and hence offers an average O(1) retrieval and insertion. In contrast, set data structure takes O(log(n)) for retrieving and inserting an element.
3. Is unordered_set always better than set?
No, the unordered_set will usually work well with random data and give O(1) retrieval and insertion on average. Still, in case of excessive collisions (which can be the case for some specific inputs, depending on the hash function), our retrieval and insertion may become O(N).
Key Takeaways
The article helps you understand and implement your knowledge of circular linked lists and C++ STL. I also recommend checking the program for multiple inputs to understand the process better if you still have some doubts.
In the code, you may have noticed my use of a do-while loop instead of a while loop. I encourage you to try the same problem with a while loop, observe the difference in implementation, and decide if it’s better and easier to use a do-while loop in the case of circular linked lists.
Learn more about vector and other data structures in C++ STL from the following link: here and Happy learning!
|
https://www.codingninjas.com/codestudio/library/count-duplicates-in-circular-linked-list
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
#include "rte_kvargs.h"
#include "rte_crypto.h"
#include "rte_dev.h"
#include <rte_common.h>
#include <rte_config.h>
Go to the source code of this file.
RTE Cryptographic Device APIs
Defines RTE Crypto Device APIs for the provisioning of cipher and authentication operations.
Definition in file rte_cryptodev.h.
A macro that points to an offset from the start of the crypto operation structure (rte_crypto_op)
The returned pointer is cast to type t.
Definition at line 64 of file rte_cryptodev.h.
A macro that returns the physical address that points to an offset from the start of the crypto operation (rte_crypto_op)
Definition at line 78 of file rte_cryptodev.h.
Macro used at end of crypto PMD list
Definition at line 388 of file rte_cryptodev.h.
Crypto device supported feature flags
Note: New features flags should be added to the end of the list
Keep these flags synchronised with rte_cryptodev_get_feature_name()Symmetric crypto operations are supported
Definition at line 400 of file rte_cryptodev.h.
Asymmetric crypto operations are supported
Definition at line 402 of file rte_cryptodev.h.
Chaining symmetric crypto operations are supported
Definition at line 404 of file rte_cryptodev.h.
Utilises CPU SIMD SSE instructions
Definition at line 406 of file rte_cryptodev.h.
Utilises CPU SIMD AVX instructions
Definition at line 408 of file rte_cryptodev.h.
Utilises CPU SIMD AVX2 instructions
Definition at line 410 of file rte_cryptodev.h.
Utilises CPU AES-NI instructions
Definition at line 412 of file rte_cryptodev.h.
Operations are off-loaded to an external hardware accelerator
Definition at line 414 of file rte_cryptodev.h.
Utilises CPU SIMD AVX512 instructions
Definition at line 418 of file rte_cryptodev.h.
In-place Scatter-gather (SGL) buffers, with multiple segments, are supported
Definition at line 420 of file rte_cryptodev.h.
Out-of-place Scatter-gather (SGL) buffers are supported in input and output
Definition at line 424 of file rte_cryptodev.h.
Out-of-place Scatter-gather (SGL) buffers are supported in input, combined with linear buffers (LB), with a single segment in output
Definition at line 428 of file rte_cryptodev.h.
Out-of-place Scatter-gather (SGL) buffers are supported in output, combined with linear buffers (LB) in input
Definition at line 433 of file rte_cryptodev.h.
Out-of-place linear buffers (LB) are supported in input and output
Definition at line 437 of file rte_cryptodev.h.
Utilises CPU NEON instructions
Definition at line 439 of file rte_cryptodev.h.
Utilises ARM CPU Cryptographic Extensions
Definition at line 441 of file rte_cryptodev.h.
Support Security Protocol Processing
Definition at line 443 of file rte_cryptodev.h.
Support RSA Private Key OP with exponent
Definition at line 445 of file rte_cryptodev.h.
Support RSA Private Key OP with CRT (quintuple) Keys
Definition at line 447 of file rte_cryptodev.h.
Support encrypted-digest operations where digest is appended to data
Definition at line 449 of file rte_cryptodev.h.
Max length of name of crypto PMD
Definition at line 540 of file rte_cryptodev.h.
Typedef for application callback function to be registered by application software for notification of device events
Definition at line 523 of file rte_cryptodev.h.
Dequeue processed packets from queue pair of a device.
Definition at line 798 of file rte_cryptodev.h.
Enqueue packets for processing on queue pair of a device.
Definition at line 802 of file rte_cryptodev.h.
Definitions of Crypto device event types
Definition at line 499 of file rte_cryptodev.h.
Provide capabilities available for defined device and algorithm
Provide capabilities available for defined device and xform
Check if key size and initial vector are supported in crypto cipher capability
Check if key size and initial vector are supported in crypto auth capability
Check if key, digest, AAD and initial vector sizes are supported in crypto AEAD capability
Check if op type is supported
Check if modulus length is in supported range
Provide the cipher algorithm enum, given an algorithm string
Provide the authentication algorithm enum, given an algorithm string
Provide the AEAD algorithm enum, given an algorithm string
Provide the Asymmetric xform enum, given an xform string
Get the name of a crypto device feature flag
Get the device identifier for the named crypto device.
Get the crypto device name given a device identifier.
Get the total number of crypto devices that have been successfully initialised.
Get number of crypto device defined type.
Get number and identifiers of attached crypto devices that use the same crypto driver.
Configure a device.
This function must be invoked first before any other function in the API. This function can also be re-invoked when a device is in the stopped state.
Start an device.
The device start step is the last one and consists of setting the configured offload features and in starting the transmit and the receive units of the device. On success, all basic functions exported by the API (link status, receive/transmit, and so on) can be invoked.
Stop an device. The device can be restarted with a call to rte_cryptodev_start()
Close an device. The device cannot be restarted!
Allocate and set up a receive queue pair for a device.
Get the number of queue pairs on a specific crypto device
Retrieve the general I/O statistics of a device.
Reset the general I/O statistics of a device.
Retrieve the contextual information of a device.
Register a callback function for specific device id.
Unregister a callback function for specific device id.
Structure to keep track of registered callbacks
Dequeue a burst of processed crypto operations from a queue on the crypto device. The dequeued operation are stored in rte_crypto_op structures whose pointers are supplied in the ops array.
The rte_cryptodev_dequeue_burst() function returns the number of ops actually dequeued, which is the number of rte_crypto_cryptodev_dequeue_burst() function until a value less than nb_ops is returned.
The rte_cryptodev_dequeue_burst() function does not provide any error notification to avoid the corresponding overhead.
Definition at line 915 of file rte_cryptodev.h.
Enqueue a burst of operations for processing on a crypto device.
The rte_cryptodev_enqueue_burst() function is invoked to place crypto operations on the queue qp_id of the device designated by its dev_id.
The nb_ops parameter is the number of operations to process which are supplied in the ops array of rte_crypto_op structures.
The rte_cryptodev_enqueue_burst() function returns the number of operations it actually enqueued for processing. A return value equal to nb_ops means that all packets have been enqueued.
Definition at line 958 of file rte_cryptodev.h.
Create a symmetric session mempool.
Create symmetric crypto session header (generic with no private data)
Create asymmetric crypto session header (generic with no private data)
Frees symmetric crypto session header, after checking that all the device private data has been freed, returning it to its original mempool.
Frees asymmetric crypto session header, after checking that all the device private data has been freed, returning it to its original mempool.
Fill out private data for the device id, based on its device type.
Initialize asymmetric session on a device with specific asymmetric xform
Frees private data for the device id, based on its device type, returning it to its mempool. It is the application's responsibility to ensure that private session data is not cleared while there are still in-flight operations using it.
Frees resources held by asymmetric session during rte_cryptodev_session_init
Get the size of the header session, for all registered drivers excluding the user data size.
Get the size of the header session from created session.
Get the size of the asymmetric session header, for all registered drivers.
Get the size of the private symmetric session data for a device.
Get the size of the private data for asymmetric session on device
Provide driver identifier.
Provide driver name.
Store user data in a session.
Get user data stored in a session.
|
https://doc.dpdk.org/api-19.08/rte__cryptodev_8h.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Access an AWS service using an interface VPC endpoint
You can create an interface VPC endpoint to connect to services powered by AWS PrivateLink, including many AWS services.
For each subnet that you specify from your VPC, we create an endpoint network interface in the subnet and assign it a private IP address from the subnet address range. An endpoint network interface is a requester-managed network interface; you can view it in your AWS account, but you can't manage it yourself.
You are billed for hourly usage and data processing charges. For more
information, see Interface endpoint pricing
Considerations
Interface VPC endpoints support traffic only over TCP.
AWS services accept connection requests automatically. The service can't initiate requests to resources in your VPC through the VPC endpoint. The endpoint only returns responses to traffic that was initiated by resources in your VPC.
The DNS names created for VPC endpoints are publicly resolvable. They resolve to the private IP addresses of the endpoint network interfaces for the enabled Availability Zones. The private DNS names are not publicly resolvable.
By default, each interface endpoint can support a bandwidth of up to 10 Gbps per Availability Zone and automatically scales up to 40 Gbps. If your application needs higher throughput per zone, contact AWS Support.
There are quotas on your AWS PrivateLink resources. For more information, see AWS PrivateLink quotas.
Prerequisites
Create a private subnet in your VPC and deploy the resources that will access the AWS service using the VPC endpoint in the private subnet.
To use private DNS, you must enable DNS hostnames and DNS resolution for your VPC. For more information, see View and update DNS attributes in the Amazon VPC User Guide.
The security group for the interface endpoint must allow communication between the endpoint network interface and the resources in your VPC that must communicate with the service. By default, the interface endpoint uses the default security group for the VPC. Alternatively, you can create a security group to control the traffic to the endpoint network interfaces from the resources in the VPC. To ensure that tools such as the AWS CLI can make requests over HTTPS from resources in the VPC to the AWS service, the security group must allow inbound HTTPS traffic.
If your resources are in a subnet with a network ACL, verify that the network ACL allows traffic between the endpoint network interfaces and the resources in the VPC.
Create a VPC endpoint
Use the following procedure to create an interface VPC endpoint that connects to an AWS service.
To create an interface endpoint for an AWS service
Open the Amazon VPC console at
.
In the navigation pane, choose Endpoints.
Choose Create endpoint.
For Service category, choose AWS services.
For Service name, select the service. For more information, see AWS services that integrate with AWS PrivateLink.
For VPC, select the VPC from which you'll access the AWS service.
To create an interface endpoint for Amazon S3, you must clear Additional settings, Enable DNS name. This is because Amazon S3 does not support private DNS for interface VPC endpoints.
For Subnets, select one subnet per Availability Zone from which you'll access the AWS service.
For Security group, select the security groups to associate with the endpoint network interfaces. The security group rules must allow resources that will use the VPC endpoint to communicate with the AWS service to communicate with the endpoint network interface.
For Policy, select Full access to allow all operations by all principals on all resources over the VPC endpoint. Otherwise, select Custom to attach a VPC endpoint policy that controls the permissions that principals have for performing actions on resources over the VPC endpoint. This option is available only if the service supports VPC endpoint policies. For more information, see VPC endpoint policies.
(Optional) To add a tag, choose Add new tag and enter the tag key and the tag value.
Choose Create endpoint.
To create an interface endpoint using the command line
create-vpc-endpoint (AWS CLI)
New-EC2VpcEndpoint (Tools for Windows PowerShell)
Test the VPC endpoint
After you create the VPC endpoint, verify that it's sending requests from your VPC to the AWS service. For this example, we'll demonstrate how to send a request from an EC2 instance in your private subnet to an AWS service, such as Amazon CloudWatch. This requires an EC2 instance in a public subnet from which you can access the instance in the private subnet.
Requirement
Verify that the VPC with the interface VPC endpoint has both a public subnet and a private subnet.
To test the VPC endpoint
Launch an EC2 instance into the private subnet. Use an AMI that comes with the AWS CLI preinstalled (for example, an AMI for Amazon Linux 2) and add an IAM role that allows the instance to call the AWS service. For example, for Amazon CloudWatch, attach the CloudWatchReadOnlyAccess policy to the IAM role.
Launch an EC2 instance into the public subnet and connect to this instance. From the instance in the public subnet, connect to the instance in the private subnet using its private IP address, using the following command.
$ssh ec2-user@
10.0.0.23
Confirm that the instance in the private subnet does not have connectivity to the internet by pinging a well-known public server. If there is 0% packet loss, the instance has internet access. If there is 100% packet loss, the instance has no internet access. For example, the following command pings the Amazon website one time.
$ping -c 1
Run a describe command for the AWS service from the AWS CLI to confirm connectivity to the service from the instance. For example, for Amazon CloudWatch, run the list-metrics command from the instance in the private subnet.
$aws cloudwatch list-metrics --namespace AWS/EC2
If you get a response, even a response with empty results, then you are connected to the service using AWS PrivateLink. If the command times out, verify that the instance has an IAM role that allows access to the AWS service.
|
https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
See the description below.
Like for the previous 'curry' package, I am posting it here at a very
early stage both for feedback and as a potential incentive for
core-patchers to reimplement it with the best performance. In the
meantime, this simple standalone sourceable script will let people
validate/invalidate the semantics.
# Principle:
# This package aims at easing the process of overriding tcl
# subcommands. It does so by silently replacing overridden commands
# by a wrapper that tries to match arguments against the prefixes
# given by the programmer as special cases (cmd subcmd subsubcmd...).
# These prefixes are scanned by decreasing length, so that in case of
# conflict the most specific wins:
# % proc foo {x y} {puts "Generic foo: $x $y"}
# % subproc "foo bar baz" {} {puts AAA}
# % subproc "foo bar" z {puts BBB}
# % foo 3 4
# Generic foo: 3 4
# % foo bar 2
# BBB
# % foo bar baz
# AAA
# Performance issue: of course, the wrapper adds overhead to the
# non-overridden case. Worse, if you override "string compare" you'll
# disable the inlining of [string ...] by the byte compiler, so the
# overall effect will be awful. However:
# - in most cases (non-inlined procedures) this overhead will be
# bearable.
# - this script is provided only as a proof of concept. The aim
# is of course a core patch (once the semantics is agreed
# upon).
# As it is often witnessed in overriding frameworks (like OO), it is
# often handy to be able to call the unmodified "inherited" behavior
# within you overriding procedure. To do this, the [inherited] call is
# provided:
# % subproc "string compare" {x y} {
# puts stderr "I've been here !"
# inherited string compare $x $y
# }
# More generally, in a [subproc "a b c d e" ...] one is expected to
# to call [inherited "a b c d" e ...] to avoid unwanted recursion.
-Alex
Alexandre Ferrieux wrote:
This is an interesting idea. In my introspection package, which I hope to finally
release as an alpha (because only the TCL version is ready and the API is up for
discussion,
the TCL is hopefully beta quality) by the weekend (assuming I can get my arse in
gear) I use a slightly
different scheme. Firstly I assume that there will only ever be one level of
subcommanding
as I have seen no further levels in code I have looked at, and switches are
generally preferred. If this
assumption proves to be faulty the mechanism could be extended. My basic scheme is
to override
the command with something like the following (ignoring namespaces for brevity):
rename info info_builtin
proc info { subcommand args } {
switch -exact -- $subcommand {
case allcommands: {
info_allcommands $args
}
...
default: {
info_builtin $subcommand $args
}
}
}
I have two routines addsubcommand and removesubcommand (which also allows
undesirable commands,
should such a thing exist, to be stubbed out). In C this will probably become
some kind of jump
table. I wonder which mechanism is best in terms of overhead and features?
By the way, did you know (excuse me if I this is patronising) that you can
do the following:
set fred "space proc"
proc $fred { } {
puts "ping"
}
$fred
Of course this won't work in an eval directly. I'm sure there are ways around
this though.
Regards,
Bruce A.
I think yours is a really neat tool, short and useful. My company has the
problem all the time of having to redefine the behavior of Tcl (and Tk)
commands, and there is at least one major package (incr Tcl) that had to
change its syntax in order not to muck around with the Tcl core.
The funny thing is that Tcl provides all the needed structures to implement
the behaviors we are asking for. Nothing prevents us from storing the
argument count of a proc in the HashTable that stores them, thus making
overriding procedure definitions easier. It wouldn't be all too difficult to
add a subcommand creation facility not unlike the one given for command
creation...
I guess the only real disadvantage of your solution is the performance hit it
gives us. If I add, say, a rotate subcommand to the canvas and have to perform
the rotation on a selected set of items, the slowdown will be more than
noticeable.
I noticed that you didn't care at all for namespace resolution in your code.
Is this intended?
Thanks for this neat tool,
Marco
In article <373053...@cnet.francetelecom.fr>,
-----------== Posted via Deja News, The Discussion Network ==---------- Search, Read, Discuss, or Start Your Own
Alexandre's tool looks fine, and there are circumstances where a pure
Tcl implementation is desirable. However, if you use IncrTcl, then
the same facility is already available, implemented in C. Look at the
"ensemble" command.
Michael offered this to the Tcl guys when he donated the original
namespace code, but it was deemed unnecessary, and was not adopted.
IncrTcl uses it extensively, so it has been pretty well tested. It is
also self-contained, so it would not be hard to hack it out of IncrTcl
and use it as a stand-alone library, if you wanted to do that.
Jim
--
++==++==++==++==++==++==++==++==++==++==++==++==++==++==++==++==++==++==++
Jim Ingham jin...@cygnus.com
Cygnus Solutions Inc.
We extensively use [incr Tcl] ourselves, but even ensembles don't save our
soul... We have a highly performance optimized CAD tool, and the overhead of
creating an ensemble for every single widget command is not acceptable.
Correct me if I'm wrong: Our canvases have a "print" subcommand. For it to be
available in an ensemble world, I'd have to (a) create a (plain) canvas,
[canvas .c] (b) create an ensemble [ensemble ens.c { part postscript args
{#retrieve widget name and do postscript} part print args {#retrieve widget
name and do printing} ...] (c) change the code to teach it to use the
ensemble instead of the widget name.
This is based on the assumption that there is no easy way to rename a widget
while keeping its window qualities (that is, if I do a rename .c foo, pack foo
will fail).
I guess what I'd prefer is that selected commands, especially widgets
commands, should be extensible, just like Tcl itself is. Say I generated a
text widget import extension. Wouldn't I want to be able to tie directly to
the text widget, instead of having to work around with an ensemble?
After all, that's what makes Tcl a flexible language... And, again, all we
need is an extra HashTable in the Interpreter structure that stores a linked
list of subcommand handlers and a new subcommand creation API.
Looking forward to your ideas/comments,
Marco
In article <mfzp3jb...@leda.cygnus.com>,
-----------== Posted via Deja News, The Discussion Network ==----------
You are right about the current state of things. Michael's original
idea (IIRC) was to implement all the Tcl commands that it seems
reasonable to extend AS ensembles, and then this would be exactly what
you want. John et al balked at the confusion this might engender, as
people generated Tcl variants with all sorts of different subcommands.
There is some merit to this concern, but I am not sure it would
outweigh the benefits...
Sounds like you will need to hack into Tk for your purposes...
Basically my stuff does this in a more generic way (eg subsubcommands),
and also automates the 'rename' part :).
One note about subsubcommands: as such they are very seldom used;
however, the same mechanism applies to *any* fixed argument. So you can
define a special 'subsubcommand' to handle a given object as opposed to
all others:
subproc "file delete /tmp/toto" {} {
error "I'm sorry Dave. I'm afraid I can't do that."
}
> In C this will probably become
> some kind of jump
> table.
Yes. In my sample I scan on (decreasing) prefix length, then hash on the
prefix (string repr of the list)...
> I wonder which mechanism is best in terms of overhead and features?
I posted to get answers to this question in the first place :)
> By the way, did you know (excuse me if I this is patronising)
> that you can do the following:
>
> set fred "space proc"
>
> proc $fred { } {
> puts "ping"
> }
>
> $fred
Yes. I especially took care of this one (by preventing it!) in the
curry.tcl package I posted yesterday.
-Alex
Yes. See my posting:
"as a potential incentive for core-patchers to reimplement it
with the best performance"
:)
> I noticed that you didn't care at all for namespace resolution in your code.
> Is this intended?
Yes. For two main reasons:
1) as said above, a proof-of-concept doesn't need the
bells-and-whistles.
2) I have little experience with namespaces, and I understand
they are not so well done as they could have been (see
Python's modules for a better design). I don't want to waste a
nanosecond on them:)
> Thanks for this neat tool,
Thank you very much for your feedback,
-Alex
The assumption is partly true. Granted, if you *only* rename .c to
something else (thus effectively removing the ".c" command), of course
internal Tk calls staring with ".c" will suffer... However, if you
rename it to override it, *and* call the old behavior as needed,
evrything works.
For example, with my sample slow implementation:
subproc ".t insert" args {
puts stderr "INSERTING: $args"
uplevel inherited .t insert $args
}
So I don't see why you couldn't do that with an ensemble. Or do I miss
something ?
> After all, that's what makes Tcl a flexible language... And, again, all we
> need is an extra HashTable in the Interpreter structure that stores a linked
> list of subcommand handlers and a new subcommand creation API.
Yes !!! I could not have expressed better what my sample is trying to
convince core-patchers to write...
-Alex
In my soon to be released extension Feather I have added the ability to
override a command without renaming it so you can do things like
proc overridden_canvas {original subcommand args} {
switch -- $subcommand {
"print" {
puts "Print"
}
default {
return [invoke $original [list $subcommand] $args]
}
}
}
canvas .c
set original [command override .c]
command set .c [curry overridden_canvas $original]
.c print
--
Paul Duffin
DT/6000 Development Email: pdu...@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880 International: +44 1962-816880
Alex,
I couldn't resist the temptation and I'm attaching a patch file (it's against
the CVS repository as of end of March). I know the wrath of the modem
downloader will eventually hit me, but my flesh is weak, and my internet
connection fast. I apologize to everybody who didn't want to download these
500 lines - please, don't mass-mail me!!! I promise I won't do this again!
What the patch does is:
(a) adds an API call Tcl_CreateSubCmdHandler() that is closely mimicking
Tcl_CreateCommand (b) modifies the Interpreter structure to add a SubCmdTable
HashTable, where the SubCmdHandlers are stored (c) modifies the Tcl "info"
command to handle the new API. This is the core of the patch and shows
exactly how easy to implement such a change would be. All it requires is
thirteen lines of code - code that can be simply copied into any other
command (or -even better - we could make it a centralized function...) (d)
shows how to create a subCmdHandler by actually creating two of them. One is
created at first invocation of the "time" command, the other at the first
invocation of the "scan" command.
(d) is really the ugliest part of this. It replaces the Real Thing (TM),
which would be an extension that augments the info command (like Mike
McLennan used to do in Itcl)
Have fun with it and tell me what you think about it,
Marco
Index: generic/tcl.decls
===================================================================
RCS file: /cvsroot/tcl/generic/tcl.decls,v
retrieving revision 1.7
diff -c -r1.7 tcl.decls
*** tcl.decls 1999/03/11 02:49:33 1.7
--- tcl.decls 1999/05/06 16:13:30
***************
*** 973,978 ****
--- 973,982 ----
# }
# declare 285 generic {
# }
+ declare 286 generic {
+ void Tcl_CreateSubCmdHandler(Tcl_Interp *interp, char *cmdname,
+ Tcl_SubCmdHandler *proc, ClientData clientData)
+ }
#############################################################################
#
Index: generic/tcl.h
===================================================================
RCS file: /cvsroot/tcl/generic/tcl.h,v
retrieving revision 1.38
diff -c -r1.38 tcl.h
*** tcl.h 1999/03/12 23:03:51 1.38
--- tcl.h 1999/05/06 16:13:31
***************
*** 412,417 ****
--- 412,419 ----
Tcl_Interp *interp));
typedef int (Tcl_MathProc) _ANSI_ARGS_((ClientData clientData,
Tcl_Interp *interp, Tcl_Value *args, Tcl_Value *resultPtr));
+ typedef int (Tcl_SubCmdHandler) _ANSI_ARGS_((ClientData clientData,
+ Tcl_Interp *interp, int objects, int argc, ClientData vector));
typedef void (Tcl_NamespaceDeleteProc) _ANSI_ARGS_((ClientData clientData));
typedef int (Tcl_ObjCmdProc) _ANSI_ARGS_((ClientData clientData,
Tcl_Interp *interp, int objc, struct Tcl_Obj * CONST objv[]));
Index: generic/tclBasic.c
===================================================================
RCS file: /cvsroot/tcl/generic/tclBasic.c,v
retrieving revision 1.18
diff -c -r1.18 tclBasic.c
*** tclBasic.c 1999/03/11 02:49:34 1.18
--- tclBasic.c 1999/05/06 16:13:33
***************
*** 319,324 ****
--- 319,325 ----
Tcl_IncrRefCount(iPtr->objResultPtr);
iPtr->errorLine = 0;
Tcl_InitHashTable(&iPtr->mathFuncTable, TCL_STRING_KEYS);
+ Tcl_InitHashTable(&iPtr->subCmdTable, TCL_STRING_KEYS);
iPtr->numLevels = 0;
iPtr->maxNestingDepth = 1000;
iPtr->framePtr = NULL;
***************
*** 831,836 ****
--- 832,845 ----
ckfree((char *) Tcl_GetHashValue(hPtr));
}
Tcl_DeleteHashTable(&iPtr->mathFuncTable);
+
+ /* Same for subCmd Table */
+ for (hPtr = Tcl_FirstHashEntry(&iPtr->subCmdTable, &search);
+ hPtr != NULL;
+ hPtr = Tcl_NextHashEntry(&search)) {
+ ckfree((char *) Tcl_GetHashValue(hPtr));
+ }
+ Tcl_DeleteHashTable(&iPtr->subCmdTable);
/*
* Invoke deletion callbacks; note that a callback can create new
Index: generic/tclCmdIL.c
===================================================================
RCS file: /cvsroot/tcl/generic/tclCmdIL.c,v
retrieving revision 1.11
diff -c -r1.11 tclCmdIL.c
*** tclCmdIL.c 1999/02/03 00:55:04 1.11
--- tclCmdIL.c 1999/05/06 16:13:37
***************
*** 366,371 ****
--- 366,384 ----
result = Tcl_GetIndexFromObj(interp, objv[1], subCmds, "option", 0,
(int *) &index);
if (result != TCL_OK) {
+ Tcl_HashEntry *hPtr;
+ hPtr = Tcl_FindHashEntry(Tcl_GetSubCmdTable(interp),
+ Tcl_GetStringFromObj(objv[0], NULL));
+ if (hPtr) {
+ int retVal;
+ SubCmdProc *loopPtr;
+ SubCmdProc *subCmdPtr = (SubCmdProc *)Tcl_GetHashValue(hPtr);
+ for (loopPtr = subCmdPtr; loopPtr; loopPtr = loopPtr->next) {
+ retVal = (loopPtr->proc) (clientData, interp, 1, objc,
+ (ClientData) objv);
+ if (retVal != TCL_CONTINUE) return retVal;
+ }
+ }
return result;
}
Index: generic/tclCmdMZ.c
===================================================================
RCS file: /cvsroot/tcl/generic/tclCmdMZ.c,v
retrieving revision 1.2
diff -c -r1.2 tclCmdMZ.c
*** tclCmdMZ.c 1998/09/14 18:39:57 1.2
--- tclCmdMZ.c 1999/05/06 16:13:38
***************
*** 43,48 ****
--- 43,119 ----
static char * TraceVarProc _ANSI_ARGS_((ClientData clientData,
Tcl_Interp *interp, char *name1, char *name2,
int flags));
+ static int
+ Tcl_TestHandlerProc(clientData, interp, objects, argc, vector)
+ ClientData clientData;
+ Tcl_Interp *interp;
+ int objects;
+ int argc;
+ ClientData vector;
+ {
+ char **argv = NULL;
+ Tcl_Obj **objv = NULL;
+
+ char *name = NULL, *subcmd = NULL;
+
+ if (objects) {
+ objv = (Tcl_Obj **)vector;
+ subcmd = Tcl_GetStringFromObj(objv[1], NULL);
+ if (strcmp(subcmd, "extension") == 0) {
+ if (argc < 3) {
+ return TCL_ERROR;
+ }
+ name = Tcl_GetStringFromObj(objv[2], NULL);
+ switch (name[0]) {
+ case 'm' : Tcl_SetResult(interp, "371", TCL_STATIC); break;
+ case 'd' : Tcl_SetResult(interp, "715", TCL_STATIC); break;
+ default: Tcl_SetResult(interp, "unknown name", TCL_STATIC); return
TCL_ERROR;
+ }
+ return TCL_OK;
+ } else if (strcmp(subcmd, "nameofext") == 0) {
+ if (argc < 3) {
+ return TCL_ERROR;
+ }
+ name = Tcl_GetStringFromObj(objv[2], NULL);
+ if (strcmp(name, "371")) {
+ Tcl_SetResult(interp, "marco", TCL_STATIC);
+ } else if (strcmp(name, "715")) {
+ Tcl_SetResult(interp, "david", TCL_STATIC);
+ } else {
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+ }
+ } else {
+ argv = (char **)vector;
+ }
+
+ return TCL_CONTINUE;
+ }
+
+ static int
+ Tcl_TestHandlerProc2(clientData, interp, objects, argc, vector)
+ ClientData clientData;
+ Tcl_Interp *interp;
+ int objects;
+ int argc;
+ ClientData vector;
+ {
+ char **argv = NULL;
+ Tcl_Obj **objv = NULL;
+ char *subcmd = NULL;
+
+ if (objects) {
+ objv = (Tcl_Obj **)vector;
+ subcmd = Tcl_GetStringFromObj(objv[1], NULL);
+ if (strcmp(subcmd, "help") == 0) {
+ Tcl_SetResult(interp, "no help available", TCL_STATIC);
+ return TCL_OK;
+ }
+ }
+
+ return TCL_CONTINUE;
+ }
^L
/*
*----------------------------------------------------------------------
***************
*** 663,668 ****
--- 734,740 ----
char copyBuf[STATIC_SIZE], *fmtCopy;
register char *dst;
+ Tcl_CreateSubCmdHandler(interp, "info", (Tcl_SubCmdHandler
*)Tcl_TestHandlerProc2, NULL);
if (argc < 3) {
Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
" string format ?varName varName ...?\"", (char *) NULL);
***************
*** 1770,1775 ****
--- 1842,1849 ----
*/
/* ARGSUSED */
+
+
int
Tcl_TimeObjCmd(dummy, interp, objc, objv)
ClientData dummy; /* Not used. */
***************
*** 1783,1788 ****
--- 1857,1864 ----
double totalMicroSec;
Tcl_Time start, stop;
char buf[100];
+
+ Tcl_CreateSubCmdHandler(interp, "info", (Tcl_SubCmdHandler
*)Tcl_TestHandlerProc, NULL);
if (objc == 2) {
count = 1;
Index: generic/tclDecls.h
===================================================================
RCS file: /cvsroot/tcl/generic/tclDecls.h,v
retrieving revision 1.6
diff -c -r1.6 tclDecls.h
*** tclDecls.h 1999/03/11 02:49:34 1.6
--- tclDecls.h 1999/05/06 16:13:49
***************
*** 872,877 ****
--- 872,888 ----
/* 279 */
EXTERN void Tcl_GetVersion _ANSI_ARGS_((int * major, int * minor,
int * patchLevel, int * type));
+ /* Slot 280 is reserved */
+ /* Slot 281 is reserved */
+ /* Slot 282 is reserved */
+ /* Slot 283 is reserved */
+ /* Slot 284 is reserved */
+ /* Slot 285 is reserved */
+ /* 286 */
+ EXTERN void Tcl_CreateSubCmdHandler _ANSI_ARGS_((
+ Tcl_Interp * interp, char * cmdname,
+ Tcl_SubCmdHandler * proc,
+ ClientData clientData));
typedef struct TclStubHooks {
struct TclPlatStubs *tclPlatStubs;
***************
*** 1187,1192 ****
--- 1198,1210 ----
Tcl_Pid (*tcl_WaitPid) _ANSI_ARGS_((Tcl_Pid pid, int * statPtr, int
options)); /* 277 */
void (*panicVA) _ANSI_ARGS_((char * format, va_list argList)); /* 278 */
void (*tcl_GetVersion) _ANSI_ARGS_((int * major, int * minor, int *
patchLevel, int * type)); /* 279 */
+ void *reserved280;
+ void *reserved281;
+ void *reserved282;
+ void *reserved283;
+ void *reserved284;
+ void *reserved285;
+ void (*tcl_CreateSubCmdHandler) _ANSI_ARGS_((Tcl_Interp * interp, char *
cmdname, Tcl_SubCmdHandler * proc, ClientData clientData)); /* 286 */
} TclStubs;
extern TclStubs *tclStubsPtr; *************** *** 2319,2324 **** ---
2337,2352 ---- #ifndef Tcl_GetVersion #define Tcl_GetVersion(major, minor,
patchLevel, type) \ (tclStubsPtr->tcl_GetVersion)(major, minor, patchLevel,
type) /* 279 */ + #endif + /* Slot 280 is reserved */ + /* Slot 281 is
reserved */ + /* Slot 282 is reserved */ + /* Slot 283 is reserved */ + /*
Slot 284 is reserved */ + /* Slot 285 is reserved */ + #ifndef
Tcl_CreateSubCmdHandler + #define Tcl_CreateSubCmdHandler(interp, cmdname,
proc, clientData) \ + (tclStubsPtr->tcl_CreateSubCmdHandler)(interp,
cmdname, proc, clientData) /* 286 */ #endif
#endif /* defined(USE_TCL_STUBS) && !defined(USE_TCL_STUB_PROCS) */
Index: generic/tclInt.h
===================================================================
RCS file: /cvsroot/tcl/generic/tclInt.h,v
retrieving revision 1.24
diff -c -r1.24 tclInt.h
*** tclInt.h 1999/03/10 05:52:48 1.24
--- tclInt.h 1999/05/06 16:14:00
***************
*** 764,769 ****
--- 764,777 ----
} MathFunc;
/* + * The data structure that stores information about a subcommand
handler. + */ + typedef struct SubCmdProc { + Tcl_SubCmdHandler *proc; +
struct SubCmdProc *next; + } SubCmdProc; + + /*
*---------------------------------------------------------------- * Data
structures related to bytecode compilation and execution. * These are used
primarily in tclCompile.c, tclExecute.c, and *************** *** 1054,1059
**** --- 1062,1071 ---- * defined for the interpreter. Indexed by *
strings (function names); values have * type (MathFunc *). */ +
Tcl_HashTable subCmdTable; /* Contains all the commands currently + *
registered as having enhanced subcommands. + * Indexed by strings (command
names); + * values have type (SubCmd *). */
/*
* Information related to procedures and variables. See tclProc.c
***************
*** 1521,1526 ****
--- 1533,1539 ----
EXTERN void TclFinalizeExecEnv _ANSI_ARGS_((void));
EXTERN void TclInitNamespaces _ANSI_ARGS_((void));
EXTERN void TclpFinalize _ANSI_ARGS_((void));
+ EXTERN Tcl_HashTable *Tcl_GetSubCmdTable _ANSI_ARGS_((Tcl_Interp *interp));
/* *----------------------------------------------------------------
Index: generic/tclInterp.c
=================================================================== RCS file:
/cvsroot/tcl/generic/tclInterp.c,v retrieving revision 1.4 diff -c -r1.4
tclInterp.c *** tclInterp.c 1999/02/03 02:58:40 1.4 --- tclInterp.c
1999/05/06 16:14:03 *************** *** 737,748 **** Slave *slavePtr; /*
Interim storage for slave record. */ Tcl_Interp *masterInterp; /* Master of
interp. to delete. */ Tcl_HashEntry *hPtr; /* Search element. */ int
localArgc; /* Local copy of count of elements in * path (name) of interp.
to delete. */ char **localArgv; /* Local copy of path. */ char *slaveName;
/* Last component in path. */ char *masterPath; /* One-before-last
component in path.*/ ! if (Tcl_SplitList(interp, path, &localArgc,
&localArgv) != TCL_OK) { Tcl_AppendStringsToObj(Tcl_GetObjResult(interp),
"bad interpreter path \"", path, "\"", (char *) NULL); --- 737,750 ----
Slave *slavePtr; /* Interim storage for slave record. */ Tcl_Interp
*masterInterp; /* Master of interp. to delete. */ Tcl_HashEntry *hPtr; /*
Search element. */ + Tcl_HashSearch hSrch; int localArgc; /* Local copy of
count of elements in * path (name) of interp. to delete. */ char
**localArgv; /* Local copy of path. */ char *slaveName; /* Last component
in path. */ char *masterPath; /* One-before-last component in path.*/ !
SubCmdProc *subCmdProc, *loopPtr; ! if (Tcl_SplitList(interp, path,
&localArgc, &localArgv) != TCL_OK) {
Tcl_AppendStringsToObj(Tcl_GetObjResult(interp), "bad interpreter path \"",
path, "\"", (char *) NULL); *************** *** 785,790 **** --- 787,804 ----
} ckfree((char *) localArgv);
+ while (hPtr = Tcl_FirstHashEntry(&((Interp
*)slavePtr->slaveInterp)->subCmdTable, &hSrch)){
+ subCmdProc = (SubCmdProc *)Tcl_GetHashValue(hPtr);
+ while (subCmdProc) {
+ subCmdProc->proc (NULL, NULL, 0, 0, NULL);
+ loopPtr = subCmdProc->next;
+ ckfree((char *)subCmdProc);
+ subCmdProc = loopPtr;
+ }
+ Tcl_DeleteHashEntry(hPtr);
+ }
+ Tcl_DeleteHashTable(&((Interp *)slavePtr->slaveInterp)->subCmdTable);
+
return TCL_OK;
}
^L
***************
*** 3810,3813 ****
--- 3824,3867 ----
*objvPtr = aliasPtr->objv;
}
return TCL_OK;
+ }
+
+ void
+ Tcl_CreateSubCmdHandler(interp, cmdname, proc, clientData)
+ Tcl_Interp *interp;
+ char *cmdname;
+ Tcl_SubCmdHandler *proc;
+ ClientData clientData;
+ {
+ Interp *iPtr = (Interp *) interp;
+ Tcl_HashEntry *hPtr;
+ SubCmdProc *newPtr, *entryPtr, *loopPtr;
+ int new;
+
+ newPtr = (SubCmdProc *)ckalloc(sizeof(SubCmdProc));
+ newPtr->proc = proc;
+ newPtr->next = NULL;
+
+ hPtr = Tcl_CreateHashEntry(&iPtr->subCmdTable, cmdname, &new);
+ if (new) {
+ Tcl_SetHashValue(hPtr, newPtr);
+ }
+ entryPtr = (SubCmdProc *) Tcl_GetHashValue(hPtr);
+
+ if (!new) {
+ for (loopPtr = entryPtr; loopPtr->next; loopPtr = loopPtr->next) {
+ if (loopPtr->proc == proc) break;
+ }
+ if (loopPtr->proc != proc) {
+ loopPtr->next = newPtr;
+ }
+ }
+ }
+
+ Tcl_HashTable *
+ Tcl_GetSubCmdTable(interp)
+ Tcl_Interp *interp;
+ {
+ Interp *iPtr = (Interp *) interp;
+ return &iPtr->subCmdTable;
}
Index: generic/tclStubInit.c
===================================================================
RCS file: /cvsroot/tcl/generic/tclStubInit.c,v
retrieving revision 1.6
diff -c -r1.6 tclStubInit.c
*** tclStubInit.c 1999/03/11 00:19:23 1.6
--- tclStubInit.c 1999/05/06 16:14:12
***************
*** 350,355 ****
--- 350,362 ----
Tcl_WaitPid, /* 277 */
panicVA, /* 278 */
Tcl_GetVersion, /* 279 */
+ NULL, /* 280 */
+ NULL, /* 281 */
+ NULL, /* 282 */
+ NULL, /* 283 */
+ NULL, /* 284 */
+ NULL, /* 285 */
+ Tcl_CreateSubCmdHandler, /* 286 */
};
TclStubs *tclStubsPtr = &tclStubs;
Index: generic/tclStubs.c
===================================================================
RCS file: /cvsroot/tcl/generic/tclStubs.c,v
retrieving revision 1.6
diff -c -r1.6 tclStubs.c
*** tclStubs.c 1999/03/11 02:49:34 1.6
--- tclStubs.c 1999/05/06 16:14:13
***************
*** 2706,2710 ****
--- 2706,2727 ----
(tclStubsPtr->tcl_GetVersion)(major, minor, patchLevel, type);
}
+ /* Slot 280 is reserved */ + /* Slot 281 is reserved */ + /* Slot 282 is
reserved */ + /* Slot 283 is reserved */ + /* Slot 284 is reserved */ + /*
Slot 285 is reserved */ + /* Slot 286 */ + void +
Tcl_CreateSubCmdHandler(interp, cmdname, proc, clientData) + Tcl_Interp *
interp; + char * cmdname; + Tcl_SubCmdHandler * proc; + ClientData
clientData; + { + (tclStubsPtr->tcl_CreateSubCmdHandler)(interp, cmdname,
proc, clientData); + } +
/* !END!: Do not edit above this line. */
"Marco R. Gazzetta" wrote:
[patch snipped]
I haven't yet started on my own trail of destruction (kernal hacking :-). However,
I
am writing an introspection package (essentially an info extension) and would love
to know exactly what kind of augmentation you mean (so I can include it). I will
certainly look at your patch as I was planning something like this for the C
version.
In the TCL only version I have a subcommand facility based on just switching on the
first argument (though it could be extended to allow a tertiary level or generic if
anyone
actually desires such a feature). See related posting on this thread.
Regards,
Bruce A.
P.S. Is there anything Itcl adds to the info command that would I should look at
including?
I have not examined the OO extensions much (other than stooop at work). I posted
the
original spec a while back and a copy should still be floating around on the
Tcler's wiki
(url not to hand).
Wow !!! Thanks !!!
It's really a pleasure to be acquainted with smart and efficient people.
Now I am really sorry but I'm leaving for a week within hours; I'm not
sure I can find the time to try it until next week... Shame on me !
Talk to you then,
Best regards,
-Alex
|
https://groups.google.com/g/comp.lang.tcl/c/dN1D2ee5VxE/m/5m2cAEsBCKMJ
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
selinux_set_callback(3) SELinux API documentation selinux_set_callback(3)
selinux_set_callback - userspace SELinux callback facilities
#include <selinux/selinux.h> void selinux_set_callback(int type, union selinux_callback callback);
selinux_set_callback() sets the callback indicated by type to the value of callback, which should be passed as a function pointer cast to type union selinux_callback. All callback functions should return a negative value with errno set appropriately on error. The available values for type are: SELINUX_CB_LOG int (*func_log) (int type, const char *fmt, ...); This callback is used for logging and should process the printf(3) style fmt string and arguments as appropriate. The type argument indicates the type of message and will be set to one of the following: SELINUX_ERROR SELINUX_WARNING SELINUX_INFO SELINUX_AVC SELINUX_CB_AUDIT int (*func_audit) (void *auditdata, security_class_t cls, char *msgbuf, size_t msgbufsize); This callback is used for supplemental auditing in AVC messages. The auditdata and cls arguments are the values passed to avc_has_perm(3). A human-readable interpretation should be printed to msgbuf using no more than msgbufsize characters. SELINUX_CB_VALIDATE int (*func_validate) (char **ctx); This callback is used for context validation. The callback may optionally modify the input context by setting the target of the ctx pointer to a new context. In this case, the old value should be freed with freecon(3). The value of errno should be set to EINVAL to indicate an invalid context. SELINUX_CB_SETENFORCE int (*func_setenforce) (int enforcing); This callback is invoked when the system enforcing state changes. The enforcing argument indicates the new value and is set to 1 for enforcing mode, and 0 for permissive mode. SELINUX_CB_POLICYLOAD int (*func_policyload) (int seqno); This callback is invoked when the system security policy is reloaded. The seqno argument is the current sequential number of the policy generation in the system.
None.
None.
Eamon Walsh <ewalsh@tycho.nsa.gov>
selabel_open(3), avc_init(3), avc_netlink Jun 2007 selinux_set_callback(3)
|
http://man7.org/linux/man-pages/man3/selinux_set_callback.3.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Grove - Line Finder
Introduction
.
Specification
- Power supply :5V DC
- Digital output mode: TTL (High when black is detected, Low when white is detected)
- Connector: 4 pin Buckled Grove interface
- Dimension: 20mm*20mm
- ROHS: YES
- Comparator: MV358
- Photo reflective diode: RS-06WD
Tip
More details about Grove modules please refer to Grove System
Demonstration
With Arduino
Brick will return HIGH when black line is detected, and LOW when white line is detected. Using the adjustable resistor the detection range can be changed from 1.5cm to 5cm. If the sensor can’t tell between black and white surfaces, you can also use the adjustable resistor to set a suitable reference voltage.
Includes important code snippet. Demo code like :
Demo code { //------------------------------------------------------------ //Name: Line finder digital mode //Function: detect black line or white line //Parameter: When digital signal is HIGH, black line // When digital signal is LOW, white line //------------------------------------------------------------- int signalPin = 3; // connected to digital pin 3 void setup() { pinMode(signalPin, INPUT); // initialize the digital pin as an output: Serial.begin(9600); // initialize serial communications at 9600 bps: } // the loop() method runs over and over again, // as long as the Arduino has power void loop() { if(HIGH == digitalRead(signalPin)) Serial.println("black"); else Serial.println("white"); // display the color //delay(1000); // wait for a second } }
With Raspberry Pi
- You need a raspberry pi and a grovepi or grovepi+.
- You need to complete configuring the development enviroment, otherwise follow here.
- Connection
Plug the sensor to grovepi socket D7 by using a grove cable.
Navigate to the demos’ directory:
cd yourpath/GrovePi/Software/Python/
To see the code
nano grove_line_finder.py # "Ctrl+x" to exit #
import time import grovepi # Connect the Grove Line Finder to digital port D7 # SIG,NC,VCC,GND line_finder = 7 grovepi.pinMode(line_finder,"INPUT") while True: try: # Return HIGH when black line is detected, and LOW when white line is detected if grovepi.digitalRead(line_finder) == 1: print "black line detected" else: print "white line detected" time.sleep(.5) except IOError: print "Error"
5.Run the demo.
sudo python grove_line_finder.py
Resources
|
http://wiki.seeed.cc/Grove-Line_Finder/
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
#include "utils/chk.h" #include "utils/panic.h" int main (int argc, char * argv[]) { char * msg; int length; int arg_pos; int pos; chk_limit (64); msg = (char *)chk_malloc(1); msg[0] = 0; length = 0; for (arg_pos = 1; arg_pos < argc; ++arg_pos) for (pos = 0; argv[arg_pos][pos]; ++pos) { msg = (char *)chk_realloc ((void *)msg, length + 1, length + 2); msg[length] = argv[arg_pos][pos]; msg[length + 1] = 0; length += 1; } <>. */
|
http://basiscraft.com/basiscraft.com/+arch-0/crux/00008.00000-crux/crux-00008-html/silly/chk.c.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
CodePlexProject Hosting for Open Source Software
I need some advice on using the EventAggregator. Currently, I just want to use the EventAggregator and not other aspects of Prism. I am also using MEF, not Unity.
Everything I have researched says to use a single instance of the EventAggregator. All those examples seem to use Prism and Unity to take care of that so I would like an example on doing it with MEF.
I also want to know a little more about waht goes on under the hood of the EventAggregator. My concern is whether I need multiple instances of it or not and the best way to handle accomplishing that. The situation is that I have different views
and viewmodels that are similar in nature and may publish the same event. Does the EventAggregator differentatiate events depending on the object that published it? For example, if class A publishes MyEvent and class B also publishes MyEvent
will subscribers to MyEvent get the evnt from both class A and class B? If so, I need to avoid this situation. Should I just use a filter or use a different EventAggregator for separate components within my application.
Since this is important, let me explain further using specifics from my application. The main portion of my application is a graph control. This user control visualizes data in the form of Node objects. Node objects can be connected to
other nodes via Edges (lines). The view model for the graph control will be publishing a serieis of events such as NodeMouseEnter, NodeMouseLeave, NodeMouseClick, etc. Certain portions of the user interface will subscribe to these events.
An example is a popup control that displays information about a node when the mouse hovers over it. Now, the flip side is that some portions of the application have there own version of the graph control (with their own viewmodel). Much of the
functionality will be the same and they will share base classes. They would also publish many of the same events but the subscribers must be different. If I hover my mouse cursor over a node in a subgraph (not the main graph) I want the node popup
to show on the subgraph, not on the main graph (and vice versa). Does this mean I want a separate EventAggregator to handle these different Graph controls??
Thanks,
Todd
Hi Todd,
In order to support using the Event Aggregator with MEF, the Prism Library provides a
MefEventAggregator class (located in the Prism.MefExtensions.Silverlight project), which "Exports the EventAggregator using the Managed Extensibility Framework (MEF)". So if you want to use it, you should
import an instance of IEventAggregator, providing that you have added the necessary composable part to the Aggregate Catalog (which is done by the
MefBootstrapper). You could find an example of using the Event Aggregator in the
Stock Trader Reference Implementation.
If you don't want to add the entire Prism libraries that hold the EventAggregator, the
MefEventAggregator and even the MefBootstrapper
itself, you could put the necessary classes for the EventAggregator
(which are located in the Events folder of the Prism.Silverlight project) in a library of yours, and export the
EventAggregator class as an IEventAggregator. To this purpose, you might also find this
blog post by Damian Schenkelman useful. Even though it is targeted at an older Prism version, it should still work, as there aren't breaking changes in Event Aggregation in Prism v4.0.
As for your specific scenario, you could make use of the Subscription Filtering
feature of the Event Aggregator, to make the subscribers handle events only if a certain condition is met (which would suit to your scenario, in my understanding). You can read about it in
this article from the Prism documentation (Under the Event Aggregation section, and specifically under the Subscription Filtering sub section).
I hope you find this helpful.
Guido Leandro Maliandi
Thanks Guido.
I decided, for now, to not use MEF for this situation. I am using MEF in other areas but do to the timeline I am on I wanted to go with a slightly different scenario. I have plans, in the future, to update my solution to use the MefBootStrapper
but I don't have that time for now. I ended up wrapping the EventAggregator and providing a static property to access an instance of the class.
I don't need the filters at this point but hope they will end up allowing me to separate the events of different components. That is an important aspect that I will need in the near future. I just came across a perfect example. My Node
class is now pbulishing a NodeMouseEnterEvent whenever the mouse enters it. The trick here is that more than one graph may contain Node objects and the events shouldn't cross components. I.E., Graph A should not get events from nodes that are on
Graph B. I can use a Filter for this but I am trying to ensure that this is the best option. Each graph could have hundreds of nodes and the mouse enter and leave events could fire very quickly
It's nice to see that you're planning to implement further Prism capabilities.
As regarding your concern with the Event Aggregator, considering that you're going to have several instances of the Node objects, it could be a better idea to have a separate instance of the Event Aggregator for each Graph object, so as to avoid possible
performance issues.
Take into account that the Event Aggregator is useful to communicate between loosely coupled components. If the communication will be between components that are located in the same assembly, it isn't necessary to use the Event Aggregator; you could just
use plain .NET events.
Guido Leandro Maliandi
I have taken in to account the EventAggregator being useful for communication between loosely coupled components. Currently, most of my components are in the same assembly but they are very loosely coupled. I am also leaning towwards splitting
my main project into multiple projects because it is getting rather large and unrulely.
I weighed the decision to use EventAggregator for some time. I finally made the decision with the current task I am working on. I have a control I created that is a popup and shows information for a node. The control itself has no reference
to a node. The graph control (which has the popup control and the nodes) would have to refire the node mouse enter and mouse leave events and I would have had to make my popup control have reference to the graph view model (since I am using MVVM).
I didn't want this tight coupling so the EventAggregator is a perfect fit.
Since I created a wrapper for the EventAggregator, I should be able to easily update it to allow overriding the default instance (to provide additional instances for additional graphs). This appears to be similar to how the Messenger API works in the
MVVM Light Framework.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later.
|
https://compositewpf.codeplex.com/discussions/235745
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Why is my array globally manipulated, when I run the below ruby code? And how can I get arrays to be manipulated only within the function's scope?
a = [[1,0],[1,1]]
def hasRowsWithOnlyOnes(array)
array.map { |row|
return true if row.keep_if{|i| i != 1 } == []
}
false;
end
puts a.to_s
puts hasRowsWithOnlyOnes(a)
puts a.to_s
$ ruby test.rb
[[1, 0], [1, 1]]
true
[[0], []]
.select{true}
$ ruby -v
ruby 2.2.1p85 (2015-02-26 revision 49769) [x86_64-linux]
All variables in Ruby are references to objects.
When you pass an object to a method, a copy of that object's reference is made and passed to the object.
That means that the variable
array in your method and the top-level variable
a refer to the exact same Array. Any changes made to
array will be also visible as changes to
a, since both variables refer to the same object.
Your method does modify the array by calling Array#keep_if. The
keep_if method modifies the array in-place.
The best fix for this is to make it so that your method does not modify the array that was passed in. This can be done pretty neatly using the Enumerable#any? and Enumerable#all? methods:
def has_a_row_with_only_ones(array) array.any? do |row| row.all? { |e| e == 1 } end end
This code says that the method returns true if, for any row, every element in that row is 1. These methods do not modify the array. More important, they communicate the method's intent clearly.
If you want the method to act as through a copy of the array were passed to it, so that the array can be modified without that modification being visible outside the method, then you can make a deep copy of the array. As shown in this answer, you can define this method to make a deep copy:
def deep_copy(o) Marshal.load(Marshal.dump(o)) end
Then, at the top of the method, make the deep copy:
def has_a_row_with_only_ones(array) array = deep_copy(array) # code that modifies array end
This should be avoided because it's slow.
|
https://codedump.io/share/0gmcElI2Xku2/1/how-can-i-get-my-array-to-only-be-manipulated-locally-within-a-function-in-ruby
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
#include <AIKBSplayTree.h>
KBSplayTree is one implementation of a knowledge base. Frequently used or recently accessed nodes sit near the top of the tree, allowing fast search. All operations complexity: Expected Time O(lg N) [amortized] Worst-Case Time O(N)
iterates through the KB, totaling the badGoodScaleValue of each Record that has _stateIdAtCreation == stateId
Implements OpenSkyNet::AI::KB.
removes all entries EXCEPT for the reserved slots
Implements OpenSkyNet::AI::KB.
does NOT include the reserved slots
Implements OpenSkyNet::AI::KB.
|
http://tactics3d.sourceforge.net/Docs/html/class_open_sky_net_1_1_a_i_1_1_k_b_splay_tree.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
.
Building LTTng-UST with Java agent support
In order to use LTTng with Java applications, LTTng-UST needs to be built with Java agent support. Some Linux distributions may already have LTTng-UST packages including support for Java applications. If not, you will need to build LTTng-UST from source with the following configuration flags.
The configure script will automatically detect the appropriate Java binaries to use in order to build the Java agent.
Java agent with JUL support:
./configure --enable-java-agent-jul
Java agent with log4j support:
To build the agent with log4j support, make sure that the log4j JAR file is in your Java classpath.
export CLASSPATH=$CLASSPATH:/path/to/log4j.jar ./configure --enable-java-agent-log4j
Java agent with JUL + log4j support
export CLASSPATH=$CLASSPATH:/path/to/log4j.jar ./configure --enable-java-agent-all
Instrumenting Java applications
In order to trace your application using the existing JUL or log4j logging facility, some snippet of code needs to be added to your application's main or execution loop. The highligted text in the example below show the required lines:
import org.lttng.ust.LTTngAgent; public class MyApp { public static void main(String[] argv) throws Exception { // ... // Call this as soon as possible (before logging) LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent(); // ... // Not mandatory, but cleaner lttngAgent.dispose(); } }
By calling the
LTTngAgent.getLTTngAgent() method, the agent will
initialize its communication threads and will be able to communicate
with the LTTng session daemon.
From the user's point of view, once the LTTng-UST Java agent has been initialized, JUL and log4j loggers may be created and used as usual. The agent adds its own handler to the root logger, so that all loggers may generate LTTng events with no effort.
Here is a simple example with the JUL framework illustrating the above:
import java.io.IOException; import java.util.logging.Level; import java.util.logging.Logger; import org.lttng.ust.agent.LTTngAgent; public class MyApp { private static final int answer = 42; private static LTTngAgent lttngAgent; public static void main(String args[]) throws Exception { Logger helloLog = Logger.getLogger("hello"); lttngAgent = LTTngAgent.getLTTngAgent(); Thread.sleep(5000); helloLog.info("Hello World, the answer is " + answer); lttngAgent.dispose(); } }
When building your Java application, you need to make sure that the
Java agent JAR file,
liblttng-ust-agent.jar, is in your classpath. You
will find below an example on how to build the sample application with
the Java agent support:
export CLASSPATH=$CLASSPATH:/usr/share/java/liblttng-ust-agent.jar javac -classpath "$CLASSPATH" MyApp.java
Now that we have built the LTTng-UST Java agent and properly instrumented our Java applications, its time to trace!
Tracing the applications
Tracing the application is accomplished via the
lttng command line
tool. We will need to create a session, enable some JUL or log4j
events and then start our applications in order to trace it.
For reference on this procedure, consult the controlling tracing section of the official documentation.
Session setup for JUL logging (
-j,
--jul domain):
lttng create java-test lttng enable-event -a -j lttng start
Alternatively for log4j (
-l,
--log4j domain):
lttng create java-test lttng enable-event -a -l lttng start
Start your Java application:
export CLASSPATH=$CLASSPATH:/path/to/liblttng-ust-java-agent.jar ./MyApp
When you're ready to view the logs, stop tracing and view the resulting trace:
lttng stop lttng view
which will output something like:
[...] [02:51:59.050393159] (+?.?????????) kappa lttng_jul:user_event: { cpu_id = 3 }, { msg = "Hello World, the answer is 42", logger_name = "hello", class_name = "MyApp", method_name = "main", long_millis = 1420703518998, int_loglevel = 800, int_threadid = 1 } [...]
Conclusion
This tutorial covered the LTTng-UST Java agent and how to trace Java applications. At the moment, only the JUL and log4j logging backends are supported but the UST Java agent can easily be extended to support other logging backends.
For a complete reference on how to instrument Java applications using LTTng-UST, see the Java application section in LTTng's official documentation.
|
http://lttng.org/blog/2015/05/12/tutorial-java-tracing/
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
/*
* TreeStrategy java.lang.reflect.Array;
import java.util.Map;
import org.simpleframework.xml.stream.Node;
import org.simpleframework.xml.stream.NodeMap;
/**
* The <code>TreeStrategy</code> object is used to provide a simple
* strategy for handling object graphs in a tree structure. This does
* not resolve cycles in the object graph. This will make use of the
* specified class attribute to resolve the class to use for a given
* element during the deserialization process. For the serialization
* process the "class" attribute will be added to the element specified.
* If there is a need to use an attribute name other than "class" then
* the name of the attribute to use can be specified.
*
* @author Niall Gallagher
* @see org.simpleframework.xml.strategy.CycleStrategy
public class TreeStrategy implements Strategy {
/**
* This is the loader that is used to load the specified class.
*/
private final Loader loader;
* This is the attribute that is used to determine an array size.
private final String length;
/**
* This is the attribute that is used to determine the real type.
*/
private final String label;
* Constructor for the <code>TreeStrategy</code> object. This
* is used to create a strategy that can resolve and load class
* objects for deserialization using a "class" attribute. Also
* for serialization this will add the appropriate "class" value.
public TreeStrategy() {
this(LABEL, LENGTH);
}
* objects for deserialization using the specified attribute.
* The attribute value can be any legal XML attribute name.
*
* @param label this is the name of the attribute to use
* @param length this is used to determine the array length
public TreeStrategy(String label, String length) {
this.loader = new Loader();
this.length = length;
this.label = label;
}
* This is used to resolve and load a class for the given element.
* Resolution of the class to used is done by inspecting the
* XML element provided. If there is a "class" attribute on the
* element then its value is used to resolve the class to use.
* If no such attribute exists on the element this returns null.
* @param type this is the type of the XML element expected
* @param node this is the element used to resolve an override
* @param map this is used to maintain contextual information
* @return returns the class that should be used for the object
* @throws Exception thrown if the class cannot be resolved
public Value read(Type type, NodeMap node, Map map) throws Exception {
Class actual = readValue(type, node);
Class expect = type.getType();
if(expect.isArray()) {
return readArray(actual, node);
}
if(expect != actual) {
return new ObjectValue(actual);
return null;
}
* This also expects a "length" attribute for the array length.
private Value readArray(Class type, NodeMap node) throws Exception {
Node entry = node.remove(length);
int size = 0;
if(entry != null) {
String value = entry.getValue();
size = Integer.parseInt(value);
}
return new ArrayValue(type, size);
* If no such attribute exists the specified field is returned,
* or if the field type is an array then the component type.
private Class readValue(Type type, NodeMap node) throws Exception {
Node entry = node.remove(label);
expect = expect.getComponentType();
String name = entry.getValue();
expect = loader.load(name);
}
return expect;
}
* This is used to attach a attribute to the provided element
* that is used to identify the class. The attribute name is
* "class" and has the value of the fully qualified class
* name for the object provided. This will only be invoked
* if the object class is different from the field class.
*
* @param type this is the declared class for the field used
* @param value this is the instance variable being serialized
* @param node this is the element used to represent the value
* @return this returns true if serialization is complete
public boolean write(Type type, Object value, NodeMap node, Map map){
Class actual = value.getClass();
Class real = actual;
if(actual.isArray()) {
real = writeArray(expect, value, node);
if(actual != expect) {
node.put(label, real.getName());
}
return false;
* This is used to add a length attribute to the element due to
* the fact that the serialized value is an array. The length
* of the array is acquired and inserted in to the attributes.
* @param field this is the field type for the array to set
* @param value this is the actual value for the array to set
* @param node this is the map of attributes for the element
* @return returns the array component type that is set
private Class writeArray(Class field, Object value, NodeMap node){
int size = Array.getLength(value);
if(length != null) {
node.put(length, String.valueOf(size));
return field.getComponentType();
}
|
http://simple.sourceforge.net/download/stream/report/cobertura/org.simpleframework.xml.strategy.TreeStrategy.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Whether you work a full-time job as an app developer, take on the periodic contract gig, or simply build smartphones apps as a hobby, one common task you'll find yourself repeating time and time again is creating an "about" dialog. The about dialog provides a quick means for communicating the most important elements regarding an installed application. The about dialog has evolved into a fundamental tool for users and developers alike.
There is no right or wrong way to implement an about dialog. The key though is that the dialog should give the user at a glance the application title, version, and some sort of contact information. On a smartphone where there is limited screen real-estate, it helps if the dialog is not only informative, but also simple and professional looking. Smartphone users have also come to expect touchable "hot spots" that initiate actions such as opening a web page or dialing a support number.
After reinventing the wheel so to speak while coding the about dialogs for my first couple of Android apps, I eventually came up with a flexible template of sorts. The approach has some advantages besides the consistent look and feel it brings to my complete offering of applications. The implementation uses raw text files for both the legal fine print (i.e., my software user agreement) and also for the standard information (i.e., title, version, etc.). The latter file allows me to use basic HTML tags as formatting attributes. Finally, with just a couple of lines of code, I can activate numerous intents on the dialog, including launching web pages, dialing phone numbers, and composing emails.
This tutorial shows you how to create your own about dialog template. You can either follow the step-by-step instructions, or download the entire project and import it into Eclipse.
1. Create a new Android project in Eclipse targeted at SDK 1.6 or higher.
2. Add an about.xml file to your /res/layout folder. The layout is a simple table with two text views and one image view. Note we use the stretch column attribute so that our table columns adjust dynamically to the content we will provide in our raw text files.
about.xml
<?xml version="1.0" encoding="utf-8"?>
<TableLayout
xmlns:android=""
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:
<TableRow><ImageView
android:
<TextView
android:id="@+id/info_text"android:gravity="center"
android:layout_gravity="center"
android:layout_width="fill_parent"
android:
</TableRow>
<TableRow>
<TextView
android:id="@+id/legal_text"
android:layout_width="fill_parent"android:layout_height="wrap_content"
android:
</TableRow></TableLayout>
3. In the /res/drawable folder, include an image that you'll want to represent your app. If you examine the layout for about.xml, you'll see my image is called cube.png.
4. In the /res/layout folder, you'll provide a default main.xml file.
main.xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android=""
android:orientation="vertical"
android:layout_width="fill_parent"
android:<TextView
android: </LinearLayout>
5. Before moving on to the source code, you'll provide two text files for the about box. In your /res folder, you'll want to create a new directory called /raw. Add two new files: legal.txt and info.txt. These files may contain whatever you'd like your dialog to display. The template will interpret legal.txt as plain text. However, when displaying the contents of info.txt, the code will attempt to honor HTML formatting attributes. Below I provided example content for the files to get you started.
legal.txt
use of this software including but not limited to: monetary loss,
temporary paralysis, halitosis, spontaneous combustion,
road rash, and or premature hair loss.
info.txt<h3>My App</h3>
Version 1.0<br>
<b></b><br><br>1-800-555-1234
6. The implementation for the AboutDialog.java class contains a standard dialog constructor, a utility method for reading raw files, and the familiar onCreate override. In this onCreate method, there are a couple noteworthy items. First, the use of the Html.fromHtml method when setting the text view that displays our info.txt content; this is how we are able to use HTML format tags. Also worth mentioning is the call to Linkify.addLinks; this is a handy static Android class that filters your text for things like e-mail addresses, phone numbers, and web URLs. When a pattern is recognized, the text is automatically changed to a link and associated with available intent providers. In our sample above, this means users will be able to tap on the website or the phone number, and the device will respond in kind by doing "the right" thing.
AboutDialog.Java
package com.authorwjf; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import android.app.Dialog; import android.content.Context; import android.os.Bundle; import android.text.Html; import android.text.util.Linkify;
import android.graphics.Color;
import android.widget.TextView; public class AboutDialog extends Dialog{ private static Context mContext = null; public AboutDialog(Context context) { super(context);
mContext = context;/**
}
* Standard Android on create method that gets called when the activity initialized.
*/
@Overridepublic void onCreate(Bundle savedInstanceState) { setContentView(R.layout.about);
TextView tv = (TextView)findViewById(R.id.legal_text);
tv.setText(readRawTextFile(R.raw.legal));
tv = (TextView)findViewById(R.id.info_text);
tv.setText(Html.fromHtml(readRawTextFile(R.raw.info)));
tv.setLinkTextColor(Color.WHITE);
Linkify.addLinks(tv, Linkify.ALL);}
public static String readRawTextFile(int id) {
InputStream inputStream = mContext.getResources().openRawResource(id);InputStreamReader in = new InputStreamReader(inputStream); BufferedReader buf = new BufferedReader(in);
String line;StringBuilder text = new StringBuilder(); try {
while (( line = buf.readLine()) != null) text.append(line); } catch (IOException e) { return null;
}return text.toString();
}}
7. We need to create a Main.java file that will handle creating a standard Android menu and invoking the about dialog when needed.
Main.java package com.authorwjf; import android.app.Activity; import android.os.Bundle; import android.view.Menu; import android.view.MenuItem; public class Main extends Activity { final public int ABOUT = 0;
/** Called when the activity is first created. */
@Overridepublic void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState);
setContentView(R.layout.main);
}public boolean onCreateOptionsMenu(Menu menu) {
menu.add(0,ABOUT,0,"About");return true;
}public boolean onOptionsItemSelected (MenuItem item){ switch (item.getItemId()){ case ABOUT: AboutDialog about = new AboutDialog(this); about.setTitle("about this app");
about.show();break;
} return true;
}}
There is no right or wrong way to implement an about dialog; however, the template presented in this post serves me well. If it meets your needs, feel free to add it to your library.Figure A
My about dialog.
|
http://www.techrepublic.com/blog/software-engineer/a-reusable-about-dialog-for-your-android-apps/
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Recently I came across this code fragment:
// look for element which is the smallest max element from
// a given iterator
int diff = std::numeric_limits<int>::max();
auto it = nums.rbegin();
auto the_one = nums.rbegin();
for (; it != given; ++it) // this terminates
{
int local_diff = *it - *given;
// if the element is less than/equal to given we are not interested
if (local_diff <= 0)
continue;
if (local_diff < diff)
{
// this update the global diff
diff = local_diff;
the_one = it;
}
}
std::max_element
auto the_one = std::min_element(nums.rbegin(), given, [given](int a, int b) { bool good_a = a > *given; bool good_b = b > *given; return (good_a && good_b) ? a < b : good_a; });
The trick is to write a comparison function that declares any "good" element (one that's greater than
*given) to compare smaller than any "not good" element. Two "good" elements are compared normally; two "bad" elements are always declared equivalent.
|
https://codedump.io/share/6RHLRD3ijvCj/1/stl-algorithm-for-smallest-max-element-than-a-given-value
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Create usb flash drive from binary image in Qt4
Hi!
I need to create a kind of usb flash drive creator. The problem is that it must be multiplatform and have GUI. GUI is not the problem because I'm quite familiar with Qt4. Hovewer, I don't know how to create flash drives from image file.
On linux I could use dd for that (just invoke it with QProcess but still it's not the ideal solution) because it could make exact copy of binary file (image) to usb drive (e.g. /dev/sdc). How can I achieve that on my own using Qt4? I can't simple copy file by file because I want to create whole disk structure (partition, bootloader etc.). I hope I've explained well what I need.
The solution must work on Windows and MacOS. Linux users can use simple bash script that uses dd to achieve the same thing.
Thanks in advance for any help or tips.
Windows counterparts of whole-disks /dev/sda /dev/sdb (and so on) are "\.\PhysicalDrive0" "\.\PhysicalDrive1" and so on (mind the backslashes, you need to escape them gaining ridiculous \\.\Physical...).
Windows counterparts of partitions /dev/sda1 /dev/sda2 (and so on) are \.\C: \.\D: and so on (as long as partition is recognized by Windows).
Note, that Windows (AFAIK) allows accesses only by multiples of sector size in length and only from sector boundary (which is 512 bytes for most HDDs - however for same new multi-terabyte disks it might be 4kbytes). For whole-device access starting from 0 you get this for free :)
On Linux side you may access /dev/sd* just like plain files via QFile, so there's no need to use QProcess and indirectly dd - of course you need respective R / W rights to the device (just checked - works, Sample code follows). Probably the same applies to MacOS, but I don't know the respective device names. Probably on Windows you may access \.-ies the same way.
-- Sample code --
@#include <QtCore>
#include <QFile>
int main(int argc, char *argv[]) {
#ifdef Q_WS_X11
QFile hdd("/dev/sda");
#endif
#ifdef Q_WS_WIN
QFile hdd("\\.\PhysicalDrive0");
#endif
#ifdef Q_WS_MACX
QFile hdd("/dev/disk1"); // I'm not sure
#endif
if (!hdd.open(QIODevice::ReadOnly)) return 255; QByteArray d = hdd.read(512); // Linux allows non-512 mutiplies hdd.close(); QFile file("out.bin"); if (!file.open(QIODevice::WriteOnly)) return 255; file.write(d); file.close(); return 0;
}@
-- /Sample code --
HTH.
Thank you very much for the answer (or should I say 'dziękuję'?) :)
Your sample code shows how to write an image from flash drive. I want to achieve something opposite - write file image to flash drive. Hovewer, I think it's gonna look the same or it's gonna be very similar.
I suppose it's a lot easier with unix systems which recognise disks as normal files (to be more accurate, special type of files but still). I have no idea if it's possible to do that on Windows. I will try to use that solution and post the answer if it works or not!
Yay! Nie ma sprawy! ("No problem")
I just didn't want to trash my workstation's MBR and had no free USB drive laying around, so example is drive->file :)
Well, just swap the paths around. Read file, write device. For Linux it's OK as long as you have permission to write to the device. This should be OK for the other two systems, too.
This solution works perfect on Linux. Hovewer, there is a problem on Windows. I've successfully made image of pendrive and save it to file but I couldn't write anything to pendrive. It seems that the problem is with opening file with WriteOnly or ReadWrite flag:
[code]
QFile hdd("\\.\PhysicalDrive1");
hdd.open(QIODevice::WriteOnly); // this line returns false
[/code]
I was using Windows Vista. Any ideas? Do I have to use WinAPI functions to get write access?
Try running the program with administrative credentials ("Run as administrator" or sth). As reading works, this may be simple ACL problem.
- bulkflashdrive
This post is deleted!
|
https://forum.qt.io/topic/18569/create-usb-flash-drive-from-binary-image-in-qt4
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
Eric, 135 is less than 256, so it is possible to get 135 out of a max 256.
Can you post the exact loop that get the overflow issue?
Eric, you don't have to be sorry about it, it happens.
I tested the For Next on macOS and the test run just freeze. If I change to 254 then there is no problem.
I did a test with uint16 and 0 to 65535 and got the same.
I don't know much of how Xojo works, do you think that the for next go to that value (255 or 65535) and execute the code inside of it and the problem is when it gets to 'Next' that it tries to go to 256 (or 65536)
and those values are not valid? so it goes to 0 again (thank you Andre).
ETA: just used this code and it works, it looks like the last Next is the problem:
For X As UInt8 = 0 To 255 Dim strAnything As String = "Hello World" if x = 255 then exit // this way Next will not try to increment X beyond Uint8 limit Next
ETA2: Thank you Andre, good to know.
The problem is that uint8 can only hold values upto 255, if the uint8 variable = 255 and you add 1 the value becomes 0 again and the next will start the loop all over again.
In this case you should use a variable with at least 16bits to prevent the problem.
The real problem is that Xojo doesn't support overflow for integer variables.
The moral: if you use an integer as a loopcounter be sure to use the type that can handle the whole range of the loop.
Agreed. But there are 256 numbers in a uint8 not 255 so it's counter-intuitive that
For x as uint8 = 0 to 255 should be a problem. I do this type of thing often with Do loops after all using the full range of the variable.
However I am comforted by the fact I haven't gone crazy and that others see it too!
Right, but the last next will try to increase the value of x but because x is limited to a max of 255 it will go round to zero again.
In a do loop the check for the value of x is at the end of the loop and as soon the var x is 255 it will not be increased anymore and the loop will end. In that case x will never be asked to take a value higher than that 255.
OK but I don't recall having this problem with
For..Next in my VB days nor with Open/Libre Office. It's counter-intuitive which is why I'm heading back to Do loops.
In my real-world situation the debugger caught the exception but in my example code the test run went into hyperspace like Alberto's mac. That is not a good look since
For...Next isn't exactly new, but at least the platforms are consistent!
In most programs in VB i have seen the variables were dimmed as integer and that integer has a much bigger range and would never show a problem with ranges from 0 to 255. I have never done anything with Open Office, so i cann't comment on that.
It could be that do loops are a little bit less efficient because you need extra code to increment the loopcounter yourself, but with this smal range it will hardly be measurable. So there is nothing against using them.
I tested this on Windows and it shows exactly the same behaviour.
At least you now know where the problem is and you can be sure you are absolutely not getting crazy. :D
This post may help:
Thanks for the link Jason. Although somewhat unintentional, I'm glad the uint data types were included in mainstream Xojo. For me, they are important for allowing interoperability between bitwise, mathematical and string bin() and val(&b 32bit) functions. Otherwise bitwise operations would be much harder to use with signed integers. However in this case I was using uint to catch overflow errors which would not stop execution and could otherwise go unnoticed leading to difficulties in debugging, since signed integers are capable of greater and/or signed negative ranges that my application should not allow or have to deal with. Therefore I think
For...Next deserves access to the full range of unsigned integers expressly dimensioned by programmers for counting purposes., so that when dealing with a byte one can use an uint8 loop for example. Until that day comes I'll stick with
Do...Loop with counters.
@Eric W Therefore I think For...Next deserves access to the full range of unsigned integers expressly dimensioned by programmers for counting purposes., so that when dealing with a byte one can use an uint8 loop for example.
uint8 can hold integers from 0 to 255, which is 256 distinct values.
It's not big enough to hold a value of 256 (9 bits).
Start with 0 as the 1st value, counting by 1, the 256th 'value' will be 255.
The
For...Next adds 1 to the counter at the bottom of the loop,
THEN it checks if the counter is greater than the limit.
A UInt8 counter will fail (or reset to 0) when it tries to increment to 256.
A UInt8 counter will fail (or reset to 0) when it tries to increment to 256.
''
I would like this to be so but it overflows before incrementing to 255. Please run the example code above to verify this with uint8 counters. Anyway,
Do...Loop has no such issue if incrementing at the bottom and one's code is clearer too.
On windows X rolls over to 0 after 255, endless loop.
Yes John you are right, this is very yesterday's problem for me, sorry. So picking up the thread, we can put Alberto's code contributed above in the loop to remedy Xojo's uint
For...Next
if x = 255 then exit // this way Next will not try to increment X beyond Uint8 limit
Which as far as I know is the only
For..Next in the known universe to need such remedy. Kind of defeats the purpose of
For..Next doesn't it?
So use
Do...Loop with uints instead I say unless one is testing for a condition anyway.
@Eric W
Which as far as I know is the only
For..Nextin the known universe to need such remedy.
Are you sure? I tried a small test in C:
#include <stdio.h> #include <stdint.h> int main(int argc, char **argv) { uint8_t i; for (i = 0; i <= 255; i++) { printf("%d\n", i); } printf("Finally %d\n", i); return 0; }
On my Mac (compiler idents as "Apple LLVM version 9.0.0 (clang-900.0.39.2)"), the loop never ends. If coded as
for (i=0; i <256; i++) the compiler throws "warning: comparison of constant 256 with expression of type 'uint8_t' (aka 'unsigned char') is always true."
I'm not a C guy but aren't you incrementing before the print statement so that your 0 index has already turned to a 1 in the first loop? In BASIC dialects
For 0 starts the first pass of the loop with the index value of 0,
For 1 starts the first pass of the loop with the index value of 1. Or am I mistaken about this?
To quote Xojo's language reference for example :
By default the counterValue is changed by 1 at the end of each loop (when the Next statement is reached or a Continue is invoked within the loop.
[Emphasis added]
for (i = 0; i <=255; i++) { printf("%d\n", i); }
It starts printing at 0, not 1, as i is equal to 0 the first time through.
But the important thing is what happens when i = 255. Note that i is incremented before the test, so it goes from 255 to 0 before the <=255 test is performed.
When reading your code from left to right it looks like i is incremented after the test but I take your word for it.
But if in the case of
For..Next (not being C/C++) the increment only happens "when the Next statement is reached" - as Xojo's Language Reference puts it - then the test in Xojo must occur after the increment to keep the
For...Next rules of the language consistent and in tact; and which increment should not occur if the specified valid range is already exhausted!
However, using
Do...Loop instead of
For...Next makes for more explicit code with better control IMHO. And if I were king of the universe I would abolish
For...Next which of course Xojo Inc. can't.
|
https://forum.xojo.com/46463-for-next-strange-things-happening/6
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
Hello,
I created a new Windows Application and added a Panel to the Form.
But, I don't see the MouseWheel event of the Panel.
Panel derives from Control, which has the MouseWheel event.
So why Panel does not have the MouseWheel event ?
Thanks !
Evidently you can get the number of clicks through a mousemove event (accessing them as e.Delta).
see
You can decorate properties, events, etc with attributes to hide them in the designer.
// // Summary: // Occurs when the mouse wheel moves while the control has focus. [Browsable(false)] [EditorBrowsable(EditorBrowsableState.Advanced)] public event MouseEventHandler MouseWheel;
The event is there but it is not implemented for the panel because you don't focus a panel like you would a textbox or button, and the mousewheel event fires when a control is focused. You probably have something else inside of the panel that you're wanting to scroll around? Explain what you're trying to accomplish in a little more detail.
Hi,
I understand what you say.
Where is this code taken from ?
What is the meaning of lines 4,5 ?
Here is what I'm trying to do:
I write some kind of painting program.
I draw all shapes inside a Panel. (Is that a good idea ?)
When the mouse wheel is moved, I would like to increase/decrease the pen width of the currently drawing shape.
>> Line 4:
[Browsable(false)] :
BrowsableAttribute Class
Specifies whether a property or event should be displayed in a Properties window.
--
A visual designer typically displays in the Properties window those members that either have no browsable attribute or are marked with the BrowsableAttribute constructor's browsable parameter set to true. These members can be modified at design time. Members marked with the BrowsableAttribute constructor's browsable parameter set to false are not appropriate for design-time editing and therefore are not displayed in a visual designer. The default is true.
>> Line 5:
[EditorBrowsable(EditorBrowsableState.Advanced)] :
EditorBrowsableAttribute Class
Specifies that a property or method is viewable in an editor. This class cannot be inherited.
--
EditorBrowsableAttribute is a hint to a designer indicating whether a property or method is to be displayed. You can use this type in a visual designer or text editor to determine what is visible to the user. For example, the IntelliSense engine in Visual Studio uses this attribute to determine whether to show a property or method.
In Visual C#, you can control when advanced properties appear in IntelliSense and the Properties Window with the Hide Advanced Members setting under Tools | Options | Text Editor | C#. The corresponding EditorBrowsableState is Advanced.
>>Where is this code taken from
In the instructions above it tells you how to view advanced members with intellisense which I don't have hidden. So in the designer I typed:
this.panel1.MouseWheel
When your cursor is on the "MouseWheel" word hit F12 and it will navigate to the declaration, where I copied the code I posted here.
>>I draw all shapes inside a Panel. (Is that a good idea ?)
I don't see why not, but I will let one of our resident drawing experts comment on that.
Try capturing the event at the form level and see if the mouse is on top of the panel:
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; namespace daniweb { public partial class frmPenWheelDraw : Form { private Pen pen; public frmPenWheelDraw() { InitializeComponent(); } private void frmPenWheelDraw_Load(object sender, EventArgs e) { pen = new Pen(Color.Red, 15); this.MouseWheel += new MouseEventHandler(frmPenWheelDraw_MouseWheel); } void frmPenWheelDraw_MouseWheel(object sender, MouseEventArgs e) { //You want to grab the mouse position and put a breakpoint after that //so you can see the values when the event fired Point mousePos = Control.MousePosition; Rectangle rect = new Rectangle(this.PointToScreen(panel1.Location), panel1.Size); //System.Diagnostics.Debugger.Break(); if (rect.Contains(mousePos)) //They're on top of the panel { pen.Width += (e.Delta > 0 ? 1 : -1); panel1.Invalidate(); } } private void panel1_Paint(object sender, PaintEventArgs e) { e.Graphics.DrawLine(pen, new Point(0, 0), new Point(panel1.Width, panel1.Height)); } } }
The problem is only the top level form can get the mouse wheel event.
|
https://www.daniweb.com/programming/software-development/threads/249626/why-mousewheel-event-does-not-exist
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
bytable: data from/to ByteSt.
You can read data from ByteString.
some = do x <- take 4 y <- take 8 return (x :: Int, y :: Integer)
Properties
Modules
Downloads
- bytable-0.0.0.1.tar.gz [browse] (Cabal source package)
- Package description (as included in the package)
Maintainers' corner
For package maintainers and hackage trustees
|
http://hackage.haskell.org/package/bytable-0.0.0.1/candidate
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
Using Mockito with Kotlin
Warning
After I published this article (thanks to all people who provided their feedback), I realized that approach I suggested is less than ideal and doesn't really solve anything - see updates at the bottom of the article. I will still keep it published though, so other people can learn from my mistakes
As Kotlin gets more and more popular and more people start paying a bit more attention to it, I decided to address one of the long-running pain points for those who tried to test Kotlin code using Mockito.
All Kotlin classes are
final by default
Yes. They are really final by default. And even though it is a nice language feature, it puts some constrains on us when it comes to using some well-known libraries and frameworks on Android.
One of such frameworks is Mockito.
If you used Mockito before, you might already know that in order to mock specific class, it cannot be
final.
I.e. following code wouldn't just work:
app/src/main/kotlin/Utils.kt:
class Utils { fun getHelloWorld() = "Hello World!" }
app/src/test/kotlin/UtilsTest.kt:
class UtilsTest { @Test @Throws(Exception::class) fun testMockedHelloWorld() { val mockedUtils = mock(Utils::class.java) `when`(mockedUtils.getHelloWorld()).thenReturn("Hello mocked world!") val helloWorld = mockedUtils.getHelloWorld() assertEquals("Hello mocked world!", helloWorld) } }
Code above would just throw something like this:
org.mockito.exceptions.base.MockitoException: Cannot mock/spy class Utils Mockito cannot mock/spy following: - final classes - anonymous classes - primitive types
So here we go. Since Kotlin classes by default are final - Mockito simply cannot mock them.
To "fix" this we could just "open" our class, couldn't we?
app/src/main/kotlin/Utils.kt:
open class Utils { open fun getHelloWorld() = "Hello World!" }
Life is great again! Tests are running, Mockito is mocking. Job is done! Time to get back to funny cat videos!
Well, not quite!
Needless to say that "opening" your production class just so you can mock it during tests is.... reckless?
Meet
all-open plugin
Fear no more! As of Kotlin 1.0.6, you can use
all-open compiler plugin which makes classes annotated with a specific annotation and their members open without the explicit
open keyword.
Let's look a bit closer at how to use this plugin.
First, let's declare our custom annotation:
app/src/main/kotlin/com/trickyandroid/myapp/TestOpen.kt
package com.trickyandroid.myapp @Retention(AnnotationRetention.SOURCE) @Target(AnnotationTarget.CLASS) annotation class TestOpen
Now, let's apply and configure
all-open plugin:
build.gradle:
buildscript { ext.kotlin_version = '1.0.6' repositories { jcenter() } dependencies { ... classpath "org.jetbrains.kotlin:kotlin-allopen:$kotlin_version" } }
app/build.gradle:
apply plugin: 'kotlin-allopen' allOpen { annotation("com.trickyandroid.myapp.TestOpen") }
Now, let's annotate util class with our annotation:
app/src/main/kotlin/Utils.kt:
import com.trickyandroid.myapp.TestOpen @TestOpen class Utils { fun getHelloWorld() = "Hello World!" }
And voilà! Mockito is happy again and we kept our utility class implementation-private for outer world.
P.S. just don't forget to clean your project after applying all-open plugin.
Update
As pointed out by early readers, there are some alternatives available to this approach.
Mockito opt-in final class mocking
Thanks to @nhaarman for pointing this out!
Apparently, as of Mockito 2.1.0, Mockito supports an experimental feature which allows you to mock final classes.
The only thing you need to do is to add following file into your project:
src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker
mock-maker-inline
This approach seems less invasive since you no longer need to modify your production code to meet test requirements.
However, as mentioned in Mockito documentation, this feature is incubating and has different limitations. I didn't have a chance to thoroughly test this approach, but since it uses Java agent approach (similar approach as implemented in Jacoco coverage reports), I would assume you might encounter some problems with bytecode manipulation plugins (like Jacoco for instance).
Please let me know in comments if you had any problems with this feature.
Use interfaces
Another tip from @artem_zin is to use interfaces, so your implementation implements such interface and during test you need to mock interface instead of concrete final class.
Even though I personally prefer and use this pattern in my code, unfortunately it only applies to the code you own. You still need to solve final class problem when you try to mock third party library class.
But as I think about this more - when it comes to third party classes - you cannot annotate those anyway with your custom annotation.
Classes become
open not only during tests
As pointed out by various people, all-open plugin actually opens up classes not only for test configuration. It opens them up everywhere, so please beware of that!
|
https://trickyandroid.com/using-mockito-with-kotlin/
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
In this section, I’ll show you how to turn your source file, containing various types, into a file that can be deployed. Let’s start by examining the following simple application:
public class App { static public void Main(System.String[] args) { System.Console.WriteLine("Hi"); } }
This application defines a type, called
App. This type has a single static, public method called
Main. Inside
Main is a reference to another type called
System.Console.
System.Console is a type implemented by Microsoft, and the IL code that implements this type’s methods are in the MSCorLib.dll file. So, our application defines a type and also uses another company’s type.
To build this sample application, put the preceding code into a source ...
No credit card required
|
https://www.safaribooksonline.com/library/view/applied-microsoft-net/9780735642126/ch02s02.html
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
The LayGO Dynamic Loading API consists of 3 functions and 1 global variable:
These are declared in laygodll.h and exported by laygodll shared objects:
Relying on the LayGO Dynamic Loading API instead of implicit loading is largely transparent to the application. The application can still be written using the normal function call syntax for LayGO API functions. The application only needs to do 3 additional things:
#includelaygodll.h in place of laygo.h and laygomsg.h.
A successful call to lgo_LoadLaygoDll() sets the value of the global variable lgo_Fn to the address of a function table containing a correctly typed function pointer for each LayGO API function in the loaded DLL. laygodll.h contains a macro definition for each LayGO API function which transforms function call syntax into a reference to a member in the function table lgo_Fn. For instance, the macro definition
#define lgo_Open lgo_Fn->Open
transforms
cid = lgo_Open(serviceName);
into
cid = lgo_Fn->Open(serviceName);
The effect is the same, but with 1 layer of indirection. (Note: lgo_GetFunctionTable() simply returns the value of lgo_Fn and need not be called to use the API.)
The following sample code illustrates using this API to load the RPC-enabled version of the LayGO API:
#include <stdio.h> #include "laygodll.h" if (lgo_LoadLaygoDll(lgo_DLL_RPC) < 0) { printf("Failure loading LayGO DLL.\n"); } else { if (lgo_ConnectServerLocal() < 0) { printf("Failure connecting to RPC server.\n"); } /* Process... */ if (lgo_DisconnectServer() < 0) { printf("Failure disconnecting RPC server.\n"); } lgo_UnloadLaygoDll(); }
It's as easy as that! And building clients is just as easy. The LayGO for Java package uses dynamic loading to support the standard, Hardware Interface and RPC-enabled versions of the API in a single JNI implementation.
|
http://www.advancedrelay.com/laygodoc/laygodyn/dynapi.htm
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
This is an interactive blog post, you can modify and run the code directly from your browser. To see any of the output you have to run each of the cells.
In particle physics applications (like the flavour of physics competition on kaggle) we often optimise the decision threshold of the classifier used to select events.
Recently we discussed (once again) the question of how to optimise the decision threshold in an unbiased way. So I decided to build a small toy model to illustrate some points and make the discussion more concrete.
What happens if you optimise this parameter via cross-validation and use the classifier performance estimated on each held-out subset as an estimate for the true performance?
If you studied up on ML, then you know the answer: it will most likely be a optimistic estimate, not an unbiased one.
Below some examples of optimising hyper-parameters on a dataset where the true performance is 0.5, aka there is no way to tell one class from the other. This is convenient because by knowing the true performance, we can evaluate whether or not our estimate is biased.
After optimising some standard hyper-parameters we will build two meta-estimators
that help with finding the best decision threshold via the normal
GridSearchCV
interface.
To sweeten the deal, here a gif of Benedict Cumberbatch pretending to be unbiased:
%config InlineBackend.figure_format='retina' %matplotlib inline
import numpy as np import scipy as sp import matplotlib.pyplot as plt from sklearn.base import BaseEstimator, TransformerMixin, ClassifierMixin, MetaEstimatorMixin from sklearn.ensemble import RandomForestClassifier from sklearn.cross_validation import train_test_split from sklearn.grid_search import GridSearchCV from sklearn.pipeline import make_pipeline from sklearn.tree import DecisionTreeClassifier from sklearn.utils import check_random_state
def data(N=1000, n_features=100, random_state=None): rng = check_random_state(random_state) gaussian_features = n_features//2 return (np.hstack([rng.normal(size=(N, gaussian_features)), rng.uniform(size=(N, n_features-gaussian_features))]), np.array(rng.uniform(size=N)>0.5, dtype=int))
X, y = data(200, random_state=1) # set aside data for final (unbiased)performance evaluation X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1)
param_grid = {'max_depth': [1, 2, 5, 10, 20, 30, 40, 50], 'max_features': [4, 8, 16, 32, 64, 80, 100]} rf = RandomForestClassifier(random_state=94) grid = GridSearchCV(rf, param_grid=param_grid, n_jobs=6, verbose=0)
_ = grid.fit(X_train, y_train)
The best parameters found and their score:
print("Best score: %.4f"%grid.best_score_) print("Best params:", grid.best_params_) print("Score on a totally fresh dataset:", grid.score(X_test, y_test))
The best accuracy we found is around 0.62 with
max_depth=1 and
max_features=8.
As we created the dataset with out any informative features we know that the true score
of any classifier is 0.5. Therefore this is either a fluctuation (because we don't measure
the score precisely enough) or the score from
GridSearchCV is biased.
You can also see that using a fresh, never seen before sample gives us an estimated accuracy of 0.56.
Bias or no bias?¶
To test this whether the accuracy obtained from
GridSearchCV is biased or just a
fluke let's repeatedly grid-search for the best parameters and look
at the average score. For this we generate a brand new dataset each time. The joys
of having an infinite stream of data!
def grid_search(n, param_grid, clf=None): X, y = data(120, random_state=0+n) if clf is None: clf = RandomForestClassifier(random_state=94+n) grid = GridSearchCV(clf, param_grid=param_grid, n_jobs=6, verbose=0) grid.fit(X, y) return grid scores = [grid_search(n, param_grid).best_score_ for n in range(40)]
plt.hist(scores, range=(0.45,0.65), bins=15) plt.xlabel("Best score per grid search") plt.ylabel("Count") print("Average score: %.4f+-%.4f" %(np.mean(scores), sp.stats.sem(scores)))
As you can see all of the best scores we find are above 0.5 and the average score is close to 0.58, with a small uncertainty.
Conclusion: the best score obtained during grid search is not an unbiased estimate of the true performance. Instead it is an optimistic estimate.
Threshold optimisation¶
Next, let's see what happens if we use a different hyper-parameter: the threshold applied to decide which class a sample falls in during prediction time.
For this to work in the
GridSearchCV framework we construct two meta-estimators.
The first one is a transformer. It transforms the features of a sample into the output of a classifier.
The second one is a very simple classifier, it assigns samples to one of two classes based on a threshold.
Combining them in a pipeline we can then use
GridSearchCV to optimise the threshold
as it if was any other hyper-parameter.
class PredictionTransformer(BaseEstimator, TransformerMixin, MetaEstimatorMixin): def __init__(self, clf): """Replaces all features with `clf.predict_proba(X)`""" self.clf = clf def fit(self, X, y): self.clf.fit(X, y) return self def transform(self, X): return self.clf.predict_proba(X) class ThresholdClassifier(BaseEstimator, ClassifierMixin): def __init__(self, threshold=0.5): """Classify samples based on whether they are above of below `threshold`""" self.threshold = threshold def fit(self, X, y): self.classes_ = np.unique(y) return self def predict(self, X): # the implementation used here breaks ties differently # from the one used in RFs: #return self.classes_.take(np.argmax(X, axis=1), axis=0) return np.where(X[:, 0]>self.threshold, *self.classes_)
With these two wrappers we can use
GridSearchCV to find the 'optimal'
threshold. We use a different parameter grid that only varies the
classifier threshold. You can experiment with optimising all three
hyper-parameters in one go if you want to by uncommenting the
max_depth
and
max_features lines.
pipe = make_pipeline(PredictionTransformer(RandomForestClassifier()), ThresholdClassifier()) pipe_param_grid = {#'predictiontransformer__clf__max_depth': [1, 2, 5, 10, 20, 30, 40, 50], #'predictiontransformer__clf__max_features': [8, 16, 32, 64, 80, 100], 'thresholdclassifier__threshold': np.linspace(0, 1, num=100)} grids = [grid_search(n, clf=pipe, param_grid=pipe_param_grid) for n in range(10)] scores = [g.best_score_ for g in grids] print("Average score: %.4f+-%.4f" %(np.mean(scores), sp.stats.sem(scores)))
This post started life as a jupyter notebook, download it or view it online.
|
http://betatim.github.io/posts/unbiased-performance/
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
lgo_WriteSpecial()
Write specially tagged data on a connection.
#include "laygo.h" LResult lgo_WriteSpecial ( LCid cid, LDataBuffer buffer, LBufferSize count, LDataFlags flags );
lgo_WriteSpecial() writes specially tagged data to the subsystem for transmission to the remote system. A successful return by lgo_WriteSpecial() means that the data has been accepted for transmission on the connection. It does not mean that the data has been successfully transmitted or received by the remote system. An end-to-end protocol should be used by the application to insure successful transmission to the remote system.
The minimum number of bytes which can be written by one call to lgo_WriteSpecial() is 1. The maximum number of bytes which can be written by one call to lgo_WriteSpecial() depends on the configuration of the underlying protocols and the system buffer pool.
The meanings of the data flags are defined by the protocol.
If successful, lgo_WriteSpecial() returns a non-negative value representing the number of bytes written. Otherwise, it returns a negative value indicating the reason it failed. Possible unsuccessful return values are:
if ((bytes = lgo_WriteSpecial(cid, buffer, count, flags)) < 0) { /* process error */ }
|
http://www.advancedrelay.com/laygodoc/laygoapi/dtwrites.htm
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
Sherpa is a modeling and fitting application for Python. It contains a powerful language for combining simple models into complex expressions that can be fit to the data using a variety of statistics and optimization methods. It is easily extensible to include user models, statistics and optimization methods.
For detailed documentation see:
The binary installation of Sherpa 4.7b1 was released on September 26, 2014. It has been tested on Linux 32, Linux 64 and Mac OSX (10.8 and 10.9).
The binary installer takes care of unpacking a full binary installation and it contains Sherpa’s dependencies (including Python). It is suitable for users who do not typically have Python environment setup or who want Sherpa to be installed in a separate Python environment. See Section 2. Anaconda Python for Anaconda installation.
We provide a binary self-extracting installer for the supported platforms:
To run the installer, just type:
$ bash sherpa_<version>_<platform>_installer.sh
The installer will ask where to install Sherpa and its dependencies.
It will also provide you with information on how to add Sherpa to your PATH:
Once you enable the Sherpa environment you can check that the installation works with the command:
$ sherpa_test
The standard warnings will be printed if the configuration file is set for CIAO or there is no matplotlib or pyFITS installed:
Failed importing sherpa.astro.io: No module named pycrates WARNING: failed to import sherpa.plot.chips_backend; plotting routines will not be available WARNING: failed to import sherpa.astro.io; FITS I/O routines will not be available WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available.
If there are any issues please contact sherpadev at head.cfa.harvard.edu.
Sherpa binaries can be seamlessly installed into Anaconda Python <>. You need to add the Chandra X-Ray Center’s channel to your configuration, and then install Sherpa:
$ conda config --add channels $ conda install sherpa
To test that your installation works type:
$ sherpa_test
To update Sherpa:
$ conda update sherpa
Sherpa comes with a configuration file sherpa.rc which is located in the $PYTHON/lib/site-packages/sherpa/. This file will be used if there is no ~/.sherpa.rc present. You need to copy the file to the home directory as ~/.sherpa.rc and update the verbosity to avoid the issues with standard python tracebacks (See: Known Issues). Be sure to indicate the IO and Plotting back-ends as pyfits and pylab depending on configuration.
matplotlib comes with a configuration file matplotlibrc. For smooth behavior with Sherpa, be sure to indicate interactive=True in ~/.matplotlib/matplotlibrc.
You can import Sherpa into your ipython session:
(conda)unix: ipython --pylab Python 2.7.8 |Continuum Analytics, Inc.| (default, Aug 21 2014, 18:22:21) Type "copyright", "credits" or "license" for more information. IPython 2.2.0 -- An enhanced Interactive Python. Anaconda is brought to you by Continuum Analytics. Please check out: and ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object', use 'object??' for extra details. Using matplotlib backend: Qt4Agg In [1]: from sherpa.astro.ui import * WARNING: imaging routines will not be available, failed to import sherpa.image.ds9_backend due to 'RuntimeErr: DS9Win unusable: Could not find ds9 on your PATH' WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available
The standard warnings are issued if you do not have ds9 or XSPEC models in your path. The image with ds9 and use of the XSPEC models()
and an estimate of the errors
Data I/O support and plotting can be supplemented using PyFITS and “matplotlib”. Imaging requires ds9/XPA.
The normal Python tracebacks are broken when sherpa.ui is imported. The screen output shows:
In [1]: 1/0 --------------------------------------------------------------------------- ZeroDivisionError Traceback (most recent call last) <ipython-input-1-05c9758a9c21> in <module>() ----> 1 1/0 ZeroDivisionError: integer division or modulo by zero In [2]: from sherpa import ui WARNING: imaging routines will not be available, failed to import sherpa.image.ds9_backend due to 'RuntimeErr: DS9Win unusable: Could not find ds9 on your PATH' In [3]: 1/0 ERROR: Internal Python error in the inspect module. Below is the traceback from this internal error. Traceback (most recent call last): AssertionError Unfortunately, your original traceback can not be constructed.
The traceback can be recovered by modifying the verbosity in .sherpa.rc file:
[verbosity] # Sherpa Chatter level # a non-zero value will # display full error traceback level : 2
You can also overwrite .sherpa.rc settings with:
import sys sys.tracebacklimit = 100
|
http://cxc.cfa.harvard.edu/contrib/sherpa47b/
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
Debugging and diagnosing cache misses
To make the most of task output caching, it is important that any necessary inputs to your tasks are specified correctly, while at the same time avoiding unneeded inputs. Failing to specify an input that affects the task’s outputs can result in incorrect builds, while needlessly specifying inputs that do not affect the task’s output can cause cache misses.
This chapter is about finding out why a cache miss happened. If you have a cache hit which you didn’t expect we suggest to declare whatever change you expected to trigger the cache miss as an input to the task.
Finding problems with task output caching
Below we describe a step-by-step process that should help shake out any problems with caching in your build.
Ensure incremental build works
First, make sure your build does the right thing without the cache. Run a build twice without enabling the Gradle build cache. The expected outcome is that all actionable tasks that produce file outputs are up-to-date. You should see something like this on the command-line:
$ ./gradlew clean --quiet (1) $ ./gradlew assemble (2) BUILD SUCCESSFUL 4 actionable tasks: 4 executed $ ./gradlew assemble (3) BUILD SUCCESSFUL 4 actionable tasks: 4 up-to-date
Use the methods as described below to diagnose and fix tasks that should be up-to-date but aren’t. If you find a task which is out of date, but no cacheable tasks depends on its outcome, then you don’t have to do anything about it. The goal is to achieve stable task inputs for cacheable tasks.
In-place caching with the local cache
When you are happy with the up-to-date performance then you can repeat the experiment above, but this time with a clean build, and the build cache turned on. The goal with clean builds and the build cache turned on is to retrieve all cacheable tasks from the cache.
This would look something like this on the command-line:
$ rm -rf ~/.gradle/caches/build-cache-1 (1) $ ./gradlew clean --quiet (2) $ ./gradlew assemble --build-cache (3) BUILD SUCCESSFUL 4 actionable tasks: 4 executed $ ./gradlew clean --quiet (4) $ ./gradlew assemble --build-cache (5) BUILD SUCCESSFUL 4 actionable tasks: 1 executed, 3 from cache
You should see all cacheable tasks loaded from cache, while non-cacheable tasks should be executed.
Testing cache relocatability
Once everything loads properly while building the same checkout with the local cache enabled, it’s time to see if there are any relocation problems. A task is considered relocatable if its output can be reused when the task is executed in a different location. (More on this in path sensitivity and relocatability.)
To discover these problems, first check out the same commit of your project in two different directories on your machine.
For the following example let’s assume we have a checkout in
$ rm -rf ~/.gradle/caches/build-cache-1 (1) $ cd ~/checkout-1 (2) $ ./gradlew clean --quiet (3) $ ./gradlew assemble --build-cache (4) BUILD SUCCESSFUL 4 actionable tasks: 4 executed $ cd ~/checkout-2 (5) $ ./gradlew clean --quiet (6) $ ./gradlew clean assemble --build-cache (7) BUILD SUCCESSFUL 4 actionable tasks: 1 executed, 3 from cache
You should see the exact same results as you saw with the previous in place caching test step.
Cross-platform tests
If your build passes the relocation test, it is in good shape already. If your build requires support for multiple platforms, it is best to see if the required tasks get reused between platforms, too. A typical example of cross-platform builds is when CI runs on Linux VMs, while developers use macOS or Windows, or a different variety or version of Linux.
To test cross-platform cache reuse, set up a
remote cache (see share results between CI builds) and populate it from one platform and consume it from the other.
Incremental cache usage
After these experiments with fully cached builds, you can go on and try to make typical changes to your project and see if enough tasks are still cached. If the results are not satisfactory, you can think about restructuring your project to reduce dependencies between different tasks.
Evaluating cache performance over time
Consider recording execution times of your builds, generating graphs, and analyzing the results. Keep an eye out for certain patterns, like a build recompiling everything even though you expected compilation to be cached.
You can also make changes to your code base manually or automatically and check that the expected set of tasks is cached.
If you have tasks that are re-executing instead of loading their outputs from the cache, then it may point to a problem in your build. Techniques for debugging a cache miss are explained in the following section.
Helpful data for diagnosing a cache miss
A cache miss happens when Gradle calculates a build cache key for a task which is different from any existing build cache key in the cache. Only comparing the build cache key on its own does not give much information, so we need to look at some finer grained data to be able to diagnose the cache miss. A list of all inputs to the computed build cache key can be found in the section on cacheable tasks.
From most coarse grained to most fine grained, the items we will use to compare two tasks are:
Build cache keys
Task and Task action implementations
classloader hash
class name
Task output property names
Individual task property input hashes
Hashes of files which are part of task input properties
If you want information about the build cache key and individual input property hashes, use
-Dorg.gradle.caching.debug=true:
$ ./gradlew :compileJava --build-cache -Dorg.gradle.caching.debug=true . . . Appending implementation to build cache key: org.gradle.api.tasks.compile.JavaCompile_Decorated@470c67ec713775576db4e818e7a4c75d Appending additional implementation to build cache key: org.gradle.api.tasks.compile.JavaCompile_Decorated@470c67ec713775576db4e818e7a4c75d Appending input value fingerprint for 'options' to build cache key: e4eaee32137a6a587e57eea660d7f85d Appending input value fingerprint for 'options.compilerArgs' to build cache key: 8222d82255460164427051d7537fa305 Appending input value fingerprint for 'options.debug' to build cache key: f6d7ed39fe24031e22d54f3fe65b901c Appending input value fingerprint for 'options.debugOptions' to build cache key: a91a8430ae47b11a17f6318b53f5ce9c Appending input value fingerprint for 'options.debugOptions.debugLevel' to build cache key: f6bd6b3389b872033d462029172c8612 Appending input value fingerprint for 'options.encoding' to build cache key: f6bd6b3389b872033d462029172c8612 . . . Appending input file fingerprints for 'options.sourcepath' to build cache key: 5fd1e7396e8de4cb5c23dc6aadd7787a - RELATIVE_PATH{EMPTY} Appending input file fingerprints for 'stableSources' to build cache key: f305ada95aeae858c233f46fc1ec4d01 - RELATIVE_PATH{.../src/main/java=IGNORED / DIR, .../src/main/java/Hello.java='Hello.java' / 9c306ba203d618dfbe1be83354ec211d} Appending output property name to build cache key: destinationDir Appending output property name to build cache key: options.annotationProcessorGeneratedSourcesDirectory Build cache key for task ':compileJava' is 8ebf682168823f662b9be34d27afdf77
The log shows e.g. which source files constitute the
stableSources for the
compileJava task.
To find the actual differences between two builds you need to resort to matching up and comparing those hashes yourself.
Luckily, you do not have to capture this data yourself - the build scan plugin already takes care of this for you. Gradle Enterprise has the necessary data to diagnose the cache miss when comparing two build scans, and will show you the properties that caused a different cache key to be generated.
There are three ways to compare builds in Gradle Enterprise.
Use the scan list to find the scans to compare, selecting them by clicking the respectiveor icon.
From a scan that you wish to compare with another, click theicon at the top of the scan, then select the second scan by clicking the icon from the list of scans.
Click theicon in the header of a build scan to show previous and subsequent build scans of the same workspace, then click the icon next to the build you want to compare.
It is also possible that task output caching for a cacheable task was disabled. When this happens the reason why caching was disabled for the task is reported on the info log level and in the build scan:
Diagnosing the reasons for a cache miss
Having the data from the last section at hand, you should be able to diagnose why the outputs of a certain task were not found in the build cache. Since you were expecting more tasks to be cached, you should be able to pinpoint a build which would have produced the artifact under question.
Before diving into how to find out why one task has not been loaded from the cache we should first look into which task caused the cache misses. There is a cascade effect which causes dependent tasks to be executed if one of the tasks earlier in the build is not loaded from the cache and has different outputs. Therefore, you should locate the first cacheable task which was executed and continue investigating from there. This can be done from the timeline view in a build scan or from the task input comparison directly:
At first, you should check if the implementation of the task changed. This would mean checking the class names and classloader hashes
for the task class itself and for each of its actions. If there is a change, this means that the build script,
buildSrc or the Gradle version has changed.
If the implementation is the same, then you need to start comparing inputs between the two builds. There should be at least one different input hash. If it is a simple value property, then the configuration of the task changed. This can happen for example by
changing the build script,
conditionally configuring the task differently for CI or the developer builds,
depending on a system property or an environment variable for the task configuration,
or having an absolute path which is part of the input.
If the changed property is a file property, then the reasons can be the same as for the change of a value property. Most probably though a file on the filesystem changed in a way that Gradle detects a difference for this input. The most common case will be that the source code was changed by a check in. It is also possible that a file generated by a task changed, e.g. since it includes a timestamp. As described in Java version tracking, the Java version can also influence the output of the Java compiler. If you did not expect the file to be an input to the task, then it is possible that you should alter the configuration of the task to not include it. For example, having your integration test configuration including all the unit test classes as a dependency has the effect that all integration tests are re-executed when a unit test changes. Another option is that the task tracks absolute paths instead of relative paths and the location of the project directory changed on disk.
Example
We will walk you through the process of diagnosing a cache miss.
Let’s say we have build
A and build
B and we expected all the test tasks for a sub-project
sub1 to be cached in build
B since only a unit test for another sub-project
sub2 changed.
Instead, all the tests for the sub-project have been executed.
Since we have the cascading effect when we have cache misses, we need to find the task which caused the caching chain to fail.
This can easily be done by filtering for all cacheable tasks which have been executed and then select the first one.
In our case, it turns out that the tests for the sub-project
internal-testing were executed even though there was no code change to this project.
We start the input property comparison in Gradle Enterprise and see that the property
classpath changed. This means that some file on the runtime classpath actually did change.
Looking deeper into this, we actually see that the inputs for the task
processResources changed in that project, too.
Finally, we find this in our build file:
def currentVersionInfo = tasks.register('currentVersionInfo', CurrentVersionInfo) { version = project.version versionInfoFile = layout.buildDirectory.file('generated-resources/currentVersion.properties') } sourceSets.main.output.dir(currentVersionInfo.map { it.versionInfoFile.get().asFile.parentFile }) abstract class CurrentVersionInfo extends DefaultTask { @Input abstract Property<String> getVersion() @OutputFile abstract RegularFileProperty getVersionInfoFile() @TaskAction void writeVersionInfo() { def properties = new Properties() properties.setProperty('latestMilestone', version.get()) versionInfoFile.get().asFile.withOutputStream { out -> properties.store(out, null) } } }
val currentVersionInfo = tasks.register<CurrentVersionInfo>("currentVersionInfo") { version.set(project.version as String) versionInfoFile.set(layout.buildDirectory.file("generated-resources/currentVersion.properties")) } sourceSets.main.get().output.dir(currentVersionInfo.map { it.versionInfoFile.get().asFile.parentFile }) abstract class CurrentVersionInfo : DefaultTask() { @get:Input abstract val version: Property<String> @get:OutputFile abstract val versionInfoFile: RegularFileProperty @TaskAction fun writeVersionInfo() { val properties = Properties() properties.setProperty("latestMilestone", version.get()) versionInfoFile.get().asFile.outputStream().use { out -> properties.store(out, null) } } }
Since properties files stored by Java’s
Properties.store method contain a timestamp, this will cause a change to the runtime classpath every time the build runs.
In order to solve this problem see non-repeatable task outputs or use input normalization.
|
https://docs.gradle.org/current/userguide/build_cache_debugging.html
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Jun 06, 2006 01:10 PM|kalaka2|LINK
Hello guys,
i want to create a custom culture using an existing .net framework culture. for this purpose i want to use the CultureAndRegionInfoBuilder class in namespace globalization
it don't work ?
it seem's like this class isn't define at all ?
please help and thank's
Contributor
6294 Points
ASPInsiders
Jun 06, 2006 03:31 PM|StrongTypes|LINK
Do you have a reference to System.Globalization? Also, here's the MSDN documentation on the CultureAndRegionalInfoBuilder class constructor:
Ryan
Jun 07, 2006 02:18 PM|kalaka2|LINK
hello Ryan,
I still have a reference to system.globalization but on my computer i don't have cultureandregioninfobuilder class under this namespace
i am using visual web developer express edition
thank's
Contributor
6294 Points
ASPInsiders
4 replies
Last post Jun 12, 2006 03:23 PM by kalaka2
|
https://forums.asp.net/t/997077.aspx?Class+CultureAndRegionInfoBuilder+don+t+exist+in+VWD+express
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Since.
Starting with PVS-Studio
This post is sponsored by PVS-Studio but all opinions, code and the article idea come from me.
I’m working on a project which is a visualisation of various sorting algorithms, written in Win32Api, C++, OpenGL. I always put a nice GIF that presents how it works:
You can read my previous articles that describe the project in detail:
After doing some basic refactoring, using some modern features and even checking code with C++ Core Guideline Checkers (available in Visual Studio) I also run a professional static analysis tool: PVS Studio - I used the latest version: PVS-Studio 7.09 (August 27, 2020)
Running the analyser is very simple. Inside Visual Studio 2019 you have to select:
Extensions->PVS-Studio->Check->Solution
This action starts the PVS process which can last a dozen of seconds (for small projects) or a couple of minutes… or longer - depending on your project size.
After the check completes, you can see the following window with all of the messages:
This shows all issues that the tool has found for the solution (You can also check a single project or a single compilation unit).
As you can see, the numbers are not large, because my project is relatively small (5kloc), yet it helped me with improving the code in several places.
What I like about PVS-Studio is its super handy UI: it’s just a single window with lots of easy to use shortcuts (for example filtering between severity level). It’s easy to filter through files or even skip some errors entirely.
For example, here’s a screenshot where I could easily disable warnings found inside
gtest.h which is a part of Google testing framework:
I won’t be able to fix those issues (as it’s third party code), so it’s best to make them silent.
Depending on your project size, you’ll probably need some time to adjust the output to your needs. After those adjustments, you’ll be able to focus on the major problems and limit the number of false positives or non-essential issues.
Here’s some more documentation if you want to start with your project.
- Getting acquainted with the PVS-Studio static code analyser on Windows
- How to run PVS-Studio on Linux and macOS
What’s more, you can also try PVS-Studio for free through Compiler Explorer! Have a look at this website how to start: Online Examples (C, C++).
Ok, but let’s see what the tool reported for my project.
Checking my project
In total the analyser found 137 warnings and 8 criticals. We won’t cover them all, but for the purpose of this text, I grouped them and focused on the essential aspects.
Typos and copy-paste bugs
The first one
friend bool operator== (const VECTOR3D& a, const VECTOR3D& b) { return (a.x == b.y && a.y == b.y && a.z == b.z); }
Do you see the error?
.
.
.
Maybe it’s quite easy when there’s only one function listed in the code sample, but it’s very easy to skip something when you have a bunch of similar functions:
bool operator== (const VECTOR3D& a, const VECTOR3D& b) { return (a.x == b.y && a.y == b.y && a.z == b.z); } bool operator!= (const VECTOR3D& a, const VECTOR3D& b) { return (a.x != b.y || a.y != b.y || a.z != b.z); } VECTOR3D operator- (const VECTOR3D& a) { return VECTOR3D(-a.x, -a.y, -a.z); } VECTOR3D operator+ (const VECTOR3D& a, const VECTOR3D& b) { return VECTOR3D(a.x+b.x, a.y+b.y, a.z+b.z); } VECTOR3D operator- (const VECTOR3D& a, const VECTOR3D& b) { return VECTOR3D(a.x-b.x, a.y-b.y, a.z-b.z); } VECTOR3D operator* (const VECTOR3D& a, float v) { return VECTOR3D(a.x*v, a.y*v, a.z*v); } VECTOR3D operator* (float v, const VECTOR3D& a) { return VECTOR3D(a.x*v, a.y*v, a.z*v); }
Copy-paste bugs or simple omissions can happen quite quickly… at least in my case :)
PVS -Studio reported the following message:
V1013 Suspicious subexpression a.x == b.y in a sequence of similar comparisons. tg_math.h 182
I guess it would be tough to spot this error, not easily at runtime.
Or another crazy and harmful bug:
for (i = 0; i < 4; i++) for (j = 0; j < 4; j++) buf.M[i][i] = M[i][i]*v;
For matrix multiplication… do you see the issue?
V756 The ‘j’ counter is not used inside a nested loop. Consider inspecting usage of ‘i’ counter. tg_math.cpp 157
Apparently, my code didn’t use that much of matrix transformations as I didn’t notice any issues at runtime, but it would be tricky to pinpoint the problem here.
The tool could detect even the following, however harmless issue (possibly as a result of copy paste):
inline float QuaternionNorm2(const QUATERNION_PTR q) { return ((q->w*q->w + q->x*q->x + q->y*q->y + q->z*q->z)); }
V592 The expression was enclosed by parentheses twice: ((expression)). One pair of parentheses is unnecessary, or misprint is present. tg_math.h 596
Such copy-paste bugs are very well described as the “Last Line Effect” - see The last line effect explained.
Let’s see some other issues:
Fixing a function
Have a look
void DrawCylinder(float r, float h, int nx, int ny, bool spread, bool top, bool bottom) { // some general code... if (top == true) { // draw circle with triangle fan } if (top == true) { // draw circle with triangle fan } }
This is a simple function that draws a cylinder with optional top and bottom sides.
And the errors?
V581 The conditional expressions of the ‘if’ statements situated alongside each other are identical. Check lines: 286, 305. gl_shapes.cpp 305
V751 Parameter ‘bottom’ is not used inside function body. gl_shapes.cpp 246
I haven’t seen this issue as a bug, because in the project, I always pass
true for the
top and the
bottom parameters. But it’s clear that there could be a different case and my code would draw both sides wrongly.
Note: this bug could also be suggested by
C4100 - MSVC warning enabled for Warning Level 4.
PVS-Studio makes it more evident that there’s something wrong with the similar code sections and that way it’s easier to have a look and recall what the real intention of the code was.
Omissions
A quite common bug with enums:
switch (cmMode) { case cmYawPitchRoll: { // .. break; } case cmSpherical: { // ... break; } }
V719 The switch statement does not cover all values of the ‘CameraMode’ enum: cmUVN. gl_camera.cpp 50
Such bugs can often arise when you extend the enum with new values, and you forget to update
switch places where the enum is tested.
Missing Initialisation of Data Members
Another critical bug that might cost you a lot of head-scratching:
V730 [CWE-457] Not all members of a class are initialized inside the constructor. Consider inspecting: m_fYaw, m_fPitch, m_fRoll, m_fHNear, m_fHFar. gl_camera.cpp 16
Fortunately, since C++11 we should use in-class member initialisation (see my separate blog post on that), but those bugs might be relatively often for legacy code.
Optimisation
The analyser can also help to address performance issues. For example:
- Passing by reference:
- V813 Decreased performance. The ‘filename’ argument should probably be rendered as a constant reference. clog.cpp 41
- Often happens when you forget to add
&when writing the type of the input argument.
- A better layout for structures:
- V802 On 64-bit platform, structure size can be reduced from 72 to 64 bytes by rearranging the fields according to their sizes in decreasing order. ctimer.h 14
- List initialisation in constructors:
Test(const string& str) { m_str = str;}is less efficient than initialisaiton with
m_str(str).
64 bit And Casting
Issues with numbers and conversions might be tricky to address, but PVS-Studio can show you many things that might be important to fix. For example:
V220 Suspicious sequence of types castings: memsize -> 32-bit integer -> memsize. The value being cast: ‘m_randomOrder.size()’. calgorithms.cpp 449
For this code:
if (m_i < static_cast<int>(m_randomOrder.size())) // m_i is size_t, I changed it from int previously
Or the following report:
V108 Incorrect index type: numbers[not a memsize-type]. Use memsize type instead. av_data.cpp 41
For:
m_vCurrPos[i] += (numbers[i] - m_vCurrPos[i]) * s_AnimBlendFactor;
Floating point!
Not to mention floating-point errors! Like this one:
V550 [CWE-682] An odd precise comparison: a.x == b.y. It’s probably better to use a comparison with defined precision: fabs(A - B) < Epsilon. tg_math.h 134
For the place when I compare floating-point values using
== rather than
fabs or some other functions that have some “epsilon”.
And even worse scenarios:
for (x = -4.0f; x < 4.0f; x+=1.0f) { for (z = -4.0f; z < 4.0f; z+=1.0f) { // ... } }
The above code generates:
V1034 [CWE-834] Do not use real type variables as loop counters. main.cpp 398
The code worked in my case, and this was used to draw some tiles on the floor… but it’s not the best approach and definitely not scalable.
Giving more check with MISRA
While I wrote my project just for fun and without any “critical safety” in mind, it’s also noteworthy that PVS-Studio supports strict industry standards and guidelines that can strengthen your code.
To make it short, you can enable MISRA Coding standard checks and see how it works against your project. In my caste I got…
608 errors!
From what I see from the output it’s mostly about using unions (they are not safe in most of the cases). Some other bugs were related to literal suffix V2517. MISRA. Literal suffixes should not contain lowercase characters. And errors like:
V2533 [MISRA C++ 5-2-4]C-style and functional notation casts should not be performed.
tg_math.h 325
V2564 [MISRA C++ 5-0-5]There should be no ‘integral to floating’ implicit cast. Consider inspecting the left operand ‘1’ of the operator ‘-‘.
gl_text.cpp 59
- Style guides
A lot of them were duplicates, so I need some time to sort them out.
Anyway if you like to read more about MISRA here’s a good starting point: What Is MISRA and how to Cook It
Summary
Having a reliable static analysis tool helped me to identify a bunch of issues in my small project. I’m especially impressed with finding copy&paste kind of bugs which are easy to skip but can hurt a lot at runtime.
Here’s a wrap up of the strong points for the PVS-Studio:
- Super easy to install and run from Visual Studio.
- Nice and intuitive UI.
- Lots of filtering options, especially useful for large projects with potentially thousands of messages.
- Easy way to double click on the warning code and see a website with the information about a given rule.
- Great documentation, articles, community and PVS-Studio Release History.
Some things to improve:
- It’s tough to pick anything! It simply works and helps in your daily coding routine
- Maybe one thing, that you have to spend some time to tune the output to your project needs, some issues might not be essential and not relevant to your code.
The natural way to try the analyser on your code is to get the trial version. With the hashtag
#bfilipek in the request form, the license key will be generated not for a week, but for a month.
|
https://www.bfilipek.com/2020/09/pvs-studio-checking.html
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Take a look at the program below. We have a
void function named
favorite_animal() and
main() with a few statements inside.
#include <iostream> std::string sea_animal = "manatee"; void favorite_animal(std::string best_animal) { std::string animal = best_animal; std::cout << "Best animal: " << animal << "\n"; } int main() { favorite_animal("jaguar"); std::cout << sea_animal << "\n"; std::cout << animal << "\n"; }
When this program is compiled and executed,
sea_animal will print, but
animal won’t. Why do you think that’s the case?
Scope is the region of code that can access or view a given element.
- Variables defined in global scope are accessible throughout the program.
- Variables defined in a function have local scope and are only accessible inside the function.
sea_animal was defined in global scope at the top of the program, outside of
main(). So
sea_animal is defined everywhere in the program.
Because
animal was only defined within
favorite_animal() and not returned, it is not accessible to the rest of the program.
Instructions
If you run the code, you can print
secret_knowledge right in
main() without entering the passcode. Yikes!
Only people who enter the correct passcode should have access to that knowledge.
Move
secret_knowledge into local scope so that it only prints from the function call when the correct code is entered.
Nice work! Now it’s time to get rid of that error.
Delete the line in
main() that prints
secret_knowledge directly without doing any math and keep the
enter_code(0310);.
|
https://production.codecademy.com/courses/learn-c-plus-plus/lessons/cpp-functions-scope-flexiblity/exercises/cpp-functions-scope
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
The Android for Cars App Library allows you to bring your navigation, parking, and charging.
- Review the Android for Cars App Library Terms of Use.
- Review the Release Notes. take is an abstract
Service
class provided by the library that your app must implement and export in order
to be discovered and managed by the host.
CarAppService instances have a
lifecycle and act as the entry point of the app by
implementing the abstract
CarAppService.onCreateScreen method that returns the initial
Screen to display when the app is first launched.
Install the library
The Android for Cars App library is available through Google's Maven repository. To add the library to your app, do the following:
- In the top-level
build.gradlefile, make sure you include Google's Maven repository:
allprojects { repositories { google() } }
- In your app module's
build.gradlefile, include the Android for Cars App library dependency. For example, to include a dependency on the
1.0.0-beta.1version of the library, use the following:
dependencies { implementation 'com.google.android.libraries.car:car-app:1.0.0-beta.1' }
Configure your app’s manifest files
Before you can create your car app service, parking app in your manifest:
<application> ... <service ... android: <intent-filter> <action android: <category android: </intent-filter> </service> ... <application>
Supported App Categories
In order to be listed in the Play Store for Android Auto, the app needs to belong to one of the supported car app categories. You declare your app’s category by adding one or more of the following supported category values in the intent filter when you declare your car app service:
com.google.android.car.category.NAVIGATION: An app that provides turn-by-turn navigation directions.
com.google.android.car.category.PARKING: An app that provides functionality relevant to finding parking spots.
com.google.android.car.category.CHARGING: An app that provides functionality relevant to finding electric vehicle charging stations. elements.
Set your app's targetSdkVersion
Android Auto requires your app to target Android 6.0 (API level 23) or higher.
To specify this value in your project, set the
targetSdk
You create a car app service by extending the
CarAppService class and
implementing
CarAppService.onCreateScreen method, which returns the
Screen
instance to use the first time the app is started:
public final class HelloWorldService extends CarAppService { @Override @NonNull public Screen onCreateScreen(@NonNull Intent intent) { return new HelloWorldScreen(); } }
To handle scenarios where your car app needs to start from a screen that is not
the home or landing screen of your app (such as handling deep links), you can
pre-seed a back stack of screens via.getTemplate method, which returns the
Template instance representing the state of the UI to display in the car
screen.
The following snippet shows how to declare a
Screen that uses a
PaneTemplate
template to display a simple “Hello world!” string:
public class HelloWorldScreen extends Screen { @NonNull @Override public Template getTemplate() { Pane pane = Pane.builder() .addRow(Row.builder() .setTitle("Hello world!") .build()) .build(); return PaneTemplate.builder(pane).build(); } }
The CarContext class
The
CarContext class is a
ContextWrapper subclass accessible to your
CarAppServiceresources:
MessageTemplate template = MessageTemplate.builder("Hello world!") .setHeaderAction(Action.BACK) .setActions( Arrays.asList( .setTitle("Next screen") .setOnClickListener(() -> getScreenManager().push(new NextScreen()) .get as
Screen and the type of template it returns
through its
Screen.getTemplate implementation.:
Action action = Action.builder() .setTitle("Navigate") .setOnClickListener(this::onClickNavigate) .build();
The
onClickNavigate method can then start the default navigation car app by
using the
CarContext.startCarApp method:
private void onClickNavigate() { Intent intent = new Intent(CarContext.ACTION_NAVIGATE, Uri.parse("geo:0,0?q=" + address)); getCarContext().startCarApp(intent); }
For more details on how to start apps, including the format of the
ACTION_NAVIGATE intent, see Starting car apps using intents.:
Row row =:
Notification notification = new NotificationCompat.Builder(context, NOTIFICATION_CHANNEL_ID) .setContentTitle(titleOnThePhone) .extend( the
CarToast as shown in this snippet:
CarToast.makeText(getCarContext(), "Hello!", CarToast.LENGTH_SHORT).show();
Start a car app with an intent
You can call the
CarContext.startCarApp method to perform one of the following
actions:
- Open the dialer to make a phone call.
- Start turn-by-turn navigation to a location with the default navigation:
Notification notification = notificationBuilder. … .extend(
CarAppService.onNewIntent method in your app handles this intent by pushing the
parking reservation screen on the stack if not already on top:
@Override public void onNewIntent(@NonNull Intent intent) { screenManager = getCarContext().getCarService(ScreenManager.class);:
If the template quota is exhausted and the app attempts to send a new
template, the host will display an error message to the user..
Build a navigation app
This section details the different features of the library that you can make use of to implement the functionality of your turn-by-turn navigation app.
Declare navigation support in your manifest
Your navigation app needs to declare the
com.google.android.car.category.NAVIGATION car app
category in the intent filter of its
CarAppService:
<application> ... <service ... android: <intent-filter> <action android: <action android: </intent-filter> </service> ... <application>
Support navigation intents
In order to support navigation Intents to your app, including those coming from
the Google Assistant via a voice query, your app needs to handle the
CarContext.ACTION_NAVIGATE
intent in its
CarAppService.onCreateScreen
and
CarAppService
com.google.android.libraries.car.app.NAVIGATION_TEMPLATES permission in its
AndroidManifest.xml:
<uses-permission android:
Drawing the Map
Navigation applications can access a
Surface to draw the map on relevant
templates.
A
SurfaceContainer object can then be accessed by setting a
SurfaceListener
instance to the
AppManager car service:
carContext.getCarService(AppManager.class).setSurfaceListener(surfaceListener);
The
SurfaceListener provides a callback when the
SurfaceContainer is
available along with other callbacks when the
Surface’s properties change.
In order to get access to the surface, your app needs to declare the
com.google.android.libraries
SurfaceListener.onVisibleAreaChanged. Also, in order to minimize the number of changes,
the host will also call the
SurfaceListener.onStableAreaChanged method with the largest
rectangle which will in the
CarContext. Whenever the dark mode
status changes, you will receive a call to
CarAppService the
NavigationManager.navigationEnded.
You should only call
NavigationManager.navigationEnded when the user is finished navigating. For example, if you need to recalculate the route in the middle of a trip, use
Trip.Builder.setIsLoading(true) instead.
Occasionally, the host will need an app to stop navigation and will call
stopNavigation in a
NavigationManagerListener object provided by your app
through
NavigationManager.setListener. The app must then stop issuing
navigation notifications, voice guidance, and trip information.
Trip information
During active navigation, the app should call
NavigationManager.updateTrip.
The information provided in this call will be used in the vehicle’s cluster and
heads-up displays. Not all information may be displayed to the user depending on
the particular vehicle being driven. level:
new NotificationCompat.Builder(context, myNotificationChannelId) ... .setOnlyAlertOnce(true) .setOngoing(true) .setCategory(NotificationCompat.CATEGORY_NAVIGATION) .extend(ManagerListener.onAutoDriveEnabled callback. When this callback is
called, your app should simulate navigation to the chosen destination when the
user begins navigation. Your app can exit this mode whenever
CarAppService.onCarAppFinished is called.
You can test that your implementation of
onAutoDriveEnabled is called by executing the following from a command line:
adb shell dumpsys activity service CAR_APP_SERVICE_NAME AUTO_DRIVE
For example:
adb shell dumpsys activity service com.google.android.libraries.car.app.samples.navigation.car.NavigationCarAppService AUTO_DRIVE
The CarAppService and Screen Lifecycles
The
CarAppService and
Screen classes implement the
LifecycleOwner
interface. As the user interacts with the app, your service’s and
Screen
objects’ lifecycle callbacks will be invoked, as described in the following
diagrams.
The lifecycle of a CarAppService
CarAppServicelifecycle.
For full details see the documentation of
CarAppService.getLifecycle
method.
The lifecycle of a Screen
Screenlifecycle.
For full details see the documentation of
Screen.getLifecycle...
All applications that use the Android for Cars App Library must adhere to the requirements described in the Android for Cars App Library Terms of Use.
|
https://developer.android.com/training/cars/navigation?hl=ja
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Opened 7 years ago
Closed 7 years ago
Last modified 15 months ago
#9206 closed enhancement (worksforme)
patch: authorize using Remote-User: header
Description
for obscure reasons, my tracd-behind-proxypass wasn't able to authorize the usual way.
This patch will make trac trust the 'Remote-User:' header, even if the adaptor (CGI/etc) didn't otherwise set the remote_user field.
This probably should be a configuration option as it would otherwise have security implications but is perfect for our use - apache LDAP-based authentication. I'm aware of LdapPlugin but this seemed simpler and with broader implications (could work for other authentication types, kerberos etc).
Attachments (2)
Change History (19)
Changed 7 years ago by
comment:1 Changed 7 years ago by
Thanks for the patch. Would you mind adding that configuration option? This would be also the place to briefly document why you would need to set up such option.
The test could also be written
if not remote_user and req.get_header('Remote-User'):, I suppose that would be slightly more efficient.
Changed 7 years ago by
better patch with option
comment:2 Changed 7 years ago by
The better patch here automatically logs you in if present (instead of requiring you to click login), and has a configuration option.
comment:3 Changed 7 years ago by
Why not make this an implementation of the IAuthenticator extension point interface?
E.g.
That way we can keep trac free of the change that would likely raise security issues, and you are free to deploy your plugin to all the trac installations that you have.
comment:4 Changed 7 years ago by
Carsten,
That's a great idea. I'll do that and post a link here, thanks.
-Steven
comment:5 Changed 7 years ago by
comment:6 Changed 6 years ago by
i'm trying to do the same thing: basic auth on
<Location> mod_proxy forwarded to tracd. unfortunately tracd doesn't seem to take this auth info no matter which way i try. tried:
- with project/plugins/remote-user-auth.py
from trac.config import BoolOption from trac.web.api import IAuthenticator
- without project/plugins/remote-user-auth.py
- tracd with —basic-auth="*,htpasswd,My Proxy Realm"
- tracd without —basic-auth
- with rewriterule
RewriteEngine On RewriteCond %{LA-U:REMOTE_USER} (.+) RewriteRule . - [E=RU:%1] RequestHeader add X-Forwarded-User %{RU}e
- without rewriterule
what am i missing?
comment:7 Changed 6 years ago by
Solved, written up at TracStandalone@90.
comment:8 Changed 6 years ago by
Very good - it did solve my problem as well.
I have transformed the script into a setuptools Trac plugin that can be packaged and installed just like any other trac-hack. Along with polishing the documentation it is now ready - download at
comment:9 Changed 5 years ago by
so i'm wondering what it would take for this ticket to reach
fixed?
comment:10 Changed 21 months ago by
It seems like the configuration option is unnecessary when the authenticator is packaged as a single-file plugins since the behavior can be controlled by enabling/disabling the plugin.
The security risk seems relatively low provided there are no caveats to the statement documented on the Django site:
This warning doesn’t apply to RemoteUserMiddleware in its default configuration with header = 'REMOTE_USER', since a key that doesn’t start with HTTP_ in request.META can only be set by your WSGI server, not directly from an HTTP request header.
Any objections to applying attachment:9206-remoteuser.patch, perhaps renaming the option to
trust_remote_user_header?
Alternatively, we could put the
IAuthenticator in
/contrib or
/tracopt/web/auth.
comment:11 Changed 15 months ago by
We discussed this a bit more in gmessage:trac-users:5p8DjrgFHvw/atuGxm90DQAJ. I'm unsure how the login button can function in the recipe TracStandalone#Authenticationfortracdbehindaproxy. Perhaps the site described in the recipe is completely unavailable to anonymous users, and authentication occurs in the web server before hitting the Trac application. For the login button to work, it looks like
req.environ['REMOTE_USER'] needs to be set.
A different solution to the issue of running TracStandalone behind a proxy might be to plugin in a different implementation of AuthenticationMiddleware. Making the middleware "pluggable" is probably related to #11585 (or at least the related work that was done in Bloodhound).
comment:12 Changed 15 months ago by
Yes! The site is completely unavailable to anonymous. Sorry for not making that clear.
comment:13 follow-up: 14 Changed 15 months ago by
Thanks for the additional information. It's good to be able to reconcile the behavior of your site with what we are seeing elsewhere.
There are very few places that
req.remote_user is used:
$grep -R -in "req.remote_user" . --exclude-dir=.git --exclude-dir=build --exclude=*.pyc ./trac/web/auth.py:93: if req.remote_user: ./trac/web/auth.py:94: authname = req.remote_user ./trac/web/auth.py:150: in req.remote_user will be converted to lower case before ./trac/web/auth.py:155: if not req.remote_user: ./trac/web/auth.py:164: remote_user = req.remote_user
It's only used in
LoginModule.authenticate and
LoginModule._do_login. Plugins such as AccountManagerPlugin access
req.remote_user as well.
Summarizing what I see in the code,
req.authname can be populated in an
IAuthenticator implementation from the value of the
HTTP_REMOTE_USER header, or in the case of
LoginModule.authenticate, from the value of req.remote_user.
req.remote_user is either set by AuthenticationMiddleware or by the WSGI application.
What I don't understand is why LoginModule needs to access
req.remote_user in addition to
req.authname: tags/trac-1.0.9/trac/web/auth.py@:148,157,161#L131. If
trac.web.main.RequestDispatcher.authenticate is called before processing the request to the
/login path (which I'm not entirely sure is the case), couldn't we just rely on the
IAuthenticator implementation to set
req.authname and use the value of
req.authname in
LoginModule._do_login rather than
req.remote_user?
comment:14 Changed 15 months ago by
Summarizing what I see in the code,
req.authnamecan be populated in an
IAuthenticatorimplementation from the value of the
HTTP_REMOTE_USERheader, or in the case of
LoginModule.authenticate, from the value of req.remote_user.
req.remote_useris either set by AuthenticationMiddleware or by the WSGI application.
If the patch is put to Trac, I think we should be able to use any header for the remote-user's name, not only
Remote-User. The
HTTP_REMOTE_USER variable can be set by remote attacker, easily. The following commands can set the
HTTP_REMOTE_USER variable for Apache.
$ curl -o /dev/null --header 'REMOTE-USER: xx' # works with apache 2.2 and 2.4 $ curl -o /dev/null --header 'REMOTE!USER: xx' # works with apache 2.2 $ curl -o /dev/null --header 'REMOTE.USER: xx' # works with apache 2.2 $ curl -o /dev/null --header 'REMOTE=USER: xx' # works with apache 2.2
RequestHeader unset can remove the header. However, it cannot remove in the cases except use of
-.
RequestHeader unset Remote-User early
IMO, I don't recommend to use HTTP header….
comment:15 follow-up: 16 Changed 15 months ago by
I misunderstood the statement in the Django documentation that I referenced in comment:10. I had hoped we could modify AuthenticationMiddleware to set the
REMOTE_USER from an HTTP header, as in the flask example. If that's not secure, any other ideas on how to set
REMOTE_USER in TracStandalone when the web server acts as a proxy and handles authentication? It seems like a generalized problem that we should try to support in Trac.
comment:16 Changed 15 months ago by
Replying to Ryan J Ollos:
I misunderstood the statement in the Django documentation that I referenced in comment:10. I had hoped we could modify AuthenticationMiddleware to set the
REMOTE_USERfrom an HTTP header, as in the flask example.
That example is insecure, I think. If an HTTP header is set on the reverse proxy, the reverse proxy must remove the header from remote.
Apache 2.4:
RequestHeader unset Remote-User early
Nginx:
server { listen *:3000; server_name localhost; location / { proxy_pass; proxy_redirect /; proxy_set_header Remote-User ""; } location /login { proxy_pass; proxy_redirect /; proxy_set_header Remote-User $remote_user; auth_basic "auth"; auth_basic_user_file "./htpasswd.txt"; } }
However, non-alphanumeric characters are replaced with
_ in Apache 2.2 (e.g.
Remote!User: admin ⇒
HTTP_REMOTE_USER: admin). It's hard to remove such headers only using configurations. Since Apache 2.4, such headers is not converted to
HTTP_ variables.
This changes is introduced in.
If that's not secure, any other ideas on how to set
REMOTE_USERin TracStandalone when the web server acts as a proxy and handles authentication? It seems like a generalized problem that we should try to support in Trac.
Another idea is adding configurable option to use secret header. If the header is unforeseeable, it would be unable to send the header from remote. Untested patch:
trac/web/auth.py
diff --git a/trac/web/auth.py b/trac/web/auth.py index 81be36fac..64a54812a 100644
comment:17 Changed 15 months ago by
Thanks, it sounds like a good idea to use a secret key for servers that don't allow secure web server configuration. We might run into the same issue of
req.remote_user not being set in
LoginModule._do_login. If that's the case, maybe it's possible to implement a similar idea of specifying the secret key as an option of TracStandalone and renaming the key in
AuthenticationMiddleware. I'll do some testing in the coming days.
auth-from-header.patch
|
https://trac.edgewall.org/ticket/9206
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
”
Add back support for Edge
Please add the option to install Microsoft Edge on the server.163 votes
AGPM
Continue support and updates for Advanced Group Policy Management. Our company uses AGPM extensively and would like to see future enhancements and support continued. If there is a replacement Microsoft product, it doesn't seem to be found.136 votes
X.109 votes
Add Snip & Sketch to Win Server
Please add the "Snip & Sketch" feature of Windows 10 to Windows Server 2019. We'd love to have this basic feature for our users working with Citrix on Windows Server.102 votes
Add basic Linux management to Windows Admin Center
Windows Admin Center is fantastic but we all manage a mix of Linux and Windows Servers. Add the ability to add Linux Servers, SSH over the browser, and a few basic items would be phenomenal (process list, disk info, file explorer)69 votes
- 68 votes
Media Streaming Service for Windows Server Essentials 2016
Need windows media streaming service back for server essentials 201645 votes
System cCnter Unified agent, as instalable Windows Server feature..
Unify All the System Center Agents (virtual machine manager, SCOM, etc) in one solution (for example: SC unified Agent) .Integrate the System Center Agent as a Windows Server Feature (like .Net, BITS and the like).39
RDP / Terminalserver: Allow Drag & Drop. Finally.
You know, Drag & Drop is a useful feature.
It's annoyoing, that RDP does not support Drag & Drop from / to the remote session.38 votes
Windows Desktop client list of connections instead of icons
In the Windows Desktop client (Remote Desktop for WVD), there appears to be a limit of 15 characters in a connection name before it gets truncated and ... is added. This causes names to be indistinguishable if they are long, and a primary descriptor is at the end. For example:
CompanyName-Department-Prod-EastUS
CompanyName-Department-Prod-WestUS
CompanyName-Department-Prod-NorEuro
CompanyName-Department-Dev-SouthEast
etc...
The only way to see the available connections is by icon. Add functionality in the Windows Desktop client for list view, and functionality to configure a default.27 votes
Uninstall button greyed out in Software Center for applications that are pushed via a "required" collection
When deploying an application through a collection in "required" state, uninstall button is greyed out. This doesn't happen if the application is installed through a collection in "available" state. We would like to have the uninstall button always accessible, no matter if the application was pushed in a required state or not. Corruption happens all the time and technicians need to be able to uninstall/reinstall all the time. Because uninstall is greyed out, they have to use the control panel instead, which defeats the purpose of using Software Center.25 votes
Add.25 votes
RSAT for Mac
I currently use a Windows 10 VM with Remote Server Administration Tools to manage our servers (about 3 years running now). Would be super sweet if you put together a version that runs natively on Mac. Super sweet.22 votes
Include a MODERN web browser e.g. LTSB Edge browser rather than outdated, unsupported IE11.
It's kind of lame to promote Win2k16 RDSH then not include a modern web browser. Make an LTSB Edge browser, it can't be THAT difficult.21 votes
import list of machines into remote desktop (windows store)
In remote desktop connection manager you were able to import a list of servers to use. I would love to see that feature added to the remote desktop store app.18 votes
ADFS - Force authentication method per relying party on IDP-side
related to ADFS 2016:
it would be great if we can force a specific authentication method at ADFS for a relying party. in general forms and certificate authtentication is possible for our users, but for specific apps only certificate should be possible for security reasons. adding certificate as MFA is not a good solution from the users point of view because they will be forced to enter first their credentials before they have to use a certificate (which is more secure than forms and because of this sufficient).16 votes
Remote Desktop Connection linux client
Microsoft could develop a linux version of the application client "Remote Desktop Connection", like a Android, iOS and macOS versions.13 votes
Bitlocker MFA unlock
This days MFA is getting more and more popular so I was wondering why not add new feature to Windows family so when server boots Bitlocker could be unlocked not by network but by MFA
What do you think would this be even possible ?13 votes
If Windows Server 2016 support "windows hello" feature?
If Windows Server 2016 TP5 support "windows hello" feature? any declare or blog?
I installed windows server 2016 (14393), and then I can't use "windows hello" in settings.
this feature worked in windows 10.13 votes
- Don't see your idea?
|
https://windowsserver.uservoice.com/forums/295047-general-feedback/filters/top?category_id=141015-applications-add-ons
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
A Look Inside: Tracking Bugs By User Pain
Continuing.
Related posts
18 replies on “A Look Inside: Tracking Bugs By User Pain”
I think that one of the problems is that you Guys don’t develop games on your own engine so you can’t actually tell what is worth fixing and what is not. I know that you have game demos and tutorials but this does not equal a published game that went through full development pipeline. I’d like to see Unity Games by Unity3D which would provide feedback for the engine dev team. It works really well for in-house game engines and companies like Unreal and Crytek. Cheers!
I agree.
that’s true! UnityBug3D ! has reduced lots its’ Programmer’s lifetime .
is it possible charge unity3d in law ? for so many bug kill Programmer slowly
Today my boss ask me to learn UE4. it is really piss me off. hey! Unity! when will make your light and shader better than UE4 ,I am count on it !
I’m still skeptical. It’s not the first time Unity tries to tackle bugs and fails miserably, there are years old bugs that have not been addressed. If this changes how Unity prioritizes their development to include more fixing and polish them I’m all for it. But while I like the idea that this re-prioritizes the fixing to be more “user-aware” they have failed far too many times before.
I’ll be believe when I see it.
Now you need to open up the voting. Having a limited number of votes doesn’t accomplish anything useful. Being able to dump multiple votes into an issue is also a needless complication. Let users apply a single vote to any issue. That will give you a much more accurate read on what is important.
I have all my votes dumped into a couple of items that I just hope will get attention some day, and it’s extremely frustrating to run across a new issue and not be able to raise my hand and indicate there’s at least one more project out there impacted by the problem.
Opening up voting won’t dilute the value of votes either. If somebody sits down and votes on every one of the top 1000 items, there is net zero effect. A rising tide lifts all boats, etc.
Votes aren’t a scarce resource and artificial scarcity just reduces the value of voting, probably to the point that most people don’t even bother.
You should include a time parameter to prevent bugs “starvation”
Thanks to this bug, I spent x10 time aligning all the objects in all the levels of my game while thinking that I was using prehistoric software (having to zoom all the way in the scene in order the vertex spanning to work fine, and then zooming all the way out, doing that for every new object I wanted to position in the scene). (If this would’ve worked as expected, I would’ve saved a few days work) Btw… it’s going to be one year and the bug isn’t fixed yet. :(
Would really like to see a editor dev category in prevalence – in place of 4 or at least 3.
Because what we editor devs are able to make with the abilities you give us does in the end affect almost everyone. And there are still some ceavats that are really, really annoying and sometimes double developement time for us editor devs, trying to develop workarounds…
Now that you have [User Pain] you want to evaluate [Effort] as well.
Effort is the amount of man-hours that will be spent on reproducing, fixing, testing the bug (including the risk of introducing secondary bugs)
Total effort is limited (available resources) so you want to select bugs that the sum of User Pain reduced as large as possible.
This will be some sort of [User Pain]/[Effort] metric.
tl;dr if you can fix 10 cosmetic bugs in the same time as one serious bug then it sometimes worth it.
I love how you guys give info, about how you actually are running things. Few other companies are open about it. A lot of companies hide this information behind closed doors + NDA’s.
I really love these kind of posts! :) Please keep making them.
Still waiting for a 3 month old bug that’s been fixed in 5.3 but yet to make it in to 5.4, after 3 e-mails and no replies.
So I’m all for much needed changes to this bug tracking system.
LOL. yes. and is not only a crash. Is not only pain; is n° user * time * pain)square since taking out time from making other. Imagine the ripple effects of 11,000 bugs per year + bugs that for Unity are not consider as bugs but users thinks are bugs. One example is Y up and no way to change it in Unity settings. A bug is a lack an omission. And each developer producing 1000 lacks x Yr. Is it because are running too fast? If you go fast in a light car and then you stop for a coffee, probably that slow clever old truck driver will pass you.
“some stuff + Votes”
So an issue with 5000 votes trumps everything and will be on-top of your list all the time?
I’m sure they would re-address that section in the future if they were getting many issues with 5000 votes. Right now the most voted issue has 241 votes (which is fixed) so it’s probably a good measurement for them right now.
yeah I can see how this could be a problem. Highly-voted issues in Unity Feedback tend to be stuff like “We want a Make MMORPG button”
Such a system of ‘user pain’ will only lead to exaggerated claims of disasters by users looking to jump the queue and not solve the problem at hand.
A better approach would be to improve the performance metrics gathered by the Unity Editor and concisely send them with the error report.
Also, most of the time I have a bug it’s due to running out of memory or a namespace collision or an absence of a needed library or a dll inclusion problem – all those things happen frequently if a use imports a the wrong combination of Unity asset packages into their project – so I’m doubting the validity of many of those filed bug reports or at least the conjectured cause of the bug.
5 = it crashes
4 = it runs like shit
3 = usability
2 = feature
1 = polish
new content:
5 = it crashes (bug report and Unity Internal Discussion)
4 = it runs like shit (bug report and Unity Beta Forum)
3 = usability (bug report and Documentation)
2 = feature (bug report and Video explaining the new feature )
1 = polish (bug report and Video Tutorial or Live Training about it and how to use it!)
0 = clean if (bug report = 0)
I put maya as example: Maya has old boolean bugs since 90´Alias and will be there for ever at this point; That makes a software old. Is better to fixing old bugs before making more. Makes the software stable and ready before the author goes or moves on. Only experts users can find in the stage 1 problems. but including that problems must be fix till there is 0 bug reports. In this way expert users can make much more. I can´t use Maya booleana because I can´t push them up to my limite. The limitation in this particular example is the bug. But usually the software developer how make the new feature thinks is not important to has 0 bugs. Thinks is more important to have new content.
|
https://blogs.unity3d.com/2016/08/17/a-look-inside-tracking-bugs-by-user-pain/
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
The new regulations are designed to protect the privacy of EU citizens, and every business collecting data of EU citizens has to comply with the strict new rules.
Pushwoosh offers a way for automated and seamless compliance with the GDPR.
The GDPR Compliance solution is supported on both native Android SDK v5.7.1+ and iOS SDK v5.6+. Supporting cross-platform frameworks are Cordova, Unity, and React Native.
Pushwoosh solution consists of two parts, each helping with different aspect of the GDPR.
One of the main GDPR requirements is a lawful basis for processing personal data of EU citizens. User's explicit consent to process their data is a lawful basis, and it is exactly what you get with Pushwoosh Consent Form In-App.
To use the In-App, you need to call the
showGDPRConsentUI method in the app. It triggers our system
GDPRConsent Event and shows the Consent Form Rich Media page.
import PushwooshPWGDPRManager.shared().showGDPRConsentUI()
#import <Pushwoosh/PWGDPRManager.h>[[PWGDPRManager sharedManager] showGDPRConsentUI];
GDPRManager.getInstance().showGDPRConsentUI();
A click on the Confirm button would call the binary setCommunicationEnabled method of Pushwoosh SDK. If the boolean value is false, the device gets unsubscribed from push notifications and stops downloading in-app messages. The value true will reverse the effect.
Make the In-App feel like a natural part of your app by customizing the Rich Media page.
The right to erasure, aka the right to be forgotten, gives individuals a right to have their personal data erased. To comply with the right to erasure, use the Deletion Form In-App. To do so, simply call showGDPRDeletionUI method in your app. Similarly to the Consent Form, the method triggers a system GDPRDeletion Event and displays the Deletion Form Rich Media.
import PushwooshPWGDPRManager.shared().showGDPRDeletionUI()
#import <Pushwoosh/PWGDPRManager.h>[[PWGDPRManager sharedManager] showGDPRDeletionUI];
GDPRManager.getInstance().showGDPRDeletionUI();
A click on Confirm would call removeAllDeviceData method, unregistering the device, deleting all Tag values, and blocking all further requests to Pushwoosh.
To implement the GDPR Compliance solution on your website, simply call the following methods of our Web SDK:
Pushwoosh.setCommunicationEnabled() is a binary method to enable or disable communication channels. The value true triggers a postEvent, sending a
registerDevicerequest to Pushwoosh servers. The value false does the opposite. The method returns a promise.
Pushwoosh.isCommunicationEnabled() gets the current status of communication availability. The method returns a promise.
Pushwoosh.removeAllDeviceData() method unregisters the device, deletes all device's Tag values, and blocks all further requests to Pushwoosh. The method returns a promise.
To listen for the
setCommunicationEnabled method, simply use the following event listener:
onChangeCommunicationEnabled tracks the changes of the communication availability. The second argument in the callback is the new availability state. Please see the example below.
Pushwoosh.push(['onChangeCommunicationEnabled', function(api, isEnabled) {console.log('EVENT: onChangeCommunicationEnabled', isEnabled);}])
In app's Events section in your Control Panel, you may see how many times the GDPR events have been triggered.
Moreover, by clicking Add Rule and specifying the device type, you can check the statistics for particular platforms or view opt-ins and opt-outs.
Use the following
device_type values for different platforms:
1 — iOS; 3 — Android; 5 — Windows Phone; 7 — OS X; 8 — Windows 8; 9 — Amazon; 10 — Safari; 11 — Chrome; 12 — Firefox.
You may also see how many times GDPR In-Apps has been triggered in the In-App statistics section of your Pushwoosh app.
|
https://docs.pushwoosh.com/platform-docs/account-and-security/the-gdpr-compliance
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
#include <deal.II/fe/mapping_q_eulerian.h>
This class is an extension of the MappingQ1Eulerian class to higher order \(Q_p\) mappings. It is useful when one wants to calculate shape function information on a domain that is deforming as the computation proceeds.
The constructor of this class takes three arguments: the polynomial degree of the desire Qp mapping, a reference to the vector that defines the mapping from the initial configuration to the current configuration, and a reference to the DoFHandler. The most common case is to use the solution vector for the problem under consideration as the shift vector. The key requirement is that the number of components of the given vector field be equal to (or possibly greater than) the number of space dimensions. If there are more components than space dimensions (for example, if one is working with a coupled problem where there are additional solution variables), the first
dim components are assumed to represent the displacement field, and the remaining components are ignored. If this assumption does not hold one may need to set up a separate DoFHandler on the triangulation and associate the desired shift vector to it.
Typically, the DoFHandler operates on a finite element that is constructed as a system element (FESystem) from continuous FE_Q() objects. An example is shown below:
In this example, our element consists of
(dim+1) components. Only the first
dim components will be used, however, to define the Q2 mapping. The remaining components are ignored.
Note that it is essential to call the distribute_dofs(...) function before constructing a mapping object.
Also.
Definition at line 90 of file mapping_q_eulerian.h.
Constructor.
Definition at line 79 of file mapping_q_eulerian.cc.
Definition at line 57 of file mapping_q 149 of file mapping_q_eulerian.cc.
Return a pointer to a copy of the present object. The caller of this copy then assumes ownership of it.
Reimplemented from MappingQ< dim, spacedim >.
Definition at line 101 of file mapping_q_eulerian.cc.
Always returns
false because MappingQ1Eulerian does not in general preserve vertex locations (unless the translation vector happens to provide for zero displacements at vertex locations).
Reimplemented from MappingQ<.
Reimplemented from MappingQ< dim, spacedim >.
Definition at line 244 of file mapping_q_eulerian.cc.
Reference to the vector of shifts.
Definition at line 167 of file mapping_q_eulerian.h.
Pointer to the DoFHandler to which the mapping vector is associated.
Definition at line 172 of file mapping_q_eulerian.h.
|
https://dealii.org/8.5.0/doxygen/deal.II/classMappingQEulerian.html
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
United Kingdom en Installation and Service Manual Gas Fired Wall Mounted Condensing Combination Boiler EcoBlue Advance Combi 24 - 28 - 33 - 40 These instructions include the Benchmark Commissioning Checklist and should be left with the user for safe keeping. They must be read in conjunction with the Flue Installation Guide. Model Range Building Regulations and the Benchmark Commissioning Checklist Baxi EcoBlue Advance 24 Combi ErPD G.C.No 47-077-14 Baxi EcoBlue Advance 28 Combi ErPD G.C.No 47-077-15 Baxi EcoBlue Advance 33 Combi ErPD G.C.No 47-077-16 Baxi EcoBlue Advance 40 Combi ErPD G.C.No 47-077-17certification scheme for gas heating appliances.. 0086 ISO 9001 FM 00866 2017. WARNING: Any person who does any unauthorised act in relation to a copyright work may be liable to criminal prosecution and civil claims for damages. You have just purchased one of our appliances and we thank you for the trust you have placed in our products. Please note that the product will provide good service for a longer period of time if it is regularly checked and maintained. Our customer support network is at your disposal at all times. 2 EcoBlue Advance Combi 7219715 - 03 (04/17) Installer Notification Guidelines 7219715 - 02 (09/16) LABC will record the data and will issue a certificate of compliance EcoBlue Advance Combi 3 Contents Contents 1 Introduction 1.1 1.2 1.3 1.4 1.5 1.6 2 Safety 2.1 2.2 2.3 3 4 7 7 8 8 9 9 10 10 10 11 12 General Safety Instructions Recommendations Specific Safety Instructions 2.3.1 Handling 12 12 13 13 14 3.1 3.2 3.3 3.4 14 15 16 17 Technical Data Technical Parameters Dimensions and Connections Electrical Diagram Description of the Product 18 4.1 4.2 18 19 19 19 19 19 20 21 22 22 22 General Description Operating Principle 4.2.1 Central Heating Mode 4.2.2 Domestic Hot Water Mode 4.2.3 Boiler Frost Protection Mode 4.2.4 Pump Protection Main Components Control Panel Description Standard Delivery Accessories & Options 4.6.1 Optional Extras Before Installation 23 5.1 5.2 23 24 24 24 24 24 24 25 25 25 26 27 27 27 27 27 28 30 31 33 5.3 4 7 Technical Specifications 4.3 4.4 4.5 4.6 5 General Additional Documentation Symbols Used Abbreviations Extent of Liabilities 1.5.1 Manufacturer’s Liability 1.5.2 Installer’s Responsibility Homologations 1.6.1 CE Marking 1.6.2 Standards Installation Regulations Installation Requirements 5.2.1 Gas Supply 5.2.2 Electrical Supply 5.2.3 Hard Water Areas 5.2.4 Bypass 5.2.5 System Control 5.2.6 Treatment of Water Circulating Systems 5.2.7 Showers 5.2.8 Expansion Vessel (CH only) 5.2.9 Safety Pressure Relief Valve Choice of the Location 5.3.1 Location of the Appliance 5.3.2 Data Plate 5.3.3 Bath & Shower Rooms 5.3.4 Ventilation 5.3.5 Condensate Drain 5.3.6 Clearances 5.3.7 Flue/Chimney Location 5.3.8 Horizontal Flue/Chimney Systems EcoBlue Advance Combi 7219715 - 03 (04/17) Contents 5.4 5.5 5.6 6 Installation 37 6.1 6.2 37 37 37 38 38 38 39 39 41 41 41 41 42 42 42 6.3 6.4 6.5 6.6 6.7 7 43 7.1 7.2 43 43 43 43 44 44 44 44 46 46 46 46 47 7.4 7.5 7.6 9 General Assembly 6.2.1 Fitting the Pressure Relief Discharge Pipe 6.2.2 Connecting the Condensate Drain Preparation 6.3.1 Panel Removal Air Supply / Flue Gas Connections 6.4.1 Connecting the Flue/Chimney Electrical Connections 6.5.1 Electrical Connections of the Appliance 6.5.2 Connecting External Devices Filling the Installation External Controls 6.7.1 Installation of External Sensors 6.7.2 Optional Outdoor Sensor Commissioning 7.3 8 34 34 34 34 34 34 35 35 35 35 36 36 36 5.3.9 Flue/Chimney Lengths 5.3.10 Flue/Chimney Trim 5.3.11 Terminal Guard 5.3.12 Flue/Chimney Deflector 5.3.13 Flue/Chimney Accessories Transport Unpacking & Initial Preparation 5.5.1 Unpacking 5.5.2 Initial Preparation 5.5.3 Flushing Connecting Diagrams 5.6.1 System Filling and Pressurising 5.6.2 Domestic Hot Water Circuit General Checklist before Commissioning 7.2.1 Preliminary Electrical Checks 7.2.2 Checks Commissioning Procedure 7.3.1 De-Aeration Function Gas Settings 7.4.1 Check Combustion - ‘Chimney Sweep’ Mode Configuring the System 7.5.1 Check the Operational (Working Gas Inlet Pressure & Gas Rate) Final Instructions 7.6.1 Handover 7.6.2 System Draining Operation 48 8.1 8.2 8.3 8.4 8.5 48 48 48 49 49 General To Start-Up To Shutdown Use of the Control Panel Frost Protection Settings 9.1 7219715 - 03 (04/17) 50 50 Parameters EcoBlue Advance Combi 5 Contents 10 11 12 13 14 6 Maintenance 51 10.1 10.2 10.3 51 52 53 53 54 54 55 55 56 56 56 56 56 57 57 57 58 58 59 59 60 60 60 61 61 62 62 63 General Standard Inspection & Maintenance Operation Specific Maintenance Operations Changing Components 10.3.1 Spark Ignition & Flame Sensing Electrodes 10.3.2 Fan 10.3.3 Air / Gas Venturi 10.3.4 Burner 10.3.5 Insulation 10.3.6 Flue Sensor 10.3.7 Igniter 10.3.8 Heating Flow & Return Sensors 10.3.9 Safety Thermostat 10.3.10 DHW NTC Sensor 10.3.11 Pump - Head Only 10.3.12 Pump - Complete 10.3.13 Automatic Air Vent 10.3.14 Safety Pressure Relief Valve 10.3.15 Heating Pressure Gauge 10.3.16 Plate Heat Exchanger 10.3.17 Hydraulic Pressure Sensor 10.3.18 DHW Flow Regulator & Filter 10.3.19 DHW Flow Sensor (‘Hall Effect’ Sensor) 10.3.20 Diverter Valve Motor 10.3.21 Main P.C.B. 10.3.22 Boiler Control P.C.B. 10.3.23 Expansion Vessel 10.3.24 Gas Valve 10.3.25 Setting the Gas Valve (CO2 Check) Troubleshooting 64 11.1 11.2 64 64 Error Codes Fault Finding Decommissioning Procedure 70 12.1 70 Decommissioning Procedure Spare Parts 71 13.1 13.2 71 71 General Spare Parts List Notes 72 Benchmark Commissioning Checklist 74 EcoBlue Advance Combi 7219715 - 03 (04/17) Introduction 1 1 Introduction 1.1. General WARNING Installation, repair and maintenance must only be carried out only by a competent person. This document is intended for use by competent persons. All Gas Safe registered engineers carry an ID card with their licence number and a photograph. You can check your engineer is registered by telephoning 0800 408 5500 or online at This appliance must be installed in accordance with the manufacturer’s instructions and the regulations in force. If the appliance is sold or transferred, or if the owner moves leaving the appliance behind you should ensure that the manual is kept with the appliance for consultation by the new owner and their installer. Read the instructions fully before installing or using the appliance. In GB, this must be carried out by a competent person as stated in the Gas Safety (Installation & Use) Regulations (as may be amended from time to time).. The appliance is designed as a boiler for use in residential domestic environments on a governed meter supply only. The selection of this boiler is entirely at the owner’s risk. If the appliance is used for purposes other than or in excess of these specifications, the manufacturer will not accept any liability for resulting loss, damage or injury. The manufacturer will not accept any liability whatsoever for loss, damage or injury arising as a result of failure to observe the instructions for use, maintenance and installation of the appliance. WARNING Check the information on the data plate is compatible with local supply conditions. 1.2 Additional Documentation These Installation & Service Instructions must be read in conjunction with the Flue Installation Guide supplied in the Literature Pack. Various timers, external controls, etc. are available as optional extras. Full details are contained in the relevant sales literature. 7219715 - 03 (04/17) EcoBlue Advance Combi 7 1 Introduction 1.3 Symbols Used In these instructions, various levels are employed to draw the user's attention to particular information. In so doing, we wish to safeguard the user's safety, prevent hazards and guarantee correct operation of the appliance. Each level is accompanied by a warning triangle DANGER Risk of a dangerous situation causing serious physical injury. WARNING Risk of a dangerous situation causing slight physical injury. CAUTION Risk of material damage. Signals important information . Signals a referral to other instructions or other pages in the instructions. 1.4 Abbreviations DHW: Domestic hot water CH: Central heating GB: Great Britain IE: Ireland BS: British standard HHIC: Heating and Hotwater Industry Council Pn: Nominal output Pnc: Condensing output Qn: Nominal heat input Qnw: Nominal domestic hot water heat input Hs: Gross calorific value 8 EcoBlue Advance Combi 7219715 - 03 (04/17) Introduction 1.5 1.5.1 1 Extent of Liabilities Manufacturer's Liability Our products are manufactured in compliance with the requirements of the various european applicable Directives. They are therefore delivered with marking and all relevant documentation. In the interest of customers, we are continuously endeavouring to make improvements in product quality. All the specifications stated in this document are therefore subject to change without notice. The manufacturer will not accept any liability for loss, damage or injury arising as a result of:Failure to abide by the instructions on using the appliance. Failure to regularly maintain the appliance, or faulty or inadequate maintenance of the appliance. Failure to abide by the instructions on installing the appliance. This company declares current and relevant requirements of legislation and guidance including. Prior to commissioning all systems must be thoroughly flushed and treated with inhibitor (see section 5.2.6). Failure to do so will invalidate the appliance warranty. Incorrect installation could invalidate the warranty and may lead to prosecution. 7219715 - 03 (04/17) EcoBlue Advance Combi 9 1 Introduction 1.5.2 Installer's Responsibility The installer is responsible for the installation and initial start up of the appliance. The installer must adhere to the following instructions: Read and follow the instructions given in the manuals provided with the appliance. Carry out installation in compliance with the prevailing legislation and standards. Ensure the system is flushed and inhibitor added. Install the flue/chimney system correctly ensuring it is operational and complies with prevailing legislation and standards, regardless of location of the boiler’s installation. Only the installer should perform the initial start up and carry out any checks necessary. Explain the installation to the user. Complete the Benchmark Commissioning Checklist - this is a condition of the warranty ! Warn the user of the obligation to check the appliance and maintain it in good working order. Give all the instruction manuals to the user. 1.6 Homologations 1.6.1 CE Marking EC - Declaration of Conformity Baxi Heating UK Limited being the manufacturer / distributor within the European Economic Area of the following:Baxi EcoBlue Advance 24 - 28 - 33 - 40 Combi ErPD declare that the above is in conformity with the provisions of the Council Directive 2009/142/EC 92/42/EEC 2004/108/EC 2006/95/EC 2009/125/EC 2010/30/E86. For GB/IE only. 10 EcoBlue Advance Combi 7219715 - 03 (04/17) Introduction 1.6.2 1 Standards Codes of Practice - refer to the most recent version In GB the following Codes of Practice apply: Standard Scope BS 6891. BS 4814 Specification for Expansion Vessels using an internal diaphragm, for sealed hot water systems. IGE/UP/7/1998 Guide for gas installations in timber framed housing.. 7219715 - 03 (04/17) EcoBlue Advance Combi 11 2 2 Safety Safety 2.1 General Safety Instructions DANGER If you smell gas: 1. Turn off the gas supply at the meter 2. Open windows and doors in the hazardous area 3. Do not operate light switches 4. Do not operate any electrical equipment 5. Do not use a telephone in the hazardous area 6. Extinguish any naked flame and do not smoke 7. Warn any other occupants and vacate the premises 8. Telephone the National Gas Emergency Service on:- 0800 111 999 2.2 Recommendations WARNING Installation, repair and maintenance must be carried out by a Gas Safe Registered Engineer (in accordance with prevailing local and national regulations). When working on the boiler, always disconnect the boiler from the mains and close the main gas inlet valve. After maintenance or repair work, check the installation to ensure that there are no leaks. CAUTION The boiler should be protected from frost. Only remove the casing for maintenance and repair operations. Replace the casing after maintenance and repair operations. 12 EcoBlue Advance Combi 7219715 - 03 (04/17) Safety 2.3 2.3.1 2 Specific Safety Instructions Handling General • The following advice should be adhered to, from when first handling the boiler to the final stages of installation, and also during maintenance. • Most injuries as a result of inappropriate handling and lifting are to the back, but all other parts of the body are vulnerable, particularly shoulders, arms and hands. Health & Safety is the responsibility of EVERYONE. • There is no ‘safe’ limit for one man - each person has different capabilities. The boiler should be handled and lifted by TWO PEOPLE. • Do not handle or lift unless you feel physically able. • Wear appropriate Personal Protection Equipment e.g. protective gloves, safety footwear etc. Preparation • Co-ordinate movements - know where, and when, you are both going. • Minimise the number of times needed to move the boiler plan ahead. • Always ensure when handling or lifting the route is clear and unobstructed. If possible avoid steps, wet or slippery surfaces, unlit areas etc. and take special care on ladders/into lofts. Technique • When handling or lifting always use safe techniques - keep your back straight, bend your knees. Don’t twist - move your feet, avoid bending forwards and sideways and keep the load as close to your body as possible. • Where possible transport the boiler using a sack truck or other suitable trolley. • Always grip the boiler firmly, and before lifting feel where the weight is concentrated to establish the centre of gravity, repositioning yourself as necessary. See the ‘Installation’ section of these instructions for recommended lift points. Remember • The circumstances of each installation are different. Always assess the risks associated with handling and lifting according to the individual conditions. • If at any time when installing the boiler you feel that you may have injured yourself STOP !! DO NOT ‘work through’ the pain - you may cause further injury. IF IN ANY DOUBT DO NOT HANDLE OR LIFT THE BOILER OBTAIN ADVICE OR ASSISTANCE BEFORE PROCEEDING ! 7219715 - 03 (04/17) EcoBlue Advance Combi 13 3 Technical Specifications 3 Technical Specifications 3.1 Appliance Type Electrical Supply 230V~ 50Hz (Appliance must be connected to an earthed supply) C13 C33 C53 Appliance Category CAT I 2H Heat Input CH Qn Hs (Gross) Max 24 model kW 22.2 28 model kW 26.6 33 model kW 31.1 40 model kW 35.5 Min 5.2 6.3 7.6 8.9 Heat Output CH Pn (Non-Condensing) Max Min 24 model kW 20.0 4.6 28 model kW 24.0 5.5 33 model kW 28.0 6.6 40 model kW 32.0 7.8 Heat Output CH Pnc (Condensing) Max Min 24 model kW 21.2 4.9 28 model kW 25.3 6.0 33 model kW 29.6 7.1 40 model kW 33.9 8.4 Heat Input DHW Qnw Hs (Gross) Max 24 model kW 27.4 28 model kW 32.1 33 model kW 37.8 40 model kW 45.8 Heat Output DHW 24 model 28 model 33 model 40 model kW kW kW kW Gas Nozzle Injector 24 model mm 28 model mm 33 model mm 40 model mm NOx Class Technical Data Max 24.0 28.0 33.0 40.0 Ø 5.0 Ø 5.6 Ø 6.6 Ø 6.6 5 Temperatures C.H. Flow Temp (adjustable) 25°C to 80°C max (± 5°C) D.H.W. Flow Temp (adjustable) 40°C to 60°C max (± 5°C) dependent upon flow rate Power Consumption 24 model W 28 model W 33 model W 40 model W Safety Discharge Max Operating Min Operating Recommended Operating Range 85 90 95 100 Electrical Protection IPX5D (without integral timer) IP20 (with integral timer) External Fuse Rating 3A Flow Rates Internal Fuse Rating F2L DHW Flow Rate @ 30o C Rise 10.9 12.9 15.3 18.3 DHW Flow Rate @ 35o C Rise 9.8 11.5 13.5 16.4 Min Working DHW Flow Rate 2 2 2 2 Condensate Drain To accept 21.5mm (3/4 in) plastic waste pipe Flue Terminal Dimensions Diameter Projection Connections Gas Inlet Heating Flow Heating Return Cold Water Inlet Hot Water Outlet Pressure Relief Discharge - 100mm 125mm copper tails 22mm 22mm 22mm 15mm 15mm 15mm Outercase Dimensions Casing Height Overall Height Inc Flue Elbow Casing Width Casing Depth Clearances Above Casing Below Casing Front Front L.H. Side R.H. Side - *This is MINIMUM recommended dimension. Greater clearance will aid installation and maintenance. NOTE: All data in this section are nominal values and subject to normal production tolerances. Pump Available Head Packaged Boiler Carton Installation Lift Weight bar 8 0.15 (24) (28) (33) (40) l/min l/min l/min l/min Where Low Flow Taps or Fittings are intended to be used in the DHW system connected to a Baxi EcoBlue Advance Combi it is strongly recommended that the DHW flow rate DOES NOT fall below 2.5 l/min. This will ensure reliable operation of the DHW function. Expansion Vessel - (For Central Heating only. Integral with appliance) bar Min Pre-charge Pressure 1.0 (24 & 28) (33 & 40) litre litre Max Capacity of CH System 125 155 Primary Water Content of Boiler (unpressurised) 2.5 2.5 NATURAL GAS ONLY ! Max Gas Rate (Natural Gas - G20) (After 10 mins) 24 model m3/h 2.61 28 model m3/h 3.05 33 model m3/h 3.59 40 model m3/h 4.35 Dynamic (nominal) Inlet Pressure (Natural Gas - G20) mbar 20 with a CV of 37.78 MJ/m3 See graph below Product Characteristics Database (SEDBUK) 6 5.5 5 SAP 2009 Annual Efficiency is 89% 4.5 4 3.5 This value is used in the UK Government’s 3 Metre (wg) Packaged Boiler Carton Installation Lift Weight 763mm 923mm 450mm 345mm 175 mm Min 150 mm* Min 450 mm Min (For Servicing) 5 mm Min (In Operation) 5 mm Min 5 mm Min Pump - Available Head Packaged Boiler Carton Installation Lift Weight bar 3 2.5 0.5 1-2 DHW Circuit Pressures Max Operating Min Operating Weights (24/28 model) 42.3kg 36kg (33 model) 44.3kg 38kg (40 model) 45.3kg 39kg Central Heating Primary Circuit Pressures 2.5 Standard Assessment Procedure (SAP) for energy 2 1.5 rating of dwellings. The test data from which it has 1 0.5 been calculated has been certified by 0085. 0 0 200 400 600 800 1000 1200 Flow Rate (l/h) 14 EcoBlue Advance Combi 7219715 - 03 (04/17) Technical Specifications 3.2 3 Technical Parameters Technical parameters for boiler combination heaters Baxi EcoBlue Advance Combi ErPD 24 28 33 40 Condensing boiler Yes Yes Yes Yes Low-temperature boiler(1) No No No No B1 boiler No No No No Cogeneration space heater No No No No Yes Yes Yes Yes Combination heater.7 8.0 9.4 10.7 Seasonal space heating energy efficiency Šs % 93 93 93 93 Useful efficiency at rated heat output and Š4 high temperature regime(2) % 88.0 87.9 88.0 87.9 Š1 % 98.0 98.0 98.1 98.0 Full load elmax kW 0.030 0.035 0.040 0.040 Part load elmin kW 0.014 0.014 0.014 0.014 Standby mode PSB kW 0.003 0.003 0.003 0.003 Standby heat loss Pstby kW 0.035 0.035 0.040 0.045 Ignition burner power consumption Pign kW - - - - Annual energy consumption QHE kWh GJ 17204 62 20645 74 24086 87 27527 99 Sound power level, indoors LWA dB 51 52 53 55 Emissions of nitrogen oxides NOX mg/kWh 22 20 24 24 XL XL XXL XXL Rated heat output Useful efficiency at 30% of rated heat output and low temperature regime(1) Auxiliary electricity consumption Other items Domestic hot water parameters Declared load profile Daily electricity consumption Qelec kWh 0.151 0.168 0.215 0.172 Annual electricity consumption AEC kWh 33 37 47 38 Water heating energy efficiency Šwh % 90 88 86 85 Daily fuel consumption Qfuel kWh 21.340 21.980 27.850 28.570 Annual fuel consumption AFC GJ 16 17 22 23 (1) Low temperature means for condensing boilers 30°C, for low temperature boilers 37°C and for other heaters 50°C return temperature (at heater inlet). (2) High temperature regime means 60°C return temperature at heater inlet and 80°C feed temperature at heater outlet. See The back cover for contact details. 7219715 - 03 (04/17) EcoBlue Advance Combi 15 3 Technical Specifications 3.3 Dimensions and Connections There must be no part of the air duct (white tube) visible outside the property. Dimensions At least 1.5° G E A 763mm B 345mm C 450mm A D 116mm Ø Min. E 160mm (207mm for 80/125mm flue systems) F 140mm B G 106mm 360° Orientation H 170mm J 280mm H D C J Flue Ø 100mm Tap Rail F Condensate Drain 50 mm 45 mm Pressure Relief Valve (15mm) 65 mm Heating Flow (22mm) 16 EcoBlue Advance Combi 65 mm Hot Water Outlet (15mm) 65 mm Gas Inlet (22mm) 65 mm Cold Water Inlet (15mm) 30 95 mm Heating Return (22mm) 7219715 - 03 (04/17) Technical Specifications 3.4 M2 Low Voltage External Control Connection 9 7 8 6 5 4 3 2 3 Electrical Diagram r 1 bk w 10 Boiler Controls br R /R /P + + bk Hall Effect Sensor g/y b DHW NTC Sensor p b r y w g b br br Pump g Flue Sensor b b bk X20 X21 X40 X30 X41 r X42 X60 g Hydraulic Pressure Switch X22 X23 X50 Heating Return Sensor br b b b Heating Flow Sensor X1 X2 r X10 X11 b X13 Igniter br b bk g bk b b br b br b b Safety Thermostat w NL bk Mains Input Cable br g/y bk br g bk Diverter Valve Motor Fan g br 2 3 4 g/y g/y Gas Valve b 1 b g/y b Timer Connector Spark Ignition Electrode Flame Sensing Electrode r br E X14 g/y 2 Link y g/y X12 r M1 Mains Voltage Connection 1 b br 5 Timer Bridge Key To Wiring Colours 7219715 - 03 (04/17) b - Blue r - Red bk - Black g - Green br - Brown g/y - Green/Yellow w - White y - Yellow gr - Grey p - Purple EcoBlue Advance Combi 17 4 Description of the Product 4 Description of the Product 4.1 General Description 1. The Baxi EcoBlue Advance Combi boilers are fully automatic gas fired wall mounted condensing combination boilers. They are room sealed and fan assisted, and will serve central heating and mains fed domestic hot water. 2. The boiler is set to give a maximum output of :24 model 28 model 33 model - Information Label 40 model - 24 kW DHW 21.2 kW CH Pnc (Condensing) 28 kW DHW 25.3 kW CH Pnc (Condensing) 33 kW DHW 29.6 kW CH Pnc (Condensing) 40 kW DHW 33.8 kW CH Pnc (Condensing) 3. The boiler is factory set for use on Natural Gas (G20). 4. The boiler is suitable for use only on fully pumped sealed heating systems. Priority is given to domestic hot water. Boiler Control Flap Fig. 1 5. The boiler data badge gives details of the model, serial number and Gas Council number and is situated on the boiler lower panel. It is visible when the control box is lowered . All systems must be thoroughly cleansed, flushed and treated with inhibitor (see section 5.2.6). These Installation & Servicing Instructions MUST be read in conjunction with the Flue Installation Guide supplied in the Literature Pack. Data Badge Fig. 2 Control Box removed for clarity 18 EcoBlue Advance Combi 7219715 - 03 (04/17) Description of the Product 15 4.2 Operating Principle The boiler can be set in 3 operating modes:- ‘Summer’ (DHW only), ‘Winter’ (CH & DHW) or ‘Heating Only’ (CH only) by use of the button. 16 14 4 4.2.1 Central Heating Mode 17 1. With a demand for heating, the pump circulates water through the primary circuit. 18 2. Once the burner ignites the fan speed controls the gas rate to maintain the heating temperature measured by the temperature sensor. 19 20 3. When the flow temperature exceeds the setting temperature, a 3 minute delay occurs before the burner relights automatically (anti-cycling). The pump continues to run during this period. 21 22 4. When the demand is satisfied the burner is extinguished and the pump continues to run for a period of 3 minutes (pump overrun). 24 23 13 4.2.2 12 Domestic Hot Water Mode 1. Priority is given to the domestic hot water supply. A demand at a tap or shower will override any central heating requirement. 11 2. The flow of water will operate the DHW Sensor (Hall Effect Sensor) which requests the 3 way valve to change position. This will allow the pump to circulate the primary water through the DHW plate heat exchanger. 10 3 1 2 5 7 3. The burner will light automatically and the temperature of the domestic hot water is controlled by the temperature sensor. 6 4. When the domestic hot water demand ceases the burner will extinguish and the diverter valve will remain in the domestic hot water mode, unless there is a demand for central heating. 4 8 9 A B C E D F Boiler Schematic Layout Key 1. Pump with Automatic Air Vent 2. Boiler Drain Tap 3. Pressure Gauge 4. Safety Pressure Relief Valve 5. DHW Flow Sensor/Filter/Restrictor 6. Domestic Hot Water Priority Sensor (‘Hall Effect’ Sensor) 7. Domestic Hot Water NTC Sensor 8. Hydraulic Pressure Switch 9. Three Way Valve & Motor 10. Plate Heat Exchanger 11. Gas Valve 12. Safety Thermostat (105° C) 13. Heating Flow Sensor 14. Flue Sensor 15. Boiler Adaptor 16. Primary Heat Exchanger 17. Spark Ignition Electrode 18. Burner 19. Flame Sensing Electrode 20. Air/Gas Collector 21. Heating Return Sensor 22. Fan 23. Air/Gas Venturi 24. Expansion Vessel Connections:A – Condensate Drain B – Heating Flow C – Domestic Hot Water Outlet D – Gas Inlet E – Cold Water Inlet On/Off Valve and filter F – Heating Return 7219715 - 03 (04/17) 4.2. 4.2.4 Pump Protection 1. This activates once a week if there has been no demand. The pump runs for 30 seconds to prevent sticking. EcoBlue Advance Combi 19 4 Description of the Product 4.3 Main Components 20 2 3 1 1. Expansion Vessel 2. Expansion Vessel Valve - Do NOT use as vent 3. Primary Heat Exchanger 4. Plate Heat Exchanger 5. Pump with Automatic Air Vent 6. Central Heating System Pressure Gauge 9 12 7. Fan Assembly with Venturi 11 8. Air/Gas Collector 10 9. Flue Sensor 10. Flame Sensing Electrode 11. Spark Ignition Electrode 12. Combustion Box Cover & Burner 13. Control Box Display 14. Condensate Trap 15. Safety Pressure Relief Valve 23 8 24 16. 7 4 5 13 19 Drain Off Point 17. Gas Valve 18. Diverter Valve Motor 19. Boiler Controls 20. Boiler Adaptor 21. Heating Flow Sensor 22. Safety Thermostat 23. Igniter 24. Air Box 25. Heating Return Sensor 26. Hydraulic Pressure Sensor 27. Domestic Hot Water Priority Sensor (‘Hall Effect’ Sensor) 6 25 21 22 18 RQ ADJ. 14 5 1 2 3 EV1 4 26 17 27 16 20 EcoBlue Advance Combi 16 7219715 - 03 (04/17) Description of the Product 4.4 4 Control Panel Description Key to Controls R Standby - Reset - Esc Boiler Information View Standby - Reset - Esc Button Boiler Information View Button Increase CH Temperature Button R /R /P Decrease CH Temperature Button + + Increase DHW Temperature Button Decrease DHW Temperature Button Domestic Hot Water (DHW) Temperature Adjustment Central Heating (CH) Summer / Winter / Only Heating Mode Button Temperature Adjustment Summer - DHW only mode Summer - Winter - Heating Only Mode Winter - DHW & CH mode Heating Only - Only CH mode Display Description DHW and CH OFF (frost protection still enabled) Indicate errors that prevent burner from igniting R Error - Not resettable by user Water pressure too low R Indicates an error resettable by the user Indicates navigation in programming mode (parameter) Indicates navigation in programming mode Generic error Burner lit DHW mode (symbol will flash with demand) Heating mode (symbol will flash with demand) Display showing all available characters Units for temperature Units for pressure R Service due 7219715 - 03 (04/17) EcoBlue Advance Combi 21 4 Description of the Product 4.5 Standard Delivery 1. The pack contains: Boiler Wall mounting plate (pre-plumbing jig) including isolation valves Fittings pack Literature pack • Installation & Servicing Manual (including ‘benchmark’) • User Guide Instructions • Flue Accessories & Fitting Guide • Registration Card • Fernox Leaflet • Adey Leaflet • Wall Template • Product Leaflet • Package Leaflet 4.6 Accessories & Options 4.6.1 Optional Extras 1. Various timers, external controls, etc. are available as optional extras. Plug-in Mechanical Timer Kit ----------------------------------------- 7212341 Plug-in Digital Timer Kit ------------------------------------------------ 7212342 Wireless RF Mechanical Thermostat Kit --------------------------- 7212343 Wireless RF Digital Programmable Room Thermostat Kit ---- 7212344 Single Channel Wired Programmable Room Thermostat Kit - 7212438 Wired Outdoor Weather Sensor ------------------------------------- 7213356 Two Channel Wired Programmer Kit ------------------------------- 7212443 Single Channel Wired Programmer Kit ---------------------------- 7212444 Mechanical Room Thermostat -------------------------------------- 7209716 Flue Accessories (elbows, extensions, clamps etc.) (refer to the Flue Accessories & Fitting Guide supplied in the literature pack.) Remote relief valve kit ------------------------------------------------- 512139 Boiler discharge pump ------------------------------------------------- 720648301 1M Drain Pipe ‘Trace Heating’ Element --------------------------- 720644401 2M Drain Pipe ‘Trace Heating’ Element --------------------------- 720664101 3M Drain Pipe ‘Trace Heating’ Element --------------------------- 720664201 5M Drain Pipe ‘Trace Heating’ Element --------------------------- 720664401* *Where the drain is between 3 & 5 metres a 5 metre kit can be used and “doubled back” upon itself. Any of the above MUST be fitted ONLY by a qualified competent person. Further detail can be found in the relevant sales literature and at 22 EcoBlue Advance Combi 7219715 - 03 (04/17) Before Installation 5 5 Before Installation 5.1 Installation Regulations WARNING Installation, repair and maintenance must only be carried out only by a competent person. This document is intended for use by competent persons, Installation must be carried out in accordance with the prevailing regulations, the codes of practice and the recommendations in these instructions. Please refer to 1.5.1 and 1.6.2 Installation must also respect this instruction manual and any other applicable documentation supplied with the boiler. 7219715 - 03 (04/17) EcoBlue Advance Combi 23 5 Before Installation 5.2 5.2.1 Installation Requirements Gas Supply 1. The gas installation should be in accordance with the relevant standards. In GB this is BS 6891 (NG). In IE this is the current edition of I.S. 813 “Domestic Gas Installations”. 2. The connection to the appliance is a 22mm copper tail located at the rear of the gas service cock (Fig. 3).. 4. The gas service cock incorporates a pressure test point. The service cock must be on to check the pressure. 5.2.2 Electrical Supply 1. External wiring must be correctly earthed, polarised and in accordance with relevant regulations/rules. In GB this is the current I.E.E. Wiring Regulations. In IE reference should be made to the current edition of ETCI rules. Fig. 3 Gas Service Cock 2. The mains supply is 230V ~ 50Hz fused at 3A. The method of connection to the electricity supply must facilitate complete electrical isolation of the appliance. Connection may be via a fused double-pole isolator with a contact separation of at least 3mm in all poles and servicing the boiler and system controls only. 5.2.3 Hard Water Areas Only water that has NOT been artificially softened must be used when filling or re-pressurising the primary system. If the mains cold water to the property is fitted with an artificial softening/treatment device the source utilised to fill or re-pressurise the system must be upstream of such a device. 5.2.4 Bypass 1. The boiler is fitted with an automatic integral bypass. 5.2.5 System Control 1. Further external controls (e.g. room thermostat sensors) MUST be fitted to optimise the economical operation of the boiler in accordance with Part L of the Building Regulations. A range of optional controls is available. Full details are contained in the relevant Sales Literature. 24 EcoBlue Advance Combi 7219715 - 03 (04/17) Before Installation 5.2.6 5 Treatment of Water Circulating Systems 1. All recirculatory water systems will be subject to corrosion unless an appropriate water treatment is applied. This means that the efficiency of the system will deteriorate as corrosion sludge accumulates within the system, risking damage to pump and valves, boiler noise and circulation problems. 2. When fitting new systems flux will be evident within the system, which can lead to damage of system components. 3. BS7593 gives extensive recommendations on system cleansing and water treatment. 4. All systems must be thoroughly drained and flushed out using an appropriate proprietary flushing agent. 5. A suitable inhibitor must then be added to the system. 6.. 7. It is important to check the inhibitor concentration after installation, system modification and at every service in accordance with the inhibitor manufacturer. (Test kits are available from inhibitor stockists.) 8. For information or advice regarding any of the above contact Baxi Customer Support 0344 871 1545. 5.2.7. 5.2.8 Expansion Vessel (CH only) 1. The appliance expansion vessel is pre-charged to 1.0 bar. Therefore, the minimum cold fill pressure is 1.0 bar. The vessel is suitable for correct operation for system capacities up to 125 litres (24 & 28)/155 litres (33 & 40). For greater system capacities an additional expansion vessel must be fitted. For GB refer to BS 7074 Pt 1. For IE, the current edition of I.S. 813 “Domestic Gas Installations”. 2. Checking the charge pressure of the vessel -. 7219715 - 03 (04/17) EcoBlue Advance Combi 25 5 Before Installation 5.2.9 Safety Pressure Relief Valve See B.S. 6798 for full details. 1. The pressure relief valve (Fig. 5) is set at 3 bar, therefore all pipework, fittings, etc. should be suitable for pressures in excess of 3 bar and temperature in excess of 100°C. 2. The pressure relief discharge pipe should be not less than 15mm diameter,. 4). 3. The discharge must not be above a window, entrance or other public access. Consideration must be given to the possibility that boiling water/steam could discharge from the pipe. The end of the pipe should terminate facing The relief valve must never be used to drain the system down and towards the wall 4. A remote relief valve kit is available to enable the boiler to be installed in cellars or similar locations below outside ground level. Fig. 4 5. A boiler discharge pump is available which will dispose of both condensate & high temperature water from the relief valve. It has a maximum head of 5 metres. Section 6.2.1 gives details of how to connect the pipe to the boiler. Control Box removed for clarity Discharge Pipe Pressure Relief Valve 26 EcoBlue Advance Combi Fig.5 7219715 - 03 (04/17) Before Installation 5.3 5.3.1 5 Choice of the Location Location of the Appliance 5.3.4). 2. Where the boiler is sited in an unheated enclosure and during periods when the heating system is to be unused it is recommended that the permanent live is left on to give BOILER frost protection. NOTE: THIS WILL NOT PROTECT THE SYSTEM !. Fig. 6 Data Badge Zone 2 4. If the boiler is to be fitted into a building of timber frame construction then reference must be made to the current edition of Institute of Gas Engineers Publication IGE/UP/7 (Gas Installations in Timber Framed Housing). Window Recess Zone 1 Zone 2 5.3.2 Data Plate Zone 0 1. The boiler data badge gives details of the model, serial number and Gas Council number and is situated on the boiler lower panel. It is visible when the control box is lowered (Fig. 6). 0.6 m Window Recess Zone 2 Fig. A 5.3.3 Where an integral timer is NOT FITTED the boiler has a protection rating of IPX5D and if installed in a room containing a bath or shower can be within Zone 2 (but not 0 or 1). In GB Only Ceiling Window Recess Zone 2 Zone 1 Zone 2 Outside Zones 7219715 - 03 (04/17) If the boiler is fitted with an integral timer it CANNOT be installed in Zone 0, 1 or 2. 5.3.4 0.6 m Fig. In GB Only Bath & Shower Rooms Ventilation 1. Where the appliance is installed in a cupboard or compartment, no air vents are required. BS 5440: Part 2 refers to room sealed appliances installed in compartments. The appliance will run sufficiently cool without ventilation. EcoBlue Advance Combi 27 5 Before Installation. 32mm 21.5mm Insulation 5.3 5.3.6.12 to 5.3.6. Key to Pipework i) Termination to an internal soil and vent pipe 50mm per m etre o 2.5° M inimum f pipe fall run 450mm min* *450mm is applicable to properties up to 3 storeys. For multi-storey building installations consult BS 6798.. Boiler Sink ii) External termination via internal discharge branch e.g sink waste - downstream* 50mm p of pip er metre e run 2.5° M inimum fall Pipe must terminate above water level but below surrounding surface. Cut end at 45°. *It is NOT RECOMMENDED to connect upstream of the sink or other waste water receptacle ! 28 EcoBlue Advance Combi 7219715 - 03 (04/17) Before Installation 9. If the boiler is fitted in an unheated location the entire condensate discharge pipe should be treated as an external run and sized and insulated accordingly. iii) Termination to a drain or gully Boiler 10. In all cases discharge pipe must be installed to aid disposal of the condensate. To reduce the risk of condensate being trapped, as few bends and fittings as possible should be used and any burrs on cut pipe removed. 50mm p of pip er metre e run 2.5° M inimum 11. When discharging condensate into a soil stack or waste pipe the effects of existing plumbing must be considered. If soil pipes or waste pipes are subjected to internal pressure fluctuations when WC's are flushed or sinks emptied then backpressure may force water out of the boiler trap and cause appliance lockout. fall Pipe must terminate above water level but below surrounding surface. Cut end at 45° iv) Termination to a purpose made soakaway 12. A boiler discharge pump is available which will dispose of both condensate & high temperature water from the relief valve. It has a maximum head of 5 metres. Follow the instructions supplied with the pump. Further specific requirements for soakaway design are referred to in BS 6798. Boiler 50mm pe of pipe r metre run 2.5° M inimum 51 500mm min 13. Condensate Drain Pipe ‘Trace Heating’ Elements are available in various lengths, 1, 2, 3 & 5 metres. Where the drain is between 3 & 5 metres a 5 metre kit can be used and “doubled back” upon itself. fall 14. It is possible to fit the element externally on the condensate drain or internally as detailed in the instructions provided. Holes in the soak-away must face away from the building 15. The fitting of a ‘Trace Heating’ Element is NOT a substitute for correct installation of the condensate drain. ALL requirements in this section must still be adhered to. v) pumped into an internal discharge branch (e.g. sink waste) downstream of the trap 50mm p er metre of pipe ru 2.5° Min n imum fa ll Sink Boiler Basement or similar Pipe must terminate (heated) above water level but below surrounding surface. Cut end at 45° Condensate Pump vi) pumped into an external soil & vent pipe 50mm per me tre of 2.5° M pipe run inimum fall Unheated Location (e.g. Garage) Boiler vii) to a drain or gully with extended external run & trace heating Boiler Basement or similar (heated) 50mm The ‘Trace Heating’ element must be installed in accordance with the instructions supplied. External runs & those in unheated locations still require insulation. per me tre of p ipe run inimum fall 2.5° M Pipe must terminate above water level but below surrounding surface. Cut end at 45° Condensate Pump 7219715 - 03 (04/17) EcoBlue Advance Combi 29 5 Before Installation 5.3.6 Clearances 1. A flat vertical area is required for the installation of the boiler.. 5mm Min 450mm 5mm Min 175mm Min (300mm Min if using 80/125mm flueing system) At least 1.5° 450mm Min 763mm For Servicing Purposes & Operating the Controls 5mm Min In Operation 345mm 150mm* Min *This is MINIMUM recommended dimension. Greater clearance will aid installation and maintenance. Fig.7 30 EcoBlue Advance Combi 7219715 - 03 (04/17) Before Installation 5.3.7 5 Flue/Chimney Location 1. The following guidelines indicate the general requirements for siting balanced flue terminals. For GB recommendations are given in BS 5440 Pt 1. For IE recommendations are given in the current edition of I.S. 813 “Domestic Gas Installations”. Due to the nature of the boiler a plume of water vapour will be discharged from the flue. This should be taken into account when siting the flue terminal. T J,K U N R I M C I I F D E A S I F J,K B L A A G H H I Likely flue positions requiring a flue terminal guard Fig. 8 Terminal Position with Minimum Distance (Fig. 8) A1 B1 C1 D2 E2 F2 G2 H2 I J K L M N R S T U Directly below an opening, air brick, opening windows, etc. Above an opening, air brick, opening window etc. Horizontally to an opening, air brick, opening window etc. Below gutters, soil pipes or drain pipes. Below eaves. Below balconies or car port roof. From a vertical drain pipe or soil pipe. From an internal or external corner. Above ground, roof or balcony level. From a surface or boundary line facing a terminal. From a terminal facing a terminal (Horizontal flue). From a terminal facing a terminal (Vertical flue). From an opening in carport (e.g. door, window) into the dwelling. Vertically from a terminal on the same wall. Horizontally from a terminal on the same wall. From adjacent wall to flue (vertical only). From an adjacent opening window (vertical only). Adjacent to windows or openings on pitched and flat roofs Below windows or openings on pitched roofs 7219715 - 03 (04/17) (mm) 300 300 300 25 (75) 25 (200) 25 (200) 25 (150) 25 (300) 300 600 1200 600 1200 1500 300 300 1000 600 2000 1 In addition, the terminal should be no nearer than 150 mm to an opening in the building fabric formed for the purpose of accommodating a built-in element such as a window frame. 2 Only ONE 25mm clearance is allowed per installation. If one of the dimensions D, E, F, G or H is 25mm then the remainder MUST be as shown in brackets, in accordance with B.S.5440-1. EcoBlue Advance Combi 31 5 Before Installation Under car ports we recommend the use of the plume displacement kit. The terminal position must ensure the safe and nuisance - free dispersal of combustion products.. Terminal Assembly 300 min * *4. Reduction to the boundary is possible down to 25mm but the flue deflector must be used (see 5.3.12). The distance from a fanned draught appliance terminal installed parallel to a boundary may not be less than 300mm in accordance with the diagram opposite (Fig. 9). Top View Rear Flue Fig. 9 Property Boundary Line Plume Displacement Kit If fitting a Plume Displacement Flue Kit, the air inlet must be a minimum of 150mm from any opening windows or doors (see Fig. 10). Air Inlet Opening Window or Door Fig. 10 32 EcoBlue Advance Combi 150mm MIN. The Plume Displacement flue gas discharge terminal and air inlet must always terminate in the same pressure zone i.e. on the same facing wall. 7219715 - 03 (04/17) Before Installation 5.3.8 5 Horizontal Flue/Chimney Systems 1. The standard telescopic Flue length is measured from point (i) to (ii) as shown. The elbow supplied with the standard horizontal telescopic flue kit is not included in any equivalent length calculations. Horizontal Flues Read this section in conjunction with the Flue Installation Guide supplied with the boiler. WARNING SUPPORT - All flue systems MUST be securely supported at a MINIMUM of once every metre & every change of direction. It is recommended that every straight piece is supported irrespective of length. Additional supports are available as accessories. VOIDS - Consideration must be given to flue systems in voids and the provision of adequate access for subsequent periodic visual inspection. This bend is equivalent to 1 metre C A Plume Displacement Kit 60 /100 dia 1M Extensions 45° & 93° elbows are also available - see the separate Flue Guide. (ii) B (i) This bend is equivalent to 1 metre Total equivalent length = A+B+C+2 x 90° Bends NOTE: Horizontal flue pipes should always be installed with a fall of at least 1.5° from the terminal to allow condensate to run back to the boiler. 7219715 - 03 (04/17) EcoBlue Advance Combi 33 5 Before Installation 5.3.9 m 0m Flue/Chimney Lengths 1. The standard horizontal telescopic flue kit allows for lengths between 315mm and 500mm from elbow to terminal without the need for cutting (Fig. 11). Extensions of 250mm, 500mm & 1m are available. 50 m 5m 31 The maximum permissible equivalent flue length is: 10 metres (60/100 system - vertical & horizontal) 20 metres (80/125 system - vertical & horizontal) 15 metres (80/80 twin pipe) 8 metres (60/100 system - vertical connected to ridge terminal) Fig. 11 5.3.10 Flue/Chimney Trim 1. The flexible flue trims supplied can be fitted on the outer and inner faces of the wall of installation. Ensure that no part of the white outer chimney duct is visible 5.3.11 Terminal Guard 1. When codes of practice dictate the use of terminal guards (Fig. 12) ‘Multifit’ accessory part no. 720627901 can be used (NOTE: This is not compatible with Flue Deflector referred to below). 2. There must be a clearance of at least 50mm between any part of the terminal and the guard. 3. When ordering a terminal guard, quote the appliance name and model number. 4. The flue terminal guard should be positioned centrally over the terminal and fixed as illustrated. Fig. 12 Flue Deflector 5.3.12 Flue/Chimney Deflector 1. Push the flue deflector over the terminal end. It may point upwards as shown, or up to 45° either way from vertical. Secure the deflector to the terminal with screws provided (Fig. 13). Fig. 13 G U I DA N C E N OT E S 5.3.13 Flue/Chimney Accessories For full details of Flue Accessories (elbows, extensions, clamps etc.) refer to the Flue Accessories & Fitting Guide supplied in the literature pack. Flue Accessories & Fitting Guide Ø 60/100 Flue Systems Ø 80/125 Flue Systems Ø 80/80 Twin Flue Systems Plume Displacement Kit (Ø 60/100 Flue Systems) READ THESE INSTRUCTIONS IN CONJUNCTION WITH THE BOILER INSTALLATION INSTRUCTIONS & Servicing Instructions. 5.4 Transport 1. This product should be lifted and handled by two people. When lifting always keep your back straight and wear protective equipment where necessary. Carrying and lifting equipment should be used as required. e.g. when install in a loft. © Baxi Heating UK Ltd 2011 34 EcoBlue Advance Combi 7219715 - 03 (04/17) Before Installation 5 Pre-plumbing Fig. 14 5.5 5.5.1 To remove only the Wall jig slide banding to the edge and open flaps. Slide the wall jig out of carton then close the flaps. Slide banding back on. Unpacking & Initial Preparation Unpacking. 1. See ‘Section 2.3.1 Handling’ before unpacking or lifting the boiler. 2. Follow the procedure on the carton to unpack the boiler or see Fig. 14a. 3. If pre-plumbing (Fig. 14) - the wall jig and fitting kit can be removed without removing the carton sleeve. Simply slide banding to the edge and open the perforated flap, lift out the jig, fitting kit and instructions. If the boiler is to be install at a later date, close the flap and reposition the banding straps, the boiler can now be store safely away. Remove Sealing Caps from under the Boiler before lifting into position A small amount of water may drain from the boiler in the upright position. 5.5.2 SNAP OFF Initial Preparation 1. After considering the location position the fixing template on the wall ensuring it is level both horizontally and vertically. Fig. 14a 2. Mark the position of the fixing slots for the wall mounting plate indicated on the template. Insert Sealing Washers 130mm 3. Mark the position of the centre of the flue hole (rear exit). For side flue exit, mark as shown (Fig. 15). 4. If required, mark the position of the gas and water pipes. Remove the template (Fig. 17). LIFT HERE BOTH SIDES For Side Flue Exit Fig. 15 6. Drill the wall as previously marked to accept the wall plugs supplied. Secure the wall mounting plate using the fixing screws. Part No. 7212144 Side Flue Centre Line 175 mm Minimum Clearance 116mm Dia Minimum Aperture For Flue Tube Vertical Flue Centre Line 177 mm Boiler Wall Mounting Plate Fixing Slots Ø 8 mm 50 mm 7. Using a spirit level ensure that the plate is level before finally tightening the screws (Fig. 16). Profile of Outercase 5 mm Minimum Side Clearance 5 mm Minimum Side Clearance 3/4” BSP Connections Condensate Drain 50 mm 45 mm Pressure Relief Valve (15mm) 65 mm Heating Flow 200 mm (22mm) Recommended 150 mm Minimum Clearance 5. Cut the hole for the flue (minimum diameter 116mm). 65 mm Hot Water Outlet (15mm) 65 mm Gas Inlet (22mm) 65 mm Cold Water Inlet (15mm) 30 95 mm Heating Return (22mm) 8. Connect the gas and water pipes to the valves on the wall mounting plate using the copper tails supplied. Ensure that the sealing washers are fitted between the connections. Part No. 7212144 DRAFT A NOTE: 40kW models ONLY - ensure the flow restrictor is inserted in cold water inlet connection (Fig. 16). On other models the restrictor is factory fitted internally. Wall Template Fig. 17 Fit the filling loop as described in the instructions supplied with it. 5.5.3 Heating Flow Fig.16 7219715 - 03 (04/17) Flow Restrictor (40 kW model only) Flushing 1. Flush thoroughly and treat the system according to guidance given in B.S. 7593. EcoBlue Advance Combi 35 5 Before Installation 5.6 Stop Valve Double Check Valve 5.6.1 Stop Valve System Filling and Pressurising 1. A filling point connection on the central heating return pipework must be provided to facilitate initial filling and pressurising and also any subsequent water loss replacement/refilling. 2. A filling loop is supplied with the boiler. Follow the instructions provided with it. Temporary Loop DHW Mains Inlet Fig. 18 Connecting Diagrams CH Return”. Other Tap Outlets Expansion Vessel* Boiler Check Valve* Pressure Reducer Valve*. 5.6.2 To Hot Taps Domestic Hot Water Circuit 1. All DHW circuits, connections, fittings, etc. should be fully in accordance with relevant standards and water supply regulations. Stop Tap Fig. 19 *See 5.6.2. for instances when these items may be required In instances where the mains water supply incorporates a non-returnreturn. Where Low Flow Taps or Fittings are intended to be used in the DHW system connected to a Baxi EcoBlue Combi it is strongly recommended that the DHW flow rate DOES NOT fall below 2.5l/min. This will ensure reliable operation of the DHW function. 36 EcoBlue Advance Combi. If a check valve, loose jumpered stop cock, water meter or water treatment device is fitted (or may be in the future) to the wholesome water supply connected to the boiler domestic hot water (DHW) inlet supply then a suitable expansion device may be required.. 7219715 - 03 (04/17) Installation 6 6 Installation 6.1 Engage the Boiler Mounting Bracket on the Boiler into the Retaining Lugs using the Aligning Lugs for position General 1. Remove the sealing caps from the boiler connections including the condensate trap. A small amount of water may drain from the boiler once the caps are removed. Retaining Lugs Remove Sealing Caps from under the Boiler before lifting into position 2. Lift the boiler as indicated by the shaded areas. The boiler should be lifted by TWO PEOPLE. Engage the mounting bracket at the top rear of the boiler into the retaining lugs on the wall jig using the aligning lugs for position (Fig.21) (see ‘Handling’ section 2.3.1). 3. Insert the sealing washers between the valves and pipes on the wall jig and the boiler connections. Aligning Lugs 4. Tighten all the connections. Sealing Washers (x 5) 6.2 6.2.1 Bottom Polystyrene Fitting the Pressure Relief Discharge Pipe 1. Remove the discharge pipe from the kit. 2. Determine the routing of the discharge pipe in the vicinity of the boiler. Make up as much of the pipework as is practical, including the discharge pipe supplied. Fig. 21 Lift Here Both Sides Assembly When the Boiler Mounting Bracket on the Boiler is in position on the Retaining Lugs, the bottom polystyrene may be discarded allowing the boiler to swing into position Make all soldered joints before connecting to the pressure relief valve. Do not adjust the position of the valve. The discharge pipe must be installed before pressurising the system. 3. The pipework must be at least 15mm diameter and run continuously downwards to a discharge point outside the building. See section 5.2.9 for further details. 4. Utilising one of the sealing washers, connect the discharge pipe to the adaptor and tighten the nut hand tight, plus 1/4 turn to seal. Pressure Relief Valve 5. Complete the discharge pipework and route it to the outside discharge point. Fig. 22 Discharge Pipe Front Panel and Control Box removed for clarity 7219715 - 03 (04/17) EcoBlue Advance Combi 37 6 Installation Prime Trap by pouring 300ml of water into flue spigot 6.2.2 Connecting the Condensate Drain 1. Remove the blanking cap, and using the elbow supplied, connect the condensate drain pipework to the boiler condensate trap outlet pipe. Ensure the discharge of condensate complies with any national or local regulations in force (see British Gas “Guidance Notes for the Installation of Domestic Gas Condensing Boilers” & HHIC recommendations). 2. The elbow will accept 21.5mm (3/4in) plastic overflow pipe which should generally discharge internally into the household drainage system. If this is not possible, discharge into an outside drain is acceptable. See section 5.3.5 for further details. Front Panel and Control Box removed for clarity RQ ADJ. 3. The boiler condensate trap should be primed by pouring approximately 300ml of water into the flue spigot. Do not allow any water to fall into the air inlet. 1 2 3 EV1 4 Fig. 23 Condensate Trap Connection (elbow supplied) 6.3 6.3.1 Locating Lug Preparation Panel Removal 1. Remove the securing screws from the bottom of the case front panel. 2. Lift the panel slightly to disengage it from the locating lugs on top of the case and remove it. Case Front Panel Case Front Panel Securing Screws 38 EcoBlue Advance Combi 7219715 - 03 (04/17) Installation 6 m 0m 6.4 50 5 31 Air Supply / Flue Gas Connections mm 6.4.1 Connecting the Flue/Chimney HORIZONTAL TELESCOPIC FLUE (concentric 60/100) Terminal Assembly 1. There are two telescopic sections, the terminal assembly and the connection assembly, a roll of sealing tape and two self tapping screws. A 93° elbow is also supplied. Connection Assembly Fig. 24 2. The two sections can be adjusted to provide a length between 315mm and 500mm (Fig. 24) when measured from the flue elbow (there is 40mm engagement into the elbow). 3. Locate the flue elbow on the adaptor at the top of the boiler. Set the elbow to the required orientation (Fig. 25). Wall Thickness. 6. In instances where the dimension ‘X’ (Fig. 25) is between 250mm and 315mm it will be necessary to shorten the terminal assembly by careful cutting to accommodate walls of these thicknesses. (X) Wall Thickness Fig. 25 ‘TOP’ Label ion ‘Y’ 7. To dimension ‘X’ add 40mm. This dimension to be known as ‘Y’. with the telescopic flue (Fig. 28). s en Dim Sealing Tape ‘Peak’ to be uppermost ‘TOP’ Label Fig. 27 Securing Screw Fig. 28 7219715 - 03 (04/17) EcoBlue Advance Combi 39 6 Installation 10. Remove the flue elbow and insert the flue through the hole in the wall. Fit the flue trims if required, and refit the elbow to the boiler adaptor, ensuring that it is pushed fully in. Secure the elbow with the screws supplied in the boiler fitting kit (Fig. 29). Boiler Elbow Apply the lubricant supplied for ease of assembly (do not use any other type). Adaptor 11. Draw the flue back through the wall and engage it in the elbow. It may be necessary to lubricate to ease assembly of the elbow and flue (Fig. 30). Ensure elbow is fully engaged into boiler adaptor 12. Ensure that the terminal is positioned with the slots to the bottom (Fig. 31). Secure to the elbow with the screws supplied with the telescopic flue (Fig. 30). Fig. 29 It is essential that the flue terminal is fitted as shown to ensure correct boiler operation and prevent water entering the flue. 13. Make good between the wall and air duct outside the building, appropriate to the wall construction and fire rating. Apply the lubricant supplied for ease of assembly (do not use any other type). 14. If necessary fit a terminal guard (see Section 5.3.11). Ensure Flue is fully engaged into Elbow There must be no part of the air duct (white tube) visible outside the property. Fig. 30 Slots at bottom Fig. 31 40 EcoBlue Advance Combi 7219715 - 03 (04/17) Installation 6. • These points must be considered when initially wiring the boiler to the installation, and if replacing any wiring during the service life of the boiler. 6.5 6.5.1 Electrical Connections Electrical Connections of the appliance The boiler must be connected to the mains fused 3A 230V 50HZ supply & control system using cable of 3 core 0.75mm 3183Y multi strand flexible type (see IMPORTANT note opposite). 1. See Section 5.2.2. for details of the electrical supply. Undo the securing screws and lift the case front panel off. 2. Hinge the control box downwards. Disengage the securing tabs and open the terminal block cover. (Fig. 34). 3. If the mains cable fitted is not long enough slacken the gland nut in the right of the boiler lower panel and pass the new mains cable through it. Remove the grommet adjacent to the gland nut, pierce the diaphragm and insert the cable from the external control system. 4. Leave sufficient slack in the cables to allow the control box to be hinged fully open. Tighten the gland nut and refit the grommet and gland nut. 5. Connect the Earth, Permanent Live and Neutral wires to the terminal strip. Control Box Both the Permanent Live and Neutral connections are fused. Fused Spur L Fig. 34 N 6. Refer to the instructions supplied with the external control(s). Any thermostat must be suitable for 230V switching. Room ‘Stat N Terminal M1 230V b 1 bk 2 230V g/y b N 7. Remove the link between connections 1 & 2. The 230V supply at connection 2 must be connected to the thermostat. The switched output from the thermostat must be connected to connection 1. (Figs. 35 & 36). If the room thermostat being used incorporates an anticipator it MUST be wired as shown in Figs. 35 & 36. 8. Replace the terminal block cover. br L The 230V switched signal for external controls (Frost Stat - Room Stat - Timer) must always be taken from terminal 2 at the boiler. Live, Neutral and Earth to power these controls must be taken from the Fused Spur. For the Frost Stat to operate the boiler MUST BE IN CENTRAL HEATING MODE i.e. symbol shown. Fig. 35 Frost Thermostat Pipe Thermostat Fused Spur L N Room ‘Stat 9. Engage the front panel onto the locating lugs on top of the case & secure with the securing screws at the bottom of the case. 230V N Terminal M1 230V b 1 230V g/y External Clock N L Fig. 36 7219715 - 03 (04/17) bk 2 b br 6.5.2 Connecting External Devices 1. See Section 6.7.2. for details of fitting the optional outdoor sensor accessory. 6.6 Filling the Installation 1. See Section 5.2.6 and 5.6.1 for details of flushing and filling the installation. EcoBlue Advance Combi 41 6 Installation 6.7 6.7.1 External Controls Installation of External Sensors 1. Various Sensors are available. 6.7.2 1/2 H 2.5m Min Optional Outdoor Sensor Full instructions are provided with the Outdoor Sensor Kit ! H Positioning the Sensor 1. The sensor must be fixed to an external wall surface of the property it is serving. The wall must face north or west. West North DO NOT position it on a south facing wall in direct sunlight ! N E X W 2. The sensor should be approximately half the height of the living space of the property, and a minimum of 2.5m above ground level. 3. It must be positioned away from any sources of heat or cooling (e.g. flue terminal) to ensure accurate operation. Siting the sensor above doors and windows, adjacent to vents and close to eaves should be avoided. S X Connecting the Sensor 1. Ensure the electrical supply to the boiler is isolated. Undo the securing screws and lift the case front panel off. 2. Hinge the control box downwards. Disengage the securing tabs and open the terminal block cover. 3. Remove one of the grommets in the boiler lower panel, pierce the diaphragm and insert the wires from the outdoor sensor. 4. Leave sufficient slack in the wires to allow the control box to be hinged fully open. Refit the grommet. 5. Connect the wires from the outdoor sensor to positions 4 & 5 on M2 as shown. Refit the cover. From Relay Box 10 9 5 4 10 9 87 65 43 21 M2 Low Voltage Terminal Block Setting the Sensor Curve 1. With the outdoor sensor fitted, the boiler central heating flow temperature is adjusted automatically to accommodate the change in heat required to optimise the efficient performance of the boiler whilst maintaining a comfortable room temperature. The central heating buttons on the boiler adjust a “simulated room temperature” used for this optimisation. 2. This functionality requires the setting of three parameters on the boiler, to suit the heating system and the optimisation can be adjusted by the user with the central heating control buttons on the boiler control panel. Full instructions are provided with the Outdoor Sensor Kit ! Continue with the installation and commissioning of the boiler as described in this manual. 42 EcoBlue Advance Combi 7219715 - 03 (04/17) Commissioning 7 7 Commissioning Automatic Air Vent 7.1 Cap General 1. Reference should be made to BS:EN 12828, 12831 & 14336 when commissioning the boiler. Ensure that the condensate drain trap has been primed - see Section 6.2.2. paragraph 3. 2. At the time of commissioning, complete all relevant sections of the Benchmark Checklist at the rear of this publication. 3. Open the mains water supply to the boiler and all hot water taps to purge the DHW system. 4. Ensure that the filling loop is connected and open, then open the heating flow and return valves on the boiler. Ensure that the cap on the automatic air vent on the pump body is opened (Fig. 37). Fig. 37 Pump Control Box removed for clarity 5. The system must be flushed in accordance with BS 7593 (see Section 5.2.6)". 7.2 Checklist before Commissioning 7.2.1 Preliminary Electrical Checks R /R /P + + 1. Prior to commissioning the boiler preliminary electrical system checks should be carried out. 2. These should be performed using a suitable meter, and include checks for Earth Continuity, Resistance to Earth, Short Circuit and Polarity. 7.2.2 2 1. Checked: 1 3 4 0 bar Digital pressure reading which can be accessed by via the button and scrolling to setting ‘5’. Note there may be a slight difference between the digital & gauge reading depending on boiler operating mode. 7219715 - 03 (04/17) Checks Heating Pressure Gauge That the boiler has been installed in accordance with these instructions. The integrity of the flue system and the flue seals. The integrity of the boiler combustion circuit and the relevant seals. Fig. 38 EcoBlue Advance Combi 43 7 Commissioning 7.3 7.3.1 Commissioning Procedure De-Aeration Function R The display backlight remains lit approx. 10 minutes. If the backlight goes out during commissioning it does not mean that the process has been completed. /R /P + + This procedure MUST be carried out ! 1. Ensure the gas is turned Off! Turn the power to the boiler ON. The software version will be displayed, followed by , then •• flashing briefly before displaying the ‘Standby’ symbol . These buttons for De-Aeration Fig. 39 2. Press & together and hold for at least 6 seconds until is briefly displayed followed by . FUNCTION INTERRUPTION • If the De-aeration is interrupted due to a fault the pump will cease to circulate but the function timer (approx.10 minutes) will continue to run. For this reason it is recommended to monitor the boiler display during De-aeration. If the fault cannot be rectified quickly the function must be restarted to ensure complete De-aeration. • In the event of a loss of power the De-aeration function needs to be restarted once the power is re-established. • In the event of low water pressure the fault code E118 will be displayed, along with the flashing and symbols. This error can be rectified by repressurising the system to at least 1.0 bar. The pump will restart automatically once the water pressure is successfully re-established and will reappear in the display. • The De-aeration function can be repeated as necessary until all air is expelled from the system. Flue Sampling 3. The De-Aeration Function is now activated. The boiler pump will run for approx. 10 minutes. During this time the pump will alternate on and off and the diverter valve will switch between heating & hot water to purge air from the system. 4. At the end of the process the boiler will return to the ‘Standby’ position. 7.4 7.4.1 Gas Settings Checking Combustion - ‘Chimney Sweep’ Mode The case front panel must be fitted when checking combustion. Ensure the system is cold & the gas supply turned on & purged. The person carrying out a combustion measurement should have been assessed as competent in the use of a flue gas analyser and the interpretation of the results. See Section 10.1.3. Important: Allow the combustion to stabilise before inserting the Combustion Analyser Probe into the Test Point. This will prevent saturation of the analyser. R& 1. Press together and hold for at least 6 seconds. is displayed briefly followed by and the current setting point (eg. , or ) flashing alternately. Important: There may be a delay before the boiler fires. Point Plug Analyser Probe 2. To adjust the boiler input setting, press or button and the current setting point will flash (eg. , or ). Press the or button again to alter the boiler input setting. After refitting the sampling point plug ensure there is no leakage of products = MAX. HEATING input, = MAX. DHW input, = MIN. input 4. The combustion (CO level & CO/CO2 ratio) must be measured and recorded at MAXIMUM DHW input & MINIMUM input. 5. Follow the flow chart on the next page to comply with the requirement to check combustion on commissioning.. R /R /P + + 6. Press These buttons for ‘Chimney Sweep’ R& 7. Press the again for at least 6 seconds to exit. R once to bring the boiler out of ‘Standby’ mode. 8. Use the button to toggle through the CH settings to activate the required modes. 44 EcoBlue Advance Combi and DHW 7219715 - 03 (04/17) Commissioning Set Boiler to Maximum Rate (see 7.4.1) Allow the combustion to stabilise. Do not insert probe to avoid ‘flooding’ the analyser. 7.4.1 7 Checking Combustion (cont) 9. Follow the flow chart opposite. 0344 871 1545 for advice. The appliance MUST NOT be commissioned until all problems are identified and resolved. Perform Flue Integrity Combustion Check Insert the analyser probe into the air inlet test point, allowing the reading to stabilise. No Is O2 20.6% and CO2 < 0.2% ? Yes Yes Check CO & Combustion Ratio at Maximum Rate Whilst the boiler is still operating at maximum insert the analyser probe into the flue gas test point, allowing the reading to stabilise. (see 7.4.1) 0344 871 1545 0344 871 1545. 7219715 - 03 (04/17). EcoBlue Advance Combi 45 7 Commissioning 7.5 7.5.1 Configuring the System Check the Operational (Working Gas Inlet Pressure & Gas Rate). 1. Press & together and hold for at least 6 seconds. is displayed briefly followed by & flashing alternately. ‘3’ represents MAXIMUM HEATING input. 2. Press or MAXIMUM DHW input. Control Box removed for clarity to adjust the input. represents 3. With the boiler operating in the maximum rate condition check that the operational (working) gas pressure at the inlet gas pressure test point is in accordance with B.S. 6798 & B.S. 6891. This must be AT LEAST 17mb ! 4. Ensure that this inlet pressure can be obtained with all other gas appliances in the property working. The pressure should be measured at the test point on the gas cock (Fig. 41). Gas Cock Inlet Pressure Test Point Measure the Gas Rate 5. With any other appliances & pilot lights turned OFF the gas rate can be measured. It should be:Natural Gas 24 model 2.61 m3/h 28 model 3.05 m3/h 33 model 3.59 m3/h 40 model 4.35 m3/h Fig. 41 P. OUT VENT RQ ADJ. Gas Valve EV2 6. Press & to exit the function. together and hold for at least 6 seconds 1 2 17-21 mbar 3 EV1 4 Gas Cock 7.6.1 18-22 mbar 19-23 mbar Gas Meter Fig. 41a Working Gas Pressures If the pressure drops are greater than shown in Fig. 41a (above) a problem with the pipework or connections is indicated. Permissible pressure drop across system pipework < 1 mbar. 46 EcoBlue Advance Combi 7.6 Final Instructions Handover 1. Carefully read and complete all sections of the Benchmark Commissioning Checklist at the rear of this publication that are relevant to the boiler and installation. These details will be required in the event of any warranty work. The warranty will be invalidated if the Benchmark section is incomplete. 2. The publication must be handed to the user for safe keeping and each subsequent regular service visit recorded. 3. Hand over the User’s Operating, Installation and Servicing Instructions, giving advice on the necessity of regular servicing. 4 . For IE, it is necessary to complete a “Declaration of Conformity” to indicate compliance with I.S. 813. An example of this is given in I.S. 813 “Domestic Gas Installations”. This is in addition to the Benchmark Commissioning Checklist. 7219715 - 03 (04/17) Commissioning 7 5. Set the central heating and hot water temperatures to the requirements of the user. Instruct the user in the operation of the boiler and system. Information Display 6. Instruct the user in the operation of the boiler controls. The button can be pressed so that the display shows the following information:- 7. Demonstrate to the user the action required if a gas leak occurs or is suspected. Show them how to turn off the gas supply at the meter control, and advise them not to operate electric light or power switched, and to ventilate the property. ‘00’ alternates with Sub-Code (only when fault on boiler) or ‘000’ ‘01’ alternates with CH Flow Temperature ‘02’ alternates with Outside Temperature (where Sensor fitted) ‘03’ alternates with DHW Temperature ‘04’ alternates with DHW Temperature ‘05’ alternates with System Water Pressure 8. Show the user the location of the system control isolation switch, and demonstrate its operation. ‘06’ alternates with CH Return Temperature ‘07’ alternates with Flue Temperature 9. Advise the user that they may observe a plume of vapour from the flue terminal, and that it is part of the normal operation of the boiler. ‘08’ - not used 7.6.2 System Draining 1. If at any time after installation it is necessary to drain & refill the central heating system (e.g. when replacing a radiator) the De-Aeration Function should be activated. Re-pressurise the system to 1.5 bar ‘20’ alternates with Manufacturer information Depending upon boiler model and any system controls connected to the appliance, not all information codes will be displayed and some that are will not have a value. Press R to return to the normal display. 2. On refilling the system ensure that there is no heating or hot water demand, but that there is power to the boiler. It is also recommended that the gas supply is turned off to prevent inadvertent ignition of the burner. recommission the appliance and check that the inhibitor concentration is sufficient. See Section 7.3.1 for more detail. 7219715 - 03 (04/17) EcoBlue Advance Combi 47 8 Operation 8 Operation 8.1 General 1. It is the responsibility of the installer to instruct the user in the day to day operation of the boiler and controls and to hand over the completed Benchmark Checklist at the back of this manual. 2. Set the central heating and hot water temperatures to the requirements of the user. Instruct the user in the operation of the boiler and system. 3. The temperature on the boiler must be set to a higher temperature than the cylinder thermostat to achieve the required hot water demand. 4. Instruct the user in the operation of the boiler and system controls. 5. Demonstrate to the user the action required if a gas leak occurs or is suspected. Show them how to turn off the gas supply at the meter control, and advise them not to operate electric light or power switched, and to ventilate the property. 6. Show the user the location of the system control isolation switch, and demonstrate its operation. 7. Advise the user that they may observe a plume of vapour from the flue terminal, and that it is part of the normal operation of the boiler. 8. The method of repressurising the primary system should be demonstrated. 9. If at any time after installation it is necessary to drain & refill the central heating system (e.g. when replacing a radiator) the De-Aeration Function should be activated (see 7.6.2). 8.2 To Start-up Switch on the boiler at the fused spur unit and ensure that the time control is in the on position and any other controls (e.g. room thermostat) are calling for heat. Press the R once to bring the boiler out of Standby mode. The boiler will begin its start sequence. 8.3 To Shutdown Isolate the mains power supply at the fused spur unit. Isolate the gas supply at the boiler valve. 48 EcoBlue Advance Combi 7219715 - 03 (04/17) Operation Boiler Information View Standby - Reset - Esc 8.4 8 Use of the Control Panel Key to Controls R Standby - Reset - Esc Button R /R /P + Boiler Information View Button + Increase CH Temperature Button Decrease CH Temperature Button Domestic Hot Water Temperature Adjustment Central Heating Temperature Adjustment Increase DHW Temperature Button Decrease DHW Temperature Button Summer - Winter Heating Only Mode Summer / Winter / Only Heating Mode Button Display Screen Summer - Winter - Heating Only Mode R 1. Press button until the required mode appears:Summer - DHW only mode Winter - DHW & CH mode Heating Only - Only CH mode R To increase or decrease the boiler temperature 1. Press to increase the Central Heating temperature. 2. Press to decrease the Central Heating temperature. An overheat thermostat (NTC) is positioned in the heat exchanger which shuts down the appliance if the boiler temperature exceeds 100°C. Press R button to re-establish normal operating conditions. R To adjust the domestic hot water temperature 1. Press temperature. to increase the Domestic Hot Water 2. Press temperature. to decrease the Domestic Hot Water 8.5 Frost Protection 1. The boiler incorporates an integral frost protection feature that will operate in both Central Heating and Domestic Hot Water modes, and also when in standby ( displayed) see section 4.2.3 Boiler Frost Protection Mode. 7219715 - 03 (04/17) EcoBlue Advance Combi 49 9 Settings 9 Settings 9.1 Parameters The operating parameters of the boiler have been factory set to suit most systems. 50 EcoBlue Advance Combi 7219715 - 03 (04/17) Maintenance 10 10 Maintenance. 10.1 General. During routine servicing, and after any maintenance or change of part of the combustion circuit, the following must be checked:• The integrity of the complete flue system and the flue seals by checking air inlet sample to eliminate the possibility of recirculation. O2 20.6% & CO2 < 0.2% • The integrity of the boiler combustion circuit and relevant seals. • The operational gas inlet pressure and the gas rate as described in Section 7.5.1. • The combustion performance as described in ‘Check the Combustion Performance’ below. 3. Competence to carry out Checking Combustion Performance B.S. 6798 ‘Specification for Installation & Maintenance of Gas Fired Boilers not exceeding 70kWh’. Flue Sampling Point Air Sampling Point After refitting the sampling point plug ensure there is no leakage of products • Competence can be demonstrated by satisfactory completion of the CPA1 ACS assessment, which covers the use of electronic portable combustion gas analysers in accordance with BS 7967, Parts 1 to 4. 4. Check the Combustion Performance (CO/CO2 ratio) Set the boiler to operate at maximum rate as described in Section 7.4. 5. Remove the plug from the flue sampling point, insert the analyser probe and obtain the CO/CO2 ratio. This must be less than 0.004. If the combustion reading (CO/CO2 ratio) is greater than this, and the integrity of the complete flue system and combustion circuit seals has been verified, and the inlet gas pressure and gas rate are satisfactory either:• Perform the ‘Standard Inspection and Maintenance’ (Section 10.2) & re-check. • Perform ‘Setting the Gas Valve’ (Section 10.3.25) & re-check. • Replace and set the gas valve (Sections 10.3.24 & 25) & re-check. 7219715 - 03 (04/17) EcoBlue Advance Combi 51 10 Maintenance 10.2 Standard Inspection and Maintenance Operation When performing any inspection or maintenance personal protective equipment must be used where appropriate. 1. Ensure that the boiler is cool and that both the gas and electrical supplies to the boiler are isolated. 2. Remove the screws securing the case front panel. Lift the panel slightly to disengage it from the tabs on top of the case (Fig. 43). Hinge down the control box. 3. To aid access disconnect the igniter plug & disconnect the two pipes from the top of the condensate trap and the drain pipe from the trap outlet. Undo the screw & washer securing the trap to the boiler lower panel. Case Front Panel Securing Screws Control Box removed for clarity 4. Disengage the lip on the trap from the slotted bracket and remove the trap. Take care not to spill any residual condensate on the controls and P.C.B. Thoroughly rinse the trap and examine the gasket on the trap base, replacing if necessary. 5. Remove the clip securing the gas feed pipe to the air/gas venturi. Disconnect the pipe. Do not break the joint between the pipe and gas valve unless necessary. 6. Note their position and disconnect the electrode leads and the fan electrical plugs (Fig. 46). Condensate Trap 7. Undo the four 10mm nuts retaining the combustion box cover to the heat exchanger. Gasket 8. Carefully draw the fan, collector and cover assembly forward (Fig. 46). Condensate Drain Pipe Connection 9. Clean any debris from the heat exchanger and check that the gaps between the coils are clear. 10. Inspect the burner, electrodes position (Fig. 45) and insulation, cleaning or replacing if necessary. Clean any dirt or dust from the boiler. Fan, Collector and Cover Assembly 11. Carefully examine all seals, insulation & gaskets, replacing as necessary. Look for any evidence of leaks or corrosion, and if found determine & rectify the cause. Air Box Electrode Position Fig. 45 Fig. 46 Securing Clip Flame Sensing Electrode Spark Ignition Electrode 7.5 ±1 Gas Feed Pipe 4 ±0.5 12. Prime the trap and reconnect the pipes to the top. Reassemble in reverse order. Electrode Leads 10 ±1 52 EcoBlue Advance Combi 7219715 - 03 (04/17) Maintenance 10 Expansion Vessel Charge - 1.0 bar 13.. Expansion Vessel Valve Control Box removed for clarity A right angled valve extension will aid checking and repressurising. Hall Effect Sensor DHW Filter (Fig. 48) 14. If the flow of domestic hot water is diminished, it may be necessary to clean the filter. 15. Turn the cold mains isolation cock (Fig. 47) off and draw off from a hot tap. Hydraulic Inlet Assembly 16. Disconnect the pump cable, remove the retaining clip and extract the filter cartridge and rinse thoroughly in clean water. Reassemble and check the flow. View underneath appliance Restrictor (not on 40 models) Filter 17. Check the operation of the Safety Pressure Relief Valve. Simulate ‘Flame Failure’ fault by isolating the supply at gas cock and operating the boiler. 133 should be displayed. 18. Reassemble the appliance in reverse order, ensuring the front case panel is securely fitted. Recommission the boiler. 19. Complete the relevant Service Interval Record section of the Benchmark Commissioning Checklist at the rear of this publication and then hand it back to the user. Fig. 48 DHW Isolation Cock 10.3 Specific Maintenance Operations Changing Components Fig. 47. Sealing Gasket Spark Ignition Electrode See Section 10.2 paragraph 2 for removal of case panel door etc. 10.3.1 Electrode Leads Sealing Gasket Flame Sensing Electrode Fig. 49 7219715 - 03 (04/17) Spark Ignition & Flame Sensing Electrodes 1. Note their position and disconnect the electrode leads. Remove the retaining screws securing each of the electrodes to the combustion box cover and remove the electrodes, noting their orientation. 2. Check the condition of the sealing gaskets and replace if necessary. Reassemble in reverse order (Fig. 49). 3. If satisfactory combustion readings are not obtained ensure the electrode position is correct and perform the combustion check again. EcoBlue Advance Combi 53 10 Maintenance 10.3.2 Fan (Figs. 50 & 51) 1. Remove the clip securing the gas feed pipe to the air/gas venturi. Disconnect the pipe. 2. Undo the screws securing the air/gas collector to the extension piece and disconnect the fan electrical plugs (Fig. 50). 3. Remove the collector and fan assembly, being careful to retain the gasket. 4. Undo the securing screw and remove the airbox, disengaging it from the fan venturi. Undo the screws securing the fan to the collector. Retain the gasket. 5. Undo the screws securing the venturi to the fan (noting its position) and transfer to the new fan, replacing the seal if necessary. Examine the gasket(s) and replace if necessary. Control Box removed for clarity Cover Gasket Air/Gas Collector Air Box Fan Air/Gas Venturi Clip Gas Feed Pipe Fig. 50 10.3.3 Air/Gas Venturi (Figs. 50 & 51) 1. Undo the securing screw and remove the airbox, disengaging it from the fan venturi. Remove the clip securing the gas feed pipe to the venturi. 2. Undo the screws securing the collector to the extension piece and disconnect the fan electrical plugs. Seal Fan Venturi Fig. 51 3. Remove the collector and fan assembly, being careful to retain the gasket. 4. Undo the screws securing the venturi to the fan (noting its position) and fit the new venturi, replacing the seal if necessary. Examine the gasket and replace if necessary. 5. After changing the venturi check the combustion - see Section 7.4.1. 54 EcoBlue Advance Combi 7219715 - 03 (04/17) Maintenance 10 10.3.4 Burner Cover 1. Undo the securing screw and remove the airbox, disengaging it from the fan venturi. Remove the clip securing the gas feed pipe to the air/gas venturi and disconnect the fan electrical plugs. Burner Burner Gasket 2. Undo the screws securing the air/gas collector to the extension piece. Note its position and remove the extension piece (where fitted) from the cover. Extension Piece (note orientation) 3. Undo the screws securing the burner. Withdraw the burner from the cover and replace with the new one. Gasket 4. Examine the gasket(s), replacing if necessary. Note that the gaskets are not the same ! 5. After changing the burner check the combustion. Air Box Fig. 52 Air/Gas Collector 10.3.5 Insulation (Fig. 53) 1. Undo the securing screw and remove the airbox, disengaging it from the fan venturi. Remove the clip securing the gas feed pipe to the air/gas venturi and disconnect the fan electrical plugs. 2. Remove the electrodes as described in section 13.1. Control Box removed for clarity 3. Undo the nuts holding the cover to the heat exchanger. Draw the air/gas collector, fan and cover assembly away. Heat Exchanger Rear Insulation 4. Remove the cover insulation piece. 5. Fit the new insulation carefully over the burner and align it with the slots for the electrodes. Seal Air/Gas Collector Spark Ignition Electrode Cover Insulation 6. If the rear insulation requires replacement, remove it and all debris from the heat exchanger. Also it may be necessary to separately remove the spring clip from the pin in the centre of the heat exchanger and the ‘L’ shaped clips embedded in the insulation. 7. Do not remove the shrink-wrapped coating from the replacement rear insulation. Keep the insulation vertical and press firmly into position. Air Box Electrode Leads Flame Sensing Electrode 7219715 - 03 (04/17) Fig. 53 8. Examine the cover seal and replace if necessary. Reassemble in reverse order. EcoBlue Advance Combi 55 10 Maintenance 10.3.6 Electrical Plug Flue Sensor Flue Sensor 1. Ease the retaining tab on the sensor away and disconnect the electrical plug. 3. Turn the sensor 90° anticlockwise to remove - it is a bayonet connection. 4. Reassemble in reverse order. 10.3.7 Fig. 54 Igniter (Fig. 54) 1. Note the position of the ignition & sensing leads and disconnect them. Also disconnect the igniter feed plug. 2. Undo the screw securing the igniter mounting bracket to the left hand side panel. Remove the igniter and bracket and transfer the bracket to the new igniter. 3. Reassemble in reverse order, reconnecting the plug and leads to the igniter. Igniter 10.3.8 Heating Flow & Return Sensors (Fig. 55) Control Box removed for clarity Mounting Bracket Spark Connection (‘A’) Earth Connection (‘B’) 1. There is one sensor on the flow (red wires) and one sensor on the return (blue wires). Note: For access to the return sensor first remove the fan and air/gas collector (see 10.3.2). 2. After noting the position prise the sensor clip off the pipe and disconnect the plug. 3. Connect the plug to the new sensor and ease the clip onto the pipe as close to the heat exchanger as possible. Heating Flow Sensor 10.3.9 Safety Thermostat (Fig. 56) 1. Pull the two spade connections off the safety thermostat. Fig. 55 2. Remove the screws securing the thermostat to the mounting plate on the flow pipe. Safety Thermostat Pump, Gas Valve Assemblies and Pipework removed for clarity Retaining Clip 3. Reassemble in reverse order, ensuring that the connections are pushed fully on. 10.3.10 DHW NTC Sensor (Fig. 56) 1. Turn off the mains cold water supply tap and draw off the residual domestic hot water. 2. Ease the retaining tab on the sensor away and disconnect the electrical plug. 3. Unscrew the sensor from the hydraulic outlet assembly. Examine the sealing washer, replacing if necessary. Plug 4. Reassemble in reverse order. The plug will only fit one way. DHW NTC Sensor Hydraulic Pressure Sensor Fig. 56 56 EcoBlue Advance Combi 7219715 - 03 (04/17) Maintenance 10 10.3.11 Pump - Head Only (Fig. 57) 1. Drain the boiler primary circuit and disconnect the electrical plug from the pump motor. 2. Remove the socket head screws securing the pump head to the body and draw the head away. 3. Reassemble in reverse order. 10.3.12 Pump - Complete (Fig. 58) 1. Drain the boiler primary circuit and disconnect the electrical plug from the pump motor. 2. Undo the two screws securing the body to the pipe and manifold and draw the pump forwards. 3. Unscrew the automatic air vent from the pump body. Control Box removed for clarity Socket Headed Screw 4. Examine the ‘O’ ring seals on the return pipe and manifold, replacing if necessary. 5. Fit the air vent to the pump body and reassemble in reverse order. 10.3.13 Automatic Air Vent (Fig. 58) Pump Body 1. Drain the boiler primary circuit and unscrew the automatic air vent from the pump body. Pump Head 2. Examine the ‘O’ ring seal, replacing if necessary and fit it to the new automatic air vent. Fig. 57 3. Reassemble in reverse order. Automatic Air Vent Pump Flow Pipe Fig. 58 7219715 - 03 (04/17) EcoBlue Advance Combi 57 10 Maintenance 10.3.14 Safety Pressure Relief Valve (Fig. 59) 1. Close the flow and return isolation taps and drain the primary circuit. 2. Disconnect the discharge pipework from the valve. Remove the sealing grommet. ‘O’ ring seal 3. Slacken the grub screw securing the pressure relief valve and remove from the inlet assembly. Grub Screw 4. On reassembly ensure that the ‘O’ ring is in place and the sealing grommet is correctly refitted to maintain the integrity of the case seal. Safety Pressure Relief Valve 10.3.15 Heating Pressure Gauge (Figs. 60 & 61) Discharge Pipe 1. Close the flow and return isolation taps and drain the primary circuit. Fig. 59 2. Hinge the control box downwards. Remove the clip securing the pressure gauge capillary to the hydraulic assembly. 3. Disengage the securing tabs and open the terminal block cover. Prise apart the clips that hold the gauge cap. 4. Remove the gauge, cap and gasket. 5. Fit the new gauge, ensuring that the capillary is routed to prevent any sharp bends. Locate the ridge on the gauge body in the slot in the control box. Clip Control Box removed for clarity 6. Reassemble in reverse order and ensure the gasket is in position to maintain the integrity of the case seal. Heating Pressure Gauge Capillary Fig. 60 Heating Pressure Gauge Fig. 61 58 EcoBlue Advance Combi 7219715 - 03 (04/17) Maintenance 10 10.3.16 Plate Heat Exchanger (Figs. 62 & 63) 1. Close the flow & return isolation taps and the cold mains inlet. Drain the primary circuit and draw off any residual DHW. 2. Refer to Section 10.2 paragraphs 5 to 9 and remove the fan etc. 3. Undo the screws securing the plate heat exchanger to the hydraulic assembly. 4. Withdraw the plate heat exchanger by manoeuvring it to the rear of the boiler, then upwards and to the left to remove. Plate Heat Exchanger Seals 5. There are four rubber seals between the hydraulic assembly and heat exchanger which may need replacement. Control Box removed for clarity 6. Ease the seals out of the hydraulic assembly. Replace carefully, ensuring that the seal is inserted parallel and pushed fully in. 7. When fitting the new heat exchanger note that the right hand location stud is offset towards the centre (Fig. 62). 8. Reassemble in reverse order. Rubber Seal R.H. Stud Fig. 62 Note offset 10.3.17 Hydraulic Pressure Sensor (Fig. 63) 1. Close the flow and return isolation taps and drain the primary circuit. For ease of access remove the fan and collector assembly. Retaining Clip 2. Remove the plug from the sensor and pull the retaining clip upwards. 3. Reassemble in reverse order. Plug Hydraulic Pressure Sensor Fig. 63 7219715 - 03 (04/17) Pump, Gas Valve Assemblies and Pipework removed for clarity EcoBlue Advance Combi 59 10 Maintenance 10.3.18 DHW Flow Regulator & Filter (Fig. 64) 1. Close the cold mains inlet and draw off any residual DHW. 2. Pull off the hall effect sensor. Undo the filter assembly from the inlet/return manifold. 10.3.19 DHW Flow Sensor (‘Hall Effect’ Sensor) (Fig. 65) 1. Pull the sensor off the DHW inlet manifold. 2. Disconnect the plug from the sensor and connect it to the new component. Control Box removed for clarity DHW Flow Sensor 3. Fit the new sensor, ensuring it is correctly oriented and fully engaged over the manifold. DHW Flow Regulator & Filter (‘Hall Effect’ Sensor) 10.3.20 Diverter Valve Motor (Fig. 66) 1. Disconnect the multi-pin plug. Pull off the retaining clip and remove the motor. Fig. 65 2. The motor can now be replaced, 3. When fitting the new motor it will be necessary to hold the unit firmly while depressing the valve return spring. Fig. 64 Hydraulic Inlet Assembly Diverter Valve Motor Multi-pin Plug Retaining Clip Valve Assembly Fig. 66 Pump, Gas Valve Assemblies and Pipework removed for clarity 60 EcoBlue Advance Combi 7219715 - 03 (04/17) Maintenance 10 10.3.21 Main. 4. Undo the 5 securing screws and remove the P.C.B. It is retained at the left by two spring latches and the right hand edge locates in a slot. 5. Reassemble in reverse order, ensuring that the harnesses to the Control P.C.B. and terminal M2 are routed under the Main P.C.B. Check the operation of the boiler. 10.3.22 Boiler Control. Control Box Cover 4. Undo the 5 securing screws and remove the P.C.B. It is retained at the left by two spring latches and the right hand edge locates in a slot. 5. Disconnect the link harness between the Main & Control P.C.B.’s and undo the 4 screws securing the Control P.C.B. 6. Remove the Control P.C.B. and fit the new component. Reassemble in reverse order, ensuring that the harnesses to the Control P.C.B. and terminal M2 are routed under the Main P.C.B. Check the operation of the boiler. Main P.C.B. Boiler Control P.C.B. Fig. 67 7219715 - 03 (04/17) EcoBlue Advance Combi 61 10 Maintenance 10.3.23 Expansion Vessel Lock Nut Expansion Vessel 1. Close the flow and return isolation taps and drain the boiler primary circuit. 2. Undo the nut on the pipe connection at the bottom of the vessel, and slacken the nut on the hydraulic inlet assembly. 3. Remove the screws securing the support bracket, and withdraw the bracket. 4. Whilst supporting the vessel undo and remove the locknut securing the vessel spigot to the boiler top panel. Support Bracket 5. Manoeuvre the vessel out of the boiler. 6. Reassemble in reverse order. 10.3.24 Gas Valve (Fig. 69) Fig. 69 After replacing the valve the CO2 must be checked and adjusted as detailed in Section 10.3.25 Setting the Gas Valve. Only change the valve if a suitable calibrated combustion analyser is available, operated by a competent person - see section 10.1.3. 1. Undo the screw and disconnect the electrical plug. 2. Turn the gas cock off and undo the nut on the gas valve inlet underneath the boiler. 3. Undo the nut on the gas valve outlet. Ease the pipe aside. NOTE: The gas nozzle injector is inserted in the gas valve outlet. 4. Remove the screws securing the gas valve to the boiler bottom panel. Remove the valve. Gas Feed Pipe 5. Transfer the gas nozzle injector to the new valve, ensuring it sits in the valve outlet. Examine the sealing washers, replacing if necessary. 6. Reassemble in reverse order. Washer Gas Nozzle Injector Check gas tightness & CO2 ! Gas Valve Electrical Plug Washer Gas Cock Fig. 68 62 EcoBlue Advance Combi 7219715 - 03 (04/17) Maintenance 10 10.3.25 Setting the Gas Valve (CO2 Check) R /R /P + The CO2 must be only be checked and adjusted to set the valve if a suitable calibrated combustion analyser is available, operated by a competent person - see Section 10.1.3. + 1. The combustion (CO2) may be checked after running the boiler for several minutes. To do this it is necessary to operate the boiler in ‘Chimney Sweep Mode’. ‘Chimney Sweep Mode’ • This function must not be activated whilst the burner is lit. • The case front panel must be fitted when checking combustion. • Ensure the system is cold R& 1. Press together and hold for at least 6 seconds. is displayed briefly followed by flashing alternately with , or . 2. represents MAXIMUM HEATING input, represents MAXIMUM DHW input whilst denotes MINIMUM input. 3. Press or The CO2 should be 8.7% ± 0.2 at MAXIMUM 2. It is possible to alter the CO2 by adjustment of the gas valve. At maximum rate the Throttle Adjustment Screw should be turned, using a suitable 2.5 hexagon key, until the correct reading is obtained (Fig. 100). Turning clockwise will reduce the CO2. Anticlockwise will increase the CO2. 3. The CO2 must then be checked at minimum rate. The CO2 should be 8.4% ± 0.2 at MINIMUM to adjust the input. 4. The valve must be checked and set at MAXIMUM DHW input & MINIMUM input. 4. With the boiler on minimum, the Offset Adjustment Screw must be altered, using a suitable 4mm hexagon key, after removing the cap (Fig. 100). Turning anti-clockwise will reduce the CO2. Clockwise will increase the CO2. Flue Sampling Point Plug 5. Check the Combustion Performance (CO/CO2 ratio). This must be less than 0.004. Analyser Probe Refit the sampling point plug and ensure there is no leakage of products. P. OUT VENT P. R. ADJ . EV2 Offset Adjustment Screw (cap fitted) RQ ADJ. Fig. 70 1 Throttle Adjustment Screw (cover removed) 2 EV1 3 4 Reduce CO2 Increase CO2 at min. rate at min. rate Reduce CO2 Increase CO2 at max. rate at max. rate Fig. 71 If the CO2 is reset at minimum rate it must be rechecked at maximum rate again and adjusted if required. If the CO2 is reset at maximum rate it must be rechecked at minimum rate and adjusted if required. 7219715 - 03 (04/17) Gas Valve Do not turn the adjustment screws more than 1/8 of a turn at a time. Allow the analyser reading to settle before any further adjustment EcoBlue Advance Combi 63 11 Troubleshooting 11 Troubleshooting 11.1 Error Codes Table Of Error Codes 20 Central Heating NTC Fault 28 Flue NTC Fault 40 Central Heating Return NTC Fault 109 R Possible Circulation Fault 110 R Safety Thermostat Operated (pump fault) 111 R Safety Thermostat Operated (over temperature) 117 Primary System Water Pressure Too High 118 Primary System Water Pressure Too Low 125 R Circulation Fault (Primary) 128 Flame Failure (no lock-out) 130 Flue NTC Operated 133 R Interruption Of Gas Supply or Flame Failure 151 R Flame Failure 160 R Fan or Fan Wiring Fault 321 Hot Water NTC Fault 384 False Flame The button can be pressed so that the display shows the following information:‘00’ alternates with Sub-Code (only when fault on boiler) or ‘000’ ‘01’ alternates with CH Flow Temperature ‘02’ alternates with Outside Temperature (where Sensor fitted) ‘03’ alternates with stored DHW Temperature ‘04’ alternates with DHW Temperature ‘05’ alternates with System Water Pressure ‘06’ alternates with CH Return Temperature ‘07’ alternates with Flue Temperature ‘08’ - not used ‘09’ alternates with Collector Temperature ‘20’ alternates with Manufacturer information 1. If a fault occurs on the boiler an error code may be shown by the facia display. 2. The codes are a flashing number, either two or three digit, preceded by the symbol :followed by 20, 28, 40, 160 or 321 indicates possible faulty components. 110 and 111 indicate overheat of the primary system water. 117 is displayed when the primary water pressure is greater than 2.7 bar. Restoring the correct pressure will reset the error. 118 is displayed when the primary water pressure is less than 0.5 bar. Restoring the correct pressure will reset the error. 133, indicates that the gas supply has been interrupted, ignition has failed or the flame has not been detected. 128 is displayed if there has been a flame failure during normal operation. 125 is displayed in either of two situations:i) If within 15 seconds of the burner lighting the boiler temperature has not changed by 1°C. ii) If within 10 minutes of the burner lighting the boiler actual temperature twice exceeds the selected temperature by 30°. In these instances poor primary circulation is indicated. 3. By pressing the 'Reset' button for 1 to 3 seconds when 110, 125 & 133 are displayed it is possible to relight the boiler. 4. If this does not have any effect, or the codes are displayed regularly further investigation is required. 11.2 Fault Finding 1. Check that gas, water and electrical supplies are available at the boiler. 2. Electrical supply = 230V ~ 50 Hz. 3. The preferred minimum gas pressure is 20 mb (NG). 4. Carry out electrical system checks, i.e. Earth Continuity, Resistance to Earth, Short Circuit and Polarity with a suitable meter. NOTE: These checks must be repeated after any servicing or fault finding. 5. Ensure all external controls are calling for heat and check all external and internal fuses. Before any servicing or replacement of parts, ensure the gas and electrical supplies are isolated. 64 EcoBlue Advance Combi 7219715 - 03 (04/17) Troubleshooting Refer to “Illustrated Wiring Diagram” for position of terminals and components Central Heating - Follow operational sequence Turn on mains power The display illuminates NO 11’ Ensure controls are set to demand and verify the contacts are closed NO NO Set Central Heating temperature to Maximum. symbol flashing, pump runs NO Ensure all controls and programmers are calling for heat YES Go to section ‘B’ YES Fan runs at correct speed NO 160 flashing Go to section ‘C’ NO 133 and R flashing YES Spark at ignition electrodes up to 5 seconds & for 4 attempts YES YES Go to section ‘F’. Press the reset button for 1 to 3 seconds YES 133 flashing Go to section ‘G’ Go to section ‘E’ NO Burner lights YES Burner goes out after 5 seconds YES Flame Displayed NO Check polarity YES 109 flashing YES Go to section ‘J’ 125 flashing after 1 min NO 110 or 111 flashing YES Go to section ‘H’ NO Diverter Valve open to Central Heating circuit NO Go to section ‘K’ YES Burner modulates to maintain set temperature NO Check Heating Flow sensor. Go to section ‘D’ YES 130 flashing YES Go to section ‘M’ NO Burner goes out 7219715 - 03 (04/17) YES Fan stops after 15 seconds YES Boiler operation correct EcoBlue Advance Combi 65 11 Troubleshooting Domestic Hot Water - Follow operational sequence Turn on mains power The display illuminates NO’ Go to section ‘L’ NO NO Set Hot Water temperature to Maximum & fully open hot tap. symbol flashing, pump runs Go to section ‘L’ NO NO DHW flow rate greater than 2litres/min YES Burner lights YES YES Fan runs at correct speed NO 160 flashing Go to section ‘C’ NO 133 and R flashing Go to section ‘B’ NO Spark at ignition electrodes up to 5 seconds & for 4 attempts YES Go to section ‘F’. Press the reset button for 1 to 3 seconds YES 133 flashing Go to section ‘G’ Go to section ‘E’ YES NO Burner lights YES Burner goes out after 5 seconds YES Flame Displayed 109 flashing NO YES Check polarity Go to section ‘J’ 125 flashing after 1 min NO 110 or 111 flashing YES Go to section ‘H’ NO 3 Way Valve open to Domestic Hot Water circuit NO Go to section ‘K’ YES Burner modulates to maintain set temperature NO Check CH NTC sensor. Go to section ‘D’ NO 130 flashing YES Go to section ‘M’ NO Burner goes out 66 YES EcoBlue Advance Combi Fan stops after 15 seconds YES Boiler operation correct 7219715 - 03 (04/17) Troubleshooting 11 Fault Finding Solutions Sections A Is there 230V at: 1. 2. 3. B NO Main terminals L and N Check electrical supply NO Main terminal fuse Connection OK at X40 Replace fuse NO Check wiring PCB - X10 connector Main terminals L and N Display illuminated NO Display or Main PCB fault Switch to DHW mode maximum flow & press reset. During next three minutes check :- YES 230V at PCB - X11 connector 3 to 4 YES 230V at pump Replace pump NO NO Check wiring Replace PCB C Fan connections correct at fan & PCB X11 and X23 connectors see Wiring Diagram. NO Make connections YES 230V at PCB - X11 connector (between blue & brown - see Wiring Diagram) YES Fan jammed or faulty wiring YES Replace fan or wire NO Replace PCB D Temperature sensor faulty. Check correct location and wiring. YES Cold resistance approximately 10k @ 25° C (CH & DHW sensor) 20k @ 25° C (Flue sensor) (resistance reduces with increase in temp.) E Gas at burner NO NO Replace sensor & reset boiler Ensure gas is on and purged Check wiring and PCB - X14 connector see Wiring Diagram YES Replace gas valve & Check combustion NO Replace PCB 7219715 - 03 (04/17) EcoBlue Advance Combi 67 11 F Troubleshooting Check and correct if necessary 1. Ignition electrode and lead 2. Electrode connection 3. Spark gap and position YES Check wiring (see Diagram) and 230V at PCB X14 (between blue & brown from igniter) YES NO Replace PCB 4 ±0.5 Replace Igniter Burner Viewing Window Flame Sensing Electrode 7.5 ±1 Spark Ignition Electrode 10 ±1 Fig. 72 Electrode Position G 1. 2. Check supply pressure at the gas cock test point (Fig. 41):Natural Gas - Minimum 17 mbar Check and correct if necessary 1. The set of the gas valve (CO2 values - see Section 10.3.25) 2. Flame sensing electrode and lead connections 3. Flame sensing electrode position Replace sensing electrode or PCB H Safety Thermostat operated or faulty NO Check for and correct any system faults NO Allow to cool. Continuity across thermostat terminals more than 1.5 ohm YES Replace safety thermostat NO Check Flow & Return Sensors - see section ‘D’ YES Is 110 or 111 still flashing ? YES Replace PCB 68 EcoBlue Advance Combi 7219715 - 03 (04/17) Troubleshooting CH system pressure less than 0.5 bar or greater than 2.7 bar on digital display I NO J YES Restore correct system pressure YES Check wiring and PCB X22 connector for approx. 5V DC between green & black - see Wiring Diagram Ensure that the boiler and system are fully vented NO Replace hydraulic pressure sensor NO Replace PCB System fault - correct Check flow temperature sensor connections and position. Cold resistance approximately 10k @ 25° C (CH sensors) (resistance reduces with increase in temp.) YES 11 NO Replace sensor YES Go to section ‘B’ K Is there 230V at: 1. PCB - X13 connector terminals between:Blue & Black (central heating mode) Blue & Brown (domestic hot water mode) see Wiring Diagram NO Replace PCB YES Check diverter valve motor cable Is there 230V at: 2. Diverter valve motor Is mains water filter & assembly clean, and rotor moving freely ? L YES YES Replace diverter valve motor Check wiring and PCB - X22 connector for approx. 5V DC between red & blue from Hall effect sensor - see Wiring Diagram YES Replace Hall Effect Sensor NO NO Clean or replace Replace PCB M 1. 2. Temperature sensors faulty. Cold resistance approximately 10k @ 25° C (CH sensor) 20k @ 25° C (Flue sensor) (resistance reduces with increase in temp.) If pump is running the heat exchanger could be obstructed 7219715 - 03 (04/17) NO YES Replace sensor Replace heat exchanger EcoBlue Advance Combi 69 12 Decommissioning 12 Decommissioning 12.1 Decommissioning Procedure 1. Disconnect the gas & electric supplies and isolate them. 2. Drain the primary circuit and disconnect the filling device. 3. Dismantle the chimney system and remove the boiler from the wall mounting frame. 70 EcoBlue Advance Combi 7219715 - 03 (04/17) Spare Parts 13 13 Spare Parts 13.1 General 1. If, following the annual inspection or maintenance any part of the boiler is found to need replacing, use Genuine Baxi Spare Parts only. C 13.2 Spare Parts List Key No. D A Description No. Manufacturers Part No. A Fan Fan - 40 only 720768101 7211861 B Burner - 24 & 28 Burner - 33 Burner - 40 7212447 7212449 7212448 C Spark Ignition Electrode 720767301 D Flame Sensing Electrode 7211855 E Gas Valve 7214341 F Safety Thermostat 720765301 G Hall Effect Sensor 720788201 I Plate Heat Exchanger 720852401 J Diverter Valve Motor 720788601 K Pump 7220533 M Heating Flow/Return Sensor 720747101 N DHW NTC Sensor 720789201 O Pump Automatic Air Vent 720787601 P Hydraulic Pressure Switch 720789001 Q Heating Pressure Gauge 7212896 R Flue Sensor 720851401 S PCB - 24 ErP PCB - 28 ErP PCB - 33 ErP PCB - 40 ErP 7222707 7222709 7222711 7222712 U Ø5.0 Gas Nozzle Injector - 24 Ø5.6 Gas Nozzle Injector - 28 Ø6.6 Gas Nozzle Injector - 33 Ø6.8 Gas Nozzle Injector - 40 7211862 7214344 7211864 7214346 V Air/Gas Venturi - 24 Air/Gas Venturi - 28 Air/Gas Venturi - 33 & 40 7211858 7211859 7211860 W Boiler Control HMI PCB 7211868 B E G F J M P O U N R Q V S W I K 7219715 - 03 (04/17) EcoBlue Advance Combi 71 14 Notes 14 Notes 72 EcoBlue Advance Combi 7219715 - 03 (04/17) 7219715 - 03 (04/17) Notes 14 EcoBlue Advance Combi 73 Fitted Not required Heating zone valves OR ft³/hr Burner operating pressure (at maximum rate) mbar OR Gas inlet pressure at maximum rate The heating and hot water system complies with the appropriate Building Regulations Ratio *All installations in England and Wales must be to Local Authority Building Control (LABC) either directly or through a Competent Persons Scheme. A Building Regulations Compliance will then be issued to the customer. 74) 75. 7219715 - 03 (04/17)
|
https://manualzz.com/doc/48425055/7219715-ecoblue-advance-install
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
US6219690B1 - Apparatus and method for achieving reduced overhead mutual exclusion and maintaining coherency in a multiprocessor system utilizing execution history and thread monitoring - Google PatentsApparatus and method for achieving reduced overhead mutual exclusion and maintaining coherency in a multiprocessor system utilizing execution history and thread monitoring Download PDF
Info
- Publication number
- US6219690B1US6219690B1 US08742987 US74298796A US6219690B1 US 6219690 B1 US6219690 B1 US 6219690B1 US 08742987 US08742987 US 08742987 US 74298796 A US74298796 A US 74298796A US 6219690 B1 US6219690 B1 US 6219690B1
- Authority
- US
- Grant status
- Grant
- Patent type
-
- Prior art keywords
- data
- thread
- processor
- threads
- r is a division of Application No. 08/480,627, now U.S. Pat. No. 5,608,893 filed Jun. 7, 1995, which is a division of U.S. Pat. No. 5,442,758, filed Jul. 19, 1993.
This invention relates to computer systems and more particularly to a reduced overhead mutual-exclusion mechanism that provides data coherency and improves computer system performance.
Data coherency is threatened whenever two or more computer processes compete for a common data item that is stored in a memory or when two or more copies of the same data item are stored in separate memories and one item is subsequently altered. Previously known apparatus and methods for ensuring data coherency in computer systems are generally referred to as mutual-exclusion mechanisms.
A variety of mutual-exclusion mechanisms have evolved to ensure data coherency. Mutual-exclusion mechanisms prevent more than one computer process from accessing and/or updating (changing) a data item or ensure that any copy of a data item being accessed is valid. Unfortunately, conventional mutual-exclusion mechanisms degrade computer system performance by adding some combination of procedures, checks, locks, program steps, or complexity to the computer system.
Advances in processor and memory technology make it possible to build high-performance computer systems that include multiple processors. Such computer systems increase the opportunity for data coherency problems and, therefore, typically require multiple mutual-exclusion mechanisms.
When multiple processors each execute an independent program to accelerate system performance, the overall system throughput is improved rather than the execution time of a single program.
When the execution time of a single program requires improvement, one way of improving the performance is to divide the program into cooperating processes that are executed in parallel by multiple processors. Such a program is referred to as a multitasking program.
Referring to FIG. 1A, multiple computers 10 are interconnected by an interconnection network 12 to form a computer system 14. FIG. 1B shows that a typical one of computer 10 includes N number of processors 16A, 16B, . . . , and 16N (collectively “processors 16”). In computer 10 and computer system 14, significant time is consumed by intercommunication. Intercommunication is carried out at various levels.
In computer 10, at a processor memory interface level, processors 16 access data in a shared memory 18 by transferring data across a system bus 20. System bus 20 requires a high-communication bandwidth because it shares data transfers for processors 16. Computer 10 is referred to as a multiprocessor computer.
In computer system 14, at an overall system level, computers 10 each have a shared memory 18 and interconnection network 12 is used only for intercomputer communication. Computer system 14 is referred to as a multicomputer system.
The high threat to data coherency in multicomputer and multiprocessor systems is caused by the increased competition among processors 16 for data items in shared memories 18.
Ideally, multicomputer and multiprocessor systems should achieve performance levels that are linearly related to the number of processors 16 in a particular system. For example, 10 processors should execute a program 10 times faster than one processor. In a system operating at this ideal rate, all processors contribute toward the execution of the single program, and no processor executes instructions that would not be executed by a single processor executing the same program. However, several factors including synchronization, program structure, and contention inhibit multicomputer and multiprocessor systems from operating at the ideal rate.
Synchronization: The activities of the independently executing processors must be occasionally coordinated, causing some processors to be idle while others continue execution to catch up. Synchronization that forces sequential consistency on data access and/or updates is one form of mutual exclusion.
Program structure: Not every program is suited for efficient execution on a multicomputer or a multiprocessor system. For example, some programs have insufficient parallelism to keep all multiple processors busy simultaneously, and a sufficiently parallel program often requires more steps than a serially executing program. However, data coherency problems increase with the degree of program parallelism.
Contention: If processor 16A competes with processor 16B for a shared resource, such as sharable data in shared memory 18, contention for the data might cause processor 16A to pause until processor 16B finishes using and possibly updating the sharable data.
Any factor that contributes to reducing the ideal performance of a computing system is referred to as overhead. For example, when processors 16A and 16B simultaneously request data from shared memory 18, the resulting contention requires a time-consuming resolution process. The number of such contentions can be reduced by providing processors 16 with N number of cache memories 22A, 22B, . . . , and 22N (collectively “cache memories 22”). Cache memories 22 store data frequently or recently executed by their associated processors 16. However, processors 16 cannot efficiently access data in cache memories 22 associated with other processors. Therefore, cached data cannot be readily transferred among processors without increased overhead.
Incoherent data can result any time data are shared, transferred among processors 16, or transferred to an external device such as a disk memory 24. Thus, conventional wisdom dictates that computer performance is ultimately limited by the amount of overhead required to maintain data coherency.
Prior workers have described various mutual-exclusion techniques as solutions to the data coherence problem in single and multiprocessor computer systems.
Referring to FIG. 2, Maurice J. Bach, in The Design of the UNIX Operating System, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1986 (“Bach”) describes single processor and multiprocessor computer implementations of a UNIX® operating system in which a process 30 has an asleep state 32, a ready to run state 34, a kernel running state 36, and a user running state 38. Several processes can simultaneously operate on shared operating system data leading to operating system data coherency problems. Bach solves the operating system data coherency problem by allowing process 30 to update data only during a process state transition 40 from kernel running state 36 to asleep state 32. Process 30 is inactive in asleep state 32. Data coherency is also protected by using data “locks” that prevent other processes from reading or writing any part of the locked data until it is “unlocked.”
Referring again to FIG. 1, Lucien M. Censier and Paul Feautrier, in “A New Solution to Coherence Problems in Multicache Systems, ” IEEE Transactions on Computers, Vol. C-27, No. 12, December 1978 (“Censier and Feautrier”) describe a presence flag mutual-exclusion technique. This technique entails using data state commands and associated state status lines for controlling data transfers among N number of cache memory controllers 50A, 50B, . . . , and 50N (collectively “cache controllers 50”) and a shared memory controller 52. The data state commands include shared read, private read, declare private,.invalidate data, and share data. Advantageously,.no unnecessary data invalidation commands are issued and the most recent (valid) copy of a shared data item can quickly be found. Unfortunately, cache memory data structures must be duplicated in shared memory controller 52 and costly nonstandard memories are required. Moreover, performance is limited because system bus 20 transfers the data state commands.
A. J. van de Goor, in Computer Architecture and Design, Addison-Wesley Publishers Limited, Workingham, England, 1989, classifies and describes single path, single data (“SPSD”); single path, multiple data (“SPMD”); multiple path, single data (“MPSD”); and multiple path, multiple data (“MPMD”)computer system architectures from a data coherence perspective as summarized in Table 1.
Single data (“SD”) indicates that only one copy of a data item exists in computer 10, whereas multiple data (“MD”) indicates that multiple copies of the data item may coexist, as commonly happens when processors 16 have cache memories 22.
Single path (“SP”) indicates that only one communication path exists to a stored data item, whereas multiple paths (“MP”) indicates that more than one path to the same data item exists, as in a multiprocessor system with a multiport memory. The classification according to Table 1 results in four classes of computer systems.
SPSD systems include multiprocessor systems that time-share a single bus or use a crossbar switch to implement an interconnection network. In such systems, the processors do not have cache memories. Such systems do not have a data coherence problem because the conventional conditions necessary to ensure data coherence are satisfied—only one path exists to the data item at a time, and only one copy exists of each data item.
Although no processor-associated cache memories exist, shared memory 18 can include a performance-improving shared memory cache that appears to the processors as a single memory. Data incoherence can exist between the shared memory cache and the shared memory. In general, this solution is not attractive because the bandwidth of a shared cache memory is insufficient to support many processors.
MPSD systems are implemented with multiport memories in which each processor has a switchable dedicated path to the memory.
MPSD systems can process data in two ways. In a single-access operation, memory data are accessed sequentially, with only a single path to and a single copy of each data item existing at any time, thereby ensuring data coherence. In a multiple-access operation, a data-accessing processor locks its memory path or issues a data locking signal for the duration of the multiple-access operation, thereby ensuring a single access path for the duration of the multiple-access operation. Data coherence is guaranteed by the use of locks, albeit with associated overhead.
MPMD systems, such as computer 10 and computer system 14, are typically implemented with shared memory 18 and cache memories 22. Such systems have a potentially serious data coherence problem because more than one path to and more than one copy of a data item may exist concurrently.
Solutions to the MPMD data coherence problem are classified as either preventive or corrective. Preventive solutions typically use software to maintain data coherence while corrective solutions typically use hardware for detecting and resolving data coherence problems. Furthermore, corrective solutions are implemented in either a centralized or a distributed way.
Preventive solutions entail using software to designate all sharable and writable data as noncachable, making it accessible only in shared memory 18. When accessed by one of processors 16, the shared data are protected by software locks and by shared data structures until relinquished by the processor. To alleviate the obvious problem of increased data access time, the shared data structures may be stored in cache memory. The prevention software is responsible for restoring all updated data at shared memory 18, before releasing the software locks. Therefore, processors 16 need commands for purging data from associated cache memories 22.
Unfortunately, preventive solutions require specialized system software, a facility to identify sharable data, and a correspondingly complex compiler. Additionally, system performance is limited because part of the shared data are not cached.
Corrective solutions to data coherence problems are advantageous because they are transparent to the user, albeit at the expense of added hardware.
A typical centralized solution to the data coherence problem is the above-described presence flag technique of Censier and Feautrier.
In distributed solutions, cache memory controllers 50 maintain data coherence rather than shared memory controller 52. Advantages include reduced bus traffic for maintaining cache data states. This is important because, in a shared bus multiprocessor system, bus capacity often limits system performance. Therefore, approaching ideal system performance requires minimizing processor associated bus traffic.
Skilled workers will recognize that other solutions exist for maintaining data coherence including dedicated broadcast buses, write-once schemes, and data ownership schemes. The overhead cost associated with various data coherence techniques is described by James Archibald and Jean-Loup Baer, in “Cache Coherence Protocols: Evaluation Using a Multiprocessor Simulation Model,” ACM Transactions on Computer Systems, Vol. 4, No. 4, November 1986. Six methods of ensuring cache coherence are compared in a simulated multiprocessor computer system. The simulation results indicate that choosing a data coherence technique for a computer system is a significant decision because the hardware requirements and performance differences vary widely among the techniques. In particular, significant performance differences exist between techniques that distribute updated data among cache memories and techniques that update data in one cache memory and invalidate copies in other cache memories. Comparative performance graphs show that all the techniques impose an overhead-based performance limit on the computer system.
What is needed, therefore, is a substantially zero-overhead mutual-exclusion mechanism that provides for concurrently reading and/or updating data while maintaining data coherency. Such a mechanism would be especially useful if it is capable of maintaining the coherency of data shared throughout a networked multicomputer system.
An object of this invention is, therefore, to provide an apparatus and a method for ensuring substantially zero-overhead data coherency in a computer.
Another object of this invention is to provide a mutual-exclusion apparatus and method capable of operating in a multicomputer system.
A further object of this invention is to improve the program execution performance of multiprocessor and multicomputer systems.
This invention is a substantially zero-overhead mutual-exclusion apparatus and method that allow concurrent reading and updating data while maintaining data coherency. That is, a data reading process executes the same sequence of instructions that would be executed if the data were never updated. Rather than depending exclusively on overhead-imposing locks, this mutual-exclusion mechanism tracks a thread (a locus of control such as a processor) execution history to determine safe times for processing a current generation of data updates while a next generation of data updates is concurrently being saved. A summary of thread activity tracks which threads have passed through a quiescent state after the current generation of updates was started. When the last thread related to the current generation passes through a quiescent state, the summary of thread activity signals a callback processor that it is safe to end the current generation of updates. The callback processor then processes and erases all updates in the current generation. The next generation of updates then becomes the current generation of updates. The callback processor restarts the summary of thread activity and initiates a new next generation of updates. All data-updating threads pass through a quiescent state between the time they attempt to update data and the time the data are actually updated.
This mutual-exclusion mechanism requires practically no data locks when accessing shared data, resulting in reduced overhead and data contention, and it is not so susceptible to deadlock as conventional mechanisms. Reducing deadlock susceptibility provides a corresponding reduction in or elimination of overhead required for avoiding or detecting and repairing deadlocks.
Additional objects and advantages of this invention will be apparent from the following detailed description of preferred embodiments thereof which proceeds with reference to the accompanying drawings.
FIGS. 1A and 1B are simplified schematic block diagrams showing a prior art network of multiprocessor computers and a representative prior art shared-memory multiprocessor computer.
FIG. 2 is a simplified state diagram showing the interrelationship of process states and process state transitions followed during execution of the process in a prior art UNIX® operating system.
FIG. 3 is a simplified schematic block diagram showing a mutual-exclusion mechanism according to this invention.
FIG. 4 is a simplified schematic block diagram showing a preferred implementation of the mutual-exclusion mechanism of FIG. 3.
FIGS. 5A, 5B, 5C, and 5D are simplified schematic block diagrams showing four evolutionary stages of a local area network data structure being updated in a manner according to this invention.
FIG. 6 is a simplified block diagram showing a handle table data structure according to this invention.
FIG. 7 is a simplified block diagram showing a free handle list data structure according to this invention.
FIG. 3 shows the interrelationship of components used to implement an embodiment of a mutual-exclusion mechanism 90. Skilled workers will recognize that terms of art used in this application are known in the computer industry. Standard C-programming language syntax is used to describe relevant data structures and variables. Pseudo-code is used to describe various operational steps. The following alphabetically arranged terms are described with reference to FIGS. 1 and 3.
A CALLBACK 100 is an element of a generation data structure. Each callback 100 tracks a set of elements 102 waiting for safe erasure. Callback 100 may include operational steps specific to a type of elements being tracked.
A CALLBACK PROCESSOR 104 is an entity that monitors a summary of thread activity 106 and processes a current generation 108 of callbacks 100 when it is safe to do so. Then, callback processor 104 causes a next generation 110 of callbacks 100 to become the current generation 108 of callbacks 100 and resets summary of thread activity 106. A global callback processor may be used to process all callbacks, or multiple callback processors may process individual elements or groups of elements protected by mutual-exclusion mechanism 90.
COMPUTER SYSTEM 14 includes computer 10 or a group of computers 10 that share or exchange information. Computers 10 may be single processors or shared-memory multiprocessors. Groups of computers 10 may be connected by interconnection network 12 to exchange data. Such data exchange includes so-called “sneakernets” that communicate by physically moving a storage medium from one computer 10 to another computer 10. Alternatively, computer system 14 can include loosely coupled multiprocessors, distributed multiprocessors, and massively parallel processors.
A DATA STRUCTURE is any entity that stores all or part of the state maintained by computer system 14. Examples of data structures include arrays, stacks, queues, singly linked linear lists, doubly linked linear lists, singly linked circular lists, doubly linked circular lists, trees, graphs, hashed lists, and heaps.
ELEMENT 102 is usually one or more contiguous records or “structs” in a data structure stored in a memory (18, 22, or 24). Examples include array elements and elements of linked data structures.
ERASE means to render the contents of element 102 invalid. Erasure may be accomplished by returning element 102 to a free pool (not shown) for initialization and reallocation, by overwriting, or by moving element 102 to another data structure. The free pool is simply a particular example of another data structure. Any special processing required by a particular element 102 is performed by its associated callback 100.
The FREE POOL is a group of elements available for addition to existing data structures or for creating new data structures. A free pool may be associated with a given data structure, a specific group of data structures, a given entity, specific groups of entities, or with all or part of an entire computer system.
A GENERATION is a set of elements 102 deleted from one or more data structures by a set of threads 112. The generation is erased when threads 112 enter a quiescent state. In particular, current generation 108 is erased when its associated threads reach a quiescent state, and next generation 110 is erased when its associated threads reach a quiescent state, but after the current generation is erased.
A GENERATION DATA STRUCTURE is a data structure that tracks a particular generation of elements waiting to be erased. Generations of elements may be tracked system-wide, per data structure, by group of data structures, by entity, or by a group of entities.
MUTEX is a term that refers to a specific instance of a mutual-exclusion mechanism as in “the data structure is protected by a per-thread mutex.”
MUTUAL EXCLUSION is a property that permits multiple threads to access and/or update a given data structure or group of data structures while maintaining their integrity. For example, mutual exclusion prevents an entity from reading a given portion of a data structure while it is being updated.
MUTUAL-EXCLUSION OVERHEAD is the additional amount of processing required to operate a mutual-exclusion mechanism. The amount of overhead may be measured by comparing the “cost” of processing required to access and/or update a given data structure or group of data structures with and without the mutual-exclusion mechanism. If there is no difference in the cost, the mutual exclusion mechanism has zero overhead. An example of cost is processing time.
A READER is a thread that accesses but does not update a data structure.
A QUIESCENT STATE exists for a thread when it is known that the thread will not be accessing data structures protected by this mutual-exclusion mechanism. If multiple callback processors exist, a particular thread can be in a quiescent state with respect to one callback processor and in an active state with respect to another callback processor. Examples of quiescent states include an idle loop in an operating system, a user mode in an operating system, a context switching point in an operating system, a wait for user input in an interactive application, a wait for new messages in a message-passing system, a wait for new transactions in a transaction processing system, a wait for control input in an event-driven real-time control system, the base priority level in a interrupt-driven real-time system, an event-queue processor in a discrete-event simulation system, or an artificially created quiescent state in a system that lacks a quiescent state, such as a real-time polling process controller.
A READER-WRITER SPINLOCK is a type of spinlock that allows many readers to use a mutex simultaneously, but allows only one writer to use the mutex at any given time, and which further prevents any reader and writer from using the mutex simultaneously.
SEQUENTIAL CONSISTENCY is a property that ensures that all threads agree on a state change order. Multiprocessor computer systems do not typically depend on sequential consistency for performance reasons. For example, an INTEL® 80486 processor has a write-buffer feature that makes a data update appear to have occurred earlier in time at a particular processor than at other processors. The buffer feature causes processes relying on sequential consistency, such as locks, to either fail or to require additional work-around instructions.
A SLEEPLOCK is a type of mutual-exclusion mechanism that prevents excluded threads from processing, but which requires a pair of context switches while waiting for the sleeplock to expire.
A SPINLOCK is a type of mutual-exclusion mechanism that causes excluded threads to execute a tight loop while waiting for the spinlock to expire.
STORAGE MEDIA are physical elements such as a rotating disk or a magnetic tape that provide long-term storage of information.
A SYSTEM is a collection of hardware and software that performs a task.
A SUBSYSTEM is a part of a system that performs some part of the task.
SUMMARY OF THREAD ACTIVITY 106 is a data structure that contains a record of thread execution history that indicates to callback processor 104 when it is safe to process current generation 108 of callbacks 100.
THREAD 112 is any center of activity or concentration of control in a computer system. Examples of a thread include a processor, process, task, interrupt handler, service procedure, co-routine, transaction, or another thread that provides a given locus of control among threads sharing resources of a particular process or task.
AN UPDATER is a thread that updates a data element or a data structure.
Mutual-exclusion mechanism 90 allows multiple readers and updaters to operate concurrently without interfering with each other, although readers accessing a data structure after an updater will access the updated data structure rather than the original structure.
Mutual-exclusion mechanism 90 does not require readers to use a mutex or sequential consistency instructions. Updaters use a mutex as required by the update algorithm and use sequential consistency instructions to ensure that the readers access consistent data.
An updater deleting an element removes it from a data structure by unlinking it or using a deletion flag. If current generation 108 is empty, the updater adds a callback 100 containing the element to current generation 108 and causes callback processor 104 to reset summary of thread activity 106. If current generation 108 is not empty, the updater adds a callback 100 containing the element to next generation 110.
An updater changing an element copies it from a data structure into a new element, updates the new element, links the new element into the data structure in place of the original element, uses the callback mechanism to ensure that no threads are currently accessing the element, and erases the original element.
Some computers have features that explicitly enforce sequential consistency, such as the INTEL® 80×86 exchange instruction, MIPS® R4000 look link and store conditional instructions, IBM® RS6000 cache-management Instructions, or uncachable memory segments. In sequences of instructions not involving such features, these computers can reorder instructions and memory accesses. These features must, therefore, be used to prevent readers from accessing elements before they have been completely initialized or after they have been erased.
Updaters, whether deleting or updating data, should use some mutual-exclusion mechanism such as spinlocks or sleeplocks to prevent multiple updaters from causing incoherent data.
Using the above-described updater techniques in conjunction with mutual-exclusion mechanism 90 ensures that no thread accesses an element when it is erased.
Summary of thread activity 106 may be implemented per data structure, per group of data structures, or system-wide. Summary of thread activity 106 has many alternative data structures including those described below.
A dense per-thread bitmap data structure has an array of bits with one bit per thread. When a thread passes through a quiescent state since the data structure was reset, the corresponding bit is cleared.
A distributed per thread bitmap data structure embeds thread bits into a structure that facilitates thread creation and destruction. For example, a flag is used to track each thread in the data structure.
A hierarchical per-thread bitmap is a data structure that maintains a hierarchy of bits. The lowest level maintains one bit per thread. The next level up maintains one bit per group of threads and so on. All bits are preset to a predetermined state, for example, a one-state. When a thread is sensed in a quiescent state, its associated bit is set to a zero-state in the lowest level of the hierarchy. If all bits corresponding to threads in the same group are in a zero-state, the associated group bit in the next higher level is set to a zero-state and so on until either the top of the hierarchy or a non zero bit is encountered. This data structure efficiently tracks large numbers of threads; Massively parallel shared-memory multiprocessors should use a bitmap hierarchy mirroring their bus hierarchy.
Summary of thread activity 106 may be reset in various ways including those described below.
Each bit is explicitly set to a predetermined state, preferably a one-state, by a reset signal 114.
Each bit (or group of bits for hierarchial bitmaps) has an associated generation counter. A global generation counter is incremented to reset summary of thread activity 106. When a thread is sensed in a quiescent state, and its associated bit is currently in a zero-state, its associated generation counter is compared with the global generation counter. If the counters differ, all bits associated with the quiescent thread are set to a one-state and the associated generation counter is set to equal the global counter.
The generation counter technique efficiently tracks large numbers of threads, and is particularly useful in massively parallel shared-memory multiprocessors with hierarchical buses.
A thread counter (not shown) may be used to indicate the number of threads remaining to be sensed in a quiescent state since the last reset signal 114. The thread counter is preset to the number of threads (or for hierarchial schemes, the number of threads in a group) and is decremented each time a thread bit is cleared. When the counter reaches zero, all threads have been sensed in a quiescent state since the last reset. If threads can be created and destroyed, the counters corresponding to the threads being destroyed must be decremented.
Callback processor 104 interfaces with the quiescence-indicating scheme chosen for summary of thread activity 106 and therefore has various possible implementations. For example, if the quiescence-indicating bits in summary of thread activity 106 have other purposes, no additional overhead need be incurred by callback processor 104 in checking them. Consider a data structure having a dedicated summary of thread activity and a per-thread bit for indicating the occurrence of some unusual condition. Any thread accessing the data structure must execute special-case steps in response to the per-thread bit such as recording its quiescence before accessing the data structure.
Callback processor 104 may be invoked by:
all threads upon entering or exiting their quiescent state;
readers just before or just after accessing the data structure protected by mutual-exclusion mechanism 90 (invoking callback processor 104 just after accessing the data structure will incur overhead unless the quiescence-indicating bits in summary of thread activity 106 have multiple purposes);
a separate entity such as an interrupt, asynchronous trap signal, or similar facility that senses if the thread is in a quiescent state; or
updaters, the form of which varies from system to system.
If an updater has a user process context on a shared memory multiprocessor, the updater forces a corresponding process to run on each processor which forces each processor through a quiescent state, thereby allowing the updater to safely erase the elements.
In a message passing system, an updater may send a message to each thread that causes the thread to pass through a quiescent state. When the updater receives replies to all the messages, it may safely erase its elements. This alternative is attractive because it avoids maintaining lists of elements awaiting erasure. However, the overhead associated with message-passing and context switching overwhelms this advantage in many systems.
Callbacks and generations may be processed globally, per-thread where possible, or per some other entity where possible. Global processing is simple to implement, whereas the other choices provide greater updating efficiency. Implementations that allow threads to be destroyed, such as a processor taken off line, will need to include global callback processing to handle those callbacks waiting for a recently destroyed thread.
Referring to FIG. 4, a preferred mutual-exclusion mechanism 120 is implemented in the Symmetry Series of computers manufactured by the assignee of this invention. The operating system is a UNIX® kernel running on a shared-memory symmetrical multiprocessor like computer 10 shown in FIG. 1B.
Mutual-exclusion mechanism 120 has a system-wide scope in which each thread corresponds to an interrupt routine or kernel running state 36 (FIG. 2) of a user process. Alternatively, each of processors 16 may correspond to threads 112 (FIG. 3). Quiescent state alternatives include an idle loop, process user running state 38 (FIG. 2), a context switch point such as process asleep state 32 (FIG. 2), initiation of a system call, and a trap from a user mode. More than one type of quiescent state may be used to implement mutual-exclusion mechanism 120.
The summary of thread activity is implemented by per-processor context-point counters 122. Counters 122 are incremented each time an associated processor 16 switches context. Counters 122 are used by other subsystems and therefore add no overhead to mutual-exclusion mechanism 120.
A callback processor 124 includes a one-bit-per processor bitmask. Each bit indicates whether its associated processor 16 must be sensed in a quiescent state before the current generation can end. Each bit corresponds to a currently functioning processor and is set at the beginning of each generation. When each processor 16 senses the beginning of a new generation, its associated per-processor context switch counter 122 value is saved. As soon as the current value differs from the saved value, the associated bitmask bit is cleared indicating that the associated processor 16 is ready for the next generation.
Callback processor 124 is preferably invoked by a periodic scheduling interrupt 126. Processor 16 may alternatively clear its bitmask bit if scheduling interrupt 126 is in response to an idle loop, a user-level process execution, or a processor 16 being placed off line. The latter case is necessary to prevent an off line processor from stalling the callback mechanism and causing a deadlock.
When all bits-in the bitmask are cleared, callback processor 124 processes all callbacks 128 in a global current generation 130 and all callbacks 128 associated with the current processor in a per-processor current generation 131.
Mutual-exclusion mechanism 120 also includes a global next generation 132 and a per-processor next generation 133. When a particular processor 16 is placed off line, all callbacks 128 in its associated per-processor current generation 131 and per-processor next generation 133 are placed in global next generation 132 to prevent callbacks 128 from being “stranded” while the processor is off line.
Mutual-exclusion mechanism 120 implements callbacks 128 with data structures referred to as rc_callback_t and rc_ctrlblk_t and the below-described per-processor variables.
cswtchctr: A context switch counter that is incremented for each context switch occurring on a corresponding processor.
syscall: A counter that is incremented at the start of each system call initiated on a corresponding processor.
usertrap: A counter that is incremented for each trap from a user mode occurring on a corresponding processor.
rclockcswtchctr: A copy of the cswtchctr variable that is taken at the start of each generation.
rclocksyscall: A copy of the syscall variable that is taken at the start of each generation.
rclockusertrap: A copy of the usertrap variable that is taken at the start of each generation.
rclockgen: A generation counter that tracks a global generation number. This variable indicates whether a corresponding processor has started processing a current generation and if any previous generation callbacks remain to be processed.
rclocknxtlist: A per-processor list of next generation callbacks comprising per-processor next generation 133.
rclocknxttail: A tail pointer pointing to rclocknxtlist.
rclockcurlist: A per-processor list of current generation callbacks comprising per-processor current generation 131.
rclockcurtail: A tail pointer pointing to rclockcurlist.
rclockintrlist: A list of callbacks in the previous generation that are processed by an interrupt routine which facilitates processing them at a lower interrupt level.
rclockintrtail: A tail pointer pointing to rclockintrlist.
A separate copy of data structure rc_callback_t exists for each callback 128 shown in FIG. 4. The rc_callback_t data structure is described by the following C-code:
where:
rcc_next links together a list of callbacks associated with a given generation;
rcc_callback specifies a function 134 to be invoked when the callback generation ends;
rcc_arg1 and rcc_arg2 are arguments 136 passed to rcc_callback function 134; and
rcc_flags contains flags that prevent callbacks from being repeatedly associated with a processor, and that associate the callbacks with a memory pool.
The first argument in function 134 is a callback address. Function 134 is responsible for disposing of the rc_callback_t data structure.
Mutual-exclusion mechanism 120 implements global current generation 130, per-processor current generation 131, global next generation 132, and per-processor next generation 133 of callbacks 128 with a data structure referred to as rc_ctrlblk_t that is defined by the following C-code:
where:
rcc_mutex is a spin lock that protects the callback data structure (rcc_mutex is not used by readers, and therefore, does not cause additional overhead for readers);
rcc_curgen contains a number of callbacks 128 in global current generation 130;
rcc_maxgen contains a largest number of callbacks 128 in any next generation 132, 133 of callbacks 128 (when rcc_curgen is one greater than rcc_maxgen, there are no outstanding callbacks 128);
rcc_olmsk is a bitmask in which each bit indicates whether a corresponding processor 16 is on line;
rcc_needctxtmask is a field implementing summary of execution history 138 (the field is a bitmask in which each bit indicates whether a corresponding processor 16 has been sensed in a quiescent state);
rcc_intrlist is a pointer to a linked list of rcc_callback_t elements waiting to be processed in response to a software interrupt;
rcc_intrtail is a tail pointer to the rcc_intrlist pointer;
rcc_curlist is a pointer to a linked list of rcc_callback_t elements that together with per-processor variable rclockcurlist implement global current generation 130 and per-processor current generation 131;
rcc_curtail is a tail pointer to the rcc_curlist pointer;
rcc_nxtlist is a pointer to a linked list of rcc_callback_t elements that together with per-processor variable rclocknxtlist implement global next generation 132 and per-processor next generation 133;
rcc_nxttail is a tail pointer to the rcc_nxtlist pointer (note that per-processor lists are used wherever possible, and global lists are used only when a processor having outstanding callbacks is taken off line);
rcc_nreg is a counter field that counts the number of “registered” callbacks (a registered callback is one presented to mutual exclusion mechanism 120 for processing);
rcc_nchk is a counter field that counts the number of times a rc_chk_callbacks function has been invoked;
rcc_nprc is a counter field that counts the number of callbacks that have been processed;
rcc_ntogbl is a counter field that counts the number of times that callbacks have been moved from a per-processor list to a global list;
rcc_nprcgbl is a counter field that counts the number of callbacks that have been processed from the global list;
rcc_nsync is a counter field that counts the number of memory-to-system synchronization operations that have been explicitly invoked; and
rcc_free is a pointer to a list of currently unused callback data structures that cannot be freed because they are allocated in permanent memory.
Function rc_callback performs an update 140 using a single rc_callback_t as its argument. Update 140 adds callbacks to global next generation 132 and per-processor next generation 133 by executing the following pseudo-code steps:
If the current processor is off line (a paradoxical situation that can arise during the process of changing between on line and off line states):
acquire rcc_mutex to protect rc_ctrlblk_t from concurrent access;
add the callback to global list rcc_nxtlist;
invoke rc_reg_gen (described below) to register that at least one additional generation must be processed (This step is not performed in the above-described per-processor case, but is performed by rc_chk_callbacks in response to the next clock interrupt); and
release rcc_mutex.
Callback processor 124 is implemented by a function referred to as rc_chk_callbacks. Callback processor 124 is invoked by interrupt 126, which is preferably a hardware scheduling clock interrupt referred to as hardclock(), but only if one or more of the following conditions are met: rclocknxtlist is not empty and rclockcurlist is empty (indicating that the current processor is tracking a generation of callbacks and there are callbacks ready to join a next generation); rclockcurlist is not empty and the corresponding generation has completed; or the bit in rcc_needctxtmask that is associated with the current processor is set. The latter condition ensures that if there is a current generation of callbacks, the current processor must be in, or have passed through, a quiescent state before the generation can end.
Function rc_chk_callbacks has a single flag argument that is set if interrupt 126 is received during an idle loop or a user mode, both of which are quiescent states from the viewpoint of a kernel thread. Function rc_chk_callbacks executes the following pseudo-code steps:
increment the counter rcc_nchk; if rclockcurlist has callbacks and rclockgen indicates their generation has completed, append the callbacks in rclockcurlist to any callbacks listed in rclockintrlist;
if rclockintrlist is empty, send a software interrupt to the current processor to process the appended callbacks;
if rclockcurlist is empty and rclocknxtlist is not empty, move the contents of rclocknxtlist to rclockcurlist, set rclockgen to one greater than rcc_curgen, and invoke rc_reg_gen on the value in rclockgen;
if the bit in rcc_needctxtmask corresponding to the current processor is not set, return (do not execute the following steps);
if the argument is not set (i.e., hardclock() did not interrupt either an idle-loop or a user code quiescent state) and rclockcswtchctr contains an invalid value, store in rclockcswtchctr, rclocksyscall, and rclockusertrap the current values of cswtchctr, syscall, and usertrap associated with the current processor, and return;
if the argument is not set and counters cswtchctr, syscall, and usertrap associated with the current processor have not changed (checked by comparing with the variables rclockcswtchctr, rclocksyscall, and rclockusertrap), return;
acquire rcc_mutex; and
invoke a function referred to as rc_cleanup (described below) to invoke the callbacks and release rcc_mutex.
Function rc_cleanup ends global current generation 130 and per-processor current generation 131 and, if appropriate, starts a new generation. The rcc_mutex must be held when entering and released prior to exiting rc_cleanup. Function rc_cleanup executes the following pseudo-code steps:
if the bit in rcc_needctxtmask associated with the current processor is already cleared, release rcc_mutex and return;
clear the bit in rcc_needctxtmask associated with the current processor to indicate that the current processor has completed the current generation;
set rclockcswtchctr to an invalid value;
if any bit in rcc_needctxtmask is still set, release rcc_mutex and return;
increment rcc_curgen to advance to the next generation;
if rcc_curlist is not empty, move its callbacks to rcc_intrtail;
if rcc_nxtlist is not empty, move its callbacks to rcc_curlist and invoke rc_reg_gen to indicate that another generation is required;
otherwise, invoke rc_reg_gen with rcc_maxgen to begin the next generation if appropriate;
if rcc_intrlist is not empty, send a software interrupt to cause its callbacks to be processed; and
release rcc_mutex.
A function referred to as rc_intr, invoked in response to the software interrupt, processes all callbacks in rcc_intrlist and those in rclockintrlist that are associated with the current processor. Function rc_intr executes the following steps:
while rclockintrlist is not empty, execute the following steps:
disable interrupts;
remove the first callback from rclockintrlist and flag the callback as not registered;
enable interrupts;
invoke the callback; and
increment the rcc_nprc counter.
While rcc_intrlist is not empty, execute the following steps:
acquire rcc_mutex;
remove the first callback from rcc_intrlist and flag the callback as not registered;
release rcc_mutex;
invoke the callback; and
increment the rcc_nprc and rcc_nprcgbl counters.
A function referred to as rc_onoff is invoked whenever any of processors 16 are taken off line or placed on line. Function rc_onoff prevents mutual-exclusion mechanism 120 from waiting forever for a disabled processor to take action. Function rc_onoff executes the following pseudo-code steps:
acquire rcc_mutex;
update rcc_olmsk to reflect which processors 16 are on line;
if the current processor is coming on line, release rcc_mutex and return;
if rclockintrlist, rclockcurlist, or rclocknxtlist associated with the current processor are not empty, increment counter rcc_ntogbl;
if rclockintrlist associated with the current processor is not empty, move its callbacks to the global rcc_intrlist and broadcast a software interrupt to all processors;
if rclockcurlist associated with the current processor is not empty, move its callbacks to the global rcc_nxtlist;
if rclocknxtlist associated with the current processor is not empty, move its callbacks to the global rcc_nxtlist; and
invoke rc_cleanup causing the current processor to properly exit from any ongoing generation.
The function referred to as rc_reg_gen registers and starts a specified generation of callbacks if there is no active generation and if the currently registered generation has not completed. The rc_reg_gen function executes the following steps:
if the specified generation is greater than rcc_maxgen, record it in rcc_maxgen;
if rcc_needctxtmask has a bit set (current generation not complete) or if rcc_maxgen is less than rcc_curgen (specified generation complete), return; and
set rcc_needctxtmask to rcc_olmsk to start a new generation.
Various uses exist for mutual-exclusion mechanism 90 in addition to the above-described generalized system-wide application.
For example, mutual-exclusion mechanism 90 provides update protection for a current local area network (“LAN”) data structure 150 that distributes data packets among an array of LANS. Note that such update protection may also be extended to a wide area network (“WAN”).
If a LAN is installed or removed (or is sensed as defective), data structure 150 is so informed by a function referred to as LAN-update which performs an updating sequence shown in FIGS. 5A, 5B, 5C, and 5D.
FIG. 5A shows a current generation of LAN data structure 150 prior to the update. A pointer referred to as LAN_ctl_ptr 152 accesses a field referred to as LAN_rotor 154 that indexes into an array LAN_array 156 having “j” slots. LAN_rotor 154 is incremented each time a data packet is distributed to sequentially access each slot of LAN_array 156. LAN_rotor 154 cycles back to zero after reaching the value “j” stored in a slot N_LAN 158. The value stored in N_LAN 158 is the number of LANs available in the array.
FIG. 5B shows the current generation of LAN data structure 150 immediately before updating LAN_array 156 to reflect newly installed or removed LANs. A next generation of LAN data structure 160 is allocated and initialized having an array LAN_array 162 of “j” slots. The value “j” is stored in a slot N_LAN 164. LAN_array 162 reflects the updated LAN configuration.
FIG. 5C shows LAN data structure 160 being installed as the current generation by overwriting LAN_ctl_ptr 152 with an address that points to LAN data structure 160 in place of LAN data structure 150. Processors that received a copy of LAN_ctl_ptr 152 before it was overwritten can safely continue to use LAN data structure 150 because all the resources it uses are permanently allocated. LAN data structure 150 will be released for deallocation after all the processors have sensed and are using LAN data structure 160.
FIG. 5D shows the current generation of LAN data structure 160 in operation. LAN data structure 150 is no longer shown because it is deallocated.
Another use for mutual-exclusion mechanism 90 entails mapping “lock handles” in a network-wide distributed lock manager (“DLM”) application. The DLM coordinates lock access among the nodes in the network.
Conventionally, a process in a local node uses a lock operation to “open” a lock against a resource in a remote node. A lock handle is returned by the remote node that is used in subsequent operations. For each lock operation, the lock handle must be mapped to an appropriate DLM internal data structure. On a symmetrical multiprocessor, mapping is protected by some form of conventional spin lock, sleep lock, or reader-writer spin lock that prevents the mapping from changing during use. Likewise, the data structure to which the lock handle is mapped is locked to prevent it from deletion during use. Therefore, at least two lock operations are required to map the lock handle to the appropriate data structure.
By using a zero overhead mutual-exclusion mechanism according to this invention, the DLM is able to map lock handles without using conventional locks, thereby reducing the number of remaining lock operations to one for data structure protection only.
The zero overhead mutual-exclusion mechanism provides stable lock handle mapping while the number of lock handles is being expanded and prevents data structures from being deallocated while still in use by a thread.
Referring to FIG. 6, lock handles are stored in a handle table 170 that has a hierarchy of direct 172, singly indirect 174, and doubly indirect 176 memory pages. Handle table 170 is expanded in a manner that allows existing lock handles to continue using handle table 170 without having their mapping changed by table expansion.
Direct 172, singly indirect 174, and doubly indirect 176 memory pages are each capable of storing NENTRIES pointers to internal data structures 178 or pointers to other memory pages.
Mapping a lock handle to its associated internal data structure entails tracing a path originating with pointers stored in a root structure 180 referred to as table_ptrs. The path followed is determined by either a directory referred to as table_size or by the lock handle itself.
If a lock handle is greater than zero but less than NENTRIES, the lock handle pointer is stored in a direct root structure field 184 referred to as table_ptrs[direct] and is directly mapped to its associated data structure 178 by table_ptrs[direct] [handle].
If a lock handle is greater than or equal to NENTRIES and less than NENTRIES*NENTRIES, the lock handle pointer is stored in a single root structure field 186 referred to as table_ptrs[single] and is indirectly mapped to its associated data structure 178 by table_ptrs[single] [handle/NENTRIES] [handle%NENTRIES] (“/” and “%” are respectively divide and modulo operators).
If a lock handle is greater than or equal to NENTRIES*NENTRIES and less than NENTRIES{circumflex over ( )}3 (“*” and “{circumflex over ( )}” are respectively multiply and exponent operators), the handle pointer is stored in a double root structure field 188 referred to as table_ptrs[double] and is double indirectly mapped to its associated data structure 178 by table_ptrs[double] [handle/(NENTRIES*NENTRIES)] [(handle/NENTRIES)%NENTRIES] [handle%NENTRIES].
Mapping computations are reduced to logical right shift and logical “AND” operations when NENTRIES is a power of two, which is typically the case.
Handle table 170 accommodates NENTRIES{circumflex over ( )}3 lock handles. A computer, such as computer 10, typically has a 32-bit (2{circumflex over ( )}32 ) wide address bus which is sufficient to address 1024 NENTRIES (1024{circumflex over ( )}3 =2{circumflex over ( )}30). Therefore, handle table 170 is adequately sized for all practical applications.
FIG. 7 shows a handle-free list 200 that links together lock handles that are available for allocation. A free lock handle is indicated by using a pointer 202 in direct pages 172 to store an index of the next free handle via forward links of linked list 200. The index of the first free handle is stored in a handle_free header 204.
Referring again to FIG. 6, when handle table 170 is expanded, additional levels of indirection are introduced if the number of lock handles increases from NENTRIES to NENTRIES+1 or from NENTRIES{circumflex over ( )}2 to (NENTRIES{circumflex over ( )}2)+1. Readers can use handle table 170 while it is being expanded.
Table expansion, lock handle allocation, lock handle association with a particular internal data structure 210, and subsequent lock handle deallocation still use spin locks to maintain sequential consistency.
A newly allocated lock handle is associated with a data structure 210 by storing a pointer to data structure 210 in a location in handle table 170 associated with the newly allocated lock handle.
After a particular lock handle is mapped to data structure 210, data structure 210 must not be deallocated while being used by an unlocked reader.
Updaters using internal data structure 210 use a spin lock to protect the data structure. However, the below-described data structure deallocation process uses the same spin lock, thereby reducing deallocation process overhead.
To deallocate data structure 210, it is first disassociated with its corresponding lock handle by linking the lock handle back on handle-free list 200. A reader attempting to use data structure 210 during lock handle disassociation will encounter either the next entry in handle-free list 200 or a pointer to data structure 210. Free list 200 entries are distinguished from internal data structure pointers by examining a bit alignment of the returned value. In this implementation, internal data structure pointers are aligned on a 4-byte boundary by setting their least significant two bits to zero, whereas handle-free list entries have their least significant pointer bit set to one (the associated lock handle index is shifted left one position to compensate). Therefore, a reader encountering a value with the least significant bit set knows that the associated lock handle does not correspond to internal data structure 210.
A reader encountering data structure 210 locks data structure 210 and checks a flag word embedded in data structure 210. When data structure 210 is prepared for deallocation, a “DEAD” flag bit is set therein, and data structure 210 is placed on a “pending deallocation” list of data structures. The DEAD flag bit is set under protection of a per-data structure lock.
The reader encounter data structure 210 with the DEAD flag bit set informs its controlling process that data structure 210 is pending deallocation. In this implementation, a lookup routine uses the existing per-data structure lock upon sensing the DEAD bit set, releases the lock, and informs its controlling process that the lock handle is no longer associated with an active internal data structure.
Referring again to FIG. 3, a further use for mutual-exclusion mechanism 90 is for maintaining data coherency in an interactive user application that executes multiple processes and shares memory.
In this application, each thread corresponds to a user process, and the quiescent state is a process waiting for user input.
This implementation has a system-wide scope and preferably uses a hierarchical per-thread bitmap, per-level generation counters, a global generation counter, and a thread counter to track the execution histories of a possible large number of processes. The thread counter is decremented when a process exits. If processes can abort, the thread counter must be periodically reset to the current number of threads to prevent indefinite postponement of callback processing.
In this application, the callback processor is invoked by any thread that enters a quiescent state. However, because a user can fail to provide input to a process, a periodic interrupt should also be used to invoke the callback processor.
Still another use for mutual-exclusion mechanism 90 is for maintaining the coherency of shared data in a loosely coupled multicomputer system such as computer system 14 of FIG. 1A. Mutual-exclusion mechanism 90 is installed in each computer 10. Each of computers 10 is informed of updates to a common data structure, a copy of which is maintained on each of computers 10, by messages passed between computers 10 over interconnection network 12.
For example, when a particular computer 10 updates its copy of the data structure, an update message is sent to the other computers 10 where each associated mutual-exclusion mechanism 90 updates the local copy of the data structure.
Each of computers 10 may have a different configuration, thereby dictating a different implementation of mutual-exclusion mechanism 90 on each computer 10.
It will be obvious to those having skill in the art that many changes may be made to the details of the above-described embodiments of this invention without departing from the underlying principles thereof. Accordingly, it will be appreciated that this invention is also applicable for maintaining data coherency in other than multiprocessor computer applications. The scope of the present invention should be determined, therefore, only by the following claims.
|
https://patents.google.com/patent/US6219690B1/en
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Oscar is an open-source ecommerce framework for Django. Django Oscar provides a base platform to build an online shop.
Oscar is built as a highly customisable and extendable framework. It supports Pluggable tax calculations, Per-customer pricing, Multi-currency etc.
1. Install Oscar
$ pip install django-oscar
2. Then, create a Django project
$ django-admin.py startproject <project-name>
After creating the project, add all the settings(INSTALLED_APPS, MIDDLEWARE_CLASSES, DATABASES) in your settings file as specified here.
And you can find the reference on how to customize the Django Oscar app, urls, models and views here.
Customising/Overridding templates:
To override Oscar templates, first you need to update the template configuration settings as below in your setting file.
import os location = lambda x: os.path.join( os.path.dirname(os.path.realpath(__file__)), x) TEMPLATE_LOADERS = ( 'django.template.loaders.filesystem.Loader', 'django.template.loaders.app_directories.Loader', 'django.template.loaders.eggs.Loader', ) from oscar import OSCAR_MAIN_TEMPLATE_DIR TEMPLATE_DIRS = ( location('templates'), OSCAR_MAIN_TEMPLATE_DIR, )
Note: In the 'TEMPLATE_DIRS' setting, you have to include your project template directory path first and then comes the Oscar's template folder which you can import from oscar.
By customising templates, you can just replacing all the content with your own content or you can only change blocks using "extends"
Ex: Overriding Home page
{% extends 'oscar/promotions/home.html' %} {% block content %} Content goes here ... ... {% endblock...
|
https://micropyramid.com/blog/how-to-create-your-own-ecommerce-shop-using-django-oscar/
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
For 8/9 Student Becomes the Teacher, do you know how I would calculate the get_average part? That would be step 3.
8/9 Student Becomes the Teacher
What information does the function get?
Given that information, how would you do that manually?
That's exactly how your function would do it.
If you're unsure of what the function is supposed to be doing, then you'll have to start with that. Can't start writing it before you know what it's supposed to do.
Are you asking what that is or does that mean you know what it's supposed to do?
If the former, I'd tell you to read the instructions, if the later, I'd tell you to start writing
My problem is that I'm not sure how to write code that calculates average or whatever step 3 is asking me to do.
Well start by establishing what it's supposed to do. You can't write anything without knowing what it's supposed to do.
So write down step by step in English what's supposed to happen.
def get_class_average(whatever input info is): # do this # do that # present result
Then you can start thinking about what the matching code is, you'll know some of it and other parts you'll be able to look up
|
https://discuss.codecademy.com/t/8-9-student-becomes-the-teacher/40823
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
US8131739B2 - Systems and methods for interfacing application programs with an item-based storage platform - Google PatentsSystems and methods for interfacing application programs with an item-based storage platform Download PDF
Info
- Publication number
- US8131739B2US8131739B2 US10646575 US64657503A US8131739B2 US 8131739 B2 US8131739 B2 US 8131739B2 US 10646575 US10646575 US 10646575 US 64657503 A US64657503 A US 64657503A US 8131739 B2 US8131739 B2 US 8131739B2
- Authority
- US
- Grant status
- Grant
- Patent type
-
- Prior art keywords
- item
- type
- relationship
- items
- storage related by subject matter to the inventions disclosed in the following commonly assigned applications: U.S. patent application Ser. No. 10/647,058, filed Aug. 21, 2003, now abandoned; U.S. patent application Ser. No. 10/646,941, filed Aug. 21, 2003, now U.S. Pat. No. 7,555,497, issued Jun. 30, 2009; U.S. patent application Ser. No. 10/646,940, filed Aug. 21, 2003, now U.S. Pat. No. 7,739,316, issued Jun. 15, 2010; U.S. patent application Ser. No. 10/646,632, filed Aug. 21, 2003, now abandoned; U.S. patent application Ser. No. 10/646,645, filed Aug. 21, 2003, now U.S. Pat. No. 7,483,915, issued Jan. 27, 2009; U.S. patent application Ser. No. 10/646,646, filed Aug. 21, 2003, now U.S. Pat. No. 7,349,913, issued Mar. 25, 2008; and U.S. patent application Ser. No. 10/646,580, filed Aug. 21, 2003, now U.S. Pat. No. 7,428,546, issued Sep. 23, 2008. satisfies this need..
According to one aspect of the present invention, the storage platform of the present invention comprises a data store implemented on a database engine. In various embodiments of the present invention,.
According to another aspect of the invention,.)
According to another aspect of the invention, a computer system.
According to another aspect of the invention, a computer system comprises a plurality of Items, where each Item constitutes a discrete unit of information that can be manipulated by a hardware/software interface system, and the Item or some of the Item's property values.
According to another aspect of the invention, a hardware/software interface system for a computer system, wherein said hardware/software interface system manipulates a plurality of Items, further comprises Items interconnected by a plurality of Relationships managed by the hardware/software interface system. According to another aspect of the invention, a hardware/software interface system for a computer system wherein said hardware/software interface system manipulates a plurality of discrete units of information having properties understandable by said hardware/software interface system. According to another aspect of the invention, a hardware/software interface system for a computer system comprises a core schema to define a set of core Items which said hardware/software interface system understands and can directly process in a predetermined and predictable way. According to another aspect of the invention, a method for manipulating a plurality of discrete units of information (“Items”) in a hardware/software interface system for a computer system, said method comprising interconnecting said Items with a plurality of Relationships and managing said Relationships at the hardware/software interface system level, is disclosed.
According to another feature of the invention,. According to another feature of the invention, the storage platform API provides a simplified query model that enables application programmers to form queries based on various properties of the items in the data store, in a manner that insulates the application programmer from the details of the query language of the underlying database engine..
Other features and advantages of the invention may become apparent from the following detailed description of the invention and accompanying drawings.
The foregoing summary, as well as the following detailed description of the invention, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there is shown in the drawings exemplary embodiments of various aspects of the invention; however, the invention is not limited to the specific methods and instrumentalities disclosed. In the drawings:
- I. INTRODUCTION . . . 22
- A. EXEMPLARY COMPUTING ENVIRONMENT . . . 22
- B. TRADITIONAL FILE-BASED STORAGE . . . 26
- II. A NEW STORAGE PLATFORM FOR ORGANIZING, SEARCHING, AND SHARING DATA . . . 28
-” Search Views . . . 74
- (2) Change Tracking in “Typed” Search Views . . . 75
- specific rights . . . 87
- policy on an item . . . 92
- (12) Rights to change the security policy on an item . . . 92
- (13) Rights that don't have a direct equivalent . . . 93
- 3. Implementation . . . 93
- a) Creating a new item in a container . . . 94
- b) Adding an explicit ACL to an item . . . 94
- c) Adding a holding Relationship to an item . . . 94
- d) Deleting a holding Relationship from an item . . . 95
- e) Deleting an explicit ACL from an item . . . 95
- f) Modifying an ACL associated with an item . . . 95
- F. NOTIFICATIONS AND CHANGE TRACKING . . . 95
- 1. Storage Change Events . . . 96
- a) Events . . . 96
- b) Watchers . . . 97
- 2. Change Tracking and Notification Generation Mechanism . . . 98
- a) Change Tracking . . . 100
- b) Timestamp Management . . . 101
- c) Data Change Detection—Event Detection . . . 101
- G. SYNCHRONIZATION . . . 102
- 1. Storage Platform-to-Storage Platform Synchronization . . . 103
- a) Synchronization (Sync) Controlling Applications . . . 103
- Propagation of Conflict Resolutions 112
- 2. Synchronizing to non-storage platform data stores . . . 113
- Platform files . . . 119
- Objects . . . 137
- (2) Searching for Objects . . . 138
- (a) Search Options 139
- (b) FindOne and FindOnly 140
- (c) Search Shortcuts on ItemContext 140
- Source and Target Items . . . 156
- (3) Navigating from Source Items to Relationships . . . 157
- Storage Platform . . . 175
- (2) Creating Files and Directories in the storage platform Namespace . . . 175
- b) File Schema . . . 176
- c) Overview of System.Storage.Files . . . 176
- d) Code Examples . . . 176
- (1) Opening a File and Writing to It . . . 177
- (2) Using Queries . . . 177
- e) Domain Behaviors . . . 178
- J. CONCLUSION . . . 178
I. Introduction
The subject matter of the present invention. Exemplary Computing illustrated in the block diagram of
In various embodiments of a computer system 200, and referring back to
The applications programs component 206 comprises various software programs including but not limited to compilers, database systems, word processors, business programs, videogames, and so forth. Application programs provide the means by which computer resources are utilized to solve problems, provide solutions, and process data for various users (machines, other computer systems, and/or end-users)..
The hardware/software interface system is generally loaded into a computer system at startup and thereafter manages all of the application programs in the computer system. The application programs interact with the hardware/software interface system by requesting services via an application program interface (API). Some application programs enable end-users to interact with the hardware/software interface system via a user interface such as a command language or a graphical user interface (GUI).
A hardware/software interface system traditionally performs a variety of services for applications. In a multitasking hardware/software interface system where multiple programs may be running at the same time, the hardware/software interface system determines which applications should run in what order and how much time should be allowed for each application before switching to another application for a turn. The hardware/software interface system also manages the sharing of internal memory among multiple applications, and handles input and output to and from attached hardware devices such as hard disks, printers, and dial-up ports. The hardware/software interface system also sends messages to each application (and, in certain case, to the end-user) regarding the status of operations and any errors that may have occurred. The hardware/software interface system can also offload the management of batch jobs (e.g., printing) so that the initiating application is freed from this work and can resume other processing and/or operations. On computers that can provide parallel processing, a hardware/software interface system also manages dividing a program so that it runs on more than one processor at a time.
A hardware/software interface system shell (simply referred to herein as a “shell”) is an interactive end-user interface to a hardware/software interface system. (A shell may also be referred to as a “command interpreter” or, in an operating system, as an “operating system shell”). A shell is the outer layer of a hardware/software interface system that is directly accessible by application programs and/or end-users. In contrast to a shell, a kernel is a hardware/software interface system's innermost layer that interacts directly with the hardware components.
While it is envisioned that numerous embodiments of the present invention are particularly well-suited for computerized systems, nothing in this document is intended to limit the invention to such embodiments. On the contrary, as used herein the term “computer system” is intended to encompass any and all devices capable of storing and processing information and/or capable of using the stored information to control the behavior or execution of the device itself, regardless of whether such devices are electronic, mechanical, logical, or virtual in nature.. A New Storage Platform for Organizing, Searching, and Sharing Data
The present invention
The data store 302 implements a data model 304 that supports the organization, searching, sharing, synchronization, and security of data. Specific types of data are described in schemas, such as schemas 340, and the storage platform 300 provides tools 346 for deploying those schemas as well as for extending those schemas, as described more fully below.ID. The Base.Item type defines a field ItemID of type GUID that stores the identity for the Item. An Item must have exactly one identity in the data store 302.
a) Item References.
(1) ItemIDReference
(2) ItemPathReference.
b) Reference Type Hierarchy York City
FIG. 8B, the specific property types supported by the Core Schema may include one or more of the following:
-S. 8A andItem:
Item vs Item Extension vs NestedElement.
Item extensions vs following example illustrates the basics of UDTs. Assume that MapLib.dll has the assembly called MapLib. In this assembly, there's a class called Point, under the namespace BaseTypes:
The following T-SQL code binds the class Point to a SQL Server UDT called Point. The first step invokes “CreateAssembly”, which loads the MapLib assembly into the database. The second step invokes “Create Type” to create the User Defined Type “Point” and bind it to the managed type BaseTypes.Point:
Once created, the “Point” UDT can be used as a column in a table and methods can be invoked in T-SQL as shown below:.
The Item hierarchy illustrated in. For the example Item hierarchy shown in structure of the global Item table is as follows:. This function has the following declaration: Base.Item Base.GetItem (uniqueidentifier. Views for the example Item hierarchy shown in
For completeness, a view may also be created over the global Item table. This view may initially expose the same columns as the table:. The Extension table has the following definition:Id, ExtensionId). A Relationship is identified by a composite key (Item:
- 1. Logical “key” column (s) of view result such as ItemId, ElementId, RelationshipId, . . .
- 2. Metadata information on type of result such as TypeId.
- 3. Change tracking columns such as CreateVersion, UpdateVersion, . . .
- 4. Type specific column(s) (Properties of the declared type)
- 5.
(1) Master Item Search View]”.
(2) Typed Item Search Views.
(1) Master Extension Search View]”.
(2) Typed Extension Search Views.
(1) Master Relationship Search View
Each data store provides a Master Relationship View. This view provides information on all relationship instances in the data store. The master relationship view is identified in a data store using the name “[System.Storage].[Master!Relationship]”.
(2) Relationship Instance Search Views].
(1) Change Tracking in “Master” Search Viewslnfo contains all this information. The type is defined in the System.Storage schema. _ChangeTrackinglnfo is available in all global search views for Item, Extension and Relationship. The type definition of ChangeTrackingInfo is:
These properties contain the following information:
(2) Change Tracking in “Typed” Search Views).
(1) Item Tombstones
Item tombstones are retrieved from the system via the view [System.Storage]. [Tombstone!Item].
(2) Extension Tombstones
Extension tombstones are retrieved from the system using the view [System.Storage].[Tombstone!Extension]. Extension change tracking information is similar to that provided for Items with the addition of the ExtensionId property.
(3) Relationships Tombstone
Relationship tombstones are retrieved from the system via the view [System.Storage].[Tombstone!Relationship]. Relationships tombstone information is similar to that provided for Extensions. However, additional information is provided on the target ItemRef of the relationship instance. In addition, the relationship object is also selected.
(4) Tombstone CleanupId for an Item, an application can query the global item view to return the type of the Item and use this value to query the Meta.Type view to return information on the declared type of the Item. For example,
- //Return metadata Item object for given Item instance//
- SELECT m._Item AS metadataInfoObj
- FROM [System.Storage].[Item] i INNER JOIN [Meta].[Type] m ON i._TypeId=m.ItemId WHERE i.Itemld=@ItemId
E. Security
This section describes a security model for the storage platform of the present invention, in accordance with one embodiment
1. Overview
In accordance with the present embodiment, the granularity at which the security policy of the storage platform is specified and enforced is at the level of various operations on an item in a given data store; there is no ability to secure parts of an item separately from the whole. The security model specifies the set of principals who can be granted or denied access to perform these operations on an item through Access Control Lists (ACL's). Each ACL is an ordered collection of Access Control Entries (ACE's).
The security policy for an item can be completely described by the discretionary access control policy and the system access control policy. Each of these is a set of ACL's. The first set (DACL's) describes the discretionary access granted to the various principals by the owner of the item while the second set of ACL's is referred to as the SACL's (System Access Control Lists) which specify how the system auditing is done when an object is manipulated in certain ways. In addition to these, each item in the data store is associated with a SID that corresponds to the owner of the item (Owner SID).
The primary mechanism for organizing items in a storage platform data store is that of the containment hierarchy. The containment hierarchy is realized using holding relationships between items. The holding relationship between two items A and B expressed as “A contains B” enables the item A to influence the lifetime of the item B. Generally, an item in the data store cannot exist until there is a holding relationship from another item to it. The holding relationship, in addition to controlling the lifetime of the item, provides the necessary mechanism for propagating the security policy for an item.
The security policy specified for each item consists of two parts—a part that is explicitly specified for that item and a part that is inherited from the parent of the item in the data store. The explicitly defined security policy for any item consists of two parts—a part that governs access to the item under consideration and a part that influences the security policy inherited by all its descendants in the containment hierarchy. The security policy inherited by a descendant is a function of the explicitly defined policy and the inherited policy.
Since the security policy is propagated through holding relationships and can also be overridden at any item, it is necessary to specify how the effective security policy for an item is determined. In the present embodiment, an item in the data store containment hierarchy inherits an ACL along every path from the root of the store to the item.
Within the inherited ACL for any given path, the ordering of the various ACE's in the ACL determines the final security policy that is enforced. The following notation is used to describe the ordering of ACE's in an ACL. The ordering of the ACE's in an ACL that is inherited by an item is determined by the following two rules
The first rule stratifies the ACEs inherited from the various items in a path to the item I from the root of the containment hierarchy. The ACE's inherited from a closer container takes precedence over the entries inherited from a distant container. Intuitively, this allows an administrator the ability to override ACE's inherited from farther up in the containment hierarchy. The rule is as follows:
- For all inherited ACL's L on item I
- For all items I1, I2
- For all ACE's A1 and A2 in L,
- I1 is an ancestor of I2 and
- I2 is an ancestor of I3 and
- A1 is an ACE inherited from I1 and
- A2 is an ACE inherited from I2
- Implies
- A2 precedes A1 in L
The second rule orders the ACE's that deny access to an item ahead of the ACE's that grant access to an item.
- For all inherited ACL's L on item I
- For all items I1
- For all ACE's A1 and A2 in L,
- I1 is an ancestor of I2 and
- A1 is an ACCESS_DENIED_ACE inherited from 11 and
- A2 is an ACCESS_GRANTED_ACE inherited from I1
- Implies
- A1 precedes A2 in L
In the case of a containment hierarchy being a tree, there is exactly one path from the root of the tree to the item and the item has exactly one inherited ACL. Under these circumstances, the ACL inherited by an item matches the ACL inherited by a file (item) in the existing Windows security model in terms of the relative ordering of the ACE's within them.
However, the containment hierarchy in the data store is a directed acyclic graph (DAG) because multiple holding relationships are permitted to items. Under these conditions, there are multiple paths to an item from the root of the containment hierarchy. Since an item inherits an ACL along every path each item is associated with a collection of ACL's as opposed to a single one. Note that this is different from the traditional file system model, where exactly one ACL is associated with a file or folder.
There are two aspects that need to be elaborated when the containment hierarchy is a DAG as opposed to a tree. A description is needed of how the effective security policy for an item is computed when it inherits more than one ACL from its parents, and how they are organized and represented has a direct bearing on the administration of the security model for a storage platform data store.
The following algorithm evaluates the access rights for a given principal to a given item. Throughout this document, the following notation is used to describe the ACL's associated with an item.
- Inherited_ACLs(ItemId)—the set of ACL's inherited by an item whose item identity is Itemld from it's parents in the store.
- Explicit_ACL(ItemId)—the ACL explicitly defined for the item whose identity is ItemId.
The above routine returns STATUS_SUCCESS if the desired access was not explicitly denied, and the pGrantedAccess determines which of the rights desired by the user were granted by the specified ACL. If any of the desired access was explicitly denied, the routine returns STATUS_ACCESS_DENIED.
The sphere of influence of the security policy defined at any item covers all the descendants of the item in the containment hierarchy defined on the data store. For all items where in an explicit policy is defined we are in effect defining a policy that is inherited by all its descendants in the containment hierarchy. The effective ACL's inherited by all of the descendants is obtained by taking each of the ACL's inherited by the item and adding the inheritable ACE's in the explicit ACL to the beginning of the ACL. This is referred to as the set of inheritable ACL's associated with the item.
In the absence of any explicit specification of security in the containment hierarchy rooted at a folder item, the security specification of the folder applies to all the descendants of that item in the containment hierarchy. Thus, every item for which an explicit security policy specification is provided, defines a region of identically.
However, for containment hierarchies that are DAGs, the points in the containment hierarchy at which the effective security policy changes is determined by two kinds of items. The first is items for which an explicit ACL has been specified. Typically these are the points in the containment hierarchy where in the administrator has explicitly specified an ACL. The second is items that have more than one parent, and the parents have different security policies associated with them. Typically, these are the items that are the confluence points of security policy specified for the volume and indicate the beginning of a new security policy.
With this definition, all the items in the data store fall into one of two categories—those that are the root of an identically protected security region and those that are not. The items that do not define security regions belong to exactly one security region. As in the case of trees, the effective security for an item can be specified by specifying the region to which an item belongs along with the item. This leads to a straight forward model for administering the security of a storage platform data store based upon the various identically protected regions in the store.
2. Detailed Description of the Security Model
This section provide details of how items are secured by describing how the individual rights within a Security Descriptor and its contained ACL's affect various operations.
a) Security Descriptor Structure
Before describing the details of the security model, a basic discussion of security descriptors is helpful. A security descriptor contains the security information associated with a securable object. A security descriptor consists of a SECURITY_DESCRIPTOR structure and its associated security information. A security descriptor can include the following security information:
- 1. SID's for the owner and primary group of an object.
- 2. A DACL that specifies the access rights allowed or denied to particular users or groups.
- 3. A SACL that specifies the types of access attempts that generate audit records for the object.
4. A set of control bits that qualify the meaning of a security descriptor or its individual members.
Preferably, applications are not able to directly manipulate the contents of a security descriptor. There are functions for setting and retrieving the security information in an object's security descriptor. In addition, there are functions for creating and initializing a security descriptor for a new object.. A SACL may also raise an alarm when an unauthorized user attempts to gain access to an object.
All types of ACEs contain the following access control information:
- 1. A security identifier (SID) that identifies the trustee to which the ACE applies.
- 2. An access mask that specifies the access rights controlled by the ACE.
- 3. A flag that indicates the type of ACE.
- 4. A set of bit flags that determine whether child containers or objects can inherit the ACE from the primary object to which the ACL is attached.
The following table lists the three ACE types supported by all securable objects.
(1) Access Mask Format
All securable objects arrange their access rights using the access mask format shown in the
(2) Generic Access Rights
Generic rights are specified in the 4 high-order bits within the mask. Each type of securable object maps these bits to a set of its standard and object-specific access rights. For example, a.
Generic access rights can be used to specify the type of access needed when opening a handle to an object. This is typically simpler than specifying all the corresponding standard and specific rights. The following table shows the constants defined for the generic access rights.
(3) shows the constants defined for the standard access rights.
b) Item Specific Rights
In the access mask structure of
(1) File and Directory Object Specific Rights
Consider the following table:
Referring to the foregoing table, note that file systems make a fundamental distinction between files and directories, which is why the file and directory rights overlap on the same bits. File systems define very granular rights, allowing applications to control behavior on these objects. For instance they allow applications to distinguish among Attributes (FILE_READ_/WRITE_ATTRIBUTES), Extended Attributes and the DATA stream associated with the file.
A goal of the security model of the storage platform of the present invention is to simplify the rights assignment model so applications operating on data store items (Contacts, Emails, etc.) generally do not have a need to distinguish between attributes, extended attributes and data streams, for example. However, for files and folders, the granular Win32 rights are preserved and the semantics of access via the storage platform are defined so that compatibility with Win32 applications can be provided. This mapping is discussed with each of the item rights specified below.
The following item rights are specified with their associated allowable operations. The equivalent Win32 rights backing each of these item rights is also provided.
(2) WinFSItemRead
This right allows read access to all elements of the item, including the items linked to the item via embedded relationships. It also allows enumeration of items linked to this item via holding relationships (a.k.a., directory listing). This includes the names of items linked via reference relationships. This right maps to:
File:
- (FILE_READ_DATA|SYNCHRONIZE)
Folder:
- (FILE_LIST_DIRECTORY|SYNCHRONIZE)
The semantics are that a security application could set WinFSItemReadData and specify the rights mask as a combination of the file rights specified above.
(3) WinFSItemReadAttributes
This right allows read access to basic attributes of the Item, much as file systems distinguish between basic file attributes and data streams. Preferably, these basic attributes are those that reside in the base item that all items derive from. This right maps to:
File:
- (FILE_READ_ATTRIBUTES)
Folder:
- :
- (FILE_WRITE_ATTRIBUTES)
Folder:
- (FILE_WRITE_ATTRIBUTES)
(5) WinFSItemWrite
This right allows the ability to write to all elements of the item, including items linked via embedded relationships. This right also allows the ability to add or delete embedded relationships to other items. This right maps to:
File:
- (FILE_WRITE_DATA)
Folder:
- (FILE_ADD_FILE)
In the storage platform data store, there is no distinction between items and folders, since items can also have holding Relationships to other items in the data store. Hence, if you have FILE_ADD_SUBDIRECTORY (or FILE_APPEND_DATA) rights, you can have an item be the source of Relationships to other items.
(6) WinFSItemAddLink
This right allows the ability to add holding Relationships to items in the store. It should be noted that since the security model for multiple holding Relationships changes the security on an item and the changes can bypasses WRITE_DAC if coming from a higher point in the hierarchy, WRITE_DAC is required on the destination item in order to be able to create a Relationship to it. This right maps to:
File:
- (FILE_APPEND_DATA)
Folder:
(FILE_ADD_SUBDIRECTORY)
(7) WinFSItemDeleteLink
This right allows the ability to delete a holding to an item even if the right to delete that item is not granted to the principal. This is consistent with the file system model and helps with purge. This right maps to:
File:
- (FILE_DELETE_CHILD)—Note that file systems do not have a file equivalent to this right, but we have the notion of items having holding Relationships to others and hence carry this right for non-folders as well.
Folder:
- (FILE_DELETE_CHILD)
(8) Rights to Delete an Item
An item gets deleted if the last holding Relationship to the item disappears. There is no explicit notion of deleting an item. There is a purge operation which deletes all holding Relationships to an item, but that is a higher level facility and not a system primitive.
Any item specified using a path can be unlinked if either one of two conditions is satisfied: (1) the parent item along that path grants write access to the subject, or (2) the standard rights on the item itself grant DELETE. When the last Relationship is removed, the item disappears from the system. Any item specified using the ItemID can be unlinked if the standard rights on the item itself grant DELETE.
(9) Rights to Copy an Item
An item can be copied from a source to a destination folder if the subject is granted WinFSItemRead on the item and WinFSItemWrite on the destination folder.
(10) Rights to Move an Item
Move file in the file system requires just the DELETE right on the source file and FILE_ADD_FILE on the destination directory, since it preserves the ACL on the destination. However, a flag can be specified in the MoveFileEx call (MOVEFILE_COPY_ALLOWED) that lets an application specify that it in the case of a cross-volume move, it can tolerate CopyFile semantics. There are 4 potential choices with respect to what happens with the security descriptor upon a move:
-.
In the present security model, if an application specifies the MOVEFILE_COPY_ALLOWED flag, the fourth option is performed for both the inter- and intra-volume cases. If this flag is not specified, the second option is performed unless the destination is also in the same security region (i.e., same inheritance semantics). A storage platform level move implements the fourth choice as well and requires READ_DATA on the source, much as a copy would.
(11) Rights to View the Security Policy on an Item
An item's security can be viewed if the item grants the standard right READ_CONTROL to the subject.
(12) Rights to Change the Security Policy on an Item
An item's security can be changed if the item grants the standard right WRITE_DAC to the subject. However, since the data store provides implicit inheritance, this has implications on how security can be changed on hierarchies. The rule is that if the root of the hierarchy grants WRITE_DAC, then the security policy is changed on the entire hierarchy regardless of whether specific items within the hierarchy (or DAG) do not grant WRITE_DAC to the subject.
(13) Rights that Don't have a Direct Equivalent
In the present embodiment, FILE_EXECUTE (FILE_TRAVERSE for directories) do not have a direct equivalent in the storage platform. The model keeps these for Win32 compatibility, but does not have any access decisions made for items based on these rights. As for FILE_READ/WRITE_EA, because data store items do not have notions of extended attributes, semantics for this bit are not provided. However, the bit remains for Win32 compatibility.
3. Implementation
All the items that define identically protected regions have an entry associated with them in a security table. The security table is defined as follows:
The Item Identity entry is the Item Identity of the root of an identically protected security region. The Item Ordpath entry is the ordpath associated with the root of the identically protected security region. The Explicit Item ACL entry is the explicit ACL defined for the root of the identically protected security region. In some cases this can be NULL, e.g., when a new security region is defined because the item has multiple parents belonging to different regions. The Path ACLs entry is the set of ACL's inherited by the item, and the Region ACLs entry is the set of ACL's defined for the identically protected security region associated with the item.
The computation of effective security for any item in a given store leverages this table. In order to determine the security policy associated with an item, the security region associated with the item is obtained and the ACL's associated with that region are retrieved.
As the security policy associated with an item is changed either by directly adding explicit ACL's or indirectly by adding holding Relationships that result in the formation of new security regions, the security table is kept up to date to ensure that the above algorithm for determining the effective security of an item is valid.
The various changes to the store and the accompanying algorithms to maintain the security table are as follows:
a) Creating a New Item in a Container
When an item is newly created in a container, it inherits all the ACL's associated with the container. Since the newly created item has exactly one parent it belongs to the same region as its parent. Thus there is no need to create a new entry in the security table.
b) Adding an Explicit ACL to an Item.
When an ACL is added to an item, it defines a new security region for all its descendants in the containment hierarchy that belong to the same security region as the given item itself. For all those items which have multiple holding Relationships with ancestors that straddle the old security region and the newly defined security region. For all such items a new security region needs to be defined and the procedure repeated.
The following sequence of updates to the security tables reflect the factoring of the identically protected security regions.
c) Adding a Holding Relationship to an Item
When a holding Relationship is added to an item it gives rise to one of three possibilities. If the target of the holding Relationship, i.e., the item under consideration is the root of a security region, the effective ACL associated with the region is changed and no further modifications to the security table is required. If the security region of the source of the new holding Relationship is identical to the security region of the existing parents of the item no changes are required. However, if the item now has parents that belong to different security regions, then a new security region is formed with the given item as the root of the security region. This change is propagated to all the items in the containment hierarchy by modifying the security region associated with the item. All the items that belong to the same security region as the item under consideration and its descendants in the containment hierarchy need to be changed. Once the change is made, all the items that have multiple holding Relationships must be examined to determine if further changes are required. Further changes may be required if any of these items have parents of different security regions.
d) Deleting a Holding Relationship from an Item
When a holding Relationship is deleted from an item it is possible to collapse a security region with its parent region if certain conditions are satisfied. More precisely this can be accomplished under the following conditions: (1) if the removal of the holding Relationship results in an item that has one parent and no explicit ACL is specified for that item; (2) if the removal of the holding Relationship results in an item whose parent's are all in the same security region and no explicit ACL is defined for that item. Under these circumstances the security region can be marked to be the same as the parent. This marking needs to be applied to all the items whose security region corresponds to the region being collapsed.
e) Deleting an Explicit ACL from an Item
When an explicit ACL is deleted from an item, it is possible to collapse the security region rooted at that item with that of its parents. More precisely, this can be done if the removal of the explicit ACL results in an item whose parents in the containment hierarchy belong to the same security region. Under these circumstances, the security region can be marked to be the same as the parent and the change applied to all the items whose security region corresponds to the region being collapsed.
f) Modifying an ACL Associated with an Item
In this scenario, no new additions to the security table are required. The effective ACL associated with the region is updated and the new ACL change is propagated to the security regions that are affected by it..
1. Storage Change Events
This section provide a few examples of how the notification interfaces provided by the storage platform API 322 are used.
a) Events
Items, ItemExtensions and ItemRelationships expose data change events which are used by applications to register for data change notifications. The following code sample shows the definition of the ItemModified and ItemRemoved event handlers on the base Item class.
All notifications carry sufficient data to retrieve the changed item from the data store. The following code sample shows how to register for events on an Item, ItemExtension, or ItemRelationship:
In the present embodiment, the storage platform guarantees that applications will be notified if the respective item has been modified or deleted since last delivering a notification or in case of a new registration since last fetched from the data store.
b) Watchers
In the present embodiment, the storage platform defines watcher classes for monitoring objects associated with a (1) folder or folder hierarchy, (2) an item context or (3) a specific item. For each of the three categories, the storage platform provides specific watcher classes which monitor associated items, item extensions or item relationships, e.g. the storage platform provides the respective FolderItemWatcher, FolderRelationshipWatcher and FolderExtensionWatcher classes.
When creating a watcher, an application may request notifications for pre-existing items, i.e. items, extensions or relationships. This option is mostly for applications which maintain a private item cache. If not requested, applications receive notifications for all updates which occur after the watcher object has been created.
Together with delivering notifications, the storage platform supplies a “WatcherState” object. The WatcherState can be serialized and saved on disk. The watcher state can subsequently be used to re-create the respective watcher after a failure or when reconnecting after going off-line. The newly re-created watcher will re-generate un-acknowledged notifications. Applications indicate delivery of a notification by calling the “Exclude” method on the respective watcher state supplying a reference to a notification.
The storage platform delivers separate copies of the watcher state to each event handler. Watcher states received on subsequent invocations of the same event handler presume delivery of all previously received notifications.
By way of example, the following code sample shows the definition of a FolderItemWatcher.
The following code sample shows how to create a folder watcher object for monitoring the contents of a folder. The watcher generates notifications, i.e. events, when new music items are added or existing music items are updated or deleted. Folder watchers either monitor a particular folder or all folders within a folder hierarchy.
2. Change Tracking and Notification Generation Mechanism
The storage platform provides a simple, yet efficient mechanism to track data changes and generate notifications. A client retrieves notifications on the same connection used to retrieve data. This greatly simplifies security checks, removes latencies and constraints on possible network configurations. Notifications are retrieved by issuing select statements. To prevent polling, clients may use a “waitfor” feature provided by the database engine 314.
A combination of “waitfor” and “select” is attractive for monitoring data changes which fit into a particular data range as changes can be monitored by setting a notification lock on the respective data range. This holds for many common storage platform scenarios. Changes to individual items can be efficiently monitored by setting notification locks on the respective data range. Changes to folders and folder trees can be monitored by setting notification locks on path ranges. Changes to types and its subtypes can be monitored by setting notification locks on type ranges.
In general, there are three distinct phases associated with processing notifications: (1) data change or even detection, (2) subscription matching and (3) notification delivery. Excluding synchronous notification delivery, i.e. notification delivery as part of the transaction performing the data change, the storage platform can implement two forms of notification delivery:
- 1) Immediate Event Detection: Event detection and subscription matching is performed as part of the update transaction. Notifications are inserted into a table monitored by the subscriber; and
- 2) Deferred Event Detection: Event detection and subscription matching is performed after the update transaction has been committed. Subsequently the actual subscriber or an intermediary detects events and generates notifications.
Immediate event detection requires additional code to be executed as part of update operations. This allows the capture of all events of interest including events indicating a relative state change.
Deferred event detection removes the need to add additional code to update operations. Event detection is done by the ultimate subscriber. Deferred event detection naturally batches event detection and event delivery and fits well with the query execution infrastructure of the database engine 314 (e.g., SQL Server).
Deferred event detection relies on a log or trace left by update operations. The storage platform maintains a set of logical timestamps together with tombstones for deleted data items. When scanning the data store for changes, clients supply a timestamp which defines a low watermark for detecting changes and a set of timestamps to prevent duplicate notifications. Applications might receive notifications for all changes which happened after the time indicated by the low watermark.
Sophisticated applications with access to core views can further optimize and reduce the number of SQL statements necessary to monitor a potentially large set of items by creating private parameter and duplicate filter tables. Applications with special needs such as those having to support rich views can use the available change tracking framework to monitor data changes and refresh their private snapshots.
Preferably, therefore, in one embodiment, the storage platform implements a deferred event detection approach, as described more fully below.
a) Change Tracking
All items, extensions and item relationship definitions carry a unique identifier. Change tracking maintains a set of logical timestamps to record creation, update and deletion times for all data items. Tombstone entries are used to represent deleted data items.
Applications use that information to efficiently monitor whether a particular item, item extension or item relationship has been newly added, updated or deleted since the application last accessed the data store. The following example illustrates this mechanism.
All deleted items, item extensions and relationships are recorded in a corresponding tombstone table. A template is shown below.
For efficiency reasons, the storage platform maintains a set of global tables for items, item extensions, relationships and pathnames. Those global lookup tables can be used by applications to efficiently monitor data ranges and retrieve associated timestamp and type information.
b) Timestamp Management
Logical timestamps are “local” to a database store, i.e. storage platform volume. Timestamps are monotonically increasing 64-bit values. Retaining a single timestamp is often sufficient to detect whether a data change occurred after last connecting to a storage platform volume. However, in most realistic scenarios, a few more timestamps need to be kept to check for duplicates. The reasons are explained below.
Relational database tables are logical abstractions built on top of a set of physical data structures, i.e. B-Tree's, heaps etc. Assigning a timestamp to a newly created or updated record is not an atomic action. Inserting that record into the underlying data structures may happen at different times, thus applications may see records out of order.
c) Data Change Detection—Event Detection
When querying the data store, applications obtain a low watermark. Subsequently, applications use that watermark to scan the data store for entries whose creation, update or delete timestamp is greater than the low watermark returned.
To prevent duplicate notifications, applications remember timestamps which are greater than the low watermark returned and use those to filter out duplicates. Applications create session local temporary tables to efficiently handle a large set of duplicate timestamps. Before issuing a select statement, an application inserts all duplicate timestamps previously returned and deletes those which are older than the last low watermark returned, as illustrated below.
G. Synchronization
According to another aspect of the present invention, the storage platform provides a synchronization service 330 that (i) allows multiple instances of the storage platform (each with its own data store 302) to synchronize parts of their content according to a flexible set of rules, and (ii) provides an infrastructure for third parties to synchronize the data store of the storage platform of the present invention with with other data sources that implement proprietary protocols. meta-data kept by the system. Much of the synchronization service meta-data.
(1) Community Folder—Mappings.
(2) Profiles.
(3) Schedules.
(1) Conflict Detection
In the present embodiment, the synchronization service detects two types of conflicts: knowledge-based and constraint-based.
(a) Knowledge-Based Conflicts.
(b) Constraint-Based Conflicts.
(2) Conflict Processing.
(a) Automatic Conflict Resolution.
(b) Conflict Logging.
(c) Conflict Inspection and Resolution.
(d) Convergence of Replicas and Propagation of Conflict Resolutions”.
(1) Change Enumeration.
(2) Change Application meta-data. Such data may be attached by the adapter to the changes being applied, and might be stored by the synchronization service. The data might be returned on next change enumeration.
(3) Conflict Resolution meta-data..
1. Model for Interoperability
According to this aspect of the present invention, and in accordance with the exemplay embodiment discussed above, the storage platform implements one namespace in which non-file and file items can be organized. With this model, the following advantages are achieved:
-.
As a consequence of this model, in the present embodiment, search capabilities may not be provided over data sources that are not migrated into the storage platform data store. This includes removable media, remote servers and files on the local disk. A Sync Adapter is provided which manifests proxy items (shortcuts+promoted metadata) in the storage platform for items residing in foreign file systems. Proxy items do not attempt to mimic files either in terms of the namespace hierarchy of the data source or in terms of security.
The symmetry achieved on the namespace and programming model between file and non-file content provides a better path for applications to migrate content from file systems to more structured items in the storage platform data store over time. By providing a native file item type in the storage platform data store, application programs can transition file data into the storage platform while still being able to manipulate this data via Win32. Eventually, application programs might migrate to the storage platform API completely and structure their data in terms of storage platform Items rather than files.
2. Data Store Features
In order to provide the desired level of interoperability, in one embodiment, the following features of the storage platform data store are implemented.
a) Not a Volume
The storage platform data store is not exposed as a separate file system volume. The storage platform leverages FILESTREAMs directly hosted on NTFS. Thus, there is no change to the on-disk format, thereby obviating any need to expose the storage platform as a new file system at the volume level.
Instead, a data store (namespace) is constructed corresponding to an NTFS volume. The database and FILESTREAMs backing this portion of the namespace is located on the NTFS volume with which the storage platform data store is associated. A data store corresponding to the system volume is also provided.
b) Store Structure
The structure of the store is best illustrated with an example. Consider, as an example, the directory tree on the system volume of a machine named HomeMachine, as illustrated in
In this embodiment, files and/or folders need to be migrated from NTFS to the storage platform explicitly. So, if a user desires to move the My Documents folder into the storage platform data store in order to avail his or herself of all the extra search/categorization features offered by the storage platform, the hierarchy would look as shown in
c) Not all Files are Migrated
Files that correspond to user data or that need the searching/categorization that the storage platform provides are candidates for migration into the storage platform data store. Preferably, in order to limit issues of application program compatibility with the storage platform, the set of files that are migrated to the storage platform of the present invention, in the context of the Microsft Windows operating system, are limited to the files in the MyDocuments folder, Internet Explorer (IE) Favorites, IE History, and Desktop.ini files in the Documents and Settings directory. Preferably, migrating Windows system files is not permitted.
d) NTFS Namespace Access to Storage Platform Files
In the embodiment described herein, it is desirable that files migrated into the storage platform not be accessed via the NTFS namespace even though the actual file streams are stored in NTFS. This way, complicated locking and security considerations that arise from a multi-headed implementation are avoided.
e) Expected Namespace/Drive Letters
Access to files and folders in the storage platform is provided via a UNC name of the form \\<machine name>\<WinfsShareName>. For the class of applications that require drive letters for operation, a drive letter can be mapped to this UNC name.
I. Storage Platform API
As mentioned above, the storage platform comprises an API that enables application programs to access the features and capabilities of the storage platform discussed above and to access items stored in the data store. This section describes one embodiment of a storage platform API of the storage platform of the present invention.
1. Overview
The data access mechanism of the present embodiment of the storage platform API of the present invention addresses four areas: query, navigation, actions, events.
Query
In one embodiment, the storage platform data store is implemented on a relational database engine 314; as a result, the full expressive power of the SQL language is inherent in the storage platform. Higher level query objects provide a simplified model for querying the store, but may not encapsulate the full expressive power of the storage.
Navigation
The storage platform data model builds a rich, extensible type system on the underlying database abstractions. For the developer, the storage platform data is a web of items. The storage platform API enables navigation from item to item via filtering, relationships, folders, etc. This is a higher level of abstraction than the base SQL queries; at the same time, it allows rich filtering and navigation capabilities to be used with familiar CLR coding patterns.
Actions
The storage platform API exposes common actions on all items—Create, Delete, Update; these are exposed as methods on objects. In addition, domain specific actions such as SendMail, CheckFreeBusy, etc. are also available as methods. The API framework uses well defined patterns that ISVs can use to add value by defining additional actions.
Events
Data in the storage platform is dynamic. To let applications react when data in the store is changed, the API exposes rich eventing, subscription, and notification capabilities to the developer.
2. Naming and Scopes
It is useful to distinguish between namespace and naming. The term namespace, as it's commonly used, refers to the set of all names available within some system. The system could be an XML schema, a program, the web, the set of all ftp sites (and their contents), etc. Naming is the process or algorithm used to assign unique names to all entities of interest within a namespace. Thus, naming is of interest because it is desirable to unambiguously refer to a given unit within a namespace. Thus, the term “namespace,” as used herein, refers to the set of all names available in all the storage platform instances in the universe. Items are the named entities in the the storage platform namespace. The UNC naming convention is used to ensure uniqueness of item names. Every item in every the storage platform store in the universe is addressable by a UNC name.
The highest organizational level in the the storage platform namespace is a service—which is simply an instance of the storage platform. The next level of organization is a volume. A volume is the largest autonomous container of items. Each storage platform instance contains one or more volumes. Within a volume are items. Items are the data atoms in the storage platform.
Data in the real world is almost always organized according to some system that makes sense in a given domain. Underlying all such data organization schemes is the notion of dividing the universe of our data into named groups. As discussed above, this notion is modeled in the storage platform by the concept of a Folder. A Folder is a special type of Item; there are 2 types of Folders: Containment Folders and Virtual Folders.
Referring to
A Virtual Folder is a more dynamic way of organizing a collection of Items; it is simply a name given a set of Items—the set is either enumerated explicitly or specified by a query. The Virtual Folder is itself an Item and can be thought of as representing a set of (non-holding) Relationships to a set of Items.
Sometimes, there is the need to model a tighter notion of containment; for example, a Word document embedded in an email message is, in a sense, bound more tightly to its container than, for example, a file contained within a folder. This notion is expressed by the concept of Embedded Items. An Embedded Item has a special kind of relationship which references another Item; the referenced Item can be bound to or otherwise manipulated only within the context of the containing Item.
Finally, the storage platform provides the notion of categories as a way of classification of Items and Elements. Every Item or Element in the storage platform can have associated with it one or more categories. A category is, in essence, simply a name that is tagged on to the Item/Element. This name can be used in searches. The storage platform data model allows the definition of a hierarchy of categories, thus enabling a tree-like classification of data.
An unambiguous name for an item is the triplet: (<serviceName, <volumeID>, <ItemID>). Some items (specifically, Folders and VirtualFolders) are collections of other items. This gives rise to an alternative way of identifying items: (<serviceName>, <volumeID>, <itemPath>).
The storage platform names include the notion of a service context: a service context is a name which maps to a (<volumeName>, <path>) pair. It identifies an item or a set of items—for instance, a folder, virtual folder, etc. With the concept of service contexts, the UNC name for any item in the the storage platform namespace becomes:
- \<serviceName>\<serviceContext>\<itemPath>
Users can create and delete service contexts. Also, the root directory in each volume has a pre-defined context: volume-name$.
An ItemContext scopes a query (for example, a Find operation) by limiting the results returned to those Items that live within a specified path.
3. Storage Platform API Components
According to one aspect of the present invention, at design time, the schema author submits a schema document 2010 and code for domain methods 2012 to the set of storage platform API design time tools 2008. These tools generate the client side data classes 2002 and the store schema 2014 and store class definitions 2016 for that schema. “Domain” refers to a particular schema; for instance, we talk about domain methods for classes in the Contacts schema, etc. These data classes 2002 are used at runtime by the application developer, in concert with the storage platform API runtime framework classes 2006, to manipulate the storage platform data.
For purposes of illustrating various aspects of the storage platform API of the present invention, examples are presented based on an exemplary Contacts schema. A pictorial representation of this exemplary schema is illustrated in
4. Data Classes
According to an aspect of the present invention, each Item, Item Extension, and Element type, as well as each Relationship, in the storage platform data store has a corresponding class in the storage platform API. Roughly, the fields of the type map to the fields of the class. Each item, item extension, and element in the storage platform is available as an object of the corresponding class in the storage platform API. The developer can query for, create, modify, or delete these objects.
The storage platform comprises an initial set of schemas. Each schema defines a set of Item and Element types, and a set of Relationships. The following is one embodiment of an algorithm for generating data classes from these schema entities: For each schema S: For each Item, I, in S a class named System.Storage.S.I is generated. This class has the following members:
- Overloaded constructors, including constructors that allow a new item's initial folder and name to be specified.
- A property for each field in I. If the field is multi-valued, the property will be a collection of the corresponding Element type.
- An overloaded static method which finds multiple items matching the filter (for example, a method named “FindAll”).
- An overloaded static method which finds a single item matching a filter (for example, a method named “FindOne”).
- A static method which finds an item given its id (for example, a method named “FindByID”).
- A static method which finds an item given its name relative to an ItemContext (for example, a method named “FindByName”).
- A method which saves changes to the item (for example, a method named “Update”).
- Overloaded static Create methods which create new instances of the item. These methods allow the item's initial folder to be specified in various ways.
For each Element, E, in S a class named System.Storage.S.E is generated. This class has the following members:
- A property for each field in E. If the field is multi-valued, the property will be a collection of the corresponding Element types.
For each Element, E, in S a class named System.Storage.S.ECollection is generated. This class follows general NET Framework guidelines for strongly typed collection classes. For Relationship based element types, this class will also include the following members:
- An overloaded method which finds multiple Item objects that match a filter which implicitly includes the item in which the collection appears in the source role. The overloads include some that allow filtering based on Item sub-type (for example, a method named “FindAllTargetItems”).
- An overloaded method which finds a single Item object that matches a filter which implicitly includes the item in which the collection appears in the source role. The overloads include some that allow filter based on Item sub-type (for example, a method named “FindOneTargetItem”).
- An overloaded method which finds objects of the nested element type that match a filter which implicitly includes the item in which the collection appears in the source role (for example, a method named “FindAllRelationships”).
- An overloaded method whichs find objects of the nested element type that match a filter which implicitly includes the item in which the collection appears in the source role (for example, a method named “FindAllRelationshipsForTarget”).
- An overloaded method which finds a single object of the nested element type that matches a filter which implicitly includes the item in which the collection appears in the source role (for example, a method named “FindOneRelationship”).
- An overloaded method which finds a single object of the nested element type that matches a filter which implicitly includes the item in which the collection appears in the source role (for example, a method named “FindOneRelationshipForTarget”)..
The data classes exist in the System.Storage.<schemaName> namespace, where <schemaName> is the name of the corresponding schema—such as Contacts, Files, etc. For example, all classes corresponding to the Contacts schema are in the System.Storage.Contacts namespace.
By way of example, with reference to
- Items: Item, Folder, WellKnownFolder, LocalMachineDataFolder, UserDataFolder, Principal, Service, GroupService, PersonService, PresenceService, ContactService, ADService, Person, User, Group, Organization, HouseHold
- Elements: NestedElementBase, NestedElement, IdentityKey, SecurityID, EAddress, ContactEAddress, TelehoneNumber, SMTPEAddress, InstantMessagingAddress, Template, Profile, FullName, FamilyEvent, BasicPresence, WindowsPresence, Relationship, TemplateRelationship, LocationRelationship, FamilyEventLocationRelationship, HouseHoldLocationRelationship, RoleOccupancy, EmployeeData, GroupMemberShip, OrganizationLocationRelationship, HouseHoldMemberData, FamilyData, SpouseData, ChildData
By way of further example, the detailed structure of the Person type, as defined in the Contacts schema, is shown in XML below:
This type results in the following class (only the public members are shown):,
Yet another schema, the schema that allows representing all the audio/video media in the system (ripped audio files, audio CDs, DVDs, home videos, etc.), enables users/applications to store, organize, search through, and manipulate different kinds of audio/video media. The base media document schema is generic enough to represent any media, and the extensions to this base schema are designed to handle domain-specific properties separately for audio and video media. This schema, and many, many others, are envisioned to operate directly or indirectly under the Core Schema.
5. Runtime Framework
The basic storage platform API programming model is object persistence. Application programs (or “applications”) execute a search on a store and retrieve objects representing the data in the store. Applications modify the retrieved objects or create new objects, then cause their changes to be propagated into the store. This process is managed by an ItemContext object. Searches are executed using an ItemSearcher object and search results are accessible via a FindResult object.
a) Runtime Framework Classes
According to another inventive aspect of the storage platform API, the runtime framework implements a number of classes to support the operation of the data classes. These framework classes define a common set of behaviors for the data classes and, together with the data classes, provide the basic programming model for the storage platform API. Classes in the runtime framework belong to the System.Storage namespace. In the present embodiment, the framework classes comprise the following main classes: ItemContext, ItemSearcher, and FindResult. Other minor classes, enum values, and delegates may also be provided.
(1) ItemContext
An ItemContext object (i) represents a set of item domains that an application program wants to search, (ii) maintains state information for each object that represents the state of the data as retrieved from the storage platform, and (iii) manages the transactions used when interacting with the storage platform and any file system with which the storage platform may interoperate.
As an object persistence engine, ItemContext provides the following services:
- 1. Deserializes data read from the store into objects.
- 2. Maintains object identity (the same object is used to represent a given item no matter how many times that item is included in the result of queries).
- 3. Tracks object state.
ItemContext also performs a number of services unique to the storage platform:
- 1. Generates and executes the storage platform update gram operations necessary to persist changes.
- 2. Creates connections to multiple data stores as necessary to enable the seamless navigation of reference relationships and to allow objects retrieved from a multi-domain search to be modified and saved.
- 3. Insures that file backed items are properly updated when changes to the object(s) representing that item are saved.
- 4. Manages transactions across multiple storage platform connections and, when updating data stored in file backed items and file stream properties, the transacted file system.
- 5. Performs item creation, copy, move, and delete operations that take storage platform relationship semantics, file backed items, and stream typed properties into account.
Appendix A provides a source code listing of the ItemContext class, in accordance with one embodiment thereof.
(2) ItemSearcher
The ItemSearcher class supports simple searches, which return whole Item objects, streams of Item objects, or streams of values projected from Items. ItemSearcher encapsulates the core functionality that is common to all of these: the concept of a target type and parameterized filters that are applied to that target type. The ItemSearcher also allows searchers to be pre-compiled, or prepared, as an optimization when the same search will be executed multiple types. Appendix B provides a source code listing of the ItemSearcher class and several closely related classes, in accordance with one embodiment thereof.
(a) Target Type
The search target type is set when constructing an ItemSearcher. The target type is a CLR type that is mapped to a queryable extent by the data store. Specifically, it is a CLR type that is mapped to item, relationship, and item extension types as well as schematized views.
When retrieving a searcher using the ItemContext.GetSearcher method, the searcher's target type is specified as a parameter. When a static GetSearcher method is invoked on an item, relationship, or item extension type (e.g. Person.GetSearcher), the target type is the item, relationship, or item extension type.
Search expressions provided in an ItemSearcher (for example, the search filter and through find options, or projection definitions) are always relative to the search target type. These expressions may specify properties of the target type (including properties of nested elements) and may specify joins to relationship and item extensions as described elsewhere.
The search target type is made available via a read only property (for example, an ItemSearcher.Type property).
(b) Filters
The ItemSearcher contains a property to specify filters (for example, a property named “Filters” as a collection of SearchExpression objects) that define the filter used in the search. All filters in the collection are combined using a logical and operator when the search is executed. The filter may contain parameter references. Parameter values are specified through the Parameters property.
(c) Preparing Searches
In situations where the same search is to be executed repeatedly, possibly with only parameter changes, some performance improvement can be gained by pre-compiling, or preparing, the search. This is accomplished with a set of prepare methods on the ItemSearcher (for example, a method to prepare a Find that returns one or more Items, perhaps named “PrepareFind”, and a method to prepare a Find that returns a projection, perhaps named “PrepareProject”). For example:
:
The DelayLoad option determines if the values of large binary properties are loaded when the search results are retrieved or if loading is delayed until they are referenced. The MaxResults option determines the maximum number of results that are returned. This is equivalent to specifying TOP in a SQL query. It is most often used in conjunction with sorting.
A sequence of SortOption objects can be specified (for example, using a FindOptions.SortOptions property). The search results will be sorted as specified by the first SortOption object, then by as specified by the second SortOption object, etc. The SortOption specifies a search expression that indicates the property that will be used for sorting. The expression specifies one of the following:
- 1. a scalar property in the search target type;
- 2. a scalar property in a nested element that is reachable from the search target type by traversing single valued properties; or
- 3. the result of an aggregation function with a valid argument (for example, Max applied to a scalar property in a nested element that is reachable from the search target type by traversing a multi-valued property or a relationship).
For example, assuming the search target type is System.Storage.Contact.Person:
- 1. “Birthdate”—valid, birthdate is a scalar property of the Person type;
- 2. “PersonalNames.Surname”—Invalid, PersonalNames is a multi-valued property and no aggregation function was used;
- 3. “Count(PersonalNames)”—Valid, the count of PersonalNames.
- 4. “Case(Contact.MemberOfHousehold).Household.HouseholdEAddresses.StartDat e”—Invalid, uses relationship and multi-valued properties without an aggregation function.
- 5. “Max(Cast(Contact.MemberOfHousehold).Household.HouseholdEAddresses.Star tDate)”—Valid, most recent household e-address start date.
(3) Item Result Stream (“FindResult”)
The ItemSearcher (for example, through the FindAll method) returns an object that can be used to access the objects returned by the search (for example, a “FindResult” object). Appendix C provides a source code listing of the FindResult class and several closely related classes, in accordance with one embodiment thereof.
There are two distinct methods for getting results from a FindResult object: using the reader pattern defined by IObjectReader (and lAsyncObjectReader) and using the enumerator pattern as defined by IEnumerable and IEnumerator. The enumerator pattern is standard in the CLR and supports language constructs like C#'s for each. For example:.
c) Common Programming Patterns
This section provides a variety of examples of how the storage platform API framework classes can be used to manipulate items in the data store.
(1) Opening and Closing ItemContext Objects
An application gets the ItemContext object it will use to interact with the data store, e.g. by calling a static ItemContext.Open method and providing the path or paths that identify the item domains that will be associated with the ItemContext. Item domains scope the searches performed using the ItemContext such that only the domain item and the items contained in that item will be subject to the search. Examples are.
(2) Searching for Objects
According to another aspect of the present invention, the storage platform API provides a simplified query model that enables application programmers to form queries based on various properties of the items in the data store, in a manner that insulates the application programmer from the details of the query language of the underlying database engine.
Applications can execute a search across the domains specified when the ItemContext was opened using an ItemSearcher object returned by the ItemContext.GetSearcher method. Search results are accessed using a FindResult object. Assume the following declarations for the examples below:.
(e) The GetSearcher Pattern
There are many places in the storage platform API where it is desirable to provide a helper method that executes a search in the context of another object or with specific parameters. The GetSearcher pattern enables these scenarios. There are many GetSearcher methods in the API. Each returns an ItemSearcher pre-configured to perform a given search. For example:
(3) Updating the Store
Once an object has been retrieved by a search it may be modified by the application as needed. New objects may also be created and associated with existing objects. Once the application has made all the changes that form a logical group, the application calls ItemContext.Update to persist those changes to the store..
6. Security
With reference to section II.E above (Security), in the present embodiment of the storage platform API, there are five methods available on the Item Context for retrieving and modifying the security policy associated with an item in the store. These are:
- 1. GetItemSecurity;
- 2. SetItemSecurity;
- 3. GetPathSecurity;
- 4. SetPathSecurity; and
- 5. GetEffectiveItemSecurity.
GetItemSecurity and SetItemSecurity provide the mechanism to retrieve and modify the explicit ACL associated with the item. This ACL is independent of the paths that exist to the item and will be in play independent of the holding relationships which have this item as the target. This enables the administrators to reason about the item security independent of the paths that exist to the item if they so desire.
The GetPathSecurity and SetPathSecurity provide the mechanism for retrieving and modifying the ACL that exists on an item because of a holding relationship from another folder. This ACL is composed from the ACL's of the various ancestors to the item along the path under consideration along with the explicit ACL if any supplied for that path. The difference between this ACL and the ACL in the previous case is that this ACL remains in play only as long as the corresponding holding relationship exists while the explicit item ACL is independent of any holding relationship to an item.
The ACL's that can be set on an item with SetItemSecurity and SetPathSecurity is restricted to inheritable and object specific ACE's. They cannot contain any ACE marked as inherited.
The GetEffectiveItemSecurity retrieves the various path based ACL's as well as the explicit ACL on the item. This reflects the authorization policy in effect on the given item.
7. Support for Relationships
As discussed above, the data model of the storage platform defines “relationships” that allow items to be related to one another. When the data classes for a schema are generated, the following classes are produced for each relationship type:
-.
(2) ItemReference Class
The following is the base class for item reference types.
ItemReference objects may identify items that exist in a store other than the one where the item reference itself resides. Each derived type specifies how a reference to a remote store is constructed and used. Implementations of GetItem and IsDomainConnected in derived classes use the ItemContext's multi-domain support to load items from the necessary domain and to determine if a connection to the domain has already been established.
(3) ItemIdReference Class
The following is the ItemIdRefrence class—an Item reference that uses an item id to identify the target item.
GetItem and IsDomainConnected use the ItemContext's multi-domain support to load items from the necessary domain and to determine if a connection to the domain has already been established. This feature is not implemented yet.
(4) ItemPathReference Class
The ItemPathReference Class is an item reference that uses a path to identify the target item. The code for the class is as follows:
GetItem and IsDomainConnected use the ItemContext's multi-domain support to load items from the necessary domain and to determine if a connection to the domain has already been established.
(5) RelationshipId Structure
The RelationshipId Structure encapsulates a relationship id GUID.
This value type wraps a guid so that parameters and properties can be strongly typed as a relationship id. OptionalValue<Relationshipld> should be used when a relationship id is nullable. An Empty value, such as provided by System.Guid.Empty, is not exposed. A RelationshipId cannot be constructed with an empty value. When the default constructor is used to create a RelationshipId, a new GUID is created.
(6) VirtualRelationshipCollection Class
The VirtualRelationshipCollection class implements a collection of relationship objects that includes objects from the data store, plus new objects that have been added to the collection, but not including objects that have been removed from the store. Objects of a specified relationship type with a given source item id are included in the collection.
This is the base class for the relationship collection class that is generated for each relationship type. That class can be used as the type of a property in the source item type to provide access and easy manipulation of a given item's relationships.
Enumerating the contents of a VirtualRelationshipCollection requires that a potentially large number of relationship objects be loaded from the store. Applications should use the Count property to determine how many relationships could be loaded before they enumerate the contents of the collection. Adding and removing objects to/from the collection does not require relationships to be loaded from the store.
For efficiency, it is preferable that applications search for relationships that satisfy specific criteria instead of enumerating all of an item's relationships using a VirtualRelationshipCollection object. Adding relationship objects to the collection causes the represented relationships to be created in the store when ItemContext.Update is called. Removing relationship objects from the collection causes the represented relationship to be deleted in the store when ItemContext.Update is called. The virtual collection contains the correct set of objects regardless of whether or not a relationship object is added/removed through the Item.Relationships collection or any other relationship collection on that item.
The following code defines the VirtualRelationshipCollection class:
b) Generated Relationship Types
When generating classes for a storage platform schema, a class is generated for each relationship declaration. In addition to a class that represents a relationship itself, a relationship collection class is also generated for each relationship. These classes are used as the type of properties in the relationship's source or target item classes.
This section describes the classes that are generated using a number of “prototype” classes. That is, given a specified relationship declaration, the class that is generated is described. It is important to note the class, type, and end point names used in the prototype classes are place holders for the names specified in the schema for the relationship, and should not be taken literally.
(1) Generated Relationship Types
This section describes the classes that are generated for each relationship type. For example:
Given this relationship definition RelationshipPrototype and RelationshipPrototypeCollection classes would be generated. The RelationshipPrototype class represents the relationship itself. The RelationshipPrototypeCollection class provides access to the RelationshipPrototype instances that have a specified item as the source end point.
(2) RelationshipPrototype Class
This is a prototypical relationship class for a holding relationship named “HoldingRelationshipPrototype” where the source end point is named “Head” and specifies the “Foo” item type and the target end point is named “Tail” and specifies the “Bar” item type. It is defined as follows:
(3) RelationshipPrototypeCollection Class
This is a prototypical class, generated with the RelationshipPrototype class, that maintains a collection of RelationshipPrototype relationship instances owned by a specified item. It is defined as follows:
c) Relationship Support in the Item Class
The Item class contains a Relationships property that provide access to the relationships in which that item is the source of the relationship. The Relationships property has the type RelationshipCollection.
(1) Item Class
The following code shows the relationship context properties of the Item class:
(2) RelationshipCollection Class
This class provides access to the relationship instances where a given item is the source of the relationship. It is defined as follows:
d) Relationship Support in Search Expressions
It is possible to specify the traversal of a join between relationships and related items in a search expression.
(1) Traversing From Items to Relationships
When the current context of a search expression is a set of items, a join between the items and relationship instances where the item is the source can be done using the Item.Relationships property. Joining to relationships of a specific type can be specified using the search expression Cast operator.
Strongly typed relationship collections (e.g. Folder.MemberRelationships) can also be used in a search expression. The cast to the relationship type is implicit.
Once the set of relationships has been established, the properties of that relationship are available for use in predicates or as the target of a projection. When used to specify the target of a projection, the set of relationships would be returned. For example, the following statement would find all persons related to an organization where the StartDate property of the relationships had a value greater then or equal to ‘1/1/2000’.
If the Person type had a property EmployerContext of type EmployeeSideEmployerEmployee-Relationships (as generated for an EmployeeEmployer relationship type), this could be written as:
(2) Traversing From Relationships to Items
When the current context of the search expression is a set of relationships, a join from a relationship to either end point of the relationship can be traversed by specifying the name of the end point. Once the set of related items has been established, the properties of those items are available for use in predicates or as the target of a projection. When used to specify the target of a projection, the set of items would be returned. For example, the following statement would find all EmployeeOfOrganization relationships (regardless of organization) where the employee's last name is name “Sm:
(1) Searching for Relationships
It is possible to search for source or target relationships. Filters can be used to select relationships of a specified type and that have given property values. Filters can also be used to select relationships based related item type or property values. For example, the following searches can be performed:
In addition to the GetSearcher API shown above, each relationship class supports static FindAll, FindOne, and FindOnly API. In addition, a relationship type can be specified when calling ItemContext.GetSearcher, ItemContext.FindAll, ItemContext.FindOne, or ItemContext.FindOnly.
(2) Navigating from a Relationship to the Source and Target Items
Once a relationship object has been retrieved through a search, it is possible to “navigate” to the target or source item. The base relationship class provides SourceItem and TargetItem properties that return an Item object. The generated relationship class provides the equivalent strongly typed and named properties (e.g. FolderMember.FolderItem and FolderMember.MemberItem). For example:
Navigating to a target item works even if the target item is not in the domain where the relationship was found. In such cases, the storage platform API opens a connection to the target domain as needed. Applications can determine if a connection would be required before retrieving the target item.
(3) Navigating from Source Items to Relationships
Given an item object, it is possible to navigate to the relationships for which that item is the source without executing an explicit search. This is done using the Item.Relationships collection property or a strongly typed collection property such as Folder.MemberRelationships. From a relationship, it is possible to navigate to the target item. Such navigation works even if the target item is not in the item domain associated with the source item's ItemContext, including when the target item is not in the same store as the target item. For example:
An item may have many relationships, so applications should use caution when enumerating a relationship collection. In general, a search should be used to identify particular relationships of interest instead of enumerating the entire collection. Still, having a collection based programming model for relationships is valuable enough, and items with many relationships rare enough, that the risk of abuse by the developer is justified. Applications can check the number of relationships in the collection and use a different programming model if needed. For example:
Check the Size of a Relationship Collection
The relationship collections described above are “virtual” in the sense that they are not actually populated with objects that represent each relationship unless the application attempts to enumerate the collection. If the collection is enumerated, the results reflect what is in the store, plus what has been added by the application but not yet saved, but not any relationships that have been removed by the application but not saved.
(4) Creating Relationships (and Items)
New relationships are created by creating a relationship object, adding it to a relationship collection in the source item, and updating the ItemContext. To create a new item, a holding or embedding relationship must be created. For example:
(5) Deleting Relationships (and Items)
8. “Extending” the Storage Platform API
As noted above, every storage platform schema results in a set of classes. These classes have standard methods such as Find* and also have properties for getting and setting field values. These classes and associated methods form the foundation of the storage platform API.
a) Domain Behaviors
In addition to these standard methods, every schema has a set of domain specific methods for it. We call these domain behaviors. For example, some of the domain behaviors in the Contacts schema are:
- Is an email address valid?
- Given a folder, get the collection of all members of the folder.
- Given an item ID, get an object representing this item
- Given a Person, get his online status
- Helper functions to create a new contact or a temporary contact
- And so on.
It is important to note that while we make a distinction between “standard” behaviors (Find*, etc) and domain behaviors, they simply appear as methods to the programmer. The distinction between these methods lies in the fact that standard behaviors are generated automatically from the schema files by the storage platform API design time tools while domain behaviors are hard-coded.
By their very nature, these domain behaviors should be hand-crafted. This leads to a practical problem: the initial version of C# requires that the entire implementation of a class be within a single file. Thus, this forces the auto-generated class files to have to be edited to add domain behaviors. By itself, this can be a problem.
A feature called partial classes has been introduced in C# for problems such as these. Basically, a partial class allows the class implementation to span multiple files. A partial class is the same as a regular class except that its declaration is preceded by the keyword partial:
- partial public class Person: DeriveditemBase
Now, domain behaviors for Person can be put in a different file like so:
- partial public class Person
b) Value-Add Behaviors
Data classes with domain behaviors form a foundation that application developers build on. However, it is neither possible nor desirable for data classes to expose every conceivable behavior related to that data. The storage platform allows a developer to build on the base functionality offered by the storage platform API. The basic pattern here is to write a class whose methods take one or more of the the storage platform data classes as parameters. For example, the value add classes for sending email using Microsoft Outlook or using Microsoft Windows messenger can be as below:
These value-add classes can be registered with the storage platform. The registration data is associated with the schema metadata the storage platform maintains for every installed storage platform type. This metadata is stored as storage platform items and can be queried.
Registration of value-add classes is a powerful feature; for example, it allows the following scenario: Right click on a Person object in the Shell explorer and the set of actions allowed could be derived from the value-add classes registered for Person.
c) Value-add Behaviors as Service Providers
In the present embodiment, the storage platform API provides a mechanism whereby value-add classes can be registered as “services” for a given type. This enables an application to set and get service providers (=value add classes) of a given type. Value-add classes wishing to utilize this mechanism should implement a well known interface; for example:
- interface IChatServices
All the storage platform API data classes implement the ICachedServiceProvider interface. This interface extends the System.IServiceProvider interface as follows:
Using this interface, applications can set the service provider instance as well as request a service provider of a specific type.
To support this interface, the storage platform data class maintains a hashtable of service providers keyed by type. When a service provider is requested, the implementation first looks in the hashtable to see if a service provider of the specified type has been set. If not, the registered service provider infrastructure is used to identify a service provider of the specified type. An instance of this provider is then created, added to the hashtable, and returned. Note that it is also possible for a shared method on the data class to request a service provider and forward an operation to that provider. For example, this could be used to provide a Send method on the mail message class that uses the e-mail system specified by the user.
9. Design Time Framework
This section describes how a storage platform Schema gets turned into storage platform API classes on the client and UDT classes on the server, in accordance with the present embodiment of the invention. The diagram of
With reference to
10. Query Formalism
When reduced to the basics, the application's pattern when using the storage platform API is: Open an ItemContext; use Find with a filter criterion to retrieve the desired objects; operate on the objects; and send changes back to the store. This section is concerned with the syntax of what goes into the filter string.
The filter string provided when finding the storage platform data objects describes the conditions that the properties of the objects must meet in order to be returned. The syntax used by the storage platform API supports type casts and relationship traversal.
a) Filter Basics
A filter string is either empty, indicating that all objects of the specified type are to be returned, or a boolean expression that each returned object must satisfy. The expression references the object's properties. The storage platform API runtime knows how these property names map to the storage platform type field names and, ultimately, to the SQL views maintained by the the storage platform store.
Consider the following examples:
The properties of nested objects can also be used in the filter. For example:
For collections, it is possible to filter members using a condition in square brackets. For example:
Line 1 creates a new ItemContext object to access the “Work Contacts” on the storage platform share on the local computer. Lines 3 and 4 get a collection of Person objects where the Birthdate property specifies a date more recent then Dec. 31, 1999, as specified by the expression “Birthdate>‘Dec. 31, 1999’”. The execution of this FindAll operation is illustrated in
b) Type Casts
It is often the case that the type of a value stored in a property is derived from the properties declared type. For example, the PersonalEAddresses property in Person contains a collection of types derived from EAddress such as EMailAddress and TelephoneNumber. In order to filter based on telephone area code, it is necessary to cast from the EAddress type to the TelephoneNumber type:
c) Filter Syntax
Below is a description of the filter syntax supported by the storage platform API, in accordance with one embodiment.
11. Remoting
a) Local/Remote Transparency in the API
Data access in the storage platform is targeted to the local storage platform instance. The local instance serves as a router if the query (or part thereof) refers to remote data. The API layer thus provides local/remote transparency: there is no structural difference in the API between local and remote data access. It is purely a function of the requested scope.
The storage platform data store also implements distributed queries; thus, it is possible to connect to a local storage platform instance and perform a query which includes items from different volumes, some of which are on the local store and others on the remote store. The store unions the results and presents it to the application. From the point of view of the storage platform API (and hence the application developer) any remote access is completely seamless and transparent.
The storage platform API allows an application to determine if a given ItemContext object (as returned by the ItemContext.Open method) represents a local or remote connection using the IsRemote property—this is a property on the ItemContext object. Among other things, the application may wish to provide visual feedback to help set user expectations for performance, reliability, etc.
b) Storage Platform Implementation of Remoting
The storage platform data stores talk to each other using a special OLEDB provider which runs over HTTP (the default OLEDB provider uses TDS). In one embodiment, a distributed query goes through the default OPENROWSET functionality of the relational database engine. A special user defined function (UDF): DoRemoteQuery(server, queryText) is provided to do actual remoting.
c) Accessing Non-Storage Platform Stores
In one embodiment of the storage platform of the present invention, there is no generic provider architecture that allows any store to participate in storage platform data access. However, a limited provider architecture for the specific case of Microsoft Exchange and Microsoft Active Directory (AD) is provided. This implies that developers can use the storage platform API and access data in AD and Exchange just as they would in the storage platform, but that the data they can access is limited to the storage platform schematized types. Thus, address book (=collection of the storage platform Person types) is supported in AD, and mail, calendar and contacts are supported for Exchange.
d) Relationship to DFS
The storage platform property promoter does not promote past mount points. Even though the namespace is rich enough to access through mount points, queries do not pass through them. The storage platform volumes can appear as leaf nodes in a DFS tree.
e) Relationship to GXA/Indigo
A developer can use the storage platform API to expose a “GXA head” on top of the data store. Conceptually, this is no different from creating any other web service. The storage platform API does not talk to a storage platform data store using GXA. As mentioned above, the API talks to the local store using TDS; any remoting is handled by the local store using the synchronization service.
12. Constraints
The storage platform data model allows value constraints on types. These constraints are evaluated on the store automatically and the process is transparent to the user. Note that constraints are checked at the server. Having noted this, sometimes, it is desirable to give the developer the flexibility to verify that the input data satisfies the constraints without incurring the overhead of a round trip to the server. This is especially useful in interactive applications where the end user enters the data which is used to populate an object. The storage platform API provides this facility.
Recall that a storage platform Schema is specified in an XML file, which is used by the storage platform to generate the appropriate database objects representing the schema. It is also used by the design time framework of the storage platform API to auto generate classes.
Here's a partial listing of the XML file used to generate the Contacts schema:
The Check tags in the XML above specify the constraints on the Person type. There can be more than one check tag. The above constraint is generally checked in the store. To specify that the constraint can also be checked explicitly by the application, the above XML is modified like so:
Note the new “InApplication” attribute on the <Check> element, which is set to true. This causes the storage platform API to surface the constraint in the API through an instance method on the Person class called Validate( ). The application can call this method on the object to ensure that the data is valid and, preventing a potentially useless round trip to the server. This returns a bool to indicate the results of validation. Note that the value constraints are still applied at the server regardless of whether the client calls <object>.Validate( ) method or not. Here's an example of how Validate can be used:
There exist multiple access paths to the the storage platform store—the storage platform API, ADO.NET, ODBC, OLEDB, and ADO. This raises the question of authoritative constraint checking—that is, how can we guarantee that data written from, say, ODBC, go through the same data integrity constraints as would data written from the storage platform API. Since all constraints are checked at the store, the constraints are now authoritative. Regardless of what API path one uses to get to the store, all writes to the store are filtered through the constraint checks at the store.
13. Sharing
A share in the storage platform is of the form:
- \\<DNS Name>\<Context Service>,
where <DNS Name> is the DNS name of the machine, and <Context Service> is a containment folder, virtual folder, or an item in a volume on that machine. For example, assume that the machine “Johns_Desktop” has a volume called Johns_Information, and in this volume there exists a folder called Contacts_Categories; this folder contains a folder called Work, which has the work contacts for John:
- \\Johns_Desktop\Johns_Information$\Contacts_Categories\Work
This can be shared as “WorkContacts”. With the definition of this share, \\Johns_Desktop\WorkContacts\JaneSmith is a valid storage platform name, and identifies the Person item JaneSmith.
a) Representing a Share
The share item type has the following properties: the share name, and the share target (this can be a non-holding link). For example, the aforementioned share's name is WorkContacts and target is Contacts_Categories\Work on the volume Johns_Information. Below is the schema fragment for the Share type:
b) Managing Shares
Because a share is an item, shares can be managed just as with other items. A share can be created, deleted, and modified. A share is also secured the same way as other storage platform items.
c) Accessing Shares
An application accesses a remote storage platform share by passing the share name (e.g. \\Johns_Desktop\WorkContacts) to the storage platform API in the ItemContext.Openo method call. ItemContext.Open returns an ItemContext object instance. The storage platform API then talks to the local storage platform service (recall that accessing remote storage platform shares is done via the local storage platform). In turn, the local storage platform service talks to a remote storage platform service (e.g. on machine Johns_Desktop) with the given share name (e.g. WorkContacts). The remote storage platform service then translates WorkContacts into Contacts_Categories\Work and opens it. After that, query and other operations are performed just like other scopes.
d) Discoverability
In one embodiment, an application program can discover shares available on a given <DNS Name>, in the following ways. According to the first way, the storage platform API accepts a DNS name (e.g. Johns_Desktop) as the scope parameter in ItemContext.Openo method. The storage platform API then connects to the storage platform store with this DNS name as part of a connection string. With this connection, the only possible thing an application can do is call ItemContext.FindAll(typeof(Share)). A storage platform service then unions all the shares on all the attached volumes and returns the collection of shares. According to the second way, on a local machine, an administrator can easily discover the shares on a particular volume by FindAll(typeof(Share)), or a particular folder by FindAll(typeof(Share), “Target(ShareDestination).Id=folderId”).
14. Semantics of Find
The Find* methods (regardless of whether they are called on the ItemContext object or on an individual item) generally apply to Items (including embedded items) within a given context. Nested elements do not have a Find—they cannot be searched independently of their containing Items. This is consistent with the semantic desired by the storage platform data model, where nested elements derive their “identity” from the containing item. To make this notion clearer, here are examples of valid and invalid find operations:
- a) Show me all telephone numbers in the system which have an area code of 206?
- Invalid, since the find is being done on telephone numbers—an element—without reference to an item.
- b) Show me all telephone numbers within all Persons which have area code of 206?
- Invalid, even though a Person (=item) is referenced, the search criterion does not involve that item.
- c) Show me all telephone numbers of Murali (=one single person) which have area code of 206?
a) Overview of System.Storage.Contact
The storage platform API includes a namespace for dealing with items and elements in the Contacts schema. This namespace is called System.Storage.Contact.
This schema has, for example, the following classes:
- Items: UserDataFolder, User, Person, ADService, Service, Group, Organization, Principal, Location
- Elements: Profile, PostalAddress, EmailAddress, TelephoneNumber, RealTimeAddress, EAddress, FullName, BasicPresence, GroupMembership, RoleOccupancy
b) Domain Behaviors
Below is a list of domain behaviors for the Contacts schema. When viewed from a high enough level, domain behaviors fall into well-recognizable categories:
- Static Helpers, for example, Person.CreatePersonalContact( ) to create a new personal contact;
- Instance Helpers, for example user.AutoLoginToAllProfiles( ), which logs in a user (instance of User class) into all profiles that are marked for auto login;
- CategoryGUIDs, for example, Category.Home, Category.Work, etc;
- Derived properties, for example, emailAddress.Address( )—returns a string that combines the username and domain fields of the given emailAddress (=instance of EmailAddress class); and
- Derived collections, for example, person.PersonalEmailAddresses—given an instance of Person class, get her personal email addresses.
The table below gives, for each class in Contacts that has domain behaviors, a list of these methods and the category they belong to.
16. Storage Platform File API
This section gives an overview of the the storage platform File API, in accordance with one embodiment of the present invention.
a) Introduction
(1) Reflecting an NTFS Volume in the Storage Platform
The storage platform provides a way of indexing over content in existing NTFS volumes. This is accomplished by extracting (“promoting”) properties from each file stream or directory in NTFS and storing these properties as Items in the storage platform.
The storage platform File schema defines two item types—File and Directory—to store promoted file system entities. The Directory type is a subtype of the Folder type; it is a containment folder which contains other Directory items or File items.
A Directory item can contain Directory and File items; it cannot contain items of any other type. As far as the storage platform is concerned, Directory and File items are read-only from any of the data access APIs. The File System Promotion Manager (FSPM) service asynchronously promotes changed properties into the storage platform. The properties of File and Directory items can be changed by the Win32 API. The storage platform API can be used to read any of the properties of these items, including the stream associated with a File item.
(2) Creating Files and Directories in the storage platform Namespace
When an NTFS volume gets promoted to a storage platform volume, all the files and directories therein are in a specific part of that volume. This area is read-only from the storage platform perspective; the FSPM can create new directories and files and/or change properties of existing items.
The rest of the namespace of this volume can contain the usual gamut of the storage platform item types—Principal, Organization, Document, Folder, etc. The storage platform also allows you to create Files and Directories in any part of the the storage platform namespace. These “native” Files and Directories have no counterpart in the NTFS file system; they are stored entirely in the storage platform. Furthermore, changes to properties are visible immediately.
However, the programming model remains the same: they are still read-only as far as the the storage platform data access APIs are concerned. The “native” Files and Directories have to be updated using Win32 APIs. This simplifies the developer's mental model, which is:
- 1. Any storage platform item type can be created anywhere in the namespace (unless prevented by permissions, of course);
- 2. Any storage platform item type can be read using the storage platform API;
- 3. All storage platform items types are writable using the storage platform API with the exception of File and Directory;
- 4. To write to File and Directory items regardless of where they are in the namespace, use the Win32 API; and
- 5. Changes to File/Directory items in the “promoted” namespace may not appear immediately in the storage platform; in the “non-promoted” namespace, the changes are reflected immediately in the storage platform.
b) File Schema
c) Overview of System.Storage.Files
The storage platform API includes a namespace for dealing with file objects. This namespace is called System.Storage.Files. The data members of the classes in System.Storage.Files directly reflect the information stored in the storage platform store; this information is “promoted” from the file system objects or may be created natively using the Win32 API. The System.Storage.Files namespace has two classes: FileItem and DirectoryItem. The members of these classes and methods thereof can be readily divined by looking at the schema diagram in
d) Code Examples
In this section, three code examples are provided illustrating the use of the classes in System.Storage.Files.
(1) Opening a File and Writing to It
This example shows how to do “traditional” file manipulation.
Line 3 uses the FindByPath method to open the file. Line 7 shows the use of the promoted property, IsReadOnly, to check if the file is writeable. If it is, then in line 9 we use the OpenWrite( ) method on the FileItem object to get the file stream.
(2) Using Queries
Since the storage platform store holds properties promoted from the file system, it is possible to easily do rich queries on the files. In this example, all files modified in the last three days are listed:.
As the foregoing illustrates,, such as relational (tabular) data, XML, and a new form of data called Items..
As is apparent from the above, all or portions of the various systems, methods, and aspects of the present invention may be embodied in the form of program code (i.e., instructions). or server,.
|
https://patents.google.com/patent/US8131739B2/en
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Performance Apparel Inc. (PA) is a retailer of sports apparel
and footwear. PA’s operations are based in Beaverton, Oregon, with
retail stores located throughout the country. In an effort to
motivate certain members of senior management to execute
consistently with PA’s long-term financial performance plan, it
decided to issue performance-based restricted stock units (RSUs) on
January 1, 2014. RSUs are a form of compensation offered by an
employer in the form of company stock. These shares of company
stock are “restricted” in that they vest only after certain
conditions (restrictions) are met. The shares are earned or
“vested” based on a vesting schedule consistent with the
satisfaction of these conditions. Vesting schedules are specific to
each award and can be based on various conditions such as service
(remaining with the employer for a certain period of time),
performance milestones (such as meeting sales goals), or a
combination. These RSUs cliff vest (shares vest at a point in time)
on the basis of continued employment after three years, with the
number of RSUs earned and issued at the end of the three-year
vesting period, if any, dependent on two performance conditions: 1.
Three-year average organic revenue growth. 2. Three-year average
operating margin. Since PA has not previously issued these types of
awards, it does not have knowledge of the relevant accounting
literature and guidance on how these contingently issuable shares
should be accounted for in their diluted earnings per share (EPS)
calculation. Accordingly, as PA’s external auditor, management has
asked for your assistance with its financial statements as of and
for the year ended December 31, 2014.
Required: As of December 31, 2014: 1. What is the three-year
average organic growth rate that PA should assume in determining
the number of potentially outstanding dilutive awards for purposes
of calculating diluted EPS? Scenarios:
• For the year ended December 31, 2014, organic revenue
increased 10 percent.
• For the year ended December 31, 2014, the operating margin was
40 percent.
• For the year ended December 31, 2014, organic revenue
increased 20 percent on average the previous three years.
• For the year ended December 31, 2014, the operating margin was
50 percent on average the previous three years.
***(This is all the information provide)
Get this Question solved
Choose a Subject » Select Duration » Schedule a Session
Get this solution now
Get notified immediately when an answer to this question is available.
You might
want to try the quicker alternative (needs Monthly Subscription)
No, not that urgent
dividends not received during the relevant holding period . For certain restricted stock awards granted beginning in 2014 , the Company includes a relative total shareowner return (“TSR”) modi er to determine the number of restricted shares or share units earned at the end of the performance period
ended December 31 , 2014 , for the restricted stock units granted during the year ended December 31 , 2013? (Enter your answer in millions (i.e., 10,000,000 should be entered as 10).) 2 . Based on the information provided in the disclosure note, prepare the journal
On January 1 , 2016, VKI Corporation awarded restricted stock units (RSUs) representing 21 million of its $ 1 par common shares to key personnel, subject to forfeiture if employment is terminated within three years . After the recipients of the RSUs satisfy the vesting requirement, the company
How
to realize changes in stock -based compensation expense in the future. As a measure of sensitivity, a 1 % change to our estimated forfeiture rate would have had an approximately $46 million impact on our 2015 operating income. Our estimated forfeiture rates as of December 31 , 2015 and 2014 , were...
By creating an account, you agree to our terms & conditions
We don't post anything without your permission
Attach Files
|
https://www.transtutors.com/questions/performance-apparel-inc-pa-is-a-retailer-of-sports-apparel-and-footwear-pa-s-operati-2562978.htm
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.