hibernate firstExample not inserting data - Hibernate
hibernate firstExample not inserting data Hello all, I followed.... Hi friend, please send me the code and explain it in detail. Visit

please iam stuck - Java Beginners
please I am stuck, please help. I don't find the errors to fix my home...
main(String[] args) {
    Scanner in = new Scanner(System.in);
    int counter = 0;
    int total = 0;
    String today = "first";
    order[] o = new order[12];
    Bell[] B = new Bell
hi - Hibernate
hi Hi all, I am new to Hibernate. Could anyone please let me know... to delete a particular record using DAO. Here I provide MyEclipse automatically...;
}
}
}
Please let me know how to delete one particular record using this code. When we use schemaExport in Hibernate, every time it drops the existing table and creates a new table; if the table contains a dependent table, does it drop

hibernate
Hi Friend, please visit the following link:

Hi
Good Morning. Will you please send me some of the tutorials on Hibernate? Because I have to learn Hibernate; I am new to this its...
Problem in running first hibernate program.... - Hibernate
Problem in running first hibernate program.... Hi... I am using.../FirstExample
Exception in thread "main" "... and prepare hibernate for use
SessionFactory sessionFactory = new Configuration

hibernate - Hibernate
hibernate Is there any tutorial using Hibernate and NetBeans to do a web application (add, update, delete, select)? Hi friend, for a Hibernate tutorial visit:

hi - Hibernate
hi hi,

what is object life cycle in hibernate - Hibernate
Hibernate SessionFactory Can anyone please give me an example of Hibernate SessionFactory? Hi friend, package roseindia; import...[]) { Session session = null; try { SessionFactory sessionFactory = new
Login form using Jsp in hibernate - Hibernate
Login form using Jsp in hibernate Hai Friend, as I am new to Hibernate, I'm facing a problem in my project (JSP with Hibernate).. My login form... in advance
Hi Friend, please visit the following links:
http
Java - Hibernate
..., this type of output.
----------------------------
Inserting Record
Done
Hibernate... FirstExample {
    public static void main(String[] args) {
        Session session = null;
        try {
            SessionFactory sessionFactory = new Configuration

Hibernate - Hibernate
Hibernate pojo example I need a simple Hibernate POJO example. Hi, pojoexample.java: package roseindia; import org.hibernate.*; import...[]) { Session session = null; try { SessionFactory sessionFactory = new Configuration
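Several of these truncated snippets are cut from the same roseindia-style FirstExample. For reference, a minimal sketch of that classic Hibernate save pattern, assuming a hypothetical mapped Contact POJO and a hibernate.cfg.xml on the classpath; note that a missing transaction commit is a common reason such a first example appears to insert nothing:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class FirstExample {
    public static void main(String[] args) {
        Session session = null;
        try {
            // Reads hibernate.cfg.xml from the classpath and builds the factory.
            SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
            session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            Contact contact = new Contact(); // hypothetical mapped POJO
            contact.setFirstName("Jim");
            session.save(contact);
            tx.commit(); // without the commit, nothing reaches the database
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (session != null) {
                session.close();
            }
        }
    }
}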
hibernate - Hibernate
Hi Radhika, I think your Hibernate configuration... Hibernate;
import org.hibernate.Session;
import org.hibernate.*;
import...{
SessionFactory sessionFactory = new Configuration().configure

JSP using java - Hibernate
JSP using java This is my part of the Excel sheet code using JDBC... using Hibernate. I have functions called displayIps() and displayvalues()
public Vector displayIps()
{
    Vector ips = new Vector();
    try
on hibernate query language - Hibernate
on hibernate query language Display the contents of 2 fields of a table in a single column using an SQL statement. Thanks! Hi friend, read for more information: downloading Hibernate,
Process of Downloading Eclipse
Installing Eclipse
Create

Hi - Hibernate Interview Questions
Hi, please send me Hibernate interview questions.

delete a row error - Hibernate
DriverManagerConnectionProvider:41 - Using Hibernate built-in connection pool... Hibernate delete a row error Hello, I have been trying with the Hibernate delete.

Criteria Queries - Hibernate
Hibernate Criteria Queries Can I use the Hibernate Criteria Query...;
SessionFactory sessionFactory = new Configuration().configure... = session.createCriteria(TreasuryClient.class);
Hi friend,
package
hibernate sql error - Hibernate
How to use polymorphic mapping in type 2 using subclasses? Hi Friend, please visit the following links: sql error Hibernate: insert into EMPLOYE1 (firstName

object chaining? - Hibernate
object chaining? daoobject.getsession().beginTransaction; can anyone explain this code? I am not understanding this line when I am working with SWT and Hibernate.

hibernate - application
Hibernate application Hi, I am using the NetBeans IDE. I need to execute a Hibernate application in the NetBeans IDE. Can anyone help me to do

hibernate - Hibernate
hibernate What is lazy loading in Hibernate? Can you give an example table? Lazy Loading in Hibernate: In Hibernate, Lazy.... The main purpose of using lazy loading is to increase the efficiency.

Hi friends, I had one doubt: how to do Struts with Hibernate in the MyEclipse IDE?
its urgent
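The lazy-loading excerpt above is cut off. As a hedged illustration of the idea, here is a minimal JPA/Hibernate annotation sketch; the Department and Employee entities are hypothetical, with Employee assumed to have a department field:

import javax.persistence.*;
import java.util.List;

@Entity
public class Department {
    @Id
    @GeneratedValue
    private Long id;

    // FetchType.LAZY (the default for @OneToMany) defers loading the
    // collection until it is first accessed, saving queries whenever
    // the employees are never needed.
    @OneToMany(mappedBy = "department", fetch = FetchType.LAZY)
    private List<Employee> employees;
}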
java(Hibernate) - Hibernate
java(Hibernate) Hai Amardeep, this is Jagadhish. I am giving the full code...().");
Configuration conf = new Configuration();
Configuration cf = conf.configure....");
User user = new User();
user.setUserId(103);
user.setFirstName("raja

Hibernate 4, released in Jan 2012, comes with many new features such as multi-tenancy... tutorials:
Introduction To Hibernate 4.0
What's New In Hibernate 4.0...: Insert Record using Hibernate Save Method
Hibernate 4 Example
java - Hibernate
the code and run using Hibernate and annotations.. please help me... Hi friend, read for more information. Thanks
Hibernate 4.3.0.Final get session
Hibernate 4.3.0.Final get session Hi, there seems to be some API change in Hibernate 4.3.0. I am not able to get the Session object in Hibernate. I am trying to build the SessionFactory using the following piece of code
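For reference, a sketch of the registry-based bootstrap that Hibernate 4.3 expects, assuming a standard hibernate.cfg.xml on the classpath (in the 4.x line the no-argument buildSessionFactory() was deprecated in favor of building from a ServiceRegistry):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;

public class SessionFactoryHolder {
    public static SessionFactory build() {
        // Load mappings and settings from hibernate.cfg.xml.
        Configuration configuration = new Configuration().configure();
        // Hibernate 4.3 builds the factory from a service registry.
        ServiceRegistry registry = new StandardServiceRegistryBuilder()
                .applySettings(configuration.getProperties())
                .build();
        return configuration.buildSessionFactory(registry);
    }

    public static void main(String[] args) {
        Session session = build().openSession();
        // ... use the session ...
        session.close();
    }
}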
Hibernate - Hibernate
a doubt in that area, please help me. That is, in the Hibernate mapping file I used the tag... please help me. Hi friend, read for more information. Thanks

What is the difference between dynamic-update and dynamic-insert? Hi friend, it should be necessary to have both a namespace.... Thanks

When I run the Hibernate Code Generation wizard in Eclipse I'm...; Hi Friend, please visit the following link. Hope that it will be helpful for you. Thanks
Retrieve Value from Table - Hibernate
Retrieve values from the database using Hibernate in a web application. As I am new to Hibernate I couldn't find a solution for this problem.. Can anyone help please.. Hi Friend, please visit the following links:
http:

How to save array of UserType in Oracle using Hibernate.
How to save an array of UserType in Oracle using Hibernate. Hi,
CREATE OR REPLACE TYPE... arrdesc = new ArrayDescriptor("ADDRESS_TY_ARR", arg0.getConnection

Spring with Hibernate - Spring
Spring with Hibernate When I am executing my Spring ORM module (Spring with Hibernate), the following message is displayed in the browser window... {
ApplicationContext ctx = new ClassPathXmlApplicationContext("ApplicationContext.xml
hi!
hi! How can I write a program in Java using Scanner when asking... to enter, like (int, double, float, String, ....)? Thanks for answering....
Hi... main(String[] args)
{
    Scanner input = new Scanner(System.in
Source: http://www.roseindia.net/tutorialhelp/comment/3618
Template:TRScc-bot
- Purpose
- This template is used with {{TRScc-top}} to set off Wikibooks' Content Creation HowTo pages (Content Creation TOC), especially those on advanced topics where {{FUN-beg}} (or -top) would be inappropriate.
- It will also auto-categorize the page to the category:Trainz Content Creation.
- The parameter '|TA=yes' definition will cross-categorize the page as well in the Trainz Asset Management and Creation category:Trainz AM&C.
- Defining the parameter '|refs2=something' will add the page to Category:Trainz references.
- It can be given the {{{1}}} default parameter to alter the sort order of the reference page as listed in that category.
- Lastly, this template closes a <div style=" ..."> HTML block initiated by {{ORP-top}}. They should be used together as a pair at the top and bottom of each page.
Example:
{{TRScc-bot|After}} will list the page under the A pages, after both any page 'Aeroplane' and any page 'Aardvark', in the category, just like {{FUN-bot}}.
Options
- Define '| cat1=', '| cat2=', or '| cat3=' with some category names to add those categories to the page; neither the "Category:" namespace prefix nor '[[' or ']]' is allowed. (They're provided free of charge, along with the default pipe trick to the {{SUBPAGENAME}} magic word.)
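Putting the pieces together, a hedged sketch of the intended top/bottom pairing on a page (the sort key and extra category are placeholders):

{{TRScc-top}}
...page content...
{{TRScc-bot|After|TA=yes|cat1=Some extra category}}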
Source: http://en.wikibooks.org/wiki/Template:TRScc-bot
So this is very much an outsider’s history, and like any history, it is necessarily biased, selective, and incomplete.
6/28/2000. Eric van der Vlist: Will RSS fork?
Following a thread on the syndication mailing list, Rael Dornfest has announced an “RSS Modularization Spec(ish) page” defining how RSS could be extended using namespace based modules.
7/5/2000. Leigh Dodds: RSS Modularization
Perhaps the key benefit of RSS is its simplicity. The syntax for the format is easy to understand, and there are only a few self-explanatory tags to learn. This makes RSS files relatively trivial to produce. Dave Winer of Userland has recently added some new online documentation for RSS 0.91, adding historical notes as well as capturing details of its common usage patterns.
Developers on the RSS and Syndication mailing lists are now discussing future directions for RSS, the hope being to build on current successes and provide richer functionality.
8/14/2000. Rael Dornfest: RSS.
Interested parties are invited to join a working group on the newly-created RSS-DEV mailing list at:
8/16/2000. Aaron Swartz: Re: Thoughts, questions, and issues
[addressing Dave Winer] If I understand you correctly, you want to create a set of elements for RSS that are widely supported and you’re free to do that. Just create your own namespace and tie it into the proposal. You say namespaces are confusing, but I have to disagree with you there. When used properly, they can actually make XML easier to understand.
… RSS is now (or once again) an RDF format, which has its benefits and drawbacks. It does make RSS more complicated, which is a downside. However, as R.V. Guha pointed out to me, you can easily escape from RDF if you don’t like it by using the rdf:parseType=”literal” attribute. Again, I think this is likely a best-of-both-worlds move.
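To make the namespace-module idea concrete, a hedged sketch of an RSS 1.0 item extended with the Dublin Core module (the item URL and values are placeholders); a parser that only knows the core format can simply skip the dc:-prefixed elements:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <item rdf:about="http://example.org/news/1">
    <title>Example headline</title>
    <link>http://example.org/news/1</link>
    <!-- dc:creator comes from the Dublin Core module, identified
         purely by its namespace -->
    <dc:creator>Jane Doe</dc:creator>
  </item>
</rdf:RDF>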
8/16/2000. Dave Winer: Re: Thoughts, questions, and issues #2
So, Aaron, because we disagree you get to make the rules?
So sad it comes to this. If you’d stop and think you’d realize there’s a win-win here, all it takes is a little listening and considering other points of view.
So sad, because there will be two RSS 1.0s.
So confusing, so embarrassing.
(And a waste of time!)
See you in the market.
8/16/2000. Paulo Gasper: Re: Thoughts, questions, and issues #2b
Hi Dave,
Your statement (above) works both ways.
IMHO, Aaron even gave an example to illustrate why he thinks that way. It seems to me that he is trying to reason over that. Not forcing.
8/16/2000. Dave Winer: Re: Thoughts, questions, and issues #2c
Paulo, the force comes from the choice of the RSS 1.0 name. Doesn’t leave much wiggle room.
8/16/2000. Dave Winer: Re: Thoughts, questions, and issues #3. (Note: here Dave is not replying to Aaron’s messages quoted above, but to someone else entirely.)
That’s fine. But I’m going to keep going. I’m tired of debating this stuff. I’ve been having a lot of fun in the last couple of months, and it’s only been in the last few days that it started to turn into the usual hand-wringing, trying to keep you from hearing things that appall you. Enough is enough. These guys want to own RSS. I put a ton of work into it. Somehow reconcile that with your appalled-ness. This is not a nice thing that’s going on. I’m appalled.
8/16/2000. Paul Freeman: The RDF approach needs to answer some valid criticism
- To enable the average developer to cope, a syndication format must be simple to create and be easily read by a human. The rdf approach requires too much studying and background knowledge to easily pick up and is too hard for humans to read and create manually.
- RSS should also be easy to parse and create using any software environment which developers care to use. Some software environments are too weak to handle RDF and the namespace syntax.
… If the RDF approach is to be widely accepted and adopted then 1) and 2) require solutions. Not all of them may be technical, but better software tools support is part of a solution which does not require the simple syntax required by the “expanded core”. This software tools support should span *all* of the environments which people need to use… and we shouldn’t sneer at people who try to parse this stuff in Perl, VB or even, shock horror, Macromedia Flash.
8/17/2000. Paulo Gasper: Re: Thoughts, questions, and issues #4
That seems to be the main problem: RDF focus.
RSS became a popular format with people that couldn’t care less about RDF. The value of RSS is that popularity.
There are a lot of private little “RSS processors” and this [proposed RSS 1.0] standard does not care much about them.
8/20/2000. Aaron Swartz: Re: Thoughts, questions, and issues #5
I think the answer from the RSS-DEV people (I’m sort of guessing, correct me if I’m wrong) is that the writers shouldn’t have to understand the spec – they should be able to use tools that will generate the RSS for them.
… The fact is, as far as I’m concerned, nobody but the programmers should have to deal with these specs. There seems to be a lot of confusion here, that RSS files are meant to be written by hand. Perhaps that was true with the old spec, but it doesn’t need to be, and is even less true with the new one. The specs are written for programmers, to allow them to write programs that communicate.
… so that you don’t have to generate the RSS file by hand, you can convert to it or do it through a web interface. If you still have trouble, let us know, we’re here to help, not to scare.
8/20/2000. Lynn Siprelle: Re: Thoughts, questions, and issues #6
[addressing Aaron] I have nothing but respect for you, *believe* me, but this is just a little too close to “don’t worry your purty li’l head about it, missy.”
OK, fine, so I can find some tool somewhere to generate the RSS file for me. But what if I want to parse one, which I will? I *still* have to understand the spec, and I don’t. And I’m neither stupid nor technically illiterate.
… It’s not just the simple writers you’ll need to worry about. It’s the simple webmasters who are doing things with RSS files now and who will get tripped up by these changes. If you’ve ever done tech support (and I have) you know that there are all kinds of people out there doing this stuff–brilliant kids like Aaron and old duffers who just wanna put up photos of the grandkids, and everyone in between.
8/20/2000. Aaron Swartz: Re: Thoughts, questions, and issues #7
[addressing Lynn] Oh, I definitely know what you mean. But here’s how I see it:
Writers:
- Use an automated RSS creator
- Use a web-based RSS creator with a nice interface
- Use a converter from the simple format to the more complex
Readers:
- Use a pre-built RSS parser (like XML::RSS for Perl)
- Use a down-converter to a simpler format
- Ignore the new additions and just use the old stuff
8/21/2000. zac: Re: Thoughts, questions, and issues #8
[addressing Aaron] These sorts of assumptions are self fulfilling to a degree. If you write a spec assuming that some users won’t interact with it then they won’t.
This limits the number of people that will use the format.
People want to (and should) understand the technologies that they use. So when you build a format that puts required namespace declarations in the <channel> tag then I think you’ve gone down a path that isn’t going to be followed by as many people.
8/21/2000. Aaron Swartz: Re: Thoughts, questions, and issues #9
I maintain that the new [proposed RSS 1.0] spec is for more technical usage than the older one. For real heavy-duty use, it will require some understanding of XML and RDF. The benefit from this is more power, but at the expense of some clarity and simplicity. I see RSS as moving away from a simple XML language for the people, and more towards a communication system for content management systems and other scripting environments. It may not be the choice you believe in, but it’s a choice that the authors are making. There will always be other formats if you don’t agree.
8/24/2000. Dan Libby: RSS: Introducing Myself
I was the primary author of the RSS 0.9 and 0.91 spec and the architect behind the My Netscape Network. … I was the primary author of the RSS 0.9 and 0.91 spec and the architect behind the My Netscape Network (a separate project from My Netscape, which I also worked on). I left Netscape in 1999, in part because of what I felt was mis-handling (non-handling?) of RSS and the MN platform. I fully expected the format to die an ignominious death, and I was pleasantly surprised recently to poke my head out of the sand and find so many people still using it. I am glad that the net community has begun adopting RSS, and would like to see it realize the original vision.
The original My Netscape Network Vision:
We would create a platform and an RDF vocabulary for syndicating metadata about websites and aggregating them on My Netscape and ultimately in the web browser. Because we only retrieved metadata, the website authors would still receive user’s click-throughs to view the full site, thus benefitting both the aggregator and the publisher. My Netscape would run an RDF database that stored all the content. Preferences akin to mail filters, would allow the user to filter only the data in which they are interested onto the page, from the entire pool of data. For example, a user interested in articles about “Football” would be able to setup a personalized channel that simply consisted of a filter for Football, or even for a particular team or player. Or for all references to Slashdot.org, or whatever. This fit our personalization scheme well, and would (I hoped) give us the largest selection of content, with the greatest degree of personalization available. Tools would be made available to simplify the process of creating these files, and to validate them, and life would be good.
What Actually Happened:
- A decision was made that for the first implementation, we did not actually need a “real” RDF database, which did not even really exist at the time. Instead we could put the data in our existing store, and instead display data, one “channel” at a time. This made publishers happier anyway, because they would get their own window and logo. We could always do the “full” implementation later.
- The original RDF/RSS spec was deemed “too complex” for the “average user”. The RDF data model itself is complex to the uninitiated, and thus the placement of certain XML elements representing arc types seemed redundant and arbitrary to some. Support for XML namespaces was basically non-existent. My (poor) solution was to create a simpler format, RSS 0.9, that was technically valid RDF, but dropped namespaces and created a non-connected graph. … This marked the beginning of the Full Functionality vs Keep It Simple Stupid debate that continues to this day. …
- We shipped the first implementation, sans tools. Basically, there was a spec for RSS 0.9, some samples, and a web-based validation tool. No further support was given for a while…
- At some point, it was decided that we needed to rev the RSS spec to allow things like per item descriptions, i18n support, ratings, and image widths and height. Due to artificial (in my view) time constraints, it was again decided to continue with the current storage solution, and I realized that we were *never* going to get around to the rest of the project as originally conceived. At the time, the primary users of RSS (Dave Winer the most vocal among them) were asking why it needed to be so complex and why it didn’t have support for various features, eg update frequencies. We really had no good answer, given that we weren’t using RDF for any useful purpose. …
- We shipped the thing in a very short time, meeting the time constraints, then spent a month or two fixing it all. :-) …
- People on the net began creating all sorts of tools on their own, and publishing how-to articles, and all sorts of things, and using it in ways not envisioned by, err, some. And now we are here, debating it all over again.
8/25/2000. O’Reilly Network: Open Source Roundtable: Radio show on RSS 1.0
O’Reilly Network publisher Dale Dougherty talks with some of the core developers behind the new spec for RDF Site Summary (RSS 1.0) about the background behind RDF, the need for a standard, and what RSS enables. (downloadable as MP3 (10MB), or as RealAudio stream)
8/26/2000. Dave Winer: Comments on O’Reilly radio show on “RSS 1.0″
The format and process they describe are highly complex. They are over-estimating content people’s technical sophistication and interest in working on new formats.
IMHO, the new format should not be called RSS. There’s been a fork, and the peaceful solution is to each go our own way. Calling their spec RSS is unfair. We never considered moving RSS forward without getting O’Reilly on board first. RSS 1.0 was a surprise, we found out when the spec went public. I’ve said this over and over to the O’Reilly people, I would wish them godspeed if they hadn’t called it RSS. Should we call our spec RSS 1.0 too?
BTW, it was Netscape’s decision to take the RDF out of RSS, one we heartily supported. We considered calling it Really Simple Syndication. That’s the core thing about RSS, simplicity, it’s almost an end-user format, easily explained in a four-screen spec designed for people who understand HTML and not much more. Once Guha left, Netscape totally dropped the RDF pretense. Now it’s back.
8/26/2000. Aaron Swartz: Re: Commentary: RSS Roundtable
[re: complexity] Once again, content people can use the tools that we’re creating to convert from simpler formats and write files through a Web interface.
[re: "RSS 1.0" naming] Hmm, perhaps we should consider changing the name. The problem is that many of us have so much invested in the current name, making it painful to change it. Having two RSS 1.0′s would be even more confusing. I think the name is also deserved, considering the large amount of work spent on making the new spec backwards-compatible with RSS 0.9. It would be different if their we were creating a radically new spec, but we’re not — instead we’re simply adding namespaces and more RDF support to an already existing spec.
[re: simplicity] We disagree on the importance of simplicity. Yes, I like simplicity, but it needs to be balanced. I don’t think that’s the core thing about RSS, I think the core thing about RSS is what it stands for: RDF, sites and summaries.
9/2/2000. Dave Winer: What to do about RSS?
I wish it had turned out this way, then the people who legitimately want to do a Namespaces-and-RDF syndication format would have to choose another name. To their credit, the water is muddied by the departure of Netscape from the process. So there’s now an identity crisis, what is RSS, and who, if anyone, has the right to evolve it?
I think the answer to this question is totally obvious. But as one of the parties to the dispute it’s not up to me to say what it is.
9/4/2000. Ken MacLeod: Re: What to do about RSS
[addressing Dave] From my pov, that the new proposal, by the majority of developers, be “RSS 1.0″ seemed so obvious that it wasn’t until you objected so vehemently that it even crossed my mind.
9/4/2000. David McCusker: Re: TBL
[addressing Dave] I’m not directly involved. In fact I don’t want to be involved. :-) But it’s clear to me you were dispossessed by the naming, and very intentionally so by the folks who chose the name. I’m sensitive to nuances in dispossession.
… Only two main things matter in gauging your dispossession. First, you were a voluntary party to an earlier version of RSS with certain characteristics. Second, you were an involuntary party to the re-use of the old name for a new (but somewhat related) version with strikingly different technical characteristics. Case closed. They owe you. If they don’t pay, then they suck.
9/4/2000. Ken MacLeod: Re: TBL #2
It has been suggested that both forks use a different name.
9/4/2000. David McCusker: Re: RSS name cutting and drying
[addressing Ken] I’d noticed a pronounced absence of negotiation over the naming problem, as if the folks who came up with the proposed RSS 1.0 had responded to Dave by asking coldly, “Who are you, again?” It was the coldness that had a really bad feel to it, provoking my ire.
By the way, you’re doing a fine and human job of discussing the issue in a style I think is very nice. I only really think more responsiveness is required from the RDF+NS folks. The apparent “I don’t know you” reaction suggests bad faith, which folks should scramble to avoid.
[re: "it has been suggested that both forks use a different name"] That’s fair if there are actually two new evolving specs, if both sides agree to sign off. It only seems wrong if one side chooses unilaterally, especially if seeming to arrogate sole ownership to itself. It’s better to part ways amicably than to dump an inconvenient past partner. Folks who dump others inspire less future trust.
9/11/2000. USPTO: Trademark application #78025336: RSS
Mark (words only): RSS
Current Applicant: Userland Software
Filing Date: 2000-09-11
Current Status (2001-12-26): Abandoned: Applicant failed to respond to an Office action.
9/12/2000. Dave Winer:
Recently I have had a standard that I co-authored stolen by a big name, totally brazen, and I’ve said Fuck This many times in the last few weeks, and it hasn’t done any good.
9/13/2000. Dan Brickley: Re:.
9/13/2000. Dan Brickley: Re: #2
You’re mad at us because you think we stole your vision and corrupted it.
I’d like you to stop with the accusations of theft. Last I heard from you on that topic, you still claimed we were thieves. It’d be really nice to hear that retracted.
9/13/2000. Dave Winer: Re: #3
I will retract the statement after the name is changed to something other than RSS 1.0. Until then, however reluctantly, I will stand by the statement I made on the decentralization list, in the context it was posted.
My company has big plans for RSS, and they don’t include advising developers to do namespaces and RDF.
9/13/2000. Dan Brickley: Re: #4
Is it true that the *only* thing that you feel we have stolen is the name. No ideas, no technology, no designs were stolen, just the letters ‘R’,'S’,'S’. Rich Site Summary. RDF Site Summary. Really Simple Syndication. Really silly squabbles…
9/13/2000. Dave Winer: Re: #5
Dan here’s what was stolen.
Before the namegrab I had some influence on and participation in the evolution of RSS.
After the grab, I have no say in its future. I’m reduced to trying to talk you out of the namegrab. I’ve put so many weeks into just doing that. Here’s what it comes down to:
My choice is to accept your version or..?
What if I think it’s wrong? What then?
The provocative act was to take the name of something that exists and put it on something new.
You may not agree, and I don’t want to debate all this *yet again* but that’s what I lost in this and it’s not fair. I worked hard to get RSS to where it is now. Lots of months, down the drain. Why? Why do you want me to go away? What the hell did I ever do to you?
9/13/2000. Dave Winer: Greetings Syndicators!
I’d like to find out if there’s interest here in working beyond RSS 0.91, adding a few features possibly, new docs and howtos, or sample code, or just asking questions about how people do stuff.
… We might even rename our work something like RSS-Classic, so the people who want to own RSS can have their way.
9/20/2000. Tim O’Reilly: Re: Asking Tim
Speaking of RSS, here’s my read on what happened. (I wasn’t directly involved.). The only connection I can see is that the O’Reilly Network ran a series of ads on our sites promoting its stories about the RSS 1.0 spec (just as it promotes other stories on O’Reilly Network sites). Dave never approached me directly to express a point of view such as “I think the RSS spec is going in the wrong direction. Is there anything you can do to help get my point of view across to the other developers?” Instead, the first I heard of it was a series of public accusations that my company was leading a conspiracy to steal “Dave’s” standard.
9/20/2000. Dave Winer: Re: Asking Tim #2
9/21/2000. Tim O’Reilly: Re: Asking Tim #3
As I understand it, it was public knowledge (or certainly your knowledge) that there was work going on on a spec to extend RSS. When it was published, it was published as a “proposed RSS 1.0 spec”, and that seems completely legitimate to me, whether or not the work to develop it was done in public or private. The dozen people who worked on it have enough history with RSS to propose anything they like. You personally urged Rael to start the effort to write up what he was thinking as a proposed spec. And a “proposed RSS 1.0 spec” seems like as good a description as any for what they had come up with.
It seems to me that you immediately hardened the battle lines, and started crying foul, when you should instead have said: “I don’t think that this is the right direction for RSS 1.0.” If you’d kept yourself to technical substance instead of vague (and incorrect) accusations of plots masterminded by O’Reilly, this whole contretemps could have been avoided.
As Lao Tzu says, “He who feels pricked, must first have been a bubble.” I believe it was your power grab to unilaterally rewrite the RSS 0.91 spec with a Userland copyright that actually started this whole thing. You were moving to claim RSS as “yours” and a group of other developers put an oar in, and you didn’t like it.
… By any outside reading, your claim to have “created” RSS has no basis. Dan Brickley’s posts to FoRK, the first of which I linked to above, make that fairly clear. Netscape created it, but even then, it is so similar to other things available at the time from a number of players, including Microsoft, that anyone’s claim to ownership are pretty thin. Netscape created the name, and that’s about as close as you can get. You certainly did a lot of work to popularize and support it.
9/21/2000. Dave Winer: Re: Asking Tim #4.
9/21/2000. Tim O’Reilly: Re: Asking Tim #5.
10/12/2000. Ken MacLeod: Re: RSS History
The way that the W3C and the IETF do it is to have working groups. Working groups are made up of people who are knowledgable of the subject (experienced users as well as developers) and are willing to extend the effort to participate in the working group. It is highly regrettable that the decision to form a working group was made only after the RSS 1.0 proposal.
… I can think of no other project or technology that went through a set of circumstances similar to this.
10/12/2000. Dave Winer: Re: RSS History #2
Thanks for the help Ken.
So I guess it would be fair to say that if an outsider looked at this, that there is no precedent for the transition that took place here, this is not how open source projects or W3C or IETF projects fork.
Some people have said that the community decided to go in the direction of Namespaces and RDF, but it’s clear that that did not actually happen.
10/13/2000. Ken MacLeod: Re: RSS History #3
Right, I don’t recall ever seeing anything like this before.
[re: community support] It’s clear that it was not a unanimous decision, yes.
10/13/2000. Dave Winer: Community consensus
The claim has been made, offlist, that there was a community consensus to move to namespaces and RDF and modules. If there was such a consensus, now is the time to show where the record of that is. Ken provided a pointer, but it’s not what I asked for, because no one asked “Is it OK if we call this RSS 1.0?”
The great thing about eGroups is that no one can tamper with the record. If there was a consensus, it *must* be evident here. I went to the trouble to read the archives over the summer. There isn’t that much to read. I found no evidence that the question was ever asked on this list. I know for a fact that I was never asked to vote on the transition, and I don’t think the general membership of this list was asked either.
10/15/2000. Seth Russell: Re: [syndication] changing the name of RSS
I propose that we change the name because it would:
- help heal the rift with Dave Winer,
- encourage a new attitude towards this revolutionary RDF Metadata Feed; and
- [as you indicate] clear up the confusion in the marketplace if the syndication group moves ahead with RSS 9+.
Are there any valid arguments against changing the name?
10/15/2000. Paulo Gaspar: RE: [syndication] changing the name of RSS #2
Yes, changing the name of RSS 1.0 to something else is what makes more sense – it is the fork with the most differences from the previous version.
10/17/2000. Paulo Gaspar: RE: [syndication] Total confusion in RSS-Land
The only problem I see with your arguments is that you never talk about FORKing, giving another name to RSS 1.0. Both groups could “live amicably” if the “1.0” group would just agree on that.
… And it makes sense. Even if you do not understand why other people find the RDF solution complex, they still DO. And RSS 0.92 will be much closer to 0.91 than 1.0 is. It is up to 1.0 to get another name.
10/17/2000. Seth Russell: Pick a new name for RSS 1.0
It has been suggested by Dave Winer and others that it is inappropriate to name our standard RSS 1.0. To clear up the confusion that most certainly will emerge in the market place and to give this format a new revolutionary start, it seems appropriate to also give it a new name.
10/18/2000. Jeff Bone: Forking, the name game, the politics of naming
We all seem to acknowledge that there’s too much ideological distance between the camps to reasonably work on a single effort, and therefore forking is inevitable. The controversy is purely on which effort — the new and improved and totally revised and overly complexified effort or the brutally simple incremental improvement effort — gets to keep using the RSS name. Given that the original stakeholders are in favor of the simpler version, IMO they should get to keep the name.
10/19/2000. Mark Ketzler: Re: Forking, the name game, the politics of naming
RSS existed and was being used by lots of folk
Group A (including some of the RSS originators) wanted to make RSS extensible etc.
Group B (including rest of the original group) wanted status quo
This is a fork by Group A. Why should Group B change the name of something that existed. How do you defend this? If the RSS-DEV WG is so concerned about the RSS brand why are you tarnishing it with this name grab?
11/7/2000. Dan Brickley: RSS-Classic, RSS 1.0 and a historical debt
Contrary to what you might hear, the RSS 1.0 proposal did not come from a bunch of outsiders who swooped in and grabbed the prestigious acronym ‘RSS’. RSS 1.0 as proposed is solidly grounded in the original RSS vision, which itself had a long heritage going back to MCF (an RDF precursor) and related specs (CDF etc). This can be seen for example from Guha’s longstanding involvement, and Dan Libby’s account of the Netscape RSS work and endorsement of the 1.0 proposal.
… I believe we have more than adequate historical justification for calling the ‘new’ stuff RSS. To defend this observation requires pointing to a bunch of historical baggage that I’ve previously avoided publicising.
… On the basis of these various observations, we have two traditions, both rooted in Guha’s MCF work.
MCF > CDF > scriptingNews > RSS 0.91 > RSS-Classic
MCF > XML-MCF > RSS 0.90 > RSS 1.0
… To stress my point once more. The RSS 1.0 proposal did not appear from out of nowhere, but is rooted in 5 years’ work in this area. The RSS 1.0 proposers did not swoop in from nowhere to steal the ‘RSS’ acronym.
11/7/2000. Dave Winer: Dan Brickley’s message
I just subscribed to this list for a moment, to rebut Dan’s assertion that <scriptingNews> format was derived from CDF. It was not. It was derived from my experience as a web writer and web app developer and that’s all. I documented my work through 1997-2000 on this stuff, publicly.
11/7/2000. Seth Russell: Re: [RSS-DEV] RSS-Classic, RSS 1.0 and a historical debt
[addressing Dan Brickley] Look, this thing is past the point of justifications being important. … Now, personally I don’t have any vested interest (ego) in either the WG group or in Dave’s group. But it doesn’t feel right to me. I don’t understand the divisive stubbornness that keeps the WG from just changing the name. But I do see the political magic that would happen if we just changed it. Especially since XRSS is the better name anyway and actually names this thing politically correct.
Heal the Rift
Start anew
Change the name!
The name was never changed.
§
Source: http://web.archive.org/web/20110718031950/http:/diveintomark.org/archives/2002/09/06/history_of_the_rss_fork
Whether you are doing pre- or post-mortem debugging, and whether you are using Visual Studio or WinDBG, one of the most important things you can do ( short of not writing the bug in the first place! ) to ready yourself for a productive debugging session is to establish the use of Symbol and Source servers. I’m going to give you some quick pointers on how to make use of these technologies in your day-to-day development process.
What is it? Essentially, a symbol server is a way to expose the symbols ( PDBs ) of your applications to the debugger in a way discoverable by the debugger regardless of where the debugger is run. The same is true for system symbols. Not only that, it is a way to handle the versioning of those PDBs so that the debugger understands how to find version one of your symbols, vs. version two, and so on.
That’s all well and good, but why is this a good thing for the developer? Let’s look at what things look like when you *don’t* have a symbol server established.
Here’s some sample code you have undoubtedly seen a zillion times by now:
using System;

namespace DebuggingSeries
{
    class Program
    {
        static void Main( string[] args )
        {
            Console.WriteLine( "Hello World" );
        }
    }
}
Set a breakpoint on the Console.WriteLine line and start debugging with F5. After a moment, you will hit the breakpoint. Now, open up the Modules window ( while in the debugger, select the Debug->Windows->Modules menu item ):
This will open a tool window that looks something like the one below:
The “Modules” tool window shows you all the images ( dlls, etc. ) that are loaded in order to run the executable you are currently debugging. ( NOTE: This is a very useful tool window that many folks don’t know is available in the product, and it has many uses which we won’t have a chance to go through today — more on it some other time. )
You’ll notice in the image above that there is a column called “Symbol Status”. Assuming you are running with the default debugging settings that Visual Studio 2010 shipped with, you’ll also notice that almost all the rows in the tool window except the one I highlighted have the value “Skipped loading symbols.” The one highlighted is the actual executable itself, which contains the symbols associated with the code you typed in. You’ll also notice that the “User Code” column is “No” for everything except this last entry. More on this later.
Now bring up the “Call Stack” tool window ( Debug->Windows->Call Stack ). You should see something like this:
Cool, now let’s see what changes once you start using a Symbol Server, but there is actually one other thing you’ve gotta do before you see the goodness of the Symbol Server. You’ll need to turn off the “Just My Code” feature in Visual Studio.
By default, Visual Studio 2010 has the “Just My Code” option turned on. It is on by default as it tends to reduce the amount of information a new user has to deal with when initially debugging inside Visual Studio. It tries to protect you by not drowning you in information.
I struggle with this option, as the intent of it is quite good, but in practice, I tend to always turn this off. ( I would love to hear your thoughts on whether or not this option should be on or off by default. Please fill the comment section with those thoughts! )
For the purposes of this post in this series, please turn off this option. Here’s how:
Go into the “Tools->Options…” dialog, find the “Debugging” option on the left side of the dialog, and notice the “Enable Just My Code” option on the right:
Uncheck that box so that your dialog looks like this:
Now you’re ready to see the effects of the Symbol Server option.
Go back into the “Tools->Options…” dialog, and expand the “Debugging” option, and select the “Symbols” option:
Go ahead and click the check box next to “Microsoft Symbol Servers”. You should see a dialog box popup that you can safely ignore for now.
If you hit OK at this point, VS will automatically fill in a default directory that it will use for PDBs it pulls down from a public Microsoft site that contains the PDBs tied to the various versions of the framework DLLs.
If you are on a team of developers, it is a good idea to establish a common file share that all members of your team can use to house these symbols so that you don’t have to wait for the download of those images while you debug. The first developer who needs them would feel that delay, but everyone else on the team would simply pull from the common share.
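Outside the Options dialog, the same layering can be expressed with the standard symbol-path syntax that both Visual Studio and WinDBG understand. A sketch, assuming a hypothetical team share at \\teamshare\symbols and a local cache at C:\symbols:

REM Symbol stores are searched left to right; symbols pulled from the public
REM Microsoft server are cached in the local and team stores on the way back.
set _NT_SYMBOL_PATH=srv*C:\symbols*\\teamshare\symbols*http://msdl.microsoft.com/download/symbols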
For now, take the defaults and hit F5 again. You should be in the debugger waiting at the breakpoint.
Take a look at your Modules window now:
Notice how everything is loaded ( the Symbol Status column reads “Symbols loaded.” ) and the User Code column reads “N/A”, ‘cause we disabled the “Just My Code” feature.
If you had turned on the Symbol Server options as described above but failed to turn off “Just My Code”, you would have seen something like the following in the Modules window, indicating that symbols were available to be loaded, but Visual Studio did not do so in order to abide by the constraints associated with “Just My Code”. Also, your call stack would look exactly the same as it did before.
Now take a look at your call stack window:
Compare this image of the call stack with the one I previously showed you above. Notice how much more information is now available to you?
And therein lies the point of all this:
When you are debugging, you need as much information as you can get in order to figure out the task at hand. You never know what little piece of information will be the clue that drives you towards the final solution.
This post is just scratching the surface. Next post in this series, I’ll dig a little further into some more of the benefits of the symbol server, and why making your own symbols available to your team in this manner will benefit you.
Cheers!
Cameron
Source: http://blogs.msdn.com/b/camerons/archive/2011/04/01/debugging-series-symbol-server.aspx
I have been there, watching all those files uploading via FileZilla, migrating the database, restarting the server(s). If you have ever automated your deployments, even just once, you will have been spoiled. Anything else is pain.
You most likely will have heard of Capistrano. It’s an opinionated deployment tool written in Ruby. Capistrano offers user defined tasks, via an elegant DSL, for making the deployment of your latest web application a snap.
Today we will walk over a few basics, examine possible web server setups, and look at setting up some custom deployment tasks.
Opinions Matter
Remember I said Capistrano is opinionated? These opinions come in the form of a few assumptions Capistrano makes about your application:
- You will use source control
- You will have a “config” directory at the top level of your application
- You will use SSH to access the box we are deploying to
Your Server Setup
We need to talk about the setup of your server box. Services such as Heroku and EngineYard take away all the pain of setting up MySQL, Apache and so on. However, if we look around the web there is a plethora of cheap VPSes that will meet your needs just as well. Sure, we will have to do some initial work to get the server set up, but that is a one-time deal and we can automate it with a little know-how.
My VPS uses users as hosting accounts. If I have an application named “capdemo” I will also have a user on the box named “capdemo”, with a home directory acting as their piece of the hosting pie.
I also use an Apache server, mainly because I am very familiar with it. NGINX is an alternative which gets a good write up. For now, I’m sticking with Apache. Both servers play nice with the next assumption of this article, which is: we will use Passenger.
Passenger gives us a mainstream deployment process. There is no special server configuration, or port management. A new release is just a case of uploading the files and restarting the application server (Mongrel, Unicorn etc.).
Just a short note on restarting these servers. Passenger looks for a file called tmp/restart.txt in your application directory to tell it when to restart the application server. A manual restart would be touch tmp/restart.txt.
Cooking with Capistrano
These days, I get really hungry talking devops. When I build a Capistrano script, I am developing a recipe telling Capistrano how I would like my web application prepared (medium rare, maybe?). There is no splashing in extra Sriracha for heat. The recipe is followed to the letter. If it can’t complete the deployment, Capistrano lets you know and cleans up the dishes.
Capistrano works great with Rails applications, but can be used with pretty much any application. The application doesn’t even have to be Ruby-based. In this case, however, we will use a Rails application to get cooking.
As usual, add gem 'capistrano' to your Gemfile and run bundle install. Now, we can “capify” our project by running capify .. This creates a Capfile and a deploy script in the config directory of our application.
It is within the config/deploy.rb file where we will create our deployment recipe. Looking at the deploy.rb file, we see Capistrano has been nice enough to get us started.
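For reference, the stock template that capify . generates in Capistrano 2.x looks roughly like this (a sketch, approximate rather than verbatim):

set :application, "set your application name here"
set :repository,  "set your repository location here"

set :scm, :subversion
# Or: `accurev`, `bzr`, `cvs`, `darcs`, `git`, `mercurial`, `perforce`, `subversion` or `none`

role :web, "your web-server here"                          # Your HTTP server, Apache/etc
role :app, "your app-server here"                          # This may be the same as your `Web` server
role :db,  "your primary db-server here", :primary => true # This is where Rails migrations will run

# If you are using Passenger mod_rails uncomment this:
# namespace :deploy do
#   task :start do ; end
#   task :stop do ; end
#   task :restart, :roles => :app, :except => { :no_release => true } do
#     run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
#   end
# end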
Your generated deploy.rb file should look like the above (at the time of writing the latest stable version of Capistrano is 2.9.0). This gives us a bit of a head start for our recipe.
So let's change that to something more homemade. First, we need to set up some SSH configuration, information about the application, details of where it is to be deployed, and some SCM details.
ssh_options[:forward_agent] = true

require 'json'
set :user_details, JSON.parse(IO.read('dna.json'))

set :domain, "capdemo.bangline.co.uk"
set :application, domain
set :user, user_details['name']
set :password, user_details['password']
set :deploy_to, "/home/#{user}"
set :use_sudo, false

set :repository, "git@github.com:bangline/capdemo.git"
set :scm, :git
set :branch, "master"
set :deploy_via, :remote_cache

role :app, domain
role :web, domain
role :db, domain, :primary => true

# If you are using Passenger mod_rails uncomment this:
# namespace :deploy do
#   task :start do ; end
#   task :stop do ; end
#   task :restart, :roles => :app, :except => { :no_release => true } do
#     run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
#   end
# end
The ssh_options[:forward_agent] setting ensures we use the keys on our local machine rather than those on the server. I use this as I do not usually place keys on the server to access GitHub, but it is completely plausible to do so (delete this line if you are).
I then parse a file named dna.json for user credentials. Not only can I omit this sensitive file from a public git repo, but it can be used to make the recipe more reusable. For instance, we could also set up all the SCM details in the dna file. The contents of the dna.json file look like:
{ "name":"chuck_norris", "password":"dont_need_one_as_the_server_knows_and_fears_me" }
The next few lines explain themselves pretty well. We set up the application name, the user credentials on the server, our git configuration and, finally, where the application is to be deployed.
I should point out here I am not using keys for SSH access. If the user password is set then Capistrano will use it throughout the deployment. It’s fine for this scenario, but if we were deploying across multiple servers Capistrano would assume the password was the same for all servers. In other words, for multi-server deployment use SSH keys. It is not the most secure method, but pretty flexible. Just make sure your password looks like a cat took a nap on your keyboard.
I have also set the :deploy_via to :remote_cache. This creates a git repo on the server itself, preventing a full clone of the application on every deployment.
The role definitions describe the layers of our application. The :app layer is what we are most used to in development, the :web layer is where all the requests go, and :db is where we want to run migrations. This style of configuration can look silly as we are only using a single box (the server keyword addresses this), but if we ever need to scale and separate the database and so on, then this style is more maintainable.
It is possible to do a couple of checks at this point. If you run the cap -T command in your terminal you will see what tasks Capistrano already knows about. At this time, we want to set up the application's directory and check that the permissions are correct.

cap deploy:setup
cap deploy:check
Capistrano Layout
Before going any further, we can examine what layout to expect on our server. If we SSH to the box and check the application path we should see:
- current
- releases
- shared
The current is simply a symlink to the latest in the releases folder. Having this constant, we can then set our Apache vhost config file to the following:

<VirtualHost *:80>
  # Admin email, Server Name (domain name) and any aliases
  ServerName capdemo.bangline.co.uk

  # Index file and Document Root (where the public files are located)
  DocumentRoot /home/capdemo/current/public
  <Directory /home/capdemo/current/public>
    Options -MultiViews
    AllowOverride All
  </Directory>
</VirtualHost>
The releases directory holds all the releases we do using Capistrano; the current symlink points to the latest directory in here. We can limit the number of releases to keep in our deployment recipe with set :keep_releases, 5, or use the Capistrano task cap deploy:cleanup.
The shared directory persists across deployments, so put items like user uploaded assets or sqlite databases in this directory.
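As an aside, a common way to wire the shared directory into a release is a small symlink task. A minimal sketch, assuming your app keeps user uploads in a hypothetical public/uploads directory:

namespace :deploy do
  # Link persistent, user-uploaded assets from shared/ into the new release
  # so they survive deployments.
  task :symlink_uploads, :roles => :app, :except => { :no_release => true } do
    run "ln -nfs #{shared_path}/uploads #{release_path}/public/uploads"
  end
end

after "deploy:update_code", "deploy:symlink_uploads"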
Writing a Recipe
So far all we have done is check everything is in place for our first deployment. The output of cap deploy:check should have told us everything is looking good. If not, you probably have to check that the permissions for the user are correct. Remember I told you to clear out the deploy.rb file? Well, the truth is that using Capistrano with Passenger is so easy, it's almost expected. We left some commented-out code in the deploy.rb and we need it now.
namespace :deploy do
  task :start do ; end
  task :stop do ; end
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
  end
end
These are the first deployment tasks we are adding. We override the default start, stop and restart tasks to be specific to our setup. The restart task can be called using cap deploy:restart, and you can see it touches the tmp/restart.txt file. What is more important is the dissection of a task.
We have a namespace, deploy, and some sub-tasks. When we call cap deploy:restart the only task executed is restart. Calling cap deploy in the terminal will run all the tasks we see under the namespace and a few others we cannot see. Under the hood, the cap deploy task has a set of stored/default tasks, the details of which have been nicely illustrated here. The bits we are interested in, in the order in which they are called, are:
- deploy:update_code – Pulls the code from the git repository
- deploy:symlink – Links the most recent release to current
- deploy:restart – We have overridden this to just touch the restart.txt file
Another consideration is that we are developing a Rails 3.1 application. We want to use Bundler to manage our gem dependencies (why would we want anything else?). Luckily, Bundler has a deployment task for Capistrano.
require "bundler/capistrano"
Simply adding this line to our
deploy.rb will bundle all our gems on deployment. It also does this task in a smart way. All the gems are packaged into the
shared/bundle directory. This is the modern day equvalent to freezing our gems. With that in place we are nearly ready to deploy.
Since this is going to be our first deployment, we need to perform a couple of extra tasks from the command line. First, we need to migrate the database. Without breaching any copyright, ‘There is a task for that’. So let’s get some code on the server using cap deploy:update_code, then run cap deploy:migrate. At this point we have code on the server, a database with the latest schema, and our gem dependencies have been fulfilled.
Throwing in Extra Ingredients
As I mentioned before, we will be deploying a Rails 3.1 application. Along with the asset pipeline came the ability to precompile our assets. This gives us a perfect excuse to create our own deployment task.
namespace :assets do
  task :compile, :roles => :web, :except => { :no_release => true } do
    run "cd #{current_path}; rm -rf public/assets/*"
    run "cd #{current_path}; bundle exec rake assets:precompile RAILS_ENV=production"
  end
end

before "deploy:restart", "assets:compile"
What this task does is, simply, delete any existing assets and run the rake command to compile them. The more interesting part is the hook I have placed at the bottom. You probably know what it does already thanks to Capistrano's eloquent DSL, but basically it hooks into the deploy task and runs our assets:compile task before restarting the application server. Also, by splitting it out into its own namespace, we can run it from the command line in isolation: cap assets:compile.
There is one last thing we need to do before deploying. Remember when we talked about the shared directory being a good place to keep sqlite databases? Well, the current config in our database.yml file is still using db/production.sqlite3. The simplest fix for this is to change this to /home/capdemo/shared/production.sqlite3. We commit that to our GitHub repo and run cap deploy:update_code and cap deploy:migrate. Now that we have the database persisting across our deploys, we can actually deploy the app, simply:
cap deploy
If you follow the output of the deployment you will see the compile task being executed before the restart task. Admittedly, the output does look incorrect, but if you take a bit of time to read it you will see it’s doing what we expect.
* executing `deploy:restart'
  triggering before callbacks for `deploy:restart'
* executing `assets:compile'
Writing your own tasks is a great way to learn what's going on under the hood with Capistrano. A couple of things to remember about tasks: they are written in plain old Ruby, so you have all the usual idioms available to you, and Capistrano gives you a good set of configuration variables such as current_path and shared_path. A full list has been compiled by a chap named Eric Davis.
To cement those points we will look at building just one more custom task. I enjoy looking at my deployment history, so using our knowledge of the shared directory and the Capistrano deployment process, we can build a log file of deployments.

namespace :deploy do
  task :log_deploy do
    date = DateTime.now
    run "cd #{shared_path}; touch REVISION_HISTORY"
    run "cd #{shared_path}; ( echo '#{date.strftime("%m/%d/%Y - %I:%M%p")} : Version #{latest_revision[0..6]} was deployed.' ; cat REVISION_HISTORY ) > rev_tmp && mv rev_tmp REVISION_HISTORY"
  end

  task :history do
    run "tail #{shared_path}/REVISION_HISTORY" do |ch, stream, out|
      puts out
    end
  end
end

after "deploy", "deploy:log_deploy"
From here on in, it's plain sailing for deployments. We just use cap deploy or cap deploy:migrations, depending on whether we need to update the db schema for a release. If we want to look at the deployment history, we just call cap deploy:history to get an output of the date, time and version of all our deployments.
Savoring Our Deployment
Hopefully now you have an appetite for rolling your own deployment with Capistrano. I tried to make this walkthrough as detailed as possible. I was reluctant at first with Capistrano as I wasn’t all that confident with my shell skills, and I was scared to lose human control. But that was just silly of me.
Not only did I save myself the pain of manual deployments, but I also got all the extra goodies Capistrano gives you, such as rollback capabilities, an automatic disabled page, and so on. There are still a great many features that come standard with Capistrano, the details of which can be found in the documentation. There are also plenty of recipes out there for you to borrow and build on.
Yes, we do have to invest a little more time developing our deployment than using something like Heroku. However, after writing a few recipes it will come as second nature and, maybe, a cheap VPS will start to look more attractive. The source is available on GitHub.
http://www.sitepoint.com/capified-painless-deployment-for-free/
{-# LANGUAGE DeriveDataTypeable #-} {-# LANGUAGE TupleSections #-} {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE OverloadedStrings #-} {-# OPTIONS_HADDOCK hide #-} module Network.Xmpp.Types ( IQError(..) , IQRequest(..) , IQRequestType(..) , IQResponse(..) , IQResult(..) , IdGenerator(..) , LangTag (..) , Message(..) , MessageError(..) , MessageType(..) , Presence(..) , PresenceError(..) , PresenceType(..) , SaslError(..) , SaslFailure(..) , ServerFeatures(..) , Stanza(..) , StanzaError(..) , StanzaErrorCondition(..) , StanzaErrorType(..) , StanzaId(..) , StreamError(..) , StreamErrorCondition(..) , Version(..) , XmppConMonad , XmppConnection(..) , XmppConnectionState(..) , XmppT(..) , XmppStreamError(..) , langTag , module Network.Xmpp.Jid ) where import Control.Applicative ((<$>), many) import Control.Exception import Control.Monad.IO.Class import Control.Monad.State.Strict import Control.Monad.Error import qualified Data.Attoparsec.Text as AP import qualified Data.ByteString as BS import Data.Conduit import Data.String(IsString(..)) import Data.Maybe (fromJust, fromMaybe, maybeToList) import Data.Text (Text) import qualified Data.Text as Text import Data.Typeable(Typeable) import Data.XML.Types import qualified Network as N import Network.Xmpp.Jid import System.IO -- | -- Wraps a string of random characters that, when using an appropriate -- @IDGenerator@, is guaranteed to be unique for the Xmpp session. data StanzaId = SI !Text deriving (Eq, Ord) instance Show StanzaId where show (SI s) = Text.unpack s instance Read StanzaId where readsPrec _ x = [(SI $ Text.pack x, "")] instance IsString StanzaId where fromString = SI . Text.pack -- | The Xmpp communication primities (Message, Presence and Info/Query) are -- called stanzas. data Stanza = IQRequestS !IQRequest | IQResultS !IQResult | IQErrorS !IQError | MessageS !Message | MessageErrorS !MessageError | PresenceS !Presence | PresenceErrorS !PresenceError deriving Show -- | A "request" Info/Query (IQ) stanza is one with either "get" or "set" as -- type. It always contains an xml payload. data IQRequest = IQRequest { iqRequestID :: !StanzaId , iqRequestFrom :: !(Maybe Jid) , iqRequestTo :: !(Maybe Jid) , iqRequestLangTag :: !(Maybe LangTag) , iqRequestType :: !IQRequestType , iqRequestPayload :: !Element } deriving Show -- | The type of IQ request that is made. data IQRequestType = Get | Set deriving (Eq, Ord) instance Show IQRequestType where show Get = "get" show Set = "set" instance Read IQRequestType where readsPrec _ "get" = [(Get, "")] readsPrec _ "set" = [(Set, "")] readsPrec _ _ = [] -- | A "response" Info/Query (IQ) stanza is either an 'IQError', an IQ stanza -- of type "result" ('IQResult') or a Timeout. data IQResponse = IQResponseError IQError | IQResponseResult IQResult | IQResponseTimeout deriving Show -- | The (non-error) answer to an IQ request. data IQResult = IQResult { iqResultID :: !StanzaId , iqResultFrom :: !(Maybe Jid) , iqResultTo :: !(Maybe Jid) , iqResultLangTag :: !(Maybe LangTag) , iqResultPayload :: !(Maybe Element) } deriving Show -- | The answer to an IQ request that generated an error. data IQError = IQError { iqErrorID :: !StanzaId , iqErrorFrom :: !(Maybe Jid) , iqErrorTo :: !(Maybe Jid) , iqErrorLangTag :: !(Maybe LangTag) , iqErrorStanzaError :: !StanzaError , iqErrorPayload :: !(Maybe Element) -- should this be []? } deriving Show -- | The message stanza. Used for /push/ type communication. 
data Message = Message { messageID :: !(Maybe StanzaId) , messageFrom :: !(Maybe Jid) , messageTo :: !(Maybe Jid) , messageLangTag :: !(Maybe LangTag) , messageType :: !MessageType , messagePayload :: ![Element] } deriving Show -- | An error stanza generated in response to a 'Message'. data MessageError = MessageError { messageErrorID :: !(Maybe StanzaId) , messageErrorFrom :: !(Maybe Jid) , messageErrorTo :: !(Maybe Jid) , messageErrorLangTag :: !(Maybe LangTag) , messageErrorStanzaError :: !StanzaError , messageErrorPayload :: ![Element] } deriving (Show) -- | The type of a Message being sent -- (<>) data MessageType = -- | The message is sent in the context of a one-to-one chat -- session. Typically an interactive client will present a -- message of type /chat/ in an interface that enables -- one-to-one chat between the two parties, including an -- appropriate conversation history. Chat -- | The message is sent in the context of a multi-user chat -- environment (similar to that of @IRC@). Typically a -- receiving client will present a message of type -- /groupchat/ in an interface that enables many-to-many -- chat between the parties, including a roster of parties -- in the chatroom and an appropriate conversation history. | GroupChat -- |). | Headline -- | The message. -- -- This is the /default/ value. | Normal deriving (Eq) instance Show MessageType where show Chat = "chat" show GroupChat = "groupchat" show Headline = "headline" show Normal = "normal" instance Read MessageType where readsPrec _ "chat" = [(Chat, "")] readsPrec _ "groupchat" = [(GroupChat, "")] readsPrec _ "headline" = [(Headline, "")] readsPrec _ "normal" = [(Normal, "")] readsPrec _ _ = [(Normal, "")] -- | The presence stanza. Used for communicating status updates. data Presence = Presence { presenceID :: !(Maybe StanzaId) , presenceFrom :: !(Maybe Jid) , presenceTo :: !(Maybe Jid) , presenceLangTag :: !(Maybe LangTag) , presenceType :: !(Maybe PresenceType) , presencePayload :: ![Element] } deriving Show -- | An error stanza generated in response to a 'Presence'. data PresenceError = PresenceError { presenceErrorID :: !(Maybe StanzaId) , presenceErrorFrom :: !(Maybe Jid) , presenceErrorTo :: !(Maybe Jid) , presenceErrorLangTag :: !(Maybe LangTag) , presenceErrorStanzaError :: !StanzaError , presenceErrorPayload :: ![Element] } deriving Show -- | @PresenceType@ holds Xmpp presence types. The "error" message type is left -- out as errors are using @PresenceError@. 
data PresenceType = Subscribe | -- ^ Sender wants to subscribe to presence Subscribed | -- ^ Sender has approved the subscription Unsubscribe | -- ^ Sender is unsubscribing from presence Unsubscribed | -- ^ Sender has denied or cancelled a -- subscription Probe | -- ^ Sender requests current presence; -- should only be used by servers Default | Unavailable deriving (Eq) instance Show PresenceType where show Subscribe = "subscribe" show Subscribed = "subscribed" show Unsubscribe = "unsubscribe" show Unsubscribed = "unsubscribed" show Probe = "probe" show Default = "" show Unavailable = "unavailable" instance Read PresenceType where readsPrec _ "" = [(Default, "")] readsPrec _ "available" = [(Default, "")] readsPrec _ "unavailable" = [(Unavailable, "")] readsPrec _ "subscribe" = [(Subscribe, "")] readsPrec _ "subscribed" = [(Subscribed, "")] readsPrec _ "unsubscribe" = [(Unsubscribe, "")] readsPrec _ "unsubscribed" = [(Unsubscribed, "")] readsPrec _ "probe" = [(Probe, "")] readsPrec _ _ = [] -- | All stanzas (IQ, message, presence) can cause errors, which in the Xmpp -- stream looks like <stanza-kind. These errors are -- wrapped in the @StanzaError@ type. -- TODO: Sender XML is (optional and is) not yet included. data StanzaError = StanzaError { stanzaErrorType :: StanzaErrorType , stanzaErrorCondition :: StanzaErrorCondition , stanzaErrorText :: Maybe (Maybe LangTag, Text) , stanzaErrorApplicationSpecificCondition :: Maybe Element } deriving (Eq, Show) -- | @StanzaError@s always have one of these types. data StanzaErrorType = Cancel | -- ^ Error is unrecoverable - do not retry Continue | -- ^ Conditition was a warning - proceed Modify | -- ^ Change the data and retry Auth | -- ^ Provide credentials and retry Wait -- ^ Error is temporary - wait and retry deriving (Eq) instance Show StanzaErrorType where show Cancel = "cancel" show Continue = "continue" show Modify = "modify" show Auth = "auth" show Wait = "wait" instance Read StanzaErrorType where readsPrec _ "auth" = [( Auth , "")] readsPrec _ "cancel" = [( Cancel , "")] readsPrec _ "continue" = [( Continue, "")] readsPrec _ "modify" = [( Modify , "")] readsPrec _ "wait" = [( Wait , "")] readsPrec _ _ = [] -- | Stanza errors are accommodated with one of the error conditions listed -- below. data StanzaErrorCondition = BadRequest -- ^ Malformed XML. | Conflict -- ^ Resource or session with -- name already exists. | FeatureNotImplemented | Forbidden -- ^ Insufficient permissions. | Gone -- ^ Entity can no longer be -- contacted at this -- address. | InternalServerError | ItemNotFound | JidMalformed | NotAcceptable -- ^ Does not meet policy -- criteria. | NotAllowed -- ^ No entity may perform -- this action. | NotAuthorized -- ^ Must provide proper -- credentials. | PaymentRequired | RecipientUnavailable -- ^ Temporarily unavailable. | Redirect -- ^ Redirecting to other -- entity, usually -- temporarily. | RegistrationRequired | RemoteServerNotFound | RemoteServerTimeout | ResourceConstraint -- ^ Entity lacks the -- necessary system -- resources. | ServiceUnavailable | SubscriptionRequired | UndefinedCondition -- ^ Application-specific -- condition. | UnexpectedRequest -- ^ Badly timed request. 
deriving Eq instance Show StanzaErrorCondition where show BadRequest = "bad-request" show Conflict = "conflict" show FeatureNotImplemented = "feature-not-implemented" show Forbidden = "forbidden" show Gone = "gone" show InternalServerError = "internal-server-error" show ItemNotFound = "item-not-found" show JidMalformed = "jid-malformed" show NotAcceptable = "not-acceptable" show NotAllowed = "not-allowed" show NotAuthorized = "not-authorized" show PaymentRequired = "payment-required" show RecipientUnavailable = "recipient-unavailable" show Redirect = "redirect" show RegistrationRequired = "registration-required" show RemoteServerNotFound = "remote-server-not-found" show RemoteServerTimeout = "remote-server-timeout" show ResourceConstraint = "resource-constraint" show ServiceUnavailable = "service-unavailable" show SubscriptionRequired = "subscription-required" show UndefinedCondition = "undefined-condition" show UnexpectedRequest = "unexpected-request" instance Read StanzaErrorCondition where readsPrec _ "bad-request" = [(BadRequest , "")] readsPrec _ "conflict" = [(Conflict , "")] readsPrec _ "feature-not-implemented" = [(FeatureNotImplemented, "")] readsPrec _ "forbidden" = [(Forbidden , "")] readsPrec _ "gone" = [(Gone , "")] readsPrec _ "internal-server-error" = [(InternalServerError , "")] readsPrec _ "item-not-found" = [(ItemNotFound , "")] readsPrec _ "jid-malformed" = [(JidMalformed , "")] readsPrec _ "not-acceptable" = [(NotAcceptable , "")] readsPrec _ "not-allowed" = [(NotAllowed , "")] readsPrec _ "not-authorized" = [(NotAuthorized , "")] readsPrec _ "payment-required" = [(PaymentRequired , "")] readsPrec _ "recipient-unavailable" = [(RecipientUnavailable , "")] readsPrec _ "redirect" = [(Redirect , "")] readsPrec _ "registration-required" = [(RegistrationRequired , "")] readsPrec _ "remote-server-not-found" = [(RemoteServerNotFound , "")] readsPrec _ "remote-server-timeout" = [(RemoteServerTimeout , "")] readsPrec _ "resource-constraint" = [(ResourceConstraint , "")] readsPrec _ "service-unavailable" = [(ServiceUnavailable , "")] readsPrec _ "subscription-required" = [(SubscriptionRequired , "")] readsPrec _ "unexpected-request" = [(UnexpectedRequest , "")] readsPrec _ "undefined-condition" = [(UndefinedCondition , "")] readsPrec _ _ = [(UndefinedCondition , "")] -- ============================================================================= -- OTHER STUFF -- ============================================================================= data SaslFailure = SaslFailure { saslFailureCondition :: SaslError , saslFailureText :: Maybe ( Maybe LangTag , Text ) } deriving Show data SaslError = SaslAborted -- ^ Client aborted. | SaslAccountDisabled -- ^ The account has been temporarily -- disabled. | SaslCredentialsExpired -- ^ The authentication failed because -- the credentials have expired. | SaslEncryptionRequired -- ^ The mechanism requested cannot be -- used the confidentiality and -- integrity of the underlying -- stream is protected (typically -- with TLS). | SaslIncorrectEncoding -- ^ The base64 encoding is incorrect. | SaslInvalidAuthzid -- ^ The authzid has an incorrect -- format or the initiating entity -- does not have the appropriate -- permissions to authorize that ID. | SaslInvalidMechanism -- ^ The mechanism is not supported by -- the receiving entity. | SaslMalformedRequest -- ^ Invalid syntax. | SaslMechanismTooWeak -- ^ The receiving entity policy -- requires a stronger mechanism. 
| SaslNotAuthorized -- ^ Invalid credentials provided, or -- some generic authentication -- failure has occurred. | SaslTemporaryAuthFailure -- ^ There receiving entity reported a -- temporary error condition; the -- initiating entity is recommended -- to try again later. instance Show SaslError where show SaslAborted = "aborted" show SaslAccountDisabled = "account-disabled" show SaslCredentialsExpired = "credentials-expired" show SaslEncryptionRequired = "encryption-required" show SaslIncorrectEncoding = "incorrect-encoding" show SaslInvalidAuthzid = "invalid-authzid" show SaslInvalidMechanism = "invalid-mechanism" show SaslMalformedRequest = "malformed-request" show SaslMechanismTooWeak = "mechanism-too-weak" show SaslNotAuthorized = "not-authorized" show SaslTemporaryAuthFailure = "temporary-auth-failure" instance Read SaslError where readsPrec _ "aborted" = [(SaslAborted , "")] readsPrec _ "account-disabled" = [(SaslAccountDisabled , "")] readsPrec _ "credentials-expired" = [(SaslCredentialsExpired , "")] readsPrec _ "encryption-required" = [(SaslEncryptionRequired , "")] readsPrec _ "incorrect-encoding" = [(SaslIncorrectEncoding , "")] readsPrec _ "invalid-authzid" = [(SaslInvalidAuthzid , "")] readsPrec _ "invalid-mechanism" = [(SaslInvalidMechanism , "")] readsPrec _ "malformed-request" = [(SaslMalformedRequest , "")] readsPrec _ "mechanism-too-weak" = [(SaslMechanismTooWeak , "")] readsPrec _ "not-authorized" = [(SaslNotAuthorized , "")] readsPrec _ "temporary-auth-failure" = [(SaslTemporaryAuthFailure , "")] readsPrec _ _ = [] -- The documentation of StreamErrorConditions is copied from -- data StreamErrorCondition = StreamBadFormat -- ^ The entity has sent XML that cannot be processed. | StreamBadNamespacePrefix -- ^ The entity has sent a namespace prefix that -- is unsupported, or has sent no namespace -- prefix on an element that needs such a prefix | StreamConflict -- ^ The server either (1) is closing the existing stream -- for this entity because a new stream has been initiated -- that conflicts with the existing stream, or (2) is -- refusing a new stream for this entity because allowing -- the new stream would conflict with an existing stream -- (e.g., because the server allows only a certain number -- of connections from the same IP address or allows only -- one server-to-server stream for a given domain pair as a -- way of helping to ensure in-order processing | StreamConnectionTimeout -- ^ One party is closing the stream because it -- has reason to believe that the other party has -- permanently lost the ability to communicate -- over the stream. | StreamHostGone -- ^ The value of the 'to' attribute provided in the -- initial stream header corresponds to an FQDN that is no -- longer serviced by the receiving entity | StreamHostUnknown -- ^ The value of the 'to' attribute provided in the -- initial stream header does not correspond to an FQDN -- that is serviced by the receiving entity. | StreamImproperAddressing -- ^ A stanza sent between two servers lacks a -- 'to' or 'from' attribute, the 'from' or 'to' -- attribute has no value, or the value violates -- the rules for XMPP addresses | StreamInternalServerError -- ^ The server has experienced a -- misconfiguration or other internal error that -- prevents it from servicing the stream. 
| StreamInvalidFrom -- ^ The data provided in a 'from' attribute does not -- match an authorized JID or validated domain as -- negotiated (1) between two servers using SASL or -- Server Dialback, or (2) between a client and a server -- via SASL authentication and resource binding. | StreamInvalidNamespace -- ^ The stream namespace name is something other -- than "" (see -- Section 11.2) or the content namespace declared -- as the default namespace is not supported (e.g., -- something other than "jabber:client" or -- "jabber:server"). | StreamInvalidXml -- ^ The entity has sent invalid XML over the stream to a -- server that performs validation | StreamNotAuthorized -- ^ The entity has attempted to send XML stanzas or -- other outbound data before the stream has been -- authenticated, or otherwise is not authorized to -- perform an action related to stream negotiation; -- the receiving entity MUST NOT process the offending -- data before sending the stream error. | StreamNotWellFormed -- ^ The initiating entity has sent XML that violates -- the well-formedness rules of [XML] or [XML‑NAMES]. | StreamPolicyViolation -- ^ The entity has violated some local service -- policy (e.g., a stanza exceeds a configured size -- limit); the server MAY choose to specify the -- policy in the \<text/\> element or in an -- application-specific condition element. | StreamRemoteConnectionFailed -- ^ The server is unable to properly connect -- to a remote entity that is needed for -- authentication or authorization (e.g., in -- certain scenarios related to Server -- Dialback [XEP‑0220]); this condition is -- not to be used when the cause of the error -- is within the administrative domain of the -- XMPP service provider, in which case the -- <internal-server-error/> condition is more -- appropriate. | StreamReset -- ^ The server is closing the stream because it has new -- (typically security-critical) features to offer, because -- the keys or certificates used to establish a secure context -- for the stream have expired or have been revoked during the -- life of the stream , because the TLS sequence number has -- wrapped, etc. The reset applies to the stream and to any -- security context established for that stream (e.g., via TLS -- and SASL), which means that encryption and authentication -- need to be negotiated again for the new stream (e.g., TLS -- session resumption cannot be used) | StreamResourceConstraint -- ^ The server lacks the system resources -- necessary to service the stream. | StreamRestrictedXml -- ^ he entity has attempted to send restricted XML -- features such as a comment, processing instruction, -- DTD subset, or XML entity reference | StreamSeeOtherHost -- ^ The server will not provide service to the -- initiating entity but is redirecting traffic to -- another host under the administrative control of the -- same service provider. | StreamSystemShutdown -- ^ The server is being shut down and all active -- streams are being closed. | StreamUndefinedCondition -- ^ The error condition is not one of those -- defined by the other conditions in this list | StreamUnsupportedEncoding -- ^ The initiating entity has encoded the -- stream in an encoding that is not supported -- by the server or has otherwise improperly -- encoded the stream (e.g., by violating the -- rules of the [UTF‑8] encoding). 
| StreamUnsupportedFeature -- ^ The receiving entity has advertised a -- mandatory-to-negotiate stream feature that the -- initiating entity does not support, and has -- offered no other mandatory-to-negotiate -- feature alongside the unsupported feature. | StreamUnsupportedStanzaType -- ^ The initiating entity has sent a -- first-level child of the stream that is not -- supported by the server, either because the -- receiving entity does not understand the -- namespace or because the receiving entity -- does not understand the element name for -- the applicable namespace (which might be -- the content namespace declared as the -- default namespace) | StreamUnsupportedVersion -- ^ The 'version' attribute provided by the -- initiating entity in the stream header -- specifies a version of XMPP that is not -- supported by the server. deriving Eq instance Show StreamErrorCondition where show StreamBadFormat = "bad-format" show StreamBadNamespacePrefix = "bad-namespace-prefix" show StreamConflict = "conflict" show StreamConnectionTimeout = "connection-timeout" show StreamHostGone = "host-gone" show StreamHostUnknown = "host-unknown" show StreamImproperAddressing = "improper-addressing" show StreamInternalServerError = "internal-server-error" show StreamInvalidFrom = "invalid-from" show StreamInvalidNamespace = "invalid-namespace" show StreamInvalidXml = "invalid-xml" show StreamNotAuthorized = "not-authorized" show StreamNotWellFormed = "not-well-formed" show StreamPolicyViolation = "policy-violation" show StreamRemoteConnectionFailed = "remote-connection-failed" show StreamReset = "reset" show StreamResourceConstraint = "resource-constraint" show StreamRestrictedXml = "restricted-xml" show StreamSeeOtherHost = "see-other-host" show StreamSystemShutdown = "system-shutdown" show StreamUndefinedCondition = "undefined-condition" show StreamUnsupportedEncoding = "unsupported-encoding" show StreamUnsupportedFeature = "unsupported-feature" show StreamUnsupportedStanzaType = "unsupported-stanza-type" show StreamUnsupportedVersion = "unsupported-version" instance Read StreamErrorCondition where readsPrec _ "bad-format" = [(StreamBadFormat , "")] readsPrec _ "bad-namespace-prefix" = [(StreamBadNamespacePrefix , "")] readsPrec _ "conflict" = [(StreamConflict , "")] readsPrec _ "connection-timeout" = [(StreamConnectionTimeout , "")] readsPrec _ "host-gone" = [(StreamHostGone , "")] readsPrec _ "host-unknown" = [(StreamHostUnknown , "")] readsPrec _ "improper-addressing" = [(StreamImproperAddressing , "")] readsPrec _ "internal-server-error" = [(StreamInternalServerError , "")] readsPrec _ "invalid-from" = [(StreamInvalidFrom , "")] readsPrec _ "invalid-namespace" = [(StreamInvalidNamespace , "")] readsPrec _ "invalid-xml" = [(StreamInvalidXml , "")] readsPrec _ "not-authorized" = [(StreamNotAuthorized , "")] readsPrec _ "not-well-formed" = [(StreamNotWellFormed , "")] readsPrec _ "policy-violation" = [(StreamPolicyViolation , "")] readsPrec _ "remote-connection-failed" = [(StreamRemoteConnectionFailed, "")] readsPrec _ "reset" = [(StreamReset , "")] readsPrec _ "resource-constraint" = [(StreamResourceConstraint , "")] readsPrec _ "restricted-xml" = [(StreamRestrictedXml , "")] readsPrec _ "see-other-host" = [(StreamSeeOtherHost , "")] readsPrec _ "system-shutdown" = [(StreamSystemShutdown , "")] readsPrec _ "undefined-condition" = [(StreamUndefinedCondition , "")] readsPrec _ "unsupported-encoding" = [(StreamUnsupportedEncoding , "")] readsPrec _ "unsupported-feature" = [(StreamUnsupportedFeature , "")] 
readsPrec _ "unsupported-stanza-type" = [(StreamUnsupportedStanzaType, "")] readsPrec _ "unsupported-version" = [(StreamUnsupportedVersion , "")] readsPrec _ _ = [(StreamUndefinedCondition , "")] data XmppStreamError = XmppStreamError { errorCondition :: !StreamErrorCondition , errorText :: !(Maybe (Maybe LangTag, Text)) , errorXML :: !(Maybe Element) } deriving (Show, Eq) data StreamError = StreamError XmppStreamError | StreamUnknownError -- Something has gone wrong, but we don't -- know what | StreamNotStreamElement Text | StreamInvalidStreamNamespace (Maybe Text) | StreamInvalidStreamPrefix (Maybe Text) | StreamWrongTo (Maybe Text) | StreamWrongVersion (Maybe Text) | StreamWrongLangTag (Maybe Text) | StreamXMLError String -- If stream pickling goes wrong. | StreamStreamEnd -- received closing stream tag | StreamConnectionError deriving (Show, Eq, Typeable) instance Exception StreamError instance Error StreamError where noMsg = StreamConnectionError -- ============================================================================= -- XML TYPES -- ============================================================================= -- | Wraps a function that MUST generate a stream of unique Ids. The -- strings MUST be appropriate for use in the stanza id attirubte. -- For a default implementation, see @idGenerator@. newtype IdGenerator = IdGenerator (IO Text) -- | XMPP version number. Displayed as "<major>.<minor>". 2.4 is lesser than -- 2.13, which in turn is lesser than 12.3. data Version = Version { majorVersion :: !Integer , minorVersion :: !Integer } deriving (Eq) -- If the major version numbers are not equal, compare them. Otherwise, compare -- the minor version numbers. instance Ord Version where compare (Version amajor aminor) (Version bmajor bminor) | amajor /= bmajor = compare amajor bmajor | otherwise = compare aminor bminor instance Read Version where readsPrec _ txt = (,"") <$> maybeToList (versionFromText $ Text.pack txt) instance Show Version where show (Version major minor) = (show major) ++ "." ++ (show minor) -- Converts a "<major>.<minor>" numeric version number to a @Version@ object. versionFromText :: Text.Text -> Maybe Version versionFromText s = case AP.parseOnly versionParser s of Right version -> Just version Left _ -> Nothing -- Read numbers, a dot, more numbers, and end-of-file. versionParser :: AP.Parser Version versionParser = do major <- AP.many1 AP.digit AP.skip (== '.') minor <- AP.many1 AP.digit AP.endOfInput return $ Version (read major) (read minor) -- | The language tag in accordance with RFC 5646 (in the form of "en-US"). It -- has a primary tag and a number of subtags. Two language tags are considered -- equal if and only if they contain the same tags (case-insensitive). data LangTag = LangTag { primaryTag :: !Text , subtags :: ![Text] } instance Eq LangTag where LangTag p s == LangTag q t = Text.toLower p == Text.toLower q && map Text.toLower s == map Text.toLower t instance Read LangTag where readsPrec _ txt = (,"") <$> maybeToList (langTag $ Text.pack txt) instance Show LangTag where show (LangTag p []) = Text.unpack p show (LangTag p s) = Text.unpack . Text.concat $ [p, "-", Text.intercalate "-" s] -- | Parses, validates, and possibly constructs a "LangTag" object. langTag :: Text.Text -> Maybe LangTag langTag s = case AP.parseOnly langTagParser s of Right tag -> Just tag Left _ -> Nothing -- Parses a language tag as defined by RFC 1766 and constructs a LangTag object. 
langTagParser :: AP.Parser LangTag langTagParser = do -- Read until we reach a '-' character, or EOF. This is the `primary tag'. primTag <- tag -- Read zero or more subtags. subTags <- many subtag AP.endOfInput return $ LangTag primTag subTags where tag :: AP.Parser Text.Text tag = do t <- AP.takeWhile1 $ AP.inClass tagChars return t subtag :: AP.Parser Text.Text subtag = do AP.skip (== '-') subtag <- tag return subtag tagChars :: [Char] tagChars = ['a'..'z'] ++ ['A'..'Z'] data ServerFeatures = SF { stls :: !(Maybe Bool) , saslMechanisms :: ![Text.Text] , other :: ![Element] } deriving Show data XmppConnectionState = XmppConnectionClosed -- ^ No connection at this point. | XmppConnectionPlain -- ^ Connection established, but not secured. | XmppConnectionSecured -- ^ Connection established and secured via TLS. deriving (Show, Eq, Typeable) data XmppConnection = XmppConnection { sConSrc :: !(Source IO Event) , sRawSrc :: !(Source IO BS.ByteString) , sConPushBS :: !(BS.ByteString -> IO Bool) , sConHandle :: !(Maybe Handle) , sFeatures :: !ServerFeatures , sConnectionState :: !XmppConnectionState , sHostname :: !(Maybe Text) , sJid :: !(Maybe Jid) , sCloseConnection :: !(IO ()) , sPreferredLang :: !(Maybe LangTag) , sStreamLang :: !(Maybe LangTag) -- Will be a `Just' value -- once connected to the -- server. , sStreamId :: !(Maybe Text) -- Stream ID as specified by -- the server. , sToJid :: !(Maybe Jid) -- JID to include in the -- stream element's `to' -- attribute when the -- connection is secured. See -- also below. , sJidWhenPlain :: !Bool -- Whether or not to also include the -- Jid when the connection is plain. , sFrom :: !(Maybe Jid) -- From as specified by the -- server in the stream -- element's `from' -- attribute. } -- | -- The Xmpp monad transformer. Contains internal state in order to -- work with Pontarius. Pontarius clients needs to operate in this -- context. newtype XmppT m a = XmppT { runXmppT :: StateT XmppConnection m a } deriving (Monad, MonadIO) -- | Low-level and single-threaded Xmpp monad. See @Xmpp@ for a concurrent -- implementation. type XmppConMonad a = StateT XmppConnection IO a -- Make XmppT derive the Monad and MonadIO instances. deriving instance (Monad m, MonadIO m) => MonadState (XmppConnection) (XmppT m)
http://hackage.haskell.org/package/pontarius-xmpp-0.1.0.2/docs/src/Network-Xmpp-Types.html
Formula
CreateFormula
InvalidFormulaException
The engine also has the ever-popular Evaluate method for when you quickly want to evaluate an expression. Let's try to evaluate the "mega" formula found here:
Evaluate
When an error is encountered during formula evaluation, an ErrorValueWrapper instance will be returned. This class wraps one of the seven Excel error values and allows you to get the specific error as well as format it.
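As a quick sketch of that check (VB; Evaluate and ErrorValueWrapper are the article's names, but the formula and the exact member usage here are my own illustration):

Dim result As Object = engine.Evaluate("=1/0")
If TypeOf result Is ErrorValueWrapper Then
    Console.WriteLine("Error: " & result.ToString())   ' e.g. #DIV/0!
Else
    Console.WriteLine("Value: " & result.ToString())
End If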
ErrorValueWrapper
ResultType
=A1
ReferenceFactory
Recalculate
' Create a reference to cell A1
Dim a1Ref As ISheetReference = engine.ReferenceFactory.Parse("A1")
' Recalculate all dependents of A1
engine.Recalculate(a1Ref)
FunctionLibrary
FormulaFunctionCall
FixedArgumentFormulaFunction
VariableArgumentFormulaFunction
Public Sub Hypotenuse(ByVal args() As Argument, ByVal result As FunctionResult, _
        ByVal engine As FormulaEngine)
End Sub
Explanation of the three arguments:
Argument
FunctionResult
FormulaEngine
<FixedArgumentFormulaFunction(2, New OperandType() {OperandType.Double, _
        OperandType.Double})> _
Public Sub Hypotenuse(ByVal args() As Argument, ByVal result As FunctionResult, _
        ByVal engine As FormulaEngine)
End Sub
Double.
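The article leaves the body of Hypotenuse empty; as a sketch of what it might contain (ValueAsPrimitive and SetValue mirror the Coalesce sample further down, so this is not the article's own implementation):

<FixedArgumentFormulaFunction(2, New OperandType() {OperandType.Double, _
        OperandType.Double})> _
Public Sub Hypotenuse(ByVal args() As Argument, ByVal result As FunctionResult, _
        ByVal engine As FormulaEngine)
    ' Both arguments arrive as Doubles thanks to the attribute above
    Dim a As Double = CDbl(args(0).ValueAsPrimitive)
    Dim b As Double = CDbl(args(1).ValueAsPrimitive)
    result.SetValue(Math.Sqrt(a * a + b * b))
End Sub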
=1.2 + sum(1,2,3)
=1,2 + sum(1;2;3)
=Offset(A1,1,1)
=Sum(A1:B2)
This article, along with any associated source code and files, is licensed under The GNU Lesser General Public License (LGPLv3)
FormulaEngine engine = new FormulaEngine();
INamedReference refB = engine.ReferenceFactory.Named("B");
Formula formulaB = engine.CreateFormula("1 + A");
engine.AddFormula(formulaB, refB);
INamedReference refA = engine.ReferenceFactory.Named("A");
Formula formulaA = engine.CreateFormula("10");
engine.AddFormula(formulaA, refA);
INamedReference refC = engine.ReferenceFactory.Named("C");
Formula formulaC = engine.CreateFormula("2 * B");
engine.AddFormula(formulaC, refC);
Console.WriteLine("Dependencies : " + Environment.NewLine + engine.Info.DependencyDump);
Console.WriteLine();
object res = formulaA.Evaluate();
Console.WriteLine("Formula A = " + res);
res = formulaB.Evaluate();
Console.WriteLine("Formula B = " + res);
res = formulaC.Evaluate();
Console.WriteLine("Formula C = " + res);
Console.ReadKey();
engine.Recalculate(refA);
foreach (INamedReference namedRef in engine.GetNamedReferences()) {
engine.Recalculate(namedRef);
}
public class Engine : FormulaEngine
{
    public void recalculateAll()
    {
        for (int x = 0; x < this.Sheets.Count; x++)
        {
            var currentSheet = this.Sheets.get_Item(x);
            // Single-letter column names only, so this handles up to 26 columns.
            var address = currentSheet.Name + "!A1:" +
                alphabet[currentSheet.ColumnCount - 1] + (currentSheet.RowCount - 1);
            ISheetReference refe = this.ReferenceFactory.Parse(address);
            this.Recalculate(refe);
        }
    }

    // The initializer was garbled in the original post; column letters A-Z assumed.
    public static string[] alphabet = { "A", "B", "C", "D", "E", "F", "G", "H", "I",
        "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X",
        "Y", "Z" };
}
Dim result As Object = target.Evaluate()
Me.Sheet.SetFormulaResult(result, MyRowIndex, MyColumnIndex)
MyValueOperand = target.EvaluateToOperand()
MyResult = target.Evaluate()
RaiseEvent Recalculated(Me, EventArgs.Empty)
cobb_michael wrote:Maybe each of these Reference objects should raise an event in OnFormulaRecalculate?
<VariableArgumentFormulaFunction()> _
Public Sub Coalesce(ByVal args As Argument(), ByVal result As FunctionResult, _
        ByVal engine As FormulaEngine)
    For i As Integer = 0 To args.Length - 1
        Dim arg As Argument = args(i)
        ' Get the argument's value
        Dim value As Object = arg.ValueAsPrimitive
        If Not value Is Nothing Then
            ' Set the function's result to the first non-null value and exit
            result.SetValue(value.ToString())
            Return
        End If
    Next
    ' All values are null; return empty string
    result.SetValue("")
End Sub
http://www.codeproject.com/Articles/17853/Implementing-an-Excel-like-formula-engine
    | 1 2 3 |
M = | 4 5 6 |
    | 7 8 9 |

+/ (reads Plus reduce) means the operator "+" is applied to the data. For a 1 dimensional vector, it just adds everything (to a scalar of rank 0). But it works on higher dimensions, too. The above would sum to:

| 12 15 18 |

If the input were a 3d matrix, the output would be a 2d matrix resulting from summing along the third dimension. The APL standard supports matrices up to at least 63 dimensions. So although it would be easy to implement sum for a 1-vector or a 2-matrix in any language, it is more complicated to handle X dimensions, yet the APL code is still just +/M -- so this problem was obviously posed by a SmugAplWeenie. :-) Python bites back :) How's this (generalised OO version):
import operator  # needed by r_mul; the import was missing in the original posting

class Matrix(list):
    def _reduce(self, f):
        try:
            return Matrix([f(x) for x in zip(*self)])
        except TypeError:
            try:
                return Matrix([Matrix(x)._reduce(f) for x in zip(*self)])
            except TypeError:
                return f(self)

    def r_sum(self):
        return self._reduce(sum)

    def r_mul(self):
        mul = lambda x: reduce(operator.mul, x)
        return self._reduce(mul)

a = Matrix([1,2,3])
b = Matrix([[1,2,3]])
c = Matrix([[1,2,3], [4,5,6], [7,8,9]])
d = Matrix([[[1,2,3], [4,5,6], [7,8,9]],
            [[10,11,12], [13,14,15], [16,17,18]],
            [[19,20,21], [22,23,24], [25,26,27]]])

>>> print a.r_sum()
6
>>> print b.r_sum()
[1, 2, 3]
>>> print c.r_sum()
[12, 15, 18]
>>> print d.r_sum()
[[30, 33, 36], [39, 42, 45], [48, 51, 54]]

It'll hit an upper limit when it runs out of stack, but should be able to handle a 64d matrix
perl -e 'while($l++<99){$_.=x;print $l,$/if! /^x$|^(xx+)\1+$/}'

In Python:

import re
for i in range(1,100):
    if not re.match(r"^x$|^(xx+)\1+$", "x"*i):
        print i

Or if you want to be a little bit more Pythonic (and faster too):

def primes(low, high):
    m = re.compile(r"^x$|^(xx+)\1+$")
    primes = [str(i) for i in range(low, high+1) if not m.match("x"*i)]
    print "\n".join(primes)
#!/usr/local/bin/perl -w
use strict;
use Tie::File;

my $file = shift;
my @array;
tie @array, 'Tie::File', $file or die "$file can't be opened:$!\n";

$array[1] = 'blah';              # line 2 of the file is now 'blah'
print "[" . $array[2] . "]\n";   # display line 3 of the file
push( @array, "new line" );      # add a line to the file

Here's what this example does.

$ cat junk
1 SimpleType.pm
2 badshebang.pl
3 bitshift.pl
4 cat
$ ./tie_array.pl junk
[3 bitshift.pl]
$ cat junk
1 SimpleType.pm
blah
3 bitshift.pl
4 cat
new line

Basically I am asking what is the Python equiv. of Tie::File

Python:

#! /usr/local/bin/python

class Tie(list):
    def __init__(self, filename):
        self.f = open(filename, 'r+')
        list.__init__(self, [line[:-1] for line in self.f])
        self.f.seek(0)

    def close(self):
        if self.f:
            for line in self:
                print >> self.f, line
            self.f.close()
            self.f = None

    def __del__(self):
        self.close()

if __name__ == '__main__':
    import sys
    fn = sys.argv[1]
    array = Tie(fn)
    array[1] = 'blah'
    print "[%s]" % array[2]
    array.append("newline")
import traceback

def log(*args):
    caller = traceback.extract_stack()[-2]
    print "%s:%d: %s" % (caller[0], caller[1], ''.join(str(a) for a in args))

And using caller[2] would give you the function name.
class MyObject {
    public MyObject() {
        System.out.println("uno");
    }
}

class MyClass {
    private int a;
    private MyObject my;

    public MyClass(int a) {
        this.a = a;
    }

    public MyClass(MyObject my) {
        this.my = my;
    }
}

class MyTest {
    public static void main(String args[]) {
        MyClass mc1 = new MyClass(42);
        MyClass mc2 = new MyClass(new MyObject());
    }
}

You don't. Python uses DynamicTypes and thus doesn't offer any kind of "overloading" based on argument type. The way this is typically done in Python:

class MyObject:
    def __init__(self):
        print "UNO"

class MyClass:
    pass

def MyClassWithInt(a):
    new_instance = MyClass()
    new_instance.a = a
    return new_instance

def MyClassWithObject(my):
    new_instance = MyClass()
    new_instance.my = my
    return new_instance

if __name__ == '__main__':
    mc1 = MyClassWithInt(42)
    mc2 = MyClassWithObject(MyObject())

You could also use class methods as factories instead of the module-level functions used here, but there's no practical difference for this contrived example.
#include <stdio.h>

int main(void)
{
    char name[30];
    printf("What's your name?");
    scanf("%s", name);
    printf("Hello, %s\n", name);
    return 0;
}

In Python:

name = raw_input("What's your name? ")
print "Hello, %s" % name

(Note that the Python version does not suffer from the BufferOverflow bug that the C version has.)
/* pretty sure this code would compile */
#include <iostream.h>

main()
{
    int arr[] = { 1,2,3,4,5,6,7,8,9,10 };
    cout << "This is my array, watch as I output it";
    for (int i = 0; i < 10; i++) {
        cout << arr[i];
    }
    cout << "\nThis is my c++ example code to output an array, I wonder what it would be in python?";
}

-- camthompson@shaw.ca

arr = range(1,11)  # alternatively: arr = [ 1,2,3,4,5,6,7,8,9,10 ]
print "This is my array, which I will output."
for i in arr:
    print i,
print "\nNot so different, just a bit simpler, isn't it?"

If you don't mind Python's list output style, you can avoid the loop:

arr = range(1,11)
print 'This is my array, which I will output.'
print arr
print "Even simpler, isn't it?"

If you do mind Python's list output style, you can convert and trim the edges:

arr = range(1,11)
print 'This is my array, which I will output.'
print str(arr)[1:-1]
print "This is a bit of a cheat."

Or more obscurely:

arr = range(1,11)
print 'This is my array, which I will output.'
print ' '.join([str(i) for i in arr])
print "Rather different, isn't it?"

Or even more obscurely:

arr = range(1,11)
print 'This is my array, which I will output.'
print 10*"%s "%tuple(arr)
print "TMTOWTDI but not all are equally good."
#
# draw the pegs on the board based on the information
# contained in the board object
#
# dx, dy, radius, units are global vars
#
# $can is a Tk::Canvas object
sub placePegs {
    my $can = shift;
    my $board = shift;
    my $hole = 0;
    my $tag;
    my $radius = 10;

    $tag = "HOLE_$hole";
    $can->create(oval => $dx*($units/2)-$radius, $dy-$radius,
                         $dx*($units/2)+$radius, $dy+$radius,
                 -fill => $board->{'holes'}[$hole]->{'peg'},
                 -tag  => [$tag] );
    $can->bind( $tag, '<Button>', [\&selectPeg, $hole] );
    $hole++;
}

In Python:

#
# draw the pegs on the board based on the information
# contained in the board object
#
def placePegs(can, board, width, height):
    hole = 0
    radius = 10
    units = 10
    dx = width / units
    dy = height / units

    tag = "HOLE_" + str(hole)
    item = can.create_oval(dx*(units/2)-radius, dy-radius,
                           dx*(units/2)+radius, dy+radius,
                           fill='white',  # $board->{'holes'}[$hole]->{'peg'}
                           tag=tag)
    # for some reason I have to pass this e thing in.
    can.tag_bind(tag, '<Button>', lambda e, h=hole: selectPeg(e, h))
    hole = hole + 1

mailto:sheldonplankton@yahoo.com

"e" gets the Event object. The Perl program throws it away in the linked page; that's what that first "shift" in selectPeg does. Have you read ?
(defun @eval (exp env cont) (cond ((numberp exp) (funcall cont exp)) ((stringp exp) (funcall cont exp)) ((symbolp exp) (@lookup exp env cont)) ((eq (first exp) 'LAMBDA) (funcall cont (list 'CLOSURE (second exp) (rest (rest exp)) env))) ((eq (first exp) 'IF) (@eval (second exp) env #'(lambda (test) (@eval (cond (test (second exp)) (t (third exp))) env cont)))) ((eq (first exp) 'LETREC) (let ((newenv (pairlis (mapcar #'first (second exp)) (make-list (length (second exp))) env))) (@evletrec (second exp) newenv (third exp) newenv cont))) (t (@eval (first exp) env #'(lambda (fn) (@evlis (rest exp) env #'(lambda (args) (@apply fn args cont)))))))) (defun @lookup (name env cont) (cond ((null env) (funcall cont name)) ((eq (car (first env)) name) (funcall cont (cdr (first env)))) (t (@lookup name (rest env) cont)))) (defun @evlis (exps env cont) (cond ((null exps) (funcall cont '())) (t (@eval (first exps) env #'(lambda (arg) (@evlis (rest exps) env #'(lambda (args) (funcall cont (cons arg args))))))))) (defun @evletrec (bindings slots body env cont) (cond ((null bindings) (@eval body env cont)) (t (@eval (second (first bindings)) env #'(lambda (fn) (rplacd (first slots) fn) ;the side effect that "ties the knot" (@evletrec (rest bindings) (rest slots) body env cont)))))) (defun @apply (fn args cont) (cond ((eq fn '+) (funcall cont (+ (first args) (second args)))) ((eq fn '*) (funcall cont (* (first args) (second args)))) ((eq fn 'print) (princ (first args)) (fresh-line) (funcall cont (first args))) ((eq fn 'call/cc) (@apply (first args) (list (list 'CONTINUATION cont)) cont)) ((atom fn) (funcall cont 'UNDEFINED-FUNCTION)) ((eq (first fn) 'CLOSURE) (@evlis (third fn) (pairlis (second fn) args (fourth fn)) #'(lambda (vals) (funcall cont (first (last vals)))))) ((eq (first fn) 'CONTINUATION) (funcall (second fn) (first args))) (t (funcall cont 'UNDEFINED-FUNCTION))))This implements a ContinuationPassingStyle interpreter for a Lisp-like language that has only arithmetic and call/cc. It's in CommonLisp, though most of the functions have equivalents in Scheme (and hopefully in Python too). Steele's original posting is here:. I'm only half doing this to be a smartass, BTW. I'm hoping to write an interpreter for a language with CallWithCurrentContinuation in the near future, and I'd rather write it in Python than Scheme. But it looks like Scheme (or CL) will be the path of least resistance at this point. -- JonathanTang see, which is a little old as you'll notice that lambdas in Python now close... Without actually translating the code, perhaps using something along these lines (plus possibly a generator - "yield")?:
def add(x, y, c):
    c(x+y)

def mul(x, y, c):
    c(x*y)

def print_and_stop(val):
    print val

def myfunc(x, y, c):
    if x > 20:
        mul(x, y, c)
    else:
        add(x, 1, lambda z: myfunc(z, y, c))

myfunc(0, 7, print_and_stop)

Be aware that if you plan to use heavy recursion you MUST use StacklessPython.
test :- diff(a*x^i+b*x^j+c*x^k,x,Dx), write(Dx).
% should output i*a*x^(i-1)+j*b*x^(j-1)+k*c*x^(k-1)

diff(A+B,X,DA+DB) :- diff(A,X,DA), diff(B,X,DB).
diff(A*X^N,X,N*A*X^(N-1)).

I guess this one is too hard for python (at least in comparable number of lines) It's not so much being too hard, it's just that it hasn't been written. Here's a proof of concept. It's so fragile it'll break if you breathe on it, and the remainder is left as an exercise for the reader... ;-) [permission is given to incorporate this code in any product using an OpenSourceLicence]
class Var(object):
    def __init__(self, name, multiplier=1, power=1, adds=None):
        self.name = name
        self.multiplier = multiplier
        self.power = power
        if not adds:
            adds = []
        self.adds = adds

    def __add__(self, add):
        if add == 0:
            return self
        return Var(self.name, self.multiplier, self.power, self.adds + [add])

    def __mul__(self, multiplier):
        if multiplier == 0:
            return 0
        return Var(self.name, multiplier, self.power, self.adds)

    # needed so literals on the left (e.g. 3*x) work; implicit in the original
    __rmul__ = __mul__

    def __pow__(self, power):
        if power == 0:
            return self.multiplier
        return Var(self.name, self.multiplier, power, self.adds)

    def diff(self):
        multiplier = self.multiplier * self.power
        power = self.power - 1
        out = multiplier * Var(self.name) ** power
        for elem in self.adds:
            try:
                out += elem.diff()
            except AttributeError:
                pass
        return out

    def __str__(self):
        out = ""
        if self.multiplier != 0:
            if self.multiplier != 1:
                out = "%s*" % self.multiplier
            if self.power != 0:
                out += self.name
                if self.power != 1:
                    out += "**%s" % self.power
        adds = [str(elem) for elem in self.adds]
        out = [out] + [add for add in adds if add]
        return " + ".join(out)

x = Var("x")
z = 3*x**3 + 2*x**2 + 1*x**1 + 4*x**0
print z
print z.diff()

Output is:

3*x**3 + 2*x**2 + x + 4
9*x**2 + 4*x + 1

--TaroOgawa
class Food:
    def __getattr__(self, name):
        # If we don't have it
        method = getattr(self.state, name)
        return lambda *args: method(*((self,) + args))

class Floater:
    def move(f, x):
        print 'Moving: %i' % x
    move = staticmethod(move)  # not a decorator since I can't use 2.4 at the moment

food = Food()
food.state = Floater
food.move(4)  # works

This seems to do the job, but is there a 'cleaner' way?
class State(object):
    def __getattr__(self, name):
        # If we don't have it
        try:
            method = getattr(self.state, name)
        except AttributeError, msg:
            classname = repr(self.__class__).split(".")[-1][:-2]
            raise AttributeError("In class %s, state %s" % (classname, msg))
        return method

class Food(State):
    pass

class Floater(object):
    def move(x):
        print 'Moving: %i' % x
    move = staticmethod(move)  # Use a decorator in 2.4

food = Food()
food.state = Floater
food.move(4)  # works
food.blah(8)

But a better way is just to swap the class:
class State(object):
    def get_state(self):
        return self.__class__
    def set_state(self, state):
        self.__class__ = state
    state = property(get_state, set_state)

class Person(State):
    pass

class Runner(State):
    def move(self, x):
        print 'Moving: %i' % x

class Sitter(State):
    def move(self, x):
        print 'Not moving: %i' % x

person = Person()
person.state = Runner
person.move(4)
person.state = Sitter
person.move(4)

Another thing to question is whether you need StatePattern as python can swap its functions at runtime:
class Person(object):
    def move_standing(self, x):
        print 'Moving: %i' % x
    def move_sitting(self, x):
        print 'Not moving: %i' % x
    def move(self, x):
        return self.move_sitting(x)
    def stand(self):
        self.move = self.move_standing
    def sit(self):
        self.move = self.move_sitting

person = Person()
person.move(4)
person.stand()
person.move(6)
person.sit()
person.move(8)

-- TaroOgawa

Thanks, that's very useful and interesting information.
http://c2.com/cgi/wiki?PythonTranslator
java.lang.Object
org.jboss.cache.aop.AOPInstance
public class AOPInstance
Wrapper type for cached AOP instances. When an object is looked up or put in TreeCacheAOP, it will be advised with a CacheInterceptor. The tree cache stores a reference to this object (for example, to update the instance variables). Since this reference needs to be transactional but never replicated (it is only valid within the VM), it is wrapped in an AOPInstance. In addition, this instance also serves as metadata for PojoCache; e.g., it has a reference count for multiple references and a reference FQN.
public static final java.lang.Object KEY
public static final int INITIAL_COUNTER_VALUE
protected transient java.lang.Object instance_
protected java.lang.String refFqn_
protected int refCount_
protected java.util.List referencingFqnList_
public AOPInstance()
public AOPInstance(java.lang.Object instance)
public AOPInstance copy()
http://docs.jboss.org/jbosscache/1.4.1.SP4/api/org/jboss/cache/aop/AOPInstance.html
Hi Bob,
That makes a lot of sense. Many of our objects cannot be instantiated
without arguments to the Constructors, so I've made sure that all of
them were set up previously and that the Form.copyTo wouldn't come
across anything null.
Form.copyTo() should only be coming across simple single-argument
setters. The object graph (past the Constructors) is very
straightforward. For the life of me I cannot see what's choking
Form.copyTo()...
If I come up with something brilliant, I'll let you know. :-)
Thanks again!
Alvin
On Sun, Oct 19, 2008 at 2:58 PM, Bob Schellink <sabob1@gmail.com> wrote:
> Hi Alvin,
>
> Alvin Townsend wrote:
>>
>> java.lang.IllegalArgumentException: wrong number of
>> net.sf.click.util.ContainerUtils.ensureObjectPathNotNull(ContainerUtils.java:592)
>> at
>> net.sf.click.util.ContainerUtils.copyContainerToObject(ContainerUtils.java:318)
>> at
>> net.sf.click.util.ContainerUtils.copyContainerToObject(ContainerUtils.java:355)
>> at net.sf.click.control.Form.copyTo(Form.java:1710)
>>
>
> This normally happens when an object is instantiated which does not have a
> default empty constructor.
> When you specify a path for your field e.g. new TextField("address.price"),
> Form.copyTo will navigate the
> object graph according to the path. Say you have the following:
>
> field = new TextField("address.price")
> ...
> form.copyTo(client);
>
> the copy logic will try and navigate to the Address from Client. But if
> Address is null on Client Click
> attempts to create a new Address. But the only way this works is if Address
> has a default no arg constructor:
>
> public class Address {
> public Address() {}
> }
>
> If your Address cannot have a no-arg constructor the best way to resolve
> this is to ensure your domain
> objects are valid before invoking form.copyTo(). You could do this by
> overriding Form.copyTo:
>
> public void onInit() {
>     ...
>     Form form = new Form("form") {
>         public void copyTo(Object object) {
>             Client client = (Client) object;
>             if (client.getAddress() == null) {
>                 client.setAddress(createAddress());
>             }
>             super.copyTo(object);
>         }
>     };
> }
>
> Does this help?
>
> kind regards
>
> bob
>
>
http://mail-archives.apache.org/mod_mbox/click-user/200810.mbox/%3Cb61865290810191230n63cafe8fu66389bd965f1e3d0@mail.gmail.com%3E
Best Practices for Getting and Setting Properties
Keep in mind the following best practices recommendations for getting and setting values for properties:
Reference a property directly off the parent object to get and set explicit built-in properties of item objects, for example, MailItem.Subject.
Use ItemProperties and ItemProperty to enumerate explicit built-in properties and custom properties, and get and set custom properties for items (except for DocumentItem object).
Use UserProperties and UserProperty to enumerate, get, and set custom properties for items (except for the DocumentItem object).
To get or set multiple custom properties, use the PropertyAccessor object instead of the UserProperties object for better performance.
To create or access custom properties, use the MAPI string namespace for convenience over the MAPI proptag or id namespace. Use the GUID of your add-in as the namespace GUID.
When referencing properties by namespaces, be aware that such references are case-sensitive. For example, while urn:schemas:contacts:givenName is a valid namespace, urn:schemas:contacts:givenname is not.
To get or set multiple properties, use PropertyAccessor.GetProperties and PropertyAccessor.SetProperties, as opposed to repeated PropertyAccessor.GetProperty and PropertyAccessor.SetProperty, for better performance.
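For instance, a minimal VBA sketch of the batched form (mail is assumed to be an existing MailItem; the two DAV schema names are just examples):

Dim pa As Outlook.PropertyAccessor
Dim schemas(1) As Variant
Dim values As Variant
Set pa = mail.PropertyAccessor
schemas(0) = "urn:schemas:httpmail:subject"
schemas(1) = "urn:schemas:httpmail:datereceived"
values = pa.GetProperties(schemas)  ' one call instead of two GetProperty calls
Debug.Print values(LBound(values)), values(LBound(values) + 1)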
To have the CustomPropertyChange event fire when the value of an item-level custom property changes, the custom property must be in the item's UserProperties collection. An item-level property added implicitly by SetProperty or SetProperties does not automatically become part of the item's UserProperties collection. An explicit UserProperties.Add is required to include it.
To set for the first time a property created by the UserProperties.Add method, use the UserProperty.Value property instead of the SetProperties and SetProperty methods of the PropertyAccessor object.
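A short VBA sketch of that sequence (the property name and value are illustrative):

Dim up As Outlook.UserProperty
Set up = mail.UserProperties.Add("ProjectCode", olText)
up.Value = "APOLLO"   ' the first assignment goes through UserProperty.Value
mail.Save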
This section describes the best practices for saving properties on an object:
For item objects, calling the item's Save method to save the item to the current folder also saves its properties on the item.
For non-item-level objects that do not have a Save method (AddressList, Folder, Recipient, and Store), calling PropertyAccessor.DeleteProperty, PropertyAccessor.DeleteProperties, SetProperty, or SetProperties will implicitly save the properties on the object.
This section describes the best practices for keeping type conversion simple when using the PropertyAccessor to get and set properties. For definitions of MAPI property types such as PT_SYSTIME, see Property Types.
Although most Outlook date-time values are stored in Coordinated Universal Time (UTC) format, there is no guarantee that all properties of the MAPI type PT_SYSTIME will always return UTC. Getting a PT_SYSTIME property will return a VT_DATE value. When setting a PT_SYSTIME property, ensure that you are setting the property as a UTC value rather than a local date-time value. The GetProperty, SetProperty, GetProperties, and SetProperties methods do not perform time zone conversion. Use the helper methods PropertyAccessor.LocalTimeToUTC and PropertyAccessor.UTCToLocalTime to perform explicit time zone conversion.
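A VBA sketch of the conversion helpers in use (the string-namespace GUID below is the public-strings GUID, standing in for your add-in's own GUID):

Dim pa As Outlook.PropertyAccessor
Dim schema As String
Set pa = mail.PropertyAccessor
schema = "http://schemas.microsoft.com/mapi/string/" & _
         "{00020329-0000-0000-C000-000000000046}/MyDeadline"
pa.SetProperty schema, pa.LocalTimeToUTC(Now)          ' store as UTC
Debug.Print pa.UTCToLocalTime(pa.GetProperty(schema))  ' read back as local time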
A multi-valued property (Microsoft Visual Basic type VT_ARRAY) is stored as a two-dimensional array that contains the same number of elements as are there are values in the property. Getting a multi-valued property will return a VT_ARRAY value. When setting a multi-valued property, pass a two-dimensional array (VT_ARRAY) with one element for each value that you want to set for the property.
A binary property (MAPI type PT_BINARY) is stored as an array of bytes rather than a string. Getting a binary property will return a value of type VT_ARRAY. The GetProperty, SetProperty, GetProperties, and SetProperties methods do not perform any conversion between binary array and string. Use the helper methods PropertyAccessor.BinaryToString and PropertyAccessor.StringToBinary to explicitly perform any conversion.
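For example, a VBA sketch reading PR_ENTRYID (proptag 0x0FFF, type 0102 = PT_BINARY) and formatting it:

Dim pa As Outlook.PropertyAccessor
Dim entryId As Variant
Set pa = mail.PropertyAccessor
entryId = pa.GetProperty("http://schemas.microsoft.com/mapi/proptag/0x0FFF0102")
Debug.Print pa.BinaryToString(entryId)  ' hex string form of the binary value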
Certain MAPI property types, such as PT_OBJECT, are not supported by the PropertyAccessor. Attempting to get or set such properties will result in a "property operation not supported" error.
When getting or setting a property using a reference in the MAPI proptag namespace, make sure that the type specified in the proptag matches the underlying type of the property. Except for the case of a PT_STRING8 property where you can specify either a type of 001E or 001F in the proptag to get or set the property as a VT_BSTR, getting or setting a property does not involve any type coercion and an error will be returned if there is a type mismatch.
When setting a property, it may be less restrictive to use a property reference in the MAPI string namespace than one in the MAPI proptag namespace. Specifying the property in the MAPI string namespace does not strictly require the value to match the underlying type of the property. For example, you can pass a string value like VT_BSTR to set a date-time property such as PT_SYSTIME, and the type of the property becomes the type of the value, which is VT_BSTR.
http://msdn.microsoft.com/en-us/library/ff869735(v=office.15).aspx
Understanding Struts Action Class
In this lesson I will show you how to use Struts Action... on
the user browser.
In this lesson you learned how to create Action Class
and add
Login Action Class - Struts
Login Action Class Hi,
Can anyone please give me an example of how a Struts login Action class communicates with iBATIS?
In Struts What is Model?
This tutorial explains what the Model is in Struts.... In the Action class itself you can call the methods of
application service to retrieve and save the data into database.
Thus in Struts Action class plays
action tag - Struts
action tag Is possible to add parameters to a struts 2 action tag? And how can I get them in an Action Class. I mean: xx.jsp Thank
Writing model classes in struts2.2.1
Creating model classes
StudentAdmissionModel.java -
Model class is a java class(POJO). It has property and method. The
method name... java.util.List;
public class StudentAdmissionModel implements Serializable
Single thread model in Struts - Struts
for that Action. The singleton strategy restricts to Struts 1 Actions and requires...Single thread model in Struts
Hi Friends,
Can u acheive singleThreadModel , ThreadSafe in Struts
if so plx Action Class
Struts Action Class What happens if we do not write execute() in Action class
action. In this article we will see how to achieve this. Struts provides four...STRUTS ACTION - AGGREGATING ACTIONS IN STRUTS... are a Struts developer then you might have experienced the pain of writing huge number
servlet action not available - Struts
servlet action not available hi
i am new to struts and i am....
Struts Blank Application
action
org.apache.struts.action.ActionServlet
config
/WEB-INF/struts-config.xml
2
action
*.do
Struts 2 Interceptors
part of Struts 2
default stack and are executed in a specific order... are pluggable, which means that you can decide which
features an Action needs... action implements ModelDriven, Model
Driven Interceptor adds
Struts Dispatch Action Example
function. Here in this example
you will learn more about Struts Dispatch Action... Struts Dispatch Action Example
Struts Dispatch Action
password action requires user name and passwords same as you had entered
during...
The password forgot Action is invoked
Create Action class
an action
class you need to extend or import the Action classes or interface... the mapping of Action classes are done
into the struts.xml file. You will learn how...Create Action Class
An action is an important portion of web application
Part I
Object Model (DOM)
1. Reading XML Data into a DOM
Part IV. Using...Part I. Understanding XML
A1. Understanding XML :
Learn XML... additional information about elements.
XML:Validation
How a DTD is used
spring Model class
spring Model class how to connect database in spring MVC ,sample model class and diplay results from database
Have a look at the following link:
How Struts 2 Framework works?
How Struts 2 Framework works?
This tutorial explains you the working.... In this tutorial you will learn How Struts 2 works with the
help of an easy....
Controller maps the user request to specific action. In
Struts
The Beginners Guide to JAXB part 3
of SampleApp9)
This sample application illustrates how a choice model group is bound...)
This sample application demonstrates how to use the ObjectFactory class to
create... part of SampleApp9)
Another binding customization example that illustrates how
Chain Action Result Example
;
</action>
<action name="doLogin" class="...;
<action...Chain Action Example
Struts2.2.1 provides the feature to chain many
Jakarta Struts Interview Questions
?
A: The Action is part of the controller. The purpose of Action Class...?
A: Jakarta Struts is open source implementation of MVC
(Model-View-Controller... Struts Framework this class plays the
role of controller. All the requests
how to forward select query result to jsp page using struts action class
how to forward select query result to jsp page using struts action class how to forward select query result to jsp page using struts action class
User Registration Action Class and DAO code
User Registration Action Class and DAO code... to write code for action class and code for performing database operations (saving data into database).
Developing Action Class
The Action Forward Action Example
..
Here in this example
you will learn more about Struts Forward Action... an Action Class
Developing the Action Mapping in the struts-config.xml...:link>
<br>
Example shows you how to use forward class
Redirect Action Result Example
the action to the specified
location you need to do mapping in the struts.xml as follows
<action name="redirectAction" class="...;/result>
</action>
<action name="doLogin" class="
configuration - Struts
configuration Can you please tell me the clear definition of Action class,ActionForm,Model in struts framework.
What we will write in each....
Action class:
An Action class in the struts application extends Struts
Struts Tutorials
Struts application, specifically how you test the Action class.
The Action class... Tutorial
This complete reference of Jakarta Struts shows you how to develop Struts... v5.0.2.2.
Adding Spice to Struts - Part 2
This time, we started looking at how
Struts dispatch action - Struts
Struts dispatch action i am using dispatch action. i send the parameter="addUserAction" as querystring.ex:
at this time it working fine... not contain handler parameter named 'parameter'
how can i overcome
Model in struts
Model in struts what is a model in struts
Struts Tutorial
to the advance concepts of struts. At Roseindia you will learn the Basic Model View...
Struts Architecture
How Struts Works?
Struts Controller
Struts Action... and helps in routing of the application flow.
In this Struts tutorial, you will learn... in this method. When an
action is called the execute method is executed. You can... the following Action class by implementing Action
interface.
TestAction.java
Struts 2 Redirect Action
Struts 2 Redirect Action
In this section, you will get familiar with struts 2 Redirect
action.... You can see a simple implementation
of this in the following struts 2
How Struts Works
How Struts Works
..., and maintain. Struts is purely based on the Model-
View- Contoller (MVC) design... it
into memory in the init() method. You will know more about the Struts
Understanding Struts Controller
part
of the Struts Framework. I will show you how to configure the struts.... It is the Controller part of the Struts
Framework. ActionServlet is configured... to Welcome.jsp
The "Action Mapping Definitions" is the most important part
Struts - Framework
Struts Good day to you Sir/madam,
How can i start...,
Struts :
Struts Frame work is the implementation of Model-View-Controller (MVC) design pattern for the JSP. Struts is maintained as a part of Apache Jakarta
Configuring Actions in Struts application
Configuring Actions in Struts Application
To Configure an action in struts application, at first write a simple Action
class such as
SimpleAction.java... Action class which returns the success. Now Write the
following code
Struts - Struts
Struts Is Action class is thread safe in struts? if yes, how it is thread safe? if no, how to make it thread safe? Please give me with good... safe. You can make it thread safe by using only local variables, not instance
Struts2.2.1 Action Tag Example
class directly from a JSP page.
We can call action directly by specifying... the results from the Action.
The following Example will shows how to implement...;action
name="ActionTag"
class="roseindia.ActionTag
What is Struts - Struts Architecturec
, it
gives the handling of the request to the Action class. Action class is a part... the interactive form based applications with server pages.
Struts provides you... from view and passes it to the
model for the appropriate action. After the action
DispatchAction class? - Struts
DispatchAction class? HI, Which is best and why either action class or dispatch class. like that Actionform or Dynactionform . I know usage.../understandingstruts_action_class.shtml
class
medals. In this class, you should also define constructors, and assessor, mutator
methods.
Task 2
MedalTally.java is a class to model a medal tally, containing...class can any body give me idea how to write code for
Country
Introduction to Action interface
Introduction To Struts Action Interface
The Action interface contains the a single method execute(). The business
logic of the action is executed within... from user.
NONE- If the execution of action is successful but you do
not want
Struts - Struts
Struts hi
can anyone tell me how can i implement session tracking in struts?
please it,s urgent........... session tracking? you mean... for later use in in any other jsp or servlet(action class) until session exist
Aggregating Actions In Struts Revisited
Aggregating Actions in Struts , I have given a brief idea of how to create action...;
If you observed carefully, we have created only one action class...;
How to create one?
1. Extend your action class from
Struts Framework
Details
Model: The model part of the Struts
application handles... the high-class web application development
framework, which is Struts. This article will give you detailed introduction to
the Struts Framework.
Struts
Sharing a Table Model between JTable Components
how to share a table
model between JTable components. Whenever, you want to do... all resources to each
other. When you change the values in the table model... a table model
between JTable components. For this, first of all you will need
Struts
;Basically in Struts we have only Two types of Action classes.
1.BaseActions... class indirectly.These action classes are available...Struts why in Struts ActionServlet made as a singleton what
Introduction to Struts 2
to the action class.
Struts 2 actions are Spring friendly and so easy...;
This section provides you a quick introduction to
Struts 2 framework... of special
application for false values.
Any class can be used as an action
Struts Guide
, Action, ActionForm and struts-config.xml are the part of
Controller.... In
this tutorial you will learn how to develop robust application using Jakarta...? -
- Struts Frame work is the implementation of Model-View-Controller
(MVC) design
Introduction to Struts 2 Framework
the basics of Struts 2 framework. We will explain you how you can create
your first... and then
teach you how to develop your first application using Struts 2 frame....
Action Form
ActionForm class is mandatory in Struts 1
In Struts
Diff between Struts1 and struts 2? - Struts
interfaces. While in Struts 2, an Action class implements an Action interface, along... can be used as an Struts 2 Action object.
Threading Model Struts 1 Actions.../viewanswers/246.html
But the best part is, struts 2 has more features and its
Java single threaded model
Java single threaded model How single threaded model works after implementation in class, basically architecture point of view
Struts Projects
In this tutorial I will show you how to integrate Struts and Hibernate... how to write code for action class and
code for saving data into database...;
Developing
Login Action Class
In this section we will explain how
Example of ActionSupport class
Example of ActionSupport class
Struts ActionSupport class provides the default...;action name="actionSupport" class="... automatically when action is called. This is
default implemented method subclasses
Struts Articles
and Arabic. You will see how to set the user locale in Struts Action classes... includes support beyond the servlet Struts framework. In Part 1, we talked about how... how to develop a simple JSR 168 compliant Struts portlet. You discover how... to the corresponding jsp say
1) aaa_jsp.jsp
2) bbb_jsp.jsp
3) ccc_jsp.jsp
how
no action mapped for action - Struts
no action mapped for action Hi, I am new to struts. I followed...: There is no Action mapped for action name HelloWorld
The ActionForm Class
in next version.
Now we will create the Action class which is the model
part... validation
of data, the data will be sent to model (the action class... in the
model part of the struts.
How To Develop Login Form In Struts
How To Develop Login Form In Struts
....
This
article will explain how to develop login form in struts. Struts adopts an
MVC architecture.
Model
Part of Login form Example:
Model
what is struts? - Struts
what is struts? What is struts?????how it is used n what... Commons packages. Struts encourages application architectures based on the Model 2... technologies to provide the Model and the View. For the Model, Struts can interact
Interview Questions - Struts Interview Questions
with the requested action. In the Struts framework this helper class... operation. the Struts Action class contains several methods, but most important... is an action that comes with Struts 1.1
or later, that lets you combine Struts
Why Struts 2
core interfaces are HTTP
independent. Struts 2 Action classes... as an Action class. Even we don't need to implement interfaces
always... handling per action, if
desired.
Easy Spring
integration - Struts 2 1 Tutorial and example programs
to the Struts Action Class
This lesson is an introduction to Action Class...
In this tutorial you will learn how to use Struts program to upload... Struts Dispatch Action
that will help you grasping the concept
Developing Login Action Class
Developing Login Action Class
... for login action class and database code for validating the user against database.
Developing Login Action Class
In any application
Struts Alternative
not force you to go the XML route, both technologies will work side by side. Struts... class: If you don't have a form, you don't need a FormController. This is a major difference to Struts.
You can use any object as a command or form
Open Source Business Model
, and sell services around it; but I hardly see how this model can work, since you...Open Source Business Model
What is the open source business
model... such as
Send mail. The open source business model relies on shifting
Fetch the data using jsp standard action
the data from the database & show in a jsp page using jsp:usebean in MVC model... java.util.*;
public class EmpBean {
public List dataList(){
ArrayList list=new...*;
import javax.servlet.http.*;
public class BeanInServlet extends HttpServlet
struts
struts <p>hi here is my code can you please help me to solve...*;
import org.apache.struts.action.*;
public class LoginAction extends Action...;
<p><html>
<body></p>
<form action="login.do">
Struts Interview Questions
pattern. Struts components can
be categories into Model, View and Controller.
Model: Components like business logic / business processes and data are
the part of Model.
View: JSP, HTML etc. are part of View
Controller: Action Servlet
Action Listeners
Action Listeners Please, could someone help me with how to use action listeners
I am creating a gui with four buttons. I will like to know how to apply the action listener to these four buttons.
Hello Friend,
Try
Testing Struts Application
in the action class code.Intentionally so! The step by step progress gets... in our system.
---------------------------------
How do we instal Struts in our... get a folder named
struts-blank.
14. If you expand struts-blank folder
JSP Architecture, JSP Model 1 architecture, JSP Model 2 architecture
types of view for the applications. We the help of JSP you can create
data... of the architecture is used.
These architectures are known as Model 1 and Model 2 architectures. So, in JSP there are two types of architecture of the JSP:
Model 1
Struts 2.2.1 - Struts 2.2.1 Tutorial
2.2.1 framework.
Prerequisites for Struts 2.2.1 tutorial
You should have... and testing the example
Advance Struts Action
Struts Action... Action class
Add configuration in struts.xml file
Build
Single thread model - Struts
Single thread model Hi Friedns , thank u in advance
1)I need sample code to find and remove duplicates in
arraylist and hashmap.
2) In struts, ow to implement singlthread model and threadsafe
struts
}//execute
}//class
struts-config.xml
<struts...struts <p>hi here is my code in struts i want to validate my form fields but it couldn't work can you fix what mistakes i have done</p>
javascript call action class method instruts
javascript call action class method instruts in struts2 onchange event call a method in Actionclass with selected value as parameter how can i do
Part I. Exam Objectives
Part I. Exam ObjectivesPrev Next
Exam Objectives
Learn how... on
WebSphere V 5.0. You should be
able to design develop
Struts Tutorial
the
information to them.
Struts Controller Component : In Controller, Action class... the model
from view and the controller. Struts framework provides the following three... architecture :
Struts Model Component : Provides Model of the business logic
Struts Theme And Template
;
<action name="...;/action>
<action name="doLogin" class="...Struts Theme And Template Example
To make a theme based application in struts
|
http://www.roseindia.net/tutorialhelp/comment/11198
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Figure 30.1: An arrangement of segments before (a) and after (b)
SR (hot pixels are shaded). Figure 30.2 depicts
the results of SR and ISR on the same input.
Conceptually, the ISR procedure is equivalent to repeated application
of SR, namely we apply SR to the original set of segments, then we use
the output of SR as input to another round of SR and so on until all the
vertices are well separated from non-incident edges. Algorithmically
we operate differently, as this repeated application of SR would have
resulted in an inefficient overall process. The algorithmic details are
given in [HP02].
Our package supports both schemes, implementing the algorithm
described in [HP02].
Although the paper only describes an algorithm for ISR,
it is easy to derive an algorithm for SR, by performing only
the first rounding level for each segment.
The input to the program is a set S of n segments,
S = {s1, ..., sn}, and the output is a set G of n polylines,
with a polyline gi for each input segment si. An input segment
is given by the coordinates of its endpoints. An output polyline is
given by the ordered set of vertices v0, ..., vk along the polyline.
The polyline consists of the segments
(v0 v1), ..., (vk-1 vk).
There are three template parameters: Traits is the underlying geometry,
i.e., the number type used and the coordinate representation.
InputIterator is the type of the iterators that point to the first
and after-the-last elements of the input. Finally, OutputContainer is the
type of the output container.
Since the algorithm requires kernel functionalities such as the rounding to the
center of a pixel, a special traits class must be provided. The precise
description of the requirements is given by the concept
SnapRoundingTraits_2. The class Snap_rounding_traits_2 is a model of
this concept.
Figure 30.2: An arrangement of segments before (a), after SR (b)
and ISR (c) (hot pixels are shaded).
The following example generates an ISR representation
of an arrangement of four line segments. In particular it produces
a list of points that are the vertices of the resulting polylines in a plane
tiled with one-unit square pixels.
File: examples/Snap_rounding_2/snap_rounding.cpp
#include <CGAL/basic.h>
#include <CGAL/Cartesian.h>
#include <CGAL/Quotient.h>
#include <CGAL/MP_Float.h>
#include <CGAL/Snap_rounding_traits_2.h>
#include <CGAL/Snap_rounding_2.h>
typedef CGAL::Quotient<CGAL::MP_Float> Number_type;
typedef CGAL::Cartesian<Number_type> Kernel;
typedef CGAL::Snap_rounding_traits_2<Kernel> Traits;
typedef Kernel::Segment_2 Segment_2;
typedef Kernel::Point_2 Point_2;
typedef std::list<Segment_2> Segment_list_2;
typedef std::list<Point_2> Polyline_2;
typedef std::list<Polyline_2> Polyline_list_2;
int main()
{
Segment_list_2 seg_list;
Polyline_list_2 output_list;
seg_list.push_back(Segment_2(Point_2(0, 0), Point_2(10, 10)));
seg_list.push_back(Segment_2(Point_2(0, 10), Point_2(10, 0)));
seg_list.push_back(Segment_2(Point_2(3, 0), Point_2(3, 10)));
seg_list.push_back(Segment_2(Point_2(7, 0), Point_2(7, 10)));
// Generate an iterated snap-rounding representation, where the centers of
// the hot pixels bear their original coordinates, using 5 kd trees:
CGAL::snap_rounding_2<Traits,Segment_list_2::const_iterator,Polyline_list_2>
(seg_list.begin(), seg_list.end(), output_list, 1.0, true, false, 5);
int counter = 0;
Polyline_list_2::const_iterator iter1;
for (iter1 = output_list.begin(); iter1 != output_list.end(); ++iter1) {
std::cout << "Polyline number " << ++counter << ":\n";
Polyline_2::const_iterator iter2;
for (iter2 = iter1->begin(); iter2 != iter1->end(); ++iter2)
std::cout << " (" << iter2->x() << ":" << iter2->y() << ")\n";
}
return(0);
}
This program generates four polylines, one for each input segment. The exact
output follows:
Polyline number 1:
(0/4:0/4)
(12/4:12/4)
(20/4:20/4)
(28/4:28/4)
(40/4:40/4)
Polyline number 2:
(0/4:40/4)
(12/4:28/4)
(20/4:20/4)
(28/4:12/4)
(40/4:0/4)
Polyline number 3:
(12/4:0/4)
(12/4:12/4)
(12/4:28/4)
(12/4:40/4)
Polyline number 4:
(28/4:0/4)
(28/4:12/4)
(28/4:28/4)
(28/4:40/4)
The package is supplied with a graphical demo program that opens a window,
allows the user to edit segments dynamically, applies a selected
snap-rounding procedure, and displays the result in the same window
(see <CGAL_ROOT>/demo/Snap_rounding_2/demo.cpp).
|
https://doc.cgal.org/Manual/3.5/doc_html/cgal_manual/Snap_rounding_2/Chapter_main.html
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
Component QML Type
Encapsulates a QML component definition. More...
Properties
Attached Signals
- completed()
- destruction()
Methods
- object createObject(QtObject parent, object properties)
- string errorString()
- object incubateObject(Item parent, object properties, enumeration mode)
Detailed Description
Components are reusable, encapsulated QML types with well-defined interfaces.
Components are often defined by component files - that is,
.qml files. The Component type essentially allows QML components to be defined inline, within a QML document, rather than as a separate QML file. This may be useful for reusing a small component within a QML file, or for defining a component that logically belongs with other QML components within a file.
For example, here is a component that is used by multiple Loader objects. It contains a single item, a Rectangle:
import QtQuick 2.0

Item {
    width: 100; height: 100

    Component {
        id: redSquare

        Rectangle {
            color: "red"
            width: 10
            height: 10
        }
    }

    Loader { sourceComponent: redSquare }
    Loader { sourceComponent: redSquare; x: 20 }
}
Notice that while a Rectangle by itself would be automatically rendered and displayed, this is not the case for the above rectangle because it is defined inside a
Component. The component encapsulates the QML types within it, as if they were defined in a separate QML file. In the same way, a
Component definition contains a single top level item (which in the above example is a Rectangle) and cannot define any data outside of this item, with the exception of an id (which in the above example is redSquare).
The
Component type is commonly used to provide graphical components for views. For example, the ListView::delegate property requires a
Component to specify how each list item is to be displayed.
Component objects can also be created dynamically using Qt.createComponent().
Creation Context.
Property Documentation
progress : real
The progress of loading the component, from 0.0 (nothing loaded) to 1.0 (finished).
status : enumeration
This property holds the status of component loading. The status can be one of the following:
- Component.Null - no data is available for the component
- Component.Ready - the component has been loaded and can be used to create instances
- Component.Loading - the component is currently being loaded
- Component.Error - an error occurred while loading the component; calling errorString() provides a human-readable description of the error
Method Documentation
Creates and returns an object instance of this component that will have the given parent and properties. The properties argument is optional. Returns null if object creation fails.
The object will be created in the same context as the one in which the component was created. This function will always return null when called on components which were not created in QML.
If you wish to create an object without setting a parent, specify
null for the parent value. Note that if the returned object is to be displayed, you must provide a valid parent value or set the returned object's parent property.
The incubateObject() method creates an incubator for an instance of this component, so the instance can be created asynchronously. The returned incubator object has the following properties:
- status The status of the incubator. Valid values are Component.Ready, Component.Loading and Component.Error.
- object The created object instance. Will only be available once the incubator is in the Ready status.
- onStatusChanged Specifies a callback function to be invoked when the status changes. The status is passed as a parameter to the callback.
- forceCompletion() Call to complete incubation synchronously.
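As a brief illustration (not from the original page; the ids and property values are made up), a component instance can be created dynamically with createObject() like this:
import QtQuick 2.0

Item {
    id: root
    width: 100; height: 100

    Component {
        id: redSquare
        Rectangle { color: "red"; width: 10; height: 10 }
    }

    // Create an instance once the surrounding item has loaded, parenting it
    // to root and overriding its x property; check for creation failure.
    Component.onCompleted: {
        var square = redSquare.createObject(root, { x: 20 });
        if (square === null)
            console.log("Error creating object:", redSquare.errorString());
    }
}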
|
https://doc.qt.io/archives/qt-5.11/qml-qtqml-component.html
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
Code:
#include <std_disclaimer.h>
/* ... */
Install guide:
Boot:
Code:
fastboot boot <twrp.img>
Install:
Code:
fastboot flash recovery <twrp.img>
Download:
twrp-3.4.0-0-burton-beta1.img
What's working:
Everything I've tested seems to be working aside from a few bugs:
- EFS does not appear on backup menu
- Super partition appears twice on backup menu
- Battery percentage does not appear
Source code:
Device tree:
TWRP Source:
Thanks to:
Code:
* vache for providing the racer device tree as a base * TWRP devs
XDA:DevDB Information
TWRP, Tool/Utility for the Motorola Edge +
Contributors
pixlone, vache
Source Code:
Version Information
Status: Beta
Current Beta Version: 1
Beta Release Date: 2020-09-01
Created 2020-09-01
Last Updated 2020-09-01
|
https://forum.xda-developers.com/t/recovery-unofficial-twrp-3-4-0-0.4156905/
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
Code to reproduce the bug
The following code, when compiled via CSharpCodeCompiler, produces a false error:
public class A
{
public virtual void F() {}
}
public class B: A
{
public virtual void F() {}
}
This code generates 1 warning and 1 error:
/tmp/fa35333/69a87839.0.cs(22,29) : warning CS0114: `NS.B.F()' hides inherited member `NS.A.F()'. To make the current member override that implementation, add the override keyword. Otherwise add the new keyword
(0,0) : error : /tmp/fa35333/69a87839.0.cs(16,29): (Location of the symbol related to previous warning)
Mono.CSharp.CSharpCodeCompiler.CreateErrorFromString incorrectly treats the additional line "(Location of the symbol related to previous warning)" in stderr output of mcs as an error since it doesn't match the ErrorRegexPattern regex.
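To illustrate the failure mode, here is a self-contained C# sketch; the pattern below is a simplified stand-in for mcs error-line parsing, not Mono's actual ErrorRegexPattern:
using System;
using System.Text.RegularExpressions;

class Demo
{
    static void Main()
    {
        // Simplified stand-in for an mcs error/warning line pattern.
        var errorRegex = new Regex(
            @"^(?<file>.+)\((?<line>\d+),(?<col>\d+)\)\s*:\s*(?<level>error|warning)\s+(?<code>\w+):");

        string locationLine =
            "/tmp/fa35333/69a87839.0.cs(16,29): (Location of the symbol related to previous warning)";

        // The line has no "error CSxxxx:" or "warning CSxxxx:" part, so the
        // pattern does not match; a parser that treats every unmatched stderr
        // line as an error reports this informational line as a compile error.
        Console.WriteLine(errorRegex.IsMatch(locationLine)); // False
    }
}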
Fixed in master via.
|
https://bugzilla.xamarin.com/35/35980/bug.html
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
import java.util.ArrayList;
import java.util.List;
import org.bson.BSONObject;
import org.bson.BasicBSONObject;
import com.sequoiadb.base.Node.NodeStatus;
import com.sequoiadb.base.DBCursor;
import com.sequoiadb.base.Node;
import com.sequoiadb.base.ReplicaGroup;
import com.sequoiadb.base.Sequoiadb;
import com.sequoiadb.exception.BaseException;

public class BlogRG {
    static String rgName = "testRG";
    static String hostName = "sdbserver1";

    public static void main(String[] args) {
        // Connect to the database
        String host = "192.168.20.46";
        String port = "11810";
        String usr = "admin";
        String password = "admin";
        Sequoiadb sdb = null;
        try {
            sdb = new Sequoiadb(host + ":" + port, usr, password);
        } catch (BaseException e) {
            e.printStackTrace();
            System.exit(1);
        }

        // Print the current replica groups
        printGroupInfo(sdb);

        // Clean up the environment: remove a leftover duplicate replica group
        if (isGroupExist(sdb, rgName)) {
            System.out.println("Removing the old replica group...");
            sdb.removeReplicaGroup(rgName);
        }
        printGroupInfo(sdb);

        // Add a new replica group
        System.out.println("Adding the new replica group...");
        ReplicaGroup rg = sdb.createReplicaGroup(rgName);
        printGroupInfo(sdb);
        System.out.println("Tere are " + rg.getNodeNum(NodeStatus.SDB_NODE_ALL)
                + " nodes in the group.");

        // Add three new nodes
        Node node1 = addNode(rg, 50000);
        Node node2 = addNode(rg, 50010);
        Node node3 = addNode(rg, 50020);
        System.out.println("Tere are " + rg.getNodeNum(NodeStatus.SDB_NODE_ALL)
                + " nodes in the group.");

        // Get the master/slave nodes of the replica group
        Node master = rg.getMaster();
        System.out.println("The master node is " + master.getPort());
        System.out.println("The slave node is " + rg.getSlave().getPort());

        // Stop the master node and wait for a new election
        System.out.println("stoping the master node...");
        master.stop();
        while (rg.getMaster().getPort() == master.getPort()) {
            try { Thread.sleep(2000); } catch (Exception e) {}
        }

        // View the newly elected master node
        System.out.println("re-selecting the master node...");
        System.out.println("The master node is " + rg.getMaster().getPort());
    }

    private static void printGroupInfo(Sequoiadb sdb) {
        ArrayList names = sdb.getReplicaGroupNames();
        int count = 0;
        System.out.print("The replica groups are ");
        for (Object name : names) {
            count++;
            System.out.print((String) name + ", ");
        }
        System.out.println("\nThere are " + count + " replica groups in total.");
    }

    private static boolean isGroupExist(Sequoiadb sdb, String rgName) {
        ArrayList names = sdb.getReplicaGroupNames();
        for (Object name : names) {
            if (rgName.equals((String) name)) return true;
        }
        return false;
    }

    private static Node addNode(ReplicaGroup rg, int port) {
        if (rg.getNode(hostName, port) != null)
            rg.removeNode(hostName, port, null);
        Node node = rg.createNode(hostName, port,
                "/opt/sequoiadb/database/test/" + port, null);
        System.out.println("starting the node " + port + "...");
        node.start();
        return node;
    }
}
The above code adds a new replica group to the database and adds three nodes to the new group; the group then automatically elects a master node. After that master node is stopped, a new master node is re-elected within the replica group.
The result of running the above code is:
The replica groups are SYSCatalogGroup, datagroup, testRG, There are 3 replica groups in total. Removing the old replica group... The replica groups are SYSCatalogGroup, datagroup, There are 2 replica groups in total. Adding the new replica group... The replica groups are SYSCatalogGroup, datagroup, testRG, There are 3 replica groups in total. Tere are 0 nodes in the group. starting the node 50000... starting the node 50010... starting the node 50020... Tere are 3 nodes in the group. The master node is 50000 The slave node is 50010 stoping the master node... re-selecting the master node... The master node is 50020
As you can see, when the program starts running, there are three replica groups in the database, with testRG being the useless replica group left over from the last run, and the other two being the database's two default groups. The leftover testRG group is removed via the removeReplicaGroup() method, a new testRG group is added via createReplicaGroup(), and three new nodes, on ports 50000, 50010 and 50020, are added and started within the new group via createNode() and start(). The getMaster() and getSlave() methods are used to get the master and slave nodes within the group, and stop() is used to stop the master node on port 50000. After the master node stops completely, a new master node, on port 50020, is automatically re-elected within the group.
After the run, check the details of the database's testRG replica group through the shell console:
>rg.getDetail() { "Group": [ { "HostName": "sdbserver1", "dbpath": "/opt/sequoiadb/database/test/50000", "Service": [ { "Type": 0, "Name": "50000" }, { "Type": 1, "Name": "50001" }, { "Type": 2, "Name": "50002" } ], "NodeID": 1053 }, { "HostName": "sdbserver1", "dbpath": "/opt/sequoiadb/database/test/50010", "Service": [ { "Type": 0, "Name": "50010" }, { "Type": 1, "Name": "50011" }, { "Type": 2, "Name": "50012" } ], "NodeID": 1054 }, { "HostName": "sdbserver1", "dbpath": "/opt/sequoiadb/database/test/50020", "Service": [ { "Type": 0, "Name": "50020" }, { "Type": 1, "Name": "50021" }, { "Type": 2, "Name": "50022" } ], "NodeID": 1055 } ], "GroupID": 1023, "GroupName": "testRG", "PrimaryNode": 1055, "Role": 0, "Status": 0, "Version": 4, "_id": { "$oid": "53D9D38E14A63A88C621EDd8"}} Return 1 row(s). Takes 0.4716s.
You can see that there are three nodes in the group, PrimaryNode is 1055, i.e. 50020.
|
http://www.itworkman.com/145037.html
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
table of contents
NAME¶CURLOPT_MAIL_RCPT - list of SMTP mail recipients
SYNOPSIS¶
#include <curl/curl.h> CURLcode curl_easy_setopt(CURL *handle, CURLOPT_MAIL_RCPT, struct curl_slist *rcpts);
DESCRIPTION¶Pass a pointer to a linked list of recipients to pass to the server in your SMTP mail request. The linked list should be a fully valid list of struct curl_slist structs properly filled in. Use curl_slist_append(3) to create the list and curl_slist_free_all(3) to clean up an entire list.
DEFAULT¶NULL
PROTOCOLS¶SMTP
EXAMPLE¶
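The original example was lost in extraction; the following is a reconstructed sketch of typical CURLOPT_MAIL_RCPT usage (the server name and addresses are placeholders, and error handling is omitted for brevity):
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    struct curl_slist *recipients = NULL;

    /* Build the linked list of recipients. */
    recipients = curl_slist_append(recipients, "<admin@example.com>");
    recipients = curl_slist_append(recipients, "<user@example.com>");

    curl_easy_setopt(curl, CURLOPT_URL, "smtp://mail.example.com");
    curl_easy_setopt(curl, CURLOPT_MAIL_FROM, "<sender@example.com>");
    curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients);

    /* Perform the transfer, then free the list and the handle. */
    curl_easy_perform(curl);
    curl_slist_free_all(recipients);
    curl_easy_cleanup(curl);
  }
  return 0;
}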
|
https://dyn.manpages.debian.org/unstable/libcurl4-doc/CURLOPT_MAIL_RCPT.3.en.html
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
Getting to Know Blender So. Many. Buttons.
Ok, you probably won't have to use that many buttons. In fact, many of the effects in Blender can be done with python scripting. Since most computational scientists are way more comfortable with code than a bunch of buttons, AstroBlend is designed to maximize automation for the scientist's usage of Blender.
However, there are a few buttons and keyboard tricks that are essential to know. There are others that will make your life easier if you're willing to remember more stuff. Links to further resources will be at the bottom of this tutorial, however here we will just cover the bases of navigation.
Please check out the Getting Started page before you do this tutorial to make sure you have the AstroBlend library installed correctly.
As an aside - if you happen to be an artist checking out this library, you will probably find things to be a little hacky, since many of the functions can be replicated with a few simple button clicks. This is because computational scientists often need to load many data files sequentially and manipulate them in the same way and would rather not have to repeat said simple button clicks many times. However, I am sure this library does some silly and inefficient things, so feel free to drop me a line if you see a better way of doing things.
Running Blender From the Console
The first thing to do is to set up Blender to run from a terminal window. Why would you want to do this? Because otherwise it is hard to parse the error messages that you will receive. On my Mac, I've added the following to my ".bash_profile" file:
alias blender="/Applications/Blender2.72/blender.app/Contents/MacOS/blender -P /Users/jillnaiman/yt-x86_64/yt_blender_import.py"
But wait, what is all that "yt_blender_import.py" noise? Well, Blender can be started up with a script, which is useful if you want to be able to tell Blender where the AstroBlend library is without having to specify it explicitly every time you use the python console. The exact form of this alias here is specific to those who want to use yt directly in Blender and is actually generated during the yt installation process (more about that in a later tutorial). For now, we can simply have:
alias blender="/Applications/Blender2.72/blender.app/Contents/MacOS/blender -P /Users/jillnaiman/myAstroBlendScript.py"
where myAstroBlendScript.py simply adds the location of the AstroBlend science library to Blender's Python path:
import sys
sys.path.append('/Users/jillnaiman/astroblend/science')
Blender's Many Windows
If you now open up Blender with your scripting command, you will see a lot of buttons and windows. The main windows you'll see are:
(1) Main 3D window: where all your models and data will appear, along with any light sources and cameras
(2) Object Selector: this panel allows you to click and then manipulate the individual objects shown in the 3D window
(3) Object Specific Controls: Once you have an object selected, you can manipulate several properties of each object. Some options include changing the color of an object (a 3D model), or rendering an image (with the Camera).
(4) Object Specific Stuff: more ways to manipulate objects. You probably won't use this panel.
(5) Animation Stuff: Panel for animation, you also probably won't use this.
Since there are a lot of panels we won't be using, it is a lucky thing that creating your own blender layout is so easy. You can create new blender window subdivisions by pulling the little tabs in the window corners (red circle) and changing what each window does (green circle):
I have found that the most useful window setup is the one below which includes a 3D view, a text file editor for short scripts, a python console, and an Image viewer where renders will show up:
This setup is "usual.blend" from the Blender Files Download Page.
You can also further modify your layout, save your favorite layout as your startup layout, and so on. Further information on how to do this can be found in the Modify Your Blender Layout list of resources.
Render Something to Screen
It is often useful to do a quick render of your setup to the screen to see how your images or movie stills are going to look before you go through the trouble of saving them to file. This can be done by pressing the little camera button (red circle) followed by the render button (blue circle), and the rendered image is shown in the Image viewer (aren't you happy we put one in?!):
Note from our 3D layout that the image sort of looks like we would expect - the camera is pointing toward one of the corners of the cube, which is at the forefront of our image, and the lamp that is lighting the scene is behind the cube with respect to the camera, so the furthest corner from us is lit up, and the closest side is in shadow.
Run a Script from the Text Window
Another thing that is good to know how to do is run a simple script from the text editor. For example, in the figure below I have the script:
import sys
# add where the science lib is stored
sys.path.append("/Users/jillnaiman/blenderScience/astroblend/science/")

import science  # get stuff for science!
arrow_name = "RedArrow"
arrow_color = (1, 0, 0)  # (R,G,B)
arrow = science.Arrow(arrow_name, color=arrow_color)
arrow.location = (3, 0, 0)
arrow.pointing = (3, 0, 5)
This script first appends the Blender path to include the location of AstroBlend's science library to load. It then creates a red arrow, and moves it to 3 Blender Units (BU) along the x-axis, and points it straight up by using a directionally constraining "empty mesh" and placing this empty directly above the arrow. To run the script simply hit the "Run Script" button in the bottom of the text editor window.
One cool thing you should totally play with is moving the empty mesh right above the arrow around and see how the arrow keeps pointing at this mesh. The easiest way to do this is click on the "EmptyRedArrow" object in the "Object Selector" panel in the upper right of Blender and then click on one of the 3 arrows that appears in the 3D viewer.
Also as a final note, in addition to changing the location and pointing direction of your arrow, you can
also change the color with
arrow.color and the name of your arrow with
arrow.name.
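For example, continuing with the arrow created above:
arrow.color = (0, 0, 1)   # make the arrow blue
arrow.name = "BlueArrow"  # rename the arrow object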
More info about the different objects you can add can be found in the Using Simple 3D Models tutorial, and info about how empty meshes can be used to create directional objects is found in the second tutorial.
Navigating the 3D Space
Finally, you might want to move around your 3D space and get different views of your object, or you may want to move around your camera. There are many hot keys one can use to do just this, but since the rotation and translation of objects can be done via the command line, I will only mention a few different methods.
To move around the 3D space you can use the number pad on your keyboard when the 3D viewer window is active. The image below from here shows what the different keys do:
In addition to rotating, being able to switch to the view your camera sees by pressing "0" is quite helpful.
One other useful thing to know is how to delete things by clicking on them, which is most useful for Camera and Lamp objects. You do this by clicking on the object you want to delete in the list of objects in the Object Selector panel, then put your mouse in the 3D viewer and click "x". Click "delete" when the menu pops up. HOWEVER, there are several reasons why you would want to use the command line to delete your actual 3D models, which are discussed at length under the "Deleting Objects with AstroBlend" section in this tutorial, so BEWARE OF CLICKING TO DELETE 3D MODELS!
There are many additional resources for how to rotate and translate things quickly. A cheat sheet for all this can be found here, and there are many other examples on the interwebs.
It should be noted that things can change from computer to computer, and these are the controls that have worked on my Mac.
|
http://www.astroblend.com/tutorials/tutorial_getToKnowBlender.html
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
Nomad v1.0 Feature: This tutorial uses a feature available in Nomad v 1.0 or later.
Enterprise Only: The functionality described here is available only in Nomad Enterprise with the Multi-Cluster & Efficiency module. To explore Nomad Enterprise features, you can sign up for a free 30-day trial from here.
Using a Vagrant virtual machine, you will deploy a simple environment containing:
- An APM, specifically Prometheus, to collect metric data.
- Nomad Autoscaler Enterprise.
- A sample job, which will be configured to enable DAS recommendations with:
- one NGINX instance used as a TCP load balancer.
- three Redis instances to service requests.
- A sample dispatch job to create load on the Redis nodes.
»Prerequisites
Familiarity with the Dynamic application scaling concepts tutorial.
This Vagrantfile, which creates a suitable environment to run the demonstration.
This Vagrantfile provisions:
one Ubuntu 20.04 VM preinstalled with:
- Nomad Enterprise v1.0.0 beta 2
- The current version of Consul installable via package
- The current version of Docker installable via package
»Start and connect to the Vagrant environment
Download the Vagrantfile. Start the test-drive environment by running
vagrant up.
$ vagrant up
Once the environment is provisioned and you are returned to your command prompt, connect to the Vagrant instance.
$ vagrant ssh
Once you are at the
vagrant@ubuntu-focal:~$ prompt, you are ready to continue.
»Verify Nomad telemetry configuration
Nomad needs to be configured to enable telemetry publishing. You need to enable
allocation and node metrics. Since this tutorial also uses Prometheus as its APM,
you need to set
prometheus_metrics to true.
The configuration for the Nomad agent inside the test-drive already has the
appropriate telemetry configuration. View the configuration using
cat /etc/nomad.d/nomad.hcl and note that the following stanza is included.
telemetry {
  publish_allocation_metrics = true
  publish_node_metrics       = true
  prometheus_metrics         = true
}
Given this configuration, Nomad generates node and allocation metrics and makes them available in a format that Prometheus can consume. If you are using this test-drive with your own Nomad cluster, add this telemetry block to the configuration for every Nomad node in your cluster and restart them to load the new configuration.
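To sanity-check that telemetry is being published (an optional step, not part of the original tutorial; it assumes the Nomad API is reachable on its default local address and port), you can query the metrics endpoint directly:
$ curl -s 'http://localhost:4646/v1/metrics?format=prometheus' | head
You should see Prometheus-formatted metric lines in the output.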
Return to the vagrant user's home directory if you changed away from it.
$ cd /home/vagrant
»Start Prometheus
The autoscaler configuration in this test-drive uses Prometheus to retrieve historical metrics when starting to track a new target. In this beta, Prometheus is also used for ongoing monitoring metrics, but this is currently being shifted to using Nomad's metrics API. The first step is to run an instance of Prometheus for the Nomad Autoscaler to use. The simplest way to do this is to run Prometheus as a Nomad job. The environment contains a complete Prometheus job file to get started with.
You can create a file called
prometheus.nomad with the following content, or
you can copy
prometheus.nomad from the
~/nomad-autoscaler/jobs folder when
logged into a vagrant user's shell inside the VM.
job "prometheus" { datacenters = ["dc1"] group "prometheus" { count = 1 task "prometheus" { driver = "docker" config { image = "prom/prometheus:v2.18.1" args = [ "--config.file=/etc/prometheus/config/prometheus.yml", "--storage.tsdb.path=/prometheus", "--web.console.libraries=/usr/share/prometheus/console_libraries", "--web.console.templates=/usr/share/prometheus/consoles", ] volumes = [ "local/config:/etc/prometheus/config", ] port_map { prometheus_ui = 9090 } } template { data = <<EOH---global: scrape_interval: 1s evaluation_interval: 1s scrape_configs: - job_name: nomad metrics_path: /v1/metrics params: format: ['prometheus'] static_configs: - targets: ['{{ env "attr.unique.network.ip-address" }}:4646'] - job_name: consul metrics_path: /v1/agent/metrics params: format: ['prometheus'] static_configs: - targets: ['{{ env "attr.unique.network.ip-address" }}:8500']EOH change_mode = "signal" change_signal = "SIGHUP" destination = "local/config/prometheus.yml" } resources { cpu = 100 memory = 256 network { mbits = 10 port "prometheus_ui" { static = 9090 } } } service { name = "prometheus" port = "prometheus_ui" check { type = "http" path = "/-/healthy" interval = "10s" timeout = "2s" } } } }}
Run the job in Nomad.
$ nomad job run prometheus.nomad
»Start the autoscaler
The next step is to run the Nomad Autoscaler. For the beta, an enterprise version of the Nomad Autoscaler is provided that includes the DAS plugins. The simplest approach is to run the autoscaler as a Nomad job; however, you can download the Nomad Autoscaler and run it as a standalone process.
This test-drive Vagrant environment comes with Consul. The supplied Nomad job specifications use this Consul to discover the Nomad and Prometheus URLs. Should you want to use this specification in a cluster without Consul, you can supply the URLs yourself and remove the checks.
You can create a file called
das-autoscaler.nomad with the following content, or
you can copy
das-autoscaler.nomad from the
~/nomad-autoscaler/jobs folder when
logged into a vagrant user's shell inside the VM.
job "das-autoscaler" { datacenters = ["dc1"] group "autoscaler" { count = 1 task "autoscaler" { driver = "docker" config { image = "hashicorp/nomad-autoscaler-enterprise:0.2.0-beta2" command = "bin/nomad-autoscaler" args = [ "agent", "-config", "${NOMAD_TASK_DIR}/autoscaler.hcl", "-http-bind-address", "0.0.0.0", ] ports = ["http"] } template { destination = "${NOMAD_TASK_DIR}/autoscaler.hcl" data = <<EOH// Set the log level so we can see some more interesting output at the expense// of chattiness.log_level = "debug"// Set the address of the Nomad agent. This can be omitted and in this example// is set to the default for clarity.nomad { // Use Consul service discovery for the Nomad client IP and Port. address = "{{ with service "nomad-client" }}{{ with index . 0 }}http://{{.Address}}:{{.Port}}{{ end }}{{ end }}" // Use the splat operator so the autoscaler monitors scaling policies from // all Nomad namespaces. If you wish to have it only monitor a single // namespace, update this param to match the desired name. namespace = "*" // If Nomad ACLs are in use, the following line should be uncommented and // updated to include an ACL token. // token = ""}// Setup the Prometheus APM so that the autoscaler can pull historical and// point-in-time metrics regarding task resource usage.apm "prometheus" { driver = "prometheus" config = { // Use Consul service discovery for the Prometheus IP and Port. address = "{{ with service "prometheus" }}{{ with index . 0 }}http://{{.Address}}:{{.Port}}{{ end }}{{ end }}" }}policy_eval { // Lower the evaluate interval so we can reproduce recommendations after only // 5 minutes, rather than having to wait for 24hrs as is the default. evaluate_after = "5m" // Disable the horizontal application and horizontal cluster workers. This // helps reduce log noise during the demo. workers = { cluster = 0 horizontal = 0 }}EOH } resources { cpu = 1024 memory = 512 } } network { port "http" { to = 8080 } } service { name = "nomad-autoscaler" port = "http" check { type = "http" path = "/v1/health" interval = "5s" timeout = "2s" } } }}
Run the job in Nomad.
$ nomad job run das-autoscaler.nomad
Upon starting, the autoscaler loads the DAS-specific plugin and launches workers
to evaluate vertical policies. You can see the logs using the Nomad UI or
nomad alloc logs ... command.
»Deploy the sample job
Create a job named example.nomad with the following content.
job "example" { datacenters = ["dc1"] group "cache-lb" { count = 1 network { port "lb" { to = 6379 } } service { name = "redis-lb" port = "lb" address_mode = "host" check { type = "tcp" port = "lb" interval = "10s" timeout = "2s" } } task "nginx" { driver = "docker" config { image = "nginx" ports = ["lb"] volumes = [ # It's safe to mount this path as a file because it won't re-render. "local/nginx.conf:/etc/nginx/nginx.conf", # This path hosts files that will re-render with Consul Template. "local/nginx:/etc/nginx/conf.d" ] } # This template overwrites the embedded nginx.conf file so it loads # conf.d/*.conf files outside of the `http` block. template { data = <<EOFuser nginx;worker_processes 1; error_log /var/log/nginx/error.log warn;pid /var/run/nginx.pid; events { worker_connections 1024;} include /etc/nginx/conf.d/*.conf;EOF destination = "local/nginx.conf" } # This template creates a TCP proxy to Redis. template { data = <<EOFstream { server { listen 6379; proxy_pass backend; } upstream backend { {{ range service "redis" }} server {{ .Address }}:{{ .Port }}; {{ else }}server 127.0.0.1:65535; # force a 502 {{ end }} }}EOF destination = "local/nginx/nginx.conf" change_mode = "signal" change_signal = "SIGHUP" } resources { cpu = 50 memory = 10 } } } group "cache" { count = 3 network { port "db" { to = 6379 } } service { name = "redis" port = "db" address_mode = "host" check { type = "tcp" port = "db" interval = "10s" timeout = "2s" } } task "redis" { driver = "docker" config { image = "redis:3.2" ports = ["db"] } resources { cpu = 500 memory = 256 } } }}
»Add DAS to the sample job
To enable application-sizing for multiple tasks with DAS, you need to add this
scaling block to every new or additional task in the job spec. Inside both the
cache-lb and the
cache tasks, add the following scaling policies. You can
verify your changes against the completed
example.nomad file in the
~/nomad-autoscaler/jobs directory.
scaling "cpu" { policy { cooldown = "1m" evaluation_interval = "1m" check "95pct" { strategy "app-sizing-percentile" { percentile = "95" } } } } scaling "mem" { policy { cooldown = "1m" evaluation_interval = "1m" check "max" { strategy "app-sizing-max" {} } } }
Note: These scaling policies are extremely aggressive and provide
"flappy" recommendations, making them unsuitable for production. They are
set with low
cooldown and
evaluation_interval values in order to
quickly generate recommendations for this test drive. Consult the
Dynamic Application Sizing Concepts tutorial for how to determine
suggested production values.
Reregister the example.nomad file by running the
nomad job run example.nomad
command.
$ nomad job run example.nomad
Once the job has been registered with its updated specification, the Nomad autoscaler automatically detects the new scaling policies and starts the required internal processes.
Further details on the individual parameters and available strategies can be found in the Nomad documentation, including information on how you can further customize the application-sizing block to your needs (percentile, cooldown periods, sizing strategies).
»Review DAS recommendations
Once the autoscaler has generated recommendations, you can review them in the Nomad UI or using the Nomad API and accept or dismiss the recommendations.
Select the Optimize option in the Workload section of the sidebar. When there are DAS recommendations they appear here.
Clicking Accept applies the recommendation, updating the job with resized tasks. Dismissing the recommendation causes it to disappear. However, the autoscaler continues to monitor and eventually makes additional recommendations for the job until the vertical scaling policy is removed from the job specification.
Click the Accept button to accept the suggestion.
You also receive a suggestion for the
cache-lb task.
Click the Accept button to accept the suggestion.
Use curl to access the List Recommendations API.
$ curl ''
You should receive recommendations for both the cache-lb and cache tasks; in this sample output there are three: CPU for the nginx task, and CPU and memory for the redis task.
[ { "ID": "1308e937-63b1-fa43-67e9-3187c954e417", "Region": "global", "Namespace": "default", "JobID": "example", "JobVersion": 0, "Group": "cache-lb", "Task": "nginx", "Resource": "CPU", "Value": 57, "Current": 50, "Meta": { "window_size": 300000000000.0, "nomad_policy_id": "dd393d4b-99d7-7b72-132c-7e70f1b6b2dc", "num_evaluated_windows": 11.0 }, "Stats": { "max": 20.258468627929688, "mean": 0.21294420193006963, "min": 0.0, "p99": 20.258468627929688 }, "EnforceVersion": false, "SubmitTime": 1604353860521108002, "CreateIndex": 350, "ModifyIndex": 350 }, { "ID": "b9331de3-299f-cd74-bf6d-77aa36a3e147", "Region": "global", "Namespace": "default", "JobID": "example", "JobVersion": 0, "Group": "cache", "Task": "redis", "Resource": "CPU", "Value": 57, "Current": 500, "Meta": { "window_size": 300000000000.0, "nomad_policy_id": "1b63f7bd-c995-d61e-cf4f-b49a8d777b65", "num_evaluated_windows": 12.0 }, "Stats": { "p99": 32.138671875, "max": 32.138671875, "mean": 2.5897381649120943, "min": 0.06250959634780884 }, "EnforceVersion": false, "SubmitTime": 1604353860521659719, "CreateIndex": 352, "ModifyIndex": 352 }, { "ID": "f91454d6-8df8-ce64-696b-b21c758cfb3b", "Region": "global", "Namespace": "default", "JobID": "example", "JobVersion": 0, "Group": "cache", "Task": "redis", "Resource": "MemoryMB", "Value": 10, "Current": 256, "Meta": { "nomad_policy_id": "9153e45b-618c-a7e4-6aa3-c720fd20184f", "num_evaluated_windows": 12.0, "window_size": 300000000000.0, "nomad_autoscaler.count.capped": true, "nomad_autoscaler.count.original": 2.0, "nomad_autoscaler.reason_history": [] }, "Stats": { "max": 2.01171875, "mean": 1.9451913759689923, "min": 1.9375, "p99": 1.984375 }, "EnforceVersion": false, "SubmitTime": 1604353860521511567, "CreateIndex": 351, "ModifyIndex": 351 }]
You can accept them by using the Apply and Dismiss Recommendations API endpoint. Replace the Recommendation IDs in the command with the recommendation IDs received when you queried the List Recommendations API.
$ curl '' \
    --request POST \
    --data '{"Apply":["1308e937-63b1-fa43-67e9-3187c954e417", "b9331de3-299f-cd74-bf6d-77aa36a3e147"]}'
{
  "Errors": [],
  "LastIndex": 0,
  "RequestTime": 0,
  "UpdatedJobs": [
    {
      "EvalCreateIndex": 403,
      "EvalID": "5a1c5f5e-6a82-17a9-ca2d-b053e4f418f2",
      "JobID": "example",
      "JobModifyIndex": 403,
      "Namespace": "default",
      "Recommendations": [
        "1308e937-63b1-fa43-67e9-3187c954e417",
        "b9331de3-299f-cd74-bf6d-77aa36a3e147"
      ],
      "Warnings": ""
    }
  ]
}
»Verify recommendation is applied
Watch for the deployment to complete and then verify that the job is now using
the recommended values instead of the ones initially supplied. You can do this
in the Nomad UI or using the
nomad alloc status command for a
cache and a
cache-lb allocation listed from the
nomad job status example command.
Navigate to the example job's detail screen in the Nomad UI
Note that the Task Groups section shows the updated values for Reserved CPU and Reserved Memory given by the autoscaler.
List out the allocations for the example job by running
nomad job status example.
$ nomad job status exampleID = exampleName = exampleSubmit Date = 2020-11-02T16:28:52ZType = servicePriority = 50Datacenters = dc1Namespace = defaultStatus = runningPeriodic = falseParameterized = false SummaryTask Group Queued Starting Running Failed Complete Lostcache 0 0 3 0 3 0cache-lb 0 0 1 0 1 0 Latest DeploymentID = c3ee5e5dStatus = successfulDescription = Deployment completed successfully DeployedTask Group Desired Placed Healthy Unhealthy Progress Deadlinecache 3 3 3 0 2020-11-02T16:39:30Zcache-lb 1 1 1 0 2020-11-02T16:39:06Z AllocationsID Node ID Task Group Version Desired Status Created Modified5a35ffec c442fcaa cache 2 run running 4m49s ago 4m30s ago8ceec492 c442fcaa cache-lb 2 run running 5m7s ago 4m35s agoceb84c32 c442fcaa cache 2 run running 5m7s ago 4m50s ago2de4ff81 c442fcaa cache 2 run running 5m16s ago 4m57s ago2ffa9be6 c442fcaa cache-lb 1 stop complete 37m9s ago 5m7s ago528156b1 c442fcaa cache 0 stop complete 37m9s ago 5m15s ago04645e48 c442fcaa cache 0 stop complete 37m9s ago 5m7s ago2d9fc1f2 c442fcaa cache 0 stop complete 37m9s ago 4m48s ago
From the job status output, a
cache allocation has allocation ID 5a35ffec.
Run the
nomad alloc status 5a35ffec command to get the Task Resources
information about this allocation.
$ nomad alloc status 5a35ffecID = 5a35ffec-2af3-d36f-6dd8-d8453407d6a5Eval ID = 564f00dfName = example.cache[2]Node ID = c442fcaaNode Name = ubuntu-focalJob ID = exampleJob Version = 2Client Status = runningClient Description = Tasks are runningDesired Status = runDesired Description = <none>Created = 6m55s agoModified = 6m36s agoDeployment ID = c3ee5e5dDeployment Health = healthy Allocation AddressesLabel Dynamic Address*db yes 10.0.2.15:25465 -> 6379 Task "redis" is "running"Task ResourcesCPU Memory Disk Addresses3/57 MHz 992 KiB/10 MiB 300 MiB Task Events:Started At = 2020-11-02T16:29:11ZFinished At = N/ATotal Restarts = 0Last Restart = N/A Recent Events:Time Type Description2020-11-02T16:29:11Z Started Task started by client2020-11-02T16:29:11Z Task Setup Building Task Directory2020-11-02T16:29:11Z Received Task received by client
Note that the Task Resources section shows the updated values for memory and CPU given by the autoscaler.
From the earlier job status output, a
cache-lb allocation has allocation ID
8ceec492. Run the
nomad alloc status 8ceec492 command to get the Task
Resources information about this allocation.
$ nomad alloc status 8ceec492ID = 8ceec492-9549-e563-40d9-bf76a47940f2Eval ID = f0c24365Name = example.cache-lb[0]Node ID = c442fcaaNode Name = ubuntu-focalJob ID = exampleJob Version = 2Client Status = runningClient Description = Tasks are runningDesired Status = runDesired Description = <none>Created = 7m44s agoModified = 7m12s agoDeployment ID = c3ee5e5dDeployment Health = healthy Allocation AddressesLabel Dynamic Address*lb yes 10.0.2.15:29363 -> 6379 Task "nginx" is "running"Task ResourcesCPU Memory Disk Addresses0/57 MHz 1.5 MiB/10 MiB 300 MiB Task Events:Started At = 2020-11-02T16:28:54ZFinished At = N/ATotal Restarts = 0Last Restart = N/A Recent Events:Time Type Description2020-11-02T16:29:24Z Signaling Template re-rendered2020-11-02T16:29:16Z Signaling Template re-rendered2020-11-02T16:29:13Z Signaling Template re-rendered2020-11-02T16:29:04Z Signaling Template re-rendered2020-11-02T16:28:57Z Signaling Template re-rendered2020-11-02T16:28:55Z Signaling Template re-rendered2020-11-02T16:28:54Z Started Task started by client2020-11-02T16:28:53Z Driver Downloading image2020-11-02T16:28:53Z Task Setup Building Task Directory2020-11-02T16:28:52Z Received Task received by client
Here, also, the Task Resources section shows the updated values for memory and CPU given by the autoscaler.
»Generate load to create new recommendations
Create a parameterized dispatch job to generate load in your cluster. Create a
file named
das-load-test.nomad with the following content. You can also copy
this file from the
~/nomad-autoscaler/jobs folder in the Vagrant instance.
job "das-load-test" {
  datacenters = ["dc1"]
  type        = "batch"

  parameterized {
    payload       = "optional"
    meta_optional = ["requests", "clients"]
  }

  group "redis-benchmark" {
    task "redis-benchmark" {
      driver = "docker"

      config {
        image   = "redis:3.2"
        command = "redis-benchmark"
        args = [
          "-h", "${HOST}",
          "-p", "${PORT}",
          "-n", "${REQUESTS}",
          "-c", "${CLIENTS}",
        ]
      }

      template {
        destination = "secrets/env.txt"
        env         = true
        data        = <<EOF
{{ with service "redis-lb" }}{{ with index . 0 -}}
HOST={{.Address}}
PORT={{.Port}}
{{- end }}{{ end }}
REQUESTS={{ or (env "NOMAD_META_requests") "100000" }}
CLIENTS={{ or (env "NOMAD_META_clients") "50" }}
EOF
      }

      resources {
        cpu    = 100
        memory = 128
      }
    }
  }
}
Register the dispatch job with the
nomad job run das-load-test.nomad command.
$ nomad job run das-load-test.nomad
Job registration successful
Now, dispatch instances of the load-generation task by running the following:
$ nomad job dispatch das-load-test
Dispatched Job ID = das-load-test/dispatch-1604336299-70a3923e
Evaluation ID     = 1793fe23

==> Monitoring evaluation "1793fe23"
    Evaluation triggered by job "das-load-test/dispatch-1604336299-70a3923e"
    Allocation "589a1825" created: node "c442fcaa", group "redis-benchmark"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "1793fe23" finished with status "complete"
Each run of this job creates 100,000 requests against your Redis cluster using 50 Redis clients.
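Because the job declares requests and clients as optional metadata, you can also override the defaults at dispatch time with the -meta flag; the values below are arbitrary examples:
$ nomad job dispatch -meta requests=200000 -meta clients=100 das-load-test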
Once you have run the job, watch the Optimize view for new suggestions based on the latest activity.
»Exit and clean up
Exit the shell session on the Vagrant VM by typing
exit. Run the
vagrant destroy
command to stop and remove the VirtualBox instance. Delete the Vagrantfile once
you no longer want to use the test-drive environment.
»Learn more
If you have not already, review the Dynamic Application Sizing Concepts tutorial for more information about the individual parameters and available strategies.
You can also find more information in the Nomad Autoscaler Scaling Policies documentation, including how you can further customize the application-sizing block to your needs (percentile, cooldown periods, and sizing strategies).
|
https://learn.hashicorp.com/tutorials/nomad/dynamic-application-sizing?in=nomad/ecosystem
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
In my previous article “ActiveX Control Tutorial”, I tried to explain how to write a complete ActiveX
control. At the end of the article, I included two examples to show how this control can be used in different
applications. I didn’t explain how we can create a project which will act as a container for our ActiveX
control. In this tutorial, I will give the steps involved in writing a container application.
1. Start a new “ATL COM AppWizard” project named “AtlClientApp.”
2. In step 1 of the ATL AppWizard, choose “Executable (EXE)” for the Server Type and then click
Finish.
3. From the Insert menu, select “New ATL Object” to bring up the ATL Object Wizard. For the type
of object to insert, select Miscellaneous and pick Dialog to add a Dialog Object to the project.
Click Next to continue.
4. Provide a short name of “ClientDlg” for your new dialog. Accept the defaults for all other names
and click OK.
5. In the project workspace, click the “Resource View” tab. Double-click “AtlClientApp Resources”
to expand the resource tree. Double-click Dialog in the Resource tree and double-click to select
the dialog box resource “IDD_CLIENTDLG.”
6. Right click inside the dialog box. Select Insert ActiveX Control option from the pop up menu.
This will give you a list of all the ActiveX controls registered on your system.
Click on the ShapeCtl Class control and hit OK. This will insert the control we created in the ActiveX
Control tutorial. If you don’t have that control on your machine, download the source from
that article, compile it, and the control will be registered on your machine. Otherwise, I have
included the compiled DLL in this application’s source code. From the command prompt, use the
following command to register the control.
regsvr32 ActiveXCtl.dll
Click on the View, Properties menu in the Visual Studio IDE. This will give you the property page
to manipulate the design time properties of the control. You can choose the different options to
configure the look of the control.
7. In the project workspace, click the “Class View” tab. Double-click the “Globals” folder to see the
“_tWinMain” entry point, then double-click “_tWinMain” to jump to the code location.
8. Replace all the code in the function with the following:
extern "C" int WINAPI _tWinMain(HINSTANCE hInstance,
    HINSTANCE /*hPrevInstance*/, LPTSTR lpCmdLine, int /*nShowCmd*/)
{
    lpCmdLine = GetCommandLine(); // this line necessary for _ATL_MIN_CRT
#if _WIN32_WINNT >= 0x0400 & defined(_ATL_FREE_THREADED)
    HRESULT hRes = CoInitializeEx(NULL, COINIT_MULTITHREADED);
#else
    HRESULT hRes = CoInitialize(NULL);
#endif
    _ASSERTE(SUCCEEDED(hRes));
    _Module.Init(ObjectMap, hInstance, &LIBID_ATLCLIENTAPPLib);
    _Module.dwThreadID = GetCurrentThreadId();
    int nRet = 0;

    // Instantiate a new instance of the dialog box which is going to
    // contain the ActiveX control.
    CClientDlg *pDlg = NULL;
    pDlg = new CClientDlg;
    if (NULL != pDlg) {
        pDlg->Create(NULL);
        pDlg->ShowWindow(SW_NORMAL);

        MSG msg;
        // Now we need to run the message loop to receive the
        // messages/events fired for our dialog box.
        while (GetMessage(&msg, NULL, 0, 0)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        nRet = (int) msg.wParam;
        delete pDlg;
    }
    CoUninitialize();
    return nRet;
}
Since we are using this project as our control container, we don’t need any code that registers
this project’s ATL object. Therefore, all the code corresponding to that has been removed from
the _tWinMain function.
9. At the top of the file, add the following include statement after the others:
#include "ClientDlg.h"
10. Because we are building a client application, we do not need to perform COM registration when
we compile the EXE. To remove this step, select “Settings” from the Project menu, go to the
“Custom Build” tab, and remove all the commands that appear in the “Build Commands”
window. Do the same for the “Outputs” window, and click OK when done.
Now if you compile and run the application, the dialog box comes up with our ActiveX control placed in it.
As I mentioned in the earlier article, we implemented four events for our control: ClickIn, ClickOut,
DblClickIn, and DblClickOut. If you try to click inside or outside the control right now, nothing happens. We
need to include these events in our message map and establish a contact with our control to receive these
event messages. Follow these steps to include these event messages in our client application.
For each external object whose events you wish to handle, you need to import its type library into your COM class.
The following example imports the type library of our COM server (ActiveXCtl):
#import "ActiveXCtl.dll" raw_interfaces_only, no_namespace, named_guids
1. In the project workspace, go to ClassView and right-click on the CClientDlg class. Click on the Add Windows
Message Handlers option. This is the same step we take for implementing regular Windows controls.
2. In the “Class or object to handle” window, click on the resource ID of the ActiveX control. If you did not specify
any ID, it would be IDC_SHAPECTL1. You will see four events in the “New Windows messages/events”
window, namely ClickIn, ClickOut, DblClickIn, and DblClickOut. Click on these event messages one by
one and then click on the Add Handler button. Accept the default names for the functions offered by the IDE.
You can use names of your own but, to match your code with the one included with this article,
accept the default names. You don’t have to implement all the events; just for the sake of using all
of them, I included each one in my application.
3. ATL uses the template class IDispEventImpl to provide support for connection points in your ATL
COM object. A connection point allows your COM object to handle events fired from external COM
objects. These connection points are mapped with an event sink map, provided by your COM object.
class CClientDlg :
    public CAxDialogImpl<CClientDlg>,
    public IDispEventImpl<IDC_SHAPECTL, CClientDlg>
This is what the class inheritance for the container application will look like after the four events have been
added. If you want to support more events for your control, add an IDispEventImpl definition for
each one of those to the class inheritance.
4. In order for the event notifications to be handled by the proper function, your COM object must route each event to its correct handler. This is achieved by declaring an event sink map.
5. When you add these four events to your container, the wizard puts four sink entries in the ClientDlg.h file.
Make sure the following four lines are there in the file.
BEGIN_SINK_MAP(CClientDlg)
//Make sure the Event Handlers have __stdcall calling convention
SINK_ENTRY(IDC_SHAPECTL, 0x1, OnClickInShapectl)
SINK_ENTRY(IDC_SHAPECTL, 0x2, OnClickOutShapectl)
SINK_ENTRY(IDC_SHAPECTL, 0x3, OnDblClickInShapectl)
SINK_ENTRY(IDC_SHAPECTL, 0x4, OnDblClickOutShapectl)
END_SINK_MAP()
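The wizard declares the matching handlers as plain member functions of CClientDlg with the __stdcall calling convention. The parameter lists below are an assumption made for illustration; they must mirror the event definitions in the control's type library:
// Hypothetical signatures; verify against the control's type library.
void __stdcall OnClickInShapectl(long x, long y);
void __stdcall OnClickOutShapectl(long x, long y);
void __stdcall OnDblClickInShapectl(long x, long y);
void __stdcall OnDblClickOutShapectl(long x, long y);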
6. Your COM object must establish a connection with the event source of the
control. This procedure is referred to as “advising.”
After your object is finished with the external interfaces, the outgoing interfaces should be notified that
your COM object no longer uses them. This process is referred to as “unadvising.” Because of the
unique nature of COM objects, this procedure varies. ATL has simplified this procedure by providing
macros and helper functions to do the work of advising and unadvising for us.
In the initialization of the dialog box, establish the advise connection with the control by using
AtlAdviseSinkMap. In this function, we obtain the IUnknown pointer of our ShapeCtl and
cache it so that we can use it to call other methods on the control. To accomplish this, make use of
AtlAxGetControl, which takes the HWND handle of the control window. Use the GetDlgItem API
call to get this window handle. The function should look like this:
LRESULT CClientDlg::OnInitDialog(UINT uMsg, WPARAM wParam, LPARAM lParam, BOOL& bHandled)
{
    HRESULT hr = E_FAIL;
    // Cache the pointer to the shape control.
    m_ShapeCtlWnd = GetDlgItem(IDC_SHAPECTL);
    // Get the unknown pointer.
    AtlAxGetControl(m_ShapeCtlWnd, (IUnknown **) &m_pShapeCtl);
    // Make the connection to the control's IControlContainer interface.
    AtlAdviseSinkMap(this, TRUE);
    return 1; // Let the system set the focus
}
7. Honoring the COM specification, we will unadvise the IDispatch interface before
the container is destroyed or released. ATL has provided helpers to do that job. We will use the same
AtlAdviseSinkMap call to do that in the OnCancel function.
LRESULT CClientDlg::OnCancel(WORD wNotifyCode, WORD wID, HWND hWndCtl, BOOL& bHandled)
{
    AtlAdviseSinkMap(this, FALSE);
    DestroyWindow();
    PostQuitMessage(0);
    return 0;
}
Now you can compile and run your container application. I have put some code inside two of the
four events. When you click inside the control, it redraws itself with a radius increased by 5 units.
If you click outside the control, the radius is decreased by 5 units and the control is redrawn. To
obtain the current value of the control's radius, we make use of the interface pointer of the
control, which we saved at the initialization of the dialog box.
hr = m_pShapeCtl->get_Radius(&radius);
if (FAILED(hr)) {
    MessageBox(_T("Failed to get radius of control"), _T("Information"), MB_OK);
    return;
}
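For completeness, here is a sketch of how the ClickIn handler described above might combine the radius getter and setter. get_Radius comes from the imported type library; put_Radius is assumed to be the matching setter and is not shown in this article:
// Sketch only: increase the radius by 5 units on each inside click.
void __stdcall CClientDlg::OnClickInShapectl(long x, long y)
{
    long radius = 0;
    HRESULT hr = m_pShapeCtl->get_Radius(&radius);
    if (FAILED(hr)) {
        MessageBox(_T("Failed to get radius of control"), _T("Information"), MB_OK);
        return;
    }
    m_pShapeCtl->put_Radius(radius + 5); // the control redraws itself
}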
This makes use of the vtable of the control server. Containers implemented in VB or scripting
languages like VBScript and JScript make use of IDispatch::Invoke to manipulate control
properties and handle events. In the next article (coming very soon), I will try to explain this
technique too.
The included code has been compiled and tested with VC++ 6.0 (SP-1) on NT-4.0 (SP-4).
|
https://www.codeguru.com/soap/atl-client-application-tutorial/
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
Class CompositeType
- java.lang.Object
- javax.management.openmbean.OpenType<CompositeData>
- javax.management.openmbean.CompositeType
- All Implemented Interfaces:
Serializable
public class CompositeType extends OpenType<CompositeData>
The CompositeType class is the open type class whose instances describe the types of CompositeData values.
- Since:
- 1.5
- See Also:
- Serialized Form
Field Summary
Fields inherited from class javax.management.openmbean.OpenType
ALLOWED_CLASSNAMES, ALLOWED_CLASSNAMES_LIST
Method Summary
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
Methods inherited from class javax.management.openmbean.OpenType
getClassName, getDescription, getTypeName, isArray
Constructor Detail
CompositeType
public CompositeType(String typeName, String description, String[] itemNames, String[] itemDescriptions, OpenType<?>[] itemTypes) throws OpenDataException
Constructs a CompositeType instance.
The Java class name of composite data values this composite type represents (i.e., the class name returned by the getClassName method) is set to the string value returned by CompositeData.class.getName().
- Parameters: typeName, description, itemNames, and itemDescriptions cannot be null or an empty string.
itemTypes - The open type instances, in the same order as itemNames, describing the items contained in the composite data values described by this CompositeType instance; should be of the same size as itemNames; no element can be null.
- Throws: OpenDataException
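As a brief illustration of this constructor and of the methods documented below (the item names and types here are invented for the example, not taken from the specification):
import javax.management.openmbean.*;

public class CompositeTypeExample {
    public static void main(String[] args) throws OpenDataException {
        String[] itemNames = { "host", "port" };
        String[] itemDescriptions = { "Remote host name", "Remote TCP port" };
        OpenType<?>[] itemTypes = { SimpleType.STRING, SimpleType.INTEGER };

        // Describe a hypothetical "connection" composite value with two items.
        CompositeType connectionType = new CompositeType(
                "connection", "A remote connection endpoint",
                itemNames, itemDescriptions, itemTypes);

        System.out.println(connectionType.containsKey("host")); // true
        System.out.println(connectionType.getType("port"));     // the INTEGER open type
    }
}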
Method Detail
containsKey
public boolean containsKey(String itemName)
Returns true if this CompositeType instance defines an item whose name is itemName.
- Parameters: itemName - the name of the item.
- Returns: true if an item of this name is present.
getDescription
public String getDescription(String itemName)
Returns the description of the item whose name is itemName, or null if this CompositeType instance does not define any item whose name is itemName.
- Parameters: itemName - the name of the item.
- Returns: the description.
getType
public OpenType<?> getType(String itemName)
Returns the open type of the item whose name is itemName, or null if this CompositeType instance does not define any item whose name is itemName.
- Parameters: itemName - the name of the item.
- Returns: the type.
keySet
public Set<String> keySet()
Returns an unmodifiable Set view of all the item names defined by this CompositeType instance. The set's iterator will return the item names in ascending order.
isValue
public boolean isValue(Object obj)
Tests whether obj is a value which could be described by this CompositeType instance.
If obj is null or is not an instance of javax.management.openmbean.CompositeData, isValue returns false.
If obj is an instance of javax.management.openmbean.CompositeData, then let ct be its CompositeType as returned by CompositeData.getCompositeType(). The result is true if this is assignable from ct. This means that:
- this.getTypeName() equals ct.getTypeName(), and
- there are no item names present in this that are not also present in ct, and
- for every item in this, its type is assignable from the type of the corresponding item in ct.
A TabularType is assignable from another TabularType if they have the same typeName and index name list, and the row type of the first is assignable from the row type of the second.
An ArrayType is assignable from another ArrayType if they have the same dimension; and both are primitive arrays or neither is; and the element type of the first is assignable from the element type of the second.
In every other case, an OpenType is assignable from another OpenType only if they are equal.
These rules mean that extra items can be added to a CompositeData without making it invalid for a CompositeType that does not have those items.
- Specified by: isValue in class OpenType<CompositeData>
- Parameters: obj - the value whose open type is to be tested for compatibility with this CompositeType instance.
- Returns: true if obj is a value for this composite type, false otherwise.
equals
public boolean equals(Object obj)
Compares the specified obj parameter with this CompositeType instance for equality.
Two CompositeType instances are equal if and only if all of the following statements are true:
- their type names are equal
- their items' names and types are equal
- Specified by: equals in class OpenType<CompositeData>
- Parameters: obj - the object to be compared for equality with this CompositeType instance; if obj is null, equals returns false.
- Returns: true if the specified object is equal to this CompositeType instance.
- See Also: Object.hashCode(), HashMap
hashCode
public int hashCode()
Returns the hash code value for this CompositeType instance.
The hash code of a CompositeType instance is the sum of the hash codes of all elements of information used in equals comparisons (i.e., name, item names, item types). This ensures that t1.equals(t2) implies that t1.hashCode()==t2.hashCode() for any two CompositeType instances t1 and t2, as required by the general contract of the method Object.hashCode().
As CompositeType instances are immutable, the hash code for this instance is calculated once, on the first call to hashCode, and then the same value is returned for subsequent calls.
- Specified by: hashCode in class OpenType<CompositeData>
- Returns: the hash code value for this CompositeType instance
- See Also: Object.equals(java.lang.Object), System.identityHashCode(java.lang.Object)
toString
public String toString()
Returns a string representation of this CompositeType instance. As CompositeType instances are immutable, the string representation for this instance is calculated once, on the first call to toString, and then the same value is returned for subsequent calls.
- Specified by: toString in class OpenType<CompositeData>
- Returns: a string representation of this CompositeType instance
|
https://docs.oracle.com/javase/9/docs/api/javax/management/openmbean/CompositeType.html
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
Windows provides a thread pool mechanism built around the completion port. It lets you:
1. Call a function asynchronously (work item)
2. Call a function at timed intervals (timer item)
3. Call a function when a kernel object is triggered (wait item)
4. Call a function when an asynchronous I/O request completes (I/O item)
1. Call functions asynchronously
Create a work item and submit tasks multiple times.
PTP_WORK CreateThreadpoolWork(PTP_WORK_CALLBACK pfnWorkHandler, PVOID pvContext, PTP_CALLBACK_ENVIRON pcbe);
pvContext: the value passed to the callback function
pcbe: related to the customization of the thread pool
pfnWorkHandler: a function pointer; the prototype of the callback looks like this:
VOID CALLBACK WorkCallback(PTP_CALLBACK_INSTANCE Instance, PVOID Context, PTP_WORK Work); // Work: the value returned by CreateThreadpoolWork
Submit a task request to the thread pool:
VOID SubmitThreadpoolWork(PTP_WORK Work);
If you submit the same work item multiple times, the callback function receives the same Context value on every execution.
From another thread, to cancel the work item or to suspend the calling thread until the work item finishes processing, call:
VOID WaitForThreadpoolWorkCallbacks(PTP_WORK pWork,BOOL bCancelPendingCallbacks);
pWork: the work item to wait for
bCancelPendingCallbacks: TRUE attempts to cancel the previously submitted work item. If a thread is already processing the work item, it is not interrupted and runs to completion. If the submitted work item has not yet been picked up by any thread, the function marks it as cancelled and returns immediately; when the completion port later fetches the work item, the thread pool does not call the callback function. FALSE suspends the calling thread until the work item is completed and the thread that processed it is ready to process the next work item.
Note: if a PTP_WORK object has submitted tasks multiple times, TRUE waits only for the currently running tasks, while FALSE waits for all outstanding tasks.
Close work item:
VOID CloseThreadpoolWork(PTP_WORK pWork);
eg: implement batch processing with thread pool work item function.
#include <iostream>
#include <Windows.h>
#include <threadpoolapiset.h>
using namespace std;

PTP_WORK g_pWorkItem = NULL;
LONG g_nCurrentWork = 0;

VOID WINAPI HandleTask(PTP_CALLBACK_INSTANCE Instance, PVOID Context, PTP_WORK pWork)
{
    LONG CurrentTask = InterlockedIncrement(&g_nCurrentWork);
    cout << "TASK " << CurrentTask << " begin..." << endl;
    // Simulate a lot of work
    Sleep(CurrentTask * 1000);
    cout << "TASK " << CurrentTask << " end..." << endl;
    // InterlockedDecrement returns the new value; 0 means every task is done
    if (InterlockedDecrement(&g_nCurrentWork) == 0) {
        cout << "Work is handled." << endl;
    }
}

VOID StartBatch()
{
    // Submit four tasks with the same work item
    for (int i = 0; i < 4; i++) {
        SubmitThreadpoolWork(g_pWorkItem);
    }
    cout << "Four tasks are submitted." << endl;
}

int main()
{
    g_pWorkItem = CreateThreadpoolWork(HandleTask, NULL, NULL);
    if (g_pWorkItem == NULL) {
        return 1;
    }
    StartBatch();
    // Wait for the tasks to complete before closing
    WaitForThreadpoolWorkCallbacks(g_pWorkItem, FALSE);
    CloseThreadpoolWork(g_pWorkItem);
    return 0;
}
The results are as follows:
2. Call a function at timed intervals
PTP_TIMER CreateThreadpoolTimer(PTP_TIMER_CALLBACK pfnTimerCallback,PVOID pvContext,PTP_CALLBACK_ENVIRON pcbe);
Similar to the function for creating work items, the function prototype referred to by the function pointer passed in is as follows:
VOID CALLBACK TimeoutCallback(PTP_CALLBACK_INSTANCE pINSTANCE,PVOID Context,PTP_TIMER pTimer);
Register the timer, or modify the timer properties after registration:
VOID SetThreadpoolTimer(PTP_TIMER pTimer, PFILETIME pftDueTime, DWORD msPeriod, DWORD msWindowLength);
pTimer: the PTP_TIMER object returned by the creation function.
pftDueTime: a negative value is a relative time in 100ns units (-1 means start immediately); a positive value is an absolute time in 100ns units, counted from January 1, 1600.
msPeriod: the interval, in milliseconds, between periodic calls; pass 0 if you only want the timer to trigger once.
msWindowLength: adds randomness to the execution time of the callback function, so that the callback triggers between the currently set trigger time and the trigger time plus this parameter value.
When there are multiple timers whose trigger frequencies are almost the same, this helps avoid too many collisions.
It also lets the thread pool batch multiple timers into a group. If a large number of timers trigger at almost the same time, it is better to batch them to avoid too many context switches. Suppose timer A is set to fire at 5 microseconds with an msWindowLength of 2, and timer B at 6 microseconds with an msWindowLength of 2. The thread pool knows that timer A expects its callback between 5 and 7 microseconds and timer B between 6 and 8 microseconds. In this case, the thread pool knows it is more efficient to batch the two timers at 6 microseconds: it wakes up only one thread, which executes the callback function of timer A and then the callback function of timer B.
To determine whether the timer has been set (that is, whether pftDueTime is not NULL):
IsThreadpoolTimerSet(PTP_TIMER pti);
The usage of these two functions is the same as for work items:
VOID WaitForThreadpoolTimerCallbacks(PTP_TIMER pTimer, BOOL bCancelPendingCallbacks);
VOID CloseThreadpoolTimer(PTP_TIMER pTimer);
eg:
#include <iostream>
#include <Windows.h>
#include <tchar.h>
using namespace std;

TCHAR g_WindowCaption[100] = TEXT("Timer");
LONG g_SelCount = 10;

VOID WINAPI MsgBoxCallback(PTP_CALLBACK_INSTANCE pINSTANCE, PVOID Context, PTP_TIMER pTimer)
{
    HWND hWnd = FindWindow(NULL, g_WindowCaption);
    if (hWnd == NULL) {
        return;
    }
    if (g_SelCount == 1) { return; }
    TCHAR MsgText[100];
    _stprintf_s(MsgText, _countof(MsgText), TEXT("Countdown %d seconds"), --g_SelCount);
    MessageBox(NULL, MsgText, g_WindowCaption, MB_OK);
}

int main()
{
    PTP_TIMER pTimer = CreateThreadpoolTimer(MsgBoxCallback, NULL, NULL);
    if (pTimer == NULL) { return 0; }

    ULARGE_INTEGER uiRelativeStartTime;
    uiRelativeStartTime.QuadPart = (LONGLONG) -(10000000); // 1s in 100ns units (negative = relative)
    FILETIME ftRelativeStartTime;
    ftRelativeStartTime.dwHighDateTime = uiRelativeStartTime.HighPart;
    ftRelativeStartTime.dwLowDateTime = uiRelativeStartTime.LowPart;

    SetThreadpoolTimer(pTimer, &ftRelativeStartTime, 1000, 0); // interval 1s, in milliseconds
    MessageBox(NULL, TEXT("Last 10 seconds."), g_WindowCaption, MB_OK);

    WaitForThreadpoolTimerCallbacks(pTimer, FALSE);
    CloseThreadpoolTimer(pTimer);
    return 0;
}
The results are as follows:
3. Call a function when the kernel object is triggered
Sometimes multiple threads wait for the same object, which is an extreme waste of system resources: each thread has its own stack, and a large number of CPU instructions are needed to create and destroy threads. The thread pool is a better fit here.
PTP_WAIT CreateThreadpoolWait(PTP_WAIT_CALLBACK pfnWaitCallback, PVOID pvContext, PTP_CALLBACK_ENVIRON pcbe);
pfnWaitCallback: the prototype of the callback function it refers to is:
VOID WINAPI WaitCallback(PTP_CALLBACK_INSTANCE pInstance, PVOID pvContext, PTP_WAIT Wait, TP_WAIT_RESULT WaitResult);
Bind a kernel object to this thread pool:
VOID SetThreadpoolWait(PTP_WAIT pWaitItem,HANDLE hObject,PFILETIME pftTimeOut);
pftTimeOut: pass 0 for no waiting at all; pass NULL for infinite waiting; a negative value is a relative time, a positive value an absolute time.
When the kernel object is triggered or the timeout expires, the thread pool calls the callback function. WaitResult, the last parameter of the callback function, indicates why the callback was called: WAIT_OBJECT_0 means the kernel object was signaled before the timeout; WAIT_TIMEOUT means it was not triggered before the timeout.
If the callback function is called, the corresponding wait item enters the inactive state and must be registered again by calling SetThreadpoolWait. You can pass in a different kernel object, or NULL to remove the wait item from the thread pool.
VOID WaitForThreadpoolWaitCallbacks(PTP_WAIT pWait, BOOL bCancelPendingCallbacks);
VOID CloseThreadpoolWait(PTP_WAIT pWait);
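For symmetry with the work-item and timer examples above, here is a minimal sketch of a wait item that binds an event kernel object to the thread pool; the flow shown (create, register, signal, wait, close) is illustrative only:
#include <Windows.h>
#include <iostream>

VOID CALLBACK EventCallback(PTP_CALLBACK_INSTANCE, PVOID, PTP_WAIT, TP_WAIT_RESULT WaitResult)
{
    // WAIT_OBJECT_0 means the object was signaled before any timeout.
    if (WaitResult == WAIT_OBJECT_0)
        std::cout << "Event was signaled." << std::endl;
}

int main()
{
    HANDLE hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    PTP_WAIT pWait = CreateThreadpoolWait(EventCallback, NULL, NULL);
    if (hEvent == NULL || pWait == NULL) return 1;

    SetThreadpoolWait(pWait, hEvent, NULL); // NULL timeout: wait forever
    SetEvent(hEvent);                       // trigger the kernel object

    WaitForThreadpoolWaitCallbacks(pWait, FALSE);
    CloseThreadpoolWait(pWait);
    CloseHandle(hEvent);
    return 0;
}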
4. Call a function when the asynchronous I/O request is completed
PTP_IO CreateThreadpoolIo(HANDLE hDevice, PTP_WIN32_IO_CALLBACK pfnIoCallback, PVOID pvContext, PTP_CALLBACK_ENVIRON pcbe);
hDevice: the file / device handle associated with the internal completion port of the thread pool. Return value of CreateFile, socket, etc.
pfnIoCallback: the function prototype is as follows:
VOID WINAPI OverlappedCompletionRoutine(PTP_CALLBACK_INSTANCE pInstance, PVOID pvContext, PVOID pOverlapped, ULONG IoResult, ULONG_PTR NumberOfBytesTransferred, PTP_IO pIo);
pOverlapped: the OVERLAPPED structure of the completed request
IoResult: the operation result; NO_ERROR if the I/O succeeded
NumberOfBytesTransferred: the number of bytes transferred
After the thread pool I/O object is created, the file/device embedded in the I/O item is associated with the I/O completion port inside the thread pool. Call the following function before issuing each asynchronous I/O request so that the thread pool will invoke your callback when it completes:
VOID StartThreadpoolIo(PTP_IO pIo);
If you want the thread pool to stop calling our callback function for an issued I/O request:
VOID CancelThreadpoolIo(PTP_IO pIo);
This function must also be called if the ReadFile or WriteFile call fails when the request is made; that is, if the return value of these two functions is FALSE and GetLastError returns something other than ERROR_IO_PENDING (ERROR_IO_PENDING only means the request is still in progress, not that it failed).
When you are done with the file/device or socket, release its relationship with the thread pool:
VOID CloseThreadpoolIo(PTP_IO pIo);
Another thread can wait for an outstanding I/O request to complete:
VOID WaitForThreadpoolIoCallbacks(PTP_IO pIo, BOOL bCancelPendingCallbacks);
bCancelPendingCallbacks: if TRUE, the callback function will not be called when the request completes (if it has not already been called).
|
https://programmer.ink/think/windows-thread-pool-functions.html
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
Java Program to print the smallest element in an array
In this example, we will create a Java program to find the smallest element present in an array. This can be done by defining a variable min that initially holds the value of the first element. We then loop through the array, comparing min with each element. If an element's value is less than min, we store that value in min.
Algorithm
- Initialize min with the first element of the array.
- Loop through the array, comparing each element with min.
- If the current element is less than min, assign it to min.
- After the loop, min holds the smallest element.
Program:
public class Main {
    public static void main(String[] args) {
        int[] arr = new int[] {35, 11, 17, 53, 61, 43, 94, 51, 32, 87};
        int min = arr[0];
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] < min)
                min = arr[i];
        }
        System.out.println("Smallest element present in given array: " + min);
    }
}
Output
Smallest element present in given array: 11
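For comparison, the same result can be obtained in one line with Java streams; a minimal sketch:
import java.util.Arrays;

public class MinWithStreams {
    public static void main(String[] args) {
        int[] arr = {35, 11, 17, 53, 61, 43, 94, 51, 32, 87};
        // min() returns an OptionalInt; the array is non-empty here.
        int min = Arrays.stream(arr).min().getAsInt();
        System.out.println("Smallest element present in given array: " + min);
    }
}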
|
https://www.phptpoint.com/java-program-to-print-the-smallest-element-in-an-array/
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
Morphing Terrain Issues
Synopsis
Using the Env_terrainmorph does not update the collision hull. Tracing through the code leads you to engine->ApplyTerrainMod(), and the trail basically ends there for the SDK. Can anything be done to "fix" this? Morphable terrain isn't very useful when it doesn't modify the collisions. I heard there was an E3 Demo of Source and the ability to morph the terrain along with updating the collisions was shown, yet the functionality appears to be absent now.
Work-arounds
The only work-around that exists is to add in a func_movelinear or other moving brush, and simply have that move under the terrain to make the brushes' shape match what the terrain is supposed to be. This is not, however, always practical.
If the terrain represents snow, however, the fact that the collision hulls are not updated can actually lend an illusion of depth to your snow displacements.
The problem
The following is engine code that is included in the SDK in src\public\dispcoll_common.cpp:
//-----------------------------------------------------------------------------
// Purpose:
//-----------------------------------------------------------------------------
void CDispCollTree::ApplyTerrainMod( ITerrainMod *pMod )
{
#if 0
    int nVertCount = GetSize();
    for ( int iVert = 0; iVert < nVertCount; ++iVert )
    {
        pMod->ApplyMod( m_aVerts[iVert].m_vecPos, m_aVerts[iVert].m_vecOrigPos );
        pMod->ApplyMod( m_aVerts[iVert].m_vecPos, m_aVerts[iVert].m_vecOrigPos );
    }

    // Setup/create the leaf nodes first so the recursion can use this data to stop.
    AABBTree_CreateLeafs();

    // Generate bounding boxes.
    AABBTree_GenerateBoxes();

    // Create the bounding box of the displacement surface + the base face.
    AABBTree_CalcBounds();
#endif
}
|
https://developer.valvesoftware.com/w/index.php?title=Morphing_Terrain_Issues&diff=cur&oldid=155514&printable=yes
|
CC-MAIN-2021-39
|
en
|
refinedweb
|
I’m continuing to upgrade my podcast site to .NET Core 2.1 running ASP.NET Core 2.1. I’m using Razor Pages having converted my old Web Matrix Site (like 8 years old) and it’s gone very smoothly. I’ve got a ton of blog posts queued up as I’m learning a ton. I’ve added Unit Testing for the Razor Pages as well as more complete Integration Testing for checking things "from the outside" like URL redirects.
My podcast has recently switched away from a custom database over to using SimpleCast and their REST API for the back end. There’s a number of ways to abstract that API away as well as the HttpClient that will ultimately make the call to the SimpleCast backend. I am a fan of the Refit library for typed REST Clients and there are ways to integrate these two things but for now I’m going to use the new HttpClientFactory introduced in ASP.NET Core 2.1 by itself.
Next I’ll look at implementing a Polly Handler for resilience policies to be used like Retry, WaitAndRetry, and CircuitBreaker, etc. (I blogged about Polly in 2015 – you should check it out) as it’s just way too useful to not use.
HttpClient Factory lets you preconfigure named HttpClients with base addresses and default headers so you can just ask for them later by name.
public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient("SomeCustomAPI", client =>
    {
        client.BaseAddress = new Uri("");
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        client.DefaultRequestHeaders.Add("User-Agent", "MyCustomUserAgent");
    });

    services.AddMvc();
}
Then later you ask for it and you’ve got less to worry about.
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

namespace MyApp.Controllers
{
    public class HomeController : Controller
    {
        private readonly IHttpClientFactory _httpClientFactory;

        public HomeController(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        public async Task<IActionResult> Index()
        {
            var client = _httpClientFactory.CreateClient("SomeCustomAPI");
            return Ok(await client.GetStringAsync("/api"));
        }
    }
}
I prefer a TypedClient and I just add it by type in Startup.cs…just like above except:
services.AddHttpClient<SimpleCastClient>();
Note that I could put the BaseAddress in multiple places depending on if I’m calling my own API, a 3rd party, or some dev/test/staging version. I could also pull it from config:
services.AddHttpClient<SimpleCastClient>(client => client.BaseAddress = new Uri(Configuration["SimpleCastServiceUri"]));
Again, I’ll look at ways to make this even simpler AND more robust (it has no retries, etc) with Polly soon.
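The post does not show the SimpleCastClient class itself, so here is a minimal sketch of what such a typed client might look like; the endpoint path and the string return type are assumptions for illustration, not the author's actual code:
using System.Net.Http;
using System.Threading.Tasks;

public class SimpleCastClient
{
    private readonly HttpClient _client;

    // HttpClientFactory injects a preconfigured HttpClient here.
    public SimpleCastClient(HttpClient client)
    {
        _client = client;
    }

    public async Task<string> GetShows()
    {
        // "/v1/podcasts" is a placeholder path for illustration only.
        return await _client.GetStringAsync("/v1/podcasts");
    }
}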
Once I have the client I can use it from another layer, or just inject it with [FromServices] whenever I have a method that needs one:
public class IndexModel : PageModel
{
    public async Task OnGetAsync([FromServices]SimpleCastClient client)
    {
        var shows = await client.GetShows();
    }
}
Or in the constructor:
public class IndexModel : PageModel
{
    private SimpleCastClient _client;

    public IndexModel(SimpleCastClient Client)
    {
        _client = Client;
    }

    public async Task OnGetAsync()
    {
        var shows = await _client.GetShows();
    }
}
Another nice side effect is that HttpClients that are created from the HttpClientFactory give me free logging:
info: System.Net.Http.ShowsClient.LogicalHandler[100]
      Start processing HTTP request GET
System.Net.Http.ShowsClient.LogicalHandler:Information: Start processing HTTP request GET
info: System.Net.Http.ShowsClient.ClientHandler[100]
      Sending HTTP request GET
System.Net.Http.ShowsClient.ClientHandler:Information: Sending HTTP request GET
info: System.Net.Http.ShowsClient.ClientHandler[101]
      Received HTTP response after 882.8487ms - OK
System.Net.Http.ShowsClient.ClientHandler:Information: Received HTTP response after 882.8487ms - OK
info: System.Net.Http.ShowsClient.LogicalHandler[101]
      End processing HTTP request after 895.3685ms - OK
System.Net.Http.ShowsClient.LogicalHandler:Information: End processing HTTP request after 895.3685ms - OK
It was super easy to move my existing code over to this model, and I’ll keep simplifying AND adding other features as I learn more.
Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!
|
http://ugurak.net/index.php/2018/05/08/httpclientfactory-for-typed-httpclient-instances-in-asp-net-core-2-1/
|
CC-MAIN-2018-47
|
en
|
refinedweb
|
Senstools
SimpliciTI is considered a library and not an operating system, since it does not implement a task handler.
SimpliciTI's main features are:
It supports 2 basic topologies: a strictly peer-to-peer topology, and a star topology in which the star hub is a peer to every other device.
SimpliciTI allows the user to implement three device types: End Device, Range Extender, and Access Point. Note that a hardware device may host several SimpliciTI devices, either of the same type or of different types.
End Device: the base element of the network. It generally supports most of the sensors or actuators of the network. A strictly peer-to-peer network is exclusively composed of end devices (and possibly range extenders).
Access Point: it supports such features and functions as store-and-forward support for sleeping End Devices and management of network devices in terms of membership permissions, linking permissions, security keys, etc. The Access Point can also support End Device functionality. In the star topology, the Access Point acts as the hub of the network.
Range Extender: these devices are intended to repeat frames in order to extend the network range. Due to their function, they are always on. Networks are currently limited to 4 range extenders.
Application layer: this is the only layer that the developer needs to implement. It is where the application is developed (to manage sensors, for example) and where network communication is implemented, using the SimpliciTI network APIs or network applications. Note that it is in this layer that the developer needs to implement reliable transport if required, as there is no Transport layer.
Network layer: this layer manages the Rx and Tx queues and dispatches frames to their destination. The destination is always an application designated by a port number. Network applications are internal peer-to-peer objects intended to manage the network. They work on a predefined port and are not intended to be used by the developer (except Ping, for debugging purposes). Their usage depends on the SimpliciTI device type.
Source code files for network layer are located in /Components/simpliciti.
This layer may be divided into 2 entities:
- the minimal RF interface with the radio (/Components/mrfi);
- the board support package, which exposes the hardware toward the network layer (/Components/bsp).
SimpliciTI supports 5 families of TI radios:
Source code files for these 5 families of radio are located in the /mrfi/radios folder.
SimpliciTI supports 2 families of microcontrollers: Intel 8051 and TI MSP430 (source code directory: /bsp/mcus).
The following boards are supported: CC2430DB, CC2530EM, EXP461x, EZ430RF, RFUSB, SRF04EB, SRF05EB (source code directory: /bsp/boards).
As a convenience, SimpliciTI also supports LEDs and button/switch peripherals attached to GPIO pins of the microcontroller. But no other services are provided, such as UART drivers, LCD drivers, or timer services (source code directory: /bsp/drivers).
The APIs enable the user to implement a reliable network with little effort. But keep in mind that the resulting network sacrifices flexibility for simplicity. Here are the different APIs supplied by SimpliciTI:
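The list of API calls has been lost from this page. As a hedged illustration only, SimpliciTI exposes its network API through a small set of SMPL_* functions; the sketch below shows the usual flow for a minimal peer (error handling omitted, exact prototypes are in the SimpliciTI headers):
#include "bsp.h"
#include "nwk_types.h"
#include "nwk_api.h"

/* Minimal SimpliciTI peer: join the network, link to a peer that
 * called SMPL_LinkListen, then send a two-byte message. */
static uint8_t msg[2] = {0x01, 0x02};
static linkID_t lid;

void example(void)
{
    BSP_Init();                        /* board support initialization */
    SMPL_Init(NULL);                   /* no receive callback */
    SMPL_Link(&lid);                   /* pair with a listening peer */
    SMPL_Send(lid, msg, sizeof(msg));  /* transmit over the link */
}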
If you want to be sure to use the latest available version of SimpliciTI, you can put the wsn430 board code of the old version SimpliciTI into the new one.
SimpliciTI is written to be compiled with the IAR Embedded Workbench environment from IAR Systems. So, in order to make it compilable with MSPGCC, some minor changes have to be performed on the SimpliciTI code.
/Components/bsp/mcus/bsp_msp430_defs.h
Replace:
#error "ERROR: Unknown compiler."
by:
#ifdef __GNUC__
/* ... GCC-specific definitions go here (see the patched archive) ... */
#else
#error "ERROR: Unknown compiler."
#endif
Replace:
typedef signed char int8_t;
typedef signed short int16_t;
typedef signed long int32_t;
typedef unsigned char uint8_t;
typedef unsigned short uint16_t;
typedef unsigned long uint32_t;
by:
#ifndef __GNUC__
typedef signed char int8_t;
typedef signed short int16_t;
typedef signed long int32_t;
typedef unsigned char uint8_t;
typedef unsigned short uint16_t;
typedef unsigned long uint32_t;
#endif
/Components/bsp/drivers/code/bsp_generic_buttons.h
Replace (l. 197):
#error "ERROR: Debounce delay macro is missing."
by:
#ifndef __GNUC__
#error "ERROR: Debounce delay macro is missing."
#endif
/Components/simpliciti/nwk/nwk_QMgmt.c
Replace (l. 40):
#include <intrinsics.h>
by:
#ifndef __GNUC__
#include <intrinsics.h>
#endif
Just download and extract the simpliciti-wsn430-v1.1.1.tar.gz archive. SimpliciTI 1.1.1 ported for WSN430 and compilable with MSPGCC is ready to use.
SimpliciTI source code is in the /Components directory, including WSN430 board source code (located in /Components/bsp/boards). Examples are stored in the /Projects/Examples folder. SimpliciTI official documentations are available in the /Documents directory.
|
http://senstools.gforge.inria.fr/doku.php?id=lib:simpliciti
|
CC-MAIN-2018-47
|
en
|
refinedweb
|
Implementing conversion operators facilitates the implicit or explicit casting of user-defined types to built-in types or even other user-defined types. Developers implement a conversion operator to explain to the compiler how to interpret a user-defined type in the context of another type. Like mathematical and logical operators, the primary reason for implementing conversion operators is convenience. It is never required. You could as easily expose ToType methods, such as ToInt, ToFloat, or ToDecimal.
An implicit cast is considered a secure cast, whereas explicit casting is required for casting that is not secure. For built-in types, a secure cast is available when there is no potential loss of precision or accuracy. When there is potential loss of precision or accuracy, an explicit cast is required. For example, an int can be assigned to long implicitly. Eight bytes are reserved for a long value and four bytes are reserved for an int value. The int value will be promoted to a long. The promotion occurs silently—no warning or notice. The reverse, in which a long is assigned to an int, requires an explicit cast. This is exhibited in the following code:
int a = 5;
long b = 10;
b = a;        // implicit cast
a = (int) b;  // explicit cast
C# does not support conversion constructors. A conversion constructor is a one-argument constructor used to create an instance of a type from a different type. Conversion constructors are supported in C++ but are not allowed in C#. Conversion constructors were convenient—too convenient. Conversion constructors were sometimes called transparently when a compiler error for mismatched types was more appropriate.
You cannot overload the cast operator directly. Instead, overload the cast operator selectively with conversion operator methods. This is the syntax of a conversion operator:
public static implicit operator returntype(classtype obj)
public static explicit operator returntype(classtype obj)
For the conversion operator syntax, there are many similarities when compared with mathematical and relational operators. Conversion operators must be public and static. Other modifiers, such as virtual and sealed, are syntax errors. Conversion operators that are implicit do not require casting, whereas explicit conversion operators require casting for use. I recommend explicit casting in all circumstances. Implicit casting allows the conversion function to be called transparently and sometimes inadvertently, which may cause undetected side effects. With explicit casting, developers affirmatively state their intentions through casting. Either the return or operand of the conversion operator must be the same as the containing type. If converting to the containing type, the return type should be the containing class. Notice that the return type is after the operator keyword, not before. When converting from the containing type, the operand should be the same type as the containing class.
Here is sample code of implicit and explicit conversion methods. The ZClass has two conversion operators. The first conversion operator converts ZClass to an int. The second conversion operator converts a YClass to a ZClass.
using System;

namespace Donis.CSharpBook {
    public class Starter {
        public static void Main() {
            ZClass obj1 = new ZClass(5, 10);
            int ival = obj1;
            YClass obj2 = new YClass(5);
            // ZClass obj3 = obj2;  [ error ]
            ZClass obj3 = (ZClass) obj2;
        }
    }

    public class ZClass {
        public ZClass(int _fielda, int _fieldb) {
            fielda = _fielda;
            fieldb = _fieldb;
        }

        public static implicit operator int(ZClass curr) {
            return curr.fielda + curr.fieldb;
        }

        public static explicit operator ZClass(YClass curr) {
            return new ZClass(curr.field / 2, curr.field / 2);
        }

        public int fielda, fieldb;
    }

    public class YClass {
        public YClass(int _field) {
            propField = _field;
        }

        private int propField;

        public int field {
            get { return propField; }
            set { propField = value; }
        }
    }
}
Conversion operators are often overloaded to provide the illusion of a user-defined type, which can support a variety of casts. In the following code, the conversion operator is overloaded several times to allow the conversion of ZClass instances to a variety of types:
using System;

namespace Donis.CSharpBook {
    public class ZClass {
        public ZClass(int _fielda, int _fieldb) {
            fielda = _fielda;
            fieldb = _fieldb;
        }

        public static explicit operator int(ZClass curr) {
            return curr.fielda + curr.fieldb;
        }

        public static explicit operator float(ZClass curr) {
            return (float) (curr.fielda + curr.fieldb);
        }

        public static explicit operator short(ZClass curr) {
            return (short) (curr.fielda + curr.fieldb);
        }

        // and so on

        public int fielda, fieldb;
    }
}
The operator string operator is a special conversion operator that converts a user-defined type to a string. This appears to overlap with the ToString method, which is inherited from System.Object. Actually, every type is also automatically provided an operator string method, which simply calls the polymorphic ToString method. Look at the following code. If a class has both a ToString and operator string method, which method is called in the Console.WriteLine?
using System;

namespace Donis.CSharpBook {
    public class Starter {
        public static void Main() {
            ZClass obj = new ZClass();
            Console.WriteLine(obj);
        }
    }

    public class ZClass {
        public static implicit operator string(ZClass curr) {
            return "ZClass.operator string";
        }

        public override string ToString() {
            return "ZClass.ToString";
        }
    }
}
The preceding program displays ZClass.operator string. The operator string is called for the Console.WriteLine operand. Why? Unlike the default operator string, the custom operator string does not call ToString. In most circumstances, calling ToString in the operator string is the best practice, which eliminates the necessity of implementing a custom operator string. The default operator string already has this behavior. You simply override the ToString method with the proper string representation of the type. Inconsistencies and confusion can occur when the operator string and ToString have disparate implementations.
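A minimal sketch of that recommended pattern, where the conversion operator simply delegates to the overridden ToString so that both conversion paths stay consistent:
using System;

public class ZClass
{
    public static implicit operator string(ZClass curr)
    {
        // Delegate to ToString so both conversions agree.
        return curr.ToString();
    }

    public override string ToString()
    {
        return "ZClass.ToString";
    }
}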
|
http://etutorials.org/Programming/programming+microsoft+visual+c+sharp+2005/Part+V+Advanced+Concepts/Appendix+A+Operator+Overloading/Conversion+Operators/
|
CC-MAIN-2018-47
|
en
|
refinedweb
|
#include <TCPConnection.h>
Protected constructor that enforces subclassing of this class.
References setConnectionListener(), and setStringListener().
This method is called be subclasses to invoke the connection listener.
References odcore::io::ConnectionListener::handleConnectionError().
Referenced by odcore::wrapper::POSIX::POSIXTCPConnection::run(), odcore::wrapper::WIN32Impl::WIN32TCPConnection::run(), odcore::wrapper::WIN32Impl::WIN32TCPConnection::sendImplementation(), and odcore::wrapper::POSIX::POSIXTCPConnection::sendImplementation().
This method returns whether a raw TCP connection has been configured (i.e., one without the payload-size information).
Referenced by receivedString(), and send().
This method has to be called by subclasses whenever new (partial) data is received. It is responsible for gathering partial data and invoking the registered StringListener once a complete data packet has been gathered.
Referenced by odcore::wrapper::POSIX::POSIXTCPConnection::run(), and odcore::wrapper::WIN32Impl::WIN32TCPConnection::run().
This method is used to send data using this TCP connection.
References isRaw(), and sendImplementation().
Referenced by odcore::wrapper::POSIX::POSIXTCPConnection::sendImplementation(), and odcore::wrapper::WIN32Impl::WIN32TCPConnection::sendImplementation().
This method has to be implemented in subclasses to send data. It is called from within the send() method.
Parameter data: Data with prepended size information.
Implemented in odcore::wrapper::POSIX::POSIXTCPConnection, and odcore::wrapper::WIN32Impl::WIN32TCPConnection.
This method registers a ConnectionListener that will be informed about connection errors.
Implements odcore::io::ConnectionObserver.
Referenced by ~TCPConnection().
This method configures a TCP connection to just transport the raw bytes.
This method sets the StringListener that will receive incoming data.
Implements odcore::io::StringObserver.
Referenced by ~TCPConnection().
This method must be called to start the connection.
Implemented in odcore::wrapper::POSIX::POSIXTCPConnection, and odcore::wrapper::WIN32Impl::WIN32TCPConnection.
This method closes a connection.
Implemented in odcore::wrapper::POSIX::POSIXTCPConnection, and odcore::wrapper::WIN32Impl::WIN32TCPConnection.
|
http://opendavinci.cse.chalmers.se/api/classodcore_1_1io_1_1tcp_1_1TCPConnection.html
|
CC-MAIN-2018-47
|
en
|
refinedweb
|
Learn how to use MobX to manage the state of your React apps with ease.
TL;DR: MobX is one of the popular state management libraries out there frequently used with React. In this article, you will learn how to manage the state of your React apps with MobX. If you need, you can find the code developed throughout the article in this GitHub repository.
Prerequisites
Before diving into this article, you are expected to have prior knowledge of React already. If you still need to learn about React, you can find a good React article here.
Besides knowing React, you will need Node.js and NPM installed on your machine. If you don't have them, please, follow the instructions here.
State Management in React
Before understanding the concept of state management, you have to realize what a state is. A state in this context is the data layer of your application. When it comes to React and the libraries that help it manage state, you can say that state is an object that contains the data that your application is dealing with. For instance, if you want to display a list of items on your app, your state will contain the items you intend to display. State influences how React components behave and how they are rendered. Yes! It is as simple as that.
State management, therefore, means monitoring and managing the data (i.e., the state) of your app. Almost all apps have state in one way or the other and, as such, managing state has become one of the most important parts of building any modern app today.
When you think about state management in React apps, basically, there are three alternatives:
- Redux;
- the new React Context API;
- and MobX.
Redux
Redux is the most popular state management solution for React apps. Redux strictly abides by the single source of truth principle.
To learn more about Redux, check out this article.
React Context API
The React Context API is another alternative for state management in your React app. This is not a library like the previously mentioned alternatives. Rather, it is a built-in framework solution. Actually, this API is not something new; it has existed in React for a long while. However, you will frequently hear people calling it the new React Context API because only recently (more specifically, on React v16.3) has this API reached a mature stage.
In fact, Redux uses this API behind the scenes. The API provides a way to pass data down a React component tree without explicitly passing it through all the child components. This API revolves around two components: the Provider (used by a component located higher in the component tree) to provide the data, and the Consumer (used by a component lower in the hierarchy) to consume the data.
To learn more about the new React Context API, check out this article.
In the next section, you will learn about the third alternative at your disposal, MobX.
MobX Introduction
As mentioned, MobX is another state management library available for React apps. This alternative uses a more reactive process, and it is slowly gaining popularity in the community. MobX is not just a library for React alone, it is also suitable for use with other JavaScript libraries and frameworks that power the frontend of web apps.
MobX is sponsored by reputable companies such as Algolia, Coinbase, etc. MobX hit 16,719 stars on GitHub at the time of writing. That obviously tells you it is becoming a solid choice for state management in React applications.
In the following subsections, you will learn about important concepts that you have to keep in mind while developing with MobX. Then, in the next section, you will see MobX in action while creating a sample app.
Observable State on MobX
Observable state is one of the main concepts of MobX. The idea behind this concept is to make an object able to emit its changes to observers. You can achieve this with the @observable decorator. For example, imagine you have a variable named counter that you expect to change over time. You can make it observable like so:
@observable counter = 0
Or, you can declare it like so:
decorate(ClassName, { counter: observable })
ClassName, in the second example, is the name of the class where the
counter object resides. This decorator can be used in instance fields and property getters.
Computed Values on MobX
Computed value is another important concept of MobX. These values are represented by the
@computed decorator. Computed values work in hand with observable states. With computed values, you can automatically derive values. Say you have a snippet like this:
class ClassName {
  @observable test = 0;

  @computed get testTimes100() {
    return this.test * 100;
  }
}
In this snippet, if the value of test changes, the testTimes100 getter is recomputed automatically. So, with computed values, MobX can automatically derive other values when needed, through @computed.
Reactions on MobX
Reactions are similar to computed values but, instead of producing a new value, they trigger side effects. MobX provides several reaction functions: when, autorun, and reaction.
The
when reaction accepts two functions as parameters, the
predicate and the
effect. This reaction runs and observes the first function (the
predicate) and, when this one is met, it runs the
effect function.
Here you can see an example of how this function works:
when(
  // predicate
  () => this.isEnabled,
  // effect
  () => this.exit()
);
Once the
isEnabled class property is
true, the
effect executes the
exit function. The function that returns
isEnabled must be a function that reacts. That is,
isEnabled must be marked with
@computed so that the value is automatically computed or, better yet, marked with an
@observable decorator.
The next reaction function is the
autorun function. Unlike the
when function, this function takes in one function and keeps running it until it is manually disposed. Here you can see how you can use an
autorun function:
const age = observable.box(10);

const dispose = autorun(() => {
  console.log("My age is: ", age.get());
});
With this in place, anytime the variable
age changes, the anonymous function passed to
autorun logs it out. This function is disposed once you call
dispose.
The next one, the
reaction function, mandatorily accepts two functions: the data function and side effect function. This function is similar to the
autorun function but gives you more control on which observables to track. Here, the data function is tracked and returns data to be used inside effect function. Whereas an
autorun function reacts to everything used in its function, the
reaction function reacts to observables you specify.
Here you can see a simple use case:
const todos = observable([
  { title: "Read Auth0 Blog", done: false },
  { title: "Write MobX article", done: true }
]);

const reactionSample = reaction(
  () => todos.map(todo => todo.title),
  titles => console.log("Reaction: ", titles.join(", "))
);
In this case, the reaction function reacts to changes in the titles of the list and to items being added or removed.
Another reaction function available for React developers is the
observer function. This one is not provided by the main MobX package but, instead, provided by the
mobx-react library. To use the
observer function, you can simply add the
@observer decorator in front of it like so:
@observer
class MyComponent extends React.Component {
  // [...]
}
With this
reaction function, if an object tagged with the
@observable decorator is used in the
render method of the component and that property changes, the component is automatically re-rendered. The
observer function uses
autorun internally.
Actions on MobX
Actions are anything that modifies the state. You can mark your actions using the
@action decorator. As such, you are supposed to use the
@action on any function that modifies observables or has side effects. A simple example is this:
@observable variable = 0;

@action setVariable(newVariable) {
  this.variable = newVariable;
}
This function is updating the value of an observable, and so it is marked with
@action.
MobX and React in Practice
Now that you have gone through the main concepts in MobX, it is time to see it in action. In this section, you will build a simple user review dashboard. In the review dashboard, a user will enter a review using an input field, select a rating from a dropdown list, and finally submit the review.
The dashboard will show the total number of reviews, the average star rating, and a list of all the reviews. You will use MobX to manage certain operations like updating the reviews in real time on the dashboard, calculating the total number of reviews submitted, and, lastly, obtaining the average star rating. Once you are done, your app will look similar to this:
Scaffolding a new React app
To quickly scaffold a new React app, you will use the
create-react-app CLI tool to bootstrap your React app quickly. If you are on NPM
v5.2.0 or greater, you can open a terminal, move into the directory where you usually save your projects, and issue the following command:
npx create-react-app react-mobx-tutorial
If you have an older version of NPM, you will have to proceed as follows:
# install create-react-app globally
npm install -g create-react-app

# use it to create your project
create-react-app react-mobx-tutorial
This tool will need some seconds (or even a couple of minutes depending on your internet connection) to finish its process. After that, you can open your new project (
react-mobx-tutorial) on your preferred IDE.
Installing Dependencies
After creating your app, the next step is to install the required dependencies. For this article, you need only three dependencies: the main
mobx library to add MobX to your app; the
mobx-react library to add React specific functions available through MobX; and the
react-star-rating-component dependency to easily implement a rating bar in the app.
To install them, move into your project and use NPM, as follows:
# move into app directory
cd react-mobx-tutorial

# install deps
npm install mobx mobx-react react-star-rating-component --save
Creating a Store with MobX
You might wonder why you haven't heard about stores in the last section (MobX Introduction). The thing is, MobX does not require you to use stores to hold your data. Actually, as they explain in this resource, stores are part of an opinionated approach that they discovered at Mendix while working with MobX.
"The main responsibility of stores is to move logic and state out of your components into a standalone testable unit that can be used in both frontend and backend JavaScript." - Best Practices for building large scale maintainable projects
As such, the first thing you are going to do in your app is to add a store. This will ensure that the app reads from (and writes to) a global state object instead of its own components' state. To set this up, create a new file called
Store.js inside the
src directory and add the following code to it:
class Store {
  reviewList = [
    {review: "This is a nice article", stars: 2},
    {review: "A lovely review", stars: 4},
  ];

  addReview(e) {
    this.reviewList.push(e);
  }

  get reviewCount() {
    return this.reviewList.length;
  }

  get averageScore() {
    // sum the stars, then round the average to two decimal places
    const total = this.reviewList.reduce((sum, e) => sum + e.stars, 0);
    return Math.round(total / this.reviewList.length * 100) / 100;
  }
}

export default Store;
In this store, you defined a
reviewList array containing some items already. This is the list your whole app will feed on. Besides defining this array, the store also defines three methods:
addReview(): Through this method, your app will add new reviews to the
reviewList array.
averageScore(): This is the method that your app will use to get the average score inputted by users.
reviewCount(): You will use this method to get the size of
reviewList.
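Before wiring the store into React, you can sanity-check it in isolation. The following is a minimal sketch (not one of the app's files) showing how the members above behave:

import Store from './Store';

const store = new Store();
console.log(store.reviewCount);  // 2 (the two seed reviews)
console.log(store.averageScore); // 3 ((2 + 4) / 2)

store.addReview({ review: "Great read", stars: 5 });
console.log(store.reviewCount);  // 3
console.log(store.averageScore); // 3.67 (11 / 3, rounded to two decimals)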
Next, you will decorate these fields and methods so that other parts of your application can react to them. MobX has a set of decorators that defines how observable properties will behave (as discussed earlier). To apply these decorators, you will use the
decorate function and add it to your
App.js file as shown here:
// ... leave other imports untouched ...
import Store from './Store';
import {decorate, observable, action, computed} from 'mobx';

decorate(Store, {
  reviewList: observable,
  addReview: action,
  averageScore: computed,
  reviewCount: computed
});

// ... leave class definition and export statement untouched ...
As you can see, you are using the
decorate function to apply the
observable,
action, and
computed decorators to the fields defined by
Store. This makes them tightly integrated with MobX, and you can now make your app react to changes in them.
Updating the Store on MobX
Next, you will create a component with the form that will collect users' reviews and update the store accordingly. To keep things organized, you will create a directory called
components inside the
src directory. For the rest of the article, you will use this directory for all your React components.
After creating the
components directory, add a file called
Form.js inside it and add the following code to this file:
import React, {Component} from 'react';

export default class Form extends Component {
  submitReview = (e) => {
    e.preventDefault();
    const review = this.review.value;
    const stars = Number(this.stars.value);
    this.props.store.addReview({review, stars});
  };

  render() {
    return (
      <div className="formSection">
        <div className="form-group">
          <p>Submit a Review</p>
        </div>
        <form onSubmit={this.submitReview}>
          <div className="row">
            <div className="col-md-4">
              <div className="form-group">
                <input type="text" name="review" ref={node => {
                  this.review = node;
                }}/>
              </div>
            </div>
            <div className="col-md-4">
              <div className="form-group">
                <select name="stars" id="stars" className="form-control" ref={node => {
                  this.stars = node;
                }}>
                  <option value="1">1 Star</option>
                  <option value="2">2 Star</option>
                  <option value="3">3 Star</option>
                  <option value="4">4 Star</option>
                  <option value="5">5 Star</option>
                </select>
              </div>
            </div>
            <div className="col-md-4">
              <div className="form-group">
                <button className="btn btn-success" type="submit">SUBMIT REVIEW</button>
              </div>
            </div>
          </div>
        </form>
      </div>
    )
  }
}
The new component that you just defined contains only two functions:
submitReview and
render. The
submitReview function, which React will call when users submit the form, gets the
review inputted by users and the number of
stars and then calls the
addReview function from the store. Note that this component is calling the
addReview function through
props. As such, while using the
Form component, you will have to pass this function to it.
Now, regarding the
render function, although lengthy, you can see that all it does is use some HTML elements and some Bootstrap classes to define a beautiful form with:
- a title: "Submit a Review";
- an
input text field where users will write their review;
- a drop-down box (
select) where users will choose how many stars they give to the review (between 1 and 5);
- and a
submit button that will trigger the
submitReview function when clicked (through the
onSubmit={this.submitReview} property of the
form element).
Reacting to Changes with MobX
Once users submit the form and the store receives the new review, you need to display the updated data to your users immediately. For this purpose, you will create a component that will display the average number of stars from reviews given and the total number of reviews.
To create this component, create a new file called
Dashboard.js inside the
components directory and insert the following code into it:
import React from 'react';
import {observer} from 'mobx-react';

function Dashboard({store}) {
  return (
    <div className="dashboardSection">
      <div className="row">
        <div className="col-md-6">
          <div className="card text-white bg-primary mb-6">
            <div className="card-body">
              <div className="row">
                <div className="col-md-6">
                  <i className="fa fa-comments fa-5x" />
                </div>
                <div className="col-md-6 text-right">
                  <p id="reviewCount">{store.reviewCount}</p>
                  <p className="announcement-text">Reviews</p>
                </div>
              </div>
            </div>
          </div>
        </div>
        <div className="col-md-6">
          <div className="card text-white bg-success mb-6">
            <div className="card-body">
              <div className="row">
                <div className="col-md-6">
                  <i className="fa fa-star fa-5x" />
                </div>
                <div className="col-md-6 text-right">
                  <p id="averageScores">{store.averageScore}</p>
                  <p className="announcement-text">Average Scores</p>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </div>
  )
}

export default observer(Dashboard);
As you can see, this component contains two
card elements (or Bootstrap components). The first one uses
store.reviewCount to show how many reviews were inputted so far. The second one uses
store.averageScore to show the average score given by reviewers.
One thing that you must note is that, instead of exporting the
Dashboard component directly, you are encapsulating the component inside the
observer() function. This turns your
Dashboard into a reactive and smart component. With this in place, any change to store data used within the component will make React re-render it. That is, when
averageScore and
reviewCount get updated in your store, React will update the user interface with new contents instantaneously.
Besides this dashboard, you will also create a component that will show all reviews inputted by users. As such, create a file called
Reviews.js inside the
components directory and paste the following code into it:
import React from 'react';
import {observer} from 'mobx-react';
import StarRatingComponent from 'react-star-rating-component';

function List({data}) {
  return (
    <li className="list-group-item">
      <div className="float-left">{data.review}</div>
      <div className="float-right">
        <StarRatingComponent name="reviewRate" starCount={data.stars}/>
      </div>
    </li>
  )
}

function Reviews({store}) {
  return (
    <div className="reviewsWrapper">
      <div className="row">
        <div className="col-12">
          <div className="card">
            <div className="card-header">
              <i className="fa fa-comments"/> Reviews
            </div>
            <ul className="list-group list-group-flush">
              {store.reviewList.map((e, i) =>
                <List key={i} data={e} />
              )}
            </ul>
          </div>
        </div>
      </div>
    </div>
  )
}

export default observer(Reviews);
In the snippet above, you are importing the
StarRatingComponent installed earlier to display the number of stars selected by the user during the review. Also, you are creating a component called
List that is used only inside this file. This component is what will render the details of a single review, like the comment inputted (
review) and the amount of
stars.
Then, in the end, you are defining the
Reviews component, which is also wrapped by the
observer() function to make the component receive and display changes in the MobX store as they come. This component is quite simple. It uses the
card Bootstrap component to display an unordered (
ul) list of reviews (
reviewList) and a title ("Reviews").
Wrapping Up your MobX App
With these components in place, your app is almost ready for prime time. To wrap things up, you will just make some adjustments to the UI, make your
App component use the components you defined in the previous sections, and import Bootstrap (which you have been using but you haven't imported).
So, for starters, open the
App.css file in your project and replace its contents like this:
.formSection {
  margin-top: 30px;
}

.formSection p {
  font-weight: bold;
  font-size: 20px;
}

.dashboardSection {
  margin-top: 50px;
}

.reviewsWrapper {
  margin-top: 50px;
}
These are just small adjustments so you can have a beautiful user interface.
Next, open the
App.js file and update this as follows:
// ... leave the other import statements untouched ...
import Form from './components/Form';
import Dashboard from './components/Dashboard';
import Reviews from './components/Reviews';
import Store from './Store';

// ... leave decorate(Store, {...}) untouched ...

const reviewStore = new Store();

class App extends Component {
  render() {
    return (
      <div className="container">
        <Form store={reviewStore}/>
        <Dashboard store={reviewStore}/>
        <Reviews store={reviewStore}/>
      </div>
    );
  }
}

export default App;
There are three new things happening in the new version of your
App component:
- You are importing and using all the components you defined before (
Form,
Dashboard, and
Reviews).
- You are creating an instance of your
Store class and calling it
reviewStore.
- You are passing the
reviewStore as a prop called
store to all components.
With that in place, the last thing you will have to do is to open the
index.html file (under the public directory) and update it as follows:
<!DOCTYPE html>
<html lang="en">
  <head>
    <!-- ... leave other tags untouched ... -->
    <title>React and MobX</title>
    <link rel="stylesheet" href="">
    <link href="" rel="stylesheet">
  </head>
  <!-- ... leave body and its children untouched ... -->
</html>
In this case, you are simply changing the title of your app to "React and MobX" and making it import Bootstrap and Font Awesome (a library of icons that you are using to enhance your UI).
After refactoring the
index.html file, go back to your terminal and make your app run by issuing the following command:
# from the react-mobx-tutorial directory
npm start
Now, if you open the app in your preferred browser (create-react-app serves it at http://localhost:3000 by default), you will be able to interact with your app and see React and MobX in action. How cool is that?
Conclusion
In this post, you learned about state management in React apps. You also had the opportunity to take a quick look at the various alternatives for managing state in React apps, more specifically, MobX.
After that, you were able to build an app to show the most important concepts in MobX. MobX might not be as popular as Redux when it comes to state management in React, but it is very mature, easy to start with, and provides a seamless way to integrate into a new or an existing application.
I do hope that you enjoyed this tutorial. Happy hacking!
|
https://auth0.com/blog/managing-the-state-of-react-apps-with-mobx/
|
CC-MAIN-2018-47
|
en
|
refinedweb
|
Serverless beyond Functions
I like to play with technology. I think it is the best way to understand its pros, cons, and limits. Most of the time, when talking about serverless, people think of functions, such as those provided by AWS Lambda.
Functions can be triggered synchronously, waiting for the response, such as in the case of an API call coming through the Amazon API Gateway, or asynchronously, for example if a new file is uploaded to a repository such as Amazon S3.
Here I’d like to go beyond that, considering serverless in its broader definition of building applications “without thinking about servers”.
Over time, lots of triggers have been added to AWS Lambda. Using tools such as CloudWatch Events, you can react to almost any AWS API call by invoking a Lambda function. Leveraging this, we can easily enrich our application with other interesting “building blocks”, using other services to add functionalities ready to be used.
One of the great advantages of serverless development – and I never miss an opportunity to repeat myself here – is the possibility to “chain” multiple functions together, and design event-driven architectures.
In this way, you can decompose and distribute business logic in smaller components that follow the data flow of your application: if this happens, do that.
Applications built in this way are easier to keep under control, because our human minds are much better at spotting cause-effect relationships than at understanding a complex workflow.
Adding new features is also easier, because you don't need to review all your code base to find the right spots to change, but you can start by thinking:
- What would be the cause (trigger) of that?
- Which would be the effects (what to trigger next)?
I learned over time, especially from our customers, that serverless applications can cover multiple use cases, such as mobile back ends, chat bots, or data processing.
A common scenario is web apps, and a quite standard approach there is to have web browsers download static assets (such as HTML, CSS, and JavaScript files) from a web-facing repository such as Amazon S3. To speed up things, and optimise costs, you can distribute this content via a Content Delivery Network (CDN) such as Amazon CloudFront.
The JavaScript running client-side, in the browser, can now call back end APIs that can be implemented using Lambda functions and exposed as web APIs via the Amazon API Gateway.
These Lambda functions should be designed to be stateless, and can use a persistence tier to read/write data. For example, to have a complete managed solution, you can use DynamoDB tables.
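For instance, a stateless function can persist its data to DynamoDB on every invocation. Here is a minimal Node.js sketch; the Messages table and the item shape are placeholders for illustration, not part of this article's demo:

// Minimal sketch of a stateless Lambda handler persisting to DynamoDB.
// Table name and item shape are hypothetical.
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  await db.put({
    TableName: 'Messages', // placeholder
    Item: { id: event.id, body: event.body }
  }).promise();
  return { statusCode: 200 };
};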
There are a lot of exceptions to this “standard” architecture. For example, you can use the Amazon API Gateway to “proxy” a native AWS API call, so that you can map your REST API straight to a service operation, such as adding data to a Kinesis Stream.
Here, I want to go beyond this approach, and build an application that is more “interactive” than a standard website. To do that, I’ll use other AWS services to provide additional functionalities.
HTTP, at least up to version 1.1, is a request/response protocol, and all communications to send data, or ask for data, start from the client (usually a web browser). If the web browser needs to know if something happened in the back end, outside of its control (for example, if there is new information available), it has to continuously poll the server. There are even specific integration patterns that came out of this, such as HTTP long polling.
Problem is, with plain HTTP, the server is not able to push data to the client. This makes even simple applications, such as a web chat, cumbersome to implement and relatively slow to use. To overcome this limitation, during the long process that led to the HTML5 specification, WebSockets were introduced.
More recently, HTTP/2 Server Push tried to solve the problem at a lower level in the stack, and this new technology will probably coexist with WebSockets.
In the case of serverless architectures, we can add a WebSocket interface to Lambda functions using AWS IoT, a platform that would normally be used to connect physical devices and have them interact with cloud applications and other devices. It turns out that you can use AWS IoT without any physical device, but just for its features, for example:
- Supporting long-term connections using multiple protocols, in this case we are interested specifically in WebSockets
- Publishing, and subscribing, to a hierarchy of topics via MQTT
- Using rules to process and act upon data published via MQTT
The Message Queue Telemetry Transport (MQTT) protocol is using hierarchical topics to let connected clients communicate via publish and subscribe.
The / character in the topic names is used to split the different levels in the hierarchy, for example “a/b/c” is defining a three level hierarchy starting from “a”, then “b”, and finally “c”. When subscribing a device, or a rule, the + is a wildcard that can replace a single level in the hierarchy (for example, “a/+/c”), and # is another wildcard that can replace all levels of the hierarchy from that point on (such as in “a/#”).
Topics starting with $ are for internal use, and are not matched by subscribing to # (which by definition should otherwise mean “anything”). For example, AWS IoT is using the $aws topic namespace to broadcast information related to the platform, such as device connection lifecycle events, or to keep devices and their “shadow” in the cloud in sync.
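As an illustration of these topics and wildcards, here is a minimal sketch using the aws-iot-device-sdk for Node.js; the endpoint, client ID, and credential paths are placeholders:

// Connect and use the wildcard syntax described above.
// All configuration values below are placeholders.
const awsIot = require('aws-iot-device-sdk');

const device = awsIot.device({
  keyPath: 'private.pem.key',
  certPath: 'certificate.pem.crt',
  caPath: 'root-CA.crt',
  clientId: 'browser-0',
  host: 'example.iot.us-east-1.amazonaws.com'
});

device.on('connect', () => {
  device.subscribe('chat/pub/+'); // + matches any single room level
  device.publish('chat/out', JSON.stringify({ message: 'hello' }));
});

device.on('message', (topic, payload) => {
  console.log(topic, payload.toString());
});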
Let’s make an highly interactive serverless application using WebSockets. Web browsers will be the devices connecting to AWS IoT, using topics and rules to exchange, and process, data. Let’s build a web chat.
Using WebSockets and AWS IoT, web browsers can receive data from Lambda functions, when those functions publish something on a topic the browsers have subscribed to. And when browsers publish data on a topic, AWS IoT rules can automatically react and do different things, for example:
- Invoke a Lambda function, using the data published by the browsers as payload (event)
- Write the data in a Kinesis Stream, that is consumed by a Lambda function processing the data more efficiently, in micro-batches (for example, of 100–1000 records) but with a higher overall latency
- Store the data in a DynamoDB table
- Publish back the data in another topic
All those actions can also enrich the data sent from the client using built-in functions. For example, you can get the client ID of the publisher, or the current timestamp. AWS IoT is using IAM roles and policies to allow, or deny, access to its resources.
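For example, a rule statement can select everything published on the chat/out topic used below and enrich it with the built-in clientid() and timestamp() functions; the alias names here are my own:

SELECT *, clientid() AS senderId, timestamp() AS sentAt FROM 'chat/out'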
Let’s have a better look at how we can implement such a flow of data for a web chat.
For the web chat, I used the following topics and rules:
- Each client can subscribe to the chat/in/${iot:ClientId} topic, where the final part of the topic name is a policy variable that is replaced by the actual MQTT client ID of the connection, and is unique for any client at any point in time.
- There can’t be two clients with the same ID, so in our web chat any browser has a unique topic they can use to receive information from the back end (in this case, built using Lambda functions).
- The chat/out topic is used by all browsers to send data to the back end, which can recognise each of them by their client ID embedded in the messages.
- On their initial connection, browsers use chat/out to advertise themselves to the back end, and a Lambda function is replying with custom code that is executed in the browsers using the JavaScript eval() function (now you understand why I said at the beginning that I was “playing with technology”: injecting code opens a lot of security concerns that should be carefully evaluated, and I'd like to hear your feedback on that).
- Since the back end can inject code in the browsers, and add new functionalities, the initial JavaScript code that is provided to the browsers contains only the minimum capabilities required to connect, advertise themselves, and process the first message.
- After their initial connection, the chat/out topic is used to publish messages in the chat, and, since my implementation is not authenticated, it turns out I don’t need a Lambda function to handle that, but I can use a republish rule to take the message, and publish it back on the chat/pub/${room} topic, where the final part is replaced by the rule with the chat room name extracted from the message payload.
- Any browser can subscribe to any chat/pub/${room} topic, and receive messages published by other clients very quickly, as all communication and processing happens within the AWS IoT platform.
- To protect communication, you can replace this republishing mechanism with a Lambda function that sends message back securely on the chat/in/{clientId} topic of each device – but for the purpose of my tests the current approach was enough.
- Not just the browsers are listening to the chat/pub/${room} topics, another rule is taking all messages there and storing them on a DynamoDB table, so that at the initial connection a browser can retrieve the back log of the chat room.
- If there is a high throughput, and you want to optimise your use of Lambda functions, browsers can publish on the chat/stream topic, where a rule is sending everything to a Kinesis stream consumed by the same Lambda function listening to chat/out, managing the different syntax of the event payload, and retaining all the internal logic.
- Finally, a Lambda function is receiving all events from the $aws/events/# topics, where you can monitor the lifecycle of device connections — I am actually just logging this information for debugging purposes.
Let’s review the flow sequence with a diagram (graphics courtesy of this website):
The first connection is to the Amazon API Gateway, that is returning custom HTML pages for any visitor, and then each browser is establishing a bidirectional connection (using WebSockets, via AWS IoT) to receive custom code to execute, and exchange data (messages) with the back end and other browsers, using MQTT to publish and subscribe to topics that can have rules automatically reacting to what is published.
The DynamoDB table storing all messages, for all rooms, is using Auto Scaling to adjust its throughput to the actual workload, and, since my implementation is for demo purposes, Time To Live (TTL) to automatically delete messages older than 24 hours.
I find it fascinating that this simple application, using just a few hundred lines of code, is highly available and scalable, using multiple data centres for all tiers.
This is possible using together “building blocks” such as, in this case, AWS Lambda, Amazon API Gateway, AWS IoT and Amazon DynamoDB, that provide high level functionalities, with built-in scalability and reliability, without the requirement to provision, scale, and manage any servers. This is the power of “serverless” — at least until we find a better term for that.
Reviewing the final architecture, the only scalability bottleneck I found is in the number of messages per second that a single chat room can handle, due to how I designed the data model: I used the chat room as the partition key on the DynamoDB table storing all messages. I don’t expect people to have a high throughput of messages per second in a single chat room, so this seems to be enough for this use case.
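To make the data-model point concrete, here is a sketch of the kind of key schema described; the table and attribute names are assumptions, not taken from the demo code. All messages of a room share one partition, which is the bottleneck mentioned above:

{
  "TableName": "ChatMessages",
  "KeySchema": [
    { "AttributeName": "room", "KeyType": "HASH" },
    { "AttributeName": "ts", "KeyType": "RANGE" }
  ],
  "AttributeDefinitions": [
    { "AttributeName": "room", "AttributeType": "S" },
    { "AttributeName": "ts", "AttributeType": "N" }
  ]
}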
You can test the web chat here, write your name and a message:
You can create new chat rooms on the fly by changing the path of the URL, for example:
The code of this demo is available on GitHub:
Since this is a demo, I “forcefully” tried to avoid any external file dependency in the Lambda function, so that it could be easily reviewed and edited in the web console. On the other side, you can obviously see that UX design is not my top skill :)
Looking forward to hearing your feedback!
|
https://medium.com/cloud-academy-inc/serverless-beyond-functions-cd81ee4c6b8d
|
CC-MAIN-2018-47
|
en
|
refinedweb
|
by Trilemetry
Created 3 May 2011
In this exercise you will use the application you made in Exercise 1.6 (Creating MXML custom components with ActionScript properties) to create an ActionScript class and use instances of the class to populate employee data (see Figure 1).
Figure 1. Create an ActionScript class.
In this exercise you will learn how to:
Create an ActionScript class
In this section you will create an ActionScript class.
- Download the ex2_07_starter.zip file if you haven't done so already and extract the file ex2_07_starter.fxp to your computer.
- Open Flash Builder.
- Import the ex2_07_starter.fxp file.
- Right-click on the components directory and select New > ActionScript class.
- Name the class Employee (see Figure 2).
Figure 2. Name the new class Employee.
- Keep the default settings and click Finish.
- Within the class declaration, type imageFile and press CTRL+1 to invoke the quick assist tool and select the Create instance variable imageFile option. This creates a private variable. Change
private to
public and change the data type to the
String class by using the content assist tool (CTRL+Space).
public class Employee {
    public var imageFile:String;
    ...
- Repeat step 7 to create two more public variables named
firstName and lastName:
public class Employee {
    public var imageFile:String;
    public var firstName:String;
    public var lastName:String;
    ...
- In the constructor, accept three parameters,
fileName,
fName, and
lName. Type all three parameters to the
String class.
public function Employee(fileName:String, fName:String, lName:String) { }
- Assign each of the constructor arguments to its associated class property:
public function Employee(fileName:String, fName:String, lName:String) {
    imageFile = fileName;
    firstName = fName;
    lastName = lName;
}
- Save the file.
Note: When the argument names in the constructor match the class property names, it is a best practice to add
this to the class property names so that you can differentiate between the constructor arguments and the class properties. In this case, the argument names and the class property names are different, so you do not need to add
this.
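For illustration, here is a minimal sketch (a hypothetical signature, not this exercise's code) of what the constructor would look like if the names did match:

public function Employee(imageFile:String, firstName:String, lastName:String) {
    // "this." distinguishes the class properties from the
    // identically named constructor arguments.
    this.imageFile = imageFile;
    this.firstName = firstName;
    this.lastName = lastName;
}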
In this section, you will create multiple employee instances with ActionScript.
- Open the ex2_07_starter.mxml file.
- Below the Script comment, create a
Script block.
<!-- Script ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<fx:Script>
    <![CDATA[
    ]]>
</fx:Script>
- Within the
Script block, type firstEmployee and use the quick assist tool (CTRL+1) to create a
private variable and use the content assist tool (CTRL+Space) to data type the variable to the
Employee class (see Figure 3).
Figure 3. Use content assist to define the data class of the firstEmployee variable.
- Within the
Script block, ensure the components.Employee package was imported. If not, add the following code to import the package:
<fx:Script>
    <![CDATA[
        import components.Employee;

        private var firstEmployee:Employee;
    ]]>
</fx:Script>
- Assign the
firstEmployee variable to a new instance of the Employee class component:
private var firstEmployee:Employee = new Employee();
- Pass the new Employee class component a
fileName parameter of
aparker.jpg, a
fName parameter of
Athena, and an
lName parameter of
Parker.
private var firstEmployee:Employee = new Employee ("aparker.jpg","Athena","Parker");
- Using the quick assist tool create another
private variable named
secondEmployee, data typed to the
Employee class.
private var firstEmployee:Employee = new Employee("aparker.jpg","Athena","Parker");
private var secondEmployee:Employee;
- Assign the secondEmployee variable to a new instance of the Employee class component, passing it its own fileName, fName, and lName values.
- Locate the first
EmployeeDisplay component instance.
- Assign
firstEmployee.imageFile to the first component's
imageFile property as a bindable value:
<components:EmployeeDisplay
    imageFile="{firstEmployee.imageFile}"
- Save the file. You should see two binding warnings in the Problems view (see Figure 4).
Figure 4. Save the file and view the warnings.
- Make the firstEmployee and secondEmployee variables bindable by adding the [Bindable] metadata tag above their declarations.
- Save the file. The binding warning still exists for the
imageFile variable (see Figure 5).
Figure 5. Save the file and view the warning.
- Open the Employee.as file.
- Type [B above the
imageFile variable to invoke the content assist tool. Add the
Bindable declaration to the
imageFile variable.
[Bindable] public var imageFile:String;
- Save the file.
- Open the Problems view. Note that the warnings no longer exist.
- Return to the ex2_07_starter.mxml main application file.
- To the second custom component tag's
imageFile property, add the
secondEmployee.imageFile bindable value:
<components:EmployeeDisplay
    imageFile="{secondEmployee.imageFile}"
- Save the file.
In this section you will bind the components to the employee instances.
- Open EmployeeDisplay.mxml from the components package.
- Within the
Script block, locate and delete the two bindable variables.
- Below the
import statements comment, import the Employee class component.
// import statements ----------------------------------------
import components.Employee;
- Below the variable declarations comment, use the content and quick assist tools to declare a
Bindable public variable named
employeeData and assign the variable a data type of the Employee class.
// variable declarations ------------------------------------
[Bindable]
public var employeeData:Employee;
- Locate the
BitmapImage control tag.
- Update the
source property's binding to display the info from the
employeeData variable:
<s:BitmapImage
    source="{employeeData.imageFile}"
- Locate the
Label control and delete the value of the text property.
<s:Label
- Save the file. Note: You will see four errors populate the Problems view. You will fix these next.
- Open ex2_07_starter.mxml.
- From the first
component tag, remove the
imageFile and
fullName properties:
<components:EmployeeDisplay
- Add the
employeeData property and bind it to the value of the
firstEmployee variable:
<components:EmployeeDisplay
    employeeData="{firstEmployee}"
- Repeat steps 10 and 11 for the second component tag.
<components:EmployeeDisplay
    employeeData="{secondEmployee}"
- Save the file.
- Run the application.
Your application should appear as shown in Figure 6.
Figure 6. Run the application.
In this section you will create a class method to display an employee's names.
- Open the Employee.as file.
- Below the
Employee() method, create a new method named
createFullName that takes no parameters and returns data typed to the
String class:
...
    lastName = lName;
}

public function createFullName():String {
}
- Within the method, return the
firstName variable and the
lastName variable with a space between them:
public function createFullName():String {
    return firstName + " " + lastName;
}
- Save the file.
In this section you reuse the
createFullName() function to dynamically display the employee name below the
BitmapImage control.
- Open the EmployeeDisplay.mxml file.
- Locate the Label control.
- Bind the
text property to the
employeeData variable evaluated by the
createFullName() function:
<s:Label
    text="{employeeData.createFullName()}"
- Save the file.
- Run the application.
The components now display the employee's names (see Figure 7).
Figure 7. View the application with employee names.
|
https://www.adobe.com/devnet/flex/videotraining/exercises/ex2_07.html
|
CC-MAIN-2018-47
|
en
|
refinedweb
|
Axel Naumann: Everyone agrees that C++ needs a facility to query C++ code itself: types, functions, data members, etc. And that this facility should be a compile-time facility, at least as a start. But what should it look like?
Several proposals were on the table over the few years that SG7 has existed; in Jacksonville those were N4428, P0194 and P0255. Here are the main distinguishing features, and SG7's recommendation:
How to get reflection data
Two major paths to query an entity (a base-level construct) were proposed: operators or templates. Templates need to obey the one-definition rule (ODR); any recurrence must be exactly the same as the previous “invocations”. They do not allow testing for “progress” within a translation unit: do we have a definition? Do we have a definition now? And now? For template-based reflection, the answer must always be the same.
But even more importantly, C++ only allows certain kinds of identifiers to be passed as template arguments. Namespaces, for instance, are not among them. There must be no visible difference between passing a typedef or its underlying type as a template parameter, making it impossible to reflect namespaces or typedefs, or requiring language changes for the sake of reflection.
Operators, on the other hand, are a natural way to extend the language. They do not suffer from any such limitation. Additionally, they signal clearly that the code is reflected, making code review simpler.
Traits versus aggregates
How should reflection data be served? Some proposals were based on structure-like entities. Code could use members on them to drill into the reflection data.
This meant that the compiler needs to generate these types for each access. The objects could be passed around, they would need to have associated storage, at least at compile-time.
The alternative is an extension of the traits system. Here, the compiler needs to generate only data that is actually queried. It was also deemed simpler to extend, once reflection wants to support a more complete feature set, or once reflection wants to cover new language features.
Traits on meta or traits on code?
These traits can be applied on the C++ code itself, as done for the regular C++ type traits, possibly with filters to specify query details. Or, and that is the main distinguishing feature of P0194, an operator can "lift" you onto the meta-level, and reflection traits operate only on that meta level.
P0194
Meta-objects are of a meta-type that describes the available interfaces (meta-functions). All of that can be mapped into regular C++ these days, with some definition of “these days": meta-objects are types; they are unnamed and cannot be constructed; they are generated by the reflection operator, for instance
reflexpr(std::string). Meta-functions are templates that "take" a meta-object and "return" a constexpr value or a different meta-object, for instance
get_scope. And the big step for the Jacksonville-revision P0194R0 of the proposal has happened for the meta-types: they are now mapped to C++ concepts! That is obvious, natural and makes the proposal even simpler and even more beautiful.
Reflection-types described by concepts
You can query for instance the type property of a meta-object, using
get_type. But not all meta-objects have a type; it would not make sense to call that on the meta-object of a namespace. The meta-object (remember, a type) must be of a certain kind: it must implement the requirements of the meta::Typed concept. The type returned by
reflexpr(std) does not satisfy these requirements. Easy. For each meta-type (concept) there exists a test whether a meta-object (that all satisfy the
meta::Object concept, by definition) is of that meta-type, i.e. satisfies the concept. For instance,
get_type is only valid on those meta-objects for which
has_typed_v<meta::Object> is
true.
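Since reflexpr is not implemented by standard compilers yet, the following is only an analogy in standard C++20, not P0194 itself: it shows how a concept can gate an accessor the way meta::Typed gates get_type.

#include <concepts>

// Analogy only: the concept plays the role of meta::Typed, admitting
// only "meta-objects" (here: ordinary types) that expose ::type.
template <typename M>
concept Typed = requires { typename M::type; };

// Like get_type: only valid for meta-objects satisfying Typed.
template <Typed M>
using get_type_t = typename M::type;

struct WithType { using type = int; }; // satisfies Typed
struct NoType {};                      // does not

static_assert(std::same_as<get_type_t<WithType>, int>);
// get_type_t<NoType> would fail to compile: the constraint is not met.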
Reflection language versus Reflection library
P0194 proposes the basic ingredients to query reflection in C++. You might find it too basic or too complex. We use it to lay the first few miles of the train track, to agree on the design and specify the “language" used. Once we have that, extending it to become a full C++ reflection library is much simpler than providing a complete feature set and defending the design against ten other proposals in parallel. Matus, the original author, has already shown that P0194 is extensible. Like mad.
And now?
Jacksonville was a big step: SG7 agrees on the recommended design. Now we need to agree on the content. For instance, should reflection distinguish typedefs and their underlying type? Take
struct ArrayRef {
  using index_type = size_t;
  using rank_type = size_t;
  rank_type rank_;
};
Should reflection see the type of
rank_ being
unsigned long or
rank_type? The former is how the compiler understands the code (“semantic” reflection), the latter is what the developer wrote (“syntactic” reflection). We are collecting arguments; I know of lots of smart people with convincing arguments for each one of these options.
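Today's type traits only offer the semantic view, which is part of why the question arises at all. A small illustration in standard C++ (not P0194):

#include <cstddef>
#include <type_traits>

struct ArrayRef {
    using index_type = std::size_t;
    using rank_type  = std::size_t;
    rank_type rank_;
};

// The alias is transparent to the type system: rank_type *is* its
// underlying type, so current traits cannot report "rank_type".
static_assert(std::is_same_v<ArrayRef::rank_type, std::size_t>,
              "traits only see the underlying type");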
Matus is currently writing the next revision. He will split the paper: a short one with the wording, and a discussion paper that explains the design decisions of SG7 - a sort of log, collecting the arguments for those who want to know why C++ reflection ends up the way P0194 proposes. The design paper will also contain examples of use cases, for instance a JSON serializer and likely a hash generator. Can you implement your favorite reflection use-case with P0194's interfaces?
Cheers, Axel.
- Discuss on Reddit, or comment here.
Submitted by Mikhail (not verified) on Tue, 04/19/2016 - 19:37 Permalink
Matus has a patch for clang.
Matus has a patch for clang. Is it going to be applied? If yes, when?
Submitted by Matus Chochlik (not verified) on Tue, 04/19/2016 - 20:54 Permalink
clang patch
Mikhail,
short answer; this version very probably won't. It was slapped together very quickly and it has several shortcomings. If nobody else picks this up, the plan is that I'll probably start writing a new implementation from scratch during the summer.
Submitted by Mikhail (not verified) on Wed, 08/17/2016 - 18:50 Permalink
Any news?
Hello Matus and Axel!
Do you guys have any news to share about reflection? How is new implementation going, is it started?
Thanks!
Submitted by Axel Naumann on Thu, 11/24/2016 - 20:23 Permalink
Re: Any news?
Hi Mikhail,
Matus has an implementation / fork of clang on github that includes some of his reflection library on top of the proposal, basically to test-drive the proposal.
Within the committee, the proposal is progressing: it will likely be discussed in the library evolution group in Kona.
Cheers, Axel.
Submitted by Axel Naumann on Tue, 04/19/2016 - 20:41 Permalink
Re: Matus has a patch for clang.
Hi Mikhail,
That's here
I don't think he expects this to be merged. It was meant to serve as a demonstration that the proposal is feasible implementation-wise. A reality-check. I remember Matus saying that clang should be able to do a much better (i.e. efficient) job.
Cheers, Axel.
Submitted by Bjarne (not verified) on Wed, 04/20/2016 - 02:19 Permalink
Think of typedefs seen vs no
Think of typedefs seen vs no seen as a parameter/option. Think of it as the most obvious example of "lowering"
Submitted by Anonymous (not verified) on Wed, 04/20/2016 - 08:30 Permalink
Is anybody keen enough to
Is anybody keen enough to propose when we will see reflection in the standard?
Submitted by Vyacheslav Lanovets (not verified) on Wed, 04/20/2016 - 20:50 Permalink
Reflection is a much needed
Reflection is a much needed functionality in C++ on my personal wishlist. Other than that I can name only default operator== (already proposed AFAIK) and enforced "override" keyword (already implemented as a warning in clang).
We use C++ reflection for data persistence. We currently use a circa 2003 solution based on the Microsoft SBR format and SBR SDK. Needless to say, it does not work in XCode or Qt/NDK. So now there is hope we can get a standard way to reflect on struct data member names/types and list struct base classes.
Submitted by Muhammad (not verified) on Thu, 04/21/2016 - 07:39 Permalink
In the ArrRef example, I
In the ArrRef example, I think the type of rank_ should be returned by 2 functions: one, say get_type(), which returns the type defined by the developer, i.e. rank_type; the second, say get_underlying_type(), should return unsigned long as understood by the compiler.
Submitted by Axel Naumann on Thu, 04/21/2016 - 12:22 Permalink
Re: the ArrRef example
Hi Muhammad,
I think that's fairly close to what Bjarne suggests. The main point here is that both of you believe that it should be possible to identify
rank_type whereas others (in the committee) do not want reflection to be able to see a typedef. Intentionally. The argument I heard most often is that detecting a typedef will make something a distinct entity that C++ treats as identical. (My counter-argument so far is "yes, and?")
Cheers, Axel.
Submitted by Ralph Trickey (not verified) on Thu, 04/21/2016 - 16:28 Permalink
Yes and I'd like to do that
Yes and I'd like to do that for the same reason that I want to be able to distinguish between different types of Enums. The alternative in some cases is probably to start using Hungarian Notation and prefixing variables with the type again, please don't make me do that. :(
If I'm accessing an external system which has a money type but I want to simply access and display that data, being able to distinguish between the money and just a double type would save needing some other way to distinguish the user type. That's a trivial case, but I'm sure there are others. I haven't used reflection in C++ since MFC, although it's heavily used in other languages.
Ralph
Submitted by Mitch (not verified) on Fri, 04/22/2016 - 02:56 Permalink
If that's true, the committee
If that's true, the committee seems to be making assumptions about how people would use reflection (a common issue w/ design-by-committee).
Reflection has uses beyond semantic analysis (which would be the absolute minimum one would expect in a reflection API, but most certainly not the peak).
It's just as likely people will NEED syntactic analysis. One very basic use case that comes to mind (assuming compile time reflection is constexpr - which it needs to be) would be implementing custom compile time errors (linting) of domain specific rules w/ reflection and static assertions at the syntactic level that INDEED treat some typedefs as distinct entities.
Example: In some code bases typedefs are absolutely intended to be used as distinct entities (not just an alias) and will break if that typedef changes in another configuration (as is often the intention, else why typedef?), hence static assertion on the syntax is just as important (if not more so, due to the domain specific knowledge often encoded in syntax).
One could argue in such cases that typedef should be an actual type, however it's very common in C++ for people to typedef primitives (int/float/etc) and use them as if they're a distinct entity (writing code in ways that would break if the underlying type of that entity ever changed, potentially without compilation failure (due to implicit casting - hence the need for linting))
Submitted by Matus Chochlik (not verified) on Fri, 04/22/2016 - 08:30 Permalink
typedef vs. underlying type
At the moment it's `reflexpr(rank_type)` = Meta-Typedef vs. `get_aliased_t<reflexpr(rank_type)>` = Meta-Type. To me, adding a separate operator for the second case looks like overkill.
Submitted by Garet Claborn (not verified) on Thu, 05/19/2016 - 08:15 Permalink
typedefs and reflection
When it comes to what's returned as the result for typedefs, and classes, I would imagine the same functionality being used on class deduction so inheritance should be a major player. If you have an object that would satisfy the usual diamond problem examples, some sort of structured result would have to be returned.
something as simple as
struct typeinfo {
    typeid id;
    /* other properties... */
    vector<typeinfo> nodes;
};
If just returning a single type, it seems you'd have to always lean toward the front-most class unless the compiler's context is clearly referencing a base class/type. Otherwise you may have ambiguous types at the same level.
Submitted by R. Hamilton (not verified) on Thu, 04/21/2016 - 13:58 Permalink
type of rank_
Is there any particular reason one shouldn't be able to discover the typedef AND the underlying format, perhaps by a second query on the definition of the typedef? The distinction may make little difference now, but unless it never will, offering the option (along with future portability advice) should cover all concerns, unless the cost is inordinate. For example, which would best support a really universal yet lightweight serialization library?
Submitted by Axel Naumann on Thu, 04/21/2016 - 15:28 Permalink
Re: type of rank_
Hi,
I personally agree. But playing the devil's advocate, "because we can" is not a good reason to offer a feature to the world. So what we really need are good use cases that motivate the need. That's what I was fishing for :-)
Cheers, Axel.
Submitted by Andrew Osman (not verified) on Thu, 04/21/2016 - 22:56 Permalink
re:type of rank_
code generation
Submitted by Garet Claborn (not verified) on Thu, 05/19/2016 - 08:20 Permalink
re-querying types
i like the idea of re-querying for underlying type. you could recursively get down to the primary types and it would be simple to handle multiple inheritance with 1D array return.
Submitted by Paul Michalik (not verified) on Thu, 04/21/2016 - 19:39 Permalink
Did anybody manage to read
Did anybody manage to read the proposal cover to cover?
Submitted by Nick Weihs (not verified) on Thu, 04/21/2016 - 23:49 Permalink
Why not both?
I think it is unquestionable that getting the backing type of a typedef is useful and probably what you'd want to see a significant amount of time when reflecting. I'd rather avoid libraries of meta functions that bake things down into whether or not a specific type resolves to an int or not, as an example.
The other case (i.e. getting the forward type of the typedef), I believe is also useful, and something I wish that templates could do as well. I was recently working on a system to gather up the fields of various data structures and present corresponding UI to the user so they could edit the fields of those structures easily. One of the things I would have liked to do was give the data structure designers ways to annotate the fields to make things easier to edit on the user side. For example, say I have an int and I want to be able to annotate a lower and upper bound so that the corresponding UI is a slider instead of a text input. I want to do something like this
template <int lower, int upper>
using r_bounded_int = int;
r_bounded_int<0, 50> m_value_that_goes_from_0_to_50;
It would be nice to be able to glean this information from the type instead of doing silly things like wrapping primitive types in classes or side loading the annotation through some other variadic template mechanism.
Submitted by Peter (not verified) on Sat, 07/09/2016 - 03:18 Permalink
I want it.
I look through Matus's evolving proposals, and it's definitely moving in the right direction. The most recent is simple and powerful.
My only concern is that it's now linked with concepts.
You should definitely look at:
Static reflection Rationale, design and evolution
Submitted by Andrew (not verified) on Tue, 08/16/2016 - 23:49 Permalink
Exported reflection information
The proposal for runtime access to reflection information looks promising... but what I really want is the ability to have the compiler externalize (export) the reflection database so that external tools can easily consume it (from a standardized format). All sorts of code generators could benefit from this information greatly, and doing it externally allows for using superior tools than trying to build impossible-to-understand template/etc based C++ machinery to do it. This can be used to generate language bindings, serdes for data structures, etc.
Submitted by Maik Guntermann (not verified) on Mon, 11/20/2017 - 16:53 Permalink
Concepts=Constraints+Reflections .enable_if_t<C::existsIn10yrs_v> (for those who haven't seen it yet - Herb's great presentation at CppCon17 about reflections)
IMHO, it would be a fatal mistake to distinguish between reflections and concepts/constraints; i.e. introducing concepts in C++20 (like ISO/IEC TS 19217:2015) as a different "feature" and separating Herb's mentioned reflections/injection/generative C++ (which in combination will make concepts obsolete) could be a *fatal* show stopper since the complexity of modern C++, which is increasing with exponential speed, would become so great that even Scott would need at least 50 editions to make "Effective Very Modern C++" bug-free (IIRC there were only very few editions for "Effective C++", and we are currently at no. 11 or 12 for "Effective Modern C++").
Already today managers are wondering and concerned why productivity shrinks after the decision to introduce modern C++ in their companies. The argument that concepts (itself; so in the sense of syntactic sugar to improve usability without adding new features) will simplify modern C++ for the users and just make it _a_little_bit_ more complicated for library writers is a bad one, since every user is partially also a (library) writer and the other way round. To get an impression of what they are about to add, please have a look at - just the possibility that there are three ways to define one and the same constraint [sorry to all Perl coders] makes me sick:
Next to: introduction of 9 types of constraints (simple, type, compound requirements, nested requirements, etc.), the option to partially order constraints, wildcards (in a new context), new syntax to define constraints of a return type *within* a function body with "->" (WT..?)
Instead of making complex things even more complex, wouldn't now be a good time to deprecate old things in order to make way for something new? At least trivial stuff; i.e. making the assumption that operator new never throws would simplify/enable dozens of move operations and probably save trillions of brain cells programmers waste thinking about how they can avoid a copy ;-)
|
https://root.cern.ch/comment/2351
|
CC-MAIN-2018-47
|
en
|
refinedweb
|
First explain that this is not to say how good the Microsoft platform, how cattle. Just to remind some of the LAMP / JAVA platform comrades, Microsoft platform will not like you said, and thought so unbearable! But you did not know it. Meanwhile, Mic
Wrote Flex data exchange method - httpservice, webservice, RemoteObject, socket. EDITORIAL: Flex using SOAP Web Service interactions with many benefits, but it is slow, the use of cu
spring annotation Keywords: spring Spring JSR-250 annotations Note configuration relative to the XML configuration has many advantages: It can take full advantage of Java's reflection mechanism to get the class structure information, which can reduce
Java App Engine ( ) BitNami Cloud ( ) CloudBees ( ) Microsoft Azure ( ) Makara ( ) - Developer preview
Grails development to master the basic techniques and be able to further independent study of the advanced features of Grails. Grails is built on top of the dynamic language Groovy, an open source MVC Web development framework, Grails is a distinctiv
DataCenter on the java version of the function (export) 1, because the service only supports WINDOWS WIN environment, and therefore in the JAVA platform to achieve the same version of the import and export service functions. 2, first complete the pre
ClassLoader loads a class: Check whether the bottom-up loading - the process of each: cache found in this layer has been loaded, it returns an instance of this class has been loaded, the call ended; that is not loaded, continue to entrust the top. If
Java technology? Java technology is both a high-level object-oriented programming language, but also a platform. Java technology is based on Java Virtual Machine (Java virtual machine, JVM) concept - this is the language and the underlying software a
Records wanted to focus on Beijing 2010 JavaOne knowledge, but unfortunately the Beijing Railway Station in San Francisco than the original shrink too much, do not feel very inadequate force of the original one way or another, so bring the JavaOne 20
Chapter platform-independent 1. Why should platform-independent Created with the Java executable binary can be run without change on multiple platforms. Emerging network of Java embedded devices it highlights another area of expertise, because it's p
Chapter A: Java's architecture: 1: java programming language 2: java class file format 3: java application programming interface (API) 4: java virtual machine The relationship between the four below: Represents the Java runtime environment platform t
java code, platform independence, java runtime environment, java virtual machine, java class, java platform, code execution, code java, java files, java programming language, application programming interface, platform java, java application programming, java api, host system, virtual machine operating system, class loaders, network architecture, hardware platform, time code generatorDecember 25
How to use the Swing full-screen mode? The key is java.awt .* There are two related classes with the display device: GraphicsEnvironment and GraphicsDevice. GraphicsEnvironment applications for the Java platform-specific objects and Font objects Grap
import java, string args, java platform, main string, target, constructor, swing, java awt, awt event, screen mode, image buffer, direct object, printing equipment, graphical environment, screen printer, font objectsDecember 24
Trackback URI: Adobe Flex and AIR over the past relied heavily on Java, including the Eclipse-based IDE and a Java built fully functional use of data services and products, and these products are al
eclipse, java platform, cn news, java application, user interface, servlet container, java applications, design philosophy, infoq, virtue, air applications, portability, client computer, gully, local resources, system features, platform provider, deterioration, desktop platform, newbie guideDecember 22
windows + jdk + tomcat + eclipse + mysql environment to build the platform; --- Specific adjourned;
java platform, configuration windows, platform configurationDecember 21
Java technology consists of four components: 1 JAVA programming language 2 JAVA class file format 3 JAVA virtual machine 4 JAVA application program interface Java runtime environment on behalf of the JAVA platform. JAVA Platform: JVM in a central pos
interface java, java technology, java runtime environment, java virtual machine, java class, java platform, java application, java program, jvm, file format, java programming language, platform java, mandate, registers, computer software, java programs, application program interface, central position, actual computer, imaginary machineDecember 19
About ado Fckedit use the direct write: Download the configuration file: Here we have to the following two: First: FCKeditor_2.6.6.zar, second: fckeditor-java-2.6-bin.zip (in java platform) Third: fckeditor-java-2.6-src.zip (sour
localhost, java platform, quot quot, script type, script src, text javascript, fckeditor, empty string, decompression, input box, textarea cols, java 2, object attributes, basepath, body tag, default width, head tag, correct settings, link script, zip source codeDecember 19
Java in the design and use of ThreadLocal Java 1.2 introduced back in time, Java platform to introduce a new support: java.lang.ThreadLocal, to us in the preparation of multi-threaded program provides a new choice. Using this tool can be very simple
java lang, java code, time java, java platform, variables, language level, class implementation, initial value, xl, interface object, threadlocal, language compiler, implementation version, compiler implementation, popularity, variable values, java thread, new choice, void set, new inventionDecember 15
JDK, JRE, JVM and their connection Articles Category: Java Programming Many friends may, like me, have been developed using JAVA for a long time, but on the JDK, JRE, JVM and differences between these three, has always been vague. Today feature artic
java runtime environment, java development kit, sun jdk, java class, java platform, java application, java program, language structure, api java, java tools, jre, java api, party libraries, java jdk, application program interface, java jvm, basic graphics, development toolkit, jdb, graphics networkDecember 15
As planned, half past three p.m. Admission started, we arrived at ten past three p.m. around the venue, a chaotic scene, in order to receive a badge for a long long row of the team, until half past four and we have good chest card, this time the firs
java development kit, oracle, java platform, efficiency, beijing, blog, admission, cores, java community, javaone, executive committee, t3, venue, t4, today announced that, mark reinhold, java developer kit, chaotic scene, sparc processorDecember 14
Java_SDK = Java Platform Micro Edition Software Development Kit 3.0 for Windows Download https: / / cds.sun.com / is-bin / INTERSHOP.enfinity / WFS / CDS-CDS_Developer-Site
eclipse, installation directory, java platform, java sdk, launch, chinese documents, window menu, software development kit, compilation errors, sun java, configuration settings, import button, micro edition, sdk java, device management, eclipseme, uncaught exception, obfuscation, edition software, jprDecember 14
Recently many things, people are lazy, saw a lot of things, but also think of some things, but is too lazy to write. Record what is now the first two weeks to do a stress test phenomenon, hoping to reopen a good start. Simply put, this is a connectio
java platform, web server, error message, test program, buffer space, closure, phenomenon, many things, mono, queue, simple test, r2, stress test, long wait, bit operating system, windows server, discussion group, program settings, linux problems, language and cultureDecember 14
Add in the startWebLogic.cmd set JAVA_OPTIONS =% JAVA_OPTIONS%-Xdebug-Xnoagent-Djava.compiler = NONE-Xrunjdwp: transport = dt_socket, address = 7777, server = y, suspend = n -Xdebug Activate debugging. -Xnoagent Sun typical VM, both to support the ol
implementation, application server, java platform, interface, debugger, debug, cmd, architecture, remote server, socket address, xdebug, server configuration, client connections, java options, sun tools, jit compiler, socket option, jvm instructions, transport mechanismDecember 7
Although the term cloud computing is not new (Amazon in 2006, began offering its cloud services), but since 2008 it started to really become a popular term, this period, Google and Amazon cloud service gradually gained public attention. Google's App
sun microsystems, java platform, google, abstract concept, open source java, amazon, web application developers, open source database, abstract structure, service implementation, service software, necessary components, storage service, saas, cloud model, digital assets, videos music, public service web, infrastructure applications, public attentionDecember 1
1. Abstract Abstract is to overlook a topic unrelated to the current target those aspects in order to more fully with the current target of attention-related aspects. Abstract does not intend to understand all the problems, but only select one part o
parametric polymorphism, class subclass, data abstraction, instance variables, hierarchical model, access to data, data access, character data, string class, java platform, interview questions, string value, target, class inheritance, interface object, good solution, abstract behavior, application functions, object oriented computing, process abstractionNovember 30
Android to read and write XML (on) - package instructions to modify browse permissions | Delete XML often used as a data format on the Internet, its file format, surely we are more clear, here I am with Android, Android SDK available to illustrate th
xml documents, java technology, xml document, java runtime environment, platform support, java platform, java sdk, document object model, dom document, java programming language, platform one, java xml, java api, stax, example java, w3c dom, dom w3c, function java, many different ways, reading methodsNovember 29
Android those thing in JNI programming First of all that, Android system does not allow use of a pure C / C + + program appears, it requires Java code to be embedded by Native C / C + + - that is the way via JNI to use the local (Native) code. Theref
suffix, lt, naming convention, lib directory, java platform, implementation steps, root directory, library name, java system, c program, native c, usage scenarios, loadlibrary, adb, adt, remount, system partition, host platform, java code execution, viable approachNovember 26
1 Java technology and Java Virtual Machine Speaking of Java, people think of first is Java programming language, but in fact, Java is a technology, which consists of four components: Java programming language, Java class file format, Java virtual mac
interface java, platform independence, java runtime environment, java virtual machine, java language, java platform, code execution, code java, java files, java programming language, application programming interface, java class libraries, java application programming, virtual machine java, programming language java, java virtual machine jvm, java operating system, enabling java, core position, time code generatorNovember 21
JAVA NOKIA recommended NetBeans development environment ( ) + EclipseME ( ). The following is based on the NetBeans development environment configuration steps: 1, download and install the latest version of the Java
java sun, java platform, development environment, main resources, web service, configuration steps, java programming, netbeans ide, forum nokia, menu tools, chinese language pack, eclipseme, platform manager, platform type, www forum, symbian, connection wizard, languanges, address inquiries, mobility webNovember 19
Java authorization internals: Code-centric Java 2 platform security architecture and the Java user-centric authentication and authorization services. In the field of information security, authorization is the center of the world as it is to control t
java platform, system resources, platform security, java 2 platform, java architecture, dynamic access, code snippet, authorization services, run time access, platform sdk, java user, stack inspection, authentication service, computer access, mobile code, authorization model, java security architecture, security authorization, relevant question, center of the worldNovember 13
Although the term cloud computing is not new (Amazon in 2006, began offering its cloud services), but since 2008 it started to really become a popular term, this period, Google and Amazon cloud service gradually gained public attention. Google's App
sun microsystems, java platform, web applications, google, api, microsoft, web service, open source java, amazon, web application developers, open source database, service implementation, storage service, digital assets, videos music, public service web, infrastructure applications, public attentionNovember 11
Language and geographical environment on an important impact on our culture. We communicate with others and life in between the events that have occurred in the language and geographical environment produced by a system. As the different language and
java util, naming convention, java platform, java project, effective software, different languages, java properties, scale projects, international documents, geographical environment, international language, resourcebundle class, software localization, project ideas, different cultures, localization resources, client environments, regional environment, culture environment, resources internationalNovember 5
Transmission of Chinese in the AJAX platform compatibility issues and solutions for collection I mainly discussed for IE and FF, google the chorme and close IE. First, the character set transcoding to explain several key points have been easy after j
lt, utf 8, default character, servlet, java platform, character string, jsp, google, content type, text html, meta, ajax, gbk, character encoding, contrary, ff, chinese character, transformation, chinese platform, platform compatibility issuesNovember 3
After listening to one on the back of dynamic compilation and static on the Java compiled classes, feel that they do not know much in this respect, then finishing the next knowledge points, but also check the internet for some information on Java, dy
java code, java runtime environment, java virtual machine, knowledge points, java platform, java sdk, java program, java performance, platform java, drawback, dynamic compiler, dynamic compilation, java programs, execution engine, compiler option, program execution time, independent class, dynamic java compiler, javac program, minimizationOctober 29
Oh, first to congratulate himself up a blog JavaEye GAE this domain name. GAE = Google App Engine If you need to introduce to see Why learn GAE it? Talk about ideas for their own 1 understand Java and Ecli
eclipse, java platform, google, zh, phenomenon, virtual host, computing platform, scratch, learning materials, chinese learning, current view, intl, global affairs, quota, insufficient attention, national boundaries, lament, worldwide traffic, country rank, japanese studyOctober 27
package map; /* Use special for loop , Here we can print out the map's keys and values * Here we are using a character array a word frequency statistics */ import java.util.*; public class StatisticsOfMap3 { public static void main(String[] args) { s
lt, import java, map, string args, public static void, java platform, string str, main string, java java, statistics, keyset, circulation, treemap, platform 1, m systemOctober
hibernate write to read only cacheyaoyaotv。: zj.528schooi.com6080yy.engdomain:: w software
|
http://www.quweiji.com/tag/java-platform/
|
CC-MAIN-2018-47
|
en
|
refinedweb
|
Makes an allocated copy of an LDAPControl.
#include "slapi-plugin.h" LDAPControl * slapi_dup_control( LDAPControl const *ctrl );
This function takes the following parameter:
ctrl: Pointer to an LDAPControl structure whose contents are to be duplicated.
This function returns a pointer to an allocated LDAPControl structure if successful, or NULL if an error occurs.
This function duplicates the contents of an LDAPControl structure. All fields within the LDAPControl are copied to a new, allocated structure, and a pointer to the new structure is returned.
The structure that is returned should be freed by calling ldap_control_free(3LDAP), an LDAP API function.
See also: ldap_control_free(3LDAP)
|
http://docs.oracle.com/cd/E19693-01/819-0996/aaifn/index.html
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
This article attempts to describe a way of creating a set of classes that are generic enough to allow several kinds of board games to be easily implemented. Only the actual game logic (the rules of the game) should have to change between implementations.
This article has been updated with two things: a reusable GameForm class (a System.Windows.Forms.Form provided by the framework) and a second example game, Connect Four.
Both of these additions are discussed further in this article.
When starting this project, I settled for a set of requirements that the finished code needed to fulfill; among them, rendering had to happen inside a standard System.Windows.Forms.Panel, and implementing a new board game must not require any knowledge of 3D mathematics.
The Visual Studio solution is made up of two projects: the board game framework class library and the Checkers game application.
The game logic (in this example, the Checkers implementation) must implement an interface called IBoardGameLogic, which is defined as:
using System;
using System.Collections.Generic;

namespace Bornander.Games.BoardGame
{
    public interface IBoardGameLogic
    {
        // The member list below is reconstructed from how the interface is
        // used elsewhere in this article; the original listing was garbled.
        int Rows { get; }
        int Columns { get; }
        int this[Square square] { get; }   // piece identifier at a square
        List<Move> Move(Square origin, Square destination);
    }
}
By exposing these few methods, the Framework can control the game flow and make sure that the rules of the game are followed. There is one thing missing, though: there is no way for the Framework to figure out what to display. It can deduce a state of each square, but it has no information about what should be rendered to the screen to visualize that state. This is solved by providing the Framework with an instance of another interface, called IBoardGameModelRepository:
using System;
using Bornander.Games.Direct3D;

namespace Bornander.Games.BoardGame
{
    public interface IBoardGameModelRepository
    {
        // Members reconstructed from usage elsewhere in this article;
        // the original listing was garbled here.
        void Initialize(Microsoft.DirectX.Direct3D.Device device);
        Model GetBoardSquareModel(Square square);
        Model GetBoardPieceModel(int piece);
    }
}
By keeping the interfaces IBoardGameLogic and IBoardGameModelRepository separate, we allow the game logic to be completely decoupled from the visual representation. This is important because we might want to port this game to a Windows Mobile device, for example, where a 2D representation is preferred over a 3D one.
Now that the Framework has access to all of the information it needs for rendering the state of the game, it is time to consider the render implementation. Almost all game rendering is handled by VisualBoard. This class queries IBoardGameLogic and uses the information returned, together with the Models returned by IBoardGameModelRepository, to render both the board and the pieces.
There is one element that is not rendered by VisualBoard and that is the currently selected piece, i.e. the piece the user is currently moving around. Another class called GamePanel, which extends System.Windows.Forms.Panel, handles input as well as selecting and moving pieces around on the board. This type of implementation might seem to lower the inner cohesion of the GamePanel class, but I decided to do it this way because I want VisualBoard to render the state of the board game. That state does not know anything about a piece currently being moved.
These are the classes in the class library:
Move
Square
Camera
Mesh
Material
These are the classes in the Checkers application:
CheckersModelRepository
CheckersLogic
Rendering the state of the game is pretty straightforward: loop over all board squares, render the square and then render any piece occupying that square. Simple. However, we also need to indicate to the user which moves are valid. This is done by highlighting the board square under the mouse if it is valid to move from that square (or to that square when "holding" a piece).
It is important to mention that the Framework makes the assumption that the squares are 1 unit wide and 1 unit deep (height is up to the game developer to decide). This must be taken into account when creating the meshes for the game. To help out with this, the Model class holds two materials: one "normal" material and one "highlighted" material.
By checking whether the model is in state "Selected," it sets its material to either normal or highlighted just prior to rendering, like this:
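The listing was garbled in this copy; a minimal sketch of the idea, assuming Model keeps its mesh and a two-element material array (index 0 normal, index 1 highlighted; the field names are illustrative):
public void Render(Device device)
{
    // Use the highlighted material while selected, the normal one otherwise.
    device.Material = Selected ? materials[1] : materials[0];
    // (Setting the world transform from Position/PositionOffset/Orientation
    // is omitted in this sketch.)
    mesh.DrawSubset(0);
}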
VisualBoard simply sets the selected state for each board square model before rendering it in its Render method:
public class VisualBoard
{
    ...
    public void Render(Device device)
    {
        for (int row = 0; row < gameLogic.Rows; ++row)
        {
            for (int column = 0; column < gameLogic.Columns; ++column)
            {
                Square currentSquare = new Square(row, column);
                Model boardSquare = boardGameModelRepository.GetBoardSquareModel(currentSquare);
                boardSquare.Position = new Vector3((float)column, 0.0f, (float)row);
                boardSquare.Selected = currentSquare.Equals(selectedSquare);
                boardSquare.Render(device);

                // Check that the current piece isn't grabbed by the mouse,
                // because in that case we don't render it.
                if (!currentPieceOrigin.Equals(currentSquare))
                {
                    // Check which kind of model we need to render, move our
                    // "template" to the right position and render it there.
                    Model pieceModel = boardGameModelRepository.GetBoardPieceModel(gameLogic[currentSquare]);
                    if (pieceModel != null)
                    {
                        pieceModel.Position = new Vector3((float)column, 0.0f, (float)row);
                        pieceModel.Render(device);
                    }
                }
            }
        }
    }
}
Figuring out which square is actually selected is a matter of finding which square is "under" the mouse. In 2D, this is a really simple operation, but it gets slightly more complicated in 3D. We need to grab the mouse coordinates on the screen and, using the Projection and View matrices, un-project the screen coordinates to 3D coordinates. Then, when we have our mouse position as a 3D position, we can cast a ray towards all our board square Models to see if we get an intersection. The code for this can be difficult to understand for someone not used to 3D mathematics. This is why the Framework must take care of it for us so that we (the guy or gal implementing a board game) don't have to worry about such things. A function handles all this in the VisualBoard class:
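A minimal sketch of the picking approach described above (not the article's actual GetMouseOverBlockModel): it assumes Managed DirectX's Vector3.Unproject and Mesh.Intersect, whose exact signatures should be treated as assumptions to verify, plus a blockMesh field and a simplified parameter list.
// Sketch only: un-project the cursor at the near and far planes to get a
// world-space ray, then test the ray against each square's geometry.
private bool PickSquare(Device device, int mouseX, int mouseY, out Square square)
{
    // Points on the near (z = 0) and far (z = 1) planes under the cursor.
    Vector3 nearPoint = Vector3.Unproject(new Vector3(mouseX, mouseY, 0.0f),
        device.Viewport, device.Transform.Projection, device.Transform.View, Matrix.Identity);
    Vector3 farPoint = Vector3.Unproject(new Vector3(mouseX, mouseY, 1.0f),
        device.Viewport, device.Transform.Projection, device.Transform.View, Matrix.Identity);
    Vector3 rayDirection = farPoint - nearPoint;

    for (int row = 0; row < gameLogic.Rows; ++row)
    {
        for (int column = 0; column < gameLogic.Columns; ++column)
        {
            // Move the ray origin into the square's local space (squares are
            // 1 x 1 units, positioned at (column, 0, row)) before intersecting.
            Vector3 localOrigin = nearPoint - new Vector3((float)column, 0.0f, (float)row);
            if (blockMesh.Intersect(localOrigin, rayDirection))
            {
                square = new Square(row, column);
                return true;
            }
        }
    }
    square = default(Square);
    return false;
}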
Obviously, there has to be a way of moving pieces around on the board. I decided that the most intuitive way of doing this is to grab and drag pieces using the left mouse button. That is really all the input a simple board game needs, but I also wanted to allow the user to view the board from different angles. This means positioning the camera at different places. I decided that using the right mouse button and dragging should be used for this, and that scrolling the mouse wheel should zoom in and out. This means I have to handle MouseDown, MouseUp, MouseMove and MouseWheel in the GamePanel class:
public void HandleMouseWheel(object sender, MouseEventArgs e)
{
    // If the user scrolls the mouse wheel we zoom out or in
    cameraDistanceFactor = Math.Max(0.0f, cameraDistanceFactor + Math.Sign(e.Delta) / 5.0f);
    SetCameraPosition();
    Render();
}

private void GamePanel_MouseMove(object sender, MouseEventArgs e)
{
    // Dragging using the right mouse button moves the camera
    // along the X and Y axis.
    if (e.Button == MouseButtons.Right)
    {
        cameraAngle += (e.X - previousPoint.X) / 100.0f;
        cameraElevation = Math.Max(0, cameraElevation + (e.Y - previousPoint.Y) / 10.0f);
        SetCameraPosition();
        previousPoint = e.Location;
    }
    Square square;
    if (e.Button == MouseButtons.Left)
    {
        if (ponderedMove != null)
        {
            if (board.GetMouseOverBlockModel(device, e.X, e.Y, out square, ponderedMove.Destinations))
            {
                // Set the dragged piece's location to the current square
                selectedPiecePosition.X = square.Column;
                selectedPiecePosition.Z = square.Row;
            }
        }
    }
    else
    {
        board.GetMouseOverBlockModel(device, e.X, e.Y, out square,
            GamePanel.GetSquaresFromMoves(availableMoves));
    }
    // Render since we might have moved the camera
    Render();
}

private void GamePanel_MouseDown(object sender, MouseEventArgs e)
{
    // The previous point has to be set here or the distance dragged
    // can be too big.
    previousPoint = e.Location;
    // If the mouse is over a block (see GetMouseOverBlockModel for details on
    // how that is determined) and the left button is down, try to grab the
    // piece (if there is one at the square and it has valid moves).
    if (e.Button == MouseButtons.Left)
    {
        ponderedMove = null;
        Square square;
        if (board.GetMouseOverBlockModel(device, e.X, e.Y, out square, null))
        {
            foreach (Move move in availableMoves)
            {
                // We have a move and it is started from the square
                // we're over, start dragging a piece
                if (square.Equals(move.Origin))
                {
                    selectedPieceModel = board.PickUpPiece(square);
                    selectedPiecePosition = new Vector3(square.Column, 1.0f, square.Row);
                    ponderedMove = move;
                    break;
                }
            }
        }
    }
    Render();
}

private void GamePanel_MouseUp(object sender, MouseEventArgs e)
{
    if (e.Button == MouseButtons.Left)
    {
        Square square;
        if (board.GetMouseOverBlockModel(device, e.X, e.Y, out square, null))
        {
            // ponderedMove keeps track of the current potential move that will
            // take place if we drop the piece onto a valid square; if
            // ponderedMove is not null that means we're currently dragging a piece.
            if (ponderedMove != null)
            {
                foreach (Square allowedSquare in ponderedMove.Destinations)
                {
                    // Was it dropped on a square that's a legal move?
                    if (square.Equals(allowedSquare))
                    {
                        // Move the piece to the target square
                        availableMoves = gameLogic.Move(ponderedMove.Origin, allowedSquare);
                        break;
                    }
                }
            }
        }
        board.DropPiece();
        selectedPieceModel = null;
        Render();
        CheckForGameOver();
    }
}
The mouse methods check if they're called as a result of a left mouse button press. If they are, they use the VisualBoard.GetMouseOverBlockModel method to determine whether the event occurred when the cursor was over a specific square. This is then used to figure out if the user is allowed to pick up a piece from or drop a piece onto the current square. Also, VisualBoard.GetMouseOverBlockModel internally handles square highlighting automatically.
If the right mouse button is down when dragging the mouse, I can figure out the delta between two updates and use that information to update two members. A third member is updated when the mouse wheel is scrolled:
private float cameraAngle = -((float)Math.PI / 2.0f);
private float cameraElevation = 7.0f;
private float cameraDistanceFactor = 1.5f;
Another method in GamePanel then uses that information to calculate a position for the camera. This position is constrained to a circle around the board (the radius is adjusted when zooming) and the camera is also constrained along the Y-axis to never go below zero:
private void SetCameraPosition()
{
    // Calculate a camera position; this is a radius from the center
    // of the board and then cameraElevation up.
    float cameraX = gameLogic.Columns / 2.0f +
        (cameraDistanceFactor * gameLogic.Columns * (float)Math.Cos(cameraAngle));
    float cameraZ = gameLogic.Rows / 2.0f +
        (cameraDistanceFactor * gameLogic.Rows * (float)Math.Sin(cameraAngle));
    camera.Position = new Vector3(cameraX, cameraElevation, cameraZ);
}
The class CheckersModelRepository is used to create all of the models used to render the Checkers game. It implements IBoardGameModelRepository so that the Framework has a generic way of accessing the models using the IBoardGameLogic data.
class CheckersModelRepository : IBoardGameModelRepository
{
    ...
    public void Initialize(Microsoft.DirectX.Direct3D.Device device)
    {
        // Reconstructed (the original statement was garbled in this copy):
        // the 1 x 1 unit board squares, cloned to the vertex format discussed
        // later in this article.
        Mesh blockMesh = Mesh.Box(device, 1.0f, 0.5f, 1.0f).Clone(
            MeshFlags.Managed,
            VertexFormats.PositionNormal | VertexFormats.Specular, device);

        // Create some red and black material and their
        // highlighted counterparts.
        Material redMaterial = new Material();
        redMaterial.Ambient = Color.Red;
        redMaterial.Diffuse = Color.Red;

        Material highlightedRedMaterial = new Material();
        highlightedRedMaterial.Ambient = Color.LightSalmon;
        highlightedRedMaterial.Diffuse = Color.LightSalmon;

        Material squareBlackMaterial = new Material();
        Color squareBlack = Color.FromArgb(0xFF, 0x30, 0x30, 0x30);
        squareBlackMaterial.Ambient = squareBlack;
        squareBlackMaterial.Diffuse = squareBlack;

        Material blackMaterial = new Material();
        blackMaterial.Ambient = Color.Black;
        blackMaterial.Diffuse = Color.Black;

        Material highlightedBlackMaterial = new Material();
        highlightedBlackMaterial.Ambient = Color.DarkGray;
        highlightedBlackMaterial.Diffuse = Color.DarkGray;

        Material[] reds = new Material[] { redMaterial, highlightedRedMaterial };
        Material[] blacks = new Material[] { blackMaterial, highlightedBlackMaterial };

        blackSquare = new Model(blockMesh,
            new Material[] { squareBlackMaterial, highlightedBlackMaterial });
        redSquare = new Model(blockMesh, reds);
        blackSquare.PositionOffset = new Vector3(0.0f, -0.25f, 0.0f);
        redSquare.PositionOffset = new Vector3(0.0f, -0.25f, 0.0f);

        // Create meshes for the pieces.
        Mesh pieceMesh = Mesh.Cylinder(device, 0.4f, 0.4f, 0.2f, 32, 1).Clone(
            MeshFlags.Managed,
            VertexFormats.PositionNormal | VertexFormats.Specular, device);
        Mesh kingPieceMesh = Mesh.Cylinder(device, 0.4f, 0.2f, 0.6f, 32, 1).Clone(
            MeshFlags.Managed,
            VertexFormats.PositionNormal | VertexFormats.Specular, device);

        redPiece = new Model(pieceMesh, new Material[] { redMaterial, redMaterial });
        blackPiece = new Model(pieceMesh, new Material[] { blackMaterial, blackMaterial });
        redKingPiece = new Model(kingPieceMesh, new Material[] { redMaterial, redMaterial });
        blackKingPiece = new Model(kingPieceMesh, new Material[] { blackMaterial, blackMaterial });

        redPiece.PositionOffset = new Vector3(0.0f, 0.1f, 0.0f);
        redKingPiece.PositionOffset = new Vector3(0.0f, 0.3f, 0.0f);
        blackPiece.PositionOffset = new Vector3(0.0f, 0.1f, 0.0f);
        blackKingPiece.PositionOffset = new Vector3(0.0f, 0.3f, 0.0f);

        // Reconstructed (the original statement was garbled in this copy);
        // per the description below, pieces rotate PI / 2 radians around X:
        Quaternion rotation = Quaternion.RotationAxis(
            new Vector3(1.0f, 0.0f, 0.0f), (float)Math.PI / 2.0f);
        redPiece.Orientation = rotation;
        blackPiece.Orientation = rotation;
        redKingPiece.Orientation = rotation;
        blackKingPiece.Orientation = rotation;
    }
}
I will not explain in detail how 3D math is used to rotate and translate objects in 3D space, but I will explain what is done in the example implementation. Code statements like redPiece.PositionOffset = new Vector3(0.0f, 0.1f, 0.0f); are used to make sure that the "origin" of the model is offset by 0.1 along the Y-axis. This is done because Mesh::Cylinder creates a cylinder with the origin in the center of the cylinder and we need it to be at the edge of the cylinder for it to be placed correctly on the board. Also, we have to rotate 90 degrees (PI / 2 radians) around the X-axis, because the cylinder is created extending along the Z-axis and we want it to extend along the Y-axis. This is why the offset and rotation statements shown in the listing above are used.
It is also important to have a Mesh that has a vertex format that suits our purposes. A vertex in a 3D model can contain different information depending on how it is going to be used. At the very least, Position data must be included. However, if the model is to have a color, Diffuse data must also be included. In the Framework, a directional light is used to shade the scene to look nicer. Because of this, Normal data must also be included.
The Mesh returned from the static methods on Mesh used to create different geometrical meshes (such as boxes and cylinders) does not return a Mesh with the vertex format we want. To fix this, we clone the mesh and pass the desired vertex format when cloning:
// Reconstructed example (the original statement was garbled in this copy);
// the clone requests the PositionNormal | Specular vertex format:
Mesh blockMesh = Mesh.Box(device, 1.0f, 0.5f, 1.0f).Clone(
    MeshFlags.Managed, VertexFormats.PositionNormal | VertexFormats.Specular, device);
(This section added in version 2)
When implementing a second game, Connect Four, I realized that the entire form setup could be reused, so I decided to provide an implementation for it in the API class library. This lessened the actual amount of code required when implementing a game. The form creation and startup code for the Checkers game is then reduced to this:
static void Main() // method header reconstructed; a typical WinForms entry point
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new GameForm(
new CheckersLogic(),
new CheckersModelRepository(),
"Checkers",
"Checkers, a most excellent game!"));
}
This creates a new GameForm object and passes IBoardGameLogic, IBoardGameModelRepository, the window title and the About text in the constructor. To make the GameForm class able to display which player's turn it is, I had to add another method to IBoardGameLogic. I didn't want GameForm to have to poll the game logic for this information and decided to use a callback implemented with delegates. This required an additional method on the interface, as well as a delegate:
public delegate void NextPlayerHandler(string playerIdentifier);

public interface IBoardGameLogic
{
    ...
    void SetNextPlayerHandler(NextPlayerHandler nextPlayerHandler);
}
Now the form implementation can add one of its methods as NextPlayerHandler to the game logic. It is up to the game logic to indicate when the player changes. Super simple!
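For example, the wiring might look like this (a sketch; the label name is illustrative):
// In GameForm, after receiving the IBoardGameLogic instance:
gameLogic.SetNextPlayerHandler(new NextPlayerHandler(OnNextPlayer));

private void OnNextPlayer(string playerIdentifier)
{
    // Called by the game logic whenever the turn passes to the next player.
    currentPlayerLabel.Text = "Current player: " + playerIdentifier;
}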
In order to show how easy it would be to implement another game, I decided to write a Connect Four game using this API. I chose Connect Four because it is fundamentally different from Checkers in some ways. I wanted to show that, regardless of these differences, it would be not only possible, but actually quite simple to implement.
The biggest difference is that in Connect Four you do not start out with all the pieces on the board. Rather, you pick them from a pile and then place them on the board. By using a "logical" board that is larger than the board actually used, I created two areas from which an endless supply of pieces could be picked. By having IBoardGameModelRepository return null for the squares that weren't part of either the actual board or the "pile" areas, GamePanel could ignore rendering of these squares.
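A sketch of how the Connect Four repository might express this (the helper names and pile-area layout are illustrative, not from the original source):
public Model GetBoardSquareModel(Square square)
{
    // Squares outside the actual board and the two pick-up pile areas get no
    // model; GamePanel skips rendering squares for which null is returned.
    if (IsOnActualBoard(square) || IsInPileArea(square))
    {
        return boardSquare;
    }
    return null;
}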
Shown above is the Connect Four implementation using very awesome-looking teapots as pieces. The actual game logic implementation for the Connect Four game is quite simple and took about the time it takes to take the train from London to Brighton and back again.
So, how does the final implementation live up to the requirements I set out to fulfill? Sadly, I have to say that I failed to comply fully with requirement #3, that being that when implementing a new board game, the person implementing does not need to have any knowledge of 3D mathematics or Direct3D. The need for 3D knowledge can be seen in the CheckersModelRepository class where meshes are created, translated and rotated (rotated using scary quaternions, no less!). This is stuff that requires at least a beginner's knowledge of 3D mathematics.
This is also quite far from being a complete Framework, as it does not currently support a computer player. Furthermore, since I decided that I should not require any game loop, there is no smooth animation when moving pieces around. Other than that, I think it turned out quite well. It took me less than an hour to implement the Checkers game once the Framework was fully implemented and I think that indicates that it is easy to implement board games using this Framework.
I appreciate any comments, both on the code and the article.
|
http://www.codeproject.com/Articles/21337/DirectX-Board-Game-Engine?fid=854105&df=90&mpp=10&noise=1&prof=True&sort=Position&view=None&spc=None
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
Using Iron Condors to Create Profits Trading SPX
So what is a trader to do? The first advice worth offering is to utilize patience. Let others do battle and wait for the market to confirm a specific direction. Professional traders always have a plan before they enter a trade and they consistently utilize stops to define their risk. The very best of traders do not allow their opinions or the opinions of others to cloud their judgment; professional traders will abruptly change their trading plans in order to adapt to changing market conditions.
Trading is all about perception and leveraging probability. Regardless of whether a trader utilizes technical analysis, fundamental analysis, or the newspaper-dart method the very best traders realize that consistently taking money out of the market is more about managing emotions and probability than anything else.
The market always leaves clues behind, but if a trader is too biased in one direction or the other he/she becomes blind to clues that do not fit his/her directional bias. The current state of affairs in the S&P 500 offers another quality setup, regardless of which bias a trader has. With option expiration looming, a new option cycle presents itself with expiration at the end of September (Quarterlys). My most recent missive focused on option butterflies; however, the current situation on the S&P calls for a wider trading range. We now find ourselves in condor season.
Condors and iron condors have similar setups, but they have slightly different constructions. Theta (time decay) is the primary profit engine just like traditional butterflies; the only difference is that condors and iron condors offer potentially wider profit zones than a traditional butterfly. Similar to butterflies, condors are susceptible to volatility shocks, expanding implied volatility on the underlying, and gamma risk can also present itself and negatively impact a trade's overall performance.
The most important thing to remember about option trading is that as one progresses in his/her overall option knowledge, options allow a trader to modify their position to reduce risk and allow positions to become profitable.
While both types of condors are susceptible to the same risks, their primary functional difference is based around their construction. Both condors and iron condors have 4 separate and specific legs. A traditional condor utilizes 4 option contracts of the same type: 4 calls or 4 puts. Iron condors utilize a mixture of calls and puts: 2 calls and 2 puts. Another primary difference is that condors are a debit trade, while iron condors are a credit trade.
A trader with less capital could utilize the SPY in the same manner, with less capital at risk and tighter bid/ask spreads. For accounts exposed to the ravages of the tax system, it is important to remember there is preferential tax treatment of the cash settled index options and futures options that are not present in the SPY.
The iron condor is set up using 4 separate option contracts - 2 calls and 2 puts. The iron condor has the following construction ratio: Long 1 Put/Short 1 put/Short 1 Call/Long 1 Call. Each of these two vertical spreads is constructed as a credit spread. In our case, we are going to use the following strike prices for our example. Keep in mind, a trader willing to take more risk could use strikes which are closer for the potential of higher returns (more risk). On the other hand, those who are more risk averse could move the short strikes further apart for a lower return (less risk).
The chart below represents the profitability of an SPX iron condor using the following trade construction: Long 1 Sept (Quarterly) SPX 1050 Put/Short 1 Sept. (Quarterly) 1060 Put/Short 1 Sept. (Quarterly) 1165 Call/Long 1 Sept. (Quarterly) 1170 Call. For further detailed information, prices used to produce this iron condor were based on the Thursday close and the midpoints of the bid/ask spread on all contracts. The profitability reflected below is based on a 1/1/1/1 setup. Obviously, if a trader decided to add more contracts, the max profit and loss would increase. Keep in mind, this example is for educational purposes only and is not reflective of intraday market prices.
The red line represents profit/loss at expiration. The white line represents profit today. As you can tell, the potential profit for today is essentially zero unless a substantial deterioration of implied volatility were to occur. The key to this entire trade is the passage of time. If the SPX stays between 1060 and 1165 at expiration on September 30th, the trade will realize the maximum profit of $160. The total risk taken by this trade would be $840.
The beauty, as always with options, is that risk is crisply defined. The absolute most you could lose on this trade, regardless of what happens, is $840 (only one side of the condor can be in the money at expiration). As a side note, the probability of the SPX's price remaining within the 1060-1165 range over the next two weeks is around 70%, based on a log normal (Gaussian) distribution of prices.
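To see where these figures come from (assuming the standard $100-per-point SPX option multiplier and the $160 net credit quoted above):
Maximum profit = net credit received = $160 per condor.
Put-spread width = 1060 - 1050 = 10 points x $100 = $1,000; maximum loss on that side = $1,000 - $160 = $840.
Return on risk = $160 / $840 ≈ 19%.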
Additionally, iron condors can be manipulated throughout their lifespan to defend profits. The ability to make slight changes to the construction by purchasing slightly out of the money puts/calls can also help protect profits if price gets near the edge of the profitability window. A myriad of strategies exist once this trade is placed to adapt to ever changing market conditions.
As an example, let us assume that price goes higher to around SPX 1150 in one week. At that price point, we could close the put portion of the condor for the maximum gain and then restructure our condor to protect the call side with a slightly out of the money call purchase and/or another put credit spread at a higher strike point taking in more premium and further reducing our risk.
After a trader becomes proficient with the various option trading strategies, he/she can constantly adapt positions to prevent further losses. After all, options were designed primarily as a means to hedge equity positions and reduce risk.
In closing, the iron condor strategy can be profitable regardless of which direction an underlying's price goes. There is no guesswork and no fake-outs: as long as the inevitable passage of time continues and price stays between the strikes that were sold to open the position, a near 19% return is possible based on capital at risk.
|
http://www.safehaven.com/article/18222/using-iron-condors-to-create-profits-trading-spx
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
import a macro-enabled Excel worksheet into Stata 12?
I don't know the answer for sure, but a first step to find it could be
to create a trivial 'light' file with a couple of numbers and a simple
macro, save it and try to import. If it fails on the simplest file -
then it is likely not gonna import your real 'heavy' file. If it does,
then we know that the feature exists, and we could look into how the
'heavy' file differs from the 'light' one.
Best, Sergiy
On Fri, Jun 28, 2013 at 4:25 PM, John Bensin <johnalexbensin@gmail.com> wrote:
|
http://www.stata.com/statalist/archive/2013-06/msg01345.html
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
RuntimeHelpers.GetHashCode Method (Object)
Serves as a hash function for a particular object, and is suitable for use in algorithms and data structures that use hash codes, such as a hash table.
Assembly: mscorlib (in mscorlib.dll)
Parameters
- o
- Type: System.Object
An object to retrieve the hash code for.
Return Value
Type: System.Int32
A hash code for the object identified by the o parameter.
The RuntimeHelpers.GetHashCode method always calls the Object.GetHashCode method non-virtually, even if the object's type has overridden the Object.GetHashCode method. Therefore, using RuntimeHelpers.GetHashCode might differ from calling GetHashCode directly on the object with the Object.GetHashCode method.
The Object.GetHashCode and RuntimeHelpers.GetHashCode methods differ as follows:
Object.GetHashCode returns a hash code that is based on the object's definition of equality. For example, two strings with identical contents will return the same value for Object.GetHashCode.
RuntimeHelpers.GetHashCode returns a hash code that indicates object identity. That is, two string variables whose contents are identical and that represent a string that is interned (see the String Interning section) or that represent a single string in memory return identical hash codes.
This method is used by compilers.
The common language runtime (CLR) maintains an internal pool of strings and stores literals in the pool. If two strings (for example, str1 and str2) are formed from an identical string literal, the CLR will set str1 and str2 to point to the same location on the managed heap to conserve memory. Calling RuntimeHelpers.GetHashCode on these two string objects will produce the same hash code, contrary to the second bulleted item in the previous section.
The CLR adds only literals to the pool. Results of string operations such as concatenation are not added to the pool, unless the compiler resolves the string concatenation as a single string literal. Therefore, if str2 was created as the result of a concatenation operation, and str2 is identical to str1, using RuntimeHelpers.GetHashCode on these two string objects will not produce the same hash code.
If you want to add a concatenated string to the pool explicitly, use the String.Intern method.
You can also use the String.IsInterned method to check whether a string has an interned reference.
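For instance, a minimal sketch:
String str1 = "This string";
String str2 = String.Format("{0} {1}", "This", "string");
// str2 equals str1 but is not interned, so RuntimeHelpers.GetHashCode
// returns different values for the two variables.
str2 = String.Intern(str2);
// str2 now refers to the pooled instance that str1 refers to, so
// RuntimeHelpers.GetHashCode(str1) == RuntimeHelpers.GetHashCode(str2).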
The following example demonstrates the difference between the Object.GetHashCode and RuntimeHelpers.GetHashCode methods. The output from the example illustrates the following:
Both sets of hash codes for the first set of strings passed to the ShowHashCodes method are different, because the strings are completely different.
Object.GetHashCode generates the same hash code for the second set of strings passed to the ShowHashCodes method, because the strings are equal. However, the RuntimeHelpers.GetHashCode method does not. The first string is defined by using a string literal and so is interned. Although the value of the second string is the same, it is not interned, because it is returned by a call to the String.Format method.
In the case of the third string, the hash codes produced by Object.GetHashCode for both strings are identical, as are the hash codes produced by RuntimeHelpers.GetHashCode. This is because the compiler has treated the value assigned to both strings as a single string literal, and so the string variables refer to the same interned string.
using System;
using System.Runtime.CompilerServices;

public class Example
{
    public static void Main()
    {
        Console.WriteLine("{0,-18} {1,6} {2,18:N0} {3,6} {4,18:N0}\n",
                          "", "Var 1", "Hash Code", "Var 2", "Hash Code");

        // Get hash codes of two different strings.
        String sc1 = "String #1";
        String sc2 = "String #2";
        ShowHashCodes("sc1", sc1, "sc2", sc2);

        // Get hash codes of two identical non-interned strings.
        String s1 = "This string";
        String s2 = String.Format("{0} {1}", "This", "string");
        ShowHashCodes("s1", s1, "s2", s2);

        // Get hash codes of two (evidently concatenated) strings.
        String si1 = "This is a string!";
        String si2 = "This " + "is " + "a " + "string!";
        ShowHashCodes("si1", si1, "si2", si2);
    }

    private static void ShowHashCodes(String var1, Object value1, String var2, Object value2)
    {
        Console.WriteLine("{0,-18} {1,6} {2,18:X8} {3,6} {4,18:X8}",
                          "Obj.GetHashCode", var1, value1.GetHashCode(),
                          var2, value2.GetHashCode());
        Console.WriteLine("{0,-18} {1,6} {2,18:X8} {3,6} {4,18:X8}\n",
                          "RTH.GetHashCode", var1, RuntimeHelpers.GetHashCode(value1),
                          var2, RuntimeHelpers.GetHashCode(value2));
    }
}
// The example displays output similar to the following:
//                     Var 1      Hash Code     Var 2      Hash Code
//
// Obj.GetHashCode     sc1        94EABD27      sc2        94EABD24
// RTH.GetHashCode     sc1        02BF8098      sc2        00BB8560
//
// Obj.GetHashCode     s1         29C5A397      s2         29C5A397
// RTH.GetHashCode     s1         0297B065      s2         03553390
//
// Obj.GetHashCode     si1        941BCEA5      si2        941BCEA5
// RTH.GetHashCode     si1        01FED012      si2        01FED012
Universal Windows Platform
Available since 8
.NET Framework
Available since 1.1
Portable Class Library
Supported in: portable .NET platforms
Silverlight
Available since 2.0
Windows Phone Silverlight
Available since 7.0
Windows Phone
Available since 8.1
|
https://technet.microsoft.com/en-us/library/11tbk3h9.aspx
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
This chapter provides conceptual information about Oracle Communications Order and Service Management (OSM) orders.
Before reading this chapter, read "Order and Service Management Overview" to find out about basic OSM concepts.
An order in OSM contains all the data necessary to fulfill the products and services requested by an incoming customer order.
When a customer order is captured in a CRM or other order-source system, it includes data such as the customer's name and contact information, customer billing information, the products that the customer is ordering, and the requested date of delivery. A subset of that information is included in the customer order that is sent to OSM; for example, the customer information and the order line items that specify the service actions that need to be performed.
After the order is created in OSM, the order includes the data needed for processing the order, as well as information that specifies how to complete the order; for example, the default process to run and the order life-cycle policy. See "What an Order Contains" for more information.
This section introduces the terminology used in the OSM documentation when describing orders:
Customer order: An order captured in a CRM or other order-source system and submitted to OSM; sometimes called a sales order.
Order: An order in the OSM format. You model orders by creating order specifications in Design Studio.
Service order: An order received by an OSM instance acting in the service order management role. A service order is sometimes called a provisioning order.
Revision order: An order that modifies a previously submitted order that is still being processed. For example, a customer may want to switch to a higher level of service before an order is completed. The system can process revision orders until the original order reaches its point of no return. A revision order is sometimes called a supplemental order.
Follow-on order: An order that is submitted to modify a completed order. Follow-on orders are not processed until their order-item dependencies on the in-flight orders allow them to proceed. Follow-on orders are also used for sequencing orders.
Orders that are submitted to OSM typically have a specific purpose that is defined as an order action. This information is usually included in the order header to indicate if the order adds, deletes, or moves a service. For example, the following line from an incoming customer order specifies that the order adds services:
<im:typeOrder>Add</im:typeOrder>
In addition to orders that manage services in different ways, you can create orders for specific order-management purposes. For example:
An order that is created to manage fallout handling.
An order that communicates with a single external system to provision and activate a specific service. For example, to manage a certain configuration, you might create two order types:
An order that is processed by OSM in the central order management role, which handles all of the fulfillment functions.
A service order that is created by OSM in the central order management role and is sent to an instance of OSM acting in the service order management role. This order type would manage the fulfillment requirements specific to the provisioning system.
You typically model a different order type when the structure or order data is different from any existing order type, or when there are specific and different fulfillment requirements.
You can use multiple order specifications to create multiple order types. Each order specification that you create defines a different order type. See "About Modeling Order Specifications" for more information. In addition, you can use inheritance to manage common configurations between orders. See "Re-Using an Order Specification" for more information.
Each order line item in an incoming customer order that OSM receives specifies an action to perform. Order line item actions are typically one of the following:
Add a product or service.
Change an existing product or service.
Delete a product or service.
Update attributes of a product or service.
Cancel an existing product or service.
Move a product or service.
Suspend or resume a product or service.
An order can contain a mix of actions for different products or services. For example, an existing customer might request to add some new products, change some existing products, and remove other products. These can all be included on the same order. See "About Order Items" for more information.
Each order type uses a different order specification. When you model order specifications, you can define the following:
The order data. The data an order can contain is defined in the order template. (See "About the Order Template" for more information.) The order data is initially populated by the creation task. The creation task is used to create an order instance and define its required data. The creation task is required in all orders. See "About Modeling Order Data" and "About the Creation Task" for more information.
The default process that is run when the order is started. See "About the Default Process" for more information.
The order life-cycle policy. Every order type you create must be associated with an order life-cycle policy. The life-cycle policy defines the states that an order can be in (such as In Progress or Canceled), the rules governing the transitions between those states, and who is authorized to initiate those transitions. For example, you can specify that an order can be transitioned to the Suspended state only when it is in the In Progress state, and only by OSM users of a designated role. See "About OSM Order Life-Cycle Management" for more information.
The order priority range. An order priority value is used by OSM at run time to determine which orders should be given more processing resources when the system is under maximum load. See "About Specifying the Order Priority" for more information.
Order rules. The rules in an order control how various actions take place; for example, when to trigger a jeopardy notification and how delays in the order process should be handled. See "About Order Rules" for more information.
Order fallout definitions. Order fallout definitions enable you to identify specific order data that can cause fallout, and to use order change management to compensate for the error and proceed with processing the order.
For example, it might be common for a task that activates a port to return an error that the port is already in use. The fallout definition can identify the port ID as the data that needs correcting. This allows OSM to undo the resource assignment task in the inventory system, so the task can be redone and the port ID corrected. The order can then resume processing with the corrected data.
See "Order Fallout Management" for more information.
Order-based behaviors. You can use behaviors to manipulate data and to control how data is displayed in the Task Web client. For example, you can validate data, specify the contents of a list, calculate values, or create tooltips for fields. See "About Behaviors" for more information.
Notifications to send when specified events occur, when the order is in jeopardy, or when specific order data has changed. Users can display notifications on the Task Web client Notifications page or receive them in email. You can use notifications with automation plug-ins to send messages to other systems or perform other business logic. See "About Notifications" for more information.
Order permissions. Order permissions control the actions that workgroups can perform on orders. See "About Setting Permissions for Orders" for more information.
At run time, an order includes the data needed for service fulfillment, as well as information about how to process the order. An order includes the following:
The order data. The data an order can contain is defined in the order template. The order template also includes control data. Control data is used by OSM to create the orchestration plan and includes order item data and the structure of the function order components, which represent the first level of decomposition. See "About Modeling Order Data" for more information.
The orchestration plan. The orchestration plan includes the order components, order items, the dependencies between them, and the order in which order items need to be processed.
You do not specify an orchestration plan when you create an order specification. You define the default process, which, for orchestration orders, is an orchestration process. See "About the Default Process" and "Understanding Orchestration" for more information.
You can display the orchestration plan, and the order components and order items included in it, in the Order Management Web client.
The tasks run by the order. You can display information about tasks in the Task Web client. You can also display historical information about the tasks.
When you create a new order model in Oracle Communications Design Studio, you can base the order on an existing order. When you extend an order specification, the extended specification inherits all of the data, tasks, rules, and behaviors of the base specification. You can add new data and behaviors to define unique order specifications and functionality. When you modify a base order specification, the order specifications extended from it are also modified. This means that you can make changes in one place, in the base specification, and those changes apply to the orders that are extended from the base specification.
For example, you might have three order specifications that share a common set of data. You can create a base order that includes configurations common to all three orders. You can then add configurations to each of the three order specifications for the data that is unique to each order specification.
When defining an order specification that is inherited from a base order specification, you cannot edit the inherited order data. For example, you cannot remove or rename data elements inherited from the base order specification. To implement changes to the inherited data, you must edit the data in the base order specification. Design Studio automatically implements those changes among all of the extended order specifications.
When you model the data in an order, you specify the data that the order must include to fulfill the service. For example, in an order for a telephone service, the order must include telephone number data.
The data elements that you can use in an order are defined in the Design Studio Data Dictionary. When you define order data, you can use data elements that already exist in the Data Dictionary data schemas, or you can create new data elements and add them to the Data Dictionary. See "About Importing the Incoming Customer Order Data into the Data Dictionary" for more information.
You can specify alias names for data elements. For example, you might have a data model that contains two instances of a data element called EmployeeID: one defined as a string (defined by the employee's name and a two-digit number), the other defined as an integer (defined by a 6-digit number). To avoid data type collisions in the run-time environment, you can rename one instance of the EmployeeID data element at the order level.
The data model defined in an order specification is called the order template. An order template is the part of an order specification that defines the order data that OSM uses to process and fulfill an order. For example, the order template defines the data required for order items as well as the data required in an order header.
Figure 2-1 shows an order template.
OSM uses the order template when processing the order. For example:
OSM adds the input message to the order template automatically. See "Adding the Input Message to the Order Template" for more information.
You can use data in the order template to manage orders; for example, you can create order keys used by amendment processing. See "About Order Keys" for more information.
You can specify which data in the order template should be considered for amendment processing (data significance). See "About Data Significance" for more information.
You can assign behaviors to data in the order template. See "About Behaviors" for more information.
The data in the order template defines the data that must be present when the order is created and the data that is generated during order processing. Design Studio generates the order-level order template by aggregating the order template definitions for the order item specifications and order components with any data defined at the order level.
Figure 2-2 shows the structure of customer data in the order template.
The order template includes control data. Control data is used by OSM to generate the orchestration plan. Control data is used only for orchestration.
There are typically two areas of the order control data:
ControlData/OrderItem provides the data and structure of order items received in the incoming customer order. Figure 2-3 shows order item data in the order template control data.
ControlData/Functions stores the structure of the function order components generated by the first level of decomposition. Figure 2-4 shows function components represented in the order template. The types of functions (BillingFunction, MarketingFunction, and so on) represent the function-level order components.
You manually model the order control data of order items in Design Studio. Control data for function order components is automatically generated by Design Studio. See the Design Studio Help for information on how control data is modeled and generated.
You can configure the order template to hold status data returned from external systems. Figure 2-5 shows an order template structure that holds status data.
You can also store status data in the order item data and in the function data. Figure 2-6 shows a structure for storing status data. In this example:
The LineID data element provides a reference to the order line item in the incoming customer order.
The SystemInteraction data element stores data about status events; for example, a status code, description, and timestamp.
Figure 2-7 shows a structure for storing status data for functions. In this example:
The componentKey data element provides a reference to the order component instance.
The Response data element stores the message from the external system, as well as the timestamp, description, and status code.
Before OSM can receive an order from an order-source system, you need to create the OSM Data Dictionary.
The Data Dictionary is the repository of the data elements that can be used in orders.
Design Studio automatically creates a Data Dictionary when you create an OSM cartridge project. You can use this default Data Dictionary or create multiple data schemas to add data elements or structure within the same project.
Figure 2-8 shows a list of data schemas in Design Studio.
Each data schema includes a set of data relevant to the function that the data is used with. Figure 2-9 shows the data elements for the mobile Data Dictionary, with mobile-related data such as IMSI and MSISDN.
Figure 2-10 shows data elements for the incoming customer order data.
To import the Data Dictionary for the data received in orders, you import the XSD file for that incoming customer order into OSM. The elements in the XSD file are loaded into the Data Dictionary as OSM data elements. Example 2-1 shows part of an XSD file that includes some of the elements shown in Figure 2-10.
Example 2-1 Elements in Input Message XSD File
<element name="order" type="im:OrderType"/>
<element maxOccurs="1" minOccurs="1" name="numSalesOrder" type="string">
</element>
<element maxOccurs="1" minOccurs="1" name="typeOrder">
</element>
For each data element, you specify attributes about the data element; for example, the data type and display name. Figure 2-11 shows the configuration for the requestedDeliveryDate data element.
Child XML elements are imported as child data elements. The Path field shows the parent data elements. In this example, the parent data element of requestedDeliveryDate is SalesOrderLine.
In addition to the order data, the Data Dictionary contains information about the data structure of each incoming customer order. For example, it contains information about the hierarchy of sales item lines, which can consist of offers, bundles, products, and so on. This data structure information can be used to manage the data when it is passed between different fulfillment systems.
When you define an order specification in Design Studio, you must model a creation task. The creation task is a required task. It specifies the required and optional data to be present when the order is created.
The creation task data is used as follows:
The creation task defines the data that must be present when the order is created.
When an order is canceled, the order is returned to the creation task.
If an order includes an orchestration plan, the Cancelled state is the final state. The order cannot be resumed. If the order does not have an orchestration plan, the canceled order is returned to the creation task for the order, and can be re-submitted to be processed again.
When performing compensation, OSM compares the creation task data of the base order with the creation task data of the revision order.
The creation task differs from other tasks as follows:
It is not modeled explicitly as part of a process, but is identified in the order specification.
When an order manager is manually editing an order at the creation task, the order has not been submitted to a process and has had no work completed. The order manager submits the order and at that point the default process is started and the order enters the first task in the process. Prior to submitting the order from the creation task, an OSM user with appropriate privileges may delete the order. Accordingly, the creation task has two task states, submit and delete.
Tip: When modeling a creation task, create a manual task, even if the order is intended to be processed automatically. Using manual tasks as creation tasks ensures that task behaviors are supported at run time if you manually create an order. This can be useful for testing purposes.
When an order is created, some data must be populated to the creation task data. To populate the data, you use a transformation rule, defined in a recognition rule. See "Understanding Order Transformation" for more information.
For orders that require an orchestration plan for fulfillment (called orchestration orders), the default process is an orchestration process. For orders that do not use orchestration, the default process is a workflow process or a workstream process. See "About Workflow Processes and Workstream Processes" for more information.
When an orchestration order is submitted to OSM, the following occurs:
OSM processes the order by running the orchestration process that is specified in the order specification.
The orchestration process specifies the orchestration sequence to use, which in turn specifies the first orchestration stage, which starts the orchestration process.
When the orchestration plan is complete, OSM runs the executable order components in the order specified in the orchestration plan. The order is based on dependencies between the order components and order items.
When the last task in the order completes, the order transitions to the Completed state.
Orchestration orders are typically used by OSM in the central order management role, where multiple fulfillment systems need to be managed and there are dependencies between the fulfillment actions.
Figure 2-12 shows the process flow for an orchestration order.
See "Understanding Orchestration" for more information.
For orders that do not require an orchestration plan for fulfillment (called process-based orders), the default process is an OSM process, which includes tasks such as Activate_DSLAM. When a process-based order is submitted to OSM for processing, the following occurs:
OSM starts the process that is defined as the default process.
The default process can start subprocesses that run sequentially or in parallel.
After the last task has completed, the order transitions to the Completed state.
Figure 2-13 shows the process flow for a process-based order.
See "About Tasks and Processes" for more information.
It is common for an order to be fulfilled by both orchestration orders and process-based orders. For example:
OSM receives an orchestration order, which generates the orchestration plan, and begins running the executable order components.
One of the executable order components runs a process that spawns a separate, process-based order. The order is sent to a separate OSM instance that is configured to interact with a provisioning system.
The OSM instance configured for provisioning accepts the order, processes it, and returns the status to the originating order.
Figure 2-14 shows an orchestration order running a process-based order.
You assign the default process in the order specification. You specify an orchestration process the same way that you specify any other process. Figure 2-15 shows a default orchestration process in an order specification.
Figure 2-16 shows a default process defined in an order specification.
OSM uses order priority to determine which orders should be given more OSM system resources when the system is under heavy load. This ensures that orders that have higher priority are not starved for resources by lower priority orders.
Order priority does not prevent all lower priority orders from completing until all higher priority orders have completed. OSM is a multi-threaded system and processes as many orders as possible concurrently. You can use follow-on orders to manage inter-order dependencies.
You can specify two values to set the order priority:
The order priority in the recognition rule that specifies which order specification to use.
The order priority range in the order specification.
The order priority in the recognition rule defines the priority of the order in relation to other order types. The default order priority is 5. You can enter a number between 0 and 9, inclusive, or you can include an XQuery expression that sets the order priority based on data in the incoming customer order. For example, the XQuery shown in Example 2-2 retrieves the order priority (as a number) from the FulfillmentPriorityCode data element:
Example 2-2 Example of Retrieving Order Priority
declare namespace fulfillord="";
//fulfillord:ProcessSalesOrderFulfillmentEBM/fulfillord:DataArea/fulfillord:ProcessSalesOrderFulfillment/fulfillord:FulfillmentPriorityCode/text()
The order priority is typically set on the order submitted to OSM from the order-source system, and it is mapped to the OSM priority when transforming the order. An order's priority also can be modified programmatically or manually by using the Task Web client.
Important: Because OSM is typically one of several systems involved in fulfilling orders, order priority must be supported in all systems and middleware for it to be the most effective.
The order priority range specifies the acceptable range of numeric priority (between 0 and 9) that orders of a single type may use. For example, this could allow you to configure a fixed-line order type with a lower range (0 to 4) and a mobile order type with a higher priority range (5 to 9), ensuring that mobile orders are prioritized higher than fixed-line orders.
You create an order priority range by specifying a minimum and maximum priority for the order. OSM rounds priority values up or down to ensure they conform to the order priority range. For example, if you specify a priority range of 5 to 7 and an order is created with a priority of less than 5, the system assumes the intent is to provide the lowest priority allowed for the order, and the priority value of the order is set to 5. Similarly, if a priority higher than 7 is provided for another order of the same type, the system assumes the intent is to provide the highest priority allowed for the order, and the priority value of the order is set to 7.
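Stated as code, the rounding rule is just a clamp to the configured range. The following Java fragment is ours, purely illustrative, and not part of any OSM API:

// Illustrative only: OSM's priority rounding, expressed as a clamp.
// A requested priority below the range floor is raised to the floor;
// one above the ceiling is lowered to the ceiling.
static int effectivePriority(int requested, int rangeMin, int rangeMax) {
    return Math.max(rangeMin, Math.min(rangeMax, requested));
}

For the range 5 to 7 described above, a requested priority of 2 yields 5, and a requested priority of 9 yields 7; values already inside the range pass through unchanged.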
Table 2-1 shows examples of how the order priority is set by using the order priority from the recognition rule, and the order priority range from the order specification.
Figure 2-17 shows how to set the order priority range in the Design Studio order editor.
The order priority value is also considered when an order's tasks are run, so that automated tasks are run according to order priority. This requires that Java Messaging Service (JMS) message priority settings are configured for the JMS queues.
You can change the order priority of an in-flight order by using the Order Management Web client. You can specify permissions for which roles can change the priority. See the discussion of changing order priority in OSM Order Management Web Client User's Guide.
Order rules control how various actions take place; for example, when to trigger a jeopardy notification and how delays in the order process should be handled. Rules are used in process flow decisions, conditional transitions, subprocess logic, delay activities, jeopardies, and events.
OSM evaluates order rules by comparing data to data, or data to a fixed value. Figure 2-18 shows an order rule in Design Studio. This rule identifies residential customers in a specific city. This is an example of a rule that might be used to send a fallout notification to a regional fallout manager.
OSM Web client users are assigned roles, which you can use to manage who works on different types of orders, and different types of tasks. When you assign permissions to orders, you define the following for each role:
You can specify if the OSM users belonging to the role can create the order in the Task Web client.
You can specify the data that OSM users can see in the Task Web client Query filter for the associated order. To do so, you can define flexible headers in Design Studio. Figure 2-19 shows the typeOrder field configured as a flexible header in an order specification. This allows the Order Type field to display in the Task Web client Query filter.
Flexible headers are typically used when there are one or more fields on an order that contain information that is the same for multiple orders and which can be used to query and find related orders. Examples of this are external reference numbers, customer numbers, and telephone numbers. Flexible headers can be used to allow order managers to query these data fields across orders in different cartridges as long as they have the same mnemonic path in their order templates. The Task Web Client query screen allows you to input search criteria once. It returns all orders that match the flexible header search values.
You can specify which data OSM users can display in the OSM Web clients. See "About Query Tasks for OSM Clients" for more information.
You can specify the orders that OSM users of the role can see, based on data in the order. Use the Order editor Permissions Filters subtab to limit the orders that a role can view. For example, you can specify that OSM users see only orders from a region or for a specific type of service.
Figure 2-20 shows conditions defined in Design Studio that allow OSM users in the role to see only orders from customers who have the 408 and 510 area codes.
See "About OSM Roles" for more information.
As an order runs tasks, the data that is available at each task should be the minimum subset of order data necessary for the task to be performed. You can choose the data to display in the OSM Web clients using the following methods:
Use task data to specify which data to display in the Task Web client for manual tasks.
Use behaviors to specify how OSM displays the task data within a manual task; for example, to hide or show task data or to make data read only. See "About Behaviors" for more information.
Use query tasks to specify which data to display in the Order Management Web client Summary tab and Data tab. Query tasks are manual tasks that specify which data to display in the Task Web client when opening an order from a query result rather than from a task in the worklist. A query task is associated with a role that has permissions to view an order and should be limited to the subset of an order specification's data that the particular role is allowed to view. See "About Query Tasks for OSM Clients" for more information.
Order management personnel can display orders in the Task Web client and in the Order Management Web client. You can specify which data is displayed by assigning query tasks to an order. The data that is specified in the query task data is the data that is displayed.
You can select any task as the query task. You can also create special tasks whose only function is to specify which data to display.
Figure 2-21 shows the Permissions tab in the Design Studio Order Editor. The upper screen shows the permissions for the provisioning role, with the provisioning function task as the query task. For the billing role, the billing function task is assigned as the query task.
The Order Management Web client uses two types of views to display orders: a summary view in the Summary tab and a detailed view in the Data tab. When you model a query task, you can specify which of those views (either or both) to display the query task data in.
You can use multiple tasks as query tasks for an order. When you do so:
For the summary view, all the data is displayed in the Order Management Web client Summary tab.
For the detailed view, the data from the query tasks appears as options in the Order Management Web client Data tab View field; each option presents the OSM user with a different view, each containing a specific set of data.
To display the query task in the Task Web client, select the Default checkbox, as shown in Figure 2-21.
In addition to defining the data that can be displayed, you can specify who can see it by using roles. Each role that is associated with an order can be assigned different query tasks. For example, if your order management personnel includes a role for billing specialists, you can create query tasks that show data specific to their activities.
The data that is available for each automation plug-in should be the minimum subset of order data necessary for the plug-in to be performed. You can choose the data to provide to automation plug-ins using the following methods:
Use the task data contained in an automation task to specify which data to provide to an automation plug-in.
Use query tasks to specify which data to provide to an automation plug-in associated with order notifications, events, and jeopardies. A query task is a manual task that is associated with a role that has permissions to use some or all order data to run an automation plug-in. See "About Query Tasks for OSM Clients".
In automated tasks, the data that is available to automation plug-ins associated with the automated task is already defined in the Task Data tab. However, automation plug-ins used with order notifications, events, and jeopardies do not have immediate access to this task data and, as a result, must reference a manual task, called a query task, that defines the task data and behavior data available to the automation plug-in.
You can select any manual task as the query task. You can also create special tasks that are only used as query tasks. Their only function is to specify which data to provide to an automation plug-in.
Figure 2-21 shows the Permissions tab in the Design Studio order editor. The upper screen shows the permissions for the provisioning role, with the provisioning function task as the query task. For the billing role, the billing function task is assigned as the query task.
To associate a query task with an automation plug-in, use the Default checkbox, as shown in Figure 2-21.
Figure 2-22 shows an event notification with an automation plug-in that uses the ProvisioningFunctionTask query task that is defined as the default query task for the provisioning role. This role must be associated with the Run as OSM user that runs the automation plug-in, as shown in the Properties Details tab. For more information about associating roles with OSM users, see the OSM Administrator Application User's Guide.
http://docs.oracle.com/cd/E35413_01/doc.722/e35415/cpt_about_orders.htm
import java.util.*;

public class enrollment {
    public static void main(String[] args) {
        int balance, payment;
        balance = 20000;
        String partial = "partial";
        String full = "full";
        System.out.print("\nEnter Name: ");
        Scanner st = new Scanner(System.in);
        String name = st.nextLine();
        System.out.print("\nYour Remaining Balance is: " + balance);
        System.out.print("\nChoose your Payment term(Partial/Full): ");
        String term = st.nextLine();
        if (term == partial) {
            System.out.print("\nHow much would you like to pay for this quarter: ");
            Scanner in = new Scanner(System.in);
            payment = in.nextInt();
            balance = balance - payment;
            System.out.print("\nYour Balance is: " + balance);
            if (balance > payment) {
                System.out.print("\nYou have remaining balance of: " + balance);
            } else if (balance == payment) {
                System.out.print("You are already paid");
            }
        }
    }
}

I've been doing this for a week. I always tend to search for some alternative solution, but still it doesn't fit my program. Whenever I answer this question:
System.out.print("\nChoose your Payment term(Partial/Full): ");
String term = st.nextLine();

it always executes to the end (the Partial branch never runs).
I don't know what is wrong with my if condition, because I already declared partial as "partial", so I assumed the input would match it and execute the statement. I just need some advice so I can finish the program through to the full-payment case. Any suggestion is deeply appreciated.
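A likely culprit here is the comparison term == partial: in Java, == on String values compares object references, not contents, and Scanner.nextLine() returns a new String object, so the test is false even when the user types "partial". A minimal fix, keeping the rest of the posted program, is to compare with equals or equalsIgnoreCase. The following sketch is illustrative; names match the original post:

import java.util.Scanner;

public class Enrollment {
    public static void main(String[] args) {
        Scanner st = new Scanner(System.in);
        int balance = 20000;
        System.out.print("\nChoose your Payment term(Partial/Full): ");
        String term = st.nextLine();

        // equalsIgnoreCase compares string CONTENTS; == compares references.
        if (term.equalsIgnoreCase("partial")) {
            System.out.print("\nHow much would you like to pay for this quarter: ");
            int payment = st.nextInt();   // reuse one Scanner; no need for a second
            balance -= payment;
            System.out.print("\nYour Balance is: " + balance);
        } else if (term.equalsIgnoreCase("full")) {
            balance = 0;
            System.out.print("\nYou are fully paid. Balance: " + balance);
        } else {
            System.out.print("\nUnrecognized term: " + term);
        }
    }
}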
http://www.dreamincode.net/forums/topic/303110-simple-enrollment-java-program/page__pid__1762972__st__0
On Wed, Dec 23, 1998 at 02:18:37PM -0600, Gordon Matzigkeit wrote:
> Hi!
>
> >>>>> Roland McGrath writes:
>
> >> I'm now using libc0.2 as the package name, which I agree is
> >> correct.
>
> RM> Really? Truly? I will defer to the wisdom of those with
> RM> experience with debian, since I have none. But is it really the
> RM> case that debian has no better provision for this for dealing
> RM> with different versions and machine/os builds of the same
> RM> package? That is a serious shortcoming.
>
> Your points are well-taken.
>
> The only reason I can think of for why ABIs are not treated as virtual
> packages that others depend on, is that organizing the package
> archives would require more thought. There would have to be
> provisions for packages that have the same names but different sets of
> dependencies.
>
> The current system relies on the fact that the `architecture' field
> is completely different than any other kind of dependency. So, we have
> to change the name of the package in order to get real flexibility in
> how dependencies are handled.

Please remember that Debian's packaging system was invented for i386 Linux systems, and later for multiple architectures, that is, CPUs to run on. Because different architectures mean different CPUs, it is indeed true that packages for one architecture can't run on another architecture (beside emulation :). Therefore, the current system works well and has no "shortcoming" in the sense I write above.

However, now that we have a different underlying OS, the situation changes. This is a new change, and therefore the infrastructure has to be developed. Earlier, there was no need for the fine differentiation in architecture. Luckily, there are other cases where a finer gradation is useful: developing a Pentium-optimized distribution, which also requires a different understanding of architecture. i586 could run i386 programs, but probably not vice versa. So, more people than only Hurd people will be interested in these changes to dpkg. However, changing dpkg and the ftp installation procedure is far from easy, and somebody has to do the work.

> This is common Debian practice... my e-mail was only intended as a
> guideline of how we might apply this practice to the Debian GNU/Hurd
> distribution. I raised this issue so that we don't find ourselves
> backed into a corner of the package namespace later on, when we want
> better integration between GNU/Linux and GNU/Hurd.

Your suggestion has the fundamental flaw that it changes the name of the packages, which breaks a lot of other things, for example the bug reporting system (it does not really break it, but makes it more inconvenient to work with). Changing the package names requires a change to the source, but we can only build from one single source. Also, I dislike the fine gradation with "p" and "i" and possibly more flags. Furthermore, I don't understand why this would be necessary.

Debian has some experience with incompatible upgrades of the C library; the libc5->libc6 transition was very educating. Why can't we just bump the soname each time the hurd-i386 glibc packages have an incompatible API change? Note that you can have multiple libc6 packages with different sonames installed, so old binaries will continue to work. The versioned dependency should be enough to handle all cases. We would then have libc0.2, libc0.3, libc0.4 etc packages, and binary packages depending on them. We would only maintain one set of development packages. Can you explain the drawbacks of this simple solution?
> Debian is also an evolving thing, and I believe that once we get far
> enough down this road, more people will grasp the problem and want to
> fix it. At that point dpkg's notion of architecture can be integrated
> with the dependency system.

This is a good goal, but I think it is irrelevant to our simple problem of small incompatibilities between libc6 upgrades. It should be handled in the same way we handle library upgrades under i386 Linux. You will find libgtk 1.0.6 in our archive and libgtk 1.1. Both are incompatible; nevertheless both can be installed w/o problems.
https://lists.debian.org/debian-hurd/1998/12/msg00162.html
Praveen Adivi
7 Dec 2010, 8:04 AM
Hi All, I am a newbie to Ext JS and I had a question regarding Ext.ns's behavior. I have a js file where I define a namespace using Ext.ns("Ext.ux.test"); this js file is then added to the jsp. When I include another jsp into the current jsp using jsp:include, I get a javascript error saying "Ext.ux.test is undefined". So, I was wondering if anybody could kindly tell me how to make sure that the namespace Ext.ux.test is not lost. Thank you in advance.
https://www.sencha.com/forum/archive/index.php/t-117952.html
In this post, I will talk about pros and cons of code generation and then show you how to use T4 templates, the built-in code generation tool in Visual Studio, using an example.
Code Generation Is a Bad Idea
I am writing a post about a concept that I think is, more often than not, a bad idea, and it would be unprofessional of me if I handed you a tool and didn't warn you of its dangers.
The truth is, code generation is quite exciting: you write a few lines of code and you get a lot more of it in return than you would perhaps have to write manually. So it's easy to fall into a one-size-fits-all trap with it:
"If the only tool you have is a hammer, you tend to see every problem as a nail." A. Maslow
But code generation is almost always a bad idea. I refer you to this post, that explains most of the issues that I see with code generation. In a nutshell, code generation results into inflexible and hard to maintain code.
Here are a few examples of where you should not use code generation:
- Code-generated distributed architectures.
- Visual GUI designers: these are what Microsoft developers have used for ages (in Windows/Web Forms and, to some extent, XAML-based applications), dragging and dropping widgets and UI elements while the (ugly) UI code is generated for them behind the scenes.
- Naked Objects: an approach to software development where you define your domain model and the rest of your application, including the UI and the database, gets generated for you. Conceptually, it's very close to Model Driven Architecture.
- Model Driven Architecture.
Sometimes, Only Sometimes, Code Generation Might Be a Good Idea
Very rarely though, I find myself in a situation where code generation is a good fit for the problem at hand and the alternative solutions would either be harder or uglier.
Here is a few examples of where code generation might be a good fit:
- You need to write a lot of boilerplate code.
- You very frequently use some static metadata from a resource and retrieving the data requires using magic strings (and perhaps is a costly operation). Here are a few examples:
- Code metadata fetched by reflection: for example, T4MVC creates strongly typed helpers that eliminate the use of literal strings in many places.
- Static lookup web services.
- Static lookup tables: This is very similar to static web services but the data lives in a data store as opposed to a web service.
As mentioned above, code generation makes for inflexible and hard to maintain code; so if the nature of the problem you're solving is static and doesn't require frequent maintenance, then code generation might be a good solution!
Just because your problem fits into one of the above categories doesn't mean code generation is a good fit for it. You should still try to evaluate alternative solutions and weigh your options.
Text Template Transformation Toolkit
There is an awesome code generation engine in Visual Studio called Text Template Transformation Toolkit (AKA, T4).
Text templates are composed of the following parts:
- Directives: elements that control how the template is processed.
- Text blocks: content that is copied directly to the output.
- Control blocks: program code that inserts variable values into the text and controls conditional or repeated parts of the text.
Instead of talking about how T4 works, I would like to use a real example. So here is a problem I faced a while back for which I used T4. I have an open source .NET library called Humanizer. One of the things I wanted to provide in Humanizer was a fluent, developer-friendly API for working with DateTime. I considered quite a few variations of the API and, in the end, settled on the forms shown below.
For each variation I created a separate T4 file:

- In.Months.tt for In.January, In.FebrurayOf(<some year>), and so on.
- On.Days.tt for On.January.The4th, On.February.The(12), and so on.
- In.SomeTimeFrom.tt for In.One.Second, In.TwoSecondsFrom(<date time>), In.Three.Minutes, and so on.

Here I will discuss On.Days. The code is copied here for your reference:
<#@ template debug="true" hostSpecific="true" #>
<#@ output extension=".cs" #>
<#@ Assembly Name="System.Core" #>
<#@ Assembly Name="System.Windows.Forms" #>
<#@ assembly name="$(SolutionDir)Humanizer\bin\Debug\Humanizer.dll" #>
<#@ import namespace="System" #>
<#@ import namespace="Humanizer" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Diagnostics" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Collections" #>
<#@ import namespace="System.Collections.Generic" #>
using System;
namespace Humanizer
{
    public partial class On
    {
        <# /* ...month and day generation loops elided in this copy... */ #>
    }
}
If you're checking this code out in Visual Studio or want to work with T4, make sure you have installed the Tangible T4 Editor for Visual Studio. It provides IntelliSense, T4 Syntax-Highlighting, Advanced T4 Debugger and T4 Transform on Build.
The code might seem a bit scary in the beginning, but it's just a script very similar to the ASP language. Upon saving, this will generate a class called On with 12 subclasses, one per month (for example, January, February, etc.), each with public static properties that return a specific day in that month. Let's break the code apart and see how it works.
Directives
The syntax of directives is as follows:
<#@ DirectiveName [AttributeName = "AttributeValue"] ... #>. You can read more about directives here.
I have used the following directives in the code:
Template
<#@ template debug="true" hostSpecific="true" #>
The Template directive has several attributes that allow you to specify different aspects of the transformation.
If the debug attribute is true, the intermediate code file will contain information that enables the debugger to identify more accurately the position in your template where a break or exception occurred. I always leave this as true.
Output
<#@ output extension=".cs" #>
The Output directive is used to define the file name extension and encoding of the transformed file. Here we set the extension to .cs, which means the generated file will be in C# and the file name will be On.Days.cs.
Assembly
<#@ assembly Name="System.Core" #>
Here we are loading System.Core so we can use it in the code blocks further down.
The Assembly directive loads an assembly so that your template code can use its types. The effect is similar to adding an assembly reference in a Visual Studio project.
This means that you can take full advantage of the .NET framework in your T4 template. For example, you can use ADO.NET to hit a database, read some data from a table and use that for code generation.
Further down, I have the following line:
<#@ assembly name="$(SolutionDir)Humanizer\bin\Debug\Humanizer.dll" #>
This is a bit interesting. In the On.Days.tt template I am using the Ordinalize method from Humanizer, which turns a number into an ordinal string used to denote the position in an ordered sequence, such as 1st, 2nd, 3rd, 4th. This is used to generate The1st, The2nd, and so on.
The assembly name should be one of the following:
- The strong name of an assembly in the GAC, such as System.Xml.dll. You can also use the long form, such as name="System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089". For more information, see AssemblyName.
- The absolute path of the assembly.
System.Core lives in the GAC, so we could just as easily use its name; but for Humanizer we have to provide the absolute path. Obviously I don't want to hardcode my local path, so I used $(SolutionDir), which is replaced by the path the solution lives in during code generation. This way the code generation works fine for everyone, regardless of where they keep the code.
Import
<#@ import namespace="System" #>
The import directive allows you to refer to elements in another namespace without providing a fully-qualified name. It is the equivalent of the using statement in C# or Imports in Visual Basic.
On the top we are defining all the namespaces we need in the code blocks. The import blocks you see there are mostly inserted by T4 Tangible. The only thing I added was:
<#@ import namespace="Humanizer" #>
So I can later write:
var ordinalDay = day.Ordinalize();
Without the import statement, and without specifying the assembly by path, instead of a C# file I would have gotten a compile error complaining about not finding the Ordinalize method on integer.
Text Blocks
A text block inserts text directly into the output file. On the top, I have written a few lines of C# code which get directly copied into the generated file:
using System;
namespace Humanizer
{
    public partial class On
    {
Further down, in between control blocks, I have some other text blocks for API documentation, methods and also for closing brackets.
Control Blocks
Control blocks are sections of program code that are used to transform the templates. The default language is C#.
Note: The language in which you write the code in the control blocks is unrelated to the language of the text that is generated.
There are three different types of control blocks: Standard, Expression and Class Feature.
- <# Standard control blocks #> can contain statements.
- <#= Expression control blocks #> can contain expressions.
- <#+ Class feature control blocks #> can contain methods, fields, and properties.
Let's take a look at the control blocks that we have in the sample template (the loop body itself was elided above). Note that I close (#>) the control block as soon as I open (<#) it, and then write the code inside.
On the top, inside the standard control block, I am defining leapYear as a constant value, so that I can generate an entry for February 29th. Then I iterate over the 12 months, for each month getting the firstDayOfMonth and the monthName. I then close the control block to write a text block for the month class and its XML documentation. The monthName is used as a class name and in XML comments (using expression control blocks). The rest is just normal C# code, which I am not going to bore you with.
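The loop body of the template did not survive in this copy of the article. Based on the description above, its core would have looked roughly like the sketch below. This is our reconstruction, not the author's original; the documentation text and property bodies are guesses:

<#
    const int leapYear = 2012;  // any leap year, so February gets a 29th entry
    for (int month = 1; month <= 12; month++)
    {
        var firstDayOfMonth = new DateTime(leapYear, month, 1);
        var monthName = firstDayOfMonth.ToString("MMMM");
#>
    /// <summary>
    /// Provides fluent day accessors for <#= monthName #>
    /// </summary>
    public class <#= monthName #>
    {
<#
        for (int day = 1; day <= DateTime.DaysInMonth(leapYear, month); day++)
        {
            var ordinalDay = day.Ordinalize();  // Humanizer: 1 -> "1st", 2 -> "2nd", ...
#>
        public static DateTime The<#= ordinalDay #>
        {
            get { return new DateTime(DateTime.Now.Year, <#= month #>, <#= day #>); }
        }
<#
        }
#>
    }
<#
    }
#>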
Conclusion
In this post I talked about code generation, provided a few examples of when code generation could be either dangerous or useful and also showed how you can use T4 templates to generate code from Visual Studio using a real example.
If you would like to learn more about T4, you can find a lot of great content on Oleg Sych's blog.
http://code.tutsplus.com/tutorials/code-generation-using-t4--cms-19854
/* input_file.h header for input-file. */
/* "input_file.c": Operating-system dependant functions to read source files. */

/*
 * No matter what the operating system, this module must provide the
 * following services to its callers.
 *
 * input_file_begin()                   Call once before anything else.
 *
 * input_file_end()                     Call once after everything else.
 *
 * input_file_buffer_size()             Call anytime. Returns largest possible
 *                                      delivery from
 *                                      input_file_give_next_buffer().
 *
 * input_file_open(name)                Call once for each input file.
 *
 * input_file_give_next_buffer(where)   Call once to get each new buffer.
 *                                      Return 0: no more chars left in file,
 *                                      the file has already been closed.
 *                                      Otherwise: return a pointer to just
 *                                      after the last character we read
 *                                      into the buffer.
 *                                      If we can only read 0 characters, then
 *                                      end-of-file is faked.
 *
 * All errors are reported (using as_perror) so caller doesn't have to think
 * about I/O errors. No I/O errors are fatal: an end-of-file may be faked.
 */

extern FILE *f_in;
extern char *file_name;
#ifdef SUSPECT
extern int preprocess;
#endif

extern void input_file_begin(void);
extern void input_file_end(void);
extern int input_file_buffer_size(void);
extern int input_file_is_open(void);
extern void input_file_open(char *filename, int pre);
extern char *input_file_give_next_buffer(char *where);
http://opensource.apple.com//source/cctools/cctools-667.4.0/as/input-file.h
C puts the bread on the table for many of us, so every once in a while, when not just bashing C++, indulging in the awesomeness of Ruby, or wondering why PHP is still powering a good deal of the modern interwebs, the topic comes to C, its greatness, and how C99 is still not ubiquitous.
Personally I think, with the year being 2010 and all, just five humble features would make a whole lot of difference and put the language firmly in the 21st century, while not sacrificing any of the spirit of being a high-level assembly language. So here goes (in no particular order) …
Extended support for anonymous aggregates
C99’s anonymous aggregates are useful, for example when passing a one-off compound datatype to a function. What would be even more helpful would be the possibility to return an anonymous aggregate from a function, this would essentially allow returning multiple values:
struct { float width; float height; }
clutter_actor_get_size (ClutterActor *actor)
{
  struct { float width; float height; } size;

  /* Fill size.width and size.height */

  return size;
}

[...]

struct { float width; float height; } size;
size = clutter_actor_get_size (actor);
This feature would be even more helpful with the next one:
Variable declarations using “auto”
When a variable is initialised in place, it should be possible for today's compilers to figure out the data type; there are even things like the "L" suffix on numbers for specifying the desired range. Leveraging "auto", the above example would become a bit less verbose:
struct { float width; float height; }
clutter_actor_get_size (ClutterActor *actor)
{
  struct { float width; float height; } size;

  /* Fill size.width and size.height */

  return size;
}

[...]

auto size = clutter_actor_get_size (actor);
Lambda functions
Along the lines of anonymous aggregates it would seem very natural to do lambda functions by basically “casting” a block of statements. The formal parameters would be derived from the “cast” operator. This approach, in my opinion, would be more natural to C than llvm’s block syntax using ^.
clutter_container_foreach (container,
                           (void (*)(ClutterActor *actor, gpointer data)) {
                             printf ("%s\n", clutter_actor_get_name (actor));
                           },
                           NULL);
Type extension
GObject-based C code is typically interspersed with type casts, but this does not seem strictly necessary from a semantic point of view. A pointer to a compound instance in C is by definition also a pointer to the first attribute. It should be fairly straightforward to account for that in the compiler, and thus allow for implicit "upcasting", i.e. assigning a pointer of a "derived" type to a pointer of type of an (first-member) embedded struct. There would be no need for the C compiler to warn about pointer types not matching, because the example is actually semantically correct.
/* Lots of GObject boilerplate code omitted. */
typedef struct {
  ClutterActor parent;
} FooActor;

[...]

FooActor *actor = foo_actor_new ();
clutter_actor_set_x (actor, 100.0);
An #import preprocessor directive
In this day and age it seems redundant having to type function signatures twice, once in the header and once again in the C file. It would be very handy if preprocessors could import symbols from other C files, without doing the verbatim insertion that is #include. For libraries which want to install headers I would imagine a compiler option that extracts #defines and non-static symbols from a C file, possibly supporting filtering on prefixes, so the headers could be generated on the fly by the build system.
That might do for the next 30 years or so, I suppose.
3 thoughts on “If I had five wishes … to (GC)C maintainers”
Oddly, I find myself agreeing with almost all of this. Not so sure about the #import though, and auto could be a bit dangerous… (the example could’ve been made less verbose with a typedef, which you’d have probably wanted anyway)
Chris: but the idea is to avoid having to put stuff into the global namespace! Also having to invent and type definitions for all the various user-data aggregates is tedious and should be avoidable with anonymous aggregates and lambda functions.
You can almost do the first two with the ‘typeof’ operator in GCC like this:
#include <stdio.h>

struct { float width; float height; }
clutter_actor_get_size (int i)
{
  /* Fill size.width and size.height */
  return (typeof (clutter_actor_get_size (3))) { 1, 2 };
}

int
main (int argc, char **argv)
{
  typeof (clutter_actor_get_size (3)) size;

  size = clutter_actor_get_size (3);
  printf ("%f %f\n", size.width, size.height);
  return 0;
}
It seems like less effort just to make a typedef though because you’ve already had to write out the definition of the struct twice in your example (even with the auto keyword).
I agree with the fourth one though; that would be really nice.
https://blogs.gnome.org/robsta/2010/11/17/if-i-had-five-wishes-to-gcc-maintainers/comment-page-1/
Avik Chaudhuri
University of Maryland, College Park
avik@cs.umd.edu
Abstract
In Concurrent ML, synchronization abstractions can be defined and passed as values, much like functions in ML. This mechanism admits a powerful, modular style of concurrent programming, called higher-order concurrent programming. Unfortunately, it is not clear whether this style of programming is possible in languages such as Concurrent Haskell, that support only first-order message passing. Indeed, the implementation of synchronization abstractions in Concurrent ML relies on fairly low-level, language-specific details.

In this paper we show, constructively, that synchronization abstractions can be supported in a language that supports only first-order message passing. Specifically, we implement a library that makes Concurrent ML-style programming possible in Concurrent Haskell. We begin with a core, formal implementation of synchronization abstractions in the π-calculus. Then, we extend this implementation to encode all of Concurrent ML's concurrency primitives (and more!) in Concurrent Haskell.

Our implementation is surprisingly efficient, even without possible optimizations. Preliminary experiments suggest that our library can consistently outperform OCaml's standard library of Concurrent ML-style primitives.

At the heart of our implementation is a new distributed synchronization protocol that we prove correct. Unlike several previous translations of synchronization abstractions in concurrent languages, we remain faithful to the standard semantics for Concurrent ML's concurrency primitives. For example, we retain the symmetry of choose, which can express selective communication. As a corollary, we establish that implementing selective communication on distributed machines is no harder than implementing first-order message passing on such machines.
1. Introduction
As famously argued by Reppy (1999), there is a fundamental conflict between selective communication (Hoare 1978) and abstraction in concurrent programs. For example, consider a protocol run between a client and a pair of servers. In this protocol, selective communication may be necessary for liveness—if one of the servers is down, the client should be able to interact with the other. This may require some details of the protocol to be exposed. At the same time, abstraction may be necessary for safety—the client should not be able to interact with a server in an unexpected way. This may in turn require those details to be hidden.

An elegant way of resolving this conflict, proposed by Reppy (1992), is to separate the process of synchronization from the mechanism for describing synchronization protocols. More precisely, Reppy introduces a new type constructor, event, to type synchronous operations in much the same way as -> ("arrow") types functional values. A synchronous operation, or event, describes a synchronization protocol whose execution is delayed until it is explicitly synchronized. Thus, roughly, an event is analogous to a function abstraction, and event synchronization is analogous to function application.

This abstraction mechanism is the essence of a powerful, modular style of concurrent programming, called higher-order concurrent programming. In particular, programmers can describe sophisticated synchronization protocols as event values, and compose them modularly. Complex event values can be constructed from simpler ones by applying suitable combinators. For instance, selective communication can be expressed as a choice among event values—and programmer-defined abstractions can be used in such communication without breaking those abstractions (Reppy 1992).

Reppy implements events, as well as a collection of such suitable combinators, in an extension of ML called Concurrent ML (CML) (Reppy 1999). We review these primitives informally in Section 2; their formal semantics can be found in (Reppy 1992). The implementation of these primitives in CML relies on fairly low-level, language-specific details, such as support for continuations and signals (Reppy 1999). In turn, these primitives immediately support higher-order concurrent programming in CML.

Other languages, such as Concurrent Haskell (Jones et al. 1996), seem to be more modest in their design. Following the π-calculus (Milner et al. 1992), such languages support only first-order message passing. While functions for first-order message passing can be encoded in CML, it is unclear whether, conversely, the concurrency primitives of CML can be expressed in those languages.
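As background for readers who have not used it, the first-order fragment that Concurrent Haskell does provide looks roughly like the following snippet. The snippet is ours, for orientation only, and uses only the standard Control.Concurrent API:

import Control.Concurrent

-- First-order message passing in Concurrent Haskell: a channel carries
-- plain values, and send/receive are immediate I/O actions. There is no
-- event value that can be composed or synchronized later.
main :: IO ()
main = do
  c <- newChan
  _ <- forkIO (writeChan c "hello")  -- sender runs in its own thread
  msg <- readChan c                  -- receiver blocks until a message arrives
  putStrLn msg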
Contributions. In this paper, we show that CML-style concurrency primitives can in fact be implemented as a library, in a language that already supports first-order message passing. Such a library makes higher-order concurrent programming possible in a language such as Concurrent Haskell. Going further, our implementation has other interesting consequences. For instance, the designers of Concurrent Haskell deliberately avoid a CML-style choice primitive (Jones et al. 1996, Section 5), partly concerned that such a primitive may complicate a distributed implementation of Concurrent Haskell. By showing that such a primitive can be encoded in Concurrent Haskell itself, we eliminate that concern.

At the heart of our implementation is a new distributed protocol for synchronization of events. Our protocol is carefully designed to ensure safety, progress, and fairness. In Section 3, we formalize this protocol as an abstract state machine, and prove its correctness. Then, in Section 4, we describe a concrete implementation of this protocol in the π-calculus, and prove its correctness as well. This implementation can serve as a foundation for other implementations in related languages. Building on this implementation, in Sections 5, 6, and 7, we show how to encode all of CML's concurrency primitives, and more, in Concurrent Haskell. Our implementation is very concise, requiring less than 150 lines of code; in contrast, a previous implementation in Haskell requires more than 1300.

In Section 8, we compare the performance of our library against OCaml's standard library of CML-style primitives. Our implementation consistently outperforms the latter, even without possible optimizations.
Finally, unlike several previous implementations of CML-style primitives in other languages, we remain faithful to the standard semantics for those primitives (Reppy 1999). For example, we retain the symmetry of choose, which can express selective communication. Indeed, we seem to be the first to implement a CML library that relies purely on first-order message passing, and preserves the standard semantics. We defer a more detailed discussion on related work to Section 9.
2. Overview of CML
In this section, we present a brief overview of CML's concurrency primitives. (Space constraints prevent us from motivating these primitives any further; the interested reader can find a comprehensive account of these primitives, with several programming examples, in (Reppy 1999).) We provide a small example at the end of this section.

Note that channel and event are polymorphic type constructors in CML, as follows:

• The type channel tau is given to channels that carry values of type tau.
• The type event tau is given to events that return values of type tau on synchronization (see the function sync below).

The combinators receive and transmit build events for synchronous communication.
receive : channel tau -> event tau
transmit : channel tau -> tau -> event ()
• receive c returns an event that, on synchronization, accepts a message M on channel c and returns M. Such an event must synchronize with transmit c M.
• transmit c M returns an event that, on synchronization, sends the message M on channel c and returns () (that is, "unit"). Such an event must synchronize with receive c.
Perhaps the most powerful of CML's concurrency primitives is the combinator choose; it can nondeterministically select an event from a list of events, so that the selected event can be synchronized. In particular, choose can express selective communication. Several implementations need to restrict the power of choose in order to tame it (Russell 2001; Reppy and Xiao 2008). Our implementation is designed to avoid such problems (see Section 9).
choose : [event tau] -> event tau
• choose V returns an event that, on synchronization, synchronizes one of the events in list V and "aborts" the other events.

Several events may be aborted on a selection. The combinator wrapabort can specify an action that is spawned if an event is aborted.
wrapabort : (() -> ()) -> event tau -> event tau
• wrapabort f v returns an event that either synchronizes the event v, or, if aborted, spawns a thread that runs the code f ().

The combinators guard and wrap can specify pre- and post-synchronization actions.
guard : (() -> event tau) -> event tau
wrap : event tau -> (tau -> tau’) -> event tau’
• guard f returns an event that, on synchronization, synchronizes the event returned by the code f ().
• wrap v f returns an event that, on synchronization, synchronizes the event v and applies the function f to the result.
Finally, the function sync can synchronize an event and return the result.
sync : event tau -> tau
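Transcribed into Haskell, where ML's implicit effects become IO actions, these signatures might read as follows. This is our sketch for orientation only, with stub bodies; it is not the paper's code:

-- Our transcription of the CML interface into Haskell types; the
-- paper's library may differ in naming and detail.
newtype Channel a = Channel ()  -- placeholder representation (assumed)
newtype Event a   = Event ()    -- placeholder representation (assumed)

receive :: Channel a -> Event a
receive _ = Event ()

transmit :: Channel a -> a -> Event ()
transmit _ _ = Event ()

choose :: [Event a] -> Event a
choose _ = Event ()

wrapabort :: IO () -> Event a -> Event a
wrapabort _ e = e

guard :: IO (Event a) -> Event a
guard _ = Event ()

wrap :: Event a -> (a -> b) -> Event b
wrap _ _ = Event ()

sync :: Event a -> IO a
sync _ = undefined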
By design, an event can synchronize only at some "point", where a message is either sent or accepted on a channel. Such a point, called the commit point, may be selected among several other candidates at run time. Furthermore, some code may be run before, and after, synchronization—as specified by guard functions, by wrap functions that enclose the commit point, and by wrapabort functions that do not enclose the commit point.

For example, consider the following value of type event (). (Here, c and c' are values of type channel ().)
val v =
choose
[guard (fn () ->
...;
wrapabort ...
(choose [wrapabort ... (transmit c ());
wrap (transmit c’ ()) ... ] ) );
guard (fn () ->
...;
wrap
(wrapabort ... (receive c))
... ) ]
The event v describes a fairly complicated protocol that, on synchronization, selects among the communication events transmit c (), transmit c' (), and receive c, and runs some code (elided by ...s) before and after synchronization. Now, suppose that we run the following ML program.
val _ =
spawn (fn () -> sync v);
sync (receive c’)
This program spawns sync v in parallel with sync (receive c'). In this case, the event transmit c' () is selected inside v, so that it synchronizes with receive c'. The figure below depicts sync v as a tree. The point marked • is the commit point; this point is selected among the other candidates, marked ◦, at run time. Furthermore, (only) code specified by the combinators marked in boxes are run before and after synchronization, following the semantics outlined above.
choose ─┬─ guard ── wrapabort ── choose ─┬─ wrapabort ── ◦
        │                                └─ wrap ─────── •
        └─ guard ── wrap ── wrapabort ── ◦
3. A distributed protocol for synchronization
We now present a distributed protocol for synchronizing events. We focus on events that are built with the combinators receive, transmit, and choose. While the other combinators are important for describing computations, they do not fundamentally affect the nature of the protocol; we consider them later, in Sections 5 and 6.
3.1 A source language
For brevity, we simplify the syntax of the source language. Let c range over channels. We use the following notations: ϕ⃗ denotes a sequence of the form ϕ1, . . . , ϕn, where i ∈ 1..n; furthermore, {ϕ⃗} denotes the set {ϕ1, . . . , ϕn}, and [ϕ⃗] denotes the list [ϕ1, . . . , ϕn].

The syntax of the language is as follows.
• Actions α, β, . . . are of the form c or c̄ (input or output on c). Informally, actions model communication events built with receive and transmit.

• Programs are of the form S1 | . . . | Sm (parallel composition of S1, . . . , Sm), where each Sk (k ∈ 1..m) is either an action α, or a selection of actions, select(α⃗). Informally, a selection of actions models the synchronization of a choice of events, following the CML function select.

select : [event tau] -> tau
select V = sync (choose V)
Further, we consider only the following local reduction rule:
    c ∈ {α⃗}        c̄ ∈ {β⃗}
    ─────────────────────────────────────────  (SEL COMM)
    select(α⃗) | select(β⃗)  −→  c | c̄
This rule models selective communication. We also consider the usual structural rules for parallel composition. However, we ignore reduction of actions at this level of abstraction.
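As a small worked instance (ours, not from the paper): taking α⃗ = c, d and β⃗ = c̄, the rule gives select(c, d) | select(c̄) −→ c | c̄. The left process commits to the input on c, the right process to the output on c, and the unchosen action d is simply discarded.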
3.2 A distributed abstract state machine for synchronization
Our synchronization protocol is run by a distributed system of prin-
cipals that include channels, points, and synchronizers. Informally,
every action is associated with a point, and every select is associ-
ated with a synchronizer.
The reader may draw an analogy between our setting and one of
arranging marriages, by viewing points as prospective brides and
grooms, channels as matchmakers, and synchronizers as parents
whose consents are necessary for marriages.
We formalize our protocol as a distributed abstract state ma-
chine that implements the rule (SEL COMM). Let σ range over
states of the machine. These states are built by parallel composi-
tion |, inaction 0, and name creation ν (Milner et al. 1992) over
various states of principals.
States of the machine σ
    σ ::=              states of the machine
      σ | σ′           parallel composition
      0                inaction
      (ν p⃗i) σ         name creation
      ς                state of principals
The various states of principals are shown in Figure 1. Roughly,
principals in specific states react with each other to cause transi-
tions in the machine. The rules that govern these reactions appear
later in the section.
Let p and s range over points and synchronizers. A synchronizer
can be viewed as a partial function from points to actions; we
represent this function as a parallel composition of bindings of the
form p → α. Further, we require that each point is associated with a
unique synchronizer, that is, for any s and s′, s ≠ s′ ⇒ dom(s) ∩
dom(s′) = ∅.
The semantics of the machine is described by the local transition
rules in Figure 2 (explained below), plus the usual structural rules
for parallel composition, inaction, and name creation as in the π-
calculus (Milner et al. 1992).
Intuitively, the rules in Figure 2 may be read as follows.
(1) Two points p and q, bound to complementary actions on chan-
nel c, react with c, so that p and q become matched (♥p and
♥q) and the channel announces their match (⊕c(p, q)).
(2.i–ii) Next, p (and likewise, q) reacts with its synchronizer s. If
the synchronizer is open (s∘), it now becomes closed (s•),
States of principals ς
    ςp ::=            states of a point
      p → α           active
      ♥p              matched
      α               released
    ςc ::=            states of a channel
      c               free
      ⊕c(p, q)        announced
    ςs ::=            states of a synchronizer
      s∘              open
      s•              closed
      s(p)            selected
      ⊗(p)            refused
      s̈(p)            confirmed
      s̈               canceled
Figure 1.
Operational semantics σ −→ σ′
(1)  p → c | q → c̄ | c  −→  ♥p | ♥q | ⊕c(p, q) | c
(2.i)   p ∈ dom(s):   ♥p | s∘  −→  s(p) | s•
(2.ii)  p ∈ dom(s):   ♥p | s•  −→  ⊗(p) | s•
(3.i)   s(p) | s′(q) | ⊕c(p, q)  −→  s̈(p) | s̈′(q)
(3.ii)  s(p) | ⊗(q) | ⊕c(p, q)  −→  s̈
(3.iii) ⊗(p) | s(q) | ⊕c(p, q)  −→  s̈
(3.iv)  ⊗(p) | ⊗(q) | ⊕c(p, q)  −→  0
(4.i)   s(p) = α:   s̈(p)  −→  α
(4.ii)  s̈  −→  (ν p⃗i) (s∘ | s)   where dom(s) = {p⃗i}
Figure 2.
and p is declared selected by s (s(p)). If the synchronizer is
already closed, then p is refused (⊗(p)).
(3.i–iv) If both p and q are selected, c confirms the selections to
both parties (s̈(p) and s̈′(q)). If only one of them is
selected, c cancels that selection (s̈).
(4.i–ii) If the selection of p is confirmed, the action bound to p
is released. Otherwise, the synchronizer “reboots” with fresh
names for the points in its domain.
3.3 Compilation
Next, we show how programs in the source language are com-
piled on to this machine. Let Π denote indexed parallel compo-
sition; using this notation, for example, we can write a program
S1 | . . . | Sm as Πk∈1..m Sk. Suppose that the set of channels in
a program Πk∈1..m Sk is C. We compile this program to the state
Πc∈C c | Πk∈1..m ~Sk, where

    ~S  =  α                if S = α
           (ν p⃗i) (s∘ | s)  if S = select(α⃗i), i ∈ 1..n, and
                            s = Πi∈1..n (pi → αi) for fresh names p⃗i
3.4 Correctness
We prove that our protocol is correct, that is, the abstract machine
correctly implements (SEL COMM), by showing that the compila-
tion from programs to states satisfies safety, progress, and fairness.
Roughly, safety implies that any sequence of transitions in the state
machine can be mapped to some sequence of reductions in the lan-
guage. Furthermore, progress and fairness imply that any sequence
of reductions in the language can be mapped to some sequence of
transitions in the state machine. (The formal definitions of these
properties are complicated because transitions in the machine have
much finer granularity than reductions in the language; see below.)
Let a denotation be a list of actions. The denotations of pro-
grams and states are derived by the function ⟦·⟧, as follows. (Here
++ denotes concatenation over lists.)
Denotations of programs and states
    ⟦S1 | . . . | Sm⟧ = ⟦S1⟧ ++ · · · ++ ⟦Sm⟧
    ⟦α⟧ = [α]
    ⟦select(α⃗i)⟧ = [ ]
    ⟦σ | σ′⟧ = ⟦σ⟧ ++ ⟦σ′⟧
    ⟦0⟧ = [ ]
    ⟦(ν p⃗i) σ⟧ = ⟦σ⟧
    ⟦ς⟧ = [α] if ς = α, [ ] otherwise
Informally, the denotation of a program or state is the list of
released actions in that program or state. Now, if a program is com-
piled to some state, then the denotations of the program and the
state coincide. Furthermore, we expect that as intermediate pro-
grams and states are produced during execution (and other actions
are released), the denotations of those intermediate programs and
states remain in coincidence. Formally, we prove the following the-
orem (Chaudhuri 2009).
THEOREM 3.1 (Correctness of the abstract state machine). Let C
be the set of channels in a program Πk∈1..m Sk. Then

    Πk∈1..m Sk  ∼  Πc∈C c | Πk∈1..m ~Sk

where ∼ is the largest relation such that P ∼ σ iff
(Invariant) σ −→∗ σ′ for some σ′ such that ⟦P⟧ = ⟦σ′⟧;
(Safety) if σ −→ σ′ for some σ′, then P −→∗ P′ for some P′
such that P′ ∼ σ′;
(Progress) if P −→, then σ −→+ σ′ and P −→ P′ for some
σ′ and P′ such that P′ ∼ σ′;
(Fairness) if P −→ P′ for some P′, then σ0 −→ . . . −→ σn for
some σ0, . . . , σn such that σn = σ, P ∼ σi for all 0 ≤ i < n,
and σ0 −→+ σ′ for some σ′ such that P′ ∼ σ′.
Informally, the above theorem guarantees that any sequence of
program reductions can be simulated by some sequence of state
transitions, and vice versa, such that
• from any intermediate program, it is always possible to simulate
any transition of a related intermediate state;
• from any intermediate state,
– it is always possible to simulate some reduction of a related
intermediate program;
– further, by backtracking, it is always possible to simulate
any reduction of that program.
3.5 Example
Consider the program

    select(x̄, ȳ) | select(y, z) | select(z̄) | select(x)

By (SEL COMM), this program can reduce to

    x̄ | z | z̄ | x

with denotation [x̄, z, z̄, x], or to

    ȳ | y | select(z̄) | select(x)

with denotation [ȳ, y].
The original program is compiled to the following state.

    x | y | z
    | (ν px̄, pȳ) ((px̄ → x̄ | pȳ → ȳ)∘ | px̄ → x̄ | pȳ → ȳ)
    | (ν py, pz) ((py → y | pz → z)∘ | py → y | pz → z)
    | (ν pz̄) ((pz̄ → z̄)∘ | pz̄ → z̄)
    | (ν px) ((px → x)∘ | px → x)
This state describes the states of several principals:
• channels x, y, z;
• points px̄ → x̄, pȳ → ȳ, py → y, pz → z, pz̄ → z̄, px → x;
• synchronizers (px̄ → x̄ | pȳ → ȳ)∘, (py → y | pz → z)∘,
(pz̄ → z̄)∘, (px → x)∘.
This state can eventually transition to

    x | y | z | x̄ | z | z̄ | x | σgc

with denotation [x̄, z, z̄, x], or to

    x | y | z | ȳ | y | σgc
    | (ν pz̄) ((pz̄ → z̄)∘ | pz̄ → z̄)
    | (ν px) ((px → x)∘ | px → x)

with denotation [ȳ, y]. In these states, σgc can be garbage-collected,
and is separated out for readability.

    σgc = (ν px̄, pȳ, py, pz, pz̄, px)
          ((px̄ → x̄ | pȳ → ȳ)• | (py → y | pz → z)• | (pz̄ → z̄)• | (px → x)•)
Let us examine the state with denotation [ȳ, y], and trace the
transitions to this state. In this state, the original synchronizers are
all closed (see σgc). We can conclude that the remaining points
pz̄ → z̄ and px → x and their synchronizers (pz̄ → z̄)∘ and
(px → x)∘ were produced by rebooting their original synchronizers
with fresh names pz̄ and px. Indeed, in a previous round of the pro-
tocol, the original points pz̄ → z̄ and px → x were matched with
the points pz → z and px̄ → x̄, respectively; however, the lat-
ter points were refused by their synchronizers (py → y | pz → z)•
and (px̄ → x̄ | pȳ → ȳ)• (to accommodate the selected communication on
y in that round); these refusals in turn necessitated the cancellations
(pz̄ → z̄)¨ and (px → x)¨.
4. Higher-order concurrency in the π-calculus
While we have an abstract state machine that correctly implements
(SEL COMM), we do not yet know if the local transition rules in
Figure 2 can be implemented faithfully, say by first-order message-
passing. We now show how these rules can be implemented con-
cretely in the π-calculus (Milner et al. 1992).
The π-calculus is a minimal concurrent language that allows
processes to dynamically create channels with fresh names and
communicate such names over channels. This language forms the
core of Concurrent Haskell. Let a, b, x range over names. The
syntax of processes is as follows.
Processes π
    π ::=             processes
      π | π′          parallel composition
      0               inaction
      (νa) π          name creation
      a⟨b⟩. π         output
      a(x). π         input
      !π              replication
Processes have the following informal meanings.
• π | π′ behaves as the parallel composition of π and π′.
• 0 does nothing.
• (νa) π creates a channel with fresh name a and continues as π;
the scope of a is π.
• a⟨b⟩. π sends the name b on channel a, and continues as π.
• a(x). π accepts a name on channel a, binds it to x, and contin-
ues as π; the scope of x is π.
• !π behaves as the parallel composition of an unbounded number
of copies of π; this construct, needed to model recursion, can be
eliminated with recursive process definitions.
A formal operational semantics can be found in (Milner et al.
1992). Of particular interest are the following reduction rule for
communication:

    a(x). π | a⟨b⟩. π′  −→  π{b/x} | π′

and the following structural rule for scope extrusion:

    a is fresh in π
    ─────────────────────────────
    π | (νa) π′  ≡  (νa) (π | π′)

The former rule models the communication of a name b on a
channel a, from an output process to an input process (in parallel);
b is substituted for x in the remaining input process. The latter
rule models the extrusion of the scope of a fresh name a across
a parallel composition. These rules allow other derivations, such as
the following for communication of fresh names:

    b is fresh in a(x). π
    ───────────────────────────────────────────
    a(x). π | (νb) a⟨b⟩. π′  −→  (νb) (π{b/x} | π′)
4.1 A π-calculus model of the abstract state machine
We interpret states of our machine as π-calculus processes that run
at points, channels, and synchronizers. These processes reduce by
communication to simulate transitions in the abstract state machine.
In this setting:
• Each point is identified with a fresh name p.
• Each channel c is identified with a pair of fresh names
(i[c], o[c]), on which it accepts messages from points that are
bound to input or output actions on c.
• Each synchronizer is identified with a fresh name s, on which
it accepts messages from points in its domain.
Informally, the following sequence of messages is exchanged
in any round of the protocol.
• A point p (at state p → c or p → c̄) begins by sending a
message to c on its respective input or output name i[c] or o[c];
the message contains a fresh name candidate[p] on which p
expects a reply from c.
• When c (at state c) gets a pair of messages on i[c] and
o[c], say from p and another point q, it replies by sending
messages on candidate[p] and candidate[q] (reaching state
⊕c(p, q) | c); these messages contain fresh names decision[p]
and decision[q] on which c expects replies from the synchroniz-
ers for p and q.
• On receiving a message from c on candidate[p], p (reaching
state ♥p) tags the message with its name and forwards it to its
synchronizer on the name s.
• If p is the first point to send such a message on s (that is, s is at
state s∘), a pair of fresh names (confirm[p], cancel[p]) is sent
back on decision[p] (reaching state s(p) | s•); for each sub-
sequent message accepted on s, say from p′, a blank message is
sent back on decision[p′] (reaching state ⊗(p′) | s•).
• On receiving messages from the respective synchronizers of p
and q on decision[p] and decision[q], c inspects the messages
and responds.
– If both (confirm[p], _) and (confirm[q], _) have come in,
signals are sent back on confirm[p] and confirm[q].
– If only (_, cancel[p]) has come in (and the other message is
blank), a signal is sent back on cancel[p]; likewise, if only
(_, cancel[q]) has come in, a signal is sent back on cancel[q].
• If s gets a signal on confirm[p] (reaching state s̈(p)), it
signals on p to continue. If s gets a signal on cancel[p] (reaching
state s̈), it “reboots” with fresh names for the points in its
domain, so that those points can begin another round.
Figure 3 formalizes this interpretation of states as (recursively
defined) processes. For convenience, we let the interpreted states
carry some auxiliary state variables in ⟨. . .⟩; these state variables
represent names that are created at run time. The state variables
carried by any state are unique to that state. Thus, they do not
convey any new, distinguishing information about that state.
For simplicity, we leave states of the form α uninterpreted, and
consider them inactive. We define α̂ as shorthand for i[c] if α is of
the form c, and o[c] if α is of the form c̄.
Programs in the source language are now compiled to pro-
cesses in the π-calculus. Suppose that the set of channels in a pro-
gram Πk∈1..m Sk is C. We compile this program to the process
(ν c∈C i[c], o[c]) (Πc∈C c | Πk∈1..m ≈Sk), where

    ≈S  =  α                                          if S = α
           (ν s, p⃗i) (s∘ | Πi∈1..n (pi → αi)⟨s, α̂i⟩)   if S = select(α⃗i), i ∈ 1..n,
                                                      and s, p⃗i are fresh names
Let ⇑ be a partial function from processes to states that, for
any state σ, maps its interpretation as a process back to σ. For any
process π such that ⇑π is defined, we define its denotation ⟦π⟧ to
be ⟦⇑π⟧; the denotation of any other process is undefined. We then
prove the following theorem (Chaudhuri 2009), closely following
the proof of Theorem 3.1.
THEOREM 4.1 (Correctness of the π-calculus implementation).
Let C be the set of channels in a program Πk∈1..m Sk. Then

    Πk∈1..m Sk  ≈  (ν c∈C i[c], o[c]) (Πc∈C c | Πk∈1..m ≈Sk)
Interpretation of states as processes

States of a point

    (p → c)⟨s, i[c]⟩ =
        (ν candidate[p]) i[c]⟨candidate[p]⟩. candidate[p](decision[p]).
        ♥p⟨decision[p], s, c⟩

    (q → c̄)⟨s, o[c]⟩ =
        (ν candidate[q]) o[c]⟨candidate[q]⟩. candidate[q](decision[q]).
        ♥q⟨decision[q], s, c̄⟩

    ♥p⟨decision[p], s, α⟩ =
        s⟨p, decision[p]⟩. p(). α

States of a channel

    c⟨i[c], o[c]⟩ =
        i[c](candidate[p]). o[c](candidate[q]).
        ( (ν decision[p], decision[q])
            candidate[p]⟨decision[p]⟩. candidate[q]⟨decision[q]⟩.
            ⊕c(p, q)⟨decision[p], decision[q]⟩
        | c⟨i[c], o[c]⟩ )

    ⊕c(p, q)⟨decision[p], decision[q]⟩ =
        ( decision[p](confirm[p], cancel[p]).
            ( decision[q](confirm[q], cancel[q]).
                confirm[p]⟨⟩. confirm[q]⟨⟩. 0
            | decision[q]().
                cancel[p]⟨⟩. 0 )
        | decision[p]().
            ( decision[q](confirm[q], cancel[q]).
                cancel[q]⟨⟩. 0
            | decision[q]().
                0 ) )

States of a synchronizer

    s∘ = s(p, decision[p]). ( s(p)⟨decision[p]⟩ | s• )

    s• = s(p, decision[p]). ( ⊗(p)⟨decision[p]⟩ | s• )

    s(p)⟨decision[p]⟩ =
        (ν confirm[p], cancel[p]) decision[p]⟨confirm[p], cancel[p]⟩.
        ( confirm[p](). s̈(p)
        | cancel[p](). s̈ )

    ⊗(p)⟨decision[p]⟩ = decision[p]⟨⟩. 0

    s̈(p) = p⟨⟩. 0

    s̈ = (ν s, p⃗i) (s∘ | Πi∈1..n (pi → αi)⟨s, α̂i⟩)
        where dom(s) = {p⃗i}, i ∈ 1..n, and ∀i ∈ 1..n. s(pi) = αi

Figure 3.
where ≈ is the largest relation such that P ≈ π iff
(Invariant) π −→∗ π′ for some π′ such that ⟦P⟧ = ⟦π′⟧;
(Safety) if π −→ π′ for some π′, then P −→∗ P′ for some P′
such that P′ ≈ π′;
(Progress) if P −→, then π −→+ π′ and P −→ P′ for some
π′ and P′ such that P′ ≈ π′;
(Fairness) if P −→ P′ for some P′, then π0 −→ . . . −→ πn for
some π0, . . . , πn such that πn = π, P ≈ πi for all 0 ≤ i < n,
and π0 −→+ π′ for some π′ such that P′ ≈ π′.
5. A CML library in Concurrent Haskell
We now proceed to code a full CML-style library for events in
a fragment of Concurrent Haskell with first-order message pass-
ing (Jones et al. 1996). This fragment is close to the π-calculus,
so we can lift our implementation in the π-calculus (Figure 3) to
this fragment. Going further, we remove the restrictions on the
source language: a program can be any well-typed Haskell pro-
gram. We implement not only receive, transmit, choose, and
sync, but also new, guard, wrap, and wrapabort. Finally, we ex-
ploit Haskell’s type system to show how events can be typed under
the standard IO monad (Gordon 1994; Jones and Wadler 1993).
Before we proceed, let us briefly review Concurrent Haskell’s
concurrency primitives. (The reader may wish to refer to (Jones et al.
1996) for details.) These primitives support concurrent I/O com-
putations, such as forking threads and communicating on mvars.
(Mvars are synchronized mutable variables, similar to π-calculus
channels; see below.)
Note that MVar and IO are polymorphic type constructors, as
follows:
• The type MVar tau is given to a communication cell that car-
ries values of type tau.
• The type IO tau is given to a computation that yields results
of type tau, with possible side effects via communication.
We rely on the following semantics of MVar cells.
• A cell can carry at most one value at a time, that is, it is either
empty or full.
• The function New :: IO (MVar tau) returns a fresh cell that
is empty.
• The function Get :: MVar tau -> IO tau is used to read
from a cell; Get m blocks if the cell m is empty, else gets the
content of m (thereby emptying it).
• The function Put :: MVar tau -> tau -> IO () is used
to write to a cell; Put m M blocks if the cell m is full, else puts
the term M in m (thereby filling it).
Further, we rely on the following semantics of IO computations;
see (Jones and Wadler 1993) for details.
• The function fork :: IO () -> IO () is used to spawn a
concurrent computation; fork f forks a thread that runs the
computation f.
• The function return :: tau -> IO tau is used to inject a
value into a computation.
• Computations can be sequentially composed by “piping”. We
use Haskell’s convenient do {...} notation for this purpose,
instead of applying the underlying piping function
(>>=) :: IO tau -> (tau -> IO tau’) -> IO tau’
For example, we write do {x <- Get m; Put m x} instead
of Get m >>= \x -> Put m x.
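As a warm-up, here is a minimal round trip built from these primitives; the
names roundTrip, m, and ack are hypothetical, and the behavior follows
directly from the MVar semantics stated above.

roundTrip :: IO Int
roundTrip = do {
  m <- New;
  ack <- New;
  fork (do { x <- Get m; Put ack (x + 1) });
  Put m 41;
  Get ack   -- blocks until the forked thread replies; returns 42
}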
Our library provides the following CML-style functions for pro-
gramming with events in Concurrent Haskell.¹ (Observe the differ-
ences between ML and Haskell types for these functions. Since
Haskell is purely functional, we must embed types for computa-
tions, with possible side-effects via communication, within the IO
monad. Further, since evaluation in Haskell is lazy, we can discard
λ-abstractions that simply “delay” eager evaluation.)
new :: IO (channel tau)
receive :: channel tau -> event tau
transmit :: channel tau -> tau -> event ()
guard :: IO (event tau) -> event tau
wrap :: event tau -> (tau -> IO tau’) -> event tau’
choose :: [event tau] -> event tau
wrapabort :: IO () -> event tau -> event tau
sync :: event tau -> IO tau
In this section, we focus on events that are built without
wrapabort; the full implementation appears in Section 6.
5.1 Type definitions
We begin by defining the types of cells on which messages are
exchanged in our protocol (recall the discussion in Section 4.1).²
These cells are of the form i and o (on which points initially
send messages to channels), candidate (on which channels re-
ply back to points), s (on which points forward messages to syn-
chronizers), decision (on which synchronizers inform channels),
confirm and cancel (on which channels reply back to synchro-
nizers), and p (on which synchronizers finally signal to points).
type In = MVar Candidate
type Out = MVar Candidate
type Candidate = MVar Decision
type Synchronizer = MVar (Point, Decision)
type Decision = MVar (Maybe (Confirm, Cancel))
type Confirm = MVar ()
type Cancel = MVar ()
type Point = MVar ()
Below, we use the following typings for the various cells used in
our protocol: i :: In, o :: Out, candidate :: Candidate,
s :: Synchronizer, decision :: Decision, confirm ::
Confirm, cancel :: Cancel, and p :: Point.
We now show code run by points, channels, and synchronizers
in our protocol. This code may be viewed as a typed version of the
π-calculus code of Figure 3.
5.2 Protocol code for points
The protocol code run by points abstracts on a cell s for the associ-
ated synchronizer, and a name p for the point itself. Depending on
whether the point is for input or output, the code further abstracts
on an input cell i or output cell o, and an input or output action
alpha.
@PointI :: Synchronizer -> Point -> In ->
IO tau -> IO tau
@PointI s p i alpha = do {
candidate <- New;
Put i candidate;
decision <- Get candidate;
¹ Instead of wrapabort, some implementations of CML provide the com-
binator withnack. Their expressive powers are exactly the same (Reppy
1999). Providing withnack is easier with an implementation strategy that
relies on negative acknowledgments. Since our implementation strategy
does not rely on negative acknowledgments, we stick with wrapabort.
² In Haskell, the type Maybe tau is given to a value that is either Nothing,
or of the form Just v where v is of type tau.
Put s (p,decision);
Get p;
alpha
}
@PointO :: Synchronizer -> Point -> Out ->
IO () -> IO ()
@PointO s p o alpha = do {
candidate <- New;
Put o candidate;
decision <- Get candidate;
Put s (p,decision);
Get p;
alpha
}
We instantiate the function @PointI in the code for receive,
and the function @PointO in the code for transmit. These as-
sociate appropriate point principals to any events constructed with
receive and transmit.
5.3 Protocol code for channels
The protocol code run by channels abstracts on an input cell i and
an output cell o for the channel.
@Chan :: In -> Out -> IO ()
@Chan i o = do {
candidate_i <- Get i;
candidate_o <- Get o;
fork (@Chan i o);
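-- fork a fresh copy of the channel server, so that subsequent
-- points arriving on i and o can be matched concurrently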
decision_i <- New;
decision_o <- New;
Put candidate_i decision_i;
Put candidate_o decision_o;
x_i <- Get decision_i;
x_o <- Get decision_o;
case (x_i,x_o) of
(Nothing, Nothing) ->
return ()
(Just(_,cancel_i), Nothing) ->
Put cancel_i ()
(Nothing, Just(_,cancel_o)) ->
Put cancel_o ()
(Just(confirm_i,_), Just(confirm_o,_)) -> do {
Put confirm_i ();
Put confirm_o ()
}
}
We instantiate this function in the code for new. This associates
an appropriate channel principal to any channel created with new.
5.4 Protocol code for synchronizers
The protocol code run by synchronizers abstracts on a cell s for
that synchronizer and some “rebooting code” X, provided later. (We
encode a loop with the function fix :: (tau -> tau) -> tau;
the term fix f reduces to f (fix f).)
@Sync :: Synchronizer -> IO () -> IO ()
@Sync s X = do {
(p,decision) <- Get s;
fork
(fix (\iter -> do {
(p’,decision’) <- Get s;
Put decision’ Nothing;
iter
} ) );
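-- the loop above refuses every later point that reports on s,
-- replying with a blank (Nothing) decision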
confirm <- New;
cancel <- New;
Put decision (Just (confirm,cancel));
fork
(do {
Get confirm;
Put p ()
} );
Get cancel;
X
}
We instantiate this function in the code for sync. This associates
an appropriate synchronizer principal to any application of sync.
5.5 Translation of types
Next, we translate types for channels and events. The Haskell types
for ML channel and event values are:
type channel tau = (In, Out, MVar tau)
type event tau = Synchronizer -> IO tau
An ML channel is a Haskell MVar tagged with a pair of input
and output cells. An ML event is a Haskell IO function that
abstracts on a synchronizer cell.
5.6 Translation of functions
We now translate functions for programming with events. We begin
by encoding the ML function for creating channels.
new :: IO (channel tau)
new = do {
i <- New;
o <- New;
fork (@Chan i o);
m <- New;
return (i,o,m)
}
• The term new spawns an instance of @Chan with a fresh pair of
input and output cells, and returns that pair along with a fresh
MVar cell that carries messages for the channel.
Next, we encode the ML combinators for building communi-
cation events. Recall that a Haskell event is an IO function that
abstracts on the cell of its synchronizer.
receive :: channel tau -> event tau
receive (i,o,m) = \s -> do {
p <- New;
@PointI s p i (Get m)
}
transmit :: channel tau -> tau -> event ()
transmit (i,o,m) M = \s -> do {
p <- New;
@PointO s p o (Put m M)
}
• The term receive c s runs an instance of @PointI with the
synchronizer s, a fresh name for the point, the input cell for
channel c, and an action that inputs on c.
• The term transmit c M s is symmetric; it runs an instance of
@PointO with the synchronizer s, a fresh name for the point,
the output cell for channel c, and an action that outputs term M
on c.
Next, we encode the ML combinators for specifying pre- and
post-synchronization actions.
guard :: IO (event tau) -> event tau
guard f = \s -> do {
v <- f;
v s
}
wrap :: event tau -> (tau -> IO tau’) -> event tau’
wrap v f = \s -> do {
x <- v s;
f x
}
• The term guard f s runs the computation f and passes the
synchronizer s to the event returned by the computation.
• The term wrap v f s passes the synchronizer s to the event v
and pipes the returned value to function f.
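For instance, the two combinators compose to trace an event's life cycle,
much like the guard and wrap functions added to the benchmarks of
Section 8; the name traced and the printed strings are illustrative only.

traced :: channel tau -> event tau
traced c = guard (do {
  putStrLn "Trying";
  return (wrap (receive c) (\x -> do {
    putStrLn "Succeeded";
    return x
  } ) )
} )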
Next, we encode the ML combinator for choosing among
a list of events. (We encode recursion over a list with
the function fold :: (tau’ -> tau -> tau’) -> tau’ ->
[tau] -> tau’. The term fold f x [] reduces to x, and the
term fold f x (v:V) reduces to fold f (f x v) V.)
choose :: [event tau] -> event tau
choose V = \s -> do {
temp <- New;
fold (\_ -> \v ->
fork (do {
x <- v s;
Put temp x
} ) ) () V;
Get temp
}
• The term choose V s spawns a thread for each event v in V,
passing the synchronizer s to v; any value returned by one of
these threads is collected in a fresh cell temp and returned.
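As a usage sketch (the name pick is hypothetical), selective receive over
two channels is then a one-liner:

pick :: channel tau -> channel tau -> IO tau
pick c c’ = sync (choose [receive c, receive c’])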
Finally, we encode the ML function for event synchronization.
sync :: event tau -> IO tau
sync v = do {
temp <- New;
fork
(fix (\iter -> do {
s <- New;
fork (@Sync s iter);
x <- v s;
Put temp x
} ) );
Get temp
}
• The term sync v recursively spawns an instance of @Sync with
a fresh synchronizer s and passes s to the event v; any value
returned by one of these instances is collected in a fresh cell
temp and returned.
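Putting the pieces together, the following sketch (with the hypothetical
name demo) creates a channel, forks a sender, and synchronizes on a
receive, using only the functions encoded above.

demo :: IO Int
demo = do {
  c <- new;
  fork (sync (transmit c 42));
  sync (receive c)   -- returns 42
}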
6. Implementation of wrapabort
The implementation of the previous section does not account for
wrapabort. We now show how wrapabort can be handled by
enriching the type for events.
Recall that abort actions are spawned only at events that
do not enclose the commit point. Therefore, in an encoding of
wrapabort, it makes sense to name events with the sets of points
they enclose. Note that the set of points that an event encloses may
not be static. In particular, for an event built with guard, we need
to run the guard functions to compute the set of points that such an
event encloses. Thus, we do not name events at compile time. In-
stead, we introduce events as principals in our protocol; each event
is named in situ by computing the list of points it encloses at run
time. This list is carried on a fresh cell name :: Name for that
event.
type Name = MVar [Point]
Further, each synchronizer carries a fresh cell abort ::
Abort on which it accepts wrapabort functions from events,
tagged with the list of points they enclose.
type Abort = MVar ([Point], IO ())
The protocol code run by points and channels remains the same.
We only add a handler for wrapabort functions to the protocol
code run by synchronizers. Accordingly, that code now abstracts
on an abort cell.
@Sync :: Synchronizer -> Abort -> IO () -> IO ()
@Sync s abort X = do {
...;
fork (do {
...;
fix (\iter -> do {
(P,f) <- Get abort;
fork iter;
if (elem p P) then return ()
else f
} )
} );
...
}
Now, after signaling the commit point p to continue, the syn-
chronizer continues to accept abort code f on abort; such code is
spawned only if the list of points P, enclosed by the event that sends
that code, does not include p.
The enriched Haskell type for event values is as follows.
type event tau =
Synchronizer -> Name -> Abort -> IO tau
Now, an ML event is a Haskell IO function that abstracts on a
synchronizer, an abort cell, and a name cell that carries the list of
points the event encloses.
The Haskell function new does not change. We highlight minor
changes in the remaining translations. We begin with the functions
receive and transmit. An event built with either function is
named by a singleton containing the name of the enclosed point.
receive (i,o,m) = \s -> \name -> \abort -> do {
...;
fork (Put name [p]);
...
}
transmit (i,o,m) M = \s -> \name -> \abort -> do {
...;
fork (Put name [p]);
...
}
In the function choose, a fresh name’ cell is passed to each
event in the list of choices; the names of those events are concate-
nated to name the choose event.
choose V = \s -> \name -> \abort -> do {
...;
P <-
fold (\P -> \v ->
do {
name’ <- New;
fork (do {
x <- v s name’ abort;
...
} );
P’ <- Get name’;
Put name’ P’;
return (P’ ++ P)
} ) [] V;
fork (Put name P);
...
}
We now encode the ML combinator for specifying abort actions.
wrapabort :: IO () -> event tau -> event tau
wrapabort f v = \s -> \name -> \abort -> do {
fork (do {
P <- Get name;
Put name P;
Put abort (P,f)
} );
v s name abort
}
• The term wrapabort f v s name abort spawns a thread
that reads the list of enclosed events P on the cell name and
sends the function f along with P on the cell abort; the syn-
chronizer s is passed to the event v along with name and abort.
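For instance, an event can announce when it loses a selection; the name
noisy and the message string are illustrative only.

noisy :: channel tau -> event tau
noisy c = wrapabort (putStrLn "Failed") (receive c)

Synchronizing on choose [noisy c, noisy c’] prints "Failed" for
whichever branch does not enclose the commit point.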
The functions guard and wrap remain similar.
guard f = \s -> \name -> \abort -> do {
v <- f;
v s name abort
}
wrap v f = \s -> \name -> \abort -> do {
x <- v s name abort;
f x
}
Finally, in the function sync, a fresh abort cell is now passed
to @Sync, and a fresh name cell is created for the event to be
synchronized.
sync v = do {
...;
fork (fix (\iter -> do {
...;
name <- New;
abort <- New;
fork (@Sync s abort iter);
x <- v s name abort;
...
} ) );
...
}
7. Implementation of communication guards
Beyond the standard primitives, some implementations of CML
further consider primitives for guarded communication. In par-
ticular, Russell (2001) implements such primitives in Concurrent
Haskell, but his implementation strategy is fairly specialized—for
example, it requires a notion of guarded events (see Section 9 for a
discussion on this issue). We show that in contrast, our implemen-
tation strategy can accommodate such primitives with little effort.
Specifically, we wish to support the following receive combi-
nator, that can carry a communication guard.
receive :: channel tau -> (tau -> Bool) -> event tau
Intuitively, (receive c cond) synchronizes with (transmit
c M) only if cond M is true.
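For example, a receiver that accepts only even numbers (the name evens
is hypothetical):

evens :: channel Int -> event Int
evens c = receive c even

A transmit c 3 in another thread cannot match evens c, while a
subsequent transmit c 4 can.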
In our implementation, we make minor adjustments to the types
of cells on which messages are exchanged between points and
channels.
type In tau = MVar (Candidate, tau -> Bool)
type Out tau = MVar (Candidate, tau)
type Candidate = MVar (Maybe Decision)
Next, we adjust the protocol code run by points and channels.
Input and output points bound to actions on c now send their
conditions and messages to c. A pair of points is matched only
if the message sent by one satisfies the condition sent by the other.
@Chan :: In tau -> Out tau -> IO ()
@Chan i o = do {
(candidate_i,cond) <- Get i;
(candidate_o,M) <- Get o;
...;
if (cond M) then do {
...;
Put candidate_i (Just decision_i);
Put candidate_o (Just decision_o);
...
} else do {
Put candidate_i Nothing;
Put candidate_o Nothing
}
}
@PointI :: Synchronizer -> Point -> In tau ->
(tau -> Bool) -> IO tau -> IO tau
@PointI s p i cond alpha = do {
...;
Put i (candidate,cond);
x <- Get candidate;
case x of
Nothing ->
@PointI s p i cond alpha
Just decision -> do {
Put s (p,decision);
...
}
}
@PointO :: Synchronizer -> Point -> Out tau ->
tau -> IO () -> IO ()
@PointO s p o M alpha = do {
...;
Put o (candidate,M);
x <- Get candidate;
case x of
Nothing ->
@PointO s p o M alpha
Just decision -> do {
Put s (p,decision);
...
}
}
Finally, we make minor adjustments to the type constructor
channel, and the functions receive and transmit.
type channel tau = (In tau, Out tau, MVar tau)
receive (i,o,m) cond = \s -> \name -> \abort -> do {
...;
@PointI s p i cond (Get m)
}
transmit (i,o,m) M = \s -> \name -> \abort -> do {
...;
@PointO s p o M (Put m M)
}
8. Evaluation
Our implementation is derived from a formal model, constructed
for the purpose of proof (see Theorem 4.1). Not surprisingly, to
simplify reasoning about the correctness of our code, we overlook
several possible optimizations. For example, some loops that fork
threads in our code can be bounded by explicit book-keeping;
instead, we rely on lazy evaluation and garbage collection to limit
unnecessary unfoldings. It is plausible that the performance of our
code can be improved with such optimizations.
Nevertheless, preliminary experiments indicate that our code is
already quite efficient. In particular, we compare the performance
of our library against OCaml’s Event module (Leroy et al. 2008).
The implementation of this module is directly based on Reppy’s
original CML implementation (Reppy 1999). Furthermore, it sup-
ports wrapabort, unlike recent versions of CML that favor an al-
ternative primitive, withnack, which we do not support (see foot-
note 1, p.7). Finally, most other implementations of CML-style
primitives do not reflect the standard semantics (Reppy 1999),
which makes comparisons with them meaningless. Indeed, some
of our benchmarks rely on the symmetry of choose—see, e.g., the
swap channel abstraction implemented on p.11; such benchmarks
cannot work correctly on a previous implementation of events in
Haskell (Russell 2001).³
For our experiments, we design a suite of benchmark programs
that rely heavily on higher-order concurrency. We describe these
benchmarks below; their code is available online (Chaudhuri 2009).
We compile these benchmarks using ghc 6.8.1 and ocamlopt
3.10.2. Benchmarks using our library run between 1–90% faster
than those using OCaml’s Event module, with a mean gain of 42%.
Some of our results are tabulated in Figure 4.
Our benchmarks are variations of the following programs.
Extended example Recall the example of Section 3.5. This is a
simple concurrent program that involves nondeterministic com-
munication; either there is communication on channels x and
z, or there is communication on channel y. To observe this
nondeterminism, we add guard, wrap, and wrapabort func-
tions to each communication event, which print messages such
as "Trying", "Succeeded", and "Failed" for that event at
run time. Both the Haskell and the ML versions of the program
exhibit this nondeterminism in our runs.
Our library beats OCaml’s Event module by an average of 16%
on this program.
Primes sieve This program uses the Sieve of Eratosthenes
(Wikipedia 2009) to print all prime numbers up to some n ≥ 2.
(The measurements in Figure 4 are for n = 12.) We implement
two versions of this program: (I) uses choose, (II) does not.
³ In any case, unfortunately, we could neither compile Russell’s implemen-
tation with recent versions of ghc, nor find his contact information online.
                    new  receive/  choose  guard/wrap/  sync   Our library  OCaml’s Event  Gain
                         transmit          wrapabort           (µs)         module (µs)    (%)
Extended example      3      6        2        18         4        393          456          16
Primes sieve (I)     22     39       11        11        28       1949         3129          61
Primes sieve (II)    11     28        0        11        28       1474         2803          90
Swap channels         5     16        4        12        12        547          565           1
Buffered channels     2     11        3         6         8        435          613          41
Figure 4.
(I) In this version, we create a “prime” channel and a “not
prime” channel for each i ∈ 2..n, for a total of 2 ∗ (n −1)
channels. Next, we spawn a thread for each i ∈ 2..n, that
selects between two events: one receiving on the “prime”
channel for i and printing i, the other receiving on the “not
prime” channel for i and looping. Now, for each multiple
j ≤ n of each i ∈ 2..n, we send on the “not prime” channel
for j. Finally, we spawn a thread for each i ∈ 2..n, sending
on the “prime” channel for i.
(II) In this version, we create a “prime/not prime” channel for
each i ∈ 2..n, for a total of n − 1 channels. Next, we
spawn a thread for each i ∈ 2..n, receiving a message on
the “prime/not prime” channel for i, and printing i if the
message is true or looping if the message is false. Now,
for each multiple j ≤ n of each i ∈ 2..n, we send false
on the “prime/not prime” channel for j. Finally, we spawn
a thread for each i ∈ 2..n, sending true on the “prime/not
prime” channel for i.
Our library beats OCaml’s Event module by an average of 61%
on version (I) and 90% on version (II).
Swap channels This program implements and uses a swap channel
abstraction, as described in (Reppy 1994). Intuitively, if x is a
swap channel, and we run the program
fork (do {y <- sync (swap x M); ...});
do {y’ <- sync (swap x M’); ...}
then M’ is substituted for y and M is substituted for y’ in the
continuation code (elided by ...s).
type swapChannel tau = channel (tau, channel tau)
swap :: swapChannel tau -> tau -> event tau
swap ch msgOut = guard (do {
  inCh <- new;
  return (choose [
    wrap (receive ch)
      (\x -> let (msgIn, outCh) = x in do {
        sync (transmit outCh msgOut);
        return msgIn
      } ),
    wrap (transmit ch (msgOut, inCh))
      (\_ -> sync (receive inCh)) ])
} )
Communication over a swap channel is already highly nonde-
terministic, since one of the ends must choose to send its mes-
sage first (and accept the message from the other end later),
while the other end must make exactly the opposite choice. For
the measurements in Figure 4, we add further nondeterminism
by spawning four instances of swap on the same swap channel.
Our library still beats OCaml’s Event module on this program,
but only marginally. Note that in this case, our protocol possi-
bly wastes some rounds by matching points that have the same
synchronizer (and eventually canceling these matches). An op-
timization that eliminates such matches altogether should im-
prove the performance of our implementation.
Buffered channels This program implements and uses a buffered
channel abstraction, as described in (Reppy 1992). Intuitively,
a buffered channel maintains a queue of messages, and chooses
between receiving a message and adding it to the queue, or
removing a message from the queue and sending it. For the
measurements in Figure 4, we run two sends followed by two
accepts on a buffered channel.
Our library beats OCaml’s Event module by an average of 41%
on this program.
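For concreteness, here is one way the abstraction might be coded with our
primitives; this is a sketch under the library's interface, not Reppy's
original code, and the names bufferedChannel, newBuffered,
bufferedSend, and bufferedAccept are our own.

type bufferedChannel tau = (channel tau, channel tau)

newBuffered :: IO (bufferedChannel tau)
newBuffered = do {
  inCh <- new;
  outCh <- new;
  let { loop q = do {
          q’ <- case q of {
                  [] -> sync (wrap (receive inCh) (\x -> return [x]));
                  (y:ys) -> sync (choose [
                              wrap (receive inCh) (\x -> return (q ++ [x])),
                              wrap (transmit outCh y) (\_ -> return ys) ]) };
          loop q’ } };
  fork (loop []);   -- the server thread owns the queue
  return (inCh, outCh)
}

bufferedSend :: bufferedChannel tau -> tau -> IO ()
bufferedSend (i, o) x = sync (transmit i x)

bufferedAccept :: bufferedChannel tau -> IO tau
bufferedAccept (i, o) = sync (receive o)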
In addition to running times, Figure 4 tabulates the number of
CML-style primitives used in each benchmark. We defer a more
detailed investigation of the correlations between our gains and the
use of these primitives, if any, to future work.
All the code that appears in this paper can be downloaded from:
~avik/cmllch/
9. Related work
We are not the first to implement CML-style concurrency prim-
itives in another language. In particular, Russell (2001) presents
an implementation of events in Concurrent Haskell. The imple-
mentation provides guarded channels, which filter communication
based on conditions on message values (as in Section 7). Unfortu-
nately, the implementation requires a rather complex Haskell type
for event values. In particular, a value of type event tau needs
to carry a higher-order function that manipulates a continuation of
type IO tau -> IO (). Further, a critical weakness of Russell’s
implementation is that the choose combinator is asymmetric. As
observed in (Reppy and Xiao 2008), this restriction is necessary for
the correctness of that implementation. In contrast, we implement
a (more expressive) symmetric choose combinator, following the
standard CML semantics. Finally, we should point out that Rus-
sell’s CML library is more than 1300 lines of Haskell code, while
ours is less than 150. Yet, guarded communication as proposed by
Russell is already implemented in our setting, as shown in Sec-
tion 7. In the end, we believe that this difference in complexity is
due to the clean design of our synchronization protocol.
Independently of our work, Reppy and Xiao (2008) recently
pursue a parallel implementation of a subset of CML, with a dis-
tributed protocol for synchronization. As in (Reppy 1999), this im-
plementation builds on ML machinery such as continuations, and
further relies on a compare-and-swap instruction. Unfortunately,
their choose combinator cannot select among transmit events,
that is, their subset of CML cannot express selective communica-
tion with transmit events. It is not clear whether their implemen-
tation can be extended to account for the full power of choose.
Orthogonally, Donnelly and Fluet (2006) introduce transac-
tional events and implement them over the software transactional
memory (STM) module in Concurrent Haskell. More recently,
Effinger-Dean et al. (2008) implement transactional events in ML.
Combining all-or-nothing transactions with CML-style concur-
rency primitives is attractive, since it recovers a monad. Unfortu-
nately, implementing transactional events requires solving NP-hard
problems (Donnelly and Fluet 2006), and these problems seem to
interfere even with their implementation of the core CML-style
concurrency primitives. In contrast, our implementation of those
primitives remains rather lightweight.
Other related implementations of events include those of Flatt
and Findler (2004) in Scheme and of Demaine (1998) in Java. Flatt
and Findler provide support for kill-safe abstractions, extending
the semantics of some of the CML-style primitives. On the other
hand, Demaine focuses on efficiency by exploiting communication
patterns that involve either single receivers or single transmitters. It
is unclear whether Demaine’s implementation of non-deterministic
communication can accommodate event combinators.
Distributed protocols for implementing selective communica-
tion date back to the 1980s. The protocols of Buckley and Silber-
schatz (1983) and Bagrodia (1986) seem to be among the earliest in
this line of work. Unfortunately, those protocols are prone to dead-
lock. Bornat (1986) proposes a protocol that is deadlock-free as-
suming communication between single receivers and single trans-
mitters. Finally, Knabe (1992) presents the first deadlock-free pro-
tocol to implement selective communication for arbitrary channel
communication. Knabe’s protocol appears to be the closest to ours.
Channels act as locations of control, and messages are exchanged
between communication points and channels to negotiate synchro-
nization. However, Knabe assumes a global ordering on processes
and maintains queues for matching communication points; we do
not require either of these facilities in our protocol. Furthermore, as
in (Demaine 1998), it is unclear whether the protocol can accom-
modate event combinators.
Finally, our work should not be confused with Sangiorgi’s trans-
lation of the higher-order π-calculus (HOπ) to the π-calculus (San-
giorgi 1993). While HOπ allows processes to be passed as values,
it does not immediately support higher-order concurrency. For in-
stance, processes cannot be modularly composed in HOπ. On the
other hand, it may be possible to show alternate encodings of the
process-passing primitives of HOπ in π-like languages, via an in-
termediate encoding with CML-style primitives.
10. Conclusion
In this paper, we show how to implement higher-order concurrency
in the π-calculus, and thereby, how to encode CML’s concurrency
primitives in Concurrent Haskell, a language with first-order mes-
sage passing. We appear to be the first to implement the standard
CML semantics for event combinators in this setting.
An interesting consequence of our work is that implementing
selective communication à la CML on distributed machines is re-
duced to implementing first-order message passing on such ma-
chines. This clarifies a doubt raised in (Jones et al. 1996).
At the heart of our implementation is a new, deadlock-free pro-
tocol that is run among communication points, channels, and syn-
chronization applications. This protocol seems to be robust enough
to allow implementations of sophisticated synchronization primi-
tives, even beyond those of CML.
References
R. Bagrodia. A distributed algorithm to implement the general-
ized alternative command of CSP. In ICDCS’86: International
Conference on Distributed Computing Systems, pages 422–427.
IEEE, 1986.
R. Bornat. A protocol for generalized Occam. Software Practice
and Experience, 16(9):783–799, 1986. ISSN 0038-0644.
G. N. Buckley and A. Silberschatz. An effective implementation
for the generalized input-output construct of CSP. ACM Trans-
actions on Programming Languages and Systems, 5(2):223–235,
1983. ISSN 0164-0925.
A. Chaudhuri. A Concurrent ML library in Concurrent Haskell,
2009. Links to proofs and experiments at umd.edu/~avik/projects/cmllch/.
E. D. Demaine. Protocols for non-deterministic communication
over synchronous channels. In IPPS/SPDP’98: Symposium on
Parallel and Distributed Processing, pages 24–30. IEEE, 1998.
K. Donnelly and M. Fluet. Transactional events. In ICFP’06:
International Conference on Functional Programming, pages
124–135. ACM, 2006.
L. Effinger-Dean, M. Kehrt, and D. Grossman. Transactional events
for ML. In ICFP’08: International Conference on Functional
Programming, pages 103–114. ACM, 2008.
M. Flatt and R. B. Findler. Kill-safe synchronization abstractions.
In PLDI’04: Programming Language Design and Implementa-
tion, pages 47–58. ACM, 2004. ISBN 1-58113-807-5.
A. D. Gordon. Functional programming and Input/Output. Cam-
bridge University, 1994. ISBN 0-521-47103-6.
C. A. R. Hoare. Communicating sequential processes. Communi-
cations of the ACM, 21(8):666–677, 1978.
S. L. Peyton Jones and P. Wadler. Imperative functional program-
ming. In POPL’93: Principles of Programming Languages,
pages 71–84. ACM, 1993.
S. L. Peyton Jones, A. D. Gordon, and S. Finne. Concurrent
Haskell. In POPL’96: Principles of Programming Languages,
pages 295–308. ACM, 1996.
F. Knabe. A distributed protocol for channel-based communica-
tion with choice. In PARLE’92: Parallel Architectures and Lan-
guages, Europe, pages 947–948. Springer, 1992. ISBN 3-540-
55599-4.
X. Leroy, D. Doligez, J. Garrigue, D. Rémy, and J. Vouillon.
The Objective Caml system documentation: Event module, 2008.
Available at manual-ocaml/libref/Event.html.
R. Milner, J. Parrow, and D. Walker. A calculus of mobile pro-
cesses, parts I and II. Information and Computation, 100(1):
1–77, 1992.
J. H. Reppy. Concurrent programming in ML. Cambridge Univer-
sity, 1999. ISBN 0-521-48089-2.
J. H. Reppy. Higher-order concurrency. PhD thesis, Cornell
University, 1992. Technical Report 92-1852.
J. H. Reppy. First-class synchronous operations. In TPPP’94:
Theory and Practice of Parallel Programming. Springer, 1994.
J. H. Reppy and Y. Xiao. Towards a parallel implementation of
Concurrent ML. In DAMP’08: Declarative Aspects of Multicore
Programming. ACM, 2008.
G. Russell. Events in Haskell, and how to implement them. In
ICFP’01: International Conference on Functional Program-
ming, pages 157–168. ACM, 2001. ISBN 1-58113-415-0.
D. Sangiorgi. From pi-calculus to higher-order pi-calculus, and
back. In TAPSOFT’93: Theory and Practice of Software Devel-
opment, pages 151–166. Springer, 1993.
Wikipedia. Sieve of Eratosthenes, 2009. See
wikipedia.org/wiki/Sieve_of_Eratosthenes.
|
https://www.scribd.com/doc/22445456/A-Concurrent-ML-Library-in-Concurrent-Haskell
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
How to set up and tune the FM radio for Windows Phone 8
[ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ]
This topic describes how to connect to the FM radio in a Windows Phone app that targets Windows Phone OS 7.1.
This topic contains the following sections.
It can take up to three seconds for the first FMRadio method call to return after the phone boots up.
After the FM Radio is first initialized, if the phone is running in an active state, the methods will typically return within 100 ms.
Avoid setting up the FM Radio or synchronizing the UI thread while the app is running.
Delay sending further commands to the FM Radio until at least one second after the FM Radio is enabled.
For more information and performance tips, see Creating High Performance Applications for Windows Phone.
To set up the FM radio:
Add a using directive to include the Microsoft.Devices.Radio namespace, which contains the FMRadio API.
Create an instance of the FMRadio class and then set the power mode.
|
https://msdn.microsoft.com/library/windows/apps/ff769541
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
t:SEQ Element | seq Object
This topic documents a feature of HTML+TIME 2.0, which is obsolete as of Windows Internet Explorer 9.
Defines a new timeline container in an HTML document for sequentially timed elements.
Members Table
The following table lists the members exposed by the seq object.Attributes/PropertiesCollectionsEventsMethodsObjects
Remarks
All timed HTML descendants of this XML element have sequential timing. A duration value (dur property) must be specified or the next element in the sequence might never be displayed. Elements without timing attributes are ignored by the timing mechanism and are statically rendered. A timed element is an HTML element with an associated time behavior.
The default value of begin for children of a seq element is 0, measured from the end of the previous element in the sequence.
This element is not rendered.
This element requires a closing tag.
Example
This example uses the t:SEQ element to display a sequence of text lines without specifying begin times for each timed element in the sequence.

<HTML XMLNS:t>
<HEAD>
<TITLE>SEQ</TITLE>
<STYLE>
.time {behavior:url(#default#time2);}
</STYLE>
<?IMPORT namespace="t" implementation="#default#time2">
</HEAD>
<BODY TOPMARGIN=0 LEFTMARGIN=0>
<t:SEQ>
<DIV ID="div1" CLASS="time" DUR="2">First line of text.</DIV>
<DIV ID="div2" CLASS="time" DUR="2">Second line of text.</DIV>
<DIV ID="div3" CLASS="time" DUR="2">Third line of text.</DIV>
<DIV ID="div4" CLASS="time" DUR="2">Fourth line of text.</DIV>
<SPAN STYLE="color:black" ID="span1" CLASS="time" DUR="indefinite">
<B>End of sequence.</B></SPAN>
</t:SEQ>
</BODY>
</HTML>
Code example:
See Also
|
https://technet.microsoft.com/en-us/library/ms533602(v=vs.85).aspx
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
mKatz
on 13 November 2014 - 12:51 PM
here are two screenshots to hopefully help greaten the ability to understand what OP is asking for.
Posted by mKatz
on 27 September 2012 - 01:58 PM
Posted by mKatz
on 28 August 2012 - 08:49 AM
#define STRICT
#define _AFXDLL
#include "stdafx.h"
#include <tchar.h>
#include <AFXWIN.H>
#include <windows.h>
#include "Serial\Serial\Serial.h"
#include "Serial\Serial\SerialEx.h"
#include "Serial\Serial\SerialMFC.h"
#include "Serial\Serial\SerialWnd.h"
int WINAPI _tWinMain
(
HINSTANCE, //hInst
HINSTANCE, //hInstPrev
int
)
{
CSerial serial;
serial.Open(_T("COM4"));
serial.Setup (CSerial::EBaud9600, CSerial::EData8, CSerial::EParNone, CSerial::EStop1);
CSerial::SetupHandshaking;
serial.Write("Hello World");
serial.Close();
return 0;
}
Posted by mKatz
on 27 August 2012 - 01:45 PM
Posted by mKatz
on 27 August 2012 - 10:22 AM
Posted by mKatz
on 17 August 2012 - 01:50 PM
As the error messages mention, you're trying to redefine EthicalCompetition::Connection::Connection(). You define it as an empty inline method in the class definition for it above. Perhaps you meant for the second definition to be EthicalCompetition::Connection::Connect()? If the horrible spacing isn't the board's fault, I highly suggest modern inventions like indentation. It's 2012, you can afford a tab or three in a file.
Posted by mKatz
on 09 August 2012 - 01:39 PM
Achievements are a great feature, but I still can't see them as a feature that will greatly enhance the self-improvement path. I think players see achievements as bonus missions, that might be cool to complete to test their skills, to make the most of the game, or just to brag. Testing skills is nearer to the self-improvement goal, but there are many ways not recognized by the game designer in which the player could test his abilities. Taking your example, is killing 100 ogres enough? What about 1,000? But is that number really testing the player's expertise with the toothpick, or merely his patience? And that makes me think that achievements are mostly nice trophies the player will put aside to continue his development journey.
Posted by mKatz
on 06 August 2012 - 01:26 PM
Posted by mKatz
on 06 August 2012 - 12:44 PM
// learn.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
#include "safestuff.cpp"
#include "SafeCracker.cpp"
#include <string>
using namespace std;
int main()
{
cout << "Suprise, suprise!" << endl;
cout << "The combination is (once again)" <<endl;
cout << SafeCracker(12)<<endl;
system("pause");
return 0;
}
#include "stdafx.h"
#include <string>
using namespace std;
#ifndef SAFESTUFF_H_INCLUDED
#define SAFESTUFF_H_INCLUDED
string SafeCracker(int SafeID);
#endif // SAFESTUFF_H_INCLUDED
#include "stdafx.h"
#include <string>
using namespace std;
string SafeCracker(int SafeID)
{
return "13-26-16";
}
Posted by mKatz
on 13 July 2012 - 02:00 PM
You are maybe right... I only thought many kids (between 10 and 14/16) don't really like Math (I know many people like that ;) ), but this was only my opinion. I also have a "good" story, but it does not handle humans like Steve from Minecraft and contains no fantasy, more a normal adventure without anything like magic and sci-fi. Unfortunately I can not write it down here, I just have not enough time. Regards, omercan
Posted by mKatz
on 13 July 2012 - 01:55 PM
Posted by mKatz
on 13 July 2012 - 01:06 PM
What about a very different block system? Like a fuel stand or kilogram... For example: You can have 1 kg sand, but to transport it you need a bin, and for a bin you need 5 kg iron. Iron is only available as ore. For 5 kg iron you need 7 kg ore and fire... For fire you need etc... I think kg or something similar is a good choice, because it would be really "easy" to implement half blocks or different blocks like triangles. Also it makes it more independent from the blocks themselves. But there is a big problem... I think many people don't like something mathematical like kilograms. With triangle blocks you could also implement good transports like cars... Wish you luck for your project! PS: I also had the same idea one year ago... but stopped because it would be only a Minecraft clone
Posted by mKatz
on 12 July 2012 - 11:00 AM
What if you make each block, say, 1/8 of the Minecraft block size? Then, when you punch the block with your pickaxe, you break more than one block at a time, creating a destruction effect. For placement, you can make the player capable of crafting a bunch of those little blocks into a larger block, which in reality is only a bunch of the little ones (I think I'm not being clear here; basically, the player has the option to place 8 little blocks in the shape of a larger one, or place the normal 1/8-sized block). This way terrain would be much more detailed and the game more realistic. Is it possible with the technology of your choice to achieve this?
Posted by mKatz
on 11 July 2012 - 03:55 PM
Posted by mKatz
on 11 July 2012 - 02:14 PM
GameDev.net™, the GameDev.net logo, and GDNet™ are trademarks of GameDev.net, LLC
|
http://www.gamedev.net/user/200011-mkatz/?tab=reputation&app_tab=forums&type=received
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
#include <TRefFinder.hpp>
List of all members.
The method FindTrack will find the track using the TRef, if the TRef contains a valid unique ID. This will only work if the ROOT file was written with only one process ID, that is, during one ROOT session. Usually this is the case for the CSG caf tree production, but not for files merged from several production files.
Usage:
Definition at line 32 of file TRefFinder.hpp.
|
http://www-d0.fnal.gov/Run2Physics/working_group/data_format/caf/classcaf__util_1_1TRefFinder.html
|
crawl-003
|
en
|
refinedweb
|
User Name:
Published: 16 Jan 2012
By: Akhil Mittal
Download Sample Code
This article focuses on understanding a basic multilayered architecture in C#.Net.
The article starts with introduction to one tier, two tier and n-tier architectures, their pros and cons, and later describes how we can achieve a simple basic multilayered architecture in .Net.
My effort in this article would be to focus on next level programming in .Net for developing an enterprise application.
When the different components in a system are organized systematically, we call it a system architecture. The architecture is the enterprise-scale division of a system into layers or tiers, each having responsibility for a major part of the system and with as little direct influence on other layers as possible.
There are plenty of ways in which a system can be split into a number of logical tiers.
Single-tier applications are basically simple standalone programs.
It's not necessary to take network communication and the risk of network failure into consideration in these types of cases, as such programs do not have network access.
Since all the data resides within the same application, these programs do not need to focus on data synchronization. When the tiers are separated physically, the application gets slower, because communication over the network results in a loss of performance; one-tier applications therefore certainly have high performance.
A two-tier application, in comparison to the one-tier application described above, does not combine all functions into a single process but separates different functions. Consider, for example, a chat application. This kind of application contains two separate tiers, a client and a server.
The client has the responsibility of capturing user input and displaying the actual messages. The server is responsible for the communication between the people that use the chat client.
A three-tier application adds another tier to the previously mentioned chat application; this could be in the form of a database. One could think of a three-tier application as a dynamic web application, which has a user interface, business logic, services and a database, each placed in a different tier, as illustrated in Figure 1. As mentioned in the previous section, a two-tier architecture separates the user interface from the business logic; in the same way, the three-tier architecture separates the database from the business logic.
A logical n-tier application is an application where all logical parts are separated into discrete classes. In a typical business application, this generally involves a presentation layer, a business logic layer and a data access layer. This separation makes the application easier to maintain. The advantages of this architecture are that all business rules are centralized, which makes them easy to create, use and re-use. The data access is also centralized, which has the same advantage as the centralization of the business rules: since changes only have to be implemented in one location, maintenance becomes easier. There are really not that many disadvantages to this kind of architecture; however, it takes a bit longer to get up and running, since several separate components have to be developed, which might also make the code a bit harder to grasp for less experienced developers.
Let's start developing a simple multilayered architecture in .NET. I take C# as my programming language; however, the programming language is not a bar at all, and one can choose whichever language one is comfortable programming in. The architecture we are going to implement has the design shown in Figure 2. We will start by creating the architecture for a web application; later it could be converted into a Windows application too.
We will make use of class libraries to physically separate our layers. Therefore, one web application project, one business logic layer class library, one data access layer class library and one common layer class library can be included in the solution.
Let's follow the implementation step by step.
Step 1: Add a Web Project (Presentation layer).
Open your Visual Studio and add a simple website to the solution, name it Presentation Layer.
Your Development Environment may look like the following.
Our Presentation Layer is the web application that will be exposed to the end user.
The web application includes Web Forms (i.e. .aspx pages), user controls (i.e. .ascx pages), Scripts (client-side JavaScript), Styles (CSS and custom styles for styling the pages), a master page (.master extension, providing common functionality to a group of pages) and configuration files like web.config and app.config.
Let's set up our next projects and define them in separate layers. Add three folders to your solution, named BusinessLogicLayer, DataAccessLayer and CommonLayer. Your solution will look like the one below.
Step 2. Add a Business Logic Layer, Data Access Layer and Common Layer:
Right click the BusinessLogicLayer folder and add a C# class library project to it; call it BLL.csproj.
Doing this will add a C# class library project to our solution in the BusinessLogicLayer folder.
Add two more projects to the CommonLayer and DataAccessLayer folders respectively and call them Common.csproj and DAL.csproj.
The Data Access Layer project contains the entity classes and the objects that communicate with the database, whereas our Common project contains the properties and helper classes shared by all three layers. It contains the objects that are passed back and forth between the presentation layer and the data access layer, and it also acts as a means of message passing between the layers.
Now our solution would look like as below.
Step 3. Create a database with a sample table and put some default values in it. For example, I have created a database named "EkuBase" and a simple Student table with the following fields: StudentId, Name, Email, Address, Age and Country, with StudentId as the primary key.
Step 4. Let's add classes to our projects. Add StudentBLL.cs, StudentDAL.cs and StudentEntity.cs to BLL, DAL and the Common Layer respectively. Make sure you qualify them with logical namespaces, so that they are easy to recognize and use.
Step 5. Add a connection string to web.config and a helper class to the DAL to interact with the database. In my case I am using a SQL helper for ease.
Define the connection string in your SQL helper so that you don't have to create a connection again and again and pass it to your method whenever you interact with the database.
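The helper itself is not reproduced here, so the following is a minimal sketch of what it could look like; the class name SqlHelper and the connection-string name "EkuBaseConnection" are assumptions, not the article's actual code.

using System.Configuration;
using System.Data;
using System.Data.SqlClient;

namespace DAL
{
    public static class SqlHelper
    {
        // Read the connection string once; callers never build connections themselves.
        private static readonly string ConnectionString =
            ConfigurationManager.ConnectionStrings["EkuBaseConnection"].ConnectionString;

        public static DataTable ExecuteDataTable(string commandText,
                                                 params SqlParameter[] parameters)
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(commandText, connection))
            using (var adapter = new SqlDataAdapter(command))
            {
                if (parameters != null)
                    command.Parameters.AddRange(parameters);

                var table = new DataTable();
                adapter.Fill(table);   // Fill opens and closes the connection itself.
                return table;
            }
        }
    }
}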
Add a reference to System.Configuration to the DAL project before reading the above connection string.
Step 6. Configure the solution and add DLLs to the dependent layers.
Since we need to separate the business logic, presentation and data access, and we do not want the presentation layer to interact directly with either the database or the business logic, we add a reference to the business logic layer in our presentation layer, a reference to the data access layer in the business logic layer, and a reference to the common layer in all three layers. To achieve this, add a common DLL folder to the physical location of the solution and set the build output path of all three class library projects to that DLL folder; that way, each layer has direct access to the DLLs it needs. We'll get the DLLs in that folder once we build the solution.
Do this for the DAL and Common Layer as shown in Figure 8. After compiling all the projects, we get the DLLs in the newly created DLL folder.
Now add references to Common.dll and BLL.dll in the Presentation Layer, Common.dll and DAL.dll in the BLL layer, and Common.dll in the DAL layer, and compile your solution.
Now the code of the BLL is accessible to the Presentation Layer, the DAL is accessible to the BLL, and Common is accessible to all three layers.
Step 7. Write methods to get the flow.
Now we need to write some code in our layers to get a feel for the flow between them. Let's create a scenario: suppose we need to get the details of all the students whose student id is less than 5. Add some sample data to your Student table (about 7 rows will do, Figure 11), and add a GridView to Default.aspx in the presentation layer to show the data.
Now decorate your StudentEntity class in the Common layer with the following code, creating a property for each column we are going to access from the Student table.
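The article's actual listing is not reproduced here, so this is a minimal sketch; the ErrorMessage property is an assumption, used below to carry validation messages between the layers.

namespace Common
{
    // One property per column of the Student table.
    public class StudentEntity
    {
        public int StudentId { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Address { get; set; }
        public int Age { get; set; }
        public string Country { get; set; }

        // Assumed helper for passing a validation message back to the UI.
        public string ErrorMessage { get; set; }
    }
}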
Decorate StudentDAL with the following code to fetch the data from the database. As a convention, we always write data-interaction code in this layer only.
Here we simply make a call to the database to get the students having an id less than 5.
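A minimal sketch consistent with that description; the method name GetStudents is an assumption, and it reuses the SqlHelper sketched above.

using System.Data;
using System.Data.SqlClient;
using Common;

namespace DAL
{
    public class StudentDAL
    {
        // Returns every student whose id is below the one supplied.
        public DataTable GetStudents(StudentEntity student)
        {
            return SqlHelper.ExecuteDataTable(
                "SELECT StudentId, Name, Email, Address, Age, Country " +
                "FROM Student WHERE StudentId < @StudentId",
                new SqlParameter("@StudentId", student.StudentId));
        }
    }
}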
We write the following code in the BLL class, where we validate whether the id is less than or greater than 5, and throw an error if it is greater than 5; the error is shown on our Default.aspx page by setting the message as the error label's text. The BLL makes a call to the DAL to fetch the students and passes them on to the presentation layer, where the data is shown in the GridView.
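Again, a sketch only, assuming the entity and DAL shapes above:

using System.Data;
using Common;
using DAL;

namespace BLL
{
    public class StudentBLL
    {
        // Validates the request; the DAL is never reached when validation fails.
        public DataTable GetStudents(StudentEntity student)
        {
            if (student.StudentId > 5)
            {
                student.ErrorMessage = "Id should be less than 5.";
                return null;
            }

            return new StudentDAL().GetStudents(student);
        }
    }
}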
In the presentation layer we write code to bind our GridView, or else show the error message in case an error is returned from the BLL.
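A sketch of the code-behind; the control names GridView1 and lblError are assumptions.

using System;
using BLL;
using Common;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var student = new StudentEntity { StudentId = 6 };
        var result = new StudentBLL().GetStudents(student);

        if (result == null)
        {
            lblError.Text = student.ErrorMessage;   // validation failed in the BLL
        }
        else
        {
            GridView1.DataSource = result;
            GridView1.DataBind();
        }
    }
}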
In the above code we specify the student id as 6 in the StudentEntity and pass it to the BLL. When we run the code, we get the following page, with our error label set to the error message.
It clearly states that the id should be less than 5. Note that we never reach the DAL before the data is validated.
Now change the student id to 5 and see the result. We get the following page.
Thus we get the expected result.
Here we have seen how we communicated through different layers performing different roles to fetch the data.
There are various advantages to developing applications that are split up into different tiers or layers.
In this article I discussed what layered applications are, the different types of layered applications, and how to create a multilayered application in .NET. We could handle exceptions more intelligently in the DAL and BLL; however, that was not within the scope of this article, so that part is skipped, and I'll surely discuss it in my forthcoming articles. The article focused on development for beginners, who face challenges in creating an architecture before starting development. Happy Coding!
This author has published 4 articles on DotNetSlackers. View other articles or the complete profile here.
Happy Coding :-)
Akhil Mittal.
Hi BJ, you are absolutely right... We could use the DTO pattern; in fact there are hundreds of ways in which you can redesign and refactor your architecture by using ORMs or any other pattern... but the main focus of the article is beginners, as I have already mentioned in my article... once a basic architecture is implemented, a developer can refactor the layers according to the requirements and needs... my forthcoming articles will include the use of ORMs and IoC patterns....
Thanks for giving your valuable time to read my article.... :-)
|
http://dotnetslackers.com/articles/net/Understanding-Multilayered-Architecture-in-Net.aspx
|
crawl-003
|
en
|
refinedweb
|
Published: 01 Apr 2011
By: Dhananjay Kumar
In this article, we will discuss how to consume data from the cloud on a Windows 7 phone.
Cloud and Windows 7 phones are two common terms that you can hear from many tech-savvy people. So I thought it was the right time to integrate these technologies. In this article, I have tried my best not to focus on the theoretical aspects. Instead, I will present a step-by-step approach on how to consume data from the cloud in a Windows 7 phone application.
There are two major steps involved in this process:
The first step is to create the database. We are going to use the School database; the script for the sample School database can be copied from here.
Right click on School Database and select Tasks. From Tasks, select Generate Script.
From the pop-up, select the Set Scripting option. Give the file a name.
Click on the SQL Azure tab.
You will get the project you have created for yourself.
Click on the project. In my case, the project name is debugmode. After clicking on the project, you can see a list of all the databases created in your SQL Azure account.
Here, in my account, there are two databases that have been already created. They are the master and student database.
Master database is the default database created by SQL Azure for you.
Click on Create Database.
Give the name of your database. Select the edition as Web and specify the
max size of database.
You can also select Business as the edition.
Next, click on Create; you can see on the Databases tab that the Demo1 database has been created.
To know what is the database server name of SQL Azure portal, login to Windows Azure portal with your live credential and then click on
the SQL Azure tab.
Now, once you have successfully connected to the School database in SQL Azure, copy the script and run it as shown below.
After successfully running the script, run the command below and all the table names will be listed.
In this way, you have successfully migrated the database to SQL Azure.
Create a new project and select the ASP.NET Web Application project template from the Web tab. Give a meaningful name to the web application.
We can create a data model, which can be exposed as a WCF Data Service, in three ways.
Here, I am going to use ADO.Net Entity model to create the data model. So to create an entity model, do the following:
Since we have tables in the SQL Azure database, we are going to choose the option Select from database.
Select the tables, views and stored procedures from the database that you want to make part of your data model.
The first thing we need to do is create a proxy of the WCF Data Service for the Windows 7 phone. To do this, we run a proxy-generation command; an explanation of the command follows.
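The exact invocation is not shown here; DataSvcUtil is the tool typically used for this, and a representative call (the service address and file name are assumptions) looks like this:

DataSvcUtil.exe /uri:http://localhost:1234/WcfDataService1.svc /out:SchoolServiceProxy.cs /dataservicecollection /version:2.0

The /dataservicecollection switch generates DataServiceCollection-based types that support data binding, and /version:2.0 targets version 2.0 of the data service protocol.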
Create a Windows 7 phone application. Open Visual Studio and select the Windows Phone Application project template from the Silverlight for Windows Phone tab.
Create a list box; we will bind the data to this list box. On the click event of the button, the data will be bound to the list box.
Add the required namespaces.
Now, on the click event of the button, we need to call the WCF Data Service and bind the list box.
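A minimal sketch of that click handler follows. The proxy namespace SchoolServiceProxy, the context class SchoolEntities, the entity set People, the service address and the control names are all assumptions based on the School sample database, not the article's actual listing.

using System;
using System.Data.Services.Client;   // DataServiceCollection<T>
using System.Windows;
using SchoolServiceProxy;            // assumed namespace of the generated proxy

public partial class MainPage
{
    // Assumed address of the WCF Data Service.
    private readonly Uri serviceUri = new Uri("http://localhost:1234/WcfDataService1.svc");
    private DataServiceCollection<Person> people;

    private void button1_Click(object sender, RoutedEventArgs e)
    {
        var context = new SchoolEntities(serviceUri);
        people = new DataServiceCollection<Person>(context);

        // The phone only supports asynchronous calls; bind once the load completes.
        people.LoadCompleted += (s, args) => listBox1.ItemsSource = people;
        people.LoadAsync(new Uri("/People", UriKind.Relative));
    }
}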
The output will appear as follows.
If we host the WCF Data Service on a Windows Azure Web Role or AppFabric, then both the data and the service will be completely in the cloud. In the next article, we will host the WCF Data Service as a Windows Azure web role to turn our existing application into a complete tryst of the cloud and the Windows 7 phone.
|
http://dotnetslackers.com/articles/net/Tryst-of-SQL-Azure-ODATA-and-Windows-7-Phone.aspx
|
crawl-003
|
en
|
refinedweb
|
This manual page is intended as a reference document only. For a more thorough description of make and makefiles, please refer to "PMake - A Tutorial".
The options are as follows:
There are six different types of lines in a makefile: file dependency specifications, shell commands, variable assignments, include statements, conditional directives, and comments.
In general, lines may be continued from one line to the next by ending them with a backslash ('\'). The trailing newline character and initial whitespace on the following line are compressed into a single space.
If the first or first two characters of the command line are '@' and/or '-', the command is treated specially. A '@' causes the command not to be echoed before it is executed. A '-' causes any non-zero exit status of the command line to be ignored.
Any white-space before the assigned value is removed; if the value is being appended, a single space is inserted between the previous contents of the variable and the appended value.
Variables are expanded by surrounding the variable name with either curly braces or parentheses and preceding it with a dollar sign ('$'). If the variable name contains only a single letter, the surrounding braces or parentheses are not required. The four different classes of variables (in order of increasing precedence) are:

Environment variables
        Variables defined as part of make's environment.
Global variables
        Variables defined in the makefile or in included makefiles.
Command line variables
        Variables defined as part of the command line.
Local variables
        Variables that are defined specific to a certain target. The seven local variables are as follows:
.ALLSRC
        The list of all sources for this target; also known as '>'.
.ARCHIVE
        The name of the archive file.
.IMPSRC
        The name/path of the source from which the target is to be transformed (the ``implied'' source); also known as '<'.
.MEMBER
        The name of the archive member.
.OODATE
        The list of sources for this target that were deemed out-of-date; also known as '?'.
.PREFIX
        The file prefix of the file, containing only the file portion, no suffix or preceding directory components; also known as '*'.
.TARGET
        The name of the target; also known as '@'.
The shorter forms '$@', '$!', '$<', '$%', '$?' and '$*' are permitted for backward compatibility with historical makefiles and are not recommended. The six variables '$(@D)', '$(@F)', '$(<D)', '$(<F)', '$(*D)' and '$(*F)' are permitted for compatibility with AT&T System V UNIX makefiles and are not recommended.
Four of the local variables may be used in sources on dependency lines because they expand to the proper value for each target on the line. These variables are '.TARGET', '.PREFIX', '.ARCHIVE' and '.MEMBER'.
In addition, make sets or knows about the following variables:
$
        A single dollar sign; i.e., '$$' expands to a single dollar sign.
.MAKE
        The name that make was executed with.
.CURDIR
        A path to the directory where make was executed.
MAKEFLAGS
        The environment variable 'MAKEFLAGS' may contain anything that may be specified on make's command line. Anything specified on make's command line is appended to the 'MAKEFLAGS' variable, which is then entered into the environment for all programs which make executes.
Variable expansion may be modified to select or modify each word of the variable (where a ``word'' is a white-space delimited sequence of characters). The general format of a variable expansion is as follows:

        ${variable[:modifier[:modifier...]]}

Each modifier begins with a colon and one of the following special characters. The colon may be escaped with a backslash ('\').
Variable expansion occurs in the normal fashion inside both old_string and new_string, with the single exception that a backslash ('\') is used to prevent the expansion of a dollar sign ('$'), not a preceding dollar sign as is usual.
Conditional expressions are also preceded by a single dot as the first character of a line. The possible conditionals are as follows:
.undef variable
        Un-define the specified global variable. Only global variables may be un-defined.
.if [!]expression [operator expression ...]
        Test the value of an expression.
.ifdef [!]variable [operator variable ...]
        Test the value of a variable.
.ifndef [!]variable [operator variable ...]
        Test the value of a variable.
.ifmake [!]target [operator target ...]
        Test the target being built.
.ifnmake [!]target [operator target ...]
        Test the target being built.
.else
        Reverse the sense of the last conditional.
.elif [!]expression [operator expression ...]
        A combination of '.else' followed by '.if'.
.elifdef [!]variable [operator variable ...]
        A combination of '.else' followed by '.ifdef'.
.elifndef [!]variable [operator variable ...]
        A combination of '.else' followed by '.ifndef'.
.elifmake [!]target [operator target ...]
        A combination of '.else' followed by '.ifmake'.
.elifnmake [!]target [operator target ...]
        A combination of '.else' followed by '.ifnmake'.
.endif
        End the body of the conditional.
The operator may be any one of the following:

||
        Logical OR.
&&
        Logical AND; of higher precedence than '||'.

As in C, make will only evaluate a conditional as far as is necessary to determine its value. Parentheses may be used to change the order of evaluation. The boolean operator '!' may be used to logically negate an entire conditional. It is of higher precedence than '&&'.
The value of expression may be any of the following:
defined
        Takes a variable name as an argument and evaluates to true if the variable has been defined.
make
        Takes a target name as an argument and evaluates to true if the target was specified as part of make's command line or was declared the default target (either implicitly or explicitly, see '.MAIN') before the line containing the conditional.
empty
        Takes a variable, with possible modifiers, and evaluates to true if the expansion of the variable would result in an empty string.
exists
        Takes a file name as an argument and evaluates to true if the file exists. The file is searched for on the system search path (see '.PATH').
target
        Takes a target name as an argument and evaluates to true if the target has been defined.
Expression may also be an arithmetic or string comparison, with the left-hand side being a variable expansion. The standard C relational operators are all supported, and the usual number/base conversion is performed. Note, octal numbers are not supported. If the right-hand value of a '==' or '!=' operator begins with a quotation mark ('"'), a string comparison is done between the expanded variable and the text between the quotation marks. If no relational operator is given, it is assumed that the expanded variable is being compared against 0.
When make is evaluating one of these conditional expressions and it encounters a word it doesn't recognize, either the ``make'' or ``defined'' expression is applied to it, depending on the form of the conditional. If the form is '.ifdef' or '.ifndef', the ``defined'' expression is applied. Similarly, if the form is '.ifmake' or '.ifnmake', the ``make'' expression is applied.
If the conditional evaluates to true, the parsing of the makefile continues as before. If it evaluates to false, the following lines are skipped. In both cases this continues until a '.else' or '.endif' is found.
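An illustrative fragment (the variable and values are arbitrary, not taken from this manual page):

.if defined(DEBUG)
CFLAGS = -g
.else
CFLAGS = -O2
.endif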
|
http://www.fiveanddime.net/man-pages/pmake.1.html
|
crawl-003
|
en
|
refinedweb
|
#include <cx/DataAccess.h>
char *cxFilenameExpand(const char *str)
subroutine cxFilenameExpand(str, expanded) character*(*) str, expanded
This function will also accept embedded strftime(3) time format codes. For example, %m%d%y.dat outputs a name with the current month, day, and year. There is one change from the standard codes - %n is replaced by an index instead of a newline. The index is an integer that can be set with cxFilenameIndexSet(3E) or cxFilenameIndexIncrement(3E).
|
http://www.nag.co.uk/visual/IE/iecbb/DOC/html/unix-ref/man3/cxfilenameexpand.htm
|
crawl-003
|
en
|
refinedweb
|
May 20th 2010, 14:29 by A. S. | NEW YORK.
To be fair, it's not so obvious that including the last two years of market data is appropriate when making long-term projections. If you have ten years of data and include the last two years you are assuming a major financial crisis will occur every decade. It also assumes that equities will no longer offer a return above the risk-free rate. Felix Salmon anticipates a zero equity premium in the future, with very high volatility. Based on those two assumptions it does not make sense to hold any equity in your portfolio.
That is possible, but unlikely over the very long term. A zero long-term equity premium assumes firms in most industries will not be very productive or profitable for decades. Also equities (which reflect future dividends and capital gains) are inherently riskier than Treasuries (at least for a government that is unlikely to default or hyperinflate). Equity prices must ultimately reflect and compensate investors for that risk or no one would hold them in their portfolio. Empirical evidence has repeatedly shown, over the long-term, that riskier assets do have higher average returns than less volatile assets. It is important to remember the equity premium is a risk premium, risk being the operative word.
While it is reasonable to expect a positive long-term equity premium, there exists a good chance equities will not perform as well in the future as they had in the past. The expected equity premium should be positive, but perhaps lower than 5%. But how do you calculate what it should be?
Merely ignoring the last few years of data (as I hear some are doing) is a slippery slope. It sets the precedent of cherry-picking data so that you get a risk premium that makes your projections look as good as possible. Even truly "sensible judgment" can be corrupted when the music starts up again in the next bubble. For now it remains a difficult question. Hopefully in time, more post-crisis data will provide some answers.
If memory serves, capital gains were already taxed at a lower rate even before the Bush tax cuts. Not as low, but still lower than income (at least in my income bracket at the time).
But I have to agree with you that the current tax structure is set up to create exactly what we have: a culture of consumption over savings. Too bad it is not viable in the long (or even medium) term. Gonna be ugly when it blows.
jouris,
I think it was the Bush tax cut of 2001 that reduced long-term (1 yr + 1 day) capital gains to 15%, and likewise "qualified" dividends.
If one has a Roth IRA, neither is taxed at all. A 401(k) is taxed at the rate of one's income when it is withdrawn, dividends and cap gains included.
Funny thing is, if you save in a bank you are taxed at your marginal rate. (Deposit $100 on Dec 29th 2009 and your 2010 interest will be taxed at your marginal 2010 tax rate)
Doesn't bode well for saving.
Local bank here bought back shares at $30, a few years later (2008)they were forced to sell to another bank for about $5. So be wary of buybacks.
Regards
David Merkel -- Neither of the above. If you want the most condensed form, look at the graph at the bottom of p. 61, which is for 1/1/10:
He estimates FCFE for the past year as dividends + money spent on share buybacks = $40.38 for a holder of the S&P 500 index. He estimates that FCFE will grow at 21% in 2010, followed by 4 years at 4%, followed by infinite years at the T-bond rate of 3.84%.
These flows of FCFE yield the index value of $1115.10 with a discount rate of 8.20%, which he concludes is the long-term growth rate. Subtracting the risk-free rate of 3.84% (in my previous post I was looking at his risk-free rate for the wrong year), he arrives at a risk premium of 4.36% for 1/1/10.
(P.S. I saw Felix Salmon's link to your blog entry from July, referring to Eric Falkenborg's book. My impressions of Damodaran having the most thorough analysis was formed before that book came out; I hope to look at his analysis and yours soon, and maybe I will revise my opinion.)
Across the Street -- Damodaran has always been a little too facile for me in his ability to judge tough valuation questions. But here, check one thing in his work. Does he use time- or dollar-weighted returns? If time-weighted, which is common, throw the analysis out, because what the market earns on the whole is dollar-weighted returns.
hedgie, doesn't it seem at least possible that there is a reason that the mantra is now capital gains? To wit, that dividends are taxed as income, while capital gains are taxed at a much lower rate. So, given a choice between a dividend and a capital gain, why not pick the latter, and get the tax break?
Eventually, I expect, we will figure out that income is income -- and if we are going to tax it, then we ought to be agnostic about where it came from. If that drops the returns (read subsidies) to capital, so be it.
doug374:
There should be a (somewhat) stable equilibrium. Investors chase the higher returns that the risk premium gives, which reduces the risk premium. But it doesn't reduce the risk; if anything, it raises it. Now they're taking the risk, but getting less return. In a world where the investors are not all herd animals, some should decide that the risk isn't worth the smaller extra returns, and should move to safer investments.
That's the theory. Your mileage may vary.
When things are going badly, predictions are too gloomy. Just don't forget that when things are going well, they will be too rosy. Is it really too hard to remember that ? Even for our regulators, who are paid and given independence to maintain perspective ?
"To be fair, it's not so obvious that including the last two years of market data is appropriate when making long-term projections. If you have ten years of data and include the last two years you are assuming a major financial crisis will occur every decade."
Then again, one could also argue that the past 10 years are the only ones that count if the US and the rest of the world are heading towards becoming a huge Japan.
The equity premium is low for two reasons.
First, the desperation for return (to meet future liabilities) has made the return-for-risk very low. Investors are placing money in equities in the belief that the golden run of stocks from 1980-2000 can be repeated. Many institutional investors have no choice, as taking a relatively safe rate of return will lead to certain ruin, while the riskier assets offer the possibility of salvation. The plight of public pensions is even more dismal; they have assumed return schedules that handicap their asset allocation. Any move by public pensions away from equity triggers a vast regulatory system that would mandate higher tax-funded contributions.
Secondly, the success of index funds and efficient-market theory has essentially wiped out the risk-adjusted gains of diversification. Now that indexing and index-like strategies are common, there is very little additional reward to owning a broad stock portfolio, as there are many other investors willing to do the same. Like all investing strategies, the success of indexing is inversely correlated with its popularity. Stock indexing is a play on systematic equity risk, otherwise known as equity premium, and as indexing grows in popularity, equity premium must inevitably suffer.
Final note: I believe the author is incorrect in stating that an equity premium will exist as long as stock companies are profitable. An equity premium can be sent to near zero regardless of profitability if the investors are sufficiently desperate to assume large amounts of risk for the slightest hint of additional return. This appears to be the case today.
The equity risk premium assumes a strict order of priority in event of failure. In bankruptcy, the equity would be cleaned out and the creditors would be satisfied in order of precedence. However, under US bankruptcy laws as they have evolved, the equity holders have more control and often are not cleaned out. Much of the debt in highly leveraged firms is in fact quasi-equity and earns an equity premium. So it has become difficult to extricate the equity risk premium over corporate debt returns.
Why is there any value to GM common? It was so far under water, most of the bondholders took a severe haircut.
Aswath Damodaran has the best-justified numbers for Equity Risk Premiums, so I trust him. His preferred measure of the equity risk premium was 4.56% as of February 1.
For details, see his paper on "Equity Risk Premiums: the 2010 edition". He derives his number on pp. 58-68, and argues for using it on pp. 78-83. Note also that he measures the risk-free rate at 2.21%, for a total implied growth rate of 6.77%.
Will, I don't know if that's too philosophical, but it's an interesting thought.
Competition must tend to reduce the profitability of the businesses which compete and globalisation tends to increase competition across all economic activity so it is to be expected that it will be increasingly difficult to achieve even medium term profitability which is consistent with a significant risk premium unless there is some protection from competition.
Excluding mercantilist state assistance, there are a number of possibilities. Mineral resource companies will sometimes enjoy extended periods of the profits of scarcity, and some companies with deposits that cost little to mine will average high returns over a long period, including the dips in returns that flow from slumps in demand or new production brought into the market by high prices. Oligopolies like the Australian banks or the major supermarket owners should be able to enjoy profitability which implies a substantial risk premium in the returns received by the original investors. Mere inventiveness and clever entrepreneurialism is rarely going to offer more than a short period of high profitability. Where patent protection relates to some invention which is truly without alternatives for a valuable innovation, there could be 20 years of superior returns, and there may be circumstances where the invention has a modest niche that it is not practicable or not economic for anyone else to seek to fill. That is not likely to be possible for the major activities of large companies.
Since most investment takes place well after the original investment for setting up the manufacturing, mining or other business, and may, when it involves widespread PERs of 20 or more, entail many years of low real returns, the lesson would appear to be that timing is extremely important. It is commonly said that timing the market does not work as an investment strategy, but that may not be correct, even in the sense of timing the whole market and not just particular stocks, for those seeking decent risk premiums. Certainly regular investment of equal amounts in a basket of stocks which replicated the equities indexes would not realise much of a risk premium in the globalised competitive economy.
Clearly there is plenty of opportunity for patient contrarians who don't necessarily do much research or stock picking to take a good share of the returns on equity which exceed the risk free rate. Fortunately some of those who don't make such returns but contribute to the winning investors' booty get their rewards from the pleasures of taking risk for which others go to casinos or racecourses.
It is not quite clear how changing demographics, including in particular the aging of most societies, will affect the returns on equity. It is possible that the elderly will provide neither the capital nor the individual effort to produce the innovation which escapes, for a time, the equalising effects of competition; but it is also possible that the savings of the elderly could fund both enterprise and free-spending consumption that could underpin the profits of the providers.
Is it too philosophical to argue that, since equities represent the underlying growth of value in the economy, it is not possible for bonds to match equities indefinitely, since the net debt would then grow?
"During the post-war era equity returns have been positive."
Well, up until the late 1990's.
This was due to the fact that the US was the only country standing after WWII. See Kindleberger's "A Financial History of Western Europe" for post WWII information.
Globalization of industry and especially finance has had an effect on US corporations.
The change in the way people invest also has had an impact. Dividends were required in earlier eras, now the mantra is capital gains.
Also the changes in pay - using stock options - for management and workers. Companies buy back shares that are dumped by management and workers in order to boost EPS. This transfers the profits of the company from the shareholder to management and workers, and the shareholder still gets zero or a puny dividend.
Who knows....
Perhaps the S&P500 or the Dow is a giant bell-shaped curve, and we are now on the right side of the curve.
Small companies are a gamble, but if one does their homework, capital gains could be made. By the time the average investor learns of the small-cap company, it is now a mid or large cap and most of the capital gains have been made. (Cisco, Microsoft, Oracle, WaMu, WalMart, etc.)
Regards
I understand that the equity premium is the compensation that investors receive for investing in equities instead of government bonds, but if every investor dives into stocks to take advantage of this premium, couldn't this have the effect of driving up the price of stocks relative to government bonds, thus taking away much of this premium? Supply and demand remains immutable, and if everyone wants to hold stocks, why should there be a premium?
And don't worry Doug, all the professionals are too stupid for stock picking too.
Not that I'd give up index funds. I'm too unwealthy, too busy and too stupid for stock picking.
Well, excluding the last two years is no more arbitrary than a 10-year term of data but I see the blogger's point. I wonder if index funds aren't partly responsible for some convergence. It seems a little silly at first to say that the equity premium which, at least for undergraduates at Emory in the 90s, was defined as the extra overall return you expect in exchange for higher risk, has gone away because it turned out to be unreliable. But if firms buy equities more or less economy wide then the most significant risk is cyclical and cyclical factors affect bond quality, too.
|
http://www.economist.com/blogs/freeexchange/2010/05/finance_0
|
crawl-003
|
en
|
refinedweb
|
Reading: Deitel & Deitel, Chapter 3, sections 3.1 - 3.11
Functions
The programs that we have seen so far involve only a single function, main, but any program of any size will involve multiple functions. Beginning programmers have a tendency to put all of their code in main, but this quickly leads to a large function which is hard to read and hard to debug. Good programs consist of lots of small functions, each of which usually does only one thing.
Here is a small program which demonstrates the use of function calls
in C or C++.
 1  #include <iostream>
 2  using namespace std;
 3  int square(int);
 4  int main()
 5  {
 6      int x, xsquared;
 7      cout << "Enter a number: ";
 8      cin >> x;
 9      xsquared = square(x);
10      cout << "The square of " << x << " is " << xsquared << endl;
11      return 0;
12  }
13  int square(int n)
14  {
15      int nsquared;
16      nsquared = n * n;
17      return nsquared;
18  }
3 int square(int);
This is called a function prototype. If a function is to be called before it is defined, there must be a prototype. This tells the compiler that there will be a function defined later called square, which takes one argument of type int, and returns an integer. An argument is a value which is passed from the calling function to the called function. There is no limit on the number or type of arguments that a function can have. The return type of a function can be any type which has been defined. In addition, a function can be of type void which means that it does not return anything. Notice that a function cannot return more than one value.
9 xsquared = square(x);
This is where the function square is called or invoked. When a function is called, processing of the calling function (main in this case) stops, and processing is transferred to the code of the called function (square in this case). The next statement executed is the first statement of the called function.
The function square
is passed one argument, in this case, the variable x.
The function square will return an integer value, which is
copied to the variable xsquared.
13  int square(int n)
14  {
15      int nsquared;
16      nsquared = n * n;
17      return nsquared;
18  }
return-value-type function-name (parameter list)
{
declarations and statements
}
The variable n is a formal parameter. Ordinarily, when a function is called, the value of each argument, called the actual parameter, is computed and this value is copied to the formal parameter. (This parameter passing method is known as call-by-value; another method, call-by-reference, is described later in this worksheet.)
Line 17 is a return statement. Any function other than a
void function must have at least one return statement, in which
a value is returned. The keyword return must be followed by
an expression which is of the same type as the type of the
function. Functions of type void may also have one or more
return statements which are not followed by any expression.
Once a return statement is encountered in a function, processing
returns to the calling function. Any code which directly follows
a return statement is called dead code because it will never be
executed; you should never have dead code in your programs.
Note: since
return can be followed by an expression,
the
square function could be written more briefly:
int square(int n) { return n * n; }
There is no limit to the number and type of arguments that a
function can take. A void function called fctn which takes
three arguments, two integers and a floating point number, would have
a function prototype like this:
void fctn(int, int, double);
and when it is actually declared, the first line of the declaration would look like this:
void fctn(int n1, int n2, double f1)
When calling a function, the order of the actual parameters determines which values get copied to the formal parameters. The value of the first argument (first actual parameter) is copied to the first formal parameter, the value of the second argument is copied to the second formal parameter, and so on. The compiler will check to confirm that the number and type of actual parameters conform to the number and type of formal parameters. It also checks to make sure that the number and type of formal parameters in the function definition agrees with the function prototype.
Exercise 1: Write a function sum which takes three arguments, all double precision floating point numbers, and returns the sum of these three. Write a main which prompts the user to enter three floating point numbers and passes them to sum, which will return a value to main. main should print this value on the screen.
Scope of variables
Variables defined inside of a function are local to that function. No other function knows about them or can use them. An attempt to access a variable declared inside the function square, such as nsquared, from outside that function would result in a compiler error.
It is possible to declare variables outside of any function. These
variables are called global, and all functions defined after
the declaration of a global variable can use it.
Here is an example:
#include <iostream>
using namespace std;

int x;    // a global variable

int main()
{
    ...
    x = 17;    // main can access the global variable x
    ...
}

void SomeFunction()
{
    x = 46;    // any other function can also access x
}
Global variables should be used sparingly because they make programs harder to debug and modify. If many functions can modify a global variable, it is easy to lose track of which functions are doing what in a large program.
Two or more functions can have local variables of the same name. If a function declares a local variable which has the same name as a global variable, this declaration masks the global variable and any reference inside that function to the variable of that name would be assumed to be the local variable, not the global variable. In the above example, suppose SomeFunction had declared a local variable called x inside it. Now there are two variables with the name x, one global and one local. In the statement x = 46; the reference to x would be to the local variable, not the global variable.
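Here is a small sketch of that masking, consistent with the description above:

#include <iostream>
using namespace std;

int x = 17;              // a global variable

void SomeFunction()
{
    int x = 46;          // this local x masks the global x
    cout << x << endl;   // prints 46, the local variable
}

int main()
{
    SomeFunction();
    cout << x << endl;   // prints 17; the global was never touched
    return 0;
}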
Call-by-value vs. Call-by-reference
Parameter passing in C and C++ is normally call-by-value.
When a function is called, and one or more variables are passed
as arguments, the called function makes copies of the values of
these arguments. This means that ordinarily a called function cannot change
the value of a variable in the calling function. Here is an example:
#include <iostream>
using namespace std;

void Silly(int);    // function prototype

int main()
{
    int x;

    x = 17;
    Silly(x);
    cout << x << endl;
    return 0;
}

void Silly(int n)
{
    n = 46;
}
This program will print 17, not 46, because in Silly a copy of the value of x is made and assigned to n, so changing the value of n in the called function has no effect on the value of the actual parameter in the calling function. Note that the variable n in Silly could have been called x, without changing the functioning of the program.
C++ (but not C) allows a different type of parameter passing called
call-by-reference. In call-by-reference the called function,
instead of making a copy of the value of a parameter, identifies
the formal parameter with the address of the
actual parameter. This means that the called function can change the
value of a variable in the calling function. To make a parameter
a reference parameter, precede its name with an
&. You must also
follow the type in the argument list of the function prototype with
an
&. Here is an example:
#include <iostream>
using namespace std;

void Silly(int &);    // function prototype

int main()
{
    int x;

    x = 17;
    Silly(x);
    cout << x << endl;
    return 0;
}

void Silly(int & n)
{
    n = 46;
}

This program will print 46.
Exercise 2: What would the following program print? (answer on last page)
#include <iostream>
using namespace std;

int y,z;
int function(int, int &);

int main()
{
    int v,w,x,y;

    w = 1;
    x = 2;
    y = 3;
    z = 4;
    v = function(w,x);
    cout << v << " " << w << " " << x << " " << y << " " << z << endl;
    return 0;
}

int function(int m, int & n)
{
    int w;

    w = 5;
    m = 6;
    n = 7;
    y = 8;
    z = 9;
    return 10;
}
Arrays as function arguments
Arrays can be used as arguments to functions, but array parameter passing is always call-by-reference; in other words, the function can change individual values in an array and the changes will be retained when control returns to the calling function.
When arrays are passed as arguments, it is not necessary to pass in
the array size; rather, the fact that a variable is an array is
signified by following the variable name with a pair of empty square
brackets (
[]). Here is an example:
 1  #include <iostream>
 2  using namespace std;
 3  void fctn(int[]);    // function prototype
 4  int main()
 5  {
 6      int a[5];
 7      for (int i = 0; i < 5; ++i)
 8          a[i] = i;
 9      fctn(a);
10      for (int i = 0; i < 5; ++i)
11          cout << a[i] << ' ';
12      cout << endl;
13      return 0;
14  }
15
16  void fctn(int z[])
17  {
18      z[1] = 345;
19      z[3] = 678;
20  }
The advantage of this method is that a single function can be passed arrays of different sizes, but the disadvantage is that the function does not know how large an array is. Often, the solution to this problem is to pass in another parameter to the function which gives the size of the array.
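For instance, the function above could be rewritten to take the size as a second argument:

#include <iostream>
using namespace std;

void fctn(int z[], int size);   // the size travels with the array

int main()
{
    int a[5];
    for (int i = 0; i < 5; ++i)
        a[i] = i;
    fctn(a, 5);                 // the same function works for any array size
    return 0;
}

void fctn(int z[], int size)
{
    for (int i = 0; i < size; ++i)
        cout << z[i] << ' ';
    cout << endl;
}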
Library functions
There are an enormous number of library functions for C and C++, far too many to exhaustively list here, but here is a small list of commonly used functions. Many of these require that you use additional include files. If so, these are listed as well.
Character Handling
int isalpha(char c) returns 1 if c is in the range A .. Z or a .. z, otherwise it returns 0.
int isdigit(char c) returns 1 if c is in the range 0 .. 9, otherwise it returns 0.
char tolower(char c) If c is an upper case letter, it returns the lower case version; otherwise it returns c.
char toupper(char c) If c is a lower case letter, it
returns the upper case version; otherwise it returns c.
Mathematics
#include <math.h>
double cos(double x) returns the cosine of x where x is measured in radians. There is a complete set of trig. functions like this.
double sqrt(double x) returns the square root of x.
double log(double x) returns the natural logarithm of x.
String Handling
strcpy(char s1[], char s2[]) copies the characters in s2 into s1 up to and including the first null character.
int strcmp(char s1[], char s2[]) returns a positive integer if s1 is lexically greater than s2 (i.e. would follow it alphabetically), a negative number if s1 is lexically less than s2, or zero if the two strings are the same up to the first null character.
strcat(char s1[], char s2[]) concatenates s2 onto the end of s1. For example if s1 is the string cat and s2 is the string bird, after calling this function, s1 would be the string catbird. Note that the size of the array s1 must be large enough to accommodate the longer string.
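A short example using all three functions:

#include <iostream>
#include <string.h>
using namespace std;

int main()
{
    char s1[80] = "cat";
    char s2[80];

    strcat(s1, "bird");                 // s1 is now "catbird"
    strcpy(s2, s1);                     // s2 is now "catbird" too
    if (strcmp(s1, s2) == 0)
        cout << "the strings are equal" << endl;
    return 0;
}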
Character Input
#include<iostream>
cin.getline(char s[], int max) reads a line of input from standard input (normally the keyboard) into the string s up to a maximum of max characters or until the user hits the Return key. This differs from
cin >> s; because the cin.getline
function includes spaces as well while
cin >> s; will read
characters into s only up until the first space. The size
of the array s should always be at least one larger than max
because a
\0 is appended onto the end of the string.
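For example:

#include <iostream>
using namespace std;

int main()
{
    char line[81];                 // one larger than max, for the '\0'

    cout << "Enter a line: ";
    cin.getline(line, 80);         // reads spaces too, unlike cin >> line
    cout << "You typed: " << line << endl;
    return 0;
}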
Exercise 3: It is instructive to see what is involved
in writing some of these library functions. Write your own version
of the functions strcpy and strcat, calling them
mystrcpy and mystrcat. Here is a main to test
your functions. Remember that you can assume that all strings are
terminated with a
'\0'.
#include <iostream>
using namespace std;

// put your function prototypes here

int main()
{
    char mystring[80];

    mystrcpy(mystring, "Programming");
    mystrcat(mystring, " is fun!");
    cout << mystring << endl;    // should print "Programming is fun!"
}
Random Numbers
Many applications, simulations in particular, require the use of
random numbers, and so there are a number of library functions which
generate random numbers. The function int rand() returns a random integer in the range 0 .. RAND_MAX each time that it is called.
Random number sequences are based on a seed, i.e. an initial
value. The default value of the seed is zero, and this means that you
will get the same sequence of random numbers each time that you run
your program. If you want a different sequence of random numbers each
time that you run the program, you must initialize the seed to a
different random value each time. The function that initializes the
seed is void srand(int). One way to do this is by using the
current time, which would be different each time the program is run.
To do this, include this statement in your program before you start calling
rand()
srand((unsigned)time(NULL));
If you use the time function, include the header file time.h.
Here is a short program which displays ten random numbers.
#include <iostream>
#include <time.h>
using namespace std;

int main()
{
    int n;

    srand((unsigned)time(NULL));
    for (int i = 0; i < 10; i++)
    {
        n = rand();
        cout << n << endl;
    }
    return 0;
}
Most applications need a random number in a defined range. For
example, a dice throwing simulation would require a random number
in the range 1 .. 6. To convert from the return value of rand()
to an integer in this range, use the modulo function. Any number
modulo 6 will return a value in the range 0 .. 5, so the following
statement will assign a random value in the range 1 .. 6 to x
each time that it is called.
x = rand() % 6 + 1;
Answer to Exercise 2
10 1 7 3 9
|
http://www.cs.rpi.edu//~lallip/cs2/wksht2/
|
crawl-003
|
en
|
refinedweb
|
The QDBusConnectionInterface class provides access to the D-Bus bus daemon service. More...
#include <QDBusConnectionInterface>
Inherits QDBusAbstractInterface.
This class was introduced in Qt 4.2.
The QDBusConnectionInterface class provides access to the D-Bus bus daemon service.
Returns the unique connection name of the primary owner of the name name. If the requested name doesn't have an owner, returns a org.freedesktop.DBus.Error.NameHasNoOwner error.
This signal is emitted by the D-Bus server whenever a service ownership change happens in the bus, including apparition and disparition of names.
This signal means the application oldOwner lost ownership of bus name name to application newOwner. If oldOwner is an empty string, it means the name name has just been created; if newOwner is empty, the name name has no current owner and is no longer available.
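For example, an application could watch these changes like this (the receiver object and slot name are placeholders, not part of this class's documentation):

#include <QDBusConnection>
#include <QDBusConnectionInterface>
#include <QObject>

void watchOwnerChanges(QObject *receiver)
{
    // The bus daemon's interface object is owned by the connection; do not delete it.
    QDBusConnectionInterface *iface = QDBusConnection::sessionBus().interface();
    QObject::connect(iface,
                     SIGNAL(serviceOwnerChanged(QString,QString,QString)),
                     receiver,   // a QObject with a matching slot (placeholder)
                     SLOT(onServiceOwnerChanged(QString,QString,QString)));
}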
|
http://doc.qt.nokia.com/4.5-snapshot/qdbusconnectioninterface.html
|
crawl-003
|
en
|
refinedweb
|
The QDBusContext class allows slots to determine the D-Bus context of the calls. More...
#include <QDBusContext>
This class was introduced in Qt 4.3.
|
http://doc.qt.nokia.com/4.5-snapshot/qdbuscontext.html#calledFromDBus
|
crawl-003
|
en
|
refinedweb
|
#include <itkImageAdaptor.h>
Collaboration diagram for itk::ImageAdaptor< TImage, TAccessor >:
ImageAdaptors are templated over the ImageType and over a functor that will specify what part of the pixel can be accessed.
The basic aspects of this class are the types it defines.
Image adaptors can be used as intermediate classes that allow the sending of an image to a filter, specifying what part of the image pixels the filter will act on.
The TAccessor class should implement the Get and Set methods as static methods. These two will specify how data can be put and get from parts of each pixel. It should define the types ExternalType and InternalType too.
Definition at line 47 of file itkImageAdaptor.h.
|
http://www.itk.org/Doxygen16/html/classitk_1_1ImageAdaptor.html
|
crawl-003
|
en
|
refinedweb
|
#include <itkImage.h>
Inheritance diagram for itk::Image< TPixel, VImageDimension >:
Images are templated over a pixel type (modeling the dependent variables), and a dimension (number of independent variables). The container for the pixel data is the ImportImageContainer.
Within the pixel container, images are modeled as arrays, defined by a start index and a size.
Pixels can be accessed directly using the SetPixel() and GetPixel() methods, or can be accessed via iterators. Begin() creates an iterator that can walk a specified region of a buffer.
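A minimal sketch of direct pixel access (a 2D float image and the region size are assumptions for illustration):

#include "itkImage.h"

int main()
{
  typedef itk::Image<float, 2> ImageType;

  ImageType::Pointer image = ImageType::New();

  ImageType::IndexType start = {{0, 0}};     // start index of the region
  ImageType::SizeType  size  = {{64, 64}};   // region size in pixels

  ImageType::RegionType region;
  region.SetIndex(start);
  region.SetSize(size);

  image->SetRegions(region);
  image->Allocate();

  ImageType::IndexType pixelIndex = {{3, 7}};
  image->SetPixel(pixelIndex, 42.0f);
  float value = image->GetPixel(pixelIndex);  // value is now 42.0f

  return 0;
}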
Definition at line 80 of file itkImage.h.
|
http://www.itk.org/Doxygen16/html/classitk_1_1Image.html
|
crawl-003
|
en
|
refinedweb
|
#include <stdlib.h>
#include <grass/gis.h>
Definition at line 40 of file put_window.c.
References G__write_Cell_head3(), and G_fopen_new().
Referenced by G__make_location(), G__make_mapset(), G_put_window(), main(), and make_location().
write the database region
Writes the database region file (WIND) in the user's current mapset from region. Returns 1 if the region is written OK; returns -1 if not (no diagnostic message is printed). Warning: since this routine actually changes the database region, it should only be called by modules which the user knows will change the region. It is probably fair to say that under GRASS 3.0 only the g.region and d.zoom modules should call this routine.
Definition at line 32 of file put_window.c.
References G__put_window(), and getenv().
Referenced by make_mapset().
|
http://grass.osgeo.org/programming6/put__window_8c.html
|
crawl-003
|
en
|
refinedweb
|
The QDesignerPropertySheetExtension class allows you to manipulate a widget's properties which are displayed in Qt Designer's property editor. More...
#include <QDesignerPropertySheetExtension>
The QDesignerPropertySheetExtension class allows you to manipulate a widget's properties which are displayed in Qt Designer's property editor.
QDesignerPropertySheetExtension provides a collection of functions that are typically used to query a widget's properties, and to manipulate the properties' appearance in the property editor. For example:
QDesignerPropertySheetExtension *propertySheet = 0;
QExtensionManager *manager = formEditor->extensionManager();

propertySheet = qt_extension<QDesignerPropertySheetExtension*>(manager, widget);
int index = propertySheet->indexOf(QLatin1String("margin"));

propertySheet->setProperty(index, 10);
propertySheet->setChanged(index, true);

delete propertySheet;
Note that if you change the value of a property using the QDesignerPropertySheetExtension::setProperty() function, the undo stack is not updated. To ensure that a property's value can be reverted using the undo stack, you must use the QDesignerFormWindowCursorInterface::setProperty() function, or its buddy setWidgetProperty(), instead.
When implementing a custom widget plugin, a pointer to Qt Designer's current QDesignerFormEditorInterface object (formEditor in the example above) is provided by the QDesignerCustomWidgetInterface::initialize() function's parameter.
The property sheet, or any other extension, can be retrieved by querying Qt Designer's extension manager using the qt_extension() function. When you want to release the extension, you only need to delete the pointer.
All widgets have a default property sheet which populates Qt Designer's property editor with the widget's properties (i.e. the ones defined with the Q_PROPERTY() macro). But QDesignerPropertySheetExtension also provides an interface for creating custom property sheet extensions.
Warning: Qt Designer uses the QDesignerPropertySheetExtension to feed its property editor. Whenever a widget is selected in its workspace, Qt Designer will query for the widget's property sheet extension. If the selected widget has an implemented property sheet extension, this extension will override the default property sheet.
To create a property sheet extension, your extension class must inherit from both QObject and QDesignerPropertySheetExtension. Then, since we are implementing an interface, we must ensure that it's made known to the meta object system using the Q_INTERFACES() macro:
class MyPropertySheetExtension : public QObject,
        public QDesignerPropertySheetExtension
{
    Q_OBJECT
    Q_INTERFACES(QDesignerPropertySheetExtension)

public:
    ...
}
This enables Qt Designer to use qobject_cast() to query for supported interfaces using nothing but a QObject pointer.
In Qt Designer the extensions are not created until they are required. For that reason, when implementing a property sheet extension, you must also create a QExtensionFactory, i.e a class that is able to make an instance of your extension, and register it using Qt Designer's extension manager.
When a property sheet extension is required, Qt Designer's extension manager will run through all its registered factories, calling QExtensionFactory::createExtension() for each, until the first one that is able to create a property sheet extension for the selected widget is found. This factory will then make an instance of the extension. If no such factory can be found, Qt Designer will use the default property sheet. One way to provide such a factory is to create a new QExtensionFactory and reimplement the QExtensionFactory::createExtension() function:

QObject *ANewExtensionFactory::createExtension(QObject *object,
        const QString &iid, QObject *parent) const
{
    if (iid != Q_TYPEID(QDesignerPropertySheetExtension))
        return 0;

    if (MyCustomWidget *widget = qobject_cast<MyCustomWidget*>(object))
        return new MyPropertySheetExtension(widget, parent);

    return 0;
}
Or you can use an existing factory, expanding the QExtensionFactory::createExtension() function to make the factory able to create a property sheet extension as well:

QObject *AGeneralExtensionFactory::createExtension(QObject *object,
        const QString &iid, QObject *parent) const
{
    MyCustomWidget *widget = qobject_cast<MyCustomWidget*>(object);

    if (widget && (iid == Q_TYPEID(QDesignerPropertySheetExtension))) {
        return new MyPropertySheetExtension(widget, parent);
    } else {
        return 0;
    }
}
For a complete example using an extension class, see the Task Menu Extension example. See also QDesignerDynamicPropertySheetExtension, QExtensionFactory, QExtensionManager, and Creating Custom Widget Extensions.

Returns the property group for the property at the given index.
Qt Designer's property editor supports property groups, i.e. sections of related properties. A property can be related to a group using the setPropertyGroup() function. The default group of any property is the name of the class that defines it. For example, the QObject::objectName property appears within the QObject property group.
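For instance, reusing the propertySheet pointer from the example above, a property can be moved into another group (a minimal sketch; the "Layout" group name is only an illustration):

int index = propertySheet->indexOf(QLatin1String("margin"));
propertySheet->setPropertyGroup(index, QLatin1String("Layout"));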
See also indexOf() and setPropertyGroup().
Returns the name of the property at the given index.
See also indexOf().
Sets the value of the property at the given index.
Warning: If you change the value of a property using this function, the undo stack is not updated. To ensure that a property's value can be reverted using the undo stack, you must use the QDesignerFormWindowCursorInterface::setProperty() function, or its buddy setWidgetProperty(), instead.
See also indexOf(), property(), and propertyGroup().
Sets the property group for the property at the given index to group.
Relating a property to a group makes it appear within that group's section in the property editor. The default property group of any property is the name of the class that defines it. For example, the QObject::objectName property appears within the QObject property group.
See also indexOf(), property(), and propertyGroup().
If visible is true, the property at the given index is visible in Qt Designer's property editor; otherwise the property is hidden.
See also indexOf() and isVisible().
|
http://doc.qt.nokia.com/4.5-snapshot/qdesignerpropertysheetextension.html#count
|
crawl-003
|
en
|
refinedweb
|
#include <itkImageConstIterator.h>
Inheritance diagram for itk::ImageConstIterator< TImage >:
ImageConstIterator is a templated class to represent a multi-dimensional iterator. ImageConstIterator is templated over the type of the image to be iterated over.
ImageConstIterator is a base class for all the image iterators. It provides the basic construction and comparison operations. However, it does not provide mechanisms for moving the iterator. A subclass of ImageConstIterator must be used to move the iterator.
ImageConstIterator holds a reference to the image over which it is traversing.
ImageConstIterator assumes a particular layout of the image data. In particular, the data is arranged in a 1D array as if it were [][][][slice][row][col] with Index[0] = col, Index[1] = row, Index[2] = slice, etc.
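In practice the iterator is used through one of its subclasses; a minimal sketch (assuming a 2-D float image and the ImageRegionConstIterator subclass):

#include <itkImage.h>
#include <itkImageRegionConstIterator.h>

typedef itk::Image<float, 2> ImageType;

float SumPixels(const ImageType *image)
{
    // The subclass supplies the movement operations (GoToBegin, ++);
    // Get() reads the pixel the iterator currently points at.
    itk::ImageRegionConstIterator<ImageType> it(image, image->GetBufferedRegion());
    float sum = 0.0f;
    for (it.GoToBegin(); !it.IsAtEnd(); ++it)
        sum += it.Get();
    return sum;
}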
Definition at line 56 of file itkImageConstIterator.h.
|
http://www.itk.org/Doxygen16/html/classitk_1_1ImageConstIterator.html
|
crawl-003
|
en
|
refinedweb
|
import os, sys, time
servers = ['dev','admin','db1']
for s in servers:
cmd = 'scp /etc/hosts regular_user@%s:/etc/hosts' % s
print cmd
os.system(cmd)
I have written this script to copy my current HOSTS file to all my other servers.
However, I would like to do this from a regular user, not ROOT.
Since over-writing /etc/hosts takes root privelages, I would like to do SUDO. How can I put sudo inside that script?
This won't work, because it is permission denied to change /etc/hosts file.
cmd = 'sudo scp /etc/hosts regular_user@%s:/etc/hosts' % s
This question came from our site for professional and enthusiast programmers.
cat /etc/hosts | ssh otherhost "sudo sh -c 'cat >/etc/hosts'" will do the trick.

cat /etc/hosts | ssh otherhost "sudo sh -c 'cat >/etc/hosts'"

An equivalent that avoids the extra sh invocation by using tee instead:

< /etc/hosts ssh otherhost "sudo tee /etc/hosts > /dev/null"
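Wired into the original Python loop, that approach might look like this sketch (server names taken from the question; assumes key-based ssh auth and passwordless sudo for tee):

import os

servers = ['dev', 'admin', 'db1']
for s in servers:
    # The local file is fed to ssh's stdin; only tee runs under sudo
    # on the remote side, so no remote root login is needed.
    cmd = "ssh regular_user@%s 'sudo tee /etc/hosts > /dev/null' < /etc/hosts" % s
    print cmd
    os.system(cmd)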
You need to do the sudo on the remote host instead of locally. Obviously for this to work, your account on the remote host will need sudo permissions to run the relevant copy command. It would look something like this:
cmd = 'scp /etc/hosts regular_user@%s:/tmp/hosts' % s
os.system(cmd)
cmd = 'ssh regular_user@%s sudo cp /tmp/hosts /etc/hosts' % s
os.system(cmd)
You might find using a framework like fabric or a configuration management system like cfengine or puppet to be a better long term choice...
This is easily done using Paramiko (the native Python SSH client) rather than calling the ssh command.
There are many examples of Paramiko being used for scp, and to run commands with sudo, available on the web.
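A rough sketch of that approach (host name, credentials, and the staging path are placeholders; requires the paramiko package and sudo rights for the remote cp):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('dev', username='regular_user')

# Copy to a staging path, then move it into place with sudo.
sftp = client.open_sftp()
sftp.put('/etc/hosts', '/tmp/hosts')
sftp.close()

stdin, stdout, stderr = client.exec_command('sudo cp /tmp/hosts /etc/hosts')
print stdout.read()
client.close()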
What you probably want to do is turn on the suid bit on this file which should be owned by root. Then whenever a non-privileged user runs the script it will be running as superuser
The trouble here is that you are trying to copy a file to a remote server as a non-privileged user (using your login credentials with the scp command).
scp
In order to take advantage of sudo on the remote computer, you'd have to execute a command there to initiate the transfer. It might look something like this:
sudo
ssh regular_user@remote.computer sudo scp myuser@local.computer:/etc/hosts /etc/hosts
This essentially logs you into the remote computer as a regular user, then issues the sudo command to copy the file from your local computer to the remote one. The scp logic will look a little backwards, since it is being executed from the perspective of the remote host.
You might have to do some work to get ssh to accept passwords from your script, though. Especially since you are logging into a remote computer and telling it to log back into your local machine.
|
http://serverfault.com/questions/80277/how-to-copy-etc-hosts-to-all-my-machines/80279#80279
|
crawl-003
|
en
|
refinedweb
|
#include <Collection.hpp>
List of all members.
By default the Collection<T> does not own the TObjArray that is passed to it and will not delete it.
The exception is an empty collection or an assignment to it. In this case it implements copy-on-write semantics.
The TObjArray is deleted using a "delete" statement, so unless you set the kOwned bit, it shouldn't delete its contents.
Definition at line 23 of file Collection.hpp.
|
http://www-d0.fnal.gov/Run2Physics/working_group/data_format/caf/classcafe_1_1Collection.html
|
crawl-003
|
en
|
refinedweb
|
#include <BadLBNs.hpp>
#include <BadLBNs.hpp>
Inheritance diagram for cafe::BadLBNs:
If the .MonteCarlo flag is set to 1, the rejection is based on the luminosity block of the overlaid ZB event.
Configuration options:
Definition at line 26 of file BadLBNs.hpp.
Definition at line 12 of file BadLBNs.cpp.
References _badLBNs, _MC, _vars, cafe::Variables::add(), cafe::Config::get(), cafe::Config::getVString(), cafe::Processor::name(), and cafe::Processor::out().
[virtual]
Called for every event.
Reimplemented from cafe::Processor.
Definition at line 37 of file BadLBNs.cpp.
References _badLBNs, _MC, _vars, cafe::Event::getGlobal(), cafe::Event::getMCEventInfo(), TMBGlobal::lumblk(), and TMBMCevtInfo::overlaylumblk().
[private]
Definition at line 31 of file BadLBNs.hpp.
Referenced by BadLBNs(), and processEvent().
Definition at line 33 of file BadLBNs.hpp.
Definition at line 32 of file BadLBNs.hpp.
|
http://www-d0.fnal.gov/Run2Physics/working_group/data_format/caf/classcafe_1_1BadLBNs.html
|
crawl-003
|
en
|
refinedweb
|
Investment Basics - Course 101 - Stocks and ETFs Versus Other Investments
This is the first course in a series of 38 called "Investment Basics" - created by Professor Steven Bauer, a retired university professor and still a proactive asset manager and consultant / mentor.
Stocks & ETFs Versus Other Investments
Introduction:
We all have financial goals in life: to pay for college for our children, to be able to retire by a reasonable age, to buy and own the things we need. However, you must not discount the importance of "experience." Combine these with an understanding of how money flows and how businesses compete with one another, along with a dash of accounting knowledge, and you have all the mental tools needed to get started. Then it's a matter of discipline, practice and experience.
Prof's. Guidance: I will teach you all these things and more over the coming weeks. This is how the world of money operates, like it or not.
What Is a Stock?
Perhaps the most common misperception among new investors is that stocks are simply pieces of paper to be traded. This is simply not the case. In stock investing, trading is a means, not an end.
A stock is an ownership interest in a company. A business or company is started by a person or small group of people who put their money in, as seed capital investment. How much of the business each founder owns is a function of how much money each invested. At this point, the company is considered "private." Once a business reaches a certain size, the company may decide to "go public" and sell a chunk of itself to the investing public. This is how stocks are created, and how you can participate.
When you buy a stock, you become a business owner. Period. Over the long term, the value of that ownership stake will rise and fall according to the success of the underlying business. The better the business does, the more your ownership stake will be worth.
Prof's. Guidance: This is best measured by the "earnings" of the company. And the earnings are subject to many variables.
Why Invest in Stocks?
Stocks are but one of many possible ways to invest your hard-earned money. Why choose stocks instead of other options, such as bonds, rare coins, or antique sports cars, etc.? Quite simply, the reason that savvy investors invest in stocks is that they have historically provided the highest potential returns. And over the long term, no other type of investment tends to perform better.
On the downside, stocks tend to be one of the most volatile investments. This means that the value of stocks can drop in the short term. Sometimes stock prices may fall for a protracted period. For instance, those who put all their savings in stocks in early 2000 are probably still underwater today. Bad luck or bad timing can easily sink your returns, but you can minimize this by taking a long-term view and considering different investing approaches.
There's also no guarantee you will actually realize any sort of positive return. If you have the misfortune of consistently picking stocks that decline in value, you can obviously lose money.
Prof's. Guidance: That is why you are taking your time to learn. Of course, I think that by educating yourself and using the knowledge in these courses, you can make the risk acceptable relative to your expected reward. I will help you pick the right companies to own and help you spot the ones to avoid. Again, this effort is well worth it, because over the long haul, your money can work harder for you in equities than in just about any other investment.
ETFs (Exchange Traded Funds)
Exchange Traded Funds (ETFs) are very much like mutual funds. That is, they are baskets of stocks that are bought and sold, just like stocks. They differ from mutual funds in that shares of ETFs can be traded at any time while the host stock market is open.
Many ETFs are based on an Index, Sector, Industry Group, Country, Commodity, etc., making them exchange traded (specialty) funds.
For example: An Index fund is a passively managed collection of stocks that tracks an Index. One of the more common Index funds is one that closely matches the holdings and performance of the Standard & Poor's 500 - Index (S&P 500).
Prof's. Guidance: I recommend sticking with the big name ETF firms such as, iShares, PowerShares, or ProShares - that is if they have an ETF of interest.
Other Basics - Mutual Funds. Because a typical mutual fund, run by professional financial analysts, holds 50 - 100 or more stocks, it would be very unlikely that all of those stocks become worthless.

The flip side is that this diversification comes at a cost, often more than you may be aware of. The professionals running mutual funds do not do so for free. They charge fees, and fees eat into returns.

Plus, the more money you have invested in mutual funds, the larger the absolute value of fees you will pay every year. For instance, paying 1%, 2% or even 3% in annual fees adds up quickly as a portfolio grows. Buying individual stocks once carried high commissions as well, but with the advent of $10 (or less) per-trade commissions on stocks, this is no longer the case.
Just as picking the wrong stock is a risk, so is picking the wrong fund. What if the group of people you selected to manage your investment does not perform well? Just like stocks, there is no guarantee of a return in mutual funds.
It's also worth noting that investing in a mix of mutual funds and stocks can be a perfectly prudent strategy. Stocks versus funds (or any other investment vehicle) is really a personal decision.
Bonds. At their most basic, bonds are loans. When you buy a bond, you become a lender to an institution, and that institution pays you interest. As long as the institution does not go bankrupt, it will also pay back the principal on the bond, but no more than the principal.
There are two basic types of bonds: government bonds and corporate bonds. U.S. government bonds (otherwise known as T-bills or Treasuries) are issued and guaranteed (in the US) by Uncle Sam. They typically offer a modest return with low risk. Corporate bonds are issued by companies and carry a higher degree of risk (should the company default) as well as return.
Bond investors must also consider interest rate risk. When prevailing interest rates rise, the market value of existing bonds tends to fall. (The opposite is also true.) The only way to alleviate interest rate risk is by holding the bond to maturity. Investing in corporate bonds also tends to require just as much homework as stock investing, yet bonds generally have lower returns.
Given their lower risk, there is certainly a place for bonds in most portfolios - though be wary of owning bond mutual funds - but their relative safety comes with the price of lower expected returns compared with stocks over the long term.
Real Estate. Most people's homes are indeed their largest investments. We all have to live somewhere, and a happy side effect is that real estate tends to appreciate in value over time. But if you are going to use real estate as a true investment vehicle by buying a second home, a piece of land, or a rental property, it's important to keep the following in mind.
First, despite the exceptionally strong appreciation real estate values have had in the past, real estate can and does occasionally decline in value. Second, real estate taxes will constantly eat into returns. Third, real estate owners must worry about physically maintaining their properties or must pay someone else to do it. Likewise, they often must deal with tenants and collect rents. Finally, real estate is rather illiquid and takes time to sell--a potential problem if you need your money back quickly.
Some people do nothing but invest their savings in real estate, but just as stock investing requires effort, so does real estate investing.
Bank Savings Accounts. The problem with bank savings accounts and certificates of deposit is that they offer very low returns. The upside is that there is essentially zero risk in these investment vehicles, and your principal is protected. These types of accounts are fine as rainy-day funds--a place to park money for short-term spending needs or for an emergency. But they really should not be viewed as long-term investment vehicles.
The low returns of these investments are a problem because of inflation. For instance, if you get a 3% return on a savings account, but inflation is also dropping the buying power of your dollar by 3% a year, you really aren't making any money. Your real return (return adjusted for inflation) is zero, meaning that your money is not really working for you at all.
Prof's. Guidance: Just so you will know, my personal investment focus is the stock market and investing in Companies and ETFs.
Wrapping Up:
Though investing in stocks may indeed require more work and carry a higher degree of risk compared with other investment opportunities, you cannot ignore the higher potential return that stocks provide. And as I will share in the next course, given enough time, a slightly higher return on your investments can lead to dramatically larger dollar sums for whatever your financial goals in life may be.
Quiz 101
There is only one correct answer to each question.
- Which of the following types of investments provide the largest long-term returns?
- Stocks.
- Bonds.
- Savings accounts.
- Which of the following types of investments are the most volatile in their pricing?
- Stocks.
- Bonds.
- Savings accounts.
- Which of the following skills sets is NOT needed to be a successful investor?
- Discipline.
- A critical eye.
- Advanced statistics.
- Over the long term, which type of investment provides the lowest real (inflation adjusted) returns?
- Stocks.
- Mutual funds.
- Savings accounts.
- When you buy a stock, you are:
- Making a loan to a company.
- Buying an ownership interest in a company.
- Investing in the government.
Thanks for attending class this week - and - don't put off doing some extra homework (using Google - type "info" and the word or question) and sharing with or asking the Prof. questions and concerns.
Investment Basics (a 38 Week - Comprehensive Course)
By: Professor Steven Bauer
Text: Google has the answers
Junior Year
Course 301 - The Income Statement
Course 302 - The Balance Sheet
Course 303 - The Statement of Cash Flows
Course 304 - Interpreting the Numbers
Course 305 - Quantifying Competitive Advantages
Senior Year
|
http://www.safehaven.com/article/18039/investment-basics-course-101-stocks-and-etfs-versus-other-investments
|
crawl-003
|
en
|
refinedweb
|
#include <itkImageModelEstimatorBase.h>
Inheritance diagram for itk::ImageModelEstimatorBase< TInputImage, TMembershipFunction >:
itkImageModelEstimatorBase is the base class for the ImageModelEstimator objects. It provides the basic function definitions that are inherent to a ImageModelEstimator objects.
This is the SuperClass for the ImageModelEstimator framework. This is an abstract class defining an interface for all such objects available through the ImageModelEstimator framework in the ITK toolkit.
The basic functionality of the ImageModelEstimator framework base class is to generate the models used in classification applications. It requires input images and a training image to be provided by the user. The classified image is treated as a single band scalar image.
EstimateModels() is a pure virtual function making this an abstract class. The template parameter is the type of a membership function the ImageModelEstimator populates.
A membership function represents a specific knowledge about a class. In other words, it should tell us how "likely" is that a measurement vector (pattern) belong to the class.
As the method name indicates, you can have more than one membership function, one for each class. The order in which you add each membership calculator becomes the class label for the class that the membership calculator represents.
Definition at line 65 of file itkImageModelEstimatorBase.h.
|
http://www.itk.org/Doxygen16/html/classitk_1_1ImageModelEstimatorBase.html
|
crawl-003
|
en
|
refinedweb
|
#include <cx/Geometry.h>
cxGeo cxGeoPolysDefine( int npoint, float *point, int nindex, int *index )
integer function cxGeoPolysDefine(npoint, point, nindex, index)
integer npoint
real point(3, npoint)
integer nindex
integer index(nindex)
Indexing of vertices is zero-based, i.e. the first vertex is referenced by index 0, and the last by npoint - 1. An index element value of -1 indicates the end of a polygon.
The return value is a tag for this object that may be used to reference it at a later time with cxGeoFocus(3E).
Valid attributes are colors, normals, and transparencies. Attribute distribution may be CX_GEO_PER_OBJECT, CX_GEO_PER_FACE, CX_GEO_PER_VERTEX, or CX_GEO_PER_VERTEX_INDEXED.
Polygons specified by cxGeoPolysDefine must have their vertices supplied in a counter-clockwise fashion so that the implicit frontfacing/backfacing normal for the polygon is computed correctly according to the "right-hand rule". Specifying a normal for the polygon using cxGeoNormalAdd will not override this frontfacing/backfacing normal computation for purposes of rendering. If you do not adhere to this ordering, you may get unexpected results when the polygon is rendered in 3-D rendering modules which display the geometry.
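For instance, a single counter-clockwise unit square could be defined like this (a sketch; error checking omitted):

#include <cx/Geometry.h>

void defineQuad(void)
{
    /* Four vertices, counter-clockwise in the XY plane. */
    static float point[4][3] = {
        { 0.0, 0.0, 0.0 },
        { 1.0, 0.0, 0.0 },
        { 1.0, 1.0, 0.0 },
        { 0.0, 1.0, 0.0 }
    };
    /* One polygon using vertices 0..3; -1 marks the end of the polygon. */
    static int index[] = { 0, 1, 2, 3, -1 };

    cxGeo quad = cxGeoPolysDefine(4, (float *) point, 5, index);
    /* The returned tag may later be passed to cxGeoFocus(). */
}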
|
http://www.nag.co.uk/visual/IE/iecbb/DOC/html/unix-ref/man3/cxgeopolysdefine.htm
|
crawl-003
|
en
|
refinedweb
|
#include <itkImageIOBase.h>
Inheritance diagram for itk::ImageIOBase:
ImageIOBase is a class that reads and/or writes image data of a particular format (such as PNG or raw binary). The ImageIOBase encapsulates both the reading and writing of data. The ImageIOBase is used by the ImageFileReader class (to read data) and the ImageFileWriter (to write data) into a single file. The ImageSeriesReader and ImageSeriesWriter classes are used to read and write data (in conjunction with ImageIOBase) when the data is represented by a series of files. Normally the user does not directly manipulate this class other than to instantiate it, set the FileName, and assign it to a ImageFileReader/ImageFileWriter or ImageSeriesReader/ImageSeriesWriter.
A pluggable factory pattern is used; this allows different kinds of readers to be registered (even at run time) without having to modify the code in this class.
ImageFileReader
ImageSeriesWriter
ImageSeriesReader
Definition at line 55 of file itkImageIOBase.h.
|
http://www.itk.org/Doxygen16/html/classitk_1_1ImageIOBase.html
|
crawl-003
|
en
|
refinedweb
|
#include <FilelistExpander.hpp>
#include <FilelistExpander.hpp>
Inheritance diagram for cafe::FilelistExpander:
Used internally by cafe.
Definition at line 19 of file FilelistExpander.hpp.
Definition at line 11 of file FilelistExpander.cpp.
References _file.
[private]
[virtual]
Implements cafe::Expander.
Definition at line 22 of file FilelistExpander.cpp.
Definition at line 24 of file FilelistExpander.hpp.
Referenced by FilelistExpander(), and nextFile().
|
http://www-d0.fnal.gov/Run2Physics/working_group/data_format/caf/classcafe_1_1FilelistExpander.html
|
crawl-003
|
en
|
refinedweb
|
#include <itkImageToImageFilter.h>
Inheritance diagram for itk::ImageToImageFilter< TInputImage, TOutputImage >:
ImageToImageFilter is the base class for all process objects that output image data and require image data as input. Specifically, this class defines the SetInput() method for defining the input to a filter.
This class provides the infrastructure for supporting multithreaded processing of images. If a filter provides an implementation of GenerateData(), the image processing will run in a single thread and the implementation is responsible for allocating its output data. If a filter provides an implementation of ThreadedGenerateData() instead, the image will be divided into a number of pieces, a number of threads will be spawned, and ThreadedGenerateData() will be called in each thread. Here, the output memory will be allocated by this superclass prior to calling ThreadedGenerateData().
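As a concrete illustration of the threaded path just described, a minimal filter might look like this sketch (assuming the ITK 3.x-era API documented here; the doubling operation is arbitrary):

#include <itkImageToImageFilter.h>
#include <itkImageRegionConstIterator.h>
#include <itkImageRegionIterator.h>

template <class TInputImage, class TOutputImage>
class DoublingFilter : public itk::ImageToImageFilter<TInputImage, TOutputImage>
{
protected:
  // The superclass allocates the output before spawning threads; each
  // thread only touches the region it is handed (threadId is unused here).
  void ThreadedGenerateData(const typename TOutputImage::RegionType &region,
                            int threadId)
  {
    itk::ImageRegionConstIterator<TInputImage> in(this->GetInput(), region);
    itk::ImageRegionIterator<TOutputImage> out(this->GetOutput(), region);
    for (; !out.IsAtEnd(); ++in, ++out)
      out.Set(static_cast<typename TOutputImage::PixelType>(2 * in.Get()));
  }
};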
ImageToImageFilter provides an implementation of GenerateInputRequestedRegion(). The base assumption to this point in the hierarchy is that a process object would ask for the largest possible region on input in order to produce any output. Imaging filters, however, can usually answer this question more precisely. The default implementation of GenerateInputRequestedRegion() in this class is to request an input that matches the size of the requested output. If a filter requires more input (say a filter that uses neighborhood information) or less input (for instance a magnify filter), then these filters will have to provide another implementation of this method. By convention, such implementations should call the Superclass' method first.
Definition at line 64 of file itkImageToImageFilter.h.
|
http://www.itk.org/Doxygen16/html/classitk_1_1ImageToImageFilter.html
|
crawl-003
|
en
|
refinedweb
|
I recently had a customer that wants to get an alert when a specific service is not Disabled and/or not Stopped. I used the following steps to accomplish this using a "Timed Script Three State Monitor". Even if you do not have this specific need, these steps can be used as a template for creating a monitor that uses a script to query WMI and change state or generate alerts based on the results. If you don't have a need for three states (Critical, Warning, Healthy), there is a Two State Monitor that can be used for this.
Create a new Monitor, select Scripting\Generic\Timed Script Three State Monitor
Give it a name, target, etc. (I targeted the Windows Computer class, but Windows Operating System may be a better choice). I try to make a habit of unchecking "Monitor is enabled" and enabling it with an override later....at least while testing it:
Set the schedule...this just depends on how quickly you want to know if the service gets changed:
Next, I used a basic VB script which accepts a service name as a parameter, queries WMI for the service, and puts the Service Name, State (Running, Stopped, etc.), and StartMode (Disabled, Manual, Automatic) into property bag values. The full text of the script is below the screenshot:
---------------------------------------------------------------------------------------------------
Dim oAPI, oBag,strComputer
Set oAPI = CreateObject("MOM.ScriptAPI")
Set oBag = oAPI.CreatePropertyBag()
set oArgs=wscript.arguments
strComputer="."
ServName=oArgs(0)
Set namespace=GetObject("winmgmts:\\"& strComputer & "\root\cimv2")
set servinfo=namespace.ExecQuery("select * from win32_service where name =" & """" & servname & """")
for each objservice in servinfo
Call oBag.AddValue("ServiceName",ServName)
Call oBag.AddValue("State",objservice.State)
Call oBag.AddValue("StartMode",objservice.StartMode)
next
Call oAPI.Return(oBag)
---------------------------------------------------------------------------------------------------
For the script parameter, I just enter "ServiceName"....this will be replaced by an override later, or you can just enter your service name here:
Next, I set the "Unhealthy", "Degraded", and "Healthy" expressions for the monitor. My goal is to set the state to Warning when the service is Stopped but NOT Disabled, Critical when it is NOT Stopped, and Healthy when it is Stopped AND Disabled. I used the following expressions:
Unhealthy Expression:
Parameter Name: Property[@Name='State']
Operator: Does not equal
Value: Stopped
Degraded Expression:
Parameter Name: Property[@Name='StartMode']
Operator: Does not equal
Value: Disabled
AND
Parameter Name: Property[@Name='State']
Operator: Equals
Value: Stopped
Healthy Expression:
Parameter Name: Property[@Name='StartMode']
Operator: Equals
Value: Disabled
AND
Parameter Name: Property[@Name='State']
Operator: Equals
Value: Stopped
Next, I used the default settings for Health State, since they already match what I want to do:
Next, I configure the alert settings. The settings in the screen shot below will generate a Warning alert when the monitor is in a Warning state (service is not Disabled), and a Critical alert when the monitor is in the Critical state (service is not Stopped). The Alert Description will have the service name (using the ServiceName property created by the script):
Now that I have the monitor created, I need to enable it and set the Override for the Service Name:
I'm using the Alerter service for my test:
To test the monitor, I first set the Alerter service to Manual Startup and leave it stopped:
Then I verify that I get the Warning alert:
Health Explorer correctly shows the "Degraded" Warning state:
Now I want to test the Critical state, so I start the Alerter Service:
Now the alert is changed to Critical:
And Health Explorer shows the "Unhealthy" Critical state:
When I stop the service and disable it, the alert is auto-resolved and the state is changed back to Healthy:
I've attached my sample MP which includes the following monitors:
Service disabled and stopped - two-state monitor:
If the specified service is not Stopped AND Disabled, the computer will be put in a Warning state and a Warning alert will be generated. When the service is stopped and disabled, the computer will be put in a Healthy state.
Service disabled and stopped - three-state monitor:
If the specified service is Stopped and is not Disabled, the computer will be put in a Warning state and a Warning alert will be generated. If the specified service is not Stopped, the computer will be put in a critical state and a Critical alert will be generated. When the service is stopped and disabled, the computer will be put in a Healthy state.
Usage:
Both monitors are targeted at the Windows Computer class and roll up to the Configuration Health. Both monitors are disabled by default. They are configured to check the service every 1 minute. To enable one of the monitors, add an Override for the Computer or Group you wish to monitor and set the following Override parameters:
Enabled=True
Script Arguments = <Service Name>
Enjoy!!
We do have simpler ways to monitor a service….this script was provided for a way to do more granular monitoring of the service state and startup type.
You can easily monitor a service by doing one of the following:
1. Go to Authoring\Management Pack Templates, run the "Add Monitoring" wizard and create a "Windows Service" monitor. This will monitor all agents that run the service.
2. Go to Authoring\Management Pack Objects\Monitors and create a new Unit Monitor, then select "Windows Services\Basic Service Monitor". This will allow you to target specific classes for the monitor.
I haven’t set this up, but you should just be able to add a diagnostic or recovery command with a command line of "net start <servicename>".
You would need to change the script to query Win32_Process in root\cimv2 (instead of Win32_Service), and use the "CreationDate" property to see how long it has been running. I'm not 100% sure, but I think something like this will be included in the R2 release.
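A rough sketch of that idea (the process name and 20-minute threshold are only examples, not part of the attached MP):

Set oWMI = GetObject("winmgmts:\\.\root\cimv2")
Set colProcs = oWMI.ExecQuery("select * from Win32_Process where Name = 'vssvc.exe'")
Set dtStart = CreateObject("WbemScripting.SWbemDateTime")
For Each objProc In colProcs
    ' CreationDate is a WMI datetime string; convert it before comparing.
    dtStart.Value = objProc.CreationDate
    minutesRunning = DateDiff("n", dtStart.GetVarDate, Now)
    If minutesRunning > 20 Then
        wscript.echo objProc.Name & " has been running for " & minutesRunning & " minutes"
    End If
Next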
Thanks for the two/three state monitor sample. However I wonder if there is a way to add a diagnostic task to automatically recover a stopped service. Have you ever worked on that?
My question was not so clear. I plan to monitor several servers at a time. ex; services beginning with ‘cisco’ display name. Therefore I need to determine stopped service name and set service name with parameters within recovery task command lines without writing a custom script
I’m not sure if anybody is still looking at the comments here, BUT:
I’m looking to monitor the Volume Shadow Copy service, and I want OpsMgr to generate an alert when it’s been in a running state for over 20 minutes (this generally will indicate that it’s hung up); normally the service is started and stopped in the span of five minutes. How would I do this?
How do I monitor a specific service on a specific server without having to create a script? I feel like monitoring services in OpsMgr R2 should be rudimentary but it’s unbelievably convoluted.
Very nice, it give me a pointer how to monitoring if service hang in stopping state.
nicel,value add with respect to service Monitor creation.
|
https://blogs.technet.microsoft.com/jimmyharper/2008/08/09/monitoring-a-service-for-state-and-startmode/
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Subject: Re: [boost] [review][assign] Formal review of Assign v2 ongoing
From: er (er.ci.2020_at_[hidden])
Date: 2011-06-21 17:05:58
> About your other comments:
> - operator| has a similar meaning in Boost.Range under adaptors.
> - Feel free to suggest another prefix than 'do'.
> - Dot is borrowed from Boost.Assign (1.0).
Did you mean dot or %? The dot is a small price to pay for alternating
between various ways to insert elements in a container, within one
statement:
put( cont )( x, y, z ).for_each( range1 )( a, b ).for_each( range2 );
The answer to "[Is it] just a quest to type less when using standard
containers?" is yes, as illustrated just above, but it is quite a broad
definition of standard containers. Version 2.0 provides macros to
broaden it further (which was used to support Boost.MultiArray, for
example).
As for why such a library should exist, it depends on the degree to
which you value the syntax above, which is very similar to Boost.Assign
(1.0), in this case.
You say "I ordinarily only initialize containers to literals when
writing unit tests.". In this case, I think you are right that you can't
beat C++0x initializer lists. But, you still may need to fill a
container, as above. And also, consider the cases below:
#include <vector>
#include <queue>
#include <string>
#include <tuple>
#include <boost/assign/v2/include/csv_deque_ext.hpp>
int main()
{
typedef std::string s_;
{
typedef std::tuple<s_, int> t_;
typedef std::vector<t_> v_;
v_ v1 = {
t_( "a", 1 ),
t_( "b", 2 ),
t_( "c", 3 ),
t_( "d", 4 ),
t_( "e", 5 )
};
using namespace boost::assign::v2;
v_ v2 = converter(
csv_deque<t_, 2>( "a", 1, "b", 2, "c", 3, "d", 4, "e", 5)
);
}
{
typedef std::queue<int> q_;
// Not aware that an initializer list works here
using namespace boost::assign::v2;
q_ q = converter(
csv_deque<int, 1>( 1, 2, 3, 4, 5 )
);
}
return 0;
}
> - Prefix _ is reserved for const objects (not sure the proper word for it)
And I think this convention appears elsewhere in Boost, such as
Boost.Parameter.
|
https://lists.boost.org/Archives/boost/2011/06/183014.php
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
In my last post, I showed how to write a higher-order function that could wrap an existing function without losing the original function’s types.
Today, I’m going to show how you can use that same technique to wrap an existing function for a different result–to execute it in a background process using the workerpool npm module.
Background Worker Process
Since Node.js is single-threaded, you have to be careful about performing any kind of lengthy synchronous processing, as this process will block the entire application. In many cases, you’ll want to use a robust job queue (e.g. Kue) to handle background tasks. But there are plenty of situations where a more lightweight solution will suffice–and there are countless libraries that can start up a background process (or pool of processes) and manage the interprocess communication for you.
For the purposes of this post, I'm going to use the workerpool library. Specifically, I'll be creating a dedicated worker (one of the options for how to use workerpool).
The Function
Here’s a function that synchronously generates random data, writes the data to a file, and returns the path to that file on disk:
import * as crypto from 'crypto';
import * as fs from 'fs';
import * as Path from 'path';

export function createRandomDataFile(numBytes: number): string {
  // Build a unique file name from a random string plus a timestamp.
  const uniqueId = Math.random().toString(36).substring(2) + (new Date()).getTime().toString(36);
  const path = Path.join(process.cwd(), uniqueId);
  const buffer = crypto.randomBytes(numBytes);
  fs.writeFileSync(path, buffer);
  return path;
}
Depending on the size of the file being generated, this function could block the event loop for many seconds–clearly, not acceptable.
Background Process
When using a dedicated worker with workerpool, you pass the name (which can be any label) from the foreground process (the main event loop) to a background worker process which has a set of registered functions it can execute.

Here is our worker.ts file with the createRandomDataFile function registered:
import * as workerpool from 'workerpool';
import { createRandomDataFile } from 'random-data';

workerpool.worker({
  createRandomDataFile,
});
Foreground Process
To invoke our function from the main process, we need to tell workerpool about the worker.ts file and then tell it to invoke the function by name:
import * as workerpool from 'workerpool';

const pool = workerpool.pool(__dirname + '/worker.ts');

export function createRandomDataFile(numBytes: number): Promise<string> {
  return pool.exec('createRandomDataFile', [numBytes]);
}
This isn’t really that bad. The biggest downside is that the type signature of the function is now being defined in two places, and there’s nothing to enforce that they stay in sync. This is probably not that big of a deal for this simple example, but if the arguments included a more complicated object type, it could get out of sync and lead to an error that’s hard to catch.
Higher-Order Wrapper
Using the types I talked about in Generic Higher-Order Functions in TypeScript, here’s a function that wraps a given function so that when called, it dispatches the call to the worker process:
import * as workerpool from 'workerpool';

const pool = workerpool.pool(__dirname + '/worker.ts');

function makeBackgroundable<T extends (...args: any[]) => any>(
  func: T,
): (...funcArgs: Parameters<T>) => Promise<ReturnType<T>> {
  const funcName = func.name;
  // The inner function returns a Promise as well, since pool.exec is
  // asynchronous (see the comment thread below).
  return (...args: Parameters<T>): Promise<ReturnType<T>> => {
    return pool.exec(funcName, args);
  };
}
Note that the function returned from makeBackgroundable wraps the return value in a Promise because the new function is now asynchronous, while the original function was not.

Finally, we can create a "backgrounded" function from the original function and call it from some other file (e.g. main.ts).
import { makeBackgroundable } from 'backgroundable';
import { createRandomDataFile } from 'random-data';

const backgroundedCreateRandomDataFile = makeBackgroundable(createRandomDataFile);

async function run() {
  const path = await backgroundedCreateRandomDataFile(1024 * 1024 * 1024);
  // ...
}
The type signature of the backgroundedCreateRandomDataFile function will exactly match that of the original (with the exception of the return value being wrapped in a Promise), and it will immediately reflect any changes made to the signature of the original as well. Just what we wanted.
2 Comments
when I put the makeBackgroundable into a .ts file, the

return pool.exec(funcName, args);

line shows

Cannot assign type "Promise" to type "ReturnType"
I don't see the same error/warning but maybe I don't have some types set up right for the workerpool library so an "any" was hiding that problem for me. The pool.exec call does return a Promise, so it makes sense that the inner function should return a Promise<ReturnType<T>>. Thanks for pointing this out!
|
https://spin.atomicobject.com/2019/02/18/wrap-typescript-function/
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Select / React
A select is a simple form control element to use when the user needs to select from a larger set of choices.
To implement the Select component in your project you'll need to add the import:
import Select from "@kiwicom/orbit-components/lib/Select";
After adding the import into your project you can use it simply like:
<Select options={Option} />
Props
The table below contains all types of the props available in the Select component.
Option
The table below contains all types of the props available for objects in the Option array.
enum
Functional specs
The error prop overwrites the help prop, due to higher priority.
When you have limited space for the Select, you can use the customValueText property, where you can pass a text alternative of the current value. For instance, when the label of the selected option is Czech Republic (+420), you can pass only +420 into this property and the original label will be visually hidden.
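A sketch of that usage (assuming an options array of value/label objects; the handler name is illustrative):

<Select
  options={[{ value: "cz", label: "Czech Republic (+420)" }]}
  value="cz"
  customValueText="+420"
  onChange={handleSelectChange}
/>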
ref can be used, for example, to auto-focus the element immediately after render.
class Component extends React.PureComponent<Props> {
  componentDidMount() {
    this.ref.current && this.ref.current.focus();
  }

  ref: { current: React$ElementRef<*> | null } = React.createRef();

  render() {
    return (
      <Select ref={this.ref} />
    );
  }
}
|
https://orbit.kiwi/components/select/react/
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
§Testing your application
Writing tests for your application can be an involved process. Play supports JUnit and provides test helpers, and tests are run through SBT. A controller test starts out along these lines:

import org.junit.Test;
import play.mvc.Result;
import play.twirl.api.Content;

public class ControllerTest {
    ...
}
|
https://www.playframework.com/documentation/2.6.13/JavaTest
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
I looked at the help and in the forums but couldn't find anything.
If there is an hscript expression that would work too.
Thanks
Posted 23 June 2012 - 08:32 PM
Posted 23 June 2012 - 11:34 PM
Basically something that would return the primitives that uses a particular point.
Posted 23 June 2012 - 11:38 PM
Posted 23 June 2012 - 11:55 PM
Thanks rdg. How do you build a tree for the bounding boxes of primitives? Actually I also don't know how to get the bounding box of a primitive
Is there an expression for that? My geometry is a single connected polygon mesh, not sure if that matters.
Posted 24 June 2012 - 12:00 AM
Posted 24 June 2012 - 12:35 AM
Interestingly there is a way to get the points of a primitive in python, but not the other way around
# This code is called when instances of this SOP cook.
node = hou.pwd()
geo = node.geometry()

# Add code to modify the contents of geo.
pointnumber = node.evalParm('val')

def GetAllPoints():
    """ use it to get points of each primitive """
    dict = {}
    for prim in geo.prims():
        points = []
        for verticle in prim.vertices():
            points.append(verticle.point().number())
        dict[prim.number()] = points
    return dict

def GetPointPrimitives(dict, pointnumber):
    """ use it to get primitives that uses this point """
    prims = []
    for k, v in dict.items():
        if pointnumber in v:
            prims.append(k)
    return prims

# MAIN()
print(GetPointPrimitives(GetAllPoints(), pointnumber))
Edited by mantragora, 24 June 2012 - 04:06 AM.
magic happens here... sometimes
Vimeo
Twitter
Orbolt
"If it's not real-time, it's a piece of shit not a state of the art technology" - me
Posted 24 June 2012 - 01:46 AM
Posted 24 June 2012 - 01:53 AM
Thanks mantragora, that's the method I was talking about. But looking up the prims from the points would be slow. You could make another dictionary from yours where point numbers would be the keys but that would be even slower to construct
These are the kinds of solutions I don't like implementing because they are not scalable. If I had 10 points in a mesh with 100 points, and it takes 1 ms for cooking my SOP, having the same 10 points in a mesh with 1 million points will be 10000 times slower, which would be 10 seconds (just an example), but it shouldn't be. I shouldn't pay that price because I am not modifying the whole mesh.
Reminds me of the Edit Poly limitations in Max (not Editable Poly), where even setting the position of a vertex/point would be an epic undertaking.
magic happens here... sometimes
Vimeo
Twitter
Orbolt
"If it's not real-time, it's a piece of shit not a state of the art technology" - me
Posted 24 June 2012 - 02:00 AM
Use InlineCPP.
Posted 24 June 2012 - 02:47 AM.
Edited by mantragora, 24 June 2012 - 04:59 AM.
magic happens here... sometimes
Vimeo
Twitter
Orbolt
"If it's not real-time, it's a piece of shit not a state of the art technology" - me
Posted 24 June 2012 - 02:48 AM
Posted 24 June 2012 - 07:02 AM
def buildPointPrimRefMap(geo):
    """ Build a dictionary whose keys are hou.Point objects and values are
    a list of hou.Primitive objects that reference the point. """
    point_map = {}
    for prim in geo.prims():
        for vert in prim.vertices():
            pt = vert.point()
            if not pt in point_map:
                point_map[pt] = []
            point_map[pt].append(prim)
    return point_map

This results in a dictionary where I can use a hou.Point to get any prims that reference it.
cpp_geo_methods = inlinecpp.createLibrary(
    "cpp_geo_methods",
    includes="""#include <GU/GU_Detail.h>""",
    structs=[("IntArray", "*i"),],
    function_sources=[
"""
IntArray connectedPrims(const GU_Detail *gdp, int idx)
{
    std::vector<int> ids;
    GA_Offset ptOff, primOff;
    GA_OffsetArray prims;
    GA_OffsetArray::const_iterator prims_it;

    ptOff = gdp->pointOffset(idx);
    gdp->getPrimitivesReferencingPoint(prims, ptOff);

    for (prims_it = prims.begin(); !prims_it.atEnd(); ++prims_it)
    {
        ids.push_back(gdp->primitiveIndex(*prims_it));
    }

    return ids;
}
""",])

def connectedPrims(point):
    """ Returns a tuple of primitives connected to the point. """
    geo = point.geometry()
    result = cpp_geo_methods.connectedPrims(geo, point.number())
    return geo.globPrims(' '.join([str(i) for i in result]))
Edited by graham, 24 June 2012 - 07:05 AM.
|
http://forums.odforce.net/topic/15708-is-there-a-way-to-get-primitives-using-a-point/
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
UNITED STATES
NUCLEAR REGULATORY COMMISSION
OFFICE OF NUCLEAR REACTOR REGULATION
WASHINGTON, DC 20555-0001
September 2, 1994
Addressees
All holders of operating licenses or construction permits for nuclear power
reactors, radiography licensees, fuel processing licensees, fabricating and
reprocessing licensees, manufacturers and distributors of byproduct material,
independent spent fuel storage installations, facilities for land disposal of
low-level waste, and geologic repositories for high-level waste.
Purpose
The U.S. Nuclear Regulatory Commission (NRC) is issuing this generic letter to
request that all addressees provide to the NRC a voluntary report containing
the occupational radiation exposure data as described below.
Background
The provisions of 20.2206 of 10 CFR Part 20 require seven categories of NRC
licensees to submit occupational radiation exposure reports. The seven
categories are as follows: commercial nuclear power reactors; industrial
radiographers; fuel processors, fabricators and reprocessors; manufacturers
and distributors of byproduct material; independent spent fuel storage
installations; facilities for land disposal of low-level waste; and geologic
repositories for high-level waste. Each of these approximately 500 licensees
submits exposure reports for each of its monitored employees. This data is
computerized by the NRC, and forms the basis for the Radiation Exposure
Information Reporting System (REIRS).
An analysis of the REIRS database is presented in the annual volumes of
NUREG-0713, "Occupational Radiation Exposure at Commercial Nuclear Power
Reactors and Other Facilities." The analysis provides licensees with an
opportunity to compare ALARA performance at their facilities with that of
similar facilities. The data are also used to evaluate occupational doses
against national and international radiation protection recommendations to
determine if further reductions in the occupational dose limits in 10 CFR
Part 20 are needed to achieve the recommended levels. The REIRS database also
provides a historical view of radiation exposure at NRC licensed facilities
over the last quarter century. The NRC database can be used to provide
complete individual exposure history to employers to assist them in
demonstrating compliance with the occupational radiation exposure requirements
regarding exposure histories for required individuals. Finally, REIRS is the
largest database of radiation exposures at occupational levels. This makes it
a very valuable epidemiological resource in determining the actual risk of
exposure at occupational levels.
One of the goals fulfilled through the collection of this additional data is
to supplement the information available through the Part 20 reporting and
recordkeeping requirements so that the information vital to carrying out
epidemiological studies will be available. This goal was stated in the
statement of consideration to 10 CFR 20.1001 through 20.2402 published in the
Federal Register on May 21, 1991 (56 FR 23386). The utility of this
information is in conducting such studies and the intention of the National
Cancer Institute to conduct these studies was discussed in an April 20, 1994
letter to Bill M. Morris, Director of the Division of Regulatory Applications,
Office of Nuclear Regulatory Research from Dr. John Boice, Chief of the
Radiation Epidemiology Branch, National Cancer Institute. In the letter,
Dr. Boice states that it is the current workers who have been employed for
many years who are most critical to a successful epidemiological study.
Description of Circumstances
Under the previous requirements of 10 CFR Part 20.1 through 20.602, seven
classes of licensees were required to submit termination reports containing
occupational radiation exposure data for the entire period of work or
employment to the NRC when individuals terminated employment or a work
assignment at their facilities. Thus, at the end of a worker's employment,
the individual's entire exposure record would be part of NRC's exposure
database. In addition, these licensees were required to submit a statistical
summary of the exposures of all individuals occupationally exposed at their
facilities.
Under the new requirements of 10 CFR 20.1001 through 20.2402, which became
mandatory on January 1, 1994, these licensees are now required to annually
submit occupational radiation exposure data to the NRC for all persons
occupationally exposed at their facilities, during that year, for whom
monitoring is required. Termination reports are no longer required. Thus,
as of April 1994 and every April thereafter, the required exposure data for
employees for the previous year is to be submitted to the NRC. With this
change in reporting requirements, the exposure data for current employees,
from the time of their initial employment to the date of implementation of the
new requirements of 10 CFR Part 20.1001 through 20.2402 would not be reported.
Complete data would only be available for employees who finished their careers
prior to the new requirements or new employees who only worked under the new
requirements. Complete data would not be available for any employee who
worked under both the new and old reporting requirements. This gap in the
radiation exposure data would limit the usefulness of the REIRS database for
(1) epidemiology as described by Dr. Boice, (2) supporting decisions on the
necessity and appropriateness of new regulatory requirements for occupational
exposure, and (3) facilitating determinations by new employers of prior
occupational exposure as required by 10 CFR 20.2104.
Discussion
The Nuclear Regulatory Commission, as well as national and international
organizations such as the International Commission on Radiation Protection
(ICRP) and the National Commission on Radiation Protection and Measurements
(NCRP), derives information on occupational exposures from the REIRS database
and uses this information to establish limits on occupational exposure to
ionizing radiation. If the REIRS database is known to be incomplete, it will
not be reliable for determining actual lifetime exposures. This will have
three major consequences. First, NRC would not be able to continue to
provide complete exposure histories to individuals to facilitate the movement
of transient workers from one licensee to another. Second, actual lifetime
exposures could not be determined for the occupationally exposed workers in
the seven categories of licensees. Without this information, NRC may have
difficulty in evaluating whether further limitations on occupational doses are
needed to achieve dose levels recommended by the ICRP. Finally, a large
reliable database, available to the National Cancer Institute for
epidemiological studies on occupationally exposed workers, would not be
available for decades. The ability of agencies such as NCI to rely upon these
data of doses at occupational levels would be lost.
Requested Information
In an effort to provide for a complete and reliable database, the NRC is
requesting that the seven classes of licensees included in the REIRS database
provide a voluntary report of the data missed as a result of the change in
regulations. This report is requested to include the occupational radiation
exposure data of all current licensee employees from the date of employment to
the day prior to implementation of the new requirements of 10 CFR Part 20.1001
through 20.2402 which were otherwise unreported under the reporting
requirements of 10 CFR 20.1-20.602. The information requested is that
normally included on NRC Form 5.
Voluntary Response Requested
Within 180 days from the date of this generic letter, all addressees are
requested to submit a voluntary report containing data for each monitored
individual from the date of employment to the day prior to the implementation
of the new requirements of 10 CFR Part 20.1001 through 20.2402. Data
previously reported on termination reports need not be included. It is
preferable that the data be reported by monitoring year, but a single
monitoring period spanning several years is acceptable. If possible, the data
should be submitted electronically.
While the NRC will accept these data in any format, a suggested format as well
as an electronic format is provided in Enclosure 1 in an effort to simplify
submission of the requested data. The electronic format is the preferred
format for the submission of the data.
In addition, a cover letter should be included which gives the name of the
licensee, the NRC license number, the name of a person to contact in case
there are questions, and the phone number at which that individual can be
reached (the same information requested as part of NRC Form 5).
Address all reports to the U.S. Nuclear Regulatory Commission, ATTN: REIRS
Project Manager, Mail Stop T-9 C24, Washington, DC 20555.
Backfit Discussion
This generic letter only requests voluntary submittal of information.
Therefore the staff has not performed a backfit analysis.
A notice of opportunity for public comment was not published in the Federal
Register because of the voluntary nature of the information request.
Paperwork Reduction Act Statement
The voluntary information collections contained in this request are covered by
the Office of Management and Budget, clearance number 3150-0011, which expires
July 31, 1997. The public reporting burden for this voluntary collection of
information is estimated to average 10 hours per response. Reporting of the
following information also is purely voluntary and would assist NRC in
evaluating the cost of complying with this generic letter:

The licensee staff time and costs to prepare the requested reports and
documentation.
If you have any questions about this matter, please contact the technical
contact listed below.
original signed by
Carl J. Paperiello, Director
Division of Industrial and Medical
Nuclear Safety
Office of Nuclear Material Safety
and Safeguards
Roy P. Zimmerman
Associate Director for Projects
Office of Nuclear Reactor Regulation
Attachments:
FORMAT FOR THE OCCUPATIONAL RADIATION EXPOSURE DATA REPORT
Electronic Format
Electronic submittal should be on 3.5" or 5.25" PC diskettes or 8 mm magnetic
tape. Each disk, tape, or cartridge submitted should include a transmittal
letter. Each letter should contain the file name, date created, operating
system, the name and phone number of a person knowledgeable about each file,
any other pertinent instructions, signature, and date.
File Structure
Each diskette should contain two file types. The first file type should be a
single header record which provides information about the source of the data
file. The second file type should be an exposure record for each monitoring
period for each monitored individual. Each record should contain only ASCII
or EBCDIC printable characters, terminated with a carriage return (CR) and a
line feed (LF). All empty space should be padded with spaces. Text strings
are expected to be left justified in a field and numbers are expected to be
right justified in a field.
Header Record (occurs only once on each diskette, tape, or cartridge)
Exposure Record (one for each individual, for each monitoring period)
If hard copy reports are to be submitted, the following information
is needed for each monitored individual:
Init
SSN
Sex
Date of Birth
Date Monitoring Began
Date Monitoring Ended
Whole Body Dose(rem)
|
http://www.nrc.gov/reading-rm/doc-collections/gen-comm/gen-letters/1994/gl94004.html
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
I'm new to ST2 and new to Python, so forgive me: my question will be very very silly...
In the long run, I'm trying to build a plugin that (e.g.) opens the file "~/Documents/001.tex" if I ctrl-double-click inside the curly braces of "{001}".
But as a first step, I'm trying to make a plugin that opens "~/Documents/001.tex" if the caret is inside "{001}", say as in "{0|01}", and mycommand is invoked.
What's wrong with the following code?
Code:
import sublime
import sublime_plugin

class MycommandCommand(sublime_plugin.WindowCommand):
    def run(self, edit):
        self.view.run_command("expand_selection", {"to": "brackets"})
        sel = self.view.sel()
        docnum = self.view.substr(sel[0])
        open_file("~/Documents/0007xx/${docnum}.tex")
Many, many thanks in advance for any help.
--
bblue
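For reference, a working version of the command might look like this sketch (assuming a TextCommand, so that a view and edit are available, and ordinary %-formatting instead of the shell-style ${docnum}):

import os
import sublime
import sublime_plugin

class MycommandCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        self.view.run_command("expand_selection", {"to": "brackets"})
        docnum = self.view.substr(self.view.sel()[0])
        # "${docnum}" in a plain Python string is never substituted;
        # build the path explicitly and expand "~" before opening.
        path = os.path.expanduser("~/Documents/0007xx/%s.tex" % docnum)
        # open_file() lives on the window, not as a bare function.
        self.view.window().open_file(path)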
|
http://www.sublimetext.com/forum/viewtopic.php?p=36546
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
trying to understand and use an Arrayjandy48 Apr 6, 2012 5:43 PM
I would like to have a game where the player has three tries before the game stops or moves on to another level.
In this game an object jumps up with a mouseClick and if it doesn't hit its target it falls and crashes into a floor that uses hitTestObject.
This leads to a restartBtn, but I want that movieClip to remain on the stage, which has an animation that splatters. A new MovieClip is put on to the stage and the cycle starts over.
Before this stage of the game is over I want the various movieClip splatters to be visible on the stage.
I thought an Array would help me achieve this result but I'm not familiar with using them dynamically.
I'm hoping someone can give me some tips as to what might work.
I have temporarily separated this problem from the rest of the code as I'm hoping it will be clearer.
This is where I left off, and when I click the button it seems to eliminate the previous movieClip and introduce the next one.
But it seems like I'm missing something, so I thought I would post it as it is probably a problem that comes up a lot in games. Thanks
import flash.display.MovieClip;
import flash.events.MouseEvent;
var movieArray:Array = new Array();
movieArray = ["Egg_A","Egg_B","Egg_C"];
movieArray[0] = new Egg;
movieArray[1] = new Egg_B;
movieArray[2] = new Egg_C;
var myMovieClip:MovieClip;
init();
function init()
{
for (var i:int = 0; i < movieArray.length; i++)
{
emptyMC.addChild(movieArray[i]);
}
}
mainBtn.addEventListener(MouseEvent.CLICK,changeEgg);
function changeEgg(evt:MouseEvent):void
{
for (var i:int = 0; i < movieArray.length; i++)
{
movieArray.splice(i,1);
}
init();
}
1. Re: trying to understand and use an Array - Ned Murphy, Apr 6, 2012 6:57 PM (in response to jandy48)
I don't see where an array is going to make anything remain. Just having an instance created without removing it until you want it to go away is all you need.
I don't see much reason with what you are doing with that array either. First you assign a set of strings to it, then you replace those strings with instances of some Egg objects. Then you add all the eggs to the display at once in your init() function (not one at a time), or you remove them all from the array with your change Egg function... calling the init() function after emptying the array isn't going to yield much since the init() function uses the array.
2. Re: trying to understand and use an Array - jandy48, Apr 7, 2012 3:49 AM (in response to Ned Murphy)
OK
So I might try placing three or more instances on the stage and changing their visibility as I need to use them.
I'll keep playing with it, but at least I'm getting more familiar with Arrays.
Thanks
|
https://forums.adobe.com/thread/986436
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
NAME
device_add_child, device_add_child_ordered - add a new device as a child of an existing device
SYNOPSIS
#include <sys/param.h>
#include <sys/bus.h>

device_t device_add_child(device_t dev, const char *name, int unit);

device_t device_add_child_ordered(device_t dev, int order, const char *name, int unit);
DESCRIPTION
Create a new child device of dev. The name and unit arguments specify the name and unit number of the device. If the name is unknown then the caller should pass NULL. If the unit is unknown then the caller should pass -1 and the system will choose the next available unit number.

The name of the device is used to determine which drivers might be interested in probing it. This allows busses which can uniquely identify device instances (such as PCI) to allow each driver to check each device instance for a match. For busses which rely on supplied probe hints, where only one driver can have a chance of probing the device, the driver name should be specified as the device name.

Normally unit numbers will be chosen automatically by the system and a unit number of -1 should be given. When a specific unit number is desired (e.g. for wiring a particular piece of hardware to a pre-configured unit number), that unit should be passed. If the specified unit number is already allocated, a new unit will be allocated and a diagnostic message printed.

If the devices attached to a bus must be probed in a specific order (e.g. for the ISA bus some devices are sensitive to failed probe attempts of unrelated drivers and therefore must be probed first), the order argument of device_add_child_ordered() can be used to control the order in which children are added and probed.
|
http://manpages.ubuntu.com/manpages/intrepid/man9/device_add_child.9freebsd.html
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
Everyone's Recent Snippets Tagged 'private'
-
ActionScript 3 javascript js method oop function variable property scope self private public Anonymous executing
JS OOP Example using a self-executing anonymous function
posted on April 13, 2012 by adrianparr
Java final field private static
Change private static final field using java reflection
posted on September 26, 2011 by joycollector
JavaScript class object prototype private public
Classes objects prototype and static
posted on July 1, 2011 by devnull69
JavaScript closure unobtrusive interface namespace private public
using unobtrusive namespace
posted on March 21, 2011 by coprolit
JavaScript javascript class object template method pattern variable module scope namespace private public shield revealing
Javascript revealing module pattern template
posted on March 17, 2011 by coprolit
|
http://snipplr.com/all/tags/private/
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
Back from a nice long weekend, although I spent most of it sick with a cold. I find this increasingly the way with me: I fend off illness for months at a time (probably through stress, truth be told) but then I get a few days off and wham. A shame, as we had a huge dump of snow over the weekend... we get white Christmases here every five years or so, but it's really uncommon to get a white Easter.
I had a very interesting question come in by email from 冷血儿, who wanted to get the technique shown in this post working in his F# application.
Here's the F# code I managed to put together after consulting hubFS, in particular:
#light
namespace MyNamespace
open Autodesk.AutoCAD.Runtime
open Autodesk.AutoCAD.ApplicationServices
type InitTest() =
class
let ed =
Application.DocumentManager.MdiActiveDocument.Editor
interface IExtensionApplication with
member x.Initialize() =
ed.WriteMessage
("\nInitializing - do something useful.")
member x.Terminate() =
printfn "\nCleaning up..."
end
end
module MyApplication =
let ed =
Application.DocumentManager.MdiActiveDocument.Editor
[<CommandMethod("TST")>]
let f () =
ed.WriteMessage("\nThis is the TST command.")
[<assembly: ExtensionApplication(type InitTest)>]
do
ed.WriteMessage("\nModule do")
Here's what happens when we load our module and run the TST command:
Command: NETLOAD
Module do
Initializing - do something useful.
Command: TST
This is the TST command.
|
http://through-the-interface.typepad.com/through_the_interface/2008/03/initialization.html
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
The script interpreter is throwing an exception complaining about character mismatch which doesn't really make sense. Here is the exact error line as returned to me by the JVM:
Exception in thread "main" SyntaxError: ("mismatched character 's' expecting '='", ('<iostream>', 44, 49, '\t\t\tif (GlobalStatesAndVariables.keys[index] and !self.mProcessedKeys[index]):\n'))
and here is the allegedly offending code:
def ProcessKeys(self):
    for i in range(MotionModule.NUM_OF_IDS):
        if self.keyMappings[i] == GlobalStatesAndVariables.NULL:
            continue
        index = self.keyMappings[i]
        if (GlobalStatesAndVariables.keys[index] and !self.mProcessedKeys[index]):
            # the error points between 'n' and 'd' of the first index variable
            self.mEntity.mMotionModule.SetMotion(i, true)
            self.mProcessedKeys[index] = true
        elif (!GlobalStatesAndVariables.keys[index] and self.mProcessedKeys[index]):
            self.mEntity.mMotionModule.SetMotion(i, false)
            self.mProcessedKeys[index] = false
I can't put my finger on what the interpreter is complaining about. Any ideas?
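For what it's worth, the parser is choking on the ! characters: Python (and therefore Jython) has no ! operator, so logical negation must be written with the not keyword, and the boolean literals are True and False rather than true and false. A corrected sketch of the offending branch:

if GlobalStatesAndVariables.keys[index] and not self.mProcessedKeys[index]:
    self.mEntity.mMotionModule.SetMotion(i, True)
    self.mProcessedKeys[index] = True
elif not GlobalStatesAndVariables.keys[index] and self.mProcessedKeys[index]:
    self.mEntity.mMotionModule.SetMotion(i, False)
    self.mProcessedKeys[index] = False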
|
http://www.gamedev.net/topic/618760-strange-python-syntax-error/
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
futimesat - change timestamps of a file relative to a directory file descriptor
#include <fcntl.h>
int futimesat(int dirfd, const char *path,
const struct timeval times[2]);
The futimesat() system call changes the access and modification times of the file named by path, which is normally interpreted relative to the directory referred to by the open file descriptor dirfd.

If the pathname given in path is relative and dirfd is the special value AT_FDCWD, then path is interpreted relative to the current working directory of the calling process (like utimes(2)).

If the pathname given in path is absolute, then dirfd is ignored.
path_resolution (2)
stat (2)
utimes (2)
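As a cross-language illustration (not part of the original page), Python exposes the same dirfd-relative behavior through the dir_fd parameter of os.utime, which is built on the same family of *at system calls; the directory and file names below are placeholders:

import os

# open a directory file descriptor, playing the role of dirfd
dfd = os.open("/tmp", os.O_RDONLY)

# "somefile" is resolved relative to dfd, as with futimesat(dfd, "somefile", times)
os.utime("somefile", times=(1650000000, 1650000000), dir_fd=dfd)

os.close(dfd)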
|
http://www.tutorialspoint.com/unix_system_calls/futimesat.htm
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
Agenda
See also: IRC log
[Ben and Ralph talk about microformats while waiting for others]
Mark: I met again today with
IPTC, followup from Friday meeting; they're keen to progress
further
... Friday meeting was very positive
... Misha focussed the discussion on his document first, then we dived into details
... IPTC wants a small set of relationships between data
... they want to make statements about the relationships between other tags
... reification is a big issue for them; they want to give [provenance] -- who assigned a 'tag', with what confidence, etc.
... they're very open to RDF/A, as long as they get a compact syntax
... in their proposed syntax there is a lot of level-mixing
... e.g. attributes whose subject is a statement and other attributes whose subject is something else
... they admitted this could get confusing to people
... this can help us, as it's a real meaty application; I felt very positive about this
... they have a tight timescale; their next meeting is in October and they need to publish documents beforehand, thus 3 Oct is their deadline
Ben: how will they do RDF/A without XHTML2?
Mark: they have several
languages;
... one is comparable to XHTML2; it marks up a news story
... they're not completely convinced that XHTML2 gives them everything they need
... they have a suite of other languages; e.g. SportsML that sit on their base language
... then they have another layer that provides alternative formats for a document
... a wrapper describes the 'package' of formats available
... document format to wrap other documents
... another format: many documents that make up a news story
... 3 document formats. XHTML2 might replace the bottom one, but not others.
... RDF/A was designed as general purpose attribute syntax
Ben: can RDF/A be added to XHTML1
Mark: there's been discussion
about this.
... there are ways to mark up DanBri's examples using XHTML1
Ben: very interesting to think about this issue. Could it be accomplished with XHTML1 that renders correctly in the browser?
Mark: property is not an XHTML1 attribute, but it could be added as a module.
Ben: how is XHTML1 modularization going to interact with validators?
Mark: there are probably no XHTML
1.1 validators, because there is no schema yet
... XHTML 1.1 will have modularization, in order to replace XHTML 1.0.
... DTDs for XHTML 1.0 will be replaced by XHTML 1.1 schemas
... tried using XHTML 1.1 architecture to combine multiple modules and add XForms. It didn't work.
... Xforms is a bit more complicated of a use case.
... currently tidying up XHTML 1.1 modularization. Later XHTML2 using same techniques.
Ben: keep us posted on this; it might help for adoption
Mark: tricky thing is QNAME issue
for predicates.
... could argue that there's nothing to stop you using Qnames in REL.
... same as "DC.creator"
Ben: what's the current direction on qnames everywhere?
Mark: would be great to have
qnames and URIs interchangeable. square brackets won't fly
either.
... need to add qabout and qhref
... or qcontent
Ben: in terms of consistency, if we add qabout and qhref do we also need qrel?
Mark: yes
... [even though] it would be painful for every attribute to have a q-version
Ben: if we do settle on the qabout, qhref direction then adding qrel would settle the problem
Mark: and existing rel would be a
kind of local identifier
... this could provide a neat answer to the backwards compatibility issue
... might also be solvable by putting constraints on the namespace prefixes; e.g. insist that the namespace prefixes used within a document not match [known] URI schemes
... it will become common to sprinkle qnames throughout HTML documents in the future
... if we do choose the q-attribute route, I prefer 'qcontent' and leave 'href' alone
Ben: any other issues from IPTC discussion?
Mark: we need to settle our view
on reification
... we've revised the interpretation of ID so many times in this design
... though I quite like the idea that ID refers to the _statement_
... IPTC does want to be able to say _who_ made a statement, _when_ the statement was made, how confident they are of the statement, etc.
Ben: did custom attributes come up?
Mark: sort of, but I described it
as solved
... but there is a desire to treat, for example, role= as identifying a property/value pair
... other things like media-type would benefit from such an approach as well
... IPTC's requirement is for compactness
... they don't like 'property=x content=y' on every element
... They've also defined elements ala 'dc:subject' with attributes
... very RDF-like
... they also have the problem of Schema validation
Ben: it would be good to summarize our current thinking on these 3 issues in an email
Mark: we could use DanBri's
samples to help this discussion
... I've already noticed some possible improvements after trying to write DanBri's examples
... IPTC has also promised some examples, including some very big cases
... to help us understand the compactness issue
ACTION: Ben create a way to track progress on the 3 issues of qnames, reification, and custom attributes and elements [recorded in]
Ralph: would Misha be interested in formally joining the WG?
Mark: perhaps. They're certainly
willing to attend some TF telecons.
... but they probably won't want to attend every meeting
... there may be applications that want to do metadata extraction by traversing an infoset via DOM
|
http://www.w3.org/2005/07/12-swbp-minutes
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
This function determines if the specified entry has child entries in the backend where it resides.
#include "slapi-plugin.h" int slapi_entry_has_children( const Slapi_Entry *e );
This function takes the following parameter:
e: The entry that you want to test for child entries.
This function returns 1 if the entry you supply has child entries in the backend where it resides; otherwise it returns 0. Notice that if a subsuffix is in another backend, this function does not find children contained in that subsuffix.
|
http://docs.oracle.com/cd/E19693-01/819-0996/aaigx/index.html
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
cutting@apache.org wrote:
> Author: cutting
> Date: Thu Mar 30 14:22:31 2006
> New Revision: 390260
>
> URL:
> Log:
> Fix for HADOOP-103, part II: I forgot to add this file the first time!
>
> +public class MapReduceBase implements Closeable, JobConfigurable {
> +
> + public void close() {
> + }
>
Shouldn't this method throw an IOException, as it is declared in Closeable?
--
Best regards,
Andrzej Bialecki <><
___. ___ ___ ___ _ _ __________________________________
[__ || __|__/|__||\/| Information Retrieval, Semantic Web
___|||__|| \| || | Embedded Unix, System Integration Contact: info at sigram dot com
|
http://mail-archives.apache.org/mod_mbox/hadoop-common-commits/200604.mbox/%3C44324F14.1020403@getopt.org%3E
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
csRenderContext Class Reference
[Views & Cameras]
This structure keeps track of the current render context. More...
#include <iengine/rview.h>
Detailed Description
This structure keeps track of the current render context.
It is used by iRenderView. When recursing through a portal a new render context will be created and set in place of the old one.
Definition at line 89 of file rview.h.
Member Data Documentation
If true then we have to clip all objects to the portal frustum (either in 2D or 3D).
Normally this is not needed but some portals require this. If do_clip_plane is true then the value of this field is also implied to be true. The top-level portal should set do_clip_frustum to true in order for all geometry to be correctly clipped to screen boundaries.
Definition at line 142 of file rview.h.
If true then we clip all objects to 'clip_plane'.
In principle one should always clip to 'clip_plane'. However, in many cases this is not required because portals mostly arrive at the boundaries of a sector, so there can actually be no objects after the portal plane. But it is possible that portals arrive somewhere in the middle of a sector (for example with BSP sectors or with Things containing portals). In that case this variable will be set to true and clipping to 'clip_plane' is required.
Definition at line 132 of file rview.h.
The documentation for this class was generated from the following file:
Generated for Crystal Space 1.4.1 by doxygen 1.7.1
|
http://www.crystalspace3d.org/docs/online/api-1.4.1/classcsRenderContext.html
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
#include <Xm/List.h> void XmListDeleteItems( Widget widget, XmString *items, int item_count);
XmListDeleteItems deletes the specified items from the list. For each element of items, the first item in the list that matches that element is deleted. A warning message appears if any of the items do not exist.
For a complete definition of List and its associated resources, see XmList(3).
|
http://www.makelinux.net/man/3/X/XmListDeleteItems
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
I had a great time at my PDC2009 talk, but I was disappointed that I could not demo in both C# and VB… So here is the next best thing: a full play-by-play of the demo, but all in VB! Enjoy.
What you need to get started:
I am starting off with the new Business Application Template that gets installed with RIA Services.
This new template includes:
For this demo, I am going to use a customized version of the template.
After you create the project, you see we have a simple solution setup that follows the “RIA Application” pattern. That is, one application that happens to span client (Silverlight) and server (ASP.NET) tiers. These two are tied such that any change in the Silverlight client is reflected in the server project (a new XAP is placed in the client bin) and appropriate changes in the server result in new functionality being exposed to the Silverlight client. Two parts of the same application.
I started out with an Entity Framework model. RIA Services supports any DAL including Linq2Sql, NHibernate as well as DataSets and DataReader\Writer. But EF has made some great improvements in .NET 4, so I felt it was a good place to start.
So here is the EF model I created. Basically we have a set of restaurants, each of which has a set of plates they serve. A very simple model designed mainly to show off the concepts.
Then we need a place to write our business logic that controls how the Silverlight client can interact with this data. To do this, create a new DomainService.
Then select the tables you want to expose:
Now, let’s look at our code for the DomainService…
In line 10 – we are enabling this service to be accessed from clients.. without this, the DomainService is only accessible on the server machine (for example, from an ASP.NET application hosted on the same machine).
In line 11: we are defining the DomainService – you should think of a DomainService as just a special kind of WCF Service.. one that is higher level and has all the right defaults set so that there is zero configuration needed. Of course the good news is that if you *need* to you can get access to the full richness of WCF and configure the services however you’d like.
In line 12: you see we are using the LinqToEntitiesDomainService. RIA Services supports any DAL including LinqToSql or NHibernate. Or what I think is very common is just POCO.. that is deriving from DomainService directly. See examples of these here…
In line 14: We are defining a Query method.. this is based on the LINQ support added in VS2008. Here we define the business logic involved in returning data to the client. When the framework calls this method, it will compose a LINQ query including paging, sorting, and filtering from the client, then execute it directly against the EF model, which translates it into optimized TSQL code. So no big chunks of unused data are brought to the mid-tier or the client.
Now let’s switch over the client project and look at how we consume this.
in Views\Home.xaml we have a very simple page with just a DataGrid defined.
now let’s flip over to codebhind..
Notice we have a MyApp.Web namespace available on the client. Notice that is the same namespace we defined our DomainService in..
So, let’s create a local context for accessing our DomainService. First thing you will notice is that VS2010 Intellisense makes it very easy to find what we want.. it now matches on any part of the class name.. So just typing “domainc” narrows our options to the right one..
In line 2, notice there is a property on context called Restaurants. How did we get that there? Well, there is a query method defined on the DomainService returning a type of type Restaurant. This gives us a very clean way to do databinding. Notice this call is actually happening async, but we don’t have to deal with any of that complexity. No event handlers, callbacks, etc.
In line 4, while the whole point of RIA Services is to make n-tier development as easy as two-tier development that most of us are used to, we want to make sure the applications that are created are well behaved. So part of this is we want to be explicit when a network call is being made.. this is not transparent remoting. Network calls must be explicit. In this line we are mentioning which query method to use as you might define more than one for the same type with different logic.
Now we run it..
This is very cool and simple. But in a real world case, I am guessing you have more than 20 records… sometimes you might have 100s, or thousands or more. You can’t just send all those back to the client. Let’s see how you can implement paging and look at some of the new design time features in VS2010 as well.
Let’s delete that code we just wrote and flip over to the design surface and delete that datagrid.
Drop down the DataSources window (you may need to look under the Data menu for “Show Data Sources”
If you are familiar with WinForms or WPF development, this will look at least somewhat familiar to you. Notice our DishViewDomainContext is listed there with a table called Restaurant. Notice this is exactly what we saw in the code above because this window is driven off that same DomainContext.
Dropping down the options on Restaurant, we see we have a number of options for different controls that can be used to view this data… of course this is extensible and we expect 3rd party as well as your custom controls to work here. Next see the query method here that is checked. That lists all the available options for query methods that return Restaurant.
Now if we expand the view on Restaurant, we see all the data member we have exposed. This view gives us a chance to change how each data member will be rendered. Notice I have turned off the ID and changed the Imagepath to an Image control. Again this is an extensible and we expect 3rd party controls to plug in here nicely.
Now, drag and drop Restaurant onto the form and we get some UI
And for you Xaml heads that want to know what really happens… Two things. First if the DomainDataSource is not already created, one is created for you.
Finally, the DataGrid is created with a set of columns.
Then set up a grid cell by clicking 4/5ths of the way down on the left grid adorner. Then select the grid, right-click, and select Reset Layout\All.
... and poof! VS automatically lays out the DataGrid to fill the cell just right.
Now, personally, I always like the Name column to come first. Let’s go fix that by using the DataGrid column designer. Right click on the DataGrid select properties then click on the Columns property..
In this designer you can control the order of columns and the layout, etc. I moved the image and name fields to the top.
Now, let’s add a DataPager such that we only download a manageable number of records at a time. From the toolbox, simply drag the datapager out.
We use our same trick to have VS auto layout the control Right click on it and select Reset Layout\All.
That is cool, but there is a big gap between the DataGrid and the DataPager.. I really want them to be right next to each other. This is easy to fix. Right click on the grid adorner and select “Auto”..
Perfect!
Now, we just need to wire this up to the same DataSource our DataGrid is using, with “connect-the-dots” databinding. Simply drag the Restaurant from the DataSources window on top of the DataPager.
For you Xaml heads, you’ll be interested in the Xaml this creates..
Notice, we don’t need to create a new DomainDataSource here… we will use the one that is already on the page.
Now, we are doing an async call.. so let’s drag a BusyIndicator from the new Silverlight 4 Toolkit.
We need to wire up IsBusy to restaurantDomainDataSource.DomainContext.IsLoading… Luckily there is a nice databinding helper in VS2010. Select properties, then IsBusy, then DataBinding.
Again, for you Xaml heads, the Xaml that gets generated is pretty much what you’d expect.
and once it is loaded…
Very cool… that was a very easy way to get your data. Page through it and notice that with each page we are going back all the way to the data tier to load more data. So you could just as easily do this on a dataset of million+ records. What is more, sorting works as well, just as you’d expect. It doesn’t sort just the local data, it sorts the full dataset, and it does it all the way back on the data tier and just pulls forward the page of data you need to display.
But our pictures are not showing up… let’s look at how we wire up the pictures. The reason they are not showing up is that our database returns just the simple name of the image, not the full path. This allows us to be flexible about the where the images are stored. The standard way to handle this is to write a value converter. Here is a simple example:
Now, let’s look at how we wire this converter to the UI. First, let’s use the Document Outline to drill through the visual tree to find the Image control.
Then we select the properties on the image and wire up this converter. If you have done this in Xaml directly before, you know it is hard to get right. VS2010 makes this very easy!
Oh, and for you Xaml heads… here is what VS created..
and
Now let’s look at how we drill down and get the details associated with each of these records. I want to show this in a “web” way… So I’ll show how to create a deep link to a new page that will list just the plates for the restaurant you select.
First we add a bit of Xaml to add the link to the datagrid..
And to implement the button click handler…
Here we are getting the currently selected Restaurant, then we cons up a new URL to the page “Plates”. We pass a query string parameter of restaurantId…
Now, let’s build out the Plates page that will show the list of Plates for this restaurant. First let’s create a Plates page: add a new Plates page under the Views directory.
Now we need to define a query to return the Plates. Notice that only the data you select is exposed. So we get to go back to the server, to our DishViewDomainService and add a new query method.
Now we go back to the client, and see your DataSources window now offers a new datasource: Plates.
Now, just as we saw above, I will drag and drop that data source onto the form and I get a nice DataGrid already wired up to a DomainDataSource.
Then, with a little formatting exactly as we saw above, we end up with…
And when we run it… First, you see the link we added to the list of Restaurants..
Clicking on anyone of them navigates us to our Plates page we just built.
This is cool, but notice we are actually returning *all* the plates, not just the plates from the restaurant selected. To address this, we first need to modify our GetPlates() query method to take in a restaurant id.
Now, back on the client, we just need to pass the query string param…
Now, we run it and we get the just the plates for the restaurant we selected.
what’s more is we now have a deep link such that it works when I email, IM or tweet this link to my buddy who happens to run a different browser ;-)
Ok… now for a details view… Let’s do a bit more layout in the Plates.xaml. First, let’s split the form in half vertically to give us some cells to work in.
In the bottom left we will put the details view to allow us to edit this plate data. Let’s go back to the DataSources window and change the UI type to Details.
Dragging that Details onto the form… we get some great UI generation that we can go in and customize.
In particular, let’s format that Price textbox as a “currency”… using the new String Formatting support in Silverlight 4.
And again, for you Xaml heads… this created:
Now, let’s add an image to the other side. Simply drop an Image control on the form and select Reset Layout\All
Now we can easily change the image to be “Uniform”
Now we need to wire up the binding here so that as selection changes, this image is updated. Luckily, that is very easy to do. Simply drag and drop from the Data Sources window…
Then we need to wire up our converter just as we saw before..
Run it…
That looks great!
But when we try edit something, we get this error..
Ahh, that is a good point, we need to go back and explicitly define a Update method to our DomainService on the server.
In line 2, notice we take the NumberUpdates and increment it by one. It is nice that we send the entire entity back and forth, so we can do entity-level operations very easily.
Next in line 3, we pull out the original value… this is the plate instance as the client saw it before it was updated.
In line 4-7, we first check to see if the price has changed, if it has, we add a fee of one dollar for a price change.
Finally in line 8-9, we submit this change to the database.
Now we just need to drop a button on the form.
Then write some codebehind..
What this is going to do is find all the entities that are dirty (that have changes) and package them up and send them to the server.
Now notice if you change the price of the data and hit submit, the NumberUpdates goes up by one and the price has the one-dollar fee added.
Then submit.. NumberUpdates is now 63 and the price is $73.84..
Then if you set a breakpoint on the server, change two or three records on the client. Notice the breakpoint gets hit for each change. We are batching these changes to make an efficient communication pattern.
Great.. now let’s look at data validation.
We get some validation for free. For example, Calorie Count is an int; if we put a string in, we get a stock error message.
If we want to customize this a bit more, we can go back to the server and specify our validation there. It is important to do it on the server because you want validation to happen on the client for good UI, but on the server for the tightest security. Following the DRY principle (Don’t Repeat Yourself) we have a single place to put this validation data that works on the client and the server.
The data validation attributes are a core part of .NET with ASP.NET Dynamic Data and ASP.NET MVC using the exact same model.
But what if they are not expressive enough for you? For example, say I have a custom validation I have for making sure the description is valid.. To do that, I can write some .NET code that executes on the server AND the client. Let’s see how to do that. First I create a class on the server..
Notice the name here PlateValidationRules.shared.cs…. the “.shared” part is important… it is what tells us that this code is meant to be on the client and the server.
In this case, I am saying a valid description is one that has 5 or more words.
Then to wire this up to the description property…
Then running the app, we see all our validations…
Lots of times in business applications we are dealing with valuable data, so we need to make sure the user is authenticated before we return it. Luckily this is very easy to do with RIA Services. Let’s go back to our DomainServices on the server and add the RequiresAuthentication attribute.
Then when you run the application..
So let’s log in… I don’t have an account created yet; luckily the Business Application Template supports new user registration. All this is based on the ASP.NET authentication system that has been around since ASP.NET 2.0.
Here we are creating a new user…
And now we get our data…
Now, that we have a user concept.. why don’t we add one more setting to let the user customize this page. So we edit the web.config file to add a BackgroundColor.
And we go into the User.cs class on the server and add our BackgroundColor.
Now, back on the client, let’s build out UI using the DataSources window just as we have seen above. But this time, I have created a very simple ColorPicker control in order to show that it is possible to use your own custom control.
Drag and drop that onto the form..
Then change the binding to be TwoWay using the databinding picker.
Then I think we need a nice header here with the user name in it. To do that, let’s add a TextBlock and set the font size to be big. Then do connect-the-dots databinding to wire it up to the user name.
Then let’s use the string format databinding to customize this a bit..
Next we put a Submit button.
Now when we run it… we can modify the user settings.
The really cool part is that if the user goes to another machine and logs in, they get the exact same experience.
Wow, we have seen a lot here.. We walked through end-to-end how to build a Business Application in Silverlight with .NET RIA Services. We saw the query support, validation, updates, authorization and personalization, as well as all the great new support in VS2010. Enjoy!
|
http://blogs.msdn.com/b/brada/archive/2009/11/27/pdc09-talk-building-amazing-business-applications-with-silverlight-4-ria-services-and-visual-studio-2010-now-in-visual-basic.aspx?Redirected=true
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
Memory Usage Constantly Rising - Posted Thursday, 21 June, 2012 - 16:58 by daleluck
Description
I was in the middle of my other project and noticed the large amount of memory that my program had taken up. Since it was just a test, I decided I'd start again and just port over the pathfinding library whilst redesigning exactly how the system works - but then I noticed that the memory allocation of the program starts at 17000k and constantly rises as the program runs, even with just the below piece of code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using OpenTK;
using OpenTK.Input;
using OpenTK.Graphics;
using OpenTK.Graphics.OpenGL;

namespace Program
{
    class Program
    {
        // Game Window Details
        public static GameWindow MainGame;
        public static int WindowWidth = 800, WindowHeight = 600;
        public static double WindowFPS = 30.0;

        static void Main(string[] args)
        {
            // sets up the window for 2D, disallowing manually resizing it
            MainGame = new GameWindow(WindowWidth, WindowHeight, GraphicsMode.Default, "Test Window");
            MainGame.WindowBorder = WindowBorder.Fixed;
            GL.MatrixMode(MatrixMode.Projection);
            GL.LoadIdentity();
            GL.Ortho(0, WindowWidth, WindowHeight, 0, 0, 5);
            GL.MatrixMode(MatrixMode.Modelview);

            // sets up the different functions
            MainGame.RenderFrame += OnRenderFrame;

            // runs the window at the WindowFPS rate (30.0)
            MainGame.Run(WindowFPS);
        }

        static void OnRenderFrame(object sender, FrameEventArgs e)
        {
            GL.ClearColor(Color.Black);
            GL.Clear(ClearBufferMask.ColorBufferBit);
            MainGame.SwapBuffers();
        }
    }
}
All it does is clear the background to be black, but the memory it takes up is constantly rising - there's nothing else to the program, just setting up the window and then making the background black. I really want to know why this is happening and how can sort it out.
#1
Be sure to also test a release build with no debugger attached.
I don't have the time to test it out myself atm. But I would be interested in the results.
Also I think that when the memory reaches 1mb the garbage collector might clean up the 'leaks'
But as I said, I haven't tested it yet :-)
#2
Tested it in the release build, same problem arises. Also, what do you mean by it cleaning up once a megabyte of memory was reached? As far as I know, the amount of memory stands for how many kilobytes the program is using, and with this program starting off at 16000k memory usage I left it for a bit and checked back and it had risen to 25000k with no signs of slowing down or removing memory.
I also tested it with the 'quick start' sample and found the exact same issue, so I'm fairly sure it's not something I've written doing something wrong. I never noticed this before because I never opened my program for prolonged amounts of time, but since I'm making a game where the expectation is the user WILL have it open for so long it's kind of an issue I need sorting out quickly.
#3
What I meant was that maybe the program is generating garbage.
And if I remember correctly the garbage collector only starts collecting once 1mb of memory is allocated (I could be wrong here).
So if it's garbage it's not really a leak, just lazy cleanup by the GC.
But this might still be a problem depending on how much garbage is generated each frame; ideally it shouldn't generate any.
#4
I'm not sure how I'd go about lowering the amount of garbage that it makes. It happens in the example project too, so I'm sure I've not done anything wrong. I'll carry on with writing the program and get around to the memory cleanup once a solution has arisen, or hopefully, as you've suggested might happen, the program will just sort itself out at some point.
#5
Can't confirm. This code (.NET) takes 35MB memory but it didn't go up.
#6
Well that's odd, that code with nothing else attached uses up just over 15000k when it's executed and then steadily rises every few seconds (odd that it isn't doing it every couple of frames) with mine. I'm not sure what to do about this now if it's just a problem that I'm having.
#8
I cannot reproduce this on Windows, Linux or Mac OS X using the native or the SDL2 backend. This may be a driver issue.
If you can still reproduce this using OpenTK 1.1 beta4, please file a bug report at
Make sure to include the following information:
|
http://www.opentk.com/node/3037
|
CC-MAIN-2015-27
|
en
|
refinedweb
|
AKS with Azure Container Registry
Using Azure container registry with Azure Kubernetes Server
A private container registry is useful for building, well, private images, but it is also invaluable for republishing images that may not otherwise be available, due to outages or low availability, such as images on the Quay registry in the last few years, or on less reliable registries like GHCR (GitHub Container Registry).
In this article, we will cover using Azure Container Registry with Azure Kubernetes Service. Separately these components by themselves are not too complex, but combined together the process of deploying applications can get logistically complex.
What this article will cover
This article will cover building a Python client that will connect to the Dgraph distributed graph database using gRPC. This client will be built with Docker, pushed to ACR, and finally deployed to AKS using the image pulled from ACR.
This will be implemented through running these steps:
- Provision Azure Resources
- Deploy Dgraph distributed graph database
- Build and push the pydgraph-client utility image to ACR
- Deploy the pydgraph-client utility with the image pulled from ACR
- Demonstrate using client by gRPC and HTTP mutations and queries
Articles in Series
This series shows how to both secure and load balance gRPC and HTTP traffic.
- AKS with Azure Container Registry (this article)
- AKS with Calico network policies
- AKS with Linkerd service mesh
- AKS with Istio service mesh

Project file structure:

azure_acr/
├── env.sh
└── examples
    ├── dgraph
    │   └── helmfile.yaml
    └── pydgraph

The AKS and ACR cloud resources can be provisioned with the following steps:
Building the pydgraph client
Now comes the time to build the pydgraph client utility.
The Dockerfile
The Dockerfile will contain the instructions to build a client utility that contains the Python environment with a few tools, as well as the client script and data.

Copy the following and save as examples/pydgraph/Dockerfile:
The Makefile
This Makefile will encapsulate steps that can be used to build Docker images and push them to the ACR repository.

Copy the following and save as examples/pydgraph/Makefile:
NOTE: Copy the above exactly, including tabs, as tabs tell make to run a command.
The client script
This is a script that will load the Dgraph schema and data. Copy the following and save as examples/pydgraph/load_data.py:
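(The script itself was embedded in the original post and did not survive extraction. A minimal sketch of what such a loader might look like, using the pydgraph client API and the same flags shown later in this article; the original script may differ:)

import argparse
import pydgraph

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--alpha", default="localhost:9080")
    parser.add_argument("--files", required=True)   # RDF n-quads to load
    parser.add_argument("--schema", required=True)  # Dgraph schema file
    parser.add_argument("--plaintext", action="store_true")  # no TLS
    args = parser.parse_args()

    # plaintext gRPC channel to Dgraph Alpha
    stub = pydgraph.DgraphClientStub(args.alpha)
    client = pydgraph.DgraphClient(stub)

    # apply the schema
    with open(args.schema) as f:
        client.alter(pydgraph.Operation(schema=f.read()))

    # load the RDF data in a single transaction
    txn = client.txn()
    try:
        with open(args.files) as f:
            txn.mutate(set_nquads=f.read())
        txn.commit()
    finally:
        txn.discard()

    stub.close()

if __name__ == "__main__":
    main()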
The client package manifest
Copy the following and save as examples/pydgraph/requirements.txt:
The Dgraph schema
Copy the following and save as examples/pydgraph/sw.schema:
The Dgraph RDF data
Copy the following and save as examples/pydgraph/sw.nquads.rdf:
Build and Push the Image
Now that all the required source files are available, build the image:
source env.sh
pushd examples/pydgraph

## build the image
make build

## push the image to ACR
az acr login --name ${AZ_ACR_NAME}
make push

popd
During the build process, you should see something similar to this:
Deploying the pydgraph client
Now comes the time to deploy the pydgraph client utility, so that we can run queries and mutations using gRPC with python or HTTP with curl.
The Helmfile.yaml configuration
This helmfile.yaml configuration can be used to deploy the client utility, once its image is available in the ACR.

Copy the following and save as examples/pydgraph/helmfile.yaml:
Deploy the Client
Once the pydgraph-client image is available on ACR, Kubernetes resources that use the image can now be deployed:
source env.sh
helmfile --file examples/pydgraph/helmfile.yaml apply
You can run this to check the status of deployment:
kubectl --namespace pydgraph-client get all
This should result in something like the following:
Use the pydgraph client
Log into the container with the following command:
PYDGRAPH_POD=$(kubectl get pods \
  --namespace pydgraph-client --output name
)

kubectl exec -ti --namespace pydgraph-client ${PYDGRAPH_POD} -- bash
Health Checks
Verify that the cluster is functional and healthy with this command:
curl ${DGRAPH_ALPHA_SERVER}:8080/health | jq
This should show something like:
gRPC checks
Verify that gRPC is functional using the grpcurl command:
grpcurl -plaintext -proto api.proto \
${DGRAPH_ALPHA_SERVER}:9080 \
api.Dgraph/CheckVersion
NOTE: Dgraph serves HTTP traffic through port 8080 and gRPC traffic through port 9080.
This should show something like:
Run the Load Data Script
Load the schema and RDF data using the load_data.py Python script:
python3 load_data.py --plaintext \
--alpha ${DGRAPH_ALPHA_SERVER}:9080 \
--files ./sw.nquads.rdf \
--schema ./sw.schema
Query All Movies
Run this query to get all movies:
curl "${DGRAPH_ALPHA_SERVER}:8080/query" --silent \
--request POST \
--header "Content-Type: application/dql" \
--data $'{ me(func: has(starring)) { name } }' | jq
This result set should look something similar to the following:
Query movies released after 1980
Run this query to get movies released after 1980:
curl "${DGRAPH_ALPHA_SERVER}:8080/query" --silent \
--request POST \
--header "Content-Type: application/dql" \
--data $'
{
me(func: allofterms(name, "Star Wars"), orderasc: release_date) @filter(ge(release_date, "1980")) {
name
release_date
revenue
running_time
director {
name
}
starring (orderasc: name) {
name
}
}
}
' | jq
The result set should look similar to this:
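For reference, the same sort of DQL query can be issued from Python instead of curl. A small sketch using the third-party requests package (an illustration, not from the original article; the address below is an assumption standing in for ${DGRAPH_ALPHA_SERVER}):

import requests

# assumption: stands in for the DGRAPH_ALPHA_SERVER value used above
DGRAPH_ALPHA_SERVER = "localhost"

resp = requests.post(
    "http://%s:8080/query" % DGRAPH_ALPHA_SERVER,
    headers={"Content-Type": "application/dql"},
    data='{ me(func: has(starring)) { name } }',
)
print(resp.json())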
Clean up
All resources can be deleted with the following commands:
source env.sh
az aks delete \
--resource-group ${AZ_RESOURCE_GROUP} \
--name ${AZ_CLUSTER_NAME}
rm -rf ${KUBECONFIG}
NOTE: Because Azure manages cloud resources like load balancers and external volumes under a resource group for the AKS cluster, deleting the AKS cluster will delete all cloud resource provisioned on Kubernetes.
Delete Kubernetes Resources
If you wish to continue to use the AKS cluster for other projects, and want to delete dgraph and pydgraph, you can delete these resources with the following commands:
source env.sh

helm delete --namespace dgraph demo
kubectl delete pvc --namespace dgraph --selector release=demo
helm delete --namespace pydgraph-client pydgraph-client
Resources
Here are some links to topics, articles, and tools used in this article:
Blog Source Code
- Build-Push Container Image to ACR:
- AKS with ACR registry integration:
Container Image Standards
Azure Documentation
- Azure Container Registry:
- Azure Kubernetes Services:
- Tutorial: Deploy and use Azure Container Registry:
- Authenticate with ACR from AKS:
Conclusion
This article lays the groundwork for managing private container images, and for deploying clients and services that can communicate through both HTTP and gRPC. Though this article uses the ACR and AKS flavors, the same principles apply to similar solutions:
- Kubernetes flavors: GKE, EKS, RKE, KubeSpray, PMK, microK8s
- Container Registry flavors: GCR, ECR, Harbor, Docker Distribution, Project Quay, Sonatype Docker Registry
This article will be part of a new series that I am developing that will cover the following topic areas:
- Build, Push, and Deploy a containerized gRPC client application and corresponding server application. (this article)
- Restrict traffic between designated clients and servers using network policies with Calico.
- Secure and load balance gRPC traffic between clients and servers that are a part of a service mesh such as Linkerd or Istio.
This will serve as a springboard to explore more advanced patterns with blue-green deploy scenarios and o11y (cloud native observability) with metrics, tracing, logging, and visualization and alerting, for which there are a few popular solutions in these areas:
- blue-green deploy: ArgoCD, ArgoRollouts, Spinnaker, Kayenta, Flux or Flagger
- metrics: Prometheus, Metricbeat, or Telegraf
- tracing: Jaeger or Tempo
- logging: Fluentbit, Filebeat, or Loki
- visualization and alerting: Kibana, Grafana, AlertManager, Kapacitor, or Chronograf
|
https://joachim8675309.medium.com/aks-with-azure-container-registry-b7ff8a45a8a?source=post_page-----b7ff8a45a8a-----------------------------------
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Other Aliases
ceilf, ceill
SYNOPSIS
#include <math.h>
double ceil(double x);
float ceilf(float x);
long double ceill(long double x);
Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
ceilf(), ceill():
- _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE.
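As a quick illustration of the rounding rule these functions share (a Python sketch, not part of the man page; math.ceil follows the same smallest-integer-not-less-than behavior):

import math

# ceil returns the smallest integral value not less than x
print(math.ceil(2.1))    # 3
print(math.ceil(-2.1))   # -2, i.e. rounding toward positive infinity
print(math.ceil(5.0))    # 5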
COLOPHON
This page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at
|
https://manpages.org/ceil/3
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Bug #5480 (open)
Saxon EE Adding Extraneous namespaces to each ancestor node, i.e. xmlns=""
Description
Hi There - we are using Saxon EE 10.6.0 for .NET framework and having an issue with extraneous namespaces in the result-document output when transforming xml.
We have several XSLTs that transform XML which worked properly with Saxon EE 9.6, but with 10.6 we're getting extraneous href attributes added to each node, i.e. <.... When processed using 9.6, the results look correct. We are using a namespace-aware DOM to load the XML, so the issue is unrelated to that.
If anyone knows of a configuration setting or other change that would prevent those hrefs from appearing in each ancestor node it would be most appreciated.
Attached is an example of the xsl we are using.
Files
Updated by O'Neil Delpratt 6 days ago
- Project changed from SaxonC to Saxon
- Category set to .NET API
- Found in version deleted (10.6)
Updated by Martin Honnen 5 days ago
Can you add a small but representative XML input sample you process with the stylesheet you have already attached? It would also help if you show the relevant .NET (e.g. C#) code you use to run the transformation.
Updated by John Crane 5 days ago
- File SaxonSampleCode.cs SaxonSampleCode.cs added
Hi All,
Thanks for looking at this issue so quickly. I think we have discovered the error we were making, and now have transforms working correctly.
When doing the transform, we were previously using a DomDestination object for the results. We changed that to an XdmDestination type, which seems to have resolved our issue.
Attached is a sample of the code we are using - this is from a test application, the actual code is a bit more complicated - but this shows essence of the transform we are doing. Unfortunately the XML is quite long and not easily modified for sharing.
You can see the older DomDestination references are commented out - the uncommented code is what is now working. We've done preliminary tests that look good, and will continue to do more.
I think we have the issue resolved - but if you have any feedback on the code or suggestions in general we'd certainly appreciate them.
Many thanks again for looking so quickly. Incidentally, I meant to enter this as 'Support' - once I hit submit it was too late...
John C
Updated by Michael Kay 5 days ago
@John, for your reference Martin Honnen is a friendly user who solves a lot of bugs before we get to them (and also raises quite a few). Your thanks go to him and not to Saxonica!
You should definitely avoid using the DOM with Saxon unless you really need it, for performance reasons. If you want serialized output, use a Serializer as the destination.
But we'll keep the bug open, because we need to see why it isn't working properly with a DOM destination. We should be eliminating redundant namespace declarations when writing to the DOM tree.
Please register to edit this issue
Also available in: Atom PDF
|
https://saxonica.plan.io/issues/5480
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Use the interactive Infrastructure UI to monitor your infrastructure and identify problems in real time. You can explore metrics and logs for common servers, containers, and services.
Add data
Kibana provides step-by-step instructions to help you add log data. The Infrastructure Monitoring Guide is a good source for more detailed information and instructions.
Configure data sources
The metricbeat-* index pattern is used to query the data by default.
If your metrics are located in a different set of indices, or use a different timestamp field, you can adjust the source configuration via the user interface or the Kibana configuration file.
Logs and Infrastructure share a common data source definition in each space. Changes in one of them can influence the data displayed in the other.
Configure source
Configure source can be accessed via the corresponding button in the toolbar:
This opens the source configuration fly-out dialog, in which the following configuration items can be inspected and adjusted:
- Name: The name of the source configuration.
- Indices: The patterns of the elasticsearch indices to read metrics and logs from.
- Fields: The names of particular fields in the indices that need to be known to the Infrastructure and Logs UIs in order to query and interpret the data correctly.
Read only access
When you have insufficient privileges to change the source configuration, the following indicator in Kibana will be displayed. The buttons to change the source configuration won’t be visible. For more information on granting access to Kibana see Granting access to Kibana.
Configuration file
The settings in the configuration file are used as a fallback when no other configuration for that space has been defined. They are located in the configuration namespace xpack.infra.sources.default. See Infrastructure UI settings for a complete list of the possible entries.
|
https://www.elastic.co/guide/en/kibana/7.2/xpack-infra.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
matplotlib.pyplot is a collection of command style functions that make matplotlib work like MATLAB. Each pyplot function makes some change to a figure: e.g., create a figure, create a plotting area in a figure, plot some lines in a plotting area, decorate the plot with labels, etc. matplotlib.pyplot is stateful, in that it keeps track of the current figure and plotting area, and the plotting functions are directed to the current axes.
import matplotlib.pyplot as plt plt.plot([1,2,3,4]) plt.ylabel('some numbers') plt.show()
(Source code, png, hires.png, pdf)
You may be wondering why the x-axis ranges from 0-3 and the y-axis from 1-4. If you provide a single list or array to the plot() command, matplotlib assumes it is a sequence of y values and automatically generates the x values for you. Since Python ranges start with 0, the default x vector has the same length as y but starts with 0; hence the x data are [0,1,2,3]. plot() is a versatile command, and will take an arbitrary number of arguments. For example, to plot x versus y, you can issue the command:
plt.plot([1,2,3,4], [1,4,9,16])
For every x, y pair of arguments, there is an optional third argument which is the format string that indicates the color and line type of the plot. The default format string is 'b-', which is a solid blue line. For example, to plot the above with red circles, you would issue
import matplotlib.pyplot as plt plt.plot([1,2,3,4], [1,4,9,16], 'ro') plt.axis([0, 6, 0, 20]) plt.show()
(Source code, png, hires.png, pdf)
See the plot() documentation for a complete list of line styles and format strings. The axis() command in the example above takes a list of [xmin, xmax, ymin, ymax] and specifies the viewport of the axes. The example below illustrates plotting several lines with different format styles in one command using numpy arrays:
import numpy as np
import matplotlib.pyplot as plt

# evenly sampled time at 200ms intervals
t = np.arange(0., 5., 0.2)

# red dashes, blue squares and green triangles
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.show()
(Source code, png, hires.png, pdf)
The figure() command here is optional because figure(1) will be created by default, just as a subplot(111) will be created by default if you don't manually specify any axes. The subplot() command specifies numrows, numcols, fignum, where fignum ranges from 1 to numrows*numcols.
import numpy as np
import matplotlib.pyplot as plt

mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)

# the histogram of the data
n, bins, patches = plt.hist(x, 50, normed=1, facecolor='g', alpha=0.75)
plt.show()
|
https://matplotlib.org/1.3.0/users/pyplot_tutorial.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
“conda list envs” Code Answers
conda copy environment
whatever by Filthy Fowl on Dec 04 2020
conda create --clone py35 --name py35-2
conda list envs
cpp by Joseph Joestar on Mar 25 2020
conda info --envs conda env list
conda create environment based on requirements.txt
whatever by Encouraging Echidna on Sep 09 2020
# using pip pip install -r requirements.txt # using Conda conda create --name <env_name> --file requirements.txt
Source: stackoverflow.com
conda env
python by alpha virgo on Feb 20 2020
conda create -n myenv python=3.6
Source: docs.conda.io
conda env
python by alpha virgo on Feb 20 2020
conda env list
Source: docs.conda.io
how to see all the environments in Conda
typescript by Combative Caracal on Aug 23 2020
conda env list
conda check list of packages
conda create env from yml file
check conda envs
anaconda list packages
conda list installed packages in environment
conda see environment list
conda install dependencies from environment.yml
get list of anaconda virtual environment
conda env create -f
how to disable anaconda environment
conda create example
how to switch conda environment
make yaml file from viryual environment
anaconda list enviroments
overwrite conda environment
change anaconda environment
how to access environment conda
conda.ymal file
conda activate environment automatically
see my environment in conda
list virtual environments
conda create with requirements.txt
conda requirements txt in envs
python list modules in environment
conda create requirements file
show my conda env
how to create a conda environment using .yml file in linux
conda list all installed packages
environment.yaml
check conda path
conda environment with requirements.txt tutorial
see current conda environment
how to create requirements txt in conda environment
conda list python 3
conda install environment
check environments conda
conda check for environments
get environments conda
show all env in anaconda
show anaconda environments
anaconda prompt off environments
build conda environment from python requirements.txt
how to see environments in conda
conda env install requirements.txt
anaconda show all environments
conda create environment version
how to check the conda environment
command to see current environment python
check environment anaconda
multiple conda environments
find conda environement
conda env patj
conda env usage
how to make anaconda environment
conda make default environment
create python environment conda
conda env for dvc
make conda to see in which env you are
conda envir
create new environment in anaconda in a directory
conda cling env
create new miniconda environment
anaconda create environment python 3.5
download conda environment
anaconda conda environment
how to install library on conda env
anaconda create environment python version
conda en0
anaconda new environment python 3.7
create a older python environment using anaconda
#! anaconda env
creating env in conda
conda enviroments tutorial
how to setup anaconda environment
conda files
create a new anaconda environment
what is conda env command
conda environments
conda actual environment
get my env conda
conda enviorment show
conda config
view all conda envs
miniconda environment setup
conda set variables
creating new environment in anaconda
conda envs --infp
change environment anaconda prompt
how to create encironment in conda
conda create anaconda argment
new anaconda env
miniconda change env
in what environment does conda install its packages
conda install new environment
activate environment anaconda
packages in conda env
activate env in conda
conda environments -l
conda create environment and install packages
conda make environment
conda import env
conda env dir
conda add env variables
clonando env conda
anaconda make new environment
conda env config vars
how to activate conda in terminal
activate base environment conda
anaconda create environment yml file
conda dowenv
conda location of environment
list all conda env names
conda show all envs
delete environment anaconda
list of envs conda
conda list envir
conda env copy
how to see list conda environment
get a list of conda environments
how to change conda environment list
how copy and create same conda environment
show conda environments list
conda environment package list
conda create an env list according to the requirement file
conda list in an environment
conda list .e
deactivate environment in anaconda
miniconda conda requirements.txt
list all the conda environments
new enviroment conda
list o environment anaconda
how to view all conda environments
list all conda packages
conda list all environment variables
check list of environments conda
conda environment lsit
conda list my environments
creating a new virtual environment conda
conda command to list all user environments
mac create anaconda environment
conda ev list
anaconda list enviornment
conda create a copy of an environment and change name
list all conda environe,nt
conda environment list modules
conda list en
view a list of conda environments
copy from other conda environment package
conda list env packages
conda see list of environments
conda env check install package
interpret conda list
conda show list of env
check list of virtual environments conda
list conda environment version
how to list all the conda env
update list of conda environments
list environment using anaconda
list the installed conda env
conda list all venv
list all environments anaconda
list cona envs
how to list all the existing conda environments
condaq list environments
conda change envirionment
how to list all conda env
how to list conda enviromnets
conda --info envs
anaconda python list environments
list envs in conda
how to get into conda environment
anaconda terminal change enviormennt
conda get list of packages in environment
anaconda list environment
activate virtual environment conda
how to list envs in conda
conda environment.yml specify python version
list of packagesa env conda
list environments in anaconda
conda list environments directories
anconda create new environment
setting up the base environment anaconda
state what conda environment python
does conda 4.8.3 use venv
check python environment conda
conda not listing all environments
check enviornments in anaconda
how to list all conda virual envs
list conda enironments
delete environment in anaconda environment
deactivate environment
delete miniconda environment
manage environnement conda
activate a virtual environment and a channel
how to see list of virtual environment in anaconda
best methods to manage conda environment
how to create env in conda
crate environment conda
conda activation windows
how to see all conda environment
conda delete an evironment
list of virtual environments conda
duplicate conda environment
list all virtual environments conda
conda delete environment directory
conda create environment python version
create conda --name python
ubuntu create python environment from yml file
conda show envir
conda create --name env django
how to export most essential packages for conda environment
does conda environment contain files
conda check current environment
how to fresh install python in new conda environment
conda current env
ubuntu anaconda envirment
conda command list
creat enew env conda python
conda create virtualenv from yml
how to see all virtual environments conda
conda delete an environment
anaconda clone environment
list anaconda enviroments
activating environment base in anaconda
create vetual enviroment with conda
anaconda change environment in terminal
conda create clone
anaconda switch environments
create a new python 3.7 environment conda
how to activate the conda environment
miniconda add to env
create env in conda
command to create conda environment
anaconda create python environment
anaconda create environment python 3.8
miniconda update conda
anaconda cmd create environment
conda venv source
activate miniconda environment windows
create new env with conda
initiate conda environment
where does conda create environments
create environment python anaconda
tutorial, create a new anaconda environment
create new conda environmet
activating environment in anaconda
conda env for ai
anaconda new environment python 3.10
create conda environment python
conda create environment and install
how to add envirnoment variables to anaconda 3
open my miniconda env
show conda environment dependencies
set conda for
how to create anaconda envronment with python 3.7
can't create environment anaconda
creating an enviroment variable on conda
create a conda enviroment
create a new python 3 6 environment in anaconda
how to create an environment in anaconda navigator
conda add a list of modules to env using a file
where is conda env installed
python anaconda new environment
conda add package to environment
do i need to install conda in new environment
create environment in anaconda navigator
anaconda env create
check conda environments available
conda list envs package
creation environnement anaconda
start existing conda environment
conda import env
anaconda create environment 3.10.4
conda instaill packages to env
does miniconda create python in a new conda environment
create python 2.7 environment anaconda
miniconda create
anaconda environments conda
miniconda export environment
conda clone environment with python 2.7
set up new environment conda
python activate env conda
conda environment values
conda create environment file
create anaconda environment via command
crate conda env
access to conda env
anaconda create environment batch file
create a new conda virtual environment
create environment conda for all users\
manging environments with conda
conda add env
creating environments with anaconda
anaconda anaconda create new environment
how to use my created anaconda environment
conda varsim
initialization of conda environment
i can not use conda in a new environment
how to run a file in conda vritual env
anaconda create environment in specific directory
anaconda create python 3.7 environment with packages
miniconda create environment python 3.8
learn conda environments
create new env miniconda
new environmental anaconda
conda actiavate env
conda environments f
anaconda create environemnt
anaconda start new environment
anaconda new env
how to list python virtual environment
list all environment anaconda
list all virtual environments python mac
check list of libraries in virtual environment python
how to see the list on envronments in anaconda
conda check base environment
anaconda copy environment
terminal check environments conda
get list of virtual environments python
how to copy conda env
how do i know what conda environment i am
how to check conda env
python list modules in venv
how to check all conda env
how to know conda environment name
get all libraries installed in conda environment
show location conda
check conda environment console
conda command to find environment contents
show current conda env
how to see conda environment in terminal
list environments python
how to know the path of environment in anaconda
check conda installed packages on environment
env.list python
conda see config
list of venv python
how to copy a env in conda
python list packages in environment
select a conda environment
list python env
check env conda
conda copy environment manually
how to check available environment in conda
list all the environment s
how do i list virtual environments in python
can you copy conda environment
see all packages your conda environment
python os env list
copy conda environment to another machine
conda remove environments
conda see current environment
conda list all installed libraries
how to get running env of conda
hpt to find all anaconda environments
venv python list
how to know all packages in anaconda env
how to see your conda env
list all packages installed in conda env
export all environments anaonda
list venv python
list of virtual environments in python
ananconda navigator environment export
how to list the virtual environment in python
conda view every environemnt
how to see environment in anaconda
see packages on conda env
how many environment i have in conda how to check
env python list
how to show current environment location -conda
where to find anaconda environments
checking conda environments
how to get anaconda enviroments
python environment list packages
see conda venvs
copying a environment conda
how to know the name of your evn in conda
conda check where env is
list virtual environment python
how to get current env in anaconda
check location of conda environment
aconda copy env
conda prompt show environments
listing environments in anaconda
find conda environment
how to list python virtual environments
conda find what's installed
get the list of python virtual environments
list python environements
python program to list all environment variables
conda copy environment and rename
list of libraries in conda
conda create environment mac
create new python environment with conda
conda.yml file
conda copy environment to new name
conda list evironment
update conda environment yml file
conda clone
how to lists installs in conda
conda list of installed pacage
linux conda list sources
conda list envrimonets
how to see the version of environments ia anaconda prompt
anaconda create a new python env
basic script to test correct installation of conda environment
list of package in conda
conda recreate env yml
conda copy environment with different python version
can miniconda create nested enviorments
anaconda virtual environment management
conda env create env variable
anaconda list environment packages
conda create conda env
anaconda python environment list
how to activate existing conda environment
list of environments in conda
anaconda list venv
how to get anaconda virtual env package installed list
conda environment variable
use env conda
how to set anaconda in env windows
creatin conda env
ubuntu conda create environment
check active conda environment
update anaconda environment with python flag
miniconda path variable
list files in conda environment
conda set environment variable for
how to create an anaconda environment
conda copy environment to yml
check existing enviroments conda
conda replicate environment
how to see current packages in conda env
anaconda copy packages from environment
list virtual environments in python
how to see all anaconda environment
conda clone new environment
check environments anaconda
conda create env copy base
conda see enviroment
list of all environment python in linux
python get list of environment variables
how to copy conda env?
conda copy base environment to new
check existing conda environments
how to check environments using conda
conda copy env to new env
how do i find anaconda env folder
how to get all conda enviroment
conda enviroment setup
show enviroments in anaconda
how to see which environments
conda delete environment by path
conda create evn from yml
list conbda envs
how to see envs in conda
check installs in conda env
make conda environment visible in linux
command to see all the conda environments
all env in conda
conda env copy environment
how to see installed packages on conda environment
conda create copy of environment
find environments in conda
conda create a copy of an environment
how to see the list of environments in anaconda using cmd
how to see all your virtual conda environments
conda create new environment copy base
get all installed in conda env
anaconda create new virtual environment
how to change environments in anaconda promt
list conda env windows
conda list of a envoirmnet
change environment in anaconda
check conda environment
conda new enc
conda env from current
conda make one environment default
create virtual using envinroment.yml
list conda env versios
how to create env file in python and how toactivate
how to source a conda env
conda using env
delete virtual environment conda
conda remove emv
deactivate env conda
copy my miniconda environment
how to open python project on conda environment
newenv conda
list conda environment path
conda modules list
conda creatae
howto install msi file in conda virtual environment
setup anaconda environment
creating conda environment
how to see the list of environments conda
activate workspace conda
conda env in cmd
make environment in conda
list all conda enciroments
environment yaml
anaconda env setup
virtual environment in conda
create python environment anaconda package
how to remove environment from anaconda
where to find conda envs file
how to list all the modules in conda environment
how to open a directory in conda environment
anaconda svreate new
list of conda encs
conda install from yml
open conda environment
deactivate anaconda interpreter
where are packages in conda environment saved
conda create environment conda version
how to list all conda environment modules
conda activate environment base
conda env list package
env conda list
conda create from yml
conda evn list
conda create new environment with local package
conda list command
sharing a anaconda env
install conda environment
conda activate environment list
check environment is conda
conda list environments and sizes
use conda env python console
switch in specific conda environment in a terminal
make new environment anaconda
how to get libraries in conda env
activate conda environment from yml
conda get a list of environments
switching conda environments
anaconda switch environment
how to run code from a conda environment
yml conda
open anaconda environment in terminal
conda new environment from yml
list all installed packages in a conda environment
copy an enviroment conda
how ti install essentials in conda environemt
how to change the environment in anaconda prompt
get all my conda environments
list venv in conda
deactive environment in conda
miniconda load guis
check conda env
create new environment conda and install packages from txt file
conda create requirements txt
create conda environment from txt
how to create an r-requirements.txt conda environment
how to create new thing in anaconda
conda create requirements.txt environment.yml
conda new environment from requirements.txt
update conda environment from requirements file
how to generate a conda requirment txt
virtual environment python conda
create a conda environment with requirement txt
conda virtual environment get requirement txt
create conda environment from requirments.txt
conda requirements.txt make
create conda env python 3.9 requirements.txt
create conda environment requirements.txt python3
anaconda create environment with requirements.txt
show virtual envs conda
miniconda activate windows
switch to env anaconda
export conda environment to txt
change conda environment
import environment anaconda list
when chaing conda environment how to set python
how to create a conda environment in linux terminal
anaconda switch virtual environment
conda create --name <env> --file <this file>
conda create python 3.7 envroment
create virtual environment conda
how to check python version in conda active environment
how to activate python virtual environment bypass conda
install conda environment from yml
conda install with pip environment yml
conda open environment
create environment yml file conda
how to creat a env
how to create a new env in conda
.yml files environmen
list conda environments paths
how to activate conda environment in windows
conda list environment packages
anaconda how to create environment
create conda environments
conda list nvs
list my environment packages conda
how to see all the conda environ,ent
list anaconda envs
conda create environment 3.6
conda new en
conda export
conda create env from yaml file
change path of conda environment
conda chnage enviroment
conda env create does not create an env
anaconda make environment
how to list your conda environment
setting up a virtual environment anaconda and requirements.txt
where to find conda environment text file
conda remove environment
list python activate list
download conda package in current env
create conda env r using requirements.txt
how to create a env in conda with requirements yml
how to create requirements.txt python from conda env
create conda environment requirements.txt with python version
open an environment in conda using cmd
how to exit conda virtual environment
envirment list conda
conda env exit
conda venv requirements.txt
conda reinstall environment
conda dump environment
conda create virtualenv from requirements.txt
conda env main library
setting up anaconda conda environments and folders
list of env conda
create environment by anaconda
conda llist environment
conda create environment with python version
create a new conda env
mini conda activate environment
anaconda create new environment command line
conda command to create virtual env
create conda env
conda env 3.8
how to setup conda in environment variable windows
how to launch an environment with conda
python anaconda create environment with python version
anaconda environment python version
conda list packages
list environments using conda
check list env conda
conda create virtual env
how to set up an anaconda venv
how to set anaconda environment
create new miniconda env
conda start environment
how to create a new anaconda environment using a different version of python
how to conda list env
python anaconda use environment
setup conda environment
start a new conda environment
anaconda command create environment
conda env create python
create a new environment python anaconda
anaconda cannot create new environment
how to make conda enviroement a an interpretaor
anaconda create environment with python 2.7
anaconda prompt new environment
list environment variables conda
create new env python on anaconda
set conda env list
conda create env explaines
list of environments conda
setting anaconda environment
anaconda how to create a new environment
conda environment yo
conda list environemnts
anaconda list my environment
create conda environment in ubuntu
miniconda2 environment variable
anaconda new environment create command line
new anaconda enviroment
how to create environment in anaconda with some pakas
miniconda env variables
list conda environment variables
activate new environment anaconda
ipython use conda environment
list available enviroments conda
conda create enviroment from require
conda create miniconda environment
conda create a virtual enviroment
create new environment in miniconda
activate a conda environment in windows
anaconda create full environment
how to create a new environment in anaconda prompt
use my env conda
how to create an anaconda environment in different directory
create environment using miniconda
check packages in conda environment
conda crate env
create conda env for 3.7
how to list working environment in conda
conda list installed environments
conda activate just lists environments
how to set up to setup a conda environment
conda list environm,ents
activating an environment anaconda
mini conda create env
list all libraries in a environment conda
miniconda setup env
create a new conda environment with env
conda update "environment.yml" new packages
how to create a new environment in anaconda mac
what all things are required for conda enviroments
in python show conda env
python conda save environment
remove conda enviornment
conda start env
active environmental anaconda
add existing env to conda
python 2.7 virtualenv yml
uninstall conda enviroment
how can i create conda create
create conda environmente
conda list modules within an environment
conda delete environment and all packages
conda eliminate environment
conda env file from directory
conda list environments python windows
anaconda activate create environment
open conda environment in terminal
conda create environment python 3 windows at a specific folder
find conda environments
conda createenv
conda list all packages in env
conda new
how to list conda environment s
list the envs using conda
conda list enfv
conda list all packages in environment
conda rm env
get env list conda
creating miniconda environmenr
create new conda environment linux
conda create random environment
create a virtual environment with virtual env + yml
conda how to remove environment
conda environment pip list
remove virtual environment anaconda
python save base environment
conda envirmnets
list conda environment packages
how conda environments
list my anaconda environments
conda virual env
conda list active environment
conda i=list of envs
list package in conda environment
use conda env command
list conda environments terminal
download conda enviromten
creating python environment anaconda
list of the env in anaconda
create a conda virtual environment
how to setup conda environment in anaconda prompt
conda get env list
anaconda prompt create new environment
anaconda list environment variables
conda create -env
export anaconda environment
conda env libraries list
conda list of conda environment
conda environmens
conda launch env
manage environments python
conda list export
python create and activate conda environment
conda insatll from envir
conda env create new
how to create a env using conda
conda install in env
create new conda env with python 3.7
conda list available environments
list all packages in conda env
conda version
upgrade packages conda
list of conda envs
conda use env
list python environment variables
conda env list
conda create env
conda list packages
conda clone environment
list all conda environments
conda env create
create conda environment
anaconda list environments
conda remove environment
conda activate
anaconda create environment
conda environments
conda create environment from requirements.txt
conda list packages in environment
conda activate environment
anaconda new environment
conda list of environments
how to create environment in anaconda
anaconda environment list
conda get list of environments
new environment conda
conda create new env
new conda environment
get list of conda environments
conda create virtual environment
anaconda create env
text file new env conda
conda copy packages from one environment to another
environment list conda
create new conda environment
see all conda environments
conda envs
how to list all environments in anaconda
how to list all packages in conda environment
how to list all conda environments
env list conda
conda create environment with requirements.txt
list virtual environments conda
list all packages in conda environment
get list of environments conda
get all conda environments
anaconda list all environments
conda enviroment
listing environments in conda
list the conda environments
how to use conda
activate env conda
create an environment in anaconda using yaml
conda delete env
list all conda envs
conda env list packages
create a new conda environment
conda available environments
view all conda environments
conda install env
conda create environment without packages
how to activate conda environment
anaconda create virtual environment
creating an environment in command prompt conda
list env in conda
list installed packages in conda environment
conda install environment.yml
conda venv
conda deactivate
conda env activate
conda make new environment
conda list all packages
conda list
anaconda activate environment
conda list libraries
check environments in conda
list libraries in conda environment
anaconda env list
list all the env in anaconda
list of packages in conda environment
anaconda create environment from requirements.txt
conda show envs
conda list virtual environments
find which envirnomnet youre in anaconda
python list virtual environments
conda env install
venv list environments
list available environments conda
conda init environment
conda environment
conda switch environment
conda create env requirements.txt
how to list all environments in conda
conda environment from requirements.txt
conda list packages in env
how to copy conda environment
conda create environment from yml
how to check conda environments
see list of env conda
get list of anaconda environments
miniconda environment location
how to see conda environments list
how to create env in anaconda
list installed packages conda env
how to set up conda environment
check conda environments
conda enviromnet
how to create conda environment
conda create --name
conda virtualenv
how to make a new environment in conda
conda get environment list
remove virtualenv conda
conda list environmenrs
how to list all virtual environments in conda
creating my environment in anaconda
conda create environment python
conda create python environment
how to create conda virtual environment
see all conda envs
conda create enviroment
conda list python environments
get conda environments
how to create a anaconda environment
how to see anaconda environments
conda list all environment
how to create an environment in anaconda
conda copy env
list of env in conda
list all environments of conda
python list env
conda config
new environment anaconda
conda clone an environment
how to create conda environment from yml
show list of conda environments
how to make miniconda environment
anaconda current environment
conda install to env
conda clone environment to specific directory
conda how to list environments
display conda environments
conda make requirements.txt
conda requirements file
how to list anaconda environments
conda share environment
create conda environment in specific directory
make conda environment
conda list environments command
conda clone enviroment
conda create environment.yml
conda env info list
how to create a new conda environment
list conda enviroments
how to create a conda environment
delete conda environments
conda env change
create environment in conda
conda environment python
can install environment anaconda
anaconda environment
conda list of a environment
create a anaconda environment
conda show env list
conda list packages in an environment
list env anaconda
conda make copy of environment
base conda environment
list python virtual environments
conda create environment python version
virtualenv list environments
check available environments conda
list conda environement
conda open env
find conda enviremnt in linux
how to activate conda env
list all the environments in conda
copy env conda
conda environment version
anaconda environment create
conda create syntax
how to create a conda env copy
list all envs conda
get list of conda env
conda copy an environment
how to create environment variable in anaconda
conda environment enter
activate conda repo
where are the environment in anaconda
anaconda environment list command
set conda environment variable windows
where is conda environment stored
set mini conda environment variable windows
list cnda package
how to switch enviorments in conda
mac activate conda env
how to list all virtual environments in anaconda
creating a evn in conda
anaconda env set up linux
checking conda enviroments
how to check all conda environments
get dependencies of conda environment
conda create env from yaml
list virtual environments in anaconda
list all the virtual environments in conda
create environment anaconda
activate conda path
create new conda virtual environment
conda .yml file
conda environment create requirements.txt
how to see the total conda environment
pip env list
conda create from requirements.txt
convert conda environment to requirements.txt
conda environment to requirements.txt
see details conda enviroment
conda env to requirements.txt
conda create install list
check environemnt conda
save conda environment
create a virtual environment with virtualenv + yml
list enviornments termina anaconda
conda create environment based on requirements.txt
conda env requirements.txt
how to check environment list in anaconda
conda current environment
conda show virtual environments
conda check all environment
how to see all env in conda
how to view all python environments created in conda
conda library list
python venv list environments
how to create a conda environment using a txt file example
pip list environments
from conda env to requirements txt
manage conda env and requirements
conda creat env
remove conda env
create environment with yaml file conda
intsall env with conda
conda create from a file
using conda envsd
conda create environment command line
conda environement
conda env open
create env with anaconda
create env miniconda
conda how to create virtual environment
how to open conda environment file
delete environment coinda
leave conda back to pip
conda env commands
execution environment anaconda
remove env conda
anaconda set environment
where are conda env sotred
conda config env
conda env gui
how to setup environment in anaconda
how to create new environment in anaconda prompt
conda acreate environm,ent
what does conda environment do
import conda env
make environment anaconda
using conda environments
environment file conda
environnement anaconda définition
create anaconda environment python 3
conda python env
anaconda how to use conda
what are conda envs
conda env python kernel
conda activate virtualenv
environment variables conda
conda in which env am i
creating environment in conda
how to make new conda environment
instal conda env
anaconda create new base environment
installing the necessary files in a new conda env
conda file
conda pypy env
conda environment what is it
how to make an environment with libraries in anaconda
conda .env file
conda export environment from windows to linux
install conda env
conda create environment
should i install conda in new environment
conda create a new environment
configure anaconda environment
anaconda environments where to create env
remove enviroment conda
conda enironment
conda export current environment to yml file
set conda env variable
conda environment with python
conda r env
conda env entry
managing conda enviorments
conda create env
how to see how much conda environments list
conda install for all environments
how to get list of python packages installed in conda env
conda list of enviroments
list packages in environment conda
how to copy an environment conda
conda command for list all environments
conda show environments list
how to list all the envs in conda
conda how to use ipython in all environments
python show available conda environments
view list of conda environments
conda copy other environment
list enviroments anaconda
how to delete environment conda
list all the enviroemnts in conda
django environment from requirements yml
conda export requirements.txt
list of environments anaconda
environment anaconda
conda list dir
conda enviroment list
python conda list environments
conda show packages
conda copy anv to new env
list all conda environments command
how to see all the environments in conda
conda env duplicate
check conda environments list
conda copy environment to new environment
list conda environment command
copy conda virtual environment
list packages conda environment
how to list all conda ev=nvironmnets
anaconda creating new environment
history of conda environments
conda package list environment
see which conda environments you have
conda list python env
conda environ lis
conda list specific environment
create env command
conda lsit env
conda list packages from other environment
view list of all conda env
how to list all the enviroments in conda
conda dependencies name: project_environment
conda see env list
conda env from requirements.txt
list conda envirooment
show available conda environments
showing all packages environment conda
check the list of env in conda
miniconda remove env
how to show conda environments
show anaconda env list
howt to make newenviorment conda
conda list of envs
conda list environements
how to delete a conda environment
list of environment anaconda
list of conda packages
list of environment in anaconda
conda list environment names
activate conda
how to see the list of environments in anaconda
conda restart virtual enironment from command line
conda env user
exit from environment conda
conda create version
list all packages in anaconda enviroment
make environment conda
conda environment use
create conda environment amc
get conda yml
how to display all the environment of the conda
check active environment conda
how can i activate my anaconda environment
conda virtual env
conda large environment
change environment anaconda from promt
conda env create
anaconda list environment
how to create env conda
conda name env
howt to make new enviorment conda
conda clone environment in directory
anaconda add new environment
conda env environment command
create conda en
conda build env
list all fdir in conda env
conda list virtualenvironments
conda create enviornment
clone conda
clone conda base environment
conda create environment with latest python
create conda env from yml file
anaconda find env info
how to use packages outside conda environment
virtual environment list conda
conda environment.yml example
how to check the enviroment set up for conda
anaconda change environment location
list all environments in conda
activate new environment in conda
define specific start up for conda environment
conda install environment from file
conda create env using yml
conda env ignore prefix user specific
show anaconda environment
remove conda environment all
how to create conda environment in custom location
activate environment from environment.yml anaconda
how to see number of conda environments
conda env create from pypi
howto setup an environment in anaconda
activate conda environment command line
conda what is an environment
conda install requirements.txt
how to activate python on a new environment in anaconda
anaconda create enviroment
create a offline conda envirment
list all conda enviroment
conda exid env
how to create new conda environment
how to list venv in conda
start anaconda env
conda create environment from yml file
conda inatall
conda create env python version
how to create an environment in miniconda
start conda virtual env
conda define env variable
how to create a new conda env with terminal tackoverlfow
new env anaconda
conda create an environment
activate new environment anacond
create envs miniconda
conda python list virtual environments
how to set up environment using anaconda
conda check environment
conda miniforce create environment
how to use a new environment in anaconda
setup environment conda
python conda environment save list
create an conda environmeet exact same as thers
how to use conda commands from a python environment
conda env-name
how configure conda in enviroment
list conda enviornment
create new environment in anaconda cli
use a conda env
from miniconda create new conda environment with python 3.8
how to create a new conda environment miniconda
set environment variable python inside conda enviroment
conda list of virtual environments
initialize conda environment
conda create environment and install python
anaconda how to make environment
how to install packages in conda env
add environment anaconda
create conda encs
make new python env conda
anaconda create python 3.9 environment
conda activate environment
conda envcs
anaconda new environment with pip
make a conda enviroment with python 3
conda env python
conda env remove environment
how to test if the conda is activated
how to call new env in conda
setting up anaconda environment
create new anaconda envirn
how to activate conda in linux
anaconda env
create python environment in anaconda
myenvironments list in anaconda prompt
tremove env conda
python create env miniconda
add miniconda to env
start virtual env conda
create miniconda environment#
how to use the miniconda environment after install anaconda
creat varuemnt conda
terminal create conda evnironment
create new environment anaconda with specific python version
conda python environment variable
config conda
create new conda enviroment
conda create virtual environment
create an anaconda python environment command prompts
what is meant by creating an environment anaconda
anaconda create python 3.7 environment
how to set environment for anaconda
conda init env
anaconda environment create command
create python 3.6 environment anaconda
conda list of env
how to view conda environments
copy and make a new environment conda
see conda environments activate
list all the environment python
how to check current environment in anaconda
command to see environments in conda
how to list environment variables using python
check my env library on conda
python venv list virtual environments
show conda environment in terminal
cannot find conda environment
see list python environments mac
list virtual environment
how to get list of libraries in venv
conda copy venv
how to show all env in anaconda
list current virtul environment
conda list current packages
how to view all packages in conda env
list packages in python environment
how to get all the installed package in conda
list of env anaconda command
conda check environment name
list all the environment
how to make conda env avail for all users
how to see enviroment list in conda
get a list of python enviroments
how to see active conda environment
conda clone environment
conda copy base environment to another environment
how to show conda environments in prompt
how to check the list of all conda environment
list virtual environment python
conda environment package location
list all virtual environment python
command to list python environmen
how copy and create conda environment
see all packages in conda environment
list of virtual environments python
python show environment
check current conda environment name
how to check the all env in conda
see list of python environments
how to get env of conda
conda list name
conda see virtual environments
check packages list python environment
list packages python environment
list all python environment
conda check environment packages
how to list conda environmemnts bash
pip environment list
env.list python
how to see my conda environments
list of installed packages in conda env
check conda installed location command
how to see different anaconda environments
accessing conda environment installed package in windows
list venv
anaconda clonar enviroment
open envirement anaconda
conda check current environment installation path
show all existing environments conda
check environment list python
see which packages you have conda env
conda show packages in environment
how to display all the modules in an anaconda env
where to find conda environment path
exporting conda environment
see what conda environment you are in from terminal
list venv environments
see my conda path
liste version anaconda linux
venv list
how to check which env is running in conda
see conda env variables
how to see conda 's environments
how to list environment variable in python
display current conda environment name
conda search environment
conda lists
see where conda env is installed
conda create new environment from yml
how to identify python env in conda install
conda environment copy
conda create environment from yml windows 10
conda see installed
when is the folder when i install a library in conda environment
anaconda prefix yml file
how to create conda environment file
python check environment are using
generate environment.yml file for exsiting repository
check where is an anaconda env installed
how to check conda version
deactivate anaconda virtual environment
conda creates environment in .conda and not anaconda3
activate virtualenv conda
how to activate conda environment in command prompt
environment.yml example
conda import pydentic
anaconda environment package list
show list virtual conda
delete environment
conda env create env
list all virtual environments using conda
check all packages installed in anaconda virtual environment
view virtual environments conda
anaconda list venvs
how to get anaconda virtual env installed list
how to list installed packages in a virtual env conda
conda + new environemnt
conda search package
set environment in anaconda
conda search installed packages
how to see all conda virtual environment
conda decativate
conda env is still activated
check which environment conda
conda create env 3.9
how to run conda environment
list libraries in python environment
list python environment
find conda environment name
how to see all the environments you created in anaconda
how to show conda environment in terminal
conda -copy
how to know all my conda environments
show all environment variables conda
how to see environmens in anaconda
conda show all env
conda clone a env
list of virtual environment python
get list of environment variables python
checking the libraries in the enviroment conda
see all conda enviroments
see conda environements
python list virtual enviroments
conda clone existing environment
check all environment variables anaconda
conda clone environment
anaconda copy an environment
find all conda environment python
conda list all libraries for environment
source activate and conda activate
conda clone environment
conda use environment from folder
how to get list of all conda environments
conda base environment activate.d
conda copy environment with new name and python version
conda list all available environments
see all anconda env
copy one conda env into another env
create conda env file to copy
conda make a new copy of environment
conda see env
how to see all the anaconda en
copy conda environment to existing environment
copy anaconda environment
copy a conda environment to another environment
how to use anaconda environment
new env venv anaconda
conda create environment by file
create virtual environment in directory conda
activate anaconda environment in sub
conda create environment based on another
conda how to activate environment
how to remove environment in anaconda
how to use different enviromnet in anaconda
how to activate conda i
conda get environment file
delete virtual env in conda
switch anaconda environment
activate conda env linux
how to install environments in anaconda
to see library list in conda conda environment
anaconda environment name
how to get other conda env list
conda show env name
conda create environment python 3.7
change conda environment path
how to set conda environment as default
switch environment in anaconda prompt
conda env create batch mode
conda list envs and directory
use 'conda create' to convert the directory to a conda environment.
how to rcreat conda env
conda create env python 2
where does conda create environments windown
how to open a conda environment in command prompt
windows create conda environment.yml
how to create environment
anaconda list of envs
how to get a list of conda environments
mass remove anaconda environments windows
conda automatically activate and deactivate environment
creating conda virtual environment
create python enviroment conda linux
conda env run
activate your conda environment where you want the package installed
conda and pipenv
.env create manually
how to activate environment in anaconda
conda creat environemnt.yml
conda list env python version
conda env delete
conda create ecviroment
conda create env for python
anaconda change env on command line
anaconda list env packages
miniconda activate
activate an environment anaconda
enter conda environment
conda export environment to yaml
list of environments in anaconda
how to get a list of available conda env
create yaml file anaconda
how to import conda environment from yaml
ceate new environment conda
python list of environmentss
conda environment -l
switching in specific conda environment
conda create env with yml
how to get a list for all conda environments
conda env list withot name
conda activate from path
how to use the new create anaconda environment
which function doesnot check in conda environment
conda list all environments with a package
conda environment remove
using conda list package in python
installed packages in conda environment list
install anaconda environment
how to check if you are in environment anaconda
activate conda env in script python
list of conda enviroment
recreate anaconda env
environment.yml conda variables
conda remove a environment
export conda environment into requirements.txt
conda environment with display
anaconda command prompt change environment
create conda environment using requirements.txt
create conda environment from requirements file
conda create environment from requirements.txt using pip
conda environment from requirements file
conda create env requirements
conda build with env requirements
set up anaconda environment
create new environment conda whith requirement.txt
conda create new env with requirement.txt
conda make env based on requirement file
conda create new environment python 3.6
create a conda environment using requirement.txt
build conda env from requirements.txt
create conda from requirements.txt
conda new environment with requirements.txt
setup conda environment using requirements.txt
conda activate base environment
how to export environment from conda
conda yml
install anaconda without losing environments
conda requirementes
how to make conda env in terminal
conda shell venv
env not working conda
new conda environment from base
create conda environment based off requirements.txt
conda create env from requirements file
create environment conda from requirements txt
how to make requirements.txt from a conda environment
a python script to create a conda virtual environemnt and install requirements.txt
how to add a pip dependency while creating conda virtual environment
how to create a new conda environment from a requirements file
create environment from yml
conda install requirements.yml
conda yml format
conda create environment python
how to create a new conda environment clone base
conda create new environment in specific directory
get list of all envs in conda
check if conda environment is active
create conda environment with python 3.6
conda create mlenev
remove virtual environment in conda
how to remove env in conda
how to use conda env and my env both
terminal conda environment list
how to check conda envs list
anaconda create environment fron install requirements
how to see conda envs
check new environment in conda
conda envrionment doesn't contain packages
conda get environment packages
conda export environment from history
all conda envs
create conda env from requirements.txt with python 3.7
anaconda prompt show environments
conda export yaml
delete conda environment windows
conda change env
conda create with requirements
conda export env requirements.txt
create virtual enviroment file form yaml
how do you create a conda environment from requirements txt?
starting a conda environment with requirements.txt
how to activate conda environment from cmd
setup anaconda environment
check list venv conda env
conda create python 3.6 env
change conda source
conda venv create requirements.txt
what is conda activate
conda create from requirements.txt
crate environment conda with packages
conda list dev
how to see list of conda envs
use miniconda environment
sow the list of env in anaconda
conda create environment miniconda
how to create a enviroment for anaconda
create new env anaconda
list of conda environment
install new conda environment
miniconda create environment packages
conda command for enviroment
create conda environment in batch file
activate anaconda python version
python how to reference a conda env variable
conda virtual environments
conda environment packages
create anaconda environment with python 3.6
creating environmens with miniconda
create env in miniconda
conda envrionment
conda env config var
conda env list equivalent python
conda list current environment
creating new env in anaconda
anaconda widnows create a new environment
cant create new environment in conda
creating new python environment in anaconda
conda create environmnet bash
conda environmentmpyth
open new env conda
conda environment python=2.7 anaconda
how to run miniconda
where can i find conda create environments
how to start conda environment
is conda on my env
conda download
how to initialize a conda env
conda envrioment
anaconda create environment python 3.7.6
create new environment in anaconda at a directory
conda environment creation
how to enter a environment in anaconda
how to make new environment in anaconda
can't create new environment anaconda
createing new env on anaconda
how to create new anaconda environment with python 3
miniconda2 environment
create conda environment using miniconda
anaconda set python environment
conda list environments names
conda list envd
conda verison
how to create new anaconda env in a specific location
how to get list of environments conda
conda env explained
how to make an enviromneyts in anaconda
conda setup
conda env settings
set up conda environement
command to list envs in conda
anaconda create env python version
list of conda enviorment command
how to make a conda snv
how to create conda environment in python
miniconda create environment creating env under anaconda
create and activate a new environment python anaconda
how to find list of environment in conda
use conda environment in command prompt
conda list packages one environment
how to setup conda environment on window
conda envirament variables
create a enviroment anaconda
conda list einviorments
anaconda default python environment
how to open conda create environment in terminal
create a conda environment
how to set up environment using conda
create new environment in anaconda at a location
creating new environment anaconda
how to select specific environment conda prompt
conda go to environment
conda env list in python
delete conda virtual environment
conda show environments mac
conda remove specific environment
new environment in a
conda enviornmental variable list
change environment in anaconda prompt
install virtual machine in conda virtual environment
lista conda environment
how to environment conda im mac
conda remove virtual environment
conda env where are library files
list availabe conda environments
how to check yaml version in anaconda prompt
anaconda create and activate
python anaconda check environments
create environment with conda
^remove conda env
list conda
how to create a new env for conda
how to open conda environment
does conda create clone the current directory
conda activate other
conda create environment python patch version
conda environment change
command conda list
conda create environement
conda environment for python project assignment
create miniconda env
conda activate cozmo-env ?
activate a conda enviornment
conda create envi
conda list packages in arbitrary environment
conda create environment with default packages
how to activate conda enviroment
how to see my environment in conda
conda check environment list
conda inv
conda env varial
conda enviroment file
how to see list of libraries in conda environments
what is conda environment?
conda env
conda list --env
conda variables
how to list conda packages
list of envs in conda
list envts in conda
list of the conda environment
conda env details
what is conda env create
new environment in miniconda
create a new environment in conda
conda list variables
conda create environments
conda create from file
conda env import
conda how to list environment variables
see all the conda environments
make new conda enviroment
conda change environment command line
conda list envs directory
conda install python to new env
how to activate a conda env
how start conda env
conda environment tutorial
activate env conda
conda env -f
list of anaconda environment variables
conda env text file
conda environments.txt
change env conda
env list in conda
conda environment address
list conda environments
conda list envs
conda environment
conda list
conda environment list
create anaconda environment
conda show environments
what is a conda environment
conda create new environment
how to list conda environments
list all environments conda
conda list environment
list conda envs
what is conda environment
see conda environments
create conda environment from requirements.txt
how to create a new environment in anaconda
conda delete environment
create env conda
conda new env
list of conda environments
conda envs list
miniconda create environment
creating environment in anaconda
conda activate env
conda create
show all conda environments
list env conda
list of all conda environments
how to see list of conda environments
how to see conda environments
conda enviroments
create new conda environment with requirements.txt
conda see all environments
activate anaconda environment
conda see environments
check conda environment list
how to see all conda environments
conda export environment
create conda environment from yml
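Exporting and recreating an environment via a YAML file, sketched under the assumption of conda 4.7.12+ for the --from-history flag:

    # Snapshot the active environment
    conda env export > environment.yml

    # Recreate it (the name: field in the file names the new env; override with -n)
    conda env create -f environment.yml

    # Export only the packages you explicitly requested, without build strings
    conda env export --from-history > environment.yml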
list environments in conda
anaconda show environments
copy a conda environment
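Copying (cloning) an environment is a single command; "myenv-copy" is a hypothetical name:

    conda create --name myenv-copy --clone myenv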
check all conda environments
activate conda env
anaconda env
list available conda environments
conda path
get conda env list
new anaconda environment
make new environment conda
conda list installed packages
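Listing installed packages, per the queries above ("myenv" is hypothetical):

    conda list            # packages in the active environment
    conda list -n myenv   # packages in a named environment
    conda list numpy      # filter by package name (regex)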
export conda environment to requirements.txt
environment.yml
delete conda env
env conda
creating conda environment
anaconda create new environment
how to list environments in conda
how to create conda environment from requirements.txt
how to list all the environments in conda
view conda environments
conda env package list
environment.yml conda
conda list of packages
create conda env from yml
anaconda use environment
create an anaconda env
conda env path
anaconda create copy of environment
conda requirements.txt
see conda envs
how to create new environment in anaconda
change environment anaconda
how to check conda environment
conda show env
conda remove an environment
show all environments in anaconda
deactivate environment
see packages in conda environment
deactivate environment conda
anaconda change base environment
environment variables in conda
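Per-environment variables exist since conda 4.8; values are applied each time the environment is activated. "MY_VAR" and "myenv" are hypothetical names:

    conda env config vars set MY_VAR=value
    conda env config vars list
    conda env config vars unset MY_VAR
    conda activate myenv   # re-activate so the change takes effect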
anaconda create environment python 3.6
how to set up anaconda environment
new env conda
create a new environment conda
conda clone
conda create new environment from requirements.txt
list envs conda
creating a conda environment
how to get conda env list
list of environments python
how to find saved conda environment
conda command to list environments
list of environments in conda
conda show available environments
create env in anaconda
list all conda enviroments
how to list conda envs
conda python list
conda env list command
list environment anaconda
show environments conda
list conda environments versions
deactivate an environment anaconda
conda uninstall environment
create new environment in conda
conda create clone environment in directory
create an environment in anaconda
conda create env with requirements.txt
conda list environmnets
conda info envs
conda environment list
how to copy environment conda
environment.yml conda example
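A hypothetical environment.yml, written here with a shell heredoc; the channels and packages are illustrative only:

    cat > environment.yml <<'EOF'
    name: myenv
    channels:
      - conda-forge
      - defaults
    dependencies:
      - python=3.9
      - numpy
      - pip
      - pip:
          - requests
    EOF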
python venv list
conda create environment python 2.7
conda activate base
check env list anaconda
check current conda environment
delete a conda environment
export conda environment
how to list all anaconda environments
conda environment install requirements.txt
conda list environment packages
how to list virtual environments in python
anaconda copy environment
conda environment.yml variables
linux activate conda environment
create anaconda environment from requirements.txt
get list of packages in conda environment
conda env file
anaconda create environment requirements.txt
conda create new environment python 3.7
conda choose environment
conda command to create new environment
delete environment in anaconda
conda list environment variables
conda list enviroments
conda env packages
list all libraries in conda environment
conda list export
list env in anaconda
create conda env with requirements.txt
conda list the environments
install anaconda in conda environment
conda delete environmet
how to switch env in anaconda prompt
list of environment in conda
conda env install package
conda create environment
make new conda environment
how to delete a conda environment using pip
conda list created environments
environment conda
checklist of conda environment
miniconda activate environment
conda base environment
activating conda environment in windows
conda create env from yml
use conda environment
how to set conda environment
conda environment documentation
conda environemnt
conda not using environment python
conda list environmens
install dependencies from env.yml
how to list packages in conda env
conda set env variable
understanding conda environments
how to change environment in anaconda prompt
python list environments
how to switch environment in anaconda prompt
make a new conda environment
how to list environments in anaconda
conda create new virtual environment
conda show current environment
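To see which environment is currently active:

    conda env list                 # the active one is marked with '*'
    echo $CONDA_DEFAULT_ENV        # bash/zsh
    echo %CONDA_DEFAULT_ENV%       # Windows cmd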
how to list all conda envs
create an environment in anaconda for python 3.6
list all the virtual environments conda
list all the virtual environments python conda
list whcih environments are installed in windows
anaconda envs
create conda env with python and libraries
conda source activate
how to create conda environment.yml
find where conda is install for environment variable
conda search package list in environment
conda where env
activating anaconda environment
activate a conda environment
conda base environment activate
setting up python environment anaconda
conda env create vs conda create
how to install conda and create environment
check if conda environment exists
add conda as envoirement variable
virtual environment anaconda
create a conda environment from a yml file
create environment.yml current conda environment versionless
how to activate a conda environment
create anaconda virtual environment
conda export environment requirements.txt
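Exporting an environment to requirements.txt, sketched for both ecosystems; conda's export format is reusable with conda create --file, while pip freeze only captures pip-visible packages:

    conda list --export > requirements.txt   # conda-style specs
    pip freeze > requirements.txt            # pip-style, run inside the env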
python environment list
python .env list
view all environment variables in anaconda for windows
list virtual env python
check all existing virtial environments python mac
python list venv
how to check modules in conda env
how to know the anaconda env
how to activate miniconda environment
show packages in conda env
check conda environment name
conda create env with requirements
how to list all environment variables in python
where are conda environments stored
conda get environments
conda display env
how to list all your virtual environments in conda
conda check available environments
how to check conda env list
how to create yml file anaconda
check available conda environments
conda environment requirements.txt
conda create env using requirements.txt
create conda environment from requirements.txt python version
conda env create from requirements.txt
python venv list all environments
list environment python
check list environment python
conda config set environment variables in .condarc
create new environment miniconda
activate anaconda environment created with specific directory
which conda env am i running
conda create local environment
conda environment file
create miniconda environment offline
conda -- env
conda envs command
conda create virtual environment from yml
run file using conda enviroment
accessing miniconda env
use environment.yml
how to create new env in conda
generate environment.yml
conda env version
create a new envoironment in conda
conda enviroiment
how to get conda environment info
conda environnement
how to enter a conda environment
conda env info
conda environment create
conda install in my env
anaconda create environment with python 3.7
activate conda en
conda env
miniconda3 create environment
conda env unnamed
what is "conda env" command
what is conda enviroment
what is a conda environment?
make conda cuda enviroment
source activate conda
conda environment from network
conda virtual environment in linux
anaconda set environment variables
setup new environment conda
conda --info env
conda script
new environment in anaconda
anaconda create environment python 3.7
conda delete virtual environment
set conda environment
new conda env
conda set env
conda make env
conda info --envs doesn't show all environments
how to start an environment anaconda
conda enviroment download
install conda in anaconda environment
create environment
conda create environment with specific directory
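Creating an environment at a specific directory uses --prefix instead of --name; path-based environments are then activated by path:

    conda create --prefix ./envs/myenv python=3.9
    conda activate ./envs/myenv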
conda libvirt
conda list env
conda environment
create conda enviroment
what packages come with a new conda environment
create conda yml file
how to choose environments in anaconda
start conda environment
which python in conda env
conda environment.yml name
create a virtual environment conda
available conda environments
conda prompt list envs
how to list all the conda envs
conda list python packages
what is anaconda environment list
enviroment list conda
get list of all conda environments terminal
how to get list of env in conda
activate existing conda environemnt
show all envs conda
see list of environments conda
list packages of conda environment
activate conda venv
export requirements with conda
list all available conda environments
how to edit requirements txt for conda environment
show list of conda env
anaconda how to list environments
conda how to copy environment
conda python version environment
conda get list of all environments windows
how to list available conda envs
list all conda environment
conda list environment dependencies
delete an environment
show envs conda
copy conda environment to another environment
list conda virtual environments
anaconda, list environments
environment conda list
conda envs how to show directories
conda env --list
show all conda envs
list all packages in environment conda
list environment packages conda
conda list env
conda list environments
check conda environment list
conda list envs
how to copy environments conda
how to check installed packages on anaconda environment
how to copy a conda environment
list env packages conda
how to get the list of all conda environments?
conda envlist
conda list all packages installed in environment
how to list available envs in conda
conda create
conda environment variables windows
delete an environment in anaconda
how to clone conda env for another machine
python conda activate environment
check conda env name
list all my conda environments
conda environment list command
how to find the conda environemnt.yml
conda gilt clone
how to list conda environments terminal
how to list conda environments linux
anaconda list created environments
how to create a new conda clone
how to change conda activate base command
how to set env for new project in miniconda
how to get list of conda environments?
how to check conda environment list
conda install yml environment
how to see the list of conda env
how to see the list of installed in conda env
create environment yml
anaconda create environment from file
conda create --name
conda create environment from file
conda export windows
how to make a conda environment
run project on different conda env
anaconda prompt venv using a yml
remove a conda environment
conda show list of environments
how to list conda enviroments
check how many conda saved
conda env create from yaml
conda env with python 3.7
conda virtual environment
change conda env name
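Renaming an environment, assuming conda 22.11+ for the rename subcommand; older versions need clone-then-remove ("oldname"/"newname" are hypothetical):

    conda rename -n oldname newname

    # Older conda: clone, then delete the original
    conda create --name newname --clone oldname
    conda env remove --name oldname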
deleting virtual environment conda
create a conda environment from yaml
conda create environment
create conda environment with packages
conda env yml options
code for seeing list of envs in anaconda
conda create
conda create new environment
conda environment with packages
anaconda create a env
conda install from environment file
conda in virtual environment
install conda environment from file
import env.yml
creating a environment in anaconda
conda list all virtual environment python 3
how to activate environment .yml without conda
conda create yml from env
conda environment
anaconda how to choose an environment
how to list envs in .conda/envs wsl
list of packages in conda env
anaconda virtual enveronment
conda create environment np default packages
conda environment yml example
how to remove virtual environment in conda
anaconda other environment
conda environments prefix yml
how to activate environment in conda
conda environment.yml
activate conda environment windows 10
remove an environment
create environment file in conda
conda activate "$@"
change conda to miniconda
create basic environment in anaconda
install new enviroment conda
conda ~ %
how to check list of conda environments
what is an environemt in anaconda
anaconda create environment new python 3.8
conda virtual environment list
setup python environment in anaconda
show conda env
anaconda environment make
how to create a new conda en
conda start new env
create a new envt conda
set which python in conda environment
why use environment in conda
conda virtual env pc
how to create anaconda python environment
how to start miniconda
how to use python from conda env
conda create new environment with latest python
make env in conda
how to activate an enviroment in conda cmd
where is the conda environment
linux conda create environment
create anaconda environemt command line
list conda env list
list conda environments
how to create env using conda prompt
create cheatsheet virtual env in anaconda
conda command env
how to work with conda env
create a virtual environment with anaconda
do i need conda package in new environemnet
how configure conda in enviroment windows
how to create create env in anaconda
create new environment with conda
create new env anaconda terminal
conda create envrionment
how to create a open new env in conda
list all virtual environment conda
how to add conda to enviromental variable
use conda environmen
how to create a python anaconda environment
how to use conda env
activate env in conda
anaconda environment conda
setup miniconda environment for machine learning
using conda env variable in python
how to create new anaconda env
anaconda environments show in terminal
conda create environment verion
anaconda prompt environment list
anaconda setup environment
conda environment manipulation
create anaconda env
conda how to use
conda create list of packages
miniconda where are environemnts installed
how to create environment in anaconda and use it
create conda virtual environment
create a enviroment in anaconda
create new conda enviornment
how to create conda environment in miniconda
print environment versions in conda
making new env with miniconda
create env conda
miniconda installed packages
create new env in anaconda
path for conda create -n .venv
anaconda environment file
conda environment variables command
conda create new enviornment
conda env list
check conda list
create an anaconda environment with python version 3.6
conda environment
cmd for creating env in conda
create new python version anaconda
anaconda enviromnet list
view environments anaconda
python env list of python versions
where to find conda envs
show python venv list
anaconda env list in prompt
how to find my conda environment
check if conda installed
how to see envirement name conda
conda copy environment from folder
conda environment details
print list of environment anaconda prompt
show environment conda
copy conda env to
see all conda environemnts
check environment variables location anaconda
anaconda see environments
anaconda enviroment libraries check
check conda environment currently actiavted
how to find conda environment path
check all env conda
conda command for find all environments
python3 venv list environments
view all conda project environments
install conda for all users
location of conda environment
copy envs conda
check current environment anaconda
list environment
conda copy dependencies to new environment
how to show conda environment in prompt
conda create environment and copy base environment
how to know packages present in conda python env
python list all env
list all the virtual environment python
conda create env copy
see available environments conda
check environments in anaconda
find location of conda env
conda env check installed tools
virtual environment get list
how to see what i've made conda envirnment
pip list env
check the current anaconda environment
check the environments in anaconda
conda check environment name
how to show environment in anaconda
find all environment in anaconda
check for conda environments
how to show current environment location -"conda"
how to know what is installed in an conda environment
conda see all env
how to know the name of conda environment
display the env being used by conda
python list my environment
conda check env
conda show enviorment path
list envv python
check current evironment name conda
display list of virtual environment python
check my conda environments
duplicate environment conda
how to check environments anaconda
pip list created environments
python virtual environment managers list
list of package in environment python
where i find installed environment anaconda
environment list python
where to find conda environment
activate enviroment list
see all anaconda environments
source activate list environments
anaconda share environment
anaconda see existing enviroments
names of conda env
how to know which user owns conda environment
python list virtual environments
how to list the current libraries in virtual environment python
display the current conda environment
list the packages in conda
delete anaconda environment
conda list all python versions
how to list all the conda installed packages
can i copy my conda environment
can i copy my conda environment to another system
how to create a base environment in conda
yml file for conda environment example
conda start a lighter env
if create new environment
conda install packages from environment.yml
install python environment anaconda
conda commands list
get all envs conda
conda check version of package
how to check what is installed on conda env
have cmd conda activate
conda create environment in .conda
conda deactivate environment
anaconda yml
package list conda virtual env
conda update end from environment.yml
anaconda virtual environment list
see virtual environment list conda
anaconda list envs with python versoin
anaconda enviroment list
how ti list installed packages in virtual env conda
conda env path in terminal
conda create environments
conda list virtual environments
how to check list of virtual environment in anaconda
conda package save dictionary
set environment variable for conda environment
anaconda virtual environment activation
how to deactivate a conda environment
conda check package version
how to check list of packages installed in virtual environment anaconda terminal
conda create an environment yaml
to activate existing environment in miniconda
conda make list
how to list all virtual env in conda
find conda environment
how to show a list of conda envirmoents that are available
list all anaconda env in shell
check list enviroment conda
conda copy base environment
list virtual environments python
show all conda env
see all package installed on a conda environment
copy conda environment with different python version
see all environment variables anaconda
check all the conda environments ubuntu
how to know create conda environment location
create a copy of a conda environment
copy python environment conda
how to see the env in conda
how to show env conda
find location of conda environment
list all packages python environment
conda show current environment name windows 10
how to see if conda installed
conda get installed list
find all anaconda environments
how to copy conda environment with python version
how to enter conda environment
conda new environment python version
conda copy library from one environment to another
conda check current environment name
conda copy environment with new name
create new virtual environment in anaconda
check installed packages in conda env
how to check the environments on anaconda prompt
how to see whats installed in the conda env
how to show all environment in conda
how to check what environments i have in conda
how to see all conda envs
find all installed conda env names
how to see environment list conda
view my environments conda
how to check all installed packages in conda prompt
get all installed modules in conda env
conda activation
yaml file to environment anaconda
conda clone env
anaconda deactive
how to see environments in anaconda
conda make environment file
get environments list conda
can't create conda env from environment file
virtual env conda
get all enviroment with conda
how to check the how many conda environment are ?
how to install my environment in anaconda
list and version of current env conda
conda deactivate env
conda list packages location
create virtual environment python conda from yaml
activate conda environment in linux
conda list all packages in an environment
conda activate environment from yml
create new conda for new python
conda create environment environment.yml
how to activate conda in ubuntu
conda activate with path
clone a conda environment
conda remove environment and all packages
anaconda prompt use environment
how do i find a list of conda environments?
remove conda environment path
list of virtual environment conda
conda list libraries in environment
find conda environment list
command to check conda environment list
conda activate iron
how to create new conda environmnt from linux terminal
conda create new package
virtual environment anaconda windows
what to type in conda environemtn
how to anaconda activate myenv
how to see list of packages in conda environments
view python environments conda
conda list packages in enviroment
conda env list all packages
conda activeate env
how to get conda env list on windows
how to list all conda environment varaibles
remove an environment anaconda
conda create
how to remove a conda environment
conda environment ubuntu
name conda env i
conda how to remove an environment
conda list envs linux
export environment conda
anaconda check all environments
conda get list of environment packages
how to deactivate anaconda environment
conda change environment directory
install conda linux virtual environment
go out env conda
use specific conda environment in python console
conda list current environments
swtch in specific conda environment
create conda venv
conda list environments
show environement anaconda
make a new environment conda
deactivate env anaconda
conda create environment with yml
setting up a python conda encironment
conda save environment
conda liste env
how to activate conda environment using source
miniconda switch env
list installed packages in conda env
conda activate environment linux
conda environment create
anaconda create env nenv
remove an anaconda env
linux activate anaconda
check conda env path
how to write conda environment to requirements.txt
conda env create from file
where is conda environment requirements.txt
update conda environment with requirements.txt
conda environment requirements file
activate environment anaconda prompt
anaconda make new workspace
set up conda environment
how to export conda environment to requirements.txt
anaconda new environment python 3.6
how to install environment in anaconda
create python conda environment from requirements.txt
conda create --name <env_name> --file requirements.txt
conda create environment with requirements txt
how conda write requirements.txt
how to manually create a conda environment
build conda environment from requirements.txt and python version
install dependencies for specific environment
conda generate requirements.txt
conda remove
existing conda environment
minconda env path
activate conda with source
add path to conda environment
conda install packages from yml
change environment python
create enviroment with dependeices in file conda
conda install from requirements.txt
conda create environment from requirements
set up conda environment using requirements.txt
how to activate an environment in anaconda
conda create file
can i use pip install if i have been managing my python environments with conda?
creating virtual environment python conda
conda remove virtualenv
conda io activation
conda create environment from a text file
how to create yml file python environment
how to save a conda env yml file
conda env export
channels environment.yml
conda create environment overwrite
python anaconda environment
list venvs with conda
list my environment conda
list venv with conda
conda create env in existing file
remove python environment conda
anaconda create environment debian
conda list location package
conda check which environment is active
python remove virtual env conda
create environment anaconda with prefix
editing the path enviroment variable in a conda environment
run in conda environment
conda env file optins
py environement like conda
create requirements.txt from conda environment
activate environment conda
conda install from yaml file
create env without anaconda
how to create environment from requirements.txt conda
save conda environment to requirements.txt
conda create new environment requirements.txt
install packages from yml conda
conda create with a requirements file
conda list python packages in environment
conda env list path
remove environment conda
list python conda
export requirements with conda
saving to text env anaconda python
conda file_operations python
how to start an environment with requirements.txt conda
create a requirements.txt from conda environment
how to activate new environment in anaconda
conda list programs in environment
conda environment
conda create new environment
list envs using conda
anaconda create environment path
conda environement lists
how to list all env in conda
create environment in anaconda cli
conda environmental variable
conda list active env
conda create environment
add new environment anaconda
make a conda env
anaconda environment install python version
how to an environment in miniconda
conda env list variables
making a conda enviroment
should you use conda install in conda environment
how to create anaconda envorment
conda install from env
looking conda environment list
miniconda create and activate environment
how to launch conda environment
anaconda create environment python 3.4
conda installed list
set up environemnt in conda
conda command for env list
new environment anaconda at a directory
conda install new environment from file
conda making new environment
conda list packages for current env
conda prompt how to activate enviroment
use environment variable conda
how to use the environment created in anaconda
list of envs anaconda
add environement variable conda windows
anaconda start environment
install conda in venv
create a new enviornment anaconda
how to enter anaconda environment
list environments anaconda
what are conda environments
display conda environments list
make environment on anaconda
conda list environments variables
install to conda env
how to creat miniconda environment
my env in anaconda
miniconda create environment python 3.7
conda create new environment python 3.8 terminal
how to see list of my conda environments
conda and venv
python anaconda create environment
create env in using miniconda
anaconda create new enviroment
initialize miniconda envionrment
conda add environment
list anaconda env
make an environment in anaconda
how to create anaconda environment with latest python version
create an anaconda environment
create conda environment offline
how to create an env in anaconda
how to create a environment in anaconda prompt
list conda environments
show list of env in conda
anaconda python version environment
command to create environment in anaconda
miniconda env
how to list all env of conda
make anaconda use python environment
all env list conda
conda list available env
select env on conda
how to launch anaconda with a new environment
create a new conda environment in the same directory
creating env and installing in miniconda
get list of env conda
list of all conda enviroments
in python show current conda env
python create environment from yml
remove envirnoment conda
enable conda environment
list conda env packaes
anaconda3 set environment
anaconda version for environment
conda documantation enviroments
activate env in anaconda
create env
activate env anaconda
settng up env for anaconda
how to activate conda environment
anaconda environemnt
conda env list
anaconda create
anaconda version environemnt
get list of conda envs
create conda environment using yml
conda environment linux
how to create a new env
system wide conda env
list of all libraries in conda environment
list environments conda
list conda envs
how to clone remove env
where does the anaconda new environment created
conda create virtual environment from yml file
conda get out of all envs
conda my envs
conda command to list envs
conda make env from yml
miniconda remove environment
add envs directory in conda
creating new environiment miniconda
anaconda environment deactivate
creating an environment with anaconda python version
list all anaconda environments
conda env activation
conda setup env variables
conda installa env
conda env file example
how to list packages in conda environment
conda list all libraries
conda environment python
conda env in service
how to pip list conda environments
get list of environment in conda
view all conda env
set an environment using anaconda
anaconda python, create environment
conda install in new environment
create a new environment in anaconda
create an environment anaconda an activate
conda env-list
how to use conda environment
conda which environment am i in
using conda envs
conda command for enviroment listing
create conda env with latest python
conda how to use environment
build conda environment from yml
python create conda environment
conda env info
how to make a conda env
command for env list in conda
what conda environment
how to list env in conda
how to create conda env python
conda env library list
install conda in the environment
conda install path
create an env + conda
conda does use environment
create conda env with python 3.7v