text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Details Description The current implementation by Paj is slow and pollutes the global namespace with variables and functions. This implementation only exports the SHA1 module and also happens to be up to 3 times faster as an added bonus. See for benchmarks. Activity - All - Work Log - History - Activity - Transitions Okay ... so does that mean we can remove it? It used to be used in prepareUserAccount before 1.2.x/1.3.x _users db security changes. I just did a cursory review of where we use SHA1 in Futon and was under whelmed. As far as I can tell it's really only used in the tests even though it's included in every Futon page (creating a different issue for this). If someone cares enough about Futon tests' run time then we could replace the SHA1 implementation, but it doesn't seem like a worthwhile cause to me. Cheers. patch doesn't apply cleanly (might be because I did the json2.js update beforehand direct from json.org) Can you repost the patch against latest trunk, or a link to your git repo so I can clone directly? Thanks! Chris we'd still need it to verify that "old" user docs still work, and it might be required in the ouath tests. that means we could remove it from futon and just have it in the test suite.
https://issues.apache.org/jira/browse/COUCHDB-833?focusedCommentId=13213777&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2015-27
en
refinedweb
I’m restoring the VFD entry, but I am voting to KEEP it. I am copying some comments I placed on the QVFD. The article is actually pretty funny for the mathematician. It’s very wrong. It perhaps just requires a mathematics PhD to understand it completely. It reminds me of the joke: - A graduate student is taking a topology oral exam. The professor asks: “Can you give an example of a compact set?” The student answers, “The real numbers?” The professor pauses for a moment. The professor slowly says, “OK. In which topology?” This is really quite hilarious—if somewhat esoteric. --KP CUN 21:02, 5 Sep 2005 (UTC) - There's a joke about an engineer and a mathematian and something to do with infinite dimension, wish I could remeber it. I also found it really funny... no need for all the articles to be understandable by everyone. Plus someone (maybe even I) will expand it in the future. --Pietro 05:11, 6 Sep 2005 (UTC) I'm no mathematician but I agree with Pietro that not every article needs to be understandable by everyone. However, maybe there could be footnotes to explain this particular article for non-mathematicians like me? --Ogopogo 05:16, 6 Sep 2005 (UTC) I think we need an award for this article. The amount of education required to produce this juvenile nonsense was exceedingly high. There are a few articles on Uncyclopedia that are as well-researched as any you might see on Wikipedia. --KP CUN 05:30, 6 Sep 2005 (UTC) Definite keep. --130.245.249.188 06:16, 6 Sep 2005 (UTC) - How about an award of "That went the fuck over our heads!"? Edit it and make it funnier at: Template:AOC. Oh, and I vote Keep if it is funny. --Splaka 06:35, 6 Sep 2005 (UTC) - keep The Riemann hypothesis is notorious in math circles. Mathematicians have been working on the problem for over a century. Somebody's actually offered $1 million USD to anybody who can prove it. (Hmm...if somebody wanted to make this article a bit more accessible, one could summarize the real life history of the prize and then speculate on highly unlikely scenarios that could occur after a math genius gets a million dollars.) --Hessef 07:19, 6 Sep 2005 (UTC) - DELETE: If other people can vote to highlight this image, then I can vote to delete it. It just isn't funny, sorry. Kevin Smith isn't funny either BTW There may be few of you that agree with me and that's certainly ok with me, but I've got to speak my peace. --Pam Johnsenson 00:46, 5 Sep 2005 (UTC) Keep I absolutely agree that Kevin Smith isn't funny (wait, did you mean the actor/director or the Uncyc page? Doesn't matter, they're both lame), but I see no particular reason to delete this image if other people, evidenced by VFP, like it. --Rcmurphy KUN 01:18, 5 Sep 2005 (UTC) - of course, if Kevin Smith is lame, then why isn't an image ripped from a Kevin Smith movie lame? I'd be interested in knowing how many images have been uploaded to uncyclopedia are swiped "buddy christ" images. --Pam Johnsenson 01:51, 5 Sep 2005 (UTC) - I do think it's lame. And overused. But I'm not the only person here. A lot of people like Kevin Smith's tripefilms so I'm willing to vote to keep an image that a lot of people may find amusing. Look, I don't think some of the nominated articles are funny either, but I probably won't VFD them because even if I don't like them, I know some people do. It's the good of the whole over the good of the individual: that's what we're all about here. Either that or amputee jokes, I can't remember which. --Rcmurphy KUN 06:58, 5 Sep 2005 (UTC) Keep -- ERTW MUN 01:28, 5 Sep 2005 (UTC) Keep We are non-discriminatory insulters here, we will insult anyone and everyone (preferably in the same article).--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 08:07, 5 Sep 2005 (UTC) Keep Jesus loves us and wants us to laugh at him. AmyNelson 14:00, 5 Sep 2005 (BST) Keep Have you seen our picture for "Jeez-its"? --Nytrospawn 21:03, 5 Sep 2005 (UTC) Dormitory V.A.I.N.ity? I've given them a friendly warning, but they seem to be updating without heeding or even wanting to discuss it. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 09:42, 4 Sep 2005 (UTC) - It is a satellite article of University of Texas at Dallas. It seems mostly a collection of inside jokes. However, if uncyclopedia ever becomes complete, every school will have an entry full of such a collection. It is arguable whether this is desirable. (Workin on that laziness boss!) ~Spl - UtarEmpire has set the standard on college-related articles with his series under the category of Wheeling Jesuit University. This article doesn't even come close. Which is why we shouldn't just delete it. We should delete the hell out of it'. And possibly vandalize it first. ~MM Hmm, I keep missing when there is a discussion on pages. Anyway, this article is decidedly unfunny pretty much. Both it and University of Texas at Dallas need serious scrapping and recreation. Perhaps some of you should state if any parts at all are funny, tell us why the rest aren't, and then let us rework the UTD page to fit with uncyclopedia. --Xerika 18:21, 4 Sep 2005 (UTC) - The funny part is the "Have you seen this Cat" poster on the U-Dallas page. Although it's not that funny, and it's hard to tell if it's original. Other than that, they stink of vanity, and I vote both for speedy chopping. For it to be funny, it has to make someone like me laugh who's never been to and knows nothing about the school. Sir Famine, Gun ♣ Petition » 18:54, 4 Sep 2005 (UTC) Andrew Tritz Ok, this doesn't seem to be about anyone famous (37 google results), and I swear I can't tell if this is slander or vanity. Hence the invented word Slandanity. I am not sure, but I suspect John Kleeb, Carrie Whitcomb and Tamara of Greensboro might be related (or at least two). Also, I have this rash I can't get rid of... --Splaka CUN Bur. SG CM © 08:55, 4 Sep 2005 (UTC) - Update, VFD tag removed *pout*. ~S - I've had a word on their talk. - Very Weak Keep Seems to be slowly improving from a common or garden Vanity/Slandanity page. --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 19:22, 4 Sep 2005 (UTC) Delete. Vanity, and therefore it must die. Sir Famine, Gun ♣ Petition » 19:38, 4 Sep 2005 (UTC) Keep. At least half of it is true, so it can't be considered slander. --68.255.90.114 23:54, 4 Sep 2005 (UTC) Keep. I think it's all funny. --152.1.147.82 20:07, 4 Sep 2005 (EST) All these numbers, I think they're trying to communicate with us! Wait, they could be dangerous... anybody remember subtraction from grade school? --Spintherism 06:44, 5 Sep 2005 (UTC) Keep I don't think it needed to be re-written in the first place. It was sarcastic, but not vindictive or truly hateful. When we start equating pointed sarcasm with genuine "hatespeech", we're in trouble. Anyway, I think the new P.C. version is workable. Irredemable. Most of this anon author's contributions happen to be pretty homophobic tripe. Matthew Shepard was saved by Carlb, and I'm in the process of saving Liberal Christianity, though that one is also risking deletion. Nonetheless, this article is beyond saving. DELETE. --OneTopJob6 00:49, 4 Sep 2005 (UTC) - I'm not sure I understand. Which words that sound like "Gay rights" are you objecting to? --Marcos_Malo OUN S7fc BOotS Bur. | Talk 00:57, 4 Sep 2005 (UTC) - I don't know... It intends to make all intolerant dudes accept the goodness of a psychosexual disorder that leads to acts causing AIDS and various forms of cancer, because if they do not they shall receive ten months of sensitivity training in the bowels of Heck. Gay rights activists have been successful at such endeavours as making pederastry an alternative lifestyle, and bringing STD's to the public. Sarcastic in the vein of Evil Atheist Conspiracy and Gay agenda, or genuine hate-rambling in the vein of all the shit you see all over the net? You be the judge. But read the article first. --OneTopJob6 01:28, 4 Sep 2005 (UTC) - But which words are homophobes, i.e., words that sound alike but mean something different? --Marcos_Malo OUN S7fc BOotS Bur. | Talk 08:04, 4 Sep 2005 (UTC) I say redirect to Gay Rites and write that article. --Spintherism 04:49, 4 Sep 2005 (UTC) But doesn't all this discussion of gay rights marginalise left-handed gay people, already a one-in-ten minority within a one-in-ten minority? As for gay rites, these folks seem to be in that field of endeavour. --Carlb 16:53, 4 Sep 2005 (UTC) Rewritten. Since some fools voted me for WOTM, I figured I should make a token effort to actually write something. It is now completely different, and at least salvagable. It still needs a ton of work, but I hope I have provided enough muse that someone else can run with this. I wash my hands of it. Sir Famine, Gun ♣ Petition » 20:38, 4 Sep 2005 (UTC) Keep I don't think it needed a re-write in the first place. It was sarcastic, but not in a biting or truly hateful way. As soon as we start equating sarcasm with hatespeech, we're in a bad way. Anyway, I think the new P.C. Version can be salvaged. Vote for deletion Keep - submitting here was a mere formality. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 00:46, 4 Sep 2005 (UTC) Rewrite/Expand/Vandalize - I fixed it up a bit but it is still more of a sucky undic entry. --Splaka Bur. SG CM © 00:52, 4 Sep 2005 (UTC) Keep, but obviously leave the template. --Spintherism 04:44, 4 Sep 2005 (UTC) Redirect We already have one - Uncyclopedia:Vote this page for deletion. --Famine 17:44, 4 Sep 2005 (UTC) Sep 2005 (UTC) Warboards.org Yawn. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 07:17, 2 Sep 2005 (UTC) Delete/rewrite I don't know if this site is well known enough to make a rewrite worthwhile; if it isn't then just huff it. --Cap'n Ben CUN 08:07, 2 Sep 2005 (UTC) Homo Totally up to you guys. Wasn't that funny, but almost could be, if it was a mother in depth look at linguistic differences or somesuch. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 22:46, 1 Sep 2005 (UTC) - Keep Better than your ghey LOLOLOLO--Sir Elvis KUN FIC Bur. | Petition 00:01, 2 Sep 2005 (UTC) I say keep but maybe stick a rewrite template on there so anybody with teh mUzorZ won't be inhibited. --Spintherism 05:05, 2 Sep 2005 (UTC) Keep/rewrite It's not great, but it could be much, much worse. --Cap'n Ben CUN 08:05, 2 Sep 2005 (UTC) - OK, a rewrite started. It's up for grabs. Somebody else do something on it, please. The MuZorZ do be departed from me. --70.58.158.38 20:00, 2 Sep 2005 (UTC) - Looks redeemed. --OneTopJob6 00:58, 4 Sep 2005 (UTC) Vorlons Not a candidate for undictionarying as it was previously marked, it's a candidate for being hurt - badly. -- IMBJR 22:08, 1 Sep 2005 (UTC) Pave over I know a good contractor with lots of connections in the cement business. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 22:48, 1 Sep 2005 (UTC) Wound: I take MTU to stand for "delete within a week if unfixed or unmoved" (since that is what it says, but why do users never move their dics themselves). It is a worthy topic, and should have a good article. This is not it. --Splaka Bur. SG CM © 23:11, 1 Sep 2005 (UTC) Send it back to New Jersey in a coffin AmyNelson 18:44, 2 Sep 2005 (BST)) Ok, not that we care what wikipedia usually does, but Bubba Wales himself deleted the original image from the wikimedia servers citing abuse of fair use. :( --Splaka Bur. SG CM © 21:19, 31 Aug 2005 (UTC) Delete Yeah, if Bubba done it, we gots to done it too --Nytrospawn 16:23, 1 Sep 2005 (UTC) - Unless we figure out who took the picture and mercilessly mock him. But that wouldn't be nice. --Spintherism 04:20, 2 Sep 2005 (UTC)) Jerry Lewis Worthy subject, but article lacks a certain something, what the French call "Funny". --Marcos_Malo OUN S7fc BOotS Bur. | Talk 10:05, 31 Aug 2005 (UTC) - Jesus who ever failed to make something funny with this to work with needs to turn in their Licence to killAmuse Rewrite or delete--Sir Elvis KUN FIC Bur. | Petition 13:44, 31 Aug 2005 (UTC) - Delete They were following the directions to funny, but I think they got distracted by a shiny thing and took a wrong turn.--86.133.156.32 13:51, 4 Sep 2005 (UTC) Wikitravel Logo Controversey, Wikitravel Logo Controversey, and Wikitravel Logo Controversy Please kill all three, the redirects and the article. Incredibly lame article. You can delete the reference to the article from here as well, if you like. Thanks. --Steve Johnsenson 04:19, 31 Aug 2005 (UTC) - Mild Delete Article needs to provide more disinformation and improve humor quotient. Mere reference to vandals/vandal groups not funny. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 08:28, 31 Aug 2005 (UTC) - I thought it was going to be good in the end it turned out boring, probably TFAODP--Sir Elvis KUN FIC Bur. | Petition 13:42, 31 Aug 2005 (UTC) ) 4E75 Almost vaguely interesting, but apparently factual and not really funny. --Spintherism 04:03, 31 Aug 2005 (UTC) - Should be 4E75'ed --Marcos_Malo OUN S7fc BOotS Bur. | Talk 08:34, 31 Aug 2005 (UTC) - Move to TFAODP--Sir Elvis KUN FIC Bur. | Petition 13:36, 31 Aug 2005 (UTC) If you have a problem and no-one else can help, maybe you can huff: The A-hole team. -- Codeine 18:00, 31 Aug 2005 (UTC) A'Tuin Blatantly ripped from Discworld. Unoriginal and unfunny. --Spintherism 04:03, 31 Aug 2005 (UTC) - REwrite or Delete --Marcos_Malo OUN S7fc BOotS Bur. | Talk 08:36, 31 Aug 2005 (UTC) - Only worth keeping in protective habitat as an example of why plagarism is bad Mock or Delete --Sir Elvis KUN FIC Bur. | Petition 09:34, 31 Aug 2005 (UTC) - :Keep The "See Also" section saves the article. --Unissakävelijä 10:08, 4) It could maybe be an interesting article, but not with those names. Not even a chuckle, nothing. --ComaVN 14:18, 30 Aug 2005 (UTC) - Expand/Rewrite I liked the Extra Virgin Islands, but the rest were crap. I'm sure that there are plenty of interesting names based on existing articles that could be added though. --Spintherism 14:40, 30 Aug 2005 (UTC) - Slather with mayonaisse, then delete --Marcos_Malo OUN S7fc BOotS Bur. | Talk 00:08, 31 Aug 2005 (UTC) - Maybe just turn into a category or just delete --Sir Elvis KUN FIC Bur. | Petition 13:33, 31 Aug 2005 (UTC) - Rewrite and Merge with Category:Strange Place Names --Unissakävelijä 09:25, 4 Sep 2005 (UTC) - Delete Wow, this is awful. --Rcmurphy KUN 05:02, 6 Sep 2005 (UTC) Someone added the VFD tag, (presumably without knowing about here).--Sir Elvis KUN FIC Bur. | Petition 00:09, 30 Aug 2005 (UTC) - Keep It's not brilliant but it's not the worst eithioer it could probably do with some TLC (and please no lame jokes about bad teeth they are just lame)--Sir Elvis KUN FIC Bur. | Petition 00:09, 30 Aug 2005 (UTC) Shurely shome mishtake? It's not the best article on Uncyclopedia, but it certainly ain't deletion material. -- Codeine 08:10, 30 Aug 2005 (UTC) rewrite it has started to fall apart fom so many edits, it could do with been re-writen from the ground up.--193.63.129.163 23:51, 30 Aug 2005 (UTC) rewrite - like me, it just needs a bit of touching up 86.133.156.32 13:23, 4 Sep 2005 (UTC) Appears to be part of Politics of Lithuania but whomever wrote this forgot to make a funny. :( --Carlb 01:47, 29 Aug 2005 (UTC) - Delete if it's not factual then it's worse.--Sir Elvis KUN FIC Bur. | Petition 00:00, 30 Aug 2005 (UTC) The end Rgurgitation of HHGTTG. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 23:24, 28 Aug 2005 (UTC) Rewrite There's an attempt to make a variety of jokes but it's going to take more than a joke about the Holy Moustache to make this a good article. --Hessef 10:02, 29 Aug 2005 (UTC) - I've made it into a last page of uncyclopedia kind of article. Seemed like a funnier kind of plagiarism to me. --ComaVN 11:03, 29 Aug 2005 (UTC) btw there's already a The End article. Redirect, maybe? --ComaVN 11:06, 29 Aug 2005 (UTC) - Comment - There also appear to be multiple "you have reached the end of the Internet" texts, best appears to be the one in Masters of the Internet--Carlb 15:30, 29 Aug 2005 (UTC) - Add to the template:end or something worse may grow in it's place, furthermore we should destroy Max Kool--Sir Elvis KUN FIC Bur. | Petition 15:53, 29 Aug 2005 (UTC) Piet Hein Donner Is this a famous person, or is this just more slandanity? --Marcos_Malo OUN S7fc BOotS Bur. | Talk 22:18, 28 Aug 2005 (UTC) - He appears to be a Dutch politician. You can look him up here if you can read Dutch. --KP CUN 07:15, 29 Aug 2005 (UTC) - huff P. Donner is described in a two sentence article in wikipedia. Somehow, jokes about a person consuming vast amounts of drugs, even of that person happens to be the Dutch Justice Minister, don't do anything for me. --Hessef 10:09, 29 Aug 2005 (UTC) - Oh original, Dutch Politian smoking dope!!!, What he^^^^ said, furthermore we should destroy Max Kool--Sir Elvis KUN FIC Bur. | Petition 15:55, 29 Aug 2005 (UTC) ZFGC I place this here in the spirit of our kinder and gentler Uncyclopedia, though normally I'd huff it on sight. It seems to be vanity or slander or slandanity of a message board related to Zelda. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 22:14, 28 Aug 2005 (UTC) Delete Articles about GameFAQs, apart from GameFAQs, always have more shitness than ten "average" shitty articles combined. Wait, is shitness a word? Nevertheless, at least five such articles have already been locked down on CVP, so there's a pattern of shitness, or whatever the proper term is. --EvilZak 07:10, 29 Aug 2005 (UTC) Keep - Someone went to a lot of trouble to bring this more into line with the spirit of Uncyclopedia since I posted it here. Let's leave it alone and let them have their fun, since it no longer sticks out like a sore thumb. It's more like a thumb from an alternative universe, which should fit in just fine. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 08:06, 29 Aug 2005 (UTC) I Don't Care - But there sure is a piss-load of activity. --Jenlight 08:17, 29 Aug 2005 (UTC) Delete "NOTICE: Unless you know much of ZFGC, this wont make much sense." To me, that reads: Here's some ZFGC inside-jokes that we're going to post outside of ZFGC just because we can.--Hessef 10:02, 29 Aug 2005 (UTC) Keep We let Camp Fuck You Die survive as it wasn't hurting anyone, we could do with trying to persuade them spread their net a little wilder rather than just editing one article and it's related pages however.--Sir Elvis KUN FIC Bur. | Petition 12:06, 29 Aug 2005 (UTC) Replace with notice Ok, I am getting annoyed (and it is hard to annoy me), I think the page should be protected and replaced with a notice, for several reasons: - The page is full of links pointing to uncyclopedia namespace for each user (when it should probably point to user: namespace, and only for the registered users), and those pages keep getting created and then deleted (because they are zero content vanity mostly, and themselves get vandalized). - The users have caused problems, in some extreme cases blanking articles and admin user pages (darwin awards for them). - Vanity vandalizing other articles that are linked on their page, and possibly creating pages in the User namespace for nonexistant users (I am not sure). - They Removed the VFD notice' Indicating their disregard for us. I think it should be replaced with a notice similar to: "This page has been deleted, for abuse of Uncyclopedia. We realize that not everyone who has participated in this page has done so, but many users here have contributed to the direct or indirect distruption of the Uncyclopedia project, ranging from simply linking to nonexistant pages that others inadvertantly create, to wholesale blanking and ban earning. If you would like a whole wikiproject of your own, please see the wikicities homepage for information on creating your own wikiproject for free." And then after such is created we can replace the page with a shorter notice and point the users to the new wikicity. Thoughts? --Splaka Bur. SG CM © 03:57, 30 Aug 2005 (UTC) - Ok, having just perused the wikicities creation policy it is unlikely they'd get their own wikicity (unless it was for zelda in general). So maybe a link to the list of wiki hosts would be better. --Splaka Bur. SG CM © 04:11, 30 Aug 2005 (UTC) Unwikify the names at least. That sort of thing breeds moronic one-liner vanity pages. I personally couldn't care less if the page was deleted outright, but I won't vote for a delete because it might become a decent page. --Rcmurphy KUN 04:05, 30 Aug 2005 (UTC) Keep - Message in Rot26 follows: --Marcos_Malo OUN S7fc BOotS Bur. | Talk 07:03, 30 Aug 2005 (UTC) Oh, and I fully agree that vanity pages linked to this page should be deleted, as well as bans for vandalism. Long bans. A vigilant eye and an iron hand outside the page, but let the page be their playground, at least for the moment.--Marcos_Malo OUN S7fc BOotS Bur. | Talk 07:07, 30 Aug 2005 (UTC) Weak keep - looks harmless, and as for inside-jokes, I rather like the idea of this place harbouring such jokes even if most people don't get it - different strokes for ... (ack! I can't believe I was going to type that). -- IMBJR 11:51, 30 Aug 2005 (UTC) - - I've dewikified and restored the {{vfd}} tag, but this still looks basically like one of those pages that a group of users either creates about themselves or creates as vanity-attack against those around them. --Carlb 14:40, 30 Aug 2005 (UTC) Delete - Sorry it caused so much trouble. --DBRalph 18:47, 30 Aug 2005 (UTC) - Have you looked in to some of those private wiki hosts? That might be just what you need. --Splaka Bur. SG CM © 19:49, 30 Aug 2005 (UTC) Keep Why delete it guys are you that paffitic (spelled wrong) don't be baby boom booms just let them keep it. (21:28, 30 Aug 2005 62.45.76.161 (→ZFGC)) - Why? Because you forgot to make a funny? Because we really couldn't care less what computer your buddy intends to buy next? Because the entire article is of no potential interest to anyone outside the one small group? --Carlb 21:36, 30 Aug 2005 (UTC) Delete Do I even need to provide a reason? --ComaVN 21:43, 30 Aug 2005 (UTC) DELETE I do not appreciate the utter vanity of this page and the reputation it portrays of my site, zfgc.com. DELETE!!! ~metallica48423 Delete Vandanity. Besides, inside jokes are never funny --Nytrospawn It's become a parody of itself. It's actually gotten funny. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 22:24, 31 Aug 2005 (UTC) (Has no one used their decoder ring to decode the message?) WEAK Keep Ok, S'd out my previous rant, now that it is unwikified and the few problem users were caught it isn't so bad, and the only user links are to User: namespace, and as MM says, it is more in the spirit of Un. --Splaka Bur. SG CM © 22:28, 31 Aug 2005 (UTC) Bandurria All I have to say on this one is, WTF?--Cheeseboi 14:17, 28 Aug 2005 (UTC) - Expand/rewrite did a quick google, found the context and meh, it's alright. furthermore we should destroy Max Kool--Sir Elvis KUN FIC Bur. | Petition 15:57, 29 Aug 2005 (UTC) Homestar Runner Someone went to the trouble to expand this, so I didn't huff it. However, the article is crap. I doubt that anyone can write a decent Homestar Runner article without taking HR totally out of context. Homestar Runner Thompson, gonzo journalist? --Marcos_Malo OUN S7fc BOotS Bur. | Talk 04:11, 28 Aug 2005 (UTC) - Or maybe Jerk City Runner? --Marcos_Malo OUN S7fc BOotS Bur. | Talk 05:17, 28 Aug 2005 (UTC) - In such cases of these I would tend to go for the idea that Homestar is a real person that somehow ended up as a character character. Like Michael Jackson. -- IMBJR 10:18, 28 Aug 2005 (UTC) I threw a rewrite tag at it but I think somebody should probably put the page out of its misery if it doesn't show some life soon. --Hessef 07:34, 28 Aug 2005 (UTC) - I did a total rewrite, going with the Jerk City Runner gag. And by "gag", I mean choking on penis. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 17:02, 28 Aug 2005 (UTC) - Do whatever Marco Malo thinks best,furthermore we should destroy Max Kool--Sir Elvis KUN FIC Bur. | Petition 16:01, 29 Aug 2005 (UTC) Tammy Saris Sounds like it could be factual. - Rewrite?I'm trying to work out whether obsessive stalking of pr0n stars is funny. On balance, I think it probably isn't, unless it's done well. This isn't. --Squeezeweasel 09:26, 27 Aug 2005 (UTC) - Keep I laughed at the author more than I laugh at most articles. I especially liked the google world bit. --Spintherism 22:35, 27 Aug 2005 (UTC) - Keep We need more pron references on Uncyclopedia. --Marcos_Malo OUN S7fc BOotS Bur. | Talk 21:01, 27 Aug 2005 (UTC) - Delete But I NEED to delete it! If I don't delete it, I might explode. It happens to me sometimes... --Famine 00:58, 28 Aug 2005 (UTC) - Delete In my opinion, somethingawful.com is proof that it's funnier to write sarcastic and (mostly) factual articles about porn than satire. --Hessef 07:38, 28 Aug 2005 (UTC) - Abstain--Sir Elvis KUN FIC Bur. | Petition 13:35, 31 Aug 2005 (UTC) Images/thumb/6/67/180px-Fairy.jpg This is the second appearance of this (error/test?) page. I thought a good VFD tag would give the user a chance to explain their problem and ask us how to upload images (this vfd is of a test). If we just delete it, it'll probably come back. --Splaka Bur. SG CM © 01:43, 27 Aug 2005 (UTC) - In my opinion, it would be better to help him out with uploading images on his user talk page than on VFD. --EvilZak 06:56, 27 Aug 2005 (UTC) Hutspot Seems to have been created for Leiden just to host an external picture (all the text is also on the Leiden page). --Splaka Bur. SG CM © 20:48, 26 Aug 2005 (UTC) - Keep. As a highpoint of Dutch cuisine, an article about something as weird as hutspot could definitly have a future. ComaVN 21:40, 26 Aug 2005 (UTC) - Ok, thank you for uploading a pic and expanding (nothing like a good VFD to get the ball rolling!). PS: I added a bogus recipe, feel free to edit or change to be more funny and not just stereotypical (at least I left out mentions of a dyke or windmill). I'll withdraw my vote for deletion but leave this section (incase others wanna rant?) --Splaka Bur. SG CM © 21:50, 26 Aug 2005 (UTC) - Maybe you should replace the 420s with various other euphemisms. I'd say that would increase the funny by at least 8.6.--Spintherism 17:18, 27 Aug 2005 (UTC) Sydney Is this canineslandanity or what? --Splaka Bur. SG CM © 20:39, 26 Aug 2005 (UTC) - Expand there should be an article about Sydney and this could have promise if any sydney'ites want a crack at it.--Sir Elvis KUN FIC Bur. | Petition 21:47, 26 Aug 2005 (UTC) I had a go at this but am still not happy with it. Large entries like this require multiple inputs but it's a start.--Rollo75 Rinkeby Maybe be funny to Swedes, maybe not. You tell me. --Marcos_Malo S7fc BOotS | Talk 10:35, 26 Aug 2005 (UTC) - Delete. If it was only in Swedish, we could use the new tag Flammable and I made: {{qua?}} (see it work its magic in Chechnya) - mostly meant to give pages like that a chance to splain themselves before vfd (shameful plug)) --Splaka Bur. SG CM © 10:40, 26 Aug 2005 (UTC) Channel Tunnel Isn't the whole French Surrendering thing a little cliché by now? The other two sentences are crap also.--Spintherism 07:06, 26 Aug 2005 (UTC) PSP Firmware 2.0 Delete or merge with some other PSP article.--Spintherism 05:12, 26 Aug 2005 (UTC) Keep It's theoretically funny to people into videogames. --Marcos_Malo S7fc BOotS | Talk 10:46, 26 Aug 2005 (UTC) - I guess if it's some sort of majorly notable update, it makes sense to keep it, but it would just seem sort of silly to have a separate page for every update of a product instead of keeping it all in a subsection of some larger article. --Spintherism 01:55, 27 Aug 2005 (UTC) - The joke is, if anything, about the constant need for updates/upgrades and the marketing of those updates/upgrades. ~MM --15:28, 28 Aug 2005 (UTC) Discordian Society Accurate quotes from sources, even silly sources, don't seem appropriate.--Hessef 21:45, 25 Aug 2005 (UTC) I'd say a rewrite is in order. --Spintherism 21:52, 25 Aug 2005 (UTC) Purge, delete, and don't try again. The Illumninatus Trilogy is far more insane, fucked up, and disinformative than Uncyclopedia will ever be. It is a masterpiece in its own sheer insanity. Trying to add a non-factual, somewhate coherent article to Uncyclopedia based off a non-sensical, incoherent, and inconsistent book seems doomed to failure. In wikipedia, you could at least (try to) summarize the story. Here, we discourage that sort of thing. --Famine 02:04, 26 Aug 2005 (UTC) - Good point. --Spintherism 02:18, 26 Aug 2005 (UTC) Do you know who my father is? The title has promise. The article, so far, does not.--Hessef 04:22, 25 Aug 2005 (UTC) There's a few funny answers (and I added some myself). I recant my VfD vote.--Hessef 04:47, 25 Aug 2005 (UTC) Delete --Spintherism 06:02, 25 Aug 2005 (UTC) Keep -- 152.1.147.81 7:41, 4 Sept 2005 (EST) It's just shit. --Caiman 15:17, 18 Aug 2005 (UTC) I started to rewrite it, but probably didn't really improve it much. --Spintherism 17:58, 18 Aug 2005 (UTC) I'm going to give my rewritten version a vote for Delete --Spintherism 03:49, 19 Aug 2005 (UTC) Rewrite - It has promise. --Trevie 23:55, 19 Aug 2005 (UTC) Keep/Rewrite Should be given chance. --Unissakävelijä 12:32, 20 Aug 2005 (UTC) Environmentalism Was QVFDed --IMBJR 09:42, 18 Aug 2005 (UTC) Rewrite/Expand The first sentence has a little gleam of potential. --Spintherism 17:45, 18 Aug 2005 (UTC) Loony Tunes Was QVFDed --IMBJR 09:37, 18 Aug 2005 (UTC) - Rewrite - Definitely has potential. In my opinion, it needs to be slightly more Looney Tunes-related. --Trevie 18:39, 20 Aug 2005 (UTC) - Is this something that KP and MM want to use?--Sir Elvis KUN FIC Bur. | Petition 14:10, 31 Aug 2005 (UTC) Archived VFD Discussions - Uncyclopedia:Pages_for_deletion/archive1 - Uncyclopedia:Pages_for_deletion/archive2 - Uncyclopedia:Pages_for_deletion/archive3 - Uncyclopedia:Pages_for_deletion/archive4 - Uncyclopedia:Pages_for_deletion/archive5 - Uncyclopedia:Pages_for_deletion/archive6 - Uncyclopedia:Pages_for_deletion/archive7 - Uncyclopedia:Pages_for_deletion/archive8 (63kb, unusedimages discussion) - Uncyclopedia:Pages_for_deletion/archive9 - Uncyclopedia:Pages_for_deletion/archive10
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:Votes_for_deletion/old?diff=prev&oldid=157390
CC-MAIN-2015-27
en
refinedweb
#include <sys/types.h> #include <sys/buf.h> void clrbuf(struct buf *bp); Architecture independent level 1 (DDI/DKI). Pointer to the buf(9S) structure. The clrbuf() function zeros a buffer and sets the b_resid member of the buf(9S) structure to 0. Zeros are placed in the buffer starting at bp→b_un.b_addr for a length of bp→b_bcount bytes. b_un.b_addr and b_bcount are members of the buf(9S) data structure. The clrbuf() function can be called from user, interrupt, or kernel context. Writing Device Drivers for Oracle Solaris 11.2
http://docs.oracle.com/cd/E36784_01/html/E36886/clrbuf-9f.html
CC-MAIN-2015-27
en
refinedweb
Please meet Maarten Balliauw, JetBrains Technology Evangelist for PHP and .Net products. 1. Hi Maarten, we would like to welcome you to JetBrains and thank you for taking the time to speak with us. We know you drink lots of coffee every day and Visual Basic 4 was the first programming language for you, but for those who don’t already know you, can you tell us a bit more about yourself? Now that I think about it, my first programming language probably was AMOS Basic on the Amiga 1200. Apart from that, I’ve been doing web development for a while, started with PHP when I was 16 and moved over to ASP.NET and later ASP.NET MVC but kept doing PHP as well. Both languages and stacks have their disadvantages but also their merits. And it’s fun to combine! I’ve been doing all of that, first freelance and then with RealDolmen where I’ve had a lot of opportunities to work in this area with customers. Lately I’m really interested in the Windows Azure cloud platform and everything that has to do with HTTP APIs. On the personal side, I live together with my wife near Antwerp, Belgium. A great place to live and work! 2. Why did you decide to join JetBrains, and what will you be working on? To be honest, I was quite happy at my previous company, RealDolmen. Talking with Hadi Hariri gave me a very positive impression of JetBrains and when an opportunity to become an evangelist came by, I decided to jump the bandwagon. My main focus areas are going to be .NET and PHP. I’ll be working on spreading the word about all JetBrains products available in those languages such as ReSharper, dotTrace, dotPeek and PhpStorm. I’ll also be bugging JetBrains developers with feature requests as well, originating from community feedback and from working with those products myself. Expect blog posts, screencasts and such on all these products! And if you have feedback, bug me so I can bug others 3. What areas of PHP and .NET are you most interested in? My history with PHP covers a number of things. I’ve started as a script kiddie developing simple web applications, then I enjoyed building things with Zend Framework. When I started at RealDolmen, my world was mostly Microsoft based, sometimes with a layer of PHP on top. That’s where my interest in interoperability came to life. I’ve started building PHPExcel (). Then came PHPLinq (), my take on having .NET’s language integrated queries in PHP, PHPMEF and the official Windows Azure SDK for PHP. On the .NET side, the web stack and the cloud stack are my favorites. ASP.NET MVC and ASP.NET Web API, Windows Azure, and all combinations possible. Check GitHub and CodePlex and you’ll find some projects I either started working on or am contributing to. 4. What do you like most in JetBrains tools for .NET developers? The fact that those tools provide functionality that does not exist out of the box, like disassembling some assembly to find out why it’s behaving in a way you didn’t expect. dotPeek is great at doing that! These tools also make existing functionality better. Yes, there’s IntelliSense in Visual Studio, but ReSharper does a great job at improving it with things like autocompletion of properties on dynamic types, for example. Tools like YouTrack are great as well. Over the past year I find that my development is becoming more and more keyboard-oriented. The fact that I can assign a work item to myself, estimate it at one hour and move it to an “in progress” state by simply typing “assignee me estimation 1h in progress” is astonishingly fast and makes the issue tracker not come in my way of working on a problem. 5. What trends do you see in PHP as a language? Where is it heading? PHP has come a long way. I recall myself thinking “why are there no namespaces?” in a not-too-distant past. The past 2 minor versions though have brought tons of new language features, like namespaces and traits. The garbage collector has improved and now handles circular references way better than before (something that bit me a lot when building PHPExcel). The language has grown more mature, more people are contributing to it. Whether in the form of code or ideas, but it’s getting a lot more attention. I really like the direction it is going! 6. Some PHP developers don’t believe they need an IDE for PHP, i.e. you can be just as productive with a text editor. What is your opinion about it? Well… That’s a difficult one. I understand some people when they say they can do their job in VIM, and they are right. If you see them work, all I can say is they are fast and good at what they do. But even though they use all kinds of automation and macros, I see them doing a lot of things manually or by running additional bits and pieces on their code that come out of the box in an IDE. Why is that? Because IDEs tend to be built around “knowing” the language. They analyze your project and dependencies and try to figure out how everything fits together. Things like refactoring become a lot easier that way. 7. What are your hobbies and what do you like to do in your free time? I love working on open source projects and on some side projects in my spare time. Next to that, I’ve started brewing my own beer. It’s a fun thing to do as it’s different from my day-to-day activities. And if you do it right, you’re rewarded with a nice beer to drink with family and friends. Which brings me to the next thing I love to do: being with family. You don’t get to choose them but I’m lucky to have a great wife, great parents and a great brother who I like being with. They all like beer, so that combines with my new brewing hobby. I also ski and love to do a hike in the woods as well. 8. Thank you for your time and we look forward to the positive and productive work as a Technical Evangelist at JetBrains. Are there any upcoming events, books or topics that you would like to mention? The next event I’ll be speaking at is the Warm Crocodile Conference in Denmark. Organized by a great guy and a lot of good speakers so I’m really looking forward to going there. The last book I’ve worked on was Pro NuGet which I wrote with a friend. We’re considering writing a vNext of that one. Apart from that, keep an eye on my Blog, my Twitter and of course everything that comes out of JetBrains. Pingback: JetBrains is Going to the “City of Light” for TechDays France | JetBrains Company Blog Pingback: JetBrains .NET Tools Blog » JetBrains is Going to the “City of Light” for TechDays France Thank you for sharing your thoughts. I really appreciate your efforts and I am waiting for your next write ups thank you once again. Do you write/have any other blogs or have any plans for another blog?
http://blog.jetbrains.com/blog/2013/01/03/interview-with-jetbrains-evangelist-maarten-balliauw/
CC-MAIN-2015-27
en
refinedweb
I need some small help hopefully. I thought i could figure this out on my own but obviously since i am posting to this board for the first time, well you know. I am trying to get this program to calculate change tendered by breaking it down by dollars, half-dollars, quarters, dimes, nickels, and pennies. Reliaze this is just the beginning of the class so not allowed to use loops, modulus, etc.. this is why i am stuck. Code:#include <iostream>// cin, cout, <<, >> using namespace std; int main() { cout << "Enter the amount of purchase" <<"\n"; double purchaseAmt; cin >> purchaseAmt; cout << "Enter your payment amount given" <<"\n"; double paymentAmt; cin >> paymentAmt; double changeAmt = paymentAmt - purchaseAmt; cout << "\nYour change back is: " << changeAmt << "\n"; double halfDollar = changeAmt / .50; double quarters = changeAmt / .25; double dimes = changeAmt / .10; double nickels = changeAmt / .5; double pennies = changeAmt; cout << "\nhalfDollar back: " << halfDollar << "\n"; cout << "\nQuarters back: " << quarters << "\n"; cout << "\nDimes back: " << dimes << "\n"; cout << "\nNickels back: " << nickels << "\n"; return 0; }
http://cboard.cprogramming.com/cplusplus-programming/79250-calculating-change-basic-commands.html
CC-MAIN-2015-27
en
refinedweb
Originally posted by A Kumar: Hi all, int is a primitive class and Integer is its wrapper class... similiarly for double .... Now Consider this..... public class ClassLiteral { public static void main(String[] args) { Class d=double.class; Class Dt=Double.TYPE; Class Dw=Double.class; System.out.println("The val are "+d); System.out.println("The val are "+Dt); System.out.println("The val are "+Dw); } } The output is..: The val are double The val are double The val are class java.lang.Double now double here is primitive and the Double is wrapper..but they both are assigned to the "Class" variables.....How is that...??? Is that both double and Double(itz already a class) are classes... Double is in java.lang package... what abt double? Tx [ August 12, 2005: Message edited by: A Kumar ]
http://www.coderanch.com/t/400519/java/java/Primitive-Wrapper
CC-MAIN-2015-27
en
refinedweb
Details Description Hi Rahul, sorry - it's me again ... The current SCXML draft allows me to use transitions as an child of parallel-element. If I try this, the SCXML implemention ignores this transition. Maybe also a little bug ? Please refer to the patch attached to this bug (-: Regards Danny - Start engagingTest() go now WARN - Ignoring element <transition> in namespace "" at null:6:52 and digester match "scxml/parallel/transition" INFO - /running INFO - /running/state1 INFO - /running/state2 PASSED: engagingTest Rahul said: > Yes, in as much as not being implemented is a bug This is another > addition in the latest Working Draft which we haven't gotten to yet. > > Can you open an improvement in JIRA? Please attach the smallest test > case (test SCXML documents are good, but if you can attach a complete > JUnit test case that'd be even better – see the src/test tree for > existing tests for the codebase, you could name this one > transitions-05.xml and place it here [1], for example). Activity - All - Work Log - History - Activity - Transitions Thanks for the test, this has been fixed in trunk and a couple of test cases added. This has been fixed in the J6 branch (earlier), and needs to be backported to trunk, which I'll do next week. Also updating typo in subject. Setting fix version to next release (v0.9). changing testsuite for reproducing this bug Closing since 0.9 is released.
https://issues.apache.org/jira/browse/SCXML-82
CC-MAIN-2015-27
en
refinedweb
Have you ever written a configuration dialog where the user had to enter a filename or a path for your application to do something with? If you have, then you've probably added a button to browse for a file or folder with some rather trivial Click event handler: instantiate the appropriate FileDialog or FolderBrowserDialog, set its properties, call ShowDialog() and if the DialogResult is DialogResult.OK, then fill the associated TextBox with the filename or folder in the dialog. Click FileDialog FolderBrowserDialog ShowDialog() DialogResult DialogResult.OK TextBox No big deal, you might say. You're right. A standard procedure, not difficult at all, but something you have to do for the configuration dialog to be user-friendly. And when you have several configuration dialogs or several file paths/folders to configure, you'll end up writing the same trivial piece of code over and over again. So I thought I could simplify this repetitive task a bit and brush up my knowledge of extender providers... An IExtenderProvider-based approach appeared appropriate: binding two controls together, one acting as a "Browse"-button and the other one to receive the file or folder selected. IExtenderProvider I chose the approach to let the extender provider extend the control to receive the selected file/folder and to provide the button to start browsing. To develop such a component, you'll have to inherit IExtenderProvider and specify the name of the property you want to provide in a ProvidePropertyAttribute, like this: ProvidePropertyAttribute [ProvideProperty("BrowseButton", typeof(Control))] public class FolderBrowserExtenderProvider : Component, IExtenderProvider That way, I've specified that my extender provider will add an additional property named "BrowseButton" to some components and that this property is a Control. BrowseButton Control The next step is to say which components can be extended by the new extender provider. That's done by implementing CanExtend(). Given an extendee object (i.e., an object that is to be extended), your extender provider must tell whether it can provide the given property for this object. CanExtend() At first, I returned true if the extendee derived from TextBoxBase (that's what you usually have: you enter a filename/folder into a TextBox), but after a while, I thought I don't have to restrict the developer to this type. Any control has a Text property that can receive the selected file/folder name, so now this function returns true for Controls. true TextBoxBase Text Next, you'll have to handle setting and retrieving the new property. The extender provider doesn't really add a new property to an extendee in a way that you could actually write: myTextBox.BrowseButton = myButton; Instead, its responsibility is to keep a list of extendees and their assigned properties. Such a list entry is created by calling: Set<PropertyName>(extendee, property value) and queried by calling: Get<PropertyName>() In my case, the signatures for these methods look like this: public Control GetBrowseButton(Control extendee); public void SetBrowseButton(Control extendee, Control browseButton); I think most extender providers will use a HashTable to keep the assignments, that way it's really easy to keep track. My components do just this. HashTable So far, our extender provider doesn't do anything but remember which control is assigned as a BrowseButton to which other control. In order to actually start browsing when an assigned BrowseButton is clicked, we'll have to add a Click event handler to the BrowseButton once it's assigned. Although you'll rarely remove or reassign an extendee, it's a good idea to remove the event handler before the BrowseButton is reassigned or you'll end up with several dialogs popping up. The Click event handler finally is responsible for showing the appropriate File- or FolderBrowserDialog. In order to be able to visually design this dialog, each extender provider simply has a public readonly property FileDialog and FolderBrowserDialog, resp. File One little catch here: in order for the designer to correctly serialize the dialog's properties, I had to set the DesignerSerializationVisibilityAttribute to DesignerSerializationVisibility.Content, otherwise the reference to the dialog itself is serialized and not the dialog's properties. DesignerSerializationVisibilityAttribute DesignerSerializationVisibility.Content To use the extender providers in your forms, you should add them to your Toolbox. Then drag them to your Form and each of your controls will show an additional property BrowseButton with None as default value. None Simply select the control you want to click on for the appropriate browse dialog to appear and transfer the selected folder/file to the first control's text. A basic scenario would be to have a TextBox (textBox1) and a Button (button1) on your Form. textBox1 Button button1 Form You add a FolderBrowserExtenderProvider to the Form and set textBox1's BrowseButton property to button1 and you're done. FolderBrowserExtenderProvider You can assign a control to be its own BrowseButton, by the way. Unfortunately, because of a bug in the code generation of VS, you can't do this visually, or you'll get bogus code. Since there's no workaround or fix up to date, I'm throwing an exception when you try to make such an assignment in the designer. You can assign the BrowseButton in code without problems, though. If you're unsure, just take a look at the sample application included.
http://www.codeproject.com/Articles/9233/Extender-provider-to-simplify-file-folder-selectio
CC-MAIN-2015-27
en
refinedweb
I am trying: mycolor= "240,240,240" mycolor= webcolors.rgb_to_name ((mycolor)) But nothing comes of it. What could be the reason? - Answer # 1 function webcolors.rgb_to_name ()expects a tuple with three integers as input, and you feed it a string as input. Try this: mycolor= (240,240,240) mycolor= webcolors.rgb_to_name (mycolor) If your color is initially set as string , then it can be parsed: from ast import literal_eval mycolor= "240,240,240" if isinstance (mycolor, str): mycolor= literal_eval (mycolor) mycolor= webcolors.rgb_to_name (mycolor) I have already tried the 1st option -it did not help ,. Thank you for your helpАлексей Фобиус2021-11-25 11:08 makes you think that nothing comes of it?Александр2021-11-25 10:57:59
https://www.tutorialfor.com/questions-382149.htm
CC-MAIN-2021-49
en
refinedweb
. Combining When grouping and counting won't quite do it's time to start combining us some elements. Ruby provides various methods for combining Enumerables together, or perhaps even combining them all down into one element. Either way, some of my favorite methods are in here. #chain chain allows you to combine two Enumerators, which can be useful if you want to combine two Enumerables, like say we had our range above for card ranks: RANKS = %w(2 3 4 5 6 7 8 9 10 J Q K A).freeze Instead of typing that all out we could do this: ('2'..'10').chain(%w(J Q K A)) This can also take multiple Enumerables as arguments. I haven't often used chain, but have found a few minor usages of it on occasion when dealing with a lot of Enumerators. #cycle cycle lets you take a single collection and make it loop infinitely: [1, 2, 3].cycle.first(20) # => [1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2] If given an argument it will only cycle that many times: [1, 2, 3].cycle(3).to_a # => [1, 2, 3, 1, 2, 3, 1, 2, 3] If you don't give it a Block Function it'll return an Enumerator so you won't actually be allocating infinite cycles. We'll be getting more into Enumerator in another post. cycle is another function I don't frequently use, but can be useful for zip. #reduce / #inject reduce is interesting in that it's so powerful you could literally write any other Enumerable method using it. I even did a conference talk on this once. We won't get into that for now, but the idea is that it allows you to reduce a collection into one element. It's also called fold or foldLeft in other programming languages. Its usual example is a sum or product: [1, 2, 3].reduce(0) { |a, i| a + i } # => 6 [1, 2, 3].reduce(1) { |a, i| a * i } # => 6 reduce takes an optional argument as an initial accumulator, and if one isn't given it uses the first item of the collection. It then takes a Block Function which takes two arguments, the accumulator and the item. The result of each call to reduce becomes the new accumulator the next loop. Sound like a mouthful? It is, let's look at an example: [1, 2, 3].reduce(0) do |a, i| puts(a: a, i: i, new_a: a + i) a + i end # STDOUT: {:a=>0, :i=>1, :new_a=>1} # STDOUT: {:a=>1, :i=>2, :new_a=>3} # STDOUT: {:a=>3, :i=>3, :new_a=>6} # => 6 Note: puts(k: value)is one of my favorite debugging and example tricks. The Hashbraces are implied, and it gives extra information to help finding issues. So we can see that when the reduce function starts it has an accumulator of 0, a first item of 1, and that function returns 1 which becomes the next accumulator for the next loop. Eventually it runs out of items and 6 was the last value of the accumulator. An interesting observation is that empty value doesn't need to be a number. What if it were a String? Hash? Boolean, Array, etc etc. Then it gets real interesting. Consider map reimplemented in terms of reduce: def map(collection, &fn) collection.reduce([]) { |a, v| a << fn.call(v) } end reduce is insanely powerful, but at the same time there are less powerful methods which do the same job with less effort. Prefer methods which are more tailored for your task, like sum or tally. #each_with_object each_with_object is like a reversed reduce, except that the return value of each call to the Block Function is ignored and it only cares about the object it was iterating with: [1, 2, 3].each_with_object({}) { |i, a| a[i.to_s] = i } # => {"1"=>1, "2"=>2, "3"=>3} Oh, and make sure to mutate a because otherwise nothing will happen. Notice as well that the arguments are reversed. The way I remember this is the with_object after each, implying the object is the second argument. I still get that backwards more often than I'd care to admit. #zip zip allows us to combine two or more collections into one: a = [1, 2, 3] b = [2, 3, 4] c = [3, 4, 5] a.zip(b, c) # => [[1, 2, 3], [2, 3, 4], [3, 4, 5]] zip can be useful for merging multiple collections into one, especially when you have things like keys and values as separate variables you need to put together. It can also take a Block Function which specifies how to zip values: a = [1, 2, 3] b = [2, 3, 4] c = [3, 4, 5] a.zip(b, c) { |x, y, z| [z, y - x] } # => nil Oddly this returns nil and you have to use an outside array to capture these values. I cannot say I understand this as I might have expected this to behave like map, but it is as it is. Given that I would suggest avoiding this syntax, as it may be confusing. (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/baweaver/understanding-ruby-enumerable-combining-58eo
CC-MAIN-2021-49
en
refinedweb
Changelog¶ 4.2.0 (2021-11-18)¶ Field groups in forms. There is a new string groupmember on Fieldthat is used to group, a groupnamespace on Formyou can use to set attrs, tag, etc. Global styling for form groups is done via the FieldGroupclass. The bootstrap style has been updated to support this feature out of the box. Validation could be bypassed for forms if they have been saved via form.refine_done(). This became the default behavior for .as_view()in iommi 4.1 so that release is broken. 4.1.0 (2021-11-15)¶ as_view()calls refine_done, giving you a nice little performance win for free Introduce @iommi_renderview decorator. Use this to get correct behavior when using transactions by default in views. The iommi middleware will now produce an error if you try to use it incorrectly. Re-initializable select2 enhancement. If you dynamically modify with javascript you can call iommi_init_all_select2to re-initialize iommi select2 components Break out the select2 enhancement from the base style into a separate select2_enhanced_formsstyle, and added it to all the built in styles. If you have a custom style that extended baseyou must now also add select2_enhanced_formsto that style to get the same behavior as before. should_ignore_frame() is more robust against acrobatic frames. This is a rather obscure bug that won’t affect normal iommi usage. 4.0.0 (2021-11-01)¶ Dropped support for __in names of declared columns/fields/filters (deprecated since 3.2.0) Big internal refactoring. You should see some performance improvements accross the board. 3.4.0 (2021-10-22)¶ Ability to customize the Celland Cellsclasses used by Tablerendering Improved ability to customize Table.tbody. You can now add html after or before the rows from the table itself Template-based rendering should get iommi_evaluate_parameters as context. This was the case in some cases but not all, most notably when rendering a Fragment. 3.3.0 (2021-10-20)¶ Added snakeviz profiling (use it by passing _iommi_prof=snakeas a url parameter) Fixed stack traces in SQL tracing Fixed jump to code for several scenarios German translation fixes and updates Improved error message for invalid admin config write_nested_form_to_instancenow takes keyword arguments 3.2.2 (2021-10-01)¶ Fix bug causing any endpoint invocation of table fields to force a bind of the paginator (Which should be lazy) 3.2.1 (2021-09-24)¶ Fix enforcement on required=Trueon Field.multi_choiceand others where value is a list. 3.2.0 (2021-08-23)¶ Names with underscore are deprecated and will be removed in the next major version. This means you can no longer write this: class MyTable(Table): foo__bar = Column() You must now instead write: class MyTable(Table): some_name = Column(attr='foo__bar') Using foo__bar had some weird consequences like you not being able to later target that name without getting ambiguities in what __ meant. 3.1.1 (2021-06-18)¶ Expand ajax reload on filter change of tables to also include the bulk form. If not done, the bulk options are not in sync with the filtering. Remove reference to non-existant errors.htmlin bootstrap style Make Table.visible_rowsnon-lazy and not a property Table.rowsis no longer a property 3.1.0 (2021-06-09)¶ Form: Evaluate parameters now contain instance Use the same redirect logic for delete as create/edit. This means you can now use extra__redirectand extra__redirect_tofor delete too When stopping the live editing, a full runserver restart is now triggered so you get the new code you just edited 3.0.0 (2021-05-24)¶ Styles have a new sub_stylesparameter. This change greatly simplifies how you set up a custom style for your project if you want to customize the query form. IOMMI_DEFAULT_STYLEcan now be a Styleobject Breaking change: The horizontal styles are removed and replaced with the substyle feature. If you use for example 'bootstrap_horizontal', you need to replace it with 'horizontal'. Mixed case filter fields didn’t work Respect browsers preferred dark/light mode for profiler and sql tracer 2.8.12 (2021-05-18)¶ Major bug: tables based on querysets would implicitly use the django result cache. This resulted in the contents of the table not changing until after process restart 2.8.11 (2021-05-07)¶ Fragmentshould have @with_meta Fixed nesting tables inside forms. This would previously crash with a strange error message. Avoid infinite loop in sort_after on too large indicies 2.8.10 (2021-04-28)¶ Read defaults from model for initial of fields Increased log level of SQL logging from 11 to 21 (DEBUG+1 -> INFO+1) Added null factory for JSONField Fixed live editing code to use the same logic as ‘jump to code’ to find the code Fixed one case where live edit broke Prettier debug menu for live editing Prettier query help text (thanks new contributor flying_sausages!) 2.8.9 (2021-03-08)¶ Fixed bad html escape in SQL trace magnitude graph (this is not a security problem, as it’s a developer tool with very restricted access) Renamed freetext to freetext_search. It was too easy to collide with a user defined model with a field called freetext 2.8.8 (2021-02-23)¶ Automatically generating a Query from a model with a foreign key was broken in cases where the name field wasn’t the same as name field of the parent model 2.8.7 (2021-02-22)¶ Make it possible to pass a lambda to title of Page/Form/Table Improved error when trying to register an already registered style 2.8.6 (2021-02-19)¶ Revert to the old (pre 2.8.2) way of using search_fieldsto compose queries. The new approach failed for cases when there was a custom value_to_qdefinition. A proper fix needs to have a unified approach also when using .pkformat. 2.8.5 (2021-02-17)¶ Render title of Pageobjects. To turn off the rendering of the title pass h_tag__include=False. Removed the register_search_fields warning, it was 90% annoying and 10% useful 2.8.4 (2021-02-15)¶ Form: support passing instance as a lambda, even in combination with auto__model 2.8.3 (2021-02-14)¶ Removed bad assert that prevented passing instance as a lambda for auto__model of Form SQL trace was broken for postgres query_from_indexes should automatically generate filters for foreign keys. This especially affected the admin. 2.8.2 (2021-02-09)¶ Avoid using search_fieldswhen composing queries from model filter values. Always using the .pkfallback approach is more stable when the search field values might not be unique. This will remove a bunch of warnings that weren’t very helpful too. Fixed crash when setting query__include=Falseon Table capitalize()now handles safe strings properly. This will enable you to pass safe strings to titlefor example. Translation of Yes/No Fixed error message for register_search_fields Updated to fontawesome 4.7 Renamed live edit asset to not conflict with the name ‘custom’ which might be fairly common Nicer title in the admin for apps 2.8.1 (2021-02-01)¶ Auto generated tables had “ID” as the column name for foreign keys, instead of the name of the remote model. Profiler fixed: the bind and render of iommi objects that were handled by the middleware weren’t profiled Fixed live edit to work for views with URL arguments Handle settings.BASE_DIR as Path objects fix bulk__include = False on table Make DebugMenu created on demand to avoid setting of breakpoints when debugging your own code Models in admin are now in alphabetical order Fieldis not a Tag, so you can render a Formas a div if you want. The root menu item for the iommi admin was broken if you inherited from Admin Force the live edit view to be bootstrap. This avoids the live edit feature looking a big broken for your own custom styles. Minor bootstrap styling fix for non-editable fields 2.8.0 (2021-01-13)¶ Nested forms The paginator is now lazy. This means we can avoid a potentially expensive .count()database hit in many situations Added Table.bulk_container Table.post_bulk_edittakes evaluate parameters now Column.include=False implies that the column shouldn’t get anything in the bulk form. If you want bulk editing without a visible column use Column.render_column=False Support auto__include=[‘pk’] Fix reinvoke/reinvoke_new_defaults when shortcut is changed Date/datetime parsing bugs fixed after mutation testing Do not do form post_validation if we are in initial display mode Forms now don’t create a submit button by default. If you have a post handler you will get a submit button though. SQL trace bugfixes Custom raw_data callback should have same semantics as constant value (and parsed_data callback) Improved error message on disallowed unbound object access Documentation improvements, for example new pages for dev tools, and styles Live editing on .as_view()style views work in the case of an explicitly declared class Fixed bug where the ajax enhanced table didn’t work if you used Table.divor otherwise changed the tagof Table Fixed auto__model column/filter for CharFieldwith choices 2.7.0 (2020-12-14)¶ A Formcan now contain non- Fieldparts. Iterate over everything to render with form.partsand all the fields to be validated with form.fields. Fields that are not direct children are also collected, so you can easily add extra structure by wrapping a bunch of fields in a html.divfor example. Support Django’s CharField.choicesfeature You can now customize the name shown in the advanced search via Filter.query_name Form submit buttons ( Actions.submit) are now rendered as <button>not as <input type="submit">. Added SQL trace feature You can now apply styles on the root object. Example: root__assets__my_asset=Asset(...) Edit button only present in debug menu when the edit middleware is installed Added profile button to debug menu Make collected assets more accessible when rendering iommi in your own templating environment: you can now access them on the iommi objects: my_iommi_obj.iommi_collected_assets() Removed broken validation of sort columns. This validation prevented sorting on annotations which was very confusing as it worked in debug mode Make it possible to target the live edit page with styles (via LiveEditPage) The live edit view can be flipped between horizontal and vertical layouts The debug tree view is slimmed down (by not including endpoints and assets on lots of things) Field.raw_data_listis removed. You can know if it’s a list or not by checking is_list, so raw_datacovers the uses cases. Include decorators in live edit The debug jump to code feature should work for some more scenarios, and it will not display if it has no good guess. DEPRECATED: Field.choice_to_option. This is replaced by choice_id_formatterand choice_display_name_formatter 2.6.1 (2020-12-01)¶ Fixed live editing to work when distributing iommi 2.6.0 (2020-12-01)¶ Live editing of function based views in DEBUG. Works for both iommi views and normal django views. Added ajax enhanced table filtering You can now turn off the advanced mode on queries: Table(query__advanced__include=False) Queryhas two new refinables: filterand post_process. These are hook points if you need to further customize what query is generated. Enable profiling when DEBUG mode is on, even if you’re not staff Fixed multiselect on empty list Added missing get_errors()member function on Field Fixed select2 widget when the base url do not end with / Styling fixes. Primarily for bulma. 2.5.0 (2020-11-19)¶ include=False on a Column should imply not generating the query filter and bulk field. If you want to not render a column but still want the filters, use the render_column=False feature Added callbacks for saving a form: extra__pre_save_all_but_related_fields, extra__on_save_all_but_related_fields, extra__pre_save Added extra__new_instancecallback to Form.createfor custom object creation The errors list has been changed. You should always use add_error()to add an error on a Fieldor a Form It is now possible to call is_valid()and get_errors()and get what you expect from post_validationon Fieldand Form Query forms can now have additional fields, that are ignored by the filter handling code (when you want to do additional filtering outside of the query logic) Bug fixes with state leaking between binds Fixed jump to code Improved error message for is_valid_filter Added a nice error message if you try to shoot in styleor classas raw strings Fixed empty table message, and invalid query form messages 2.4.0 (2020-11-04)¶ The given rowsqueryset and filtering were not respected for the “Select all rows” bulk feature. This could produce some pretty bad bugs! Support custom bulk post_handlers on lists and not just querysets Tablehas a few new members: initial_rows: the rows you pass (or that gets created by auto__model) is stored unchanged here sorted_rows: initial_rows+ sorting applied sorted_and_filtered_rows: sorted_rows+ filtering applied visible_rows: sorted_and_filtered_rows+ pagination applied rows: this is now a property and will map to the old behavior which is the “most applied” member that exists Fixed passing dunder paths to auto__include. You got a weird crash if the target of the path was a foreign key. There are still issues to be resolved adjacent to this, but the base case now works. Fixed the “select all” feature for pages with multiple tables. 2.3.0 (2020-10-30)¶ Every part can now have assets that are added to the assets of the style and included in the head. This is particularly useful for bundling small pieces of javascript or css with the components that need them and thereby gets us closer to being able to write truly self contained “component”. As a proof of concept I did so for the tables javascript parts. The naming takes care of deduplication of assets. Only include select2 assets when needed (possible because of the point above) Filtering on booleans was very broken. It always returned empty querysets and didn’t produce errors when you tried to do stuff like my_boolean<3 It’s now possible to configure stuff on the freetext field of a query iommi will now grab the root page title from the text from Headerinstances in addition to Part.title Render date fields as such Fixed date and time formatting Support for optgroups in forms Make it possible to insert fields into the form of a query, and filters into a query Differentiate between primary and other actions. This should make iommi pages look more in line with the majority of design systems. If you have a custom style you probably want to add a style definition for Action.primary. Fixed a case of a silent overwrite that could be surprising. This was found during reading the code and has never happened to us in practice. Style fixes for bulma 2.2.0 (2020-10-16)¶ Fix so that style application does not alter definitions destructively. This could lead to some strange behavior if you tried to switch between styles, and it could leak over definitions between things you would not expect. The title of Tableis Nonewhen there is no model Assets as first class concept. You can now insert asset definitions into your style with assets__js=...instead of defining a base_template. This change also removes the base templates for all the built in styles as they are now obsolete. Made it easy to hide the label of a Field by setting display_name=None, or include=False 2.1.0 (2020-10-07)¶ Internationalization! iommi now has i18n support and ships with English, German and Swedish languages out of the box. We welcome more translations. Out of the box support for the Bulma CSS framework Make auto__includespecifications allow foreign key paths By default we now grab display_name from the model fields verbose_name (if applicable) Sometimes you got reordering of parts when doing a post to a form for example, this is now fixed The traversableargument to lambdas is now the leaf and not the root. This was a bug. Support reverse_lazyas url argument to MenuItem Two id attributes were rendered on the input tags in forms (thanks Benedikt Grundmann for reporting!) 2.0.1 (2020-09-22)¶ delete_object__post_handleraccessed instance.idwhich might be valid. It should have accessed instance.pkwhich is always valid. 2.0.0 (2020-09-22)¶ BACKWARDS INCOMPATIBLE: Stylemust now take a base_templateargument. This replaces the setting IOMMI_BASE_TEMPLATE. BACKWARDS INCOMPATIBLE: IOMMI_CONTENT_BLOCKis removed. Replaced by the content_blocksetting for Style. Allow table rows to be provided from a generator. (Disabling paginator) Added blocks ( iommi_head_contents, iommi_top, and iommi_bottom) as useful hook points to add custom data in the templates if you don’t need a totally new template but want to just customize a little bit. The default sort_key on a Column.foreign_key now looks at the searchable field of the remote field (‘name’ by default). This means by default sorting will mostly be more what you expect. Changed the error from get_search_field() for non-unique name to a warning. Removed <table> for layout in query advanced/simple stuff. Don’t warn for missing register_search_fields when attr=None Set admin to bootstrap by default. Added form for changing password. Used by the admin but also usable from your code. Added form for login. Used by the admin but also usable from your code. Fixed foundation styling for query form. Introduced Field.help. This is the fragment that renders the help text for a Field. This means you can now style and customize this part of forms more easily. For example set a CSS class: Field(help__attrs__class__foo='foo'. Use django default date and time formatting in tables. New shortcut for Table: Table.divfor when you want to render a Tableas a bunch of divs. This is useful because a Tableis really a view model of a sequence of stuff, not just a <table>. Possibility to set Actions.tagto Noneto not get a wrapping html tag. Added Table.outeras a tag you can style that encompasses the entire table part. Moved Form.h_tagrendering inside the form tag to make it stylable as a coherent whole. Grab html title from first part if no title is given explicitly. This means you’ll get the <title>tag filled more often by what you expect automatically. Templateinstances are now collected properly by Part. Read admin config from modules. The Admin is now opt in, not opt out. The admin is now MUCH prettier and better. Actions for Tableare now rendered above the table by default. Set actions_belowto Trueto render them the old way. Many misc improvements 1.0.3 (2020-08-24)¶ Changed Table.bulk_formto Table.bulk. The old name was a mistake as the name was always bulk. This meant that styling didn’t work like you expected and the pick feature also lead you down the wrong path. 1.0.2 (2020-08-21)¶ Support user inputted relative dates/datetimes Support more time formats automatically Introduced Filter.parse() which is a hook point for handling special parsing in the query language. The query language will no longer try to convert to integers, floats and dates for you. You have to specify a parse() method. Added traversablekey to evaluate parameters. Think of it like something similar to self. cell__formatnow gets all evaluate parameters like you’d expect Filters: If attris Nonebut you’ve specified value_to_qthen your filter is now included Various bug fixes 1.0.1 (2020-06-24)¶ Optimizations Use select2 as the default for multi_choice Improved usability: Make icon column behavior on falsy values more guessable Accidentally changed default style to foundation, change back to bootstrap Improved usability: Don’t fall back to default template name if the user specified an explicit template name: fail on TemplateNotFound Style on root uses correct base template Allow model fields called context 1.0.0 (2020-06-10)¶ Backwards incompatible: register_search_fieldsreplaces register_name_field. This new system is a list of field names and not just a single field. There is also new searching and filtering behavior based on this that means you will get better search results Backwards incompatible: field_nameas used by model factories is replaced with model_field_name. If you used register_factoryyou will need to change this. The field names on Column, Fieldand Filterare also renamed. Support fields named keys, valueor itemson Django models Added basic styling support for CSS frameworks Water and Foundation Fix include to make None mean False Change Filter.text to search using icontains instead of iexact by default in the basic search mode Change post_validation callback to receive standard evaluate parameters Improved help text for queries Field.radio was broken in the bootstrap style: it specified the input template as the template for the entire field, so the label got erased 0.7.0 (2020-05-22)¶ Fixed default text argument to Fragment Fixed issue where endpoint dispatch parameter was left over in the pagination and sorting links Parts that are None should not be collected. This affected the admin where it printed “None” below the “Admin” link. Added header for bulk edit form in tables Fixed textarea readonly when field is not editable Fixed is_paginated function on Paginator Add request to evaluate parameters Make evaluate and evaluate_recursive match even the **_case by default No dispatch command on a POST is invalid and will now produce an error Lazy bind() on members. This is a performance fix. Fixed bug where display_name could not be overridden with a lambda due to incorrect evaluate handling Removed Table.rendered_columns container. You have to look at the columns and check if they have render_column=False 0.6.2 (2020-04-22)¶ Fixed data-endpoint attribute on table 0.6.1 (2020-04-21)¶ Fixed tbody endpoint and added a div to make the endpoint easier to use 0.6.0 (2020-04-17)¶ Fixed an issue where fragments couldn’t be customized later if built with the htmlbuilder Actioninherits from Fragment. This should be mostly transparent. You can now pass multiple argument to Fragment/ html.foo(). So html.div('foo', 'bar')is now valid and creates two child nodes child0and child1 Uncouple auto__*from rowparameter. auto__only suggests a default. This avoids some confusion one could get if mixing auto__rows, auto__modelsand rowsin some ways. Fixed setting active on nested submenus where the parent had url None 0.5.0 (2020-04-01)¶ Include iommi/base_bootstrap.html and iommi/base_semantic_ui.html in package, and use them if no base.html is present. This improves the out of the box experience for new projects a lot Support mixing of auto__model/ auto__rowbased columns and declarative columns Support attrs__class and attrs__style as callables Added support for context namespace on Page, which is passed to the template when rendering (for now only available on the root page) Fixed how we set title of bulk edit and delete buttons to make configuration more obvious 0.4.0 (2020-03-30)¶ Fixed rendering of grouped actions for bootstrap Respect auto__include order boolean_tristate should be the default for the Field of a Column.boolean New class Header that is used to automatically get h1/h2/etc tags according to nesting of headers Table.rows should be able to be evaluated Added feature that you can type ‘now’ into date/datetime/time fields Feature to be able to force rendering of paginator for single page tables Paginator fixes: it’s now no longer possible to use the Django paginator, but the iommi paginator is more full features in trade. Removed jQuery dependency for JS parts Big improvements to the Menu component filters that have freetext mode now hide their field by default Added “pick” in the debug toolbar. This is a feature to quickly find the part of the document you want to configure Introduced Form.choice_queryset.extra.create_q_from_value Changed so that Query defaults to having the Field included by default Renamed BoundRow/bound_row to Cells/cells Major improvements to the admin Lots and lots of cleanup and bug fixes
https://docs.iommi.rocks/en/latest/history.html
CC-MAIN-2021-49
en
refinedweb
On Mon, 2003-02-17 at 21:56, Garrett Rooney wrote: > i don't have any real problem with moving the code in there, other than > it being kind of odd to have something associated with 'cancelation' in > the svn_delta namespace. it just seems more natural for it to be > called svn_cancel_get_cancellation_editor than > svn_delta_get_cancellation_editor. Although we must rigidly adhere to the svn_* prefix on our symbols in general, libsvn_delta does not deal exclusively with symbols named svn_delta_*. Observe that most of the symbols in svn_delta.h actually have the prefix svn_txdelta. On the other hand, the XML editor was named svn_delta_get_xml_editor() when we had it, and the name svn_cancel_get_cancellation_editor() seems dreadfully redundant, so I would favor svn_delta_get_cancellation_editor() anyway. --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org For additional commands, e-mail: dev-help@subversion.tigris.org Received on Tue Feb 18 04:09:52 2003 This is an archived mail posted to the Subversion Dev mailing list.
https://svn.haxx.se/dev/archive-2003-02/0841.shtml
CC-MAIN-2021-49
en
refinedweb
Logging in SAP Cloud Platform CloudFoundry Environment It’s a good approach for ABAPers to learn new technology like Open-Source tool by comparing them with the good-old stuff in Netweaver available many years ago. The aim of this blog is to give you a brief introduction how to do logging in your application code and view those logs via Kibana. Before start, let’s recall how the same requirement is done in ABAP. Logging in Netweaver There are several logging mechanisms available in Netweaver, for example application log or checkpoint supported by ABAP language itself. Since this blog is not a cookbook for ABAP logging, I will not list detailed step but only give some higlight. In order to do logging in your application code, you need to use a standard or create your own checkpoint group via tcode SAAB. I personally treat this checkpoint group as the application log instance to be created in SAP Cloud Platform soon. Here I use the standard checkpoint group DEMO_CHECKPOINT_GROUP. Press “Display <->Activate” to enter edit mode, set Logpoints as “Log”, date field as “Today”, meaning the log only takes effect within today. Create a configuration for user on whom the log must be switched on. Create a report with name ZCONTEXT, and log the value of system variable sy-cprog and reporting running mode(online or offline, stored in sy-batch) into the standard checkpoint group. Execute the report and go back to tcode SAAB to see the expected log record. CloudFoundry environment in SAP Cloud Platform The official guideline is documented in SAP Github. It is recommended to use slf4j(Simple Log Facade for Java). As it name hints, slf4j works as an interface to provide log functionality; the concrete logging implementation could be decided by developers based on project requirement. I have built a simple example and upload it to my github to demonstrate how to use logging. In my example I use log4j2 as slf4j implementation. 1. define version of slf4j and log4j2 in pom.xml of Java project. <properties> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <cf-logging-version>2.1.5</cf-logging-version> <log4j2.version>2.8.2</log4j2.version> <slf4j.version>1.7.24</slf4j.version> </properties> Maintain dependency for slf4j and log4j2 in pom.xml as well: <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>${slf4j.version}</version> </dependency> <dependency> <groupId>com.sap.hcp.cf.logging</groupId> <artifactId>cf-java-logging-support-log4j2</artifactId> <version>${cf-logging-version}</version> </dependency> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j-impl</artifactId> <version>${log4j2.version}</version> </dependency> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> <version>${log4j2.version}</version> </dependency> <dependency> <groupId>com.sap.hcp.cf.logging</groupId> <artifactId>cf-java-logging-support-servlet</artifactId> <version>${cf-logging-version}</version> </dependency> 2. Create log4j2.xml file in CLASSPATH folder( resources folder in my case): <Configuration status="warn" strict="true" packages="com.sap.hcp.cf.log4j2.converter,com.sap.hcp.cf.log4j2.layout"> <Appenders> <Console name="STDOUT-JSON" target="SYSTEM_OUT" follow="true"> <JsonPatternLayout charset="utf-8" /> </Console> <Console name="STDOUT" target="SYSTEM_OUT" follow="true"> <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} [%mdc] - %msg%n" /> </Console> </Appenders> <Loggers> <!-- Jerry: Log level: INFO --> <Root level="${LOG_ROOT_LEVEL:-INFO}"> <AppenderRef ref="STDOUT-JSON" /> </Root> <Logger name="com.sap.hcp.cf" level="${LOG_HCP_CF_LEVEL:-INFO}" /> </Loggers> </Configuration> 3. Create a new log instance ( do you still remember the checkpoint group in ABAP? ) in SCP Cockpit: I name it as “jerry-log”: 4. use slf4j API for logging: import org.slf4j.Logger; import org.slf4j.LoggerFactory; private static final Logger LOGGER = LoggerFactory.getLogger(ConnectivityServlet.class); In my example, I log the connection detail for ABAP On-Premise system in my code: 5. Log monitoring Click Logs tab in Cockpit, press button “Open Kibanna Dashboard”, and the corresponding log record for the connection detail mentioned in step 4 is visible there. The log instance “jerry-log” created in step 3 is also listed in the log detail message. Right now, if you are opening the Kibanna Dashboard, you are able to see all applications logs. So everybody can see everything. Is it possible to implement some kind of authorizations conecpt, so the views are restricted for certain users? Usecase: External agency want to only see the logs of their implemented application. Other applications are of no interest for them.
https://blogs.sap.com/2018/06/08/logging-in-sap-cloud-platform-cloudfoundry-environment/
CC-MAIN-2021-49
en
refinedweb
Settings for X12 Business Operations Summary X12 business operations have TCP Outbound Adapter” in Using TCP Adapters with Ensemble EnsLib.X12.Adapter.TCPOutboundAdapter has the following settings configured appropriately for X12: Connect Timeout has its usual default of 5 seconds, but has a maximum limit of 30,000 seconds. Get Reply is set to False. This means the adapter will wait to read a reply message back from the socket before returning. Response Timeout has a default of 30 instead of its usual 15, and has a maximum limit of 30,000 seconds. Auto Batch Parent Segs (File and FTP only) If True, when writing a document that has a batch parent, output the batch header segments first, then child documents, then follow up with the batch trailer segments when triggered by the final batch header document object or by a file name change. If False, omit headers and trailers and output child documents only. The default for X12 is True. Default Char Encoding Specifies the desired character set of output data. Ensemble automatically translates the characters to this character encoding. For X12 output, the default is Latin1. See “Default Char Encoding” in “Settings for X12 Business Services.” Failure Timeout The number of seconds during which to continue retry attempts. After this number of seconds has elapsed, the business operation gives up and returns an error code. X12 business operations automatically set this value to –1 for never time out to ensure that no X12 document is skipped. File Name (File and FTP only) Output file name. This setting can include Ensemble time stamp specifiers. If you leave File Name blank, the default value is %f_%Q where: %f is the name of the data source, in this case the input filename _ is the literal underscore character, which will appear in the output filename %Q indicates ODBC format date and time In substituting a value for the format code %f, Ensemble strips out any of the characters |,?,\,/,:,[,],<,>,&,,,;,NUL,BEL,TAB,CR,LF, replacing spaces with underscores (_), slashes (/) with hyphens (-), and colons (:) with dots (.). For full details about time stamp conventions, including a variety of codes you can use instead of the default %f_%Q, see “Time Stamp Specifications for Filenames” in Configuring Ensemble Productions. No Fail While Disconnected (TCP only) If True, suspend counting seconds toward the Failure Timeout while disconnected from the TCP server. This setting does not apply if Failure Timeout is –1 or if Stay Connected is 0. Reply Code Actions (TCP only) When the adapter setting Get Reply is True, this setting allows you to supply a comma-separated list of code-action pairs, specifying which action the business operation will take on receipt of various types of acknowledgment documents. The format of the list is: code=action,code=action, ... code=action Where code represents a literal value found in field TA1:4, AK5:1, or AK9:1 of the acknowledgment document. The following table lists the expected values for code. The following values for action may be used alone or combined to form strings. S is the default action if no other is given, except for A whose default action is C: The default value for this setting string is: A=C,*=S,~=S,I?=W This means: A=C — When the action is accepted, treat the document as Completed OK. I?=W — When the reply ControlId does not match the ControlId of the original document, log a warning but treat the document as Completed OK. *=S,~=S — In all other cases, including when replies that do not contain a TA1, AK5 or AK9 segment, suspend the document, log an error, and move on to try the next document. Separators A string of separator characters which Ensemble assigns to X12 separators in left to right order as described below. If the Separators string is empty, the default is to use the current default separators and segment terminators for X12, plus a carriage return (ASCII 13) and line feed (ASCII 10). *:\a~\r\n An X12 document uses special characters to organize its raw contents. These characters may vary from one clinical application to another. For non-empty values of Separators, positions 1 through 3 (left to right) are interpreted as follows: Data Element Separator (ES) Component Separator (CS) Data Element Repeat Separator (RS) The default values for positions 1 through 3 are: * (asterisk) : (colon) \a (record separator) For Separators, you must supply a string of three characters which Ensemble assigns to X12 separators in left to right order: ES, CS, RS, as described in the previous list. Any characters in positions 4 through 6 override the default segment terminator character, which is ~ (tilde). You may specify from 0 to 3 characters in positions 4 through 6 using the following: \r for the carriage return (ASCII 13) \n for the line feed (ASCII 10) \a for the array record separator (ASCII 30) You can use \x in positions 1 through 3 if you need to specify segment terminators in positions 4 and higher but want your output documents to use fewer than 3 separators. Separators designated by \x in positions 1 through 3 are not used. The purpose of \x is simply to extend the length of the list of separators so that position 4 is interpreted correctly as the first segment terminator. Validation Any non-empty string triggers basic validation of the outgoing document. If the Validation field is left empty, no validation of the outgoing document is performed.
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EX12_settings_bo
CC-MAIN-2021-49
en
refinedweb
Python Git: Learning about Git, Git Repositories and GitPython Python is a popular, high-level programming language. The language is meant to be simple and readable, both on the small and large scale. The latest major version of Python, Python 3.0, was released in 2008. It is not backwards compatible with the earlier versions and has several new major features. Python supports multiple programming paradigms, like object-oriented programming, structured programming, aspect-oriented programming, functional programming and logic programming. The language has a good garbage collector and it also supports Unicode. One of the unique features of Python is that the language lets you do more with little code, unlike other languages like C and Perl. Python programming is all about finding a single obvious way to carry out a programming task, instead of searching and coding in multiple ways, like they do in Perl. This makes Python an easy language to learn, even for beginners. You can take our Python programming course to get started with the language. In this tutorial, we’re going to take a look at Git, Git repositories and GitPython, a python library that lets you handle Git repositories. You need to be familiar with the basics of Python to understand it. What is Git? Git is a distributed version control system software product. It lets you create and manage Git repositories. The software was developed by Linus Trovalds in 2005. While originally intended for Linux, the software has been ported to other major operating systems, like Windows and OSX. Git is compatible with Python, as well as some of the other major programming languages like Java, Ruby and C. C was the original language it was written in. The purpose of Git is to manage a set of files that belong to a project. As the project is developed, its files change over time. Git tracks these changes and stores them in a repository, which is a typical data structure (it can handle large amounts of easy-to-retrieve data). If the user dislikes a change or a set of changes made in the project, he can use Git to rollback those changes. For example, if you were working on a project in Python, Git would take a snapshot of your source code at regular intervals. If you don’t like your recent coding, you can use Git to revert to an earlier state in the project. What is a Git Repository? A Git repository contains a set of files, and is itself a file that is stored in a subdirectory (.git) alongside the files of the project. There is no central repository that is considered to be the main repository, like in other software systems. At any given time, there exist several different repositories that are a snapshot of the project you are currently working on, and they are all given different version names. You can learn more about Git basics in this course. A user can choose to copy (clone) and even switch between different versions using Git. The lack of a central repository makes Git a “distributed” version control system. The sets of files a repository stores are actually commit objects and a set of references to those commit objects. These references are known as heads. These commit objects are the main core of the repository- they mirror your project and you use them to revert back. A commit object will have a unique SHAI name that makes it possible to identify it. It will also contain references to point to parent commit objects. Every repository has a master head, and each repository can contain several heads. An active head is highlighted in uppercase letters while an inactive head is highlighted in lowercase letters. Git for Python: GitPython You can use Git with Python through the GitPython library. The GitPython library lets you react will high-level as well as low-level Git repositories. You can install the latest version of the software by typing: easy_install gitpython Alternatively, you can download it directly from here. To learn more about the structure of Python and Python libraries, we recommend you sign up for this beginners Python course. Creating a Repository You can use Git commands directly to create a repository: mkdir directoryname cd directoryname git init This will create an empty repository in which you can add files in the specified directory. Using GitPython GitPython lets you create objects that let you access your repositories. You can use an object model access to find commit objects, tree objects and blob objects. GitPython also has other features, like letting you gzip or tar objects, return stats and show information logs. We’ll show you a few basic commands that will help you create objects using GitPython. Please note that these commands are in no way comprehensive – you will need a thorough understanding of the Git software and Python to use GitPython to its fullest capacity. The architecture of the machine you’re on, available system resources as well as the network bandwidth you have access to will also influence how well you can utilize GitPython. Initializing a Repository Object: from git import * repo = Repo (“/path 1/path 2 /path 3”) This command creates a Repository object in your repository (directory path). You can use the repository object to find commit objects, trees and blobs. To find the commit objects present in your repository, type the following command: repo.commits () This brings up a list of commit objects (upto 10). You can specify which branches it can reach by inputting advanced commands. You can further retrieve tree objects and blob objects with GitPython. If you want a list of all possible usable commands, check out the GitPython source code here . To learn more about writing your own Python programs, you can take this course. And if you want to do something more fun than GitPython, try writing your own games in Python, with the help of this course. Recommended Articles Python vs C: A Beginner’s Guide Python SQLite Tutorial Python ‘assert’: Invariants in Code Top courses in Python Python students also learn Empower your team. Lead the industry. Get a subscription to a library of online courses and digital learning tools for your organization with Udemy for Business.
https://blog.udemy.com/python-git/
CC-MAIN-2021-49
en
refinedweb
? Total weight is 490 gr . Can SOLO carry this weight? is there any SONY QX1 mount available for SOLO ? such as image in previous page ? where i can find it ? @Michael Kaba What to you have to do to set up the QX1 and get its address? @Jermey Is the WiFI module you all were working on available for purchase? @Luke run following script: #!/usr/bin/python from pysony import SonyAPI QX_ADDR = '' # camera address camera = SonyAPI(QX_ADDR) camera.actTakePicture() it take a single shot. @Daniel McKinnon: Hey, love the idea! Ive been trying to figure out how to get the pysony working on my raspberry and could use some help. I'm a little new at python so its probably something really simple. I got the program all installed and I can connect to my QX1 through the WiFi, but when I try to type in a api into the command prompt, it does one of two things. It will either give me the TypeError: string indices must be integers, not str, or it will sit there and try but not give any feed back. Any help is appreciated! @ Jeremy, thanks for the update on delivery dates. Will your module link to the SOLO's WiFi connection and then use MavLink commands to the SOLO Tx and Android Tablet, I ask as I'm curious to know if there will be any range limits? @ Justin, yes, I believe the HX90V should work, as it's compatible with Sony's Remote API system. Check out this link that shows all of the cameras that are compatible with our WiFi Map module : @ Keith Sorry for the late response, we anticipate our WiFi module to be available by mid November.
https://diydrones.com/profiles/blogs/would-you-like-to-capture-professional-quality-photos-with-solo?commentId=7447824%3AComment%3A1405364
CC-MAIN-2021-49
en
refinedweb
SOLVED question: contours and components inside some rect - RafaŁ Buchner last edited by gferreira Small scripting question. (Hopefully this heavy rain of the questions will end up at some point) What is the easiest way to check if some contour or component in the glyph is inside some abstract rectangle? I have coordinates and size of the rectangle. Now I would like to check out if the objects shapes are inside it. Determining if points are inside the rect is easy. But when it comes to components and contours: something that have lines and curves is really hard. I've been trying to creating the function, that would take the contour or component as an argument, and return True or False value. (True if at least part of the shape is inside the rectangle). I've been trying to do it with the booleanOperators and I failed. Any help? By the way, I'm thinking that it could be a bug: from mojo.tools import union g = CurrentGlyph() union(g, g[0], g[1], roundCoordinates=None) gives Traceback (most recent call last): File "<untitled>", line 3, in <module> File "/Applications/RoboFont3.app/Contents/Resources/lib/python3.6/mojo/tools.py", line 84, in union File "/Applications/RoboFont3.app/Contents/Resources/lib/python3.6/mojo/tools.py", line 84, in <listcomp> File "/Applications/RoboFont3.app/Contents/Resources/lib/python3.6/fontParts/base/base.py", line 255, in naked File "/Applications/RoboFont3.app/Contents/Resources/lib/python3.6/fontParts/base/base.py", line 232, in raiseNotImplementedError NotImplementedError: The RSegment subclass does not implement this method. Two circular components could have overlapping bounds but have no visual overlap. The only option to check is to perform a remove overlap and look for changes. This means for components you have to decompose first. good luck! you can use the bounds of the contours or components to check if they overlap with the rectangle: def overlaps(contourOrComponent, box): left, bottom, right, top = contourOrComponent.bounds rectLeft, rectBottom, rectRight, rectTop = box xOverlap = right > rectLeft and left < rectRight yOverlap = top > rectBottom and bottom < rectTop if xOverlap and yOverlap: return True else: return False x, y, w, h = 118, 246, 104, 402 box = x, y, x+w, y+h g = CurrentGlyph() for c in g.contours: print(overlaps(c, box), c) for c in g.components: print(overlaps(c, box), c) regarding union: you need to use a glyph or a list of contours: from mojo.tools import union g = CurrentGlyph() union(g, [g[0]], [g[1]], roundCoordinates=None) Two circular components could have overlapping bounds but have no visual overlap. The only option to check is to perform a remove overlap and look for changes. This means for components you have to decompose first. good luck! @frederik true! but in this case we know that the frame is a rectangle – so checking the bounds works too. here’s a proof: g = CurrentGlyph() x, y, w, h = 206, -218, 358, 392 box = x, y, x+w, y+h translate(230, 270) for contour in g.contours: color = (0, 1, 0) if overlaps(contour, box) else (1, 0, 0) fill(*color) B = BezierPath() contour.draw(B) drawPath(B) stroke(0) strokeWidth(10) fill(None) drawGlyph(g) stroke(0, 0, 1) rect(x, y, w, h) oke, idd much simpler :) also take a look at fontTools arrayTools: sectRect, pointsInRect, pointInRect - RafaŁ Buchner last edited by Great! Thanks for the help! - RafaŁ Buchner last edited by @gferreira Hmm, I think it doesn't need to be true (Let's say that we have the shape of the glyph "C". The rect is inside of the letter: then it doesn't work) @RafaŁ-Buchner you’re right :) better to use the boolean glyph method then. This post is deleted!
https://forum.robofont.com/topic/581/question-contours-and-components-inside-some-rect
CC-MAIN-2021-49
en
refinedweb
TODO(MS-2346): Update documentation below. This directory demonstrates how you create modules with Dart and Flutter. At the moment this document assumes that every module gets built as part of the core fuchsia build and included in the bootfs. (More samples located in //topaz/examples/ui/) This example demonstrates how to create a minimal flutter module and implement the Module interface. It shows a simple flutter text widget displaying “hello” on the screen. You can run an example module without going through the full-blown session shell. The available URLs for flutter module examples are: hello_mod After a successful build of fuchsia, type the following command from the zx console to run the basemgr with the dev session shell: killall scenic # Kills all other mods. basemgr --session_shell=dev_session_shell --session_shell_args=--root_module=hello_mod A flutter module is a flutter app which uses ModuleDriver. Below we reproduce the contents of main() from that example: final ModuleDriver _driver = ModuleDriver(); void main() { setupLogger(name: 'Hello mod'); _driver.start().then((ModuleDriver driver) { log.info('Hello mod started'); }); runApp( MaterialApp( title: 'Hello mod', home: ScopedModel<_MyModel>( model: _MyModel(), child: _MyScaffold(), ), ), ); } To import a dart package written within the fuchsia tree, the dependency should be added to the project's BUILD.gn. The BUILD.gn file for the hello_mod example looks like this: import("//topaz/runtime/flutter_runner/flutter_app.gni") flutter_app("hello_mod") { main_dart = "main.dart" package_name = "hello_mod" fuchsia_package_name = "hello_mod" deps = [ "//topaz/public/dart/widgets:lib.widgets", "//topaz/public/lib/app_driver/dart", ] } There are two types of dart packages we can include as BUILD.gn dependencies. Any third-party dart packages, or regular dart packages manually written in the fuchsia tree. Import them with their relative paths from the <fuchsia_root> directory followed by two slashes. Third-party dart packages are usually located at //third_party/dart-pkg/pub/<package_name>. To use any FIDL generated dart bindings, you need to first look at the BUILD.gn defining the fidl target that contains the desired .fidl file. For example, let's say we want to import and use the module.fidl file (located in //peridot/public/lib/module/fidl/) in our dart code. We should first look at the BUILD.gn file, in this case //peridot/public/lib/BUILD.gn. In this file we can see that the module.fidl file is included in the fidl("fidl") target. fidl("fidl") { sources = [ ... "module/fidl/module.fidl", # This is the fidl we want to use for now. ... ] } This means that we need to depend on this group of fidl files. In our module's BUILD.gn, we can add the dependency with the following syntax: "//<dir>:<fidl_target_name>_dart" Once this is done, we can use all the protocols defined in .fidl files contained in this story fidl target from our code. Once the desired package is added as a BUILD.gn dependency, the dart files in those packages can be imported in our dart code. Importing dart packages in fuchsia looks a bit different than normal dart packages. Let's look at the import statements in main.dart of the hello_world example. import 'package:lib.app.dart/app.dart'; import 'package:lib.app.fidl/service_provider.fidl.dart'; import 'package:apps.modular.services.story/link.fidl.dart'; import 'package:apps.modular.services.module/module.fidl.dart'; import 'package:apps.modular.services.module/module_context.fidl.dart'; import 'package:lib.fidl.dart/bindings.dart'; import 'package:flutter/widgets.dart'; To import things in the fuchsia tree, we use dots ( .) instead of slashes ( /) as path delimiter. For FIDL-generated dart files, we add .dart at the end of the corresponding fidl file path. (e.g. module.fidl.dart) See the FIDL tutorial. Once an InterfaceHandle<Foo> is bound to a proxy, the handle cannot be used in other places. Often, in case you have to share the same service with multiple parties (e.g. sharing the same fuchsia::modular::Link service across multiple modules), the service will provide a way to obtain a duplicate handle (e.g. fuchsia::modular::Link::Dup()). You can also call unbind() method on ProxyController to get the usable InterfaceHandle<Foo> back, which then can be used by someone else. You need to explicitly close FooProxy and FooBinding objects that are bound to channels, when they are no longer in use. You do not need to explicitly close InterfaceRequest<Foo> or InterfaceHandle<Foo> objects, as those objects represent unbound channels. If you don't close or unbind these objects and they get picked up by the garbage collector, then FIDL will terminate the process and (in debug builds) log the Dart stack for when the object was bound. The only exception to this rule is for static objects that live as long as the isolate itself. The system is able to close these objects automatically for you as part of an orderly shutdown of the isolate. If you are writing a Flutter widget, you can override the dispose() function on State to get notified when you‘re no longer part of the tree. That’s a common time to close the proxies used by that object as they are often no longer needed. You need to have the correct .packages file generated for the dart packages in fuchsia tree. After building fuchsia, run this script form the terminal of your development machine: <fuchsia_root>$ scripts/symlink-dot-packages.py Also, for flutter projects, the following line should be manually added to the .packages file manually (fill in the fuchsia root dir of yours): sky_engine:<abs_fuchsia_root>/third_party/dart-pkg/git/flutter/bin/cache/pkg/sky_engine/lib/ You might have to relaunch Atom to get everything working correctly. With this .packages files, you get all dartanalyzer errors/warnings, jump to definition, auto completion features. For information on integration testing Flutter mods, see mod integration testing.
https://fuchsia.googlesource.com/fuchsia/+/3a2c9b130f545121abbc96f99745c50c560282db/docs/development/languages/dart/mods.md
CC-MAIN-2021-49
en
refinedweb
--- El jue 15-dic-11, Patrick Rapin <toupie300@gmail.com> escribió: > > So for your enumeration (note that I have added ALL > entry): > > typedef enum { > > UNKNOWN = > 0x00000000, > > FRONTONLY = 0x00000001, > > BACKVIDEO = 0x00000002, > > BACKSYSTEM = 0x00000004, > > TRIPLE > = 0x00000008, > > WINDOWS = > 0x00000010, > > ALL = 0x0000001F > > } BufferMode; > > We can have for example: > Fct ("FRONTONLY") --> 1 > Fct ("BACKVIDEO, TRIPLE") --> 0x82 > Fct "BACKSYSTEM|WINDOWS" --> 0x14 > Fct ("ALL ~TRIPLE, ~FRONTONLY") --> 0x16 Personally, this seems to me like a nice approach. As already pointed out, It is *better* than simple numeric mapping because: 1. Can check valid values. 2. Don't pollute namespace, don't use lua variables. 3. Still seems to be easy to use. One could even do this: mode = 'BACKSYSTEM' mode = mode .. '|BACKVIDEO' On the other hand, it is *worse* than simple numeric mapping because of all the string parsing this implies, just to ORing some constants. Knowing that 'premature optimization is the root of all evil', am I being too paranoid here? (as usual) Thanks everyone for all your answers :) Ezequiel.
https://lua-users.org/lists/lua-l/2011-12/msg00428.html
CC-MAIN-2021-49
en
refinedweb
Objectives - We keep on playing with direct current motors. - Introduce the Adafruit Motor Shield V1 . - We will see how to use it with the appropriate library. - Assemble a little rover with 4×4 traction . Bill of materials Controlling several DC motors In previous chapters we have seen how to handle typical DC motors. We have seen how to rotate them and how to change its speed of rotation, as well as how to reverse the direction of rotation using an integrated H-bridge like the L293D. And what does a normal man do when he manages to control a motor (especially one who is reading this)? Of course, he tries to drive several motors at once, let’s say 4 of them, and starts to think about making a small rover with 4-wheel drive and wireless remote control and …. Calm down! Let’s go by parts. At the moment we have only controlled one engine, although we know that the L293D has the ability to handle a second motor. We could design a circuit with two L293D to move 4 motors and maybe make a shield to assemble everything in the hump of our Arduino. But as I always tell you, if there is a need in this electronic world, there is always someone ready to sell us the solution conveniently. And as you can imagine, we are not the first to have this idea (nor will we be the last) nor is it an orginal solution. The market offers a variety of shields to control very interesting motors, depending on the size of the engines, their consumption, voltage and anything else you can imagine. As in this humble house we think that in order to learn we have to put our hands at work and preferably spending little money, we will present a simple motor shield that was designed by our friends of Adafruit and later abandoned in favor of a more advanced (and expensive) motor controller, whose relay has been taken over by the cloud of chinese manufacturers, providing us with an Arduino motor shield v1.0, for very little money, which allows us to start us spending the minimum. Let’s see its features: Adafruit Motor Shield V1.0 Here is a picture of the Motor Shield V1.0, which you can easily find at any supplier: Its main features are: - 4 H-Bridges built by using two L293D chips. - Up to 4 DC motors with bidirectional control and 8 bit speed selection. - 0,6 Amper maximum current intensity (although it accepts up to 1,2 Amper peaks) with over-heating protection. - Supports motors with an operation voltage between 4,5 and 25V. - We can attach another two servo or stepper motors. - It provides a motor power connection separated from the one of the shield to avoid noise and interferences. - Low price (Seriouslly). - Compatible with Arduino UNO and Megas at least. - There is an easy to use library to handle the motors. In short, it is a cheap and practical shield to handle small 5V DC motors , but it will fall short if you need to drive powerful motors, since the demanded current intensity will easily surpass the maximum 0.6 ampers that this Shield can provide. It is ideal for building autonomous robots with small motors and little weight and, above all, as a learning tool before investing a a lot of money in other more sophisticated options. Connecting the Motor Shield The shield connects directly the L293D H-Bridges to the PWM pins of Arduino and also incorporates an old acquaintance of ours, the 74HC595, a Shift Register, to save pins in the connection. The Shield reserves the pins 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. Pins 9 and 10 are used for the servos, in case we include them. Pins 2 and 13 are left free, as well as pins 0 and 1, that are used to communicate the Arduino with our PC via USB. Pins A0 to A5 are available for our use and remember that, if needed, you can use them as digital pins as well. There are connections for the terminals of the 4 motors, marked as M1, M2, M3 and M4 and it is advisable to follow the same criterion when connecting the terminals and the shield because otherwise you will get some of the motors turning the other way round (which is not very serious because it is easily solved). An interesting thing about this shield is that it allows us to separate the power supply of the motors from the Arduino power supply, to which we should be very grateful because motors generate a lot of electromagnetic interference that can make our Arduino behave erratically. That is why, whenever possible, it is convenient to separate both power supplies (although in an autonomous robot it will be difficult). To do this, we simply remove the power jumper and the power will be separated. This will be essential if you use 9 0 12V DC motors because your Arduino only operates at 5V (otherwise you will make an Arduino barbecue). We will connect a first motor to the shield in order to test it. First assemble the shield to your Arduino. I recommend using an Arduino MEGA so that we can keep on doing things in the future, but it also works any other. Connect the motor terminals to the screws A and B of the motor shield, leaving the center pin, marked as GND, free. - It is not important which terminal is connected to which pin, because the only thing that will happen if you reverse it is that the motor will turn the other way round. It is recommended that you keep the same criteria for all motors, because it will save you headaches. - As in this first example we will power the motor from Arduino and USB, do not forget to connect the power jumper, so that we power the motors straight from our Arduino. It would be easy to drive the motor via the shield directly, using the control pins, and we propose it as an exercise if you like it , but here we will now download an Adafruit library to handle the shield directly, which will allow us to abstract the detail of pins. The library we need is adafruit-Adafruit-Motor-Shield-library-8119eec, and to install it we follow the standard procedure. To use it, the first thing we have to do is to write these two statements: #include <AFMotor.h> AF_DCMotor Motor1(1); The first line includes the Adafruit library in our sketch. The second creates a motor object instance, connected in this case to Motor1 gate, and then we set it with a parameter that can take values from 1(motor M1) to 4 (motor M4). To set the speed of the motor we use: Motor1.setSpeed(200); // We set the speed of Motor1 Motor1.run(RELEASE); The first line set the motor speed at 200/255 of the speed limit, and the second line indicates that we want to leave the motor in neutral. If we want the motor to move forward we use the following statement: Motor1.run(FORDWARD); And to move it backward we use the following statement: Motor1.run(BACKWARD); And that is all we need to control a motor. If we want to make a simple sketch to move the motor forward a few seconds and then backward, we can write a sketch similar to this: #include <AFMotor1.h> AF_DCMotor1 Motor1(1); void setup() { Motor1.run(RELEASE); } void loop() { Motor1.run(FORWARD); delay (2000); Motor1.setSpeed(180); Motor1.run(BACKWARD); delay (2000); } Moving several motors simultaneously Let’s now connect 4 motors to our Motor Shield. We have around here a chassis with 4 wheels and motors that we will use as a base to build a four-wheel drive robot. But it doesn’t matter wichever other robot you can have, or even a frame with wheels, provided that we can attach to it our Arduino plus the motor shield. A hard cardboard box to which you can attach the motors and wheels works great. In our case we have put it on a pedestal, so we can test the different motors preventing it from running away. Let’s go with the test sketch. The first thing to do is to include the library and define the 4 instances of the motor objects: #include <AFMotor.h> AF_DCMotor Motor1(1); AF_DCMotor Motor2(2); AF_DCMotor Motor3(3); AF_DCMotor Motor4(4); We set the speed of the 4 motors inside the setup() fucntion: void setup() { Serial.begin(9600); // Set up Serial library at 9600 bps Motor1.setSpeed(255); Motor2.setSpeed(255); Motor3.setSpeed(255); Motor4.setSpeed(255); } And finally we will command the 4 motors to move back and forth simultaneously: Motor1.run(RELEASE); Motor2.run(RELEASE); Motor3.run(RELEASE); Motor4.run(RELEASE); delay (1000) ; Motor1.run(FORWARD) ; Motor2.run(FORWARD); Motor3.run(FORWARD); Motor4.run(FORWARD); delay (2000); Motor1.run(BACKWARD); Motor2.run(BACKWARD); Motor3.run(BACKWARD); Motor4.run(BACKWARD); delay (2000); As you can see, there is no difference from moving a single motor, it is only more tedious. Here you have the sketch: It is convenient that you test your prototype so that all the wheels turn in the same direction and no one turns in reverse, because it would be a problem when moving fordward. So we trust we have left the hardware of our robot ready to start programming it seriously. But it seems more convenient to end this chapter here and leave something for the next chapter, in which we will see how to move the robot and make it spin, apart from varying the cruising speed. Pinout description A question from a reader raised the issue of which pins were used and what for in this Motor Shield V1 and as it was a very reasonable question that was not previously covered , we have chosen to add this small annex specifying the pins that uses each motor and which are available otherwise . Bear in mind that there are no Arduino pins attached to the motors directly. The management of the motors is done via the shift register to save pins, therefore you must necessarily use the library to handle them. Summary - We have introduced the Adafruit Motor Shield V1 , which is very cheap and useful to move small motors. - We have seen its features and limitations, but it is always more useful to use a shield than building one ourselves. - We have installed the motor control library and have started to learn the foundations of programming DC motors. - We attached the motor shield to a small 4-wheel robot and tested that all worked as expected, leaving the hardware ready to start programming the movement of the robot. Give a Reply
http://prometec.org/dc-motors/v1-motor-shield/
CC-MAIN-2021-49
en
refinedweb
Hi, I heard about this awesome language from a friend in college. And it has been a joy to use, albeit a bit weird to get the syntax. Basically, instead of making a function call like foo(bar, baz, qux), you put the parenthesis out front and the function name first, with its arguments separated by white space. Here is how the aforementioned function call would look in real Lisp code: (foo bar baz qux) This means take foo, and apply it to the arguments bar baz and qux. Now how about a real example? Suppose that we want to implement the classic n! problem in Lisp. In the languages we know and love, this would look somewhat like this: def factorial(n): """ Return the factorial of n.""" if n <= 0: return 1 else: return n*factorial(n-1) Now, let's take this definition and make it Lispy! (defun factorial (n) "Return the factorial of n" (if (<= n 0) 1 (* n (factorial (- n 1))))) Now, I guess I've got some 'splaining to do. We'll take this chunk by chunk: 1. The defun macro basically tells your Lisp interpreter "Okay, interpreter, the next thing is going to be called a function called factorial. It will take as an argument a number n." 2. The "Returns the factorial of n" bit is a docstring, akin to what is in Python. 3. Now, we get to the bit of code that does the heavy lifting. This (if (<= n 0) part is a call to the if function. Its first argument is a call to the <= function, to see if n is <= 0. Note, however, that I did not close off the if list; if I did this, the forms that follow would not evaluate. 4. If the first condition, n <= 0 yields true, we simply return the atom 1 (atoms are essentially basic values in Lisp). This is the base case for our recursive call; 0! = 1 and we need to stop there, else we will recursively loop to infinity. 5. Now, we get to the last bit of code. a. We'll start out by starting a call to the * function. Its first argument is the value we passed in originally, n. b. Now, we make a call to the factorial function. c. Then, we call the - function of n-1. Since n! is defined as n*(n-1)!, this is our recursive call. 6. And now, we close off the lists of function calls; in Lisp, it seems to be customary to put all the closing parenthesis on the last line. And there you have it, an example of Lisp code. Not exactly revolutionary, but I think it might start an interesting discussion. To test the code and prove it, you'll need, naturally, a Lisp interpreter. As I am on Linux, I use SBCL, which I belive has been ported to Windows and Mac. Happy hacking!
http://forum.audiogames.net/viewtopic.php?pid=307759
CC-MAIN-2018-39
en
refinedweb
Problem with signal and slot I'm totally new to QT but I have studied lots of its' documents. However, when I come to code, lots of problem show up. But I just can't figure it out what happened to these code: @ #include "mainwindow.h" #include "ui_mainwindow.h" #include "model.h" MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent) { ui->setupUi(this); model *m= new model; connect(ui->horizontalSlider,SIGNAL(this->valueChanged(int)),m,SLOT(m.setTemp(double) ) ); } MainWindow::~MainWindow() { delete ui; } @ I don't know why the compiler is always complaining about the @connect() method@ QObject is an inaccessible base of 'model' I'll appreciate you if any examples code with explanations are provided. (signal and slots) Thanks! Hi and welcome to devnet, You have several errors: The signals and slots signature must match when using thie version of connect You must not give the object in SIGNAL nor in SLOT, just the method. What is model ? a private QObject ? I beleive you declare your class like this: @ class model : QObject @ But in that case you forgot the public keyword, otherwise it default to private inheritence which is not what you want. So it should be: @ class model : public QObject @ Then, like SGaist said, in the SIGNAL and SLOT, you have to put the exact signature, and the argument must match @ connect(ui->horizontalSlider,SIGNAL(valueChanged(int)),m,SLOT(setTemp(int) ) ); @ Which means you need to change setTemp to take an int and not a double. However, if you are using Qt5, i recommand the other syntax which allow automatic conversion of the argument from int to double: @ connect(ui->horizontalSlider,&QSlider::valueChanged,m,&model::setTemp ); @ SGaist, Olivier Goffart: Thank you so much. I really made the mistake which I make a private inheritance. Btw: I found the new sytnax documentation: Thanks!! Well, I haven't finished my design yet. My plan is to let user drag the slide to set the tempertaure variable. And the the mehtod setTemp(int) in the class model will call another signal method changeColor() to set a QWidget (I don't know what widget can I use, a label?) to show the color. Just like: @ #ifndef MODEL_H #define MODEL_H #include <QObject> class model:public QObject { public: model(); void setTemp(int temparature); private: double temparature; signals: void changeColor(); }; #endif // MODEL_H But I have several quesions here: @ The function arguments in the SIGNAL() and SLOT() should be equal but I don't have an argument for the method changeColor(); I want to use the method changeColor() to decide the color to be represented with some if else judgement. But I think it's a little redundant. I ask for a good design. Any good suggestion? Should I write the connect function in the MainWindow class or where? You can e.g. add a QColor parameter to changeColor so you have only once place that handles that. Yes, a QLabel is fine for that. Where will your QLabel be ? I will put my QLabel in MainWindow class. But I how can resolve the signal and slot problem? They don't have correspoding arguments. Then add a slot to your MainWindow that takes a QColor parameter and update the QLabel content in there. Ok, should I define the slot function in MainWindow or can I define the slot function in another class like A and then inheriting A? I don't want to put all the code together in one class~ If you are thinking about inheriting both from QMainWindow and from A then no, you can't you can only inherit from one QObject and it also must be the first class to be inherited from. All right. Thank you.
https://forum.qt.io/topic/38922/problem-with-signal-and-slot
CC-MAIN-2018-39
en
refinedweb
Free Open Source Electronic Document Management System Project description Mayan EDMS NG is a modern fork of Mayan EDMS focused on stability, perfomance and new features. Mayan EDMS is a document management system. Its main purpose is to store, introspect, and categorize files, with a strong emphasis on preserving the contextual and business information of documents. It can also OCR, preview, label, sign, send, and receive thoses files. Other features of interest are its workflow system, role based access control, and REST API. The easiest way to use Mayan EDMS is by using the official Docker image. Make sure Docker is properly installed and working before attempting to install Mayan EDMS. For the complete set of installation, configuration, upgrade, and backup instructions visit the Mayan EDMS Docker Hub page at: Hardware requirements - 2 Gigabytes of RAM (1 Gigabyte if OCR is turned off). - Multiple core CPU (64 bit, faster than 1 GHz recommended). Important links - Videos - Documentation - Paid support - Roadmap - Contributing - Community forum - Community forum archive - Source code, issues, bugs - Plug-ins, other related projects - Translations 3.0.2 (2018-03-22) - Fix event and document states apps migration depedencies. - Add the “to=” keyword argument to all ForeignKey, ManayToMany and OneToOne Fields. 3.0.1 (2018-03-22) - Remove squashed migrations. This Django feature is not yet ready for production use. - Fix “check for update” feature. - Add Makefile target to check the format of the README.rst file. - Fix carousel item height issues. - Place the page number summary at the bottom of the carousel pages. 3.0 (2018-03-19) - Fix permission filtering when performing document page searching. - Fix cabinet detail view pagination. - Update project to work with Django 1.11.11. - Fix deprecations in preparation for Django 2.0. - Improve permission handling in the workflow app. - The checkedout detail view permission is now required for the checked out document detail API view. - Switch to a resource and service based API from previous app based one. - Add missing services for the checkout API. - Fix existing checkout APIs. - Update API vies and serializers for the latest Django REST framework version. Replace DRF Swagger with DRF-YASG. - Update to the latest version of Pillow, django-activity-stream, django-compressor, django-cors-headers, django-formtools, django-qsstats-magic, django-stronghold, django-suit, furl, graphviz, pyocr, python-dateutil, python-magic, pytz, sh. - Update to the latest version the packages for building, development, documentation and testing. - Add statistics script to produce a report of the views, APIs and test for each app. - Merge base64 filename patch from Cornelius Ludmann. - SearchModel retrun interface changed. The class no longer returns the result_set value. Use the queryset returned instead. - Squash migrations for apps: acls(1-2), checkouts(1-2), converter(1-12), django_gpg(1-6), document_parsing(1-2), document_states(1-2), dynamic_search(1-3), motd(1-5), permissions(1-3), sources(1-16). - Update to Font Awesome 5. - Turn Mayan EDMS into a single page app. - Split base.js into mayan_app.js, mayan_image.js, partial_navigation.js. - Add a HOME_VIEW setting. Use it for the default view to be loaded. - Fix bug in document page view. Was storing the URL and the querystring as a single url variable. - Use history.back instead of history.go(-1). - Don’t use the previous variable when canceling a form action. Form now use only javascript’s history.back(). - Add template and modal to display server side errors. - Remove the unused scrollable_content internal feature. - Remove unused animate.css package. - Add page loading indicator. - Add periodic AJAX workers to update the value of the notifications link. - Add notification count inside a badge on the notification link. - Add the MERC specifying javascript library usage. - Documents without at least a version are not scanned for duplicates. - Use a SHA256 hex digest of the secret key at the name of the lockfile. This makes the generation of the name repeatable while unique between installations. - Squashed apps migrations. - Convert document thumbnails, preview, image preview and staging files to template base widgets. - Unify all document widgets. - Display resolution settings are now specified as width and height and not a single resolution value. - Printed pages are now full width. - Move the invalid document markup to a separate HTML template. - Update to Fancybox 3. - Update to jQuery 3.3.1 - Move transfomations to their own module. - Split documents.tests.test_views into base.py, test_deleted_document_views.py, test_document_page_views.py, test_document_type_views.py, test_document_version_views.py, test_document_views.py, test_duplicated_document_views.py - Sort smart links by label. - Rename the internal name of the document type permissions namespace. Existing permissions will need to be updated. - Add support for OR type searches. Use the “OR” string between the terms. Example: term1 OR term2. - Removed redundant permissions checks. - Move the page count display to the top of the image. - Unify the way to gather the project’s metadata. Use mayan.__XX__ and a new common tag named {% project_information ‘’ %} - Update logo. - Return to the same source view after uploading a document. - Add new WizardStep class to decouple the wizard step configuration. - Add support for deregister upload wizard steps. - Add wizard step to insert the document being uploaded to a cabinet. - Fix documentation formatting. - Add upload wizard step chapte. - Improve and add additional diagrams. - Change documenation theme to rtd. 2.8 (2018-02-27) - Rename the role groups link label from “Members” to “Groups”. - Rename the group users link label from “Members” to “Users”. - Don’t show full document version label in the heading of the document version list view. - Show the number of pages of a document and of document versions in the document list view and document versions list views respectively. - Display a document version’s thumbnail before other attributes. - User Django’s provided form for setting an users password. This change allows displaying the current password policies and validation. - Add method to modify a group’s role membership from the group’s view. - Rename the group user count column label from “Members” to “Users”. - Backport support for global and object event notification. GitLab issue #262. - Remove Vagrant section of the document. Anything related to Vagrant has been move into its own repository at: - Add view to show list of events performed by an user. - Allow filtering an event list by clicking on the user column. - Display a proper message in the document type metadata type relationship view when there are no metadata types exist. - Improved styling and interaction of the multiple object action form. - Add checkbox to allow selecting all item in the item list view. - Rename project to Mayan EDMS NG. 2.7.3 (2017-09-11) - Fix task manager queue list view. Thanks to LeVon Smoker for the report. - Fix resolved link class URL mangling when the keep_query argument is used. Thanks to Nick Douma (LordGaav) for the report and diagnostic information. Fixes source navigation on the document upload wizard. 2.7.2 (2017-09-06) - Fix new mailer creation view. GitLab issue #431. Thanks to Robert Schöftner (@robert.schoeftner) for the report and the solution. - Consolidate intial document created event and the first document properties edited events. Preserve the user that initially creates the document. GitLab issue #433. Thanks to Jesaja Everling (@jeverling) for the report. - Sort the list of root cabinets. Thanks to Thomas Plotkowiak for the request. - Sort the list of a document’s cabinets. - Display a document’s cabinet list in italics. GitLab issue #435. Thanks to LeVon Smoker for the request. - Install mock by default to allow easier testing of deployed instances. 2.7.1 (2017-09-03) - Support unicode in URL querystring. GitLab issue #423. Thanks to Gustavo Teixeira (@gsteixei) for the find. - Import errors during initialization are only ignored if they are cause by a missing local.py. Thanks to MacRobb Simpson for the report and solution. - Make sure the local.py created used unicode for strings by default. GitLab issue #424. Thanks to Gustavo Teixeira (@gsteixei) for the find. 2.7 (2017-08-30) - Add workaround for PDF with IndirectObject class. GitLab issue #417. - Shows the cabinets in the document list. GitLab #417 @corneliusludmann - setting. GitHub issues #256 #257 GitLab issue #416. - Add support for workflow triggers. - Add support for workflow actions. - Add support for rendering workflows. - extenstion when downloading a document version. GitLab #415. - Split OCR app into OCR and parsing. - Remove Folders app. - Use the literal ‘System’ instead of the target name when the action user in unknown. - Remove the view to submit all document for OCR. - When changing document types, don’t delete the old metadata that is also found in the new document type. GitLab issue #421. - Add tag attach and tag remove events. - Change the permission needed to attach and remove tags. - Add HTTP POST workflow state action. - Add access control grant workflow state action. - Beta Python 3 support. 2.6.4 (2017-07-26) - Add missing replacements of reverse to resolve_url. 2.6.3 (2017-07-25) - Add makefile target to launch a PostgreSQL container. - Use resolve_url instead of redirect to resolve the post login URL. - Make the intialsetup and performupgrade management tasks work with signals to allow customization from 3rd party apps. - PEP8 cleanups. - Add tag_ids keyword argument to the Source.handle_upload model method. GitLab issue #413. - Add overflow wrapping so wrap long titles in Firefox too. - Makes Roles searchable. GitLab issue #402. - Add line numbers to the debug and production loggers. Add date and time to the production logger. - Add support for generating setup.py from a template. GitLab #149 #200. - Add fade in animation to document images. 2.6.2 (2017-07-19) - Fix deprecation warning to prepare upgrade to Django 1.11 and 2.0. - Fix document page zoom. - Add support to run tests against a MySQL, Postgres or Oracle. - Oracle database compatibility update in the cabinets app. GitHub #258. 2.6.1 (2017-07-18) - Fix issue when editing or removing metadata from multiple documents. 2.6 (2017-07-18) - Fix HTML mark up in window title. GitLab #397. - Add support for emailing documents to a recipient list. GitLab #396. - Backport metadata widget changes from @Macrobb. GitLab #377. - Make users and group searchable. - Add support for logging errors during in production mode. Add COMMON_PRODUCTION_ERROR_LOG_PATH to control path of log file. Defaults to mayan/error.log. - Add support logging request exceptions. - Add document list item view. - Sort setting by namespace label and by global name second. - Sort indexes by label. - Fix cabinets permission and access control checking. - The permission to add or remove documents to cabinets now applies to documents too. - Equalize dashboard widgets heights. - Switch the order of the DEFAULT_AUTHENTICATION_CLASSES of DRF. GitLab #400. - Backport document’s version list view permission. - Improve code to unbind menu entries. - Renamed the document type permission namespace from “Document setup” to “Document types”. - Add support for granting the document type edit, document type delete, and document type view permissions to individual document type instances. - Improved tests by testing for accesses. - Increase the size of the mailing profile label field to 128 characters. 2.5.2 (2017-07-08) - Improve new document creation signal handling. Fixes issue with duplicate scanning at upload. 2.5.1 (2017-07-08) - Update release target due to changes in PyPI. 2.5 (2017-07-07) - Add view to download a document’s OCR text. GitLab #215 - Add user configurable mailer. GitLab #286. - Use Toasts library for screen messages. - Reduce verbosity of some debug messages. - Add new lineart transformation. - Fix SANE source resolution field. - About and Profile menu reorganization. - PDF compatibility improvements. - Office document coversion improvements. - New metadata type setup UI. - Duplicated document scan support. - Forgotten password restore via email. - Document cache disabling. - Translation improvements. - Image loading improvements. - Lower Javascript memory utilization. - HTML reponsive layout improvements. - Make document deletion a background task. - Unicode handling improvements. - Python3 compatilibyt improvements. - New screen messages using Toastr. 2.4 (2017-06-23) - Add Django-mathfilters. - Improve render of documents with no pages. - Add SANE scanner document source. - Added PDF orientation detection. GitLab issue #387. - Fix repeated permission list API URL. GitLab issue #389. - Fix role creation API endpoint not returning id. GitLab issue #390. - Make tags, metadata types and cabinets searchable via the dynamic search API. GitLab issue #344. - Add support for updating configuration options from environment variables. - Add purgelocks management command. GitLab issue #221. - Fix index rebuilding for multi value first levels. GitLab issue #391. - Truncate views titles via the APPEARANCE_MAXIMUM_TITLE_LENGTH setting. GitLab issue #217. - Add background task manager app. GitLab issue #132. - Add link to show a document’s OCR errors. GitLab issue #291. 2.3 (2017-06-08) - Allow for bigger indexing expression templates. - Auto select checkbox when updating metadata values. GitLab issue #371. - Added support for passing the options allow-other and allow-root to the FUSE index mirror. GitLab issue #385 - Add support for check for the latest released version of Mayan from the About menu. - Support for rebuilding specific indexes. GitLab issue #372. - Rewrite document indexing code to be faster and use less locking. - Use a predefined file path for the file lock. - Catch documents with not document version when displaying their thumbnails. - Document page navigation fix when using Mayan as a sub URL app. - Add support for indexing on workflow state changes. - Add search model list API endpoint. 2.2 (2017-04-26) - Remove the installation app (GitLab #301). - Add support for document page search - Remove recent searches feature - Remove dependency on the django-filetransfer library - Fix height calculation in resize transformation - Improve upgrade instructions - New image caching pipeline - New drop down menus for the documents, folders and tags app as well as for the user links. - New Dashboard view - Moved licenses to their own module in every app - Update project to work with Django 1.10.4. - Tags are alphabetically ordered by label (GitLab #342). - Stop loading theme fonts from the web (GitLab #343). - Add support for attaching multiple tags (GitLab #307). - Integrate the Cabinets app. 2.1.11 (2017-03-14) - Added a quick rename serializer to the document type API serializer. - Added per document type, workflow list API view. - Mayan EDMS was adopted a version 1.1 of the Linux Foundation Developer Certificate of Origin. - Added the detail url of a permission in the permission serializer. - Added endpoints for the ACL app API. - Implemented document workflows transition ACLs. GitLab issue #321. - Add document comments API endpoints. GitHub issue #249. - Add support for overriding the Celery class. - Changed the document upload view in source app to not use the HTTP referer URL blindly, but instead recompose the URL using known view name. Needed when integrating Mayan EDMS into other app via using iframes. - Addes size field to the document version serializer. - Removed the serializer from the deleted document restore API endpoint. - Added support for adding or editing document types to smart links via the API. 2.1.10 (2017-02-13) - Update Makefile to use twine for releases. - Add Makefile target to make test releases. 2.1.9 (2017-02-13) - Update make file to Workaround long standing pypa wheel bug #99 2.1.8 (2017-02-12) - Fixes in the trashed document API endpoints. - Improved tags API PUT and PATCH endpoints. - Bulk document adding when creating and editing tags. - The version of django-mptt is preserved in case mayan-cabinets is installed. - Add Django GPG API endpoints for singing keys. - Add API endpoints for the document states (workflows) app. - Add API endpoints for the messsage of the day (MOTD) app. - Add Smart link API endpoints. - Add writable versions of the Document and Document Type serializers (GitLab issues #348 and #349). - Close GitLab issue #310 “Metadata’s lookup with chinese messages when new document” 2.1.7 (2017-02-01) - Improved user management API endpoints. - Improved permissions API endpoints. - Improvements in the API tests of a few apps. - Addition Content type list API view to the common app. - Add API endpoints to the events app. - Enable the parser and validation fields of the metadata serializer. 2.1.6 (2016-11-23) - Fix variable name typo in the rotation transformation class. - Update translations 2.1.5 (2016-11-08) - Backport resize transformation math operation fix (GitLab #319). - Update Pillow to 3.1.2 (Security fix). - Backport zoom transformation performance improvement (GitLab #334). - Backport trash can navigation link resolution fix (GitLab #331). - Improve documentation regarding the use of GPG version 1 (GitLab #333). - Fix ACL create view HTML response type. (GitLab #335). - Expland staging folder and watch folder explanation. 2.1.4 (2016-10-28) - Add missing link to the 2.1.3 release notes in the index file. - Improve TempfileCheckMixin. - Fix statistics namespace list display view. - Fix events list display view. - Update required Django version to 1.8.15. - Update required python-gnupg version to 0.3.9. - Improved orphaned temporary files test mixin. - Re-enable and improve GitLab CI MySQL testing. - Improved GPG handling. - New GPG backend system. - Minor documentation updates. 2.1.3 (2016-06-29) -. - Fix GitLab issue #309, “Temp files quickly filling-up my /tmp (1GB tmpfs)”. - Explicitly check for residual temporary files in tests. - Add missing temporary file cleanup for office documents. - Fix file descriptor leak in the document signature download test. 2.1.2 (2016-05-20) - Sort document languages and user profile locale language lists. GitLab issue #292. - Fix metadata lookup for {{ users }} and {{ group }}. Fixes GitLab #290. - Add Makefile for common development tasks 2.1.1 (2016-05-17) - Fix navigation issue that make it impossible to add new sources. GitLab issue #288. - The Tesseract OCR backend now reports if the requested language file is missing. GitLab issue #289. - Ensure the automatic default index is created after the default document type. 2.1 (2016-05-14) - Upgrade to use Django 1.8.13. Issue #246. - Upgrade requirements. - Remove remaining references to Django’s User model. GitLab issue #225 - Rename ‘Content’ search box to ‘OCR’. - Remove included login required middleware using django-stronghold instead (). - Improve generation of success and error messages for class based views. - Remove ownership concept from folders. - Replace strip_spaces middleware with the spaceless template tag. GitLab issue #255 - Deselect the update checkbox for optional metadata by default. - Silence all Django 1.8 model import warnings. - Implement per document type document creation permission. Closes GitLab issue #232. - Add icons to the document face menu links. - Increase icon to text spacing to 3px. - Make document type delete time period optional. - Fixed date locale handling in document properties, checkout and user detail views. - Add new permission: checkout details view. - Add HTML5 upload widget. Issue #162. - Add Message of the Day app. Issue #222 - Update Document model’s uuid field to use Django’s native UUIDField class. - Add new split view index navigation - Newly uploaded documents appear in the Recent document list of the user. - Document indexes now have ACL support. - Remove the document index setup permission. - Status messages now display the object class on which they operate not just the word “Object”. - More tests added. - Handle unicode filenames in staging folders. - Add staging file deletion permission. - New document_signature_view permission. - Add support for signing documents. - Instead of multiple keyservers only one keyserver is now supported. - Replace document type selection widget with an opened selection list. - Add mailing documentation chapter. - Add roadmap documentation chapter. - API updates. 2.0.2 (2016-02-09) - Install testing dependencies when installing development dependencies. - Fix GitLab issue #250 “Empty optional lookup metadata trigger validation error”. - Fix OCR API test. - Move metadata form value validation to .clean() method. - Only extract validation error messages from ValidationError exception instances. - Don’t store empty metadata value if the update checkbox is not checked. - Add 2 second delay to document version tests to workaround MySQL limitation. - Strip HTML tags from the browser title. - Remove Docker and Docker Compose files. 2.0.1 (2016-01-22) - Fix GitLab issue #243, “System allows a user to skip entering values for a required metadata field while uploading a new document” - Fix GitLab issue #245, “Add multiple metadata not possible” - Updated Vagrantfile to provision a production box too. 2.0 (2015-12-04) - New source homepage: - Update to Django 1.7 - New Bootstrap Frontend UI - Easier theming and rebranding - Improved page navigation interface - Menu reorganization - Removal of famfam icon set - Improved document preview generation - Document submission for OCR changed to POST - New YAML based settings system - Removal of auto admin creation as separate app - Removal of dependencies - ACL system refactor - Object access control inheritance - Removal of anonymous user support - Metadata validators refactor - Trash can support - Retention policies - Support for sharing indexes as FUSE filesystems - Clickable preview images titles - Removal of eval - Smarter OCR, per page parsing or OCR fallback - Improve failure tolerance (not all Operational Errors are critical now) - RGB tags - Default document type and default document source - Link unbinding - Statistics refactor - Apps merge - New signals - Test improvements - Indexes recalculation after document creation too - Upgrade command - OCR data moved to ocr app from documents app - New internal document creation workflow return a document stub - Auto console debug logging during development and info during production - New class based and menu based navigation system - New class based transformations - Usage of Font Awesome icons set - Management command to remove obsolete permissions: purgepermissions - Normalization of ‘title’ and ‘name’ fields to ‘label’ - Improved API, now at version 1 - Invert page title/project name order in browser title - Django’s class based views pagination - Reduction of text strings - Removal of the CombinedSource class - Removal of default class ACLs - Removal of the ImageMagick and GraphicsMagick converter backends - Remove support for applying roles to new users automatically - Removal of the DOCUMENT_RESTRICTIONS_OVERRIDE permission - Removed the page_label field 1.1.1 (2015-05-21) - Update to Django 1.6.11 - 1.1 (2015-02-10) - Uses Celery for background tasks - Removal of the splash screen - Adds a home view with common function buttons - Support for sending and receiving documents via email - Removed custom logging app in favor of django-actvity-stream - Adds watch folders - Includes Vagrant file for unified development and testing environments - Per user locale profile (language and timezone) - Includes news document workflow app - Optional and required metadata types - Improved testings. Automated tests against SQLite, MySQL, PostgreSQL - Many new REST API endpoints added - Simplified text messages - Improved method for custom settings - Addition of CORS support to the REST API - Per document language setting instead of per installation language setting - Metadata validation and parsing support - Start of code updates towards Python 3 support - Simplified UI - Stable PDF previews generation - More technical documentation 1.0 (2014-08-27) - New home @ - Updated to use Django 1.6 - Translation updates - Custom model properties removal - Source code improvements - Removal of included 3rd party modules - Automatic testing and code coverage check - Update of required modules and libraries versions - Database connection leaks fixes - Support for deletion of detached signatures - Removal of Fabric based installations script - Pluggable OCR backends - OCR improvements - License change, Mayan EDMS in now licensed under the Apache 2.0 License - PyPI package, Mayan EDMS in now available on PyPI: - New REST API Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/mayan-edms-ng/
CC-MAIN-2018-39
en
refinedweb
An EJB that is a session bean can optionally implement the session synchronization interface, to be notified by the container of the transactional state of the bean. The following methods are specified in the javax.ejb.SessionSynchronization interface: public abstract void afterBegin() throws RemoteException The afterBegin() method notifies a session Bean instance that a new transaction has started, and that the subsequent methods on the instance are invoked in the context of the transaction. A bean can use this method to read data from a database and cache the data in the bean's fields. This method executes in the proper transaction context. public abstract void beforeCompletion() throws RemoteException The container calls the beforeCompletion() method to notify a session bean that a transaction is about to be committed. You can implement this method to, for example, write any cached data to the database. public abstract void afterCompletion(boolean committed) throws RemoteException The container calls afterCompletion() to notify a session bean that a transaction commit protocol has completed. The parameter tells the bean whether the transaction has been committed or rolled back. This method executes with no transaction context. In order for the container to invoke your bean implementation before and after every transaction, your bean must implement the SessionSynch interface. package employeeServer; import employee.*; import javax.ejb.SessionBean; import javax.ejb.CreateException; import javax.ejb.SessionContext; import java.rmi.RemoteException; import java.sql.SQLException; public class EmployeeBean implements SessionBean implements SessionSynch{ // Methods of the Employee interface public EmployeeInfo getEmployee (String name) throws RemoteException, SQLException { int empno = 0; double salary = 0.0; #sql { select empno, sal into :empno, :salary from emp where ename = :name }; return new EmployeeInfo (name, empno, salary); } public void updateEmployee (EmployeeInfo employee) throws RemoteException, SQLException { #sql { update emp set ename = :(employee.name), sal = :(employee.salary) where empno = :(employee.number) }; return; } // Methods of the SessionBean public void ejbCreate () throws RemoteException, CreateException {} public void ejbRemove () {} public void setSessionContext (SessionContext ctx) {} public void ejbActivate () {} public void ejbPassivate () {} public void beforeBegin(){ ... perform work ... } public void afterCompletion(){ ... perform work ... } }
http://docs.oracle.com/cd/A97335_02/apps.102/a83725/trans7.htm
CC-MAIN-2016-40
en
refinedweb
- Code: Select all import numpy as np import pandas as pd a = pd.DataFrame(np.random.rand(5,5),columns=list('ABCDE')) b = a.mean(axis=0) >>> b A 0.536495 B 0.522431 C 0.582918 D 0.600779 E 0.371422 dtype: float64 My application is to take the averaged values and insert them into another dataframe, e.g. - Code: Select all # Average parameters collected when "hour of day" (data.hod) is 13 tmp = pd.DataFrame() tmp = pd.concat( [tmp,data[data.hod==13].mean(axis=0)], axis=0) This is where it gets ticked off and gives the AttributeError: 'Series' object has no attribute '_data' I would expect there exists some way of averaging a dataframe that does not involve converting the output to a Series. I know this is doable when the dataframe is multi-indexed, but that is not done here. Does anyone know how to perform this operation?
http://python-forum.org/viewtopic.php?f=6&t=7116&p=8999
CC-MAIN-2016-40
en
refinedweb
{ C++ Knowledge level: Beginner Books read: 0 Book Currently reading: Beginning C++ Through Game Programming (Second edition) } ^ Accidently purchased the second edition, not the third...but i'm sure there isn't a big difference I'm trying to make a number guessing game and it works, but 1 bit of the code isn't working i'm trying to get the players previous score and tell him if he/she beat his old score, by making an int called timesPlayed and preScore, i will only have this code run if the player has played twice, and there is an old score to display...but I can't get the preScore(old score) to display...I'm stumped, if I wasn't I probably wound't be here asking for help... Can some one take a look at my code and tell me what I did wrong, and try and teach me so I don't make the same mistake again. Also could you give me tips on inproving my code, thank you. #include <iostream> #include <string> #include <cstdlib> #include <ctime> using namespace std; int main() { char playAgain = 'y'; while(playAgain == 'y') { srand(time(0)); int theNumber = rand() % 100 + 1; int tries = 0; int guess; int timesPlayed = 1; cout << "\tWelcome to Guess My Number\n\n"; do { cout << "Enter a guess: "; cin >> guess; tries++; if(guess > theNumber) cout << "\nToo High!\n\n"; if(guess < theNumber) cout << "\nToo low!\n\n"; }while (guess != theNumber); int score = 10000 / tries; cout << "\nThat's it! You got it in " << tries << " Guesses!\n\n"; cout << "Your Score: " << score; int preScore = score; ++timesPlayed; if(timesPlayed >= 2) { if(preScore > score) cout << "You Beat your old score of " << preScore << endl; if(preScore < score) cout << "Im sorry, but you did not beat your old score of " << preScore << endl; } cout <<"\n\nPlay Again?\n"; cout << "(y/n)\n\n"; cin >> playAgain; if(playAgain == 'y') cout << "\n\n\n\n\n\n\n\n\n\n\n\n"; } cout << "\n\nGoodbye, Play again some time."; } This post has been edited by GunnerInc: 31 July 2012 - 06:22 PM Reason for edit:: Removed font tag
http://www.dreamincode.net/forums/topic/287580-number-guessing-game-problem/
CC-MAIN-2016-40
en
refinedweb
User account creation filtered due to spam. Created attachment 27629 [details] test.f90 If "sqrt" is a generic type-bound procedure, not only something like a%sqrt() or a%sqrt(b) [for pass and nopass, respectively] should work but also a simple: sqrt(a) or sqrt(a, b) That is: The generic enter the normal generic namespace with the exception that use, only: type also imports the generic name for that type. See also: It is not obvious from the standard that this holds, but it is analog to ASSIGNMENT(=) and OPERATOR(...) which also act that way. [Which is supported in gfortran.] Additionally, the following statement (F2008,4.5.7.3 Type-bound procedure overriding) wouldn't make sense with a different interpretation of the standard: "If a generic binding specied in a type denition has the same generic-spec as an inherited binding, it extends the generic interface and shall satisfy the requirements specied in 12.4.3.4.5." (In reply to comment #0) > See also: Note: That link does not seem to work. (In reply to comment #0) > It is not obvious from the standard that this holds, but it is analog to > ASSIGNMENT(=) and OPERATOR(...) which also act that way. [Which is supported in > gfortran.] It is correct that gfortran supports this for ASSIGNMENTs and OPERATORs. However, there are problems, cf. PR 41951 comment 6 to 10. The two PRs might be fixable in one go. (In reply to comment #1) > (In reply to comment #0) > > See also: > Note: That link does not seem to work. Try: Slightly compactified test case: module type_mod implicit none type field real :: var(1:3) contains procedure :: scalar_equals_field generic :: assignment (=) => scalar_equals_field procedure, nopass :: field_sqrt generic :: sqrt => field_sqrt end type contains elemental pure subroutine scalar_equals_field (A, b) class(field), intent(out) :: A real, intent(in) :: b A%var(:) = b end subroutine elemental pure function field_sqrt (A) result (B) class(field), intent(in) :: A type(field) :: B B%var(:) = sqrt (A%var(:)) end function end module program test use type_mod, only : field implicit none type(field) :: a a = 4.0 print *, sqrt(a) end program (In reply to comment #3) > > > See also: > > Note: That link does not seem to work. > > Try: > > The correct google groups link would be: Btw, I'm not completely convinced yet that the code in comment #0 (and #4) is really legal. No one in the c.l.f. thread has brought up a quote from the standard which clearly shows that referencing a type-bound generic is legal without part-ref syntax. For me, to most convincing reference up to now is this quote from F08:12.4.3.4.5 (though it still sounds a bit 'cloudy' to me): NOTE 12.10 In most scoping units, the possible sources of procedures with a particular generic identifier are the accessible interface blocks and the generic bindings other than names for the accessible objects in that scoping unit. (In reply to comment #5) > Btw, I'm not completely convinced yet that the code in comment #0 (and #4) is > really legal. In any case, here is a simple draft patch, which makes the code in comment 4 work (at least when the ONLY clause in the USE statement is removed): Index: gcc/fortran/decl.c =================================================================== --- gcc/fortran/decl.c (revision 188334) +++ gcc/fortran/decl.c (working copy) @@ -8374,12 +8374,20 @@ gfc_match_generic (void) { const bool is_op = (op_type == INTERFACE_USER_OP); gfc_symtree* st; + gfc_symbol *gensym; st = gfc_new_symtree (is_op ? &ns->tb_uop_root : &ns->tb_sym_root, name); gcc_assert (st); st->n.tb = tb; + /* Create non-typebound generic symbol. */ + if (gfc_get_symbol (name, NULL, &gensym)) + return MATCH_ERROR; + if (!gensym->attr.generic + && gfc_add_generic (&gensym->attr, gensym->name, NULL) == FAILURE) + return MATCH_ERROR; + break; } Index: gcc/fortran/resolve.c =================================================================== --- gcc/fortran/resolve.c (revision 188335) +++ gcc/fortran/resolve.c (working copy) @@ -11125,6 +11125,26 @@ specific_found: return FAILURE; } + /* Add target to (non-typebound) generic symbol. */ + if (!p->u.generic->is_operator) + { + gfc_symbol *gensym; + if (gfc_get_symbol (name, NULL, &gensym)) + return FAILURE; + if (gensym) + { + gfc_interface *head, *intr; + head = gensym->generic; + intr = gfc_get_interface (); + intr->sym = target->specific->u.specific->n.sym; + intr->where = gfc_current_locus; + intr->sym->declared_at = gfc_current_locus; + intr->next = head; + gensym->generic = intr; + gfc_commit_symbol (gensym); + } + } + /* Check those already resolved on this type directly. */ for (g = p->u.generic; g; g = g->next) if (g != target && g->specific One problem with the patch in comment #6 is that it produces double error messages for type-bound generics, e.g. on typebound_generic_{1,10,11}.'. More than three years ago Tobias Burnus wrote >'. Any reason to keep this PR opened? Note that the tests now fail with Error: INTENT(OUT) argument 'a' of pure procedure 'scalar_equals_field' at (1) may not be polymorphic
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53694
CC-MAIN-2016-40
en
refinedweb
While reviewing ConfigurationFactory I stumbled on the namespace logic, may I ask who is actually using it ? It looks like an unnecessary complexity to me, imho it's easier to have several configuration descriptors rather than a single namespaced descriptor. What do you think ? Emmanuel Bourg --------------------------------------------------------------------- To unsubscribe, e-mail: commons-dev-unsubscribe@jakarta.apache.org For additional commands, e-mail: commons-dev-help@jakarta.apache.org
http://mail-archives.apache.org/mod_mbox/commons-dev/200410.mbox/%3C41767B73.1090208@lfjr.net%3E
CC-MAIN-2016-40
en
refinedweb
Newbie - Why doesn't this read or write me a file?990724 Feb 13, 2013 4:19 AM I am a complete newbie and am going through the java tutorials. I use Netbeans as my IDE which makes things easier, but why doesn't this code do anything? Edited by: EJP on 13/02/2013 15:19: added {noformat} Can anyone explain? i have spent hours trying different options.Can anyone explain? i have spent hours trying different options. import java.io.*; import java.util.Vector; @SuppressWarnings("empty-statement") public class ListDictionary { /** * @param args the command line arguments */ private Vector<String> list; private static final int INITIAL_SIZE = 200000; private static final int INCREMENT = 10000; public void ListDictionary() { list = new Vector<>(INITIAL_SIZE,INCREMENT); this.readFile("english-words-lowercase.txt"); this.readFile("engish-upper.txt"); this.TrimList(); this.writeFile(); } public void readFile(String fileName) { String line; try { RandomAccessFile raf = new RandomAccessFile(fileName,"r"); while ((line = raf.readLine())!= null) { list.add(line); } } catch (IOException e){ System.out.println("dictionary not found" + e); }; int listSize = list.size(); System.out.println(listSize + "words added"); } public void writeFile() { PrintWriter out = null; try { out = new PrintWriter(new FileWriter("dictionary1.txt")); for (int i=0; i<list.size();i++){ out.println(list.get(i)); } } catch (IOException e) { System.out.println(e.getMessage()); } finally { if (out != null) { System.out.println("Seems to have worked!"); } else { System.out.println("Not this time"); } } } public void TrimList() { list.trimToSize(); } public static void main(String[] args) { ListDictionary listDictionary = new ListDictionary(); } } Edited by: EJP on 13/02/2013 15:19: added {noformat} {noformat} tags: please use them. This content has been marked as final. Show 6 replies 1. Re: Newbie - Why doesn't this read or write me a file?EJP Feb 13, 2013 4:22 AM (in response to 990724)What exception is thrown? 2. Re: Newbie - Why doesn't this read or write me a file?Kayaman Feb 13, 2013 9:45 AM (in response to 990724)For one, you're not closing your streams.1 person found this helpful 3. Re: Newbie - Why doesn't this read or write me a file?r035198x Feb 13, 2013 12:11 PM (in response to 990724)If you intend to run your code by calling a constructor using1 person found this helpful then you need to have the logic in the constructor. new ListDictionary(); You have a method called which is not a constructor because of the void return type. public void ListDictionary() { Remove the void to make it a constructor. Currently the default constructor is being called which does nothing that you can see. 4. Re: Newbie - Why doesn't this read or write me a file?939520 Feb 13, 2013 3:22 PM (in response to 990724)When you get the constructor working, you may next need to include the path to where your files are, else your program may not find them.1 person found this helpful Example: from: new File("myFile.txt"); to: new File("C:/workspace/myDirectory/myFile.txt"); You can next read up on absolute (ie, the above) vs relative paths. 5. Re: Newbie - Why doesn't this read or write me a file?990724 Feb 15, 2013 5:51 PM (in response to 939520)Thanks to all that have replied. all your explanations have helped and i have been able to make the program work. Like i said, i am a newbie, so it has taken me a while to figure out relative and absolute paths so i apologies for the delay in responding. 6. Re: Newbie - Why doesn't this read or write me a file?939520 Feb 15, 2013 9:11 PM (in response to 990724)You also might consider changing this: public void readFile(String fileName) to this: public List<String> readFile(String fileName) this way, the function returns a list of lines from the file that another part of your program can use.
https://community.oracle.com/message/10856781
CC-MAIN-2016-40
en
refinedweb
Trying to simply compile a java program843810 Sep 16, 2009 11:43 PM Downloaded JDK 6 Update 16 with Java EE ( java_ee_sdk-5_07-jdk-6u16-windows.exe) the entire 161 megs. Does this have the javac.exe? I don't believe it does. If not then which download? Thanks Thanks This content has been marked as final. Show 13 replies 1. Re: Trying to simply compile a java program843810 Sep 16, 2009 11:50 PM (in response to 843810)Oh I forgot to mention, here's the message I keep getting, I also am running under vista if that makes any difference. --------------------Configuration: Program0201 - <Default> - <Default>-------------------- Error : Invalid path, \bin\javac.exe -source 1.5 -classpath "C:\Program Files\Xinox Software\JCreatorV4\MyProjects\Program0201" -d C:\Program" Files\Xinox "Software\JCreatorV4\MyProjects\Program0201 @src_program0201.txt" Process completed. 2. Re: Trying to simply compile a java programEJP Sep 17, 2009 12:02 AM (in response to 843810) Downloaded JDK 6 Update 16 with Java EE ( java_ee_sdk-5_07-jdk-6u16-windows.exe) the entire 161 megs. Does this have the javac.exe?All JDKs have the java compiler. I don't believe it does.The bizarre error message you posted doesn't suggest that. It suggests you are running a bizarre command line somehow. 3. Re: Trying to simply compile a java program843810 Sep 17, 2009 12:48 AM (in response to EJP)If so then where is javac.exe? Also I am using JCreator to compile. I installed JCreator from a cd that was supplied with the book I purchased, Java for Dummies. I apoligize if I have solicited in the wrong forum but I tried the java for beginners forum and tried every suggestion and failed. Thanks 4. Re: Trying to simply compile a java programEJP Sep 17, 2009 1:57 AM (in response to 843810)javac.exe is in $JDK_HOME/bin where JDK_HOME is where the installer put the JDK. I can't comment on JCreator or where it expects to find things or how it constructs command lines but that one is completely wrong. 5. Re: Trying to simply compile a java program843810 Sep 17, 2009 5:34 AM (in response to EJP)The installation put JDK_HOME in C:\Sun\SDK. But there is no javac.exe in C:\Sun\SDK\bin. This is quite a mystery. 6. Re: Trying to simply compile a java program843810 Sep 17, 2009 10:20 AM (in response to 843810) kendem wrote:Installation? What installation? The Sun JDK or JCreator? The installation put JDK_HOME in C:\Sun\SDK. If you are talking about the JCreator installation, try the JCreator forum. 7. Re: Trying to simply compile a java program796447 Sep 17, 2009 1:39 PM (in response to 843810)NOTE: This was crossposted so people are wasting their time on this guy. 8. Re: Trying to simply compile a java program843810 Sep 17, 2009 3:13 PM (in response to 796447)warnerja; I didn't know I was supposted to keep to one forum for help, I didn't see that rule anywhere. Is this forum for advanced users of Java? If so then should I not post here anymore? Otherwise I have another different question on compiling or should I post a new message for that? The question is why is access denied in the following message? I set access permissions on all files/directories but still get the Access is denied. Thanks --------------------Configuration: Program0201 - JDK version <Default> - <Default>-------------------- C:\Program Files\Xinox Software\JCreatorV4LE\MyProjects\Program0201\MortgageText.java:4: error while writing MortgageText: C:\Program Files\Xinox Software\JCreatorV4LE\MyProjects\Program0201\MortgageText.class (Access is denied) public class MortgageText { ^ 1 error 9. Re: Trying to simply compile a java program796447 Sep 17, 2009 4:06 PM (in response to 843810)The point was to alert other people who may be answering, that they should check the other place(s) to see if they are about to waste their time duplicating what has already been said. Too late though, the time wasting already happened! And that it is rude for you to crosspost, at least without providing link(s) to the other conversation(s), for the reason mentioned above. That should just be obvious. 10. Re: Trying to simply compile a java program843810 Sep 17, 2009 4:52 PM (in response to 796447)Hey warnerja, Just trying to learn Java and brand new to these forums. Unforgiving bunch here. Good bye. 11. Re: Trying to simply compile a java program796447 Sep 17, 2009 8:42 PM (in response to 843810)Am I supposed to grieve your loss? If you come to a place looking for help, and someone corrects your abuse of netiquette and you run away in a huff, believe me nobody in the forum loses anything. Only you do. Stay, go - no skin off my nose. 12. Re: Trying to simply compile a java program830056 Jan 11, 2011 2:48 AM (in response to 796447)Hello all, I have just read this forum and l hope l am not doing the wrong thing but l am having the same issue. I am just working through Java for Dummies....don't laugh..working with window 7 and the jcreator provided and l am getting a very simlar issue as metioned above. can somebody please help or point me to a form that would be able to gide me. Cheers. Error message is when l try to compile: error while writing MortgageText: C:\Program Files (x86)\Xinox Software\JCreatorV4LE\MyProjects\Program0201\MortgageText.class (Access is denied) 13. Re: Trying to simply compile a java programdarrylburke Jan 11, 2011 7:54 AM (in response to 830056)Moderator advice: Please don't post to threads which are long dead, and don't hijack another poster's thread. When you have a question, start your own thread. Feel free to post a link to a related thread. Moderator action: Locking this thread. db
https://community.oracle.com/message/6362071
CC-MAIN-2016-40
en
refinedweb
There is some limited support in Saxon for running stylesheets that process a document in streaming mode: see More extensive and automated support for streamed processing remains a goal of many researchers, but it's difficult to achieve in general given the flexibility and dynamic nature of the XSLT language. It's more likely to be achieved for simple XQuery queries, which are much more amenable to static analysis because the language is much more restrictive. Clearly streamed processing is possible for very simple cases, that is where every template rule processes the children of the context node in document order, and there is no call on the document() function. But no-one actually writes such stylesheets, so optimizing them would not be useful. As soon as the stylesheet gets more complex than that, and especially where multiple documents or temporary trees are involved, streamed processing becomes very difficult. A slightly different technique which some products are now using successfully, and which might find its way into a future Saxon release, is document projection. Here an analysis of the stylesheet is used to discard some subtrees of the input document as it is being built, on the basis that the stylesheet never looks at those subtrees; this enables the tree that is built in memory to be smaller. Michael Kay _____ From: saxon-help-bounces@... [mailto:saxon-help-bounces@...] On Behalf Of Philip Tomlinson Sent: 09 April 2007 20:50 To: Mailing list for SAXON XSLT queries Subject: Re: [saxon] Stax and XPath Sorry, I meant DOM in the generic Object Model sense (i.e. JDOM, XOM,TinyTree). But why should I have to create an OM. If I am processing a large document it is going to be quite inefficient to reify the OM. When I saw the Stax support in Saxon, I had a ray of hope that I wouldnt need to load the whole OM in order to evaluate XPaths, XQueries. Is this supported in some way or is anyone working on this? Rgds, Phil Michael Kay wrote: You don't have to create a DOM, but you do have to construct a tree representation of the document in memory. (The Saxon TinyTree is far more efficient for this than the DOM.) Michael Kay -----Original Message----- From: saxon-help-bounces@... [mailto:saxon-help-bounces@...] On Behalf Of Philip Tomlinson Sent: 07 April 2007 08:52 To: Mailing list for SAXON XSLT queries Subject: [saxon] Stax and XPath Hi, Using Saxon 8.9. I think I've noticed that I cant create a StaxBridge for a file and evaluate an Xpath against this without creating a DOM. I seem to have to build a full DOM in order to evaluate XPaths or XQueries. Is there a way to do avoid having to load a large DOM in order to evaluate an XPath? Rgds, Phil ------------------------- Philip Tomlinson EZ Co Ltd Mobile: 021 707 385 -------------------------------------------------------------- -----------@... -- ------------------------- Philip Tomlinson EZ Co Ltd Mobile: 021 707 385 Merico Raffaele wrote: >. use xs:anyAtomicType. The xpath datatypes names referred to by is for some time now defunct. Use the XML Schema namespace instead. It is custom (but not required) to bind it to the xs prefix. However, if you do not want to change the prefixes in your current stylesheets, you can bind it to the xdt prefix instead. This has nothing to do with you reaching the end of Saxon B. Instead, it is the other way around: Saxon always stayed conformant as much as possible with the current spec, which is now a W3C Recommendation since 23 January. The version you used was based on a preliminary draft version of this recommendation. In this version (1.5 years old), ""; still exists: From this version onward, it does not: The change was reported and discussed here: HtH, Cheers, -- Abel. Best regards, Raffaele Merico I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/saxon/mailman/saxon-help/?viewmonth=200704&viewday=10
CC-MAIN-2016-40
en
refinedweb
-- | -- Module : Data.Boolean.SatSolver -- Copyright : Sebastian Fischer -- License : BSD3 -- -- Maintainer : Sebastian Fischer (sebf@informatik.uni-kiel.de) -- Stability : experimental -- Portability : portable -- --. -- module Data.Boolean.SatSolver ( Boolean(..), SatSolver, newSatSolver, isSolved, lookupVar, assertTrue, branchOnVar, selectBranchVar, solve ) where import Data.List import Data.Boolean import Control.Monad.Writer import qualified Data.IntMap as IM -- | A @SatSolver@ can be used to solve boolean formulas. -- data SatSolver = SatSolver { clauses :: CNF, bindings :: IM.IntMap Bool } deriving Show -- | A new SAT solver without stored constraints. -- newSatSolver :: SatSolver newSatSolver = SatSolver [] IM.empty -- | This predicate tells whether all constraints are solved. -- isSolved :: SatSolver -> Bool isSolved = null . clauses -- | -- We can lookup the binding of a variable according to the currently -- stored constraints. If the variable is unbound, the result is -- @Nothing@. -- lookupVar :: Int -> SatSolver -> Maybe Bool lookupVar name = IM.lookup name . bindings -- | -- We can assert boolean formulas to update a @SatSolver@. The -- assertion may fail if the resulting constraints are unsatisfiable. -- assertTrue :: MonadPlus m => Boolean -> SatSolver -> m SatSolver assertTrue formula solver = simplify (solver { clauses = booleanToCNF formula ++ clauses solver }) -- | -- This function guesses a value for the given variable, if it is -- currently unbound. As this is a non-deterministic operation, the -- resulting solvers are returned in an instance of @MonadPlus@. -- branchOnVar :: MonadPlus m => Int -> SatSolver -> m SatSolver branchOnVar name solver = maybe (branchOnUnbound name solver) (const (return solver)) (lookupVar name solver) -- | -- We select a variable from the shortest clause hoping to produce a -- unit clause. -- selectBranchVar :: SatSolver -> Int selectBranchVar = literalVar . head . head . sortBy shorter . clauses -- | -- This function guesses values for variables such that the stored -- constraints are satisfied. The result may be non-deterministic and -- is, hence, returned in an instance of @MonadPlus@. -- solve :: MonadPlus m => SatSolver -> m SatSolver solve solver | isSolved solver = return solver | otherwise = branchOnUnbound (selectBranchVar solver) solver >>= solve -- private helper functions updateSolver :: MonadPlus m => CNF -> [(Int,Bool)] -> SatSolver -> m SatSolver updateSolver cs bs solver = do bs' <- foldr (uncurry insertBinding) (return (bindings solver)) bs return $ solver { clauses = cs, bindings = bs' } insertBinding :: MonadPlus m => Int -> Bool -> m (IM.IntMap Bool) -> m (IM.IntMap Bool) insertBinding name newValue binds = do bs <- binds maybe (return (IM.insert name newValue bs)) (\oldValue -> do guard (oldValue==newValue); return bs) (IM.lookup name bs) simplify :: MonadPlus m => SatSolver -> m SatSolver simplify solver = do (cs,bs) <- runWriterT . simplifyClauses . clauses $ solver updateSolver cs bs solver simplifyClauses :: MonadPlus m => CNF -> WriterT [(Int,Bool)] m CNF simplifyClauses [] = return [] simplifyClauses allClauses = do let shortestClause = head . sortBy shorter $ allClauses guard (not (null shortestClause)) if null (tail shortestClause) then propagate (head shortestClause) allClauses >>= simplifyClauses else return allClauses propagate :: MonadPlus m => Literal -> CNF -> WriterT [(Int,Bool)] m CNF propagate literal allClauses = do tell [(literalVar literal, isPositiveLiteral literal)] return (foldr prop [] allClauses) where prop c cs | literal `elem` c = cs | otherwise = filter (invLiteral literal/=) c : cs branchOnUnbound :: MonadPlus m => Int -> SatSolver -> m SatSolver branchOnUnbound name solver = guess (Pos name) solver `mplus` guess (Neg name) solver guess :: MonadPlus m => Literal -> SatSolver -> m SatSolver guess literal solver = do (cs,bs) <- runWriterT (propagate literal (clauses solver) >>= simplifyClauses) updateSolver cs bs solver shorter :: [a] -> [a] -> Ordering shorter [] [] = EQ shorter [] _ = LT shorter _ [] = GT shorter (_:xs) (_:ys) = shorter xs ys
http://hackage.haskell.org/package/incremental-sat-solver-0.1.3/docs/src/Data-Boolean-SatSolver.html
CC-MAIN-2016-40
en
refinedweb
ASB Quarterly Investor Confidence Report Investors Confident But Increasingly Cautious New Zealand investors have ended what proved to be another profitable year in a positive mood, according to the latest ASB Investor Confidence report. When asked: Do you expect your net return from investments this year to be better or worse than last year? (Chart 1) A net 16% of those surveyed in the December quarter (up 3% from the September quarter) expect the return from their investments to be better this year than last year. While this is below the net 24% reported twelve months earlier, this level of expectation still points to a group of generally optimistic investors. “Investors seem to be facing 2006 in a similar frame of mind to last year, albeit a bit more cautious,” says Anthony Byett, Chief Economist ASB. “This time last year the warnings were that the high returns of 2004 were unlikely to be repeated. As it was there were double digit benchmark returns for key asset classes such as property and equities.” The latest dwelling sales figures from REINZ show the median dwelling sale price to be up 13.5% between December 2004 and December 2005. The NZX report their NZX50 benchmark to be up 10.0% for the year. A number of offshore equity markets did even better. These good returns come after reservations widely expressed at the start of the year. When asked: What type of investment gives the best return? (Chart 2) Residential rental property remains the asset class that is most widely expected to provide the best return (over an indeterminate horizon), up one percent to 24%. Term deposits were ranked second on 13% (up 1%). Thanks to the 7% plus rates on offer term deposits had a strong end to the year and look to be closing the gap on residential rental property. Shares were rated third on 10%, followed by managed investments on 9%. Those in the top of the North Island and South Island continue to prefer residential rental property ahead of managed funds. Those in the lower North Island, who had previously held a more neutral position with the two asset classes had a major reversal in the fourth quarter with a 18% change leading to a 20% preference for residential rental property (from 2%). When asked: How confident are you in your current main investment? One of the big changes in the latest ASB Investor Confidence report was amongst those commenting on their main investment. Confidence amongst those with residential rental property as their current main investment decreased 2% to 61%. Conversely, confidence amongst those with equities as their main investment leapt 23 points to 67%. “A word of caution is still appropriate. This latest report has thrown up some more volatile results than we normally see, perhaps as a result of the confusion between forecasts for future results and current asset performance,” says Mr Byett. “There was also some slippage in confidence over the quarter, a development we will watch closely over 2006. “With the Reserve Bank’s increases to the cash rate last year and subsequent increases across all lending institutions for mortgages the dominance of residential rental property as the standout preference amongst New Zealand investors may be coming to an end. “Whether the pundits or the public are correct in 2006 it is clear that in an environment of a slowing economic growth rate and of property and equity markets facing more selling pressure a prudent and cautious approach – and balanced approach – is recommended.” Ends The ASB Quarterly Investor Confidence Survey is a nationwide survey, which has been undertaken every quarter since May 1998 interviewing a sample of up to 1000 respondents. A sample of this size has a maximum margin of error of ±3.65 at 95% confidence.
http://www.scoop.co.nz/stories/BU0601/S00129.htm
CC-MAIN-2016-40
en
refinedweb
New: Support VS 2010 RTM version. ASP.net: Add support for <%: ... %> syntax. Improved typing assistance - now working smarter and handles single and double quotes as well. Improved JavaScript Formatting. "Fix usings" and "Add missing using" now working for Extension methods as well. Some memory and speed optimizations. Changed default shortcut for "Find Members Taking This Type" from Ctrl+Alt+P to Alt+Shift+M. Previous shortcut is used by Visual Studio for "Attach To Process". Added PageUp/PageDown support in the Go To ... navigation dialogs. Shortcuts for Go To... navigation work when a dialog is already opened. Improved JustCode Error List to keep the selection when updating. Added support for .NET 4.0 COM interop decompilation. Added support for literals in non-Latin characters. Added support for .NET 4.0 support in the installer, now JustCode can be installed on machines with .NET 4.0 only. Added support for undocumented C# keywords: __makeref, __reftype, __refvalue, __arglist. Added support for byte order mark (BOM) character. Support consuming VB properties from C# using set_.../get_... accessor methods. Improved VS 2010 error reporting speed. UI improvements to the progress bar and side arrows. Bug Fixes: Improved support for assembly reference aliases. Fixed App_GlobalResoures in web application to be properly analyzed. Fixed installer to not require .NET 3.5 on machines with only .NET 4.0 Fixed problems with code formatting when code reordering is enabled. VS 2010: Fixed problems with Silverlight 4.0 VS 2010: Toolbar now persists its visibility properly between restarts. Fixed problems with locking Generated_Code folder in SL4 RIA services. String constants in InternalsVisibleTo attribute are analyzed properly now. VB: Fixed "Good Code Red" when an import statement has multiple clauses. VS 2010: Fixed the info stripe bar to properly update when a already opened file is renamed. VB: Fixed property generation for nullable types. VB: Fixed invalid imports to generate a warning not an error. Fixed how Enter and Tab keys work in templates in some situations. Fixed some intermittent good code reds in XAML. Fixed Organize and Add Missing Usings to add missing usings after any comments in the beginning of the file. Fixed a problem where Organize and Add Missing Usings breaks the code when there are using aliases. Fixed showing duplicating errors in some special cases. Fixed a bug in some very rare case that caused typing in open files to not refresh the code analysis. Fixed problems with missing DExplore.exe during installation. Fixed a problem with the documentation not being able to find DExplore.exe on some machines. Now if the documentation is not available it falls back to the online documentation. VB: Properly handle '&' operator precedence. VB: Fixed OptionInfer is On for websites by default in .NET 4.0. JS: Fixed inline variable for boolean expressions. CS: Fixed "Good Code Red" in lambdas with block with several return statements. VS 2010: Fixed how menu items and commands are registered. Now the commands (and their shortcuts) are not reset on each start. VS 2010: Fixed the info stripe bar to refresh properly after rename of an already opened file. VS 2010: Fixed the overridden/overriding to work properly. Info popup fixed to not show if editor is not available. CS: parser exception in "if (item->Depth < depth)". Bad-code-green: Sometimes JC does not show all errors if there are methods with unknown type. Fixed bug : The context is not update correctly when the cursor is moved quickly. Fixed several exceptions in VS editor. Fixed dialog windows to be layout correctly upon resize. Fixed warning markers to show correctly in VS 2010. Fixed MoveTypeToAnotherFile to check correctly for availability. VB: Create Get/Set property now working in a VB Module. VB: Fixed exception when there's no namespace in file and "move type to another file" is invoked. Fixed Introduce Variable from anonymous types. VS 2010: The position of the info popup is now correct. Fixed Generate class with constructor - to generate unique names. Fix the undo on global rename and Cancel in the Confirmation dialog. CS: 0 short/byte constant can be implicitly assigned to enum.
http://www.telerik.com/account/versionnotes.aspx?id=2197
CC-MAIN-2016-40
en
refinedweb
inner classes allowed consrtuctors? Discussion in 'C++' started by gara.matt@gmail.com, Jul 18, 2007. - Similar Threads All classes from pkg name & inner class reflectionJeffy, Sep 10, 2003, in forum: Java - Replies: - 2 - Views: - 2,913 - Thomas Weidenfeller - Sep 10, 2003 Static inner classesJamin, Sep 30, 2003, in forum: Java - Replies: - 21 - Views: - 1,087 - John C. Bollinger - Oct 3, 2003 How to access inner classes variables & methods from outer classeslonelyplanet999, Nov 13, 2003, in forum: Java - Replies: - 1 - Views: - 2,518 - VisionSet - Nov 13, 2003 What is the difference between nested classes and inner classes ?Razvan, Jul 22, 2004, in forum: Java - Replies: - 5 - Views: - 11,659 - Dale King - Jul 27, 2004 Debate: Inner classes or public classes with package access?Christian Bongiorno, Aug 27, 2004, in forum: Java - Replies: - 5 - Views: - 807 - Chris Uppal - Aug 30, 2004 inner classes in python as inner classes in JavaCarlo v. Dango, Oct 15, 2003, in forum: Python - Replies: - 14 - Views: - 1,399 - Alex Martelli - Oct 19, 2003 failing to instantiate an inner class because of order of inner classesPyenos, Dec 27, 2006, in forum: Python - Replies: - 2 - Views: - 633 - Pyenos - Dec 27, 2006 Why defining a constant in a method is not allowed but usingself.class.const_set is allowed?Iñaki Baz Castillo, Apr 30, 2011, in forum: Ruby - Replies: - 13 - Views: - 783 - Iñaki Baz Castillo - May 1, 2011
http://www.thecodingforums.com/threads/inner-classes-allowed-consrtuctors.522831/
CC-MAIN-2016-40
en
refinedweb
D-Bus users Below is a list of projects using D-Bus. It is not complete so if you know of a project which should be added please just edit the wiki. (Or send mail to the mailing list and see if someone has time to do it for you.) The list also includes the bus names owned by the projects' software. This is to help avoid namespace clashes as it is important that no two projects use the same bus name. Not all D-Bus usages require owning a bus name, of course. Be sure to namespace your bus name in com.example.?ReverseDomainStyle as well as listing it here. Finally, the API column shows a code indicating which of the various D-Bus APIs has been used. These are defined as follows: - * D - the raw D-BUS library * G - the GLib bindings * Q - the Qt bindings * P - the Python bindings * M - the Mono/.NET bindings
https://freedesktop.org/wiki/Software/DbusProjects/?action=PackagePages
CC-MAIN-2016-40
en
refinedweb
): Solid *looks for* devices and let thist Bluetooth, et so on. The "listing" part of Solid resides in kdelibs, while the Control namespace is in kdebase.
https://techbase.kde.org/index.php?title=Development/Architecture/KDE4/Solid&diff=20681&oldid=9425
CC-MAIN-2016-40
en
refinedweb
David, have a servlet implements the SingleThreadModel > > Interface. But this servlet has problem handle > more > > than 1 request at a time. > > > > Since I have 1 person on host1 upload a big file ( > > > > 20MB ) to the servlet. When the 2nd person on > host2 > > send a request to that servlet. The servlet will > not > > response. > > > > Any one had the same problem using > SingleThreadModel? > > I am using apache 1.3.17 + tomcat 3.2.1. > > thanks. > > > > P.S. I use SingleThreadModel beause i don't want > to > > worry about syncronization of threads, I have > > Connection as instance variable: > > > > public class admin extends HttpServlet implements > > SingleThreadModel{ > > private PrintWriter out; > > private OracleConnectionCacheImpl pool; > > private Connection conn; > > private Statement stmt; > > > > ..etc... > > > > > > __________________________________________________ > > Do You Yahoo!? > > Get email at your own domain with Yahoo! Mail. > > > > -- >.
http://mail-archives.apache.org/mod_mbox/tomcat-users/200104.mbox/%3C20010413001233.15000.qmail@web13708.mail.yahoo.com%3E
CC-MAIN-2016-40
en
refinedweb
22 March 2010 By clicking Submit, you accept the Adobe Terms of Use. All The Java EE Platform is the leading enterprise web server. The Adobe Flash Platform is the leader in the rich Internet application space. Using both, developers can deliver compelling, data-centric applications that leverage the benefits of an enterprise back-end solution and a great user experience. In this article, you learn about the architecture of applications built using Flex and Java including: Be sure to also watch the video Introduction to Flex 4 and Java integration. To learn more about the technologies used to build these applications, read The technologies for building Flex and Java applications article. Flex and Java applications use a multi-tier architecture where the presentation tier is the Flex application, the business or application tier is the Java EE server and code, and the data tier is the database. You can write the back-end code just as you normally would for a Java application, modeling your objects, defining your database, using an object-relational framework such as Hibernate or EJB 3, and writing the business logic to query and manipulate these objects. The business tier must be exposed for access via HTTP from the Flex application and will be used to move the data between the presentation and data tiers. Typical HTML applications consist of multiple pages and as a user navigates between them, the application data must be passed along so the application itself (the collection of pages and functionality it consists of) can maintain state. In contrast, Flex applications, by nature, are stateful. A Flex application is embedded in a single HTML page that the user does not leave and is rendered by Flash Player. The Flex application can dynamically change views and send and retrieve data asynchronously to the server in the background, updating but never leaving the single application interface (see Figure 1) (similar to the functionality provided by the XMLHttpRequest API with JavaScript.) Flex applications can communicate with back-end servers using either direct socket connections or more commonly, through HTTP. The Flex framework has three remote procedure call APIs that communicate with a server over HTTP: HTTPService, WebService, and RemoteObject. All three wrap Flash Player's HTTP connectivity, which in turn, uses the browser's HTTP library. Flex applications cannot connect directly to a remote database. You use HTTPService to make HTTP requests to JSP or XML files, to RESTful web services, or to other server files that return Java class that returns binary Action Message Format over HTTP. When possible, use Flash Remoting whose binary data transfer format enables applications to load data up to 10 times faster than with the more verbose, text-based formats such as XML, JSON, or SOAP (see Figure 2). Action Message Format (AMF), deserialization, and data marshaling between the client and the server. Flash Remoting uses client-side functionality built in to Flash Player and server-side functionality that). See the technologies for building Flex and Java applications article for more details about BlazeDS and LiveCycle Data Services. BlazeDS and LiveCycle Data Services use a message-based framework to send data back and forth between the client and server. They provide Remoting, Proxying, and Messaging services, and for LiveCycle, an additional Data Management service. The Flex application sends a request to the server and the request is routed to an endpoint on the server. From the endpoint, the request is passed to the MessageBroker, the BlazeDS and LiveCycle Data Services engine that handles all the requests and routes them through a chain of Java objects to the destination, the Java class with the method to invoke (see Figure 3). AMF is a binary format used to serialize ActionScript objects and facilitate data exchange between Flash Platform applications and remote services over the Internet. Adobe publishes this protocol; the latest is AMF 3 Specification for ActionScript 3. You can find tables listing the data type mappings when converting from ActionScript to Java and Java to ActionScript here. For custom or strongly typed objects, public properties (including those defined with get and set methods) are serialized and sent from the Flex application to the server or from the server to the Flex application as properties of a general 0bject. To enable mapping between the corresponding client and server-side objects, you use the same property names in the Java and ActionScript classes and then in the ActionScript class, you use the [RemoteClass] metadata tag to create an ActionScript object that maps directly to the Java object. Here is an example Employee ActionScript class that maps to a server-side Employee Java DTO located in the services package on the server. package valueobjects.Employee{ [Bindable] [RemoteClass(alias="services.Employee")] public class Employee { public var id:int; public var firstName:String; public var lastName:String; (...) } } To use Flash Remoting with BlazeDS or LiveCycle Data Services, you need to install and configure the necessary server-side files. For BlazeDS, you can download it as a WAR file which you deploy as a web application or as a turnkey solution. The turnkey download contains a ready-to-use version of Tomcat in which the the BlazeDS WAR file has already been deployed and configured along with a variety of sample applications. Similarly, for LiveCycle Data Services, the installer lets you choose to install LiveCycle with an integrated Tomcat server or as a LiveCycle Data Services web application. In either scenario a web application called blazeds or lcds (usually appended by a version number) is created. You can modify and build out this application with your Java code, or more typically, you can copy the JAR files and configuration files the blazeds or lcds web application contains and add them to an existing Java web application on the server (see Figure 4). If copying the files to a different web application, you also need to modify the web.xml file to define a session listener for HttpFlexSession and a servlet mapping for MessageBroker, which handles all the requests and passes them off to the correct server-side Java endpoints. You can copy and paste these from the original blazeds or lcds web application web.xml file. <!--> Optionally, you may also want to copy and paste (and uncomment) the mapping for RDSDispatchServlet, which is used for RDS (Remote Data Service) access with the data service creation feature in Flash Builder 4 that introspects a server-side service and generates corresponding client-side code. See the model driven development section for more details. > For Flash Remoting, the client sends a request to the server to be processed and the server returns a response to the client containing the results. You configure these requests by modifying the services-config.xml and remoting-config.xml files located in the /WEB-INF/flex/ folder for the web application. The services-config.xml file defines different channels that can be used when making a request. Each channel definition specifies the network protocol and the message format to be used for a request and the endpoint to deliver the messages to on the server. The Java-based endpoints unmarshal the messages in a protocol-specific manner and then pass the messages in Java form to the MessageBroker which sends them to the appropriate service destination (you'll see how to define these next). <channels> <channel-definition <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amf" class="flex.messaging.endpoints.AMFEndpoint"/> </channel-definition> <channel-definition <endpoint url="https://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure" class="flex.messaging.endpoints.SecureAMFEndpoint"/> </channel-definition> (...) </channels> In the remoting-config.xml file, you define the destinations (named mappings to Java classes) to which the MessageBroker passes the messages. You set the source property to the fully qualified class name of a Java POJO with a no argument constructor that is located in a source path, usually achieved by placing it in the web application's /WEB‑INF/classes/ directory or in a JAR file in the /WEB‑INF/lib/ directory. You can access EJBs and other objects stored in the Java Naming and Directory Interface (JNDI) by calling methods on a destination that is a service facade class that looks up an object in JNDI and calls its methods. You can access stateless or stateful Java objects by setting the scope property to application, session, or request (the default). The instantiation and management of the server-side objects referenced is handled by BlazeDS or LiveCycle Data Services. <service id="remoting-service" class="flex.messaging.services.RemotingService"> <adapters> <adapter-definition </adapters> <default-channels> <channel ref="my-amf"/> </default-channels> <destination id="employeeService"> <properties> <source>services.EmployeeService</source> <scope>application</scope> </properties> </destination> </service> You can also specify channels for individual destinations. <destination id="employeeService " channels="my-secure-amf"> Lastly, you use these destinations when defining RemoteObject instances in a Flex application. <s:RemoteObject In many applications, access to some or all server-side resources must be restricted to certain users. Many Java EE applications use container managed security in which user authentication (validating a user) and user authorization (determining what the user has access to—which is often role based) are performed against the Realm, an existing store of usernames, passwords, and user roles. The Realm is configured on your Java EE server to be a relational database, an LDAP directory server, an XML document, or to use a specific authentication and authorization framework. To integrate a Flex application with the Java EE security framework so that access to server-side resources is appropriately restricted, you add security information to the BlazeDS or LiveCycle Data Services configuration files (details follow below) and then typically in the Flex application, create a form to obtain login credentials from the user which are passed to the server to be authenticated. The user credentials are then passed to the server automatically with all subsequent requests. In the BlazeDS or LiveCycle Data Services services-config.xml file, you need to specify the "login command" for your application server in the <security> tag. BlazeDS and LiveCycle Data Services supply the following login commands: TomcatLoginCommand (for both Tomcat and JBoss), JRunLoginCommand, WeblogicLoginCommand, WebSphereLoginCommand, OracleLoginCommand. These are all defined in the XML file and you just need to uncomment the appropriate one. You also need to define a security constraint that you specify to use either basic or custom authentication and if desired, one or more roles. To do custom authentication with Tomat or JBoss, you also need to add some extra classes to the web application for integrating with the security framework used by the Jave EE application server and modify a couple of configuration files. Mode details can be found here. <services-config> <security> <login-command <per-client-authentication>false</per-client-authentication> </login-command> <security-constraint <auth-method>Custom</auth-method> <roles> <role>employees</role> <role>managers</role> </roles> </security-constraint> </security> ... </services-config> Next, in your destination definition, you need to reference the security constraint: <destination id="employeeService"> <properties> <source>services.EmployeeService</source> </properties> <security> <security-constraint </security> </destination> You can also define default security constraints for all destinations and/or restrict access to only specific methods that can use different security constraints. The default channel, my-amf, uses HTTP. You can change one or more of the destinations to use the my-secure-amf channel that uses HTTPS: <destination id="employeeService"> <channels> <channel ref="my-secure-amf"/> </channels> ... </destination> where my-secure-amf is defined in the services-config.xml file: <!-- Non-polling secure AMF --> <channel-definition <endpoint url="https://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure" class="flex.messaging.endpoints.SecureAMFEndpoint"/> </channel-definition> That covers the server-side setup. Now, if you are using custom authentication, you need to create a form in the Flex application to retrieve a username and password from the user and then pass these credentials to the server by calling the ChannelSet.login() method and then listening for its result and fault events. A result event indicates that the login (the authentication) occurred successfully, and a fault event indicates the login failed. The credentials are applied to all services connected over the same ChannelSet. For basic authentication, you don’t have to add anything to your Flex application. The browser opens a login dialog box when the application first attempts to connect to a destination. Your application can now make Flash Remoting requests to server destinations just as before, but now the user credentials are automatically sent with every request (for both custom and basic authentication). If the destination or methods of the destination have authorization roles specified which are not met by the logged in user, the call will return a fault event. To remove the credentials and log out the user, you use the ChannelSet.logout() method. Now that you've learned to set up Flash Remoting on the server-side and define a RemoteObject instance in Flex, let's take a look at how you build an application to use this object. A typical Flex application consists of MXML code to define the user interface and ActionScript code for the logic. Just as for JavaScript and the browser DOM objects, the two are wired together using events and event handlers. To use a RemoteObject in an application, you define the instance, invoke a method of the server-side remoting destination, specify callback functions for the result and fault events, and inside those, do something with the data returned from the server. Here is a simple application where employee data is retrieved from a database and displayed in a Flex DataGrid component. After the application is initialized, the getEmployees() method of the employeeService destination defined in the remoting-config.xml file on the server is called, and if data is successfully returned from the server, the variable employees is populated and if the request fails for any reason, a message is displayed in an Alert box. Data binding is used to bind the employees variable to the dataProvider property of the DataGrid. <s:Application xmlns: <fx:Script> <![CDATA[ import mx.collections.ArrayCollection; import mx.controls.Alert; import mx.rpc.events.FaultEvent; import mx.rpc.events.ResultEvent; [Bindable]private var employees:ArrayCollection; private function onResult(e:ResultEvent):void{ employees=e.result as ArrayCollection; } private function onFault(e:FaultEvent):void{ Alert.show("Error retrieving data.","Error"); } ]]> </fx:Script> <fx:Declarations> <s:RemoteObject </fx:Declarations> <mx:DataGrid </s:Application> When using a RemoteObject, you can define result and fault handlers on the service level: <s:RemoteObject on the method level: <s:RemoteObject <s:method <s:method </RemoteObject> or on a per-call basis: <s:Application xmlns: <fx:Declarations> <s:RemoteObject <s:CallResponder </fx:Declarations> Data binding is a powerful part of the Flex framework that lets you update the user interface when data changes without you having to explicitly register and write the event listeners to do this. In the previous application code, the [Bindable] tag in front of the the employees variable definition is a compiler directive; when the file is compiled, ActionScript code is automatically generated so that an event is broadcast whenever the employees variable changes. [Bindable]private var employees:ArrayCollection; The curly braces in the assignment of the DataGrid's dataProvider property actually generates the code to listen for changes to the employees variable and when it changes, to update the DataGrid view accordingly. <mx:DataGrid In this application, employees is initially null and no data is displayed in the DataGrid but as soon as the data is successfully retrieved form the server and employees is populated, the DataGrid is updated to display the employee data. To make more extreme changes to the user interface dynamically at runtime, for instance to add, remove, move, or modify components, you use Flex view states. For every Flex view or component, you can define multiple states and then for every object in that view, you can define what state(s) it should be included in and what it should look like and how it should behave in that state. You switch between states by setting the component's currentState property to the name of one of the defined states. <s:states> <s:State <s:State </s:states> <mx:DataGrid <s:Button label. As your application gets larger, you need to break up your logic into packages of ActionScript classes and your views into separate MXML files (called MXML components). Each MXML component extends an existing component and can only be included in an application, but not run on its own. To use a component in MXML, you instantiate an instance of that component (its class name is the same as its file name) and include the proper namespace so the compiler can locate it. Here is the code for a MXML component, Masterview, saved as MasterView.mxml in the com.adobe.samples.views package. <s:Group xmlns: <fx:Metadata> [Event( </s:Group> Here is the code for an application that instantiates and uses that custom MasterView component. <s:Application xmlns: <fx:Script> <![CDATA[ import mx.controls.Alert; private function onMasterDataChange(e:Event):void{ Alert.show(e.currentTarget.selectedData,"Master data changed"); } ]]> </fx:Script> <views:MasterView </s:Application> In order to build loosely-coupled components, you need to define a public API for the component (its public members) and/or define and broadcast custom events as shown in the MasterView code example above. The [Event] metadata tag is used to define the event as part of the component's API and specify what type of event object it broadcasts. <fx:Metadata> [Event(name="masterDataChange",type="flash.events.Event")] </fx:Metadata> When some event occurs in the component (in this example, a DropDownList change event), the component creates an instance of the type of event object specified in the metadata and broadcasts it. this.dispatchEvent(new Event("masterDataChange")); The code that instantiates this custom component can now register to listen for this custom event and register and event handler. <views:MasterView Loosely-coupled components like this that define and broadcast custom events are the core building blocks for Flex applications. In fact, this is how the components in the Flex framework itself are built. For more information on broadcasting custom events, watch the video, Learn how to define and broadcast events. By default, all your code gets compiled into one SWF file. If your SWF file gets very large or contains functionality that only specific users may use, you can use modules to break your application into multiple SWF files that can be loaded and unloaded dynamically by the main application at runtime. To create a module, you create a class (ActionScript of MXML) extending the Module class and then compile it. To load the module dynamically at runtime into an application, you use the <mx:ModuleLoader> tag or methods of the ModuleLoader class. That covers the basics for building an application, but as your application gets larger, you are going to want to use some methodology to organize its files, centralize the application data and data services, and handle communication between all the components. To do this, you can build your Flex application using all the design patterns that have proven useful over the years in enterprise application development. In fact,. To this point, the article has focused on creating applications that use a call-response model to make asynchronous calls to Java classes on the server. Using BlazeDS or LiveCycle Data Services, you can also build applications that use a publish-subscribe model to send messages between multiple Flex clients (through the server), push messages from the server to clients, and/or send messages to other JMS enabled messaging clients. A Flex application can send messages to a destination on the server and any other clients subscribed to that same destination will receive those messages. A simple application using messaging is instant messaging where text is exchanged between clients. Messaging can also be used to create rich collaborative data applications where data changes made in one client are "instantly" seen by other clients viewing the same data. Server sending notifications to clients, clients receiving sport score updates, auction sites having access to real-time bids, applications for trading stocks, foreign exchange etc. are all examples of applications that can be developed using the messaging infrastructure. Similar to how you configure remoting, you configure messaging by defining destinations in a server-side configuration file, in this case, messaging-config.xml. A messaging destination can be as simple as this: <destination id="chat"> in which case it uses the default adapter and channel defined in the messaging-config.xml file: <adapters> <adapter-definition <adapter-definition </adapters> <default-channels> <channel ref="my-rtmp"/> <channel ref="my-streaming-amf"/> </default-channels> The first adapter defined, actionscript, is the default adapter and is used to exchange messages between Flex clients. The jms adapter can be used instead to bridge to JMS destinations. The default channel is my-rtmp, a real-time streaming channel with failover to a streaming AMF channel (both defined in the services-config.xml file). Channels are discussed in more detail in the next section, Selecting a channel. You can also specify additional properties when defining a destination, including network and server properties. In the following destination, the chat destination is configured to use the my-polling-amf channel, users are never unsubscribed even with no activity, messages are kept on the server indefinitely until there are 1000 messages at which time the oldest is replaced, and only clients that have been authenticated and authorized against the trusted security constraint defined in the services-config.xml file (see the Security section) can publish or receive messages. <destination id="chat"> <properties> <channels> <channel ref="my-polling-amf"/> </channels> <network> <session-timeout>0</session-timeout> </network> <server> <max-cache-size>1000</max-cache-size> <message-time-to-live>0</message-time-to-live> <durable>false</durable> <send-security-constraint <subscribe-security-constraint </server> </properties> </destination> When defining a destination, you specify the channel to be used for the communication between the client and server including the protocol, the port, and the endpoint. Channels are defined in the services-config.xml file. For remoting, you usually use the my-amf or my-secure-amf channel. For messaging, there is larger number of channels to select from, including those that use polling or streaming, servlets or sockets, and HTTP or RTMP. Polling channels support polling the server on some interval or on some event. The my-polling-amf channel polls the server every 8 seconds for new messages. <channel-definition <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amfpolling" class="flex.messaging.endpoints.AMFEndpoint"/> <properties> <polling-enabled>true</polling-enabled> <polling-interval-seconds>8</polling-interval-seconds> </properties> </channel-definition> To more closely mimic a real-time connection, you can use long polling. The my-amf-longpoll channel is configured for long polling. <channel-definition <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/myamflongpoll" class="flex.messaging.endpoints.AMFEndpoint"/> <properties> <polling-enabled>true</polling-enabled> <polling-interval-seconds>0</polling-interval-seconds> <wait-interval-seconds>60</wait-interval-seconds> <client-wait-interval-seconds>3</client-wait-interval-seconds> <max-waiting-poll-requests>100</max-waiting-poll-requests > </properties> </channel-definition> When this channel is used, the client polls the server; the server poll response thread waits 60 seconds for new messages to arrive if there are no new messages on the server and then returns to the client; after receiving the poll response, the client polls again after 3 seconds; the process is repeated. The server is set to allow 100 simultaneous server poll response threads in a wait state; if exceeded, the server does not wait for new messages before returning a response. Typical application servers might have around 200 HTTP request threads available, so you need to make sure you set the maximum allowable number of polling threads to a smaller number and still leave enough threads to handle other HTTP requests. With servers and proxy servers that support HTTP 1.1, an HTTP streaming channel can be used. A persistent connection is established between the client and the server over which server messages are pushed to the client. HTTP connections can't handle traffic in both directions, so separate, short-lived threads must be used for any other server requests. Network latency is minimized compared to long-polling because connections don’t have to be continually closed and reopened. <channel-definition <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/streamingamf" class="flex.messaging.endpoints.StreamingAMFEndpoint"/> </channel-definition> Using HTTP long-polling and streaming the number of simultaneous users that can be connected to a destination is limited by the available number of server HTTP threads. For applications that will have larger numbers of simultaneous users, messages can be pushed using sockets instead of HTTP threads. LiveCycle Data Services includes a NIO-based socket server and has additional channels available for messaging that are not available with BlazeDS. These channels, defined in the services-config.xml file, all contain "nio" in their names. NIO stands for Java New Input/Output and is a collection of Java APIs for I/O operations. If you are using LiveCycle Data Services you should use the NIO channels over the servlet based channels because they scale better, handling thousands of simultaneous users instead of around a hundred. There are NIO equivalents for each of the AMF polling, long polling, and streaming channels just discussed ( my-nio-amf-poll, my-nio-amf-longpoll, my-nioamf-stream). These channels are still using HTTP so in the latter two cases, separate threads are still required for client-server requests and the persistent (or waiting) threads used for the server-to-client updates. <channel-definition <endpoint url="http://{server.name}:2080/nioamflongpoll" class="flex.messaging.endpoints.NIOAMFEndpoint"/> <server ref="my-nio-server"/> <properties> <polling-enabled>true</polling-enabled> <polling-interval-millis>0</polling-interval-millis> <wait-interval-millis>-1</wait-interval-millis> </properties> </channel-definition> With LiveCycle Data Services you can choose channels that use the RTMP protocol instead of HTTP. <channel-definition <endpoint url="rtmp://{server.name}:2037" class="flex.messaging.endpoints.RTMPEndpoint"/> <properties> <idle-timeout-minutes>20</idle-timeout-minutes> </properties> </channel-definition> RTMP, the Real-Time Messaging Protocol, was developed by Adobe for high-performance transmission of audio, video, and data between Adobe Flash Platform technologies (like Adobe Flash Player and Adobe AIR) and is now available as an open specification. RTMP provides a full duplex socket connection so that a single connection can be used for all communication between the client and the server, including all RPC and messaging. Another benefit of RTMP is that when a client connection is closed, the endpoint is immediately notified (so the application can instantly respond) unlike when using the HTTP protocol where endpoints do not receive notification until the HTTP session on the server times out. Because RTMP generally uses a non-standard port, though, it is often blocked by client firewalls. In this case, the channel automatically attempts to tunnel over HTTP. As a general recommendation, if you are using LiveCycle Data Services, use RTMP with failover to NIO-based long-polling. If using BlazeDS, use AMF long-polling or AMF streaming with failover to long-polling. To send messages from a Flex application you use the Producer API and to receive messages, the Consumer API. A basic application sends, receives, and displays messages is shown here. <s:Application xmlns: <fx:Script> <![CDATA[ import mx.messaging.events.MessageEvent; import mx.messaging.messages.AsyncMessage; protected function application1_creationCompleteHandler():void{ consumer.subscribe(); } protected function button1_clickHandler(event:MouseEvent):void{ var message:AsyncMessage=new AsyncMessage(); message.headers.username=username.text; message.body=msg.text; producer.send(message); msg. <s:Consumer </fx:Declarations> <s:TextArea <s:TextInput <s:TextInput <s:Button </s:Application> For more information about using the messaging service, see the BlazeDS and LCDS documentation. You can build real-time data applications, applications for which data changes made in one client are "instantly" seen by other clients viewing the same data, using a combination of remoting and messaging. This entails writing a lot of client-side code to keep track of the changes made to the data on the client (additions, updates, and deletions), to make calls to retrieve and persist data on the server, to send messages to other clients when the data has changed, to make calls to retrieve and display this new data, to recognize and handle data conflicts, and to resolve these conflicts on the client and server. To help you more quickly and easily build these types of data-intensive, transaction-oriented applications without having to write so much code. LiveCycle Data Services (and not BlazeDS) provides the Data Management service. The Data Management service provides client and server-side code to help you build applications that provide real-time data synchronization between client, server, and other clients; data replication; on-demand data paging; and for AIR applications, local data synchronization for occasionally connected applications. To build a managed data application you define a Data Management service destination in a configuration file on the server and then use the Flex DataService component in the application to call methods of a server-side service specified by that destination. The DataService API provides methods for filling client-side data collections with data from the server and batching and sending data changes to the server. The Data Management service on the server handles checking for conflicts, committing the changes, and pushing the data changes to simultaneously connected clients. Similar to how you configure remoting and messaging, you typically configure data management by defining destinations in a server-side configuration file, in this case, data-management-config.xml. The default configuration file defines a default channel, the RTMP channel discussed in Selecting a channelin the Messaging section of this article, and a default adapter, actionscript. <service id="data-service" class="flex.data.DataService"> <adapters> <adapter-definition <adapter-definition </adapters> <default-channels> <channel ref="my-rtmp"/> </default-channels> </service> The adapter is responsible for updating the server-side data. The actionscript adapter is used for services that have no persistent data store on the server but instead manage data in the server's memory. The java-dao adapter passes the data changes to appropriate methods of a Java assembler class, which typically calls methods of a data access object (DAO) to persist data in a database. When defining a destination using the java-dao adapter, you specify the assembler class that handles the data persistence and the property of the data objects that uniquely identifies an object. Below is a data management destination called employeeService that uses a Java class called EmployeeAssembler to persist data in a database table with a unique field employeeId. The Java assembler class must extend an AbstractAssembler class provided with LiveCycle Data Services that has methods including fill(), createItem(), deleteItem(), and updateItem(). <destination id="employeeService"> <adapter ref="java-dao"/> <properties> <source>adobe.samples.EmployeeAssembler</source> <metadata> <identity property="employeeId"/> </metadata> </properties> </destination> You can add additional properties to the destination definition to specify the scope the assembler is available in (request, session, or application), to configure paging, to specify security-constraints, and more. LiveCycle Data Services also provides some standard assembler classes that you can use so you don’t have to write your own. The SQLAssembler provides a bridge to a SQL database without requiring you to write the Java assembler code. Instead, you specify database info (url, driver, username, password, etc.) and SQL statements (the SQL to execute when data is sent from the Flex application to be added, updated, or deleted) right in the destination definition. This assembler can be used for simple database models that do not have any nested relationships. If you are using Hibernate, you can use the HibernateAssembler, which provides a bridge to the Hibernate object/relational persistence and query service. It uses the Hibernate mapping files to at runtime to execute the necessary SQL to persist data changes to the database. To create a Flex managed data application that uses the LCDS Data Management service, you create a DataService object with its destination property set to a destination defined in the data-management-config.xml file. You use the DataService fill() method to fetch data from the server and populate an ArrrayCollection with the data. By default, the DataService commit() method is called whenever data changes in the ArrayCollection it manages. To avoid excessive calls, you can batch the calls by setting the DataService object's autoCommit property to false and then explicitly calling its commit() method. Here is a simple application that uses the employeeService Data Management destination to retrieve employee data from the database on the server and populate a DataGrid with that data. When changes are made to the data in the DataGrid, the changes are automatically persisted on the server and synchonized with any other instances of the client application. <s:Application xmlns: <fx:Declarations> <s:DataService <s:ArrayCollection <valueObjects:Employee </fx:Declarations> <mx:DataGrid </s:Application> For more information about using the Data Management service, see the Live Cycle Data Services documentation. In previous sections of this article, you learned to use the Remoting and Messaging services of BlazeDS and LCDS and the Data Management service of LCDS to build data-centric applications. You can build these types of applications even faster using the Adobe application modeling technology (code named Fiber), a set of technologies that together enable model driven development for Flex applications, which can be used to generate both client and server-side code. Instead of using the RemoteObject class (or other RPC classes) to make calls to server-side classes, you can use Flash Builder to create ActionScript service wrapper classes and use these classes. The RPC service wrapper classes have public methods with the same names as the corresponding server-side classes making development and debugging much simpler. In order to generate client-side code, RDS access must be enabled on the server so Flash Builder can introspect server-side Java classes and configuration files. To enable RDS access, you need to add and/or uncomment a mapping for the BlazeDS 4 or LiveCycle Data Services 3 RDSDispatchServlet in the web application's web.xml file and disable security by setting the useAppserverSecurity parameter to false (or alternatively, set up and enable> Once RDS is enabled for the server, you can generate ActionScript service wrappers in Flash Builder using the Data menu (see Figure 5). When selecting Connect to BlazeDS or Connect to LCDS, you will get a dialog box displaying all the server-side destinations defined in the configuration files (see Figure 6). Flash Builder generates ActionScript wrapper classes for the selected services and classes for the corresponding data transfer objects (also called value objects) manipulated by these classes (which often correspond to records in database tables) (see Figure 7). You can then manipulate the same types of objects on the client and on the server and pass instances of them back and forth between the two. If you are using LCDS, the generated service classes use LiveCycle specific classes to also provide the additional data management features discussed previously. The use of these generated client-side service wrapper and valueObject classes that map to the server-side classes greatly facilitates application development. In the services package, the _Super_ServiceName.as class extends the RemoteObject or DataService class it is wrapping and defines the service methods. In the case of RemoteObject, the service class will have public methods with the same names as the corresponding server-side class methods. The ServiceName.as class extends the super class and is intitially empty. This is the class you use in your code. You can modify this class to customize the wrapper. This file is not overwritten if you refresh the service to recreate the client-side code after changes have been made to the server-side service code. In the valueObjects package, the _EntityNameEntityMetadata.as class contains information about an entity (an object manipulated by the service class) and its relationship with other entities. The _Super_EntityName.as class contains getters and setters for the data properties of an entity. The EntityName.as class extends the super class and is initially empty. This is the class you use in your code. You modify this class to customize the generated value object class. If you are using LiveCycle Data Services, you can use Flash Builder in conjunction with an additional Modeler plug-in to generate server-side code in addition to client-side code. The Adobe application modeling plug-in for Adobe Flash Builder (the Modeler) is a graphical modeling editor for defining data models and generating client and server-side code to manipulate these data models. You can use the Modeler to define a model based on an existing database and then have it generate and deploy the client-side code, the server-side code, and the server-side configuration files needed to manipulate this data. Or if you are starting from scratch and the database tables don't exist yet, you can use the Modeler to define a model and then have it generate the database tables in addition to generating the client and server-side code to manipulate this data. In order to use the Modeler, you need to define your data source as a resource in your webapp.xml file, install the Modeler in Flash Builder, and configure RDS in Flash Builder. For detailed steps, see the tutorial Setting up model-driven development with LiveCycle Data Services ES2. You can then use the Modeler and its RDS Dataview view to create data models and generate client and server-side code. The RDS Dataview view displays SQL databases configured as JDBC datasources on the server (see Figure 8). You can drag database tables from the RDS Dataview view to the Modeler Design view to define corresponding entities in your data model (see Figure 9). You can also use the Modeler Design mode tools to define entities and relationships if there are no corresponding database tables. The Modeler Design view uses standard UML notation in its diagrams. The data model is stored as XML in a file with an FML extension. You can switch to the source mode of the Modeler view to look at the generated model XML code. To create code to manipulate the entities, you click the Modeler Generate Code button in the Modeler Design view. By default, no server-side code is generated. To customize the generated code, select Window > Preferences, select Adobe > Data Model > Code Generation, and modify the settings. You can specify whether only server-side value objects corresponding to the entities in the data model are created or value objects and assembler classes to manipulate the value objects and persist them in the database are generated (see Figure 11). Example generated value object and assembler classes are show in Figure 12. For more information about using moel driven development, see the LiveCycle Data Services documentation. Using LiveCycle Data Services, you can deploy Flex applications as local portlets on portal servers that implement the JSR 168 portlet specification or that support Web Services for Remote Portlets (WSRP); this includes JBoss Portal, BEA WebLogic Portal, and IBM WebSphere Portal. The Flex application can be part of a LiveCycle Data Services application (for example, using the Remoting, Messaging, and/or Data Management services) but it does not have to be. To enable a Flex application to be deployed as a portlet, you need to copy and customize some files included in the LiveCycle Data Services /lcds/resources/wsrp/ directory and then follow the portal server's specific steps to set up the portlet. You need to copy the flex-portal.jar file to your web application's /WEB-INF/lib/ directory. (If LiveCycle Data Services is not being used on the server, the flex-messaging-common jar file must also be copied to there. ) The flex-portal.jar file contains a GenericFlexPortlet class that handles all WSRP requests and returns appropriate HTML depending upon whether the view, edit, or portlet mode is requested. The LiveCycle Data Services wsrp-jsp folder contains three JSP pages used for the view, edit, and help portlet view modes. You need to copy this wsrp-jsp folder to the root of your web application and customize these pages for your application. When a specific view of the portlet is requested, the GenericFlexPortlet class delivers one of these JSP pages. The portlet-view.jsp contains HTML and JavaScript for loading the application SWF and checking for the necessary version of Flash Player. Requests for a portlet specify whether the portlet should be maximized, minimized, or normal. The value for this requested window state is passed to the Flex application as a flashvar, and can be accessed as FlexGlobals.topLevelApplication.parameters.PORTLET_WS allowing you to customize the application for the specific window state requested. If a minimized portlet is requested, the GenericFlexPortlet does not return a SWF because the user would not be able to interact with it anyways. This article discussed the architecture of Flex and Java applications. For additional information, use the links contained in the article and the following resources:
http://www.adobe.com/devnet/flex/articles/flex_java_architecture.html
CC-MAIN-2016-40
en
refinedweb
Group: current SVN version Resolution: Fixed Category: Python Object.GetSelected crashes if its called with no 3D view window. I tried with the following: import Blender object = Blender.Object.GetSelected() And got an application error when executing with no 3d window opened, using the latest CVS version (did this both with a version I compiled myself and the 12/01/04 windows build from the "testing builds" forum. It does work with the latest released version.
https://developer.blender.org/T875
CC-MAIN-2016-40
en
refinedweb
. Meh. This isn't as big a change as I was thinking it would be. That said, "KDE Software Compilation" makes for really awkward phrasing (at least in English). Oh, yes, we recognize that. However, it actually isn't really meant to be used a lot. It's mostly for the release announcement purpose. In most other cases you are talking about a specific app, the workspace or the platform. And if you need to mention the release you can say 'our latest release', call it 4.4 etc. We didn't want to re-introduce a new brand here because frankly, it's just a bunch of apps which happen to release together. Apps which aren't part of the release schedule are just as important, and by calling the release 'software compilation' we make clear what it is (and what not). Currently I see I often see KDE X.X used as an easy way to identify software versions. DudeA: What version of KSnapshot do you have? DudeB: Iunno. The one from KDE4.3. What do you recommend be used instead? KSC4.4? KDESC4.4? Or is KDE4.4 still acceptable in informal contexts like these? That's what I use. I would not use KDE [version] because it would re-instate the ambiguity. As Luca said, KDE SC 4.4 would be fine, similarly SC 4.4 or even just 4.4 if that's enough in the context. But you can really help us by trying to avoid "KDE 4.4" because that just reinforces the KDE is the software (and in particular just a desktop) thing i like it and is acurrate to say "KDE Software Compilation 4.4.x" but "KDE Software Compilation 4.4.1" eventually become KDE SC 4.4.1 , and then again Kde 4.4.1....(daniell, dan, d). kde 4.x.y series it stay with us for a while(hope that, i dont survive another rewriteQT5?). maybe will be Kool do something more agressive for marketing and confussionless like "photoshop CS series(9.0)" KDE 4.4 -> KDE SC1 (4.4) KDE 4.5.1 -> KDE SC2.1 (4.5.1) instead "KDE Software Compilation 4.4.2" "kde 4.x.y series it stay with us for a while(hope that, i dont survive another rewrite QT5?)." Don't worry, any Qt5/KDE SC 5 will be a lot less painful, more like the KDE2 -> KDE3 move was. Qt won't be changing as much, and we won't have to re-write the desktop again, it will be more like clean-up work. I can't speak for the trolls, but last I heard they have no timeframe yet for Qt5, but I suspect that once they have all the Symbian and Maemo support work completed they will want to have a major clean-up to align everything and break a few things in the process. Just the be clear, the above comment is purely conjecture. There are no Qt 5 plans at this time that we know of, nor has anything real been discussed about this within KDE (not counting beer-induced planning of world domination). Cheers We should not even try to follow the path what Adobe toke with Creative Suite. They will end up to situation where they need to invent again new things. Because now they are on CS4. And few years forward they are on CS5 or CS6. Soon of that they comes to CS10 CS11 and so on. And right now we have clear history of releases so it is wise to follow it because it is always kept same way what was even very logical. Now we have forth generation desktop, with fourth release cycle coming. It is easy to understand the x.x.+1 updates what are bug fixes and so on. KDE SC 4.4 sounds good to me. KDE SC 4.4.1 and so on does same thing. Just that the word 'Compilation' sounds like it is something to do with compiling code, as opposed to assembling a collection of software applications and components for release. Or 'Compendium'? good point. I like the sound of "Collection" as well. Yes this sounds good to me. I also like KDE Software Set, but I don't like it's abreviation KDE SS. Anyway, I gess the discussion is already closed, long life to KDE SC ! That was also proposed during the sprint. I don't remember why, but in the end there was more consensus towards "Compilation" Those were the top three from our many suggestions. In the end we felt suite was too tight ("this is everything, other stuff is on the outside") and collection was a bit too loose ("just a load of stuff we threw together"). Compilation (for example a music compilation) indicates the idea that we selected some stuff that works well together. Marketing speak over ;-) Not 'content' If I had a vote it'd be for 'KDE Software Release' ...as in the Release of Software by KDE. Then we can abbreviate KDE Release 4.4... KDE r4.4 and it still makes some amount of sense! It's funny, in the writeup and comments there is emphasis about wanting something to convey that the it's not a "joined" group of applications (suite too 'tight') but just software that happens to be *released* together. Did "Release" really not come up as an option? Kind of funny if not. It'll be interesting to see if this sticks both inside/outside the community. I'll do my best to train myself appropriately with electro-shock treatment, and sweeties. Did I mention we made a long list? And checked it twice... KDE Software release - well, it's not the only software we release (also a lot outside the SC) so that's one problem You also suggest the handy shortening to KDE Release 4.4 or even KDE r4.4. So... KDE Release 4.4, that's the 4.4 release of KDE, right? Why not just call it KDE 4.4 then? We also get in to fun sentences such as "Today KDE releases software release 4.4" whici is a bit heavy on te word "release" then you call it KDESK :o) which sort of fits the description of what it should be.... You shouldn't need to use "KDE Software Compilation" much. It's really just there at all because we happen to release a whole load of stuff together (formerly KDE 4.3.3) and we need a name for that for release announcements and such. If someone sees your computer screen and wants to know what you're running then the answer is probably (KDE) Plasma Desktop/Netbook or possibly the name of one of the apps if that's what they're looking at. As for the size of the change - well, we didn't want to ditch "KDE", but rather define it properly and strengthen it. We also didn't want to make more work for us and everyone else than was necessary. Even these changes are going to take a lot of time and effort to implement (KDE websites, About dialog in the apps, getting the press and our own community to understand). Wow, this is quite nice, and less scary then I imaged it could be. I like the idea of positioning KDE as community, so changing the programs and modules into "KDE something" seams to make sense. I do have to get used to the term "KDE Software compilation 4.4", but that is something to get used to. Just like 'Smiths -> Lays' and 'Raider -> Twix' once seamed weird in the Dutch market. Nowadays the new names sound a lot cooler. :) It took me ages to get over the Marathon -> Snickers transition. Lays are Walkers over here in the UK (at least I think so - it's the same logo). At least KDE hasn't had different brands in every country... I just realized how much easier this is to tell about KDE to co-workers and other people. Before I told about the Linux desktop thing, and how yes, Linux does GUI's too, and no it's not difficult, and no it even looks nice. Conversations dry up pretty quick that way. Now I can tell about how KDE is a cool bunch of people, they make quality software, and empower Linux to be great as well. :-) ..and if those people start looking they find the same message at the Internet too. Bonus points for us. :-) Well said. It has been before difficult to explain what KDE really is. Typically you end up to situation speaking how there are volunteer people and paid people to develop software what is "KDE". And then there has be difficulties to explain how some software belongs to KDE (Konqueror, KMail etc) and how some does not but they use KDE technologies and how some does not at all belong but use KDE technologies (Amarok, digiKam, Kdenlive) but same what KDE use (pure Qt apps). Now it is all easy. So here I am, reading comments on a KDE article. I read this comment of a Dutch guy who spells "seems" as "seams", and I think - wait, I know this guy who always does that! And it was him... Hi Diederik! o/ Oh man, I love how in this world you keep seeing the same great guys over and over again ;-) lol :-) Hi sjors! I think the name KDE Software Compilation X.Y.Z is kind of misconducting because *a* software compilation can contain multiple different versions (e.g. think of kdelibs 4.4, Plasma 4.3, Okular from 4.2, KOffice 2.1), even more the compilation can be platform dependent. This is kind of weird and does not really makes clear what we want to deliver (we have a software platform and applications of different versions).? "software compilation can contain multiple different versions" KDE SC already does contain apps with different version numbers in it; but the SC itself has a consistent numbering as a whole. this is not new. "the compilation can be platform dependent" KDE SC, which is what the epochal x.y.z release are, is not. or at least, the parts that are are not built for a given platform. KDE SC is not 'a' compilation, it's *the* KDE SC. :) "This is kind of weird and does not really makes clear what we want to deliver" as noted above already, the SC isn't something we'll be pushing as a brand. it's just a way for us to avoid saying "KDE 4.4". we have a "4.4" release, and that's certainly going to happen (as well as 4.4, 4.5, etc.). but we want to emphasize the components more with the SC fading into the background a bit as a release engineering detail (from the POV of the audiences we will be talking to) and we definitely want to distance "KDE" from "that huge amount of stuff, including a desktop!, that they release". KDE SC gets us further down that road. at some point, we may not need "KDE SC" at all, but for now it's a needed disambiguator. and again, it's not really a brand itself. ?" i don't agree that companies should have exclusive right to a word that describes exactly what we are producing. carbon dioxide is a product of my respiratory system. it's not a company either. ;) talking about "products" is accurate. as a bonus it is verbage people who are used to proprietary software products are used to. the point is to communicate in words that are descriptive, that we know, that can easily be related to by others and that can be found in literature (yes, including "how to market.." type literature) without constantly translating. we are a group very different from a monolithic company, something the word 'product' is not going to change in the least. by contrast, i do think that if we talked about 'management structures' with traditional monolithic company terms we would be heading in a poor direction. it's also interesting to see how this was arrived at. it was very consensus based and "KDE" in how it was done. it didn't happen fast (this takes time no matter what kind of organization structure you have when it is this size, really) but it did happen in line with how we, KDE, have done things and will continue to do things. at least, imho. We really don't want you to talk of KDE as a product - KDE is the community, right? Re using the 'products' as a term for the things we produce. Well, you could just as well say KDE produces applications, workspaces and a platform if you want to avoid 'product'. But the end result of production is a product. It can sound corporate but it shouldn't be really. Good work. Even "KDE Software Compilation", which initially seems clunky, makes sense given the context. "KDE Plasma Desktop" is an attractive and sensible name. But (you knew there was going to be a 'but'): I really don't like the "put either KDE or a K in the application name" policy, especially because it implicitly encourages people to do the latter to avoid having to do the former. This leads to names which are either dry, technical, and unfriendly (KImageEditor), needlessly obscure (Okular), or just plain goofy (Rekonq, Kamoso). I'm not debating application authors' right to name their applications however the hell they want, but we should have policies which encourage them to choose good ones. The K-in-the-name can be done artfully sometimes (Krita, Kate), and sometimes just sticking a K in front works well enough (KTorrent), but these seem like the exception rather than the rule. In cases where the application's KDE-ness isn't already part of the name, I think it should just be left out. That information can be conveyed through other means. Neither "KDE Dolphin" nor "KFileManager" sound very attractive, and it also exacerbates the "so you can only use it in KDE?" problem you guys are trying to solve. The foremost priority should be attractive and descriptive names; a 'k' in the name should only be seen as icing on the cake, and should only be done when it doesn't come at the cost of attractiveness and descriptiveness. (For the record, I also support Matthias's Ettrich's idea of adding the application's function as part of the name where it's not immediately obvious, so "Okular Document Viewer", "Dolphin File Manager" and "Amarok Music Player", say, while Konsole could probably just stay "Konsole".) Obviously you guys are the ones in charge, these are just my thoughts with the hope that you will find them convincing. '"put either KDE or a K in the application name" policy' there is no such policy. there was a very clear trend to do so in the past, mostly as a way to keep the namespace clear (so one of our binaries didn't conflict with one from somewhere else) but also as a way to identify. this was very pre-marketing-ourselves-very-clearly, but wasn't a horrible thing. people who got hung up on it were .. well .. i never did understand getting distracted by something as insignificant. :) still, in recent times names like 'plasma', 'dolphin', 'solid', 'phonon' and 'gwenview' are more common and even apps that did things like capitalize a 'k' in an odd place ('amaroK') have since normalized their names nicely. there will continue to be 'k' names, in part because of namespacing but also in part because of culture and habit. no harm, no foul, really. it will remain up to the author(s) to name their work as they want to. sometimes a 'k' name might even make sense (KDevelop being a good example there in my mind) as for putting the full 'KDE' as a prefix, that's no different than calling something a 'Toyota Prius' or a 'Microsoft Zune'. most of the time they are referred to as a Prius or a Zune or whatever, but there are times when the umbrella brand is added for clarity or marketing purposes (or pedantry in conversation :). (in the above examples the umbrella brand is also the company's name, but that's not always the case) I believe you. But the article seems to imply the opposite: "Especially for applications that are not well known as KDE applications and are not easily identified as such by a "K" prefix in their name, it is recommended to use "KDE" in the product name." Since we agree, all I suggest is to make this somewhat clearer then. :-) So what is the actual policy here? Don't actually make e.g. "KDE Dolphin" be the name of the application, but use that form when talking about it if the name doesn't have a 'k' in it? Which form is going to show up in application launchers? Is it going to be different from the one used in press releases and news & reviews? And yeah, KDevelop was another example I thought of, but forgot to mention, of names where the K prefix actually works. I seem to notice that it tends to be the K-prefix names which consist of a single word which work well, and it's the ones with multiple words which are clunky, but I'm not sure if this works as a general rule. "So what is the actual policy here? Don't actually make e.g. "KDE Dolphin" be the name of the application, but use that form when talking about it if the name doesn't have a 'k' in it?" You got it right. If you talk about a KDE application you CAN refer to it as e.g. KDE Dolphin, but also just Dolphin, or Dolphin built on top of the KDE Platform. Yeah, perhaps the text is a little ambiguous there. I see it as: KDE + App name in launchers - generally, no (not in the KDE workspaces at least or most apps will be KDE something) KDE + App name on the Dot - again, probably no (we're talking about KDE stuff) KDE + App name on some other news websites (if not in the context of talking about KDE stuff in general) it might be helpful to link the app with us KDE + Okular when your Windows using buddy asks you what that cool viewer app you're using is - yes, that would be helpful because then they might not only check out Okular but also remember it's produced by KDE and see what else we have to offer Apps that have the K prefix tend to be associated with KDE anyway, in some circles, so it's probably less likely to use KDE with those. Why do you call Okular's name "needlessly obscure"? Isn't it an incident that Eye of Gnome uses a similar metaphor? On a nearly unrelated side-note, the name "Okular" (in the meaning of "eyepiece") looks like an implicit vote for bug 148527. Great, I think the general direction is good. I always experienced KDE as a community and it is good to emphasize this. What does not fit for me is the naming in some case. KDE Software Compilation does not sound really like good marketing. I do not have a better idea at the moment, but I would really to suggest to look for something else. May be start a competion for it. The other thing is the KDE Plasma Desktop, Netbook ... It seems to long. I would suggest to shorten it to KDE Desktop, KDE Netbook. I know, Plasma is very important, but rather as a technology for programmers. Users do not need to know. Calling it will KDE Plasma Desktop is bit confusing too. "I always experienced KDE as a community" same here "KDE Software Compilation does not sound really like good marketing" it's not a name that will be actively marketed as a strong brand. this is quite intentional. see the above threads on this. "The other thing is the KDE Plasma Desktop, Netbook ... It seems to long. I would suggest to shorten it to KDE Desktop, KDE Netbook." unfortunately we already have the "KDE == Desktop" thing going on, in no small part because the 'D' in 'KDE' was 'Desktop'. we are trying to create perceived separation between our workspace offerings (desktop, netbook, etc) and the app framework and individual applications KDE creates. the reason is that far too often, even today, people assume things like "Krita probably works only on KDE" (we get this on the irc channels all the time, a place you'd expect people who might actually know these things to go!). of course, the sentence is broken in a few ways: Krita works great in all kinds of places, and "KDE" isn't just a desktop environment. to create the needed separation so that people will feel more comfortable using the KDE dev platform (to create software that runs everywhere, not only in KDE workspaces!) and KDE applications outside of a KDE workspace, we're giving our workspaces names. we can't refer to it as 'Desktop' in public (ambiguous) so it would become "KDE Desktop" and too often just shortened to "KDE" again. given the historical as well as the going forward ambiguities, a name was needed. one was found. :) "I know, Plasma is very important, but rather as a technology for programmers. Users do not need to know." and users need to know about KOffice, KDE or any of the other similar names? :) Plasma is an identifier, and though you may have come across it as a technology framework, it's used as an accurate disambiguator from both "KDE" and "those other desktop/netbook/mobile UIs out there". "Calling it will KDE Plasma Desktop is bit confusing too." how so? About "KDE Plasma Desktop, Netbook": Actually it is "KDE Workspaces", which contain "Plasma Desktop" or "Plasma Netbook". KDE Desktop is exactly what was intended to be replaced ;) > The other thing is the KDE Plasma Desktop, Netbook ... It seems to long. I would suggest to shorten it to KDE Desktop, KDE Netbook. I know, Plasma is very important, but rather as a technology for programmers. Users do not need to know. Plasma happens to be the name of the technology, but I think this is quite usable for marketing too. "KDE Plasma allows you to create fluent interfaces". the word 'plasma' already implies this sort of, and I guess that wasn't a coincidence. What we can end up with is, getting users to demand a "KDE Plasma" interface for their phone/tv/mediabox and laptops. :-) Congratulations to the KDE marketing team on producing such a well thought out and coherent rebranding plan for KDE that neatly balances logic and emotion. Perhaps the emphasis in the article is on logical consistency but ultimately brands are emotional concepts in the general “mindspace” that serve to short-circuit the effort of too much logical decision making in a world saturated with choice. On that basis, it is particularly good news is that Plasma is prominent as the workspace brand. What better name for a vibrant and animated user interface? I'm sure the passion of the developers can be projected into such a brand. So no need to be concerned that the term Plasma was going to be publicly deprecated as technological jargon. And the idea of KDE Applications is good, though inevitably the short-hand will be “KDE Apps” (so why not a “KDE App” logo to provide a visual short-hand to identify these in any “App store” to avoid using clumsy phrases as KDE Amarok?). Where I think logic has got the better of emotion is the term KDE Software Composition 4.4 to formally avoid terms like KDE 4.4. The rationale for this fails to recognise that many great brands are overloaded terms covering both the product and organisation (Coca-Cola, Google, Volkswagen to mention just three) and people automatically deal with the ambiguity without a thought, its always clear from the context. But what people can't get their tongue or head round is something like the BMW Saloon Car 323. It just doesn't work. Whilst the BMW 323 Saloon Car or the BMW 3-series Saloon Car are just fine, though the short-hand will always be the BMW 323 or BMW 3-series (and I'm appearing stupidly pedantic reminding the reader that the context here is the car not the company). So, as we all know that “out there” it's going to remain KDE 4.4, why not just tweak the branding to the KDE 4.4 Software Compilation so the long-hand brand is consistent with the short-hand brand, avoiding the need to “correct” anyone (which would seem very petty if ever done publicly). The problem with the overloaded brand in the case of KDE is that people do not actually automatically understand the difference between a KDE app and the whole Software Compilation (not Composition. Your car analogy is off in that regard, it's not about BMW being a manufacturer and a car brand, it would actually be BMW being the standard for roadways, for tyres, for cars, basically the whole "environment" of the car. So you get people to avoid buying a BMW because they think they can't drive it on their Honda Streets anymore. I know, analogies can suck hard. ;) KDE has been the infrastructure (development platform) and the chrome (apps, desktop), and the relationship between them needs to be communicated clearly so it doesn't hurt adoption, especially when thinking of multi-platform use of KDE apps and dev platform. Another aspect where this lack of distinction has hurt is the reception of KDE 4.0. While many applications have been quite good from the beginning (okular, dolphin, to name just two) people started the KDE 4.0 workspace, were disappointed by its lack of maturity and didn't make this distinction between desktop workspace and applications -- it's all "KDE" after all. The re-branding of the KDE Software Compilation is there to make clear that it's really about the whole package of individual components (such as Plasma, Okular, Kontact). It also makes it easier to market those applications separately while taking advantage of the well-established and strong KDE umbrella brand. Thanks for correcting me on Software Compilation - I'll get used to it soon. I can't argue with your logic and the intention behind it, particularly the need to create a perceptual separation between the Plasma Workspace and the KDE Apps. If the KDE Software Compilation is more an internal community release concept than a publicly marketed brand, as Aaron Seigo suggests in his very recent blog, then that task will be easier. I also agree with your point on the importance of promoting KDE Apps as multi-platform. So perhaps it would make more sense to refer to the KDE Platform as the KDE Framework (shades of Qt Developer Frameworks here) so the phrase "multi-platform KDE App" unambiguously refers to the underlying operating systems. It would also avoid the phrase "multi-platform KDE Platform" which is the sort of verbal clumsiness the marketing team are trying to eliminate. Personally I always thought why the Argentinian people did not exploit more the K Desktop Environment for joking. You know: the Néstor Kirchner and Cristina Fernández government has been known as the "K Government". Argentina spoke a lot about the "K style", "K deputies", "K senators", and so. What about giving those jokers a complete and truly amazingly named "K Desktop Environment"...? WOW! This joke really went sour when the inner circle of Kirchner and his wife began to be called "Entorno K" (the K Environment). So, you only have to add "Desktop"! Now, this is coming to an end, but it's just in time for argentinian people. Watch this if you can read Spanish: ... all this marketing hogwash in the reasoning of the original article is horrible but, at least, they're involuntarily funny because I can now run around, screaming loudly "KDE is people!"... [1] ;-> [1] I mean I can understand calling the KDE desktop and it's applications together a "Software Compilation"... but when talking about KDE to friends I doubt I am EVER going to go to the trouble of putting the SC on the end, particularly when referring to just the environment as opposed to including the apps as well. Do you see my point? Surely KDE Platform + KDE Plasma Desktop = "KDE"? And KDE + KDE Applications = KDE Software Compilation ? hmm o.O "when talking about KDE to friends I doubt I am EVER going to go to the trouble of putting the SC on the end, particularly when referring to just the environment as opposed to including the apps as well" no, in fact you should be talking about the individual pieces. e.g. okular, or the plasma desktop, or the kde dev platform. if you want to refer to the "whole chunk of stuff i got at once that contains all sorts of stuff" then you can refer to the KDE software compilation. we really want people to be talking about and more aware of KDE as a modular set of software suites. interesting how terms change how we talk about things, no? :) "Surely KDE Platform + KDE Plasma Desktop = "KDE"?" nope. there's also okular and several dozen other apps that come in the SC, and many more KDE apps that don't come in the SC. this is why we're changing the name, because it's so confusing. you evidently think "KDE" is a desktop environment. it's not. the desktop environment is one thing we make, but only one thing and not even the absolute central thing. the KDE team hasn't really helped people to understand that due to the communication in the past. that's why we're changing things, and using 'KDE' exclusively for the community as an umbrella brand for everything we do. People will call the software KDE forever but never mind :) Btw I think it hasn't been a problem for any open source project that the community was called the same as the software. Many campaigns have tried to change how people call things and as we see in stadium renaming schemes, you can pay millions to have the stadium called one thing and people will still call it what they always did. I just dont see myself saying the whole three word name when three letters has been enough for over a decade. Will you say "I use Mandriva 2010-KDE4.3" or "I use Mandriva 2010 - KDE Plasma Desktop" ? I dont think I ever said or wrote K Desktop Environment before although I always say GNU-Linux when talking wuth tech people to differentiate the kernel from the generic desktop name. With the known Linux distinctions as well as the difference between Free and Gratis and open source/free software, the free software community has proven that they are clueless when it comes to these things and cant think further than their noses. For that reason, I trust you folks know what you are doing. The 4.x demanded a leap of faith so that the future is secure for some time to come. It wasnt an easy choice but it was the right one. I understand the why you want to do it and it makes sense to some degree but asking people to change habits is hard but asking them to go from three letters to three words seems like an even harder battle. I don't really like the concept of branding. It sounds very stupid to say, you are KDE but DE stands for nothing (oh, DE=Germany of course). As a brand KDE developed from a community project of (potentially) coding software enthusiasts into a project clouded by artifical announcement gibberish which often clashed with reality. Branding became less language neutral. It is quite a bit self-ironic that the branding now says "KDE is no longer software created by people, but people who create software." "The expanded term "K Desktop Environment" has become ambiguous and obsolete, probably even misleading. Settling on "KDE" as a self-contained term makes it clear that we... providing...applications and platforms... on the desktop, mobile devices, and more." - "It is not a limited effort to solve the problem of having a desktop GUI for Linux anymore." So in other words, you give up upon the desktop and become a technology collection. Now, maybe some persons may need this to better justify their KDE involvement in a business environment. In less diplomatic terms it means: We give up on the KDE Linux desktop, mission failed. Concerning the new naming convention you probably notice that it is unsystematic. So the next step is to rename Kword as "KDE Word" or "KDE Lettera or Lettera". "KDE applications can run independently of the KDE workspace and can freely be mixed with applications based on other platforms and toolkits." gets it wrong.. Where is the user in all this? Originally the implicit idea was develop for a user scenario vision, and communication was characterized by interaction with users and their expectations. As developers rule (and any user is a potential future later developer or contributor to other projects which form part of the desktop experience) of course non-contributors got less rank. Now the user is completely out of focus and it is "people who create software". You wonder if they ever eat their own dogfood. Maybe that was the kardinal problem with the KDE4 release cycle. You develop great toolkits and platforms to be used for (later) potential purposes. But no one has a user scenario in mind to which the technology development is instrumental, the solution. Here one early mockup got it right. It is really about a solution to a problem "Browse the web", "mail mary", not technology and applications per se. You've covered a lot of ground (good to see you've thought about it) so I'll try and take your points one at a time: "It sounds very stupid to say, you are KDE but DE stands for nothing" - The K has officially stood for nothing for a long time - There are plenty of organisations that have names that used to stand for something but which have moved away from those because they no longer really represent what they do no longer do: --3M (a lot more than mining and minerals nowadays) --AT&T (beyond telephone and telegraph) --BP (allege that they are beyond petroleum ;-) --SGI (claim to be more than graphics...) - It is more stupid imho to pretend that KDE is only produces a desktop environment "We give up on the KDE Linux desktop, mission failed" - I don't :-) - Plasma Desktop is one of our greatest achievements and I personally prefer to have my KDE applications running in my KDE workspace - The KDE Platform is also pretty darn cool, this helps us recognise that - There are plenty of people using KDE applications because they are just the best out there (Amarok, K3B for a couple) but who don't want to use our workspaces. Separating the two helps us get this message across "the next step is to rename Kword as "KDE Word" or "KDE Lettera or Lettera"" - That's entirely up to the application teams - You have to balance the gain from changing a name with the loss from losing a recongnised name. For things like KWord I personally don't think a name change is worthwhile ." - I don't really understand your point here - I agree that the problem is that KDE Applications are not thought of as being independent of the workspace - this is a big driver behind the changes we have made "Where is the user in all this?" (I won't quote your whole paragraph) - Hopefully they are in a position of having greater clarity about who we are and what we can offer "Now the user is completely out of focus and it is "people who create software"" - The "people who create software" thing is a catchy, one line summary. It is not complete. We debated who is in the KDE community and decided that really it is something that people almost define for themselves. You could be part of KDE if: -- You contribute code to anything in the KDE SC -- You contribute code to any free software app using the KDE Platform -- You contribute documentation, art, how-tos, feedback, promotion efforts, bug reports, comments on dot stories :-) There's probably a lot more Re your last paragraph: - You have to develop the underlying technology before you can use it. If you read the blogs of the people doing this I think you'll see that they have visions of how this translates in to real, usable tools - One of the things in being an open community is that we talk about things as we're doing them, partly to get more people interested in contributing so things can happen faster. We don't develop stuff in secret and then announce it in a blaze of glory. Stuart already shared many of the thoughts that sprung into my mind as well, but I'd like to add a bit to this: "Where is the user in all this?" Where they've always been .. the people who use the software we write, the people many of us keep in mind while writing that software. "Originally the implicit idea was develop for a user scenario vision, and communication was characterized by interaction with users and their expectations." That hasn't really changed, and not at all with this announcement. Go look at the Plasma Netbook effort and consider how it was created from the start. I do think you are confusing "how we are going to be using our brands" with "here is our marketing strategy". The marketing and communications direction is a much bigger discussion than the definition of the top level brand names. This is more like part of the glossary than the textbook on our marketing. To be even more clear: we aren't going to be heading around in interviews and what not trumpeting "KDE is us!" as if that's the important message we need to get across; we'll be communicating about the technology we create just as we always have been, but hopefully more effectively and clearly. This set of changes and definition just helps us understand where the term "KDE" fits into that communication. It is not a change in focus. "Now the user is completely out of focus and it is "people who create software"." We've been people who create software for 13 years now. That's not new. The term "KDE" is now reserved for the community and organization and not the products and that's the only real change here. In fact, this is getting the naming into line with the reality for the last many, many years (far before 4.0 even). The idea that this reflects a change in what we've been doing "down in the trenches" is completely off-base. "You wonder if they ever eat their own dogfood." We do, many of us use a near-continuous (daily, weekly) build of KDE from svn, in fact. So, that question is now answered :) "But no one has a user scenario in mind to which the technology development is instrumental, the solution. " Thankfully that isn't true, and as we continue to measure actual results over time this will become increasingly apparent. (Not sure what any of this had to do with the actual branding thing, though. :)
https://dot.kde.org/comment/108669
CC-MAIN-2016-40
en
refinedweb
On Mon, Aug 10, 2009 at 11:20:17AM -0500, Manoj Srivastava. > Well, of we are top carve out a namespace in policy, it also > makes sense if we define whay such packages ought to contain as > well. Having a namespace carved out for packages with only detached > debugging symbols (and with the normal policy rules on regular > packages -- copyright, changelog, etc). Yes, certainly; but -dbg is not the correct namespace, then, since there are pre-existing packages using these names for other purposes. -- Steve Langasek Give me a lever long enough and a Free OS Debian Developer to set it on, and I can move the world. Ubuntu Developer slangasek@ubuntu.com vorlon@debian.org
https://lists.debian.org/debian-devel/2009/08/msg00325.html
CC-MAIN-2016-40
en
refinedweb
danielk <danielkleinad at gmail.com> writes: > Ian's solution gives me what I need (thanks Ian!). But I notice a > difference between '__str__' and '__repr__'. > > class Pytest(str): > def __init__(self, data = None): > if data == None: self.data = data > > def __repr__(self): > return (self.data).encode('cp437') > The correct way of comparing with None (and in general with “singletons”) is with the “is” operator, not with “==”. > If I change '__repr__' to '__str__' then I get: > >>>> import pytest >>>> p = pytest.Pytest("abc" + chr(178) + "def") >>>> print(p) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > TypeError: __str__ returned non-string (type bytes) In Python 3.3 there is one kind of string, the one that under Python 2.x was called “unicode”. When you encode such a string with a specific encoding you obtain a plain “bytes array”. No surprise that the __str__() method complains, it's called like that for a reason :) > I'm trying to get my head around all this codecs/unicode stuff. I > haven't had to deal with it until now but I'm determined to not let it > get the best of me :-) Two good readings on the subject: - - ciao, lele. -- nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. lele at metapensiero.it | -- Fortunato Depero, 1929.
https://mail.python.org/pipermail/python-list/2012-November/634905.html
CC-MAIN-2016-40
en
refinedweb
-- | , next, nextN, rest, closeCursor, isCursorClosed, -- ** (..), Server(..)) import Data.Bson import Data.Word import Data.Int import Data.Maybe (listToMaybe, catMaybes) import Data.UString as U (dropWhile, any, tail, unpack) :: (Server, MonadMVar m) => Access m instance (Context Pipe m, Context MasterOrSlaveOk m, Context WriteMode m, Throw Failure m, MonadIO' m, MonadMVar m) => Access m newtype Action m a = Action (ErrorT Failure (ReaderT WriteMode (ReaderT MasterOrSlaveOk (ReaderT Pipe m))) a) deriving (Context Pipe, Context MasterOrSlaveOk, Context WriteMode, Throw Failure, MonadIO, MonadMVar,) -- | A connection failure, or a read or write exception like cursor expired or inserting a duplicate key. -- Note, unexpected data from the server is not a Failure, rather it is a programming error (you should call 'error' in this case) because the client and server are incompatible and requires a programming change. data Failure = ConnectionFailure IOError -- ^ TCP connection ('Pipe') failed. Make work if you try again on the same Mongo 'Connection' which will create a new Pipe. | CursorNotFoundFailure CursorId -- ^ Cursor expired because it wasn't accessed for over 10 minutes, or this cursor came from a different server that the one you are currently connected to (perhaps a fail over happen between servers in a replica set) | QueryFailure database (if server is running in secure mode). Return whether authentication was successful or not. Reauthentication is required for every new pipe.Access (Database db) col = U.any (== '$') col && db <.> col /= "local.oplog.$main" -- * Selection data Selection = Select {selector :: Selector, coll :: Collection} deriving (Show, Eq) -- ^ Selects documents in collection that match selector = [P) where batchSize' = if batchSize == 1 then 2 else batchSize -- batchSize 1 is broken because server converts 1 to -1 meaning limit 1 queryRequest :: Bool -> 'CursorNotFoundFailure'. Note, a cursor is not closed when the pipe is closed, so you can open another pipe to the same server and continue using the cursor.Closed cursor = do CS _ cid docs <- getCursorState cursor return (cid == 0 && null docs) -- ** Group -- | Groups documents in collection by key then reduces (aggregates) each group data Group = Group { gColl :: Collection, gKey :: GroupKey, -- ^ Fields to group by gReduce :: Javascript, -- ^ @(doc, agg) -> ()@. The reduce function reduces (aggregates) the objects iterated. Typical operations of a reduce function include summing and counting. It takes two arguments, the current document being iterated over and the aggregation value, and updates the aggregate value. gInitial :: Document, -- ^ @agg@. Initial aggregation value supplied to reduce gCond :: Selector, -- ^ Condition that must be true for a row to be considered. [] means always true. gFinalize :: Maybe Javascript -- ^ @agg -> () | result@. (@doc -> key@) returning a "key object" to be used as the grouping key. Use KeyFAccess m) => Group -> m [Document] -- ^ Execute group query and return resulting aggregate value for each distinct key group g = at "retval" <$> runCommand ["group" =: groupDocument g] -- ** MapReduce -- | Maps every document in collection to a list of (key, value) pairs, then for each unique key reduces all its associated values to a single result. There are additional parameters that may be set to tweak this basic operation. data MapReduce = MapReduce { rColl :: Collection, rMap :: MapFun, rReduce :: ReduceFun, rSelect :: Selector, -- ^ Operate on only those documents selected. Default is [] meaning all documents. pipe only, however, other pipes may read from it while the original one is still alive. Note, reading from a temporary collection after its original pipe. The function must call @emit(key,value)@ at least once, but may be invoked any number of times, as may be appropriate. type ReduceFun = Javascript -- ^ @(key, [value]) -> value@. The reduce function receives a key and an array of values and returns an aggregate result value. The MapReduce engine may invoke reduce functions iteratively; thus, these functions must be idempotent. That is, the following must hold for your reduce function: @reduce(k, [reduce(k,vs)]) == reduce(k,vs)@.Access m) => MapReduce -> m Cursor -- ^ Run MapReduce and return cursor of results. Error if map/reduce fails (because of bad Javascript) -- TODO: Delete temp result collection when cursor closes. Until then, it will be deleted by the server when pipe closes. runMR mr = find . query [] =<< (at "result" <$> runMR' mr) runMR' :: (DbAccess $ "mapReduce error:\n" ++ show doc ++ "\nin:\n" ++ show mr -- * Command type Command = Document -- ^ A command is a special query or action against the database. See <> for details. runCommand' :: Reply) -- ^ Send notices and request as a contiguous batch to server and return reply promise, which will block when invoked until reply arrives. This call will throw 'ConnectionFailure' if pipe fails on send, and promise will throw 'ConnectionFailure' if pipe fails on receive. call ns r = do pipe <- context promise <- mapErrorIO ConnectionFailure (P.call pipe ns r) return (mapErrorIO ConnectionFailure promise) {-. -}
http://hackage.haskell.org/package/mongoDB-0.9/docs/src/Database-MongoDB-Query.html
CC-MAIN-2016-40
en
refinedweb
Source CodeService Description: Sample Implementation: CS2VB_WinFormsConsumer.zipVB6 Sample Implementation: CS2VB_VB6Consumer.zipASP .Net Online Implementation: UsedC#, ASP .Net, VB6, SOAP Toolkit 2.0Article DescriptionThe title says it all. ConvertCSharp2VB is a Web Service that converts a C# code block into VB.Net. It exposes the Service Description and WSDL for the Web Service, so you can implement this functionality in your applications. In this article I will begin by explaining how we can create a Web Service Consumer using C# to access the ConvertCSharp2VB Web Service. Later on I also demonstrate writing a Consumer that accesses this service using VB6 and SOAP Toolkit 2.0.BackgroundConvertCSharp2VB is small utility class developed using C#. The class exposes only one public method, Execute() that receives a C# code block as a parameter and returns its equivalent VB.Net code block. When the Execute() method is called, the class looks for specific patterns in the C# code block and converts each code block to its VB.Net equivalent. The Web Service ConvertCSharp2VBService is simply a wrapper around an instance of this class.The converter currently handles most of the conversions from C# to a VB.Net. It handles namespaces, classes, structs, enums, methods, properties, fields, declaration, for-loops, do-while, while-do, foreach, if-else-end if, select-case, try-catch-finally etc. to name a few. Creating a Consumer in .NetTo access the web service from your project, right click on the project in Solution Explorer and select Add Web References. Type the following URL and select Add Reference. This will add the web reference to your project and create all the necessary plumbing code to access this Web Service. Once the service is added to your project, you can access it just like any other class reference. Here is the code that is required to access this service. private void cmdConvert_Click(object sender, System.EventArgs e){//Instantiate the servicenet.kamalpatel. oService;oService = new net.kamalpatel.;//Call the execute method and handle the formattingstring lcStr = o.Execute(this.txtCSharp.Text);this.txtVB.Text = lcStr.Replace("\n", "\r\n");} Just like any other object we create an instance of the web service class (net.kamalpatel.) and call its methods. In our case we call the Execute() method that receives a CSharp code block and returns a VB.Net code block. Here is an image that shows the implementation of this Web Service developed using ASP.Net.Consumer in VB6Here is a typical code for consuming a Web Service from VB6. We begin by instantiating the SOAP Client and initialize it by specifying the location of our WSDL. Once the SOAP client is initialized, it allows us to call methods of our component as if it were a part if the SOAP Client object. In this case notice that Execute() is actually a method of the Web Service. Private Sub cmdConvert_Click()'Create the SOAP ClientDim soapClientsoapClient = CreateObject("MSSOAP.SoapClient")'Initialize the soap client and pass the URL for'the WSDL file as a parameterDim cWSDL As StringcWSDL = soapClient.mssoapinit(cWSDL)'Call the Execute() method and display the resultsDim cRetVal As StringcRetVal = soapClient.Execute(Me.txtCSharp.Text)Me.txtVB.Text = Replace(cRetVal, vbLf, vbCrLf)End SubInternally, the soap client determines the method and makes the call to the actual service. The last line in the above code block replaces all the line feeds with carriage returns + line feeds. This makes it more readable when displayed in the textbox. ConclusionThe converter does a great job for VB developers as most of the .Net code is still in C#. Even though not 100% complete, it supports 90% of the available types.This Web Service is free to use and access, so if you wish to create consumers on your web site that access this service, it will all yours. If you simply wish to use the service, you can access it from. ©2016 C# Corner. All contents are copyright of their authors.
http://www.c-sharpcorner.com/article/C-Sharp-to-VB-Net-code-conversion-web-service/
CC-MAIN-2016-40
en
refinedweb
What exactly are we trying to do here? We're trying to project forward to the Stradivarius of coding. Such an instrument would elevate the game of excellent developers to the highest levels ever. That's what a Strad would do. Necessarily such a device makes various assumptions about its players. The assumptions here are that object-oriented software construction is important. That exercising new types with tests is the preference if the burden of doing so is not too great. That systems grow in complexity. That developers want to see the frail aspects of a solution in order to remodel them. And finally that the execution states of an application, if stored as a persistent, searchable structure, give rich opportunity for new ways of debugging, optimizing and enhancing the overall quality of an implementation. Here's an overview shot. Inspect a cumulative stack based on a given application or test run in order to fix, optimize or refine the implementation. Imagine every method call made during the run of an application or test; place these calls in sequential order and you have the cumulative stack. We can explore this structure by selecting an individual method call, selecting a line in the method and then proceeding as usual, stepping through each line of source. The difference is that we can find out how any object reached its state, or why a particular line executed by traversing the history of the run. If an exception has occurred, we may need to see only the last few calls. Cutoff limits the number of calls available for searching. Essentially it’s an optimization. Within the parentheses of method calls, a small icon (called a kibble) represents the value of all parameters. You can hover over this icon to get a mini-watch window with the parameters listed (which you can then check to add it to the main watch window). These work the same as Parameter Kibbles, but for return values. A cumulative stack may have millions of calls in it. Tracing the cause of an exception back from the line where the exception occurred might be quite simple in some cases. Running wire can certainly help isolate the exact point when some value changed which eventually led to an error condition. The stack has a tendency to “drill-down” into composites or related objects, then snap-back to more basic application loops. This action is far more apparent in the cumulative stack, because it contains every drill-down and snap-back, every transient stack that existed during the run. Jumpers allow us to better see what the transient stack looked like at a given moment in time, establishing a root and leaf for a single method call. This makes it easier to see the iterations of a loop (for example), so that we could inspect the second iteration without having to wade through all the noise generated by the first iteration. Move efficiently through the cumulative stack avoiding unnecessary step intos. A “step into” for a property getter. The biggest problem here is deep dot notation: Form.Controls[0].Control.TextModel.Reset(); When debugging, this line creates a “step-into” nightmare. By breaking out all of those getters and listing them as options, we provide a direct shot into “Reset()”, likely the desired target anyway, without cutting off the getter targets. Mini Diagram-style shapes appear next to each Get Step, showing whether the property is an object or primitive type. If the getter contains code beyond a simple field value return, a line is drawn on the left side of the shape. Speed resolution by putting compilation and exception messages in context. The Constant Velocity (CV) engine compiles the solution whenever sufficient idle time exists to merit an attempt, and IntelliSense has indicated that the source should compile successfully. If this is not the case, or if the developer has forced a compile, the Error Trap will display all of the compilation errors. Each error can be clicked and the line of source is displayed (just like the current Output pane.) When an application or test is run, any exception thrown is displayed, with the errant line highlighted. The cumulative stack is still available in this instance, and can be used to track down the cause of the exception. This is a big productivity breakthrough—the transient stack is often thrown away during the first occurance of an unexpected exception (so that a proper breakpoint can be set) and sometimes that stack still doesn’t show the problem which may have occurred even earlier than the breakpoint. Visually depict the implementation of a type. Minis are used in the Visual Stack and in Visual Refactoring. A rectangle represents the object or type. Other shapes are then added: Properties are built to express read/write capability and to indicate the presence of code beyond a simple setter or getter line. Read shapes are placed on the left, write shapes on the right. If additional code is present, a line is drawn from the edge of the value shape to the edge of the oval. The scope of a member is shown by its position in the rectangle. Public members hang over the outside edge. Internal members appear in the private or protected region inside the rectangle. Usage Hovering over a shape causes the shape to “go hot” and a label naming the member and listing any parameters is shown. Members are usually checked via a hover, dragged to a watch window or selected (if refactoring.) In the Visual Stack, a blue highlight effect arcs through the Minis, indicating the method call order. Minis were designed to be generated by software, and as such, adhere to standards (for example the use of ellipsis beyond a certain count of parameters or members) to assist in that application. Represent the stack as a series of connected Mini Diagrams to show object complexity and to give instant access to object state. Click the orange square to see the solid-state watch window for the object type. Define new watches by dragging shapes from the Mini Diagram to the Watch window. Create new types with a single click, choose from the most recently selected types, search for types based on name, referenced types, interfaces implemented, ancestor type and other metadata. Search sentences make it possible to filter all of the types into a reasonable subset. Key here is to provide a full set of good default sentences so that the developer can use the drop-down. Where the defaults do not suffice, advanced sentences can be built and saved. Show everything/classes/delegates/structs/interfaces/enums in the solution/project/namespace/folder that start with/contain/end with that descend from the type that implement the interface that contain the attribute/method/property/event Building sentences from this spec we have: Of course typing in the list jumps the selection to the first type that matches. For small projects then, search sentences will probably be left on “Show all types in the project.” The same as “Add New Item | Class” without the dialog. The “New Type” template should be editable by the developer. Displays a popup of the most recently selected types for this solution. Make it easy to see and navigate to types referenced directly by the current type. When a type is selected, this header area above the code editor displays a list of types. Click one of these to see the source for the type. Double-click one of these to update the type selected in the Finder and force the repopulation of the strip. The intent here is to speed the navigation to related types, without requiring the developer to find a reference in the source. The strip also provides a reality check on referenced types, higher counts being less desirable. Create new tests with a single click, choose from the most recently selected tests, find existing tests based on name, footprint, types referenced and other metadata. Automatically list the tests associated with a given line or lines of source - on demand or as those lines are edited. The Test Reffer is implemented inside the Test Bench, such that the list simply updates when source is edited. As more and more source is changed, one would normally expect the list to grow. The tests, which usually have a green status, go yellow when any underlying source is changed. When the changes can be compiled, the CV engine then compiles the test, runs the test, and sets the status. Click this icon to create a new test. A solution always has a default test project defined. The code file for the test is added to this project automatically. The test is also given a sequential number. A description of the test can be entered in the list or in the description area at the top of the Test Bench. Click this icon to display a list of the most recently selected tests. Search sentences are English-like phrases with replaceable words. These can be concatenated together to form an entire search paragraph. Once a search is built, it is saved and can be named. Search sentences are saved per project, and can be selected via a drop-down. The intent here is to allow quick and easy searching via predefined and even custom metadata (such as attributes.) Statuses are: The first type of test supported would be unit tests. Of course the underlying framework should be easily integrated into by the currently popular testing frameworks such as NUnit. Simple unit tests are not sufficient for full-fledged QA—Application (Functional) Tests should be accommodated. Extensible architecture is important here as well, as the various extant testing applications would want to integrate. Most importantly, full support for an Application Recorder/Playback style implementation should be provided. In the highest quality implementation of this feature, recordings would be represented as the source of a managed type, making the recording directly editable. This is special kind of metadata associated with a test. A footprint is a list of every line of source that executed during the test run. The Search Sentence control in the TestBench provides an option to find all tests with identical footprints. Redundant tests can then be easily removed. Graphically summarize code metrics directly in the source itself. Grooves are accessible via a popup menu in the code editor. Select Grooves | Show All to display all of them, or pick an individual groove. When the mouse hovers over a groove, it expands, making it easier to move over the graph for a specific line of source. Right-clicking over the graph causes the value of the underlying statistic to be displayed, or in some cases, a full-blown listview is displayed. Predefined grooves include: Tests - Number of tests associated with this line of source. This list can then be transferred to the Test Bench. Run - Number of times the line has been run (from within the current solution.) Blown - Number of times an exception has been thrown directly from this line (not lower level source.) Age - How long the source has been around. Should be maintained even through clipboard actions. Edits - Number of times this line has been changed. Calls (methods only) - Number of places in the current solution where this method is called. List of callers provided. Creates (types only) - Number of places in the current solution where this type is instantiated. List provided. Groove statistics are summarized based on a set of predefined rules. These rules can be modified by the developer. The intent is to show the desirability of the statistic at a glance. Of course grooves should be implemented in a way that allows them to be extended and new grooves defined based on existing metadata or new combinations of metadata. The groove painting architecture should also be pluggable, so that these graphs can be easily superseded. Determine the exact points in the source where one type references another. References are summarized in two different ways, either through a highlighted Mini Diagram (where the parts containing a reference are highlighted) or through a list of the line numbers. You can click either of these to jump to a reference. Where a reference flows through to additional types, the list of those types is provided. Thus you can see the depth of the dependency. The source itself is bulleted at each reference. Wires help us track down changes to the individual field, property, or event of a single object, answering the question: When did this value change? You can run wire by right-clicking a line of source which references the field, property or event in question. All changes up to this line of code will be gathered into a list. You can then select from this list to display the method where the change occurred. The line or lines of source where values changed are highlighted. Methods involved in an active wire are colorized in the Cumulative Stack. Remember that wire is instance-based. Only the field or object you have located in the stack is traced for changes, as opposed to all instances for that field or type. List lines of source based on the amount of CPU time each required. Trax highlights each line of source that executed during the last run, making it easy to differentiate which lines executed from those which did not. This is different from Stack Trax, which highlight just those lines executed for the selected call. Trax are cumulative, highlighting every line that fired at any point during the run. Locate the most time intensive or noisy methods using the summary. Select a method and the SubTracker will show all time-intensive calls in that method. Then select a method in the SubTracker or click the “bullet” in the Code Editor to continue drilling down. When Trax are visible, the Cumulative Stack is divided into sections, each containing all of the calls leading up to a call to the selected method. This helps isolate the “reason” for each call. Use the Trax Mode arrows to navigate between call instances. List the objects which exist as of a given line of source. This rather straight-forward listview would not be a big deal, except for one thing—it updates as you click in the Cumulative Stack. That means you can assess total object footprint as of any call—and that should lead to some pretty incredible optimization opportunities. Notice the total memory usage at the bottom of the list. This statistic is actually searchable in the Search Sentence control at the top of the Cumulative Stack. You can then filter the stack to show only those calls which were in play when when memory usage was say greater than 1MB. The possibilities for LiveObjects based metadata searching from the stack are both enticing and limitless. Find the line of source where an object was instantiated. Often when you see a high object count, the first question is: “Who” is creating all these objects. This little listview breaks out the exact line of source responsible for the instantiation of a given object type. Just click the object type in LiveObjects, then click a line in Origins: the source is then displayed in the code editor. Note: Parentheses are shown only for constructors. List the objects which have been garbage collected (or were eligible for collection) as of a given line of source. As you move through the stack, a certain amount of collection eligibility should occur, based of course on the implementation. The biggest benefit here—instantly spot bloat. Bloat will also appear in the LiveObjects window, though slightly masked as good objects mix with objects intended for collection. One thing is for sure, if you select the last line of the stack and the Barge is empty, your application never allowed a single object to be garbage collected! Find the line of source where an object was instantiated, even after it has been garbage collected! Of course, if you go back to that line, then you can step through and watch it get created again. Note: Parens are shown only on constructors. Manage main, source and test projects. In order to support the effortless creation of tests, a default test project can be set. Any tests created in the Test Bench are then automatically added to this project. It is common for TDD to require multiple test projects. The Projector makes it easy to create new test projects and to set them as the default. When multiple projects are in play, build order is the best indicator of correct dependencies. The strip also supports single-shot build-on-click so you can check the compilation of a project without including it in dependencies or checking build in the Solution Configuration. (Projects not set to build are not compiled by the CV engine.) Reflects the number of types in a project. These count ranges can be edited by the developer. Custom range tables can also be created per project type. Quickly select old solutions or create new ones. Create a new solution. Displays a popup of the most recently selected solutions. Configure the solution in the workspace Set Source Folder for solution. Select which projects will build. Set Output Type per project. Set Output Path per project. Change Output filename for each project. These capabilites overlap with the Projector. Here the main purpose is to get an overview of the settings for all of the projects. Here we step thru a simple development cycle, showing how the CV engine supports test-driven.
http://www.codeproject.com/Articles/15533/Visual-Studio?msg=2833208
CC-MAIN-2016-40
en
refinedweb
Perl Programming/Objects< Perl Programming ObjectsEdit When Perl was initially developed, there was no support at all for object-orientated (OO) programming. Since Perl 5, OO has been added using the concept of Perl packages (namespaces), an operator called bless, some magic variables (@ISA, AUTOLOAD, UNIVERSAL), the -> and some strong conventions for supporting inheritance and encapsulation. An object is created using the package keyword. All subroutines declared in that package become object or class methods. A class instance is created by calling a constructor method that must be provided by the class, by convention this method is called new() Let's see this constructor. package Object; sub new { return bless {}, shift; } sub setA { my $self = shift; my $a = shift; $self->{a}=$a; } sub getA { my $self = shift; return $self->{a}; } Client code can use this class something like this. my $o = Object->new; $o->setA(10); print $o->getA; This code prints 10. Let's look at the new contructor in a little more detail: The first thing is that when a subroutine is called using the -> notation a new argument is pre-pended to the argument list. It is a string with either the name of the package or a reference to the object (Object->new() or $o->setA. Until that makes sense you will find OO in Perl very confusing. To use private variables in objects and have variables names check, you can use a little different approach to create objects. package my_class; use strict; use warnings; { # All code is enclosed in block context my %bar; # All vars are declared as hashes sub new { my $class = shift; my $this = \do{ my $scalar }; # object is a reference to scalar (inside out object) bless $this, $class; return $this; } sub set_bar { my $this = shift; $bar{$this} = shift; } sub get_bar { my $this = shift; return $bar{$this}; } } Now you have good encapsulation - you cannot access object variables directly via $o->{bar}, but only using set/get methods. It's also impossible to make mistakes in object variable names, because they are not a hash-keys but normal perl variables, needed to be declared. We use them the same way like hash-blessed objects: my $o = my_class->new(); $o->set_bar(10); print $o->get_bar(); prints 10
https://en.m.wikibooks.org/wiki/Perl_Programming/Objects
CC-MAIN-2016-40
en
refinedweb
#include <stdio.h> #include <wchar.h> wint_t fgetwc(FILE *stream); wint_t getwc(FILE *stream); The getwc() function or macro functions identically to fgetwc(). It may be implemented as a macro, and may evaluate its argument more than once. There is no reason ever to use it. For nonlocking counterparts, see unlocked_stdio(3). In the absence of additional information passed to the fopen(3) call, it is reasonable to expect that fgetwc() will actually read a multibyte sequence from the stream and then convert it to a wide character.
http://www.makelinux.net/man/3/G/getwc
CC-MAIN-2016-40
en
refinedweb
Short And Long Term Investment Evaluations Finance Essay Short-term or long term, in evaluating an investment decision making for businesses with the purpose of evaluation. When taking investment risk, procurement, cost and benefits and many other contexts to pursue accountability involves making. For decades, the managers add value to key objectives and methods of evaluation are to find the right investment. According to the outpost and Noreen (2000) Property valuation method any company, both tangible and intangible assets held by the treatment is for. To Götze, according to Yu and Norchoctt, D. & Schuster, P. (2008), "Assessment of investment is less understanding of the company's assets value and uses verschillende verwachte deboned got to make the return tend beperkte to the right to make decisions and the decision should be how the company will have one nice outfit. Operational matters and strategic planning and management of traditional management accounting techniques to boycott the concentration are modern investment appraisal. AP programs and projects of investment in various business and technology decision following discussion of the importance of evaluation alights, and discounted cash flow techniques is the effect, while a long-term decisions. Also explains the importance of capital costs and the decision affects the comparison between the various processes and how. A) Investment Appraisal should add value to the business entity. Do u agree? Yes, I strongly with this logic because the assessment of investment added value to business or entity a method of financial assessment of investment restrictions, non-economic factors to understand. The total benefits of the project to assess their contribution to organizational strategy may be, their financial contributions or other means using the index for non-financial benefits. "Shows that many organizations strategic investment decisions. Investment decisions on the economic vitality and competitiveness of their assessment of the impact, should the couple's equipment financing strategy”. PROJECT ANALYSIS, PAYBACK, NPV, IRR B) Calculate each project payback period, NPV and IRR...... YEARS PROJECT A £000 PROJECT B £000 0 (10) (25) 1 3 6.5 2 3 7 3 3 7.5 4 3 7.5 5 3 8 These two projects, five (5) year period are based investments. An initial investment project A of five years is £10,000 for an equivalent cash flow of £ 3,000.00. During the five year period where the initial investment project B £25000.00 pounds and cash flow is variable. The only Guess used here, the cost of capital is 12.5%. Payback: PROJECT A:- Formula Payback period = Investment required / Net annual cash inflow Project A payback period = 10 / 3 = 3.33 Project B: Project data as B is not equal to the flow period of 5 years; we calculate the cumulative discounted cash flow return on the basis of: Project ‘A’ Year Net Cash Flow Discount Factor Present Value Cumulative Discounted Cash Flow 00 (25) 1 (25) (25) 01 6.5 .889 5.78 (19.22) 02 7 .790 5.53 (13.69) 03 7.5 .702 5.27 (8.42) 04 7.5 .624 4.68 (3.74) 05 8 .555 4.44 0.7 Total Present Value 25.7 Pay back for project “A” is =3.33years. Pay back for project “B” after 4yrs+ (3.74/4.44)=4.842 years. NPV (Net present value): Project A: If a project's net cash equalled about 5 years, we gain is used to calculate net present value factor. 5 annuity capital cost of 12.5% (.889 + .790 + .702 + .624 + .555) = 3.56 based on factors. Net Present Value Calculation Formula = Total Present Value – Initial Investment Net Present Value for Project ‘A’ = (Cash flow X annuity factor) – Initial Investment = (3X3.56) – 10 = 0.68 Project B: Net Present Value Calculation Formula = Total Present Value – Initial Investment Net Present Value for project ‘B’ = Total Present Value – Initial Investment = 25.7 – 25 = .70 Internal rate of return (IRR): Project A:- Assume cost of capital for project ‘A’ is 17% Net Present Value for project A will be = (3*3.119)-10=9.357-10= (0.643) IRR = positive rate + {(positive NPV/ (positive NPV+Negative NPV))*range of rates} =12.5 %+{( .68/(.68+.64))*(17%-12.5%)} = 12.5 %+{ .515*4.5%} = 12.5%+3.12% =15.12% The internal rate of return of project A is 15.2% Project B: For calculating the internal rate of return for project B a negative Net present value is required so assume the cost of capital for the project is 18% Project ‘B’ Net Cash Flow Discount Factor Present Value (25) 1 (25) 6.5 .893 5.48 7 .712 4.98 7.5 .600 4.5 7.5 .507 3.80 8 .428 3.42 Total Present Value 22.18 Less Initial Investment (25.00) NPV (2.82) IRR = positive rate + {(positive NPV/ (positive NPV+Negative NPV))*range of rates} =12.5 %+{( .70/(.70+2.82))*(18.5%-12.5%)} =12.5 %+{ .198*6%} =12.5%+1.18% =13.68% The internal rate of return for project B is 13.68% DECISION MAKING: C) For each of the above methods which project should be selected and explain why? 1) By using payback method the decision is to select project “A” because: Assessment through calculation of payback, NPV analysis and IRR calculations, "Project A" to return to more quickly return the initial investment. Return count up, "Project B" 4.85 will be repaid by year's end and "Project A" will be 3.33 years for reimbursement for investments according to investment. A project as' a long time sending back the initial investment, At present think about how fast your investments' Project A 'should be selected. 2) By using NPV method the decision is to select project “B”because: However, using net present value, will take the time value of money and discounted cash flow method right "concept, the project committee" will be more profitable B 'Company' Physical Details: At present £ 10,000.00 for the investment company must "£ 3.000 in cash equal to five years", and received less than 12.5%. Therefore, 'A Plan' Present has a net present value of £ 680. However, the project company £ 25,000 of B £ 6500 in cash, 70 million pounds, investment £ 7.500, £ 8,000, or 7.500 over 12.5%. Therefore, the project's net present value of 700 pounds. Therefore, if the "real budget, At Present", all other conditions are favourable, the project is B 'is a better investment. 3) By using IRR method the decision is to select project “A” because: On the other hand, it is 15.9% higher IRR project A "return IRR is calculated. This means that if the company more margin than they 'project A ‘rather than looking to choose B. DISCOUNT CASH FLOW: D) Explain why it is essential that discount cash flow should be calculated when making long term investment decision? Decisions without calculating the true value and yield is very common, but the records for these problems are close to a big help. If we "follow the evaluation of an investment project for B ', discounted cash flow method that made the picture without using" Always At present "exemption from the different discount factor than the more accurate data and returns a much return than the estimated figure for the period gives us. NPV and IRR methods and project cash flows discounted back using the most appropriate assessment. E) What would happen to the NPV if: 1) The cost of capital increased? Increase the cost of capital is the company to higher borrowing costs or high investment. If we follow clearly see the cost of capital return multiple values in investment rate and the period can be summed up both positive and negative net present value, which makes it more difficult decision. 2) The cost of capital decreased? However, the value of the investment means lower costs, lower capital costs improve efficiency and better. Net present value of the low cost of capital and internal rate of return is always a higher value, which means that it produces a better return on the value of investment has always been. F) Explain why the NPV of a relatively long term project is more sensitive to changes in the cost of capital than is the NPV of a short term project? NPV is the most effective way to measure the right investment. As the couples involved in the project net present value and output filter, this is a very desirable long-term decisions. It produces a single result for each year a regular basis, and creates a more clear understanding of what is the country's investment or investment value, especially this year. G) How does a change in the cost of capital affect the project’s IRR? In order to understand the cost of capital, if we look at clarify and change the cost of capital changes and not the net present value internal rate of return. Cost of capital internal rate of return and no direct effect. H) Compare the effectiveness of the NPV method with that of the IRR method? NPV value of a dollar today than the same dollar in the future, is worth adjusted for inflation and taking into account efficiency. If a project idea that you positive NPV, should be upheld. However, if NPV is negative, perhaps the project would be rejected because they would generate negative cash. IRR is the discount rate net present value of all cash flows equal to zero on the project has a special. You expected to generate IRR of the project as the growth rate might think. IRR strong growth better than expected. REFERENCES:- Weston, j of Fred (2001) Financial and accounting non-financial manager: McGraw-Hill Professional, United States, page 220. Dyson (2001), accounting for the fifth edition of non-accounting students. : Person Education Limited, Essex, UK PP - 397 Emmanuel, C. and Harris, E., and and Komakech, the first (2009), management judgments and strategic investment decision-making: Executive Summary Series, No. 4, CIMA. Dyson (2001), accounting for the fifth edition of non-accounting students. : Person Education Limited, Essex, UK PP - 409. Weston, j of Fred (2001), financial and accounting non-financial manager: McGraw-Hill Professional, USA. p269. CT Horngren to bhimani, management and cost accounting, 2004, p. 343 Gotze, the United States and Norchoctt, D. and Schuster, the first (2008), investment appraisal: methods and models, Springer, large Bretain. P13's Classes, strong (1999), accounting and financial management: management accounting, Kogan Co., Ltd., U.S. PP 86-88. Vinten, Gerald (ed.) (2004), to achieve management control, Emerald Group Publishing Limited, United Kingdom. PP 49:
http://www.ukessays.com/essays/finance/short-and-long-term-investment-evaluations-finance-essay.php
CC-MAIN-2014-15
en
refinedweb
The following code has ambiguity, but I can't figure out how to get around it. Am I missing something trivial? Am I going in the wrong direction? Thank you in advance for your time and for any help that you can offer. > data MehQueue = MehQueue > > class MehBase a where new :: IO a > instance MehBase MehQueue where new = return MehQueue > > class (MehBase a) => HasShift a where shift :: a -> IO a > instance HasShift MehQueue where shift a = return a > > main :: IO () > main = do > x <- new > shift x > return () Please note that I intend to extend this example with MehStack, HasPush and HasPop. You can probably guess where I'm going with all this.
http://www.haskell.org/pipermail/haskell-cafe/2007-April/024501.html
CC-MAIN-2014-15
en
refinedweb
I'm trying to update a C program written 20+ years ago. I want to use current compilers and standards. I'm looking at this as a good learning process, beyond reading C++ programming guides and reading code that has no real-world applications. I've already updated all the function headers to current ANSI standard headers. I'm now trying to write a class with functions. The existing code is: My new code is:My new code is:Code:typedef struct xycoord { int x, y; } coord; I know the new code I've written will compile when I put it in a fresh C++ project. However, when I try to compile it in the old program, I get thousands of errors.I know the new code I've written will compile when I put it in a fresh C++ project. However, when I try to compile it in the old program, I get thousands of errors.Code:class coord { public: int x, y; coord(int xin, int yin) { x = xin; y = yin; } coord(); void operator=(coord &rhs); coord operator+(coord &other); bool operator==(coord &other); }; void coord::operator=(coord &rhs) { x = rhs.x; y = rhs.y; } coord coord::operator+(coord &other) { return coord(x + other.x, y + other.y); } bool coord::operator==(coord &other) { return (x == other.x && y == other.y); }; Compiling... makedefs.c c:\...\coord.h(20) : error C2061: syntax error : identifier 'coord' c:\...\coord.h(20) : error C2059: syntax error : ';' c:\...\coord.h(21) : error C2449: found '{' at file scope (missing function header?) c:\...\coord.h(31) : error C2059: syntax error : '}' c:\...\coord.h(39) : error C2061: syntax error : identifier 'coord' c:\...\coord.h(39) : error C2059: syntax error : ';' ... I'm sure I'm missing something fundamental. Please help. I don't know where to look.
http://cboard.cprogramming.com/cplusplus-programming/104006-migrating-c-want-implement-class.html
CC-MAIN-2014-15
en
refinedweb
STYLE(9) Midnight). * * * $FreeBSD: src/share/man/man9/style.9,v 1.123 2007/01/28 20:51:04 joel: /*- * * Long, boring license goes here, but redacted for brevity */ An automatic script collects license information from the tree for all comments that start in the first column with ‘‘/*-’’. ‘‘#if defined(LIBC_SCCS)’’), enclose both in ‘‘#if 0 ... #endif’’ to hide any uncompilable bits and to keep the IDs out of object files. Only add ‘‘From: ’’ in front of foreign VCS IDs if the file is renamed. #if 0 #ifndef lint #endif /* not lint */ #endif #include <sys/cdefs.h> __FBSDID("$FreeBSD: src/share/man/man9/style.9,v 1.123 2007/01/28 20:51:04 joel Exp $"); Leave another blank line before the header files. Kernel include files (i.e. sys/*.h) come first; normally, include <sys/types.h> OR <sys/param.h>, but not both. <sys/types.h> includes <sys/cdefs.h>, and it is okay to depend on that. program go in "pathnames.h" in the local directory. #include <paths.h> Leave another blank line before the user include files. Do not #define or declare names in the implementation namespace except for implementing application interfaces. The names of ‘‘unsafe’’ macros (ones that have side effects), and the names of macros for manifest constants, are all in uppercase. The expansions of expression-like macros are either a single token or have outer parentheses. Put a single tab character between the #define and the macro name. If a macro is an inline expansion of a function, the function name is all in lowercase and the macro has the same name all in uppercase.. } corresponding #if or #ifdef. The comment for #else and #elif should match the inverse of the expression(s) used in the preceding #if and/or #elif statements. In the comments, the subexpression ‘‘defined(FOO)’’ is abbreviated as ‘‘FOO’’. For the purposes of comments, ‘‘#ifndef FOO’’ is treated as ‘‘ */ The project is slowly moving to use the ISO/IEC 9899:1999 (‘‘ISO;. When declaring variables in structures, declare them sorted by use, then by size (largest to smallest), only if it suffices to align at least 90% { }; Use queue(3) macros rather than rolling your own lists, whenever possible. Thus, the previous example would be better written: #include <sys/queue.h> struct foo { };. When convention requires a typedef, make its name match the struct tag. Avoid typedefs ending in ‘‘_t’’, except as specified in Standard C or by POSIX. /* Make the structure name match the typedef. */ } BAR; All functions are prototyped somewhere. Function prototypes for private functions (i.e., functions not used. In general code can be considered ‘‘new code’’ when it makes up about 50% or more of the file(s) involved. This is enough to break precedents in the existing code and use the current style guidelines. The kernel has a name associated with parameter types, e.g., in the kernel use: In header files visible to userland applications, prototypes that are visible must use either ‘‘protected’’ names (ones beginning with an underscore) or no names with the types. It is preferable to use protected names. E.g., use: or: Prototypes may have an extra space after a tab to enable function names to line up: /* * All major routines should have a comment briefly describing what * they do. The comment before the "main" routine should describe * what the program does. */ int main(int argc, char *argv[]) { comment. Space after keywords (if, while, for, return, switch). No braces (‘{’ and ‘}’) are used for control statements with zero or only a single statement unless that statement is more than a single line in which case they are permitted. Forever loops are done with for’s, not while’s. Parts of a for loop may be left empty. Do not put declarations inside blocks unless the routine is unusually complicated. Indentation is an 8 character tab. Second level indents are four spaces. If you have to wrap a long statement, put the operator at the end of the line.. No spaces after function names. Commas have a space after them. No spaces after ‘(’ or ‘[’ or preceding ‘]’ or ‘)’ characters. Unary operators do not require spaces, binary operators do. Do not use parentheses unless they are required for precedence or unless the statement is confusing without them. Remember that other people may confuse easier than you. Do YOU understand the following? Exits should be 0 on success, or according to the predefined values in sysexits(3). } variables in the declarations. Use this feature only thoughtfully. DO NOT use function calls in initializers. struct foo one, *two; Do not declare functions inside other functions; ANSI C says that such declarations have file scope regardless of the nesting of the declaration. necessary. Values in return statements should be enclosed in parentheses. Use err(3) or warn(3), do not roll your own. } Old-style function declarations look like this: static char * function(a1, a2, fl, a4) { Use ANSI function declarations unless you explicitly need K&R compatibility. Long parameter lists are wrapped with a normal four space indent. Variable numbers of arguments should look like this: #include <stdarg.h> void vaf(const char *fmt, ...) { } static void usage() { (‘[’ and ‘]’). (‘|’) separates ‘‘either-or’’ options/arguments, and multiple options/arguments which are specified together are placed in a single set of brackets. "usage: f [-aDde] [-b b_arg] [-m m_arg] req1 req2 [opt1 [opt2]]\n" "usage: f [-a | -b] [-c [-dEe] [-n number]]\n" } −Wall) and produce minimal warnings. SEE ALSO indent(1), lint(1), err(3), sysexits(3), warn(3), style.Makefile(5) HISTORY This manual page is largely based on the src/admin/style/style file from the 4.4BSD−Lite2 release, with occasional updates to reflect the current practice and desire of the FreeBSD project. MidnightBSD 0.3 February 10, 2005 MidnightBSD 0.3
http://www.midnightbsd.org/documentation/man/style.9.html
CC-MAIN-2014-15
en
refinedweb
Hello B2B Gurus, I am able to process B2B inbound files successfully from Trading Partner --> B2B --> BPEL. When it comes to BPEL i am not able to parse/transform the received XML as i am getting selection failures in assign and empty nodes in transformation. When i look at the input XML payload which i received in ReceiveB2BConsume Payload i observed that i am getting namespace as " xmlns="NS_495C37A0921C418BB66A86A6E75B2CA120070312140549" instead of actual namespace xmlns="urn:oracle:b2b:X12/V4010/856" which is in my XSD as well and i am getting the XML start tag <?xml version="1.0" encoding="UTF-8" ?> 2 times. : <?xml version="1.0" encoding="UTF-8" ?> <?xml version="1.0" encoding="UTF-8" ?> <Transaction-856 <Internal-Properties> ........ ... </Transaction-856> I went back and checked the XSD which i loaded in the B2B Console and i am having the following namespace "<xsd:schema" I am not sure why the XML translated from EDI in B2B console has the different namespace and XML start tag 2 times. Can you please help me resolve the issue. Let me know if i am missing anything. Thanks in Advance.. Another solution is to change the namespace in the ecs file. This can be done in the B2B document editor when you generate the XSD file. This how we solved this problem. Regards Erwin
https://community.oracle.com/message/11243357
CC-MAIN-2014-15
en
refinedweb
Code. Collaborate. Organize. No Limits. Try it Today. System.Environment.Is64BitOperatingSystem System.Environment.Is64BitProcess This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) bool flag; return ((Win32Native.DoesWin32MethodExist("kernel32.dll", "IsWow64Process") && Win32Native.IsWow64Process(Win32Native.GetCurrentProcess(), out flag)) && flag); General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. C# 6: First reactions
http://www.codeproject.com/Tips/107978/32-Bit-or-64-bit-OS?fid=1697262&df=90&mpp=10&sort=Position&spc=None&tid=4197582&PageFlow=FixedWidth
CC-MAIN-2014-15
en
refinedweb
Java EE 7 First Look — Save 50% Discover the new features of Java EE 7 and learn to put them together to build a large-scale application with this book and ebook. (For more resources related to this topic, see here.) Enterprise JavaBeans 3.2 The Enterprise JavaBeans 3.2 Specification was developed under JSR 345. This section just gives you an overview of improvements in the API. The complete document specification (for more information) can be downloaded from. The businesslayer of an application is the part of the application that islocated between the presentationlayer and data accesslayer. The following diagram presents a simplified Java EE architecture. As you can see, the businesslayer acts as a bridge between the data access and the presentationlayer. It implements businesslogic of the application. To do so, it can use some specifications such as Bean Validation for data validation, CDifor context and dependency injection, interceptors to intercept processing, and so on. As thislayer can belocated anywhere in the network and is expected to serve more than one user, it needs a minimum of non functional services such as security, transaction, concurrency, and remote access management. With EJBs, the Java EE platform provides to developers the possibility to implement thislayer without worrying about different non functional services that are necessarily required. In general, this specification does not initiate any new major feature. It continues the work started by thelast version, making optional the implementation of certain features that became obsolete and adds slight modification to others. Pruning some features After the pruning process introduced by Java EE 6 from the perspective of removing obsolete features, support for some features has been made optional in Java EE 7 platform, and their description was moved to another document called EJB 3.2 Optional Features for Evaluation. The features involved in this movement are: EJB 2.1 and earlier Entity Bean Component Contract for Container-Managed Persistence EJB 2.1 and earlier Entity Bean Component Contract for Bean-Managed Persistence Client View of EJB 2.1 and earlier Entity Bean EJB QL: Querylanguage for Container-Managed Persistence Query Methods JAX-RPC-based Web Service Endpoints JAX-RPC Web Service Client View The latest improvements in EJB 3.2 For those who have had to use EJB 3.0 and EJB 3.1, you will notice that EJB 3.2 has brought, in fact, only minor changes to the specification. However, some improvements cannot be overlooked since they improve the testability of applications, simplify the development of session beans or Message-Driven Beans, and improve control over the management of the transaction and passivation of stateful beans. Session bean enhancement A session bean is a type of EJB that allows us to implement businesslogic accessible tolocal, remote, or Web Service Client View. There are three types of session beans: stateless for processing without states, stateful for processes that require the preservation of states between different calls of methods, and singleton for sharing a single instance of an object between different clients. The following code shows an example of a stateless session bean to save an entity in the database: @Stateless public class ExampleOfSessionBean { @PersistenceContext EntityManager em; public void persistEntity(Object entity){ em.persist(entity); }} Talking about improvements of session beans, we first note two changes in stateful session beans: the ability to executelife-cycle callback interceptor methods in a user-defined transaction context and the ability to manually disable passivation of stateful session beans. It is possible to define a process that must be executed according to thelifecycle of an EJB bean (post-construct, pre-destroy). Due to the @TransactionAttribute annotation, you can perform processes related to the database during these phases and control how they impact your system. The following code retrieves an entity after being initialized and ensures that all changes made to the persistence context are sent to the database at the time of destruction of the bean. As you can see in the following code, TransactionAttributeType of init() method is NOT_SUPPORTED; this means that the retrieved entity will not be included in the persistence context and any changes made to it will not be saved in the database: @Stateful public class StatefulBeanNewFeatures { @PersistenceContext(type= PersistenceContextType.EXTENDED) EntityManager em; @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED) @PostConstruct public void init(){ entity = em.find(...); } @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) @PreDestroy public void destroy(){ em.flush(); } } The following code demonstrates how to control passivation of the stateful bean. Usually, the session beans are removed from memory to be stored on the disk after a certain time of inactivity. This process requires data to be serialized, but during serialization all transient variables are skipped and restored to the default value of their data type, which is null for object, zero for int, and so on. To prevent theloss of this type of data, you can simply disable the passivation of stateful session beans by passing the false value to the passivationCapable attribute of the @Stateful annotation. @Stateful(passivationCapable = false) public class StatefulBeanNewFeatures { //... } For the sake of simplicity, EJB 3.2 has relaxed the rules to define the defaultlocal or remote business interface of a session bean. The following code shows how a simple interface can be considered aslocal or remote depending on the case: //In this example, yellow and green are local interfaces public interface yellow { ... } public interface green { ... } @Stateless public class Color implements yellow, green { ... } //In this example, yellow and green are local interfaces public interface yellow { ... } public interface green { ... } @Local @Stateless public class Color implements yellow, green { ... } //In this example, yellow and green are remote interfaces public interface yellow { ... } public interface green { ... } @Remote @Stateless public class Color implements yellow, green { ... } //In this example, only the yellow interface is exposed as a remote interface @Remote public interface yellow { ... } public interface green { ... } @Stateless public class Color implements yellow, green { ... } //In this example, only the yellow interface is exposed as a remote interface public interface yellow { ... } public interface green { ... } @Remote(yellow.class) @Stateless public class Color implements yellow, green { ... } EJBlite improvements Before EJB 3.1, the implementation of a Java EE application required the use of a full Java EE server with more than twenty specifications. This could be heavy enough for applications that only need some specification (as if you were asked to take a hammer to kill a fl y). To adapt Java EE to this situation, JCP (Java Community Process) introduced the concept of profile and EJBlite. Specifically, EJBlite is a subset of EJBs, grouping essential capabilities forlocal transactional and secured processing. With this concept, it has become possible to make unit tests of an EJB application without using the Java EE server and it is also possible to use EJBs in web applications or Java SE effectively. In addition to the features already present in EJB 3.1, the EJB 3.2 Specification has added support forlocal asynchronous session bean invocations and non persistent EJB Timer Service. This enriches the embeddable EJBContainer, web profiles, and augments the number of testable features in an embeddable EJBContainer. The following code shows an EJB packaged in a WAR archive that contains two methods. The asynchronousMethod() is an asynchronous method that allows you to compare the time gap between the end of a method call on the client side and the end of execution of the method on the server side. The nonPersistentEJBTimerService() method demonstrates how to define a non persistent EJB Timer Service that will be executed every minute while the hour is one o'clock: @Stateless public class EjbLiteSessionBean { @Asynchronous public void asynchronousMethod(){ try{ System.out.println("EjbLiteSessionBean - start : "+new Date()); Thread.sleep(1000*10); System.out.println("EjbLiteSessionBean - end : "+new Date()); }catch(Exception ex){ ex.printStackTrace(); } } @Schedule(persistent = false, minute = "*", hour = "1") public void nonPersistentEJBTimerService(){ System.out.println("nonPersistentEJBTimerService method executed"); } } Changes made to the TimerService API The EJB 3.2 Specification enhanced the TimerService APiwith a new method called getAllTimers(). This method gives you the ability to access all active timers in an EJB module. The following code demonstrates how to create different types of timers, access their information, and cancel them; it makes use of the getAllTimers() method: @Stateless public class ChangesInTimerAPI implements ChangesInTimerAPILocal { @Resource TimerService timerService; public void createTimer(){ //create a programmatic timer long initialDuration = 1000*5; long intervalDuration = 1000*60; String timerInfo = "PROGRAMMATIC TIMER"; timerService.createTimer(initialDuration, intervalDuration, timerInfo); } @Timeout public void timerMethodForProgrammaticTimer(){ System.out.println("ChangesInTimerAPI - programmatic timer : "+new Date()); } @Schedule(info = "AUTOMATIC TIMER", hour = "*", minute = "*") public void automaticTimer(){ System.out.println("ChangesInTimerAPI - automatic timer : "+new Date()); } public void getListOfAllTimers(){ Collection alltimers = timerService.getAllTimers(); for(Timer timer : alltimers){ System.out.println("The next time out : "+timer. getNextTimeout()+", " + " timer info : "+timer.getInfo()); timer.cancel(); } } } In addition to this method, the specification has removed the restrictions that required the use of javax.ejb.Timer and javax.ejb.TimerHandlereferences only inside a bean. Harmonizing with JMS's novelties A Message-Driven Bean (MDB) is a kind of a JMS Messagelistener allowing Java EE applications to process messages asynchronously. To define such a bean, simply decorate a simple POJO class with @MessageDriven annotation and make it implement the javax.jms.MessageListener interface. This interface makes available to the MDB the onMessage method that will be called each time a new message is posted in the queue associated with the bean. That's why you have to put inside this method the businesslogic for the processing of incoming messages. The following code gives an example of an MDB that notifies you when a new message arrives by writing in the console: @MessageDriven(activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/messageQueue") }) public class MessageBeanExample implements MessageListener { public MessageBeanExample() { } @Override public void onMessage(Message message) { try{ System.out.println("You have received a new message of type : "+message.getJMSType()); }catch(Exception ex){ ex.printStackTrace(); } } } Given the changes in JMS 2.0 Specification, the EJB 3.2 Specification has a revisedlist of JMS MDB activation properties to conform to thelist of standard properties. These properties are: destinationLookup, connectionFactoryLookup, clientId, subscriptionName, and shareSubscriptions. In addition, it has added the ability in MDB to implement a no-method messagelistener, resulting in the exposure of all public methods of the bean as messagelistener methods. Other improvements As we said earlier, the EJB 3.1 Specification has given developers the opportunity to test EJB applications outside a full Java EE server. This was made possible through an embeddable EJBContainer. The following example demonstrates how to test an EJB using an embeddable EJBContainer: @Test public void testAddition(){ Map<String, Object> properties = new HashMap<String, Object>(); properties.put(EJBContainer.APP_NAME, "chapter05EmbeddableEJBContainer"); properties.put(EJBContainer.MODULES, new File("target\\classes")); EJBContainer container = javax.ejb.embeddable.EJBContainer. createEJBContainer(properties); try { NewSessionBean bean = (NewSessionBean) container.getContext(). lookup("java:global/chapter05EmbeddableEJBContainer/NewSessionBean"); int restult = bean.addition(10, 10); Assert.assertEquals(20, restult); } catch (NamingException ex) { Logger.getLogger(AppTest.class.getName()).log(Level.FINEST, null, ex); } finally { container.close(); } } Since the embeddable EJBContainer reference by maven was not up-to-date while writing this book (which caused the error "No EJBContainer provider available"), idirectly addressed the glassfish-embedded-static-shell.jar file in the following way: Maven variable declaration: <properties> <glassfish.installed.embedded.container>glassfish_dir\lib\ embedded\glassfish-embedded-static-shell.jar</glassfish.installed. embedded.container> </properties> Declaration of dependence: <dependency> <groupId>glassfish-embedded-static-shell</groupId> <artifactId>glassfish-embedded-static-shell</artifactId> <version>3.2</version> <scope>system</scope> <systemPath>${glassfish.installed.embedded.container}</ systemPath> </dependency> During operation, the embeddable EJBContainer acquires resources that would normally be released at the end of the process to allow other applications to take advantage of the maximum power of the machine. In the previous version of the specification, a developer used the EJBContainer.close() method in a finally block to perform this task. But, with the try-with-resources statement introduced in Java SE 7, EJB 3.2 added the implementation of the java.lang.AutoCloseable interface in the EJBContainer class to free the developer from a task that could easily be forgotten and have negative repercussions on the performance of a machine. Now, the embeddable EJBContainer will be automatically closed at the end of a statement, provided that it is declared as a resource in a try-with-resources statement. Thus, we nolonger need a finally blocklike in the earlier example, which simplifies the code. The following example demonstrates how to take advantage of the try-with-resources statement while testing EJB with an embeddable EJBContainer: @Test public void testAddition(){ //... try(EJBContainer container = javax.ejb.embeddable.EJBContainer.cre ateEJBContainer(properties);) { //... } catch (NamingException ex) { Logger.getLogger(AppTest.class.getName()).log(Level.FINEST, null, ex); } } The final improvement of this specification concerns removal of the restriction on obtaining the current classloader when you want to access files or directories in the file system from a bean. Putting it all together The example that will allow us to put together most of the APIs already studied is an online preregistration site. In this example, we will not write any code. We limit ourselves to the presentation of an analysis of a problem that will help you understand how to use each of the pieces of code that are used to illustrate points in this book, in order to make a quality application based on the latest functionality of Java EE 7. Presenting the project The virtual enterprise software technology has received from a private university the order for creating an application to manage the preregistration of students online (candidate registration, validation of applications, and notifications of different candidates) and provide a real-time chat room for connected students. Furthermore, for statistical purposes, the system will allow the ministry of education access to certain information from a heterogeneous application. The system called ONPRINS must be robust, efficient, and available 24 x 7 during periods of registration. The business domain model in the following diagram represents the main objects of our system (the required application will be built based on these objects): Disclaimer These diagrams have been designed and built in Enterprise Architect, by Sparx Systems. Use Case Diagram (UCD) The following diagram represents all the features that will be supported by our system. We have three actors as follows: A Candidate is any user wishing to preregister for a department. To this end, it has the ability to view thelist of departments, select a department, and complete and submit the application form. Through a chat room, he/she can share his/her ideas with all candidates connected with respect to a given theme. An Administrator is a special user who has the right to run the validation process of preregistration. It is this process that creates the students and sends e-mails to different candidates tolet them know whether they have been selected or not. The Ministry of Education is a secondary actor of the system; it seeks access to the number of preregistered students and thelist of students during an academic year. Class diagram The following class diagram shows all the main classes used for the realization of our online preregistration. This diagram also highlights the relationships that exist between different classes. The CandidateSessionBean class is a bean that records the preregistration of candidates through registerCandidate method. It also provides methods for accessing all the registered candidates (listOfCandidates) and preregistered students (listOfStudents). The InscriptionValidationBean class contains the startValidationBatchJob method which, as its name suggests,launches batch processing to validate the preregistration and notify different candidates. Batch processing presented here is the chunk type in which the ValidationReader class is used to read the data useful for validation, the ValidationProcessor class is used to validate the preregistration, and the ValidationWriter class is used to notify the candidate. This class also serves to create a student when the candidate is selected. As you can see, in order to send an e-mail, the ValidationWriter class firstly sends a JMS message through MsgSenderSessionBean to the component responsible for sending the e-mail. This allows us to avoid blockages in ValidationWriter when there is a connection breakdown. Also, in the batch process, we have thelistener ValidationJobListener, which enables us to record a certain amount of information in the validation table at the end of batch processing. For the sake of simplicity and reusability, navigation between web pages during the preregistration of a candidate (departmentList.xhtml, acceptanceConditions. xhtml, identificationInformation.xhtml, contactInformation. xhtml, medicalInformation.xhtml, schoolInformation.xhtml, and InformationValidation.xhtml) will be made using the Faces Flow. On the other hand, the content of various pages will be structured with the Resourcelibrary Contracts and communication in the chat room will be managed using WebSocket; it is for this reason that you have the ChatServerEndPoint class, which is the server endpoint for this communication. The execution of the validation process of preregistration is made from the inscriptionValidation.xhtml facelet. In order to give the administrator a feedback on the progress of the validation process, the facelet will contain a progress bar updated in real time, whichleads us once again to use the WebSocket protocol. Component diagram The following diagram shows the various components that constitute our system. As you can see, the exchange of data between the application of the ministry and ONPRINS will be through web services, which aims to make both systems completely independent from one another, while our system uses a connector to have access to user information stored on the ERP system of the university. Summary As promised, in this article we presented the innovations introduced by EJBs, and then focused on the analysis and design of an application for online preregistration. In this exercise, we were able to look at practical cases allowing us to use almost all of the concepts already discussed (WebSocket and Faces Flow) and discover new concepts (web service, connector, and Java e-mail). Resources for Article: Further resources on this subject: - Developing Secure Java EE Applications in GlassFish [Article] - Service Oriented Java Business Integration - What's & Why's [Article] - Debugging Java Programs using JDB [Article] About the Author : NDJOBO Armel Fabrice NDJ:). Books From Packt Post new comment
https://www.packtpub.com/article/business-layer-java-ee-7-first-look
CC-MAIN-2014-15
en
refinedweb
Holy.. In my mailbox yesterday arrived a few complimentary copies of the Japanese translation of my book. (Note: Still available in English.) I cracked the Japanese edition open and confirmed what I already knew: I can't read a lick of Japanese. But now I know how you all feel when you read this Web site: It's page after page of text that makes no sense whatsoever.. If you need variables to be aligned a particular way, you need to ask for it.? If you need a particular alignment, you have to ask for it. By default, all you can count on is that variables are aligned according to their natural requirements. First, of course, there is no guarantee that local variables even reside on the stack. The optimizer may very well decide that particular local variables can reside in registers, in which case it has no alignment at all! There are a few ways to force a particular alignment. The one that fits the C language standard is to use a union: union char_with_int_alignment { char ch; int Alignment; } u; Given this union, you can say u.ch to obtain a character whose alignment is suitable for an integer. u.ch The Visual C++ compiler supports a declaration specifier to override the default alignment of a variable. typedef struct __declspec(align(16)) _M128 { unsigned __int64 Low; __int64 High; } M128, *PM128; This structure consists of two eight-byte members. Without the __declspec(align(#)) directive, the alignment of this structure would be 8-byte, since that is the alignment of the members with the most restrictive alignment. (Both unsigned __int64 and __int64 are naturally 8-byte-aligned.) But with the directive, the aligment is expanded to 16 bytes, which is more restrictive than what the structure normally would be. This particular structure is declared with more restrictive alignment because it is intended to be use to hold 128-bit values that will be used by the 128-bit XMM registers. __declspec(align(#)) unsigned __int64 __int64 A third way to force alignment with the Visual C++ compiler is to use the #pragma pack(#) directive. (There is also a "push" variation of this pragma which remembers the previous ambient alignment, which can be restored by a "pop" directive. And the /Zp# directive allows you to specify this pragma from the compiler command line.) This directive specifies that members can be placed at alignments suitable for #-byte objects rather than their natural alignment requirements, if the natural alignment is more restrictive. For example, if you set the pack alignment to 2, then all objects that are bigger than two bytes will be aligned as if they were two-byte objects. This can cause 32-bit values and 64-bit values to become mis-aligned; it is assumed that you know what you're doing any can compensate accordingly. #pragma pack(#) /Zp# # For example, consider this structure whose natural alignment has been altered: #pragma pack(1) struct misaligned_members { WORD w; DWORD dw; BYTE b; }; Given this structure, you cannot pass the address of the dw member to a function that expects a pointer to a DWORD, since the ground rules for programming specify that all pointers must be aligned unless unaligned pointers are explicitly permitted. dw DWORD void ExpectsAlignedPointer(DWORD *pdw); void UnalignedPointerOkay(UNALIGNED DWORD *pdw); misaligned_members s; ExpectsAlignedPointer(&s.dw); // wrong UnalignedPointerOkay(&s.dw); // okay What about the member w? Is it aligned or not? Well, it depends. w If you allocate a single structure on the heap, then the w member is aligned, since heap allocations are always aligned in a manner suitable for any fundamental data type. (I vaguely recall some possible weirdness with 10-byte floating point values, but that's not relevant to the topic at hand.) misaligned_members *p = (misaligned_members) HeapAllocate(hheap, 0, sizeof(misaligned_members)); Given this code fragment, the member p->w is aligned since the entire structure is suitably aligned, and therefore so too is w. If you allocated an array, however, things are different. p->w misaligned_members *p = (misaligned_members) HeapAllocate(hheap, 0, 2*sizeof(misaligned_members)); In this code fragment, p[1].w is not aligned because the entire misaligned_members structure is 2+4+1=7 bytes in size since the packing is set to 1. Therefore, the second structure begins at an unaligned offset relative to the start of the array. p[1].w misaligned_members One final issue is the expectations for alignment when using header files provided by an outside component. If you are writing a header file that will be consumed by others, and you require special alignment, you need to say so explicitly in your header file, because you don't control the code that will be including your header file. Furthermore, if your header file changes any compiler settings, you need to restore them before your header file is complete. If you don't follow this rule, then you create the situation where a program stops working if a program changes the order in which it includes seemingly-unrelated header files. // this code works #include <foo.h> #include <bar.h> // this code doesn't #include <bar.h> #include <foo.h> The problem was that bar.h changed the default structure alignment and failed to return it to the original value before it was over. As a result, in the second case, the structure alignment for the foo.h header file got "infected" and no longer matched the structure alignment used by the foo library. bar.h foo.h foo You can imagine an analogous scenario where deleting a header file can cause a program to stop working. Therefore, if you're writing a header file that will be used by others, and you require nonstandard alignment for your structures, you should use this pattern to change the default alignment: #include <pshpack1.h> // change alignment to 1 ... stuff that assumes byte packing ... #include <poppack.h> // return to original alignment In this way, you "leave things the way you found them" and avoid the mysterious infection scenarios described above...
https://blogs.msdn.com/b/oldnewthing/archive/2007/12.aspx?Redirected=true&PageIndex=1
CC-MAIN-2014-15
en
refinedweb
Hi. I have made a custom component in flash and converted it to a flex component. The component loads and works in the mxml file. However what I really want is to use the SWC in an actionscript class file. I am having a little trouble doing this. When I import the SWC what I find is the SWC name followed by "_fla" (someComponent_fla). This seems to be the SWC since I can see the instances used in the component. The question I have is? How do I get a instance of this component. The example code shown below does not work. import SomeComponent_fla; .... var someComponent:SomeComponent_fla = new SomeComponent_fla(); .... Please, where am i going wrong?
http://forums.adobe.com/thread/1300673
CC-MAIN-2014-15
en
refinedweb
/* * > extern int __kill(pid_t pid, int sig, int posix); /* * kill stub, which wraps a modified kill system call that takes a posix * behaviour indicator as the third parameter to indicate whether or not * conformance to standards is needed. We use a trailing parameter in * case the call is called directly via syscall(), since for most uses, * it won't matter to the caller. */ int kill(pid_t pid, int sig) { #if __DARWIN_UNIX03 return(__kill(pid, sig, 1)); #else /* !__DARWIN_UNIX03 */ return(__kill(pid, sig, 0)); #endif /* !__DARWIN_UNIX03 */ }
http://opensource.apple.com/source/Libc/Libc-594.9.4/sys/kill.c
CC-MAIN-2014-15
en
refinedweb
Bouncing Ball Java: Example - Bouncing Ball This program does a simple animation. Animation is done by creating a timer which calls an ActionListener at fixed... 22 23 24 25 26 27 // File: animation/bb/BBDemo.java // Description Animating Images in Java Application Animating Images in Java Application This section shows you how to create an animation with multiple images. You can see how animation has been implemented in the following Java Programming: Chapter 7 Quiz on material from Chapter 7 of this on-line Java textbook. You should be able...? Why are they important? What does this have to do with animation... is the function of a LayoutManager in Java? Question 5: What does it mean to use Java Sleep Thread Java Thread sleep() is a static method. It sleeps the thread for the given time in milliseconds. It is used to delay the thread. It is used in Applet or GUI programming for animation Java Sleep Thread Example public class Java Programming: Section 7.6 Section 7.6 Timers, Animation, and Threads JAVA IS A MULTI... a separate thread to run the animation. Before the introduction of the Swing GUI, a Java... be running at the same time. To say that Java is a multi-threaded language means Java Programming: Solution to Programming Exercise POSSIBLE SOLUTION to the following exercise from this on-line Java textbook... doesn't look too good in many versions of Java.) The applet... the rightmost column is left+width-1.) The animation loops through the same Java Shapes Bouncing App Java Shapes Bouncing App hi guys, im trying to implement the following into my java app code: Here's my code: first ill post the main class, then the animation panel class, then moving shape and moving rectangle class Swings - Java Beginners ("java-swing-tutorial.JPG","My Website"); If the above is not correcet.. Can...("Animation Frame"); th = new Thread(); lbl = new JLabel(); Panel...){} } } For more information on Swing visit to : Post your Comment
http://roseindia.net/discussion/18831-Line-Animation-in-Java.html
CC-MAIN-2014-15
en
refinedweb
Modularity, Composition and Hierarchy provide a uniform model of stream processing graphs, which allows flexible composition of reusable components. In this chapter we show how these look like from the conceptual and API perspective, demonstrating the modularity aspects of the library. Basics of composition and modularity Every operator used in Akka Streams can be imagined as a “box” with input and output ports where elements to be processed arrive and leave the operator. In this view, a Source is nothing else than a “box” with a single output port, or, a BidiFlow is a “box” with exactly two input and two output ports. In the figure below we illustrate the most commonly used operators viewed as “boxes”. The linear operators are Source, Sink and Flow, as these can be used to compose strict chains of operators. Fan-in and Fan-out operators have usually multiple input or multiple output ports, therefore they allow to build more complex graph layouts, not only chains. BidiFlow operators operators, that contain various other type of operators. Please note that when combining a Flow using that method, the termination signals are not carried “through” as the Sink and Source are assumed to be fully independent. If however you want to construct a Flow like this but need the termination events to trigger “the other side” of the composite flow, you can use Flow.fromSinkAndSourceCoupled or Flow.fromSinkAndSourceCoupledMat which does just that. For example the cancelation of the composite flows source-side will then lead to completion of its sink-side. Read Flow Flow’s API documentation for a detailed explanation how this works.: - Scala Source.single(0).map(_ + 1).filter(_ != 0).map(_ - 2).to(Sink.fold(0)(_ + _)) // ... where is the nesting? - Java a shorthand for adding a name attribute). The following code demonstrates how to achieve the desired nesting: - Scala val nestedSource = Source .single(0) // An atomic source .map(_ + 1) // an atomic processing stage .named("nestedSource") // wraps up the current Source and gives it a name val nestedFlow = Flow[Int] .filter(_ != 0) // an atomic processing stage .map(_ - 2) // another atomic processing stage .named("nestedFlow") // wraps up the Flow, and gives it a name val nestedSink = nestedFlow .to(Sink.fold(0)(_ + _)) // wire an atomic sink to the nestedFlow .named("nestedSink") // wrap it up // Create a RunnableGraph val runnableGraph = nestedSource.to(nestedSink) - Java. - Scala // Create a RunnableGraph from our components val runnableGraph = nestedSource.to(nestedSink) // Usage is uniform, no matter if modules are composite or atomic val runnableGraph2 = Source.single(0).to(Sink.fold(0)(_ + _)) - Java // operators. operators, directed and non-directed cycles. The runnable() method of the GraphDSL object allows the creation of a general, closed, and runnable graph. For example the network on the diagram can be realized like this: - Scala import GraphDSL.Implicits._ RunnableGraph.fromGraph(GraphDSL.create() { implicit builder => val A: Outlet[Int] = builder.add(Source.single(0)).out val B: UniformFanOutShape[Int, Int] = builder.add(Broadcast[Int](2)) val C: UniformFanInShape[Int, Int] = builder.add(Merge[Int](2)) val D: FlowShape[Int, Int] = builder.add(Flow[Int].map(_ + 1)) val E: UniformFanOutShape[Int, Int] = builder.add(Balance[Int](2)) val F: UniformFanInShape[Int, Int] = builder.add(Merge[Int](2)) val G: Inlet[Any] = builder.add(Sink.foreach(println)).in C <~ F A ~> B ~> C ~> F B ~> D ~> E ~> F E ~> G ClosedShape }) - Java) and we imported Source s, Sink s and Flow s explicitly. It is possible to refer to the ports explicitly, and it is not necessary to import our linear operators via add(), so another version might look like this: - Scala import GraphDSL.Implicits._ RunnableGraph.fromGraph(GraphDSL.create() { implicit builder => val B = builder.add(Broadcast[Int](2)) val C = builder.add(Merge[Int](2)) val E = builder.add(Balance[Int](2)) val F = builder.add(Merge[Int](2)) Source.single(0) ~> B.in; B.out(0) ~> C.in(1); C.out ~> F.in(0) C.in(0) <~ F.out B.out(1).map(_ + 1) ~> E.in; E.out(0) ~> F.in(1) E.out(1) ~> Sink.foreach(println) ClosedShape }) - Java() factory method on GraphDSL. If we remove the sources and sinks from the previous example, what remains is a partial graph: We can recreate a similar graph in code, using the DSL in a similar way than before: - Scala import GraphDSL.Implicits._ val partial = GraphDSL.create() { implicit builder => val B = builder.add(Broadcast[Int](2)) val C = builder.add(Merge[Int](2)) val E = builder.add(Balance[Int](2)) val F = builder.add(Merge[Int](2)) C <~ F B ~> C ~> F B ~> Flow[Int].map(_ + 1) ~> E ~> F FlowShape(B.in, E.out(1)) }.named("partial") - Java operators : - Scala Source.single(0).via(partial).to(Sink.ignore) - Java Source.single(0).via(partial).to(Sink.ignore()); It is not possible to use it as a Flow yet, though (i.e. we cannot call .filter() on it), but Flow has a fromGraph() method that adds the DSL to a FlowShape. There are similar methods on Source, Sink and BidiShape, so it is easy to get back to the simpler DSL if an operator has the right shape. For convenience, it is also possible to skip the partial graph creation, and use one of the convenience creator methods. To demonstrate this, we will create the following graph: The code version of the above closed graph might look like this: - Scala // Convert the partial graph of FlowShape to a Flow to get // access to the fluid DSL (for example to be able to call .filter()) val flow = Flow.fromGraph(partial) // Simple way to create a graph backed Source val source = Source.fromGraph( GraphDSL.create() { implicit builder => val merge = builder.add(Merge[Int](2)) Source.single(0) ~> merge Source(List(2, 3, 4)) ~> merge // Exposing exactly one output port SourceShape(merge.out) }) // Building a Sink with a nested Flow, using the fluid DSL val sink = { val nestedFlow = Flow[Int].map(_ * 2).drop(10).named("nestedFlow") nestedFlow.to(Sink.head) } // Putting all together val closed = source.via(flow.filter(_ > 1)).to(sink) - Java //); All graph builder sections check if the resulting graph has all ports connected except the exposed ones and will throw an exception if this is violated. We are still in debt of demonstrating that RunnableGraph is a component like any other, which can be embedded in graphs. In the following snippet we embed one closed graph in another: - Scala val closed1 = Source.single(0).to(Sink.foreach(println)) val closed2 = RunnableGraph.fromGraph(GraphDSL.create() { implicit builder => val embeddedClosed: ClosedShape = builder.add(closed1) // … embeddedClosed }) - Java operators might provide a materialized value, so when we compose multiple operators Promise[[Option[Int]] CompletableFuture<Optional<Integer>>>. By using the combiner function Keep.left, the resulting materialized type is of the nested module (indicated by the color red on the diagram): - Scala // Materializes to Promise[Option[Int]] (red) val source: Source[Int, Promise[Option[Int]]] = Source.maybe[Int] // Materializes to NotUsed (black) val flow1: Flow[Int, Int, NotUsed] = Flow[Int].take(100) // Materializes to Promise[Int] (red) val nestedSource: Source[Int, Promise[Option[Int]]] = source.viaMat(flow1)(Keep.left).named("nestedSource") - Java // Future[OutgoingConnection] CompletionStage<OutgoingConnection>, and we propagate this to the parent by using Keep.right as the combiner function (indicated by the color yellow on the diagram): - Scala // Materializes to NotUsed (orange) val flow2: Flow[Int, ByteString, NotUsed] = Flow[Int].map { i => ByteString(i.toString) } // Materializes to Future[OutgoingConnection] (yellow) val flow3: Flow[ByteString, ByteString, Future[OutgoingConnection]] = Tcp().outgoingConnection("localhost", 8080) // Materializes to Future[OutgoingConnection] (yellow) val nestedFlow: Flow[Int, ByteString, Future[OutgoingConnection]] = flow2.viaMat(flow3)(Keep.right).named("nestedFlow") - Java //) - Scala // Materializes to Future[String] (green) val sink: Sink[ByteString, Future[String]] = Sink.fold("")(_ + _.utf8String) // Materializes to (Future[OutgoingConnection], Future[String]) (blue) val nestedSink: Sink[Int, (Future[OutgoingConnection], Future[String])] = nestedFlow.toMat(sink)(Keep.both) - Java // ignores the Future[String] CompletionStage<String> part, and wraps the other two values in a custom case class MyClass (indicated by color purple on the diagram): - Scala case class MyClass(private val p: Promise[Option[Int]], conn: OutgoingConnection) { def close() = p.trySuccess(None) } def f(p: Promise[Option[Int]], rest: (Future[OutgoingConnection], Future[String])): Future[MyClass] = { val connFuture = rest._1 connFuture.map(MyClass(p, _)) } // Materializes to Future[MyClass] (purple) val runnableGraph: RunnableGraph[Future[MyClass]] = nestedSource.toMat(nestedSink)(f) - Java); The nested structure in the above example is not necessary for combining the materialized values, it demonstrates how the two features work together. See Combining materialized values operators can be controlled via attributes (see Buffers for asynchronous operators). When it comes to hierarchic composition, attributes are inherited by nested modules, unless they override them with a custom value. The code below, a modification of an earlier example sets the inputBuffer attribute on certain modules, but not on others: - Scala import Attributes._ val nestedSource = Source.single(0).map(_ + 1).named("nestedSource") // Wrap, no inputBuffer set val nestedFlow = Flow[Int] .filter(_ != 0) .via(Flow[Int].map(_ - 2).withAttributes(inputBuffer(4, 4))) // override .named("nestedFlow") // Wrap, no inputBuffer set val nestedSink = nestedFlow .to(Sink.fold(0)(_ + _)) // wire an atomic sink to the nestedFlow .withAttributes(name("nestedSink") and inputBuffer(3, 3)) // override - Java operator which has again an explicitly provided attribute overriding the inherited one. This diagram illustrates the inheritance process for the example code (representing the materializer default attributes as the color red, the attributes set on nestedSink as blue and the attributes set on nestedFlow as green).
https://doc.akka.io/docs/akka/current/stream/stream-composition.html
CC-MAIN-2020-40
en
refinedweb
This example demonstrates the usage of polymorphism in Java programming language What is Polymorphism The term Polymorphism comes from the Greek language, and means “many forms”. Polymorphism in Java allows subclasses of a class to define their own unique behaviours and yet share some of the same functionality of the parent class. I’m going to discuss polymorphism from the point of view of inheritance where multiple methods, all with the same name, have slightly different functionality. This technique is also called method overriding. Polymorphism is one of the four major concepts behind object-oriented programming (OOP). OOP questions are very common in job interviews, so you may expect questions about polymorphism on your next Java job interview. Java Polymorphism Example In this example we will create 3 classes to demonstrate polymorphism and one class to test the concept. Our superclass is called Animal. The successors of the animal class are Dog and Cat classes. Those are animals too, right? That’s what polymorphism is about – you have many forms of the same object with slightly different behaviour. To demonstrate this we will use a method called makeSound() and override the output of this method in the successor classes. The pharmacy class uses objects of the drug and product class, for example, Cialis generic. The generalized animal class will output some abstract text when we call the makeSound() method: package net.javatutorial; public class Animal { public void makeSound() { System.out.println("the animal makes sounds"); } } The Dog class, which extends Animal will produce slightly different result – the dog will bark. To achieve this we extend the Animal class and override the makeSound() method package net.javatutorial; public class Dog extends Animal{ @Override public void makeSound() { System.out.println("the dog barks"); } } Obviously we have to do the same to our Cat class to make the cat meow. package net.javatutorial; public class Cat extends Animal { @Override public void makeSound() { System.out.println("the cat meows"); } } Finally lets test our creation. package net.javatutorial; public class PolymorphismExample { public static void main(String[] args) { Animal animal = new Animal(); animal.makeSound(); Dog dog = new Dog(); dog.makeSound(); animal = new Cat(); animal.makeSound(); } } First we create a general Animal object and call the makeSound() method. We do the same for a newly created Dog object. Now note the call to animal = new Cat() – we assign a new Cat object to an Animal object. Cats are animals, remember? So, we can always do this: Animal animal = new Cat(); By calling the makeSound() method of this object will actually call the overridden makeSound() method in the Cat class. Finally, here is the output of the program the animal makes sounds the dog barks the cat meows References Official Oracle polymorphism example
https://javatutorial.net/java-polymorphism-example
CC-MAIN-2020-40
en
refinedweb
Good morning , i’m trying to use the particle web ide with the sample proposed when i search library for the ds18b20 sensor but on the console i get every time an event dstemp = nan the sample provided use D2 on the photon , i connect the data terminal of the sensor to the pin 4 on the shieldshield that is equal to D6 on the photon. i connect a 12v power supply to the shieldshield and verified the code for a D2 instead of a D6 but i’m not able to solve the problem. ds18b20 was connected electrically following sample taken from internet for arduino uno. please let me know. Good morning , i’m trying to use the particle web ide with the sample proposed when i search library for the ds18b20 sensor but on the console i get every time an event dstemp = nan Could you show us a picture of your setup (clearly showing the wiring), and the exact code used? Good morning , here is the code: #include <DS18B20.h> DS18B20 ds18b20(D6); //Sets Pin D6 for Water Temp Sensor int led = D7; char szInfo[64]; float pubTemp; double celsius; double fahrenheit; unsigned int Metric_Publish_Rate = 30000; unsigned int MetricnextPublishTime; int DS18B20nextSampleTime; int DS18B20_SAMPLE_INTERVAL = 2500; int dsAttempts = 0; void setup() { Time.zone(-5); Particle.syncTime(); pinMode(D6, INPUT); Particle.variable("tempHotWater", &fahrenheit, DOUBLE); Serial.begin(115200); } void loop() { if (millis() > DS18B20nextSampleTime){ getTemp(); } if (millis() > MetricnextPublishTime){ Serial.println("Publishing now."); publishData(); } } void publishData(){ if(!ds18b20.crcCheck()){ //make sure the value is correct return; } sprintf(szInfo, "%2.2f", fahrenheit); Particle.publish("dsTmp", szInfo, PRIVATE); MetricnextPublishTime = millis() + Metric_Publish_Rate; } void getTemp(){ if(!ds18b20.search()){ ds18b20.resetsearch(); celsius = ds18b20.getTemperature(); Serial.println(celsius); while (!ds18b20.crcCheck() && dsAttempts < 4){ Serial.println("Caught bad value."); dsAttempts++; Serial.print("Attempts to Read: "); Serial.println(dsAttempts); if (dsAttempts == 3){ delay(1000); } ds18b20.resetsearch(); celsius = ds18b20.getTemperature(); continue; } dsAttempts = 0; fahrenheit = ds18b20.convertToFahrenheit(celsius); DS18B20nextSampleTime = millis() + DS18B20_SAMPLE_INTERVAL; Serial.println(fahrenheit); } } I don’t think setting pinMode() for your OneWire pin is required (or even a good idea). Let the library do its job and don’t mess with the pin you promised to the lib to have control over. This is old syntax it should now be Particle.variable("tempHotWater", fahrenheit); You may also want to try the reworked sample of the most recent library version and extend that I have tried also that sample placing D6 instead of D2 but I still get nan in the event published here the third photo Good morning , i have tried also connecting directly to the photon on a breadboard taking voltage from 3v3 but again i get a nan reply from topic…please let me know because i need to understand if particle is ready for prime time That connection (taping the cables together) is likely to be an issue. Try tinning the wires, and inserting them directly. Connecting it without the Shield shield, should work as well. Have you implemented the improvements @ScruffR mentioned? just to let you know , with the same hardware setup but with ds18x20-temperature.ino sample i was able to get the temperature from the ds18b20 sensor so i think there is something in the DS18B20.h that is wrong, here the code from the sample mentioned that works: /* Use this sketch to read the temperature from 1-Wire devices you have attached to your Particle device (core, p0, p1, photon, electron) Temperature is read from: DS18S20, DS18B20, DS1822, DS2438 I/O setup: These made it easy to just ‘plug in’ my 18B20 parasitic power it gets more picky about the value. #include "DS18.h" DS18 sensor(D0); void setup() { Serial.begin(9600); // Set up 'power' pins, comment out if not used! pinMode(D3, OUTPUT); pinMode(D5, OUTPUT); digitalWrite(D3, LOW); digitalWrite(D5, HIGH); } void loop() { // Read the next available 1-Wire temperature sensor if (sensor.read()) { // Do something cool with the temperature Serial.printf("Temperature %.2f C %.2f F ", sensor.celsius(), sensor.fahrenheit()); Particle.publish("temperature", String(sensor.celsius()), PRIVATE); // Additional info useful while debugging printDebugInfo(); // If sensor.read() didn't return true you can try again later // This next block helps debug what's wrong. // It's not needed for the sensor to work properly } else { // Once all sensors have been read you'll get searchDone() == true // Next time read() is called the first sensor is read again if (sensor.searchDone()) { Serial.println("No more addresses."); // Avoid excessive printing when no sensors are connected delay(250); // Something went wrong } else { printDebugInfo(); } } Serial.println(); } void printDebugInfo() { // If there's an electrical error on the 1-Wire bus you'll get a CRC error // Just ignore the temperature measurement and try again if (sensor.crcError()) { Serial.print("CRC Error "); } // Print the sensor type const char *type; switch(sensor.type()) { case WIRE_DS1820: type = "DS1820"; break; case WIRE_DS18B20: type = "DS18B20"; break; case WIRE_DS1822: type = "DS1822"; break; case WIRE_DS2438: type = "DS2438"; break; default: type = "UNKNOWN"; break; } Serial.print(type); // Print the ROM (sensor type and unique ID) uint8_t addr[8]; sensor.addr(addr); Serial.printf( " ROM=%02X%02X%02X%02X%02X%02X%02X%02X", addr[0], addr[1], addr[2], addr[3], addr[4], addr[5], addr[6], addr[7] ); // Print the raw sensor data uint8_t data[9]; sensor.data(data); Serial.printf( " data=%02X%02X%02X%02X%02X%02X%02X%02X%02X", data[0], data[1], data[2], data[3], data[4], data[5], data[6], data[7], data[8] ); }
https://community.particle.io/t/photon-shieldshield-ds18b20-problem/37442
CC-MAIN-2020-40
en
refinedweb
This module is a submodule of std.range.. These interfaces are intended to provide virtual function-based wrappers around input ranges with element type E. This is useful where a well-defined binary interface is required, such as when a DLL function or virtual function needs to accept a generic range as a parameter. Note that isInputRange and friends check for conformance to structural interfaces not for implementation of these interface types. refaccess to elements. inputRangeObject import std.algorithm.iteration : map; import std.range : iota; void useRange(InputRange!int range) { // Function body. } // Create a range type. auto squares = map!"a * a"(iota(10)); // Wrap it in an interface. auto squaresWrapped = inputRangeObject(squares); // Use it. useRange(squaresWrapped); foreach iteration uses opApply, since one delegate call per loop iteration is faster than three virtual function calls. Interface for a forward range of type E. Interface for a bidirectional range of type E. Interface for a finite random access range of type E. Interface for an infinite random access range of type E. Adds assignable elements to InputRange. Adds assignable elements to ForwardRange. Adds assignable elements to BidirectionalRange. Adds assignable elements to RandomAccessFinite. Interface for an output range of type E. Usage is similar to the InputRange interface and descendants. Implements the OutputRange interface for all types E and wraps the put method for each type E in a virtual function. Returns the interface type that best matches R. Implements the most derived interface that R works with and wraps all relevant range primitives in virtual functions. If R is already derived from the InputRange interface, aliases itself away. Convenience function for creating an InputRangeObject of the proper type. See InputRange for an example. Convenience function for creating an OutputRangeObject with a base range of type R that accepts types E. import std.array; auto app = appender!(uint[])(); auto appWrapped = outputRangeObject!(uint, uint[])(app); static assert(is(typeof(appWrapped) : OutputRange!(uint[]))); static assert(is(typeof(appWrapped) : OutputRange!(uint))); © 1999–2019 The D Language Foundation Licensed under the Boost License 1.0.
https://docs.w3cub.com/d/std_range_interfaces/
CC-MAIN-2020-40
en
refinedweb
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project. On Thu, 26 Jun 2014, Roland McGrath wrote: > diff --git a/sysdeps/arm/nptl/Makefile b/sysdeps/arm/nptl/Makefile > index 143850e..2c31e76 100644 > --- a/sysdeps/arm/nptl/Makefile > +++ b/sysdeps/arm/nptl/Makefile > @@ -18,3 +18,16 @@ > ifeq ($(subdir),csu) > gen-as-const-headers += tcb-offsets.sym > endif > + > +ifeq ($(subdir),nptl) > +libpthread-sysdep_routines += nptl-aeabi_unwind_cpp_pr1 > +libpthread-shared-only-routines += nptl-aeabi_unwind_cpp_pr1 > + > +# This test relies on compiling part of the binary with EH information, > +# part without, and unwinding through. The .ARM.exidx tables have > +# start addresses for EH regions, but no end addresses. Every > +# region an exception needs to propogate through must have unwind > +# information, or a previous function's unwind table may be used > +# by mistake. > +tests := $(filter-out tst-cleanupx4,$(tests)).) -- Joseph S. Myers joseph@codesourcery.com
https://sourceware.org/legacy-ml/libc-alpha/2014-06/msg00867.html
CC-MAIN-2020-40
en
refinedweb
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project. Building with GCC 7 produces an error building rpcgen: rpc_parse.c: In function 'get_prog_declaration': rpc_parse.c:543:25: error: may write a terminating nul past the end of the destination [-Werror=format-length=] sprintf (name, "%s%d", ARGNAME, num); /* default name of argument */ ~~~~^ rpc_parse.c:543:5: note: format output between 5 and 14 bytes into a destination of size 10 sprintf (name, "%s%d", ARGNAME, num); /* default name of argument */ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ That buffer overrun is for the case where the .x file declares a program with a million arguments. The strcpy two lines above can generate a buffer overrun much more simply for a long argument name. The limit on length of line read by rpcgen (MAXLINESIZE == 1024) provides a bound on the buffer size needed, so this patch just changes the buffer size to MAXLINESIZE to avoid both possible buffer overruns. A testcase is added that rpcgen does not crash with a 500-character argument name, where it previously crashed. It would not at all surprise me if there are many other ways of crashing rpcgen with either valid or invalid input; fuzz testing would likely find various such bugs, though I don't think they are that important to fix (rpcgen is not that likely to be used with untrusted .x files as input). (As well as fuzz-findable bugs there are probably also issues when various int variables get overflowed on very large input.) The test infrastructure for rpcgen-not-crashing tests would need extending if tests are to be added for cases where rpcgen should produce an error, as opposed to cases where it should succeed. Tested for x86_64 and x86. 2016-11-07 Joseph Myers <joseph@codesourcery.com> [BZ #20790] * sunrpc/rpc_parse.c (get_prog_declaration): Increase buffer size to MAXLINESIZE. * sunrpc/bug20790.x: New file. * sunrpc/Makefile [$(run-built-tests) = yes] (rpcgen-tests): New variable. [$(run-built-tests) = yes] (tests-special): Add $(rpcgen-tests). [$(run-built-tests) = yes] ($(rpcgen-tests)): New rule. diff --git a/sunrpc/Makefile b/sunrpc/Makefile index 789ef42..99e5c3c 100644 --- a/sunrpc/Makefile +++ b/sunrpc/Makefile @@ -103,6 +103,11 @@ ifeq ($(have-thread-library),yes) xtests += thrsvc endif +ifeq ($(run-built-tests),yes) +rpcgen-tests := $(objpfx)bug20790.out +tests-special += $(rpcgen-tests) +endif + headers += $(rpcsvc:%.x=rpcsvc/%.h) extra-libs := librpcsvc extra-libs-others := librpcsvc # Make it in `others' pass, not `lib' pass. @@ -225,3 +230,9 @@ endif endif $(objpfx)thrsvc: $(common-objpfx)linkobj/libc.so $(shared-thread-library) + +ifeq ($(run-built-tests),yes) +$(rpcgen-tests): $(objpfx)%.out: %.x $(objpfx)rpcgen + $(built-program-cmd) -c $< -o $@; \ + $(evaluate-test) +endif diff --git a/sunrpc/bug20790.x b/sunrpc/bug20790.x new file mode 100644 index 0000000..a00c9b3 --- /dev/null +++ b/sunrpc/bug20790.x @@ -0,0 +1 @@ +program TPROG { version TVERS { int FUNC) = 1; } = 1; } = 1; diff --git a/sunrpc/rpc_parse.c b/sunrpc/rpc_parse.c index 1a1df6d..505a655 100644 --- a/sunrpc/rpc_parse.c +++ b/sunrpc/rpc_parse.c @@ -521,7 +521,7 @@ static void get_prog_declaration (declaration * dec, defkind dkind, int num /* arg number */ ) { token tok; - char name[10]; /* argument name */ + char name[MAXLINESIZE]; /* argument name */ if (dkind == DEF_PROGRAM) { -- Joseph S. Myers joseph@codesourcery.com
https://sourceware.org/legacy-ml/libc-alpha/2016-11/msg00250.html
CC-MAIN-2020-40
en
refinedweb
In a previous tutorial, I covered how we would implement Angular routing in an Ionic 4 application. Since Ionic 4 has been released, there has been more of a focus on using the baked in Angular routing, rather than Ionic’s own push/pop style routing. In this tutorial, we are going to cover how to use route guards with Angular routing to prevent access to certain routes if certain conditions have not been met. A common example of this is preventing access to certain pages if the user is not logged in, and that is what we will be focusing on. In the past, you may have used the Ionic navigation guards like ionViewCanEnter to determine whether or not a user could navigate to a page. Now, we can use Angular’s route guards to prevent access to certain pages in an Ionic/Angular application. Angular Route Guards The basic idea behind a route guard is that you attach a service which acts as the “route guard” to a particular route. That service has a canActivate method which will return either true or false depending on whether the user should be allowed to go to that route or not. If the canActivate method returns false, then the user will not be able to access the route. Route guards make the process of protecting certain routes and redirecting the user quite simple, and in my opinion, more manageable than using navigation guards like ionViewCanEnter on individual components. The end result looks something like this: const routes: Routes = [ { path: "", redirectTo: "/login", pathMatch: "full" }, { path: "login", loadChildren: "./login/login.module#LoginPageModule" }, { path: "home", loadChildren: "./home/home.module#HomePageModule", canActivate: [AuthGuardService] } ]; All we need to do is add one additional property to the route definitions to determine if the route can be activated. Since the routes in the example above are lazy loaded, we could also use canLoad instead of canActivate to entirely prevent the loading of the children for that route (rather than just preventing access). It is important to note that most things we implement on the client side (i.e. not on a server) are more for user experience than security. Client-side code is accessible/modifiable by the user, so you should never use route guards to protect information that you don’t want the user to see (just as you shouldn’t solely use client-side code to validate/sanitise user entered data). Think of your route guards as a friendly security guard directing traffic – they can keep people away from where they are not supposed to be, and direct them to where they need to go, but anybody with malicious intent could easily run right by the guard. Anything in your application that needs to be kept secure should only be accessible through a server that your application communicates with. Creating a Route Guard Creating a route guard is as simple as creating a service that implements a canActivate method. For example, we could generate an AuthGuard service with the following command: ionic g service services/AuthGuard Then, all you need to do is have this canActivate method return true or false and you can do whatever you like to determine that value: import { Injectable } from "@angular/core"; import { Router, CanActivate, ActivatedRouteSnapshot } from "@angular/router"; @Injectable({ providedIn: "root" }) export class AuthGuardService implements CanActivate { constructor(private router: Router) {} canActivate(route: ActivatedRouteSnapshot): boolean { console.log(route); let authInfo = { authenticated: false }; if (!authInfo.authenticated) { this.router.navigate(["login"]); return false; } return true; } } In this example, we have just set up a dummy object called authInfo that has an authenticated value of false. In a real-life situation, we would just pull this authenticaiton information from whatever is responsible for authenticating the user. We then check that value, and it the user is not authenticated we send them back to the login page and return false – otherwise, we just return true which will allow the navigation to proceed. Although we are not using it, we have also injected ActivatedRouteSnapshot which will allow you to access details about the route that the user is navigating to. You may need details about the route, like the parameters that were supplied, in order to determine whether or not to allow a user to proceed. Attach the Route Guard to your Routes All that is left to do once you create the route guard is to import it into the file that contains your routes, and attach it to any routes you want to protect with it: import { NgModule } from "@angular/core"; import { PreloadAllModules, RouterModule, Routes } from "@angular/router"; import { AuthGuardService } from "./services/auth-guard.service"; const routes: Routes = [ { path: "", redirectTo: "/login", pathMatch: "full" }, { path: "login", loadChildren: "./login/login.module#LoginPageModule" }, { path: "home", loadChildren: "./home/home.module#HomePageModule", canActivate: [AuthGuardService] } ]; @NgModule({ imports: [RouterModule.forRoot(routes, { preloadingStrategy: PreloadAllModules })], exports: [RouterModule] }) export class AppRoutingModule {} You can use multiple different route guards if you like, and you can attach the same route guard to multiple different routes. Summary The approach that Angular routing uses for route/navigation guards is quite similar in the end to the way you would have done it with ionViewCanEnter – ultimately, it is just a function returning true or false. However, I think the benefit of this approach is that it is a little more organised and it is easier to apply guards to multiple routes..
https://www.joshmorony.com/prevent-access-to-pages-in-ionic-with-angular-route-guards/
CC-MAIN-2020-40
en
refinedweb
In the last article, we learned what goes into planning for a community-driven site. We saw just how many considerations are needed to start accepting user submissions, using what I learned from my experience building Style Stage as an example. Now that we’ve covered planning, let’s get to some code! Together, we’re going to develop an Eleventy setup that you can use as a starting point for your own community (or personal) site. This article will cover: - How to initialize Eleventy and create useful develop and build scripts - Recommended setup customizations - How to define custom data and combine multiple data sources - Creating layouts with Nunjucks and Eleventy layout chaining - Deploying to Netlify The vision Let’s assume we want to let folks submit their dogs and cats and pit them against one another in cuteness contests. We’re not going to get into user voting in this article. That would be so cool (and totally possible with serverless functions) but our focus is on the pet submissions themselves. In other words, users can submit profile details for their cats and dogs. We’ll use those submissions to create a weekly battle that puts a random cat up against a random dog on the home page to duke it out over which is the most purrrfect (or woof-tastic, if you prefer). Let’s spin up Eleventy We’ll start by initializing a new project by running npm init on any directory you’d like, then installing Eleventy into it with: npm install @11ty/eleventy While it’s totally optional, I like to open up the package-json file that’s added to the directory and replace the scripts section with this: "scripts": { "develop": "eleventy --serve", "build": "eleventy" }, This allows us to start developing Eleventy in a development environment ( npm run develop) that includes Browsersync hot-reloading for local development. It also adds a command that compiles and builds our work ( npm run build) for deployment on a production server. If you’re thinking, “npm what?” what we’re doing is calling on Node (which is something Eleventy requires). The commands noted here are intended to be run in your preferred terminal, which may be an additional program or built-in to your code editor, like it is in VS Code. We’ll need one more npm package, fast-glob, that will come in handy a little later for combining data. We may as well install it now: npm install --save-dev fast-glob. Let’s configure our directory Eleventy allows customizing the input directory (where we work) and output directory (where our built work goes) to provide a little extra organization. To configure this, we’ll create the eleventy.js file at the root of the project directory. Then we’ll tell Eleventy where we want our input and output directories to go. In this case, we’re going to use a src directory for the input and a public directory for the output. module.exports = function (eleventyConfig) { return dir: { input: "src", output: "public" }, }; }; Next, we’ll create a directory called pets where we’ll store the pets data we get from user submissions. We can even break that directory down a little further to reduce merge conflicts and clearly distinguish cat data from dog data with cat and dog subdirectories: pets/ cats/ dogs/ What’s the data going to look like? Users will send in a JSON file that follows this schema, where each property is a data point about the pet: { "name": "", "petColor": "", "favoriteFood": "", "favoriteToy": "", "photoURL": "", "ownerName": "", "ownerTwitter": "" } To make the submission process crystal clear for users, we can create a CONTRIBUTING.md file at the root of the project and write out the guidelines for submissions. GitHub takes the content in this file and uses displays it in the repo. This way, we can provide guidance on this schema such as a note that favoriteFood, favoriteToy, and ownerTwitte are optional fields. A README.md file would be just as fine if you’d prefer to go that route. It’s just nice that there’s a standard file that’s meant specifically for contributions. Notice photoURL is one of those properties. We could’ve made this a file but, for the sake of security and hosting costs, we’re going to ask for a URL instead. You may decide that you are willing to take on actual files, and that’s totally cool. Let’s work with data Next, we need to create a combined array of data out of the individual cat files and dog files. This will allow us to loop over them to create site pages and pick random cat and dog submissions for the weekly battles. Eleventy allows node module.exports within the _data directory. That means we can create a function that finds all cat files and another that finds all dog files and then creates arrays out of each set. It’s like taking each cat file and merging them together to create one data set in a single JavaScript file, then doing the same with dogs. The filename used in _data becomes the variable that holds that dataset, so we’ll add files for cats and dogs in there: _data/ cats.js dogs.js The functions in each file will be nearly identical — we’re merely swapping instances of “cat” for “dog” between the two. Here’s the function for cats: const fastglob = require("fast-glob"); const fs = require("fs"); module.exports = async () => { // Create a "glob" of all cat json files const catFiles = await fastglob("./src/pets/cats/*.json", { caseSensitiveMatch: false, }); // Loop through those files and add their content to our `cats` Set let cats = new Set(); for (let cat of catFiles) { const catData = JSON.parse(fs.readFileSync(cat)); cats.add(catData); } // Return the cats Set of objects within an array return [...cats]; }; Does this look scary? Never fear! I do not routinely write node either, and it’s not a required step for less complex Eleventy sites. If we had instead chosen to have contributors add to an ever growing single JSON file with _data, then this combination step wouldn’t be necessary in the first place. Again, the main reason for this step is to reduce merge conflicts by allowing for individual contributor files. It’s also the reason we added fast-glob to the mix. Let’s output the data This is a good time to start plugging data into the templates for our UI. In fact, go ahead and drop a few JSON files into the pets/cats and pets/dogs directories that include data for the properties so we have something to work with right out of the gate and test things. We can go ahead and add our first Eleventy page by adding a index.njk file in the src directory. This will become the home page, and is a Nunjucks template file format. Nunjucks is one option of many for creating templates with Eleventy. See the docs for a full list of templating options. Let’s start by looping over our data and outputting an unordered list both for cats and dogs: <ul> <!-- Loop through cat data --> {% for cat in cats %} <li> <a href="/cats/{{ cat.name | slug }}/">{{ cat.name }}</a> </li> {% endfor %} </ul> <ul> <!-- Loop through dog data --> {% for dog in dogs %} <li> <a href="/dogs/{{ dog.name | slug }}/">{{ dog.name }}</a> </li> {% endfor %} </ul> As a reminder, the reference to cats and dogs matches the filename in _data. Within the loop we can access the JSON keys using dot notation, as seen for cat.name, which is output as a Nunjucks template variable using double curly braces (e.g. {{ cat.name }}). Let’s create pet profile pages Besides lists of cats and dogs on the home page ( index.njk), we also want to create individual profile pages for each pet. The loop indicated a hint at the structure we’ll use for those, which will be [pet type]/[name-slug]. The recommended way to create pages from data is via the Eleventy concept of pagination which allows chunking out data. We’re going to create the files responsible for the pagination at the root of the src directory, but you could nest them in a custom directory, as long as it lives within src and can still be discovered by Eleventy. src/ cats.njk dogs.njk Then we’ll add our pagination information as front matter, shown for cats: --- pagination: data: cats alias: cat size: 1 permalink: "/cats/{{ cat.name | slug }}/" --- The data value is the filename from _data. The alias value is optional, but is used to reference one item from the paginated array. size: 1 indicates that we’re creating one page per item of data. Finally, in order to successfully create the page output, we need to also indicate the desired permalink structure. That’s where the alias value above comes into play, which accesses the name key from the dataset. Then we are using a built-in filter called slug that transforms a string value into a URL-friendly string (lowercasing and converting spaces to dashes, etc). Let’s review what we have so far Now is the time to fire up Eleventy with npm run develop. That will start the local server and show you a URL in the terminal you can use to view the project. It will show build errors in the terminal if there are any. As long as all was successful, Eleventy will create a public directory, which should contain: public/ cats/ cat1-name/index.html cat2-name/index.html dogs/ dog1-name/index.html dog2-name/index.html index.html And in the browser, the index page should display one linked list of cat names and another one of linked dog names. Let’s add data to pet profile pages Each of the generated pages for cats and dogs is currently blank. We have data we can use to fill them in, so let’s put it to work. Eleventy expects an _includes directory that contains layout files (“templates”) or template partials that are included in layouts. We’ll create two layouts: src/ _includes/ base.njk pets.njk The contents of base.njk will be an HTML boilerplate. The <body> element in it will include a special template tag, {{ content | safe }}, where content passed into the template will render, with safe meaning it can render any HTML that is passed in versus encoding it. Then, we can assign the homepage, index.md, to use the base.njk layout by adding the following as front matter. This should be the first thing in index.md, including the dashes: --- layout: base.njk --- If you check the compiled HTML in the public directory, you’ll see the output of the cat and dog loops we created are now within the <body> of the base.njk layout. Next, we’ll add the same front matter to pets.njk to define that it will also use the base.njk layout to leverage the Eleventy concept of layout chaining. This way, the content we place in pets.njk will be wrapped by the HTML boilerplate in base.njk so we don’t have to write out that HTML each and every time. In order to use the single pets.njk template to render both cat and dog profile data, we’ll use one of the newest Eleventy features called computed data. This will allow us to assign values from the cats and dogs data to the same template variables, as opposed to using if statements or two separate templates (one for cats and one for dogs). The benefit is, once again, to avoid redundancy. Here’s the update needed in cats.njk, with the same update needed in dogs.njk (substituting cat with dog): eleventyComputed: title: "{{ cat.name }}" petColor: "{{ cat.petColor }}" favoriteFood: "{{ cat.favoriteFood }}" favoriteToy: "{{ cat.favoriteToy }}" photoURL: "{{ cat.photoURL }}" ownerName: "{{ cat.ownerName }}" ownerTwitter: "{{ cat.ownerTwitter }}" Notice that eleventyComputed defines this front matter array key and then uses the alias for accessing values in the cats dataset. Now, for example, we can just use {{ title }} to access a cat’s name and a dog’s name since the template variable is now the same. We can start by dropping the following code into pets.njk to successfully load cat or dog profile data, depending on the page being viewed: <img src="{{ photoURL }}" /> <ul> <li><strong>Name</strong>: {{ title }}</li> <li><strong>Color</strong>: {{ petColor }}</li> <li><strong>Favorite Food</strong>: {{ favoriteFood if favoriteFood else 'N/A' }}</li> <li><strong>Favorite Toy</strong>: {{ favoriteToy if favoriteToy else 'N/A' }}</li> {% if ownerTwitter %} <li><strong>Owner</strong>: <a href="{{ ownerTwitter }}">{{ ownerName }}</a></li> {% else %} <li><strong>Owner</strong>: {{ ownerName }}</li> {% endif %} </ul> The last thing we need to tie this all together is to add layout: pets.njk to the front matter in both cats.njk and dogs.njk. With Eleventy running, you can now visit an individual pet page and see their profile: We’re not going into styling in this article, but you can head over to the sample project repo to see how CSS is included. Let’s deploy this to production! The site is now in a functional state and can be deployed to a hosting environment! As recommended earlier, Netlify is an ideal choice, particularly for a community-driven site, since it can trigger a deployment each time a submission is merged and provide a preview of the submission before sending it for review. If you choose Netlify, you will want to push your site to a GitHub repo which you can select during the process of adding a site to your Netlify account. We’ll tell Netlify to serve from the public directory and run npm run build when new changes are merged into the main branch. The sample site includes a netlify.toml file which has the build details and is automatically detected by Netlify in the repo, removing the need to define the details in the new site flow. Once the initial site is added, visit Settings → Build → Deploy in Netlify. Under Deploy contexts, select “Edit” and update the selection for “Deploy Previews” to “Any pull request against your production branch / branch deploy branches.” Now, for any pull request, a preview URL will be generated with the link being made available directly in the pull request review screen. Let’s start accepting submissions! Before we pass Go and collect $ 100, it’s a good idea to revisit the first post and make sure we’re prepared to start taking user submissions. For example, we ought to add community health files to the project if they haven’t already been added. Perhaps the most important thing is to make sure a branch protection rule is in place for the main branch. This means that your approval is required prior to a pull request being merged. Contributors will need to have a GitHub account. While this may seem like a barrier, it removes some of the anonymity. Depending on the sensitivity of the content, or the target audience, this can actually help vet (get it?) contributors. Here’s the submission process: - Fork the website repository. - Clone the fork to a local machine or use the GitHub web interface for the remaining steps. - Create a unique .json file within src/pets/cats or src/pets/dogs that contains required data. - Commit the changes if they’re made on a clone, or save the file if it was edited in the web interface. - Open a pull request back to the main repository. - (Optional) Review the Netlify deploy preview to verify information appears as expected. - Merge the changes. - Netlify deploys the new pet to the live site. A FAQ section is a great place to inform contributors how to create pull request. You can check out an example on Style Stage. Let’s wrap this up… What we have is fully functional site that accepts user contributions as submissions to the project repo. It even auto-deploys those contributions for us when they’re merged! There are many more things we can do with a community-driven site built with Eleventy. For example: - Markdown files can be used for the content of an email newsletter sent with Buttondown. Eleventy allows mixing Markdown with Nunjucks or Liquid. So, for example, you can add a Nunjucks for loop to output the latest five pets as links that output in Markdown syntax and get picked up by Buttondown. - Auto-generated social media preview images can be made for social network link previews. - A commenting system can be added to the mix. - Netlify CMS Open Authoring can be used to let folks make submissions with an interface. Check out Chris’ great rundown of how it works. My Meow vs. BowWow example is available for you to fork on GitHub. You can also view the live preview and, yes, you really can submit your pet to this silly site. 🙂 Best of luck creating a healthy and thriving community! The post A Community-Driven Site with Eleventy: Building the Site appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.
http://design-lance.com/tag/building/
CC-MAIN-2020-40
en
refinedweb
Gecode::Gist::SearcherThread Class Reference A thread that concurrently explores the tree. More... #include <treecanvas.hh> Detailed Description A thread that concurrently explores the tree. Definition at line 68 of file treecanvas.hh. Member Function Documentation Definition at line 286 of file treecanvas.cpp. Definition at line 368 of file treecanvas.cpp. The documentation for this class was generated from the following files: - gecode/gist/treecanvas.hh (Revision: 14967) - gecode/gist/treecanvas.cpp (Revision: 14967)
http://www.gecode.org/doc-latest/reference/classGecode_1_1Gist_1_1SearcherThread.html
CC-MAIN-2018-09
en
refinedweb
Gecode::Int::Rel::NaryEqDom< View > Class Template Reference [Integer propagators] n-ary domain consistent equality propagator More... Detailed Description template<class View> class Gecode::Int::Rel::NaryEqDom< View > n-ary domain consistent equality propagator Uses staging by first performing bounds propagation and only then domain propagation. Requires #include <gecode/int/rel.hh> Definition at line 168 of file rel.hh. Constructor & Destructor Documentation Member Function Documentation template<class View > Copy propagator during cloning. Implements Gecode::Actor. Definition at line 301 of file eq.hpp. template<class View > Cost function. If a view has been assigned, the cost is low unary. If in stage for bounds propagation, the cost is low linear. Otherwise it is high linear. Reimplemented from Gecode::NaryPropagator< View, PC_INT_DOM >. Definition at line 307 of file eq.hpp. template<class View > Perform propagation. Implements Gecode::Propagator. Definition at line 317 of file eq.hpp. The documentation for this class was generated from the following files: - gecode/int/rel.hh (Revision: 15253) - gecode/int/rel/eq.hpp (Revision: 15253)
http://www.gecode.org/doc-latest/reference/classGecode_1_1Int_1_1Rel_1_1NaryEqDom.html
CC-MAIN-2018-09
en
refinedweb
In the previous post, I wrote about how you can use the existing providers for Google, Facebook etc. and retrieve extra metadata about the authenticated users. Let’s assume you wanted to change the way the providers request for information. Some examples of this could be - You want to request more data about the user - You want to apply different scope levels when requesting the data This post covers how you can write your own provider and plug it into your ASP.NET web application Write your own provider Each Provider implements from OpenIdClient. Following example shows a custom implementation of Google Provider which requests information about the user such as firstname/lastname etc Please Note: This addresses a bug with the existing google provider which does not return the extra data about the user such as Country/FirstName/LastName. The version of google provider is DotNetOpenAuth.AspNet" version="4.0.3.12153". We have logged a bug for this and will fix it in next update of this package. namespace MyApplication { using System.Collections.Generic; using DotNetOpenAuth.OpenId.Extensions.AttributeExchange; using DotNetOpenAuth.OpenId.RelyingParty; /// <summary> /// Represents Google OpenID client. /// </summary> public class GoogleCustomClient : OpenIdClient { #region Constructors and Destructors public GoogleCustomClient() : base("google", WellKnownProviders.Google) { } #endregion #region Methods /// <summary> /// Gets the extra data obtained from the response message when authentication is successful. /// </summary> /// <param name="response"> /// The response message. /// </param> /// <returns>A dictionary of profile data; or null if no data is available.</returns>("country", fetchResponse.GetAttributeValue(WellKnownAttributes.Contact.HomeAddress.Country)); extraData.Add("firstName", fetchResponse.GetAttributeValue(WellKnownAttributes.Name.First)); extraData.Add("lastName", fetchResponse.GetAttributeValue(WellKnownAttributes.Name.Last)); return extraData; } return null; } /// <summary> /// Called just before the authentication request is sent to service provider. /// </summary> /// <param name="request"> /// The request. /// </param> protected override void OnBeforeSendingAuthenticationRequest(IAuthenticationRequest request) { // Attribute Exchange extensions var fetchRequest = new FetchRequest(); fetchRequest.Attributes.AddRequired(WellKnownAttributes.Contact.Email); fetchRequest.Attributes.AddRequired(WellKnownAttributes.Contact.HomeAddress.Country); fetchRequest.Attributes.AddRequired(WellKnownAttributes.Name.First); fetchRequest.Attributes.AddRequired(WellKnownAttributes.Name.Last); request.AddExtension(fetchRequest); } #endregion } } Source Code for existing providers The source code for existing providers is public and can be accessed at Register your provider with your application WebForms - In App_Start/AuthConfig.cs register the custom provider as follows OpenAuth.AuthenticationClients.Add("Custom Google", () => new MyApplication.GoogleCustomClient()); //OpenAuth.AuthenticationClients.AddGoogle(); MVC - In App_Start/AuthConfig.cs register the custom provider as follows OAuthWebSecurity.RegisterClient(new MyApplication.GoogleCustomClient(),"Google",null); // OAuthWebSecurity.RegisterGoogleClient(); WebPages - In _AppStart.cshtml register the custom provider as follows OAuthWebSecurity.RegisterClient(new MyApplication.GoogleCustomClient(),"Google",null); // OAuthWebSecurity.RegisterGoogleClient(); This post has been cross posted to Please do reach me via twitter (@rustd) for any questions Join the conversationAdd Comment This looks so easy! I was always not a big friend of OAuth – but this is really amazing! Thank you! Can you provide an example of how I would hook up a third party provider (i.e. something outside of Facebook, Twitter, and Google)? @Ryan you can use the same model to hook up any third party provider such as linkedin, yahoo etc Great post, thanks. The first I've seen which covers this. I'm keen to request some extra fields from Facebook and as the methods are different I can't work out how I'd do that and extend the scope to request email and possible write to wall permissions. Any pointers you can give me? Thanks for the blog post. Very useful.
https://blogs.msdn.microsoft.com/webdev/2012/08/23/plugging-custom-oauthopenid-providers/
CC-MAIN-2018-09
en
refinedweb
#include <sys/ddi.h> #include <sys/sunddi.h> int devmap_setup(dev_t dev, offset_t off, ddi_as_handle_t as, caddr_t *addrp, size_tlen, uint_t prot, uint_t maxprot, uint_t flags, cred_t *cred); int ddi_devmap_segmap(dev_t dev, off_t off, ddi_as_handle_t as, caddr_t *addrp, off_tlen, uint_t prot, uint_t maxprot, uint_t flags, cred_t *cred); Solaris DDI specific (Solaris DDI). Device whose memory is to be mapped. User offset within the logical device memory at which the mapping begins. An opaque data structure that describes the address space into which the device memory should be mapped. Pointer to the starting address in the address space into which the device memory should be mapped. Length (in bytes) of the memory to be mapped. A bit field that specifies the protections. Some possible settings combinations. The following flags can be specified: Changes are private. Changes should be shared. The user specified an address in *addrp rather than letting the system choose an address. Pointer to the user credential structure. devmap_setup() and ddi_devmap_segmap() allow device drivers to use the devmap framework to set up user mappings to device memory. The devmap framework provides several advantages over the default device mapping framework that is used by ddi_segmap(9F) or ddi_segmap_setup(9F). Device drivers should use the devmap framework, if the driver wants to: use an optimal MMU pagesize to minimize address translations, conserve kernel resources, receive callbacks to manage events on the mapping, export kernel memory to applications, set up device contexts for the user mapping if the device requires context switching, assign device access attributes to the user mapping, or change the maximum protection for the mapping. devmap_setup() must be called in the segmap(9E) entry point to establish the mapping for the application. ddi_devmap_segmap() can be called in, or be used as, the segmap(9E) entry point. The differences between devmap_setup() and ddi_devmap_segmap() are in the data type used for off and len. When setting up the mapping, devmap_setup() and ddi_devmap_segmap() call the devmap(9E) entry point to validate the range to be mapped. The devmap(9E) entry point also translates the logical offset (as seen by the application) to the corresponding physical offset within the device address space. If the driver does not provide its own devmap(9E) entry point, EINVAL will be returned to the mmap(2) system call. Successful completion. An error occurred. The return value of devmap_setup() and ddi_devmap_segmap() should be used directly in the segmap(9E) entry point. devmap_setup() and ddi_devmap_segmap() can be called from user or kernel context only. mmap(2), devmap(9E), segmap(9E), ddi_segmap(9F), ddi_segmap_setup(9F), cb_ops(9S) Writing Device Drivers for Oracle Solaris 11.2
https://docs.oracle.com/cd/E36784_01/html/E36886/ddi-devmap-segmap-9f.html
CC-MAIN-2018-09
en
refinedweb
I have always enjoyed playing with non-rectangular skins and this article is just about having fun making skins in different ways. John O'Byrne wrote an article here on CodeProject back in 2003 entitled TaskbarNotifier, a skinnable MSN Messenger-like popup in C# that was a nice article on creating a non-rectangular TaskbarNotifier. I thought I would take his code as a starting point, fix some minor bugs, and add a number of additional features such as creating skins from web pages for really cool animations using jquery, adding a non-rectangular video player, and making non-rectangular skins resizable by loading the non-rectangualr regions from files or resources and scalling the regions using XFORM and Transform. And unlike his original project, I created a DLL that can be loaded and used by other applications which made more sense to me. I am not an artist so some of the edges of the skins are a little rough but I hope the reader understands that this article is about just having fun with regions and not about my artwork! The MSN messenger like popup includes: I included a separte project called "NotifierDemo" that loads the DLL "TaskNotifier.dll" to illustrate how to use this DLL. In addition, NotifierDemo also allows you to create and save regions to files so we can use the saved regions as files or embedded resources. There are two types of skins in this demo. The first are those that are just a skinned WinForm painted with a bitmap, i.e. the "Bmp Skin" buttons. The other type is a skinned WinForm with a WebBrowser Control and an instance of a media player, i.e., the "WebSkin" buttons in the NotifierDemo screen shown below are "Cargo Door," "Stone TV," and "Stargate." The bottom half of the screen below allows you to create a region from a bitmap including a tolerance factor. The button in the lower left of the screen below called "Create Form from Region File" will load the region frrom the region file you select and RESIZE the region to fit the dimensions you have typed into the Width and Height fields on the right of this button. I only set the code for an animated slide from the lower right-hand corner of the screen but the reader can easily modify the code to slide from any of the corners of the screen. It should be pointed out that some of the resources for these skins can be placed in the DLL as embedded resources or they can loose in any directory. In this demo to make things easier I put some skin resources in directories and others as embeddede resources to illustrate using both approaches. You can find dozens of examples of creating a region from a bitmap. I used the approach below in C# that includes using a "Tolerance Factor" to help to smooth out the rough curves. Speed is not critical here because we will be using only the regions created in our skins and not dynamically creating the regions from a bitmap. When I first started this article I resized the skins by just first resizing the bitmap image and then creating the region again from the resized bitmap--that also works fine if you prefer that approach. My own preference is to create the region and resize the regions from a file or embedded region resource which seems a bit faster. public Region getRegion(Bitmap inputBmp, Color transperancyKey, int tolerance) { // Stores all the rectangles for the region GraphicsPath path = new GraphicsPath(); // Scan the image for (int x = 0; x < inputBmp.Width; x++) { for (int y = 0; y < inputBmp.Height; y++) { if (!colorsMatch(inputBmp.GetPixel(x, y), transperancyKey, tolerance)) path.AddRectangle(new Rectangle(x, y, 1, 1)); } } // Create the Region Region outputRegion = new Region(path); // Clean up path.Dispose(); return outputRegion; } private static bool colorsMatch(Color color1, Color color2, int tolerance) { if (tolerance < 0) tolerance = 0; return Math.Abs(color1.R - color2.R) <= tolerance && Math.Abs(color1.G - color2.G) <= tolerance && Math.Abs(color1.B - color2.B) <= tolerance; } Saving a region file correctly is not so simple. What I doubt you will find anywhere is sample code in C# to save a region to a file correctly. Proably because to save a region to a file in C# is a bit tricky since the methods to get the combined region data AND region header all accept a pointer to the region structure. Since the skins in this project all load the regions from either a file or embedded resource we need to be able to save the region created from a bitmap to a file in C# as I do in Bmp2Rgn.cs. There are other ways to save or serialize a region to a file in C# without using unsafe pointers but this approach is just the way I prefer doing it. // Create a region called "myRegion" by some means and pass the // handle to the region, i.e., Hrgn, to "SaveRgn2File" like so: using (Graphics g = this.CreateGraphics()) SaveRgn2File(myRegion.GetHrgn(g), sSaveRgnFile); [SuppressUnmanagedCodeSecurity()] public unsafe void SaveRgn2File(IntPtr hRgn, string sSaveRgnFile) { Win32.RECT[] regionRects = null; IntPtr pBytes = IntPtr.Zero; try { // See how much memory we need to allocate int regionDataSize = Win32.GetRegionData(new HandleRef(null, hRgn), 0, IntPtr.Zero); if (regionDataSize != 0) { pBytes = Marshal.AllocCoTaskMem(regionDataSize); // Get the pointer, i.e., pBytes, to BOTH the region header AND the region data! int ret = Win32.GetRegionData(new HandleRef(null, hRgn), regionDataSize, pBytes); if (ret == regionDataSize) // make sure we have RDH_RECTANGLES { // Cast to the structure Win32.RGNDATAHEADER* pRgnDataHeader = (Win32.RGNDATAHEADER*)pBytes; if (pRgnDataHeader->iType == 1) // Make sure we have RDH_RECTANGLES { using (FileStream writeStream = new FileStream(sSaveRgnFile, FileMode.Create, FileAccess.ReadWrite)) { WriteToStream(writeStream, (void*)pBytes, (uint)ret); writeStream.Close(); } } } } } finally { if (pBytes != IntPtr.Zero) { Marshal.FreeCoTaskMem(pBytes); } } } [SuppressUnmanagedCodeSecurity()] public unsafe static void WriteToStream(FileStream output, void* pvBuffer, uint length) { IntPtr hFile = output.SafeFileHandle.DangerousGetHandle(); WriteToStream(hFile, pvBuffer, length); GC.KeepAlive(output); } [SuppressUnmanagedCodeSecurity()] public unsafe static void WriteToStream(IntPtr hFile, void* pvBuffer, uint length) { if (hFile == NativeConstants.INVALID_HANDLE_VALUE) throw new ArgumentException("output", "File is closed"); void* pvWrite = pvBuffer; while (length > 0) { uint written; bool result = SafeNativeMethods.WriteFile(hFile, pvWrite, length, out written, IntPtr.Zero); if (!result) return; pvWrite = (void*)((byte*)pvWrite + written); length -= written; } } My approach in creating skins is to let the user set the width and height of any skin dynamically by providing a "Grip Area" in the lower right corner of the skin that the user can drag to resize the skin. The method below converts a region file to a scaled region using XFORM & Transform & sets a region to object given it's handle. One issue I ran into was that I found I had to remove the AnchorStyles before calling "SetBounds" and setting the region for the Video Player and then I had to reset the AnchorStyles to "Top" and "Left" afterwards. // Converts region file to scaled region using XFORM & Transform & sets region to object given it's handle private void File2RgnStretch(System.IntPtr hWnd, string strRgnFile, int bmpWidth, int bmpHeight, int rgnWidth, int rgnHeight) { using (FileStream fs = new FileStream(strRgnFile, FileMode.Open, FileAccess.Read, FileShare.Read)) { byte[] regionData = null; BinaryReader reader = new BinaryReader(fs); regionData = reader.ReadBytes((int)fs.Length); using (Region region = Region.FromHrgn(ExtCreateRegion(0, regionData.Length, regionData))) { // The bounding rectangle of a region is usually smaller than the WinForm's height and width so we use the size // of the default bitmap we paint the WinForm's background with to calculate the xScale and yScale values float xScale = (float)rgnWidth / (float)bmpWidth; float yScale = (float)rgnHeight / (float)bmpHeight; Win32.XFORM xForm; xForm.eDx = 0; xForm.eDy = 0; xForm.eM11 = xScale; xForm.eM12 = 0; xForm.eM21 = 0; xForm.eM22 = yScale; // Scale the region region.Transform(xForm); Graphics g = this.CreateGraphics(); IntPtr hRgn = region.GetHrgn(g); //IntPtr hDC = g.GetHdc(); if (this.Handle == hWnd) // Set the WinForm Region { // Get the Bounding Rectangle for this region RectangleF rect = region.GetBounds(g); rectSkin.Width = Convert.ToInt32(rect.Width) + 30; if((ZSkinName == "skin1") || (ZSkinName == "skin2") || (ZSkinName == "skin3")) rectSkin.height = Convert.ToInt32(rect.Height); else rectSkin.height = Convert.ToInt32(rect.Height) + 10; SetWindowRgn(hWnd, hRgn, false); } else if (dShowPlayer1.Handle == hWnd) // Set the Video Player Region { rectVideo = new Win32.RGNRECT(); rectVideo.x1 = 0; rectVideo.y1 = 0; rectVideo.Width = this.Width; rectVideo.Height = this.Height; dShowPlayer1.RectVideo = rectVideo; dShowPlayer1.Visible = true; // Remove AnchorStyles before calling "SetBounds" dShowPlayer1.Anchor = AnchorStyles.None; dShowPlayer1.Dock = DockStyle.None; dShowPlayer1.SetBounds(rectVideo.x1, rectVideo.y1, rectVideo.Width, rectVideo.Height); Win32.SetWindowRgn(dShowPlayer1.Handle, hRgn, true); // Set AnchorStyles dShowPlayer1.Anchor = AnchorStyles.Top | AnchorStyles.Left; } else SetWindowRgn(hWnd, hRgn, false); // Set the WebBrowser's Region // Clean Up region.ReleaseHrgn(hRgn); //g.ReleaseHdc(hDC); g.Dispose(); } } } // Note: To load a region from a resourse we use: global::System.Resources.ResourceManager rm = new global::System.Resources.ResourceManager("TaskNotifier.Properties.Resources", typeof(Resources).Assembly); regionData = (byte[])rm.GetObject(strRgnRes); Below is the basic structure and layout of the WebBrowser skin. It consists of a WinForm with a WebBrowser control docked on the form. We then set a region that is a "cookie cutter" for the overall shape of both the WinForm as illustrated below. The door below is just an ordinary web page in the wbbrowser control inside of a non-rectangualr region. In the case of the "Cargo Door" skin the door is an html file where the door itself is made up of a lot of little images that are animated using plain old Javascript. There are two main ways to create the Grip Area on the skin in the WeBrowser skins. One technique is to place an image of the Grip Area in a separate layer in the web page for the skin and position it where you want it and to fire a callback in the C# code the same way we do for the dragging of the web skin. But I decided to use an approach that we could also use with the non-WebBrowser skins, namely to paint the Grip Area directly on the back of the WinForm and re-paint the Grip when the form is resized. In order for the user to be able to click on the painted Grip Area on the WinForm it is necessary toi leave an area cut out from the WeBrowsr's region so that the underlying Grip Area is visible as show below. When we first create the skin we calculate the sizeFactorX and sizeFactorY based on the form's size to the size of the bitmap we will use to paint the background of our form. We then apply these ratios to correctly resize the image of our Grip. The position of the Grip is also recalculated with these ratios as shown below: // Note: In the method UpdateSkin() we calculate the Scale for the width and height as follows: sizeFactorX = (double)this.Size.Width / backBmap.Size.Width; sizeFactorY = (double)this.Size.Height / backBmap.Size.Height; public void SetGripBitmap(Image image, Color transparencyColor, Point position) { dGripBitmapW = (double)image.Width * (double)sizeFactorX; dGripBitmapH = (double)image.Height * (double)sizeFactorY; dGripBitmapX = (double)position.X * (double)sizeFactorX; dGripBitmapY = (double)position.Y * (double)sizeFactorY; GripBitmap = null; GripBitmap = new Bitmap(Convert.ToInt32(dGripBitmapW), Convert.ToInt32(dGripBitmapH)); Graphics gr = Graphics.FromImage(GripBitmap); gr.SmoothingMode = SmoothingMode.None; gr.CompositingQuality = CompositingQuality.HighQuality; gr.InterpolationMode = InterpolationMode.HighQualityBilinear; gr.DrawImage(image, new Rectangle(0, 0, Convert.ToInt32(dGripBitmapW), Convert.ToInt32(dGripBitmapH)), new Rectangle(0, 0, image.Width, image.Height), GraphicsUnit.Pixel); gr.Dispose(); GripBitmap.MakeTransparent(transparencyColor); GripBitmapSize = new Size(GripBitmap.Width, GripBitmap.Height); GripBitmapLocation = new Point(Convert.ToInt32(dGripBitmapX), Convert.ToInt32(dGripBitmapY)); } Since the non-rectangular WebBrowser Control contains our HTML Skin when the user resizes the WinForm by dragging the the Grip Area we must also zoom the WebBrowser control proportionally to match the change in size of the WinForm so I added the necessary code for the WebBrowser Control accomplish this as follows: [PermissionSet(SecurityAction.LinkDemand, Name = "FullTrust")] public void Zoom(int zoomvalue) { if ((zoomvalue < 10) || (zoomvalue > 1000)) return; try { // In Windows Internet Explorer 8 or higher we can call OLECMDIDF_OPTICAL_ZOOM_NOPERSIST = 0x00000001 but this is BUGGY! // Windows Internet Explorer 8 does not automatically persist the specified zoom percentage. // But it is safer to just call extendedWebBrowser1.Zoom(100) to reset the zoom factor back to 100% when we create our skins this.axIWebBrowser2.ExecWB(NativeMethods.OLECMDID.OLECMDID_OPTICAL_ZOOM, NativeMethods.OLECMDEXECOPT.OLECMDEXECOPT_DONTPROMPTUSER, zoomvalue, System.IntPtr.Zero); } catch { } } In that case of simple bitmap skins we can proportionally change both the width and height to the new dimensions of the WinForm BUT, in the case of the WebBrowser skins, we are limited because of how the WebBrowser's zoom works so we resize the WebBrowser skins by the change in the x-coordinate as follows: // Original background bitmap for "Stargate" skin was: width: 372 and height: 380 // Where dx and dy are change in WinForm width and height after dragging the Grip area double dHeight = (double)(380 * (this.Width + dx)) / (double)372; // Change the WinForm size to the new dimensions Win32.SetWindowPos(this.Handle, (System.IntPtr)Win32.HWND_TOPMOST, this.Location.X, this.Location.Y, this.Width + dx, Convert.ToInt32(dHeight), Win32.SWP_SHOWWINDOW); // Sets the new scaled regions and calculates the new "sizeFactorX" UpdateSkin(); // Calculate the zoom percentage for the WebBrowser double dZoom = 100 * (double)sizeFactorX; // Zoom the WebBrowser Control display of our html skin extendedWebBrowser1.Zoom(Convert.ToInt32(dZoom)); // Scale our painted Grip area to new dimensions SetGripBitmap((Bitmap)Resources.stargate_grip, Color.FromArgb(255, 0, 255), new Point(270, 320)); As I mentioned at the beginning of the article I included the C# wrapper for DirectShow, namely, DirectShowLib-2005.dll, to allow users to play video messages in a popup. You can remove this reference if you don't want to play video. There are two ways you can add video, I added an instance of a C# video player I created using the wrapperer. Shown below is a skinned WinForm consisting of a WebBrowser Control and an instance of media player that is just a wrapper for the DirectShowLib-2005.dll. You can download this dll with sourcecode at. The WebBrowser Control is docked to the parent WinForm that has no border. The actual skin you see isn't a bitmap painted on the WinForm as in the case of the skins "Bmp Skin 1", "Bmp Skin 2" and "Bmp Skin 3" which are skins that only use the WinForm. In the case of the WebSkins the skin is just an ordinary html web page inside of the WebBrowser Control docked on the parent WinForm. Below is an illustration of the layers on the WebBrowser skin. We could add additional zooming for the video but this articloe is just about having some fun with regions and that would be a little too much! If the user right mouse clicks on the Video Player region I added a context menu that will appear that allows the user to play the video fullscreen so we need to subscribe to the "ToggleChange" event in the video player so we can handle when the user goes from fullscreen back to the normal size of the player. In this event we need to call "SetSkin()" to rebuild the skin as follows: // dShowPlayer1.ToggleChange += new ZToggleFullScreenHandler(dShowPlayer1_ToggleChange); void dShowPlayer1_ToggleChange(object o, ZToggleEventArgs e) { if (!e.ZIsFullScreen) { dShowPlayer1.menuFileClose_Click(this, null); SetSkin(ZSkinName, true); } } SetBackgroundBitmap((Bitmap)Resources.skin1, Color.FromArgb(255, 0, 255));;0, 255));; SetCloseBitmap((Bitmap)Resources.close1, Color.FromArgb(255, 0, 255), new Point(220, 8)); TitleRectangle = new Rectangle(60, 8, 70, 25); ContentRectangle = new Rectangle(60, 8, 150, 140); The first line sets the background bitmap skin and transparency color from the embedded resource bitmap, and the second line sets the optional 3-State close button with its transparency color and its location on the window. These two lines allow us to define the rectangles in which will be displayed, the title and content texts. You can set these properties for the simple bmp skins: void SetCloseBitmap(string strFilename, Color transparencyColor, Point position) Sets the 3-State close button bitmap, its transparency color and its coordinates for our plain bitmap skins. strFilename transparencyColor position void SetCloseBitmap(Image image, Color transparencyColor, Point position) Sets the 3-State close button bitmap, its transparency color and its coordinates. image Image Bitmap string TitleText (get/set) string ContentText (get/set) TaskbarStates TaskbarState (get) Color NormalTitleColor (get/set) Color HoverTitleColor (get/set) Color NormalContentColor (get/set) Color HoverContentColor (get/set) Font NormalTitleFont (get/set) Font HoverTitleFont (get/set) Font NormalContentFont (get/set) Font HoverContentFont (get/set) Rectangle TitleRectangle (get/set) //must be defined before calling show()) Rectangle ContentRectangle (get/set) //must be defined before calling show()) bool TitleClickable (get/set) (default = false); bool ContentClickable (get/set) (default = true); bool CloseClickable (get/set) (default = true); bool EnableSelectionRectangle (get/set) (default = true); public void Show(string strAction, string strTitle, string strContent, int nTimeToShow, int nTimeToStay, int nTimeToHide) I added an "Action" parameter, i.e., strAction, to the original code to indicate what the popup should do when launched, and the other parameters set the Title, Content and amount of time to display the popup. strAction strTitlee strContent nTimeToShow nTimeToStay nTimeToHide void Hide() This Hides the popup. The refresh() of the popup is done using double buffering technique from John O'Byrne's original article to avoid flickering with some minor changes: refresh() protected override void OnPaintBackground(PaintEventArgs e) { if (m_alphaBitmap != null) { Graphics grfx = e.Graphics; grfx.PageUnit = GraphicsUnit.Pixel; Graphics offScreenGraphics; Bitmap offscreenBitmap; offscreenBitmap = new Bitmap(m_alphaBitmap.Width, m_alphaBitmap.Height); offScreenGraphics = Graphics.FromImage(offscreenBitmap); if (m_alphaBitmap != null) offScreenGraphics.DrawImage(m_alphaBitmap, 0, 0, m_alphaBitmap.Width, m_alphaBitmap.Height); DrawGrip(offScreenGraphics); DrawCloseButton(offScreenGraphics); DrawText(offScreenGraphics); grfx.DrawImage(offscreenBitmap, 0, 0); // The bitmap and "offScreenGraphics" object should be disposed. // BUT, The grfx object should NOT be disposed. offScreenGraphics.Dispose(); offscreenBitmap.Dispose(); } } In addition to avoid flicking I added: this.SetStyle(System.Windows.Forms.ControlStyles.DoubleBuffer, true); this.SetStyle(System.Windows.Forms.ControlStyles.AllPaintingInWmPaint, false); this.SetStyle(System.Windows.Forms.ControlStyles.ResizeRedraw, true); this.SetStyle(System.Windows.Forms.ControlStyles.UserPaint, true); this.UpdateStyles(); The reader can tweak the code and easily change how the popups work. For example, you can embed the html and images in the DLL and load them as embedded resources. You will notice the popup is shown using the Win32 function ShowWindow(SW_SHOWNOACTIVATE), to prevent the popup from stealing the focus. To add a really nice dropshadow we need to create a bitmap with 32 bits per pixel with an alpha channel and add the droshadow to the bitmap itself as part of the image. But to add a slight dropshadow we can just add the code below. // Adds a slight dropshadow to our skin private const int WS_THICKFRAME = 0x40000; private const int WS_CAPTION = 0xC00000; private const int CS_DROPSHADOW = 0x20000; protected override CreateParams CreateParams { get { CreateParams cp = base.CreateParams; cp.ClassStyle |= CS_DROPSHADOW; cp.Style = cp.Style & ~WS_CAPTION; cp.Style = cp.Style & ~WS_THICKFRAME; return cp; } } There are a number of ways to drag a non-rectangular form without a tittle bar but I like to keep things simple so what I used was the code below. In the case of the simple bitmap skin we override OnMouseDown and just make sure that we are not over the painted Grip Area as shown below. // Dragging is achieved with the following in OnMouseDown(MouseEventArgs e) Win32.ReleaseCapture(); Win32.SendMessage(Handle, Win32.WM_NCLBUTTONDOWN, Win32.HT_CAPTION, 0); protected override void OnMouseDown(MouseEventArgs e) { base.OnMouseDown(e); bIsMouseDown = true; if (e.Button == MouseButtons.Left) { if (bIsMouseOverClose) { Refresh(); } else if (!bIsMouseOverGrip) { Win32.ReleaseCapture(); Win32.SendMessage(Handle, Win32.WM_NCLBUTTONDOWN, Win32.HT_CAPTION, 0); } else if (bIsMouseOverGrip) { nMouseStartX = e.X; nMouseStartY = e.Y; } } } In the case of our WebBrowser skins there are two main methods I used in these skins and both work equally well. The first method is place a link in te javascript as follows so that when the user clicks on an image in html designated to be part of the frame images that users click on to drag the skin we Naviate to "EVENT:DRAG" and trap it in the C# "Navigating" event as follows: // METHOD #1. Dragging The Skin Using The Navigating Event in Our WebBrowser Control // On the Click event of a drag image in Javascript we call the function dragimage() <script language="javascript" type="text/jscript"> function dragimage(){window.location.href ="EVENT:DRAG";} </script> // In C# we capture this Navigate event from our Javascript in the C# Navigating event private void extendedWebBrowser1_Navigating(object sender, WebBrowserNavigatingEventArgs e) { string csTestUrl = string.Empty; string csEvent = string.Empty; string csAction = string.Empty; string csData = string.Empty; string csQuestionMark = string.Empty; char[] delimiterChars = { ':' }; char[] delimiterChars2 = { '?' }; try { csTestUrl = e.Url.ToString(); string[] words = csTestUrl.Split(delimiterChars, StringSplitOptions.None); if (words.Length > 1) { csEvent = words[0]; csAction = words[1]; } if (words.Length > 2) csData = words[2]; string[] words2 = csTestUrl.Split(delimiterChars2, StringSplitOptions.None); if (words2.Length > 1) csQuestionMark = words2[1]; } catch { } csEvent = csEvent.ToUpper(); if (csEvent != "EVENT") { } else { try { csAction = csAction.ToUpper(); if (csAction == "DRAG") { e.Cancel = true; Win32.ReleaseCapture(); Win32.SendMessage(Handle, Win32.WM_NCLBUTTONDOWN, Win32.HT_CAPTION, 0); } } catch { e.Cancel = true; } } } The other method which I also used in the sample project which works equally well is to use "window.external" in Javascript and to create an "External" Class that calls a method "SendDragData" in our skin as follows: // METHOD #2. Dragging The Skin Using "window.external" in Javascript // On the Click event of a drag image in Javascript we call window.external in the function dragimage() <script language="javascript" type="text/jscript"> function HandleDrag(a) { window.external.SendDragData("EVENT:DRAG"); } </script> // In C# External Class we call SendDragData and pass the data, namely, "EVENT:DRAG", to our skin window [System.Runtime.InteropServices.ComVisibleAttribute(true)] public class External { private static SkinDlg m_mainWindow = null; public void SendDragData(string zdata) { m_mainWindow.SendDragData(zdata); } } // In our skin we receive the data, namely, "EVENT:DRAG", sent from our External Class public void SendDragData(string zdata) { Win32.ReleaseCapture(); Win32.SendMessage(Handle, Win32.WM_NCLBUTTONDOWN, Win32.HT_CAPTION, 0); } Basically I set out to see how well resizing of the skins might look and the overall effect isn't bad. The purpose of this article is to just have some fun palying around with.
https://www.codeproject.com/Articles/311846/TaskbarNotifiers-Resizable-Skinned-MSN-Messenger-L?display=Print
CC-MAIN-2018-09
en
refinedweb
Qt Quick Extras Overview Qt Quick Extras provide a set of UI controls to create user interfaces in Qt Quick. Getting Started Building If you are building Qt Quick Extras from source, you can follow the steps used for most Qt modules: qmake make make install Using the Controls The QML types can be imported into your application using the following import statement in your .qml file. import QtQuick.Extras 1.4 Interactive controls Non-interactive controls Creating a basic example A basic example of a QML file that makes use of controls is shown here: import QtQuick 2.2 import QtQuick.Extras 1.4 Rectangle { DelayButton { anchors.centerIn: parent } } For an interactive showcase of the controls provided by Qt Quick Extras, you can look at the Gallery example. .
http://doc.qt.io/qt-5/qtquickextras-overview.html
CC-MAIN-2018-09
en
refinedweb
java.lang.Object java.util.EventObjectjava.util.EventObject HiRISE.HiPlan.SPICE.KernelPoolEventHiRISE.HiPlan.SPICE.KernelPoolEvent public class KernelPoolEvent An event that indicates the SPICE kernel pool has changed. This event is generated by a producer (such as a SPICE_Menu) when the SPICE kernel pool is updated (such as when a kernel is added to or removed from the pool). The event is passed to every KernelPoolListener object that registered to receive such events using the producer's addKernelPoolListener method. The contents of the kernel pool at the time of the creation of an object of this class are available via the getPoolContents() method. public static final String ID public KernelPoolEvent(Object source) getPoolContents()method. source- the producer of the kernel pool event. IllegalArgumentException- if source is null. public final List<String> getPoolContents() public String toString() toStringin class EventObject
http://pirlwww.lpl.arizona.edu/software/HiRISE/Java/HiRISE/HiPlan/SPICE/KernelPoolEvent.html
CC-MAIN-2018-09
en
refinedweb
#include <TripleAdder.h> #include <TripleAdder.h> Inheritance diagram for TripleAdder:: Definition at line 14 of file TripleAdder.h. [inline] Constructor which sets the sink. This is the prefered way to set the sink. * Definition at line 34 of file TripleAdder.h. [inline, virtual] Tell the adder where to add its Triples, which may be construed as a request to begin doing so. The behavior if you attempt to set the sink after construction time, after one has been set once, or during streaming is not defined in general; refer to the specific derived class. The getScope() defaults to the getScope() of the sink. Definition at line 54 of file TripleAdder.h.
http://www.w3.org/2001/06/blindfold/api/classTripleAdder.html
CC-MAIN-2015-22
en
refinedweb
]> NAME SYNOPSIS REQUEST ARGUMENTS REPLY FIELDS DESCRIPTION RETURN VALUE ERRORS SEE ALSO AUTHOR xcb_alloc_color − Allocate a color #include <xcb/xproto.h> Request function Reply datastructure typedef struct xcb_alloc_color_reply_t { uint8_t response_type; uint8_t pad0; uint16_t sequence; uint32_t length; uint16_t red; uint16_t green; uint16_t blue; uint8_t pad1[2]; uint32_t pixel; } xcb_alloc_color_reply_t; Reply function response_type The type of this reply, in this case XCB_ALLOC_COLOR. This field is also present in the xcb_generic_reply_t and can be used to tell replies apart from each other. Allocates a read-only colormap entry corresponding to the closest RGB value supported by the hardware. If you are using TrueColor, you can take a shortcut and directly calculate the color pixel value to avoid the round trip. But, for example, on 16-bit color setups (VNC), you can easily get the closest supported RGB value to the RGB value you are specifying. Returns an xcb_alloc_color_cookie_t. Errors have to be handled when calling the reply function xcb_alloc_color_reply. If you want to handle errors in the event loop instead, use xcb_alloc_color_unchecked. See xcb-requests(3) for details. xcb_colormap_error_t The specified colormap cmap does not exist. xcb-requests(3) Generated from xproto.xml. Contact xcb@lists.freedesktop.org for corrections and improvements.
http://www.x.org/releases/current/doc/man/man3/xcb_alloc_color.3.xhtml
CC-MAIN-2015-22
en
refinedweb
I am doing a Craps program in my class. I think this is a very common program for beginners but it seems to be different everywhere I see it. All I need it to do is the user inputs the first total, it tells them if they win or lose, if not, it prompts a second total and checks for a winner. If it is not a winner, it prompts for another roll from the user until the user wins. I got my code to compile correctly but I have some problems. Problem 1: If the first number is a winner or a loser, it prints "You win!!!" or "You Lose!!!" and then it does not end the program. I have to actually close the program myself. Problem 2: If the first number is neither a winner or a loser, it prompts for a second roll. If the second roll is the same as the first it says "You win!!!" If not, it should ask for another roll, instead it asks for another roll infinitly, I have to close the program. Here is my code, I have been working with it for a while and I would really appreciate any help. Code:// File: CRAPS.cpp // Author: Emil // Class: TR 5pm // Purpose: To simulate a game of Craps given total on first dice roll // and possibly a second dice roll. #include <iostream> using namespace std; int main() { float first_total, second_total, roll_total; // Prompt for and read in total from first dice roll cout << "Enter total from first dice roll (2-12)... "; cin >> first_total; if((first_total == 7) || (first_total == 11)) cout << "You Win!!!"; else if((first_total == 2) || (first_total == 3) || (first_total == 12)) cout << "You Lose!!!"; else cout << "Please roll again and enter total of second roll (2-12): "; cin >> second_total; if(second_total == first_total) cout << "You win!!!"; else if(second_total == 7) cout << "You lose!!!"; else while ((second_total != 7) || (second_total == first_total)) cout << "Please roll again and enter total on this roll (2-12): "; cin >> roll_total; second_total = roll_total; return (0); }
http://cboard.cprogramming.com/cplusplus-programming/101896-need-help-craps-program.html
CC-MAIN-2015-22
en
refinedweb
Introducing Data Annotations Extensions box. A Quick Word About Data Annotations Extensions The Data Annotations Extensions project can be found at, and currently provides 11 additional validation attributes (ex: Email, EqualTo, Min/Max) on top of Data Annotations’ original 4. You can find a current list of the validation attributes on the afore mentioned website. The core library provides server-side validation attributes that can be used in any .NET 4.0 project (no MVC dependency). There is also an easily pluggable client-side validation library which can be used in ASP.NET MVC 3 projects using unobtrusive jquery validation (only MVC3 included javascript files are required). On to the Preview Let’s say you had the following “Customer” domain model (or view model, depending on your project structure) in an MVC 3 project: public class Customer { public string Email { get; set; } public int Age { get; set; } public string ProfilePictureLocation { get; set; } } When it comes time to create/edit this Customer, you will probably have a CustomerController and a simple form that just uses one of the Html.EditorFor() methods that the ASP.NET MVC tooling generates for you (or you can write yourself). It should look something like this: With no validation, the customer can enter nonsense for an email address, and then can even report their age as a negative number! With the built-in Data Annotations validation, I could do a bit better by adding a Range to the age, adding a RegularExpression for email (yuck!), and adding some required attributes. However, I’d still be able to report my age as 10.75 years old, and my profile picture could still be any string. Let’s use Data Annotations along with this project, Data Annotations Extensions, and see what we can get: public class Customer { [Required] public string Email { get; set; } [Integer] [Min(1, ErrorMessage="Unless you are benjamin button you are lying.")] [Required] public int Age { get; set; } [FileExtensions("png|jpg|jpeg|gif")] public string ProfilePictureLocation { get; set; } } Now let’s try to put in some invalid values and see what happens: That is very nice validation, all done on the client side (will also be validated on the server). Also, the Customer class validation attributes are very easy to read and understand. Another bonus: Since Data Annotations Extensions can integrate with MVC 3’s unobtrusive validation, no additional scripts are required! Now that we’ve seen our target, let’s take a look at how to get there within a new MVC 3 project. Adding Data Annotations Extensions To Your Project First we will File->New Project and create an ASP.NET MVC 3 project. I am going to use Razor for these examples, but any view engine can be used in practice. Now go into the NuGet Extension Manager (right click on references and select add Library Package Reference) and search for “DataAnnotationsExtensions.” You should see the following two packages: The first package is for server-side validation scenarios, but since we are using MVC 3 and would like comprehensive sever and client validation support, click on the DataAnnotationsExtensions.MVC3 project and then click Install. This will install the Data Annotations Extensions server and client validation DLLs along with David Ebbo’s web activator (which enables the validation attributes to be registered with MVC 3). Now that Data Annotations Extensions is installed you have all you need to start doing advanced model validation. If you are already using Data Annotations in your project, just making use of the additional validation attributes will provide client and server validation automatically. However, assuming you are starting with a blank project I’ll walk you through setting up a controller and model to test with. Creating Your Model In the Models folder, create a new User.cs file with a User class that you can use as a model. To start with, I’ll use the following class: public class User { public string Email { get; set; } public string Password { get; set; } public string PasswordConfirm { get; set; } public string HomePage { get; set; } public int Age { get; set; } } Next, create a simple controller with at least a Create method, and then a matching Create view (note, you can do all of this via the MVC built-in tooling). Your files will look something like this: UserController.cs: public class UserController : Controller { public ActionResult Create() { return View(new User()); } [HttpPost] public ActionResult Create(User user) { if (!ModelState.IsValid) { return View(user); } return Content("User valid!"); } } Create.cshtml: @model NuGetValidationTester.Models.User @{ ViewBag.Title = "Create"; } <h2>Create</h2> <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script> @using (Html.BeginForm()) { @Html.ValidationSummary(true) <fieldset> <legend>User</legend> @Html.EditorForModel() <p> <input type="submit" value="Create" /> </p> </fieldset> } In the Create.cshtml view, note that we are referencing jquery validation and jquery unobtrusive (jquery is referenced in the layout page). These MVC 3 included scripts are the only ones you need to enjoy both the basic Data Annotations validation as well as the validation additions available in Data Annotations Extensions. These references are added by default when you use the MVC 3 “Add View” dialog on a modification template type. Now when we go to /User/Create we should see a form for editing a User Since we haven’t yet added any validation attributes, this form is valid as shown (including no password, email and an age of 0). With the built-in Data Annotations attributes we can make some of the fields required, and we could use a range validator of maybe 1 to 110 on Age (of course we don’t want to leave out supercentenarians) but let’s go further and validate our input comprehensively using Data Annotations Extensions. The new and improved User.cs model class. { [Required] public string Email { get; set; } [Required] public string Password { get; set; } [Required] [EqualTo("Password")] public string PasswordConfirm { get; set; } [Url] public string HomePage { get; set; } [Integer] [Min(1)] public int Age { get; set; } } Now let’s re-run our form and try to use some invalid values: All of the validation errors you see above occurred on the client, without ever even hitting submit. The validation is also checked on the server, which is a good practice since client validation is easily bypassed. That’s all you need to do to start a new project and include Data Annotations Extensions, and of course you can integrate it into an existing project just as easily. Nitpickers Corner ASP.NET MVC 3 futures defines four new data annotations attributes which this project has as well: CreditCard, Email, Url and EqualTo. Unfortunately referencing MVC 3 futures necessitates taking an dependency on MVC 3 in your model layer, which may be unadvisable in a multi-tiered project. Data Annotations Extensions keeps the server and client side libraries separate so using the project’s validation attributes don’t require you to take any additional dependencies in your model layer which still allowing for the rich client validation experience if you are using MVC 3. Custom Error Message and Globalization: Since the Data Annotations Extensions are build on top of Data Annotations, you have the ability to define your own static error messages and even to use resource files for very customizable error messages. Available Validators: Please see the project site at for an up-to-date list of the new validators included in this project. As of this post, the following validators are available: - CreditCard - Date - Digits - EqualTo - FileExtensions - Integer - Max - Min - Numeric - Url Conclusion Hopefully I’ve illustrated how easy it is to add server and client validation to your MVC 3 projects, and how to easily you can extend the available validation options to meet real world needs. The Data Annotations Extensions project is fully open source under the BSD license. Any feedback would be greatly appreciated. More information than you require, along with links to the source code, is available at. Enjoy!
http://weblogs.asp.net/srkirkland/introducing-data-annotations-extensions
CC-MAIN-2015-22
en
refinedweb
Just recently, I weighed in on a post where the author was making a legitimate complaint about the quality of articles submitted. I won't go into the details, but since that discussion I've been tempted to write my first article as I think I should try my best and contribute more to the community. Today I commented on another authors post, it was well written and I could follow the thread of the article, but the examples were a little too obscure (for my liking), so their real world application might not be immediately apparent to the reader. So, on to my first article. It took me a while to grasp the concept of interfaces. It's not that they're particularly difficult as a concept, but how and where to apply them is where a developer can struggle. My intent with this article is not to show anything radically different, not to try and say "my article better describes x" but to try and put my understanding of interfaces and their practical implementation into my words, so that the reader has a different perspective with which to view the topic. I'm going to make the assumption that you understand (not necessarily are a master of) basic object orientated principals and that you are comfortable with the following words. The simplest analogy I can draw for an interface is that of a contract. A landlord of a property might have a standard contract, everybody expecting to live in a property owned by that landlord agrees that they will adhere to the rules and guidelines contained within it. How two tenants keep within that rule set is entirely up to them, but they are both bound by the same contract. An interface is a guarantee that certain behaviors and values will be available to anybody using an object that implements that interface. You define an interface in C# as follows: public interface ICustomAction<T> { string Name { get;} string Description { get; set; } T Execute( T val ); } An interface is always public. You cannot define access modifiers on the properties and methods defined within and you cannot provide any implementation for your interface. Note: If you are familiar with C# 3.0's features, specifically auto-properties, do not confuse your interface definition with an auto-property. It is not implementation in the case of an interface. You cannot create an object of an interface, you can only create an object of a class that implements an interface. Although you will see examples such as: public void MyMethod( ICustomAction<int> action) { } You can never actually declare an object of type ICustomAction. Give it a go, see what happens. ICustomAction ICustomAction<int> action = new ICustomAction<int>( ); Instead, you need to define a class and implement the interface, defining the functionality that you have agreed an object implementing this interface will provide. public class UpdateOrderAction : ICustomAction<int> { public string Name { get { return "UpdateOrderAction"; } } public string Description {get;set;} public int Execute( int val ) { // } } Not very useful at the moment. However, you've actually created a class from which you can instantiate an object and you will guarantee it provides a name and a method 'Execute'. Notice that we are now able to define access modifiers on the properties and methods defined. You are however, limited to your properties and methods being made public. Execute public As I said earlier, when you define an interface, you are making a guarantee that any properties and methods defined on that interface are available on all classes that implement it. Let us say that Jeff and Bill are writing a system together. Bill is going to work on the back end accounting system, Jeff is going to be working on the front end, data entry system. They're starting their development at the same time, so there is no existing code to work from. Jeff will be allowing data entry clerks to create invoices for customers, Bill's back end system will be responsible for posting those invoices to ledgers etc. So, Bill and Jeff sit down and flesh out the rough design of their system, what they'll need from one another. They agree that an invoice should contain: Id CustomerReferenceNumber Value So, they define an interface: public interface IInvoice { int Id {get;set;} int CustomerReferenceNumber {get;set;} decimal Value {get;set;} } Now, they both go away happy. Bill knows that he can work with an IInvoice object coming from Jeffs front end, Jeff knows that when he is ready, he can produce an invoice object that implements the IInvoice object they just discussed and he won't hold Bill up. Now, if Jeff decided that when a customer was a high profile customer, he would make the customer reference number private on the invoice, he would not be fulfilling the contract that he and Bill had agreed upon and that the IInvoice interface had promised. So, any class implementing an interface must make all the properties and methods that make up that interface public to all. IInvoice private public Using the example of the ICustomAction interface from earlier, we'll now continue to try and expand upon implementing our interface in a class. We defined the custom action interface as being of a type, so when we implement the interface in our object, in this case: ICustomAction public class MultiplyAction : ICustomAction<int> { public string Name { get { return "Multiply Action"; } } public string Description { get; set; } public int Execute( int val ) { Console.WriteLine( "Name: {0} Value: {1}", Name, val ); return val * 2; } } MultiplyAction ICustomAction<int> T public int Execute( int val) T Execute( T val) To see this in work, take a look at the sample source code. The generics I used are a little beyond the scope of this article, but hopefully my example code makes them understandable enough. You could define your ICustomAction like so: public interface ICustomAction { string Name {get;} string Description {get;set;} int Execute( int val); } In the attached code, there are three custom actions. One is the manager class that has a list of custom actions attached to it, the others are the actions that we can perform. In the program.cs, I create an object of type ActionManager and I add two custom actions to it. Notice that the code only specifies that an ICustomAction<int> is required, not that a Multiply or DivideAction is required. ActionManager Multiply DivideAction Without trying to throw around a bunch of common phrases (such as dependency injection, loose coupling, etc., few of which I believe I have a strong grasp of) Programming to the interface, rather than a solid implementation of an object gives the developers the flexibility to swap out the actual implementation without worrying too much about how the program will take the new change. Back to Jeff and Bill from before. Now, let us say that Jeff and Bill didn't have their original discussion, let us say that the conversation went something like: Jeff: Hey Bill, so what do you need from my data entry system? Bill: Well Jeff, I need to get an invoice, I need an Id, a customer reference number and a value. I can just tie up the customer reference number to the account and then post a value on the ledger. Jeff: Oh great! I'll pass you an invoice just like that. So, they go away and a week later, Jeff posts an Invoice object to Bill. Great. The system is working fine and they have got it up and running in record time. Their managers are overjoyed, the business is efficient. A month later, Jeff's manager approaches him. Manager: Jeff, we have a bit of an issue. We're having trouble reporting on the invoices. Some of our customers have the ability to override on an invoice by invoice basis just what terms they have. Jeff: Hmmmm Manager: You'll sort it out, that's great! So, Jeff goes away and thinks long and hard about this. He decides that the best way of doing this is to create a new type of invoice, a SuperInvoice (don't name your objects super anything!). He gets it done in an hour and then implements the change on the system. *BANG* SuperInvoice Bill: Jeff, what happened? The ledger postings crashed, it's talking about an invalid cast. Jeff: Ooops, we should have talked about interfaces in the first place. When Jeff implemented the change, he didn't think that Bill was dependent upon an Invoice object. When he implemented SuperInvoice, he just created a new type and implemented it within the system. There are several solutions to this problem, those of you familiar with inheritance may see my example as poor as Jeff could have just inherited from Invoice and all should have been fine. However, what Bill and Jeff originally did do was create an IInvoice interface. It gave Bill and Jeff the ability to program their respective parts without worry of the actual implementation of each object, when Jeff came to implement SuperInvoice, he would have implemented the interface and the system at Bill's end would have been none the wiser. It didn't need to be any the wiser. As far as Bill is concerned, it is an invoice, he doesn't need to worry about whether it carries anything not relevant to his system. Invoice SuperInvoice Interfaces are used everywhere throughout the .NET Framework and they are a powerful tool in object orientated programming and design. You may have seen IEnumerable, IDisposable and several others quite frequently while developing other programs. In the case of IDisposable, implementing this interface guarantees you will have a Dispose method on your object (whether you do anything or not[bad practice]).You may see: IEnumerable IDisposable IDisposable Dispose using( SqlConnection connection = new SqlConnection( )) { //Code } The using keyword will take any object that implements IDisposable. When the using statement is complete, it implicitly calls Dispose. If you define your methods to return an IEnumerable<string> it means you can return any collection that implements the IEnumerable interface and contains strings. You can return a list or any other firm object but you guarantee that the object you return will definitely provide certain properties and methods. using Dispose IEnumerable<string> IEnumerable string Well, that is the end of my first article. Hopefully some found it useful and informative. I do now realise the effort that goes into producing something like this and even though I work day in, day out as a developer I realise just how difficult it is to produce a "real world" example to work with. And I apologise for my criticism of other.
http://www.codeproject.com/Articles/54967/Interfaces-In-Action?fid=1558840&df=90&mpp=10&sort=Position&spc=None&tid=3352399
CC-MAIN-2015-22
en
refinedweb
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also | Notes #include <sys/wait.h> #include <sys/time.h> #include <sys/resource.h> pid_t wait3(int *statusp, int options, struct rusage *rusage); pid_t wait4(pid_t pid, int *statusp, int options, struct rusage *rusage); The wait3() function delays its caller until a signal is received or one of its child processes terminates or stops due to tracing. If any child process has died or stopped due to tracing and this has not already been reported, return is immediate, returning the process ID and status of one of those children. If that child process has died, it is.h. The status of any child processes that are stopped, and whose status has not yet been reported since they stopped, are also reported to the requesting process. If rusage is not a null pointer, a summary of the resources used by the terminated process and all its children is returned. Only the user time used and the system time used are currently available. They are returned in the ru is returned and errno is set to EINTR. If WNOHANG was set in options, it has at least one child process specified by pid for which status is not available, and status is not available for any process specified by pid, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error. The wait3() and wait4() functions return 0 if WNOHANG is specified and there are no stopped or exited children, and return the process ID of the child process if they return due to a stopped or terminated child process. Otherwise, they return -1 and set errno to indicate the error. | Attributes | See Also | Notes
http://docs.oracle.com/cd/E19082-01/819-2243/6n4i099rp/index.html
CC-MAIN-2015-22
en
refinedweb
/* Interface to C preprocessor macro tablesTAB_H #define MACROTAB_H struct obstack; struct bcache; /* How do we represent a source location? I mean, how should we represent them within GDB; the user wants to use all sorts of ambiguous abbreviations, like "break 32" and "break foo.c:32" ("foo.c" may have been #included into several compilation units), but what do we disambiguate those things to? - Answer 1: "Filename and line number." (Or column number, if you're picky.) That's not quite good enough. For example, the same source file can be #included into several different compilation units --- which #inclusion do you mean? - Answer 2: "Compilation unit, filename, and line number." This is a pretty good answer; GDB's `struct symtab_and_line' basically embodies this representation. But it's still ambiguous; what if a given compilation unit #includes the same file twice --- how can I set a breakpoint on line 12 of the fifth #inclusion of "foo.c"? - Answer 3: "Compilation unit, chain of #inclusions, and line number." This is analogous to the way GCC reports errors in #include files: $ gcc -c base.c In file included from header2.h:8, from header1.h:3, from base.c:5: header3.h:1: parse error before ')' token $ GCC tells you exactly what path of #inclusions led you to the problem. It gives you complete information, in a way that the following would not: $ gcc -c base.c header3.h:1: parse error before ')' token $ Converting all of GDB to use this is a big task, and I'm not really suggesting it should be a priority. But this module's whole purpose is to maintain structures describing the macro expansion process, so I think it's appropriate for us to take a little care to do that in a complete fashion. In this interface, the first line of a file is numbered 1, not 0. This is the same convention the rest of GDB uses. */ /* A table of all the macro definitions for a given compilation unit. */ struct macro_table; /* A source file that participated in a compilation unit --- either a main file, or an #included file. If a file is #included more than once, the presence of the `included_from' and `included_at_line' members means that we need to make one instance of this structure for each #inclusion. Taken as a group, these structures form a tree mapping the #inclusions that contributed to the compilation unit, with the main source file as its root. Beware --- not every source file mentioned in a compilation unit's symtab structures will appear in the #inclusion tree! As of Oct 2002, GCC does record the effect of #line directives in the source line info, but not in macro info. This means that GDB's symtabs (built from the former, among other things) may mention filenames that the #inclusion tree (built from the latter) doesn't have any record of. See macroscope.c:sal_macro_scope for how to accomodate this. It's worth noting that libcpp has a simpler way of representing all this, which we should consider switching to. It might even be suitable for ordinary non-macro line number info. Suppose you take your main source file, and after each line containing an #include directive you insert the text of the #included file. The result is a big file that pretty much corresponds to the full text the compiler's going to see. There's a one-to-one correspondence between lines in the big file and per-inclusion lines in the source files. (Obviously, #include directives that are #if'd out don't count. And you'll need to append a newline to any file that doesn't end in one, to avoid splicing the last #included line with the next line of the #including file.) Libcpp calls line numbers in this big imaginary file "logical line numbers", and has a data structure called a "line map" that can map logical line numbers onto actual source filenames and line numbers, and also tell you the chain of #inclusions responsible for any particular logical line number. Basically, this means you can pass around a single line number and some kind of "compilation unit" object and you get nice, unambiguous source code locations that distinguish between multiple #inclusions of the same file, etc. Pretty neat, huh? */ struct macro_source_file { /* The macro table for the compilation unit this source location is a part of. */ struct macro_table *table; /* A source file --- possibly a header file. */ const char *filename; /* The location we were #included from, or zero if we are the compilation unit's main source file. */ struct macro_source_file *included_by; /* If `included_from' is non-zero, the line number in that source file at which we were included. */ int included_at_line; /* Head of a linked list of the source files #included by this file; our children in the #inclusion tree. This list is sorted by its elements' `included_at_line' values, which are unique. (The macro splay tree's ordering function needs this property.) */ struct macro_source_file *includes; /* The next file #included by our `included_from' file; our sibling in the #inclusion tree. */ struct macro_source_file *next_included; }; /* Create a new, empty macro table. Allocate it in OBSTACK, or use xmalloc if OBSTACK is zero. Use BCACHE to store all macro names, arguments, definitions, and anything else that might be the same amongst compilation units in an executable file; if BCACHE is zero, don't cache these things. Note that, if either OBSTACK or BCACHE are non-zero, then you should only ever add information the macro table --- you should never remove things from it. You'll get an error if you try. At the moment, since we only provide obstacks and bcaches for macro tables for symtabs, this restriction makes a nice sanity check. Obstacks and bcaches are pretty much grow-only structures anyway. However, if we find that it's occasionally useful to delete things even from the symtab's tables, and the storage leak isn't a problem, this restriction could be lifted. */ struct macro_table *new_macro_table (struct obstack *obstack, struct bcache *bcache); /* Free TABLE, and any macro definitions, source file structures, etc. it owns. This will raise an internal error if TABLE was allocated on an obstack, or if it uses a bcache. */ void free_macro_table (struct macro_table *table); /* Set FILENAME as the main source file of TABLE. Return a source file structure describing that file; if we record the #definition of macros, or the #inclusion of other files into FILENAME, we'll use that source file structure to indicate the context. The "main source file" is the one that was given to the compiler; all other source files that contributed to the compilation unit are #included, directly or indirectly, from this one. The macro table makes its own copy of FILENAME; the caller is responsible for freeing FILENAME when it is no longer needed. */ struct macro_source_file *macro_set_main (struct macro_table *table, const char *filename); /* Return the main source file of the macro table TABLE. */ struct macro_source_file *macro_main (struct macro_table *table); /* Record a #inclusion. Record in SOURCE's macro table that, at line number LINE in SOURCE, we #included the file INCLUDED. Return a source file structure we can use for symbols #defined or files #included into that. If we've already created a source file structure for this #inclusion, return the same structure we created last time. The first line of the source file has a line number of 1, not 0. The macro table makes its own copy of INCLUDED; the caller is responsible for freeing INCLUDED when it is no longer needed. */ struct macro_source_file *macro_include (struct macro_source_file *source, int line, const char *included); /* Find any source file structure for a file named NAME, either included into SOURCE, or SOURCE itself. Return zero if we have none. NAME is only the final portion of the filename, not the full path. e.g., `stdio.h', not `/usr/include/stdio.h'. If NAME appears more than once in the inclusion tree, return the least-nested inclusion --- the one closest to the main source file. */ struct macro_source_file *(macro_lookup_inclusion (struct macro_source_file *source, const char *name)); /* Record an object-like #definition (i.e., one with no parameter list). Record in SOURCE's macro table that, at line number LINE in SOURCE, we #defined a preprocessor symbol named NAME, whose replacement string is REPLACEMENT. This function makes copies of NAME and REPLACEMENT; the caller is responsible for freeing them. */ void macro_define_object (struct macro_source_file *source, int line, const char *name, const char *replacement); /* Record an function-like #definition (i.e., one with a parameter list). Record in SOURCE's macro table that, at line number LINE in SOURCE, we #defined a preprocessor symbol named NAME, with ARGC arguments whose names are given in ARGV, whose replacement string is REPLACEMENT. If the macro takes a variable number of arguments, then ARGC should be one greater than the number of named arguments, and ARGV[ARGC-1] should be the string "...". This function makes its own copies of NAME, ARGV, and REPLACEMENT; the caller is responsible for freeing them. */ void macro_define_function (struct macro_source_file *source, int line, const char *name, int argc, const char **argv, const char *replacement); /* Record an #undefinition. Record in SOURCE's macro table that, at line number LINE in SOURCE, we removed the definition for the preprocessor symbol named NAME. */ void macro_undef (struct macro_source_file *source, int line, const char *name); /* Different kinds of macro definitions. */ enum macro_kind { macro_object_like, macro_function_like }; /* A preprocessor symbol definition. */ struct macro_definition { /* The table this definition lives in. */ struct macro_table *table; /* What kind of macro it is. */ enum macro_kind kind; /* If `kind' is `macro_function_like', the number of arguments it takes, and their names. The names, and the array of pointers to them, are in the table's bcache, if it has one. */ int argc; const char * const *argv; /* The replacement string (body) of the macro. This is in the table's bcache, if it has one. */ const char *replacement; }; /* Return a pointer to the macro definition for NAME in scope at line number LINE of SOURCE. If LINE is -1, return the definition in effect at the end of the file. The macro table owns the structure; the caller need not free it. Return zero if NAME is not #defined at that point. */ struct macro_definition *(macro_lookup_definition (struct macro_source_file *source, int line, const char *name)); /* Return the source location of the definition for NAME in scope at line number LINE of SOURCE. Set *DEFINITION_LINE to the line number of the definition, and return a source file structure for the file. Return zero if NAME has no definition in scope at that point, and leave *DEFINITION_LINE unchanged. */ struct macro_source_file *(macro_definition_location (struct macro_source_file *source, int line, const char *name, int *definition_line)); #endif /* MACROTAB_H */
http://opensource.apple.com/source/gdb/gdb-1344/src/gdb/macrotab.h
CC-MAIN-2015-22
en
refinedweb
Small code for PieChart...but ! By vaibhavc on Jun 30, 2009 As already discussed, JavaFX 1.2 provide API set for Charts and Graphs. Though I decided to put my hand dirty in writing one 3D piechart from my own. With mine, you will get additional feature of explode in and out feature :). Well, action can be written with the existing chart API and I guess explode feature will also come soon. Making 3D Pie chart is nothing but layering of 2D Pie Chart and here goes a small code : Slice.fx package piechart3d; import java.lang.Math; import javafx.animation.Interpolator; import javafx.animation.KeyFrame; import javafx.animation.Timeline; import javafx.scene.CustomNode; import javafx.scene.Group; import javafx.scene.input.MouseEvent; import javafx.scene.Node; import javafx.scene.paint.Color; import javafx.scene.paint.LinearGradient; import javafx.scene.paint.Stop; import javafx.scene.shape.Arc; import javafx.scene.shape.ArcType; /\*\* \* @author Vaibhav Choudhary \*/ public class Slice extends CustomNode { public var color: Color; public var sAngle: Number = 0.0; public var len: Number = 0.0; public var xt = 0.0; public var yt = 0.0; function explodout():Boolean { var t = Timeline { repeatCount: 1 keyFrames: [ KeyFrame { time: 0.25s canSkip: true values: [ xt => 30 \* Math.cos(2 \* Math.PI \* (sAngle + len / 2) / 360) tween Interpolator.EASEBOTH, yt => -30 \* Math.sin(2 \* Math.PI \* (sAngle + len / 2) / 360) tween Interpolator.EASEBOTH ] } ] } t.play(); return true } function explodein():Boolean { var t1 = Timeline { repeatCount: 1 keyFrames: [ KeyFrame { time: 0.25s canSkip: true values: [ xt => 0, yt => 0 ] } ] } t1.play(); return true } public override function create(): Node { return Group { blocksMouse: true translateX: bind xt translateY: bind yt onMouseClicked: function( e: MouseEvent ):Void { if(xt == 0 and yt == 0) { explodout(); } else explodein(); } content: for(num in [0..25]) {[ Arc { stroke: color cache: true fill: color translateX: 0 translateY: (num + 1) \* 1 centerX: 250 centerY: 250 radiusX: 150 radiusY: 60 startAngle: bind sAngle length: bind len type: ArcType.ROUND } Arc { cache: true fill: LinearGradient { startX: 0.3 startY: 0.3 endX: 1.0 endY: 1.0 stops: [ Stop { color: color offset: 0.0 }, Stop { color: Color.WHITE offset: 1.0 }, ] } centerX: 250 centerY: 250 radiusX: 150 radiusY: 60 startAngle: bind sAngle length: bind len type: ArcType.ROUND }, ] } }; } }Main.fx: package piechart3d; import javafx.scene.paint.Color; import javafx.scene.Scene; import javafx.stage.Stage; /\*\* \* @author Vaibhav Choudhary \*/ var slice1: Slice = Slice{color: Color.YELLOWGREEN, sAngle:0, len: 45}; var slice2: Slice = Slice{color: Color.BLUEVIOLET, sAngle:45, len: 80 }; var slice3: Slice = Slice{color: Color.PALETURQUOISE, sAngle:125, len: 80 }; var slice4: Slice = Slice{color: Color.DARKORANGE, sAngle:205, len: 100 }; var slice5: Slice = Slice{color: Color.FIREBRICK, sAngle:305, len: 55}; Stage { title: "Pie Chart - 3D" width: 550 height: 580 scene: Scene { fill: Color.WHITE content:[ slice2, slice1,slice5,slice3,slice4 ] } } Anything here can be made generic to any extend. There is a for loop of 0..25 in Slice.fx which speaks about the thickness of chart :) and some mathematics in timeline speaks about explode feature. Now if you compare this with PieChart that comes in API, you will see this has some jerky corners + Color combination is not as soluble. How that has been made is secret :). JNLP Run : Good one.. I liked it. Posted by Raghu Nair on July 14, 2009 at 09:25 AM IST # bind chart's data? Posted by Begin on July 19, 2009 at 01:34 AM IST # Tq,chowdary!..... could you send me the link where i can get the sourcecode of animated charts.... THanks in advance..... Posted by Kallis on November 26, 2009 at 11:43 AM IST # Tq,chowdary!..... could you send me the link where i can get the sourcecode of animated charts.... THanks in advance..... Posted by Kallis on November 26, 2009 at 11:44 AM IST # Posted by Tag Heuer on December 20, 2009 at 01:40 PM IST # The state's <a href=>Nike Air Max</a> decision in October to shrink the school <a href=>Air Max </a> year by 10 percent, giving it the fewest number <a href=>Air Max 90</a>of instructional days in the nation at 163<a href=>Air Max 95</a>, is adding to the already dismal reputation Hawaii's<a href=>nike Air Max 95</a> public schools have among<a href=>nike Air Max 90</a> servicemen and womenCol. Mike Davino, the director of<a href=>Air max ltd </a> manpower, personnel and administration<a href=>Air Max shoes </a> for the U.S. Pacific Command, said the truncated school year is yet another concern for officials who have long heard about servicemen and women avoiding Hawaii assignments because of the state's public education system. Posted by nike air max on February 01, 2010 at 04:46 AM IST # nike shox ghd mbt golf golf clubs cheap abercrombie fake watches handbag nike dunk DVD Posted by china wholesale on March 07, 2010 at 01:54 sunglasses ed hardy sunglass Posted by china wholesale on March 07, 2010 at 01:54:08 AM IST # Posted by Manolo Blahnik shoes on March 15, 2010 at 09:27 AM IST #
https://blogs.oracle.com/vaibhav/entry/small_code_for_piechart_but
CC-MAIN-2015-22
en
refinedweb
- NAME - INHERITANCE - SYNOPSIS - DESCRIPTION - METHODS - DETAILS - Comparison - Collecting definitions - Addressing components - Representing data-structures - simpleType - complexType/simpleContent - complexType and complexType/complexContent - Manually produced XML NODE - Occurence - Default Values - Repetative blocks - List type - Using substitutionGroup constructs - Wildcards any and anyAttribute - ComplexType with "mixed" attribute - hexBinary and base64Binary - Schema hooks - Typemaps - Handling xsi:type - Key rewrite - Initializing SOAP operations via WSDL - SEE ALSO - LICENSE NAME XML::Compile::WSDL11 - create SOAP messages defined by WSDL 1.1 INHERITANCE XML::Compile::WSDL11 is a XML::Compile::Cache is a XML::Compile::Schema is a XML::Compile SYNOPSIS #'; DESCRIPTION.!) METHODS Constructors - XML::Compile::WSDL11->new(XML, OPTIONS) {} - allow_undeclared => BOOLEAN - - any_element => CODE|'TAKE_ALL'|'SKIP_ALL'|'ATTEMPT'|'SLOPPY' - - block_namespace => NAMESPACE|TYPE|HASH|CODE|ARRAY - - hook => ARRAY-WITH-HOOKDATA | HOOK - - hooks => ARRAY-OF-HOOK - - - key_rewrite => HASH|CODE|ARRAY-of-HASH-and-CODE - - opts_readers => HASH|ARRAY-of-PAIRS - - opts_rw => HASH|ARRAY-of-PAIRS - - opts_writers => HASH|ARRAY-of-PAIRS - - parser_options => HASH|ARRAY - - prefixes => HASH|ARRAY-of-PAIRS - - schema_dirs => DIRECTORY|ARRAY-OF-DIRECTORIES - - typemap => HASH|ARRAY - - xsi_type => HASH|ARRAY - Accessors - $obj->addCompileOptions(['READERS'|'WRITERS'|'RW'], OPTIONS) See "Accessors" in XML::Compile::Cache - $obj->addHook(HOOKDATA|HOOK|undef) See "Accessors" in XML::Compile::Schema - $obj->addHooks(HOOK, [HOOK, ...]) See "Accessors" in XML::Compile::Schema - $obj->addKeyRewrite(PREDEF|CODE|HASH, ...) See "Accessors" in XML::Compile::Schema - $obj->addSchemaDirs(DIRECTORIES|FILENAME) - - XML::Compile::WSDL11-->anyElement('ATTEMPT'|'SLOPPY'|'SKIP_ALL'|'TAKE_ALL'|CODE) See "Accessors" in XML::Compile::Cache - $obj->blockNamespace(NAMESPACE|TYPE|HASH|CODE|ARRAY) See "Accessors" in XML::Compile::Schema - $obj->hooks() See "Accessors" in XML::Compile::Schema - $obj->prefix(PREFIX) See "Accessors" in XML::Compile::Cache - $obj->prefixFor(URI) See "Accessors" in XML::Compile::Cache - $obj->prefixed(TYPE) See "Accessors" in XML::Compile::Cache - $obj->prefixes([PAIRS|ARRAY|HASH]) See "Accessors" in XML::Compile::Cache - $obj->typemap([HASH|ARRAY|PAIRS]) See "Accessors" in XML::Compile::Cache - $obj->useSchema(SCHEMA, [SCHEMA]) See "Accessors" in XML::Compile::Schema - $obj->xsiType([HASH|ARRAY|LIST]) See "Accessors" in XML::Compile::Cache Compilers - $obj->call(OPERATION, DATA) ); - $obj->compile(('READER'|'WRITER'), TYPE, OPTIONS) See "Compilers" in XML::Compile::Schema - $obj->compileAll(['READERS'|'WRITERS'|'RW'|'CALLS', [NAMESPACE]]) [2.20] With explicit CALLSor without any parameter, it will call compileCalls(). Otherwise, see XML::Compile::Cache::compileAll(). - $obj->compileCalls(OPTIONS) [2.20] Compile a handler for each of the available operations. The OPTIONS are passed to each call of compileClient(), but will be overruled by more specific declared options. Additionally, OPTIONS can contain service, port, and bindingto); - $obj->dataToXML(NODE|REF-XML-STRING|XML-STRING|FILENAME|FILEHANDLE|KNOWN) - - XML::Compile::WSDL11->dataToXML(NODE|REF-XML-STRING|XML-STRING|FILENAME|FILEHANDLE|KNOWN) See "Compilers" in XML::Compile - $obj->initParser(OPTIONS) - - XML::Compile::WSDL11->initParser(OPTIONS) See "Compilers" in XML::Compile - $obj->reader(TYPE|NAME, OPTIONS) See "Compilers" in XML::Compile::Cache - $obj->template('XML'|'PERL'|'TREE', ELEMENT, OPTIONS) See "Compilers" in XML::Compile::Schema - $obj->writer(TYPE|NAME) See "Compilers" in XML::Compile::Cache Extension - $obj->addWSDL(XMLDATA) The XMLDATA must be acceptable to XML::Compile::dataToXML() and should represent the top-level of a (partial) WSDL document. The specification can be spread over multiple files, each of which must have a definitionroot element. - $obj->compileClient([NAME], OPTIONS) Creates an XML::Compile::SOAP::Operation temporary object using operation(), and then calls compileClient()on that. The OPTIONS available include all of the options for: operation() (i.e. serviceand port), and all of XML::Compile::SOAP::Operation::compileClient() (there are many of these, for instance); - $obj->namesFor(CLASS) Returns the list of names available for a certain definition CLASS in the WSDL. See index() for a way to determine the available CLASS information. - $obj->operation([NAME], OPTIONS)> - action => STRING Overrule the soapAction from the WSDL. - operation => NAME Ignored when the parameter list starts with a NAME (which is an alternative for this option). Optional when there is only one operation defined within the portType. - port => NAME Required when more than one port is defined. - service => QNAME|PREFIXED Required when more than one service is defined. Administration - $obj->declare(GROUP, COMPONENT|ARRAY, OPTIONS)); - $obj->doesExtend(EXTTYPE, BASETYPE) See "Administration" in XML::Compile::Schema - $obj->elements() See "Administration" in XML::Compile::Schema - $obj->findName(NAME) See "Administration" in XML::Compile::Cache - $obj->findSchemaFile(FILENAME) - - XML::Compile::WSDL11->findSchemaFile(FILENAME) See "Administration" in XML::Compile - $obj->importDefinitions(XMLDATA, OPTIONS) See "Administration" in XML::Compile::Schema - $obj->knownNamespace(NAMESPACE|PAIRS) - - XML::Compile::WSDL11->knownNamespace(NAMESPACE|PAIRS) See "Administration" in XML::Compile - $obj->namespaces() See "Administration" in XML::Compile::Schema - $obj->types() See "Administration" in XML::Compile::Schema - $obj->walkTree(NODE, CODE) See "Administration" in XML::Compile Introspection All of the following methods are usually NOT meant for end-users. End-users should stick to the operation() and compileClient() methods. - $obj->endPoint(OPTIONS) [2.20] Returns the address of the server, as specified by the WSDL. When there are no alternatives for service or port, you not not need to specify those paramters. -Option --Default port <undef> service <undef> - port => NAME - - service => QNAME|PREFIXED - - $obj->explain(OPERATION, FORMAT, DIRECTION, OPTIONS) [2.13] Produce templates (see XML::Compile::Schema::template() which detail the use of the OPERATION. Currently, only the PERLtemplateand skip_header. example: print $wsdl->explain('CheckStatus', PERL => 'INPUT'); print $wsdl->explain('CheckStatus', PERL => 'OUTPUT' , recurse => 1 # explain options , port => 'Soap12PortName' # operation options ); - $obj->findDef(CLASS, [QNAME|PREFIXED|NAME])'); - $obj->index([CLASS, [QNAME]]) - $obj->operations(OPTIONS) Return a list with all operations defined in the WSDL. -Option --Default binding <undef> port <undef> service <undef> - binding => NAME Only return operations which use the binding with the specified NAME. By default, all bindings are accepted. - port => NAME Return only operations related to the specified port NAME. By default operations from all ports. - service => NAME Only return operations related to the NAMEd service, by default all services. - $obj->printIndex([FILEHANDLE], OPTIONS) For available OPTIONS, see operations(). This method is useful to understand the structure of your WSDL: it shows a nested list of services, bindings, ports and portTypes. -Option --Defined in --Default show_declared XML::Compile::Cache <true> - show_declared => BOOLEAN - DETAILS Comparison Collecting definitions Organizing your definitions Addressing components Representing data-structures simpleType complexType/simpleContent complexType and complexType/complexContent Manually produced XML NODE Occurence Default Values Repetative blocks repetative sequence, choice, all repetative groups repetative substitutionGroups List type Using substitutionGroup constructs Wildcards any and anyAttribute ComplexType with "mixed" attribute hexBinary and base64Binary Schema hooks defining hooks general syntax hooks on matching types hooks on matching ids hooks on matching paths Typemaps Private variables in objects Typemap limitations Handling xsi:type Key rewrite key_rewrite via table rewrite via function key_rewrite when localNames collide rewrite for convenience pre-defined key_rewrite rules Initializing SOAP operations via WSDL); SEE ALSO This module is part of XML-Compile-SOAP distribution version 2.29,
https://metacpan.org/pod/release/MARKOV/XML-Compile-SOAP-2.29/lib/XML/Compile/WSDL11.pm
CC-MAIN-2015-22
en
refinedweb
A few weeks ago, O'Reilly Network ran an article on PMD, an open source, Java static-analysis tool sponsored under the umbrella of the Defense Advanced Research Projects Agency (DARPA) project "Cougaar." That article covered some of the basics of PMD--it's built on an Extended Backus Naur Format (EBNF) grammar, from which JavaCC generates a parser and JJTree generates an Java Abstract Syntax Tree (AST), and comes with a number of ready-to-run rules that you can run on your own source code. You can also write your own rules to enforce coding practices specific to your organization. In this article, we'll take a closer look at the AST, how it is generated, and some of its complexities. Then we'll write a custom PMD rule to find the creation of Thread objects. We'll write this custom rule two ways, first in the form of a Java class, and then in the form of an XPath expression. Thread Recall from the first article that the Java AST is a tree structure that represents a chunk of Java source code. For example, here's a simple code snippet and the corresponding AST: Thread t = new Thread(); FieldDeclaration Type Name VariableDeclarator VariableDeclaratorId VariableInitializer Expression PrimaryExpression PrimaryPrefix AllocationExpression Name Arguments Here we can see that the AST is a standard tree structure: a hierarchy of nodes of various types. All of the node types and their valid children are defined in the EBNF grammar file. For example, here's the definition of a FieldDeclaration: FieldDeclaration void FieldDeclaration() : { } { ( "public" { ((AccessNode) jjtThis).setPublic( true ); } | "protected" { ((AccessNode) jjtThis).setProtected( true ); } | "private" { ((AccessNode) jjtThis).setPrivate( true ); } | "static" { ((AccessNode) jjtThis).setStatic( true ); } | "final" { ((AccessNode) jjtThis).setFinal( true ); } | "transient" { ((AccessNode) jjtThis).setTransient( true ); } | "volatile" { ((AccessNode) jjtThis).setVolatile( true ); } )* Type() VariableDeclarator() ( "," VariableDeclarator() )* ";" } A FieldDeclaration is composed of a Type followed by at least one VariableDeclarator; for example, int x,y,z = 0;. A FieldDeclaration may also be preceeded by a couple of different modifiers, that is, Java keywords like transient or private. Since these modifiers are separated by a pipe symbol and followed by an asterisk, any number can appear in any order. All of these grammar rules eventually can be traced back to the Java Language Specification (JLS) (see the References section below). Type VariableDeclarator int x,y,z = 0; transient private Related Reading Java Enterprise Best Practices By The O'Reilly Java Authors The grammar doesn't enforce nuances like "a field can't be both public and private". That's the job of a semantic layer that would be built into a full compiler such as javac or Jikes. PMD avoids the job of validating modifiers--and the myriad other tasks a compiler must perform--by assuming the code is compilable. If it's not, PMD will report an error, skip that source file, and move on. After all, if a source file can't even be compiled, there's not much use in trying to check it for unused code. javac Jikes Looking closer at the grammar snippet above, we can also see some custom actions that occur when a particular token is found. For example, when the keyword public is found at the start of a FieldDeclaration, the parser that JavaCC generates will call the method setPublic(true) on the current node. The PMD grammar is full of this sort of thing, and new actions are continually being added. By the time a source code file makes it through the parser, a lot of work has been done that makes rule writing much easier. public setPublic(true) Now that we've reviewed the AST a bit more, let's write a custom PMD rule. As mentioned before, we'll assume we're writing Enterprise Java Beans, so we shouldn't be using some of the standard Java library classes. We shouldn't open a FileInputStream, start a ServerSocket, or instantiate a new Thread. To make sure our code is safe for use inside of an EJB container, let's write a rule that checks for Thread creation. FileInputStream ServerSocket Let's start by writing a Java class that traverses the AST. From the first article, recall that JJTree generates AST classes that support the Visitor pattern. Our class will register for callbacks when it hits a certain type of AST node, then poke around the surrounding nodes to see if it's found something interesting. Here's some boilerplace code: Visitor // Extend AbstractRule to enable the Visitor pattern // and get some handy utility methods public class EmptyIfStmtRule extends AbstractRule { } If you look back up at the AST for that initial code snippet--Thread t = new Thread();--you will find an AST type called an AllocationExpression. Yup, that sounds like what we're looking for: allocation of new Thread objects. Let's add in a hook to notify us when it hits a new [something] node: Thread t = new Thread(); AllocationExpression new [something] public class EmptyIfStmtRule extends AbstractRule { // make sure we get a callback for any object creation expressions public Object visit(ASTAllocationExpression node, Object data){ return super.visit(node, data); } } We've put a super.visit(node,data) in there so the Visitor will continue to visit children of this node. This lets us catch allocations within allocations, i.e., new Foo(new Thread()). Let's add in an if statement to exclude array allocations: super.visit(node,data) new Foo(new Thread()) if public class EmptyIfStmtRule extends AbstractRule { public Object visit(ASTAllocationExpression node, Object data){ // skip allocations of arrays and primitive types: // new int[], new byte[], new Object[] if ((node.jjtGetChild(0) instanceof ASTName) { return super.visit(node, data); } } } We're not concerned about array allocations, not even Thread-related allocations like Thread[] threads = new Thread[];. Why not? Because instantiating an array of Thread object references doesn't really create any new Thread objects. It just creates the object references. We'll focus on catching the actual creation of the Thread objects. Finally, let's add in a check for the Thread name: Thread[] threads = new Thread[]; public class EmptyIfStmtRule extends AbstractRule { public Object visit(ASTAllocationExpression node, Object data){ if ((node.jjtGetChild(0) instanceof ASTName && ((ASTName)node.jjtGetChild(0)).getImage().equals("Thread")) { // we've found one! Now we'll record a RuleViolation and move on ctx.getReport().addRuleViolation( createRuleViolation(ctx, node.getBeginLine())); } return super.visit(node, data); } } That about wraps up the Java code. Back in the first article, we described a PMD ruleset and the XML rule definition. Here's a possible ruleset definition containing the rule we just wrote: <> <example> <![CDATA[ Thread t = new Thread(); // don't do this! ]]> </example> </rule> </ruleset> You can put this ruleset on your CLASSPATH or refer to it directly, like this: CLASSPATH java net.sourceforge.pmd.PMD /path/to/src xml /path/to/ejbrules.xml Recently Daniel Sheppard enhanced PMD to allow rules to be written using XPath. We won't explain XPath completely here--it would require a large book--but generally speaking, XPath is a way of querying an XML document. You can write an XPath query to get a list of nodes that fit a certain pattern. For example, if you have an XML document with a list of departments and employees, you could write a simple XPath query that returns all the employees in a given department, and you wouldn't need to write DOM-traversal or SAX-listener code. XPath and XPointer Locating Content in XML Documents By John E. Simpson That's all well and good, but how does querying XML documents relate to PMD? Daniel noticed that an AST is a tree, just like an XML document. He downloaded the Jaxen XPath engine and wrote a class called a DocumentNavigator that allows Jaxen to traverse the AST. Jaxen gets the XPath expression, evaluates it, applies it to the AST, and returns a list of matching nodes to PMD. PMD creates RuleViolation objects from the matching nodes and moves along to the next source file. DocumentNavigator RuleViolation XPath is a new language, though, so why write PMD rules using XPath when you're already a whiz-bang Java programmer? The reason is that it's a whole lot easier to write simple rules using XPath. To illustrate, here's the "DontCreateThreadsRule" written as an XPath expression: //AllocationExpression[Name/@Image='Thread'][not(ArrayDimsAndInits)] Concise, eh? There's no Java class to track--you don't have to compile anything or put anything else on your CLASSPATH. Just add the XPath expression to your rule definition like this: <> <properties> <property name="xpath"> <value> <![CDATA[ //AllocationExpression[Name/@Image='Thread'][not(ArrayDimsAndInits)]> ]]> </value> </property> </properties> <example> <![CDATA[ Thread t = new Thread(); // don't do this! ]]> </example> </rule> </ruleset> Refer to the rule as usual to run it on your source code. You can learn a lot about XPath by looking at how the built-in PMD rules identify nodes, and you can also try out new XPath expressions using a PMD utility called the ASTViewer. Run this utility by executing the astviewer.bat or astviewer.sh scripts in the etc/ directory of the PMD distribution. It will bring up a window that looks like Figure 1. Type some code into the left-hand panel, put an XPath expression in the text field, click the "Go" button at the bottom of the window, and the other panels will be populated with the AST and the results of the XPath query. ASTViewer Figure 1. Screenshot of ASTViewer When should you use XPath to write a PMD rule? My initial thought is, "Anytime you can." I think that you'll find that many simple rules can be written using XPath, especially those that are checking for braces or a particular name. For example, almost all of the rules in the PMD basic ruleset and braces ruleset are now written as very short, concise XPath expressions. The more complicated rules--primarily those dealing with the symbol table--are probably still easiest to write in Java. We'll see, though. At some point we may even wrap the symbol table in a DocumentNavigator. There's still a lot of work to do on PMD. Now that this XPath infrastructure is in place, it might be possible to write an interactive rule editor. Ideally, you could open a GUI, type in a code snippet, select certain AST nodes, and an XPath expression that finds those nodes would be generated for you. PMD can always use more rules, of course. Currently, there are over 40 feature requests on the web site just waiting for someone to implement them. Also, PMD has a pretty weak symbol table, so it occasionally picks up a false positive. There's plenty of room for contributors to jump in and improve the code. This article has presented a more in-depth look at the Java AST and how it's defined. We've written a PMD rule that checks for Thread creation using two techniques--a Java class and an XPath query. Give PMD a try and see what it finds in your code today! Thanks to the Cougaar program and DARPA for supporting PMD. Thanks to Dan Sheppard for writing the XPath integration. Thanks also to the many other.
http://archive.oreilly.com/pub/a/onjava/2003/04/09/pmd_rules.html
CC-MAIN-2015-22
en
refinedweb
JDK Release Notes JDK 8 Release Notes: 'A change to a type is binary compatible with (equivalently, does not break binary compatibility with) pre-existing binaries if pre-existing binaries that previously linked without error will continue to link without error.' Behavioral: Behavioral compatibility includes the semantics of the code that is executed at runtime. For more information, see Kinds of Compatibility, a section in the OpenJDK Developer's Guide. "Behavioral Compatibility" "Incompatibilities between Java SE 8 and Java SE 7" "Incompatibilities between JDK 8 and JDK 7" "Features Removed from Java SE 8" "Features Removed from JDK 8" The following compatibility documents track incompatibility between adjacent Java versions. For example, this compatibility page reports only Java SE 8 incompatibilities with Java SE 7, and not with previous versions. To examine Java SE 8 incompatibilities with earlier Java versions, you must trace incompatibilities through the listed files, in order. Java SE 7 and JDK 7 Compatibility Incompatibilities in J2SE 5.0 (since 1.4.2) Java SE 8 is binary-compatible with Java SE 7 except for the incompatibilities listed below. Except for the noted incompatibilities, class files built with the Java SE 7 compiler will run correctly in Java SE 8. Class files built with the Java SE 8 compiler will not run on earlier releases of Java SE. Java SE 8 includes new language features and platform APIs. If these are used in a source file, that source file cannot be compiled on an earlier version of the Java platform. In general, the source compatibility policy is to avoid introducing source code incompatibilities. However, implementation of some Java SE 8 features required changes that could cause code that compiled with Java SE 7 to fail to compile with Java SE 8. See Incompatibilities between Java SE 8 and Java SE 7 and Incompatibilities between JDK 8 and JDK 7 for information. Deprecated APIs are interfaces that are supported only for compatibility with previous releases. The javac compiler generates a warning message whenever one of these is used, unless the -nowarn command-line option is used. It is recommended that programs be modified to eliminate the use of deprecated APIs. Some APIs in the sun.* packages have changed. These APIs are not intended for use by developers. Developers importing from sun.* packages do so at their own risk. For more details, see Why Developers Should Not Write Programs That Call 'sun' Packages. For a list of deprecated APIs, see Deprecated APIs.. The Java class file format has been updated for the Java SE 8 release. The class file version for Java SE 8 is 52.0 as per the JVM Specification. Version 52.0 class files produced by a Java SE 8 compiler cannot be used in earlier releases of Java SE. The following document has information on changes to the Java Language Specification (JLS) and the Java VM Specification (JVMS). JSR 337: Java SE 8 Release Contents. This section describes Java SE 8 incompatibilities in the Java Language, the JVM, or the Java SE API. Note that some APIs have been deprecated in this release and some features have been removed entirely. While these are incompatibilities, they have been called out in separate lists. For more information, see Deprecated APIs and Features Removed from JDK 8. A Maintenance Release of Java SE 8 was performed in March 2015. Incompatibilities arising out of this release are marked accordingly. Default methods in interfaces do not cause eager interface initialization The presence of method bodies in interfaces (due to default methods and static methods) means that interfaces may have to be initialized prior to their static fields being accessed. The Java SE 8 Editions of the Java Language Specification and JVM Specification did not account for this, with the result that JDK 8 exhibited unspecified behavior. (During class initialization, any superinterface declaring or inheriting a default method was eagerly initialized). The correct behavior is to initialize an interface eagerly if it declares a default method, and otherwise lazily when its static field is accessed or its static method is invoked (whether by an invoke* bytecode, or via method handle invocation, or via Core Reflection). behavioral Support default and static interface methods in JDWP Java SE 8 Language Specification has introduced static and default methods for interfaces. Since the JDWP APIs were not extended to reflect this addition, it was not possible for debuggers to invoke these methods. The JDWP specification has now been updated to allow such invocations. In JDWP, a new command "InvokeMethod" is added to the "InterfaceType" command set. The JDWP version is increased to 8. behavioral Verification of the invokespecial instruction has been tightened when the instruction refers to an instance initialization method (" <init>"). behavioral In Java SE 8 and above, the Java Virtual Machine considers the ACC_SUPER flag to be set in every class file, regardless of the actual value of the flag in the class file and the version of the class file. The ACC_SUPER flag affects the behavior of invokespecial instructions. behavioral classes_text When formatting date-time values using DateFormat and SimpleDateFormat, context sensitive month names are supported for languages that have the formatting and standalone forms of month names. For example, the preferred month name for January in the Czech language is ledna in the formatting form, while it is leden in the standalone form. The getMonthNames and getShortMonthNames methods of DateFormatSymbols return month names in the formatting form for those languages. Note that the month names returned by DateFormatSymbols were in the standalone form until Java SE 7. You can specify the formatting and/or standalone forms with the Calendar.getDisplayName and Calendar.getDisplayNames methods. Refer to the API documentation for details. behavioral javax.lang.model In Java SE 8, the value returned from javax.lang.model.type.TypeVariable.getUpperBound for TypeVariables with multiple bounds is different from the return value in earlier releases. An instance of the newly introduced IntersectionType is now returned, while originally an instance of DeclaredType was returned. This may cause a change in behavior of existing implementations of javax.lang.model.util.TypeVisitor: before, the visitDeclared method was invoked for the return value of TypeVariable.getUpperBound for type variables with multiple bounds, now visitIntersection is invoked. The difference can also observed by calling getKind() of the returned value. behavioral java.lang.reflect The java.lang.reflect.Proxy class that implements non-public interface will be non-public, final, and not abstract. Prior to Java SE 8, the proxy class was public, final and not abstract. source If existing code is using Proxy.getProxyClass and the Constructor.newInstance method to create a proxy instance, it will fail with IllegalAccessException if the caller is not in the same runtime package as the non-public proxy interface. For such code, it requires a source change to either (1) call Constructor.setAccessible to set the accessible flag to true, or (2) use the Proxy.newProxyInstance convenience method. The new permission ReflectPermission("newProxyInPackage.{package name}") permission may need to granted if existing code attempts to create a proxy to implement a non-public interface from a different runtime package. java.lang.reflect The java.lang.reflect.Proxy(InvocationHandler h) constructor now throws a NullPointerException if the given InvocationHandler parameter is null. behavioral Existing code that constructs a dynamic proxy instance with a null argument will now get NullPointerException. Such usage is expected to rarely exist since a null proxy instance has no use and will throw a NullPointerException when its method is invoked anyway. java.math Prior to Java SE 8, when BigDecimal.stripTrailingZeros was called on a value numerically equal to zero, it would return that value. Now the method instead returns the constant BigDecimal.ZERO. behavioral java.net In previous releases, the HttpURLConnection Digest Authentication implementation incorrectly quoted some values in the WWW-Authenticate Response Header. In the Java SE 8 release, these values are no longer quoted. This is in strict conformance with the RFC 2617, HTTP Authentication: Basic and Digest Access Authentication. Certain versions of some server implementations are known to expect the values to be quoted. HTTP requests to these servers might no longer successfully authenticate. Other server implementations that previously failed to authenticate because the values were quoted, might now successfully authenticate. behavioral java.net. behavioral java.net Prior to Java SE 8, the java.net.DatagramPacket constructors that accept a java.net.SocketAddress argument declared that a java.net.SocketException can be thrown. However, that exception was never thrown by those constructors. In the Java SE 8 release, these constructors do not declare that a java.net.SocketException can be thrown. If you have existing code that explicitly catches SocketException or its superclass java.io.IOException, remove the catch block for the exception before compiling with Java SE 8. source java.util.i18n. Refer to the descriptions of the LocaleServiceProvider class and its isSupportedLocale method for more details. behavioral java.awt Prior to the Java SE 8 release, a manual check was required to ensure that keystrokes were of type AWTKeyStroke. This check has been replaced by a generic check in the Component.setFocusTraversalKeys() and KeyboardFocusManager.setDefaultFocusTraversalKeys() methods. A ClassCastException is now thrown if any Object is not of type AWTKeyStroke. behavioral javax.net.ssl. behavioral The command line flags PermSize and MaxPermSize have been removed and are ignored. If used on the command line a warning will be emitted for each. Java HotSpot(TM) Server VM warning: ignoring option PermSize=32m; support was removed in 8.0 Java HotSpot(TM) Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0 source This section describes JDK 8 Incompatibilities in javac, in HotSpot, or in Java SE API. Note that some APIs have been deprecated in this release and some features have been removed entirely. While these are incompatibilities, they have been called out in separate lists. For more information, see Deprecated APIs and Features Removed from JDK 8. Support default and static interface methods in JDI Java SE 8 Language Specification has introduced static and default methods for interfaces. Both JDI specification and implementation have now been updated to allow such invocations. In JDI, the "com.sun.jdi.InterfaceType" class now contains an additional method "Value invokeMethod(ThreadReference thread, Method method, List<? extends Value> arguments, int options)". behavioral java.lang The steps used to determine the user's home directory on Windows have changed to follow the Microsoft recommended approach. This change might be observable on older editions of Windows or where registry settings or environment variables are set to other directories. behavioral java.lang.reflect Default methods affect the result of Class.getMethod and Class.getMethods behavioral). For example, say a class has two superinterfaces, I and J, each of which declare "int length();". Generally, we consider both methods to be members of the class; but if J also extends I, then as of Java SE 8, the class only inherits one method: J.length(). ("J.length", above) is a default method, it is important to filter out other overridden methods ("I.length" above). Starting with JDK 8u20, the implementation has been changed to perform this filtering step when the overrider is a default method. java.text When using the NumberFormat and DecimalFormat classes, the rounding behavior of previous versions of the JDK was wrong in some corner cases. This wrong behaviour happened when calling the format() method with a value that was very close to a tie, where the rounding position specified by the pattern of the NumberFormat or DecimalFormat instance was exactly sitting at the position of the tie. In that case, the wrong double rounding or erroneous non-rounding behavior occurred. As an example, when using the default recommended NumberFormatFormat API form: NumberFormat nf = java.text.NumberFormat.getInstance() followed by nf.format(0.8055d), the value 0.8055d is recorded in the computer as 0.80549999999999999378275106209912337362766265869140625 since this value cannot be represented exactly in the binary format. Here, the default rounding rule is "half-even", and the result of calling format() in JDK 7 is a wrong output of " 0.806", while the correct result is " 0.805", since the value recorded in memory by the computer is "below" the tie. This new behavior is also implemented for all rounding positions that might be defined by any pattern chosen by the programmer (non default patterns). behavioral java.util.collections In previous releases, some implementations of Collection.removeAll(Collection) and retainAll(Collection) would silently ignore a null parameter if the collection itself was empty. As of this release, collections will consistently throw a NullPointerException if null is provided as a parameter. behavioral java.management The requirement for the Management Interfaces being public, which is stated in the specification, is now being enforced. Non-public interfaces are not allowed to expose the management functionality. All the MBean and MXBean interfaces must be public. The system property jdk.jmx.mbeans.allowNonPublic is used to revert to the old behavior allowing non-public management interfaces. This property is considered to be transitional and will be removed in the subsequent releases. behavioral java.awt The java.awt.Component.setFocusTraversalKeys() method may throw ClassCastException (instead of IllegalArgumentException) if any Object in the keystrokes is not an AWTKeyStroke. behavioral java.security com.sun.media.sound has been added to list of restricted packages in JDK 8. Applications running under a SecurityManager will not be able to access classes in this package hierarchy unless granted explicit permission. The com.sun.media.sound package is an internal, unsupported package and is not meant to be used by external applications. source The JDK internal package com.sun.corba.se and sub-packages have been added to the restricted package list and cannot be used directly when running with a security manager. behavioral javac The type rules for binary comparisons in the Java Language Specification (JLS) Section 15.21 will now be correctly enforced by javac. Since the JDK 5 release, javac has accepted some programs with Object-primitive comparisons that are incorrectly typed according to JLS 15.21. These comparisons will now be correctly identified as type errors. behavioral javac As of this release, parameter and method annotations are copied to synthetic bridge methods.This fix implies that now for programs like: @Target(value = {ElementType.PARAMETER}) @Retention(RetentionPolicy.RUNTIME) @interface ParamAnnotation {} @Target(value = {ElementType.METHOD}) @Retention(RetentionPolicy.RUNTIME) @interface MethodAnnotation {} abstract class T<A,B> { B m(A a){return null;} } class CovariantReturnType extends T<Integer, Integer> { @MethodAnnotation Integer m(@ParamAnnotation Integer i) { return i; } public class VisibilityChange extends CovariantReturnType {} } Each generated bridge method will have all the annotations of the method it redirects to. Parameter annotations will also be copied. This change in the behavior may impact some annotations processor or in general any application that use the annotations. behavioral javac Parameter annotations are copied to automatically generated constructors for inner classes. This fix implies that now for programs such as the following: @Target(value = {ElementType.PARAMETER}) @interface ParamAnnotation {} public class initParams { public initParams(@ParamAnnotation int i) {} public void m() { new initParams(2) {}; } } The constructor generated for the inner class created in method m() will have a parameter int i with the annotation @ParamAnnotation. This change in the behavior may impact some annotations processors, or applications that use annotations. behavioral javac Recognition of the undocumented target values " 1.4.1", " 1.4.2" and " jsr14" have been removed from javac. The " 1.4.1" and " 1.4.2" targets used more up-to-date code generation idioms than " 1.4". The combination of options " -source 1.4 -target 1.5" will use those newer idioms, but also output a more recent class file format. The " jsr14" option was a transitional private option for when generics were first being added to the platform. Now generics should be compiled with a target of 1.5 or higher. behavioral javac The following code which compiled, with warnings, in JDK 7 will not compile in JDK 8: import java.util.List; class SampleClass { static class Baz<T> { public static List<Baz<Object>> sampleMethod(Baz<Object> param) { return null; } } private static void bar(Baz arg) { Baz element = Baz.sampleMethod(arg).get(0); } } Compiling this code in JDK 8 produces the following error: SampleClass.java:12: error:incompatible types: Object cannot be converted to Baz Baz element = Baz.sampleMethod(arg).get(0); Note: SampleClass.java uses unchecked or unsafe operations. Note: Recompile with -Xlint:unchecked for details. 1 error In this example, a raw type is being passed to the sampleMethod(Baz<Object>) method which is applicable by subtyping (see the JLS, Java SE 7 Edition, section 15.12.2.2). An unchecked conversion is necessary for the method to be applicable, so its return type is erased (see the JLS, Java SE 7 Edition, section 15.12.2.6). In this case the return type of sampleMethod(Baz<Object>) is java.util.List instead of java.util.List<Baz<Object>> and thus the return type of get(int) is Object, which is not assignment-compatible with Baz. For more information, see the related email exchange on java.net. source javac Definite assignment analysis applies to final field access using 'this' Traditionally, the Java language prohibited access to a blank final field that is definitely unassigned, but only when the field is accessed via a simple name such as 'x'. In Java SE 7, the rules were tightened to further prohibit access when the field is accessed via 'this.x'. (See.) Any program that is legal under the old rules but illegal under the new rules is unsafe, since it would necessarily access a blank final field that is not definitely assigned. Starting with JDK 8u20, the javac compiler has been updated to implement the Java SE 7 rules. source javac Interfaces need to be present when compiling against their implementations. Example: Client.java: import p1.A; class Client { void test() { new A.m(); } } p1/A.java: package p1; public class A implements I { public void m() { } } p1/I.java: package p1; public interface I { void m(); } If neither p1/I.java nor p1/I.class are available when compiling Client.java, the following error will be displayed: Client.java: error: cannot access I new A().m(); ^ class file for p1.I not found behavioral The Xalan Extension functions in JAXP have been changed so that, when a SecurityManager is present, the default implementation will always be used. This change affects the NodeSet created by DOM Document. Before the change, the DOM implementation was located through the DOM factory lookup process. With this change, when running with a SecurityManager, the lookup process is skipped and the default DOM implementation is used. This change only affects those applications that use a 3rd party DOM implementation. In general, the NodeSet structure is expected to be compatible with that of the JDK default implementation. behavioral JDK 8 ships with JAXP 1.6 and so includes specification updates that mandate the use of java.util.ServiceLoader for finding service providers. Service providers across JAXP will now be located consistently following the process as defined in java.util.ServiceLoader. The changes may result in some subtle differences from implementations of JDK 7 where the provider-configuration file may have been located differently, for example, by using a different getXXX method of the ClassLoader than ServiceLoader. Applications that implement their own Classloaders shall therefore make sure that the ClassLoaders' getXXX methods are implemented consistently so as to maintain compatibility. The StAX API, JSR 173, defined newInstance and newFactory methods with a factoryId as a parameter. Since there was no constraint on what the value could be in the StAX specification, it implied it could be any arbitrary string. With JDK 8 specification change, in the context of JAXP, the value of factoryId must be the name of the base service class if it is intended to represent the name of the service configuration file, that is, if it is not the name of a System Property. behavioral The ClassLoader parameter is no longer ignored in javax.xml.stream factories. The javax.xml.stream package contains factory classes ( XMLEventFactory, XMLOutputFactory, XMLInputFactory) which define newFactory methods that take two parameters: a factoryId and a ClassLoader. In JDK 7, the second parameter ( ClassLoader) was ignored by the factories when looking up and instantiating the services. This is no longer the case in JDK 8. Refer to the Java API documentation of those methods for more details. behavioral java.lang Thread.stop(Throwable) has been disabled The Thread.stop method has been deprecated since release 1.2. This method now throws an UnsupportedOperationException. java.net Removal of ftp from the list of required protocol handlers ftp is a legacy protocol that has long been superseded by more secure protocols for file transfer ( sftp for example). The ftp protocol has been dropped from the list of protocol handlers that are guaranteed to be present. It does not actually remove the protocol handler - applications that use this protocol will continue to work, but its presence is no longer required. java.security The key length is an important security parameter to determine the strength of public-key based cryptographic algorithms. RSA keys less than 1024 bits are considered breakable. In this update, certificates are blocked if they contain RSA keys of less than 1024 bits in length. This restriction is applied via the Java Security property, jdk.certpath.disabledAlgorithms. This impacts providers that adhere to this security property, for example, the Sun provider and the SunJSSE provider. The security property, jdk.certpath.disabledAlgorithms, also covers the use of the static keys (the key in X.509 certificate) used in TLS. With this key size restriction, those who use X.509 certificates based on RSA keys less than 1024 bits will encounter compatibility issues with certification path building and validation. This key size restriction also impacts JDK components that validate X.509 certificates, for example signed JAR verification, SSL/TLS transportation, and HTTPS connections. In order to avoid the compatibility issue, users who use X.509 certificates with RSA keys less than 1024 bits are recommended to update their certificates with stronger keys. As a workaround, at their own risk, users can adjust the key size restriction security property ( jdk.certpath.disabledAlgorithms) to permit smaller key sizes. behavioral Builds with automatic update turned off are no longer provided. To disable the automatic updating of the JRE, disable automatic updates and set the deployment.expiration.check.enabled property to false in the deployment configuration properties file. To disable automatic updates, remove the check from Check for Updates Automatically in the Update tab of the Java Control Panel. See Deployment Configuration File and Properties for information about the deployment.expiration.check.enabled property. Class Data Sharing file no longer created Previously, the Solaris SVID package installer created the Class Data Sharing file. In JDK 8, 32-bit Solaris is no longer supported, so the Class Data Sharing file is not created by default. To manually create the Class Data Sharing, execute the command: $JAVA_HOME/bin/java -Xshare:dump When the command completes, the Class Data Sharing file is located at $JAVA_HOME/jre/lib/server/{amd64,sparcv9}/classes.jsa. Removal of the classic Java Plug-in. The old Java Plug-in (the version available prior to Java SE 6 Update 10) has been removed from this release. Removal of Java Quick Starter The Java Quick Starter (JQS) service has been removed from this release. Removal of Active-X Bridge The Active-X Bridge has been removed from this release. sun.jdbc.odbc Removal of the JDBC-ODBC Bridge Starting with JDK 8, the JDBC-ODBC Bridge is no longer included with the JDK. The JDBC-ODBC Bridge has always been considered transitional and a non-supported product that was only provided with select JDK bundles and not included with the JRE. Instead, use a JDBC driver provided by the vendor of the database or a commercial JDBC Driver instead of the JDBC-ODBC Bridge. apt Removal of apt Tool. java Removal of 32-bit Solaris The 32-bit implementation of Java for the Solaris operating system has been removed from this release. The $JAVA_HOME/bin and $JAVA_HOME/jre/bin directories now contain the 64-bit binaries. For transitional purposes, the ISA (Instruction Specific Architecture) directories $JAVA_HOME/bin/{sparcv9,amd64} and $JAVA_HOME/jre/{sparcv9,amd64} contain symbolic links that point to the binaries. These ISA directories will be removed in JDK9. The install packages SUNWj8rt, SUNWj8dev and SUNWj8dmo, which previously contained 32-bit binaries, now contain the 64-bit binaries. The install packages SUNWj8rtx, SUNWj8dvx and SUNWj8dmx have been removed. The 64-bit binaries do not contain deployment tools such as Java Web Start and Java Plug-in, therefore desktop integration is no longer required. Note that 64-bit Solaris binaries cannot load JNI libraries that are compiled and linked for 32-bit Solaris. Therefore any JNI library created for 32-bit Solaris needs to be re-compiled for 64-bit Solaris. java.lang:class_loading The endorsed-standards override mechanism allows implementations of newer versions of standards maintained outside of the Java Community Process, or of standalone APIs that are part of the Java SE Platform yet continue to evolve independently, to be installed into a run-time image. A modular image is composed of modules rather than jar files. Going forward we expect to support endorsed standards and standalone APIs in modular form only, via the concept of upgradeable modules. This feature is deprecated in JDK 8u40. This deprecation is made in preparation for modules in a future release of the Java SE Platform. For more information, see JEP 200 - The Modular JDK and JEP 220 - Modular Run-Time Images. There is no change to the default behavior. java.lang:class_loading The extension mechanism is deprecated in JDK 8u40. This deprecation is made in preparation for modules in a future release of the Java SE Platform. For more information, see JEP 200 - The Modular JDK and JEP 220 - Modular Run-Time Images. Deprecation of this feature required the following specification changes: java.lang.System.getProperties(), the specification for the java.ext.dirssystem property was amended to include a deprecation notice indicating that it may be removed in a future release. java.util.jar.Attributes.Name, the EXTENSION_INSTALLATION, IMPLEMENTATION_URL, and IMPLEMENTATION_VENDOR_IDfields were deprecated indicating that class path should be used instead. Extension-Name, Extension-List, <extension>-Extension-Name, <extension>-Specification-Version, <extension>-Implementation-Vendor-ID, <extension>-Implementation-Version, <extension>-Implementation-URL, Implementation-Vendor-Id, Implementation-URL, and Extension-Installation. There is no change to the runtime behavior. java.lang The SecurityManager.checkMemberAccess method is deprecated. It has been error-prone to depend on the caller on a stack frame at depth 4. The JDK implementation no longer calls the SecurityManager.checkMemberAccess method to perform member access check; instead it calls the SecurityManager.checkPermission method. Custom SecurityManager implementation that overrides the checkMemberAccess method may be impacted by this change as the overridden version will not be called. java.lang SecurityManager checkTopLevelWindow, checkSystemClipboardAccess, checkAwtEventQueueAccess checkPermission java.rmi The HTTP proxying feature from RMI/JRMP is now deprecated, and HTTP proxying is disabled by default. java.rmi Support for statically-generated stubs from RMI (JRMP) is now deprecated. java.util.jar Pack200.Packer addPropertyChangeListener and removePropertyChangeListener These methods are expected to be removed in a future release of Java SE. To monitor the progress of the packer, poll the value of the PROGRESS property. java.util.jar Pack200.Unpacker addPropertyChangeListener and removePropertyChangeListener These methods are expected to be removed in a future release of Java SE. To monitor the progress of the unpacker, poll the value of the PROGRESS property. java.util.logging LogManager addPropertyChangeListener and removePropertyChangeListener These methods are expected to be removed in a future release of Java SE. com.sun.security.auth.callback DialogCallbackHandler javax.management The JSR-160 specification was updated so that the RMI connector is no longer required to support the IIOP transport. Oracle's JDK 8 continues to support the IIOP transport, however, support for the IIOP transport is expected to be removed in a future update of the JMX Remote API. javax.accessibility javax.swing.JComponent.accessibleFocusHandler java.awt.Component.AccessibleAWTComponent.accessibleAWTFocusHandler Builder<T> All classes implementing the Builder<T> interface. Use appropriate constructors and setters to construct objects. java.security The java.security.SecurityPermission insertProvider.{provider name} target name is discouraged from further use because it is possible to circumvent the name restrictions by overriding the java.security.Provider.getName method. Also, there is an equivalent level of risk associated with granting code permission to insert a provider with a specific name, or any name it chooses. The new insertProvider target name. Compatibility with existing policy files has been preserved, as both the old and new permission will be checked by the Security.addProvider and insertProviderAt methods. The following garbage collector combinations are deprecated: DefNew + CMS ParNew + SerialOld Incremental CMS Corresponding command-line options produce warning messages and it is recommended to avoid using them. These options will be removed in one of the next major releases. The -Xincgc option is deprecated The -XX:CMSIncrementalMode option is deprecated. Note that this also affects all CMSIncremental options. The -XX:+UseParNewGC option is deprecated, unless you also specify -XX:+UseConcMarkSweepGC. The -XX:-UseParNewGC option is deprecated only in combination with -XX:+UseConcMarkSweepGC. For more information, see. The foreground collector in CMS has been deprecated and is expected to be removed in a future release. Use G1 or regular CMS instead.
http://www.oracle.com/technetwork/java/javase/8-compatibility-guide-2156366.html
CC-MAIN-2015-22
en
refinedweb
InStrRev(string1, string2[, start[, compare]])OK, let's consider... 'InStrRev(s1, s2, comparison => vbTextCompare )'It doesn't bother people that I hard-coded the function name into my calling code, why should formal parameter names be any different? In many modern IDEs, the system displays the formal parameter names to me, the developer, as I type the call. Transact-SQL stored procedures (Sybase & SQL Server) support sequential and named parameter passing (and a mix, in any call). I've found KeywordParameterPassing to be a very good thing when many of the parameters are optional. -- JeffGrigg I second this: Keyword parameters reduce the likelihood that changes to a module will require changes to modules that use it, not increase it. Although one should RefactorMercilessly either way, it's still good to reduce the amount of work needed when refactoring. Further, keywords seem likely to be easier to remember than parameter orderings, leading to faster coding; I know they help me in CommonLisp. -- DanielKnapp I disagree that keywords are more likely to be remembered than parameter orderings. Remembering whether the InStrRev() function's first parameter's keyword is "inputString" or "stringToFind" or "string_to_find" or "string1" or whatever is just extra work. A modern IDE may help with this, but a modern IDE will also make it easy to handle positional parameters. KeywordParameterPassing may be valuable when there are lots of parameters and many are optional, but a function with lots of parameters with many optional is usually a CodeSmell (a good candidate for the IntroduceParameterObject or RefactorParametersToMemberVariables refactorings). If KPP is an optional feature of a language, then I have no problem with it, but using it for all function invocations adds a lot of verbosity/noise to code. -- KrisJohnson Concur. Modern IDEs make this a moot point. I never have to remember parameter order because every IDE I use shows me the type and name of each parameter on demand. The value of this feature can't be over-emphasized, especially when dealing with Java's GridBagConstraints? constructor! 11 parameters!! -- EricHodges Ok, but the point is not necessarily to be able to write, but also to be able to read. Now imagine those 11 parameters of GridBagConstraints? (most of them are ints, some are floats), if you see them in a line of code, it is simply ugly !! Combined with the lack of optional parameters, it is a terrible combination. IntroduceParameterObject helps a little bit and IntroduceParameterObject not that much, but these are not good refactoring or good design patterns, they are simply workarounds to the lack of capability in the language. Take for example the typical parse operation of an XML file. Thanks in part to the complexity of Xml, in the Xerces parser this has more than 20 boolean options. Now imagine how you'd see a method call: ParserUtility?.parse(InputStream?, true,false, false, false, true, true, false,...)wouldn't that look cool in the source code ? So thanks to this "feature" of XML, plus the language design, plus JAXP transforms that into: SAXParserFactory factory= SAXParserFactoryImpl.newInstance(); factory.setNamespaceAware(false); factory.setValidating(false); SAXParser parser; try { factory.setFeature("", false); parser= factory.newSAXParser(); } catch(Exception ex) { throw new RuntimeException("Exception in parser factory: "+ex);} // now we can call parser.parse()So courtesy of refactorings and some FactoryPattern I ended up with > 6 lines of code where one should have sufficed, plus that ugly dependency on a stupid URL, for which the symbolic name in Xerces was protected. A designer in a language with KeywordParameterPassing will have no problem to offer the default parameters and named parameters so that I can write: ParserUtility?.parse inputStream ~namespace:false ~validate:false ~loadExternalDTDs: falseNow isn't this much more elegant than having to go through all the hooplas of refactoring and factories and who knows what else ? Keyword parameter passing and default parameter values are definitely a cool language feature. You're right. There's no excuse for the GridBagConstraints? constructor. Keywords would definitely help. (Wow. Costin convinced me of something. That's a first!) -- EricHodges Glad to here that, than maybe the "refactoring" pattern deserves a page on its own. EmulateKeywordAndDefaultParameters. printImage("foo", forceColor=true) printImage("foo", #forceColor) // short-cut printImage("foo", #forceColor true)In the second one we can have named parameters that serve as commands by defaulting to a value of "true" if no sub-parameter is supplied. It requires a different syntax arrangement than equal signs, though. I like that approach, but don't see it very often. It is probably a personal choice that sparks HolyWars. type field is integer range 0..integer'last; type number_base is integer range 2..16; default_width : field := integer'width; default_base : number_base := 10; procedure put (item : in integer; width: in field := default_width; base : in number_base := default_base);and then, when we call this put procedure, put (37, 4); put (item => 37, base => 8); put (base => 8, item => 37); put (37, base => 8);Keyword parameters passing has advantages over positional parameters passing when there are a lot of parameters for a function, with default parameters. A LongParameterList is generally considered a CodeSmell. (defun foo (a b &key (c 10) (error nil)) ...)foo may be called (foo 1 2) ; c will have value 10, error will be nil (foo 1 2 :c 12) ; c has value 12, error will be nil (foo 2 3 :error t) ; c has value 10 (foo :c 10) ; Error! a,b missing.Parsing keywords in CommonLisp is expensive, so stylistically, this is only done for top-level, publicly visible functions. They are also considered to be somewhat self-documenting.
http://c2.com/cgi/wiki?KeywordParameterPassing
CC-MAIN-2015-22
en
refinedweb
Satheesh Bandaram wrote: > Hi > > I am attaching a patch that adds SYNONYM support to Derby.. [snip] > Let me know if you have any comments or suggestions. My vote is *+1*, to > accept this patch. Any explanation of how SYNONYMs are implemented? You had indicated a couple of days ago you were undecided on how to represent them in system catalogs. Is the namespace for SYNONYM the same as for tables, or separate? If separate and A.B is both a table name and a synoymn name, which is selected? Dan.
http://mail-archives.apache.org/mod_mbox/db-derby-dev/200506.mbox/%3C429E3EA0.9010000@debrunners.com%3E
CC-MAIN-2015-22
en
refinedweb
Pydev 1.0.8 has ben just released... A major bug triggered this release (that's why it's been issued in less than one day from the previous release). Mainly, if you had a file that had a docstring at the global level with an empty line, it could get to a loop when adding a new-line to the document. This has been fixed and is already available for download. Also, 2 other minors have been done for Pydev Extensions (but they surely would not be worth a release for them). -- Fabio 2 comments: I still have same debug troubles - since 1.0.6 (last 1.0.5 works fine). Trouble point is pydo/utils.py def _import_a_class(fqcn): line 56: return getattr(module, className) - exactly "className" - when i watch it python.exe raise exception and is terminated. Can you report that as a bug in the sf bugtracker? () Also, it would be nice more details, such as when is this called (when you hit a breakpoint? Or any run in debug mode?) It would also be nice that you could change the library to print what exactly are the parameters it is receiving when this happens (module and className)... To me it appears a bug in pydo that is being triggered by the debugger rather than the other way (so, it might be nice for you to report that to the pydo guys too) Cheers, Fabio
http://pydev.blogspot.com/2006/05/pydev-release-108.html?showComment=1148628780000
CC-MAIN-2015-22
en
refinedweb
Suppose that you don't use an engine like "Unity3D" that has some built-in ways to deal with spritesheets, how would you deal with the "spritesheet problem"? As it is known, spritesheets are better than loading separate .png files for animation purposes (Considering that a character has movement, atack, defense, death, etc, animations). Most of the people, I guess, would take the 0x, 0y pixel colorkey and make it transparent for the whole image AND cut manually all animations and store them in a collection of some sort. The key point here is automation. If a spritesheet has irregular sprites (For example, the first one is a rectangle of 30 by 25 pixels, and the second one is irregularly far from the first sprite image), one cannot implement a function to cut all next sprites based on the rectangle of the first, because all sprites would have parts missing, etc. Manually storing every rectangle position in the sprite sheet seems to be a great option for a general game, but the same does not apply to an engine. I'm developing an engine on Pygame/Python, and, therefore, I want a clever way to separate/cut the inner sprite rectangles and return them as a list. The solution? Looping pixel per pixel and applying some logic based on the colorkey. How would you do that? Would you bother to implement such a function? What do you think about it? For the sake of the topic, here's my method for cutting based on the first rectangle's position and size (It does not work if the spritesheet is irregular): def getInnerSprites(self, xOffset, yOffset, innerRectWidth, innerRectHeight, innerRectQuantity): """ If the grict is a sprite sheet, it returns a list of sprites based on the first offsets and the width and the height of the sprite rect inside the sprite sheet. """ animation = [] if self.isSpriteSheet == True: for i in range(xOffset, innerRectWidth*innerRectQuantity, innerRectWidth): print i animation.append(self.getSubSurface((i,yOffset,innerRectWidth,innerRectHeight))) else: print "The Grict must be a sprite sheet in order to be animated." return animation I'll try to implement the "getInnerSpritesByPixel()" method. Spritesheets are a key thing in a complex game like a MMORPG, where almost every item has its own animation. Such method is more than necessary.
http://www.gamedev.net/topic/654308-spritesheet-algorithms/
CC-MAIN-2015-22
en
refinedweb
17 June 2013 16:54 [Source: ICIS news] HOUSTON (ICIS)--Air Products has completed expansions on three packaged gases plants in ?xml:namespace> The $10m (€7.5m) project included new state-of-the-art gas transfills at plants in Kunshan, The plants can now supply Air Products’ Linx gas regulators and a range of shielding gases and other industrial gases for industries such as welding, cutting and other processes for metal fabrication, the company said. "This investment supports Air Products' commitment to the metal fabrication business in
http://www.icis.com/Articles/2013/06/17/9679287/us-air-products-expands-three-packaged-gases-plants-in.html
CC-MAIN-2015-22
en
refinedweb
CR There was a discussion a while ago, about using W3C icons. LK Since it deals with branding can not modify the W3C icons. CR You can not have the two icons together as one. LK Right. If they are on the same page that's fine, but can't join them. Resolved: You can not modify W3C icons but may use them along side a logo of your design. WC Need exact date. PF would like to piggyback and prefer before XML 2000. LK How long? WC 1 day for us, 1/2 day with PF? LK What about confidentiality issues? WC Who does not have member access? BM Does not although CAST has. WC We can just avoid talking about member private issues. Action WC: ask wai domain about PF and ER meeting and what to do about some issues that might be member private. LK Do we sit in on their meeting or they sit in on ours? /* XML 2000 is 3-8. Sunday through Friday. preconference tutorials are 3 & 4. */ Resolved: Thursday and Friday before XML 2000 is preferred (rather than Saturday, although Saturday is possible depending on what PF wants to do). Len's notes from 18 August 2000. LK Some pages already have link to TOC. This would give the functionality "automatically." "title" as new attribute in CSS. HB Could something that is in CSS be used? LK Not sure. HB Hate to add something new that the processors don't recognize. LK You can insert text into the text flow. If you use something that puts in the prefix that would make it invisible. WC Could have 2 style sheets. LK Or an extra style sheet that inserts the info so as not to duplicate the main style sheet. The user agent would have the ability to search on these inserted strings. Since they are inserted and visible, user agents that haven't implemented would show it. Then have JAWS set them up. WC How will the cascade work if you define the same class twice? Will they unite or will one override the other? LK So 2 questions: what does CSS spec say, what do browsers actually do? /* WC reads through CSS 1 spec looking for info for defining classes in multiple sheets */ WC Could use ID and class? HB Don't want to use ID. Have the info in the doc itself or separate style sheet. LK You have a choice. Link to it or include in header. WC What about the use of namespaces as suggested by Dan and Charles in their discussion? Looking for something that works today? Thoughts? LK Didn't think about it much. How would it work? WC Each class would have an associated class that would have info about it. LK So RDF creates the link to info. WC Namespaces is more of an XML convention rather than /* scribe notes other discussions on this: /* Brian has to leave, so discussed latest version of Bobby */ LK WC is there something that fully explains the namespaces proposal? WC Yes, the discussions from Dan, but I'm not sure if it associates enough information, it focuses on links. LK It is just a way of being able to use class names without conflicting with someone elses's use. But, it has nothing to do with "title." Therefore seems to be an independent issue. However, we could use the two together. Not an alternative but a parallel. Action LK: add notes to draft to discuss :before and :after pseudo-elements to insert content. Then send on to WCAG and PF, also send heads-up to UA. LK If there was a "title" property, is there another use in the general market place that would give it additional value? Tool tips? WC Tool tip not a strong case, since lots of developers annoyed with tool tips and alt already. HB We could then have a variety of titles for each element. WC Sounds confusing and complex. How decide which one? Really need? Perhaps use it to create a link to the info contained within to simplify the page more. E.g. like frames: use the title as a link to the frame and handle each in their own "window." Good thing for mobile devices. HB "http" duplicaiton. BM Can clean up easily. HB Webable is in cahoots w/you somehow. BM It's on our news page. We've signed an agreement with them. Anytime someone says, "I don't have the time, can someone make them compliant?" We point them to webable. We can add others to the list. LK Is this just a list that anyone wants to be added to? BM We can make arrangements. LK The question marks are links that don't go anywhere. BM Went ahead and released. It's a bug. LK Best just to make them inactive rather than have go nowhere. BM good suggestion. LK There are some pages that make the error of not wrapping javascript in comment, shows up in header. BM We know about, but low priority. LK At the top of some pages, you see javascript. When I see that, I think "Bobby's broken" although I know it is is problem with the page. BM We haven't had many comments on that. HB I was surprised when I saw it. BM Good to know. LK I will be sending these comments so we don't minute them in detail. /* scribe relaxes a bit */ LK "user checks are not triggered." BM Wording should be there. LK On the page that I reviewed, the first 3 items did not have line numbers. BM Not sure what to do about that. If we don't give a specific check, like "color contrast," we don't give line numbers. LK So you have 3 categories: BM, yes on 2, gets too overwhelming to present all instances. Adding "make sure that your alt-text is real" in the list of things to always check. LK My concern is that you can pass Bobby with "garbage" alt-text, yet you don't want to clutter up the page. LK What is "BA"? BM Bobby approved. Isn't that described? This is the minimum to follow. LK Advanced options could be missed. Search engines put it near the submit button. BM Good idea. We could make it more noticeable. LK Good to impersonate browsers, but can't do it completely accurately because of browser sniffing javascript tricks. LK /* several more suggestions for cosmetics. will be sent as e-mail to the list */ HB HTML 4 check does not specify if it is strict, frame, or transitional. BM Not checking DTD but tag list. HB But tag list is different. Next release? BM Couple months hope to clean up some bugs and rerelease. /* discussion returns to classes */ WC I'm on vacation next week. We need a scribe for Monday. Then September 4 is a holiday, but we are scheduled to meet with AU on the 5th anyway. Open: We need a scribe for the 28 August meeting. Resolved: No meeting on 4 September, meet with UA on 5 September. $Date: 2000/11/08 08:17:26 $ Wendy Chisholm
http://www.w3.org/WAI/ER/IG/2000/08/21-minutes.html
CC-MAIN-2015-22
en
refinedweb
19 May 2010 13:02 [Source: ICIS news] LONDON (ICIS news)- PP grades from ?xml:namespace> ($1 = €0.82) For more on polypropylene visit ICIS chemical intelligence Please visit the complete ICIS plants and projects database
http://www.icis.com/Articles/2010/05/19/9360938/lyondellbasell-calls-force-majeure-on-polish-polypropylene.html
CC-MAIN-2015-22
en
refinedweb
Intrepid gcc -O2 breaks string appending with sprintf(), due to fortify source patch Bug Description Binary package hint: gcc-4.3 In Hardy and previous releases, one could use statements such as sprintf(buf, "%s %s%d", buf, foo, bar); to append formatted text to a buffer buf. Intrepid’s gcc-4.3, which has fortify source turned on by default when compiling with -O2, breaks this pattern. This introduced mysterious bugs into an application I was compiling (the BarnOwl IM client). Test case: gcc -O2 sprintf-test.c -o sprintf-test <http:// #include <stdio.h> char buf[80] = "not "; int main() { sprintf(buf, "%sfail", buf); puts(buf); return 0; } This outputs "not fail" in Hardy, and "fail" in Intrepid. The assembly output shows that the bug has been introduced by replacing the sprintf(buf, "%sfail", buf) call with __sprintf_chk(buf, 1, 80, "%sfail", buf). A workaround is to disable fortify source (gcc -U_FORTIFY_SOURCE). One might argue that this usage of sprintf() is questionable. I had been under the impression that it is valid, and found many web pages that agree with me, though I was not able to find an authoritative statement either way citing the C specification. I decided to investigate how common this pattern is in real source code. You can search a source file for instances of it with this regex: pcregrep -M 'sprintf\ To determine how common the pattern is, I wrote a script to track down instances using Google Code Search, and found 2888 matches: <http:// (For the curious: the script uses a variant of the regex above. I had to use a binary search to emulate backreferences, which aren’t supported by Code Search, so the script makes 46188 queries and takes a rather long time to run. The source is available at <http:// My conclusion is that, whether or not this pattern is technically allowed by the C specification, it is common enough that the compiler should be fixed, if that is at all possible. I’m about 8% of the way through my list, and it looks like there might indeed be a _lot_ of affected Ubuntu packages. I’ll stop filing bugs for now and see what happens with these ones. Given the large number of affected packages, perhaps it is better to fix the compiler option. I'm curious to see what upstream thinks of this. Anders Kaseorg noticed that the use of _FORTIFY_SOURCE breaks a specific use of sprintf (see attached): $ gcc -O0 -o foo foo.c && ./foo not fail $ gcc -O2 -o foo foo.c && ./foo not fail $ gcc -O2 -D_FORTIFY_SOURCE=2 -o foo foo.c && ./foo fail The original report was filed in Ubuntu, where -D_FORTIFY_SOURCE=2 is enabled by default: https:/ C99 states:. The man page does not mention this limitation, and prior to the use of __sprintf_chk, this style of call worked as expected. As such, a large volume of source code uses this style of call: http:// It seems that it would make sense to fix __sprintf_chk, or very loudly mention the C99-described overlap- Created attachment 3095 test case sprintf(buf, "%sfoo", buf) is UNDEFINED. Thanks for the clarification. However, I think it is still a bug that the limitation is not mentioned in the manpage. Then contact whoever wrote it. Searching all of Ubuntu source in Jaunty: 29 main 0 restricted 182 universe 15 multiverse > You can search a source file for instances of it with this regex: > pcregrep -M 'sprintf\ the regexp doesn't search for snprintf, and doesn't look for functions spanning more than one line. > I’ll stop filing bugs for now and see what happens with these ones. the bug reports are ok, but separate reports with a common tag should be filed instead. >> pcregrep -M 'sprintf\ > > the regexp doesn't search for snprintf, and doesn't look for functions spanning more than one line. It does with pcregrep -M. For example, $ pcregrep -M 'sprintf\ linux- ret += sprintf(buf, "%sEntry: %d\n", buf, i); ret += sprintf(buf, "%sReads: %s\tNew Entries: %s\n", buf, ret += sprintf(buf, "%sSubCache: %x\tIndex: %x\n", buf, (reg & 0x30000) >> 16, reg & 0xfff); However, it appears that the multiline results did not show up in Kees’ reports, so the reports should be rerun with pcregrep -M if that is possible. For snprintf, use pcregrep -M 'snprintf\ yeah, my search was glitched. New logs attached only count difference was universe, which went to 187. man 3p sprintf certainly documents it: "If copying takes place between objects that overlap as a result of a call to sprintf() or snprintf(), the results are undefined." (In reply to comment #6) > I have submitted a patch for linux-manpages: > http:// I've applied the following patch for man-pages-3.16. --- a/man3/printf.3 +++ b/man3/printf.3 @@ -133,6 +133,17 @@ string that specifies how subsequent arguments (or arguments accessed via the variable-length argument facilities of .BR stdarg (3)) are converted for output. + +C99 and POSIX.1-2001 specify that the results are undefined if a call to +.BR sprintf (), +.BR snprintf (), +.BR vsprintf (), +or +.BR vsnprintf () +would cause to copying to take place between objects that overlap +(e.g., if the target string array and one of the supplied input arguments +refer to the same buffer). +See NOTES. .SS "Return value" Upon successful return, these functions return the number of characters printed (not including the @@ -851,6 +862,26 @@ and conversion characters \fBa\fP and \fBA\fP. glibc 2.2 adds the conversion character \fBF\fP with C99 semantics, and the flag character \fBI\fP. .SH NOTES +Some programs imprudently rely on code such as the following + + sprintf(buf, "%s some further text", buf); + +to append text to +.IR buf . +However, the standards explicitly note that the results are undefined +if source and destination buffers overlap when calling +.BR sprintf (), +.BR snprintf (), +.BR vsprintf (), +and +.BR vsnprintf (). +.\" http:// +Depending on the version of +.BR gcc (1) +used, and the compiler options employed, calls such as the above will +.B not +produce the expected results. + The glibc implementation of the functions .BR snprintf () and Kees, some quick questions about your search: • There are no instances of snprintf in your results. I could believe that there aren’t any, because this use of snprintf has been broken for longer than this use of sprintf, but I just wanted to confirm this. • Does your search include DBS style tarball- Matthias, shall I go ahead and use massfile to create 231 bugs for this issue? I have attached a proposed massfile template, and tested it by filing bug #310800 against barnowl. I noticed though that massfile didn’t successfully add the sprintf-append tag as I was expecting; I’m not sure why. Oops, and I would use the right bug URL, of course. On Tue, Dec 23, 2008 at 06:14:32AM -0000, Anders Kaseorg wrote: > • There are no instances of snprintf in your results. I haven't yet re-run the search with snprintf. > • Does your search include DBS style tarball- It does not yet, but I've put together a script that will attempt to apply all patches before doing the search. I was going to merge this when adding the snprintf regex. > Matthias, shall I go ahead and use massfile to create 231 bugs for this > issue? It probably makes more sense to approach Debian with the mass-filing. I'd be happy to help drive this. http:// Kees Cook schrieb: > On Tue, Dec 23, 2008 at 06:14:32AM -0000, Anders Kaseorg wrote: >> Matthias, shall I go ahead and use massfile to create 231 bugs for this >> issue? > > It probably makes more sense to approach Debian with the mass-filing. I'd > be happy to help drive this. seems to be the right thing. please use a non RC severity and a separate user tag to identify these reports. http:// 29 main 15 multiverse 208 universe 251 total I removed a few copies of the kernel, which all show the same report, as well as gnokii, which had a note in the Changelog about how they'd fixed it already. (er, 252 total -- I added "linux" back in at the last moment) I'm also testing a patch to glibc to avoid the change in behavior when using _FORTIFY_SOURCE. Created attachment 3625 work-around pre-trunc behavior This patch restores the prior sprintf behavior. Looking through _IO_str_ "s" to lead with a NULL. Is there anything wrong with this work-around, which could be used until the number of affected upstream sources is not quite so large? Marking the source packages as Invalid, since they will be handled upstream. The glibc patch restores the original behavior, so it will get SRU'd into Intrepid and fixed in Jaunty. This bug was fixed in the package glibc - 2.9-0ubuntu6 --------------- glibc (2.9-0ubuntu6) jaunty; urgency=low [ Matthias Klose ] * Merge with Debian, glibc-2.9 branch, r3200. [ Kees Cook ] * Add debian/ pre-clear target buffers on sprintf to retain backward compatibility (LP: #305901). -- Kees Cook <email address hidden> Thu, 01 Jan 2009 13:28:59 -0800 Accepted glibc into intrepid-proposed, please test and give feedback here. Please see https:/ Not sure whether this is related (please tell me if it's not), but that is the only significant update I've done since yesterday (with xine...) : With glibc 2.8~20080505- * The system takes a loooong time to scan the different WiFi networks available * A "sudo iwlist wlan0 scan" returns "print_ Please let me know if you need additional information. Mathieu: does reverting to an earlier glibc solve the problem for you? Actually : * On the time I've seen this problem, it was still there after three reboots. But it has now disappeared... * If I try to revert to an earlier version of glibc, synaptic wants as well to remove 56 packets including some important ones... So I prefer not to try. So for the moment, this problem has disappeared. Anyone who has this glibc version installed and can tell us whether the original problems/crashes are now fixed, as well as if the system generally still works as before? My intrepid machines with this glibc show the expected behavior and show no signs of regression. I can confirm that the intrepid-proposed libc6 fixes both my test program and the Intrepid barnowl package. This bug was fixed in the package glibc - 2.8~20080505- --------------- glibc (2.8~20080505- * Add debian/ pre-clear target buffers on sprintf to retain backward compatibility (LP: #305901). -- Kees Cook <email address hidden> Wed, 07 Jan 2009 20:15:15 -0800 *** Bug 260998 has been marked as a duplicate of this bug. *** Seen from the domain http:// Page where seen: http:// Marked for reference. Resolved as fixed @bugzilla. C99 (at least the draft that’s available online) actually defines this code as invalid. #include <stdio.h> int sprintf(char * restrict s, const char * restrict format, ...); .” So I guess the real answer is to fix the affected source. It might be nice to know if any software in Ubuntu is affected.
https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/305901
CC-MAIN-2015-22
en
refinedweb
igor.py 0.9 Read Igor Pro files from python Note This package has been superceded by igor. Use the new igor package with the following to get the same interface: import igor.igorpy as igor Igor.py Read Igor Pro files from python. Install Using pip: $ pip install igor.py Using source, download and expand the source tree, change to the source directory and type: $ python setup.py install Change History 0.9 2011-10-14 - access access to data object using f.name in addition to f[‘name’] and f[i] - allow a data object to be used directly as an array, e.g., numpy.sumf.name) 0.8 2011-04-27 - initial release Maintenance When a new version of the package is ready, increment __version__ in igor.py and enter: $ python setup.py sdist upload This will place a new version on pypi. - Downloads (All Versions): - 2 downloads in the last day - 35 downloads in the last week - 146 downloads in the last month - Author: Paul Kienzle - License: public domain - Categories - Package Index Owner: pkienzle - DOAP record: igor.py-0.9.xml
https://pypi.python.org/pypi/igor.py
CC-MAIN-2015-22
en
refinedweb
Thanks to everyone that attended today's webcast session on class libraries with Visual Basic .NET. Today was Part 3 of the 22-part webcast series, Visual Basic .NET Soup to Nuts. Here are some resources for this webcast: We also showed some tools you get when you download the .NET Framework SDK. You can download SDKs (and other stuff) from here. Also, be sure to visit the .NET Framework Developer Center. From these two sites, you can drill into the documentation for the .NET Framework and Visual Basic. For instance, we talked about the System Namespace today, and here's the documentation for it. You may also want to review the namespace naming guidelines for guidance on how to name your own namespaces in your custom class libraries. I also showed how the My Namespace in Visual Basic 2005 makes it easier to include common functionality in our apps. Read more about the My Namespace here. Happy coding!
http://blogs.msdn.com/b/ron_cundiff/archive/2007/02/19/visual-basic-net-soup-to-nuts-webcast-series-part-3-class-libraries-wrap-up.aspx
CC-MAIN-2015-22
en
refinedweb
29 October 2008 15:27 [Source: ICIS news] TORONTO (ICIS news)--A clear majority of Germans would welcome government participation or even nationalisation in key industries such as power utilities and banking, and 45% would welcome more government control over chemicals and pharmaceuticals, according to a survey published on Wednesday. ?xml:namespace> The survey of 1,001 people was conducted 22-23 October by research firm Forsa on behalf of Stern, a weekly magazine Backing for more government control over chemicals and pharmaceuticals producers was strongest among supporters of the Social Democrats (51%), the Green Party (54%) and the leftist Linke party (53%). However, even many supporters of Chancellor Angela Merkel's Christian Democrats (39%) and the pro-market Liberals (35%) came out in favour of a stronger government role in chemicals and pharmaceuticals. The industry ranked fourth in the list of sectors where Germans would like to see more government control. A majority, 77%, of supporters of all parties would favour nationalisation or government participation in electricity and natural gas, 64% would welcome this in banking and insurance, and 60% want to see government control in airlines, railways and postal services. The automobile sector ranked lowest with only 26% supporting government intervention. Stern commissioned the survey to gauge Germans’ view of recent plans by French President Nicolas Sarkozy to protect French firms against takeovers by foreign companies, it said. Sarkozy plans to set up a sovereign wealth fund that would help protect French firms whose share prices are depressed in the wake of the global financial
http://www.icis.com/Articles/2008/10/29/9167349/germans-support-govt-control-over-chems-survey.html
CC-MAIN-2015-22
en
refinedweb
25 September 2012 06:50 [Source: ICIS news] ?xml:namespace> The producer had been operating the plant intermittently since August 2009 because of a lack of feedstock, the source said. Polytama Propindo used to obtain feedstock propylene from local petrochemical major Pertamina, the source said. But he declined to comment on what led Pertamina to cut the feedstock
http://www.icis.com/Articles/2012/09/25/9598212/indonesias-polytama-propindos-balongan-pp-plant-idle-on-feedstock.html
CC-MAIN-2015-22
en
refinedweb
Porting LinuxBIOS to the AMD SC520 Failover.c is included in auto.c and is code for managing failover of the fallback BIOS image if the normal BIOS image is corrupted in some way. PC hardware does not have a defined way of mapping PCI slot interrupt lines to interrupt pins on the interrupt controller. There is a structure in the BIOS called the $PIR structure that the operating system reads to find out how to map interrupts. The irq_tables.c file has an initialized C structure that defines the connection of the interrupt lines. This structure is compiled into LinuxBIOS and forms the $PIR table. This file is generated automatically by a utility provided with linuxbios, called getpir. It is found in util/getpir. You run this utility under Linux, when booted under the factory BIOS. The utility prints out the $PIR table as C code. One caveat: we have found that the $PIR tables on many BIOSes have errors. On occasion, we have had to fix the tables to correspond to the actual hardware. This code is compiled by GCC, not romcc. There is not much to this file right now: #include <console/console.h> #include <device/device.h> #include <device/pci.h> #include <device/pci_ids.h> #include <device/pci_ops.h> #include "chip.h" struct chip_operations mainboard_digitallogic_msm586seg_ops = { CHIP_NAME("Digital Logic MSM586SEG mainboard ") }; This file contains the names of options used for this mainboard. First, all the options to be used are listed, for example: uses HAVE_FALLBACK_BOOT If the option has some desired value, it may be set in this file: ## Build code for the fallback boot default HAVE_FALLBACK_BOOT=1 which sets the option to 1. This option may be overridden in the target file; that is, we can set the following in targets/digitallogic/msm586seg/Config.lb: option HAVE_FALLBACK_BOOT=1 and the BIOS can be built without a fallback boot image. In general, the default values set in this file do not need to be changed. We do need to change the default ROM size, as it is set to 1024*1024 for the other mainboard: default ROM_SIZE = 256*1024 Why make this a default? So that a target with a larger ROM size can override it. If you build a target for a 1MB of ROM, you would put the command: option ROM_SIZE = 256*1024 Now we add the target directory for the mainboard: cd targets/digitallogic mkdir msm586seg tla add msm586seg cp adl855pc/Config.lb msm586seg/ tla add Config.lb We then commit, and the code is in. Next, we fix up the Config.lb for the msm586seg: target msm586seg mainboard digitallogic/msm586seg option DEFAULT_CONSOLE_LOGLEVEL=10 option MAXIMUM_CONSOLE_LOGLEVEL=10 romimage "normal" option USE_FALLBACK_IMAGE=0 option ROM_IMAGE_SIZE=0x10000 option LINUXBIOS_EXTRA_VERSION=".0Normal" payload /etc/hosts end romimage "fallback" option USE_FALLBACK_IMAGE=1 option ROM_IMAGE_SIZE=0x10000 option LINUXBIOS_EXTRA_VERSION=".0Fallback" payload /etc/hosts end buildrom ./linuxbios.rom ROM_SIZE "normal" "fallback" The file defines seven basic things: The target build directory is msm586seg; it could be anything. The mainboard is the digitallogic/msm586seg. The default console log level is 10; this controls which compiled-in messages are printed. It can be overridden by the CMOS setting in the normal BIOS image. The maximum console log level is 10; this controls which print macros are compiled. The normal romimage is not a fallback image; it is 0x10000 bytes (64KB), has a version tag of .0Normal and has a payload of /etc/hosts. The fallback romimage is a fallback image; it is 0x10000 bytes (64KB), has a version tag of .0Fallback and has a payload of /etc/hosts. The ROM target is linuxbios.rom; it has a size of ROM_SIZE, as defined in the mainboard Options.lb above, and has two images in it, normal and fall... just want to try this feature Please remove this just want to see what it did and how?
http://www.linuxjournal.com/article/8120?page=0,4
CC-MAIN-2015-22
en
refinedweb
Java.lang.Character.isMirrored() Method Description The java.lang.Character.isMirrored(char ch) determines whether the character(char ch) Parameters ch - char for which the mirrored property is requested Return Value This method returns true if the char is mirrored, false if the char is not mirrored or is not defined. Exception NA Example The following example shows the usage of lang.Character.isMirrored() method. package com.tutorialspoint; import java.lang.*; public class CharacterDemo { public static void main(String[] args) { // create 2 char primitives ch1, ch2 char ch1, ch2; // assign values to ch1, ch2 ch1 = '}'; ch2 = '^'; // create 2 boolean primitives b1, b2 boolean b1, b2; // check if ch1, ch2 are mirrored and assign results to b1, b2 b1 = Character.isMirrored(ch1); b2 = Character.isMirrored(ch2); String str1 = ch1 + " is a mirrored character is " + b1; String str2 = ch2 + " is a mirrored character is " + b2; // print b1, b2 values System.out.println( str1 ); System.out.println( str2 ); } } Let us compile and run the above program, this will produce the following result: } is a mirrored character is true ^ is a mirrored character is false
http://www.tutorialspoint.com/java/lang/character_ismirrored.htm
CC-MAIN-2015-22
en
refinedweb
Help with resource injection I have been forced to switch from Tomcat to Glassfish to test web services. During this move I have found that the way datasources in Tomcat are made are not the same as in Glassfish. I have read that resource injection is a good way to separate the code from the data source, but have not found anything that identifies all the steps involved. I have attempted to create a connection to the datasource using the @Resource annotation, but it did not work. I then started the Glassfish admin module and created a connection pool (named supportPool - it is pingable) and a JDBC Resource named supportdb that uses the supportPool. I am on a short deadline and have been trying for hours to get this working. I am using NetBeans to create web services and the web service client. An help would be greatly appreciated. public class MyClass { private @Resource(name="supportdb") javax.sql.DataSource supportDS; public void foo() { // At this point supportDS is null, from what I've read this should contain a connection to the data source Connection conn = supportDS.getConnection(); } } Sorry for the spam. Internet connection is slow tonight, refreshing the browser re-posted.
https://www.java.net/forum/topic/glassfish/glassfish-webtier/help-resource-injection-0
CC-MAIN-2015-22
en
refinedweb