See also: IRC log

<Nilo> scribe: Nilo
<Jonathan> agenda approved, with test status update included

Feb 20 minutes approval postponed to tomorrow.

Test results from Jonathan: 5 implementations tested, results shown, basically a sea of green. A few optional features have no implementations, so they are at risk.

Bob would like to get PR text completed today and wants everyone to work towards that.

AI review - all AIs completed.

Umit concerned about lack of WSDL implementations affecting the WSDL binding.

Bob says that we have a choice re the WSDL binding: 1) making the doc a Note, or 2) changing the number of implementations needed to progress it.

<hugo> DH thinks this is a potentially editorial change and can be done after we have agreed to the other changes to this text. He proposes discussion be deferred until later.

Umit: how is SOAP 1.1 text affected?
<Jonathan> jonathan: two action values - one for addressing-specific faults, another for application-level ones
Hugo: concern with the term "generic SOAP faults"

General consensus re the action value for the addressing-specific faults, with one addition: add a reference to the Faults section.

<bob> The [action] property below designates WS-Addressing fault messages:
<bob> *This action SHOULD NOT be used as an action value in messages other than those carrying WS-Addressing faults.* (section 6.4)
<bob> SOAP modules, extensions and applications SHOULD define custom [action] values for the faults they describe but MAY designate use of the following [action] value instead:

Katy: change "generic faults" to "such as..."
DaveO: call them "SOAP defined faults" instead of "generic SOAP faults"
<dorchard> SOAP defined SOAP faults.
Marc: seems a bit contradictory
Glen: may want to have infrastructure choose the appropriate action value

Consensus building around closing the second part with no action.

Umit: what's the reason for the second part?
Jonathan: it would be nice to know if it was a generic SOAP fault as opposed to an app-level fault.
Glen: it almost seems like overriding SOAP fault codes
Anish: still need to change the para between the two action values.
<anish> in section 6

This will be marked as CR24.

Resolution: CR24 as above, accepted without objection

<bob> The above [action] value SHOULD be used for SOAP defined faults including version mismatch, must understand, and data encoding unknown. *This action SHOULD NOT be used as an action value in messages other than those carrying SOAP defined faults or those of SOAP modules and extensions.*
<bob> I use SHOULD because this is a hard thing to test, seems like the appropriate level of guidance, and doesn't force a breaking change in implementations at this point.
<bob> Original thread that sparked this follows...
<bob> This SOAP 1.1 request optional response HTTP binding, in conjunction with the SOAP 1.1 binding, can be used for sending request messages with an optional SOAP response. This binding augments the SOAP 1.1 binding by allowing that the HTTP [RFC 2616] response MAY have a 202 status code and the response body MAY be empty. Note that the HTTP [RFC 2616] specification states "the 202 response is intentionally non-committal". As such, any content in the response body, including a SOAP body, MAY or ...

<dorchard> As such, any content in the response body, including a SOAP body, MAY or

Umit: can the last sentence be reworded?
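To make the optional-response binding quoted above concrete, here is a minimal sketch of the exchange it permits: a SOAP 1.1 request answered with a 202 status code and an empty body. The endpoint, SOAPAction value and payload element are illustrative assumptions, not taken from the spec or the test suite.

    (request)
    POST /reports HTTP/1.1
    Host: example.org
    Content-Type: text/xml; charset=utf-8
    SOAPAction: "http://example.org/submitReport"

    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <submitReport xmlns="http://example.org/reports">...</submitReport>
      </soap:Body>
    </soap:Envelope>

    (response - no SOAP envelope returned; per RFC 2616 the 202 is "intentionally non-committal")
    HTTP/1.1 202 Accepted
    Content-Length: 0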
Anish: this is an optional response, so you could also get back a 202 with and without a SOAP envelope, as well as a 200 with a SOAP envelope
Bob: we need new text starting from "As such..."
<marc> At the risk of prolonging the discussion I note that SOAP 1.1 doesn't preclude a HTTP 202 without a SOAP entity body.
Anish: propose striking the sentence starting "As such..."
Jonathan: how does this affect the test suite?
Glen: the 202 is used in the test suite
<dorchard> Another slight wording mod of the last 2 sentences..
<dorchard> Note that the HTTP [RFC 2616] specification states "the 202 response is intentionally non-committal" and so any content in the response body, including a SOAP Envelope, MAY not be an expected SOAP response.

Text above agreed.

RESOLUTION: Out-optional-in MEP is accepted and will be published as a WG NOTE

Break until 10:45

<bob> above resolution is to clean up and publish the note
<bob> N.B. Bob - talk to hugo about how to publish

Resuming meeting again.

<anish> alternate proposal:

Anish: has an alternative proposal and suggests that this issue be closed with no action
umit: it is WS-RX's choice to use our anon URI or mint their own
Glen: agrees with Anish
... we don't need to encourage WS-RX to use this URI in ways that may not be these semantics
Katy: the semantics is that it depends on the underlying SOAP transport binding
... It allows RX to go either way - use this URI as we define it or define their own URI for acksTo
<uyalcina> I am very uncomfortable in reverting a request that came from RX and forcing a particular decision on them,
<uyalcina> we should give them the choice
Anish: not comfortable with making its meaning context sensitive
Jonathan: there is value in making this URI mean that it depends on the infrastructure
... in short, agreeing with Katy
DH: if you are using anon, you need to understand the context it is to be used in
<Zakim> GlenD, you wanted to indicate that using a different URI is more comprehensible than looking up a namespace for an EPR header such as <wsrx:AcksTo>
Tom: could go either way, but does not see why a spec can't define its own EPR's semantics
Glen: in summary, would prefer WS-RX to mint its own URI

DOR provides an analogy to Java abstract vs concrete classes.

anish: we don't disallow use of anon in any other context. we simply define its use for reply/faultTo
<GlenD> Tom just made an interesting point - when you see a URI, you tend to want to dereference it to see what it means...
umit: there is an abstract/concrete analogy. abstract is "any back channel" while concrete is specific back channels in specific contexts
<GlenD> If it's RX's "sequenceBackchannel" URI, that's clearer than "W3C's no-real-meaning-anonymous URI"
anish: the current text encourages you to use this URI in a different context
<Zakim> TonyR, you wanted to address RX minting their own URI
Tony: RX should define their own URI because it has a different meaning
<Zakim> dhull, you wanted to point out that RX is not the only potential reuser of anon
<uyalcina> Please folks, WS-RX ASKED us to loosen up the definition of the URI for them.
<uyalcina> This is why we are discussing this
<anish> the issue about implementations changing is a red herring, imho, it does not change the impl much. There are a whole lot of changes that are currently taking place in wsrx which are huge compared to this tiny 'if' stmt change
<uyalcina> +1 to DH.
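For readers following the debate, this is the kind of reuse being discussed: a WS-RX AcksTo EPR whose [address] is the WS-Addressing anonymous URI. A minimal sketch only; the wsrx namespace URI shown is a placeholder assumption, not a value taken from these minutes.

    <wsrx:AcksTo xmlns:wsrx="http://docs.oasis-open.org/ws-rx/wsrm/200702"
                 xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <!-- "anonymous" here would mean: deliver acknowledgements on whatever
           back channel the underlying binding provides -->
      <wsa:Address>http://www.w3.org/2005/08/addressing/anonymous</wsa:Address>
    </wsrx:AcksTo>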
DH: we should leave the door open for future SOAP bindings which have back channels; so we should provide guidance for what anon might mean for them
Hugo: support Anish's proposal
<Katy> Additional text does not force RX to use the anon URI - it just states that, if it is used, the behaviour must be specified in the RX context.
DO: in either case, the RX spec implementers will have to look at their spec to see how the anon value is used
Jonathan: wants to allow anon to be used with AcksTo as used today
<Zakim> GlenD, you wanted to call the question (let's vote!)
Tom: happier with Anish's proposal, but could edit Katy's proposal to meet concerns
<dorchard> Is it almost the straw poll of "do you want anon re-used or not"?
Glen: want to get a sense of the group
Katy: RX implementations are using the anon URI
Marc/Anish: they are not using this CR anon URI
<pauld> has no sympathy for WS-RX, they're referencing a moving target
Anish: RX uses the old anon URI
Umit: the semantics of the old anon URI was "any back channel", which is what Katy's proposal is trying to capture
<GlenD> Umit - it doesn't matter that it's the SAME anonymous URI as addressing uses, though, does it?
<GlenD> If they need to change anyway, is it that big a deal?
<uyalcina> it does matter to RX.
<uyalcina> They asked us to define it for them
<pauld> what implementations are using our freshly minted URI?!
<GlenD> I'm asking Umit the architect/developer, not Umit the politician. :)
<uyalcina> i was the one who raised CR4 on behalf of WS-RX

Poll to adopt Katy's proposal - straw poll: for 6, against 8.

Tom: the input WS-RX did not use "anon"
Bob: can we live with no action?
<dorchard> And, can we live with Katy's proposals?
Umit: I was given the AI by the WS-RX TC to define the use of "anon"
Bob: I have an ongoing AI with WS-RX to keep tabs on movement to CR4, thus I will communicate to WS-RX what has changed
... What needs to change in the proposal to make it acceptable?
DO: I thought we voted for the intent, not the exact text
Glen: minting URIs is easy and there does not seem to be a benefit to using the same URI with a different meaning
DH: WS-RX found the "back channel" semantics of anon to be useful to reuse
<Zakim> dorchard, you wanted to ask more people
Vikas: for a non-synchronous transport, you do not want somebody to specify what the back channel is
Anish: does not achieve the reuse objective. A WS-RX sequence can make use of many different bindings for a given sequence, so this text may not be able to describe the RX scenarios.
<dorchard> anish, this only doesn't work if there's a non-soap binding..
Bob: anon means "knows what to do"
<anish> dave, or another soap/http binding
Hugo: I do not understand why people care about this so much, as it's not clear that any of the options will change anything; let's find an option that everybody can live with and move on
<anish> dave, or if you use soap 1.1 and smtp or another transport protocol
<dorchard> anish, if somebody else defines soap/http then they would have to do anon..
<anish> right, and they would not be using our soap addressing binding, so any text we put in there cannot be used by wsrx
<dorchard> jeff, aren't we defining extensible semantics though? That is, the semantics are of re-use...
<uyalcina> The URI's semantic is the same = back channel
Jeff: if you use a URI defined in a spec you are confined to its semantics. If anyone wants to use a different semantics, define a different URI.
Problem if different specs continue to use anon with different semantics.

<anish> can we do a can-live-with poll
<uyalcina> what we are debating is the semantics of the EPRs that use the URI
<pauld> wakes up for what sounds like a versioning and extensibility discussion
<bob> ?

Bob: can anyone not live with closing with no action?
Tom: I don't want to suggest reuse in the text, because if you do you have to put in all sorts of caveats.
... Katy's text needs wordsmithing.
<uyalcina> +1 to Jonathan
<pauld> anonymous means "do the right thing"
<anish> jonathan, do u think we need to say that or is it enuf for wsrx to say that?
<anish> i.e., with status quo wsrx can do exactly what u are suggesting
<Jonathan> I think we've overconstrained anonymous already.
<Zakim> TonyR, you wanted to point out that jonathan's point is valid, but that's not what the words SAY at the moment
<anish> how?
<anish> we in fact only say right now what it means in replyTo and faultTo
<anish> we don't say anything beyond that
Tony: at this point it does not say "knows what to do"
<dorchard> I assert that the anon is a special value and it means that a user will have to look at context, and the good part of Katy's proposal is to highlight that people need to describe their use..
<Katy> Additional text clarifies what is required by those who want to reuse anonymous - i.e. it must be defined when used outside the ws-a context
Anish: we do not say what anon means outside replyTo/faultTo
Jonathan: we constrain what it means at the SOAP level and the HTTP level
<pauld> Katy's text is simply "caveat emptor" .. I puzzle how to write test cases for it ..
<anish> section 5.1.2 says: [...]

Bob: does the spec overly constrain the use of anon?
Tony: the heading of 5.1 needs to be changed. Add some text to say that this section defines the use of anon in the context of WSA.
Anish: apart from the section heading, the text does not constrain you in any way.
Bob: over lunch, a small group will work on new text
DH: pending c18 will also affect this text
bob: anish and katy will work on text over lunch, followed by a formal vote

LUNCH BREAK

Resume 1:15 PM

<Katy> Katy: explains the proposal prepared over lunch
<Katy> The precise meaning of this URI ** within the context of Addressing ** is defined by the binding of Addressing to a specific protocol..

Katy: the above is Katy's suggested alternative text to the proposal

<scribe> Scribe: Anish

Bob: how do folks feel about this?
Tony: s/in SOAP Response/for SOAP Response/
Umit: i think the heading is still incorrect
... it is a general stmt
... s/SOAP//

RESOLUTION: resolve issue CR23 with the proposal; no objections

Marc explains the issue.

jonathan: i feel that we made a bunch of changes in Vancouver and it looks like we have issues about it.
... our solution introduced more problems. Revert back to what we had, but open to fixing this however we can

Discussion on what the CR17 resolution did.

Marc: I would rather just define a default which is not context sensitive
Glen: there are use cases where people can't use this
Umit: in that case the property is no longer optional
... it makes the header optional but not the property optional
glen: u can fix it either way
... if i care about one-way messages only then i don't care about it
<dhull> can everyone still hear on the phone
anish: don't make premature optimizations, no default. if the property is there, it is there.
Else it isn't.

glen: instead of saying syntactic default, we can specify it in the 'how to send the reply' section
... why do we need to specify that in the wsdl doc
anish: how important is the default?
glen: it is important as there are cases where the reply can be a large % of the message.
umit: in the wsdl doc, we have described using abstract message props
... the reply endpoint should be there. But that does not mean that the header is there.
<marc> ...#WSDLMEPS
Marc: we can just delete section 5 in the wsdl binding
anish: i don't think we should do it
glen: i don't think section 5 should be removed
<dorchard> Glen suggestion: change "in message" to "used by communicating nodes".
<dhull> +1 to "hands off the core"
davidh: what we are trying to say is that this is actually used in message exchange
jonathan: i think we can revert CR17 and make the property required

More discussion on various solutions.

katy: my concern is that if we undo cr17, we have to make sure that it all works
... can we fix it syntactically in the wsdl spec
... under the MEPs, it says "this section describes whether properties are required or optional"
<dhull> s/Mandatory/Required by MEP/?
katy: we can change it to: this section says which core MAPs are required by MEPs of WSDL 1.1
... and at the top of the table change mandatory to required
dhull: i don't think the change from mandatory to required does anything. we need to say that the requiredness is for the MEP
... the important part is "for MEP"
umit: how does it solve the current problem
... the MAP reply-to will always have a value
Marc: u are mixing the abstract and serialized one
... if u come up with another way to serialize this then it won't fly
Glen: agree with katy's suggestion
... we say there are no syntactic defaults
... from the MEP there may be defaults
Marc: i want to write a lib which can query a soap message and get the abstract properties
... if i have to default based on the context, i don't like it
umit: we can't put any case (where it has to fault if a value is not present) in our test case
glen: that is a different issue
marc: in this case, for our mapping, there will always be a value for [reply to]
umit: without all the context you can never figure this out
marc: we can add another note to make this clear
... proposal -- core section 3.4 note that for the serialization [fault to] and [reply to] is always present
Tony: rather than saying that the processor will never fault, I would say that the MAP would never be empty
... instead of talking about the behavior of the processor we should talk about the properties
Marc: proposal -- reopen CR17, close with clarification to the existing note and revert decision on CR17
Umit: i prefer saying explicitly that we allow other serializations
bob: we have marc's proposal and see if we can converge
... is this the approach to take to resolve this?
... can we leave the wordsmithing to the editors?
Umit: I'll work with marc on this

Tony's proposed text: The [reply endpoint] messaging property is defaulted to "ANONYMOUS" by the current serialisation, so this MAP will not be empty.

RESOLUTION: CR21 - revert CR17, new text proposed by Tony, and close issue CR21 with the reversal of CR17

umit: there will be clarification of the note

<scribe> ACTION: umit/tony/marc to figure out the wordsmithing for clarification to the note (CR17/CR21) [recorded in ...]

jonathan/paul: can we do that later but still discuss the issue coming out of the tests?
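To illustrate Tony's proposed text: with the current serialization, a request that omits wsa:ReplyTo is treated as if it carried the anonymous endpoint shown below, so the [reply endpoint] MAP is never empty. A sketch only; the destination, action URI and message id are illustrative assumptions.

    <soap:Header xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
                 xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <wsa:To>http://example.org/service</wsa:To>
      <wsa:Action>http://example.org/doSomething</wsa:Action>
      <wsa:MessageID>urn:uuid:00000000-0000-0000-0000-000000000001</wsa:MessageID>
      <!-- If this ReplyTo header is omitted, [reply endpoint] defaults to the same
           anonymous EPR, i.e. "reply on the binding's back channel" -->
      <wsa:ReplyTo>
        <wsa:Address>http://www.w3.org/2005/08/addressing/anonymous</wsa:Address>
      </wsa:ReplyTo>
    </soap:Header>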
Jonathan explains the issue.

Issue explained at: ...

Jonathan: these features were marked at risk
... MS's position is we are not going to implement it, we would like it to be gone. We don't mind having it in, as long as we have 2 implementations
hugo: we did not mark those things at risk
... we put in a note saying that the section may change
paul: from my POV i feel strongly about it. it has minimal to no value but requires a lot of testing
... my recollection from Boston was that it was at risk
Hugo: i don't think removing something like that would change the reviewer/implementer experience
jonathan: we are not going to implement it, unless you hold a gun to our head and then we would do something, but not ship
dhull: if this is not useful then i'm not going to push on it
jonathan: not a security expert, but i have heard that passing just a QName is better
dhull: u would send a message and the person receiving the message may not have the context
bob: we have not heard anyone speaking strongly for this
... do we have agreement that we redact the problemHeader
... it is the Team's opinion that it would not change the implementation experience

RESOLUTION: redact problemHeader

Jonathan: not sure how to test it
... no motivation
... i don't want this fault code to hold up CR
... this isn't as clear cut as the last one
... we could make it an informative test case rather than an optional test case
dhull: feel differently about it
... the idea behind this is to deal with hop-by-hop or gateway situations
... if we want to do a test, we can create an endpoint that always throws a fault
jonathan: we might have to work on it. moving it to informative will not help wrt results this week, but could be helpful in the long run
dhull: don't want to derail this. if we want to do it now or later that is fine
... the fact that no one has implemented it doesn't tell us much
... would like to keep it, will help with test cases

<more discussion on various fault codes and what they mean>

dhull: we should be able to make the box turn green
bob: if status quo means we need to have 4 impls then it is a problem
jonathan: this is a more advanced feature and we can argue that we don't need this now
paul: i look at this fault and think it is not interoperable
... no benefit to this
katy: agree with paul
dhull: this is not an advanced feature, it is a simple feature
tony: davidh, your position would be more defensible if you had an impl
jonathan: two different perspectives - say it is a more adv. feature
bob: what do we have to do here to move on?
davidIh: there is no must here
bob: does the resolution mean just removing the test case
hugo: we have an agreement that if we can't test it we'll remove it
jonathan: we can say that we failed to get implementations of this particular error condition
... advanced impls can use this
... we can't have every class of implementations
bob: could be useful for, say, 'tcp/ip'
... nothing to do with addressing
dhull: in the gateway case it's not necessarily a transport failure
... the jabber case is another one
... not going to lie down in the road
jeff: the reason to set criteria before you are under pressure is to create ones that are reasonable
... hard to believe the justification given now
... don't believe any arguments about things happening in the future
... this is supposed to be a widely implemented spec and easy
... we should stick with it and carry on with the program
hugo: but nobody is using it
jonathan: what are you suggesting?
jeff: don't move on till we have an implementation
hugo: i don't understand that -- we have 3 (or 5) impls and no one is using it
jeff: either yank it out or implement it
jonathan: i've a position that is compatible with that
paul: test cases haven't been written
dhull: let's be consistent
<uyalcina> It appears that there are no test cases for section 6.4.1
dhull: i don't want to cherry-pick
paul: we have a number of test cases bound to contributions from MS/IBM/myself/Hugo/Philippe -- we don't give good coverage to this area (faulting)
... it could be one fault or multiple faults
... don't know the reasons for test cases for a few faults and not all
... each fault code implies a processing model
... each of these has a high cost
... my company's position is to yank it out
dhull: in that case we have to yank out other things too
... are there test cases for actionnotsupported and endpointunavailable
<pauld> dhull: how do u know that the implementation is behaving properly
daveo: is it a question of creating test cases, or do impls just not support it
... let's have the WG members write tests
<pauld> daveo: standardized fault is important

<BREAK>

<scribe> Scribe: anish

bob: we are trying to figure out the appropriate way to move forward on the fault codes
... one suggestion is to take all of the non-tested/unimplemented fault codes, remove them from the normative spec and put them in a non-normative note
... all of the faults in section 6 that are not tested
dhull: actually all the optional faults (with no MUST)
umit: why not create non-normative fault codes
jeff: need fault codes for interop
... helps to have std codes
dhull: that is the idea
... but nothing normative
paul: i favor a note, a non-normative appendix has a cost
... wrt errata
... expectation is that it will grow
... these are soap faults that are useful
tom: non-Required faults are still normative for the clients
<uyalcina> I do not prefer a note, one spec is enough for implementors as a reference. All of our implementors are confused how many specification docs are out there, what their status is, etc.
jeff: i don't understand the packaging argument
paul: i can pick up the ws-addr spec with 2 pages as opposed to 6 pages
jeff: they should be normative in the sense that they are defined by the rec
paul: endpointnotavailable can be a catch-all for a lot of things
jeff: it is helpful so that one can have a switch
<uyalcina> we seem to be mixing the concept of whether we can test the error generating conditions and whether the codes may be useful for users.
paul: want more vendor-specific codes
... if the semantics can be nailed down -- which is a lot of work
jeff: u don't have to test them
bob: if the spec were to include the other non-must faults, those may be impossible/hard to test
jeff: we could write a test for that
bob: we might
dhull: the fault is not the feature
hugo: we are talking about section 5 and whether it is useless, takes time, helps interop etc
... put them on the whiteboard and find out which are related to a MUST, whether there is a test, implemented etc
umit: where are we going with the classification?
... i'm worried about eliminating subcodes. Would like to compare their use with how XML Schema codes were useful with XML Schema implementations.
... Would like to keep most of the codes

<Hugo draws a chart with various fault codes and associated attributes: Must, Test and Implemented>

jonathan: one of the difficulties with testing is that it is hard to create bad messages
paul: not people writing test cases
<dhull> section 6 says "[Details] The detail elements, use of the specified detail elements is REQUIRED. If absent, no detail elements are defined for the fault.", but I don't think any of the faults really gives any REQUIRED detail elements. They either say nothing or say MAY.
paul: my understanding was that if implementations don't implement features in time, they are gone
<uyalcina> Let's be fair here. The WSDL binding just went to LC and some of the fault codes were added recently after Japan, at Vancouver.

Hugo's table:

  Problem hdr          - Must: N  Test: Y  Impl: N
  Problem Hdr QN       - Must: N  Test: Y  Impl: Y
  Problem IRI          - Must: N  Test: Y  Impl: N
  Problem Action       - Must: N  Test: N  Impl: N
  Retry After          - Must: N  Test: N  Impl: N
  Invalid Addr Hdr     - Must: Y  Test: Y  Impl: Y
  Inv Addr             - Must: Y  Test: N  Impl: N
  Inv EPR              - Must: Y  Test: N  Impl: -
  Inv. Cardinality     - Must: Y  Test: Y  Impl: Y
  Miss Addr in EPR     - Must: Y  Test: N  Impl: -
  Dup MID              - Must: N  Test: N  Impl: -
  Action Mismatch      - Must: Y  Test: N  Impl: -
  Only Anon            - Must: Y  Test: N  Impl: -
  Only Non-Anon        - Must: Y  Test: N  Impl: -
  Message Adr Hdr Req  - Must: Y  Test: Y  Impl: N
  Dest. Unreachable    - Must: N  Test: N  Impl: N
  Action Not Supported - Must: N  Test: N  Impl: N
  Endpoint Unavailable - Must: N  Test: N  Impl: N

bob: we do not specify behavior
hugo: my proposal is to keep the ones with 'Must' (and they seem to be implemented) and figure out what to do with the rest
katy: we should be able to look at them and remove if not needed
umit: i pushed 2 error codes cause they are related to WSDL
<dorchard> agenda question, what else is after the test cases?
jonathan: if we can have a written proposal that can be reviewed by our engineers that would be good
... and look at it tomorrow
<uyalcina> those two error codes came in late in Vancouver, they are relevant for the WSDL binding.
<pauld> I do not believe we have two implementations who have implemented each of these faults in the same way
<pauld> WSDL Binding can develop its own faults
<Zakim> dhull, you wanted to reconsider whether a standard spelling with no standard semantics is helpful or harmful
daveh: the reason for having the last three fault codes in hugo's table was not to have a MUST around them, but to have the spelling available for interop
... we could just pull the 3 out

bob: proposal --
... Define normatively two faults that MUST be implemented:
... 1) Invalid Adr Header; subfaults move to note
... 2) Message Adr Hdr Req
... 3) move all other non-MUST faults to a note
... 0) drop Dest Unreachable, Action Not Supported and Endpoint Unavailable faults

<dorchard> For the record, I'll be in and out tomorrow AM then absent in the PM for other WG/TAG meetings..

dhull: 0 is a separate decision
bob: any opposition? none

RESOLUTION: accepted (0) in bob's proposal (subsequently nullified the following day)

jeff: don't agree that it should be in the note
umit: what if they are in a non-normative appendix
jeff: i want to be able to count on these codes as a client
katy: David Illsley thinks that we can remove more codes
bob: there are a lot of flavors of Invalid Addr Hdr
jeff: normativeness means we reserve this slot in the NS
hugo: we have done the generic category of Invalid Addr Hdr
... don't find the necessity to test the details
bob: we have tested at least one detail
paul: unhappy camper
... i went with the consensus in Boston with the assumption that, based on implementors' experience, we'll rip it out if needed
... would like to express my objection and get it on record
... i would like to remove all rows that are optional
... it is half-baked, we slapped it in there
... it should not be in there
jeff: in CORBA there were predefined codes which were hints to users
paul: this will come back and bite us
umit: we have two fault codes related to WSDL. WSDL isn't even in CR
paul: i feel strongly about removing these codes, but i'm willing to move on cause we need to get done
... error codes need to be machine processable
jeff: other reasons to have error codes too

<discussion on error codes>

dhull: intelligent systems can go thru logs and do interesting things. So this is helpful
... i do agree that our processing model is weak (wrt codes)
katy: in response to Paul, agree with what he is saying, but we are down to only two faults
... we got these subcodes/details, they will just fall out depending on what we decide
... we have 6.4.1.1 - 6.4.1.8 subcodes; instead of throwing them into a note we should step thru them and see if they are needed
... why not move the anon/non-anon faults to wsdl
umit: we tell people when they are required but they are valid independent of that
katy: problem action is not a fault, it is a detail
bob: ah, it is part of Invalid Addr Hdr

Bob's new proposal:
1) Invalid Adr Header; tested one detail, leave the rest of the details (internal rationalization)
2) Message Adr Hdr Req kept
3) move all other non-MUST faults to a note

dhull: i don't want to revisit closed issues

Bob's new new proposal:
1) Invalid Adr Header; tested one detail, leave the rest of the details (internal rationalization, induction proof)
2) Message Adr Hdr Req kept

bob: we will close this tomorrow morning
... immediately after which we will continue with the test report
jonathan: our report is looking better, but it will be better to have more vendors online
bob: we also have to discuss wsa:From and source endpoint which have been
... identified as "at risk" features

<Recessed for the day>
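For reference, a minimal sketch of how one of the faults retained in Bob's proposal (Invalid Addressing Header, carrying the ProblemHeaderQName detail that was kept while problemHeader was redacted) would appear on the wire, using the WS-Addressing fault [action] value agreed under CR24 earlier in the day. The reason text and the offending header named in the detail are illustrative assumptions.

    <env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope"
                  xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <env:Header>
        <!-- [action] value designating a WS-Addressing fault message (CR24) -->
        <wsa:Action>http://www.w3.org/2005/08/addressing/fault</wsa:Action>
      </env:Header>
      <env:Body>
        <env:Fault>
          <env:Code>
            <env:Value>env:Sender</env:Value>
            <env:Subcode>
              <env:Value>wsa:InvalidAddressingHeader</env:Value>
            </env:Subcode>
          </env:Code>
          <env:Reason>
            <env:Text xml:lang="en">A message addressing header is not valid</env:Text>
          </env:Reason>
          <env:Detail>
            <!-- QName of the offending header; the full problemHeader echo was redacted above -->
            <wsa:ProblemHeaderQName>wsa:MessageID</wsa:ProblemHeaderQName>
          </env:Detail>
        </env:Fault>
      </env:Body>
    </env:Envelope>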
http://www.w3.org/2002/ws/addr/6/03/02-ws-addr-minutes.html
A typeahead.js autocomplete for Ember, updated to work with the maintained fork (corejs-typeahead). ember install ember-aupac-typeahead Requirements: ember-data > 1.13.x if using the aupac-ember-data-typeahead component; ember.js > 1.13.x. aupac-ember-data-typeahead component The aupac-ember-data-typeahead component is an extension of the more generic aupac-typeahead and assumes you're using ember-data to retrieve your data remotely. This allows ember-data users to streamline the use of this component into a single line of code in their template. By default, each ember-data model supplied in modelClass is required to have a displayName (computed property or attribute) that will return a string representing the name to display in the suggestion template. If this is not possible you can override the suggestionTemplate and supply something else (see below). In addition to all the features supported by aupac-typeahead (see below), aupac-ember-data-typeahead supports the following: modelClass: (*required) the dasherized form of the ember-data model you're searching for, i.e. 'customer-address'. displayKey: (default: 'displayName') the attribute to display to the user when an item is selected. params: (default: {}) an object containing various query string parameters to send along with the remote request. queryKey: (default: 'q') the query parameter sent to the server containing the search text. selection: (default: null) the initial selection - can be an ember-data model (in which case the displayKey is used as the initial value) or a string which will be displayed as is. Wrap selection in the (readonly x) helper to avoid two-way binding. This component has already implemented the relevant functions to make them compatible with ember-data. You do not need to do so yourself. <!--In this case the ember-data model "task" needs a displayName attribute--> {{aupac-ember-data-typeahead modelClass='task' action=(action (mut selection))}} The above is all you need to have a fully functional autocomplete search in your page. It creates an input that allows you to search for tasks and, when an item is selected, updates the selection property on your controller. aupac-typeahead component The aupac-typeahead component makes no assumptions about how you're retrieving your data. Both local and remote suggestions are supported. disabled: (default: false) true if the control should be disabled. placeholder: (default: 'Search') the placeholder text to display in the input. name: (default: '') the name of the typeahead input. action: (*required) the selected item will be provided as the first argument. selection: (default: null) will be set as the initial selection in the component. Wrap selection with the (readonly x) helper to avoid two-way binding. autoFocus: (default: false) focus the control on render. transformSelection: (default: no transform) allows you to transform the selected value before it is set on the typeahead by returning the transformed value; signature function(selection). allowFreeInput: (default: false) allows the user to input their own values that are not part of the option list. Only useful if the item being selected is a String. tabindex: allows you to define a numeric tab index for the input. See the typeahead docs for a more complete description of the items below. source: (*required) a function to return an array of items to display to the user, with the signature function(query, syncResults, asyncResults).
The callback functions syncResults or asyncResults should be called with an array of results as a parameter. async: (default: false) true if the returned data is asynchronous. datasetName: (default: 'default') the name of the dataset. limit: (default: 15) the maximum number of results to display to the user. display: (default: will display the returned item as is) function that displays the selected item to the user, signature function(model). suggestionTemplate: a precompiled HTMLBars template used for suggestions; attribute bindings should be specified under the model object, i.e. {{model.firstName}}. If the returned value is not an object, it will be bound under {{model.displayName}}. notFoundTemplate: a precompiled HTMLBars template that is rendered when no results are found. pendingTemplate: a precompiled HTMLBars template that is rendered when loading the result set but not yet resolved. headerTemplate: a precompiled HTMLBars template displayed at the top of the search results. footerTemplate: a precompiled HTMLBars template displayed at the bottom of the search results. See the typeahead docs for a more complete description of the items below. highlight: (default: true) true if matching text should be highlighted in the search results. hint: (default: true) true if hints should be displayed in the input. minLength: (default: 2) the minimum number of characters before a search is performed. typeaheadClassNames: (default: {}) allows you to customise the class names used in typeahead. In your template {{aupac-typeahead action=(action (mut country)) class='form-control' source=countrySource placeholder='Search for a country'}} In your controller const countries = Ember.A(["Cape Verde","Cayman Islands","Chad","Chile","China","Colombia","Congo","Cook Islands","Costa Rica","Cote D Ivoire","Croatia","Cruise Ship","Cuba","Cyprus","Czech Republic","Denmark","Djibouti","Dominica","Dominican Republic","Ecuador","Egypt","El Salvador","Equatorial Guinea","Estonia","Ethiopia","Falkland Islands","Faroe Islands","Fiji","Finland","France","French Polynesia","French West Indies","Gabon","Gambia","St Pierre & Miquelon","Samoa","San Marino","Satellite","Saudi Arabia","Senegal","Serbia","Seychelles","Sierra Leone","Singapore","Slovakia","Slovenia","South Africa","South Korea","Spain","Sri Lanka","St Kitts & Nevis","St Lucia","St Vincent","Uganda","Ukraine","United Arab Emirates","United Kingdom","Uruguay","Uzbekistan","Venezuela","Vietnam","Virgin Islands (US)","Yemen","Zambia","Zimbabwe"]); export default Ember.Controller.extend({ country : null, countrySource : function(query, syncResults, asyncResults) { const regex = new RegExp(`.*${query}.*`, 'i'); const results = countries.filter((item, index, enumerable) => { return regex.test(item); }); syncResults(results); } }); You can override the suggestionTemplate, notFoundTemplate, pendingTemplate, headerTemplate or footerTemplate used by importing a *.hbs file and assigning it to the appropriate property. For example {{!-- app/templates/country-templates/suggestion.hbs --}} <div class='typeahead-suggestion'><img src="" style="width: 10%; height: 10%">{{model.displayName}}</div> Then in your controller import customSuggestionTemplate from '../templates/country-templates/suggestion'; export default Ember.Controller.extend({ customSuggestionTemplate: customSuggestionTemplate }) And assign it to your template {{aupac-typeahead action=(action (mut country)) ...
suggestionTemplate=customSuggestionTemplate }} This binds the custom suggestion template to the component. You can disable the importing of typeahead.js by adding the following to your /config/environment.js 'ember-aupac-typeahead' : { includeTypeahead: false } The current compatible typeahead.js version is v0.11.1. By default, Bootstrap 3 compatible css styles are included with the addon; you can disable this by adding: 'ember-aupac-typeahead' : { includeCss: false } See the typeahead.js docs for applying your own custom styling. test/pages/aupac-typeahead.js export function typeahead(selector, options) { return { search : function(search) { $(selector).val(search).trigger('input'); }, suggestions : collection({ scope: '', //Reset to global scope itemScope: '.tt-suggestion', item: { select: clickable() } }) }; } TODO - show example ember server ember test ember test --server ember build For more information on using ember-cli, visit the ember-cli documentation.
https://openbase.com/js/ember-aupac-x-numen-typeahead
CC-MAIN-2022-40
en
refinedweb
The following code is a snippet of my add-in. Basically, it will create a ConstructionPlane based on a Plane. However, it failed at sketch = sketches.add(cons_plane). After debugging, I found constructionPlanes.add returns nothing, which is why sketches.add failed. import adsk.core, adsk.fusion, traceback def run(context): ui = None try: app = adsk.core.Application.get() ui = app.userInterface product = app.activeProduct rootComp = product.rootComponent pt_3d_1 = adsk.core.Point3D.create(0,0,0) normal = adsk.core.Vector3D.create(0,0,1) plane = adsk.core.Plane.create(pt_3d_1,normal) print(plane.normal.x,plane.normal.y,plane.normal.z) cons_planes = rootComp.constructionPlanes print(cons_planes.count) cons_planeInput = cons_planes.createInput() cons_planeInput.setByPlane(plane) cons_plane = cons_planes.add(cons_planeInput) #watch cons_plane. it is none. sketches = rootComp.sketches #throw exception because cons_plane is none sketch = sketches.add(cons_plane) #can work if using default plane #xyPlane = rootComp.xYConstructionPlane #sketch = sketches.add(xyPlane) except: if ui: ui.messageBox('Failed:\n{}'.format(traceback.format_exc())) When we create a construction plane in [non-Parametric Modeling], it has no relationship to anything else and is positioned in space. We can use the Move command to reposition it anywhere in the model. When working in [Parametric Modeling], the construction plane remembers the input geometry and is tied to it. If that geometry changes, the construction plane will be recomputed. It's not possible to create a construction plane that has no relationship to anything. The exception to this is if I create a construction plane and then delete whatever it's dependent on. But then it just becomes sick and the only option is to redefine it, which means I need to re-associate it with some other geometry. Finally, I got to know the root cause. It is as-designed behavior in [Parametric Modeling]. Fusion 360 provides two types of modeling: [Parametric Modeling] and [non-Parametric Modeling]. The former is also called modeling with history, while the latter is called direct modeling. Construction planes are real entities, while a Plane object is transient and just provides the mathematical definition of a plane. So in [non-Parametric Modeling], the code will work well. By default, the modeling mode follows the setting in Preferences. If you want to switch the modeling mode midway, you can right-click the root node and click the last menu item.
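For reference, here is a minimal, untested sketch (based on the documented Fusion 360 API; the helper name and offset value are just illustrative) of creating a construction plane that is tied to existing geometry - an offset from the root component's XY origin plane - which is valid in [Parametric Modeling] as well:

import adsk.core, adsk.fusion

def create_offset_plane(rootComp, offset_cm=1.0):
    # Offset planes reference real geometry (here the XY origin plane),
    # so the timeline can recompute them, unlike a transient Plane object.
    planes = rootComp.constructionPlanes
    plane_input = planes.createInput()
    offset = adsk.core.ValueInput.createByReal(offset_cm)
    plane_input.setByOffset(rootComp.xYConstructionPlane, offset)
    return planes.add(plane_input)

A sketch created on the returned plane with rootComp.sketches.add(...) should then succeed in either modeling mode.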
https://adndevblog.typepad.com/manufacturing/2016/03/index.html
CC-MAIN-2022-40
en
refinedweb
Keras IoU implementation Are there any implementations of Intersection over Union metric in Keras 2.1.*? 2 votes from keras import backend as K def mean_iou(y_true, y_pred): y_pred = K.cast(K.greater(y_pred, .5), dtype='float32') # .5 is the threshold inter = K.sum(K.sum(K.squeeze(y_true * y_pred, axis=3), axis=2), axis=1) union = K.sum(K.sum(K.squeeze(y_true + y_pred, axis=3), axis=2), axis=1) - inter return K.mean((inter + K.epsilon()) / (union + K.epsilon())) Zoe, he knew cause he's smart It's not difficult. Remember we are talking about masks and y_true, y_pred are matrices with 0 or 1. The operations are pixelwise. Intersection = y_true * y_pred // all pixel locations which are 1 for both y_true and y_pred Union = y_true + y_pred - Intersection // combines all pixel locations from y_true and y_pred. If we just add y_true and y_pred we will calculate their common part twice, that's why we subtract their intersection. The sum function and axis arguments are just parameters, and they are a way to understand the shapes of y_true and y_pred. How did you know this?
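To make the pixelwise arithmetic concrete, here is a small, hypothetical NumPy check (not from the original thread; the mask values are made up) on a single 2x2, single-channel mask pair:

import numpy as np

y_true = np.array([[[[1], [1]], [[0], [0]]]], dtype=float)  # shape (1, 2, 2, 1)
y_pred = np.array([[[[1], [0]], [[1], [0]]]], dtype=float)  # already thresholded at 0.5

inter = (y_true * y_pred).sum()          # pixels that are 1 in both masks -> 1.0
union = (y_true + y_pred).sum() - inter  # pixels that are 1 in either mask -> 3.0
print(inter / union)                     # IoU = 1/3, matching the formula in the answer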
https://ai-pool.com/d/keras_iou_implementation
CC-MAIN-2022-40
en
refinedweb
Description: ------------ Because XPath cannot select nodes from the default namespace (i.e. xmlns="foo") an alias must be added. This is possible in the DOM extension using the DomXpath::registerNamespace() method. Because SimpleXML does not have this, its XPath implementation is quite crippled. The only way around this is the following: $sxml = simplexml_load_*($xml) $sxml['xmlns:foo'] = ''; $sxml = simplexml_load_string($sxml->asXML()); Which is quite stupid really. - Davey See #27709.
https://bugs.php.net/bug.php?id=28689&edit=1
CC-MAIN-2022-40
en
refinedweb
Function: In simple language, we can say that functions are basically segments of code. Different functions are written in a program to perform specific tasks (for example, a function which is framed to add two numbers, or a function that returns the factorial of a number, etc.). And as per requirement we use the working of one function in another function (calling a function). CALLING FUNCTION: the one which calls the other function. CALLED FUNCTION: the one which is being called by the calling function. Example 1.1: Addition of two numbers using a function: #include<iostream> #include<bits/stdc++.h> using namespace std; int sum(int a,int b){ return a+b; } int main(){ cout<<sum(5,7); return 0; } In the above example main() is the calling function and sum() is the called function. And a and b are the arguments of the sum() function, of integer type; whenever the sum() function is called, the calling function will assign values to these arguments. Prototypes: In example 1.1, if the sum() function were framed below the main() function then the compiler would fail to execute sum(5,7) and would give the error that "sum was not declared in the scope". So to get rid of such a problem, a single line is to be written above the main function (for example, a prototype for the sum() function in example 1.1 will be simply: int sum(int a, int b);). After writing this prototype you can frame the sum() function wherever you want in your code, either above, below or anywhere else; the compiler will be able to find the called function. Remember one thing: if the called function (the sum() function as per ex. 1.1) is above the calling function (the main() function as per ex. 1.1) you can skip writing the prototype. But if it is below the calling function then you must write the prototype. Well, it is recommended to always write the prototype. Call by Value and Call by Reference: A function can call another function in two ways: 1) call by value: the calling function just passes values to the arguments of the called function. 2) call by reference: the arguments of the called function are declared as pointers and addresses are passed to them. It will be more clear with this example: Example 1.2: Multiplication of two numbers using a function (call by value): #include<iostream> #include<bits/stdc++.h> using namespace std; int product(int a,int b); int main(){ cout<<product(5,7); return 0; } int product(int a,int b){ return a*b; } Example 1.3: Swapping two numbers using a function (this code can only be successfully executed by passing addresses to the arguments of the called function, because if we just pass the values, then only the variables of the called function will be swapped and it will not be reflected in the variables of the calling function, since both are independent of each other): #include<iostream> #include<bits/stdc++.h> using namespace std; void swap(int* a,int* b); int main(){ int l=1,m=2; cout<<"the value of l and m before swaping is : "<<l<<" and "<<m<<endl; swap(&l,&m); cout<<"the value of l and m after swaping is : "<<l<<" and "<<m<<endl; return 0; } void swap(int* a, int* b){ int t=*a; *a=*b; *b=t; } Recursion: In simple language, when the called and the calling function are the same, such a situation is known as recursion and that function is called a recursive function. (RECURSION: WHEN THE FUNCTION CALLS ITSELF) Example 1.4: Finding the factorial of a number using a function: #include<iostream> #include<bits/stdc++.h> using namespace std; int fact(int a); int main(){ int n; cin>>n; cout<<n<<"! = "<<fact(n)<<endl; return 0; } int fact(int a){ if(a==0) return 1; else if(a==1) return 1; else return a*fact(a-1); } Here the fact() function is called by the fact() function itself. So, we can say that here fact() is a recursive function. Read more - Applications of Recursion: Linear Diophantine Equation Using Extended Euclidean Algorithm ❤️Good Hello Sadhna, I just arrived at this article. I enjoyed it a lot. Carry on writing such useful stuff. Thank you and bye for now. Very informative 🙇🙇
https://hacktechhub.com/functions-prototypes-recursion-in-c/
CC-MAIN-2022-40
en
refinedweb
ALTER VIEW "SYS"."SYSCOLUMN" as select b.table_id, b.column_id, if c.sequence is null then 'N' else 'Y' endif as pkey,b.domain_id, b.nulls, b.width, b.scale, b.object_id, b.max_identity, b.column_name, r.remarks, b."default", b.user_type, b.column_type from SYS.ISYSTABCOL as b left outer join SYS.ISYSREMARK as r on(b.object_id = r.object_id) left outer join SYS.SYSIDXCOL as c on(b.table_id = c.table_id and b.column_id = c.column_id and c.index_id = 0)
https://dcx.sap.com/1001/en/dbrfen10/rf-views-s-5215939.html
CC-MAIN-2022-40
en
refinedweb
Rocky Series Release Notes 6.0.0 New Features Masakari has been enabled for mutable config. The option below may be reloaded by sending SIGHUP to the correct process. The ‘retry_notification_new_status_interval’ option will apply to processing unfinished notifications. Operators can now purge the soft-deleted records from the database tables. The following command was added to purge the records: masakari-manage db purge --age_in_days <days> --max_rows <rows> NOTE: notifications db records will be purged on the basis of the update_at and status fields (finished, ignored, failed) as these records will not be automatically soft-deleted by the system. Masakari now supports policy in code, which means that if operators don't need to modify any of the default policy rules, they do not need a policy file. Operators can modify/generate a policy.yaml.sample file which will override specific policy rules from their defaults. Masakari is now configured to work with two oslo.policy CLI scripts that have been added: The first of these can be called like oslopolicy-list-redundant --namespace masakari and will output a list of policy rules in policy.[json|yaml] that match the project defaults. These rules can be removed from the policy file as they have no effect there. The second script can be called like oslopolicy-policy-generator --namespace masakari --output-file policy-merged.yaml and will populate the policy-merged.yaml file with the effective policy. This is the merged result of project defaults and config file overrides. NOTE: The default policy.json file is now removed as Masakari now uses default policies. A policy file is only needed if overriding one of the defaults. Operators can now customize workflows to process each type of failure notification (host, instance and process) as per their requirements. A new config section for customized recovery flows was added in a new conf file, masakari-custom-recovery-methods.conf. [taskflow_driver_recovery_flows] Under [taskflow_driver_recovery_flows] the following new config options are added: ‘instance_failure_recovery_tasks’ is a dict of tasks which will recover instance failure. ‘process_failure_recovery_tasks’ is a dict of tasks which will recover process failure. ‘host_auto_failure_recovery_tasks’ is a dict of tasks which will recover host failure for auto recovery. ‘host_rh_failure_recovery_tasks’ is a dict of tasks which will recover host failure for rh recovery on the failed host. Upgrade Notes WSGI application script masakari-wsgi is now available. It allows running the masakari APIs using a WSGI server.
https://docs.openstack.org/releasenotes/masakari/rocky.html
CC-MAIN-2022-40
en
refinedweb
OLCNE/OCSK: 'kubectl' Commands to Create Resources Fail with "Error from server (Forbidden): Error When creating "XX.yaml": deployments.apps is forbidden: User "system:node:kubernetes" cannot create deployments.apps in the namespace "default"" (Doc ID 2354472.1) Last updated on AUGUST 25, 2022 Applies to: Oracle Cloud Native Environment - Version 1.0 and later Linux x86-64 Symptoms kubectl operations (such as creating a Deployment/Pod, listing component status, namespaces, replication controllers, etc.) fail with "Error from server (Forbidden)", "xxxxxxx is forbidden". Changes Cause
https://support.oracle.com/knowledge/Oracle%20Linux%20and%20Virtualization/2354472_1.html
CC-MAIN-2022-40
en
refinedweb
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments This code provides a PyTorch implementation and pretrained models for SwAV (Swapping Assignments between Views), as described in the paper Unsupervised Learning of Visual Features by Contrasting Cluster Assignments. SwAV is an efficient and simple method for pre-training convnets without using annotations. Similarly to contrastive approaches, SwAV learns representations by comparing transformations of an image, but unlike contrastive methods, it does not require to compute feature pairwise comparisons. It makes our framework more efficient since it does not require a large memory bank or an auxiliary momentum network. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or “views”) of the same image, instead of comparing features directly. Simply put, we use a “swapped” prediction mechanism where we predict the cluster assignment of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Model Zoo We release several models pre-trained with SwAV with the hope that other researchers might also benefit by replacing the ImageNet supervised network with SwAV backbone. To load our best SwAV pre-trained ResNet-50 model, simply do: import torch model = torch.hub.load('facebookresearch/swav:main', 'resnet50') We provide several baseline SwAV pre-trained models with ResNet-50 architecture in torchvision format. We also provide models pre-trained with DeepCluster-v2 and SeLa-v2 obtained by applying improvements from the self-supervised community to DeepCluster and SeLa (see details in the appendix of our paper). Larger architectures We provide SwAV models with ResNet-50 networks where we multiply the width by a factor ×2, ×4, and ×5. To load the corresponding backbone you can use: import torch rn50w2 = torch.hub.load('facebookresearch/swav:main', 'resnet50w2') rn50w4 = torch.hub.load('facebookresearch/swav:main', 'resnet50w4') rn50w5 = torch.hub.load('facebookresearch/swav:main', 'resnet50w5') Running times We provide the running times for some of our runs. Running SwAV unsupervised training Requirements - Python 3.6 - PyTorch install = 1.4.0 - torchvision - CUDA 10.1 - Apex with CUDA extension (see how I installed apex) - Other dependencies: scipy, pandas, numpy Singlenode training SwAV is very simple to implement and experiment with. Our implementation consists in a main_swav.py file from which are imported the dataset definition src/multicropdataset.py, the model architecture src/resnet50.py and some miscellaneous training utilities src/utils.py. For example, to train SwAV baseline on a single node with 8 gpus for 400 epochs, run: python -m torch.distributed.launch --nproc_per_node=8 main_swav.py \ --data_path /path/to/imagenet/train \ --epochs 400 \ --base_lr 0.6 \ --final_lr 0.0006 \ --warmup_epochs 0 \ --batch_size 32 \ --size_crops 224 96 \ --nmb_crops 2 6 \ --min_scale_crops 0.14 0.05 \ --max_scale_crops 1. 0.14 \ --use_fp16 true \ --freeze_prototypes_niters 5005 \ --queue_length 3840 \ --epoch_queue_starts 15 Multinode training Distributed training is available via Slurm. We provide several SBATCH scripts to reproduce our SwAV models.
For example, to train SwAV on 8 nodes and 64 GPUs with a batch size of 4096 for 800 epochs run: sbatch ./scripts/swav_800ep_pretrain.sh Note that you might need to remove the copyright header from the sbatch file to launch it. Set up dist_url parameter: We refer the user to pytorch distributed documentation (env or file or tcp) for setting the distributed initialization method (parameter dist_url) correctly. In the provided sbatch files, we use the tcp init method (see * for example). Evaluating models Evaluate models: Linear classification on ImageNet To train a supervised linear classifier on frozen features/weights on a single node with 8 gpus, run: python -m torch.distributed.launch --nproc_per_node=8 eval_linear.py \ --data_path /path/to/imagenet \ --pretrained /path/to/checkpoints/swav_800ep_pretrain.pth.tar The resulting linear classifier can be downloaded here. Evaluate models: Semi-supervised learning on ImageNet To reproduce our results and fine-tune a network with 1% or 10% of ImageNet labels on a single node with 8 gpus, run: - 10% labels python -m torch.distributed.launch --nproc_per_node=8 eval_semisup.py \ --data_path /path/to/imagenet \ --pretrained /path/to/checkpoints/swav_800ep_pretrain.pth.tar \ --labels_perc "10" \ --lr 0.01 \ --lr_last_layer 0.2 - 1% labels python -m torch.distributed.launch --nproc_per_node=8 eval_semisup.py \ --data_path /path/to/imagenet \ --pretrained /path/to/checkpoints/swav_800ep_pretrain.pth.tar \ --labels_perc "1" \ --lr 0.02 \ --lr_last_layer 5 Evaluate models: Transferring to Detection with DETR DETR is a recent object detection framework that reaches competitive performance with Faster R-CNN while being conceptually simpler and trainable end-to-end. We evaluate our SwAV ResNet-50 backbone on object detection on COCO dataset using DETR framework with full fine-tuning. Here are the instructions for reproducing our experiments: Install detr and prepare COCO dataset following these instructions. Apply the changes highlighted in this gist to detr backbone file in order to load SwAV backbone instead of ImageNet supervised weights. Launch training from detr repository with run_with_submitit.py. python run_with_submitit.py --batch_size 4 --nodes 2 --lr_backbone 5e-5 Common Issues For help or issues using SwAV, please submit a GitHub issue. The loss does not decrease and is stuck at ln(nmb_prototypes) (8.006 for 3000 prototypes). It sometimes happens that the system collapses at the beginning and does not manage to converge. We have found the following empirical workarounds to improve convergence and avoid collapsing at the beginning: - use a lower epsilon value ( --epsilon 0.03 instead of the default 0.05) - carefully tune the hyper-parameters - freeze the prototypes during first iterations ( freeze_prototypes_niters argument) - switch to hard assignment - remove batch-normalization layer from the projection head - reduce the difficulty of the problem (less crops or softer data augmentation) We now analyze the collapsing problem: it happens when all examples are mapped to the same unique representation. In other words, the convnet always has the same output regardless of its input, it is a constant function.
All examples get the same cluster assignment because they are identical, and the only valid assignment that satisfies the equipartition constraint in this case is the uniform assignment (1/K where K is the number of prototypes). In turn, this uniform assignment is trivial to predict since it is the same for all examples. Reducing the epsilon parameter (see Eq(3) of our paper) encourages the assignments Q to be sharper (i.e. less uniform), which strongly helps avoiding collapse. However, using too low a value for epsilon may lead to numerical instability. Training gets unstable when using the queue. The queue is composed of feature representations from the previous batches. These lines discard the oldest feature representations from the queue and save the newest one (i.e. from the current batch) through a round-robin mechanism. This way, the assignment problem is performed on more samples: without the queue we assign B examples to num_prototypes clusters where B is the total batch size, while with the queue we assign (B + queue_length) examples to num_prototypes clusters. This is especially useful when working with small batches because it improves the precision of the assignment. If you start using the queue too early or if you use too large a queue, this can considerably disturb training: this is because the queue members are too inconsistent. After introducing the queue the loss should be lower than what it was without the queue. On the following loss curve (first 30 epochs of this script) we introduced the queue at epoch 15. We observe that it made the loss go down further. If, when introducing the queue, the loss goes up and does not decrease afterwards, you should stop your training and change the queue parameters. We recommend (i) using a smaller queue, (ii) starting the queue later in training. License See the LICENSE file for more details. See also PyTorch Lightning Bolts: Implementation by the Lightning team. SwAV-TF: A TensorFlow re-implementation. Citation If you find this repository useful in your research, please cite: @article{caron2020unsupervised, title={Unsupervised Learning of Visual Features by Contrasting Cluster Assignments}, author={Caron, Mathilde and Misra, Ishan and Mairal, Julien and Goyal, Priya and Bojanowski, Piotr and Joulin, Armand}, booktitle={Proceedings of Advances in Neural Information Processing Systems (NeurIPS)}, year={2020} }
https://giters.com/facebookresearch/swav
CC-MAIN-2022-40
en
refinedweb
java.lang.IllegalArgumentException: Not a file or directory: xxxxxx\.idea\modules\src\test\resources\features 关注 Hi, I'm getting the above compile error in my Scala test project and just cannot resolve. My project structure is: CucumberTest -.idea -project (sources root) -src -> main -> scala -> test -> resources -> features -> firstFeature.feature -> scala -> stepdefs (package) -> RunCucumber.scala My RunCucumber class :- package stepdefs import cucumber.api.junit.Cucumber import cucumber.api.CucumberOptions import org.junit.runner.RunWith @RunWith(classOf[Cucumber]) @CucumberOptions( features = Array("src/test/resources/features"), glue = Array("stepdefs"), format = Array("pretty", "html:target/cucumber-report"), tags = Array("@wip") ) class RunCucumber { } I dont know why its trying to look in the .idea directory path to do the compile or where to change the configuration to resolve this? When i created the scala project there was no resources directory in test, i had to create it (along with the features directory). The error occurs when i run RunCucumber Any advice please for a newbie? I understand the issue, just dont know the cause or how to resolve. Cheers I've created a ticket for a further processing. Is it possible to provide the project or at least some of the following information as attachment to the ticket? Screenshot of - project view. If it's higher than one screen - show the place where problematic file locates. - project structure -> modules -> Sources - project structure -> modules -> Paths - project structure -> Project IDEA logs after the error appears (Help menu -> Collect and Show Logs in Finder) Hi Anton - thanks for taking the time to reply. I'm trying to complete the Scala SBT Cucumber tutorial that is here: I've downloaded the latest versions of IntelliJ, JDK9 etc. When i follow the instructions, no resources directory is created : I create the resources directory along with features etc but when i run RunCucumber, it throws the error and expects the directory path to be in the .idea path. I've managed to set up Java Selenium and Scala projects okay but seem to be having a real problem with this. Any help or pointers much appreciated (I've added the screenshots as an attachment to the ticket) Got it. Thanks! We'll take a look.
https://intellij-support.jetbrains.com/hc/zh-cn/community/posts/115000609904-java-lang-IllegalArgumentException-Not-a-file-or-directory-xxxxxx-idea-modules-src-test-resources-features
CC-MAIN-2022-40
en
refinedweb
Provides cdk-exec, an AWS CDK dev tool to quickly find and execute your Lambdas and State Machines in AWS. $ WARNING: Do not rely on this tool to execute your functions in a production environment. Now that you have been warned, please read on. Exporting Environment Variables If during local development you want to access the environment variables configured for a Lambda Function, such as to see the arns of real resources, you may use cdk-exec --export-env integ-cdk-exec/Function. $ cdk-exec --export-env integ-cdk-exec/Function FOO=bar SECRET_ARN=arn:aws:secretsmanager:REGION:000000000000:secret:SecretA720EF05-qa4X020B9S3f-UI3sIs First, add @wheatstalk/aws-cdk-exec to your project's dev dependencies. Then synthesize your app to a cdk.out directory. Once synthesized there, you can execute one of your resources with cdk-exec. If you're using cdk watch, the CDK will keep your cdk.outup to date, so when you use watch mode, you can run cdk-exec(roughly) at will. app.ts import { App, Stack } from 'aws-cdk-lib'; import { Code, Function, Runtime } from 'aws-cdk-lib/aws-lambda'; import { Choice, Condition, Fail, StateMachine, Succeed } from 'aws-cdk-lib/aws-stepfunctions'; const app = new App(); const stack = new Stack(app, 'integ-cdk-exec'); new StateMachine(stack, 'StateMachine', { definition: new Choice(stack, 'Choice') .when(Condition.isPresent('$.succeed'), new Succeed(stack, 'ChoiceSucceed')) .otherwise( new Fail(stack, 'ChoiceFail')), }); new Function(stack, 'Function', { runtime: Runtime.PYTHON_3_9, handler: 'index.handler', code: Code.fromInline(` def handler(event, context): if "succeed" in event: return {"succeed": True, "message": "Hello from Lambda"} raise Exception('Error from lambda') `), }); app.synth(); Synthesize your app The cdk-exec tool operates on a synthesized cloud assembly (your cdk.out directory), so the first step is to synthesize your app: cdk synth --output cdk.out Execute a state machine with input $ cdk-exec integ-cdk-exec/StateMachine --input '{"succeed":true}' ⚡ Executing integ-cdk-exec/StateMachine/Resource (arn:aws:states:REGION:000000000000:stateMachine:StateMachine2E01A3A5-8z4XHXAvT3qq) ✨ Final status of integ-cdk-exec/StateMachine/Resource Output: { "succeed": true } ✅ Execution succeeded Execute a lambda with input $ (integ-cdk-exec-Function76856677-k5ehIzbG2T6S) Output: { "succeed": true, "message": "Hello from Lambda" } ✅ Execution succeeded Use a custom cloud assembly directory $ cdk-exec --app path/to/cdkout integ-cdk-exec/Function --input '{"json":"here"}' Path matching cdk-exec searches for resources matching the exact path you provide and any deeper nested resources. This is how we support both L1 & L2 constructs, but is also a convenient shortcut when your app has only one executable resource. For example, if you have only one function or state machine in a stack, you can type cdk-exec my-stack and your resource will be found. If your entire app has only one executable resource, you can run cdk-exec without arguments to run it. Tag matching When running cdk-exec --tag mytag=value, cdk-exec will search for a resource matching tags that you have defined in your CDK app. If more than one resource would match, by default cdk-exec will produce an error message. But, if you want to execute several resources simultaneously, cdk-exec provides --all. We have also added aliases and shorthands to streamline typing label-matching commands. For example, cdk-exec -at mytag will try to run all resources with a tag named mytag, regardless of the value of the tag. 
This has the same effect as typing the longer cdk-exec --all --tag mytag command. Metadata matching When running cdk-exec --metadata mymeta=myvalue, cdk-exec will search for and run resources containing the given metadata. Same as for tag matching, you can run one or more matching resources if you specify the --all option. Path metadata This tool requires path metadata to be enabled in your assembly.
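As an illustration of the tag-matching feature, and using the AWS CDK v2 Python bindings instead of the TypeScript shown above (the tag name and values here are made up), you would tag a resource in your app so that cdk-exec --tag can find it:

from aws_cdk import App, Stack, Tags
from aws_cdk import aws_lambda as lambda_

app = App()
stack = Stack(app, "integ-cdk-exec")

fn = lambda_.Function(
    stack, "Function",
    runtime=lambda_.Runtime.PYTHON_3_9,
    handler="index.handler",
    code=lambda_.Code.from_inline("def handler(event, context):\n    return {'succeed': True}\n"),
)

# After `cdk synth`, `cdk-exec --tag mytag=demo` should match this function.
Tags.of(fn).add("mytag", "demo")

app.synth()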
https://openbase.com/js/@wheatstalk/aws-cdk-exec
CC-MAIN-2022-40
en
refinedweb
Laravel - Return json along with http status code Solution 1 You can use http_response_code() to set HTTP response code. If you pass no parameters then http_response_code will get the current status code. If you pass a parameter it will set the response code. http_response_code(201); // Set response status code to 201 For Laravel(Reference from:): return Response::json([ 'hello' => $value ], 201); // Status code here Solution 2 This is how I do it in Laravel 5 return Response::json(['hello' => $value],201); Or using a helper function: return response()->json(['hello' => $value], 201); Solution 3 I think it is better practice to keep your response under single control and for this reason I found out the most official solution. response()->json([...]) ->setStatusCode(Response::HTTP_OK, Response::$statusTexts[Response::HTTP_OK]); add this after namespace declaration: use Illuminate\Http\Response; Solution 4 There are multiple ways return \Response::json(['hello' => $value], STATUS_CODE); return response()->json(['hello' => $value], STATUS_CODE); where STATUS_CODE is your HTTP status code you want to send. Both are identical. if you are using Eloquent model, then simple return will also be auto converted in JSON by default like, return User::all(); Solution 5 laravel 7.* You don't have to speicify JSON RESPONSE cause it's automatically converted it to JSON return response(['Message'=>'Wrong Credintals'], 400); Related videos on Youtube - Galivan 9 months If I return an object: return Response::json([ 'hello' => $value ]); the status code will be 200. How can I change it to 201, with a message and send it with the json object?. I don't know if there is a way to just set the status code in Laravel. - Mladen Janjetovic almost 7 yearsKeep in mind that Symfony\Component\HttpFoundation\Response has its own predefined constants for http status codes, and if you use other than that it will change your status into something close to it... i.e. if you want to set status 449, you will always get status 500 - DJC over 6 years@timeNomad What are the pros and cons of these two methods - which is recommended? - Jonathan about 6 years@Tushar what if I don't want to send any data back, just a 200 response? Is response()->json([], 200);fit for purpose in this situation? Or is 200 implicit? - Maytham Fahmi almost 6 years+ (201) this answer safes my evening :) - Marcelo Agimóvel over 4 years@DJC on first method you will be able to use Response:: several times loading only once. On second method you will call that class to each time you use response()-> (no problem if you'll use only one). - jjmu15 over 2 yearsThanks, I was looking for a reference to this. Do you happen to have a link to the other available response names such as 201, 400 etc and not just the 200 (HTTP_OK)? I've tried googling it but haven't been able to find it quite yet! - jjmu15 over 2 yearsNevermind... found it. Here is a complete list for anyone else who may be looking for it: gist.github.com/jeffochoa/a162fc4381d69a2d862dafa61cda0798 - Derk Jan Speelman almost 2 years use Illuminate\Http\Response;and return new Response(['message' => 'test'], 422);worked for me - Faiyaj 11 monthsthis one is helpful ! Thanks :)
https://9to5answer.com/laravel-return-json-along-with-http-status-code
CC-MAIN-2022-40
en
refinedweb
table of contents NAME¶ rdma_post_recvv - post a work request to receive incoming messages. SYNOPSIS¶ #include <rdma/rdma_verbs.h> int rdma_post_recvv (struct rdma_cm_id *id, void *context, struct ibv_sge *sgl, int nsge); ARGUMENTS¶ DESCRIPTION¶ Posts a single work request to the receive queue of the queue pair associated with the rdma_cm_id. The posted buffers will be queued to receive an incoming message sent by the remote peer. RETURN VALUE¶ Returns 0 on success, or -1 on error. If an error occurs, errno will be set to indicate the failure reason. NOTES¶ The user is responsible for ensuring that the receive is posted, and the total buffer space is large enough to contain all sent data before the peer posts the corresponding send message. The message buffers must have been registered before being posted, and the buffers must remain registered until the receive completes.. SEE ALSO¶(3)
https://manpages.debian.org/bullseye/librdmacm-dev/rdma_post_recvv.3.en.html
CC-MAIN-2022-40
en
refinedweb
USAGE: import com.greensock.TweenLite; import com.greensock.plugins.TweenPlugin; import com.greensock.plugins.EndVectorPlugin; TweenPlugin.activate([EndVectorPlugin]); //activation is permanent in the SWF, so this line only needs to be run once. var v:Vector.<Number> = new Vector.<Number>(); v[0] = 0; v[1] = 1; v[2] = 2; var end:Vector.<Number> = new Vector.<Number>(); end[0] = 100; end[1] = 250; end[2] = 500; TweenLite.to(v, 3, {endVector:end, onUpdate:report}); function report():void { trace(v); } Copyright 2008-2013, GreenSock. All rights reserved. This work is subject to the terms in or for Club GreenSock members, the software agreement that was issued with the membership.
http://www.greensock.com/asdocs/com/greensock/plugins/EndVectorPlugin.html
CC-MAIN-2022-40
en
refinedweb
Hello Thanks for the Info. I was able to implement the BLOG in our site. I used the existing demo templates project. But there is one issue that i am facing. In the demo templates when we add a Personal Blog Start page, and if we right click that page and if we want to add page which is blogitem, automatically the demo templates project picks up Blog Item page type and DATES and TAGS item was created beneath Blog Start Page where as in my project these are not coming. Can any one help me out Thanks Rachappa Hi Rachappa! I'm having problems with integrating the blog function in my project, i was wondering if you could be so kind as to share how you did it step-by-step? I have tried from the alloy template package and also looked at the link previous in this thread, but i would like some more hands on tips. Would be very grateful for your assistance. /Jonas I have followed the steps provided in the blog post from epinova and tried to adapt the pagetypes and other code from the Alloy templates, but when i create a new personal start page it fails to create the tags and dates pages and i get an "object referance not set to an instance of the object" and i can't figure out where or what i'm doing wrong. So the best thing for me would be some steps on how to proceed from where the blog post ended. Maybe it's time for a new part in the series on how to create an epi site from scratch on your website ;-) Thanks for some good exaples there btw! Hello Jonas, Please check the files, a) You should include the complete blog folder in ur porject. Check the pagetype of PPersonalStart for it's virtual path. Templates/Demo/Blog/Pages b) Templates/Demo/Units folder is required c) Check the web.config for the section name episerver.blo d) In web.config, check for this tag personalStartPageTypeName, If it's not set then the page might throw an exception Thanks Hi Haven't worked on this since December, but now i need to get it working. 1. I have installed the demotemplates on my site through episerver deployment center and included the blog folder and the units folder. 2. I have checked my webconfig to make sure all necessarry parts are there 3. But when i compile i get Error 109 The type or namespace name 'Workroom' does not exist in the namespace 'EPiServer.Templates.Demo' (are you missing an assembly reference?) C:\EPiServer\Sites\SandboxSite2\Templates\Demo\Units\Placeable\CreatePageBox.ascx.cs 14 32 PublicTemplates Error 110 The type or namespace name 'Forum' does not exist in the namespace 'EPiServer.Templates.Demo' (are you missing an assembly reference?) C:\EPiServer\Sites\SandboxSite2\Templates\Demo\Blog\Pages\Item.aspx.cs 27 32 PublicTemplates Hello How can i integrate/implement the Episerver Blog to our Site? Thanks Rachappa
https://world.optimizely.com/forum/legacy-forums/Episerver-CMS-6-CTP-2/Thread-Container/2010/11/Episerver-Blog/
CC-MAIN-2022-40
en
refinedweb
Blocks and statements in Python Block and Statement Block : A block is a piece of Python program text that is executed as a unit. The following are blocks: a module, a function body, and a class definition. Each command typed interactively is a block. Statement : Instructions that a Python interpreter can execute are called statements. Example : - def add(a,b): #Block - P='PrepInsta' #statement Block : - A Python program is constructed from code blocks. - A script file (a file given as standard input to the interpreter or specified as a command line argument to the interpreter) is a code block. Statement : - There are different types of statements in the Python programming language, like assignment statements, conditional statements, looping statements etc. - P='PrepInsta' #assignment statement - Multi-Line Statements: In Python we can make a statement extend over multiple lines using braces {}, parentheses (), square brackets [], semi-colon (;), and the continuation character backslash (\). Implementation of Block and Statements in Python #Block def add(a,b): #block class Student: #block def __init__(self,name,roll): #block #Statements #Continuation Character (\): s = 1 + 41 + 48 + \ 4 + 51 + 6 + \ 50 + 10 #parentheses () : n = (17 * 5 * 4 + 8 ) #square brackets [] : cars = ['BMW', 'THAR', 'FERARI'] #braces {} : x = {1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9} #semicolons (;) : a = 2; b = 3; c = 4
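The article mentions conditional and looping statements but only shows assignment and multi-line statements, so here is a small illustrative example of those two statement types (not from the original article):

#Conditional statement
marks = 75
if marks >= 40:
    print("Pass")
else:
    print("Fail")

#Looping statement
for i in range(3):
    print("iteration", i)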
https://prepinsta.com/python/block-and-statement-in-python/
CC-MAIN-2021-04
en
refinedweb
TripleO Quickstart We need a common way for developers/CI systems to quickly stand up a virtual environment. Problem Description The tool we currently document for this use case is instack-virt-setup. However this tool has two major issues, and some missing features: There is no upstream CI using it. This means we have no way to test changes other than manually. This is a huge barrier to adding the missing features. It relies on a maze of bash scripts in the incubator repository[1] in order to work. This is a barrier to new users, as it can take quite a bit of time to find and then navigate that maze. It has no way to use a pre-built undercloud image instead of starting from scratch and redoing the same work that CI and every other tripleo developer is doing on every run. Starting from a pre-built undercloud with overcloud images prebaked can be a significant time savings for both CI systems as well as developer test environments. It has no way to create this undercloud image either. There are other smaller missing features like automatically tagging the fake baremetals with profile capability tags via instackenv.json. These would not be too painful to implement, but without CI even small changes carry some amount of pain. Proposed Change Overview Import the tripleo-quickstart[2] tool that RDO is using for this purpose. This project is a set of ansible roles that can be used to build an undercloud.qcow2, or alternatively to consume it. It was patterned after instack-virt-setup, and anything configurable via instack-virt-setup is configurable in tripleo-quickstart. Use third-party CI for self-gating this new project. In order to set up an environment similar to how developers and users can use this tool, we need a baremetal host. The CI that currently self-gates this project is set up on ci.centos.org[3], and setting this up as third-party CI would not be hard. Alternatives One alternative is to keep using instack-virt-setup for this use case. However, we would still need to add CI for instack-virt-setup. This would still need to be outside of tripleoci, since it requires a baremetal host. Unless someone is volunteering to set that up, this is not really a viable alternative. Similarly, we could use some other method for creating virtual environments. However, this alternative is similarly constrained by needing third-party CI for validation. Other End User Impact Using a pre-built undercloud.qcow2 drastically simplifies the virt-setup instructions, and therefore is less error prone. This should lead to a better new user experience of TripleO. Performance Impact Using a pre-built undercloud.qcow2 will shave 30+ minutes from the CI gate jobs. Other Deployer Impact There is no reason this same undercloud.qcow2 could not be used to deploy real baremetal environments. There have been many production deployments of TripleO that have used a VM undercloud. Implementation Work Items Import the existing work from the RDO community to the openstack namespace under the TripleO umbrella. Set up third-party CI running in ci.centos.org to self-gate this new project. (We can just update the current CI[3] to point at the new upstream location.) Documentation will need to be updated for the virtual environment setup. Dependencies Currently, the only undercloud.qcow2 available is built in RDO. We would either need to build one in tripleo-ci, or use the one built in RDO. Testing We need a way to CI the virtual environment setup.
This is not feasible within tripleoci, since it requires a baremetal host machine. We will need to rely on third party CI for this.
https://specs.openstack.org/openstack/tripleo-specs/specs/mitaka/tripleo-quickstart.html
CC-MAIN-2021-04
en
refinedweb
If you are reading this blog there may be two reasons: first, you are a programmer, and second, you want to be a better programmer. So here we go. Even bad code can function, but if the code isn't clean, it can bring a development organization to its knees. Every year, countless hours and significant resources are lost because of poorly written code. But it doesn't have to be that way. So there might be a question in your head: what is clean code? The answer is that your logic must be straightforward, to make it hard for any bug to hide, and the code should be readable and easy to enhance by a developer other than its original author. Clean code always looks like it was written by someone who cares. So there are some rules which we will discuss: 1) Meaningful names It is easy to say that a name should reveal intent. Choosing good names takes time but saves more than it takes. So take care with your names and change them to better ones; everyone who reads your code will be happier, including you. This can improve consistency, clarity and code integration. The name of a function or class should answer the big questions. Suppose your variable name is val x = 10 // This variable name reveals nothing The name "x" doesn't reveal anything. Use intention-revealing names. We should choose a name that specifies what is being measured and the unit of that measurement. val elapsedTimeInDays // This reveals what is being measured and the unit of measurement Programmers must avoid leaving false clues that obscure the meaning of the code. We should avoid words whose entrenched meanings vary from our intended meaning. For example, hp, aix, and sco would be poor variable names. Also avoid misleading names like the following, which may create confusion: def getAccount() def getAccounts() def getAccountInfo() Use pronounceable names, because a name like "genymdhms" (meaning generate date, year, months, day, hours, minutes, seconds) forces us to walk around saying "gen why emm dee aich emm ess", which is very hard to discuss; it could instead be written as "generateTimeStamp", which would be a better choice. You also don't need to prefix member variables with m_ anymore. Your classes and functions should be small enough that you don't need them, and you should be using an editing environment that highlights or colorizes members to make them distinct. Prefixes become unseen clutter and a marker of older code. class Part { var m_dsc = "manager"; def setName(name:String){ m_dsc = name; } } class Part{ var description:String = "Manager"; def setDescription(description:String) { this.description = description; } } Classes and objects should have noun or noun phrase names like Customer, WikiPage, Account etc… avoid names like Manager, Data or Info. A class name should not be a verb. Methods should have verb or verb phrase names like postPayment, deletePage, or save. Accessors, mutators, and predicates should be named for their value and prefixed with get, set. Pick one word for one abstract concept and stick with it.
For instance, it's confusing to have fetch, retrieve, and get as equivalent methods of different classes. For example, consider the method shown below: def genymdhms(t: Any): Timestamp = { val d1 = new SimpleDateFormat("yyyy-MM-dd hh:mm:ss.SSS") val d2 = d1.parse(t.toString) new Timestamp(d2.getTime) } The clean code would be: def getCalendarTimeStamp(token_exp: Any): Timestamp = { val dateFormat = new SimpleDateFormat("yyyy-MM-dd hh:mm:ss.SSS") val parsedDate = dateFormat.parse(token_exp.toString) new Timestamp(parsedDate.getTime) } 2) Functions "Functions should DO ONLY ONE THING. They should Do It Well. They should Do IT only." The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. This means that your function should not be large enough to hold nested structures; therefore, the indent level of a function should not be greater than one or two. This technique makes it easier to read, understand and digest. Your function should also take the minimum number of arguments possible. def registerUser(name: String, password: String, email: String,address:String,zip:Long): String = { implicit val session = AutoSession dBConnection.createConnectiontoDB() val token = UUID.randomUUID().toString import java.util.Calendar val calendar = Calendar.getInstance val token_gen = new Timestamp(calendar.getTime.getTime) calendar.add(Calendar.MINUTE, 30) val token_exp = new Timestamp(calendar.getTime.getTime) withSQL { insert.into(UserData).values(name, password, email, address,zip,token, token_gen, token_exp) }.update().apply() token } Which can be written as case class User(name: String, password: String, email: String,address:String,zip:Long) def registerUser(user:User): String 3) TDD means "Test Driven Development". The primary goal of TDD is to make the code clearer, simpler and bug-free. In the TDD approach, the test is developed first; it specifies and validates what the code will do. How does TDD work? - Write a test - Make it run - Change the code to make it right, i.e. refactor - Repeat the process By now everyone knows that TDD asks us to write unit tests first, before we write production code. But that rule is just the tip of the iceberg. Consider the following three laws: you may not write production code until you have written a failing unit test; you may not write more of a unit test than is sufficient to fail; and you may not write more production code than is sufficient to pass the currently failing test. Also, there should be a single concept per test and only one assertion per test, which is said to be good practice. Example:- "document is empty" should { "not be able to convert a document into an entity" in { val result = UserDataDao.documentToEntity(Document()) assert(result.isFailure) } } References - Clean Code by Robert C. Martin
https://blog.knoldus.com/coding-best-practices-to-follow-with-scala/
CC-MAIN-2021-04
en
refinedweb
Problem : You will be given two arrays of integers and asked to determine all integers that satisfy the following two conditions: - The elements of the first array are all factors of the integer being considered - The integer being considered is a factor of all elements of the second array These numbers are referred to as being between the two arrays. You must determine how many such numbers exist. For example, given the arrays a = [2, 6] and b = [24, 36], there are two numbers between them: 6 and 12. 6 % 2 = 0, 6 % 6 = 0, 24 % 6 = 0 and 36 % 6 = 0 for the first value. Similarly, 12 % 2 = 0, 12 % 6 = 0 and 24 % 12 = 0, 36 % 12 = 0. Input Format : The first line contains two space-separated integers, n and m, the number of elements in array a and the number of elements in array b. The second line contains n distinct space-separated integers describing a[i] where 0 <= i < n. The third line contains m distinct space-separated integers describing b[j] where 0 <= j < m. Constraints : 1 <= n, m <= 10 1 <= a[i] <= 100 1 <= b[j] <= 100 Output Format : Print the number of integers that are considered to be between a and b. Solution : #include <cstdio> #include <cstring> #include <string> #include <cmath> #include <cstdlib> #include <map> #include <iostream> #include <vector> #include <algorithm> using namespace std; int main() { int n, m; scanf("%d %d", &n, &m); int a[100], b[100]; for (int i=0; i<n; i++) scanf("%d", &a[i]); for (int i=0; i<m; i++) scanf("%d", &b[i]); int cnt = 0; for (int k=1; k<=100; k++) { int flag = 1; for (int i=0; i<n; i++) if (k % a[i] != 0) flag = 0; for (int i=0; i<m; i++) if (b[i] % k != 0) flag = 0; if (flag == 1) cnt ++; } printf("%d\n", cnt); return 0; } Post Disclaimer: the whole problem statement above is given by hackerrank.com but the solution is generated by the SLTECHACADEMY authority. If you have any query regarding this post or website, fill the following contact form. Thank you.
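For readers who prefer Python, here is an equivalent brute-force sketch of the same approach (not part of the original post; the function name is just illustrative): check every candidate from 1 to 100 against both conditions.

def get_total_x(a, b):
    count = 0
    for k in range(1, 101):
        # k must be a multiple of every element of a
        # and a factor of every element of b
        if all(k % x == 0 for x in a) and all(y % k == 0 for y in b):
            count += 1
    return count

print(get_total_x([2, 6], [24, 36]))  # prints 2, matching the example above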
https://sltechnicalacademy.com/between-two-sets-hackerrank-solution/
CC-MAIN-2021-04
en
refinedweb
VIF port config versioned objects and driver plugin library

Define a standalone os-vif python library, inspired by os-brick, to provide a versioned object model for data passed from neutron to nova for VIF port binding, and an API to allow vendors to provide custom plug/unplug actions for execution by Nova.

Problem description

When plugging VIFs into VM instances there is communication between Nova and Neutron to obtain a dict of port binding metadata. Nova passes this along to the virt drivers which have a set of classes for dealing with different VIF types. In the libvirt case, each class has three methods, one for building the libvirt XML config, one for performing host OS config tasks related to plugging a VIF and one for performing host OS config tasks related to unplugging a VIF. Currently, whenever a new Neutron mechanism driver is created, this results in the definition of a new VIF type, and the addition of a new VIF class to the libvirt driver to support it. Due to the wide variety of vendors, there is a potentially limitless number of Neutron mechanisms that need to be dealt with over time. Conversely the number of different libvirt XML configurations is quite small and well defined. There are sometimes new libvirt XML configs defined, as QEMU gains new network backends, but this is fairly rare. Out of 15 different VIF types supported by libvirt’s VIF driver today, there are only 5 distinct libvirt XML configurations required.

The problem with this architecture is that the Nova libvirt maintainers have the task of maintaining the plug/unplug code in the VIF drivers, which is really code that is defined by the needs of the Neutron mechanism. This prevents the Neutron project / vendors from adding new VIF types without having a lock-step change in Nova.

A second related problem is that the format of the data passed between Nova and Neutron for the VIF port binding is fairly loosely defined. There is no versioning of the information passed between them and no agreed formal specification of what the different fields mean. This data is used both to generate the libvirt XML config and to control the logic of the plug/unplug actions.

Proposed change

Inspired by the os-brick library effort started by the Cinder project, the proposal involves creation of a new library module that will be jointly developed by the Neutron & Nova teams, for consumption by both projects. This proposal is describing an architecture with the following high level characteristics & split of responsibilities:

- Definition of VIF types and associated config metadata.
  - Owned jointly by Nova and Neutron core reviewer teams
  - Code shared in os-vif library
  - Ensures core teams have 100% control over data on the REST API
- Setup of compute host OS networking stack
  - Owned by Neutron mechanism vendor team
  - Code distributed by mechanism vendor
  - Allows vendors to innovate without bottlenecking on Nova developers in the common case.
  - In the uncommon event a new VIF type was required, this would still require an os-vif modification with Nova & Neutron core team signoff.
- Configuration of guest virtual machine VIFs, i.e. libvirt XML
  - Owned by Nova virt driver team
  - Code distributed as part of Nova virt / VIF driver
  - Ensures the hypervisor driver retains full control over how the guest instances are configured

Note that while the description below frequently refers to the Nova libvirt driver, this proposal is not considered libvirt specific.
The same concepts and requirements for VIF type support exist in all the other virt drivers. They merely support far fewer different VIF types than libvirt, so the problems are not so immediately obvious in them.

The library will make use of the oslo.versionedobjects module in order to formally define a set of objects to describe the VIF port binding data. The data in these objects will be serialized into JSON, for transmission between Neutron and Nova, just as is done with the current dicts used today. The difference is that by using oslo.versionedobjects, we gain a formal specification and the ability to extend and modify the objects over time in a manner that is more future proof. One can imagine a base object:

from oslo_versionedobjects import base

class VIFConfig(base.VersionedObject):

    # Common stuff for all VIFs
    fields = {
        # VIF port identifier
        id: UUIDField()

        # Various common fields, see current
        # nova.network.model.VIF class and related ones
        ...snip...

        # Name of the class used for VIF (un)plugging actions
        plug: StringField()

        # Port profile metadata - needed for network modes
        # like OVS, VEPA, etc
        profile: ObjectField("VIFProfile")
    }

This base object defines the fields that are common to all the different VIF port binding types. There are a number of these attributes, currently detailed in the VIF class in nova.network.model, or the equivalent in Neutron. One addition here is a ‘plug’ field which will be the name of a class that will be used to perform the vendor specific plug/unplug work on the host OS. The supported values for the ‘plug’ field will be determined by Nova via a stevedore based registration mechanism. Nova can pass this info across to Neutron, so that mechanisms know what plugins have been installed on the Nova compute node too. Tagging the plugin class with a version will also be required to enable upgrades where the Neutron mechanism version is potentially newer than the Nova installed plugin.

This ‘plug’ field is what de-couples the VIF types from the vendor specific work, and will thus allow the number of VIFConfig classes to remain at a fairly small finite size, while still allowing an arbitrary number of Neutron mechanisms to be implemented.

As an example, from the current list of VIF types, we can see that IVS, IOVISOR, MIDONET and VROUTER all use the same libvirt type=ethernet configuration, but different plug scripts. Similarly there is significant overlap between VIFs that use type=bridge, but with different plug scripts.

The various VIFConfig subclasses will be created, based on the different bits of information that are currently passed around. NB, this is not covering all the current VIF_TYPE_XXX variants, as a number of them have essentially identical config parameter requirements, and only differ in the plug/unplug actions, hence the point previously about the ‘plug’ class name.

All existing VIF types will be considered legacy. These various config classes will define a completely new set of modern VIF types. In many cases they will closely resemble the existing VIF types, but the key difference is in the data serialization format, which will be using oslo.versionedobjects serialization instead of dicts. By defining a completely new set of VIF types, we make it easy for Nova to negotiate use of the new types with Neutron. When calling Neutron, Nova will indicate what VIF types it is capable of supporting, and thus Neutron can determine whether it is able to use the new object based VIF types or the legacy anonymous dict based types.
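To make the serialization path concrete, here is a small, directly runnable variant of the pseudocode above. The quoted field keys and the registry decorator are how oslo.versionedobjects actually expects things to be written; the concrete values and the print statement are illustrative and not part of the spec:

from oslo_versionedobjects import base
from oslo_versionedobjects import fields


@base.VersionedObjectRegistry.register
class VIFConfig(base.VersionedObject):
    VERSION = '1.0'
    fields = {
        # VIF port identifier
        'id': fields.UUIDField(),
        # Name of the class used for VIF (un)plugging actions
        'plug': fields.StringField(),
    }


# Neutron side: build the object and flatten it to a JSON-safe primitive
vif = VIFConfig(id='1fb2cb43-a3a0-45b5-a67e-4e0d4e6a1f8e', plug='ovs')
primitive = vif.obj_to_primitive()

# Nova side: rebuild the versioned object from the primitive
restored = base.VersionedObject.obj_from_primitive(primitive)
print(restored.plug)

Because the primitive records the object name and version, the producer can also back-level it with obj_make_compatible() when talking to an older consumer, which is the behaviour the upgrade scenarios later in this spec rely on.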
The following dependent spec describes a mechanism for communicating the list of supported VIF types to Neutron when Nova creates a VIF port. What is described in that spec will need some further improvements. Instead of just a list of VIF types, it will need to be a list of VIF types and their versions. This will allow Neutron to back-level the VIF object data to an older version in the event that Neutron is running a newer version of the os-vif library than is installed on the Nova compute host. Second, in addition to the list of VIF types, Nova will also need to provide a list of installed plugins along with their versions.

So approximately the following set of objects would be defined to represent the new VIF types. It is expected that the result of the ‘obj_name()’ API call (defined by the oslo VersionedObject base class) will be used as the VIF type name. This gives clear namespace separation from legacy VIF type names.

class VIFConfigBridge(VIFConfig):
    fields = {
        # Name of the host TAP device used as the VIF
        devname: StringField(nullable=True)

        # Name of the bridge device to attach VIF to
        bridgename: StringField()
    }

class VIFConfigEthernet(VIFConfig):
    fields = {
        # Name of the host TAP device used as the VIF
        devname: StringField()
    }

class VIFConfigDirect(VIFConfig):
    fields = {
        # Source device NIC name on host (eg eth0)
        devname: StringField()

        # An enum of 'vepa', 'passthrough', or 'bridge'
        mode: DirectModeField()
    }

class VIFConfigVHostUser(VIFConfig):
    fields = {
        # UNIX socket path
        path: StringField()

        # Access permission mode
        mode: StringField()
    }

class VIFConfigHostDevice(VIFConfig):
    fields = {
        # Host device PCI address
        devaddr: PCIAddressField()

        # VLAN number
        vlan: IntegerField()
    }

NB, the attributes listed in these classes above are not yet totally comprehensive. At time of implementation, there will be more thorough analysis of current VIF code to ensure that all required attributes are covered. This list is based on the information identified in the referenced wiki page.

Some of these will be applicable to other hypervisors too, but there may be a few more vmware/hypervisor/xenapi specific config subclasses needed too. This spec does not attempt to enumerate what those will be yet, but they will be a similarly simple and finite set.

Those looking closely will have seen a reference to a “VIFProfile” object in the “VIFConfig” class shown earlier. This object corresponds to the data that can be provided in the <portprofile>…</portprofile> XML block. This is required data when a VIF is connected to OpenVSwitch, or when using one of the two VEPA modes. This could have been provided inline in the VIFConfig subclasses, but there are a few cases where the same data is needed by different VIF types, so breaking it out into a separate object allows better reuse, without increasing the number of VIF types.

class VIFProfile(base.VersionedObject):
    pass

class VIFProfile8021QBG(VIFProfile):
    fields = {
        managerid: IntegerField()
        typeid: IntegerField()
        typeidversion: IntegerField()
        instanceid: UUIDField()
    }

class VIFProfile8021QBH(VIFProfile):
    fields = {
        profileid: StringField()
    }

class VIFProfileOpenVSwitch(VIFProfile):
    fields = {
        interfaceid: UUIDField()
        profileid: StringField()
    }

Finally, as alluded to in an earlier paragraph, the library will also need to define an interface for enabling the plug / unplug actions to be performed.
This is a quite straightforward abstract python class:

class VIFPlug(object):

    VERSION = "1.0"

    def plug(self, config):
        raise NotImplementedError()

    def unplug(self, config):
        raise NotImplementedError()

The ‘config’ parameter passed in here will be an instance of the VIFConfig versioned object defined above. There will be at least one subclass of this VIFPlug class provided by each Neutron vendor mechanism. These subclass implementations do not need to be part of the os-vif library itself. The mechanism vendors would be expected to distribute them independently, so decomposition of the neutron development is maintained. It is expected the vendors will provide a separate VIFPlug impl for each hypervisor they need to be able to integrate with, so info about the Nova hypervisor must be provided to Neutron when Nova requests creation of a VIF port.

The VIFPlug classes must be registered with Nova via the stevedore mechanism, so that Nova can identify the list of implementations it has available, and thus validate requests from Neutron to use a particular plugin. It also allows Nova to tell Neutron which plugins are available for use. The plugins will be versioned too, so that it is clear to Neutron which version of the plugin logic will be executed by Nova.

The vendors would not be permitted to define new VIFConfig sub-classes; these would remain under control of the os-vif library maintainers (ie Neutron and Nova teams), as any additions to data passed over the REST API must be reviewed and approved by project maintainers. Thus proposals for new VIFConfig classes would be submitted to the os-vif repository where they will be reviewed jointly by the Nova & Neutron representatives working on that library. It is expected that this will be a fairly rare requirement, since most new mechanisms can be implemented using one of the many existing VIFConfigs.

So when a vendor wishes to create a new mechanism, they first decide which VIFConfig implementation(s) they need to target, and populate that with the required information about their VIF. This information is sufficient for the Nova hypervisor driver to config the guest virtual machine. When instantiating the VIFConfig impl, the Neutron vendor will set the ‘plug’ attribute to refer to the name of the VIFPlug subclass they have implemented with their vendor specific logic. The vendor VIFPlug subclasses must of course be installed on the Nova compute nodes, so Nova can load them. When Nova asks Neutron to create the VIF, Neutron returns the serialized VIFConfig class, which Nova loads. The Nova compute manager passes this down to the virt driver implementation, which instantiates the class defined by the ‘plug’ attribute. It will then invoke either the ‘plug’ or ‘unplug’ method depending on whether it is attaching or detaching a VIF to the guest instance. The hypervisor driver will then configure the guest virtual machine using the data stored in the VIFConfig class.

When a new Nova talks to an old Neutron, it will obviously be receiving the port binding data in the existing dict format. Nova will have to have some compatibility code to be able to support consumption of the data in this format. Nova would likely convert the dict on the fly to the new object model. The existing libvirt driver VIF plug/unplug methods would also need to be turned into VIFPlug subclasses. This way new Nova will be able to deal with all pre-existing VIF types that old Neutron knows about, with no loss in functionality.
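To illustrate the vendor side of this interface, a plugin for the hypothetical wizzbangnet mechanism might look roughly like the sketch below. The wizzbangnet-ctl command and the 'os_vif.plug' stevedore entry point namespace are invented for the example and are not mandated by this spec:

from oslo_concurrency import processutils


class WizzBangNetVIFPlug(VIFPlug):
    """Host OS plug/unplug logic for the hypothetical wizzbangnet mechanism."""

    VERSION = "1.0"

    def plug(self, config):
        # Create the TAP device named in the VIFConfig and attach it to the
        # vendor's switch before the guest is started.
        processutils.execute('ip', 'tuntap', 'add', config.devname,
                             'mode', 'tap', run_as_root=True)
        processutils.execute('wizzbangnet-ctl', 'attach', config.devname,
                             run_as_root=True)

    def unplug(self, config):
        # Detach and delete the device when the guest is torn down.
        processutils.execute('wizzbangnet-ctl', 'detach', config.devname,
                             run_as_root=True)
        processutils.execute('ip', 'link', 'del', config.devname,
                             run_as_root=True)

The vendor package would then register the class in its setup.cfg so Nova can discover it via stevedore, for example:

[entry_points]
os_vif.plug =
    wizzbangnet = nova_vif_plugin_wizzbangnet.plug:WizzBangNetVIFPlug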
When an old Nova talks to a new Neutron, Neutron will have to return the data in the existing legacy port binding format. For this to work, there needs to be a negotiation between Nova and Neutron to opt in to use of the new VIFConfig object model. With an explicit opt-in required, when an old Nova talks to a new Neutron, Neutron will know to return data in the legacy format that Nova can still understand. The obvious implication of this is that any newly developed Neutron mechanisms that rely on the new VIFConfig object model exclusively will not work with legacy Nova deployments. This is not considered to be a significant problem, as the mis-match in Neutron/Nova versions is only a temporary problem as a cloud undergoes a staged update from Kilo to Liberty.

To aid in understanding how this changes from the current design, it is helpful to compare the relationships between the objects. Currently there is mostly a 1:1 mapping between Neutron mechanisms, VIF types, and virt driver plugins. Thus each new Neutron mechanism has typically needed a new VIF type and virt driver plugin. In this new design, there will be the following relationships:

- VIF type <-> VIFConfig class - 1:1 - VIFConfig classes are a direct representation of each VIF type; a VIF type is simply the name of the class used to represent the data.
- Neutron mechanism <-> VIF type - M:N - A single mechanism can use one or more VIF types, a particular choice made at runtime based on usage scenario. Multiple mechanisms will be able to use the same VIF type.
- VIF type <-> VIF plugins - 1:M - a single VIF type can be used with multiple plugins, ie many mechanisms will use the same VIF type, but each supply their own plugin implementation for host OS setup.

The split between VIF plugins and VIF types is key to the goal of limiting the number of new VIF types that are created over time.

Alternatives

Do nothing. Continue with the current approach where every new Neutron mechanism requires a change to the Nova hypervisor VIF driver to support its vendor specific plug/unplug actions. This will make no one happy.

Return to the previous approach, where Nova allows loading of out of tree VIF driver plugins for libvirt. This is undesirable for a number of reasons. The task of configuring a libvirt guest consists of two halves commonly referred to as backend configuration (ie the host) and frontend configuration (ie what the guest sees). The frontend config is something that the libvirt driver needs to retain direct control over, in order to support various features that are common to all VIFs regardless of backend config. In addition the libvirt driver has a set of classes for representing the libvirt XML config of a guest, which need to be capable of representing any VIF config for the guest. These are considered part of the libvirt internal implementation and not a stable API. Thirdly, the libvirt VIF driver plugin API has changed in the past and may change again in the future, and the data passed into it is an ill-defined dict of values from the port binding. For these reasons there is a strong desire to not hand off the entire implementation of the current libvirt VIF driver class to an external 3rd party.

That all said, this spec does in fact take things back to something that is pretty similar to this previous approach. The key differences and benefits of this spec are that it defines a set of versioned objects to hold the data that is passed to the 3rd party VIFPlug implementation.
The external VIFPlug implementation is only responsible for the host OS setup tasks, ie the plug/unplug actions. The libvirt driver retains control over guest configuration. The VIFPlug driver is isolated from the internal impl and API design of the libvirt hypervisor driver. The commonality is that the Neutron vendor has the ability to retain control of their plug/unplug tasks without Nova getting in the way.

Keep the current VIF binding approach, but include the name of an executable program (script) that Nova will invoke to perform the plug/unplug actions. This is approximately the same as the proposal in this spec; it is just substituting in-process execution of python code for out of process execution of a (shell) script. In the case of scripts, the data from the VIF port bindings must be provided to the script, and the proposal was to use environment variables. This is moderately ok if the data is all scalar, but if there is a need to provide non-scalar structured data like dicts/lists, then the environment variable approach is very painful to work with. The VIF script approach also involves creation of some formal versioned objects for representing port binding data, but those objects live inside Nova. Since Neutron has the same need to represent the VIF port binding data, it is considered better if we can have an external python module which defines the versioned objects to represent the port binding data, that can be shared between both Nova and Neutron. It is believed that by defining a formal set of versioned objects to represent the VIF port binding data, and a python abstract class for the plug/unplug actions, we achieve a strict, clean and easily extensible interface for the boundary between Nova and Neutron, avoiding some of the problems inherent in serializing the data via environment variables; ie the VIFPlug subclasses will still get to access the well defined VIFConfig class attributes, instead of having to parse environment variables.

As per this spec, but keep all the VIFConfig classes in Nova instead of creating a separate os-vif library. The main downside with this is that Neutron will ultimately need to create its own copy of the VIFConfig classes, and there will need to be an agreed serialization format between Nova and Neutron for the VIF port binding metadata passed over the REST API. By having the VIFConfig classes in a library that can be used by both Nova and Neutron directly, we ensure both apps have a unified object model and can leverage the standard oslo.versionedobjects serialization format. This brings Neutron/Nova a well defined REST API data format for the data passed between them.

Move responsibility for VIF plug/unplug to Neutron. This would require that Neutron provide an agent to run on every compute node that takes care of the plug/unplug actions. This agent would have to have a plugin API so that each Neutron mechanism can provide its own logic for the plug/unplug actions. In addition the agent would have to deal with staged upgrades where an old agent works with new Neutron or a new agent works with old Neutron. There would still need to be work done to formalize the VIF config data passed between Neutron and Nova for the purpose of configuring the guest instance. So this alternative is ultimately pretty similar to what is described in this spec. The current proposal can simply be thought of as providing this architecture, but with the agent actually built in to Nova.
Given the current impl of Neutron & Nova, leveraging Nova as the “agent” on the compute nodes is the lower effort approach with no strong downsides.

REST API impact

This work requires the aforementioned spec to allow Nova to pass details of its supported VIF types to Neutron.

For existing “legacy” VIF types, the data format passed back by Neutron will not change. For the new “modern” VIF types, the data format passed back by Neutron will use the oslo.versionedobjects serialization format, instead of just serializing a plain python dict. In other words, the data will be the result of the following API call:

json.dumps(cfg.obj_to_primitive())

where cfg is the VIFConfig versioned object. This JSON data is thus formally specified and versioned, improving the ability to evolve this in future releases.

In terms of backwards compatibility there are the following scenarios to consider:

- Old Neutron (Kilo), New Nova (Liberty): Nova adds extra info to the request telling Neutron what VIF types and plugins are supported. Neutron doesn’t know about this so ignores it, and returns one of the legacy VIF types. The Nova libvirt driver transforms this legacy VIF type into a modern VIF type, using one of its built-in back-compat plugins. So there should be no loss in functionality compared to old Nova.
- New Neutron (Liberty), Old Nova (Kilo): Nova does not add any info to the request telling Neutron what VIF types are supported. Neutron assumes that Nova only supports the legacy VIF types and so returns data in that format. Neutron does not attempt to use the modern VIF types at all.
- New Neutron (Liberty), New Nova (Liberty): Nova adds extra info to the request telling Neutron what VIF types and plugins are supported. The Neutron mechanism looks at this and decides which VIF type + plugin it wishes to use for the port. Neutron passes back a serialized VIFConfig object instance. Nova libvirt directly uses its modern code path for VIF type handling.
- Even-newer Neutron (Mxxxxx), New-ish Nova (Liberty): Nova adds extra info to the request telling Neutron what VIF types and plugins are supported. Neutron sees that Nova only supports VIFConfigBridge version 1.0, but it has version 1.3. Neutron thus uses obj_make_compatible() to backlevel the object to version 1.0 before returning the VIF data to Nova.
- New-ish Neutron (Liberty), Even-newer Nova (Mxxxx): Nova adds extra info to the request telling Neutron what VIF types and plugins are supported. Neutron only has version 1.0 but Nova supports version 1.3. Nova can trivially handle version 1.0, so Neutron can just return data in version 1.0 format and Nova just loads it and runs.

Security impact

The external VIFPlug classes provided by vendors will be able to run arbitrary code on the compute nodes. This is little different in security risk from the current situation where the libvirt VIF driver plug/unplug method implementations run a fairly arbitrary set of commands on the compute host. One difference though is that the Nova core team will no longer be responsible for reviewing that code, as it will be maintained exclusively by the Neutron mechanism vendor. While it is obviously possible for vendors to add malicious code to their plugin, this isn't a complete free for all: the cloud admin must have taken explicit action to install this plugin on the compute node and have it registered appropriately via stevedore. So this does not allow arbitrary code execution by Neutron.
Other deployer impact

When deploying new Neutron mechanisms, they will include a python module which must be deployed on each compute host. This provides the host OS plug/unplug logic that will be run when adding VIFs to a guest. In other words, while currently a user deploying a mechanism would do pip install neutron-mech-wizzbangnet on the networking hosts, in the new system they must also run pip install nova-vif-plugin-wizzbangnet on any compute nodes that wish to integrate with this mechanism. It is anticipated that the various vendor tools for deploying openstack will be able to automate this extra requirement, so cloud admins will not be appreciably impacted by this.

Developer impact

When QEMU/libvirt (or another hypervisor) invents a new way of configuring virtual machine networking, it may be necessary to define a new versioned object in the os-vif library that is shared between Neutron and Nova. This will involve defining a subclass of VIFConfig, and then implementing the logic in the Nova libvirt driver to handle this new configuration type. Based on the historical frequency of such additions in QEMU, it is expected that this will be a rare occurrence.

When a vendor wishes to implement a new Neutron mechanism, they will have to provide an implementation of the VIFPlug class whose abstract interface is defined in the os-vif library. This vendor specific implementation will not need to be included in the os-vif library itself - it can be distributed and deployed by the vendor themselves. This frees the vendor from having to do a lock-step update to Nova to support their product.

Implementation

Assignee(s)

- Primary assignee: TBD

Other contributors:
- Daniel Berrange <berrange@redhat.com> irc: danpb
- Brent Eagles <beagles@redhat.com> irc: beagles
- Andreas Scheuring
- Maxime Leroy
- Jay Pipes irc: jaypipes

Work Items

1. Create a new os-vif python module in openstack and/or stackforge
2. Implement the VIFConfig abstract base class as a versioned object using oslo.versionedobjects
3. Agree on and define the minimal set of VIF configurations that need to be supported. This is approximately equal to the number of different libvirt XML configs, plus a few for other virt hypervisors
4. Create VIFConfig subclasses for each of the configs identified in step 3
5. Define the VIFPlug abstract base class for Neutron mechanism vendors to implement
6. Extend Neutron such that it is able to ask mechanisms to return VIF port data in either the legacy dict format or as a VIFConfig object instance
7. Extend the Nova/Neutron REST interface so that Nova is able to request use of the VIFConfig data format
8. Add code to Nova to convert the legacy dict format into the new style VIFConfig object format, for back compat with old Neutron
9. Convert the Neutron mechanisms to be able to use the new VIFConfig object model
10. Profit

Dependencies

The key dependency is to have collaboration between the Nova and Neutron teams in setting up the new os-vif python project, and defining the VIFConfig object model and VIFPlug interface. There is also a dependency in agreeing how to extend the REST API in Neutron to allow Nova to request use of the new data format. This is discussed in more detail in the referenced spec, though some aspects of that might need updating to take account of the proposals in this spec.

Once those are done, the Nova and Neutron teams can progress on their respective work items independently.

Testing

The current gate CI system includes coverage for some of the Neutron mechanisms.
Once both Neutron and Nova support the new design, the current CI system will automatically start to test its operation. For Neutron mechanisms that are not covered by current CI, it is expected that the respective vendors take on the task of testing their own implementations, as is currently the case for 3rd party CI.

Documentation Impact

The primary documentation impact is not user facing. The docs required will all be developer facing, so can be done as simple docs inside the respective python projects. There will be some specific release notes required to advise cloud admins of considerations during upgrade. In particular, when upgrading Nova it will be desirable to deploy one or more of the Nova VIF plugins to match the Neutron mechanism(s) that they are currently using. If they fail to deploy the plugin, then the Nova/Neutron negotiation should ensure that Neutron continues to use the legacy VIF type, instead of switching to the modern VIF type.

References

- The proposal to add a negotiation between Neutron and Nova for VIF port binding types. This is a pre-requisite for this spec.
- The alternative proposal to introduce a VIF script to the existing VIF port binding data. This spec obsoletes that.
- The alternative proposal to completely outsource hypervisor VIF driver plugins to 3rd parties once again. This spec obsoletes that.
- Basic impl of library suggested by Jay Pipes.
- Variant of Jay’s design, which more closely matches what is described in this spec.
https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/os-vif-library.html
CC-MAIN-2021-04
en
refinedweb
Add 'state' to the MonadState class

As previously discussed, change the Control.Monad.State.Class module in the "mtl" and "monad-tf" libraries, by adding a method to the MonadState class:

class MonadState s m | m -> s where
    ...
    state :: (s -> (a, s)) -> m a                          -- for mtl
    state :: (StateType m -> (a, StateType m)) -> m a      -- for monads-tf
    state f = do
        s <- get
        let (a, s') = f s
        put s'
        return a

And change the modify function to use 'state':

modify f = state $ \s -> ((), f s)

And add the appropriate instances:

instance (Monad m) => MonadState s (Lazy.StateT s m) where
    ...
    state = Lazy.state

instance (Monad m) => MonadState s (Strict.StateT s m) where
    ...
    state = Strict.state

instance (Monad m, Monoid w) => MonadState s (LazyRWS.RWST r w s m) where
    ...
    state = LazyRWS.state

instance (Monad m, Monoid w) => MonadState s (StrictRWS.RWST r w s m) where
    ...
    state = StrictRWS.state

The actual implementations should go into the "transformers" library:

state :: Monad m => (s -> (a, s)) -> StateT s m a
state f = StateT $ return . f

state :: (Monoid w, Monad m) => (s -> (a, s)) -> RWST r w s m a
state f = RWST $ \_ s -> let (a, s') = f s in return (a, s', mempty)

Note that there is already a function named 'state' in Control.Monad.State, which has 'm' restricted to 'Identity'. This more general function would be a replacement.
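For a sense of how the new method would be used, here is a small usage example (not part of the proposal itself) that rewrites a stack pop in terms of state rather than an explicit get/put pair:

import Control.Monad.State

-- pop written with the proposed 'state' method:
pop :: State [Int] Int
pop = state $ \(x:xs) -> (x, xs)

-- the same function written with explicit get/put:
popExplicit :: State [Int] Int
popExplicit = do
    (x:xs) <- get
    put xs
    return x

main :: IO ()
main = print (runState pop [1, 2, 3])   -- prints (1,[2,3])

Having 'state' as a class method means instances such as StateT can give it a direct, more efficient definition instead of going through get and put.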
https://gitlab.haskell.org/ghc/ghc/-/issues/5714
CC-MAIN-2021-04
en
refinedweb
The try...catch block in Java is used to handle exceptions and prevents the abnormal termination of the program. Here's the syntax of a try...catch block in Java:

try {
  // code
}
catch(exception) {
  // code
}

The try block includes the code that might generate an exception. The catch block includes the code that is executed when there occurs an exception inside the try block.

Example: Java try...catch

In this example, notice the line,

int divideByZero = 5 / 0;

Here, we are trying to divide a number by zero. In this case, an exception occurs. Hence, we have enclosed this code inside the try block. When the program encounters this code, ArithmeticException occurs. And, the exception is caught by the catch block and executes the code inside the catch block. The catch block is only executed if there exists an exception inside the try block.

Note: In Java, we can use a try block without a catch block. However, we cannot use a catch block without a try block.

Java try...finally block

We can also use the try block along with a finally block. In this case, the finally block is always executed whether there is an exception inside the try block or not.

Example: Java try...finally block

class Main {
  public static void main(String[] args) {
    try {
      int divideByZero = 5 / 0;
    }
    finally {
      System.out.println("Finally block is always executed");
    }
  }
}

Output

Finally block is always executed
Exception in thread "main" java.lang.ArithmeticException: / by zero
    at Main.main(Main.java:4)

In the above example, we have used the try block along with the finally block. We can see that the code inside the try block is causing an exception. However, the code inside the finally block is executed irrespective of the exception.

Java try...catch...finally block

In Java, we can also use the finally block after the try...catch block. For example,

import java.io.*;

class ListOfNumbers {

  // create an integer array
  private int[] list = {5, 6, 8, 9, 2};

  // method to write data from the array to a file
  public void writeList() {
    PrintWriter out = null;

    try {
      System.out.println("Entering try statement");

      // creating a new file OutputFile.txt
      out = new PrintWriter(new FileWriter("OutputFile.txt"));

      // writing values from list array to OutputFile.txt
      for (int i = 0; i < 7; i++) {
        out.println("Value at: " + i + " = " + list[i]);
      }
    }
    catch (Exception e) {
      System.out.println("Exception => " + e.getMessage());
    }
    finally {
      // checking if PrintWriter has been opened
      if (out != null) {
        System.out.println("Closing PrintWriter");
        // close PrintWriter
        out.close();
      }
      else {
        System.out.println("PrintWriter not open");
      }
    }
  }
}

class Main {
  public static void main(String[] args) {
    ListOfNumbers list = new ListOfNumbers();
    list.writeList();
  }
}

Output

Entering try statement
Exception => Index 5 out of bounds for length 5
Closing PrintWriter

In the above example, we have created an array named list and a file named OutputFile.txt. Here, we are trying to read data from the array and store it in the file. Notice the code,

for (int i = 0; i < 7; i++) {
  out.println("Value at: " + i + " = " + list[i]);
}

Here, the size of the array is 5 and the last element of the array is at list[4]. However, we are trying to access elements at list[5] and list[6]. Hence, the code generates an exception that is caught by the catch block.

Since the finally block is always executed, we have included code to close the PrintWriter inside the finally block. It is a good practice to use the finally block to include important cleanup code like closing a file or connection.
Note: There are some cases when a finally block does not execute:

- Use of the System.exit() method
- An exception occurs in the finally block
- The death of a thread

Multiple Catch blocks

For each try block, there can be zero or more catch blocks. Multiple catch blocks allow us to handle each exception differently. The argument type of each catch block indicates the type of exception that can be handled by it. For example,

class ListOfNumbers {
  public int[] arr = new int[10];
  public void writeList() {
    try {
      arr[10] = 11;
      ...

In the above example, we have created an integer array named arr of size 10. Since the array index starts from 0, the last element of the array is at arr[9]. Notice the statement,

arr[10] = 11;

Here, we are trying to assign a value to the index 10. Hence, an IndexOutOfBoundsException occurs. When an exception occurs in the try block,

- The exception is thrown to the first catch block. The first catch block does not handle an IndexOutOfBoundsException, so it is passed to the next catch block.
- The second catch block in the above example is the appropriate exception handler because it handles an IndexOutOfBoundsException. Hence, it is executed.

Catching Multiple Exceptions

To learn more, visit Java catching multiple exceptions.

Java try-with-resources statement

To learn more, visit the Java try-with-resources statement.
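Because the code for the last two sections is not included above, here is a brief illustrative sketch (written for this summary, not taken from the original tutorial) showing both a multi-catch clause and a try-with-resources statement:

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

class Demo {
  public static void main(String[] args) {
    // Catching multiple exception types in a single catch block (Java 7+)
    try {
      int[] arr = new int[10];
      arr[10] = 5 / 1;
    } catch (ArithmeticException | ArrayIndexOutOfBoundsException e) {
      System.out.println("Exception => " + e.getMessage());
    }

    // try-with-resources: the PrintWriter is closed automatically,
    // so no explicit finally block is needed for cleanup.
    try (PrintWriter out = new PrintWriter(new FileWriter("OutputFile.txt"))) {
      out.println("Hello");
    } catch (IOException e) {
      System.out.println("IOException => " + e.getMessage());
    }
  }
}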
https://www.programiz.com/java-programming/try-catch
CC-MAIN-2021-04
en
refinedweb
Closed Bug 1534530 Opened 2 years ago Closed 2 years ago

Remove some leftover references to RDF in comments of c-c

Categories: MailNews Core :: Backend, enhancement
Tracking: Thunderbird 67.0
People: (Reporter: aceman, Assigned: aceman)
Attachments: 2 files, 1 obsolete file

There are still some comments in code and other text files in c-c which mention RDF even though RDF was already removed from the place or is no longer relevant.

Remove unused nc: and rdf: namespace declarations in Thunderbird's XUL files. Try run: LDAP comments.

Comment on attachment 9050461 [details] [diff] [review]
1534530-ldap.patch

Review of attachment 9050461 [details] [diff] [review]:
-----------------------------------------------------------------

::: ldap/xpcom/TODO.txt
@@ -99,5 @@
> needs to change: assume all attributes are binary, use some
> heuristic to figure out if they're a string. I wonder how
> ldapsearch does this.
>
> * grep for XXXs and fix the issues

Another trailing space to be killed, or do more move into view then?

@@ -102,5 @@
>
> * grep for XXXs and fix the issues
>
> -rdf datasource
> ---------------

Hmm, you removed the heading without removing the points below it? Are they still an issue? Do they relate to RDF?

Yes, there are many trailing spaces in that file. The points below the heading do not sound like they are particularly related to RDF, so I kept them. They just aren't implemented (or aren't to be implemented) in RDF.

Comment on attachment 9050461 [details] [diff] [review]
1534530-ldap.patch

OK then.

Attachment #9050461 - Flags: review+

Pushed by mozilla@jorgk.com: remove mensions of RDF in ldap. r=jorgk

Attachment #9050220 - Attachment is obsolete: true
Attachment #9051142 - Flags: review?(jorgk)

Comment on attachment 9051142 [details] [diff] [review]
1534530-nc.patch - rebased

OK, thanks.

Attachment #9051142 - Flags: review?(jorgk) → review+

Pushed by geoff@darktrojan.net: remove unused nc: and rdf: namespace in Thunderbird's XUL files. r=jorgk

Status: ASSIGNED → RESOLVED
Closed: 2 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Target Milestone: --- → Thunderbird 67.0
https://bugzilla.mozilla.org/show_bug.cgi?id=1534530
CC-MAIN-2021-04
en
refinedweb
C++ Program to Sort an Unordered Set in STL

Hello Everyone! In this tutorial, we will learn about the working of an Unordered Set and its implementation in the C++ programming language.

Sorting an Unordered Set: An Unordered Set can be sorted by copying its elements to a Vector and then using the sort() method of the STL. For a better understanding of its implementation, refer to the well-commented C++ code given below.

Code:

#include <iostream>
#include <bits/stdc++.h>
using namespace std;

bool cmp(int x, int y)
{
    if (x > y)
        return true;
    else
        return false;
}

//Function to print the elements of the unordered set
void show(unordered_set<int> s)
{
    for (int i : s)
        cout << i << " ";
    cout << "\n";
}

int main()
{
    //Initializing the unordered set with sample values
    unordered_set<int> s = {10, 2, 4, 6, 5};

    cout << "The elements of an Unordered Set, before sorting are: ";
    show(s);

    //Declaring a vector and initializing it with the elements of the unordered set
    vector<int> v(s.begin(), s.end());

    //Sorting the vector elements in descending order using a custom comparator
    sort(v.begin(), v.end(), cmp);

    cout << "\n\nThe elements of the Unordered Set after sorting in descending Order using a Custom sort method are: \n";

    //Declaring an iterator to iterate through the sorted vector and print its elements
    vector<int>::iterator it;
    for (it = v.begin(); it != v.end(); it++)
        cout << *it << " ";
    cout << "\n";

    return 0;
}

In this tutorial, we have learned about sorting an Unordered Set and its implementation in CPP. For any query, feel free to reach out to us via the comments section down below. Keep Learning : )
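As a follow-up note on the design choice above: if ascending order is all that is needed, one plausible alternative (not shown in the original tutorial) is to copy the elements into a std::set, which keeps them sorted automatically:

#include <iostream>
#include <set>
#include <unordered_set>
using namespace std;

int main()
{
    unordered_set<int> s = {10, 2, 4, 6, 5};   // sample values

    // std::set stores its elements in ascending order by default
    set<int> sorted(s.begin(), s.end());

    for (int x : sorted)
        cout << x << " ";
    cout << "\n";
    return 0;
}

The vector-plus-sort approach remains the more flexible one, since it allows an arbitrary ordering such as the descending comparator used above.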
https://studytonight.com/cpp-programs/cpp-program-to-sort-an-unordered-set-in-stl
CC-MAIN-2021-04
en
refinedweb
"In your tutorial in the class AbcOpenSupport you create a new AbcTopComponent, set its displayName to dobj.getName() and return it. In a real world scenario would you rather pass the AbcDataObject to the just created AbcTopComponent or is there another preferred way, how the AbcTopComponent (which acts as an editor) shall access its AbcDataObject? Shall I pass the DataObject as constructor argument to the TopComponent or shall I use a setter method or is there another preferred way?" Well, the answer to the above question is: "That depends." There are two approaches. Do you want a singleton TopComponent or do you want a different instance of the TopComponent per instance of the DataObject? Look at the screenshot below, where the file underlying the "Car" node is as follows: <?xml version="1.0" encoding="UTF-8"?> <car type="Pontiac" color="blue"> <shape_ref file = "wheel.xml"/> <shape_ref file = "door.xml"/> </car> When the user selects "Car", I have a singleton TopComponent that displays the "type" and "color" attributes: So, when the user selects a different car, the same singleton TopComponent is used: However, maybe you do not like this approach at all. Maybe you'd like there to be different instances of the TopComponent for each car, which means that multiple cars would be editable simultaneously. Here is my OpenSupport class for the first scenario, i.e., I want a singleton TopComponent (the other approach is described in the tutorial, yes, either pass into the constructor or use a setter method): public class CarOpenSupport extends OpenSupport implements OpenCookie, CloseCookie { public CarOpenSupport(CarDataObject.Entry entry) { super(entry); } @Override protected CloneableTopComponent createCloneableTopComponent() { return CarDesignTopComponent.findInstance(); } } And, in my TopComponent, here is the "findInstance" method referred to above: private static CarDesignTopComponent instance; public static CarDesignTopComponent findInstance(){ if(instance==null){ instance = new CarDesignTopComponent(); return instance; } return instance; } OK, so now we have a single instance of the TopComponent. Now, in the TopComponent, we listen for the DataObject, which is automatically exposed whenever the user selects a different car: private Result<CarDataObject> result = null; @Override public void resultChanged(LookupEvent le) { Collection<? 
extends CarDataObject> allCarNodes = result.allInstances(); if (!allCarNodes.isEmpty()) { CarDataObject carDataObject = result.allInstances().iterator().next(); FileObject file = carDataObject.getPrimaryFile(); displayTypeAndColor(file); } } @Override public void componentOpened() { result = Utilities.actionsGlobalContext().lookupResult(CarDataObject.class); result.addLookupListener(this); resultChanged(new LookupEvent(result)); } @Override public void componentClosed() { result.removeLookupListener(this); } And here's the "displayTypeAndColor" method, taken straight from the NetBeans XML Editor Extension Module Tutorial: private void displayTypeAndColor(FileObject file) { try { //Get the InputStream of the file: InputStream is = file.getInputStream(); //Use the NetBeans org.openide.xml.XMLUtil class to create); //Get the name of the node: String nodeName = mainNode.getNodeName(); if (nodeName.equals("car")) { //Create a map for all the attributes of the org.w3c.dom.Node: NamedNodeMap map = mainNode.getAttributes(); /(); if (attrName.equals("color")) { colorField.setText(attrNode.getNodeValue()); } if (attrName.equals("type")) { typeField.setText(attrNode.getNodeValue()); //Set the text in the tab of the TopComponent: setDisplayName("Car: " + attrNode.getNodeValue()); } } } } is.close(); } catch (IOException ex) { Exceptions.printStackTrace(ex); } catch (SAXException ex) { Exceptions.printStackTrace(ex); } } Probably the above could be done without the iterations, by simply identifying the required node name and attribute names. Next time, we'll persist changes in the TopComponent back to the file. Hello Geertjan, Is it possible to use JAXB bindings (generated from XSD) in NetBeans Application? Instead writing code in displayTypeAndColor maybe you could just unmarshall it, IMHO it will be much more readable and you would have opportunity to validate XML documents with XSD. Currently I've tried to create JAXB bindings in NB Platform app/module but it's not possible (in Ant and Maven build) - in normal Java application it works fine. Why I can't do it in NB Platform App, do you have any tips how to integrate JAXB with NB Platform App? I've got also second question about Maven Support in NetBeans. Is there going to be something like "Library wrapper module" in Maven project? I have commercial libraries (IBM JDBC Drivers), and I really don't want to create maven repository for few libraries, but my app requires it to work correctly. It looks like last commend was not posted.
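Coming back to the remark about avoiding the iteration: a variant that looks the attributes up directly by name on the NamedNodeMap could look like the sketch below. This is only an illustration written for this post's context (it assumes the same colorField, typeField and setDisplayName members of the TopComponent), not code from the tutorial:

private void displayTypeAndColorDirect(Node mainNode) {
    // Look the attributes up by name instead of iterating over the map.
    NamedNodeMap map = mainNode.getAttributes();
    Node colorAttr = map.getNamedItem("color");
    Node typeAttr = map.getNamedItem("type");
    if (colorAttr != null) {
        colorField.setText(colorAttr.getNodeValue());
    }
    if (typeAttr != null) {
        typeField.setText(typeAttr.getNodeValue());
        setDisplayName("Car: " + typeAttr.getNodeValue());
    }
}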
https://blogs.oracle.com/geertjan/car-designer-on-the-netbeans-platform-part-2
CC-MAIN-2021-04
en
refinedweb
Dist: Ubuntu 14.04, Unity. POL: 4.2.2

I didn't want to put this in the POL forum because it's more a Steam problem than POL (I believe), I just figured someone here would know more as it is running the game with a POL shortcut. I recently got Swordsman working in POL, everything is fine, it launches with the shortcut "/usr/share/playonlinux/playonlinux --run "Swordsman" %F" as expected, but I can't seem to launch it in Steam. I have searched around and have tried adding the .desktop file with a script as explained elsewhere. When I run the game from Steam, Steam shows it launch for about half a second then stops, and the game doesn't even launch. It seems that Steam tries to run it but fails somewhere and it doesn't even get to POL, otherwise I'd at least get the game pop up.

My Swordsman.desktop file:

[Desktop Entry]
Type=Application
Name=Swordsman
Exec=/home/mattio/Documents/Swordsman.sh
Icon=/home/mattio/.PlayOnLinux/wineprefix/Arc/drive_c/Program Files/Perfect World Entertainment/Swordsman_en/patcher/myicon.ico
Terminal=false

/home/mattio/Documents/Swordsman.sh

#!/bin/bash
/usr/share/playonlinux/playonlinux --run "Swordsman" %F

Any help would be great, hopefully I haven't missed something blatantly obvious, I just want to have Steam's overlay ready for when the game arrives :P

Awesome! I can't wait to play Swordsman and make a PlayOnLinux guide. What are your computer specs? I'm sure you knew that the Arc client will run Swordsman as well... Arc runs beautifully in PlayOnLinux, so try downloading and installing from it. It's a great game! Definitely recommend trying it.

I'll tell you what happened with POL. I managed to get Swordsman running by its files only, without Arc (I downloaded on a Windows machine a while back and to save time I copied them over). So I could run it directly and I was happy with that, but that was before the closed beta release. Just before release they released a patch! Broke my damn setup and wouldn't launch the game lol. I have a Windows partition so I ran on that because it released and I was eager to play :P. I found the last Swordsman patch forces you to run the game through Arc, which really sucked (for me), so I haven't had a chance to try to fix it for Linux, so I'll try again soon (maybe end of closed beta) or await a guide.

My main specs are i5 2500 (4GHz), GTX 660Ti, 8GB GSkill RipjawX RAM, OCZ Agility SSD. Ubuntu 14.04.

I did the same thing with Neverwinter while it was in beta. Ran fine without Arc, but when it went live I had to install Arc. Good thing is, Arc runs just fine in PlayOnLinux. Check out this Guide on installing Arc for Neverwinter: Neverwinter Guide. Just get Arc running and then install Swordsman and do the patching/updating. I wonder if you could speed up the process by moving the beta game folders to the same place Arc installs Swordsman? So it's out of beta? I'm going to have to try it! Star Trek Online and Neverwinter run just fine, so I bet it's the same engine!

Hey Mattio, I finally started testing World of Swordsman and am having problems as well. I have successfully gotten Arc to install and download the entire game, but when I launch there is a Swordsman logo, some strange buggy animation and then CRASH. I tried several versions of Wine: 1.6.2, 1.7.19, 1.7.21. I tried Windows versions 7 & XP. I also installed a number of additional libraries. I can post the debug, but there isn't anything significant that I can see. Maybe it's just the intro video that is causing problems, but the debug has some codec PNG fixme outputs. Not sure what is up with that.
I've tried to install, with no luck. With 1.7.19 I get "Sorry, your client resources are incomplete." and if I click verify I get "Verification Interrupted!". With Wine 1.7.10 it works fine, but after the book image the game crashes; with older versions I get buggy text and images and it crashes on the book as well. I've tried to open it directly or through Arc, and I've tried installing on a Windows machine and then copying it over, but I always got the same results.

Thanks for replying TeraJL, I'm glad that I am not the only one with these problems. I get the exact same errors no matter how I launch it. I even checked the Arc support forums and many Windows people are having the same problems. Guess we will be waiting for the next update. Please post your results as you keep testing. I really want to play Swordsman in Linux!!! I actually got to the Blue Book by copying the Swordsman_en folder to a new virtual drive and installing Arc again. But of course, it crashes. By the way, Swordsman isn't even available on Steam... so are you using Arc? I will continue launching Swordsman every day to see if there is an update.

I'm thinking about testing Swordsman again. Has anyone had any recent success with newer versions of Wine?

I got ARC & SWORDSMAN working in POL

Distribution: Slackware 14.1 x86_64, multilib enabled
POL: 4.2.2... sorry, POL 4.2.5, WINE 1.9.10 (x86)
update: WINE 1.9.10-staging (x86) works, CSMT for enhanced video also works, but CSMT eats up more RAM & CPU cycles, up to 30%-40% memory & 50% cpu cycles; it's faster when running a single client (xajh.exe), but slower when running multiple instances, all depends on the hardware available (hyperthreading & extra RAM)

here are the steps:

- gecko (to install when prompted in "Configure WINE", or use POL "Install components")
- mono (to install when prompted in "Configure WINE")
- Configure WINE: WinXP (windows version); VIDEO RAM, set according to GPU hardware
- POL "Install Components": d3dx9, msvc80, msvc90, msvc100, vcrun2008, vcrun2010

PROBLEMS:
- character BAG doesn't display fonts properly, the quantity of items in a stack is unreadable (update: found the simple fix for unreadable fonts, change game resolution to aspect ratio 16:9)
- game may crash during screen loading, maybe 2 out of 5 times, but the game will run properly once loaded
- if the game turns into a crash loop/stuck/no response, kill the SWORDSMAN client (patcher.exe) and click PLAY again in ARC, pause for 1-2 mins before clicking START
- some distributions may need libtxc-dxtn (32bit package) also (if you found a solution, please post here, thank you)

2 methods to install SWORDSMAN

1ST METHOD (normal way)
- install ARC, update
- install SWORDSMAN via ARC
- launch SWORDSMAN by clicking PLAY

2ND METHOD (copying old SWORDSMAN folder, then edit registry)
- install ARC (uncheck all 3 boxes at the end, DO NOT LAUNCH ARC!), then follow the 32-bit or 64-bit steps below
if using 32-bit windows or WINE:
- copy the old Swordsman_en folder into "Program Files"
- prepare the following registry file, save as SMO_32BIT_COPY.reg

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Perfect World Entertainment\Core]

[HKEY_LOCAL_MACHINE\SOFTWARE\Perfect World Entertainment\Core\30en]
"INSTALL_PATH"="C:\\Program Files\\Swordsman_en\\"
"CLIENT_PATH"="C:\\Program Files\\Swordsman_en\\patcher\\patcher.exe"
"APP_ABBR"="swm"
"installed"=dword:00000001

- launch POL Registry Editor
- import Registry file SMO_32BIT_COPY.reg
- quit POL Registry Editor
- Launch POL, run ArcLauncher.exe located in your ARC folder
- SWORDSMAN will be listed as installed, just click PLAY

if using 64-bit windows or WINE:
- copy the old Swordsman_en folder into "Program Files (x86)"
- prepare the following registry file, save as SMO_64BIT_COPY.reg

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Perfect World Entertainment\Core]

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Perfect World Entertainment\Core\30en]
"INSTALL_PATH"="C:\\Program Files (x86)\\Swordsman_en\\"
"CLIENT_PATH"="C:\\Program Files (x86)\\Swordsman_en\\patcher\\patcher.exe"
"APP_ABBR"="swm"
"installed"=dword:00000001

- launch POL Registry Editor
- import Registry file SMO_64BIT_COPY.reg
- quit POL Registry Editor

Enjoy!

Edited by pokipokipxorn

Awesome! Thanks for posting all of your findings. I'll definitely check out Swordsman again and test it on my machine. Do you know if there is a download/installer for Swordsman so you don't have to use ARC? They have one for Neverwinter, Star Trek Online and Champions Online.

Not that I know of, but once you installed the game via ARC, you can just copy the folder Swordsman_en and edit the registry (see 2ND METHOD in the post above).

I'm looking for the equivalent of hotkeynet in linux, it's a multiboxing app that sends the same keystroke to 3 different game windows simultaneously. Any recommendations? thks

I can't even get ARC installed
https://www.playonmac.com/en/topic-12026-NonSteam_game_with_POL.html
CC-MAIN-2020-24
en
refinedweb
#include <hallo.h>
* Henning Makholm [Fri, Feb 18 2005, 09:13:26PM]:

> Given the tendency of people like me to just repeat the procedures
> that worked for 2.4, it might be a good idea for make-kpkg to check
> whether the necessary files are present in the kernel tree (and warn
> loudly if they are not) when one tries to build modules. On the other
> hand I have no idea what would be involved in checking this, so it
> might be probitively difficult.

It is difficult. Many modules need just the kernel build scripts (which are included in most kernel-headers packages nowadays, either completely or shared with others via the kernel-kbuild package). Some other modules require more (header files from the core source or even whole source trees). Currently there is no way to see what the module source really needs. Maintainers document that in README.Debian but nobody reads that.

I have a request to implement some hocus-pocus in module-assistant to warn the users more loudly when the headers are insufficient.

Regards,
Eduard.

--
Russian roulette for linux:
[ $[ $RANDOM % 6 ] == 0 ] && rm -rf / || echo "Still breathing, eh?"
https://lists.debian.org/debian-devel/2005/02/msg00957.html
CC-MAIN-2020-24
en
refinedweb
Created on 2005-02-04 01:27 by falsetru, last changed 2009-09-15 00:01 by orsenthil. This issue is now closed.

I expected this:

>>> os.path.splitext('/path/to/.Hiddenfile')
('/path/to/.Hiddenfile', '')

but got this:

>>> os.path.splitext('/path/to/.Hiddenfile')
('/path/to/', '.Hiddenfile')

Logged In: YES user_id=147070

from test_posixpath.py ::

    self.assertEqual(posixpath.splitext(".ext"), ("", ".ext"))

IMHO it should then return (".ext", ""). If this is desired ::

    if i <= p.rfind('/'):
        return p, ''
    else:
        return p[:i], p[i:]

should do.

Logged In: YES user_id=1188172

Interestingly, altering the behaviour of splitext in such a way does not contradict the documentation, which is:

"""
Split the pathname path into a pair (root, ext) such that root + ext == path, and ext is empty or begins with a period and contains at most one period.
"""

Personally I'm in favour of this change (on Unix it makes sense, while on Windows you can hardly find an "extension-only" file).

Logged In: YES user_id=261020

-1. I hate to be a stick-in-the-mud, but the existing behaviour is what I would expect, and seems to be regular -- always picks the last dot:

>>> os.path.splitext('a/b/c/foo.bar')
('a/b/c/foo', '.bar')
>>> os.path.splitext('a/b/c/f.oo.bar')
('a/b/c/f.oo', '.bar')
>>> os.path.splitext('a/b/c/.foo')
('a/b/c/', '.foo')
>>> os.path.splitext('a/b/c/.foo.txt')
('a/b/c/.foo', '.txt')

Changing it would surely break somebody's code too, of course.

1462106 is a patch, though perhaps not the latest.

python-dev is currently debating whether to fix this behavior or maintain backwards-compatibility. That suggests that it at least won't be changed in a bugfix version (like 2.4.x), and the group should be changed to 2.6.

After some discussion on python-dev, I fixed this in r54204.

I've read parts of the python-dev discussions, but I don't agree with this change: mimetypes.guess_type() now recognises '.ogg' as None.

Alexandru: You commented on a closed issue. If you see any problem with mimetypes.guess_type() w.r.t. .ogg files, please open a new issue stating your problem.
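For reference, the rule being discussed can be sketched as a small self-contained function. This is an illustrative reimplementation written for this summary, not the actual committed patch; it treats leading dots of the last path component as part of the root, which is the behaviour the fix introduced:

def splitext_posix(p):
    """Like posixpath.splitext after the change: leading dots of the
    last path component never begin an extension."""
    sep_index = p.rfind('/')
    dot_index = p.rfind('.')
    if dot_index > sep_index:
        # Skip leading dots of the basename; only a dot that follows
        # a non-dot character starts an extension.
        filename_index = sep_index + 1
        while filename_index < dot_index:
            if p[filename_index] != '.':
                return p[:dot_index], p[dot_index:]
            filename_index += 1
    return p, ''


assert splitext_posix('/path/to/.Hiddenfile') == ('/path/to/.Hiddenfile', '')
assert splitext_posix('/path/to/file.txt') == ('/path/to/file', '.txt')
assert splitext_posix('a/b/c/.foo.txt') == ('a/b/c/.foo', '.txt')
assert splitext_posix('.ext') == ('.ext', '')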
https://bugs.python.org/issue1115886
CC-MAIN-2020-24
en
refinedweb
Bulk Pending Orders Bot (free)

Description: Contact me and I will send it for FREE.

Warning! Executing the following cBot may result in loss of funds. Use it at your own risk.

Formula / Source Code:

using System;
using System.Linq;
using cAlgo.API;
using cAlgo.API.Indicators;
using cAlgo.API.Internals;
using cAlgo.Indicators;

namespace cAlgo.Robots
{
    [Robot(TimeZone = TimeZones.UTC, AccessRights = AccessRights.None)]
    public class Test : Robot
    {
        protected override void OnStart()
        {
            // Put your initialization logic here
            Print("Contact me on ");
        }

        protected override void OnTick()
        {
            // Put your core logic here
        }

        protected override void OnStop()
        {
            // Put your deinitialization logic here
        }
    }
}
https://ctrader.com/algos/cbots/show/2207
CC-MAIN-2020-24
en
refinedweb
Community Discussion board where members can learn more about Integration, Extensions and APIs for Qlik Sense. Hi all, Is there a way to trigger a Qlik Sense reload using AWS Lambda? Does anyone have some sample code to share? Thank you I've done this using the qsAPI package GitHub - rafael-sanz/qsAPI: QlikSense python API client for QPS and QRS interfaces I added a "StartTask" method to the library. I suppose I should submit a pull request to GitHub, but for now here is the method I added:

def StartTask(self, pName):
    '''
    @Function: Start a task by name
    @param pName: Task Name
    @return : json response
    '''
    return self.driver.post('/qrs/task/start/synchronous', param={'name': pName}).json()

And here's my AWS Lambda function:

def lambda_handler(event, context):
    import qsAPI
    import os
    PROXY = os.environ['proxy']
    qrs = qsAPI.QRS(proxy=PROXY, certificate='client.pem')
    qrs.StartTask('ReloadCallCenterStatus')

"proxy" is an environment variable that contains my Qlik Sense server address. -Rob
https://community.qlik.com/t5/Qlik-Sense-Integration-Extensions-APIs/How-to-trigger-a-Task-reload-using-AWS-Lambda/td-p/6217
CC-MAIN-2020-24
en
refinedweb
In Build 2018 Microsoft introduced the preview of ML.NET (Machine Learning .NET), a cross-platform, open-source machine learning framework. Yes, now it is easy to develop our own machine learning applications or custom models. In this article we will see how to develop our first ML.NET application for a clustering model. Machine learning is nothing but a set of programs used to train the computer to predict and display the output for us. Example live applications which use machine learning are Windows Cortana, the Facebook News Feed, self-driving cars, future stock prediction, Gmail spam detection, PayPal fraud detection and so on. In machine learning there are 3 main types, and in each type we use an algorithm to train the machine to produce the result. In our previous article we explained predicting future stock for an item using ML.NET with a regression model (supervised learning). In this article we will see how to work with a clustering model on a simple mobile sales dataset, with cluster membership counted by sex (Male/Female) and by usage before and after 2010. The model gets trained on this data and is then used to predict the result. The predicted result will be displayed as a Cluster ID and a Score in our console application; the Score is the distance from the input to each cluster. Make sure you have installed all the prerequisites on your computer. If not, then download and install Visual Studio 2017 15.6 or later with the ".NET Core cross-platform development" workload installed. In the NuGet Package Manager, select the Browse tab and search for Microsoft.ML, click on Install, accept the license and wait till the installation completes. We can see that the Microsoft.ML package has been installed and all the references for Microsoft.ML have been added to our project references. Now we need to create model training and evaluation datasets. For creating these we will add two CSV files, one for training and one for evaluation. We will create a new folder called Data in our project to add our CSV files. Right-click the Data folder, click on Add >> New Item >> select the text file and name it "custTrain.csv". Select the properties of "custTrain.csv" and change Copy to Output Directory to "Copy always". Add your CSV file data like below. Here we have added the data with the following fields: (Feature) Male - total no. of phones in use; (Feature) Female - total no. of phones in use; (Feature) Before2010 - total no. of phones in use; (Feature) After2010 - total no. of phones in use; MobilePhone - mobile phone type. For the prediction output we reference using Microsoft.ML.Runtime.Api; which ML.NET uses to map the cluster output. Note: it is important that in the prediction class we set the column name "Score" with data type float[], and "PredictedLabel" as uint.

public class ClusterPrediction
{
    [ColumnName("PredictedLabel")]
    public uint PredictedCustId;

    [ColumnName("Score")]
    public float[] Distances;
}

We also add the following namespaces to the program:

using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Models;
using Microsoft.ML.Trainers;
using Microsoft.ML.Transforms;
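The pipeline code further down reads the CSV rows into a CustData input class whose listing is not shown here. A minimal sketch of what it likely looks like is below; the Column ordinals are assumptions based on the CSV layout described above, not taken from the article:

using Microsoft.ML.Runtime.Api;

public class CustData
{
    [Column("0")]
    public float Male;

    [Column("1")]
    public float Female;

    [Column("2")]
    public float Before2010;

    [Column("3")]
    public float After2010;
}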
We set the custTrain.csv data path and the model path. For the training data we give the "custTrain.csv" path. The final trained model needs to be saved in order to produce results; for this we set the model path to the "custClusteringModel.zip" file. The trained model will be saved to that zip file automatically at runtime, in our bin folder with all the needed files.

static readonly string _dataPath = Path.Combine(Environment.CurrentDirectory, "Data", "custTrain.csv");
static readonly string _modelPath = Path.Combine(Environment.CurrentDirectory, "custClusteringModel.zip");

Change the Main method to an async Task Main method like the code below, then change the language version to C# 7.1: in the Project Properties >> Build tab, click the Advanced button at the bottom and change the Language Version to C# 7.1.

static async Task Main(string[] args)
{
    PredictionModel<CustData, ClusterPrediction> model = await Train();
}

public static async Task<PredictionModel<CustData, ClusterPrediction>> Train()

In the Train method we train the model and save it to the zip file. In training, the first step is creating the LearningPipeline(). The LearningPipeline loads all the training data to train the model. The TextLoader is used to get all the data from the training CSV file, and here we set useHeader: true to avoid reading the first (header) row of the CSV file as data. Next, we add all our feature columns to be trained and evaluated. The learner then trains the model. We have selected a clustering model for our sample and we will be using the KMeansPlusPlusClusterer learner. KMeansPlusPlusClusterer is one of the clustering learners provided by ML.NET. Here we add the KMeansPlusPlusClusterer to our pipeline. We also need to set the K value to the number of clusters we want in our model; here we have 3 segments (Windows Mobile, Samsung and Apple), so we set K = 3 in our program. Finally, we train and return the model from this method.

// Start learning
var pipeline = new LearningPipeline();

// Load the training data
pipeline.Add(new TextLoader(_dataPath).CreateFrom<CustData>(useHeader: true, separator: ','));

// Add the feature columns
pipeline.Add(new ColumnConcatenator("Features", "Male", "Female", "Before2010", "After2010"));

// Add the KMeans++ algorithm with K = 3 (we have 3 sets of clusters)
pipeline.Add(new KMeansPlusPlusClusterer() { K = 3 });

// Train the model and return it
var model = pipeline.Train<CustData, ClusterPrediction>();
return model;

Now it is time to produce the predicted results from the model. For this we will add one more class, and in this class we will give the inputs. Create a new class named "TestCustData.cs". We add the values to the TestCustData class, using the columns we already defined for model training.

static class TestCustData
{
    internal static readonly CustData PredictionObj = new CustData
    {
        Male = 300f,
        Female = 100f,
        Before2010 = 400f,
        After2010 = 1400f
    };
}

We can see that our custTrain.csv file has the same kind of data for the inputs. In our program's Main method, we add the code below after the Train method call to predict the Cluster ID and distances and display the results in the command window.

var prediction = model.Predict(TestCustData.PredictionObj);
Console.WriteLine($"Cluster: {prediction.PredictedCustId}");
Console.WriteLine($"Distances: {string.Join(" ", prediction.Distances)}");
Console.ReadLine();

When we run the program, we can see the result in the command window like below. ML.NET (Machine Learning DotNet) is a great framework for all the .NET lovers who are looking to work with machine learning. Right now only a preview version of ML.NET is available, and it is a great framework to get started with. Hope you all enjoy reading this article and see you all soon with another post.
https://social.technet.microsoft.com/wiki/contents/articles/52114.machine-learning-dotnet-for-clustering-model-getting-started.aspx
CC-MAIN-2020-24
en
refinedweb
A Simple Database in Plain Old C C might be regarded as an old low-level language but it actually is very powerful. A database can be created very easily in C. The following is a manual on how to create a simple database in C. This code is available in my github repo :-) Heap vs Stack Allocation Understanding the difference between heap and stack is very important for this tutorial. A chunk of RAM can be on the heap or the stack. Stack is a special region in memory that is used by functions to store variables; when the function ends it is cleaned up by C. When you have too much data on the stack, you will get a stack overflow error. Heap is memory created by malloc(), which returns a pointer to it. When you are done with it you must free it with free() to return it to the OS; failure to do so will cause a program to leak memory. Importing Necessary Tools For this program we shall require the following includes:

#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>

Constants We now get to set some very important constants, MAX_DATA and MAX_ROWS, as follows (no trailing semicolons, since the macros are pasted into array declarations later):

#define MAX_DATA 512
#define MAX_ROWS 100

Structs This database shall have 3 structs. A struct in C is simply an object which really doesn't have functions in it. If you make a pointer to it, you use the -> arrow as you shall see later. A structure has elements in it and it can be called a compound data type since it contains more than one data type. The Address struct has 4 elements, of which name and email are fixed-size buffers of MAX_DATA characters.

struct Address {
    int id;
    int set;
    char name[MAX_DATA];
    char email[MAX_DATA];
};

This Database struct is essentially for accessing the database, and it has one element.

struct Database {
    struct Address rows[MAX_ROWS];
};

Finally the Connection struct is the most important since it is used everywhere to handle connections.

struct Connection {
    FILE *file;
    struct Database *db;
};

Die Function We shall start off with the die function; its primary role is to handle error messages. This function has an if statement which first prints out the perror message. Note that perror is the C library function void perror(const char *str) that prints a descriptive error message to stderr. First the string str is printed, followed by a colon then a space.

void die(const char *message)
{
    if (errno) {
        perror(message);
    } else {
        printf("ERROR: %s\n", message);
    }
    exit(1);
}

The next piece of code ensures that it can also print a specific message 'ERROR' + message. I started off with the die function since it is utilised by all major functions in the code. Address_print (Heap) Next we have the Address_print function. Its role is to print out the addresses as assigned to the Address structure. When you pass a structure to a function through a pointer (the * notation), you are working with memory that can live on the heap; without it, the struct is copied on the stack. Here we are using the heap to move around structs that need to be shared across functions.

void Address_print(struct Address *addr)
{
    printf("%d %s %s\n", addr->id, addr->name, addr->email);
}

Here we use the -> notation to access the Address struct and print out the id, name, and email. One of the parameters of the function is a pointer addr. By adding the *addr, we are de-referencing the pointer so as to return the value of what addr is pointing to. So when we call the printf function, it simply prints the id, name and email held in the memory addr points to, each with its specific data type.
To get what value is being held in that pointer, just add a *addr to it (de-referencing). Main Function We shall now create a main function to test that our program is working well.

int main(int argc, char *argv[])
{
    if (argc < 3) die("USAGE: ex17 <dbfile> <action> [action params]");

    // quick check that Address_print works
    struct Address addr = {.id = 10};
    Address_print(&addr);

    return 0;
}

This function just checks that you have entered enough arguments (argc) and throws an error to tell you. Nothing much, moving on… With no errors, our database is okay, let's move to the next step. Step 2 Database_create When we talk of a database, the key part is creating the database. So we shall start with the function that is responsible for that. We shall call it: Database_create

void Database_create(struct Connection *conn)
{
    int i = 0;

    for (i = 0; i < MAX_ROWS; i++) {
        // make a prototype to initialise it
        struct Address addr = {.id = i, .set = 0};
        // then just assign it
        conn->db->rows[i] = addr;
    }
}

This function has the parameter struct Connection *conn. This is heap. Inside the function there is a loop variable declaration which is used to iterate up to MAX_ROWS. If you look carefully you will notice that we are creating an address and setting the default values, and since we are not using the * notation, this is stack. Stack means that this memory is created in RAM and once this function exits, the memory is deleted likewise. You don't have to free that memory. Then the last statement simply uses the pointer conn to reach the pointer db and assign addr into rows[i]. Before I compile the code I need to call the function Database_create in the main function for me to see results. However, in the main function Database_create is called using a case statement. When I call the function Database_create, I have to pass it a parameter conn like this: Database_create(conn) At this moment C does not know what conn is, so we have to define conn. conn is a pointer. Here is how the main function looks:

int main(int argc, char *argv[])
{
    if (argc < 3) die("USAGE: ex17 <dbfile> <action> [action params]");

    char *filename = argv[1];
    char action = argv[2][0];
    struct Connection *conn = Database_open(filename, action);

    switch (action) {
        case 'c':
            Database_create(conn);
            break;

        default:
            die("Invalid action");
    }

    return 0;
}

We define conn as: struct Connection *conn = Database_open(filename, action); The value of conn is the result of Database_open(filename, action). Therefore, for our main function to work we have to create a Database_open function. Database_open The Database_open function is primarily responsible for opening the database. It looks like below:

struct Connection *Database_open(const char *filename, char mode)
{
    struct Connection *conn = malloc(sizeof(struct Connection));
    if (!conn) die("Memory error");

    conn->db = malloc(sizeof(struct Database));
    if (!conn->db) die("Memory error");

    if (mode == 'c') {
        conn->file = fopen(filename, "w");
    } else {
        conn->file = fopen(filename, "r+");

        if (conn->file) {
            Database_load(conn);
        }
    }

    if (!conn->file) die("Failed to open the file");

    return conn;
}

This function takes in two parameters: a pointer of type char which is constant (the filename) and a mode variable of type char. We first start by creating a struct Connection on the heap with the pointer *conn. Remember conn is now pointing to memory created by malloc which is the size of struct Connection. Since conn is a pointer, de-referencing it gives us the contents of that memory rather than the address of the memory location. The next step is to check whether the memory allocation was successful. We also create a Database struct on the heap, again using malloc. After that we use an if statement with a logical NOT, if (!conn->db), to see whether the db pointer inside conn was allocated; if not, we call the die function and give an error.
If mode is equal to 'c', that is, when the user passes a 'c' for database create ('c' is handled in the case statement in the main function), then the file open is executed. When you open a file using fopen(), the C library function FILE *fopen(const char *filename, const char *mode) opens the file pointed to by filename using the given mode. The usage is as follows: FILE *fopen(const char *filename, const char *mode) The following parameters must be given to fopen: filename − This is the C string containing the name of the file to be opened. mode − This is the C string containing a file access mode. It includes − "r" - Opens a file for reading. The file must exist. "w" - Creates an empty file for writing. If a file with the same name already exists, its content is erased and the file is considered as a new empty file. "a" - Appends to a file. Writing operations append data at the end of the file. The file is created if it does not exist. "r+" - Opens a file to update both reading and writing. The file must exist. "w+" - Creates an empty file for both reading and writing. "a+" - Opens a file for reading and appending. So when the user passes a 'c', the file is opened with "w", that is, creating an empty file; otherwise it is opened with "r+" for reading and writing. This is awesome. If the file was opened for reading and writing, we then proceed to call the Database_load function and pass it the pointer conn. The function ends by returning conn. Let us create the Database_load function. Database_load The Database_load function is used to load a database. It takes the struct Connection *conn pointer as its parameter.

void Database_load(struct Connection *conn)
{
    int rc = fread(conn->db, sizeof(struct Database), 1, conn->file);
    if (rc != 1) die("failed to load database");
}

We create an rc variable of type int and store into it the value returned by fread(). fread() is a function that reads data from a given stream into the array pointed to by a pointer. The fread() here takes in 4 parameters: - conn->db - pointer linkage; this is the pointer to a block of memory with a minimum size of size*nmemb bytes. - sizeof(struct Database) - This is the size in bytes of each element to be read. - 1 - This is the number of elements, each one with a size of size bytes. - conn->file - This is the pointer to a FILE object that specifies an input stream. The next line of code checks whether the value of rc is NOT equal to 1; if so, we call the die function to print the error message. With all those functions in place, it's time to test our database program. We first compile it

make ex17_Database_create

Then execute with the following parameters

./ex17_Database_create test.db c

The 'c' is for create. When you execute this program, argv[1] contains 'test.db' and argv[2] contains 'c' in the main function. At this point, in this line: struct Connection *conn = Database_open(filename, action); Database_open is called and the filename and action are passed on to it. The function returns a value which is stored in the struct Connection *conn pointer. Since the file does not exist, the Database_open function opens it with the "w" option to create the file. The last step calls Database_create, since the 'c' option was selected in the case statement.
https://wilfred.githuka.com/post/c_database/
CC-MAIN-2020-24
en
refinedweb
As I'm preparing a talk about refinement types that I will be giving this Thursday at the Functional Tricity Meetup, and I've recently given a similar talk using the Scala language as well, I realized there is a missing typeclass in Haskell. In the following sections, I will be providing examples and use cases for this typeclass to showcase why it would be great to have it in Haskell. Oh, yes… I love refinement types as well! In Haskell, we have the refined library and other more complex tools such as Liquid Haskell. Refinement types Refinement types give us the ability to define validation rules, or more commonly called predicates, at the type level. This means we get compile-time validation whenever the values are known at compile-time. Say we have the following predicates and datatype:

import Refined

type Age = Refined (GreaterThan 17) Int
type Name = Refined NonEmpty Text

data Person = Person
  { personAge :: Age
  , personName :: Name
  } deriving Show

We can validate the creation of Person at compile-time using Template Haskell:

me :: Person
me = Person $$(refineTH 32) $$(refineTH "Gabriel")

If the age was a number under 18, or the name was an empty string, then our program wouldn't compile. Isn't that cool? Though, most of the time, we need to validate incoming data from external services, meaning runtime validation. Refined gives us a bunch of useful functions to achieve this, effectively replacing smart constructors. The most common one is defined as follows:

refine :: Predicate p x => x -> Either RefineException (Refined p x)

We can then use this function to validate our input data.

mkPerson :: Int -> Text -> Either RefineException Person
mkPerson a n = do
  age <- refine a
  name <- refine n
  return $ Person age name

However, the program above will short-circuit on the first error, as any other Monad will do. It would be nice if we could validate all our inputs in parallel and accumulate errors, wouldn't it? We can achieve this by converting the Either values given by refine into Validation, using Applicative functions to compose the different parts, and finally converting back to Either.

import Data.Validation

mkPerson :: Int -> Text -> Either RefineException Person
mkPerson a n = toEither $ Person <$> fromEither (refine a) <*> fromEither (refine n)

As we can see, it is a bit clunky, and this is a very repetitive task, which will only increase the amount of boilerplate in our codebase. This seems to be the status quo around validation in Haskell nowadays, and it was the same in Scala. So it's kind of hard to realize we are missing what we don't know: the Parallel typeclass. I didn't know it was such a game changer until I started using it everywhere. This is exactly what this typeclass does for us in other languages, via its helpful functions and instances. Unfortunately, it doesn't exist in Haskell, as far as I know… until now! Parallel typeclass Let me introduce you to the Parallel typeclass, already present in PureScript and Scala:

import Control.Natural ((:~>))

class (Monad m, Applicative f) => Parallel f m | m -> f, f -> m where
  parallel   :: m :~> f
  sequential :: f :~> m

It defines a relationship between a Monad and an Applicative with "parallel" behavior, that is, an Applicative instance that wouldn't pass the monadic laws. The most common relationship is the one given by Either and Validation. These two types are isomorphic, with the difference being that Validation has an Applicative instance that accumulates errors instead of short-circuiting on the first error.
So we can represent this relationship via natural transformation in a Parallel instance: instance Semigroup e => Parallel (Validation e) (Either e) where parallel = NT fromEither sequential = NT toEither In the same way, we can represent the relationship between [] and ZipList: instance Parallel ZipList [] where parallel = NT ZipList sequential = NT getZipList Now, all this ceremony only becomes useful if we define some functions based on Parallel. One of the most common ones is parMapN (or parMap2 in this case, but ideally, it should be abstracted over its arity). parMapN :: (Applicative f, Monad m, Parallel f m) => m a0 -> m a1 -> (a0 -> a1 -> a) -> m a parMapN ma0 ma1 f = unwrapNT sequential (f <$> unwrapNT parallel ma0 <*> unwrapNT parallel ma1) Before we get to see how we can leverage this function with refinement types and data validation, we will define a type alias for our effect type and a function ref, which will convert RefineExceptions into a [Text], since our error type needs to be a Semigroup. import Control.Arrow (left) import Data.Text (pack) import Refined type Eff a = Either [Text] a ref :: Predicate p x => x -> Eff (Refined p x) ref x = left (\e -> [pack $ show e]) (refine x) In the example below, we can appreciate how this function can be used to create a Person instance with validated input data (it’s a breeze): mkPerson :: Int -> Text -> Eff Person mkPerson a n = parMapN (ref a) (ref n) Person Our mkPerson is now validating all our inputs in parallel via an implicit round-trip Either/ Validation given by our Parallel instance. We can also use parMapN to use a different Applicative instance for lists without manually wrapping / unwrapping ZipLists. n1 = [1..5] n2 = [6..10] n3 :: [Int] n3 = (+) <$> n1 <*> n2 n4 :: [Int] n4 = parMapN n1 n2 (+) Without Parallel’s simplicity, it would look as follows: n4 :: [Int] n4 = getZipList $ (+) <$> ZipList n1 <*> ZipList n2 For convenience, here’s another function we can define in terms of parMapN: parTupled :: (Applicative f, Monad m, Parallel f m) => m a0 -> m a1 -> m (a0, a1) parTupled ma0 ma1 = parMapN ma0 ma1 (,) In Scala, there’s also an instance for IO and IO.Par, a newtype that provides a different Applicative instance, which allows us to use functions such as parMapN with IO computations to run them in parallel! And this is only the beginning… There are so many other useful functions we could define! For now, the code is presented in this Github repository together with some other examples. Should there be enough interest, I might polish it and ship it as a library. Let me know your thoughts! Gabriel.
https://gvolpe.github.io/blog/parallel-typeclass-for-haskell/
CC-MAIN-2020-24
en
refinedweb
Ahmed Chaudhary - Total activity 14 - Last activity - Member since - Following 0 users - Followed by 0 users - Votes 0 - Subscriptions 6 Ahmed Chaudhary created a post, Will Resharper still work after installing VS 2008 SP1 ?VS 2008 SP1 has been released . Will resharper have any issues if I install VS 2008 SP1?-Ahmed Ahmed Chaudhary created a post, Current EAP and VS generated Private Accessor for Unit TestsDoes the current EAP recognize the private member accessors generated by VS 2008 for the MS Test unit tests ?The last EAP complained about them.-Ahmed Ahmed Chaudhary created a post, Current EAP release on VS 2008 RTM ?Does the current EAP release work with VS 2008 RTM?VS 2008 RTM was released today.-Ahmed Ahmed Chaudhary created a post, Support for XAML ?Hi there,I was wondering when we could get support for XAML and the windows presentation foundation in Resharper for VS 2005. Currently resharper does not recognise object refs in code for elements... Ahmed Chaudhary created a post, Cannot Resolve Symbol and Problems with basic features !note: similar problem as there, Resharper was highly reccomended to me so I got it(v1.5) to try it out but so fa... Ahmed Chaudhary created a post, Does not recognizes nested namespace and types in itI added a porject to an existing solution with other projects and put a reference to new project into the old project. The code in the new project is in a nested name spacenamespace MyGeneration.dO...
https://resharper-support.jetbrains.com/hc/en-us/profiles/2111824409-Ahmed-Chaudhary
CC-MAIN-2020-24
en
refinedweb
Excel 2010 finds unreadable content in my openpyxl generated workbook Hello, I have a workbook generated by openpyxl 2.1 with lxml which cannot be opened by Excel 2010 as it finds unreadable content. In order to repair the workbook, Excel removes the comments I wanted to include in my workbook. The generated workbook is attached. Many thanks for the support! David @dlaudy can you tell me which cells in which sheet are affected? Do you have some of the code that was used to create the file? I get the error in Excel, which as usual is not much help, but the validation tool seems to suggest that there may be a problem with the styles. I think I may have identified the problem. Was the file generated in write-only mode? @charlie_x The creation of the workbook has been done with write_only mode. Concerning the cells and sheets, here is the list: NoeudLNG (I4), new_sheet (F4), Options (H4, F98), Contrats (L4, S4, T4, U4, V4), DEF_mappings (I4, I5) Thanks. I would be surprised if this worked even with LXML not installed. I think I may have a solution for this but otherwise you'll have to avoid using write_only=True (the new way of writing optimized_write=True) @charlie_x I tried to write my workbook without the write_only flag but I got a traceback: it seems that the append() method only accepts lists of KNOWN_TYPES but not WriteOnlyCells. I reproduce with the following code: Yes, the code isn't entirely 1:1 because in standard mode you always get a cell from the sheet, whereas in write_only mode you create them only when you need them and, as a result, you have to manually assign the worksheet to the cell and also give it a position. And no need to append would be the equivalent code for a standard workbook. I think I have a solution for the comments issue. Can you work with a checkout of the source? Always calculate comment position from the cell coordinate to play nicely with WriteOnlyCells Resolves #403 → <<cset b4f8d540bf70>> @charlie_x I confirm the problem is solved! Many thanks for the support. Glad to know it's working. I've also just checked in some code that will make code more compatible between modes but please don't use import os.path as osp anymore ;-) Removing version: 2.1.x (automated comment)
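For anyone reproducing this today, the write-only pattern being discussed looks roughly like the sketch below; treat it as an approximation, since the exact import location of WriteOnlyCell has moved between openpyxl versions:

from openpyxl import Workbook
from openpyxl.comments import Comment
from openpyxl.cell import WriteOnlyCell  # older releases expose this elsewhere

wb = Workbook(write_only=True)
ws = wb.create_sheet()

# In write-only mode cells are created on demand and bound to the sheet by hand
cell = WriteOnlyCell(ws, value="hello")
cell.comment = Comment(text="a comment", author="dlaudy")

# Rows can mix WriteOnlyCell instances with plain values
ws.append([cell, 3.14, None])
wb.save("write_only_comments.xlsx")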
https://bitbucket.org/openpyxl/openpyxl/issues/403
CC-MAIN-2020-24
en
refinedweb
Python 3 program to test if a number is positive or negative: In this tutorial, we will learn how to test if a number is positive or negative. We will also check if the number is zero. This is a beginner-friendly Python tutorial. With this example, you will learn how to read a user input, how to put your code in a different method to organize it, and how to use an if-elif-else condition. The program will take the number as input from the user, check if it is zero, greater than zero or less than zero, and print out the result to the user. You can also store the number in a variable and check its value. But in this program, we are reading the number as an input from the user. Algorithm: The algorithm of the program is as follows: - Take the number as an input from the user. You can create one separate variable to hold the number or you can directly test the number. In this example, we are using one separate variable to hold it. - Check the number using one if-elif-else condition. This condition will compare the number two times. The first one will check if it is equal to zero or not, the second one will check if it is greater than zero or not. If both of these conditions fail, we will print that the number is less than zero or it is a negative number. Example Program:

def check_number(n):
    if n == 0:
        print("Zero")
    elif n > 0:
        print(n, "is greater than zero")
    else:
        print(n, "is less than zero")

user_no = int(input("Enter a number : "))
check_number(user_no)

You can also download this program from here. Explanation: - check_number is a method to check if the number is zero, greater than zero or less than zero. This method takes one number as its argument. It doesn't return anything. - Inside the method, we are using one if-elif-else condition. This condition will test the number and print out the result accordingly. - First, it will move inside the 'if' block. This block is used to check if the number is equal to zero or not. If the number is equal to zero, it will print one message "Zero" on the console and exit the if-elif-else block. - If the 'if' block fails, it will move into the 'elif' block. 'elif' is checking if the number is greater than zero or not. If it is greater than zero or if it is a positive number, it will print one message on the console and exit from the if-elif-else block. - If the 'elif' block fails, it will move to the last block. This is the 'else' block. Note that we are not verifying anything in this block. This block will run if the number is not equal to zero and if it is not greater than zero, or in other words, only if the number is less than zero or if it is a negative number. We are sure about that. So, without checking any condition, we just print to the user that the number is less than zero. - For reading the user input, the input() method is used. This method returns the value in string form. We are wrapping it with int() to get the integer value of the user input. Sample Outputs:
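For illustration, a few runs of the script above look like this (the numbers after each prompt are typed by the user and chosen arbitrarily):

Enter a number : 7
7 is greater than zero

Enter a number : -3
-3 is less than zero

Enter a number : 0
Zero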
https://www.codevscolor.com/python-3-program-check-number-positive-negative-zero/
CC-MAIN-2020-29
en
refinedweb
sensor_set_calibration() Enable or disable sensor calibration. Synopsis: #include <bps/sensor.h> BPS_API int sensor_set_calibration(sensor_type_t type, bool enable_calibration) BPS_DEPRECATED Since: BlackBerry 10.2.0 Arguments: - type The sensor to enable or disable calibration for. - enable_calibration If true calibration is enabled, if false calibration is disabled. Library:libbps (For the qcc command, use the -l bps option to link against this library) Description: Deprecated: This function is deprecated. The sensor_set_calibration() function enables or disables calibration for the specified sensor. The accuracy of a sensor might degrade over time. By enabling calibration, if sensor accuracy degrades by a significant amount, the sensor service calibrates the sensor. This improves sensor accuracy. During normal operation of your application, you shouldn't need to call this function, because your application can rely on background calibration. You should enable calibration only if your application requires higher quality readings from a sensor. After the desired level of quality is reached, you should disable calibration; calibration should not be left enabled for an extended period of time. Returns: BPS_SUCCESS upon success, BPS_FAILURE with errno set otherwise. Last modified: 2014-09-30
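A minimal usage sketch in C might look like the following; the sensor constant (SENSOR_TYPE_MAGNETOMETER) and the surrounding setup are illustrative assumptions, not taken from this page:

#include <stdbool.h>
#include <stdio.h>
#include <bps/bps.h>
#include <bps/sensor.h>

int main(void)
{
    bps_initialize();

    /* Temporarily enable calibration while high-quality readings are needed */
    if (sensor_set_calibration(SENSOR_TYPE_MAGNETOMETER, true) != BPS_SUCCESS) {
        perror("sensor_set_calibration");
    }

    /* ... collect readings ... */

    /* Turn calibration back off; it should not stay enabled for long */
    sensor_set_calibration(SENSOR_TYPE_MAGNETOMETER, false);

    bps_shutdown();
    return 0;
}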
https://developer.blackberry.com/native/reference/core/com.qnx.doc.bps.lib_ref/topic/sensor_set_calibration.html
CC-MAIN-2020-29
en
refinedweb
Fear not, I am still planning on posting all of my blog posts here on dev.to. One of the best features of dev.to, outside of the incredible community, is the ability to use canonical URLs to point back to your original blog post. With that disclaimer out of the way, let's dive into how I stood up my own static website blog. - Setting up your initial blog. - Implementing common functionality pieces for SEO and social sharing. - Bonus Points: Configuring the AWS infrastructure to host your blog. Sounds like a solid plan right? Let's get started. GatsbyJS + TailwindCSS == Awesome I have blogged about TailwindCSS before in my post about launching the Learn By Doing newsletter. It is a fantastic utility first CSS framework that comes with a lot of bells and whistles out of the box. Additionally, in my Learn AWS By Using It course we use GatsbyJS to create a demo static website that we can then use to learn core AWS concepts such as hosting, securing, and deploying static websites. So for my blog, I decided to mash them together. I wanted the simplicity of a static website generator like Gatsby with the ability to quickly style it using TailwindCSS. So, I created a starter (aka boilerplate) Gatsby project that lays out all of the configuration necessary to use the Gatsby static website generator pre-configured with Tailwind. To get started, you need to install the gatsby-cli from NPM. npm install --global gatsby-cli Next, you need to create a new Gatsby project in a directory of your choice using the gatsby-starter-tailwind-seo-social project. gatsby new kylegalbraith-blog This will create a new folder, kylegalbraith-blog, in your current directory. Inside of this folder is all of the boilerplate and initial configurations for the Gatsby site and TailwindCSS. If we run a quick develop command we can see what the initial site looks like. cd kylegalbraith-blog gatsby develop What we should end up seeing is something along these lines. With me so far? Excellent. With the starter project pulled down, you can begin by opening it up in Visual Studio Code or your favorite IDE. If you take a look at the folder structure you see a couple of different things. The first thing to get familiar with is the src directory. This is where all the code lives that composes your blog. GatsbyJS is a React based static website generator so everything is defined in terms of components, static assets, layouts, and pages. If you expand the components folder and open the Header component you see code that looks like this. import React from "react"; import Link from "gatsby-link"; import logo from "../../images/favicon.png"; import config from "../../config/config"; const Header = () => { return ( <nav className="bg-grey-lightest"> <div className="container mx-auto p-4 md:p-8"> <div className="text-center lg:text-left"> <Link to="/" className="lg:inline-flex items-center no-underline text-grey-darkest hover:text-black"> <div className="mb-4 flex-1 pt-5"> <img src={logo} </div> <div className="flex-2"> <h1 className="text-5xl ml-2 font-hairline text-indigo-darkest"> {config.authorName} </h1> <span className="block ml-2 mt-2 font-hairline text-indigo-darkest"> {config.siteDescription} </span> </div> </Link> </div> </div> </nav> ); }; export default Header; This is the header component for the Gatsby blog. Right now this is still a boilerplate blog. Let's spice it up by changing some configuration settings in src/config/config.js. You can update the authorName and siteDescription to match your information. 
module.exports = { siteTitle: "Your Blog Title", shortSiteTitle: "Your Short Blog Title", siteDescription: "This is an awesome blog that you are going to make your own.", siteUrl: "", pathPrefix: "", siteImage: "images/facebook-cover.jpg", siteLanguage: "en", authorName: "Kyle Galbraith Was Here", authorTwitterAccount: "kylegalbraith", authorSocialLinks: [ { name: "github", url: "" }, { name: "twitter", url: "" }, { name: "facebook", url: "" } ] }; Now that those fields are updated, you can check out the changes live in the browser by running gatsby develop again from the command line. This command starts a localhost server at port 8000 by default. Then you can view your changes in the browser. If you keep the develop command running any changes made to components will be hot reloaded in the browser. Pretty cool right? You can change any of those configuration settings to match your blog details and the components will automatically update. Changing content is cool, but you probably want to add your own style as well. Head over to the Footer component and let's change the background color of the outer div from bg-grey-lightest to bg-indigo. import React from "react"; import config from "../../config/config"; const Footer = () => ( <div className="bg-indigo"> <div className="text-center max-w-xl mx-auto p-4 md:p-8 text-sm"> <p> <a href={config.siteUrl} This blog is powered by <a href="">GatsbyJS</a> using the gatsby-starter-tailwind-seo-social from <a href="">Kyle Galbraith</a>. </a> </p> </div> </div> ); export default Footer; Now the footer for your blog should be a blue color. By using TailwindCSS you can use a lot of pre-built utility classes that allow you to rapidly develop new UI components or change the style of existing ones. But at some point, you are going to want to assign your own custom CSS to a component. That is handled by adding a custom style to index.tailwind.css under src/layouts. Scrolling to the bottom you can see there is already a custom style defined for the body element to add the background gradient. Let's change the gradient to something else. body { background: #1f4037; background: -webkit-linear-gradient(to right, #99f2c8, #1f4037); background: linear-gradient(to right, #99f2c8, #1f4037); } To update stylesheets you need to run an npm script from the package.json. The build:css script will run the tailwind command and output the final CSS. npm run-script build:css ... > tailwind build ./src/layouts/index.tailwind.css -c ./tailwind.config.js -o ./src/layouts/index.css Building Tailwind! Finished building Tailwind! Now checking localhost again you can see that the background gradient has been updated. That is the boilerplate setup for your Gatsby + TailwindCSS blog setup. You can leverage existing Tailwind utility classes or add and extend your own to style the blog further. You can also build your own components to add new functionality to your blog. Setting up the actual blogging piece Gatsby is a fantastically simple blogging platform that allows you to write blog posts in Markdown. As you can see from the boilerplate starter there is already a blog post created. If you click on the blog post you can see a blog post loaded with tasty bacon ipsum. If you take a look at the url of the blog post you should see the following format, 2018/08/01/a-sample-gatsby-plus-tailwind-blog-post/. This is defined by the folder structure under the pages directory. 
The blog post is written inside of the markdown folder, index.md and the image is the cover image you see defined at the top of the post. This is also the image that will be used when shared on Facebook and Twitter. But how does the markdown post become the HTML post? OK, not really. It's actually handled by two plugins located in gatsby-config.js called gatsby-source-filesystem and gatsby-transformer-remark. The first loads the files from the pages directory and feeds them into the transformer that turns the markdown syntax into proper HTML. You can create a new blog post by creating a new directory under the 08 directory and initializing a new markdown file. mkdir pages\2018\08\02\new-post touch pages\2018\08\02\new-post\index.md Now you can add some new content to your new markdown file. --- title: "This is a new post" date: "2018-08-02" cover: "" --- A brand new blog post from here. If you refresh your localhost blog you should see that you have a new blog post with the title from your markdown file. Easy peezy right? Now that you know how to use Gatsby to rapidly develop your new blog and style it to fit your needs using Tailwind, let's explore the SEO and Social Sharing components built into this starter project. SEO and Social Sharing If you are putting in the hard work to write content on your blog you want to make sure you are getting it into the hands of the people that would find it useful. This can be done by optimizing the SEO of your posts and making it easy for other readers to share your content. Lucky for you, that is built into this Gatsby starter project. Taking a look under the templates directory you can check out the blog-post.js file. This is the template that defines how an individual blog post appears on your blog. return ( <div className="text-left p-4 bg-grey-lightest shadow-lg"> <Seo data={post} /> { post.frontmatter.cover && <Img sizes={post.frontmatter.cover.childImageSharp.sizes} alt={post.frontmatter.title} } <h1 className="text-3xl lg:text-5xl text-indigo-darker font-normal mt-6 mb-2"> {post.frontmatter.title} </h1> <p className="block mb-8 pb-4 border-b-2"> 📅 {post.frontmatter.date} – {config.authorName} </p> <div className="blog-content" dangerouslySetInnerHTML={{ __html: post.html }} /> <div className="mt-16 pt-8 social-content text-center border-t"> <p className="font-light">Did you enjoy this post? Share the ❤️ with others.</p> <Social url={url} title={post.frontmatter.title} /> </div> <ul className="mt-8 border-t-2 pt-4" style={{ display: 'flex', flexWrap: 'wrap', justifyContent: 'space-between', listStyle: 'none', paddingLeft: 0 }} > <li> { previous && <Link to={previous.fields.slug} ← {previous.frontmatter.title} </Link> } </li> <li> { next && <Link to={next.fields.slug} {next.frontmatter.title} → </Link> } </li> </ul> </div> ) Taking a look at the HTML template that is returned you can see that there are two custom components Seo and Social being used. So what exactly are they doing? If you take a look at the Seo component you can see that it is returning a React Helmet component. 
<Helmet htmlAttributes={{ lang: config.siteLanguage, prefix: "og:" }} > <title>{title}</title> <meta name="description" content={description} /> <link rel="shortcut icon" href={favicon} /> <meta property="og:url" content={url} /> <meta property="og:title" content={title} /> <meta property="og:description" content={description} /> <meta property="og:image" content={image} /> <meta property="og:type" content="website" /> <meta name="twitter:card" content="summary" /> <meta name="twitter:image" content={image} /> <meta name="twitter:description" content={description} /> <meta name="twitter:creator" content={config.authorTwitterAccount ? config.authorTwitterAccount : ""} /> </Helmet> The component takes an individual blog post and returns the necessary HTML for a title, description, and favicon. Tags that are very important to SEO. It is also returning the necessary meta tags for Facebook, og:url, and Twitter twitter:description. Every blog post in your new Gatsby blog will automatically get this optimization by using the content in your post. But you also want your content to be easily shareable. So let's take a look at what the Social component is adding to each blog post. <ul className="list-reset inline-flex"> <li className="p-4"> <TwitterShareButton url={url} title={tweet} <TwitterIcon size={32} round={true} /> </TwitterShareButton> </li> <li className="p-4"> <FacebookShareButton url={url} quote={title} <FacebookIcon size={32} round={true} /> </FacebookShareButton> </li> </ul> Here the react-share component is being used to create Twitter and Facebook share buttons. Each is pre-filled using the title and url of the blog post so that when a user clicks on them they have the content ready to be posted. Bonus Points: Configuring the AWS infrastructure to host your blog If you are looking to start learning Amazon Web Services then this bonus section is for you. This part of the post assumes you already have an AWS account setup and an introductory understanding of the platform. If AWS is totally new to you, consider grabbing a package of my learn AWS course that focuses on teaching you the platform by actually using it. In my course, we focus on learning core AWS services like S3, CloudFront, Lambda, and API Gateway by actually using them to host, secure, and deliver static websites. Included in the starter project is a deployment folder. In this folder, I have included a Terraform template that configures AWS resources to host your blog. This template provisions the following resources within your AWS account. - An S3 bucket that is configured for static website hosting. The name of the bucket must match the url of your blog. For example, my blog is at blog.kylegalbraith.comand therefore the bucket is named blog.kylegalbraith.com. - A CloudFront CDN distribution that sits in front of your S3 website bucket. It is also configured to have SSL via the ACM certificate you pass in. Check out this blog post if you aren't familiar with adding SSL to your static website in AWS. So how do you actually deploy this infrastructure? Great question. Here are the steps you should follow in order to deploy the AWS infrastructure for your blog. - Make sure you have the AWS CLI installed and configured to interact with your AWS account. - Install Terraform and add it to your PATHso you can execute it from anywhere. - Now you can initialize the Terraform template from within the deploymentdirectory. cd deployment terraform init ... Initializing provider plugins... 
- Checking for available provider plugins on... - Downloading plugin for provider "aws" (1.30.0)... - With the providers initialized, you can run terraform planin order to get a visualization of what resources are going to be created. You can pass the necessary variables from variables.tfinto the plancommand via the -varflag as you see below. terraform plan \ -var blog_url=blog.yourcoolsite.com \ -var acm_certificate_arn=arn:aws:acm:us-east- 1:yourAccountId:certificate/yourCert ..._cloudfront_distribution.blog_distribution - The planmethod tells you what resources are going to be provisioned. To initiate the provisioning you must run terraform apply, passing the same variables as before. terraform apply \ -var blog_url=blog.yourcoolsite.com \ -var acm_certificate_arn=arn:aws:acm:us-east-:yourAccountId:certificate/yourCert ... Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes aws_s3_bucket.blog: Creating... - The applycommand takes a few minutes to complete while the S3 bucket and CloudFront distribution are created. If you want to skip the approval step you see above, pass the -auto-approveflag to the applycommand. - Once the applycommand completes you are going to have a brand new CloudFront distribution configured with the S3 website bucket as an origin where your blog is going to be hosted. The next step is to update your DNS records in order to route your blog traffic to the CloudFront distribution. With your AWS infrastructure provisioned you can now deploy your Gatsby blog to your S3 bucket. This is done by running the build script in the package.json and then running an S3 copy command from the AWS CLI. npm run-script build aws s3 cp public/ "s3://blog.yourcoolsite.com/" --recursive This script runs the build:css configuration that produces our final TailwindCSS. It then runs gatsby build which generates a production build and outputs the contents into the public directory. From there it is just a matter of copying the contents of that directory to the S3 bucket where your blog is hosted. Conclusion I prefer processes that are as frictionless as humanly possible. I become disengaged when the process is cumbersome and very manual because this often means spending time on things that aren't valuable. There are only 24 hours in a day so wasting time on a cumbersome manual process is less than ideal. In the past, creating a blog has always had that vibe in my mind. My journey started with writing raw HTML, not fun. Then came things like WordPress, better but still slow and a lot of overhead. Finally, I switched to platforms like dev.to and Medium, this was awesome because it streamlined the creative process and allowed me to just focus on the content. But, I still had a need to showcase my content on something that I owned. Gatsby solved this problem and kicked ass while doing it. The folks over there have created a great open source project with a strong and vibrant community. Hopefully, you have seen how easy it is to get a blog up and running using tools like Gatsby and Tailwind. Once you have something created you can then get it deployed to AWS, as you saw here, or any other hosting platform for static websites. If you have questions or run into issues trying to work through this post please feel free to drop me a comment below. Thanks for reading! PS: Are you hungry to learn. Posted on Nov 20 '18 by: Kyle Galbraith Programmer by day and author by night. 
I am passionate about all things development related, but especially Amazon Web Services. I recently created a course about learning AWS by using it. Discussion nice post. what about pagination, categories and tags Thank you for the comment. Pagination, categories, and tags are likely in my future as I grow this blog out. All of these are supported pretty much out of the box with Gatsby, but ill share an update when I add these in 😀.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/kylegalbraith/how-to-make-an-awesome-blog-using-gatsbyjs-and-aws-33nc
CC-MAIN-2020-29
en
refinedweb
matplotlib Bindings to Matplotlib; a Python plotting library See all snapshots matplotlib appears in Module documentation for 0.6.0 Matplotlib Haskell bindings to Python’s Matplotlib. It’s high time that Haskell had a fully-fledged plotting library! Documentation is available on Hackage. For more examples see the tests. {-# LANGUAGE ExtendedDefaultRules #-} import Graphics.Matplotlib degreesRadians a = a * pi / 180.0 main :: IO () main = do Right _ <- onscreen $ contourF (\a b -> sin (degreesRadians a) + cos (degreesRadians b)) (-100) 100 (-200) 200 10 return () We need -XExtendedDefaultRules to avoid having to manually having to specify certain types. Installation You will need several python libraries to run this code which can be installed on Ubuntu machines with the following command: sudo apt-get install -y python3-pip python3-matplotlib python3-numpy python-mpltoolkits.basemap If you have instructions for other machines or OSes let me know. We require /usr/bin/python3 to be available; the path isn’t configurable right now. Once you have the prerequisites you can install using the standard incantation cabal install matplotlib Examples Click on any of the examples below to go to the corresponding test that generates it. Depending on your matplotlib version default colors might be different.
https://www.stackage.org/lts-11.22/package/matplotlib-0.6.0
CC-MAIN-2020-29
en
refinedweb
Powerful Web Scraping/Crawling || Scrapy and BS4 - 5.5 hours on-demand video - 1 article - 2 downloadable resources - Full lifetime access - Access on mobile and TV - Certificate of Completion - Web scraping using the Scrapy framework - Building spiders or web crawlers - Writing and executing web scraping scripts - Source code for all spiders or web crawlers - Exporting data extracted by Scrapy into Excel or CSV files - Fine-tuning your spiders using Scrapy's built-in settings - No programming required, a Python refresher is provided with this course. - A laptop with an internet connection - Attitude to learn web scraping - A smile....:) I will share my experience of how I came up with this course. When I started out, my problem was not that I didn't know what I wanted to learn; my problem was finding the following in Hindi/Urdu: 1- Which tools are the best? 2- Which techniques are the best? 3- Will this course help me find my way or make me more confused, because there is too much information out there? 4- Then I found the course, but it was in English, and all of the above went down the drain. But don't worry, amigo: for the very first time you are going to get all your needs satisfied, plus in your mother tongue. It is so easy to grasp concepts in your mother tongue that the programming language seems natural and learning comes easy. Above all, I have distilled my six years of university teaching experience into this course to make it one of the best courses you will ever take. In this course, we will start from zero. This course is divided into three sections: 1- Python Refresher: here you will learn all the Python concepts needed to get started with the web scraping framework. 2- Beautiful Soup (BS4): in this section, you will do your first project of scraping a real website using BS4, one of the most famous web-scraping Python libraries. 3- Scrapy: in this section, you will learn Scrapy, an asynchronous web scraping framework built on Twisted. You will build a Scrapy spider and learn how to use the Scrapy shell. 'Great Teacher So Far (5 Stars)' Saqib Munir Last but not least, if you have any question don't hesitate to ask. Good luck and enjoy. - Python programmers - Web developers - Data-mining or machine learning students - Students who want to build web crawlers - Students who want to extract data from web sites efficiently

# -*- coding: utf-8 -*-
import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['']

    def parse(self, response):
        # h1_tag = response.xpath('//*[@class="tag-item"]/a/text()').extract()
        # tags = response.xpath("//h1/a/text()").extract_first()
        # yield {'H1_Tag': h1_tag, 'Tags': tags}
        container = response.xpath('//*[@class="quote"]')
        for quote in container:
            text = quote.xpath('.//*[@class="text"]/text()').extract_first()
            author = quote.xpath('.//*[@class="author"]/text()').extract_first()
            keywords = quote.xpath('.//*[@class="keywords"]/@content').extract_first()
            yield {
                'Text': text,
                'Author': author,
                'Key': keywords
            }
        next_url = response.xpath('//*[@class="next"]/a/@href').extract_first()
        abs_next_url = response.urljoin(next_url)
        yield scrapy.Request(abs_next_url)
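Once a spider like the one above lives inside a Scrapy project, the CSV export mentioned in the course outline is a single command from the project directory (the output file name is just an example):

scrapy crawl quotes -o quotes.csv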
https://www.udemy.com/course/web-scraping-python-scrapy-bs4-hindi-urdu/?couponCode=D2BE5D6383CF3B6CDF26
CC-MAIN-2020-29
en
refinedweb
QuantLib_BaseCorrelationLossModel man page BaseCorrelationLossModel< BaseModel_T, Corr2DInt_T > Synopsis #include <ql/experimental/credit/basecorrelationlossmodel.hpp> Inherits DefaultLossModel, and Observer. Public Member Functions BaseCorrelationLossModel (const Handle< BaseCorrelationTermStructure< Corr2DInt_T > > &correlTS, const std::vector< Real > &recoveries, const initTraits &traits=initTraits()) Protected Member Functions void setupModels () const template<> void setupModels () const template<> void setupModels () const template<> void setupModels () const template<> void setupModels () const Additional Inherited Members Detailed Description template<class BaseModel_T, class Corr2DInt_T> class QuantLib::BaseCorrelationLossModel< BaseModel_T, Corr2DInt_T > Base Correlation loss model; interpolation is performed by portfolio (live) amount percentage. Though the literature on this model is immense, see, for a more-than-introductory (precrisis) treatment, chapters 19, 20 and 21 of Modelling Single Name and Multi-Name Credit Derivatives, Dominic O'Kane, Wiley Finance, 2008. For freely available documentation see: Credit Correlation: A Guide; JP Morgan Credit Derivatives Strategy; 12 March 2004 Introducing Base Correlations; JP Morgan Credit Derivatives Strategy; 22 March 2004 A Relative Value Framework for Credit Correlation; JP Morgan Credit Derivatives Strategy; 27 April 2004 Valuing and Hedging Synthetic CDO Tranches Using Base Correlations; Bear Stearns; May 17, 2004 Correlation Primer; Nomura Fixed Income Research, August 6, 2004 Base Correlation Explained; Lehman Brothers Fixed Income Quantitative Credit Research; 15 November 2004 For bespoke base correlation see: Base Correlation Mapping in Lehman Brothers' Quantitative Credit Research Quarterly; Volume 2007-Q1 You can explore typical postcrisis data by perusing some of the JPMorgan Global Correlation Daily Analytics Here the crisis-era problems of being able to price stressed portfolios, or tranches over the maximum loss, are the responsibility of the base models. Users should select their models accordingly, choosing the copula or a random loss-given-default base model (or more exotic ones). Notice this is different from a bespoke base correlation loss (bespoke here referring to basket composition, not just attachment levels); there, loss interpolation is done on the expected loss value to match the two baskets. Therefore the correlation surface should refer to the same basket intended to be priced. But this is left to the user and is not implemented in the correlation surface (yet...) BaseModel_T must have a constructor with a single quote value Member Function Documentation void setupModels () const [protected] Sets up attach/detach models. Gets called on basket update. To be specialized on the specific model type. Author Generated automatically by Doxygen for QuantLib from the source code. Referenced By The man pages BaseCorrelationLossModel(3) and setupModels(3) are aliases of QuantLib_BaseCorrelationLossModel(3).
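As a rough illustration of how the template is typically instantiated (only the constructor signature above comes from this page; the concrete base model, interpolator and variable names are assumptions for the sketch):

#include <ql/experimental/credit/basecorrelationlossmodel.hpp>
#include <ql/experimental/credit/gaussianlhplossmodel.hpp>
#include <ql/math/interpolations/bilinearinterpolation.hpp>

using namespace QuantLib;

// Pick a base model (Gaussian large homogeneous pool) and a 2-D interpolator
// for the base-correlation surface.
typedef BaseCorrelationLossModel<GaussianLHPLossModel, BilinearInterpolation>
    GaussianLHPBaseCorrelationModel;

// Assuming correlSurface is a Handle<BaseCorrelationTermStructure<BilinearInterpolation> >
// built elsewhere and recoveries holds one recovery rate per name in the basket:
// boost::shared_ptr<GaussianLHPBaseCorrelationModel> model(
//     new GaussianLHPBaseCorrelationModel(correlSurface, recoveries));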
https://www.mankier.com/3/QuantLib_BaseCorrelationLossModel
CC-MAIN-2018-17
en
refinedweb
Hi Daniel Chen and people. It's me again, sorry. I have the POST form to create an object, and I get this error:

`Exception Type: TypeError at /host/lodging-offer/new/ Exception Value: 'NoneType' object is not subscriptable`

This error is raised in my views.py in the `LodgingOfferImageCreate` view, which is similar to `ProfileFamilyMemberCreate` in the sample in this article.
The error is presented in:
I am calling many form fields in my view, which belong to the `LodgingOffer` model (similar to `ProfileModel` here).
My `LodgingOfferImageCreate` view is:

class LodgingOfferImageCreate(CreateView):
    model = LodgingOffer
    fields = ['ad_title', 'country', 'city', 'address', 'lodging_offer_type',
              'stars', 'check_in', 'check_out', 'offered_services',
              'featured_amenities', 'room_type_offered', 'number_guest_room_type',
              'bed_type', 'bathroom', 'room_information', 'image', 'room_value',
              'additional_description', 'is_taked']
    success_url = reverse_lazy("articles:article_list")

    def get_context_data(self, **kwargs):
        data = super(LodgingOfferImageCreate, self).get_context_data(**kwargs)
        if self.request.POST:
            data['lodgingimages'] = LodgingOfferImagesFormset(self.request.POST)
        else:
            data['lodgingimages'] = LodgingOfferImagesFormset()
        return data

    def form_valid(self, form):
        context = self.get_context_data()
        lodgingimages = context['lodgingimages']
        with transaction.atomic():
            self.object = form.save()
            if lodgingimages.is_valid():
                lodgingimages.instance = self.object
                lodgingimages.save()
        return super(LodgingOfferImageCreate, self).form_valid(form)

I guess this error is related to the `context['lodgingimages']` that is sent to the template, which I try to print in the template but cannot get any value from. I am working through the sample presented here and adapting it to my particular situation, but I cannot work out what is happening in this case… Has anything similar happened to anyone?
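For reference, the post never shows how `LodgingOfferImagesFormset` is declared. A common way to build such a formset is Django's `inlineformset_factory`; the sketch below only illustrates that pattern, and the child model name (`LodgingOfferImage`) and its `image` field are assumptions, not taken from the original project.

from django.forms import inlineformset_factory
# LodgingOffer and LodgingOfferImage would be imported from the app's models.py.

# Hypothetical declaration; adjust model and field names to the real code.
LodgingOfferImagesFormset = inlineformset_factory(
    LodgingOffer,        # parent model used by the CreateView above
    LodgingOfferImage,   # assumed child model that stores the uploaded images
    fields=['image'],    # assumed field on the child model
    extra=3,             # number of empty image forms to render
    can_delete=False
)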
https://medium.com/@bgarcial/hi-daniel-chen-and-people-i-again-sorry-b9a1ad211ec5
CC-MAIN-2018-17
en
refinedweb
#include "afxtempl.h" CMap<CString*, CString*, int, int> map; CMap<CString*, CString*, int, int> map(16); Are you are experiencing a similar issue? Get a personalized answer when you ask a related question. Have a better answer? Share it in a comment. From novice to tech pro — start learning today. do you use this in a class declaration? If so this isn't possible, i.e.: > class CTest > { > CMap < CString*, CString*, int, int > map1; // OK > CMap < CString*, CString*, int, int > map2( 16 ); // Error C2059 > }; To initialize such a member you need to add it's constructor call to the classes constructors initialization list, i.e.: > class CTest > { > CMap < CString*, CString*, int, int > map1; // OK > CMap < CString*, CString*, int, int > map2; // Error C2059 > public: > CTest(); > }; > > CTest::CTest() > : map2( 16 ) > { > } Hope that helps, ZOPPO Error Message syntax error : 'token' The token caused a syntax error. To determine the cause, examine not only the line listed in the error message, but also the lines above it. The following example generates an error message for the line declaring j, but the true source of the error appears on the line just above it. If examining the lines yields no clue to what the problem might be, try commenting out the line listed in the error message and possibly several lines above it. If the error message occurs on a symbol immediately following a typedef variable, check that the variable is defined in the source code. You may get C2059 if a symbol evaluates to nothing, as can occur when you compile with /Dsymbol=. CMap<CString*, CString*, int, int> map(16);
https://www.experts-exchange.com/questions/26433795/Which-include-s-are-necessary-for-CMap.html
CC-MAIN-2018-17
en
refinedweb
Description:
------------
While we are running PHP 5.3.27 here, I see nothing in the changelog for newer versions of PHP that indicates this bug is even possibly fixed. The code on pastebin is a simplification of the scenario. In my real-world environment, a query on a database is run, results are built up in an array in PHP, and then boiled down into a second array. Everything is fine until the inner foreach() loop. At that point, all bets are off as to whether or not PHP will run itself out of memory. The exact same inputs resulting in the exact same data in $rows2 will sometimes cause PHP to suddenly chew up 128MB RAM and terminate the script - interestingly, not passing 'true' to memory_get_usage() says only 40MB is being used at the time PHP terminates the script while claiming 128MB RAM is being used.
The problem seems to hinge on this line:
$row[1] = htmlspecialchars(implode(", ", $idmap[$row[1]]));
If I break it out so the variable being assigned doesn't reference itself in the same statement:
$id = $idmap[$row[1]];
$row[1] = htmlspecialchars(implode(", ", $id));
The problem goes away as well.
There is nothing involved in the foreach statements beyond basic PHP arrays, numbers, and strings (i.e. no objects). The amount of memory used before the foreach() lines is normal (about 10MB). When the foreach statements complete successfully, the amount of memory used is still normal (about 20MB). So there isn't always a runaway memory leak.
Attempting to replicate this issue will likely cause headaches. When I added the commented-out if-statement so I could output debugging information without affecting other users, the problem went away...even though the if-statement technically does absolutely nothing. This made pinpointing the problem a lot more difficult in userland PHP. There is indeed a bug here. I wouldn't be filing a bug report or writing a test script if I wasn't certain about the issue.

Test script:
---------------

Expected result:
----------------
Consistent memory usage between runs with identical data. If-statements that do nothing should not have an effect on memory usage.

Actual result:
--------------
PHP inexplicably consumes all memory up to the limit in the INI file. But only sometimes. For the exact same inputs and data.

Well, since we no longer support PHP 5.3.x you could at least try to replicate in PHP 5.4/5.5 and let us know if you are still able to. I did a quick test using your provided script and couldn't see any issue. Also, since I suspect you are only able to see this when running under a web server and not from CLI, you should indicate how you are running PHP, whether enable_gc is on or off and whether and which opcode cache you are using. All of which could affect this.

According to phpinfo():
zend.enable_gc On
No opcode cache.
Apache/2.2.21 (Unix) PHP/5.3.27 mod_ssl/2.2.21 OpenSSL/1.0.0h
Apache is running PHP as a module. Unfortunately, I can't upgrade the box to 5.4 (yet).

Update: Built 5.5.4, ran 'make test', waited forever, and then it spit out 22 errors. So, can't deploy 5.5.4. Filed the report with PHP QA as requested by 'make test'.
=====================================================================
FAILED TEST SUMMARY
---------------------------------------------------------------------
Test DOMDocument::loadXML() detects not-well formed XML [ext/dom/tests/DOMDocument_loadXML_error4.phpt]
Test DOMDocument::load() detects not-well formed XML [ext/dom/tests/DOMDocument_load_error4.phpt]
DomDocument::schemaValidateSource() - string that is not a schema [ext/dom/tests/DOMDocument_schemaValidateSource_error1.phpt]
DomDocument::schemaValidateSource() - non-conforming schema [ext/dom/tests/DOMDocument_schemaValidateSource_error2.phpt]
DomDocument::schemaValidate() - file that is not a schema [ext/dom/tests/DOMDocument_schemaValidate_error1.phpt]
DomDocument::schemaValidate() - non-conforming schema file [ext/dom/tests/DOMDocument_schemaValidate_error2.phpt]
DomDocument::schemaValidate() - non-existent schema file [ext/dom/tests/DOMDocument_schemaValidate_error5.phpt]
Bug #42082 (NodeList length zero should be empty) [ext/dom/tests/bug42082.phpt]
Bug #47848 (importNode doesn't preserve attribute namespaces) [ext/dom/tests/bug47848.phpt]
Test 5: HTML Test [ext/dom/tests/dom005.phpt]
Test function getservbyname() [ext/standard/tests/general_functions/getservbyname_basic.phpt]
Test ceil() - basic function test for ceil() [ext/standard/tests/math/ceil_basic.phpt]
Test ip2long() function : usage variation 2, 32 bit [ext/standard/tests/network/ip2long_variation2.phpt]
xmlwriter_write_attribute_ns basic function tests [ext/xmlwriter/tests/xmlwriter_write_attribute_ns_basic_001.phpt]
xmlwriter_write_attribute_ns with missing param [ext/xmlwriter/tests/xmlwriter_write_attribute_ns_error_001.phpt]
xmlwriter_write_dtd basic function tests [ext/xmlwriter/tests/xmlwriter_write_dtd_basic_001.phpt]
Bug #52944 (segfault with zlib filter and corrupted data) [ext/zlib/tests/bug_52944-darwin]
basic function [sapi/cli/tests/php_cli_server_001.phpt]
No router, no script [sapi/cli/tests/php_cli_server_013.ph.

Finally fixed the issue today. I ended up rewriting most of the code but the problem still reared its head under specific conditions. It finally went away when I called:
unserialize(serialize($row2));
Serializing and then unserializing forces a complete disconnect in the Zend allocator, which is exactly what should have happened in the first place. The bug seems to only show up when htmlspecialchars() is called on a string and the output is assigned to an array. In my case, each row only takes about 1KB RAM and there are only about 3,000 rows (approximately 3MB RAM should be used, not 128MB+), so the runaway memory issue technically still exists, but it has been buried in my particular case.
https://bugs.php.net/bug.php?id=65775
CC-MAIN-2018-17
en
refinedweb
At the top of each affected file, add one or more package clauses that name the current project and then the current subpackage(s). This can be done by a simple multi-file regular expression search and replace operation.
In passing, it turns out that the old Scala 2.7 rules are the same as the rules in C# and other .Net languages. So why did something that obviously works on .Net cause such problems on the JVM? It's a matter of expectations and conventions. In .Net, which has nested namespaces similar to Scala's packages, nobody in their right mind would have defined a namespace org.System because it would shadow the well-known top-level System namespace. On the JVM, people do this sort of thing, and it works, because of Java's absolute package name convention. So this experience shows that sometimes a design cannot be judged to be right or wrong only along technical criteria; it also matters how it fits with the pre-existing conventions and expectations of its users. Scala 2.7's nested packages are a simple design.
The Scala programming language website is at:
The Scala 2.8 release notes are at:
The Scaladoc collections API is at:
Mart.
https://www.artima.com/scalazine/articles/chained_package_clauses_in_scalaP.html
CC-MAIN-2018-17
en
refinedweb
Flashcards Preview Cardiovascular System The flashcards below were created by user Pandora320 on FreezingBlue Flashcards . Quiz iOS Android More What is the function of the cardiovascular system? Provides the transport system to continuously deliver nutrients and remove wastes from the tissues/cells. What are the three parts of the cardiovascular systems and what is the function of each? Heart - pump Blood vessels - delivery routes Blood - transport medium Two-thirds of the heart projects to the ______ of the midsternal line. left What are the two layers of the serous pericardium and what is the function of each? Parietal - lines the internal surface of the fibrous pericardium Visceral - lines the external surface of the heart. Both secrete fluid to reduce friction. What are the three layers of the heart wall? Which is the thickest and serves as the functioning layer of the heart pump? Epicardium Myocardium - thickest Endocardium What chambers of the heart are the "receiving chambers" and which chambers serve as the actual "pumps" of the heart, propelling blood into circulation? Atria are receiving chambers Ventricles are the pumps Which chamber of the heart has, by far, the thickest wall and why? The left ventricle - it pumps blood throughout the body. The heart is two side by side pumps each serving a different circuit. What are the two circuits and which side of the heart supplies blood to each? Pulmonary circuit - right side Systemic circuit - left side Which circuit contains a greater volume of blood? Both circuits have an equal volume of blood. Which of the two ventricles has the greatest workload? The left ventricle has a greater workload and is 3x as thick to manage it. What is the name of the arteries that supply oxygen and nutrients to the heart muscle (myocardium)? From where do these arteries arise? Coronary arteries - arise from the base of aorta. What is angina pectoris? What causes it? Thoracic pain caused by fleeting deficiency of oxygen in blood to coronary arteries. What is the technical term for "heart attack"? What is it caused by? What is the most common cause of death with a "heart attack"? Myocardium infarction Prolonged coronary blockage resulting in myocardial cell death in region of vascular deficiency. Arrhythmia What is the purpose of the heart valves? Ensure unidirectional blood flow through heart. Which valves are the papillary muscles/chordae tendineae attached to? What is their function? Atrioventricular valves - prevents prolapse of valves into atria. Both an incompetent (leaking/regurgitating) valve and a stenotic valve (valvular stenosis) cause the same problem for the heart. What is this problem and how does each of these two valvular abnormalities cause it? Heart's workload increases and may ultimately be markedly weakened. (Incompetent valve increases cardiac workload as it pumps the same blood over and over.) (Stenotic valve is stiffening that constricts the valve opening so the heart must contract more forcibly to overcome this narrowing.) Are cardiac muscles cells "striated"? Yes. Do cardiac muscle cells contract via the sliding filament model? Yes. Why does the cardiac muscle have so many more mitochondria than skeletal muscle? For nearly exclusive aerobic respiration, constant supply of oxygen and ATP production. What is the adaptation, unique to cardiac muscle cells, that enables electrical coupling of all the cardiac muscle cells so that they can contract synchronously? 
Gap junctions in the intercalated disc allow ions to pass from cell to cell so that adjacent cells are electrically coupled transmitting current across the entire heart. Why is it so important that cardiac muscle cells have such a long absolute refractory period? To prevent tetanic contractions which would stop the heart's pumping action. What ion channel is unique to cardiac muscle cells that enables a long refractory period, as well as causing a much longer depolarization phase and contraction period of the muscle cell? Slow calcium channel Will the heart continue to beat rhythmically if all of its nerve connections are severed? Yes What system, unique to the heart, allows it to continue to beat rhythmically if all of its nerve connections are severed? Intrinsic cardiac conduction system What ion channels create unstable resting potentials that enable autorhythmic cells to generate rhythmic impulses that pace the heart? Slow sodium channels What serves at the heart's "pacemaker"? What is the "rhythm" it generates called? About how many "heart beats" per minute does it generate? Sinoatrial (SA) node Sinus rhythm Average 75 bpm If the heart's "pacemaker" is not functioning, the ____________ may take over as pacemaker (if ectopic focus does not), generating a heart rate of approximately ________. AV node 50 bpm What are the last three components of the intrinsic cardiac conduction system? What is their depolarization rate? Is this enough to maintain adequate circulation in most people? -AV bundle (bundle of His) -Right and Left bundle branches -Purkinje fibers 30 bpm No The wringing motion of contraction of the heart's ventricles begins at _______ and moves toward the ________ ejecting blood into _______________. heart apex atria the large arteries leaving the ventricles What is arrhythmia? What is the arrhythmia that can cause sudden death? Irregular heart rhythm Ventricular fibrillation A(n) ____________ is a composite of all the action potentials being generated by nodal and contractile cells of the heart at a given time. Electrocardiogram The P wave is indicative of depolarization of _________ the QRS complex shows ____________ and the T wave ____________. atria ventricular depolarization ventricular repolarization Elevated or depressed S-T segment on an EKG can be indicative of ________. cardiac ischemia What are you hearing when you listen to the "heart sounds" with a stethoscope? What are abnormal heart sounds called? What are they usually indicative of? Closing of the heart valves Heart murmurs Valve problems What is the "cardiac cycle"? All events associated with blood flow through the heart during one complete heart beat. What is systole? Contraction What is diastole? Relaxation During what part of the cardiac cycle does ventricular filling take place? Mid to late ventricular diastole Does atrial contraction account for most of the filling? No, 80% of blood passively flows into ventricles. What is end diastolic volume (EDV)? Volume of blood in each ventricle at the end of ventricular diastole. What are the two phases of ventricular systole? Which valves are open and closed in each phase? Isovolumetric contraction phase - all valves closed Ejection phase - Semilunar (SL) valves open What is end systolic volume (ESV)? Volume of blood remaining in each ventricle after contraction. What controls blood flow through the heart? Pressure changes with flow down a pressure gradient through any available opening. What is cardiac output (CO)? 
Volume of blood pumped by each ventricle in one minute. What are the two components of cardiac output (CO)? (Equation) CO = HR x SV Cardiac Output = heart rate x stroke volume How can we increase our cardiac output (CO) in response to demand? Increase heart rate and/or stroke volume. What is the difference between resting and maximal cardiac output (CO) called? Cardiac Reserve What is the "equation" for determining stroke volume (SV)? SV = EDV - ESV What are the three main factors which affect stroke volume (SV)? Preload Contractility Afterload How does preload increase during exercise? Increase venous return during exercise increases EDV and decreases ESV which increases SV. What is the definition of contractility? Contractile strength at a given muscle length independent of muscle stretch and EDV. Why does increasing contractility result in an increase in stroke volume (SV)? Decreases ESV What are two ways we can increase our heart's contractility? -Increased Ca2+ influx due to sympathetic stimulation -Hormones (thyroxine, glucagon, epinephrine) What is afterload? Pressure that must be overcome for ventricles to eject blood. What is the main cause of increased afterload? Hypertension Does hypertension increase or decrease stroke volume (SV)? Decreases stroke volume, increases ESV What part of the nervous system exerts the most control over heart rate (HR)? Autonomic nervous system Which part of the ANS increases HR and which part decreases HR? Sympathetic increases HR Parasympathetic decreases HR How is a trained athlete able to maintain normal circulation with a resting heart rate of 40 bpm? Heart muscle strengthens which increases SV so they can maintain same resting CO with lower HR. What is tachycardia, bradycardia? Tachycardia is a fast heart rate (>100bpm) Bradycardia is a slow heart rate (<60 bpm) How can excessive degrees of tachycardia/bradycardia results in insufficient cardiac output (CO) to maintain adequate circulation to the tissues? Tachycardia - if persistent may lead to fibrillation Bradycardia - may result in grossly inadequate blood circulation How can hyperkalemia lead to cardiac arrest? Lowers the resting potential What is congestive heart failure (CHF)? Weakening of the myocardium by damage from various conditions can result in a chronic decrease output from the heart. Failure of which ventricle results in "excess fluid in lungs" (pulmonary edema)? Left ventricle What are the two main causes of CHF? Coronary atherosclerosis Persistent high blood pressure What two types of congenital heart defects results in mixing of systemic and pulmonary blood? Why is this a problem? -Atrial and ventricular septal defects -Patent ductus arteriosus Inadequately oxygenated blood reaches body tissues. What is an artery? a vein? Arteries carry blood away from the heart. Veins carry blood to the heart. Do the pulmonary arteries carry oxygenated or deoxygenated blood? Deoxygenated Which types of blood vessels are in contact with tissue cells and directly serve cellular needs? Capillaries What are the three layers of the walls of the arteries and veins? What is each layer composed of and what is its primary function? Tunica intima - simple squamous epithelium - lines lumen of all vessels. Tunica media - smooth muscle and elastin - sympathetic vasomotor nerve fibers control vasoconstriction and vasodilation of vessels for control of blood flow and blood pressure. Tunica externa - collagen fibers - protect and reinforce. The aorta and its major branches are the ____________ arteries. 
What are two aspects of these vessels' structure that are critical to their function? elastic (conducting) -elastin in all 3 tunics -large lumen What type of arteries control flow into the capillary beds via vasodilation and vasoconstriction? Arterioles How many cells thick is the wall of a capillary? One How does the thickness of a capillary wall relate to its function? By being one cell thick it allows the exchange of materials (gases, nutrients, wastes, hormones) between the blood and the interstitial fluid. The most common type of capillary are continuous capillaries. If they are "continuous" how do fluids pass from the blood to the interstitial spaces? Tight junctions connect endothelial cells, they are incomplete and intercellular clefts allow the passage of fluids and small solutes. What type of capillaries are found in the kidneys? Fenestrated capillaries How is the structure of fenestrated capillaries in the kidneys adapted to the function of this organ? The kidneys function as filters so fenestrated capillaries contain pores and are much more permeable to fluids and solutes than continuous capillaries. What are the two types of vessels in capillary beds? Vascular shunt True capillaries How is blood flow through the capillary beds controlled? Cuff of smooth muscle fibers called precapillary sphincters surrounds the root of each true capillary at the metarteriole and acts as a valve to regulate blood flow into the capillaries. Veins have _____ walls and _____ lumens than arteries. thinner larger Why are veins called capacitance vessels or blood reservoirs? Veins can contain up to 65% of the blood supply. What structural adaptation of veins, particularly in the limbs, help maintain the flow of blood back toward the heart? Large diameter lumens offer little resistance and valves formed from folds in the tunica intima prevent backflow of blood. What homeostatic imbalance results if the valves in veins become incompetent? Where is this abnormality most common? What are two risk factors for this condition? Varicose veins - tortuous, dilated veins due to incompetent valves. Most common in superficial veins of legs. -Heredity -Conditions that hinder venous return (prolonged standing, obesity, pregnancy) What is the purpose of arterial anastomoses at the base of the brain (circle of Willis)? Provide alternate pathways (collateral channels) to a given body region. The homeostatic imbalance of arteries responsible for 50% of US deaths is ________. Death from this abnormality is most likely when it occurs in the _____ and _____ arteries where it can cause ______ and _______. atherosclerosis coronary carotid myocardium infarction stroke What is thought to be the first step in the formation of atherosclerotic plaques? damage to the tunica intima Which type of cholesterol containing lipoprotein is responsible for the accumulation of fats in atherosclerotic plaques and which is protective against this accumulation? LDL HDL What happens to atherosclerotic plaques that usually results in death (sometimes sudden) due to MI or stroke? Plaque becomes unstable: can ulcerate and rupture, causing platelet adhesion and thrombus formation. Which four risk factors for atherosclerosis are theoretically under our control? Smoking, obesity, diet, exercise What are two types of medications that can help reduce risk of atherosclerosis if lifestyle changes are not successful? Anti-hypertensive medications Cholesterol lowering drugs (statins) What is the point of taking one baby aspirin a day? 
Reduces platelet aggregation. What is the definition of blood pressure? Force per unit area exerted on the wall of a blood vessel by the blood (mmHg) What is the driving force that keeps blood moving from the heart toward the tissues? Pressure gradient What is "resistance"? Opposition to flow Where in the vascular system is the most resistance encountered? Periphery of systemic circulation Variation of what vascular parameter is utilized by the body to alter peripheral resistance? What type of vessel is most important in this process? Changes in blood vessel diameter Small diameter arterioles What is the formula that expresses the relationship between blood flow, blood pressure and resistance? F = deltaP/ PR Blood Flow = Blood Pressure Gradient / Peripheral Resistance What is "normal" blood pressure? 110 to 140 over 70 to 80 What is pulse pressure? Difference between systolic and diastolic pressure What is mean arterial pressure (MAP)? Pressure that propels the blood to the tissues Why do pulse pressure and mean arterial pressure (MAP) both decrease as one moves further away from the heart? MAP decreases due to friction Pulse pressure decreases as arteries become more muscular (less elastic) Why is low capillary blood pressure desirable? High BP would rupture fragile, thin-walled capillaries. Most are very permeable, so even low pressure forces solute containing fluid out of blood stream and into interstitial spaces. What are three functional adaptations utilized to help return blood to the heart from the low pressure venous system? Respiratory pump Muscular pump (most important) Sympathetic control How does exercise increase the efficiency of the mechanisms to return blood to the heart? Why is this increased efficiency important? Increases CO via increased venous return (increased EDV) Because muscles need lots of blood for exercise. Why is maintaining adequate blood pressure crucial for body homeostasis? To maintain a steady flow of blood from the heart to the periphery is vital for organ function. What are the three factors which can be varied to influence systemic blood pressure? Which of these factors is utilized for short-term control of blood pressure? Which for long-term? CO = deltaP / PR Cardiac Output - short-term Peripheral Resistance (PR) - short-term Blood Volume - long-term In a low blood volume/low blood pressure situation how can neural controls maintain blood flow to vital organs (brain, heart, kidneys)? Constrict all blood vessels except those supplying heart and brain so blood perfuses those vital organs. The cardiovascular center in the medulla is composed of _______ center which maintains short-term blood pressure control by altering _______ and the __________ center which controls blood pressure by altering ________________. The ________ branch of the _________ nervous system mediates these changes. cardiac cardiac output vasomotor peripheral resistance sympathetic autonomic How does the vasomotor center monitor blood pressure (what two types of receptors does it receive input from)? What variables does each type of receptor monitor? Baroreceptors - pressure sensitive Chemoreceptors - changes in blood levels of carbon dioxide, oxygen, hydrogen ions. Where are the baroreceptors located that help protect the blood supply of the brain? Carotid sinus reflex What changes in the blood would chemoreceptors detect that would signal the cardiovascular center that an increase in blood pressure was called for? Why? 
Increase in carbon dioxide and decrease in pH or oxygen Because you need more oxygen/nutrients where metabolic processes are occuring. What organ is predominately involved in long-term regulation of blood pressure? What variable does it utilize to control BP? Kidney Blood volume In regard to long-term regulation of blood pressure, how does the direct mechanism work? Alters blood volume independently of hormones. What hormone is utilized for indirect mechanism (for long-term regulation of blood pressure)? Angiotensin II How does angiotensin II increase blood pressure? It's a potent vasoconstrictor, stimulates aldosterone secretion, stimulates ADH release which increases blood volume. What is orthostatic hypotension? In what age group is it most common? Temporary low BP and dizziness when suddenly rising from a sitting or reclining position. Elderly. Why is chronic hypertension so dangerous (what does it damage and what diseases does this lead to)? It damages blood vessels and strains the heart. Leads to CHF. Which is more common, primary hypertension or secondary hypertension? Primary hypertension (90% of cases) What causes primary hypertension? Complicated interplay of several risk factors: heredity, diet, obesity, age, stress, diabetes mellitus, smoking, and nicotine What are the four critical processes which occur with tissue perfusion? Delivery of oxygen/nutrients to, and removal of wastes from, tissue cells. Gas exchange Absorption of nutrients Urine formation Why is the velocity of blood flow the slowest in the capillaries of any other blood vessels? Allows adequate time for exchange between blood and tissues. What is autoregulation of blood flow? Automatic adjustment of blood flow to each tissue in proportion to its requirements at any given point in time. What are the two types of controls involved in autoregulation of blood flow? Metabolic Myogenic Accumulation of what metabolic substances results in vasodilation in any given organ or tissue? Why is this vasodilation beneficial to homeostasis? H+, K+, adenosine and prostaglandins and inflammatory chemicals. Because you need more oxygen/nutrients where metabolic processes are occuring. What do vascular endothelial cells release to cause vasodilation as a result of accumulation of metabolic substances? Nitrous oxide (NO) What are two ways that blood flow to skeletal muscle increases during exercise. Blood flow increases in direct proportion to the metabolic activity. Sympathetic nervous system constricts arterioles of digestive viscera and skin to divert it to muscles. What are the two functions of the cutaneous (skin) circulation that are extremely important to overall homeostasis and are relatively unique to this circulation? Maintains body temperature. Provides blood reservoir. What is typical blood pressure in the pulmonary arteries? How does this compare to the systemic circulation? 24/8 Very low How are metabolic autoregulatory controls of pulmonary blood flow different than all other tissues? Why are they different? They are opposite. Low oxygen levels cause vasoconstriction, high oxygen levels promote vasodilation. Why is increased blood flow in coronary arteries critical to meet the needs of a more vigorously working heart? Increased blood flow is critical to meet increased demand as at rest cardiac cells use as much as 65% of the delivered oxygen so increasing blood flow is the only way to make sufficient oxygen available to a more vigorously working heart. 
By what mechanism do most respiratory gases and nutrients pass between blood and interstitial fluid? Why does this result in oxygen flowing out of the blood and into the tissues? Diffusion Concentration gradient _________ pressure and ________ pressure are the two types of pressure that determine the direction of the bulk fluid flows into and out of the capillaries. At the arterial end, _______ pressure dominates, forcing fluid __________ and at the venous end, _________ pressure dominates drawing fluid _________. Hydrostatic colloid osmotic hydrostatic out of blood osmotic back into blood What is circulatory shock? Any condition in which blood vessels are inadequately filled and blood cannot circulate normally. What are the three types of circulatory shock? Which type is caused by extreme vasodilation? Hypovolemic shock Vascular shock - extreme vasodilation Cardiogenic shock What are two common examples of vascular shock and what causes vasodilation in each? Anaphylactic shock - systemic allergic reaction in which body wide vasodilation is triggered by massive histamine release. Septic shock - caused by septicemia in which bacterial toxins cause vasodilation. What are the four structures unique to the fetal circulation and why do they exist? Foramen ovale and ductus artenous - bypass nonfunctioning lungs Ductus venous - bypass liver Umbilical vein and arteries - circulate blood to/from the placenta where gas and nutrient exchange occurs with the mother's blood Why do premenopausal women have such low incidence of atherosclerosis? At what age do the risks of cardiovascular disease become equal in men and women? Protective effects of estrogen Age 65 What are the three parts of the lymphatic system? Lymphatic vessels Lymph Lymph nodes What are the two critical functions of the lymphatic system? Control of blood volume Immune system How does the extreme permeability of lymph capillaries enhance the defense function of the lymphatic system? Allows for uptake of large particles such as cell debris, pathogens and cancer cells. What are the two lymphatic ducts? Which one drains the majority of the body? Right lymphatic duct Thoracic duct - majority of body What is the name of the sac anterior to the upper lumber spine where the thoracic duct originates? Cisterna chyli Where do the lymphatic ducts empty their lymph? Into venous circulation at the junction of the internal jugular and subclavian veins. What are the two types of lymphocytes? What important role does each play in the immune system? T-cells - manage the immune response, attack and destroy foreign cells B-cells - produce plasma cells which secrete antibodies (mark antigens for destruction by phagocytes) What role do macrophages play? Phagocytize foreign substances and help activate T-cells. What type of tissue are lymphoid tissues/organs largely composed of? Reticular connective tissue What two important functions to lymphoid tissues/organs perform as part of the immune system? Houses and provides a proliferation site for lymphocytes. Furnishes a surveillance vantage point for lymphocytes and macrophages. What are the principal lymphoid organs or the body? Lymph nodes In what tissue are lymph nodes located? Where are they near the body surface? Embedded in connective tissue in clusters along lymphatic vessels. Near body surface in inguinal, axillary, cervical regions. What are the two primary functions of lymph nodes? Filter lymph - macrophages destroy microorganisms and debris. 
Immune system - lymphocytes are activated and mount an attack against antigens. What is lymphadenopathy? Swollen/enlarged lymph nodes. What are two types of causes of lymphadenopathy? How can the cause be hypothesized by physical examination of these lymph nodes? Infectious - tender Neoplastic (cancer) - hard and non-tender What is the largest lymphoid organ in the body? What are its two main functions? Spleen -Site of lymphatic proliferation and immune surveillance and response. -Cleanses the blood of aged and defective blood cells, platelets and debris. What is the function of the tonsils? How do they perform this function? Crypts trap bacteria and particulate matter entering pharynx in food and inhaled air. What comprises the mucosa-associated lymphatic tissues (MALT) and what is their function? Peyer's patches, appendix, tonsils, lymphoid nodules in walls of bronchi. Protects passages open to exterior from foreign matter. Author: Pandora320 ID: 44553 Card Set: Cardiovascular System Updated: 2010-10-27 13:05:25 Anatomy Folders: Description: Heart, Blood Vessels and Lymph Review Questions Show Answers: Flashcards Preview
https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=44553
CC-MAIN-2018-17
en
refinedweb
Management Test
The flashcards below were created by user Anonymous on FreezingBlue Flashcards.
the process of determining, through observation and study, the relevant information relating to the nature of a specific job. Need to do this to select the right people. Job Analysis identify tasks, duties, responsibilities, and performance expectations Job Description knowledge, skills, abilities, and other characteristics a person needs to be successful on a job job specification Part 1 of a Job Analysis - contains basic information about each employee including (skills, qualifications, salary and job history, company data, capacity of individual, special preferences) Want to have the right people in the right position at the right time Question: Where are we now? Skills Inventory Part 2 of a Job Analysis - Question: Where do we want to go? Attempts to determine future HR needs Forecasting Part 3 of a Job Analysis - Final Phase, Transitional activities, current trend (downsizing) Transition prohibits wage discrimination on the basis of sex - all else equal - women must make the same as men Equal Pay Act of 1963 eliminate employment discrimination related to race, color, religion, sex, or national origin in organizations that conduct interstate commerce. Title VII of the Civil Rights Act of 1964 the right of all people to work and to advance on the basis of merit, ability, and potential. equal employment opportunity protects people between 40 and 70 - no mandatory retirement at age 65 Age Discrimination in Employment Act prohibits discrimination in hiring of individuals with disabilities by federal agencies and federal contractors. Rehabilitation Act of 1973 gives individuals with disabilities sharply increased access to services and jobs. - protects people with disabilities - organizations must accommodate people with disabilities as long as it doesn't make a hardship Americans with Disabilities Act (ADA) of 1990 permits women, minorities, persons with disabilities, and persons who are religious minorities to have a jury trial and sue for punitive damages of up to 300K if they can prove they are victims of intentional hiring or workplace discrimination Civil Rights Act 1991 Enables qualified employees to take prolonged unpaid leave for family and health related reasons without fear of losing their jobs Family and Medical Leave Act (FMLA) providing preferential treatment for one group (minority) over another group (majority) rather than merely providing equal opportunity.
Reverse Discrimination provide a sample of behavior that is used to draw inferences about the future behavior or performance of an individual tests measure a person's capacity or potential ability to learn - IQ test Aptitude Test measure the job related knowledge possessed by a job applicant job knowledge test measure how well the applicant can do a sample of work to be performed proficiency test designed to determine how a person's interests compare with the interests of successful people in a specific job interest test measure a person's strength, dexterity, and coordination - must be necessary for the job psychomotor test attempt to measure personality characteristics psychological tests lie detector - record physical changes in the body as the test subject answers a series of questions polygraph tests extent to which a test predicts a specific criterion test validity consistency or reproducibility of the results of a test test reliability most valid type of interview - conducted using a prederemined outline structured interview a variation of the structured interview - the interviewer prepares the major questions in advance but has the flexibility to use such techniques as probing to help assess the applicant's strengths and weaknesses semi-structured interviews a variation of the structured interview - uses projective techniques to put the prospective employee in action situations that might be encountered on the job situational interview a variation of the structured interview - what did you do in your past that shows how you would do it in the future - or show that you might have learned from mistakes Past Behavior Description Interview interviews conducted without a predetermined checklist of questions - least Valid unstrictured interview 3 interviewing techniques 1. Stress - put interviewee under pressure 2. Panel - two or more interviewers - reliability 3. Group - questions several interviewees teogether in a group discussion 5 suggestions for conducting effective interviews 1. proper selection and trainin of interviewers 2. specific outline 3. put the applicant at ease 4. record the facts 5. evaluation of interview effectiveness the degree of attraction among group memebers or how tightly knit a group is group cohesiveness factors that affect the cohesiveness of informal work groups (7) size, success, status, outside pressures, stability of membership, communication, physical isolation 4 phases of team development forming, storming, norming, and performing a phase of team development - 1. occurs when the team members first come together forming a phase of team development - 2. involves a period of disagreement and intense discussion as members attempt to impose their individual viewpoints on the rest of the group storming a phase of team development - 3. the team develops the informal rules that enable it to regulate the behavior of the team members norming a phase of team development - 4. the team becomes an effective and high performing team only if it has gone through the 3 pervious stages performing People can keep job and work at 20-30% capacity - a hightly motivated person can work at 80-90% capacity - the importance of motivation William James based on the assumption that individuals are motivated to satisfy a number of needs and that money can directly or indirectly satisfy only some of these needs hierarchy of needs hierarchy of needs from top to bottom are (5) 1. Self Actualization 2. Esteem or ego 3. Social 4. Safety 5. 
Phsysiological Frederick Herzberg - 1st factor - aspects that are better than others ( make us feel good)-achievement, recognition, responsibility, advancement, and job characteristics Second factors - negative (work environment) - interpersonal relations motivation-hygiene approach giving an employee more of a similar type of operation to perform job enlargement the practice of periodically roatiting job assignments within the organization job rotation upgrading the job by adding motivator factors job enrichment developed by Victor Vroom - employee beliefs about the relationship among effort, performance, and outcomes as a reslut of performance and the value of employees place on the outcomes determine their level of motivation expectancy approach employees belief that his or her effort will lead to the desired level of performance expectancy emplyees belief that attaining the desired level of performance will lead to the desired rewards instrumentality employees belief about the value of rewards valence B.F. Skinner - if reward or punish it motivates 4 types: positivie, avoidance, extinction, punishment Reinforcement Approach providing a positive consequence as a relut of desirable behavior positive reinforcement giving a person the opportunity to avoid a negative consequence by exhibiting a desirable behavior ( aka negative reinfrcement) avoidance employees recieve positive reinforcement that encourages negative action - cut throat environment - providing no positive consequences or removing perviously provided positive consequences as a result of undesirable behavior extinction providing a negative consequence as a result of undesireable behavior punishment belief that satisfied employees = good performance research rejects this popular view satisfaction and motvation are not identical recruiting satisfied employees is successful the satisfaction- performance controversy Why practice Management Control???? Alert managers to potential critical problems Five actions for managers: 1. Prevent Crisis 2. Standardize Outputs 3. Appraise Employee performance 4. Update plans 5. Protect the organization's assets methods, sometimes called steering controls, attempt to prevent a problem from occurring - process or means to output is just as important as the output preliminary control also called screening controls, focus on things that happen as inputs are being transformed into outputs concurrent controls methods are designed to detect existing problems after they occur but before they reach crisis proportions- most controals are like this statement of expected results or requirements expressed in financial or numerical terms budget most widely used type of control - dangers are inflixibility, inefficiencies, "padded"(buy things you don't need to get higher budget for next year) budgetary control answer to budgetary control issues - requires each manager to justify an entire budget request in detail, ingorder items zero based budget method requires the manager to keep a written record of incidents, as they occur, involving job behaviors that illustrate both stisfactory and unsatisfactory performance of the employee being rated. 
critical incident appraisal a ranking method where you simply rank employees alteration ranking a ranking method where you compare each person to every other person in a group paired comparison ranking a ranking method - bell curve forced distribution a ranking method where everyone rates everyone - potential for sabatoge multirater assessment a potential error in performance appraisials - grouping of ratings at the positive end of the scale instead of spreading them throught the scale leniency a potential error in performance appraisials - occcurs when performance appraisal statistics indicate that most employees are evaluated similaly as doing average or above average work central tendancy a potential error in performance appraisials occurs when perfornace evaluations are based on work performed most recently, generally work performed one to two months before evaluation recency a potential error in performance appraisials - a positive or negative characteristic and generalize halo/ horn effect a potential error in performance appraisials - 3 other things that can cause errorss Personal Preference prejudices biases relative term that means different things to different people quality 4 most important areas for quality 1. loss of buisness 2. liability 3. costs 4. productivity pioneered in Japan, schedules materials to arrive and leave as they are needed Just in time inventory control (JIT) integrating different cultures and backgrounds diversity reasons for creating diverse workforce (4) employee population is increasingly diverse customer population is increasingly diverse retaining top talent means recruiting individuals from all backgrounds increasing diversity minimizes the risk of litigation the ability to produce more of a good than another producer with the same quantity of inputs absolute advantage producers should produce goods they are most efficient at producing and purchase from others the goods they are less efficient at producing law of comparative advantage goods and services that are sold abroad exports goods and services purchased abroad imports difference between the value of the good a country exports and the value of the goods it imports balance of trade export more than import (China) trade Surplus import more than export(U.S.) although we are the largest importer and exporter trade deficit government imposed taxes charged on goods imported into a country tariff restrictions on the quantity of a good that can enter a country quotas a total ban on the import of a good from a particular country embargo a region within which trade restictions are reduced or eliminated free trade area Lewins Three Step Model for Change 1. Unfreezing - new technological change- institute it 2. New Alternative - present and sell 3. Refreezing - reward for using Six reasons for resisting change 1. fear of unkown 2. economics 3. fear of skills loosing value 4. threats to power 5. additional work 6. threats to interpersonal relations an organization that is committed to creating, aquiring, and transforming knowledge the learning organization Three broad areas that are expected to affect management in the 21st century 1. Technological growth 2. Virtual Management 3. 
Ethical and Social responsibilities increases productivity, decreases costs, ability to hire best talent regardless of location, quickly solve problems with dynamic teams, more easily leverage both static and dynamic staff, improves the work environment, better balance of personal and professional lives, provides competitive advantage benefits of virtual management leaders must move to a trust method, new forms of communication and collaboration required, management must enable learning culture, staff re-education may be required, it can be difficult to monitor employee behavior challenges of virtual management a set of moral principles or values that govern behavior ethics occurs when an individual takes a backward looking or reflective perspective to determine whether the ethical situation at hand is related to a similar case and/or the rules governing it rule based style (formalism) occurs when an individual takes a forward looking perspective and compares the perceived choice alternatives and their consequences on key judging criteria cost/benefit style (utilitarianism) Three distinct schools of thought for social responsibility profit maximization trusteeship management social involvement makes it illegal for companies to monopolize trade the Sherman Act makes it illegal to charge different prices to different wholesale customers Clayton Act bans unfair or deceptive acts or practices including false advertising Wheeler-Lea Act refers to the ownership of ideas, such as inventions, books, movies, and computer programs Intellectual Property the obligation that individuals or businesses have to help solve social problems social responsibility
Author: Anonymous ID: 80193 Card Set: Management Test Updated: 2011-04-17 21:02:22 Description: Management Test
https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=80193
CC-MAIN-2018-17
en
refinedweb
How to include session id and username in Grails logs

Introduction
It can be very useful in debugging problems to include the session id and the logged-in user's username in your logs. Of course you can do it manually in every applicable log statement, but there is another way, a more Spring/Grails-like way. There is a feature in log4j called Mapped Diagnostic Context (MDC). This enables the storing of thread/request/context-specific values in a map that can be included in log output. To store something in this map you would write code like the following:

import org.apache.log4j.MDC
...
MDC.put("username", username)

This entry in the MDC map can then be referenced in an application's log4j conversion pattern. To add the username defined above you would have a pattern like:
%c{2} %X{username} %m%n

Grails Example
Let's now move to an example of how one could take advantage of MDC in Grails. Firstly, create a filter which will populate the MDC map. Such a filter could look like the following if you are using Spring Security:

import org.apache.log4j.MDC

class MyFilters {
    def springSecurityService

    def filters = {
        addAdditionalRequestInfoToLogs(controller: '*', action: '*') {
            before = {
                MDC.clear()
                MDC.put("sessionId", "[$session.id]")
                // try catch is good here because otherwise an error ends
                // up creating stackoverflow scenario
                try {
                    def username = springSecurityService.principal?.username
                    if (username) MDC.put("username", "[$username]")
                } catch (Exception e) {
                    log.error "$e"
                }
            }
        }
    }
}

Now you only need to update your logging config and you will be done. Open up your Config.groovy and track down your existing logging pattern definition. Here is one I have for development:

appenders {
    console name: 'stdout', layout: pattern(conversionPattern: '%c{2} %m%n')
}

Now add references to the data we have put in the MDC:

appenders {
    console name: 'stdout', layout: pattern(conversionPattern: '%c{2} %X{sessionId} %X{username} %m%n')
}

And that's it. You should be good to go for more informative logging!

Nice article.
Thanks a lot for the article, this is exactly what I was searching for!
http://www.34m0.com/2012/11/how-to-include-session-id-and-username.html
CC-MAIN-2018-17
en
refinedweb
28 December 2007 16:27 [Source: ICIS news]
By Charlie Shaw
LONDON (ICIS news)--Opinions were mixed regarding the short-term prospects for ethyl acetate. Some felt it would see modest growth, while others said fundamentals in the second half of 2008 would be less favourable than in 2007.
One producer said the slowing auto industry in
Another view was that prices will be driven higher by elevated oil and gas numbers. One distributor said fully integrated producers would have a clear advantage over those having to buy their raw materials.
On the other hand, more acetic acid capacity could come on stream later in the year which could start to ease ethyl acetate prices. Asian imports could start to arrive in larger quantities should the euro exchange rate remain favourable to Asian exporters. However, healthy demand in that part of the world could see sellers there solely interested in the domestic market.
Butyl acetate is likely to remain tight in 2008, with the availability of feedstock butanol constrained by two major maintenance shutdowns. This could result in an overall reduction of 10% of average annual output, according to some.
One large buyer thought otherwise, predicting that prices would be lower by the end of the year and forecasting a reduction in the cost of methanol - which is used for acetic acid production. The buyer thought that new capacity for production of butanol in the Asia-Pacific region would help to ease butyl acetate prices downwards.
Another factor cited as likely to sustain high prices was a protracted absence of imports from
On the other hand, one major producer said a weaker downstream economic environment could give some relief to the balance of supply and demand.
Increasing raw material costs and strong market competition continued to be the main focus as European producers of iso-propanol (IPA), methyl ethyl ketone (MEK) and methyl iso-butyl ketone (MIBK) looked ahead to 2008. Although tight supply has pushed spot prices for IPA and MEK up during times of severe production outages in 2007, prices bounced back below manufacturer targets as soon as market balance was restored, sellers and buyers noted.
Sustained strong naphtha pricing was an ongoing challenge to downstream MEK producers, and the €57/tonne ($83/tonne) first-quarter propylene increase will apply further upward pricing pressure on IPA and MIBK, producers said. Buyers, however, said the market was well supplied at present and manufacturers would struggle to raise the level.
For MIBK, a structural oversupply situation in the European market meant prices were some €200/tonne below what producers described as reasonable for profit margins. Domestic manufacturers said imported material was the main driver for the cost pressure and no easing of competition was to be expected in 2008, according to market participants.
Propylene oxide-based glycol ether producers will be looking to implement hikes in the region of €100/tonne for methoxy propanol (PM) and €120/tonne for methoxy propanol acetate (PMA) from next week, based chiefly on first-quarter propylene and methanol increases of €57/tonne and €110/tonne respectively. Sellers said they were never able to pass through the added raw material costs they incurred moving into the fourth quarter this year, which was why they would be looking to make up some of this extra ground early in 2008. One producer said it would aim to secure a sizable increase next week, followed by a series of step increases during the first quarter.
The market for ethylene glycol ethers saw sustained tightness in 2007 on account of a series of plant outages in This tightness was always forecast to remain in the first quarter of 2008, with European sellers still saying they would be unable to meet demand for some time. Prices are set to rise further with immediate effect on account of greater-than-expected first-quarter ethylene and propylene hikes, which will add a substantial cost to producers’ raw material expenditures. A maintenance outage announced by the market’s largest producer in February has given distributors and buyers added reason to suppose that producers will try to push prices through the €1,500/tonne FD NWE mark later in the quarter. ($1 = €0.69) Peter Gerrard and Sofia L
http://www.icis.com/Articles/2007/12/28/9088989/outlook-08-feedstock-driving-europe-solvents.html
CC-MAIN-2014-42
en
refinedweb
25 October 2012 09:44 [Source: ICIS news] SINGAPORE (ICIS)--The plant is scheduled to be taken offline for one week and is currently operating at around 90% of capacity, the source added. Hebei Yingdu Gasification’s plant shutdown is unlikely to have any major impact on the acetic acid market because the producer has sufficient inventories, market sources said.
http://www.icis.com/Articles/2012/10/25/9607166/chinas-hebei-yingdu-chemical-to-shut-acetic-acid-plant.html
CC-MAIN-2014-42
en
refinedweb
W. There was some initial dismay within the RDF community about the proliferation of documents. The RDF specification originally consisted only of two: the RDF Model & Syntax specification and the candidate RDF Schema document. The concern was that the release of so many additional documents signified an increase in the complexity of the RDF specification. However,. To read or contribute to discussions about the documents themselves, refer to the rdf-comments mailing list. For more discussion about RDF in general, see the rdf-interest group mailing list. The Concepts and Abstract Syntax document focuses on the core aspects that make up RDF, independent of any serialization format and outside the formal semantics of the RDF model. It essentially provides a glossary of RDF concepts and should be one of the first documents read by RDF newcomers. Included in the document is a good overview of the major components of the RDF specification, including the RDF graph model, the XML serialization, data types, and URIs. In particular, data types are given considerably more discussion in the Concepts document (and in the other documents) than they got in the 1999 release of the RDF M&S. The Concepts document doesn't require specialized knowledge to understand the topics discussed in it. However, there are some borderline concepts that possibly could cause confusion. For instance, a discussion of entailment provides an example and a model-specific interpretation of the example, but it doesn't define the term for those who lack a background in formal logic, model theory, and the like. The RDF Semantics document is a semantic clarification of RDF constructs. It's not a trivial read, particularly if you don't have a background in model theory semantics, which forms the basis of the proofs in the document. However, the document is essential for providing precise semantic interpretation of each aspect of RDF. Hopefully this will mean the endless rounds of debate concerning the precise meaning of each RDF construct can eventually come to an end, which would allow the RDF community to focus its energy on using, rather than endlessly interpreting RDF. Still, one aspect of the RDF Semantics document could generate considerable discussion in the future, in that two RDF concepts, reification and containers, lack a formal semantic specification in the document. The Semantics document is readable by an audience with substantial exposure to the RDF model concepts. I recommend prospective readers work through the Concepts document and Primer first, before taking up the Semantics document. The material is not a light read and not necessarily required for everyone interested in RDF. However, I do recommend reading the document at some point, if for no other reason than it provides a good definition and understanding of the concept of entailment. Many people's first and primary exposure to RDF will be through RDF/XML, a serialization format that's been the center of a great deal of controversy. Some have called for a simplified XML syntax which more clearly demonstrates both the individual RDF triple and the underlying RDF graph. Whatever your opinion of the syntax, it is essential that you read the Concepts document and the Primer, at a minimum, before reading the RDF/XML Syntax specification. One formatting change in the Syntax document was the inclusion of the concept of XML striping within the specification, rather than as a separate note. 
This does provide a better overview of the mapping of the RDF graph "node-arc-node-arc" to XML. In addition, because the concepts and semantics have been pulled into separate documents, the XML specification can focus more closely on the syntax without having to switch between abstract RDF concepts and RDF/XML implementation. The document also provides greater detail about RDF data typing, as well as more examples of RDF/XML particulars, including a closer look at the parseType attribute and the container membership elements. parseType One particular clarification in the new specification deals with RDF/XML within HTML documents. This topic has generated a great deal of discussion and workarounds in the past, even including RDF/XML within a script tag. The document formalizes the RDF Working Group position that RDF should not be embedded in HTML. Users are encouraged to use the link element in HTML or XHTML documents to point to separate RDF resources. script link An additional change to the new working draft from earlier drafts is that the RDF namespace is no longer required, though still strongly recommended, for the about and ID attributes. This change allows some users of RDF, such as Mozilla, to preserve the validity of existing documents. about ID At the end of the document is a change section detailing the many changes between releases. This section may be removed before final publication, but it's worth a read if you've worked with RDF and RDF/XML in the past. All in all, the RDF/XML specification document is much cleaner and has a tighter focus than previous releases of the same document. The RDF Vocabulary/Schema (RDFS) is used to describe RDF vocabularies. It's a revision of the original RDF Schema specification and provides additional detail and classes, including the new rdfs:Datatype. Other new classes have been defined to support collections, a new RDF concept. Collections are RDF resources grouped together whose order is determined by RDF properties rdf:first, rdf:rest, and rdf:nil. The rdf:List is the class representing all of these collections. rdfs:Datatype rdf:first rdf:rest rdf:nil rdf:List Other than these additions, and some clarifications, the RDF Vocabulary is quite similar to the original RDF Schema working draft. This document is essential reading for anyone who wants to understand more about how RDF vocabularies are defined. RDF is simple if you think about an RDF graph as a set of node-arc-node triples. However, RDF as a specification isn't trivial, primarily because of its semantics, which tend to trip us up, even though the RDF/XML syntax usually receives the brunt of criticism. The RDF Working Group is aware that there has been difficulty understanding and interpreting the RDF specification, so among the working drafts it has released is the RDF Primer. Most of the document focuses on the RDF/XML and the associated RDF Schema, which isn't surprising because most uses of RDF are based in RDF/XML. The document also provides a look at some existing RDF applications, such as Dublin Core, PRISM, and RSS 1.0. The Primer provides a good introduction to the basics of RDF without getting too mired in the semantic depths. It's the first document you'll want to read when you get exposed to the specification. The RDF Test Cases document provides test cases in RDF/XML and N-Triple format that demonstrate each RDF issue as it is resolved. These test cases are a way for RDF tool developers to verify conformance to the RDF specification. © , O’Reilly Media, Inc. 
http://www.xml.com/pub/a/2002/11/27/rdf.html?page=last&x-order=date&x-showcontent=off
CC-MAIN-2014-42
en
refinedweb
strpattern_match_end_index()

Get the end index of a match.

Synopsis:
#include <strpattern.h>
int strpattern_match_end_index(const strpattern_match *match, int *err)

Since: BlackBerry 10.0.0

Arguments:
- match: The match whose end index is returned.
- err: STRPATTERN_EOK if there is no error.

Library: libstrpattern (for the qcc command, use the -l strpattern option to link against this library)

Description:
This function returns the end index of a match. The end index is the offset, from the beginning of the analyzed string, of the character immediately following the last character of the match. If the last character of the match is the last character of the analyzed string, the end index is one past the end of the string. The offset is counted in characters of the analyzed string using Unicode code points; characters are not reinterpreted in any way. For example, each code point is counted as a character even if it represents a character decoration associated with the preceding character.

Returns: The end index of the match (-1 on error).

Last modified: 2014-05-14
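The reference above gives only the signature. The following minimal C sketch (not from the reference page) shows one way the call might be wrapped; it assumes a strpattern_match obtained earlier from the library's pattern-analysis step, and uses only the signature, the STRPATTERN_EOK error value, and the -1 error return documented above.

#include <stdio.h>
#include <strpattern.h>

/* Hypothetical helper, for illustration only. Assumes "match" was produced
 * earlier by the library's pattern-analysis step. */
static void print_match_end(const strpattern_match *match)
{
    int err = STRPATTERN_EOK;
    int end = strpattern_match_end_index(match, &err);

    if (err != STRPATTERN_EOK || end == -1) {
        fprintf(stderr, "could not read match end index\n");
        return;
    }

    /* "end" is the offset, in Unicode code points, of the character
     * immediately following the last character of the match. */
    printf("match ends at index %d\n", end);
}

As noted above, link with -l strpattern when building with qcc.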
http://developer.blackberry.com/native/reference/core/com.qnx.doc.strpattern.lib_ref/topic/strpattern_match_end_index.html
CC-MAIN-2014-42
en
refinedweb
On Friday 26 November 2004 19:19, Hans Reiser wrote:>.> Regarding namespace unification + XPath:For files: cat /etc/passwd/[. = "joe"] should work like in XPath.But what to do with directories?Would 'cat /etc/[. = "passwd"]' output the contents of the passwd fileor does it mean to output the file '[. = "passwd"]'?If the first is the case then you have to prohibit filenames looking like '[foo bar]'.If the shells wouldn't like * for themself, I'd suggest something likecat /etc/*[. = "passwd"]This means: list all contents and show the ones where /etc/passwd/*[@shell = "/bin/tcsh"]/@shellI hope I'm not offending, but my impression is now thatXPath stuff fits better into some shell providinga XPath view of the filesystem, than into the kernel.--------------------------------------------------------------------What about mapping the contents of files into "pure" posix namespace?XML is basically a tree, too.Notes: 1) "...." below is the entry to reiser4 namespace.2) # denotes a shell commandFor example:# cd /etc/passwd/# ls -a *. .. .... joe root# cd joe# lsgid home passwd shell uid# cat shell/bin/tcsh# cd ../....# ls plugins I guess an implementation in reiser4 would require somemime-type/file extension dispatcher plus a specialdirectory plugin for each mime-type.-- lg, Chris-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2004/11/26/56
CC-MAIN-2014-42
en
refinedweb
I am not sure if we are utilizing this for Jetty...I didn't see a virtual-server style parameter in the geronimo-jetty.xsd. Perhaps it is somewhere else...which leads me to a discussion we have had in the past... I would very much be interested in taking the geronimo-jetty.xml and geronimo-tomcat.xml files and merge them as a common version, such as a geronimo-web.xml...and remove the jetty namespace attributes from the xsd. This way both containers can surf off the same file....and reuse the xmlbean code. I did not see anything in the xds that was container specific. If anything is container specific, the builder could ignore the parameter. Thoughts? Jeremy Boynes wrote: > Jeff Genender wrote: > >> >> I also added a Tomcat Builder so we will no longer rely on the Jetty >> version. The main reason for this is that Tomcat supports virtual >> hosts (declared by additional HostGBean objects in the plan). The web >> applications can now include a geronimo-tomcat.xml file in the WEB-INF >> file which is very similar to the Jetty version. The only difference >> is support for the <virtual-server> parameter. This allows you to >> deploy your web application to a specific virtual host. >> > > IIRC Jetty has this function as well - can we merge the vhost changes > back into the jetty-builder? > > -- > Jeremy
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200504.mbox/%3C426D7846.7040601@savoirtech.com%3E
CC-MAIN-2014-42
en
refinedweb
Linux was one of the first cross-platform operating systems to use 64-bit processors, and now 64-bit systems are becoming commonplace in servers and desktops. Many developers are now facing the need to port applications from 32-bit to 64-bit environments. With the introduction of Intel® Itanium® and other 64-bit processors, making software 64-bit-ready has become increasingly important. As with UNIX® and other UNIX-like operating systems, Linux uses the LP64 standard, where pointers and long integers are 64 bits but regular integers remain 32-bit entities. Although some high-level languages are not affected by the size differences, others such as the C language may be. The effort to port an application from 32 bits to 64 bits might range from trivial to very difficult, depending on how these applications were written and maintained. Many subtle issues can cause problems even in a well-written, highly portable application, so this article outlines these issues and suggests ways to deal with them. Advantages of 64 bits 32-bit platforms have a number of limitations that are increasingly frustrating to developers of large applications such as databases, especially those developers who wish to take advantage of advances in computer hardware. While scientific calculations normally rely on floating-point mathematics, a few applications such as financial calculations need a narrower numeric range but higher precision than floating point offers. 64-bit math provides this higher precision fixed-point math, with an adequate range. There is much discussion today in the computer industry about the barrier presented by 32-bit addresses. 32-bit pointers can address only 4GB of virtual address space. You can overcome this limitation, but application development becomes more complicated, and performance is significantly reduced. As far as language implementation is concerned, the current C language standard allows the "long long" data type to be at least 64 bits. However, an implementation may define it as a larger size. Another area that requires improvement is dates. In Linux, dates are expressed as signed 32-bit integers representing the number of seconds since January 1, 1970. This turns negative in 2038. But in 64-bit systems, dates are expressed as signed 64-bit integers, which extends the usable range. In summary, the 64-bit architecture has the following advantages: - A 64-bit application can directly access 4 exabytes of virtual memory, and the Intel Itanium processor provides a contiguous linear address space. - 64-bit Linux allows for file sizes up to 4 exabytes (2 to the power of 63), a very significant advantage to servers accessing large databases. The Linux 64-bit architecture Unfortunately, the C programming language does not provide a mechanism for adding new fundamental data types. Thus, providing 64-bit addressing and integer arithmetic capabilities involves changing the bindings or mappings of the existing data types, or adding new data types to the language. Table 1. 32-bit and 64-bit data models The difference among the three 64-bit models (LP64, LLP64, and ILP64) lies in the non-pointer data types. When the width of one or more of the C data types changes from one model to another, applications may be affected in various ways. These effects fall into two main categories: - Size of data objects. 
The compilers align data types on a natural boundary; in other words, 32-bit data types are aligned on a 32-bit boundary on 64-bit systems, and 64-bit data types are aligned on a 64-bit boundary on 64-bit systems. This means that the size of data objects such as a structure or a union will be different on 32-bit and 64-bit systems. - Size of fundamental data types. Common assumptions about the relationships between the fundamental data types may no longer be valid in a 64-bit data model. Applications that depend on those relationships will fail when compiled on a 64-bit platform. For example, the assumption sizeof (int) = sizeof (long) = sizeof (pointer)is valid for the ILP32 data model, but not valid for others. In summary, the compilers align data types on a natural boundary, which means that "padding" will be inserted by the compiler to enforce this alignment, as in a C structure or union. The members of the structure or union are aligned based on their widest member. Listing 1 illustrates this structure. Listing 1. C structure struct test { int i1; double d; int i2; long l; } Table 2 shows the size of each member of the structure and the structure size itself on 32-bit and 64-bit systems. Table 2. Size of structure and structure members Note here that on a 32-bit system, the compiler may not align the variable d, even though it is a 64-bit object, because the hardware treats it as two 32-bit objects. However, a 64-bit system aligns both d and l causing two 4-byte fillers to be added. Porting from 32-bit to 64-bit systems This section shows you how to correct common trouble spots: - Declarations - Expressions - Assignments - Numeric constants - Endianism - Type definitions - Bit shifting - Formatting strings - Function parameters Declarations To enable your code to work on both 32-bit and 64-bit systems, note the following regarding declarations: - Declare integer constants using "L" or "U", as appropriate. - Ensure that an unsigned int is used where appropriate to prevent sign extension. - If you have specific variables that need to be 32-bits on both platforms, define the type to be int. - If the variable should be 32-bits on 32-bit systems and 64-bits on 64-bit systems, define them to be long. - Declare numeric variables as int or long for alignment and performance. Donât try to save bytes using char or short. - Declare character pointers and character bytes as unsigned to avoid sign extension problems with 8-bit characters. Expressions In C/C++, expressions are based upon associativity, precedence of operators and a set of arithmetic promotion rules. To enable your expression to work correctly on both 32-bit and 64-bit systems, note the following rules: - Addition of two signed ints results in a signed int. - Addition of an int and a long results in a long. - If one of the operands is unsigned and the other is a signed int, the expression becomes an unsigned. - Addition of an int and a double results in a double. Here, the int is converted to a double before addition. Assignments Since pointer, int, and long are no longer the same size on 64-bit systems, problems may arise depending on how the variables are assigned and used within an application. A few tips in this regard: - Do not use int and long interchangeably because of the possible truncation of significant digits. For example, don't do this: int i; long l; i = l; - Do not use an int to store a pointer. 
The following example works on a 32-bit system but fails on a 64-bit system, because a 32-bit integer cannot hold a 64-bit pointer. For example, don't do this: unsigned int i, *ptr; i = (unsigned) ptr; - Do not use a pointer to store an int. For example, don't do this: int *ptr; int i; ptr = (int *) i; - In cases where unsigned and signed 32-bit integers are mixed in an expression and assigned to a signed long, cast one of the operands to its 64-bit type. This will cause the other operands to be promoted to 64-bits and no further conversion is needed when the expression is assigned. Another solution is to cast the entire expression such that sign extension occurs on assignment. For example, consider the problem caused by the following: long n; int i = -2; unsigned k = 1; n = i + k; Arithmetically, the result should be -1 in the expression shown in bold above. But since the expression is unsigned, no sign extension occurs. The solution is to cast one of the operands to its 64-bit type (as in the first line below) or cast the entire expression (as in the second line below): n = (long) i + k; n = (int) (i + k); Numeric constants Hexadecimal constants are commonly used as masks or specific bit values. Hexadecimal constants without a suffix are defined as an unsigned int if it will fit into 32-bits and if the high order bit is turned on. For example, the constant OxFFFFFFFFL is a signed long. On a 32-bit system, this sets all the bits, but on a 64-bit system, only the lower order 32-bits are set, resulting in the value 0x00000000FFFFFFFF. If you want to turn all the bits on, a portable way to do this is to define a signed long constant with a value of -1. This turns all the bits on since twos-compliment arithmetic is used: long x = -1L; Another problem that might arise is the setting of the most significant bit. On a 32-bit system, the constant 0x80000000 is used. But a more portable way of doing this is to use a shift expression: 1L << ((sizeof(long) * 8) - 1); Endianism Endianism refers to the way in which data is stored, and defines how bytes are addressed in integral and floating point data types. Little-endian means that the least significant byte is stored at the lowest memory address and the most significant byte is stored at the highest memory address. Big-endian means that the most significant byte is stored at the lowest memory address and the least significant byte is stored at the highest memory address. Table 3 shows a sample layout of a 64-bit long integer. Table 3. Layout of a 64-bit long int For example, the 32-bit word 0x12345678 will be laid out on a big endian machine as follows: Table 4. 0x12345678 on a big-endian system If we view 0x12345678 as two half words, 0x1234 and 0x5678, we would see the following in a big endian machine: Table 5. 0x12345678 as two half words on a big-endian system However, on a little endian machine, the word 0x12345678 will be laid out as follows: Table 6. 0x12345678 on a little-endian system Similarly, the two half-words 0x1234 and 0x5678 would look like the following: Table 7. 0x12345678 as two half words on a little-endian system The following example illustrates the difference in byte order between big endian and little endian machines. The C program below will print out "Big endian" when compiled and run on a big endian machine, and "Little endian" when compiled and run on a little endian machine. Listing 2. Big endian vs. 
little endian #include <stdio.h> main () { int i = 0x12345678; if (*(char *)&i == 0x12) printf ("Big endian\n"); else if (*(char *)&i == 0x78) printf ("Little endian\n"); } Endianism is important when: - Bit masks are used - Indirect pointers address portions of an object We have bit fields in C and C++ that help to deal with endian issues. I recommend the use of bit fields rather than mask fields or hexadecimal constants. There are several functions that are used to convert 16-bit and 32-bit from "host-byte-order" to "net-byte-order." For example, htonl (3), ntohl (3) are used to convert 32-bit integers. Similarly, htons (3), ntohs (3) are used for 16-bit integers. However, there is no standard set of functions for 64-bit. But Linux provides the following macros on both big and little endian systems: - bswap_16 - bswap_32 - bswap_64 Type definitions I recommend that you do not code your applications with the native C/C++ data types that change size on a 64-bit operating system, but rather use type definitions or macros that explicitly call out the size and type of data contained in a variable. Some type definitions help make the code more portable. ptrdiff_t: A signed integer type that results from subtracting two pointers. size_t: An unsigned integer and the result of the sizeofoperator. This is used when passing parameters to functions such as malloc (3), and returned from several functions such as fred (2). int32_t, uint32_tetc.: Define integer types of a predefined width. intptr_tand uintptr_t: Define integer types to which any valid pointer to void can be converted. Example 1: The 64-bit return value from sizeof in the following statement is truncated to 32-bits when assigned to bufferSize. int bufferSize = (int) sizeof (something); The solution is to cast the return value using size_t and assign it to bufferSize declared as size_t as shown below: size_t bufferSize = (size_t) sizeof (something); Example 2: On a 32-bit system, int and long are of the same size. Due to this, some developers use them interchangeably. This can cause pointers to be assigned to int and vice-versa. But on a 64-bit system, assigning a pointer to an int causes the truncation of the high-order 32-bits. The solution is to store pointers as pointer types or the special types defined for this purpose, such as intptr_t and uintptr_t. Bit shifting Untyped integral constants are of type (unsigned) int. This might lead to unexpected truncation while shifting. For example, in the following code snippet, the maximum value for a can be 31. This is because the type of 1 << a is int. long t = 1 << a; To get the shift done on a 64-bit system, 1L should be used as shown below: long t = 1L << a; Formatting strings The function printf (3) and related functions can be a major source of problems. For example, on 32-bit platforms, using %d to print either an int or a long will usually work, but on 64-bit platforms, this would truncate a long to its least significant 32-bits. The proper specification for a long is %ld. Similarly, when a small integer (char, short, int) is passed into printf (3), it will be widened to 64-bits and the sign will be extended if appropriate. In the example below, the printf (3) assumes that a pointer is 32-bits. char *ptr = &something; printf (%x\n", ptr); The above code snippet will fail on 64-bit systems and will display only the lower 4 bytes. The solution for this is to use the %p specification as shown below, which will work fine on both 32-bit and 64-bit systems. 
char *ptr = &something;
printf ("%p\n", ptr);

Function parameters

There are a few things that you need to remember while passing parameters to functions:
- In the case where the data type of the parameter is defined by a function prototype, the parameter is converted to that type according to the standard rules.
- When the type of the parameter is not specified, the parameter is promoted to the larger type.
- On a 64-bit system, integral types are converted to 64-bit integral types, and single precision floating point types are promoted to double precision.
- If a return value is not otherwise specified, the default return value for a function is int.

The problem arises when passing the sum of signed and unsigned ints as long. Consider the following case:

Listing 3. Passing the sum of signed and unsigned ints as long

long function (long l);

int main ()
{
    int i = -2;
    unsigned k = 1U;
    long n = function (i + k);
}

The above code snippet will fail on 64-bit systems, because the expression (i + k) is an unsigned 32-bit expression, and when promoted to a long, the sign doesn't extend. The solution is to cast one of the operands to its 64-bit type.

There is another problem on register-based systems where registers are used to pass parameters to functions rather than the stack. Consider the following example:

float f = 1.25;
printf ("The hex value of %f is %x", f, f);

On a stack-based system, the appropriate hexadecimal value is printed. But on a register-based system, the hexadecimal value is read from an integer register, not the floating point register. The solution is to cast the address of the floating point variable to a pointer to an int, which is then de-referenced as shown below:

printf ("The hex value of %f is %x", f, *(int *)&f);

Conclusion

Major hardware vendors have recently expanded their 64-bit offerings because of the performance, value, and scalability that 64-bit platforms can provide. The constraints of 32-bit systems, particularly the 4GB virtual memory ceiling, have spurred companies to consider migrating to 64-bit platforms. Knowing how to port applications to comply with a 64-bit architecture can help you write portable and efficient code.

Resources

Learn
- 64-Bit Programming Models: Why LP64? provides more detail on the various 64-bit programming models and argues for LP64.
- Read about the Year 2038 problem that 32-bit systems have in Wikipedia.
- Read "Porting enterprise apps from UNIX to Linux" (developerWorks, February 2005) for tips and insights on porting large, multithreaded applications to Linux.
- "Porting Intel applications to 64 bit Linux PowerPC" gives advice on some of the issues to consider when porting Linux from IA32 to PowerPC.
- The Linux distributions site on Linux Online (linux.org) offers an extensive listing of distributions, including those for 64-bit systems.
- The developerWorks Linux on Power Architecture developer's corner is a resource for programmers and developers writing applications for Linux running on POWER-based hardware.
- penguinppc.org is a community site devoted to users of Linux on PowerPC.
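Not taken from the article: a small standard-C sketch that makes the data-model differences discussed above visible at run time. It relies only on the <stdint.h> and <inttypes.h> format macros plus %zu, which print 64-bit and size_t values portably on both ILP32 and LP64 systems.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int64_t big = INT64_C(1) << 40;   /* portable 64-bit constant and shift */

    /* %zu is correct for size_t and PRId64 for int64_t on both data models,
     * avoiding the truncation problems of a hard-coded %d or %ld. */
    printf("sizeof(int)=%zu sizeof(long)=%zu sizeof(void*)=%zu\n",
           sizeof(int), sizeof(long), sizeof(void *));
    printf("1 << 40 = %" PRId64 "\n", big);
    return 0;
}

On an ILP32 build this prints 4/4/4 for the three sizes; on an LP64 build it prints 4/8/8, while the 64-bit shift and the format strings remain correct in both cases.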
http://www.ibm.com/developerworks/linux/library/l-port64/index.html
CC-MAIN-2014-42
en
refinedweb
Reading from and Writing to the Registry
Visual Studio .NET 2003

When programming in Visual Basic .NET, you can choose to access the registry via either the functions provided by Visual Basic .NET or the registry classes of the .NET Framework. The registry hosts information from the operating system as well as information from applications hosted on the machine. Working with the registry may compromise security by allowing inappropriate access to system resources or protected information.

In This Section
- Reading from and Writing to the Registry Using the Microsoft.Win32 Namespace - Explains how to use the Registry and RegistryKey classes in the Microsoft.Win32 namespace of the .NET Framework to read from and write to a registry key.
- Reading from and Writing to the Registry Using Visual Basic Run-Time Functions - Explains how to use the Visual Basic .NET functions, DeleteSetting, GetSetting, GetAllSettings, and SaveSetting, to access the registry.

Related Sections
- Registry Access Changes in Visual Basic .NET - Provides an explanation of differences between registry access in Visual Basic 6.0 and Visual Basic .NET.
- Registry Class - Presents an overview of the Registry class along with links to individual keys and members.
http://msdn.microsoft.com/en-us/library/85t3c3hf(v=vs.71).aspx
CC-MAIN-2014-42
en
refinedweb
Improve Your App’s Performance with Memcached One of the easiest ways to improve your application's performance is by putting a caching solution in front of your database. In this tutorial, I'll show you how to use Memcached with Rails, Django, or Drupal. Memcached is an excellent choice for this problem, given its solid history, simple installation, and active community. It is used by companies big and small, and includes giants, such as Facebook, YouTube, and Twitter. The Memcached site, itself, does a good job of describing Memcached as a "Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load." In general, database calls are slow. In general, database calls are slow, since the query takes CPU resources to process and data is (usually) retrieved from disk. On the other hand, an in-memory cache, like Memcached, takes very little CPU resources and data is retrieved from memory instead of disk. The lightened CPU is an effect of Memcached's design; it's not queryable, like an SQL database. Instead, it uses key-value pairs to retrieve all data and you cannot retrieve data from Memcached without first knowing its key. Memcached stores the key-value pairs entirely in memory. This makes retrieval extremely fast, but also makes it so the data is ephemeral. In the event of a crash or reboot, memory is cleared and all key-value pairs need to be rebuilt. There are no built-in high-availability and/or fail-over systems within Memcached. However, it is a distributed system, so data is stored across multiple nodes. If one node is lost, the remaining nodes carry on serving requests and filling in for the missing node. Installing Memcached Installing Memcached is a fairly simple process. It can be done through a package manager or by compiling it from source. Depending on your distribution, you may want to compile from source, since the packages tend to fall a bit behind. # Install on Debian and Ubuntu apt-get install memcached # Install on Redhat and Fedora yum install memcached # Install on Mac OS X (with Homebrew) brew install memcached # Install from Source get tar -zxvf memcached-1.x.x.tar.gz cd memcached-1.x.x ./configure make && make test sudo make install You'll want to configure Memcached for your specific needs, but, for this example, we'll just get it running with some basic settings. memcached -m 512 -c 1024 -p 11211 -d At this point, you should be up and running with Memcached. Next, we'll look at how to use it with Rails, Django and Drupal. It should be noted that Memcached is not restricted to being used within a framework. You can use Memcached with many programming languages through one of the many clients available. Using Memcached with Rails 3 Rails 3 has abstracted the caching system so that you can change the client to your heart's desire. In Ruby, the preferred Memcached client is Dalli. # Add Dalli to your Gemfile gem 'dalli' # Enable Dalli in config/environments/production.rb: config.perform_caching = true config.cache_store = :dalli_store, 'localhost:11211' In development mode, you will not normally hit Memcached, so either start Rails in production mode with rails server -e production, or add the above lines to your config/environments/development.rb. 
The simplest use of the cache is through write/ read methods to retrieve data: Rails.cache.write 'hello', 'world' #=> true Rails.cache.read 'hello' #=> "world" The most common pattern for Rails caching is using fetch. It will attempt to retrieve the key (in this case, expensive-query) and return the value. If the key does not exist, it will execute the passed block and store the result in the key. Rails.cache.fetch 'expensive-query' do results = Transaction. joins(:payment_profile). joins(:order). where(':created > orders.created_at', :created => Time.now) end # ... more code working with results In the example above, the problem is cache expiry. (One of the two hard problems in computer science.) An advanced, very robust solution is to use some part of the results in the cache key itself, so that if the results change, then the key is expired automatically. users = User.active users.each do |u| Rails.cache.fetch "profile/#{u.id}/#{u.updated_at.to_i}" do u.profile end end Here, we're using the epoch of updated_at as part of the key, which gives us built in cache expiration. So, if the user.updated_at time changes, we will get a cache miss on the pre-existing profile cache and write out a new one. In this case, we'll need to update the user's updated_at time when their profile is updated. That is as simple as adding: class Profile < ActiveRecord::Base belongs_to :user, touch: true end Now, you have self-expiring profiles without any worry about retrieving old data when the user is updated. It's almost like magic! Using Memcached with Django Once you have Memcached installed, it is fairly simple to access with Django. First, you'll need to install a client library. We'll use pylibmc. # Install the pylibmc library pip install pylibmc # Configure cache servers and binding settings.py CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache', 'LOCATION': '127.0.0.1:11211', } } Your app should be up and running with Memcached now. Like other libraries, you'll get basic getter and setter methods to access the cache: cache.set('hello', 'world') cache.get('hello') #=> 'world' You can conditionally set a key if it does not already exist with add. If the key already exists, the new value will be ignored. cache.set('hello', 'world') cache.add('hello', 'mundus') cache.get('hello') #=> 'world' From the Python Decorator Library, you can create create a memoized decorator to cache the results of a method call. import collections import functools class memoized(object): '''Decorator. Caches a function's return value each time it is called. If called later with the same arguments, the cached value is returned (not reevaluated). ''' def __init__(self, func): self.func = func self.cache = {} def __call__(self, *args): if not isinstance(args, collections.Hashable): # uncacheable. a list, for instance. # better to not cache than blow up. return self.func(*args) if args in self.cache: return self.cache[args] else: value = self.func(*args) self.cache[args] = value return value def __repr__(self): '''Return the function's docstring.''' return self.func.__doc__ def __get__(self, obj, objtype): '''Support instance methods.''' return functools.partial(self.__call__, obj) @memoized def fibonacci(n): "Return the nth fibonacci number." if n in (0, 1): return n return fibonacci(n-1) + fibonacci(n-2) print fibonacci(12) Decorators can give you the power to take most of the heavy lifting out of caching and cache expiration. 
Be sure to take a look at the caching examples in the Decorator Library while you are planning your caching system. Using Memcached with Drupal Getting started with Memcached in Drupal starts with installing the PHP extension for Memcached. # Install the Memcached extension pecl install memcache <?php // Configure Memcached in php.ini [memcache] memcache.hash_strategy = consistent memcache.default_port = 11211 ?> <?php // Tell Drupal about Memcached in settings.php $conf['cache_backends'][] = 'sites/all/modules/contrib/memcache/memcache.inc'; $conf['cache_default_class'] = 'MemCacheDrupal'; $conf['memcache_key_prefix'] = 'app_name'; $conf['memcache_servers'] = array( '10.1.1.1:11211' => 'default', '10.1.1.2:11212' => 'default' ); ?> You'll need to restart your application for all the changes to take effect. As expected, you'll get the standard getter and setter methods with the Memcached module. One caveat is that cache_get returns the cache row, so you'll need to access the serialized data within it. <?php cache_set('hello', 'world'); $cache = cache_get('hello'); $value = $cache->data; #=> returns 'world' ?> And just like that, you've got caching in place in Drupal. You can build custom functions to replicate functionality such as cache.fetch in Rails. With a little planning, you can have a robust caching solution that will bring your app's responsiveness to a new level. And You're Done While a good caching strategy takes time to refine, it shouldn't stop you from getting started. Implementing a caching system can be fairly straightforward. With the right configuration, a caching solution can extend the life of your current architecture and make your app feel snappier than it ever has before. While a good caching strategy takes time to refine, it shouldn't stop you from getting started. As with any complex system, monitoring is critical. Understanding how your cache is being utilized and where the hotspots are in your data will help you improve your cache performance. Memcached has a quality stats system to help you monitor your cache cluster. You should also use a tool, like New Relic to keep an eye on the balance between cache and database time. As an added bonus, you can get a free 'Data Nerd' tshirt when you sign-up and deploy.
http://code.tutsplus.com/tutorials/improve-your-apps-performance-with-memcached--net-26768
CC-MAIN-2014-42
en
refinedweb
On Sunday 20 September 2009 14:00:40 Diego Biurrun wrote: > On Sun, Sep 20, 2009 at 07:12:44PM +0200, Reimar D?ffinger wrote: > > On Sun, Sep 20, 2009 at 06:53:36PM +0200, Diego Biurrun wrote: > > > There's an ugly preprocessor gcc 3.3 workaround in swscale_template.c: > > > > > > /* GCC 3.3 makes MPlayer crash on IA-32 machines when using "g" > > > operand here, which is needed to support GCC 4.0. */ > > > #if ARCH_X86_64 && ((__GNUC__ > 3) || (__GNUC__ == 3 && > > > __GNUC_MINOR__ >= 4)) > > > > > > :: "m" (src1), "m" (dst), "g" (dstWidth), "m" > > > :: (xInc_shr16), "m" (xInc_mask), > > > > > > #else > > > > > > :: "m" (src1), "m" (dst), "m" (dstWidth), "m" > > > :: (xInc_shr16), "m" (xInc_mask), > > > > > > #endif > > > > > > At the very least it should be updated to use the > > > AV_GCC_VERSION_AT_LEAST macro from libavutil. However, I would prefer > > > to get rid of it completely. If I understand the comment correctly, > > > deleting the whole #else clause would be the way to achieve this. > > > Since I know little enough assembler to have no real idea what "g" and > > > "m" operands are all about I'd like to hear an informed opinion. > > > >. Not true. -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.
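For readers who, like the poster above, are unsure what the "g" and "m" operand constraints mean: below is a minimal illustrative GCC extended-asm sketch (x86/x86-64, not taken from swscale). "m" forces the operand to live in addressable memory, while "g" lets the compiler choose a register, a memory location, or an immediate, which is the extra flexibility the gcc-4.x code path in the snippet relies on.

#include <stdio.h>

/* "m": the operand must be in memory; the compiler substitutes a memory
 * reference for %1. */
static long load_m(const long *p)
{
    long out;
    __asm__ ("mov %1, %0" : "=r"(out) : "m"(*p));
    return out;
}

/* "g": general operand; the compiler may pass a register, a memory
 * reference, or an immediate for %1, whichever it prefers. */
static long load_g(long value)
{
    long out;
    __asm__ ("mov %1, %0" : "=r"(out) : "g"(value));
    return out;
}

int main(void)
{
    long x = 42;
    printf("%ld %ld\n", load_m(&x), load_g(7));
    return 0;
}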
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-September/079806.html
CC-MAIN-2014-42
en
refinedweb
Technote (troubleshooting) Problem(Abstract) During an install of RHEL 6 ppc64 onto a Power 7 system, the system experiences the following when booting from the DVD installation media. Note that this may occur on any P7 hardware. Initalizing network drop monitor service RAMDISK: incomplete write (4318 != 32768) write error List of all partitions: No filesystem could mount root, tried: iso9660 Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0) Call Trace: [c000001ecbe3fc30] [c000000000012eb4] .show_stack+0x74/0x1c0 (unreliable) [c000001ecbe3fce0] [c0000000005a640c] .panic+0x80/0x1b4 [c000001ecbe3fd70] [c00000000082152c] .mount_block_root+0x2e8/0x324 [c000001ecbe3fe50] [c0000000008217b4] .prepare_namespace+0x1c4/0x218 [c000001ecbe3fee0] [c000000000820574] .kernel_init+0x348/0x374 [c000001ecbe3ff90] [c0000000000323f4] .kernel_thread+0x54/0x70 Rebooting in 180 seconds.. This issue has also been reported installing RHEL 6 onto a p7 system. Resolving the problem To resolve this issue: 1) When your system comes up to SMS, type 8 to get to the firmware prompt. 2) Enter the following two commands: dev nvram wipe-nvram 3) Reboot and boot from DVD.
http://www-01.ibm.com/support/docview.wss?uid=isg3T1012911
CC-MAIN-2014-42
en
refinedweb
What is Abstract Factory Pattern? Provide an interface for creating families of related or dependent objects without specifying their concrete classes. Wikipedia says: A software design pattern, the Abstract Factory Pattern provides a way to encapsulate a group of individual factories that have a common theme. also read: Intent: Define an interface for creating an object,but let subclasses decide which class to instantiate. Factory method lets a class instantiation to subclasses. Motto: Hence depend on abstractions and not on concrete classes.Intent of Abstract Factory is to create families of related objects without having to depend on their concrete classes.A Simple Abstract Factory is a way to decouple our clients from concrete classes.Abstract Factory relies on Object Composition.Object creation is implemented in methods exposed in the factory interface.[GoF,162] Purpose: To create a contract for creating families of related or dependent objects without having to specify their concrete classes. Introduction: Suppose we plan to manage address and telephone information in our application(Example Personal Informational manager) system.We will initially produce classes to represent Address and Telephone number data.Code these classes so that they store the relavent information and enforce business rules about their format.Ex: In few indian cities all telephone numbers are limited to 7 digits.Shortly we get another requirement to manage this application for another city/country.So we modify our logic in the Address and PhoneNumber to satisfy rules for another city/country and all of a sudden after managing for many countries sees our classes get bloated with code and difficult to manage. With every country added, nightmare of adding functional logic to the classes,extend the Address to USAddress and PhoneNumber to USPhoneNumber for country US.Instances of both classes are created by USAddressFactory.This gives greater freedom to extend your code without having to make major structural modifications in the rest of the system. Applicability: Use the Abstract Factory Pattern when: - The client should be independent of how the products are created. - Application should be configured with one of multiple families of products. - Objects needs to be created as a set,in order to be compatible. - You want to provide a collection of classes and you want to reveal just their contracts and their relationships,not their implementations. Description:. We typically use the following to implement the Abstract Factory Pattern: - AbstractFactory – An abstract class or interface that defines the create methods for abstract products. - AbstractProduct – An abstract class or interface that Abstract Factory helps to increase the overall flexibility of the application.Flexibility manifests itself both during runtime and design time.During design we donot have to predict all future uses of this application.Instead we create a generic framework,and develop implementations independently from the rest of the application.At runtime,application can easily integrate new features and resources. A further benefit of this pattern is it simplifies testing the rest of the application. As discussed - Factory Method – Used to implement Abstract Factory. - Singleton – Often Used in Concrete Factory. - Data Access Object – The Data Access Object pattern can use the Abstract Factory Pattern to add flexibility in creating Database-specific factories. 
Implementation Issues How many instances of a particular concrete factory should there be? - An application typically only needs a single instance of a particular concrete factory - Use the Singleton pattern for this purpose Below example describes implementation of Abstract Factory. package patterns; interface AddressFactory { public Address createAddress(); public PhoneNumber createPhoneNumber(); } abstract class Address { private String street; private String city; private String region; private String postalCode; public static final String EOL_STRING =System.getProperty("line.separator"); public static final String SPACE = " "; public String getStreet() { return street; } public String getCity() { return city; } public String getPostalCode() { return postalCode; } public String getRegion() { return region; } public abstract String getCountry(); public String getFullAddress() { return street + EOL_STRING + city + SPACE + postalCode + EOL_STRING; } } abstract class PhoneNumber { private String phoneNumber; abstract String getCountryCode(); public String getPhoneNumber() { return phoneNumber; } public void setPhoneNumber(String phoneNumber) { this.phoneNumber = phoneNumber; } } class USAddressFactory implements AddressFactory{ public Address createAddress(){ return new USAddress(); } public PhoneNumber createPhoneNumber(){ return new USPhoneNumber(); } }; } } class USPhoneNumber extends PhoneNumber{ private static final String COUNTRY_CODE = "01"; private static final int NUMBER_LENGTH = 10; public String getCountryCode(){ return COUNTRY_CODE; } public void setPhoneNumber(String newNumber){ if (newNumber.length() == NUMBER_LENGTH){ super.setPhoneNumber(newNumber); } } } class FrenchAddressFactory implements AddressFactory{ public Address createAddress(){ return new FrenchAddress(); } public PhoneNumber createPhoneNumber(){ return new FrenchPhoneNumber(); } } class FrenchAddress extends Address{ private static final String COUNTRY = "FRANCE"; public String getCountry(){ return COUNTRY; } public String getFullAddress(){ return getStreet() + EOL_STRING + getPostalCode() + SPACE + getCity() + EOL_STRING + COUNTRY + EOL_STRING; } }); } } } public class AbstractFactoryPattern { public static void main(String[] args) { Address fd=new FrenchAddressFactory().createAddress(); System.out.println("French Address:" + fd.getFullAddress()); } } […] Abstract Factory Pattern […]
http://www.javabeat.net/abstract-factory-pattern/
CC-MAIN-2014-42
en
refinedweb
Component.BroadcastMessage

Calls the method named methodName on every MonoBehaviour in this game object or any of its children. The example below calls ApplyDamage with a value of 5 on every MonoBehaviour in the game object and its children.

import UnityEngine
import System.Collections

public class ExampleClass(MonoBehaviour):

    def ApplyDamage(damage as float) as void:
        print(damage)

    def Example() as void:
        BroadcastMessage('ApplyDamage', 5.0F)
http://docs.unity3d.com/ScriptReference/Component.BroadcastMessage.html
CC-MAIN-2014-42
en
refinedweb
player_actarray_position_cmd Struct Reference Command: Joint position control (PLAYER_ACTARRAY_CMD_POS) More... #include <player_interfaces.h> Detailed Description Command: Joint position control (PLAYER_ACTARRAY_CMD_POS) Tells a joint/actuator to attempt to move to a requested position. Member Data Documentation The joint/actuator to command. The position to move to. The documentation for this struct was generated from the following file:
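The extracted reference above lost the member identifiers, so the following C sketch is purely illustrative: the field names joint and position are assumptions inferred from the member descriptions ("The joint/actuator to command", "The position to move to") and are not confirmed against player_interfaces.h.

#include <string.h>

/* Stand-in struct for illustration only; the real definition lives in
 * <player_interfaces.h> and its member names and types may differ. */
struct actarray_position_cmd_sketch {
    int   joint;     /* assumed: the joint/actuator to command */
    float position;  /* assumed: the position to move to */
};

/* Build a PLAYER_ACTARRAY_CMD_POS-style command asking one joint to move. */
static struct actarray_position_cmd_sketch make_position_cmd(int joint, float position)
{
    struct actarray_position_cmd_sketch cmd;
    memset(&cmd, 0, sizeof cmd);
    cmd.joint = joint;
    cmd.position = position;
    return cmd;
}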
http://playerstage.sourceforge.net/doc/Player-3.0.2/player/structplayer__actarray__position__cmd.html
CC-MAIN-2014-42
en
refinedweb
Pebbles::Path Provides searchable, parseable pebbles-compliant UID paths, e.g. (such as a.b.*) for Active Record models. Requirements Requires ActiveModel. The target class (the one that will contain the path property) needs to have fields in the DB for: label_0 label_1 label_2 label_3 label_4 label_5 label_6 label_7 label_8 label_9 Installation Add this line to your application's Gemfile: gem 'pebble_path' And then execute: $ bundle Or install it yourself as: $ gem install pebble_path Create a migration for the table that you want to put the paths on, e.g. def self.up labels = (0..9).map { |i| "label_#{i}".to_sym } create_table :locations do |t| labels.each do |label| t.text label end t. end add_index :locations, labels, :unique => true, :name => 'index_locations_on_labels' end Usage Include the Pebbles::Path module in the ActiveRecord model that has the labels class Location < ActiveRecord::Base include Pebbles::Path end Contributing - Fork it - Create your feature branch ( git checkout -b my-new-feature) - Commit your changes ( git commit -am 'Add some feature') - Push to the branch ( git push origin my-new-feature) - Create new Pull Request
https://www.rubydoc.info/gems/pebbles-path/0.0.3
CC-MAIN-2021-04
en
refinedweb
How to Modernize a Django Index Definition with Zero Downtime2020-07-27 If you’ve read the Django documentation for Model.Meta.index_together recently, you may have noticed this note: Use the indexesoption instead. The newer indexesoption provides more functionality than index_together. index_togethermay be deprecated in the future. Django historically provided index control for a single field with Field(db_index=True), and for multiple fields in Meta.index_together. These are good for specifying indexes for one or more fields, but they don’t give you access to the full power of database indexes. The Meta.indexes option was added in Django 1.11 (2017) to allow use of more index features through the Index() class. Initially Index() added support for indexes with descending ordering. It now supports db_tablespace to control storage, opclasses to use PostgreSQL’s various operator classes for indexes, and condition to create partial indexes that don’t contain every row. “Upgrading” So, how do you “upgrade” from Field(db_index=True) or Meta.index_together to Meta.indexes? Well first, this isn’t necessary. Neither feature is actually deprecated, and they’re not likely to be either. If you have an old project using either Field(db_index=True) or Meta.index_together, you’re best leaving it in place and using indexes for new indexes. But this change is a good example of how to make a “zero downtime” migration, with low risk. It can be a good be a nice exercise for learning more about Django’s migrations. Let’s take this model: from django.db import models class Status(models.TextChoices): UNPUBLISHED = 'UN', 'Unpublished' PUBLISHED = 'PB', 'Published' class Book(models.Model): status = models.CharField( max_length=2, choices=Status.choices, default=Status.UNPUBLISHED, ) title = models.CharField(max_length=200) class Meta: index_together = [["status", "title"]] (N.B. the Status class is using Django 3.0’s new enumeration types.) Our model uses index_together, which we’ll change to use indexes. The process should be similar to change Field(db_index=True) to use indexes. We’ll look at two methods. The first uses a rebuild of the index, which can take some time to run on large tables. The second retains the existing index for “zero downtime.” Note we’ll not be changing the definition of the index at all. If you want to upgrade an index to use any of the extra features of Index(), such as condition, databases typically cannot change the index in-place. You’ll need to add a new index in one migration, then remove the original index in a second migration. Rebuilding Method To rebuild, we’d need only to drop index_together and add indexes with an equivalent Index() defined: from django.db import models class Book(models.Model): status = models.CharField( max_length=2, choices=Status.choices, default=Status.UNPUBLISHED, ) title = models.CharField(max_length=200) class Meta: indexes = [ models.Index( name="core_book_status_title_idx", fields=["status", "title"], ) ] When we run makemigrations, we’ll end up with a migration file like this: from django.db import migrations, models class Migration(migrations.Migration): dependencies = [ ("core", "0001_initial"), ] operations = [ migrations.AlterIndexTogether(name="book", index_together=set()), migrations.AddIndex( model_name="book", index=models.Index( fields=["status", "title"], name="core_book_status_title_idx" ), ), ] This is functional. 
But if we run sqlmigrate, we’ll see that it does DROP INDEX followed by CREATE INDEX: $ python manage.py sqlmigrate core 0002 BEGIN; -- -- Alter index_together for book (0 constraint(s)) -- DROP INDEX "core_book_status_title_6099efdb_idx"; -- -- Create index core_book_status_title_idx on field(s) status, title of model book -- CREATE INDEX "core_book_status_title_idx" ON "core_book" ("status", "title"); COMMIT; This isn’t great for large tables, where creating an index might take hours. Additionally PostgreSQL and SQLite will lock the table for writes whilst they make the new index. On PostgreSQL, we could swap AddIndex for AddIndexConcurrently (Django 3.0+) to prevent the lock. Let’s look at the second method that avoids the work of recreating the index. Zero Downtime Method To achieve zero down, we need to add the new Index() definition using the existing index name, and then write a migration that tells Django nothing needs to change in the database. The first thing we need is the name that Django auto-generated for the index. This combines the table name, included field names, and a hash. The hashing algorithm has changed a couple of times in Django’s history versions, so to be safe we’ll retrieve the index name from the database. We can do this with a little SQL in dbshell. For example, on SQLite, we can run the .indexes command to list the indexes on our model’s table, and pick ours from the list: python manage.py dbshell SQLite version 3.24.0 2018-06-04 14:10:15 Enter ".help" for usage hints. sqlite> .indexes core_book ... core_book_status_title_6099efdb_idx ... sqlite> On MariaDB/MySQL, the query to run is: SHOW INDEXES FROM core_book; On PostgreSQL, the query to run is: SELECT tablename, indexname, indexdef FROM pg_indexes WHERE schemaname = 'public' AND tablename = 'core_book' ORDER BY tablename, indexname; It’s worth checking the index name is identical across all your environments (development, staging, production). The name might differ between environments if their databases were initially created with different Django versions, and thus different hashing algorithms. If the names do differ, we’d probably want to rename the index on all environments to match production. Second, we want to move this into an Index() definition, inside Meta.indexes, using the found name: from django.db import models class Book(models.Model): status = models.CharField( max_length=2, choices=Status.choices, default=Status.UNPUBLISHED, ) title = models.CharField(max_length=200) class Meta: indexes = [ models.Index( name="core_book_status_title_6099efdb_idx", fields=["status", "title"], ) ] If we run the check command at this point, we’ll see an error: $ python manage.py check SystemCheckError: System check identified some issues: ERRORS: core.Book: (models.E034) The index name 'core_book_status_title_6099efdb_idx' cannot be longer than 30 characters. System check identified 1 issue (0 silenced). The new Index() restricts its names to 30 characters to be compatible with Oracle. This is fair enough, and especially applicable to Django core and third party packages which should be compatible with all database backends. If you’re using Oracle, the old index_together name should be < 30 characters. For other backends we have more characters to work with: - SQLite - 1,000,000 - PostgreSQL - 63 - MariaDB/MySQL - 64 In this case, we can safely disable the check. 
Do this by adding the check ID to the SILENCED_SYSTEM_CHECKS setting: SILENCED_SYSTEM_CHECKS = [ # Allow index names >30 characters, because we aren’t using Oracle "models.E034", ] This is a little bit dangerous as it removes the check for every index. However tests should discover if any future index has an overly long index name, because the database should raise an error during migrations. (N.B. there’s an open ticket to allow more granular system check silencing.) Rerunning check will show it is now silenced: $ python manage.py check System check identified no issues (1 silenced). We should then run makemigrations with flags to make a new migration: $ python manage.py makemigrations core --name book_indexes Migrations for 'core': index_change/core/migrations/0002_book_indexes.py - Alter index_together for book (0 constraint(s)) - Create index core_book_status_title_6099efdb_idx on field(s) status, title of model book (We passed --name to avoid the automatic migration name.) Our new migration is identical to the previous downtime-inducing one. We need to modify it to allow Django’s migrations to consider these changes as applied, without running any SQL. Enter SeparateDatabaseAndState. This operation class takes two lists of migration operations. database_operations is compiled to SQL and run on the database. state_operations is applied to the in-memory version of models. This separation allows us to perform some operations in the database, and tell Django “another” thing happened to the actual model classes. It’s useful for changes that can’t be auto-detected correctly, for example changing a ManyToManyField to use a through model. In our case, we want to do nothing to the database, so we provide database_operations=[]. In the state layer, we want to tell Django that we’ve “removed” index_together and “added” the new index in the database. To do this, we can move the auto-generated AlterIndexTogether and AddIndex operations into our state_operations list. Our migration ends up like this: from django.db import migrations, models class Migration(migrations.Migration): dependencies = [ ("core", "0001_initial"), ] operations = [ migrations.SeparateDatabaseAndState( database_operations=[], state_operations=[ migrations.AlterIndexTogether( name="book", index_together=set(), ), migrations.AddIndex( model_name="book", index=models.Index( fields=["status", "title"], name="core_book_status_title_6099efdb_idx", ), ), ], ), ] We can verify both sets of operations before running the migration. First, we can check database_operations really does nothing with sqlmigrate: $ python manage.py sqlmigrate core 0002 BEGIN; -- -- Custom state/database change combination -- COMMIT; Great - no SQL statements there, except the normal BEGIN and COMMIT. Second, we can check that state_operations does tell Django our migrations match the latest definition of our models. We do this by running makemigrations --dry-run to ensure the autodetector doesn’t find anything to change: $ python manage.py makemigrations core --dry-run No changes detected in app 'core' Great! This should now be ready to deploy, after our normal test suite passes :) Fin I hope this helps you understand this improved way of defining indexes, and how to write zero downtime migrations. —Adam Working on a Django project? Check out my book Speed Up Your Django Tests which covers loads of best practices so you can write faster, more accurate tests. Tags: django
https://adamj.eu/tech/2020/07/27/how-to-modernize-your-django-index-definitions/
CC-MAIN-2021-04
en
refinedweb
WSDL files define various aspects of SOAP messages: You may consider a WSDL file as a contract between the provider and the consumer of the service. To take a closer look at a WSDL file, create a new project and import a sample WSDL file: In SoapUI, click or select File > New SOAP Project In the dialog box, specify the following URL in the Initial WSDL field: Leave the default settings and click OK SoapUI will load the specified WSDL and parse its contents into the following object model: A WSDL can contain any number of services (the bindings). A binding exposes an interface for the specified protocol. If you want SoapUI to always use a remote WSDL file, set the Cache Definition project property to False. Double-click the service in the navigator to open the editor: The Overview tab contains general information on the WSDL file: its URL, target namespace, etc. The Service Endpoint tab contains endpoints for the interface: Besides endpoints specified in the WSDL file, you can add endpoints for the service. For each endpoint, you can specify the required authentication. The WSDL Content tab provides more details on the WSDL file. The left panel allows you to browse through the contents of the file. If the service contains several WSDL files, each file is shown in a separate tab. The toolbar contains the following options: Updates the service definition by using an external WSDL file. Note: In ReadyAPI, you can refactor your service. Refactoring updates your test to fit the updated definition. Download ReadyAPI Trial to try out this functionality. On the WS-I Compliance tab, you can validate your web service against the WS-I Basic Profile (see below). Since the initial creation of WSDL and SOAP, a multitude of standards have been created and embodied in the Web Services domain, making it hard to agree on exactly how these standards should be used in a Web Service Context. To make interoperability between different Web Service vendors easier, the Web Service Interoperability Organization (WS-I) has defined the WS-I Basic Profile - a set of rules mandating how the standards should be used. SoapUI is bundled with version 1.1 of the profile. Use it to check the conformance of a WSDL file and SOAP messages. To validate the WSDL Service: Double-click the service in the Navigator and switch to the WS-I Compliance tab Click to run validation - or - Right-click the service in the Navigator SoapUI will show the validation report: To validate SOAP messages: Open a SOAP request and send it Right-click within the XML panel of the response editor and select Check WS-I Compliance SoapUI generates the corresponding report that highlights any compliance errors for the current request/response message exchange. There are many web service development frameworks that allow you to generate code from a WSDL file. This can be either client code that calls operations specified in a WSDL file, or stubs for implementing the service itself. SoapUI provides a graphical interface for most frameworks.
To generate it: Right-click the service in the Navigator panel and select the desired framework from the Generate Code popup menu. For example, if you select the Apache CXF framework, you will see the following dialog: Specify the desired settings and click Generate. SoapUI will launch the corresponding command-line tool: Note: You must specify the path to the corresponding tool on the Tools page of SoapUI Preferences. The selected tool generates files in the specified folder:
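For readers who end up scripting against a WSDL rather than working only through generated stubs or the SoapUI GUI, the same contract can also be consumed directly from a dynamic language. The sketch below is not part of the SoapUI documentation; it assumes the third-party Python library zeep is installed, and the WSDL URL and operation name are placeholders:

# Hypothetical client: the URL and operation name are made up for illustration.
from zeep import Client

client = Client("http://example.com/sample-service?wsdl")
# zeep parses the WSDL into services, bindings and operations,
# much like the object model SoapUI builds when you create a project.
print(client.service.SayHello("world"))

Whichever route you take, the WS-I validation described above still applies to the WSDL itself.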
https://www.soapui.org/docs/soap-and-wsdl/working-with-wsdls/
CC-MAIN-2021-04
en
refinedweb
Introduction Before I start, I want to emphasize that this post is not about one particular project or any automation testers that I have worked with. I have seen this behavior in three recent projects, and nearly every automation tester that I have worked with has busted a gut to make this faulty machine work. I am fairly sure that a memo has gone out to every contract that I have worked on recently stipulating that a million automation tests are required to guarantee success. We must not stop to question the worth of these tests. We must protect them like our children. These tests must be written in selenium despite nearly everyone having a pretty grim experience due to the inherent known issues that I will state later. Selenium tests are insanely challenging to write, but we won’t let that hold us back, and instead, we will get our testers who have maybe come into programming late or are new to development, and we will get these less experienced developers to write these difficult tests. Selenium tests might be difficult to write, but they are straightforward to copy and paste, which will, of course, lead to all sorts of problems. We often hear, “If it moves, write a selenium test for it”. Automation tests must be written for the API, the frontend, the backend, the middle-end, the happy path, the sad path, the upside-down path, etc. We won’t have any time for manual testing, how could we? We have all these flakey selenium tests to write and maintain. We are already late for this sprint, and every story must have an automation test. After a year or so and an insanely long build, we will decide that this was a bit silly and delete them all or, worse, start again. Why does everyone still use Selenium despite the inherent problems? I think I would be closer to understanding the true nature of our existence if I could answer the above question but joking aside, why is the use of selenium so widespread? It does stagger me, but here are a few suggestions: - It is the industry standard, lots of online resources and a vast community to lean on - It works across multiple OS, and multiple languages, your language and platform of choice are more than likely covered - Cross-browser testing. Selenium supports all the major browsers so you could test on Chrome, Firefox, Safari, IE, Edge, and many more To be fair, the sudden surge of writing a million acceptance tests is not selenium’s fault. For my money, the correct number of automation tests is one happy path test, no sad paths or upside-down paths. This one test is a smoke test to ensure that our system is open for business. Unit tests and integration tests are cheaper to run, implement and maintain and should be the bulk of our tests, has everyone forgotten about the test pyramid? Selenium is not fit for purpose and here is why The problems with selenium can be expressed in one word, timing. Before we can even start writing code to assert that our test is correct, we need to ensure that whatever elements we need to interact with are visible and are in a state to accept simulated input. Remote APIs calls will need to have resolved, animations and spinners need to have concluded. The dynamic content that now makes up the majority of our apps will need to have finished rendering from the currently retrieved data of the API calls. So what do we do while this macabre pantomime of asynchronicity is occurring? 
How do we stop our tests from just finishing or bottoming out because a particular text input is disabled until an API call has finished or a beautiful SVG spinner overlay has put a veil of darkness over our virtual world? In layman’s terms, we wait for the HTML elements to be in a ready state, in selenium speak, we write many custom waitForXXXXX code helpers, e.g. waitForTheFullMoonAndTheWereWolvesHaveFinishedEating or more realistically… wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//input[@id='text3']"))); One of the worst crimes to commit is to use Thread.sleep. This is a heinous crime where a random number is plucked from thin air and used as a wild guess for when we think the UI is in a ready state. Please, never do this. Below are my all-time favorite selenium exceptions that I have found while wading through a CI build report: NoSuchElementException– move along, you’ll not find your input here ElementNotVisibleException– this cheeky scamp means you are tantalizingly close but not close enough, it is in the DOM, but you can’t do a single thing with it StaleElementReferenceException– the element has finished work for the day and gone to the pub. Please try again tomorrow TimeoutException– you could wait until the end of time and whatever you are trying to do is just not going to happen. You just rolled a seven Behold the flake One of the most soul-destroying moments that I have experienced is having a build fail due to a failing automation test only for it to magically pass by just rerunning the build again. This phenomenon or zombie automation test is often referred to as a flake. The main problem with the flake is that it is non-deterministic which means that a test can exhibit different behavior when executed with the same inputs at different times. You can watch the confidence in your regression test suite go up in smoke as the number of non-deterministic tests rises. A flakey test is more than likely down to timing, latency and the macabre opera of asynchronicity that we are trying to tame with our Thread.sleep and waitForAHero helpers that we need to keep writing to try and keep sane. Just think how much easier this would be if we could somehow make all this asynchronous programming go away and if our world started to behave linearly or synchronously. What a natural world to test we would have. Cypress.io sets out to do just that. Cypress.io – The ghost in the machine One of the main differences between cypress.io and selenium is that selenium executes in a process outside of the browser or device we are testing. Cypress executes in the browser and in the same run loop as the device under test. Cypress executes the vast majority of its commands inside the browser, so there is no network lag. Commands run and drive your application as fast as its capable of rendering. To deal with modern JavaScript frameworks with complex UI’s, you use assertions to tell Cypress what the desired state of your application. This is the main take away, and cypress has eliminated the main problem with selenium by executing in the same run loop as the device. Cypress takes care of waiting for DOM elements to appear. I repeat, Cypress takes care of all this waiting business. No Thread.sleep, no waitForTheMoon helper. Don’t you see what this means? To know how good this is, you have to have experienced the pain. Below are a few examples of cypress tests. 
One thing conspicuous by its absence is any timing code or obscene waitFor helpers. I like these tests; they clearly state their purpose and are not obfuscated by code that makes up for the limitations of the platform. Below are some tests I wrote to run the axe accessibility tool through cypress: import { AxeConfig } from "../support/axeConfig"; describe("Axe violations", () => { beforeEach(() => { cy.visit("/"); cy.injectAxe(); }); it("home page should have no axe violations", () => { cy.configureAxe(AxeConfig); cy.checkA11yAndReportViolations(); }); }); And here is a similar test using webdriver: // in e2e/home.test.js import assert from 'assert'; import { By, until } from 'selenium-webdriver'; import { getDriver, analyzeAccessibility, } from './helpers'; describe('Home page', () => { let driver; before(() => { driver = getDriver(); }); it('has no accessibility issues', async () => { await driver.get(``); // The dreaded wait until. Abandon hope await driver.wait(until.elementLocated(By.css('h1'))); const results = await analyzeAccessibility(); assert.equal(results.violations.length, 0); }); }); The main striking difference and the worrying thing to me is the latency: there are two await calls and the dreaded wait(until.elementLocated). This is a simple test, but the more interactions you have, the more waitFor helpers you will need, and the flakiness starts spreading. JavaScript all the way down Cypress is clearly aimed at the frontend developer. Installing cypress is a breeze, performed via your favorite package manager choice of npm or yarn. npm install cypress --save-dev It really could not be any easier. Compare that with downloading the chrome webdriver and friends in the world of selenium. There is no multi-language support like selenium. You can have any programming language you like as long as it is JavaScript or TypeScript. Cypress cons Of course, there are drawbacks, and some of them are notable so it would be remiss of me not to list these. - Cypress is relatively new, and it does not have the vast community that selenium does - No cross-browser testing, this is huge and will cause less adoption until it can be cured - As stated earlier, it’s JavaScript or bust. You won’t write cypress tests in the tired old static languages of C# and java - Because it runs inside the browser, you won’t be able to support multiple tabs - There is no cross-browser support other than Chrome and Electron - At this time of writing, there is no shadow DOM support The above items are, in some cases, insurmountable and will not be overcome. Will Cypress replace Selenium? As much as I would like to say yes, I have my doubts. There is an army of automation testers who have not known any other world than selenium, and it may be difficult to move away from soon.
Making everything synchronous eliminates a whole world of pain, and for this, I am firmly on board. This, however, is not the green light to write thousands of cypress tests. The bulk of our tests are unit tests with a layer of integration tests before we get to a few happy path automation tests. This, of course, is far too sensible a strategy ever to be adopted. 39 Replies to “Cypress.io: The Selenium killer” Write Selenium tests properly — with a proper class for pages that can encapsulate most of the complexities, and then additional tests become a breeze, fragility addressed in one place. Use the object oriented properties of those “old” enterprise languages. The author lost me anyway by saying “only test the happy path.” If you believe in a world where errors never happen, and want to irritate your customers who hit them even more, then go ahead and serve up broken web pages. I will stick with a well-tested system. Hi Erica, when I said only test the happy path, I meant only write acceptance tests for the happy path. The vast majority of our tests should be fine grained inexpensive unit tests with a layer of integration tests. Acceptance tests with selenium are too brittle and too expensive to maintain. The problem is brittle tests, not their usefulness. I assure you that testing of user experience during error handling matters. Other types of tests, while also important, are not enough. Fix your brittle tests — write a proper Selenium Page, encapsulate common user functions, and see how easy it is to write robust tests. Cypress.io. Just what Selenium used to be, like 15 years ago: a sandbox-contained JavaScript library, embedded in your page. Now, we use Webdriver/Wire Protocol technology. No multi-browser testing, regardless of how much HTML and JavaScript have evolved, still ends on products that work on Chrome but not on Firefox. I have seen this happening regularly for the last 20 years. Well written but my biggest complaint is that you make judgement with your title but don’t really make that point. It should be a question. Cypress has its uses but I don’t think it’s going to be the tool to end all tools, specifically selenium. Selenium written with custom attributes and proper encapsulated classes be they page level or component level can be a very effective and valuable tool still. Cypress is doing some sort of waiting, it’s just not explicitly done by the client wiring up the tests. What does Cypress do if it truly can’t find the element you are looking for? It probably waits up to a certain time and fails. I think that you didn’t understand the author. “Only test the happy path” – means that the UI should be automated as little as possible. Much more is to be covered in unit and integration tests, and then just automate the happy path with UI. I agree with this part. absolutely Milos. You got it I agree with the other commenter. Use page objects and proper wait logic and selenium isn’t that difficult. This article also left out a very big concern of mine for Cypress. Cypress uses semantic click and keystroke events through the DOM API, and doesn’t actually interact with the UI itself. For a quality acceptance test Cypress won’t work due to this constraint. Cypress is great for a front end developer integration test, but I cannot see its place for a quality group. cypress runs in the same run loop as the browser, there is no waiting, this is the kicker Hi Add, Cypress runs in the same run loop as the browser, there really is no waiting Hi Nick, I think wait logic is a bit of an oxymoron here.
Selenium executes at break neck speed and not anything remotely like a user. you simply have to add lots of waitForXXX helpers. network latency in the CI environment quickly lead to non deterministic tests. you are left with a bunch of brittle and hard to maintain tests that make change really difficult, hence my call for 1 happy path acceptance test is sensible but I can see rushing to call this heresy by people who make a living adding a mountain of these tests. So.. how does this compare to testcafe? I think you are blaming selenium for developers bad practices in most cases. The waits are there for a good reasons. Im sure there is a thread.sleep buried in your dlls. The title should be a question mark. The content has not convinced me that it is a selenium killer. From my point of view this article makes the wrong assumption from the start. It is not Selenium’s problems if tests are flaky – you need a good test framework to handle this well. Also he totally dismisses the point that Webdriver is now a W3C standard that vendors have to follow. Of course, if you have to test in Chrome only, it will be much faster as it is running inside of it. The same applies if you use an ios or android specific test framework as opposed to a generic one for all. How does that “logon error” test work? Clicking the submit button is likely to kick off some asynç work, but you check for the error message immediately after clicking it. That’s a pretty substantial list of drawbacks and the only pro that I take away from this article is that I don’t have to explicitly handle any waiting in my code. I’d gladly add the one-liner wait statements here and there as a tradeoff for that laundry list of drawbacks. Well said 🙂 Cypress waits for a page load event before firing the next enqueued command. In this case either a toast message is displayed and cypress will find it or the page would attempt to log in and be redirected back to a failed login page. He kind of glossed over how being in the same run-loop as the application gives Cypress access to the network. Thus it can ensure (unless your app doesn’t send a page load event) in most cases that next command waits for the proper state. Also in defense of a different way: POM abstraction in most large case test suites tends to be a detriment for upkeep and onboarding. I can’t tell you how many times an entire suite is thrown away because reverse engineering it can be maddening for a new SDET. So to those who think page objects solves the selenium problem are missing the point. Cypress stresses quick test creation and state management + easy ci/cd setup. well said Jordan Cypress.io is not a replacement or killer for Selenium WebDriver it’s just another cool and trending tool to use, which looks awesome and really have a lot of cool features. One of the big problems of cypress is inability to do cross browser testing, so currently only chrome browser and electron framework are supported. Selenium WebDriver on the other hand is a W3C standard () and all the major web-browsers creators supporting it by developing dedicated WebDrivers which are W3C standard compliant. Now cypress would like to support all the other browsers in the future, and they figured out that to do this they will need to use the same dedicated WebDrivers which Selenium is currently using ().teer. Cypress does have a lot of limitations, however some of which you mentioned aren’t or can be circumvented at least. 
You can upload files by trigger a drop event on the desired input field () And you can download files and make assertions on them You can catch the name of the downloaded file by catching the event And since cypress runs on node you can read directories and make assertions on the existence of the downloaded file in the appropriate folder. Imo cypress won’t replace selenium, but it is an easily graspable and implementable tool which can ease development for the developers, since they can debug the app during the tests The lack of Cross Browser support is a deal killer, and will also be a Cypress killer if they don;t figure it out. I agree with this. In my current project first end to end test was developed in 2 hours by newly joined SDET. With existing POM selenium framework it would have been min 2 days to understand the framework then start using existing “reusable” methods to develop first script. Cypress is not a Selenium killer, rather it adds to the stack of tools to use. Cypress is very good at quickly testing components on a page and proving that they work. You can then use Selenium to test the e2e flows through the web app hitting the top 3-5 flows that users take. With less flows through the app the Selenium framework can be smaller and therefore less complicated. If there is a failure you know it is to do with the component integration or (if you are doing cross-browser testing) the browser’s implementation as the cypress tests have already tested the logic. This comment is absolutely true for me. These tools are complimentary. Cypress needs to work across browsers though for QA Engineers to be engaged with it. true true I see that most comments are regarding the Cypress only testing in Chrome, now with 4.0, Cypress supports Edge and Firefox. Article should be updated. nice one, thanks for the heads up I can still see my cypress test runner, running only on chrome or electron browsers. Can you please provide any more documentation regarding this please. That would be appreciated.. I run through all the comments , and i am more confused now 🙁 I am an automation tester, currently working with selenium web driver but i want to replace the existing selenium test framework with Cypress, can i go ahead with this ? I’m pretty sure this piece is not that relevant anymore. I’ve used new versions of Selenium IDE. It’s flexible, you barely need any code, everyone can work with it. It’s great. I’ve been creating Automated Tests for a few years, using different methods and yes, “oldschool” Selenium is quite time-consuming. I’ve been looking for differences between Selenium IDE and Cypress.io to see which works best, but thusfar I don’t see it. I’ve set up a project with a lot of tests which can be run simultaneously. I’ve created a PowerShell to handle them in TFS and create useful test output. It was easy, fast and security-wise it seems to be the best option as well. Can someone convince me Cypress.io is fast to learn and better to use? Well, I rarely use waits while automating a very old and very bad written application. Using POM for better test clarity and only disabling waits where needed to speed up execution, with perhaps adding some waits where we switch between apps or display very long lists (we don’t have 100% control of test data!). Newest Selenium has build in waits which solves all the problems earlier versions had. Its good, when used properly. And yes it needs to be maintained, both TF and tests. 
I just think the author isn’t doing his selenium the right way, hence the mess he is experiencing. The moment you write first ten tests and you end up with enough objects and methods that writing 40 new equals to creating 40 new xml files with test data and maybe a single new line of code per test, you will know you did you abstraction the right way. Naming conventions and object pages organisation is the key. If you don’t have these, you will end up with a mess of illegible code. Do POM the right way and it will lead you to the success. This is important. Proper conventions, clean code, code reviews and maintenance for automation tests and framework are vital for a long term project success. Same here – Cypress is a novelty, something good to learn to know why Selenium remains an industry gold standard, if it is behind a proper Automation Framework. I’m working on TestNG+Selenium combine framework where we also have some API and Unit tests. All can be done from a single framework where needed, as needed. I guess it’s just a matter of how people work. Our tests for UI follow POM and we map test steps to code lines (usually few lines of reusable methods per step), so they look very different than this ‘it’ convention that I somehow always relate to Protractor. But the way ours are written, you can read the code aloud and they make perfect english sentences describing what user is doing. API and unit tests are simpler (send request, check response, assert method results) but for system tests you could have zero coding skills and you should still be able to read what we coded and understood what happens. // Step 1 logIntoApp // Step 2 navigateToUserProfile // Step 3 selectProfileDetails assertTrue(userIdEqualsTestData) Etc. Easy enough. And the reason we write it that way is exactly to allow any newcomers to understand what is going on in the test and in the code. Maintainability is second most important quality of test automation framework after usability 🙂 We did a study to compare Cypress.IO, Selenium IDE with other options and chose WebdriverIO instead. It’s easy to use, flexible and stable. It’s not easy to use Gherkin, but it’s possible. There are easy-to-use packages, you only need NPM and drivers (I use Visual Code for editting). Although I spent a lot of time setting up my Selenium IDE, Selenium, MS Coded UI Testing and learned a lot, this is the best choice to be able to run locally for us with a great reporting Allure add-on. This is no ad because it’s free to use, but I’m glad we’re using it. I created 300 tests in a short period and it runs within a few minutes headless (and they aren’t small either). Well I used both Cypress and Selenium. Both have the adventages and down sides but major problem of Cypress is stability because cypress code is constantly changed which causing much bigger problem with the maintanace of the tests over time on complex project. In some versions they literaly depreacated a lot of methods which made tests written in the previous version useless. Also if you are using the cypress in regular mode without any patterns enforced, as it is suggested by cypress team, it is absolut nightmare to mantain the large number of tests over time and there will be a lot of overlappings in the tests. Also cypress has the severe issues with the React and interaction with React elements is also nightmare especially if the App code isn’t perfect. Due to limited number of browser supported and tab limitation it is also useless in complex flows or multiple app testing. 
Thus from my experience so far cypress will never be able to replace selenium. At the moment it is just one of the fancy tools which is hyped. Cypress is ok only for quick validation of the simple apps during the sprints or for some test which will not be used anymore in the future. For all other projects that require constant regressions and monitoring, Selenium will always be the better option. Just design an appropriate selenium framework with appropriate and simple granular page objects with methods that work with each element on the page separately, and remove the driver from the tests by hiding the driver handling behind generic classes like the base suite or the base test, and it will be as simple to use as Cypress but a much better option for a long-term project.
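To make the page-object advice that keeps coming up in these comments concrete, here is a minimal, purely illustrative sketch using Selenium's Python bindings; the locators and URL are made up, and this is only one way to hide the driver and the explicit waits behind a base class, not code from the article:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class BasePage:
    # Owns the driver and keeps the explicit-wait logic in one place.
    def __init__(self, driver, timeout=10):
        self.driver = driver
        self.wait = WebDriverWait(driver, timeout)

    def visible(self, locator):
        # Tests never call wait.until themselves; they go through this helper.
        return self.wait.until(EC.visibility_of_element_located(locator))

class LoginPage(BasePage):
    EMAIL = (By.NAME, "email")                         # illustrative locator
    SUBMIT = (By.CSS_SELECTOR, "button[type=submit]")  # illustrative locator

    def log_in(self, email):
        self.visible(self.EMAIL).send_keys(email)
        self.visible(self.SUBMIT).click()

driver = webdriver.Chrome()
driver.get("https://example.com/login")                # placeholder URL
LoginPage(driver).log_in("user@example.com")
driver.quit()

Whether a structure like this is enough to tame the flakiness the article complains about is exactly what the commenters above disagree on.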
https://blog.logrocket.com/cypress-io-the-selenium-killer/
CC-MAIN-2021-04
en
refinedweb
#include <EBPatchGodunov.H> non-virtual stuff Set parameters for slope computations. Given left and right one-sided undivided differences /a_dql,a_dqr/, apply van Leer limiter $vL$ defined in section to a_dq. Called by the default implementation of PatchPolytropic::slope. Update the state using flux difference that ignores EB. Store fluxes used in this update. Store non-conservative divergence. Flux coming out of this should exist at cell face centers. Update the state at irregular VoFs and compute mass difference and the maximum wave speed over the entire box. Flux going into this should exist at VoF centroids. deprecated interface References define(), m_useAgg, and RealVect::Unit. Reimplemented in EBPatchAdvect. References m_dt, and m_time. needs coarse-fine IVS to know where to drop order for interpolation virtual in case you need to add anything to definition Reimplemented in EBPatchAdvect. Compute the limited slope /a_dq/ of the primitive variables /a_q/ for the components in the interval /a_interval/, Calls user-supplied EBPatchGodunov::applyLimiter. Reimplemented in EBPatchAdvect. needs to be virtual because of RZ virtual in case you want to do something faster than go through constoprim Reimplemented in EBPatchAdvect. virtual because RZ changes this function Returns the interval of component indices in the primitive variable EBCellFAB for the velocities. Only used for artificial visc and flattening Implemented in EBPatchAdvect. needs to be virtual because of RZ Returns the component index for the pressure. Called only if flattening is used. Implemented in EBPatchAdvect. Returns the component index for the bulk modulus, used as a normalization to measure shock strength in flattening. Called only if flattening is used. Implemented in EBPatchAdvect. Return number of components for primitive variables. Implemented in EBPatchAdvect. Returns number of components for conserved variables. Implemented in EBPatchAdvect. Return the names of the variables. A default implementation is provided that puts in generic names. Implemented in EBPatchAdvect. Return the names of the variables. A default implementation is provided that puts in generic names. Implemented in EBPatchAdvect. Given input left and right states, compute a suitably-upwinded flux (e.g. by solving a Riemann problem), as in Implemented in EBPatchAdvect. Given input left and right states, compute a suitably-upwinded flux (e.g. by solving a Riemann problem). Implemented in EBPatchAdvect. rz func. rz func. rz func. Return true if the application is using flattening. Implemented in EBPatchAdvect. Return true if the application is using artificial viscosity. Implemented in EBPatchAdvect. Return true if you are using fourth-order slopes. Return false if you are using second-order slopes. Implemented in EBPatchAdvect. Returns value of artificial viscosity. Called only if artificial viscosity is being used. Implemented in EBPatchAdvect. References m_primState. References m_coveredFluxPlusG4. References m_coveredFluxMinuG4. References m_coveredFaceMinuG4. References m_coveredFacePlusG4. set to true if the source you will provide is in conservative variables. Default is false References s_conservativeSource. References m_entireBox, and SpaceDim. Referenced by useConservativeSource(). Referenced by EBPatchAdvect::floorPrimitives(). Referenced by getCoveredFacePlus(). Referenced by getCoveredFaceMinu(). Referenced by getCoveredFluxPlus(). Referenced by getCoveredFluxMinu(). Referenced by EBPatchAdvect::getPrimState(), and getPrimState().
Referenced by getEntireBox().
http://davis.lbl.gov/Manuals/CHOMBO-RELEASE-3.2/classEBPatchGodunov.html
CC-MAIN-2020-45
en
refinedweb
Configuring the SDK for the Unity Editor. When testing your scene in the Unity Editor, you can use the Realtime Database. You must configure the SDK with the proper database URL. Call SetEditorDatabaseUrl with the URL of your database. using Firebase; using Firebase.Unity.Editor; public class MyScript: MonoBehaviour { void Start() { // Set this before calling into the realtime database. FirebaseApp.DefaultInstance.SetEditorDatabaseUrl(""); } } If you have chosen to use public access for your rules and have set the database URL, you can proceed to the sections on saving and retrieving data. Optional: Editor setup for restricted access. If you choose to use rules that disallow public access, you will need to configure the SDK to use a service account to run in the Unity Editor. This will also allow you to impersonate end users while testing. To do this, first create a new p12 file. Record the generated email and password of the service account. Place the p12 file under "Editor Default Resources" within your Unity project. Next, add the following code to initialize usage of the service account. using Firebase; using Firebase.Unity.Editor; public class MyScript: MonoBehaviour { void Start() { // Set these values before calling into the realtime database. FirebaseApp.DefaultInstance.SetEditorDatabaseUrl(""); FirebaseApp.DefaultInstance.SetEditorP12FileName("YOUR-FIREBASE-APP-P12.p12"); FirebaseApp.DefaultInstance.SetEditorServiceAccountEmail("SERVICE-ACCOUNT-ID@YOUR-FIREBASE-APP.iam.gserviceaccount.com"); FirebaseApp.DefaultInstance.SetEditorP12Password("notasecret"); } }
https://firebase.google.com/docs/database/unity/start?hl=nb-NO
CC-MAIN-2020-45
en
refinedweb
This small JavaFX test application import javafx.application.Application; import javafx.scene.Scene; import javafx.scene.layout.BorderPane; import javafx.scene.paint.Color; import javafx.scene.shape.Rectangle; import javafx.stage.Stage; public class ApplicationWithNonResizableStage extends Application { public static void main(final String[] args) { launch(args); } @Override public void start(final Stage primaryStage) throws Exception { final Rectangle rectangle = new Rectangle(200, 100, Color.POWDERBLUE); final BorderPane pane = new BorderPane(rectangle); final Scene scene = new Scene(pane); primaryStage.setScene(scene); primaryStage.setResizable(false); primaryStage.show(); } } produces a window with unwanted padding: Removing the call primaryStage.setResizable(false) also removes the effect: What is going wrong? As already commented, this different behaviour of resizable versus non-resizable stages smells like a bug (somebody might consider filing an issue ;-) A shorter (than sizing manually) way around is to explicitly fit the stage to the scene: primaryStage.setScene(scene); primaryStage.setResizable(false); primaryStage.sizeToScene(); Just noticed that this works for jdk8, but not jdk7. For convenience, a bug update: the original report filed by jewelsea was closed as a duplicate of (in new coordinates) - still open, commented to be win-only. Although this is not an explanation, it solves the problem: @Override public void start(final Stage primaryStage) throws Exception { final Dimension d = new Dimension(210, 110); final Rectangle rectangle = new Rectangle(d.width, d.height, Color.POWDERBLUE); final BorderPane pane = new BorderPane(rectangle); pane.maxWidth(d.height); pane.maxWidth(d.width); final Scene scene = new Scene(pane, d.width, d.height); primaryStage.setScene(scene); primaryStage.setResizable(false); primaryStage.setWidth(d.width); primaryStage.setHeight(d.height); primaryStage.show(); } The key is setting the width and height of the Stage at the right time.
https://javafxpedia.com/en/knowledge-base/20732100/javafx--why-does-stage-setresizable-false--cause-additional-margins-
CC-MAIN-2020-45
en
refinedweb
SYNOPSIS #include <Inventor/nodes/SoClipPlane.h> Inherits SoNode. Inherited by SoClipPlaneManip. Public Member Functions virtual SoType getTypeId (void) const SoClipPlane SoSFPlane plane SoSFBool on Protected Member Functions virtual const SoFieldData * getFieldData (void) const virtual ~SoClipPlane () Static Protected Member Functions static const SoFieldData ** getFieldDataPtr (void) Additional Inherited Members Detailed Description The SoClipPlane class is a node type for specifying clipping planes. A scene graph without any SoClipPlane nodes uses six clipping planes to define the viewing frustum: top, bottom, left, right, near and far. If you want extra clipping planes for 'slicing' the visible geometry, you can do that by using nodes of this type. Geometry on the back side of the clipping plane is clipped away. Note that OpenGL implementations have a fixed maximum number of clipping planes available. To find out what this number is, you can use the following code: #include <Inventor/elements/SoGLClipPlaneElement.h> // ...[snip]... int maxplanes = SoGLClipPlaneElement::getMaxGLPlanes(); Below is a simple, basic scene graph usage example of SoClipPlane. It connects an SoClipPlane to an SoCenterballDragger, for end-user control over the orientation and position of the clipping plane: #Inventor V2.1 ascii Separator { Separator { Translation { translation -6 0 0 } DEF cbdragger CenterballDragger { } } TransformSeparator { Transform { rotation 0 0 1 0 = USE cbdragger . rotation translation 0 0 0 = USE cbdragger . center } ClipPlane { } } Sphere { } }.fi Note that SoClipPlane is a state-changing appearance node, and as such, it will only assert its effects under the current SoSeparator node (as the SoSeparator pops the state stack when traversal returns above it), as can be witnessed by loading this simple example file into a Coin viewer: #Inventor V2.1 ascii Separator { ClipPlane { } Cube { } } Separator { Translation { translation -3 0 0 } Cube { } }.fi FILE FORMAT/DEFAULTS: ClipPlane { plane 1 0 0 0 on TRUE } Constructor & Destructor Documentation SoClipPlane::SoClipPlane (void)Constructor. SoClipPlane::~SoClipPlane () [protected], [virtual]Destructor. Member Function Documentation SoType SoClipPlane::getClassTypeId (void) [static]This static method returns the SoType object associated with objects of this class. Reimplemented from SoNode. Reimplemented in SoClipPlaneManip. SoType SoClipPlane:. Reimplemented in SoClipPlaneManip. const SoFieldData ** SoClipPlane::getFieldDataPtr (void) [static], [protected]This API member is considered internal to the library, as it is not likely to be of interest to the application programmer. Reimplemented from SoNode. Reimplemented in SoClipPlaneManip. const SoFieldData * SoClipPlane::getFieldData (void) const [protected], [virtual]Returns a pointer to the class-wide field data storage object for this instance. If no fields are present, returns NULL. Reimplemented from SoFieldContainer. Reimplemented in SoClipPlaneManip. void SoClipPlane::initClass (void) [static]Sets up initialization for data common to all instances of this class, like submitting necessary information to the Coin type system. Reimplemented from SoNode. Reimplemented in SoClipPlaneManip. void SoClipPlane::doAction (SoAction *action) [virtual]This function performs the typical operation of a node for any action. Reimplemented from SoNode. Reimplemented in SoClipPlaneManip. void SoClipPlane: SoClipPlaneManip. void SoClipPlane:. Reimplemented in SoClipPlaneManip. 
void SoClipPlane::pick (SoPickAction *action) [virtual]Action method for SoPickAction. Does common processing for SoPickAction action instances. Reimplemented from SoNode. Reimplemented in SoClipPlaneManip. Member Data Documentation SoSFPlane SoClipPlane::planeDefinition of clipping plane. Geometry on the back side of the clipping plane is clipped away. The default clipping plane has it's normal pointing in the <1,0,0> direction, and intersects origo. (I.e., everything along the negative X axis is clipped.) SoSFBool SoClipPlane::onWhether clipping plane should be on or off. Defaults to TRUE. Author Generated automatically by Doxygen for Coin from the source code.
http://manpages.org/soclipplane/3
CC-MAIN-2020-45
en
refinedweb
A set of similar system calls are used to create an IPC resource and manipulate IPC information. [3] Due to their flexibility, the syntax for these calls is somewhat arcane (the calls appear, like the camel, to have been designed by a committee). The System V IPC calls are summarized in Table 6.2. [3] Note Linux also supports a nonstandard, nonportable system call called ipc that can be used to manipulate IPC resources. As this is a Linux-specific call, its use is best left to Linux system developers. Table 6.2. Summary of the System V IPC Calls. The get system calls [4] ( msgget , semget , and shmget ) are used either to allocate a new IPC resource (which generates its associated system IPC structure) or gain access to an existing IPC. Each IPC has an owner and a creator, which under most circumstances are usually one and the same. When a new resource is allocated, the user must specify the access permissions for the IPC. Like the open system call, the get system calls return an integer value called an IPC identifier, which is analogous to a file descriptor. The IPC identifier is used to reference the IPC. From a system standpoint, the IPC identifier is an index into a system table containing IPC permission structure information. The IPC permission structure is defined in that is included by the header file . This structure is defined as [4] The term get (in italics ) will be used to reference the group of system calls. struct ipc_perm { __key_t __key; /* Key */ __uid_t uid; /* Owner's user ID. */ __gid_t gid; /* Owner's group ID. */ __uid_t cuid; /* Creator's user ID. */ __gid_t cgid; /* Creator's group ID. */ unsigned short int mode; /* Access permission. */ unsigned short int __pad1; unsigned short int __seq; /* Sequence number. */ unsigned short int __pad2; unsigned long int __unused1; unsigned long int __unused2; }; The type definitions for __uid_t , __gid_t , and so on can be found in the header file . In general, all programs that use the IPC facilities should include the and files. As will be explained in the discussion of ctl system calls, some members of the permission structure can be modified by the user. There are two arguments common to each of the three get system calls. Each get system call takes an argument of defined type __key_t (of base type integer). This argument, known as the key value, is used by the get system call to generate the IPC identifier. There is a direct, one-to-one relationship between the IPC identifier returned by the get system call and the key value. While the key can be generated in an arbitrary manner, there is a library function called ftok that is commonly used to standardize key production. [5] By calling ftok with the same arguments, unrelated processes can be assured of producing the same key value and thus reference the same IPC resource. The ftok function is summarized in Table 6.3. [5] In all honesty, the ftok library function is superfluous, but is presented for historical and continuity reasons. As long as processes that wish to access a common IPC resource have a method to communicate the key value for the IPC (such as in a common header file), ftok can be avoided. Table 6.3. Summary of the ftok Library Function. The ftok function takes two arguments. The first, path , is a reference to an existing accessible file. Often the value "." is used for this argument, since in most situations the self-referential directory entry "." is always present, accessible, and not likely to be subsequently deleted. 
The second argument for ftok, proj, is a single-character project identifier most commonly represented as a literal. The value returned by a successful call to ftok is of defined type key_t. ftok's underlying algorithm, which uses data returned by the stat system call for the specified pathname as well as the proj argument value, does not guarantee a unique key value will be returned. If ftok fails, it returns a -1 and sets errno in a manner similar to the stat system call (the stat system call is discussed in Section 2.8, "File Information"). As demonstrated in Program 6.1, the most significant byte of the value returned by ftok is the character proj value, which is passed as the second argument. Program 6.1 Generating some key values with ftok. File: p6.1.cxx /* Using ftok to generate key values */ #include <iostream> #include <sys/types.h> #include <sys/ipc.h> using namespace std; int main( ){ key_t key; for (char i = 'a'; i <= 'd'; ++i) { key = ftok(".", i); cout << "proj = " << i << " key = [" << hex << key << "] MSB = " << char(key >> 24) << endl; } return 0; } Figure 6.3 shows the output of Program 6.1 when run on a local 32-bit system. Figure 6.3 Output of Program 6.1. linux$ p6.1 proj = a key = [61153384] MSB = a <-- 1 proj = b key = [62153384] MSB = b proj = c key = [63153384] MSB = c proj = d key = [64153384] MSB = d (1) The proj argument becomes the most significant byte of the value returned by ftok. The key value for the get system calls may also be set to the defined constant IPC_PRIVATE. Beneath the covers, IPC_PRIVATE is defined as having a value of 0. Note that regardless of its argument values, the ftok library function will not return a value of 0. Specifying IPC_PRIVATE instructs the get system call to create an IPC resource with a unique IPC identifier. Thus, no other process creating or attempting to gain access to an IPC resource will receive this same IPC identifier. An IPC resource created with IPC_PRIVATE is normally shared between related processes (such as parent/child or child/child) or in client-server settings. In the related process settings, the parent process creates the IPC resource. When an exec is performed, the associated IPC identifier is passed to the child process by way of the environment or as a command-line parameter. In client-server relationships, the server process usually creates the IPC using IPC_PRIVATE. The IPC identifier is then made available to the client via a file. Note that in either scenario, the child/client process would not specify IPC_PRIVATE when issuing its get system call to gain access to the existing private resource. Finally, using IPC_PRIVATE does not prohibit other processes from gaining access to the resource; it only makes it a bit more difficult for a process to determine the identifier associated with the resource. The second argument common to all of the IPC get system calls is the message flag. The message flag, an integer value, is used to set the access permissions when the IPC resource is created. The lower nine bits of the message flag argument define the access permissions. Table 6.4 summarizes the subsequent types of permissions required for each of the IPC system calls [6] to perform their functions. The execute bit is not relevant for IPC facilities. [6] The header files for each of the IPC facilities contain defined constants for read/write (access) permissions for the facility. As noted previously, using defined constants does increase the portability of code.
However, there is no free lunch, as the programmer must often take the time to look up the correct spelling of infrequently used defined constants. Table 6.4. Required Permissions for IPC System Calls. In addition to setting access modes, there are two defined constants that can be OR'ed with the access permission value(s) to modify the actions taken when the IPC is created. The constant IPC_CREAT directs the get system call to create an IPC resource if one does not presently exist. When IPC_CREAT is specified, if the resource is already present and it was not created using IPC_PRIVATE, its IPC identifier is returned. In conjunction with IPC_CREAT, the creator may also specify IPC_EXCL. Using these two constants together (i.e., IPC_CREAT | IPC_EXCL) causes the get system call to act in a no clobber manner. That is, should there already be an IPC present for the specified key value, the get system call will fail; otherwise, the resource is created. Using this technique, a process can be assured that it is the creator of the IPC resource and is not gaining access to a previously created IPC. In this context, specifying IPC_EXCL by itself has no meaning. The ctl system calls (msgctl, semctl, and shmctl) act upon the information in the system IPC permission structure described previously. All of these system calls require an IPC identifier and an integer command value to stipulate their action. The values the command may take are represented by the following defined constants: IPC_STAT (retrieve the current values of the IPC permission structure), IPC_SET (modify selected members of the structure), and IPC_RMID (remove the IPC resource from the system). A process can specify IPC_SET or IPC_RMID only if it is the owner or creator of the IPC (or if it has superuser privileges). Some of the ctl system calls have additional functionality, which will be presented in later sections. The remaining IPC system calls are used for IPC operations. The msgsnd and msgrcv calls are used to send and receive a message from a message queue. By default, the system blocks on an msgsnd if a message queue is full, or on an msgrcv if the message queue is empty. The process will remain blocked until the indicated operation is successful, a signal is received, or the IPC resource is removed. A process can specify to not block by OR'ing in the IPC_NOWAIT flag with the specified operation flag. The semop system call performs a variety of operations on semaphores (such as setting and testing). Again, the default is to block when attempting to decrement a semaphore that is currently at 0 or if the process is waiting for a semaphore to become 0. The shmat and shmdt system calls are used with shared memory to map/attach and unmap/detach shared memory segments. These calls do not block. For some reason known only to those who authored the documentation, the msgsnd and msgrcv manual pages (found in Section 2) contain a reference to msgop. However, there is no system call msgop. Likewise, the shmat and shmdt manual pages make reference to shmop, which also is not a system call. The manual page for semop only makes reference to semop (which is indeed a system call). One must only conclude that the initial intent was to group all of these calls under the general heading of IPC operations. We address each set of IPC system calls in detail as we cover message queues, semaphores, and shared memory.
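While the programs in this chapter are written in C++, the create-with-flags and remove sequence described above can be sketched from a scripting language as well. The fragment below is only an illustration: it assumes a Linux system whose libc exposes the System V calls, and it hard-codes the usual Linux values of IPC_CREAT, IPC_EXCL, and IPC_RMID rather than pulling them from the header files:

import ctypes, ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

# Usual Linux values; in C these come from the IPC header files.
IPC_CREAT, IPC_EXCL, IPC_RMID = 0o1000, 0o2000, 0

key = libc.ftok(b".", ord("a"))                         # same key recipe as Program 6.1
msqid = libc.msgget(key, 0o660 | IPC_CREAT | IPC_EXCL)  # fail if the queue already exists
if msqid < 0:
    raise OSError(ctypes.get_errno(), "msgget failed")

# ... msgsnd/msgrcv operations would go here ...

libc.msgctl(msqid, IPC_RMID, None)                      # owner/creator removes the queue

The C programs in the sections that follow use the same constants and calls directly, with the compiler checking the argument types for us.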
https://flylib.com/books/en/1.23.1/ipc_system_calls_a_synopsis.html
CC-MAIN-2020-45
en
refinedweb
Bug #1512 TBX: 0.7.7, deal with structures properties in commands Description "text" structure in editors Current TXT+CSV import module builds a 'path' property for 'text' structures that one can see in various command results, disturbing the user, who has never named any source data with that name. For example after importing the 'voeux-txt' sample corpus without any metadata, the Description command displays: Propriétés des structures (max 20 valeurs) p id (1) = 0. s n (59) = 22, 23, 24, 25, 26, 27, 28, 29, 3, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43. text id (3) = t0015, t0022, t0036. path (1) = "". That property must not be shown to the user, in: - Description command - the first page of the Edition of each text - Concordance references display choice - Sub-corpus selection - etc. or must not be built at all. 'text' structure properties in partition dialog Currently, the Partition command parameters dialog box lists the 'base', 'path' and 'project' structure properties for the 'text' structure. Those supposedly internally used properties are unknown and unusable by the user, so they confuse the user. They should not be displayed here. Note: these properties have been noticed in the Partition command parameters, but they must probably also be hidden in other components like Sub-corpus parameters, Concordance reference display parameters, etc. 'txmcorpus' structure Currently, the Partition command parameters dialog box lists the 'txmcorpus' structure. That supposedly internally used property is unknown and unusable by the user, so it confuses the user. It should not be displayed here. Solution 1 For all ancillary/internal/private structure properties built (and needed?) by TXM: - use a secure non-colliding name (to prevent conflicts with the user's structure property name space), for example prefix the name by 'Txm' in a camel-case naming policy - filter processing and display of every ancillary data name depending on context, to prevent the user from discovering it or having to deal with it - rename the 'base', 'path' and 'project' internal properties into a reserved namespace of TXM, for example prefix them by 'txm-' (if we only remove any properties of those names, we prevent any corpus from using those property names) - rename the 'txmcorpus' structure to 'txm-corpus' (to prevent conflict with any corpus sources) Solution 3 A supplementary development could add a boolean preference to show/hide 'internal structures and properties' in parameters dialog boxes for advanced users? This, of course, requires documenting those internal properties and structures in the Javadoc and in a developer manual. History #1 Updated by Matthieu Decorde over 1 year ago - Target version changed from TXM 0.8.0 to TXM 0.8.2
https://forge.cbp.ens-lyon.fr/redmine/issues/1512
CC-MAIN-2020-45
en
refinedweb
State There are two types of data that control a component: props and state. props are set by the parent and they are fixed throughout the lifetime of a component. For data that is going to change, we have to use state. In general, you should initialize state in the constructor, and then call setState when you want to change it. For example, let's say we want to make text that blinks all the time. The text itself gets set once when the blinking component gets created, so the text itself is a prop. The "whether the text is currently on or off" changes over time, so that should be kept in state. import React, { Component } from 'react'; import { AppRegistry, Text, View } from 'react-native'; class Blink extends Component { constructor(props) { super(props); this.state = { isShowingText: true }; // Toggle the state every second setInterval(() => ( this.setState(previousState => ( { isShowingText: !previousState.isShowingText } )) ), 1000); } from the server, or from user input. You can also use a state container like Redux or Mobx to control your data flow. In that case you would use Redux or Mobx to modify your state rather than calling setState directly. When setState is called, BlinkApp will re-render its Component. By calling setState within the Timer, the component will re-render every time the Timer ticks. State works the same way as it does in React, so for more details on handling state, you can look at the React.Component API. At this point, you might be annoyed that most of our examples so far use boring default black text. To make things more beautiful, you will have to learn about Style.
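The excerpt above breaks off before the component renders anything. The following is a guessed continuation, not the original documentation code: a render method for the Blink class plus a small root component; the text prop name and the registered component name are assumptions.

// Hypothetical continuation of the Blink class above (this method goes inside the class body):
render() {
  // show the prop-supplied text only while state.isShowingText is true
  let display = this.state.isShowingText ? this.props.text : ' ';
  return (
    <Text>{display}</Text>
  );
}

// An assumed root component that uses Blink and registers the app:
class BlinkApp extends Component {
  render() {
    return (
      <View>
        <Blink text="I love to blink" />
      </View>
    );
  }
}

AppRegistry.registerComponent('BlinkApp', () => BlinkApp);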
http://facebook.github.io/react-native/docs/0.21/state
CC-MAIN-2018-51
en
refinedweb
We completely agree with everyone’s request for new/better/improved/complete documentation. We hope to address this in the future. I going to create a new thread for this… We completely agree with everyone’s request for new/better/improved/complete documentation. We hope to address this in the future. I going to create a new thread for this… This is pretty hard to get correct since python is not a strongly typed language. That being said, I think we can improve on what we currently support. @alain, can you look into improving this for V6? If we had better “getting started” documentation, would this help? Would you still need a course? Hi @dale, @stevebaer and @Alain, what i´ve meant is that we get code completion for RhinoCommon methods like below eg.: import Rhino def SomeFunction(): pt = Rhino.Geometry.Point3d(0,0,0) pt. # does not bring up anything I know this is a lot to ask for but it would be extremely valuable for the Python editor. c. RhinoCommon has come a long way since the old SDK, and its fantastic. But I agree with @fraguada about completing the SDK documentation, I also agree with @menno about creating two types of documentation, one for entry level, and one for advanced topics, in multiple languages (C#, .Net, Python, etc). Having it consolidated in one location will make it easy to search for solutions. I travel frequently and it would be nice for users to have the ability to download offline documentation as well. All in all, great work!! Hi @Dale, a starting kit it’s a good thing but, because the language, I would prefer a proper course taken in person with real teacher. Maybe because I am a teacher… I think with a teacher you could get more information, more “shade”. I started the rpython 101 and never taken the end. I’m still missing lot of concept also if I read it more than 5 time. (maybe I need just an English course!?!?) Ps: just asked to Steve how to organize a course in Italy This is something that @piac can help you with. Giulio provides courses or can put you in contact with someone else if he is not available. Here are my wishes: 1- Have a proper integrated UI system with sliders, radio buttons, tags, values etc. … Like this: 2- Get keyboard and mouse input and control within Rhino. … So we can control and manipulate stuff with arrowkeys or other keys 3- Complete Python so it has all the tools RhinoScript has with out the need to turn to Rhino Common. 4- Have a simple introduction to RhinoCommon within the editor + integrated help. … When Python is not poweful enough or lacks tools then it would be great if it was simpler to turn to Rhino Common. Thanks for asking. Dear McNeel-Team i would love to see: but i love Rhino… FWIW - Developer (general) start here RhinoCommon developer start here RhinoCommon examples are to be found here and docs are here - this documentation is littered with examples too. Some documentation on the coercing functions would be fantastic, there are still RS methods that behave differently in PY and GHPY. Guide for best practices would also be nice. And it would be great to have a future development direction insight. Like what is the future of the *.ghpy container? Only R6? Hi Dale, David’s RhinoScript primer made me programming in just a few days. Absolutely great. So I think a primer like this for C++, .NET would be very useful. thanks, Tobias The link to download the Rhinoscript doesn’t work:! Could some one maybe share the primer!? 
Tanks /\Matthijs Try this link for RhinoScript … EDIT: Crap, it doesn’t link to the primer on link from this page. Here is link to revision 3 pdf from my Dropbox … The broken link to the RhinoScript 101 Primer should be fixed now. Let me know if you find otherwise. To integrate API documentation and *.dll on the standard Rhino download as an independent package if the user decided to; in this way, API documentation and *.dll files could keep updated without needed to be downloaded manually, etc. An API documentation and script editing viewport: in the same way GH is going to turn into a “dockable” viewport it could be useful to have the documentation and coding window as a viewport too. A GUID, object name, current layer…, visualization mode in viewports to attach a small text to object and keep track on them. I think that it could be some kind of customizable option in which the user could write the variables that want to display attached to every object. I think that this could be useful even for 2d drawing generation adding some tweaks to include this data visualization in detail views. I’m aware that this is a specialised request, but I think that there’s at least a small group for which this is relevant: There should be a well-documented “optimization” API for Grasshopper, that makes it easier to develop optimization plug-ins for Grasshopper by providing access to Galapagos’ GUI and the functions it uses for interacting with Grasshopper. ATM, these functions are spread around, and not all of them are documented. Given that I know of at least four people/groups that have developed optimization tools for Grasshopper or are developing them, such an API would surely find some use. Cheers, Thomas
https://discourse.mcneel.com/t/what-can-mcneel-do-to-improve-your-development/10491?page=2
CC-MAIN-2018-51
en
refinedweb
Introduction Can you imagine the programming process without the possibility of debugging program code at run-time? It is obvious that such programming may exist, but programming without a debugging possibility is too complicated when we are working on big and complex projects. In addition to standard approaches of debugging program code, such as an output window on the Visual Studio IDE or the macros of asserts, I propose a new method for debugging your code: to output your debugging data to the application that is separated from the Visual Studio IDE and the project you are currently working on. Features What's so good about it and should I use it? - It's a separate module that allows you to trace and debug the release version of your project. - It is a fully controlled module with a command set that enables you to control your debugging process: closing tracing windows (also known as trace channel), saving the entry of the trace window to the file, and so forth. Full control set are described below. - This module supports several (the number of trace channels is unlimited) strategies of trace channels. A detailed description about trace channel is described below. - You can easily modify this module to meet your needs. Behaviour Launch the application of trace messages catcher (next: trace catcher) before you start working with this module. Tracing data, sent to the trace catcher application, will be saved if the catcher application was inactive or was terminated during the trace operations. All the data that were saved during the critical situations, as described above, will be kept and popped-out to the trace catcher application when it starts again. There's a possibility to start the trace catcher application with the creation of the trace module and terminate it when the trace module is being destructed. _Log.setSectionName( "channel_#1" ); _Log.dump( "%s", "My trace data" ); or _Log.dumpToSection( "channel_#1", "%s", "My trace data" ); If you send your trace data to the new trace channel that is not created in the trace catcher application, a new trace channel will be created automatically. In addition to sending your trace data to the catcher, there's a possibility to manipulate the trace catcher application with commands help. Commands are divided into two parts: global commands and commands that depend on the trace channel. Global commands affecting the whole trace catcher application: closeRoot—closing the trace catcher application; onTop.ON—enabling always on top state for the catcher application; onTop.OFF—disabling always on top state for the catcher application. Example: _Log.sendCmd( "closeRoot" ); _Log.sendCmd( "onTop.ON") Commands affecting certain trace channels: clear—deleting the entry of the given trace channel; close—closing the given trace channel; save<path to output stream>—saving the entry of the given trace channel to the output stream that you described. Example: _Log.sendCmd( "Channel_1", "clear" ); _Log.sendCmd( "Channel_2", "save c:\\channel2.log" ); _Log.sendCmd( "close" ); /** close the current output window * (section) */ How to Use It To fully use this trace module, you have to do only two steps: - First: You must copy the [LogDispatch.dir] directory from the unzipped source file (LogDispatch_src.zip) to your project directory. - Second: You must include the header of this module in your project modules were you want to use it. 
Example: #include "path_by_you\LogDispathc.dir\LogDispath.h" <ClogDispatch> methods To call all the messages described below, you must use variable names as follows: [_Log]. The trace object is created once during the project lifetime (using a singleton pattern). Let's say calling the dump message will be described like this: _Log.dump( "System time is %d %d %5.5f ", 15, 10, 08.555121 ); Tracing operations dump—Formatted trace data (sprintf format) are sent to catcher application. The tracing data will be placed to the section named as the result of calling the "setSectionName" method before that; or, if the "setSectionName" method wasn't called, tracing data will be placed to the default section named as "output@default". dumpToSection—The principle is the same as dump message. The difference is that this message will place your data to the channel by the name that you described in this message. setSectionName—Set the working (active) channel name. Configuration command getCmdPrefix—Sets the prefix of the command. setCmdPrefix—Returns the prefix of the command. sendCmd—Sends your message to the receiver application. setCloseOnExit—Enables/disables the possibility to send the message to the catcher application on exit. setCloseCMDOnExit—Sets the command of the catcher application that will be sent when the trace module is destroyed. Additional operations of the module configuration setClassNameOfCatcher—Sets the class name of the catcher application. That class name will be used in the search of the catcher application where the tracing data will be sent. runCatcher—Executes the catcher application from the described path. Conclusion This trace module and the strategy we are using on it is a very flexible and effective trace tool for debugging big projects. In my opinion, this tool will be a very effective strategy to trace release versions of the project where all debugging data are removed. It is very easy and comfortable to use it. Window resizing for sidebar debugging on release versionsPosted by revoscan on 08/04/2005 06:43pm
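Putting the methods listed above together, a debugging session with this module could look roughly like the sketch below. Only the method names come from the article; the channel names, format strings and file path are invented for illustration.

// Illustrative use of the ClogDispatch singleton (_Log) described above.
#include "LogDispatch.dir/LogDispatch.h"   // adjust the path to wherever you copied the module

void ProcessOrders()
{
    _Log.setSectionName("orders");                        // make "orders" the active trace channel
    _Log.dump("processing %d orders", 42);                // printf-style trace to the active channel

    _Log.dumpToSection("timing", "stage took %5.2f ms", 17.25);  // one-off message to another channel

    _Log.sendCmd("timing", "save c:\\timing.log");        // ask the catcher to save that channel's entries
    _Log.sendCmd("onTop.ON");                             // keep the catcher window always on top
}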
https://www.codeguru.com/cpp/v-s/debug/logging/article.php/c7231/LogDispatchmdashDebug-Module.htm
CC-MAIN-2018-51
en
refinedweb
detach Percentile Detach Objects from the Search Path Detach a database, i.e., remove it from the search() path of available R objects. Usually this is either a data.frame which has been attached or a package which was attached by library. Usage detach(name, pos = 2L, unload = FALSE, character.only = FALSE, force = FALSE) Arguments - name The object to detach. Defaults to search()[pos]. This can be an unquoted name or a character string but not a character vector. If a number is supplied this is taken as pos. - pos Index position in search()of the database to detach. When nameis a number, pos = nameis used. - unload A logical value indicating whether or not to attempt to unload the namespace when a package is being detached. If the package has a namespace and unloadis TRUE, then detachwill attempt to unload the namespace via unloadNamespace: if the namespace is imported by another namespace or unloadis FALSE, no unloading will occur. - character.only a logical indicating whether namecan be assumed to be a character string. - force logical: should a package be detached even though other attached packages depend on it? Details. Value The return value is invisible. It is NULL when a package is detached, otherwise the environment which was returned by attach when the object was attached (incorporating any changes since it was attached). Note. Good practice. References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole. See Also attach, library, search, objects, unloadNamespace, library.dynam.unload . Aliases - detach Examples library(base) # NOT RUN { require(splines) # package detach(package:splines) ## or also library(splines) pkg <- "package:splines" # } # NOT(substitute(db)) attach(db, pos = pos, name = name) print(search()[pos]) detach(name, character.only = TRUE) } attach_and_detach(women, pos = 3) # }

https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/detach
CC-MAIN-2018-51
en
refinedweb
Do High-Denomination Notes Create Externalities?30th August 2018 In J.P. Koning’s lead essay he outlines the costs and benefits of large denomination currency and proposes the introduction of a high-value “supernote” that would be taxed to deal with the externalities imposed by high denomination users. Koning’s proposal is similar to the optimal policy suggested in my own work with my colleague, Jaevin Park. It should therefore come as no surprise that I would support his policy proposal on the condition that we accept the premise that there are externalities imposed by the use of currency. However, I am not completely convinced by that premise. As such, I would like to organize my response around three points. First, I will push back on the idea that the use of currency creates an externality in the traditional sense that we use the term. Second, I will argue that the elimination of high denomination notes would do little to reduce illegal transactions. And finally, I will argue that even if we accept the premise that currency creates an externality, the optimal policy would not be to eliminate high denomination notes, but rather to enact the sort of policy that Koning proposes. A common argument made by those advocating the elimination of high denomination notes (or currency, entirely) is that currency is used in illegal trade and therefore creates an externality that needs to be corrected by public policy. This argument is predicated on a different view of externalities than is typically found in the literature. For example, pollution is a textbook example of an externality. A firm that generates pollution as a byproduct of production does not bear the full cost of the pollution. Since the pollution can affect air quality and/or the health and production of others in society, the firm is creating an external cost above and beyond the cost of its own production. It is important to note that the cost to society is not the mere annoyance of seeing clouds of smoke or murky water, but rather the health and productivity consequences of pollution. It is unclear whether the sort of illegal activity that is facilitated by currency fits the same model as something like pollution.. Even if we accept the idea that the sort of illegal trade facilitated by currency generates an externality, there is no guarantee that eliminating currency (or the large denomination variety used in large-scale illegal transactions) would eliminate such trade. With respect to illegal trade, currency is a means to an end. Eliminating the means hardly guarantees an elimination of the end. Instead, those who are already engaged in illegal transactions are likely to look for substitutes. The recent emergence of cryptocurrency would likely get a boost from the elimination of large denomination conventional currency. Furthermore, those engaged in illegal activity might be inclined to create their own media of exchange or payment system. All of this is not to mention the fact that currency (even large denomination currency) is used by many people for legal transactions. Eliminating large denomination currency therefore imposes a cost on the law-abiding members of society. This subtracts from any perceived net benefit of eliminating illegal trade. Overall, this suggests the net benefits of eliminating large denomination currency are likely exaggerated as well. This brings me to my final point. 
Suppose that we simply take as given that illegal trade reduces social welfare, and that large denomination currency facilitates that type of trade. What should be done? What is the optimal policy? A typical Pigouvian response to this sort of problem is to levy a tax on the activity that creates an external cost. The proceeds of the tax can then be transferred to the harmed group. The problem with this type of policy solution is that illegal trade tends to be hidden and unreported and is therefore difficult, if not impossible, to tax. The fact that large denomination currency has desirable properties for engaging in illegal activity, however, allows for a possible solution. As my colleague Jaevin Park and I show in our paper “Breaking the Curse of Cash,” the optimal policy in this sort of environment is to have two types of currency with a flexible exchange rate between the two currencies. Policymakers can then vary the rates of return (or the rates of depreciation) on each currency to induce legal traders to hold one type of currency and illegal traders to hold the other type. The seigniorage earned from depreciating the currency used by illegal traders can then be used to finance a transfer to legal traders. By introducing a separate type of currency, policymakers are able to engineer a Pigouvian policy. The policy suggested by J.P. Koning in his original post is precisely the sort of policy consistent with our model. He argues that the United States should create a supernote and institute a surcharge (tax) on this supernote such that it depreciates faster than smaller denominations. If the tax is set optimally, this scheme would effectively replicate the result from our paper. To the extent to which we want to reduce illegal activity facilitated by cash, this is an appropriate policy to do so.
http://www.binary-method.net/do-high-denomination-notes-create-externalities
CC-MAIN-2018-51
en
refinedweb
NAME¶top - display Linux processes SYNOPSIS¶top -hv|-bcHiOSs -d secs -n max -u|U user - p pid -o fld -w [cols] DESCRIPTION¶. OVERVIEW¶ Documentation¶The remaining Table of Contents OVERVIEW Operation Startup Defaults. SYSTEM Configuration File b. PERSONAL Configuration File c. ADDING INSPECT Entries 7. STUPID TRICKS Sampler a. Kernel Magic b. Bouncing Windows c. The Big Bird Window d. The Ol' Switcheroo 8. BUGS, 9. SEE Also Operation¶When operating top, the two most important keys are the help (h or ?) key and quit (`q') key. Alternatively, you could simply use the traditional interrupt key (^C) when you're done. key/cmd objective ^Z suspend top fg resume top <Left> force a screen redraw (if necessary) key/cmd objective reset restore your terminal settings key special-significance Up recall older strings for re-editing Down recall newer strings or erase entire line Insert toggle between insert and overtype modes Delete character removed at cursor, moving others left Home jump to beginning of input line End jump to end of input line Startup Defaults¶The following startup defaults assume no configuration file, thus no user customizations. Even so, items shown with an asterisk (`*') could be overridden through the command-line. All are explained in detail in the sections that follow. Global-defaults A - Alt display Off (full-screen) * d - Delay time 1.5 seconds * H - Threads mode Off (summarize as tasks) I - Irix mode On (no, `solaris' smp) * p - PID monitoring Off (show all processes) * s - Secure mode Off (unsecured) B - Bold enable On (yes, bold globally) Summary-Area-defaults l - Load Avg/Uptime On (thus program name) t - Task/Cpu states On (1+1 lines, see `1') m - Mem/Swap usage On (2 lines worth) 1 - Single Cpu Off (thus multiple cpus) Task-Area-defaults b - Bold hilite Off (use `reverse') * c - Command line Off (name, not cmdline) * i - Idle tasks On (show all tasks) J - Num align right On (not left justify) j - Str align right Off (not right justify) R - Reverse sort On (pids high-to-low) * S - Cumulative time Off (no, dead children) * u - User filter Off (show euid only) * U - User filter Off (show any uid) V - Forest view On (show as branches) x - Column hilite Off (no, sort field) y - Row hilite On (yes, running tasks) z - color/mono On (show colors) Linux Memory Types¶ Private | Shared 1 | 2 Anonymous . stack | . malloc() | . brk()/sbrk() | . POSIX shm* . mmap(PRIVATE, ANON) | . mmap(SHARED, ANON) -----------------------+---------------------- . mmap(PRIVATE, fd) | . mmap(SHARED, fd) File-backed . pgms/shared libs | 3 | 4 ) 1. COMMAND-LINE Options¶The command-line syntax for top consists of: - hv|-bcHiOSs -d secs -n max -u|U user - p pid -o fld -w [cols] - . - -O :Output-field-names - This option acts as a form of help for the above -o option. It will cause top to print each of the available field names on a separate line, then quit. Such names are subject to nls. - -s :Secure-mode operation - Starts top with secure mode forced, even for root. This mode is far better controlled through. 2. SUMMARY Display¶Each of the following three areas are individually controlled through one or more interactive commands. See topic 4b. SUMMARY AREA Commands for additional information regarding these provisions. 2a. UPTIME and LOAD Averages¶This portion consists of a single line containing: program or window name, depending on display mode current time and length of time since last boot total number of users system load avg over the last 1, 5 and 15 minutes 2b. 
TASK and CPU States¶This portion consists of a minimum of two lines. In an SMP environment, additional lines can reflect individual CPU state percentages. running; sleeping; stopped; zombie a b c d %Cpu(s): 75.0/25.0 100[ ... 2c. MEMORY Usage¶This portion consists of two lines which may express values in kibibytes (KiB) through exbibytes (EiB) depending on the scaling factor enforced with the `E' interactive command. total, free, used and buff/cache total, free, used and avail (which is physical memory) a b c GiB Mem : 18.7/15.738 [ ... GiB Swap: 0.0/7.999 [ ...¶ 3a. DESCRIPTIONS of Fields¶Listed below are top's available process fields (columns). They are shown in strict ascii alphabetical order. You may customize their position and whether or not they are displayable with the `f' or `F' (Fields Management) interactive commands. 1. %CPU -- CPU Usage - The task's share of the elapsed CPU time since the last screen update, expressed as a percentage of total CPU time. 2. %MEM -- Memory Usage (RES) - A task's currently resident share of available physical memory. 3. CGNAME -- Control Group Name - The name of the control group to which a process belongs, or `-' if not applicable for that process. 4. CGROUPS -- Control Groups - The names of the control group(s) to which a process belongs, or `-' if not applicable for that process. 5. CODE -- Code Size (KiB) - The amount of physical memory currently devoted to executable code, also known as the Text Resident Set size or TRS. 6. COMMAND -- Command Name or Command Line - Display the command line used to start a task or the name of the associated program. You toggle between command line and name with `c', which is both a command-line option and an interactive command. [kthreadd]. 8. ENVIRON -- Environment variables - Display all of the environment variables, if any, as seen by the respective processes. These variables will be displayed in their raw native order, not the sorted order you are accustomed to seeing with an unqualified `set'.. OOMa -- Out of Memory Adjustment Factor - The value, ranging from -1000 to +1000, added to the current out of memory score (OOMs) which is then used to determine which task to kill when memory is exhausted. - 15. OOMs -- Out of Memory Score - The value, ranging from 0 to +1000, used to select task(s) to kill when memory is exhausted. Zero translates to `never kill' whereas 1000 means `always kill'. - 16.). - 17.. - 18. PID -- Process Id - The task's unique process ID, which periodically wraps, though never restarting at zero. In kernel terms, it is a dispatchable entity defined by a task_struct. - 19. PPID -- Parent Process Id - The process ID (pid) of a task's parent. - 20. PR -- Priority - The scheduling priority of the task. If you see `rt' in this field, it means the task is running under real time scheduling priority. - 21. RES -- Resident Memory Size (KiB) - A subset of the virtual address space (VIRT) representing the non-swapped physical memory a task is currently using. It is also the sum of the RSan, RSfd and RSsh fields. - 22. RSan -- Resident Anonymous Memory Size (KiB) - A subset of resident memory (RES) representing private pages not mapped to a file. - 23. RSfd -- Resident File-Backed Memory Size (KiB) - A subset of resident memory (RES) representing the implicitly shared pages supporting program images and shared libraries. It also includes explicit file mappings, both private and shared. - 24. 
RSlk -- Resident Locked Memory Size (KiB) - A subset of resident memory (RES) which cannot be swapped out. - 25. RSsh -- Resident Shared Memory Size (KiB) - A subset of resident memory (RES) representing the explicitly shared anonymous shm*/mmap pages. - 26. RUID -- Real User Id - The real user ID. - 27. RUSER -- Real User Name - The real user name. - 28. S -- Process Status - The status of the task which can be one of: D = uninterruptible sleep R = running S = sleeping T = stopped by job control signal t = stopped by debugger during trace Z = zombie - 29. SHR -- Shared Memory Size (KiB) - A subset of resident memory (RES) that may be used by other processes. It will include shared anonymous pages and shared file-backed pages. It also includes private pages mapped to files representing program images and shared libraries. - 30.. - 31. SUID -- Saved User Id - The saved user ID. - 32. SUPGIDS -- Supplementary Group IDs - The IDs of any supplementary group(s) established at login or inherited from a task's parent. They are displayed in a comma delimited list. - 33. SUPGRPS -- Supplementary Group Names - The names of any supplementary group(s) established at login or inherited from a task's parent. They are displayed in a comma delimited list. - 34. SUSER -- Saved User Name - The saved user name. - 35. SWAP -- Swapped Size (KiB) - The formerly resident portion of a task's address space written to the swap file when physical memory becomes over committed. - 36. TGID -- Thread Group Id - The ID of the thread group to which a task belongs. It is the PID of the thread group leader. In kernel terms, it represents those tasks that share an mm_struct. - 37.. - 38. TIME+ -- CPU Time, hundredths - The same as TIME, but reflecting more granularity through hundredths of a second. - 39.). - 40.. - 41. UID -- User Id - The effective user ID of the task's owner. - 42. USED -- Memory in Use (KiB) - This field represents the non-swapped physical memory a task is using (RES) plus the swapped out portion of its address space (SWAP). - 43. USER -- User Name - The effective user name of the task's owner. - 44. VIRT -- Virtual Memory Size (KiB) - The total amount of virtual memory used by the task. It includes all code, data and shared libraries plus pages that have been swapped out and pages that have been mapped but not used. - 45. WCHAN -- Sleeping in Function - This field will show the name of the kernel function in which the task is currently sleeping. Running tasks will display a dash (`-') in this column. - 46. nDRT -- Dirty Pages Count - The number of pages that have been modified since they were last written to auxiliary storage. Dirty pages must be written to auxiliary storage before the corresponding physical memory location can be used for some other virtual page. - 47.. - 48.. - 48. nTH -- Number of Threads - The number of threads associated with a process. - 50. nsIPC -- IPC namespace - The Inode of the namespace used to isolate interprocess communication (IPC) resources such as System V IPC objects and POSIX message queues. - 51. nsMNT -- MNT namespace - The Inode of the namespace used to isolate filesystem mount points thus offering different views of the filesystem hierarchy. - 52. nsNET -- NET namespace - The Inode of the namespace used to isolate resources such as network devices, IP addresses, IP routing, port numbers, etc. - 53. nsPID -- PID namespace - The Inode of the namespace used to isolate process ID numbers meaning they need not remain unique. 
Thus, each such namespace could have its own `init/systemd' (PID #1) to manage various initialization tasks and reap orphaned child processes. - 54. nsUSER -- USER namespace - The Inode of the namespace used to isolate the user and group ID numbers. Thus, a process could have a normal unprivileged user ID outside a user namespace while having a user ID of 0, with full root privileges, inside that namespace. - 55. nsUTS -- UTS namespace - The Inode of the namespace used to isolate hostname and NIS domain name. UTS simply means "UNIX Time-sharing System". - 56. vMj -- Major Page Fault Count Delta - The number of major page faults that have occurred since the last update (see nMaj). - 57. vMn -- Minor Page Fault Count Delta - The number of minor page faults that have occurred since the last update (see nMin). 3b. MANAGING Fields¶. - • - As the on screen instructions indicate, you navigate among the fields with the Up and Down arrow keys. The PgUp, PgDn, Home and End keys can also be used to quickly reach the first or last available field. - • - The Right arrow key selects a field for repositioning and the Left arrow key or the <Enter> key commits that field's placement. - • - The `d' key or the <Space> bar toggles a field's display status, and thus the presence or absence of the asterisk. - • - The `s' key designates a field as the sort field. See topic 4c. TASK AREA Commands, SORTING for additional information regarding your selection of a sort field. - • - The `a' and `w' keys can be used to cycle through all available windows and the ` q' or <Esc> keys exit Fields Management. 4. INTERACTIVE Commands¶ Commands¶The global interactive commands are always available in both full-screen mode and alternate-display mode. However, some of these interactive commands are not available when running in Secure mode. - <Enter> or <Space> : Refresh-Display - These commands awaken top and following receipt of any input the entire display will be repainted. They also force an update of any hotplugged cpu or physical memory changes. - ? | h : Help - There are two help levels available. The first will provide a reminder of all the basic interactive commands. If top is secured, that screen will be abbreviated. - = . - * d | s :Change-Delay-Time-interval - You will be prompted to enter the delay time, in seconds, between display updates. -. 1) at the pid prompt, type an invalid number 2) at the signal prompt, type 0 (or any invalid signal) 3) at any prompt, type <Esc> - q :Quit - * r :Renice-a-Task - You will be prompted for a PID and then the value to nice it to. Commands¶The summary area interactive commands are always available in both full-screen mode and alternate-display mode. They affect the beginning lines of your display and will determine the position of messages and prompts. -. 1. detailed percentages by category 2. abbreviated user/system and total % + bar graph 3. abbreviated user/system and total % + block graph 4. turn off task and cpu states display - m :Memory/Swap-Usage toggle - This command affects the two summary area lines dealing with physical and virtual memory.. 4c. TASK AREA Commands¶The task area interactive commands are always available in full-screen. The following commands will also be influenced by the state of the global `B' (bold enable) toggle. CONTENT of task window SIZE of task window SORTING of task window - - y :Row-Highlight toggle - Changes highlighting for "running" tasks. For additional insight into this task state, see topic 3a. 
DESCRIPTIONS of Fields, the `S' field (Process Status). -. - S :Cumulative-Time-Mode toggle - When Cumulative mode is On, each process is listed with the cpu time that it and its dead children have used. - u | U : Show-Specific-User-Only - You will be prompted for the uid or name of the user to display. The -u option matches on effective user whereas the -U option matches on any user (real, effective, saved, or filesystem). -. - n | # : Set-Maximum-Tasks - You will be prompted to enter the number of tasks to display. The lessor of your number and available screen rows will be used. For compatibility, this top supports most of the former top sort keys. Since this is primarily a service to former top users, these commands do not appear on any help screen. Note: Field sorting uses internal values, not those in column display. Thus, the TTY and WCHAN fields will violate strict ASCII collating sequence. command sorted-field supported A start time (non-display) No M %MEM Yes N PID Yes P %CPU Yes T TIME+ Yes. 4d. COLOR Mapping¶When you issue the `Z' interactive command, you will be presented with a separate screen. That screen can be used to change the colors in just the `current' window or in all four windows before returning to the top display. 5. ALTERNATE-DISPLAY Provisions¶ 5a. WINDOWS Overview¶ -. - Current Window: - The `current' window is the window associated with the summary area and the window to which task related commands are always directed. Since in alternate-display mode you can toggle the task display Off, some commands might be restricted for the `current' window. 5b. COMMANDS for Windows¶ - - | _ :. - * A :Alternate-Display-Mode toggle - This command will switch between full-screen mode and alternate-display mode. - * a | w :Next-Window-Forward/Backward - This will change the `current' window, which in turn changes the window to which commands are directed. These keys act in a circular fashion so you can reach any desired window using either key. - * g :Choose-Another-Window/Field-Group - You will be prompted to enter a number between 1 and 4 designating the field group which should be made the `current' window. -¶. - Home :Jump-to-Home-Position - Reposition the display to the un-scrolled coordinates. - End :Jump-to-End-Position - Reposition the display so that the rightmost column reflects the last displayable field and the bottom task row represents the last. 5d. SEARCHING in a Window¶You can use these interactive commands to locate a task row containing a particular value. - L :Locate-a-string - You will be prompted for the case-sensitive string to locate starting from the current window coordinates. There are no restrictions on search string content. - & :Locate-next - Assuming a search string has been established, top will attempt to locate the next occurrence. -. 5e. FILTERING in a Window¶You can use this Other Filter feature to establish selection criteria which will then determine which tasks are shown in the `current' window. -..Either of these RES filters might yield inconsistent and/or misleading results, depending on the current memory scaling factor. 
Or both filters could produce the exact same results.Potential Solutions GROUP=root ( only the same results when ) GROUP=ROOT ( invoked via lower case `o' ) RES>9999 ( only the same results when ) !RES<10000 ( memory scaling is at `KiB' ) nMin>9999 ( always a blank task window ).With Forest View mode active and the COMMAND column in view, this filter effectively collapses child processes so that just 3 levels are shown. !nTH=` 1 ' ( ' for clarity only ) nTH>1 ( same with less i/p ) !COMMAND=` `- ' ( ' for clarity only ) `PR>20' + `!PR=-' ( 2 for right result ) `!nMin=0 ' + `!nMin=1 ' + `!nMin=2 ' + `!nMin=3 ' ... 6. FILES¶ 6a. SYSTEM Configuration File¶ s # line 1: secure mode switch 5.0 # line 2: delay interval in seconds 6b. PERSONAL Configuration File¶This file is written as `$HOME/.your-name-4-top' + `rc'. Use the `W' interactive command to create it or update it. 6c. ADDING INSPECT Entries¶To exploit the `Y' interactive command, you must add entries at the end of the top personal configuration file. Such entries simply reflect a file to be read or command/pipeline to be executed whose results will then be displayed in a separate scrollable, searchable window. .type: literal `file' or `pipe' .name: selection shown on the Inspect screen .fmts: string representing a path or command .fmts= /proc/ %d/numa_maps .fmts= lsof -P -p %d .fmts= pmap -x %d 2> "pipe\tOpen Files\tlsof -P -p %d 2>&1" >> ~/.toprc "file\tNUMA Info\t/proc/%d/numa_maps" >> ~/.toprc "pipe\tLog\ttail -n200 /var/log/syslog | sort -Mr" >> ~/.toprc # next would have contained `\t' ... # file ^I <your_name> ^I /proc/%d/status # but this will eliminate embedded `\t' ... pipe ^I <your_name> ^I cat /proc/%d/status | expand - Inspection Pause at pid ... Use: left/right then <Enter> ... Options: help 1 2 3 4 5 6 7 8 9 10 11 ... 7. STUPID TRICKS Sampler¶Many of these tricks work best when you give top a scheduling boost. So plan on starting him with a nice value of -10, assuming you've got the authority. 7a. Kernel Magic¶For these stupid tricks, top needs full-screen mode. - • - The user interface, through prompts and help, intentionally implies that the delay interval is limited to tenths of a second. However, you're free to set any desired delay. If you want to see Linux at his scheduling best, try a delay of .09 seconds or less. . - • - Under an xterm using `white-on-black' colors, on top's Color Mapping screen set the task color to black and be sure that task highlighting is set to bold, not reverse. Then set the delay interval to around .3 seconds. - • - Delete the existing rcfile, or create a new symlink. Start this new version then type `T' (a secret key, see topic 4c. Task Area Commands, SORTING) followed by `W' and `q'. Finally, restart the program with -d0 (zero delay). 7b. Bouncing Windows¶For these stupid tricks, top needs alternate-display mode. - • -. - • - Set each window's summary lines differently: one with no memory (`m'); another with no states (`t'); maybe one with nothing at all, just the message line. Then hold down `a' or `w' and watch a variation on bouncing windows -- hopping windows. - • - Display all 4 windows and for each, in turn, set idle processes to Off using the `i' command toggle. You've just entered the "extreme bounce" zone. 7c. The Big Bird Window¶This stupid trick also requires alternate-display mode. - • - Display all 4 windows and make sure that 1:Def is the `current' window. 
Then, keep increasing window size with the `n' interactive command until all the other task displays are "pushed out of the nest". is top fibbing or telling honestly your imposed truth? 7d. The Ol' Switcheroo¶This. some lines travel left, while others travel right eventually all lines will Switcheroo, and move right
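As a concrete illustration of the SYNOPSIS options above, the following invocations are examples (not taken from the man page itself):

# one batch-mode snapshot, tasks ordered by the %MEM field, written to a file
top -b -n 1 -o %MEM > top-snapshot.txt

# refresh every 5 seconds and show only tasks owned by one effective user
top -d 5 -u www-data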
https://manpages.debian.org/stretch/procps/top.1.en.html
CC-MAIN-2018-17
en
refinedweb
From Elixir Mix configuration to release configuration - Alchemy 101 Part 2 by Thomas Hutchinson This is Part 2 in our Alchemy 101 series. Catch up on Part 1: Elixir Module Attributes and Part 3: Fault Tolerance Doesn’t Come Out Of The Box. Today we will be looking at what happens to your Mix configuration when you perform a release. Take a look at the Set up section and then proceed to Application Configuration. Set up You can follow along with the examples. You will require elixir and to perform the following steps. First create a new mix project. mix new my_app --module Twitter cd my_app Next add distillery (for creating releases) to mix.exs as a dependency. defp deps do [{:distillery, "~> 0.10.1"}] end Then download distillery and create the release configuration. mix deps.get mix release.init The rest of the blog assumes that you are in the my_app directory. Application Configuration When creating an Elixir OTP Application you will most probably need some configuration. There are 4 ways to supply application configuration on startup. 1. In your mix.exs file. Here you can specify default application environment variables. This file is used to generate the .app file which is used to start your application. Run ‘mix help compile.app’ for more information 2. In your config/config.exs file, this compiles down to sys.config. Alternatively with distillery you can supply your own sys.config file. 3. With an additional .config file. From what I can see distillery and exrm don’t seem to support this out the box. You can find out more here on how to use it. 4. You can supply it to the Erlang VM when it starts up. Distillery supports this via erl_opts in rel/config.exs. Simply add “-Application Key Value” to it for each application configuration variable e.g. set erl_opts: “-my_app magic_number 42”. From what I have seen most people tend to go with the option 2, supplying configuration via config/config.exs. As configuration is an Elixir script it gives us the potential to be very creative. When creating a release (with exrm or distillery) config.exs (by default) is evaluated and the result is written to rel/$app/releases/$version/sys.config which is picked up by your application on startup. Not knowing this can lead to confusion. Here comes another example where this can happen. Open lib/twitter_client.ex and add the following to it. defmodule Twitter do require Logger def log_twitter_url do url = Application.get_env(:my_app, :twitter_url) Logger.info("Using #{url}") end end Now add the following to config/config.exs. config :my_app, twitter_url: System.get_env("TWITTER_URL") Pretty nice eh? It appears like we can get TWITTER_URL at runtime. Lets create the release and inspect it. Run the following. export TWITTER_URL="" MIX_ENV=prod mix release ./rel/my_app/bin/my_app console iex(my_app@127.0.0.1)1> Twitter.log_twitter_url() 17:26:05.270 [info] Using Perfect! Everything looks good, the url to the mock is being used. Just what I want during development. Now I want to test the integration with the real twitter API, time to change TWITTER_URL. export TWITTER_URL="" Start your release in the console and invoke Twitter.log_twitter_url/0. ./rel/my_app/bin/my_app console iex(my_app@127.0.0.1)1> Twitter.log_twitter_url() 17:26:05.270 [info] Using Strange! It is still using the mock url, but why? As mentioned before when creating a release the configuration is evaluated and written to sys.config. Lets take a look. 
cat rel/my_app/releases/0.1.0/sys.config [{sasl,[{errlog_type,error}]}, {my_app,[{twitter_url,<<"">>}]}]. As you can see twitter_url is “”. When the release is being created config.exs is evaluated and the results are placed in sys.config. Part of this involved executing System.get_env(“TWITTER_URL”) and having “” returned. This doesn’t have to be a problem though as you can set Application configuration at runtime via Application.put_env/3. Using this you could create a function that reads the OS environmental variable and adds it to the application’s configuration. def os_env_config_to_app_env_config do twitter_url = System.get_env("TWITTER_URL") Application.put_env(:my_app, :twitter_url, twitter_url) :ok end Note that this function would have to be called before any initialisation in your application takes place. I’m sure there are other ways to handle this scenario, if so feel free to mention them in the comments section. Hope you enjoyed reading, tune in next time where I’ll be talking about what it means to be fault tolerant. This is Part 2 in our Alchemy 101 series. Catch up on Part 1: Elixir Module Attributes and Part 3: Fault Tolerance Doesn’t Come Out Of The Box.Go back to the blog
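One way to guarantee that this lookup runs before anything reads the configuration is to call it from the application's start callback. The following is only a sketch of that idea; it assumes my_app has a conventional Application module, which the post does not show.

# Hypothetical sketch: copy OS environment variables into the application env
# at boot, before any process that depends on :twitter_url starts.
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    os_env_config_to_app_env_config()
    Supervisor.start_link([], strategy: :one_for_one, name: MyApp.Supervisor)
  end

  defp os_env_config_to_app_env_config do
    twitter_url = System.get_env("TWITTER_URL")
    Application.put_env(:my_app, :twitter_url, twitter_url)
    :ok
  end
end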
https://www.erlang-solutions.com/blog/from-elixir-mix-configuration-to-release-configuration-alchemy-101-part-2.html
CC-MAIN-2018-17
en
refinedweb
#include <LevelAdvect.H>

Default constructor. Object requires define() to be called before all other functions. References m_dx, m_isDefined, and m_refineCoarse.

Actual constructor. Inside the routine, we cast away const-ness on the data members for the assignment. The arguments passed in are maintained const (coding standards).

Advance the solution by one timestep on this grid level.

Convert velocity from face-centered to cell-centered. In each direction, take the average of the normal component of velocity on the neighboring faces in that direction.

Fill in ghost cells by exchange at this level and then by interpolation from the coarser level (if any).

Get maximum wave speed.

Member data (descriptions as given in the class reference):
- layout for this level
- patch integrator
- physics class
- number of ghost cells needed locally for this level
- exchange copier
- interpolator for filling in ghost cells from the next coarser level
- grid spacing (Referenced by LevelAdvect().)
- problem domain - index space for this level
- refinement ratio between this level and the next coarser (Referenced by LevelAdvect().)
- whether a coarser level exists
- whether a finer level exists
- number of conserved variables (= 1)
- order of normal predictor
- whether to use 4th-order slope computations (otherwise, use 2nd order)
- whether to do slope limiting in the primitive variables
- whether to do slope limiting in the characteristic variables
- whether to do slope flattening - MUST BE USING 4th-order slopes
- whether to apply artificial viscosity of a set value
- artificial viscosity coefficient
- whether this object has been defined (Referenced by LevelAdvect().)
http://davis.lbl.gov/Manuals/CHOMBO-SVN/classLevelAdvect.html
CC-MAIN-2018-17
en
refinedweb
CDI and EJB 3.1: Complementary Technologies in the Java EE 6 Platform Contexts and Dependency Injection (referred to as CDI) is a new specification feature introduced in Java EE 6 Platform. This was earlier referred to as Web Beans and in due course, the name has been changed to Contexts and Dependency Injection. The primary objective of CDI specification is to bring together the Web tier and the transactional services of the Java EE platform (i.e., the idea are to bring the entire transactional services to the Web tier). CDI services facilitate the usage of Enterprise Beans in the JavaServer Faces technology. CDI provides the power of dependency injection and flexibility. Enterprise JavaBeans 3.1 (EJB 3.1), the latest release as part of Java EE 6 Platform specification, makes the development much simpler and easier. EJB 3.1 specification has simplified a number of features and provides a good declarative support for transactions and security. CDI and EJB 3.1 both are part of the Java EE 6 platform, act as complementary technologies. In this article, let's try to understand the synergy between both the technologies by starting with an introduction to CDI and EJB 3.1 and let's see how they complement each other in building powerful Web applications. Contexts and Dependency Injection (CDI): An Introduction CDI (JSR 299) is a specification that defines a powerful set of services for the Java EE environment that helps in developing Web applications easily. CDI helps to develop Java EE components that exist within the life cycle of an application with well defined scopes. In the Java EE Platform, there is a strong support for transactions in the business tier and persistence tier through the technologies like Enterprise JavaBeans and Java Persistence API. However, there is less/no support for transactions from the Web tier. They are more focused on displaying the presentation content and have limited access to transactional resources. CDI services help in unifying Enterprise JavaBeans (EJB) and JavaServer Faces (JSF) programming models. CDI services allow Enterprise JavaBeans to be used as the Managed Beans in JavaServer Faces framework. CDI also provides a good support for accessing transactional resources which facilitates in easy creation of Web applications using Java Persistence API. The services defined by this specification allow objects to be bound to lifecycle contexts, to be injected, to be associated with interceptors and decorators and to interact in a loosely coupled fashion by initiating and observing events. Primary objective of introducing CDI is to bring together different types of beans available in Java EE Platform like JSF Managed Beans, Enterprise Java Beans, etc. CDI helps in defining "bean" object which can be used in any of the tiers of the Java EE platform. In simple terms, a bean object defines applications state or logic within a context. Any Java EE component can be considered as a bean provided the life cycle of the component is managed by the container. Bean in CDI like any other beans is a POJO and the beauty is that it can take the shape of any other component with the help of annotations. Any class can be used as a managed-bean provided it meets all the requirements of a bean. 
Annotations are used to mark the bean to be of a specific type, @Model annotation is used on the bean to mark the class as a Model in the MVC architecture, @Named annotation is used on the bean to mark the class as a Managed Bean in the Java Server Faces application, @Stateful annotation can be added to mark the class as a Stateful Enterprise Bean. @Model public class Login { private String uname; private String pwd; public void setUname(String uname) { this.uname = uname; } public String getUname() { return this.uname; } public void setPwd(String pwd) { this.pwd = pwd; } public String getPwd() { return this.pwd; } } @Named @SessionScoped public class LoginBean implements Serializable { … } @Model @Stateful @SessionScoped public class Login { …. } Apart from these annotations there are qualifier annotations and scoped annotations. CDI services are provided to the components through the transaction and security annotations. Qualifier Annotations A qualifier helps in identifying a specific implementation of Java Class or Interface to be injected. A qualifier is an annotation applied to a bean. In order to define the qualifier annotation, the type should be defined as a qualifier: @Qualifier. A qualifier type is a Java annotation, custom annotation defined using @Target({METHOD, FIELD, PARAMETER, TYPE}) and @Retention(RUNTIME) annotations. @Qualifier @Retention(RUNTIME) @Target({TYPE, METHOD, FIELD, PARAMETER}) public @interface ValidUser { … } The qualifier defined in the above example can be used as follows: …. @ValidUser @Inject private String username; …. Scoped Annotations Scope is an important factor in Web applications. Scope defines the state of the object that is being held by the bean in a Web application. While the Web applications have well defined scope, there is no well-defined scope for the enterprise beans. When enterprise bean components are used within the Web applications, the components are not aware of the contexts of the Web applications and have no state associated with those contexts. It is meaningless to add the scope of the enterprise bean to the web-tier context. The following are the scopes used in the CDI beans: - Request Scope – @RequestScoped– Single HTTP request (Defined in Servlet Specification) - Session Scope – @SessionScoped– Across HTTP requests – a single user sequential requests (Defined in Servlet Specification) - Application Scope – @ApplicationScoped– Across all users within an application (Defined in Servlet Specification) - Dependent Scope – @Dependent– For a Single client and the lifecycle is same as that the client (Defined in CDI Specification) - Conversation Scope – @ConversationScoped– Scope between multiple invocations of JSF life cycle within the session boundaries (Defined in CDI Specification) CDI objects are scoped, have a well defined lifecycle context in the Java EE container. They are automatically created and destroyed when the context in which they created ends. Originally published on. Page 1 of 2
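To tie the scope, @Named and injection pieces above together, here is a small illustrative sketch; the AuditService collaborator and all class names are invented, and the annotations shown are the standard CDI ones rather than anything specific to this article.

// Illustrative only: a request-scoped CDI bean exposed to JSF via @Named,
// with a collaborator injected by the container. AuditService is a made-up interface.
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named            // reachable from JSF pages as #{loginController}
@RequestScoped    // one instance per HTTP request
public class LoginController {

    @Inject
    private AuditService audit;   // CDI resolves and injects an implementation

    private String uname;

    public String login() {
        audit.record("login attempt by " + uname);
        return "home";            // JSF navigation outcome
    }

    public String getUname() { return uname; }
    public void setUname(String uname) { this.uname = uname; }
}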
http://www.developer.com/java/ent/cdi-and-ejb-3-complementary-technologies-in-java-ee-6.html
CC-MAIN-2016-44
en
refinedweb
I still have more questions than I'd like in my JavaScript implementation of a Polymer + Boostrap element. But I think most of those questions can be deferred until later. Tonight I convert my custom Polymer elements into Dart using Polymer.dart. I am unsure what the code organization ought to be, so I am going to guess tonight and circle back around later to see if any changes are needed. Loosely following Dart Pub guidelines, I create an assetdirectory to hold my Polymer.dart HTML templates and my bootstrap CSS, a libdirectory for my Polymer.dart code, and a webdirectory for my sample application web page. Next, I create the usual pubspec.yamllisting only Polymer as a library (non-development) dependency: name: pricing_panels dependencies: polymer: any dev_dependencies: unittest: anyAfter a quick pub fetch, I am ready to go. Starting with index.html, I have very similar HTML to the JavaScript version: <!DOCTYPE html> <html lang="en"> <head> <title>Test</title> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta content="text/html; charset=UTF-8" http- <link type="text/css" rel="stylesheet" href="/assets/pricing_panels/bootstrap.min.css"> <!-- Load component(s) --> <link rel="import" href="/assets/pricing_panels/pricing-plans.html"> <link rel="import" href="/assets/pricing_panels/pricing-plan.html"> <!-- Load Polymer --> <script type="application/dart">export 'package:polymer/init.dart';</script> <script src="packages/browser/dart.js"></script> </head> <body> <!-- pricing plan html here --> </body> </html>The stuff that I placed in the assetdirectory is, by Pub convention, accessible from the /assets/<package name>/URL space, which is where the Polymer element definitions and Boostrap CSS are coming from. Also in there is the normal Polymer.dart initialization code. Well, the new normal at least as this has changed slightly since the last time I played with this project. Taking a closer look at the Polymer element definition, not much needs to change from the JavaScript version: <polymer-element <template> <div class="col-md-{{size}}"> <div class="panel panel-{{type}}"> <div class="panel-heading"> <h3 class="panel-title">{{name}}</h3> </div> <div class="panel-body"> <content></content> </div> </div> </div> </template> <script type="application/dart" src="/packages/pricing_panels/pricing_plan.dart"></script> </polymer-element>In fact, the only thing that has changed is the <script>tag, which, naturally enough, now points to Dart code. Since the Dart code resides in the libdirectory, the URL is slightly different than the assetHTML URL. The Dart code itself looks like: import 'package:polymer/polymer.dart'; import 'dart:html'; @CustomTag('pricing-plan') class PricingPlanElement extends PolymerElement { @observable String name = 'Plan'; @observable String type = 'default'; @observable int size = 4; PricingPlanElement.created() : super.created(); }That is more verbose than the JavaScript version, but only slightly, due to types and code annotations. The equivalent JavaScript was: Once I have similar definitions for the other custom element in this package (theOnce I have similar definitions for the other custom element in this package (the Polymer('pricing-plan', { name: 'Plan', type: 'default', size: 1 }); <pricing-plans>container), this actually works. Almost. 
Interestingly, the CSS styles from the main document are not being applied. To get the Bootstrap CSS into the shadow DOM elements, I have to tell the Dart code to allow "author" (from the main page) styles:

import 'package:polymer/polymer.dart';
import 'dart:html';

@CustomTag('pricing-plan')
class PricingPlanElement extends PolymerElement {
  @observable String name = 'Plan';
  @observable String type = 'default';
  @observable int size = 4;

  PricingPlanElement.created() : super.created() {
    shadowRoot.applyAuthorStyles = true;
  }
}

With that, I have my Bootstrap panels. What is interesting is that I did not need to tell my JavaScript version to do the same thing. I am unsure if this is due to different Chrome versions (31 for Dart, 32 for JavaScript) or the embedded Dart VM in the former. The "author styles" thing seems to be a real thing, so I would expect that it is needed in the JavaScript version as well. Ah well, grist for another day. For now, I have made the successful transition with a minimum of pain. Up tomorrow, I will explore testing of Polymer in Dart. Hopefully, thanks to Dart's excellent testing and my hard-earned knowledge of Polymer testing, this will go a little smoother than when I was earning that knowledge.

Day #944

I get Exception: The null object does not have a setter 'applyAuthorStyles='. NoSuchMethodError : method not found: 'applyAuthorStyles=' Receiver: null Arguments: [true] when I try assigning true to it in created(). In the enteredView() method it does exist, and I can assign true to it, and it seems to work:

@override
void enteredView() {
  shadowRoot.applyAuthorStyles = true;
}

Interesting. I'm not getting that error. I even tried upgrading Dart (to 1.0.0.3_r30188) along with my Pub packages. Still, I don't doubt that error is lurking somewhere. I'll keep an eye out for it -- especially while trying out the testing stuff...

Yup, I did eventually see this -- when I was testing. When I created a new Element.tag(), the shadow DOM had not created itself, resulting in the same error that you noted. I solved it the same way (enteredView).

Yes, I was wondering why it might be working for you. It happens even with attributes passed to it like this. It's null in created(), but, again, not so in enteredView().

OK, didn't realise the replies would convert my tags to entities. Anyway, this is what is missing in my last reply: <poly-el</poly-el>
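The post mentions but never shows the Dart class behind the <pricing-plans> container element. A minimal sketch, assuming it mirrors the pricing-plan class above (the class name and the observable field are made up for illustration; the real element is defined in pricing-plans.html), might look like:

import 'package:polymer/polymer.dart';

// Hypothetical container element mirroring the pricing-plan pattern above.
@CustomTag('pricing-plans')
class PricingPlansElement extends PolymerElement {
  // Illustrative observable field; the actual element may expose different attributes.
  @observable String heading = 'Plans';

  PricingPlansElement.created() : super.created();
}

Presumably the matching <polymer-element> template also uses a <content> insertion point so the individual pricing-plan children land inside the container's markup, just as the pricing-plan template does above.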
https://japhr.blogspot.com/2013/11/converting-javascript-polymer-to-dart.html
CC-MAIN-2016-44
en
refinedweb
RSpec 2 add-on for specifying and testing generators

This project contains RSpec 2 matchers, helpers and various utilities to assist in writing Generator specs. There is additional support for writing specs for Generators in Rails 3.

Why?

Rails 3 has a Rails::Generators::TestCase class for use with Test-Unit, to help test generators. This TestCase contains specific custom assertion methods that can be used to assert generator behavior. To create an RSpec 2 equivalent, I wrapped Rails::Generators::TestCase for use with RSpec 2 and created some RSpec 2 matchers that mimic the assertion methods of the Test-Unit TestCase. I have also added a bunch of "extra goodies" to the mix. This RSpec DSL should make it very easy and enjoyable to spec and test your Generators with RSpec 2 :)

Feedback

Please let me know if you find any issues or have suggestions for improvements.

Install

gem install generator-spec

The gem is a jewel based on jeweler. To install the gem from the code, simply use the jeweler rake task:

rake install

Usage

The following demonstrates usage of this library. There are many more options and DSL convenience methods (see the wiki, code or specs).

Configuration

First set up spec_helper.rb. Here is an example configuration.

# spec/spec_helper.rb
require 'rspec'
require 'generator-spec'

# configure it like this to use default settings
RSpec::Generator.configure do |config|
  config.debug = false
  config.remove_temp_dir = true
  config.default_rails_root(__FILE__)
  config.logger = :stdout # :file to output to a log file, logger only active when debug is true
end

# or customize the location of the temporary Rails 3 app dir used
RSpec::Generator.configure do |config|
  # ...
  config.rails_root = '~/my/rails/folder'
end

Specs for generators

I recommend having a separate spec file for each generator (generator specs). You can use the special require_generator statement to ensure that one or more generators are loaded and made available for a given spec.

require_generator :canable

This will load the generator: generators/canable_generator.rb

If the generator is in a namespace (subfolder of generators), use a nested approach like this:

require_generators :canable => ['model', 'user']

This will load the generators: generators/canable/model_generator.rb and generators/canable/user_generator.rb

You can also load generators from multiple namespaces and mix and match like this. I recommend against this, however, as it is difficult to read.
require_generators [:canable => ['model', 'user'], :other => :side, :simple]

Auto-require all generators

You can also require all generators, or all within a specific namespace, like this:

require_generators :all
require_generators :canable => :all

Example: full generator spec

# spec/generators/model_generator_spec.rb
require 'spec_helper'

# list of generators to spec are loaded
require_generator :canable

describe 'model_generator' do
  # include Rails model helpers for ActiveRecord
  # available:
  # Other ORM options - :mongo_mapper, :mongoid and :data_mapper
  # note: use_orm auto-includes the :model helper module
  use_orm :active_record

  # load helper modules and make available inside spec blocks
  # here the module in rails_helpers/rails_migration is included
  # to load multiple helpers use the method - use_helpers
  use_helper :migration

  before :each do
    # define generator to test
    setup_generator 'model_generator' do
      tests Canable::Generators::ModelGenerator
    end
    # ensure clean state before each run
    remove_model :account
  end

  after :each do
    # ensure clean state after each run
    remove_model :account
  end

  describe "the weird stuff!!!" do
    before :each do
      @generator = with_generator do |g|
        g.run_generator :account.args
      end
    end

    it "should not work without an existing Account model file" do
      @generator.should_not generate_file :account, :model
    end
  end

  it "should not work without an existing Account model file" do
    with_generator do |g|
      g.run_generator :account.args
      g.should_not generate_file :account, :model
    end
  end

  it "should decorate an existing Account model file with 'include Canable::Ables'" do
    with_generator do |g|
      create_model :account
      g.run_generator 'account'.args
      g.should generate_model :account do |content|
        content.should have_class :account do |klass|
          klass.should include_module 'Canable::Ables'
        end
      end
    end
  end
end

Code specs

There are a bunch of specialized Ruby code matchers in the matchers/content folder which can be used to spec code files in general. Check out the specs in spec/generator_spec/matchers/content for examples of how to use these.

Rails specs

The rails_helpers folder contains a bunch of files which make it easy to spec Rails files and to perform various "Rails mutations". These mutations make it easy to set up the temporary Rails app in a specific pre-condition, which is required for a given spec.

Examples: Rails helpers

require File.expand_path(File.dirname(__FILE__) + '/../../../spec_helper')

describe 'controller' do
  include RSpec::Rails::Controller

  before :each do
    create_controller :account do
      %q{
        def index
        end
      }
    end
  end

  after :each do
    remove_controller :account
  end

  it "should have an account_controller file that contains an AccountController class with an index method inside" do
    Rails.application.should have_controller :account do |controller_file|
      controller_file.should have_controller_class :account do |klass|
        klass.should have_method :index
      end
    end
  end
end
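For comparison with the full generator spec above, here is a rough sketch of what a spec for the second canable generator (user) might look like. It reuses only DSL calls shown earlier in this README; the generator class name Canable::Generators::UserGenerator and the Canable::Cans module are assumptions for illustration, not taken from the gem.

# spec/generators/user_generator_spec.rb (hypothetical example)
require 'spec_helper'

require_generators :canable => ['user']

describe 'user_generator' do
  use_orm :active_record

  before :each do
    setup_generator 'user_generator' do
      tests Canable::Generators::UserGenerator # assumed class name
    end
    remove_model :user
  end

  after :each do
    remove_model :user
  end

  it "should decorate an existing User model file with 'include Canable::Cans'" do
    with_generator do |g|
      create_model :user
      g.run_generator 'user'.args
      g.should generate_model :user do |content|
        content.should have_class :user do |klass|
          klass.should include_module 'Canable::Cans' # assumed module name
        end
      end
    end
  end
end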
http://www.rubydoc.info/gems/generator-spec/frames
CC-MAIN-2016-44
en
refinedweb
We are 100% user-supported! Without you, there is no RationalWiki! Help and donate today! Conservapedia talk:What is going on at CP?/Archive251 [edit] Andy will not intervene. This is now my prediction. He will simply ignore the shitstorm on his talk page and all over his wiki. He will not demote anyone, he will not give his judgement. Eventually, one or more of two things will happen: - People will leave, fed up with the situation - Conservapedia will share a characteristic with RW, and have semi-regular bouts of vitriolic exchange between certain users should they ever stumble across each other. This is what I see happening.--"Shut up, Brx." 20:17, 31 July 2011 (UTC) - I think Andy is hoping the situation will resolve itself without him ever having to exert himself. I'm sure he's sitting there hoping Rob will piss off so he can demote him and be done with it. The thing is though if Rob continues to be stubborn, it's just going to drag on and on because Andy is the only one with the power to break the deadlock. I just wonder how many days of continuous flamewar is Andy going to be able to keep his ignoring up through. --JeevesMkII The gentleman's gentleman at the other site 20:25, 31 July 2011 (UTC) - IMO the flamewar will die down, and occasionally sputter up again. That's the way these things work.--"Shut up, Brx." 20:31, 31 July 2011 (UTC) - "Let's see, I've got about 300 new messages, maybe I should go check my talk page...oooh, INSIGHT!img" Of course he won't intervene, he's been letting this go on for a week, despite how much he hates disunity. Röstigraben (talk) 20:32, 31 July 2011 (UTC) - Yes andy, the rest of the planet being malnourished outside the western world doesnt exist--Mikalos209 (talk) 20:35, 31 July 2011 (UTC) 'Actually, actually' the problem is ... podpeople are taking over Conservapedia (or the editors are showing their true Reptilian character). Slightly more plausible than Conservapedia going into meltdown as participants realise *nobody really cares about them* (apart from the 'recreational rearrangers and rubbishers'). 212.85.6.26 (talk) 16:35, 1 August 2011 (UTC) [edit] I think… …that I might have a handle on why Andy has been even more silent about this than he normally has been. Of course, just as easily, it could be bollocks, but here we go: - Forget about the crumbling of the dream and editors daring to Question Authority (which has got to be a slap in the face to an arch-conservative to Andy), I think it goes much deeper than that. Up until now, Andy has had plausible deniability over Ken's actions. If any potential or current client ever asks him about the nonsensical arguments appearing on Conservapedia, Andy could always argue - Of course, anybody who actually knows the site knows that is bollocks, but it's plausible bollocks, something you could just about get away with. - But now, Catch 22. What does Andy do? I get the feeling that he instinctively supports Conservative, and certainly doesn't want to be held accountable to the rules that other editors have to follow, and so the instinct isn't going to be to back Rob up. But the problem with that approach is that it's professional suicide. The moment that Andy backs Conservative he takes ownership of all those wonderful flying kitties, pony essays, and hur, hur, you're fat and so wrong approach to arguing. Now I'm sorry, I don't care how right-wing an organisation or client is, they ain't going to be retaining or hiring any lawyer who shows that level of debating strategy. 
- But, on the other hand, if he supports Rob, he's going to piss off at least two of the inner-circle, which is pretty much a guaranteed way to end up with a few more knives in his back to join the ones planted there by PJR and TK. - So what direction does Andy go? He's kind of fucked either way. I also wonder if Andy is still harbouring ambitions. We know that he's tried politics, and got shot down. He's tried education, and been shot down. He's tried being a player in the conservative movement via Conservapedia, and failed. So now I wonder if maybe Andy's delusions of grandeur stretch to hoping to be nominated to be a judge if the right people get into power (the nomination itself is just a process of who you know, not what you know, unless it happens to be where the bodies are buried). Of course, having the nomination confirmed is a different matter, but I think Andy is delusional enough to think that, as it stands, Conservapedia isn't enough of a millstone around his neck to stop his nomination, but if Andy gets to be known as the Flying Kitty/Fat Pony lawyer who can't get a client…well even Andy's got to realise that would enough to sink him for good.-- Jabba de Chops 20:25, 31 July 2011 (UTC) - He can always rely on mummy's money.--"Shut up, Brx." 20:28, 31 July 2011 (UTC) - Because he's got it, I don't think it's money that Andy is really interested in. It's reputation, prestige and acknowledgement of his achievements (all of the things that his mother got), things that he just can't claim on his own merit.-- Jabba de Chops 20:37, 31 July 2011 (UTC) - The problem is, Andy already has supported and pushed kens crap before, so he can't claim to not "own it"--Mikalos209 (talk) 20:31, 31 July 2011 (UTC) - Mmm, true. But the support wasn't whole hearted and could be backed away from ("I don't agree with what Conservative was writing, I was just trying to stop Conservative from being censored." is the kind of argument that could be used to backtrack from it). Getting rid of Rob to support Ken however, that really is a different level of support. It says to the wider world, I'd rather support flying kitties, ponies vs. and hur, hur, you're fat, over accountability and responsibility.-- Jabba de Chops 20:37, 31 July 2011 (UTC) - Does anyone who Andy might possibly work with actually care about conservapedia, though? I mean, if he ran for a serious political office, his opponents could have their entire campaign simply be conservapedia's URL with "Want this?" written under it, but for the type of organisations Andy works for, I don't think it matters. Andy's been making ridiculous arguments on CP since almost day one and he still manages to find work. Also Andy's definition of "censorship" is vastly different to reality's definition. He could easily take Ken's side and claim he was defending Ken against liberal censorship being imposed by Rob, and it'd be consistent with all of his other claims of "censorship" (which, in andy's mind, simply means "calling a stupid idea stupid"). Hell, say a company does check up on his little project, they won't even see this battle between ken and rob. As soon as it's over, it's going to be wiped anyway, probably no matter who wins. X Stickman (talk) 01:04, 1 August 2011 (UTC) - Andy is being silent because he has no leadership skills and is a bit cowardly. 
It's not enough that a site he participates on is rioting among itself and its leaders are in a nasty public conflict, but this is a site that he personally founded and maintains and controls, and those leaders were appointed by him. But he doesn't step in because he's afraid to try to sort it out, because doing so would mean making big changes. The man is the basest coward.-- talk 01:40, 1 August 2011 (UTC) [edit] The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request. Oh dear. Maybe it's just one of CP's many instabilities, but wouldn't it be hilarious if Andy shut down to wiki to avoid dealing with the drama?--"Shut up, Brx." 20:24, 31 July 2011 (UTC) - Okay, so not really but it would be funny if that were the case--"Shut up, Brx." 20:29, 31 July 2011 (UTC) - I've been having trouble getting in the past hour, too. It's usually that way on weekday afternoons for me. nobsput down the toilet seat 20:34, 31 July 2011 (UTC) - No, what would be funny is if we had another "week that never was" where all the posts magically disappear and we were transported to back to how CP was in June 2011. ГенгисIs the Pope a Catholic? 01:26, 1 August 2011 (UTC) [edit] More Biblical Scientific Foreknowledge A break from the CP Civil War. I'm just going to quote Andyimg, cuz it's just mindboggling: Maybe in rich countries obesity is a bigger problem than hunger, but worldwide this is not the case. About a billion people are malnourished. Hell, there's a famine going on in the Horn of Africa right now. And how is more food being available than expected in a story even close to being "scientific foreknowledge"? Arghhhh, there's so much wrong with this. --Night Jaguar (talk) 20:35, 31 July 2011 (UTC) - Well maybe if they had chosen conservatism and Jesus over Satan and Communist liberals things wouldn't be so bad for them. --Mikalos209 (talk) 20:37, 31 July 2011 (UTC) - Didn't the UN just declare a famine in East Africa?--"Shut up, Brx." 20:42, 31 July 2011 (UTC) - This is rush's favorite line. How can there be a world wide food problem, if there are fat people and i can buy any food i want at 3 am? He also talks about children with pot bellies in africa as signs that they have eaten too much. En attendant Godot 20:44, 31 July 2011 (UTC) - As my AP euro teacher liked to say: The Miracle of Hyvee--Mikalos209 (talk) 20:45, 31 July 2011 (UTC) - Wow. What's particularly sickening about that is that the swelling of the abdomen in children is a symptom of famine. --"Shut up, Brx." 23:17, 31 July 2011 (UTC) - IS that what that actually is? --Mikalos209 (talk) 23:21, 31 July 2011 (UTC) - Maybe you should have put a smiley after that because some people obviously can't detect sarcasm. ГенгисIs the Pope a Catholic? 01:19, 1 August 2011 (UTC) - Yes. It's a very serious condition. Of course, it were were to truly believe in pure conservatism, Africa should be left to starve and not a single penny government money should be put into relief efforts. America has all the food because they "earned it" from working hard, unlike the lazy...</Ken>. Wingnuts truly do disgust me. Doraemon話そう!話そう! 23:36, 31 July 2011 (UTC) - No, that's Objectivism (related, but not the same). The conservative way would be to not allocate any government funds but to let private charities pitiably fail at doingdo the legwork. 
Which would amount to a bunch of starvelings with all the Bibles they could possibly want. Yay.--"Shut up, Brx." 23:44, 31 July 2011 (UTC) - As i said earlier: Well maybe if they had chosen conservatism and Jesus over Satan and Communist liberals things wouldn't be so bad for them. --Mikalos209 (talk) 23:45, 31 July 2011 (UTC) - I know they're hardly secretive about the US-centric thing, and I understand and accept it for the most part, but that statement just plainly and simply takes the piss. ADK...I'll dehydrate your eel! 01:37, 1 August 2011 (UTC) andy really is a wankstain isn't he? Oldusgitus (talk) 11:56, 1 August 2011 (UTC) - This is one of the ways politically conservative Christians rationalize the Christian duty to care for the less fortunate (and it's one of the clearest commandments Jesus made: care for the poor) and the "screw the poor" attitude of today's Republican Party: "oh, there aren't really that many poor people!" MDB (talk) 14:54, 1 August 2011 (UTC) [edit] Next on MPL "July 31, Conservapedia's best day yet! 3,000,000 visits and 2,000 edits!" Pippa (talk) 22:05, 31 July 2011 (UTC) - Three MILLION people visited today?--Mikalos209 (talk) 22:15, 31 July 2011 (UTC) - It was a slightly above-average day. --Sid (talk) 22:18, 31 July 2011 (UTC) - Must be all those facebook users abandoning the liberal bias in droves--Mikalos209 (talk) 22:32, 31 July 2011 (UTC) - The future is going to be awesome. All CP. All the time. No other form of entertainment or education will exist. --Inquisitor (talk) 23:18, 31 July 2011 (UTC) - Of course, 2 million of those were Ken moving Full Stops around and having Rob's user contributions page on auto-refresh every 5 seconds in a state of paranoia. Doraemon話そう!話そう! 23:38, 31 July 2011 (UTC) - I long ago gave up monitoring CP page views but 3m in a day looks like a lot of page bumping going on. I'd be interested to see which articles are getting the attention. ГенгисIs the Pope a Catholic? 01:16, 1 August 2011 (UTC) - Probably all the talkpages where this little conflict is going on. I know I've been paying far more direct attention than I usually do. X Stickman (talk) 01:28, 1 August 2011 (UTC) - That's utter nonsense, CP's server can handle at most 13 requests per second, which, assuming they actually achieved that, would give 1.1M views in a day. Once again, the Assfly is lying. SHOW US THE DATA! DeltaStarSenior SysopSpeciationspeed! 06:33, 1 August 2011 (UTC) [edit] Anger bear boils over Unblocking Human was just too much for him, poor dear. Human gets another one for his memoirs and Rob gets the smackdown too. He really doesn't like us, does he? --JeevesMkII The gentleman's gentleman at the other site 02:57, 1 August 2011 (UTC) - Just like this over there now. P-FosterThe Holy Roman Empire was neither Holy, nor Roman, nor an Empire. Discuss. 03:16, 1 August 2011 (UTC) - YOU ARE CEASING YOUR ACTIONS! This is pretty epic, but at the same time squirm inducingly embarrassing. When is Andy going to step in and put a stop to all this.... --JeevesMkII The gentleman's gentleman at the other site 03:23, 1 August 2011 (UTC) - 23:16, 31 July 2011 Iduan (Talk | contribs) unblocked RobSmith (Talk | contribs) (again. 
stop wheel warring both of you) - 23:15, 31 July 2011 Karajou (Talk | contribs) blocked RobSmith (Talk | contribs) with an expiry time of infinite (account creation disabled) (Incivility: You are ceasing your actions) - 23:15, 31 July 2011 Iduan (Talk | contribs) unblocked Karajou (Talk | contribs) (stop wheel warring) - 23:14, 31 July 2011 RobSmith (Talk | contribs) blocked Karajou (Talk | contribs) with an expiry time of 2 hours (account creation disabled) (cease your trolling actions now) - 23:14, 31 July 2011 Iduan (Talk | contribs) unblocked RobSmith (Talk | contribs) (stop wheel warring) - 23:13, 31 July 2011 Karajou (Talk | contribs) blocked RobSmith (Talk | contribs) with an expiry time of infinite (account creation disabled) (Incivility) - 23:13, 31 July 2011 RobSmith (Talk | contribs) unblocked RobSmith (Talk | contribs) - 23:11, 31 July 2011 Karajou (Talk | contribs) blocked RobSmith (Talk | contribs) with an expiry time of infinite (account creation disabled) (Incivility) - 23:10, 31 July 2011 RobSmith (Talk | contribs) unblocked RobSmith (Talk | contribs) - 22:44, 31 July 2011 Karajou (Talk | contribs) blocked RobSmith (Talk | contribs) with an expiry time of infinite (account creation disabled) (Incivility) - --Night Jaguar (talk) 03:27, 1 August 2011 (UTC) - The Koward has never got the military "do as I say, not do as I do" out of his system. And typically the lower ranking they are the more petty. ГенгисIs the Pope a Catholic? 04:00, 1 August 2011 (UTC) - They don't call them petty officers for nothing. ONE / TALK 08:52, 1 August 2011 (UTC) - RobS unblocked me in mid-May. I only finally went through all the hoops to bypass my IP 403 block today. I was just, um, what do they call civilian casualties? Oh, yeah, collateral damage. No big deal until I get my IP changed, really. Since no one at CP can figure out how to undo an IP read block. ħuman 07:51, 1 August 2011 (UTC) [edit] Ken's leaving for "2-3 years" He'll have input on the new CP power structure in 2-3 years.img Pathetic. PsyGremlinZungumza! 14:05, 1 August 2011 (UTC) - When did he imply he was leaving?--Mikalos209 (talk) 14:12, 1 August 2011 (UTC) - Absolutely all the time. He is terribly busy, you know.-- Kriss AkabusiAAAWOOOGAAAR!!1 15:06, 1 August 2011 (UTC) - He didn't. He did make another weird remark to support the laughably unlikely possibility that he's doing anything but sitting on his swampy obese ass in a filthy apartment without air conditioning obsessively refreshing recent changes with absolutely nothing better to do than lie about all his unspecified off-wiki obligations, which at most consist of finally taking a shower and going shopping for more Cheetohs and hot dogs. 15:07, 1 August 2011 (UTC) - Doesn't look like he's leaving, I think it's just the usual "I'm going to be incredibly busy pulling shit out of my arse for the foreseeable future so I won't be able to answer any questions about my shit-pulling antics or have any input on how such shit-pulling might be subject to oversight" ONE / TALK 15:09, 1 August 2011 (UTC) - Speaking of which, you know how Leaving And Never Coming Back is a 'thing'? Is there one we can use for Ken, something along the lines of Leaving And Immediately Coming Back With No Perceptible Change In Regularity Or Contribution, All The While Claiming To Be Away? ONE / TALK 15:09, 1 August 2011 (UTC) - It's shorter just to say he's insane. --JeevesMkII The gentleman's gentleman at the other site 15:34, 1 August 2011 (UTC) - An more truthful. EddyP Great King! Disaster! 
16:04, 1 August 2011 (UTC) - Wait, wait, wait. Let's go back here for just a second: "I don't know if there is a "silent majority", but since I don't believe in mob rule it is a moot point anyways." (Unsuprisingly) I don't think Ken understands what the kind of "mob rule" we had here means (as our kind of mob rule is explicitly not working with people being silent about something) and if he wasn't going for our mob rule, as what should we understand "mob" then? I would guess the majority, the normal people. Equally unsurprisingly Ken has admitted himself to be a fascist or authoriatarian at least, although looking at how he debates "totalitarian" may be the word to go. On another note which will cost you another second: Just because Ken doesn't believe in "mob rule" (a.k.a. democracy) doesn't make any point moot... --★uːʤɱ pervert 16:53, 1 August 2011 (UTC) [edit] Is Rob parodying Conservative? Rob's spelling flame, while written in Conservative's style, was pretty embarrassing. The proper phrase, as I understand it, is "without any further ado", just as Conservative wrote it. (Maybe Rob was trolling by correcting an already correct spelling — I'm no good with subtle humor.) Linkimg Phiwum (talk) 20:47, 1 August 2011 (UTC) - Well you have User:Conserative dressing down an editor with the same trademark sysop bullying here, replete with the not so sublte hinting of blocking for spelling errors (which the offender did not commit). The response is classic. nobsput down the toilet seat 22:00, 1 August 2011 (UTC) - "Ado" and "Adieu" are two totally different words. Whilst the latter is French the former appears to be Scandinavian in origin. --Horace (talk) 22:42, 1 August 2011 (UTC) [edit] Ken's view of li'l ol' England Ken, as you are in the USA have you met Barack Obama?img (I know it's four days old, but it's funny) Pippa (talk) 22:49, 1 August 2011 (UTC) - I know the feeling: "Oh, you are from Germany? I know another guy from Germany, he's <insert random description here> - do you know him?" - Me: "There are more than 80 million people in Germany..." --★uːʤɱ anti-communist 22:58, 1 August 2011 (UTC) - You're German? Do you know a guy called Jürgen Müller? DeltaStarSenior SysopSpeciationspeed! 23:40, 1 August 2011 (UTC) [edit] Karajou cleans out socks Good oneimg Terry! --Horace (talk) 01:09, 2 August 2011 (UTC) - "I am sysop, hear me roar!"img --Sid (talk) 01:16, 2 August 2011 (UTC) - Respect the Assfly's authoritah, even if he never actually chooses to exert it. He's behind my angry blocking streaks in spirit. --JeevesMkII The gentleman's gentleman at the other site 01:21, 2 August 2011 (UTC) - I'm just glad he vents his anger through Conservapedia, and not something that actually matters. ~SuperHamster Talk 01:22, 2 August 2011 (UTC) - Heh, Kowardjerk is brilliant! His 'warning' highlights exactly what Knobs is trying sort out. DeltaStarSenior SysopSpeciationspeed! 01:33, 2 August 2011 (UTC) - Heh heh heh! There's a few Seffrican IP addresses there, so they're all obviously socks of Psy!img DeltaStarSenior SysopSpeciationspeed! 01:35, 2 August 2011 (UTC) - Maybe the welcome template needs an additional instruction to read Karajou's talk page. I mean, really, it's so childish. If you want to issue an edict to everyone then you need to say it on the front page not in the inside of your bedroom door. Of course it's hilarious because you can almost hear the steam emanating from his ears. ГенгисIs the Pope a Catholic? 
01:48, 2 August 2011 (UTC) - Anyone who abuses this website; anyone who abuses the authority and ownership of this website by ASchlafly... - - Does that include you, and other sysops? Start with Conservative.' - ...anyone who comes in this website out of the blue and demands changes to the site like they own it... - - Rob didn't come "out of the blue", unless you mean socks voting. But hey, who's gonna stop you anyway? - ...anyone who harasses others within this website... - - It's only harassment if you say so, right? Your attempts at RW users in the past count as what? - C'mon Koward, keep true to your word. Throw out Conservative unless you want people looking down on you like a hypocritical drooling fool. Must suck, yelling in the mirror at times telling the person on the others side they're a moron. Your anger sustains us. NorsemanCyser Melomel 01:55, 2 August 2011 (UTC) - Karjou is a piece of shit. nobsput down the toilet seat 04:17, 2 August 2011 (UTC) - I have no idea who EdwardJS is. I really don't. He's not a sock of mine. Karajou is either mistaken or a bold face liar. Terry Benny (talk) 02:04, 2 August 2011 (UTC) - SIGHHHHH, nevermind, EdwardJS was an account a friend made and used for one day. KJ needs to understand that more than one person can edit from the same IP. Terry Benny (talk) 02:08, 2 August 2011 (UTC) - I'm kinda surprised everyone who voted for Rob to keep his Admin. rightsimg hasn't been blocked yet. It would have been a good trap to purge Rob's supporters. Btw, Rob is winning right now by 26 to 4. --Night Jaguar (talk) 02:33, 2 August 2011 (UTC) - Numbers, who needs 'em. --Mikalos209 (talk) 03:32, 2 August 2011 (UTC) [edit] CP Block Log The CP Block Log is just fab right now. It's like one of our block wars, except they're taking it very seriously. There's almost as much blocking of CP regulars as there is wandals and parrots. DogP (talk) 03:23, 2 August 2011 (UTC) - " It's like one of our block wars, except they're taking it very seriously." -> Lol. Good description. --Night Jaguar (talk) 05:53, 2 August 2011 (UTC) [edit] Oh no, nintendo has had its worst sales in 27 years! Nintendo having worst sales in 27 years = abandoning of Video Games!img nevermind that this was an inevitability given how everybody owns a Wii and the handheld market is already oversaturated, that theres a recession going on, and totally ignoring that Historically Nintendo has been the most Family-Friendly of the consoles--Mikalos209 (talk) 22:34, 28 July 2011 (UTC) - Never mind that Nintendo's consoles tended to have the more family-friendly games on average than Sony and MS. Also never mind that the sales drop is (likely) also because of the sub-par 3DS launch (WHERE THE FRIG ARE MY MUST-HAVE GAMES, NINTENDO?). --Sid (talk) 22:51, 28 July 2011 (UTC) - On the bright side, Nintendo is giving away 10 free GBA games and 10 NES games for anyone who bought the 3DS before the upcoming price drop, so now we'll actually have games to play on the 3DS...10 and 20 year old games, but games nonetheless... ~SuperHamster Talk 22:55, 28 July 2011 (UTC) - Implying something is wrong with old games? The first decade after the crash gave us the best games ever.--Mikalos209 (talk) 22:58, 28 July 2011 (UTC) - Oh, no, I'm real excited for it. It made my day when I heard about it. New games (when I say games, I mean Nintendo's big ones like Mario and Zelda, because I'm a nerd like that) would be nice too, though...waiting for the holiday season takes too long. 
~SuperHamster Talk 23:04, 28 July 2011 (UTC) - Yeah, the giveaway is a nice nod at early adopters (assuming that Nintendo manages to pull it off properly - so far, I've been less than impressed by their Online Strategy track record...), and I'd love to play Metroid Fusion and oldschool SMB again. If they throw in a NES-era Mega Man, I'd be practically ecstatic. --Sid (talk) 23:23, 28 July 2011 (UTC) - They said video games were over and done with back in 1983 too, look how that turned out (well for everyone who wasn't Atari).--BMcP - Just an astronomy guy 02:30, 29 July 2011 (UTC) - Hell, even a bastardized, rump state version of Atari is still kicking around taking good game franchises and killing them and selling nostalgia consoles nobody knows exists--Mikalos209 (talk) 02:59, 29 July 2011 (UTC) - This is the sort of stuff that keeps them in the conservative backwater. First, lots of conservative people like video games; Second, there is probably not one person who isn't buying games right now who has "walked away" from playing/buying because of the senseless violence. MPR items like this are what tell people the site is a loony bin. Andy dreams up a reason, no matter how silly it sounds to everyone else, and then reports it on CP as fact. His own source says it's because of crappy games, not a shift in consumer preference away from games. --Phil Leotardo da Vinci (talk) 14:22, 29 July 2011 (UTC) On another note, Nintendo's president just cut his salary in half due to the losses that Nintendo has gone though. If only American banks and corporations could follow suit. ~SuperHamster Talk 22:43, 29 July 2011 (UTC) - I never expected to hear a discussion about the 3DS on Rationalwiki. I agree hearing about the 20 game givaway made me glad to be a first adopter, some of the games their giving away look really cool. If only America's CEOs could follow Nintendo's lead and cut their pay when things go tits up. Protoman (talk) 17:49, 31 July 2011 (UTC) [edit] Kendoll's pathetic sockpuppeteering Does Kendoll really think nobody knows that his socks are really him? Gosh, why would I create an android app if I were a sock? Er, maybe because it took you two clicks on some stupid site to do, and you thought it would make you look genuine, you dickhead. At this point, cruel as it might be, I'd really like to see Kendoll desysopped but all his articles remaining in their locked state just as he likes them. --JeevesMkII The gentleman's gentleman at the other site 01:16, 1 August 2011 (UTC) - Me, I think they're parodists.--"Shut up, Brx." 01:24, 1 August 2011 (UTC) - Yeah, likely just bandwagon trolls/parodists. Fergus is putting marginally more effort into it than the rest, but that's it. Though Rob losing CheckUser just in time for the Ken-supporters to show up is indeed a fascinating "coincidence"... --Sid (talk) 01:28, 1 August 2011 (UTC) - Which, of course, makes Andy complicit to the whole affair. --Inquisitor (talk) 01:48, 1 August 2011 (UTC) - I got the impression that Fergus was a Ken impersonator more than a Ken sockpuppet. --Night Jaguar (talk) 02:00, 1 August 2011 (UTC) - Rob, looks like they have started picking off your supporters one by one. Incivility.. 
my ass--Buscombe (talk) 02:29, 1 August 2011 (UTC) - Here's two leaks: (1) I lost checkuser after I told Andy I had trouble accessing the site in the afternoons and was using a proxy; (2) in private confidential phone conversations with User:Conservative I explained in great detail what a wp:Strawman sockpuppet was; subject showed extreme interest in the phenomena and apparently never heard of it. Also, please note CP now allows functionaries to use sockpuppets and single purpose accounts [1] which brings it more inline with Wikipedia policy. And a note to Karajou, I am leaking only that portion of a discussion I told someone else, which in no way can be considered an ethical lapse. I am not discussing anything a second party told me in confidence. Do you see the difference? Oh, I see you just blocked me. I'm sorry, I repent, I'll never do it a again, whatever it was. How low do you wish me to grovel, Sire? May I Please Please Please Pretty Please come back? May I please, my good sire and lord, return from you banishment? Did you blocking stop me? did it make a point? Do you feel better now, or do feel you're a more effective CP sysop? nobsput down the toilet seat 03:00, 1 August 2011 (UTC) - Knowing Ken, once he learns something he tends to use it over and over again. He's pretty much just a three-trick pony. ГенгисIs the Pope a Catholic? 03:13, 1 August 2011 (UTC) - The phrase is "one-trick pony" but yeah whatever. Ken is really only "good" at ctrl-C/crtl-V. ħuman 07:43, 1 August 2011 (UTC) - I know what the phrase is. How about a bit of artistic license? ГенгисIs the Pope a Catholic? 08:25, 1 August 2011 (UTC) - 'three'? I say you give him too much credit... Eye on the ICR talk, or type, or whatever... 08:35, 1 August 2011 (UTC) - FergusE never needs more than one attempt at an edit, how could conservative fake that? Besides wouldnt an Android app hurt his precious web stats? — Unsigned, by: 71.197.167.224 / talk / contribs - a) The apps that stupid site produces are just a browser control pointed at the site in question, and b) No fucker is ever going to download it anyway. So, no. --JeevesMkII The gentleman's gentleman at the other site 03:35, 1 August 2011 (UTC) - Well, even an browser control wouldnt have Alexa installed. But i didnt realize it was so easily made, I assumed it was some clever plan to steal phone numbers of conservapedia users or something. Still, dont think it his sock, especially after How old is this guy? He has to sockpuppet on his on website which virtually no one reads except him, and the few who do do so just for a good laugh? He's either a 13 year old virgin or he has a really, really boring life.--108.193.118.126 (talk) 15:36, 2 August 2011 (UTC) - He is a 47 year old virgin. DeltaStarSenior SysopSpeciationspeed! 16:15, 2 August 2011 (UTC) - Correction, he'll be 49/50 now. And still a virgin, obviously. DeltaStarSenior SysopSpeciationspeed! 16:21, 2 August 2011 (UTC) [edit] And what a month it was! But August 2011 will be quite different: Most of the actions is due to Andy's abysmal lack of leadership. He lets his palace guards fight without even interfering when those are soiling his little castle. No doubt he'll crown the victor - and punish the defeated party - when the outcome gets clear. I tried to infuse some spine into himimg, but without any luck, I'm afraid. Without a competent deputy, Andy is certainly lost. And TK has shown that Andy is even in need of a deceptive incompetent second-in-command. 
larronsicut fur in nocte 16:59, 1 August 2011 (UTC) - Thanks for all your number-crunching! Is it possible to see a breakdown of Talk versus mainspace edits on CP in the last few months? I'm curious how much the recent sysop battles have contributed to the increase in edits. (ʞlɐʇ) ɹǝɯɯɐHʍoƆ 19:12, 1 August 2011 (UTC) As CP approaches completion & perfection, it is only natural that the number of edits will go down. Essentially, nearly all possible entries have been made. At best, you can just tweak a word or two here and there. I think the only thing left is an Ed Poor upload of a picture of Hello Kitty panties & the last touches on the Really Good Bible (RGB). Jimaginator (talk) 20:20, 1 August 2011 (UTC) - @CowHammer: the shaded area in the first pics show the number of edits made in namespace main - sorry, this information was somehow lost from the legends... larronsicut fur in nocte 07:05, 2 August 2011 (UTC) [edit] Jpatt, feminist *sigh*img Once again people that normally have no problem with oppressing women, jump in to safe the poor women from being exploited. DAMN YOU SEXUALITY! OUR NOT ENTIRELY 2000 YEAR OLD BOOK FROM SOME DESERT COUNTRY SAYS YOU ARE EVIL! --★uːʤɱ atheist 17:43, 1 August 2011 (UTC) - Worse, Jpatt is immediately contradicting his own headline and news article: - In fact, if you need to twist this headline, we'd need Andy-Logic: "Give it up liberals! America is becoming more conservative, and the liberal porn industry is feeling the sting!" --Sid (talk) 17:54, 1 August 2011 (UTC) - Wheres the figures that show Conservative states have higher porn buying?--Mikalos209 (talk) 17:57, 1 August 2011 (UTC) - Even private enterprise is liberal now. I guess to be a Conservative CP style you have to live in a rude shack, trapping and grubbing for root and tubers. --JeevesMkII The gentleman's gentleman at the other site 18:03, 1 August 2011 (UTC) - Everything that does anything that might be perceived as bad is liberal, but furthermore didn't you know that conservatives have always been against oppression? That's even where the word comes from: conservare from good ol' Christian Latin meaning "to preserve [those that are oppressed]" in fact there is an ongoing fight of the majority of Conservatives against the liberal leading elite that only wants to progress their power! But one day, a glorious day will come when the conservative masses take over society and ... and ... um… change absolutely nothing at all. --★uːʤɱ pirate 18:42, 1 August 2011 (UTC) Yes, I have always felt that treating women as chattal & forcing them to have 14 kids, is far superior to them being seen in the nude. It's the natural order. They prefer it that way. I think the blacks liked their servitude too, they made up all kinds of fun harvesting songs. Yes, I am being sarcastic. Jimaginator (talk) 20:08, 1 August 2011 (UTC) - The fact is that today's porn is generally not "exploitative of women" (which is a misogynistic concept anyway). Porn has gone mainstream, and you can see 10s of 1000s of free amateur videos made by couples trying to liven up their sex lives. --Phil Leotardo da Vinci (talk) 20:37, 1 August 2011 (UTC) - Porn is one of the few industries where women are paid more than men. Feminists should all in favour and hold it as a bastion of their cause! DeltaStarSenior SysopSpeciationspeed! 23:42, 1 August 2011 (UTC) - There is such a thing as feminist porn. That interesting little aside brought to you courtesy of a debate held on Newsnight. 
Got to figure it was a slow news day that day.-- Jabba de Chops 00:14, 2 August 2011 (UTC) - How does that work exactly? --Mikalos209 (talk) 01:39, 2 August 2011 (UTC) - There was a schism inside feminism between AP (anti-pornography) feminists and SP (sex-positive) feminists. I sympathise with the SP feminists personally but no doubt an AP feminist would say that's just because I am corrupted by pornography. Anyway, for an SP feminist making pornography - perhaps as an actor but more likely as a writer or director - is a perfectly acceptable career in which they can help construct more healthy fantasies for other people. SP feminists also tend to be behind things like blogs which review (women's) sex toys. 82.69.171.94 (talk) 07:49, 2 August 2011 (UTC) [edit] Now for something totally different... larronsicut fur in nocte 22:39, 1 August 2011 (UTC) No Seffricans? What about Psy? P-FosterThe Holy Roman Empire was neither Holy, nor Roman, nor an Empire. Discuss.Oh, "anonymous." P-FosterThe Holy Roman Empire was neither Holy, nor Roman, nor an Empire. Discuss. 22:44, 1 August 2011 (UTC) 22:45, 1 August 2011 (UTC) Psy never forgot to log in... larronsicut fur in nocte 22:48, 1 August 2011 (UTC) [edit] Blocked ranges and IPs over time I hope this works! larronsicut fur in nocte 17:05, 2 August 2011 (UTC) [edit] Exposing parody gets you scolded on RW and BLOCKED on CP. - Karajou: "Arrrr, William made accusations that FergusE is a parodist! If he comes back, I'll make him produce evidence! And if he refuses, he's out again!"img - Sid: "Are you blind? Here it is."img - Karajou: "Arrrr, thanks, but you weren't nice about it!"img *1 day block*img Apologies to whoever is controlling Fergus, btw., but Karajou's UTTER BLINDNESS was aggravating (and to be fair, Fergus wasn't even trying to be subtle anymore). --Sid (talk) 02:23, 2 August 2011 (UTC) - You know if it was anyone else KJ wouldn't have reacted like that. Senator Harrison (talk) 02:29, 2 August 2011 (UTC) - "...then you're going to present them in a nice, professional matter". Er, just like Karajou's talk page? He really is a buffoon. ГенгисIs the Pope a Catholic? 02:54, 2 August 2011 (UTC) - Because nothing says professional like the way Karajou communicates with others. Ultimate hypocrisy. ~SuperHamster Talk 03:02, 2 August 2011 (UTC) - You know the best part? IFergus still hasn't been blocked. --Roofus (talk) 03:18, 2 August 2011 (UTC) - I think you might be laying it on a bit thick with that last edit Roofus. --Horace (talk) 03:27, 2 August 2011 (UTC) - I might be, but at this point I'm just seeing how far I can push it until I get blocked. :P --Roofus (talk) 03:34, 2 August 2011 (UTC) - You might as well. You've already put Karajou in an impossible position. Having failed to block you to date (for no apparent reason other than the fact you were supporting Ken) your increasing obviousness as a parodist makes him look foolish, but in blocking you now he will look worse. Well done. --Horace (talk) 03:39, 2 August 2011 (UTC) - At last! Not sure why it took so long. I guess he forgot to run checkuser over the editors who voted in support of demoting Rob. Easy mistake to make. I note that Bclough is still there though. --Horace (talk) 04:16, 2 August 2011 (UTC) - Blocked... for being a sock? Karajou shows the oddly misaligned focus I'd expect from Darkwing Duck. If anybody doubts that CP is utterly unable to deal with parody, here's a textbook example. 
--Sid (talk) 10:49, 2 August 2011 (UTC) - The Bclough thing reminds me of when a bunch of parodists signed up to egg Schlafly on in the Lenski affair all using the name of British footballers. Fun times. ГенгисIs the Pope a Catholic? 13:58, 2 August 2011 (UTC) - Pathetic. nobsput down the toilet seat 17:06, 2 August 2011 (UTC) - Back then, Andy had made up his mind anyway. He openly turned down or ignored Philip's advice (to align CP with AiG/etc. there and simply dismiss the results) right from the start, both in public and in the Conservaleaks. All the parodists (and that Holocaust denier) did was accelerating things. This is a pattern that can be found every now and then with Andy: He'll make a "suggestion" and ask for input, then he'll summarily ignore all input and go ahead with his initial plan. See for example the times when he asked the sysop mailing list about letting TK back or about giving Bugler sysop rights. --Sid (talk) 17:25, 2 August 2011 (UTC) - Pathetic? A Lenski letter XI would've had a pretty decent midfield. I'd've been worried about the lack of a top class striker, but none of them seemed interested in exposing Lenski's deceitz. :( --Robledo (talk) 20:17, 2 August 2011 (UTC) - Rob, my point about the footballers names was that Andy eagerly gobbled them all up as support for his call for the release of the Lenski data. I don't think any of them made any other edit but it showed how ideologically driven Schlafly is and those people were performing the role of the tailors in The Emperor's New Clothes. Far too many sysops are prepared to go along with the how wonderful is Mr Schlafly and are as guilty as any parodist in not challenging him. The ones who did call him out such as Philip, DanH, TimS are sadly nowhere to be seen any more. ГенгисIs the Pope a Catholic? 22:24, 2 August 2011 (UTC) [edit] Karajou can have the shit-hole he created I'm taking some time off. See you at Ameriwiki. nobsput down the toilet seat 03:55, 2 August 2011 (UTC) - Ok?--Mikalos209 (talk) 04:18, 2 August 2011 (UTC) - I'm a little hot right now; I've spent years wanting to reform that project, and that worthless little cocksucker who contributes less than 1.2% of edits on site with 10 people editing can just go fuck himself with a carrot stick. What a worthless piece of human excrement. nobsput down the toilet seat 04:23, 2 August 2011 (UTC) - But I'm the guy who made him famous in Wikipedia, "Brian McDonald says, 'people think Conservapedia is just whacko'"; what a fitting epitaph. nobsput down the toilet seat 04:26, 2 August 2011 (UTC) - Go bitch somewhere else. you know it wouldn't have worked anyways.--Mikalos209 (talk) 04:32, 2 August 2011 (UTC) - Mikalos, this is probably the most appropriate place on the internet to bitch about CP--"Shut up, Brx." 04:37, 2 August 2011 (UTC) - Bitch about it with more class then atleast. Not like a child who got grounded.--Mikalos209 (talk) 04:38, 2 August 2011 (UTC) - He's complaining in the same fashion as us.--"Shut up, Brx." 04:40, 2 August 2011 (UTC) - What tipped the scales? Eye on the ICR talk, or type, or whatever... 04:41, 2 August 2011 (UTC) - I'm calling him out on it more because he constantly says he wants to reform it, says he wanted to for years, then acts surprised when it fucked up in his face.We all knew this was coming and if he honestly thought he had a chance in succeeding he's at best delusional.--Mikalos209 (talk) 04:44, 2 August 2011 (UTC) - He's frustrated that in spite of all his efforts so little has happened. --"Shut up, Brx." 
04:48, 2 August 2011 (UTC) - He should have known after spending all this time there nothing would have anyways, and was foolish for getting his hopes up he would do anything--Mikalos209 (talk) 04:51, 2 August 2011 (UTC) - If Rob had listened to you negative nancies in the first place, the fun we've seen the past few days would've never happened. Now you're bitching about his bitching, ironic eh? NorsemanCyser Melomel 14:10, 2 August 2011 (UTC) I bet Karaturd is literally sitting at his desk redfaced and trembling. 04:54, 2 August 2011 (UTC) - So, Ken and Angry Bear win the Great CP Flame War of '11. I for one welcome Conservapedia's new batshit crazy/mallcop overlords. Go, go forth and make CP an even bigger laughing stock than it already is. Let the reign of error begin! --Night Jaguar (talk) 05:07, 2 August 2011 (UTC) - I will continue to make my small edits until i get banned, nothing is changed from my expectations--Mikalos209 (talk) 05:08, 2 August 2011 (UTC) - I think we all need to say a big thank you to Robbie for what he has achieved: A more crazed Ken and an even more pathetic and enraged Koward; the result of which is that those people with a (what I assume to be) genuine concern about CP falling apart have been shouted down and/or blocked, and sycophantic parodists are on the up! Well done Knob Smith! DeltaStarSenior SysopSpeciationspeed! 05:13, 2 August 2011 (UTC) - It's true. Delta's dead right. And the beautiful thing is it was possible to have good faith hope in either outcome: Rob was doing the right thing but was bound to fail from the start, with the result being an even more megalomaniacal assemblage of fucking batshit crazy assholes. Ken is going to go on the biggest flying kitting and fat atheists bender you've ever seen, oversight every single critical reference to his sharticles from the last 8 months. Karaturd's going to get some release from blocking all the "parodists" and "sockpuppets" (forgetting of course that millions of potential editors would have to use proxies, and hence share IPs, because Schlafly the dope 403'd the planet earlier this year). It's beautiful, and I don't feel a lick of shame hoping for CP's speedy decent into heretofore unplumbed depths of insanity now that Rob gave reform the old college try. Fuck yes. 05:21, 2 August 2011 (UTC) - I'd have loved for Rob to succeed and would love to see CP reformed, i just didn't want to lose my... i think fourth account since late 2008/early 2009--Mikalos209 (talk) 05:26, 2 August 2011 (UTC) - I see Karajerk has also locked down his talkpage, al a Ken. Good to see the spirit of openness is alive and well at CP. I've been e-mailing the big douche bag to get his off-the-record opinion, but of course he's too much of a COWARD to respond. I for one look forward to Ken spouting his nonsense all over CP now - he's untouchable. 10-1 says the featured article suddenly vanishes, like it did last time. PsyGremlin말하십시오 13:09, 2 August 2011 (UTC) - Rob, if you want to reform CP then the best solution would be just to nuke the whole thing and start over from scratch, and get Andy off the site while you're at it. Most of the articles on that site are either parodies or half-parodies, or just crap that was cut/pasted from fringe sites like WorldNutDaily. Plus no one's going to take a website seriously which has headlines plastered all over the front page such as "Bruce Springsteen didn't sing "Born in the USA" at a concert because he secretly knows Obama is from Kenya!" 
or "Heathen Germany lost the World Cup to Christian Poland and Brazil because Germany doesn't allow religious homeschooling". Even if there was any good content on that site, it's wasted on them. You'd be better off just writing your own articles and starting a blog or something, because that site is too far gone.--108.193.118.126 (talk) 18:04, 2 August 2011 (UTC) [edit] There's no shame in being beaten by the best And let's face it Knob, KowardJerksOffOverMen and Ken are true intellectual giants and men of great integrity. DeltaStarSenior SysopSpeciationspeed! 05:10, 2 August 2011 (UTC) - more ancient asian secrets?img--Mikalos209 (talk) 05:27, 2 August 2011 (UTC) - wtf is Marathi? Also, I'm slightly ashamed at having indulged this manchild and attempted to translate whatever it is he picked up somewhere on the web--"Shut up, Brx." 05:31, 2 August 2011 (UTC) - Marathi (मराठी Marāṭhī) is an Indo-Aryan language spoken by the Marathi people of western and central India. It is the official language of the state of Maharashtra. There are 90 million fluent speakers worldwide.[2] Marathi has the 4th largest number of native speakers in India[5] and is the 15th most spoken language in the world.[6] Marathi has the oldest of the regional literatures in Indo-Aryan languages, dating from about 1000 CE.[7] --Mikalos209 (talk) 05:33, 2 August 2011 (UTC) - I'm pretty sure Pam is a buddy from his local Methodist church. ГенгисIs the Pope a Catholic? 05:57, 2 August 2011 (UTC) - So he is able to put 'crush atheism' in English-Hindi translation program and copy&paste the result. Wow, we've obviously underestimated this great mind. --Night Jaguar (talk) 06:02, 2 August 2011 (UTC) - Curious that capturebot can't take good shots of it... Eye on the ICR talk, or type, or whatever... 06:03, 2 August 2011 (UTC) - So CP is literally turning into Babel. Brilliant work...oh, excuse me: tốt công việc, bảo thủ! Röstigraben (talk) 07:12, 2 August 2011 (UTC) BTW, that is Hindi. There is some evidence, to suggest he is a (?second generation) immigrant from South India. Did he/she/it have an Indian accent Rob? --Buscombe (talk) 08:21, 2 August 2011 (UTC) - I'm not sure we should be going down this route should we? Fine to speculate imo but perhaps hold back on the confirmation? But then maybe I'm wrong on this. Oldusgitus (talk) 09:41, 2 August 2011 (UTC) - Anupam volunteers a lot about himself through his user boxes on CP and Wikipedia and publicly linked the two accounts when he copy/pasted his WP articles to CP. ГенгисIs the Pope a Catholic? 13:49, 2 August 2011 (UTC) - I was talking about Ken--Buscombe (talk) 14:24, 2 August 2011 (UTC) [edit] Not satisfied I appreciate that pricks like Karajou might begin to grind one down and wear one out with their base stupidity and belligerence after a while. But for Rob to throw in the towel at this point seems a bit much. After garnering 24 nay votes on the "Should RobS be ceremonially disemboweled?" page he has effectively hung his supporters out to dry by conceding and buggering off to Amerowiki or wherever he said he was going. If one didn't know better one might almost suspect that this was a successful counter insurgency operation. I think that Rob is genuine so I do not suggest that seriously, but it may nonetheless have the same effect in the end. Get back in there Rob! --Horace (talk) 10:25, 2 August 2011 (UTC) - He told Karajou to fuck himself with a carrot stick. I think Rob is done. 
Senator Harrison (talk) 11:12, 2 August 2011 (UTC) - Sadly Rob did not have the support of the one vote that really mattered, Herr Schlafly's. Frankly its his silence in all this that has interested me the most. It speaks volumes about the kind of shop he runs and people he gathers around him. Now that the sound and fury of so little significance is over I can only imagine him peaking his head timidly out from his pillow fortress with a softly whimpered "Haz da bad man gone nowz?" As for Rob, that he has finally wised up and moved on to greener pastures is encouraging. That he is heading over to yet another conservative POV wiki is...not. C'est la vive I suppose. You can lead the horse to water, etc, etc, etc. - What will really be interesting though is how things move from here. Kara and Ken have now slipped nicely into Co-Dragon positions, splitting the dearly departed TK's aggression and insanity between them. Any takers on how long it takes them to finish mopping up Rob's supporters before casting a squinting, suspicious eye at their remaining fellow sysops? "You know, you didn't openly support Rob, but I didn't hear you decrying him either. Vhat are you hiding comrade...?" --Tygrehart - Yeah, Andy's near complete absence in the fight over his own website was astounding. Anyway, the purge should be starting any time soon.... --Night Jaguar (talk) 11:56, 2 August 2011 (UTC) - No one who voted there expected anything good from it - they all knew that he was taking a stand and so were they.-- talk 11:34, 2 August 2011 (UTC) - Karajou boasts 20,000 edits, but seriously, I haven't seen one that would not have attracted the attention of a Wikipedia Administrator and a possible warning. He's reminds me of wp:Luca Brasi in the opening scene of the Godfather presenting a gift "on this the day of your daughter's wedding"; even the Godfather looks at him cross eyed cause he's a little scary. nobsput down the toilet seat 12:08, 2 August 2011 (UTC) - But why did you throw in the towel now? I don't see anything that happened on the site overnight that explains your sudden capitulation. You seem to have given up your noble quest for no reason at all. Phiwum (talk) 12:14, 2 August 2011 (UTC) - I'll think you'll find that Karajou has already taken those who voted "nay" out back and shot them. Amazingly I apparently had 4 socks accounts there. Who knew? Just goes to show what a dishonest cunt the man is. PsyGremlinTala! 13:48, 2 August 2011 (UTC) - So, Karajou's claims about checkuser results were bogus? Surely, Rob still has an ally there who could've run the same checks, no? Phiwum (talk) 14:01, 2 August 2011 (UTC) - Well what do you expect if every German IP is Sid, then every Seffrican IP is PsyGrumbling. Me, I like to keep them guessing; this week I'm editing from Guatemala, let him try and find that IP with his manly checkuser extension. ГенгисIs the Pope a Catholic? 14:06, 2 August 2011 (UTC) - He's unearthed at least two socks at time of writing that belong to Rob apparently :P--Mikalos209 (talk) 14:22, 2 August 2011 (UTC) - Wait, those socks are also from South Africa, they can't be Rob they must be Psy. ГенгисIs the Pope a Catholic? 14:30, 2 August 2011 (UTC) - Damn it, the truth is out! I confess, I was RobS! (Hey... why do you think Rob wasn't around when Jessica was... just sayin') Honestly though, I plead ignorance. However, it is fun to see Rob become the new prime evil. How long until Andy strips his rights? 
More importantly, how long should we wait before asking for the next instalment for Conservaleaks? A week? PsyGremlinTal! 14:45, 2 August 2011 (UTC) - Thanks. Look what you did now. The man is dangerous behind the wheels of a tricycle. nobsput down the toilet seat 17:29, 2 August 2011 (UTC) - With all due respect to all involved, is there anyone here at all that believes that some unfortunate conservative in South Africa has lost a sincere opportunity to contribute to Conservapedia? Karajou's claims of sockpuppetry may be ill-founded, and it sure seems like the rules there are mere excuses for various sysop abuse, but to be honest, there aren't really all that many honest victims from the recent purge. Chances are pretty damned good that any contributors from Germany or South Africa don't really intend to benefit the "project" in the way intended. - I don't think I'm missing the point of the outcries entirely, but I sincerely don't think that there are many legitimate victims here in the past twenty-four hours. Rather, it's mostly a bunch of RW-inspired accounts being purged, unless I'm mistaken. (I get Rob's point as well — if they really do say the right thing, then who cares about their motive. But in determining victimhood status, motive counts.) - That's not to say that previous abuses didn't drive off legitimate contributors, nor that the current atmosphere won't do the same. I imagine there are a lot of sympathetic parties that get blocked for no good reason and that's a damned shame for CP. But Karajou's recent blocks? I can't get all that upset. Sorry. Phiwum (talk) 20:00, 2 August 2011 (UTC) - Using the same logic, you might as well ban all non-sysops and even some sysops. Or do you really think that people in the US think "Why certainly, flying kitties and the beauty of fall leaves do disprove evolution, and I also had many doubts about General Relativity! Indeed, I believe that homeschooling is the answer because it teaches our children that boys won't ask girls out if the latter get better test results. Also, the Bible is filled with liberal bias, and atheists are fat!"? CP is so far gone that I honestly think that all current editors other than the Fab Five are at least reading RW. Every. Single. One. - Because nobody else out there will bother to contribute unless bribed with promises of immunity and of absolute editorial freedom to pimp their crap. And even then they will likely go "Uh, no." Even Ed noted in the Conservaleaks that CP fails to attract conservative authors and wondered why that was so. Liberals laugh and move on, conservatives distance themselves, so only we remain. - CP has become a giant stage where people can go to try random shit. Some choose to play honest blokes, some play assholes, some sit in the corner and quietly analyze, and the core sysops stand in the center without realizing that nothing they say actually matters. - Do the blocks matter? In the big picture, they don't. CP could block literally the entire planet, and only we would notice. But like I said, the sysops believe in their little fantasy world, and they believe that they do the correct and smart thing. So if you dive in and look at it inside the illusion, it matters. And most discussions on (T:)WIGO implicitly do just that. We pretend that CP is not just a lone stage in the middle of nowhere. We are a part of their illusionary world. We give these people meaning. 
--Sid (talk) 21:50, 2 August 2011 (UTC) - What I enjoyed in the debate were boot-licking schmucks like Jcw, who wrote in defense of Ken: "I assumed that the sysops were accountable upwards, to ASchlafly. Isn't that the point - to avoid mob rule by having a clear chain of command with a definite leader at the top?" No, you idiot, there is no definite leader on the site, as if you needed any more evidence of that than this whole fiasco of Sysops run wild. --Phil Leotardo da Vinci (talk) 18:33, 2 August 2011 (UTC) [edit] Phyllis Schlafly WND column Phyllis Schlafly column on Whirled Nut Daily. This is just syndication and she probably doesn't even know she's appearing there, right? I mean, she wouldn't knowingly work for an editor who thinks her son is nuts, no? Mountain Blue (talk) 12:12, 2 August 2011 (UTC) - The article is hilarious. Does anyone else talk about "Edison light bulbs" (as opposed to those nasty ones from Communist China)? We can't wash our dishes! We can't give ourselves an "efficient body wash"! And "cars and light trucks will have to be lighter weight and thus more dangerous in accidents" - WTF? Cantabrigian (talk) 13:14, 2 August 2011 (UTC) - The shit doesn't fall far from the bat. --Night Jaguar (talk) 13:37, 2 August 2011 (UTC) - No, I'm pretty sure Mama Schlafly is a regular contributor.--"Shut up, Brx." 13:41, 2 August 2011 (UTC) - Her name is on the list of regular editors, heck it's even in their own list of contributors. ГенгисIs the Pope a Catholic? 13:47, 2 August 2011 (UTC) - Why is it that morons like mamma asfly and her son are all in favour of government staying out of our bedrooms right up until I and a consenting adult partner fancy a little anal sex. Then they are all in favour of the government getting HEAVILY involved, and not in a good or fun way. Oldusgitus (talk) 14:03, 2 August 2011 (UTC) - How is the government getting involved in sexual acts EVER good or fun? LordSlug You want me to do...work? what's that? 01:53, 3 August 2011 (UTC) - "While the ban on Edison light bulbs was passed before Barack Obama became president, we can blame him and his energy secretary" - that'll teach them for not letting her little boy be Harvard Law Review president. ГенгисIs the Pope a Catholic? - I wasn't interested in this section until I read about anal sex. NorsemanCyser Melomel 14:17, 2 August 2011 (UTC) - Yeah, I know she's a "regular contributor," but does she know about that or are they just using syndicated material she writes? When I was living in Austria they had a nativist/nationalist wingnut tabloid whose list of regular contributors included Hillary Rodham Clinton, if you get my drift. Mountain Blue (talk) 15:04, 2 August 2011 (UTC) - This is a baby-step forward for the wingnuts -- at least they now understand that the light bulb secret police was created by arch-commie Dubya. Nebuchadnezzar (talk) 16:34, 2 August 2011 (UTC) - Bangs head on desk. Hummers on my highway. cause those people need to be more safe in their Humvees when they push us little guys out of the way. and damn us all for using more efficient and light weight technologies with better research to make more safe cars, when all we should have done is just put armor plating on them and eaten up all the gas. and Fuck the birds, the elk, the wolves, the elephants and rhinos. if i want to hunt, or eat fish to extinction that's my right. but it's NOT your right to kill your baby, damn it. En attendant Godot 16:46, 2 August 2011 (UTC) - Oh, it's far worse than that...they'z comin' for yer toilets!
Nebuchadnezzar (talk) 16:50, 2 August 2011 (UTC) - The headline alone boggles the mind. "makes connection between abortion and toilets". um... HUH??????-- En attendant Godot 16:56, 2 August 2011 (UTC) Bill Maher pretty much nailed this one:Nebuchadnezzar (talk) 17:01, 2 August 2011 (UTC) - Thanks for sharing that awesome clip! Mama Phyllis has all the histrionics of baby Andy. I liked this: "...and he warned we can no longer set our thermostats at 72 degrees." That hysterical assertion comes from this on the 2008 campaign trail: “We can’t drive our SUVs and eat as much as we want and keep our homes on 72 degrees at all times … and then just expect that other countries are going to say OK,” Obama said." --Phil Leotardo da Vinci (talk) 18:42, 2 August 2011 (UTC) [edit] Irony meter *poof* Oh dear. When Ken starts running around, informingimg peopleimg how to "create a more collaborative spirit and increase the esprit de corps", you know his trolling his nipples off in celebration. This is the man who drove off another CP sysop who wanted Ken to be more collaborative. --PsyGremlin말하십시오 14:12, 2 August 2011 (UTC) - hehe...be more collaborative by creating a group within a group within a group. Occasionaluse (talk) 14:42, 2 August 2011 (UTC) - Ken nicely sums up why Conservapedia utterly failed to become a collaborative project: "(1) Everybody should find a free niche and stay there. (2) Nobody gets to challenge my stuff." Because we all know that "collaboration" means "look away and edit something else" on CP. --Sid (talk) 17:11, 2 August 2011 (UTC) - I'm glad Rob failed or I'd need to find a new dumb hobby. --Phil Leotardo da Vinci (talk) 18:44, 2 August 2011 (UTC) [edit] Notice of Acknowledgment I didn't check who wrote the wigo, but "manchild and mallcop" is as apt as it is pithy. Well done, good sir. Mountain Blue (talk) 16:14, 2 August 2011 (UTC) - Thanks, but I can't claim all the credit. People have been calling Ken "manchild" for at least a few days now (I don't know who started it) and "mallcop" I got from our TK article (describes Karajou just as well). --Night Jaguar (talk) 17:10, 2 August 2011 (UTC) - Yeah, I've seen manchild before myself, and I think the mallcop is ultimately due to Richard Jensen, of all people. Anyhow, aphorisms are like theorems; credit goes to the guy who ultimately puts the pieces together. Mountain Blue (talk) 18:21, 2 August 2011 (UTC) [edit] So, how bout that GW is fake Cause, apparently records were broken--Mikalos209 (talk) 23:52, 2 August 2011 (UTC) - Think again, libtard! Roy Spencer just got a new PEER-REVIEWED study published definitively debunking AGW...or not. Nebuchadnezzar (talk) 23:56, 2 August 2011 (UTC) - I read this and wondered how Bush might not be real. ГенгисIs the Pope a Catholic? 00:46, 3 August 2011 (UTC) - ME TOO! Senator Harrison (talk) 02:22, 3 August 2011 (UTC) [edit] The wise monkeys So looking at the riot on Andy's talkpage and wondering how he sees no evil by completely ignoring it, I checked his edit history to find his last contribution was 26 July now over a week ago. But looking at the total history of his talk page I see his first edit was 23 January 2011, ah yes that was Ken deleted and recreated the page thereby wiping out the prior edit history. One day we will find that Ken has been the only editor on CP as the contributions of every other editor will have been memory holed just to save Ken's face concerning his own ineptitude. ГенгисIs the Pope a Catholic? 
01:03, 3 August 2011 (UTC) - Nvm, but, dont forget, oneday all that will exist will be CP, and therefor, the only human will be Ken.--Mikalos209 (talk) 01:18, 3 August 2011 (UTC) - Spooky. I was actually just thinking to myself, with no snark or supposition at all, "Where exactly HAS Andy been during this main event cage match?" After all, CP is his (helmet wearing, shot bus riding, window licking) baby and such a blowout could not truly be escaping his notice. Was he really ignoring the whole dustup and hoping it would just go away or has he legitimately had something else occupying his time? Then I did a quick trip back through the WIGOs and saw he personally removed Robs check user rights on 7/28. That bit alone was enough to convince me he's well aware of what's been transpiring, has given his quiet support to Team Kenajou and evidently had no problem tossing Rob to the wolves. --Tygrehart - Andys been just as active the entire time. Just without talking about the civil war--Mikalos209 (talk) 01:22, 3 August 2011 (UTC) - Perhaps I wasn't explicit enough, I meant that he hadn't edited his talk page. He has of course been blogging the whole time on MPR. ГенгисIs the Pope a Catholic? 02:50, 3 August 2011 (UTC) - Never fear, we'll always have american history terms x. --JeevesMkII The gentleman's gentleman at the other site 06:05, 3 August 2011 (UTC) [edit] So, a scary thought about the Future of Humans according to CP They think they are replacing all "Liberal" things at a glorious rate, right? Well, that means all humans will have oneday is Conservapedia, all day every day. If we go with that, then go with the fact Ken is slowly pushing people out as he takes over... One would assume that in the future the only thing left of our race will be Ken writing how he destroyed the fat atheists, Homosexuals and other Vile threats to Humanity on a website hosted by a fast failing server next to him. --Mikalos209 (talk) 03:04, 3 August 2011 (UTC) - That server will be in a state of perpetual suspence, located with the Gods Of The Furtherest Ring LordSlug You want me to do...work? what's that? 03:08, 3 August 2011 (UTC) [edit] Kendoll filter I forgot to ask, what is that thing someone wrote that removes the spam from recent changes at CP? And will it work via anonymouse, and better yet, tor? Perhaps pmail as well as post here, since I check irregularly. ħuman 05:42, 3 August 2011 (UTC) - You seem to have asked this question (and got an answer) a year ago. ГенгисIs the Pope a Catholic? 06:05, 3 August 2011 (UTC) [edit] "I think of my beautiful city in flames." I think it's fitting that the Idiocy Singularity hits CP right as Rob is pretty much ready to move. Penn Jillette looks at Conservapedia and Ken's "essays" and seriously concludes that it's a troll site. Ken's reaction is basically to jizz in his pants and to feel encouraged to create more "essays". Seriously, this is brain damaged. There is missing the point, and then there is missing the PLANET the point is located on. And Andy is sitting there in his little echo chamber, seriously believing that the opposition is either cowering in fear or being converted and that conservatives will rally behind his glorious insights. In the meantime, out in the real world, the opposition is actually unwilling to believe they're serious while conservatives try to put as much distance between themselves and CP as possible. On a certain level, I care for CP. I've been watching the place for more than four years now. 
I guess I developed a certain nostalgic bond, yeah. And now I see... this. A blathering idiot either doesn't understand or doesn't understand that he is completely undermining the site's last shred of credibility, and Andy sides with him. *sigh* I think I may need a vacation before the week is over... --Sid (talk) 23:20, 2 August 2011 (UTC) - That would probably be the Stockholm syndrome, Sid. ГенгисIs the Pope a Catholic? 23:26, 2 August 2011 (UTC) - Either that, or Fremdschämen - the kind of emotion one experiences while watching the catastrophic castings in the first round of American Idol and such. --Sid (talk) 23:35, 2 August 2011 (UTC) - Fremdschämen can be explained as feeling embarrassed about (not by) somebody without having done anything youself. Just throwing that in to clarify. --★uːʤɱ structuralist 00:53, 3 August 2011 (UTC) - Andy enjoys vigorous duicussion (a flame war), and really hasn't kept up with it. It's when user comments get redunant, etc. that 90/10 gets invoked. nobsput down the toilet seat 17:33, 3 August 2011 (UTC) - I think from the get-go Conservapedia was doomed because of Andy Schlafly. If you had a Rich Lowry, David Frum or even--blech--a Tucker Carlson the site would have been a robust place where various strains of conservative thought would be explained, as Rob proposed. The rot is at the top. What I don't understand is why any users on this board still have any nostalgic for what was always a mistaken notion about what CP would be. I also don't understand why people are so invested in something named Conservapedia being that compendium of conservative thought. It's not going to happen, and it never was going to happen. --Phil Leotardo da Vinci (talk) 14:16, 3 August 2011 (UTC) [edit] Who left? "Conservapedia is proven right, again: the reaction was angry and one editor even left Conservapediaimg when we linked the Norway massacre to violent video games. But look who just joined our side: "Norway Retail Chain Pulling Violent Video Games in Wake of Breivik Killings."" Is he talking about Rob? --Roofus (talk) 01:32, 3 August 2011 (UTC) - I think it might be WilliamB1, who placed an abrupt message on his user page (since deleted by some lackey) criticising Andy and saying he wanted no more to do with the place. --Horace (talk) 01:49, 3 August 2011 (UTC) - The same editor is taking his que from User:Conservative: deleting other sysops comments from his talk page. [2] nobsput down the toilet seat 17:30, 3 August 2011 (UTC) - Hehe. From CP's own source: ." And the link at the bottom of that article leads to 'Norway Attacks: Killers May Play Video Games, but Video Games Don't Make Killers'. So they couldn't even find a source that agrees with them about the alleged link between video games and violence to highlight some retailer deciding to pull some games off their shelves for a while. Oh, but, I forgot, that's due to realitythe media having a liberal bias. 86.162.88.181 (talk) 19:52, 3 August 2011 (UTC) [edit] Penn Jillette Ugh. Ken keeps shitting all over Penn Jillette, i mean, could someone please tell me what this "Essayimg" is supposed to be about? Whats really annoying to me is that Ken starts refering to himself in the third person, then quickly switches to first person. MAKE YOUR FUCKING MIND UP KEN!!! LordSlug You want me to do...work? what's that? 03:02, 3 August 2011 (UTC) - And of course, this recently made piece of shit is now the Featured Article. EDIT: haha, Kendoll is being mocked on the talk page. LordSlug You want me to do...work? what's that? 
- Honestly, I think Ken is going mad after his great victory. He seems to have made 5 (possibly more) "Essays" all saying the exact same thing about Penn. And of course Ken is referring to himself in third person, that's what psychopaths do. He's even referring to himself as he/she. What's even more troubling is that Ken is referring to himself and Conservapedia interchangeably. Ego trip, perhaps? Beck (talk) 03:37, 3 August 2011 (UTC) - What!? A sniveling shit emerges. Let's beat him down with words. To arms, lads! 06:08, 3 August 2011 (UTC) - Oh, definitely. And I think it's hilarious how Ken is denying the "charge" of calling Penn "A stupid athiest". I mean... really Ken? Really? LordSlug You want me to do...work? what's that? 03:42, 3 August 2011 (UTC) - All Penn needs to do is say that he will gladly debate User:Conservative providing that he reveal his real identity. ГенгисIs the Pope a Catholic? 03:45, 3 August 2011 (UTC) - Ken just made an entire category called Penn Jillette. This is bordering on obsession. Beck (talk) 03:48, 3 August 2011 (UTC) - Oh, it's way past obsessive. --Night Jaguar (talk) 04:56, 3 August 2011 (UTC) - Whoever takes care of Ken must have the patience of a saint. You know the whole time he's rattling his keyboard, you just know he's rocking rhythmically back-and-forth, lips mumbling barely audible repetitive nonsense, only occasionally interrupted by peals of maniacal laughter. It must be a sad sad life. --Inquisitor (talk) 04:15, 3 August 2011 (UTC) - As I understand it, nobody is taking care of Ken. ГенгисIs the Pope a Catholic? 04:29, 3 August 2011 (UTC) - Yeah, with his victory in the Flame War and the response from Penn, he's definitely gone on a spree (close to 175 edits in 24 hours). --Night Jaguar (talk) 04:52, 3 August 2011 (UTC) - GIVE IT UP SCHLAFLY! Just change the name to Kenservapedia.com ГенгисIs the Pope a Catholic? 05:40, 3 August 2011 (UTC) - Just a thought. I know I do not post often, but I have monitored the idiotocracy of CP for some time. Has it occurred to anyone that Conservative's "rise" may give us an opportunity? Look at some of the personalities over there. We may be able to turn them on each other. - --Franklin (talk) 13:27, 3 August 2011 (UTC) - You must be new. "We" don't expressly want to destroy Conservapedia, just point out when it's misleading or unscientific. Conservative is just acting how he always acts. And sysops have turned against each other before, but these days Andy never intervenes so the winner is whoever spends the most time editing and reverting (ie Conservative).-- Kriss AkabusiAAAWOOOGAAAR!!1 13:53, 3 August 2011 (UTC) [edit] "Atheist trying to stop christinaity" Content: The horribly underthought picture of a firefighter watching helplessly as the all-consuming destructive wildfire of Christianity destroyed a home... then some quote about Atheism dying--Mikalos209 (talk) 05:29, 3 August 2011 (UTC) - and then this jewel on the talk page--Mikalos209 (talk) 05:32, 3 August 2011 (UTC) - Antheist? Is that the worship of the creatures from this classic film? Nebuchadnezzar (talk) 05:37, 3 August 2011 (UTC) - >.> didn't notice that--Mikalos209 (talk) 05:39, 3 August 2011 (UTC) - As a trained search and rescue volunteer, i've worked around firemen most of my teen and adult life. what does he mean this firefighter looks "lazy" and "slothful?" How, exactly, should a fire fighter look when staring at a fire of this magnitude? god ken is a first class fuck.
I'd love to see him say that to the actual face of ANY firefighter. and then expect they'd ever help him. En attendant Godot 14:49, 3 August 2011 (UTC) - He knows half the nasty shit he says from the safety of his unintellectual internet bunny hole would get his ass kicked in real life. 15:44, 3 August 2011 (UTC) - Obviously a real firefighter could beat the fire down via his axe--Mikalos209 (talk) 16:54, 3 August 2011 (UTC) - Yeah, i figured it was teh fire fighter's stance. if you lean back slightly, on one leg, that's clearly sloth. if you lean forward, that's not sloth. and your hands should never be at your side - again, sloth. En attendant Godot 17:27, 3 August 2011 (UTC) - I like how Christianity is represented by a roaring fire of destruction, burning down the labors of man's intellect (the home), as the atheist looks distraught at the raging destruction of religion on civilization. --BMcP - Just an astronomy guy 17:30, 3 August 2011 (UTC) - A real firefighter could have Ken mouth off to him face to face, and then still rescue him anyway because it's the right thing to do. I don't think Ken understands that concept. X Stickman (talk) 18:12, 3 August 2011 (UTC) - If that firefighter chose not to save ken he'd get flak, no doubt legal problems and no doubt be fired and a brief 15 minutes of shame thanks to him being a prick and not doing his job.--Mikalos209 (talk) 21:05, 3 August 2011 (UTC) [edit] Almost definitely a Parthian Shot NKeaton's head is about to explodeimg. A slightly amusing, but quite lengthy, rant.Tielec01 (talk) 05:52, 3 August 2011 (UTC) - That rant. its beautiful. *Sheds a single tear of black, liquid sorrow* LordSlug You want me to do...work? what's that? 07:14, 3 August 2011 (UTC) - He had to look up what 'troll' meant? Wow... Eye on the ICR talk, or type, or whatever... 07:17, 3 August 2011 (UTC) - At least he actually looked it up. Ken has displayed no knowledge that HE knows what a troll is? LordSlug You want me to do...work? what's that? 07:24, 3 August 2011 (UTC) - I would grade it as a solid B. Too much time (for my atheistic tastes) was spent defending Catholicism. Also, there was too much lip service paid to CP being a "worthy" project. And it wouldn't have killed him to toss in standard usage of paragraph formatting. But all that aside, his takedown of Ken was concise, accurate, and mildly humorous. Bravo. --Inquisitor (talk) 08:59, 3 August 2011 (UTC) - Liked this one a lot. It says it all about how harmful Conservapedia, and Conservative particularly, are to the causes they try to promote. And that sincerity would be hard to fake.-- Kriss AkabusiAAAWOOOGAAAR!!1 11:02, 3 August 2011 (UTC) He hasn't been blocked yet but if he tries to continue editing he probably will be. Proxima Centauri (talk) 16:47, 3 August 2011 (UTC) - NKeaton: A safe life is not worth living.img UPDATE: MaxFletcher: I like to live dangerously too!img Conservative: I like lamp.img MarshallF: This thread is not wierd enough.img JamesWilson: No-one seriously supports conservative, ergo you must be 1) intimidating or 2) a troll. Either way - BANNED.img Tielec01 (talk) 02:20, 4 August 2011 (UTC) [edit] Did The Best NEW Conservative Songs Help Bring Down The Berlin Wall? Andy bravely asks the questionimg. Maybe he thinks the construction worker from the Village Peopleimg helped tear it down? --Night Jaguar (talk) 06:39, 3 August 2011 (UTC) - I'm sure Reagan was singing along to some of the Best New Conservative Songs as he tore down the Berlin Wall with his bare hands. 
SoCal 212 I can't find my talk page 07:03, 3 August 2011 (UTC) - I can totally see it: Gorbachev sets down a huge Soviet era boombox in the middle of some conference room table deep within the bowels of the Kremlin, presses play, and utters "Comrades, once you get an earful of this infectious toe-tapping melody, you will realize- like I have- that we have been bested."--Inquisitor (talk) 08:48, 3 August 2011 (UTC) - Wait a minute... where is this money coming from? The Government? If so, I find it greatly amusing that Schlafly ignores this pesky fact just because the money is being used for something conversative. And isn't all rock music of the devil anyway? ONE / TALK 10:25, 3 August 2011 (UTC) - From what I've heard, the extent to which US rock was circulating in Soviet Russia was that there was a lot of tape trading. I'm pretty sure it was mostly Beatles bootlegs, and I've actually heard that Frank Zappa tapes were highly prized. -Lardashe - The US government was aiding and abetting that circulation too, and even before Zappa and the Beatles, it was Satchmo. Nebuchadnezzar (talk) 20:13, 3 August 2011 (UTC) - Andy is praising a Federal arts grant? I now expect to hear a news report that winged swine are being denied permission to land at the infernal regions' airport due to extreme icing conditions. MDB (talk) 11:05, 3 August 2011 (UTC) - Everyone knows that The Hoff brought down the Berlin Wall. X Stickman (talk) 14:58, 3 August 2011 (UTC) - It's what I call the "conservative blind spot": While you are in government do as you wish, justify it with fighting the enemies of your country. If you are not in the government there is no justification for certain things - if they do not benefit you. Or in other words: if you are in government you act pragmatic, if you are not, you are an idealist. --★uːʤɱ structuralist 19:53, 3 August 2011 (UTC) - Of course not, it was the Falcon PunchREAGAN SMASH!!111!! Nebuchadnezzar (talk) 20:08, 3 August 2011 (UTC) [edit] SchlockOfGog is weighing in on the "debate" SchlockOfGog is weighing in on the Penn Jillette "debate" HEREEDIT: guess where the video description links to? LordSlug You want me to do...work? what's that? 07:28, 3 August 2011 (UTC) - Why is ShockofGod stating it was him the Penn was calling a troll, like this blog here when it was Ken? Weird. Aceof Spades 07:31, 3 August 2011 (UTC) - does anyone know who this swiftfoxmark2 guy is? one of Ken's socks perhaps? EDIT: this swiftfoxmark2 guy is a moron, he mentions Shockofgod instead of Ken. LordSlug You want me to do...work? what's that? 07:33, 3 August 2011 (UTC) - I have long suspected that conservative and shockofgod are one and the same, but my suspicions were dashed by this very site. Tielec01 (talk) 07:45, 3 August 2011 (UTC) - People have long speculated that shock and kenny are one and the same. Personally I don't think they are for a couple of reasons. 1/ I SERIUOUSLY doubt ken could pass a motorcycle test and I certainly wouldn't want to be on the same continent were he to be riding a motorcycle. 2/ kenny spends far too much time editing cp to ever be able to make the rather silly shock videos. I suspect they know each other rather well but they are not one and the same. Oldusgitus (talk) 08:51, 3 August 2011 (UTC) [edit] You people don't realise who you're dealing with!!! Ken is no ordinary anonymous internet trollimg. --Horace (talk) 08:49, 3 August 2011 (UTC) - Notice how Ken doesn't provide any links. EddyP Great King! Disaster! 
09:11, 3 August 2011 (UTC) - What, did Ken do a phone interview with News of The World? I hear that's all the rage these days. LordSlug You want me to do...work? what's that? 09:14, 3 August 2011 (UTC) - Also, he mentions that his work is mentioned on Penn Jillette's website. the VERY SAME Penn THAT KEN IS TRYING TO DEGRADE!!!!! LordSlug You want me to do...work? what's that? 09:14, 3 August 2011 (UTC) - Nope, he's definitely no ordinary troll. Remember, as of 31st July kenny was going to be far too busy to spend much time or be very active at cp. So somewhere in the region of 200 edits in the past 24 hours is being too busy to edit at cp. Damn right he's no ordinary troll, they need sleep. Oldusgitus (talk) 09:21, 3 August 2011 (UTC) - Wow, this is even worse than Andy's delusions of grandeur. Ken, your material is mentioned for the same reason Time Cube guy's is. The fact that the number of people laughing at you is in the hundreds of thousands isn't something to boast about. --Night Jaguar (talk) 10:31, 3 August 2011 (UTC) - Holy fuck, over 200 edits in the last 12 hours. Definitely no ordinary troll. --Night Jaguar (talk) 10:48, 3 August 2011 (UTC) - This is the real fallout of reality TV culture. People who can't tell that there is a difference between being mentioned at all and being respected, or effective. Dear Ken, if they hold up your work as a sign of how terrible your entire side is, you're not helping. If I started carving bible verses into children's foreheads at preschools, yes, there is a chance that someone would pick up a bible and be converted, but the damage I did to christianity as a whole would unconvert many many more than I managed to convert. --Opcn (talk) 11:38, 3 August 2011 (UTC) - Happy Birthday, Ken. His birthday was the 1st, so I'm sure Penn's response was a nice gift. Anyone want to check if he left CP for any amount of time on his birthday? -Lardashe - Leaving would imply anybody wanted to spend time with him--Mikalos209 (talk) 14:27, 3 August 2011 (UTC) - Now that Ken is the new, undisputed troll of CP, we can expect many more outpourings of this nature (at least until his keyboard gums up anyway). Ken is officially untouchable, and free to smear his excrement all over CP, with Andy's blessing. I see he was so busy screwing up the main page again that he's completely fucked up the formatting. Let's see just how long it stays that way. Lol, and Ken talking about intellectual bunny holes... the man has room to talk. He can't even respond to comments on talkpages. PsyGremlinSprich! 14:55, 3 August 2011 (UTC) - (EC)On the 1st his last edit was at 10:19 AM, so hopefully he had a good birthday. EddyP Great King! Disaster! 14:57, 3 August 2011 (UTC) I see Ken as a troll in the best sense of the word. There is genius in him... or he is mentally insane (the line between the two being quite blurry when it comes to art). I picture him as a disabled man, unable to leave his house, who found in Conservapedia (and A. Schlafly) a venue to express himself, see how much he can get away with, and make people laugh as they enjoy the crazy antics of his internet persona (and Schlafly's abnormal reactions to it all). I think that he's acting the whole time, performing for us. Enjoying himself as he mercilessly - and deliberately - twists and tortures logic, maturity, rationality, common sense and sane behaviour itself. I see him as a dedicated artist performing at the cheap global stage of this series of tubes, laughing at the world and allowing us to share in his enjoyment.
A troll, in the best sense of the word... and probably a little insane. :-) God may have deprived him of mobility, but not of his ability to laugh at His creation. Xyr (talk) 21:01, 3 August 2011 (UTC) [edit] Someone got angry... :-o surely no-one here is so immature? Foobarraboof (talk) 12:47, 3 August 2011 (UTC) - I would hope not, suspiciously-new-user-who-has-never-made-a-single-contribution-anywhere-except-just-now-to-mention-thatFoobarraboof. ONE / TALK 13:11, 3 August 2011 (UTC) - (ec)Heh, good point. It wasn't me though, honest. I'm CPfan, a slightly-less-suspiciously-new user who's forgotten his password and didn't give an email address. Foobarraboof (talk) 13:55, 3 August 2011 (UTC) - That thought had occurred to me as well Mr 1. Why do people troll cp and immediately come here to crow assuming everyone will say well done? It's tedious and will never match the long-term efforts of people like Bugler and Ken Dm to undermine and destroy cp. Oldusgitus (talk) 13:54, 3 August 2011 (UTC) - And yes, quite. Exactly what smart people are doing right now, no doubt. Foobarraboof (talk) 13:58, 3 August 2011 (UTC) - Likely some bored teenager or \b\tard, no one here cares. --BMcP - Just an astronomy guy 16:55, 3 August 2011 (UTC) - Yes, I suppose so. It interested me because it's a rare demonstration of someone even more stupid and misguided than Ken himself. Foobarraboof (talk) 18:31, 3 August 2011 (UTC) - We had that TerryB/Terry Benny do the same thing and then say that he doesn't expect to make any more edits. Well, thanks for nothing. ГенгисIs the Pope a Catholic? 01:22, 4 August 2011 (UTC) [edit] Atheistic England is doomed! DOOMED I say! Many British kill harm themselves! Because they got atheism! Because, you know, the United Kingdom is really the epicenter of European atheism. So stop bein' an atheeist now, 'cause, 'cause - I don' like it when you think different than me! But here comes the icing on the cake: Now, the source Ken quotes says: Now let's do some meth math kids: 400 in 100,000 is what percentage? Let's do it very easy and just adjust the position of the decimal point so that it's a percentage: ahh... wait… just a sec... - ah, there you go: 0.4 out of 100, which is 0.4%! But wait a minute! In the US it's 4%, in the UK it's 0.4%. Now well, the US figure is only for adults, but as mostly teens engage in self-harm, this percentage would even have to be adjusted upwards if we were to seriously compare the two on a nationwide scope. So, the US is more religious than the UK... and especially more Christian than the UK.. wait, let me get the "Ken DeMyer-logic simulator" up and running (I mean it often fucks up, needs several times to do something and sometimes I can't turn it off, but I think that's how it ought to be...) - Ah! There we go: CHRISTIANITY MAKES PEOPLE HARM THEMSELVES! --★uːʤɱ federalist 22:53, 3 August 2011 (UTC) - You were able to do that in one edit; you clearly don't have a working DeMyer-logic simulator. Mr. Swift (talk) 23:23, 3 August 2011 (UTC) - Well their hero had himself nailed to a wooden cross and had a crown of thorns embedded into his head, and told people that if their hand causes them to sin, they should cut it off. So they're likely just going by example.--94.76.233.42 (talk) 23:24, 3 August 2011 (UTC) - You know I have usually associated self-harm with those of a religious disposition, all that self-flagellation because you are told that you are not worthy of your maker.
And if you start going through the book of saints you will find all sorts of daft things people do to themselves for their love of Jesus. ГенгисIs the Pope a Catholic? 01:39, 4 August 2011 (UTC) - Plenty of people self harm. It's pretty simplistic to blame it on Religiosity or the lack thereof. AMassiveGay (talk) 09:42, 4 August 2011 (UTC) - Have a read of this one ADK...I'll exterminate your heretic! 11:54, 4 August 2011 (UTC) [edit] Andy is aware his talk page still exists. So why hasn't he stepped into the debate?--"Shut up, Brx." 23:42, 3 August 2011 (UTC) - Because saying what was wrong with an Elvis picture is far more important than a several-day epic flame war between YOUR sysops over effective control of YOUR site. Priorities! --Night Jaguar (talk) 00:27, 4 August 2011 (UTC) - Week-long epic flame war--"Shut up, Brx." 00:29, 4 August 2011 (UTC) - Whole thing will be memory-holed soon enough, at which point there will never have been hostilities between Team Kenajou and Rob. Andy's just a little ahead of the game in that regard. Does beg the question, though, of what he promised Rob for his capitulation. --Tygrehart - We have always been at war with Rob Smith. ADK...I'll vitiate your Audi! 12:22, 4 August 2011 (UTC) - "If disengagement disappeared from the agenda, we would be forced into endless skirmishing over broader issues on which I knew we would not be able to deliver quickly." nobsput down the toilet seat 21:07, 4 August 2011 (UTC) [edit] Andy and free speech "No, I don't think an employee can say whatever he likes if his employer disapproves. If the CEO of Microsoft harshly criticized the company publicly, the Board of Directors would probably fire him." Hmmm, isn't this the same Andy who said of McChrystal's firing "McChrystal's "criticism" was so mild and removed from matters relevant to military operations that Obams's removal of him for non-military reasons seems awfully self-centered." Aceof Spades 01:40, 4 August 2011 (UTC) - The teacher who wanted to pray, should have a right to pray. but what about muslims, is that true for them? Of course not, schools have a right to make teachers say what they want them to say. They just don't have a right to make them not say prayers..... does ANYONE get his logic? En attendant Godot 01:47, 4 August 2011 (UTC) - Andy Schlafly. Inconsistent. LOL. ГенгисIs the Pope a Catholic? 02:02, 4 August 2011 (UTC) - Andy's stance is that majority rules, and protections for the minority are irrelevant in the face of the majority's desires. This is part of why he likes to frame all his views as widely popular, even if they aren't. -Lardashe - Actually Andy only agrees with majority rule when the majority is on his side. Then it becomes "the will of the people". When the majority is not on his side, it's "mob rule".--Inquisitor (talk) 02:11, 4 August 2011 (UTC) - ^ This. --Night Jaguar (talk) 04:59, 4 August 2011 (UTC) - {ec}Nah. The difference is public/private employment--"Shut up, Brx." 02:11, 4 August 2011 (UTC) - Andy is a little more consistent than we give him credit for. He's not saying the (Christian) teacher should have a right to pray, he's saying the parents should have the right to have the teacher pray. Thus he'd technically have to agree that a Muslim-majority school district should be able to institute school prayer to Allah.
If I weren't blocked over there, I'd be tempted to ask about specifically Mormon prayers - I'm pretty sure there are Mormon-majority districts in Utah, while Muslim-majority districts are probably much more rare. Yoritomo (talk) 14:12, 4 August 2011 (UTC) - I hope you're right regarding Andy's consistency, but I doubt it. I suspect that, if there happened to be a Muslim-majority area, Andy would revert to the Christian nation myth. It's a shame that we don't have an actual Muslim-majority area at hand, but at least one could ask him to explicitly state whether or not he agrees that in such a school district (or, better, in the area belonging to one particular elementary school), he would advocate Islamic prayers in the classroom and if not, why not? Pin him down as clearly as possible: which group matters most? The locals attending the school? The area funding the school? The nation? The nation's alleged history as founded by Christians? Phiwum (talk) 15:02, 4 August 2011 (UTC) - IIRC somebody did bring up a question about what if Jewish or other children in the class wanted to pray in their religion, rather than Andy's "teacher-led prayer". I think his response basically came down to "No, it will be Christian prayer in my schools, if they want to pray otherwise, they can either do it quietly, or attend their own classes." PsyGremlinSprich! 15:19, 4 August 2011 (UTC) My wife taught at a charter school for Somali immigrants (we're in Minneapolis, which has a surprisingly high Muslim population, particularly East Africans), and there was a shit-storm because the school had some religious-themed activities (prayer sessions on Fridays were the big thing). Now, I think the school was in the wrong; but if this had been a Christian prayer session at a Christian-majority school, Andy would have been stumbling over himself to praise it. But I'm sure a Muslim prayer session at a Muslim-majority school would send him into a frenzied, hypocritical rage. Carlaugust (talk) 16:58, 4 August 2011 (UTC) [edit] CP's robots.txt Here's CP's robots.txt:
User-agent: *
Disallow: /index.php
Disallow: /skins
Disallow: /Special:Search
Disallow: /Special:Random
Disallow: /MediaWiki:
Disallow: /Template:
User-agent: dotbot
Disallow: /
User-agent: baiduspider
Disallow: /
The first few lines block all search engines from indexing the special wiki pages. That's pretty normal. The next one blocks the DotBot from every part of the site. They want to index the entire internet for research purposes. Apparently, it sucks up a huge amount of bandwidth, so it makes sense to block it. The one I found interesting was baiduspider. That is, of course, China's own special counterfeit version of Google. I can't quite figure out why Andy would block that search engine, and only that one. Doesn't he realize that by depriving China of the Greatest Conservative Songs, TV Shows, Movies and Words, he's really keeping them under the control of a liberal atheist dictatorship? --Roofus (talk) 06:03, 4 August 2011 (UTC)
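(As an illustrative aside: the split Roofus describes can be checked mechanically with Python's standard urllib.robotparser module. The sketch below is only that, a sketch; the page URLs and the "Googlebot" user-agent are assumptions picked for the example, while the rules themselves are simply the ones quoted above.)

# Minimal sketch, assuming the robots.txt quoted above: feed it to the
# standard-library parser and ask which crawlers may fetch which pages.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: *
Disallow: /index.php
Disallow: /skins
Disallow: /Special:Search
Disallow: /Special:Random
Disallow: /MediaWiki:
Disallow: /Template:

User-agent: dotbot
Disallow: /

User-agent: baiduspider
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

article = "http://www.conservapedia.com/Main_Page"       # illustrative article URL
special = "http://www.conservapedia.com/Special:Random"  # one of the disallowed paths

for agent in ("Googlebot", "dotbot", "baiduspider"):
    print(agent, "-> article:", rp.can_fetch(agent, article),
          "| Special:Random:", rp.can_fetch(agent, special))

# Expected result: Googlebot (which only matches the "*" record) is allowed on
# ordinary article pages but not on the Special: pages, while dotbot and
# baiduspider are refused everywhere - the split described above.

Nothing in the sketch contacts the site; it only interprets the quoted rules, so it says nothing about what the server does with a crawler that ignores them.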
- RW's, on the other hand, is far less liberal about what it blocks. Eye on the ICR talk, or type, or whatever... 06:17, 4 August 2011 (UTC) - Lol. How does it feel, Ken. How does it feel to know that your A E H articles are being actively blocked from a 5th of the world's population? ONE / TALK 08:47, 4 August 2011 (UTC) - now, I wonder what would happen if we were able to 'hack' into that .txt file (nice encryption btw) and slipped googlebot into there... *evil laugh* LordSlug You want me to do...work? what's that? 10:34, 4 August 2011 (UTC) - Uhhhhh... robots.txt is never encrypted, on any site - otherwise how would the search engine crawlers read it? ONE / TALK 10:58, 4 August 2011 (UTC) - But it doesn't matter, baidu got them through bing. Yes, Andy Schlafly, uber-conservative, got tricked by some commies and some hippies at Microsoft - who made a business deal. Ah, the picture... --★uːʤɱ secularist 12:33, 4 August 2011 (UTC) - TALK, you do know that Baidu is not the only search engine available in China, right? There is another rather well-known search engine beginning with "G". Phiwum (talk) 14:06, 4 August 2011 (UTC) - I'd be interested to know if CP is blocked in China... Ateafish (talk) 21:44, 4 August 2011 (UTC) - Rather than countries blocking access to CP, CP has tended to block itself. I think that some commercial internet filters block CP as a hate speech/racist site but I don't think that any countries do. Whereas, when I was in Yemen last year RW was blocked under category 'pornography'. ГенгисIs the Pope a Catholic? 23:49, 4 August 2011 (UTC) - Conservapedia is not blocked in the PRC, and I can't personally recall any time that it was. On the other hand... 江斯顿What is it now? 00:46, 5 August 2011 (UTC) - I would love to ask the Chinese government why Conservapedia isn't blocked and get the answer "Not even people in a dictatorship are so stupid". --★uːʤɱ socialist 00:53, 5 August 2011 (UTC) - Rationalwiki was not blocked as recently as late June. I was in Shenzhen and other parts of Guangdong at the time. In fact, aside from Youtube and Facebook, it was hard to find any blocked sites, but I'm not a user of social media (I only checked out Facebook to test the firewall), so the firewall doesn't affect me much. U.S. traditional media sites were not blocked at the time I was there — at least none that I checked. Phiwum (talk) 02:55, 5 August 2011 (UTC) [edit] Is Andy Ken's bitch? I only ask because I remember my Jack Russell bitch used to think she smelled a lot better after she had rubbed her neck in Ken's fox shit. ГенгисIs the Pope a Catholic? 13:19, 4 August 2011 (UTC) - I think Andy is naturally submissive - wouldn't you be, growing up under Big Phyl's claws? Thus he instinctively defers to the alpha male on CP - despite his token presence as Brother Leader - hence his rolling over on TK and now Ken. That probably also explains a lot of Andy's projection. PsyGremlinSiarad! 14:11, 4 August 2011 (UTC) - On the concept that Kenny G is CP's alpha male, I will the entire day howl with melodious laughter and the table thump repeatedly. DogP (talk) 15:17, 4 August 2011 (UTC) - Everyone at CP is spineless and submissive. There hasn't been a true alpha male since TK. Ken is trying to fill that space, but he really can't. Andy is Ken's bitch, but I think it's fair to say that Ken is also Andy's bitch. Beck (talk) 17:15, 4 August 2011 (UTC) - If it turned out that TK was actually still alive and kicking and he returned to CP, do you think Ken, after all he's said and done this year, would take a firmer line against him? Grumblejaws (talk) 17:32, 4 August 2011 (UTC) - Well, don't forget that Andy sometimes "trims" Ken's more embarrassing items and that Ken went crying to Andy during his squabble with Rob. Ken is more like Andy's neglected child who effectively controls the house and can get away with nearly anything.
- The fact that CP is Ken's Blog in all-but-name is due to three factors: the reluctance of those at CP to criticize one on their side, the laziness/disinterest of nearly every other sysop and the clinical obsessiveness of Ken. Ken was able to wear down Rob. He certainly didn't defeat him due to any skill. --Night Jaguar (talk) 19:58, 4 August 2011 (UTC) [edit] Peace in our times Apparently Rob has been talked in to an armistice agreementimg whereby he'll go back to pretending that Kendoll doesn't exist, just like everyone else at CP. Presumably there's a reciprocal agreement whereby Kendoll and Karajou have to pretend to forget that Rob called them a demon and a piece of shit respectively. Is Rob completely delusional or is he just a masochist? --JeevesMkII The gentleman's gentleman at the other site 19:43, 3 August 2011 (UTC) - What? He spends one day in Conservapedia re-education camp and already comes out completely spineless? Vulpius (talk) 19:58, 3 August 2011 (UTC) - So, Rob, you spoke with Andyimg, and you decided to ignore Conservapedia's problems together? Great! larronsicut fur in nocte 20:07, 3 August 2011 (UTC) - So, anything good we said about him is taken back i assume?--Mikalos209 (talk) 20:37, 3 August 2011 (UTC) - What do you mean "we"? Lotsa folks still think Rob is a tool. P-FosterThe Holy Roman Empire was neither Holy, nor Roman, nor an Empire. Discuss. 20:38, 3 August 2011 (UTC) - A tool yes, but he's still one of the more rational people on CP. Beck (talk) 20:44, 3 August 2011 (UTC) - Which is about the same as saying the leg breaker isn’t as bad as the loan shark because the leg breaker didn't personally give the order to cripple you. Whatever. Be sure to put a little more spit shine into that turd your currently buffing Rob and don’t forget your mantra: “Thank you Mistress, may I have another…” --Tygrehart - I never really said anything good either, I meant we as in anybody here--Mikalos209 (talk) 20:52, 3 August 2011 (UTC) - This is somehow more pathetic than it already was. Rob, your life must suck. Occasionaluse (talk) 20:58, 3 August 2011 (UTC) - I thought Rob was suppose to be taking a break from CP. He who fights monstersdemons....--Night Jaguar (talk) 21:05, 3 August 2011 (UTC) - Peace or otherwise, it's now apparent Rob is branded and will resemble the nations of Korea. Who pays the price? Everyone else but them, just the way they like it. lol NorsemanCyser Melomel 22:28, 3 August 2011 (UTC) - I loved that show as a kid; now I see it as the Hollywood Screen Writers/Kremlin directed communist agitprop it was, intended to demoralize U.S. troops and kin during the Vietnam War. nobsput down the toilet seat 22:57, 3 August 2011 (UTC) - It must be a strange world inside your head, Rob. Don't forget to check behind the curtains for commies before you retire to bed. --JeevesMkII The gentleman's gentleman at the other site 05:22, 4 August 2011 (UTC) - Ah, back to normal: Rob shows again that he is always able to follow the tangent if it pleases him... Dear Rob, at least at RW try to stick to the topic, this thread isn't about *M*A*S*H*, it's about you making up with Andy behind the curtains. What did Andy say when he turned off the internal emails? - It's turned off for now, but could easily be restored in the future. This feature seems contrary to the spirit of a wiki.--Andy Schlafly 22:03, 7 July 2011 (EDT) - But telephone calls aren't? 
And I guess that the sysops are exchanging emails all the time, they just don't want to be pestered by footsies or - horribile dictu - allow the unwashed masses to talk behind the back of the Obrigkeit... - And what have you accomplished? Well, the range blocks are down, congrats! That is somehow a hollow victory, as still there are all these 403-blocks in place on the server: Human and I have to jump through some hoops to be able to edit at Conservapedia: but theoretically we can, at least for the moment. - And beside this? Well you have disappointed the users who thought that some of the more obnoxious practices (deleting pages, not using the fricking preview-button...) could be changed. - larronsicut fur in nocte 12:14, 4 August 2011 (UTC) - As Poskrebyshev said, "there is movement." [3] nobsput down the toilet seat 16:47, 5 August 2011 (UTC) - He must be eating the same new cereal that my wife bought. "Movement" indeed. P-FosterThe Holy Roman Empire was neither Holy, nor Roman, nor an Empire. Discuss. 16:50, 5 August 2011 (UTC) [edit] Moon over My Hammy This MPR item is hysterical for its irony: - ." [10] No kidding." No kidding, Creationists. --Phil Leotardo da Vinci (talk) 15:18, 4 August 2011 (UTC) - Just jealous because these guys actually came up with an original idea. ADK...I'll revolt your bazooka! 15:22, 4 August 2011 (UTC) - If Andy read the very next couple of sentences, he would see how such tests would be carried out to see if the theory is valid. --BMcP - Just an astronomy guy 15:24, 4 August 2011 (UTC) - Dear f'in goat. "like what an artist would paint; but the opposite side is jagged, like what an astronomer would expect." Did this moron SERIOUSLY graduate from Harvard or did he just bribe the lecturers so he could lie about it? If I had written anything like that when I was at university the lecturers would have thrown it back unmarked. Oldusgitus (talk) sometime, 4 August 2011 (UTC) My keyboard is playing up........ It's pretty simple. One side is smooth because God made it that way. The other side is jagged because God made it that way. Scientists don't realize the truth because God made them that...oh, rats...Jimaginator (talk) 17:09, 4 August 2011 (UTC) - The near side of the Moon is smooth like sandpaper is smooth. --BMcP - Just an astronomy guy 17:39, 4 August 2011 (UTC) - You don't know with certainty why one side is smoother than the other, therefore god. Checkmate, atheists. Occasionaluse (talk) 17:52, 4 August 2011 (UTC) - My best guess would be that the near side is smoother because it's the side always facing the Earth, so it's less likely to be hit by meteors. --Roofus (talk) 17:57, 4 August 2011 (UTC) - I don't think the moon is stationary - it actually spins on an axis, so that idea wouldn't hold up? --Phil Leotardo da Vinci (talk) 18:04, 4 August 2011 (UTC) - The moon rotates on an axis. Occasionaluse (talk) 18:07, 4 August 2011 (UTC) - It rotates, but it orbits the Earth at the same speed it rotates. The result is the same side facing us all the time. The smooth, God-painted side. In His great wisdom He made sure the smooth side was always facing us so we could appreciate its beauty. Rather than, say, making the entire thing smooth. Because that would have been silly. «-Bfa-» 18:13, 4 August 2011 (UTC) - Because of tidal effects, celestial bodies which are circling around each other will become locked.
The Earth's rotation is slowing down and sometimes in the future, the Earth will show the Moon always the same side (well, it will be wobbling a bit because of the Sun..) - a question for our astronomers: wouldn't one expect the side of the Moon which is facing us to be somehow shielded against impacts by the Earth? larronsicut fur in nocte 18:43, 4 August 2011 (UTC) - Not an astronomer, but: It doesn't seem to be very interesting which side of the moon faces outward. It seems to be more interesting which side of the moon faces forward, ie. in the direction the system travels around the sun. Every side of the moon will do that at some point, obviously. Another thing is that impactors hitting on the near side will tend to be faster than impactors on the far side due to the gravity assist from the planet they will typically have received. Mountain Blue (talk) 19:09, 4 August 2011 (UTC) - No, blue, due to the tidal locking, the same side of the Moon is always "the lead side". That astronomy guy has a point, the side facing the Earth is more protected from impacts than the other side. And of course Andy is as usual a moron in general. ħuman 06:50, 5 August 2011 (UTC) - Uh, no. At first quarter moon, the near side points in the direction the Earth travels. At third quarter moon, it's the far side. Just look at this diagram. Mountain Blue (talk) 12:57, 5 August 2011 (UTC) - I would suspect a little, but not by much given the still large distance between the two worlds, relatively speaking. Earth's gravity does influence NEOs significantly though, so it is possible. The near side of the Moon is also likely smoother because it suffered one or more major impacts in the deep past that cause lava to flow to the surface creating the lava filled basins called maria ("seas") that we see today. We do know that the crust on the near side of the Moon is thinner than the crust on the far side of the Moon, which may have made it easier for lava to flow to the surface from such huge impacts in the deep past. These days the Moon's interior is no longer active, having long since cooled off. Curiously, the Moon's center of mass is actually slightly offset towards the near side from its geographical center, so in addition to a thinner crust, the near side has slightly more mass.--BMcP - Just an astronomy guy 00:35, 5 August 2011 (UTC) While you all were discussing "science", you were not praising Him. The moon is the way God wanted it to be. Stop trying to figure things out. Where will it lead? Inventions? Knowledge? Worldwide communication? Praise Him, and stop wasting time. Jimaginator (talk) 19:56, 4 August 2011 (UTC) - "The side of the Moon that faces us is smooth, as an artist would paint; but the opposite side is jagged, as an astronomer would expect". Yeah, painters can't stand painting jagged stuff. They like smooth stuff. That is why they have always painted the side of the moon facing the Earth and why you will see very few landscapes showing the far side of the moon. --Horace (talk) 03:01, 5 August 2011 (UTC) - What is annoying about that is that it isn't smooth by any stretch of the imagination. Look how ridiculously cratered the near side is[4], especially in contrast to our own world, but no one is on the Moon admiring the artistic beauty of Earth. If we humans see the near side of the Moon as artistic it because it is utterly ingrained into our collective historical and cultural psyche. 
After all, up until a half century or so ago it was the only side of the Moon anyone had ever seen, and just about every human has seen it, making the near side, for all intents and purposes historically and culturally, the "entirety" of the Moon. --BMcP - Just an astronomy guy 07:51, 5 August 2011 (UTC) - This. Also wanted to add in a comment about 'the same side always faces the Earth'. That's broadly true, but can be a little misleading, as it implies that the bit facing the Earth is static (relative to the Earth). That's not quite accurate, as due to libration something like 59% of the moon's surface is visible from the Earth at some point. There's quite a good graphic on the Wikipedia page for libration to demonstrate this. Worm (talk) 09:12, 5 August 2011 (UTC) [edit] Sarcasm? Unless someone has some other explanation for this. --Horace (talk) 00:12, 5 August 2011 (UTC) - I don't expect Karajou to receive it with the love and affection with which it was so obviously bestowed. ГенгисIs the Pope a Catholic? 00:17, 5 August 2011 (UTC) - This smiley face would have been more appropriate for Karajerk. --Night Jaguar (talk) 01:05, 5 August 2011 (UTC) - Did anyone else read that as "BANstar" as opposed to "BARNstar"? LordSlug You want me to do...work? what's that? 01:36, 5 August 2011 (UTC) - No, but it's a good one. ГенгисIs the Pope a Catholic? 01:38, 5 August 2011 (UTC) - He's already got a banhammer... Eye on the ICR talk, or type, or whatever... 01:43, 5 August 2011 (UTC) - I guess the "do not poke the anger bear" sign fell off his cage. --JeevesMkII The gentleman's gentleman at the other site 02:07, 5 August 2011 (UTC) - Was the banhammer one also a pisstake by Rob then? CrundyTalk nerdy to me 13:51, 5 August 2011 (UTC) [edit] What the hell is Rob on about? For some truly incomprehensible advice, see Rob's recent conversation. - What's the problem? --Roofus (talk) 02:59, 5 August 2011 (UTC) - It appears that he's trying to steer a conversation and point the bannables in the direction of the data he wants them to present without violating the terms of... let's say his 'probation'. Rob, if you are indeed trying to do that, you might want to go with [5] and then assume that obesity is evenly spread between all political backgrounds for that given state. You can find a partisan voting index at [6]. For example, Alabama (a nice first in the list) has 32.2% obesity and 39.1% democratic vote. On the other hand, you have something like Vermont which has an obesity rate of 23.2% and voted democratic 68.9% in '08. So maybe you've got some data to work from there. Or maybe someone will try to twist it around to suggest that every republican voter in Vermont is thin. If this isn't enough data, you could go by county [7] and [8] though I'll admit you'd have to do a bit of work to map the congressional district back to counties. Still, you'll have those wonderful blips like Dane County, Wisconsin that has a 25% obesity rate (lowest in the state) and elects Tammy Baldwin (a lesbian!) in a district with a D+15 index. Whatever the case, the data is all there for you to pass along to have people try to figure out what percentage of the population are overweight republicans. Just make sure you stay far, far away from any actual studies [9] [10] and news articles [11], they're either by liberal schools of higher learning or publicized by the mainstream media. --Shagie (talk) 03:29, 5 August 2011 (UTC)
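(As an illustrative aside: the back-of-the-envelope estimate Shagie is proposing, i.e. assume obesity is spread evenly across political affiliations within a state, can be written out in a few lines. This is a minimal sketch under that assumption; the only inputs are the Alabama and Vermont figures quoted above, it treats every non-Democratic voter as Republican, which is crude, and two states obviously establish nothing about any correlation.)

# Sketch of the estimate suggested above. Assumption: within a state, obesity
# is spread evenly across political affiliations, so the share of people who
# are both obese and Republican-voting is roughly
#   obesity_rate * (1 - democratic_vote_share).
# The two data points are the ones quoted in the comment above; they are
# illustrative only, not a study.

states = {
    # state: (obesity rate, Democratic vote share in 2008)
    "Alabama": (0.322, 0.391),
    "Vermont": (0.232, 0.689),
}

for name, (obese, dem) in states.items():
    obese_rep = obese * (1.0 - dem)   # crude: counts all non-Democratic voters as Republican
    obese_dem = obese * dem
    print(f"{name}: ~{obese_rep:.1%} obese Republican voters, "
          f"~{obese_dem:.1%} obese Democratic voters")

# Alabama: ~19.6% obese Republican voters, ~12.6% obese Democratic voters
# Vermont: ~7.2% obese Republican voters, ~16.0% obese Democratic voters

Which, as the next comment points out, still settles nothing about atheism or obesity; it only shows what the suggested arithmetic would look like.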
The claim is that atheism is correlated with obesity. The fact that there are more obese persons than atheists would not refute the claim at all. And adding political affiliation just makes things that much more complicated and confusing. What one wants is clear evidence that there is no such correlation. Now, there probably is no such evidence, so what one needs to do is the next best thing: simply point out that there is no study that supports Ken's claim. There is no evidence of the correlation Ken wants. (Of course, Ken doesn't really care, so this argument will have no practical consequence.) Phiwum (talk) 11:01, 5 August 2011 (UTC)
- OK OK, have you considered trying cp:Voting characteristics of obese Americans, for example? It could be ground breaking. nobsput down the toilet seat 16:55, 5 August 2011 (UTC)
[edit] Conservapedia, where the K stands for "quality."
Now here's an example of a quality wiki project. Five hundred spots on the list of pages that have no links to them, and you're still in the "A"'s. That's some good wiki-maintenance they got going on there. P-FosterThe Holy Roman Empire was neither Holy, nor Roman, nor an Empire. Discuss. 03:20, 5 August 2011 (UTC)
- American Telephone and Telegraph (AT&T) links to a "This Page Has Been Deleted" page. lol --Roofus (talk) 03:49, 5 August 2011 (UTC)
- To be fair, a bunch of those are to American History homework pages, which should not have mainspace links to them. That it is obvious to anyone that they should be in a different namespace seems to have escaped Andy, however. I really don't know why he's averse to other namespaces so much of the time (except for things like those contests - man do I miss those). I guess a separate namespace sounds like the sort of thing Wikipedia would do, therefore it is bad. I believe he has similar feelings for disambiguation pages. DickTurpis (talk) 03:53, 5 August 2011 (UTC)
- To be fair, a bunch of those are to American History homework pages The next group of 500 takes us all the way into the "C"'s. P-FosterThe Holy Roman Empire was neither Holy, nor Roman, nor an Empire. Discuss. 03:58, 5 August 2011 (UTC)
- I think Assfly has previously commented on why he doesn't use namespaces and it is something to do with Wikipedia doing it. Of course, it's clearly a cover for the fact he doesn't know how or simply can't be bothered. ONE / TALK 08:47, 5 August 2011 (UTC)
Actually a better link than P-Foster offers is this; because by editing the URL you can get up to 5000 entries per page (this applies to RC and user contribs as well) and find with just one click that there are 5586 orphaned pages. So the homework ones are a mere pimple on CP's ugly visage. ГенгисIs the Pope a Catholic? 10:35, 5 August 2011 (UTC)
- I tried to clean up the orphaned pages a couple years ago. There were just too many obscure pages that couldn't possibly link to anything, like "Atheism and Huffing Paint Fumes" Aboriginal Noise Oh, you want to hit people with garbage cans? 10:38, 5 August 2011 (UTC)
- Sorry but you need to find something more obscure for your example. "Atheism and Huffing Paint Fumes" as a Ken article would be cross-linked to a dozen other pages. ГенгисIs the Pope a Catholic? 10:45, 5 August 2011 (UTC)
- Such as 'Atheism and Sniffing Glue', 'Atheism and Shooting Smack', 'Atheism and Harmful Addiction'. All of which would have exactly the same content. EddyP Great King! Disaster! 12:00, 5 August 2011 (UTC)
- Actually, I'm noticing a large number of bird-related pages on the list.
Aboriginal Noise Oh, you want to hit people with garbage cans? 12:32, 5 August 2011 (UTC)
[edit] See also User:Conservative
That's what you wanted to write, wasn't it, Nobbykins? --JeevesMkII The gentleman's gentleman at the other site 04:08, 5 August 2011 (UTC)
- ^Like. ħuman 06:31, 5 August 2011 (UTC)
- It's collaborative. I've been trying to explain to Karajou for years, "Look, we know PsyGremlin runs a sockpuppet farm from South Africa. Now, we know User:TracyS is a PsyGremlin sock. TracyS has not violated any site rules. Just block the other PsyGremlin socks, and leave TracyS alone. That way, while he's behaving himself as TracyS, he's not running a sock. If TracyS gets abusive, we can handle him when that happens." nobsput down the toilet seat 16:33, 5 August 2011 (UTC)
http://rationalwiki.org/wiki/Conservapedia_talk:What_is_going_on_at_CP%3F/Archive251
CC-MAIN-2016-44
en
refinedweb
UR::Namespace - Manage collections of packages and classes

In a file called MyApp.pm:

use UR;
UR::Object::Type->define(
    class_name => 'MyApp',
    is => 'UR::Namespace',
);

Other programs, as well as modules in the MyApp subdirectory, can now put "use MyApp;" in their code, and they will have access to all the classes and data under the MyApp tree.

A UR namespace is the top-level object that represents your data's class structure in the most general way. After use-ing a namespace module, the program gets access to the module autoloader, which will automatically use modules on your behalf if you attempt to interact with their packages in a UR-y way, such as calling get(). Most programs will not interact with the Namespace, except to use its package.

my @class_metas = $namespace->get_material_classes();
Return a list of the UR::Object::Type class metadata objects that exist in the given Namespace. Note that this uses File::Find to find *.pm files under the Namespace directory and calls UR::Object::Type->get($name) for each package name to get the autoloader to use the package. It's likely to be pretty slow.

my @class_names = $namespace->get_material_class_names();
Return just the names of the classes produced by get_material_classes.

my @data_sources = $namespace->get_data_sources();
Return the data source objects it finds defined under the DataSource subdirectory of the namespace.

my $path = $namespace->get_base_directory_name();
Returns the directory path where the Namespace module was loaded from.

UR::Object::Type, UR::DataSource, UR::Context
http://search.cpan.org/~sakoht/UR-0.39/lib/UR/Namespace.pm
CC-MAIN-2016-44
en
refinedweb
Introduction

I think that adding a splash form to a Windows application is a good design decision, but these kinds of forms are also used for commercial, ergonomic and psychological purposes. Splash screens are often used to display information to the final user while an application is loading, giving a glance at what is about to be used, and this short moment can deeply affect the user either positively or negatively. In this tutorial, I propose a method for doing this programmatically.

Within the VS2005-VB.NET IDE the problem is solved, because we can add a splash screen as an item proposed by the IDE, as shown in figure 1 below:
Figure 1
But within the C# IDE this kind of item is not provided, as figure 2 shows:
Figure 2
To resolve the problem I invite you to follow these steps:

First of all we create a new Windows application within a C# IDE. In this case we can use Visual Studio 2005, VS C# Express Edition (which you can download from Microsoft's official web site) or SharpDevelop 2.0, which is an open-source IDE for developing .NET applications that you can download for free.

Add a new project by clicking File --> New project
Figure 3
Figure 4

So now we come to the important stage in this tutorial. In the Solution Explorer, as shown below,
Figure 5
select 'Program.cs' and open it so that the code appears. It is from this point that the application will be started. The following code will be displayed in the editor:
Figure 6

When we fire up the program, Form1 appears due to this fragment of code:

Application.Run(new Form1());

In order to display Form2, which is the splash form in this case, for a few moments before showing Form1, you have to implement the following code instead of the one above (see figure 6):

using System;
using System.Collections.Generic;
using System.Windows.Forms;

namespace SplashProject
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>

        /* A new static timer instance; it is important for starting and handling
           the time during which the splash form is displayed */
        static System.Windows.Forms.Timer myTimer = new System.Windows.Forms.Timer();
        // counter is an integer that helps us fix the number of seconds during which the splash form is displayed
        static int counter = 0;
        // b is a boolean indicating whether the splash form should be disposed
        static bool b = false;

        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);

            /*----------------- The modification of the Main method's behavior ----------------*/
            // New instance of Form2
            Form2 oForm2 = new Form2();
            // Add an event handler to the myTimer.Tick event
            myTimer.Tick += new EventHandler(myTimer_Tick);
            // Set myTimer's interval to 1000 ms, which is 1 second
            myTimer.Interval = 1000;
            // Start myTimer
            myTimer.Start();

            /* This is the conditional loop during which Form2 is displayed as a splash form */
            while (b == false)
            {
                // Display oForm2
                oForm2.Show();
                /* This line is very important because it is responsible for keeping
                   oForm2 painted and responsive during the time it is displayed */
                Application.DoEvents();
                if (b == true)
                {
                    /* Dispose of oForm2 when the time is up */
                    oForm2.Dispose();
                    /* Leave the while loop */
                    break;
                }
            }
            Application.Run(new Form1());
        }

        /* This is the method that handles myTimer.Tick */
        static void myTimer_Tick(Object sender, EventArgs e)
        {
            // Stop the timer
            myTimer.Stop();
            /* Here we choose 4 seconds, or a little more, to display the splash form oForm2 before disposing of it */
            if (counter < 4)
            {
                // Re-enable the timer
                myTimer.Enabled = true;
                // Increment the counter
                counter++;
            }
            // The condition under which the splash form will be disposed
            if (counter == 4)
                b = true;
        }
    }
}

Indeed, if you want to set the position and the size of your splash form programmatically, you can implement Form2_Load(object sender, EventArgs e) in either of two ways; choose one of those shown below:

// The first way
private void Form2_Load(object sender, EventArgs e)
{
    this.Location = new Point(200, 200);
    this.Size = new Size(300, 300);
}

// The second way
private void Form2_Load(object sender, EventArgs e)
{
    this.Top = 200;
    this.Left = 200;
    this.Width = 300;
    this.Height = 300;
}

This is my way of adding a splash screen, or splash form, programmatically using C# 2.0.
http://www.c-sharpcorner.com/UploadFile/yougerthen/add-a-splash-form-to-a-windows-application/
CC-MAIN-2016-44
en
refinedweb
Hello, I am pretty new to Visual Studio. I have just installed the VS2010 Integrated Shell and decided to give it a try as an IDE for working with Python. I had Python 2.7.2 installed on my machine, along with IPython 0.10.2 (from a Python distro), and then installed Python Tools for Visual Studio. When I try to execute some simple code I get the following error message:

Python interactive window. Type $help for a list of commands.
Resetting execution engine
Failed to launch REPL process
Exception AttributeError: AttributeError("'NoneType' object has no attribute 'platform'",) in <function _remove at 0x01D123B0>
Unhandled exception in thread started by <bound method BasicReplBackend._repl_loop of <__main__.BasicReplBackend object at 0x01C9DF90>> ignored
Traceback (most recent call last):
File "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\1.1\visualstudio_py_repl.py", line 135, in _repl_loop
close failed in file object destructor: sys.excepthook is missing
lost sys.stderr

In the end I get the actual output of my code (some prints to see if the code is running). Any idea why I get this error message and how I can get rid of it? Thanks.

Have you customized site.py in any way, or done something to make IPython the default? If you start a normal interpreter in a console window and do:
import sys
sys.platform
Do you get something outputted or do you get an exception? Also, to enable IPython in VS you'll need to go to Tools->Options->Python Tools->Interactive Windows, select Python 2.7, and make sure IPython mode is selected for the interactive mode. Also you may need to install pyzmq, but I'm guessing we'll need to get you past this first exception either way.

Thank you for your fast reply. I have not had much time available since then to go through this issue (I am not a professional programmer), but I did find out that I needed to install the Python modules in Windows Vista as a machine administrator. Apparently the VS2010 Integrated Shell + PTVS 1.1 now works OK most of the time, although when I do a reset of the Python 2.7 interactive window I still get error messages from time to time. I get the impression that they are related to a startup script, but I do not know if it exists, nor do I know where it is located. Regrettably, I have not been able to install IPython on my machine so far, so I decided to drop it for the time being. I would like very much to use it both as a standalone application and inside the VS2010 Integrated Shell. Got any clues as to how I can install it in Windows Vista? Apparently there is a bug with the installer...
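For reference, the check suggested above is just two lines at a plain Python prompt; on a healthy CPython install on Windows it prints a platform string rather than raising an exception (a minimal illustration, not taken from the thread):

import sys

# A working CPython on Windows reports 'win32' here; an exception
# instead would point at a broken site.py or interpreter setup.
print(sys.platform)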
https://pytools.codeplex.com/discussions/353609
CC-MAIN-2016-44
en
refinedweb
/*
 * $Id: DefaultGroovyStaticMethods.java,v 1.3 2004/05/18 06:15:45 spullara Exp $
 *
 * Copyright 2003 (C) James Strachan and Bob Mcwhirter. All Rights Reserved.
 *
 * Redistribution and use of this software and associated documentation
 * ("Software"), with or without modification, are permitted provided that the
 * following conditions are met:
 * 1. Redistributions of source code must retain copyright statements and
 *    notices. Redistributions must also contain a copy of this document.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. The name "groovy" must not be used to endorse or promote products
 *    derived from this Software without prior written permission of The Codehaus.
 *    For written permission, please contact info@codehaus.org.
 * 4. Products derived from this Software may not be called "groovy" nor may
 *    "groovy" appear in their names without prior written permission of The
 *    Codehaus. "groovy" is a registered trademark of The Codehaus.
 * 5. Due credit should be given to The Codehaus -
 *
 * THIS SOFTWARE IS PROVIDED BY THE CODEHAUS AND CONTRIBUTORS ``AS IS'' AND ANY
 * EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE CODEHAUS OR ITS CONTRIBUTORS BE LIABLE FOR
 * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
 * DAMAGE.
 *
 */
package org.codehaus.groovy.runtime;

import groovy.lang.Closure;

import java.util.regex.Matcher;

/**
 * This class defines all the new static groovy methods which appear on normal JDK
 * classes inside the Groovy environment. Static methods are used with the
 * first parameter as the destination class.
 *
 * @author Guillaume Laforge
 * @version $Revision: 1.3 $
 */
public class DefaultGroovyStaticMethods {

    /**
     * Start a Thread with the given closure as a Runnable instance.
     *
     * @param closure the Runnable closure
     * @return the started thread
     */
    public static Thread start(Thread self, Closure closure) {
        Thread thread = new Thread(closure);
        thread.start();
        return thread;
    }

    /**
     * Start a daemon Thread with the given closure as a Runnable instance.
     *
     * @param closure the Runnable closure
     * @return the started thread
     */
    public static Thread startDaemon(Thread self, Closure closure) {
        Thread thread = new Thread(closure);
        thread.setDaemon(true);
        thread.start();
        return thread;
    }

    /**
     * Get the last hidden matcher that the system used to do a match.
     *
     * @param matcher
     * @return
     */
    public static Matcher getLastMatcher(Matcher matcher) {
        return RegexSupport.getLastMatcher();
    }
}
http://kickjava.com/src/org/codehaus/groovy/runtime/DefaultGroovyStaticMethods.java.htm
CC-MAIN-2016-44
en
refinedweb
Jens Finke <jens triq net> writes: > I'd like to see the standard as open and simple as possible, so that an > app can add further information to a thumb. That said, I like the idea > that we have only a few required info chunks and that specific > applications can add some more in their own namespace, like > GIMP::ImageType or GIMP::HasLayer. > Suggest also requiring all extensions to be in an extension namespace, traditionally X- i.e. X-GIMP::ImageType. Another very useful thing would be a "validator" app that validated generated thumbnail directories for spec-compliance. I really want this for .desktop files too, will write it soon. The only way we're going to get rid of some of the horkage in various .desktop files. Havoc
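A validator of the kind Havoc describes could be quite small. The sketch below is purely illustrative and is not the tool he mentions: the required key names are taken from the thumbnail spec as it later settled (Thumb::URI, Thumb::MTime), the default directory and the use of Pillow for reading PNG text chunks are assumptions, and the only extension rule it enforces is the X- namespace suggested above.

import os
import sys
from PIL import Image  # assumes a recent Pillow install

# Key names assumed from the (later) freedesktop thumbnail spec.
REQUIRED = ("Thumb::URI", "Thumb::MTime")

def validate(directory):
    problems = []
    for name in os.listdir(directory):
        if not name.endswith(".png"):
            continue
        path = os.path.join(directory, name)
        text = Image.open(path).text  # PNG tEXt/zTXt key/value chunks
        for key in REQUIRED:
            if key not in text:
                problems.append("%s: missing required key %s" % (name, key))
        for key in text:
            if key not in REQUIRED and not key.startswith(("Thumb::", "X-")):
                problems.append("%s: extension key %s is not in an X- namespace" % (name, key))
    return problems

if __name__ == "__main__":
    # Directory path is an assumption; pass your thumbnail directory explicitly.
    target = sys.argv[1] if len(sys.argv) > 1 else os.path.expanduser("~/.thumbnails/normal")
    for problem in validate(target):
        print(problem)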
https://listman.redhat.com/archives/xdg-list/2001-August/msg00020.html
CC-MAIN-2016-44
en
refinedweb
New submission from Alexander Belopolsky <belopolsky at users.sourceforge.net>:

""" As an aside, I dislike the fact that the datetime module uses a C 'int' for date ordinals, and clearly assumes that it'll be at least 32 bits. int could be as small as 16 bits on some systems (small embedded systems?). But that's another issue. """ -- Mark Dickinson

A comment and an assertion at the top of the module suggest that this was deliberate.

/* We require that C int be at least 32 bits, and use int virtually
 * everywhere. In just a few cases we use a temp long, where a Python
 * API returns a C long. In such cases, we have to ensure that the
 * final result fits in a C int (this can be an issue on 64-bit boxes).
 */
#if SIZEOF_INT < 4
# error "datetime.c requires that C int have at least 32 bits"
#endif

However, since the ranges of all integers are well defined in this module, there is little to be gained from the uncertainty about the sizes of int and long. (For example, the allowed range of dates will not magically increase on a platform with 64-bit ints.) I propose using the explicitly sized C99 types int32_t and int64_t (or rather their blessed-for-use-in-Python equivalents, PY_INTXX_T) throughout the module.
----------
assignee: belopolsky
components: Extension Modules
messages: 108222
nosy: belopolsky, mark.dickinson
priority: low
severity: normal
stage: needs patch
status: open
title: datetime module should use int32_t for date/time components
type: feature request
versions: Python 3.2
_______________________________________
Python tracker <report at bugs.python.org>
_______________________________________
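For a concrete sense of the ranges involved, a quick check in any stock CPython (an illustration only, not part of the original report) confirms that the largest date ordinal fits comfortably in a signed 32-bit integer:

import datetime

# The proleptic Gregorian ordinal of the largest representable date.
max_ordinal = datetime.date.max.toordinal()   # date(9999, 12, 31)
print(max_ordinal)                            # 3652059
# Well inside the signed 32-bit range, so a fixed-width 32-bit type
# covers every valid date ordinal regardless of the platform's int size.
print(max_ordinal < 2**31 - 1)                # True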
https://mail.python.org/pipermail/new-bugs-announce/2010-June/007883.html
CC-MAIN-2016-44
en
refinedweb
When I was developing a web application using maven, netbeans was deadlocked. My environment is the following:
OS: Mac OS X 10.6.2
NetBeans: 6.8
Java: 1.6.0_17
I attach the relevant thread dump.
Created attachment 93408 [details] thread dump
*** Bug 179997 has been marked as a duplicate of this bug. ***
Reassigning to default owner. Can you check this? Looks like a bug in XAM/XDM to me, but not sure. Looks a bit different from bug #191796.
at org.netbeans.modules.xml.xdm.XDMModel.sync(XDMModel.java:158)
- locked <0x0000000115457338> (a org.netbeans.modules.xml.xdm.XDMModel)
at org.netbeans.modules.xml.xdm.xam.XDMAccess.sync(XDMAccess.java:141)
at org.netbeans.modules.xml.xam.AbstractModel.sync(AbstractModel.java:259)
The code in XDMAccess is called under a XAM transaction. Since we support both poms with a namespace declaration and without it, POMComponentFactoryImpl.getQName() needs to query the namespace of the current file. The POMModelVisitor.visit(Project) method checks in the beginning if synchronization is required; however, only that piece of code is part of a transaction, and the following reading operations performed in that method are not. During the execution of the method the underlying model is created/updated.
Reassigning to xml/xam; please advise what the appropriate threading model is with regard to model synchronization.
Report from old NetBeans version. Due to code changes since it was reported, likely not reproducible now. Feel free to reopen if it happens in 8.0.2 or 8.1.
https://netbeans.org/bugzilla/show_bug.cgi?id=179592
CC-MAIN-2016-44
en
refinedweb
Look at the KS module, for instance. KS is a repository of handy functions which I have accumulated over the last nine years. They're useful and mature functions, but scarcely documented and need to be broken out into category-specific submodules. It's not "the way" to have functions for converting Brinell Hardness units to Vickers Hardness units rubbing elbows with file-locking functions and networking functions and string manipulation functions, all in the same module. The unit conversion functions need to go into modules in the "Physics" namespace, and the network functions need to go into modules in the "Net" namespace, etc. The documentation needs to be brought up to PAUSE standards as well. It's work I knew I needed to do, but it was easy to put it off as long as I didn't have a PAUSE account. But now that I do, there's no more putting it off! I just need to find time to do it. The first module I publish might be a relatively young one, a concurrency library called Dopkit. It's something I've been wanting to write for years, but I just finished writing and debugging it yesterday. There are many concurrency modules in CPAN already, but most of them require considerable programming overhead and require that the programmer wrap their head around the way concurrency works. These are reasonable things to do, but I've often thought it would be nice if it could be made trivially easy for the programmer to make loop iterations perform in parallel, without changing from the familiar loop construct. Dopkit ("Do Parallel Kit") provides functions that look and act like the familiar perl loop syntax -- do, for, foreach, and while -- and chops up the loop into parts which execute concurrently on however many cores the machine has. The idea is to put very few demands on the programmer, who needs only to load the module, create a dopkit object, and then use dop(), forp(), foreachp, and whilefilep() where they'd normally use do, for, foreach, and while(defined(<SOMEFILE>)). There are some limitations to the implementation, so the programmer can't use Dopkit *everywhere* they'd normally use a loop, but within its limitations Dopkit is an easy and powerful way to get code running on multiple cores fast. Dopkit suffers from the same documentation deficit as KS, but at least it's already "categorized" -- as soon as I can get the documentation written, it should be published as Parallel::Dopkit. KS will take significant refactoring. Most of the perl in my codecloset is embarrassingly primitive (I wrote most of it in my early days of perl, before I was very proficient with it), but there are a few other modules on my priority list to get into shape and publish. My "dy" utility has proven a tremendously useful tool over the years, but is in desperate need of rewriting. It started out life as a tiny throwaway script, and grew features organically without any design or regard for maintainability. I've been rewriting its functionality in two modules, FileID and DY (which should probably get renamed to something more verbose). When they're done, the "dy" utility itself should be trivially implementable as a light wrapper around these two modules. Another tool I use almost every day is "select", which is also in need of being rewritten as a module. I haven't started that one yet. In other news I stopped dorking around with FUSE and linux drivers, and dug into the guts of my distributed filesystem project. 
Instead of worrying about how to make it available at the OS level for now, I've simply written a perl module for abstracting the perl filesystem API. As long as my applications use the methods from my FS::AnyFS module instead of perl's standard file functions, transitioning them from using the OS's "native" filesystems to the distributed filesystem should be seamless. This is only an interim measure. I want to make the DFS a full-fledged operating-system-level filesystem eventually, but right now that's getting in the way of development. Writing a linux filesystem driver will come later. Right now I'm quite pleased to be spending my time on the code which actually stores and retrieves file data.

Questions posted by other slashdot users focussed my attention on how I expect to distinguish my DFS from the various other distributed filesystem projects out there (like BTRFS and HadoopFS). I want it to do a few core things that others do not:

(1) I want it to utilize the Reed-Solomon algorithm so it can provide RAID5- and RAID6-like functionality. This will produce a data cluster which could lose any two or three (or however many the system administrators specify) servers without losing the ability to serve data, without the need to store all data in triplicate or quadruplicate. BTRFS only provides RAID0, RAID1, and RAID10 style redundancy -- if you want the ability to lose two BTRFS servers without losing the ability to serve all your data, all data has to be stored in triplicate. That is not a limitation I'm willing to tolerate. Similarly, the other distributed filesystems have "special" nodes which the entire cluster depends on. These special servers represent SPOFs -- "Single Points Of Failure". If the "master" server goes down, the entire distributed filesystem is unusable. Avoiding SPOFs is a mostly-solved problem. For many applications (such as database and web servers), IPVS and Keepalived provide both load-balancing and rapid failover capability. There's no reason not to have similar rapid failover for the "special" nodes in a distributed filesystem.

(2) I want the filesystem to be continuous. Adding storage, replacing hardware, or allocating storage between multiple filesystem instances should not require interruption of service. This is a necessary feature if the filesystem is to be used for mission-critical applications expected to stay running 24x7. Fortunately I've done a lot of this sort of thing, and haven't needed to strain thus far to achieve it. (On a related note, I still chuckle at the memory of Brewster calling me in the middle of the night from Amsterdam in a near-panic, following The PetaBox's first deployment. The system kept connecting to The Archive's cluster in San Francisco and keeping itself in sync, and nothing Brewster could do would make it stop. The data cluster's software interpreted all of his attempts to turn the service off as "system failures" which it promptly auto-corrected and restored. It was a simple matter to tell the service to stop, but Brewster has a thing against documentation.)

(3) I want the filesystem to perform well with large numbers of small files. This is the hard part for filesystems in general, and it's something I've struggled with for years on production systems.
None of the existing filesystems handle large sets of very small files very well, and most distributed filesystems such as RAID5 do not address the problem (and in some ways compound it -- as RAID5 arrays get larger, the minimum data that must be read/written for any operation also gets larger). In my experience, most real-life production systems have to deal with large numbers of small files. Just running stat() on a few million files is a disgustingly resource-intensive exercise. RAID1 helps, but the CPU quickly becomes the bottleneck. One of my strongest motivations for developing my own filesystem is to address this problem. I don't want to be struggling with it for the next ten years. I am tackling this problem in three ways:

First, filesystem metadata is replicated across multiple nodes, for concurrent read-access.

Second, filesystem metadata is stored in a much more compact format than the traditional inode permits. Many file attributes are inherited from the directory, and attribute data is only stored on a per-file basis when it is different from the directory's. This should improve its utilization of main and cache memories.

Third, the filesystem API provides low-level methods for performing operations on files in batches, and implementations of standard filesystem functions (such as stat()) could take advantage of these to provide superior performance. For instance, when stat() was called to return information about a file, the filesystem could provide that information for many of the files in the same directory. This information would be cached in the calling process's memory space by the library implementing stat() (with mechanisms in place for invalidating that cache should the filesystem metadata change), and subsequent calls to stat() would return locally cached information when possible. This wouldn't help in all situations, but it would help when the calling application was trying to stat() all of the files in a directory hierarchy -- a common case where high performance would be appreciated.

I don't know how long it will take to implement such a system. What work I've already done is satisfying, but it just scratches the surface of what needs to be done, and I can barely find time to refactor and comment my perl modules, much less spend hard hours on design work! But I'll keep at it until it's done or until the industry comes up with something which renders it moot.
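To make the third point concrete, here is a deliberately tiny sketch of per-directory metadata caching (written in Python purely for brevity; the journal's own code is Perl, and none of these names come from it). The idea is simply that the first stat of any file in a directory pulls metadata for all of its siblings in one pass, and later lookups are answered from the cache:

import os
from collections import defaultdict

# Toy illustration only: a per-directory metadata cache. Real cache
# invalidation (noticing when the filesystem changes) is omitted.
_dir_cache = defaultdict(dict)

def cached_stat(path):
    """Return an os.stat_result for path, filling the whole directory on a miss."""
    directory, name = os.path.split(path)
    cache = _dir_cache[directory]
    if name not in cache:
        # One scan populates metadata for every sibling, so stat()-ing
        # all the files in a directory touches the directory only once.
        with os.scandir(directory or ".") as entries:
            for entry in entries:
                cache[entry.name] = entry.stat(follow_symlinks=False)
    return cache[name]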
https://slashdot.org/~TTK+Ciar/journal
CC-MAIN-2016-44
en
refinedweb
Happy New Year all; I hope you had as pleasant a New Year's Eve as I did. Last time on FAIC I described how the C# compiler first uses overload resolution to find the unique best lifted operator, and then uses a small optimization to safely replace a call to Value with a call to GetValueOrDefault(). The jitter can then generate code that is both smaller and faster. But that's not the only optimization the compiler can perform, not by far. To illustrate, let's take a look at the code you might generate for a binary operator, say, the addition of two expressions of type int?, x and y:

int? z = x + y;

Last time we only talked about unary operators, but binary operators are a straightforward extension. We have to make two temporaries, so as to ensure that side effects are executed only once: [1. More specifically, the compiler must ensure that side effects are executed exactly once.]

int? z;
int? temp1 = x;
int? temp2 = y;
z = temp1.HasValue & temp2.HasValue ?
    new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault()) :
    new int?();

A brief aside: shouldn't that be temp1.HasValue && temp2.HasValue? Both versions give the same result; is the short circuiting one more efficient? Not necessarily! AND-ing together two bools is extremely fast, possibly faster than doing an extra conditional branch to avoid what is going to be an extremely fast property lookup. And the code is certainly smaller. Roslyn uses non-short-circuiting AND, and I seem to recall that the earlier compilers do as well. Anyway, when you do a lifted addition of two nullable integers, that's the code that the compiler generates when it knows nothing about either operand. Suppose however that you added an expression q of type int? and an expression r of type int [2. Roslyn will also optimize lifted binary operator expressions where both sides are known to be null, where one side is known to be null, and where both sides are known to be non-null. Since these scenarios are rare in user-written code, I'm not going to discuss them much.]:

int? s = q + r;

OK, reason like the compiler here. First off, the compiler has to determine what the addition operator means, so it uses overload resolution and discovers that the unique best applicable operator is the lifted integer addition operator. Therefore both operands have to be converted to the operand type expected by the lifted operator, int?. So immediately we have determined that this means:

int? s = q + (int?)r;

Which of course is equivalent to

int? s = q + new int?(r);

And now we have an addition of two nullable integers. We already know how to do that, so the compiler generates:

int? s;
int? temp1 = q;
int? temp2 = new int?(r);
s = temp1.HasValue & temp2.HasValue ?
    new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault()) :
    new int?();

And of course you are saying to yourself well that's stupid. You and I both know that temp2.HasValue is always going to be true, and that temp2.GetValueOrDefault() is always going to be whatever value r had when the temporary was built. The compiler can optimize this to:

int? s;
int? temp1 = q;
int temp2 = r;
s = temp1.HasValue ?
    new int?(temp1.GetValueOrDefault() + temp2) :
    new int?();

Just because the conversion from int to int? is required by the language specification does not mean that the compiler actually has to generate code that does it; rather, all the compiler has to do is generate code that produces the correct results! [3.
A fun fact is that the Roslyn compiler’s nullable arithmetic optimizer actually optimizes it to temp1.HasValue & true ? ..., and then Roslyn’s regular Boolean arithmetic optimizer gets rid of the unnecessary operator. It was easier to write the code that way than to be super clever in the nullable optimizer.] Next time on FAIC: What happens when we throw some lifted conversions into the mix? Eric, I love your blog and am a long-time reader, and I was wondering if you’d be willing to install a footnote plugin (such as this one:) to make jumping around a little easier? Great post, by the way, I loving reading about what’s going on under the hood🙂 Thanks for the suggestion. I am new at running a wordpress blog and the array of available plugins is somewhat bewildering. I use a lot of footnotes, so I’ll check that out! I’ll also probably install a “markdown in comments” plugin at some point. That would also be quite nice–I think one with a preview window would be great for those of us who aren’t so good at Markdown. If I could be so bold, I’d like to suggest also widening the page, or at least the comment box. The first comment is okay but replies-to-replies are tiny! There also seems to be a limit to how many comments in a thread: Not the lack of a reply button! “AND-ing together two bools is extremely fast” But it might give a wrong result! Is HasValue guaranteed to return either (bool)0 or (bool)1? If it sometimes returns (bool)2 the AND might produce a false negative. You can’t cast ints to bools like that in C#: > (bool)2 (1,1): error CS0030: Cannot convert type ‘int’ to ‘bool’ I think it’s guaranteed that ‘&’ will work on bools as expected. A bool has an integer representation (I know that from Microsoft Pex – Pex can actually generate bools that are not 0 or 1 and cause the program under test to fail). The CLR allows non-0-or-1 booleans. What about this?: [MethodImpl(MethodImplOptions.NoInlining)] public static unsafe bool ByteToBoolean(byte b) { return (bool) *&b; } ByteToBoolean(1) & ByteToBoolean(2) Will return false! Unsafe pointers and related conversion make pretty much anything rational go out the window. Unsafe code is not the explanation here. It is an implementation detail See the last section of (“This is madness!”). Oh, very interesting, I didn’t know that! In your example case, though, you’re running unsafe code–I assume everything with the nullable types is safe. When you’re in an unsafe context things do get a bit trickier. Perhaps a better way to put it would be “in a safe context, ‘&’ on bools works as expected”? Or are there times even then where you can get this behavior? What this method does on the “inside” is an implementation detail. Unsafe code or not does not matter. The CLR provides the same facility – you can convert any byte losslessly to a boolean. This is perfectly defined and deterministic. “Mixed” booleans are a perfectly valid element of the CLR. They are allowed to occur anywhere. The C# Reference on the operator () states: “For bool operands, & computes the logical AND of its operands; that is, the result is true if and only if both its operands are true.” C# isn’t C. & when used on bools isn’t “bitwise AND”, it’s “non-short-circuiting logical AND”. (Scratch the above, apparently the IL emitted is the same for integers and booleans. This might actually be a genuine bug; or one could argue C# booleans and CLR booleans aren’t the same concept, although I thought this sort of interoperability would be dealt with somewhere.) 
The C# Reference and Specification both say the only values a bool can have are false and true. If all bools have values of false or true, bitwise AND and non-short-circuiting logical AND are the same thing. I think the spec simply does not address this. It sounds suspiciously like what C calls a trap representation. In C, when reading a trap representation (such as, on some systems, a byte with a value of 2 through an lvalue of type _Bool), the behaviour is undefined, any behaviour is permitted. Even though the value 2 is a valid value for any 8-bit register that the generated machine code might use, the compiler is allowed to assume that the value isn’t 2, isn’t 3, isn’t any larger value than that. So, for example, b > true might evaluate to false at compile time, yet so would b < true or b == true at run time. As a practical data point, C#’s & when used with two bools compiles to a bitwise AND on my system, meaning (bool)1 & (bool)2 evaluates to false, and (bool)2 & (bool)2 evaluates to (bool)2, even though your comment suggests the result should be the same for both. The documentation for HasValue is clear: the return value is either false, or true. So unless (bool)2 compares equal to true on some systems (not on mine), it is not a valid return value, and the compiler does not have to worry about that possibility. This reminds me of something I’ve always wondered- Why does the C# compiler not do any inlining itself? Why leave it all to the JIT? It seems like you should do as many optimizations at compile time as possible. I imagine Eric could give a better reply; but my guess would be that a fair amount of the information that guides whether to inline or not is only available at JIT time. Inlining takes up more code space (possibly lowering performance, due to cache misses), removes the speed penalty of a call/return, and potentially saves pushing function arguments onto the stack. Until runtime, you don’t know how much extra code space inlining will use (depends on the platform and/or CPU in question). You also don’t know how many registers you have available, and hence whether you actually *can* save the time of pushing function arguments onto the stack (if inlining causes you to run out of registers, then you’re going to have to spill some onto the stack in any case). Some architectures will take longer for a call/return which could affect your decision on whether to inline. JIT time is the point where you have all the information you need on whether to inline. At compile time you don’t, so the safest option is to leave it to the JIT. Actually no, I don’t think I can give a better answer than that.🙂 I understand why some inlining can only be done by the JIT, but do those considerations really come into play in these single statement cases, such as GetValueOrDefault()? To me, it seems like a reasonable assumption that single statement methods always make more sense to be inlined. Thus, it is something the C# compiler could do. The compiler is essentially doing a form of inlining when it performs the rewrites you detail in your post Eric (as opposed to generated a reusable method and calling that each time). 
Eric: thanks!😉 Sam: I can imagine two reasons why the C# compiler might not inline ‘simple’ methods (single statement might not be the best way to describe them, a single statement could be pretty complex!); 1) If the statement is really that simple, then the JITter isn’t going to spend much time analysing it, so you’re not saving much JIT time by analysing it in advance; why bother special casing it? It complicates the compiler to optimise only simple cases that weren’t causing speed problems anyway. 2) It removes the ability to do things like set a breakpoint on the method in question – because it effectively doesn’t exist in the IL if it’s been inlined everywhere…! I think (not sure?) that the JIT normally avoids inlining code if there’s a debugger attached, for that reason. Inlining a method also removes the ability to replace some, but not all, dlls in a compiled application (i.e., makes full compiles necessary). Though if you “know” they won’t change (e.g., constants)… Although there are many cases where only the JIT can know whether to inline something, I would think that having the compiler inline things like struct property getters would allow it to eliminate many redundant copy operations, especially when they are invoked in read-only contexts. For example, if one is enumerating a Dictionary(of Guid, Rectangle) and has a KeyValuePair(of Guid, Rectangle) called kvp, and if the compiler doesn’t analyze property getters, accessing kvp.Value.X will require making a copy of all 32 bytes of kvp, passing a byref to a method which fetches 16 of them, storing those to another temporary stack spot, and passing a byref to a routine that fetches four bytes. Even if the methods are inlined, one is still stuck with the overhead of the copy operations. Even though the purpose of the code is simply to fetch four bytes from kvp, it has to make a slow copy of kvp, then a somewhat faster (but still RAM-based) copy of kvp.Value, before it can finally read the four bytes it was actually interested in. I don’t think any realistic level of JIT inlining could yield anything close to the obvious optimization of simply reading the four bytes directly from the original struct, but I would think a compiler that could recognize trivial struct property getters could do so. Inlining at compile time can’t work across assemblies, since you don’t know what code that other assembly might contain at runtime. “so you’re not saving much JIT time by analysing it in advance; why bother special casing it?” – it’s a micro-optimization, just like the compiler using GetValueOrDefault() is a micro-optimization. Compilers should micro-optimize anywhere they can, because when you add those micro-optimizations up over millions/billions lines of code across the globe, you’re saving real time/energy/karma. As with all compiler optimizations, it wouldn’t be present if you build in Debug mode. As it stands, the compiler can already do a significant amount of code rewriting when optimizations are enabled (for example, removing unused variables), making some breakpoints in Release mode code impossible. Pingback: The Morning Brew - Chris Alcock » The Morning Brew #1266 Does it actually do the assignment to a temporary if r is actually an int or just if r is an expression that can have side effects? That’s a great question that I am not going to explore in depth in this series. Briefly, the problem is that determining when an expression either *produces* or *consumes* a side effect can be quite tricky. 
For example, reading a local variable never produces a side effect, but another expression might *write* to a local variable as a side effect, and therefore the read must not be re-ordered with respect to the write. The Roslyn and original recipe compilers treat constants as expressions that do not need to be stored in temporaries; pretty much everything else is put into some kind of temporary. Pingback: Nullable micro-optimization, part four | Fabulous Adventures In Coding There really was rather little need to be uncertain about what the classic compiler did. You may be familiar with Joseph Albahari’s excellent Linqpad utility. Prior to Roslyn’s REPL it was the best way to to test a C# expression or fragment without creating a whole visual studio project, or calling the compiler by hand. One of the nice things it does is provides the disassembly of the method(s) you create. So to test this one, I simply switched to “C# program mode” and typed in a simple method whose body was “return a+b”, and whose parameters and return value had type ‘int?’. Then I click run (which ran the empty Main method) and looked at the disassembly for the method. Sure enough it the classic compiler does use bitwise and as you predicted. The whole thing took less than a minute to check. It literally took me several times as long to write this post as it did to check that. Hope this helps. Pingback: When would you use & on a bool? | Fabulous adventures in coding
https://ericlippert.com/2013/01/03/nullable-micro-optimization-part-three/
CC-MAIN-2016-44
en
refinedweb
On 06/04/2013 11:32 AM, Jim Fehlig wrote: > Eric Blake wrote: >> On 06/04/2013 08:43 AM, Jim Fehlig wrote: >> >>> Only install nwfilter example XML files when WITH_NWFILTER >>> is defined. >>> >> >> Does this require any corresponding libvirt.spec.in file changes? >> > > I don't think so. I stumbled across this issue doing a client-only > package build, where WITH_NWFILTER is not defined yet the XML examples > get installed. I'm looking at examples like this in the spec file: %if ! %{with_python} rm -rf $RPM_BUILD_ROOT%{_datadir}/doc/libvirt-python-%{version} %else rm -rf $RPM_BUILD_ROOT%{_datadir}/doc/libvirt-python-%{version}/examples %endif but note that those files are also conditionally called out later under %files python. My initial concern was whether we need a corresponding %if ! %{with_nwfilter} clause which removes the nwfilter example files when building an rpm without nwfilter. But looking for where those files get installed, I only found the recursive: %doc examples/xml which merely copies into the rpm all doc files that got installed; and your change is to the Makefile to not install them in the first place. Different from the python examples, the examples/xml files are all installed into a single sub-package. So it looks like creating an rpm of just the client libraries should still succeed, as it's not referencing any file name that didn't exist, nor stranding any files behind. I ran out of time to actually test a 'make rpm' of just a client build, but have convinced myself that: a) your change seems like it is clean, and b) we have time to fix it before 1.0.7 if further testing turns up anything.. I'm still reluctant to give ack, but now for a different reason. I'm not convinced that compiling client-only has any bearing on whether the nwfilter xml examples are useful, because it is not the client that talks to nwfilter in the first place, but libvirtd, and you can't know what capabilities the libvirtd will have that the client will be talking to. Does anyone else have an opinion? -- Eric Blake eblake redhat com +1-919-301-3266 Libvirt virtualization library Attachment: signature.asc Description: OpenPGP digital signature
https://www.redhat.com/archives/libvir-list/2013-June/msg00164.html
CC-MAIN-2016-44
en
refinedweb
Name: diC59631 Date: 12/03/98

Obviously the two below operations will not compile in the same java file.
import java.util.Date ;
import java.sql.Date ;
It would be nice to alias one such as below:
import java.util.Date ;
import java.sql.Date as SqlDate ;
Date => java.util.Date
SqlDate => java.sql.Date
This way, instead of leaving out the import java.sql.Date line and having to fully qualify the SQL Date class for every usage, you'd only have to use the SqlDate alias. This becomes a much larger issue with package names like:
com.fake1.western.division.plane.Fuel ;
com.fake2.tiger.landbased.vehicle.Fuel ;
Especially when you need to use both types of Fuel, but can only use the shortened version for one of them at most. "import .. as .." is not a new concept, and I'm surprised that it wasn't incorporated into the language specification from the beginning. (Review ID: 43613)
======================================================================
An alternative syntax is proposed in RFE 4478140:
import jlang=java.lang.*;
alias jlang=java.lang;
The need for an extra keyword is not clear. ###@###.### 2005-04-17 07:20:05 GMT
4983159 is now JDK-8061419
EVALUATION 4983159 is the master CR for type aliasing.
WORK AROUND
Name: diC59631 Date: 12/03/98
Always use fully qualified class names.
======================================================================
EVALUATION This is a minor bit of syntactic sugar. I am not convinced that it is very important in practice. The usual arguments about destabilizing the language, high bar for changes and creeping featurism apply. gilad.bracha@eng 1998-12-03
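The "import .. as .." precedent the submitter alludes to does exist in other languages; Python, for example, has had exactly this form for years (shown here purely as a point of comparison, not as part of the report):

# Python's version of the proposed feature: bind a module, or a single
# name from a module, to a local alias so that colliding names coexist.
import datetime as dt
from datetime import date as CalendarDate

d = CalendarDate(1998, 12, 3)   # aliased class name
print(d.isoformat())            # 1998-12-03
print(dt.MAXYEAR)               # the module alias remains usable: 9999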
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4194542
CC-MAIN-2016-44
en
refinedweb
I want to know about intercepting an incoming SMS for a specific keyword, e.g. "Hi", so that I can read the SMS containing "Hi" and delete it after reading it; if the message doesn't contain any such text, it should not be deleted and should instead be saved in the inbox. Guys, please help me; I am finding it very difficult to implement this functionality.

Look at BroadcastReceiver. This is dependent on the apps installed on the phone, but you can give your app priority for listening to messages. Although, when a notification is shown, the message won't be in the SMS database yet, so you will need to use abortBroadcast() to stop other apps being notified. See the example below:

public class MessageReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // One standard way to rebuild the SmsMessage from the broadcast's
        // "pdus" extra (this detail was not preserved in the original post).
        Object[] pdus = (Object[]) intent.getExtras().get("pdus");
        SmsMessage messages = SmsMessage.createFromPdu((byte[]) pdus[0]);
        if (messages.getMessageBody().contains("Hi")) {
            // Stop the broadcast so other SMS apps never see this message.
            abortBroadcast();
        }
    }
}

And you would need to declare the receiver in the manifest, like so:

<!-- Attribute values below are the usual ones; adjust the priority as needed. -->
<receiver android:name=".MessageReceiver">
    <intent-filter android:priority="100">
        <action android:name="android.provider.Telephony.SMS_RECEIVED" />
    </intent-filter>
</receiver>

Finally, make sure you have the RECEIVE_SMS permission in the manifest. Hope that helps!
https://codedump.io/share/idczkgqxBv1e/1/how-can-i-intercept-an-incoming-sms-with-a-specific-text
CC-MAIN-2016-44
en
refinedweb
Why is it, when I compile this, the cout statement runs twice at first? It does not do that when I have the rest of the code in there that is int. Any thoughts on why, or on what I am doing wrong?

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string *name;
    // int *votes;
    int i;
    int numVotes;
    cout << "how many voters: ";
    cin >> numVotes;
    name = new string[numVotes];
    for(i=0;i < numVotes; i++){
        cout <<"enter candidates' last names: ";
        getline(cin,name[i]);
    }
    for(i = 0;i < numVotes; i++){
        cout << name[i] << endl;
    }
    //moved rest of code that's integer and not having this problem out. Basically
    //the same thing: dynamically allocated votes of the candidates for a percentage later.
    delete [] name;
    system ("pause");
    return 0;
}
/**************************************************************************************/
// below bit of code (unfinished mind you) does not cout repeat..don't understand.
/*votes = new int[numVotes];
for(int i = 0;i < numVotes; i++){
    cout << "enter votes received : ";
    cin >> votes[i];
}*/
/*for(int x=0;x<6;x++){
    total = total + votes[x];
    perc[x] = (votes[x] * 100) / total;
    cout << "Candidate \t\t\t" << "Votes Received \t\t\t" << "% of Total Votes";
    cout << name[x] << "\t\t\t\t\t" << votes[x] << "\t\t\t\t" << perc[x] << endl;
}*/
/********************************************/
//delete [] votes;
https://www.daniweb.com/programming/software-development/threads/318681/c-string-for-loop-doubling-cout-why
CC-MAIN-2016-44
en
refinedweb
29 December 2011 05:09 [Source: ICIS news]
SINGAPORE (ICIS)--Japanese producer JX Nippon Oil & Energy settled the benzene Asian Contract Price (ACP) for January $95/tonne (€73/tonne) higher than the previous month's price, a company source said late on Wednesday.
The company reached the settlement at $1,060/tonne CFR (cost and freight)
The $95/tonne price jump was in line with the recent increases in the global benzene market. Spot benzene prices in
Prices were hovering at $1,050-1,060/tonne FOB (free on board)
($1 = €0.77)
http://www.icis.com/Articles/2011/12/29/9519435/jx-nippon-oil-settles-jan-benzene-acp-95tonne-higher.html
CC-MAIN-2014-15
en
refinedweb