The recent series of Why XYZ Is Not My Favourite Programming Language articles has been fun to do, and it’s been great to see the discussion in the comments (even if it’s mostly people saying that I am talking a load of fetid dingo’s kidneys). But I don’t want to extend that series beyond the point of diminishing returns, and it’s time to think about what it all means. As duwanis commented on the Ruby article, “I’m a bit lost as to the point of these posts”; now I want to try to figure out just what, if anything, I was getting at.
By the way, it’s been interesting how people respond to articles that are critical (however flippantly) of languages. Most of what I’ve written here on TRP has had comments pretty evenly balanced between “Yes, I know exactly what you mean” and “You are talking complete nonsense”, which seems about right to me; but comments on the NMFPL posts have almost all been telling me why I am wrong. It’s also been interesting to watch all the Reddit posts for these articles drop to zero, or at best stay at one: evidently people who like languages are keener to defend them than those who dislike them are to pile in — which is as it should be.
Reviewing the languages
First of all, let me say that all the languages I picked on are, or at least have been, good languages. I didn’t bother criticising BASIC or FORTRAN, or Tcl for that matter, because, well, everyone already knows they’re not going to save the world. (I didn’t criticise any of the functional languages because I don’t honestly feel that I yet know any of them well enough to do an honest job of it.)
So, to look at the positive, here are some reasons to like each of the languages I’ve been saying are not my favourites. In roughly chronological order:
- C (1972) was, depending on your perspective, either the first really expressive low-level language, or the first really efficient high-level language. Its historical importance, as the foundation of all but the earliest implementations of Unix, is immense. But, more than that, it has a crystalline elegance that few other languages approach. (I’ll be writing more about C in future articles.)
- C++ (1983), despite being more prone to abuse than any other language, can indeed be used as Stroustrup suggests, as “a better C”. It was also a very impressive technical achievement: to come so close to being object oriented while retaining binary compatibility with C is pretty astonishing. It solves that problem well, while leaving open the question of whether it was the right problem to solve.
- Perl (1987) was and is amazingly useful for just, you know, getting stuff done. It has a likeable humility, in that it was the first major language to make working together nicely with other languages a major goal, and its Swiss Army Chainsaw of text-processing methods was a huge and important pragmatic step forward. It’s not pretty, but it’s very effective.
- Java (1995) can be thought of as “a better C++”; and it is better in lots of important ways. It hugely reduces the amount of saying-it-twice that C++ source and header files require, the code is much cleaner, it is much harder to shoot yourself in the foot, and experience tells us that it scales well to very large projects with many programmers.
- JavaScript (1995) has proven its usefulness over and over again, despite being saddled with a hideous set of largely incompatible operating environments; and underneath that perplexing surface, as Douglas Crockford’s book The Good Parts explains, there is a beautiful little language struggling to get out.
- Ruby (1995) is in a sense not really a huge leap forward over previous languages; but it’s done the best job of any of them in terms of learning from what’s gone before. It really does seem to combine the best parts of Perl (string handling, friendliness towards other languages), Smalltalk (coherent and consistent object model), Lisp (functional programming support) and more.
Although there are plenty of other languages out there, these are the main contenders for the not-very-coveted position of My Favourite Programming Language: I am deliberately overlooking all the Lisps and other functional languages for now, as I just don’t know them well enough, and I am ignoring C# as a separate language because even highly trained scientists with sensitive instruments can’t tell it apart from Java; and PHP because it’s just Even Uglier Perl, and Visual BASIC for all the obvious reasons. (I don’t really have a good reason for leaving Python out, but I’m going to anyway.)
Some thoughts on Java
According to the Normalized Comparison chart on langpop.com, and also the same site’s Normalized Discussion Site results (at the bottom of the same page), Java is currently the most popular programming language of them all (followed by C, C++, PHP, JavaScript and Python), so in a sense it’s the reigning champion: if you want to be an advocate for some other language, you need to make a case for why it’s preferable to Java.
And it’s a good language. At the cost of some expressiveness, it tries to make itself foolproof, and it does a good job of it. In a comment on the recent frivolous Java article, Osvaldo Pinali Doederlein boldly asserted that “there are no major blunders in the Java language”. Surprisingly enough, I do more or less agree with that (though its handling of static methods is pretty hideous). I think that almost-no-major-blunders property is a big part of why it has been so widely adopted.
My main issue with Java is actually much more pervasive than any specific flaw: you’ll forgive me if I find this hard to tie down, but it’s just a sense that the language is, well, lumpen. Everything feels like it’s harder work than it ought to be: programming in Java feels like typing in rubber gloves.
An obvious example of this is what Hello World looks like in Java:
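public class SomeIrrelevantName {
    public static void main(String args[]) {
        System.out.println("Hello, world!");
    }
}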
You have to say a lot of stuff before you can say what you want to say. You have to have a main() function, it has to be declared as taking an array of String and it has to be public static void. It has to be wrapped in public class SomeIrrelevantName (which, by the way, has to be the same as the name of the source file.) The print() function is called System.out.println(). The comparison with complete Hello World programs in other popular languages is instructive:
print "Hello, world!\n" # Perl
print "Hello, world!" # Python
puts "Hello, world!" # Ruby
(print "Hello, world!") ; Emacs Lisp
10 PRINT "Hello, world" :REM Commodore 64 BASIC
Is it a big deal that Java makes you say public static void main(String args[])? No, it’s not. It’s easily learned, and Java programmers develop the ability to become “blind” to all the syntactic noise (at least I assume good ones do). But it’s pervasive. All Java code looks like this, to a greater or lesser extent. How many mental CPU cycles do Java programmers burn filtering out all the keyword soup?
At the risk of looking like a total Steve Yegge fanboy, I’ll illustrate that with an example taken from his article on Allocation Styles: how to ask a language what index-related methods its string class supports (i.e. which methods’ names contain the word “index”). His Java code looks like this:
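// (Approximately as in Yegge's article; note the List, the loop up to
// methods.length, and the String[] cast, all discussed in the comments below.)
List results = new ArrayList();
Method[] methods = String.class.getMethods();
for (int i = 0; i < methods.length; i++) {
    Method m = methods[i];
    if (m.getName().toLowerCase().indexOf("index") != -1) {
        results.add(m.getName());
    }
}
String[] names = (String[]) results.toArray();
Arrays.sort(names);
return names;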
If you’re a Java jockey by nature, you’re probably looking at that and thinking “well, that doesn’t look too bad” (though quite possibly also thinking about a couple of incremental improvements you would make).
Here’s how that program looks in a less verbose language (Ruby, as it happens):
"".methods.sort.grep /index/i
Now even if you agree with me and Osvaldo that “there are no major blunders in the Java language”, you have to admire the concision of the Ruby version. It’s literally an order of magnitude shorter (30 characters vs. 332, or one line vs. 11).
What a concise language buys you
“But Mike, surely you’re not saying that the Ruby version is better just because it’s shorter?”
Well, maybe I am. Let’s see what the advantages are:
- Less code is quicker to write than more code.
- Less code is easier to maintain than more code. As Gordon Bell has pithily observed, “The cheapest, fastest and most reliable components of a computer system are those that aren’t there.” Each line of code is a line that can go wrong.
- Concise code lets you see more of the program at once: this isn’t as big a deal now we all have 1920×1200 screens rather than the 80×24 character terminals that I did all my early C programming on, but it’s still an important factor, especially as programs grow and start sprouting all kinds of extra glue classes and interfaces and what have you.
- A concise language keeps the total code-base size down. I think this is very important. ScottKit currently weighs in at 1870 lines of Ruby, including blank lines and comments (or 1484 once those are stripped). Would I have started a fun little project like that at all if it was going to be a fun big project of 20,000 lines? Probably not. And this factor becomes more important for more substantial projects — the difference between ten million lines of code and one million is much more significant than the difference between ten thousand and one thousand.
- Most importantly, look at what code isn’t in the Ruby version: it’s all scaffolding. It’s nothing to do with the problem I am trying to solve. In the Java version, I have to spend a lot of time talking about ArrayLists and Italian guys and loops up to methods.length and temporary String[] buffers and Italian guys. In the Ruby version, it’s a relief not to have to bother to mention such things — they are not part of my solution. (Arguably, they are part of the problem.)
I think the last of these may be the most important factor of all here. I’m reminded of Larry Wall’s observation that “The computer should be doing the hard work. That’s what it’s paid to do, after all”. When I stop and think about this, I feel slightly outraged that in this day and age the computer expects me to waste my time allocating buffers and looping up to maxima and suchlike. That is dumb work. It doesn’t take a programmer to do it right; the computer is smart enough. Let it do that job.
The upshot is that in the Ruby version, all I have to write about is the actual problem I am trying to solve. You can literally break the program down token by token and see how each one advances the solution:
"".methods.sort.grep /index/i
Here we go:
- "" — a string. (The empty string, as it happens, but any other string would do just as well.) (I notice that WordPress inconveniently transforms these into “smart quotes”, so that you can’t copy and paste the code and expect it to Just Work. D’oh! Use normal double quotes.)
- .methods — invoke the methods method on the string, to return a list of the methods that it supports. (You can do this to anything in Ruby, because Everything Is An Object.)
- .sort — sort the list alphabetically.
- .grep — filter the list, retaining only those members that match a specified condition.
- /index/ — the condition is a regular expression that matches all strings containing the substring “index”.
- i — the regular expression matches case-insensitively.
Bonus pleasant property
As a bonus, the code reads naturally left-to-right, rather than inside-to-outside as it would in a language where it all has to be done in function calls, like this:
grep(sort(methods("")), /index/i)
I think that Ruby’s object-oriented formulation is objectively better than the pure-functional version, because you don’t have to skip back and forth through the expression to see what order things are done in.
“Say what you mean, simply and directly.”
When I started to write this article, I didn’t know what my conclusion was going to be. I just felt that I ought to say something substantial at the conclusion of a sequence of light-and-fluffy pieces. But, as Paul Graham says [long, but well worth reading], part of the purpose of writing an essay is to find out what your conclusion is. More concisely, E. M. Forster asked, “How do I know what I think until I see what I say?”
But now I’ve conveniently landed on an actual conclusion. And here it is. Remember in that Elements of Programming Style review, I drew special attention to the first rule in the first proper chapter — “Say what you mean, simply and directly”? The more that runs through my mind, the more convinced I am that this deceptively simple-sounding aphorism is the heart of good programming. Seven short words; a whole world of wisdom.
And how can I say what I mean simply and directly if I’m spending all my time allocating temporary arrays and typing public static void main? My code can’t be simple if the functions I’m calling have complex interfaces. My code can’t be direct if it has to faff around making places to put intermediate results. If I am going to abide by the Prime Directive, I need a language that does all the fiddly stuff for me.
So it looks like My Favourite Programming Language is Ruby, at least for now. That might change as my early infatuation wears off (I’ve still only been using it for a couple of months), and it might also change as my long-anticipated getting-to-grips-with-Lisp project gathers momentum. But for now, Ruby is the winner. And if it’s going to be dethroned, it’s not going to be by a scaffolding-rich language like Java.
Coda: I don’t hate Java
Just to defuse one possible stream of unproductive comments: this is not about me hating Java. I don’t hate it: I think it’s way better than C++, and in most ways better than C (which, given that I love C, is saying a lot). All the criticism I’ve levelled at Java in this article applies equally to C++, C and C#, and to many other languages. To a lesser degree, it also applies to Perl and JavaScript.
But I am on a quest now — to say what I mean, simply and directly. And Java is not a language that helps me do that.
Update (19 March 2010)
The discussion at Reddit is not extensive, but it’s well worth a look because it contains some nice code samples of how you can do "".methods.sort.grep /index/i in other languages.
There’s also a little discussion at Hacker News.
This is a fine example of why hackers of the world don’t take pleasure in Java programming.
I don’t hate Java either, it’s got qualities, but I would hate to do anything in it.
And I really like Ruby too. It has another set of qualities, especially the consistency (Matz’s principle of least surprise, I just don’t get that feeling from Python at all, indenting issues aside). I am not surprised that it appeals to many smart people.
A language doesn’t have to be as easy as Ruby to be pleasurable. I don’t really mind C at all, and I have used assembler too. When you know your way around programming languages, you just feel when there is something wrong (like Java or C++). “Great minds” think alike? :)
And I also think that 80 characters wide text buffer should be enough. ;)
An important factor (for me at least) is the kind of programming projects to be tackled. I love Forth because it works well for the problems I normally work on.
i work in, and love, q, an array language descended from APL and J.
it doesn’t have objects, so the nearest equivalent to your code would be to search some namespace for functions containing “index”:
q){x where x like "*index*"} system "f .q"
.q is the namespace being searched (in this case, the one containing all the built-in functions) and “f” is a “system” (internal) command returning all functions in a namespace.
In Groovy:
return String.class.methods.findAll { m -> m.name.toLowerCase().indexOf("index") > -1 }.collect { it.name }.sort();
While I wouldn’t deny that Ruby is more succinct than Java, in this instance Steve Yegge’s code stinks. He either doesn’t really know Java, or he’s being deliberately mischievous to make his point.
Here’s a much cleaner and more concise version that will return the “index” methods of any class.
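// (Sketch: the method wrapper and names here are guesses; the core loop
// is as quoted later in this thread.)
public static String[] indexMethods(Class<?> c) {
    Set<String> methods = new TreeSet<String>();
    for (Method m : c.getMethods()) {
        if (m.getName().toLowerCase().contains("index"))
            methods.add(m.getName());
    }
    return methods.toArray(new String[0]);
}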
It took me less than a minute to write. It has the added advantage of not containing any bugs – unlike Steve’s version.
Of course Ruby still wins here, but I would argue that this is mainly because Java has no native capability to filter all the members of a collection at once.
I think you may be conflating several things here. I used to love APL for writing concise programs. Everything was done with arrays, array operators, expand and reduce. That let me write extremely concise, but often unreadable code. For example, looking up a string in a table would be something like:
(TABLE∧.=STRING)⍳1
That’s a matrix multiplication using a boolean AND and an equals operator to produce a boolean vector of where the STRING appeared in the TABLE, the former being a vector and the latter a matrix. The ⍳ (iota) was the operator for finding the index of the first value in the result vector.
It was beautiful, but every line was a puzzle.
You seem to have two complaints about Java.
One is that Java is largely scalar, without a lot of clever vector, map and reduce, or pattern match and replace operators. That’s a valid complaint about the language density. COBOL and Applescript are notorious for their low expressive density because they try to look like a natural language. You can destroy your brain at either extreme.
The other complaint is that Java requires code to be placed in context. A lot of the text of a program does not advance the algorithm directly, but focuses on what the code does within the overall program. The code is descriptive, not prescriptive. It doesn’t run from the top of the page to the bottom as in a straightforward narrative, but lives in various compartments, and those compartments interact with each other. You have to read both the code and the context, much as we parse DNA with its introns, regulatory sequences and coding sequences.
When you just want to write a quick hack to fix something, it is nice to have a language which strips out the contextual overhead and just does something. Of course, if your program proves useful, you are likely to wind up gluing it to a host of other similar hacks using the shell or some other patchwork. If I recall, dealing with this patchwork of program components was noted rather negatively at one point.
There is just no winning.
A couple of things:
a) You can make your Ruby example more expressive by turning it into:
String.instance_methods.sort.grep /index/i
Which leaves out the need for the arbitrary string by explicitly stating “All methods applicable to instances of String.” It’s not as short, but that’s not always what expressive is about, eh? :)
b) In fairness to Java, that’s an outdated bit of code you’ve quoted, and it doesn’t take advantage of some of Java’s newer niceties (like the for-each loop, for example). It can also make use of String.matches(regex) rather than doing the clumsy .toLowerCase().indexOf("index") != -1 business.
Not that those really help a lot, but they do make it much easier to read (there’s a quick sketch of this below, after point c).
c) As someone who was a Java developer for many years, you *do* learn to filter out all the keyword cruft… until you start learning other languages. Once you start learning Python and Ruby, for example, writing Java becomes more and more painful.
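To illustrate point (b) above, here is a rough, untested sketch of the same filtering loop using the for-each loop and matches() (the variable names are invented for the example):

Set<String> methods = new TreeSet<String>();
for (Method m : String.class.getMethods()) {
    if (m.getName().matches("(?i).*index.*")) {
        methods.add(m.getName());
    }
}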
There are worse things in life than being a Steve Yegge fan(boy). His drunken blog rants are mostly on target.
My co-workers just “love” Java. I like it too. I can whip things up really quickly in it but all the line noise really gets to me these days.
Why do I have to deal with so much boilerplate?
Do I really have to create (yet) another interface for this damned object?
Yes. I know. It’s the way the language is and there is no point in complaining about it.
Except there is a point.
When all you have is a hammer, everything looks like a nail.
After all these years using Java I am coming to the conclusion that it is merely a different kind of hammer to the others I have used before — C and C++.
What I need is a new tool — a screwdriver.
I have tried to like Python. I was really nice about it during that interview with The Google a few years back but, seriously, isn’t __self__ some kind of joke? Having said that, I really like the whole indentation thing, but I’m a bit of a weirdo that way.
These days I find myself looking at Clojure, Ruby, Scala and (believe it or not) Go. If Ruby performance wasn’t such a dog I’d be off down that road at 1000mph. Still, performance will get better and not just because CPUs get more powerful.
Heh… I “get” to work with classic ASP/VBScript daily, and it offends on all fronts here. Inability to declare and initialize on the same line? Check. Reading inside-out? Check. Lack of a concatenation-assignment operator? Check check check. It’s like the language wants me to write solutions that are inefficient and unmaintainable.
I do. I hate Java with a fucking passion. Its crappy APIs punish me every time I’m forced (for I would never do so willingly) to write Java. Fuck those assholes at Sun. They easily could have done a better job. No wonder they had to have a system like javadoc, because you’ll spend half your development time searching for method sigs.
I wonder how much tooling will help you write that one line of code, or maybe explain it later on during code review.
In Java, using a modern IDE, I can actually click on most of the code parts (classes, method invocations and such) in that file.
So if you were explaining the code to me, and you said that the methods invocation returns all the methods, and I asked whether you’re sure that both public and private ones are returned, how could you show it to me? Would you need to google the documentation, or is there an easier way?
Or if I wanted to check the type that methods returns, and see all the methods that that object has, what would my steps be?
I tend to program mostly in Java, but use Python and PHP for some other projects, and the main distinction I’ve seen is that in Java you have this excellent tooling, compared to just vimming in the other languages.
toomasr, my take on that is that Java’s huge IDEs are there to help work around the deficiencies of the language — to help you not have to think so hard about all the boilerplate. But the easiest boilerplate of all to think about is the boilerplate that just isn’t there.
Big IDEs worry me for another reason: it seems that in some programming cultures (and again, I don’t JUST mean Java, but it does exemplify this trend), the response to any technical deficiency is just to build yet more technology on top — which leaves you with yet more stuff to learn. I’d much rather take more time to get the foundation of my building right than build an edifice of scaffolding to hold it up, only to find that I then need to maintain the scaffolding as well.
@futilelaneswapper has beaten me to the better Java code. I would add that the Ruby code also reveals an imperfect language – for one thing it’s horrible that any object responds directly to a “methods” message; "".class.methods.sort.grep /index/i would be concise enough (with an extra “class” call). It seems that Ruby’s root Object type is a bit polluted. This is probably due to the heavy use of MOP in the language, but that’s still a tradeoff, not an example of a heaven-perfect design. Even Smalltalk, which didn’t care about the number of public methods in the root classes, required you to send a “class” message first and then grab the methods etc.
Also, in Java, adding utility methods like sort() to array/collection classes would be a problem, because then you enormously expand the size of important interfaces like Collection and List, so every new implementation of those is required to implement many more methods (including many that would just be delegates to some common implementation). So this reveals a classic tradeoff of static typing versus dynamic typing. Ruby classes can have two hundred methods without much harm because you rely on duck typing: you can provide alternate implementations that don’t necessarily inherit from the existing classes, and don’t necessarily implement all their methods. But I will stick with the advantages of static-typed languages, even with the extra verbosity. Other “issues” of Java verbosity, like longer method and variable declarations, also come from static typing and exist in most other static-typed languages, so I’d say that Java’s major defect is not having a modern typesystem with Hindley-Milner inference or similar (for this, look at Scala, or even JavaFX Script).
Finally… @futilelaneswapper’s version would be a bit shorter if the method were not required to return a primitive array – just return a collection! And it will be even shorter in Java 7 with lambdas (that’s another item that is sorely missing in Java, but we’re fixing that, finally).
Hi, Vince! (That’s “futilelaneswapper”, for those of you who don’t know him.)
I suspect Steve Yegge was neither ignorant nor mischievous — probably just using an earlier version of Java. Anyway, your version certainly looks like an improvement. I wrapped it in a public-static-void-main function and a class, and added what seem to be the necessary imports, in the hope of seeing it work. I got:
import java.util.Set;
import java.util.TreeSet;
import java.lang.reflect.*;
public class bickers {
    public static void main(String[] args) {
        Set methods = new TreeSet();
        for (Method m : String.class.getMethods()) {
            if (m.getName().toLowerCase().contains("index"))
                methods.add(m.getName());
        }
        System.out.println(methods.toArray(new String[0]));
    }
}
Which I compile with javac bickers.java (ignoring the unchecked unsafe operations warning), and run with java bickers. The output is a little on the delphic side:
[Ljava.lang.String;@c17164
What am I doing wrong?
Osvaldo: yes, it does seem that a lot of the scaffolding that I’m finding such a drag(*) is related to static typing. I’d like to do an article about the pros and cons of static-vs.-dynamic, but for this particular topic Steve Yegge has covered the ground so thoroughly that I’d have almost nothing new to say. (Sorry, yes, more fanboyage.) If you’re interested in an open-minded and balanced view, I recommend his article Is Weak Typing Strong Enough?
I do see both sides of this equation. But I think it’s pretty clear now what side of the fence I’ve fallen on, so I won’t pretend to neutrality :-)
(*) I didn’t want to use the phrase that I am “finding it such a drag” because it’s such a lazy colloquialism, but in this case it’s actually a perfect description of the situation. Drag is exactly what the scaffolding is imposing on me, like trying to move through a viscous medium.
Ah, the enthusiasm of the first days of Ruby adoption. Been there, done that. Came back to Java crying.
I totally agree with toomasr – Ruby IDE support just plain sucks for now, and I just don’t see how it can be drastically improved in the future. This is just what you get from such a language.
And speaking of the real world – the myth that Ruby is easier to maintain is just plain false. Ruby is far more prone to the situation where you’re looking at your own code (written a while back) and saying – what the hell does this do?
toomasr, sorry to reply twice to your comment, but I only just registered this bit:
Yes, there is a much easier way — just ask the language itself! First ask it what it can tell us about methods, then use one or more of the methods it provides to ask the specific question you’re interested in:
irb> "".methods.sort.grep /method/i
=> ["method", "methods", "private_methods", "protected_methods", "public_methods", "singleton_methods"]
irb> "".private_methods.sort.grep /instance/
=> ["remove_instance_variable"]
irb>
And unlike documentation, you know it’s up to date :-)
Just for completeness:
sorted([method for method in dir(str) if 'index' in method])
@Mike: I have read Steve’s static/dynamic rant now. Most of his Cons of Static Typing are, IMHO, in the range of highly debatable to sheer stupid and false statements.
It’s worth noticing that some of these items can backfire in the debate against dynamic typing. For one thing, take Steve’s points 4 (false sense of security) and 5 (sloppy documentation). [These are in the stupid/false category.] Yeah, I suppose that some bozos think that they don’t need to write any tests because the language is static-typed so the compiler catches so many errors; and no documentation either, because the code is more explicit, the IDE can do perfect browsing and refactoring etc. But we should not judge a language by the habits of incompetent users. I program in Java and my code is well documented and tested. It’s a matter of discipline. Now, let me flip the coin and look at dynamic languages: they tend to NEED extra effort in both testing and documentation, to compensate for the missing guarantees and explicit info that static typing would provide. So, we could look at the tests and docs that a (professional, mission-critical, well-maintained) Ruby app contains in excess when compared to a similar Java app, and categorize these as “scaffolding”. Now, the Ruby hacker will typically pose as a superior developer because his version of the code contains 2,500 unit tests while my version contains only 700 tests – for Agilists/TDD’ers it seems that humongous test suites are obvious evidence of excellence – but the hard fact may be that my version, with far fewer tests, is more reliable and simply doesn’t need as many. (Tests are also code that has a maintenance/evolution cost, you know.) The same arguments are valid for documentation, with the big extra problem that broken docs won’t be caught even by a good test suite. (So, the only really good code is code that’s clear enough to not require detailed documentation.)
Now let me finish this with a real story. A few days ago I got a quick freelance job to fix two bugs and add some simple enhancements to an old Java program, which does some interesting particle-based 3D visualization – it runs with good performance even on old and underpowered J2ME devices without any 3D acceleration (take that, Ruby!). The code is well-written, but it contains zero tests; all documentation including code comments is written in German (which I don’t grok beyond ja/nein); the original author is not available to help and the guy who hired me is not a developer, so I was basically on my own. But no problem: the code is crystal-clear, I fixed the offending bugs in the first hour of work, and added the new features in the first (and only) day of work – job finished, in a code base that I had never seen before. In fact, so far I haven’t bothered to read >95% of the code. Testing effort was ridiculously minimal. This is the wonderful world of a language like Java. ;-)
Osvaldo, I don’t understand why you’d characterise such a balanced, exploratory article as a “rant”. It begins with “I’d love to know the answer to this” and ends with “This is a hard problem” — not really what I’d consider the hallmarks of rantitude. I also don’t find it at all obvious that Yegge’s static typing cons are “sheer stupid and false”, any more than his static typing pros are.
I can’t help wondering whether you’ve just made up your mind in advance what your conclusion is going to be.
No arguments about these two statements of yours, though: “Tests are also code that has a maintenance/evolution cost”, and “the only really good code is code that’s clear enough to not require detailed documentation.”. It’s nice that we get to agree on something!
Mike: OK, that might have been a bit hard on Steve’s article. The fact is that even with his soft language, I disagree vehemently with some of his findings. And yes, I am a static-typing bigot, as I often confess in these discussions. So you are probably right to say that I’d made up my mind before reading Steve’s blog; but I didn’t make it up in five minutes – I’ve been making it up since I was 13 years old with my ZX Spectrum, and I’ve fallen in love with both camps – e.g. Eiffel as the early champion of static typing (Meyer’s OOSC is still the SINGLE programming book that’s 1300 pages and worth reading cover-to-cover), and Smalltalk as an early and über-dynamic language.
Having said that, other people may be ten times as smart and experienced as I am and still reach opposite conclusions, and I respect that – but then, Steve used some old and tired “cons of static typing” (although the facts-list is followed by a discussion of much better quality); and on top of that, Steve makes a classic confusion of terminology, mixing together the concepts of static/strong typing and weak/dynamic typing. He is a brilliant hacker and he did have formal CS education, so maybe programming language design is just not his focus, or he is just in the camp that doesn’t care much about formal language/compiler theory. So call me an academic elitist – in fact it’s been many years since I read a single programming book, only research papers – but when it comes to programming languages I clearly expect a lot of the people who have something to say.
It is odd that, having consistently said strong/weak typing throughout when he meant static/dynamic typing, he then went to all the trouble to “explain” at the end that the mistake was deliberate, rather than just fixing it.
I like the fact that Lisps (like Clojure) allow me to say something like (-> "" methods sort (grep /index/i)) if I want to, instead of (grep (sort (methods "")) /index/i) being my only option. The fact that this sort of syntax creation is possible gives it the edge (in my mind) over languages that enforce one way of thinking over all of the others that I might want to use.
@Osvaldo:
Respectfully, you might take your own advice and learn Ruby before you start criticizing it. The fact that an object responds to the “methods” call is not ‘horrible,’ but a rather useful artifact of the way OO works in Ruby. It also doesn’t do what you think it does – "".class.methods DOES work, but it’s not the same as "".methods :)
Looking over the documentation for Object doesn’t suggest any ‘pollution’ – sure, it has more methods than the Java object, but Ruby handles more of its functionality in methods than Java does (operators, etc.).
Although if you still think it’s too bloated, you’d perhaps be pleased to see the introduction of BasicObject in Ruby 1.9 … and if not, well, you’re happy with Java.
Mike: When you call System.out.println on an object, Java invokes the toString() method of that object and prints whatever it returns. So what you are seeing is the output from Array.toString(): an object signature, followed by its memory address in hex. This is the default behaviour for all objects that don’t override toString() themselves (like Array). So you need to traverse the array yourself and print each String object in turn:
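for (String s : methods) {
    System.out.println(s);
}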
There are good reasons for this behaviour, but it is admittedly a bit of a pain. However, as Osvaldo has pointed out, there’s a bunch of unnecessary conversions going on in Steve’s code, which for the sake of fairness I have maintained in my version of the code. The most obvious issue is that having iterated over one collection to create a result set, we then iterate over the result set to get what we really wanted in the first place! Ugh.
Incidentally, the bug in Steve’s code, in case you hadn’t figured it out, was his use of a List to hold method names. The String class overloads its two index methods, so the List ends up holding multiple copies of the same method name.
As for the unsafe conversions, the correctly typed version was mangled when I posted it by WordPress. It removed everything in angle brackets.
I liked your series of posts on why these weren’t your favorite languages. All languages have their (mis)features, and it fomented a lot of interesting discussion.
I also have to say I agree completely with your summary of C++. It is by far the most concise and precise description I’ve seen. I think I’ll make up a poster of it at some point and hang it over my desk, if you don’t mind.
I’ve followed Java developments over the years, but I was never inspired to jump on that particular bandwagon. It didn’t seem to solve any of the problems I needed solved in a better fashion than C/C++, and it added a whole new layer of dependency in the VM. The fact it gained such a following in the server/enterprise space took me by surprise. Although, in hindsight, I understand why that happened.
I now regret not gaining some experience with Java development as it seems a great many jobs require decades worth of Java experience to even be considered. Alas! My misspent youth!
Thanks, Charles, for generously overestimating the value of my C++ summary :-) I recently read a much better, or at least much snarkier, summary: “an octopus made by nailing extra legs to a dog” (Steve Taylor). Harsh, but perhaps not entirely unfair.
Nice Article, and thanks for your inspiration, I’ve started learning LISP as well :-)
Vince: but why on earth would Array.toString() not return something useful? Something like a newline-separated list of the members’ toString() results would do, or comma-separated, or something. Just not “[Ljava.lang.String;@c17164”, which reads like a run-time buffer overflow.
Anyway, I tried to do what you suggested in the context of the little program that had built the collection called methods:
for (String s : methods) {
    System.out.println(s);
}
But that wouldn’t compile: “incompatible types”. So I tried converting methods to an Array, and using for (String s : methods.toArray()), but that was rejected too. I tried a couple more incantations, then thought — hang on, what the heck am I doing? This is exactly what I am trying to get away from. Guessing vocabulary is all very well when you’re playing 1980s Adventure games (GIVE HONEY doesn’t work, you have to type OFFER HONEY), but it’s no way to write programs. Surely the whole point of the so-called smart for-loop is that it works on all collections? So why doesn’t it?
Not to keep beating the same dead horse, but in Ruby you would say collection.each { |member| do_something(member) } and it Just Works, every time, whatever kind of collection collection is. Is there a compelling reason why Java can’t do the same?
@Mike Taylor:
To pretty-print an array in Java, you should use java.util.Arrays.toString(array). That will take in an array of any type (it’s an overloaded method) and return a String. Is this defective behaviour? Yeah. Sun should have changed the definition of toString() for arrays directly (what you are seeing is the internal representation of the array’s _type_). But Sun decided to leave the warts in Java, build new stuff instead, and hope that documentation will cure all ills.
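For example, continuing Mike’s little program above, something like this (an untested sketch) prints the set readably:

System.out.println(java.util.Arrays.toString(methods.toArray()));
// prints something like: [indexOf, lastIndexOf]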
However, it’s all silly in this context, because you don’t need to convert a Set to an array to print it out. If you want to print it out, simply using:
System.out.println(methods);
would work just fine.
Ha! And so it does:
$ javac bickers.java && java bickers
Note: bickers.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
[indexOf, lastIndexOf]
Thanks, Jon!
What about Google Go language?
I want to like Go, because of the Thompson/Pike connection; but I can’t find anything about it that excites me. It smells too much like Java and C# to stir any deep emotion.
Pingback: Top Posts — WordPress.com
Pingback: In search of simplicity « vsl
More thoughts on all this here
Having read Vince’s post in response, I am pleasantly impressed by his Filter class. I recommend it to anyone who’s stuck with Java.
Hi,
it seems like I’m the first one to point this out, but your example of how to find the names of all methods containing “index” is incorrect. Take a moment to run it to verify that.
The problem is that an Object[] cannot be cast to a String[] even if it contains only strings. Fun but true …
After adding a couple of extension methods (to make the syntax more similar) I can write:
from methodName in "".MethodNames()
where methodName.Grep("index", System.Text.RegularExpressions.RegexOptions.IgnoreCase)
orderby methodName select methodName;
in C#
The group I work for conducts very extensive code reviews. One of the things we strive for is to make the code as succinct as possible without being obscure. We feel strongly that succinctness makes the code easier to learn and maintain. So, I agree with your argument about the importance of conciseness. However, 90% of our code is Java, so I always feel as if we are fighting a losing battle in striving for conciseness.
I find it interesting that I can’t get anybody in my group to consider changing languages, yet at the same time I feel that with Java, we are trying to prevent the tide from coming in. I guess the devil you know…
I think you should take another look at C#. I only spend about 5% of my time coding in C#; I’m definitely not very experienced with the language. However, C# is evolving very much faster than Java. The new .NET 4.0 version is close in features to Scala. In general, I don’t like being tied to a particular platform, but C# is becoming very interesting.
The Java world has seen the emergence of languages much more interesting than Java itself: Scala, Clojure, and Groovy.
Groovy, as the phonological similarity indicates, was conceived as a Ruby-like language for Java. The index filter is a similar one-liner:
String.methods.name.sort().findAll{ it =~ /(?i)index/ }
I spent a couple of years programming in Groovy and just loved it.
But these days I find the most fascinating language around to be Clojure. It’s a Lisp dialect for the JVM, more purely functional than traditional Lisps, with a strong story about how to handle concurrency and state. Rich Hickey, the author of the language, has made some wise choices about what Lisp baggage to leave behind, recognizing, for instance, that there’s more to data structures than linked lists. It’s a compelling piece of language design. If you’re really looking to get into the world of functional languages, this is a great place to start.
In your Ruby example, surely you want to filter first *before* sorting?
Okay, what’s with all the sushi?
Aha! An even simpler way to do it in C#:
"".MethodNames().Where(m => m.Grep("index", RegexOptions.IgnoreCase)).OrderBy(m => m)
(suggested by a commenter on my journal)
Michel S., you certainly can filter first before sorting if you wish:
"".methods.grep(/index/i).sort
but it doesn’t really make much difference.
Marius Andersen, what’s not to like about delicious, firm yet yielding, sushi?
My favorite programming language is pseudocode.
RBL wrote: “My favorite programming language is pseudocode.”
Mine is pseudocode that the computer can execute.
You know, like: "".methods.sort.grep /index/i
> "".methods.sort.grep /index/i
Off the top of my head, in Javascript:
Object.keys(String).sort().filter(/index/i.test)
— MV
The JavaScript version is pretty nice (though it seems weird that the method for listing String methods is a method of Object rather than of String). What does the .test at the end mean?
I feel bad about JavaScript. It seems like it’s that close to being a good language, but falls short for a variety of reasons, not all of them the language’s fault (e.g. the horribly incompatible DOM implementations in the various browsers). But some of the failings — the single global namespace — are its own fault.
1. Maybe "String.keys()" works, not sure
2. .test is a method of any RegExp object, as in /index/i.test("myindexstring") (which returns true); /index/i.test is a function/method (passed to filter) which will be applied to every element of the array returned by sort()
3. Forget all about JS+browser+DOM and check out server-side JavaScript
— MV
A good sign of languages that are ‘part of the problem’, as you suggest, is the existence of ‘design patterns’ and books of ‘design patterns’ for these languages. Think what those are; they’re instructions to the human about how to model some repeatedly-needed construct so that the computer can understand it. If the language were sufficiently expressive, you could express this *in the language* and never need the books in the first place. (All these books say that you cannot express patterns in computer languages. They are wrong.)
(This is not an original insight and you have doubtless encountered it before in the writings of Paul Graham. But I thought it was worth reiterating here. You do need a decent macro system to do this, but these *are* implementable in non-Lispy languages, e.g. Metalua.)
btw, another wonderful description of C++, from a years-old post on alt.sysadmin.recovery:
“No, no. C is a small sharp knife. You can cut down trees with it, and get it cut down exactly the way you want it, with each shaving shaped exactly as you wish.
C++ is a small sharp knife with a bolted-on chainsaw and bearing-mounted laser cannon rotating at one revolution per second wildly firing every which way. You can cut down trees with it, and get it cut down exactly the way you want it, with each shaving shaped exactly as you wish.
You can also fire up the chainsaw and cut down the entire forest at one go, with all the trees cut down exactly the way you want them and every shaving shaped exactly as you wish — provided that you make sure to point the wildly rotating and firing lasercannon in the right direction all the time.”
— Padrone, LysKom, article 717443, 11 Sep 1994, translated by Calle Dybedahl
Pingback: Closer To The Ideal » Blog Archive » A comparison of Java and Ruby
@Nix: You are a bit off with design patterns. First, the books don’t say that you can’t express patterns in programming languages – that wouldn’t make sense. I think you mean that you can’t express a pattern as a single, reusable implementation – so that you could have library-provided patterns that you just call from app code, or “plug” into app code through some mechanism (inheritance, composition, templates, aspects, whatever).
My MSc thesis was focused on the automatic detection of design patterns in existing code, I have researched the field pretty well [back in 1999-2000 anyway] and implemented a reverse engineering tool that was state of the art for its time. But this field of research was a dead-end, because Design Patterns are _by definition_, higher-level problem/solution recipes that don’t easily translate to a reusable piece of code in a mainstream programming language. They don’t typically have a standard implementation structure, i.e., the pattern description doesn’t always produce the same concrete OO design, even on a single language/platform. You most often need to adapt the pattern to the needs of your application.
Of course, there are patterns at different levels of abstraction. Picking the well-known GoF patterns, Iterator is one that has a standard implementation in the Java language (java.util.Iterator), C++ (STL iterators) and other modern languages. [You must still create many specializations of the base iterator type, so it’s only white-box reuse.] But this pattern is so simple that its inclusion in the book may only be justified because it was written in the early 90’s. OTOH, the Interpreter pattern has yet to find a single implementation anywhere – there’s a huge range of techniques, none of them ideal for every case, not even powerful compiler-compilers like JavaCC/ANTLR.
And then you can move beyond GoF and check later patterns catalogs; one of the best modern references is a book I can’t recommend enough: my most complex and successful project owes a great deal to the fact that I digested it cover-to-cover, because the system needed custom implementations of most of its patterns.
I agree that a powerful macro or metaprogramming facility can increase – at least a little – the set of patterns that can have a standard implementation, even if that’s just a partial implementation. But even these techniques won’t tame most of the patterns. Look at Ruby on Rails: it’s a fine example of using metaprogramming to implement many persistence patterns. But then, the problem was not solved by simple use of the language’s expressiveness – it required a major new piece of runtime (e.g. ActiveRecord), whose implementation is big and complex, so they might just as well have created a brand-new language with all the ORM stuff hardwired as native features… the MOP capability of Ruby is not [in this example] a big deal for application developers, although it is for runtime/middleware developers, because it’s easier to write some advanced MOP tricks than to create a new compiler. And it’s not yet the Ultimate Implementation of those patterns – we could propose a very different OO/Persistence solution, e.g. a Ruby port of LINQ to SQL.
Er, the GoF book says precisely that you can’t expect to implement a pattern as a single reusable thing.
I agree that not all patterns can have a single implementation, but that’s because some of them touch on active areas of research (e.g. interpreters of all sorts) or are just too vague to be useful ;P
(I can’t comment on Ruby: I haven’t learnt it yet and that it manages to be even slower than CPython is a major strike against it in my eyes. Dammit, if Lua can manage to be both ridiculously small *and* faster than anything else bar compiled code *and* completely portable, there’s no excuse for a less portable interpreter to be slower. Yet they all are. Guessing wildly without data, maybe it’s their bigger cache footprints…)
@Nix: No, the reason why most patterns can’t have a single implementation is the fact that they are DESIGN patterns. You are failing to realize the gap that exists between design and implementation, or more generally, from one level of abstraction to the next (e.g. analysis model to design model). There was a ton of research trying to tame these gaps, e.g. Catalysis, which planned to allow full, automatic mapping / binding / traceability between these levels… this research was also steaming hot when I was doing my Master’s, but it is largely forgotten now. And I’m glad it failed, because the idea was that we should create even more complex UML models describing everything from the high-level business model down to algorithms, with tons of UML “improvements” to bind everything together, so that when you make some change in the analysis layer it auto-propagates all the way down to code and vice-versa. But even this stuff would not enable automatic generation of lower-level artifacts from higher-level ones, except maybe for restricted cases. Many current CASE tools can actually “generate design patterns”, but that feature is pretty rigid and limited; it doesn’t buy you much. In fact I don’t even use CASE integration to code, either forward or reverse engineering; I only write design-level UML models when I’m forced to by client requirement, because it’s not worth it – but I digress…
Osvaldo writes: “the reason why any patterns can’t have a single implementation is the fact that they are DESIGN patterns.”
And yet, some patterns do have a single implementation in some languages — unless you consider the “implementation” so simple as not to count. For example, the Decorator pattern is trivial enough in Ruby that it can be implemented in one line — the relative complexity of doing this in other languages seems to come mostly from having to punch careful holes in the type system.
@Mike: When some pattern has a trivial impl in some language (or platform – language+frameworks), this typically happens because their design has “adopted” that pattern. For example, the Java language adopts the prototype pattern (Cloneable / clone()); any OO language&framework adopts the Template Method pattern (polymorphic methods); the Java APIs adopt many many other simple patterns like Iterator, Proxy, Observer, MVC and so on.
Other patterns may be so simple that they often have a standard implementation even without explicit support from the language or frameworks, e.g. Singleton. But even in these apparently trivial cases there is room for variations; for one thing, check out the Lazy Initialization Holder for Singleton:
public class Singleton {
    private static class Holder {
        private static final Singleton instance = new Singleton();
    }
    public static Singleton getInstance() {
        return Holder.instance;
    }
    private Singleton() {
        // ... potentially slow or race-unfriendly init
    }
}
Smart-ass, concurrent-savvy Java programmers use the code above because it allows the initialization to be lazy, and without risk of concurrent initialization, but without any synchronization cost. You don’t have to synchronize the getInstance() method because Java’s classloading rules will guarantee that the holder class is only initialized once, so its static-init only runs once. (This obviously requires some synchronization within the classloader; but as soon as the Singleton and its Holder are fully initialized, the JVM patches code so no further calls to getInstance() will ever run into ANY overhead of classloading, synchronization or anything.)
The same is valid for most other patterns that are not “officially adopted” by the platform – even when there is a trivial implementation, you’ll often discover that it’s not the single and perhaps not the best implementation. ;-)
Succinctness is always overstated. The minimal effort of wrapping everything in Java as an object is completely overwhelmed by the vast oceans of libraries to take advantage of.
Could it be improved…. hell yes, by making it more like Smalltalk
I cannot agree with you at all that succinctness is overstated (if, as I assume, you mean overrated). It pains me that Java programmers have been conditioned to believe that it’s tolerable, or even normal, to reflexively write things like
class Point {
    int _theXCoordinate;
    int _theYCoordinate;

    int getTheXCoordinate() {
        return _theXCoordinate;
    }
    void setTheXCoordinate(int newXCoordinate) {
        _theXCoordinate = newXCoordinate;
    }
    int getTheYCoordinate() {
        return _theYCoordinate;
    }
    void setTheYCoordinate(int newYCoordinate) {
        _theYCoordinate = newYCoordinate;
    }
}
When they could be writing:
class Point
  attr_accessor :x, :y
end
Doesn’t that seem morally wrong to you?
CurtainDog: I call poppycock. Minimal effort stops being ‘minimal’ when it impacts every line of code you write.
Also, that ignores the fact that library support is always a contextual argument: if library breadth alone were sufficient for choosing a language, we would never have had Java to begin with. :)
@Mike: but they haven’t been conditioned to write that. They’ve been conditioned to write:
class Point {
    private int theXCoordinate;
    private int theYCoordinate;
}
… and then click a little button that says “generate getters and setters.” Which is morally wrong on a completely different level. ;P
LOL at duwanis’s last line :-)
Or in C#
class Point
{
    public int X { get; set; }
    public int Y { get; set; }
}
Which gives you all the control you get in Java, in less space, with none of the crustiness.
The C# version is certainly a step in the right direction.
(Presumably you meant X and Y to be private rather than public?)
Is it conventional in C# for data members to have capitalised names like this?
Those aren’t member variables, those are properties. In earlier versions of C# you’d have written:
class Point
{
    private int x, y;
    public int X
    {
        get
        {
            return x;
        }
        set
        {
            x = value;
        }
    }
    public int Y
    {
        get
        {
            return y;
        }
        set
        {
            y = value;
        }
    }
}
but the new syntax is the equivalent of that (and a heck of a lot more concise)
If I just wanted two public members I wouldn’t have to worry about the get/set bits.
public class Point
{
    public int X, Y;
}
You can have different accessibility on the set and get statements, if you want external immutability:
class Point
{
    public Point(int x, int y)
    {
        this.X = x;
        this.Y = y;
    }
    public int X { get; private set; }
    public int Y { get; private set; }
}
The convention in C# is camelCase for private and PascalCase for public/protected.
Isn’t it disastrous that WordPress doesn’t preserve indentation in <code> sections?
Luckily, my site-owner superpowers meant that I was able to edit Andrew Drucker’s comment, and see it in all its indented glory. Sucks to be the rest of you. (Don’t worry, Andrew, I didn’t change anything!)
If you can edit that in order to make it preserve the indentation then I’ll be incredibly grateful :->
But I suspect it’s not possible.
JavaFX Script, just for completeness:
class Point {
    var x: Double;
    var y: Double;
}
…and you create this object with a one-liner like "var pt = Point { x:0, y:1 }", no constructors required. This example only scratches the surface of JavaFX Script’s properties – there are more features, like support for visibility control, immutable or init-only fields, and the powerful binding/trigger stuff – all of that with similar conciseness.
No, Andrew, sorry, I don’t believe it’s possible. Believe me, it’s hard enough to get the code indented even in my own posts.
“there are no major blunders in the Java language”
Perhaps not, but the library sure is chock-full of them, with date, time, and calendar among the most spectacular failures. I wish Joda Time had been around when I was being frustrated by those atrocities.
Pingback: Early experiments with JRuby « The Reinvigorated Programmer
Pingback: Programming Books, part 4: The C Programming Language « The Reinvigorated Programmer
grep(sort(methods("")), /index/i)
Grep the sorted methods of a String and filter the ones containing 'index', case-insensitively. It reads well enough anyway; it's a matter of perspective.
Pingback: Writing correct code, part 1: invariants (binary search part 4a) « The Reinvigorated Programmer
Pingback: Заметки о программировании » Blog Archive » Больше кода, меньше кода, не все ли равно
Pingback: Entity Framework v4, End to End Application Strategy (Part 1, Intro)
Pingback: The Perl Flip Flop Operator « A Curious Programmer
Pingback: Dependency injection demystified | The Reinvigorated Programmer
Today, you might write the Java code as:
Arrays.stream("".getClass().getMethods())
      .map(m -> m.getName())
      .map(n -> n.toLowerCase())
      .filter(n -> n.contains("index"))
      .sorted()
      .collect(Collectors.toList())
      .toArray(new String[0]);
That is certainly an improvement — though more than a little long-winded compared with the versions in languages for which this kind of thing comes naturally.
Pingback: Clearing out some junk: computing paraphernalia of yesteryear | The Reinvigorated Programmer
https://reprog.wordpress.com/2010/03/18/so-what-actually-is-my-favourite-programming-language/
public class SanderRossel : Lazy<Person>
{
    public void DoWork()
    {
        throw new NotSupportedException();
    }
}
NinethSense wrote: Hello Albert
NinethSense wrote: The review might be helpful only for business people who care about cost, developer effort, etc.
NinethSense wrote: Both can be used for the same purpose, say a website/web application.
NinethSense wrote: You can write GTK+/Windows apps using PHP, but in ASP.NET you cannot.
http://www.codeproject.com/Articles/102854/PHP-and-ASP-NET-A-Feature-List?msg=3574662
java.io.ObjectInputStream.readObject() Method
Description
The java.io.ObjectInputStream.readObject() method reads an object from the ObjectInputStream. The class of the object, the signature of the class, and the values of the non-transient and non-static fields of the class and all of its supertypes are read. Default deserialization for a class can be overridden using the writeObject and readObject methods. Objects referenced by this object are read transitively, so that a complete equivalent graph of objects is reconstructed by readObject.
Declaration
Following is the declaration for the java.io.ObjectInputStream.readObject() method:
public final Object readObject()
Parameters
NA
Return Value
This method returns the object read from the stream.
Exceptions
ClassNotFoundException – if the class of a serialized object cannot be found.
IOException – if any of the usual input/output related exceptions occurs.
Example
The following example shows the usage of java.io.ObjectInputStream.readObject() method.
package com.tutorialspoint;

import java.io.*;

public class ObjectInputStreamDemo {
    public static void main(String[] args) {
        String s = "Hello World";
        byte[] b = {'e', 'x', 'a', 'm', 'p', 'l', 'e'};
        try {
            // create a new file with an ObjectOutputStream
            FileOutputStream out = new FileOutputStream("test.txt");
            ObjectOutputStream oout = new ObjectOutputStream(out);

            // write something in the file
            oout.writeObject(s);
            oout.writeObject(b);
            oout.flush();

            // create an ObjectInputStream for the file we created before
            ObjectInputStream ois = new ObjectInputStream(new FileInputStream("test.txt"));

            // read and print an object and cast it as a string
            System.out.println("" + (String) ois.readObject());

            // read and print an object and cast it as a byte array
            byte[] read = (byte[]) ois.readObject();
            String s2 = new String(read);
            System.out.println("" + s2);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
Let us compile and run the above program; this will produce the following result:
Hello World example
http://www.tutorialspoint.com/java/io/objectinputstream_readobject.htm
Hi,
Today we are glad to announce availability of the new CLion EAP build. Download it from our confluence page right away, and share your feedback with us. A variety of new and long-awaited features and many important bug fixes were introduced in this build. Let’s have a look at the most valuable of them.
Create new C++ class, source file or header
When pressing Alt+Insert (on Windows/Linux) or Cmd+N (on OS X) in the Project view, or selecting New in the context menu there, you'll find several new options:
- C++ Class generates a pair of a source file and a header, including the header in the source file and creating a class stub in the header file.
- C/C++ Source File generates a simple source file; you can also choose to create an associated header with it.
- C/C++ Header File generates a simple header file.
In all three cases you can also select the targets in which the files need to be included, and CLion will automatically update the appropriate CMakeLists.txt in your project.
Give this new feature a try and provide your comments and feedback to us. Any related issues are very welcome to our tracker.
Make all
The default “all” target for CMake projects is now supported, which means you can find it in the configurations list, edit it, and select it for build and run. To run this configuration, CLion asks you to select an executable. More generally, the IDE now allows you to change the executable for any configuration of your choice, or even to make a configuration non-runnable by changing this value to “Not selected”.
CMake actions in main menu
There are a couple of useful CMake actions that we had placed in the CMake tool window, and now we've decided to add a CMake section to the Tools menu as well:
We've also placed the Reload CMake Project action in the File and Project View context menus for your convenience.
Other important changes include:
- A while ago we updated CLion to use a PTY as an I/O unit for running your targets. Now Linux and OS X users get the same behaviour while debugging.
- A problem on OS X with the debugger not stopping on breakpoints in class and template methods is now fixed.
- The latest CLion EAP build has CMake/GCC/Clang output coloring enabled, so that you can find your way through the resulting output more easily. If you still prefer non-colored output, use CMAKE_COLOR_MAKEFILE=OFF for CMake output, the -fno-color-diagnostics flag for Clang, and -fno-diagnostics-color for GCC.
- An issue with output not being flushed to the console without a trailing \n was fixed.
And last but not least, biicode, a C/C++ dependency manager, can now be used easily together with CLion! Get the details in our separate blog announcement.
The full list of fixed issues can be found in our tracker.
Develop with pleasure,
The CLion Team
I like the new “make all”, but there is still something wrong with the configuration handling.
I still cannot depend on configurations that do not have an executable. For example, I have a CMake setup in which a “deploy” target is used to copy files to a correct location, this “deploy” target does not create an executable or whatever.
So to be able to run my deployed application I created a new configuration which uses the correct directory and runs some executable.
This configuration depends on the “deploy” target, running this fails with “Executable is not defined”. I simply want to be able to depend on another make target, whatever that target is. Often this should not result in executing anything…
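(For reference, a deploy-style target of the kind described is typically just a custom CMake target that produces no executable – the names and paths below are invented for illustration:)

add_custom_target(deploy
    COMMAND ${CMAKE_COMMAND} -E copy_directory
            ${CMAKE_BINARY_DIR}/staging /srv/myapp    # invented paths
    COMMENT "Copying files to the deployment location")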
Now you can just set the executable to “Not selected” to tell CLion that the configuration cannot be run. Then you can run the configuration that has an executable set (you can set it manually in the edit configuration menu). Will that solve the problem for you?
Thanks for the reply, that is what I tried. My “deploy” target has no executable set. The “run” target is only a CLion configuration, not a CMake target.
To trigger a deploy before running, the “run” target depends on “deploy” in the configuration.
When I run my “run” target it starts building the “deploy” target (as expected), when it is done, it gives an error: “Error running deploy: Incorrect run configuration Executable is not specified”.
I've got it now, thanks for the explanation. Could you please create a ticket with a description in our tracker? We'll consider such a case in the implementation.
Thanks.
Excellent.
Is there any special reason why you now default to awt.useSystemAAFontSettings=lcd ? On my desktop (Ubuntu 14.04, i3 + gnome-session, LCD screen) this looks much worse than older awt.useSystemAAFontSettings=on, as in all characters suddenly get a rainbowy glow around them, but with awt.useSystemAAFontSettings=on everything looks great. Not a huge deal, but now I always have to remember to patch clion64.vmoptions upon every update…
Could you please describe it in our tracker and attach some screenshots with the description? We'll check whether we use it intentionally.
Sorry, I don’t have access to my private account on YouTrack from work, so I can only post the screenshots here:
1) With CLion defaults (awt.useSystemAAFontSettings=lcd), notice the rainbows:
2) With my own settings (awt.useSystemAAFontSettings=on, swing.aatext=true, sun.java2d.xrender=True), no rainbows:
Hope that helps!
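(For reference, the settings quoted in this thread go into clion64.vmoptions as JVM options, one per line – this is just the patch the commenter describes, not an official recommendation:)

-Dawt.useSystemAAFontSettings=on
-Dswing.aatext=true
-Dsun.java2d.xrender=True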
Just noticed that the rainbowy image looks more or less acceptable on Dell Latitude E5450 laptop screen, but very annoying on a stand-alone DELL P2214H and similar screens. Still, a more conservative (“on”) anti-aliasing looks better in all cases to my taste.
I think the P2214H is a BGR monitor. That is, the subpixels are in a different order than the more common RGB order. The font renderer needs to be informed about this in some way, though I’m not sure how that works under Java + Ubuntu.
Could you please give us your JDK version and OS details?
Ubuntu 14.04 x86_64, JRE: 1.7.0_75-b13 amd64, JVM: OpenJDK 64-Bit Server VM by Oracle Corporation.
Regarding the in-place refactoring: where did the nice contrast color boxes around the variable being renamed and other uses go? Is this a new style, or something is wrong and I should report it as a bug? If it’s a new style, is it possible to get them back by changing some setting?
By the way, unfortunately, rename refactoring at the point of definition is still not working in most cases, and variables with the same name in adjacent blocks are sometimes treated as valid in the current block.
With Rename it looks like you mean something related to two known issues in our tracker. We know about them and are going to fix them asap.
Usages should be highlighted, and the renamed symbol under the caret should have a red box; still, this could be the problem mentioned above. Please leave your sample in the issue.
You are right, the rename refactoring problems that I’ve noticed generally fall in one of those 2 categories. Okay, looking forward to getting them fixed!
Regarding the boxes: please see the screenshot of what happens when I press Shift+F6 in the latest EAP. The renamed symbol isn't red (actually it is, but it's selected, so the selection color overrides the red background color), and moreover neither the renamed symbol nor the usages have the red/green boxes around them, as used to be the case in the previous EAP. I want my boxes back!
Usages never have boxes; they were highlighted, but not boxed. Are you sure you've seen that?
Still, the box should appear on the selected symbol. What JDK version is it? And what OS?
Okay, I’ve just tried with the previous EAP (default settings, no changes to the *.vmoptions) and the red/green boxes aren’t there either
I'm not sure about the green boxes; it could be that they never existed (or maybe that happens when I search and then refactor? search should have boxes, right?), or I may have confused it with PyCharm, but I think they would be cool to have…
Shall I try installing latest JVM from Oracle, instead of using the one from the distribution repositories?
Yes, you can try. Please, share the results then.
Still, the red box around the selected symbol should appear (so that you see both the selection and a red box), and it does for us here on Ubuntu with OpenJDK 1.7.0_65.
Could you please also try the following:
1) install JDK 8;
2) with JDK 8, check some of the font-rendering options discussed above.
And share with us whether it makes things look better or worse.
I've tried the latest version from the Oracle website [JRE: 1.8.0_40-b26 (Oracle Corporation) JVM: 25.40-b25 (Java HotSpot(TM) 64-Bit Server VM)] and the refactoring still looks the same as I posted earlier, that is, no red box or any other boxes for that matter.
Search, however, does have nice green boxes; it would be great to have a red one for the refactored symbol and green ones for usages.
I've also tried all the other antialiasing settings, but the results are the same as for OpenJDK, and "on" still looks best (see the comparison above).
One small question to get the full picture – are these 'fields' actually resolved correctly? I mean, if you place the caret at the usage and go to declaration (with Ctrl+B), will it navigate correctly to the declaration?
And concerning the antialiasing problem – I've filed your comments and description in the issue tracker. Please follow it and provide more details if we need some.
> One small question to get the full picture – are these 'fields' actually resolved correctly?
I've tried; yes, they are: with Ctrl+B I can correctly jump to the declaration in the function parameter list without any problem.
> And concerning the antialiasing problem – I've filed your comments and description in the issue tracker.
Thanks, I will subscribe from home!
CLion EAP: (no red box); PyCharm Professional 4.0.5: (red box). Neither has green boxes when renaming, only in search. Suggestions: 1) bring back the red box in CLion; 2) add green boxes to both.
I'm afraid nothing was removed on purpose, and we've failed to reproduce it here in CLion. We'll discuss the problem with the team and come back to you. Sorry for the inconvenience.
Looks like we've found the problem, and it will be fixed in the next build. Please check when it's available and provide feedback.
Anyway, the new EAP is really hot. In the last few days I've been checking the blog a couple of times per day to see whether it was finally there. Overall, it's much more stable than the previous one, and also a lot more performant and responsive on my project. Keep up the good work!
Thanks! We’ll do our best.
Great work! One minor issue, though. When displaying large (many-line) console output, the default behaviour is now not to scroll automatically to the end, but to keep only the initial output lines in view; to go to the end, I have to press the scroll-to-end button. Before this release, the default behaviour was to scroll to the end automatically. Was this change intentional? I preferred the automatic scroll; is there any way of getting it back?
Thanks!
Looks like it was broken accidentally in the latest EAP. It should be fixed in the next build. Sorry for the inconvenience.
Hello!
The IDE looks extremely promising. I have one small question (I couldn't find such a parameter in Code Style): can the position of * and & be changed for the cases where they denote a pointer or a reference? That is, I want method signatures of the form const T* foo() and const T& foo(). The same applies to declarations of pointers and references – I need * and & to sit right next to the type (i.e. int* p and int& r).
In the Spaces tab, under the Other section, see Before/After '*'/'&' in declarations.
Hello!
CLion is a great IDE, but I have one little trouble with code style.
I want to change the position of the '*' and '&' operators to be close to the type (not to the names).
For example, I want int* p instead of int *p (and int& r instead of int &r). I also need a different method signature: const T& method() instead of T const& method().
Can I change something in code style for such a result?
Thank you!
As I've already answered: check the Spaces tab, Other section, Before/After '*'/'&' in declarations.
Hi! Thanks for making the world better!
Do you plan to add more features to new class window?
It would be awesome if, when creating a new class, you could indicate the namespace.
Another feature that I think would be useful is the possibility to split headers and sources into different folders. I've seen that a lot of CMake projects use an include and a src folder, something like this:
include/LIBNAME/Code.hpp
src/LIBNAME/Code.cpp
Will something like this be added in future updates?
Thanks!
Have a good day!
Thanks. We haven't thought about it, but we could definitely consider it. Please feel free to add your ideas (with use-case descriptions) to our tracker.
https://blog.jetbrains.com/clion/2015/03/new-clion-eap-create-new-class-make-all-biicode/
java.lang.Object
  oracle.olapi.metadata.BaseMetadataObject
    oracle.olapi.metadata.mdm.MdmObject
      oracle.olapi.metadata.mdm.MdmSource
        oracle.olapi.metadata.mdm.MdmDimensionedObject
          oracle.olapi.metadata.mdm.MdmMeasure
            oracle.olapi.metadata.mdm.MdmBaseMeasure
public class MdmBaseMeasure

An MdmMeasure that is mapped to a persistent physical storage structure. An MdmBaseMeasure can have a ConsistentSolveSpecification that specifies how Oracle OLAP generates solved data for the measure.
With the findOrCreateBaseMeasure method of an MdmCube, you can get an existing MdmBaseMeasure or create a new one. Committing the Transaction in which you create a base measure makes the MdmBaseMeasure a persistent object. It adds the MdmBaseMeasure to the data dictionary, which makes it available to other applications.
When you create an MdmBaseMeasure, you can specify a SQL data type for it with the setSQLDataType method. If you do not specify a SQL data type but you do associate a MeasureMap with the measure, Oracle OLAP automatically sets the SQL data type of the measure to be the data type of the Expression in the MeasureMap. If the MdmBaseMeasure has no MeasureMap and has no specified SQL data type, Oracle OLAP assigns it the default SQL data type of NUMBER.
For an existing MdmBaseMeasure, the AllowAutoDataTypeChange property of the measure determines how the SQL data type can change. If the AllowAutoDataTypeChange property is false, which is the default setting, you can only change the SQL data type of the measure by using the setSQLDataType method; adding or removing MeasureMap objects, or changing the Expression of a MeasureMap, does not change the SQL data type of the MdmBaseMeasure.
However, if the AllowAutoDataTypeChange property is true, then the setSQLDataType method does not change the SQL data type of the MdmBaseMeasure. Instead, Oracle OLAP automatically sets the SQL data type of the MdmBaseMeasure to the data type that is common to all of the Expression objects in the MeasureMap objects that are associated with the MdmBaseMeasure. Therefore, if the AllowAutoDataTypeChange property is true, then adding or removing a MeasureMap in a CubeMap, or changing the Expression of a MeasureMap, may change the SQL data type of the MdmBaseMeasure. You can set the value of the AllowAutoDataTypeChange property with the setAllowAutoDataTypeChange method.
You can get the SQL data type with the getSQLDataType method. For the data type returned in the various conditions, see the Possible Data Types Returned by getSQLDataType table.
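A sketch of how these methods fit together; the cube variable and measure name are invented for the example:

// assumes an existing MdmCube and an active Transaction
MdmBaseMeasure sales = unitsCube.findOrCreateBaseMeasure("SALES");

// either let the SQL data type track the mapped Expression objects...
sales.setAllowAutoDataTypeChange(true);

// ...or pin it explicitly (the default behaviour):
// sales.setAllowAutoDataTypeChange(false);
// sales.setSQLDataType(someSQLDataType);   // some SQLDataType instance

SQLDataType type = sales.getSQLDataType();  // NUMBER unless mapped or specified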
public java.lang.Object acceptVisitor(MdmObjectVisitor visitor, java.lang.Object context)

Calls the visitMdmBaseMeasure method of the MdmObjectVisitor and passes that method this MdmBaseMeasure and an Object.

Overrides: acceptVisitor in class MdmObject

Parameters:
visitor - An MdmObjectVisitor that is an instance of Mdm11_ObjectVisitor.
context - An Object.

Returns: the Object returned by the visitMdmBaseMeasure method.
public final ConsistentSolveSpecification getConsistentSolveSpecification()

Gets the ConsistentSolveSpecification specified for this MdmBaseMeasure.

Returns: the ConsistentSolveSpecification for this MdmBaseMeasure.
public final void setConsistentSolveSpecification(ConsistentSolveSpecification input)

Sets the ConsistentSolveSpecification for this MdmBaseMeasure.

Parameters:
input - The ConsistentSolveSpecification to associate with this MdmBaseMeasure.
public final SQLDataType getSQLDataType()

Gets the SQL data type of this MdmBaseMeasure. Oracle OLAP determines the data type in one of the following ways.

If the MdmBaseMeasure has the AllowAutoDataTypeChange property set to true:
- If the MdmBaseMeasure has MeasureMap objects that are associated with it, then the data type is the common data type of all of the Expression objects of the MeasureMap objects. An MdmBaseMeasure typically has only one MeasureMap; however, a single MdmBaseMeasure can have multiple MeasureMap objects, with each MeasureMap contained by a different CubeMap.
- If the MdmBaseMeasure has no associated MeasureMap objects, then the data type is the default type, which is NUMBER.

If the MdmBaseMeasure has the AllowAutoDataTypeChange property set to false, which is the default value:
- If you have specified a data type with the setSQLDataType method of the measure, then the data type is the one specified by that method.
- If you have not specified a data type with the setSQLDataType method and the MdmBaseMeasure has one or more MeasureMap objects associated with it, then the data type is the common data type of all of the Expression objects of the MeasureMap objects that were associated with the MdmBaseMeasure when it was first committed. If the MdmBaseMeasure had no associated MeasureMap objects when it was first committed, then the data type is the default type, NUMBER.

Returns: a SQLDataType that represents the SQL data type.

See Also: allowAutoDataTypeChange(), setAllowAutoDataTypeChange(boolean allowAutoDataTypeChange)
public final void setSQLDataType(SQLDataType type)

Sets the SQL data type of this MdmBaseMeasure. If the MdmBaseMeasure does not have a specified SQL data type and the measure has one or more associated MeasureMap objects with valid expressions, then Oracle OLAP uses the common data type of the values specified by the mappings. If you do not specify a SQL data type and the measure does not have an associated MeasureMap, then Oracle OLAP uses the default SQL data type of NUMBER.

If the AllowAutoDataTypeChange property of the MdmBaseMeasure is set to true, then Oracle OLAP ignores the data type specified by this method and automatically sets the SQL data type.

Parameters:
type - The SQLDataType to use as the data type for the MdmBaseMeasure.

See Also: allowAutoDataTypeChange(), setAllowAutoDataTypeChange(boolean allowAutoDataTypeChange)
public final boolean allowAutoDataTypeChange()

Indicates whether Oracle OLAP can automatically change the SQL data type of the measure.

Returns: a boolean that is true if Oracle OLAP can automatically change the data type of the measure, or false otherwise.
public final void setAllowAutoDataTypeChange(boolean allowAutoDataTypeChange)

Specifies whether Oracle OLAP can automatically change the SQL data type of the measure. If you specify true to allow the automatic changing of data types, then Oracle OLAP evaluates the expressions of the MeasureMap of each CubeMap object of the MdmCube that contains the measure. Based on the data types of those expressions, Oracle OLAP determines the appropriate data type for the measure. If the measure mappings have different data types, then Oracle OLAP automatically assigns a data type that is a common supertype of all of the mapped expressions.

Parameters:
allowAutoDataTypeChange - Specify true if you want to allow Oracle OLAP to automatically change the data type of the measure, or false otherwise.
http://docs.oracle.com/cd/E18283_01/olap.112/e10794/oracle/olapi/metadata/mdm/MdmBaseMeasure.html
Intel SOA Expressway Extension Functions provide many powerful, low-level functions that are commonly used in workflows in the f(x) action. It is also possible to use them in style sheets (XSLT), which SOAE executes within the workflow.
There are reasons you may wish to do this, such as needing to change parts of the workflow dynamically by using remote XSLT files, or not wanting to break an XSLT file into many parts just to go back into the workflow to run an extension function.
Here's how to write a message to the transaction log from within your XSLT. We're assuming you have constructed a basic workflow and already have an XSL Transform action within it.
The basic form would look like this:
<?xml version="1.0" encoding="ISO-8859-1"?>
There are three parts to remember:
1. Make sure your transform has the soae-xf, exslt or soae-cache namespace declared.
2. Declare your Extension Function with a variable – in this case, $log.
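The rest of the example did not survive here, but the general shape – with hypothetical names for the namespace URI and the logging function, so check the SOA Expressway documentation for the real ones – would be something like:

<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:soae-xf="urn:example:soae-extension-functions"> <!-- hypothetical URI -->

  <xsl:template match="/">
    <!-- hypothetical function name: bind the extension call to a variable -->
    <xsl:variable name="log"
                  select="soae-xf:log('message for the transaction log')"/>
    <xsl:copy-of select="."/>
  </xsl:template>
</xsl:stylesheet>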
https://software.intel.com/en-us/forums/intel-soa-products-group/topic/288787
I am writing a game in which I need to draw text onto the screen. As I understand it, there are two main ways to do this using Graphics2D: using GlyphVector, and using drawString(). Of the two I prefer the former, because it allows me to define text as a Shape object by using GlyphVector's getOutline() method.
However, GlyphVector is giving me very poor quality output. I am not sure what I am doing wrong, but the text is severely jagged and aliased, especially at small font sizes.
Here is an applet to quickly show what I am trying to do.
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Shape;
import java.awt.font.GlyphVector;
import java.awt.geom.AffineTransform;
import javax.swing.JApplet;

public class Test extends JApplet {
    public void paint(Graphics gr) {
        Graphics2D g = (Graphics2D) gr.create();

        AffineTransform trans = AffineTransform.getTranslateInstance(100, 100);
        trans.concatenate(AffineTransform.getRotateInstance(0.5));
        g.setTransform(trans);

        g.setColor(Color.red);
        Font f = new Font("Serif", Font.PLAIN, 15);
        GlyphVector v = f.createGlyphVector(g.getFontRenderContext(), "Hello");
        Shape shape = v.getOutline();
        g.setPaint(Color.red);
        g.fill(shape);
    }
}
If there are any other suggestions for drawing text I would love to hear them. However, I do need the final result to be a Shape.
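One thing worth checking: Java 2D does not antialias shapes by default, which would explain the jaggedness. A minimal sketch of the standard fix, using plain java.awt.RenderingHints inside paint():

// enable antialiasing before filling the glyph outline
g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                   RenderingHints.VALUE_ANTIALIAS_ON);
// fractional metrics help glyph positioning at small font sizes
g.setRenderingHint(RenderingHints.KEY_FRACTIONALMETRICS,
                   RenderingHints.VALUE_FRACTIONALMETRICS_ON);
g.fill(shape);   // the Shape from getOutline() is now rendered smoothly

(This needs import java.awt.RenderingHints; since the outline is a Shape, the shape-antialiasing hint applies, not the text-antialiasing hint, which only affects drawString().)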
https://www.daniweb.com/programming/software-development/threads/417150/drawing-text-using-swing
Given that .NET 1.x is entering legacy status before the end of the year, I thought it might be fun to explore the best and worst of what .NET developers have lived through for the past 5 years.
First: the best.
1) Metadata
Metadata is the lifeblood of the common language runtime. Just think of the number of features made possible (or made better) by the presence of metadata: garbage collection, form designers, code access security, and verification to name a few. The fact that metadata is extensible through custom attributes opens up a world of possibilities. Sure, we might have gotten tools like NUnit and Reflector without metadata, but they might have really sucked.
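As a tiny illustration of that extensibility (the attribute and method names here are invented for the example), a custom attribute is inert metadata until something reads it back through reflection:

using System;
using System.Reflection;

// a custom attribute: pure metadata until someone asks for it
[AttributeUsage(AttributeTargets.Method)]
public class BenchmarkAttribute : Attribute { }

public class Demo
{
    [Benchmark]
    public void HotPath() { }

    public static void Main()
    {
        // tools like NUnit discover their work in essentially this way
        foreach (MethodInfo m in typeof(Demo).GetMethods())
        {
            if (m.GetCustomAttributes(typeof(BenchmarkAttribute), false).Length > 0)
                Console.WriteLine("Found: " + m.Name);
        }
    }
}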
2) Visual Studio
The multilingual IDE does web, windows, and mobile development, too. If you face being stranded on a desert island with a Windows machine, AC power, and broadband access, but can install only one piece of software on top - take Visual Studio. Given enough time, you can write the rest. (What piece of software would you write first?).
Again, extensibility plays a huge role in the success of Visual Studio. If you haven't worked with one of the many great VS.NET add-ins in Scott Hanselman's Ultimate List, you just haven't lived.
3) Community
When people like Chris Brumme spend their time waiting at the dentist writing deep technical blog posts like "TransparentProxy", then you know times are changing. When I'm at the dentist I usually hide behind large, potted vegetation reading National Geographic, pretending I'm somewhere else, and hoping they forget I'm there, but everyone handles their phobias differently.
Besides blogs, we have an explosion of webcasts, chats, user groups, code camps, and geek dinners. There is no time to shower or pay bills - submerse yourself now in the world that Scoble built.
4) .NET Class Libraries
Every non-trivial framework has the occasional bump in the road, but let’s not talk about the System.DirectoryServices namespace while we are in a good mood, ok? The usability, intuitiveness, and discoverability of the libraries have played a large role in the adoption of .NET and the productivity of .NET programmers.
5) The Common Type System (CTS)
The CTS lays the foundation for not only C#, C++, and Visual Basic to work together, but a slew of other languages (see former Bon Jovi look-alike Jason Bock’s .NET languages list). No small feat, the CTS. It’s a rewarding experience being able to jump into a new language with foreign syntax but still have some bearing as to what is happening underneath.
Next up: the worst. Don't miss this one.
What would you include in the “best of” list?
http://odetocode.com/blogs/scott/archive/2005/03/21/the-best-of-the-net-1-x-years.aspx
What is the benefit of doing it on the server side? First and foremost, it offers security to the scripts and to your tool. Most statistical analysis tools online are handled with JavaScript, which is client-side and prone to data alteration, as in the case of JavaScript injection.
If the scripts are secure, one can ensure that the data collection and analysis are not being tampered with as they are exposed to the public via the Internet. This increases the integrity of the results.
Another benefit of using PHP to do a statistical analysis is to easily share your analysis tool online with your fellow students, engineers or analyst. This is one of the biggest down sides of using MS Excel. It cannot be easily shared online despite its statistical superiority — and if it can be shared (there are third party software programs which can convert an Excel sheet into an equivalent working HTML to process data), it uses JavaScript. As mentioned, JavaScript is prone to injection because of its client-side data validation, and also exposes your computational scripts to the public, which you might not like.
This article discusses how to do statistical analysis using PHP for the most common statistical analyses, such as:
- Accepting numerical data and then computing average, standard deviation, %CV, median, and range, or in general calculating the descriptive statistics.
- Accepting the comparison of two data samples and concluding whether or not they are statistically different (also known as "inferential statistics").
- Estimating the confidence interval of the population. For example, if you are conducting a study of the average hours of sleep of IT professionals, it is more appropriate to report the result as a confidence interval (such as 5 hours ± 1 hour) than as a point average (5 hours). This way, your readers get an accurate idea of the possible maximum and minimum values of the calculated interval.
Computing descriptive statistics using PHP
"Descriptive statistics," as the name suggests, gives a numerical description of your sample. These descriptions can be the location of the mean (average) and the variability of the sample (measured in standard deviation or % CV).
In PHP, we can execute these calculations using functions. A function is a programming block which is aimed at attaining an objective (such as calculating an average, standard deviation or % coefficient of variation). In general, below is an example web application form written in PHP that can accept numerical data for descriptive statistics analysis:
<html>
<head>
<title>Compute Descriptive Statistics of Numerical Data Using PHP</title>
</head>
<body>
<?php
//Check if the form is submitted
if (!$_POST['submit'])
{
    //form not submitted, display form
?>
<form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post">
Compute descriptive statistics such as mean, standard deviation and %CV for the following form.<br />
Copy and paste numerical data for analysis below (one data point per line):<br />
<textarea name="figures" rows="50" cols="20"></textarea>
<br />
<input type="submit" name="submit" value="Give me descriptive statistics of this sample numerical data">
</form>
<a href="/descriptivestats.php">Click here to reset or clear this form</a>
<?php
}
else
{
    //form submitted, grab the data from POST
    $figures = trim($_POST['figures']);
    //test if it contains some data
    if (!isset($figures) || trim($figures) == "")
    {
        //feedback to user that it contains no data
        die('ERROR: Enter figures. <a href="/descriptivestats.php">Click here to proceed with the analysis</a>');
    }
    else
    {
        //explode data and assign it to an array, one value per line
        $data = explode("\n", $figures);

        //function to compute the statistical mean
        function average($data) {
            return array_sum($data)/count($data);
        }

        //function to compute the sample standard deviation
        function stdev($data) {
            $average = average($data);
            foreach ($data as $value) {
                $variance[] = pow($value-$average, 2);
            }
            $standarddeviation = sqrt((array_sum($variance))/((count($data))-1));
            return $standarddeviation;
        }

        //compute % coefficient of variation
        $CV = ((stdev($data))/(average($data))) * 100;

        //function to compute the median of the dataset
        function median($data) {
            sort($data);
            $arrangements = count($data);
            if (($arrangements % 2) == 0) {
                $i = $arrangements / 2;
                return (($data[$i - 1] + $data[$i]) / 2);
            } else {
                $i = ($arrangements - 1) / 2;
                return $data[$i];
            }
        }

        //function to compute the range
        function statisticalrange($data) {
            return (max($data) - min($data));
        }

        //display results to browser
        echo '<h2>Descriptive Statistics of the Analyzed Sample Data:</h2>';
        echo '<br />';
        echo 'The mean of the sample is: <b>'.round(average($data), 4).'</b>';
        echo '<br />';
        echo 'The standard deviation of the sample is: <b>'.round(stdev($data), 4).'</b>';
        echo '<br />';
        echo 'The %coefficient of variation is (data in percent): <b>'.round($CV, 4).'</b>';
        echo '<br />';
        echo 'The median of the sample is: <b>'.median($data).'</b>';
        echo '<br />';
        echo 'The maximum sample is: <b>'.round(max($data), 4).'</b>';
        echo '<br />';
        echo 'The minimum sample is: <b>'.round(min($data), 4).'</b>';
        echo '<br />';
        echo 'The statistical range of the sample is: <b>'.round(statisticalrange($data), 4).'</b>';
        echo '<br /><br />';
        echo 'Below is the submitted/analyzed data for your reference';
        echo '<br /><br />';
        $display = implode("\n<br />", $data);
        echo $display;
        echo '<br /><br />';
        echo '<a href="/descriptivestats.php">Click here to do another analysis</a>';
    }
}
?>
</body>
</html>
Detailed explanation of the scripts
Basically what the form will do is check to see if it is submitted:
if (!$_POST['submit'])
If not, it will show the form; otherwise, it will start processing the data from a form and assign it to an array variable. This array variable, $data, contains all the data needed for PHP statistical analysis.
The data comes from an HTML form textarea; using the PHP explode function, each line of the input becomes a distinct element of the array.
To simplify calculations, PHP functions are defined for average, standard deviation, median and range.
Performing calculations with PHP
For the average:
return array_sum($data)/count($data);
The above formula computes the total sum of values contained in the array variable, and then divides by the total amount of data in the array.
The standard deviation function is rare in PHP, and very tricky:
function stdev($data) {
    $average = average($data);
    foreach ($data as $value) {
        $variance[] = pow($value - $average, 2);
    }
    $standarddeviation = sqrt((array_sum($variance))/((count($data))-1));
    return $standarddeviation;
}
Statistical formula: s = sqrt( sum((x_i - mean)^2) / (n - 1) ), where mean is the sample average and n is the number of data points.
There is no built-in PHP function for the sample standard deviation in common use, so a user-defined function is the most suitable way to do the computation.
First it gets the average of the data contained in the array; then it loops over the data, collecting the square of the difference between each value and the average (these squared differences make up the statistical variance). Finally, the squared differences are summed and divided by the total number of data points minus 1, and the square root of the result is taken.
An easier approach might be to reuse the average() function on the $variance array instead of
array_sum($variance)/((count($data))-1)
However, that would not be accurate here, because it is NOT the "sample" standard deviation. In the statistical literature there are two types of standard deviation: population and sample. For the population standard deviation we could directly reuse the average; however, most scientific experiments are done with sampling.
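For contrast, a population standard deviation (dividing by n rather than n - 1) could reuse the average() function directly; a minimal sketch in the style of the code above:

//population standard deviation: divide by n, not n-1
function stdev_population($data) {
    $average = average($data);          //reuses the average() function above
    foreach ($data as $value) {
        $variance[] = pow($value - $average, 2);
    }
    //mean of the squared deviations, then the square root
    return sqrt(average($variance));
}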
If we run an analysis, the scripts shown earlier in this article will produce a result showing the summary statistics along with the submitted data for reference.
http://www.devshed.com/c/a/php/performing-descriptive-statistical-analysis-with-php/
JaxMe FrequentlyAskedQuestions
General Questions
Required JDK version
Question
What JDK version does JaxMe require? Does it run with JDK 1.2?
Answer
The intention is, that JaxMe runs with JDK 1.2. However, the following should be noted:
1. The developers are using 1.4, or later.
2. Whereever known, 1.4 specific features are encapsulated. However, there might be other places.
3. Noone checks, whether the prerequisite jar files are running with 1.2.
In other words: you are on your own. Go ahead and try. We promise to handle any 1.2-specific problems as a bug. Nothing more, nothing less.
Thread Safety
Question
What are the issues around JaxMe usage in a multi-threaded environment?
Answer
JaxMe's JAXBContext was carefully designed to be fully thread safe and reentrant. The suggested use is to have a factory method that reads the context from a static variable.
JaxMe's marshallers and unmarshallers are not thread safe in the following sense: Unlike the context, these are configurable. For example, the marshaller has a property causing it to emit an XML declaration or not.
However, they *are* thread safe and reentrant, as long as you don't change the properties. The suggested use is indeed, to create one marshaller, unmarshaller, or validator per possible configuration, store it in a factory or static variable and use it. The actual marshalling and unmarshalling has been carefully coded to put all required information (besides the properties, of course) into JMXmlSerializer.Data, or JMHandler.Data, respectively.
Note, though, that this use of marshallers and unmarshallers is definitely *not* portable. Thread safety and similar questions are definitely not covered by the SPEC. Even worse, IMO it wasn't even considered properly, because otherwise the marshaller and its configuration would have been clearly separated. JAXB marshallers and unmarshallers are definitely heavyweight objects.
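A minimal sketch of the suggested pattern (the context path "com.example.generated" is a placeholder for your own generated package):

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;

public final class XmlFactory {
    // JAXBContext is fully thread safe: create once, share everywhere.
    private static final JAXBContext CONTEXT;
    // One marshaller per fixed configuration; per the answer above this is
    // safe in JaxMe as long as its properties are never changed, but it is
    // NOT portable across JAXB implementations.
    private static final Marshaller FORMATTED_MARSHALLER;

    static {
        try {
            CONTEXT = JAXBContext.newInstance("com.example.generated"); // placeholder
            FORMATTED_MARSHALLER = CONTEXT.createMarshaller();
            FORMATTED_MARSHALLER.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        } catch (JAXBException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static JAXBContext getContext() { return CONTEXT; }
    public static Marshaller getFormattedMarshaller() { return FORMATTED_MARSHALLER; }
}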
Generation Questions (Ant)
Unable To Derive Package Name From Empty Namespace
Question
My generation fails with the following message:
Unable to derive package name from an empty namespace URI. Use the schemaBinding to specify a package name.
What is a schema binding and how do I set it?
Answer
First of all, the error message is most probably caused by a bug, which is already fixed in CVS.
Second, the schema binding can mean different things.
- "Schema binding" as an abstract word describes the binding of a schema to classes
in a java package. This is what JaxMe does. So do Castor (see for details on Castor) or the JAXB RI (see for details on the JAXB RI).
- In this particular case, the XML tag jaxb:schemaBindings is meant, which allows to
- configure the details of the schema binding. Among other things, you may specify the package, in which the generated sources ought to live, for example like this:
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
           jaxb:version="1.0">
  <xs:annotation><xs:appinfo>
    <jaxb:schemaBindings>
      <jaxb:package name="com.example.generated"/> <!-- your package name here -->
    </jaxb:schemaBindings>
  </xs:appinfo></xs:annotation>
  ...
</xs:schema>
Marshalling
How do I unmarshal an XML document from a string?
Question
I tried to unmarshal an XML document from a String and I got this exception:
java.net.MalformedURLException: no protocol
Using example code that passes the String directly to the unmarshaller, ending in (sXML)); – the String is then interpreted as a system ID (a URL) rather than as document content.
How do I unmarshal an XML document from a string?
Answer
Here's an example:
import java.io.StringReader;
import org.xml.sax.InputSource;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Unmarshaller;

try {
    // the context path is a placeholder for your own generated package
    JAXBContext context = JAXBContext.newInstance("com.example.generated");
    Unmarshaller unmarshaller = context.createUnmarshaller();
    Object result = unmarshaller.unmarshal(new InputSource(new StringReader(sXML)));
} catch (JAXBException e) {
    // handle or log the exception
}
Unmarshalling
See the separate page on /Unmarshalling.
http://wiki.apache.org/ws/JaxMe/FrequentlyAskedQuestions?highlight=StringReader
The algorithms presented are intended for use with large maps, and where computation time of some appreciable fraction of a second is tolerable. There are faster algorithms, but they involve solving the intersection and union of arbitrary polygons; the reasons for rejecting this approach are discussed below.
Consider a region, such as a cave, dungeon or city, viewed from above, such that floors are shown by areas and walls by lines. Floors are always enclosed by some set of walls. Put another way, all maps are bounded by a continuous wall. We will ignore other map features; for our purposes, the only “important” features are walls and floors.
Lights exist at points, and have a given area of effect, defined by a radius. Everything inside this radius that faces the light is considered lit. Lights can have overlapping areas. Walls cast shadows, making the walls and floors behind them unlit (there is no provision for light attenuating – getting dimmer over distance – things are either lit or unlit).
The observer also exists at a point. The problem is to discover quickly what he can see, where the area seen is defined as those points he has line of sight to (that is, walls and floors not occluded by other walls) that are also lit. For efficiency reasons, the observer's vision also has a maximum radius. Anything outside that radius is not visible, even if it is lit and unobstructed. The algorithm can function without this limit; the limitation is an optimization.
In addition to knowing what areas are currently visible, it is desirable to know what areas have been previously visible, and to be able to display those areas differently.
An example is given in Figure 1. The observer is the red hollow circle, standing at the intersection of three passageways. White solid circles are lights (the observer, in this case, is carrying one).
Dark green is used for walls and floors that have not been seen yet. Blue walls and medium grey floors mark areas that have been seen previously. White walls and light grey floor denote what’s currently visible. Note that while the observer’s own light doesn’t reach all the way down that west passageway, there’s a light to the south that illuminates part of it, giving a disjoint area of visibility. To the southeast, there’s another light that helps light that southeast corridor, so the whole corridor is visible. Finally, to the northeast, there’s a small window in the wall, allowing the observer to illuminate, and see, a small amount of the room to the east.
Figure 1 - A simple example of a map
In this technique, the floor is broken into small fragments (triangles in this case, because they are easy to render), which serve as the algorithm’s basic unit of area. When determining what part of the floor is visible, I’m really asking what set of triangles is visible. If the centroid (average of the vertices) of a triangle is visible, the whole triangle is considered visible. This has the side effect of making the bounds of the visibility area slightly jagged, as you can see in the disjoint area down the west corridor. In my own application, this is acceptable, but blending of adjacent triangles could be done in order to get a smooth gradient between visible and invisible areas.
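In code, the centroid test is tiny; here is a minimal sketch, assuming the Point class from the listing at the end of this article:

static Point centroid(const Point& a, const Point& b, const Point& c)
{
    // The whole triangle inherits the visibility of this one point.
    return Point((a.x + b.x + c.x) / 3.0f, (a.y + b.y + c.y) / 3.0f);
}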
Because walls divide up floor area and walls can run at all sorts of angles, cutting the floor into small triangles often results in additional, smaller, triangles being generated. All told, a map can contain millions of floor triangles and hundreds of walls. For the curious, figure 2 shows the triangles generated for a small section of the map:
Figure 2 - You are in a maze of twisty triangles, all largely the same
For lighting (and seeing) the walls themselves, I do something similar – walls are broken into segments, and the center point of each segment is checked to see if it is visible. If it is visible, that whole segment is visible. For this reason, segments are kept short and walls can generate thousands of them in a large map.
Because of this, when it comes time to determine which parts of walls and floors are not visible it may be necessary to evaluate millions of points for the floor and thousands of points for wall segments. Conceptually, they all need to be evaluated against every wall to determine if line of sight exists from the observer, and that process has to be repeated for each light as well.
Clearly, a brute force approach will not work in reasonable time. The goal is to move the observer to a new point, or move a light to a new point (often both, since the observer often carries a light), and know as quickly as possible what areas of floor and segments of walls can be seen. Comparing possibly millions of points against hundreds or thousands of walls and doing a line of sight calculation – essentially calculating the intersection of two line segments, one for the line of sight and one for the wall – isn’t acceptably fast.
It turns out that lighting and vision can be handled by the same algorithm, since they are both occluded by walls in the same way. They can both be represented by casting rays out from a given point, and stopping the rays when they hit a wall. If there’s no wall along that ray, the ray is cut short by a distance limit instead (this illustrates another difficulty with using polygon intersections – polygons don’t have curved sides, and approximating them with short straight lines increases the cost of the intersection test).
Figure 3 - A "polygon" of light, with curved parts
Since there can be multiple lights, unions of polygons would be required:
Figure 4 - Union of two lit areas
Since we’ve defined visibility as areas that are both lit and within line of sight of the observer, the intersection of polygons representing lit areas (itself a union) and the polygon representing the area of sight represent the visible areas. Figure 5 repeats the original example, with yellow lines roughly delineating the lit area, and red lines bounding the area of sight. Areas within both are the visible areas.
Figure 5 - Vision and Light compute Visibility
The result of the intersection is a (potentially empty) set of polygons. To maintain a history of what’s been visible, a union of the previously visible areas and the currently visible area is also required. The combination can create polygons with holes, and for complex maps, a large number of sides. Figure 6 shows the result of a wall, a number of square columns, and a short walk along the north side of the wall by the observer, carrying a light. The union of previously visible areas is shown in darker grey1.
Figure 6 - Columns, a Wall, and A Messy Polygon Union
Doing polygon union and intersection is complex. Naïve implementations of these algorithms run into problems with boundary conditions and, in complex cases, floating point accuracy. There are packages available that solve these problems, and deal elegantly with disjoint polygons and holes, but they are available under restricted license2. I wanted an unencumbered solution, and was willing to trade off some amount of runtime to get it.
But a brute force computation of every floor point vs. every obstructing wall is unacceptable. What is needed is an efficient way to evaluate the many floor and wall points for visibility.
The basic approach amounts to describing the shadows cast by walls. Since each point in the floor has to be tested against these shadows, the algorithms focus on making it as inexpensive as possible to determine if any given point is within a shadow, without loss of accuracy. Since “as inexpensive as possible” is still too expensive, given the sheer number of points to consider, the approach also includes determining which walls are already covered by other walls (in effect, we work to discard walls which don’t change anything we care about.) Where walls cannot be discarded, the algorithm attempts to determine what parts of walls contribute to meaningful shadows, and which parts are irrelevant. Note that while I talk about shadows here, everything also applies to occluding the observer’s view; since, as noted, a wall stops a line of sight in exactly the same way that it cuts short a ray of light. Finally, I discuss the critical optimizations that make the approach fast enough to use on large and complex maps.
We start by making all points relative to the location of the light in question; in other words, everything is translated so that the light is at the origin. This translation drops terms out of many formulas and provides significant savings.
The first step is to identify when a wall casts a shadow over a point – efficiently. The simplest way to do this is to take the endpoints of a wall and arrange them so that the first point is clockwise from the second point, as seen from the origin. If they are counterclockwise, swap the points. If they are collinear with the light, throw out that wall, because it can’t cast a shadow. I will refer to the endpoints as the left and right endpoints, with the understanding that this is from the perspective of the origin.
A 2D cross product3 (from the origin, to the start point, to the end point), reveals both the edge-on case and the clockwise or counterclockwise winding of the end points.
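In code, this ordering step can be sketched as follows (consistent with the clocknessOrigin helper in the listing at the end; endpoints are assumed already light-relative):

#include <utility> // std::swap

// z > 0: origin->a->b turns counterclockwise; z < 0: clockwise; z == 0: collinear.
static inline float cross2D(const Point& a, const Point& b)
{
    return a.x * b.y - b.x * a.y;
}

// Arrange endpoints so p1 is the right (clockwise-most) endpoint as seen
// from the light. Returns false for edge-on walls, which cast no shadow.
static bool orderEndpoints(Point& p1, Point& p2)
{
    const float z = cross2D(p1, p2);
    if (z == 0) return false;      // collinear with the light: discard the wall
    if (z < 0) std::swap(p1, p2);  // endpoints were clockwise: swap them
    return true;
}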
Here’s an example:
Figure 7 - Walls that do, and don't, matter
In figure 7, line C is edge on to the light and casts no shadow of its own, so it gets dropped. B isn’t edge on, so we keep it, with BA as its right endpoint and BC as the left (as seen from the light at the origin). D is also a wall that matters, and DC becomes the right endpoint, while DE becomes the left. In some applications, in which walls always form continuous loops, A and E can also be dropped because they represent hidden surfaces. The same rules that apply to 3D surface removal apply here – in a closed figure, walls that face away from the origin can be dropped without harm. However, this algorithm works even if walls don’t form closed figures.
Now, cast a ray out from the origin through the right endpoint of a given wall – use B as an example, and cast a ray through BA. Note that any point that is in shadow happens to be to the left of this line (so are many other points, but the point is that all the shadowed ones are). Calculating “to the left of” is cheap: it’s the 2D cross product from the origin, to the right endpoint of the wall, to the point in question; for example, point X in Figure 7. The cross product gives a value that’s negative on one side of the line, positive on the other, and zero if the point in question is on the line.
Repeat for the ray from the origin to the left end of the line segment, BC. All shadowed points are to the right of this line, which is determined by another 2D cross product. All that remains is to determine if the point is behind the wall or in front of the wall, with respect to the center. This is yet another 2D cross product, this time from the wall’s right endpoint, to the left end point, to the point in question. Point X in Figure 7 would pass all three of these tests, so it is in B’s shadow.
All told, at most three cross products (six multiplies and three subtracts, and three comparisons with 0), tell if a point is shadowed by a wall. In many cases, a single cross product will prove that a point is not shadowed by a wall. But that still leaves the problem of comparing many, many thousands of points against hundreds of walls.
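Put together, the test reads like this sketch (it mirrors Wedge::testOccluded in the listing; cross2D is the helper from the earlier sketch):

// 'right' and 'left' are the wall's ordered endpoints, light-relative;
// initially they double as the wall's right and left rays.
static bool inShadow(const Point& p, const Point& right, const Point& left)
{
    if (cross2D(right, p) < 0) return false;  // p is right of the right ray
    if (cross2D(left, p) > 0) return false;   // p is left of the left ray
    // Behind the wall: the turn right -> left -> p must be clockwise.
    return (left.x - right.x) * (p.y - right.y)
         - (left.y - right.y) * (p.x - right.x) < 0;
}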
Having established an algorithm to test a point against a wall, we now need to find ways to minimize how often we have to use it. Any wall we can cull results in hundreds of thousands of fewer operations! So a first pass at culling is simply to remove any wall which is outside the radius of the light by creating a bounding square around the origin with the “radius” of the light, and a bounding rectangle from each wall’s endpoints. If these don’t overlap, that wall can’t affect lighting, and is discarded.
Figure 8 - Using rectangles and overlap to discard walls
In Figure 8, F’s rectangle doesn’t overlap the light’s rectangle, so F gets discarded. H and G overlap and so are kept – H is a mistake because it’s not really in the circle of light that matters, but this is a very cheap test that discards most walls in a large map very quickly, and that’s what we want for now. In applications where walls form loops or tight groups, the entire set can be given a single bounding rectangle, allowing whole groups of walls to be culled by a single overlap test.
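The listing’s BoundingRect plays this role; a minimal stand-in looks like this:

// Cheap first cull: axis-aligned bounding rectangles for the light (a
// square of side 2*radius) and for each wall; no overlap means no shadow.
struct Rect { float minX, minY, maxX, maxY; };

static bool overlap(const Rect& a, const Rect& b)
{
    return a.minX <= b.maxX && b.minX <= a.maxX
        && a.minY <= b.maxY && b.minY <= a.maxY;
}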
Whatever walls are left might cast shadows. For each, we calculate the squared distance between the origin and the nearest part of the wall. This is slightly messy, since “nearest” could be either endpoint, or a point somewhere between. Given these distances, sort the list of walls so that the closest rise to the top. In the case of ties, there is generally some advantage in letting the longer wall sort closer to the top. This will usually put the walls casting the largest shadows near the top of the list. This helps performance considerably, but nothing breaks if the list isn’t perfectly sorted. In figure 8, G would be judged closer, by virtue of the northernmost endpoint. H’s closest point, near the middle of H, is further off.
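A sketch of the distance computation (the nearest point may be an endpoint or an interior point, hence the clamp):

// Squared distance from the light (origin) to the wall segment a-b.
static float distSqToSegment(const Point& a, const Point& b)
{
    const float abx = b.x - a.x, aby = b.y - a.y;
    const float len2 = abx * abx + aby * aby;
    float t = (len2 > 0) ? -(a.x * abx + a.y * aby) / len2 : 0.0f;
    if (t < 0) t = 0; else if (t > 1) t = 1;   // clamp to the segment
    const float nx = a.x + t * abx, ny = a.y + t * aby;
    return nx * nx + ny * ny;                  // no square root needed
}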
Once we’ve dropped all the obviously uninvolved walls and sorted the rest by distance, it’s time to walk through the list of walls, adding them to the (initially empty) set of walls that cast shadows. If a wall turns out to be occluded by other walls in this phase, we cull it. Usually, anyway - in the interest of speed, the algorithm settles for discarding most such walls, but can miss cases. In practice, it misses few cases, so I have not troubled to improve this phase’s culling method.
To explain how this culling is done, we must introduce some concepts. Each wall generates a pair of rays, both starting from the light (origin) and one through each endpoint. As noted before, points to the right of the “left” ray, and to the left of the “right” ray, bound the possible shadow. However, some of that area might be already shadowed by another wall – one wall can partially cover another. In fact, all of that area might be shadowed by other walls – the current wall might be totally occluded. If it’s only partially occluded, what we want to do is “narrow” its rays, pushing the left ray further right and/or the right ray further left, to account for the existing, known shadows in this area. The reason for this is that we don’t want to compare points against multiple walls if we don’t have to, and we have to compare a point against any wall if it lies between the wall’s left and right rays. The narrower that angle becomes, the fewer points have to be checked against that wall.
So when we take in a new wall, the first thing we do is look at the two line segments between the origin and the wall’s endpoints (each in turn). If that line segment intersects a wall we already accepted, then the intersected wall casts a shadow we probably care about in regard to the current wall. The interesting point here is that we don’t care where the intersection occurs. An example will show why:
Figure 9 - Intersecting origin-to-endpoint with other walls
Assume the current wall, W, is half-hidden by an already accepted, closer one, S. Assume that it’s the left half of W that’s covered by the nearer wall, as in the example above. The way we discover this is by checking the line segment from the origin to the left endpoint of W against the already-accepted walls. It can intersect several. Once we find an intersection, we know immediately that the intersected wall, S, is going to cover at least part of W, and on the left side (ignore the case where S’s left and right endpoints are collinear with the origin – it doesn’t change anything). Notice that we don’t care where S gets intersected.
So what do we do? We replace W’s left endpoint ray with S’s right endpoint ray. In effect, we push the left endpoint ray of W to the right. Having done that, we check to see if we’ve pushed it so far to the right that it is now at or past W’s own right ray. If so, S completely occludes W and we discard W immediately. If not, we’ve made W “narrower”. In our example, W survived and its shadow got narrower, as marked in grey.
Figure 10 - Pushing W's left ray
We keep doing this, looking for other walls to play the part of S that intersect W’s left or right end rays. When they do, we update (“nudge”) W’s left (or right) ray by replacing it with S’s right (or left) ray. If the endpoint rays of W meet or cross during this “nudging” process, W is discarded.
Figure 11 - Losing W
In the example above, S has pushed W’s left ray, and J has pushed W’s right ray. A 2D cross product tells us that the left and right rays have gotten past each other in the process, so W is judged to be completely occluded, and gets dropped. Otherwise, it survives, with a potentially much narrowed shadow, and is added to the set of kept walls. Note that if J had been longer to the left, it could have intersected both of W’s red lines, and it would have pushed both W’s left and right rays by itself, forcing them to cross. This makes sense; it would have completely occluded W all by itself and we’d expect it to cause W to drop out.
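The “have the rays met or crossed” check is itself one more cross product (cross2D as before):

// A wall survives only while its right ray is still strictly clockwise of
// its left ray; otherwise it is completely occluded and can be dropped.
static bool fullyOccluded(const Point& rightRay, const Point& leftRay)
{
    return cross2D(rightRay, leftRay) <= 0; // rays have met or crossed
}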
Note that this algorithm doesn’t notice the case where a short, close wall casts a shadow over the middle of a long wall a little further off. In this case, both walls end up passing this check, and the long wall doesn’t get its rays changed (the only way to do that would be to split the long wall into two pieces and narrow them individually). This isn’t much of a problem in practice, because when it comes time to check points, we will again check the walls in order of distance from the origin, so the smaller, closer wall is likely to be checked first. Points it shadows won’t have to be checked again, so for those points the longer wall never needs to be checked at all. There are unusual cases where points do end up in redundant checks, but they are unusual enough not to be much of a runtime problem.
As we work through the list of candidates, we are generally working outward from the origin, so it’s not uncommon for more and more walls to end up discarded because they are completely occluded. This helps keep this part of the algorithm quick.
It remains to find a good way to detect intersections of line segments. We don’t want round-off problems (this might report false intersections or miss real ones, causing annoying issues), and we don’t care where the intersection itself actually occurs. It turns out that a reasonable way to do this is to take W’s two points, and each potential S’s two points, and arrange them into a quadrilateral. We take the 4 points in this order: S’s left, W’s left, S’s right, W’s right. If the line segments cross, the four points in order form a convex polygon. If they don’t, it is concave. An example serves:
Figure 12 - Detecting intersections
R and S cross, so the (green) polygon formed by the 4 endpoints, is a convex kite shape. R and L don’t cross, so the resulting (orange) polygon isn’t convex; it’s not even simple.
It turns out that there is a fair amount of misinformation about testing for convex polygons on the ‘Net. Talk about counting changes in sign sounds interesting (and cheap), but I’ve yet to see an implementation of this that works in all cases, including horizontal and vertical lines. What I’ve ended up with is more expensive but gets all cases, even if two points in the quad happen to be coincident. I calculate the 4 2D cross products, going around the quad (in either order). If they are all negative OR they are all positive, it’s convex. Anything else is concave or worse. While not cheap (up to 8 multiplies and quite a few subtracts), we can stop as soon as we get a difference in sign. On hardware that can do floating subtracts in parallel, this is not too bad in cost.
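A sketch of that test (compare doTheyIntersect in the listing, which unrolls the same loop):

// Interleave the endpoints as S.left, W.left, S.right, W.right and require
// every turn around the quad to have the same sign.
static bool segmentsCross(const Point& sL, const Point& wL,
                          const Point& sR, const Point& wR)
{
    const Point q[4] = {sL, wL, sR, wR};
    int sign = 0;
    for (int i = 0; i < 4; ++i)
    {
        const Point& a = q[i], &b = q[(i + 1) & 3], &c = q[(i + 2) & 3];
        const float z = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (z > 0) { if (sign < 0) return false; sign = 1; }
        else if (z < 0) { if (sign > 0) return false; sign = -1; }
    }
    return true; // all turns agree (degenerate/touching counts as crossing)
}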
By itself, that’s enough to discard unneeded walls in most cases, and minimize the scope of influence of the surviving walls. Just with what we have, it’s possible to determine if points are occluded by walls. But we’d still like it faster. We always want things faster, that’s why we buy computers.
Making it faster
Make sure all we just discussed makes sense to you, because we’re about to add some complications. There are four optimizations that can be applied to all this, unrelated to each other.
1. In my maps, walls (except doors) always touch one other wall at each endpoint. (They are really wall surfaces - just as in a 3D model, all the surface polygons are just that, surfaces, always touching neighboring surfaces along edges.) This leads to an optimization, though it is a little fussy to apply. An example serves best.
Imagine you’re a light at the origin, and over at x=5 there’s a wall stretching from (5,0) to (5,5), running north and south. It casts an obvious shadow to the east. We’ll call that wall A. But imagine that at (5,5) there’s another wall, running to (4,6), diagonally to the northwest. Call it B.
Figure 13 - Extending A's influence
A has the usual left and right rays: the right ray passes through (5,0), the left through (5,5). B has its own rays, with a right ray through (5,5) and a left ray through (4,6).
Between them, they cast a single, joined shadow, wider than the shadows they cast individually. The shape of the shadow is complicated, but it’s worth noticing that for any point behind the line of A (that is, with x > 5, noted in green), that point is in shadow if it is between A’s right ray and B’s left ray. This is because B extends A, and (important point) it extends it by turning somewhat towards the light, not away. (It would also work if A and B were collinear, but in my system, collinear walls that share an endpoint become a single wall.) There are also points B shadows that have x < 5, but when we are just considering A, it’s fair to say that we can apply B’s left bound instead of A’s left bound when asking about points that are behind the line A makes. A’s ability to screen things, shown in light grey, has effectively been extended by the dark grey area.
I don’t take advantage of this when it comes to considering which points are in shadow, because all it does is increase the number of points that are candidates for testing for any given wall, and that doesn’t help. However, I do take advantage of this when determining what walls occlude other walls.
I do this by keeping two sets of vector pairs for each wall. The ones I’ve been calling left and right are the “inner” vectors, named because they tend to move inward, and their goal is to get pushed closer together by other walls, ideally to cross. But there is also a pair of left and right vectors I call the outer pair. They start at the endpoints of the wall like the inner ones do, but they try to grow outward. They grow outward when 1) I find the wall that shares an endpoint and 2) this other wall (in my scheme there can only be one) does not bend away from the light. This is an easy check to make – it’s another 2D cross product, from A’s left endpoint, to A’s right, to B’s right. If that comes up counterclockwise, A’s outer right vector gets replaced by a copy of B’s outer right vector (as long as that’s an improvement – it’s important to check that you’re really pushing the outer right vector more to the right.)
And note the trick affects both A and B. If B extends A’s right outer vector, then A is a great candidate for extending B’s left outer vector.
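A sketch of that check, with the endpoint names from the text (bRight is B’s endpoint not shared with A):

// Does B, which shares an endpoint with A, extend A without bending away
// from the light? The turn A.left -> A.right -> B.right must be
// counterclockwise (a plain 2D cross product of the two edge vectors).
static bool extendsOutward(const Point& aLeft, const Point& aRight,
                           const Point& bRight)
{
    return (aRight.x - aLeft.x) * (bRight.y - aRight.y)
         - (aRight.y - aLeft.y) * (bRight.x - aRight.x) > 0;
}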
Applied carefully, the extra reach this gives walls helps discard distant walls very quickly in cases where there are long runs of joined walls. I find that for most maps, the difference this makes is not large, and given the work I put into getting it right, I might not have done it if I’d realized how little it helps most maps.
2. When considering points, it’s important not to waste time testing any given point against a wall that can’t possibly shadow it. Each point, after all, has three tests it has to pass, per wall: is it to the right of the left vector, is it to the left of the right vector, and is it behind the line of the wall? On average, half of all points are going to pass that first test, for most walls. That means that a fair amount of the time, the second test is going to be run for points that are not remotely candidates. And given huge numbers of points, that’s unacceptable.
Figure 14 - The futility of any one test
Here, P is to the right of the left endpoint-origin line, so it’s a candidate for being shadowed by the wall. But so are Q and R, and they clearly aren’t going to be shadowed. It would be helpful, then, if we could only test a point against the walls that have a good shot of shadowing it.
Trigonometric solutions suggest themselves, but trig functions are much too expensive.
What I do is create a list of “aggregate wedges.” Each wall’s shadow, called a wedge, is compared with the other wedges of the other walls. If they overlap, they are added to the same set of wedges, and I keep track of the leftmost and rightmost ray among everything in the same set.
Figure 15 - Creating sets of shadow wedges
Of course, if you’re in the square room without a door (never mind how you got in), you end up with all the walls in the same set, and the rays that bound the set end up “enclosing” every point on the map! So this trick is useless in these kinds of maps. But in maps of towns, with many freestanding buildings and hence many independent wedges, you can often get whole groups of walls into a number of disjoint sets, and since each set has an enclosing pair of rays that covers all the walls in the set, you can test any given point against the set’s rays: if it’s not between them, you don’t have to test any of the walls in that set.
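The per-point rejection itself is cheap (compare VectorPair::isIn in the listing; cross2D as before):

// If a point is not between a set's leftmost and rightmost rays, every wall
// in that set can be skipped. Reflex ("obtuse") sets use OR instead of AND.
static bool inWedgeSet(const Point& p, const Point& leftRay,
                       const Point& rightRay, bool acute)
{
    if (acute)
        return cross2D(leftRay, p) <= 0 && cross2D(rightRay, p) >= 0;
    return cross2D(leftRay, p) <= 0 || cross2D(rightRay, p) >= 0;
}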
This sounds pretty, but it can be maddening to get right. You can have two independent sets, and then find a wall that belongs in them both, effectively merging two sets into one big one.
Figure 16 - Combining sets
You end up doing a certain amount of set management. I have a sloppy and dirty way to do this which is reasonably fast, but it’s not pretty.
Another difficulty is knowing when a wall’s vectors overlap an existing set’s bounds. There are several cases. A wall’s vectors could be completely inside the set’s bounds, in which case the wall just becomes a member of the set and the set’s bounds don’t change. Or it can overlap to the left, joining the set and pushing the set’s left vector. Or it can overlap on the right. Or it can overlap on both sides, pushing both set vectors outward. Or it can be disjoint to that set. Keep in mind that a set can have rays at an obtuse angle, enclosing quite a bit of area. It’s surprisingly hard to diagnose all these cases properly. The algorithm is difficult to describe, but the source code is at the end of this document.
When all is said and done, this optimization is very worthwhile for some classes of maps that would otherwise hammer the point test algorithm with a lot of pointless tests. But it was not much fun to write.
3. This is my favorite optimization because it’s simple and it tends to help a great deal in certain, specific cases. When I’m gathering walls, I’m computing distance (actually, distance squared – no square roots were used in any of this) between the light and each wall. Along the way I keep track of the minimum such distance I see. If, for example, the closest distance to any wall is 50 units, then any point that is closer to the origin than 50 units can never be in shadow and doesn’t have to be checked against any wall. (Of course, if the light’s actual radius of effect is smaller than this distance, that value is used instead).
Figure 17 - Identifying points unaffected by walls
If the light is close to a wall, this optimization saves very little. If it’s not, this can put hundreds or even thousands of points into “lit” status very quickly indeed. In order to avoid edge cases, I subtract a small amount from the minimum wall distance after I compute it, so there’s no question of points right at a wall being considered lit unduly.
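The whole optimization reduces to one comparison per point:

// Points nearer the light than the nearest wall can never be shadowed.
// The epsilon matches the listing (minDistanceSq -= 0.25f) and guards
// against round-off for points right at a wall.
static bool triviallyLit(float pointDistSq, float minWallDistSq)
{
    return pointDistSq < minWallDistSq - 0.25f;
}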
4. This is my second favorite optimization because it’s simple, dirty and very effective. When I generate floor triangles, I use an algorithm that more or less sweeps through in strips. Because triangles are small, and generated such that some neighboring triangles are adjacent in the list of triangles, odds are often very high that if a triangle is shadowed by a wall, the next triangle to consider is going to be shadowed by the same wall.
Figure 18 - Neighbors often suffer the same fate
So the best optimization of all is to remember which wall last shadowed a triangle, and test the next triangle against that one first, always. After all, if a triangle is shadowed by any wall, it doesn’t have to be tested against any other; we don’t care about finding the “best” shadow, we just want to quickly find one that works. If the light happens to be close to a wall (which ruins optimization 3), this one can be very powerful. W, here, is likely to shadow about half the map.
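A sketch of the cached test (compare isInWithCheat in the listing):

// Try the wall that shadowed the previous triangle first; neighbors usually
// share a shadowing wall, so this first test very often hits.
static bool shadowedByAny(const Point& p, float distSq,
                          const Wedge* walls, int count, int* lastHit)
{
    if (count == 0) return false;
    if (walls[*lastHit].testOccluded(p, distSq)) return true;
    for (int i = 0; i < count; ++i)
    {
        if (i == *lastHit) continue;
        if (walls[i].testOccluded(p, distSq)) { *lastHit = i; return true; }
    }
    return false;
}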
One final trick – not exactly an optimization – has to do with the fact that this code runs on a dual core processor. I cut the triangle list in about half and give each half to a separate thread to run through. (Each thread has its own copy of the pointer used in optimization 4, so they don’t interfere with each other in any way.) This trick doesn’t always cut the runtime in half – it’s not uncommon for one thread to get saddled with most or all of the cases that require more checks – but it helps. Other speedups involve not using STL or Boost, and sticking to old-fashioned arrays of data structures – heresy in some C++ circles, but the speed gains are worth any purist’s complaints.
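A sketch of the split (doVisibleWork is the helper in the listing; std::thread here stands in for whatever platform threads the original used):

#include <thread>

static void computeVisibleParallel(int triangleCount)
{
    const int mid = triangleCount / 2;
    std::thread other([=] { doVisibleWork(0, mid); }); // first half
    doVisibleWork(mid, triangleCount);                 // second half here
    other.join();                                      // wait for the sibling
}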
What’s left is trivial. Each floor triangle, and each short wall segment, has a set of bits, one for each possible light, and one for the viewer. If the object is not in shadow, the appropriate light’s bit is set in that object. If any such bit is set, the object is lit. There is also a bit for the observer, which as noted uses the exact same algorithm. If the object is lit, that algorithm is then run for that point for the observer, and if it comes up un-occluded, it is marked visible (and also marked “was once visible”, because I need to keep a history of what has already been seen). Moving a light is a matter of clearing all bits that correspond to that light in all the objects, and then “recasting” the light from its new location. In my application, most lights don’t move frequently, so not much of that happens.
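A sketch of the bookkeeping described here (the 32-light capacity is an assumption for illustration):

#include <cstdint>

// One bit per light plus one for the observer. Moving a light clears only
// its own bit everywhere, then recasts from the new location.
struct LitBits
{
    static const std::uint32_t kObserver = 1u << 31;
    std::uint32_t bits;

    LitBits() : bits(0) {}
    void set(std::uint32_t mask, bool on) { if (on) bits |= mask; else bits &= ~mask; }
    bool lit() const { return (bits & ~kObserver) != 0; }        // lit by any light
    bool visible() const { return lit() && (bits & kObserver); } // lit and in view
};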
My favorite acid test at the moment is an open town map with 3 million floor triangles and almost 7000 walls. With ambient light turned on (which means everything is automatically considered lit, so everything has to be checked for “visible to observer”), and a vision limit of 400 units (so about 336,000 possible triangles in range), my worst case compute times are about 0.75 seconds on a dual core 2Ghz Intel processor. Typical times are a more acceptable 0.4 sec or so. Kinder maps (underground caves, tight packed cities, open fields with few obstructions) manage times of well under 0.1 sec.
Some examples of my implementation:
Figure 19 - Room in underground city
Figure 20 - Ambient light, buildings and rock outcrops
Figure 21 - Limited lights, windows and doors
Notice that lights in buildings shine out windows, and also enable peeks inside the building from outside, forming a number of disjoint areas of visibility.
Code Listing
What follows gives the general sense of the algorithms’ implementation. Do note that the code is not compiler-ready: support classes like Point, Wall, WallSeg and SimpleTriangle are not provided, but their implementation is reasonably obvious.
This code is released freely and for any purpose, commercial or private – it’s free and I don’t care what happens, nor do I need to be told. It is also without any warranty or promise of fitness, obviously. It works in my application as far as I know, and with some adjustment, may work in yours. The comments will show some of the battles that occurred in getting it to work. The code may contain optimizations I didn’t discuss above.
enum Clockness {Straight, Clockwise, Counterclockwise}; enum Facing {Colinear, Inside, Outside}; static inline Clockness clocknessOrigin(const Point& p1, const Point& P2) { const float a = p1.x * P2.y - P2.x * p1.y; if (a > 0) return Counterclockwise; // aka left if (a < 0) return Clockwise; return Straight; } static inline bool clocknessOriginIsCounterClockwise(const Point& p1, const Point& P2) { return p1.x * P2.y - P2.x * p1.y > 0; } static inline bool clocknessOriginIsClockwise(const Point& p1, const Point& P2) { return p1.x * P2.y - P2.x * p1.y < 0; } class LineSegment { public: Point begin_; Point vector_; //begin_+vector_ is the end point inline LineSegment(const Point& begin, const Point& end) : begin_(begin), vector_(end - begin) {} inline const Point& begin() const {return begin_;} inline Point end() const {return begin_ + vector_;} inline LineSegment(){} //We don't care *where* they intersect and we want to avoid divides and round off surprises. //So we don't attempt to solve the equations and check bounds. //We form a quadilateral with AB and CD, in ACBD order. This is a convex kite shape if the // segments cross. Anything else isn't a convex shape. If endpoints touch, we get a triangle, //which will be declared convex, which works for us. //Tripe about changes in sign in deltas at vertex didn't work. //life improves if a faster way is found to do this, but it has to be accurate. bool doTheyIntersect(const LineSegment &m) const { Point p[4]; p[0] = begin(); p[1] = m.begin(); p[2] = end(); p[3] = m.end(); unsigned char flag = 0; { float z = (p[1].x - p[0].x) * (p[2].y - p[1].y) - (p[1].y - p[0].y) * (p[2].x - p[1].x); if (z > 0) flag = 2; else if (z < 0) flag = 1; } { float z = (p[2].x - p[1].x) * (p[3].y - p[2].y) - (p[2].y - p[1].y) * (p[3].x - p[2].x); if (z > 0) flag |= 2; else if (z < 0) flag |= 1; if (flag == 3) return false; } { float z = (p[3].x - p[2].x) * (p[0].y - p[3].y) - (p[3].y - p[2].y) * (p[0].x - p[3].x); if (z > 0) flag |= 2; else if (z < 0) flag |= 1; if (flag == 3) return false; } { float z = (p[0].x - p[3].x) * (p[1].y - p[0].y) - (p[0].y - p[3].y) * (p[1].x - p[0].x); if (z > 0) flag |= 2; else if (z < 0) flag |= 1; } return flag != 3; } inline void set(const Point& begin, const Point& end) { begin_ = begin; vector_ = end - begin; } inline void setOriginAndVector(const Point& begin, const Point& v) { begin_ = begin; vector_ = v; } /* Given this Line, starting from begin_ and moving towards end, then turning towards P2, is the turn clockwise, counterclockwise, or straight? Note: for a counterclockwise polygon of which this segment is a side, Clockwise means P2 would "light the outer side" and Counterclockwise means P2 would "light the inner side". Straight means colinear. 
*/ inline Clockness clockness(const Point& P2) const { const float a = vector_.x * (P2.y - begin_.y) - (P2.x - begin_.x) * vector_.y; if (a > 0) return Counterclockwise; // aka left if (a < 0) return Clockwise; return Straight; } inline bool clocknessIsClockwise(const Point& P2) const { return vector_.x * (P2.y - begin_.y) - (P2.x - begin_.x) * vector_.y < 0; } //relative to origin inline Clockness myClockness() const {return clocknessOrigin(begin(), end());} inline bool clockOK() const {return myClockness() == Counterclockwise;} //is clockOK(), this is true if p and center are on opposide sides of me //if p is on the line, this returns false inline bool outside(const Point p) const { return clockness(p) == Clockwise; } inline bool outsideOrColinear(const Point p) const { return clockness(p) != Counterclockwise; } void print() const { begin().print(); printf(" to "); end().print(); } }; class Wall; /* A wedge is a line segment that denotes a wall, and two rays from the center, that denote the relevant left and right bound that matter when looking at this wall. Initially, the left and right bound are set from the wall's endpoints, as those are the edges of the shadow. But walls in front of (eg, centerward) of this wall might occlude the end points, and we detect that when we add this wedge. If it happens, we use the occluding wall's endpoints to nudge our own shadow rays. The idea is to minimise the shadow bounds of any given wall by cutting away areas that are already occluded by closer walls. That way, a given point to test can often avoid being tested against multiple, overlapping areas. More important, if we nudge the effective left and right rays for this wall until they meet or pass each other, that means this wall is completely occluded, and we can discard it entirely, which is the holy grail of this code. Fewer walls means faster code. For any point that's between the effective left and right rays of a given wall, the next question is if it's behind the wall. If it is, it's definitively occluded and we don't need to test it any more. Otherwise, on to the next wall. 
*/ class AggregateWedge; enum VectorComparison {ColinearWithVector, RightOfVector, OppositeVector, LeftOfVector}; static VectorComparison compareVectors(const Point& reference, const Point& point) { switch (clocknessOrigin(reference, point)) { case Clockwise: return RightOfVector; case Counterclockwise: return LeftOfVector; } if (reference.dot(point) > 0) return ColinearWithVector; return OppositeVector; } class LittleTree { public: enum WhichVec {Left2, Right1, Right2} whichVector; //(sort tie-breaker), right must be greater than left const Point* position; //vector end LittleTree* greater; //that is, further around to the right LittleTree* lesser; //that is, less far around to the right LittleTree() {greater = lesser = NULL;} void readTree(WhichVec* at, int* ip) { if (lesser) lesser->readTree(at, ip); at[*ip] = whichVector; ++*ip; if (greater) greater->readTree(at, ip); } //walk the tree in order, filling an array void readTree(WhichVec* at) { int i = 0; readTree(at, &i); } }; class VectorPair { public: Point leftVector; Point rightVector; bool acute; VectorPair() {} VectorPair(const Point& left, const Point& right) { leftVector = left; rightVector = right; acute = true; } bool isAllEncompassing() const {return leftVector.x == 0 && leftVector.y == 0;} void set(const Point& left, const Point& right) { leftVector = left; rightVector = right; acute = clocknessOrigin(leftVector, rightVector) == Clockwise; } void setKnownAcute(const Point& left, const Point& right) { leftVector = left; rightVector = right; acute = true; } void setAllEncompassing() { acute = false; leftVector = rightVector = Point(0,0); } bool isIn(const Point p) const { if (acute) return clocknessOrigin( leftVector, p) != Counterclockwise && clocknessOrigin(rightVector, p) != Clockwise; //this accepts all points if leftVector == 0,0 return clocknessOrigin( leftVector, p) != Counterclockwise || clocknessOrigin(rightVector, p) != Clockwise; } //true if we adopted the pair into ourselves. False if disjoint. bool update(const VectorPair& v) { /* I might completely enclose him - that means no change I might be completely disjoint - that means no change, but work elsewhere He might enclose all of me - I take on his bounds We could overlap; I take on some of his bounds -- We figure this by starting at L1 and moving clockwise, hitting (in some order) R2, L2 and R1. Those 3 can appear in any order as we move clockwise, and some can be colinear (in which case, we pretend a convenient order). Where L1 and R1 are the bounds we want to update, we have 6 cases: L1 L2 R1 R2 - new bounds are L1 R2 (ie, update our R) L1 L2 R2 R1 - no change, L1 R1 already encloses L2 R2 L1 R1 L2 R2 - the pairs are disjoint, no change, but a new pair has to be managed L1 R1 R2 L2 - new bounds are L2 R2; it swallowed us (update both) L1 R2 L2 R1 - all encompassing; set bounds both to 0,0 L1 R2 R1 L2 - new bounds are L2 R1 (ie, update our L) If any two rays are colinear, sort them so that left comes first, then right. If 2 lefts or 2 rights, order doesn't really matter. The left/right case does because we want L1 R1 L2 R2, where R1=L2, to be processed as L1 L2 R1 R2 (update R, not disjoint) */ //special cases - if we already have the whole circle, update doesn't do anything if (isAllEncompassing()) return true; //v is part of this wedge (everything is) //if we're being updated by a full circle... if (v.isAllEncompassing()) { setAllEncompassing(); //we become one return true; } /* Now we just need to identify which order the 3 other lines are in, relative to L1. 
Not so easy since we don't want to resort to arctan or anything else that risks any roundoff. But clockness from L1 puts them either Clockwise (sooner), or Straight (use dot product to see if same as L1 or after Clockwise), or CounterClockwise (later). Within that, we can use clockness between points to sort between them. */ //get the points R1, L2 and R2 listed so we can sort them by how far around to the right // they are from L1 LittleTree list[3]; //order we add them in here doesn't matter list[0].whichVector = LittleTree::Right1; list[0].position = &this->rightVector; list[1].whichVector = LittleTree::Left2; list[1].position = &v.leftVector; list[2].whichVector = LittleTree::Right2; list[2].position = &v.rightVector; //[0] will be top of tree; add in 1 & 2 under it somewhere for (int i = 1; i < 3; ++i) { LittleTree* at = &list[0]; do { bool IisGreater = list[i].whichVector > at->whichVector; //default if nothing else works VectorComparison L1ToAt = compareVectors(leftVector, *at->position); VectorComparison L1ToI = compareVectors(leftVector, *list[i].position); if (L1ToI < L1ToAt) IisGreater = false; else if (L1ToI > L1ToAt) IisGreater = true; else { if (L1ToI != OppositeVector && L1ToI != ColinearWithVector) { //they are in the same general half circle, so this works switch (clocknessOrigin(*at->position, *list[i].position)) { case Clockwise: IisGreater = true; break; case Counterclockwise: IisGreater = false; break; } } } //now we know where [i] goes (unless something else is there) if (IisGreater) { if (at->greater == NULL) { at->greater = &list[i]; break; //done searching for [I]'s place } at = at->greater; continue; } if (at->lesser == NULL) { at->lesser = &list[i]; break; //done searching for [I]'s place } at = at->lesser; continue; } while (true); } //we have a tree with proper order. Read out the vector ids LittleTree::WhichVec sortedList[3]; list[0].readTree(sortedList); unsigned int caseId = (sortedList[0] << 2) | sortedList[1]; //form ids into a key. Two is enough to be unique switch (caseId) { case (LittleTree::Left2 << 2) | LittleTree::Right2: //L1 L2 R2 R1 return true; //no change, we just adopt it case (LittleTree::Right1 << 2) | LittleTree::Left2: //L1 R1 L2 R2 return false; //disjoint! case (LittleTree::Right1 << 2) | LittleTree::Right2: //L1 R1 R2 L2 *this = v; return true; //we take on his bounds case (LittleTree::Right2 << 2) | LittleTree::Left2: //L1 R2 L2 R1 setAllEncompassing(); return true; //now we have everything case (LittleTree::Left2 << 2) | LittleTree::Right1: //L1 L2 R1 R2 rightVector = v.rightVector; break; default: //(LittleTree::Right2 << 2) | LittleTree::Right1: //L1 R2 R1 L2 leftVector = v.leftVector; break; } //we need to fix acute acute = clocknessOrigin(leftVector, rightVector) == Clockwise; return true; } }; class Wedge { public: //all points relative to center LineSegment wall; //begin is the clockwise, right hand direction Point leftSideVector; //ray from center to this defines left or "end" side Wedge* leftSidePoker; //if I'm updated, who did it Point rightSideVector; //ray from center to this defines left or "end" side Wedge* rightSidePoker; //if I'm updated, who did it Wall* source; //original Wall of .wall VectorPair outVectors; float nearestDistance; //how close wall gets to origin (squared) AggregateWedge* myAggregate; //what am I part of? 
inline Wedge(): source(NULL), leftSidePoker(NULL), rightSidePoker(NULL), myAggregate(NULL) {} void setInitialVectors() { leftSidePoker = rightSidePoker = NULL; rightSideVector = wall.begin(); leftSideVector = wall.end(); outVectors.setKnownAcute(wall.end(), wall.begin()); } inline bool testOccluded(const Point p, const float distSq) const //relative to center { if (distSq < nearestDistance) return false; //it cannot if (clocknessOriginIsCounterClockwise(leftSideVector, p)) return false; //not mine if (clocknessOriginIsClockwise(rightSideVector, p)) return false; //not mine return wall.outside(p); //on the outside } inline bool testOccludedOuter(const Point p, const float distSq) const //relative to center { if (distSq < nearestDistance) //this helps a surprising amount in at least Enya return false; //it cannot return wall.outside(p) && outVectors.isIn(p); //on the outside } inline bool nudgeLeftVector(Wedge* wedge) { /* So. wedge occludes at least part of my wall, on the left side. It might actually be the case of an adjacent wall to my left. If so, my end() is his begin(). And if so, I can change HIS rightSideVectorOut to my right (begin) point, assuming my begin point is forward of (or on) his wall. That means he can help kill other walls better. */ if (wedge->wall.begin() == wall.end() && !wedge->wall.outside(wall.begin())) //is it legal? { outVectors.update(VectorPair(wedge->wall.end(), wedge->wall.begin())); wedge->outVectors.update(VectorPair(wall.end(), wall.begin())); } //turning this on drives the final wedge down, but not very much bool okToDoOut = true; bool improved = false; do { if (wall.outside(wedge->wall.begin())) break; //illegal move, stop here if (clocknessOrigin(leftSideVector, wedge->wall.begin()) == Clockwise) { leftSideVector = wedge->wall.begin(); leftSidePoker = wedge; improved = true; } if (okToDoOut) { okToDoOut = !wall.outside(wedge->wall.end()); if (okToDoOut) outVectors.update(VectorPair(wedge->wall.end(), wall.begin())); } wedge = wedge->rightSidePoker; } while (wedge); return improved; } inline bool nudgeRightVector(Wedge* wedge) { /* So. wedge occludes at least part of my wall, on the right side. It might actually be the case of an adjacent wall to my right. If so, my begin() is his end(). And if so, I can change HIS leftSideVectorOut to my left (end() point, assuming my begin point is forward of (or on) his wall. That means he can help kill other walls better. */ if (wedge->wall.end() == wall.begin() && !wedge->wall.outside(wall.end())) //is it legal? 
{ outVectors.update(VectorPair(wedge->wall.end(), wedge->wall.begin())); wedge->outVectors.update(VectorPair(wall.end(), wall.begin())); } //turning this on drives the final wedge count down, but not very much bool okToDoOut = true; bool improved = false; do { if (wall.outside(wedge->wall.end())) return improved; //illegal move if (clocknessOrigin(rightSideVector, wedge->wall.end()) == Counterclockwise) { rightSideVector = wedge->wall.end(); rightSidePoker = wedge; improved = true; } if (okToDoOut) { okToDoOut = !wall.outside(wedge->wall.begin()); if (okToDoOut) outVectors.update(VectorPair(wall.end(), wedge->wall.begin())); } wedge = wedge->leftSidePoker; } while (wedge); return improved; } }; class AggregateWedge { public: VectorPair vectors; AggregateWedge* nowOwnedBy; bool dead; AggregateWedge() : nowOwnedBy(NULL), dead(false) {} bool isIn(const Point& p) const { return vectors.isIn(p); } bool isAllEncompassing() const {return vectors.leftVector.x == 0 && vectors.leftVector.y == 0;} void init(Wedge* w) { vectors.setKnownAcute(w->leftSideVector, w->rightSideVector); w->myAggregate = this; nowOwnedBy = NULL; dead = false; } //true if it caused a merge bool testAndAdd(Wedge* w) { if (dead) //was I redirected? return false; //then I don't do anything if (!vectors.update(VectorPair(w->wall.end(), w->wall.begin()))) return false; //disjoint AggregateWedge* previousAggregate = w->myAggregate; w->myAggregate = this; //now I belong to this if (previousAggregate != NULL) //then it's a merge { vectors.update(previousAggregate->vectors); //That means we have to redirect that to this assert(previousAggregate->nowOwnedBy == NULL); previousAggregate->nowOwnedBy = this; previousAggregate->dead = true; return true; } return false; } }; class AggregateWedgeSet { public: int at; int firstValid; AggregateWedge agList[8192]; float minDistanceSq; float maxDistanceSq; AggregateWedgeSet() : minDistanceSq(0), maxDistanceSq(FLT_MAX) {} void add(int numberWedges, Wedge* wedgeList) { at = 0; for (int j = 0; j < numberWedges; ++j) { Wedge* w = wedgeList + j; w->myAggregate = NULL; //none yet bool mergesHappened = false; for (int i = 0; i < at; ++i) mergesHappened |= agList[i].testAndAdd(w); if (mergesHappened) { //some number of aggregates got merged into w->myAggregate //We need to do fixups on the wedges' pointers for (int k = 0; k < j; ++k) { AggregateWedge* in = wedgeList[k].myAggregate; if (in->nowOwnedBy) //do you need an update? { in = in->nowOwnedBy; while (in->nowOwnedBy) //any more? in = in->nowOwnedBy; wedgeList[k].myAggregate = in; } } for (int k = 0; k < at; ++k) agList[k].nowOwnedBy = NULL; } if (w->myAggregate == NULL) //time to start a new one { agList[at++].init(w); } } // all wedges in minDistanceSq = FLT_MAX; for (int j = 0; j < numberWedges; ++j) { //get nearest approach float ds = wedgeList[j].nearestDistance; if (ds < minDistanceSq) minDistanceSq = ds; } minDistanceSq -= 0.25f; //fear roundoff - pull this is a little firstValid = 0; for (int i = 0; i < at; ++i) if (!agList[i].dead) { firstValid = i; #if 0 // Not sure this is working? Maybe relates to using L to change bounds? //if this is the only valid wedge and it is all-encompassing, then we can //walk all the wedges and find the furthest away point (which will be some //wall endpoint). Anything beyond that cannot be in bounds. 
if (agList[i].isAllEncompassing()) { maxDistanceSq = 0; for (int j = 0; j < numberWedges; ++j) { float ds = wedgeList[j].wall.begin().dotSelf(); if (ds > maxDistanceSq) maxDistanceSq = ds; ds = wedgeList[j].wall.end().dotSelf(); if (ds > maxDistanceSq) maxDistanceSq = ds; } } #endif break; } } const AggregateWedge* whichAggregateWedge(const Point p) const { for (int i = firstValid; i < at; ++i) { if (agList[i].dead) continue; if (agList[i].isIn(p)) { return agList + i; } } return NULL; } }; //#define UsingOuter //this slows us down. Do not use. #ifdef UsingOuter #define TheTest testOccludedOuter #else #define TheTest testOccluded #endif class AreaOfView { public: Point center; float radiusSquared; int numberWedges; BoundingRect bounds; Wedge wedges[8192]; //VERY experimental AggregateWedgeSet ags; inline AreaOfView(const Point& center_, const float radius) : center(center_), radiusSquared(radius * radius), numberWedges(0) { bounds.set(center, radius); addWalls(); } void changeTo(const Point& center_, const float radius) { center = center_; radiusSquared = radius * radius; bounds.set(center, radius); numberWedges = 0; addWalls(); } void recompute() //rebuild the wedges, with existing center and radius { bounds.set(center, sqrtf(radiusSquared)); numberWedges = 0; addWalls(); } inline bool isIn(Point p) const { p -= center; const float distSq = p.dotSelf(); if (distSq >= radiusSquared) return false; for (int i = 0; i < numberWedges; ++i) { if (wedges[i].TheTest(p, distSq)) return false; } return true; } /* On the theory that the wedge that rejected your last point has a higher than average chance of rejecting your next one, let the calling thread provide space to maintain the index of the last hit */ inline bool isInWithCheat(Point p, int* hack) const { p -= center; const float distSq = p.dotSelf(); if (distSq >= radiusSquared) return false; if (distSq < ags.minDistanceSq) return true; //this range is always unencumbered by walls if (distSq > ags.maxDistanceSq) //not working. Why? return false; if (numberWedges == 0) return true; //no boundaries //try whatever worked last time, first. It will tend to win again if (wedges[*hack].TheTest(p, distSq)) { return false; } #define UseAgg #define UseAggP #ifdef UseAgg const AggregateWedge* whichHasMe = ags.whichAggregateWedge(p); if (whichHasMe == NULL) return true; //can't be occluded! 
#endif //try everything else for (int i = 0; i < *hack; ++i) { #ifdef UseAggP #ifdef UseAgg if (wedges[i].myAggregate != whichHasMe) continue; #endif #endif if (wedges[i].TheTest(p, distSq)) { *hack = i; //remember what worked for next time return false; } } for (int i = *hack + 1; i < numberWedges ; ++i) { #ifdef UseAggP #ifdef UseAgg //does seem to help speed, but don't work yet if (wedges[i].myAggregate != whichHasMe) continue; #endif #endif if (wedges[i].TheTest(p, distSq)) { *hack = i; //remember what worked for next time return false; } } return true; } inline bool isInWithWallExclusion(Point p, const Wall* excludeWall) const { p -= center; const float distSq = p.dotSelf(); if (distSq >= radiusSquared) return false; for (int i = 0; i < numberWedges; ++i) { if (wedges[i].source == excludeWall)//this one doesn't count continue; if (wedges[i].TheTest(p, distSq )) return false; } return true; } void addWall(Wall* w, const float nearestDistance); void addWalls(); }; class AreaRef { public: AreaOfView* a; AreaRef() {a = NULL;} void set(const Point& p, float radius) { if (a == NULL) a = new AreaOfView(p, radius); else a->changeTo(p, radius); } ~AreaRef() {delete a;} void empty() {delete a; a = NULL;} AreaOfView* operator->() const {return a;} }; class WallSet { public: int length; int at; WallAndDist* list; WallSet() { at = 0; length = 2038; list = (WallAndDist*)malloc(length * sizeof(*list)); } ~WallSet() {free(list);} void add(Wall* w, const float distSq) { if (at >= length) { length *= 2; list = (WallAndDist*)realloc(list, length * sizeof(*list)); } list[at].wall = w; const LineSeg* s = w->getSeg(); list[at].lenSq = s->p[0].distanceSq(s->p[1]); list[at++].distSq = distSq; } inline void sortByCloseness() { qsort(list, at, sizeof *list, cmpWallDist); } }; void AreaOfView::addWall(Wall* w, const float nearestDistance) { if (numberWedges >= NUMOF(wedges)) return; //we are screwed const LineSeg* seg = w->getSeg(); Point w1 = seg->p[0] - center; Point w2 = seg->p[1] - center; LineSegment* wallSeg = &wedges[numberWedges].wall; switch (clocknessOrigin(w1, w2)) { case Clockwise: wallSeg->set(w2, w1); break; case Counterclockwise: wallSeg->set(w1, w2); break; default: return; //uninteresting, edge on } wedges[numberWedges].setInitialVectors(); //set left and right vectors from wall const LineSegment right(Point(0,0), wallSeg->begin()); const LineSegment left(Point(0,0), wallSeg->end()); //now we start trimming for (int i = 0; i < numberWedges; ++i) { //if this occludes both begin and it, it occludes the wall if (wedges[i].testOccludedOuter(wallSeg->begin(), wedges[numberWedges].nearestDistance) && wedges[i].testOccludedOuter(wallSeg->end(), wedges[numberWedges].nearestDistance)) return; bool changed = false; //test right side if (wedges[i].wall.doTheyIntersect(right)) { changed = wedges[numberWedges].nudgeRightVector(wedges + i); } //test left side if (wedges[i].wall.doTheyIntersect(left)) { changed |= wedges[numberWedges].nudgeLeftVector(wedges + i); } if (changed) { if (wedges[numberWedges].rightSidePoker && wedges[numberWedges].rightSidePoker == wedges[numberWedges].leftSidePoker) return; //cheap test for some total occlusion cases if ( //simplify LineSegment(Point(0,0), wedges[numberWedges].rightSideVector).clockness( wedges[numberWedges].leftSideVector) != Counterclockwise) { return; //occluded } } } //we have a keeper wedges[numberWedges].nearestDistance = nearestDistance; wedges[numberWedges].source = w; ++numberWedges; } void AreaOfView::addWalls() { //get the set of walls that can 
occlude. WallSet relevant; int initialRun = run1IsMapEdge? 2 : 1; for (int run = initialRun; run < wallRuns; ++run) { const WallRun* currentLoop = &wallLists[run]; //does this loop overlap our area? if (!currentLoop->bounds.overlapNonzero(bounds)) continue; //not an interesting loop, nothing in it can occlude //run 1 is the outer loop; we care about walls facing in. //subsequent runs are inner loops, and we care about walls facing out const Facing relevantFacing = run==1? Inside : Outside; //some walls in this loop may cast shadows. Here we go looking for them. for (int wall = 0; wall < currentLoop->wallCount; ++wall) { Wall* currentWall = currentLoop->list[wall]; //We don't currently have walls that are transparent (those are actually doors), but we could someday if (currentWall->isTransparent()) continue; //toss windows const LineSeg* currentSeg = currentWall->getSeg(); //We need to reject walls that are colinear with our rectangle bounds. //That's important because we don't want to deal with walls that *overlap* //any polygon sides; that complicates the intersection code. Walls don't // overlap other walls, and we will discard edge-on walls, so walls // don't overlap shadow lines; but without this they could overlap the // original bounding rectangle (the polygon-edge-of-last-resort). // We do have to consider overlap with creating shadows. // //Since we're looking at vertical and horisontal lines, which are pretty common, // we can also quickly discard those which are outside the rectangle, as well as // colinear with it. the wall faces away from the center point, or is edge on, it // doesn't cast a shadow, so boot it. (Getting rid of edge-on stuff // avoids walls that overlap shadow lines). if (currentSeg->facing(center) != relevantFacing) continue; //faces away (or edge-on) //We still could be dealing with an angled wall that's entirely out of range - // and anyway we want to know the distances from the center to the line segment's // nearest point, so we can sort. //Getting the distance to a segment requires work. or at the radius of interest, this wall can't matter. if (distSq >= radiusSquared) continue; //out of reach //Need to keep this one relevant.add(currentWall, distSq); } } //add doors, too. They don't have loops or bounding rectangles, and it's important to // get the right seg. Skip transparent ones. const WallRun* currentLoop = &wallLists[0]; //some walls in this loop may cast shadows. Here we go looking for them. for (int wall = 0; wall < currentLoop->wallCount; ++wall) { Wall* currentWall = currentLoop->list[wall]; if (currentWall->isTransparent()) continue; //toss windows const LineSeg* currentSeg = currentWall->getSeg(); //Horisontal and vertical lines are common, and easy to test for out of bounds. //That saves a more expensive distance check. (currentSeg->facing(center) == Straight) continue; //kill edge on walls the radius of interest, this wall can't matter. if (distSq > radiusSquared) continue; //out of reach //Need to keep this one relevant.add(currentWall, distSq); } //sort by nearness; nearer ones should be done first, as that might make more walls //identifiably irrelevant. 
relevant.sortByCloseness(); //relevant.print(); //now, "do" each wall for (int i = 0; i < relevant.at; ++i) { addWall(relevant.list[i].wall, relevant.list[i].distSq); } //build the aggregate wedge list ags.add(numberWedges, wedges); #if 0 if (center == lastAtD) { char buf[256]; sprintf(buf, "%d wedges in set", numberWedges); MessageBox(gHwnd, buf, "Wedge Count", MB_OK); } #endif } static AreaRef areaOfView; static Number visionMaxDistance; static BoundingRect inView; static Point viewPoint; static unsigned char* changedTriangleFlags; static unsigned char changedLightFlags[NUMOF(lights)]; static bool inComputeVisible = false; //multithreaded, we do triangles i..lim-1 static inline void doVisibleWork(int i, const int lim) { //if the only lights on are at the viewpoint (and no superlight), and the //lights all stay within the visiion limit (they usually do) then we don't // have to do anything but copy the "lit" flag to the "you're visible" state. //That's a huge win; we do't have to consult the area of view bool soloLightRulesVisibility = !superLight; if (soloLightRulesVisibility) { for (int k = 0; k < NUMOF(lights); ++k) if (lights[k].on) { if (lights[k].p != lastAtD || lights[k].range() > vmd) { soloLightRulesVisibility = false; break; } } } if (soloLightRulesVisibility) { //what's visible here is simply what's lit. We just copy the lit flag, no math needed. for (; i < lim; ++i) { if (simpleTriList[i]->setVisiblity( simpleTriList[i]->lit() )) { unsigned char v = 0x4 | (simpleTriList[i]->wasVisible << 1) | (simpleTriList[i]->isVisible & 1); changedTriangleFlags[i] = v; } } return; } int lookupHack[3] = {0,0,0}; for (; i < lim; ++i) { //we get a huge win from not calculating lightOfSight to unlit things if (simpleTriList[i]->setVisiblity( simpleTriList[i]->lit() && areaOfView->isInWithCheat(simpleTriList[i]->center, lookupHack) )) { unsigned char v = 0x4 | (simpleTriList[i]->wasVisible << 1) | (simpleTriList[i]->isVisible & 1); changedTriangleFlags[i] = v; } } } //End
1 To give a sense of the algorithm's performance, computing the currently visible area in figure 5, and combining it with the previous area, took 0.003 seconds on a dual-core 2 GHz processor. Keep in mind, however, that figure 5 represents a very small and simple map (containing fewer than 100,000 triangles).
2 GPC, PolyBoolean and LEDA are among these packages. Some discussion of the problems of runtime, runspace, and accuracy can be found online. Boost's GTL is promising, but it forces the use of integers for coordinates, and the implementation is still evolving.
3 Formally speaking, there isn't a cross product defined in 2D; cross products only work in 3 or 7 dimensions. What's referred to here is the z value computed as part of a 3D cross product, according to u.x * v.y - v.x * u.y.
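As a quick illustration of that footnote, here is a minimal standalone sketch; the type and function names are placeholders, not taken from the article's code:

#include <cstdio>

struct Vec2 { double x, y; };

//The "2D cross product": the z component of the 3D cross product of
//(u.x, u.y, 0) and (v.x, v.y, 0). Its sign says which side of u the
//vector v lies on; zero means the vectors are colinear.
static double cross2(const Vec2& u, const Vec2& v) {
    return u.x * v.y - v.x * u.y;
}

int main() {
    Vec2 u = {1.0, 0.0};
    Vec2 v = {0.0, 1.0};
    printf("%f\n", cross2(u, v)); //prints 1.000000; v is counterclockwise of u
    return 0;
}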
http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/walls-and-shadows-in-2d-r2711?st=30
Share Media on Twitter Using Flex, Part II: Images
By Andrew Muller
In the first part of this series on sharing media on Twitter with Flex, we discussed how to create the interface for a Flex application with the beta of Flash Catalyst. Then we used the beta of Flash Builder 4 to create a Twitter application for the browser that used PHP to proxy calls to the Twitter API. In this article we’re going to enhance that Flex application by adding the ability to upload photographs to the popular Flickr image hosting service, and then integrate a shortened link to the photo into a post to Twitter.
When you’re done with the tutorial, test your knowledge by taking our article quiz!
The Upload Process
Uploading a photograph and shortening its link can be done in a number of ways in the application we're building. We first have to upload the image, retrieve a link to it, and then submit that link to a URL shortening service. If we automated the whole process so that it ran as soon as the user posted their tweet, we would risk a long delay, because it requires three calls to different servers. Instead we've chosen the following process:
- the user uploads the photo
- a link to the photo is inserted at the end of the tweet in the posting form
- the message is then sent by the user
All code for this article is contained within the new file cheepcheep_image_flashbuilder.fxp, and you can download it together with all the resources from the previous article.
Browsing for Files
The ActionScript FileReference class gives us the ability to upload and download files from a Flash application. Its browse method uses an operating system dialog box to locate a file; each instance of FileReference will refer to a single file on the user's system. Make sure that a namespace is added for net to the Application tag:
<s:Application ...
    xmlns:net="flash.net.*">
Create an instance of FileReference between the <fx:Declarations></fx:Declarations> tags already in the application:
<net:FileReference id="fileReference" select="fileSelected(event)" complete="fileAccessed(event)"/>
We’ve set function calls for its
select and
complete events that we’ll add later. Under the
TextInput field for the tweet at the bottom of the application we’ve added a button; this enables us to browse for a file to upload and we’ll have that call the function
browsePhoto, listed below:
private function browsePhoto(evt:MouseEvent = null):void {
var arr:Array = [];
arr.push(new FileFilter("Images", "*.gif;*.jpeg;*.jpg;*.png"));
fileReference.browse(arr);
}
This function creates an array and populates it with a FileFilter object to limit the file extensions the user can choose. It then calls the browse method of the FileReference instance, passing the array as an argument. This will launch an operating system dialog box that will open to the most recently browsed directory. The function for the select event will then fire once the user has selected a file. It loads the file, which in turn will fire the complete event that displays the name of the file to the user in our application. It's in a new Label component that we added next to the browse button. You could use this function to display a preview of the image if required:
private function fileSelected(evt:Event):void {
fileReference.load();
}
private function fileAccessed(evt:Event):void {
fileName.text = "File selected: " + fileReference.name;
}
Authenticating with Flickr
There are a number of services that host photographs online for free. We've chosen Flickr because they have a well-documented API that already has a third-party ActionScript library. Flickr does require both application authentication and user permission before an application can use its services. The first item you'll need is an API key for your application, which you can obtain for free; the key will also have a matching "secret" for authentication. Find out more about this here:
- Flickr API:
- Flickr API keys:
The last build of the ActionScript 3 Flickr library lacks a method for uploading to Flickr, in either the library or the SWC. While there's a version of an upload method in the project's code repository, we've found issues in making it work. We've supplied a copy of the library with the Flash Builder project that has a working upload method; so far this is the only part of the library where we've experienced problems. The Flickr library also uses additional MD5 and network utilities that you'll need to download, and we've included the SWC in the Flash Builder project.
The Flickr process of user authentication does involve a few steps programmatically, but it should be simple enough. Ultimately you need to generate a token on Flickr for each individual user of your application, and store that token on the user’s computer for each call to Flickr. This token is a secure representation of the user’s credentials; we can use it to perform tasks like our upload without having to know these credentials. The token is generated when the user grants permission to your application on Flickr.
We’re going to use the ActionScript
SharedObject to store the authorization token locally, then test to see if it exists when we browse for a file. We’ll ask the user to authenticate the application if it doesn’t exist, and write the token and user account ID when it’s retrieved. Check the code below and you’ll see that we’re importing the necessary classes from the Flickr library, creating an instance of
SharedObject and the variables to hold the multiple values we’ll use during the authentication process:
import com.adobe.webapis.flickr.events.FlickrResultEvent;
import com.adobe.webapis.flickr.FlickrService;
import com.adobe.webapis.flickr.methodgroups.Auth;
import com.adobe.webapis.flickr.methodgroups.Upload;
private var appSO:SharedObject;
private var flickr:FlickrService;
private var flickrApiKey:String = "yourFlickrApiKey";
private var flickrSecret:String = "yourFlickrApiKeySecret";
private var flickrFrob:String = "";
private var flickrAuthToken:String = "";
private var flickrNsid:String = "";
private var authorizationURL:String = "";
In the getFlickrLoginCredentials function below we're attempting to read the SharedObject for our application, creating it if it fails to exist. We're then testing to see if the Flickr authorization token has been stored. If it's not there we create an instance of the FlickrService, initializing it with the API key, adding the secret, assigning a handler, and calling a getFrob method. This calls Flickr to retrieve a "frob" that's needed to fetch the authorization token — after the user has connected to Flickr and granted access permission to our application:
public function getFlickrLoginCredentials():void {
appSO = SharedObject.getLocal("TestFlickrTwitter");
if ( appSO.data.flickrAuthToken == null ) {
flickr = new FlickrService(flickrApiKey);
flickr.secret = flickrSecret;
flickr.addEventListener( FlickrResultEvent.AUTH_GET_FROB, onGetFrob );
flickr.auth.getFrob();
} else {
flickrAuthToken = appSO.data.flickrAuthToken;
flickrNsid = appSO.data.flickrNsid;
browsePhoto();
}
}
The other half of the getFlickrLoginCredentials function is an else clause; it reads the Flickr authorization token and Flickr user id from the SharedObject, and calls the browsePhoto method we described before. We've changed the browse for photo button to now call getFlickrLoginCredentials instead, so that we'll put the user through the authentication process if they try to upload a photograph.
Below is the onGetFrob function, the result handler called when we receive a result from the getFrob call in the function above. This stores the "frob" as a local variable and creates the authorization URL that will be needed later:
private function onGetFrob( evt:FlickrResultEvent ):void {
flickrFrob = evt.data.frob;
authorizationURL = flickr.getLoginURL(flickrFrob, "write");
currentState = "flickrAuthorise";
}
The function above calls an additional state that we created for our application called flickrAuthorise. We'll use this to display a new component that we built in Flash Catalyst; this component is a popup that contains a message to the user explaining that they need to connect to Flickr to authorize the application. The component also contains two buttons: the first to open a new browser window and connect to the Flickr authorization page, and the second to fetch the authorization token from Flickr.
You’ll notice the new state listed in the application:
<s:states>
<s:State
<s:State
<s:State
</s:states>
Have a look at the visual objects in the application and you'll see various attributes used to control them in several states. Take the browsePhotoBtn, for example; it uses the includeIn attribute to nominate which states it will appear in. An additional attribute to change the button's alpha for the flickrAuthorise state has also been set so that this will dim when the authorization popup appears. Most of the other visual objects have the same attribute for the same effect:
<s:Button
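As a sketch, the attributes being described might look like this; the label, the alpha value, and the exact state list are assumptions, while the state and handler names come from the article:

<s:Button id="browsePhotoBtn" label="Browse"
    includeIn="twitterDisplay, flickrAuthorise"
    alpha.flickrAuthorise="0.5"
    click="getFlickrLoginCredentials()"/>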
Clicking on the first button in our authorization component, CustomComponent2, calls a function in the main application, authoriseFlickr. This launches a new browser window calling the authorization URL created inside the onGetFrob function. It also enables the second button in the authorization component:
public function authoriseFlickr():void {
var urlRequest:URLRequest = new URLRequest(authorizationURL);
navigateToURL(urlRequest, "_blank");
customcomponent21.button3.enabled = true;
}
The user then has to go through a couple of steps on the Flickr site to authorize the application. The last screen they'll see will tell them that they can close the browser window; they should then click on the second button on the authorization component, which calls the onGetTokenClick function in the main application. This in turn calls the getToken method of FlickrService. A handler function, onGetToken, is specified in the onGetTokenClick function. onGetToken will receive an object from Flickr containing the authorization token and some details about the user; we use that function to store both the token and user id locally, as well as in the SharedObject. We also switch back to the main twitterDisplay state:
public function onGetTokenClick():void {
flickr.addEventListener(FlickrResultEvent.AUTH_GET_TOKEN, onGetToken);
flickr.auth.getToken(flickrFrob);
}
private function onGetToken(evt:FlickrResultEvent):void {
flickrAuthToken = evt.data.auth.token;
flickrNsid = evt.data.auth.user.nsid;
appSO.data.flickrAuthToken = flickrAuthToken;
appSO.data.flickrNsid = flickrNsid;
appSO.flush();
currentState = "twitterDisplay";
}
This will occur the first time a user attempts to browse for a photograph, so they'll need to browse again after authenticating, since that step was interrupted by the process. Once we've located our photo, we need to upload it so that we can include a link to it in our tweet.
Uploading the Photo
The adjusted upload class in the Flickr library ensures that the necessary authentication, upload process, and upload complete event are all taken care of. We've added a button at the bottom of the application below the Twitter post form to upload an image, and created a function for this button called uploadFlickr. This function creates a listener for the upload complete event and creates another instance of the FlickrService class, adding credentials to it. This is then used as the argument for an instance of the upload class that's used to upload our fileReference, which was created when we browsed for a file. We've also added a call to the setBusyCursor method of the CursorManager to provide a simple form of feedback to the user while the upload progresses, as FlickrService lacks this feature:
private function uploadFlickr():void {
fileReference.addEventListener(DataEvent.UPLOAD_COMPLETE_DATA,uploadCompleteHandler);
flickr = new FlickrService(flickrApiKey);
flickr.secret = flickrSecret;
flickr.token = flickrAuthToken;
var uploader:Upload = new Upload(flickr);
uploader.upload(fileReference);
CursorManager.setBusyCursor();
}
The upload will return an XML packet from Flickr containing an ID for the uploaded image, which we can use to construct a link for our tweet. Given Twitter's 140-character limit, we need to manage this, as Flickr URLs tend to be a little long. We're going to work around this by using the URL shortening service bit.ly, which has a REST API that we can use in our application. You'll need to create an account with bit.ly to receive an API key; this is issued automatically with the account, and you'll find your key on your account details page.
Creating a Shortened Link to the Image
The uploadCompleteHandler is specified as the result handler in uploadFlickr and will receive the XML in the data property of the event. We convert that into an XML object in the function and construct a URL combining a string, the Flickr user ID, and the photo ID. We've also created variables to store the bit.ly login and API key, both required for the REST call:
[Bindable] private var photoUrl:String = "";
private var bitlyLogin:String = "yourBitlyAccountName";
private var bitlyApiKey:String = "yourBitlyApiKey";
private function uploadCompleteHandler(evt:DataEvent):void {
CursorManager.removeBusyCursor();
var xData:XML = new XML(evt.data);
photoUrl = "http://www.flickr.com/photos/" + flickrNsid + "/" + xData.photoid;
bitlyService.send();
}
The uploadCompleteHandler function calls the send method of an HTTPService once the address for the image has been constructed. Remember that the HTTPService tag needs to be nested within a <fx:Declarations> tag set. The url attribute of the HTTPService tag needs to pass a number of arguments to bit.ly, all of which are bound to application variables. We've set a handler for the result event of the HTTPService tag:
<mx:HTTPService
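As a rough sketch, the declaration could look like the following; the bit.ly endpoint and query-parameter names are assumptions based on the bit.ly REST API of the time, so verify them against the current documentation:

<fx:Declarations>
    <mx:HTTPService id="bitlyService"
        url="http://api.bit.ly/shorten?version=2.0.1&amp;longUrl={photoUrl}&amp;login={bitlyLogin}&amp;apiKey={bitlyApiKey}"
        result="bitlyService_resultHandler(event)"/>
</fx:Declarations>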
We allow Flash Builder to create a default handler to process the bit.ly result; the shortened URL is contained within the XML packet returned by the service, which is converted into an ActionScript object for us. The function calculates the length of the text currently in the TextInput field for the tweet and either appends the shortened URL, or truncates the text and then appends the URL:
protected function bitlyService_resultHandler(evt:ResultEvent):void
{
var bitlyURL:String = evt.result.bitly.results.nodeKeyVal.shortUrl;
var combineTextURL:String = textinput1.text + " " + bitlyURL;
if ( combineTextURL.length < 140) {
textinput1.text = combineTextURL;
} else {
var excessChars:Number = 140 - combineTextURL.length;
textinput1.text = textinput1.text.substr(0,(textinput1.text.length)-Math.abs(excessChars)) + " " + bitlyURL;
}
}
With the image uploaded and a shortened link created for it we’re now ready to complete our tweet and post it to Twitter.
Uploading directly from within a Flash application was an often-requested feature introduced with Flash Player 9. Connecting images with text is a great enhancement to Twitter, and we’re sure you’ll want to take advantage of the popularity of photo-sharing services as part of the social experience afforded by your application. The Flickr API documentation is a little intimidating to the uninitiated, especially the authentication process. Fortunately, there are ActionScript libraries available to developers that make the process easier.
It’s likely that there are other ways to order the application processes described in this article. Twitter is not a one-stop shop––their service is built for posting minimal text messages. We’re relying on three service calls to generate our single Twitter post; the challenge is to bring them together in a way that’s easy for the user while offering reasonable performance times.
Make sure you download the code for this article and give it a try.
If you’re feeling confident, test your knowledge by taking our article quiz!
No Reader comments
https://www.sitepoint.com/share-media-flex-twitter-images/
Revision: 1578
Author: dvarrazzo
Date: 2007-03-09 16:10:33 -0800 (Fri, 09 Mar 2007)
Log Message:
-----------
- Variables don't add metadata to multivalue fields, if a value is already
present.
- Somewhat optimized check to test if a variable is to be used as metadata.
Modified Paths:
--------------
trunk/epydoc/src/epydoc/docstringparser.py
Modified: trunk/epydoc/src/epydoc/docstringparser.py
===================================================================
--- trunk/epydoc/src/epydoc/docstringparser.py 2007-03-09 23:26:21 UTC (rev 1577)
+++ trunk/epydoc/src/epydoc/docstringparser.py 2007-03-10 00:10:33 UTC (rev 1578)
@@ -267,13 +267,18 @@
report_errors(api_doc, docindex, parse_errors, field_warnings)
def add_metadata_from_var(api_doc, field):
- if not field.multivalue:
- for (f,a,d) in api_doc.metadata:
- if field == f:
- return # We already have a value for this metadata.
for varname in field.varnames:
# Check if api_doc has a variable w/ the given name.
if varname not in api_doc.variables: continue
+
+ # Check moved here from before the for loop because we expect to
+ # reach rarely this point. The loop below is to be performed more than
+ # once only for fields with more than one varname, which currently is
+ # only 'author'.
+ for md in api_doc.metadata:
+ if field == md[0]:
+ return # We already have a value for this metadata.
+
var_doc = api_doc.variables[varname]
if var_doc.value is UNKNOWN: continue
val_doc = var_doc.value
https://sourceforge.net/p/epydoc/mailman/message/1881492/
Overview
dnlib is a library that can read, write and create .NET assemblies and modules.
It was written for de4dot which must have a rock solid assembly reader and writer library since it has to deal with heavily obfuscated assemblies with invalid metadata. If the CLR can load the assembly, dnlib must be able to read it and save it.
Features
- Supports reading, writing and creating .NET assemblies/modules targeting any .NET framework (eg. desktop, Silverlight, Windows Phone, etc).
- Supports reading and writing mixed mode assemblies (eg. C++/CLI)
- Can read and write non-ECMA compatible .NET assemblies that MS' CLR can load and execute
- Very stable and can handle obfuscated assemblies that crash other similar libraries.
- High and low level access to the metadata
- Output size of non-obfuscated assemblies is usually smaller than the original assembly
- Metadata tokens and heaps can be preserved when saving an assembly
- Assembly reader has hooks for decrypting methods and strings
- Assembly writer has hooks for various writer events
- Easy to port code from Mono.Cecil to dnlib
- Add/delete Win32 resource blobs
- Saved assemblies can be strong name signed and enhanced strong name signed
Compiling
You must have Visual Studio 2008 or later. The solution file was created by Visual Studio 2010, so if you use VS2008, open the solution file and change the version number so VS2008 can read it.
Examples
All examples use C#, but since it's a .NET library, you can use any .NET language (eg. VB.NET).
See the Examples project for several examples.
Opening a .NET assembly/module
First of all, the important namespaces are dnlib.DotNet and dnlib.DotNet.Emit. dnlib.DotNet.Emit is only needed if you intend to read/write method bodies. All the examples below assume you have the appropriate using statements at the top of each source file:

using dnlib.DotNet;
using dnlib.DotNet.Emit;

ModuleDefMD is the class that is created when you open a .NET module. It has several Load() methods that will create a ModuleDefMD instance. If it's not a .NET module/assembly, a BadImageFormatException will be thrown.
Read a .NET module from a file:
ModuleDefMD module = ModuleDefMD.Load(@"C:\path\to\file.exe");
Read a .NET module from a byte array:
byte[] data = System.IO.File.ReadAllBytes(@"C:\path\of\file.dll");
ModuleDefMD module = ModuleDefMD.Load(data);

You can also pass in a Stream instance, an address in memory (HINSTANCE) or even a System.Reflection.Module instance:

System.Reflection.Module reflectionModule = typeof(void).Module; // Get mscorlib.dll's module
ModuleDefMD module = ModuleDefMD.Load(reflectionModule);

To get the assembly, use its Assembly property:

AssemblyDef asm = module.Assembly;
Console.WriteLine("Assembly: {0}", asm);
Saving a .NET assembly/module
Use module.Write(). It can save the assembly to a file or a Stream.

module.Write(@"C:\saved-assembly.dll");

If it's a C++/CLI assembly, you should use NativeWrite():

module.NativeWrite(@"C:\saved-assembly.dll");

To detect it at runtime, use this code:

if (module.IsILOnly) {
    // This assembly has only IL code, and no native code (eg. it's a C# or VB assembly)
    module.Write(@"C:\saved-assembly.dll");
} else {
    // This assembly has native code (eg. C++/CLI)
    module.NativeWrite(@"C:\saved-assembly.dll");
}
Strong name sign an assembly
Use the following code to strong name sign the assembly when saving it:
using dnlib.DotNet.Writer;
...
// Open or create an assembly
ModuleDef mod = ModuleDefMD.Load(.....);
// Create writer options
var opts = new ModuleWriterOptions(mod);
// Open or create the strong name key
var signatureKey = new StrongNameKey(@"c:\my\file.snk");
// This method will initialize the required properties
opts.InitializeStrongNameSigning(mod, signatureKey);
// Write and strong name sign the assembly
mod.Write(@"C:\out\file.dll", opts);
Enhanced strong name signing an assembly
See this MSDN article for info on enhanced strong naming.
Enhanced strong name signing without key migration:
using dnlib.DotNet.Writer;
...
// Open or create an assembly
ModuleDef mod = ModuleDefMD.Load(....);
// Open or create the signature keys
var signatureKey = new StrongNameKey(....);
var signaturePubKey = new StrongNamePublicKey(....);
// Create module writer options
var opts = new ModuleWriterOptions(mod);
// This method will initialize the required properties
opts.InitializeEnhancedStrongNameSigning(mod, signatureKey, signaturePubKey);
// Write and strong name sign the assembly
mod.Write(@"C:\out\file.dll", opts);
Enhanced strong name signing with key migration:
using dnlib.DotNet.Writer;
...
// Open or create an assembly
ModuleDef mod = ModuleDefMD.Load(....);
// Open or create the identity and signature keys
var signatureKey = new StrongNameKey(....);
var signaturePubKey = new StrongNamePublicKey(....);
var identityKey = new StrongNameKey(....);
var identityPubKey = new StrongNamePublicKey(....);
// Create module writer options
var opts = new ModuleWriterOptions(mod);
// This method will initialize the required properties and add
// the required attribute to the assembly.
opts.InitializeEnhancedStrongNameSigning(mod, signatureKey, signaturePubKey, identityKey, identityPubKey);
// Write and strong name sign the assembly
mod.Write(@"C:\out\file.dll", opts);
Type classes
The metadata has three type tables: TypeRef, TypeDef, and TypeSpec. The classes dnlib uses are called the same. These three classes all implement ITypeDefOrRef.

There's also type signature classes. The base class is TypeSig. You'll find TypeSigs in method signatures (return type and parameter types) and locals. The TypeSpec class also has a TypeSig property. All of these types implement IType.

TypeRef is a reference to a type in (usually) another assembly.

TypeDef is a type definition and it's a type defined in some module. This class does not derive from TypeRef. :)

TypeSpec can be a generic type, an array type, etc.

TypeSig is the base class of all type signatures (found in method sigs and locals). It has a Next property that points to the next TypeSig. Eg. a Byte[] would first contain a SZArraySig, and its Next property would point to a Byte signature.

CorLibTypeSig is a simple corlib type. You don't create these directly. Use eg. module.CorLibTypes.Int32 to get a System.Int32 type signature.

ValueTypeSig is used when the next class is a value type.

ClassSig is used when the next class is a reference type.

GenericInstSig is a generic instance type. It has a reference to the generic type (a TypeDef or a TypeRef) and the generic arguments.

PtrSig is a pointer sig.

ByRefSig is a by reference type.

ArraySig is a multi-dimensional array type. Most likely when you create an array, you should use SZArraySig, and not ArraySig.

SZArraySig is a single dimension, zero lower bound array. In C#, a byte[] is a SZArraySig, and not an ArraySig.

GenericVar is a generic type variable.

GenericMVar is a generic method variable.

Some examples if you're not used to the way type signatures are represented in metadata:

ModuleDef mod = ....;

// Create a byte[]
SZArraySig array1 = new SZArraySig(mod.CorLibTypes.Byte);

// Create an int[][]
SZArraySig array2 = new SZArraySig(new SZArraySig(mod.CorLibTypes.Int32));

// Create an int[,]
ArraySig array3 = new ArraySig(mod.CorLibTypes.Int32, 2);

// Create an int[*] (one-dimensional array)
ArraySig array4 = new ArraySig(mod.CorLibTypes.Int32, 1);

// Create a Stream[]. Stream is a reference class so it must be enclosed in a ClassSig.
// If it were a value type, you would use ValueTypeSig instead.
TypeRef stream = new TypeRefUser(mod, "System.IO", "Stream", mod.CorLibTypes.AssemblyRef);
SZArraySig array5 = new SZArraySig(new ClassSig(stream));

Sometimes you must convert an ITypeDefOrRef (TypeRef, TypeDef, or TypeSpec) to/from a TypeSig. There's extension methods you can use:

// array5 is defined above
ITypeDefOrRef type1 = array5.ToTypeDefOrRef();
TypeSig type2 = type1.ToTypeSig();
Naming conventions of metadata table classes
For most tables in the metadata, there's a corresponding dnlib class with the exact same or a similar name. Eg. the metadata has a TypeDef table, and dnlib has a TypeDef class. Some tables don't have a class because they're referenced by other classes, and that information is part of some other class. Eg. the TypeDef class contains all its properties and events, even though the TypeDef table has no property or event column.

For each of these table classes, there's an abstract base class, and two sub classes. These sub classes are named the same as the base class but end in either MD (for classes created from the metadata) or User (for classes created by the user). Eg. TypeDef is the base class, and it has two sub classes: TypeDefMD, which is auto-created from metadata, and TypeDefUser, which is created by the user when adding user types. Most of the XyzMD classes are internal and can't be referenced directly by the user. They're created by ModuleDefMD (which is the only public MD class). All XyzUser classes are public.
Metadata table classes
Here's a list of the most common metadata table classes:

AssemblyDef is the assembly class.
AssemblyRef is an assembly reference.
EventDef is an event definition. Owned by a TypeDef.
FieldDef is a field definition. Owned by a TypeDef.
GenericParam is a generic parameter (owned by a MethodDef or a TypeDef).
MemberRef is what you create if you need a field reference or a method reference.
MethodDef is a method definition. It usually has a CilBody with CIL instructions. Owned by a TypeDef.
MethodSpec is an instantiated generic method.
ModuleDef is the base module class. When you read an existing module, a ModuleDefMD is created.
ModuleRef is a module reference.
PropertyDef is a property definition. Owned by a TypeDef.
TypeDef is a type definition. It contains a lot of interesting stuff, including methods, fields, properties, etc.
TypeRef is a type reference. Usually to a type in another assembly.
TypeSpec is a type specification, eg. an array, generic type, etc.
Method classes
The following are the method classes: MethodDef, MemberRef (method ref) and MethodSpec. They all implement IMethod.
Field classes
The following are the field classes: FieldDef and MemberRef (field ref). They both implement IField.
Comparing types, methods, fields, etc
dnlib has a SigComparer class that can compare any type with any other type, any method with any other method, etc. It also has several pre-created IEqualityComparer<T> classes (eg. TypeEqualityComparer, FieldEqualityComparer, etc) which you can use if you intend to eg. use a type as a key in a Dictionary<TKey, TValue>.

The SigComparer class can also compare types with System.Type, methods with System.Reflection.MethodBase, etc.

It has many options you can set, see SigComparerOptions. The default options are usually good enough, though.

// Compare two types
TypeRef type1 = ...;
TypeDef type2 = ...;
if (new SigComparer().Equals(type1, type2))
    Console.WriteLine("They're equal");

// Use the type equality comparer
Dictionary<IType, int> dict = new Dictionary<IType, int>(TypeEqualityComparer.Instance);
TypeDef type1 = ...;
dict.Add(type1, 10);

// Compare a `TypeRef` with a `System.Type`
TypeRef type1 = ...;
if (new SigComparer().Equals(type1, typeof(int)))
    Console.WriteLine("They're equal");

It has many Equals() and GetHashCode() overloads.
.NET Resources
There are three types of .NET resources, and they all derive from the common base class Resource. ModuleDef.Resources is a list of all resources the module owns.

EmbeddedResource is a resource that has data embedded in the owner module. This is the most common type of resource and it's probably what you want.

AssemblyLinkedResource is a reference to a resource in another assembly.

LinkedResource is a reference to a resource on disk.
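For example, listing a module's resources might look like this (a minimal sketch; it prints only the name and kind of each resource):

ModuleDefMD mod = ModuleDefMD.Load(@"C:\path\to\file.dll");
foreach (Resource rsrc in mod.Resources) {
    // EmbeddedResource, AssemblyLinkedResource and LinkedResource all derive from Resource
    Console.WriteLine("{0}: {1}", rsrc.GetType().Name, rsrc.Name);
}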
Win32 resources
ModuleDef.Win32Resources can be null or a Win32Resources instance. You can add/remove any Win32 resource blob. dnlib doesn't try to parse these blobs.
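A small sketch of the null check described above:

ModuleDefMD mod = ModuleDefMD.Load(@"C:\path\to\file.exe");
if (mod.Win32Resources == null)
    Console.WriteLine("Module has no Win32 resources");
else
    Console.WriteLine("Module has Win32 resources");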
Parsing method bodies
This is usually only needed if you have decrypted a method body. If it's a standard method body, you can use MethodBodyReader.Create(). If it's similar to a standard method body, you can derive a class from MethodBodyReaderBase and override the necessary methods.
Resolving references
TypeRef.Resolve() and MemberRef.Resolve() both use module.Context.Resolver to resolve the type, field or method. The custom attribute parser code may also resolve type references.

If you call Resolve() or read custom attributes, you should initialize module.Context to a ModuleContext. It should normally be shared between all modules you open.

AssemblyResolver asmResolver = new AssemblyResolver();
ModuleContext modCtx = new ModuleContext(asmResolver);

// All resolved assemblies will also get this same modCtx
asmResolver.DefaultModuleContext = modCtx;

// Enable the TypeDef cache for all assemblies that are loaded
// by the assembly resolver. Only enable it if all auto-loaded
// assemblies are read-only.
asmResolver.EnableTypeDefCache = true;

All assemblies that you yourself open should be added to the assembly resolver cache.

ModuleDefMD mod = ModuleDefMD.Load(...);
mod.Context = modCtx; // Use the previously created (and shared) context
mod.Context.AssemblyResolver.AddToCache(mod);
Resolving types, methods, etc from metadata tokens
ModuleDefMD has several ResolveXXX() methods, eg. ResolveTypeDef(), ResolveMethod(), etc.
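For example, resolving the first row of the TypeDef table might look like this (a sketch; rids are 1-based, and rid 1 is usually the <Module> type):

ModuleDefMD mod = ModuleDefMD.Load(@"C:\path\to\file.dll");
TypeDef firstType = mod.ResolveTypeDef(1); // rid 1 = first row of the TypeDef table
Console.WriteLine(firstType);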
Creating mscorlib type references
Every module has a CorLibTypes property. It has references to a few of the simplest types such as all integer types, floating point types, Object, String, etc. If you need a type that's not there, you must create it yourself, eg.:

TypeRef consoleRef = new TypeRefUser(mod, "System", "Console", mod.CorLibTypes.AssemblyRef);
Importing runtime types, methods, fields
To import a System.Type, System.Reflection.MethodInfo, System.Reflection.FieldInfo, etc into a module, use the Importer class.

Importer importer = new Importer(mod);
ITypeDefOrRef consoleRef = importer.Import(typeof(System.Console));
IMethod writeLine = importer.Import(typeof(System.Console).GetMethod("WriteLine"));

You can also use it to import types, methods etc from another ModuleDef.

All imported types, methods etc will be references to the original assembly. I.e., it won't add the imported TypeDef to the target module. It will just create a TypeRef to it.
Using decrypted methods
If ModuleDefMD.MethodDecrypter is initialized, ModuleDefMD will call it and check whether the method has been decrypted. If it has, it calls IMethodDecrypter.GetMethodBody() which you should implement. Return the new MethodBody. GetMethodBody() should usually call MethodBodyReader.Create() which does the actual parsing of the CIL code.

It's also possible to override ModuleDefMD.ReadUserString(). This method is called by the CIL parser when it finds a Ldstr instruction. If ModuleDefMD.StringDecrypter is not null, its ReadUserString() method is called with the string token. Return the decrypted string or null if it should be read from the #US heap.
Low level access to the metadata
The low level classes are in the dnlib.DotNet.MD namespace.

Open an existing .NET module/assembly and you get a ModuleDefMD. It has several properties, eg. StringsStream is the #Strings stream. The MetaData property gives you full access to the metadata.

To get a list of all valid TypeDef rids (row IDs), use this code:

using dnlib.DotNet.MD;
// ...
ModuleDefMD mod = ModuleDefMD.Load(...);
RidList typeDefRids = mod.MetaData.GetTypeDefRidList();
for (int i = 0; i < typeDefRids.Count; i++)
    Console.WriteLine("rid: {0}", typeDefRids[i]);
You don't need to create a ModuleDefMD, though. See DotNetFile.
https://bitbucket.org/manojdjoshi/dnlib
Assertions For ActiveRecord Validations
These two methods will allow you to assert that an ActiveRecord model should or should not have an validation error. Put them at the bottom of test/test_helper.rb
def assert_error_on(field, model)
  assert !model.errors[field.to_sym].nil?, "No validation error on the #{field.to_s} field."
end

def assert_no_error_on(field, model)
  assert model.errors[field.to_sym].nil?, "Validation error on #{field.to_s}."
end
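Usage in a test case might then look like the following (a sketch assuming a User model that validates the presence of name):

def test_name_is_required
  user = User.new
  user.valid?
  assert_error_on :name, user
  user.name = "Alice"
  user.valid?
  assert_no_error_on :name, user
end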
http://www.dzone.com/snippets/assertions-activerecord
/* CVS client logging */
#ifndef LOG_BUFFER_H__
#define LOG_BUFFER_H__

void setup_logfiles (char *var, struct buffer** to_server_p,
                     struct buffer** from_server_p);

struct buffer *
log_buffer_initialize (struct buffer *buf, FILE *fp,
# ifdef PROXY_SUPPORT
                       bool fatal_errors, size_t max,
# endif /* PROXY_SUPPORT */
                       bool input,
                       void (*memory) (struct buffer *));

# ifdef PROXY_SUPPORT
struct buffer *log_buffer_rewind (struct buffer *buf);
void log_buffer_closelog (struct buffer *buf);
# endif /* PROXY_SUPPORT */

#endif /* LOG_BUFFER_H__ */
http://opensource.apple.com/source/cvs/cvs-42/cvs/src/log-buffer.h
On 28/01/10 23:38, Iain Alexander wrote:
> There are a little cluster of bugs to do with this, see

We need to re-read the pragmas after preprocessing. Ironically though, you will only be able to use this facility with a GHC that supports it, so we'll see a lot of source files like

{-# LANGUAGE ... #-}
#if __GLASGOW_HASKELL__ >= 614
{-# LANGUAGE .. more .. #-}
#endif

because GHC before 6.14 will stop at the first #if. (that's assuming we implement this for 6.14, it hasn't been done yet)

Cheers,
Simon
http://www.haskell.org/pipermail/glasgow-haskell-users/2010-January/018316.html
#include <StelSphereGeometry.hpp>

Inherits StelGeom::ConvexS and StelGeom::Polygon.

The operator [] behaves as for a Polygon, i.e. it returns the vertex positions. To access the HalfSpaces, use the asConvex() method.

- Default constructor.
- Special constructor for 3 points.
- Special constructor for 4 points.
- operator []: by default returns the vertexes (const and non-const overloads).
- Return the convex polygon area in steradians.
- Return the convex polygon barycenter (const overload also provided).
- asConvex(): cast to Convex in case of ambiguity (const and non-const overloads).
http://stellarium.org/doc/0.10.1/classStelGeom_1_1ConvexPolygon.html
Which View Engine do you use in ASP.NET MVC? (Total votes: 276)
What kind of UI Components do you prefer for the Webform view engine? (Total votes: 186)
Which Javascript framework should your ASP.NET MVC UI component depend on? (Total votes: 214)
As you can see, Webform+HtmlHelper+jQuery is far ahead of its competitors. Now, let us assess the options that we have to create UI components that extend the HtmlHelper. In this post I will use the jQuery UI Slider to discuss the options. If you are not familiar with the jQuery UI Slider, I would suggest visiting the previous link; the slider has properties like value, min, max, step, range and events like start, stop, change, slide, etc. Let's assume that the method we will have in the HtmlHelper will create the necessary html elements and emit the required javascript in the page.
Option#1 Create Regular Methods
We can create regular methods like the ASP.NET MVC Framework, for example:
public static class HtmlHelperExtension
{
public static void Slider(this HtmlHelper htmlHelper, string id, int value, int min, int max, object htmlAttributes)
{
// Implementation
}
}
And a few more overloads; the number of parameters will vary based upon the complexity of the object that we are building:
//Range
public static void Slider(this HtmlHelper htmlHelper, string id, int value1, int value2, int min, int max, object htmlAttributes)
{
// Implementation
}
public static void Slider(this HtmlHelper htmlHelper, string id, int value)
{
// Implementation
}
So, in the view we will be able to use it like the following:
<% Html.Slider("mySlider", 10, 20, 0, 100, new { style = "border: #000 1px solid" }); %>
<% Html.Slider("mySlider", 10, 0, 100, null); %>
<% Html.Slider("mySlider", 10, 20, 0, 100, null); %>
<% Html.Slider("mySlider", 10); %>
Option#2 Create Simple Fluent Syntax
We can create a slider object similar to the following:
public class jQuerySlider
{
private readonly HtmlHelper _htmlHelper;
private string _id;
private RouteValueDictionary _htmlAttributes;
private int _value;
private int[] _values;
private int _minimum;
private int _maximum;
public jQuerySlider(HtmlHelper htmlHelper)
{
_htmlHelper = htmlHelper;
}
public jQuerySlider Id(string elementId)
{
_id = elementId;
return this;
}
public jQuerySlider HtmlAttributes(object attributes)
{
_htmlAttributes = new RouteValueDictionary(attributes);
return this;
}
public jQuerySlider Value(int sliderValue)
{
_value = sliderValue;
return this;
}
// Range
public jQuerySlider Values(int slider1Value, int slider2Value)
{
_values = new int[2];
_values[0] = slider1Value;
_values[1] = slider2Value;
return this;
}
public jQuerySlider Minimum(int value)
{
_minimum = value;
return this;
}
public jQuerySlider Maximum(int value)
{
_maximum = value;
return this;
}
public void Render()
{
//Write html and register script
}
}
And create an extension method of HtmlHelper:
public static jQuerySlider Slider(this HtmlHelper helper)
{
return new jQuerySlider(helper);
}
Now, we will be able to use it in the view like:
<% Html.Slider()
.Id("mySlider")
.HtmlAttributes(new { style = "border:#000 1px solid" })
.Values(10, 20)
.Minimum(0)
.Maximum(100)
.Render(); %>
<% Html.Slider()
.Id("mySlider")
.Value(10)
.Minimum(0)
.Maximum(100)
.Render(); %>
<% Html.Slider()
.Id("mySlider")
.Value(10)
.Render(); %>
A bit verbose, but a lot more readable compared to the regular methods (option #1). The problem with this approach is that it is really easy to write:
<% Html.Slider()
.Id("mySlider")
.Value(10)
.Value(20)
.Value(30)
.Render(); %>
And VS will always show all the method names, no matter how many times each has been called:
This makes the syntax a bit confusing and ambiguous.
Option#3 Create Progressive Fluent Syntax
This solves the exact problem that I have just mentioned. Rather than returning the same object from every method, each method returns a different interface that is applicable in that context. For example:
As you can see, once a method is called it does not appear in the auto-complete list; only the next available methods in that context are shown. I have implemented the jQuery UI Accordion, Tab, ProgressBar, Slider and Theme Switcher (DatePicker and Dialog are under development) following this approach. You can find the fully functional version:
[Live version]
[Download Demo]
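To make the mechanics concrete, here is a minimal sketch of how such progressive interfaces can be declared; the interface names are made up for illustration and are not taken from the demo:

public interface ISliderId { ISliderValue Id(string id); }
public interface ISliderValue
{
    ISliderRender Value(int value);
    ISliderRender Values(int value1, int value2);
}
public interface ISliderRender
{
    ISliderRender Steps(int steps);
    void Render();
}

public class ProgressiveSlider : ISliderId, ISliderValue, ISliderRender
{
    public ISliderValue Id(string id) { /* store id */ return this; }
    public ISliderRender Value(int value) { /* store value */ return this; }
    public ISliderRender Values(int value1, int value2) { /* store range */ return this; }
    public ISliderRender Steps(int steps) { /* store steps */ return this; }
    public void Render() { /* write html and register script */ }
}

Because each method returns only the next interface in the chain, IntelliSense offers just the calls that are valid at that point, which is exactly the behavior shown above.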
This option also has a drawback: it does not support skipping methods the way option #2 does, which means we always have to call the methods in the exact same order, and it becomes a bit painful when we want to change a specific value that is a few levels deep. For example, if we want to change only the Steps of the Slider we have to use:
<% Html.jQuery().Slider()
.Id("mySlider")
.NoExtraHtmlAttributes()
.DoNotAnimate()
.UseDefaultOrientation()
.NoRange()
.Value(0)
.UseDefaultMinimum()
.UseDefaultMaximum()
.Steps(5)
.Render(); %>
Instead of:
<% Html.jQuery().Slider()
.Id("mySlider")
.Steps(5)
.Render(); %>
I prefer it over the above two, but it completely depends upon you which way you want to shape the API. So dear readers, please download the demo, play with it, and leave your opinion in the poll.
I prefer plain methods. I love fluent interfaces, but for controls inside of a view I want a more succinct syntax. The fluent interface is fine, and I don't really care about being able to call one method more than once. The progressive style is fine, but that is going to be a huge pain like you said when all you need to do is set one option and it is way at the end. Then you are going to have a ridiculously verbose control for absolutely no reason.
Also, with the fluent syntaxes, where are you going to set the html attributes? Are you going to also use an anonymous type? With the fluent syntax you could at least make an Attribute("name", "value") syntax and call it more than once.
Oops, I just saw where you were setting html attributes. I think that with the fluent interface you should leverage the fact that you can call methods more than once and do it like this:
Html.jQuery().Slider()
.Attribute("class", "something")
.Attribute("name", "somethingelse")
.Render();
Hmmm, maybe I do like the fluent syntax as opposed to method calls. :-) I know that I am definitely not a fan of the progressive syntax though. Sorry!
What about:
<% Html.jQuery().Slider(x => {
x.Id = "mySlider";
x.ExtraHtmlAttributes = false;
x.Animate = false;
x.Orientation.Default();
x.Range.None();
x.Value = 0;
x.Minimum.Default();
x.Maximum.Default()
});
I'm not surprised by the poll results.
I think your example is bloated. What purpose does your html extension serve? Why not just do
$('#progress').slider({animate:false;range:false;value:0...});
On a smaller note, you can simply place the Render logic in ToString() and use <%= %>
The only time you should be using HtmlExtensions to enchance jQuery is when you actually have server-side data you need to get in there. Like:
$('#registerForm').validate({rules:<%=Html.RulesFor<User>()%>});
which you could further improve via:
<%=Html.RulesFor<User>("#registerForm")%>
You'll actually want to have both since the 1st version has the added benefit of letting you add rules, like:
var rules = {<%= Html.RulesFor<User>()%>};
rules.Email.Tip = rules.Email.Tip + '. An Activation Email will be sent';
$('#registerForm').validate({rules:
rules});
Hi, very nice article! Greate job!
May be it would be useful to use a generic interfaces?
sorry for my english:)
Hi. This is a great initiative in my opinion! Coincidentally I just wrote an HTML helper method for jQuery's DatePicker widget. It might save you a bit of time.
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Text;
using System.Web.Mvc;
using Subspace.Mvc.Extensions.Properties;
namespace Subspace.Mvc.Extensions
{
public static class JQueryExtensions
{
public static string JQueryDatePicker(this HtmlHelper htmlHelper, string name)
{
if (name == null)
{
throw new ArgumentNullException("name");
}
else if (name.Length == 0)
throw new ArgumentException(Resources.ArgumentException_EmptyName);
DateTimeFormatInfo dateFormat = CultureInfo.CurrentCulture.DateTimeFormat;
string datePattern = dateFormat.ShortDatePattern.Replace("M", "m").Replace("yyyy", "yy");
return string.Format(
CultureInfo.InvariantCulture,
@" $('#{0}').datepicker(
{{
duration: '',
closeText: '{1}',
prevText: '←',
nextText: '→',
currentText: '{2}',
monthNames: ['{3}'],
monthNamesShort: ['{4}'],
dayNames: ['{5}'],
dayNamesShort: ['{6}'],
dayNamesMin: ['{7}'],
dateFormat: '{8}',
firstDay: {9},
isRTL: {10}
}}
);",
htmlHelper.ViewContext.HttpContext.Server.UrlEncode(name),
JQueryResources.Calendar_Close,
JQueryResources.Calendar_Today,
JoinOptionValueArray(dateFormat.MonthNames, true),
JoinOptionValueArray(dateFormat.AbbreviatedMonthNames, true),
JoinOptionValueArray(dateFormat.DayNames, true),
JoinOptionValueArray(dateFormat.AbbreviatedDayNames, true),
JoinOptionValueArray(dateFormat.ShortestDayNames, true),
datePattern,
(int)dateFormat.FirstDayOfWeek,
CultureInfo.CurrentCulture.TextInfo.IsRightToLeft.ToString().ToLowerInvariant()
);
}
private static string JoinOptionValueArray(string[] values, bool toLowerCase)
{
    if (toLowerCase)
        return string.Join("','", values.Select((n) => n.ToLower().Replace("'", "\\'")).ToArray());

    return string.Join("','", values.Select((n) => n.Replace("'", "\\'")).ToArray());
}
}
}
I agree with Justin; as he said, "I don't really care about being able to call one method more than once"! And I would comment that that is because he, and most of us, will really be able to identify this and control it ourselves.
Progressive is fine of course and enforces control over the syntax, but I think we can control it ourselves! So I will go for the easy way; after all, it is a common syntax for everyone. And for progressive I would have this impression while coding: "Oh damn it, why should I have to write all this to set the Step"
Good post and controls Rashid. And the best of it, is you provided different styles to give developers options to vote for.
Great Article..
I've started a small MVC UI helper project (github.com/.../master), and I'll definitely add the fluent syntax explained in this demo to my helpers.
Thanks!
Erik
I probably would not want to use a HtmlHelper just to do something totally client-side: I think jQuery is such a great library, and it already is quite easy to use. I don't see a personal benefit in doing so.
But it's all about options, and some other people would probably like to use server code to manipulate client-side scripting.
I think Chad probably got the best option: you have intellisense to help with the selection of properties and you can also call methods on the object.
Yet another possible approach could be:
<%= Html.jQuery.Slider(
new SliderOptions {
Id = "mySlider",
Animate = false,
Value = 10,
MinValue = 1,
MaxValue = 100,
Orientation = SliderOrientation.Horizontal,
HtmlAttributes = new { style = "border:#000 1px solid" },
}) %>
it has the advantage over the simple fluent interface that you cannot set a property more than once, so you can control the syntax. And it's probably easier to grasp than expression trees for the average developer.
But this has the disadvantage of not allowing method calls.
HTH
Simo
@simonech: my personal number one reason for using an HTML helper is this: take the DatePicker for example (I posted some code in a reply above), most of its options consist of localized strings. Getting hold of those without leveraging an HTML helper would be cumbersome at the least.
@Sandor: yes... when there is interaction with server-side code I agree it's needed. But the Slider is not... and also the datapicker, you could handle localization directly with JS
Sandor, why not just:
$('#from').calendar({today: '<%=Html.Lookup("Today")%>'});
?
I'm with Chad. IMO there are a couple of suboptimal things about the standard method-chaining fluent interface:
1. You have to call the terminating command (in this case "Render()") which is fine if you know what it is
2. Discoverability. It's only fluent if you know what order to call things (and sometimes it matters),
@Karl: besides that there are the day names, abbreviated day names, shortest day names etc, preferrably all lower cased. I think it is good practice to leverage the .NET framework's ability to produces all of those based on the clients culture settings instead of just sticking with the server language. If I'm in France for example, I'd like to get the French calendar, even though the site is in English only.
I agree: use the right tool for the right situation. HtmlHelpers are great for places where I have more than 20 lines of javascript/jquery code. I do not believe most of the standard MVC helpers are useful. I find it easier to use the HTML syntax for a textbox than try and memorize a new syntax for helper.Textbox.
In this case your example is great for education purposes; however, in the real world I think Chad's code is less complex with less overhead.
@Justin Etheredge : Thanks for your valuable comments.
@Chad Myers: Looks very interesting, I will definitely give it a try.
@ Karl: I think I have picked a trivial component for this post; maybe if I had used the Accordion/Tab you would find the justification for having a C# wrapper around these jQuery UI Components. For example, if I want to use the tab in plain html, I have to write the following html to avoid the flicker:
<div class="ui-tabs ui-widget ui-widget-content ui-corner-all" id="myTab">
<ul class="ui-tabs-nav ui-helper-reset ui-helper-clearfix ui-widget-header ui-corner-all">
<li class="ui-state-default ui-corner-top ui-tabs-selected ui-state-active"><a href="#tabs-1">Tab 1</a> </li>
<li class="ui-state-default ui-corner-top"><a href="#tabs-2">Tab 2</a> </li>
<li class="ui-state-default ui-corner-top"><a href="#tabs-3">Tab 3</a> </li>
</ul>
<div class="ui-tabs-panel ui-widget-content ui-corner-bottom" id="tabs-1"></div>
</div>
I think having a wrapper around it is very handy, as I do not have to remember those css class names and the structure of each component. Also there are a few server-side integrations that I have in mind which are yet to be implemented.
@hazzik: Thanks.
@Sandor: Thanks for the code snippet, I will definitely give it a try when implementing the DatePicker.
@mosessaur: Thanks for your valuable comments.
@Erik: Great to hear that.
@simonech: Thanks for your comments. Pls check my comments on Karl's response.
Yes, it's very similar to the ASP.NET MVC Ajax Helper; not sure how I missed it ;-(.
@lilconnorpeterson: Thanks.
@Pete: Thanks
Your tabs example is even worse. Your server-side code shouldn't generate your class names, your jQuery plugin should.
You should just need to do:
<ul id="menu">
<li><a href="/login">Login</a></li>
<li><a href="/register">Register</a></li>
<li><a href="/about">About</a></li>
</ul>
$('#menu').tabs();
the tabs plugin has everything it needs to:
- determine the selected tab
- apply the correct class names
- load the panels
@Karl: Please do the following:
1. Put the jQuery CSS in the head.
2. Put the JS at the bottom of the page, maybe just before the closing body tag.
3. Now create the tab as you suggested.
Were you able to see the initial flicker?
And, if that doesn't work, there are other client-side solutions to fixing this client-side problem (just google it).
What is the gain of doing:
<% Html.jQuery().ScriptManager()
.OnPageLoad(() =>
{%>
$('#chanage').click(
..........................................
I just can't see the benefit.
There are better solutions for eliminating screen flicker. A Google search should prove fruitful. You are essentially forsaking sound web design to solve a problem the wrong way. Why not have all of our HTML generated from HTML extension methods?
<%=Html.GenerateLoginForm()%>
public static string GenerateLoginForm(this HtmlHelper html)
{
    var table = "<table>";
    table += "<div><label>UserName</label><input type....
    return table;
}
@Karl: Not sure what you mean; I see no difference between generating the HTML in an extension method and keeping it plain in the page. The issue is that when you put your JS file at the bottom, there is a certain delay depending on the size of the JS and the connection speed (assuming the files are not cached in the browser). So having this HTML generated with the CSS classes reduces the initial flicker.
I think we are going in a different direction, which was not my intent for this post, and everyone has their personal preference. If you are comfortable hand-coding this HTML and then creating the tabs manually, please do it; I personally do not see any problem. But there are a lot of people who think having this automation will save some time for them.
@Chad's approach looks cool too. One thing I would like to add about the fluent syntax: it is close to normal jQuery chaining, which would make it easy for jQuery fans.
@sirrocco: the OnPageLoad gathers all the statements of the master page, content pages, and user controls, no matter how deep the nesting is, and merges them into a single document.ready.
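A minimal sketch of that collecting behavior, with illustrative names rather than the actual implementation: fragments registered anywhere in the page hierarchy are buffered, then emitted once inside a single document.ready.

using System.Collections.Generic;
using System.Text;

public class PageLoadScriptCollector
{
    private readonly List<string> _fragments = new List<string>();

    // Called from the master page, content pages, and user controls alike.
    public void OnPageLoad(string scriptFragment)
    {
        _fragments.Add(scriptFragment);
    }

    // Called once when the page renders, merging everything into a
    // single document.ready block.
    public string RenderCombinedScript()
    {
        var sb = new StringBuilder();
        sb.AppendLine("<script type=\"text/javascript\">");
        sb.AppendLine("$(document).ready(function() {");
        foreach (string fragment in _fragments)
            sb.AppendLine(fragment);
        sb.AppendLine("});");
        sb.AppendLine("</script>");
        return sb.ToString();
    }
}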
+1 for "<%= Html.jQuery.Slider(new SliderOptions {Id = "mySlider",MinValue = 1}) %>"
However, I also agree with Karl. You seem to be just changing a JavaScript call into a typed C# call; I would just prefer to use the JavaScript.
This thread has been a great read. Both the fluent syntax and Chad's suggestion are superior designs because they most closely mimic jQuery chaining. And our goal is to move closer to native jQuery and JS, right?
Generally speaking, the HTMLHelper vs. native HTML/JS discussion is highly relevant and more than a reflection of personal preference. My concern is that as ASP.NET developers we have become so accustomed to developing web forms plus code-behind that generate markup and scripts that our natural tendency is to use HTML helpers as a crutch that lets us revert to old behavior.
Think about what HTML helper libraries mean for software maintenance. When making a change three months down the road, will every developer modify the underlying jQuery/JS, or simply find a workaround within the HTMLHelper and the comfortable confines of native .NET? With logic spread between the HTMLHelper and the original JavaScript, you're going to create the worst of both worlds. Adding an HTMLHelper for every conceivable task certainly has pitfalls: it sets a poor precedent and is begging for headaches down the maintenance road.
Another question to ponder: how can we more cleanly inject model data into our JS routines and components without relying upon injection via HTMLHelpers?
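One possible answer, sketched here rather than prescribed: serialize the model to JSON once in the view and let the scripts read it, instead of splicing server values into individual helper calls. The extension name is hypothetical; JavaScriptSerializer (System.Web.Extensions) is the stock .NET serializer of the era.

using System.Web.Script.Serialization;

public static class ModelScriptSketch
{
    // Emits a single script block exposing the model as a JS variable.
    public static string ToJsonScript(object model, string variableName)
    {
        var serializer = new JavaScriptSerializer();
        return string.Format(
            "<script type=\"text/javascript\">var {0} = {1};</script>",
            variableName,
            serializer.Serialize(model));
    }
}

A view could then emit such a block once, and every script on the page reads the resulting variable directly.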
I'm not sure why you'd really want to generate jQuery code using server-side code within the view, but if you're going to, it would be a better idea to implement this as an extension to the AjaxHelper instead of the HtmlHelper.
Of the examples, I guess Chad has the best syntax so far. But when we are considering mainstream users, a fluent interface is much simpler than the others; if we feed them everything at once, they will surely have difficulty finding a way out. It's like a maze, and we want the users to find the exit.
Regards,
Mehfuz.
How about trying another approach altogether?
The HTMLHelper is widely abused by ASP.NET programmers who can't seem to give up the good old days of client-side code being injected into the page by server-side methods. HTMLHelpers are a great way to form basic HTML tags, but they are easily over-extended.
You might consider using a partial view instead. It may look less compact than an extension method, but at least you're keeping your HTML and JS within views which are cleanly separated from the back-end logic and accessible to front-end coders. If you're sick of tag soup, then look into a different view engine such as Spark.
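For instance, a sketch of that separation (the partial view name here is made up): the widget's markup and script live together in a partial, and the page renders it with the stock Html.RenderPartial call.

<%-- Slider.ascx: markup and script together, editable by front-end coders --%>
<div id="mySlider"></div>
<script type="text/javascript">
    $(function () {
        // Plain jQuery UI call; no server-side wrapper involved.
        $('#mySlider').slider({ min: 1, max: 100 });
    });
</script>

The page itself then contains a single line: <% Html.RenderPartial("Slider"); %>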
I'd be on the same side as Karl: I would avoid mixing server-side code with JavaScript (jQuery) as long as it is convenient enough. We do need server-side code if some data should be passed from the server. To me, the habit of generating client code with server code is what made MS Ajax overcomplicated. Let's not ruin jQuery too.
On Saturday 09 February 2008 18:23:24 Tim Niemueller wrote:
> Paul Black wrote:
> > Why not:
> > class SomeClass
> > {
> > public:
> >     SomeClass();
> >     // more stuff...
> > protected:
> >     struct mylist_t {
> >         mylist_t *next;
> >         void *data;
> >     };
> > };
>
> Then I cannot use
>     SomeClass::mylist_t *list
> but I would have to use
>     struct SomeClass::mylist_t *list
> which is ugly, and this is why I had the typedef in the first place.

That is only true in C. In C++, struct/union/enum/class names are automatically typenames.

> I just wonder if this is indeed the intended behavior, typedef are not
> allowed as members,

I am not familiar; usually you see typedefs at the global or namespace scope, and usually they follow the class or struct declaration.

--
Benjamin Kreuter

Message sent on: Sat Feb 9 18:29:30 EST 2008
sentence " and each map generates(approximately) 10GB of random
binary data, " The "10GB" should be "1GB".
"that make is easy" should be "that makes it easy"
The code shown (Maximum temperature in C++) does not compile as shown on my Ubuntu cluster.
I need to add
#include <stdint.h>
To get rid of an error about uint64_t not being defined in one of the hadoop files. I run hadoop 0.19.2.
hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWriter 1
should be
hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1
(no 'r' at the end of sequentialWrite)
"a complete redesign to scale up further."
It is a subtle language problem: it should say "scale out" (to 200 machines) instead of "scale up".
1st line. "we starting" should be "we started"
Descriptions for bullet items 2 and 3 don't seem to match the code listing in Example 14-2. Bullet item 2 seems to refer to label 3, and bullet item 3 seems to refer to label 2.
the command should be
% hadoop fsck / -files -blocks
It is currently missing the slash
Misplaced comma:
"However, the state of the secondary namenode lags that of the primary, so in the event of total failure of the primary data, loss is almost guaranteed."
should be
"However, the state of the secondary namenode lags that of the primary, so in the event of total failure of the primary, data loss is almost guaranteed."
Table 14-9, 3rd row (223), 1st col (#listeners): 2
Should be 1.
Excerpt: "...wanted to run 2 processes on each processor, then you should set mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.map.tasks.maximum to both be 7..."
The second reference to mapred.tasktracker.map.tasks.maximum is incorrect; it should be mapred.tasktracker.reduce.tasks.maximum.
Heading "Text Output", paragraph 1, line 5, word 3 -> TextOuputFormat
Note from the Author or Editor:"TextOuputFormat" should read "TextOutputFormat" (it is missing a "t")
"How long are you mappers running for?"
Should be:
"How long are your mappers running for?"
Sentence ending "Leeds, UK" is missing a period.
The directory hierarchy is misaligned.
The line consisting of "/previous/VERSION" should have its first "/" character vertically aligned with the first "/" of "${dfs.name.dir}/current/VERSION".
The first "/" character of each of the following lines ("/edits", "/fsimage", "/fstime") should align with the second "/" of "/previous/VERSION".
"41 TB SATA disks" should read "4 x 1TB SATA disks"
At the end of the first sentence in the sidebar: "new users to Hadoop.Almost" there is a missing space after the period.
Paragraph 3, line 5, word 1:
Reference to 'task_200811201130_0054_m_000000' does not exist, it should refer to 'task200904110811_0003_m_000044'.
Note from the Author or Editor:It should refer to 'task_200904110811_0003_m_000044'.
The illustration in Figure 3-2 does not match the description given in the sidebox "Network Topology and Hadoop".
1. For d1/r2, the node is mislabelled as n1 (should be n3)
2. For d2, racks are mislabelled as 'r1' and 'rack' (should be 'r3' and 'r4')
3. For d2/r3, the node is mislabelled as n1 (should be n4)
Note from the Author or Editor:I agree with all the corrections except for 'rack' which is there to indicate that this entity is a rack, so it doesn't need relabelling 'r4' (since r4 is never referred to in the text).
"per a day" should be either "per day" or "a day"
"timem" should be "time"
|
Thank you for fixing it :-) This time it builds alright.

I understand IronPython and the DLR will continue to be developed with .NET Framework 2.0 as target. I'd like to ask, though: now that the DLR expression tree has moved into the System.Linq.Expressions namespace, what would be the preferred solution to resolving the conflict between references to System.Core.dll and Microsoft.Scripting.Core.dll? Apparently these two do conflict now, don't they?

The other day I was implementing a toy language on the DLR for fun, but the fact that I can't use LINQ queries from within my language impl is... well, not too fun.

Greetings,
- RednaxelaFX

-----Original Message-----
> From: Dave Fugate <dfugate at microsoft.com>
> Subject: Re: [IronPython] Missing File in IronPython Change Set 34376?
> To: Discussion of IronPython <users at lists.ironpython.com>
>
> Thanks for spotting this!
> It looks like there was a bug with the script we use to port changes over from our internal TFS repository to CodePlex's TFS repository.
> It should be fixed now.
>
> Dave
I tried to integrate the Crystal Reports runtime designer; after some time searching, I have a small test app: a dialog-based application that allows you to view or create a new report.
The resource dialog has two controls: the back layer is (IDC_ACTIVEXREPORTVIEWER1) and the front one is (IDC_EMBEDDABLECRYSTALREPORTSDESIGNERCTRL1). You can insert the Crystal Reports components into your project from (Project -> Add To Project -> Components and Controls), then select (Crystal ActiveX Viewer 10.0 - Embeddable Crystal Reports Designer Control 10.0).
You must have two member variables:
- IApplicationPtr m_Application; to create a Crystal application
- IReportPtr m_Report; to create a Crystal report file
I use pBtPrivew and pBtDesigner to switch between the viewer and the runtime designer.
Don't forget to import the runtime DLL:
#import "<Drive>\\Program Files\\Common Files\\Crystal Decisions\\2.5\\bin\\craxdrt.dll" no_namespace
_AFXWIN_INLINE void CWnd::UpdateWindow()
{ ASSERT(::IsWindow(m_hWnd)); ::UpdateWindow(m_hWnd); }
lallaba wrote: is there any tutorial or sample code available somewhere?
Windows Server 2003 Glossary - F
Updated: March 7, 2008
Applies To: Windows Server 2000, Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2
For more Windows Server terms, see the Windows Server 2008 Glossary.
Glossary - F
failback
The process of moving resources, either individually or in a group, back to their preferred node after the node has failed and come back online.
See also: failback policy node resource
failback policy
Parameters that an administrator can set using Cluster Administrator that affect failback operations.
See also: Cluster Administrator failback
failed
A state that applies to a resource or a node in a cluster. A resource or a node is placed in the failed state after an unsuccessful attempt has been made to bring it online.
See also: cluster node resource
failover
In a server cluster, the process of taking resource groups offline on one node and bringing them online on another node, either in response to a failure or for maintenance.
See also: failover policy node offline possible owner server cluster
failover policy
Parameters that an administrator can set using Cluster Administrator that affect failover operations.
See also: Cluster Administrator failover
FAT
See other term: file allocation table (FAT)
FAT32
A system used to store files on a computer drive. FAT32 is based on the file allocation table (FAT) file system, but it uses 32-bit values for storing files instead of the 16-bit values used by the original FAT file system. FAT32 offers more efficient drive space allocation by creating smaller clusters than FAT and supports volumes of up to 2 terabytes (TB) in size.
fault tolerance
The ability of computer hardware or software to ensure data integrity when hardware failures occur. Fault-tolerant features appear in many server operating systems and include mirrored volumes, RAID-5 volumes, and server clusters.
See also: cluster mirrored volume RAID-5 volume
Fax Service
A system service that provides fax services to local and remote network clients. Fax services include receiving faxes and faxing documents, fax wizard messages, and e-mail messages.
See also: service
FCB
See other term: file control block (FCB)
Federal Information Processing Standard (FIPS)
A standard entitled Security Requirements for Cryptographic Modules. FIPS 140-1 (1994) and FIPS 140-2 (2001) describe government requirements for hardware and software cryptomodules used in the U.S. government.
See also: cryptography
federation
A pair of realms or domains that have established a federation trust.
Federation Service
A security token service that is built into Windows Server 2003 R2. The Federation Service provides tokens in response to requests for security tokens.
Federation Service Proxy
A proxy to the Federation Service in the perimeter network (also known as screened subnet). The Federation Service Proxy uses WS-Federation Passive Requestor Profile (WS-F PRP) protocols to collect user credential information from browser clients and Web applications and send the information to the Federation Service on their behalf.
FEK
See other term: file encryption key (FEK)
FEP
See other term: front-end processor (FEP)
Fibre Channel
A networking standard developed to connect devices that require the transmission of large volumes of data at a very high speed. A leading implementation of Fibre Channel technology has been in storage area networks (SANs). Although the term Fibre Channel implies the use of fiber-optic technology, copper coaxial cable is also supported.
file allocation table (FAT)
A file system used by MS-DOS and other Windows-based operating systems to organize and manage files. The file allocation table is a data structure that Windows creates when a volume is formatted with the FAT or FAT32 file system.
See also: file system NTFS file system
file control block (FCB)
A small block of memory temporarily assigned by a computer's operating system to hold information about a file that has been opened for use. An FCB typically contains such information as the file's identification, its location on disk, and a pointer that marks the user's current (or last) position in the file.
file creator
A four-character sequence that identifies which program was used to create a file. With Services for Macintosh, you can associate file name extensions with file creators and file types to specify which program starts automatically when you open a file with a particular extension.
See also: extension-type association
file encryption key (FEK)
A pseudo-random cryptographic key that Encrypting File System (EFS) uses to encrypt a file. The FEK is encrypted by the public key of the user performing the encryption, and it is typically different for each encrypted file.
See also: Encrypting File System (EFS) encryption key public key
file fork
One of two subfiles of a Macintosh file. When Macintosh files are stored on a computer running Services for Macintosh, each fork is stored as a separate file. Each fork can be independently opened by Macintosh users.
file group
A File Server Resource Manager option that is used to define a namespace for a file screen, file screen exception, or storage report. It consists of a set of file name patterns, which in turn determine whether files are included or excluded from a group.
File Replication service (FRS)
A service that provides multimaster file replication for designated directory trees between designated servers running Windows Server 2003. The designated directory trees must be on disk partitions formatted with the version of NTFS used with the Windows Server 2003 family. FRS is used by Distributed File System (DFS) to automatically synchronize content between assigned replicas and by Active Directory to automatically synchronize content of the system volume information across domain controllers.
See also: Active Directory NTFS file system replica replication service
file screen
A File Server Resource Manager option that is used to block certain files from being saved on a volume or in a folder tree. A file screen is applied at the folder level and affects all folders and subfolders in the designated path.
File Server for Macintosh
A service that allows users of Macintosh computers to store, access, and share files on servers running Services for Macintosh. Also called MacFile.
See also: service
File Server Resource Manager
A suite of tools that allows administrators to understand, control, and manage the quantity and type of data stored on their servers.
file share
In a server cluster, any folder that has an associated File Share resource and is managed by the Cluster service. The file share can fail over from one node to another, but to the end user, the folder looks like a regular folder that remains in one location. Multiple users can access a file share.
See also: cluster resource service
File Share resource
A file share accessible by a network path that is supported as a cluster resource by a Resource DLL.
See also: Resource DLL
file system
In an operating system, the overall structure in which files are named, stored, and organized. NTFS, FAT, and FAT32 are types of file systems.
See also: FAT FAT32 NTFS file system
file system cache
An area of physical memory that holds frequently used pages. It allows applications and services to locate pages rapidly and reduces disk activity.
See also: cache
File Transfer Protocol (FTP)
A member of the TCP/IP suite of protocols, used to copy files between two computers on the Internet.
file type
In the Macintosh environment, a four-character sequence that identifies the type of a Macintosh file. The Macintosh Finder uses the file type and file creator to determine the appropriate desktop icon for that file.
filtering mode
For Network Load Balancing, the method by which network traffic inbound to a cluster is handled by the hosts within the cluster. Traffic can either be handled by a single server, load balanced among the hosts within the cluster, or disabled completely.
See also: cluster host load balancing Network Load Balancing
FIPS
See other term: Federal Information Processing Standard (FIPS)
firewall
A security solution that segregates one portion of a network from another portion, allowing only authorized network traffic to pass through according to traffic filtering rules.
Firewire
See other term: IEEE 1394
firmware
Software routines and low-level input/output instructions stored in read-only memory (ROM). Unlike random-access memory (RAM), read-only memory stays intact even in the absence of electrical power.
See also: random access memory (RAM) read-only memory (ROM)
flexible single-master operations (FSMO)
See other term: operations master
folder
A container for programs and files in graphical user interfaces, symbolized on the screen by a graphical image (icon) of a file folder. A folder is a means of organizing programs and documents on a disk and can hold both files and additional folders. For DFS Namespaces, any folder that appears after \\ServerOrDomainName\RootName. A folder can have optional folder targets.
folder target
A Universal Naming Convention (UNC) path of a shared folder or another namespace that is associated with a folder in a namespace.
font
A graphic design applied to a collection of numbers, symbols, and characters. A font describes a certain typeface, along with other qualities such as size, spacing, and pitch.
See also: OpenType font PostScript fonts screen font Type 1 fonts
font cartridge
A plug-in unit available for some printers that contains fonts in several styles and sizes. As with downloadable fonts, printers using font cartridges can produce characters in sizes and styles other than those created by the fonts built into it.
See also: downloadable fonts font
foreground program
The program that runs in the active window (the uppermost window with the highlighted title bar). The foreground program responds to commands issued by the user.
See also: background program title bar
foreign computer
A computer that uses another message queuing system but, through a connector application, can exchange messages with computers that run Message Queuing.
See also: connector application Message Queuing
foreign security principal
An object in a domain that represents a security principal that exists in a trusted domain located in a different forest. Foreign security principals are necessary for users in a domain to access resources that exist in a different forest.
See also: domain forest object resource security principal transitive trust two-way trust
forest functionality
The functional level of an Active Directory forest that has one or more domain controllers running Windows Server 2003. The functional level of a forest can be raised to enable new Active Directory features that will apply to every domain in the forest. There are three forest functional levels: Windows 2000, Windows Server 2003 interim, and Windows Server 2003. The default forest functional level is Windows 2000. When the forest functional level is raised to Windows Server 2003 interim or Windows Server 2003, advanced forest-wide Active Directory features are available.
See also: Active Directory domain domain controller forest
forest root domain
The first domain created in a new forest. The forest-wide administrative groups, Enterprise Admins and Schema Admins, are located in this domain. As a best practice, new domains are created as children of the forest root domain.
See also: child domain domain domain hierarchy forest
forest trust
A trust between two Windows Server 2003 forests that forms trust relationships between every domain in both forests. A forest trust can be created only between the forest root domains in each forest. Forest trusts are transitive, and they can be one-way or two-way. An administrator must manually establish a forest trust, unlike an automatically established trust, such as a parent-child trust.
See also: domain forest one-way trust parent-child trust root domain transitive trust trust relationship two-way trust
form
The specification of physical characteristics such as paper size (that is, letter or legal) and printer area margins of paper or other print media. For example, by default, the Letter form has a paper size of 8.5 inches by 11 inches and does not reserve space for margins.
FORTEZZA
A family of security products including PCMCIA-based cards, compatible serial port devices, combination cards (such as FORTEZZA/Modem and FORTEZZA/Ethernet), server boards, and others. FORTEZZA is a registered trademark held by the U.S. National Security Agency.
See also: serial port
forward lookup
A DNS query for a DNS name.
See also: Domain Name System (DNS)
forwarder
A DNS server designated by other internal DNS servers to be used to forward queries for resolving external or offsite DNS domain names.
See also: DNS server domain name Domain Name System (DNS)
FQDN
See other term: fully qualified domain name (FQDN)
fragmentation
The scattering of parts of the same disk file over different areas of the disk. Fragmentation occurs as files are deleted and new files are added, and it slows disk access and degrades overall disk performance.
frame
In synchronous communication, a package of information transmitted as a single unit from one device to another.
See also: capture
frame type
The way in which a network type, such as Ethernet, formats data to be sent over a network. When multiple frame types are allowed for a particular network type, the packets are structured differently and are, therefore, incompatible. All computers on a network must use the same frame type to communicate. Also called frame format.
free media pool
A logical collection of unused data-storage media that can be used by applications or other media pools. When media are no longer needed by an application, they are returned to a free media pool so that they can be used again.
See also: media pool Removable Storage
free space
Available space that you use to create logical drives within an extended partition.
See also: extended partition logical drive unallocated space
front-end processor (FEP)
In communications, a computer that is located between communications lines and a main (host) computer and used to relieve the host of tasks related to communications; sometimes considered synonymous with communications controller. A front-end processor is dedicated entirely to handling transmitted information, including error detection and control; receipt, transmission, and possibly encoding of messages; and management of the lines running to and from other devices.
See also: host
FRS
See other term: File Replication service (FRS)
FSMO
See other term: operations master
FTP
See other term: File Transfer Protocol (FTP)
full computer name
A fully qualified domain name (FQDN). The full computer name is a concatenation of the computer name (for example, client1) and the primary DNS suffix of the computer (for example, reskit.com.). The same computer could be identified by more than one FQDN. However, it has only one full computer name.
See also: DNS suffix fully qualified domain name (FQDN)
Full Control
An access control entry (ACE) that assigns all applicable rights to a file system or directory service object.
See also: access control entry (ACE) object permission
full name
A user's complete name, usually consisting of the last name, first name, and middle initial. The full name is information that Local Users and Groups or Active Directory Users and Computers can maintain as part of the information identifying and defining a user account.
See also: Active Directory Users and Computers user account
full zone transfer (AXFR)
The standard query type supported by all DNS servers to update and synchronize zone data when the zone has been changed. When a DNS query is made using AXFR as the specified query type, the entire zone is transferred as the response.
See also: DNS server zone
full-duplex
A system capable of simultaneously transmitting information in both directions over a communications channel.
See also: duplex half-duplex
fully qualified domain name (FQDN)
A DNS name that has been stated to indicate its absolute location in the domain namespace tree, in contrast to relative names, which are stated relative to another domain name (for example, client1.reskit.com.).
See also: domain name Domain Name System (DNS) domain namespace relative name
HTML and Java: Two Sides of the Same Wicket Coin
By Geertjan on Jul 13, 2005
Each web page in Wicket is like a coin. It has two sides -- a Java class and an HTML file.
- Setting Up the Coin: HelloWorldApplication.java. The application object creates the application that contains the web pages. The application object is a Java class. The absolute minimum content of the application object is the definition of the home page. The home page is the first web page displayed by the application object.
Here the compiled HelloWorld.class web page is set as the home page:
getPages().setHomePage(HelloWorld.class);
The web page consists of two sides -- the Java component (the back or tails side) and its HTML rendering (the front or heads side). Importantly, since they are two sides of the same coin, the two sides have the same name and are stored in the same folder structure. Normally, this means that they are stored in the same package. So, in this case, the two sides are called HelloWorld.java and HelloWorld.html and are stored together in the same package.
- Tails: HelloWorld.java. Here a Label component is created:
add(new Label("message", "Hello World!"));
There are two parameters: the component identifier ("message") and the content that the Label component should render ("Hello World!").
- Heads: HelloWorld.html. A Java component is used in an HTML file:
<span wicket:id="message">Message goes here</span>
The <span> element has one attribute, in the wicket namespace: the identifier ("id") which is defined as "message". Note that the Wicket identifier in the HTML file must match the component identifier in the Java component. The "Message goes here" text is a placeholder. You could write anything you like there -- it will be replaced by the Java component.
- Flipping the Coin. A web.xml file specifies the Wicket servlet wicket.protocol.http.WicketServlet that handles requests for the application object. A server, such as Tomcat or Jetty, is needed in order to deploy the application.
To see all of the above in practice, within the context of the two Java classes and HTML file that make up the "Hello World" application, use yesterday's blog entry to set everything up in NetBeans IDE. Alternatively, use another IDE. All Wicket applications are based on the above principles -- except that most Wicket applications have more than one coin. If you have enough coins, you can create a really rich application...
> > >Cocoon 1.x is feature-frozen: means that even if you donate the code, I
> > >won't add it. This is to _suggest_ people to work on Cocoon2.
> >
> > OK. Cocoon 2 then.
>
> yes. Niclas expressed the need for internal subrequesting and I have
> different feelings about them.
>
> We already agreed on using _sort_of_ internal subrequesting to do XSP
> compilation: we use one pipeline to generate the java bytecode out of
> XML pages (thru XSP components), then instantiate that bytecode as a
> generator for the subsequent calls.
>
> So, it is -not- what you wanted to do (if I understood correctly), but
> rather willing to "include" the output of another pipeline into your
> request.
>
> Now... you could place pipeline information inside your page, and this
> is a -1 from me. But if you want to do something like
>
> <page>
> <cocoon:include src="./banner" force-
> ....
> </page>
>
> or something like this, I see no problems.
>
> We already thought about implementing the XInclude spec... I think this
> is better than having cocoon-specific namespaces but it might not have
> all the semantics we'd like to have like encoding, fake user-agent,
> parameters and such...
>
> what do others think about this?
>
I think we'll have to try to use XInclude for this...if it doesn't have all
the semantics we want, we'll have to come up with a different solution!
>.
>
Tyrex looks promising for a few problems I ran into lately....thanks!
Gerard
On 22 October 2002, David Goodger said:
> The sources on the web site still say "Docutils 0.2.4". The current
> version is 0.2.7. Not pushed out yet?
Correct -- I'm just playing on my development web server.
> You can set the <tt> style back to its initial value::
>
> tt { background-color: transparent }
Yup, that works.
> In fact, I think I'll remove the "a.target" style from the project's
> default.css. It was useful for diagnostics, but implies meaning where
> there really is none. And it's distracting. ... Gone now.
OK, I'll cvs up and stop worrying about the "a" styles then.
Thanks!
Greg
--
Greg Ward - software developer gward@...
MEMS Exchange
[David]
>> I don't think you should transform the tree at this point, since
>> you're traversing the tree. It's like modifying a list while looping
>> over it: dangerous.
[Aahz]
> Yup. This would be a separate transform step, just like
> ``docutils.transforms.references.Footnotes``. Do you still think
> calling a nested walkabout is better?
If a writer-specific transform seems more natural, then that would be OK
too. Except that the transform should not violate the doctree structure as
expressed in spec/docutils.dtd. In other words: no footnotes inside
paragraphs. The writer has to receive a standard doctree, and the other
transforms (that are not writer-specific) depend on the doctree being
standard. Having said that, if the writer were to apply its own transforms
just before the final "translation" (which is merely an extreme transform),
then no problem, anything goes. The doctree belongs to the writer at that
point.
In fact, the entire Writer framework could be changed. The Visitor pattern
and Translator classes were just the way that worked for me with the HTML
writer. I make no claims that it's an ideal solution.
I'm almost finished ripping the guts out of the old transform system and
replacing it with something better. The code is almost stable, and I'll
probably be checking it in by the weekend.
>.
Understood; that's why it's in the sandbox. I didn't realize I *was*
commenting on the coding *style*, actually. Just commenting on and asking
questions about the logic, in a sincere attempt to understand the code and
assist you in reaching your goal. No need to take any of it personally. I
hope and trust you're interested in *improving* the code?
> Side note: eventually OOwriter will split into two classes, because I've
> got stuff in there (like the ``include-output`` directive) specific to
> my book that belongs in a subclass rather than the main OpenOffice.org
> writer.
Not sure exactly what you mean here. If you mean that the "include-output"
directive should be independent of the writer, I agree completely. If it's
general-purpose, it should become a part of the parser.
--
David Goodger <goodger@...> Open-source projects:
- Python Docutils:
(includes reStructuredText:)
- The Go Tools Project:
I'll get back to the rest later, but I wanted to make a quick comment.
On Tue, Oct 22, 2002, David Goodger wrote:
> .
Yup. This would be a separate transform step, just like
``docutils.transforms.references.Footnotes``. Do you still think
calling a nested walkabout is better?
I do appreciate the additional info about how reST works.
Side note: eventually OOwriter will split into two classes, because I've
got stuff in there (like the ``include-output`` directive) specific to
my book that belongs in a subclass rather than the main OpenOffice.org
writer.
--
Aahz (aahz@...) <*>
Project Vote Smart:
grubert@... wrote:
> you are talking from the internal represantation i am talking of the
> output structure.
We started out talking about internationalization, doing lookups in
language modules. That smells like internal representation to me.
The point is, when you have a document which begins with this::
:Author: John Doe
:Web Site:
The internal document (fragment) tree looks like this::
<docinfo>
<author>
John Doe
<field>
<field_name>
Web Site
<field_body>
<paragraph>
<reference refuri="">
The structures for the "Author" field (specific) and the "Web Site"
field (generic) are completely different, and therefore require
different processing. You can't ignore that.
> "Web site" and "Author" both end up in the documents docinfo.
> this means: if i donot want to put formatting information
> into two places one needs a function for it.
Fine, separate the formatting into a separate method. Just don't do
language lookups for fields that don't need them. Do the language
lookup, and send the *result* to the formatting method.
>>> the pre (literal) environment in latex is verbatim. how does one
>>> get the input with parsing: astext() removes the reST markup ?
>>
>> I don't understand the question. Examples, please.
>
> ::
> This is *some* thing
>
> this.astext() gives the text without stars.
Untrue. Perhaps the lack of a blank line after "::" is causing
trouble? Literal blocks do not touch the characters inside::
$ quicktest.py << EOF
> ::
>
> This is *some* thing
> EOF
<document source="<stdin>">
<literal_block xml:space="preserve">
This is *some* thing
The astext() method doesn't touch the characters either. However,
astext() should only be used sparingly and carefully. If there is any
structure in the node, ``node.astext()`` will ignore it.
Why are you using astext() anyway? Just let the tree traversal do the
work. From latex2e.py's LaTeXTranslator class::
def visit_literal_block(self, node):
self.use_verbatim_for_literal = 1
if (self.use_verbatim_for_literal):
self.body.append('\\begin{verbatim}\n')
self.body.append(node.astext())
self.body.append('\n\\end{verbatim}\n')
raise nodes.SkipNode
else:
self.body.append('{\\obeylines\\obeyspaces\\ttfamily\n')
I don't see why you need to short-circuit the traversal. The "if"
statement is not really conditional: it will always evaluate to true.
That should be cleaned up. Shouldn't
``self.body.append(node.astext())`` also call ``self.encode()``? That
may be the source of the problem.
This is exactly the kind of bug that arises when the tree traversal
isn't allowed to complete properly. Look at the html4css1.py writer;
``raise nodes.SkipNode`` is only used 5 times, and each time there is
a good reason. I've added some comments where it may have been
unclear. Could you add comments wherever you ``raise nodes.SkipNode``
in latex2e.py, justifying the exceptions?
--
David Goodger <goodger@...> Open-source projects:
- Python Docutils:
(includes reStructuredText:)
- The Go Tools Project:
[David]
>> The docs look good. Please note that if you regenerate the docs with
>> the latest Docutils (which I recommend you do, to take advantage of
>> the improvements to the HTML produced)
[Greg]
> OK, done.
The sources on the web site still say "Docutils 0.2.4". The current
version is 0.2.7. Not pushed out yet?
>> you should also replace the stylesheet. ... I recommend extracting
>> your modifications into a separate .css file and using the
>> "@import" statement to cascade the stylesheets. See
>> for details.
>
> OK, tried that. Problem: the main modification I made was to
> completely *remove* your styles for the 'a' and 'tt' tags. (I think
> link colouring should be up to the browser, and I don't like a
> background on inline literals.)
>
> So how do I override your stylesheet with removal information?
You can set the <tt> style back to its initial value::
tt { background-color: transparent }
As for the <a> tags, these are the styles specified::
a.target {
color: blue }
a.toc-backref {
text-decoration: none ;
color: black }
The first is easily undone::
a.target {
color: inherit }
In fact, I think I'll remove the "a.target" style from the project's
default.css. It was useful for diagnostics, but implies meaning where
there really is none. And it's distracting. ... Gone now.
I don't know of any way to undo the second set of styles
("a.toc-backref"). But they're only applied to back-links from
section headers to a table of contents. If you have no table of
contents, or specify "--no-toc-backlinks" (or "toc_backlinks: none" in
the config file), that style will have no effect. These styles remove
the typical hyperlink formatting (color + underline), to make the
back-linked section headers look like regular section headers. An
approximation to undoing the style would be::
a.toc-backref {
text-decoration: underline ;
color: blue }
However, the browser itself or user settings may specify a different
initial color and/or decoration, and the color should change once the
hyperlink is visited. These can also be specified (using the ":link"
and ":visited" pseudo-classes), but that just makes the whole thing
even more complicated.
Is there any way to *disable* styles that don't inherit? Any way to
say "use or restore the *initial* value for this style, ignoring any
later explicit styles"? I can't find any.
--
David Goodger <goodger@...> Open-source projects:
- Python Docutils:
(includes reStructuredText:)
- The Go Tools Project:
Looking at your examples and the OpenOffice.org XML DTD and
specification docs [*]_, I see that OpenOffice XML requires footnotes
themselves to be embedded inside the paragraph at the point of
reference, as does DocBook and, I believe, TeX. This makes sense for
processing (easier), but not for reading since the whole point of a
footnote is to remove the extra text from the main flow.
.. [*] Available from. DTD files are at (text.mod
has the most significance here) and the specification is.
[Aahz]
>>> What I really ought to do is call for a walkabout on the footnote
>>> node, but I can't quite figure out how to do that.
[David]
>> I assume you're already *doing* a walkabout. Just let it continue.
>> You're short-circuiting the process artificially. The internal
>> document tree is well-formed: an end-tag (depart_tag) for every
>> start-tag (visit_tag), and all elements arranged as the DTD
>> describes. (If they're not, it's a bug.) Trust in the docree.
. Having seen the context, I now think your first
idea was correct, to force a traversal of the footnote when you reach
the first footnote reference. However, note that there may be
multiple references to the same footnote, so only the *first*
reference should have its footnote traversed; others should use the
<text:footnote-ref> element I believe.
In general, doing a traversal on a subtree is simple. Say we want to
do a traversal starting at the "footnote" node::
# Use our own class to get a clone of ourselves:
visitor = self.__class__(self.document)
# Traverse the subtree rooted at "footnote":
footnote.walkabout(visitor)
# Collect the results (assumes uniform treatment of output):
self.body.extend(visitor.body)
But there's a tricky issue in the OpenOfficeTranslator class::
def visit_footnote(self, node):
raise nodes.SkipNode
This is fine for the outer traversal, but it clobbers the inner
traversal. Perhaps change this to::
def visit_footnote(self, node):
if self.handle_footnotes:
... handle footnotes
else:
raise nodes.SkipNode
How to set ``self.handle_footnotes``?
* It could be a parameter to the __init__ method, but we'd still have
to watch for nested footnotes (a footnote reference within a
footnote body).
* Start out with ``self.handle_footnotes`` true, and set it false in
``visit_document``? If nested traversals need to be done for any
other elements, this could backfire.
* Check for an empty ``self.body``? That might be the simplest and
best way::
def visit_footnote(self, node):
if self.body:
raise nodes.SkipNode
else:
... handle footnotes
Update: the DTD says "text:footnote and text:endnote elements may not
contain other text:footnote or text:endnote elements". Docutils
documents *can* have nested footnotes though, which complicates
matters. Either a workaround has to be devised, or we place a
*documented* restriction on Docutils wrt the OpenOffice writer. The
latter would be acceptable for now.
Let's take a look at the ``visit_footnote_reference`` method::
def visit_footnote_reference(self, node):
name = node['refid']
Why are you calling this the "name"? I find it misleading, since
there are "name" attributes on many elements. I'd use "refid" or
"footnote_id" instead.
Continuing::
id = node['id']
number = node['auto']
for footnote in self.document.autofootnotes:
if name == footnote['name']:
break
The ``if name == footnote['name']:`` test relies on an accident and
may not always work. IDs are derived from names, and simple names are
equal to their IDs, but more complicated names are not. For example,
the name "a name" turns into the ID "a-name". ID's can't have spaces
or anything apart from alphanumerics and "-" (see
docutils.nodes.make_id; 32 lines of docstring for a 3-line function!).
In any case, there's a much easier way to get the footnote node::
footnote = self.document.ids[name]
(Although it shouldn't be "name" but "refid".)
Since a footnote should only be rendered once, you should check if
it's already happened here. Something like::
if hasattr(footnote, 'rendered'):
self.body.append('<text:footnote-ref text:ref-name="%s"'
' text:'
% ???)
...
else: # proceed as before
...
footnote.rendered = 1
I don't know what should replace the "???" above. The OpenOffice XML
spec says that "Footnotes, endnotes, and sequences are assigned names
by the application used to create the OpenOffice.org XML file format
when the document is exported." However, I can't find a "name"
attribute on <text:footnote> elements or subelements in the DTD. Does
it mean the "id" attribute? You should verify with some actual
OpenOffice output.
The text of the footnote reference can be traversed normally, then the
end-tag inserted by ``depart_footnote_reference``. But since there's
a conditional here, there are two ways to proceed:
1. Store the end-tag on an internal stack (like the "context" stack of
the HTML writer's HTMLTranslator class), and pop it off in
``depart_footnote_reference``. This approach is recommended.
2. Process the entire <text:footnote-ref> tag in the ``visit_...``
method, using ``self.astext()`` to get the label text. Insert the
end-tag, and finish with a ``raise nodes.SkipNode``. I don't
recommend this, because it complicates processing (makes the flow
hard to understand with a special case; uniform is better) and it
will break if the contents of a footnote_reference element ever
gets more complicated. This technique *cannot* be used on any
element with a content model more complicated than "(#PCDATA)", so
it's best not to use it at all.
Continuing with ``visit_footnote_reference``::
self.body.append('<text:footnote text:id="%s">\n' % id)
self.body.append('<text:footnote-citation text:string-value='
'"%s"/>\n' % number)
I don't see a "string-value" attribute in the DTD. I do see a "label"
attribute though. Either the DTD is wrong or out of date, or you have
the wrong attribute name. Also, I don't understand how you're using
the Docutils <footnote-reference> "auto" attribute here (in variable
"number").
Continuing::
self.body.append('<text:footnote-body>\n')
self.body.append(self.start_para % '.body')
for child in footnote.children:
if isinstance(child, nodes.paragraph):
self.body.append(child.astext())
self.body.append(self.end_para)
I'd replace most of the above with a nested tree traversal. Finally::
self.body.append('</text:footnote-body>\n')
self.body.append('</text:footnote>')
raise nodes.SkipNode
--
David Goodger <goodger@...> Open-source projects:
- Python Docutils:
(includes reStructuredText:)
- The Go Tools Project:
This article shows how to display OpenGL content hosted in a Windows Presentation Foundation (WPF) based application. While working on a new project, I came across this issue and I want to share what I found. I am not an expert on WPF or OpenGL, though.
The controls in this example are implemented using managed C++, because it makes using native OpenGL and Win32 libraries easier. This is a nice example of the usefulness of managed C++, I think. Note that you can create a wrapper for an existing native C++ control the same way. The Tao-Framework-based version makes life even easier, not requiring any managed C++.
The sample application will show a WPF program in C#, which displays an OpenGL window (or control). This is similar to the situation encountered in CAD / CAM applications. Also, the window should close when pressing the ESC key. In WinForms and Win32, this problem is fairly easy to solve. There are excellent articles and samples out there (Thanks to Jeff Molofee aka NeHe). However, due to the fact that the internal structure of WPF is very different from that of the Win32 API or WinForms, we have to change a few things.
Fortunately, there is a very nifty framework out there called "Tao" which can be found at The Tao Framework. The Tao Framework renders this article quite superfluous. What remains is a very simple sample application which can be downloaded at the top of the article along with parts of the source of Tao. Note that Tao is distributed under a different license (MIT License). See the file "Copying" for details.
Now, if you really want to know how to do it manually, here we go: In this article, I assume that you have a basic idea of how to create an OpenGL-window using Win32 API. You haven't heard of PIXELFORMATDESCRIPTOR before? In this case, you might want to read NeHe's first tutorial. (See the Resources Section).
Also, some very basic WPF know-how will be useful (e.g. how to reference custom controls in another assembly from XAML). I suggest you read Sacha Barber's excellent introduction to WPF here on The Code Project. (Huge thanks to Sacha for his articles!)
Most notably, we will use WindowsFormsHost and HwndHost in this article.
In order to create an OpenGL-window, we need to have a dedicated HWND. In Win32, we can simply create one of our own. WindowsForms controls, on the other hand, each have their own HWND anyway, so we can simply use that. In WPF, however, there is only one HWND for the application (with some exceptions: menus, for example, have their own windows). As we do not want to interfere with the rendering of WPF's controls, acquiring the application's HWND is not a good idea (if possible at all). So how do we get a window for OpenGL to render to?
Microsoft provides us with two simple classes made for WPF / Win32 interoperation. As the name suggests, these can be found in the namespace namespace System.Windows.Interop. These are the earlier mentioned WindowsFormsHost and HwndHost classes. We will host a WindowsForms UserControl in the WindowsFormsHost, and a custom Win32 API window using the HwndHost. Note that WindowsFormsHost is actually derived from HwndHost. Let's look at the simpler case of using a WindowsForms UserControl in a WindowsFormsHost first:
The WindowsFormsHost is a control we can simply embed in the WPF Applications' XAML file like this...
<int:WindowsFormsHost
<oglc:OpenGLUserControl
</int:WindowsFormsHost>
(Note that the namespaces used in here must be declared first. See the sample code or refer to Sacha's article for more information on that.)
... where the OpenGLUserControl itself is defined as (Managed C++):
public ref class OpenGLUserControl : public UserControl
{
// ...
};
or, in C#, as...
public class OpenGLUserControl : UserControl
{
// ...
};
... respectively. This is not OpenGL-specific yet and can be used to host any Windows Forms control conveniently!
For our OpenGL-enabled Forms control, we'll need the following declaration and member variables:
public ref class OpenGLUserControl : public UserControl
{
private:
HDC m_hDC;
HWND m_hWnd;
HGLRC m_hRC;
System::ComponentModel::Container^ components;
//...
}
If you haven't used managed C++ before, just ignore the '^'-symbol.
For initialization, we register a delegate in the constructor:
this->Load += gcnew System::EventHandler(this,
&OpenGLUserControl::InitializeOpenGL);
In C#, this would look a little simpler:
this.Load += new System.EventHandler(InitializeOpenGL);
The initialization handler is as follows:
virtual void
InitializeOpenGL( Object^ sender, EventArgs^ e)
{
// Get the HWND from the base object
m_hWnd = (HWND) this->Handle.ToPointer();
// ... ChoosePixelFormat, SetPixelFormat,
//wglCreateContext, etc.
}
We need to resize the OpenGL-viewport if the window size changes, so we need to register another delegate:
this->SizeChanged += gcnew EventHandler(this,
&OpenGLUserControl::ResizeOpenGL);
(I won't write down the C# version every time in order not to bloat the article).
This method does little more than setting the OpenGL viewport and updating the projection matrix. As a matter of fact, I have chosen to use an orthogonal projection for reasons that I will explain shortly.
For perspective projections, the projection matrix must be recalculated when the window size changes, for example, using gluPerspective() or glFrustum(), that's why I left the code in this method.
void ResizeOpenGL(Object^ sender, EventArgs^ e)
{
// ...
glViewport( 0, 0, Width, Height );
// ...
glOrtho(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);
// or gluPerspective(), glFrustum(), etc.
// for perspective projections, we need the
// aspect ratio of the window
}
Also, we have to override the OnPaintBackground() method to avoid flicker:
virtual void OnPaintBackground( PaintEventArgs^ e ) override
{
// not doing anything here avoids flicker
}
The actual OpenGL drawing can then be performed in the OnPaint() method:
virtual void OnPaint( System::Windows::Forms::PaintEventArgs^ e ) override
{
// Do very fancy rendering
}
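What "very fancy rendering" means is up to you. As a minimal, illustrative sketch (not the sample code itself) for the double-buffered format chosen above:

virtual void OnPaint( System::Windows::Forms::PaintEventArgs^ e ) override
{
    wglMakeCurrent(m_hDC, m_hRC);   // bind our context before any GL calls

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_TRIANGLES);          // the legendary first triangle
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, -2.0f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, -2.0f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  0.5f, -2.0f);
    glEnd();

    SwapBuffers(m_hDC);             // we requested a double-buffered pixel format
}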
That's it, basically! We now have a Windows Forms control which will display an OpenGL window. The sample code also renders one of those impressive triangles!
Now we can go one inheritance level higher and mess around with HwndHost, so we can use any Win32 control (or window). First, we can no longer insert the control in XAML. Instead, we create a placeholder in XAML - in this case, just a Border control:
<Window ...>
    <Grid>
        <Border Name="hwndPlaceholder" />
    </Grid>
</Window>
... and programmatically attach a child to it upon load:
private void Window_Loaded(object sender, RoutedEventArgs e)
{
// Create our OpenGL Hwnd 'control'...
HwndHost host = new WPFOpenGLLib.OpenGLHwnd();
// ... and attach it to the placeholder control:
hwndPlaceholder.Child = host;
}
Et voilà!
Implementing the control itself is also somewhat different from the Windows Forms case: there is no UserControl base class here, so the window creation, the Paint/PaintBackground events, WPF's OnRender(), and the Win32 WindowProc all have to be handled differently.
In principle, you could take a complete Win32 application and put it into the HwndHost. The procedure for creating a window is the same as under Win32, but it has to be performed in the overridden BuildWindowCore() method.
virtual HandleRef BuildWindowCore(HandleRef hwndParent) override
Yet, meaningful interaction of WPF and Win32 is fraught with its own perils. More about that later.
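As a hedged sketch of what the creation inside BuildWindowCore() might look like (the styles and initial size here are illustrative assumptions; the size handling is revisited below in the DPI section):

virtual HandleRef BuildWindowCore(HandleRef hwndParent) override
{
    RegisterWindowClass();

    // Create a plain Win32 child window inside the WPF placeholder.
    m_hWnd = CreateWindowEx(0, m_sWindowName, L"",
                            WS_CHILD | WS_VISIBLE,   // must be a child window
                            0, 0, 100, 100,          // WPF will resize us later
                            (HWND) hwndParent.Handle.ToPointer(),
                            NULL, m_hInstance, NULL);

    return HandleRef(this, System::IntPtr(m_hWnd));
}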
A little note on the side: to allow the control to be instantiated more than once, we check whether the WNDCLASS has already been registered:
bool RegisterWindowClass()
{
//
// Create custom WNDCLASS
//
WNDCLASS wndClass;
if(GetClassInfo(m_hInstance,
m_sWindowName, &wndClass))
{
// Class is already registered!
return true;
}
// (register class) ...
}
This works exactly as it does in Win32. One thing, however, seems a little strange: the HwndHost class supplies us with a managed method called WndProc(). MSDN suggests overriding this, but I didn't manage to initialize the window that way.
When registering the window class, one can specify the WNDPROC to be used. Leaving it empty resulted in strange access violations during initialization, while the following simple implementation proved to work fine, thus rendering the overridable WndProc() method irrelevant:
LRESULT WINAPI
MyMsgProc(HWND _hWnd, UINT _msg,
WPARAM _wParam, LPARAM _lParam)
{
return DefWindowProc( _hWnd, _msg, _wParam, _lParam );
}
bool RegisterWindowClass()
{
WNDCLASS wndClass;
wndClass.lpfnWndProc = (WNDPROC)MyMsgProc;
// ...
}
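For completeness, a hedged sketch of the elided registration itself; the member values below (including CS_OWNDC, which comes up again in the closing notes) are assumptions, not the sample code's exact choices:

bool RegisterWindowClass()
{
    WNDCLASS wndClass;
    ZeroMemory(&wndClass, sizeof(wndClass));   // avoid garbage in unused members

    if (GetClassInfo(m_hInstance, m_sWindowName, &wndClass))
        return true;                           // class is already registered

    wndClass.style         = CS_OWNDC;         // see the discussion at the end
    wndClass.lpfnWndProc   = (WNDPROC) MyMsgProc;
    wndClass.hInstance     = m_hInstance;
    wndClass.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wndClass.lpszClassName = m_sWindowName;

    return RegisterClass(&wndClass) != 0;
}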
At this point, however, the window does not have focus. Unfortunately, that will not only prevent our WNDPROC from handling any key events, it will also prevent HwndHost from forwarding the keyboard information to WPF. Thus, we have to acquire focus manually, using a slightly more sophisticated version of MyMsgProc:
LRESULT WINAPI
MyMsgProc(HWND _hWnd, UINT _msg,
WPARAM _wParam, LPARAM _lParam)
{
switch(_msg)
{
// Make sure the window gets focus when it has to!
case WM_IME_SETCONTEXT:
if(LOWORD(_wParam) > 0)
SetFocus(_hWnd);
return 0;
default:
return DefWindowProc( _hWnd, _msg, _wParam, _lParam );
}
}
Note that we have to check for LOWORD(_wParam) > 0; otherwise, the message stands for losing focus rather than gaining it.
Using the simple message handler presented above, most of the commands will be forwarded to the parent. We can thus easily catch key events in the WPF-based Window class which owns the host.
However, this topic can get a lot more complicated, especially if we want two-way interaction between the Win32 control and WPF. That is outside this article's scope.
Since there are more and more devices with a screen resolution above 96 DPI, it becomes more important for applications to be DPI-aware. To tell the truth, I never bothered about DPI until I set it to 120 myself.
This is why I chose to use an orthogonal projection here: it enables us to check, by visual means, whether we mapped the screen correctly.
In our case, the problem becomes highly annoying: if you don't take the system's DPI setting into consideration, you will have a large border that you simply cannot draw to, because your GL window is too small.
In order to avoid that, we get the system's DPI setting on initialization and multiply the new window size by it upon resize:
virtual HandleRef
BuildWindowCore(HandleRef hwndParent) override
{
// ...
m_hDC = GetDC(m_hWnd);
// Technically, the DPI can be different for
// X and Y resolution. It is not particularly
// a lot of work to support that feature, so we do it.
m_dScaleX = GetDeviceCaps(m_hDC, LOGPIXELSX) / 96.0;
m_dScaleY = GetDeviceCaps(m_hDC, LOGPIXELSY) / 96.0;
}
virtual void
OnRenderSizeChanged(SizeChangedInfo^ sizeInfo) override
{
// ...
int iHeight = (int)
(sizeInfo->NewSize.Height * m_dScaleY);
int iWidth = (int)
(sizeInfo->NewSize.Width * m_dScaleX);
glViewport( 0, 0, iWidth, iHeight);
// ...
}
Although the presented techniques look very similar at first glance, they are targeted at different things: the HwndHost is a class the actual control is derived from. The WindowsFormsHost, on the other hand, is a WPF control which we can place in a XAML file; the actual control in this case must be a UserControl.
While the WindowsFormsHost allows the use of an arbitrary WinForms user control with very little effort, usage of the HwndHost can be quite tricky, especially when it comes to input handling. This is largely because it completely breaks the control scheme of the GUI and, in the case of input events, even bypasses the main (WPF) application. On the other hand, being able to combine Win32 and WPF with a few tricks is still marvellous!
One thing that bothers me is the exact behaviour of CS_OWNDC. I have read some articles on the net about it, but in the end I did not find an explanation that satisfied me. Removing it from the code doesn't seem to change anything, but I wonder what happens when we perform more complex rendering operations?
Another issue is performance. I did not talk about it in the article at all, for a reason: my system is barely capable of displaying a transparency-enabled Vista desktop at full resolution... In my case, performance is a catastrophe! However, I believe that is largely due to a fill-rate bottleneck of my old GeForce FX 5200. Also, a timer that invalidates the control from time to time is of course no basis for a serious performance measurement!
Thank you for reading!
This is my first article. Phew... quite a bit of work! Any feedback is welcome.
|
http://www.codeproject.com/Articles/23736/Creating-OpenGL-Windows-in-WPF?msg=3578354
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
> {-# LANGUAGE GADTs #-}
> import Codec.Compression.GZip
> import Control.Applicative
> import Control.Concurrent.CHP
> import qualified Control.Concurrent.CHP.Common as CHP
> import Control.Concurrent.CHP.Enroll
> import Control.Concurrent.CHP.Utils
> import Control.Monad.State.Strict
> import Data.Digest.Pure.MD5
> import Data.Maybe
> import qualified Data.ByteString.Char8 as BS
> import qualified Data.ByteString.Lazy.Char8 as LBS
> import qualified Data.ByteString.Lazy.Internal as LBS
> import System.Environment
> import System.IO
> import System.IO.Unsafe
>
> calculateMD5 :: (ReadableChannel r, Poisonable (r (Maybe BS.ByteString)),
>                  WriteableChannel w, Poisonable (w MD5Digest))
>              => r (Maybe BS.ByteString) -> w MD5Digest -> CHP ()
> calculateMD5 in_ out = evalStateT (forever loop) md5InitialContext
>                        `onPoisonRethrow` (poison in_ >> poison out)
>   where loop = liftCHP (readChannel in_) >>= calc'
>         calc' Nothing  = gets md5Finalize >>= liftCHP . writeChannel out
>                          >> put md5InitialContext
>         calc' (Just b) = modify (flip md5Update $ LBS.fromChunks [b])

Calculates the MD5 hash of the input stream. Nothing indicates EOF.

> unsafeInterleaveCHP :: CHP a -> CHP a
> unsafeInterleaveCHP = fromJust <.> liftIO <=< unsafeInterleaveIO <.> embedCHP

A helper function. It is supposed to move the execution in time, just as unsafeInterleaveIO does. I believe the main problem lives here, especially that Maybe.fromJust: Nothing is the error.

> chan2List :: (ReadableChannel r, Poisonable (r a)) => r a -> CHP [a]
> chan2List in_ = unsafeInterleaveCHP
>                 ((liftM2 (:) (readChannel in_) (chan2List in_))
>                  `onPoisonTrap` return [])

Turns a channel into a lazily read list.

> chanMaybe2List :: (ReadableChannel r, Poisonable (r (Maybe a)))
>                => r (Maybe a) -> CHP [[a]]
> chanMaybe2List in_ = splitByMaybe <$> chan2List in_
>   where splitByMaybe []           = []
>         splitByMaybe (Nothing:xs) = [] : splitByMaybe xs
>         splitByMaybe (Just v :[]) = [[v]]
>         splitByMaybe (Just v :xs) = let (y:ys) = splitByMaybe xs
>                                     in (v:y):ys

Lazily reads from the channel into a list of lists.

> compressCHP :: (ReadableChannel r, Poisonable (r (Maybe BS.ByteString)),
>                 WriteableChannel w, Poisonable (w (Maybe BS.ByteString)))
>             => r (Maybe BS.ByteString) -> w (Maybe BS.ByteString) -> CHP ()
> compressCHP in_ out = toOut >>= mapM_ sendBS
>   where in_' :: CHP [LBS.ByteString]
>         in_' = fmap LBS.fromChunks <$> chanMaybe2List in_
>         toOut :: CHP [LBS.ByteString]
>         toOut = fmap compress <$> in_'
>         sendBS :: LBS.ByteString -> CHP ()
>         sendBS LBS.Empty       = writeChannel out Nothing
>         sendBS (LBS.Chunk c r) = writeChannel out (Just c) >> sendBS r

The compress process.

> readFromFile :: (ReadableChannel r, Poisonable (r String),
>                  WriteableChannel w, Poisonable (w (Maybe BS.ByteString)))
>              => r String -> w (Maybe BS.ByteString) -> CHP ()
> readFromFile file data_ =
>     forever (do path <- readChannel file
>                 hnd <- liftIO $ openFile path ReadMode
>                 let copy = liftIO (BS.hGet hnd LBS.defaultChunkSize) >>=
>                            writeChannel data_ . Just
>                 copy `onPoisonRethrow` liftIO (hClose hnd)
>                 writeChannel data_ Nothing
>                 liftIO $ hClose hnd)
>     `onPoisonRethrow` (poison file >> poison data_)

The process reading from a file.

> writeToFile :: (ReadableChannel r, Poisonable (r String),
>                 ReadableChannel r', Poisonable (r' (Maybe BS.ByteString)))
>             => r String -> r' (Maybe BS.ByteString) -> CHP ()
> writeToFile file data_ =
>     forever (do path <- readChannel file
>                 hnd <- liftIO $ openFile path WriteMode
>                 let writeUntilNothing = readChannel data_ >>= writeUntilNothing'
>                     writeUntilNothing' Nothing  = return ()
>                     writeUntilNothing' (Just v) = liftIO (BS.hPutStr hnd v) >>
>                                                   writeUntilNothing
>                 writeUntilNothing `onPoisonFinally` liftIO (hClose hnd))
>     `onPoisonRethrow` (poison file >> poison data_)

The process writing to a file.

> getFiles :: (WriteableChannel w, Poisonable (w String)) => w String -> CHP ()
> getFiles out = mapM_ (writeChannel out) ["test1", "test2"] >> poison out

The sample files. Each contains "Test1\n".

> pipeline1 :: CHP ()
> pipeline1 = do md5sum <- oneToOneChannel' $ chanLabel "MD5"
>                runParallel_ [(getFiles ->|^ ("File", readFromFile)
>                                        ->|^ ("Data", calculateMD5))
>                              (writer md5sum),
>                              forever $ readChannel (reader md5sum) >>=
>                                        liftIO . print]

The first pipeline. Output:
fa029a7f2a3ca5a03fe682d3b77c7f0d
fa029a7f2a3ca5a03fe682d3b77c7f0d
< File."test1", Data.Just "Test1\n", Data.Nothing, MD5.fa029a7f2a3ca5a03fe682d3b77c7f0d, File."test2", Data.Just "Test1\n", Data.Nothing, MD5.fa029a7f2a3ca5a03fe682d3b77c7f0d >

> pipeline2 :: CHP ()
> pipeline2 = enrolling $ do
>     file     <- oneToManyChannel' $ chanLabel "File"
>     fileMD5  <- oneToOneChannel'  $ chanLabel "File MD5"
>     data_    <- oneToOneChannel'  $ chanLabel "Data"
>     md5      <- oneToOneChannel'  $ chanLabel "MD5"
>     md5BS    <- oneToOneChannel'  $ chanLabel "MD5 ByteString"
>     fileMD5' <- Enroll (reader file)
>     fileData <- Enroll (reader file)
>     liftCHP $ runParallel_
>         [getFiles (writer file),
>          (forever $ readChannel fileMD5' >>=
>                     writeChannel (writer fileMD5) . (++ ".md5"))
>          `onPoisonRethrow` (poison fileMD5' >> poison (writer fileMD5)),
>          readFromFile fileData (writer data_),
>          calculateMD5 (reader data_) (writer md5),
>          (forever $ do v <- readChannel (reader md5)
>                        let v' = Just $ BS.pack $ show v
>                        writeChannel (writer md5BS) v'
>                        writeChannel (writer md5BS) Nothing)
>          `onPoisonRethrow` (poison (writer md5BS) >> poison (reader md5)),
>          writeToFile (reader fileMD5) (reader md5BS)]

The correct pipeline (testing EnrollingT):
< _b4, File MD5."test1.md5", Data.Just "Test1\n", Data.Nothing, MD5.fa029a7f2a3ca5a03fe682d3b77c7f0d, _b4, MD5 ByteString.Just "fa029a7f2a3ca5a03fe682d3b77c7f0d", Data.Just "Test1\n", Data.Nothing, MD5 ByteString.Nothing, MD5.fa029a7f2a3ca5a03fe682d3b77c7f0d, File MD5."test2.md5", MD5 ByteString.Just "fa029a7f2a3ca5a03fe682d3b77c7f0d", MD5 ByteString.Nothing >
% cat test1.md5
fa029a7f2a3ca5a03fe682d3b77c7f0d%

> onPoisonFinally :: CHP a -> CHP () -> CHP a
> onPoisonFinally m b = (m `onPoisonRethrow` b) <* b

A utility function (used for closing handles).

> (<.>) :: Functor f => (b -> c) -> (a -> f b) -> a -> f c
> f <.> g = fmap f . g

<.> is to <$> as . is to $.

> instance MonadCHP m => MonadCHP (StateT s m) where
>     liftCHP = lift . liftCHP

The missing instance for the strict State monad.

> (->|^) :: Show b
>        => (Chanout b -> CHP ()) -> (String, Chanin b -> c -> CHP ())
>        -> (c -> CHP ())
> (->|^) p (l, q) x = do c <- oneToOneChannel' $ chanLabel l
>                        runParallel_ [p (writer c), q (reader c) x]

A 'missing' helper function.

> data EnrollingT a where
>     Lift   :: CHP a -> EnrollingT a
>     Enroll :: (Enrollable b z) => b z -> EnrollingT (Enrolled b z)
>
> enrolling :: EnrollingT a -> CHP a
> enrolling (Lift v)   = v
> enrolling (Enroll b) = enroll b return
>
> instance Monad EnrollingT where
>     (Lift m)   >>= f = Lift $ m >>= enrolling . f
>     (Enroll b) >>= f = Lift $ enroll b (enrolling . f)
>     return           = Lift . return
> instance MonadIO EnrollingT where
>     liftIO = Lift . liftIO
> instance MonadCHP EnrollingT where
>     liftCHP = Lift

A helper monad for enrolling (I know the T should stand for 'transformer', but then I realized there were problems).

Thanks in advance
|
http://www.haskell.org/pipermail/haskell-cafe/2010-January/071584.html
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
13 May 2010 18:04 [Source: ICIS news]
LONDON (ICIS news)--While the floundering euro grabs headlines across the globe, its decline is also snatching away European chemicals buyers’ margins, sources said on Thursday.
The embattled currency, which has fallen 8% against the US dollar in the past month, has made it harder and more expensive for European buyers of a whole raft of chemicals to get the material they need from abroad. At the same time, it is making European product more attractive to foreign buyers, thereby exacerbating tightness in markets that had been already short.
The euro has also lost roughly 7.5% against the yuan in the past month amid concerns over the ability of
In markets such as vinyl acetate monomer (VAM), acetic acid and mono ethylene glycol (MEG), and downstream polyethylene terephthalate (PET), which rely on imported volumes from the US,
“We haven’t seen any imports because we are afraid of the euro [rate change] against the US dollar,” said one PET producer.
“These macro economics are out of our control - we have no choice but to try and increase prices,” a VAM manufacturer said.
For other products, such as plasticisers, acrylate esters, adipic acid and oxo-alcohols, buyers are generally less reliant on imports. But in the current environment, where tightness has pressured both prices and downstream production, the euro’s precipitous decline is yet another stressor in an already nightmare-like scenario.
Until recently, three out of the four major European producers of acrylate esters had production outages, driving one of them, Arkema, to declare force majeure. At the same time, an even more chaotic supply situation in the US has siphoned European material to that market, pressing spot prices for butyl acrylate up €1,345/tonne ($1,703/tonne) on average since the beginning of the year, to €2,350-2,500/tonne ($2,975-3,165/tonne) FD (free delivered) NWE (northwest Europe).
“We already have a lot of movement into the
The currency fluctuation may also have a trickle-down effect by making feedstock chemicals more expensive and harder to source. For example, in the olefins market, traders were holding on to their cash rather than buying ethylene or propylene from overseas markets, as they would risk potential losses if euro-priced products failed to sell at prices high enough to offset costs.
“Right now (there is) a very high level of uncertainty,” one trader said.
The euro’s drop is not dour news for all chemical industry players in
Sellers also noted the instant currency exchange related gains realised by exporting material at the same prices they commanded before.
“If one good thing came out of the
A high density polyethylene (HDPE) exporter echoed a similar sentiment.
“I am very happy about last month’s export deals,” the exporter said. “We did them in dollars and now they are worth more in euros.”
($1 = €0.79)
Vinicy Chan, Jane Massingham, Julia Meehan, Linda Naylor, Amandeep Parmar, Caroline Murray and Nel Weddle contributed to this story.
For more on Arkema visit ICIS company intelligence
For more on VAM, MEG
|
http://www.icis.com/Articles/2010/05/13/9359380/Record-low-euro-pressures-margins-and-supply.html
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
using (var ms = new MemoryStream())
{
    stream.CopyTo(ms);
    // Do something with copied data
}
I am using this code to copy data from the HTTP response stream to a memory stream because I have to use a serializer that needs more than the response stream is able to offer.
When a user is authenticated through an external identity provider, not all identity providers give us the user name or the other information we ask users for when they join our site. What all identity providers have in common is a unique ID that helps you identify the user.
There is a logical shift between ASP.NET and my site when it comes to considering a user authorized.
For ASP.NET MVC, a user is authorized when the user has an identity. For my site, a user is authorized when the user has a profile and a row in my users table. Having a profile means that the user has a unique username in my system, and he or she is always identified by this username by other users.
My solution is simple: I created my own action filter attribute that makes sure the user has a profile before accessing a given method; if the user has no profile, the browser is redirected to the join page.
Usually, we restrict access to a page using AuthorizeAttribute. The code is something like this:
[Authorize]
public ActionResult Details(string id)
{
    var profile = _userRepository.GetUserByUserName(id);
    return View(profile);
}
If this page is only for site users and we have user profiles, then all users - the ones that have a profile and all the others that are just authenticated - can access the information. That is okay, because all these users have successfully logged in to some service that is supported by AppFabric ACS.
On my site, the users with no profile are in a grey area. They are halfway to being users, because they have no username and profile on my site yet. So, looking at the image above again, we need something that adds a profile-existence condition to users-only content.
[ProfileRequired]
Now, this attribute will solve our problem as soon as we implement it.
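Once implemented, usage is a sketch that mirrors the earlier Authorize example, with only the attribute swapped (same repository call as above):

[ProfileRequired]
public ActionResult Details(string id)
{
    var profile = _userRepository.GetUserByUserName(id);
    return View(profile);
}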
Here is my implementation of ProfileRequiredAttribute. It is pretty new, and right now it is more of a working draft, but you can already play with it.
public class ProfileRequiredAttribute : AuthorizeAttribute
{
    private readonly string _redirectUrl;

    public ProfileRequiredAttribute()
    {
        _redirectUrl = ConfigurationManager.AppSettings["JoinUrl"];

        if (string.IsNullOrWhiteSpace(_redirectUrl))
            _redirectUrl = "~/";
    }

    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        base.OnAuthorization(filterContext);

        var httpContext = filterContext.HttpContext;
        var identity = httpContext.User.Identity;

        if (!identity.IsAuthenticated || identity.GetProfile() == null)
        {
            if (filterContext.Result == null)
                httpContext.Response.Redirect(_redirectUrl);
        }
    }
}
All methods with this attribute require two things: the user must be authenticated, and the user must have a profile. The first condition is handled by AuthorizeAttribute, and the second one is handled by the custom logic in the ProfileRequiredAttribute class.
To get the user profile with less code in the places where profiles are needed, I wrote a GetProfile() extension method for the IIdentity interface. There are some more extension methods that read the user and identity provider identifiers out of the claims, and based on this information, the user profile is read from the database. If you take this code with copy and paste, I am sure it won't work for you as-is, but you get the idea.
public static User GetProfile(this IIdentity identity)
{
    // Without an authenticated identity there is nothing to look up.
    if (identity == null || !identity.IsAuthenticated)
        return null;

    var context = HttpContext.Current;
    if (context.Items["UserProfile"] != null)
        return context.Items["UserProfile"] as User;

    var provider = identity.GetIdentityProvider();
    var nameId = identity.GetNameIdentifier();

    var rep = ObjectFactory.GetInstance<IUserRepository>();
    var profile = rep.GetUserByProviderAndNameId(provider, nameId);

    context.Items["UserProfile"] = profile;
    return profile;
}
To avoid round trips to the database, I cache the user profile for the current request, because the chance that the profile gets changed meanwhile is minimal. The other reason is maybe more tricky: profile objects come from an Entity Framework context, and that context also has the HTTP request as its lifecycle.
This posting gave you some ideas about how to finish the user profile work when you use AppFabric ACS as an external authentication provider. Although there was a little shift between us and ASP.NET MVC in the interpretation of "authorized", we were easily able to solve the problem by extending AuthorizeAttribute to get all our requirements fulfilled. We also wrote an extension method for IIdentity that returns the user profile based on the claims and caches the profile in HTTP request scope.
In my last posting about the AppFabric Labs Access Control Service, I described how to get your ASP.NET MVC application to work with ACS. In this posting, I will dig deeper into tokens and claims and provide you with some helper methods that you may find useful when authenticating users using AppFabric ACS. I will also explain the little dirty secret of Windows Live ID.
Let's start with dissecting tokens. A token is like an envelope that contains claims. Each claim carries some property of the identity. By default, we get the identity provider and name identifier claims from all identity providers.
As you can see from the image on the right, the token may contain more claims, like claims for name and e-mail address. There can also be other claims if the identity provider is able to provide them.
Claims are specified in the web.config file when we add the federation metadata definition to the application using the STS reference wizard. In my web.config, the following claims are defined; you can find them in the microsoft.identitymodel block of the web.config file.
So why is the user name empty when using Windows Live ID? This is the question I had when I added ACS support to my ASP.NET MVC web application, and here is the answer: Live ID returns only the name identifier for the user and does not introduce it as a user name to Windows Identity Foundation. As there is no name claim, the user name is left empty, and you end up in the weird situation where the user is authenticated but there is no username.
When a user is authenticated over Live ID, we get back these claims:
The values are something like this:
As you can see, there is no claim for the user name, and that's why the user name is empty.
Here is what Google returns when I authenticate myself:
Google asks the user if he or she allows ACS to access the name and e-mail address. If I agree, then my name and address are given to my web application, and the username is filled correctly when asking for the current user.
Can we identify users by name, then? This is a good question to ask, because some services allow users to change their name, and names are not unique. Consider the name John Smith: there are hundreds of guys who have this name. There may also be hundreds of Mary Smiths, and as some of them get married, they change their last name (if they don't marry some John Smith, of course).
To be maximally fool-proof, we have to save the values of the identityprovider and nameidentifier claims to our users database. This way we avoid the situation where two different identity providers give us the same nameidentifier for different users.
Here is my claims extension class you can use to get the values of these fields easily.
public static class IdentityExtensions
{
    public static string GetIdentityProvider(this IIdentity identity)
    {
        var claimsIdentity = identity as ClaimsIdentity;
        if (claimsIdentity == null)
            return string.Empty;

        // Claim type URI restored here from the standard ACS schema.
        var providerQuery = from c in claimsIdentity.Claims
                            where c.ClaimType == "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider"
                            select c.Value;

        return providerQuery.FirstOrDefault();
    }

    public static string GetNameIdentifier(this IIdentity identity)
    {
        var claimsIdentity = identity as ClaimsIdentity;
        if (claimsIdentity == null)
            return string.Empty;

        // Claim type URI restored here from the standard WS-Federation schema.
        var nameIdQuery = from c in claimsIdentity.Claims
                          where c.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"
                          select c.Value;

        return nameIdQuery.FirstOrDefault();
    }
}
And here is how we can use these extension methods.
var identity = User.Identity;
var provider = identity.GetIdentityProvider();
var nameId = identity.GetNameIdentifier();
As some providers don't support returning names, we have two options. Right now I am trying to work out a solution for the second option - handling separate usernames on the site - so I don't have to write tons of code for this simple thing.
Claims-based authentication is a powerful way to identify users, and AppFabric ACS lets our sites support different identity providers so we don't have to write code for each one of them. As identity providers support different features and do not always return all the data we would like to have, we have to be ready to handle that situation (as with Windows Live ID). As we saw, it was still easy to identify users uniquely, because the data for this purpose is always given to us with the token.
To start comparing schemas, click the OK button in the source and target schemas window. Visual Studio starts comparing right away. You can see my example results here.
NB! At this point, I suggest you save the schema comparison. Otherwise, the comparison settings are not saved and you have to start from zero the next time you run a comparison.
When I open my ASP.NET web application, I have a new option for references when I right-click on my web project: Add Deployable Dependencies…
If you select it, you will see a dialog where you can select the dependencies you want to add to your project package.
When the packages you need are selected, click OK. Visual Studio adds a new folder to your project called _bin_DeployableAssemblies.
The screenshot on the right shows the list of assemblies added for ASP.NET Pages and Razor. All the DLLs required to run ASP.NET MVC 3 with the Razor view engine are here. I am not sure if NuGet.Core.dll is required in production, but if it is added, then let it be there.
I tried to deploy my ASP.NET MVC project that uses Razor to Windows Azure after adding deployable references to my project.
Deployment went fine and web role instance started without any problems. The only DLL reference I made as local was the one for System.Web.Mvc. All Razor stuff came with deployable dependencies.
Visual Studio support for deployable dependencies is great because this way component providers can build definitions for their components, so assemblies that are loaded dynamically at runtime will also end up in the deployment package.
You can modify IIS Express settings for your application. Just open your project properties and move to Web tab.
IIS and IIS Express are using same settings. The difference is if you make check to Use IIS Express checkbox or not.
If you don't want to use IIS Express, or you can't for some reason, you can easily switch back to the Visual Studio Development Server. Just right-click on your web application project and select Use Visual Studio Development Server from the context menu.
|
http://weblogs.asp.net/gunnarpeipman/archive/2010/12.aspx
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
4/13/2020
We have added a range of noteworthy new features to Nevergrad, Facebook AI’s open source Python3 library for derivative-free and evolutionary optimization. These enhancements enable researchers and engineers to work with several objectives (multi-objective optimization) or with constraints. These uses are common in natural language processing, for example, where a translation model may be optimized on multiple metrics or benchmarks simultaneously. Because Nevergrad offers cutting-edge algorithms through an easy-to-use, open Python source, anyone can use it to easily test and compare different approaches to a particular problem or to use well-known benchmarks to evaluate how a method compares with the current state of the art. To further improve Nevergrad, we have partnered with IOH Profiler to create the Open Optimization Competition. It is open to submissions for both new optimization algorithms and improvements to Nevergrad’s core tools. Entries must be submitted before September 30 to be eligible for prizes, and more information is available here.
Nevergrad is an easy-to-use optimization toolbox for AI researchers, including those who aren’t Python geeks. Optimizing any function takes only a couple of lines of code:
import nevergrad as ng

def square(x):
    return sum((x - .5)**2)

optimizer = ng.optimizers.OnePlusOne(instrumentation=2, budget=100)
recommendation = optimizer.minimize(square)
print(recommendation.value)  # optimal args and kwargs
>>> array([0.500, 0.499])
The platform provides a single, consistent interface to use a wide range of derivative-free algorithms, including evolution strategies, differential evolution, particle swarm optimization, Cobyla, and Bayesian optimization. The platform also facilitates research on new derivative-free optimization methods, and novel algorithms can be easily incorporated into the platform.
Through a joint effort with IOH and with input from researchers at the black-box optimization meeting at Dagstuhl, we have made several noteworthy improvements to Nevergrad:
Multi-objective optimization.
Constrained optimization.
Simplified problem parametrization. Specifying a log-distributed variable between 0.001 and 1.0 is just ng.p.Log(lower=0.001, upper=1.0); see the sketch after this list.
Competence map optimizers. We provide algorithms to automatically select the best optimization method, taking into account your computational budget, the dimension, the type of variables, and the degree of parallelism.
Tools for chaining optimization algorithms and decomposing problems into several subproblems, by attributing distinct variables to different optimizers.
Interface with HiPlot, Facebook AI’s lightweight interactive visualization tool. This allows researchers to easily explore the optimization process or to use an interactive plot in a Jupyter notebook to observe the behaviors of very different algorithms.
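As a hedged sketch of the parametrization system mentioned above (the variable names and toy objective are made up for illustration, and this assumes the newer API where the optimizer takes a parametrization argument rather than the older instrumentation shown in the first snippet):

import nevergrad as ng

# Two named variables: a log-distributed learning rate (from the post)
# and an illustrative bounded scalar.
param = ng.p.Instrumentation(
    lr=ng.p.Log(lower=0.001, upper=1.0),
    dropout=ng.p.Scalar(lower=0.0, upper=0.5),
)

def train(lr: float, dropout: float) -> float:
    # Stand-in for a real training loss.
    return (lr - 0.01) ** 2 + dropout

optimizer = ng.optimizers.OnePlusOne(parametrization=param, budget=100)
recommendation = optimizer.minimize(train)  # kwargs are passed through
print(recommendation.kwargs)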
As an additional experimental feature, we regularly compare optimizers’ performance and publish results here. AI researchers can easily extend Nevergrad with new benchmarks or optimizers and run them locally, or create a pull request on GitHub in order to merge their contribution and have it included in these automated tests.
Most machine learning tasks — from natural language processing to image classification to translation and many others — rely on derivative-free optimization to tune parameters and/or hyperparameters in their models. Nevergrad makes it easy for researchers and engineers to find the best way to do this and to develop new and better techniques.
Multi-objective optimization (detailed in this example in Nevergrad) is prominent in everyone's life. For instance, if someone is looking to buy something, she or he may want options that are simultaneously cheap, nearby, relevant, and high quality.
Since its initial release, Nevergrad has become a widely used research tool. The new features we are now sharing enable work on additional use cases, such as multi-agent power systems, physics (photonics or antireflective coatings), and control in games. Nevergrad also provides generic algorithms that can better adapt to the structure of a particular problem, including by using specific mutations or recombination in evolutionary algorithms, through the new parametrization system.
GitHub:
Documentation:
Pypi: pip install nevergrad
|
https://ai.facebook.com/blog/nevergrad-an-evolutionary-optimization-platform-adds-new-key-features/
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
The Stream API and lambdas were a big improvement to Java, starting with version 8. From that point on, we could work in a more functional syntax style. Now, after a few years of working with these code constructions, one of the bigger issues that remains is how to deal with checked exceptions inside a lambda.
As you all probably know, it is not possible to call a method that throws a checked exception from a lambda directly. In some way, we need to catch the exception to make the code compile. Naturally, we can do a simple try-catch inside the lambda and wrap the exception into a RuntimeException, as shown in the first example, but I think we can all agree that this is not the best way to go.
myList.stream()
    .map(item -> {
        try {
            return doSomething(item);
        } catch (MyException e) {
            throw new RuntimeException(e);
        }
    })
    .forEach(System.out::println);
Most of us are aware that block lambdas are clunky and less readable. They should be avoided as much as possible, in my opinion. If we need to do more than a single line, we can extract the function body into a separate method and simply call the new method: a better and more readable way to solve this problem is to wrap the call in a plain old method that does the try-catch, and call that method from within your lambda.
myList.stream()
    .map(this::trySomething)
    .forEach(System.out::println);

private Item trySomething(Item item) {
    try {
        return doSomething(item);
    } catch (MyException e) {
        throw new RuntimeException(e);
    }
}
This solution is at least a bit more readable, and we do separate our concerns. If you really want to catch the exception and do something specific, and not simply wrap the exception into a RuntimeException, this can be a possible and readable solution for you.
RuntimeException
In many cases, you will see people use these kinds of solutions to repack the exception into a RuntimeException or a more specific implementation of an unchecked exception. By doing so, the method can be called inside a lambda and be used in higher-order functions.
I can relate a bit to this practice because I personally do not see much value in checked exceptions in general, but that is a whole other discussion that I am not going to start here. If you want to wrap every call in a lambda that throws a checked exception into a RuntimeException, you will see that you repeat the same pattern. To avoid rewriting the same code over and over again, why not abstract it into a utility function? This way, you only have to write it once and call it every time you need it.
To do so, you first need to write your own version of the functional interface for a function. Only this time, you need to define that the function may throw an exception.
@FunctionalInterface
public interface CheckedFunction<T, R> {
    R apply(T t) throws Exception;
}
Now, you are ready to write your own general utility function that accepts a CheckedFunction as just described in the interface. You can handle the try-catch in this utility function and wrap the original exception into a RuntimeException (or some other unchecked variant). I know that we now end up with an ugly block lambda here, and you could abstract the body from this. Choose for yourself if that is worth the effort for this single utility.
public static <T, R> Function<T, R> wrap(CheckedFunction<T, R> checkedFunction) {
    return t -> {
        try {
            return checkedFunction.apply(t);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    };
}
With a simple static import, you can now wrap the lambda that may throw an exception with your brand new utility function. From this point on, everything will work again.
myList.stream()
    .map(wrap(item -> doSomething(item)))
    .forEach(System.out::println);
The only problem left is that when an exception occurs, the processing of your stream stops immediately. If that is no problem for you, then go for it. I can imagine, however, that direct termination is not ideal in many situations.
Either
When working with streams, we probably don't want to stop processing the stream if an exception occurs. If your stream contains a very large number of items that need to be processed, do you want that stream to terminate when, for instance, the second item throws an exception? Probably not.
Let's turn our way of thinking around. Why not consider "the exceptional situation" just as much a possible result as a "successful" result? Let's consider both as data, continue processing the stream, and decide afterward what to do with it. We can do that, but to make it possible, we need to introduce a new type: the Either type.
The Either type is a common type in functional languages and is not (yet) part of Java. Similar to the Optional type in Java, an Either is a generic wrapper with two possibilities. It can either be a Left or a Right, but never both. Both left and right can be of any type. For instance, if we have an Either<String, Integer> value, this value can hold either something of type String or something of type Integer.
If we use this principle for exception handling, we can say that our Either type holds either an Exception or a value. By convention, the left is the Exception and the right is the successful value. You can remember this by thinking of right as not only the right-hand side but also as a synonym for "good," "ok," etc.
Below, you will see a basic implementation of the Either type. In this case, I used the Optional type when we try to get the left or the right, because we can never be sure which of the two sides is actually present.
public class Either<L, R> {
    private final L left;
    private final R right;

    private Either(L left, R right) {
        this.left = left;
        this.right = right;
    }

    public static <L, R> Either<L, R> Left(L value) {
        return new Either<>(value, null);
    }

    public static <L, R> Either<L, R> Right(R value) {
        return new Either<>(null, value);
    }

    public Optional<L> getLeft() {
        return Optional.ofNullable(left);
    }

    public Optional<R> getRight() {
        return Optional.ofNullable(right);
    }

    public boolean isLeft() {
        return left != null;
    }

    public boolean isRight() {
        return right != null;
    }

    public <T> Optional<T> mapLeft(Function<? super L, T> mapper) {
        if (isLeft()) {
            return Optional.of(mapper.apply(left));
        }
        return Optional.empty();
    }

    public <T> Optional<T> mapRight(Function<? super R, T> mapper) {
        if (isRight()) {
            return Optional.of(mapper.apply(right));
        }
        return Optional.empty();
    }

    @Override
    public String toString() {
        if (isLeft()) {
            return "Left(" + left + ")";
        }
        return "Right(" + right + ")";
    }
}
You can now make your own functions return an Either instead of throwing an Exception. But that doesn't help you if you want to use existing methods that throw a checked Exception inside a lambda, right? Therefore, we have to add a tiny utility function to the Either type I described above.
public static <T, R> Function<T, Either> lift(CheckedFunction<T, R> function) {
    return t -> {
        try {
            return Either.Right(function.apply(t));
        } catch (Exception ex) {
            return Either.Left(ex);
        }
    };
}
By adding this static lift method to the Either, we can now simply "lift" a function that throws a checked exception and let it return an Either. If we take the original problem, we now end up with a stream of Eithers instead of a possible RuntimeException that may blow up my entire Stream.
myList.stream()
    .map(Either.lift(item -> doSomething(item)))
    .forEach(System.out::println);
This simply means that we have taken back control. By using the filter function in the Stream API, we can simply filter out the left instances and, for example, log them. You can also filter the right instances and simply ignore the exceptional cases. Either way, you are back in control again, and your stream will not terminate instantly when a possible RuntimeException occurs.
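To make that concrete, here is a small sketch (not from the original article) that assumes the Either type and lift method shown above, the Item/doSomething pair from the first examples, and java.util.stream.Collectors. The cast is needed because lift, as declared above, returns a raw Either:

List<Item> results = myList.stream()
        .map(Either.lift(item -> doSomething(item)))
        // log the failures instead of terminating the stream
        .peek(either -> either.getLeft()
                .ifPresent(ex -> System.err.println("Failed: " + ex)))
        // keep only the successful results
        .filter(Either::isRight)
        .map(either -> (Item) either.getRight().get())
        .collect(Collectors.toList());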
Because Either is a generic wrapper, it can be used for any type, not just for exception handling. This gives us the opportunity to do more than just wrap the Exception into the left part of an Either. The issue we might have now is that if the Either only holds the wrapped exception, we cannot do a retry, because we have lost the original value. By using the ability of the Either to hold anything, we can store both the exception and the value inside a left. To do so, we simply make a second static lift function like this.
public static <T, R> Function<T, Either> liftWithValue(CheckedFunction<T, R> function) {
    return t -> {
        try {
            return Either.Right(function.apply(t));
        } catch (Exception ex) {
            return Either.Left(Pair.of(ex, t));
        }
    };
}
You see that in this liftWithValue function, the Pair type is used to pair both the exception and the original value into the left of an Either. Now we have all the information we could possibly need if something goes wrong, instead of only having the Exception.
The Pair type used here is another generic type that can be found in the Eclipse Collections library, or you can simply implement your own. Anyway, it is just a type that can hold two values.
public class Pair<F, S> {
    public final F fst;
    public final S snd;

    private Pair(F fst, S snd) {
        this.fst = fst;
        this.snd = snd;
    }

    public static <F, S> Pair<F, S> of(F fst, S snd) {
        return new Pair<>(fst, snd);
    }
}
With the use of liftWithValue, you now have all the flexibility and control to use methods that may throw an Exception inside a lambda. When the Either is a Right, we know that the function was applied correctly and we can extract the result. If, on the other hand, the Either is a Left, we know something went wrong and we can extract both the Exception and the original value, so we can proceed as we like. By using the Either type instead of wrapping the checked Exception into a RuntimeException, we prevent the Stream from terminating halfway.
Try
People who have worked with, for instance, Scala may use the Try instead of the Either for exception handling. The Try type is very similar to the Either type. It has, again, two cases: "success" or "failure." The failure can only hold the type Exception, while the success can hold any type you want. So, the Try is nothing more than a specific implementation of the Either where the left type (the failure) is fixed to the type Exception.
public class Try<Exception, R> {
    private final Exception failure;
    private final R succes;

    public Try(Exception failure, R succes) {
        this.failure = failure;
        this.succes = succes;
    }
}
Some people are convinced that it is easier to use, but because we can only hold the Exception itself in the failure part, we have the same problem as explained in the first part of the Either section. I personally like the flexibility of the Either type more. Anyway, in both cases, whether you use the Try or the Either, you solve the initial problem of exception handling and do not let your stream terminate because of a RuntimeException.
Libraries
Both the Either and the Try are very easy to implement yourself. On the other hand, you can also take a look at the functional libraries that are available. For instance, Vavr (formerly known as Javaslang) has implementations of both types and helper functions available. I advise you to take a look at it, because it holds a lot more than only these two types. However, you have to ask yourself whether you want this large library as a dependency just for exception handling, when you can implement it yourself with just a few lines of code.
Conclusion
When you want to use a method that throws a checked Exception, you have to do something extra if you want to call it in a lambda. Wrapping it into a RuntimeException can be a solution to make it work. If you prefer this method, I urge you to create a simple wrapper tool and reuse it, so you are not bothered by the try/catch every time.
If you want to have more control, you can use the Either or Try types to wrap the outcome of the function, so you can handle it as a piece of data. The stream will not terminate when a RuntimeException is thrown, and you have the liberty to handle the data inside your stream as you please.
Posted on by:
Brian Vermeer 🧑🏼🎓🧑🏼💻
Java Dev | DevRel | VirtualJug Co-lead | UtrechtJUG Co-lead | MyDevSecOps Co-lead | Dutch Air Reserve | Taekwondo Master | Flag Football CB/WR
Discussion
I'm wondering if there's a Java compiler plugin that wraps checked exceptions in lambdas in unchecked exceptions. If not, how hard would it be to design one?
Nice, looks a lot like Haskell like that.
|
https://dev.to/brianverm/exception-handling-in-java-streams-2mjh
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Source code | Live preview
Why do I need it at all
There are many ways to include a map in your website or application: Google Maps, Mapbox, Leaflet, etc. It's simple; some services allow you to do it in just a few clicks.
But it gets bad when you need to customise the design, display some dataset, or do whatever else you want. Moreover, in Vue or React you can't use JSX or templates with these libraries and have to use their imperative, abstract JavaScript API (and I use Vue because I'm very excited by templates and reactivity).
Also some libraries are not free for private projects.
So, the next time I had to display some data on a map, I decided: I want full control in my code, and I will create my own map with blackjack and hookers.
Step 1: Create a static map.
Let's start with a simple vue-cli 3 app with Babel and Sass.
We need D3 and d3-tile (which isn't included in the d3 npm package) for rendering map tiles.
yarn add d3 d3-tile
Actually, we don't need all of D3's code. For a simple map we only need d3-geo for the map projection and d3-tile for generating tiles, so we will include only these packages.
Next, we should define some settings like scale, width, height, and the initial coordinates. Usually I make all my charts responsive to their container by calculating the element's size on mount.
<script>
const d3 = {
  ...require('d3-geo'),
  ...require('d3-tile'),
};

export default {
  props: {
    center: {
      type: Array,
      default: () => [33.561041, -7.584838],
    },
    scale: {
      type: [Number, String],
      default: 1 << 20,
    },
  },
  data () {
    return {
      width: 0,
      height: 0,
    };
  },
  mounted () {
    const rect = this.$el.getBoundingClientRect();
    this.width = rect.width;
    this.height = rect.height;
  },
  render () {
    if (this.width <= 0 || this.height <= 0) {
      // the dummy for calculating element size
      return <div class="map" />;
    }
    return (
      <div class="map">our map will be here</div>
    );
  },
};
</script>

<style lang="scss" scoped>
.map {
  width: 100%;
  height: 100%;
}
</style>
Now define the projection and tiles generator.
export default {
  // ...
  computed: {
    projection () {
      return d3.geoMercator()
        .scale(+this.scale / (2 * Math.PI))
        .translate([this.width / 2, this.height / 2])
        .center(this.center);
    },
    tiles () {
      return d3.tile()
        .size([this.width, this.height])
        .scale(+this.scale)
        .translate(this.projection([0, 0]))();
    },
  },
  // ...
};
I always define d3 helper functions as computed properties, so when some params change, Vue recalculates them and updates our component.
Now we have everything needed for displaying the map, and we just render the generated tiles:
export default {
  render () {
    if (this.width <= 0 || this.height <= 0) {
      return <div class="map" />;
    }
    return (
      <div class="map">
        <svg viewBox={`0 0 ${this.width} ${this.height}`}>
          <g>
            {this.tiles.map(t => (
              // prepend your tile server's base URL to xlinkHref
              // (the URL was stripped from this snippet)
              <image
                key={`${t.x}_${t.y}_${t.z}`}
                class="map__tile"
                xlinkHref={`${t.z}/${t.x}/${t.y}.png`}
                x={(t.x + this.tiles.translate[0]) * this.tiles.scale}
                y={(t.y + this.tiles.translate[1]) * this.tiles.scale}
                width={this.tiles.scale}
                height={this.tiles.scale}
              />
            ))}
          </g>
        </svg>
      </div>
    );
  },
};
Here we go through the tiles generated by d3-tile and request the images from a tile server.
You can find other servers here or you can even host your own tile server with custom styles.
Don't forget to add a copyright.
<div class="map__copyright"> © <a href="" target="_blank" >OpenStreetMap </a> contributors </div>
.map {
  // ...
  position: relative;
  font-family: Arial, sans, sans-serif;

  &__copyright {
    position: absolute;
    bottom: 8px;
    right: 8px;
    padding: 2px 4px;
    background-color: rgba(#ffffff, .6);
    font-size: 14px;
  }
}
Now we have the static map of Casablanca. Not very exciting yet.
Step 2: Add map controls.
The most exciting thing for me is how much Vue simplifies creating an interactive map. We just update the projection params and the map updates. It felt like easy-peasy magic the first time!
We'll make zoom buttons and position control by dragging the map.
Let's start with dragging. We need to define the projection's translate props in the component data and some mouse event listeners on the svg element (or you can listen for them on the tiles group).
<script>
// ...
export default {
  // ...
  data () {
    return {
      // ...
      translateX: 0,
      translateY: 0,
      touchStarted: false,
      touchLastX: 0,
      touchLastY: 0,
    };
  },
  computed: {
    projection () {
      return d3.geoMercator()
        .scale(+this.scale / (2 * Math.PI))
        .translate([this.translateX, this.translateY])
        .center(this.center);
    },
    // ...
  },
  mounted () {
    // ...
    this.translateX = this.width / 2;
    this.translateY = this.height / 2;
  },
  methods: {
    onTouchStart (e) {
      this.touchStarted = true;
      this.touchLastX = e.clientX;
      this.touchLastY = e.clientY;
    },
    onTouchEnd () {
      this.touchStarted = false;
    },
    onTouchMove (e) {
      if (this.touchStarted) {
        this.translateX = this.translateX + e.clientX - this.touchLastX;
        this.translateY = this.translateY + e.clientY - this.touchLastY;
        this.touchLastX = e.clientX;
        this.touchLastY = e.clientY;
      }
    },
  },
  render () {
    // ...
    return (
      <div class="map">
        <svg
          viewBox={`0 0 ${this.width} ${this.height}`}
          onMousedown={this.onTouchStart}
          onMousemove={this.onTouchMove}
          onMouseup={this.onTouchEnd}
          onMouseleave={this.onTouchEnd}
        >
          // ...
        </svg>
        // ...
      </div>
    );
  },
};
</script>

<style lang="scss" scoped>
.map {
  // ...
  &__tile {
    // reset pointer events on images to prevent image dragging in Firefox
    pointer-events: none;
  }
  // ...
}
</style>
Wow! We just update the translate values and new tiles are loaded, so we can explore the world. But it isn't very comfortable to do without a zoom control, so let's implement one.
We need to move the scale prop into the component's data, add a zoom property, and render the zoom buttons.
In my experience, the minimum and maximum tile zoom levels are 10 and 27 (honestly, I'm not sure this is correct for all tile providers).
<script>
// ...
const MIN_ZOOM = 10;
const MAX_ZOOM = 27;

export default {
  props: {
    center: {
      type: Array,
      default: () => [-7.584838, 33.561041],
    },
    initialZoom: {
      type: [Number, String],
      default: 20,
    },
  },
  data () {
    return {
      // ...
      zoom: +this.initialZoom,
      scale: 1 << +this.initialZoom,
    };
  },
  // ...
  watch: {
    zoom (zoom, prevZoom) {
      const k = zoom - prevZoom > 0 ? 2 : .5;

      this.scale = 1 << zoom;
      this.translateY = this.height / 2 - k * (this.height / 2 - this.translateY);
      this.translateX = this.width / 2 - k * (this.width / 2 - this.translateX);
    },
  },
  // ...
  methods: {
    // ...
    zoomIn () {
      this.zoom = Math.min(this.zoom + 1, MAX_ZOOM);
    },
    zoomOut () {
      this.zoom = Math.max(this.zoom - 1, MIN_ZOOM);
    },
  },
  render () {
    // ...
    return (
      <div class="map">
        <div class="map__controls">
          <button
            class="map__button"
            disabled={this.zoom >= MAX_ZOOM}
            onClick={this.zoomIn}
          >+</button>
          <button
            class="map__button"
            disabled={this.zoom <= MIN_ZOOM}
            onClick={this.zoomOut}
          >-</button>
        </div>
        //...
      </div>
    );
  },
};
</script>

<style lang="scss" scoped>
.map {
  // ...
  &__controls {
    position: absolute;
    left: 16px;
    top: 16px;
    display: flex;
    flex-direction: column;
    justify-content: space-between;
    height: 56px;
  }

  &__button {
    border: 0;
    padding: 0;
    width: 24px;
    height: 24px;
    line-height: 24px;
    border-radius: 50%;
    font-size: 18px;
    background-color: #ffffff;
    color: #343434;
    box-shadow: 0 1px 4px rgba(0, 0, 0, .4);

    &:hover,
    &:focus {
      background-color: #eeeeee;
    }

    &:disabled {
      background-color: rgba(#eeeeee, .4);
    }
  }
  // ...
}
</style>
Here it is. In just two steps, we created a simple interactive map with Vue, D3, and OpenStreetMap.
Conclusion
It isn't hard to create your own map view component with the power of D3 and Vue's reactivity. I think one of the most important things is full control of the DOM, instead of using some abstract map renderer's API which does obscure things with my lovely elements.
Of course, to make a good, powerful map we would need to implement more features like smooth zoom, max bounds, etc. But all this stuff is fully customisable, so you can do everything you want or need to do.
If you find this article useful, I can write more about how to improve this map and display data on it.
Please feel free to ask your questions.
Posted on by:
Mikhail Panichev
Front-end developer with a passion for data visualization and good UI
Discussion
Hi
Is there any possibility to create a map/tiles from my own image? I have a few antique maps, so my idea is to create maps based on scans of those maps, with 2 or maybe 3 levels of zoom. Is it possible at all? If yes, where can I find a solution - a script, a library, or something like that?
Hi. Sounds interesting. I don't have such experience, but have a look at this. Seems like there are some solutions based on Mapnik
Thx, it looks promising
Nice tutorial. Thank you.
It would be interesting to hear why one should use d3 for this and not, e.g., OpenLayers.
OpenLayers is a complex tool, like other libraries for displaying maps. They provide many features for you, but they are not flexible and don't fit JSX or Vue templates. Instead of writing templates, you have to draw your map in the mounted hook, redraw it on some updates, and you have no control over the DOM at all.
I think it's the same as comparing Angular 1/2 with React/Vue. D3 just provides you with very helpful functions, and then you can do whatever you want: render data with JSX, render data using plain JS, or even render it as an SVG string in Node.js.
Also, my next step was to display some data on the map as circles with text inside them. And the funny thing is that I can draw a circle using (e.g.) Leaflet, but I can't render text - Leaflet just doesn't provide an API to do it.
Thanks for the elaboration. I am asking because I am very new to this whole geoinformation topic, and I stumbled into it with my current project, where I was confronted with the task of making a map-centric frontend. There we are already using d3 as a graphing/charting library - so using some geo plugins would seem a good fit - but for the map part, our engineers decided to go with OpenLayers, and up until now it seemed to be the go-to tool of choice.
Our client is built with Vue.js - which is where I come into play ;) - and yes, you are right that I am mostly building a facade to the OpenLayers API and leaving the rest to it.
Besides dealing with OpenLayers for the first time, it is also the first time I am dealing with d3.
Although I see its potential - you could create everything you want to - it feels a bit like a bag of nuts and bolts to me: if you want that shiny X, you have to build it yourself; here is your bag, now have fun.
So I am considering whether it is worth going down the road of ditching OpenLayers and doing everything in d3 (not for the current project, but as a reminder for upcoming projects). The upside is that you could leave every quirk of OpenLayers behind. The downside is that you - at least for the first project - have a big ramp-up to get working facade code to emulate the functionality of OpenLayers for simply adding a new layer with e.g. WFS data on the fly.
But thanks for your insight :]
|
https://dev.to/denisinvader/creating-an-interactive-map-with-d3-and-vue-4158
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
I've got the basics of my code working, but I can't for the life of me figure out how to turn the lights into an array so the user can drag in whatever lights they want in the Inspector.
I gather I need to turn the whole thing into a GameObject or something along those lines, but I'm really not a coder and am working mostly off copypasta. The problem with turning them into a GameObject is that Unity doesn't seem to like that for lights, so I'm at an impasse 🙂
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class headlight : MonoBehaviour
{
    public Light headlights;

    // Start is called before the first frame update
    void Start()
    {
        headlights = GetComponent<Light>();
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.L))
        {
            headlights.enabled = !headlights.enabled;
        }
    }
}
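For reference, a minimal sketch of the array approach being asked about (the class and field names here are illustrative): Unity serializes a public Light[] field directly, so lights can be dragged straight onto the list in the Inspector, and no GameObject conversion is needed.

using UnityEngine;

public class Headlights : MonoBehaviour
{
    // Drag any number of Light components into this array in the Inspector.
    public Light[] headlights;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.L))
        {
            foreach (Light light in headlights)
            {
                light.enabled = !light.enabled;
            }
        }
    }
}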
|
https://proxieslive.com/making-a-user-configurable-light-array-to-change-functions/
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
In this post I will talk about how we can use the Hough Transform to detect and correct the skew of a document image. Many research papers have been published around this problem, and they keep getting published even today in various journals, mainly because it is still largely an unsolved problem. I had previously written about skew correction using horizontal projections here. The problem with deskewing using projections is that it fails in many cases, such as when the text has too many spaces or when the document has little text. This made me look for a better solution, and I came across the Hough Transform based skew detection technique, which has better accuracy.
The Hough transform is a feature extraction technique that maps points in the image from Cartesian to polar coordinates, which is how it got the "transform" in its name. It can be used to detect lines, i.e. sets of collinear points, in the image. If you are new to the Hough Transform, I would recommend the video below, which is one of the best on the internet on this topic.
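Roughly speaking (this is my summary of the standard formulation, not something specific to this post), every candidate line is written in polar form as

ρ = x·cos(θ) + y·sin(θ)

where θ is the orientation of the line's normal and ρ is the line's distance from the origin. Each edge pixel (x, y) votes for every (θ, ρ) pair it could lie on, and collinear pixels accumulate votes in the same (θ, ρ) bin; these are exactly the theta and dist values that hough_line and hough_line_peaks return in the code below.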
The basic idea is:
- Convert the image to grayscale
- Apply a Canny or Sobel filter
- Find Hough lines for angles between 0.1 and 180 degrees.
- Round the angles from line peaks to 2 decimal places.
- Find the angle with the highest occurrence.
- Rotate the image by that angle
Here is a sample image which is skewed.
After finding the Hough Lines
As you can see, we have detected a decent number of lines connecting our words. All we have to do now is find the orientation of those lines.
The code to generate the Hough lines is as below.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks
from skimage.transform import rotate
from skimage.feature import canny
from skimage.io import imread
from skimage.color import rgb2gray
import matplotlib.pyplot as plt
from scipy.stats import mode

image = rgb2gray(imread("samples/doc.png"))
edges = canny(image)

# Classic straight-line Hough transform
tested_angles = np.deg2rad(np.arange(0.1, 180.0))
h, theta, d = hough_line(edges, theta=tested_angles)

# Generating figure 1
fig, axes = plt.subplots(1, 2, figsize=(15, 16))
ax = axes.ravel()

ax[0].imshow(image, cmap="gray")
ax[0].set_title('Input image')
ax[0].set_axis_off()

ax[1].imshow(edges, cmap="gray")
origin = np.array((0, image.shape[1]))
for _, angle, dist in zip(*hough_line_peaks(h, theta, d)):
    y0, y1 = (dist - origin * np.cos(angle)) / np.sin(angle)
    ax[1].plot(origin, (y0, y1), '-r')
ax[1].set_xlim(origin)
ax[1].set_ylim((edges.shape[0], 0))
ax[1].set_axis_off()
ax[1].set_title('Detected lines')
The Hough line method also gives us the angle each line makes with the origin, as shown below.
As you might have guessed by now, we only need to isolate these potentially horizontal lines to get our skew angle. To do that, I look for the most commonly occurring angle and rotate my image by it. The method below gives the skew angle.
def skew_angle_hough_transform(image):
    # convert to edges
    edges = canny(image)
    # Classic straight-line Hough transform between 0.1 - 180 degrees.
    tested_angles = np.deg2rad(np.arange(0.1, 180.0))
    h, theta, d = hough_line(edges, theta=tested_angles)
    # find line peaks and angles
    accum, angles, dists = hough_line_peaks(h, theta, d)
    # round the angles to 2 decimal places and find the most common angle.
    most_common_angle = mode(np.around(angles, decimals=2))[0]
    # convert the angle to degree for rotation.
    skew_angle = np.rad2deg(most_common_angle - np.pi/2)
    return skew_angle
Here is the final output of skew correction using the Hough Transform. The angle of rotation is identified as 2.24620502 degrees.
Another important thing to note here is that if the most common angle is greater than 90 degrees (which was the case in my sample), the image is tilted clockwise, and if it is less than 90 degrees, it is tilted anti-clockwise. In both cases we need to subtract 90 degrees from the identified angle to get the rotation angle we need, which is exactly what the most_common_angle - np.pi/2 step in the method above does.
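For completeness, here is a minimal sketch of applying the correction. This is my own illustration rather than code from the original notebook; it assumes the imports from the first listing (imread, rgb2gray and rotate are all imported there), the skew_angle_hough_transform helper defined above, and reuses samples/doc.png as the placeholder path from earlier.

image = rgb2gray(imread("samples/doc.png"))
angle = skew_angle_hough_transform(image)  # in degrees; the sign encodes the tilt direction
# rotate() takes the angle in degrees and rotates counter-clockwise for positive
# values; cval=1 fills the corners exposed by the rotation with white, since the
# grayscale image is in the [0, 1] range.
deskewed = rotate(image, float(angle), cval=1)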
Here is the full notebook for your reference:
|
https://muthu.co/skew-detection-and-correction-of-document-images-using-hough-transform/
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Uhhhh... So I can't reproduce the issue anymore. It seems 'clean product' wasn't enough to clean my build folder, I had to use a key combination to expose another menu option for 'clean build...
Yeah, this is just extracted out of the section of code that I've been poking and pulling strings on to try and get more info out of. I realize that it does nothing useful in its current state, but...
Yeah I'm completely baffled. Test code, with class names changed:
void ClassX::method() {
MyClass inst;
cout << inst.nodes.size() << endl;
MyClass::test();
}
After poking around a lot more, it seems things are broken right off the bat.
At risk of sounding really dumb:
Do I explicitly need to define constructors that chain down the inheritance...
@cyberfish -
a) Xcode, so I guess LLVM. I haven't been able to repro yet with a smaller program.
b) It's a non-pointer member variable of an instance which is declared as a non-pointer static...
Blah. Yeah it looks like something is definitely broken. empty() returns false, size returns 0, begin() returns a reference to address 0x01, end() returns address 0, and resize(1) gives another...
Given a std::vector<MyClass> named myVec:
// Loop version 1
for (int i = 0; i < myVec.size(); ++i) {
// Do something with myVec[i]
}
// Loop version 2
for...
#include <type_traits>
#include <iostream>
#include <vector>
struct IWriter {};
template<typename T> struct foo {};
template<typename T> class foo2 {};
struct BaseBar {};
struct DerivedBar:...
I was imprecise :) Plat A = iOS, which is fine; Plat B = Android, where app assets are exposed via a read interface on AAssetManager. I could write my own streambuf or whatnot, but this seemed like...
Thanks Elysia,
>>Use the stream operators to ensure it can be read and written properly.
Is there something preventing read/write from operating correctly? I wrote it in this way because I need to...
Many thanks!
So it looks to me like the vector variant isn't actually a template specialization but a... templated overload of the template function?
I'm trying to write some naive binary serialization code and wanted to cut down on repetition of logic for serializing/deserializing nested vectors or other STL containers to reduce the chance of...
There's a nice article about it here, posted Feb. this year:
Nullable<T> vs null
It looks like the C# compiler does special-case this type.
Abachler, IIRC raw sockets on unix allow you access to the ethernet frame level, e.g. before even the MAC address is put on the packet. Wouldn't that indicate level-2 access? This seems suspect,...
Also look into dyndns.org.
Very useful if you don't like memorizing your IP address every time it changes.
LowlyIntern, have you followed up on Codeplug's suggestion?
Good luck.
>>UINT C_Control::StartCryptoPortThread( LPVOID pParam )
Is this declared as a static function? If so, you should have nothing to worry about, because static functions act essentially the same way...
*shrug*
>and that PortThread[3] is a valid value?
Not meaning to be nitpicky, but have you put a breakpoint in the constructor to ensure that the thread creation is not failing? Also, you'll want...
PortThread[3]= AfxBeginThread(StartCryptoPortThread, &Port[3],THREAD_PRIORITY_NORMAL,0,CREATE_SUSPENDED);
You should make sure this is being called from a function somewhere, and you are 100% sure...
So the main issue is scalability then (limited by number of samplers)? Does that mean a Gaussian would be equivalent (except floating point textures) if there were sufficient samplers to cover all...
I did some experimentation finally. I didn't really get any definitive information on what Kawase Bloom actually is, but it seems to me that the general idea is doing multiple passes of almost any...
I still don't know what was wrong. But, I created a new project, and rewrote the damn thing while testing it line by line, and it worked. *shrug*
You could also do a 2D game using 3D graphics. That would let you focus more on the graphics and flashy effects, and less on the actual game logic.
Heh thanks Bubba, I understood this much already ;)
zacs:
GL_SRC_ALPHA, GL_ONE should theoretically be (Cs * As) + Cd, right? In which case I'm pretty sure it should work.
My test code is the...
|
https://cboard.cprogramming.com/search.php?s=19d422a1e92569ff556dbb3ce7f8f55c&searchid=5966838
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
AFFILIATED INSTITUTIONS
R-2008 B.E. COMPUTER SCIENCE AND ENGINEERING II - VIII SEMESTERS CURRICULA AND SYLLABI
SEMESTER II
SL.NO. COURSE CODE COURSE TITLE L T P C
THEORY
PRACTICAL
9. a ME2155 Computer Aided Drafting and Modeling Laboratory (For non-circuits branches) 0 1 2 2
+ 10. - English Language Laboratory 0 0 2 -
TOTAL: 28 CREDITS
A. CIRCUIT BRANCHES
I Faculty of Electrical Engineering
1. B.E. Electrical and Electronics Engineering
2. B.E. Electronics and Instrumentation Engineering
3. B.E. Instrumentation and Control Engineering
III Faculty of Technology
1. B.Tech. Chemical Engineering
2. B.Tech. Biotechnology
3. B.Tech. Polymer Technology
4. B.Tech. Textile Technology
5. B.Tech. Textile Technology (Fashion Technology)
6. B.Tech. Petroleum Engineering
7. B.Tech. Plastics Technology
SEMESTER III (Applicable to the students admitted from the Academic year 2008–2009 onwards)
SEMESTER IV (Applicable to the students admitted from the Academic year 2008–2009 onwards)
Code No. Course Title L T P C
THEORY
MA 2262 Probability and Queueing Theory 3 1 0 4
CS 2251 Design and Analysis of Algorithms 3 1 0 4
CS 2252 Microprocessors and Microcontrollers 3 0 0 3
CS 2253 Computer Organization and Architecture 3 0 0 3
CS 2254 Operating Systems 3 0 0 3
CS 2255 Database Management Systems 3 0 0 3
PRACTICAL
CS 2257 Operating Systems Lab 0 0 3 2
CS 2258 Data Base Management Systems Lab 0 0 3 2
CS 2259 Microprocessors Lab 0 0 3 2
Total 18 2 9 26
SEMESTER V (Applicable to the students admitted from the Academic year 2008–2009 onwards)
SEMESTER VI (Applicable to the students admitted from the Academic year 2008–2009 onwards)
SEMESTER VII (Applicable to the students admitted from the Academic year 2008–2009 onwards)
SEMESTER VIII (Applicable to the students admitted from the Academic year 2008–2009 onwards)
LIST OF ELECTIVES
SEMESTER VI – Elective I
SEMESTER VI – Elective II
SEMESTER VIII – Elective V
Code No. Course Title L T P C
GE2071 Intellectual Property Rights 3 0 0 3
CS2051 Graph Theory 3 0 0 3
IT2042 Information Security 3 0 0 3
CS2053 Soft Computing 3 0 0 3
IT2023 Digital Image Processing 3 0 0 3
CS2055 Software Quality Assurance 3 0 0 3
CS2056 Distributed Systems 3 0 0 3
CS2057 Knowledge Based Decision Support Systems 3 0 0 3
GE2025 Professional Ethics in Engineering 3 0 0 3
GE2023 Fundamental of Nano Science 3 0 0 3
HS2161 TECHNICAL ENGLISH II L T P C 3 1 0 4
AIM:
To encourage students to actively involve themselves in participative learning of English and to help them.
UNIT I 12
Technical Vocabulary - meanings in context, sequencing words, Articles - Prepositions, intensive reading & predicting content, Reading and interpretation, extended definitions, Process description
Suggested activities:
1. Exercises on word formation using the prefix 'self' - Gap filling with prepositions.
2. Exercises - Using sequence words.
3. Reading comprehension exercises with questions based on inference - Reading headings
4. and predicting the content - Reading advertisements and interpretation.
5. Writing extended definitions - Writing descriptions of processes - Writing paragraphs based on discussions - Writing paragraphs describing the future.
UNIT II 12
Phrases / Structures indicating use / purpose - Adverbs - Skimming - Non-verbal communication - Listening - correlating verbal and non-verbal communication - Speaking
Cause and effect expressions - Different grammatical forms of the same word - Speaking - stress and intonation, Group Discussions - Reading - Critical reading - Listening - Writing - using connectives, report writing - types, structure, data collection, content, form, recommendations.
Suggested activities: Numerical adjectives - Oral instructions - Descriptive writing - Argumentative paragraphs - Speaking. 'English for Engineers and Technologists', Combined Edition (Volumes 1 & 2), Chennai: Orient Longman Pvt. Ltd., 2006. Themes 5 - 8 (Technology, Communication, Environment, Industry)
REFERENCES
Note: The book listed under Extensive Reading is meant for inculcating the reading habit of the students. It need not be used for testing purposes.
MA2161 MATHEMATICS – II L T P C 3 1 0 4
UNIT V LAPLACE TRANSFORM 12
Laplace transform – Conditions for existence – Transform of elementary functions – Basic properties – Transform of derivatives and integrals – Transform of unit step function and impulse functions – Transform of periodic functions. Definition of Inverse Laplace transform as contour integral – Convolution theorem (excluding proof) – Initial and Final value theorems – Solution of linear ODE of second order with constant coefficients using Laplace transformation techniques.
TOTAL: 60 PERIODS
TEXT BOOKS:
1. Bali N.P. and Manish Goyal, "Text book of Engineering Mathematics", 3rd Edition, Laxmi Publications (P) Ltd., (2008).
2. Grewal B.S., "Higher Engineering Mathematics", 40th Edition, Khanna Publications, Delhi, (2007).
REFERENCES:
1. Ramana B.V., "Higher Engineering Mathematics", Tata McGraw Hill Publishing Company, New Delhi, (2007).
2. Glyn James, "Advanced Engineering Mathematics", 3rd Edition, Pearson Education, (2007).
3. Erwin Kreyszig, "Advanced Engineering Mathematics", 7th Edition, Wiley India, (2007).
4. Jain R.K. and Iyengar S.R.K., "Advanced Engineering Mathematics", 3rd Edition, Narosa Publishing House Pvt. Ltd., (2007).
UNIT III MAGNETIC AND SUPERCONDUCTING MATERIALS 9
Origin of magnetic moment – Bohr magneton – Dia and para magnetism – Ferro magnetism – Domain theory – Hysteresis – soft and hard magnetic materials – anti-ferromagnetic materials – Ferrites – applications – magnetic recording and readout – storage of magnetic data – tapes, floppy and magnetic disc drives. Superconductivity: properties - Types of superconductors – BCS theory of superconductivity (Qualitative) - High Tc superconductors – Applications of superconductors – SQUID, cryotron, magnetic levitation.
UNIT IV DIELECTRIC MATERIALS 9
Electrical susceptibility – dielectric constant – electronic, ionic, orientational and space charge polarization – frequency and temperature dependence of polarisation – internal field – Clausius–Mossotti relation (derivation) – dielectric loss – dielectric breakdown – uses of dielectric materials (capacitor and transformer) – ferroelectricity and applications.
REFERENCES:
1. Rajendran V. and Marikani A., "Materials Science", Tata McGraw Hill Publications, New Delhi, (2004).
2. Jayakumar S., "Materials Science", R.K. Publishers, Coimbatore, (2008).
3. Palanisamy P.K., "Materials Science", Scitech Publications (India) Pvt. Ltd., Chennai, Second Edition (2007).
4. M. Arumugam, "Materials Science", Anuradha Publications, Kumbakonam, (2006).
OBJECTIVES
The student should be conversant with the principles of electrochemistry, electrochemical cells, EMF and applications of EMF measurements; principles of corrosion control; chemistry of fuels and combustion; industrial importance of phase rule and alloys; analytical techniques and their importance.
UNIT I ELECTROCHEMISTRY 9
Electrochemical cells – reversible and irreversible cells – EMF – measurement of EMF – Single electrode potential – Nernst equation (problem) – reference electrodes – Standard Hydrogen electrode – Calomel electrode – Ion selective electrode – glass electrode and measurement of pH – electrochemical series – significance – potentiometric titrations (redox – Fe²⁺ vs dichromate, and precipitation – Ag⁺ vs Cl⁻ titrations) and conductometric titrations (acid-base – HCl vs NaOH).
REFERENCES:
1. B. Sivasankar, "Engineering Chemistry", Tata McGraw-Hill Pub. Co. Ltd., New Delhi (2008).
2. B.K. Sharma, "Engineering Chemistry", Krishna Prakasan Media (P) Ltd., Meerut (2001).
UNIT IV DYNAMICS OF PARTICLES 12
Displacements, Velocity and acceleration, their relationship – Relative motion – Curvilinear motion – Newton's law – Work Energy Equation of particles – Impulse and Momentum – Impact of elastic bodies.
UNIT V FRICTION AND ELEMENTS OF RIGID BODY DYNAMICS 12
Frictional force – Laws of Coulomb friction – simple contact friction – Rolling resistance – Belt friction. Translation and Rotation of Rigid Bodies – Velocity and acceleration – General Plane motion.
TOTAL: 60 PERIODS
UNIT IV TRANSIENT RESPONSE FOR DC CIRCUITS 12
Transient response of RL, RC and RLC Circuits using Laplace transform for DC input and A.C. with sinusoidal input.
UNIT IV TRANSISTORS 12
Principle of operation of PNP and NPN transistors – study of CE, CB and CC configurations and comparison of their characteristics – Breakdown in transistors – operation and comparison of N-Channel and P-Channel JFET – drain current equation – MOSFET – Enhancement and depletion types – structure and operation – comparison of BJT with MOSFET – thermal effect on MOSFET.
REFERENCES:
1. Robert T. Paynter, "Introducing Electronic Devices and Circuits", Pearson Education, 7th Edition, (2006).
2. William H. Hayt, Jack E. Kemmerly and Steven M. Durbin, "Engineering Circuit Analysis", Tata McGraw Hill, 6th Edition, 2002.
3. J. Millman & Halkias, Satyabrata Jit, "Electronic Devices & Circuits", Tata McGraw Hill, 2nd Edition, 2008.
UNIT III SEMICONDUCTOR DEVICES AND APPLICATIONS 12
Characteristics of PN Junction Diode – Zener Effect – Zener Diode and its Characteristics – Half wave and Full wave Rectifiers – Voltage Regulation. Bipolar Junction Transistor – CB, CE, CC Configurations and Characteristics – Elementary Treatment of Small Signal Amplifier.
UNIT IV DIGITAL ELECTRONICS 12
Binary Number System – Logic Gates – Boolean Algebra – Half and Full Adders – Flip-Flops – Registers and Counters – A/D and D/A Conversion (simple concepts).
A – CIVIL ENGINEERING
UNIT II BUILDING COMPONENTS AND STRUCTURES 15
Foundations: Types, Bearing capacity – Requirement of good foundations.
B – MECHANICAL ENGINEERING
UNIT IV I.C. ENGINES 10
Internal combustion engines as automobile power plant – Working principle of Petrol and Diesel Engines – Four stroke and two stroke cycles – Comparison of four stroke and two stroke engines – Boiler as a power plant.
LIST OF EXPERIMENTS
1. UNIX COMMANDS 15 Study of Unix OS - Basic Shell Commands - Unix Editor
2. SHELL PROGRAMMING 15 Simple Shell program - Conditional Statements - Testing and Loops
3. C PROGRAMMING ON UNIX 15 Dynamic Storage Allocation - Pointers - Functions - File Handling
TOTAL: 45 PERIODS
6. Determination of water of crystallization of a crystalline salt (Copper sulphate)
7. Estimation of Ferric iron by spectrophotometry.
A minimum of FIVE experiments shall be offered. Laboratory classes on alternate weeks for Physics and Chemistry. The lab examinations will be held only in the second semester.
Note: Plotting of drawings must be made for each exercise and attached to the records written by students.
EE2155 ELECTRICAL CIRCUIT LABORATORY L T P C 0 0 3 2 (Common to EEE, EIE and ICE)
LIST OF EXPERIMENTS
ENGLISH LANGUAGE LABORATORY (Optional) L T P C 0 0 2 -
1. Listening: 5
Listening & answering questions – gap filling – Listening and Note taking – Listening to telephone conversations
2. Speaking: 5
Evaluation:
(1) Lab Session – 40 marks: Listening – 10 marks, Speaking – 10 marks, Reading – 10 marks, Writing – 10 marks.
LAB REQUIREMENTS:
1. Teacher console and systems for students
2. English Language Lab Software
3. Tape Recorders
OBJECTIVES
The course objective is to develop the skills of the students in the areas of Transforms and Partial Differential Equations. This will be necessary for their effective studies in a large number of engineering subjects like heat conduction, communication systems, electro-optics and electromagnetic theory. The course will also serve as a prerequisite for postgraduate and specialized studies and research.
UNIT I FOURIER SERIES 9+3
Dirichlet's conditions – General Fourier series – Odd and even functions – Half range sine series – Half range cosine series – Complex form of Fourier Series – Parseval's identity – Harmonic Analysis.
UNIT II FOURIER TRANSFORMS 9+3
Fourier integral theorem (without proof) – Fourier transform pair – Sine and Cosine transforms – Properties – Transforms of simple functions – Convolution theorem – Parseval's identity.
TEXT BOOK:
1. Grewal B.S., "Higher Engineering Mathematics", 40th Edition, Khanna Publishers, Delhi, (2007)
REFERENCES:
1. Bali N.P. and Manish Goyal, "A Textbook of Engineering Mathematics", Seventh Edition, Laxmi Publications (P) Ltd., (2007)
2. Ramana B.V., "Higher Engineering Mathematics", Tata McGraw-Hill Publishing Company Limited, New Delhi, (2007)
3. Glyn James, "Advanced Modern Engineering Mathematics", Third Edition, Pearson Education, (2007)
4. Erwin Kreyszig, "Advanced Engineering Mathematics", Eighth Edition, Wiley India, (2007)
UNIT V GRAPHS 9
Definitions – Topological sort – breadth-first traversal – shortest-path algorithms – minimum spanning tree – Prim's and Kruskal's algorithms – Depth-first traversal – biconnectivity – Euler circuits – applications of graphs
TOTAL: 45 PERIODS
TEXT BOOK:
1. M.A. Weiss, "Data Structures and Algorithm Analysis in C", Second Edition, Pearson Education, 2005.
REFERENCES:
1. A.V. Aho, J.E. Hopcroft and J.D. Ullman, "Data Structures and Algorithms", Pearson Education, First Edition Reprint 2003.
2. R.F. Gilberg, B.A. Forouzan, "Data Structures", Second Edition, Thomson India Edition, 2005.
REFERENCES
1. Charles H. Roth, Jr., "Fundamentals of Logic Design", 4th Edition, Jaico Publishing House / Cengage Learning, 2005.
2. Donald D. Givone, "Digital Principles and Design", Tata McGraw-Hill, 2007.
UNIT III 9
Function and class templates - Exception handling – try-catch-throw paradigm – exception specification – terminate and unexpected functions – Uncaught exception.
UNIT IV 9
Inheritance – public, private, and protected derivations – multiple inheritance - virtual base class – abstract class – composite objects - Runtime polymorphism – virtual functions – pure virtual functions – RTTI – typeid – dynamic casting – RTTI and templates – cross casting – down casting.
UNIT V 9
Streams and formatted I/O – I/O manipulators - file handling – random access – object serialization – namespaces - std namespace – ANSI String Objects – standard template library.
TOTAL: 45 PERIODS
TEXT BOOK:
1. B. Trivedi, "Programming with ANSI C++", Oxford University Press, 2007.
REFERENCES
REFERENCES:
1. H. Taub, D.L. Schilling, G. Saha, "Principles of Communication", 3/e, 2007.
2. B.P. Lathi, "Modern Analog and Digital Communication Systems", 3/e, Oxford University Press, 2007.
3. Blake, "Electronic Communication Systems", Thomson Delmar Publications, 2002.
4. Martin S. Roden, "Analog and Digital Communication Systems", 3rd Edition, PHI, 2002.
5. B. Sklar, "Digital Communication Fundamentals and Applications", 2/e, Pearson Education, 2007.
UNIT II ENVIRONMENTAL POLLUTION 8
Definition – causes, effects and control measures of: (a) Air pollution (b) Water pollution (c) Soil pollution (d) Marine pollution (e) Noise pollution (f) Thermal pollution (g) Nuclear hazards – solid waste management: causes, effects and control measures of municipal solid wastes – role of an individual in prevention of pollution – pollution case studies – disaster management: floods, earthquake, cyclone and landslides. Field study of local polluted site – Urban / Rural / Industrial / Agricultural.
TOTAL: 45 PERIODS
TEXT BOOKS:
1. Gilbert M. Masters, "Introduction to Environmental Engineering and Science", 2nd Edition, Pearson Education (2004).
2. Benny Joseph, "Environmental Science and Engineering", Tata McGraw-Hill, New Delhi, (2006).
REFERENCE BOOKS
List of equipment and components for a batch of 30 students (2 per batch)
27. IC 7474 – 40
CS 2208 DATA STRUCTURES LAB L T P C 0 0 3 2
AIM:
To develop programming skills in design and implementation of data structures and their applications.
(Common to Information Technology & Computer Science Engineering)
UNIT I RANDOM VARIABLES 9+3
Discrete and continuous random variables - Moments - Moment generating functions and their properties. Binomial, Poisson, Geometric, Negative binomial, Uniform, Exponential, Gamma, and Weibull distributions.
UNIT II 9
Divide and Conquer: General Method – Binary Search – Finding Maximum and Minimum – Merge Sort – Greedy Algorithms: General Method – Container Loading – Knapsack Problem.
UNIT III 9
Dynamic Programming: General Method – Multistage Graphs – All-Pair shortest paths – Optimal binary search trees – 0/1 Knapsack – Travelling salesperson problem.
UNIT IV 9
Backtracking: General Method – 8 Queens problem – sum of subsets – graph coloring – Hamiltonian problem – knapsack problem.
UNIT V 9
Graph Traversals – Connected Components – Spanning Trees – Biconnected components – Branch and Bound: General Methods (FIFO & LC) – 0/1 Knapsack problem – Introduction to NP-Hard and NP-Completeness.
TUTORIAL = 15, TOTAL: 60 PERIODS
UNIT III MULTIPROCESSOR CONFIGURATIONS 9
Coprocessor Configuration – Closely Coupled Configuration – Loosely Coupled Configuration – 8087 Numeric Data Processor – Data Types – Architecture – 8089 I/O Processor – Architecture – Communication between CPU and IOP.
UNIT IV I/O INTERFACING 9
Memory interfacing and I/O interfacing with 8085 – parallel communication interface – serial communication interface – timer-keyboard/display controller – interrupt controller – DMA controller (8237) – applications – stepper motor – temperature control.
UNIT V MICROCONTROLLERS 9
Architecture of 8051 Microcontroller – signals – I/O ports – memory – counters and timers – serial data I/O – interrupts – Interfacing – keyboard, LCD, ADC & DAC
TOTAL: 45 PERIODS
TEXT BOOKS:
1. Ramesh S. Gaonkar, "Microprocessor – Architecture, Programming and Applications with the 8085", Penram International Publisher, 5th Ed., 2006
2. Second edition, Penram International.
UNIT III PIPELINING 9
Basic concepts – Data hazards – Instruction hazards – Influence on instruction sets – Data path and control considerations – Performance considerations – Exception handling.
REFERENCES:
1. David A. Patterson and John L. Hennessy, "Computer Organization and Design: The Hardware/Software Interface", Third Edition, Elsevier, 2005.
2. William Stallings, "Computer Organization and Architecture – Designing for Performance", Sixth Edition, Pearson Education, 2003.
3. John P. Hayes, "Computer Architecture and Organization", Third Edition, Tata McGraw Hill, 1998.
4. V.P. Heuring, H.F. Jordan, "Computer Systems Design and Architecture", Second Edition, Pearson Education, 2004.
UNIT II PROCESS SCHEDULING AND SYNCHRONIZATION 10
CPU Scheduling: Scheduling criteria – Scheduling algorithms – Multiple-processor scheduling – Real time scheduling – Algorithm Evaluation. Case study: Process scheduling in Linux. Process Synchronization: The critical-section problem – Synchronization hardware – Semaphores – Classic problems of synchronization – critical regions – Monitors. Deadlock: System model – Deadlock characterization – Methods for handling deadlocks – Deadlock prevention – Deadlock avoidance – Deadlock detection – Recovery from deadlock.
REFERENCES:
1. Andrew S. Tanenbaum, "Modern Operating Systems", Second Edition, Pearson Education, 2004.
2. Gary Nutt, "Operating Systems", Third Edition, Pearson Education, 2004.
3. Harvey M. Deitel, "Operating Systems", Third Edition, Pearson Education, 2004.
UNIT II RELATIONAL MODEL 9
The relational Model – The catalog – Types – Keys – Relational Algebra – Domain Relational Calculus – Tuple Relational Calculus – Fundamental operations – Additional Operations – SQL fundamentals – Integrity – Triggers – Security – Advanced SQL features – Embedded SQL – Dynamic SQL – Missing Information – Views – Introduction to Distributed Databases and Client/Server Databases
UNIT III DATABASE DESIGN 9
Functional Dependencies – Non-loss Decomposition – Functional Dependencies – First, Second, Third Normal Forms, Dependency Preservation – Boyce/Codd Normal Form – Multi-valued Dependencies and Fourth Normal Form – Join Dependencies and Fifth Normal Form
UNIT IV TRANSACTIONS 9
Transaction Concepts – Transaction Recovery – ACID Properties – System Recovery – Media Recovery – Two Phase Commit – Save Points – SQL Facilities for recovery – Concurrency – Need for Concurrency – Locking Protocols – Two Phase Locking – Intent Locking – Deadlock – Serializability – Recovery Isolation Levels – SQL Facilities for Concurrency.
REFERENCES:
1. Ramez Elmasri, Shamkant B. Navathe, "Fundamentals of Database Systems", Fourth Edition, Pearson / Addison Wesley, 2007.
2. Raghu Ramakrishnan, "Database Management Systems", Third Edition, McGraw Hill, 2003.
3. S.K. Singh, "Database Systems Concepts, Design and Applications", First Edition, Pearson Education, 2006.
CS 2257 OPERATING SYSTEMS LAB L T P C 0 0 3 2 (Common to CSE & IT)
(Implement the following on LINUX or another Unix-like platform. Use C for high level language implementation)
1. Write programs using the following system calls of UNIX operating system: fork, exec, getpid, exit, wait, close, stat, opendir, readdir
2. Write programs using the I/O system calls of UNIX operating system (open, read, write, etc.)
3. Write C programs to simulate UNIX commands like ls, grep, etc.
4. Given the list of processes, their CPU burst times and arrival times, display/print the Gantt chart for FCFS and SJF. For each of the scheduling policies, compute and print the average waiting time and average turnaround time. (2 sessions)
5. Given the list of processes, their CPU burst times and arrival times, display/print the Gantt chart for Priority and Round robin. For each of the scheduling policies, compute and print the average waiting time and average turnaround time. (2 sessions)
6. Developing Application using Inter Process Communication (using shared memory, pipes or message queues)
7. Implement the Producer – Consumer problem using semaphores (using UNIX system calls).
8. Implement some memory management schemes – I
9. Implement some memory management schemes – II
10. Implement any file allocation technique (Linked, Indexed or Contiguous)
CS 2258 DATA BASE MANAGEMENT SYSTEM LAB L T P C 0 0 3 2 (Common to CSE & IT)
AIM:
Experiments.
UNIT IV TESTING 9
Taxonomy of Software Testing – Types of S/W Test – Black Box Testing – Testing Boundary Conditions – Structural Testing – Test Coverage Criteria Based on Data Flow Mechanisms – Regression Testing – Unit Testing – Integration Testing – Validation Testing – System Testing and Debugging – Software Implementation Techniques
UNIT V SOFTWARE PROJECT MANAGEMENT 9
Measures and Measurements – ZIPF's Law – Software Cost Estimation – Function Point Models – COCOMO Model – Delphi Method – Scheduling – Earned Value Analysis – Error Tracking – Software Configuration Management – Program Evolution Dynamics – Software Maintenance – Project Planning – Project Scheduling – Risk Management – CASE Tools
TOTAL: 45 PERIODS
TEXT BOOKS:
1. Ian Sommerville, "Software Engineering", Seventh Edition, Pearson Education Asia, 2007.
2. Roger S. Pressman, "Software Engineering – A Practitioner's Approach", Sixth Edition, McGraw-Hill International Edition, 2005.
UNIT I LOGIC AND PROOFS 9+3
Propositional Logic – Propositional equivalences – Predicates and quantifiers – Nested Quantifiers – Rules of inference – Introduction to Proofs – Proof Methods and strategy
UNIT II 9
Medium access – CSMA – Ethernet – Token ring – FDDI – Wireless LAN – Bridges and Switches
UNIT III 9
Circuit switching vs. packet switching / Packet switched networks – IP – ARP – RARP – DHCP – ICMP – Queueing discipline – Routing algorithms – RIP – OSPF – Subnetting – CIDR – Interdomain routing – BGP – IPv6 – Multicasting – Congestion avoidance in network layer
UNIT IV 9
UDP – TCP – Adaptive Flow Control – Adaptive Retransmission – Congestion control – Congestion avoidance – QoS
UNIT V 9
Email (SMTP, MIME, IMAP, POP3) – HTTP – DNS – SNMP – Telnet – FTP – Security – PGP – SSH
TOTAL: 45 PERIODS
TEXT BOOK:
1. Larry L. Peterson, Bruce S. Davie, "Computer Networks: A Systems Approach", Third Edition, Morgan Kaufmann Publishers Inc., 2003.
REFERENCES:
1. James F. Kurose, Keith W. Ross, "Computer Networking: A Top-Down Approach Featuring the Internet", Third Edition, Addison Wesley, 2004.
2. Nader F. Mir, "Computer and Communication Networks", Pearson Education, 2007
3. Comer, "Computer Networks and Internets with Internet Applications", Fourth Edition, Pearson Education, 2003.
4. Andrew S. Tanenbaum, "Computer Networks", Fourth Edition, 2003.
5. William Stallings, "Data and Computer Communication", Sixth Edition, Pearson Education, 2000
UNIT I AUTOMATA 9
Introduction to formal proof – Additional forms of proof – Inductive proofs – Finite Automata (FA) – Deterministic Finite Automata (DFA) – Non-deterministic Finite Automata (NFA) – Finite Automata with Epsilon transitions.
UNIT III CONTEXT-FREE GRAMMARS AND LANGUAGES 9
Context-Free Grammar (CFG) – Parse Trees – Ambiguity in grammars and languages – Definition of the Pushdown automata – Languages of a Pushdown Automata – Equivalence of Pushdown automata and CFG – Deterministic Pushdown Automata.
UNIT V UNDECIDABILITY 9
A language that is not Recursively Enumerable (RE) – An undecidable problem that is RE – Undecidable problems about Turing Machines – Post's Correspondence Problem – The classes P and NP.
L: 45, T: 15, TOTAL: 60 PERIODS
TEXT: Third Edition, Tata McGraw Hill, 2007
OBJECTIVES
To understand the relationship between system software and machine architecture.
To know the design and implementation of assemblers.
To know the design and implementation of linkers and loaders.
To have an understanding of macroprocessors.
To have an understanding of system software tools.
UNIT I INTRODUCTION 8
System software and machine architecture – The Simplified Instructional Computer (SIC) – Machine architecture – Data and instruction formats – addressing modes – instruction sets – I/O and programming.
UNIT II ASSEMBLERS 10
Basic assembler functions – A simple SIC assembler – Assembler algorithm and data structures – Machine dependent assembler features – Instruction formats and addressing modes – Program relocation – Machine independent assembler features – Literals – Symbol-defining statements – Expressions – One pass assemblers and Multi pass assemblers – Implementation example – MASM assembler.
UNIT III LOADERS AND LINKERS 9
Basic loader functions – Design of an Absolute Loader – A Simple Bootstrap Loader – Machine dependent loader features – Relocation – Program Linking – Algorithm and Data Structures for Linking Loader – Machine-independent loader features – Automatic Library Search – Loader Options – Loader design options – Linkage Editors – Dynamic Linking – Bootstrap Loaders – Implementation example – MS-DOS linker.
REFERENCES
1. D.M. Dhamdhere, "Systems Programming and Operating Systems", Second Revised Edition, Tata McGraw-Hill, 2000.
2. John J. Donovan, "Systems Programming", Tata McGraw-Hill Edition, 2000.
3. John R. Levine, "Linkers & Loaders", Harcourt India Pvt. Ltd., Morgan Kaufmann Publishers, 2000.
CS2305 PROGRAMMING PARADIGMS L T P C 3 0 0 3
AIM:
To understand the concepts of object-oriented, event-driven, and concurrent programming paradigms and develop skills in using these paradigms using Java.
CS2307 NETWORKS LAB L T P C 0 0 3 2
TOTAL: 45 PERIODS
1. Software: C++ Compiler (30), J2SDK (freeware), Linux, NS2/Glomosim/OPNET (freeware)
2. Hardware: 30 Nos. PCs
(Using C)
11. Implement a simple text editor with features like insertion / deletion of a character, word, and sentence.
12. Implement a symbol table with suitable hashing
(For loader exercises, output the snapshot of the main memory as it would be after the loading has taken place)
TOTAL: 45 PERIODS
Software – Turbo C, Multiuser (freely downloadable).
7. Design a scientific calculator using the event-driven programming paradigm of Java.
8.
9. Develop a simple OPAC system for a library using event-driven and concurrent programming paradigms of Java. Use JDBC to connect to a back-end database.
10. Develop a multi-threaded echo server and a corresponding GUI client in Java.
TOTAL: 45 PERIODS
Requirement for a batch of 30 students
1. PC’s 30
UNIT III PLANNING 9
Planning with state-space search – partial-order planning – planning graphs – planning and acting in the real world
UNIT V LEARNING 9
Learning from observation – Inductive learning – Decision trees – Explanation based learning – Statistical Learning methods – Reinforcement Learning
TOTAL: 45 PERIODS
UNIT IV CODE GENERATION 9
Issues in the design of a code generator – The target machine – Run-time storage management – Basic blocks and flow graphs – Next-use information – A simple code generator – Register allocation and assignment – The DAG representation of basic blocks – Generating code from DAGs.
REFERENCES:
1. David Galles, "Modern Compiler Design", Pearson Education Asia, 2007
2. Steven S. Muchnick, "Advanced Compiler Design & Implementation", Morgan Kaufmann Publishers, 2000.
3. C.N. Fisher and R.J. LeBlanc, "Crafting a Compiler with C", Pearson Education, 2000.
OBJECTIVES:
To learn basic OO analysis and design skills through an elaborate case study
To use the UML design diagrams
To apply the appropriate design patterns
UNIT I 9
Introduction to OOAD – What is OOAD? – What is UML? – What are the Unified Process (UP) phases – Case study – the NextGen POS system, Inception – Use case Modeling – Relating Use cases – include, extend and generalization.
UNIT II 9
Elaboration – Domain Models – Finding conceptual classes and description classes – Associations – Attributes – Domain model refinement – Finding conceptual class hierarchies – Aggregation and Composition – UML activity diagrams and modeling
UNIT III 9
System sequence diagrams – Relationship between sequence diagrams and use cases – Logical architecture and UML package diagram – Logical architecture refinement – UML class diagrams – UML interaction diagrams
UNIT IV 9
GRASP: Designing objects with responsibilities – Creator – Information expert – Low Coupling – Controller – High Cohesion – Designing for visibility – Applying GoF design patterns – adapter, singleton, factory and observer patterns.
UNIT V 9
UML state diagrams and modeling – Operation contracts – Mapping design to code – UML deployment and component diagrams
TOTAL: 45 PERIODS
TEXT BOOK:
1. Craig Larman, "Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development", Third Edition, Pearson Education, 2005
REFERENCES
4. Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides, "Design Patterns: Elements of Reusable Object-Oriented Software", Addison-Wesley, 1995.
UNIT IV MEMORY AND I/O 9
Cache performance – Reducing cache miss penalty and miss rate – Reducing hit time – Main memory and performance – Memory technology. Types of storage devices – Buses – RAID – Reliability, availability and dependability – I/O performance measures – Designing an I/O system.
TOTAL : 45 PERIODS
TEXT BOOK:
1. John L. Hennessy and David A. Patterson, "Computer Architecture – A Quantitative Approach", Morgan Kaufmann / Elsevier Publishers, 4th Edition, 2007.
REFERENCES:
1. David E. Culler, Jaswinder Pal Singh, "Parallel Computing Architecture: A Hardware/Software Approach", Morgan Kaufmann / Elsevier Publishers, 1999.
2. Kai Hwang and Zhi-Wei Xu, "Scalable Parallel Computing", Tata McGraw Hill, New Delhi, 2003.
Suggested Tools:
1. ArgoUML, Eclipse IDE, Visual Paradigm, Visual Case, and Rational Suite
Globalisation has brought in numerous opportunities for the teeming millions, with more focus on the students' overall capability apart from academic competence. Many students, particularly those from non-English medium schools, find that they are not preferred due to their inadequacy of communication skills and soft skills, despite possessing sound knowledge in their subject area along with technical capability. Keeping in view their pre-employment needs and career requirements, this course on Communication Skills Laboratory will prepare students to adapt themselves with ease to the industry environment, thus rendering them as prospective assets to industries. The course will equip the students with the necessary communication skills that would go a long way.
A. English Language Lab (18 Periods)
1. Listening Comprehension: (6)
Listening and typing – Listening and sequencing of sentences – Filling in the blanks – Listening and answering questions.
2. Reading Comprehension: (6)
Filling in the blanks – Cloze exercises – Vocabulary building – Reading and answering questions.
3. Speaking: (6)
Phonetics: Intonation – Ear training – Correct Pronunciation – Sound recognition exercises – Common Errors in English.
Conversations: Face to Face Conversation – Telephone conversation – Role play activities (Students take on roles and engage in conversation)
REFERENCES:
1. Anderson, P.V., "Technical Communication", Thomson Wadsworth, Sixth Edition, New Delhi, 2007.
2. Prakash, P., "Verbal and Non-Verbal Reasoning", Macmillan India Ltd., Second Edition, New Delhi, 2004.
3. John Seely, "The Oxford Guide to Writing and Speaking", Oxford University Press, New Delhi, 2004.
LAB REQUIREMENTS:
1. Teacher console and systems for students.
2. English Language Lab Software
3. Career Lab Software
1. A batch of 60 / 120 students is divided into two groups – one group for the PC-based session and the other group for the classroom session.
Each candidate will have separate sets of questions assigned by the teacher using the teacher console, enabling PC-based evaluation for the 40% of marks allotted.
CS2358 INTERNET PROGRAMMING LAB L T P C 1 0 3 2
LIST OF EXPERIMENTS
1.
TOTAL: 15 + 45 = 60 PERIODS
TEXT BOOK:
1. Robert W. Sebesta, "Programming the World Wide Web", Pearson Education, 2006.
REFERENCE:
1. Deitel, "Internet and World Wide Web: How to Program", PHI, 3rd Edition, 2005.
UNIT I INTRODUCTION 5
Managerial Economics – Relationship with other disciplines – Firms: Types, objectives and goals – Managerial decisions – Decision analysis.
UNIT III PRODUCTION AND COST ANALYSIS 10
Production function – Returns to scale – Production optimization – Least cost input – Isoquants – Managerial uses of production function. Cost Concepts – Cost function – Determinants of cost – Short run and Long run cost curves – Cost Output Decision – Estimation of Cost.
UNIT IV PRICING 5
Determinants of Price – Pricing under different objectives and different market structures – Price discrimination – Pricing methods in practice.
REFERENCES.
UNIT I 2D PRIMITIVES 9
Output primitives – Line, Circle and Ellipse drawing algorithms – Attributes of output primitives – Two dimensional Geometric transformation – Two dimensional viewing – Line, Polygon, Curve and Text clipping algorithms
UNIT II 3D CONCEPTS 9
Parallel and Perspective projections – Three dimensional object representation – Polygons, Curved lines, Splines, Quadric Surfaces – Visualization of data sets – 3D transformations – Viewing – Visible surface identification.
UNIT III GRAPHICS PROGRAMMING 9
Color Models – RGB, YIQ, CMY, HSV – Animations – General Computer Animation, Raster, Keyframe – Graphics programming using OPENGL – Basic graphics primitives – Drawing three dimensional objects – Drawing three dimensional scenes
UNIT IV RENDERING 9
Introduction to Shading models – Flat and Smooth shading – Adding texture to faces – Adding shadows of objects – Building a camera in a program – Creating shaded objects – Rendering texture – Drawing Shadows.
UNIT V FRACTALS 9
Fractals and Self similarity – Peano curves – Creating image by iterated functions – Mandelbrot sets – Julia Sets – Random Fractals – Overview of Ray Tracing – Intersecting rays with other primitives – Adding Surface texture – Reflections and Transparency – Boolean operations on Objects.
TOTAL: 45 PERIODS
UNIT V PERVASIVE COMPUTING 9
Pervasive computing infrastructure – applications – Device Technology – Hardware, Human-machine Interfaces, Biometrics, and Operating systems – Device Connectivity – Protocols, Security, and Device Management – Pervasive Web Application architecture – Access from PCs and PDAs – Access via WAP
TOTAL: 45 PERIODS
TEXT BOOKS:
1. Jochen Schiller, "Mobile Communications", PHI, Second Edition, 2003.
2. Jochen Burkhardt, "Pervasive Computing: Technology and Architecture of Mobile Internet Applications", Addison-Wesley Professional, 3rd Edition, 2007
REFERENCES:
1. Frank Adelstein, Sandeep K.S. Gupta, Golden Richard, "Fundamentals of Mobile and Pervasive Computing", McGraw-Hill, 2005
2. Debashis Saha, "Networking Infrastructure for Pervasive Computing: Enabling Technologies", Kluwer Academic Publisher, Springer, First Edition, 2002
3. Agrawal and Zeng, "Introduction to Wireless and Mobile Systems", Brooks/Cole (Thomson Learning), First Edition, 2002
4. Uwe Hansmann, Lothar Merk, Martin S. Nicklous and Thomas Stober, "Principles of Mobile Computing", Springer, New York, 2003.
UNIT V APPLICATIONS 9
Multirate signal processing – Speech compression – Adaptive filter – Musical sound processing – Image enhancement.
2nd Edition, 2005.
2. Andreas Antoniou, "Digital Signal Processing", Tata McGraw Hill, 2001
TOTAL: 60 PERIODS
2. Virtualisation environment (e.g., Xen, KQEMU or lguest) to test applications, new kernels and isolate applications. It could also be used to expose students to other alternate OSs like *BSD
3.
8. Version Control System setup and usage using RCS, CVS, SVN
9. Text processing with Perl: simple programs, connecting with a database e.g., MySQL
10. Running PHP: simple applications like login forms after setting up a LAMP stack
11. Running Python: some simple exercises – e.g. connecting with a MySQL database
12. Set up the complete network interface using the ifconfig command, like setting gateway, DNS, IP tables, etc.
RESOURCES:
An environment like FOSS Lab Server (developed by NRCFOSS, containing the various packages)
OR
An equivalent system with a Linux distro supplemented with relevant packages
Note: Once the list of experiments is finalised, NRCFOSS can generate full lab manuals complete with exercises, necessary downloads, etc. These could be made available on the NRCFOSS web portal.
UNIT I 9
General Review of the System – History – System structure – User Perspective – Operating System Services – Assumptions About Hardware. Introduction to the Kernel – Architecture – System Concepts – Data Structures – System Administration.
UNIT II 9
The Buffer Cache – Headers – Buffer Pool – Buffer Retrieval – Reading and Writing Disk Blocks – Advantages and Disadvantages. Internal Representation of Files – Inodes – Structure – Directories – Path Name to Inode – Super Block – Inode Assignment – Allocation of Disk Blocks – Other File Types.
UNIT III 9
System Calls for the File System – Open – Read – Write – Lseek – Close – Create – Special files Creation – Change Directory and Change Root – Change Owner and Change Mode – Stat – Fstat – Pipes – Dup – Mount – Unmount – Link – Unlink – File System Abstraction – Maintenance.
UNIT IV 9
The System Representation of Processes – States – Transitions – System Memory – Context of a Process – Saving the Context – Manipulation of a Process Address Space – Sleep – Process Control – Signals – Process Termination – Awaiting – Invoking other Programs – The Shell – System Boot and the INIT Process.
UNIT V 9
Memory Management Policies – Swapping – Demand Paging – a Hybrid System – I/O Subsystem – Driver Interfaces – Disk Drivers – Terminal Drivers.
TEXT BOOK:
1. Maurice J. Bach, "The Design of the Unix Operating System", Pearson Education, 2002.
REFERENCES:
1. Uresh Vahalia, "UNIX Internals: The New Frontiers", Prentice Hall, 2000.
2. John Lion, "Lion's Commentary on UNIX", 6th Edition, Peer-to-Peer Communications, 2004.
3. Daniel P. Bovet & Marco Cesati, "Understanding the Linux Kernel", O'Reilly, Shroff Publishers & Distributors Pvt. Ltd., 2000.
4. M. Beck et al., "Linux Kernel Programming", Pearson Education Asia, 2002
OBJECTIVES:
At the end of the course, the students would be acquainted with the basic concepts in numerical methods and their uses, summarized as follows:
i. The roots of nonlinear (algebraic or transcendental) equations, solutions of large systems of linear equations and the eigenvalue problem of a matrix can be obtained numerically where analytical methods fail to give a solution
ii.
L = 45, TOTAL: 45 PERIODS
REFERENCES
TOTAL: 45 PERIODS
TEXT BOOKS:
1. Shameem Akhter and Jason Roberts, "Multi-core Programming", Intel Press, 2006.
2. Michael J. Quinn, "Parallel Programming in C with MPI and OpenMP", Tata McGraw Hill, 2003.
REFERENCES:
1. John L. Hennessy and David A. Patterson, "Computer Architecture – A Quantitative Approach", Morgan Kaufmann/Elsevier Publishers, 4th Edition, 2007.
2. David E. Culler, Jaswinder Pal Singh, "Parallel Computing Architecture: A Hardware/Software Approach", Morgan Kaufmann/Elsevier Publishers, 1999.
UNIT I 9
Windows Programming Fundamentals – MFC – Windows – Graphics – Menus – Mouse and keyboard – Bitmaps – Palettes – Device-Independent Bitmaps
UNIT II 9
Controls – Modal and Modeless Dialog – Property – Data I/O – Sound – Timer
UNIT III 9
Memory management – SDI – MDI – MFC for Advanced windows user Interface – status bar and Toolbars – Tree view – List view – Threads
UNIT IV 9
ODBC – MFC Database classes – DAO – DLLs – Working with Images
UNIT V 9
COM Fundamentals – ActiveX control – ATL – Internet Programming
TOTAL: 45 PERIODS
TEXT BOOK:
1. Richard C. Leinecker and Tom Archer, "Visual C++ 6 Programming Bible", Wiley DreamTech Press, 2006.
REFERENCES:
1. Lars Klander, "Core Visual C++ 6", Pearson Education, 2000
2. Deitel, Deitel, Liperi and Yaeger, "Visual C++ .NET How to Program", Pearson Education, 2004.
IT2354 EMBEDDED SYSTEMS L T P C 3 0 0 3
TOTAL: 45 PERIODS
UNIT III OBJECT ORIENTED DATABASES 9
Introduction to Object Oriented Data Bases – Approaches – Modeling and Design – Persistence – Query Languages – Transaction – Concurrency – Multi Version Locks – Recovery – POSTGRES – JASMINE – GEMSTONE – ODMG Model.
UNIT IV EMERGING SYSTEMS 9
Enhanced Data Models – Client/Server Model – Data Warehousing and Data Mining – Web Databases – Mobile Databases – XML and Web Databases.
TOTAL: 45 PERIODS
TEXT BOOKS.
UNIT IV KNOWLEDGE CODIFICATION 9
Modes of Knowledge Conversion – Codification Tools and Procedures – Knowledge Developer's Skill Sets – System Testing and Deployment – Knowledge Testing – Approaches to Logical Testing, User Acceptance Testing – KM System Deployment Issues – User Training – Post implementation.
UNIT V KNOWLEDGE TRANSFER AND SHARING 9
Transfer Methods – Role of the Internet – Knowledge Transfer in e-world – KM System Tools – Neural Network – Association Rules – Classification Trees – Data Mining and Business Intelligence – Decision Making Architecture – Data Management – Knowledge Management Protocols – Managing Knowledge Workers.
TOTAL: 45 PERIODS
OBJECTIVES
To study the principles of CISC
To study the Pentium processor family
To study the principles of RISC
To study the architecture & special features of typical RISC processors
To study the architecture & function of special purpose processors
UNIT II PENTIUM PROCESSORS 10
Introduction to Pentium microprocessor – Special Pentium Registers – Pentium Memory Management – New Pentium instructions – Introduction to Pentium Pro and its special features – Architecture of Pentium-II, Pentium-III and Pentium 4 microprocessors.
TEXT BOOK:
1. Daniel Tabak, "Advanced Microprocessors", Tata McGraw-Hill, 2nd Edition, 1995.
REFERENCES:
1. (Unit V: EPIC)
2. (Unit V: Network Processor)
3. (Unit V: Network Processor)
4. (Unit V: Image Processor)
5. Barry B. Brey, "The Intel Microprocessors, 8086/8088, 80186/80188, 80286, 80386, 80486, Pentium, Pentium Pro Processor, Pentium II, Pentium III, Pentium IV: Architecture, Programming & Interfacing", 6th Edition, Pearson Education/PHI, 2002.
OBJECTIVES:
To learn advanced Java programming concepts like interfaces, threads, Swing, etc.
To develop network programs in Java
To understand concepts needed for distributed and multi-tier applications
To understand issues in enterprise application development
UNIT I JAVA FUNDAMENTALS 9
Java I/O streaming – filter and pipe streams – Byte Code interpretation – Threading – Swing.
UNIT II MESSAGE-PASSING PROGRAMMING 9
The message-passing model – the message-passing interface – MPI standard – basic concepts of MPI: MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, MPI_Finalize – timing the MPI programs: MPI_Wtime, MPI_Wtick – collective communication: MPI_Reduce, MPI_Barrier, MPI_Bcast, MPI_Gather, MPI_Scatter – case studies: the sieve of Eratosthenes, Floyd's algorithm, Matrix-vector multiplication
TOTAL: 45 PERIODS
TEXT BOOK:
1. Michael J. Quinn, "Parallel Programming in C with MPI and OpenMP", Tata McGraw-Hill Publishing Company Ltd., 2003.
REFERENCES:
1. B. Wilkinson and M. Allen, "Parallel Programming – Techniques and Applications Using Networked Workstations and Parallel Computers", Second Edition, Pearson Education, 2005.
2. M.J. Quinn, "Parallel Computing – Theory and Practice", Second Edition, Tata McGraw-Hill Publishing Company Ltd., 2002.
UNIT II 9
Style Sheets: CSS – Introduction to Cascading Style Sheets – Features – Core Syntax – Style Sheets and HTML – Style Rule Cascading and Inheritance – Text Properties – Box Model – Normal Flow Box Layout – Beyond the Normal Flow – Other Properties – Case Study. Client-Side Programming: The JavaScript Language – History and Versions – Introduction – JavaScript in Perspective – Syntax – Variables and Data Types – Statements – Operators – Literals – Functions – Objects – Arrays – Built-in Objects – JavaScript Debuggers.
UNIT III 9
Host Objects: Browsers and the DOM – Introduction to the Document Object Model – DOM History and Levels – Intrinsic Event Handling – Modifying Element Style – The Document Tree – DOM Event Handling – Accommodating Noncompliant Browsers – Properties of window – Case Study. Server-Side Programming: Java Servlets – Architecture – Overview – A Servlet – Generating Dynamic Content – Life Cycle – Parameter Data – Sessions – Cookies – URL Rewriting – Other Capabilities – Data Storage – Servlets and Concurrency – Case Study – Related Technologies.
UNIT IV 9
Representing Web Data: XML – Documents and Vocabularies – Versions and Declaration – Namespaces – JavaScript and XML: Ajax – DOM based XML processing – Event-oriented Parsing: SAX – Transforming XML Documents – Selecting XML Data: XPath – Template-based Transformations: XSLT – Displaying XML Documents in Browsers – Case Study – Related Technologies. Separating Programming and Presentation: JSP Technology – Introduction – JSP and Servlets – Running JSP Applications – Basic JSP – JavaBeans Classes and JSP – Tag Libraries and Files – Support for the Model-View-Controller Paradigm – Case Study – Related Technologies.
UNIT V 9
Web Services: JAX-RPC – Concepts – Writing a Java Web Service – Writing a Java Web Service Client – Describing Web Services: WSDL – Representing Data Types: XML Schema – Communicating Object Data: SOAP – Related Technologies – Software Installation – Storing Java Objects as Files – Databases and Java Servlets.
TEXT BOOK:
1. Jeffrey C. Jackson, "Web Technologies – A Computer Science Perspective", Pearson Education, 2006.
MG2453 RESOURCE MANAGEMENT TECHNIQUES L T P C 3 0 0 3
UNIT I LINEAR PROGRAMMING 9
Principal components of decision problem – Modeling phases – LP Formulation and graphic solution – Resource allocation problems – Simplex method – Sensitivity analysis.
TOTAL: 45 PERIODS
REFERENCES:
1. Anderson, "Quantitative Methods for Business", 8th Edition, Thomson Learning, 2002.
2. Winston, "Operation Research", Thomson Learning, 2003.
3. H.A. Taha, "Operation Research", Prentice Hall of India, 2002.
4. Vohra, "Quantitative Techniques in Management", Tata McGraw Hill, 2002.
5. Anand Sarma, "Operation Research", Himalaya Publishing House, 2003.
UNIT III DATA MINING 8
Introduction – Data – Types of Data – Data Mining Functionalities – Interestingness of Patterns – Classification of Data Mining Systems – Data Mining Task Primitives – Integration of a Data Mining System with a Data Warehouse – Issues – Data Preprocessing.
UNIT I INTRODUCTION 9
Introduction – Issues in Real Time Computing, Structure of a Real Time System. Task Classes, Performance Measures for Real Time Systems, Estimating Program Runtimes. Task Assignment and Scheduling – Classical Uniprocessor scheduling algorithms, Uniprocessor scheduling of IRIS Tasks, Task Assignment, Mode Changes, and Fault Tolerant Scheduling.
UNIT II PROGRAMMING LANGUAGES AND TOOLS 9
Programming Languages and Tools – Desired Language characteristics, Data Typing, Control structures, Facilitating Hierarchical Decomposition, Packages, Run-time (Exception) Error handling, Overloading and Generics, Multitasking, Low Level programming, Task scheduling, Timing Specifications, Programming Environments, Run-time Support.
UNIT IV COMMUNICATION 9
Real-Time Communication – Communications Media, Network Topologies, Protocols, Fault Tolerant Routing. Fault Tolerance Techniques – Fault Types, Fault Detection. Fault and Error containment, Redundancy, Data Diversity, Reversal Checks, Integrated Failure handling.
TOTAL: 45 PERIODS
TEXT BOOK:1. C.M. Krishna, Kang G. Shin, “Real-Time Systems”, McGraw-Hill International Editions, 1997.
REFERENCES:1. Stuart Bennett, “Real Time Computer Control-An Introduction”,Second edition Perntice Hall PTR, 1994.2. Peter D. Lawrence, “Real time Micro Computer System Design – An Introduction”, McGraw Hill, 1988.3. S.T. Allworth and R.N. Zobel, “Introduction to real time software design”, Macmillan, II Edition, 1987.4. R.J.A Buhur, D.L. Bailey, “ An Introduction to Real-Time Systems”, Prentice-Hall International, 1999.5. Philip.A.Laplante “Real Time System Design and Analysis” PHI , III Edition, April 2004.
UNIT II TCP 9
Services - header - connection establishment and termination - interactive data flow - bulk data flow - timeout and retransmission - persist timer - keepalive timer - futures and performance.
TOTAL: 45 PERIODS
TEXT BOOKS:
1. Douglas E. Comer, "Internetworking with TCP/IP: Principles, Protocols and Architecture", Vol 1 and 2, 5th Edition.
2. W. Richard Stevens, "TCP/IP Illustrated", Vol 1, 2003.
REFERENCES:
1. Forouzan, "TCP/IP Protocol Suite", Second Edition, Tata McGraw Hill, 2003.
2. W. Richard Stevens, "TCP/IP Illustrated", Volume 2, Pearson Education, 2003.
UNIT IV 9
Working with XML - Techniques for Reading and Writing XML Data - Using XPath to Search XML - ADO.NET Architecture - ADO.NET Connected and Disconnected Models - XML and ADO.NET - Simple and Complex Data Binding - Data Grid View Class.
UNIT V 9
Application Domains - Remoting - Leasing and Sponsorship - .NET Coding Design Guidelines - Assemblies - Security - Application Development - Web Services - Building an XML Web Service - Web Service Client - WSDL and SOAP - Web Service with Complex Data Types - Web Service Performance.
TOTAL: 45 PERIODS
UNIT I 9
Security trends - Attacks and services - Classical crypto systems - Different types of ciphers - LFSR sequences - Basic Number theory - Congruences - Chinese Remainder theorem - Modular exponentiation - Fermat and Euler's theorem - Legendre and Jacobi symbols - Finite fields - Continued fractions.
UNIT II 9
Simple DES - Differential cryptanalysis - DES - Modes of operation - Triple DES - AES - RC4 - RSA - Attacks - Primality test - Factoring.
UNIT III 9
Discrete Logarithms - Computing discrete logs - Diffie-Hellman key exchange - ElGamal Public key cryptosystems - Hash functions - Secure Hash - Birthday attacks - MD5 - Digital signatures - RSA - ElGamal - DSA.
UNIT IV 9
Authentication applications - Kerberos, X.509, PKI - Electronic Mail security - PGP, S/MIME - IP security - Web Security - SSL, TLS, SET.
UNIT V 9
System security - Intruders - Malicious software - Viruses - Firewalls - Security Standards.
TOTAL: 45 PERIODS
UNIT III 9
Context Free Grammars for English Syntax - Context-Free Rules and Trees - Sentence-Level Constructions - Agreement - Subcategorization - Parsing - Top-down - Earley Parsing - Feature Structures - Probabilistic Context-Free Grammars.
UNIT IV 9
Representing Meaning - Meaning Structure of Language - First Order Predicate Calculus - Representing Linguistically Relevant Concepts - Syntax-Driven Semantic Analysis - Semantic Attachments - Syntax-Driven Analyzer - Robust Analysis - Lexemes and Their Senses - Internal Structure - Word Sense Disambiguation - Information Retrieval.
UNIT V 9
Discourse - Reference Resolution - Text Coherence - Discourse Structure - Dialog and Conversational Agents - Dialog Acts - Interpretation - Coherence - Conversational Agents - Language Generation - Architecture - Surface Realizations - Discourse Planning - Machine Translation - Transfer Metaphor - Interlingua - Statistical Approaches.
TOTAL: 45 PERIODS
TOTAL: 45 PERIODS
TEXT BOOKS:
1. Jerry Banks and John Carson, "Discrete Event System Simulation", Fourth Edition, PHI, 2005.
2. Geoffrey Gordon, "System Simulation", Second Edition, PHI, 2006 (Unit V).
UNIT I INTRODUCTION 8
Human-Computer Interface - Characteristics of Graphics Interface - Direct Manipulation Graphical System - Web User Interface - Popularity - Characteristics & Principles.
UNIT IV MULTIMEDIA 9
Text for Web Pages - Effective Feedback - Guidance & Assistance - Internationalization - Accessibility - Icons - Image - Multimedia - Coloring.
TOTAL: 45 PERIODS
GE2022 TOTAL QUALITY MANAGEMENT L T P C 3 0 0 3
UNIT I INTRODUCTION 9
Introduction - Need for quality - Evolution of quality - Definition of quality - Dimensions of manufacturing and service quality - Basic concepts of TQM - Definition of TQM - TQM Framework - Contributions of Deming, Juran and Crosby - Barriers to TQM.
TOTAL: 45 PERIODS
IT2351 NETWORK PROGRAMMING AND MANAGEMENT L T P C 3 0 0 3
TOTAL: 45 PERIODS
IT2032 SOFTWARE TESTING L T P C 3 0 0 3
UNIT I INTRODUCTION 9
Testing as an Engineering Activity - Role of Process in Software Quality - Testing as a Process - Basic Definitions - Software Testing Principles - The Tester's Role in a Software Development Organization - Origins of Defects - Defect Classes - The Defect Repository and Test Design - Defect Examples - Developer/Tester Support for Developing a Defect Repository.
REFERENCES:
1. Boris Beizer, "Software Testing Techniques", Second Edition, Dreamtech, 2003.
2. Elfriede Dustin, "Effective Software Testing", First Edition, Pearson Education, 2003.
3. Renu Rajani, Pradeep Oak, "Software Testing - Effective Methods, Tools and Techniques", Tata McGraw Hill, 2004.
UNIT I 9
Roots of SOA - Characteristics of SOA - Comparing SOA to client-server and distributed internet architectures - Anatomy of SOA - How components in an SOA interrelate - Principles of service orientation.
UNIT II 9
Web services - Service descriptions - Messaging with SOAP - Message exchange Patterns - Coordination - Atomic Transactions - Business activities - Orchestration - Choreography - Service layer abstraction - Application Service Layer - Business Service Layer - Orchestration Service Layer.
UNIT III 9
Service oriented analysis - Business-centric SOA - Deriving business services - Service modeling - Service Oriented Design - WSDL basics - SOAP basics - SOA composition guidelines - Entity-centric business service design - Application service design - Task-centric business service design.
UNIT IV 9
Technologies (WSIT) - SOA support in .NET - Common Language Runtime - ASP.NET web forms - ASP.NET web services - Web Services Enhancements (WSE).
UNIT V 9
WS-BPEL basics - WS-Coordination overview - WS-Choreography, WS-Policy, WS-Security.
TOTAL: 45 PERIODS
TEXT BOOK:
1. Thomas Erl, "Service-Oriented Architecture: Concepts, Technology, and Design",
OBJECTIVES
To get a comprehensive knowledge of the architecture of distributed systems.
To understand the deadlock and shared memory issues and their solutions in distributed environments.
To know the security issues and protection mechanisms for distributed environments.
To get a knowledge of multiprocessor operating systems and database operating systems.
UNIT I 9
Architectures of Distributed Systems - System Architecture types - Issues in distributed operating systems - Communication networks - Communication primitives. Theoretical Foundations - Inherent limitations of a distributed system - Lamport's logical clocks - Vector clocks - Causal ordering of messages - Global state - Cuts of a distributed computation - Termination detection. Distributed Mutual Exclusion - Introduction - The classification of mutual exclusion and associated algorithms - A comparative performance analysis.
UNIT II 9
Distributed Deadlock Detection - Introduction - Deadlock handling strategies in distributed systems - Issues in deadlock detection and resolution - Control organizations for distributed deadlock detection - Centralized and distributed deadlock detection algorithms - Hierarchical deadlock detection algorithms. Agreement protocols - Introduction - The system model, a classification of agreement problems, solutions to the Byzantine agreement problem, applications of agreement algorithms. Distributed resource management: Introduction - Architecture - Mechanism for building distributed file systems - Design issues - Log structured file systems.
UNIT III 9
Distributed shared memory - Architecture - Algorithms for implementing DSM - Memory coherence and protocols - Design issues. Distributed Scheduling - Introduction - Issues in load distributing - Components of a load distributing algorithm - Stability - Load distributing algorithms - Performance comparison - Selecting a suitable load sharing algorithm - Requirements for load distributing - Task migration and associated issues. Failure Recovery and Fault Tolerance: Introduction - Basic concepts - Classification of failures - Backward and forward error recovery, backward error recovery - Recovery in concurrent systems - Consistent set of checkpoints - Synchronous and asynchronous checkpointing and recovery - Checkpointing for distributed database systems - Recovery in replicated distributed databases.
UNIT IV 9
Protection and security - Preliminaries, the access matrix model and its implementations - Safety in matrix model - Advanced models of protection. Data security - Cryptography: Model of cryptography, conventional cryptography - Modern cryptography, private key cryptography, data encryption standard - Public key cryptography - Multiple encryption - Authentication in distributed systems.
UNIT V 9
Multiprocessor operating systems - Basic multiprocessor system architectures - Interconnection networks for multiprocessor systems - Caching - Hypercube architecture. Multiprocessor Operating System - Structures of multiprocessor operating systems, operating system design issues - Threads - Process synchronization and scheduling. Database Operating Systems: Introduction - Requirements of a database operating system. Concurrency control: Theoretical aspects - Introduction, database systems - A concurrency control model of database systems - The problem of concurrency control - Serializability theory - Distributed database systems, concurrency control algorithms - Introduction, basic synchronization primitives, lock based algorithms - Timestamp based algorithms, optimistic algorithms - Concurrency control algorithms, data replication.
TOTAL: 45 PERIODS
TEXT BOOK:
1. Mukesh Singhal, Niranjan G. Shivaratri, "Advanced Concepts in Operating Systems: Distributed, Database and Multiprocessor Operating Systems", TMH, 2001.
REFERENCES:
1. Andrew S. Tanenbaum, "Modern Operating Systems", PHI, 2003.
2. Pradeep K. Sinha, "Distributed Operating Systems - Concepts and Design", PHI, 2003.
3. Andrew S. Tanenbaum, "Distributed Operating Systems", Pearson Education, 2003.
CS2045 WIRELESS NETWORKS L T P C 3 0 0 3
TOTAL: 45 PERIODS
GE2071 INTELLECTUAL PROPERTY RIGHTS (IPR) L T P C 3 0 0 3
UNIT I 5
Introduction - Invention and Creativity - Intellectual Property (IP) - Importance - Protection of IPR - Basic types of property: (i) Movable Property, (ii) Immovable Property and (iii) Intellectual Property.
UNIT II 10
IP - Patents - Copyrights and related rights - Trade Marks and rights arising from Trademark registration - Definitions - Industrial Designs and Integrated Circuits - Protection of Geographical Indications at national and international levels - Application Procedures.
UNIT III 10
International conventions relating to Intellectual Property - Establishment of WIPO - Mission and Activities - History - General Agreement on Trade and Tariff (GATT).
UNIT IV 10
Indian Position Vs WTO and Strategies - Indian IPR legislations - Commitments to WTO - Patent Ordinance and the Bill - Draft of a national Intellectual Property Policy - Protection against unfair competition.
UNIT V 10
Case Studies on - Patents (Basmati rice, turmeric, Neem, etc.) - Copyright and related rights - Trade Marks - Industrial design and Integrated Circuits - Geographic indications - Protection against unfair competition.
TOTAL: 45 PERIODS
UNIT II TREES, CONNECTIVITY, PLANARITY 9
Spanning trees - Fundamental Circuits - Spanning Trees in a Weighted Graph - Cut Sets - Properties of Cut Set - All Cut Sets - Fundamental Circuits and Cut Sets - Connectivity and Separability - Network flows - 1-Isomorphism - 2-Isomorphism - Combinational and Geometric Graphs - Planar Graphs - Different Representations of a Planar Graph.
UNIT IV ALGORITHMS 9
Algorithms: Connectedness and Components - Spanning tree - Finding all Spanning Trees of a Graph - Set of Fundamental Circuits - Cut Vertices and Separability - Directed Circuits.
UNIT V ALGORITHMS 9
Algorithms: Shortest Path Algorithm - DFS - Planarity Testing - Isomorphism.
TOTAL: 45 PERIODS
TEXT BOOK:
1. Narsingh Deo, "Graph Theory: With Application to Engineering and Computer Science", Prentice Hall of India, 2003.
REFERENCE:
1. R.J. Wilson, "Introduction to Graph Theory", Fourth Edition, Pearson Education, 2003.
OBJECTIVES
To understand the basics of Information Security
To know the legal, ethical and professional issues in Information Security
To know the aspects of risk management
To become aware of various standards in this area
To know the technological aspects of Information Security
UNIT I INTRODUCTION 9
History, What is Information Security?, Critical Characteristics of Information, NSTISSC Security Model, Components of an Information System, Securing the Components, Balancing Security and Access, The SDLC, The Security SDLC.
TOTAL: 45 PERIODS
3. Matt Bishop, "Computer Security: Art and Science", Pearson/PHI, 2002.
UNIT III TCP AND ATM CONGESTION CONTROL 12
TCP Flow control - TCP Congestion Control - Retransmission - Timer Management - Exponential RTO backoff - KARN's Algorithm - Window management - Performance of TCP over ATM. Traffic and Congestion control in ATM - Requirements - Attributes - Traffic Management Framework, Traffic Control - ABR traffic Management - ABR rate control, RM cell formats - ABR Capacity allocations - GFR traffic management.
REFERENCES:
1. Walrand, Pravin Varaiya, "High Performance Communication Networks", Second Edition, Jean Harcourt Asia Pvt. Ltd., 2001.
2. Ivan Pepelnjak, Jim Guichard, Jeff Apcar, "MPLS and VPN Architecture", Cisco Press, Volume 1 and 2, 2003.
3. Abhijit S. Pandya, Ercan Sen, "ATM Technology for Broad Band Telecommunication Networks", CRC Press, New York, 2004.
UNIT V FUTURE TRENDS 14
Advanced robotics, Advanced robotics in Space - Specific features of space robotics systems - Long-term technical developments, Advanced robotics in underwater operations. Robotics Technology of the Future - Future Applications.
TOTAL: 45 PERIODS
TEXT BOOK:
1. Barry Leatham-Jones, "Elements of Industrial Robotics", PITMAN Publishing, 1987.
REFERENCES:
1. Mikell P. Groover, Mitchell Weiss, Roger N. Nagel, Nicholas G. Odrey, "Industrial Robotics: Technology, Programming and Applications", McGraw Hill Book Company, 1986.
2. Fu K.S., Gonzalez R.C. and Lee C.S.G., "Robotics: Control, Sensing, Vision and Intelligence", McGraw Hill International Editions, 1987.
3. Bernard Hodges and Paul Hallam, "Industrial Robotics", British Library Cataloguing in Publication, 1990.
4. Deb, S.R., "Robotics Technology and Flexible Automation", Tata McGraw Hill, 1994.
UNIT II OPTIMIZATION 8
Derivative-based Optimization - Descent Methods - The Method of Steepest Descent - Classical Newton's Method - Step Size Determination - Derivative-free Optimization - Genetic Algorithms - Simulated Annealing - Random Search - Downhill Simplex Search.
UNIT III ARTIFICIAL INTELLIGENCE 10
Introduction, Knowledge Representation - Reasoning, Issues and Acquisition: Propositional and Predicate Calculus, Rule Based Knowledge Representation, Symbolic Reasoning Under Uncertainty, Basic Knowledge Representation Issues, Knowledge acquisition - Heuristic Search: Techniques for Heuristic search, Heuristic Classification - State Space Search: Strategies, Implementation of Graph Search, Search based on Recursion, Pattern-directed Search, Production System and Learning.
UNIT IV NEURO FUZZY MODELING 9
Adaptive Neuro-Fuzzy Inference Systems - Architecture - Hybrid Learning Algorithm - Learning Methods that Cross-fertilize ANFIS and RBFN - Coactive Neuro-Fuzzy Modeling - Framework, Neuron Functions for Adaptive Networks - Neuro-Fuzzy Spectrum.
OBJECTIVES:
To introduce basic concepts in acquiring, storing and processing images
To introduce techniques for enhancing the quality of images
To introduce techniques for extraction and processing of regions of interest
To introduce case studies of Image Processing
UNIT II IMAGE ENHANCEMENT 9
Spatial Domain: Gray level Transformations, Histogram Processing, Spatial Filtering - Smoothing and Sharpening. Frequency Domain: Filtering in Frequency Domain - DFT, FFT, DCT - Smoothing and Sharpening filters - Homomorphic Filtering.
UNIT III IMAGE SEGMENTATION AND FEATURE ANALYSIS 9
Detection of Discontinuities - Edge Operators - Edge Linking and Boundary Detection - Thresholding - Region Based Segmentation - Morphological Watersheds - Motion Segmentation, Feature Analysis and Extraction.
REFERENCES:
1. Milan Sonka, Vaclav Hlavac and Roger Boyle, "Image Processing, Analysis and Machine Vision", Second Edition, Thomson Learning, 2001.
UNIT IV SOFTWARE QUALITY PROGRAM 9
Software Quality Program Concepts - Establishment of a Software Quality Program - Software Quality Assurance Planning - An Overview - Purpose & Scope.
TOTAL: 45 PERIODS
TEXT BOOKS:
1. Mordechai Ben-Menachem / Garry S. Marliss, "Software Quality", Vikas Publishing House Pvt. Ltd., New Delhi. (Units III to V)
2. Watts S. Humphrey, "Managing the Software Process", Pearson Education Inc. (Units I and II)
REFERENCES:
1. Gordon G. Schulmeyer, "Handbook of Software Quality Assurance", Third Edition, Artech House Publishers, 2007.
2. Nina S. Godbole, "Software Quality Assurance: Principles and Practice", Alpha Science International Ltd., 2004.
UNIT V MANAGING PEOPLE AND ORGANIZING TEAMS 9
Introduction - Understanding Behavior - Organizational Behaviour: A Background - Selecting The Right Person For The Job - Instruction In The Best Methods - Motivation - The Oldham-Hackman Job Characteristics Model - Working In Groups - Becoming A Team - Decision Making - Leadership - Organizational Structures - Stress - Health And Safety - Case Studies.
TOTAL: 45 PERIODS
UNIT I 9
Characterization of Distributed Systems - Introduction - Examples - Resource Sharing and the Web - Challenges. System Models - Architectural - Fundamental. Interprocess Communication - Introduction - API for Internet protocols - External data representation and marshalling - Client-server communication - Group communication - Case study: Interprocess Communication in UNIX.
UNIT II 9
Distributed Objects and Remote Invocation - Introduction - Communication between distributed objects - Remote procedure calls - Events and notifications - Case study: Java RMI. Operating System Support - Introduction - OS layer - Protection - Processes and threads - Communication and invocation - OS architecture.
UNIT III 9
Distributed File Systems - Introduction - File service architecture - Case Study: Sun Network File System - Enhancements and further developments. Name Services - Introduction - Name Services and the Domain Name System - Directory Services - Case Study: Global Name Service.
UNIT IV 9
Time and Global States - Introduction - Clocks, events and process states - Synchronizing physical clocks - Logical time and logical clocks - Global states - Distributed debugging. Coordination and Agreement - Introduction - Distributed mutual exclusion - Elections - Multicast communication - Consensus and related problems.
UNIT V 9
Distributed Shared Memory - Introduction - Design and implementation issues - Sequential consistency and Ivy case study - Release consistency and Munin case study - Other consistency models. CORBA Case Study - Introduction - CORBA RMI - CORBA services.
TOTAL: 45 PERIODS
UNIT V QUANTUM COMPUTATIONAL COMPLEXITY AND ERROR CORRECTION 9
Computational complexity - black-box model - lower bounds for searching - general black-box lower bounds - polynomial method - block sensitivity - adversary methods - classical error correction - classical three-bit code - fault tolerance - quantum error correction - three- and nine-qubit quantum codes - fault-tolerant quantum computation.
TEXT BOOK:
1. P. Kaye, R. Laflamme, and M. Mosca, "An Introduction to Quantum Computing", Oxford University Press, 1999.
REFERENCE:
1. V. Sahni, "Quantum Computing", Tata McGraw-Hill Publishing Company, 2007.
UNIT I 9
Decision Making and computerized support: Management support systems. Decision making systems - modeling and support.
UNIT II 9
Decision Making Systems - Modeling and Analysis - Business Intelligence - Data Warehousing, Data Acquisition - Data Mining. Business Analysis - Visualization - Decision Support System Development.
UNIT III 9
Collaboration, Communication, Enterprise Decision Support Systems & Knowledge management - Collaboration and Communication Technologies, Enterprise information systems - Knowledge management.
UNIT IV 9
Intelligent Support Systems - AI & Expert Systems - Knowledge based Systems - Knowledge Acquisition, Representation & Reasoning, Advanced intelligent systems - Intelligent Systems over the internet.
UNIT V 9
Implementing MSS in the E-Business Era - Electronic Commerce - Integration, Impacts and the future of management support systems.
TOTAL: 45 PERIODS
TEXT BOOKS:
1. Efraim Turban, Jay E. Aronson, Ting-Peng Liang, "Decision Support Systems & Intelligent Systems", Seventh Edition, Pearson/Prentice Hall.
2. George M. Marakas, "Decision Support Systems", Second Edition, Pearson/Prentice Hall.
REFERENCES:
1. V.S. Janakiraman & K. Sarukesi, "Decision Support Systems".
2. Efrem G. Mallach, "Decision Support Systems and Data Warehouse Systems", McGraw Hill.
TOTAL: 45 PERIODS
TEXT BOOK:
1. Maozhen Li, Mark Baker, "The Grid Core Technologies", John Wiley & Sons, 2005.
REFERENCES:
1. Ian Foster & Carl Kesselman, "The Grid 2 - Blueprint for a New Computing Infrastructure", Morgan Kaufmann, 2004.
2. Joshy Joseph & Craig Fellenstein, "Grid Computing", Pearson Education, 2004.
3. Fran Berman, Geoffrey Fox, Anthony J.G. Hey, "Grid Computing: Making the Global Infrastructure a Reality", John Wiley and Sons, 2003.
CS2064 AGENT BASED INTELLIGENT SYSTEMS L T P C 3 0 0 3
UNIT I INTRODUCTION 9
Definitions - Foundations - History - Intelligent Agents - Problem Solving - Searching - Heuristics - Constraint Satisfaction Problems - Game playing.
REFERENCES:
1. Michael Wooldridge, "An Introduction to Multi Agent Systems", John Wiley, 2002.
2. Patrick Henry Winston, "Artificial Intelligence", 3rd Edition, AW, 1999.
3. Nils J. Nilsson, "Principles of Artificial Intelligence", Narosa Publishing House, 1992.
UNIT III ENGINEER'S RESPONSIBILITY FOR SAFETY 9
Safety and Risk - Assessment of Safety and Risk - Risk Benefit Analysis - Reducing Risk - The Government Regulator's Approach to Risk - Chernobyl and Bhopal Case Studies.
REFERENCES:
1. Charles D. Fleddermann, "Engineering Ethics", Prentice Hall, New Mexico, 1999.
2. John R. Boatright, "Ethics and the Conduct of Business", Pearson Education, 2003.
UNIT I INTRODUCTION 10
Nanoscale Science and Technology - Implications for Physics, Chemistry, Biology and Engineering - Classifications of nanostructured materials - Nanoparticles - Quantum dots, nanowires - Ultra-thin films - Multilayered materials. Length Scales involved and effect on properties: Mechanical, Electronic, Optical, Magnetic and Thermal properties. Introduction to properties and motivation for study (qualitative only).
UNIT III PATTERNING AND LITHOGRAPHY FOR NANOSCALE DEVICES 5
Introduction to optical/UV, electron beam and X-ray Lithography systems and processes, Wet etching, dry (Plasma/reactive ion) etching, Etch resists - Dip pen lithography.
TOTAL: 45 PERIODS
REFERENCES:
1. G. Timp (Editor), "Nanotechnology", AIP Press/Springer, 1999.
2. Akhlesh Lakhtakia (Editor), "The Handbook of Nanotechnology: Nanometer Structures - Theory, Modeling and Simulations", Prentice-Hall of India (P) Ltd, New Delhi, 2007.
UNIT I 9
Historical Background - Constituent Assembly of India - Philosophical foundations of the Indian Constitution - Preamble - Fundamental Rights - Directive Principles of State Policy - Fundamental Duties - Citizenship - Constitutional Remedies for citizens.
UNIT II 9
Union Government - Structures of the Union Government and Functions - President - Vice President - Prime Minister - Cabinet - Parliament - Supreme Court of India - Judicial Review.
UNIT III 9
State Government - Structure and Functions - Governor - Chief Minister - Cabinet - State Legislature - Judicial System in States - High Courts and other Subordinate Courts.
UNIT IV 9
Indian Federal System - Center-State Relations - President's Rule - Constitutional Amendments - Constitutional Functionaries - Assessment of the working of the Parliamentary System in India.
UNIT V 9
Society: Nature, Meaning and definition; Indian Social Structure; Caste, Religion, Language in India; Constitutional Remedies for citizens - Political Parties and Pressure Groups; Rights of Women, Children and Scheduled Castes and Scheduled Tribes and other Weaker Sections.
TOTAL: 45 PERIODS
UNIT I 9
Introduction to molecular biology - the genetic material - gene structure - protein structure - chemical bonds - molecular biology tools - genomic information content.
UNIT II 9
Data searches - simple alignments - gaps - scoring matrices - dynamic programming - global and local alignments - database searches - multiple sequence alignments. Patterns for substitutions - estimating substitution numbers - evolutionary rates - molecular clocks - evolution in organelles.
UNIT III 9
Phylogenetics - history and advantages - phylogenetic trees - distance matrix methods - maximum likelihood approaches - multiple sequence alignments - Parsimony - ancestral sequences - strategies for faster searches - consensus trees - tree confidence - comparison of phylogenetic methods - molecular phylogenies.
UNIT IV 9
Genomics - prokaryotic genomes: prokaryotic gene structure - GC content - gene density - eukaryotic genomes: gene structure - open reading frames - GC content - gene expression - transposition - repeated elements - gene density.
UNIT V 9
Amino acids - polypeptide composition - secondary structure - tertiary and quaternary structure - algorithms for modeling protein folding - structure prediction - predicting RNA secondary structures. Proteomics - protein classification - experimental techniques - inhibitors and drug design - ligand screening - NMR structures - empirical methods and prediction techniques - post-translational modification prediction.
TOTAL: 45 PERIODS
UNIT IV LINEAR PREDICTIVE ANALYSIS OF SPEECH 9
Basic Principles of linear predictive analysis - Autocorrelation method - Covariance method - Solution of LPC equations - Cholesky method - Durbin's Recursive algorithm - Application of LPC parameters - Pitch detection using LPC parameters - Formant analysis - VELP - CELP.
REFERENCES:
1. Quatieri, "Discrete-Time Speech Signal Processing", Prentice Hall, 2001.
2. L.R. Rabiner and B.H. Juang, "Fundamentals of Speech Recognition", Prentice Hall, 1993.
|
https://ru.scribd.com/document/384223393/cse-pdf
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
JTinyCsvParser
I wanted to learn Java 1.8 and its new features: lambda functions and Streams. So I have ported TinyCsvParser over to Java and named it JTinyCsvParser. The library makes mapping between a CSV file and a Java class very easy and provides a nice Streaming API:
It should be one of the fastest CSV parsers in Java 1.8, although I didn't run benchmarks against any of the available solutions. The parser is able to read and map 4.5 million lines in 12 seconds (and I didn't optimize anything yet). That means JTinyCsvParser for Java is as fast as TinyCsvParser for .NET.
This article is an introduction to JTinyCsvParser; it includes a section on benchmarking the parser and hopefully has some interesting content.
Basic Usage
This is an example for the most common use of JTinyCsvParser.
Imagine we have a list of persons in a CSV file persons.csv with their first name, last name and birth date:

Philipp,Wagner,1986/05/12
Max,Musterman,2014/01/02
The corresponding domain model in our system might look like this.
public class Person {

    private String firstName;
    private String lastName;
    private LocalDate birthDate;

    public String getFirstName() { return firstName; }

    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }

    public void setLastName(String lastName) { this.lastName = lastName; }

    public LocalDate getBirthDate() { return birthDate; }

    public void setBirthDate(LocalDate birthDate) { this.birthDate = birthDate; }
}
When using JTinyCsvParser you have to define the mapping between the CSV File and your domain model:
public class PersonMapping extends CsvMapping<Person> {

    public PersonMapping(IObjectCreator creator) {
        super(creator);

        Map(0, String.class, Person::setFirstName);
        Map(1, String.class, Person::setLastName);
        Map(2, LocalDate.class, Person::setBirthDate);
    }
}
And then it can be used to read the results. Please note that the CsvParser returns a Stream, so in this example the results are turned into a list first.
public class CsvParserTest {

    @Test
    public void testParse() throws Exception {
        CsvParserOptions options = new CsvParserOptions(false, ",");
        PersonMapping mapping = new PersonMapping(() -> new Person());
        CsvParser<Person> parser = new CsvParser<>(options, mapping);

        ArrayList<String> csvData = new ArrayList<>();

        // Simulate CSV Data:
        csvData.add("Philipp,Wagner,1986-05-12");
        csvData.add(""); // An empty line... Should be skipped.
        csvData.add("Max,Musterman,2000-01-07");

        List<CsvMappingResult<Person>> result = parser.parse(csvData)
                .collect(Collectors.toList()); // turn it into a List!

        Assert.assertNotNull(result);
        Assert.assertEquals(2, result.size());

        // Get the first person:
        Person person0 = result.get(0).getResult();

        Assert.assertEquals("Philipp", person0.getFirstName());
        Assert.assertEquals("Wagner", person0.getLastName());

        Assert.assertEquals(1986, person0.getBirthDate().getYear());
        Assert.assertEquals(5, person0.getBirthDate().getMonthValue());
        Assert.assertEquals(12, person0.getBirthDate().getDayOfMonth());

        // Get the second person:
        Person person1 = result.get(1).getResult();

        Assert.assertEquals("Max", person1.getFirstName());
        Assert.assertEquals("Musterman", person1.getLastName());

        Assert.assertEquals(2000, person1.getBirthDate().getYear());
        Assert.assertEquals(1, person1.getBirthDate().getMonthValue());
        Assert.assertEquals(7, person1.getBirthDate().getDayOfMonth());
    }
}
Benchmark
Dataset
In this benchmark the local weather data from March 2015, gathered by all weather stations in the USA, is parsed. You can obtain the data QCLCD201503.zip from:
The file size is 557 MB and it has 4,496,262 lines.
Setup
Software
The Java version used is 1.8.0_66.
C:\Users\philipp>java -version
java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b18)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b18, mixed mode)
Hardware
- Intel (R) Core (TM) i5-3450
- Hitachi HDS721010CLA330 (1 TB Capacity, 32 MB Cache, 7200 RPM)
- 16 GB RAM
Measuring the Elapsed Time
Working with dates and timespans has always been hell in Java. Java 1.8 has finally introduced new classes like LocalDate, LocalDateTime or Duration to work with time. Combined with lambda functions we can easily write a nice helper class MeasurementUtils that measures the elapsed time of a function. You simply have to pass a description and an Action into the MeasurementUtils.MeasureElapsedTime method, and it will print out the elapsed time.
// Copyright (c) Philipp Wagner. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.

package de.bytefish.jtinycsvparser.utils;

import java.time.Duration;
import java.time.Instant;

public class MeasurementUtils {

    /**
     * Java 1.8 doesn't have a Consumer without parameters (why not?), so we
     * are defining a FunctionalInterface with a nullary function.
     *
     * I call it Action, so I am consistent with .NET.
     */
    @FunctionalInterface
    public interface Action {
        void invoke();
    }

    public static void MeasureElapsedTime(String description, Action action) {
        Duration duration = MeasureElapsedTime(action);

        System.out.println(String.format("[%s] %s", description, duration));
    }

    public static Duration MeasureElapsedTime(Action action) {
        Instant start = Instant.now();

        action.invoke();

        Instant end = Instant.now();

        return Duration.between(start, end);
    }
}
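To make the helper concrete, here is a small usage sketch. The workload is made up purely for illustration; any parameterless lambda works:

import de.bytefish.jtinycsvparser.utils.MeasurementUtils;

public class MeasurementExample {

    public static void main(String[] args) {
        // Prints something like "[Sum Loop] PT0.35S":
        MeasurementUtils.MeasureElapsedTime("Sum Loop", () -> {
            long sum = 0;
            for (long i = 0; i < 1_000_000_000L; i++) {
                sum += i;
            }
            System.out.println(sum);
        });
    }
}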
Reading a File Sequentially
First we have to find out if the CSV parsing is an I/O-bound or CPU-bound task. The lower bound of the CSV parsing is obviously given by the time needed to read a text file; the actual CSV parsing and mapping cannot be any faster. I am using Files.lines to consume a Stream<String>, which is also what JTinyCsvParser uses to read a file.
Benchmark Code
@Test
public void testReadFromFile_SequentialRead() {
    MeasurementUtils.MeasureElapsedTime("LocalWeatherData_SequentialRead", () -> {
        // Read the file. Make sure to wrap it in a try, so the file handle gets disposed properly:
        try(Stream<String> stream = Files.lines(FileSystems.getDefault().getPath("C:\\Users\\philipp\\Downloads\\csv", "201503hourly.txt"), StandardCharsets.UTF_8)) {
            List<String> result = stream
                    .collect(Collectors.toList()); // turn it into a List!

            // Make sure we got the correct amount of lines in the file:
            Assert.assertEquals(4496263, result.size());
        } catch(IOException e) {
            throw new RuntimeException(e);
        }
    });
}
Make sure to always close the Stream returned by Files.lines, because it is not closed automatically!
Benchmark Result
[LocalWeatherData_SequentialRead] PT4.258S
Reading the CSV file takes around 4.3 seconds. So the entire mapping from CSV to objects cannot be faster than 4.3 seconds.
On Closing the Stream
I do not really understand why Files.lines has to be wrapped in a try(...) block to get closed. After all, the method returns a Stream<String>... Why on earth can't the Stream be automatically disposed when the entire Stream has been consumed? That also means I have to impose the closing of the Stream returned by Files.lines on the user of JTinyCsvParser. This is by no means obvious (except through comments maybe), but there seems to be no way around it in Java 1.8.
JTinyCsvParser
In order to parse a CSV file into a strongly-typed object, you have to define the domain model in your application and a CsvMapping for the class.
Domain Model
public class LocalWeatherData {

    private String WBAN;
    private LocalDate Date;
    private String SkyCondition;

    public String getWBAN() { return WBAN; }

    public void setWBAN(String WBAN) { this.WBAN = WBAN; }

    public LocalDate getDate() { return Date; }

    public void setDate(LocalDate date) { Date = date; }

    public String getSkyCondition() { return SkyCondition; }

    public void setSkyCondition(String skyCondition) { SkyCondition = skyCondition; }
}
CsvMapping
We only want to map the columns WBAN (Column 0), Date (Column 1) and SkyCondition (Column 4) to the domain model, which is done by using the MapProperty function.
public class LocalWeatherDataMapper extends CsvMapping<LocalWeatherData> {

    public LocalWeatherDataMapper(IObjectCreator creator) {
        super(creator);

        MapProperty(0, String.class, LocalWeatherData::setWBAN);
        MapProperty(1, LocalDate.class, LocalWeatherData::setDate, new LocalDateConverter(DateTimeFormatter.ofPattern("yyyyMMdd")));
        MapProperty(4, String.class, LocalWeatherData::setSkyCondition);
    }
}
Benchmarking JTinyCsvParser (Single Threaded)
Benchmark Code
@Test
public void testReadFromFile_LocalWeatherData_Sequential() {
    // Not in parallel:
    CsvParserOptions options = new CsvParserOptions(true, ",", false);
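The listing breaks off after the options are set up. A minimal self-contained sketch of how the full benchmark might look, using only the parse(List<String>) call demonstrated in the basic usage section (the real benchmark presumably streams the file instead of loading it fully into memory):

@Test
public void testReadFromFile_LocalWeatherData_Sequential_Sketch() {
    // Not in parallel:
    CsvParserOptions options = new CsvParserOptions(true, ",", false);
    LocalWeatherDataMapper mapping = new LocalWeatherDataMapper(() -> new LocalWeatherData());
    CsvParser<LocalWeatherData> parser = new CsvParser<>(options, mapping);

    MeasurementUtils.MeasureElapsedTime("LocalWeatherData_Sequential_Parse", () -> {
        try {
            // Load all lines, then map them exactly as in the basic usage example:
            List<String> lines = Files.readAllLines(FileSystems.getDefault().getPath("C:\\Users\\philipp\\Downloads\\csv", "201503hourly.txt"), StandardCharsets.UTF_8);
            long count = parser.parse(lines).count();
            System.out.println("Parsed results: " + count);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    });
}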
[LocalWeatherData_Sequential_Parse] PT19.252S
Parsing the entire file takes approximately 20 seconds. I think this is a reasonable speed, and it is comparable to the TinyCsvParser performance for a single-threaded run. A lot of stuff is going on in the parsing; especially auto-boxing values is a time-consuming task, I guess. I didn't profile the entire library, so I cannot tell exactly where one could squeeze out the last CPU cycles.
Benchmarking JTinyCsvParser (Parallel Streams, Without Bugfix)
Java 1.8 introduced Parallel Streams to simplify parallel computing in applications. You can basically turn every simple Stream into a Parallel Stream by calling the parallel() method on it. One weird thing is that I don't have any control over the degree of parallelism at this point. By default the number of processors is used for the default ForkJoinPool. But describing Parallel Streams in Java 1.8 is out of scope for this article.
There is a great write-up on parallel processing in Java using Streams by Marko Topolnik:
Why using a Parallel Stream?
We have learnt that the mapping to objects is largely CPU-bound. It is a well-defined problem, and by throwing some more cores at it, we should see significantly improved performance.
Benchmark Code
In order to process the data in parallel, you have to set the parallel parameter in the CsvParserOptions.
@Test
public void testReadFromFile_LocalWeatherData_Parallel() {
    // See the third constructor argument. It sets the Parallel processing to true!
    CsvParserOptions options = new CsvParserOptions(true, ",", true);
The results are not satisfying. Although all cores are utilized while processing the file, it actually leads to a slow-down.
[LocalWeatherData_Parallel_Parse] PT26.232S
Why is that?
Well, in order to parallelize a task, Java has to split the problem into sub-problems somehow. This is done by using a Spliterator, which basically means "splittable Iterator". The Spliterator has a method trySplit(), that splits off a chunk of elements to be processed by the threads. I assume that the size of the data is not known ahead of time, and that's why Java 1.8 initializes the estimated size with Long.MAX_VALUE (unknown size). We can find the confirmation for it if we take a look into the OpenJDK Bugtracker, in the ticket titled:
Benchmarking JTinyCsvParser (Parallel Streams, With Bugfix)
We have seen that there is a bug in the Spliterator for Files.lines, but the OpenJDK bug ticket JDK-8072773 also references a bug fix. When I backport the bugfix mentioned in JDK-8072773 to Java 1.8, the file is split correctly. The file is then parsed in 12 seconds.
[LocalWeatherData_Parallel_Parse] PT11.773S
But since the OpenJDK code is released under the terms of the GPL v2 license, I cannot include the mentioned bugfix in JTinyCsvParser.
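A license-free workaround, assuming the whole file fits in memory, is to materialize the lines into a list first: an ArrayList reports its exact size, so its spliterator splits evenly without the backported fix. A minimal sketch:

// Read everything up front; the resulting List has a sized spliterator.
List<String> lines = Files.readAllLines(
        FileSystems.getDefault().getPath("C:\\Users\\philipp\\Downloads\\csv", "201503hourly.txt"),
        StandardCharsets.UTF_8);

// The parallel stream over the list now knows its size and splits evenly:
long count = lines.parallelStream()
        .filter(line -> !line.isEmpty())
        .count();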
Conclusion
I have presented you JTinyCsvParser, which is a small CSV Parser I have written. It provides a Streaming API, so the user can build custom processing pipelines.
In the end I have to admit that Java 1.8 makes writing Java a little less painful. There are a lot of nice additions like Streams and lambda functions. But honestly, I will never get warm with type erasure. Type erasure is really a major pain to deal with; porting JTinyCsvParser over from .NET confirmed this to me. Anyway, it was a good exercise to see what Java 1.8 offers and sharpen my skills... if I ever have to read a Java codebase.
I hope this article was a nice read and gave you some ideas how to use Java 1.8 to your own advantage.
|
https://www.bytefish.de/blog/jtinycsvparser.html
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
- 03 Jun, 2018 1 commit
-
- 23 May, 2018 1 commit
-
- 19 May, 2018 1 commit
- much, more more control
- 10 Feb, 2018 2 commits
- style guide: 2018 edition - thankfully nothing big breaks - man this is a bigger project than I remember
- the footer is not responsive enough, this is something masonstrap should address
- 13 Jan, 2018 1 commit
- landing.html - manage.html
- 07 Jan, 2018 1 commit
- /myLinks -> /my - /newLink -> /new - /useradmin -> /manage
- 13 Oct, 2017 1 commit
- Chris Gallarno authored
- Implemented sorting with sort methods of -- Most Recent -- Oldest -- Alphabetical (A-Z) -- Alphabetical (Z-A) -- Most Popular -- Least Popular -- Expires Soon - Closes #156
- 21 Aug, 2017 1 commit
- causes 500 on prod - it's pretty stupid anyways, the lazy way out
- 26 Apr, 2017 1 commit
- Add in 2.2 CHANGELOG - Fix last minute bug with the CSRF check - Missed a spot in the footer
- 25 Apr, 2017 2 commits
-
- 24 Apr, 2017 1 commit
-
- 19 Apr, 2017 1 commit
- if you embed "/delete/memedaddy" into a page the link would get deleted - this no longer is allowed
- 29 Mar, 2017 5 commits
- copy paste fix
- It does not add anything, rather just modifies the Crispy Forms layout - I do love me some crispy forms
- 4 commits
- Also remove old css file - Better imports for forms.py
- simple page, no form implementation yet
- a new URL (and test) - a new view (and test)
- Over time I get annoyed with how I formatted things in the past
- 27 Mar, 2017 2 commits
- 2 commits
- Man those linter lines really make me go crazy
- forgot to update the template name
- 24 Mar, 2017 1 commit
- and some formatting fixes I believe
- 23 Mar, 2017 1 commit
- Eyad Hasan authored
- 20 Mar, 2017 1 commit
- one day I'll make up my mind on style
- 18 Mar, 2017 1 commit
- 13 Mar, 2017 1 commit
def main():
    """ words about the function main() """
    print("Hello World!")
- 25 Feb, 2017 1 commit
- this will allow us to display form errors before calling the post method - keep things nice and abstracted
- 07 Feb, 2017 2 commits
-
- bunch of blanks for now
- 05 Feb, 2017 1 commit
- Grady Moran authored
Modified requirements.txt to add the django-ratelimit () version 1.0.1 Modified views.py to take a big chunk out of index function and put it in a post function. This allows the ratelimit decorations to work on that function.
- 02 Jan, 2017 4 commits
- plus spacing fixes
- teleport future packages into the present!
-
|
https://git.gmu.edu/srct/go/-/commits/2edec6ef906c5c1f30b00af009d187d2b8012bfb/go/go/views.py
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
The Daemon Extension enables applications to easily perform standard daemonization operations.
Features
Configurable runtime user and group
Adds the --daemon command line option
Adds the app.daemonize() function to trigger daemon functionality where necessary (either in a cement pre_run hook or an application controller sub-command, etc.)
Manages a PID file, including cleanup on app.close()
API References:
Python 2.6+, 3+
Unix/Linux
macOS
The daemon extension is configurable with the following settings under a [daemon] section in the application configuration.
Configurations can be passed as defaults to App:
from cement import App, init_defaults

DEFAULTS = init_defaults('myapp', 'daemon')
DEFAULTS['daemon']['user'] = 'myuser'
DEFAULTS['daemon']['group'] = 'mygroup'
DEFAULTS['daemon']['dir'] = '/var/lib/myapp/'
DEFAULTS['daemon']['pid_file'] = '/var/run/myapp/myapp.pid'
DEFAULTS['daemon']['umask'] = 0

class MyApp(App):
    class Meta:
        label = 'myapp'
        config_defaults = DEFAULTS
Application defaults are then overridden by configurations parsed via a [daemon] config section in any of the application's configuration paths. An example configuration block would look like:
[daemon]
user = myuser
group = mygroup
dir = /var/lib/myapp/
pid_file = /var/run/myapp/myapp.pid
umask = 0
The following example shows how to add the daemon extension, as well as trigger daemon functionality before app.run() is called.
from time import sleep
from cement import App

class MyApp(App):
    class Meta:
        label = 'myapp'
        extensions = ['daemon']

with MyApp() as app:
    app.daemonize()
    app.run()

    count = 0
    while True:
        count = count + 1
        print('Iteration: %s' % count)
        sleep(10)
Some applications may prefer to only daemonize certain sub-commands rather than the entire parent application. For example:
from cement import App, Controller, ex

class Base(Controller):
    class Meta:
        label = 'base'

    @ex(help="run the daemon command.")
    def run_forever(self):
        from time import sleep
        self.app.daemonize()

        count = 0
        while True:
            count = count + 1
            print(count)
            sleep(10)

class MyApp(App):
    class Meta:
        label = 'myapp'
        handlers = [Base]
        extensions = ['daemon']

with MyApp() as app:
    app.run()
By default, even after app.daemonize() is called, the application will continue to run in the foreground, but will still manage the pid and user/group switching. To detach a process and send it to the background you simply pass the --daemon option at the command line.
$ python example.py --daemon

$ ps -x | grep example
37421   ??  0:00.01 python example2.py --daemon
37452 ttys000  0:00.00 grep example
Some use cases might require daemonizing the process without having to always pass the --daemon option, or where passing the option might be redundant. You can work around that programmatically by simply overriding the daemon argument value in order to force daemonization even if --daemon wasn't passed.
app.pargs.daemon = True
app.daemonize()
Note that this would only work after arguments have been parsed (i.e. after app.run() is called).
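The features list above also mentions triggering daemonization from a cement pre_run hook. A minimal sketch of that variant, assuming pre_run hooks receive the app object and that hooks can be registered via the hooks Meta option:

from cement import App

def daemonize_hook(app):
    # Daemonize before the application's main logic runs.
    app.daemonize()

class MyApp(App):
    class Meta:
        label = 'myapp'
        extensions = ['daemon']
        hooks = [('pre_run', daemonize_hook)]

with MyApp() as app:
    app.run()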
|
https://docs.builtoncement.com/extensions/daemon
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Cement defines a Mail Interface, as well as the default DummyMailHandler that implements the interface as a placeholder but does not actually send any mail.
Cement often includes multiple handler implementations of an interface that may or may not have additional features or functionality than the interface requires. The documentation below only references usage based on the interface and default handler (not the full capabilities of an implementation).
Cement Extensions that Provide Mail Handlers:
API References:
The following options under App.Meta modify mail handling:
from cement import App

with App('myapp') as app:
    app.run()

    # send a message using the defined mail handler
    app.mail.send("Test mail message",
        subject='My Subject',
        to=['me@example.com'],
        from_addr='noreply@localhost',
    )
python myapp.py

=============================================================================
DUMMY MAIL MESSAGE
-----------------------------------------------------------------------------

To: me@example.com
From: noreply@localhost
CC:
BCC:
Subject: My Subject

---

Test mail message

-----------------------------------------------------------------------------
The default dummy mail handler simply prints the message to console, and does not send anything. You can override the mail handler via App.Meta.mail_handler, for example using the SMTP Extension.
All interfaces in Cement can be overridden with your own implementation. This can be done either by sub-classing MailHandler itself, or by sub-classing an existing extension's handlers in order to alter their functionality.
myapp.py

from cement import App
from cement.core.mail import MailHandler

class MyMailHandler(MailHandler):
    class Meta:
        label = 'my_mail_handler'

    # do something to implement the interface

class MyApp(App):
    class Meta:
        label = 'myapp'
        mail_handler = 'my_mail_handler'
        handlers = [
            MyMailHandler,
        ]
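As a minimal sketch of what "implement the interface" could look like, here is a hypothetical handler that logs messages instead of delivering them; the send(body, **kw) shape is assumed from the usage example above:

from cement.core.mail import MailHandler

class LoggingMailHandler(MailHandler):
    class Meta:
        label = 'logging_mail_handler'

    def send(self, body, **kw):
        # Log the message instead of delivering it anywhere.
        self.app.log.info('mail to=%s subject=%s body=%s' % (
            kw.get('to'), kw.get('subject'), body))
        return True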
|
https://docs.builtoncement.com/core-foundation/mail-messaging
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Distributed Tracing¶
You can use Open Tracing to trace your API calls to Seldon Core. By default we support Jaeger for Distributed Tracing, which will allow you to obtain insights on latency and performance across each microservice-hop in your Seldon deployment.
Install Jaeger¶
You will need to install Jaeger on your Kubernetes cluster. Follow their documentation
Configuration¶
You will need to annotate your Seldon Deployment resource with environment variables to make tracing active and set the appropriate Jaeger configuration variables.
For the Seldon Service Orchestrator you will need to set the environment variables in the spec.predictors[].svcOrchSpec.env section. See the Jaeger Java docs for available configuration variables.
For each Seldon component you run (e.g., model, transformer, etc.) you will need to add environment variables to the container section.
Python Wrapper Configuration¶
Add an environment variable: TRACING with value 1 to activate tracing.
You can utilize the default configuration by simply providing the name of the Jaeger agent service in the JAEGER_AGENT_HOST environment variable. Override the default Jaeger agent port 5775 by setting the JAEGER_AGENT_PORT environment variable.
To provide a custom configuration following the Jaeger Python configuration YAML defined here, you can provide a ConfigMap and the path to the YAML file in the JAEGER_CONFIG_PATH environment variable.
An example is show below:
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: tracing-example
  namespace: seldon
spec:
  name: tracing-example
  predictors:
  - componentSpecs:
    - spec:
        containers:
        - env:
          - name: TRACING
            value: '1'
          - name: JAEGER_AGENT_HOST
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
          - name: JAEGER_AGENT_PORT
            value: '5775'
          - name: JAEGER_SAMPLER_TYPE
            value: const
          - name: JAEGER_SAMPLER_PARAM
            value: '1'
          image: seldonio/mock_classifier_rest:1.3
          name: model1
        terminationGracePeriodSeconds: 1
    graph:
      children: []
      endpoint:
        type: REST
      name: model1
      type: MODEL
    name: tracing
    replicas: 1
    svcOrchSpec:
      env:
      - name: TRACING
        value: '1'
      - name: JAEGER_AGENT_HOST
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
      - name: JAEGER_AGENT_PORT
        value: '5775'
      - name: JAEGER_SAMPLER_TYPE
        value: const
      - name: JAEGER_SAMPLER_PARAM
        value: '1'
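Assuming the manifest above is saved as tracing-example.yaml (the filename is arbitrary), it can be applied to the cluster in the usual way:

kubectl apply -f tracing-example.yaml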
Worked Example¶
You can see it in action and try it yourself by following the example below:
A fully worked template example is provided.
|
https://docs.seldon.io/projects/seldon-core/en/latest/graph/distributed-tracing.html
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Problem 36
Expected time to find a Delawarean
Due: February 9
Points: 3
The State of Delaware sports 3 representatives in the U.S. Congress, out of a total of 541 voting and non-voting members. Based on this, we might expect, on average, 3 in every 541 Midshipmen to be from Delaware.*
(*The number is probably even higher than this, based on the vice president and the general quality of people who grew up in Delaware. If this ratio of approximately 0.55% seems small, compare it to the total population of the state compared to the entire country, which is 917092/315487000, or about 0.29%. Take that, Texas!)
Consider the following algorithm to find a Delawarean midshipman:
def findBlueHen():
    M = chooseRandomMidshipman()
    if M is from Delaware:
        return M
    else:
        return findBlueHen()
Part 1: Write a recurrence to describe the running time of the recursive algorithm. There should be 2 cases, and a probability for each case.
Part 2: Solve your recurrence to determine the expected running time of the algorithm.
|
https://www.usna.edu/Users/cs/roche/courses/s16si486h/probs/036.php
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Can anyone help me understand why the following integer comparison fails?
import subprocess

cmd = "adb -s serialid shell getprop sys.boot"
proc = subprocess.Popen(cmd.split(' '), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
outs, errs = proc.communicate()
print outs
if outs == 1:
    print "Condition met.."
else:
    print "Condition fail.."
Z:\loadbuild>python calculate_attempts.py
1
Condition fail..
outs is the stuff that the process prints to standard output. As such, it will be a string, not an int. Since they are different types, the comparison will always fail.
Perhaps your condition should be something like:
if outs.strip() == '1': ...
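Putting it together, the original script with the comparison fixed (note that on Python 2, outs is a byte string such as "1\n", so the trailing newline has to be stripped before comparing):

import subprocess

cmd = "adb -s serialid shell getprop sys.boot"
proc = subprocess.Popen(cmd.split(' '), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
outs, errs = proc.communicate()

# Compare as a string; int(outs) == 1 would also work for numeric output.
if outs.strip() == '1':
    print "Condition met.."
else:
    print "Condition fail.."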
|
https://codedump.io/share/4qo61FgdgO41/1/integer-comparision-failure
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Memory management comprises three general areas: first, function and operator calls via new and delete (operator or member function calls); second, allocation via allocator; and finally, smart pointer and intelligent pointer abstractions.
Memory management for Standard Library entities is encapsulated in a class template called allocator. The allocator abstraction is used throughout the library in string, container classes, algorithms, and parts of iostreams. This class, and base classes of it, are the superset of available free store ("heap") management classes.
The C++ standard only gives a few directives in this area:
When you add elements to a container, and the container must allocate more memory to hold them, the container makes the request via its Allocator template parameter, which is usually aliased to allocator_type. This includes adding chars to the string class, which acts as a regular STL container in this respect.
The default Allocator argument of every container-of-T is allocator<T>.
The interface of the allocator<T> class is extremely simple. It has about 20 public declarations (nested typedefs, member functions, etc), but the two which concern us most are:
T* allocate (size_type n, const void* hint = 0); void deallocate (T* p, size_type n);
The n arguments in both those functions is a count of the number of T's to allocate space for, not their total size. (This is a simplification; the real signatures use nested typedefs.)
The storage is obtained by calling ::operator new, but it is unspecified when or how often this function is called. The use of the hint is unspecified, but intended as an aid to locality if an implementation so desires. [20.4.1.1]/6
Complete details can be found in the C++ standard, look in [20.4 Memory].
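To make the interface concrete, here is a small illustrative sketch (not part of the original documentation) of calling allocate and deallocate directly, using the pre-C++17 construct/destroy members:

#include <memory>
#include <string>

int main()
{
    std::allocator<std::string> alloc;

    // Raw storage for 3 strings: n counts objects, not bytes.
    std::string* p = alloc.allocate(3);

    // allocate() returns uninitialized memory, so the objects must be
    // constructed and destroyed explicitly.
    for (int i = 0; i < 3; ++i)
        alloc.construct(p + i, "sample");

    for (int i = 0; i < 3; ++i)
        alloc.destroy(p + i);

    alloc.deallocate(p, 3);
}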
The easiest way of fulfilling the requirements is to call operator new each time a container needs memory, and to call operator delete each time the container releases memory. This method may be slower than caching the allocations and re-using previously-allocated memory, but has the advantage of working correctly across a wide variety of hardware and operating systems, including large clusters. The __gnu_cxx::new_allocator implements the simple operator new and operator delete semantics, while __gnu_cxx::malloc_allocator implements much the same thing, only with the C language functions std::malloc and std::free.
Another approach is to use intelligence within the allocator class to cache allocations. This extra machinery can take a variety of forms: a bitmap index, an index into exponentially increasing power-of-two-sized buckets, or a simpler fixed-size pooling cache. The cache is shared among all the containers in the program: when your program's std::vector<int> gets cut in half and frees a bunch of its storage, that memory can be reused by the private std::list<WonkyWidget> brought in from a KDE library that you linked against. And operators new and delete are not always called to pass the memory on, either, which is a speed bonus. Examples of allocators that use these techniques are __gnu_cxx::bitmap_allocator, __gnu_cxx::pool_allocator, and __gnu_cxx::__mt_alloc.
Depending on the implementation techniques used, the underlying operating system, and compilation environment, scaling caching allocators can be tricky. In particular, order-of-destruction and order-of-creation for memory pools may be difficult to pin down with certainty, which may create problems when used with plugins or when loading and unloading shared objects in memory. As such, using caching allocators on systems that do not support abi::__cxa_atexit is not recommended.
The only allocator interface that is supported is the standard C++ interface. As such, all STL containers have been adjusted, and all external allocators have been modified to support this change.
The class allocator just has typedef, constructor, and rebind members. It inherits from one of the high-speed extension allocators, covered below. Thus, all allocation and deallocation depends on the base class. The base class that allocator is derived from may not be user-configurable.
It's difficult to pick an allocation strategy that will provide maximum utility, without excessively penalizing some behavior. In fact, it's difficult just deciding which typical actions to measure for speed.
Three synthetic benchmarks have been created that provide data that is used to compare different C++ allocators. These tests are:
Insertion. Over multiple iterations, various STL container objects have elements inserted to some maximum amount. A variety of allocators are tested. Test source for sequence and associative containers.
Insertion and erasure in a multi-threaded environment. This test shows the ability of the allocator to reclaim memory on a per-thread basis, as well as measuring thread contention for memory resources. Test source here.
A threaded producer/consumer model. Test source for sequence and associative containers.
The current default choice for allocator is __gnu_cxx::new_allocator.
In use, allocator may allocate and deallocate using implementation-specific strategies and heuristics. Because of this, a given call to an allocator object's allocate member function may not actually call the global operator new, and a given call to the deallocate member function may not call operator delete.
In particular, this can make debugging memory errors more
difficult, especially when using third-party tools like valgrind or
debug versions of
new.
There are various ways to solve this problem. One would be to use a custom allocator that just called operators new and delete directly, for every allocation. (See the default allocator, include/ext/new_allocator.h, for instance.) However, that option may involve changing source code to use a non-default allocator. Another option is to force the default allocator to remove caching and pools, and to directly allocate with every call of allocate and directly deallocate with every call of deallocate, regardless of efficiency. As it turns out, this last option is also available.
To globally disable memory caching within the library for some of the optional non-default allocators, merely set GLIBCXX_FORCE_NEW (with any value) in the system's environment before running the program. If your program crashes with GLIBCXX_FORCE_NEW in the environment, it likely means that you linked against objects built against the older library (objects which might still be using the cached allocations...).
You can specify different memory management schemes on a per-container basis, by overriding the default Allocator template parameter. For example, an easy (but non-portable) method of specifying that only malloc or free should be used instead of the default node allocator is:
std::list <int, __gnu_cxx::malloc_allocator<int> > malloc_list;
Likewise, a debugging form of whichever allocator is currently in use:
std::deque <int, __gnu_cxx::debug_allocator<std::allocator<int> > > debug_deque;
Writing a portable C++ allocator would dictate that the interface would look much like the one specified for allocator. Additional member functions, but not subtractions, would be permissible.
Probably the best place to start would be to copy one of the extension allocators: say a simple one like new_allocator.
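A minimal allocator in the spirit of new_allocator might look like the following sketch (C++03-style; the class name is illustrative, and the real include/ext/new_allocator.h should be preferred as a reference):

#include <cstddef>
#include <new>

template<typename Tp>
struct trivial_allocator
{
    typedef std::size_t    size_type;
    typedef std::ptrdiff_t difference_type;
    typedef Tp*            pointer;
    typedef const Tp*      const_pointer;
    typedef Tp&            reference;
    typedef const Tp&      const_reference;
    typedef Tp             value_type;

    template<typename Up> struct rebind { typedef trivial_allocator<Up> other; };

    trivial_allocator() throw() { }
    trivial_allocator(const trivial_allocator&) throw() { }
    template<typename Up> trivial_allocator(const trivial_allocator<Up>&) throw() { }

    // Every allocation goes straight to the global operator new.
    pointer allocate(size_type n, const void* = 0)
    { return static_cast<Tp*>(::operator new(n * sizeof(Tp))); }

    // Every deallocation goes straight to the global operator delete.
    void deallocate(pointer p, size_type)
    { ::operator delete(p); }

    size_type max_size() const throw()
    { return size_type(-1) / sizeof(Tp); }

    void construct(pointer p, const Tp& val) { ::new((void*)p) Tp(val); }
    void destroy(pointer p) { p->~Tp(); }
};

// All instances are interchangeable.
template<typename Tp>
bool operator==(const trivial_allocator<Tp>&, const trivial_allocator<Tp>&) { return true; }
template<typename Tp>
bool operator!=(const trivial_allocator<Tp>&, const trivial_allocator<Tp>&) { return false; }

Such an allocator can then be used per-container, e.g. std::list<int, trivial_allocator<int> >.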
Several other allocators are provided as part of this implementation. The location of the extension allocators and their names have changed, but in all cases, functionality is equivalent. Starting with gcc-3.4, all extension allocators are standard style. Before this point, SGI style was the norm. Because of this, the number of template arguments also changed. Here's a simple chart to track the changes.
More details on each of these extension allocators follows.
new_allocator
Simply wraps ::operator new and ::operator delete.
malloc_allocator
Simply wraps malloc and free. There is also a hook for an out-of-memory handler (for new/delete this is taken care of elsewhere).
array_allocator
Allows allocations of known and fixed sizes using existing global or external storage allocated via construction of std::tr1::array objects. By using this allocator, fixed size containers (including std::string) can be used without instances calling ::operator new and ::operator delete. This capability allows the use of STL abstractions without runtime complications or overhead, even in situations such as program startup. For usage examples, please consult the testsuite.
debug_allocator
A wrapper around an arbitrary allocator A. It passes on slightly increased size requests to A, and uses the extra memory to store size information. When a pointer is passed to deallocate(), the stored size is checked, and assert() is used to guarantee they match.
throw_allocator
Includes memory tracking and marking abilities as well as hooks for throwing exceptions at configurable intervals (including random, all, none).
__pool_alloc
A high-performance, single pool allocator. The reusable memory is shared among identical instantiations of this type. It calls through ::operator new to obtain new memory when its lists run out. If a client container requests a block larger than a certain threshold size, then the pool is bypassed, and the allocate/deallocate request is passed to ::operator new directly.
Older versions of this class take a boolean template parameter, called thr, and an integer template parameter, called inst.
The inst number is used to track additional memory pools. The point of the number is to allow multiple instantiations of the classes without changing the semantics at all. All three of

typedef __pool_alloc<true,0> normal;
typedef __pool_alloc<true,1> private_pool;   // "private" is a reserved word, so another name is needed
typedef __pool_alloc<true,42> also_private;

behave exactly the same way. However, the memory pool for each type (and remember that different instantiations result in different types) remains separate.
The library uses 0 in all its instantiations. If you wish to keep separate free lists for a particular purpose, use a different number.
The thr boolean determines whether the pool should be manipulated atomically or not. When thr = true, the allocator is thread-safe; when thr = false, it is slightly faster but unsafe for multiple threads.
For thread-enabled configurations, the pool is locked with a single big lock. In some situations, this implementation detail may result in severe performance degradation.
(Note that the GCC thread abstraction layer allows us to provide safe zero-overhead stubs for the threading routines, if threads were disabled at configuration time.)
__mt_alloc
A high-performance fixed-size allocator with exponentially-increasing allocations. It has its own chapter in the documentation.
bitmap_allocator
A high-performance allocator that uses a bit-map to keep track of the used and unused memory locations. It has its own chapter in the documentation.
Explaining all of the fun and delicious things that can happen with misuse of the auto_ptr class template (called AP here) would take some time. Suffice it to say that using AP safely in the presence of copying has some subtleties.
The AP class is a really nifty idea for a smart pointer, but it is one of the dumbest of all the smart pointers -- and that's fine.
AP is not meant to be a supersmart solution to all resource leaks everywhere. Neither is it meant to be an effective form of garbage collection (although it can help, a little bit). And it cannot be used for arrays!
AP is meant to prevent nasty leaks in the presence of exceptions. That's all. This code is AP-friendly:
// Not a recommended naming scheme, but good for web-based FAQs.
typedef std::auto_ptr<MyClass> APMC;

extern void function_taking_MyClass_pointer (MyClass*);
extern void some_throwable_function ();

void func (int data)
{
    APMC ap (new MyClass(data));

    some_throwable_function();   // this will throw an exception

    function_taking_MyClass_pointer (ap.get());
}
When an exception gets thrown, the instance of MyClass that's been created on the heap will be delete'd as the stack is unwound past func().
Changing that code as follows is not AP-friendly:
APMC ap (new MyClass[22]);
You will get the same problems as you would without the use of AP:
char* array = new char[10];   // array new...
...
delete array;                 // ...but single-object delete
AP cannot tell whether the pointer you've passed at creation points to one or many things. If it points to many things, you are about to die. AP is trivial to write, however, so you could write your own auto_array_ptr for that situation (in fact, this has been done many times; check the mailing lists, Usenet, Boost, etc).
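A bare-bones version of such a class might look like this (a sketch only; the name auto_array_ptr and its interface are illustrative, not a standard facility):

#include <cstddef>

template<typename T>
class auto_array_ptr
{
    T* ptr_;
public:
    explicit auto_array_ptr(T* p = 0) throw() : ptr_(p) { }
    // Like auto_ptr, copying transfers ownership.
    auto_array_ptr(auto_array_ptr& other) throw() : ptr_(other.release()) { }
    ~auto_array_ptr() { delete[] ptr_; }   // array delete, matching array new

    T& operator[](std::size_t i) const { return ptr_[i]; }
    T* get() const throw() { return ptr_; }
    T* release() throw() { T* p = ptr_; ptr_ = 0; return p; }
    void reset(T* p = 0) { if (p != ptr_) { delete[] ptr_; ptr_ = p; } }
};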
All of the containers described in the standard library require their contained types to have, among other things, a copy constructor like this:
struct My_Type { My_Type (My_Type const&); };
Note the const keyword; the object being copied shouldn't change.
The template class auto_ptr (called AP here) does not meet this requirement. Creating a new AP by copying an existing one transfers ownership of the pointed-to object, which means that the AP being copied must change, which in turn means that the copy ctors of AP do not take const objects.
The resulting rule is simple: Never ever use a container of auto_ptr objects. The standard says that “undefined” behavior is the result, but it is guaranteed to be messy.
To prevent you from doing this to yourself, the concept checks built in to this implementation will issue an error if you try to compile code like this:
#include <vector>
#include <memory>

void f()
{
    std::vector< std::auto_ptr<int> > vec_ap_int;
}
Should you try this with the checks enabled, you will see an error.
The shared_ptr class template stores a pointer, usually obtained via new, and implements shared ownership semantics.
The standard deliberately doesn't require a reference-counted implementation, allowing other techniques such as a circular-linked-list.
The shared_ptr code is kindly donated to GCC by the Boost project and the original authors of the code. The basic design and algorithms are from Boost; the notes below describe details specific to the GCC implementation. Names have been uglified in this implementation, but the design should be recognisable to anyone familiar with the Boost 1.32 shared_ptr.
The basic design is an abstract base class, _Sp_counted_base, that does the reference-counting and calls virtual functions when the count drops to zero. Derived classes override those functions to destroy resources in a context where the correct dynamic type is known. This is an application of the technique known as type erasure.
A shared_ptr<T> contains a pointer of type T* and an object of type __shared_count. The shared_count contains a pointer of type _Sp_counted_base* which points to the object that maintains the reference-counts and destroys the managed resource.
_Sp_counted_base<Lp>
Maintains the reference counts. The managed resource is destroyed when the last strong reference is dropped, but the _Sp_counted_base itself must exist until the last weak reference is dropped.
_Sp_counted_base_impl<Ptr, Deleter, Lp>
Inherits from _Sp_counted_base and stores a pointer of type Ptr and a deleter of type Deleter. _Sp_deleter is used when the user doesn't supply a custom deleter. Unlike Boost's, this default deleter is not "checked" because GCC already issues a warning if delete is used with an incomplete type. This is the only derived type used by tr1::shared_ptr<Ptr> and it is never used by std::shared_ptr, which uses one of the following types, depending on how the shared_ptr is constructed.
_Sp_counted_ptr<Ptr, Lp>
Inherits from _Sp_counted_base and stores a pointer of type Ptr, which is passed to delete when the last reference is dropped. This is the simplest form and is used when there is no custom deleter or allocator.
_Sp_counted_deleter<Ptr, Deleter, Alloc>
Inherits from _Sp_counted_ptr and adds support for a custom deleter and allocator. The Empty Base Optimization is used for the allocator. This class is used even when the user only provides a custom deleter, in which case std::allocator is used as the allocator.
_Sp_counted_ptr_inplace<Tp, Alloc, Lp>
Used by allocate_shared and make_shared. Contains aligned storage to hold an object of type Tp, which is constructed in-place with placement new. Has a variadic template constructor allowing any number of arguments to be forwarded to Tp's constructor. Unlike the other _Sp_counted_* classes, this one is parameterized on the type of object, not the type of pointer; this is purely a convenience that simplifies the implementation slightly.
C++11-only features are: rvalue-ref/move support, allocator support, the aliasing constructor, make_shared and allocate_shared. Additionally, the constructors taking auto_ptr parameters are deprecated in C++11 mode.
The Thread Safety section of the Boost shared_ptr documentation says "shared_ptr objects offer the same level of thread safety as built-in types." The implementation must ensure that concurrent updates to separate shared_ptr instances are correct even when those instances share a reference count e.g.
shared_ptr<A> a(new A);
shared_ptr<A> b(a);

// Thread 1        // Thread 2
a.reset();         b.reset();
The dynamically-allocated object must be destroyed by exactly one of the threads. Weak references make things even more interesting. The shared state used to implement shared_ptr must be transparent to the user and invariants must be preserved at all times. The key pieces of shared state are the strong and weak reference counts. Updates to these need to be atomic and visible to all threads to ensure correct cleanup of the managed resource (which is, after all, shared_ptr's job!) On multi-processor systems memory synchronisation may be needed so that reference-count updates and the destruction of the managed resource are race-free.
The function _Sp_counted_base::_M_add_ref_lock(), called when obtaining a shared_ptr from a weak_ptr, has to test if the managed resource still exists and either increment the reference count or throw bad_weak_ptr. In a multi-threaded program there is a potential race condition if the last reference is dropped (and the managed resource destroyed) between testing the reference count and incrementing it, which could result in a shared_ptr pointing to invalid memory.
The Boost shared_ptr (as used in GCC) features a clever lock-free algorithm to avoid the race condition, but this relies on the processor supporting an atomic Compare-And-Swap instruction. For other platforms there are fall-backs using mutex locks. Boost (as of version 1.35) includes several different implementations and the preprocessor selects one based on the compiler, standard library, platform etc. For the version of shared_ptr in libstdc++ the compiler and library are fixed, which makes things much simpler: we have an atomic CAS or we don't, see Lock Policy below for details.
There is a single _Sp_counted_base class, which is a template parameterized on the enum __gnu_cxx::_Lock_policy. The entire family of classes is parameterized on the lock policy, right up to __shared_ptr, __weak_ptr and __enable_shared_from_this. The actual std::shared_ptr class inherits from __shared_ptr with the lock policy parameter selected automatically based on the thread model and platform that libstdc++ is configured for, so that the best available template specialization will be used. This design is necessary because it would not be conforming for shared_ptr to have an extra template parameter, even if it had a default value. The available policies are:
_S_Atomic
Selected when GCC supports a builtin atomic compare-and-swap operation on the target processor (see Atomic Builtins.) The reference counts are maintained using a lock-free algorithm and GCC's atomic builtins, which provide the required memory synchronisation.
_S_Mutex
The _Sp_counted_base specialization for this policy contains a mutex, which is locked in add_ref_lock(). This policy is used when GCC's atomic builtins aren't available so explicit memory barriers are needed in places.
_S_Single
This policy uses a non-reentrant add_ref_lock() with no locking. It is used when libstdc++ is built without --enable-threads.
For all three policies, reference count increments and decrements are done via the functions in ext/atomicity.h, which detect if the program is multi-threaded. If only one thread of execution exists in the program then less expensive non-atomic operations are used.
dynamic_pointer_cast, static_pointer_cast, const_pointer_cast
As noted in N2351, these functions can be implemented non-intrusively using the alias constructor. However the aliasing constructor is only available in C++11 mode, so in TR1 mode these casts rely on three non-standard constructors in shared_ptr and __shared_ptr. In C++11 mode these constructors and the related tag types are not needed.
enable_shared_from_this
The clever overload to detect a base class of type enable_shared_from_this comes straight from Boost. There is an extra overload for __enable_shared_from_this to work smoothly with __shared_ptr<Tp, Lp> using any lock policy.
make_shared, allocate_shared
make_shared simply forwards to allocate_shared with std::allocator as the allocator.
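In user code the two look like this (a minimal sketch; the type A and its constructor arguments are illustrative):

#include <memory>

struct A { A(int, int) { } };

int main()
{
    // Arguments are forwarded to A's constructor; the A and its
    // control block can live in one heap allocation.
    std::shared_ptr<A> p = std::make_shared<A>(1, 2);

    // Same, but with an explicit allocator.
    std::shared_ptr<A> q = std::allocate_shared<A>(std::allocator<A>(), 3, 4);
}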
Although these functions can be implemented non-intrusively using the alias constructor, if they have access to the implementation then it is possible to save storage and reduce the number of heap allocations. The newly constructed object and the _Sp_counted_* can be allocated in a single block, and the standard says implementations are "encouraged, but not required," to do so. This implementation provides additional non-standard constructors (selected with the type _Sp_make_shared_tag) which create an object of type _Sp_counted_ptr_inplace to hold the new object. The returned shared_ptr<A> needs to know the address of the new A object embedded in the _Sp_counted_ptr_inplace, but it has no way to access it. This implementation uses a "covert channel" to return the address of the embedded object when get_deleter<_Sp_make_shared_tag>() is called. Users should not try to use this.
As well as the extra constructors, this implementation also needs some members of _Sp_counted_deleter to be protected where they could otherwise be private.
Examples of use can be found in the testsuite, under testsuite/tr1/2_general_utilities/shared_ptr, testsuite/20_util/shared_ptr and testsuite/20_util/weak_ptr.
The shared_ptr atomic access clause in the C++11 standard is not implemented in GCC.
Unlike Boost, this implementation does not use separate classes for the pointer+deleter and pointer+deleter+allocator cases in C++11 mode, combining both into _Sp_counted_deleter and using std::allocator when the user doesn't specify an allocator. If it was found to be beneficial, an additional class could easily be added. With the current implementation, the _Sp_counted_deleter and __shared_count constructors taking a custom deleter but no allocator are technically redundant and could be removed, changing callers to always specify an allocator. If a separate pointer+deleter class was added, the __shared_count constructor would be needed, so it has been kept for now.
The hack used to get the address of the managed object from _Sp_counted_ptr_inplace::_M_get_deleter() is accessible to users. This could be prevented if get_deleter<_Sp_make_shared_tag>() always returned NULL, since the hack only needs to work at a lower level, not in the public API. This wouldn't be difficult, but hasn't been done since there is no danger of accidental misuse: users already know they are relying on unsupported features if they refer to implementation details such as _Sp_make_shared_tag.
tr1::_Sp_deleter could be a private member of tr1::__shared_count but it would alter the ABI.
Acknowledgements: the original authors of the Boost shared_ptr, which is really nice code to work with; Peter Dimov in particular for his help and invaluable advice on thread safety; and Phillip Jordan and Paolo Carlini for the lock policy implementation.
Source: http://gcc.gnu.org/onlinedocs/libstdc++/manual/memory.html
Closures are functions, or references to functions, that hold non-local variables within their scope. These variables endure beyond the lifetime of the scope in which they were defined; they are enclosed within the lexical scope of the function.
This is particularly useful for JavaScript, where every function call (even a recursive call to the same function) creates a new execution context, and automatic garbage collection throws out all contexts with no remaining reference. For a detailed explanation, Jim Ley's description of closures in JavaScript has proven itself a great resource.
To create a closure in JavaScript, one assigns a reference to a nested function (inner function) declared within a different function object. The garbage collector then cannot remove the execution context of the outer function, as there still exists an object holding a reference to it. And this inner function keeps the scope of the outer function even after that function has returned. Here is a simple example:
function counterscope() {
    var counter = 0;
    function f() {
        return counter++;
    }
    return f;
}

var count = counterscope();
var count2 = counterscope();
console.log(count());  // 0
console.log(count());  // 1
console.log(count2()); // 0 -> references a different/new scope
console.log(count());  // 2
counterscope() returns a declaration of an inner function f, which, due to JavaScript's lexical scoping, inherits the scope of counterscope(), namely the variable counter. This means f holds a reference to counter, even though counter no longer exists in the context of counterscope() once that function returns. count holds the reference to f, and we can execute it repeatedly, changing the variable's state in the local context. count2 also holds a reference to f with its own scope, as we can see upon execution of count2().
This post can only provide a rough understanding of closures in JavaScript. A good understanding of closures is important for JavaScript development, as it is such a common and useful pattern. The same probably holds true for JavaScript's prototype concept. Reading through the references provided here should help you get comfortable with closures.
But how are closures useful? For example, you can use the enclosed context as a cache storing the results of an expensive function, or to implement private members for JavaScript objects, as described here: Private Members in JavaScript
Let's see how we could implement a local function cache with closures, in JavaScript and in Python.
var calc = (function executeFunc() {
    var cache = [];
    return function calc(x) {
        if (cache[x]) {
            console.log("Cache hit: " + cache[x]);
            return cache[x];
        }
        cache[x] = x + 10;
        return cache[x];
    };
})();

console.log(calc(10));
console.log(calc(20));
console.log(calc(10)); // cache hit
console.log(calc(30));
console.log(calc(20)); // cache hit
Here the closed-over array cache is used to store the results of our very complicated calculation. This example also differs slightly from the previous one in that we use an immediately invoked anonymous function, (function(){...})(), to build up the enclosing scope. Let's now see how we can implement the same functionality with Python.
Closures in Python work quite similarly to JavaScript: a function declaration is returned from a different function object. The caching example in Python looks as follows:
def calcFunc():
    cache = {}
    def calc(x):
        if x not in cache:
            print "Cache miss."
            cache[x] = x + 10
        return cache[x]
    return calc

count = calcFunc()
print count(10)  # Cache miss. 20
print count(10)  # 20
So what we see here is the use of closures in Python. As in JavaScript, a function declaration is returned which captures the scope of the enclosing function that returns it.
Source: https://henning.kropponline.de/2013/02/06/closures-with-javascript-and-python/
Sorry, we got hacked :-(
Sorry everyone we got hacked and it will take us a little while to put the site back together. Please be patient with us.
In this case I have not set any preferred size for the DotGrid so it will be whatever size its parent wants it to be. If you need it to be a minimum size you could call setPrefSize() or setMinSize() in the constructor.
Amazing news everyone, SceneBuilder 2.0 is released today. It has many cool new features and was a complete rewrite from the ground up so that it can be split into parts and embedded in your favorite IDE.
Mo has put up a great video tutorial for SceneBuilder 2 on YouTube: Watch
To find the new JavaFX 8 controls, type FX8 in the Library panel's search text field.
I just got a 3D Connexion SpaceNavigator, which is a kind of 3D input device (mouse/stick). It is cool to use when modeling 3D content for JavaFX, but I thought it would be even better if I could navigate my JavaFX 3D scenes with it. I managed to hack up some quick code to get it working in JavaFX. Many thanks to the JInput project; they made it super easy. It's super fun, so I recorded a little video to share with you.
SpaceNavigator with JavaFX from Jasper Potts on Vimeo.
The code really is very simple: I just have an AnimationTimer that, every frame, checks the current inputs from the device and applies them to the camera transforms. Via JInput, the device provides 6 floats (one per axis) and 2 booleans for the buttons, so it could not be easier to connect to your app. Below is a complete 3D app with a simple cube. I will be working on getting the object importers out in open source for you to use very soon 🙂
public class InputTestBlog extends Application {

    private ControllerEnvironment controllerEnvironment;
    private Controller spaceNavigator;
    private Component[] components;
    private Translate translate;
    private Rotate rotateX, rotateY, rotateZ;

    @Override
    public void start(Stage stage) throws Exception {
        controllerEnvironment = ControllerEnvironment.getDefaultEnvironment();
        Controller[] controllers = controllerEnvironment.getControllers();
        for (Controller controller : controllers) {
            if ("SpaceNavigator".equalsIgnoreCase(controller.getName())) {
                spaceNavigator = controller;
                System.out.println("USING Device [" + controller.getName()
                        + "] of type [" + controller.getType().toString() + "]");
                components = spaceNavigator.getComponents();
            }
        }
        Group root = new Group();
        Scene scene = new Scene(root, 1024, 768, true);
        stage.setScene(scene);
        scene.setFill(Color.GRAY);
        // CAMERA
        final PerspectiveCamera camera = new PerspectiveCamera(true);
        scene.setCamera(camera);
        root.getChildren().add(camera);
        // BOX
        Box testBox = new Box(5, 5, 5);
        testBox.setMaterial(new PhongMaterial(Color.RED));
        testBox.setDrawMode(DrawMode.LINE);
        root.getChildren().add(testBox);
        // MOVE CAMERA
        camera.getTransforms().addAll(
                rotateY = new Rotate(-20, Rotate.Y_AXIS),
                rotateX = new Rotate(-20, Rotate.X_AXIS),
                rotateZ = new Rotate(0, Rotate.Z_AXIS),
                translate = new Translate(5, -5, -15));
        // SHOW STAGE
        stage.show();
        // CHECK FOR INPUT
        if (spaceNavigator != null) {
            new AnimationTimer() {
                @Override
                public void handle(long l) {
                    if (spaceNavigator.poll()) {
                        for (Component component : components) {
                            switch (component.getName()) {
                                case "x":  translate.setX(translate.getX() + component.getPollData()); break;
                                case "y":  translate.setY(translate.getY() + component.getPollData()); break;
                                case "z":  translate.setZ(translate.getZ() + component.getPollData()); break;
                                case "rx": rotateX.setAngle(rotateX.getAngle() + component.getPollData()); break;
                                case "ry": rotateY.setAngle(rotateY.getAngle() + component.getPollData()); break;
                                case "rz": rotateZ.setAngle(rotateZ.getAngle() + component.getPollData()); break;
                            }
                        }
                    }
                }
            }.start();
        }
    }

    public static void main(String[] args) {
        System.setProperty("net.java.games.input.librarypath", new File("lib").getAbsolutePath());
        launch(args);
    }
}
Source: http://fxexperience.com/author/jasper/
java.lang.Object
  java.awt.Component
    java.awt.Container
      javax.swing.JComponent
        javax.swing.JPanel
          PIRL.Viewers.Memory_Panel
public class Memory_Panel
A JPanel that holds information about available Virtual Machine memory. Three values are displayed: free memory, total memory, and maximum memory.
Free memory is the amount of memory not yet in use by the Virtual Machine. This memory has been claimed from the available pool, however.
Total memory is the total amount of memory the Virtual Machine has claimed from the available pool of memory. Free memory is a subset of this quantity; when subtracted from this quantity, the difference is the amount of memory currently in use. This value will change over time as objects are created and destroyed.
Maximum memory is the maximum available pool of memory for the Virtual Machine. Its value is static once the Virtual Machine is instantiated, although runtime switches may be employed to allocate amounts other than the default; see the documentation for the Java Virtual Machine. Total memory is a subset of this quantity.
public static final String ID
public static final int DEFAULT_UNIT_DIVISOR
public static final String DEFAULT_UNITS
public Memory_Panel(int unit_divisor, String units)
This panel uses a GridBagLayout.
Parameters:
unit_divisor - the divisor to be used in scaling the displayed memory values.
units - the label that identifies the units used.
public Memory_Panel()
Constructs a Memory_Panel using the default divisor and default units for its information display.
See also: Memory_Panel(int, String)
public void update_labels()
public int unit_divisor()
public void unit_divisor(int unit_divisor)
unit_divisor- the new divisor for this Memory_Panel.
public String units()
public void units(String units)
units- the new units for this Memory_Panel.
public static long free_memory()
Runtime.freeMemory()
public static long total_memory()
Runtime.totalMemory()
public static long max_memory()
Runtime.maxMemory()
Source: http://pirlwww.lpl.arizona.edu/software/PIRL_Java_Packages/PIRL/Viewers/Memory_Panel.html
This is my first post in the Dani community. I am brand new to Java, 5 weeks into an intro to computer programming course. My assignment is to create 10 random numbers between 50 and 100, then sort them from lowest to highest and find the average, the max, and the min. I have got all but the sorting down. I have been struggling so much with this class I can't believe I got this far; any help in the right direction would be greatly appreciated.
package tenrandomnumbers;

import java.util.Scanner;
import java.util.Random;

public class TenRandomNumbers {

    public static void main(String[] args) {
        // TODO Auto-generated method stub
        int[] Random;
        Random = new int[10];
        int count = 0;
        double average = 0;
        int Max = 0;
        int Min = 100;
        int sum = 0;
        int number = 0;
        for (count = 0; count < 10; count++) {
            number = 50 + (int) (Math.random() * 50);
            Random[count] = number;
            System.out.println(Random[count]);
            sum = sum + number;
            average = sum / 10;
            while (Random[count] > Max)
                Max = Random[count];
            while (Random[count] < Min)
                Min = Random[count];
        }
        System.out.println("The average is: " + average);
        System.out.println("The highest number is " + Max);
        System.out.println("The lowest number is " + Min);
    }
}
Edited by mike_2000_17: Fixed formatting
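As a hint for the missing sorting step: java.util.Arrays.sort orders an array in place, after which the minimum and maximum fall out as the first and last elements. A minimal sketch (class and variable names illustrative, not the poster's code):

import java.util.Arrays;

public class SortHint {
    public static void main(String[] args) {
        int[] numbers = new int[10];
        for (int i = 0; i < numbers.length; i++) {
            numbers[i] = 50 + (int) (Math.random() * 51); // 50..100 inclusive
        }
        Arrays.sort(numbers); // sorts in place, lowest to highest
        System.out.println(Arrays.toString(numbers));
        System.out.println("Min: " + numbers[0] + "  Max: " + numbers[numbers.length - 1]);
    }
}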
Source: https://www.daniweb.com/programming/software-development/threads/265294/cant-sort-my-randomly-generated-numbers
You can run the esxcli command directly in the ESXi BusyBox shell. For that you will need either direct access to the ESXi console or SSH to the ESXi host (you need to turn on SSH shell access). Should you have multiple ESXi servers to run the commands on simultaneously, try the free DoubleCloud ICE I developed.
Alternatively, you can install vCLI on Windows or Linux, from where you can remotely manage ESXi servers. If you don't want to install the package yourself, you can download the vMA virtual appliance, which has everything pre-installed and pre-configured. When you run the command this way, there are additional parameters for the remote ESXi server address and credentials. Other than that, the command syntax is the same as the native esxcli command.
There is actually another choice which is not documented, but I dug it out anyway: you can run the esxcli command in a browser. If you are interested, you can check out this post. Note that this is not supported by VMware.
This tutorial will guide you through the functionality with samples and tips/tricks, assuming we are using the native esxcli command. Given the complexity of the command, I cannot run all the combinations of parameters; please feel free to use the help and experiment by yourself when you cannot find an exact sample. Should you find something worth sharing, please post it in the comments.
Where Is It Installed?
The esxcli command utility comes with the ESXi installation. Typing in the commands below will show where it's installed, and uncover a little secret of esxcli: it's essentially a Python script. If you are interested in what is in the script, you can simply check it out with the vi command. We'll not dig deeper there, but focus on the usage of the command as a user.
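A likely way to locate it (the exact path can vary between ESXi versions):

~ # which esxcli
~ # head -1 $(which esxcli)    # the shebang line reveals the Python interpreter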
What You Can Do With It?
Unlike the vim-cmd command, which mainly focuses on virtual machine related management, esxcli focuses on the infrastructure: hardware, storage, networking, ESXi software, and so on.
Typing the esxcli command without any argument displays the command usage. With the options, you can control the format of the output: XML, CSV, key/value pairs, or JSON.
The esxcli command is a very complex command that achieves a lot given it's a single command. To help users use the command without getting confused, namespaces are used to group the commands. The namespaces can be further divided into sub-namespaces depending on the complexity.
The following command shows the top-level namespaces, each of which maps nicely to a group of functionality. Just reading the descriptions of these should give you an idea of what esxcli can do for you. We'll dig deeper into each of them in the following sections.
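On a live host this is simply the bare command, which prints the usage text along with the available top-level namespaces (output omitted here):

~ # esxcli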
A bit more on the formats. In interactive mode, the default output format is perfect. If you use the esxcli command for automation, other formats may be preferred. For example, CSV can easily be imported into Excel or another spreadsheet for reporting and analytics. XML is very good for data exchange. Key/value pairs are easy for shell scripts to consume. There are actually undocumented formats as well, as I blogged about a while back.
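The format is selected with the global --formatter option; for example (the sub-command here is just an illustration):

~ # esxcli --formatter=csv network ip interface list
~ # esxcli --formatter=xml network ip interface list
~ # esxcli --formatter=keyvalue network ip interface list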
High Level Conventions
In some way, the esxcli namespaces are like Java packages. At certain points down the namespace hierarchy, you can find available commands (think of them as methods on a class). One big difference is that there can be commands on a non-leaf namespace. Simply entering the full esxcli and namespace path will show both available namespaces and available commands.
The commands are mostly verbs as follows:
* list – retrieve a list of the objects that are represented by the namespace
* get – get information like property
* set – set a value
* load/unload – load/unload configuration
Note that they are not necessarily available in all the namespaces. When in doubt, always check the command help.
Depending on the command, there may be additional options specific to the command. These options are different from the general options which are always available for all commands.
Without further ado, let's jump into the sub-commands one by one.
Listing All esxcli Namespaces and Commands
Interestingly, esxcli can also be a sub-command of the esxcli command itself. What it does is very simple: it lists all the available namespaces and their commands. Because there are so many lines in the output, I will just show you the first few; you can easily see the rest in your SSH session to the ESXi host.
Now, let's take a look at how to interpret the output. For each line, there is a corresponding esxcli command. Let's pick the second line, "fcoe.adapter". The related command looks like the following:
The rule of thumb is to replace the dots with spaces and combine the result with the top-level esxcli command and the trailing command verb. For some commands there are additional parameters you may need to add. For the list commands, it's mostly fine without additional parameters, except the format options we discussed earlier.
Managing Fiber Channel Over Ethernet (FCOE)
We will not cover FCOE here as there are many good introductions already. Check out Cormac's blog.
The following command shows it has two name spaces: adapter and nic.
To list adapters, you can use the following command:
I don’t have a FCOE adapter in my home lab, but got a sample out from VMware KB article here for each adapter:
For the NIC namespace, there are more commands available:
Here are the sample commands: (Again, the output sample from above KB article)
Managing Hardware
All the hardware related commands fit in the hardware namespace. As you can see from the following output, you can retrieve information about or manage various aspects of the hardware, including CPU, IPMI, boot device, clock, memory, PCI, platform (a slightly vague term, elaborated on soon), and trusted boot.
Because hardware is hard, you cannot do much with it beyond retrieving information about these components or aspects, so the list command is the most commonly used in this category.
The following command lists the CPU related information. There are 8 cores, but I just show the information for 2 cores here; the remaining 6 are pretty much the same.
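The command in question (output omitted here):

~ # esxcli hardware cpu list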
IPMI stands for Intelligent Platform Management Interface. It allows remotely managing the server over an IP network. From the esxcli command, you can further manage the Field Replaceable Units, the Sensor Data Repository, and the system event log.
My server does not have IPMI support, so the following commands return nothing or empty values. If you have IPMI-enabled servers, you will see more with the same commands.
Listing the boot devices is supported with the bootdevice namespace as follows. Interestingly, the output lists nothing – the server did boot successfully.
Every computer has a clock. With the esxcli command, you can get the time on the clock and change it. The parameters for setting a new time are a little tricky, but using the help should make it easy; you can set individual components of the time, say year, month, etc.
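A likely pair of invocations (the option letters follow the command help; the values are illustrative):

~ # esxcli hardware clock get
~ # esxcli hardware clock set -y 2018 -M 2 -d 20 -H 12 -m 0 -s 0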
Retrieving the physical memory of ESXi server is very simple with the following command. It also shows the NUMA node count.
PCI information can be retrieved with the pci name space as follows. The output is pretty long, so I just include the first device. With the long output, you can pipeline it to grep command for exact information you are interested in.
Now it’s time for the vague platform namespace. Let’s start with the command itself. With that, I don’t need to explain what the platform is.
The last one in the hardware namespace is trusted boot. Again, my server does not have the feature. For more details on the technology, check out the wiki page on trusted computing.
ESXi Kernel Scheduling
With the sched namespace, we can manage VMKernel system properties and configure scheduling related functionality like swapping.
The changing of properties is through the set command. You can change one or more properties at a time with the switches as listed below.
Managing VIB Software
The design of ESXi is to keep the hypervisor as small as possible. Being small has many benefits, for example, less exposure to security attacks. At the same time, we need some flexibility to install new drivers or other software agents on the ESXi. This allows VMware partners to customize the system and extend the system with additional functionalities.
For that purpose, VMware created VIB format based on a Linux package system. It has related SDK for others to build the VIB. The commands in this section is about how to manage the VIBs with command line.
You can use the sources namespace to "browse" the VIB packages in a VIB depot. Before running the command, you want to modify the firewall to allow outgoing HTTP connections.
In this sample, I used v-front.de, which has a rich collection of VIBs. In your company, you may have your own VIB depot.
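A likely sequence (the depot URL is the site named above and may need an index path, depending on the depot):

~ # esxcli network firewall ruleset set --ruleset-id httpClient --enabled true
~ # esxcli software sources vib list -d http://vibsdepot.v-front.de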
If you use the get command, you will get a lot more details than the list. The following shows one part of the long output.
This blog post has excellent introduction and more sample commands.
You can list the image profiles as follows. I intentionally used ESXi-6.0 for grep, or will get a lot more lines in output. Again, it connects to external site and you want to open firewall before this.
To find out the exact content in an image profile:
Supposedly you can download a zip and point the depot to it as follows. Somehow it does not work even though I had the full path to the zip file as recommended by a few bloggers.
The error message is misleading because the zip file seems to be a valid zip file (not all output lines are listed).
ESXi has different acceptance levels for installing new VIBs. Changing the acceptance level makes installing VIBs from other vendors possible.
The level must be one of the values: VMwareCertified, VMwareAccepted, PartnerSupported, CommunitySupported. The acceptance level decides whether a given package can be installed or not. From VMwareCertified to CommunitySupported, the criteria loosen up.
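Getting and setting the level looks like this (the level value is illustrative):

~ # esxcli software acceptance get
~ # esxcli software acceptance set --level=CommunitySupported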
Different VIBs combined together form a big collection called an image profile. To manage image profiles, the following commands can be used:
It's quite easy to list all the VIBs installed on the system. The following shows the command and part of its output.
To get more details on a specific VIB, you can use the get command with the VIB name.
To install a VIB, use the install command with the VIB location. For a VIB that is not signed, you want to turn off signature checking with the --no-sig-check option. WARNING: if your installation requires a reboot, you need to disable HA first.
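Likely forms of these three commands (the VIB name and file path are illustrative):

~ # esxcli software vib list
~ # esxcli software vib get -n net-e1000
~ # esxcli software vib install -v /tmp/driver.vib --no-sig-check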
iSCSI Management
iSCSI allows ESXi to use storage on remote iSCSI servers. The esxcli command can help manage this feature at different layers in the stack. Again, I don't have a setup with iSCSI support, so instead of real commands, the following mostly shows the help so that you can see what is there.
Network Management
ESXi has a rich set of features in networking. All the network related commands are grouped in the network namespace. To take a look at what are available there, simply type in:
As you can find there are more sub-namespaces, each of which is still a big topic of itself.
Firewall Management
To find out the current firewall status, get command is the way to go:
Firewall module can be unloaded or loaded using the unload and load command as follows:
When the firewall module is loaded, you can enable and disable the firewall as follows. You want to keep the firewall enabled at all times.
You can also set the default action of the firewall to pass everything or block everything. When true is used, the firewall lets everything through unless specified otherwise; when false is used, it blocks everything by default.
After you change the firewall configuration, the ruleset may be out of sync with the active state. To get back in sync, simply run the refresh command.
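Taken together, the overall firewall controls just described likely look like this:

~ # esxcli network firewall get
~ # esxcli network firewall unload
~ # esxcli network firewall load
~ # esxcli network firewall set --enabled true
~ # esxcli network firewall set --default-action false   # block by default
~ # esxcli network firewall refresh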
Above is the overall control of firewall. You can fine tune each individual rule set within the ruleset namespace.
You can change an individual ruleset to be enabled or disabled, and change its behavior to allow all IP addresses or only those specified.
Each ruleset consists of a collection of rules. You can list them using the rule namespace. If you don't provide a ruleset-id as follows, you will get a long list of rules from all the rulesets.
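Likely forms of these ruleset commands (the ruleset ID and address range are illustrative):

~ # esxcli network firewall ruleset list
~ # esxcli network firewall ruleset set --ruleset-id sshServer --enabled true
~ # esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
~ # esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.1.0/24
~ # esxcli network firewall ruleset rule list --ruleset-id sshServer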
Note that even with the powerful esxcli command, you cannot modify much of the firewall configuration. For everything else, you want to edit the configuration file directly and then refresh the firewall:
Managing Fencing Switches
For this feature, you must have a distributed virtual switch configured; otherwise you will see the same (empty) output as I saw here.
To get all fence network bridge table entries information, you can use the following command.
To get all fence port info on the fence network, you can use the following command:
Managing IP Addresses and Configurations
To get the IPv6 related support, simply type the following.
To turn IPv6 support on or off, you can use the set command. Note: you have to restart the ESXi host for the configuration to take effect.
To manage vmKernel network interfaces, you can use the interface namespace as follows:
Changing the IP address for ESXi on an existing interface is easy too, but be careful with this operation because your SSH session may be terminated immediately.
Managing DNS server
The ESXi host depends on its DNS settings to resolve server names. The following commands help manage DNS.
You can also change the default search domain in the DNS settings.
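Likely forms of the DNS commands (server address and domain are illustrative):

~ # esxcli network ip dns server list
~ # esxcli network ip dns server add --server 8.8.8.8
~ # esxcli network ip dns search add --domain example.com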
Network Security
There are two aspects of security you can manage: security associations and security policies. There are quite a few options you can tune; combined, they cover a lot of cases.
Managing Gateways
You can manage both IPv4 and IPv6 networks. I first show the IPv4 here because the IPv6 is very similar.
To change the gateway, try the following command. Again, your session may be terminated if you pass an invalid parameter.
To remove a route, you can use the remove command with the same parameters as the add command. Again, be careful, as your session may be terminated right away.
For the IPv6 listing of routes, you can see something similar as follows:
List Live Network Connections
At any given time, you can find out what live connections are there.
The type of connection can be filtered as follows. I intentionally used udp as it’s a lot less than the tcp connections.
Finding Neighbour on Network
You can use the esxcli command to list the network neighbors on the same network.
Managing Network Interface Card (NIC)
To turn on NIC VLAN stats, we can use set command as follows, and then we can check the VLAN stats.
For each NIC, you can get its stats with the stats namespace.
To get the current configuration of a network interface, you can use the get command with the name of the interface.
You can set the NIC with different parameters like speed and duplex mode. For an easy life, you can simply use the --auto option, which lets ESXi automatically negotiate these with the other end. When you use the --auto option, you must not specify the speed and duplex mode. NOTE: if you use SSH to reach your ESXi server, be careful not to mess up the NIC underlying your session.
You can also turn a particular NIC adapter on and off. Be careful not to turn off the NIC underlying your SSH session, or you will need physical access or IPMI.
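Likely forms of these NIC commands (the vmnic names are illustrative):

~ # esxcli network nic list
~ # esxcli network nic get -n vmnic0
~ # esxcli network nic set -n vmnic0 --auto
~ # esxcli network nic down -n vmnic1
~ # esxcli network nic up -n vmnic1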
Virtual Machine Related Networking
To list all the virtual machines and their ports and networks they connect to, use the list command as follows.
Further down, in the port namespace, we can get more details of the ports used by a specific virtual machine, which is identified by its world ID. Please note that you can also find the uplink port ID, which can be used like a normal VM port ID for its stats and filters.
Getting Port Related Stats and Filters
For each port, you can get the stats based its port number. The following command shows what are included in the stats.
To get port filter related stats, run the following command. I don't have any filters configured, therefore nothing is shown in the output.
Managing SR-IOV
SR-IOV stands for Single Root I/O Virtualization. It allows one PCI device to appear as multiple NICs to the hypervisor. Scott Lowe has a nice blog post on this topic.
To list the NICs that support SR-IOV, and to list the virtual functions (VF) on a NIC, run the following commands. On my host the result is simply:
There is no SRIOV Nic with name vmnic0
Managing Virtual Switches
The standard and distributed virtual switch are in two parallel namespaces. They allow lots of control over virtual switches.
To list all the virtual switches, use the list command as follows:
To create a new virtual switch, use the add command with port number and name.
The newly created virtual switch takes defaults for mtu and cdp, but you can change them using the set command with the options below.
You can delete a virtual switch with the remove command.
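Likely forms of the standard vswitch commands (switch name and port count are illustrative):

~ # esxcli network vswitch standard list
~ # esxcli network vswitch standard add --vswitch-name vSwitchTest --ports 128
~ # esxcli network vswitch standard set --vswitch-name vSwitchTest --mtu 9000
~ # esxcli network vswitch standard remove --vswitch-name vSwitchTest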
There are several policies you can change with the esxcli command, namely failover, security, and shaping. You can set these policies using the set command under the corresponding namespace. The name of the virtual switch is case sensitive.
To add a new uplink to a virtual switch, you can run the following command. The uplink name must be a valid pnic name.
To remove an uplink, just use the remove command.
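Likely forms of the uplink commands (names illustrative):

~ # esxcli network vswitch standard uplink add --uplink-name vmnic1 --vswitch-name vSwitch0
~ # esxcli network vswitch standard uplink remove --uplink-name vmnic1 --vswitch-name vSwitch0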
Managing Port Groups
A port group is a "virtual" concept: it defines common behavior for a group of ports.
To add a new port group, use the add command.
To remove a port group, use the remove command.
You can change the VLAN ID with the set command. Note that this may affect network connectivity.
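Likely forms of the port group commands (names and VLAN ID illustrative):

~ # esxcli network vswitch standard portgroup add --portgroup-name TestPG --vswitch-name vSwitch0
~ # esxcli network vswitch standard portgroup set --portgroup-name TestPG --vlan-id 42
~ # esxcli network vswitch standard portgroup remove --portgroup-name TestPG --vswitch-name vSwitch0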
There are similar policies that are associated with port groups as with the virtual switch. The following shows 3 commands and you can see what are included in the policy.
Distributed Virtual Switch
A distributed virtual switch is similar to a standard virtual switch, but the control is centralized in the vCenter server. The esxcli command allows control of the VMware distributed virtual switch. For the Cisco Nexus 1000v, you can use its own command line, which I introduced before.
~ # esxcli network vswitch dvs vmware
Usage: esxcli network vswitch dvs vmware {cmd} [cmd options]
Available Namespaces:
lacp A set of commands for LACP related operations
vxlan A set of commands for VXLAN related operations
Available Commands:
list List the VMware vSphere Distributed Switch currently configured on the ESXi host.
Diagnosing Network Connection with Ping
The ping command is very useful for testing network connections. The esxcli command comes with a handy version as well, with additional controls. I found it extremely helpful when I tested the jumbo frame configuration from my ESXi host to a NAS server.
There are more options as shown in the following.
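Likely invocations (host address and interface are illustrative; the second line is the kind of jumbo-frame test mentioned above, with a large payload and don't-fragment set):

~ # esxcli network diag ping -H 192.168.1.10 -c 3
~ # esxcli network diag ping -H 192.168.1.10 -I vmk0 -s 8972 -d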
Managing Storage
The storage is another heavy aspect of ESXi management. It includes 6 different sub namespaces, each of which is a big topic by itself.
Managing NFS
To list all the NFS volumes that are already mounted on the host, just run the list command.
To add a new NFS volume, you want to use the add command:
To remove an existing volume, then the remove command is handy.
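Likely forms of the NFS commands (server, share, and volume names are illustrative):

~ # esxcli storage nfs list
~ # esxcli storage nfs add --host nas.example.com --share /export/vol1 --volume-name nas_vol1
~ # esxcli storage nfs remove --volume-name nas_vol1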
Managing File System
To list all the mount points, you can run the following command. You can check out the volume names, types, size, free space.
With changes on the file system, you can also run the following command to rescan and automatically mount those unmounted.
To unmount a volume, use the unmount command with label or UUID.
Managing VMFS System
There is also a snapshot feature that can be managed with a few commands.
Managing SAN Storage
As I don’t have SAN storage in my lab, I won’t go deeper on this, but show the help of the command. You can drill down and give it try by yourself.
Managing Native Multi-Path
We can also list the devices involved in Native Multipath:
Multipath policy can also be configured using different plugins, which are listed as follows:
To list the configuration of a particular policy, we can use the following command (The device value is a little bit long but you can copy and paste it from the list command)
To list all the storage array plugin (SATP), the following command is handy.
To change the SATP, the following options are needed.
Core Storage
The following command lists all the storage adapters on the host,
To make sure the adapters are found, run the rescan all command.
IO statistics are also available with the following command. More lines are actually there, but skipped here to save space.
To list storage adapters, the following command works.
Get device statistics
To get a list of the worlds that are currently using devices on the ESX host, the following command is used:
Optionally, we can list the partitions with their GUID as follows:
To list all devices that were detached manually by changing their state on the system, run the following command. This is related to the ESXi feature of pluggable storage architectures (PSA).
For the VAAI feature, we can get their status. In my case, it’s not used at all therefore no much to show.
To list all the SCSI paths on the system, use the following command:
For each path, it can be set active, or off using the set command.
For path level stats, the following command can be used:
To list plugins in the system, the following command is used:
To allow automatic claiming process of PSA, the autoclaim can be used. By default, it’s enabled and should not be turned off.
New rules can also be added into the rule set. The command expects a few parameters, but the help is probably the best – it comes with a few examples.
Managing System Wide Settings
This section involves global setting for the ESXi management.
Get ESXi Boot Device
Managing Core Dump
The ESXi core can be dumped to a network server if you configure it as follows:
Cores can also be dumped into a local partition, and that involves another sub-namespace: partition.
To get the configuration, use the get command.
To set the configuration, the set command is handy.
You can also change the configuration with the set command.
Module Management
To list the modules in ESXi server, you can use the module sub namespace with list command. There are many lines of output not listed here.
To get details about a module, you can use the get command
To load a module, run the following command:
You can also list the parameters of a module as follows. There are actually more lines, but they are omitted. It's also quite possible that there is no output at all, because the module has no parameters to change.
Userworld Process
To list these processes, you can use the following command. Again, it’s not fully list due to space limit.
To find out how many userworld processes are running, try the following command:
To get the system workload, run the following command and get the load in 1, 5, and 15 minute period. You will get more from esxtop command anyway.
Security Policy
As usual, you can list all the security policies using the list command. With each policy, you can see its enforcement level.
To change the level of enforcement, use the set command. Other than the enforcing level, I haven't seen another level; I even tried none and a few others with no luck. The help doesn't help either, which is something for VMware to improve.
Kernel Settings
There are many parameters with which you can tune the ESXi. For a complete list of these parameters, you can use the list command as follows. As you expect, there are so many of them, I just show first two. The name of the setting is self explanatory with the description field.
ESXi Advanced Settings
There are many parameters you can fine tune the ESXi. The following shows the commands you can list them, change them.
Setting Keyboard Configurations
ESXi is like an operating system which can interact with users. One way of interaction is via the keyboard. The esxcli command allows changing the keyboard configuration.
To change the keyboard layout, the following command can be used. It’s not persisted across reboot with the no-persist option.
Getting Uptime of the ESXi server
The first command's output is not that readable because the unit is microseconds. You can try the second one to get days of uptime.
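Likely forms of the two commands (the second converts microseconds to days with shell arithmetic, assuming a shell with 64-bit integer arithmetic):

~ # esxcli system stats uptime get
~ # echo $(( $(esxcli system stats uptime get) / 86400000000 ))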
Syslog management
You can mark the syslog by adding a unique message to the log. Why is this needed? You can use it to mark the start of a new period when testing something, so that you can easily locate the start point later.
You can reload the log daemon to apply any new configuration options as follows:
To retrieve syslog configuration, get command can be used:
If you want to export the syslog to a remote syslog server, you can use the set command as follows. You want to make sure the FQDN is resolvable, which may involve the configuration of DNS as described earlier.
To list all the loggers, run the following command. There are many other loggers not listed due to limit of space.
To change a specific logger configuration like the rotation size and how many rotations to keep, you can run the following command. The id must be one in the above list output. The unit for the size is KiB.
To reset a specific value for a logger, run the following command:
Hypervisor File System
The following command shows a quick summary of the file system. You can also drill down to the ramdisk and tardisk with the next two commands.
Retrieving and Setting Hostname and Domains
You can change the default hostname of ESXi from localhost to a more meaningful domain and host name.
When you set the hostname, you can use the fqdn option, or domain plus host; they are mutually exclusive. I find fqdn easy and straightforward.
Managing Maintenance Mode
You can place the ESXi host into maintenance mode, assuming you have evacuated all the virtual machines.
The default timeout for the set command is 60 seconds, so the timeout can be omitted in the above command.
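Likely forms of the maintenance mode commands:

~ # esxcli system maintenanceMode get
~ # esxcli system maintenanceMode set --enable true --timeout 60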
Power Management
You can either power off or reboot ESXi from the esxcli command. But wait, why isn't there a power-on command? You would need an IPMI interface for that.
Both the reboot and power off operations require the same parameters.
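Likely forms of the two commands (the reason string is illustrative; the host is normally placed in maintenance mode first):

~ # esxcli system shutdown reboot --reason "planned maintenance"
~ # esxcli system shutdown poweroff --reason "planned maintenance"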
Configuring SNMP
There are a few parameters you can configure on ESXi. The following help shows these parameters with some descriptions.
Retrieving UUID
The following command retrieves the UUID for the host.
~ # esxcli system uuid get
50cdca40-8c57-2ee2-af75-8c89a5d2b40f
Reading and Setting System Time
ESXi keeps its own system clock. To read the time and set the time, you can use the following commands:
Getting ESXi Version and Build Number
The following simple command shows the version and build number of the ESXi product.
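The command in question:

~ # esxcli system version get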
Change the ESXi welcome message
The following command changes the welcome message users see upon login. It's not necessary to change it, but if your IT department has an important policy to show users upon login, this is a good way to do it.
Managing Virtual Machines
Most virtual machine related operations can be done with the vim-cmd command. The esxcli command touches the lower-level parts of a virtual machine: to the hypervisor, each running virtual machine acts like a process.
You can use the following command to list all the running virtual machines. Instead of a process ID, each VM is identified by its world ID.
You can kill a virtual machine by its world ID as follows.
There are other options, as listed below, for example the soft type, hard type, and the ultimate force type. With the force option, you can stop almost any virtual machine.
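Likely forms of the two commands (the world ID is illustrative, taken from the list output):

~ # esxcli vm process list
~ # esxcli vm process kill --type soft --world-id 123456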
I couldn’t refrain from commenting. Perfectly written!
A worth to read post. Thanks Steve. Plan to buy a book you authored.
Thanks a lot for your support Lawrence!
-Steve
Very nice initiative. Bookmarked!
Thank you
Thanks for your comment. Glad you like it.
Steve
We have two ESXi hosts with v6.0.0 and 5 VMs hosted on each host. Due to a business requirement we reboot all VMs every night. The adapter type these VMs have is E1000. Every day when the VMs are rebooted, a couple of VMs will lose network connectivity. When we select the NIC, uncheck the device status, and check it back, the issue is resolved. A decision was made not to switch over to VMXNET3; we historically had lots of network issues when we did switch over to VMXNET3. Is there an esxcli command syntax to uncheck the NIC device status and recheck it if the ping fails?
Thank you for your help
Hi Shivaram,
You can use the vim-cmd command to find out the device status easily. Check out the other blogs:
There doesn't seem to be a way to modify the NIC as you wanted. You can probably hack it by disconnecting and reconnecting the device with the device.connection command there. Good luck!
Steve
Thanks for this tutorial full of useful commands.
Is there an esxcli command that tells whether an ESXi OS was booted in UEFI mode or in BIOS/Legacy mode?
Thanks Steve, I haven't looked at UEFI before. If you find the answer, please feel free to share it here.
-Steve
http://www.doublecloud.org/2015/05/vmware-esxi-esxcli-command-a-quick-tutorial/
Benefits of WSIF.
The other volatile aspect of these services is location. A service that may have been available locally may suddenly be moved to servers on the other side of the globe. As a practice, experienced designers have learned to externalize the location of such services; however, enforcing such rigors in a large and complex enterprise solution developed and maintained by a global fleet of designers and developers proves difficult.
Typically, SOAP is touted as a panacea for such pain points. SOAP has been a unifying protocol, but its implementation, from a practical perspective, has limitations. First, not all services may be enabled as SOAP-based services. There could be significant cost implications in enabling all the services developed over the years as SOAP-based services. Second, using SOAP has performance implications. Benchmarking comparisons indicate that SOAP calls using Apache Axis have several hundred times more latency than Java RMI (remote method invocation) calls (see "Latency Performance of SOAP Implementations", IEEE Cluster Computing and the Grid, 2002). Considering the performance impact, it would be unreasonable to expect Java clients to use SOAP for accessing EJB components to gain platform and location independence.
WSIF, modeled after WSDL (Web Services Description Language), is more suitable for alleviating such problems. For a given service, WSIF uses a WSDL-based XML file so that custom wrappers for new or existing Java and EJB objects are not needed. WSDL provides definitions of services: it defines the interface consisting of functions, parameters, return values, and exceptions and the binding consisting of the service's implementation specification and location. WSDL was designed with extensibility in mind. In a typical SOAP-based Web service, the binding is almost always soap:binding. WSDL provides plug-ins to other bindings such as Java and EJB. WSIF exploits this specific WSDL extensibility. I describe more on this extensibility in later sections.
Using WSIF
There are two invocation models for using the WSIF API: stub invocation or dynamic invocation. The stub model relies on stubs at the client side; service functions are invoked on these stubs. This approach offers a more natural way of programming and has the benefits of compiler-checked code. The dynamic model programmatically constructs service calls. This model suits those applications that want a more abstract view of a service.
To better understand these invocation models, let's look at an example that provides a stock price. The service StockQuote has a single call, getQuote(), that expects a string. Let's first look at a stub invocation; the first seven lines of this listing are a sketch following the line-by-line explanation in the next paragraphs, and the file name and namespace strings are illustrative:

// Create the WSIF service factory (Line 1).
1. WSIFServiceFactory factory = WSIFServiceFactory.newInstance();
// Retrieve a service instance (Lines 2 through 7); null service namespace/name
// is allowed when the WSDL file defines a single service.
2. WSIFService service = factory.getService(
3.     "StockQuote.wsdl",               // WSIF-WSDL file location
4.     null,                            // service namespace
5.     null,                            // service name
6.     "http://example.com/stockquote", // port-type namespace
7.     "StockQuotePortType");           // port-type name
// Create the service stub, StockQuote.class is a class generated by WSDL2Java for
// StockQuote service WSIF configuration file.
8. StockQuote stub = (StockQuote) service.getStub(StockQuote.class);
// Call the function to get the stock price.
9. String symbol = "IBM";
10. float result = stub.getQuote(symbol);
11. System.out.println("Price for " + symbol + " is " + result);
...
On Line 1, an instance of a WSIF factory is created, which initializes the WSIF framework if it has not already been initialized.
On Lines 2 through 7, the WSIF factory retrieves instances of specific services. The WSIF factory is supplied the WSIF-WSDL file location, service namespace, service name, port-type namespace, and port-type name. The service namespace and service name can be left as null if the WSDL file contains only one service definition and location. WSIF parses the WSIF-WSDL file to create instances of the services requested.
On Line 8, a stub instance is retrieved. Calls are made against this stub. Stub classes can be created using the WSDL2Java tool, an industry-standard tool for generating Java proxies and skeletons for services with WSDL descriptions. WSIF uses client-side proxies generated using the following command:
%java org.apache.axis.wsdl.WSDL2Java (WSIF-WSDL-file-URL).
On Line 10, an actual call is made against the stub. Here, the WSIF framework internally maps to the appropriate port-type implementation specified in the WSIF-WSDL file and places a call to the endpoint. The value returns to your client code.
Now let's look at a dynamic invocation. Lines 1 through 7 match the stub listing above, so the listing resumes at Line 8:

// Get the port.
8. WSIFPort port = service.getPort();
// Create the operation.
9. WSIFOperation operation = port.createOperation("getQuote");
// Create the input, output, and fault messages for the operation.
10. WSIFMessage input = operation.createInputMessage();
11. WSIFMessage output = operation.createOutputMessage();
12. WSIFMessage fault = operation.createFaultMessage();
// Populate the input message.
13. String symbol = "UIS";
14. input.setObjectPart("symbol", symbol);
// Do the invocation.
15. operation.executeRequestResponseOperation(input, output, fault);
// Extract the result from output message.
16. float result = output.getFloatPart("result");
...
In the dynamic invocation, Lines 1 through 7 match the code in the stub invocation. On Line 8, a port is created for the service; on Line 9, a specific operation of interest is created. On Lines 10, 11, and 12, input, output, and fault messages are created. For the getQuote() operation, we have string input and string output. The input values are set on Line 14, and finally, on Line 15, the operation is invoked. On Line 16, the stock price value is retrieved.
Using WSIF step by step
- Download the WSIF framework from Apache.
- Run wsif-2.0\classpath.bat to set the appropriate classpath. This script also sets the classpath for samples included in the download. You may need to modify this file depending on your development environment.
- Create a WSIF-WSDL file for your service provider. For SOAP-based applications, you could use its WSDL as a starting point. For Java and EJB applications, you could use Java2WSDL. And, for adventurous types, you could create WSDL by hand.
- Run WSDL2Java to create Java stubs for the WSDL.
- Modify WSDL to include WSIF extensions for Java, EJB, etc., depending on type of service provider.
- Place WSIF-WSDL file and stubs in your application's directory path.
- Start using WSIF in your application.
WSIF and service-oriented architecture
SOA-based services must adhere to three basic tenets:
- Interface contract is platform-independent
- Service can be dynamically located and invoked
- Service is self-contained
WSIF takes your code a few steps closer to service-oriented architecture:
- WSIF's WSDL-based interface description of services provides platform-independent interface definitions in portType.
- The WSIF framework automatically locates the components specified in the WSDL-based XML configuration file. The choice of implementation can be deferred until runtime.
- Being a client-side framework, WSIF cannot make a service self-contained; however, WSIF works best with self-contained services and does not require state management between calls.
WSIF Java extension and WSDL
Let's walk through the structure of WSDL to see how WSIF extends it. WSDL is an XML file with four main sections:
- Definitions
- Types
- Bindings
- Services
WSIF extends WSDL's bindings and services sections to specify WSIF providers. Bindings allow plug-ins of various providers. Thus, WSIF has providers for Java, EJB, SOAP, J2EE Connector Architecture, etc. These providers enable the invocation of corresponding services.
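As a rough sketch of what such a plugged-in binding looks like, here is a Java binding and port for the stock-quote port type; the java: and format: prefixes belong to the WSIF Java provider, and the implementation class name is illustrative:

<binding name="JavaBinding" type="tns:StockQuotePortType">
  <java:binding/>
  <format:typeMapping encoding="Java" style="Java">
    <format:typeMap typeName="xsd:string" formatType="java.lang.String"/>
  </format:typeMapping>
  <operation name="getQuote">
    <java:operation methodName="getQuote"/>
  </operation>
</binding>
<service name="StockQuoteJavaService">
  <port name="JavaPort" binding="tns:JavaBinding">
    <java:address className="com.example.StockQuoteImpl"/>
  </port>
</service>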
We will take a publicly available WSDL file, the delayed-quote stock service, to understand the various sections of WSDL and how WSIF extends such a file. For a more detailed explanation of WSDL, please see Resources.
Definitions
Let's look at the first two lines of the delayed-quote service WSDL file downloaded from the site above. This is the definitions section of the WSDL file:
<definitions name="net.xmethods.services.stockquote.StockQuote" targetNamespace=". stockquote.StockQuote/">
The definitions section is WSDL's root element and is a container for all other WSDL elements. It holds all necessary information about the service and its attributes. The definitions element's targetNamespace attribute is required and points to a URI that demarcates this WSDL file's namespace. The WSIF service definition requires the addition of a few namespaces, as indicated below. targetNamespaces differentiate definitions from element declarations in different vocabularies. This means the various elements defined in this WSDL file belong to that namespace.
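A sketch of those added namespace declarations; the WSIF provider URIs shown here are the conventional Apache WSIF ones and should be treated as assumptions to verify against your WSIF release:

<definitions name="net.xmethods.services.stockquote.StockQuote"
    targetNamespace="..."
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:format="http://schemas.xmlsoap.org/wsdl/formatbinding/"
    xmlns:java="http://schemas.xmlsoap.org/wsdl/java/"
    xmlns:ejb="http://schemas.xmlsoap.org/wsdl/ejb/">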
https://www.javaworld.com/article/2072290/soa/bridging-islands-of-enterprise-software.html
22 November 2011 11:04 [Source: ICIS news]
By Ong Sheau Ling
SINGAPORE
“We have to find alternatives to our [Iranian] suppliers. It is forbidden for
A sizeable chunk of China's PE imports is at risk because of growing international pressure to isolate
In the first nine months of 2011,
The impact on supply will be most felt on the HDPE injection and blow moulding grades given that there are few such suppliers in the market, said a second Chinese importer.
“LDPE film supply is currently still outstripping the demand, so even if Iranian goods are out of the picture, the supply availability will still meet the demand,” the importer added.
A third Chinese importer said it has reduced its monthly LDPE film import volumes from
But a source from Petrochemical Commercial Co (PCC) - the biggest trading firm in Iran that markets some of the country’s polymers output abroad - said that
Iranian polymer producers Arya Sasol, Laleh, Mehr and Marun export more than a third of their polymers output to
“Many of the petrochemical plants here [in
In an unlikely event that the
Additional reporting by Kitty Li
http://www.icis.com/Articles/2011/11/22/9510320/china-eyes-alternative-pe-sources-on-concerns-over-iran-supply.html
From: Colin Ian King <colin.king@canonical.com>
BugLink:

This patch addresses Intel errata AAE44 by totally disabling 4MB pages and thus avoiding large pages being split into smaller 4K pages, and thus never tripping this CPU issue. The bug can manifest itself as instruction fetch oopses on seemingly legitimate executable pages.

Errata AAE44 (33) states:

"If software clears the PS (page size) bit in a present PDE (page directory entry), that will cause linear addresses mapped through this PDE to use 4-KByte pages instead of using a large page after old TLB entries are invalidated. Due to this erratum, if a code fetch uses this PDE before the TLB entry for the large page is invalidated then it may fetch from a different physical address than specified by either the old large page translation or the new 4-KByte page translation. This erratum may also cause speculative code fetches from incorrect addresses."

Whereas commit 211b3d03c7400f48a781977a50104c9d12f4e229 seems to work around errata AAH41 (mixed 4K TLBs), it reduces the window of opportunity for the bug to occur and does not totally remove it. This patch disables mixed 4K/4MB page tables, totally avoiding the page splitting and not tripping this processor issue.

Without this workaround, one particular Z530 system with a lot of filesystem activity and low memory pressure would panic randomly after a few days soak testing. With this patch, the system ran flawlessly. Also, this fixes random boot crashes on an Acer Aspire One.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
---
 arch/x86/kernel/cpu/bugs.c | 15 +++++++++++++++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 01a2652..32e49f3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -151,6 +151,20 @@ static void __init check_config(void)
 #endif
 }
 ) {
+	clear_bit(X86_FEATURE_PSE, boot_cpu_data.x86_capability);
+	printk(KERN_INFO "Disabling 4MB page tables to avoid TLB bug\n");
+	}
+}

 void __init check_bugs(void)
 {
@@ -163,6 +177,7 @@ void __init check_bugs(void)
 	check_fpu();
 	check_hlt();
 	check_popad();
+	check_atom();
 	init_utsname()->machine[1] = '0' + (boot_cpu_data.x86 > 6 ? 6 : boot_cpu_data.x86);
 	alternative_instructions();
--
1.6.3.3
https://lkml.org/lkml/2010/3/22/234
Migrating an Existing Extension to JDeveloper 11gR2 -- Part One
By John 'JB' Brock on Jun 13, 2011
With the new release of JDeveloper 11gR2, an extension that was written for a previous version of JDeveloper will no longer work. There are a few changes that must be made to get things working again. As with all development tasks, the work required can range from the very simple to extremely complex. In this post, I'll cover the basics. It should get everyone started in the right direction at least.
Your extension migration will follow these rough steps:
- Open your existing project in the 11.1.2 workspace
- Look at your existing extension and determine how it is integrated into the IDE. (e.g. Menu item, Wizard, etc.)
- If the integration point is coded in the Addin.initialize method, pull this out and replace it as a trigger-hook in the extension.xml file
- If the menu or wizard hooks are already in use in the extension.xml file, move them into the trigger-hooks section.
- Create an Action and controller class that will call the Addin.initialize() method when the action is called
- Changes to the extension.xml file in regards to Classpath and dependencies will also have to be addressed
- Make and Deploy to Target Platform to generate the manifest.mf file.
Open a copy of the workspace/project
NOTE: Make sure you are opening a copy of your extension source and not the original. This step cannot be undone.
Make a copy of your existing extension source code and place it in a new directory that can be opened in the 11gR2 IDE. Once you open the project, the IDE will ask to do some migration tasks that will bring the project and application files up-to-date with this version of the IDE.
How does the extension get initialized?
If your extension is already using a menu-hook, wizard, or context-menu-hook in the extension.xml file for integrating with the GUI, you are in pretty good shape, and there are only a few things that need to be changed to get things going again.
If you are using the Addin.initialize() method to register all of your menus, etc., then there is a little more work to be done. The plan going forward is to completely remove the Addin.initialize() method and move everything to declarative hooks in the extension.xml or manifest.mf files.
I'll cover both situations in this blog topic, with Part One covering the non-Addin.initialize case, and Part Two extending to cover the rest.
Let's start with the SDK Sample project, "FirstSample".
When we look at this project we can see that none of the classes are extending the Addin class. We can also see that the menu setup is already being done declaratively in the extension.xml file. This should make for a fairly simple migration.
I've made a copy of the FirstSample directory and placed it inside of another Application that I created in the 11gR2 version of the IDE.
When I open the project now, it brings up a migration wizard that looks like this.
I'll go ahead and just accept the defaults on the next couple of pages, then click on Finish. The resulting dialog will look something like this.
This process gets a few of the housekeeping things taken care of for the project itself. It doesn't do anything to the source code of our project though. That will remain the same.
Since we have already determined that we are not using any classes that extend the Addin class, we will only need to work with the extension.xml file. I'll work through the file from top to bottom and show you what needs to get updated and how to go about doing it.
Classpaths
The <Classpaths> element has been deprecated in this release. If it's in the extension.xml, it will just be ignored. It has been replaced by a new element called <Required-Bundles>. The easiest way to make sure you are getting the proper bundle name when making these changes, is to use the Visual Editor for the extension.xml file.
Click on the Overview tab at the bottom of the editor window.
Then click on the Dependencies menu. It will look like this.
You can still see the entries that you currently have in your Classpaths element. We'll use these to get what we want in the Required-Bundles section.
Click on the + sign to the right of the Required Bundles section. A dialog will popup with a list of all the available bundles. In the search field at the top of this dialog, type in the name of the first library that you currently have in classpaths. "javatools" in this case. This will narrow the list down to something that you can more easily select from. Go ahead and select the oracle.javatools bundle from the list. HINT: Look at the slightly grayed out name under the jar files path. This is the bundle name. It will look like this when you're done.
Repeat the steps above for the "oicons" bundle as well.
Once you have both of these added as Required Bundles, you can highlight the name of each Classpath entry, and then click the big red X on the right side of that section to remove this entry from the extension.xml file.
Clicking on the Source tab at the bottom of the editor will get us back to the source code, and the new entries should look something like this:
Go ahead and delete the remaining <classpath/> element from the file. It will stop you from getting a warning at runtime.
Trigger-hooks
Next we are going to add the new trigger-hooks section to the extension.xml file. This is what allows us to use lazy loading with our extensions. An extension is not loaded until a trigger is hit that tells the IDE that you want to use this functionality, so load it up.
Once you've done this a few times, you can simply cut and paste code from other extensions that you have already migrated, but I want to show a little tool that can help when you're not quite sure where to start.
If you right-click on the <extension> element in the Structure window (lower right side of the IDE by default) you will see an option to "Insert Inside extension". Select that option and go all the way to the bottom of the submenu and select "trigger-hooks".
Now right-click again, but on the trigger-hooks element this time, and insert a "trigger-hook" inside the trigger-hooks element.
You will notice that it places a nice clean <trigger-hook /> element in between the open and close <trigger-hooks> tags. We need to change this to be an open and close tag instead of just the one tag. If you delete the "/" from the trigger-hooks tag and then hit enter and type "</ " you will notice that the IDE will enter the close tag for you. The end result should look like this
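A minimal sketch of that result; the namespace URI is an assumption (take whatever the schema tool supplies):

<trigger-hooks xmlns="http://xmlns.oracle.com/ide/extension">
  <trigger-hook>
  </trigger-hook>
</trigger-hooks>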
You can see that this would have been a lot easier to just cut and paste in, but I wanted to show you that we do have a tool for working with the schema. We'll use this same tool a little later to add a Controller element which has a few more required parameters.
Actions
Now that we have the trigger-hooks structure in place, we can start moving some of our old elements up into this new area. Since the Actions element is what most everything else references, let's move it in first. Simply cut and paste the existing actions element from the <hooks> section up to the new <trigger-hooks> section.
We need to add an xml namespace to the <actions> element now. The easiest way to do this is to click inside the <actions> tag just after the word "actions". Enter a space and then type " xmlns=" ". After you type the first " after the equal sign, a code completion pulldown should show up and give you a list of available namespaces to select from. In this case, scroll down until you find, " ". Add the close " and you're done.
Controllers are now their own element in the extension.xml file, so we will delete the existing <controller-class> element from the actions element. The resulting actions element should now look like this
Controllers
Controllers are a new element in the extension.xml file. We're going to go back to the schema helper tool to add in our new controllers element.
Right-click on the <triggers> element in the Structure window and select "Insert Inside triggers". Click Browse and scroll down the list until you find "Controllers". Click OK and then right-click on the new controllers element and follow the same steps to insert a controller element inside of the controllers. This is going to open a dialog asking you to enter the class for the controller. If you start typing the package name for your controller, code completion will kick in and help you find the existing controller class. It will look like this
For this example, work your way down to the SimpleController class and click OK.
Let's keep adding the required elements to our new <controller> element. Right-click on the controller element again, and insert an <update-rules> element, and then insert an <update-rule> element inside of that. You will be asked for the "rule *: " when you insert the <update-rule>. For now, just type in "always-enabled" for the rule. I'll talk more about the new Rules system in another post, very soon. The last thing to add is the <action> element. Follow the same steps to insert the <action> element inside of the <update-rule> element.
You will be asked for the action id and label when you insert the action element. You can copy the action id from the action element that we just added above. For this example it will be: "oracle.ide.extsamples.first.invokeAction"
You can set the label for your controller here as well, but it's not required. Since we already have a label in our Action, I'm just going to leave this blank. The final Controllers element should look like this
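Putting it together, a sketch of the migrated actions and controllers; the two namespace URIs are assumptions (use what code completion offers), while the ids and class come from this sample:

<actions xmlns="http://xmlns.oracle.com/jdeveloper/1013/extension">
  <action id="oracle.ide.extsamples.first.invokeAction"/>
</actions>
<controllers xmlns="http://xmlns.oracle.com/ide/extension">
  <controller class="oracle.ide.extsamples.firstsample.SimpleController">
    <update-rules>
      <update-rule rule="always-enabled">
        <action id="oracle.ide.extsamples.first.invokeAction"/>
      </update-rule>
    </update-rules>
  </controller>
</controllers>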
Context Menus
Context menus are next in the existing extension.xml file, so let's take those on next. The context menu element is a little different in the new release. Here is what the original looks like:
We're going to use the xml schema tool again to get the new structure setup.
Right-click on the triggers element and "insert inside triggers". Click Browse, then select the Context-menus-hook. Enter "always-enabled" for the rule type when prompted.
Now right-click on the new context-menus-hook element, and insert a "site" element inside of this one. You will be prompted for the "idref" for the context menus that you want to add. In the original we had to created three different elements to cover all three of the main context menus. Here we can enter all three as a comma separated list.
Enter, "navigator, editor, explorer" and click OK.
Right-click one more time on the context-menu-hook, and insert an extension-listener element this time, inside the context-menu-hook element. Set the class name to be the same as what you had in the original elements. In this example, it would be, "oracle.ide.extsamples.firstsample.SimpleContextMenuListener"
The <extension-listener> element is optional. In this case, it works great because it just replaces the listener element that was already being used. If you context-menu-hook uses the menu element instead of a listener-class element, do the steps below instead of adding the extension-listener element.
Now right-click on the menu element and insert inside of it, a section element. You can set the ID for this section to be anything you like. Once you have the section element created, insert an "item" element inside of it. Set the "action-ref" value to the action id that we set above. For this example it is: "oracle.ide.extsamples.first.invokeAction"
The new context-menu-hook element should now look like this if you used the <menu> element
and like this if you used the <extension-listener> element
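A sketch of that variant, using the rule, sites, and listener class from the steps above:

<context-menus-hook rule="always-enabled">
  <site idref="navigator, editor, explorer"/>
  <extension-listener class="oracle.ide.extsamples.firstsample.SimpleContextMenuListener"/>
  <menu/>
</context-menus-hook>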
Notice that I still had to add an empty <menu/> element, since it is a required element.
Gallery
The gallery element hasn't changed that much. Copy and paste the original from the <hooks> element into the new <triggers> element.
We only need to make a couple of changes to this element. We need to add an xml namespace to the gallery tag. Follow the same steps as above, where you clicked inside the tag, just after the name, enter a space, then type, "xmlns=" " when the code completion list comes up this time, select the same namespace as before,
""
Now we need to add a rule parameter to the <item> tag. Type inside the tag, and add, " rule="always-enabled". The new item tag will look like: <item rule="always-enabled">
We also want to add an <icon> element to the gallery element now. This is pretty simple. Just add the line:
<icon>${OracleIcons.LABEL}</icon>
Everything else should stay the same. The new gallery element will look like
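A sketch; the namespace is the same assumption as for actions, and the unchanged children are elided:

<gallery xmlns="http://xmlns.oracle.com/jdeveloper/1013/extension">
  <item rule="always-enabled">
    ...
    <icon>${OracleIcons.LABEL}</icon>
  </item>
</gallery>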
Menus and Toolbars
The <menu-hooks> element is even easier than the Gallery hook. Copy and paste the entire <menu-hooks> element (including menu and toolbar hooks) into the <triggers> element.
All we have to do is add an xml namespace to the <menu-hooks> tag. Set this to " xmlns="" "
Once you have all of these elements moved from the Hooks element to the Triggers element, the only thing left in the Hooks section should be the <features-hook>. This can be left where it is. The <hooks> element does still get called by the extension framework; it is called as the extension is loaded. The best way to decide if something belongs in the <trigger-hooks> element or the <hooks> element is to think about when that information needs to be available to the IDE. If it's something that has to show up before the extension is actually loaded, then it should be in the <trigger-hooks> element. Otherwise, it's fine to leave it where it is.
Conclusion...
That should do it for migrating this extension. To test everything and make sure it really does work, you need to follow these three steps.
1) Build the extension
This is an obvious step and of course it should compile without any errors.
2) Deploy to Target Platform
This is a new step that must be run every time before you can perform the "Run Extension" menu option. It builds the manifest.mf file and packages everything properly. Right-click on your project and select this from the menu down in the Extensions section.
3) Click on Run Extension
This performs the same as it always has. It will open another instance of the IDE with your extension installed and, hopefully, running correctly.
More...
In Part Two I finish the migration steps by showing how to handle migrating an extension that extends the Addin class. You will still need to do all of the work that we have just shown above, but there is a little more work to do as well.
Comments are always welcome, and encouraged. Everyone learns more when a good conversation is started!
Hi!
In my PMD extension I need the classpath entries for referencing classes from PMD, which are delivered with the plugin. These were not available as a bundle before, so how should this be migrated to a bundle?
Kind regards
Posted by guest on June 16, 2011 at 04:22 AM PDT #
Great to hear you're working on an update to PMD.
To add external libraries, do the following
1) Add a MANIFEST.MF file to your project in the same location as the extension.xml
2) Add the default three lines to the file:
Manifest-Version: 1.0
Bundle-ClassPath: .
<blank line>
3) Add your libraries to the Bundle-Classpath line with the following syntax
external:$ORACLE_HOME$/jdeveloper/mydir/mylib/myjar.jar
Using the default above, and this example line, the Bundle-ClassPath would look like:
Bundle-ClassPath: ., external:$ORACLE_HOME$/jdeveloper/mydir/mylib/myjar.jar
If you find that the MANIFEST.MF is not being merged after you do the "Deploy to Target Platform", do the following to force the merge.
1) Go to project properties and click on the Deployment section..
Posted by guest on June 16, 2011 at 04:46 AM PDT #
I'm building an extension using Addin.initialize to provide extra functionality to menus; the extension integrates the Java Swing menu with Ubuntu Unity menus, and I'm not using a menu, context menu, or any other feature.
The extension in JDeveloper 11gR1 is this:
@RegisteredByExtension("org.jdev.java.ayatana")
final class AyatanaAddin implements Addin {
public void initialize() {
JFrame frame = (JFrame)Ide.getMainWindow();
ApplicationMenu.tryInstall(frame);
}
}
And that's all. How do I migrate this code to JDeveloper 11gR2? Is there any startup listener using a trigger-hook?
Posted by Jared on April 23, 2012 at 06:22 PM PDT #
Hi Jared,
No, there is no trigger-hook for startup. That is what we are trying to get away from with lazy loading in OSGi. You cannot just load an extension when the IDE is started.
You will need to create a menu hook of some kind to load your extension the first time. Once a developer has your extension loaded, it will be remembered so that it will come up automatically each time the IDE is started with that project or application open.
HTH,
--jb
Posted by John 'JB' Brock on April 30, 2012 at 10:16 AM PDT #
It's a really good forum, thanks. I am able to migrate my extension but I am facing some issues. I have log4j.jar and I need to use it; how can I add it in extension.xml? When I try to add it as a required bundle, it shows only the jars available in the Oracle/lib folder, not others. Please suggest a solution.
Posted by guest on April 11, 2013 at 07:18 AM PDT #
Take a look at the blog post about adding external dependencies.
That should get you going again.
--jb
Posted by John "JB" Brock on April 12, 2013 at 09:24 AM PDT #
Posted by guest on April 22, 2013 at 03:57 AM PDT #
I'm not sure what you mean by "exporting jar". Are you trying to deploy the extension to another JDeveloper instance?
You will want to look at this blog post about packaging an extension for distribution and use by others.
Posted by John 'JB' Brock on April 22, 2013 at 07:42 AM PDT #
I would be happy if you could help.
I assume the namespace of the SQL Developer context menu should not be the jcp.org one, etc., and the site idref would also be suited to the DB navigator?
Thanks a lot in advance!!
Freydie
Posted by Freydie on February 24, 2014 at 05:24 PM PST #
Hi Freyd.
I would take a look over in the SQL Developer forums. This looks like a good place to start:
Posted by John 'JB' Brock on February 24, 2014 at 05:56 PM PST #
Thank you John - Will look into those forums
Posted by Freydie on February 25, 2014 at 12:14 PM P.
Thanks beforehand for your help.
Posted by Oto on June 27, 2014 at 06:27 AM PDT #
Hi Oto,
I'm sorry but I don't know that much about SQL Developer. While they are roughly based on the same platform as JDeveloper, not everything is the same. I would ask this question over on the SQL Developer forums.
Posted by John 'JB' Brock on June 27, 2014 at 07:05 AM PDT #
https://blogs.oracle.com/jdevextensions/entry/migrating_an_existing_extension_to
So here are my directions for this program:
Write a program that allows the user to enter students' names followed by their test scores and outputs the following information (assume that the maximum number of students in the class is 50; if the number of students is less than 50, to indicate the end of input data, after entering the last student's data, on a line by itself, hold the Ctrl key, press Z, and then press the Enter key):
a) Class average
b) Names of all the students whose test scores are below the class average, with an appropriate message ("You're below class average")
c) Highest test score and the names of all the students having the highest score.
Use methods.
Now I wrote the program but I can't figure out how to end the input by pressing the Ctrl key, pressing Z, and then pressing the Enter key... Can someone show me how this is done?
Here is my code
Code java:
import java.util.Scanner;

public class ClassAverage {
    public static void main(String args[]) {
        String names[] = new String[50];
        int scores[] = new int[50];
        int entries = 0;
        Scanner in = new Scanner(System.in);
        //System.out.println("Enter number of entries");
        //int entry = in.nextInt();
        System.out.println("Enter the names followed by scores of students: ");
        for(int i = 0; i < 50; i++) {
            names[i] = in.next();
            scores[i] = in.nextInt();
            entries++;
        }
        Average avg = new Average();
        double average = avg.CalcAvg(scores, entries);
        System.out.println("The class average is: " + average);
        avg.belowAvg(scores, average, names, entries);
        avg.highestScore(scores, names, entries);
    }
}

class Average {
    Average() {
        System.out.println("The averages: ");
    }

    double CalcAvg(int scores[], int entries) {
        double avg;
        int total = 0;
        for(int i = 0; i < entries; i++) {
            total += scores[i];
        }
        avg = total / entries;
        return avg;
    }

    void belowAvg(int scores[], double average, String names[], int entries) {
        for(int i = 0; i < entries; i++) {
            if(scores[i] < average)
                System.out.println(names[i] + "You're below class average");
        }
    }

    void highestScore(int scores[], String names[], int entries) {
        int max = scores[1];
        for(int i = 0; i < entries; i++) {
            if(scores[i] >= max)
                max = scores[i];
        }
        System.out.println("The maximum score is: " + max);
        System.out.println("The highest score acheivers list: ");
        for(int i = 0; i < entries; i++) {
            if(scores[i] == max)
                System.out.println(names[i]);
        }
    }
}
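For the Ctrl+Z question itself, the usual approach is to loop on Scanner.hasNext(), which returns false once the input stream ends; Ctrl+Z then Enter signals end of stream on Windows (Ctrl+D on Unix). A minimal sketch, reusing the arrays and Scanner from the code above:

// Read name/score pairs until EOF (Ctrl+Z + Enter on Windows) or 50 entries.
int entries = 0;
while (entries < 50 && in.hasNext()) {
    names[entries] = in.next();
    scores[entries] = in.nextInt();
    entries++;
}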
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/37250-cant-end-input-data-my-program-please-help-printingthethread.html
Re: NumPy arrays that use memory allocated from other libraries or tools
- From: sturlamolden <sturlamolden@xxxxxxxx>
- Date: Wed, 10 Sep 2008 11:15:26 -0700 (PDT)
On Sep 10, 6:39 am, Travis Oliphant <oliphant.tra...@xxxxxxxx> wrote:
I wanted to point anybody interested to a blog post that describes a
useful pattern for having a NumPy array that points to the memory
created by a different memory manager than the standard one used by
NumPy.
Here is something similar I have found useful:
There will be a new module in the standard library called 'multiprocessing' (cf. the pyprocessing package in the cheese shop). It allows you to create multiple processes (as opposed to threads) for concurrency on SMPs (cf. the dreaded GIL).
The 'multiprocessing' module lets us put ctypes objects in shared memory segments (processing.Array and processing.Value). It has its own malloc, so there is no 4k (one page) lower limit on object size. Here is how we can make a NumPy ndarray view the shared memory referenced by these objects:
try:
import processing
except:
import multiprocessing as processing
import numpy, ctypes
_ctypes_to_numpy = {
ctypes.c_char : numpy.int8,
ctypes.c_wchar : numpy.int16,
ctypes.c_byte : numpy.int8,
ctypes.c_ubyte : numpy.uint8,
ctypes.c_short : numpy.int16,
ctypes.c_ushort : numpy.uint16,
ctypes.c_int : numpy.int32,
ctypes.c_uint : numpy.int32,
ctypes.c_long : numpy.int32,
ctypes.c_ulong : numpy.int32,
ctypes.c_float : numpy.float32,
ctypes.c_double : numpy.float64
}
def shmem_as_ndarray( array_or_value ):
""" view processing.Array or processing.Value as ndarray """
obj = array_or_value._obj
buf = obj._wrapper.getView()
try:
t = _ctypes_to_numpy[type(obj)]
return numpy.frombuffer(buf, dtype=t, count=1)
except KeyError:
t = _ctypes_to_numpy[obj._type_]
return numpy.frombuffer(buf, dtype=t)
With this simple tool we can make processes created by multiprocessing
work with ndarrays that reference the same shared memory segment. I'm
doing some scalability testing on this. It looks promising :)
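A small usage sketch (the array size is illustrative): allocate a shared block, then view it as an ndarray with the function above:

shared = processing.Array(ctypes.c_double, 1000)  # shared segment of 1000 doubles
arr = shmem_as_ndarray(shared)                    # ndarray view, no copy
arr[:] = 0.0                                      # writes are visible to processes sharing it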
http://coding.derkeiler.com/Archive/Python/comp.lang.python/2008-09/msg00937.html
Mikhail Fursov reassigned HARMONY-4620:
---------------------------------------
Assignee: Mikhail Fursov
> [drlvm][jit] Long return path for floating point values in calling convention
> -----------------------------------------------------------------------------
>
> Key: HARMONY-4620
> URL:
> Project: Harmony
> Issue Type: Improvement
> Components: DRLVM
> Environment: appropriate for Intel architecture
> Reporter: Naumova Natalya
> Assignee: Mikhail Fursov
> Attachments: return_xmm.patch
>
>
> DRLVM has too long a return path when the return value is floating point. The reason is
> FPU usage together with SSE instructions in the calling convention: we have "SSE -> mem ->
> FPU -> (return) mem -> SSE"; the return (double) value is first calculated in xmm* registers,
> then copied to memory, then put on the FPU stack, then extracted from this stack (in the calling proc)
> to memory again, then the calculation happens again in xmm* registers (SSE instructions).
> This issue overrides the improvement from loop unrolling: the overhead of parameter passing
> with this calling convention overrides the loop-body doubling speed-up. When you increase the
> "arg.optimizer.unroll.medium_loop_unroll_count" option in a method where the return value is double
> and it is in a loop, you'll see degradation (example: the MonteCarlo benchmark in SciMark).
> Can we avoid using FPU with SSE in this case?
http://mail-archives.apache.org/mod_mbox/harmony-commits/200802.mbox/%3C755855963.1204272771167.JavaMail.jira@brutus%3E
29 August 2012 11:58 [Source: ICIS news]
SINGAPORE (ICIS)--Asia’s monoethylene glycol (MEG) spot prices gained $14-28/tonne (€11-22/tonne) to a five-month high on Wednesday on concerns over supply as large-scale plants in Louisiana, US, may have been shut in the wake of Hurricane Isaac, market sources said.
MEG was assessed at $1,045-1,068/tonne CFR (cost and freight) China Main Port (CMP) at the close of trade, according to ICIS.
Some spot MEG lots were settled at $1,065-1,070/tonne cost & freight (CFR) China Main Port (CMP) in the late afternoon, while traders booked material in the morning at $1,045-1,050/tonne CFR CMP.
“We are not sure whether the major US MEG plants have been shut, but traders are actively bidding up prices,” a major regional trader said.
A number of refinery and petrochemical operations in the US Gulf were shut because of Hurricane Isaac, which made a landfall in
Shell Chemical runs 125,000 tonne/year and 250,000 tonne/year MEG plants
http://www.icis.com/Articles/2012/08/29/9590671/Asia-MEG-rises-14-28tonne-on-concerns-over-US-supply.html
I am making a simple program using Python 2.7 in which the first input is hex (32 bytes) that will be hashed and incremented by 1. The new value will be hashed again and incremented again. The process will repeat until it satisfies the specified range.
However I'm getting an error with int()
TypeError: int() can't convert non-string with explicit base
Below is my program code
from coinkit.address import Address
import hashlib

h = hashlib.new('ripemd160')  # <-- Create the hash
a = Address.from_secret('0000000000000000000000000000000000000000000000000000000000000001')  # where the input will be hash
for i in range(0, 10):  # should have 10 outputs
    intVal = int(a, 16)  # convert to hex
    intVal += 1  # increment by 1
    h.update(hex(intVal))  # <-- Update the hash with the new incremented integer
    a = Address.from_secret(h.hexdigest())  # <-- Get the digest and feed it back into from_secret
    print a.pub, a.priv  # <-- print new 'a' values
I did try to remove the 16; it throws an error:
TypeError : int() argument must be a string or a number, not 'Address'
Please enlighten me. Thank you.
Everything I have tried has given me wrong output values. I even copied C code and changed it so that it would work in Python, and I still get wrong outputs. What is wrong?
import os, math

def makehex(value, size=8):
    value = hex(value)[2:]
    if value[-1] == 'L':
        value = value[0:-1]
    while len(value) < size:
        value = '0' + value
    return value

def makebin(value, size=32):
    value = bin(value)[2:]
    while len(value) < size:
        value = '0' + value
    return value

def ROL(value, n):
    return (value << n) | (value >> 32-n)

def little_end(string, base=16):
    t = ''
    if base == 2:
        s = 8
    if base == 16:
        s = 2
    for x in range(len(string)/s):
        t = string[s*x:s*(x+1)] + t
    return t

def F(x, y, z, round):
    if round < 16:
        return x ^ y ^ z
    elif 16 <= round < 32:
        return (x & y) | (~x & z)
    elif 32 <= round < 48:
        return (x | ~y) ^ z
    elif 48 <= round < 64:
        return (x & z) | (y & ~z)
    elif 64 <= round:
        return x ^ (y | ~z)

def RIPEMD160(data):
    # constants
    h0 = 0x67452301; h1 = 0xEFCDAB89; h2 = 0x98BADCFE; h3 = 0x10325476; h4 = 0xC3D2E1F0
    k = [0, 0x5A827999, 0x6ED9EBA1, 0x8F1BBCDC, 0xA953FD4E]
    kk = [0x50A28BE6, 0x5C4DD124, 0x6D703EF3, 0x7A6D76E9, 0]
    s = [11,14,15,12,5,8,7,9,11,13,14,15,6,7,9,8,
         7,6,8,13,11,9,7,15,7,12,15,9,11,7,13,12,
         11,13,6,7,14,9,13,15,14,8,13,6,5,12,7,5,
         11,12,14,15,14,15,9,8,9,14,5,6,8,6,5,12,
         9,15,5,11,6,8,13,12,5,12,13,14,11,8,5,6]
    ss = [8,9,9,11,13,15,15,5,7,7,8,11,14,14,12,6,
          9,13,15,7,12,8,9,11,7,7,12,7,6,15,13,11,
          9,7,15,11,8,6,6,14,12,13,5,14,13,13,7,5,
          15,5,8,11,14,14,6,14,6,9,12,9,12,5,15,8,
          8,5,12,9,12,5,14,6,8,13,6,5,15,13,11,11]
    r = range(16) + [7,4,13,1,10,6,15,3,12,0,9,5,2,14,11,8,
                     3,10,14,4,9,15,8,1,2,7,0,6,13,11,5,12,
                     1,9,11,10,0,8,12,4,13,3,7,15,14,5,6,2,
                     4,0,5,9,7,12,2,10,14,1,3,8,11,6,15,13]
    rr = [5,14,7,0,9,2,11,4,13,6,15,8,1,10,3,12,
          6,11,3,7,0,13,5,10,14,15,8,12,4,9,1,2,
          15,5,1,3,7,14,6,9,11,8,12,2,10,0,4,13,
          8,6,4,1,3,11,15,0,5,12,2,13,9,7,10,14,
          12,15,10,4,1,5,8,7,6,2,13,14,0,3,9,11]
    # md4 padding + preprocessing
    temp = ''
    for x in data:
        temp += makebin(ord(x), 8)
    length = len(temp) % 2**64
    temp += '1'
    while len(temp) % 512 != 448:
        temp += '0'
    input = temp
    temp = ''
    for x in range(len(input)/32):
        temp += little_end(input[32*x:32*(x+1)], 2)
    input = temp
    temp = makebin(length, 64)
    input += temp[32:] + temp[:32]
    t = len(input)/512
    # the rounds
    for i in range(t):
        # i called the parallel round variables 2x the other round variable: a -> aa
        a = aa = h0; b = bb = h1; c = cc = h2; d = dd = h3; e = ee = h4
        X = input[512*i:512*(i+1)]
        X = [int(X[32*x:32*(x+1)], 2) for x in range(16)]
        for j in range(80):
            T = (a + ROL((F(b, c, d, j) + X[r[j]] + k[j/16]) % 2**32, s[j]) + e) % 2**32
            c = ROL(c, 10)
            a = e; e = d; d = c; c = b; a = T
            T = (aa + ROL((F(bb, cc, dd, 79-j) + X[rr[j]] + kk[j/16]) % 2**32, ss[j]) + ee) % 2**32
            cc = ROL(cc, 10)
            aa = ee; ee = dd; dd = cc; cc = bb; aa = T
        T = (h1 + c + dd) % 2**32
        h1 = (h2 + d + ee) % 2**32
        h2 = (h3 + e + aa) % 2**32
        h3 = (h4 + a + bb) % 2**32
        h4 = (h0 + b + cc) % 2**32
        h0 = T
    return little_end(makehex(h0)) + little_end(makehex(h1)) + little_end(makehex(h2)) + little_end(makehex(h3)) + little_end(makehex(h4))

data = RIPEMD160('')
print data, data == '9c1185a5c5e9fc54612808977ee8f548b2258d31'  # its always false
https://www.convertstring.com/he/Hash/RIPE_MD160
how to properly extract selected object id from a table
I have a custom panel, which contains a data table along with a few action links. When a single table row is selected and an action button is clicked, I need to be able to get the object id of the selected row object. Currently, I'm doing it this way:
class SimpleAssociateIP(tables.Action):
    def single(self, table, request, instance_id):
        r = request
        post = str(r.POST)
        post = post.replace("<", "{").replace(">", "}")\
            .replace("'", "\"").replace("u", "").replace("{QeryDict: ", "")\
            .replace("}}", "}")
        post_json = json.loads(post)
        print "POST: "
        raw_id = str(post_json["object_ids"])
        id = raw_id.replace("[u'", "").replace("']", "")
        print id
This works, but I have a feeling that there is a better way. Does anyone know a better way to get such an object id?
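For what it's worth, a sketch of a simpler route, assuming the handler receives a standard Django request object: request.POST is a Django QueryDict, and QueryDict.getlist returns every value submitted under a key, so the selected ids can be read without string munging:

# request.POST is a QueryDict; getlist avoids parsing its string repr.
object_ids = request.POST.getlist("object_ids")
if object_ids:
    selected_id = object_ids[0]  # first (here, only) selected row id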
-Eugene
https://ask.openstack.org/en/question/58830/how-to-properly-extract-selected-object-id-from-a-table/?sort=oldest
save and close methods for workflow designer hosting, 287
scheduling services
adding to workflows, 343
DefaultWorkflowSchedulerService, 165
developing, 190
features of, 164
ManualWorkflowSchedulerService, 165–166
using, 43–44
schema, role in BizTalk Server, 54–55
Sequence activity
description of, 85
using, 138
sequential (nonchained) chaining, using with rules, 234
Sequential Workflow Console Application option, selecting, 20–21
Sequential Workflow Console Application template, creating projects from, 73–74
Sequential Workflow Library project template, description of, 75
sequential workflows
accessing fault handlers for, 299
in code-only workflow, 65–66
example of, 16
identifying, 35–36
versus state-machine workflows, 36
using EventDriven activity with, 131
sequential workflows view designer, description of, 82
Serializable attribute, using with ExternalDataEventArgs class, 106
serialization, 66–68
service providers, use of Windows Workflow Foundation by, 18
ServiceBase class, location of, 353
services-oriented architecture (SOA)
versus connected systems, 333
explanation of, 334
relationship to WCF, 13
session between web service client and web service, managing, 342
set accessor
using in workflow communication, 103
using with FirstName class in HelloWorld project, 22
using with WriteFileActivity class, 153
SetState activity, using, 138, 254
SetState method, using, 261–262
SetStateActivity state-machine activity class, description of, 39
Settings class, using with web service called inside workflow, 342
SharedConnectionWorkflowCommitWorkBatchService, using, 167
SharePoint 2007
debugging custom workflows in, 391–392
deploying and configuring custom workflows to, 390–391
enabling workflows in, 361
features of, 358–359
as host, 360
integration with Windows Workflow Foundation, 57–59
workflow associations in, 368–370
SharePoint Designer 2007
choosing field data types in, 376
data types for variables in, 377
defining Actions in, 378
defining conditions in, 378
developing in, 375–379
SharePoint workflow features
administration, 368
history, 367
reporting, 368
tasks, 366–367, 369
SharePoint workflow project templates, installing for Visual Studio, 380
SharePoint workflows.
See also SharePoint 2007;
workflows
Approval, 361–362
associating to content types, 369
Collect Feedback, 362–363
Collect Signatures, 363
deploying to servers, 390–391
developing in Visual Studio, 383–386
Disposition Approval, 364–365
enabling, 361
running, 371
Three-state, 366
Translation Management, 365–366
using WorkflowInvoked event handler with, 384
SharePoint-specific tasks, adding to Visual Studio, Toolbox, 383
shopping cart workflow application
description of, 262–263
state machine for, 263–264
Site Collection Features page, accessing, 390
Skelta Workflow.NET web address, 10–11
SOA (services-oriented architecture)
versus connected systems, 333
explanation of, 334
relationship to WCF, 13
solutions in Visual Studio
adding projects to, 73
contents of, 71–72
creating, 72–73
creating empty solutions, 72
SomeMessage input parameter, using in workflow communication, 103
SomeOtherMessage output parameter, using in workflow communication, 103
SQL persistence service, preparing, 169–171
SQL Server Management Studio, downloading, 169
SQL tracking query code sample, 185
SQL tracking service
adding to workflow runtime, 50–51
using, 182–183
sqlException class variable, relationship to Fault property, 133
SqlTrackingService class.
See also tracking services
data maintenance of, 186
features of, 181–182
preparing, 182–183
profiles used with, 183–184
querying, 184–186
using with tracking services, 42
using Workflow Monitor with, 186–187
SqlWorkflowPersistenceService class
and delays, 173
description of, 168–169
preparing, 169–171
relationship to persistence services, 42
using, 171
standard activities, explanation of, 125
Start method of WorkflowInstance class
calling, 100
description of, 100
Start Options, using with workflow associations, 369
StartRuntime public method, using with WorkflowRuntime class, 93–94
state, setting current state of workflow instances, 261–262
State activity, using, 138, 253–254
state machine implementation
example of, 16–17
guidelines for use of, 252–253
using with shopping cart workflow, 263–264
state machine workflow designer, using, 257–260
State Machine Workflow Library project template, description of, 75
StateActivity state-machine activity class, description of, 39
StateFinalization activity, using, 139, 254
StateFinalizationActivity state-machine activity class, description of, 39
StateInitialization activity, using, 139, 254
StateInitializationActivity state-machine activity class, description of, 39
state-machine activities.
See also activities
EventDriven, 254
SetState, 254
State, 253–254
StateFinalization, 254
StateInitialization, 254
StateMachineWorkflowActivity class, 253
state-machine activity classes, 39
state-machine designer, using, 290–293
state-machine instances, querying, 261
State-Machine Workflow Console Application project template, description of, 75
state-machine workflows.
See also Three-state workflow in SharePoint
versus sequential workflows, 36
using SetState activity with, 138
using State activity with, 138
state-machine workflows, using EventDriven activity with, 131
state-machine workflows view designer, description of, 84–85
StateMachineWorkflowActivity class, using, 253
StateMachineWorkflowInstance class
flexibility of, 261–262
information provided by, 261
using, 260–261
states versus transitions, 251–252
static void Main(string[]args) signature, 73
StopRuntime public method, using with WorkflowRuntime class, 93
stored procedures, including in SQL persistence setup, 170
string variable, using with HelloWorld project, 25
StringWriter instance, using with tracking profile, 178
submit action, adding to form, 389
Subscribe method, using with e-mail activity, 216
Suspend activity, using, 139–140
Suspend method of WorkflowInstance class, description of, 100
SynchronizationScope activity, using, 140
syntax.
See also code listings
of CallExternalMethod activity, 126
of Code activity, 126
of CompensatableSequence activity, 127
of CompensatableTransactionScope activity, 128
of Compensate activity, 127
of CompensationHandler activity, 127
of ConditionedActivityGroup activity, 130
for defining rules, 224
of Delay activity, 130
of EventDriven activity, 131
of EventHandlingScope and EventHandlers activities, 131
of FaultHandler activity, 132
of FaultHandlers activity, 132
of HandleExternalEvent activity, 133
of IfElse and IfElseBranch activities, 133
of InvokeWebService activity, 134
of InvokeWorkflow activity, 135
of Listen activity, 135
of Parallel activity, 135–136
of Policy activity, 137
of Replicator activity, 137
of Sequence activity, 138
of SetState activity, 138
of State activity, 138
of StateFinalization and StateInitialization activities, 139
of Suspend activity, 139
of SynchronizationScope activity, 140
of Terminate activity, 141
of Throw activity, 141
of TransactionScope activity, 128
of WebServiceFault activity, 141
of WebServiceInput activity, 141
of WebServiceOutput activity, 142
of While activity, 142
System.* namespaces, contents of, 52–53
system-to-system interaction scenario, 6
|
https://flylib.com/books/en/1.504.1.112/1/
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Templating Basics
Templates are the home for what the user sees, like forms, buttons, links, and headings.
In this section of the Guides, you will learn about where to write HTML markup, plus how to add interaction, dynamically changing content, styling, and more. If you want to learn in a step-by-step way, you should begin your journey in the Tutorial instead.
Writing plain HTML
Templates in Ember have some superpowers, but let's start with regular HTML.
For any file in an Ember app that has an extension ending in .hbs, you can write HTML markup in it as if it was an .html file.
HTML is the language that browsers understand for laying out content on a web page.
.hbs stands for Handlebars, the name of a tool that lets you write more than just HTML.
For example, every Ember app has a file called application.hbs. You can write regular HTML markup there or in any other hbs file:
<h1>Starting simple</h1>
<p>
  This is regular html markup inside an hbs file
</p>
When you start an app with ember serve, your templates are compiled down to something that Ember's rendering engine can process more easily. The compiler helps you catch some errors, such as forgetting to close a tag or missing a quotation mark. Reading the error message on the page or in your browser's developer console will get you back on track.
Types of templates
There are two main types of templates: Route templates and Component templates.
A Route template determines what is shown when someone visits a particular URL.
A Component template has bits of content that can be reused in multiple places throughout the app, like buttons or forms.
If you look at an existing app, you will see templates in many different places in the app folder structure! This is to help the app stay organized as it grows from one template to one hundred templates. The best way to tell if a template is part of a Route or Component is to look at the file path.
Making new templates
New templates should be made using Ember CLI commands. The CLI helps ensure that the new files go in the right place in the app folder structure, and that they follow the essential file naming conventions.
For example, either of these commands will generate .hbs template files (and other things!) in your app:
ember generate component my-component-name
ember generate route my-route-name
Template restrictions
A typical, modern web app is made of dozens of files that have to all be combined together into something the browser can understand. Ember does this work for you with zero configuration, but as a result, there are some rules to follow when it comes to adding assets into your HTML.
You cannot use script tags directly within a template, and should use actions or Component Lifecycle Hooks to make your app responsive to user interactions and new data.
If you are working with a non-Ember JavaScript library and need to use a js file from it, see the Guide section Addons and Dependencies.
You should not add links to your own local CSS files within the hbs file. Style rules should go in the app/styles directory instead. app/styles/app.css is included in your app's build by default. For CSS files within the styles directory, you can create multiple stylesheets and use regular CSS APIs like @import to link them together.
If you want to incorporate CSS from an npm package or similar, see Addons and Dependencies for instructions.
To load styles through a CDN, read the next section below.
What is index.html for?
If HTML markup goes in hbs templates, what is index.html for? The index.html file is the entry point for an app. It is not a template, but rather it is where all the templates, stylesheets, and JavaScript come together into something the browser can understand. When you are first getting started in Ember, you will not need to make any changes to index.html.
There's no need to add any links to other Ember app pages, stylesheets, or scripts in here by hand, since Ember's built-in tools do the work for you.
A common customization developers make to index.html is adding a link to a CDN that loads assets like fonts and stylesheets.
Here's an example:
<link integrity="" rel="stylesheet" href="">
Understanding a Template's context
A template only has access to the data it has been given. This is referred to as the template's "context." For example, to display a property inside a Component's template, it should be defined in the Component's JavaScript file:
import Component from '@ember/component';

export default Component.extend({
  firstName: 'Trek',
  lastName: 'Glowacki',
  favoriteFramework: 'Ember'
});
Properties like firstName can be used in the template by putting them inside of curly braces, plus the word this:
Hello, <strong>{{this.firstName}} {{this.lastName}}</strong>!
Together, these render with the following HTML:
Hello, <strong>Trek Glowacki</strong>!
Things you might see in a template
A lot more than just HTML markup can go in templates.
In the other pages of this guide, we will cover the features one at a time.
In general, special Ember functionality will appear inside curly braces, like this: {{example}}.
Here are a few examples of Ember Handlebars in action:
Route example:
<!-- outlet determines where a child route's content should render.
     Don't delete it until you know more about it! -->
<div>
  {{outlet}}
</div>

<!-- One way to use a component within a template -->
<MyComponent />

{{! Example of a comment that will be invisible, even if it contains things in {{curlyBraces}} }}
Component example:
<!-- A property that is defined in a component's JavaScript file -->
{{this.numberOfSquirrels}}

<!-- Some data passed down from a parent component or controller -->
{{@weatherStatus}}

<!-- This button uses Ember Actions to make it interactive. A method named
     `plantATree` is called when the button is clicked. `plantATree` comes from
     the JavaScript file associated with the template, like a Component or
     Controller -->
<button onclick={{action 'plantATree'}}>
  More trees!
</button>

<!-- Here's an example of template logic in action. If the `this.skyIsBlue`
     property is `true`, the text inside will be shown -->
{{#if this.skyIsBlue}}
  If the skyIsBlue property is true, show this message
{{/if}}

<!-- You can pass a whole block of markup and handlebars content from one
     component to another. yield is where the block shows up when the page
     is rendered -->
{{yield}}
Lastly, it's important to know that arguments can be passed from one Component to another through templates:
<MyComponent @favoriteFramework={{this.favoriteFramework}} />
To pass in arguments associated with a Route, define the property from within a Controller. Learn more about passing data between templates here.
Helper functions
Ember Helpers are a way to use JavaScript logic in your templates.
For example, you could write a Helper function that capitalizes a word, does some math, converts a currency, or more.
A Helper takes in two types of arguments, positional (an array of the positional values passed in the template) or named (an object of the named values passed in the template), which are passed into the function, and should return a value.
Ember gives you the ability to write your own helpers, and comes with some helpers built-in.
For example, let's say you would like the ability to add two numbers together.
Define a function in app/helpers/sum.js to create a sum helper:
import { helper as buildHelper } from '@ember/component/helper';

export function sum(params) {
  return params[0] + params[1];
}

export const helper = buildHelper(sum);
Now you can use the sum() function as {{sum}} in your templates:
<p>Total: {{sum 1 2}}</p>
The user will see a value of 3 rendered in the template!
Ember ships with several built-in helpers, which you will learn more about in the following guides.
Nested Helpers
Sometimes, you might see helpers invoked by placing them inside parentheses, (). This means that a Helper is being used inside of another Helper or Component. This is referred to as a "nested" Helper Invocation. Parentheses must be used because curly braces {{}} cannot be nested. For example: {{sum (multiply 2 4) 2}}.
Many of Ember's built-in helpers (as well as your custom helpers) can be used in nested form.
|
https://guides.emberjs.com/release/templates/
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
public class StatusLineContributionItem extends ContributionItem
This class may be instantiated; it is not intended to be subclassed.
Methods inherited from class ContributionItem: dispose, fill, fill, fill, getId, getParent, isDirty, isDynamic, isEnabled, isGroupMarker, isSeparator, isVisible, saveWidgetState, setId, setParent, setVisible, toString, update, update
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
public static final int CALC_TRUE_WIDTH
public StatusLineContributionItem(String id)
id - the contribution item's id, or null if it is to have no id
public StatusLineContributionItem(String id, int charWidth)
id - the contribution item's id, or null if it is to have no id
charWidth - the number of characters to display. If the value is CALC_TRUE_WIDTH then the contribution will compute the preferred size exactly. Otherwise the size will be based on the average character size * 'charWidth'
public void fill(Composite parent)
Description copied from class: ContributionItem
The default implementation of this IContributionItem method does nothing. Subclasses may override.
Specified by: fill in interface IContributionItem
Overrides: fill in class ContributionItem
parent - the parent control
public Point getDisplayLocation()
Returns the display location, or null if not yet initialized.
public String getText()
public void setText(String text)
text - the text to be displayed, must not be null
Copyright (c) 2000, 2013 Eclipse Contributors and others. All rights reserved.
|
https://help.eclipse.org/kepler/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/jface/action/StatusLineContributionItem.html
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Implement virtual track selection for AOD analysis. More...
#include <AliEmcalTrackSelectionAOD.h>
Implement virtual track selection for AOD analysis.
Implementation of track selection in case the analysis runs on AODs For the moment it uses the AliESDtrackCuts and converts AOD tracks to ESD tracks, which might change in the future when an AOD track selection framework becomes available.
Definition at line 50 of file AliEmcalTrackSelectionAOD.h.
Main constructor.
Initializes fields with 0 (or NULL). For ROOT I/O, not intended to be used by the users.
Definition at line 54 of file AliEmcalTrackSelectionAOD.cxx.
Constructor for periods.
Initializing track cuts depending on the requested type of filtering
Definition at line 70 of file AliEmcalTrackSelectionAOD.cxx.
Main Constructor.
Initializing also track cuts and filter bits. In case the initial cuts object is a null pointer, only filter bits are used for the track selection. This constructor is intended to be used by the users.
Definition at line 59 of file AliEmcalTrackSelectionAOD.cxx.
Destructor.
Definition at line 85 of file AliEmcalTrackSelectionAOD.h.
Add a new filter bit to the track selection.
Multiple filter bits can be set at the same time (via the bitwise or operator |).
Definition at line 194 of file AliEmcalTrackSelectionAOD.cxx.
Referenced by EMCalTriggerPtAnalysis::AliAnalysisTaskEmcalClusterMatched::InitializeTrackSelections(), and ~AliEmcalTrackSelectionAOD().
Automatically generates track cuts depending on the requested type of filtering.
Implements AliEmcalTrackSelection.
Definition at line 76 of file AliEmcalTrackSelectionAOD.cxx.
Referenced by AliEmcalTrackSelectionAOD(), EMCalTriggerPtAnalysis::AliEmcalAnalysisFactory::TrackCutsFactory(), and ~AliEmcalTrackSelectionAOD().
Returns the hybrid filter bits according to a hard-coded look-up table.
Definition at line 213 of file AliEmcalTrackSelectionAOD.cxx.
Referenced by GenerateTrackCuts(), and ~AliEmcalTrackSelectionAOD().
Performing track selection.
Function checks whether track is accepted under the given track selection cuts. The function can handle AliAODTrack and AliPicoTrack, while for AliPicoTrack an AliAODTrack is expected to be the underlying structure. If it is not possible to access an AOD track from the input track, the object will not be selected. Otherwise first the status bits are checked (if requested), and if further track cuts (of type AliESDtrackCuts) are provided, the track is converted to an ESD track for further checks.
Implements AliEmcalTrackSelection.
Definition at line 149 of file AliEmcalTrackSelectionAOD.cxx.
Referenced by PWG::EMCAL::TestAliEmcalTrackSelectionAOD::TestHybridDef2010woRefit(), PWG::EMCAL::TestAliEmcalTrackSelectionAOD::TestHybridDef2010wRefit(), PWG::EMCAL::TestAliEmcalTrackSelectionAOD::TestHybridDef2011(), PWG::EMCAL::TestAliEmcalTrackSelectionAOD::TestTPConly(), and ~AliEmcalTrackSelectionAOD().
|
http://alidoc.cern.ch/AliPhysics/vAN-20181012/class_ali_emcal_track_selection_a_o_d.html
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
On Tue, Apr 25, 2006 at 06:02:09PM +0000, neptun_AT_gmail.com wrote:
> >
> > I made a record for a Focus event in TODO.wmii-4 of the hg tip,
> > same with the planned client/id file stuff and /client/
> > namespace changes.
> >
> > Regards,
> > --
> > Anselm R. Garbe ><>< ><>< GPG key: 0D73F361
> >
> > _______________________________________________
> > wmii_AT_wmii.de mailing list
> >
>
> Seems that wmii will become even more customisable and I love
> that!
I disagree, the current TODO doesn't tell anything about new
options, it simply contains ideas how the evolution might
proceed. Except the id file, I don't see any need to add other
files to the fs. The /def/colmode and /view/X/mode files might
be renamed to stack once the improved column arrangement
algorithm is implemented (actually I believe it will even reduce
lines of code)...
Well, the TODOs will introduce some more lines of code, because
of tagbars, and EWMH/Xinerama/screen resolution change bits, but
it should be still possible to keep the overall SLOC count under
9kSLOC with wmii-4.
Regards,
-- Anselm R. Garbe ><>< ><>< GPG key: 0D73F361
Received on Tue Apr 25 2006 - 17:27:19 UTC
This archive was generated by hypermail 2.2.0 : Sun Jul 13 2008 - 16:03:32 UTC
|
http://lists.suckless.org/wmii/0604/1385.html
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Opened 11 years ago
Last modified 3 months ago
#7835 new New feature
Provide the ability for model definitions that are only availably during testing
Description
A current limitation of the unit test framework is that there is no capacity to define 'test models' - that is, models that are only required for the purposes of testing. A regular installation would not know anything about these models - only a test database would have access to them.
There are several existing applications that have a need for this capability: For example:
- contrib.admin: you can't test the way admin handles models without some models to handle.
- contrib.databrowse: you can't test the way the browser works without having models to browse
- Django Evolution: you can't evolve models without having some models to evolve.
The easiest way to work around this at present is to have a standalone test project which exercises the required functionality. However, these tests aren't integrated into the automated test suite, so they don't get run as part of the automated test process.
Another option is to do some app_cache munging during the test - this works, but is very messy.
Django should provide a way for model definitions to be defined as part of the test definition, synchronized as part of the test setup, populated and manipulated during test execution, and destroyed along with the test database.
Attachments (3)
Change History (49)
comment:1 Changed 11 years ago by
comment:2 Changed 11 years ago by
comment:3 Changed 11 years ago by
comment:4 follow-up: 5 Changed 11 years ago by
A discussion came about on the user-list about this:
I wrote a patch based on Russell's feedback. The proposed API is as follows:
class MyMetaTest(TestCase):
    installed_apps = ['fakeapp', 'otherapp']
    extra_apps = ('yetanotherapp',)

    def test_stuff(self):
        ...
installed_apps and extra_apps can either be a tuple or a list. They can coexist or be used individually. installed_apps overrides the settings INSTALLED_APPS. extra_apps adds the given apps either to INSTALLED_APPS or to installed_apps if it also exists.
Now, responding specifically to Russell's remarks:
"Obviously, the test applications need to be:
- Added to INSTALLED APPS and the app cache on startup
- Installed in the app cache before the syncdb caused by the pre-test database flush takes effect. You shouldn't need to manually invoke syncdb.
- Removed from INSTALLED_APPS and the app cache on teardown"
My answers below:
- Done
- Done, but I think syncdb still needs to be invoked to create the extra tables, otherwise flush will raise an exception saying it cannot find those tables.
- INSTALLED_APPS is properly restored, but I could not find a way to unload the apps from the cache. Should an extra method be written in the AppStore class (unload_app() and/or unregister_models())?
Changed 11 years ago by
Path with suggested new API (installed_apps and extra_apps)
comment:5 follow-up: 6 Changed 11 years ago by
- Done, but I think syncdb still needs to be invoked to create the extra tables, otherwise flush will raise an exception saying it cannot find those tables.
This reveals a slightly larger bug, which I've logged as #9717. The right approach here is to fix #9717, not work around the issue.
- INSTALLED_APPS is properly restored, but I could not find a way to unload the apps from the cache. Should an extra method be written in the AppStore class (unload_app() and/or unregister_models())?
I would be highly surprised if you could deliver this patch without adding some cache cleanup methods to the AppStore.
Also - I know this is early days, but just as a friendly reminder before I turn into the Patch Hulk: Patch need tests! Patch need docs! Aaargh! Hulk angry!!... (oh dear... too late! :-)
But how do you test a testing patch? One suggestion - migrate some of the existing Django tests to use the new framework. For example, contrib.admin has tests currently stored in the system tests that should probably be application tests. If you migrate the admin tests, this will demonstrate that your patch works and has sufficient capabilities for a real-world testing situation.
One issue that the documentation should address - some sort of convention to avoid application name clashes. manage.py validate will prevent model and table clashes, but if I name one of my test applications 'sites' or 'auth', all hell could break loose. You might want to suggest some form of convention around test application naming - for example, naming all test applications 'test_foo', rather than just 'foo'. Alternatively, you could enforce this programmatically by registering test applications with a test_ prefix (this could raise other complications though - it might be easier to stick with the convention).
comment:6 Changed 11 years ago by
This reveals a slightly larger bug, which I've logged as #9717. The right approach here is to fix #9717, not work around the issue.
Ok, I'll look at that one, see if I can help.
I would be highly surprised if you could deliver this patch without adding some cache cleanup methods to the AppStore.
Will look at that too.
Also - I know this is early days, but just as a friendly reminder before I turn into the Patch Hulk: Patch need tests! Patch need docs! Aaargh! Hulk angry!!... (oh dear... too late! :-)
Aahhh!! The Patch Hulk is back!!! :) In fact, I didn't want to write too much doc/tests before settling on the API. What's your view on this? Is the suggested API ok, or should it be different?
As you suggest, I'll try to migrate some admin tests. For the doc, I agree with you that some clear and well-explained conventions would be preferable. I'll introduce that in the next patch.
comment:7 Changed 11 years ago by
Deferring documentation is ok; writing good docs takes a lot of time, and it takes a lot of effort to rework if a big design change is made. I think the API is stable enough at this point to warrant making a start on documentation.
However, deferring tests isn't ok. Tests are how you prove to me that the code works, that you've thought about all the nasty ways that things can go wrong, and that you are handling those exceptions.
comment:8 Changed 11 years ago by
Point taken about the tests ;) I had tested in-house with some of my apps, but it's true that without tests it's hard to prove to others that it works. I also wasn't sure how to write tests for this, but migrating the admin tests seems like a good approach.
Another question. Shouldn't ticket #9717 be merged with this one? After all, the issue emerged from this ticket and no one's really complained about that separately before (correct me if wrong). The same thing applies for adding unloading methods to AppStore, which at this stage wouldn't justify a ticket on its own.
My only concern is that all these issues (flush, AppStore and test apps) need to be fixed all at once. And while I'm happy to try to fix them (and for anyone else who'd be interested to contribute), managing 2 sets of tickets/patches would be more painful to handle.
What do you think? Just let me know, and I'll comply to whatever you decide ;)
comment:9 Changed 11 years ago by
No - #9717 shouldn't be merged, that's why I opened another ticket. Consider - #9717 can be fixed without requiring a fix for this ticket; likewise, this ticket could be closed without fixing #9717. Hence, they are separate tickets. However, I would say that fixing #9717 is a reasonable pre-requisite for merging this ticket, as any fix for this ticket will need to work around the problem identified by #9717.
There is also the branch management issue; a patch for #9717 will need to be applied to the v1.0.X branch because it is a bug fix; this ticket describes a feature addition, so it will only be applied to trunk.
Sure, not many people have complained about this problem, but that doesn't mean nobody has had the problem. It is entirely possible that people have been having the problem but not understanding the cause.
Regarding two patches being difficult to handle: If you provide a standalone patch for #9717, I will get it into the trunk with haste, as it is a clearly identified bug with existing code.
Changed 11 years ago by
patch+test+doc
comment:10 follow-up: 13 Changed 11 years ago by
So I have finally migrated the tests for contrib.comments, which seems a more doable intermediary step before migrating huge tests like the admin's. I also wrote a piece of documentation to present the new API and a suggested convention for avoiding app name clashes.
The only thing that was requested and which I haven't done is unloading the test apps from the AppCache. But, is it really necessary? It will be highly recommended to developers to follow a certain convention to avoid name clashes. If they do so, then unloading the models/apps from the cache won't be crucial to do. If they don't do so, then there might be a whole lot of unpredicted conflicts at several other levels anyway. Ideas?
comment:11 follow-up: 12 Changed 11 years ago by
Oh, another note. In the code patch, I still need to run syncdb to create potential new tables for the test apps' models. I don't think this can really be avoided. And I don't think this is a problem either.
comment:12 follow-up: 14 Changed 11 years ago by
Oh, another note. In the code patch, I still need to run syncdb to create potential new tables for the test apps' models. I don't think this can really be avoided. And I don't think this is a problem either.
The problem isn't (just) one of code neatness - it's the execution time for the tests. On my machine, Django's full system test suite takes about 5 minutes to execute for SQLite; almost 10 minutes to run for Postgres. I haven't run the Oracle tests myself, but I've been led to believe that it goes from "go make yourself a cup of coffee" to "go make yourself a 9-course degustation banquet". I'm going to be very picky about introducing anything that has the potential to make this situation worse.
Syncdb isn't a no-op even when there is nothing to do. Given that there is already a syncdb being called as part of the flush, my initial reaction is that you shouldn't need another one. This may mean that there are some other modifications that need to be made; I'm not opposed to making such changes, if they're required.
If an additional call to syncdb is completely unavoidable, then so be it; however, I'm not yet convinced that it is unavoidable. Feel free to convince me :-)
comment:13 Changed 11 years ago by
The only thing that was requested and which I haven't done is unloading the test apps from the
AppCache. But, is it really necessary?
Again, feel free to convince me otherwise, but my initial reaction is "yes". Tests shouldn't have side effects after their execution; stale entries sticking around the app cache have the potential to introduce this sort of side effects.
comment:14 Changed 11 years ago by
If an additional call to syncdb is completely unavoidable, then so be it; however, I'm not yet convinced that it is unavoidable. Feel free to convince me :-)
The proposed API allows one to add *any* app to INSTALLED_APPS, that is, some that may have already been synced (e.g. the common ones like contenttypes or auth) and some that probably haven't been synced yet (typically the app's internal test apps). Therefore I think that a synchronisation is necessary because we can't predict which apps have been synced or not yet.
You say that "there is already a syncdb being called as part of the flush". This is not exactly true because my patch simply won't work without an explicit call to syncdb: flush does not create tables for the dynamically added apps which haven't been synced yet.
But even so, I don't think it would have such a latency impact. Assuming that this new API ever gets checked in, I presume only (or mostly) the contrib apps would bother using it so they become more self-contained (I'm talking in the context of the Django test suite). And, with the boost improvement scheduled for 1.1 (all tests run in one transaction), these considerations might well be negligible anyway.
Finally (and maybe more importantly) I am not sure how to go without syncdb :) Any hint?
comment:15 Changed 11 years ago by
Again, feel free to convince me otherwise, but my initial reaction is "yes". Tests shouldn't have side effects after their execution; stale entries sticking around the app cache have the potential to introduce this sort of side effects.
I don't know if I can convince you for this one either, but I have one question: if the tests introduce such side effects, wouldn't that mean that the AppCache is buggy in the first place?
The test framework already relies on many conventions. As long as one sticks to those (and assuming the AppCache is not buggy), then I tend to think that everything would go fine. Obviously, if one is not careful enough and does not follow the conventions, then things could crash miserably.
Also, considering the proposed API, how would we know which app to unload, since we may include some apps that have already been loaded by the test suite (e.g. the most common ones like contenttypes or auth).
All these considerations depend on whether or not the suggested API is adequate. You haven't commented much on that yet. Could you share your thoughts on the quality of the API and how it could be improved?
comment:16 Changed 11 years ago by
comment:17 Changed 10 years ago by
comment:18 Changed 10 years ago by
comment:20 Changed 10 years ago by
The following is not a proposed change since it is a hack, but here is what I have done to have models during testing:
In my app I have a folder "tests". It contains "testingmodels.py". In my test case (defined in tests/__init__.py) I have redefined _pre_setup() like this:
import sys
import new

from django.db.models import loading
from django.core.management import call_command
from django.test import TestCase

from . import testingmodels

class MyTestCase(TestCase):
    def _pre_setup(self):
        # register tests models
        # we'll fake an app named 'my_test_app'
        module = new.module('my_test_app')
        module.__file__ = __file__  # somewhere django looks at __file__. Feed it.
        module.models = testingmodels
        testingmodels.__name__ = 'my_test_app.models'
        sys.modules['my_test_app'] = module
        # register fake app in django and create DB models
        from django.conf import settings
        settings.INSTALLED_APPS += ('my_test_app',)
        loading.load_app('my_test_app')
        call_command('syncdb', verbosity=0, interactive=False)
        return super(MyTestCase, self)._pre_setup()
Note: due to django constraints the app must contain an empty "models.py".
I'm using this with success under django 1.0.2. I can't say I would recommend it as a general solution, but maybe it can help or inspire someone?
comment:21 Changed 10 years ago by
Thanks for this. I thought I'd also mention here the hack (originally posted at) which I've successfully used in many applications, and which is reasonably succinct in terms of lines of code. This is in fact what served as a model for the patch I've posted in this ticket. Maybe that will be useful to someone until this ticket eventually gets fixed.
Sample file structure:
myapp/
    tests/
        fakeapp/
            __init__.py
            models.py
        __init__.py
    models.py
    views.py
    urls.py
Here is the testing code (located in myapp/tests/__init__.py):
import sys

from django.test import TestCase
from django.conf import settings
from django.core.management import call_command
from django.db.models.loading import load_app

from fakeapp.models import FakeItem

class TestMyApp(TestCase):
    def setUp(self):
        self.old_INSTALLED_APPS = settings.INSTALLED_APPS
        settings.INSTALLED_APPS = (
            'django.contrib.auth',
            'django.contrib.contenttypes',
            'myapp',
            'myapp.tests.fakeapp',
        )
        load_app('myapp.tests.fakeapp')
        call_command('syncdb', verbosity=0, interactive=False)  # Create tables for fakeapp

    def tearDown(self):
        settings.INSTALLED_APPS = self.old_INSTALLED_APPS

    def test_blah(self):
        item = FakeItem.objects.create(name="blah")
        # Do some testing here...
comment:22 Changed 10 years ago by
comment:23 Changed 10 years ago by
comment:24 Changed 9 years ago by
So, I'm a little surprised this hasn't been mentioned here already, which makes me wonder if I'm missing something obvious, but: in the process of checking out #14677 (and associated django-users post,), it appears to me that we already have a pretty good working solution for test-only models in trunk (and I'm wondering why I never thought of it). Apparently you can simply define models directly in your tests.py. Syncdb never imports tests.py, so those models won't get synced to the normal db, but they will get synced to the test database, and can be used in tests. I haven't done this extensively myself (yet), but I just tested and it works. Problem solved?
The only difference I can see between that and the new feature being discussed here is whether the test-only models live in the same app as the one under test, or in a special test-only app. But I can't think of any reasons having them in the same app would be a problem. It even seems potentially a bit cleaner.
Of course, we still may want a fix for #14677, then; which parallels #4470, which we're hoping will be fixed when Arthur Koziel's GSOC branch is merged to fix #3591? I don't know whether that branch will otherwise impact this strategy for test-only models.
comment:25 Changed 9 years ago by
I don't know what the current suggested syntax for this is, but either way it should be defined inside the app, and not like in the example of the description. Probably it should be in the __init__.py of the app, with something like:
APP_LABEL='myauth'
I mean it should not be variable.
E.g. if we allow the app_label to be variable, what happens to all permission checks? Someone installs the auth app as myauth and suddenly all permission checks stop working.
comment:26 Changed 9 years ago by
comment:27 Changed 9 years ago by
@carljm; you're right that it hasn't been mentioned, but it hasn't been neglected, either.
To my mind, there are (at least) three features missing from the "put models in tests.py" technique:
- Specify only specific models for a specific test (e.g., only have the Article model for this particular test)
- Cleanup of contenttypes, permissions -- and most importantly -- the app cache itself.
- Defining multiple test apps. For example, to test the admin, you need multiple apps to demonstrate the app index works.
My original motivation here was to provide a test environment that was rich enough to support tests for model migrations. For that, you need all three features.
comment:28 Changed 9 years ago by
To second Russell's comment, my own personal motivation with this ticket was (and still is) to reproduce all the conditions necessary to test the dependency of an app on another fictional, self-contained app (i.e. not just to test the other app's models, but also its urls, its views, its templatetags, etc.).
For what it's worth, I've actually successfully used the hack above in multiple production apps (e.g.). The main theoretical thing that's missing for me is a proper cleanup of the appcache to make sure that nothing goes ugly while running the rest of your project's test suite (although in practice I've never run into anything problematic so far).
comment:29 Changed 8 years ago by
comment:30 Changed 8 years ago by
comment:31 Changed 7 years ago by
comment:32 follow-up: 34 Changed 6 years ago by
With 1.6 and the new discovery-runner the models in tests.py are not being installed anymore, so this feature would be way more useful now.
Just leaving an idea here, what about a decorator just like override_settings but for installing models?
The usage would be like:
class SomeTestModel(models.Model):
    pass  # whatever

class AnotherTestModel(models.Model):
    pass  # whatever

class YetAnotherTestModel(models.Model):
    pass  # whatever

@test_models(SomeTestModel)
class SomeModelsTestCase(TestCase):

    @test_models(AnotherTestModel)
    def test_them(self):
        with test_models(YetAnotherTestModel):
            pass  # The 3 models available here
Of course this wouldn't actually install the app, just models for testing custom fields or abstract models.
I actually have some code done but it's far from being correctly tested and I don't think I am qualified enough to provide a good implementation.
Anyway, anticipating some issues I encountered:
- Main problem was the automatic rollback from TestCase. My database (postgres) rejects transactions with changes to a table followed by a table drop.
- When applied to a test method (or as a context manager), I couldn't find a way to detect if the test function comes from a TransactionTestCase or a TestCase.
- Finally managed something with restore_transaction_methods and disable_transaction_methods from django.test.testcases.
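For readers landing here on Django 1.7 or later, a minimal sketch of the idea could look like the following. This is an editor's illustration, not the commenter's code: it assumes schema_editor() is available (1.7+) and sidesteps the transaction issues above by leaving rollback handling to the caller.

from contextlib import contextmanager
from django.db import connection

@contextmanager
def test_models(*models):
    # Create the tables for the given models, yield, then drop them again.
    with connection.schema_editor() as editor:
        for model in models:
            editor.create_model(model)
    try:
        yield
    finally:
        with connection.schema_editor() as editor:
            for model in reversed(models):
                editor.delete_model(model)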
comment:33 Changed 6 years ago by
comment:34 follow-up: 36 Changed 6 years ago by
With 1.6 and the new discovery-runner the models in tests.py are not being installed anymore, so this feature would be way more useful now.
Either ./manage.py test or ./manage.py test tested works fine, populating the test db and passing the tests as expected.
Changed 6 years ago by
comment:35 Changed 6 years ago by
However, I don't know why, but when using south, the "Test Models in tests.py" trick does not work (the table is not created), so you have to put the following in your settings.py:
SOUTH_TESTS_MIGRATE = False
to disable the south migrations for the test db.
(which will save you some CPU time anyway...)
comment:36 Changed 5 years ago by
Works fine, populating the test db and passing the tests as expected.
It seems like you don't interact with the db (save models to the test db). I didn't manage to run the tests. Do you think I've missed something?
comment:37 Changed 5 years ago by
comment:38 Changed 5 years ago by
I managed to work around this using migrations instead of syncdb under Django 1.7 with the following:
from django.conf import settings
from django.db import migrations

IS_TEST_DB = settings.DATABASES.get(
    'default', {}).get('NAME', '').startswith('test_')

class Migration(migrations.Migration):
    dependencies = [...]
    operations = [...]
    if IS_TEST_DB:
        operations.extend([
            ...
        ])
Alternatively you could use post_migrate hooks.
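A rough sketch of that post_migrate variant, under the same test-database-name assumption as above; the app name and the test-only model import path are hypothetical:

from django.apps import AppConfig
from django.conf import settings
from django.db import connection
from django.db.models.signals import post_migrate

IS_TEST_DB = settings.DATABASES.get(
    'default', {}).get('NAME', '').startswith('test_')

def create_test_only_models(sender, **kwargs):
    from myapp.tests.models import TestOnlyModel  # hypothetical test-only model
    with connection.schema_editor() as editor:
        editor.create_model(TestOnlyModel)

class MyAppConfig(AppConfig):
    name = 'myapp'  # illustrative app

    def ready(self):
        if IS_TEST_DB:
            post_migrate.connect(create_test_only_models, sender=self)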
comment:39 Changed 4 years ago by
This functionality is still broken as of Django 1.7.7. This is a very basic repro example:
from django.db import models
from django.test import TestCase

class TestModel(models.Model):
    label = models.CharField(
        max_length=100,
    )

class Test(TestCase):
    def setUp(self):
        TestModel.objects.create(
            label='X'
        )
Add this to any app as tests.py and run it, you will get an error like this:
django.db.utils.ProgrammingError: relation "media_testmodel" does not exist LINE 1: ...a_testmodel"."id", "media_testmodel"."label" FROM "media_tes...
comment:40 Changed 4 years ago by
A documentation patch has been proposed based on comment 38's approach of conditionally adding migration operations based on the database name. I don't think this approach is something we should officially recommend. I'd rather wait for a proper solution. One idea is a @test_model model class decorator that the migrations system respects.
comment:41 Changed 4 years ago by
comment:42 Changed 20 months ago by
This was a particularly annoying issue to me, which I resolved by creating a new TestRunner that did the work for me.
I think including something like this for easy use would effectively resolve this.
from importlib.util import find_spec
import unittest

from django.apps import apps
from django.conf import settings
from django.test.runner import DiscoverRunner


class TestLoader(unittest.TestLoader):
    """ Loader that reports all successful loads to a runner """
    def __init__(self, *args, runner, **kwargs):
        self.runner = runner
        super().__init__(*args, **kwargs)

    def loadTestsFromModule(self, module, pattern=None):
        suite = super().loadTestsFromModule(module, pattern)
        if suite.countTestCases():
            self.runner.register_test_module(module)
        return suite


class RunnerWithTestModels(DiscoverRunner):
    """
    Test Runner that will add any test packages with a 'models' module
    to INSTALLED_APPS. Allows test only models to be defined within any
    package that contains tests. All test models should be set with
    app_label = 'tests'
    """
    def __init__(self, *args, **kwargs):
        self.test_packages = set()
        self.test_loader = TestLoader(runner=self)
        super().__init__(*args, **kwargs)

    def register_test_module(self, module):
        self.test_packages.add(module.__package__)

    def setup_databases(self, **kwargs):
        # Look for test models
        test_apps = set()
        for package in self.test_packages:
            if find_spec('.models', package):
                test_apps.add(package)
        # Add test apps with models to INSTALLED_APPS that aren't already there
        new_installed = settings.INSTALLED_APPS + tuple(
            ta for ta in test_apps if ta not in settings.INSTALLED_APPS)
        apps.set_installed_apps(new_installed)
        return super().setup_databases(**kwargs)
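If anyone wants to try a runner along these lines, it would presumably be wired up through Django's standard TEST_RUNNER setting; the dotted path below is illustrative, not part of the commenter's code:

# settings.py
TEST_RUNNER = 'myproject.testrunner.RunnerWithTestModels'  # hypothetical module path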
comment:43 Changed 20 months ago by
comment:44 Changed 16 months ago by
comment:45 Changed 15 months ago by
comment:46 Changed 6 months ago by
While working on a workaround for this I came up with a non-invasive solution that might be acceptable for resolving the ticket.
The idea is similar to Ashley's solution but is more explicit, as it requires a function call in the app/tests/__init__.py module. It does however isolate each test package into its own app, which prevents name collisions, and doesn't require the app_label = 'test' assignment on each test model.
The solution boils down to this function:
from django.apps import AppConfig, apps

def setup_test_app(package, label=None):
    """
    Setup a Django test app for the provided package to allow test models
    tables to be created if the containing app has migrations.

    This function should be called from app.tests __init__ module and
    pass along __package__.
    """
    app_config = AppConfig.create(package)
    app_config.apps = apps
    if label is None:
        containing_app_config = apps.get_containing_app_config(package)
        label = f'{containing_app_config.label}_tests'
    if label in apps.app_configs:
        raise ValueError(f"There's already an app registered with the '{label}' label.")
    app_config.label = label
    apps.app_configs[app_config.label] = app_config
    app_config.import_models()
    apps.clear_cache()
Which, when called from app/tests/__init__.py as setup_test_app(__package__), will create an app_tests appconfig entry and discover the models automatically. Since the *.tests modules should only be loaded on test discovery, the app and its models will only be available during tests. Keep in mind that if your test models reference models from an application with migrations you'll also need to manually create migrations for these test models, but once that's done you should be good to go.
It does feel less magic and convention based than Ashley's solution as it prevents conflicts between models and allows multiple test apps per app from any test package structure. Thoughts?
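For illustration, usage would presumably look like this from a tests package; where setup_test_app itself lives is an assumption, so put it wherever suits your project:

# app/tests/__init__.py
from myproject.test_utils import setup_test_app  # hypothetical home for the helper

setup_test_app(__package__)  # registers an '<app_label>_tests' app for test-only models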
Workaround:
|
https://code.djangoproject.com/ticket/7835?cversion=2&cnum_hist=32
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Client Library for interfacing with various devices in HP Proliant Servers.
Project description
Proliant Management Tools provides python libraries for interfacing and managing various devices(like iLO) present in HP Proliant Servers.
Currently, this module offers a library to interface to iLO4 using RIBCL.
#!/usr/bin/python
from proliantutils.ilo import ribcl
ilo_client = ribcl.IloClient('1.2.3.4', 'Administrator', 'password')
print ilo_client.get_host_power_status()
|
https://pypi.org/project/proliantutils/0.1.0/
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Greensock Animations using React Hooks
Billy Jacoby
This is a brief tutorial on how to animate components on demand with Greensock and React hooks.
We'll be using create react app in this tutorial.
If you want to see a quick demo you can check it out here first:
To begin create a new app:
create-react-app gsap-with-hooks
cd gsap-with-hooks
The only other dependency we will need for this tutorial is GSAP.
yarn add gsap
Start the development server so that we can see our changes
yarn start
Since we will be adding our own animations here, remove the lines that animate the React Logo from src/App.css.
Looking at the development server, the logo should no longer be spinning.
Now we're going to add three buttons to our app that Pause, Play, and Reverse our animation. We're also going to turn the App component into a functional component.
Your App.js should look similar to this after adding the buttons:
Okay, now for the real work. In order to accomplish this correctly only using a functional component we will need to import useState, useRef, and useEffect from react.
Replace the import React from "react"; line with:
import React, {useState, useRef, useEffect} from "react";
The first thing we'll do is create a new ref and store the react img logo in it. This will ensure that this node is loaded on the DOM before we try to animate it with Greensock.
The next thing we'll do is create a react state object to store our animation function in. This will ensure that we are always accessing the already existing animation function as opposed to creating a new one.
Next we have to use the useEffect hook to make sure that the animation is only created once the DOM has been rendered. We will create our animation function here and store it in our state object.
Since we don't want our animation to play as soon as it's loaded, we throw the .pause() method on the end of it. This will enable us to control when it starts rather than just starting on loading.
The last thing to do is to wire up our buttons to do their jobs!
Note that the reverse method basically rewinds the animation, so it will only work if the animation has been running for a few seconds.
Now this is obviously just the beginning of what you can do with react hooks and GSAP.
I'll be posting a tutorial shortly on how to use the IntersectionObserver API with GSAP to animate objects when they appear on the screen.
Thanks for reading, and if you're interested in any other short tutorials be sure to let me know in the comments below!
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/billyjacoby/greensock-animations-using-react-hooks-5d1p
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
- February 26, 2007 26 Feb'07
Book excerpt: Upgrading to Visual Studio 2005
This chapter from "Professional Visual Studio 2005" walks developers through the ups and downs of upgrading a Visual Basic 6 application. Continue Reading
- February 23, 2007 23 Feb'07
Firm taps Windows Workflow Foundation to give customers more control
Remend, the maker of mortgage servicing default management software, completely revamped its app from J2EE to .NET delivered as a service. Continue Reading
- February 21, 2007 21 Feb'07
Put VB.NET events in the hands of AddHandler
This technical tip for intermediate VB.NET developers offers a look back at the AddHandler feature and how it addresses scenarios when there is no object variable to manipulate. Continue Reading
- February 20, 2007 20 Feb'07
Special Report: What Windows Vista means for .NET developers
Here we look at what's new in Windows Vista and Office and how developers can use Visual Studio and .NET 3.0 to leverage these new features. Continue Reading
- February 20, 2007 20 Feb'07
Using 2007 Office System Tools for Visual Studio 2005
Visual Studio 2005 now sports a set of tools for the 2007 Microsoft Office System. Ed Tittel takes a look at the tools that are available and what can be done with them. Continue Reading
- February 19, 2007 19 Feb'07
Windows Vista development resources
The release of Windows Vista has brought about a plethora of tips, tutorials and other resources. Here we link to some of the most helpful Vista resources for .NET developers. Continue Reading
- February 16, 2007 16 Feb'07
Podcast: .NET Development and Windows Vista
In this podcast, two MSDN developer evangelists discuss how Windows Vista lets developers focus less on "plumbing" code and more on building a better user experience. Continue Reading
- February 14, 2007 14 Feb'07
Visual Studio security updates released
Microsoft has released security updates for Visual Studio 2002 and Visual Studio 2003. The patches address a vulnerability that could allow for remote code execution. Continue Reading
- February 12, 2007 12 Feb'07
Learning .NET: Tips for getting started with .NET development
Our "Getting Started" tip series provides an introductory look at leading-edge technology like ASP.NET AJAX, .NET 3.0 and Visual Studio Team System. Continue Reading
- February 09, 2007 09 Feb'07
Book Review: Understanding .NET, Second Edition
Ed Tittel calls this book an effective .NET tutorial for software developers and their managers. It covers the My namespace, ASP.NET, the CLR and other important topics. Continue Reading
- February 08, 2007 08 Feb'07
Beginning Visual Studio Team System development
With Visual Studio Team System, Microsoft brings collaboration into the SDLC. This tip will help you get the most out of the product's planning, management and testing features. Continue Reading
- February 07, 2007 07 Feb'07
Add charts, gauges to apps with .NET Dashboard Suite
Perpetuum Software's .NET Dashboard Suite consists of two components offering a variety of gauges, charts and diagrams built on managed C# code. Continue Reading
- February 07, 2007 07 Feb'07
Visual Studio plug-in deploys .NET apps on Java
Mainsoft's Grasshopper 2.0 enables ASP.NET 2.0 applications written in C# to be deployed on Java-enabled platforms. Continue Reading
- February 07, 2007 07 Feb'07
Create flexible data classes with Objecto
Crainiate's Objecto 1.0 lets developers create n-tier data classes that, through object persistence, keep an application's assemblies in sync with its underlying database Continue Reading
- February 07, 2007 07 Feb'07
Open-source .NET lifecycle management tool now available
Aras Innovator 8 is an open-source lifecycle management product for .NET 2.0 applications. Continue Reading
- February 07, 2007 07 Feb'07
Bring Excel to ASP.NET, WinForms apps
SpreadsheetGear brings the functionality of Microsoft Excel to ASP.NET and Windows Forms applications. It works with .NET 1.1 or higher and integrates with Visual Studio 2005. Continue Reading
|
https://searchwindevelopment.techtarget.com/archive/2007/2
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
A custom class can both grab gestures, and expose a QML interface, but this raises the bar for use significantly. It also leaves the current set of declarative elements unloved. This isn't a recipe for a happy ending. ;-(
The observant documentation readers out there may have noted the existence of a GestureArea QML element in the documentation. I won't waste time and screen real estate duplicating what is written there, but I would like to provide a brief sketch. First, please note the caveat: Elements in the Qt.labs module are not guaranteed to remain compatible in future versions. That said, let's take a look at what this element provides:
A GestureArea handles one or more gestures within an area of the screen, much as a MouseArea handles mouse events. Each gesture type is handled by a corresponding signal. To illustrate, QGestureType::TapGesture can be accepted by implementing the Tap signal:
import Qt.labs.gestures 0.1
GestureArea {
    onTap: console.log("tap received")
}
Each signal has one or more properties describing the gesture. Going back to the tap, the gesture is described through a point called position. This property contains the point where the tap was registered.
Starting from the labs module described above, we've been experimenting with taking the GestureArea forward toward production quality. The name has been kept, but the rest of the element has seen significant changes. Being developers, we do a lot of our thinking in code (and on whiteboards, but code is compact), so here's something to start the explanation from:
import Qt.labs.gestures 2.0
GestureArea {
    Tap: {
        when: gesture.hotspot.x > gesture.hotspot.y
        onStarted: console.log("tap in upper right started")
        onFinished: console.log("tap in upper right completed")
    }
}
The first thing is, yes, the version number has jumped. Moving along, the syntax for hooking a gesture has changed. Rather than using a signal, you specify the gesture as a sub-element. All the default gesture names are recognized, and custom gestures can be as well. To do so, the recognizer needs to be registered with Qt via qmlRegisterUncreatableType() and qmlRegisterType(). See the GestureArea plugin.cpp for details.
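As a rough idea of what that registration might look like — MyGesture and MyGestureArea are hypothetical stand-ins here, and the real plugin.cpp remains the authoritative reference:
// plugin sketch: expose a custom gesture and its area element to QML
#include <QtDeclarative/qdeclarative.h>

void registerMyGestureTypes()
{
    // gesture objects come from the recognizer, so QML may reference but not create them
    qmlRegisterUncreatableType<MyGesture>("My.Gestures", 1, 0, "MyGesture",
        "created by the recognizer, not by QML");
    // the area element itself is instantiated directly from QML documents
    qmlRegisterType<MyGestureArea>("My.Gestures", 1, 0, "MyGestureArea");
}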
Within a gesture sub-element, there's an optional property called when. This property is used to specify a set of conditions that dictate when the gesture should be accepted. If an incoming gesture doesn't pass the test, it isn't accepted and future updates will be ignored. You can access the properties of the gesture through the gesture property, as well as anything else that happens to fall in scope. If the when property evaluates to true, the appropriate gesture state signal (onStarted, onUpdated, onFinished, onCanceled) is invoked.
Our development was guided by a few example interfaces that we thought should be easy to piece together. Along the way we hit a few walls, wrote a few patches, and had a great time.
import Qt 4.7
import Qt.labs.gestures 2.0

Rectangle {
    id: rootWindow
    width: 320
    height: 320
    color: "white"

    property int inGesture: 0

    signal reset
    onReset: { color = "#ffffff"; gestureText.text = "Gesture: none"; inGesture = 0 }

    Text {
        id: gestureText
        anchors.centerIn: parent
        text: "Gesture: none"
    }

    GestureArea {
        anchors.fill: parent
        Pan {
            when: inGesture != 2
            onStarted: { rootWindow.color = "#fffca4"; inGesture = 1 }
            onUpdated: gestureText.text = "Pan: X offset = " + gesture.offset.x.toFixed(3)
            onFinished: rootWindow.reset()
        }
        Pinch {
            when: inGesture != 1
            onStarted: { rootWindow.color = "#a3e2ff"; inGesture = 2 }
            onUpdated: gestureText.text = "Pinch: scale = " + gesture.scaleFactor.toFixed(3)
            onFinished: rootWindow.reset()
        }
    }
}
This is where we ask something of you, dear readers. Grab the GestureArea module, and start creating. Have a look at the examples for inspiration. The module is targeted at Qt 4.7.1 [Edit: as in should build using, not will be shipping with], and there are some experiments being carried out in this research repository.
And then tell us what you think. This release bears the same warning as the first implementation. We want to stabilize this functionality and get it into the declarative core, but we need your feedback to get there.
How would I accomplish tinting the tab bar of a TabbedPage in Xamarin.Forms? The TabbedPage doesn't seem to expose a property or method to set the tint directly.
Every child page of my TabbedPage is a NavigationPage. Setting the "Tint" of the NavigationPage adjusts the nav bar, while setting the "BackgroundColor" of those same NavigationPage children adjusts the tab bar in a very subtle way (it seems to be a mix of the color I choose and some extreme opacity). This is on iOS specifically.
How can I set it to the actual color I am specifying for the BackgroundColor, so that I can have it match the nav bar Tint?
There are two ways to do this: via the Appearance API, which works globally, or using a custom renderer.
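For the Appearance route, a minimal sketch (my assumption: run it early, for instance in the AppDelegate, before any tab bar is created):
using MonoTouch.UIKit;
// the appearance proxy tints every UITabBar created afterwards
UITabBar.Appearance.TintColor = UIColor.White;    // selected item tint
UITabBar.Appearance.BarTintColor = UIColor.Blue;  // bar background (iOS 7+)
And for the custom renderer route: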
[assembly: ExportRenderer(typeof(TabbedPage), typeof(TabbedPageCustom))]
namespace MobileCRM.iOS
{
    public class TabbedPageCustom : TabbedRenderer
    {
        public TabbedPageCustom ()
        {
            TabBar.TintColor = MonoTouch.UIKit.UIColor.Black;
            TabBar.BarTintColor = MonoTouch.UIKit.UIColor.Blue;
            TabBar.BackgroundColor = MonoTouch.UIKit.UIColor.Green;
        }
    }
}
That fixed it, thank you!
How could this be accomplished on Android?
The code snippet above no longer seems to work in the current version.
How is this to be done with the current Xamarin.Forms version?
Sorry, but when I add the code to my solution, the "TabBar" cannot be resolved. I think your code only covers part of the solution, right? What else do I need?
Thank you!
@RonaldKasper
Have you added the namespaces, such as Xamarin.Forms.Platform.iOS and Xamarin.Forms, plus your renderer class's own namespace?
There's an effort on the Xamarin Forms Labs project to extend the tab page control, called ExtendedTabbedPage, that exposes some color properties that could help you:
I can't get Swipe or Tint color to work on ExtendedTabbedPage.
I cannot get swipe or tint to work either on ExtendedTabbedPage. Can someone please confirm that ExtendedTabbedPage.cs still works with the latest version of Xamarin/Xamarin Forms? Any information would be greatly appreciated. Thanks.
@David.6954 I'm seeing the same thing
@AliRFarahnak, @David.6954, @ErikAndersen.1430 ExtendedTabbedPage is only in v1.2.x for iOS at this time. What platforms are you guys using?
I'm targeting iOS. I don't have my computer by me at the moment, but I just subclassed ExtendedTabbedPage and put all of my code inside its constructor. Then I set it as my root view controller like so:
window.RootViewController = new MyCustomExtendedTabbedPage().CreateViewController();
Actually, I forgot that I'm using the Xamarin.Forms 1.3 preview so that's probably why it isn't working.
Okay, so I confirmed that I'm using v1.2.3.6257 and it's still not working. Here's my code:
TabForm.xaml
TabForm.xaml.cs
AppDelegate.m
window.RootViewController = new TabForm ().CreateViewController();
My NavigationForm is just a subclass of a NavigationPage. My DirectoryForm and AssignmentsForm are subclasses of ContentPage.
This works for me
Thanks Steve! Your example worked just fine.
Any ideas on how to achieve the same thing using Custom Renderers on Android, while keeping the ActionBar at the bottom (much like on iOS)?
After further investigation, I unfortunately found that the code below in fact did not work.
Apparently, my CustomRenderer was still applied, even though I did not use the implementation in my view.
I gave it another shot, and found that the following worked just fine as well:
var tabs = new TabbedPage() { BackgroundColor = Color.FromRgb(76, 108, 139) };
No need to make custom renderers for applying a simple background color anymore, it'd seem.
@SteveChadbourne
Thanks for posting, works fine for me.
Hi friends, I need to fill the tab view in the screen. How can I do it in Xamarin Forms?
using System;
using Xamarin.Forms;

namespace Resturent_demo
{
    public class Search : TabbedPage
    {
        public Search ()
        {
            this.Title = "tabbedPage";
            this.ItemsSource = new NamedColor[] {
                new NamedColor ("Red", Color.Red),
                new NamedColor ("Yellow", Color.Yellow),
                new NamedColor ("Green", Color.Green),
                new NamedColor ("Aqua", Color.Aqua),
                new NamedColor ("Blue", Color.Blue),
                new NamedColor ("Purple", Color.Purple)
            };
        }
    }
}
Here is my code. Where should I place the properties of the tabs, such as color changing and fitting the screen?
Does ExtendedTabbedPage still work with Xamarin.Forms 1.4+?
Sorry to ask like this again, but do you have an example? @MiguelCervantes
@shamnad I'm trying to set a color on the tab bar:
public class MyFriendPage : ExtendedTabbedPage
{
    public MyFriendPage()
    {
        TintColor = Color.FromHex("00806E");
        BarTintColor = Color.FromHex("00806E");
        Title = "Contacts";
        Children.Add(MyFriendSearchPage()); // Content Page
        Children.Add(MyFriendList());       // Content Page
    }
}
But I can't see it working on Android, i'm using the latest version of Xamarin.Forms 1.4.0 and XLabs.Forms 2.0 any ideas?
@MiguelCervantes I am new to Xamarin Forms, that's why I am asking like this. I can't inherit ExtendedTabbedPage.
I am also trying to customize the tab view in Xamarin Forms. Or is it a class that you created in your program? If that class is system-defined, which package would I have to add to my project?
I'm sorry dude @Shamnad, I misunderstood your question.
In order to use ExtendedTabbedPage you need to download the XLabs.Forms package via NuGet; once downloaded, simply add it to your page:
using XLabs.Forms.Controls;
After that you can inherit from ExtendedTabbedPage, using the sample code above.
Actually it works with TabbedPage without the extended properties, but I can't change the bar color with ExtendedTabbedPage; that's why I'm asking if it still works. Hope this sample helps.
thanks @MiguelCervantes
@MiguelCervantes I am having the same issues. I extend the ExtendedTabbedPage and set the TintColor and the BarTintColor but nothing changes. Considering this was the main point of the control, it is a bit odd.
This works perfectly!
This chapter includes the following sections:
Overview of Coherence Clusters
Setting Up a Coherence Cluster
Creating Coherence Deployment Tiers
Configuring a Coherence Cluster
Configuring Managed Coherence Servers
Using a Single-Server Cluster
Using WLST with Coherence
Coherence clusters consist of multiple managed Coherence server instances that distribute data in-memory to increase application scalability, availability, and performance. An application interacts with the data in a local cache and the distribution and backup of the data is automatically performed across cluster members.
Coherence clusters are different than WebLogic Server clusters: they use different clustering protocols and are configured separately. Multiple WebLogic Server clusters can be associated with a Coherence cluster, and a WebLogic Server domain typically contains a single Coherence cluster. Managed Coherence servers are typically set up in tiers that are based on their type: a data tier for storing data, an application tier for hosting applications, and a proxy tier that allows external clients to access caches.
Figure 12-1 shows a conceptual view of a Coherence cluster in a WebLogic Server domain:
Figure 12-1 Conceptual View of a Coherence Domain Topology
A WebLogic Server domain typically contains a single Coherence cluster. The cluster is represented as a single system-level resource (CoherenceClusterSystemResource). A CoherenceClusterSystemResource instance is created using the WebLogic Server Administration Console or WLST.
A Coherence cluster can contain any number of managed Coherence servers. The servers can be standalone managed servers or can be part of a WebLogic Server cluster that is associated with a Coherence cluster. Typically, multiple WebLogic Server clusters are associated with a Coherence cluster. For details on creating WebLogic Server clusters for use by Coherence, see Creating Coherence Deployment Tiers.
Note:
Cloning a managed Coherence server does not clone its association with a Coherence cluster. The managed server will not be a member of the Coherence cluster. You must manually associate the cloned managed server with the Coherence cluster.
To define a Coherence cluster resource:
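In offline WLST, for example, this amounts to a single create() call; a sketch reusing the placeholder names from the examples at the end of this chapter:
readDomain('/ORACLE_HOME/user_projects/domains/base_domain')
# the system-level resource that represents the Coherence cluster
create('myCoherenceCluster', 'CoherenceClusterSystemResource')
updateDomain()
closeDomain()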
Managed Coherence servers are managed server instances that are associated with a Coherence cluster. Managed Coherence servers join together to form a Coherence cluster and are often referred to as cluster members. Cluster members have seniority and the senior member performs cluster tasks (for example, issuing the cluster heartbeat).
Note:
Managed Coherence servers and standalone Coherence cluster members (those that are not managed within a WebLogic Server domain) can join the same cluster. However, standalone cluster members cannot be managed from within a WebLogic Server domain; operational configuration and application lifecycles must be manually administered and monitored.
The Administration Server is typically not used as a managed Coherence server in a production environment.
Managed Coherence servers are distinguished by their role in the cluster. A best practice is to use different managed server instances (and preferably different WebLogic Server clusters) for each cluster role.
storage-enabled – a managed Coherence server that is responsible for storing data in the cluster. Coherence applications are packaged as Grid ARchives (GAR) and deployed on storage-enabled managed Coherence servers.
storage-disabled – a managed Coherence server that is not responsible for storing data and is used to host Coherence applications (cache clients). A Coherence application GAR is packaged within an EAR and deployed on storage-disabled managed Coherence servers.
proxy – a managed Coherence server that is storage-disabled and allows external clients (non-cluster members) to use a cache. A Coherence application GAR is deployed on managed Coherence proxy servers.
To create managed Coherence servers:
Coherence supports different topologies within a WebLogic Server domain to provide varying levels of performance, scalability, and ease of use. For example, during development, a single standalone managed server instance may be used as both a cache server and a cache client. The single-server topology is easy to set up and use, but does not provide optimal performance or scalability. For production, Coherence is typically set up using WebLogic Server clusters: a WebLogic Server cluster is used as a Coherence data tier and hosts one or more cache servers; a different WebLogic Server cluster is used as a Coherence application tier and hosts one or more cache clients; and (if required) different WebLogic Server clusters are used for the Coherence proxy tier that hosts one or more managed Coherence proxy servers and the Coherence extend client tier that hosts extend clients. The tiered topology approach provides optimal scalability and performance.
The instructions in this section use both the Clusters Settings page and Servers Settings page in the WebLogic Server Administration Console to create Coherence deployment tiers. WebLogic Server clusters and managed server instances can be associated with a Coherence cluster resource using the ClusterMBean and ServerMBean MBeans, respectively. Managed servers that are associated with a WebLogic Server cluster inherit the cluster's Coherence settings. However, the settings may not be reflected in the Servers Settings page.
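As a concrete sketch in offline WLST (DataTier and coh_server1 are placeholder names), the association is a single attribute on either MBean:
# associate a WebLogic Server cluster with the Coherence cluster
cd('/')
cd('Cluster/DataTier')
set('CoherenceClusterSystemResource', 'myCoherenceCluster')
# or associate an individual managed server
cd('/')
cd('Server/coh_server1')
set('CoherenceClusterSystemResource', 'myCoherenceCluster')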
A Coherence Data tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of storage-enabled managed Coherence servers. Managed Coherence servers in the data tier store and distribute data (both primary and backup) on the cluster. The number of managed Coherence servers that are required in a data tier depends on the expected amount of data that is stored in the Coherence cluster and the amount of memory available on each server. In addition, a cluster must contain a minimum of four physical computers to avoid the possibility of data loss during a computer failure.
Coherence artifacts (such as Coherence configuration files, POF serialization classes, filters, entry processors, and aggregators) are packaged as a GAR and deployed on the data tier. For details on packaging and deploying Coherence applications, see Developing Oracle Coherence Applications for Oracle WebLogic Server. For details on calculating cache size and hardware requirements, see the production checklist in Administering Oracle Coherence.
To create a Coherence data tier:
To create managed servers for a Coherence data tier:
A Coherence Application tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of storage-disabled managed Coherence servers. Managed Coherence servers in the application tier host applications (cache factory clients) and are Coherence cluster members. Multiple application tiers can be created for different applications.
Clients in the application tier are deployed as EARs and implemented using Java EE standards such as servlet, JSP, and EJB. Coherence artifacts (such as Coherence configuration files, POF serialization classes, filters, entry processors, and aggregators) must be packaged as a GAR and also deployed within an EAR. For details on packaging and deploying Coherence applications, see Developing Oracle Coherence Applications for Oracle WebLogic Server.
To create a Coherence application tier:
To create managed servers for a Coherence application tier:
A Coherence proxy tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of managed Coherence proxy servers. Managed Coherence proxy servers allow Coherence*Extend clients to use Coherence caches without being cluster members. The number of managed Coherence proxy servers that are required in a proxy tier depends on the number of expected clients. At least two proxy servers must be created to allow for load balancing; however, additional servers may be required when supporting a large number of client connections and requests.
For details on Coherence*Extend and creating extend clients, see Developing Remote Clients for Oracle Coherence.
To create a Coherence proxy tier:
To create managed servers for a Coherence proxy tier:
Coherence proxy services are clustered services that manage remote connections from extend clients. Proxy services are defined and configured in a coherence-cache-config.xml file within the <proxy-scheme> element. The definition includes, among other settings, the TCP listener address (IP, or DNS name, and port) that is used to accept client connections. For details on the <proxy-scheme> element, see Developing Applications with Oracle Coherence.
There are two ways to set up proxy services: using a name service and using an address provider. The name service provides an efficient setup and is typically preferred in a Coherence proxy tier.
A name service is a specialized listener that allows extend clients to connect to a proxy service by name. Clients connect to the name service, which returns the addresses of all proxy services on the cluster.
Note:
If a domain includes multiple tiers (for example, a data tier, an application tier, and a proxy tier), then the proxy tier should be started first, before a client can connect to the proxy.
A name service automatically starts on port 7574 (the same default port that the TCMP socket uses) when a proxy service is configured on a managed Coherence proxy server. The reuse of the same port minimizes the number of ports that are used by Coherence and simplifies firewall configuration.
To configure a proxy service and enable the name service on the default TCMP port:
1. Edit the coherence-cache-config.xml file and create a <proxy-scheme> definition, and do not explicitly define a socket address. The following example defines a proxy service that is named TcpExtend and automatically enables a cluster name service. A proxy address and ephemeral port is automatically assigned and registered with the cluster's name service.
...
<caching-schemes>
   ...
   <proxy-scheme>
      <service-name>TcpExtend</service-name>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...
2. Deploy the coherence-cache-config.xml file to each managed Coherence proxy server in the Coherence proxy tier. Typically, the coherence-cache-config.xml file is included in a GAR file. However, for the proxy tier, use a cluster cache configuration file to override the coherence-cache-config.xml file that is located in the GAR. This allows a single GAR to be deployed to the cluster and the proxy tier. For details on using a cluster cache configuration file, see Overriding a Cache Configuration File.
To connect to a name service, a client's coherence-cache-config.xml file must include a <name-service-addresses> element, within the <tcp-initiator> element, of a remote cache or remote invocation definition. The <name-service-addresses> element provides the socket address of a name service that is on a managed Coherence proxy server. The following example defines a remote cache definition and specifies a name service listening at host 192.168.1.5 on port 7574. The client automatically connects to the name service and gets a list of all managed Coherence proxy servers that contain a TcpExtend proxy service. The cache on the cluster must also be called TcpExtend. In this example, a single address is provided; additional <socket-address> elements can be listed for fault tolerance.
<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>TcpExtend</service-name>
   <initiator-config>
      <tcp-initiator>
         <name-service-addresses>
            <socket-address>
               <address>192.168.1.5</address>
               <port>7574</port>
            </socket-address>
         </name-service-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>
The name service listens on the cluster port (7574) by default and is available on all machines running Coherence cluster nodes. If the target cluster uses the default TCMP cluster port, then the port can be omitted from the configuration.
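In that case, the client-side snippet shrinks to just the address; a sketch assuming the default cluster port:
<name-service-addresses>
   <socket-address>
      <address>192.168.1.5</address>
   </socket-address>
</name-service-addresses>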
Note:
The <service-name> value must match the proxy scheme's <service-name> value; otherwise, a <proxy-service-name> element must also be provided in a remote cache and remote invocation scheme that contains the value of the <service-name> element that is configured in the proxy scheme.
In previous Coherence releases, the name service automatically listened on a member's unicast port instead of the cluster port.
An address provider can also be used to specify name service addresses.
An address provider specifies the TCP listener address (IP, or DNS name, and port) for a proxy service. The listener address can be explicitly defined within a <proxy-scheme> element in a coherence-cache-config.xml file; however, the preferred approach is to define address providers in a cluster configuration file and then reference the addresses from within a <proxy-scheme> element. The latter approach decouples deployment configuration from application configuration and allows network addresses to change without having to update a coherence-cache-config.xml file.
To use an address provider:
1. Use the Address Providers tab on the Coherence Cluster Settings page to create an address provider definition. The CoherenceAddressProvidersBean MBean also exposes the address provider definition. An address provider contains a unique name in addition to the listener address for a proxy service. For example, an address provider called proxy1 might specify host 192.168.1.5 and port 9099 as the listener address.
2. Edit the coherence-cache-config.xml file and create a <proxy-scheme> definition that references an address provider definition, by name, in an <address-provider> element. The following example defines a proxy service that references an address provider that is named proxy1:
...
<caching-schemes>
   <proxy-scheme>
      <service-name>TcpExtend</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <address-provider>proxy1</address-provider>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...
3. Deploy each coherence-cache-config.xml file to its respective managed Coherence proxy server. Typically, the coherence-cache-config.xml file is included in a GAR file. However, for the proxy tier, use a cluster cache configuration file. The cluster cache configuration file overrides the coherence-cache-config.xml file that is located in the GAR. This allows the same GAR to be deployed to all cluster members, but then use unique settings that are specific to a proxy tier. For details on using a cluster cache configuration file, see Overriding a Cache Configuration File.
To connect to a proxy service, a client's coherence-cache-config.xml file must include a <remote-addresses> element, within the <tcp-initiator> element of a remote cache or remote invocation definition, that includes the address provider name. For example:
<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>TcpExtend</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <address-provider>proxy1</address-provider>
         </remote-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>
Clients can also explicitly specify remote addresses. The following example defines a remote cache definition and specifies a proxy service on host 192.168.1.5 and port 9099. The client automatically connects to the proxy service and uses a cache on the cluster named TcpExtend. In this example, a single address is provided; additional <socket-address> elements can be listed for fault tolerance.
<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>TcpExtend</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <socket-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </socket-address>
         </remote-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>
A Coherence cluster resource exposes several cluster settings that can be configured for a specific domain. Use the following tasks to configure cluster settings:
Adding and Removing Coherence Cluster Members
Setting Advanced Cluster Configuration Options
Configure Cluster Communication
Overriding a Cache Configuration File
Configuring Coherence Logging
Configuring Cache Persistence
Configuring Cache Federation
Many of the settings use default values that can be changed as required. The following instructions assume that a cluster resource has already been created. For details on creating a cluster resource, see Setting Up a Coherence Cluster. This section does not include instructions for securing Coherence. For security details, see Securing Oracle Coherence.
Use the Coherence tab on the Coherence Cluster Settings page to configure cluster communication. The CoherenceClusterSystemResource MBean and its associated CoherenceClusterResource MBean expose cluster settings. The CoherenceClusterResource MBean provides access to multiple MBeans for configuring a Coherence cluster.
Note:
WLS configuration takes precedence over Coherence system properties. Coherence configuration in WLS should, in general, be changed using WLST or a Coherence cluster configuration file instead of using system properties.
Any existing managed server instance can be added to a Coherence cluster. In addition, managed Coherence servers can be removed from a cluster. Adding and removing cluster members is available when configuring a Coherence Cluster and is a shortcut that is used instead of explicitly configuring each instance. However, when adding existing managed server instances, default Coherence settings may need to be changed. For details on configuring managed Coherence servers, see Configuring Managed Coherence Servers.
Use the Member tab on the Coherence Cluster Settings page to select which managed servers or WebLogic Server clusters are associated with a Coherence cluster. When selecting a WebLogic Server cluster, it is recommended that all the managed servers in the WebLogic Server cluster be associated with a Coherence cluster. A CoherenceClusterSystemResource exposes all managed Coherence servers as targets. A CoherenceMemberConfig MBean is created for each managed server and exposes the Coherence cluster member parameters.
WebLogic Server MBeans expose a subset of Coherence operational settings that are sufficient for most use cases and are detailed throughout this chapter. These settings are available natively through the WLST utility and the WebLogic Server Administration Console. For more advanced use cases, use an external Coherence cluster configuration file (tangosol-coherence-override.xml), which provides full control over Coherence operational settings.
Note:
The use of an external cluster configuration file is only recommended for operational settings that are not available through the provided MBeans. That is, avoid configuring the same operational settings in both an external cluster configuration file and through the MBeans.
Use the General tab on the Coherence Cluster Settings page to enter the path and name of a cluster configuration file that is located on the administration server, or use the CoherenceClusterSystemResource MBean. For details on using a Coherence cluster configuration file, see Developing Applications with Oracle Coherence, which also provides usage instructions for each element and a detailed schema reference.
Checking Which Operational Configuration is Used
Coherence generates an operational configuration from WebLogic Server MBeans, a Coherence cluster configuration file (if imported), and Coherence system properties (if set). The result is written to the managed Coherence server log if the system property weblogic.debug.DebugCoherence=true is set. If you use the WebLogic start-up scripts, you can use the JAVA_PROPERTIES environment variable. For example:
export JAVA_PROPERTIES=-Dweblogic.debug.DebugCoherence=true
Cluster members communicate using the Tangosol Cluster Management Protocol (TCMP). The protocol operates independently of the WLS cluster protocol. TCMP is an IP-based protocol for discovering cluster members, managing the cluster, provisioning services, and transmitting data. TCMP can be transmitted over different transport protocols and can use both multicast and unicast. By default, TCMP is transmitted over UDP and uses unicast. The use of different transport protocols and multicast requires support from the underlying network.
Use the General tab on the Coherence Cluster Settings page to configure cluster communication. The CoherenceClusterParamsBean and CoherenceClusterWellKnownAddressesBean MBeans expose the cluster communication parameters.
Coherence clusters support both unicast and multicast communication. Multicast must be explicitly configured and is not the default option. The use of multicast should be avoided in environments that do not properly support or allow multicast. The use of unicast disables all multicast transmission and automatically uses the Coherence Well Known Addresses (WKA) feature to discover and communicate between cluster members. See Specifying Well Known Address Members.
For details on using multicast, unicast, and WKA in Coherence, see Developing Applications with Oracle Coherence.
Selecting Unicast For the Coherence Cluster Mode
To use unicast for cluster communication, select Unicast from the Clustering Mode drop-down list and enter a cluster port or keep the default port, which is 7574. For most clusters, the port does not need to be changed. However, changing the port is required when multiple Coherence clusters run on the same computer. If a different port is required, then the recommended best practice is to select a value between 1024 and 8999.
Specifying Well Known Address Members
When unicast is enabled, use the Well Known Addresses tab to explicitly configure WKA machine addresses. If no addresses are defined for a cluster, then addresses are automatically assigned. The recommended best practice is to always explicitly specify WKA machine addresses when using unicast.
In addition, if a domain contains multiple managed Coherence server that are located on different machines, then at least one non-local WKA machine address must be defined to ensure a Coherence cluster is formed; otherwise, multiple individual clusters are formed on each machine. If the managed Coherence servers are all running on the same machine, then a cluster can be created without specifying a non-local listen address.
Note:
WKA machine addresses must be explicitly defined in production environments. In production mode, a managed Coherence server fails to start if WKA machines addresses have not been explicitly defined. Automatically assigned WKA machine addresses is a design time convenience and should only be used during development on a single server.
Selecting Multicast For the Coherence Cluster Mode
To use multicast for cluster communication, select Multicast from the Clustering Mode drop-down list and enter a cluster port and multicast listen address. For most clusters, the default cluster port (7574) does not need to be changed. However, changing the port is required when multiple Coherence clusters run on the same computer or when multiple clusters use the same multicast address. If a different port is required, then the recommended best practice is to select a value between 1024 and 8999.
Use the Time To Live field to designate how far multicast packets can travel on a network. The time-to-live value (TTL) is expressed in terms of how many hops a packet survives; each network interface, router, and managed switch is considered one hop. The TTL value should be set to the lowest integer value that works.
The following transport protocols are supported for TCMP and are selected using the Transport drop-down list. The CoherenceClusterParamsBean MBean exposes the transport protocol setting.
User Datagram Protocol (UDP) – UDP is the default TCMP transport protocol and is used for both multicast and unicast communication. If multicast is disabled, all communication is done using UDP unicast.
Transmission Control Protocol (TCP) – The TCP transport protocol is used in network environments that favor TCP communication. All TCMP communication uses TCP if unicast is enabled. If multicast is enabled, TCP is only used for unicast communication and UDP is used for multicast communication.
Secure Sockets Layer (SSL) – The SSL/TCP transport protocol is used in network environments that require highly secure communication between cluster members. SSL is only supported with unicast communication; ensure multicast is disabled when using SSL. The use of SSL requires additional configuration. For details on securing Coherence within WebLogic Server, see Securing Oracle Coherence.
TCP Message Bus (TMB) – The TMB protocol provides support for TCP/IP.
TMB with SSL (TMBS) – TMBS requires the use of an SSL socket provider. See Developing Applications with Oracle Coherence.
Sockets Direct Protocol Message Bus (SDMB) – The Sockets Direct Protocol (SDP) provides support for stream connections. SDMB is only valid on Exalogic.
SDMB with SSL (SDMBS) – SDMBS is only available for Oracle Exalogic systems and requires the use of an SSL socket provider. See Developing Applications with Oracle Coherence.
Infiniband Message Bus (IMB) – IMB uses an optimized protocol based on native InfiniBand verbs. IMB is only valid on Exalogic.
Lightweight Message Bus (LWMB) – LWMB uses MSGQLT/LWIPC libraries with IMB for Infinibus communications. LWMB is only available for Oracle Exalogic systems and is the default transport for both service and unicast communication. LWMB is automatically used as long as TCMP has not been configured with SSL.
A Coherence cache configuration file defines the caches that are used by an application. Typically, the cache configuration file is packaged in a GAR module; it can then be overridden at runtime with a cluster-level cache configuration file. For details on cache configuration files and GAR modules, see Developing Oracle Coherence Applications for Oracle WebLogic Server.
The following example defines an override property named cache-config/ExamplesGar that can be used to override the META-INF/example-cache-config.xml cache configuration file in the GAR:
...
<cache-configuration-ref override-property="cache-config/ExamplesGar">META-INF/example-cache-config.xml</cache-configuration-ref>
...
At runtime, use the Cache Configurations tab on the Coherence Cluster Settings page to override a cache configuration file. You must supply the same JNDI name that is defined in the override-property attribute. The cache configuration can be located on the administration server or at a URL. In addition, you can choose to import the file to the domain or use it from the specified location. Use the Targets tab to specify which Oracle Coherence cluster members use the cache configuration file.
The following WLST (online) example demonstrates how a cluster cache configuration can be overridden using a CoherenceClusterSystemResource object.
edit()
startEdit()
cd('CoherenceClusterSystemResources/myCoherenceCluster/CoherenceCacheConfigs')
create('ExamplesGar', 'CoherenceCacheConfig')
cd('ExamplesGar')
set('JNDIName', 'ExamplesGar')
cmo.importCacheConfigurationFile('/tmp/cache-config.xml')
cmo.addTarget(getMBean('/Servers/coh_server'))
save()
activate()
The WLST example creates a CoherenceCacheConfig resource as a child. The script then imports the cache configuration file to the domain and specifies the JNDI name to which the resource binds. The file must be found at the path provided. Lastly, the cache configuration is targeted to a specific server. The ability to target a cache configuration resource to certain servers or WebLogic Server clusters allows the application to load different configuration based on the context of the server (cache servers, cache clients, proxy servers, and so on).
The cache configuration resource can also be configured as a URL:
edit()
startEdit()
cd('CoherenceClusterSystemResources/myCoherenceCluster/CoherenceCacheConfigs')
create('ExamplesGar', 'CoherenceCacheConfig')
cd('ExamplesGar')
set('JNDIName', 'ExamplesGar')
set('CacheConfigurationFile', '')
cmo.addTarget(getMBean('/Servers/coh_server'))
save()
activate()
Configure cluster logging using the WebLogic Server Administration Console's Logging tab that is located on the Coherence Cluster Settings page, or use the CoherenceLoggingParamsBean MBean. For details on WebLogic Server logging, see Configuring Log Files and Filtering Log Messages for Oracle WebLogic Server. Coherence logging configuration includes:
Disabling and enabling logging
Changing the default logger name
WebLogic Server provides two loggers that can be used for Coherence logging: the default com.oracle.coherence logger and the com.oracle.wls logger. The com.oracle.wls logger is generic and uses the same handler that is configured for WebLogic Server log output. The logger does not allow for Coherence-specific configuration. The com.oracle.coherence logger allows Coherence-specific configuration, which includes the use of different handlers for Coherence logs.
Note:
If logging is configured through a standard logging.properties file, then make sure the file uses the same logger name that is currently configured for Coherence logging.
Changing the log message format
Add or remove information from a log message. A log message can include static text as well as parameters that are replaced at run time (for example, {date}). For details on supported log message parameters, see Developing Applications with Oracle Coherence.
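For illustration only, a message format along these lines mixes static text with run-time parameters (the exact default format may differ by release):
{date}/{uptime} {product} {version} <{level}> (thread={thread}, member={member}): {text}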
Coherence persistence manages the persistence and recovery of Coherence distributed caches. Cached data is persisted so that it can be quickly recovered after a catastrophic failure or after a cluster restart due to planned maintenance. For complete details about Coherence cache persistence, see Persisting Caches.
Use the Persistence tab on the Coherence Cluster Settings page to enable active persistence and to override the default location where persistence files are stored. The CoherencePersistenceParamsBean MBean exposes the persistence parameters. Managed Coherence servers must be restarted for persistence changes to take effect.
On-demand persistence allows a cache service to be manually persisted and recovered upon request (a snapshot) using the persistence coordinator. The persistence coordinator is exposed as an MBean interface (PersistenceCoordinatorMBean) that provides operations for creating, archiving, and recovering snapshots of a cache service. To use the MBean, JMX must be enabled on the cluster. For details about enabling JMX management and accessing Coherence MBeans, see Using JMX to Manage Oracle Coherence. Active persistence automatically persists cache contents on all mutations and automatically recovers the contents on cluster/service startup. The persistence coordinator can still be used in active persistence mode to perform on-demand snapshots.
The federated caching feature federates cache data asynchronously across multiple geographically dispersed clusters. Cached data is federated across clusters to provide redundancy, off-site backup, and multiple points of access for application users in different geographical locations. For complete details about Coherence Federation, see Federating Caches Across Clusters.
Use the Federation tab on the Coherence Cluster Settings page to enable a federation topology and to configure a remote cluster participant to which caches are federated. When selecting a topology, a topology configuration is automatically created and named Default-Topology. Federation must be configured on both the local cluster participant and the remote cluster participant. At least one host on the remote cluster must be provided. If a custom port is being used on the remote cluster participant, then change the cluster port accordingly. Managed Coherence servers must be restarted for federation changes to take effect. The CoherenceFederationParamsBean MBean also exposes the cluster federation parameters and can be used to configure cache federation.
Note:
The Default-Topology topology configuration is created and used if no federation topology is specified in the cache configuration file.
When using federation, matching topologies must be configured on both the local and remote clusters. For example, selecting none for the topology in a local cluster and active-active as the topology in the remote cluster can lead to unpredictable behavior. Similarly, if a local cluster is set to use active-passive, then the remote cluster must be set to use passive-active.
Managed Coherence servers expose several cluster member settings that can be configured for a specific domain. Use the following tasks to configure a managed Coherence server:
Configure Coherence Cluster Member Storage Settings
Configure Coherence Cluster Member Unicast Settings
Removing a Coherence Management Proxy
Configure Coherence Cluster Member Identity Settings
Configure Coherence Cluster Member Logging Levels
Many of the settings use default values that can be changed as required. The instructions in this section assume that a managed server has already been created and associated with a Coherence cluster. For details on creating managed Coherence servers, see Create Standalone Managed Coherence Servers.
Use the Coherence tab on a managed server's Settings page to configure Coherence cluster member settings. A CoherenceMemberConfig MBean is created for each managed server and exposes the Coherence cluster member parameters.
Note:
WLS configuration takes precedence over Coherence system properties. Coherence configuration in WLS should, in general, be changed using WLST or a Coherence cluster configuration file instead of using system properties.
The storage settings for managed Coherence servers can be configured as required. Enabling storage on a server means the server is responsible for storing a portion of both primary and backup data for the Coherence cluster. Servers that are intended to store data must be configured as storage-enabled servers. Servers that host cache applications and cluster proxy servers should be configured as storage-disabled servers and are typically not responsible for storing data, because sharing resources can become problematic and affect application and cluster performance.
Note:
If a managed Coherence server is part of a WebLogic Server cluster, then the Coherence storage settings that are specified on the WebLogic Server cluster override the storage settings on the server. The storage setting is an exception to the general rule that server settings override WebLogic Server cluster settings. Moreover, the final runtime configuration is not reflected in the console. Therefore, a managed Coherence server may show that storage is disabled even though storage has been enabled through the Coherence tab for a WebLogic Server cluster. Always check the WebLogic Server cluster settings to determine whether storage has been enabled for a managed Coherence server.
Use the following fields on the Coherence tab to configure storage settings:
Local Storage Enabled – This field specifies whether a managed Coherence server stores data. If this option is not selected, then the managed Coherence server does not store data and is considered a cluster client.
Coherence Web Local Storage Enabled – This field specifies whether a managed Coherence server stores HTTP session data. For details on using Coherence to store session data, see Administering HTTP Session Management with Oracle Coherence*Web.
Managed Coherence servers communicate with each other using unicast (point-to-point) communication. Unicast is used even if the cluster is configured to use multicast communication. For details on unicast in Coherence, see Developing Applications with Oracle Coherence.
Use the following fields on the Coherence tab to configure unicast settings:
Unicast Listen Address – This field specifies the address on which the server listens for unicast communication. If no address is provided, then a routable IP address is automatically selected. The address field also supports Classless Inter-Domain Routing (CIDR) notation, which uses a subnet and mask pattern for a local IP address to bind to instead of specifying an exact IP address.
Unicast Listen Port – This field specifies the ports on which the server listens for unicast communication. A cluster member uses two unicast UDP ports, which are automatically assigned from the operating system's available ephemeral port range (as indicated by a value of 0). The default value ensures that Coherence cannot accidentally cause port conflicts with other applications. However, if a firewall is required between cluster members (an atypical configuration), then a port can be manually assigned and a second port is automatically selected (port1 + 1).
Unicast Port Auto Adjust – This field specifies whether the port automatically increments if the port is already in use.
A Coherence cluster can be managed from any JMX-compatible client such as JConsole or Java VisualVM. The management information includes runtime statistics and operational settings. The management information is specific to the Coherence management domain and is different than the management information that is provided for Coherence as part of the com.bea management domain. For a detailed reference of Coherence MBeans, see Managing Oracle Coherence.
One cluster member is automatically selected as a management proxy and is responsible for aggregating the management information from all other cluster members. The Administration Server for the WebLogic domain then integrates the management information and makes it available through the domain runtime MBean server. If the cluster member is not operational, then another cluster member is automatically selected as the management proxy.
Use the Coherence Management Node field on the Coherence tab of a managed Coherence server to specify whether a cluster member can be selected as a management proxy. By default, all cluster members can be selected as the management proxy. Therefore, deselect the option only if you want to remove a cluster member from being selected as a management proxy.
At runtime, use a JMX client to connect to the domain runtime MBean server where the Coherence management information is located within the Coherence management namespace. For details about connecting to the domain runtime MBean server, see Developing Custom Management Utilities Using JMX for Oracle WebLogic Server.
A set of identifiers are used to give a managed Coherence server an identity within the cluster. The identity information is used to differentiate servers and conveys the servers' role within the cluster. Some identifiers are also used by the cluster service when performing cluster tasks. Lastly, the identity information is valuable when displaying management information (for example, JMX) and facilitates interpreting log entries.
Use the following fields on the Coherence tab to configure member identity settings:
Site Name – This field specifies the name of the geographic site that hosts the managed Coherence server.
Rack Name – This field specifies the name of the location within a geographic site that hosts the managed Coherence server (for example, a rack).
Role Name – This field specifies the managed Coherence server's role in the cluster. The role name allows an application to organize cluster members into specialized roles, such as storage-enabled or storage-disabled.
If a managed Coherence server is part of a WebLogic Server cluster, the cluster name is automatically used as the role name and this field cannot be set. If no name is provided, the default role name that is used is WebLogicServer.
Logging levels can be configured for each managed Coherence server. The default log level is D5 and can be changed using the server's Logging tab. For details on WebLogic Server logging, see Configuring Log Files and Filtering Log Messages for Oracle WebLogic Server.
To configure a managed Coherence server's logging level, use the server's Logging tab and click Advanced.
A single-server cluster is a cluster that is constrained to run on a single managed server instance and does not access the network. The server instance acts as a storage-enabled cluster member, a client, and a proxy. A single-server cluster is easy to set up and offers a quick way to start and stop a cluster. A single-server cluster is used during development and should not be used for production or testing environments.
To create a single-server cluster:
Define a Coherence Cluster Resource – Create a Coherence cluster and select a managed server instance to be a member of the cluster. The administration server instance can be used to facilitate setup.
Configure Cluster Communication – Configure the cluster and set the Time To Live value to 0 if using multicast communication.
Configure Coherence Cluster Member Unicast Settings – Configure the managed server instance and set the unicast address to an address that is routed to loopback. On most computers, setting the address to 127.0.0.1 works.
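In offline WLST, that loopback setting might look like the following sketch (AdminServer is a placeholder for whichever instance you selected):
cd('/')
cd('Server/AdminServer')
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
# bind unicast traffic to loopback so the cluster never leaves the machine
set('UnicastListenAddress', '127.0.0.1')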
The WebLogic Scripting Tool (WLST) is a command-line interface that you can use to automate domain configuration tasks, including configuring and managing Coherence clusters. For more information on WLST, see Understanding the WebLogic Scripting Tool.
Setting Up Coherence with WLST (Offline)
WLST can be used to set up Coherence clusters. The following examples demonstrate using WLST in offline mode to create and configure a Coherence cluster. It is assumed that a domain has already been created and that the examples are completed in the order in which they are presented. In addition, the examples only create a data tier. Additional tiers can be created as required. Lastly, the examples are not intended to demonstrate every Coherence MBean. For a complete list of Coherence MBeans, see MBean Reference for Oracle WebLogic Server.
readDomain('/ORACLE_HOME/user_projects/domains/base_domain')
Create a Coherence Cluster
create('myCoherenceCluster', 'CoherenceClusterSystemResource')
Create a Tier of Managed Coherence Servers
create('coh_server1', 'Server')
cd('Server/coh_server1')
set('ListenPort', 7005)
set('ListenAddress', '192.168.0.100')
set('CoherenceClusterSystemResource', 'myCoherenceCluster')
cd('/')
create('coh_server2', 'Server')
cd('Server/coh_server2')
set('ListenPort', 7010)
set('ListenAddress', '192.168.0.101')
set('CoherenceClusterSystemResource', 'myCoherenceCluster')
cd('/')
create('DataTier', 'Cluster')
assign('Server', 'coh_server1,coh_server2', 'Cluster', 'DataTier')
cd('Cluster/DataTier')
set('MulticastAddress', '237.0.0.101')
set('MulticastPort', 8050)
cd('/CoherenceClusterSystemResource/myCoherenceCluster')
set('Target', 'DataTier')
Configure Coherence Cluster Parameters
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('ClusteringMode', 'unicast')
set('SecurityFrameworkEnabled', 'false')
set('ClusterListenPort', 7574)
Configure Well Known Addresses
create('wka_config', 'CoherenceClusterWellKnownAddresses')
cd('CoherenceClusterWellKnownAddresses/NO_NAME_0')
create('WKA1', 'CoherenceClusterWellKnownAddress')
cd('CoherenceClusterWellKnownAddress/WKA1')
set('ListenAddress', '192.168.0.100')
cd('../..')
create('WKA2', 'CoherenceClusterWellKnownAddress')
cd('CoherenceClusterWellKnownAddress/WKA2')
set('ListenAddress', '192.168.0.101')
Set Logging Properties
cd('/')
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster')
create('log_config', 'CoherenceLoggingParams')
cd('CoherenceLoggingParams/NO_NAME_0')
set('Enabled', 'true')
set('LoggerName', 'com.oracle.coherence')
Configure Managed Coherence Servers
cd('/')
cd('Servers/coh_server1')
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
set('LocalStorageEnabled', 'true')
set('RackName', '100A')
set('RoleName', 'Server')
set('SiteName', 'pa-1')
set('UnicastListenAddress', '192.168.0.100')
set('UnicastListenPort', 0)
set('UnicastPortAutoAdjust', 'true')
cd('/')
cd('Servers/coh_server2')
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
set('LocalStorageEnabled', 'true')
set('RackName', '100A')
set('RoleName', 'Server')
set('SiteName', 'pa-1')
set('UnicastListenAddress', '192.168.0.101')
set('UnicastListenPort', 0)
set('UnicastPortAutoAdjust', 'true')
updateDomain()
closeDomain()
Setting the Cluster Name and Port
readDomain('/ORACLE_HOME/user_projects/domains/base_domain')
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster')
set('Name', 'MyCluster')
cd('/')
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('ClusterListenPort', 9123)
updateDomain()
closeDomain()
WLST includes a set of commands that can be used to persist and recover cached data from disk. The commands are automatically available when connected to an administration server domain runtime MBean server. For more information about Coherence cache persistence, see Administering Oracle Coherence.
Table 12-1 lists WLST commands for persisting Coherence caches. Example 12-1 demonstrates using the commands.
Table 12-1 WLST Coherence Persistence Commands
Example 12-1 demonstrates using the persistence API from WLST to persist the caches for a partitioned cache service.
Example 12-1 WLST Example for Persisting Caches
serviceName = '"ExampleGAR:ExamplesPartitionedPofCache"'
snapshotName = 'new-snapshot'
connect('weblogic', 'password', 't3://machine:7001')
# Must be in domain runtime tree otherwise no MBeans are returned
domainRuntime()
try:
    coh_listSnapshots(serviceName)
    coh_createSnapshot(snapshotName, serviceName)
    coh_listSnapshots(serviceName)
    coh_recoverSnapshot(snapshotName, serviceName)
    coh_archiveSnapshot(snapshotName, serviceName)
    coh_listArchivedSnapshots(serviceName)
    coh_removeSnapshot(snapshotName, serviceName)
    coh_retrieveArchivedSnapshot(snapshotName, serviceName)
    coh_recoverSnapshot(snapshotName, serviceName)
    coh_listSnapshots(serviceName)
except PersistenceException, rce:
    print 'PersistenceException: ' + str(rce)
except Exception, e:
    print 'Unknown Exception' + str(e)
else:
    print 'All operations complete'
And today I'd like to show you yet another use case: visualizing on a world map where your users are, hence the click-bait-y title.
Foreword
But before we start, I have to confess something to you: I don't like web development - at all. I find the html/css/js triplet a pain to work with, one that's only made worse by the various different browser implementations. As a C developer, JavaScript is alien and weird to me; CSS is convoluted and html is, after all, xml, and as such, should probably die in a fire.
However, I have to admit that I see the appeal: you can easily prototype, handle both logic and presentation, rely on billions of online examples and modules/plugins/libraries and of course iterate crazy fast. So yeah, I don't like the technology, but it really lowers the entry barrier for developers, and that's a good thing.
And so, this project has been coded in JavaScript, and it was actually pretty painless, even for a hater like me. I guess I'm becoming more mature as I grow old (up?), or maybe I just had big misconceptions about webdev, who knows?
The plan
The idea is to create a web page showing us a map of the world, painting countries according to the number of requests that came from them (the more requests, the darker the shade). Something like this:
For this, we are going to use:
- Varnish: You are on Varnish Software's blog, after all. Joking aside, Varnish being the entry point of your platform, it will see all requests, and so will have all the information needed.
- Varnish Custom Statistics: VCS will collect all sorts of data about certain requests and categorize them using tags and it can do so for clusters of Varnish, not just individual instances.
- vmod-geoip: Using libgeoip (possibly with a free database) this VMOD can translate IP addresses to country names or ISO "ALPHA-2" codes.
- jqvmap: This is a pretty cool JavaScript map framework, and since I'm a total n00b in JavaScript, this puppy is going to do the heavy lifting for me.
And that's about it, let's set things up!
The VCS side
Hold your breath, don't blink, this is going to be super quick.
VCS actually consists of two software components: the server and the probe. The VCS probe is run on each Varnish server, reads the shmlog and pushes data to the VCS server, which most of the time is on a separate machine.
The server is started with:
vstatd
Yes, the binary is still called "vstatd", the old name of the product, but that's not important. We'll use the default ports, time window sizes and numbers.
And we have to start the probe(s), telling it where the VCS server is (let's say 192.168.0.200):
vstatdprobe 192.168.0.200
And that's it, you can stop holding your breath now. What requests are collected and how is completely driven by VCL, explaining the lack of configuration here.
The Varnish side
It won't actually be much more complicated, first you need to:
- install libgeoip, and maybe a database, such as this one (the Arch Linux package bundles it, so I had no extra work to do).
- download, compile and install vmod-geoip, it's now straightforward in Varnish 4.X if the dev packages are installed.
Then, we just have to add a few lines to our VCL:
import std;
import geoip;

sub vcl_recv {
    std.log("vcs-key: FROM-" + geoip.country_code(client.ip));
}
Done! For each request, we are going to log the client's country code, prefixed with "vcs-key: FROM-", where "vcs-key:" is a marker announcing to VCS that the string should be used to tag the request, and "FROM-" is just a helper string for filtering.
To check that it works, let's run:
varnishlog -i VCL_Log -g raw
"-g raw" removes all grouping, and "-i VCL_Log" filters only VCL_Log lines, in other words, messages coming from std.log(). The result should look a bit like:
1977337 VCL_Log c vcs-key: FROM-US
2002175 VCL_Log c vcs-key: FROM-CN
1977340 VCL_Log c vcs-key: FROM-CN
2002178 VCL_Log c vcs-key: FROM-US
1977343 VCL_Log c vcs-key: FROM-KR
2002181 VCL_Log c vcs-key: FROM-US
1977346 VCL_Log c vcs-key: FROM-CA
2002184 VCL_Log c vcs-key: FROM-US
1977349 VCL_Log c vcs-key: FROM-CA
2002187 VCL_Log c vcs-key: FROM-CN
1977352 VCL_Log c vcs-key: FROM-US
2002190 VCL_Log c vcs-key: FROM-Unknown
1977355 VCL_Log c vcs-key: FROM-Unknown
2002193 VCL_Log c vcs-key: FROM-IR
1977358 VCL_Log c vcs-key: FROM-FR
Which is not too surprising; that's what we asked for. There are a few unknown IPs, but after all, we are using a free, lower quality database, so that's normal.
The VCS API
Data is starting to pour into our VCS server; let's see what's available.
With the endpoint /all/, we can retrieve all the vcs-keys seen and currently in memory:
curl $VCSIP:$VCSPORT/all/
{ "keys": [ "FROM-Unknown", "FROM-A2", "FROM-AD", "FROM-AE" ] }
To get info about one key (i.e., all the requests flagged using this tag), /key/STRING is used:
curl $VCSIP:$VCSPORT/key/FROM-RU
{ "FROM-RU": [ { "timestamp": "2016-08-02T18:06:30", "n_req": 42, "n_req_uniq": "NaN", "n_miss": 42, "avg_restarts": 0.000000, "n_bodybytes": 11928, "reqbytes": 3874, "respbytes": 22050, "berespbytes": 0, "bereqbytes": 0, "ttfb_miss": 0.000166, "ttfb_hit": "NaN", "resp_1xx": 0, "resp_2xx": 0, "resp_3xx": 0, "resp_4xx": 0, "resp_5xx": 42 }, { "timestamp": "2016-08-02T18:06:00", "n_req": 43, "n_req_uniq": "NaN", "n_miss": 43, "avg_restarts": 0.000000, "n_bodybytes": 12212, "reqbytes": 3968, "respbytes": 22575, "berespbytes": 0, "bereqbytes": 0, "ttfb_miss": 0.000180, "ttfb_hit": "NaN", "resp_1xx": 0, "resp_2xx": 0, "resp_3xx": 0, "resp_4xx": 0, "resp_5xx": 43 }, { "timestamp": "2016-08-02T18:05:30", "n_req": 44, "n_req_uniq": "NaN", "n_miss": 44, "avg_restarts": 0.000000, "n_bodybytes": 12496, "reqbytes": 4042, "respbytes": 23100, "berespbytes": 0, "bereqbytes": 0, ...
As you can see, data is aggregated in windows of 30 seconds (look at the timestamps) by default, giving you almost real-time feedback on how your data is consumed. Here we can tell that we get around 40 requests from Russia every 30 seconds, generating 22k of traffic to the clients. And we can also tell that I should fix my backend since all the requests received 5XX responses (truth is, I got lazy and didn't start the backend).
Let's finish on a more complex request, which is actually the one we are going to use:
curl $VCSIP:$VCSPORT/match/FROM-/top/300?b=10
This asks VCS:
- to return only the keys matching "FROM-".
- to return only the 300 most requested keys. We are good anyway since there are fewer countries than that, but it will force VCS to count and show the number of requests in the results, instead of just displaying the keys.
- to use the last ten time windows (the b=10 query parameter) to compute the most requested keys, instead of only using the last one.
The result should look like this:
{ "FROM-US": 14120, "FROM-Unknown": 5778, "FROM-CN": 2962, "FROM-JP": 1817, "FROM-GB": 1100, "FROM-DE": 1060, "FROM-KR": 1023, "FROM-BR": 756, "FROM-FR": 741, "FROM-CA": 698, "FROM-IT": 474, "FROM-NL": 444, "FROM-AU": 437, "FROM-RU": 421, "FROM-IN": 361, "FROM-TW": 299, ...
And this is what we are going to use in our JavaScript, which we are now ready to write.
Enter Mordor
Before we start, let me state that again: this is not my turf, and I did what most new coders do: I stole code, specifically from the jqvmap README, but in my defense, the example given was doing pretty much what I needed.
Some requirements
As said at the beginning, we are going to use jqvmap, meaning we need to include 4 elements in our HTML page:
- jqvmap's css, so our map is all nice and fancy
- jquery, because nothing is pure js anymore and jqvmap heavily uses this framework
- jqvmap's code, that's to be expected
- a world map. jqvmap can plot any map, and has quite a collection, but right now, we are interested in a world map.
HTML code is:
<script type="text/javascript" src=""> <script type="text/javascript" src=""> <script type="text/javascript" src="" charset="utf-8">
Note that for the last two, I just used rawgit so I could avoid hosting the code while still executing it.
And I'll also create an empty div for jqvmap to populate:
<div id="vmap" style="width: 100%; height: 90%;"></div>
Show us the code!
Ok, everything is in place. Now we just have to create the map, and update it every 20 seconds, here's the map creation that will happen once the page is loaded:
var g_reqs = {};

function mapUpdate() {
    $.getJSON("?", parseAndShow);
};

function labelShow(event, label, code) {
    label.text(g_reqs[code] + " requests originated from " + JQVMap.maps['world_en'].paths[code].name)
}

function regionClick(event, label, code) {
    event.preventDefault();
}

jQuery(document).ready(function() {
    jQuery('#vmap').vectorMap({
        map: 'world_en',
        hoverColor: '#005aff',
        scaleColors: ['#d8f8ff', '#005ace'],
        onRegionClick: regionClick,
        onLabelShow: labelShow,
        normalizeFunction: 'polynomial'
    });
    mapUpdate();
});
Some explanation about the vectorMap() arguments:
- map: what map should be used, we only loaded one here, so there's not much suspense.
- hoverColor: the default color for highlighted zone is green, but the Varnish color is blue, so I needed to adapt it.
- scaleColors and normalizeFunction: we are going to give per-country values to jqvmap, and these two parameters direct how they will be translated into colors, scaleColors being the lower/upper bounds, and normalizeFunction how colors are going to be spread in the interval.
- onRegionClick: give a callback to run when a country is clicked, and that callback (regionClick) actually prevents the default behavior.
- onLabelShow: there's a label under the map, and we can use a callback to put whatever text we want in it.
But vectorMap() only creates a blank map, so we need to color it. Thankfully, jqvmap will do most of the work for us, and we only have to give it a dictionary looking like:
{ "us": 34, "ru": 54, "fr": 23, ...}
i.e., using the lowercase country code as keys and the number of requests as values, coloring will happen automagically using scaleColors and normalizeFunction.
This happens in two steps:
- grab the data; this is done in mapUpdate
- adapt the data from VCS to fit jqvmap, that means converting keys from "FROM-XX" to "xx", and making sure all countries are represented:
function parseAndShow(data) {
    var countries = [
        /* start of the ISO code list lost in extraction */
        'gw', 'gy', 'ht', 'hm', 'va', 'hn', 'hk', 'hu', 'is', 'in',
        'id', 'ir', 'iq', 'ie', 'im', 'il', 'it', 'jm', 'jp', 'je',
        'jo', 'kz', 'ke', 'ki', 'k', /* list truncated here */
        'om', 'pk', 'pw', 'ps', 'pa', 'pg', 'py', 'pe', 'ph', 'pn',
        'pl', 'pt', 'pr', 'qa', 're', 'ro', 'ru', 'rw', 'bl', 'sh',
        'kn', 'lc', 'mf', 'pm', 'vc', 'ws', 'sm', 'st', 'sa', 'sn',
        'rs', 'sc', 'sl', /* list truncated here */
        've', 'vn', 'vg', 'vi', 'wf', 'eh', 'ye', 'zm', 'zw', 'kp'
    ];
    var reqs = {};
    $.each(countries, function(idx, key) {
        var ckey = "FROM-" + key.toUpperCase();
        if (data[ckey]) {
            reqs[key] = data[ckey];
        } else {
            reqs[key] = 0;
        }
    });
    g_reqs = reqs;
    jQuery('#vmap').vectorMap('set', 'values', reqs);
    setTimeout(mapUpdate, 20000);
}
At the end, I set a timer to rerun mapUpdate 20 seconds later, and I updated g_reqs so that labelShow can use it when I hover over a country.
And we are done! The maps in the page are static to avoid running VCS ad vitam aeternam just for a blog post, but if you wish to see the full "actual" code, it's here.
From here to there
Of course, you can make the maps even sexier by adding tooltips, fancier colors and cool effects, but as you can guess, I leave this as an exercise to you, the reader.
BUT, there's one last thing I wanted to show you before we part ways, a quick change for a deep addition. Let's say we add one line to our VCL:
import std;
import geoip;

sub vcl_recv {
    std.log("vcs-key: FROM-" + geoip.country_code(client.ip));
    std.log("vcs-key: TO-" + geoip.country_code(server.ip));
}
We are now recording not only the client's country but also the destination's. True, you need more than one point of presence for this to be interesting, but the point is that with very few changes to your JavaScript, you can go from the original map showing the clients' activity to this one, showing the servers' activity:
Where you can see at a glance that your Japanese servers are getting more than their share of requests.
Conclusion
VCS is a generic tool, offering great versatility and super easy integration, notably with JavaScript, where, as we have seen here, HTTP and JSON handling come essentially for free. But this is only one very specific example, made to kickstart your creativity and make you think about how it can be useful for YOU and your Varnish usage.
Data analysis is already a crucial part of running a website, and is not limited to just bandwidth and requests per second. Combined with Varnish, VCS can be the tool to give you the necessary insight on who your public is and how your content is consumed to create a better, more efficient service.
Ready to learn more about VCS? Join us for our live webinar, How to identify issues in Varnish and track web-traffic stats in real-time: Getting the most out of Varnish Custom Statistics on September 8th.
Photo (c) 2005 Michael Coté, used under a Creative Commons license.
https://info.varnish-software.com/blog/vcs-powered-tactical-world-map
The StringBuilder Class
As stated numerous times already, the immutable nature of strings can be a blessing and a curse. The latter is especially true if lots of string manipulations have to be made, which effectively results in the creation of lots of intermediary string objects. Although short-lived objects are cleaned up pretty effectively by the garbage collector, having a lot of those, each of which can be substantially big, is suboptimal too. This can often be avoided by making use of the System.Text.StringBuilder class.
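As a rough illustration (not taken from the book's own samples), compare repeated concatenation, which allocates a new string on every pass, with appending to a single builder:

using System.Text;

class StringBuilderDemo
{
    static string BuildCsv(int n)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < n; i++)
        {
            sb.Append(i).Append(',');  // mutates the builder in place, no intermediary strings
        }
        return sb.ToString();          // a single final string allocation
    }
}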
A few releases of Visual Studio ago, a using directive for the System.Text namespace was added to the default template for newly created code files. An obvious reason for its inclusion is the ...
https://www.oreilly.com/library/view/c-40-unleashed/9780132678926/h4_1839.html
This is the 4th tutorial for STM8 microcontrollers. This post will give you a basic idea of using the STM8's internal ADC.
All STM8 family devices feature a 10/12-bit ADC peripheral. The ADC can be used in single or continuous conversion mode. This example code is tested on STM8S003F3P6 and STM8S105C6T6 controllers, but ideally it should work for every STM8 controller.
Experiment description
The schematic below shows the connections I made in order to test the ADC peripheral. I wrote a simple piece of code that changes the LED blinking frequency according to the analog voltage input at Analog Channel 3 (PD2) of the STM8S003F3P6. The LED is connected to PD3 in source mode.
Code
#include "stm8s.h" void myDelay(unsigned int value) { unsigned int i,j; for(i=0;i<1000;i++) { for(j=0;j<value;j++); } } void GPIO_Config(void) { GPIOD->DDR |= 1<<3; //PD.3 as output GPIOD->CR1 |= 1<<3; //push pull output } unsigned int readADC1(unsigned int channel) { unsigned int val=0; //using ADC in single conversion mode ADC1->CSR |= ((0x0F)&channel); // select channel ADC1->CR2 |= (1<<3); // Right Aligned Data ADC1->CR1 |= (1<<0); // ADC ON ADC1->CR1 |= (1<<0); // ADC Start Conversion while(((ADC1->CSR)&(1<<7))== 0); // Wait till EOC val |= (unsigned int)ADC1->DRL; val |= (unsigned int)ADC1->DRH<<8; ADC1->CR1 &= ~(1<<0); // ADC Stop Conversion val &= 0x03ff; return (val); } int main(void) { unsigned int stepNos; GPIO_Config(); while(1) { stepNos=readADC1(3); GPIOD->ODR |= (1<<); // PD.0 = 1, LED ON myDelay(stepNos); GPIOD->ODR &= ~(1<<3); // PD.0 = 0, LED Off myDelay(stepNos); } }
http://www.electroons.com/blog/category/adc-interfacing/page/2/
After a day of great talks at re:Invent, I spent some time with engineers from a company who use CloudFormation to provision and manage pretty much every part of their application, from VPCs for both dev and prod environments to the EC2 instances that run their app. They had a particularly interesting use-case that involved CloudFormation and Elastic IP (EIP) addresses that we're going to focus on today.
EIPs and CloudFormation
An EIP is a persistent, static, and public IP address that can be attached to an EC2 instance. They have been supported in CloudFormation for some time. Here’s a snippet that provisions and attaches an EIP to an EC2 instance:
"Resources" : { "Ec2Instance" : { "Type" : "AWS::EC2::Instance", ... } }, "IPAddress" : { "Type" : "AWS::EC2::EIP" }, "IPAssoc" : { "Type" : "AWS::EC2::EIPAssociation", "Properties" : { "InstanceId" : { "Ref" : "Ec2Instance" }, "EIP" : { "Ref" : "IPAddress" } } } }
This diagram illustrates how CloudFormation uses the EC2 API to provision the IPAddress resource, returning its physical ID (i.e., a public IPV4 address) that can be Ref’d by the IPAssoc resource:
If I view the stack’s resources in the AWS CloudFormation Management Console I can see the Logical ID, Physical ID, and type of the resources:
The Customer’s Challenge
When you create a stack including the above template snippet, CloudFormation creates a brand new EIP (i.e., you don’t know the address in advance) per the IPAddress declaration, then the Ec2Instance resource, and finally associates the address with the instance. When you delete the same stack, CloudFormation removes the EIP association, then terminates the EC2 instance and deletes the EIP.
Here’s where the customer’s use-case gets interesting: the EC2 instances they were provisioning and managing with CloudFormation are connecting to 3rd-party APIs (for example, a credit card payment processing gateway) that require IP whitelisting. They had previously provisioned a pool of tens of EIPs and gone through the manual whitelisting process with their 3rd-party providers at some point in the past. Although they dynamically provision their VPC and EC2 instances for dev, test, and prod with CloudFormation, they needed the EIPs attached to those instances to come from their pre-allocated pool. Declaring and attaching a "Type" : "AWS::EC2::EIP" resource the standard way wouldn’t work for their scenario.
Fortunately, CloudFormation allows developers to define Custom Resources for exactly these types of situations. In fact, earlier that day at re:Invent a few CloudFormation engineers gave a great talk on Custom Resources and a framework they released to make developing them easier (video of their talk is on YouTube; I highly recommend you check it out). This customer's need to provision EIPs from a pool seemed like a great fit for Custom Resources and the new framework, so I offered to put one together.
A CloudFormation Custom Resource for EIPs
Before we get into the solution, here’s a quick background on Custom Resources: A CloudFormation Custom Resource provides a way for a template developer to include resources in an AWS CloudFormation stack that they define. In this case, the resource is still an Elastic IP Address, but it will be provided by the developer from a pool of existing addresses that they own. The allocation and deallocation of the resource is declared in the CloudFormation template and is part of the stack workflow (i.e., create, update, or delete).
Pooling EIPs in DynamoDB
Remember that the EIPs in this example will be pre-provisioned and whitelisted with some third-party service (for example, a credit card payment processing gateway). It's reasonable to assume that certain EIPs may be associated with certain services or providers, and that we want to pool them accordingly. We also need to track – with strong consistency – which EIPs are in use, and where (i.e., which CloudFormation stack) they are used. DynamoDB fits the bill perfectly. Here's the table structure I defined and then manually entered my pre-provisioned EIPs into:
By using a DynamoDB hash+range key (you can read more about the DynamoDB data model in the DynamoDB documentation), I can retrieve all EIPs in a given named pool (i.e., CCPaymentService). If an item has the stack_id attribute set, that means the address for that item is in use; I can mark an address as 'in use' by setting the stack_id attribute to the value of the CloudFormation stack that is using the EIP.
Here’s how I might implement that functionality in a Python method using the boto library:
def get_address(pool):
    """Retrieve an EIP for the given pool from DynamoDB"""
    # Connect to ddb
    conn = boto.dynamodb2.connect_to_region(options.region)
    ddb = Table(options.table_name, connection=conn)

    # Get available EIPs from pool
    eips = ddb.query(
        pool__eq=pool,
        consistent=True
    )

    # Raise an exception if there are no EIPs in the named pool
    if not eips:
        raise FatalError(u"No EIPs found in pool %s" % pool)

    # Iterate returned addresses and find the first address that does not
    # have a stack_id attribute set (a set stack_id means it's in use)
    address = None
    for eip in eips:
        if not eip.get('stack_id', False):
            eip['stack_id'] = 'SOME STACK ID'
            eip['logical_id'] = 'SOME LOGICAL RESOURCE ID'
            if eip.save():
                address = eip['address']
                break

    # Raise an exception if all addresses are in use
    if not address:
        raise FatalError(u"All EIPs in pool %s are in use" % pool)

    return address
Deleting an address is simply a matter of clearing the stack_id and logical_id attributes for an item.
Integrating with CloudFormation
Now that we have a simple convention for managing our pre-provisioned, whitelisted EIPs in a DynamoDB table, and a small bit of Python to retrieve and manage those addresses, we need a way to declare these EIPs in CloudFormation so we can attach them to EC2 instances in our stacks. Let’s jump ahead a bit and look at how I’ll declare this Custom Resource in my template, then we’ll step back and talk more about the implementation:
"IPAddress" : { "Type" : "Custom::EipLookup", "Version" : "1.0", "Properties" : { "ServiceToken" : { "Ref" : "EipLookupServiceToken" }, "pool" : "CCPaymentService" } }
Compare that to the built-in EIP declaration we saw earlier:
"IPAddress" : { "Type" : "AWS::EC2::EIP" }
We can see the Type is different, and that we’ve included a Version. We also see a Properties key that defines two really important things:
ServiceToken: Required. This is the ARN (Amazon Resource Name) of an existing Amazon SNS (Simple Notification Service) Topic. CloudFormation will publish the contents of the Custom Resource declaration (i.e., the JSON snippet) to that topic whenever a stack is Created, Updated, or Deleted (and will include which of those lifecycle events is occurring).
pool: This indicates which pool the EIP should come from, and maps to the Pool hash key in the DynamoDB table. It will be included in the message that CloudFormation sends to the SNS Topic that the ServiceToken points to.
So, if I create a stack with the above EIP Custom Resource, CloudFormation will publish a message to the SNS topic indicated in the ServiceToken basically saying “Please create a v1.0 Custom::EipLookup resource from the CCPaymentService pool. Also, please let me know when you’re done, and the value of what you created.” The message published to SNS would look similar to:
{ "RequestType" : "Create", "ResponseURL" : "", "StackId" : "arn:aws:cloudformation:us-east-1:EXAMPLE/stack-name/guid", "RequestId" : "unique id for this create request", "ResourceType" : "Custom::EipLookup", "LogicalResourceId" : "IPAddress", "ResourceProperties" : { "pool" : "CCPaymentService" } }
The Custom Resource Bridge framework, discussed in the next section, helps us link our custom code to these CloudFormation events and notifications. For now, let’s extend our earlier diagram, replacing the Amazon EC2 API with the SNS Topic (i.e., ServiceToken) and showing the idea of our Custom Resource as a black box that queries our DynamoDB table:
Custom Resource Bridge (CRB) Framework
The CRB is a piece of software released by the CloudFormation team at re:Invent in November 2013 (and available under the Apache license on GitHub). It runs on an EC2 instance (preferably in an Auto Scaling Group with min=1) and takes care of a lot of the work (everything embodied in the above diagram’s black box) to connect your Custom Resource code with CloudFormation lifecycle events by introducing a few conventions to follow:
The SNS Topic you use in your Custom Resource’s ServiceToken should have an SQS Queue subscribed.
Tell the CRB the name of that SQS Queue and a script to invoke when a message is received. Here’s an example CRB config file:
[eip-lookup]
resource_type=Custom::EipLookup
queue_url=
timeout=60
default_action=/home/ec2-user/lookup-eip.py
When the CRB receives a message, it will parse the JSON, convert the relevant information to environment variables, and invoke your script.
Your script gets information about what it should do (i.e., create, update, or delete) from the environment. Here’s a few lines of Python that infer the request type and pool from the environment, then calls the get_address method we saw earlier if the resource is being created:
# Get the Request Type and EIP Pool
request_type = os.getenv('Event_RequestType')
pool = os.getenv('Event_ResourceProperties_pool', 'default')

...

# Get a new address from the pool
if request_type == 'Create':
    physical_id = get_address(pool)
Communicate your result by printing to stdout. For example, if my resource got an EIP from the pool in response to a CREATE event, I would print it out and the CRB will communicate that back to CloudFormation:
# Write out our successful response!
if request_type != 'Delete':
    print u'{ "PhysicalResourceId" : "%s" }' % physical_id
Handling Updates and Deletes
Any custom resource you make should handle the update and delete stack lifecycle events. In this EIP example, a user could decide to use an EIP from a different pool for an existing stack. The template might look like:
"IPAddress" : { "Type" : "Custom::EipLookup", "Version" : "1.0", "Properties" : { "ServiceToken" : { "Ref" : "EipLookupServiceToken" }, "pool" : "ATotallyDifferentPool" } }
If the user called UpdateStack with this new template, my Custom Resource would be notified of the update event, and in addition to the current declaration would also be given the previous state of the resource. This allows my code to decide if a change was made and update the EIP table accordingly:
# Get the Request Type and EIP Pool
request_type = os.getenv('Event_RequestType')
pool = os.getenv('Event_ResourceProperties_pool', 'default')

...

elif request_type == 'Update':
    old_pool = os.getenv('Event_OldResourceProperties_pool')
    old_address = os.getenv('Event_PhysicalResourceId')

    # If the updated resource wants an EIP from a different pool
    if not pool == old_pool:
        # And get a new one
        physical_id = get_address(pool)
    else:
        physical_id = old_address
Finally, a Delete event is handled by simply removing the stack_id attribute from the address in DynamoDB:
def delete_address(pool, address):
    """Mark an EIP as no longer in use"""
    # Connect to ddb
    conn = boto.dynamodb2.connect_to_region(options.region)
    ddb = Table(options.table_name, connection=conn)

    eip = ddb.get_item(pool=pool, address=address)
    del eip['stack_id']
    del eip['logical_id']
    eip.save()

...

# Get the Request Type and EIP Pool
request_type = os.getenv('Event_RequestType')
pool = os.getenv('Event_ResourceProperties_pool', 'default')

...

elif request_type == 'Delete':
    address = os.getenv('Event_PhysicalResourceId')
    delete_address(pool, address)
Try it Out.
Here’s how you can use the CRB and the EIP Custom Resource we just discussed, in 4 easy steps:
Download the custom-resource-runner.template CloudFormation template from GitHub and create a new stack in the CloudFormation Management Console. This creates an SNS Topic (i.e., ServiceToken), wires up an SQS Queue, then launches an EC2 instance in an Auto Scaling Group and installs the CRB along with the Python script, and finally creates the DynamoDB table to enter your EIP addresses into. It’s this part of the architecture:
After the stack launches, open the Outputs tab and copy the value for ServiceToken to your clipboard:
Open the EC2 Management Console, click the Elastic IPs link, and allocate a few new addresses. Then open the DynamoDB Management Console, select the EipCustomResource table that was created by CloudFormation in Step 1 and click the Explore Table button. Click New Item and add the EIPs you just provisioned. Use default as the pool name.
Download the example.template CloudFormation template from GitHub and create a new stack. Provide the ServiceToken value you copied in Step 2. This provisions a stack that uses your EIP Custom Resource to provision and attach a pooled EIP. It’s this part of the architecture:
Cleanup
When you’re done trying out the sample, be sure to delete the stacks you launched and release any EIPs you allocated as part of the trial.
Next Steps
CloudFormation Custom Resources are a really powerful tool for integrating your own custom code into the CloudFormation workflow. There are 4 other fully-functional example Custom Resources on GitHub that you can explore, try out, and use as references to build your own, including:
- AMI Lookup and Auditing
- Dynamic DNS Mapping with Route53
- EBS Volume Mounting and Dismounting
- RDBMS Schema Changes
If you come up with a cool Custom Resource you’d like to share, we actively maintain the GitHub repo and love Pull Requests!
Finally, each of these samples is discussed in detail in this excellent re:Invent video by CloudFormation engineers D.J. Edwards and Adam Thomas:
https://aws.amazon.com/blogs/devops/customers-cloudformation-and-custom-resources/
Syntax
#include <prio.h> PRStatus PR_ConnectContinue( PRFileDesc *fd, PRInt16 out_flags);
Parameters
The function has the following parameters:
fd
- A pointer to a PRFileDesc object representing a socket.
out_flags
- The out_flags field of the poll descriptor returned by PR_Poll().
Returns
- If the nonblocking connect has successfully completed, PR_ConnectContinue returns PR_SUCCESS.
- If PR_ConnectContinue() returns PR_FAILURE, call PR_GetError():
- PR_IN_PROGRESS_ERROR: the nonblocking connect is still in progress and has not completed yet. The caller should poll the file descriptor for the in_flags PR_POLL_WRITE|PR_POLL_EXCEPT and retry PR_ConnectContinue later when PR_Poll() returns.
- Other errors: the nonblocking connect has failed with this error code.
Description
Continue a nonblocking connect. After a nonblocking connect is initiated with PR_Connect() (which fails with PR_IN_PROGRESS_ERROR), one should call PR_Poll() on the socket, with the in_flags PR_POLL_WRITE | PR_POLL_EXCEPT. When PR_Poll() returns, one calls PR_ConnectContinue() on the socket to determine whether the nonblocking connect has completed or is still in progress. Repeat the PR_Poll(), PR_ConnectContinue() sequence until the nonblocking connect has completed.
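Putting that description into code, a minimal sketch of the loop might look like the following (assuming fd is already a nonblocking socket and addr a filled-in PRNetAddr; error reporting is elided):

#include <prio.h>
#include <prerror.h>

PRStatus connect_nonblocking(PRFileDesc *fd, const PRNetAddr *addr)
{
    if (PR_Connect(fd, addr, PR_INTERVAL_NO_WAIT) == PR_SUCCESS)
        return PR_SUCCESS;                     /* connected immediately */
    if (PR_GetError() != PR_IN_PROGRESS_ERROR)
        return PR_FAILURE;

    for (;;) {
        PRPollDesc pd;
        pd.fd = fd;
        pd.in_flags = PR_POLL_WRITE | PR_POLL_EXCEPT;
        if (PR_Poll(&pd, 1, PR_INTERVAL_NO_TIMEOUT) < 1)
            return PR_FAILURE;
        if (PR_ConnectContinue(fd, pd.out_flags) == PR_SUCCESS)
            return PR_SUCCESS;                 /* connect completed */
        if (PR_GetError() != PR_IN_PROGRESS_ERROR)
            return PR_FAILURE;                 /* connect failed */
        /* still in progress: poll again */
    }
}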
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_ConnectContinue
Re: "ocaml_beginners"::[] Ocaml + GUI + Mac + Mono?
- Okay, no mono. Thank you. That saves me from a very great many
hours of potentially frustrating research and software installation.
On the other hand, I can see Mono + F# being useful to me to solve
other problems, but those are not important now.
From the rest of what you and Richard Jones say, I think my plan of
action is this...
* Use labltk or lablgtk to build software on Aqua, unless I find a
solid Cocoa binding. Technically, there is Cocoa#, but I will look
into that when I look into mono. Either way, separate presentation
from logic as much as I absolutely possibly can.
* Port the presentation layer to other platforms when I decide that I
want to use the other platforms.
--
Savanni
On Oct 2, 2008, at 1:15 PM, Jon Harrop wrote:
> On Thursday 02 October 2008 17:47:14 Savanni D'Gerinel wrote:
> >)
>
> Amen.
>
> > * Make it a Native-Mac GUI on my Mac (because I am tired of XDarwin)
>
> The last time I looked, the native-Mac-GUI-in-OCaml problem had not
> been
> solved but several people were trying to solve it.
>
> > * Make it portable to both Windows and Linux (with a recompile)
>
> Does that not conflict with "native-Mac GUI"?
>
> > So far, the only option that looks obviously ready and complete is
> > using labltk.
>
> I believe LablGTK is a definite possibility. It works very well
> under Linux
> but I have not tried porting the software (I gave up on my Mac).
>
> > I do not mind this idea at all and am totally willing
> > to go with it, and I have some overall documentation on how to
> use it
> > at .
> >
> > I would prefer to use QT, but I can find no Ocaml bindings, and
> > writing such a binding is certainly beyond my skill and interest at
> > this time.
>
> You might try going via another language with more mature Qt
> bindings, e.g.
> using PyQt, but I do not know of any tutorials covering this.
>
> > On the other hand, I hear that there is a standard gui in Mono
> called
> > Windows Forms, though there is a Windows Presentation Layer that my
> > source is not certain has been ported to Mono yet. So, I really have
> > several questions here:
> >
> > 1. Does this Windows Forms gui bind directly to Aqua so that I
> have a
> > Mac-looking application without running Darwin?
>
> No:
>
> "Looks alien on non-Windows platforms." -
>
>
> > 2. What advantage is there to me writing my app in F# with Mono
> on my
> > Mac?
>
> None. Only on Windows, F# makes GUI programming vastly easier than
> anything
> OCaml has to offer. So it may be worth considering OCaml/F# cross
> compilation
> of the core.
>
> > 3. What way do you all most often use to write cross-platform GUI
> > apps that actually look like the underlying platform?
>
> I am not aware of anyone ever having succeeded in doing that. The
> nearest I
> can think of is Gtk apps but they have anomalous behaviour under
> Mac OS X.
> MLDonkey contains 20kLOC of cross platform GUI code using GTK2, for
> example.
>
> --
> Dr Jon Harrop, Flying Frog Consultancy Ltd.
>
>
>
[Non-text portions of this message have been removed]
- On Fri, Oct 03, 2008 at 09:02:58AM +0200, Adrien wrote:
> 2008/10/2, Savanni D'Gerinel <savanni@...>:unfortunately not, at least not when requiring lablgtkgl. If you can
> >)
> > * Make it a Native-Mac GUI on my Mac (because I am tired of XDarwin)
>
> Have you tried the native mac port of gtk ?
> You have two (more ?) available : one for gtk-1.2[1] and one for
> gtk-2[2]. I've never tried them as I don't use macs but I think gimp
> has been made to use the first one and I trust imendio, the company
> doing the second one, for releasing good code (partly because I know
> some of the devs and partly because the *definitely* want native gtk
> on the mac ; they're doing gtk development).
>
> Philippe Strauss has recently tried imendio's gtk with lablgtk but had
> troubles building it, unfortunately I don't know if his problem has
> been solved or not. [3]
Unfortunately not, at least not when requiring lablgtkgl. If you can
do without GL embedding in gtk, it will probably build fine, with
just a tiny patch that Pascal Cuoq provided.
if you need lablgtkgl, you'll most probably choke on a missing
symbol _GDK_DISPLAY (maybe related to X11, or double-underscored somewhere,
I have to dig in further). (Most probably gtkgl needs a little bit of
patchwork for native gtk osx support.)
regards.
---8<---
diff -ru lablgtk-2.10.1/src/ml_gdk.c lablgtk-2.10.1-nativegtk/src/ml_gdk.c
--- lablgtk-2.10.1/src/ml_gdk.c 2007-09-25 04:56:09.000000000 +0200
+++ lablgtk-2.10.1-nativegtk/src/ml_gdk.c 2008-02-26 11:10:30.000000000 +0100
@@ -22,13 +22,18 @@
/* $Id: ml_gdk.c 1369 2007-09-25 02:56:09Z garrigue $ */
+#define __QUARTZ__
+
#include <string.h>
#include <gdk/gdk.h>
+#if defined(__QUARTZ__)
+#else
#if defined(_WIN32) || defined(__MINGW32__)
#include <gdk/gdkwin32.h>
#else
#include <gdk/gdkx.h>
#endif
+#endif
#include <caml/mlvalues.h>
#include <caml/alloc.h>
#include <caml/memory.h>
@@ -253,7 +258,7 @@
ML_0 (GDK_ROOT_PARENT, Val_GdkWindow)
ML_1 (gdk_window_get_parent, GdkWindow_val, Val_GdkWindow)
-#if defined(_WIN32) || defined(__CYGWIN__)
+#if defined(_WIN32) || defined(__CYGWIN__) || defined(__QUARTZ__)
CAMLprim value ml_GDK_WINDOW_XWINDOW(value v)
{
ml_raise_gdk ("Not available for Win32");
@@ -488,7 +493,7 @@
CAMLprim value ml_gdk_property_get (value window, value property,
value length, value pdelete)
{
-#if defined(_WIN32) || defined(__CYGWIN__)
+#if defined(_WIN32) || defined(__CYGWIN__)|| defined(__QUARTZ__)
return Val_unit; /* not supported */
#else
GdkAtom atype;
---8<---
...
> [1]--
> [2]
> [3]
> [4]
Philippe Strauss
https://groups.yahoo.com/neo/groups/ocaml_beginners/conversations/topics/10238?xm=1&o=1&l=1
This MegaWidget procedure allows you to treat namespaces and widgets loosely as extensible classes. The class name is defined by the namespace from which the MegaWidget command was called (MyWidget in the example), and the specific class instance is named by the main widget name.

When MegaWidget is called, it is passed the path of some widget that you want to turn into a mega widget (such as a frame containing other widgets). The namespace of the caller is added to a search list (a list of namespaces from which MegaWidget was called on the named widget) and the widget's command (provided by Tk) is renamed and replaced. When this replacement command is called, it will scan through the search list, checking each namespace stored for a procedure with the same name as the first argument. If found, it is called, with the widget name inserted as the first argument.

Note: this assumes that the procedure will already have been defined so that it will be visible via "info proc". If the procedure hasn't been auto-loaded, it might call the wrong layer.

If no procedure is found in any of the namespaces of the search list, then the command is passed on to the widget command itself as if it were not a mega-widget.

The MegaWidget function provides some basic inheritance mechanisms. You can call it multiple times from different namespaces to add or override basic functionality. To call a specific parent class's version of a function, you just need to call the function directly, passing the widget path as the first argument, e.g., MegaWidget::dosomething .mw ?arg arg ...?.
package provide MegaWidget 1.0

proc MegaWidget { hWnd } {
    variable widgetClasses

    # Get the namespace for the mega-widget from the caller
    set NS [uplevel namespace current]

    # If the widget has already been turned into a mega-widget, just insert
    # the new namespace into the top of the search list and return.
    if {[info exist widgetClasses($hWnd)]} {
        set widgetClasses($hWnd) [linsert $widgetClasses($hWnd) 0 $NS]
        return
    }

    # The widget has not yet been turned into a mega-widget. Store the
    # caller's namespace as the first in the search list.
    set widgetClasses($hWnd) $NS

    # Rename the widget command to something in this procedure's namespace
    # so that calls to the widget command are not sent to the widget directly.
    rename ::$hWnd [namespace current]::mega$hWnd

    # Set up binding to clear the search list for the widget and delete
    # the replacement procedure for the widget command. Make sure that
    # the widget generating the event is the same as the widget that was
    # turned into a mega-widget: this allows a toplevel to be turned
    # into mega-widget too (otherwise, it will get <Destroy> events from
    # child windows).
    set template {
        if {[string match %W @HWND@]} {
            namespace eval @MYNS@ array unset widgetClasses %W
            rename %W {}
        }
    }
    regsub -all {@HWND@} $template $hWnd template
    regsub -all {@MYNS@} $template [namespace current] template
    bind $hWnd <Destroy> $template

    # Create a new top-level procedure with the same name as the widget.
    # This procedure will scan through the search list for a namespace
    # containing a procedure by the same name as the first argument passed
    # to this new procedure.
    set template {
        global errorInfo errorCode
        variable widgetClasses
        set hWnd @HWND@
        foreach NS $@MYNS@::widgetClasses($hWnd) {
            if {[namespace inscope $NS info proc $command] == $command} {
                set rc [catch { eval [set NS]::$command $hWnd $args } result]
                set ei $errorInfo
                set ec $errorCode
                break
            }
        }
        if {![info exist rc]} {
            set rc [catch { eval @MYNS@::mega$hWnd $command $args } result]
            set ei $errorInfo
            set ec $errorCode
        }
        return -code $rc -errorinfo $ei -errorcode $ec $result
    }
    regsub -all {@HWND@} $template $hWnd template
    regsub -all {@MYNS@} $template [namespace current] template
    proc ::$hWnd { command args } $template
}
package require MegaWidget
package provide XYText 1.0

namespace eval XYText {

    proc XYText { hWnd args } {
        frame $hWnd -bd 1 -relief sunken
        set hWndTxt \
            [text $hWnd.txt \
                -bd 0 \
                -relief flat \
                -xscroll "$hWnd.scrX set" \
                -yscroll "$hWnd.scrY set" \
                -wrap none \
            ]
        set hWndXScr \
            [scrollbar $hWnd.scrX \
                -orient horizontal \
                -command "$hWndTxt xview" \
            ]
        set hWndYScr \
            [scrollbar $hWnd.scrY \
                -orient vertical \
                -command "$hWndTxt yview" \
            ]
        set hWndBox \
            [frame $hWnd.frBox \
                -bd 1 \
                -relief raised \
            ]
        grid rowconfig    $hWnd 0 -weight 1 -minsize 0
        grid rowconfig    $hWnd 1 -weight 0 -minsize 0
        grid columnconfig $hWnd 0 -weight 1 -minsize 0
        grid columnconfig $hWnd 1 -weight 0 -minsize 0
        grid $hWndTxt  -row 0 -column 0 -sticky news
        grid $hWndYScr -row 0 -column 1 -sticky ns
        grid $hWndXScr -row 1 -column 0 -sticky ew
        grid $hWndBox  -row 1 -column 1 -sticky news
        MegaWidget $hWnd
        return $hWnd
    }

    # Create XYText MegaWidget commands to be passed on to the text widget.
    foreach textCmd [list bbox cget compare configure debug delete \
            dlineinfo dump get image index insert mark scan search see \
            tag window xview yview] {
        proc $textCmd { hWnd args } "
            return \[eval \$hWnd.txt $textCmd \$args\]
        "
    }
}

There is a problem here still to be resolved: a normal binding command like
XYText::XYText .t
pack .t -padx 5 -pady 5
bind .t <Motion> {puts moving}

on this megawidget will not do anything because the binding is associated with the frame, not the text widget. See overloading widgets for a solution.
Needs more examples:
- how to save user data
- example with inheritance
- something interesting :)
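As a small start on the inheritance item above, here is an untested sketch: any proc defined in the XYText namespace becomes callable as a widget subcommand through the search-list dispatch (the loadfile name is invented for the example):

namespace eval XYText {
    # sketch: a new "method"; ".t2 loadfile somefile" is dispatched here
    proc loadfile { hWnd fileName } {
        set fp [open $fileName r]
        $hWnd.txt insert end [read $fp]
        close $fp
    }
}

XYText::XYText .t2
pack .t2
.t2 loadfile /etc/hosts   ;# resolved via the search list to XYText::loadfile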
http://wiki.tcl.tk/9587
Hide Submit on Date Select in Past
Hey guys, I need some assistance developing code that will hide the submit button of a form if a date in the past is selected. The variable for the date box is firstleveldate, and the date appears in MM-DD-YYYY format.
Thanks!
Load the form with the submit button disabled, then place an onBlur event in the firstleveldate field that checks the date value. If it's before the current date, do nothing; if it's AFTER the current date (or is the current date?), then document.forms["formName"].submitButtonName.disabled = false; ^_^
You can use this:

function TestDate(Input, CurDateValid){
    var seg = Input.split('-');
    var Dato = new Date(seg[2], seg[0]-1, seg[1]);
    var Now = new Date();
    var NowReset = new Date(Now.getFullYear(), Now.getMonth(), Now.getDate());
    return ((CurDateValid ? NowReset : Now) <= Dato);
}
If today is a valid date, then:

if (TestDate(firstleveldate, 1)){
    // --show/enable button--
} else {
    // --hide/disable button--
}

And if today should not be accepted:

if (TestDate(firstleveldate, 0)){ // or: if (TestDate(firstleveldate)){
    // --show/enable button--
} else {
    // --hide/disable button--
}
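Tying the two suggestions together, a sketch of the full wiring could look like this (the form and button names are made up; adjust them to your page):

<script type="text/javascript">
function checkDate(field){
    // enable the submit button only for today or a future date
    document.forms["myForm"].sub.disabled = !TestDate(field.value, 1);
}
</script>
<form name="myForm" action="#">
    <input type="text" name="firstleveldate" onblur="checkDate(this)">
    <input type="submit" name="sub" value="Submit" disabled="disabled">
</form>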
Last edited by Lerura; 06-24-2012 at 12:01 AM.
http://www.codingforums.com/javascript-programming/266094-hide-submit-date-select-past.html
Installation
To install it just run:
pip install germanium
You don’t need any binary drivers installed, or any other dependencies, since they are bundled (and tested) by Germanium itself.
Writing a test then becomes as easy as:
from germanium.static import *
from time import sleep

open_browser("ff")
go_to("")
type_keys("germanium pypi<enter>", Input("q"))
wait(Link("Python Package Index"))
click(Link("Python Package Index"))
sleep(5)
close_browser()
Germanium supports Python 2.7, 3.4 and 3.5, and is already used in production tests.
Browsers supported are:
IE 8+
Chrome
Firefox
Edge
Germanium Drivers
Starting with version 1.8 Germanium also packages the WebDriver binary drivers inside, and will unpack them when starting a new browser.
Thus, when using Germanium, it is no longer necessary to download the drivers yourself.
GERMANIUM_DRIVERS_FOLDER
Path where to unpack the drivers if they are missing, or if a wrong version is detected. If it's not set, Germanium will create a folder named germanium-drivers in the temp folder.
export GERMANIUM_DRIVERS_FOLDER=/opt/germanium-drivers
GERMANIUM_USE_PATH_DRIVER
If there is a driver for the current browser in the PATH, even if the version of the driver is unsupported, use that one instead of the embedded binary driver that Germanium ships.
If an unsupported driver is found, Germanium will still use its internal driver.
export GERMANIUM_USE_PATH_DRIVER=1
Germanium Static
The Germanium static package is for creating tests that revolve around running a single browser instance at a time, in the whole test process.
open_browser()
Description
Opens the given browser instance.
Signature
def open_browser(browser="firefox", (1) wd=None, (2) iframe_selector=DefaultIFrameSelector(), (3) screenshot_folder="screenshots", (4) scripts=list()) (5)
browser - The browser is case insensitive and can be one of:
wd - A specific already created WebDriver instance can also be given, and then the browser parameter will be ignored.
iframe_selector - The strategy to use when finding the execution iframe, whenever the active iframe name changes.
screenshot_folder - Folder under browser screenshots are saved.
scripts - A list of JavaScript resources to be loaded whenever a page is newly loaded.
Sample
open_browser("firefox")
This also allows connecting to remote selenium instances, for example:
open_browser("ff:")
In case you want to pass capabilities into the remote driver instance, Germanium allows that by using simple query strings:
open_browser("ie?wdurl=")
The wdurl is the parameter that specifies the WebDriver URL to use when connecting to the remote instance.
close_browser()
Description
Close the currently running browser instance that was opened with
open_browser()
Signature
def close_browser()
Sample
close_browser()
go_to(url)
Description
Go to the given URL, and wait for the page to load. After the page loads, the scripts provided when creating the GermaniumDriver object will be loaded automatically.
Signature
def go_to(url) (1)
url - The URL to load in the browser.
Sample
go_to("")
type_keys(keys, selector, delay)
Description
Type the keys specified into the element, or the currently active element.
Signature
def type_keys(keys,          (1)
              selector=None, (2)
              delay=0)       (3)
keys - the keys to press. See the Germanium Keys Support, to learn about having multiple keypresses, combo key presses, or repetitions.
selector - optional For what element to send the keys. In case it’s missing, sends the keys to the active element. See the Germanium Selectors, to learn about how you can easily locate the element you want your action to be triggered against.
delay - optional Delay in seconds between each keypress.
Sample
type_keys('john.doe@example.com', Input('email')) (1)
type_keys("<tab*2><enter>")                       (2)

Type in the input with the name attribute equal to email.
Type in the currently active element in the current iframe.
click(selector_or_point)
Description
Click the element with the given selector, or at the specified point position.
Signature
def click(selector_or_point) (1)
selector_or_point - What element to click. See the Germanium Selectors and Point Support, to learn about how you can easily locate the element you want your action to be triggered against.
Sample
click(Button('OK'))
hover(selector_or_point)
Description
Hovers (sends a mouse over) the element with the given selector, or at the specified point position.
Signature
def hover(selector_or_point) (1)
selector_or_point - What element to hover. See the Germanium Selectors and Point Support, to learn about how you can easily locate the element you want your action to be triggered against.
Sample
hover(Element('div', id='menu1'))
double_click(selector_or_point)
Description
Double clicks the element with the given selector, or at the specified point position.
Signature
def double_click(selector_or_point) (1)
selector_or_point - What element to double click. See the Germanium Selectors and Point Support, to learn about how you can easily locate the element you want your action to be triggered against.
Sample
double_click(Element('div', css_classes='table-row'))
right_click(selector_or_point)
Description
Right clicks the element with the given selector, or at the specified point position.
Signature
def right_click(selector_or_point) (1)
selector_or_point - What element to right click. See the Germanium Selectors and Point Support, to learn about how you can easily locate the element you want your action to be triggered against.
Sample
right_click(Element('div', css_classes='table-row'))
drag_and_drop(from_selector_or_point, to_selector_or_point)
Description
Performs a drag and drop operation from the element matching the
from_selector_or_point, to the element matching the
to_selector_or_point.
Both from_selector_or_point and to_selector_or_point can, as the name suggests, be either selectors or point locations, and are not required to have the same type. You can start a drag from a selector, to a point, or vice-versa.
Signature
def drag_and_drop(from_selector_or_point, (1) to_selector_or_point) (2)
from_selector_or_point - What element to use for drag start. See the Germanium Selectors and Point Support, to learn about how you can easily locate the element you want your action to be triggered against.
to_selector_or_point - What element to release the mouse over. See the Germanium Selectors and Point Support, to learn about how you can easily locate the element you want your action to be triggered against.
Sample
drag_and_drop(Element("div", css_classes="old-entry", index=2), "#removeContainer")
select(selector, text?, index?, value?)
Description
Change the value of a <select> element by selecting items from the available options.
Signature
def select(selector,   (1)
           text=None,  (2)
           index=None, (3)
           value=None) (4)

selector - What <select> element to change. See the Germanium Selectors, to learn about how you can easily locate the element you want your action to be triggered against.
text - What text(s) (if any) to use for selection.
index - What index(es) (if any) to use for selection.
value - What value(s) (if any) to use for selection.
One of text, index or value must be present for the selection to function; if none are present, an Exception will be raised.
text, index and value can also be arrays, or single values.
Sample
select("#country", "Austria")
deselect(selector, text?, index?, value?)
Description
Change the value of a <select> element by deselecting items from the available options.
Signature
def deselect(selector,   (1)
             text=None,  (2)
             index=None, (3)
             value=None) (4)

selector - What <select> element to change. See the Germanium Selectors, to learn about how you can easily locate the element you want your action to be triggered against.
text - What text(s) (if any) to use for deselection.
index - What index(es) (if any) to use for deselection.
value - What value(s) (if any) to use for deselection.
Deselect will deselect all the items from the text, index and value parameters. If all the parameters are unset, it will clear the selection.
text, index and value can also be arrays, or single values.
Sample
deselect("#products", index=[1,3])
select_file(selector, file_path, path_check=True)
Description
Selects the file into a file input from the disk. The file itself must exist on the system where the browser is running.
Signature
select_file(selector,        (1)
            file_path,       (2)
            path_check=True) (3)
selector - What file input to select the file for. See the Germanium Selectors, to learn about how you can easily locate the element you want your action to be triggered against.
file_path - Path to the file that should be selected in the file input.
path_check - Check if the file exists, and convert it to an absolute path for the upload.
In case path_check is unset, any path will be sent to the driver without any validation. This is useful for uploading files on a remote WebDriver browser.
WebDriver requires the path to be absolute. Germanium will convert the path to an absolute location only if path_check is set to True.
Sample
Selecting for upload a relative path:
select_file(InputFile(), 'features/steps/test-data/upload_test_file.txt')
Selecting for upload a path that is available only remotely:
select_file(InputFile(), r"c:\features\steps\test-data\upload_test_file.txt", path_check=False)
parent_node(selector)
Description
Gets the parent node of the given selector.
Signature
parent_node(selector) (1)
selector - What element to return the value for. See the Germanium Selectors, to learn about how you can easily locate the element you want your action to be triggered against.
This will return a
WebElement.
Sample
e = parent_node('#some_element')
Will return the parent node for the element with the ID
some_element that will
be matched by the CSS locator.
child_nodes(selector, only_elements=True)
Description
Gets the child nodes of the element that is matched by selector.
Signature
child_nodes(selector, (1) only_elements=True) (2)
selector - What element to return the value for. See the Germanium Selectors, to learn about how you can easily locate the element you want your action to be triggered against.
only_elements - If to return only elements, or also other node types (text, comment, etc)
This will return a
list of the found elements, or an empty list if no element was found.
Sample
For example for the given HTML:
<div id="parent"> <div id="child1">..</div> <div id="child2">..</div> </div>
When calling:
items = child_nodes("#parent") assert len(items) == 2
This will return a list of 2 elements, child1 and child2, since only_elements is set by default to true. Otherwise, if setting only_elements to False, the call will return 5 elements, since there are 3 whitespace nodes in the #parent div.
items = child_nodes('#parent', only_elements=False)
assert len(items) == 5
get_value(selector)
Description
Gets the value of an input element. Works for <input> and <select> elements.
get_value(selector) (1)
selector - What element to return the value for. See the Germanium Selectors, to learn about how you can easily locate the element you want your action to be triggered against.
get_value will return the current value of the element. If the element is a multi-select, it will return an array of the values which were selected (the value attribute of the <option> elements that are selected).
Sample
assert get_value('#country') == 'at'
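For a multi-select the returned value is a list; a sketch with a hypothetical #products element:

# hypothetical <select multiple> with id "products": the selected
# <option> value attributes come back as a list
assert get_value('#products') == ['p1', 'p3']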
get_text(selector)
Description
Gets the text from the element. This is equivalent to the innerText, or textContent, element attribute of the browser.
Signature
get_text(selector) (1)
selector - What element to return the text for. See the Germanium Selectors, to learn about how you can easily locate the element you want your action to be triggered against.
If the selector is a WebElement instance, the filtering of only_visible will not be used, and the text from the given element will still be returned.
This is in contrast with the default Selenium approach of returning empty text for elements that are not visible.
Sample
get_text(invisible_element)
or
assert 'yay' == get_text('.success-message') (1)
This might throw exceptions if .success-message is an element that is invisible, or doesn't exist.
get_style(selector, name)
Description
Returns a single CSS attribute value for the element that is matched by the selector.
Signature
get_style(selector, (1)
          name)     (2)
selector - What element to return the CSS property for. See the Germanium Selectors, to learn about how you can easily locate the element you want your action to be triggered against.
name - The name of the property to return, in camel case.
If the selector is a WebElement instance, the filtering of only_visible will not be used, and the style property from the given element will still be returned, even if the element is not visible.
Sample
get_style('input.red-border', 'borderTopWidth')
get_web_driver()
Description
Return the WebDriver instance the global Germanium was built around.
Signature
def get_web_driver()
Sample
wd = get_web_driver()
get_germanium()
Description
Returns the currently running Germanium instance, or None if no instance was opened using open_browser().
Signature
def get_germanium()
Please see the Germanium API Documentation to find out what is available on the germanium.driver.GermaniumDriver instance.
Sample
g = get_germanium()
highlight(selector, show_seconds=2)
Description
Highlights by blinking the background of the matched selector with a vivid green for debugging purposes.
Signature
def highlight(selector,              (1)
              show_seconds=2,        (2)
              *args, console=False)  (3)
selector - What element to alternate the background for. See the Germanium Selectors, to learn about how you can easily locate the element you want your action to be triggered against.
show_seconds - How many seconds should the element blink.
console - Should the messages be logged to the browser console.
In case the element that is found doesn’t exist, or is not visible, a notification alert will pop up, with information of whether the element was not found or since it’s not visible can’t be highlighted.
In case console is set to True, the alert will not be displayed; instead only the console.log (or console.error) of the browser will be used for notifying about elements that are not visible or that cannot be found.
Sample
highlight('.hard-to-see-item')
def S(*argv, **kwargs)
Description
Returns a deferred locator, using the `S`uper locator.
Signature
def S(selector, strategy='default')
Sample
element = S('#editor').element()
def iframe(target, keep_new_context = False)
Selects the current working iframe with the target name.
@iframe("editor") def type_keys_into_editor(keys): type_keys(keys) type_keys_into_editor('hello world') # will switch the iframe to 'editor' and back click(Button("Save")) # iframe is 'default'
wait(closure, while_not=None, timeout=10)
Description
A function that allows waiting for a condition to happen, monitoring also that some other conditions do not happen.
In case the timeout expires, or one of the while_not conditions matches before the closure matches, an exception is thrown.
Callables of both closure and the while_not are recursively resolved until a non-callable trueish value is returned.
Signature
def wait(closure, while_not=None, timeout=10)
Since selectors are callables, they can be used as parameters for wait.
wait(Text("document uploaded successfully"), while_not = Text("an error occurred"))
Because callables are recursively resolved, they can be used as strategies for waiting:
def ButtonOrLink():
    if some_condition:
        return Link
    return Button

wait(ButtonOrLink)
This is roughly equivalent to:
def ButtonOrLink():
    if some_condition:
        return Link().exists()
    return Button().exists()

wait(ButtonOrLink)
waited(closure, while_not=None, timeout=10)
Description
A function that allows waiting for a condition to happen, monitoring also that some other conditions do not happen.
In case the timeout expires, or one of the while_not conditions matches before the closure matched, it returns None.
Otherwise it returns the value that the closure returned.
Signature
def waited(closure, while_not=None, timeout=10)
click(waited(Button("Ok")))
Germanium Selectors and Locators
Selector objects are similar to String values that describe how an element can be found in the current page, while Locator objects are the implementations of the actual algorithms that find them. A parallel can be made between the string "div.custom-text" and the webdriver.find_element_by_css() function. Selectors specify what you want to find in the page, and locators make sure you find them. It's the combination of the two, webdriver.find_element_by_css("div.custom-text"), that will return the actual DOM Element to interact with.
Selectors are in the end text strings. Locators evaluate them finding elements in the browser.
In all the API calls where a selector is specified, the selector is actually one of:
a string selector,
an object that inherits from AbstractSelector (such as Text, Element, Image, etc.),
a WebDriver WebElement,
a locator,
a list of any of the above.
Since selectors offer positional and DOM filtering, points 1 and 2 will cover 99% of your test cases.
Locators Overview
Locators are algorithms that are able to find elements against the current browser.
They are registered on the Germanium instance, and by default Germanium comes with three locators registered: "xpath", "css" and "js". These are implemented in XPathLocator, CssLocator and JsLocator respectively, from the germanium.locators package. Locators use selectors to find web elements. To create a locator you need a Germanium instance, and a string specifying the selector passed to the locator itself.
These locators all extend a base class named DeferredLocator. This class holds the reference to the Germanium object, and offers utility methods to actually fetch the elements, check if such elements exist, or retrieve their text.
Note that the locators don’t immediately find the elements. Explicit calls must be made to:
element()
element_list()
the locator itself with () (since the locator is a callable and will return the element_list)
from germanium.util import wait

label_divs_locator = germanium.S('.label')  # (1) this will return a CssLocator
wait(label_divs_locator)                    # (2) since the locator is a callable, we can wait on it
A locator is always constructed with two things: the Germanium instance it will use to attempt finding the elements, and a string expression that will be used for finding. Note that you should never manually instantiate the locator; instead use the super locator (the S function). This function will pass both the germanium instance and the selector itself.
You can, and should, use the strategy parameter or the selector prefix when using the
S() builder function:
germanium.S('#testDiv', strategy='css')
or prefixing the string itself with the strategy name:
germanium.S('css:#testDiv')
Optionally a custom locator can be defined that extends the base class
DeferredLocator.
DeferredLocator contains a reference to a
Germanium
object and includes utility methods to get web elements.
String Selectors
A string selector is a selector that can specify what locator is to be used. Implicitly, the selector is either an XPath expression, if it starts with "//", or a CSS selector, if there is no identifier prefix ("name:…").
A string selector can also specify its locator strategy, by prefixing the selector with the locator strategy name. Currently registered into Germanium are:
css
selector = "css:div#customID" # or without the css prefix, since the string it's # not starting with // selector = "div#customID"
xpath
selector = "xpath://div[@id='customID']" # or without the xpath prefix, since the string it's # starting with // selector = "//div[@id='customID']"
js
selector = "js:return [ document.getElementById('customID') ];"
Selectors Overview
All Selector objects in Germanium inherit from germanium.selector.AbstractSelector, which defines only a single required method, get_selectors(), that returns a list of string selectors.
The list items can have different locator strategies:
class AbstractSelector(object):
    # ...
    def get_selectors(self):
        raise Exception("Abstract class, not implemented.")
    # ... positional, and parent-child filtering methods
All the Selector objects return a list of strings that define how the element, or the multiple elements, will be found by the given locator.
Selectors Positional Filtering
Germanium provides the following methods directly on top of AbstractSelector to enable positional filtering: left_of(selector), right_of(selector), below(selector) and above(selector). These filter the set of found web elements against reference elements, keeping only the elements that are left of, right of, below or above the references.
These filters can be used to filter otherwise false positive matches when selecting.
Multiple filters can be chained for the same selector; for example:
click(Link("edit")
      .below(Text("User Edit Panel"))
      .right_of(Text("User 11")))
This will find a link that contains the label edit, that is positioned below the text User Edit Panel and is to the right of the text User 11.
selector.left_of(other_selector)
Description
Make a selector that will return only the items that are left of all the elements returned by the other_selector.
Signature
def left_of(self, other_selector)
Sample
click(Input().left_of(Text("User")))
selector.right_of(other_selector)
Description
Make a selector that will return only the items that are right of all the elements returned by the other_selector.
Signature
def right_of(self, other_selector)
Sample
click(Link("edit").right_of(Text("User 11")))
selector.above(other_selector)
Description
Make a selector that will return only the items that are above all the elements returned by the other_selector.
Signature
def above(self, other_selector)
Sample
click(Link("logout").above("div.toolbar"))
Selectors DOM Filtering
DOM filtering selectors work by selecting only specific nodes in relation to other nodes in the DOM.
selector.containing(selector..)
Description
Matches nodes that contain the other XPath/CSS selectors.
Signature
def containing(self, selector..)
Sample
row = Element("tr").containing( Element("td", contains_text="User 1"), Element("td", contains_text="User 2") ).element()
This will match a <tr> element that contains any of the <td> elements with the "User 1" or "User 2" text.
selector.containing_all(selector..)
Description
Matches nodes that contain all the given selectors inside their tree structure.
Signature
def containing_all(self, selector..)
Sample
row = Element("tr").containing_all( Element("td", contains_text="user@sample.com"), Text("User A") ).element()
This will match a <tr> element that contains both a <td> with the text user@sample.com and some other text, "User A".
selector.inside(selector..)
Description
Matches nodes that are inside any of the other selectors.
Signature
def inside(self, selector)
Sample
error_message = Element("div", css_classes="label") \
    .inside(Element("div", css_classes="error-dialog"))
selector.outside(selector..)
Description
Matches nodes that are outside any of the given selectors (don’t have the given selectors as a parent).
Signature
def outside(self, selector)
Sample
For example, to check that all the `p`aragraphs in the page are inside `div`s, we can:
assert Element("p").outside("div").not_exists()
selector.without_children()
Description
Matches nodes that have no children.
Signature
def without_children(self)
Sample
Given this selector:
Element('div', css_classes='test').without_children()
and HTML:
<div>
    <div class="test">a</div>
    <div class="test"><node/></div>
    <div class="test"></div> <!-- only this node will be matched -->
    <div class="test"><node>mix</node></div>
</div>
only the third <div> child element will be matched.
Germanium Selectors in Static Contexts
Selectors are neat since we can reuse them, and they offer a clean separation between finding the elements and inspecting them; but they also offer a few utility methods to aid you in removing that one extra call to the S super locator.
For example instead of writing:
S(Button('Ok')).element()
you can write:
Button('Ok').element()
but you need to have a germanium instance already opened, or manually specify it in the element call.
Button('Ok').element(germanium=my_custom_ge_instance)
selector.element()
Description
This function allows fetching the first element from the Germanium instance, for which the current selector matches.
In case the germanium instance is not specified, it will use the static instance from germanium.static.get_germanium().
Signature
def element(self, *argv, germanium=None, only_visible=True)
Sample
Button('Ok').element()
selector.element_list()
Description
This function allows fetching the element list from the Germanium instance, for which the current selector matches.
In case the germanium instance is not specified, it will use the static instance from germanium.static.get_germanium().
Signature
def element_list(self, index=None, *argv, germanium=None, only_visible=True)
index - When present, the element with the given index will be returned instead of the full list of elements.
germanium - What instance of germanium to use. If None, uses germanium.static.get_germanium().
only_visible - If only the visible elements should be selected. Defaults to True across Germanium.
Sample
Element('li').element_list()
selector.exists()
Description
This function allows checking if there is at least one element matching the current selector.
In case the germanium instance is not specified, it will use the static instance from germanium.static.get_germanium().
Signature
def exists(self, *argv, germanium=None, only_visible=True)
Sample
wait(Text('data saved successfully').exists)
selector.not_exists()
Description
This function allows checking if there is no element matching the current selector.
In case the germanium instance is not specified, it will use the static instance from germanium.static.get_germanium().
Signature
def not_exists(self, *argv, germanium=None, only_visible=True)
Sample
wait(Text('error occurred').not_exists)
selector.text()
Description
This function allows returning the text of the first element that matches the current selector.
In case the germanium instance is not specified, it will use the static instance from germanium.static.get_germanium().
Signature
def text(self, *argv, germanium=None, only_visible=True)
Sample
assert Css('#messages').text() == 'data persisted'
Utility Selectors
Utility selectors are provided so you can use the positional filtering capabilities of the selectors. For example:
click(Css('.tree-plus-icon').left_of(Text('Item 15')))
The reason behind them is that you can’t use positional filtering directly on the strings themselves. String objects have to be recast to another object type (in this case, AbstractSelector) that supports the positional filtering methods.
click('.tree-plus-icon'.left_of(Text('Item 15'))) # throws exception
JsSelector(code)
A selector that finds an element by evaluating the given JavaScript code. arguments[0] is the element used for subtree searches, and can be null if searches are made for the full document.
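For instance, a minimal sketch of locating an element through JavaScript (the CSS class here is purely illustrative):
element = S(JsSelector('return [document.querySelector(".editor")];')).element()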
Provided Selectors
Provided selectors are just classes that are generally useful for testing simple things such as buttons, links or text. The most basic of them is called Element. There are many more specific selectors on top of that, for `Input`s or `Link`s.
Element(tag_name=None, …)
A selector that finds an element by looking at its XPath.
Parameters:
tag_name - the HTML tag name to find (e.g. div, span, li);
index - if specified, the 1-based index of the result;
id - if specified, the id attribute of the element;
exact_text - if specified, the exact text the element must have;
contains_text - if specified, text the element should contain;
css_classes - the CSS classes that the element must have (either as a string, or a list of `string`s);
exact_attributes - attributes with their values that the element must have (dict, keys for attribute names, values for expected values);
contains_attributes - attributes that contain the given values (dict, keys for attribute names, values for strings that the attribute values must contain);
extra_xpath - extra XPath to be added to the expression, after the previously built expressions.
If the index is used, the whole expression is wrapped in parentheses, and the index is applied to the whole result. In case you want multiple sub-children, use extra_xpath to fetch the elements.
S(Element('div', contains_text='error has occurred', css_classes=['error-message']))
This will find a div that contains the text error has occurred and also has a CSS class attached to it named error-message.
Button(search_text = None, text = None, name = None)
Just a selector that finds a button by its label or name. This selector will find simultaneously both input elements that have type="button", and button elements. It matches by:
some of the text, in either the value attribute if it’s an input, or the text of the button (search_text);
the exact text, either the value attribute if it’s an input, or its text if it’s an actual button (text);
its form name (name).
germanium.S(Button("Ok"))
InputText(input_name)
Just a selector that finds an input with the type
text by its name.
germanium.S(InputText('q'))
Link(search_text, text, search_href, href)
Just a selector that finds a link by either:
some of its text content (search_text);
its exact text content (text);
some of its link location (search_href);
its exact link location (href).
To match the first link that contains the 'test' string, one can:
germanium.S(Link("test"))
Of course, the text and href search can be combined; to find a link on the ciplogic.com website containing the text testing:
germanium.S(Link("testing", search_href="ciplogic.com"))
Text(text, exact=False, trim=False)
Just a selector that finds the element that contains the text in the page.
germanium.S(Text("some text"))
The selector can find the text even in formatted text. For example the previous selector would match the parent div in such a DOM structure:
<div> some <b>text</b> </div>
The exact and trim options can be used to find elements even if they are padded, or to find only the elements that have the exact text that was given for searching.
Point Support
In Germanium, point actions are supported for all the mouse actions, and can be used instead of selectors, namely:
click()
right_click()
double_click()
hover()
drag_and_drop()
Point(x, y)
Points are not selectors and don’t specify an exact element, but rather, as the name implies, a location on the screen where we want to interact. The point location is computed from the top/left of the page itself, and is specified with x and y coordinates.
Points can be adjusted, so you can have the value changed without always summing up values in a one-liner.
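For example, to click at an absolute page coordinate (the coordinates here are purely illustrative):
click(Point(100, 200))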
In order to easily obtain points, a utility class is also provided that can obtain points relative to the bounding box of an element. The corners, the middles of the top/left/right/bottom segments, and the center are offered as points. The class is named Box, and its constructor accepts a selector as an argument.
Box(selector)
A Box instance will keep the sizes from the first time it is called, because we don’t want to query them every time. In order to refresh it, the get_box() method is offered, which will refresh the Box coordinates with the new data.
In wait conditions you can chain it:
box = Box(Css('.resizing-div'))
wait(lambda: box.get_box().width() == 100)
Since points are not selectors, you can click two pixels right of an element, without exactly specifying the target element like so:
click(Box('span.custom-text').middle_right(2, 0))
To click two pixels left of an element, we can just adjust with a negative value:
click(Box('span.custom-text').middle_left(-2, 0))
Assuming a canvas is more than 10x10 pixels, we could also do a drag and drop from the top left corner, to the bottom right, keeping a 5 pixel margin:
canvas_box = Box('canvas.drawing')
drag_and_drop(canvas_box.top_left(5, 5), canvas_box.bottom_right(-5, -5))
Germanium Keys Support
This section details how to type keys better, without a headache.
Regular Typing
In general when typing keys, for example for form fields, the easiest way of doing it is to just type the actual keys to be pressed. For example to type the user name into a form field you can:
type_keys('John', Input('firstname'))
This will in turn just type the keys ["J", "o", "h", "n"] into the input that has a name attribute equal to "firstname". An email looks equally fascinating:
type_keys('john.doe@example.com', Input('email'))
Let’s start the more interesting examples.
Special Keys
Special keys such as ENTER are available by just escaping them in < and > characters, e.g. <ENTER>. For example, to send TAB TAB ENTER one could type:
type_keys("<tab*2><enter>")
Now you might wonder, why is it <enter> and not <ENTER>? Or <cr>? Or its bigger brother <CR>? Or just <Enter>? Actually they all resolve to the same key, that is ENTER. The same holds true for <del> vs <delete>, or <bs> vs <backspace>, etc. They will resolve to DELETE, BACKSPACE, etc. as expected.
Combo Presses
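To press a key and hold it down while other keys are typed, the ! and ^ markers can be used, with the same syntax as in the type_keys_g utility described later:
type_keys("<!shift>shift is down<^shift>, and now is up.")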
Germanium API Documentation
The following kinds of functions and attributes are provided for easier support inside the browsers:
decorator:
@iframe
germanium instance functions:
S, super locator
js,
execute_script
take_screenshot
load_script
germanium instance attributes:
iframe_selector
utility functions:
type_keys_g
click_g
double_click_g
right_click_g
hover_g
select_g
deselect_g
get_attributes_g
get_value_g
get_text_g
highlight_g
wait
@iframe - germanium iframe decorator
@iframe(name, keep_new_context=False)
Switch the iframe when executing the code of the function. It will use the strategy provided when the Germanium instance was created.
For example, if we had an editor embedded in an IFrame and we wanted to call the saving of the document, we could implement that as:
@iframe("default") def close_dialog(germanium): germanium.S(Button("Ok").below(Text("Save dialog"))).element().click() @iframe("editor") def save_document(germanium): germanium.S('#save-button').element().click() close_dialog(germanium)
The @iframe decorator is going to find the current context by scanning the parameters of the function for the Germanium instance. If the first parameter is an object that contains a property named either germanium or _germanium, then this property will be used.
germanium Instance Functions
The GermaniumDriver is a simple instance that decorates an existing WebDriver:
All the attributes that are not defined on the GermaniumDriver instance are looked up on the germanium.web_driver one. For example, calling:
print(germanium.title)
will actually result in fetching the title from the web_driver instance that is used by the GermaniumDriver.
Constructor GermaniumDriver(web_driver, ..)
Constructs a new GermaniumDriver utility object on top of the given WebDriver object.
GermaniumDriver(web_driver,
                iframe_selector=DefaultIFrameSelector(),
                screenshot_folder="screenshots",
                scripts=list())
The only required parameter is the web_driver argument, which must be a WebDriver instance.
iframe_selector
The iframe_selector specifies the strategy to use whenever the iframe is changed by the @iframe decorator. This class should have a method named select_iframe(self, germanium, iframe_name); alternatively, a callable that takes two parameters (germanium, iframe_name) can be provided and it will be wrapped into a decorator class by Germanium itself.
Germanium uses "default" for the switch_to_default_content.
The default implementation is:
class DefaultIFrameSelector(object):
    """
    An implementation of the IFrameSelector strategy that does nothing.
    """
    def select_iframe(self, germanium, iframe_name):
        if iframe_name != "default":
            raise Exception("Unknown iframe name: '%s'. Make sure you create an IFrame Selector "
                            "that you will pass when creating the GermaniumDriver, e.g.:\n"
                            "GermaniumDriver(wd, iframe_selector=MyIFrameSelector())")
        germanium.switch_to_default_content()
        return iframe_name
This can easily be changed so that, depending on the iframe_name, it will do a switch_to_frame on the germanium object:
class EditorIFrameSelector(object):
    def select_iframe(self, germanium, iframe_name):
        # minimal completion (assumed body): switch into the editor frame
        # for "editor", otherwise fall back to the default content
        if iframe_name == "editor":
            germanium.switch_to_frame("editor")
        else:
            germanium.switch_to_default_content()
        return iframe_name
In case you don’t want a full class, you can also pass a callable that will be invoked with two parameters, germanium and iframe_name:
def select_iframe(germanium, iframe_name):
    # minimal completion (assumed body), mirroring DefaultIFrameSelector
    if iframe_name == "editor":
        germanium.switch_to_frame("editor")
    else:
        germanium.switch_to_default_content()
    return iframe_name
So when invoking the GermaniumDriver one can:
GermaniumDriver(web_driver, iframe_selector=select_iframe)
screenshot_folder
The folder where to save the screenshots whenever take_screenshot is called. It defaults to "screenshots", so basically a local folder named screenshots in the current working directory.
scripts
A list of files with JavaScript to be automatically loaded into the page whenever get(), reload_page() or wait_for_page_to_load() is done.
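For example (the file names here are illustrative):
GermaniumDriver(web_driver, scripts=["js/jquery.min.js", "js/test-helpers.js"])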
germanium.S(selector, strategy?)
S stands for the super locator, and returns an object that can execute a locator in the current iframe context of germanium. The letter S was chosen since it looks very similar to jQuery’s $.
The first parameter, the selector, can be any of the selector objects from the germanium.selectors package, or a string that will be further interpreted on what selector will be used.
For example to find a button you can either:
germanium.S(Button('OK'))
or using a CSS selector:
germanium.S("input[value'OK'][type='button']")
or using a specific locator:
# implicit strategy detection, will match XPath, due to the // start
germanium.S("//input[@value='OK'][@type='button']")
# or explicit in-string strategy:
germanium.S("xpath://input[@value='OK'][@type='button']")
# or explicit strategy:
germanium.S("//input[@value='OK'][@type='button']", "xpath")
The selectors approach is recommended, since a selector find will match either an html input element of type button, or an html button element that has the label OK.
The S locator is not itself a locator but rather a locator strategy. Thus the S locator will choose:
if the searched expression starts with //, then the xpath locator will be used;
# will find elements by XPath germanium.S('//*[contains(@class, "test")]');
else the css locator will be used.
# will find elements by CSS germanium.S('.test')
The S function call will return an object that is compatible with the static wait_for command.
germanium.js(code), germanium.execute_script(code)
Execute the given JavaScript, and return its result.
germanium.js('return document.title;')
germanium.take_screenshot(name)
Takes a screenshot of the browser and saves it in the configured screenshot folder.
# will save a screenshot as `screenshots/test.png`
germanium.take_screenshot('test')
germanium Instance Attributes
Currently there is only one attribute, namely the iframe_selector, which allows changing the current iframe selection strategy for the given instance.
As in the constructor, it supports both the class, or the callable as values for assignment.
def new_iframe_selector(germanium, iframe_name):
    # ...

old_iframe_selector = get_germanium().iframe_selector
get_germanium().iframe_selector = new_iframe_selector
This is useful for reusing the Germanium instance across tests, without the need to recreate it just because you need another iframe_selector strategy.
germanium Utility Functions
Utility functions for Germanium instances.
type_keys_g(germanium, keys_typed, element=None, delay=0)
Type the current keys into the browser, optionally specifying the element to send the events to, and/or delay between keypresses.
type_keys_g(germanium, "send data<cr>but <!shift>not<^shift> now.")
Special keys such as ENTER are available by just escaping them in < and > characters, e.g. <ENTER>. For example, to send TAB TAB ENTER one could type:
type_keys_g(germanium, "<tab*2><enter>")
In order to start pressing a key, and release it later while still typing other keys, the ! and ^ symbols can be used.
For example to type some keys with SHIFT pressed this can be done:
type_keys_g(germanium, "<!shift>shift is down<^shift>, and now is up.")
click_g(germanium, selector)
Perform a single click mouse action.
click_g(germanium, Button("Cancel").below(Text("Delete file?")))
double_click_g(germanium, selector)
Perform a double click mouse action.
double_click_g(germanium, "a.test-label")
right_click_g(germanium, selector)
Perform a mouse right click. Also known as a context menu click.
right_click_g(germanium, webdriver_element)
select_g(germanium, selector, text=None, *argv, value=None, index=None)
Select one or more elements in an HTML <select> element. Can select the elements by text values, actual values inside the <option>, or by index.
select('select#country', value='at')
select('select#multivalueSelect', index=[1, 3, 7, 8])
deselect_g(germanium, selector, text=None, *argv, value=None, index=None)
Deselects one or more elements in an HTML <select> element. Can deselect the elements by text values, actual values inside the <option>, or by index.
deselect('select#multivalueSelect', index=[7, 8])
get_attributes_g(germanium, selector)
get_value_g(germanium, selector)
Returns the current value of the element matched by the selector. Normally for inputs it’s just the string value.
In case the selector matches a multiple select, it will return an array with the values that are currently selected.
assert get_value_g(germanium, 'select#multivalueSelect') == [1, 3]
get_text_g(germanium, selector)
Returns the current text of the element matched by the selector. This will work also for WebElement instances that are passed as selector values, even if they are not visible.
Source: http://www.germaniumhq.com/documentation/
6 reasons people don't work in the open
Why work in the open?
What prevents you from working in the open?
I work for an open source company on an open source project, and still I see on a daily basis that people who are working on open source software prefer, from time to time, to work in private. They do not discuss technical questions on public mailing lists, the normal chat goes on in internal chat rooms instead of public IRC, and new features are demoed on private video conference channels rather than as, e.g., a Hangout on Air.
There are of course good reasons that communication has to be private: if customers or customer data are involved, then (especially for a public company) private conversations are needed. The same goes for sales numbers or other financial aspects. The reality, though, is that engineers most of the time don't hear any sales numbers. And most customer cases are just general software problems that can be openly discussed when not mentioning the customer name. Which brings me to the question of why people are not collaborating in the open, especially when the resulting source code is published to a public repository like GitHub.
Fear of leakage
When you are working with customers, there are always groups like customer support that cannot work in the open, which means they do not join public IRC channels but private ones, to keep customer data private. Engineers working on such a customer case may always have a latent fear that discussing case details may leak customer data, and discussions about them always happen on internal channels, no matter whether any customer-specific issue is involved or not. Which brings me to the next point.
The effort of selection and switching
When you work on internal channels for customer-related data (and because the respective colleagues are only available that way), you have to constantly decide whether a certain message can go to public channels or not. One can understand that such case comments are made on the internal channels, and to keep a coherent stream of thought they should stay on the internal channel. Talks about other, non-customer-related issues should by contrast go to the public channels, which means switching channels all the time. Such selection and switching of channels is certainly an effort which people try to avoid.
Uncertainty
There may also be cases where engineers are working on a feature request or bug from a customer that is not specific to that customer at all. And still there is a lingering uncertainty about whether the work can be discussed in the open or not. So when in doubt, the discussion happens in private again.
Fear of distraction
While community contribution is (said to be) always welcome, it can also be seen as a source of distraction. A core member of a project possibly has to interrupt her work, start thinking from the point of view of the community member, and then get back into the original work. This can be distracting from time to time, especially when you have time-boxed releases, so it may not be welcome.
Perceived lack of benefit
Our project has a rather small community, which may bring up the question: why discuss things in public if no one seems interested? Why go through all the above hoops? Does a tree really fall in the woods if there is no observer?
Lack of self-confidence
Another possible reason may be a fear of accountability and traceability. This may sound funny at first, as the source code ends up in the public repository anyway. The underlying cause here may be a lack of self-confidence. Discussions in public that are recorded as chat logs, videos from feature demos, or blog posts allow others to give critical feedback, and that can make a person feel unconfident.
I am a firm believer that working in the open is good. Even with a small community. Doing work in the open allows you to get input from community members, which enriches the knowledge of the problem domain. Others give a totally different perspective and probably list use cases you have never thought of. Also if community members are included in the overall community, they feel better and start contributing more. For ideas on how to contribute in ways other than code, see my article: 10 ways to contribute to an open source project without writing code.
My encounters with community members have been very good, as everyone is very friendly and meeting community members in person is always fun. So what can we do to overcome the obstacles I've listed above?
- First, set up a policy that all communication has to be on public channels by default unless there are good reasons against it.
- Remind yourself to start the day on public channels. When you start there, it sets a sort of default for yourself. And it encourages others to chat there as well, building the critical mass needed to get talks going.
- Remind people from time to time about the above policy.
- Dissociate customer information from the technical information as soon as possible, perhaps directly on the level of the support person. A NullPointerException is an NPE no matter whether it is reported by a paying customer or a community member.
- Record public channels so that community members get the possibility to re-read what was discussed.
- If you need to work quietly, then don't shut out only the external community. Switch external comms off, do your work and then come back; a distraction is a distraction, no matter whether it comes from a community person or from a co-worker.
- If you feel uncomfortable writing (e.g. a blog post explaining your work or a new feature), first pass it by another person you trust. As long as you genuinely try to deliver a good result, there are no bad posts. The greater community always appreciates any additional information.
1 Comment
I've been doing a lot of open source stuff since the 90s, from contributing to various projects to running my own.
A few years ago I wrote a library that saw quite some use. I published the library through its own GitHub group, but had a public fork of it that I was working on. The fork was even marked in its Readme.md as "this is an unstable fork, and everything you see here is work in progress; do not use this, use the stable version from link-to-correct-repository".
People didn't listen; they cloned my fork, ran into problems and incompatibilities, and started creating tons of tickets for stuff that wasn't there yet, so I ended up spending a lot of time answering tickets rather than progressing with that library.
The IRC channel I had linked at the time showed a similar picture: people started complaining there, or asking about stuff that wasn't even supposed to be used by them yet.
When I decided to refactor the library to be compliant with the new PSRs in PHP, and to use PHP namespaces and such, I once again forked it and started working on it. At first it was OK, but after a while, when I still hadn't done any release, people started to send me pull requests (which in itself is good), demanding that while I was at it, I would do whatever they wanted changed. One of those guys started a discussion on IRC about something where he felt certain classes should be named differently. It started with him hinting at it, then that hinting became demands, and at some point, even though I had spent a lot of time explaining why I named it this way and was sticking with the name (and that discussion cost me over an hour), he sent me a pull request with all names changed, and declared I'd have to merge it in, as I gave my code to the community, so now I wouldn't own it anymore and couldn't make such decisions on my own.
My reaction? I paid GitHub and set the repository to private, then I told him to f*** off.
You are right, working in public can be beneficial, but putting stuff you work on out in public too early is counterproductive; you end up with too many chefs who all want to decide on what soup you are cooking. In my experience it is better to do the work first, and then iterate over it with community input (no matter whether PRs or comments), as people will be able to base their discussions on something working, rather than on assumptions. It also allows you to take the time to take care of your work, rather than spending most of the time in (often pointless) discussions.
Yes, it is important to talk to your community, and it is important to be as transparent as possible about what you are doing. But that doesn't mean that you need to do it all in the open, and it doesn't mean that you should sacrifice your valuable time trying to please everyone. After all, you are doing free work for them.
Source: https://opensource.com/business/14/10/why-work-open
liform-react: a form generator from JSON schema released:
Our needs were quite specific, as we were writing quite a long “Wizard” form, so we wanted it to be flexible enough to accommodate this usage. We also wanted:
- Integration with redux-form, a great form library that allows managing the form state in Redux in a sane way.
- To be able to customize the widgets and the form itself extensively.
- To integrate JSON schema validation, with ajv.
There are other generators out there, perhaps the most popular being Mozilla’s react-jsonschema-form, but it lacked some of our requirements, so we wrote our own.
How to use it?
import React from 'react'
import { createStore, combineReducers } from 'redux'
import { reducer as formReducer } from 'redux-form'
import { Provider } from 'react-redux'
import Liform from 'liform-react'

const reducer = combineReducers({ form: formReducer })
const store = createStore(reducer)

const schema = {
  'type': 'object',
  'properties': {
    'title': { 'type': 'string', 'title': 'Title' },
    'type': { 'enum': ['One', 'Two'], 'type': 'string', 'title': 'Select a type' },
    'color': { 'type': 'string', 'widget': 'color', 'title': 'In which color' },
    'checkbox': { 'type': 'boolean', 'title': 'I agree with your terms' }
  }
}

// render the form from a small wrapper component
const DemoForm = () => (
  <Provider store={store}>
    <Liform schema={schema} onSubmit={(v) => { console.log(v) }} />
  </Provider>
)
And this will produce this form.
As you can see, the default theme is written in Bootstrap. This is not the theme we actually use, ours is quite specific. But you aren’t in any way tied to Bootstrap here, and you are more than welcome to write your own widgets and entire themes if you wish so.
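If you do, the general shape is roughly the following. This is only a sketch; the exact theme and widget API (including the prop names) should be checked against the liform-react docs:

import React from 'react'

// Hypothetical custom widget: assumes redux-form field props are passed in,
// so spreading `field.input` wires up value/onChange.
const ColorWidget = (field) => (
  <input type="color" className="my-color-picker" {...field.input} />
)

// Assumed theme shape: a plain object mapping widget names to components,
// passed to Liform via a `theme` prop, e.g. <Liform schema={schema} theme={myTheme}/>.
const myTheme = { color: ColorWidget }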
Check out the docs to learn more about its features and how to use it!
Enjoy!
PS: If you happen to work with PHP you may have a look at this other post, about generating JSON schema from Symfony forms, as we have also written a library for that.
Source: http://nacho-martin.com/liform-react-form-generator-json-schema.html
Java: Read username, password and port from a properties file
In this section, you will learn how to read the username, password and port number from a properties file and display data from the database. To load the data from the properties file, we have used the Properties class. This class allows data to be saved to a stream or loaded from a stream using a properties file. In the properties file, data is stored as key-value pairs in the form of strings.
load() - This method reads a property list (key and element pairs) from the input stream.
getProperty() - This method searches for the property with the specified key in the properties file.
data.properties file
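The contents of the file are simple key-value pairs; for a local MySQL instance they might look like this (the values are illustrative):
username=root
password=secret
port=3306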
Here is the code:
import java.io.*;
import java.sql.*;
import java.util.*;

class UsePropertiesFile {
    public static void main(String[] args) throws Exception {
        // load the key-value pairs from data.properties
        Properties prop = new Properties();
        prop.load(new FileInputStream("data.properties"));
        String user = prop.getProperty("username");
        String pass = prop.getProperty("password");
        String port = prop.getProperty("port");

        // connect to the 'test' database using the values read above
        Class.forName("com.mysql.jdbc.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:" + port + "/test", user, pass);
        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery("Select * from data");
        while (rs.next()) {
            System.out.println(rs.getString(1) + " " + rs.getString(2));
        }
    }
}
Source: http://www.roseindia.net/tutorial/java/core/propertiesFile.html
On Mon, Jul 11, 2011 at 10:03:24PM +0200, shuerhaaken wrote:
> > That's what Suggests/Recommends are made for. E.g. Rhythmbox Recommends
> > rhythmbox-plugins, quodlibet Suggests quodlibet-plugins... From Policy §7.2
> > "The Recommends field should list packages that would be found together with
> > this one in all but unusual installations": seems what you need.
> >
> > My suggestion was mainly based on personal preference (I like to choose if I
> > actually want the plugins or not, if they are not strictly needed) and on
> > what other packages do. It is not a requirement, hence you can do whatever
> > you want.
>
> If that's the Debian way, I will just leave it like that and ship an
> extra package for xnoise-plugins

Ok, there are a couple of issues though:
- you should append something like " (plugins)" to the short description so
  that it does not duplicate the description of the main package.
- the -plugins package right now Depends on various -dev packages. I do not
  think they are needed. Also, libcairo-dev is duplicated in Build-Depends.

> > Btw, the package didn't show up on mentors.d.n yet, hence I couldn't check
> > the modifications you did (it may just be a problem of mentors.d.n... it
> > happens sometimes).
>
> Hmm. I uploaded and it was visible here:
> ;package=xnoise
> But I also couldn't get it via repo. So I'll upload again in a few
> minutes.

I can see it now.

> > Final suggestion (I forgot to say this in the previous email), you may want
> > to join the Debian Multimedia Team [0] to maintain this package (it
> > would be easier for you to find an uploader and some help to maintain the
> > package). Please have a look at our policies [1] (maintain the package on
> > git using git-buildpackage, "Debian Multimedia Team" is the Maintainer and
> > you the Uploader, ...). If you are interested and ok with our workflow feel
> > free to subscribe and post this RFS to the team's mailing list [2] (you will
> > also need an account on alioth.debian.org).
>
> I'll have a look into that. Hope they don't just pull out the current
> git version instead of a release.

Not sure I've understood, but the git repository is only used to keep track of
the Debian-related modifications (those under "debian/"), and, additionally,
it holds a convenience copy of the upstream sources (imported from the
upstream tarball) to ease building. Have a look at for some examples (our
packages are those under the pkg-multimedia namespace).

Cheers
--
perl -E'$_=q;$/= @{[@_]};and s;\S+;<inidehG ordnasselA>;eg;say~~reverse'
Source: https://lists.debian.org/debian-mentors/2011/07/msg00333.html
. I got spyder working on ubuntu 11.04,
running the summerfield examples .
. everything was installed by the
ubuntu software center .
3.6: pos: ide should be idle:
(mark summerfield`rapid gui programming with python and qt)
. he said eric would be too complicated
for a beginner -- that was back in 2008;
I wonder what he would have thought of spyder ?
3.6: mis.cyb/xu.eric:
. when I first opened xu.eric4,
it asked me to do config,
but then closed unexpectedly
and complained of
Qstring being undefined .
. I'm following this eric4 tutorial
and it says new-project should offer
some vcs choices, at least, svn,
but it shows nothing .
. this is likely due to not doing config right;
this tutorial is not going into config options .
3.7: Eric
Eric is complex. Indeed, confusing.
3.6: Spyder (Scientific PYthon Development EnviRonment):
. a simple and light-weight, cross-platform,
powerful interactive development environment
with advanced editing, interactive testing,
debugging and introspection features
and a numerical computing environment
thanks to the support of IPython gui .
. see the news group .
. Spyder's code base .
3.7: Spyder 2009:
. introspection-based code completion
and integrated debugger .
Free open-source scientific Python environment
providing MATLAB-like features:
console with variable browser, sys.path browser,
environment variables browser,
integrated plotting features,
autocompletion and tooltips
- editor with syntax highlighting, class/function browser,
pyflakes/pylint code analysis,
inline find/replace and search in files features,
code completion and tooltips.
100% pure Python, part of Python(x,y).
regebro critique of Spyder 2010:
. no actual sort of project management .
3.20: spyder/ipython 0.12 running in spyder 2.1.8 on ubuntu:
Pierre Raybaut Mar 19 (1 day ago):
. our support for Ipython 0.12 is quite poor.
They changed a lot of things and
we haven't had time to catch up,
in part because our support for ipython 0.10
is quite good, and we haven't needed it.
The bad news first:
. plotting doesn't work in the ipython console
for any version >= 0.11, sorry.
I'll try to look into this
to see if I can make it work again
without losing all our other features
(like code completion).
Don't wait it though until Spyder 2.2.
The good news:
. I discovered a fix to the
"ValueError: API 'QString' has already been set to version 1".
Re: [spyder] How to get ipython 0.12 running in spyder 2.1.8 on ubuntu
FYI, IPython 0.12 is now working with
Spyder's latest Mercurial changesets
(meaning that both v2.1.9 and v2.2.0 will
support recent IPython versions again).
Note that IPython 0.11+ support is still experimental
-- and it's only working in the IPython plugin
(*not* in the Console plugin).
The Console plugin supports only IPython 0.10, and
I doubt that we'll ever be able to
add support for IPython 0.11+ .
3.7: Python(x,y)
. this is not an ide, they recommend spyder or ipython .
. a free scientific and engineering development software for
numerical computations, data analysis
and data visualization
based on Python programming language,
Qt graphical user interfaces and
Spyder interactive scientific development environment.
. do parallel computing on multicore/processors computers
or even clusters (with Parallel Python),
With Spyder development environment .
ipython:
qt with ipython .
. IPython now features Notebook
. Notebook is a major milestone, like Mathematica, or Sage,
but based on the ZeroMQ
asynchronous message queuing library .
. Researcher raves about IPython combination:
Numpy/Scipy for numerical operations,
Cython for low-level optimization,
IPython for interactive work, and MatPlotLib .
. but for newcomers to scientific programming
he recommends trying Sage first .
3.7: NinjaIDE:
Ninja IDE is written in Python using QT.
summary:
NINJA-IDE (from: "Ninja Is Not Just Another IDE") .. a review .
Python + PyQt + (Linux/Windows/Mac OS X)
. one of the IDEs with introspection-based code completion
/or/ and integrated debugger
. ninja's forum .
3.7: PythonToolkit (PTK)
. introspection-based code completion
and integrated debugger .
An interactive environment for python built around
a matlab style console window and editor.
It was designed to provide a python based environment
similiar to Matlab for scientists and engineers
however it can also be used as a general purpose
interactive python environment
especially for interactive GUI programming.
Features include:
Multiple independent python interpreters.
Interactively program with different GUI toolkits
(wxPython, TkInter, pyGTK, pyQT4 and PySide).
Matlab style namespace/workspace browser.
Object auto-completions, calltips and
multi-line command editing in the console.
Object inspection and python path management.
Simple code editor and integrated debugger.
. written in wxPython
Added PySide (Qt4) engine .
3.7: MonkeyStudio:
Python+PyQt4 RAD IDE,
that includes an integrated QtDesigner .
3.6: eclipse with a python plugin and pyqt support:
. eclipse is rather heavy, so I'll save this option for last .
3.7: web: PyScripter -- not for qt;
is Windows only.
Source: http://amerdreamdocs.blogspot.com/2012/03/comparing-ides-devpyqt.html
Revision history for the Perl binding of libcurl, WWW::Curl.

4.17 Fri Feb 21 2014: - Balint Szilakszi <szbalint at cpan.org>
- Fixing build process for old libcurl versions without CURLOPT_RESOLVE.
- License is now MIT only.

4.16 Thu Feb 20 2014: - Balint Szilakszi <szbalint at cpan.org>
- Support for CURLOPT_RESOLVE (an slist option) [Theo Schlossnagle]
- Fixing t/19multi.t test failures when using a threaded resolver for libcurl.
- Improved constant parsing when using ISO-compliant CPP. [tsibley]

4.15 Sun Nov 28 2010: - Balint Szilakszi <szbalint at cpan.org>
- Refactored constant handling and added thorough testing for it.
- Fixed CURLOPT_PRIVATE, it is now a string and can be set/get accordingly.

4.14 Sun Oct 24 2010: - Balint Szilakszi <szbalint at cpan.org>
- Scalar references can now be used to receive body/header data [gfx].
- Speed optimizations for threaded perl. [gfx, szbalint]
- Added a more generic libcurl constant detection.
- Added the pushopt method for appending strings to array options.
- Documentation improvements.

4.13 Wed Sep 01 2010: - Balint Szilakszi <szbalint at cpan.org>
- Fixed WWW::Curl::Form (again, formadd and formaddfile working now).
- Made constant handling more robust and added tests [Fuji, Goro].
- Modernized *.pm and AUTOLOAD now throws an error on unknown method calls [Fuji, Goro].
- Fixed code depending on CURLINFO_SLIST to be optional [Fuji, Goro].

4.11 Fri Dec 18 2009: - Balint Szilakszi <szbalint at cpan.org>
- Fixed t/19multi.t for libcurl versions compiled with asynchronous dns resolution.

4.10 Fri Dec 18 2009: - Balint Szilakszi <szbalint at cpan.org>
- Added support for CURLINFO_SLIST in getinfo (patch by claes).
- Merging documentation fixes upstream from the FreeBSD port (thanks Peter).
- Added support for curl_multi_fdset.

4.09 Thu Jul 09 2009: - Balint Szilakszi <szbalint at cpan.org>
- Fixing broken version check.

4.08 Tue Jul 07 2009: - Balint Szilakszi <szbalint at cpan.org>
- Fixed a memory leak in setopt.
- Added a check to Makefile.PL for the minimum libcurl version.
- Mentioned WWW::Curl hosting on github.
- Upgraded bundled Module::Install to 0.91.

4.07 Sun May 31 2009: - Balint Szilakszi <szbalint at cpan.org>
- Fixed >32bit integer option passing to libcurl on 32bit systems.
  (Thanks to Peter Heuchert for the report and fix suggestion!)
- The CURL_CONFIG environment variable can now be used to specify which curl-config to use (contributed by claes).
- Fixed segfault when a string option with setopt was set to undef (contributed by claes).
- Fixed incomplete cleanup routine at destruction time (contributed by claes).
- Readded Easy.pm and Share.pm stubs so that they are indexed by CPAN, thus avoiding complications with outdated versions appearing.

4.06 Sun Apr 05 2009: - Balint Szilakszi <szbalint at cpan.org>

2.00 Tue Apr 22 2003: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- New top level package name of WWW::Curl in preparation for entry to CPAN
- Rename "Curl::easy" to "WWW::Curl::easy"
- Add backwards compatibility namespace module for existing scripts
- Implement initial curl_easy_duphandle support
- Started on curl_easy_form support (WWW:Curl::form) - NOT FUNCTIONAL YET
- Fixup use of env vars in t/07ftp-upload.t (jellyfish at pisem.net)
- Adjust IP addresses for t/08ssl.t tests due to moved https servers

1.35 Sun Sep 22 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Fixed progress function callback prototype [ curl-Bugs-612432 ], reflecting the fix made in curl-7.9.5. Tested in t/05progress.t to now return sensible values!

1.34 Wed Aug 7 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Fix off-by-one error in setting up of curl_slists from perl arrays, which was causing the last item of slists to be dropped. Added regression test case.

1.33 Mon Aug 5 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Fix serious bug in read callback support (used for POST and upload requests), introduced in 1.30, which uploaded random data (due to a reversed src/dest in a memory copy).

1.32 Thu Aug 1 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Minor Makefile.PL fixes to build cleanly with curl 7.8 as found on redhat 7.2.

1.31 Tue Jul 16 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Generate better switch() statement syntax in C code, to fix build issues on some systems with strict compilers. Reported by Ignasi Roca.

1.30 Mon Jul 15 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Testing release after complete code overhaul. Now supports cleaner object interface, supports multiple handles per process, uses PerlIO for portable I/O (which should be perl 5.8 ready) and maybe even supports ithreads. Should be fully backwards compatible, but please read the man page for change details and report any issues.
- Fixed warning caused by slist functions accessing past the end of the perl array.
- Fixed leak caused by consuming slist arguments without freeing.
- Updated test scripts to OO style, cleaned up output.
- Deprecated USE_INTERNAL_VARS.

1.21 Thu Jul 11 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Minor fixes to assist windows builds from Shawn Poulson
- Allow passing curl include location on the command line when running perl Makefile.PL

1.20 Sat Feb 16 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Use standard perl module numbering syntax (valid decimal)
- Skipped 1.10 in case anyone confuses it with 1.1.0
- Made every build a rebuild and removed 'pre-built' files - no point worrying about not finding curl.h - if we can't find it, we can't compile anyway. Obviates bug in 1.1.9 preventing rebuilds.
- Add support for redefining CURLOPT_STDERR (file handle globs only!)

1.1.9 Sat Dec 8 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Enhance Makefile.PL to re-build easy.pm and 'constants' xs function from local installed curl.h. CURLOPT_ and CURLINFO_ constants up-to-date for libcurl-7.9.2, but can be re-built for almost any libcurl version by removing easy.pm and curlopt-constants.c and re-running 'perl Makefile.PL'
- Use curl-config to find include and library compile options
- Updated test scripts to work better under 'make test' (You need to set the environment variable 'CURL_TEST_URL' though!)
- Added test script to display transfer times using new time options
- Merge changes in Curl_easy 1.1.2.1 by Georg Horn

1.1.8 Thu Sep 20 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Re-generate CURLOPT_ constants from curl.h and enhance makefile to allow this to be repeated in future or for older versions of libcurl. Constants up-to-date for libcurl-7.9(pre)
- Split tests into t/*.t to simplify each case
- Add test cases for new SSL switches. This needs ca-bundle.crt (from mod_ssl) for verifying test cases.

1.1.7 Thu Sep 13 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Documentation update only - explicitly state that Curl_easy is released under the MIT-X/MPL dual licence. No code changes.

1.1.6 Mon Sep 10 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Fix segfault due to changes in header callback behaviour since curl-7.8.1-pre3

1.1.5 Fri Apr 20 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Add latest CURLOPT_ and CURLINFO_ constants to the constants list

1.1.4 Fri Apr 20 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Fix case where curl_slists such as 'HTTPHEADERS' need to be re-set over persistent requests

1.1.3 Wed Apr 18 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com>
- Change/shorten module function names: Curl::easy::curl_easy_setopt becomes Curl::easy::setopt etc. This requires minor changes to existing scripts...
- Added callback function support to pass arbitrary SV * (including FILE globs) from perl through libcurl to the perl callback.
- Make callbacks still work with existing scripts which use STDIO
- Initial support for libcurl 7.7.2 HEADERFUNCTION callback feature
- Minor API cleanups/changes in the callback function signatures
- Added Curl::easy::version function to return curl version string
- Callback documentation added in easy.pm
- More tests in test.pl

1.1.2 Mon Apr 16 2001: - Georg Horn <horn at koblenz-net.de>
- Added support for callback functions. This is for the curl_easy_setopt() options WRITEFUNCTION, READFUNCTION, PROGRESSFUNCTION and PASSWDFUNCTION. Still missing, but not really necessary: passing a FILE * pointer, that is passed in from libcurl, on to the perl callback function.
- Various cleanups, fixes and enhancements to easy.xs and test.pl.

1.1.1 Thu Apr 12 2001:
- Made more options of curl_easy_setopt() work: options that require a list of curl_slist structs to be passed in, like CURLOPT_HTTPHEADER, are now working by passing a perl array containing the list elements. As always, look at the test script test.pl for an example.

1.1.0 Wed Apr 11 2001:
- tested against libcurl 7.7
- Added new function Curl::easy::internal_setopt(). By calling Curl::easy::internal_setopt(Curl::easy::USE_INTERNAL_VARS, 1); the headers and content of the fetched page are no longer stored into files (or written to stdout) but are stored into internal variables $Curl::easy::headers and $Curl::easy::content.

1.0.2 Tue Oct 10 2000:
- runs with libcurl 7.4
- modified curl_easy_getinfo(). It now calls curl_getinfo() that has been added to libcurl in version 7.4.

1.0.1 Tue Oct 10 2000:
- Added some missing features of curl_easy_setopt():
  - CURLOPT_ERRORBUFFER now works by passing the name of a perl variable that shall be created and the error message (if any) be stored to.
  - Passing filehandles (Options FILE, INFILE and WRITEHEADER) now works. Have a look at test.pl to see how it works...
- Added a new function, curl_easy_getinfo(), that for now always returns the number of bytes that were written to disk during the last download. If the curl_easy_getinfo() function is included in libcurl (as promised by Daniel ;-)) I will turn this into just a call to this function.

1.0 Thu Oct 5 2000:
- first released version
- runs with libcurl 7.3
- some features of curl_easy_setopt() are still missing:
  - passing function pointers doesn't work (options WRITEFUNCTION, READFUNCTION and PROGRESSFUNCTION).
  - passing FILE * pointers doesn't work (options FILE, INFILE and WRITEHEADER).
  - passing linked lists doesn't work (options HTTPHEADER and HTTPPOST).
  - setting the buffer where to store error messages in doesn't work (option ERRORBUFFER).
Source: https://metacpan.org/changes/distribution/WWW-Curl
.NET Core and SQL Server in Docker - Part 1: Building the Service
In the past, getting an ASP.NET app up and running in the cloud would be nearly impossible. In this series, we take a look at how to bring such an animal to life.
Traditionally, ASP.NET web applications deployed on Windows Server and IIS are not known for being cloud friendly. That is now a thing of the past, thanks to the open source, cross-platform .NET Core and SQL Server for Linux. When combined with Docker and a container management platform (such as Kontena), it is now possible to run high performance .NET services in the cloud with the same ease that Ruby, Python and Go developers are used to.
Getting Started
First things first, we need to install our development tools. For this tutorial, we will be using a mix of a text editor and the command line. For editing cross-platform C# projects, I highly recommend Microsoft's lightweight Visual Studio Code. For the command line, I will be assuming a Bash shell. Bash is the default shell for the macOS terminal and is now also available on Windows starting with Windows 10.
For .NET Core, this tutorial assumes you are using a minimum version of 1.0.4.
Finally, you need to install Docker. Docker runs natively on Linux, but there are integrated VM solutions available for macOS and Windows (Windows 10 or later only, older versions of Windows should use a VM).
Creating The .NET Project
From the terminal, we are going to create a new project directory and initialize a new C# webapi project:
$ mkdir dotnet-docker-tutorial
$ cd dotnet-docker-tutorial
$ dotnet new webapi
Next, let's restore our NuGet dependencies and run our API:
$ dotnet restore
$ dotnet run
And finally, in a second terminal window, let's test out the API with curl:
$ curl
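The target URL of that command was lost during extraction; given the default webapi template and Kestrel's default port, the call was presumably something like:
$ curl http://localhost:5000/api/values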
One change you will also want to make is to register the service to run on hostnames other than localhost. This is important later when we run our service inside of Docker. Open up Program.cs and modify the startup code:
var host = new WebHostBuilder()
    .UseUrls("http://*:5000")
    .UseKestrel()
    // etc
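The // etc above elides the rest of the builder chain. For a netcoreapp1.1 webapi template, the full Main presumably looks roughly like this (the namespace is inferred from the model classes below, so treat it as an assumption):
using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace Kontena.Examples
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // Bind to all interfaces (not just localhost) so the service
            // is reachable once it runs inside a Docker container.
            var host = new WebHostBuilder()
                .UseUrls("http://*:5000")
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}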
Adding SQL Server
Now it's time to add a database. Thanks to Docker and SQL Server for Linux, it's super fast and easy to get this started. From the terminal, let's download and run a new instance of SQL Server as a Docker container.
$ docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Testing123' -p 1433:1433 --name sqlserver -d microsoft/mssql-server-linux
That's all that's needed to have a SQL Server development database server up and running. Note that if you are running Docker for Windows or Docker for Mac, you need to allocate at least 4GB of RAM to the VM or SQL Server will fail to run.
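Not part of the original walkthrough, but if you want to confirm the container actually came up, standard Docker commands will tell you:
$ docker ps --filter name=sqlserver   # container should be listed as Up
$ docker logs sqlserver               # shows the SQL Server startup output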
Next, let's add a new API controller to our application that interacts with the database. First we need to add Entity Framework to our csproj file, which should look like this:
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.2" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="1.1.1" />
  </ItemGroup>
</Project>
Next, we create a new DbContext. In our Models folder, create a file ApiContext.cs and edit as follows:
using Microsoft.EntityFrameworkCore;

namespace Kontena.Examples.Models
{
    public class ApiContext : DbContext
    {
        public ApiContext(DbContextOptions<ApiContext> options)
            : base(options)
        {
            this.Database.EnsureCreated();
        }

        public DbSet<Product> Products { get; set; }
    }
}
Next is the model class. In the Models folder, create a file Product.cs, and create the Product model:
using System.ComponentModel.DataAnnotations;

namespace Kontena.Examples.Models
{
    public class Product
    {
        public int Id { get; set; }

        [Required]
        public string Name { get; set; }

        public decimal Price { get; set; }
    }
}
And finally, let's create a new API Controller. In the Controllers folder, create a file ProductsController.cs and add the following code:
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Kontena.Examples.Models;

namespace Kontena.Examples.Controllers
{
    [Route("api/[controller]")]
    public class ProductsController : Controller
    {
        private readonly ApiContext _context;

        public ProductsController(ApiContext context)
        {
            _context = context;
        }

        // GET api/products
        [HttpGet]
        public IActionResult Get()
        {
            var model = _context.Products.ToList();
            return Ok(new { Products = model });
        }

        [HttpPost]
        public IActionResult Create([FromBody]Product model)
        {
            if (!ModelState.IsValid)
                return BadRequest(ModelState);

            _context.Products.Add(model);
            _context.SaveChanges();
            return Ok(model);
        }

        [HttpPut("{id}")]
        public IActionResult Update(int id, [FromBody]Product model)
        {
            if (!ModelState.IsValid)
                return BadRequest(ModelState);

            var product = _context.Products.Find(id);
            if (product == null)
            {
                return NotFound();
            }

            product.Name = model.Name;
            product.Price = model.Price;
            _context.SaveChanges();
            return Ok(product);
        }

        [HttpDelete("{id}")]
        public IActionResult Delete(int id)
        {
            var product = _context.Products.Find(id);
            if (product == null)
            {
                return NotFound();
            }

            _context.Remove(product);
            _context.SaveChanges();
            return Ok(product);
        }
    }
}
This should be enough for us to provide a simple CRUD-style REST interface over our new Product model. The final step needed is to register our new database context with the ASP.NET dependency injection framework and fetch the SQL Server credentials. In the file Startup.cs, modify the ConfigureServices method:
public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    var hostname = Environment.GetEnvironmentVariable("SQLSERVER_HOST") ?? "localhost";
    var password = Environment.GetEnvironmentVariable("SQLSERVER_SA_PASSWORD") ?? "Testing123";
    var connString = $"Data Source={hostname};Initial Catalog=KontenaAspnetCore;User ID=sa;Password={password};";

    services.AddDbContext<ApiContext>(options => options.UseSqlServer(connString));
}
Note that we are pulling our SQL Server credentials from environment variables, defaulting to the values we used above when setting up our SQL Server container. In a production application, you would probably use the more sophisticated ASP.NET Core configuration framework and a SQL Server user other than "sa."
Testing Out Our API
Time to test out our new API. In your terminal window, restore and start up the API again:
$ dotnet restore && dotnet run
In another window, let's use curl to POST some data to our API:
$ curl -i -H "Content-Type: application/json" -X POST -d '{"name": "6-Pack Beer", "price": "5.99"}'
If all goes well, you should see a 200 status response, and our new Product returned as JSON (with a proper database-generated id).
Next, let's modify our data with a PUT and change the price:
$ curl -i -H "Content-Type: application/json" -X PUT -d '{"name": "6-Pack Beer", "price": "7.99"}'
Of course, we can also GET our data:
$ curl -i
And finally we can DELETE it:
$ curl -i -X DELETE
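The request URLs in the four curl commands above were lost when the article was extracted. Assuming the controller route from earlier, Kestrel's port 5000, and an id of 1 returned by the POST (the id is a guess), the full commands were presumably along these lines:
$ curl -i -H "Content-Type: application/json" -X POST \
       -d '{"name": "6-Pack Beer", "price": "5.99"}' http://localhost:5000/api/products
$ curl -i -H "Content-Type: application/json" -X PUT \
       -d '{"name": "6-Pack Beer", "price": "7.99"}' http://localhost:5000/api/products/1
$ curl -i http://localhost:5000/api/products
$ curl -i -X DELETE http://localhost:5000/api/products/1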
Putting It in Docker
Now that we have our service, we need to get it in Docker. The first step is to create a new
Dockerfile that tells Docker how to build our service. Create a file in the root folder called
Dockerfile and add the following content:
FROM microsoft/dotnet:runtime WORKDIR /dotnetapp COPY out . ENTRYPOINT ["dotnet", "dotnet-example.dll"]
Next, we need to compile and "publish" our application, and use the output to build a Docker image with the tag dotnet-example:
$ dotnet publish -c Release -o out
$ docker build -t dotnet-example .
And finally, we can run our new container, linking it to our SQL Server container:
$ docker run -it --rm -p 5000:5000 --link sqlserver -e SQLSERVER_HOST=sqlserver dotnet-example
You should be able to access the API via curl the same as we did earlier.
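For example (same assumed URL as before):
$ curl http://localhost:5000/api/products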
Next Time
In our next installment, we will show you how to take your new API and run the whole thing inside Docker. Then we will move those containers into the cloud with Kontena.
Accompanying source code for this tutorial can be found at.
Published at DZone with permission of Lauri Nevala, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
|
https://dzone.com/articles/net-core-and-sql-server-in-docker-part-1-building?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
When developing and deploying wireless applications and sensor networks, it is mandatory to assess the wireless link, to estimate the maximum range between two devices, and to prevent link loss due to rain, obstacles, fading, etc.
Normally you would need to include in your toolbox, besides a sniffer and a spectrum analyser, at least one transmitter and receiver, and pack a laptop to read and store the results, unless you purchase a kit properly prepared for this, like the CC2538dk or the CC1120dk, at roughly 250-300€.
As I grew tired of packing my laptop and having to rush open-field testing to cope with my discharging battery, I opted to prepare my own field test setup packing just the essentials, for a lightweight test spree.
The components
As I primarily work with RE-Mote platforms in both the 2.4GHz and 863-950MHz bands, I threw into the mix a commercial USB battery charger I normally use when travelling, and Sparkfun's LCD with RGB backlight.
The LCD works over I2C (the library was already ported to Contiki by me; see rgb-bl-lcd.c) and is powered at 5V (but luckily the I2C lines use 3.3V logic).
As the RE-Mote works using 3.3V logic but can be powered over USB at 5V, I powered the LCD from the RE-Mote over the D+5.1 pin (see above). The LCD I2C pins (SDA and SCL) are connected to the RE-Mote over the I2C.SDA and I2C.SCL pins as expected.
The battery charger is a USB battery with 6000mAh capacity, plenty of juice to last days of testing. The only drawback is having to either continuously press the power-on button or disable the RE-Mote's low-power operation; otherwise the battery "thinks" there's no connected device, as the power consumption is too low.
The result
Below is a video detailing the operation of the range test application.
When the application starts it will print information about the radio configuration. The current channel is displayed at the bottom row on the left, followed by the available channels and the transmission power (in dBm).
When the application starts it will start blinking the blue LED and instructions will be printed on the LCD.
Basically, long-pressing the user button (without releasing) toggles between the sender and receiver modes. The LCD backlight turns red when the receiver mode is configured, or green when the transmitter mode is selected.
When the user button is released the operation mode will be set (warning: if you need to change the operation mode, press the reset button and repeat). A single press on the user button will start the test.
I developed a simple application to wrap everything together, available as always in my Contiki GitHub fork; look for the field-test-lcd branch.
git clone cd contiki && git checkout field-test-lcd
The application lives at examples/zolertia/zoul/range-test.
If you look at the Makefile there are two compilation switches to consider:
ifdef WITH_LCD
  CFLAGS += -DLCD_ENABLED=$(WITH_LCD)
endif
ifdef WITH_SUBGHZ
  CFLAGS += -DRF_SUBGHZ=$(WITH_SUBGHZ)
endif
When compiling and programming the RE-Mote you can select either the 2.4GHz radio interface (CC2538) or the 868/915MHz radio (CC1200, if adding WITH_SUBGHZ=1). If you include the WITH_LCD=1 argument in the compilation it will include LCD display support; if you don't have an LCD display you can still see the results over USB by connecting to the RE-Mote with just a micro-USB cable and putty or the like.
# Configure for 868MHz band
make range-test.upload WITH_SUBGHZ=1 WITH_LCD=1

# Configure for 2.4GHz band
make range-test.upload WITH_LCD=1
And that's it! Have fun testing and enjoy the weather!
|
https://www.hackster.io/alinan/range-tests-made-easy-with-the-re-mote-and-lcd-6e78b3
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
ZooKeeper in Hadoop is an open source project developed by Apache. ZooKeeper provides a centralized infrastructure and related services that ensure synchronization across a cluster.
ZooKeeper is used to maintain common objects needed in large cluster environments. It is used to store data in a centralized location with great accessibility. ZooKeeper runs on a cluster of servers known as an ensemble that shares the state of data.
ZooKeeper comes with a command-line client (CLI) for an interactive user experience. The namespace in ZooKeeper is similar to a standard file system: a name is a sequence of path components separated by slashes ('/').
ZooKeeper in Hadoop has a hierarchical namespace, much like a distributed file system. Every node of the namespace is connected to each of its child nodes, and every node can act as both a file and a directory.
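A short zkCli session shows the slash-separated hierarchy in practice (the server address and paths here are made up for illustration; output trimmed):
$ zkCli.sh -server 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] create /app "cluster-config"
[zk: 127.0.0.1:2181(CONNECTED) 1] create /app/db "host=10.0.0.5"
[zk: 127.0.0.1:2181(CONNECTED) 2] ls /app
[db]
[zk: 127.0.0.1:2181(CONNECTED) 3] get /app/db
host=10.0.0.5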
ZooKeeper Uses in Apache Hadoop
- ZooKeeper is used by Apache Kafka to manage configurations.
- ZooKeeper keeps access control lists (ACLs) for all data topics that are maintained in ZooKeeper.
- ZooKeeper is used for maintaining centralized configuration information, naming, providing distributed synchronization, and providing group services.
Znodes in ZooKeeper
- Each node in ZooKeeper is called a znode, and each znode maintains a stat structure. The namespace consists of data registers known as znodes, which developers access when building on ZooKeeper (see the sketch after this list).
- Each znode has a timestamp associated with it. The version number and timestamp let ZooKeeper validate its cache and coordinate updates.
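As a sketch of such programmatic access, here is a minimal example using the third-party kazoo client for Python; the connection string and paths are assumptions:
from kazoo.client import KazooClient

# Connect to a ZooKeeper ensemble member (address is an assumption).
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Znodes hold small, kilobyte-sized blobs of metadata.
zk.ensure_path("/app")
zk.create("/app/config", b"db.host=10.0.0.5")

# Every read returns the data together with the znode's stat structure.
data, stat = zk.get("/app/config")
print(data, stat.version, stat.cversion, stat.aversion)

zk.stop()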
Features of Znodes
- Watches (one time triggers)
- Data Access
- Ephemeral Nodes
- Sequence Nodes (Unique Naming)
How Does ZooKeeper in Hadoop Track Time?
Version number
Whenever a change takes place in a node, a new version number is assigned. Version numbers come in three kinds:
- version – number of changes made to the data of a znode
- cversion – number of changes made to the children of a znode
- aversion – number of changes made to the ACL of a znode
Zxid
Every change to the ZooKeeper state carries a stamp in the form of a zxid (ZooKeeper Transaction Id), which exposes the total ordering of all changes. Each change has a unique zxid, and an earlier change has a smaller zxid.
Ticks
ZooKeeper servers use ticks to define the timing of events such as status uploads, session timeouts, and connection timeouts. If a client requests a session timeout smaller than the minimum, the server indicates that the session timeout is in fact the minimum session timeout.
ZooKeeper Data Storage
ZooKeeper in Hadoop is designed to store informative data, or simply metadata, such as status, configuration, and location information. This type of data is measured in kilobytes, so space is used effectively; only small bits of data are stored.
|
https://csveda.com/big-data/zookeeper-in-hadoop/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
From: Maxim Yegorushkin (e-maxim_at_[hidden])
Date: 2004-08-07 16:12:34
I am using boost::aligned_storage<> and I would like it to be aligned on a 32-byte boundary (the reason being that sizeof(my type) is 32 bytes and I want it aligned on the x86 cache line size so that it is cached most effectively). I am working now under VC7.1 and it seems like the maximum alignment I can get with the compiler is 8, which is the alignment of the largest built-in type, long double (or a pointer to member/function). But type_with_alignment.hpp seems to have special support for gcc in the form of:
namespace align {
struct __attribute__((__aligned__(2))) a2 {};
struct __attribute__((__aligned__(4))) a4 {};
struct __attribute__((__aligned__(8))) a8 {};
struct __attribute__((__aligned__(16))) a16 {};
struct __attribute__((__aligned__(32))) a32 {};
}
template<> class type_with_alignment<1> { public: typedef char type; };
template<> class type_with_alignment<2> { public: typedef align::a2 type; };
template<> class type_with_alignment<4> { public: typedef align::a4 type; };
template<> class type_with_alignment<8> { public: typedef align::a8 type; };
template<> class type_with_alignment<16> { public: typedef align::a16 type; };
template<> class type_with_alignment<32> { public: typedef align::a32 type; };
Are MSVC versions going to be supported, as they have the similar extension __declspec(align(x))?
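For reference, a sketch of what such MSVC support could look like, mirroring the gcc structs above with the __declspec(align(x)) extension. This is not actual Boost source, and since C++11 the portable alignas keyword makes the extension unnecessary:

namespace align {
    struct __declspec(align(2))  a2  {};
    struct __declspec(align(4))  a4  {};
    struct __declspec(align(8))  a8  {};
    struct __declspec(align(16)) a16 {};
    struct __declspec(align(32)) a32 {};
}

// Modern, compiler-independent spelling (C++11 and later):
struct alignas(32) cache_line_sized {
    char payload[32]; // one 32-byte cache line
};

static_assert(alignof(cache_line_sized) == 32, "expected 32-byte alignment");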
-- Maxim Yegorushkin
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
https://lists.boost.org/Archives/boost/2004/08/69907.php
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
> > I think this is the right fix.
>
> Please describe the reasons why you think this is the right fix.

menu-updating-buffers is defined in syms_of_xmenu (). Currently syms_of_xmenu is only called in emacs.c if HAVE_MENUS is true. menu-updating-buffers is needed even if Emacs is configured without X (on GNU/Linux at least), but in this case HAVE_MENUS is not defined. xmenu.c is needed even if HAVE_X_WINDOWS is not defined, so I've moved it outside the conditional requiring it.

> (I'm assuming you've read the discussions from 2004 that led to the
> original changes.)

I might not have followed it all, but your change seemed to cover Carbon Emacs, which it still does:

#ifndef HAVE_CARBON
XMENU_OBJ = xmenu.o
#endif

Now that I've moved it outside #ifdef HAVE_X_WINDOWS you might need to add another condition for when w32menu.o is used; I'm not sure.

> >.)

I didn't find the discussion that led to this change. It might have been part of a general tidying process.

> In addition, we need to explain why the OP says he started to see the
> problem only recently.

I've tried to explain that in another post: more calls to menu-updating-frame have been made in menu-bar.el (26/08/05).

> >.

I don't have an opinion on whether you or Kim were tricked, just that the description is misleading and that xmenu.c is needed even if HAVE_X_WINDOWS is not defined.

Nick
|
https://lists.gnu.org/archive/html/emacs-devel/2005-09/msg00197.html
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
FunctionShield
Serverless Security Library for Developers. Regain Control over Your Serverless Runtime.
How FunctionShield helps With Serverless Security?
- By monitoring (or blocking) outbound network traffic from your function, you can be certain that your data is never leaked
- By disabling read/write operations on the /tmp/ directory, you can make your function truly ephemeral
- By disabling the ability to launch child processes, you can make sure that no rogue processes are spawned without your knowledge by potentially malicious packages
- By disabling the ability to read the function’s (handler) source code through the file system, you can prevent handler source code leakage, which is oftentimes the first step in a serverless attack
Supports AWS Lambda and Google Cloud Functions
Get a free token
Please visit:
Install
$ pip install function-shield
Super simple to use
import os

import function_shield

function_shield.configure({
    "policy": {
        # "block" mode => active blocking
        # "alert" mode => log only
        # "allow" mode => allowed, implicitly occurs if key does not exist
        "outbound_connectivity": "block",
        "read_write_tmp": "block",
        "create_child_process": "block",
        "read_handler": "block"
    },
    "token": os.environ["FUNCTION_SHIELD_TOKEN"]
})

def handler(event, context):
    # Your Code Here
    pass
Logging & Security Visibility
FunctionShield logs are sent directly to your function’s AWS CloudWatch log group. Here are a few sample logs, demonstrating the log format you should expect:
// Log example #1:
{
    "details": {
        "host": "microsoft.com",
        "ip": "13.77.161.179"
    },
    "function_shield": true,
    "timestamp": "2019-06-19T09:08:00.455144Z",
    "policy": "outbound_connectivity",
    "mode": "block"
}

// Log example #2:
{
    "details": {
        "path": "/tmp/block"
    },
    "function_shield": true,
    "timestamp": "2019-06-19T09:08:00.422553Z",
    "policy": "read_write_tmp",
    "mode": "block"
}

// Log example #3:
{
    "details": {
        "arguments": ["uname", "-a"],
        "path": "/bin/uname"
    },
    "function_shield": true,
    "timestamp": "2019-06-19T09:08:00.469822Z",
    "policy": "create_child_process",
    "mode": "block"
}

// Log example #4:
{
    "details": {
        "path": "/var/task/handler.py"
    },
    "function_shield": true,
    "timestamp": "2019-06-19T09:08:00.433942Z",
    "policy": "read_handler",
    "mode": "block"
}
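Since the records are structured JSON, a CloudWatch metric filter pattern can pull them out from the command line; the log group name below is a placeholder for your function's group:
$ aws logs filter-log-events \
    --log-group-name /aws/lambda/my-function \
    --filter-pattern '{ $.policy = "outbound_connectivity" }'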
Reconfiguring FunctionShield
function_shield.configure can be called multiple times to temporarily disable one of the policies.
Note that you need to add an additional parameter cookie to any subsequent call to function_shield.configure.
import os

import function_shield
import requests

cookie = function_shield.configure({
    "policy": {
        "outbound_connectivity": "block",
        "read_write_tmp": "block",
        "create_child_process": "block",
        "read_handler": "block"
    },
    "token": os.environ["FUNCTION_SHIELD_TOKEN"]
})

def handler(event, context):
    ...
    function_shield.configure({
        "cookie": cookie,
        "policy": {
            "outbound_connectivity": "allow"
        }
    })
    r = requests.get("")
    function_shield.configure({
        "cookie": cookie,
        "policy": {
            "outbound_connectivity": "block"
        }
    })
    ...
Custom Security Policy (whitelisting)
Custom security policy is only supported with the PureSec SSP full product.
|
https://pypi.org/project/function-shield/2.0.12/
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
So basically, I've been trying to record the MIDI inputs from a piano keyboard of mine, which is connected to my soundcard's game port.
However, I have been having trouble actually getting any MIDI input of any kind (or so I think). I'm not very well versed in MIDI (or actually most of anything that's programming), so bear with me.
Hopefully it's just something silly that I didn't understand. Anyways, this is the code I have so far. I've been using this to get feedback on what kind of values everything gives out, but I just can't get MIM_DATA to be the value for uMsg (which is what I need in order to know that my computer is indeed reading my piano inputs).
For the midiInStart function, I will admit I'm not exactly sure what to do with it.
Code:
//Link winmm library
#include <windows.h>
#include <stdio.h>
#include <mmsystem.h>
#include <cstdlib>
#include <iostream>
#include <conio.h> /* include for kbhit() and getch() functions */

using namespace std;

void CALLBACK midiCallback(HMIDIIN handle, UINT uMsg, DWORD dwInstance, DWORD dwParam1, DWORD dwParam2)
{
    switch ( uMsg )
    {
    case MIM_OPEN:
        cout << "-----OPENED.-----" << endl;
        break;
    case MIM_CLOSE:
        cout << "-----EVERYTHING IS CLOSING.-----" << endl;
        break;
    case MIM_DATA:
        cout << "-----APPARENTLY THERE IS DATA.-----" << endl; //I'm hoping to see this line...
        break;
    case MIM_LONGDATA:
        cout << "-----LONGDATA'D.-----" << endl;
        break;
    case MIM_ERROR:
        cout << "-----ERROR.-----" << endl;
        break;
    case MIM_LONGERROR:
        cout << "-----LONGERROR. EVEN WORSE.-----" << endl;
        break;
    }
    cout << "dwInstance is " << dwInstance << endl;
    cout << "Handle is " << handle << endl;
    cout << "dwParam1 is " << dwParam1 << endl; //dwParam1 is the bytes of the MIDI message packed into an unsigned long
    cout << "dwParam2 is " << dwParam2 << endl; //dwParam2 is the timestamp of the key press
    cout << "uMsg is " << uMsg << endl;
    cout << "-----" << endl;
}

void MidiThing()
{
    MIDIINCAPS mic;
    unsigned long result;
    HMIDIIN inHandle;
    int ckey; // storage for the current keyboard key being pressed
    unsigned long iNumDevs, i;

    iNumDevs = midiInGetNumDevs(); /* Get the number of MIDI In devices in this computer */

    /* Go through all of those devices, displaying their names */
    for (i = 0; i < iNumDevs; i++)
    {
        /* Get info about the next device */
        if (!midiInGetDevCaps(i, &mic, sizeof(MIDIINCAPS)))
        {
            /* Display its Device ID and name */
            printf("Device ID #%u: %s\r\n", i, mic.szPname);
        }
    }
    cout << "These are the only available devices...?" << endl;
    cout << endl;

    // Open the default MIDI In device.
    result = midiInOpen(&inHandle, 0, (DWORD)midiCallback, 0, CALLBACK_FUNCTION);
    if (result)
    {
        printf("There was an error opening the default MIDI In device!\r\n");
    }
    else
    {
        midiInStart(inHandle);
        cout << endl;
        cout << "midiInStart has been called." << endl;
    }

    cout << endl;
    cout << "The unsigned long, result, value was " << result << endl;
    cout << MIM_OPEN << " is MIM_OPEN's value" << endl;
    cout << MIM_CLOSE << " is MIM_CLOSE's value" << endl;
    cout << MIM_DATA << " is MIM_DATA's value" << endl;
    cout << endl;

    printf("Press \"q\" to quit.\n");
    while (1)
    {
        if (kbhit())
        {
            ckey = getch();
            if (ckey == 'q')
            {
                cout << "Stopped." << endl;
                cout << endl;
                break;
            }
        }
    }

    midiInStop(inHandle);
    midiInReset(inHandle);
    midiInClose(inHandle);

    cout << endl;
    cout << "Lines are done twice because midiCallback " << endl;
    cout << "is called when midiInClose is called...?" << endl;
    cout << endl;
    cout << inHandle << " was the MIDIIN handle." << endl;
    cout << "Stuff's closed now." << endl;
    cout << endl;
    cout << endl;
    cout << endl;
}

int main(int argc, char *argv[])
{
    MidiThing();
    system("PAUSE");
    return EXIT_SUCCESS;
}
The while loop in the middle of the program where you press q to quit has no purpose for being there... but I uhh, left it in anyway.
EDIT: Okay wow. I just got it to work (kinda lame considering I've been stuck for about 3 hours until finally deciding to post here, and then finding the answer 5 minutes after). It turns out that the line
Code:
result = midiInOpen(&inHandle, 0, (DWORD)midiCallback, 0, CALLBACK_FUNCTION);

has to be written like

Code:
result = midiInOpen(&inHandle, 0, (DWORD)midiCallback, NULL, CALLBACK_FUNCTION);

Though I've solved my main issue, Dev-C++ gives me a warning for using NULL here though. Also, I'd still like to know what exactly midiInStart does and how to set up a buffer with it.
The page I linked tells me that I should send at least one buffer to the driver before recording, but I'm not exactly sure how.
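A minimal sketch of that buffer setup, for reference: midiInStart simply tells the driver to begin delivering incoming MIDI events to the callback, and ordinary note on/off data arrives through MIM_DATA without any buffers at all. Buffers are only needed for system-exclusive messages, which come back via MIM_LONGDATA. Something like the following, queued before midiInStart, should satisfy the driver (error handling omitted; the buffer size is an arbitrary choice):

#include <windows.h>
#include <mmsystem.h>

static char sysexData[1024]; // driver writes sysex bytes here
static MIDIHDR sysexHdr;

void queueSysexBuffer(HMIDIIN inHandle)
{
    ZeroMemory(&sysexHdr, sizeof(sysexHdr));
    sysexHdr.lpData = sysexData;
    sysexHdr.dwBufferLength = sizeof(sysexData);

    // Hand the buffer to the driver; when it fills (or the device is
    // reset) it comes back through the callback as MIM_LONGDATA, with
    // dwParam1 pointing at this MIDIHDR.
    midiInPrepareHeader(inHandle, &sysexHdr, sizeof(MIDIHDR));
    midiInAddBuffer(inHandle, &sysexHdr, sizeof(MIDIHDR));
}

// After midiInStop()/midiInReset() and before midiInClose(), release it:
// midiInUnprepareHeader(inHandle, &sysexHdr, sizeof(MIDIHDR));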
|
https://cboard.cprogramming.com/cplusplus-programming/118189-recording-midi-inputs-midi-device-using-wimm-library.html?s=71e7b8cd332e50b0ef0bf72985a79274
|
CC-MAIN-2019-51
|
en
|
refinedweb
|