Welcome to Part 10 of the Creating an Artificial Intelligence bot in StarCraft II with Python series. In this tutorial, we're going to work on creating our model.
The focus here is purely to see whether a model can learn from this style of input data. The training data I built and will be using can be found here: Stage 1, 2868 games data vs Hard AI. You do not necessarily need to grab this data; it will not be our final data if this approach proves successful.
Once you have the data, extract it, and you're ready to rumble. First we need to devise the structure of our convolutional neural network. I will be making use of Keras, a framework that sits on top of TensorFlow. As long as you already have TensorFlow, Keras is just a pip install away. I am using Keras version 2.1.2 and TensorFlow version 1.8.0 here.
If you are not familiar with neural networks, it is advised that you visit at the very least the deep learning tutorials from the machine learning series.
To begin, I was actually having a hard time getting anything to learn. In the Halite II competition, I found this to be quite a challenge as well, and an exceptionally low starting learning rate turned out to be the solution. That model started with a 1e-5 learning rate and ended on 1e-6 (0.00001 to 0.000001). Normally, you will start with more like 1e-3 and stop at 1e-4 (0.001 to 0.0001). Here, I found that starting at 1e-4 was enough to begin learning. For this part, our model script will begin with the following imports:
import keras  # Keras 2.1.2 and TF-GPU 1.8.0
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.callbacks import TensorBoard
import numpy as np
import os
import random
We're going to import Keras, obviously, but then also specifically the Sequential model type, dense layers, dropout, and flatten (to flatten the data before passing it through the final, regular dense layer). Since we're building a convolutional neural network, we're going to use Conv2D and MaxPooling2D for that. I also want to be able to visualize the model's training, so we'll be using TensorBoard. Our data is stored with numpy, so we'll use numpy to load it back in, as well as to shape it. We're going to use os to iterate over the directory containing the data, and random to shuffle it about.
Now, let's build our model:
model = Sequential()
This just means that we have a regular type of model. Things are going to go in order.
Now for our main hidden convolutional layers:
model.add(Conv2D(32, (3, 3), padding='same', input_shape=(176, 200, 3), activation='relu'))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
Next we'll add one fully-connected dense layer:
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
Finally the output layer:
model.add(Dense(4, activation='softmax'))
Now we need to set up the compile settings for the network:
learning_rate = 0.0001
opt = keras.optimizers.adam(lr=learning_rate, decay=1e-6)

model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
Lastly, we want to log everything via TensorBoard, so we'll create the callback:
tensorboard = TensorBoard(log_dir="logs/stage1")
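While (or after) training, you can watch the logged metrics by pointing TensorBoard at the log directory from a terminal:

tensorboard --logdir=logs/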
Now that we have our model, we need to pass the data through it. Since our data already exceeds my GPU's VRAM, and knowing that our future data will likely vastly exceed it, we need a way to load the data in by batches, which is what we'll be working on in the next tutorial.
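As a preview, one possible shape for that batch loading is a plain loop over the saved numpy files. This is just a minimal sketch of the idea, not the code from the next part; the directory name and the per-file layout here are assumptions:

import os
import random
import numpy as np

train_dir = "train_data"  # hypothetical directory holding the .npy files from data gathering
all_files = os.listdir(train_dir)
random.shuffle(all_files)

batch_size = 16  # files per training pass; tune so a batch fits in VRAM

for i in range(0, len(all_files), batch_size):
    X, y = [], []
    for fname in all_files[i:i + batch_size]:
        # assumption: each file stores rows of [one-hot choice, 176x200x3 game map]
        data = np.load(os.path.join(train_dir, fname))
        for choice, game_map in data:
            y.append(choice)
            X.append(game_map)
    X = np.array(X).reshape(-1, 176, 200, 3)
    y = np.array(y)
    model.fit(X, y, batch_size=128, validation_split=0.1, callbacks=[tensorboard])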
https://pythonprogramming.net/building-neural-network-starcraft-ii-ai-python-sc2-tutorial/
Implement windows NTLM authentication using SSPI
VERIFIED FIXED in mozilla1.4alpha
Reporter: daniel
Assignee: darin.moz
Attachments: 2 attachments, 6 obsolete attachments
Bug 23679 (NTLM auth for HTTP) is an rfe for implementing cross-platform NTLM authentication, enabling mozilla to talk to MS web and proxy servers that are configured to use "windows integrated security". For the following 2 reasons, the windows implementation of this should use SSPI instead of a cross-platform solution: 1) SSPI can be used to send the credentials of the user running the browser without any user interaction. In an intranet situation (workstation and web/proxy server share the same user database) this means seamless authentication. IMHO, this is usually how windows authentication is used. I am not sure if Mozilla would be able to grab a hold of these credentials and use them for network authentication without using SSPI (I doubt it). 2) The first step of an SSPI driven authentication negotiates the authentication protocol. NTLM is but one possible outcome. Two windows 2000 machines might choose to use MS's Kerberos implementation instead of NTLM. If memory serves me, it is possible to disable NTLM altogether in a pure windows 2000+ domain; my guess is that IIS/MS Proxy server would then require Kerberos instead of NTLM to authenticate the connection and the solutions proposed in the bug won't work. SSPI documentation can be found on (thanks to Jason Prell for the ptrs):
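For a sense of what the client side of an SSPI conversation looks like, here is a minimal sketch using the pywin32 bindings; it is not from any Mozilla patch, and the header handling is an assumption:

import base64
import sspi  # pywin32's wrapper around the Win32 SSPI calls

# Ask SSPI for a client security context; "Negotiate" instead of "NTLM"
# would let SSPI pick Kerberos or NTLM, exactly the negotiation described above.
client = sspi.ClientAuth("NTLM")

# First leg: no server token yet; this produces the initial (Type 1) message.
err, out_buf = client.authorize(None)
token = base64.b64encode(out_buf[0].Buffer)
# -> would be sent as:  Authorization: NTLM <token>

# Second leg (sketch): feed the server's challenge back in to get the final
# (Type 3) message.
# challenge = base64.b64decode(server_token_from_401_response)
# err, out_buf = client.authorize(challenge)
# token = base64.b64encode(out_buf[0].Buffer)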
*** Bug 159215 has been marked as a duplicate of this bug. ***
NTLM is also used for other protocols like POP3, IMAP and NNTP. I guess they do LDAP authentication as well. So I suggest using SSPI in a more general way and therefore changing the bug's component accordingly to "Networking".
This bug will be functionally comparable to the fix for bug 23679, but this will be a Windows-only implementation, using the built-in Microsoft tools, namely SSPI. Multiple protocols, thus just Browser/networking. Reassign. We have to assume that Microsoft won't change the API. If/when they make changes, only then does the dodgy API become an issue. Anyone have an idea of the difficulty of this implementation, assuming the API remains the same?
Assignee: darin → new-network-bugs
Component: Networking: HTTP → Networking
Keywords: mozilla1.2, nsbeta1
QA Contact: tever → benc
I have found this helpful: Documentation: Utility: I could help implement this in VB, but I don't know if that will help since mozilla is Java.
SSPI has been around since windows 95/NT4 (3.51?); the API itself is very unlikely to change. What could change is the way IIS expects the http headers to contain the SSPI generated data. I have some SSPI client code lying around. If bug 23679 comes up with a working implementation, it means the rest of netwerk can handle this type of authentication (authenticating a persistent connection instead of individual HTTP requests). Plugging in SSPI on windows would then be a semi-trivial task. Had a brief e-mail exchange with darin@netscape.com. Bug 23679 is apparently being worked on, but making a windows specific implementation is lower priority. Feel free to make this one depend on bug 23679 and assign it to me.
I believe this is the 'correct' way to solve this issue on the MS Windows platform. By taking advantage of the same API MS uses, you ensure maximum compatibility. NTLMSSP is not simple, and if we can avoid having to actually deal with it at all (i.e. just pass blobs) then I think this is a big plus. On other platforms, I'm proposing a Samba based solution that involves an independent NTLMSSP implementation.
May I ask how this depends on 23679?
OK, i saw the light ;-) ... last night i hacked together a patch to get mozilla using SSPI and things mostly work. the patch i'm about to attach is by no means final. it has many serious problems, but i'm just submitting it here so people can try it out and let me know if they discover any other problems that i'm not expecting.
Assignee: daniel → darin
Target Milestone: --- → mozilla1.4alpha
this patch is at best a hack.
things that still need to be done: 1- implement auth prompt with domain field. currently, you must enter your username as "username\domain" ... this is just for testing purposes ;-) 2- try default credentials first before prompting user. 3- come up with a better way of suppressing Authorization headers on an already authenticated connection. current method is a total hack. 4- aggressively send first auth header if URL looks like it will require authentication. currently we only do NTLM auth when challenged. 5- i hacked around the problem of needing to reuse the same connection for the second auth header by delaying the second HTTP transaction until the first is finished (instead of firing it off as soon as we read the header with the challenge as is usually done). it turns out that our next transaction stands a very good chance of being sent out over the same connection that the first transaction was sent out over. so, this hack mostly works. it will probably fail if we are authenticating multiple connections at once. to get this right, i really need to tag the transactions with some extra state information and have the connection machinery check that and try to do the right thing from there. at any rate, i'm anxious to hear how this patch behaves in the real world as is :)
Just one comment - in Windows, it's traditionally DOMAIN\username when entering things in a single box. It's great to see this finally moving!
ok, i cleaned up the previous patch, and architecturally things are looking better. for example, i'm not sending default credentials first before prompting the user. some big things still remain to be done: 1- i haven't made any UI changes, so i took Andrew's advice and made it so that you must supply "domain\username" in the prompt dialog. if i'm lucky, maybe 2 or 3 other people will know how to enter things correctly into this prompt without reading this bug report (or digging through the release notes if i don't fix this by 1.4 alpha) :-/ 2- we continue to only send out NTLM credentials if challenged. this can mean one extra round trip, but i'd like to take the perf hit for now (i don't think it'll be huge). by doing this we avoid the problem of having to detect an already authenticated channel. yes, this shouldn't be difficult, but given the way things are organized, it is not as trivial as it should be. 3- same hack in place that depends on request order to reuse the right connection. i'm pretty sure this won't hold up. despite the remaining problems, i think this patch stands on its own. i'm going to move forward and try to land this for 1.4 alpha. i'll then try to clean up the remaining issues (especially the UI) in a subsequent patch.
Attachment #116584 - Attachment is obsolete: true
Comment on attachment 116892 [details] [diff] [review] phase 1 patch bbaetz: can you please review this. at least from a design point of view. nsIHttpAuthenticator needed to be seriously wacked. thx!
The UI shouldn't be a big deal at all. The 3rd 'Domain:' field is actually used when invoking 'Integrated Windows Authentication'. Anyone who is using a domain account that is using Windows XP with IE 6 or any version of Windows with NS (any version) will have to enter the 'Domain\Username'. The 3rd 'Domain:' field only appears in pre-Windows XP clients if the 'Integrated Windows Authentication' is enabled as an authentication method in IIS. You shouldn't need to worry about the UI if you do not want to. :)
Comment on attachment 116892 [details] [diff] [review] phase 1 patch

>Index: public/nsIHttpAuthenticator.idl
How would an ASCII-art state diagram look? It may be too complex, but if not, it would be good to add here.

>Index: src/nsHttpAuthCache.cpp
>
>+static inline void
>+GetAuthKey(const char *host, PRInt32 port, nsCString &key)
>+{
>
>+ key.Assign(nsDependentCString(host) + nsPrintfCString(":%d", port));
nsDependentCString(host) + NS_LITERAL_CSTRING(":") + <Something>(port)
I don't know what the <something> is - prdtoa, perhaps? Save on the :%d parsing each time. I skipped over the rest, though - I don't have time, and can't test it anyway.

>Index: src/nsHttpNTLMAuth.cpp
>@@ -0,0 +1,364 @@
>+/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
>+/* ***** BEGIN LICENSE BLOCK *****
>+ * Version: NPL 1.1/GPL 2.0/LGPL 2.1
New files should be MPL-tri-licensed.
finally got a hold of a copy of MS Proxy to test this on, and there are definitely some issues to be ironed out.
I don't understand: is this just SSPI, or a cross-platform NTLM module? How about bug 23679 comment 161?
I would like to test this patch but am not sure how to implement it. I am currently developing an ASP intranet site that uses "windows integrated security". Please advise... Thanks, Dan
> I ... made it so that you must supply "domain\username" in the prompt dialog. You mean there's no default for the domain? Trust Microsoft to make things difficult :-/
Summary: Implement windows authentication using SSPI → Implement windows NTLM authentication using SSPI
ok, this patch seems to pass all my stress tests. it's really ugly still, and needs some major cleanup... so no reviewing yet ;-) but, if someone wants to try out this patch, this is the one to test.
Attachment #116892 - Attachment is obsolete: true
oh fun! so i submitted that last patch via a NTLM proxy, and i noticed that it POSTed the entire patch twice to the proxy server. the first POST request was rejected with a 407, but the second POST request succeeded (and was passed on to bugzilla). we have the same problem with Basic and Digest auth... we so need to implement the HTTP/1.1 "expect 100 continue" method of uploading.
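For reference, the "expect 100 continue" handshake mentioned here looks roughly like this on the wire (a sketch, not actual Mozilla or proxy output):

POST /attachment.cgi HTTP/1.1
Host: bugzilla.mozilla.org
Proxy-Authorization: NTLM <type 1 token>
Expect: 100-continue
Content-Length: 54321

HTTP/1.1 407 Proxy Authentication Required
Proxy-Authenticate: NTLM <challenge>

The client can then resend just the headers with the final token, wait for "HTTP/1.1 100 Continue", and only upload the body once.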
Status: NEW → ASSIGNED
I just downloaded the nightly, and I guess your patch was in it... Anyway, I can now pass through our proxy which before I could not, so the patch seems to be working... Thanks... I can now finally get back to my favorite browser :-) WinXP, build 2003031714
no, the patch isn't checked into the tree !
cleaned up a bit. addressed bbaetz's review comment. ready for more reviews.
Attachment #117179 - Attachment is obsolete: true
Comment on attachment 117630 [details] [diff] [review] v1.2 patch

I am about 1/2 way done.

Do we warn the user before we send the response to the auth request? Sending the username, domain name, and workgroup in the clear to anyone who asks seems like it should have an alert dialog.

i would wait a milestone before tagging the nsIHttpAuthenticator as UNDER_REVIEW.

In the comments of nsIHttpAuthenticator, you should make it explicit who is going to be prompting the user.

We should build this feature into a configurable build option:
+ifeq ($(OS_ARCH),WINNT)
+CPPSRCS += \
+ nsHttpNTLMAuth.cpp \
+ $(NULL)
+endif

Make it so, or drop the comment. :-)
+#define NS_HTTP_STICKY_CONNECTION (1<<2)
+// XXX bah! this name could be better :(

I really don't like this procedure. It returns TRUE if i pass a="liar", b=nsnull.
+StrEquivalent(const PRUnichar *a, const PRUnichar *b)

instead of using calloc, just assign a null after PL_Base64Encode returns
+ // use calloc, since PL_Base64Encode does not null terminate.

Fix your copyright dates.

static void ParseUserDomain(PRUnichar *buf, const PRUnichar **user, const PRUnichar **domain)
if buf is ever null, ParseUserDomain will crash. I think that the result of PromptUsernameAndPassword could get you into this condition.

Maybe you should explain where you got the numbers :-) :
+ // decode into the input secbuffer
+ ib.BufferType = SECBUFFER_TOKEN;
+ ib.cbBuffer = (len * 3)/4;

Why the changes from PRPackedBool to PRUint32?
- PRPackedBool mConnected;
- PRPackedBool mHaveStatusLine;
- PRPackedBool mHaveAllHeaders;
- PRPackedBool mTransactionDone;
- PRPackedBool mResponseIsComplete; // == mTransactionDone && NS_SUCCEEDED(mStatus) ?
- PRPackedBool mDidContentStart;
- PRPackedBool mNoContent; // expecting an empty entity body?
- PRPackedBool mReceivedData;
- PRPackedBool mDestroying;
+ // state flags
+ PRUint32 mClosed : 1;
+ PRUint32 mDestroying : 1;
+ PRUint32 mConnected : 1;
+ PRUint32 mHaveStatusLine : 1;
+ PRUint32 mHaveAllHeaders : 1;
+ PRUint32 mTransactionDone : 1;
+ PRUint32 mResponseIsComplete : 1;
+ PRUint32 mDidContentStart : 1;
+ PRUint32 mNoContent : 1; // expecting an empty entity body
+ PRUint32 mReceivedData : 1;
+ PRUint32 mStatusEventPending : 1;
Comment on attachment 117630 [details] [diff] [review] v1.2 patch

+static PRBool
+StrEquivalent(const PRUnichar *a, const PRUnichar *b)
+{
+ if (a && b)
+ return nsCRT::strcmp(a, b) == 0;
+
+ if (a && a[0] == '\0')
+ return PR_TRUE;
+
+ if (b && b[0] == '\0')
+ return PR_TRUE;
+
+ return PR_FALSE;
+}

wait, doesn't that mean that "foo" and "" are equivalent? maybe you meant something like if (a && b && a[0] == b[0] == '\0') return PR_TRUE? I'm looking at the AuthCache stuff now, then onto NTLM
Comment on attachment 117630 [details] [diff] [review] v1.2 patch duh, shaver straightened me out on my last comment, ignore that.. The rest of this looks good though.. can we have some kind of a pref and configure definition so this can be disabled at both build time and runtime? The build time stuff is for bloat, obviously, but the runtime stuff would cover people who don't want their credentials just sent unconditionally .. I guess dougt covered that too :) you should document why StrEquivalent works, just because both dougt and I fell into the same confusion and I'd hate to see someone come along and try to 'fix' it :) (something along the lines of // a has value, so b must be null ... // b has value so a must be null ... As for the changing of GenerateCredentials to contain user/pass/domain - is there any reason you can't just pass around nsHttpAuthIdentity like you do in PromptForIdentity? or perhaps if this is going to be frozen (and thus you can't use concrete classes) could you just wrap an interface around the identity, or abstract the domain into some sort of "extra" user data so that if some other authentication mechanism shows up and wants to hijack the "domain" parameter.. almost done...
Mind you, that check will also report that NULL and NULL aren't equivalent, though they're each equivalent to "". Not what I would expect.
thanks for the comments guys. i've applied changes to my local tree, minus the following: 1- no fancy build changes at this time. that can happen later. anyways, this has very low footprint already. 2- dougt: i changed nsHttpTransaction to store its flags as bit fields instead of bytes. in the end this shaves off a few DWORDS. 3- alecf: good idea about COM'ifying the auth identity structure, but i think i'd prefer to wait on that. it isn't necessary at the moment. maybe something that can be done when we want to freeze this interface. mike: good catch.. thx! i'll fix that.
revised per previous comments.
Attachment #117630 - Attachment is obsolete: true
Attachment #118353 - Flags: superreview?(alecf)
Alec (#31), I think "realm" is the term used in other protocols for "domain".
martin: sort of. actually, with NTLM the "domain" is something the user must enter, whereas with other protocols the "realm" is something specified in the server challenge.
Comment on attachment 118353 [details] [diff] [review] v1.3 patch ok, the rest of this looks good! sr=alecf
Attachment #118353 - Flags: superreview?(alecf) → superreview+
updated the patch to apply cleanly to the trunk and applied changes per the review comments.
Attachment #118353 - Attachment is obsolete: true
Comment on attachment 118465 [details] [diff] [review] v1.4 patch r=cathleen :-)
Attachment #118465 - Flags: review+
landed everything except nsHttpNTLMAuth.{h,cpp} and nsNetModule/nsNetCID changes. these are still pending approval.
this patch includes only the NTLM SSPI portion.
Comment on attachment 118602 [details] [diff] [review] remaining portion of patch that didn't land before the tree closed for 1.4 alpha requesting drivers approval for 1.4 alpha. this patch has r=dougt,cathleen & sr=alecf.
Attachment #118602 - Flags: approval1.4a?
Comment on attachment 118602 [details] [diff] [review] remaining portion of patch that didn't land before the tree closed for 1.4 alpha

>+ifeq ($(OS_ARCH),WINNT)
>+CPPSRCS += \
>+ nsHttpNTLMAuth.cpp \
>+ $(NULL)
>+ifdef MOZ_PSM
>+REQUIRES += pipnss
>+DEFINES += -DHAVE_PSM
>+endif
>+endif
I don't think this will work in a clobber build, will it? (Given the order of client.mk, designed to prevent dependencies like this one.)

>+#ifdef HAVE_PSM
...
>+static PRBool
>+IsFIPSEnabled()
>+{
>+ nsresult rv;
>+ nsCOMPtr<nsIPKCS11ModuleDB> pkcs =
>+ do_GetService("@mozilla.org/security/pkcs11moduledb;1", &rv);
Shouldn't this function exist even if HAVE_PSM isn't defined, so that it can return false if the service is present, to allow for a case where the security DLLs (which can, after all, be installed drop-in) are added later? I'd think you could do_GetService outside of the |#ifdef HAVE_PSM| and then QI to the right interface and test inside of it?
dbaron: the issue i ran into is that if MOZ_PSM is not defined, then nsIPKCS11ModuleDB.h will not be exported! so, i do unfortunately need some kind of compile time check :( hmm... any suggestions?
Comment on attachment 118602 [details] [diff] [review] remaining portion of patch that didn't land before the tree closed for 1.4 alpha a=asa (on behalf of drivers) for checkin to 1.4a.
Attachment #118602 - Flags: approval1.4a? → approval1.4a+
thanks to kaie for providing better PSM hooks.
Attachment #118602 - Attachment is obsolete: true
Attachment #118671 - Flags: superreview?(dbaron)
Attachment #118671 - Flags: review?(kaie)
Comment on attachment 118671 [details] [diff] [review] updated final patch

I compared this patch with the previous version of the patch. r=kaie on the new portions added, and carrying forward the other reviews
Attachment #118671 - Flags: review?(kaie) → review+
alright! final patch is in! marking FIXED =)
Status: ASSIGNED → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
What about NTLM for Moz running on Linux machines, or is this a separate bugzilla issue?
Manik, this bug is about SSPI, which is a Windows 2000 API. For a Linux solution see bug #171500. It seems that winbindd is not favoured by SAMBA developers. OTOH SAMBA provides no other means for non-GPL programs. For a discussion of the general topic see bug #23679.
Great work - lack of NTLM support has for ages been one great reason why corporates haven't dared consider moving away from IE. But can I just check that there's no information leakage going on? IE rightly has a preference which goes something like "authenticate only in intranet zone", which prevents NTLM credentials from leaking out to nasty sites on the Internet. Does your implementation do the same? It's not enough to rely on only authenticating when challenged, because of course the challenge can come from anywhere.
mozilla will never send your NT logon credentials without first consulting you. thereafter, it will automatically send them to the same domain per the protection space matching rules defined by RFC 2617. in the future, we may look into offering the default credentials using some kind of heuristic like same-intranet or proxy-only, etc.
Hi - Like many others I'm really excited to have this in Moz and I've been testing it out. On one site it works fine. On a second site, I get a prompt: Enter Username and password for "" at myurl.example.com. In the user name field I have tried US\bob, with my password in the password field. After hitting enter the prompt just returns over and over again. I think a clue to the issue might be that the prompt is for "". On the site that works it lists the hostname, e.g. Enter Username and password for "myurl.example.com" at myurl.example.com. Am I missing something here?
Just curious: does this put the machinery in place to support SPA, which is basically NTLM over SMTP, in the mail/news client? (More info at)
dave: please see bug 199644#c4 rich: short answer is not really. feel free to file a bug ;-)
Comment on attachment 118353 [details] [diff] [review] v1.3 patch (clearing review request on obsolete patch)
VERIFIED: commercial 2003-04-29-04, Win 98. I'm in the process of moving some testing to Win2K, so I'll check the NT end soon. BTW, can someone confirm for me: Although 1.4a was released a couple days after this checkin, this was not in for the 1.4a milestone release.
Status: RESOLVED → VERIFIED
Re comment 52! (and maybe 31?) Could there be a preference, with default 'false' if you prefer, to let Mozilla authenticate itself without prompting the user, as MSIE does?
using mozilla 1.6b I still don't seem to be able to get out of my corporate network. Do I need to switch NTLM on explicitly, somehow? (I can get out with IE)
https://bugzilla.mozilla.org/show_bug.cgi?id=159015
1
10/6: Lecture Topics
- Procedure call
- Calling conventions
- The stack
- Preservation conventions
- Nested procedure call
2
Calling Conventions
A calling convention is a sequence of steps to follow when calling a procedure. It makes sure that:
- arguments are passed in
- flow of control passes from caller to callee and back
- return values are passed back out
- there are no unexpected side effects
3
Calling Conventions
- Mostly governed by the compiler
- We'll see a MIPS calling convention
- Not the only way to do it, even on MIPS
- Most important: be consistent
- Procedure call is one of the most unpleasant things about writing assembly for RISC architectures
4
A MIPS Calling Convention
1. Place parameters where the procedure can get them
2. Transfer control to the procedure
3. Get the storage needed for the procedure
4. Do the work
5. Place the return value where the calling code can get it
6. Return control to the point of origin
5
Step 1: Parameter Passing
The first four parameters are easy - use registers $a0, $a1, $a2, and $a3. You've seen this already. What if there are more than four parameters?
6
Step 2: Transfer Control
Getting from caller to callee is easy -- just jump to the address of the procedure. We also need to leave a way to get back again.
- Special register: $ra (for return address)
- Special instruction: jal
7
Jump and Link
[Diagram: the calling code executes "jal proc", which jumps to the procedure at label "proc:" while saving the return address in $ra.]
8
Step 3: Acquire Storage
What storage do we need?
- Registers
- Other local variables
Where do we get the storage? From the stack.
9
Refining Program Layout
[Memory map, from low addresses to high:]
- Reserved
- Text: program instructions
- Static data: global variables
- Dynamic data: the heap (grows toward higher addresses)
- Stack: local variables, saved registers (grows down from 0x7fffffff)
10
Saving Registers on the Stack
[Diagram: before, during, and after views of the stack. $sp moves toward lower addresses to make room for $s0, $s1, and $s2, then moves back once they are restored.]
11
Assembly for Saving Registers
We want to save $s0, $s1, and $s2 on the stack:

sub $sp, $sp, 12  # make room for 3 words
sw $s0, 0($sp)    # store $s0
sw $s1, 4($sp)    # store $s1
sw $s2, 8($sp)    # store $s2
12
Step 4: Do the work
We called the procedure so that it could do some work for us; now is the time for it to do that work. Resources available:
- Registers freed up by Step 3
- All temporary registers ($t0-$t9)
13
Callee-saved vs. Caller-saved
- Some registers are the responsibility of the callee: the callee-saved registers, $s0-$s7
- Other registers are the responsibility of the caller: the caller-saved registers, $t0-$t9
14
Step 5: Return values
MIPS allows for two return values: place the results in $v0 and $v1. You've seen this too. What if there are more than two return values?
15
Step 6: Return control
Because we laid the groundwork in step 2, this is easy: the address of the point of origin + 4 is in register $ra, so just use jr $ra to return.
16
An Example

int leaf(int g, int h, int i, int j) {
  int f;
  f = (g + h) - (i + j);
  return f;
}

Let g, h, i, j be passed in $a0, $a1, $a2, $a3, respectively, and let the local variable f be stored in $s0.
17
Compiling the Example

leaf: sub $sp, $sp, 4     # room for 1 word
      sw $s0, 0($sp)      # store $s0
      add $t0, $a0, $a1   # $t0 = g + h
      add $t1, $a2, $a3   # $t1 = i + j
      sub $s0, $t0, $t1   # $s0 = f
      add $v0, $s0, $zero # copy result
      lw $s0, 0($sp)      # restore $s0
      add $sp, $sp, 4     # put $sp back
      jr $ra              # jump back
18
Nested Procedures
Suppose we have code like this:

main() {
  foo();
}

int foo() {
  return bar();
}

int bar() {
  return 6;
}

Potential problem: the return address gets overwritten.
19
A Trail of Bread Crumbs
The registers $s0-$s7 are not the only ones we save on the stack.
- What can the caller expect to have preserved across procedure calls?
- What can the caller expect to have overwritten during procedure calls?
20
Preservation Conventions

Preserved:
- Saved registers: $s0-$s7
- Stack pointer register: $sp
- Return address register: $ra
- Stack above the stack pointer

Not preserved:
- Temporary registers: $t0-$t9
- Argument registers: $a0-$a3
- Return value registers: $v0-$v1
- Stack below the stack pointer
21
A Brainteaser in C
What does this program print? Why?

#include <stdio.h>

int* foo() {
  int b = 6;
  return &b;
}

void bar() {
  int c = 7;
}

main() {
  int *a = foo();
  bar();
  printf("The value at a is %d\n", *a);
}

Hint: foo() returns the address of a local that lives in foo's stack frame, which sits below the stack pointer once foo returns - exactly the region the preservation conventions say is not preserved. The call to bar() typically reuses that slot for c, so on many systems this prints 7 rather than 6; strictly, dereferencing the dangling pointer is undefined behavior.
https://slideplayer.com/slide/3377993/
AppDaemon Tutorial

AppDaemon is not meant to replace Home Assistant Automations and Scripts, but rather to complement them. For a lot of things, automations work well and can be very succinct. However, there is a class of more complex automations for which they become harder to use, and this is where AppDaemon comes into its own. It brings quite a few things to the table:
- New paradigm - some problems require a procedural and/or iterative approach, and AppDaemon Apps are a much more natural fit for this. Recent enhancements to Home Assistant scripts and templates have made huge strides, but for the most complex scenarios, Apps can do things that Automations can't.
- Ease of use - AppDaemon's API is full of helper functions that make programming as easy and natural as possible. The functions and their operation are as "Pythonic" as possible; experienced Python programmers should feel right at home.
- Reuse - write a piece of code once and instantiate it as an app as many times as you need with different parameters, e.g. a motion light program that you can use in 5 different places, without the need to restart AppDaemon itself. It is also possible to change parameters for an individual app or multiple apps and have them picked up dynamically, and for a final trick, removing or adding apps is also picked up dynamically. Testing cycles become a lot more efficient as a result.
- Complex logic - Python’s If/Else constructs are clearer and easier to code for arbitrarily complex nested logic
- Durable variables and state - variables can be kept between events to keep track of things like the number of times a motion sensor has been activated, or how long it has been since a door opened
- All the power of Python - use any of Python’s libraries, create your own modules, share variables, refactor and re-use code, create a single app to do everything, or multiple apps for individual tasks - nothing is off limits!
It is in fact a testament to Home Assistant's open nature that a component like AppDaemon can be integrated so neatly and closely that it acts in all ways like an extension of the system, not a second class citizen. Part of the strength of Home Assistant's underlying design is that it makes no assumptions whatever about what it is controlling or reacting to, or reporting state on. This is made achievable in part by the great flexibility of Python as a programming environment for Home Assistant, and carrying that forward has enabled me to use the same philosophy for AppDaemon - it took surprisingly little code to be able to respond to basic events and call services in a completely open ended manner - the bulk of the work after that was adding additional functions to make things that were already possible easier.
How it Works
The best way to show what AppDaemon does is through a few simple examples.
Sunrise/Sunset Lighting
Let's start with a simple App to turn a light on every night fifteen minutes (900 seconds) before sunset and off at sunrise. To do this we register two separate callbacks. The named argument offset is the number of seconds offset from sunrise or sunset and can be negative or positive (it defaults to zero). For complex intervals it can be convenient to use Python's datetime.timedelta class for calculations. In the example below, when sunrise or the moment just before sunset occurs, the appropriate callback function, sunrise_cb() or before_sunset_cb(), is called, which then makes a call to Home Assistant to turn the porch light on or off by activating a scene. The variables args["on_scene"] and args["off_scene"] are passed through from the configuration of this particular App, and the same code could be reused to activate completely different scenes in a different version of the App.
import appdaemon.plugins.hass.hassapi as hass

class OutsideLights(hass.Hass):

    def initialize(self):
        self.run_at_sunrise(self.sunrise_cb)
        self.run_at_sunset(self.before_sunset_cb, offset=-900)

    def sunrise_cb(self, kwargs):
        self.turn_on(self.args["off_scene"])

    def before_sunset_cb(self, kwargs):
        self.turn_on(self.args["on_scene"])
This is also fairly easy to achieve with Home Assistant automations, but we are just getting started.
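For context, wiring this App up with its scene parameters happens in AppDaemon's apps.yaml configuration; a minimal sketch, in which the module name and scene entity IDs are assumptions:

outside_lights:
  module: outside_lights
  class: OutsideLights
  on_scene: scene.porch_on     # hypothetical scene names
  off_scene: scene.porch_off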
Motion Light
Our next example is to turn on a light when motion is detected and it is dark, and to turn it off after a period of time. This time, the initialize() function registers a callback on a state change (of the motion sensor) rather than at a specific time. We tell AppDaemon that we are only interested in state changes where the motion detector comes on by adding an additional parameter to the callback registration - new = "on". When motion is detected, the callback function motion() is called, and we check whether or not the sun has set using a built-in convenience function: sun_down(). Next, we turn the light on with turn_on(), then set a timer using run_in() to turn the light off after 60 seconds. run_in() is another call to the scheduler that executes a set time from now, which results in AppDaemon calling light_off() 60 seconds later, which uses the turn_off() call to actually turn the light off. This is still pretty simple in code terms:
import appdaemon.appapi as appapi

class FlashyMotionLights(appapi.AppDaemon):

    def initialize(self):
        self.listen_state(self.motion, "binary_sensor.drive", new = "on")

    def motion(self, entity, attribute, old, new, kwargs):
        if self.sun_down():
            self.turn_on("light.drive")
            self.run_in(self.light_off, 60)

    def light_off(self, kwargs):
        self.turn_off("light.drive")
This is starting to get a little more complex in Home Assistant automations, requiring an Automation rule and two separate scripts.
Now let's extend this with a somewhat artificial example to show something that is simple in AppDaemon but very difficult, if not impossible, using automations. Let's warn someone inside the house that there has been motion outside by flashing a lamp on and off 10 times. We react to the motion as before by turning on the light and setting a timer to turn it off again, but in addition, we set a 1 second timer to run flash_warning(), which, when called, toggles the inside light and sets another timer to call itself a second later. To avoid re-triggering forever, it keeps a count of how many times it has been activated and bails out after 10 iterations.
import appdaemon.appapi as appapi

class MotionLights(appapi.AppDaemon):

    def initialize(self):
        self.listen_state(self.motion, "binary_sensor.drive", new = "on")

    def motion(self, entity, attribute, old, new, kwargs):
        if self.sun_down():
            self.turn_on("light.drive")
            self.run_in(self.light_off, 60)
            self.flashcount = 0
            self.run_in(self.flash_warning, 1)

    def light_off(self, kwargs):
        self.turn_off("light.drive")

    def flash_warning(self, kwargs):
        self.toggle("light.living_room")  # hypothetical inside lamp; use your own entity id
        self.flashcount += 1
        if self.flashcount < 10:
            self.run_in(self.flash_warning, 1)
Of course, if I wanted to make this App or its predecessor reusable, I would have provided configuration parameters for the entity names rather than hard-coding them.
Happy Automating!
https://home-assistant.io/docs/ecosystem/appdaemon/tutorial/
User interface to display and manage an entity and associated resources
Info
- Publication number: US7743332B2
- Authority: US
- Grant status: Grant
- Prior art keywords: entity, user, display, input
Abstract
Description
This is a divisional application of U.S. patent application Ser. No. 09/606,383 now U.S. Pat. No. 7,278,103, entitled “USER INTERFACE TO DISPLAY AND MANAGE AN ENTITY AND ASSOCIATED RESOURCES”, filed Jun. 28, 2000. This application is also related to co-pending U.S. patent application Ser. No. 10/967,739 entitled “USER INTERFACE TO DISPLAY AND MANAGE AN ENTITY AND ASSOCIATED RESOURCES” filed on Oct. 18, 2004. The entireties of the above-noted applications are incorporated herein by reference.
The present invention relates generally to computer systems, and more particularly to a system and method for managing and interfacing to a plurality of computers cooperating as an entity wherein the entity may be interfaced collectively as a whole and/or individually.
With the advent of Internet applications, computing system requirements and demands have increased dramatically. Many businesses, for example, have made important investments relating to Internet technology to support growing electronic businesses such as E-Commerce. Since companies are relying on an ever increasing amount of network commerce to support their businesses, computing systems generally have become more complex in order to substantially ensure that servers providing network services never fail. Many such systems therefore employ a plurality of less expensive servers that cooperate collectively to provide the service. Although these systems may provide a more economical hardware solution, system management and administration of individual servers is generally more complex and time consuming.
Currently, management of a plurality of servers is a time intensive and problematic endeavor. For example, managing server content (e.g., software, configuration, data files, components, etc.) and monitoring generally must be achieved via separate applications. Thus, management of the entity (e.g., plurality of computers acting collectively) as a whole generally requires individual configuration of loosely coupled servers whereby errors and time expended are increased.
Presently, there is not a straightforward and efficient system and/or process for managing and administering a collection of independent servers. Many problems are thereby created since administrators may be generally required to work with machines individually to setup content, tools, monitor server state and administer each server. Due to the need to administer and modify content on each machine individually, errors are a common occurrence. For example, it is routine for portions of server content to get out of sync with a master copy.
Another problem associated with management of a plurality of servers is related to adding additional servers to the system. Adding servers is generally time intensive and error prone since the new server generally must be manually configured as well as having the system content copied to the new server. Furthermore, server configuration settings generally need to be adjusted along with the content.
Still yet another problem associated with management is related to receiving system wide performance results and/or status views of the collection of servers. Some applications may exist that provide performance monitoring for individual servers. Obtaining system wide results (e.g., performance, memory used) associated with the plurality of servers may be problematic, however, since each server generally must be searched independently.
Currently, there is not an efficient and straightforward interface for managing and administering an entity without substantial and sometimes complex individual configuration/monitoring of each member associated with the entity. Consequently, there is an unsolved need in the art for a user interface to manage, create, administer, configure and monitor a group of servers operating as an entity.
The present invention relates to a user interface to display and manage a plurality of entities as a single entity. For example, the entities may include a plurality of members (e.g., computers, servers, clusters) collectively cooperating as a whole. In accordance with the present invention, a system interface is provided wherein a consistent and unified representation of a plurality of the entities as a whole may be obtained and/or managed from any of the members associated with the entity. Moreover, remote systems may interface with the entity—even if not a member thereof.
The interface enables actions to be performed on the representation of the entities as a whole and/or on representations of members associated with the entity individually. If actions are to be performed on the entities as a whole, the action may be propagated to the collection of entities. If the action is performed on the representation of a member, then the action may be directed to the member. In this manner, system administration, configuration and monitoring are greatly facilitated by enabling a user to send and receive information to the entity as if the entity were essentially a single machine. In contrast to prior art user interfaces wherein any collection of machines connected over a network may need to be administered individually, at each machine site, and/or via separate applications, the present invention provides a single point of entry into the entity from a consistent and singular applications interface that may be directed from substantially any system operatively coupled to the entity (e.g., via an Internet connection).
More specifically, the present invention provides navigational namespaces that represent the collection of entities as a whole and/or members associated with the entity. In this manner, a hierarchy of entities may be established wherein members and/or other entities may be represented. For example, a first namespace may provide an entity (e.g., cluster) wide view and a second namespace may provide a member view. The entity wide namespace enables users to navigate to pages that provide/distribute information to/from the entity as a whole, such as viewing performance and status of members, creating/viewing/editing application manifests defined for deployment to the entity, creating/viewing/filtering event logs aggregated for the entity and specific to each member, and viewing resource monitors (e.g., CPU utilization, memory utilization, server requests/second) aggregated for the entity and/or individually for each member. The member view enables users to navigate to pages designed to provide status and performance views of a particular member, such as the manifests, event logs and monitors described above, and also to view/manage applications deployed across the entity.
In accordance with another aspect of the present invention, an entity (e.g., cluster, plurality of servers) node view may be provided to facilitate management and navigation of each member associated with the entity, wherein a monitor node view facilitates viewing, enabling and disabling monitors associated with performance aspects of the entity and individual members. An events node may further be provided to view and filter aggregated and individual member event logs. A performance view may be provided to facilitate an aggregated status of the entity wherein a status view may provide the overall state and health of each member of the entity. Additionally, member specific status may be viewed within the entity namespace, and an applications view may be provided for editing applications as described above.
According to another aspect of the present invention, administration helpers (e.g., wizards) may be provided to create an entity relationship, add members to the entity and to deploy applications and resources across the entity and/or to systems which may be remote therefrom. In this manner, the entity may be viewed and administered in a singular fashion thus mitigating individual member upgrades and synchronization problems between members. Furthermore, the present invention may be automatically installed by selecting a potential member from the operating system wherein the operating system then directs an installation to the member and then further adds the member to the entity.
According to another aspect of the present invention, management input operations for the entity are provided. From the context of members within the entity, members may be taken online or offline, automatically synchronized and/or not synchronized with the entity, have member weight adjusted for load balancing, specify a dedicated IP address and/or specify suitable load balancing parameters, provide an IIS restart, and/or restart the member.
From the context of the entity as a whole, a user may set entity wide settings such as load balancing, synchronize members that are part of a replication loop, set request forwarding behavior, and/or manage entity wide IP addresses. In order to facilitate management of applications, the user interface may expose a manifest to maintain a list of valid resources that may be deployed, managed and monitored across the entity. In accordance with another aspect of the present invention, a user interface is provided that greatly facilitates management and administration of an entity. The user interface substantially automates management by enabling a user to administer and manage the entity from any of a plurality of systems operatively coupled to the entity. A consistent user experience is therefore provided wherein the entity may be configured and monitored as if the entity were a singular machine—thereby providing a substantial improvement over conventional systems that may require an administrator to individually configure, monitor, maintain, and upgrade each machine comprising the entity. Thus, the present invention saves time and administration costs associated with conventional systems. Moreover, system configurability and troubleshooting is improved since entity members may be operated upon as a collective whole (e.g., viewing system wide performance) and/or individual members may be identified and operated upon.
Management is also facilitated by enabling a user/administrator to manage and configure a plurality of entities and/or members from a single computer. In accordance with the user interface of the present invention, a user may create entities, join existing entities, add/remove existing members, deploy content (e.g., components, DLLs, data files) across the entity and/or to other entities/servers, configure load balancing and monitor performance. It is to be appreciated that the present invention may manage both homogeneous and non-homogeneous entities. For example, a homogeneous entity may include systems wherein all members share similar applications and resources. A non-homogeneous system may not require all members to be configured the same. As will be described in more detail below, the user interface may include an output such as display objects (e.g., icons, buttons, dialog boxes, pop-up menus, wizards) and an input (e.g., buttons, selection boxes, user input boxes, wizards) to facilitate creating, joining, managing, monitoring and configuring the entity.
Referring initially to
As depicted by the system 10, the user interface 40 enables a user to administer, monitor, and configure the entity 30 from each member 20 a-20 d and/or from non-members such as computer system 20 e. The user interface 40 provides a consistent interface for the user to manage the entity 30 as if a singular machine. For example, the computer system 20 e may be added to the entity 30 via the user interface 40 from any of computer systems 20 a through 20 e. Consequently, the user does not have to administer (e.g., gain access to each machine) and configure (e.g., download new content/software) each machine individually. Thus, time is saved and errors are mitigated. It is noted that the user interface 40 generally does not have to run on each computer in the system 10. As will be described in more detail below, full entity control may be achieved by interfacing to a controller, for example.
In accordance with the present invention, one of the computer systems 20 a through 20 d may be configured to operate as a controller for the entity 30. The controller may operate as a master and determine what information is distributed throughout the entity 30. It is noted that the entity may still continue to operate even if the controller becomes disconnected. However, it is to be appreciated that another member may be promoted to a controller at any time.
The user interface 40 may be served with information provided from each member 20 a through 20 d. This may be achieved by enabling each member to distribute information to the entity 30. Therefore, the interface 40 may provide aggregated information from the entity as a whole—in contrast to conventional systems wherein information may be received and displayed from individual members. For example, computer systems 20 a-20 d processor performance may be displayed as an aggregation of the output of each member of the entity 30. Any of the displays 34 a through 34 e may provide a similar consistent view. It is noted that the members 20 a through 20 d may also be entities. For example, some members could also be a collection of members represented by an entity. Thus, the entity 30 may include members that are entities in their own right.
Alternatively, the user interface enables individual performance to be monitored from any of the displays 34 a through 34 e by selecting a particular member from a context menu (not shown) as will be described in more detail below. Furthermore, entity configurations may be modified from any of the user interfaces 40 by enabling the user to provide input to the interface and thereby distribute resultant modifications throughout the entity 30. This may be achieved for example, by providing the user input to the controller described above wherein the controller may then distribute the modified configuration throughout the entity 30. It is to be appreciated that other distribution systems may be provided. For example, rather than have entity resources centrally distributed and aggregated at the controller, individual members 20 a-20 d may share a master file (e.g., XML) describing the resources and content of each member. As new members are added to the entity 30, the resources and content may be distributed/received from any of the members 20 a-20 d according to the master file.
Turning now to
Referring now to
If the user attempts to connect to a server that is not associated with the entity, a choose options dialog 82 a, illustrated in
Referring now to
Referring briefly to
As described above in relation to
As will be described in more detail below, the user interface 40 may provide performance views that enable a user to display performance counters to a chart control. The counters may be aggregated for the entity and/or related to a specific member. Additionally, status views may be provided wherein entity wide status and/or member status may be viewed. Status may include health state, load-balancing related status, current synchronization status, entity health metrics, monitor related metrics, and/or synchronization loop state, for example.
If a user selects an entity wide view as described above, a performance display 90 a may be provided as depicted in the results pane 50. As illustrated in the scope pane 54, an entity node 90 b may be highlighted indicating to the user that performance and status is provided as an aggregated set from members 90 c and 90 d. For example, a status output 90 g may include display objects (e.g., icons) for providing status information such as connection status and on-line status of cluster members 90 c and 90 d. A synchronization display object 90 h may be provided to show that a particular server is set to be synchronized to the entity.
As illustrated in the display output 90 a, performance information for the cluster may be aggregated and displayed. The aggregated information may be provided from a plurality of sources such as from counters associated with performance aspects of members serving the entity. For example, a second display output window 90 i may provide information regarding particular counters such as processor utilization, memory available, and server requests per second. Inputs 90 j and 90 k (e.g., Add/Remove) may be provided to add and remove counters from the display 90 a respectively. For example, if input Add 90 j were selected, a predetermined list (not shown) may be provided to enable the user to select a performance counter for display output. Similarly, counters may be removed by selecting (e.g., mouse highlighting) a counter within the display 90 i and then selecting the remove input 90 k.
A selection input/output 90 l (e.g., rectangle with selection arrow) may be provided to enable the user to see and/or select a suitable time period for monitoring the aggregated data described above. As the time period is modified, the resolution of the display output 90 a may thereby be altered accordingly. Additional input selections 90 m and 90 n may be provided to enable the user to modify the entity IP address (e.g., integrated operating system load balancing shared virtual IP address) and/or refresh the display with updated information, respectively.
Turning now to
An application relating to the list 100 c may provide a collection of software resources to be utilized for Web site and/or Component Object Model (COM) applications. Applications may include files and directories, Web sites (e.g., IIS), COM+ applications, certificates, registry keys, DSN registry entries, and/or WMI settings, for example. Applications may also be employed for replication and enable administrators to organize sites into logical groups. Furthermore, an application may include more than one Web site and/or other resource, or no Web site at all, yet still be replicated across the cluster. In this manner, administrators are provided granular control over the process by which replication occurs and/or what resources each member will maintain.
The applications interface 100 a may provide an applications task bar 100 d and an applications content display 100 e for providing information regarding items associated with the list 100 c. The task bar 100 d may include a new input 100 d 1, a delete input 100 d 2, a rename input 100 d 3, a synchronize input 100 d 4, and a refresh input 100 d 5. The new input 100 d 1 enables a user to create a new application to be added to the list 100 c, wherein the delete input 100 d 2 enables a user to delete a selected item from the list 100 c. The rename input 100 d 3 similarly enables a user to rename a selected application. The synchronize input 100 d 4 directs a synchronization of the selected application across the entity, and the refresh input 100 d 5 may be employed to update and/or refresh a Web Page associated with the entity.
Positioned below the task bar 100 d is the application list 100 c. Each application in the list 100 c may be displayed with an associated name 100 f and date last modified 100 g. When an application is selected, the applications content display 100 e may change to display associated resources for the applications. The content display 100 e may be employed for displaying and editing a manifest 100 h (e.g., grouping of associated files) of an application. For example, the manifest 100 h may include a plurality of resources such as All Resources, Websites/Vdirs, COM+ applications and proxies, registry paths, file system path, certificates and/or DSN settings.
To add a resource to a selected application, the user may select the resource type from an input 100 i and then select an Add input 100 j. Another browser (not shown) may then be launched acting as a dialog for that particular resource. When the dialog is closed, and the user selects OK, rather than CANCEL, the list of resources 100 h may then be refreshed to display the new resource added. If error conditions are detected, (e.g., application removed by another user) the user may be prompted by an error message, and the application list 100 c and resource list 100 h may then be refreshed.
To remove a resource, the user may select the desired resource type from the resource type input 100 i. A remove input 100 k may then be selected. The user may then be prompted with a YES/NO dialog (not shown) confirming removal of the requested resource. If the user selects YES, the resource may be removed and the resource list 100 h then updated.
Referring now to
Additional inputs may also be included with the events display 110 a. For example, an input 110 d enables a user to select which product category a displayed event should be selected from (e.g., operating system, entity operations). A type input 110 e enables a user to decide which events should be displayed. A source 110 f and/or event id 110 g input enables a user to enter selected events to filter (e.g., display only filtered events, do not display filtered events). After the source 110 f and/or event id 110 g have been entered, a filter input 110 h may then be selected by the user to enable the filter for the source and/or event id entered by the user.
In addition to configuring properties as a whole, member properties may also be configured.
An exclusions input 150 h may be provided to enable a user to exclude specific events from logging and/or to re-enable previously excluded events. If the user selects the exclusions input 150 h, an exclusions dialog 154 a may be displayed.
It is to be appreciated that other operating systems may be employed, such as UNIX, for example. When employed in a LAN networking environment, the server computer 220 may be connected to the local network 251 through a network interface or adapter 253.
|
https://patents.google.com/patent/US7743332?oq=5%2C815%2C488
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Have a look at the messages from the top…
It will answer all…
Regarding the CSV…
After removing the space inbetween,
At the time of submission we need to undo that …using
data.classes…
The steps are:
1. Download data from Kaggle under “data/seedlings/”
2. unzip train.zip under “data/seedlings/”
3. run the script and generate the labels.csv under “data/seedlings/” (then you can use this labels.csv to count and visualize the data)
Since we are going to use ImageClassifierData.from_csv, all the images need to sit under the “train” folder and the sub-folders become redundant.
4. mv train/**/*.png train/ to move files from the species sub-folders to the “train” folder, then rm -r train/**/ to remove all species sub-folders
Hope this helps. All credit to @shubham24 for providing the code.
After five experiments (~3 hours), I submitted the best one.
Kudos to the fast.ai library.
I hope that put things in perspective for other people.
The thread to parse the data and to create the final CSV file was super useful.
Thanks to everyone that shared their insights.
Maybe a late reply and you probably don’t need it anymore but still
import numpy as np
import sklearn.metrics

def f1(preds, targs):
    preds = np.argmax(preds, 1)
    targs = np.argmax(targs, 1)
    return sklearn.metrics.f1_score(targs, preds, average='micro')

learn = ConvLearner.pretrained(f_model, data=data, ps=0.5, xtra_fc=[], metrics=[f1])
My targets are one-hot encoded for example [0, 0, 1, 0, 0, 0, 0, 0, 0, 0] -> class 3
This worked for me
Use this csv file; it has been edited…
Okay. Looks like I’m struggling with this competition, but it seems like it’s giving me an opportunity to play with many parameters.
With a bs of 64 or 32 I wasn’t getting a curve that flattens or a loss that starts to increase, so I tried a bs of 16. This is how my lr curve looks. It suggests using 1 as the lr, which eventually gives no better than a .6 f1 score.
Any guidance?
I am getting the same error. I have deleted the sub-folders in the train folder, but I checked again to make sure. How did you fix this?
I would say you have some issues with data/labels/filenames/anything_not_related_to_training. You have an incredibly low loss (lower than even @jamesrequa declared in this thread) and very low accuracy with high variance. In three epochs (one of which is after unfreeze) you should get 0.9+ accuracy with a 4-5 times higher loss.
I redid everything and ignored f1 metrics for now and things are looking promising.
Question: did I reduce the batch size to 32 for this problem (even though the GPU could accommodate more) so that each epoch (gradient update) gets more time to learn? Or is the right way to think about it that we had a smaller number of images?
A 10-place improvement happened because of this change:
lrs=np.array([lr/18, lr/6, lr/2]) rather than the original
lrs=np.array([lr/9, lr/3, lr])
Thank you @mmr, @ecdrid for continuous guidance.
Disregard this, didn’t see that you’ve fixed it already
|
http://forums.fast.ai/t/kaggle-comp-plant-seedlings-classification/8212?page=5
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
In this example, we will compare the F# and C# code for downloading a web page, with a callback to process the text stream.
We’ll start with a straightforward F# implementation.
// "open" brings a .NET namespace into visibility open System.Net open System open System.IO // Fetch the contents of a web page let fetchUrl callback url = let req = WebRequest.Create(Uri(url)) use resp = req.GetResponse() use stream = resp.GetResponseStream() use reader = new IO.StreamReader(stream) callback reader url
Let’s go through this code:
- The “open” at the top brings a .NET namespace into visibility, the equivalent of the “using System.Net” header in C#.
- Next we define the fetchUrl function, which takes two arguments: a callback to process the stream, and the url to fetch.
- The url is wrapped in a Uri. If we had written let req = WebRequest.Create(url), the compiler would have complained that it didn’t know which version of WebRequest.Create to use.
- When declaring the response, stream and reader values, the “use” keyword is used instead of “let”. This can only be used in conjunction with classes that implement IDisposable. It tells the compiler to automatically dispose of the resource when it goes out of scope. This is equivalent to the C# “using” keyword.
Now here is the equivalent C# implementation.
class WebPageDownloader
{
    public TResult FetchUrl<TResult>(
        string url,
        Func<string, StreamReader, TResult> callback)
    {
        var req = WebRequest.Create(url);
        using (var resp = req.GetResponse())
        {
            using (var stream = resp.GetResponseStream())
            {
                using (var reader = new StreamReader(stream))
                {
                    return callback(url, reader);
                }
            }
        }
    }
}
As usual, the C# version has more ‘noise’. For example, the TResult type has to be repeated three times.*
* It’s true that in this particular example, when all the using statements are adjacent, the extra braces and indenting can be removed, but in the more general case they are needed.
Back in F# land, we can now test the code interactively:
let myCallback (reader:IO.StreamReader) url =
    let html = reader.ReadToEnd()
    let html1000 = html.Substring(0,1000)
    printfn "Downloaded %s. First 1000 is %s" url html1000
    html // return all the html

//test
let google = fetchUrl myCallback ""
Finally, we have to resort to a type declaration for the reader parameter (reader:IO.StreamReader). This is required because the F# compiler cannot determine the type of the “reader” parameter automatically.
A very useful feature of F# is that you can “bake in” parameters in a function so that they don’t have to be passed in every time. This is why the url parameter was placed last rather than first, as in the C# version. The callback can be set up once, while the url varies from call to call.
// build a function with the callback "baked in"
let fetchUrl2 = fetchUrl myCallback

// test
let google = fetchUrl2 ""
let bbc = fetchUrl2 ""

// test with a list of sites
let sites = [""; ""; ""]

// process each site in the list
sites |> List.map fetchUrl2
The last line (using List.map) shows how the new function can be easily used in conjunction with list processing functions to download a whole list at once.
Here is the equivalent C# test code:
[Test]
public void TestFetchUrlWithCallback()
{
    Func<string, StreamReader, string> myCallback = (url, reader) =>
    {
        var html = reader.ReadToEnd();
        var html1000 = html.Substring(0, 1000);
        Console.WriteLine(
            "Downloaded {0}. First 1000 is {1}", url, html1000);
        return html;
    };

    var downloader = new WebPageDownloader();
    var google = downloader.FetchUrl("", myCallback);

    // test with a list of sites
    var sites = new List<string> { "", "", "" };

    // process each site in the list
    sites.ForEach(site => downloader.FetchUrl(site, myCallback));
}
Again, the code is a bit noisier than the F# code, with many explicit type references. More importantly, the C# code doesn’t easily allow you to bake in some of the parameters in a function, so the callback must be explicitly referenced every time.
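One rough workaround in C# (not from the original article; downloader and myCallback refer to the test code above) is to capture the callback in a lambda:

// "bake in" the callback by capturing it in a delegate (a sketch)
var downloader = new WebPageDownloader();
Func<string, string> fetchUrl2 = url => downloader.FetchUrl(url, myCallback);

var google = fetchUrl2("");
sites.ForEach(site => fetchUrl2(site));

It works, but unlike F#'s built-in partial application it has to be written out by hand for every combination of fixed parameters.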
|
https://fsharpforfunandprofit.com/posts/fvsc-download/
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
- NAME
- SEE ALSO
- LICENSE
- AUTHOR
NAME
Test::Run::Base::PlugHelpers - base class for Test::Run's classes with pluggable helpers.
$self->register_pluggable_helper( { %args } )
Registers a pluggable helper class (commonly done during initialisation). %args contain the following keys:
'id'
The 'id' identifying this class type.
'base'
The base class to use as the ultimate primary class of the plugin-based class.
'collect_plugins_method'
The method from which to collect the plugins. It should be defined for every base class in the hierarchy of the main class (that instantiates the helpers) and is traversed there.
$self->calc_helpers_namespace($id)
Calculates the namespace to put the helper with the ID $id in.
$self->create_pluggable_helper_obj({ id => $id, args => $args })
Instantiates a new pluggable helper object of the ID $id and with $args passed to the constructor.
$self->helpers_base_namespace()
TO OVERRIDE: this method determines the base namespace used as the base for the pluggable helpers classes.
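A minimal usage sketch, based only on the methods documented above (package names under "My::" are hypothetical):

package My::Runner;

use strict;
use warnings;
use parent 'Test::Run::Base::PlugHelpers';

# TO OVERRIDE: the base namespace for the pluggable helper classes.
sub helpers_base_namespace
{
    return "My::Runner::Helpers";
}

sub _register_helpers
{
    my $self = shift;

    # commonly done during initialisation
    $self->register_pluggable_helper(
        {
            id                     => "straps",
            base                   => "My::Straps::Base",
            collect_plugins_method => "private_straps_plugins",
        }
    );
}

# later on:
# my $straps = $self->create_pluggable_helper_obj({ id => "straps", args => {} });

1;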
SEE ALSO
Test::Run::Base, Test::Run::Obj, Test::Run::Core
LICENSE
This file is freely distributable under the MIT X11 license.
AUTHOR
Shlomi Fish.
|
https://metacpan.org/pod/Test::Run::Base::PlugHelpers
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
I applied to road test [1] the Intel Joule 570X because it is one of the most powerful solutions on the market.
I'm an Embedded Software Engineer focused on such solutions.
My plan is to attempt 3 phases of road testing:
1. Set up the device to run Windows and Linux environments
2. Run and deploy Hello World applications on both environments.
3. Run and deploy some GPIO-connected application.
Intel Joule 570X [2,3]:
I received my package on time. The Intel Joule 570X ships in a small package - Pic.1
Inside the grey package is a colorful one - Pic.2
Pic 2. Package in Package.
Here is the detailed content of the package after unboxing, presented in Pic.3
Pic.3 Detailed content of the received package.
After setting up Intel System Studio IoT Edition you will get "This machine is missing Docker Toolbox software required by the Intel® System Studio IoT Edition. Installation will abort. Please install Docker Toolbox version 1.11 or higher from here: before installing the Intel® System Studio IoT Edition and ensure Docker Toolbox is in the PATH."
After installing Docker, you will probably see the Java requirement: "Please install Java* SE Runtime version 8 (JRE 8) 64-bit or higher from in order to launch the Intel® System Studio IoT Edition. If you wish to develop Intel IoT Projects with Java then please install Java* SE Development Kit 64-bit version 8 (JDK 8) or higher which includes Java JRE and ensure Java is in the PATH."
Install it as well.
Pic.4. Intel SystemStudio install screen.
Then I leave only the Joule set and hit Next.
Then the next failure will appear. After calling Intel® System Studio IoT Edition from the Windows start menu, the environment will claim that it needs the JDK.
Yes. First it wants the JRE - then the JDK. I installed the JRE and JDK and added the JRE to the path.
Then I put this JDK into Windows->Preferences->Java->Installed JREs, manually added the JRE and made it the default by clicking the checkbox.
This makes Intel System Studio happy after a restart.
Pic. 5 Java Install setup.
I hope this helps.
If you want to try Windows on this - you need to get access to the Insider program. Log into your Windows account and try:
There is a Windows 10 IoT Core Insider Preview web page with links that will be valid for 24 hours after getting access. The file that I downloaded was Windows10_InsiderPreview_IoTCore_IntelJoule_ARM32ARM64_en-us_16193.iso
The road is explained at
To run this Joule you need a power supply: 12V DC, 3A. I bought a micro HDMI -> HDMI cable and a power supply. A monitor and keyboard I had already. In the set there is also a small SD card with a system image, and a small heatsink.
Setup of this heatsink is documented in the Passive Heatsink Assembly document on the Intel webpage. After proper setup we will see the following picture on the bench:
Pic.6 First proper run of Intel Joule 570X - After power plugin
On the monitor screen I saw the following picture.
Pic. 7 Boot screen of Intel Joule 570X with Ostro on board.
Linux Ostro starts and the root account is enabled by default.
uname -a shows us Intel-corei7-64.
A short analysis shows us that the delivered system is legacy.
A new card and BIOS will be helpful - on the Intel page I found something from 2017.
My plan was - as a first blind shot - to try to install Windows without changing the BIOS.
I created an SD card with the latest Windows 10 IoT image.
I used the IoT Dashboard tool.
Pic. 8. Setup of Windows IoT
Then I entered the BIOS with F2 during boot and set two parameters as the specification says.
Pic. 8-9 Joule 570X BIOS settings
Boot Option Menu - EFI uSD Device as the boot device, and OS Selection -> Windows in the boot options.
Then reboot and ...
Pic. 10 Windows is starting on Joule 570X
The Windows 10 IoT system is booting ...
Till ...
Pic. 11. Windows shows a green face - BSOD on Windows IoT
It seems that we need to make more effort to run Windows IoT on our Joule.
My first suspicion was the legacy BIOS version. According to the Microsoft specification, we first need to upgrade the BIOS of our Joule. How to upgrade the BIOS is described on the Intel specification web page. I took 1H3 from 6/8/2017, followed the steps and ... almost bricked the Joule.
After flashing, the white diode stays on and the screen after boot stays off. At that moment I had almost finished my road test with a failure and a brick. Then I tried another BIOS.
I took 1F1 from 5/8/2017 - this time the flash was a success and the Joule woke up.
Note: if you take the debug BIOS version - be patient - it takes very long to start. Do not use it if you do not need to.
The road test is on the road again!
I started Windows IoT booting and the green screen - INACCESSIBLE_BOOT_DEVICE - once again appeared.
My second suspicion was the Microsoft image - I took the legacy one. And INACCESSIBLE_BOOT_DEVICE once again appeared.
I was stuck - the only option remaining was 1H3 with host device drivers from 1F1. That was very strange, because I had done some research and some people are running Windows 10 IoT on this board.
The next day I put the Joule BIOS 1H3 into the 1F1 folder and started flash.bat with this. It works and is alive! Unfortunately, that didn't help to start Windows IoT either ... The same green screen BSOD (this should be called GSOD now, I think) appeared with both images.
I use a mechanical USB keyboard - a Tesoro Ultimate that contains a USB hub. My mouse is connected to this keyboard, leaving one free USB slot. I created a USB stick with the Windows image and plugged it into the keyboard hub. Joule and Windows work now! But ... it loads extremely slowly from the legacy USB stick. I found a special adapter that can be used with a micro SD card - that makes the booting time reasonable.
Finally, after two days of fighting, Windows IoT on the Joule started properly.
I ordered a powered USB hub (TP-Link UH720) and a 7-inch HDMI LCD touch screen.
A standard keyboard and mouse, as usual.
Pic. 12 Final Tool Set for building Joule Setup
All required parts (except the HDMI -> micro HDMI converter) are above.
Then I connected all the parts.
Pic. 13. Windows IoT on final System Setup is Ready to get Software
And my Windows IoT started. (Of course - first go to setup and switch the boot sequence.)
Running a first program on the built-in Ostro turned out not to be as easy as I expected.
I had already installed Intel System Studio IoT Edition.
I booted the system directly into Ostro Linux.
And the first problem appears - cannot connect as root. No root password was set.
Moreover, there was no internet connection on our system.
Here the YouTube presentation helps: Engineering Bench Talk, Mouser Electronics [2].
Few commands and we have connection:
connmanctl
connmanctl> scan wifi
connmanctl> services
connmanctl> agent on
connmanctl> connect wifi__managed_psk
After reboot the connection remains.
passwd will set up a password for the root account as well.
The method of enabling root access presented in the YouTube video looks wrong, I think.
I used vi /etc/ssh_ssd_config and directly put the line "PermitRootLogin yes" at the end of the configuration file.
I also set the following iptables policies (one way to apply them is sketched after the list):
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
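For reference, policies like these would be set with commands along the following lines (a sketch; the original post does not show the exact invocation):

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT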
But after a reboot it looks like this:
root@intel-corei7-64:~# iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-A INPUT -i ve-+ -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i ve-+ -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p udp -m udp --dport 5353 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A FORWARD -o ve-+ -j ACCEPT
-A FORWARD -i ve-+ -j ACCEPT
But it works fine.
Especially with Intel System Studio IoT Edition installed.
I started this Intel tool. It found my board on the net and connected properly.
root/password is needed to connect.
I started from hello world. The project was created, and I hit the run button:
Pic. 14. Intel SystemStudio IoT Edition problem with Build In Ostro.
And the following information is displayed (Pic. 14).
I typed the address in and saw a 404 - dev humor from Intel (Pic. 15)
Pic. 15 Intel flashing-ostro-on-joule web page
Looks like we have a small problem. On the other hand, it is only a warning, so there is probably some hope. So I clicked OK and followed that path.
Hello world was displayed - on the host screen.
Pic.16 Intel SystemStudio IoT Edition builds hello world.
But I wanted to see that on the real system. So I typed:
root@intel-corei7-64:~# cd /tmp
root@intel-corei7-64:/tmp# ls
Hello_World
root@intel-corei7-64:/tmp# ./Hello_World
Hello, This is a road test for Element14
And we have the first program running on the built-in Ostro. Then I tried something more complicated: the Led Blink example. Unfortunately, the Led Blink example program does not compile.
The INTEL_JOULE_EXPANSION definition is missing from the include files. I made some patches ... and my program for LED blinking looks like this:
#include <mraa.hpp>
#include <iostream>
#include <unistd.h>

int main()
{
    // select onboard LED pin based on the platform type and
    // create a GPIO object from MRAA using it
    mraa::Gpio* d_pin = NULL;
    // the original listing was cut off here; the third constructor
    // argument (raw) and the blink loop are an assumed completion
    d_pin = new mraa::Gpio(100, true, true);
    d_pin->dir(mraa::DIR_OUT);
    for (;;) {
        d_pin->write(1); // LED on
        sleep(1);
        d_pin->write(0); // LED off
        sleep(1);
    }
}
This helps, and now the LED on the Joule is blinking.
Note: if you want to store permanent system settings in the BIOS - buy a battery. Otherwise, you will use F9 each time.
After booting into Windows IoT I started to make a hello world on the Windows setup. The system gives us a nice web interface.
Note: the magic username/password for a bare Windows IoT install is Administrator/p@ssw0rd
This interface looks like Pic. 17.
Pic. 17 Windows IoT on Joule 570X Web Management Page
At first I thought that the Hello World example on this device would be simple. Under Apps->Quick-run samples there were 3 applications, including hello world. Unfortunately - not working.
Pic. 18 Quick-run samples build not working
I hit the 'source code' link and found
I downloaded the entire repository and started work with Visual Studio 2017 Community Edition. After a few clicks you will find the following page:
There is a picture where the new project type is Windows IoT Core. I had installed almost all tools under the Visual Studio Installer tool; none of them showed Windows IoT Core.
Pic. 19 Web Page with Windows IoT Core download component required to build Joule 570X Examples.
Then I found the BIG GREEN BUTTON - Download. After downloading and running it, the magic Windows IoT Core appeared in the Visual Studio 2017 build environment.
I had been blind. Now the path goes straight.
Open Visual Studio 2017.
Open the solution HelloWorld in C# (yes ... C#, not C++ - that was a small surprise for an Embedded Engineer).
Remember to set, in the project settings, Universal Windows -> Extensions (Windows IoT Extension for UWP) as the required SDK for the Joule.
Then I set Debug/x64/Remote Machine and just hit the green right arrow to run.
It will deploy and run our software.
Pic. 20 Visual Studio 2017 Builds Hello World.
After correct deployment, Visual Studio will change its look to the following:
Pic. 21 Visual Studio 2017 Debugging Hello World.
On the device the following screen will appear:
Pic. 22 Screen of Joule 570X running Hello World under Windows IoT.
After hitting Click Me! with the mouse ... the screen will show:
Pic. 23 Screen of Joule 570X running Hello World under Windows IoT - hit Click Me! with mouse.
Yeah, a small change in MainPage.xaml.cs was made:
private void ClickMe_Click(object sender, RoutedEventArgs e)
{
HelloMessage.Text = "Hello, element14 Joule Road Test";
}
Next I started on blinking. I stopped the debugger; Visual Studio 2017 and the device were restored to the main state.
Pic. 24 Default Starting Screen of Windows IoT
Then I opened BlinkyCpp.sln from the downloaded repository.
Hit build and deploy ...
And on the device screen the following picture appears:
Pic. 25 Screen of Joule 570X running GPIO Example.
And a big red point blinks red/white. Actually, that wasn't the expected result. I thought that some diode would blink, as in the Ostro solution. Therefore I looked into the code.
In the file MainPage.xaml.cpp there was the following code:
pin_ = gpio->OpenPin(LED_PIN);
pin_->Write(pinValue_);
pin_->SetDriveMode(GpioPinDriveMode::Output);
where LED_PIN was bound to 5. I changed it to:
pin_ = gpio->OpenPin(0);
pin_->Write(pinValue_);
pin_->SetDriveMode(GpioPinDriveMode::Output);
I compiled and deployed. This time the big red/white point was blinking, and the diode on the Joule board was blinking too.
The Intel Joule 570X is a very good piece of silicon. Powerful and nice to work with. Everything is well documented, and at no point during the entire road test was I lost - on either the Microsoft or the Linux/Intel side.
Most things work as expected. I didn't have major problems.
During this test I prepared a functional development environment for my stream engine processing - both functional and ready to develop and deploy software.
On the other hand, sad news appeared during this road test - I got the End of Life notice for this chip [5] - and my entire enthusiasm just dropped away. This info makes me sad.
A nice idea - maybe too powerful for embedded development, but this could have been one of the most powerful solutions on the market.
I think the reasons for this decision can be found in market analysis [6] - but I'll still be crossing my fingers for this nice piece of silicon.
1. Intel® Joule™ 570x Developer Kit
2. Engineering Bench Talk, Mouser Electronics
3. Getting Started Guide Arrow Electronics
4. Official Intel Board Specification
5. Intel End of Life Notice
6. Raspberry Pi 3 still rulez
|
https://www.element14.com/community/roadTestReviews/2429/l/Intel%C2%AE-Joule%E2%84%A2-570x-Developer-Kit
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
There were several exciting announcements at Google IO 2017: too many to go through all in one post. The thing that stood out to me the most (other than Kotlin of course 😉) were the Android Architecture Components.
Google has finally given us some guidance around recommended architecture on Android and they’ve done so in a way that is modular, flexible and allows developers to plug in different modules and frameworks as needed.
In one of his talks, Yiğit Boyar said that there are more components and recommendations coming but that they were ready to announce the following and show us how they work.
- Room
- ViewModel
- Lifecycle
- LiveData
If you didn’t get a chance to watch the architecture videos on these topics yet, I’d highly recommend them. There are a few good ones but this video is a good summary.
I’ll briefly explain each and how we can use them to simplify development with Realm.
Room
Room is Google’s new ORM built on top of SQLite. It’s a major improvement over the previous Google API for working with SQLite. The name also has a nice ring to it 😉
Unlike many other SQLite ORMs, Room requires you to write the SQL to query for data and doesn’t support lazy loading of children in the objects returned. This is actually a strength over many other ORMs, that generate SQL for the developer under the hood. While lazy loading and not having to manually create and maintain SQL over time seems appealing at first, anyone who’s spent a fair amount of time using an ORM knows that as soon as you start following relationships between objects, new (sometimes large and inefficient) queries get run, which can greatly degrade performance.
While you still have to manually query and join data from the database, Room makes this easier by defining a @Query annotation that takes the SQL as its value. You attach that annotation to a Data Access Object (DAO) interface method that names the query and sets the return type. Room then generates the implementation DAOs for you at compile time.
Here are a few examples
@Query("SELECT Loan.id, Book.title as title, User.name as name, Loan.startTime, Loan.endTime " + "FROM Book " + "INNER JOIN Loan ON Loan.book_id = Book.id " + "INNER JOIN User ON User.id = Loan.user_id " + "WHERE User.name LIKE :userName " + "AND Loan.endTime > :after " ) public LiveData<List<LoanWithUserAndBook>> findLoansByNameAfter(String userName, Date after); @Query("SELECT * From Loan") LiveData<List<Loan>> findAll();
If you’re using SQLite for local storage on Android, Room represents a major improvement in how you can work with it. For more information, check out the Google Room Documentation.
ViewModel
Google ViewModels are designed to provide access to UI related data to the Activity and combined with Google’s new LiveData and LifeCycle components it will change the way you write Android apps going forward.
Get more development news like this
The single best feature about this approach is that the ViewModel is lifecycle aware and isn’t destroyed on configuration changes, such as when the device rotates. This is an issue that has plagued Android apps for a long time. As you can see in the diagram taken from the android developer documentation, the ViewModel lives until the Activity finishes.
For Realm, this means that the Realm lifecycle can be managed in the ViewModel and closed when the ViewModel is no longer being used.
This all works because of a new concept called Lifecycles, or Lifecycle aware components.
Lifecycle
Most of the app components that are defined in the Android Framework have lifecycles attached to them. Realm is no different. For every Realm.getDefaultInstance() you invoke, you need to call realm.close() on that instance before it’s GC’d. In addition, you need to remove any change listeners and close any open transactions before closing the realm instance. The Android Lifecycle package has classes and interfaces that make it easier for you to build components that are “Lifecycle Aware”. We’ll see how this works in practice in the code sample later on, but first let’s talk about the last piece, LiveData.
LiveData
LiveData provides a way to stream data from your ViewModel to your Activity/UI without the Activity needing to poll the ViewModel for changes, or worse, make the ViewModel hold a reference to the Activity in order to update the display. Your Activity can bind to LiveData so that it reacts to data changes as they occur.
It is awesome to see Google embrace the Live Data concept. Their extensible LiveData class works really well with Realm’s observable live data, providing a layer of abstraction so that the Activity isn’t exposed to RealmResults and RealmObjects.
The best part of the new Architecture Components is its pluggable nature. Any data can be represented as LiveData. Any component can be made Lifecycle aware, and ViewModels can be written to meet the needs of any app. Google has done a great job of providing guidance and tools to help, without getting in the way. Cheers Google! 🎉
Let’s take a look at how this works with Realm using the Google Code labs android-persistence project as a starter and swapping the SQLite table structure with a Realm Object data model. Here is a before and after view.
The biggest difference is in the relationships. With Realm, both Users and Books have a collection of loans, and each Loan has a reference to the Book and User to which it belongs. Whereas, with SQLite, these relationships are inferred by Foreign Key (FK) references stored in the Loan table and joined via SQL Query joins as we’ll see in a moment.
I had to update the model classes to account for this, but not too much. Here is an example of the Loan model class, with a before and after.
Queries
Queries also have to change a bit. For example, instead of defining an interface method and SQL for findLoansByNameAfter, I need to provide a method body and a RealmQuery.
With the model changes in place, let’s see how it all fits together, starting from the top of the stack.
Activity
The Activity extends from LifecycleActivity. Eventually this will be merged into the AppCompat libraries. Doing this allows us to use Lifecycle aware Components, like ViewModels which we’ll see in a minute.
public class CustomResultUserActivity extends LifecycleActivity {

    private CustomResultViewModel mShowUserViewModel;
    private TextView mBooksTextView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.db_activity);
        mBooksTextView = (TextView) findViewById(R.id.books_tv);

        // Android will instantiate my ViewModel for me, and the best part is
        // the viewModel will survive configurationChanges!
        mShowUserViewModel = ViewModelProviders.of(this).get(CustomResultViewModel.class);

        // We'll observe updates to our LiveData loan string.
        mShowUserViewModel.getLoansResult().observe(this, new Observer<String>() {
            @Override
            public void onChanged(@Nullable final String result) {
                mBooksTextView.setText(result);
            }
        });
    }

    public void onRefreshBtClicked(View view) {
        mShowUserViewModel.simulateDataUpdates();
    }
}
Notice that the onCreate gets an instance of the ViewModel using the built in ViewModelProviders.of(...). This will create a new CustomResultViewModel for us if there isn’t already one created for us in this Activity lifecycle. If the user rotates the phone and the resulting configuration change causes the Activity to be destroyed and recreated, the ViewModel will survive and the same instance will be returned in the next Activity.onCreate call. At the same time, when the Activity finishes and the ViewModel is no longer being used, the system will still give the ViewModel a chance to clear resources before it destroys it.
The ViewModel exposes LiveData to the activity via mShowUserViewModel.getLoansResult() and the Activity observes changes. We don’t need to add code to stop observing in onPause() or onStop(), because LiveData is lifecycle aware and bound to the lifecycle of this Activity. That’s why we passed the activity in as the first argument to .observe(...).
ViewModel
The ViewModel stores and manages the UI data and exposes actions to the Activity, like simulateDataUpdates().
public class CustomResultViewModel extends ViewModel {

    private Realm mDb;
    private LiveData<String> mLoansResult;

    public CustomResultViewModel() {
        mDb = Realm.getDefaultInstance();
        subscribeToMikesLoansSinceYesterday();
        simulateDataUpdates();
    }

    public void simulateDataUpdates() {
        DatabaseInitializer.populateAsync(mDb);
    }

    public LiveData<String> getLoansResult() {
        return mLoansResult;
    }

    private void subscribeToMikesLoansSinceYesterday() {
        LiveRealmData<Loan> loans = loanModel(mDb)
                .findLoansByNameAfter("Mike", getYesterdayDate());
        mLoansResult = Transformations.map(loans, new Function<RealmResults<Loan>, String>() {
            @Override
            public String apply(RealmResults<Loan> loans) {
                StringBuilder sb = new StringBuilder();
                SimpleDateFormat simpleDateFormat =
                        new SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.US);
                for (Loan loan : loans) {
                    sb.append(String.format("%s\n (Returned: %s)\n",
                            loan.getBook().getTitle(),
                            simpleDateFormat.format(loan.getEndTime())));
                }
                return sb.toString();
            }
        });
    }

    /**
     * This method will be called when this ViewModel is no longer used and will be destroyed.
     * <p>
     * It is useful when ViewModel observes some data and you need to clear this subscription to
     * prevent a leak of this ViewModel... Like the Realm instance!
     */
    @Override
    protected void onCleared() {
        mDb.close();
        super.onCleared();
    }

    private Date getYesterdayDate() {
        Calendar calendar = Calendar.getInstance();
        calendar.set(Calendar.DATE, -1);
        return calendar.getTime();
    }
}
The Realm version of the CustomResultViewModel is very similar to the Google Code Labs version except that instead of creating a separate DTO to hold Loan and User data, I just reference a Loan, which references the User it belongs to. This isn’t possible to do efficiently with an ORM and SQL. With Realm we don’t have this limitation because relationships are essentially free with Realm. There are no joins happening and no additional queries being run. Realm relationships are, simply, an object graph.
Finally, LiveRealmData<T> is-a LiveData<RealmResults<T>>, as you see here.
public class LiveRealmData<T extends RealmModel> extends LiveData<RealmResults<T>> {

    private RealmResults<T> results;

    private final RealmChangeListener<RealmResults<T>> listener =
            new RealmChangeListener<RealmResults<T>>() {
                @Override
                public void onChange(RealmResults<T> results) {
                    setValue(results);
                }
            };

    public LiveRealmData(RealmResults<T> realmResults) {
        results = realmResults;
    }

    @Override
    protected void onActive() {
        results.addChangeListener(listener);
    }

    @Override
    protected void onInactive() {
        results.removeChangeListener(listener);
    }
}
This is a wrapper for the RealmResults to expose them as Lifecycle aware LiveData.
DAOs
Hiding your database interactions behind DAOs is a good idea for interoperability with other components and can help with testing. For example, the ViewModels can be tested independently of the DAOs, which can be mocked for unit testing.
The Codelab example uses inheritance to add methods onto a RoomDatabase and has a factory method to get a singleton instance of it like so:
@Database(entities = {User.class, Book.class, Loan.class}, version = 1)
public abstract class AppDatabase extends RoomDatabase {

    private static AppDatabase INSTANCE;

    public abstract UserDao userModel();
    public abstract BookDao bookModel();
    public abstract LoanDao loanModel();

    public static AppDatabase getInMemoryDatabase(Context context) {
        if (INSTANCE == null) {
            INSTANCE = Room.inMemoryDatabaseBuilder(context.getApplicationContext(), AppDatabase.class)
                    // To simplify the codelab, allow queries on the main thread.
                    // Don't do this on a real app! See PersistenceBasicSample for an example.
                    .allowMainThreadQueries()
                    .build();
        }
        return INSTANCE;
    }

    public static void destroyInstance() {
        INSTANCE = null;
    }
}
While we don’t need to define a custom factory lookup with Realm (because Realm already provides one and the lifecycle is bound to the ViewModels lifecycle in this case), it would be nice for us to be able to get the instance of the DAOs associated with a given Realm instance, as needed. As a reminder, all realm models are tied to the lifecycle of the Realm instance from which they were fetched.
To accomplish this, I could have written a simple RealmUtils.java which does this… but that would be boring 😉. So I made use of the now (fully supported!) Kotlin Extensions functionality instead.
@file:JvmName("RealmUtils") // pretty name for utils class if called from Java

...

fun Realm.userModel(): UserDao = UserDao(this)
fun Realm.bookModel(): BookDao = BookDao(this)
fun Realm.loanModel(): LoanDao = LoanDao(this)

// Convenience extension on RealmResults to return as LiveRealmData
fun <T : RealmModel> RealmResults<T>.asLiveData() = LiveRealmData<T>(this)
Now, in Java code, I can still call RealmUtils.bookDao(realm) to get a book DAO, but if I’m accessing from Kotlin code, I can simply say realm.bookDao() to get a bookDao.
Now that I have a way to create DAOs, let’s take a look at a sample Realm DAO. This is the LoanDao used by the CustomResultViewModel to find loans by name, after a specific date.
public class LoanDao {

    private Realm mRealm;

    public LoanDao(Realm realm) {
        this.mRealm = realm;
    }

    public LiveRealmData<Loan> findLoansByNameAfter(final String userName, final Date after) {
        return asLiveData(mRealm.where(Loan.class)
                .like("user.name", userName)
                .greaterThan("endTime", after)
                .findAllAsync());
    }

    public void addLoan(final Date from, final Date to, final String userId, final String bookId) {
        User user = mRealm.where(User.class).equalTo("id", userId).findFirst();
        Book book = mRealm.where(Book.class).equalTo("id", bookId).findFirst();
        Loan loan = new Loan(from, to, book, user);
        mRealm.insert(loan);
    }
}
The last thing we need to do is to recreate the live data feed simulation from the reference example.
Simulating Data Updates
When the Activity starts, and again whenever the refresh button is tapped, the data is cleared and new simulation data is put into the database.
I modified the SQLite example to do this. My realm simulation code is very similar to the original except that I’m pushing the work to the background using Realm’s built in Async transaction methods, instead of an AsyncTask. I might also use an IntentService for this type of work in a real world Realm App.
// Simulate a blocking operation delaying each Loan insertion with a delay:
private static final int DELAY_MILLIS = 500;

public static void populateAsync(final Realm db) {
    Realm.Transaction task = populateWithTestDataTx;
    db.executeTransactionAsync(task);
}

private static Realm.Transaction populateWithTestDataTx = new Realm.Transaction() {
    @Override
    public void execute(Realm db) {
        db.deleteAll();
        checkpoint(db);

        User user1 = addUser(db, "1", "Jason", "Seaver", 40);
        User user2 = addUser(db, "2", "Mike", "Seaver", 12);
        addUser(db, "3", "Carol", "Seaver", 15);

        Book book1 = addBook(db, "1", "Dune");
        Book book2 = addBook(db, "2", "1984");
        Book book3 = addBook(db, "3", "The War of the Worlds");
        Book book4 = addBook(db, "4", "Brave New World");
        addBook(db, "5", "Foundation");

        try {
            // Loans are added with a delay, to have time for the UI to react to changes.
            Date today = getTodayPlusDays(0);
            Date yesterday = getTodayPlusDays(-1);
            Date twoDaysAgo = getTodayPlusDays(-2);
            Date lastWeek = getTodayPlusDays(-7);
            Date twoWeeksAgo = getTodayPlusDays(-14);

            addLoan(db, user1, book1, twoWeeksAgo, lastWeek);
            Thread.sleep(DELAY_MILLIS);
            addLoan(db, user2, book1, lastWeek, yesterday);
            Thread.sleep(DELAY_MILLIS);
            addLoan(db, user2, book2, lastWeek, today);
            Thread.sleep(DELAY_MILLIS);
            addLoan(db, user2, book3, lastWeek, twoDaysAgo);
            Thread.sleep(DELAY_MILLIS);
            addLoan(db, user2, book4, lastWeek, today);
            Log.d("DB", "Added loans");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
};

private static Date getTodayPlusDays(int daysAgo) {
    Calendar calendar = Calendar.getInstance();
    calendar.set(Calendar.DATE, daysAgo);
    return calendar.getTime();
}

private static void checkpoint(Realm db) {
    db.commitTransaction();
    db.beginTransaction();
}

private static void addLoan(final Realm db, final User user, final Book book, Date from, Date to) {
    loanModel(db).addLoan(from, to, user.getId(), book.getId());
    checkpoint(db);
}

private static Book addBook(final Realm db, final String id, final String title) {
    Book book = bookModel(db).createOrUpdate(new Book(id, title));
    checkpoint(db);
    return book;
}

private static User addUser(final Realm db, final String id, final String name, final String lastName, final int age) {
    User user = userModel(db).createOrUpdate(new User(id, name, lastName, age));
    checkpoint(db);
    return user;
}
If you’re interested in taking the modified example for a spin, you can download the source code here.
Conclusion
It’s exciting to see some guidance and framework support from Google to make Android development cleaner and easier than ever. With first class support for Kotlin and more architecture components to come, (and faster reactive NOSQL alternatives to SQLite 😉), there is no better time than now to be an Android developer!
Next Up: Realm Java enables you to efficiently write your Android app’s model layer in a safe, persisted, and fast way.
About the content
This content has been published here with the express permission of the author.
|
https://academy.realm.io/posts/android-architecture-components-and-realm/
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
I am attempting to run Python code on a Coldfusion server using Java. I am familiar with CFML but an absolute beginner with Java.
I can instantiate the objects and list their methods ok, however I am getting stuck with different object types.
The example I am trying to get to work in Coldfusion is
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
public class JSR223 {
public static void main(String[] args) throws ScriptException {
ScriptEngine engine = new ScriptEngineManager().getEngineByName("python");
engine.eval("import sys");
engine.eval("print sys");
engine.put("a", 42);
engine.eval("print a");
engine.eval("x = 2 + 2");
Object x = engine.get("x");
System.out.println("x: " + x);
}
}
ScriptEngine = CreateObject("java", "javax.script.ScriptEngine");
ScriptEngineManager = CreateObject("java", "javax.script.ScriptEngineManager");
ScriptException = CreateObject("java", "javax.script.ScriptException");
ScriptEngine engine = new ScriptEngineManager().getEngineByName("python");
engine = ScriptEngineManager.getEngineByName("python");
writeDump(engine);
interp = CreateObject("java", "org.python.util.PythonInterpreter");
java.lang.NullPointerException
The keyword new invokes the class constructor. ColdFusion does not support new with java objects. Instead, use the pseudo-method init():
The init method is not a method of the object, but a ColdFusion identifier that calls the new function on the class constructor.
A literal translation of that code is to chain the calls. Invoke init() first, to create a new instance. Then call getEngineByName() on that instance:
engine = createObject("java", "javax.script.ScriptEngineManager").init().getEngineByName("python");
Though for better readability, you may want to break it up:
ScriptEngineManager = createObject("java", "javax.script.ScriptEngineManager").init();
engine = ScriptEngineManager.getEngineByName("python");
As an aside, in this specific case, you can technically omit the call to init(). ColdFusion will automatically invoke the no-arg constructor as soon as you call getEngineByName(): "...If you call a public non-static method on the object without first calling the init method, ColdFusion makes an implicit call to the default constructor."
Update based on comments:
If engine is not defined, that means the "python" engine was not found.
Be sure you have added the jython jar file to the CF class path (or loaded it via this.javaSettings in your Application.cfc). For some reason it does not work if you load the jar dynamically through ACF's this.javaSettings. However, it works fine if you place the jython jar in WEB-INF\lib and restart CF. Try adding the jar to the physical CF class path, rather than loading it dynamically. Once it is registered, the code should work correctly.
It also works from CF if you manually register the engine first (see below). Not sure why that extra step is necessary when ScriptEngineManager is invoked in CF, but not from Eclipse.
ScriptEngineManager = createObject("java", "javax.script.ScriptEngineManager").init();
factory = createObject("java", "org.python.jsr223.PyScriptEngineFactory").init();
ScriptEngineManager.registerEngineName("python", factory);
engine = ScriptEngineManager.getEngineByName("python");
// ...
How does the other class, ScriptEngine fit in with this?
Unlike CF, Java is strongly typed. That means when you declare a variable, you must also declare its type (or class). The original code declares the engine variable as an instance of the ScriptEngine class. Since CF is weakly typed, that is not necessary. Just declare the variable name as usual. The getEngineByName() method automatically returns a ScriptEngine object (by the definition in the API).
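Putting it all together in CFScript, a minimal end-to-end sketch might look like this (assuming the jython jar is on the CF class path; the registration lines come from the workaround above):

mgr = createObject("java", "javax.script.ScriptEngineManager").init();
factory = createObject("java", "org.python.jsr223.PyScriptEngineFactory").init();
mgr.registerEngineName("python", factory);
engine = mgr.getEngineByName("python");
engine.put("a", 42);
engine.eval("x = a + 2");
writeOutput("x: " & engine.get("x")); // prints x: 44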
|
https://codedump.io/share/bliKQ6Lj7sHR/1/java-and-jsr-223-to-run-python-or-ruby-code-on-a-coldfusion-server
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Short Description
Python environment for proso-apps
Full Description
PROSO Apps
Development
Setup your local virtual environment:
mkvirtualenv proso-apps
If the environment already exists, activate it:
workon proso-apps
To install/reinstall the project:
make install|reinstall
If you want to download javascript dependencies, you have to run Bower and grunt:
make build-js
If you want to download javascript dependencies and create a symbolic link to clone of proso-apps-js repository, you have to run Bower (develop) and grunt:
make build-js-develop
To run tests:
make test
Release
In case of final major version (you have to setup your PIP environment before):
make release
In case of final micro version:
make release-micro
Migration to python3
virtualenv
- (only for testing) install sqlite-devel or libsqlite3-dev from the repository
- install python3.5
- create a virtualenv with python3.5 (use -p in the mkvirtualenv command)
- install requirements (make install); a condensed sketch follows this list
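Condensed into shell commands, the setup might look like this (a sketch; the package name and python3.5 path are assumptions for a Debian-like system):

sudo apt-get install libsqlite3-dev   # only for testing
mkvirtualenv -p /usr/bin/python3.5 proso-apps
make install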
social-auth
details at
- replace 'social_auth' with 'social.apps.django_app.default' in INSTALLED_APPS in settings.py
- replace old include with 'url('', include('social.apps.django_app.urls', namespace='social'))' in urls.py
- change the settings variable names used to connect to Google and Facebook in settings.py
- change the Facebook and Google backends in AUTHENTICATION_BACKENDS in settings.py (see the sketch after this list)
- in google developer console allow Google+ API
- clean the session and force the users to login again in your site or run script to update session in DB
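As a sketch of the settings.py side of these steps (the setting names and backend paths follow python-social-auth's documentation; your keys will differ):

INSTALLED_APPS = [
    # ...
    'social.apps.django_app.default',  # was: 'social_auth'
]

AUTHENTICATION_BACKENDS = (
    'social.backends.google.GoogleOAuth2',
    'social.backends.facebook.FacebookOAuth2',
    'django.contrib.auth.backends.ModelBackend',
)

SOCIAL_AUTH_FACEBOOK_KEY = '...'
SOCIAL_AUTH_FACEBOOK_SECRET = '...'
SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = '...'
SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = '...'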
migrations
python manage.py migrate default 0001 --fake
# run somehow migration 0006_migrate_to_psa.py (dir: proso_user) - move it to migrations, run, move it back
python manage.py migrate default
python manage.py migrate flatblocks 0001 --fake
python manage.py migrate lazysignup 0001 --fake
python manage.py migrate
Docker Pull Command
Owner
hkarasek
Source Repository
|
https://hub.docker.com/r/hkarasek/proso-apps/
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Visual Studio 2015 Update 2 brings several improvements. Among all of them, one of the small but very handy improvements is the "Add Using" command for misspelled types using "fuzzy" matching. In case there is a typo in any predefined type, Visual Studio searches the entire solution, analyzes the metadata and uses fuzzy matching to suggest the correct type.
Quick Visual Studio Tip : Did you know – you can close specific set of open documents together in Visual Studio ?
For example, consider that you want to use SqlConnection, but misspelled it while typing. Now, with the help of the light bulb, along with the other suggested tips, you can see Visual Studio also recommending the best-matching using statement, using System.Data.SqlClient, and the associated changes, as shown in the screenshots below.
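To make the scenario concrete, here is a small illustration (the misspelling and connection string are invented for the example):

// Typing this with a typo ("SqlConection" instead of "SqlConnection")...
var conn = new SqlConection("Server=.;Database=Test;Trusted_Connection=True;");

// ...brings up the light bulb (Ctrl+.), which offers to add
//     using System.Data.SqlClient;
// and correct the identifier to SqlConnection in one step.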
Related Tip : XAML Inspection Toolbar : Inspecting XAML for Live Debugging from the Apps
Hope this helps !
Pingback: Dew Drop – May 18, 2016 (#2254) | Morning Dew
Oddly, it only seems to work if it needs to add the namespace to see the misspelled identifier. Misspell a name that is in an already included namespace, and the tip won’t offer the correct spelling.
Pingback: Dew Drop – May 19, 2016 (#2255) | Morning Dew
Pingback: Visual Studio – Developer Top Ten for May 25th, 2016 - Dmitry Lyalin
Pingback: Using light bulb action to refactor asynchronous method synchronous – Visual Studio 2015
|
https://dailydotnettips.com/2016/05/18/add-using-command-for-misspelled-types-usingfuzzy-matching-visual-studio-2015/
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
ECF/Bot Framework
The Bot Framework was introduced as a new API in ECF's 1.0.0M6 release. It interoperates with the presence API, which allows it to be leveraged both as a typical chatroom-oriented bot and as one based around instant messaging. A bot can easily be created with just an implementation of an interface in addition to the declaration of extension points.
The tutorial below will show you just how easy it is to create a bot for IRC. This tutorial assumes the reader has some basic knowledge about plug-in development. This tutorial was tested on Linux using Sun's 1.4.2_13 JDK, the 3.3M6 build of the Eclipse SDK, and code from Eclipse.org's HEAD branch.
Contents
Requirements
As regular expression pattern matching is used, a Java runtime environment of 1.4.2 or higher is required.
The Bot Framework leverages code that is new to Equinox in Eclipse 3.3, so a recent milestone must be installed for this example to work.
ECF Plug-ins
- org.eclipse.ecf (core ECF APIs)
- org.eclipse.ecf.core.identity (identity and namespace APIs)
- org.eclipse.ecf.presence (presence APIs for monitoring messages and presence status)
- org.eclipse.ecf.presence.bot (bot API)
- org.eclipse.ecf.provider.irc (IRC implementation and ECF bridging code)
Please see ECF's downloads page to find out how to retrieve these plug-ins.
Project Setup
Dependencies
- Create a Plug-in Project like how you normally would. Since this is a bot that will be run in headless mode, we do not need any UI components. You do not even need an activator class.
- Open the MANIFEST.MF file and go to the 'Dependencies' tab.
- Add org.eclipse.ecf, org.eclipse.ecf.presence, and org.eclipse.ecf.presence.bot as a 'Required Plug-in'.
- Now add org.eclipse.core.runtime as an 'Imported Package'.
MANIFEST.MF
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Geir Plug-in
Bundle-SymbolicName: org.eclipse.ecf.example.geir;singleton:=true
Bundle-Version: 1.0.0
Require-Bundle: org.eclipse.ecf,
 org.eclipse.ecf.presence,
 org.eclipse.ecf.presence.bot
Import-Package: org.eclipse.core.runtime
Extensions
- Open the Extensions tab.
- Add the org.eclipse.ecf.presence.bot.chatRoomRobot and the org.eclipse.ecf.presence.bot.chatRoomMessageHandler extension point.
- Select the org.eclipse.ecf.presence.bot.chatRoomRobot extension.
- Fill in something unique for your 'id'. org.eclipse.ecf.example.bot.geir2
- Fill in ecf.irc.irclib for your 'containerFactoryName'.
- For the 'connectId', select an IRC server of your choice and a name for the bot. irc://geir2@irc.freenode.net
- For the 'chatRoom' field, pick the channel that you want your bot to join upon successful connection to the server above. #eclipse
- Now select the org.eclipse.ecf.presence.bot.chatRoomMessageHandler extension point.
- For your 'id', copy the same 'id' that you filled in above. org.eclipse.ecf.example.bot.geir2
- In 'filterExpression', enter a regular expression that should be matched for parsing purposes for your bot. (~bug[0-9]*)
- Click on the 'class*' hyperlink and then create a new class that implements the 'org.eclipse.ecf.presence.bot.IChatRoomMessageHandler' interface. For this example, I will assume that your class's name is Geir2Bot under the org.eclipse.ecf.example.bot package.
plugin.xml
<?xml version="1.0" encoding="UTF-8"?> <?eclipse version="3.2"?> <plugin> <extension point="org.eclipse.ecf.presence.bot.chatRoomMessageHandler"> <handler chatRoomRobotId="org.eclipse.ecf.example.bot.geir2" class="org.eclipse.ecf.example.bot.Geir2Bot" filterExpression="(~bug[0-9]*)"> </handler> </extension> <extension point="org.eclipse.ecf.presence.bot.chatRoomRobot"> <chatRoomRobot connectId="irc://geir2@irc.freenode.net" containerFactoryName="ecf.irc.irclib" id="org.eclipse.ecf.example.bot.geir2" > <chatRooms name="#eclipse"> </chatRooms> </chatRoomRobot> </extension> </plugin>
Writing the Code
Interface Implementation
- Open the Geir2Bot class that you have created.
- Since we want our bot to be able to say something, we need to retrieve an interface that will provide us with such a functionality.
- Add a field to the class of type IChatMessageSender.
- We will retrieve our instance in the preChatRoomConnect(IChatRoomContainer, ID) method. This method will be called right before our bot joins the channel (#eclipse in our case). You can retrieve an instance of an IChatMessageSender by calling getChatRoomMessageSender() on the provided IChatRoomContainer instance.
- Now that our bot has a mechanism for replying, we should write some code to parse the messages that the bot receives so that it can give a correct response. To get the string that's been said, use the getMessage() method from the IChatRoomMessage interface that's passed into the handleRoomMessage(IChatRoomMessage) method.
- Our regular expression of (~bug[0-9]*) implies that any string beginning with ~bug followed by any number of digits will be a valid input for our bot to read. So let's add some string handling code to route people to Eclipse's bugzilla when they type something like ~bug150000 or ~bug180078.
- To send a reply to the IRC channel, simply use IChatRoomMessageSender's sendMessage(String) method. This method will throw an ECFException, but given this simple scenario, we won't bother to handle it.
org.eclipse.ecf.example.bot.Geir2Bot
package org.eclipse.ecf.example.bot;

import org.eclipse.ecf.core.IContainer;
import org.eclipse.ecf.core.identity.ID;
import org.eclipse.ecf.core.util.ECFException;
import org.eclipse.ecf.presence.bot.IChatRoomBotEntry;
import org.eclipse.ecf.presence.bot.IChatRoomMessageHandler;
import org.eclipse.ecf.presence.chatroom.IChatRoomContainer;
import org.eclipse.ecf.presence.chatroom.IChatRoomMessage;
import org.eclipse.ecf.presence.chatroom.IChatRoomMessageSender;

public class Geir2Bot implements IChatRoomMessageHandler {

    private IChatRoomMessageSender sender;

    public void handleRoomMessage(IChatRoomMessage message) {
        // use substring 1 to just truncate the opening tilda (~)
        String msg = message.getMessage().substring(1);
        try {
            if (msg.equals("bug")) { //$NON-NLS-1$
                // if no number was provided, just send them to bugzilla
                sender.sendMessage(""); //$NON-NLS-1$
            } else {
                // otherwise, give the person a direct link to the bug
                sender.sendMessage("" //$NON-NLS-1$
                        + "show_bug.cgi?id=" + msg.substring(3)); //$NON-NLS-1$
            }
        } catch (ECFException e) {
            e.printStackTrace();
        }
    }

    public void init(IChatRoomBotEntry robot) {
        // nothing to do
    }

    public void preChatRoomConnect(IChatRoomContainer roomContainer, ID roomID) {
        sender = roomContainer.getChatRoomMessageSender();
    }

    public void preContainerConnect(IContainer container, ID targetID) {
        // nothing to do
    }
}
Running the Example
- Open the 'Run' dialog and then right-click on 'Eclipse Application' and select 'New'.
- In the 'Main' tab, from the combo drop down in the 'Program to Run' section, select 'Run an application:' and choose org.eclipse.ecf.presence.bot.chatRoomRobot.
- Click on the Plug-ins tab.
- From the top, select plug-ins selected below only from the drop down box.
- Pick the plug-in you created (in the example, this was org.eclipse.ecf.example.geir) and org.eclipse.ecf.provider.irc.
- Click on the Add Required Plug-ins button on the right and then hit Run.
- Moments later, your bot should appear in the server and channel that you specified in the plugin.xml file.
* geir2 (n=geir2@bas3-kitchener06-1096650252.dsl.bell.ca) has joined #eclipse <rcjsuen> ~bug <geir2> <rcjsuen> ~bug76759 <geir2>
Working Demo
A working and well-featured implementation is currently being run on Eclipse-related channels on freenode.
Conclusion
I hope you learned enough from this tutorial to be ready to go out and write your own bots using ECF's bot framework. If you have any questions, please do not hesitate to ask on the newsgroup. For questions, comments, inquiries, and anything else related to the development of the framework or ECF in general, please join the ecf-dev mailing list.
|
http://wiki.eclipse.org/index.php?title=ECF/Bot_Framework&oldid=143262
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
Read a Sequence from an input stream.
template <typename IStream, typename Sequence>
IStream& operator>>(IStream& is, Sequence& seq);
is >> seq
Return type: IStream&
Semantics: For each element, e, in sequence, seq, call is >> e.
#include <boost/fusion/sequence/io/in.hpp>
#include <boost/fusion/include/in.hpp>
vector<int, std::string, char> v;
std::cin >> v;
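For instance, a self-contained variant of the example reading from a string stream (a sketch; it assumes the default '(' ... ')' tuple delimiters used by Fusion's I/O operators):

#include <boost/fusion/include/vector.hpp>
#include <boost/fusion/include/in.hpp>
#include <sstream>
#include <string>

int main()
{
    boost::fusion::vector<int, std::string, char> v;
    std::istringstream iss("(42 hello x)");
    iss >> v; // reads 42, "hello" and 'x' in turn
}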
|
http://www.boost.org/doc/libs/1_64_0/libs/fusion/doc/html/fusion/sequence/operator/i_o/in.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
NumericRevision
Since: BlackBerry 10.2.0
#include <bb/cascades/datamanager/NumericRevision>
To link against this class, add the following line to your .pro file: LIBS += -lbbcascadesdatamanager
A Revision which uses a 64-bit unsigned integer as the revision.
Overview
Inheritance
Public Functions Index
Public Functions
Constructs a NumericRevision.
BlackBerry 10.2.0
Constructs a NumericRevision given a revision number.
BlackBerry 10.2.0
Copy constructor.
This function constructs a NumericRevision containing exactly the same values as the provided NumericRevision.
BlackBerry 10.2.0
virtual
Destructor.
BlackBerry 10.2.0
virtual bool
Check for equality.
True if the revisions are equal, false otherwise.
BlackBerry 10.2.0
virtual bool
Check whether this revision is greater (newer) than the other.
True if this object is greater (newer) than the given object, false otherwise.
BlackBerry 10.2.0
virtual Revision *
Return a new revision based on this revision and another revision.
virtual QString
Convert this revision to a string representation for debugging.
The string representation.
BlackBerry 10.2.0
|
http://developer.blackberry.com/native/reference/cascades/bb__cascades__datamanager__numericrevision.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
#include <pcre2.h>
This function builds a set of character tables for character values less than 256. There is a complete description of the PCRE2 native API in the pcre2api(3) page and a description of the POSIX API in the pcre2posix(3) page.
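For orientation, a minimal usage sketch (not part of the man page): the tables returned by pcre2_maketables are handed to compilation through a compile context. The pattern is illustrative, and pcre2_maketables_free exists only in newer PCRE2 releases.

#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <stdio.h>

int main(void)
{
    /* build tables in the current locale, using the default allocator */
    const uint8_t *tables = pcre2_maketables(NULL);

    pcre2_compile_context *cctx = pcre2_compile_context_create(NULL);
    pcre2_set_character_tables(cctx, tables);

    int errcode;
    PCRE2_SIZE erroffset;
    pcre2_code *re = pcre2_compile((PCRE2_SPTR)"\\w+", PCRE2_ZERO_TERMINATED,
                                   0, &errcode, &erroffset, cctx);
    if (re == NULL)
        fprintf(stderr, "compile failed at offset %zu\n", (size_t)erroffset);

    pcre2_code_free(re);
    pcre2_compile_context_free(cctx);
    pcre2_maketables_free(NULL, tables); /* requires PCRE2 10.34 or later */
    return 0;
}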
|
http://manpages.courier-mta.org/htmlman3/pcre2_maketables.3.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Bugs in Web3CMS 1.0.7.r6 (User)
#21
Posted 12 October 2009 - 08:06 AM
In this release we implemented discussed ideas.
In the next release we are going to implement jqGrid and some rich text editor.
Thanks for your interest!
#22
Posted 10 November 2009 - 09:40 AM
phpdevmd, on 12 October 2009 - 08:06 AM, said:
In this release we implemented discussed ideas.
In the next release we are going to implement jqGrid and some rich text editor.
Thanks for your interest!
ERROR!!!
#23
Posted 14 November 2009 - 01:45 PM
Web3CMS 1.0.9.r10 (update2) has been released.
In this release we implemented jqGrid 3.6 and sidebar menu.
Thanks for your interest!
#24
Posted 04 January 2010 - 01:00 PM
In this release we fixed some issues and improved jqgrid page.
We are pleased to welcome 2 new team members: Steve and Morgan.
Next versions should be released with their work also. Thank you guys for joining this project!
#25
Posted 14 January 2010 - 11:43 AM
phpdevmd, on 04 January 2010 - 01:00 PM, said:
In this release we fixed some issues and improved jqgrid page.
We are pleased to welcome 2 new team members: Steve and Morgan.
Next versions should be released with their work also. Thank you guys for joining this project!
import yii 1.1?
#26
Posted 14 January 2010 - 05:18 PM
#27
Posted 16 January 2010 - 03:01 AM
Nice work! Have there been any thoughts towards including all extensions as modules in the CMS? Also, I found a bug in the user grid. When you try to search by a date that contains only the month, for example, it does not return anything... that might be due to the data type in the DB.
cheers,
b
#28
Posted 23 January 2010 - 02:44 AM
#29
Posted 05 February 2010 - 05:24 AM
Quote
not yet...
Quote
you are right, thank you. we will fix this in the next release.
Quote
yes, we will add "Post" Model/Views/Controller around the release time of Yii-1.1.2
thanks for your input and interests! your replies make us more motivated.
#30
Posted 07 March 2010 - 04:54 PM
Upgrade to Yii-1.1.0.
Few simple improvements.
Start working on the chat implementation (should be based on Yii-1.1.1).
#31
Posted 05 June 2010 - 03:16 PM
wow, web3CMS is looking good. two things:
I'm using
Web3CMS 1.1.0.r34
with Yii-1.1.2
1 - I edited the site/wiki.php file, renamed it to site/about.php, and added a menu item to _layout/main.php. The menu item shows up and links to the correct file BUT it comes up as a 404 error; what am I missing?
2 - created my own model and CRUD in the web3cms project and got this error
"include(Controller.php) [<a href='function.include'>function.include</a>]: failed to open stream: No such file or directory"
sure enough no Controller.php in protected/components/
Copied one from another project and actually got my model to partially work, but not fully like in the other project. AND could never find the way to make it include web3cms page headers, css, etc.
Any tips to help jump-start my understanding of how to make it fit in with web3cms would be very helpful. Thanks.
here's the Stack Trace:
#0 C:\wamp\www\framework\YiiBase.php(338): autoload()
#1 unknown(0): autoload()
#2 C:\wamp\www\web3cms\protected\controllers\ProfileController.php(4): spl_autoload_call()
#3 C:\wamp\www\framework\web\CWebApplication.php(388): require()
#4 C:\wamp\www\framework\web\CWebApplication.php(314): CWebApplication->createController()
#5 C:\wamp\www\framework\web\CWebApplication.php(120): CWebApplication->runController()
#6 C:\wamp\www\framework\base\CApplication.php(135): CWebApplication->processRequest()
#7 C:\wamp\www\web3cms\index.php(16): CWebApplication->run()
|
http://www.yiiframework.com/forum/index.php/topic/3264-bugs-in-web3cms-107r6-user/page__st__20__p__47753
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
System Error Codes (9000-11999). This section describes error codes 9000 to 11999. They are returned by the GetLastError function when many functions fail. To retrieve the description text for the error in your application, use the FormatMessage function with the FORMAT_MESSAGE_FROM_SYSTEM flag.
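As a quick illustration of that retrieval pattern (a minimal sketch, not from the original page; the chosen error code is arbitrary):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD err = 10061; /* WSAECONNREFUSED, picked only for illustration */
    char buf[512];

    DWORD len = FormatMessageA(
        FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
        NULL,                       /* message source: the system tables */
        err,
        MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
        buf, sizeof(buf), NULL);

    if (len > 0)
        printf("Error %lu: %s", (unsigned long)err, buf);
    else
        printf("FormatMessage itself failed: %lu\n",
               (unsigned long)GetLastError());
    return 0;
}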
- DNS_ERROR_RCODE_NOT_IMPLEMENTED
- 9004 (0x232C)
DNS request not supported by name server.
- DNS_ERROR_RCODE_REFUSED
- 9005 (0x232D)
DNS operation refused.
- DNS_ERROR_RCODE_YXDOMAIN
- 9006 (0x232E)
DNS name that ought not exist, does exist.
- DNS_ERROR_RCODE_YXRRSET
- 9007 (0x232F)
DNS RR set that ought not exist, does exist.
- DNS_ERROR_RCODE_NXRRSET
- 9008 (0x2330)
DNS RR set that ought to exist, does not exist.
- DNS_ERROR_RCODE_NOTAUTH
- 9009 (0x2331)
DNS server not authoritative for zone.
- DNS_ERROR_RCODE_NOTZONE
- 9010 (0x2332)
DNS name in update or prereq is not in zone.
- DNS_ERROR_RCODE_BADSIG
- 9016 (0x2338)
DNS signature failed to verify.
- DNS_ERROR_RCODE_BADKEY
- 9017 (0x2339)
DNS bad key.
- DNS_ERROR_RCODE_BADTIME
- 9018 (0x233A)
DNS signature validity expired.
- DNS_ERROR_KEYMASTER_REQUIRED
- 9101 (0x238D)
Only the DNS server acting as the key master for the zone may perform this operation.
- DNS_ERROR_NOT_ALLOWED_ON_SIGNED_ZONE
- 9102 (0x238E)
This operation is not allowed on a zone that is signed or has signing keys.
- DNS_ERROR_NSEC3_INCOMPATIBLE_WITH_RSA_SHA1
- 9103 (0x238F)
NSEC3 is not compatible with the RSA-SHA-1 algorithm. Choose a different algorithm or use NSEC.
This value was also named DNS_ERROR_INVALID_NSEC3_PARAMETERS
- DNS_ERROR_NOT_ENOUGH_SIGNING_KEY_DESCRIPTORS
- 9104 (0x2390)
The zone does not have enough signing keys. There must be at least one key signing key (KSK) and at least one zone signing key (ZSK).
- DNS_ERROR_UNSUPPORTED_ALGORITHM
- 9105 (0x2391)
The specified algorithm is not supported.
- DNS_ERROR_INVALID_KEY_SIZE
- 9106 (0x2392)
The specified key size is not supported.
- DNS_ERROR_SIGNING_KEY_NOT_ACCESSIBLE
- 9107 (0x2393)
One or more of the signing keys for a zone are not accessible to the DNS server. Zone signing will not be operational until this error is resolved.
- DNS_ERROR_KSP_DOES_NOT_SUPPORT_PROTECTION
- 9108 (0x2394)
The specified key storage provider does not support DPAPI++ data protection. Zone signing will not be operational until this error is resolved.
- DNS_ERROR_UNEXPECTED_DATA_PROTECTION_ERROR
- 9109 (0x2395)
An unexpected DPAPI++ error was encountered. Zone signing will not be operational until this error is resolved.
- DNS_ERROR_UNEXPECTED_CNG_ERROR
- 9110 (0x2396)
An unexpected crypto error was encountered. Zone signing may not be operational until this error is resolved.
- DNS_ERROR_UNKNOWN_SIGNING_PARAMETER_VERSION
- 9111 (0x2397)
The DNS server encountered a signing key with an unknown version.
- DNS_ERROR_TOO_MANY_SKDS
- 9113 (0x2399)
The DNS server cannot accept any more signing keys with the specified algorithm and KSK flag value for this zone.
- DNS_ERROR_INVALID_ROLLOVER_PERIOD
- 9114 (0x239A)
The specified rollover period is invalid.
- DNS_ERROR_INVALID_INITIAL_ROLLOVER_OFFSET
- 9115 (0x239B)
The specified initial rollover offset is invalid.
- DNS_ERROR_ROLLOVER_IN_PROGRESS
- 9116 (0x239C)
The specified signing key is already in process of rolling over keys.
- DNS_ERROR_STANDBY_KEY_NOT_PRESENT
- 9117 (0x239D)
The specified signing key does not have a standby key to revoke.
- DNS_ERROR_NOT_ALLOWED_ON_ZSK
- 9118 (0x239E)
This operation is not allowed on a zone signing key (ZSK).
- DNS_ERROR_NOT_ALLOWED_ON_ACTIVE_SKD
- 9119 (0x239F)
This operation is not allowed on an active signing key.
- DNS_ERROR_ROLLOVER_ALREADY_QUEUED
- 9120 (0x23A0)
The specified signing key is already queued for rollover.
- DNS_ERROR_NOT_ALLOWED_ON_UNSIGNED_ZONE
- 9121 (0x23A1)
This operation is not allowed on an unsigned zone.
- DNS_ERROR_BAD_KEYMASTER
- 9122 (0x23A2)
This operation could not be completed because the DNS server listed as the current key master for this zone is down or misconfigured. Resolve the problem on the current key master for this zone or use another DNS server to seize the key master role.
- DNS_ERROR_INVALID_SIGNATURE_VALIDITY_PERIOD
- 9123 (0x23A3)
The specified signature validity period is invalid.
- DNS_ERROR_INVALID_NSEC3_ITERATION_COUNT
- 9124 (0x23A4)
The specified NSEC3 iteration count is higher than allowed by the minimum key length used in the zone.
- DNS_ERROR_DNSSEC_IS_DISABLED
- 9125 (0x23A5)
This operation could not be completed because the DNS server has been configured with DNSSEC features disabled. Enable DNSSEC on the DNS server.
- DNS_ERROR_NO_VALID_TRUST_ANCHORS
- 9127 (0x23A7)
This operation completed, but no trust anchors were added because all of the trust anchors received were either invalid, unsupported, expired, or would not become valid in less than 30 days.
- DNS_ERROR_ROLLOVER_NOT_POKEABLE
- 9128 (0x23A8)
The specified signing key is not waiting for parental DS update.
- DNS_ERROR_NSEC3_NAME_COLLISION
- 9129 (0x23A9)
Hash collision detected during NSEC3 signing. Specify a different user-provided salt, or use a randomly generated salt, and attempt to sign the zone again.
- DNS_ERROR_BAD_PACKET
- 9502 (0x251E)
Bad DNS packet.
- DNS_ERROR_RCODE
- 9504 (0x2520)
DNS error, check rcode.
- DNS_ERROR_UNSECURE_PACKET
- 9505 (0x2521)
Unsecured DNS packet.
- DNS_REQUEST_PENDING
- 9506 (0x2522)
DNS query request is pending.
- DNS_ERROR_INVALID_TYPE
- 9551 (0x254F)
Invalid DNS type.
- DNS_ERROR_INVALID_IP_ADDRESS
- 9552 (0x2550)
Invalid IP address.
- DNS_ERROR_INVALID_PROPERTY
- 9553 (0x2551)
Invalid property.
- DNS_ERROR_TRY_AGAIN_LATER
- 9554 (0x2552)
Try DNS operation again later.
- DNS_ERROR_NOT_UNIQUE
- 9555 (0x2553)
Record for given name and type is not unique.
- DNS_ERROR_NON_RFC_NAME
- 9556 (0x2554)
DNS name does not comply with RFC specifications.
- DNS_STATUS_FQDN
- 9557 (0x2555)
DNS name is a fully-qualified DNS name.
- DNS_STATUS_DOTTED_NAME
- 9558 (0x2556)
DNS name is dotted (multi-label).
- DNS_STATUS_SINGLE_PART_NAME
- 9559 (0x2557)
DNS name is a single-part name.
- DNS_ERROR_INVALID_NAME_CHAR
- 9560 (0x2558)
DNS name contains an invalid character.
- DNS_ERROR_NUMERIC_NAME
- 9561 (0x2559)
DNS name is entirely numeric.
- DNS_ERROR_NOT_ALLOWED_ON_ROOT_SERVER
- 9562 (0x255A)
The operation requested is not permitted on a DNS root server.
- DNS_ERROR_NOT_ALLOWED_UNDER_DELEGATION
- 9563 (0x255B)
The record could not be created because this part of the DNS namespace has been delegated to another server.
- DNS_ERROR_CANNOT_FIND_ROOT_HINTS
- 9564 (0x255C)
The DNS server could not find a set of root hints.
- DNS_ERROR_INCONSISTENT_ROOT_HINTS
- 9565 (0x255D)
The DNS server found root hints but they were not consistent across all adapters.
- DNS_ERROR_DWORD_VALUE_TOO_SMALL
- 9566 (0x255E)
The specified value is too small for this parameter.
- DNS_ERROR_DWORD_VALUE_TOO_LARGE
- 9567 (0x255F)
The specified value is too large for this parameter.
- DNS_ERROR_BACKGROUND_LOADING
- 9568 (0x2560)
This operation is not allowed while the DNS server is loading zones in the background. Please try again later.
- DNS_ERROR_NOT_ALLOWED_UNDER_DNAME
- 9570 (0x2562)
Not allowed to exist underneath a DNAME record.
- DNS_ERROR_DELEGATION_REQUIRED
- 9571 (0x2563)
This operation requires credentials delegation.
- DNS_ERROR_INVALID_POLICY_TABLE
- 9572 (0x2564)
Name resolution policy table has been corrupted. DNS resolution will fail until it is fixed.
- DNS_ERROR_ZONE_CONFIGURATION_ERROR
- 9604 (0x2584)
Invalid DNS zone configuration.
- DNS_ERROR_ZONE_HAS_NO_SOA_RECORD
- 9605 (0x2585)
DNS zone has no start of authority (SOA) record.
- DNS_ERROR_ZONE_HAS_NO_NS_RECORDS
- 9606 (0x2586)
DNS zone has no Name Server (NS) record.
- DNS_ERROR_ZONE_LOCKED
- 9607 (0x2587)
DNS zone is locked.
- DNS_ERROR_ZONE_CREATION_FAILED
- 9608 (0x2588)
DNS zone creation failed.
- DNS_ERROR_ZONE_ALREADY_EXISTS
- 9609 (0x2589)
DNS zone already exists.
- DNS_ERROR_AUTOZONE_ALREADY_EXISTS
- 9610 (0x258A)
DNS automatic zone already exists.
- DNS_ERROR_INVALID_ZONE_TYPE
- 9611 (0x258B)
Invalid DNS zone type.
- DNS_ERROR_SECONDARY_REQUIRES_MASTER_IP
- 9612 (0x258C)
Secondary DNS zone requires master IP address.
- DNS_ERROR_ZONE_NOT_SECONDARY
- 9613 (0x258D)
DNS zone not secondary.
- DNS_ERROR_NEED_SECONDARY_ADDRESSES
- 9614 (0x258E)
Need secondary IP address.
- DNS_ERROR_WINS_INIT_FAILED
- 9615 (0x258F)
WINS initialization failed.
- DNS_ERROR_NEED_WINS_SERVERS
- 9616 (0x2590)
Need WINS servers.
- DNS_ERROR_NBSTAT_INIT_FAILED
- 9617 (0x2591)
NBTSTAT initialization call failed.
- DNS_ERROR_SOA_DELETE_INVALID
- 9618 (0x2592)
Invalid delete of start of authority (SOA).
- DNS_ERROR_FORWARDER_ALREADY_EXISTS
- 9619 (0x2593)
A conditional forwarding zone already exists for that name.
- DNS_ERROR_ZONE_REQUIRES_MASTER_IP
- 9620 (0x2594)
This zone must be configured with one or more master DNS server IP addresses.
- DNS_ERROR_ZONE_IS_SHUTDOWN
- 9621 (0x2595)
The operation cannot be performed because this zone is shut down.
- DNS_ERROR_ZONE_LOCKED_FOR_SIGNING
- 9622 (0x2596)
This operation cannot be performed because the zone is currently being signed. Please try again later.
- DNS_ERROR_DATAFILE_OPEN_FAILURE
- 9653 (0x25B5)
Failed to open datafile for DNS zone.
- DNS_ERROR_FILE_WRITEBACK_FAILED
- 9654 (0x25B6)
Failed to write datafile for DNS zone.
- DNS_ERROR_DATAFILE_PARSING
- 9655 (0x25B7)
Failure while reading datafile for DNS zone.
- DNS_ERROR_RECORD_DOES_NOT_EXIST
- 9701 (0x25E5)
DNS record does not exist.
- DNS_ERROR_RECORD_FORMAT
- 9702 (0x25E6)
DNS record format error.
- DNS_ERROR_NODE_CREATION_FAILED
- 9703 (0x25E7)
Node creation failure in DNS.
- DNS_ERROR_UNKNOWN_RECORD_TYPE
- 9704 (0x25E8)
Unknown DNS record type.
- DNS_ERROR_RECORD_TIMED_OUT
- 9705 (0x25E9)
DNS record timed out.
- DNS_ERROR_NAME_NOT_IN_ZONE
- 9706 (0x25EA)
Name not in DNS zone.
- DNS_ERROR_CNAME_LOOP
- 9707 (0x25EB)
CNAME loop detected.
- DNS_ERROR_NODE_IS_CNAME
- 9708 (0x25EC)
Node is a CNAME DNS record.
- DNS_ERROR_CNAME_COLLISION
- 9709 (0x25ED)
A CNAME record already exists for given name.
- DNS_ERROR_RECORD_ONLY_AT_ZONE_ROOT
- 9710 (0x25EE)
Record only at DNS zone root.
- DNS_ERROR_RECORD_ALREADY_EXISTS
- 9711 (0x25EF)
DNS record already exists.
- DNS_ERROR_SECONDARY_DATA
- 9712 (0x25F0)
Secondary DNS zone data error.
- DNS_ERROR_NO_CREATE_CACHE_DATA
- 9713 (0x25F1)
Could not create DNS cache data.
- DNS_ERROR_NAME_DOES_NOT_EXIST
- 9714 (0x25F2)
DNS name does not exist.
- DNS_WARNING_PTR_CREATE_FAILED
- 9715 (0x25F3)
Could not create pointer (PTR) record.
- DNS_WARNING_DOMAIN_UNDELETED
- 9716 (0x25F4)
DNS domain was undeleted.
- DNS_ERROR_DS_UNAVAILABLE
- 9717 (0x25F5)
The directory service is unavailable.
- DNS_ERROR_DS_ZONE_ALREADY_EXISTS
- 9718 (0x25F6)
DNS zone already exists in the directory service.
- DNS_ERROR_NO_BOOTFILE_IF_DS_ZONE
- 9719 (0x25F7)
DNS server not creating or reading the boot file for the directory service integrated DNS zone.
- DNS_ERROR_NODE_IS_DNAME
- 9720 (0x25F8)
Node is a DNAME DNS record.
- DNS_ERROR_DNAME_COLLISION
- 9721 (0x25F9)
A DNAME record already exists for given name.
- DNS_ERROR_ALIAS_LOOP
- 9722 (0x25FA)
An alias loop has been detected with either CNAME or DNAME records.
- DNS_INFO_AXFR_COMPLETE
- 9751 (0x2617)
DNS AXFR (zone transfer) complete.
- DNS_ERROR_AXFR
- 9752 (0x2618)
DNS zone transfer failed.
- DNS_INFO_ADDED_LOCAL_WINS
- 9753 (0x2619)
Added local WINS server.
- DNS_STATUS_CONTINUE_NEEDED
- 9801 (0x2649)
Secure update call needs to continue update request.
- DNS_ERROR_NO_TCPIP
- 9851 (0x267B)
TCP/IP network protocol not installed.
- DNS_ERROR_NO_DNS_SERVERS
- 9852 (0x267C)
No DNS servers configured for local system.
- DNS_ERROR_DP_DOES_NOT_EXIST
- 9901 (0x26AD)
The specified directory partition does not exist.
- DNS_ERROR_DP_ALREADY_EXISTS
- 9902 (0x26AE)
The specified directory partition already exists.
- DNS_ERROR_DP_NOT_ENLISTED
- 9903 (0x26AF)
This DNS server is not enlisted in the specified directory partition.
- DNS_ERROR_DP_ALREADY_ENLISTED
- 9904 (0x26B0)
This DNS server is already enlisted in the specified directory partition.
- DNS_ERROR_DP_NOT_AVAILABLE
- 9905 (0x26B1)
The directory partition is not available at this time. Please wait a few minutes and try again.
- DNS_ERROR_DP_FSMO_ERROR
- 9906 (0x26B2)
The operation failed because the domain naming master FSMO role could not be reached. The domain controller holding the domain naming master FSMO role is down or unable to service the request or is not running Windows Server 2003 or later.
- WSAEINTR
- 10004 (0x2714)
A blocking operation was interrupted by a call to WSACancelBlockingCall.
- WSAEBADF
- 10009 (0x2719)
The file handle supplied is not valid.
- WSAEACCES
- 10013 (0x271D)
An attempt was made to access a socket in a way forbidden by its access permissions.
- WSAEFAULT
- 10014 (0x271E)
The system detected an invalid pointer address in attempting to use a pointer argument in a call.
- WSAEINVAL
- 10022 (0x2726)
An invalid argument was supplied.
- WSAEMFILE
- 10024 (0x2728)
Too many open sockets.
- WSAEWOULDBLOCK
- 10035 (0x2733)
A non-blocking socket operation could not be completed immediately.
- WSAEINPROGRESS
- 10036 (0x2734)
A blocking operation is currently executing.
- WSAEALREADY
- 10037 (0x2735)
An operation was attempted on a non-blocking socket that already had an operation in progress.
- WSAENOTSOCK
- 10038 (0x2736)
An operation was attempted on something that is not a socket.
- WSAEDESTADDRREQ
- 10039 (0x2737)
A required address was omitted from an operation on a socket.
- WSAEMSGSIZE
- 10040 (0x2738)
A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself.
- WSAEPROTOTYPE
- 10041 (0x2739)
A protocol was specified in the socket function call that does not support the semantics of the socket type requested.
- WSAENOPROTOOPT
- 10042 (0x273A)
An unknown, invalid, or unsupported option or level was specified in a getsockopt or setsockopt call.
- WSAEPROTONOSUPPORT
- 10043 (0x273B)
The requested protocol has not been configured into the system, or no implementation for it exists.
- WSAESOCKTNOSUPPORT
- 10044 (0x273C)
The support for the specified socket type does not exist in this address family.
- WSAEOPNOTSUPP
- 10045 (0x273D)
The attempted operation is not supported for the type of object referenced.
- WSAEPFNOSUPPORT
- 10046 (0x273E)
The protocol family has not been configured into the system or no implementation for it exists.
- WSAEAFNOSUPPORT
- 10047 (0x273F)
An address incompatible with the requested protocol was used.
- WSAEADDRINUSE
- 10048 (0x2740)
Only one usage of each socket address (protocol/network address/port) is normally permitted.
- WSAEADDRNOTAVAIL
- 10049 (0x2741)
The requested address is not valid in its context.
- WSAENETDOWN
- 10050 (0x2742)
A socket operation encountered a dead network.
- WSAENETUNREACH
- 10051 (0x2743)
A socket operation was attempted to an unreachable network.
- WSAENETRESET
- 10052 (0x2744)
The connection has been broken due to keep-alive activity detecting a failure while the operation was in progress.
- WSAECONNABORTED
- 10053 (0x2745)
An established connection was aborted by the software in your host machine.
- WSAECONNRESET
- 10054 (0x2746)
An existing connection was forcibly closed by the remote host.
- WSAENOBUFS
- 10055 (0x2747)
An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
- WSAEISCONN
- 10056 (0x2748)
A connect request was made on an already connected socket.
- WSAENOTCONN
- 10057 (0x2749)
A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied.
- WSAESHUTDOWN
- 10058 (0x274A)
A request to send or receive data was disallowed because the socket had already been shut down in that direction with a previous shutdown call.
- WSAETOOMANYREFS
- 10059 (0x274B)
Too many references to some kernel object.
- WSAETIMEDOUT
- 10060 (0x274C)
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
- WSAECONNREFUSED
- 10061 (0x274D)
No connection could be made because the target machine actively refused it.
- WSAELOOP
- 10062 (0x274E)
Cannot translate name.
- WSAENAMETOOLONG
- 10063 (0x274F)
Name component or name was too long.
- WSAEHOSTDOWN
- 10064 (0x2750)
A socket operation failed because the destination host was down.
- WSAEHOSTUNREACH
- 10065 (0x2751)
A socket operation was attempted to an unreachable host.
- WSAENOTEMPTY
- 10066 (0x2752)
Cannot remove a directory that is not empty.
- WSAEPROCLIM
- 10067 (0x2753)
A Windows Sockets implementation may have a limit on the number of applications that may use it simultaneously.
- WSAEUSERS
- 10068 (0x2754)
Ran out of quota.
- WSAEDQUOT
- 10069 (0x2755)
Ran out of disk quota.
- WSAESTALE
- 10070 (0x2756)
File handle reference is no longer available.
- WSAEREMOTE
- 10071 (0x2757)
Item is not available locally.
- WSASYSNOTREADY
- 10091 (0x276B)
WSAStartup cannot function at this time because the underlying system it uses to provide network services is currently unavailable.
- WSAVERNOTSUPPORTED
- 10092 (0x276C)
The Windows Sockets version requested is not supported.
- WSANOTINITIALISED
- 10093 (0x276D)
Either the application has not called WSAStartup, or WSAStartup failed.
- WSAEDISCON
- 10101 (0x2775)
Returned by WSARecv or WSARecvFrom to indicate the remote party has initiated a graceful shutdown sequence.
- WSAENOMORE
- 10102 (0x2776)
No more results can be returned by WSALookupServiceNext.
- WSAECANCELLED
- 10103 (0x2777)
A call to WSALookupServiceEnd was made while this call was still processing.
- WSAEPROVIDERFAILEDINIT
- 10106 (0x277A)
The requested service provider could not be loaded or initialized.
- WSASYSCALLFAILURE
- 10107 (0x277B)
A system call has failed.
- WSASERVICE_NOT_FOUND
- 10108 (0x277C)
No such service is known. The service cannot be found in the specified name space.
- WSATYPE_NOT_FOUND
- 10109 (0x277D)
The specified class was not found.
- WSA_E_NO_MORE
- 10110 (0x277E)
No more results can be returned by WSALookupServiceNext.
- WSA_E_CANCELLED
- 10111 (0x277F)
A call to WSALookupServiceEnd was made while this call was still processing.
- WSATRY_AGAIN
- 11002 (0x2AFA)
This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server.
- WSANO_RECOVERY
- 11003 (0x2AFB)
A non-recoverable error occurred during a database lookup.
- WSANO_DATA
- 11004 (0x2AFC)
The requested name is valid, but no data of the requested type was found.
- WSA_QOS_RECEIVERS
- 11005 (0x2AFD)
At least one reserve has arrived.
- WSA_QOS_SENDERS
- 11006 (0x2AFE)
At least one path has arrived.
- WSA_QOS_NO_SENDERS
- 11007 (0x2AFF)
There are no senders.
- WSA_QOS_NO_RECEIVERS
- 11008 (0x2B00)
There are no receivers.
- WSA_QOS_REQUEST_CONFIRMED
- 11009 (0x2B01)
Reserve has been confirmed.
- WSA_QOS_ADMISSION_FAILURE
- 11010 (0x2B02)
Error due to lack of resources.
- WSA_QOS_POLICY_FAILURE
- 11011 (0x2B03)
Rejected for administrative reasons - bad credentials.
- WSA_QOS_BAD_STYLE
- 11012 (0x2B04)
Unknown or conflicting style.
- WSA_QOS_BAD_OBJECT
- 11013 (0x2B05)
Problem with some part of the filterspec or providerspecific buffer in general.
- WSA_QOS_TRAFFIC_CTRL_ERROR
- 11014 (0x2B06)
Problem with some part of the flowspec.
- WSA_QOS_GENERIC_ERROR
- 11015 (0x2B07)
General QOS error.
- WSA_QOS_ESERVICETYPE
- 11016 (0x2B08)
An invalid or unrecognized service type was found in the flowspec.
- WSA_QOS_EFLOWSPEC
- 11017 (0x2B09)
An invalid or inconsistent flowspec was found in the QOS structure.
- WSA_QOS_EPROVSPECBUF
- 11018 (0x2B0A)
Invalid QOS provider-specific buffer.
- WSA_QOS_EFILTERSTYLE
- 11019 (0x2B0B)
An invalid QOS filter style was used.
- WSA_QOS_EFILTERTYPE
- 11020 (0x2B0C)
An invalid QOS filter type was used.
- WSA_QOS_EFILTERCOUNT
- 11021 (0x2B0D)
An incorrect number of QOS FILTERSPECs were specified in the FLOWDESCRIPTOR.
- WSA_QOS_EOBJLENGTH
- 11022 (0x2B0E)
An object with an invalid ObjectLength field was specified in the QOS provider-specific buffer.
- WSA_QOS_EFLOWCOUNT
- 11023 (0x2B0F)
An incorrect number of flow descriptors was specified in the QOS structure.
- WSA_QOS_EUNKOWNPSOBJ
- 11024 (0x2B10)
An unrecognized object was found in the QOS provider-specific buffer.
- WSA_QOS_EPOLICYOBJ
- 11025 (0x2B11)
An invalid policy object was found in the QOS provider-specific buffer.
- WSA_QOS_EFLOWDESC
- 11026 (0x2B12)
An invalid QOS flow descriptor was found in the flow descriptor list.
- WSA_QOS_EPSFLOWSPEC
- 11027 (0x2B13)
An invalid or inconsistent flowspec was found in the QOS provider specific buffer.
- WSA_QOS_EPSFILTERSPEC
- 11028 (0x2B14)
An invalid FILTERSPEC was found in the QOS provider-specific buffer.
- WSA_QOS_ESDMODEOBJ
- 11029 (0x2B15)
An invalid shape discard mode object was found in the QOS provider specific buffer.
- WSA_QOS_ESHAPERATEOBJ
- 11030 (0x2B16)
An invalid shaping rate object was found in the QOS provider-specific buffer.
- WSA_QOS_RESERVED_PETYPE
- 11031 (0x2B17)
A reserved policy element was found in the QOS provider-specific buffer.
- WSA_SECURE_HOST_NOT_FOUND
- 11032 (0x2B18)
No such host is known securely.
- WSA_IPSEC_NAME_POLICY_ERROR
- 11033 (0x2B19)
Name based IPSEC policy could not be added.
Suggestions?
If you have additional suggestions regarding the System Error Codes documentation, given the constraints enumerated at the top of the page, please click the link labeled "Send comments about this topic to Microsoft" below. We appreciate the input.
|
https://technet.microsoft.com/en-us/library/ms681391(v=vs.85).aspx
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
This article shows how you can make your apps transparent using the new functions provided with Win2K. If you download the Platform SDK from Microsoft, then these functions will be available, but for those of you without fast Internet connections, this article could be useful.
This is a mix of stuff I found on the net so if anyone feels that I have stolen something and should get the credit, sorry...
The functions you want are included in USER32.DLL in Win2K, and the SDK provides the header files and the source code in libraries. But to use the functions, one can simply import them from USER32.DLL at run time. So here it goes...
First some constants must be declared:
#ifndef WS_EX_LAYERED
#define WS_EX_LAYERED 0x00080000
#define LWA_COLORKEY 0x00000001
#define LWA_ALPHA 0x00000002
#endif // ndef WS_EX_LAYERED
Then some declarations in the header-file:
// Preparation for the function we want to import from USER32.DLL
typedef BOOL (WINAPI *lpfnSetLayeredWindowAttributes)(HWND hWnd,
COLORREF crKey, BYTE bAlpha, DWORD dwFlags);
lpfnSetLayeredWindowAttributes m_pSetLayeredWindowAttributes;
That is all for the header file, now to the implementation!
// Here we import the function from USER32.DLL
HMODULE hUser32 = GetModuleHandle(_T("USER32.DLL"));
m_pSetLayeredWindowAttributes =
(lpfnSetLayeredWindowAttributes)GetProcAddress(hUser32,
"SetLayeredWindowAttributes");
// If the import did not succeed, make sure your app can handle it!
if (NULL == m_pSetLayeredWindowAttributes)
return FALSE; //Bail out!!!
If the function was imported correctly, we must put the dialog we want to make transparent into "transparent mode", i.e. set the style for the dialog so that it can be transparent; that is done with the WS_EX_LAYERED flag defined earlier.
// Check the current state of the dialog, and then add the
// WS_EX_LAYERED attribute
SetWindowLong(m_hWnd, GWL_EXSTYLE, GetWindowLong(m_hWnd, GWL_EXSTYLE)
| WS_EX_LAYERED);
Now that that is done, it's time to describe the function we imported; to tell you the truth, I'm not 100% sure about all of the parameters...
hwnd [in] Handle to the layered window.
crKey [in] A COLORREF value that specifies the transparency color key to be used. (When making a certain color transparent...)
bAlpha [in] Alpha value used to describe the opacity of the layered window. 0 = Invisible, 255 = Fully visible
dwFlags [in] Specifies an action to take. This parameter can be LWA_COLORKEY (When making a certain color transparent...) or LWA_ALPHA.
// Sets the window to 70% visibility.
// Note: the alpha must be scaled as (255 * percent) / 100; computing
// (255 / 70) * 100 would truncate in integer math and give the wrong value.
m_pSetLayeredWindowAttributes(m_hWnd, 0, (255 * 70) / 100, LWA_ALPHA);
One thing you must make sure of is to disable this function if the app is running under any OS other than Win2K. There is probably some very easy way to do that, but here is how I did it:
OSVERSIONINFO os = { sizeof(os) };
GetVersionEx(&os);
// use m_bWin2K before any call to
// m_pSetLayeredWindowAttributes to make sure we are running on Win2K
BOOL m_bWin2K = ( VER_PLATFORM_WIN32_NT == os.dwPlatformId &&
os.dwMajorVersion >= 5 );
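Putting the pieces together, here is a minimal consolidated sketch (my own summary, not from the original article; the helper name and the percentage parameter are illustrative):

#include <windows.h>

#ifndef WS_EX_LAYERED
#define WS_EX_LAYERED 0x00080000
#define LWA_ALPHA     0x00000002
#endif

// Makes hWnd translucent; returns FALSE on pre-Win2K systems where
// SetLayeredWindowAttributes is not exported by USER32.DLL.
BOOL MakeWindowTranslucent(HWND hWnd, int percentVisible)
{
    typedef BOOL (WINAPI *lpfnSetLayeredWindowAttributes)(
        HWND, COLORREF, BYTE, DWORD);

    HMODULE hUser32 = GetModuleHandle(TEXT("USER32.DLL"));
    lpfnSetLayeredWindowAttributes pSetLayeredWindowAttributes =
        (lpfnSetLayeredWindowAttributes)GetProcAddress(
            hUser32, "SetLayeredWindowAttributes");
    if (pSetLayeredWindowAttributes == NULL)
        return FALSE; // function unavailable, bail out

    SetWindowLong(hWnd, GWL_EXSTYLE,
                  GetWindowLong(hWnd, GWL_EXSTYLE) | WS_EX_LAYERED);

    BYTE alpha = (BYTE)((255 * percentVisible) / 100);
    return pSetLayeredWindowAttributes(hWnd, 0, alpha, LWA_ALPHA);
}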
That's it!
|
https://www.codeproject.com/Articles/981/Win2K-transparent-dialogs?fid=1974&df=90&mpp=10&sort=Position&spc=None&select=540951&tid=856191
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
How to swap two numbers without using a temp or third variable is a common interview question, not just in Java interviews but also in C and C++ interviews. It is also a good programming question for freshers. This question was asked to me long back, and I didn't have any idea how to approach it without using a temp or third variable; maybe it was lack of knowledge of bitwise operators in Java, or maybe it just didn't click at that time. Given some time and trial and error, I eventually came up with a solution using just arithmetic operators, but the interviewer kept asking about other approaches to swapping two variables without using a temp or third variable. Personally, I liked this question and included it in my list of programming interview questions because of its simplicity and the logical work it forces you to do. When I learned bitwise operations in Java, I eventually found another way of swapping two variables without a third variable, which I am going to share with you guys.
Swapping two numbers without using temp variable in Java
int a = 10;
int b = 20;
System.out.println("value of a and b before swapping, a: " + a +" b: " + b);
//swapping value of two numbers without using temp variable
a = a+ b; //now a is 30 and b is 20
b = a -b; //now a is 30 but b is 10 (original value of a)
a = a -b; //now a is 20 and b is 10, numbers are swapped
System.out.println("value of a and b after swapping, a: " + a +" b: " + b);
Output:
value of a and b before swapping, a: 10 b: 20
value of a and b after swapping, a: 20 b: 10
Swapping two numbers without using temp variable in Java with bitwise operator
Bitwise operators can also be used to swap two numbers without using a third variable. The XOR bitwise operator returns zero if both operands are the same, i.e. both 0 or both 1, and returns 1 if the operands are different, e.g. one operand is zero and the other is one.
By leveraging this property, we can swap two numbers in Java. Here is a code example of swapping two numbers without using a temp variable in Java, using the XOR bitwise operator:
A   B   A ^ B (A XOR B)
0   0   0    (zero because operands are the same)
0   1   1
1   0   1    (one because operands are different)
1   1   0
int a = 2; //0010 in binary
int b = 4; //0100 in binary
System.out.println("value of a and b before swapping, a: " + a +" b: " + b);
//swapping value of two numbers without using temp variable and XOR bitwise operator
a = a^b; //now a is 6 and b is 4
b = a^b; //now a is 6 but b is 2 (original value of a)
a = a^b; //now a is 4 and b is 2, numbers are swapped
System.out.println("value of a and b after swapping using XOR bitwise operation, a: " + a +" b: " + b);
value of a and b before swapping, a: 2 b: 4
value of a and b after swapping using XOR bitwise operation, a: 4 b: 2
Swapping two numbers without using temp variable in Java with division and multiplication
There is another, third way of swapping two numbers without using a third variable, which involves the multiplication and division operators. This is similar to the first approach, where we used the + and - operators for swapping the values of two numbers. Here is the code example to swap two numbers without using a third variable, with the division and multiplication operators in Java:
int a = 6;
int b = 3;
System.out.println("value of a and b before swapping, a: " + a +" b: " + b);
//swapping value of two numbers without using temp variable using multiplication and division
a = a*b; //now a is 18 and b is 3
b = a/b; //now a is 18 but b is 6 (original value of a)
a = a/b; //now a is 3 and b is 6, numbers are swapped
System.out.println("value of a and b after swapping using multiplication and division, a: " + a +" b: " + b);
Output:
value of a and b before swapping, a: 6 b: 3
value of a and b after swapping using multiplication and division, a: 3 b: 6
That's all on 3 ways to swap two variables without using a third variable in Java. It's good to know multiple ways of swapping two variables without using a temp or third variable, so you can handle any follow-up question. Swapping numbers using the bitwise operator is the fastest among the three, because it involves only bitwise operations. It's also a great way to show your knowledge of bitwise operators in Java and impress the interviewer, who may then ask some questions on bitwise operations. A nice trick to steer the interview toward your area of expertise.
47 comments :
When I saw the first method you've explained for the first time, I was absolutely amazed.
Thanks for this article. I didn't know the other two ones.
Are there any real world examples where somebody would want to use any of these solutions? Perhaps swapping register contents for context switches within the OS?
Won't there be a problem of overflow in methods 1 & 3?
@Hemanth: of course, that's the potential problem. That's why it's better to come up with method 2 in the first place; then you can show you also know about 1 & 3, while pointing out the risks.
@Hemanth and @Jozsef, Good point. That's a good thing to point out even during interviews, and probably a good follow up question.
Piece of cake in Python lol
a,b=b,a
While these are neat for things like a programmer's interview question I would be very careful about actually using these in real code, for two reasons:
1. It obfuscates the code for very little benefit. Readability suffers more than performance gains.
The JVM could employ similar tricks under the covers for performance reasons if it detects that this is on the hot path.
2. The code is sensitive to overflow, sign handling, Not-a-Number, and divide-by-zero.
Both these points should be raised by an experienced programmer.
What if "b == 0"? Then "b = a/b" gives a division by zero exception...
"Are there any real world examples where somebody would want to use any of these solutions? ".
String a;
String b;
Now do it. Fail.
@Asgeir Storesund Nilsen, I think you raised very important things regarding overflow, sign handling, and divide by zero. Any solution which involves integer arithmetic is always prone to overflow. I guess the second solution still fits the bill; what's your thought?
@Anonymous, yes, that's an edge case for the third solution. Interviewers really like it if you can raise those concerns as well, but you need to back it up with a solution. XOR may help there.
@Jozsef @Hemanth @Javin how could there be overflow in methods 1 & 3? I am new to computer science and just wondering.
@Anonymous1 :
max value of int is 2,147,483,647 in java. So, when you do addition of two large numbers, you might get unexpected values because of over-flow.
@Anonymous2 :
This question was actually asked in my e-bay phone interview.
In Java you can use BigInteger to add extremely large numbers.
---------------
String a;
String b;
Now do it. Fail.
---------------
String a = "abc";
String b = "def";
System.out.println("\na: " + a + "\nb: " + b);
b = a.length() + "_" + a + b;
System.out.println("\na: " + a + "\nb: " + b);
a = b.substring(Integer.parseInt(b.split("_")[0]) + b.indexOf("_") + 1);
System.out.println("\na: " + a + "\nb: " + b);
b = b.substring(b.indexOf("_") + 1, b.indexOf("_") + 1 + Integer.parseInt(b.split("_")[0]));
System.out.println("\na: " + a + "\nb: " + b);
========
Output:
a: abc
b: def
a: abc
b: 3_abcdef
a: def
b: 3_abcdef
a: def
b: abc
Win.
@Hemanth, the first method to swap two numbers is fine with integer overflow, because overflow is clearly defined in Java (it wraps around), so the subsequent subtractions still recover the original values. I did run it with a case which overflows:
//first method
int a = Integer.MAX_VALUE;
int b = 2;
System.out.println("before, a: " + a+ " b: " + b);
a = a + b;
b = a - b;
a = a - b;
System.out.println("after a: " + a + " b: " + b);
This is what I see :
before, a: 2147483647 b: 2
after a: 2 b: 2147483647
On the third method, there are a couple of problems related to sign and divide-by-zero, which may arise due to integer overflow.
Can I use these examples to swap two numbers without using a temp variable in C, C++ or C#? Actually, the same question was asked, but in a C# interview.
What is the point of swapping two variables without temporary variable, making it unreadable, introducing new bugs and wasting CPU?
Another way ....
--------------------------------
import java.util.*;
class StackEx {
public static void main(String args[]) {
int a=10,b=20;
Stack st = new Stack();
st.push(new Integer(a));
st.push(new Integer(b));
System.out.println("b value"+(Integer)st.pop());
System.out.println("a value"+(Integer)st.pop());
}
}
Your Method #2 is one that I first used when doing assembly language programming . . . in 1973. ;-)
You can achieve the same with a one-liner:
a = a + b - (b=a);
How do you create a swap function for integers in Java, as there are no pointers in Java and primitive data types are passed by value, not by reference?
If one of the numbers is negative, then what is the logic?
guys, there is another alternative too :-
int a = 10;
int b = 20;
a = ( a + b ) - ( b = a );
P.S :- Taken from
@Ruks Shetty
String a = "abcdefghijklmno";
String b = "123456787654321";
b = a + b;
a = b.substring(a.length());
b = b.substring(0, b.indexOf(a));
a = b + 0 * (b = a); it's a cool idea, be sure to mention the downside to your interviewer.
If some interviewer asks me this question, I will quit that interview and leave. This is so unreal and so academic if you are not hiring for some very specialised position, not to mention all the problems with overflow, performance, etc...
My brain is actively refusing to waste precious time on problems like this, and from my > 10 years of programming experience, this type of question will tell you nothing about your candidate and his programming skills and experience. He is often too nervous to solve algorithmic problems when sweating in front of an interviewer; if he comes up with absolutely nothing, it doesn't mean he isn't smart.
Maybe the only thing you will learn, if he solves it right on the spot, is that he reads this blog regularly :)
As long as both String a & String b are not null, we can use the following to swap them:
b = a + (a = b).substring(0, 0);
Method 2 is the best method. It uses very little space & is very fast. Method 3 will give a runtime error if b=0.
IIRC, while it's probably pointless on modern hardware, this used to be a way to save space when memory or registers were at a premium (ex. embedded systems, early computers with a few hundred bytes of RAM, etc.).
The purpose of these types of questions is only to test how you will perform when you are put in a situation that is outside your comfort zone!!! Will you panic? Will you revolt? Will you remain indifferent? Or will you show leadership skills to address the situation??? That's about it, dude..
The title of the article says NUMBERS not STRINGS.
@Anonymous, you have summed it absolutely correct. I like one or two such interview question everytime, something which is new to candidate and gives him an opportunity to apply his knowledge in totally new problem.
how can we swap two characters without using 3rd variable?????
Does this work too?
a^=b^=a^=b
Sorry, I'm new to Java.
I have a couple of questions on this:
First : What's wrong in using temp for swapping? What's the advantage do we have without using temp?
The question is about swapping two numbers; but the answers are given for integers. Except for the first one, would the others work for numbers of type float/double?
Suppose I want to swap i and a[i] in an integer array
temp=i
i = a[i]
a[i] = temp
would work nicely.
Would the following give the effect that I want to?
=====
a[i] = a[i] + i
i = a[i] - i
a[i] = a[i] - i
=========
Above all: I don't see any disadvantage in using temp, which forms a universal solution for all primitive types.
Using the bitwise operator (^) is better performance-wise, so go for the bitwise operator.
Toun, your logic will not work,
As you said: a = b + 0 * (b = a);
Let : a=2, b=3
1st execute (b=a); //b=2
then a = b + 0 * (b = a);
a = 2 + 0 * 2;
a=2;
So here, the value of "b" will be swapped, but the value of "a" remains as it is.
I had this question asked in the first programming class I ever took - we weren't even learning a language, we just created flowcharts of a solution. The question as I heard it was output 2 values in ascending order without using a temp variable. The question is not about language skill, it is about thinking skill, so bitwise operators and multiplication and all the rest are not really the point.
I think I ended up solving it by checking if the first number was larger than the second value, then output b, a otherwise output a, b.
This was also asked in 3 different variants - first was output sorted values using 2 temp variables, next was output sorted values using one temp variable and the third was output sorted using no temp variables.
They only work for integers, and the one that uses division fails for b=0. As someone noted, it also produces more code than a regular swap via temp variable, so it's useless.
There is no way to judge a person based on this question. These types of questions are way too common and readily available on the net. It's just chance, not skill. I got asked this swap-integer question today. Had I looked in one of these interview guides, I would have easily answered it in a 3 to 4 minute spell, and the interviewer would have thought 'Oh, this guy has good analytical skills'. Total bullshit. The interviewer should have asked or discussed some real-time scenario-based questions, rather than picking some BS from the internet.
Will there be an overflow in 1?
In bitwise swapping, if both values are equal, both values will become zero. So it should be handled.
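Strictly speaking, the zeroing only bites when the two operands alias the same storage; two distinct variables holding equal values XOR-swap fine. A quick sketch of the aliasing pitfall (my own illustration, with assumed array indices):

// If both "variables" are really the same memory location (a[i] and a[j]
// with i == j), the XOR swap zeroes the value instead of preserving it.
int[] a = {7};
int i = 0, j = 0;          // both indices refer to the same element
a[i] ^= a[j];              // a[0] = 7 ^ 7 = 0
a[j] ^= a[i];              // a[0] = 0 ^ 0 = 0
a[i] ^= a[j];              // a[0] stays 0, the value 7 is lost
System.out.println(a[0]);  // prints 0, not 7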
|
http://javarevisited.blogspot.com/2013/02/swap-two-numbers-without-third-temp-variable-java-program-example-tutorial.html?showComment=1379794909967
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
SDL_SetVideoMode
Section: SDL API Reference (3)
Updated: Tue 11 Sep 2001, 23:01
NAME
SDL_SetVideoMode - Set up a video mode with the specified width, height and bits-per-pixel.
SYNOPSIS
#include "SDL.h"
SDL_Surface *SDL_SetVideoMode(int width, int height, int bpp, Uint32 flags);
DESCRIPTION
Set up a video mode with the specified width, height and bits-per-pixel.
If bpp is 0, it is treated as the current display bits per pixel.
The flags parameter is the same as the flags field of the SDL_Surface structure. OR'd combinations of the following values are valid.
- SDL_HWPALETTE
- Give SDL exclusive palette access. Without this flag you may not always get the colors you request with SDL_SetColors or SDL_SetPalette.
- SDL_DOUBLEBUF
- Enable hardware double buffering; only valid with SDL_HWSURFACE. Calling SDL_Flip will flip the buffers and update the screen. All drawing will take place on the surface that is not displayed at the moment. If double buffering could not be enabled then SDL_Flip will just perform a SDL_UpdateRect on the entire screen.
- SDL_RESIZABLE
- Create a resizable window. When the window is resized by the user, an SDL_VIDEORESIZE event is generated and SDL_SetVideoMode can be called again with the new size.
- SDL_NOFRAME
- If possible, SDL_NOFRAME causes SDL to create a window with no title bar or frame decoration. Fullscreen modes automatically have this flag set.
- Note:
Whatever flags SDL_SetVideoMode could satisfy are set in the flags member of the returned surface.
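A typical call sequence, as a brief sketch (not part of the original page; the resolution and flags are arbitrary):

#include "SDL.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* bpp 0 means "use the current display bits-per-pixel" */
    SDL_Surface *screen =
        SDL_SetVideoMode(640, 480, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);
    if (screen == NULL) {
        fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    /* check which of the requested flags were actually satisfied */
    if (!(screen->flags & SDL_DOUBLEBUF))
        fprintf(stderr, "double buffering not available\n");

    SDL_Quit();
    return 0;
}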
|
http://www.thelinuxblog.com/linux-man-pages/3/SDL_SetVideoMode
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
I'm working on a little program and I want to break it up into multiple source files and also use a make file. I can't seem to get my program to compile though and I don't know why. I type
Code:
mingw32-make -f makefile.mak
and it doesn't compile. Here are some source files that illustrate my problem.
Code:
// main.cpp
#include <iostream>
using namespace std;

int main () {
    CPoint A, B;
    A.set_point (3,4);
    B.set_point (5,-2);
    cout << "distance between points: " << distance(A, B) << endl;
    return 0;
}

Code:
// CPoint.cpp
#include <cmath>

class CPoint {
    double x, y;
public:
    void set_point (double,double);
    double distance(CPoint,CPoint);
};

void CPoint::set_point (double a, double b) {
    x = a;
    y = b;
}

double CPoint::distance (CPoint A, CPoint B){
    double dist;
    dist = sqrt((A.x-B.x)*(A.x-B.x)+(A.y-B.y)*(A.y-B.y));
    return dist;
}

Code:
#makefile.mak
all: Mult

Mult: main.o CPoint.o
	g++ main.o CPoint.o -o Mult

main.o: main.cpp
	g++ -c main.cpp

CPoint.o: CPoint.cpp
	g++ -c CPoint.cpp

clean:
	rm -rf *o Mult
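For what it's worth, the immediate compile failure is that main.cpp never sees a declaration of CPoint. The usual fix is a shared header included by both source files; a minimal sketch (the file name CPoint.h is assumed, and distance is made a free friend function so the call distance(A, B) in main compiles):

Code:
// CPoint.h -- shared declaration so main.cpp can see the class
#ifndef CPOINT_H
#define CPOINT_H

class CPoint {
    double x, y;
public:
    void set_point (double, double);
    // free function, declared friend so it can read x and y
    friend double distance(CPoint, CPoint);
};

double distance(CPoint, CPoint);

#endif

Both main.cpp and CPoint.cpp would then #include "CPoint.h", and the definition in CPoint.cpp would drop the CPoint:: qualifier from distance.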
|
https://cboard.cprogramming.com/cplusplus-programming/106964-multiple-source-files-make-files-scope-include.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
So far, all of the variables used have been declared at the start of the main() method. However, Java allows variables to be declared within any block.
A block begins with an opening curly brace and ends with a closing curly brace. A block defines a scope. Thus, each time you start a new block, you are creating a new scope. A scope determines what objects are visible to other parts of your program. It also determines the lifetime of those objects.
Many other computer languages define two general categories of scope: global and local. However, these traditional scopes do not fit well with Java's strict, object-oriented model. While it is possible to create what amounts to a global scope, it is by far the exception, not the rule.
In Java, the two major scopes are those defined by a class and those defined by a method. Even this distinction is somewhat artificial. However, since the class scope has several unique properties and attributes that do not apply to the scope defined by a method, this distinction makes some sense. You will learn about classes later, in a separate chapter. For now, we will examine only the scopes defined by or within a method.
The scope defined by a method begins with its opening curly brace. However, if that method has parameters, they too are included within the method's scope.
As a general rule, variables declared inside a scope are not visible (that is, accessible) to code defined outside that scope. Thus, when you declare a variable within a scope, you are localizing that variable and protecting it from unauthorized access and/or modification. Scopes can be nested: objects declared in the outer scope will be visible to code within the inner scope. However, the reverse is not true. Objects declared within the inner scope will not be visible outside it.
To understand the effect of nested scopes, consider the following program:
/* Java Program Example - Java Variables Scope */

public class JavaProgram {
    public static void main(String args[]) {
        int x;    // known to all code within main
        x = 10;
        if (x == 10) {
            int y = 20;    // known only to this block
            /* x and y both known here */
            System.out.println("x : " + x + "\ny : " + y);
            x = y * 2;
        }
        // y = 100;    // error! y not known here
        /* x is still known here */
        System.out.println("x is " + x);
    }
}
When the above Java program is compiled and run, it will produce the following output:

x : 10
y : 20
x is 40
As the comments indicate, the variable x is declared at the start of main()'s scope and is accessible to all subsequent code within main(). Within the if block, y is declared. Since a block defines a scope, y is visible only to other code within its block. This is why, outside of its block, the line y = 100; is commented out. If you remove the leading comment symbol (//), a compile-time error will occur, because y is not visible outside of its block. Within the if block, x can be used because code within a block (i.e., a nested scope) has access to variables declared in an enclosing scope. Also note that a variable cannot be used before it is declared; the following code fragment is invalid because count cannot be used prior to its declaration:
// This fragment is wrong!
count = 100;    // oops! cannot use count before it is declared!
int count;
Here is another important point to remember: variables are created when their scope is entered, and destroyed when their scope is left. This means that a variable will not hold its value once it has gone out of scope. Therefore, consider the following program:
/* Java Program Example - Demonstrate lifetime of a variable - Java Scope Rules */

public class JavaProgram {
    public static void main(String args[]) {
        int x;
        for (x = 0; x < 5; x++) {
            int y = -1;    // y is initialized each time block is entered
            System.out.println("y is : " + y);    // this always prints -1
            y = 100;
            System.out.println("y is now : " + y);
        }
    }
}
When the above Java program is compiled and run, it will produce the following output:

y is : -1
y is now : 100
y is : -1
y is now : 100
y is : -1
y is now : 100
y is : -1
y is now : 100
y is : -1
y is now : 100
As you can see, y is reinitialized to -1 each time the inner for loop is entered. Even though it is subsequently assigned the value 100, this value is lost.
|
https://codescracker.com/java/java-variables-scope.htm
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
More appropriate string types in IO
Johan would.
SPJ: I don't understand either the problem or the solution.
(G5) Avoid code copies
Johan says: The I/O manager currently has a copy of IntMap inside its implementation because base cannot use containers. Why? Because containers depends on base, so base can't depend on containers. Splitting base would let us get rid of this code duplication. For example:
- base-pure doesn't need containers
- containers depends on base-pure
- base-io depends on containers
- The ST monad can (and should) be provided independently of IO, but currently functions like unsafeIOToST are provided in the Control.Monad.ST namespace.
|
https://ghc.haskell.org/trac/ghc/wiki/SplitBase?version=22
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
MicroPython libraries¶
This chapter describes modules (function and class libraries) which are built into MicroPython and CircuitPython. There are a few categories of modules:
- Modules which implement a subset of standard Python functionality and are not intended to be extended by the user.
- Modules which implement a subset of Python functionality, with a provision for extension by the user (via Python code).
- Modules which implement MicroPython extensions to the Python standard libraries.
- Modules specific to a particular port and thus not portable.
Note about the availability of modules and their contents: MicroPython is highly configurable, and each port to a particular board or embedded system makes available only a subset of these libraries.
Python standard libraries and micro-libraries¶
The following standard Python libraries have been "micro-ified" to fit in with the philosophy of MicroPython. They provide the core functionality of that module and are intended to be a drop-in replacement for the standard Python library. Some modules below use a standard Python name, but prefixed with "u", e.g. ujson instead of json. This is to signify that such a module is a micro-library, i.e. implements only a subset of CPython module functionality. By naming them differently, a user has a choice to write a Python-level module to extend functionality for better compatibility with CPython (indeed, this is what is done by the micropython-lib project mentioned above).
On some embedded platforms, where it may be cumbersome to add Python-level wrapper modules to achieve naming compatibility with CPython, micro-modules are available both by their u-name and also by their non-u-name. The non-u-name can be overridden by a file of that name in your package path. For example, import json will first search for a file json.py or a directory json and load that package if it is found. If nothing is found, it will fall back to loading the built-in ujson module.
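For code meant to run on both CPython and MicroPython ports without this automatic aliasing, a common explicit idiom mirrors the same fallback (a sketch, not from the original docs):

try:
    import json          # standard library or a user-provided json.py
except ImportError:
    import ujson as json # fall back to the built-in micro-module

print(json.dumps({"answer": 42}))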
MicroPython-specific libraries¶
Functionality specific to the MicroPython implementation is available in the following libraries.
|
http://circuitpython.readthedocs.io/en/latest/docs/library/index.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Designing Custom Exceptions
The following guidelines help ensure that your custom exceptions are correctly designed.
Avoid deep exception hierarchies.
For more information, see Types and Namespaces.
Do derive exceptions from System.Exception or one of the other common base exceptions.
Note that Catching and Throwing Standard Exception Types has a guideline that states that you should not derive custom exceptions from ApplicationException.
Do end exception class names with the Exception suffix.
Consistent naming conventions help lower the learning curve for new libraries.
Do make exceptions serializable. An exception must be serializable to work correctly across application domain and remoting boundaries.
For information about making a type serializable, see Serialization.
Do provide (at least) the following common constructors on all exceptions. Make sure the names and types of the parameters are the same as those used in the following code example.
public class NewException : BaseException, ISerializable
{
    public NewException()
    {
        // Add implementation.
    }

    public NewException(string message)
    {
        // Add implementation.
    }

    public NewException(string message, Exception inner)
    {
        // Add implementation.
    }

    // This constructor is needed for serialization.
    protected NewException(SerializationInfo info, StreamingContext context)
    {
        // Add implementation.
    }
}
Do report security-sensitive information through an override of System.Object.ToString only after demanding an appropriate permission. If the permission demand fails, return a string that does not include the security-sensitive information.
Do store useful security-sensitive information in private exception state. Ensure that only trusted code can get the information.
Consider providing exception properties for programmatic access to extra information (besides the message string) relevant to the exception.
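As a brief sketch of that guideline (my own illustration, not from the original page; the type and property names are invented, and the serialization constructor shown earlier is omitted for brevity):

using System;

[Serializable]
public class DataStoreException : Exception
{
    public DataStoreException() { }

    public DataStoreException(string message) : base(message) { }

    public DataStoreException(string message, Exception inner)
        : base(message, inner) { }

    public DataStoreException(string message, string storeName)
        : base(message)
    {
        StoreName = storeName;
    }

    // Extra, programmatically accessible context beyond the message string.
    public string StoreName { get; private set; }
}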
For more information on design guidelines, see the "Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries" book by Krzysztof Cwalina and Brad Abrams, published by Addison-Wesley, 2005.
|
https://msdn.microsoft.com/en-us/library/ms229064(v=vs.100).aspx
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Cody had some patches for enabling sub namespaces in endpoints (as in enabling slashes). Might be worth pulling those in.

- Joris

-----------------------------------------------------------

> -------
>
> This involved a lot more challenges than I anticipated. I've captured the
> various approaches and the limitations and deal-breakers of those approaches
> here: [Master Endpoint Implementation Challenges]()
>
> Key points:
>
> * This is a stop-gap solution until we shift the offer creation/management
>   logic from the master to the allocator.
> * `updateAvailable` and `updateSlave` are kept separate because
>   (1) `updateAvailable` is allowed to fail whereas `updateSlave` must not.
>   (2) `updateAvailable` returns a `Future` whereas `updateSlave` does not.
>   (3) `updateAvailable` never leaves the allocator in an over-allocated
>   state and must not, whereas `updateSlave` does, and can.
> * The algorithm:
>   * Initially, the master pessimistically assumes that what seem like
>     "available" resources will be gone. This is due to the race between the
>     allocator scheduling an `allocate` call to itself vs the master's
>     `allocator->updateAvailable` invocation. As such, we first try to
>     satisfy the request only with the offered resources.
>   * We greedily rescind one offer at a time until we've rescinded
>     sufficiently many offers. IMPORTANT: We perform
>     `recoverResources(..., Filters())` rather than
>     `recoverResources(..., None())` so that we can pretty much always win
>     the race against `allocate`. In the case that we lose, no disaster
>     occurs. We simply fail to satisfy the request.
>   * If we still don't have enough resources after rescinding all offers, be
>     optimistic and forward the request to the allocator, since there may be
>     available resources to satisfy the request.
>   * If the allocator returns a failure, report the error to the user with
>     `PreconditionFailed`. This could be updated to be `Forbidden`, or maybe
>     `Conflict` as well. We'll pick one eventually.
>
> This approach is clearly not ideal, since we would prefer to rescind as
> few offers as possible. The challenges of implementing the ideal solution
> in the current state are described in the document above.
>
> TODO(mpark): Add more comments and test cases.
>
> Diffs
> -----
>
> src/master/validation.hpp 469d6f56c3de28a34177124aae81ce24cb4ad160
> src/master/validation.cpp 9d128aa1b349b018b8e4a1916434d848761ca051
>
> Diff:
>
> Testing
> -------
>
> `make check`
>
> Thanks,
>
> Michael Park
Source: https://www.mail-archive.com/reviews@mesos.apache.org/msg04981.html
The binary-based solutions described in this page are appropriate for software beyond your control.
Gradual migration to SLF4J from Jakarta Commons Logging (JCL)
jcl-over-slf4j.jar
To ease migration to SLF4J from JCL, SLF4J distributions include the jar file jcl-over-slf4j.jar. This jar file is intended as a drop-in replacement for JCL version 1.1.1. Once it is in place, the selection of the underlying logging framework will be done by SLF4J instead of JCL, but without the class loader headaches plaguing JCL. The underlying logging framework can be any of the frameworks supported by SLF4J. Often times, replacing commons-logging.jar with jcl-over-slf4j.jar will immediately and permanently solve class loader issues related to commons logging.
Note that jcl-over-slf4j.jar should not be confused with slf4j-jcl.jar: the former re-implements the JCL API on top of SLF4J, whereas the latter binds SLF4J to JCL.
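As an illustration, application code written against the JCL API needs no changes after the jar swap; a minimal sketch (class name hypothetical):

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class JclClient {
    // Obtained through the JCL API; with jcl-over-slf4j.jar on the
    // classpath, this call is served by SLF4J underneath.
    private static final Log log = LogFactory.getLog(JclClient.class);

    public static void main(String[] args) {
        // Routed to whichever backend the SLF4J binding selects
        // (logback, log4j, java.util.logging, ...).
        log.info("Hello from the JCL API, handled by SLF4J");
    }
}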
log4j-over-slf4j
SLF4J ships with a module called log4j-over-slf4j. It allows log4j users to migrate existing applications to SLF4J without changing a single line of code, simply by replacing the log4j.jar file with log4j-over-slf4j.jar, as described below.
How does it work?
The log4j-over-slf4j module contains replacements of the most widely used log4j classes, namely org.apache.log4j.Category, org.apache.log4j.Logger, org.apache.log4j.Priority, org.apache.log4j.Level, org.apache.log4j.MDC, and org.apache.log4j.BasicConfigurator. These replacement classes redirect all work to their corresponding SLF4J classes.
To use log4j-over-slf4j in your own application, the first step is to locate and then to replace log4j.jar with log4j-over-slf4j.jar. Note that you still need an SLF4J binding and its dependencies for log4j-over-slf4j to work properly.
In most situations, replacing a jar file is all it takes in order to migrate from log4j to SLF4J.
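As a sketch, existing code written against the log4j API keeps compiling and running unchanged after the swap (class name hypothetical):

import org.apache.log4j.Logger;

public class Log4jClient {
    // The log4j API as before; with log4j-over-slf4j.jar in place of
    // log4j.jar, this Logger delegates to SLF4J.
    private static final Logger logger = Logger.getLogger(Log4jClient.class);

    public static void main(String[] args) {
        logger.debug("debug via the log4j API");
        logger.info("info via the log4j API, handled by SLF4J");
    }
}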
Note that as a result of this migration, log4j configuration files will no longer be picked up. If you need to migrate your log4j.properties file to logback, the log4j translator might be of help. For configuring logback, please refer to its manual.
When does it not work?
The log4j-over-slf4j module will not work when the application calls log4j components that are not present in the bridge. For example, when application code directly references log4j appenders, filters or the PropertyConfigurator, then log4j-over-slf4j would be an insufficient replacement for log4j. However, when log4j is configured through a configuration file, be it log4j.properties or log4j.xml, the log4j-over-slf4j module should just work fine.
What about the overhead?
The overhead of using log4j-over-slf4j instead of log4j directly is relatively small. Given that log4j-over-slf4j immediately delegates all work to SLF4J, the CPU overhead should be negligible, on the order of a few nanoseconds. There is a memory overhead corresponding to an entry in a hashmap per logger, which is usually acceptable even for very large applications consisting of several thousand loggers. Moreover, if you choose logback as your underlying logging system, given that logback is both much faster and more memory-efficient than log4j, the gains made by using logback should compensate for the overhead of using log4j-over-slf4j instead of log4j directly.
log4j-over-slf4j.jar and slf4j-log4j12.jar cannot be present simultaneously
The presence of slf4j-log4j12.jar, that is the log4j binding for SLF4J, will force all SLF4J calls to be delegated to log4j. The presence of log4j-over-slf4j.jar will in turn delegate all log4j API calls to their SLF4J equivalents. If both are present simultaneously, SLF4J calls will be delegated to log4j, and log4j calls redirected to SLF4J, resulting in an endless loop.
jul-to-slf4j bridge
The jul-to-slf4j module includes a java.util.logging (jul) handler, namely SLF4JBridgeHandler, which routes all incoming jul records to the SLF4J API. Please see the SLF4JBridgeHandler javadocs for usage instructions.
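A minimal installation sketch, assuming jul-to-slf4j and an SLF4J binding are on the classpath (install() is the call mentioned further below; clearing any pre-installed jul root handlers first avoids duplicate output):

import org.slf4j.bridge.SLF4JBridgeHandler;

public class JulBridgeSetup {
    public static void main(String[] args) {
        // Remove handlers attached to the jul root logger so records are
        // not printed twice.
        SLF4JBridgeHandler.removeHandlersForRootLogger();

        // Route all incoming jul records to the SLF4J API.
        SLF4JBridgeHandler.install();

        java.util.logging.Logger.getLogger("demo").info("via jul, into SLF4J");
    }
}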
Note on performance: contrary to the other bridging modules, namely jcl-over-slf4j and log4j-over-slf4j, which reimplement JCL and log4j respectively, the jul-to-slf4j module does not reimplement the java.util.logging API, because packages under the java.* namespace cannot be replaced. Instead, jul-to-slf4j translates LogRecord objects into their SLF4J equivalents. Please note that this translation process incurs the cost of constructing a LogRecord instance regardless of whether the SLF4J logger is disabled for the given level or not. Consequently, j.u.l.-to-SLF4J translation can seriously increase the cost of disabled logging statements (60-fold, or 6000%) and measurably impact the performance of enabled log statements (a 20% overall increase).
As of logback version 0.9.25, it is possible to completely eliminate the 60-fold translation overhead for disabled log statements with the help of LevelChangePropagator.
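LevelChangePropagator is normally configured in logback.xml; as a sketch, it can also be installed programmatically, assuming logback-classic is the SLF4J backend:

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.jul.LevelChangePropagator;
import org.slf4j.LoggerFactory;

public class PropagatorSetup {
    public static void main(String[] args) {
        // Only valid when logback-classic backs SLF4J.
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Propagate logback level changes onto java.util.logging loggers,
        // so disabled jul statements can be skipped cheaply.
        LevelChangePropagator propagator = new LevelChangePropagator();
        propagator.setContext(context);
        propagator.setResetJUL(true);
        propagator.start();
        context.addListener(propagator);
    }
}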
If you are concerned about application performance, then use of SLF4JBridgeHandler is appropriate only if one of the following two conditions is true:
- few j.u.l. logging statements are in play
- LevelChangePropagator has been installed
jul-to-slf4j.jar and slf4j-jdk14.jar cannot be present simultaneously
The presence of slf4j-jdk14.jar, that is the jul binding for SLF4J, will force SLF4J calls to be delegated to jul. On the other hand, the presence of jul-to-slf4j.jar, plus the installation of SLF4JBridgeHandler by invoking "SLF4JBridgeHandler.install()", will route jul records to SLF4J. Thus, if both jars are present simultaneously (and SLF4JBridgeHandler is installed), SLF4J calls will be delegated to jul and jul records will be routed to SLF4J, resulting in an endless loop.
Source: https://www.slf4j.org/legacy.html
25 March 2007 22:28 [Source: ICIS news]
SAN ANTONIO (ICIS news)--Shell Chemical expects ethylene demand in
Despite the lower operating rate, there had been expressions of interest for the Puerto Rico refinery which supplies heavy feedstock for Shell’s ethylene units, he said, but added that the group was at the very early stages of any possible divestment process.
He had more to say about ethylene demand, however.
“We see [ethylene] growth aligning with GDP (gross domestic product) globally with growth in
Shell’s view matches that of numerous commentators, who see average ethylene demand growth in North America declining as derivatives growth is fed increasingly by imports, largely from new Middle East capacities.
“We see export growth diminishing and demand growth satisfied from elsewhere,” Chouffot said.
For the past two years, ethylene and derivatives growth has been over 4% globally, he added. Shell’s
The ethylene picture in North America looks a little better than at the end of last year, Chouffot said, with high inventories having been worked out of the system and less concern over the
Shell’s strategy in chemicals in
The market will remain competitive and the focus for Shell will be on issues such as reliability and safety, he suggested.
Chouffot said Shell continued to build on upstream integration in chemicals and on synergies with its ref
Source: http://www.icis.com/Articles/2007/03/25/9015897/npra-07-shell-expects-flat-ethylene-demand.html
Allows object modules included in the application to perform various registration operations after main() has been called.
Most products need to register with other products to provide information such as product version, command line options, and diagnostic writers. The Bootstrap class provides a safe mechanism for these registrations to occur.
Bootstrapping occurs in two steps and requires two components, a callback function and a static bootstrap object. The construction of the static bootstrap object registers the callback function for later execution by main() and the callback function performs registrations such as registering product information, command line options and diagnostics writers.
By utilizing this mechanism, any modules which need to be bootstrapped can do so by being linked into the executable.
#include <stk_util/util/Bootstrap.hpp>

namespace {

void bootstrap()
{
  boost::program_options::options_description desc("Use case options");
  desc.add_options()
    ("performance", "run performance test")
    ("mesh", boost::program_options::value<std::string>(), "run mesh file performance test");

  stk::env::get_options_description().add(desc);
}

stk::Bootstrap x(&bootstrap);

} // namespace
Main contains the following function which executes the registered bootstrap functions.
int main()
{
  stk::Bootstrap::bootstrap();
}
The application's main executes the bootstrap functions, causing the command line description to be fully populated.
Source: http://trilinos.sandia.gov/packages/docs/r11.4/packages/stk/doc/html/group__stk__util__bootstrap__detail.html
Editor's note: In part one of this two-part series of excerpts from Eclipse, author Steve Holzner provided examples of how Eclipse makes it easier to create Java code from scratch. Continuing in that vein, in this week's concluding excerpt Steve covers creating Javadocs, refactoring, adding certain skills to your Eclipse toolbox, and customizing the development environment.
Eclipse also makes it easy to develop Javadoc documentation, the standard Java documentation that accompanies Java programs. You'll notice that in the code it generates, Eclipse inserts some text for Javadoc, as you see in Ch02_05.java:
package org.eclipsebook.ch02;

/**
 * @author Steven Holzner
 *
 * To change the template for this generated type comment go to
 * Window>Preferences>Java>Code Generation>Code and Comments
 */
. . .
If you want to enter your own Javadoc, code assist helps you here, too; for example, if you enter @param and invoke code assist with Ctrl+Space, code assist will list the parameters a method takes. Typing @exception and using code assist will list the exceptions a method throws, and so on. Typing @ in a comment and pausing will make code assist display the Javadoc possibilities, like @author, @deprecated, and so on.
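For reference, a complete Javadoc comment of the kind code assist helps you build might look like this (the method and its tags are illustrative, not from the book's example):

/**
 * Computes the total price of an order.
 *
 * @param quantity  the number of items ordered
 * @param unitPrice the price of a single item
 * @return the total price
 * @exception IllegalArgumentException if quantity is negative
 * @author Steven Holzner
 */
public double totalPrice(int quantity, double unitPrice) {
    if (quantity < 0) {
        throw new IllegalArgumentException("quantity must be non-negative");
    }
    return quantity * unitPrice;
}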
To generate Javadoc from your code, select the Project→ Generate Javadoc item, opening the Generate Javadoc dialog, which lets you select the project for which you want to create Javadocs. To browse a project's Javadocs, select the Navigate→ Open External Javadoc menu item. For example, you can see the generated Javadoc for the Ch02_05 project in Figure 2-19.
Figure 2-19. Browsing Javadoc
One of the major advantages of using a good Java IDE like Eclipse is that it can let you rename and move Java elements around, and it will update all references to those items throughout your code automatically.
For example, take a look at the code in Example 2-6. Here, we've used code assist to create a new method to display a simple message, but we forgot to change the default name for the method that code assist supplied.
Example 2-6. The Ch02_06.java example
package org.eclipse.ch02;

/**
 * @author Steven Holzner
 *
 * To change the template for this generated type comment go to
 * Window>Preferences>Java>Code Generation>Code and Comments
 */
public class Ch0206 {

    public static void main(String[] args) {
        name( );
    }

    public static void name( ) {
        System.out.println("No worries.");
    }
}
This default name for the new method, name, is called in the main method, and it could be called from other locations in your code as well. How can you change the name of this method and automatically update all calls to it? Select name in the editor and then select the Refactor→ Rename menu item, opening the Rename Method dialog you see in Figure 2-20.
Figure 2-20. Refactoring a method
Enter the new name for the method, printer in this case, and click OK. When you do, the name of this method and all references to it will be updated throughout your code, including all code in the project, as you see here:
package org.eclipse.ch02;

/**
 * @author Steven Holzner
 *
 * To change the template for this generated type comment go to
 * Window>Preferences>Java>Code Generation>Code and Comments
 */
public class Ch0206 {

    public static void main(String[] args) {
        printer( );
    }

    public static void printer( ) {
        System.out.println("No worries.");
    }
}
We've also misnamed the class in this example—Ch0206, instead of Ch02_06. To rename the class, select Ch0206 in the editor and select the Refactor→ Rename menu item, opening the Rename Type dialog you see in Figure 2-21. Enter the new name, Ch02_06, and click OK to rename the class.
Figure 2-21. Refactoring a class
Clicking OK not only changes the name of the class in the code, it even changes the name of the class's file from Ch0206.java to Ch02_06.java, as you can see by checking the Package Explorer. Here's the new code:
package org.eclipse.ch02;

/**
 * @author Steven Holzner
 *
 * To change the template for this generated type comment go to
 * Window>Preferences>Java>Code Generation>Code and Comments
 */
public class Ch02_06 {

    public static void main(String[] args) {
        printer( );
    }

    public static void printer( ) {
        System.out.println("No worries.");
    }
}
In fact, we've unaccountably managed to misname the package as well when creating this example—org.eclipse.ch02 instead of org.eclipsebook.ch02. When you refactor it, the name is changed both in the Package Explorer and throughout your code:

package org.eclipsebook.ch02;

/**
 * @author Steven Holzner
 *
 * To change the template for this generated type comment go to
 * Window>Preferences>Java>Code Generation>Code and Comments
 */
public class Ch02_06 {

    public static void main(String[] args) {
        printer( );
    }

    public static void printer( ) {
        System.out.println("No worries.");
    }
}
As you can see, it's easy to rename Java elements in your code—Eclipse will handle the details, making the changes throughout your code automatically.
TIP: If you simply type over a Java element in your code, no refactoring happens. You've got to explicitly refactor if you want those changes to echo throughout your code.
Refactoring works automatically across files as well. Say, for example, that you want to move the printer method to another class, Ch02_06Helper. To see how this works, create that new class now, which Eclipse will put in its own file, Ch02_06Helper.java. Then select the method you want to move, printer, by selecting the word "printer" in the declaration of this method. Next, select Refactor→ Move to open the dialog you see in Figure 2-22. To move this method to the Ch02_06Helper class, enter the fully qualified name of that class, org.eclipsebook.ch02.Ch02_06Helper, in the dialog and click OK. This moves the printer method to the Ch02_06Helper class like this:
package org.eclipsebook.ch02;

/**
 * @author Steven Holzner
 *
 * To change the template for this generated type comment go to
 * Window>Preferences>Java>Code Generation>Code and Comments
 */
public class Ch02_06Helper {

    public static void printer( ) {
        System.out.println("No worries.");
    }
}
Figure 2-22. Moving a method between classes
And the call to the printer method is automatically qualified as Ch02_06Helper.printer back in the Ch02_06 class in the main method:

package org.eclipsebook.ch02;

/**
 * @author Steven Holzner
 *
 * To change the template for this generated type comment go to
 * Window>Preferences>Java>Code Generation>Code and Comments
 */
public class Ch02_06 {

    public static void main(String[] args) {
        Ch02_06Helper.printer( );
    }
}
You can also extract interfaces using refactoring. To see how this works, we'll create an interface for the Ch02_06Helper class (this class has the printer method in it). Convert printer from a static to a standard method by deleting the keyword static in the method declaration. Then select the name of the class, Ch02_06Helper, in the editor and select Refactor→ Extract Interface to open the Extract Interface dialog you see in Figure 2-23. Select the printer method to add that method to the interface, and then enter the name of the new interface—Ch02_06HelperInterface—and click OK.
Figure 2-23. Extracting an interface
Clicking OK creates a new file, Ch02_06HelperInterface.java, where the interface is declared:
package org.eclipsebook.ch02;

/**
 * @author Steven Holzner
 *
 * To change the template for this generated type comment go to
 * Window>Preferences>Java>Code Generation>Code and Comments
 */
public interface Ch02_06HelperInterface {
    public abstract void printer( );
}
The original class is now declared to implement this new interface, Ch02_06HelperInterface:

package org.eclipsebook.ch02;

/**
 * @author Steven Holzner
 *
 * To change the template for this generated type comment go to
 * Window>Preferences>Java>Code Generation>Code and Comments
 */
public class Ch02_06Helper implements Ch02_06HelperInterface {

    public void printer( ) {
        System.out.println("No worries.");
    }
}
Besides renaming and moving elements and extracting interfaces, there are other operations you can perform with refactoring, such as converting anonymous classes to nested classes, changing a method's signature, and converting a local variable to a class field. For these and other options, take a look at the items available in the Refactor menu.
Source: http://www.linuxdevcenter.com/lpt/a/4911
PrintServer Class
Manages the print queues on a print server, which is usually a computer, but can be a dedicated hardware print server appliance.
System.Printing.PrintSystemObject
System.Printing.PrintServer
System.Printing.LocalPrintServer
Namespace: System.Printing
Assembly: System.Printing (in System.Printing.dll)
The PrintServer type exposes the following members.
When your program writes a value to a property of PrintServer, that change has no effect until it is passed on to the computer that is represented by the PrintServer object. To commit changes, use the Commit method for the object.
Similarly, other applications may change the actual print service properties of the computer. To make sure that the PrintServer object for your program has the latest values, use the Refresh method for the object.
If you want to print from a Windows Forms application, see the System.Drawing.Printing namespace.
The following example shows how to create an instance of PrintServer.
// Create a PrintServer
// "theServer" must be a print server to which the user has full print access.
PrintServer myPrintServer = new PrintServer(@"\\theServer");

// List the print server's queues
PrintQueueCollection myPrintQueues = myPrintServer.GetPrintQueues();
String printQueueNames = "My Print Queues:\n\n";
foreach (PrintQueue pq in myPrintQueues)
{
    printQueueNames += "\t" + pq.Name + "\n";
}
Console.WriteLine(printQueueNames);
Console.WriteLine("\nPress Return to continue.");
Source: http://msdn.microsoft.com/EN-US/library/system.printing.printserver
Introduction
Note: all the code examples can be found on my GitHub profile under visual-studio-projects, accessible here:
In this tutorial, we'll take a look at various methods that we can use to inject a DLL into a process's address space. To inject a DLL into a process's address space, we must have administrator privileges on the system, which means we have already taken over the system completely. This is why these methods cannot be used in a normal attack scenario where we would like to gain code execution on the target computer; the methods assume we already have complete control over the system. But you might ask why we would want to do anything to the system or the processes running on it if we already have full access. There is one single reason: to avoid detection. Once we've gained total control over the system, we must protect ourselves from being detected by the user or system administrator. Being detected would defeat the whole purpose of the attack, so it's best to remain undetected as long as possible. By doing so, we can also track what the user is doing and possibly gather more and more information about the user or the network in which we're located.
First, let’s talk a little about API hooking. We must understand that there are various methods to hook an API:
- Overwriting the address of the function with the custom function’s address.
- Injecting the DLL by creating a new process. This method takes the DLL and forces the executable to load it at runtime, thus hooking the functions defined in the DLL. There are various ways to inject a DLL using this approach.
- Injecting the DLL into the address space of the process. This takes the DLL and injects it into an already running process, which is stealthier than the previous method.
- Modifying the Import Address Table.
- Using proxy DLLs and manifest files.
Let’s take a look at the third option in the above list—the injection of the DLL into the address space of the process. We’re talking about an already running process, and not an executable which we’re about to run. By injecting a DLL into an already running process, we leave less footprint on the system and make the forensic analysis somewhat harder to do. By injecting a custom DLL into an already running process, we’re actually forcing the load of a DLL that wouldn’t otherwise be loaded by the process. There are various ways we can achieve that:[1]
- AppInit_DLLs
- SetWindowsHookEx
- CreateRemoteThread
Remember that the IAT (Import Address Table) is part of the executable and is populated at build time. This is also the reason why we can only hook functions listed in the IAT (with the method we'll describe). This further implies that IAT hooking is only applicable to load-time dynamic linking; it can't be used with run-time dynamic linking, where we don't know in advance which DLLs the program will use.
Creating the DLL
Here we’ll describe the process of creating the DLL. We’ll be injecting into some process using various options. First, we have to create a new project in Visual Studio and choose “Win32 Console Application” as seen on the picture below:
We named the project dllinject, which will also be the name of the created DLL once we compile the source code. When we click on the OK button, a new window will appear where we must select that we’re building a DLL not a console application (which is the default). This can be seen on the picture below (notice that the DLL is checked):
When we click on the Finish button, the project will be created. There will be two header files named stdafx.h and targetver.h and three source files named dllinject.cpp, dllmain.cpp, and stdafx.cpp. The initial project will look like the picture. Let’s check the source code of the dllmain.cpp file, which can be seen below:
The DllMain function is an optional entry point into a DLL. When the system starts or terminates a process or a thread, it calls that function for each loaded DLL. This function is also called whenever we load or unload a DLL with the LoadLibrary and FreeLibrary functions [3]. DllMain takes three parameters, which can be seen below (the picture was taken from [3]):
The parameters of the DllMain function are as follows:
- hinstDLL: a handle to the DLL module, which contains the base address of the DLL.
- fdwReason: the reason why the DLL's entry-point function is being called. There are several possible constants that define the reason [3]:
- DLL_PROCESS_ATTACH: DLL is being loaded into the address space of the process either because the process has a reference to it in the IAT or because the process called the LoadLibrary function.
- DLL_PROCESS_DETACH: DLL is being unloaded from the address space of the process because the process has terminated or because the process called the FreeLibrary function.
- DLL_THREAD_ATTACH: the current process is creating a new thread; when that happens the OS will call the entry points of all DLLs attached to the process in the context of the thread.
- DLL_THREAD_DETACH: the thread is terminating, which calls the entry point of each loaded DLL in the context of the exiting thread.
- lpvReserved: is either NULL or non-NULL based on the fdwReason value and on whether the DLL is being loaded dynamically or statically.
The DllMain function should return TRUE when it succeeds and FALSE when it fails. If we're calling the LoadLibrary function, which in turn calls the entry point of the DLL, and that fails (by returning FALSE), the system will immediately call the entry point again, this time with the DLL_PROCESS_DETACH reason code. After that, the DLL will be unloaded.
Let’s present the whole code that we’ll be using for our DLL. The code is presented below:
#include <windows.h>
#include <stdio.h>

INT APIENTRY DllMain(HMODULE hDLL, DWORD Reason, LPVOID Reserved)
{
    /* open file (note the escaped backslash in the path) */
    FILE *file;
    fopen_s(&file, "C:\\temp.txt", "a+");

    switch (Reason) {
    case DLL_PROCESS_ATTACH:
        fprintf(file, "DLL attach function called.");
        break;
    case DLL_PROCESS_DETACH:
        fprintf(file, "DLL detach function called.");
        break;
    case DLL_THREAD_ATTACH:
        fprintf(file, "DLL thread attach function called.");
        break;
    case DLL_THREAD_DETACH:
        fprintf(file, "DLL thread detach function called.");
        break;
    }

    /* close file */
    fclose(file);
    return TRUE;
}
We're calling the DllMain function normally, but right after that, we're opening the C:\temp.txt file, where some text is written based on why the module was called. After that, the file is closed and the module is done executing.
After we've built the module, we will have the dllinject.dll module ready to be injected into processes. Keep in mind that the DLL doesn't actually do anything other than saving the called reason's name into the C:\temp.txt file. If we would like to actually do something, we have to change the DllMain() function to change some entries in the IAT, which will effectively hook the IAT. We'll see an example of this later. For now, we'll only take a look at the previously mentioned methods of DLL injection.
The AppInit_DLLs Method
The Appinit_DLLs value uses the following registry key [2]:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Windows
We can see that by default the Appinit_DLLs key has a blank value of the type REG_SZ, which can be seen on the picture below:
The AppInit_DLLs value can hold a space-separated list of DLLs with full paths, which will be loaded into the process's address space. This is done by using the LoadLibrary() function call during the DLL_PROCESS_ATTACH processing of user32.dll; user32.dll has special code that traverses through the DLLs and loads them, so this functionality is strictly restricted to user32.dll. This means that the listed DLLs will be loaded into the process space of every application that links against the user32.dll library by default. If the application doesn't use that library and is not linked against it, then the additional DLLs will not be loaded into the process space. A careful reader might have noticed another similar registry key, LoadAppInit_DLLs, which is by default set to 1. This field specifies whether the AppInit_DLLs should be loaded when the user32.dll library is loaded; the value of 1 means true, which means that all the DLLs specified in AppInit_DLLs will also be loaded into the process's address space when it's linked against user32.dll.
The article at [2] suggests that we should use only the kernel32.dll functions when implementing the DLL that we’re going to link to the process’s address space. The reason for this is because the listed DLLs will be loaded early in the loading process where other libraries might not be available yet, so calling their functions would result in segmentation fault (most probably), because those functions are not available at that time.
The next picture shows how we have to specify the AppInit_DLLs value in order to inject the C:\drivers\dllinject.dll module into every process that uses the user32.dll library:
Note that before this will work, we have to actually copy the module built by Visual Studio to the specified location, or change the location of the module. It's better to copy the module into a folder that doesn't contain spaces in its path, so keep that in mind when configuring the AppInit_DLLs registry key value.
After we've done this, it's relatively easy to test whether the DLL will be injected into a process's address space. We can do that by downloading the Putty program, which uses the user32.dll library, and loading it into Olly. Then we have to inspect the loaded modules, which can be seen on the picture below:
Notice that the dllinject.dll library is also loaded? Keep in mind that this DLL is only loaded when the executable program also uses the user32.dll, which we can also see on the picture above. We’ve just shown how an attacker could inject an arbitrary DLL into your process address space.
Conclusion
In this article, we've seen a basic introduction to IAT hooking and described the first method that can be used to inject a DLL into a process's address space. The method is somewhat limited, because it only works when the launched program imports functions from the user32.dll library. Nevertheless, almost any program nowadays uses that library, so the method is quite successful. In the next article, we'll take a look at the other two methods that can be used to inject a DLL into a process's address space.
References:
[1] API Hooking with MS Detours, accessible at.
[2] Working with the AppInit_DLLs registry value, accessible at.
[3] DllMain entry point, accessible at.
[4] SetWindowsHookEx function, accessible at.
Source: http://resources.infosecinstitute.com/api-hooking-and-dll-injection-on-windows/
17 August 2012 05:38 [Source: ICIS news]
SINGAPORE (ICIS)--
No details on the current operating rate of the plant were immediately available.
Kaltim Methanol is jointly owned by
Sojitz does the overseas marketing for 70% of the methanol produced by the company, while the remaining 30% is marketed in
Many methanol
Source: http://www.icis.com/Articles/2012/08/17/9587834/indonesias-kaltim-methanol-to-shut-bontang-methanol-unit-in-nov.html
03 February 2012 04:43 [Source: ICIS news]
By Judith Wang
SINGAPORE (ICIS)--Spot adipic acid prices in Asia are expected to rise further in near-term as major suppliers hike offers, citing high feedstock costs, as well as improving demand, industry sources said on Friday.
On 1 February, adipic acid prices were assessed at $1,730-1,850/tonne (€1,315-1,406/tonne) CFR (cost and freight) NE (northeast)
A major regional producer increased its offers for February cargoes by $50/tonne to $1,900/tonne CFR NE Asia.
“Feedstock prices are rising, and we can see the demand is also gradually recovering, therefore, we raised prices after the week-long Chinese holiday to follow the uptrend,” the producer said.
Feedstock benzene prices had risen by $80-85/tonne over the past month to $1,175-1,185 FOB (free on board)
Demand for adipic acid has been recovering after the Lunar New Year holiday in China (22-28 January), given the upcoming peak manufacturing season in its major downstream – the polyurethane (PU) sector – in February and March.
Adipic acid is also used as raw material of polyester polyols, which are used to manufacture shoe soles.
Supply of the material, on the other hand, is limited in
This, coupled with market players' low inventory of the product, bolsters expectations that adipic acid spot prices will increase.
“Many buyers called me up and asked me to give offers, but we could not sell cargoes now as our plant is still undergoing maintenance,” said a source from Shandong Hongye Chemical.
In February, major Chinese producers announced their February nominations at yuan (CNY) 13,500-14,000/tonne ex-tank, up CNY2,000-2,500/tonne from January nominations, sources said.
“Sellers are very bullish about the market after the holiday, although some buyers resisted the price growth,” a China-based trader said.
“But I think some downstream buyers will gradually start to build some stocks ahead of the upcoming of peak season if prices continue to rise,” the trader said.
($1 = CNY6.31 /
Source: http://www.icis.com/Articles/2012/02/03/9529064/asia-adipic-acid-to-gain-on-rising-benzene-cost-better-demand.html
msdn says its as simple as Directory::DirectoryCreate(path). but it says you have to be #using a dll and thats not working for me. does anyone know how to do this?
That looks like managed C++.
In plain old C/C++, you can use CreateDirectory.
hmm. well i also saw on that site that there was a Directory::Exists() function. i tried just getting rid of the Directory:: but it didn't work. how would i check to see if a directory exists?
Use GetFileAttributes(), and see if the return value is FILE_ATTRIBUTE_DIRECTORY.
so just scan through the files and see if they are a file_attribute_directory and if it is check to see if its the directory i want?
Well I have no idea what you are trying to do here, so I don't know if that idea will work for you.
i just want to see if a directory exists in a certain directory. so could i just scan through all the files in that directory and see if any of those files is the directory that i'm looking for?
GetFileAttributes takes a path so you don't need to scan all the files. Just pass it the path to the directory you wish to check.
hmm. i'm not really getting what everyone is trying to say. i've made a function that works, but i don't know if its the best way. it seems kind of slow.
Code:
int directoryExists(string dpath, string dname)
{
    HANDLE hFind;
    WIN32_FIND_DATA findData;
    dpath += '*';    /* assumes dpath already ends with a backslash */
    string fname;
    hFind = FindFirstFile(dpath.c_str(), &findData);
    if (hFind == INVALID_HANDLE_VALUE)    /* nothing to enumerate */
        return (0);
    while (FindNextFile(hFind, &findData))
    {
        fname = findData.cFileName;
        if (fname == dname)
        {
            /* test the directory bit; attributes can combine several flags */
            if (findData.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
            {
                FindClose(hFind);
                return (1);
            }
        }
    }
    FindClose(hFind);
    return (0);
}
Code:
int directoryExists(string dpath, string dname)
{
    char completePath[MAX_PATH];
    /* c_str() is required; passing a std::string to %s is undefined */
    sprintf(completePath, "%s\\%s", dpath.c_str(), dname.c_str());
    DWORD attrs = GetFileAttributes(completePath);
    if (attrs != INVALID_FILE_ATTRIBUTES && (attrs & FILE_ATTRIBUTE_DIRECTORY))
        return 1;
    return 0;
}
works like a charm. thank you very much.
Source: http://cboard.cprogramming.com/windows-programming/75422-creating-directory-printable-thread.html
Java 6 features - Java Interview Questions
Java 6 features What are the advanced features added in Java 6 compared to Java 5
Java Interview Questions - Page 6
Java Interview Questions - Page 6
....
Question: Is sizeof a keyword?
Answer: The sizeof operator is not a keyword.
Question: What are wrapped
java - Servlet Interview Questions
java servlet interview questions Hi friend,
For Servlet interview Questions visit to :
Thanks
Java keyword
Java keyword Is sizeof a keyword
interview - Java Interview Questions
interview kindly guide me some interview questions of Java
java - Java Interview Questions
java where can we exactly use static keyword
interview questions - Java Interview Questions
interview questions for Java Hi Any one can u please post the interview point of questions.Warm Regards, Satish
Core java Interview Questions
Here are the Core Java Interview Questions that can be asked to you by interviewers. These frequently asked Core Java Interview Questions will be beneficial to software developers going for Java developer Interviews.
Q 1. Difference
java - Java Interview Questions
Helpful Java Interview questions Need Helpful Java Interview questions
Java SE 6
of frequently asked questions(faqs) in interview or viva of Java... Java SE 6
... MicroSystems has released the Java SE 6 on Monday December 11.
So go
Java interview questions and answers
Java interview questions and answers
what is garbage collection? What is the process that is responsible for doing that in java?
Ans.Reclaiming.... StringBuffer is a mutable object.
6.
//java program for static member
java - Java Interview Questions
with an access specifier keyword:
They are :
public, private, or protected.
Access....
You can optionally declare a field with a modifier keyword:
e.g final.... You can optionally
declare a field with an access specifier keyword
Java - Java Interview Questions
Java and Collection Interfaces I wanted to know more about Java and Collection Interfaces Java and Collection InterfacesInterface : Types... of an interface use the keyword "Implements ".Interfaces are just
Java interview questions
Java interview questions Plz answer the following questions..., z = 5
c. x = 3, y = 2, z = 6
d. x = 4, y = 2, z = 6
What will be the result... and b = 6?
a. 4
b. 1.66
c. 1
d. None of these
What will be the result
java - Java Interview Questions
keyword
in base class function and passing reference of derived
class to the base
java - Java Interview Questions
information, visit the following links:
java - Java Interview Questions
java what is meant by the following fundamentals as used in java..., member variables and methods. Java provides some access modifiers like: public... their accessibility.
1. public keyword specifies that the public class
Java - Java Interview Questions
Java How to use C++ code in Java Program? Hi friend,
Java does not have a preprocessor. It provides similar functionality (#define..., and class definitions are used in lieu of typedef. The end result is that Java
java - Java Interview Questions
not be directly instantiated. To define the methods of an interface the keyword....
For more information, visit the following link:
Thanks
java - Java Interview Questions
6. Which identifiers are valid?
a) _xpoints
b) r2d2
c) bBb$
d) set-flow
e) thisisCrazy
7. Represent the number 6 as a hexadecimal literal
java - Java Interview Questions
following URL.
Hope... link:
Dynamic
JAVA - Java Interview Questions
.");
for (int i=1;i<6;i++){
int number = input.nextInt
Interview Question - Java Interview Questions
Interview Question I need Interview Questions on Java,J2EE
Pls help me
Collection of Large Number of Java Interview Questions!
Interview Questions - Large Number of Java Interview Questions
Here you....
The Core Java Interview Questions
More interview questions on core Java.. Read
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
Core Java Interview Questions Page2
...
than an interface ?
Ans : A Java interface is an abstract data type like...
we cannot create objects of an interface. Typically, an interface in java
C interview questions
C interview questions Plz answer the following questions...?
a. 5
b. 6
c. 10
d. 11
e. 12
/question number 2/
With every use of a memory..."?
a. ptr = ptr + sizeof(myStruct); [Ans]
b. ++(int*)ptr;
c. ptr = ptr
java, - JSP-Interview Questions
java, hi..
define URI?
wht is difference b/w URL and URI
wht....
Use the "Throw" Keyword.
throw new MyException();
throws
For particular... keyword.
Difference between throw and throws
1)we want to force
Java Interview Questions
Java Interview Questions Hi,
Can anyone tell the urls of Java Interview Questions on roseindia.net?
Thanks
java - Java Interview Questions
the command line as:
java MyProg I like tests
what would be the value of args[ 1... until a value is assigned
3. Which of the following are Java keywords...) cannot be determined; it depends on the machine
6. Which identifiers
Multithreading ? - Java Interview Questions
Multithreading ?
Hi Friends,
I am new to java... implement Synchronization keyword or singlethreadmodel interface to avoid deadlock...://
Thanks
Struts - Java Interview Questions
Struts Interview Questions I need Java Struts Interview Questions and examples
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
Core Java Interview Questions Page1
... be assigned in the constructor. As per the specification
declared in java document... of the arguments passed,
that is sufficient for the java interpreter
plz - Java Interview Questions
is between 1 and 6 (inclusive).
design and implement a class , called die... and 6 (inclusive).use the method random of the class math to generate a random... greater than or equal to 1 and less than or equal to 6 , you can use
hr - Java Interview Questions
is for 6 months only
JSF Interview Questions
JSF Interview Questions
Collection of JSF (Java Server Faces) Interview Questions... Java applications.
JSF Interview
Question Page 6
Interface - Java Interview Questions
Interface Respected sir
why we use Interface in java? because we... the interface's example.
But in java programming language interface is nothing... the keyword "implements" is used
sizeof() ?
sizeof() ? how to implement sizeof() fun. in c ?
Hello Friend,
Please visit the following link:
Thanks
LOOPS !! - Java Interview Questions
(String[] args) {
String st="Hello";
for(int i=1;i<6;i++){
String... StringBuffer(st).reverse().toString();
for(int i=1;i<6;i++){
String sub1
core java - Java Interview Questions
core java What are transient variables in java? Give some examples
Hi friend, transient is a keyword defined in the Java language: a field marked transient is skipped during serialization, as in the example below.
Thanks
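A minimal sketch (class and field names hypothetical); after deserialization, password below would come back as null:

import java.io.Serializable;

public class Account implements Serializable {
    private String username;           // serialized normally
    private transient String password; // skipped during serialization

    public Account(String username, String password) {
        this.username = username;
        this.password = password;
    }
}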
corejava - Java Interview Questions
corejava how to validate the date field in Java Script? ...) {
for (var i = 1; i <= n; i++) {
this[i] = 31
if (i==4 || i==6 || i==9 || i... for more information.
Thanks
Core Java - Java Interview Questions
for the application
For read more information :... in a Java application... that the main method in a Java application is declared public
java - Java Server Faces Questions
java Java Server Faces Quedtions Hi friend,
Thanks
Core Java - Java Interview Questions
Throw Keyword in Core Java Why to use Throw Keyword in Core Java? throw keyword. it is used to user rethrow Exception to caller.its at a time only one exception we can throw.But throws key word is method signature
java program - Java Interview Questions
java program i want information of locks in java ?
1.what... only one thread at a time to execute a region of code.The synchronized keyword.../java/thread/SynchronizedThreads.shtml
Thanks
Core java - Java Interview Questions
Core java Hai this is jagadhish.Iam learning core java.In java1.5 I saw one keyword that is "assert(condition)".I want to know about this.Plz...://
Thanks
java threads - Java Interview Questions
java threads How can you change the priority of a thread?
Thanks Hi,
In Java the JVM defines priorities for Java threads in the range of 1 to 10.
The constants defined for this are shown in the sketch below.
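A minimal sketch using the standard Thread priority constants:

public class PriorityDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> System.out.println("working"));

        // Priorities range from Thread.MIN_PRIORITY (1) to
        // Thread.MAX_PRIORITY (10); the default is Thread.NORM_PRIORITY (5).
        worker.setPriority(Thread.MAX_PRIORITY);
        worker.start();
    }
}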
Interview question - Java Interview Questions
Interview question
Hi Friends,
Give me details abt synchronization in interview point of view.
I mean ow to explain short and neat. Thanks
core java - Java Interview Questions
6)what shadow varialbe..
Thanks a lot give the above
java - Java Interview Questions
java MNC now which modal question are asked in interview
Jsp - Java Interview Questions
Need JSP Interview Questions Hi, I need JSP interview questions.Thanks
INTTERFACE 1 - Java Interview Questions
the keyword "implements" is used. Interfaces are similar to abstract classes
Interview Tips - Java Interview Questions
Interview Tips Hi,
I am looking for a job in java/j2ee.plz give me interview tips. i mean topics which i should be strong and how to prepare. Looking for a job 3.5yrs experience
help me in these - Java Interview Questions
plz answer me it is important to me
and these are the questions :
1)Write... with its characters reversed . ( tool to loot) then test your method in java application ??
6) write a method that takes an integer value as parameter
ARRAY DIAMOND - Java Interview Questions
ARRAY DIAMOND HI I WANT PRINT LIKE THIS I Want Print In Diamond Shape?
1 2 3 4 5 6
5 4 3 2 1
4 3 2 1
3 2 1
2 1
1
I...;=6;k++){
System.out.print(k);
}
System.out.print("\n");
for(i=5;i>=1;i
collection frame - Java Interview Questions
).
(4)Enumerations: The enum keyword creates a typesafe,
ordered list of values
use of synchronization - Java Interview Questions
Synchronization is the keyword used to avoid concurrent access to a critical section of the code, as in the sketch below.
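A minimal sketch: marking the methods synchronized ensures only one thread at a time executes them on a given instance.

public class Counter {
    private int count = 0;

    // Only one thread at a time may run this on a given Counter
    // instance, so no increment is lost.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}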
Displaying calendars - Java Interview Questions
on the console. For example, if the user entered the year 2005, and 6...
1
2 3 4 5 6 7 8
9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26... Tue Wed Thu Fri Sat
1 2 3
4 5 6 7 8 9 10
11 12 13 14 15 16 17
18
java - Servlet Interview Questions
Java Design Patterns Interview Questions and Answers I need to know Java Design Patterns with the help of Interview Questions and Answers Hi malli,GenericServlet is the super class of HttpServlet calssjava soft
Java Programming: Chapter 6 Quiz
Quiz Questions
For Chapter 6
THIS PAGE CONTAINS A SAMPLE quiz on material from
Chapter 6 of this on-line
Java textbook. You should be able to answer these questions after
studying that chapter. Sample answers to all
java - Java Interview Questions
Java interview questions and answers for freshers Please provide me the link of Java interview questions and answers for freshers Hi friend,class Point{ int x, y; Point(){ System.out.println("default"
questions
*
* * *
* * * * *
* * *
*
Q. 6... data members
b) static methods
In our java programming language we have... methods.
Methods and Variables in Java
In our java programming language we have 2
Javascript Function - Java Interview Questions
==4 || i==6 || i==9 || i==11) {this[i] = 30}
if (i==2) {this[i] = 29
Java - Java Interview Questions
://
Thank you for posting questions.
Rose India Team Interview,Corejava questions,Corejava Interview Questions,Corejava
Core Java Interview Questions Page3
... ?
Ans :
Generally Java sandbox does not allow to obtain this reference so... and retrieve information. For example, the
term data store is used in Enterprise Java
java multiple inheritence - Java Interview Questions
inheritance in java.
In C++ they handled multiple inheritance using virtual keyword...java multiple inheritence what are the drawbacks of multiple inheritence due to which it is not used in JAVA?Thanks in Advance. Hi friend
Java - Java Interview Questions
Java Hi
How to write java code inorder to get the comments written in a java
program?
Please let me know..this was asked in my interview... in details.
For more information on Java visit to :
Core Java Interview questions and answers
Core Java Interview questions and answers
....
So, we have tried to create most frequently asked Core Java Interview Questions
and answers in one place.
These Core Java Interview Questions are supported
Interview Question - Java Interview Questions
Interview Question 1)If we give S.O.P. statement after finally block shall it prints or not?
2)Write a program in java to read a file & write in to another file?
3)Write a program taking two arrays and compare those two
Core Java Interview Questions!
Core Java Interview Questions
...();
}
Question: How to define an Interface?
Answer: In Java Interface... functionality
to the java applications.
Java applications can now use
core java - Java Beginners
Core Java interview Help Core Java interview questions with answers Hi friend,Read for more information.
PHP Array Sizeof
PHP Array sizeof() function
To count the elements of an array PHP provides sizeof() function. It counts
all elements of an array or properties of an object.
The sizeof() & count() functions are identical
java - Java Interview Questions
Java Programming What is Java Programming language
Arraylist java code - Java Interview Questions
Arraylist java code Create an employee class with an employee id's... an employee id is given, display his name & address? Please provide the java code Detaily... DisplayArrayListData(6, "F", "Delhi");
displaydata.setDetails(data1
java - Java Interview Questions
java how to learn java
Java - Java Interview Questions
Java definition for "USER DEFINE PACKAGE FOR JAVA
java - Java Interview Questions
Java a complete reference Need a Complete reference on Java.
|
http://roseindia.net/tutorialhelp/comment/97425
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
New to the AIR 1.5.2 release (and the corresponding Flash Player, 10.0.32) is the LocalConnection.isPerUser property. Note that you’ll need to update your application’s namespace to …/1.5.2 to access this property. Here’s why you should do that.
LocalConnection provides local (i.e., on the same machine) communication between SWFs and AIR applications. It operates via a shared memory segment that's visible to all processes that use the mechanism. When LocalConnection was first implemented on Mac OS, it used a memory segment that is visible to all processes running on the machine. This was reasonable at the time, but problematic now that Mac OS is a multi-user operating system. The unfortunate result is that LocalConnection can be used to communicate across user accounts on Mac OS.
To address this a new, per-user implementation has been implemented on Mac OS. You should always use this mode; it’s safer. To do that, set LocalConnection.isPerUser = true on every LocalConnection object you create.
Unfortunately, AIR can’t do this for you transparently. The problem is that, if it did, you could get into a situation where version skew breaks use of LocalConnection. For example, this can occur if an application is running on AIR 1.5.2 and attempts to communicate with a SWF in the browser running on Flash Player 9. Until both sides are updated, there’s no way to use the isPerUser = true option. By adding an API and making this an option, we’ve given you a chance to migrate to this option without breaking anything along the way.
This issue is specific to Mac OS. Windows and Linux use a user-scoped LocalConnection in all cases, regardless of the isPerUser setting. You can safely set LocalConnection.isPerUser = true everywhere and be confident that the Windows and Linux behavior won’t change.
Final note: The default setting of this property is likely to change to true in a future release, in order to be consistent with our general philosophy of defaulting to safe behavior.
Source: http://blogs.adobe.com/simplicity/2009/08/localconnectionisperuser_in_ai.html
The service document
The service document looks like this (attribute values that were lost in this excerpt are shown as "..."):

<?xml version="1.0" encoding="UTF-8"?>
<app:service xmlns:app="http://www.w3.org/2007/app"
             xmlns:atom="http://www.w3.org/2005/Atom">
  <app:workspace>
    <atom:title>AdminBlog</atom:title>
    <app:collection href="...">
      <atom:title>Weblog Entries</atom:title>
      <app:categories fixed="...">
        <atom:category term="..."/>
        <atom:category term="..."/>
      </app:categories>
      <app:accept>entry</app:accept>
    </app:collection>
    <app:collection href="...">
      <atom:title>Media Files</atom:title>
      <app:accept>image/*</app:accept>
    </app:collection>
  </app:workspace>
  <app:workspace>
    ...
  </app:workspace>
</app:service>
From this document, you can create a servlet that includes a sidebar that shows all of the services and feeds available, along with links to their HTML versions.
The basic servlet
The first step is to create the basic servlet, including space for the sidebar (see Listing 2).
Listing 2. The basic servlet
package com.backstop.atom;

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.File;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.w3c.dom.Node;

public class SideBarServlet extends javax.servlet.http.HttpServlet
                            implements javax.servlet.Servlet {

   // Namespace URIs (left blank in this excerpt; the standard APP and
   // Atom 1.0 values are assumed here):
   String APPNS = "http://www.w3.org/2007/app";
   String AtomNS = "http://www.w3.org/2005/Atom";
   String AtomNS2 = "http://www.w3.org/2005/Atom";

   protected void doGet(HttpServletRequest request,
                        HttpServletResponse response)
         throws ServletException, IOException {

      response.getWriter().print("<div style='width:25%; float: right;'>");

      // The URL of the service document was elided in this excerpt.
      Document doc = URLContents.getContentsAsXMLDoc("");

      Element root = (Element)doc.getDocumentElement();
      NodeList workspaces = root.getElementsByTagNameNS(APPNS, "workspace");
      for (int i = 0; i < workspaces.getLength(); i++){
         Element thisWorkspace = (Element)workspaces.item(i);
         response.getWriter().print(
            "<div style='border: 1px solid green; padding: 5px;'>");
         String wsTitle = thisWorkspace.getElementsByTagNameNS(AtomNS,
            "title").item(0).getTextContent();
         response.getWriter().print("<h3 class='wstitle'>"+wsTitle+"</h3>");
         response.getWriter().print("</div>");
      }
      response.getWriter().print("</div>");
   }
}
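URLContents is the author's helper class and is not shown in this excerpt; a minimal sketch of what it might look like (the name and signature match the calls above, the implementation is assumed):

import java.net.URL;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class URLContents {
   // Fetches the resource at the given URL and parses it into a DOM tree.
   public static Document getContentsAsXMLDoc(String url) {
      try {
         DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
         factory.setNamespaceAware(true); // required for getElementsByTagNameNS
         DocumentBuilder builder = factory.newDocumentBuilder();
         return builder.parse(new URL(url).openStream());
      } catch (Exception e) {
         throw new RuntimeException("Could not load " + url, e);
      }
   }
}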
Add the collections
Adding the collections to the page involves much the same process. Once you have the individual workspace, you can retrieve each of its collections (see Listing 3).
Listing 3. Looping through the collections
...
         response.getWriter().print("<h3 class='wstitle'>"+wsTitle+"</h3>");
         NodeList collections =
            thisWorkspace.getElementsByTagNameNS(APPNS, "collection");
         for (int j = 0; j < collections.getLength(); j++){
            Element thisCollection = (Element)collections.item(j);
            String colTitle = thisCollection.getElementsByTagNameNS(AtomNS,
               "title").item(0).getTextContent();
            String feedURL = thisCollection.getAttribute("href");
            response.getWriter().print("<h4 class='collection'>"+colTitle);
            response.getWriter().print(" -- <a href='"+feedURL+
               "'><img border='0' src='images/feed-icon.gif' /></a>");
            response.getWriter().print("</h4>");
         }
         response.getWriter().print("</div>");
      }
      response.getWriter().print("</div>");
   }
}
Add categories
You can also add an indication of the categories of content covered by each collection (see Listing 4).
Listing 4. Adding categories
...
            response.getWriter().print(" -- <a href='"+feedURL+
               "'><img border='0' src='images/feed-icon.gif' /></a>");
            response.getWriter().print("</h4>");
            NodeList categories =
               thisCollection.getElementsByTagNameNS(AtomNS, "category");
            if (categories.getLength() > 0){
               response.getWriter().print("Categories in this collection: ");
            }
            for (int k = 0; k < categories.getLength(); k++){
               Element thisCategory = (Element)categories.item(k);
               String catName = thisCategory.getAttributeNS(AtomNS, "label");
               if (k > 0){
                  response.getWriter().print(", ");
               }
               response.getWriter().print(catName);
            }
         }
         response.getWriter().print("</div>");
      }
      response.getWriter().print("</div>");
   }
}
The result looks like Figure 3.
Figure 3. Adding categories
Link to the HTML version
The final step is to include a link to the HTML representation of the information. Unfortunately, that information does not actually exist in the service document. To retrieve it, you will have to look at the feed itself (see Listing 5).
Listing 5. Retrieving the feed information
...
      for (int j = 0; j < collections.getLength(); j++){
         Element thisCollection = (Element)collections.item(j);
         String colTitle = thisCollection.getElementsByTagNameNS(AtomNS,
            "title").item(0).getTextContent();
         String feedURL = thisCollection.getAttribute("href");
         String webURL = getWebURL(feedURL);
         response.getWriter().print("<h4 class='collection'><a href='"+
            webURL+"'>"+colTitle+"</a>");
         ...
      }
      response.getWriter().print("</div>");
   }
   response.getWriter().print("</div>");
}

private String getWebURL(String feedURL){
   String webURL = "";
   Document doc = URLContents.getContentsAsXMLDoc(feedURL);
   Element root = (Element)doc.getDocumentElement();
   NodeList links = root.getElementsByTagNameNS(AtomNS2, "link");
   for (int i = 0; i < links.getLength(); i++){
      Element thisLink = (Element)links.item(i);
      if (thisLink.getAttribute("rel").equals("alternate") &&
          thisLink.getParentNode().equals(root)){
         webURL = thisLink.getAttribute("href");
      }
   }
   return webURL;
}
Summary
The service document is more than just an opportunity for introspection; with careful planning, you can use it to provide actual content and links to content.
Source: http://www.ibm.com/developerworks/xml/library/x-atomsidebar/index.html
Documentation Content Bugs
nick_p
11 Oct 2011, 11:21 AM
Please report any documentation content bugs for Sencha Touch 2 here.
kinetifex
11 Oct 2011, 1:58 PM
All the links to examples 404 on the data package guide.
rakagod
11 Oct 2011, 11:39 PM
Congratulations.
I am very impressed by the new documentation tools.
Since I'm new, I was running through the "Getting started example" and pressed on the link for " tabBarPosition () " and it correctly took me to its definition.
It says that it's a string and it defaults to null; however, I cannot find the list of possible values for it.
I know I can use "top" and "bottom" but I should be able to find the allowed values of any such parameter.
Am I missing something obvious?
rakagod
12 Oct 2011, 12:35 AM
When I search for xType in the documentation I get
- getXTypes
- isXType
but nothing defining xType.
I did notice that next to some headings it gives the xType.
ex. Ext.Component xtype: component
but no clear definition.
edspencer
12 Oct 2011, 12:41 AM
Thanks, we'll fix those up. As you guessed, the valid positions are 'top' and 'bottom'. As for xtype, it's a convenient way to create components. Both of these are equivalent:
Ext.create('Ext.Panel', {
items: [
Ext.create('Ext.Panel', {
html: 'inner panel'
})
]
});
Ext.create('Ext.Panel', {
items: [
{
xtype: 'panel',
html: 'inner panel'
}
]
});
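And to illustrate the tabBarPosition answer, a minimal sketch (assuming a plain Ext.tab.Panel; per the reply above, only 'top' and 'bottom' are valid values):
Ext.create('Ext.tab.Panel', {
    fullscreen: true,
    tabBarPosition: 'bottom', // 'top' or 'bottom'
    items: [
        { title: 'Home', html: 'home screen' },
        { title: 'Contact', html: 'contact screen' }
    ]
});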
rakagod
12 Oct 2011, 11:02 AM
The iPad2 has a problem properly displaying the new documentation.
The text in the right hand panel goes right up to the right edge of the display and drops letters.
It does this in both the portrait and landscape mode.
rakagod
12 Oct 2011, 11:52 AM
The code sample in "Getting Started with Sencha Touch 2" is not display compatible with
iPad2 Landscape mode,
iPhone4 landscape mode,
iPhone4 portrait mode
or desktop browser (i.e. landscape).
Specifically the 6th coding example has:
'<img width="65%" src="" />',
It is the width="65%" parameter that causes the problem.
It shows up immediately in my Chrome browser as too big an image.
Testing on the iPad2 shows the same problem in landscape mode.
Testing on my iPhone4 shows the same problem in landscape mode.
Testing on my iPhone4 in portrait mode showed it losing the bottom of the panel.
It appears to only work with the iPad portrait mode.
TommyMaintz
12 Oct 2011, 11:57 AM
The current API docs are targeted to desktop browsers as that is the usual development environment and thus the most often needed. We are looking at creating a mobile friendly version of the API docs though.
rakagod
12 Oct 2011, 12:14 PM
I really like the Preview button in the documentation.
I am still on the getting started sample code and there is no indication that the display in a real iPhone will NOT be the same as in the Preview, until I do an "Add to home screen" for the web page.
Until then a real iPhone will be showing the Safari navigation toolbar on the bottom.
(I don't know what happens on an Android phone)
For us beginners, I would suggest a note to expect the difference and maybe a link to a deployment article explaining how to solve the problem.
Even though I am still on the getting started sample code, I am excited to see it on my iPhone and to see it exactly the same. Hence the need for the heads up for a beginner.
rakagod
12 Oct 2011, 12:57 PM
How to use classes in Sencha Touch 2
Dependencies and Dynamic Loading
"Most of the time, classes depend on another classes."
- " ... classes depend on another class. ..."
- or " ... classes depend on other classes. ..."
rakagod
12 Oct 2011, 1:04 PM
How to use classes in Sencha Touch 2
Dependencies and Dynamic Loading
"... we depend on Animal being present to be able to define Animal. ..."
Should be:
"... we depend on Animal being present to be able to define Human. ..."
rakagod
12 Oct 2011, 1:28 PM
How to use classes in Sencha Touch 2
Dependencies and Dynamic Loading
" ... See part 2 of the Getting Started guide for details on how to use the JSBuilder..."
Don't know where to find this.
A link would be more useful.
rakagod
12 Oct 2011, 2:07 PM
How to use classes in Sencha Touch 2
Naming Conventions
>>The top-level namespaces and the actual class names should be in CamelCased, everything else should be all lower-cased. For example:
MyCompany.form.action.AutoLoad<<
For Camel Case, Wiki states that "... first letter either upper or lower case...".
From the examples I can figure out that:
- a class name uses upper case for the first letter;
- a variable name uses lower case for the first letter.
That information should be stated in the text.
Also the beginning text above; remove either the "in" or the "d".
rakagod
12 Oct 2011, 3:00 PM
How to use classes in Sencha Touch 2
1.2) The New Way
"... get in the habit of always using Ext.create since it allows you to take advantage of dynamic loading. For more info on dynamic loading see the Getting Started guide ()..."
There is no info on dynamic loading on the linked page "Getting Started guide" but there is some in this article.
rakagod
13 Oct 2011, 7:46 AM
In the Panel component documentation I noticed that there is no description for the elements:
left
right
flex
rakagod
13 Oct 2011, 7:55 AM
The documentation below talks about creating a Text field but a panel creation is shown.
Using Components in Sencha Touch 2
Instantiating Components
"Components are created the same way as all other classes in Sencha Touch - using Ext.create. Here's how we can create a Text field:"
var panel = Ext.create('Ext.Panel', { html: 'This is my panel'});
NickT
13 Oct 2011, 7:55 AM
there is a reference to 'fullscren' instead of 'fullscreen'
NickT
13 Oct 2011, 8:03 AM
There is no documentation on
Ext.Viewport
rakagod
13 Oct 2011, 8:15 AM
Using Components in Sencha Touch 2
Showing and Hiding Components
It explains showing and hiding the main panel.
At this point, I have 2 questions:
1- how do you do it for the child panel?
2- will hiding a child panel affect the layout i.e flex ?
rakagod
13 Oct 2011, 8:46 AM
I was looking up the "Listeners" config element and was not able to find a list of events in the documentation.
I guess that there are events generated by js, DOM and Sencha Touch.
If you don't produce a list, is it possible to provide links to those who do?
rakagod
13 Oct 2011, 9:04 AM
Please ignore the previous request about events.
I found this:
Each Component has a full list of the events they fire inside their class docs.
rakagod
13 Oct 2011, 10:22 AM
Ext.data.reader.Reader
I cannot find a property of "type" as used in the sample code of that page.
reader: { type: 'json', root: 'users' }
I did find the statement "a valid Reader type name (e.g. 'json', 'xml')" on the Ext.data.proxy.Proxy page that suggests that "types" are the "sub classes" given on the Ext.data.reader.Reader page.
Is this correct for "Reader" class?
and is it the same for any other classes with "sub classes"?
rakagod
13 Oct 2011, 10:29 AM
The link in the text: "For a live demo please see the Simple Store () example." gives a 404 error.
Found in:
Using the data package in Sencha Touch 2
Models and Stores
The link text "Inline Data example ()" also gives a 404 error.
Found in:
Using the data package in Sencha Touch 2
Inline data
Again for:
see the Sorting Grouping Filtering Store () example.
Example of a Model that uses a Proxy directly
() ... you can check the rest on the page.
rakagod
13 Oct 2011, 11:03 AM
Is it possible that the code should be using "posts" rather than "Post" here?
hasMany: 'Post' // shorthand for { model: 'Post', name: 'posts' }:
Should it not read as ?:
hasMany: 'posts';
Found in:
Using the data package in Sencha Touch 2
Associations
Sakes
13 Oct 2011, 12:56 PM
The first example is broken. When you type bob.speak() you expect the result to alert 'Bob'. Instead it alerts null. It appears that the example's constructor never calls this.initConfig();
robcolburn
14 Oct 2011, 1:43 PM
"This is an optional configuration. You can specify a specific Certificate Alias to use for signing your application."
It's unclear what to specify for the Certificate Alias. A good solution would be to show an example, and provide a separate page with screen-shots for getting/setting the Certificate Alias.
tobiu
15 Oct 2011, 9:04 AM
Ext.define('MyApp.controller.Users', {
extend: 'Ext.app.Controller',
refs: [
{
ref: 'list',
selector: 'grid'
}
],
init: function() {
this.control({
'button': {
click: this.refreshGrid,
tap: this.refreshGrid
}
});
},
refreshGrid: function() {
this.getList().store.load();
}
});
iaBrad
17 Oct 2011, 3:16 PM
I love the improvements to the docs, but I also like to read them on my iPad. It makes a handy second (or third) display while developing. If there were a PDF option (which would also allow offline viewing), that would be handy.
slchorne
19 Oct 2011, 7:26 AM
The 'getting started guide' uses an example where the panel config has the 'fullscreen' item set.
But the docs for 'ext.panel' say that this is deprecated and to use Ext.Viewport instead.
slchorne
19 Oct 2011, 8:02 AM
The kitchen sink source demos don't show the full app. They don't have the 'Ext.application' declaration so it is hard to see how the individual components are registered with the app
slchorne
19 Oct 2011, 2:33 PM
The layout controls 'dockedItems' and 'dock' have been replaced with a single config item 'docked'. But none of the examples are using this config. They are all still using 'dock'.
E.g. : ()
edspencer
19 Oct 2011, 8:52 PM
@slchorne it's a mistake, we're not changing it to 'docked'. We'll revert the docs back to 'dock' shortly. Sorry for the confusion, our policy is not to make API changes unless they're absolutely necessary (so this obviously doesn't qualify)
Marc-QNX
28 Oct 2011, 7:16 AM
If you leave the DataView () page open for more than a few minutes, it crashes Chrome on OSX. I seem to be consistently on this page whenever it crashes.
Tried clearing cache, no change.
tomlobato
28 Oct 2011, 3:01 PM
To get the example working (6/nov/2011)...
Ext.define('Animal',{
config:{ name:null},
speak:function(){ alert('grunt');}
});
just after config: {name: null}, add:
constructor: function(config) {
this.initConfig(config);
return this;
},
so, Bob learns how to speak.
The first example is broken. When you type bob.speak() you expect the result to alert 'Bob'. Instead it alerts null. It appears that the example's constructor never calls this.initConfig();
xanaguy
30 Oct 2011, 8:51 PM
This is at least a doc bug and perhaps a code bug.
The "Using Lists" guide is clearly unfinished. It ends almost mid-sentence. The last code snippet doesn't compile:
alert('tapped on '+)
But the part I'm most interested in is the event which I should be listening on to know when a list item is selected. This example says the `select` event should work. But the reference () doc says `activeitemchange`.
In my code, neither event is firing. :( Which is the correct event?
vudup
10 Nov 2011, 8:43 PM
The docs for Ext.dataview.List are missing at least one event. itemtap is a really useful event which is not listed. What other events are missing?
pdm
15 Nov 2011, 6:28 AM
getting-started.html is shipped with the Sencha Touch 2 preview download but has old content. The new Getting Started content () is different (and actually works)
edspencer
16 Nov 2011, 12:29 PM
getting-started.html is shipped with the Sencha Touch 2 preview download but has old content. The new Getting Started content () is different (and actually works)
Thanks, we fixed this in last week's PR2 release
Surykat
18 Nov 2011, 5:06 AM
In Ext.field.Number, the CHANGE event still lists the parameters oldValue and newValue, which do not work in the new release.
I debugged my application and found that, among the event parameters, the second parameter (let's call it 'val') has properties equal to the old parameters:
oldValue = val._startValue;
newVal = val._value;
Is this the standard now?
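In code, the workaround described above would presumably look like this sketch (it relies on observed private fields, not an official API; numberField stands for an Ext.field.Number instance):
numberField.on('change', function(field, val) {
    var oldValue = val._startValue; // what used to arrive as oldValue
    var newValue = val._value;      // what used to arrive as newValue
    console.log(oldValue, '->', newValue);
});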
AussieInSeattle
1 Dec 2011, 2:48 AM
ComponentView documentation seems to be a copy/paste from DataView and incorrect?
MattUCG
2 Dec 2011, 6:33 AM
Note the low temperatures:
[screenshot attachment 29760]
MattUCG
2 Dec 2011, 6:41 AM
From the initial screen, clicking the 'Kiva' button temporarily shows a loading div, but the z-index seems to be off or something:
[screenshot attachment 29761]
MattUCG
2 Dec 2011, 6:46 AM
The last day's weather has a strange vertical offset:
[screenshot attachment 29762]
rdougan
3 Dec 2011, 3:28 PM
Thanks for all the reports. I've added tickets for all of them, and fixed any tiny ones.
bweiler
5 Dec 2011, 11:27 AM
Ext.dataview.List
The Ext.dataview.List code example contains the following deprecated class.
Ext.regModel('Contact', {
fields: ['firstName', 'lastName']
});
The following model definition should be used instead.
Ext.define('Contact', {
extend: 'Ext.data.Model',
fields: ['firstName', 'lastName']
});
bweiler
6 Dec 2011, 4:04 PM
Ext.app.Controller API description contains the following text:
"For an example of real-world usage of Controllers see the Feed Viewer example in the examples/app/feed-viewer folder in the SDK download."
The example doesn't exist.
riahut.com
9 Dec 2011, 10:01 AM
Sencha 2 Documentation page doesn't load at all, stuck on gears.
edspencer
9 Dec 2011, 12:58 PM
Sencha 2 Documentation page doesn't load at all, stuck on gears.
Works for me - can you try again? Which browser are you using?
anj
13 Dec 2011, 3:54 AM
new Ext.field.Spinner({
minValue: 0,
maxValue: 100,
incrementValue: 2,
cycle: true
});
should be
new Ext.field.Spinner({
minValue: 0,
maxValue: 100,
increment: 2,
cycle: true
});
bweiler
13 Dec 2011, 10:12 AM
PR3 changed the position of the tab.Panel icons to be left aligned on the tabPanel. The documentation recommends using tabBarPosition, but there doesn't appear to be an additional property to set the icon positioning. Icon positioning is very common and the correct 2.x approach should be documented in the Ext.tab.Panel documentation.
From a previous Sencha forum manager post, the correct approach is the following (1.x approach):
This works:
tabBar: {
docked: 'bottom',
layout: {
pack : 'center'
}
},
However, the 2.0 docs lead you to believe that this is the correct approach:
// No guidance on how to center tab icons in tab panel. Only tabBar positioning.
tabBarPosition: 'bottom',
rancid
27 Dec 2011, 6:14 AM
I've found in this post () that initConfig doesn't work in constructors, and then I found Mitchell's code that works: "this.callParent([config]);" replaces "this.initConfig(config);"
I think it's important to modify the documentation in "How to use classes in Sencha Touch 2" in "The Class System" guide.
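Putting that finding into a minimal sketch of the guide's Animal example (a sketch of the workaround described above, not the official corrected guide text):
Ext.define('Animal', {
    config: { name: null },
    constructor: function(config) {
        this.callParent([config]); // per rancid's finding, this replaces this.initConfig(config)
    },
    speak: function() {
        alert(this.getName()); // getName() is auto-generated from the config
    }
});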
jk171505
7 Jan 2012, 12:34 PM
I'm curious -- has anyone tested these examples?
I'm supposed to be able to learn something from them...
How can people learn anything from this if the examples are broken?
jk171505
7 Jan 2012, 1:38 PM
The example describing class system (below) should contain the 'if' statement, otherwise it just doesn't work:
Ext.define('Human', {
extend: 'Animal',
applyName: function(newName, oldName) {
return confirm('Are you sure you want to change name to '+ newName +'?');
}
});
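Presumably the missing 'if' amounts to something like this (a sketch of the fix being asked for, returning a name rather than the boolean from confirm()):
Ext.define('Human', {
    extend: 'Animal',
    applyName: function(newName, oldName) {
        // keep the old name when the user cancels the confirm dialog
        return confirm('Are you sure you want to change name to ' + newName + '?')
            ? newName : oldName;
    }
});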
hengly
8 Jan 2012, 7:34 PM
Hi guys,
The test case runs on Chrome and Safari; however, the pinned-header UI looks confused.
Does the grouped list view still have bugs?
For the ST 1.0 version, the demo on () works fine.
Thank you very much!
Ext.define('Contact', {
extend: 'Ext.data.Model',
fields: ['firstName', 'lastName']
});
Ext.application({
name: "ListDemo",
launch: function(){
var store = Ext.create('Ext.data.JsonStore', {
model: 'Contact',
sorters: 'lastName',
getGroupString: /* elided in the original post */
data: [
{firstName: 'Nicolas', lastName: 'Belmonte'},
{firstName: 'Jay', lastName: 'Robinson'},
{firstName: 'Nigel', lastName: 'White'},
{firstName: 'Don', lastName: 'Griffin'},
{firstName: 'Nico', lastName: 'Ferrero'},
{firstName: 'Nicolas', lastName: 'Belmonte'},
{firstName: 'Jason', lastName: 'Johnston'}
]
});
var listPanel = Ext.create('Ext.List', {
store: store,
allowDeselect: false,
itemTpl: '<div class="contact">{firstName} {lastName}</div>',
onItemDisclosure: function(record, btn, index) {
console.log('in');
},
grouped: true
//indexBar: true
});
Ext.Viewport.add(listPanel);
}
});
hengly
9 Jan 2012, 12:18 AM
Oh, the problem is caused by using resources/css-debug/sencha-touch.css instead of resources/css/sencha-touch.css. Not a bug.
Sowri
14 Jan 2012, 3:22 PM
Hi ,
I was trying to get to the offsetBoundary of scroller and ended up here at
Thanks,
renku
19 Jan 2012, 7:28 AM
Thanks to all for reporting. Most of the issues have been fixed, others have been filed as bugs.
benben
25 Jan 2012, 8:38 AM
The namespace config is not mentioned in either the Using Device Profiles guide or the Ext.app.Profile entry, and is the source of a bug (path is set to (null) if you don't specify it).
bweiler
25 Jan 2012, 10:36 AM
Ext.XTemplate Documentation Bug PRx:
The following example in the XTemplate documentation contains an error.
benben
25 Jan 2012, 2:31 PM
The Ext.data.association.HasMany code sample uses a "model" config property, but the class documentation only mentions an "associatedModel". Not sure which one is correct; I'm still struggling with that feature.
benben
26 Jan 2012, 2:47 PM
Documentation for updateRecord in Ext.form.Panel has disappeared, but it is still in the code and seems to be working fine
bweiler
27 Jan 2012, 7:19 PM
The "Intro to Applications with Sencha Touch 2" () example recommends putting the views array in the application definition and not the controller definition and the twitter example puts the views and stores arrays in the controller definition.
I tried putting the views array in the application definition and Sencha Touch threw errors and the get[ViewName]View() methods no longer worked, so it looks like the views and stores belong in the controller.
The stores and views arrays should also be mentioned in the Controllers section () and the Ext.app.Controller documentation.
Intro to Applications with Sencha Touch 2:
Ext.application({
name: 'MyApp',
models: ['User', 'Product', 'Order'],
views: ['OrderList', 'OrderDetail', 'Main'],
controllers: ['Orders'],
launch: function() {
Ext.create('MyApp.view.Main');
}
});
Twitter Example:
Ext.define('Twitter.controller.Search', {
extend: 'Ext.app.Controller',
config: {
profile: Ext.os.deviceType.toLowerCase()
},
views : [
'Main',
'SearchBar',
'SearchList',
'TweetList'
],
stores: ['Searches'],
...
edspencer
28 Jan 2012, 12:45 PM
Those generated Controller functions are no longer present in Sencha Touch 2, by design. We're going to upgrade the Twitter example shortly, we'd initially left it like it was to test out some compatibility issues - instead of shipping like that though we've forked the old style into a private folder so we can continue to test while demonstrating best practice in the actual examples.
The 1.x-style store, model and view generated functions are no longer present in 2.x, which is why they don't appear in the guides or API docs. I'll update the migration guide for B1 based on this and other feedback to make it clearer what has changed and why.
Anticom
29 Jan 2012, 1:13 PM
I don't know whether this was reported already, but on bar charts, when you compare two values with each other and change the view mode (stacked on/off), the arrow indicating which values you compared neither disappears nor updates its position.
rhomb
30 Jan 2012, 6:06 AM
In the API page the example states that this works:
Ext.Msg.prompt('Name', 'Please enter your name:', function(text) { // process text value and close...
});
But it has to be:
Ext.Msg.prompt('Name', 'Please enter your name:', function(button, text) { // process text value and close...
});
First argument is the button value, second the entered Text.
paul_todd
30 Jan 2012, 4:24 PM
This method is undocumented, is this intentional or is it not yet documented?
If not what is the approved way to move values from a form to the record instance?
bweiler
30 Jan 2012, 4:53 PM
I may be the only one struggling with this, but I'm having a hard time understanding how to reference the Main view instance created in the following example from within the controller.
App definition:
Ext.application({
name: 'MyApp',
models: ['User', 'Product', 'Order'],
views: ['OrderList', 'OrderDetail', 'Main'],
controllers: ['Orders'],
launch: function() {
Ext.create('MyApp.view.Main');
}
});
The following controller code returns the class, but not the instance:
var main = MyApp.view.Main;
The following controller code works, but ignores the Ext.create('MyApp.view.Main') in the app definition:
refs: {
main: {
selector: 'mainview',
xtype: 'mainview',
autoCreate: true
}
}
...
var main = this.getMain();
Will you please explain how the Main instance defined in the app is accessed from within the controller and include some additional text on this in the Intro to Applications document.
Thanks
paul_todd
31 Jan 2012, 5:42 AM
Ext.data.Model.getData is not documented
paul_todd
31 Jan 2012, 8:53 AM
in
var form = Ext.create('Ext.form.Panel', {
    listeners: {
        '> field': {
            change: function(field, newValue, oldValue) {
                ed.set(field.getName(), newValue);
            }
        }
    },
    items: // as before
});
I was not able to get multiple listeners to work with the field example as above, but if I used 'textfield' it did work.
I also read the example above as saying that changing a character of text would fire the event, whereas the event actually fires when the user shifts focus away from the control. Capturing individual text changes requires handling the 'keyup' event.
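In code, that per-keystroke variant would presumably look something like the sketch below ('ed' is the record from the quoted example; the field name is hypothetical):
{
    xtype: 'textfield',
    name: 'title',
    listeners: {
        keyup: function(field, e) {
            // fires on every keystroke, unlike change (which waits for focus loss)
            ed.set(field.getName(), field.getValue());
        }
    }
}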
edspencer
31 Jan 2012, 11:23 AM
I've added the getData docs back into Model
dugbot
1 Feb 2012, 8:10 AM
The DataView documentation does not mention the method setItemTpl, which is needed for dynamically updating an itemTpl config value.
Jamie Avins
1 Feb 2012, 9:15 AM
All config options have setter methods, but we'll try to make it clearer.
geekflyer
1 Feb 2012, 3:37 PM
In the xtype: selectfield, the function getRecord() is not documented.
Instead, a function called record() is documented, however without any description. The truth is that the function record() does not exist.
rdougan
1 Feb 2012, 3:45 PM
@geekflyer
Thanks, I'll add docs for getRecord.
As for a method called record(), there is no such thing. The docs note it as a private configuration option, not a method. It is used internally in that class and should never be used.
DeShadow
3 Feb 2012, 12:53 AM
In the documentation, Ext.data.reader.Xml has a property "root", but it's now deprecated, replaced by "rootProperty". The documentation doesn't say anything about this property.
edspencer
3 Feb 2012, 5:33 AM
In the documentation, Ext.data.reader.Xml has a property "root", but it's now deprecated, replaced by "rootProperty". The documentation doesn't say anything about this property.
Thanks, I've fixed this locally so it'll show up properly in the next release
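So a current XML reader config would presumably look like this (a sketch based on the rename described above):
reader: {
    type: 'xml',
    rootProperty: 'users' // the 2.x name; 'root' was the old 1.x config
}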
tinyfactory
3 Feb 2012, 4:55 PM
//lets assume container is a container you have
//created which is scrollable
container.getScrollable.getScroller().setFps(10);
getScrollable should be getScrollable()
geekflyer
3 Feb 2012, 6:09 PM
I'm not sure whether this is really a content bug. Maybe I don't understand it completely.
Look at this one:
In the examples (also in the model examples) there is often mention of a config for an association called 'model'.
However, in the code (also in its subclasses) and in the config docs I only found the configs associatedModel and ownerModel.
cyberwombat
4 Feb 2012, 8:48 AM
This post has an example about loading a mask bound to a store but the docs here state that one can no longer bind a store to this class as it's deprecated
new Ext.LoadMask(Ext.getBody(), {
store: 'businesses',
msg: ''
});
qooleot
5 Feb 2012, 8:03 AM
The docs for Ext.data.Store show a model for the example store defined as:
// Set up a model to use in our Store
Ext.define('User', {
    extend: 'Ext.data.Model',
    fields: [
        {name: 'firstName', type: 'string'},
        {name: 'lastName', type: 'string'},
        {name: 'age', type: 'int'},
        {name: 'eyeColor', type: 'string'}
    ]
});
but it actually needs to be like the store in the forms example:
in this format:
Ext.define('Ranks', {
extend: 'Ext.data.Model',
config: {
fields: [
{name: 'rank', type: 'string'},
{name: 'title', type: 'string'}
]
}
});
The difference is the 'config' object encapsulates the fields. If you don't do that, then a selectfield does not display the valueField and displayField fields when specified. I'm thinking the docs need an update, but maybe this is actually just a bug? If so, let me know and I can throw this over in the bugs sub-forum.
Thanks!
edspencer
5 Feb 2012, 9:00 AM
Bug in the docs, I'll fix that right now. Thanks :)
SunboX
5 Feb 2012
Seems the docs are wrong ... :(
edspencer
5 Feb 2012, 10:43 AM
Yes they are, I've fixed that one too. Will be correct next time we refresh the live docs
nigelpegg
5 Feb 2012, 5:03 PM
In most event listings (for example, Carousel's activeitemchange), the docs describe the event as though they're only fired when triggered programmatically :
"Fires when the activeItem configuration is changed by setActiveItem".
99% of the time, you'd be listening for this event when *the user* interacts with the carousel, not when you (as coder) invoke the setActiveItem() method to "change a configuration". I found the description confusing, even if the answer here is "well, when the user interacts with the carousel, internally the framework calls setActiveItem()" - I as coder don't know that detail.
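For what it's worth, the listener nigelpegg has in mind would presumably look like this (a sketch, assuming the usual (component, newValue, oldValue) argument order and an existing carousel instance):
carousel.on('activeitemchange', function(carousel, newItem, oldItem) {
    // fires however the active item changes, including user swipes,
    // not only programmatic setActiveItem() calls
    console.log('active item is now', newItem);
});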
8alery
8 Feb 2012, 2:46
edspencer
8 Feb 2012, 11:08
This was actually down to a bug in the framework, it's fixed for b2
tinyfactory
9 Feb 2012, 2:19 PM
The data Store docs incorrectly use the property "root" in the JSON reader example. "root" is deprecated, and is now "rootProperty". This threw me off for about 30 minutes:
var myStore = Ext.create('Ext.data.Store', {
    model: 'User',
    proxy: {
        type: 'ajax',
        url: '/users.json',
        reader: {
            type: 'json',
            root: 'users' // this should be rootProperty: 'users'
        }
    },
    autoLoad: true
});
edspencer
9 Feb 2012, 2:22 PM
@tinyfactory yea I fixed that one last night, just missed the cutoff for b2 unfortunately. Will be right when we next update the docs
nigelpegg
9 Feb 2012, 5:31 PM
The docs for any listeners config will point you towards the addListener method on Observable, mentioning :
"This should be a valid listeners config object as specified in the addListener example". The addListener example never makes it clear how you'd configure a listeners config object for use here.
More to the point, *all examples* in the addListener section ignore both addListener and listeners, instead opting for the mysterious "on()" method, which exists nowhere else in the docs.
Something as fundamental as event listening *really* needs clearer docs.
nigel
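For anyone else stuck here, a minimal sketch of the kind of listeners config the docs never quite spell out (assuming a plain button; by framework convention, on() is shorthand for addListener, which may be why the examples jump between them):
Ext.create('Ext.Button', {
    text: 'OK',
    listeners: {
        // the same object shape addListener accepts
        tap: function(button, e) {
            console.log('tapped');
        }
    }
});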
nigelpegg
10 Feb 2012, 10:32 AM
The "class system" docs never make it clear how to extend actual UI components. callParent() seems pretty indispensable.
nigelpegg
10 Feb 2012, 10:34 AM
Could we add Jacky's post on event listening on elements somewhere? Being able to listen to a tap or swipe on a component seems useful. A whole section explaining components and elements seems warranted, since devs will find themselves bouncing back and forth between them.
nigelpegg
10 Feb 2012, 10:35 AM
Component.getWidth() returns null unless you've explicitly set the width. If you're looking for the calculated width of a component that's been dynamically laid out, you need to use component.getEl().getWidth().
nigelpegg
10 Feb 2012, 10:36 AM
The fullscreen property could bear additional explanation, with a warning as to its use. By its name, it seems like something you'd use all the time. In practice, you don't really want to.
What's more, nearly every example in the docs includes fullscreen, so it's easy to see why folks end up over-using it and getting stuck.
nigelpegg
10 Feb 2012, 10:38 AM
A (written) section on theming or styling components would be really useful.
rolfdaddy
11 Feb 2012, 8:30 AM
In the example at the top it says:
var myStore = Ext.create('Ext.data.Store', {
model: 'User',
proxy: {
type: 'ajax',
url : '/users.json',
reader: {
type: 'json',
root: 'users'
}
},
autoLoad: true
});
I think that should be "rootProperty", correct?
nigelpegg
11 Feb 2012, 8:38 AM
The component.painted event warns you (that's a good thing) not to over-use it; instead it tells you to use initialize. Initialize isn't in the docs anywhere.
nigelpegg
11 Feb 2012, 3:22 PM
As of B2, looks as though the "getEl()" method has been removed in favor of directly accessing the "element" property (yay!). getEl() has been properly removed from the docs, but, comically, the "element" property was never added.
bweiler
11 Feb 2012, 5:28 PM
The getExtraParams method is missing from the Ext.data.proxy.Ajax documentation and probably the documentation for the other proxies.
passion4code
15 Feb 2012, 3:26 PM
Data Reader API doc - first basic example
Note the config property for "rootProperty" is misspelled. Not a big deal, just thought you should know.
JeanPlouin
16 Feb 2012, 5:53 AM
The launch and init configs in a controller don't work if you define them in the config object as implied in the docs: you have to define them outside of the config object to get them running as described in the doc.
edspencer
16 Feb 2012, 9:36 AM
The launch and init configs in a controller don't work if you define them in the config object as implied in the docs: you have to define them outside of the config object to get them running as described in the doc.
Where in the docs does it show these inside the config object?
JeanPlouin
16 Feb 2012, 11:59 AM
Maybe I got confused: I saw it in the live ST2 doc --> configs.
As I also found refs and control in the same section, which are to be set inside the config object, it confused me.
edspencer
16 Feb 2012, 12:08 PM
Ah you're right, I'll get that fixed. Thanks :)
fhellwig
16 Feb 2012, 12:40 PM
Thank you. I too was confused by this and thought that the init() function was now a config item rather than being at the top level of the class (same as the constructor). I think the same documentation issue applies to the launch() function as well.
edspencer
16 Feb 2012, 12:45 PM
Yes, it did, they're both fixed now
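Summarizing the resolved placement as a sketch (hypothetical MyApp names; refs and control stay inside config, while init and launch sit at the top level of the class body):
Ext.define('MyApp.controller.Main', {
    extend: 'Ext.app.Controller',
    config: {
        refs: {},    // refs and control belong inside config
        control: {}
    },
    // init and launch belong outside config, per the discussion above
    init: function() {},
    launch: function() {}
});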
aoathout
21 Feb 2012, 4:45 PM
First off, I hope that the docs on the Sencha site are for the next release and this isn't a bug, but in case it is:
In B3 there isn't an isLoaded() off the store. Hopefully this is new docs that got out before the next B (or maybe RC) is released? Can somebody from the dev team reply and let us know?
Thanks.
fhellwig
21 Feb 2012, 7:47 PM
Along the lines of my previous post regarding launch and init, the singleton, et al. is another item that is listed under the "Configs" category yet must be specified outside of the config object. On a broader level, does this warrant an additional drop-down on the documentation selection bar? Not sure what it should be called - Properties is already used; "Settings" perhaps?
For developers first faced with the rich Sencha API, maybe a small "help" link at the top would be welcomed that explains the differences between Configs, Properties, and Methods.
Thanks
gunston2084
22 Feb 2012, 2:40 PM
+1 for documentation being very slow and unusable on iPad 2.
rdougan
22 Feb 2012, 10:36 PM
+1 for documentation being very slow and unusable on iPad 2.
We have plans to fix this in the future. But right now, because the app is developed using Ext JS - iPad 2 is simply not supported.
JavascriptParrot
22 Feb 2012, 10:46 PM
Touch 2 docs don't work at the moment?
Failed to load resource: the server responded with a status of 404 (Not Found)
Failed to load resource: the server responded with a status of 404 (Not Found)
Thanks
renku
23 Feb 2012, 12:51 AM
Try again. Should be OK now.
JavascriptParrot
23 Feb 2012, 3:27 AM
Thanks it works, but got the following message in console.
OPTIONS 503 (Service Unavailable)
XMLHttpRequest cannot load. Origin is not allowed by Access-Control-Allow-Origin.
room9
23 Feb 2012, 4:16 PM
"Sencha Touch 2 Native Packaging for Android" switches between webAppPath and inputPath in the configs.
kret
24 Feb 2012, 1:18 PM
Sencha Touch 2.0 docs still don't work :(.
I am getting in console 404 not found:
()
()
nick_p
24 Feb 2012, 2:14 PM
Hi kret, please try clearing your cache and let me know if it's still an issue.
Thanks
kret
24 Feb 2012, 10:16 PM
Yup. It is working now :).
Thanks.
Jani Hur
28 Feb 2012, 12:31 PM
The current API docs are targeted to desktop browsers as that is the usual development environment and thus the most often needed. We are looking at creating a mobile friendly version of the API docs though.
Please consider this seriously. Currently I'm using a tablet for most of my screen reading time and I found it shocking Sencha Touch documentation doesn't render correctly on an iPad.
jojojose
29 Feb 2012, 8:20 PM
Page Info: Using Nested List - Loading Remote Data
Missing Semicolon - after defaultRootProperty
var treeStore = Ext.create('Ext.data.TreeStore', {
    model: 'ListItem',
    defaultRootProperty: 'items'
    proxy: {
        type: 'ajax',
        url: 'data.json'
    }
});
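The corrected snippet would presumably read as follows (strictly it is a missing comma rather than a semicolon):
var treeStore = Ext.create('Ext.data.TreeStore', {
    model: 'ListItem',
    defaultRootProperty: 'items', // the missing separator
    proxy: {
        type: 'ajax',
        url: 'data.json'
    }
});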
hbeing123
6 Mar 2012, 9:26 AM
I've been working with the Sencha Touch 2 RC and I just installed the new final release, but almost all of the examples aren't working now; they show up as dead images on the examples page too. I've tried clearing my cache and it's still not working, and it's not working on the phone either; they just come up blank. The examples seem to work fine on the Sencha site; they just don't work when installed on my server. The RC examples worked fine.
edspencer
6 Mar 2012, 12:48 PM
@hbeing123 looks like we screwed up the local docs build in the download. We'll fix that for 2.0.1 - can you use the live docs on for now?
nigelpegg
6 Mar 2012, 4:29 PM
The "Using and Creating Builds" section repeatedly mentions it will be updated after Beta 1. I also don't know how it fits with "Sencha Command", which uses some of the same tool, but differently.
fhellwig
7 Mar 2012, 5:40 AM
On the download page for the newly-released Sencha Touch 2.0.0, both links in the "Get Started" section lead to pages beginning with the phrase "This Tutorial is most relevant to Sencha Touch, 1.x."
hbeing123
7 Mar 2012, 7:54 AM
@edspencer thanks for confirming, I can work with the online examples for now, thanks again.
gkatz
8 Mar 2012, 6:05 AM
Ext.os.is.<x> -- it's not clear what X is from the docs.
There is no full list of all the possibilities anywhere except by looking at the source code. The 'is' method documentation lists a few possibilities but does not cover them all. I think the list should be at the top of the doc for this class.
BTW, I am referring to the possible constants, for example Ext.os.is.iPad, Ext.os.is.Phone, etc.
thanks.
rolfdaddy
8 Mar 2012, 1:27 PM
E.g., on ()
The casing is important. I found that (in RC2 at least, haven't tested 2.0 final) if you have disabled Ext.Loader, the code example given on that page, which uses 'Ext.util.Geolocation',
will error with
Uncaught Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.util.Geolocation
Instead it needs to be
var geo = Ext.create('Ext.util.GeoLocation', ...);
There may be other examples as well.
hbeing123
9 Mar 2012, 1:32 PM
Noticed 2 more documentation bugs... under Native iOS Packaging you have "Steps to package your application for iOS on Mac" when I believe this should be for both Windows and Mac.
Also there is a significant typo on the Native iOS Provisioning page where you have a typo in an OpenSSL command:
"openssl req –new –key myprivatekey.key –out CertificateSigningRequest.certSigningRequest –sub "/
should be:
"openssl req –new –key myprivatekey.key –out CertificateSigningRequest.certSigningRequest –subj "/
punchy
10 Mar 2012, 2:19 PM
looking at:
after installing the SDK, use Command Prompt (Win) or Terminal (Mac), navigate to the SDK directory
and type "sencha" and hit return.
The documentation says it should say "Sencha Command v2.0.0 for Sencha Touch 2",
however on both Mac and Win I get back "Sencha Command v2.0.0 Beta".
The documentation proceeds to describe how to generate a new application using the "sencha app create" command. This command doesn't seem to do anything - tried on both Win and Mac.
Are the current SDKs available for download at () not the latest SDK tools?
gkatz
13 Mar 2012, 12:55 AM
Hi all;
when looking in the API, each component has example code and a live preview.
I think it would be super awesome to be able to edit that code on the fly and have the live preview be aware of the changes. Something like an editable code editor...
I am not sure if this is possible, but it would be great and very, very helpful.
thanks
renku
13 Mar 2012, 1:29 AM
@gkatz: What you are describing should already work. Just click the "Code Editor" button above the example, edit the code and press "Live Preview" - your changes should be reflected.
lhughey
16 Mar 2012, 8:06 AM
Oddly, only the watch list and job with friends show anything but a blank screen on my local box. The other applications simply display a white screen. Is there an easy fix for this? I'm using Chrome 17.0.9 on Win7 to view the examples.
renku
16 Mar 2012, 9:18 AM
The examples included in the Sencha Touch 2.0.0 download are horribly broken. Sorry for that. Use the online docs ().
lhughey
16 Mar 2012, 10:07 AM
The examples included in the Sencha Touch 2.0.0 download are horribly broken. Sorry for that. Use the online docs ().
Thanks for the reply. Will do.
aw1zard2
16 Mar 2012, 1:20 PM
Didn't notice if this was mentioned, but the "Sencha Touch 2 Native Packaging for Android" guide needs to add this into the text. This is pasted from the packager.json files in the examples.
/**
* @cfg androidAPILevel
* This is the Android API level, the version of the Android SDK to use.
* Be sure to install corresponding platform API in android SDK manager (android_sdk/tools/android)
*/
"androidAPILevel":"15",
aw1zard2
16 Mar 2012, 1:27 PM
You might want to add the API levels from Android as well.
Android version - API level - version code:
Android 4.0.3 - 15 - ICE_CREAM_SANDWICH_MR1
Android 4.0, 4.0.1, 4.0.2 - 14 - ICE_CREAM_SANDWICH
Android 3.2 - 13 - HONEYCOMB_MR2
Android 3.1.x - 12 - HONEYCOMB_MR1
Android 3.0.x - 11 - HONEYCOMB
Android 2.3.3, 2.3.4 - 10 - GINGERBREAD_MR1
Android 2.3, 2.3.1, 2.3.2 - 9 - GINGERBREAD
Android 2.2.x - 8 - FROYO
Android 2.1.x - 7 - ECLAIR_MR1
Android 2.0.1 - 6 - ECLAIR_0_1
Android 2.0 - 5 - ECLAIR
Android 1.6 - 4 - DONUT
Android 1.5 - 3 - CUPCAKE
Android 1.1 - 2 - BASE_1_1
Android 1.0 - 1 - BASE
dcnauta74@gmail.com
17 Mar 2012, 6:27 AM
The Sencha Touch 2 guides cannot be seen clearly on the iPad, because the right part of the text gets cut off. If you guys could fix this, it would really help, because it would make it possible to study the framework anywhere, any time. It would help if we could zoom in too.
renku
18 Mar 2012, 11:27 AM
Currently the main problem is that the documentation app is built in Ext JS, which doesn't support the iPad :(
martinvidec
20 Mar 2012, 5:39 AM
Ext.Date.now() returns a number, not a date.
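That is, something like the following sketch:
var ms = Ext.Date.now(); // a Number: milliseconds since the epoch
var d = new Date(ms);    // wrap it if an actual Date object is needed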
jhoweaa
25 Mar 2012, 11:16 AM
I've been trying to use a list with the 'PullRefresh' plugin in an app that started from the initial app created by the Sencha SDK generate command. The list looked funny, the 'Pull to Refresh' text was always visible. After a bit of digging I realized that I was missing some stylesheet information. The app.scss file generated by the Sencha tool does not include the 'sencha-list-pullrefresh' styles. It would be nice if the documentation would list any scss modules required to make the item work. It would help in determining which modules a user can leave out in their scss file and which files they need to have.
Thanks!
nak1
29 Mar 2012, 10:28 AM
For some reason the toolbar title is overflowing onto the button. Below is the code I'm using to render the object. [screenshot attachment 33316]
items:[{
xtype : "toolbar",
config:{
style:'background-color:white'
},
docked : "top",
title:'Central Activities List',
items:[{
xtype:'button',
iconCls: 'add',
iconMask: true,
handler:function() {
}
},{
xtype:'button',
iconCls: 'refresh',
iconMask: true,
handler:function() {
}
}]
}]
renku
29 Mar 2012, 11:28 AM
@nak1: This thread is dedicated to problems with the documentation. Please post your question to the Q&A forum instead.
mrsunshine
30 Mar 2012, 12:45 AM
The setter and getter for emptyText are missing in the docs for DataView and List.
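For reference, the undocumented accessors would presumably be used like this (a sketch, assuming list is an Ext.dataview.List with an emptyText config; per Jamie Avins above, every config option gets generated getter/setter methods even when the docs omit them):
list.setEmptyText('No items to display');
var text = list.getEmptyText();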
It's all about edU
Insights, foresights and hindsights from the world of Microsoft (and other) technology in education.

Tools for Schools (rgode)
One of the inherent issues in the age of Cloud is one of Discoverability – knowing how to search for and find all the great services and resources that exist. When it comes to educationally relevant resources, Microsoft offers over 100 free tools, eBooks, applications and websites, many of which you've likely never seen or explored.
Our Partners in Learning team has compiled an exhaustive list of these tools using a convenient Azure application called TeachTec Tools for Schools, with filtering options for school subject areas and audience, as well as the capability to bookmark, copy and share links for easy socialization.
A couple of my favorites:
- 12 free eBooks labeled as "Educator Guides" – how to use technology in the classroom and in your curriculum
- PhotoSynth – a free digital photo tool that allows you to "stitch" your panoramic pictures into 3D space and even geolocate in Bing Maps
- Worldwide Telescope – a free web or Windows client to explore the solar system, night sky or planetary objects in amazing detail
Take a few minutes and surprise yourself with the breadth and depth of Microsoft's resources for classroom creativity and exploration. Enjoy!

Do you know about Office Web Apps (for Free) and how they can be used in the classroom? (rgode)
I've been attending the annual ISTE (International Society for Technology in Education) conference in Philadelphia this week, spending a majority of my time in the Microsoft booth. Far and away the most frequent question I get – either directly or as a follow-on to my introduction of the topic – is, "What is/are the Office Web Apps?" After a brief explanation and demonstration, the follow-on question is "Why are they free?"
To me, both these questions are potentially transformational to our education customers – let me explain why.
First, to be sure we're all on the same page, Office Web Apps are explained in detail on this Microsoft site. To summarize, the web apps, and the associated 25 GB of free SkyDrive storage that comes with them, can be easily accessed by signing in with your LiveID – either the one that you've acquired individually by signing up for any number of Microsoft's useful Live services (Live Messenger, Hotmail, etc.) or by being a student, teacher or faculty member associated with an organization that has signed up for Live@edu. In either case, you have an identity moniker we call LiveID and can take advantage of Office Web Apps.
To be perfectly clear, Office Web Apps do not have the full functionality of the Office 2010 version that you install on your PC. But the browser versions of Excel, Word, PowerPoint and OneNote that come in the Web Apps suite look, feel and act much like their 32- or 64-bit counterparts and can be used for all viewing and many editing functions in any of your favorite browser platforms (IE, Firefox, Safari). Most importantly, document formatting and rich fidelity are maintained as you transfer documents between the SkyDrive cloud storage and your PC.
So now what? Microsoft publishes a number of teacher guides that provide smart, educator-centric insights on Microsoft tools, including this one for Office Web Apps. It explains how the combination of Office Web Applications and SkyDrive can be skillfully utilized to increase collaboration in the classroom and beyond. Think about the potential of providing students access to their projects, homework and your notes, syllabi, and homework assignments from any PC and any browser just by signing in with a LiveID. And because Office Web Apps are free and always run the latest version, installing, maintaining and upgrading the full Office version on a PC that they may use or have access to (home, library, friend or parent) is no longer a roadblock to getting work and collaboration done.
Which brings us to – why free?
When I speak at conferences and Microsoft events, I frequently refer to a concept I call "monetization motivation" – a question that *every* web user needs to ask themselves when they come across a free offering – not only for PC security purposes, but also to understand what the "catch" might be. I encourage everyone to do this for ALL vendors, including Microsoft, especially for students and teachers who might be more drawn to free solutions because of budget constraints. For all software, free or otherwise, you could argue that there is always a hard or soft payment in one or more of the following forms:
- currency (traditional pay for package or service)
- your identity, email or profile data (shopping, consumer or customer-service oriented angles)
- your eyeballs or loyalty as a consumer (ads, clicks, page views, brand)
- the unknowing, unwilling sacrifice of your computer as a bot or malware host for any number of nefarious purposes, from spreading malware to sending spam to capturing PII
Luckily most free software falls into the second and third categories, including Office Web Apps, but to be sure, it is always a good idea to read the privacy statement associated with what you are signing up for. (Microsoft's online privacy information can be found here.) As a company that makes 95%+ of its revenue and profits on software, our ultimate goal is to sell individuals, institutes and companies more software. If we can entice and provide value to students and teachers with a free version of our software, we're hopeful that you will think of us when you are ready to buy the full version.
So get your LiveID today and check out SkyDrive and Office Web Apps. At ISTE this week a number of teachers and IT admins have departed the Microsoft booth with words that are music to my ears: "you've just made my entire trip worthwhile."

"Campus to Cloud" (rgode)
This week I had the opportunity to participate in an innovative workshop sponsored by the Center for Digital Education and Brocade. The sponsors invited CIOs from a variety of higher education campuses across the country, of varying sizes and affiliations, as well as education and industry experts from key vendors including EMC, McAfee, Google and Microsoft.
...
1. Student achievement is the ultimate goal and driver for any IT investment
2. Campus IT must consistently re-evaluate services and direction, outsourcing commodity or non-essential functions and investing in areas that drive the #1 priority above
3. Standards, openness and mobility (as it relates to IT, vendors and networking) will increasingly drive their ability to be successful in the cloud era
... Campus Computing Project and associated survey data.
When we got to the later part of the day to discuss solutions and ideals, the attendees clearly identified attributes of a future state that not only took into account the natural progression to cloud computing, but more importantly, the ideals of true education-oriented and mission-critical IT: adaptive, agile, results-driven, effective delivery of relevant services to support and enhance student outcomes.

SharePoint and raw business drive – a winning combination (rgode)
Based on the 13 years I've spent working with education customers and technology solutions, I think I can fairly say that innovation runs rampant on college campuses – both in IT shops that are challenged with fewer dollars to deliver more services, and with students experimenting with high-potential technology and business ideas. This week I had the opportunity to witness student innovation first hand as a judge for a student business collaboration and simulation contest.
Kiefer Consulting, a Sacramento-based SharePoint and Application Development Gold Certified Microsoft partner, conceived the idea of an extra-curricular 4-week project-management-oriented business consulting simulation centered around SharePoint, to introduce MIS and Business faculty, students and curriculum to the potential of SharePoint. The program kicked off with the fall semester at the Sacramento and Chico campuses of the California State University (press release here). With growing interest and momentum, Kiefer ran the program again in the spring semester, where participation doubled to 8 teams of students competing from each university.
As a judge, I participated in the final presentation day of competition for each campus. Presentation day concluded 4 weeks of the program, which included orientation, site development, project deliverables, checkpoints, and presentation preparation. Each of the student teams chosen as finalists (based on their project score) had 15 minutes to present their response to a Request For Offer, showcasing their solution and offering using SharePoint. Judges served in the role of executive staff of the organization that had originated the RFO, assigned to evaluate oral presentations of solutions. Teams also had to withstand 10 minutes of judging panel questions that ranged from site design criteria to business process and team collaboration inquiries. While the program is not designed to overly burden students already handling a full course load, feedback from students confirmed that the real-life scenario relating to response time, preparation and presentation delivery was a great experience.
Student participation in the program is completely voluntary – motivation for completion stemmed primarily from the opportunity to gain valuable project collaboration and business simulation experience for their resume, or for potential job offers or references provided by the executives in attendance as judges. And while team site designs, team dynamics, solutions and presentation styles all differed, the common denominator all the judges experienced was clear: the potential for great collaboration, solutions and business delivery on SharePoint is impressive, especially considering almost all students were new to SharePoint walking into this program.
Kiefer is evaluating the feasibility of expanding to other universities within California, and potentially nationally, to continue the momentum of learning and excitement that surrounds this program. If this program sounds intriguing for your campus, I'd encourage you to connect directly with Kiefer (info@kieferconsulting.com) or send me an email registering your interest.

= SaaS + PaaS + IaaS + DaaS! (rgode)
One of my first blog posts over a year and a half ago was entitled S+S=SaaS?, which delved into the Microsoft definition of Software Plus Services and how that related to the broader industry concept of Software as a Service. Today, that state of cloud understanding seems long outdated – both in terms of what we as a software provider know to be true, and in terms of the broader industry and academic constructs of current expectations and delivery of cloud services.
If we look at what has transpired since then in the Microsoft vernacular, we first announced last spring that we were "All in" (PressPass story here) and followed that up this past fall with the position of "Cloud Power" (PressPass story here).
I'd like to point out some interesting subtleties about these announcements:
- The "all in" announcement last March was made on campus at the University of Washington, recognizing the growing significance and influence that students and educators have in selecting and utilizing online services.
- The "all in" statement was a reflection of our redirection of developer resources to on-premises AND in-cloud solutions, as well as a total commitment to sales and marketing solutions thinking about cloud services. (My extended pre-sales engineering team invested several hundred hours ramping, digesting, creating training and messaging for, and eventually training field sales staff on the solutions and ecosystem – in fact we're still investing heavily here.)
- "All in" quickly became outmoded as our customers already expected as much – not that they were ubiquitously moving to the cloud, but that they wanted that *option* if the economics and security/privacy proved out.
- The "Cloud Power" announcement in November builds on "All in" by stating that not only do we do cloud, but we do it well; in fact we're the leading provider in most consumer and enterprise cloud services categories. As we focus on simplifying and integrating our solutions and messaging, Microsoft is gaining comfort with re-establishing niche leadership.
How does this relate back to S+S and all the "X as a Service" acronyms I list in the title, you ask? And how is that simplifying the message? As a computer scientist by training and a systems engineer by trade, I'll always be fast to point out the differences between marketing and marketecture; and as holds true with evaluating and understanding software solutions, we need to break down options into manageable pieces. How do we do that for the cloud? Relate it to what we're familiar with in the X-1 generation of computing.
Therefore the Infrastructure, Platform, Software and Desktop ideas that we're all familiar with as they relate to on-premises services are rapidly building out in cloud form. The key for our customers is that you will want to ensure integration and tie-back, both for continuity, migration and portability of software solutions. So the more Microsoft provides both the integration and the flexibility for on-premises, cloud or hybrid deployment, the easier your adoption and the lower your costs will be – thus the beauty of Software Plus Services.

Beyond Kinect (rgode)
In November Microsoft will launch a ground-breaking motion-sensing technology for our Xbox 360 gaming console system called Kinect. You can read about it and explore features here.
If you research some of the history and technology in this device, you'll find an interesting mix of old and new technology. It uses a webcam, depth sensor, motorized pivot and multi-array microphone to enable voice recognition, motion capture and facial recognition, allowing a whole new generation of controller-free gaming, dancing, and health & fitness interaction. As a casual gamer, I'm intrigued and interested in the revolution, but as a technologist, I'm almost more curious about what the "commoditization" of this type of device might mean for computing in general.
Microsoft's Office Labs team has posted a "Future of Productivity" video that envisions a not-so-distant future of full wall displays, multiple-touch and highly social/interactive interfaces - it's a great video to imagine the possibilities - see it here - many of which have rather salient potential in education, the classroom and broader teaching and learning. As you watch the video, however, you might think to yourself, "no way... not possible - at least not in 5 or 10 years." When you return to the present, however, and see the Kinect device in action, many of the scenarios displayed have relatively easy extrapolations from this device. It may be difficult to imagine until you experience Kinect in person, but once you do, the light bulb goes off.
Mainstream computing has been largely wed to some form of tactile input device - mouse, keyboard, gaming controller, even more recently touch-screens. Computing interfaces have largely grown dependent on these devices, but hands-free, full-body, motion-sensing devices open many possibilities for computing in general, not to mention niche-specific areas like social computing, health and fitness training, vocational or job training, virtual conferencing and even health and medical consultations. As Kinect also demonstrates, we don't have to necessarily open our home/office/room and body/face up for viewing; we can use an avatar and a virtual environment both to protect identity (or a bad hair day) and to optimize bandwidth (i.e. eliminating the need to transmit full-color HD 30-fps images).
My kids are excited about Kinect and what it will mean to their gaming enjoyment; I'm excited about what it means for their activity level during their ever-increasing "screen time"; but I'm thrilled about what future technologies and social experiences - in education and beyond - this technology will bring to our rapidly morphing world of consumer electronics.

Earth Day Reflections – Art and Science (rgode)
There are two Microsoft-relevant highlights on this 40th anniversary of Earth Day, 2010, that I'd like to share – one art-related, and one science-related.
Art: BING Student Photo Contest
... the finalists were announced earlier this week and the winner featured today. Along the way 8,240 classrooms around the world incorporated the contest into a class project, generating almost 1 million page views, and $62,000 was raised as a result of the charitable donation process associated with voting.
If you get a chance, check out the media and buzz created by this contest that paid ultimate tribute to the natural beauty of our planet.
Science: Electronic Software Distribution (ESD)
...
- Every month, some 100,000 pounds of CDs become outdated, useless, or unwanted in the United States
- Every year, more than 5.5 million software packages go to landfills and incinerators
- CDs and DVDs aren't just plastic; they are made up of petroleum-based lacquer and paints, aluminum, and other metals
- Studies estimate that the creation, packaging, and delivery of a single CD contributes 1 kg of carbon dioxide to the atmosphere
To show we mean business, Microsoft is partnering with Asus to mark the occasion of Earth Day by giving away five Asus netbooks loaded with Office 2007 Ultimate and Windows 7 Professional.
Our goal? Raise awareness of the green benefits of purchasing and downloading Microsoft software – including Office Ultimate 2007 and Windows 7 Professional. Click here for more information.
While these data reflect more than just Microsoft media, it is encouraging to know Microsoft is partnering with its education customers to make a positive impact on the health of our planet. Make it a great Earth Day.

Earth Day Photo Contest… For Students! (rgode)
Bing Is Next Gen
Bing Is Now Gellin'
Big Investment, New Game
...
So when I learned that Microsoft is sponsoring an Earth Day photo contest for students, with great prizes and the opportunity to have the winning photo displayed, I knew I had to blog about it.
Check out the contest site, which went live on Friday (03/05/2010), to see details on the contest, prizes and how you (teachers) can enhance the contest with classroom projects. Good stuff!

Imagine Cup – Olympics for Student Technologists and Artists (rgode)
For my first post of the new year, I thought I'd focus on our future – more specifically, the future bright minds of the technology and computer industry that currently take the shape of ambitious and passionate students in the market that my team and I serve.
For many years now, Microsoft has been promoting and holding the Imagine Cup – a competition for students, age 16 and older, to design "software, games and web sites that help make a difference on global issues." Registration for the US spring competition closes February 1st, for finals in late April. Encourage your students to register – they can win $10,000 just for registering!
More than the prizes and glory of qualifying and winning in such a competition is the sense of purpose that it provides students in the computer and technology field. I toiled and hacked as a computer science undergrad for 4 years, spending countless hours in front of a workstation writing C code to solve what seemed like meaningless problems and functions. What I would have given to inject some purpose into my studies that revolved around pure competition, if not also the spirit of addressing pressing global issues. Not to date myself too much, but those days in the computer lab were right on the cusp of the modern GUI, OOP and coding toolkits that make coding life much more intriguing for today's students. Nevertheless, the Imagine Cup neatly squares up a cool contest with a great competitive and enriching collaborative experience for those students who have the mettle and solution to further their technology aspirations.
US finalists have the opportunity to travel to Warsaw to compete internationally for some great cash prizes and recognition – a pretty darn cool experience if you can beat out 299,999 other registrants to win the top Imagine Cup honors!

About the Learning Continuum with disrupters like H1N1 (rgode)
Snow days, carpool conflicts, broken bones, learning disabilities, the flu, excessive heat, hurricanes, weapon threats, cyber-bullying, failed levies, family vacation… Any of these factors, and many others, could mean unintended missed school days and a disruption in the learning continuum, at a minimum.
More importantly, it could mean stunted flow of federal dollars for attendance and instruction.</p> <p>With the on-going threat of the pandemic-like H1N1 virus, schools are regularly grappling with attempting to keep some semblance of order and learning while minimizing learning downtime caused by recommendations to isolate potentially sick students. </p> <p>Meanwhile, Web 2.0 advances have allowed schools to explore the potential of anywhere, anytime learning tools to enhance the learning environment well beyond the physical bounds of the school property and beyond the time bounds of school sessions. Communication, collaboration, alerts and content is becoming increasingly digital and ubiquitously accessible as 1-1 computing and home computer access spreads.</p> <p>Microsoft is right there in the mix with solutions for schools. Most customers probably traditionally associate us with on-premise, central-IT sponsored projects involving SharePoint, Office Communications Server and Office – clearly our legacy and area of expertise. What many customers don’t realize is that we’re also in the business of Web 2.0-style immediacy – with free web tools directly relevant to educational needs.</p> <p><font size="2"><strong>Office Live Workspace fits the LEARNING CONTINUUM bill</strong></font></p> <p>Forget the branding, for a second and just think of this as a Classroom workspace – an accessible from any browser tool to store, share and collaborate on homework, handouts, presentations and projects. No – they don’t have to be Office documents – they can be of ANY file time; and no – you don’t have to be a <a href="">Live@edu</a> customer. Any teacher, student or administrator can setup and invite other peers, students or parents to participate and – POW – instant learning continuum tool! Check out the details here: <a title="" href=""></a></p> <p><strong>Don’t overlook our myriad other tools</strong></p> <p>More free web tools abound – all with high relevance for your classrooms.</p> <p>1. Need to record a lecture or lesson for students who miss it? Check out <a href="">Community Clips.</a></p> <p>2. Need great multi-media software for assembling projects? Check out the new <a href="">Windows Live Movie Maker</a>.</p> <p>3. Want to share documents and screen views with peers or students? Check out <a href="">Shared View.</a></p> <p>Who knew Microsoft is your quick and simple source for relevant classroom tools? Enjoy!</p><img src="" width="1" height="1">rgode Innovation – Microsoft Education Labs<p><a href=""></a></p> <p></p> <p>Have you followed the trend of Microsoft’s innovation in Education? It’s been an interesting path, for sure – especially if you trace the roots far enough back. You could almost put it into a 3-phased approach…</p> <p><strong>Phase 1 – Software</strong> <br />Surely you remember not-so-long bygone products like Encarta and perhaps even Class Server that focused on classroom enrichment and management. 
More recently we’ve dug in deeper to optimizing the learning experience with products like <a href="">Learning Essentials</a> – the Office add-on specifically designed for helping students and teachers work smarter; <a href="">Math</a> – everything you need to tackle math and science on your computer; and <a href="">Semblio</a> – a forthcoming set of tools for publishers and end-users for enriching content used in the classroom.</p> <p><strong>Phase 2 – <a href=""><strong>Software+Services</strong></a> <br /></strong>In recognition of the power of the web, we’ve also started to play heavily in the cloud space – first by leveraging our consumer-oriented Live and OfficeLab or LiveLab tools for education enrichment (<a href="">Community Clips</a>, <a href="">SharedView</a>, <a href="">MovieMaker</a>) and then by formalizing email and collaboration tools into a centrally managed <a href="">Live@edu</a> offering for schools.</p> <p><strong>Phase 3 - <a href=""><img border="0" alt="Education Labs Home Page" src=" Library/Images/OLSite/Logo2.gif" /></a></strong> <br />Released in July, Microsoft’s Education Labs is a new web community designed to share the company’s latest innovations for the education community. <a href="">Educationlabs</a> features a growing collection of free, easy-to-deploy solutions specifically for teaching and learning, and serves as a forum for exchanging ideas that will inform Microsoft’s ongoing research and technology development for education. </p> <p>A number of new innovations on Education Labs are slated for launch this fall. One launched and is available for download today (9/3) - Math Worksheet Generator; two others, Microsoft Folder-based Sites and Flashcards, will become available later this fall. Here’s a little detail about these cool little classroom apps.</p> <p><b>Math Worksheet Generator – </b>an application for educators to quickly create personalized math worksheets for an entire class or individual students.</p> <ul> <li>Eliminates the need to photocopy old worksheets or textbooks to find math problems</li> <li>Worksheets can be used for in-class or take-home assignments, tests, quizzes or materials for tutors</li> <li>Created in Word, which makes it easy to reformat, save and print</li> <li>Integrates with Microsoft Math 3.0 <b></b></li> </ul> <p><b></b></p> <p><b>Microsoft Folder-Based Sites – </b>helps educators create a Web site which automatically converts files such as Word documents, spreadsheets and PowerPoint decks into HTML files, allowing educators to easily store, organize and share their materials with their students online. </p> <ul> <li>Easy folder creation to organize and categorize documents via “drag-and-drop”</li> <li>Browser-viewable - no special applications are needed on students’ computers</li> <li>Ability to upload many files at once – a feature not typically supported by existing LMS offerings</li> </ul> <p><b></b></p> <p><b>Flashcards – </b>an interactive application for educators and students to create flashcard decks or choose from a catalogue of digital flashcards featuring audio, text or pictures.</p> <ul> <li>Personalizes the studying experience for the student</li> <li>Uses a special algorithm to track how many times a student visits a card to help them know how they are progressing on a study subject</li> </ul> <p> <br />Oh yeah – did I mention all of these are FREE? 
Start enhancing your classroom environment today with innovative EducationLabs downloads and forums – <a href=""></a></p><img src="" width="1" height="1">rgode news in the world of (classroom) movie making!<I style="mso-bidi-font-style: normal"><SPAN style="COLOR: #1f497d"><FONT face=Calibri> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal><I style="mso-bidi-font-style: normal"><FONT color=#000000><FONT size=3>We’re entering an exciting launch year – not the least of which is Windows 7 – a smooth, snappy operating system I’ve been enjoying on several business and personal computers for several months. <?xml:namespace prefix = o>Quick sidebar: I made the mistake of loading Win7 on only one of my three children’s computers – which happened to be the oldest machine as well, a 5+ year old Dell tower.<SPAN style="mso-spacerun: yes"> </SPAN>The other two children are constantly clamoring for me to upgrade their machines now as well, having felt slighted at me overlooking their "advanced" computing needs…>Back to the topic at hand.<SPAN style="mso-spacerun: yes"> </SPAN>In the hoopla of Windows 7 launch, it’s important for our Education customers to keep their eye on some of the other announcements we’re making in the world of “Live”.<SPAN style="mso-spacerun: yes"> </SPAN>I’d like to share a prime example with yesterday’s announcement of the global availability of the new Windows Live Movie Maker. With the new <SPAN style="mso-bidi-font-weight: bold">Windows Live Movie Maker,<SPAN style="mso-bidi-font-style: italic"> </SPAN></SPAN>it’s easier than ever to turn videos and photos into great-looking movies and slideshows, using many popular camera types and file formats on the market today. <SPAN style="mso-spacerun: yes"> </SPAN.<SPAN style="mso-spacerun: yes"> </SPAN> size=3>Teachers and students – anyone for that matter – can download the new Windows Live Movie Maker <U>for free</U> by going to </FONT><A href="" mce_href=""><SPAN style="COLOR: windowtext"><FONT size=3download.live.com</FONT></SPAN></A><FONT size=3><FONT color=#000000>. Enjoy!<o:p></o:p></FONT></FONT></I></P></FONT></SPAN></I><div style="clear:both;"></div><img src="" width="1" height="1">rgode your LiveMeeting Investment<p>Whether your school has made an investment in the Live Meeting service hosted by Microsoft, or if you utilize the Web Conferencing capabilities of Live Meeting on premise, you are probably realizing great cost-savings and efficiency benefits from the business and classroom application of this product.</p> <p:</p> <p><strong>Recording Converter for Microsoft Office Live Meeting 2007</strong> <br />Download: <a title="" href=""></a> <br />Info: <a title="" href=""></a></p> <p><strong>Expression Encoder 2 <br /></strong><a title="" href=""></a> <br /><img alt="Expression Encoder 2" src="" /></p> .</p> <p>Although Encoder is not a free tool, it has a number of useful features for the aspiring video-to-web publisher, including Silverlight compatibility as well as scripting and batch processing for high volume shops. </p> <p>Check out these Live Meeting “add-ons” to see if you can get more value out of your web-conferencing investment today.</p><img src="" width="1" height="1">rgode Hied Conference – 2009 Edition<p.</p> <p>Even if you are not a subscriber of the listsrv but you are a technical resource from a college or university, you are welcome to join us! If you are curious about the technical focus, see the <a href="">agenda posted to the HIED wiki</a>. 
You must be willing to sign a Non-Disclosure Agreement form as some of the content is “next version” planning information. The official invite details are below. Hope to see you there!</p> <p><img alt="Windows HiEd Conference 2009" src="" /><:acd61706-b81e-4e7d-b800-d38622b3f084" class="wlWriterEditableSmartContent">Technorati Tags: <a href="" rel="tag">Higher Education</a>,<a href="" rel="tag">conference</a>,<a href="" rel="tag">IT administrators</a></div> <p><b>Background</b></p> <p>Working in conjunction with <a href="">Windows-Hied listsrv</a> representatives, Microsoft Education is pleased to host the 5<sup>th</sup> Windows Hied Conference at the Microsoft Campus in Redmond, WA, March 30<sup>th</sup> – April 1<sup>st</sup>, 2009.</p> <p><b>Conference Goals</b></p> <p:</p> <ul> <li>Provide highly relevant <b>product and solution discussion</b> as well as tips and tricks for better evaluating, deploying, integrating, administering, supporting and simplifying Microsoft solutions in the higher education environment </li> <li>Illicit <b>product input and feedback</b> to ensure product teams understand the needs of the HED customer base </li> <li>Learn of <b>unique challenges and successes of Microsoft product deployment</b> from customer presentation sessions </li> <li>Provide an informal venue to discuss additional issues and topics </li> <li>Increase the <b>trust and confidence of customer attendees</b> in deploying and supporting Microsoft solutions </li> <li>Provide Microsoft <b>product teams</b> an opportunity to talk to a focused and strategic group of knowledgeable HED customers</li> </ul> <p><b></b></p> <p><b>Event Logistics</b></p> <ul> <li><b>Dates</b>: March 30<sup>th</sup> – April 1<sup>st</sup>, 2009 </li> <li><b>Location</b>: Microsoft Campus, Building 37 </li> <li><b>Presentations</b>: Mix of Microsoft and customer presentations: 300-400 level technical drill down (~60-90 min each) </li> <li><b>Customer Attendees</b>: ~100 </li> <li><b>Conference Hotel</b>s: <ul> <li>Residence Inn<b> - </b><a href=""></a></li> <li>Marriott Redmond Town Center<b> -</b> <a href=""></a></li> </ul> </li> <li><b>Cost 150$</b></li> <li><b>Agenda located <a href="">Here</a></b></li> <li><b>Registration Page located <a href="">Here</a></b></li> </ul> <p>NOTE - This event is limited to administrators from education. To ensure adequate capacity, please do not register unless you are an administrator from a school/university. We cannot guarantee refunds for those who register from other industries.</p><img src="" width="1" height="1">rgode<p> </p> <p>Microsoft’s vision of cloud computing is called, and being marketed as <font color="#ff0000">Software + Services</font>. The original, more broadly accepted term, as you are likely aware, is <font color="#ff0000">Software as a Service</font>. .</p> <p>Software as a Service (SaaS) is defined quite well on Wikipedia - <a title="" href=""></a>. .</p> <p><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="56" alt="image" src="" width="244" border="0" /></a></p> <p>To be sure, Microsoft understands and embraces the concept of SaaS, as evidenced by our foray into all of our “Live” branded products – both for the consumer (<a title="" href=""></a>) as well as for business (<a title="" href=""></a>). Where we augment the basic definition is around those areas above that are known or hypothesized weaknesses of SaaS. 
It may or may not surprise you that S+S is actually defined quite well on Wikipedia - <a title="" href=""></a> – but to summarize in my own words: Software + Services takes the concept and power of cloud computing (Services) and magnifies its effect with the smart use of client side computing (Software).</p> <p>Let’s take some examples:</p> <p>1. <strong>Unified Communications</strong>:.</p> <p>2. <strong>Classroom or Administrative Productivity</strong>:. </p> <p>3. <strong>Companion Applications</strong>:.</p> <ul> <li>Office Live Workspaces (<a title="" href=""></a>): share, store, track and comment on documents online</li> <li>Shared View (<a title="" href=""></a>): instant screen and application collaboration over the internet</li> <li>Community Clips (<a title="" href=""></a>): record and share screen activity, presentations, audio and webcam video</li> </ul> <p!</p><img src="" width="1" height="1">rgode + Services for Academia?<p <a href="mailto:Live@edu">Live@edu</a>, utilizing Office Live Workspaces for project or team collaboration or Live Skydrive for sharing presentations, projects and files.</p> <p different angles, depending on your lens consumers, the justification becomes a little muddled when students.</p> <p you run locally that could benefit from a more lively interface with cloud services - a Word document session that allows collaboration with a student that just has a browser; an interactive history simulation session running on your <a href="">Software + Services</a> .</p><div style="clear:both;"></div><img src="" width="1" height="1">rgode 2.0 meets Web 2.0<P>My favorite educationally relevant and intellectual article of this calendar year (and arguably for the past 12 months or more) is one that appeared in Educause Magazine in their Jan/Feb 2008 issue entitled <EM>Minds on Fire: Open Education, the Long Tail, and Learning 2.0</EM>: access it directly <A href="" mce_href="">here</A>.</P> <P.</P> <P><STRONG>The Long Tail</STRONG></P> <P>Chris Anderson, the editor in chief for WIRED Magazine, gained notoriety in Oct 2004 for his article, <A href="" mce_href="">The Long Tail</A>,.</P> <P.</P> <P><STRONG>Dissecting Web 2.0 Components</STRONG></P> <P.</P> <P.</P> <P.</P> <P><A href="" mce_href=""><IMG style="BORDER-RIGHT: 0px; BORDER-TOP: 0px; BORDER-LEFT: 0px; BORDER-BOTTOM: 0px" height=324 alt=image</A> </P> <P><STRONG>Microsoft Relevance?</STRONG></P> <P:</P> <P>Photosynth: <A title=</A> <BR>WorldWide Telescope: <A title=</A> <BR>Tafiti: <A title=</A> <BR>PopFly: <A href="" mce_href=""></A></P> <P.</P> <P><A href="" mce_href=""><IMG style="BORDER-RIGHT: 0px; BORDER-TOP: 0px; BORDER-LEFT: 0px; BORDER-BOTTOM: 0px" height=328 alt=image</A></P><img src="" width="1" height="1">rgode All Students...<p...</p> <p.</p> <p>Now all college students have an opportunity to experience that enrichment... talk about a cool job! If you're not a student in college, but know one (or have one), feel free to pass along the information, link and deadline!</p> <p><b><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="69" alt="clip_image002" src="" width="540" border="0" /></a></b></p> <p><b>About the Microsoft® Student Partners (MSP) Program</b></p> <p><b></b></p> <p! </p> <p!</p> <p>To apply, US students should visit <b><a href=""></a></b> and submit an application by the <b>5/31/08 Deadline. 
</b></p><img src="" width="1" height="1">rgode<p.</p> <p><strong>First, the Formalities</strong></p> <p>The focus of this blog will inherit from my role and mission here at Microsoft, leading a team of technical and solution specialists who focus on the Education vertical: <strong>"Empower people to realize their social and economic potential by enabling access to quality education experiences for all through technology." </strong>As lofty as a goal as that may seem, I'm going to assume, at least for the time being, that every little point of light will help the cause.</p> <p.</p> <p.</p> <p><strong>Next, the Realities</strong></p> <p. </p> .</p><img src="" width="1" height="1">rgode
|
http://blogs.technet.com/b/rgode/atom.aspx
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
Checks whether a Slapi_RDN structure holds any RDN matching a given type/value pair.
#include "slapi-plugin.h" int slapi_rdn_contains(Slapi_RDN *rdn, const char *type, const char *value,size_t length);
This function returns 1 if rdn contains an RDN that matches the type, value and length, or 0 if no RDN matches the desired type/value.
This function searches for an RDN inside of the Slapi_RDN structure rdn that matches both type and value as given in the parameters. This function makes a call to slapi_rdn_get_index() and verifies that the returned value is anything but -1.
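As a rough illustration (not part of the original reference), a plugin that already holds a Slapi_RDN might use the call as below; the wrapper function and the "cn"/"admin" pair are invented for the example:

#include <string.h>
#include "slapi-plugin.h"

/* Sketch: report whether rdn holds an RDN equal to "cn=admin".
 * slapi_rdn_contains() returns 1 on a match, 0 otherwise. */
static int
rdn_is_cn_admin(Slapi_RDN *rdn)
{
    const char *value = "admin";
    return slapi_rdn_contains(rdn, "cn", value, strlen(value));
}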
See also: slapi_rdn_contains_attr()
|
http://docs.oracle.com/cd/E19693-01/819-0996/aailf/index.html
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
Code covered by the BSD License
by
Jonathan Karr
21 Dec 2012
Class for representing empirical formulae including support for basic math (+, -, *, etc.)
Example Usage:
import edu.stanford.covert.util.EmpiricalFormula;
x = EmpiricalFormula()
x = EmpiricalFormula('H2O')
x = EmpiricalFormula(struct('H', 2, 'O', 1))
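The page doesn't document the advertised arithmetic support, so the following is a hypothetical sketch only; the operator semantics (combining element counts) are assumed, not confirmed:

% Hypothetical usage of the advertised +, -, * support; the exact
% semantics are assumed here, not documented on this page.
x = EmpiricalFormula('H2') + EmpiricalFormula('O')   % perhaps yields H2O
y = 2 * EmpiricalFormula('H2O')                      % perhaps yields H4O2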
Jonathan, I think this could be a useful piece of code, but you have not documented it. If you have gone to the effort of writing and sharing the code, I suggest putting in the extra effort to document it so that others might be interested in using it.
|
http://www.mathworks.com/matlabcentral/fileexchange/39569-empirical-formula
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
Here’s a crazy-seeming but honest-to-goodness real customer scenario that got reported to me recently. There are three DLLs involved, Alpha.DLL, Bravo.DLL and Charlie.DLL. The classes in each are:
public class Alpha // In Alpha.DLL
{
    public virtual void M()
    {
        Console.WriteLine("Alpha");
    }
}

public class Bravo : Alpha // In Bravo.DLL
{
}

public class Charlie : Bravo // In Charlie.DLL
{
    public override void M()
    {
        Console.WriteLine("Charlie");
        base.M();
    }
}
|
http://blogs.msdn.com/b/ericlippert/archive/2010/03/29/putting-a-base-in-the-middle.aspx?PageIndex=6
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
The Obligatory Hello World
Since every programming paradigm needs to solve the tough problem of printing a well-known greeting to the console, we'll introduce you to the actor-based version.

import akka.actor.Actor
import akka.actor.Props

class HelloWorld extends Actor {

  override def preStart(): Unit = {
    // create the greeter actor
    val greeter = context.actorOf(Props[Greeter], "greeter")
    // tell it to perform the greeting
    greeter ! Greeter.Greet
  }

  def receive = {
    // when the greeter is done, stop this actor and with it the application
    case Greeter.Done ⇒ context.stop(self)
  }
}

The HelloWorld actor creates a Greeter on start-up and asks it to perform the greeting; when the greeter reports back, the Done message arrives in the receive method, where we can conclude the demonstration by stopping the HelloWorld actor. You will be very curious to see how the Greeter actor performs the actual task:
object Greeter {
  case object Greet
  case object Done
}

class Greeter extends Actor {
  def receive = {
    case Greeter.Greet ⇒
      println("Hello World!")
      sender ! Greeter.Done
  }
}
This is extremely simple now: after its creation this actor will not do anything until someone sends it a message, and if that happens to be an invitation to greet the world then the Greeter complies and informs the requester that the deed has been done.
As a Scala developer you will probably want to tell us that there is no main(Array[String]) method anywhere in these classes, so how do we run this program? The answer is that the appropriate main method is implemented in the generic launcher class akka.Main, which expects the fully qualified class name of the application's main actor as its only command-line argument. Thus you will be able to run the above code with a command similar to the following:
java -classpath <all those JARs> akka.Main com.example.HelloWorld
This conveniently assumes placement of the above class definitions in package com.example and it further assumes that you have the required JAR files for scala-library and akka-actor available. The easiest would be to manage these dependencies with a build tool, see Using a build tool.
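As a rough sketch of that build-tool route (the project name is invented, and the Scala and Akka versions are assumed to match the 2.2.3 docs this page belongs to), an sbt definition could look like:

// build.sbt -- minimal sketch; name is arbitrary, versions assumed
name := "hello-akka"

scalaVersion := "2.10.4"

libraryDependencies += "com.typesafe.akka" %% "akka-actor" % "2.2.3"

With that in place, sbt fetches the scala-library and akka-actor JARs for you.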
|
http://doc.akka.io/docs/akka/2.2.3/scala/hello-world.html
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
a.k.a.
We got ourselves a couple-two-tree noders, but no fronchroom.
I know what you're thinking: "OMGWTF! A Chicago Nodermeet?!?"
Yes, kids. Time to retire to the fallout shelter, for the end certainly is near. For the first time in the (readily available) history of everything2, there will be a nodermeet in Chicago! Come on up to Old Irving on August 10th through the 12th for a weekend of drunken debauchery, porch-sitting, and neighbor-angering!
The Master Plan (at it currently stands)
WHEN: Friday, August 10 through Sunday, August 12, 2007. If you want to come up on Friday night, that's cool. Just know that I might be working from home, and may even need to run into the office for an hour if things really go to shit. I have finagled Friday afternoon off. Come over after 2pm, and I'll be around. Don't tell the boss. :)
WHERE: Super-Secret! My wife and I don't like posting our address on the internet. What I will say is that we're in the Old Irving neighborhood of Chicago. Say you'll come, and I'll tell you specifically.
Our reasonably-sized apartment has a couch and a futon available, as well as plenty of floor space for those needing additional accommodations (bring a sleeping bag or what have you). We've got a huge deck, which I assume will be our major hang-out, weather permitting. A word of warning: there's only one bathroom in my apartment, which will make shower coordination a bit interesting.
My house is also conveniently located near the Kennedy, so driving here will be easy. A little note for those coming from the south: You're going to want to dodge the construction on the Dan Ryan (This is I-94 westbound after the I-57 junction). I suggest getting on I-294, and coming around through the suburbs, and taking I-90 East into the city. It may or may not be faster, but 294 is going to be the happier of the choices by far.
WHY: Well, um, my wife will be in New York for the ASA conference that weekend. Yes, yes, I know what some of you are saying: "What, that fictional wife you keep talking about?" Yes, that wife. This time you will be able to see all her books, clothes, and other personal effects. And then, of course, wave them all off as an elaborate attempt to fool all of you into thinking I'm married. Yes, you have it all figured out. (Editor's note: izubachi has now met my wife, and confirmed her existence. Take that, smartass noders!)
Anyway, since she will be gone, I was planning to sit here and drink by myself. Might as well get the noders together and trash the place, so she can be really mad when she gets home.
WHAT TO DO:
* Drink on the porch! You do like drinking, don't you?
* Take a look at the gigantic ugly homes they have built on either side of my building.
* We can swing around the corner to the 'Nug for some decent diner food. I'm also thinking we may need to make a trip to Hot Doug's on Saturday.
* Carcassonne, Apples to Apples, and Puerto Rico on premises! Perhaps I'll have figured out how to play Puerto Rico by then. Um, not so much.
* We're in the city! There's bound to be something to do here, right? We can organize expeditions to one of the excellent museums if folks would like.
* This weekend is smack in the middle of a White Sox home stand versus the Mariners, if you like that kind of thing.
* chaotic_poet suggests a trip to the Green Mill might be a good option. "The jazz bands they have there are usually two shades of awesome." Karrin Allyson is playing there Saturday night.
* chaotic_poet also reminds me that it is Market Days that weekend. I knew I was forgetting about an important event.
FOOD: I'll have some stuff here ready to go as far as snacks go, but we run a pretty healthy/anti-snacky house, so anything you guys bring will only make things better in the long run. As for booze, we've got quite a bit here, but there isn't any beer here, so you'll have to bring your own. Maybe some of us will make a run over to Miska's for some good stuff when everyone gets here.
As for meals, I don't have any cooking ability at all. It's best if I don't even try to make you anything. If someone would like to make something, the full kitchen facilities are available to you. Other than that, we can all get organized and go somewhere, or I've got a delivery menu or two around here somewhere.
THINGS TO KNOW:
* We've got two cats. If you've got allergies, you might want to take this into account. We'll also need to make sure they don't get out of the apartment, because they wouldn't last five minutes outside.
* This porch out back is shared with the other units in the building (five, including ours). Everyone else in the building is cool, but so you know, it may be more than just us out there.
* We've got some hotels here in the city, but nothing really around the house. If you want to stay somewhere and need help figuring out what's best, let me know and I'll help you out.
* If you get lost anywhere on the way here, just give me a call. I should be able to talk you down.
* No markers allowed. I'll be checking bags when you come in, so don't get any smart ideas. I'm looking at you, GCP noders.
CONTACT INFORMATION:
Cell Number is probably best - (312) 391-5577
The Cool Kids:
vandewal (naturally)
BrooksMarlin
LaggedyAnne and Sessor
Wiccanpiper and BriarCub - who have dibs on the futon in the sunroom
chaotic_poet
RoguePoet
opteek
sauth
mordel
karma debt
izubachi!?! - will be here Friday night. Show up early kids!
Ysardo - Last minute addition
Maybe they're cool enough:
jrn
hunt05
Billy
Two Sheds
You suck, but we'll see you on Labor Day:
artman2003
Apatrix
The Green Mill,,,
stolen glances,
borrowed time,,
stolen kisses,,,,
who knew that a quick visit to the city would remind us all how to be such master thieves.
It was here that I was reminded that the only way to repay a kindness undeserved, is to begin, or perhaps remember, friendships that are unbridled, unashamed, and unforgettable,,,
The music glides through the air and makes its short distance over to our table, "Capone's booth" as it is known. Slow as the music before us, the sun is setting and the bar is darkening. The band is working hard to keep up with their overly talented singer. Set one glides by far too fast, as do the first few rounds.
The music runs us through unbelievable highs and lows, and the place feels like it's really waking up as the second set starts. More crowded, more life, more energy, and more enjoyment in this little (non-smoking) club that refuses to feel like anything but a smoky little room where secrets are being told. The music is moving us all in the direction that we came here to go. People become somehow more than the sum of their parts in this fragrant atmosphere that's every bit as tangible as the table that's holding my Gin and Tonic.
We sit, soaking up the power and joy of it all as we scribble down our would-be whispered confessions, apologies, and admirations.
The street,,,
Fresh out of the car, the humidity and the smells begin to sink in. Beautiful church on the right, sculptures of broken swords and beautiful angels waiting for me to find them. I'm looking forward though, not up, and then comes what I had so much earlier, and also later, come to love the smell of.
Greeting,,,
Genesis,,,
Renewal,,,
On my short walk, I had time to collect myself, prepare to be working with 'proper villains' again, but instead of thinking about what I'm going to say or do with the noders I'm walking to meet, I fall into the ebb and flow of the city. Hoping against the odds to catch its very pulse against my fingertip.
I didn't, not really at this point. I was too distracted by the beautiful afternoon. The smells of the garden mulch and the beautiful people, mostly just the beautiful people.
The city of Chicago does have a certain rhythm, especially in the summer. We hadn't seen our Jazz show yet, we were just remembering hello. And I didn't feel the pulse of the whole city against my fingers until it was almost time for goodbye.
Time for goodbye,,,
Maybe hello was better suited to the moment I was living, walking towards the weekend in the company of friends old and new, but I couldn't help but wonder what lay waiting for me at home...
The Deck,,,
Decidedly in the swing of things by now. It's now that I learned all about how "Apples to Apples" is played. The game is going very well, we all have full stomachs and fuller hearts. Some play, some talk, some smoke, some investigate the mysteries of the newly invented measurement system, all present love, mostly each other.
Rising,,,
Cresting,,,
Falling Away,,,
The lightning comes to play, the clouds roll in being completely irreverent to our observances. Things are protected as droplets begin to fall, and some begin to fall away. The droplets increase and more return to comfort of a drier existence, I remain. I remember that the drive to refuse the inconvenience of your environment is the ultimate expression of humanity. I spend a moment or two with the storm, but sure enough I begin to miss you all.
Pushing hands to applause, pushing ourselves to be our very best selves, simply by being our best selves. No such thing as a lonely table in our midst, no such thing as a gift given here without thanks, and no such thing as an ordinary moment.
Hell has left us some cherries on the table and I am left to admit, I can't ever recall a time before when punching myself has ever been quite so enjoyable. New memories take root, while old ones shove aside and make way for them. Someone steals my spirit just long enough to make sure I get back so that I can lock it in a jar and save it for later. I look around the faces of my people, then the face of the clock and know that we've long since been done tearing the day to shreds.
Finally it becomes time to rest. Some go, and some stay. I melt, or perhaps unfurl on the floor, embracing the quiet. I lie for a few moments just processing the joy of what a day with good friends, good music, good humor, and a particularly good reason to be happy is like.
The End,,,
Waken to the morning feeling better than I have a right to. Spend a few moments learning how God is dead from a lovely book in the corner. Someone stirs to my left and wakes and again it begins. The two of us, alone in the room find Serenity for a couple of hours and then the rest of the house seems to stir almost at once.
We wake and hunger...
A new place, someplace I have never seen and don't seem to quite remember as well as I could. It must be time for us to be coming to a close; I always seem to manage to let the memory of the endings slide away.
Time to look you in the eye
Time to give you all the time you need
Time to let you hold me, turn my cheek and accept your kiss.
All that was begged, borrowed or stolen must be returned. This is the moment when I am reminded with the most power why I come out to see such amazing and beautiful strangers. Thank you Noders, for being the most naturally generous, amazing, attractive and wonderful people I have the good fortune and privilege of knowing. You, the people I love, are what make me keep fighting.
We all hug, shake hands, accept our kisses on the cheek from our new mama. You see the happiness at being together, the awkwardness of leaving one another behind, and at least if you were to look in my eyes, you might see the wish for more time peeking out to greet all of you.
Hop a bus and bend vanderwal's ear one last time. I talk of the family I am going back to. It would only seem silly to talk of the family I just left behind because, alas, it's time to fade back into the nodegel and of course to remember, I was already home.
Time to say goodbye,,,
NEVER SAY GOODBYE!
"Knight's an energetic cocksucker and Armstrong's clearly defined balls cling close to his body in their tight sack during the lick."
It's a little surreal. It's Saturday, we are all sunbaked and shell-shocked from the street fair, and we've stopped off at my apartment just to gather back together and regroup for whatever the night brings. karma debt's giggling voice rings out over my living room, having grabbed the local free gay entertainment guide's porn review article and now reading it out loud. "Wow. Straight porn reviews aren't anywhere near this graphic," someone chimes in from the couch. I can't remember who. I just giggle and smile. These are the little tiny moments -- these five minute asides, sometimes sweet, sometimes absurd -- that sparkle in the afterglow.
But I'm getting ahead of myself...
It was Friday around 5 pm. I was lost. Go me? Hometown advantage and all, yet when the time comes, I've managed to get completely mixed up. Chicago's perfect grid usually doesn't betray me like this, but we're here at the corner of 4000 and 4000 -- all the addresses are the same and I don't know the Northwest side of Chicago. I'm running late and it's already been a long frustrating week full of overtime and fevers. Even today, I ended up leaving work at 4 when I had asked for a 1/2 day. I needed very badly to have a good weekend.
Finally, after walking about 5 blocks more than I needed, I stumble up to vanderwal's C!'d apartment, climb up, and sink into a seat. It's a nice place -- warm and full of character with two cats of opposite demeanors. The brown one walked up and demanded to be pet, but only in the way that it wanted (cheek to back to tail, incidentally). The other, a fuzzy orange blob, just napped lazily in the filtered sun coming in through the window.
I was first to arrive, but it wasn't too long before a few others started arriving. Few by few we settled onto the deck in the back -- the place that would become the centerpiece of the weekend -- and sipped the moon out of its daytime hiding place. As day left and night went on, we teased, we joked, we talked seriously (but mostly not)... It was comfortable basking in the glow of new friends that have felt like they've been there forever (and those that really are starting to actually be that).
What occurred on Saturday was possibly the gayest thing ever to happen at a Nodermeet. It should therefore not be a surprising thing that we were swallowing sausages for brunch. 2 o'clock had found us at Hot Doug's, a self-purported "encased meat emporium." The line was out the door and around the corner when we arrived, Saturday apparently being a big day for hot dogs in Chicago. It was no surprise why there was a crowd, with such delicacies as fries made with rendered duck fat and hot dogs made with pheasant lined up against old favorites like traditional brats and red hots. Most of the people new to town had a taste of the traditional Chicago dog, the miniature salad on top balancing precariously, while others tried some of the more esoteric or fancier things. It was an enjoyable divey place -- small but worth it.
I had mentioned to vanderwal that the weekend also happened to hold Chicago's largest street fair: Market Days. Taking place on N Halstead, between Belmont and Addison, Market Days is one of the big events of the end of summer in Chicago. It also happens to be one of the Gayest events in the city outside of the Pride Parade. Not that it would usually be a bad thing, save the fact that the oncoming hordes of scantily clad gay men left little room for anything else.
I'd only been to the fair before in off hours, but as soon as we stepped through the gates, it occurred to me that perhaps I'd made a bad suggestion. People were packed back to back in varying states of undress; the uniform du jour seemed to be a pair of speedo briefs and tennis shoes. Squeezing our way through leather daddies and drunken twinks, we ended up rushing through the fair as if it was some sort of rainbow gauntlet of doom. Someone cried shortly after leaving, "I had nowhere to look. There was just man-flesh everywhere..." The shock would have been equal had we just pressed ourselves through any other mostly naked crowd of people. Still, it was fun to point out that this was really, really gay.
[insert porn review escapades here. add splitting of groups -- one back to vanderwal's and one to the Green Mill Cocktail Lounge]
"All I really want and
is to bring out the best and worst of you"
-Karrin Allyson.
We slammed ourselves into the booth a little later, the group reforming at the Golden Apple, just one of those late night diners that you end up at on late nights after many drinks with many friends. We feasted on that wonderful mix of coffee that only a diner can make and breakfast food flipped onto the darkside of the morning hours. We were all smiles, all around, and there was a peace in the chaos of conversation and passing food. Potato pancakes can leave a memory if you let them.
From there, we melted again into a night on the porch, cards flying as fast as the conversation. I ended up crashing on the couch, the long bus ride home too much for me that night....
Things lingered on Sunday... people peeled away one by one, each with goodbyes. We end with one last introduction on the border of Uptown and Lakeview - a Mexican brunch. One last coming together before the winds blew us apart again.
Messages passed across and around the table at the Green Mill:
Everyone is swapping sekrits
Noders always msg, don't they?
Sometimes the /msg is where the real action is…
Indeed. Msgs are good
To love and be loved in return...
These songs make me want to fall in love.
These songs could get someone to fall in love.
I'm in love with the sounds and happiness.
These songs take me to falling in love.
This is much more good than Market Days was bad.
I'm in love with all of this. I'm in love with BIG AL.
Have to admit they are catching up with me.
They ARE much faster than you are. The music helps. And how. No food, no rest helps too.
Just love and be loved in return (food is our next stop)
THANK YOU so very much
Thank you for bringing me back to myself. Spent a lot of years singing. Thanks again.
A good night surrounded by smiling faces.
Good sounds, pleasing drinks, family around us--not just a "Tuesday at Noon" but real life happening in front of us.
Love of people+ Love of jazz+Love of communication=wonderful, enjoyable, ecstatic peace.
*Last Blank Piece*
Spent 9 months trying to answer this question.
Its sad, so sad.
Live in the moment, enjoy the now, life is unpredictable. Tomorrow…
Aye
Brightness always conquers dark, the sun follows the night. When we do the things we ought to do, when we ought to do them, there comes a day when we get to do the things we want to, when we want to do them.
This place, these people, this music. We see every day the harder parts of life… then we come together, do extraordinary things, feel extraordinary feelings, and we remember life is more than meaningful, it is poignant.
Got a headful of friends and music. No room for yesterdays. Who needs tomorrows when you've got jazz? Sitting in the Mill with a glass of liquid bread and I am thankful, oh gods and ladies, yes.
|
http://everything2.com/title/Noders+By+The+Lake%253A+A+Chicago-Style+Nodermeet?showwidget=showCs1904274
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
This manual page is part of the POSIX Programmer’s Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
sys/stat.h — data returned by the stat() function
#include <sys/stat.h>
The <sys/stat.h> header shall define the structure of the data returned by the fstat(), lstat(), and stat() functions.
The <sys/stat.h> header shall define the stat structure, which shall include at least the following members:

dev_t st_dev Device ID of device containing file.
ino_t st_ino File serial number.
mode_t st_mode Mode of file.
nlink_t st_nlink Number of hard links to the file.
uid_t st_uid User ID of file.
gid_t st_gid Group ID of file.
dev_t st_rdev Device ID (if file is character or block special).
off_t st_size For regular files, the file size in bytes.
struct timespec st_atim Last data access timestamp.
struct timespec st_mtim Last data modification timestamp.
struct timespec st_ctim Last file status change timestamp.
blksize_t st_blksize A file system-specific preferred I/O block size for this object.
blkcnt_t st_blocks Number of blocks allocated for this object.

The <sys/stat.h> header shall define the blkcnt_t, blksize_t, dev_t, ino_t, mode_t, nlink_t, uid_t, gid_t, off_t, and time_t types as described in <sys/types.h>.
The <sys/stat.h> header shall define the timespec structure as described in <time.h>. Times shall be given in seconds since the Epoch.
Which structure members have meaningful values depends on the type of file. For further information, see the descriptions of fstat(), lstat(), and stat() in the System Interfaces volume of POSIX.1-2008.
For compatibility with earlier versions of this standard, the st_atime macro shall be defined with the value st_atim.tv_sec. Similarly, st_ctime and st_mtime shall be defined as macros with the values st_ctim.tv_sec and st_mtim.tv_sec, respectively.
The <sys/stat.h> header shall define the following symbolic constants for the file types encoded in type mode_t. The values shall be suitable for use in #if preprocessing directives:
S_IFMT
Type of file.
S_IFBLK
Block special.
S_IFCHR
Character special.
S_IFIFO
FIFO special.
S_IFREG
Regular.
S_IFDIR
Directory.
S_IFLNK
Symbolic link.
S_IFSOCK
Socket.
The <sys/stat.h> header shall define the following symbolic constants for the file mode bits encoded in type mode_t, with the indicated numeric values. These macros shall expand to an expression which has a type that allows them to be used, either singly or OR'ed together, as the third argument to open() without the need for a mode_t cast. The values shall be suitable for use in #if preprocessing directives.

The <sys/stat.h> header shall define the following symbolic constants as distinct integer values outside of the range [0,999999999], for use with the futimens() and utimensat() functions:

UTIME_NOW
UTIME_OMIT
The following shall be declared as functions and may also be defined as macros. Function prototypes shall be provided.

Use of the macros is recommended for determining the type of a file.

Upon assignment, file timestamps are immediately converted to the resolution of the file system by truncation (i.e., the recorded time can be older than the actual time). For example, if the file system resolution is 1 microsecond, then a conforming stat() must always return an st_mtim.tv_nsec that is a multiple of 1000. Some older implementations returned higher-resolution timestamps while the inode information was cached, and then spontaneously truncated the tv_nsec fields when they were stored to and retrieved from disk, but this behavior does not conform.
Some earlier versions of this standard did not specify values for the file mode bit macros. The expectation was that some implementors might choose to use a different encoding for these bits than the traditional one, and that new applications would use symbolic file modes instead of numeric. This version of the standard specifies the traditional encoding, in recognition that nearly 20 years after the first publication of this standard numeric file modes are still in widespread use by application developers, and that all conforming implementations still use the traditional encoding.
No new S_IFMT symbolic names for the file type values of mode_t will be defined by POSIX.1-2008; if new file types are required, they will only be testable through S_ISxx() or S_TYPEISxxx() macros instead.
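As a brief illustration of the recommended macro usage (not part of the standard's text; the program and its output strings are invented), a program might classify a path like this:

#include <sys/stat.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    struct stat sb;

    if (argc < 2 || stat(argv[1], &sb) == -1) {
        perror("stat");
        return 1;
    }

    /* S_ISDIR()/S_ISREG() test the file type encoded in st_mode. */
    if (S_ISDIR(sb.st_mode))
        printf("%s is a directory\n", argv[1]);
    else if (S_ISREG(sb.st_mode))
        printf("%s is a regular file of %lld bytes\n",
               argv[1], (long long) sb.st_size);
    else
        printf("%s is some other file type\n", argv[1]);

    return 0;
}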
<sys/statvfs.h>, <sys/types.h>, <time.h>
The System Interfaces volume of POSIX.1-2008, chmod(), fchmod(), fstat(), fstatat(), futimens(), mkdir(), mkfifo(), mknod(), umask().
|
http://man.sourcentral.org/MGA6/0p+sys_stat.h
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
import "github.com/julienschmidt/httprouter"
Package httprouter is a trie based high performance HTTP request router.
A trivial example is:
package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/julienschmidt/httprouter"
)

func Index(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
    fmt.Fprint(w, "Welcome!\n")
}

func Hello(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
    fmt.Fprintf(w, "hello, %s!\n", ps.ByName("name"))
}

func main() {
    router := httprouter.New()
    router.GET("/", Index)
    router.GET("/hello/:name", Hello)

    log.Fatal(http.ListenAndServe(":8080", router))
}
The router matches incoming requests by the request method and the path. If a handle is registered for this path and method, the router delegates the request to that function. For the methods GET, POST, PUT, PATCH, DELETE and OPTIONS shortcut functions exist to register handles, for all other methods router.Handle can be used.
The registered path, against which the router matches incoming requests, can contain two types of parameters:
Syntax   Type
:name    named parameter
*name    catch-all parameter
Named parameters are dynamic path segments. They match anything until the next '/' or the path end:
Path: /blog/:category/:post

Requests:
 /blog/go/request-routers          match: category="go", post="request-routers"
 /blog/go/request-routers/         no match, but the router would redirect
 /blog/go/                         no match
 /blog/go/request-routers/comments no match
Catch-all parameters match anything until the path end, including the directory index (the '/' before the catch-all). Since they match anything until the end, catch-all parameters must always be the final path element.
Path: /files/*filepath

Requests:
 /files/                       match: filepath="/"
 /files/LICENSE                match: filepath="/LICENSE"
 /files/templates/article.html match: filepath="/templates/article.html"
 /files                        no match, but the router would redirect
The value of parameters is saved as a slice of the Param struct, each consisting of a key and a value. The slice is passed to the Handle func as a third parameter. There are two ways to retrieve the value of a parameter:
// by the name of the parameter
user := ps.ByName("user") // defined by :user or *user

// by the index of the parameter. This way you can also get the name (key)
thirdKey := ps[2].Key     // the name of the 3rd parameter
thirdValue := ps[2].Value // the value of the 3rd parameter
path.go router.go tree.go
MatchedRoutePathParam is the Param name under which the path of the matched route is stored, if Router.SaveMatchedRoutePath is set.
ParamsKey is the request context key under which URL params are stored.
CleanPath is the URL version of path.Clean; it returns a canonical URL path for p, eliminating . and .. elements.

The following rules are applied iteratively until no further processing can be done:

1. Replace multiple slashes with a single slash.
2. Eliminate each . path name element (the current directory).
3. Eliminate each inner .. path name element (the parent directory) along with the non-.. element that precedes it.
4. Eliminate .. elements that begin a rooted path: that is, replace "/.." by "/" at the beginning of a path.
If the result of this process is an empty string, "/" is returned
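A quick sketch of what those rules produce; the expected outputs in the comments follow from the rules above:

package main

import (
    "fmt"

    "github.com/julienschmidt/httprouter"
)

func main() {
    fmt.Println(httprouter.CleanPath("/foo//bar"))  // "/foo/bar"
    fmt.Println(httprouter.CleanPath("/foo/./bar")) // "/foo/bar"
    fmt.Println(httprouter.CleanPath(""))           // "/"
}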
Handle is a function that can be registered to a route to handle HTTP requests. Like http.HandlerFunc, but has a third parameter for the values of wildcards (path variables).
Param is a single URL parameter, consisting of a key and a value.
Params is a Param-slice, as returned by the router. The slice is ordered, the first URL parameter is also the first slice value. It is therefore safe to read values by the index.
ParamsFromContext pulls the URL parameters from a request context, or returns nil if none are present.
ByName returns the value of the first Param whose key matches the given name. If no matching Param is found, an empty string is returned.
MatchedRoutePath retrieves the path of the matched route. Router.SaveMatchedRoutePath must have been enabled when the respective handler was added, otherwise this function always returns an empty string.
type Router struct { // If enabled, adds the matched route path onto the http.Request context // before invoking the handler. // The matched route path is only added to handlers of routes that were // registered when this option was enabled. SaveMatchedRoutePath bool // Enables automatic redirection if the current route can't be matched but a // handler for the path with (without) the trailing slash exists. // For example if /foo/ is requested but a route only exists for /foo, the // client is redirected to /foo with http status code 301 for GET requests // and 308 for all other request methods. RedirectTrailingSlash bool // If enabled, the router tries to fix the current request path, if no // handle is registered for it. Superfluous path elements like ../ or // // are removed and a case-insensitive lookup of the cleaned path is made. // If a handle can be found for this route, the router redirects to the // corrected path with status code 301 for GET requests and 308 for all // other request methods. RedirectFixedPath bool // If enabled, the router checks if another method is allowed for the // current route, if the current request can not be routed. // If this is the case, the request is answered with 'Method Not Allowed' // and HTTP status code 405. If no other Method is allowed, the request // is delegated to the NotFound handler. HandleMethodNotAllowed bool // If enabled, the router automatically replies to OPTIONS requests. // Custom OPTIONS handlers take priority over automatic replies. HandleOPTIONS bool // An optional http.Handler that is called on automatic OPTIONS requests. // The handler is only called if HandleOPTIONS is true and no OPTIONS // handler for the specific path was set. // The "Allowed" header is set before calling the handler. GlobalOPTIONS http.Handler // Configurable http.Handler which is called when no matching route is // found. If it is not set, http.NotFound is used. NotFound http.Handler // Configurable http.Handler which is called when a request // cannot be routed and HandleMethodNotAllowed is true. // If it is not set, http.Error with http.StatusMethodNotAllowed is used. // The "Allow" header with allowed request methods is set before the handler // is called. MethodNotAllowed http.Handler // Function to handle panics recovered from http handlers. // It should be used to generate an error page and return the http error // code 500 (Internal Server Error). // The handler can be used to keep your server from crashing because of // unrecovered panics. PanicHandler func(http.ResponseWriter, *http.Request, interface{}) // contains filtered or unexported fields }
Router is a http.Handler which can be used to dispatch requests to different handler functions via configurable routes
New returns a new initialized Router. Path auto-correction, including trailing slashes, is enabled by default.
DELETE is a shortcut for router.Handle(http.MethodDelete, path, handle)
GET is a shortcut for router.Handle(http.MethodGet, path, handle)
HEAD is a shortcut for router.Handle(http.MethodHead, path, handle)
Handle registers a new request handle with the given path and method.
For GET, POST, PUT, PATCH and DELETE requests the respective shortcut functions can be used.
This function is intended for bulk loading and to allow the usage of less frequently used, non-standardized or custom methods (e.g. for internal communication with a proxy).
Handler is an adapter which allows the usage of an http.Handler as a request handle. The Params are available in the request context under ParamsKey.
func (r *Router) HandlerFunc(method, path string, handler http.HandlerFunc)
HandlerFunc is an adapter which allows the usage of an http.HandlerFunc as a request handle.
Lookup allows the manual lookup of a method + path combo. This is e.g. useful to build a framework around this router. If the path was found, it returns the handle function and the path parameter values. Otherwise the third return value indicates whether a redirection to the same path with an extra / without the trailing slash should be performed.
OPTIONS is a shortcut for router.Handle(http.MethodOptions, path, handle)
PATCH is a shortcut for router.Handle(http.MethodPatch, path, handle)
POST is a shortcut for router.Handle(http.MethodPost, path, handle)
PUT is a shortcut for router.Handle(http.MethodPut, path, handle)
func (r *Router) ServeFiles(path string, root http.FileSystem)
ServeFiles serves files from the given file system root. The path must end with "/*filepath", files are then served from the local path /defined/root/dir/*filepath. For example if root is "/etc" and *filepath is "passwd", the local file "/etc/passwd" would be served. Internally a http.FileServer is used, therefore http.NotFound is used instead of the Router's NotFound handler. To use the operating system's file system implementation, use http.Dir:
router.ServeFiles("/src/*filepath", http.Dir("/var/www"))
ServeHTTP makes the router implement the http.Handler interface.
Package httprouter imports 6 packages and is imported by 4023 packages. Updated 2020-08-31.
|
https://godoc.org/github.com/julienschmidt/httprouter
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Code Coverage as a Refactoring Tool
I am a big fan of using code coverage as a developer tool to promote more reliable, better tested code. The merits and limits of code coverage are widely discussed and fairly well known. Essentially, the big strength of code coverage is revealing code that has not been exercised by your unit tests. It is up to you, as a software development professional, to establish why this code has not been exercised by your unit tests, and whether it is significant. It is also up to you to ensure that your tests are of sufficient quality to ensure that the code that is exercised during the tests is effectively well tested.
In this article I don't want to discuss those aspects of code coverage. Instead, I want to look at how code coverage can be a useful tool in a TDD practitioner's toolbox, in particular during the refactoring phase. Combined with TDD, code coverage becomes a powerful refactoring aid.
But code coverage should be less relevant when you use Test Driven Development, should it not? TDD automatically results in 100% coverage, right? When you use a disciplined TDD approach, there should be a failing test justifying every line of code written. Conversely, every line of code you write goes towards making a test fail. It should therefore be impossible to have less than 100% code coverage if you are doing proper TDD. Lower than this just means you aren't doing TDD properly.
This is actually at best an over-simplification, and at worst, just wrong. Leaving aside minor issues in the code coverage tools, and language-related cases that arguably don't deserve coverage (e.g. a third-party API requires you to catch an exception that can never occur in your implementation; a private constructor in a static class designed precisely never to be called; and so on), code coverage holes can arise during the refactoring phase of Test-Driven Development, and code coverage metrics can be a useful aid to this refactoring process.
The refactoring phase is a critical part of the TDD process. Refactoring involves improving (often by simplifying) your code, to make it clearer, more readable, easier to maintain, and so on. Although refactoring should never alter functionality (and therefore application behaviour from an external viewpoint), it can and often does involve some significant structural changes to your code. In these cases, code coverage can be a good indicator of areas that need tidying up.
Let's look at a few examples.
Untested classes
In the screenshot shown here, the class has 0% coverage. If you are developing using TDD, all of your classes will generally be tested at some stage, and most (if not all) will be tested directly by unit tests. A class with 0% test coverage in this context is indeed strange.
There are several possible explanations. Your class may effectively be tested by an integration or functional test. Alternatively, sometimes a class in one Maven module is effectively unit tested via a class in another module: in this case the coverage may not get picked up by your test coverage tool (Cobertura, for example, would not detect coverage in this case). In both cases, this is fine if it works for you, or if you can't do otherwise, but maybe your tests should be closer to the classes they are testing?
However, quite often, 0% coverage on a class indicates a class that is no longer used anywhere. That is the case here, so the class can safely be deleted.
Untested methods
Sometimes entire methods within a class may be passed over by the test coverage metrics. In other words, these methods were never invoked during the execution of the unit tests. If you are developing using TDD, a totally untested method will be rare, and should be considered a warning sign. If it is a public method, why does it never appear in the executable specifications embodied by your unit tests?
For public methods, of course, the method might be invoked elsewhere, from another module, and tested during the integration or functional tests. On the other hand, it may be a method that is no longer needed after some refactoring. In this case, it should of course be deleted.
Untested lines or conditions
Skipped lines or conditions within a method can also sometimes raise red flags, especially if the test coverage was previously higher. Incompletely tested guard conditions are a particularly common form of this. In the following screenshot, Cobertura is showing that the null check is never being exercised completely - in other words, the description parameter is never null. There are lots of reasons why this check may have been placed there (sloppy debugging is a common one), but in any case, if we are practicing TDD with any rigour, this condition is no longer necessary.
In fact, as illustrated by the tweet from "Uncle" Bob Martin below, a guard condition, such as checking for null, particularly in a non-public method, is a flag that says "I don't know how this method is being used".
For example, consider the following code:
private void doStuffTo(Client client) {
    if (client != null) {
        // do stuff
    }
}
So why are we testing for null? Should this actually be written like this?
private void doStuffTo(Client client) {
    if (client != null) {
        // do stuff
    } else {
        throw new WTFException();
    }
}
If there really is a good case for a null value, other cleaner guard options might include using asserts or preconditions:
import static com.google.common.base.Preconditions.checkNotNull;

private void doStuffTo(Client client) {
    checkNotNull(client);
    // do stuff
}
But of course it is even better to understand your code - why would a null be passed to this method in the first place? Isn't this a sign of a bug in the calling code? Where possible, I would prefer something like this myself:
private void doStuffTo(Client client) {
    // just do stuff
}
Indeed, if your tests cover all of the use cases for the public methods of a class, and the null pointer condition is still never being fully exercised, then maybe you don't need it after all. So ditch it and make your code simpler and easier to read!
And sometimes, just sometimes, coverage holes reveal a slip in your TDD practice - important business logic that is untested. You might be tempted to let it lie, but remember - in TDD and BDD, tests do a lot more than just test your code. They document the specifications and the design you are implementing, and go a long way to helping the next guy understand why you did things a certain way, and what business constraint you thought you were addressing. And maybe, just maybe, the untested code might contain a subtle bug that your tests will reveal. And although your cowboy developers will grumble in protest at all this extra thought and reflection, when they could be just hacking code, this is the sort of unacknowledged extra value where processes like TDD and BDD really shine.
John is a well-known international consultant, trainer and speaker in open source and agile Java development and testing practices. He specializes in helping development teams improve their game with techniques such as Test-Driven Development (including BDD and ATDD), Automated Acceptance Tests, Continuous Integration, Build Automation, and Clean Coding practices. In the coming months, he will be running online workshops on Test-Driven Development and Automated Web Testing for European audiences on May 31-June 3, running a full 3-day TDD/BDD/ATDD workshop in Sydney (June 20-22) and Wellington (date to be announced), and talking at the No Fluff Just Stuff ÜberConf in Denver in July.
Opinions expressed by DZone contributors are their own.
|
https://dzone.com/articles/code-coverage-refactoring-tool?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%253A+javalobby%252Ffrontpage+%2528Javalobby+%252F+Java+Zone%2529
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
import "github.com/documize/community/core/uniqueid/xid"
Package xid is a globally unique id generator suited for web scale
Xid is using Mongo Object ID algorithm to generate globally unique ids:
- 4-byte value representing the seconds since the Unix epoch,
- 3-byte machine identifier,
- 2-byte process id, and
- 3-byte counter, starting with a random value.
Xid doesn't use base64 because case sensitivity and the 2 non-alphanumeric chars may be an issue when transported as a string between various systems. Base36 wasn't retained either because 1/ it's not standard, 2/ the resulting size is not predictable (not bit aligned), and 3/ it would not remain sortable. To validate a base32 `xid`, expect a 20-char, all-lowercase sequence of `a` to `v` letters and `0` to `9` numbers (`[0-9a-v]{20}`).
UUID is 16 bytes (128 bits), snowflake is 8 bytes (64 bits), xid stands in between with 12 bytes with a more compact string representation ready for the web and no required configuration or central generation server.
Features:
- Size: 12 bytes (96 bits), smaller than UUID, larger than snowflake
- Base32 hex encoded by default (20 chars when transported as a printable string)
- Non configured, you don't need to set a unique machine and/or data center id
- K-ordered
- Embedded time with 1 second precision
- Unicity guaranteed for 16,777,216 (24 bits) unique ids per second and per host/process
Best used with xlog's RequestIDHandler.
var (
    // ErrInvalidID is returned when trying to unmarshal an invalid ID
    ErrInvalidID = errors.New("xid: invalid ID")
)
Sort sorts an array of IDs in place. It works by wrapping `[]ID` and using `sort.Sort`.
ID represents a unique request id
FromBytes convert the byte array representation of `ID` back to `ID`
FromString reads an ID from its string representation
New generates a globally unique ID
NewWithTime generates a globally unique ID with the passed in time
NilID returns a zero value for `xid.ID`.
Bytes returns the byte array representation of `ID`
Compare returns an integer comparing two IDs. It behaves just like `bytes.Compare`. The result will be 0 if two IDs are identical, -1 if current id is less than the other one, and 1 if current id is greater than the other.
Counter returns the incrementing value part of the id. It's a runtime error to call this method with an invalid id.
IsNil returns true if this is a "nil" ID
Machine returns the 3-byte machine id part of the id. It's a runtime error to call this method with an invalid id.
MarshalJSON implements encoding/json Marshaler interface
MarshalText implements encoding/text TextMarshaler interface
Pid returns the process id part of the id. It's a runtime error to call this method with an invalid id.
Scan implements the sql.Scanner interface.
String returns a base32 hex lowercased with no padding representation of the id (char set is 0-9, a-v).
Time returns the timestamp part of the id. It's a runtime error to call this method with an invalid id.
UnmarshalJSON implements encoding/json Unmarshaler interface
UnmarshalText implements encoding/text TextUnmarshaler interface
Value implements the driver.Valuer interface.
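Pulling a few of these together, a minimal usage sketch:

package main

import (
	"fmt"

	"github.com/documize/community/core/uniqueid/xid"
)

func main() {
	id := xid.New() // globally unique, k-ordered
	fmt.Println(id.String())

	// round-trip through the string representation
	parsed, err := xid.FromString(id.String())
	if err != nil {
		panic(err)
	}
	fmt.Println(parsed.Time(), parsed.Counter())
}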
Package xid imports 13 packages and is imported by 1 package. Updated 2018-10-21.
|
https://godoc.org/github.com/documize/community/core/uniqueid/xid
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
The following program does the job of converting long to String.
public class LongToString {
    public static void main(String args[]) {
        long l1 = 10;
        String str = String.valueOf(l1);
        System.out.println("long l1 in string form is " + str);
    }
}
Output screenshot of long to String Conversion
The long l1 is passed as a parameter to the valueOf() method, which converts it to string form. The same conversion can be applied wherever a long value occurs in Java.
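String.valueOf() is not the only option; the standard library offers equivalent alternatives:

public class LongToStringAlternatives {
    public static void main(String[] args) {
        long l1 = 10;
        String s1 = Long.toString(l1);  // static method on the wrapper class
        String s2 = String.valueOf(l1); // as in the program above
        String s3 = "" + l1;            // concatenation: concise but less explicit
        System.out.println(s1 + " " + s2 + " " + s3);
    }
}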
|
https://way2java.com/string-and-string-buffer/long-to-string-conversion-example-java/
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Instant Form Validation Using JavaScript
HTML

(Example form markup: inputs with required/aria-required and pattern attributes, including a URL-validation regular expression.)

This example is a simple comments form, in which some fields are required, some are validated, and some are both. The fields which have required also have aria-required, to provide fallback semantics for assistive technologies that don't understand the new input types.
The ARIA specification also defines an aria-invalid attribute, and that's what we're going to use to indicate when a field is invalid (for which there is no equivalent attribute in HTML5). The aria-invalid attribute obviously provides accessible information, but it can also be used as a CSS hook to apply the red outline:
input[aria-invalid="true"],
textarea[aria-invalid="true"] {
    border: 1px solid #f00;
    box-shadow: 0 0 4px 0 #f00;
}
We could just use box-shadow and not bother with the border, and frankly that would look nicer, but then we'd have no indication in browsers that don't support box-shadows, such as IE8.
Adding the JavaScript
Now that we have the static code, we can add the scripting. The first thing we'll need is a basic addEvent() function:
function addEvent(node, type, callback) {
    if (node.addEventListener) {
        node.addEventListener(type, function(e) {
            callback(e, e.target);
        }, false);
    } else if (node.attachEvent) {
        node.attachEvent('on' + type, function(e) {
            callback(e, e.srcElement);
        });
    }
}
Next, we’ll need a function for determining whether a given field should be validated, which simply tests that it’s neither disabled nor readonly, and that it has either a
pattern or a
required attribute:
function shouldBeValidated(field) {
    return (
        !(field.getAttribute("readonly") || field.readonly) &&
        !(field.getAttribute("disabled") || field.disabled) &&
        (field.getAttribute("pattern") || field.getAttribute("required"))
    );
}
The first two conditions may seem verbose, but they are necessary, because an element's disabled and readonly properties don't necessarily reflect its attribute states. In Opera, for example, a field with the hard-coded attribute readonly="readonly" will still return undefined for its readonly property (the dot property only matches states which are set through scripting).
Once we’ve got those utilities we can define the main validation function, which tests the field and then performs the actual validation, if applicable:
function instantValidation(field) {
    if (shouldBeValidated(field)) {
        var invalid =
            (field.getAttribute("required") && !field.value) ||
            (field.getAttribute("pattern") && field.value &&
                !new RegExp(field.getAttribute("pattern")).test(field.value));

        if (!invalid && field.getAttribute("aria-invalid")) {
            field.removeAttribute("aria-invalid");
        } else if (invalid && !field.getAttribute("aria-invalid")) {
            field.setAttribute("aria-invalid", "true");
        }
    }
}
So a field is invalid if it’s required but doesn’t have a value, or it has a pattern and a value but the value doesn’t match the pattern.
Since the pattern already defines the string form of a regular expression, all we have to do is pass that string to the RegExp constructor and that will create a regex object we can test against the value. But we do have to pre-test the value to make sure it isn't empty, so that the regular expression itself doesn't have to account for empty strings.
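For example (the pattern value here is invented for illustration):

var pattern = "[0-9]+";          // what field.getAttribute("pattern") might return
new RegExp(pattern).test("123"); // true
new RegExp(pattern).test("");    // false, hence the pre-test for empty values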
Once we’ve established whether a field is invalid, we can then control its
aria-invalid attribute to indicate that state – adding it to an invalid field that doesn’t already have it, or removing it from a valid field that does. Simple! Finally, to put this all into action, we need to bind the validation function to an
onchange event. It should be as simple as this:
addEvent(document, "change", function(e, target) {
    instantValidation(target);
});
However, for that to work, the onchange events must bubble (using a technique that's usually known as event delegation), but in Internet Explorer 8 and earlier, onchange events don't bubble. We could just choose to ignore those browsers, but I think that would be a shame, especially when the problem is so simple to work around. It just means slightly more convoluted code - we have to get the collections of input and textarea elements, iterate through them, and bind the onchange event to each field individually:
var fields = [
    document.getElementsByTagName("input"),
    document.getElementsByTagName("textarea")
];
for (var a = fields.length, i = 0; i < a; i++) {
    for (var b = fields[i].length, j = 0; j < b; j++) {
        addEvent(fields[i][j], "change", function(e, target) {
            instantValidation(target);
        });
    }
}
Conclusion and Beyond
So there we have it – a simple and non-intrusive enhancement for instant form validation, providing accessible and visual cues to help users complete forms. You can check out a demo below:
See the Pen Instant Form Validation by SitePoint (@SitePoint) on CodePen.
Once that scripting is implemented, we’re actually only a couple of skips and hops away from a complete polyfill. Such a script is beyond the scope of this article, but if you wanted to develop it further, all of the basic blocks are there – testing whether a field should be validated, validating a field against a pattern and/or required, and binding trigger events.
I have to confess, I'm not sure it's really worth it! If you already have this enhancement (which works in all modern browsers back to IE7), and given that you have no choice but to implement server-side validation as well, and given that browsers which support pattern and required already use them for pre-submission validation - given all that, is there really any point adding another polyfill?
|
https://www.sitepoint.com/instant-validation/
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Playing with OpenCV
• Mark Eschbach
I am investigating OpenCV for my aging server as an alternative to Tensorflow for facial recognition and hopefully GPU accelerated image down sampling. Tensorflow is a fine library however my server doesn’t have the AVX or AVX2 instruction sets and the GTX 570 only supports CUDA Compute Capability 2.0, both of which are required for Tensorflow. My approach is to first look at scaling the images, then see how to move it onto the GPU, then finally start looking at facial recognition.
First step is getting it installed. Although the most recent version is the 4 series, it appears as though most of the material out there is still for the 3 series. Unfortunately, there is no OSX release on the release page. Consulting the general internet, people have installed it via Homebrew, which still scares me after watching machines get bricked by it. So to the source!
The package is built using CMake. There was a rather old version of CMake on my laptop. Easy to update. Language bindings to Python were intentionally disabled as well as compiling examples.
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=$HOME/tools/opencv-4.0.1 -D INSTALL_PYTHON_EXAMPLES=OFF -D INSTALL_C_EXAMPLES=OFF -D OPENCV_ENABLE_NONFREE=ON -D BUILD_EXAMPLES=OFF ..
While that is compiling I was wondering what Ubuntu 18.10 had available for OpenCV. Looks like the most recent version available through the package management system is 3.2. Looks like the library dates back to 2016, so it is fairly old. I will probably elect to compile from source to ensure I have reasonable parity between my laptop and server. Of course the server won the compilation race by a long shot, being able to compile up to 12 units concurrently.
Building Against the installed OpenCV library
I elected to use CLion since I have a license and I was hoping it would reduce the time to implement with its project templates. The project template produces a CMake-compatible environment with C++17. Out of the box I had the following file:
cmake_minimum_required(VERSION 3.12)
project(opencv_play)

set(CMAKE_CXX_STANDARD 17)

add_executable(opencv_play main.cpp)
The main.cpp file contained a Hello World example in C++. Not bad. The target can be configured with cmake . -B build, which will produce the relative directory build. A make within the build directory will produce the executable artifact opencv_play.
Next step was to get the OpenCV project properly linked in. The failing test case should look something like the following example:
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;

int main(int argc, char** argv) {
    cout << "Hello World" << endl;
    return 0;
}
This produces an error like the following on OSX.
opencv-play/main.cpp:6:10: fatal error: 'opencv2/opencv.hpp' file not found
#include <opencv2/opencv.hpp>
         ^~~~~~~~~~~~~~~~~~~~
Since I am not familiar with the CMake system I had to do a bit of web searching. It's Time To Do CMake Right was a great article pointing towards a way to properly implement a CMake dependency. I added the following stanzas to the CMakeLists.txt file.
cmake_minimum_required(VERSION 3.12)
project(opencv_play)

set(CMAKE_CXX_STANDARD 17)

find_package(OpenCV REQUIRED
    HINTS "/home/user/tools/opencv-4.0.1/lib/cmake/opencv4")

add_executable(opencv_play main.cpp)
target_link_libraries(opencv_play ${OpenCV_LIBS})
At this time although I am sure there is a better way to promote the discovery of the library I hard coded the path since I am exploring the library. This allows for correct linking against the OpenCV libraries.
Image Scaling on the CPU
The following code sample will produce a CPU down-sampled image. This uses the LANCZOS4 algorithm since it appears to be the best available implementation for the output image. The output image will be forced into a 256-pixel square, distorting the image to fit. The waitKey(0) function will block until the window produced by imshow(string, Mat) receives the Escape character.
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int cpuImageResize(const string fileName) {
    auto image = imread(fileName, IMREAD_COLOR);
    if (!image.data) {
        cerr << "Unable to load image " << fileName << endl;
        return -1;
    }

    Mat result;
    Size size(256, 256);
    // interpolation is the sixth parameter; fx and fy are left at 0
    resize(image, result, size, 0, 0, INTER_LANCZOS4);

    namedWindow("Display Image", WINDOW_AUTOSIZE);
    imshow("Display Image", result);
    waitKey(0);
    return 0;
}

int main(int argc, char** argv) {
    auto fileName = "test.jpg";
    return cpuImageResize(fileName);
}
To get the test image copied to the build directory, the following stanza needs to be added to the CMakeLists.txt file:
file(COPY test.jpg DESTINATION ${CMAKE_BINARY_DIR})
Image Scaling on the GPU?
Many of the examples available are for the OpenCV version 3 branch. Part of the major version change was restructuring the underlying architecture of the platform to split the description of a processing pipeline from its application. This feels similar to the limited amount of experience I have with the Tensorflow API. As a result, the tutorials and community posts were not any help in figuring out how to build against the API, resulting in linking errors.
From what I had read, the changes were to prevent arbitrary writes back to the CPU and reduce the cost of implementing backend to perform the computations. As a result the application client code is portable between underlying computational platforms as long as you do not create additional operations for a specific backend.
A majority of the functions are under cv::gapi in opencv2/gapi.hpp. To get high-level operations such as resize, the header opencv2/gapi/core.hpp needs to be included. The resize operation takes a Size object during the pipeline description, or optionally a scale parameter. Since sizes are described during pipeline creation, the pipeline must be tailored to each aspect ratio. Here is the minimal example:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>

using namespace std;
using namespace cv;
using namespace cv::gapi;

int gpuImageResize(const string fileName) {
    auto image = imread(fileName, IMREAD_COLOR);
    if (!image.data) {
        cerr << "Unable to load image " << fileName << endl;
        return -1;
    }

    // describe the pipeline: a resize from an abstract input GMat
    GMat in;
    Size size(256, 256);
    auto dest = resize(in, size, 0, 0, INTER_LANCZOS4);
    GComputation computation(GIn(in), GOut(dest));

    // apply the pipeline to concrete data
    Mat result;
    computation.apply(gin(image), gout(result));

    namedWindow("Display Image", WINDOW_AUTOSIZE);
    imshow("Display Image", result);
    waitKey(0);
    return 0;
}

int main(int argc, char** argv) {
    auto fileName = "test.jpg";
    return gpuImageResize(fileName);
}
Computationally Accelerated Platforms
Despite a performance benefit of 20% when reusing the pipeline with the gapi implementation, I fear this may still be executing on the CPU: approximately 1250 images a second with the non-pipelined implementation versus 1500 images a second with the pipeline. I was unable to verify which backend was performing the processing at the time.
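To reproduce rough throughput numbers like these, OpenCV's tick counter is sufficient. A sketch reusing the computation, image, and result variables from the listing above (the iteration count is arbitrary):

int64 start = cv::getTickCount();
for (int i = 0; i < 1000; ++i) {
    computation.apply(gin(image), gout(result)); // reuse the compiled pipeline
}
double seconds = (cv::getTickCount() - start) / cv::getTickFrequency();
std::cout << (1000.0 / seconds) << " images/sec" << std::endl;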
A future project will be building a diagnostic tool to verify the expected backends are being used, such as OpenCL or CUDA.
|
https://meschbach.com/stream-of-consciousness/programming/2019/03/12-opencv/
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
TSUrlCreate¶
Traffic Server URL object construction API.
Synopsis¶
#include <ts/ts.h>
- TSReturnCode
TSUrlCreate(TSMBuffer bufp, TSMLoc * locp)¶
- TSParseResult
TSUrlParse(TSMBuffer bufp, TSMLoc offset, const char ** start, const char * end)¶

Description¶

TSUrlCreate() creates a new URL within the marshal buffer bufp. Release the resulting handle with a call to TSHandleMLocRelease().
TSUrlClone() copies the contents of the URL at location src_url within the marshal buffer src_bufp to a location within the marshal buffer dest_bufp. Release the returned handle with a call to TSHandleMLocRelease().
TSUrlCopy() copies the contents of the URL at location src_url within the marshal buffer src_bufp to the location dest_url within the marshal buffer dest_bufp. TSUrlCopy() works correctly even if src_bufp and dest_bufp point to different marshal buffers. It is important for the destination URL (its marshal buffer and TSMLoc) to have been created before copying into it.
TSUrlParse() parses a URL. The start pointer is both an input and an output parameter and marks the start of the URL to be parsed. After a successful parse, the start pointer equals the end pointer. The end pointer must be one byte after the last character you want to parse. The URL parsing routine assumes that everything between start and end is part of the URL. It is up to higher level parsing routines, such as TSHttpHdrParseReq(), to determine the actual end of the URL.
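A sketch of how these calls combine in plugin code (the function and its arguments are invented for illustration; the marshal buffer is assumed to come from an existing transaction):

#include <ts/ts.h>

static void
parse_target(TSMBuffer bufp, const char *spec, int len)
{
    TSMLoc url_loc;

    if (TSUrlCreate(bufp, &url_loc) == TS_SUCCESS) {
        const char *start = spec;
        const char *end = spec + len; /* one byte past the last character */

        if (TSUrlParse(bufp, url_loc, &start, end) != TS_PARSE_ERROR) {
            /* use the parsed URL here */
        }
        TSHandleMLocRelease(bufp, TS_NULL_MLOC, url_loc);
    }
}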
Return Values¶
The TSUrlParse() function returns a TSParseResult, where TS_PARSE_ERROR indicates an error. Success is indicated by one of TS_PARSE_DONE or TS_PARSE_CONT. The other APIs all return a TSReturnCode, indicating success (TS_SUCCESS) or failure (TS_ERROR) of the operation.
|
https://docs.trafficserver.apache.org/en/latest/developer-guide/api/functions/TSUrlCreate.en.html
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
My favorite analogy for explaining variables is the "bucket" analogy. Think of a variable as a bucket. Into that bucket, you can place data. Some buckets don't care what kind of data you place in them, and other buckets have specific requirements on the type of data you can place in them. You can move data from one bucket to another. Unfortunately, the bucket analogy gets a little confusing when you take into account that one bucket can contain a little note inside that reads "see Bucket B for actual data" (you'll read about reference types shortly in the section "Value Types vs. Reference Types").
To declare a variable in C#, you can use the following syntax:
type variable_name;
You can initialize a variable on the same line:
type variable_name = initialization_expression;
where type is a .NET type. The next section lists some of the core .NET types.
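For instance, both forms in action:

int count;                  // declaration only
count = 10;                 // assigned later
string name = "Widget";     // declared and initialized on one line
System.Int32 total = count; // the fully qualified .NET type name also works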
Table 1.1 shows some of the basic .NET data types. As you will see in later chapters, this is just the beginning. When you start using classes, the variety of types available to you will be virtually unlimited.

| Data Type | Description |
| --- | --- |
| System.Boolean | Provides a way to store true/false data. |
| System.Byte | Represents a single byte of data. |
| System.Char | A single character. Unlike other languages, this character is a 2-byte Unicode character. |
| System.Decimal | A decimal value with 28 to 29 significant digits in the range ±1.0 x 10^-28 to ±7.9 x 10^28. |
| System.Double | A double-precision value that represents a 64-bit floating-point value with 15 to 16 significant digits in the range ±5.0 x 10^-324 to ±1.7 x 10^308. |
| System.Single | A single-precision value that represents a 32-bit floating-point number in the range ±1.5 x 10^-45 to ±3.4 x 10^38. |
| System.Int32 | Represents a 32-bit signed integer in the range -2,147,483,648 to 2,147,483,647. |
| System.Int64 | Represents a 64-bit signed integer in the range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. |
| System.SByte | A signed 8-bit integer. |
| System.Int16 | A signed 16-bit integer. |
| System.UInt32 | An unsigned 32-bit integer. |
| System.UInt64 | An unsigned 64-bit integer. |
| System.UInt16 | An unsigned 16-bit integer. |
| System.String | An arbitrary-length character string that can contain Unicode strings. |
If you aren't familiar with .NET or C#, you may be wondering what the "System" is in the data types listed in Table 1.1. .NET organizes all types into namespaces. A namespace is a logical container that provides name distinction for data types. These core data types all exist in the "System" namespace. You'll see more namespaces throughout the book as you learn about more specific aspects of .NET.
C# provides you with some shortcuts to make declaring some of the core data types easier. These shortcuts are simple one-word lowercase aliases that, when compiled, will still represent a core .NET type. Table 1.2 lists some data type shortcuts and their corresponding .NET types.

| Shortcut | .NET Type |
| --- | --- |
| bool | System.Boolean |
| byte | System.Byte |
| char | System.Char |
| decimal | System.Decimal |
| double | System.Double |
| float | System.Single |
| int | System.Int32 |
| long | System.Int64 |
| sbyte | System.SByte |
| short | System.Int16 |
| uint | System.UInt32 |
| ulong | System.UInt64 |
| ushort | System.UInt16 |
Up to this point, this chapter has just been illustrating data types in one category. Earlier in the chapter, I mentioned a "bucket" analogy where data in one bucket could actually refer to data contained in some other bucket. This is actually the core point to illustrate the difference between value types and reference types.
A value type is a type whose data is contained with the variable on the stack. Value types are generally fast and lightweight because they reside on the stack (you will read about the exceptions in Chapter 16, "Optimizing your .NET 2.0 Code").
A reference type is a type whose data does not reside on the stack, but instead resides on the heap. When the data contained in a reference type is accessed, the contents of the variable are examined on the stack. That data then references (or points to, for those of you with traditional C and C++ experience) the actual data contained in the heap. Reference types are generally larger and slower than value types. Learning when to use a reference type and when to use a value type is something that comes with practice and experience.
Your code often needs to pass very large objects as parameters to methods. If these large parameters were passed on the stack as value types, the performance of the application would degrade horribly. Using reference types allows your code to pass a "reference" to the large object rather than the large object itself. Value types allow your code to pass small data in an optimized way directly on the stack.
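To make the distinction concrete, here is a minimal sketch (the type names are invented for the example):

struct PointValue { public int X; } // value type: copied on assignment
class PointRef { public int X; }    // reference type: assignment copies the reference

class Program
{
    static void Main()
    {
        PointValue a;
        a.X = 1;
        PointValue b = a;  // independent copy on the stack
        b.X = 2;           // a.X is still 1

        PointRef c = new PointRef();
        c.X = 1;
        PointRef d = c;    // same object on the heap
        d.X = 2;           // c.X is now 2

        System.Console.WriteLine("{0} {1}", a.X, c.X); // prints "1 2"
    }
}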
|
https://flylib.com/books/en/1.237.1.12/1/
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Event Handling in C#
In a previous article, we have seen a sneak preview about event handling in C#. In this article, we will examine the concept in detail with the help of relevant examples.
Understanding the Basic Technique
Actions and events play an important part in every GUI-based application. They matter just as much as arranging components, whether you lay those out with Visual Studio .NET or in code. It's these actions that instruct the program what to do when something happens.
For example, when a user performs a mouse click or a keyboard operation, some kind of event is taking place. If the user does not perform that operation, nothing will happen. Think of a situation where you are not performing a mouse click or a keyboard operation after booting the computer. Before going forward, let's discuss how different Application Programming Interfaces (APIs) handle events.
Microsoft Foundation Classes (MFC)
These classes are based upon the Microsoft Win32 API. Normally, development work is done through Visual C++; programming using this API is a tedious task. A developer would have to learn a complex set of theories and syntaxes.
Java API
Java provides a nice set of packages and classes, such as the Abstract Windowing Toolkit (AWT) and Swing packages, to perform GUI-based programming. These classes and packages also provide functionality for handling events. In Java, you have to learn the concept of Interfaces for applying actions. The main difficulty is that you must learn and remember all the methods in the corresponding Interfaces, failing which you get compile-time errors. Compared to Java, event handling in C# is much more simplified. It's possible to handle various mouse- and key-related events quickly and in a more efficient manner.
The basic principles behind event handling in C# is elaborated below. These principles are applicable to all languages under the .NET framework.
- Invoke the related event, such as Click, KeyPress, and so forth, by supplying a custom method using the += operator, as shown here:

b1.Click += new EventHandler(YourMethodName);

- The method you supply must conform to the System.EventHandler delegate, whose signature looks like this:

public delegate void EventHandler(object sender, EventArgs e);
In the above signature, the first argument indicates the object sending the event and the second argument contains information for the current event. You can use this argument object, here e, to handle functionality associated with the related event.
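In practice, the method you attach matches that delegate signature; for example (names are illustrative):

void ButtonClicked(object sender, EventArgs e)
{
    // sender is the control that raised the event
    ((Button)sender).Text = "Clicked!";
}

// wiring it up:
b1.Click += new EventHandler(ButtonClicked);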
Triggering a Button
In this session, we will examine how to activate a WinForm button.
As already outlined, you need not worry about the placement of controls if you are using Visual C# .NET. As you place buttons and text boxes, the built-in editor will automatically create code in the background. Double-clicking a control takes you to the Form Editor's area, where you can straightaway type your code. For our examples, we use Notepad as our editor and .NET SDK for compiling and executing the applications.
Use your favorite editor to enter the code shown in Listing 1 and save it as Butevent.cs. Finally, compile and execute the program.
Listing 1
// Compilation : csc Butevent.cs
// Execution   : Butevent
using System;
using System.Windows.Forms;
using System.Drawing;

public class Butevent : Form
{
    TextBox t1 = new TextBox();
    Button b1 = new Button();

    public Butevent()
    {
        this.Text = "C# Program";
        t1.Location = new Point(20, 20);
        b1.Text = "Click me";
        b1.Location = new Point(20, 60);
        // attach the handlers with the += operator
        b1.Click += new EventHandler(ButtonClicked);
        this.Resize += new EventHandler(FormResized);
        this.Controls.Add(t1);
        this.Controls.Add(b1);
    }

    void ButtonClicked(object sender, EventArgs e)
    {
        t1.Text = "Hello C#";
    }

    void FormResized(object sender, EventArgs e)
    {
        MessageBox.Show("You resized the form");
    }

    public static void Main()
    {
        Application.Run(new Butevent());
    }
}
There are two kinds of processes going on in the above piece of code. One is that, upon clicking the button (b1), "Hello C#" is printed inside the TextBox. The other is that when you resize the form, a message box pops up with the message shown in the code. Locate the message yourself by verifying the code.
Working with Mouse and Key Events
We will examine a few mouse- and key-related events in this last session of this article.
If you activate something via the mouse, the mouse event takes place whereas if you are using the keyboard, a key event occurs. There are various types of these events, such as pressing the mouse, releasing the mouse, entering the mouse, pressing the keyboard, and so forth. First, we will cover mouse events.
Handling Mouse Events
The Control class specifies various events using the mouse. One such event is MouseUp. You have to apply this event in your program as shown in Listing 2:
Listing 2
// Compilation : csc Mousedemo.cs
// Execution   : Mousedemo
using System;
using System.Windows.Forms;

public class Mousedemo : Form
{
    public Mousedemo()
    {
        this.MouseUp += new MouseEventHandler(OnMouseUp);
    }

    void OnMouseUp(object sender, MouseEventArgs e)
    {
        // show the current mouse coordinates on the title bar
        this.Text = "X = " + e.X + "  Y = " + e.Y;
    }

    public static void Main()
    {
        Application.Run(new Mousedemo());
    }
}
As usual, compile and execute the above code. The current mouse coordinates will be displayed on the Form's title bar; e.X and e.Y are the X and Y coordinates, respectively. Other popular mouse events are Click, DoubleClick, MouseEnter, and MouseLeave. These events can also be handled in a similar way, as outlined in the above code. Next, we will examine key-related events, which are triggered via the keyboard.
Using Keyboard Events
Every modern programming language contains all necessary functionalities for handling keyboard-related events. C# also provides us with three events. They are KeyPress, KeyUp, and KeyDown, which you can use to handle keyboard events.
Listing 3 shows the usage of the KeyUp event. As usual, enter the code given below using your editor.
Listing 3
// Compilation : csc Keydemo.cs
// Execution   : Keydemo
using System;
using System.Windows.Forms;

public class Keydemo : Form
{
    public Keydemo()
    {
        this.KeyUp += new KeyEventHandler(OnKeyUp);
    }

    void OnKeyUp(object sender, KeyEventArgs e)
    {
        MessageBox.Show("Key code: " + e.KeyCode);
    }

    public static void Main()
    {
        Application.Run(new Keydemo());
    }
}
Upon execution, press any key and you will be able to view a message box with the corresponding key code in it. Cool, isn't it?
|
https://www.developer.com/net/csharp/article.php/1496891/Event-Handling-in-C.htm
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
On Thu, May 08 2003, Linus Torvalds wrote:
> On Thu, 8 May 2003, Jens Axboe wrote:
> >
> > Maybe a define or two would help here. When you see drive->addressing
> > and hwif->addressing, you assume that they are used identically. That
> > !hwif->addressing means 48-bit is ok, while !drive->addressing means
> > it's not does not help at all.
>
> Why not just change the names? The current setup clearly is confusing, and
> adding defines doesn't much help. Rename the structure member so that the
> name says what it is, aka "address_mode", and when renaming it you'd go
> through the source anyway and change "!addressing" to something more
> readable like "address_mode == IDE_LBA48" or whatever.

Might not be a bad idea, drive->address_mode is a heck of a lot more to
the point. I'll do a swipe of this tomorrow, if no one beats me to it.

> (Anyway, I'll just drop all the 48-bit patches for now, since you've
> totally confused me about which ones are right and what the bugs are ;)

I think we can all agree on the last one (attached again, it's short) is
ok. The 'only use 48-bit when needed' can wait until Bart gets the
taskfile infrastructure in place, until then I'll just have to eat the
overhead :)

diff -Nru a/drivers/ide/ide-disk.c b/drivers/ide/ide-disk.c
--- a/drivers/ide/ide-disk.c	Thu May  8 14:32:59 2003
+++ b/drivers/ide/ide-disk.c	Thu May  8 14:32:59 2003
@@ -1479,7 +1483,7 @@
 static int set_lba_addressing (ide_drive_t *drive, int arg)
 {
-	return (probe_lba_addressing(drive, arg));
+	return probe_lba_addressing(drive, arg);
 }
 
 static void idedisk_add_settings(ide_drive_t *drive)
@@ -1565,6 +1569,18 @@
 	}
 
 	(void) probe_lba_addressing(drive, 1);
+
+	if (drive->addressing == 1) {
+		ide_hwif_t *hwif = HWIF(drive);
+		int max_s = 2048;
+
+		if (max_s > hwif->rqsize)
+			max_s = hwif->rqsize;
+
+		blk_queue_max_sectors(&drive->queue, max_s);
+	}
+
+	printk("%s: max request size: %dKiB\n", drive->name, drive->queue.max_sectors / 2);
 
 	/* Extract geometry if we did not already have one for the drive */
 	if (!drive->cyl || !drive->head || !drive->sect) {
diff -Nru a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
--- a/drivers/ide/ide-probe.c	Thu May  8 14:32:59 2003
+++ b/drivers/ide/ide-probe.c	Thu May  8 14:32:59 2003
@@ -998,6 +998,7 @@
 static void ide_init_queue(ide_drive_t *drive)
 {
 	request_queue_t *q = &drive->queue;
+	ide_hwif_t *hwif = HWIF(drive);
 	int max_sectors = 256;
 
 	/*
@@ -1013,8 +1014,10 @@
 	drive->queue_setup = 1;
 	blk_queue_segment_boundary(q, 0xffff);
 
-	if (HWIF(drive)->rqsize)
-		max_sectors = HWIF(drive)->rqsize;
+	if (!hwif->rqsize)
+		hwif->rqsize = hwif->addressing ? 256 : 65536;
+	if (hwif->rqsize < max_sectors)
+		max_sectors = hwif->rqsize;
 	blk_queue_max_sectors(q, max_sectors);
 
 	/* IDE DMA can do PRD_ENTRIES number of segments. */
--
Jens Axboe
|
https://lkml.org/lkml/2003/5/8/156
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Prime Programming Proficiency, Part 2: VS.NET Macros
As soon as the first article of this Prime Programming Proficiency series hit CodeGuru.com, I received an e-mail response about the value of lines of code as a metric. I love the dynamic nature of the Web and the opportunity to exchange ideas with people all over the world. That said, Part 1 did not advocate lines of code as the best or only metric. It is part of a mosaic of information that should be collected as a means to an end—programming proficiency.
Lines of code are a relatively easy metric to obtain, and used prudently, it can tell you whether your code is growing or shrinking. Growing code could indicate more features, but it could just as easily indicate useless code bloat. Shrinking code could mean that features were removed or that refactoring is occurring. So, you see that counting lines of code creates more questions than it answers, but it is part of an information-gathering process that can be a first step toward obtaining the answers.
This second article of the series introduces the VS.NET Macros IDE and gets you started on implementing the LineCounter tool for VS.NET. Parts 3 and 4 will provide the complete code listing and discuss the pattern used to implement the solution, the Visitor behavior pattern.
VS.NET Extensibility Object Model for Projects
Visual Studio .NET has a comprehensive extensibility object model. Writing macros in VB.NET is a convenient way to access this part of VS.NET. (See the help documentation VSLangProj Hierarchy Chart for specifics on the extensibility object model for projects.)
By using this object model in conjunction with macros, you or third-party tools vendors can write simple or advanced code generators or automate a wide variety of tasks to extend and customize VS.NET. In fact, by combining macros, wizards, the CodeDOM, project templates, and the extensibility object model, one can automate a wide variety of tasks and speed up software development significantly. (These features alone would justify having a fulltime toolsmith on your team, assuming you have more than a couple of developers.)
For now, let's look at using the extensibility object model and macros to design and implement your line counter utility.
Exploring the Macros IDE
As a consultant, I meet a lot of smart people. Oddly, though, few of them talk about the extensibility object model or even seem to know about macros. To give you an idea of the possibilities, consider some enhancements I recently devised:
- I created a New Web Page template that automatically supplies 25 percent of a complete Web page for a recent project, including style links, HTML table layouts, footers, and headers. (Web page inheritance will be a nice upgrade eventually.)
- With the CodeDOM, I wrote a code generator that, when provided with a connection string and table name, generates a class and data class for that table, based on a custom data-access layer pattern. (XSD is a good alternative.)
- I implemented a property code generator as a macro and added it to the toolbar, so a user provides the field name and data type and it writes the property code for him or her. And I implemented the LineCounter to see the evolution of the project by file and lines of code.
These functions are all automatic now, and they save a lot of time relative to the time it took to implement them. I hope this encourages you to explore all of .NET and share those explorations with others. (One of the best bangs for your buck is to join a users group. Another is to sign up for free e-mail newsletters such as CodeGuru.com's.) Now, I'll get back to macros.
Macros are supported by their own IDE. To open the Macros IDE, select Tools|Macros|Macros IDE from Visual Studio .NET. The Macros IDE is a lot like Visual Studio's IDE and the Macro language is Visual Basic .NET, so you should feel right at home.
A Quick Tour of the Macros IDE
The Macros IDE has a Project Explorer. When you expand the elements of the project being explored, you see VB modules—think shared class—and class files. These files are located in C:\Documents and Settings\[user name]\My Documents\Visual Studio\MyMacros.vsmacros. (This is useful information if you want to share macros with others.)
You can create plain vanilla modules or classes in the project and import or export classes and modules. I haven't tried to use a form in the Macros IDE, but I suspect you could. The important thing is that an entry point for macro code is a public method in a module that requires no arguments and returns nothing: in VB terms, a Sub with no parameters. After the macro has started, you can run any combination of methods, create classes, and generally perform any kind of programming task. Listing 1 offers a quick, introductory sample macro to get you started.
Listing 1: Quick Macro Sample.
Option Strict Off
Option Explicit Off

Imports EnvDTE
Imports System.Diagnostics
Imports System.Windows
Imports System.Windows.Forms
Imports System
Imports System.IO
Imports System.Text.RegularExpressions
Imports System.text

Public Module MyUtilities

    Public Sub DumpProject2()
        Dim project As Project
        Dim projects As Projects
        projects = DTE.Solution.Projects

        For Each project In projects
            Try
                Debug.WriteLine("Project: " & project.FullName)
            Catch
                Debug.WriteLine("Project: " & "<no name>")
            End Try
        Next
    End Sub

End Module
The example macro DumpProject2 obtains an instance of the Projects collection from the active solution—DTE.Solution—and iterates over each project, writing the project's name to the Macros IDE's Output window. Some solution-level elements do not have a FullName property because they may not be projects in the traditional sense of the word. You can also examine the Project.Kind property to determine the specific type of the element. The constants for Kind are defined in the EnvDTE.Constants namespace.
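For example, a sketch that branches on Kind (the constant name is taken from EnvDTE.Constants; adjust it to the kinds you care about):

For Each project In DTE.Solution.Projects
    If project.Kind = EnvDTE.Constants.vsProjectKindMisc Then
        Debug.WriteLine("Skipping miscellaneous files: " & project.Name)
    Else
        Debug.WriteLine("Project: " & project.FullName)
    End If
Next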
Having a statement of work, you now can begin defining a solution. The first thing you might do is extend your sample code from Listing 1 and just keep adding inner, nested loops to handle sub-projects and files. I will submit, with some caution, that this will work fine—and it reflects how a lot of code is written. However, that's not how this article is going to do it.
|
https://www.codeguru.com/csharp/.net/net_general/macros/article.php/c7823/Prime-Programming-Proficiency-Part-2-VSNET-Macros.htm
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Programming with Partial Classes in VB.NET 2005
Sometimes I wonder what motivates language developers to make some of the design choices they do. In fact, I'd like to arrange events called Authors Summits, where the language vendor, such as Microsoft, explains its decisions directly. Imagine gathering about 200 authors in a room. There is Dan Appleman and Charles Petzold. Dave Chappell is presenting with Chris Sells. Over by the coffee table are Carl Franklin and Rocky Lhotka having a danish. And I am in the back furiously taking notes to prepare an article or a proposal for a new book. I politely raise my hand.
Paul: What was the motivation behind partial classes?
Chris: Well, VB programmers were used to a very clean form code-behind experience, and partial classes allow us to separate out all of the plumbing that the .NET form designer adds to forms and give VB.NET programmers the clean experience they were used to.
Paul: So, that was pretty much it.
Chris: Yeah.
Later I see Dan Appleman and ask him the same question. (Dan has a way of saying things that are true and painful without seeming too offensive.) He says that Microsoft was trying to figure out how to outsource code projects to India without releasing proprietary information. With partial classes, Microsoft can send obfuscated assemblies to India without exposing proprietary information, while taking advantage of the lower wage rates.
Paul: How is that working?
Dan: Not too good. The Indians are smart to be aggressive about scope creep, which makes it very hard to change requirements midstream. So, if something comes back wrong, a debate about scope creep ensues.
Paul: That's clever. Claiming scope creep is a way to get paid without really delivering, sort of.
Dan: Yeah, it's a real problem.
Paul: Does the partial class obfuscation strategy work technically?
Dan: Yeah, so far so good, but if one big chunk of critical code is de-obfuscated, there will be big problems for everybody.
Disclaimer: No actual authors were harmed during this dramatization, and Dan Appleman is a fictional character who is not intended to represent any real person.
Guidelines for Using Partial Classes
Partial classes, like any new construct, come with rules for using them. The help function serves this purpose. I have included a summarized list of the most notable partial class rules to support Listing 1, which demonstrates two partial classes.
Listing 1: A Partial Class Containing Two Parts
Namespace MyPartialClass

    Partial Public Class PartialClass
        Public Shared Sub Main()
            Dim part As New PartialClass
            part.WhoAmI()
        End Sub
    End Class

    Partial Public Class PartialClass
        Public Sub WhoAmI()
            Console.WriteLine("I am PartialClass")
            Console.ReadLine()
        End Sub
    End Class

End Namespace
Pretty easy, really. The partial classes use the new keyword partial and are defined in the same namespace. However, partial classes can and probably should be defined in separate physical files. Other rules or limitations are:
- Partial classes can be used to split the definition of classes, structures, and interfaces, which supports simultaneous development or the separation of generated from user-generated code.
- Each part of a partial class must be available at compile time.
- Partial classes must use the same access modifier (for example, Public, Protected, or Private).
- If any single part is abstract (MustInherit), the whole class is abstract.
- If any parts are sealed (NotInheritable), the whole class is sealed.
- If any part declares a base type, the whole class inherits that base type.
- Parts can specify different interfaces, but the whole class implements all interfaces.
- Features defined in any part are available to all partials; the whole is the sum of all of the parts.
- Delegates and enumerations cannot use the partial modifier.
- Attributes apply to all parts.
This list may seem like a lot to remember, but just think of partial classes as a class definition split across many files for convenience. The next section presents a brief scenario that may help you get some extra mileage out of partial classes.
Problem Resolution Scenario
The notion of eXtreme Programming has some interesting concepts. Some, like pair programming, bug me because programmers are all smart and there seems to be too much debate going on, at least for all of the pairs I have seen. However, a derivative of this concept might work well.
Along the same lines, SourceSafe is a relatively useful tool. However, sometimes it seems hard to use because of real or perceived problems that may occur during multiple file checkouts. Suppose you and I have a file checked out at the same time. You check your changes in, and I check my changes in. Are you completely sure that all files, including text and binary, are merged correctly? No. Well, you are not alone.
With partial files, we could name our various source code files Foo1, Foo2, and Foo3, each of which contains the same class, Foo. You work on one part (say fields, properties, and events), while I work on another (methods) and a third developer works on implementing interfaces. Now, we can check in or out our various files without ever having to check out the same file simultaneously. Problem solved.
In addition, with a little upfront design work and some simple naming conventions, the three of us can reduce or eliminate minor problems. For example, make one person responsible for the class modifiers, attributes, and inheritance. Or, better yet, have one person stub the classes and agree not to unilaterally change the class header. This kind of collaboration could yield some good returns and ensure that remote developers and telecommuters don't make code changes that contradict changes made by office workers.
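For example, the split files might look like this (a sketch of the naming convention, not designer-generated code):

' Foo1.vb - developer A: fields and properties
Partial Public Class Foo
    Private myName As String
    Public Property Name() As String
        Get
            Return myName
        End Get
        Set(ByVal value As String)
            myName = value
        End Set
    End Property
End Class

' Foo2.vb - developer B: methods
Partial Public Class Foo
    Public Sub Describe()
        Console.WriteLine("Foo named " & Name)
    End Sub
End Class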
The Good Housekeeping Feature
Partial classes is pretty good idea. It cleans out the code-behind, separating designer code from user-written code, and anything that aids in housekeeping is a good thing.
You aren't required to use the partial modifier in code you write, but you will see it used in Forms and Controls by the designers. What the partial modifiers mean is that under most conditions you don't need to change the designer-generated code; simply add your code to the source file provided and let the designer take care of itself.
https://www.codeguru.com/csharp/csharp/cs_syntax/indexers/article.php/c10611/Programming-with-Partial-Classes-in-VBNET-2005.htm
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Recursion in a monad
From HaskellWiki
People sometimes wonder how to effectively do recursion when inside a monadic do-block. Here are some quick examples:
The problem is to read 'n' lines from stdin, recursively:
The obvious, recursive way:
main = f 3

f 0 = return []
f n = do
    v  <- getLine
    vs <- f (n-1)
    return $! v : vs
Runs:
$ runhaskell A.hs
1
2
3
["1","2","3"]
Or make it tail recursive:
f 0 acc = return (reverse acc)
f n acc = do v <- getLine
             f (n-1) (v : acc)
Or abstract the recursion pattern into a fold:
f n = do s <- foldM fn [] [1..n]
         return (reverse s)
    where
        fn acc _ = do x <- getLine
                      return (x:acc)
And finally, apply some functor and pointfree shortcuts:
f n = reverse `fmap` foldM fn [] [1..n]
    where
        fn acc _ = (: acc) `fmap` getLine
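As an aside, this particular read-n-lines pattern is already captured in the standard library by replicateM from Control.Monad:

import Control.Monad (replicateM)

f :: Int -> IO [String]
f n = replicateM n getLine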
|
https://wiki.haskell.org/index.php?title=Recursion_in_a_monad&printable=yes
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Today, we are announcing an update to our preview SDK and the adoption of Virtual Machine Scale Sets in Azure-hosted clusters. As part of this release, we are making a series of changes based on your feedback, some of which are breaking. Please read the notes below carefully to understand what you need to do to adopt the new release.
SDK Update
This release of the SDK includes a number of new features, along with some key bug fixes and general performance and reliability improvements.
Windows 7 Support for Development
We heard you loud and clear on the need for Windows 7 support for development machines. Starting with this release, you can run a local cluster and deploy applications to it from Visual Studio 2015 on Windows 7.
Important notes:
- There is a bug impacting the Visual Studio debugger for Service Fabric applications on Windows 7. To fix it, you will need to install the Visual Studio 2015 Update 2 CTP.
- The Service Fabric PowerShell cmdlets require PowerShell 3.0 or higher, whereas Windows 7 includes PowerShell 2.0 by default. If you have not updated PowerShell to a recent release on your Windows 7 machine, you will need to do so to use the Service Fabric SDK.
Delete actors
The Reliable Actors framework includes a form of garbage collection that serializes actor state to disk and then removes the actor object from memory after a period of inactivity. This ensures that you aren’t consuming a large amount of memory in your clusters holding on to actors that are not being used. However, in some cases, you may want to go further than just removing the actors from memory and delete them from the cluster entirely, either to limit storage usage or for compliance. This release adds this capability.
The DeleteActorAsync method can be invoked using ActorServiceProxy as shown below:
var serviceUri = ActorNameFormat.GetFabricServiceUri(typeof(IMyActor), actorAppName);
var actorServiceProxy = ActorServiceProxy.Create(actorId.GetPartitionKey(), serviceUri);
await actorServiceProxy.DeleteActorAsync(actorId, cancellationToken);
Query actors
In order to determine the actors to delete, you’ll probably want to figure out which actors have been created. We have enabled the ability to query the set of actors in a given partition.
This is an Actor Service level operation with the following signature:
Task<PagedResult<ActorInformation>> GetActorsAsync(ContinuationToken continuationToken, CancellationToken cancellationToken);
This API returns a PagedResult, which contains a list of ActorInformation and a continuation token signifying whether more calls are needed to get the complete list of actors.
This method can be invoked using ActorServiceProxy as shown below:
var serviceUri = ActorNameFormat.GetFabricServiceUri(typeof(IMyActor), actorAppName);
var actorServiceProxy = ActorServiceProxy.Create(partitionKey, serviceUri);
ContinuationToken continuationToken = null;
var queriedActorCount = 0;
do
{
var queryResult = await actorServiceProxy.GetActorsAsync(continuationToken, cancellationToken);
queriedActorCount += queryResult.Items.Count();
continuationToken = queryResult.ContinuationToken;
} while (continuationToken != null);
CancellationToken support for IService/IActor
Reliable Service and Reliable Actor methods now support a cancellation token that can be remoted via ActorProxy and ServiceProxy, allowing you to implement cooperative cancellation. Clients that want to cancel a long running service or actor method can signal the cancellation token and that cancellation intent will be propagated to the actor/service method. That method can then determine when to stop execution by looking at the state of its cancellation token argument.
For example, an actor’s contract that has a possibly long-running method can be modelled as shown below:
public interface IPrimeNumberActorInterface : IActor
{
Task<ulong> FindNextPrimeNumberAsync
(ulong previous, CancellationToken cancellationToken);
}
The client code that wishes to cancel the method execution can communicate its intent by canceling the cancellation token.
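For illustration, a client-side sketch might look like the following (the actor id, application name, and argument value are hypothetical):

var cts = new CancellationTokenSource();
var proxy = ActorProxy.Create<IPrimeNumberActorInterface>(actorId, actorAppName);

// start the potentially long-running call, passing the token
var task = proxy.FindNextPrimeNumberAsync(1000003, cts.Token);

// later, if the result is no longer needed, signal cancellation;
// the cancellation intent is propagated to the actor method
cts.Cancel();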
Flexible application packaging for guest executables
Service Fabric can provide orchestration and high-availability for arbitrary executables, referred to as “guest executables”. In some cases, you may want to build an application that is a combination of Reliable Services/Reliable Actors (or “service host executables”) and guest executables. Previously, these types of application packages had to be managed entirely outside of Visual Studio because there was no way to include guest executables in the application package created by VS and msbuild would complain if it found services in the application manifest that did not have a corresponding service project in the solution.
To better support this scenario, the application project now includes an ApplicationPackageRoot folder, similar to the PackageRoot folder found in service projects. The contents of this directory will be directly copied to the generated application package.
Important: as part of this change, the ApplicationManifest file has been moved under the ApplicationPackageRoot folder. Visual Studio will automatically move it as part of project upgrade but you will need to ensure that the move is properly reflected if your project is checked into source control.
Key bug fixes
In addition to the features described above, the following key bugs are fixed in this release:
- An assembly naming clash with Java that was causing FabricHost not to start on some machines with the JRE installed.
- A long-path issue with the Test-ServiceFabricApplicationPackage PowerShell cmdlet that was causing deployment failures with ASP.NET 5/ASP.NET Core projects.
API Breaking Changes
Transaction object now required in IReliableQueue::GetCountAsync
To ensure consistent results, we now provide the count of objects in the queue within transaction scope. To enable this, we require a transaction object to be provided to the GetCountAsync call.
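Inside a stateful service, the updated call looks roughly like this (a minimal sketch; the queue name is hypothetical):

using (var tx = this.StateManager.CreateTransaction())
{
    var queue = await this.StateManager.GetOrAddAsync<IReliableQueue<string>>("myQueue");

    // the count is now read within the scope of the transaction
    long count = await queue.GetCountAsync(tx);

    await tx.CommitAsync();
}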
IReliableDictionary::Count property replaced by IReliableDictionary::GetCountAsync method
To align with IReliableQueue, IReliableDictionary’s Count property has been replaced with a GetCountAsync method.
Reliable collections interfaces no longer implement IEnumerable – use CreateEnumerableAsync instead
In order to prepare for upcoming releases where the data in Reliable Collections may be paged to disk, the ReliableQueue and ReliableDictionary no longer implement IEnumerable directly. Instead, you should use the CreateEnumerableAsync method to acquire an enumerable collection. Note that IEnumerables returned by CreateEnumerableAsync can only be enumerated within transaction scope, so if you intend to use them elsewhere, you will need to move the results into a temporary collection, such as a List.
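A rough sketch of that pattern, assuming a reliable dictionary of strings named myDictionary:

using (var tx = this.StateManager.CreateTransaction())
{
    var enumerable = await myDictionary.CreateEnumerableAsync(tx);
    var snapshot = new List<KeyValuePair<string, string>>();

    using (var enumerator = enumerable.GetAsyncEnumerator())
    {
        // enumeration must happen inside the transaction scope
        while (await enumerator.MoveNextAsync(cancellationToken))
        {
            snapshot.Add(enumerator.Current);
        }
    }
    // snapshot can be used safely after the transaction completes
}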
Testability APIs moved into System.Fabric assembly and moved namespaces
The testability APIs previously included in the Microsoft.ServiceFabric.Testability NuGet package have now been moved to the System.Fabric assembly and are accessible via the FabricClient type. More specifically, the methods previously available under System.Fabric.Testability.TestabilityExtensions are now available via the TestManager and FaultManager properties of FabricClient.
Virtual Machine Scale Sets
In our initial preview release, Service Fabric clusters created in Azure were based on “single-instance virtual machines”, meaning that a specific set of VMs were pre-defined to form the basis of the cluster. Scaling the cluster up or down required that you manually add or remove VMs. Going forward, new Service Fabric clusters will be based on Virtual Machine Scale Sets (VMSS). With VMSS, every node type in your cluster will be tied to a VM scale set, allowing you to define rules for when the number of VMs of that type should grow and shrink. Note that the auto-scaling functionality of VMSS is not integrated with Service Fabric yet and will be enabled in an upcoming release.
Updating to the new release
You can install the new SDK using the Web Platform Installer. To take advantage of the new APIs described above in existing projects, you will need to update your NuGet packages to the latest versions.
Important note:
Because of the move to VMSS and the breaking changes in our APIs, we have elected to leave existing clusters in place and fixed on the existing Service Runtime version (4.4.104.9494). If you're using the existing SDK/NuGet packages (v1.4.87) and targeting those clusters, you can continue to do so. However, in order to deploy apps created with the new SDK, or existing apps upgraded to the new NuGet packages, to Azure, you will need to create new clusters. Once we reach general availability, this will no longer occur. You can check the version of your cluster in the Azure portal.
(The original post includes a table offering a quick guide to compatibility.)
Known Issues
Application project upgrade errors with ASP.NET 5 (aka ASP.NET Core) projects
When you open an existing Service Fabric project with the updated tooling extension included in this SDK, Visual Studio will attempt to upgrade your project to the latest version. Before it does that, it takes the existing contents of your application project folder and moves them to a Backup folder. Given the depth of ASP.NET 5 projects, you may already be close to the limit on path length, so pushing things down another level may cause a path-too-long error.
If you hit this, try deleting the auto-generated folders (bin, obj, and pkg) from the application project directory.
Questions and feedback
As always, we are monitoring Twitter, StackOverflow, MSDN, Azure.com, and our feedback forum for your comments and questions.
The Service Fabric Team
Does Service Fabric support C++? I only see C# examples.
Is there a way to create an on premises cluster? It was mentioned in previous posts that there would be an update in Q1 that would allow on prem cluster creation.
Each time you create a local dev cluster, you actually create an on-premises cluster. Look into C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1; there you can find everything you need to set up any cluster you want (keep track of the referenced files).
I'm using MEF together with Azure Service Fabric. It was working fine in the 4.4x version. But after I upgraded to v4.5x, it always throws a load exception saying that System.Fabric.Management.ServiceModel.dll cannot be found and loaded. Any idea?
I would like to also point out the MEF error. There's a dependency from one of the 1.5 assemblies on System.Fabric.Management.ServiceModel, which remains unresolved when deployed, breaking MEF. A simple workaround is to include that DLL in your project from Program Files\Service Fabric. But that's pretty non-ideal.
Have you established any more details on the what was happening here with MEF?
Thanks,
that's great news! could you please also update the azure quickstart ARM templates to VM Scale Sets?! github.com/…/service-fabric-secure-cluster-5-node-1-nodetype-wad
thx!
Any examples on DeleteActorAsync?
I had a cluster up and running on the previous version that had services listening on HTTPS using OwinCommunicationListener. With the new version, that no longer seems to work. I have not changed any of the LB settings. The port 443 seems to be open, but gets closed as soon as the client tries to talk to it. Any ideas?
I should add – HTTP still works, it’s just HTTPS that has stopped working.
Nevermind, got it working! Apparently you have to explicitly reference the endpoint certificate in the application manifest with this release, whereas you didn’t have to previously. Thanks.
|
https://blogs.msdn.microsoft.com/azureservicefabric/2016/02/23/service-fabric-sdk-v1-5-175-and-the-adoption-of-virtual-machine-scale-sets/?replytocom=583
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Won't Fix
- Affects Version/s: 1.8.3
- Fix Version/s: None
- Component/s: Bean / Property Utils
- Labels: None
Description
We have migrated the library from version 1.6.0 to 1.8.0 and the copyProperties() method fails when copying a java.util.Date attribute with a null value.
Here is a simple test case:
import java.util.Date;
import org.apache.commons.beanutils.BeanUtils;

public class Test {
    private Date date;

    public Date getDate() { return date; }
    public void setDate(Date date) { this.date = date; }

    public static void main(String[] args) throws Exception {
        Test dest = new Test();
        Test source = new Test();
        BeanUtils.copyProperties(dest, source);
    }
}
As a workaround, we can do this :
ConvertUtils.register(new DateConverter(null), Date.class);
We can also use PropertyUtils.copyProperties() because in this case no conversion is required, but the impact on our big application is unknown.
The problem is that there seems to be a regression between version 1.6.0 and 1.8.0.
Issue Links
- blocks BEANUTILS-255: Date and Calendar Converter implementations (Closed)
- is duplicated by BEANUTILS-454: copyProperties() throws conversion exception for null Date (Closed)
|
https://issues.apache.org/jira/browse/BEANUTILS-387
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
dtostrf() on D21G
The dtostrf() function (to convert float to string) is not included in the standard arduino.h in the D21G core.
This is probably due to memory size considerations, though it could presumably just be added.
Anyhow, after a bit of search I solved this issue by adding the correct include:
#include <avr/dtostrf.h>
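With the include in place, usage follows the usual AVR signature (value, minimum width, decimal places, output buffer); a small sketch with hypothetical values:

#include <avr/dtostrf.h>

char buf[16];
// convert 3.14159 with minimum width 6 and 2 decimal places
dtostrf(3.14159, 6, 2, buf);   // buf now holds "  3.14"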
Hope this is helpful
Stefano Giuseppe Bonvini
|
https://industruino.com/forum/help-1/question/dtostrf-on-d21g-449
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
With XML becoming a thing of the past for inter-process communication and data exchange on Internet infrastructures, JSON is increasingly popular for providing quicker and better packaging of data across the wire. Specifically, as the need for partial rendering of web pages grows with the dynamic exchange of data using AJAX-enabled Web Services, JSON has become a more compact and simpler means of communication between client calls and server processes, providing a rich user experience. Moreover, Windows Communication Foundation (WCF) processes JSON messages using an internal, hidden mapping between JSON data and the XML infoset.
This article discusses the features of JSON and how to perform a simple serialization of JSON data manually by an example.
JSON (JavaScript Object Notation) is an efficient data encoding format that enables fast exchanges of small amounts of data between client browsers and server components. JSON encodes data using a subset of the object literals of JavaScript.
In WCF services and AJAX enabled Web Services, JSON encoding has become a better alternative in place of XML for exchange of data and messages. In general, XML elements are mapped into JSON strings while serialization, but it would be interesting to know how other parts of XML get serialized.
Currently, in the .NET Framework 3.5, JSON serialization and de-serialization is implemented in WCF and AJAX-enabled Web Services. Retaining the other forms of serialization techniques such as Binary, SOAP, and XML, JSON is newly included in the framework, and is handled automatically by Windows Communication Foundation (WCF) when you use data contract types in service operations that are exposed over AJAX-enabled endpoints.
JSON serialization is about serializing .NET type objects into JSON-encoded data, and is widely used in situations when writing Asynchronous JavaScript and XML (AJAX)-style Web applications. AJAX support in Windows Communication Foundation (WCF) is optimized for use with ASP.NET AJAX through the ScriptManager control.
JSON de-serialization is about de-serializion of JSON data into instances of .NET Framework types and objects to work at the client ends.
In cases where you need to work directly with JSON data, the DataContractJsonSerializer class is used; it acts as a serialization engine that converts JSON data into instances of .NET Framework types and back into JSON data. A .NET object that needs to be serialized must have a DataContract attribute attached to it, and its members must be attached with a DataMember attribute.
The DataContractJsonSerializer class provides a WriteObject method that serializes a specific object to JSON data and writes the resultant data to a stream. The ReadObject method reads the document stream in JSON format and returns the deserialized object. But this class currently has a bigger limitation, in that multiple members (fields, methods, etc.) of the DataContract class cannot be serialized.
Here is a sample application that shows how to serialize and deserialize JSON data. Note that the DataContractJsonSerializer class works independently with out requiring a WCF configuration of your application. This helps us in serializing and de-serializing data manually.
using System.Runtime.Serialization;

namespace JSONConsole
{
    [DataContract]
    internal class Person
    {
        [DataMember]
        internal string name;

        [DataMember]
        internal int age;
    }
}
The Main method below creates a Person instance and serializes it to JSON:
using System;
using System.IO;
using System.Runtime.Serialization.Json;

namespace JSONConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            Person p = new Person();
            p.name = "Bala";
            p.age = 22;

            MemoryStream stream1 = new MemoryStream();
            DataContractJsonSerializer ser =
                new DataContractJsonSerializer(typeof(Person));
            ser.WriteObject(stream1, p);

            stream1.Position = 0;
            StreamReader sr = new StreamReader(stream1);
            Console.Write("JSON serialized Person object: ");
            Console.WriteLine(sr.ReadToEnd());
        }
    }
}
Note that the System.Runtime.Serialization.Json namespace is found in the System.ServiceModel.Web assembly, due to the usage of the DataContractJsonSerializer class in WCF and AJAX-related services.
Running the program prints:
JSON serialized Person object: {"age":22,"name":"Bala"}
Appending the following lines to Main deserializes the JSON data back into a Person instance; reset the stream position and call ReadObject:

stream1.Position = 0;
Person p2 = (Person)ser.ReadObject(stream1);
Console.WriteLine("Deserialized Person data:");
Console.WriteLine("Name: " + p2.name);
Console.Write("age=");
Console.WriteLine(p2.age);
Deserialized Person data:
Name: Bala
age=22
This article has dealt with the JavaScript Object Notation (JSON) encoding format introduced in .NET Framework 3.5, which enables faster data exchanges between client applications and servers. It also showed how to serialize a .NET object to JSON encoding and back in WCF.
|
http://www.codeproject.com/Articles/37069/JSON-serialization-and-de-serialization-in-WCF-Dat
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
#include <windows.h>

so I did that, and it reduced the number of errors. I tried adding it to more places and it continued to reduce the number of errors. Why do I need that in Visual Studio but not in Code::Blocks? Is there some way to put that information somewhere once to fix the problem everywhere?

-mwindows -lopengl32

-mwindows is what I need, but I don't know what it is in Visual Studio. Any ideas?
|
http://www.cplusplus.com/forum/windows/125815/
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
by Chengjiang (Andy) Lin
05/09/2005
BEA WebLogic Platform application provisioning is the process of preparing and providing a valid WebLogic Platform environment to support subsequent usage by the deployed applications. In a previous article, an overview was provided on what to expect when promoting a WebLogic Platform 8.1 application from development to production through multiple stages. In this article, the focus is on automating the application provisioning process at each stage using tools that are available for use with the WebLogic Platform product. This should enable you to automatically create the necessary environment in which WebLogic applications should run.
WebLogic Platform application provisioning is the process of preparing and providing a valid WebLogic Platform environment to support subsequent usage by the application. The WebLogic Platform environment typically consists of three types of resources: domain-level resources (such as JDBC Connection Pools, and DataSources), customer applications, and production data (such as security roles, cache policies, Portal metadata, and Trading Partner Management data). When promoting a WebLogic application from development to production through multiple stages, these resources need to be properly provisioned at each stage. Given the complexity of configuring domain-level and application-scoped resources, application provisioning is not a trivial process and is often manual-intensive and error-prone. It is therefore highly desirable to automate the provisioning process to improve production efficiency and reliability, and at the same time reduce IT cost.
This article first discusses the challenge with automating the application provisioning process. It then provides an overview of the provisioning tools available with the WebLogic Platform. Through case studies on some common scenarios such as moving from development to production, rapid production replication, production data provisioning, and differential provisioning, this article demonstrates how these provisioning tools can be effectively used to automate the provisioning process.
The examples provided in this article are abstracted from some real customer provisioning scenarios. The WebLogic Scripting Tool (WLST) sample scripts and domain templates referenced in this article have been developed and tested on WebLogic Platform 8.1 Service Pack 4. In the coming WebLogic Platform release, JSR-88 will be supported. Therefore, some resources now configured at domain level will be packaged as application-scoped resources. This should make our lives a little easier with application provisioning automation.
Note that this article is not intended to be a tutorial on using the provisioning tools such as Domain Template Builder or WLST. You can find more information on these tools through the links provided in the References section.
To run WebLogic Platform applications, you must create a domain that includes the appropriate WebLogic Platform components, as well as application-scoped resources. Figure 1 depicts an example promoting process that takes an application through four stages: development, integration, QA/staging, and production. In production, WebLogic Platform application provisioning also involves setting up a cluster subnet and proxies per application deployment requirements.
Figure 1. Application promotion from development to production
In a typical development stage, multiple developers will be concurrently working on a project with each focusing on different modules. Developers are likely to have their own working domains (development-mode, single server with local database) as their sandbox, using a source control tool to facilitate team development. In a development-mode domain, BEA WebLogic Server and BEA WebLogic Workshop will automatically provision some application resources, such as Entity Bean tables, AsyncDispatcher queues, and conversation state tables. These resources need to be explicitly provisioned in a production-mode domain. The QA/staging environment normally mimics the real production environment, which typically consists of one or more cluster domains, along with dedicated database servers, load balancers, and commercial certificates. Security and high availability are critical in production, while they may be wholly absent from the development environment. In addition, if you use different IP addresses or ports for the managed servers in production, you also need to update your application modules and regenerate the EAR file for custom deployment.
When promoting WebLogic Platform applications from development to production through multiple stages, it is critical to automate the process to ensure a successful production deployment. The key challenge for provisioning automation is to identify and capture what exactly needs to be configured to support proper deployment of the applications, carry that information forward through each stage, and then apply any configuration customizations that are necessary for the target environment.
WebLogic Platform provides a rich set of system tools such as the Configuration Wizard, the Domain Template Builder, and the WebLogic Scripting Tool (WLST) to facilitate provisioning automation. For a complete list of tools, refer to the WebLogic Platform Deployment Guide. The WebLogic Platform also ships with a set of predefined domain templates, to help customers with initial domain configuration, and a set of extension templates for adding well-defined applications and services to existing domains. Depending on the characteristics of various provisioning scenarios, they can be categorized into the template-based approach and the WLST-based approach. Note that these approaches can be effectively combined to address complex provisioning scenarios.
The predefined domain and extension templates contain the main attributes and files required for building or extending a particular WebLogic domain. After you have added new resources and applications to your domain, you can use the Domain Template Builder to create a custom domain template, which can be used later for creating a target domain through the Configuration Wizard.
The template-based approach leverages the Domain Template Builder functionality to capture the various configuration details and artifacts of the current domain. To use the template-based approach, you would start with a working domain. It can be a single server development domain or a clustered production domain. In addition, you should have a thorough understanding of the existing domain so as not to miss out on including critical configuration information when creating the template. During template creation you have an opportunity to customize the settings of some domain resources, but you can also opt to apply customization in a later stage when you are actually configuring the domain using the created template.
WebLogic Server Scripting Tool (WLST) is a command-line scripting interface (built with Jython) that you use to interact with and configure a WebLogic Server. WLST supports both online and offline modes of operation. WLST Online operates while connected to a running server. WLST Offline adds commands to support Configuration Wizard functionality, enabling creation and extension of WebLogic domains without being connected to a running server. WLST Offline supports the predefined domain templates provided with WebLogic Platform, as well as custom domain templates created using the Template Builder tool.
The WLST-based provisioning approach leverages the WLST Offline and WLST Online functionality to record the various domain configuration details in a set of WLST scripts and use these scripts along with some predefined or custom domain templates to create the target domain. Using the WLST-based approach, you can easily change domain configuration through scripts, and effectively track configuration changes through source control. For more information on using WLST, visit the dev2dev CodeShare project wlst for sample scripts and best practices.
In the next few sections, we will delve into some common provisioning use cases to demonstrate and compare the use of the template-based approach and the WLST-based approach.
This use case represents a common provisioning scenario with high demand on automation. Typically, the development environment is dramatically different from the production environment, which makes the WLST-based approach a better fit compared to the template-based approach. Through a hypothetical scenario, we will illustrate how to combine WLST Offline and Online scripts to configure a desired production domain.
In this example, let's say the development team has implemented two applications: one BEA WebLogic Integration application and one BEA WebLogic Portal application. Both applications are deployed in a single platform domain with a single server instance. Moving to staging and production, the team wants to configure two separate clusters (wliCluster and wlpCluster) in a single platform domain, one for deploying the WebLogic Integration application and the other for deploying the WebLogic Portal application. The Configuration Wizard and WLST Offline support auto-configuration for a single-cluster domain only. When configuring multiple clusters (for example, a wliCluster and a wlpCluster) in a single Platform domain through the Configuration Wizard, domain-level resources will be automatically targeted to the first cluster (in this case, the wliCluster). Therefore, the target of some resources (such as the portal-related applications and resources) needs to be reassigned. The combination of WLST Offline and Online scripts can effectively address the specific requirements of configuring a multicluster domain.
In general, WLST Offline scripts can be used to configure the resources for any single cluster domain. For example, a WLST Offline script can add users, add managed servers, add additional JMS queues, customize JDBCConnectionPools, and so on. Since WLST Offline is based on the Config Wizard framework, it supports some nice auto-configuration features, but it has the same constraints on configuring multiple clusters. After the first cluster (wliCluster) is created, the portal-related applications and resources need to be untargeted from the wliCluster.
cd('/')
unassign('Application', 'paymentWSApp', 'Target', 'wliCluster')
unassign('Application', 'taxWSApp', 'Target', 'wliCluster')
unassign('JDBCConnectionPool', 'portalPool', 'Target', 'wliCluster')
unassign('JDBCDataSource', 'p13n_trackingDataSource', 'Target', 'wliCluster')
unassign('JDBCDataSource', 'p13nDataSource', 'Target', 'wliCluster')
unassign('JDBCTxDataSource', 'portalFrameworkPool', 'Target', 'wliCluster')
The following WLST Online script illustrates the configuration of the second cluster (wlpCluster) and proper retargeting of the portal-related resources to the wlpCluster.
def create_wlpcluster(cluster_config):
    clusterName, clusterAddr, multicastAddr, multicastPort, mss = cluster_config
    # create the cluster
    cluster = auto_createCluster(clusterName, clusterAddr, multicastAddr, multicastPort)
    # create the managed servers
    for msConfig in mss:
        managedserver = auto_createManagedServer(msConfig, cluster)
    # add the cluster to the target list of the following resources
    wlst.cd('/')
    jcf = wlst.getTarget('JMSConnectionFactory/' + 'cgQueue')
    if jcf != None:
        jcf.addTarget(cluster)
    poolList = 'portalPool', 'cgPool', 'cgJMSPool-nonXA'
    for poolName in poolList:
        pool = wlst.getTarget('JDBCConnectionPool/' + poolName)
        if pool != None:
            pool.addTarget(cluster)
    dsList = 'p13n_trackingDataSource', 'p13nDataSource'
    for dsName in dsList:
        ds = wlst.getTarget('JDBCDataSource/' + dsName)
        if ds != None:
            ds.addTarget(cluster)
    txdsList = 'portalFrameworkPool', 'cgDataSource', 'cgDataSource-nonXA'
    for txdsName in txdsList:
        txds = wlst.getTarget('JDBCTxDataSource/' + txdsName)
        if txds != None:
            txds.addTarget(cluster)
    wlst.cd('Application/' + 'JWSQueueTransport')
    ejb = wlst.getTarget('EJBComponent/' + 'QueueTransportEJB')
    if ejb != None:
        ejb.addTarget(cluster)
    wlst.cd('/')
Let's take a closer look at a portion of this script:
dsList = 'p13n_trackingDataSource', 'p13nDataSource'
for dsName in dsList:
    ds = wlst.getTarget('JDBCDataSource/' + dsName)
    if ds != None:
        ds.addTarget(cluster)
What this does is loop through the names of the DataSources that we know should be targeted to the new wlpCluster. For each DataSource, we first call the getTarget() method to get the DataSource object, and then we add the wlpCluster to its target list.
The complete sample scripts can be downloaded from the wlst CodeShare project.
In addition to configuring domain-level resources, we also need to support automatic provisioning (or migration as it is sometimes called) of critical production data, such as the WebLogic security realm, WebLogic Portal metadata, and WebLogic Integration metadata. Refer to the Production Data Provisioning section for more information.
This use case depicts a particular provisioning scenario where the customer already has a working production domain properly configured on the administration server machine, and needs to replicate the same configuration to many managed server boxes in a WebLogic cluster domain. It is tedious and error-prone to run GUI-based tools on multiple remote machines. The template-based approach seems to be a good fit to automate the replication process. We illustrate how to use Domain Template Builder to capture what is in the production domain.
Assume this hypothetical domain is a WebLogic Integration cluster domain, called wlicluster, which the customer originally created using the Configuration Wizard with the WLI domain template. The following resources were subsequently deployed and configured to support the application needs:
- An application, called tradeHubApp.ear, which uses message broker channels and event generators, and invokes a third-party adapter
- A distributed queue, called egDistQueue, and its corresponding queue members
- A JMS Event Generator, tradeHubJMSEG, that is associated with egDistQueue and an existing channel file
- A third-party adapter, called kodo, targeted to the cluster
Now we need to create a custom domain template to capture this entire configuration. By default the Template Builder will include files directly referenced by the domain. Other files needed by the deployed applications, such as the JMS Event Generator jar file and the third-party adapter files, can be added to the template through the Add File window (Figure 2).
Figure 2. Add files when creating a custom domain template.
For the aforementioned production domain, we need to add the following files:
- Add the <wliconfig> folder to <Domain Root Directory> (note that you do not need to include the managed server subfolders under <wliconfig>)
- Add the <script> folder to <Domain Root Directory>
- Add the jar file corresponding to the tradeHubJMSEG event generator (that is, WLIJmsEG_tradeHubJMSEG.jar) to <Domain Root Directory>
- Add the DefaultAuthorizer.ldift file to <Domain Root Directory>, since it defines default authorization roles and policies for accessing message broker channels
- Add the adapter implementation folder to <Application Root Directory>/wlicluster
The above demonstrates a list of domain and application resources that have been captured in a domain template. If you have additional resources (such as security roles and policies), you will need to capture those resources separately using other tools provided by WebLogic Platform. The next use case illustrates how to use these tools to provision production data.
In general, data in a WebLogic Platform production environment consists of security roles, cache policies, Portal metadata, Trading Partner Management (TPM) data, and other data that are typically stored in a variety of persistence stores such as embedded LDAP, external database, and repository files. The focus of this use case is on automating the process of production data provisioning.
Each WebLogic domain template (except the basic WebLogic Server domain template) has a predefined set of SQL files that needs to be loaded into the target database before the servers can be started. For example, the WebLogic Portal domain template defines the Content Management Database Objects and WSRP Objects tables that need to be loaded before you start a WebLogic Portal domain. These domain-level tables will be loaded to the production database and shared by all the servers in the domain. You can load these tables through the Configuration Wizard during domain creation, or through the loadDB() function in WLST Offline, as depicted in the following script:
readTemplate('D:\<WL_HOME>\common\templates\domains\platform.jar')
existingPoolName = 'cgJMSPool-nonXA'
cd('/')
cd('JDBCConnectionPool/' + existingPoolName)
cmo.setDriverName('oracle.jdbc.driver.OracleDriver')
cmo.setUserName('E2EDCDB')
cmo.setPassword('E2EDCDB')
cmo.setURL('jdbc:oracle:thin:@<DBMS_HOST>:<PORT>:<DBMS_NAME>')
loadDB('9i', existingPoolName)
closeTemplate()
Each WebLogic Platform domain is configured with a default security realm that is typically stored in embedded LDAP. Each security realm consists of a set of configured security providers, users, groups, security roles, and security policies. The default realm can be customized to support various security requirements such as replacing a security provider and adding users. To support migration of security data from one realm to another, WebLogic Platform offers specific exporting/importing utilities through WebLogic Console and MBean interfaces.
BEA WebLogic Enterprise Security (WLES) services use several distinct categories of policies to protect applications and resources. To enable you to transfer your policy data easily to a production environment, WLES provides Policy Export/Import tools. Policy exporting allows you to output data from the policy database to text files called policy files. These policy files can be imported back to the same or another policy database using the Policy Import tool. These tools have command-line interfaces that can be easily integrated to enable automation.
When you configure and customize WebLogic Portal applications, there are various portal objects (such as entitlements and delegated administration) stored in the embedded LDAP and the portal database. The WebLogic Portal Propagation Utility enables Portal administrators to move data from one stage to another selectively, with the ability to filter metadata that should not be propagated and to leave untouched destination data that should not be updated. Note that this tool is currently on dev2dev, and will be supported in the next release of the WebLogic Platform.
Trading Partner Management (TPM) data includes trading partner profiles, certificates from keystores, service definitions, and service profiles. The Bulk Loader is a command-line tool that you can use to import, export, and delete TPM data. The Bulk Loader imports an XML representation of TPM data, and it exports an XML file.
bulkloader [-verbose] [-config <blconfig.xml>] [-wlibc] -import <data.xml> -export <data.xml> [-nokeyinfo] [-select <selector.xml>]
BEA Liquid Data for WebLogic server settings include Data Sources, Custom Functions, and Stored Queries. The settings are stored in repository files under the <ldrepository> folder. The following Ant script depicts how to invoke the Liquid Data tool to import server settings from the LDconfig.xml file. The same class also supports exporting of the server settings.
<java classname="com.bea.ldi.util.CfgImpExp" fork="true">
  <classpath>
    <pathelement location="<WL_HOME>/server/lib/weblogic.jar"/>
    <pathelement location="<REPOSITORY_DIR>/import-export-tools/CfgImpExp.jar"/>
    <pathelement location="<WL_HOME>/liquiddata/server/lib/wlldi.jar"/>
    <pathelement location="<WL_HOME>/liquiddata/server/lib/castor-0.9.3.9.jar"/>
    <pathelement location="<WL_HOME>/liquiddata/server/lib/xercesImpl.jar"/>
    <pathelement location="<WL_HOME>/liquiddata/server/lib/APP-INF/lib/LDS.jar"/>
  </classpath>
  <arg line="<ADMIN_ADDR> <ADMIN_PORT> <ADMIN_USERNAME> <ADMIN_PASSWORD> import <DOMAIN_HOME>/ldrepository/import_export/LDconfig.xml"/>
</java>
In a production-mode domain, WebLogic Server will not attempt to automatically provision the dependent resources when you deploy an application. For example, if your application contains conversational web services or Entity Beans, you need to load the conversation state table and Entity Bean tables before you deploy your application. Since you understand what needs to be provisioned, it can be easily automated through scripts. For more information on loading conversation state table and Entity Bean tables, refer to the previous article.
The need for automating production data provisioning is evidenced by the variety of production data in a typical platform environment. While some of them can be provisioned through WLST Online/Offline scripts, others can be exported or imported through specific tools provided by WebLogic Platform (for example, Portal Propagation utility and WLES Import/Export). In some rare cases, you might need to use database-level tools to export or import critical information stored in external database instances.
As the production environment evolves, there may be a need to apply further customization to the existing environment. For example, the administrator may need to change some security provider in the realm, or update existing cache policies. This is sometimes referred to as differential provisioning. Most of the differential provisioning actions are carried out through WebLogic Console and MBean calls. You can also use the template-based approach or the WLST-based approach to support various differential provisioning needs.
For example, you can use WLST Online to configure an additional JMS queue as a distributed queue, and set up the corresponding physical queue members across the cluster. One code sample in the wlst CodeShare project demonstrates how to add Application AsyncRequest Queues as Distributed Queues to a Cluster. Another code sample illustrates how to configure embedded LDAP or external LDAP through WLST Online.
While WebLogic Console and WLST Online scripts support differential provisioning to live servers, WLST Offline along with extension templates can be used to provision additional applications and services, such as JDBC or JMS components, and startup/shutdown classes. To apply extension templates, you need to shut down servers. The following WLST Offline script demonstrates a differential provisioning scenario where a WebLogic Workshop domain is first created, and the WebLogic Integration extension template is applied to the WebLogic Workshop domain using the addTemplate() and updateDomain() operations. As a result, you have a domain that supports both WebLogic Workshop and WebLogic Integration functionality.
readTemplate('<WL_HOME>/common/templates/domains/wlw.jar')
cd('/')
cd('Security/mydomain')
cd('User/weblogic')
cmo.setPassword('weblogic')
setOption('OverwriteDomain', 'true')
writeDomain('<BEA_HOME>/user_projects/domains/wlw-wli')
closeTemplate()
readDomain('<BEA_HOME>/user_projects/domains/wlw-wli')
setOption('ReplaceDuplicates', 'false')
addTemplate('<WL_HOME>/common/templates/applications/wli.jar')
updateDomain()
closeDomain()
Automating WebLogic Platform application provisioning through multiple stages requires a thorough understanding of the specific requirements at each stage and employing the right tools and utilities to implement the automation. In this article, we focused on the template-based approach and the WLST-based approach, and illustrated how to use these approaches effectively to automate some common provisioning scenarios.
I would like to thank Michael Meiner, Michael Zanchelli, Venkat Padmanabhan, and Juan Andrade for their review and comments.
Chengjiang (Andy) Lin is a senior software engineer at BEA Platform Engineering Team. Over the last 4 years at BEA, Andy has made contributions in areas including Platform Integration, Web Service Interoperability, WebLogic Clustering and Domain Provisioning Tools.
|
http://www.oracle.com/technetwork/articles/entarch/automatic-provisioning-098769.html?ssSourceSiteId=otnes
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
#include <unistd.h>

unsigned int sleep(unsigned int seconds);
The caller is suspended from execution for the number of seconds specified by the argument. The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal's catching routine. The suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system. The value returned by sleep() will be the ``unslept'' amount (the requested time minus the time actually slept) if the caller incurred premature arousal because of a caught signal.
The use of the sleep() function has no effect on the action or blockage of any signal. In a multithreaded process, only the invoking thread is suspended from execution.
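For example, a caller can detect a premature arousal by examining the return value; a minimal sketch:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* request a 5-second suspension; a caught signal may cut it short */
    unsigned int unslept = sleep(5);
    if (unslept > 0)
        printf("woken early; %u seconds were not slept\n", unslept);
    return (0);
}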
See attributes(5) for descriptions of the following attributes:
nanosleep(3C), attributes(5), standards(5)
|
http://docs.oracle.com/cd/E19082-01/819-2243/sleep-3c/index.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
#include <lqr.h>
Rescale operations involving both directions will dump two maps.
The dumped maps will be attached to the LqrCarver object and can be accessed at any later time using the functions lqr_vmap_list_foreach(3) or a combination of lqr_vmap_list_start(3), lqr_vmap_list_current(3) and lqr_vmap_list_next(3).
Using this setting is pointless if the LqrCarver object is not initialised.
The function lqr_carver_set_no_dump_vmaps reverts the effect of the previous one (but the maps dumped so far will be kept).
Note that it is also possible to dump the visibility maps manually; however, using the automatic dump is the only way to get intermediate maps when the function lqr_carver_resize(3) performs the rescaling in both directions, or in more than one step.
lqr_carver_init(3), lqr_carver_resize(3), lqr_carver_set_enl_step(3), lqr_carver_set_resize_order(3), lqr_carver_set_side_switch_frequency(3), lqr_carver_set_progress(3), lqr_carver_set_preserve_input_image(3), lqr_vmap_list_start(3), lqr_vmap_list_current(3), lqr_vmap_list_next(3), lqr_vmap_list_foreach(3), lqr_carver_set_use_cache(3)
|
http://www.makelinux.net/man/3/L/lqr_carver_set_no_dump_vmaps
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
On Mon, Aug 18, 2003 at 06:43:20PM +0100, Bruce Stephens wrote:
> > 4. A lot of documentation talks about people making new repositories
> > each year because old ones get "big"
>
> It's the number of revisions, I suspect.

Actually I think it's meant to address namespace pollution. If you're a good arch user and make branches out the wazoo, you may end up with a lot of dead-end branches, and maybe categories of failed projects, etc. They don't really interfere that much (unless for some reason you want to re-use a branch name or something, and don't like the idea of bumping the version), but do clutter things up, abrowse output, etc.

[As far as I can see, moving to a new archive doesn't affect the way revisions are applied at all; if you don't add some cached versions, it will happily go back to the old archive and apply revisions from there!]

I also think that it may simply be a good habit to at least occasionally move your archive -- it forces you to put in place the tools/whatever to make such a move possible, so if there comes a time when you're _forced_ to move you're in a better position. Some reasons you might be forced to move: (1) your email address changes, so the one in the archive name is bogus (it's just a string, but personally I don't want to publicize an obsolete email address), (2) your old archive becomes too big for any existing disk (you hacker stud!).... :-]

-Miles

--
A zen-buddhist walked into a pizza shop and said, "Make me one with everything."
|
http://lists.gnu.org/archive/html/gnu-arch-users/2003-08/msg00047.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
We all like C++'s container classes such as maps. The main negative thing about them is persistence. Ending your process makes the data structure go away. If you want to store it, you need to write code to serialise it to disk and then deserialise it back to memory again when you need it. This is tedious work that has to be done over and over again.
It would be great if you could command STL containers to write their data to disk instead of memory. The reductions in application startup time alone would be welcomed by all. In addition most uses for small embedded databases such as SQLite would go away if you could just read stuff from persistent std::maps.
The standard does not provide for this because serialisation is a hard problem. But it turns out this is, in fact, possible to do today. The only tools you need are the standard library and basic standards conforming C++.
Before we get to the details, please note this warning from the society of responsible coding.
What follows is the single most evil piece of code I have ever written. Do not use it unless you understand the myriad of ways it can fail (and possibly not even then).
The basic problem is that C++ containers work only with memory but serialisation requires writing bytes to disk. The tried and true solution for this problem is memory mapped files. It is a technique where a certain portion of process’ memory is mapped to a backing file. Any changes to the memory layout will be written to the disk by the kernel. This gives us memory serialisation.
This is only half of the problem, though. STL containers and others allocate the memory they need through operator new. The way new works is implementation defined. It may give out addresses that are scattered around the memory space. We can’t mmap the entire address space because it would take too much space and serialise lots of stuff we don’t care about.
Fortunately C++ allows you to specify custom allocators for containers. An allocator is an object that does memory allocations for the object it is tied to. This indirection allows us to write our own allocator that gives out raw memory chunks from the mmapped memory area.
But there is still a problem. Since pointers refer to absolute memory locations we would need to have the mmapped memory area in the same location in every process that wants to use it. It turns out that you can enforce the address at which the memory mapping is to be done. This gives us an outline on how to achieve our goal.
- create an empty file for backing (10 MB in this example)
- mmap it in place
- populate the data structure with objects allocated in the mmapped area
- close creator program
- start reader program, mmap the data and cast the root object into existance
And that’s it. Here’s how it looks in code. First some declarations:
void *mmap_start = (void*)139731133333504;
size_t offset = 1024;

template <typename T>
class MmapAlloc {
    ....
    pointer allocate(size_t num, const void *hint = 0) {
        long returnvalue = (long)mmap_start + offset;
        size_t increment = num * sizeof(T) + 8;
        increment -= increment % 8;
        offset += increment;
        return (pointer)returnvalue;
    }
    ...
};

typedef std::basic_string<char, std::char_traits<char>,
                          MmapAlloc<char>> mmapstring;
typedef std::map<mmapstring, mmapstring, std::less<mmapstring>,
                 MmapAlloc<mmapstring> > mmapmap;
First we declare the absolute memory address of the mmapping (it can be anything, as long as it won't overlap an existing allocation). The allocator itself is extremely simple: it just hands out memory starting offset bytes into the mapping, and increments offset by the number of bytes allocated (plus alignment). Deallocated memory is never actually freed; it remains unused (destructors are called, though). Last we have typedefs for our mmap-backed containers.
Population of the data sets can be done like this.
int main(int argc, char **argv) {
    int fd = open("backingstore.dat", O_RDWR);
    void *mapping;
    mapping = mmap(mmap_start, 10*1024*1024, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_FIXED, fd, 0);
    if(mapping == MAP_FAILED) {
        printf("MMap failed.\n");
        return 1;
    }
    mmapstring key("key");
    mmapstring value("value");
    if(fd < 1) {
        printf("Open failed.\n");
        return 1;
    }
    auto map = new(mapping)mmapmap();
    (*map)[key] = value;
    printf("Sizeof map: %ld.\n", (long)map->size());
    printf("Value of 'key': %s\n", (*map)[key].c_str());
    return 0;
}
We construct the root object at the beginning of the mmap and then insert one key/value pair. The output of this application is what one would expect.
Sizeof map: 1. Value of 'key': value
Now we can use the persisted data structure in another application.
int main(int argc, char **argv) {
    int fd = open("backingstore.dat", O_RDONLY);
    void *mapping;
    mapping = mmap(mmap_start, 10*1024*1024, PROT_READ,
                   MAP_SHARED | MAP_FIXED, fd, 0);
    if(mapping == MAP_FAILED) {
        printf("MMap failed.\n");
        return 1;
    }
    std::string key("key");
    auto *map = reinterpret_cast<std::map<std::string, std::string> *>(mapping);
    printf("Sizeof map: %ld.\n", (long)map->size());
    printf("Value of 'key': %s\n", (*map)[key].c_str());
    return 0;
}
Note in particular how we can specify the type as std::map<std::string, std::string> rather than the custom allocator version in the creator application. The output is this.
Sizeof map: 1. Value of 'key': value
It may seem a bit anticlimactic, but what it does is quite powerful.
Extra evil bonus points
If this is not evil enough for you, just think about what other things can be achieved with this technique. As an example you can have the backing file mapped to multiple processes at the same time, in which case they all see changes live. This allows you to have things such as standard containers that are shared among processes.
|
http://voices.canonical.com/jussi.pakkanen/tag/evil/
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
Duck Typing and Protocols vs. Inheritance
According to irb,
>> StringIO.is_a?(IO)
=> false
This seems illogical to me. Is this intentional? If so, why?
One answer to this:
Since StringIO doesn't use IO to base its implementation, it is not an IO.
Why does it matter? If you use standard duck typing, it shouldn't. Class hierarchy doesn't matter with duck typing. Classes are just an implementation detail with duck typing. You are best off not looking at #is_a?, #respond_to?, etc. Just assume the object can do the methods you'll be using, and use them.
This sounds like the basic idea of duck typing: instead of requiring an object to be of a certain type, it suffices that the object responds to a set of methods.
None of this is new or specific to Ruby. By replacing the word "methods" with the OOP term "messages", the above statement reads: "[...] it suffices that the object responds to a set of messages". This does sound a lot like the idea of a (network) protocol. After all, a system that understands and responds to messages such as "PUT", "GET", "POST", etc. sent to it in a certain manner, can be said to understand the HTTP protocol.
Let's go further with this idea: a client, say a web browser, only cares that a server it communicates with understands HTTP; it does not require the server to be of a certain type or make. After all: the web would have had a harder time growing, if, say, the Mosaic browser had been hardcoded to require the server on the other end of the line to be NSCA's or Netscape's HTTP servers. By ignoring the other end of the communication, and only requiring that a "GET" is answered in a certain way, both ends of the webs development (client, server) were able to evolve independently.
The similarity of this approach to protocols was clear to users of OOP languages long ago. Smalltalk and ObjectiveC, both dynamic OOP languages, have long used the term Protocol to refer to this concept.
The Protocol concept is certainly useful, if only to give a name to a particular set of messages. This also helps with clearing up the questions raised in the above mailing list post. What the StringIO and IO share is not common ancestry but a common Protocol.
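A small Ruby illustration of this point (the file name is hypothetical): code written against the IO protocol happily accepts a StringIO, despite the lack of common ancestry:

require 'stringio'

def first_line(io)
  io.gets          # relies only on the protocol, not the class
end

first_line(File.open("notes.txt"))          # a real IO
first_line(StringIO.new("hello\nworld\n"))  # => "hello\n"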
The concept is certainly not limited to dynamic languages. On the more static side, Java made a crucial step by introducing interfaces, as a way of:
- naming a set of messages
- statically checking that a class actually implements them
Of course, as is often the case with static guarantees, interfaces only check the presence of a set of methods with certain signatures - it doesn't guarantee the behavior or even the returned values of the methods. It's not even guaranteed that the method is invokable - it's still prudent to watch out for Java's java.lang.UnsupportedOperationException when working with, say, Java's Collection API.
Interfaces do have their own share of problems, such as evolvability. Changes to an interface are breaking changes for all classes implementing it. Big projects, such as Eclipse, have guidelines for API evolution, which rule that an interface can't be changed once it's been published. This sometimes yields the advice to favor Abstract classes over interfaces, since Abstract classes can add new functions, but define empty implementations, so subclasses can, but don't have to implement them. This problem could, of course, be solved by making fine grained interfaces, with one method each, and building the full interface up with new interfaces that extend the previous ones, each adding a single API call.
The problem of ensuring that an object supports a Protocol could of course be solved in Ruby by delegating the interface check to a predicate function. An example:
def supports_api?(obj)
  obj.respond_to?(:length) && obj.respond_to?(:at)
end
While this works, it's also fraught with subtle problems - respond_to? only works with methods defined in a class. If an object implements parts of its interface with method_missing (e.g. a Proxy), respond_to? would return false for these methods, although a call to them would work. This means that respond_to? checking can work, but it's not a drop-in replacement for static checks (Rick de Natale refers to this coding style as "Chicken Typing", i.e. it's a kind of defensive programming).
Another, static, approach to fine-grained interfaces are Scala's Structural Types:
def setElementText(element : { def setText(text : String) }, text : String) = {
  element.setText(text.trim()
    .replaceAll("\n", "")
    .replaceAll("\t", ""))
}
This function requires the argument element to have a setText(text : String) method, but does not care about the particular type of the object.
The focus on specific types or class hierarchies of an object also limits flexibility. An interface that requires an int parameter can only be called with an int or - through autoboxing - an Integer, but nothing else. A method that just requires the object to have a to_i method (that returns an integer number, either Fixnum or Bignum) is more flexible in the long run. Classes not under the developer's control might not inherit from the needed classes, although they might still support a particular Protocol.
Somewhere between the dynamic (duck typing) and static (interfaces, Scala's Structural Types) approaches are Erlang's Behaviors. Erlang's modules (which are comparable to Ruby's Modules) group functions together. A module can use -behaviour(gen_server). to state that it implements the Behavior gen_server - which means that it exports a certain set of functions. This is supported by the Erlang compiler, which complains if the module doesn't export all of the Behavior's required functions.
This also shows that the general principle of Protocols is not limited to languages that define themselves as "OOP" languages by providing concepts of Class, Object, Inheritance, etc.
How do you document the use of Protocols in Ruby, i.e. how do you talk about the set of messages that interacting objects need to agree upon?
In Groovy
by
Steven Devijver
But in Groovy you can implement methodMissing() so that methods that are called are added to the type. The next time you call such a method it will go directly to that method and no longer through methodMissing(). respondsTo() would also recognize the method after the first invocation.
Grails uses this approach all over the place. See one of Graeme's talks on the Grails Exchange for more details.
Also in Groovy, when I realize a method I expect to be there isn't, I just add it using ExpandoMetaClass. In this way anything can be turned into a duck.
Re: In Groovy
by
Michael Neale
Is it because I'm Canadian? or..
by
Deborah Hartmann
Re: In Groovy
by
Paul King
If it is for greater clarity at the source code level, in Groovy you would mix and match Java's static checking techniques with Groovy's dynamic typing capabilities. You could statically define the type of a variable (as a fine-grained interface, of course) or you could implement an interface in your class definition if you wanted. These techniques give you a more declarative specification of your objects' protocol(s) than going without. That doesn't mean you abandon dynamic typing where it makes sense, and you should still think fine-grained (interface-oriented programming) when designing interfaces anyway. It certainly puts Groovy miles ahead of many other dynamic languages for those scenarios where you want to apply a declarative style.
If it is for more robustness in your code, then you have various techniques available at the metaprogramming layer. In Groovy, I would possibly hook into the beforeInvoke() interceptor or the methodMissing() level if needed. Groovy is a little more powerful than (but mostly on par with) other dynamic languages in this regard.
Python and Zope interfaces
by
Kevin Teague
Zope tackled the problem by using metaprogramming to supply their own interface implementation:
griddlenoise.blogspot.com/2005/12/zope-componen...
"This problem could, of course, be solved by making fine grained interfaces, with one method each, and building the full interface up with new interfaces that extend the previous ones, each adding a single API call."
Interfaces are just a formal or semi-formal way of describing a specification. Changing the specs means breaking existing code. Interfaces in Zope are usually inherited in a fine-grained manner - not always as fine-grained as one method or attribute per interface, but it is good design to have each interface as small as makes sense, and to add new functionality by layering on more specific interfaces.
The real joy though is when you can apply these same principles of interface specificity when doing data modelling and working with a persistence layer that allows complex model inheritance structures without messy ORM issues ...
from zope.interface import Interface
from zope.schema import TextLine, SourceText

class IBaseContent(Interface):
    "Shared attributes amongst all content objects"
    title = TextLine(title=u'Title')
    description = SourceText(title=u'Description')

class IDocument(IBaseContent):
    "Plain document"
    body = SourceText(title=u'Body')

class INewsItem(IDocument):
    "News Item"
    slug = SourceText(title=u'Slug for lead-in text')
You can declare that an object provides an implementation "after the fact" with the Zope Component Architecture. That is, you can use the equivalent of method_missing, or open a class and add the required methods, and then declare that your object or class now provides a certain interface.
>>> class Whatever(object):
...     "some random plain old object"
...
>>> random_obj = Whatever()
>>> interface.directlyProvides(random_obj, IDocument)
>>> IDocument.providedBy(random_obj)
True
Kind of a convoluted example, as normally you would have a concrete class that declares that it implements specific interfaces.
This can be persisted in Zope as easily as working with any normal Python object, using a container that implements a mapping interface:
>>> mydatacontainer['some-unique-name'] = random_obj
Re: In Groovy
by
Rusty Wright
public interface Graphics {
    void draw();
}

public interface DeckOfCards {
    void draw();
}

public final class DrawThing implements DeckOfCards, Graphics {
    public void draw() {
    }
}
Duck Typing.
by
Porter Woodward
Chicken typing indeed. All of a sudden you're sprinkling your code with a ton of respond_to? clauses, trying to find out if a given object can respond to a particular call. On the one hand it's a workable solution - especially if you're used to programming against low-level hardware (better to ask the CPU or GPU if it has a capability than to ask what type it is and make assumptions about its functionality). But sometimes you'd like to be able to ask - are you a chicken, or are you a duck? - and get an honest answer.
And while interfaces codify this very well - you rightly point out that there is no guarantee that a given interface is even implemented correctly. There's no way to determine that in languages that don't have them, anyway - just because an object .respond_to? a given method is no guarantee that the internal implementation of that method is any good.
Strongly recommend checking out Interface Oriented Programming from Pragmatic.
Re: Duck Typing.
by
Kevin Teague
Say you have defined your interfaces as:
class IQuacker(Interface):
    "It quacks!"
    def quack(self):
        "Make some noise"

class IDuck(IQuacker):
    "Definition of a duck"

class IPlatypus(IQuacker):
    "Definition of a platypus"
Then you have some basic implementations:
class Duck(object):
    implements(IDuck)
    def quack(self):
        print "Duck noise is the only noise!"

class Platypus(object):
    implements(IPlatypus)
    def quack(self):
        print "Platypus noise is better than duck noise!"
Then in your client code:
>>> platypus = Platypus()
>>> IQuacker.providedBy(platypus)
True
>>> IPlatypus.providedBy(platypus)
True
>>> IDuck.providedBy(platypus)
False
Without interfaces, this distinction is possible but needs to deal with implementation-specific details:
>>> platypus = Platypus()
>>> hasattr(platypus,'quack')
True
>>> isinstance(platypus,Platypus)
True
>>> isinstance(platypus,Duck)
False
In the second case, where you are programming to an implementation, your code would not work if you had a test suite that used a mock implementation:
class MockPlatypus(object):
    implements(IPlatypus)
    def quack(self):
        pass
This mock object will still pass the method-names check, but because it has a different implementation class, the check for the class type will not act as desired.
Sounds familiar...
by
Rickard Öberg
By focusing on interfaces, and allowing an object to easily implement as many interfaces as are necessary without having to use class inheritance or manual delegation to implement it, it becomes a lot easier to write and (re)use code.
Re: Duck Typing.
by
Porter Woodward
I agree to an extent. However, there are times when the implementation becomes important - hence my mention of low-level interfaces: CPU and/or GPU. Sometimes you really need to know whether something is implemented - and _how_ it's implemented. For a lot of high-level application programming, absolutely, the goal should be to program to the interface - hence my citing of Interface Oriented Design at the end of my post.
Even in higher-level application programming you sometimes need to _know_; for example, you might have abstracted away a lot of your persistence to a database. It can still be important to know some details about the underlying database implementation. Ideally those are encapsulated in the persistence layer - but the persistence layer will do those queries to determine the functionality of the underlying database, and specifically _how_ certain features are implemented, in order to provide a common interface to users of the API regardless of the underlying implementation.
So - while it's all very well and good to say the GoF tell us to program to interfaces - there also needs to be the realization that sometimes, to produce an elegant interface, the code underneath may have some very un-OOP properties. To make that process a little easier, it can be handy to have features that allow a high degree of introspection (is_a, responds_to, etc.). Both from the perspective of building working code and of building tools to work with the code, such features make it a little easier to reverse engineer a working code base, and can make automated refactorings a little easier.
Re: Duck Typing.
by
John DeHope
I think if architects first demanded that code be driven by the code layer (as I have defined it) and only then by the application layer, we'd get better enterprise code. My point is that I find it worse to see data-access-layer code that has generic library and custom business logic mixed together than to see controllers with a bit of data-access and user-interface code mixed together.
Let me say that another way... I'd rather see application code that communicates with two different app layers, both through interfaces rather than implementations, than code that communicates with low-level library or system implementations as well as higher-level interfaces.
Regarding Chicken Typing
by
Craig McClanahan
The article suggests that an object implementing its interface via method_missing() would give incorrect answers for the respond_to?() test. That is true if you rely on the default inherited implementation. However, an intelligently designed class that uses this sort of dynamic implementation is also free to override respond_to?() so that it works as expected.
An example of this technique is the way that ActiveRecord::Base overrides respond_to?() to return true for all the attribute getters and setters inferred from the underlying table, even though these methods are not directly declared.
commonalities in class hierarchy
by
Arash Bizhanzadeh
StringIO and IO don't need to go back to the same superclass, because they don't share any commonalities except one: a set of methods they support.
I am a little confused here. Aren't methods the most important characteristics of classes? If they share a set of similar methods, I assume that they have something in common.
On the other hand, if the concept of duck typing and protocols can be adopted, why should anybody bother with inheritance?
Re: Duck Typing.
by
Kevin Teague
Sometimes you really need to know whether something is implemented - and _how_ it's implemented.
Using interfaces does not have to mean that you should or need to be detached from implementation details. In the end, it's the implementations that do all the work.
Going back to the Zope Component Architecture, it's possible, and common, for your implementations to be named, and then you can ask for a specific implementation by name. If you had an ORM Interface and two implementations:
class ISimpleORM(Interface):
    def do_orm_stuff(self, data):
        "world's worst ORM interface"

class GenericPoorPerformingORM(object):
    implements(ISimpleORM)
    def do_orm_stuff(self, data):
        # bunch of implementation details
        pass

class MySQLSpecificORM(object):
    implements(ISimpleORM)
    def do_orm_stuff(self, data):
        # bunch of MySQL-specific implementation details
        pass
Then you might use specific implementations by registering those implementations and querying for them:
# registration (usually done in XML or with 'convention over configuration')
component.provideUtility(GenericPoorPerformingORM(), ISimpleORM, 'generic')
component.provideUtility(MySQLSpecificORM(), ISimpleORM, 'mysql-tuned')
# later on in your application code
ORM_IMPLEMENTATION = 'mysql-tuned'
orm = component.getUtility(ISimpleORM, ORM_IMPLEMENTATION)
Traits
by
Alejandro Gonzalez...
It helps to construct classes by composing them from behavioral building blocks (called traits), so a class can respond to a complete set of protocols without having to use inheritance as the fundamental tool for it.
It's not a concept that replaces single inheritance but one that complements it. It also helps a lot with code reuse.
It was already implemented in Squeak (an open source Smalltalk dialect) and IMHO it is a promising concept.
Re: Traits
by
Kevin Teague
Smalltalk's Traits are very similar to Python's zope.interface, and Perl 6 is applying the same concept under the name Roles. They all differ somewhat in the details, but they all aim to solve the problem as eloquently stated in the Smalltalk paper: "The purpose of traits is to decompose classes into reusable building blocks by providing first-class representations for the different aspects of the behaviour of a class."
Re: Is it because I'm Canadian? or..
by
Keith Thomas
Manage, Administrate and Monitor GlassFish v3 from Java code using AMX & JMX
Manage, Administrate and Monitor GlassFish v3 using Application Server Management Extensions (AMX) & The Java Management Extensions (JMX)
Management is one of the most crucial parts of an application server's set of functionalities. Development of the application which we deploy into the server happens once, with minor iterations during the software lifecycle, but management is a lifetime task. One of the very powerful features of the GlassFish application server is the rich set of administration and management channels that it provides for the different levels of administrators and developers who want to extend the application server's administration and management interfaces.
GlassFish, as an application server capable of serving mission-critical and large-scale applications, benefits from several administration channels, including the CLI, the web-based administration console, and finally the possibility to manage the application server using the standard Java Management Extensions (JMX).
Not only does GlassFish fully expose its management functionalities as JMX MBeans, but it also provides a much easier way to manage the application server using local objects which proxy the JMX MBeans. These local objects are provided as the AMX APIs, which lift the need to learn JMX for administrators and developers who want to interact with the application server from code.
GlassFish provides very powerful monitoring APIs in the form of AMX MBeans, which let developers and administrators monitor any aspect of anything inside the application server using Java code, without needing to understand the JMX APIs or the complexity of monitoring factors and statistics gathering. These monitoring APIs allow developers to monitor a bulk of Java EE functionalities together, or just a single attribute of a single configuration piece.
GlassFish's self-management capability is another powerful feature, based on the AMX and JMX APIs, that lets administrators easily automate daily tasks which would otherwise consume a significant amount of time. Self-management can manage the application server dynamically by monitoring it at runtime and changing its configuration dynamically based on predefined rules.
1 Java Management eXtension (JMX)
JMX, native to the Java platform, was introduced to give Java developers a standard, easy-to-learn way of managing and monitoring their Java applications and Java-enabled devices. As architects, designers and developers of Java applications - which can be as small as an in-house invoice management system or as big as a running stock exchange system - we need a way to expose the management of our software to industry-accepted management tools, and JMX is the answer to this need.
1.1 What is JMX?
JMX is a part of the Java Standard Edition; it has been present since the early days of the Java platform and has seen many enhancements during the platform's evolution. The JMX-related specifications define the architecture, design patterns, APIs, and services in the Java programming language for managing and monitoring applications and Java-enabled devices.
Using JMX, we can develop Java classes which perform management and monitoring tasks and expose a set of their functionalities or attributes by means of an interface, which is later exposed to JMX clients through specific JMX services. The objects which we use to perform and expose management functionalities are called Managed Beans, or MBeans for short.
In order for MBeans to be accessible to JMX clients, which will use them to perform management tasks or gather monitoring data, they need to be registered in a registry which later lets our JMX client application find and initialize them. This registry is one of the fundamental JMX services and is called the MBean Server.
Now that we have our MBeans registered with a registry, we need a way for clients to communicate with the running application that registered them, in order to execute the MBeans' operations; this part of the system is the JMX connectors, which let us communicate with the agent from a remote or local management station. The JMX connector and adapter API provides a two-way converter which can transparently connect to a JMX agent over different protocols, and provides a standard way for management software to communicate with JMX agents regardless of the communication protocol.
1.2 JMX architecture
JMX benefits from a layered architecture, heavily based on interfaces, to provide independence between the different layers in terms of how each layer works and how data and services are provided to each layer by the one below it.
We can divide the JMX architecture into three layers. Each layer relies only on the layer directly below it and is not aware of the functionalities of the layers above it. These layers are: instrumentation, agent, and management. Each layer provides services either for other layers, for in-JVM clients, or for remote clients running in other JVMs. Figure 1 shows the different layers of the JMX architecture.
Figure 1 JMX layered architecture and each layer's components
Instrumentation layer
This layer contains MBeans and the resources that MBeans are intended to manage. Any resource that has a Java object representative can be instrumented by MBeans. MBeans can change the values of an object's attributes or call its operations, which can affect the resource that this particular Java object represents. In addition to MBeans, the notification model and the MBean metadata objects belong to this layer. There are two different types of MBeans for different use cases:
§ Standard MBeans: Standard MBeans consist of an MBean interface, which defines the exposed operations and properties (using getters and setters), and the MBean implementation class. The implementation class and interface naming must follow a standard naming pattern for Standard MBeans: the interface is named ClassNameMBean and the implementation class is ClassName. There is another kind of standard MBean, called MXBeans, which lifts the need to follow the naming pattern: the interface is named AnythingMXBean and the implementation class can have any name. We will discuss this naming matter in more detail later on.
§ Dynamic MBeans: A dynamic MBean implements javax.management.DynamicMBean instead of implementing a static interface with a set of predefined methods. Dynamic MBeans rely on javax.management.MBeanInfo, which represents the attributes and operations they expose. MBean client applications call generic getters and setters whose implementation must resolve the attribute or operation name to its intended behavior. Faster implementation of JMX management MBeans for an already completed application and the amount of information provided by the MBean metadata classes are two benefits of Dynamic MBeans.
§ Notification Model: JMX introduces a notification model based on the Java event model. Using this event model, MBeans can emit notifications, and any interested party - a management application or another MBean - can receive and process them.
§ MBean Metadata Classes: These classes contain the structures to describe all components of an MBean's management interface, including its attributes, operations, notifications, and constructors. For each of these, the MBeanInfo class includes a name, a description, and its particular characteristics (for example, whether an attribute is readable, writeable, or both; for an operation, the signature of its parameter and return types). A small sketch of querying this metadata follows.
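As a small illustration of the metadata layer, the sketch below (my own, not part of the specification or this chapter's listings) asks a running MBean server for the MBeanInfo of an already registered MBean and prints its attributes and operations. The ObjectName reused here, article:name=firstWorkerBean, is the one registered in listing 3 later in this chapter.

import java.lang.management.ManagementFactory;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanInfo;
import javax.management.MBeanOperationInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MetadataDump {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Assumes an MBean is already registered under this name,
        // as done in listing 3 of this chapter.
        ObjectName name = new ObjectName("article:name=firstWorkerBean");
        MBeanInfo info = mbs.getMBeanInfo(name);
        for (MBeanAttributeInfo a : info.getAttributes()) {
            System.out.println("attribute: " + a.getName()
                    + " readable=" + a.isReadable()
                    + " writable=" + a.isWritable());
        }
        for (MBeanOperationInfo o : info.getOperations()) {
            System.out.println("operation: " + o.getName());
        }
    }
}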
Agent layer
This layer contains the JMX agents, which are intended to expose the MBeans to management applications. The JMX agent implementation specifications fall under this layer. Agents are usually located in the same JVM as the MBeans, but this is not an obligation. A JMX agent consists of an MBean server and some helper services which facilitate MBean operations. Management software accesses the agent through an adapter or connector, based on the management application's communication protocol.
§ MBean Server: This is the MBean registry, where management applications look to find which MBeans are available to them. The registry exposes the MBeans' management interfaces, not the implementation classes. It provides two interfaces for accessing the MBeans, one for remote clients and one for clients in the same JVM. MBeans can be registered by other MBeans, by the management application, or by the agent itself. MBeans are distinguished by a unique name, which we will discuss more in the AMX section.
§ Agent Services: There are some helper services for MBeans and the agent which facilitate certain functionalities. These services include: timers, a dynamic class loader, observers to observe numeric or string-based properties of MBeans, and finally a relation service which defines associations between MBeans and enforces the cardinality of the relations based on predefined relation types.
Management layer
The management tier contains the components required for developing management applications capable of communicating with JMX agents. Such components provide an interface for a management application to interact with JMX agents through a connector. This layer may contain multiple adapters and connectors to expose the JMX agent and its attached MBeans to different management platforms like SNMP, or to expose them in a semantically rich format like HTML.
JMX related JSRs
Six different JSRs have been defined for the JMX-related specifications during the past 10 years. These JSRs are:
§ JMX 1.2 (JSR 3): The first versions of the JMX specification (JMX was later bundled with the platform as of J2SE 5.0)
§ J2EE Management (JSR 77): A set of standard MBeans to expose application servers' resources, like applications, domains, and so on, for management purposes
§ JMX Remote API 1.0 (JSR 160): Interaction with JMX agents using RMI from a remote location
§ Monitoring and Management Specification for the JVM (JSR 174): A set of APIs and standard MBeans for exposing the JVM's management to any interested management software
§ JMX 2.0 (JSR 255): The new version of JMX for Java 7, which introduces the use of generics, annotations, extended monitors, and so on
§ Web Services Connector for JMX Agents (JSR 262): Defines a specification for using Web Services to access JMX instrumentation remotely
1.3 JMX benefits
What are the JMX benefits that led the JCP to define so many JSRs for it - and why not follow another management standard, like IEEE Std 828-1990? The reasons lie in the following JMX benefits:
§ Java needs an API that is open to extension and closed to change for integration with emerging requirements and technologies; JMX achieves this through its layered architecture.
§ JMX is based on well-defined and proven Java technologies, like the Java event model, for providing some of the required functionalities.
§ The JMX specification and implementation let us use it in any Java-enabled software, at any scale.
§ Almost no change is required for an application to become manageable by JMX.
§ Many vendors use Java to enable their devices; JMX provides one standard to manage both software and hardware.
You can imagine many other benefits of JMX beyond those listed above.
1.4 Managed Beans (MBeans)
We discussed that there are generally two types of MBeans which we can choose from to implement our instrumentation layer. Dynamic MBeans are a bit more complex and we would rather skip them in this crash course, so in this section we will discuss how MXBeans can be developed and used, locally and remotely, to prepare ourselves for understanding and using AMX to manage GlassFish.
We said that we should write an interface which defines all the exposed operations of the MBean, both for MXBeans and standard MBeans. So first we will write the interface. Listing 1 shows the WorkerMXBean interface: it exposes an operation which stops the worker threads, plus two properties returning the current number of worker threads and the maximum number of worker threads. The current number of worker threads is read-only, while the maximum number of threads is both readable and updatable.
Listing 1 The MXBean interface for WorkerMXBean
@MXBean
public interface WorkerIF {
    public int getWorkersCount();
    public int getMaxWorkers();
    public void setMaxWorkers(int newMaxWorkers);
    public int stopAllWorkers();
}
I did not tell you before that we can forget about the naming convention for MXBean interfaces if we intend to use Java annotations. As you can see, we simply marked the interface as an MBean interface and defined some setter and getter methods, along with one operation which will stop some workers and return the number of stopped workers.
The implementation of our MXBean interface will just implement the getters and setters, along with a dummy operation which prints a message to standard output.
Listing 2 the Worker MXBean implementation
public class Worker implements WorkerIF {
    private int maxWorkers;
    private int workersCount;

    public Worker() {
    }

    public int getWorkersCount() {
        return workersCount;
    }

    public int getMaxWorkers() {
        return maxWorkers;
    }

    public void setMaxWorkers(int newMaxWorkers) {
        this.maxWorkers = newMaxWorkers;
    }

    public int stopAllWorkers() {
        System.out.println("Stopping all workers");
        return 5;
    }
}
We did not follow any naming convention because we are using an MXBean along with the annotation. If it were a standard MBean, we would have had to name the interface WorkerMBean and the implementation class Worker, as contrasted in the sketch below.
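For contrast, here is a minimal sketch of the same bean written as a Standard MBean, where the naming pattern is mandatory (this variant is mine, not one of the chapter's listings):

// Standard MBean: the interface MUST be named <ImplementationClass>MBean.
public interface WorkerMBean {
    int getWorkersCount();
    int getMaxWorkers();
    void setMaxWorkers(int newMaxWorkers);
    int stopAllWorkers();
}

The implementation class would then have to be called Worker, exactly as in listing 2, for the MBean server to accept the registration.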
Now we should register the MBean with an MBean server to make it available to management software. Listing 3 shows how we can develop a simple agent which hosts the MBean server along with the registered MBean.
Listing 3 How the MBean server works in a simple agent named WorkerAgent
public class WorkerAgent {
    public WorkerAgent() {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer(); #1
        Worker workerBean = new Worker(); #2
        ObjectName workerName = null;
        try {
            workerName = new ObjectName("article:name=firstWorkerBean"); #3
            mbs.registerMBean(workerBean, workerName); #4
            System.out.println("Enter to exit..."); #5
            System.in.read();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String argv[]) {
        WorkerAgent agent = new WorkerAgent();
        System.out.println("Worker Agent is running...");
    }
}
At #1 we get the platform MBean Server to register our MBean. Platform MBean server is the default JVM MBean server. At #2 we initialize an instance of our MBean.
At #3 we create a new ObjectName for our MBean. Each JVM may use many libraries, each of which can register tens of MBeans, so MBeans must be uniquely identified in the MBean server to prevent naming collisions. The ObjectName follows a format which ensures the MBean is shown in the correct place in the management tree and removes any possibility of a naming conflict. An ObjectName is made up of two parts, a domain and a set of name-value pairs, separated by a colon. In our case the domain portion is article and the key property is name=firstWorkerBean.
At #4 we register the MBean with the MBean server. At #5 we just make sure that our application will not exit immediately, so that we can examine the MBean.
Sample code for this chapter is provided along with the book; you can run the samples by following the readme.txt file included in the chapter06 directory of the source code bundle. Using the sample code you just use Maven to build and run the application and JConsole to monitor it, but the behind-the-scenes procedure is described in the following paragraphs.
Now let's run the application and see how our MBean appears in a management console - the standard JConsole bundled with the JDK. To enable the JMX management agent for local access we need to pass -Dcom.sun.management.jmxremote to the JVM. This flag lets the management console use inter-process communication to talk to the management agent. Once the application is running, open a terminal window and run jconsole. When JConsole opens, it shows a window which lets us select either a remote or a local JVM to connect to. Just scan the list of local JVMs to find WorkerAgent under the Name column; select it and press Connect. Figure 2 shows the New Connection window of JConsole.
Figure 2 The New Connection window of JConsole
Now you will see how JConsole shows different aspects of the selected JVM, including memory meters, thread status, loaded classes, a JVM overview which includes the OS overview, and finally the MBeans. Select the MBeans tab and you will see a tree of all MBeans registered with the platform MBean server, including your own. Remember what I said about the ObjectName class: the tree clearly shows how a domain includes its child MBeans. When you expand the article node you will see something similar to figure 3.
Figure 3 JConsole navigation tree and the effect of the ObjectName format on the MBean's placement in the tree
And if you click on the stopAllWorkers node, the content panel of JConsole will load a page similar to figure 4, which also shows the result of executing the method.
Figure 4 The content panel of JConsole after selecting the stopAllWorkers method of the WorkerMBean
That was how we connect to a local JVM; in section 1.6 we will discuss connecting to a JVM process from a remote location to manage the system using JMX.
1.5 JMX Notification
The JMX API defines a notification and notification-subscription model to enable MBeans to generate notifications that signal a state change, a detected event, or a problem.
To generate notifications, an MBean must implement the interface NotificationEmitter or extend NotificationBroadcasterSupport. To send a notification, we construct an instance of the class javax.management.Notification or one of its subclasses, like AttributeChangeNotification, and pass the instance to NotificationBroadcasterSupport.sendNotification.
Every notification has a source. The source is the object name of the MBean that generated the notification.
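As a hedged sketch of the emitter side (my own variant of the Worker bean, not one of the chapter's listings), the class below extends NotificationBroadcasterSupport and emits an AttributeChangeNotification whenever MaxWorkers is updated:

import javax.management.AttributeChangeNotification;
import javax.management.NotificationBroadcasterSupport;

public class NotifyingWorker extends NotificationBroadcasterSupport
        implements WorkerIF {
    private int maxWorkers;
    private int workersCount;
    private long sequence = 1; // per-emitter notification sequence number

    public int getWorkersCount() { return workersCount; }
    public int getMaxWorkers() { return maxWorkers; }

    public void setMaxWorkers(int newMaxWorkers) {
        int oldValue = this.maxWorkers;
        this.maxWorkers = newMaxWorkers;
        // Arguments: source, sequence number, timestamp, message,
        // attribute name, attribute type, old value, new value.
        sendNotification(new AttributeChangeNotification(
                this, sequence++, System.currentTimeMillis(),
                "MaxWorkers changed", "MaxWorkers", "int",
                oldValue, newMaxWorkers));
    }

    public int stopAllWorkers() {
        return 0; // dummy operation, as in listing 2
    }
}

Any listener registered for this MBean with the MBean server will then receive the change events, subject to whatever notification filter it supplied.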
1.6 Remote management
To manage our worker application from a remote location using a JMX console like JConsole, we just need to ensure that an RMI connector is open in our JVM and that all the appropriate settings - port number, authentication mechanism, transport security and so on - are provided. To run our sample application with remote management enabled, we can pass the following parameters to the java command:
-Dcom.sun.management.jmxremote.port=10006 #1
-Dcom.sun.management.jmxremote.authenticate=true #2
-Dcom.sun.management.jmxremote.password.file=passwordFile.txt #3
-Dcom.sun.management.jmxremote.ssl=false #4
At #1 we choose a port for the RMI connector to listen on for incoming connections. At #2 we enable authentication in order to protect our management portal. At #3 we provide the path to a password file which contains the list of usernames and passwords in plain text. In the password file, each username and password pair is placed on one line with a space between them (for example, a line reading admin adminadmin). At #4 we disable SSL for the transport layer.
To connect to a JVM started with these parameters, we choose Remote in the New Connection window of JConsole and provide service:jmx:rmi:///jndi/rmi://127.0.0.1:10006/jmxrmi as the Remote Process URL, along with one of the credentials we defined in passwordFile.txt.
2 Application Server Management eXtension (AMX)
GlassFish fully adheres to J2EE Management (JSR 77) in terms of exposing the application server configuration as JMX MBeans, but dealing with JMX directly is neither easy nor pleasant for all developers. So Sun has included a set of client-side proxies over the JSR 77 MBeans, plus additional MBeans of their own, to make the presence of JMX completely hidden from developers who want to build management extensions for GlassFish. This API set is named AMX, and we usually use it to develop management rules.
2.1 J2EE management (JSR 77)
Before we dig into AMX we need to know what JSR 77 is and how it helps us use JMX and AMX for managing the application server. The JSR 77 specification introduces a set of MBeans and services which let any JMX-compatible client manage a Java EE container's deployed objects and Java EE services.
The specification defines a set of MBeans which models all Java EE concepts in a hierarchy of MBeans. It determines which attributes and operations each MBean must have, and what the effect of calling a method on the managed objects should be. The specification also defines a set of events which should be exposed to JMX clients by the MBeans, as well as a set of attribute statistics which should be exposed by the JSR 77 MBeans to JMX clients for performance monitoring.
Services and MBeans provided by JSR 77 cover:
§ Monitoring performance statistics of managed artifacts in the Java EE container - managed objects like EJBs, Servlets, and so on.
§ Event subscription for important events of the managed objects, like stopping or starting an application.
§ Managing the state of the standard managed objects of the Java EE container, like changing an attribute of a JDBC connection pool or undeploying an application.
§ Navigation between managed objects.
Managed objects
The artifacts whose management and monitoring JSR 77 exposes to JMX clients include a broad range of Java EE components and services. Figure 5 shows the first level of the managed objects hierarchy. As you can see in the figure, all objects inherit four attributes which are later used to determine whether an object's state is manageable, whether the object provides statistics, and whether the object provides events for its important actions.
The objectName attribute, which we discussed before, has the same use and format; for example, amx:j2eeType=X-JDBCConnectionPoolConfig,name=DerbyPool represents a connection pool named DerbyPool in the amx domain, and the j2eeType attribute shows the MBean's type.
Figure 5 First level of the hierarchy of JSR 77 managed objects which, based on the specification, must be exposed for JMX management
Now that you have seen how broad the scope of JSR 77 is, you may ask what the use of these MBeans is, and how they work and can be used.
The simple answer is that as soon as a managed object becomes live in the application server, a corresponding JSR 77 MBean is initialized for it by the application server's management layer. For example, as soon as you create a JDBC connection pool, a new MBean representing the newly created pool will appear under the JDBCConnectionPoolConfig node of a connected JConsole. Figure 6 shows the DerbyPool under the JDBCConnectionPoolConfig node in JConsole.
Figure 6 The DerbyPool place under JDBCConnectionPoolConfig node in JConsole
In figure 5 you can see that we can manage all deployed objects using the JSR 77 exposed MBeans; these objects are J2EE applications and modules, which are shown in figure 7. A deployed application may have EJB, Web, and other modules; the GlassFish management service will initialize a new instance of the appropriate MBeans for each module in the deployed application. The management service uses the ObjectName, which we discussed before, to uniquely identify each MBean instance, and later on we will use this unique name to access each deployed artifact independently.
Figure 7 The specification urges implementation of MBeans to manage all the deployable objects shown
Getting lower into the deployed modules, we have Servlets, EJBs and so on. The JSR 77 specification provides MBeans for managing EJBs and Servlets. The EJB MBeans inherit from the EJB MBean and include all the attributes and methods necessary to manage the different types of EJBs. Figure 8 represents the EJB subclasses.
Figure 8 All MBeans related to EJB management in GlassFish application server, based on JSR 77
Java EE resources are one of the most colorful areas of the Java EE specification, and JSR 77 provides MBeans for managing all the standard resources managed by Java EE containers. Figure 9 shows which types of resources are exposed for management by JSR 77.
Figure 9 Java EE resources manageable by the JSR 77 MBeans
All of the Java EE base concepts are covered by JSR 77 in order to make it possible for third-party management solution developers to integrate Java EE application server management into their solutions.
The events propagated by JSR 77
At the beginning of this section we talked about the events that interested parties can receive from JSR 77 MBeans. These events are as follows for the different types of JSR 77 MBeans:
§ J2EEServer: An event when the corresponding server enters the RUNNING, STOPPED, or FAILED state
§ EntityBean: An event when the corresponding Entity Bean enters the RUNNING, STOPPED, or FAILED state
§ MessageDrivenBean: An event when the corresponding Message Driven Bean enters the RUNNING, STOPPED, or FAILED state
§ J2EEResource: An event when the corresponding J2EE Resource enters the RUNNING, STOPPED, or FAILED state
§ JDBCResource: An event when the corresponding JDBC data source enters the RUNNING, STOPPED, or FAILED state
§ JCAResource: An event when a JCA connection factory or managed connection factory enters the RUNNING, STOPPED, or FAILED state
The monitoring statistics exposed by JSR 77
The specification urges exposing some statistics related to the different Java EE components through the JSR 77 MBeans. The statistics required by the specification include:
§ Servlet statistics: Servlet-related statistics, which include the number of currently loaded Servlets, the maximum number of loaded Servlets which were active, and so on.
§ EJB statistics: EJB-related statistics, which include the number of currently loaded EJBs, the maximum number of live EJBs, and so on.
§ JavaMail statistics: JavaMail-related statistics, which include the maximum number of sessions, the total count of connections, and so on.
§ JTA statistics: Statistics for JTA resources, which include successful transactions, failed transactions, and so on.
§ JCA statistics: JCA-related statistics, which include both the non-pooled connections and the connection pools associated with the referencing JCA resource.
§ JDBC resource statistics: JDBC-resource-related statistics for both non-pooled connections and the connection pools associated with the referencing JDBC resource or connection factory. The statistics include the total number of opened and closed connections, the maximum number of connections in the pool, and so on. This is really helpful for finding connection leaks in a specific connection pool.
§ JMS statistics: JMS-related statistics, including statistics for connections, sessions, JMS producers, and JMS consumers.
§ Application server JVM statistics: JVM-related statistics - information like the sizes of the different memory sectors, threading information, class loaders and loaded classes, and so on.
2.2 Remotely accessing JSR 77 MBeans by Java code
Now that we have discussed the details of the JSR 77 MBeans, let's see how we can access the DerbyPool MBean from Java code and change the attribute which represents the maximum number of connections in the connection pool. Listing 4 shows sample code which accesses a GlassFish instance on the default port for the JMX listener (the default port is 8686).
Listing 4 Accessing a JSR 77 MBean for changing DerbyPool’s MaxPoolSize attribute.
public class RemoteClient {
    private MBeanServerConnection mbsc = null;
    private ObjectName derbyPool;

    public static void main(String[] args) {
        try {
            RemoteClient client = new RemoteClient(); #A
            client.connect(); #B
            client.changeMaxPoolSize(); #C
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void connect() throws Exception {
        JMXServiceURL jmxUrl =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:8686/jmxrmi"); #1
        Map env = new HashMap();
        String[] credentials = new String[]{"admin", "adminadmin"};
        env.put(JMXConnector.CREDENTIALS, credentials); #2
        JMXConnector jmxc =
            JMXConnectorFactory.connect(jmxUrl, env); #3
        mbsc = jmxc.getMBeanServerConnection(); #4
    }

    private void changeMaxPoolSize() throws Exception {
        String query = "amx:j2eeType=X-JDBCConnectionPoolConfig,name=DerbyPool";
        ObjectName queryName = new ObjectName(query); #5
        Set s = mbsc.queryNames(queryName, null); #6
        derbyPool = (ObjectName) s.iterator().next();
        mbsc.setAttribute(derbyPool, new Attribute("MaxPoolSize", new Integer(64))); #7
    }
}
#A initiate an instance of the class
#B get the JMX connection
#C change the attribute’s value
At #1 we create a URL to the GlassFish JMX service. At #2 we prepare the credentials which we must provide for connecting to the JMX service. At #3 we initialize the connector. At #4 we create a connection to GlassFish's MBean server. At #5 we query the registered MBeans for an MBean matching our DerbyPool name. At #6 we get the result of the query as a set; we are sure that we have an MBean with the given name, otherwise we would have to check whether the set is empty. At #7 we just change the attribute. You can check the attribute in JConsole and you will see that it has changed there as well.
In the sample code we just update the DerbyPool's maximum number of connections to 64. This can be counted as one of the simplest tasks related to JSR 77, management, and using JMX. Using plain JMX for a complex task would bury us in many lines of complex reflection-based code which are hard to maintain and debug.
2.3 Application Server Management eXtension (AMX)
Now that you have seen how hard it is to work with JSR 77 MBeans directly, I can tell you that you are not going to use them directly in your management applications - although you can.
What is AMX
In the AMX APIs, java.lang.reflect.Proxy is used to generate Java objects which implement the various AMX interfaces. Each proxy internally stores the JMX ObjectName of a server-side JMX MBean whose MBeanInfo corresponds to the AMX interface implemented by the proxy.
So, at the same time, we have JMX MBeans for use through any JMX-compliant management software, and we have the AMX dynamic proxies to use as easy local objects for managing the application server.
The GlassFish administration architecture is based on the concept of an administration domain. An administration domain is responsible for managing the multiple resources that belong to it. A resource can be a cluster of multiple GlassFish instances, a single GlassFish instance, a JDBC connection pool inside an instance, and so on. Hundreds of AMX interfaces are defined to proxy all of the GlassFish managed resources - which are themselves defined as JSR 77 MBeans - for client-side access. All of these interfaces are placed under the com.sun.appserv.management package.
There are several benefits of the AMX dynamic proxies over raw JMX MBeans:
§ Strongly typed methods and attributes for compile time type checking
§ Structural consistency with the domain.xml configuration file.
§ Consistent and structured naming for methods, attributes and interfaces.
§ Possibility to navigate from a leaf AMX bean up to the DAS.
AMX MBeans
AMX defines different types of MBeans for different purposes, namely configuration MBeans, monitoring MBeans, utility MBeans and JSR 77 MBeans. All AMX MBeans share some common characteristics, including:
§ They all implement the com.sun.appserv.management.base.AMX interface, which contains methods and fields for checking the interface type and group, and for reaching the MBean's container and its root domain.
§ They all have a j2eeType and a name property within their ObjectName. The j2eeType attribute specifies the interface we are dealing with.
§ All MBeans that logically contain other MBeans implement the com.sun.appserv.management.base.Container interface. Using the Container interface we can navigate from a leaf AMX bean to the DAS and vice versa. For example, given the domain AMX bean, we can get a list of all connection pools or EJB modules deployed in the domain.
§ JSR 77 MBeans that have a corresponding configuration or monitoring peer expose it using getConfigPeer or getMonitoringPeer. However, there are many configuration and monitoring MBeans that do not correspond to JSR 77 MBeans.
Configuration MBeans
We discussed that there are several types of MBeans in the AMX framework; one of them is the configuration MBeans. Basically, these MBeans represent the content and structure of domain.xml and the other configuration files.
In GlassFish, all configuration information is stored in one central repository named the DAS. In a single-instance installation the instance acts as the DAS, while in a clustered installation the DAS's sole responsibility is taking care of the configuration and propagating it to all instances. The information stored in the repository is exposed to any interested party, like an administration console, through the AMX interfaces.
Any developer familiar with the domain.xml structure will feel very comfortable with the configuration interfaces.
Monitoring MBeans
Monitoring MBeans provide transient monitoring information about all the vital components of the application server. A monitoring interface may or may not provide statistics; if it does, it should implement the MonitoringStats interface, which is a JSR 77-compliant interface for providing statistics.
Utility MBeans
Utility MBeans provide commonly used services to the application server. These MBeans all extend either or both of the Utility and Singleton interfaces. All of these MBean interfaces are located in the com.sun.appserv.management.base package. Notable utility MBeans are listed in table 1.
Table 1 AMX Utility MBeans along with description
Java EE Management MBeans
The Java EE management MBeans implement, and in some cases extend, the management hierarchy as defined by JSR 77, which specifies the management model for the whole Java EE platform. All JSR 77 MBeans in the AMX domain offer access to their configuration and monitoring MBeans using the getMonitoringPeer and getConfigPeer methods.
Dynamic Client Proxies
Dynamic client proxies are an important part of the AMX API and enhance ease of use for the programmer. JMX MBeans can be used directly through an MBeanServerConnection; with a dynamic proxy, the return type, argument types, and method names may vary as needed to bridge the difference between a strongly-typed proxy interface and the generic MBeanServerConnection or ObjectName interface.
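AMX's proxy generation is GlassFish-specific, but the underlying JDK mechanism can be sketched with plain JMX. This example is mine (it uses the WorkerIF MXBean from listing 1 rather than an AMX interface) and assumes the worker MBean from listing 3 is registered in the local platform MBean server:

import java.lang.management.ManagementFactory;
import javax.management.JMX;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ProxyDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("article:name=firstWorkerBean");
        // A strongly-typed proxy: calls are checked at compile time instead
        // of going through mbs.getAttribute()/setAttribute() with strings.
        WorkerIF worker = JMX.newMXBeanProxy(mbs, name, WorkerIF.class);
        worker.setMaxWorkers(10);
        System.out.println(worker.getMaxWorkers());
    }
}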
Changing the DerbyPool attributes using AMX
In listing 4 you saw how we can use JMX and the pure JSR 77 approach to change the attributes of a JDBC connection pool; in this part we are going to perform the same operation using AMX, to see how much easier and more effective AMX is.
Listing 5 Using AMX to change DerbyPool MaxPoolSize attribute
AppserverConnectionSource appserverConnectionSource =
    new AppserverConnectionSource(AppserverConnectionSource.PROTOCOL_RMI,
        "127.0.0.1", 8686, "admin", "adminadmin", null, null); #1
DomainRoot dRoot = appserverConnectionSource.getDomainRoot(); #2
JDBCConnectionPoolConfig cpConf =
    dRoot.getContainee(XTypes.JDBC_CONNECTION_POOL_CONFIG, "DerbyPool"); #3
cpConf.setMaxPoolSize("100"); #4
You are not mistaken: those four lines of code let us change the maximum pool size of the DerbyPool.
At #1 we create a connection to the server on which we want to perform our management operations. Several protocols can be used to connect to the application server's management layer. As you can see, when we construct the appserverConnectionSource instance we use AppserverConnectionSource.PROTOCOL_RMI as the communication protocol, to ensure that we will not need extra JAR files from the OpenDMK project. Two other protocols we could use are AppserverConnectionSource.PROTOCOL_HTTP and AppserverConnectionSource.PROTOCOL_JMXMP. The connection that we make does not use TLS, but we could use TLS to ensure transport security.
At #2 we get the AMX domain root, which later lets us navigate among all the AMX leaves - servers, clusters, connection pools, listeners and so on. At #3 we query for an AMX MBean whose name is DerbyPool and whose j2eeType is XTypes.JDBC_CONNECTION_POOL_CONFIG. At #4 we set a new value for the attribute of our choice, the MaxPoolSize attribute.
Monitoring GlassFish using AMX
The monitoring term comes to developers' and administrators' minds whenever they are dealing with performance tuning, but monitoring can also be used for management purposes, like the automation of specific tasks which would otherwise need an administrator to take care of them. An example of this type of monitoring is critical-condition notifications, which can be sent via email, SMS, or any other gateway that the administrators and system managers prefer.
Imagine that you have a system running on top of GlassFish and you want to be notified whenever acquiring a connection from a connection pool named DerbyPool takes longer than 3.5 seconds.
You also want to store all information related to the connection pool when the pool is getting close to saturation - for example, when there are only 5 connections left to give away before the pool is saturated.
So we need to write an application which monitors the GlassFish connection pool, checks its statistics regularly, and, if the above criteria are met, sends us an email or an SMS and saves the connection pool information.
AMX provides us with all the required means to monitor the connection pool and to be notified when any connection pool attribute or any of its monitoring attributes changes, so we will just use our current AMX knowledge along with two new AMX concepts.
The first concept is the AMX monitoring MBeans, which provide us with all the statistics about the managed objects that they monitor. Using the AMX monitoring MBeans is the same as using other AMX MBeans, like the connection pool MBean.
The other concept is the notification mechanism, which AMX provides on top of the already established JMX notification mechanism. The notification mechanism is fairly simple: we register our interest in some notifications, and we receive them whenever the MBeans emit a notification.
We know that we can configure GlassFish to collect statistics about almost all managed objects by changing the monitoring level from OFF to either LOW or HIGH. In our sample code we will change the monitoring level from our own code and then use the same statistics that the administration console shows to check the connection pool's consumed connections.
Listing 6 shows a sample application which monitors DerbyPool and notifies the administrator whenever acquiring a connection takes longer than accepted. The application saves all statistics information when the connection pool gets close to saturation.
Listing 6 monitoring a connection pool and notifying the administrator when connection pool is going to reach the maximum size
public class AMXMonitor implements NotificationListener { #1
AttributeChangeNotificationFilter filter; #2
AppserverConnectionSource appserverConnectionSource;
private int cPoolSize;
private DomainRoot dRoot;
JDBCConnectionPoolMonitor derbyCPMon; #3
JDBCConnectionPoolConfig cpConf;
private void initialize() {
try {
appserverConnectionSource = new AppserverConnectionSource(AppserverConnectionSource.PROTOCOL_RMI, "127.0.0.1", 8686, "admin", "adminadmin", null, null);
dRoot = appserverConnectionSource.getDomainRoot();
Set<String> stpr = dRoot.getDomainConfig().getConfigConfigMap().keySet(); #4
ConfigConfig conf= dRoot.getDomainConfig().getConfigConfigMap().get("server-config"); #4
conf.getMonitoringServiceConfig().getModuleMonitoringLevelsConfig().setJDBCConnectionPool(ModuleMonitoringLevelValues.HIGH); #4
cpConf = dRoot.getContainee(XTypes.JDBC_CONNECTION_POOL_CONFIG, "DerbyPool");
cPoolSize = Integer.getInteger(cpConf.getMaxPoolSize());
filter = new AttributeChangeNotificationFilter(); #2
filter.enableAttribute("ConnRequestWaitTime_Current"); #2
filter.enableAttribute("NumConnUsed_Current"); #2
Set<JDBCConnectionPoolMonitor> jdbcCPM =
dRoot.getQueryMgr().queryJ2EETypeSet
(XTypes.JDBC_CONNECTION_POOL_MONITOR); #5
for (JDBCConnectionPoolMonitor mon : jdbcCPM) {
if (mon.getName().equalsIgnoreCase("DerbyPool")) {
derbyCPMon = mon;
break;
}
}
derbyCPMon = dRoot.getContainee(XTypes.JDBC_CONNECTION_POOL_MONITOR, "DerbyPool"); #5
derbyCPMon.addNotificationListener(this, filter, null); #5
} catch (Exception ex) {
ex.printStackTrace();
}
}
public void handleNotification(Notification notification, Object handback) {
AttributeChangeNotification notif = (AttributeChangeNotification) notification; #6
if (notif.getAttributeName().equals("ConnRequestWaitTime_Current")) {
int curWaitTime = Integer.getInteger((String) notif.getNewValue()); #7
if (curWaitTime > 3500) {
saveInfoToFile();
sendNotification("Current wait time is: " + curWaitTime);
}
} else {
int curPoolSize = Integer.valueOf((String) notif.getNewValue()); #8
if (curPoolSize > cPoolSize - 5) {
saveInfoToFile();
sendNotification("Current pool size is: " + curPoolSize);
}
}
}
private void saveInfoToFile() {
try {
FileWriter fw = new FileWriter(new File("stats_" + (new Date()).toString()) + ".sts");
Statistic[] stats = derbyCPMon.getStatistics(derbyCPMon.getStatisticNames()); #9
for (int i = 0; i < stats.length; i++) {
fw.write(stats[i].getName() + " : " + stats[i].getUnit() + "\n"); #10
}
fw.flush();
fw.close();
} catch (IOException ex) {
ex.printStackTrace();
}
}
private void sendNotification(String message) {
// Intentionally left empty: send an email or SMS to the administrator here.
}
public static void main(String[] args) {
AMXMonitor mon = new AMXMonitor();
mon.initialize();
}
}
At #1 we implement the NotificationListener interface, as we are going to use the JMX notification mechanism. At #2 we define an AttributeChangeNotificationFilter, which filters the notifications down to the subset we are interested in, and we add the attributes we care about to the set of non-filtered attribute change notifications. At #3 we define an AMX MBean that represents DerbyPool's monitoring information; note that we will get an instance-not-found exception if the connection pool has had no activity yet. At #4 we change the monitoring level of JDBC connection pools to ensure that GlassFish gathers the required statistics. At #5 we find our designated connection pool monitoring MBean and add a new filtered listener to it.
The handleNotification method is the only method in the NotificationListener interface that we need to implement. At #6 we cast the received notification to AttributeChangeNotification, as we know the notification is of this type. At #7 we deal with a change in the ConnRequestWaitTime_Current attribute: we get its new value to check our condition (we could also get the old value if we were interested). At #8 we deal with the NumConnUsed_Current attribute and, when the condition is met, call the saveInfoToFile and sendNotification methods.
At #9 we get the names of all connection pool monitoring factors, and at #10 we write each monitoring attribute's name along with its unit to a text file.
AMX and Dotted Names
AMX is designed with ease of use and efficiency in mind, so in addition to the standard JMX programming model of getters and setters, we can use another, hierarchical model to access all AMX MBean attributes. In the dotted-name model, each attribute name starts from its root, which is domain for all configuration MBeans and server for all runtime MBeans. For example, domain.resources.jdbc-connection-pool.DerbyPool.max-pool-size represents the maximum pool size for the DerbyPool we discussed before.
Two interfaces are provided to access the dotted names, either for monitoring or for management and configuration purposes. MonitoringDottedNames assists with reading attributes, while ConfigDottedNames provides write access to attributes using the dotted format. We can get an instance of the former using dRoot.getMonitoringDottedNames().
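As a quick sketch, reading a monitoring value through the dotted-name model might look like the following. I am assuming here that dottedNameGet() returns a javax.management.Attribute, and the dotted name used is illustrative, so check the DottedNames javadoc for your GlassFish version before relying on it:
MonitoringDottedNames monNames = dRoot.getMonitoringDottedNames();
// The dotted name below is an assumption for illustration only.
Attribute attr = (Attribute) monNames.dottedNameGet("server.resources.DerbyPool.numconnused-current");
System.out.println(attr.getName() + " = " + attr.getValue());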
3 GlassFish Management Rules
The above sample application is promising, especially when you want to have tens of rules for automatic management, but running a separate process, and watching that process, is not to the taste of many administrators. For the sake of simplicity and integration, GlassFish provides a very effective way to deploy such management rules into the application server itself. The benefits and use cases of Management Rules can be summarized as follows:
§ Manage complexity by self-configuring based on the conditions
§ Keep administrators free for complex tasks by automating mundane management tasks
§ Improve performance by self-tuning in unpredictable run-time conditions
§ Automatically adjust the system for availability by preventing problems and recovering from those that occur (self-healing)
§ Enhance security measures by taking self-protective actions when security threats are detected
A GlassFish Management Rule is a set of:
§ Event: An event uses the JMX notification mechanism to trigger actions. Events can range from an MBean attribute change to specific log messages.
§ Action: Actions are associated with events and are triggered when related events happen. Actions can be MBeans that implement the NotificationListener interface.
When we deploy a Management Rule into GlassFish, GlassFish registers our MBean in the MBean server and registers its interest in the notification types we specified. Therefore, upon any event our MBean is registered for, its handleNotification method will execute. GlassFish provides some predefined event types we can register our MBean's interest in. These events are as follows:
§ Monitor events: These types of events trigger an action based on an MBean attribute change.
§ Notification events: Every MBean that implements the NotificationBroadcaster interface can be a source of this event type.
§ System events: This is a set of predefined events that come from the internal infrastructure of GlassFish application server. These events include: lifecycle, log, timer, trace, and cluster events.
Now, let's see how we can achieve functionality similar to our AMXMonitor application by using GlassFish Management Rules. First we need to reduce our application to a plain JMX MBean that implements the NotificationListener interface and performs the required action, for example sending an email or SMS, in the handleNotification method.
Changing the application to an MBean should be very easy; we just need to define an MBean interface and then the MBean implementation, which implements one single method, handleNotification.
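A minimal sketch of such an MBean might look like the following. The names DCPAlert and DCPAlertMBean are illustrative; recall that the standard MBean naming convention requires the interface to be named after the implementation class plus the MBean suffix, and the imports needed are javax.management.Notification, AttributeChangeNotification, and NotificationListener:
public interface DCPAlertMBean {
}
public class DCPAlert implements DCPAlertMBean, NotificationListener {
public void handleNotification(Notification notification, Object handback) {
AttributeChangeNotification notif = (AttributeChangeNotification) notification;
// The management action goes here: send an email or SMS to the administrator.
System.err.println("Pool alert: " + notif.getAttributeName() + " changed to " + notif.getNewValue());
}
}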
Now that we have the MBean's compiled JAR file, we can deploy it using the administration console. Open the GlassFish administration console, navigate to the Custom MBeans node, and deploy the MBean by providing the path to the JAR file and the name of the MBean implementation class. The MBean is now available to the class loader to be used as the action of a Management Rule; to create the Management Rule, use the following procedure.
In the navigation tree, select the Configuration node and then the Management Rules node. In the content panel, select New and use dcpRule as the name, make sure you tick the Enabled checkbox, select monitor as the event type, and let the events be recorded in the log files. Press Next to navigate to the second page of the wizard.
On the second page, for the Observed MBean field enter amx:X-ServerRootMonitor=server,j2eeType=X-JDBCConnectionPoolMonitor,name=DerbyPool and for the observed attribute enter ConnRequestWaitTime_Current. For the monitor type, select counter. The number type is int, and the initial threshold is 35000, the value which, once exceeded by the monitored attribute, triggers our action.
Scroll down and, for the action, select AMXMonitor, the name of the MBean we deployed in the previous step.
You saw that the overall process is fairly simple, but there are some limitations, such as only being able to monitor a single MBean and attribute at a time. We therefore need to create another Management Rule for the NumConnUsed_Current attribute.
Now that we have reached the end of this article, you should be fairly familiar with JMX, AMX, and GlassFish Management Rules. In the next articles we will use the knowledge gained here to create administration commands and monitoring solutions.
4 Summary
We thoroughly discussed managing GlassFish from Java code, starting with JMX, the foundation of all Java-based management solutions and frameworks. We discussed the JMX architecture, its event model, and the different types of MBeans included in the JMX programming model. We also covered AMX, GlassFish's way of providing its management functionality to client-side applications that are not interested in using the complicated JMX APIs.
We discussed GlassFish management using the AMX APIs to show how much simpler AMX is compared to a plain JMX implementation, and we covered GlassFish monitoring using the AMX APIs.
You saw how we can use GlassFish's self-management and self-administration functionality to automate some tasks and keep administrators free from low-level, repetitive administration and management chores.
by galrub - 2013-01-31 14:02
hello there,
can you please explain how you set up, package and deploy this MBean?
Many thanks,
G
Reading AMX beans after Glassfish restart
by snobbles1 - 2010-06-04 03:04
I'm accessing the jdbc resources in Glassfish via the AMX JMX beans, but I'm having problems on a Glassfish restart. If my war is deployed and Glassfish is restarted, the war fails to deploy on the restart; this is due to the bean amx:pp=/domain,type=resources not being present. The AMX service is only started if it's required (eg if the admin console is loaded).
I've tried manually starting the AMX service from my war: I've tried calling the method new AMXGlassfish(AMXGlassfish.DEFAULT_JMX_DOMAIN).bootAMX() from my war, but on restarting glassfish, the call to bootAMX() hangs indefinitely.
I've also tried connecting to the AMX JMX bean with the URL service:jmx:rmi://:8686/jndi/rmi://:8686/jmxrmi. This works if I'm deploying after Glassfish has been started and the admin console hasn't been loaded. If the war is deployed and Glassfish is restarted, this fails as Glassfish cannot connect to the RMI JMX service.
From looking at glassfish logs, the start-up sequence seems to be that Glassfish does not launch JMX until after all existing war files have been deployed. Version of glassfish: v3
Hi
I have many DOS clients on my LAN and I need to build a communication tool based on the NETBEUI protocol. Does anybody know where I can get examples and source code (in C)?
The network is working; I can share files among the DOS PCs.
Any help?
You might check out Ralph Brown's Interrupt List. I'm not familiar with that protocol, but if it's a DOS-based driver then it must be interrupt driven. Perhaps the interrupt listing has the information you need. It will most likely use interrupt 2Fh, but there is another standard networking interrupt vector that most companies used; it slips my mind at this point because I haven't done any DOS programming in over a year.
Thanks,
but I am actually a Java developer and not familiar with dark, basic hardware implementations.. as a matter of fact I need to code in C for my DOS clients, thus I am searching for example code which I can use directly by modifying the source.
Thanks again
You need to interface with the net driver, which will be located in upper memory in DOS. In order to 'call' the driver, it will reside on an interrupt which you can invoke. Upon invocation, the system looks in the IVT (interrupt vector table). The algo is: offset = (interrupt_num*4), segment = (interrupt_num*4)+2. The value at that location in the table is then used to call the interrupt handler provided by the driver manufacturer.
You simply need to pass the correct function number in the AX register (most implementations use AX as a function selector) and the correct values in the other registers. The registers used, and the correct values to put in them, are completely dependent on the implementation inside the driver, so there really is no standard. The RBIL, the interrupt listing by Ralph Brown, provides a wealth of information on many, many older network drivers, and I'm sure yours is probably in there somewhere.
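To make that IVT lookup concrete, here is an illustrative sketch for Borland/Turbo C (MK_FP comes from dos.h); it merely prints the handler address for a given interrupt and is not NETBEUI-specific:
#include <dos.h>
#include <stdio.h>
void print_vector(unsigned char n)
{
    unsigned short far *ivt = (unsigned short far *)MK_FP(0x0000, 0x0000);
    unsigned short off = ivt[n * 2];      /* byte offset n*4     */
    unsigned short seg = ivt[n * 2 + 1];  /* byte offset n*4 + 2 */
    printf("int %02Xh handler at %04X:%04X\n", n, seg, off);
}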
Here is an example of setting the video mode in C via int 10h. The desired mode is put into AX. There are other options available for different cards, but all of them support putting the mode number into AX.
This sets the video mode to 320x200x256, 1 byte per pixel. It is a palettised mode: the numbers in video memory correspond to a palette table in the video card's memory, which in turn corresponds to certain RGB values.
In C, using int86:
#include <dos.h>
int main(void)
{
    union REGS regs;
    regs.x.ax = 0x13;
    int86(0x10, &regs, &regs);
    //Fill screen with blue (palette index 1)
    //Video memory in graphics modes >= 320x200x256 all starts at A000:0000
    unsigned char far *Screen = (unsigned char far *)MK_FP(0xA000, 0);
    for (unsigned int memloc = 0; memloc < 64000; memloc++)
    {
        Screen[memloc] = 1;
    }
    return(0);
}
And in inline asm:
#include <dos.h>
int main(void)
{
unsigned char far *Screen=(unsigned char far *)MK_FP(0xA000,0);
asm {
mov ax,13h
int 10h
les di,[Screen]
mov cx,32000d
mov al,0x01
mov ah,0x01
rep stosw
}
return(0);
}
Hehe, writing a protocol stack using inline assembly would be... cumbersome. :)
I'm sure there are libraries available.
But I thought that DOS never had any network access... Microsoft first accessed the network with Windows.
Then how could there be an interrupt or a lib?
Well, thanks for your postings.. I don't have the knowledge to implement any interrupt stuff, so I think I will try to realize the communication through a database located on a shared drive.
I have Microsoft Network Client 3.0 on my DOS PCs; they can all access a shared drive on a Win2000 PC, where I will have a database that gets queried by the DOS clients frequently..
All I need now is source code in C that lets me access (execute SQL statements) on a Microsoft Access database.
Thanks for your replies
Marcel
Just for information:
I found a nice tool for TCP/IP communication for DOS: http_d.
http_d is kind of a socket (on DOS) that can be accessed easily, e.g. through Java etc.
Works nice!
All you need is a packet driver for your NIC.
Marcel
CSock - To verify timeout socket class
Posted by Seung Kyung, Lee. on May 11th, 1999
Environment: VC6, Windows NT4 SP3
I'm Korean and not good at English.
When using the CSocket class, we have a problem verifying the connection between computers. This class (CSock) solves the problem with the method below.
CSocket does not return in a timely fashion when the other socket disconnects, which drags the program's performance down.
I found a method to solve this problem: I made the class CSock. If the elapsed time exceeds the input time limit, my overridden function returns immediately.
CSock overrides the ConnectHelper member function of CSocket.
/// CSocket modified - timeout module.
BOOL CSock::ConnectHelper(const SOCKADDR* lpSockAddr, int nSockAddrLen)
{
    if (m_pbBlocking != NULL)
    {
        WSASetLastError(WSAEINPROGRESS);
        return FALSE;
    }
    m_nConnectError = -1;
    if (!CAsyncSocket::ConnectHelper(lpSockAddr, nSockAddrLen))
    {
        if (GetLastError() == WSAEWOULDBLOCK)
        {
            // Inserted: set up the timeout window (m_nTimeOut is in seconds).
            CTime curt, st;
            CTimeSpan span(0, 0, 0, m_nTimeOut);
            st = CTime::GetCurrentTime();
            while (PumpMessages(FD_CONNECT))
            {
                if (m_nConnectError != -1)
                {
                    WSASetLastError(m_nConnectError);
                    return (m_nConnectError == 0);
                }
                // Inserted: bail out once the time limit has been exceeded.
                curt = CTime::GetCurrentTime();
                if (curt > (st + span))
                    return FALSE;
            }
        }
        return FALSE;
    }
    m_Kill = FALSE;
    return TRUE;
}
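A hypothetical usage sketch, assuming m_nTimeOut is a public member holding the timeout in seconds, as ConnectHelper above uses it:
CSock sock;
sock.m_nTimeOut = 10; // give up after roughly 10 seconds
if (sock.Create() && sock.Connect(_T("192.168.0.10"), 7000))
{
    // connected within the time limit
}
else
{
    TRACE(_T("Connect failed or timed out\n"));
}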
Good!! Posted by Legacy on 11/03/2003 12:00am
Originally posted by: Iampro
thanks very much!!
Thanks!!! Very Good!! Posted by Legacy on 09/24/2002 12:00am
Originally posted by: Kwack ManYoung
I can't understand it,
but I use it.. ^.^
I'm proud that you are Korean...
It looks like the timeout of the base class is in minutes. Posted by Legacy on 05/08/2002 12:00am
Originally posted by: njkayaker
It looks like the timeout of the base class is in minutes. The derived class makes the timeout in seconds.
Suggest. Posted by Legacy on 11/08/2001 12:00am
Originally posted by: Jiacy
The m_nTimeOut is used in CSocket's member function PumpMessages() (see MFC's source file Sockcore.cpp), and I think the parameter can't be used this way. We can define a new parameter in the class CSock to replace m_nTimeOut. I also want to know the use of the function ConnectHelper(). Anyway, your method is very smart. Thanks!
very useful! thx. Posted by Legacy on 11/07/2000 12:00am
Originally posted by: chshin77
One question about the recv() function. Posted by Legacy on 11/16/1999 12:00am
Originally posted by: syhwang
Hi~
I'm a person in KOREA.
I met one problem programming with sockets.
I used the CSocket class and its member functions.
The problem is that...
I send several packets with the send() function...
and I receive them with the recv() function in the other socket app.
When the recv function finishes,
the received buffer holds several sent packets;
that is, one recv() call gets several send packets...
I want to receive only one send packet...
How can I do it...?
Please help me~~
Thank you
But how does your demo function? Posted by Legacy on 07/28/1999 12:00am
Originally posted by: zhutong
Binding. If you want to code along with me, be sure to read my previous two posts.
- In the setCountryId() method, change the types of countryId and oldCountry.
- Switch back to the EditClient.java file and click Design at the top of the editor to work with the file in Design view.
- Right-click the combo box and choose Bind | elements.
- Click Import Data to Form, select the database connection, and select the Countries table. countriesList should appear as the binding source. Click OK.
- Right-click the combo box again and choose Bind | selectedItem.
- Select Form as the binding source and currentRecord | countryId as the expression. Click OK. (As you may recall from the last post, we are using a custom bean called CurrentRecord as a liaison between this dialog and the main form.)
The combo box is almost ready to work properly in the dialog. It is set up to draw its values from the Countries table, but it does not yet know how to render those Countries objects as readable names.
To get the combo boxes to render country names, do the following:
- Create a new class called CountryListCellRenderer in your project.
- Delete the generated class declaration and paste the following code below the package statement:
import java.awt.Component;
import javax.swing.DefaultListCellRenderer;
import javax.swing.JList;
public class CountryListCellRenderer extends DefaultListCellRenderer {
@Override
public Component getListCellRendererComponent(
JList list, Object value, int index, boolean isSelected, boolean cellHasFocus) {
super.getListCellRendererComponent(list, value, index, isSelected, cellHasFocus);
if (value instanceof Countries) {
Countries c = (Countries) value;
setText(c.getCountry());
}
return this;
}
}
- Compile the class.
- Select the EditClient form in the Source Editor (make sure that the Design view is selected).
- Drag the class from the Projects window to the white space surrounding the form, as shown in the screenshot below.
Doing so adds the renderer to your form as a bean, much like dragging a component from the Palette adds that component to your form.
- In the form, select the combo box.
- In the Properties window, scroll to the renderer property and choose countryListCellRenderer1 from the drop-down list for that property.
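As an alternative for simple text-only display (a question that comes up in the comments below), you could skip the custom renderer and override toString() in the Countries entity class instead; the renderer approach keeps display logic out of the entity and is the better choice if you ever want icons. A minimal sketch of the alternative:
// In Countries.java
@Override
public String toString() {
    return getCountry();
}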
The combo box should be ready to go - except for one thing. It doesn't have any values to display yet. You can go ahead and populate the table with a few SQL commands and then run the project. Or you can indulge me in this digression that demonstrates how you can quickly do this with a few hacks within the IDE (and shows you some handy features along the way).
First, create a separate form for adding countries to the db by doing the following:
- Right-click the package containing your classes and choose New | Other.
- Select the Swing GUI Forms | Master/Detail Sample Form and click Next.
- Give the class the name CountriesForm and click Next.
- Select the database connection and select the countries table.
- Since we won't be editing the Country_ID fields by hand (the values will be automatically generated), move the Country_ID column to the list of Available Columns. Then click Next.
- Click Finish to exit the wizard.
We have just essentially created another application with its own main class. In order to properly run this class, we need to temporarily make it the main class of the project. (Simply using the Run File command won't work since this command doesn't pick up classpath dependencies.) We can do so by creating a new project configuration.
- Choose Build | Set Main Project Configuration | Customize.
- Click New and then enter CountryEditing as the configuration name.
- Click the Browse button next to the Main Class field and select the CountriesForm class.
- Click OK.
The configuration is automatically switched to the new configuration.
You can now start editing the list of countries.
- Choose Run | Run Main Project.
- In the simple application that runs, click New to create a new row and fill in a country.
- Repeat step 2 a few times so that you have multiple countries to choose from.
- Choose Build | Set Main Project Configuration and switch back to the default configuration so that the main application runs the next time we use the Run Project command.
Once you have some countries in the Countries table, you can run the main application and see the combo box in action:
- Choose Run | Run Main Project.
- In the running application, click the first New button.
- Enter values into the various text fields and choose a country from the combo box.
Notice that the values that you enter in the dialog box also appear in the top table in the main form, including the country you selected from the combo box.
- Since we have not coded the buttons in the dialog box yet, move the dialog out of the way and click Save in the main form to save the changes to the database.
The application works, but it's still very rough around the edges. Here is some quick tidying up we can do now:
- Make the columns in the table uneditable. You can do so by right-clicking the table, choosing Table Contents, clicking the Columns tab, and then clearing the Editable checkbox for each of the items. This is particularly desirable for the Country column so that you can manage what people enter for countries (e.g. to avoid misspellings) and better handle changes in country names (the change only needs to be made in one place).
- Change the text of the New buttons to distinguish them. I'm going to use New Client and New Order. You can change the text inline (by clicking the button once, pausing, and then clicking again). Or, if you want to change the text in every place that the action is used (such as from a menu), you can right-click the button, choose Set Action and change the Text attribute.
- Delete the superfluous main() method in the EditClient.java class.
We still have some work to do, such as:
- Adding functionality to the Save and Cancel buttons in the dialog
- Making it possible to edit existing records
- Doing some currency formatting
I'll cover those topics and others in ensuing posts. Where time and personal knowledge allows, I'll try to field requests as well.
ComboBox Renderer
by tm23 - 2009-12-30 19:03
Mr. Keegan, excellent post! Thank you. I was wondering: in a very similar scenario, I implemented the renderer for a Date field uneditable ComboBox as follows: GameDate Games) { Games meg = (Games)value; SimpleDateFormat dateFormatter = new SimpleDateFormat("yyyy-MMM-dd"); setText(value == null ? "" : dateFormatter.format(meg.getGameDt())); } return this; } }); But for some reason I frustratingly cannot figure out, it does not seem to be working, since I see the fields displayed in a format similar to "Sat Nov 07 00:00:00 PST 2009". Games is an entity class created from a corresponding database table, and GameDate is defined as a Temporal Date field of the entity. I am new to this and obviously must be making some silly error. I will be deeply obliged if you could kindly help me get past my obvious frustration. Thank you, Tapas
by asubhan - 2009-04-05 21:25
Hi Patrick, I am using NetBeans 6.5 and created a simple application containing a Customers table and a Countries table. I generated the application using the master/detail skeleton, and on the detail options page I selected text fields rather than the table option to utilize the editing components. I replaced the CountryId text field component with a JComboBox. I have no difficulty showing the proper country list in the JComboBox once the application is run, but every time I select an item in the JComboBox the change is not reflected in the JTable. Please advise me on what is wrong, since I don't have a similar problem when the editing components are placed on a separate form (JDialog form).
by pkeegan - 2008-09-18 04:07
Hi yoguess, in this blog entry the Save functionality isn't yet set up. You will have to follow the next entry to see that work. Also note that the event model is different: the Swing Application Framework uses enhanced action support, whereas the master/detail form uses straight event handlers, so there is some variation there.
by yoguess - 2008-09-16 00:19
Hi Patrick, I'm new to GUIs using NetBeans. I built the whole project as suggested... now I have made some changes. Instead of selecting "Database Application", I selected "Basic Application" and then made a master/detail sample form. I then added some text fields and combo boxes and bound them as you suggested. Everything is working fine, except that when I select an item from a combo box, the item is not getting saved in the database.... Please advise. Thanks in advance.
by pkeegan - 2008-09-15 03:02
Hmm, I guess I would have to see your code to know for sure what is going wrong. Have you created both a custom editor and a custom renderer for the combo box in the table column (and then specified the editor and renderer in the Table Contents/Columns dialog)? You might merely be missing the custom renderer.
by bestage - 2008-09-13 13:21
Hello Patrick, first of all thanks for the nice tutorial; it taught me a lot about how to handle things. As a novice in Java I have a lot to learn after .NET and Visual Studio, but I like it, and your tutorial showed that it is not such a long way to get where I want to go. I have one question. After playing with your tutorial I went further and added a combo box to the JTable (custom editor). For example, I have a table called Customers with a join column AddressID connected to the entity Address. The combo box is populated with Address objects correctly, but when I select an Address I get a ClassCastException. OK, the solution may be to override the getCellEditorValue() method of my custom editor, but from there I have no idea what I should do in the overridden method. Can you give me a hint? Thanks
by pkeegan - 2008-05-23 12:14
Yes, you can use toString() for this case. I don't do it because toString() can be used for any number of things, so I wanted to keep the separation. But that might be worth calling out in the final version of the tutorial.
by dags - 2008-05-23 10:38
Is there any advice against writing a customized toString() method in Countries instead of using CountryListCellRenderer? I know that CountryListCellRenderer is better if you want to display icons and text, but a customized toString() seems better for simple cases.
by benhur99ph - 2008-06-15 19:23
What if I want to use a list box instead of the drop-down combo box? I tried making the application from the start again and modified the SQL statement so that "countries" in the "Client" table is set to string. When I get to making the JDialog (EditClient), if I use a text box for the countries (like the others) it works fine. I tried using a list box and successfully bound the elements property; when I run the application, it shows the list of countries from the countries table. But when I bind the selectedElement property and run it, I get a ClassCastException. How can I make it work? Is it a mistake that I modified the SQL statement? Thanks!
Splashscreen does not automatically close
Not yet implemented or bug?
Javadoc says "It is closed automatically as soon as the first window is displayed by Swing/AWT".
Yet, it does not close unless explicitly closed.
I start with "java -splash:splash.png Main".
Here's the code:
import javax.swing.*;
import java.awt.*;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        System.out.println("Main.main");
        Thread.sleep(3000);

        Runnable runnable = new Runnable() {
            public void run() {
                JFrame frame = new JFrame();
                frame.setBounds(100, 100, 100, 100);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
                //SplashScreen.getSplashScreen().close();
            }
        };
        SwingUtilities.invokeLater(runnable);
    }
}
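For reference, uncommenting the close() call above works around it; a null-safe sketch of that workaround (getSplashScreen() returns null when the VM was started without -splash):
java.awt.SplashScreen splash = java.awt.SplashScreen.getSplashScreen();
if (splash != null) {
    splash.close();
}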
Hi,
This bug has already been reported and the fix is
ready, it will probably go into b45.
Regards,
Bino.
I've noticed this, too. I thought it might be because I start my application by means of reflection, but I hadn't gotten around to testing that theory yet. It doesn't seem to hurt anything, but it's kind of disconcerting to minimize my app and find the splash screen still sitting there.
Hi,
This looks like a bug, although you can work around it by explicitly closing the splash screen. I will report it to the AWT team for investigation.
Thanks,
Bino.
The QStyleSheet class is a collection of styles for rich text rendering and a generator of tags.
#include <qstylesheet.h>
Inherits QObject.
By creating QStyleSheetItem objects for a style sheet you build a definition of a set of tags. This definition will be used by the internal rich text rendering system to parse and display text documents to which the style sheet applies. Rich text is normally visualized in a QTextEdit or a QTextBrowser. However, QLabel, QWhatsThis and QMessageBox also support it, and other classes are likely to follow. With QSimpleRichText it is possible to use the rich text renderer for custom widgets as well.
The default QStyleSheet object has the following style bindings, grouped into structuring bindings, anchors, character style bindings (i.e. inline styles), and special elements.
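As a small sketch of building such a definition (Qt 3 API; the tag name warn and its attributes are only an example), a custom tag can be registered on the default sheet like this:
#include <qstylesheet.h>
// Register a <warn> tag rendering as bold red text.
QStyleSheetItem* warn = new QStyleSheetItem(QStyleSheet::defaultSheet(), "warn");
warn->setColor(Qt::red);
warn->setFontWeight(QFont::Bold);
// Any rich text widget using the default sheet can now render it, e.g.:
// textEdit->setText("Disk is <warn>almost full</warn>.");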
I like the verbose output that a report like this provides; however, just about every macro scheme I have encountered is just as verbose. Quite often these macro entries interfere with the readability of the code, and they are cumbersome to fill out. They use the printf mechanism to report data values, and because the C preprocessor cannot overload macros, parameter-reporting macros often have multiple versions defined for the number of parameters to be reported; the name often ends with "_x", where x is the number of parameters. These basic solutions are also generally not thread-safe, and they often severely impact the performance of the application because of I/O operations against the file system.
I decided to try to harness the power of C++ and create an instrumentation framework that avoids these drawbacks.
This article documents the framework I have developed. I was able to accomplish most of the goals I set; for some of them I managed only modest improvements over the code that inspired this framework, but for most I believe I have created a much better model that makes it easy to instrument code.
Along the way, I discovered that I could not only instrument the code with basic output logging but also add a basic profiling capability for very little extra effort; the profiling features are described later in the article.
Let's first describe the framework for the readers who are more interested in using this code than in how it works. The system is contained in a single header file called FnTrace.h. There are approximately ten to fifteen macros designed to assist the developer in instrumenting code. Here are their names:
FN_CHECKPOINT
FN_ENABLE_PROFILE
FN_INSTRUMENT_LEVEL
FN_INSTRUMENT
FN_INSTRUMENT_THREAD
FN_ENTRY
FN_FUNC
FN_FUNC_RETVAL
FN_FUNC_PARAMS
FN_OUTPUT
FN_PARAM
FN_PAUSE
FN_RESUME
FN_RETVAL
Each of these macros serves a specific purpose, and not all of them are required. Which macros you select will depend on what you want to accomplish.
This section describes the basic requirements for the FnTrace framework:
This section describes the actions required to enable function instrumentation for your application. All of the macros intended for direct use are documented and explained below.
To activate instrumentation, the following macro needs to be defined before the FnTrace header is included in a file:
#define FN_INSTRUMENT
You can declare your desired instrumentation log levels for your application. If you do not define these log levels, the default level of FN_5, the maximum level of detail, is used. There is also a MACRO, described later, that allows the level of logging to be changed at runtime.
#define FN_FUNC_LEVEL_DEF FN_3
FN_FUNC_LEVEL_DEF: This MACRO defines the default log level for reporting function-level operations, including function entry and exit, profiling, and general output statements. If the log level is less than the defined level for a function, the statements found in that function will not be reported to the instrumentation log.
#define FN_PARAM_LEVEL_DEF FN_5
FN_PARAM_LEVEL_DEF: This MACRO defines the default log level for reporting function parameter values. If the log level is less than the defined level for a function, the parameter output statements in that function will not be reported to the instrumentation log. If this level is set below the function log level, it is automatically raised to the FN_FUNC_LEVEL_DEF value.
Include the FnTrace framework header in each of the files you intend to instrument. If you are using precompiled headers, I would suggest including it in your precompiled header.
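A typical setup, as I understand the steps above (a sketch, not taken verbatim from the demo project), placed in the precompiled header or at the top of each instrumented file:
#define FN_INSTRUMENT
#define FN_FUNC_LEVEL_DEF FN_3
#define FN_PARAM_LEVEL_DEF FN_5
#include "FnTrace.h"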
The MACRO in this section can be used to change the instrumentation level dynamically at runtime. It could be attached to a UI option or called from a message handler in your main window's WndProc.
FN_INSTRUMENT_LEVEL(F,P);
F: the new function log level (FN_0 to FN_5).
P: the new parameter log level (FN_0 to FN_5).
For each thread that will be instrumented, the following entry will need to be defined at the beginning of the thread function to initialize the thread state for the framework:
FN_INSTRUMENT_THREAD;
If function profiling is desired, declare the following MACRO at the top of your main function:
FN_ENABLE_PROFILE(1);
You can turn function profiling off at a later point in the program at run time by calling the same macro with a value of 0.
FN_ENABLE_PROFILE(0);
For each function that you would like to instrument in your application, add the following MACRO to the top of the function scope inside of the braces:
FN_FUNC(retType, level);
ex:
bool IsEmpty()
{
FN_FUNC(bool, FN_1);
...
}
retType: the return type of the function; for functions that return void, use FN_VOID.
level: the log level assigned to this function.
The FnTrace framework is capable of reporting the value a function returns when it exits. To enable this feature, a programming convention needs to be followed: declare the macro from this section and allocate space for a return value. The name of the return variable is passed to the macro, and at each exit point this variable should be the one used in the return statement.
FN_RETVAL(retVar);
ex:
bool IsEmpty()
{
...
bool retVal = false;
FN_RETVAL(retVal);
...
retVal = true;
return retVal;
}
retVar: the name of the pre-declared variable that will receive the function's return value.
This section combines the previous two macros into one: declaring this macro makes it a one-step process.
FN_FUNC_RETVAL(retType, retVar, level);
ex:
bool IsEmpty()
{
bool retVal = false;
FN_FUNC_RETVAL(bool, retVal, FN_1);
...
retVal = true;
return retVal;
}
For each function you would like to instrument, declare the MACRO from this section at the top of the function scope, inside the braces. This macro initializes the state needed to report the name of the function, each of the parameters you indicate, and the return value when the function exits. It will also initiate profiling for the function if you have enabled profiling for the framework.
In order to use this version of the MACRO, you will need Visual Studio 2005 or greater, because it relies on the variadic pre-processor operator (...).
FN_FUNC_PARAMS(retType, retVar, level, n, ...);
ex:
BOOL SetRect(LPRECT lprc, int xLeft, int yTop, int xRight, int yBottom)
{
BOOL retVal = FALSE;
FN_FUNC_PARAMS(BOOL, retVal, FN_1, 5, lprc, xLeft, yTop, xRight, yBottom);
...
retVal = TRUE;
return retVal;
}
n: the number of parameters that follow.
Any parameter type can be reported as long as a conversion to the ostream operator<< is available for its type. For the example above, therefore, a definition like the following must be declared somewhere in the application for pointers to the RECT struct:
// Sample OStream implementation for the pointer to RECT struct
inline std::wostream& operator<<(std::wostream& os, const RECT* _Val)
{
if (_Val)
{
os << "RECT(" << _Val;
os << "): left " << _Val->left;
os << ", top " << _Val->top;
os << ", right " << _Val->right;
os << ", bottom " << _Val->bottom;
}
else
{
// A null pointer was passed in.
os << L"(null)";
}
return os;
}
// output:
// RECT(0034EF5A): left 0, top 0, right 640, bottom 480
Use the FN_OUTPUT MACRO to report messages to the instrumentation log in the current function scope. The output mechanism for the FnTrace framework is based on the C++ ostream objects: output strings are formed the same way you would write messages to cout. By default, output goes to OutputDebugString, which can be captured by your debugger or by an external application running on your machine that listens to the debug stream.
FN_OUTPUT(output);
ex:
TCHAR windowName[100];
::_tcscpy(windowName, _T("Test Application"));
...
FN_OUTPUT(L"The current window name is: " << windowName);
output: an ostream-style expression that will be written to the instrumentation log.
The output stream can be redirected from OutputDebugString to std::clog by declaring this macro before including the FnTrace header:
#define FN_OUTPUT_CLOG
You can use the following macro to report the name and value of a single variable or function parameter.
FN_PARAM(param);
ex:
int x = 100;
FN_PARAM(x);
//result:
//x = 100
param: the variable or parameter whose name and value are reported.
Some functions are called so frequently that if you were to log every single entrance into the function, it would quickly fill your buffers. In this scenario, you would have too much information and it would no longer be beneficial to instrument your application because there would be too much data to sift through without automated tools.
A perfect example of this is a WndProc function for a user-defined window in the WIN32 API. Most of the time this function is called, the default Windows message handler is perfectly acceptable, and you are most likely not concerned with the entrance of the function. However, you may still be interested in hitting certain points within it, such as when the user triggers a WM_COMMAND message that you have handled.
These next two MACROs are meant to be used together to provide selective instrumentation for a function. The first prepares a function scope for instrumentation; the second is an instrumentation checkpoint. If a checkpoint is reached, data is printed out; if the function is entered but no checkpoint is reached, nothing is reported for the function.
FN_ENTRY(level);
The FN_ENTRY MACRO must have been declared in the current function scope before FN_CHECKPOINT is used; otherwise a compiler error is reported.
FN_CHECKPOINT(label, level, n, ...);
ex:
...
case IDM_HELP_ABOUT:
{
FN_CHECKPOINT("IDM_HELP_ABOUT", FN_5, 4, hWnd, message, wParam, lParam);
DialogBox(g_hInst, (LPCTSTR)IDD_ABOUTBOX, hWnd, About);
}
break;
...
label: a string that identifies this checkpoint in the log output.
Any parameter type can be reported as long as a conversion to the ostream operator<< is available for its type. Refer to the FN_FUNC_PARAMS MACRO for more details.
If function profiling is enabled, this pair of MACROs can be used to stop and restart the timer for a function. This is useful when the function calls a kernel function that blocks for a long period of time, such as GetMessage or WaitForSingleObject.
FN_PAUSE();
FN_RESUME();
ex:
// Main message loop:
FN_PAUSE();
while (GetMessage(&msg, NULL, 0, 0))
{
FN_RESUME();
if (!TranslateAccelerator(msg.hwnd, hAccelTable, &msg))
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
FN_PAUSE();
}
The FnTrace framework is built around a set of objects created on the stack when a tracing scope is entered. The constructor of the object logs important startup state, and the destructor reports exit state before the function returns. Three types of objects are created to facilitate the framework; I will briefly describe each of them and their purpose later in this section.
All of the object definitions and global variables are declared within the namespace fn {...} to avoid cluttering the global namespace. This should be completely transparent to a developer using the FnTrace framework, because all access is done through the MACROs; no objects should be called directly.
The following sub-sections document the path I followed to arrive at the current implementation. For each major component of the framework, I indicate my initial motivation for a design, what worked and what failed, why I changed it, and what the final solution was.
While developing the framework, I used cout in my MACROs to get things up and running quickly. My intention was then to write a message queue or packet object to transmit the calls to a different process. That external process could handle the synchronization of the messages as well as the final storage to a file or some other medium for analysis. The message queue system was also going to be portable across platforms, starting with Windows CE and the Windows desktop and later moving to Linux and other platforms.
Once I reached the point in development where I needed to create this object, I remembered the Win32 function OutputDebugString, which reports messages to a debugger attached to your application. I may well end up creating that portable message queue class in the future, but for now, using OutputDebugString lets me skip an entire chunk of code while I am developing solely against the WIN32 API.
The one downside is that I need something attached to my application while it is running to listen for the debug strings. The good news is that there are plenty of debugging tools that are not actually debuggers but that listen to the OutputDebugString data for your application; one example is DebugView by Windows Sysinternals.
The only challenge in making this work was to create a std::wostringstream object for each thread to record that thread's state. Then, when the data is written to OutputDebugString, the stream's buffer has to be emptied, or reset to the beginning. For more details, look at the definition of the FN_OUTPUT MACRO.
With an eye toward portability, I also made it possible to redirect the output to std::clog. However, since I have not used this path yet, no effort has been made to make it thread-safe.
My initial vision for this framework was to find a way to declare a macro that could extract all of the information it needed at compile time from the declaration of the function, or behind the scenes from the disassembly of the program. I did not like that all of the basic instrumentation MACRO implementations I had seen required the developer to type in the name of the function. This made instrumenting the application cumbersome and error prone: if you cut and paste, it is too easy to forget to fill in the correct name of the function when you first instrument, or to miss updating the name everywhere the MACRO is used when the function is renamed.
My first thought was to use the compiler-defined MACROs __FILE__ and __LINE__ to report the current file name and line number. I already knew these macros well from a clickable REMINDER MACRO that I, like many other developers, use. My intention was to write an analyzer tool that would use the file name and line number, along with the map file generated by the compiler, to build a slick UI for analyzing an instrumentation trace. The benefit of doing it this way is that I could write short, cryptic messages to indicate state and post-process them at analysis time, hopefully reducing the log size and the impact on the application while it executes.
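For reference, the usual form of that clickable REMINDER idiom (a common trick, not part of this framework): the file(line) prefix lets Visual Studio jump to the source line when the message is double-clicked.
#define FN_STRINGIZE2(x) #x
#define FN_STRINGIZE(x) FN_STRINGIZE2(x)
#define REMINDER __FILE__ "(" FN_STRINGIZE(__LINE__) ") : reminder: "
// Usage: #pragma message(REMINDER "revisit this before release")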
The first reason I dumped this idea is that it was another piece of software I would have to write. Sadly, that is not the reason I did not go this route. The real reason is that I found out about some Microsoft-specific pre-processor macros, namely __FUNCSIG__. This macro reports the name of the function as well as its calling convention, return type, and parameter types. There was no need for me to write a decoding tool anymore: I had all the information I needed to extract the name of the function, the return type, and all of the parameters.
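For illustration, __FUNCSIG__ expands to the full signature as a string (MSVC-specific), which is the text the framework parses:
#include <windows.h>
void __cdecl Demo(int x)
{
    // Emits something like: "void __cdecl Demo(int)"
    OutputDebugStringA(__FUNCSIG__);
}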
Another aspect I have not liked about the instrumentation MACROs I have seen up until now is that, in order to report the return value of a function, a MACRO such as the following has to be placed at each and every return point in the function:
...
INSTRUMENT_OUT_1("FunctionA: returns %s", name);
return name;
}
Again, the name of the function has to be typed in, a cumbersome printf format string is required, and a copy of the MACRO is needed at each and every return point. If one of those points is missed, or added at a later date, the exit of the function will be missed.
To tackle this, I created an object on the stack at the entry of the function. In the destructor of this object, I planned to search the stack or the function registers for the return value and report it. I first went looking at the EAX register, the conventional register in which values are returned on the x86 architecture. I found out quickly that this would not work: the destructor itself runs in a different stack frame than the function I am instrumenting, the EAX register at that point has nothing to do with the desired return value, and I could not find a reliable way to walk the stack for it. That would make the solution very fragile and non-portable, and I would also have to decode the assembly for each architecture I wanted to support on the Windows CE platform.
One intermediate solution was a MACRO that redefines the return keyword. That didn't work, for obvious reasons, so I tried a MACRO of the form RETURN(retVal). Inside the MACRO, the variable retVal would be assigned to the return value stored in the function's implementation object and printed out when the destructor was called. The developer would need to use this macro everywhere the function returned. This is still error prone, and it looks a little funny.
My final solution was to pre-declare a return variable and store a pointer to it in my function stack object. When the developer assigns the return value to this variable, I have access to the value in my stack object. This solution requires the user to follow a convention; however, that is a small price to pay for a big reward.
bool retVal = false;
FN_RETVAL(retVal);
...
if (x < minVal)
{
return retVal;
}
...
retVal = true;
return retVal;
}
I probably spent most of my development time working on a simple and robust way to report the parameter values for a function call. I really did not like the way most TRACE macros are defined, in the form:
#define TRACE_1(msg, p0)
#define TRACE_2(msg, p0, p1)
#define TRACE_3(msg, p0, p1, p2)
#define TRACE_4(msg, p0, p1, p2, p3)
#define TRACE_5(msg, p0, p1, p2, p3, p4)
#define TRACE_6(msg, p0, p1, p2, p3, p4, p5)
#define TRACE_7(msg, p0, p1, p2, p3, p4, p5, p6)
Just to get the ball rolling, I declared this macro:
#define FN_PARAM(param) FN_OUTPUT(#param << L" = " << (param))
This required an FN_PARAM line for each parameter to be reported, but to me it felt better than the multiple numbered MACROs above with their printf format strings. However, it was still cumbersome and very verbose, arguably worse than the schemes I was trying to improve on. This MACRO still exists in the framework as a convenience, and some of the other MACROs actually call it to do their work.
I started investigating the stack frame in assembly and looked at the x86 base pointer, EBP (also known as the frame pointer). From there I could walk up the stack: with the standard frame layout, the first parameter sits at EBP+8 and the second at EBP+12. As I started down this path, I quickly realized I would be back in the printf dilemma, because I would no longer have the type information at compile time; I would have to do extra processing to discover the types at runtime.
I was already using the __FUNCSIG__ compiler macro to get the function definition, so I decided to use the values in that definition to decode the types and generate output along that path. At this point I added a second object in the function declaration MACRO, this one declared static so that it is created only once, when the function is first called. That lets it hold on to all of the type data I calculate for the function, so the work is done only once. This turned out to be much more difficult than I had first thought, but it was still a great step, because it led to the extra profiling capabilities the framework now provides.
After realizing the deficiencies of the automatic stack method, I thought about what other problems could arise. A major problem with automatically grabbing the parameters is that a parameter is often 'out-only', meaning the data is invalid coming into the function and worthless to report to the log. Coupled with the earlier loss of type information, this method clearly involved too much work for very little gain.
The final step down this path was discovering the Boost pre-processor library, a very cool set of expanded pre-processor macros. Some of them are quite complicated, and it took me a while to sift through the documentation and find the few I actually needed. I settled on the ARRAY-based macros, which let me convert the variadic entries into an array; I could then use Boost's control and loop constructs to enumerate each parameter defined for output in my macros.
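A sketch of that approach (these are not the framework's actual macros; FN_REPORT_PARAMS is a name made up for illustration): the variadic arguments are packed into a PP array, and BOOST_PP_REPEAT emits one FN_PARAM statement per element.
#include <boost/preprocessor/repetition/repeat.hpp>
#include <boost/preprocessor/array/elem.hpp>
// The extra indirection forces the array element to expand to the real
// argument name before FN_PARAM stringizes it.
#define FN_EMIT_ONE_I(p) FN_PARAM(p);
#define FN_EMIT_ONE(z, i, arr) FN_EMIT_ONE_I(BOOST_PP_ARRAY_ELEM(i, arr))
#define FN_REPORT_PARAMS(n, ...) BOOST_PP_REPEAT(n, FN_EMIT_ONE, (n, (__VA_ARGS__)))
// FN_REPORT_PARAMS(3, lprc, xLeft, yTop) expands to:
//   FN_PARAM(lprc); FN_PARAM(xLeft); FN_PARAM(yTop);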
Ironically, I decided to also provide the cumbersome numbered form of macros that inspired this framework in the first place, but only to support compilers that lack the variadic pre-processor operator (...) added in Visual Studio 2005. That operator behaves just like the one available for variadic functions in C++.
Thread safety is provided by allocating a set of variables to each thread through thread-local storage (TLS). The TLS mechanism with the Windows desktop compiler turns out to be much simpler than the path I had to take for the Windows CE counterpart. The desktop compiler supports the __declspec(thread) declaration, which allocates per-thread space for a variable. For now this is convenient; however, this form has limitations, and since I had to do the work to support CE anyway, I may make the CE method the only one I use for the Windows implementation as well. My decision will be based on the performance effects once I analyze the framework in detail.
For Windows CE based systems, I had to use the WIN32 APIs designed to provide TLS support:
TlsAlloc
TlsGetValue
TlsSetValue
TlsFree
Currently there are four values that are allocated for each thread in the Windows implementation of the framework:
bool isInstrumentThread: true once FN_INSTRUMENT_THREAD has been declared for the thread
unsigned long indentLevel: the current nesting depth, used to indent the log output
std::wostringstream *pInstLog: the per-thread instrumentation log stream
std::stack<timeEdge> *pTimeStack: the per-thread stack of timing records used for profiling
Each of these variables is conditionally initialized by the thread creation macro, based on the compiler and other #defines. The objects need to be stored as pointers because of the nature of TLS storage; the thread object's destructor releases the memory for each of these variables when the thread exits.
A series of helper macros abstracts access to the TLS variables as much as possible, in an attempt to reduce the impact this access has on the overall performance of programs instrumented with the framework.
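A minimal sketch of the CE-style TLS plumbing described above (the names are illustrative, not the framework's actual identifiers):
#include <windows.h>
#include <sstream>
static DWORD g_fnTlsSlot = TlsAlloc();
struct FnThreadState
{
    bool isInstrumentThread;
    unsigned long indentLevel;
    std::wostringstream* pInstLog;
};
// Called from the thread-initialization macro.
inline void FnInitThreadState()
{
    FnThreadState* p = new FnThreadState();
    p->isInstrumentThread = true;
    p->indentLevel = 0;
    p->pInstLog = new std::wostringstream();
    TlsSetValue(g_fnTlsSlot, p);
}
// Helper used by the other macros to reach the per-thread state.
inline FnThreadState* FnGetThreadState()
{
    return static_cast<FnThreadState*>(TlsGetValue(g_fnTlsSlot));
}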
Profiling was really added as an afterthought, because all of the pieces had fallen into place and it took very little extra effort. Currently the profiling features are primitive and provide a limited amount of detail about the application. In fact, the timer resolution is in milliseconds, so the usefulness of the timing functionality is quite limited; at this point the feature is really a proof of concept. All of the profiling features are facilitated by a static template object created for each function the first time that function is called.
The first feature, which was quite simple to implement, is the ability to count the number of times a function is entered. This is accomplished by incrementing a counter in the static function object found in each function. There is nothing else to this feature. What is the value in knowing how many times a particular function is called?
This feature keeps track of the total amount of time the CPU spends within the scope of a particular function; more precisely, the amount of time the local function state object exists on the stack for each instantiation of the function. This was a bit tricky to make entirely useful. The main issue is that a single function call may call into other functions, and time spent in these "callee" functions should not be counted toward the execution time of the "caller". If it were, your application's main function would have the most time spent in it, and adding up the total running time of all functions would yield more than the actual running time of the program.
My first attempt was simply to create a stack object in the TLS so the start time of each function could be pushed onto it. When the function exited, the top item would be popped off the stack and subtracted from the current time, giving the total usage of the current function. This is when I ran into the issue described above: the usage of each function lower on the stack was continually reported as a much larger value than I expected.
What I needed was a mechanism that would keep track of the usage of each function further down the stack. Then, when I got to the end of a function, I would subtract the total time spent further down the stack from the current execution time of the current function. This concept seemed pretty straightforward when I first started. To help facilitate this, I changed the stack object to use an STL std::pair, which allowed me to couple two items: the start time of the function call and the total delay. The start time was built up the stack, and the delay was passed down from the "callee" to the "caller" through the TLS stack.
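As a minimal sketch of this bookkeeping (the helper functions, the internals of timeEdge, and the use of the Win32 GetTickCount() millisecond timer are illustrative assumptions, not the framework's actual code):

#include <stack>
#include <utility>
#include <windows.h>

// (startTime, calleeDelay) in milliseconds.
typedef std::pair<DWORD, DWORD> timeEdge;

void OnFunctionEnter(std::stack<timeEdge>& tls)
{
    // Start timing this call; no callee delay accumulated yet.
    tls.push(timeEdge(GetTickCount(), 0));
}

DWORD OnFunctionExit(std::stack<timeEdge>& tls)
{
    timeEdge edge = tls.top();
    tls.pop();

    DWORD elapsed   = GetTickCount() - edge.first;  // total time on the stack
    DWORD exclusive = elapsed - edge.second;        // minus time spent in callees

    // Pass our total elapsed time down as a delay for the caller below us.
    if (!tls.empty())
        tls.top().second += elapsed;

    return exclusive;
}

On entry a fresh edge is pushed with zero callee delay; on exit the function's exclusive time is its elapsed time minus the delay its callees reported, and its full elapsed time is handed down as delay to the caller.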
One last issue to tackle was to allow the profiling to account for function calls that this framework was not actually managing, such as blocking calls or calls into time-consuming libraries. In order to give the developer some control over the timing instrumentation, I added the FN_PAUSE and FN_RESUME macros, to be placed around calls such as WaitForSingleObject. This was easily implemented by reusing the code that already calculated when the current function had returned and was being removed from the stack. The time spent in the PAUSE state counts as a delay for the "caller"; however, this time is not counted as time spent in the current function, the "callee".
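For illustration, a hedged sketch of how these macros might be used; only FN_PAUSE and FN_RESUME are names from the framework, while the surrounding function is an assumption:

#include <windows.h>

// Sketch: exclude a blocking wait from this function's measured time.
void WaitForWork(HANDLE hEvent)
{
    // ... instrumented work, counted toward this function ...

    FN_PAUSE();                              // stop charging time to this function
    WaitForSingleObject(hEvent, INFINITE);   // blocking call outside the framework
    FN_RESUME();                             // start charging time again

    // ... more instrumented work ...
}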
The function timing feature has much potential to be useful, but before it can reach that potential I will need to convert the timing to a high-precision timer, with much better than millisecond resolution. With that in place, measuring the execution time of your application's functions becomes useful for tasks such as locating hot spots and verifying that an optimization actually helped.
When I first created all of the tools, I wrote the data to cout, and the order of the header includes was quite different; my program profiling summary was written to stdout while the application was destroying all of the static function objects. However, after I started writing a sample application and placed the FnTrace.h file in the pre-compiled header, I believe I changed the initialization order of some of the objects, and therefore the destruction order. As a result, the stdout object is no longer valid when the objects are being destroyed, and there is no place to accept the reported data. This is a bug I hope to fix in the near future.
The demo application is simply an example of how to use the different macros. It is compatible with both Windows Desktop and Windows Mobile devices: it is the default application generated by the Visual Studio 2005 project wizard, which I have instrumented with the FnTrace framework. The sample image at the top of the article is an example of the data that will be reported to the output window when this application is run under the Visual Studio 2005 debugger.
Here is a sample of the output to expect from the demo application:
3068454610: int __cdecl WinMain(struct HINSTANCE__ *,struct HINSTANCE__ *,wchar_t *,int)
3068454610: hInstance = B6DE9E56
3068454610: hPrevInstance = 00000000
3068454610: lpCmdLine =
3068454610: nCmdShow = 5
3068454610: int __cdecl InitInstance(struct HINSTANCE__ *,int)
3068454610: hInstance = B6DE9E56
3068454610: nCmdShow = 5
3068454610: unsigned short __cdecl MyRegisterClass(struct HINSTANCE__ *,wchar_t *)
3068454610: hInstance = B6DE9E56
3068454610: szWindowClass = INSTRUMENT
3068454610: elapsed: 51ms
3068454610: return (50540)
3068454610: long __cdecl WndProc(struct HWND__ *,unsigned int,unsigned int,long)
3068454610: WM_CREATE
3068454610: hWnd = 7C079270
3068454610: message = 1
3068454610: wParam = 0
3068454610: lParam = 470153100
3068454610: elapsed: 122ms
3068454610: leave; = 1
3068454610: lParam = 0
3068454610: elapsed: 133ms
3068454610: leave;
3068454610: elapsed: 988ms
3068454610: return (1)
3068454610: long __cdecl WndProc(struct HWND__ *,unsigned int,unsigned int,long)
3068454610: IDM_OK
3068454610: hWnd = 7C079270
3068454610: message = 273
3068454610: wParam = 40000
3068454610: lParam = 2080871584 = 0
3068454610: lParam = 0
3068454610: elapsed: 136ms
3068454610: leave;
3068454610: long __cdecl WndProc(struct HWND__ *,unsigned int,unsigned int,long)
3068454610: WM_DESTROY
3068454610: hWnd = 7C079270
3068454610: message = 2
3068454610: wParam = 0
3068454610: lParam = 0
3068454610: elapsed: 114ms
3068454610: leave;
3068454610: elapsed: 0ms
3068454610: leave;
3068454610: elapsed: 0ms
3068454610: return (0)
This is a work in progress that was inspired by the need to instrument production-level application code in a way that would remain in place and would not reduce the readability of the code. The framework is built around the mechanics available in C++ and attempts to be as non-intrusive as possible.
I have just completed my first pass of this framework, and I will now attempt to use it. I will be making improvements to this article if I find deficiencies in the framework. I welcome any comments or suggestions to help improve on its design and implementation.
In the future, I hope to improve the profiling reporting functionality, as well as include features to record and report code-coverage paths and other useful information.
|
http://www.codeproject.com/Articles/30028/Basic-Instrumentation-and-Profiling-Framework-for?msg=4119638
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
Foundations of Python Network Programming
First of all, 'Network' means 'Internet.' Everything in the book concerns protocols running over IP, which is almost anything useful these days. That said, this is a lot of ground to cover -- there's FTP, HTTP, POP3, IMAP, DNS, a veritable explosion of acronyms, and this book does a great job of hitting all the ones you're likely to need.
Foundations assumes you already know Python, but nothing about network programming. The first 100 pages cover the basics of IP, TCP, UDP, sockets and ports, server vs. daemon, clients, DNS, and more advanced topics like broadcast and IPv6 -- and, in case you already know all that, how Python deals with them. This is the only part of the book you will probably read in order. After that you pick what you need.
Find a topic you need to know how to deal with, such as using XML-RPC, and locate the appropriate section of the book. There he'll cover the basics of the topic, show you how to use the correct Python module(s) to implement it, explain any gotchas (this is key!), and write a short but functional application or two that uses it. I'm not sure why this book isn't called 'Practical Python Network Programming.' It's eminently Practical. It won't make your heart race, but it tells you exactly what you need to get the job done.
All this information is out there to find for free, but having it all collected and summarized is worth every penny. And the real value is having the edge conditions and not-so-obvious practical details explained by someone who's obviously used this stuff in the field. Python and its excellent libraries make Internet tasks relatively easy, but it's even easier with some expert help, and the libraries assume you already know what you're trying to do. For example, if you're doing a DNS.Request() record query and using a DNS.Type.ANY, it (for good reason) returns information cached by your local servers, which may be incomplete. If you really need all the records, you need to skip your local servers and issue a query to the name server for the domain. This isn't hard; you just have to know what's going on. Or do you know which exceptions can get raised if you're using urllib to fetch web pages? It's here. Exception handling is not neglected.
So you know what you're getting, here's a laundry list of topics: IP, TCP, UDP, sockets, timeouts, network data formats, inetd/xinetd, syslog, DNS, IPv6, broadcast, binding to specific addresses, poll and select, writing a web client, SSL, parsing HTML and XHTML, XML and XML-RPC, email composition and decoding, MIME, SMTP, POP, IMAP, FTP, MySQL/PostgreSQL/zxJDBC (though you won't learn SQL), HTTP and XML-RPC servers, CGI, and mod_python. As a bonus you get some chapters on forking and threading (for writing servers) and handling asynchronous communication in general.
Just to find something to complain about churlishly, I wish Goerzen had managed to do all this and make it scintillatingly brilliant and witty from cover to cover (all 500 pages); perhaps dropping juicy bon mots of gossip from the Debian project. And while I'm at it I'd like a pony. No, seriously. If you program in Python, intend to do anything Internet related, and aren't already a Python networking god, you need Foundations of Python Network Programming. In terms of 'hours I could have saved if only I had this book sooner' it would have paid for itself many times over.
You can purchase Foundations of Python Network Programming from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Amazon (Score:2, Informative)
Re:Amazon (Score:5, Informative)
Re:Amazon (Score:1)
Re:Amazon (Score:2)
Re:Amazon (Score:1)
-Leigh
Re:Amazon (Score:5, Insightful)
Re:Amazon (Score:2, Offtopic)
If he really wanted to be an ass, he could have hidden it behind a meta redirect. But he didn't. If you want to be all morally hoity-toity about it, that's your prerogative.
Oh and by the way, I can smell your patchouli.
Re:Amazon (Score:4, Insightful)
Re:Amazon (Score:1, Insightful)
Try starting a small company that makes Apple clone hardware and see if Apple keeps you from doing something that you want.
Re:Amazon (Score:1, Insightful)
I don't know of Apple or amazon.com trying to keep me from doing something I want or pay for something that should be free, so I'm not all that fussed.
Unless of course you want to use something like an affiliate program or 1-click ordering, which Amazon has patented.
Amazon is still on my shitlist, but the great thing about the internet is you can "shop" at amazon and then click-click, buy your book somewhere else, with a clear conscience.
Re:Amazon (Score:1)
And all along I thought that was a worm poking out of it. The good news is it must still be edible.
Re:Amazon (Score:1)
Re:Amazon (Score:1)
Welcome to slashdot, you must be new here.
Twisted Framework (Score:5, Interesting)
For those interested in starting in network programming in Python, I'd recommend checking it out.
Re:Twisted Framework (Score:2)
Re:Twisted Framework (Score:5, Informative)
The bonus to Mr. Goerzen's use of Twisted in IMAP is that I came away with a much better understanding of how to use Twisted generally -- I grokked Deferreds for the first time. And I'd read all (ALL) the Twisted documentation I could get my hands on prior to that. That probably gave me the proper background, but the book really kicked in to place those final pieces necessary to get what was going on in Twisted.
The book doesn't just cover "raw" network programming, but covers multiple domain-specific areas and points you to the best libraries and modules to use for the area.
Good stuff, I highly recommend the book.
Re:Twisted Framework (Score:2)
I've been hoping for someone to do this for a long time... Not very likely to happen though...
The documentation is huge and pretty good but it just doesn't cut it for me...
Typos (Score:5, Informative)
Re:Typos (Score:3, Informative)
On the other hand, the content is excellent, truly a good book. And so far, my binding hasn't broken, FWIW.
Re:Typos (Score:4, Interesting)
As practicalities go, the one thing I really liked about the last APress book I got ('Dive into Python') was that when I wanted to refer to it at work, I didn't have to carry the book in, I just read the section I wanted on their website.
It was one of those "never going to buy another book without this facility" moments... how could we have missed something so useful for so long?
So back on topic, this book only has one chapter available for download, and it's in PDF rather than anything useful, so I guess it's not general policy to make all APress books downloadable.
I did find it amusing how Visual Basic, C#, and
re: Downloadable Dive Into Python (Score:1)
Re: Downloadable Dive Into Python (Score:1)
Look, I program in Perl (Score:1, Interesting)
Re:Look, I program in Perl (Score:5, Interesting)
Re:Look, I program in Perl (Score:5, Interesting)
And classes too. What's more, a class is just a callable object that is called exactly like a function - "C()" - and it returns an instance of that class. If a function wants a class for instantiation, you can just pass a factory function that returns some instance of any class.
Re:Look, I program in Perl (Score:5, Interesting)
And classes too.
Yes, absolutely. First-class functions and types are among the most important hallmarks of a good, usable language. Not being able to pass around classes or functions would severely limit how most of my solution sets are defined. Personally, I won't consider languages that don't offer this.
Re:Look, I program in Perl (Score:2)
Re:Look, I program in Perl (Score:4, Informative)
import MyModule
MyModule.SomeFunction()
Or:
from MyModule import *
SomeFunction()
Re:Look, I program in Perl (Score:2)
For the former, first check out the Python Package Index [python.org], which is the equivalent of CPAN, to see if someone else has created a relevant package. If not, creating a Python module from C code is easy [python.org]. As far as calling Perl modules from Python goes, that is one of the things Parrot is intended to do, so your savior will come with the apocalypse.
SWIG rocks for plugging into C/C++/Libraries. (Score:5, Informative)
I can't say enough good things about SWIG. It's an amazing piece of work that has saved me years of menial labor and enabled me to integrate all kinds of complex code into Python, from hairy C++ templates to third-party Win32 libraries for which there is no source code. It works extremely well with Python, and many other languages too.
Here is part of the blurb from the web site [swig.org]: C#, Common Lisp (Allegro CL), Java, Modula-3 and OCAML, plus several interpreted and compiled Scheme implementations (Chicken, Guile, MzScheme).
-Don
Re:SWIG rocks for plugging into C/C++/Libraries. (Score:2)
Re:Look, I program in Perl (Score:2)
Re:Look, I program in Perl (Score:2)
I can definitely see a real use for this, as there aren't very many python packages around that do what DBI does. (at least not that I could find back when I was looking for them. I suspect this has not changed).
Look: Programming in Perl is Simply Irresponsible! (Score:2, Insightful)
If you choose to program in Perl, the poor suckers who are going to have to read, maintain, clean up and modify the code you wrote will hate your guts.
Programming languages should be designed primarily for PEOPLE to read, understand, write and maintain reliably, and only incidentally for computers to interpret and execute.
Perl goes against every rule in the
Re:Look: Programming in Perl is Simply Irresponsib (Score:2)
O.K. Why don't you start with listing a single specific?
Re:That's like eating just one potato chip! (Score:2)
Nothing. Unless you want to write Perl like you would write C/C++ or shell scripts (that is what many people do).
If you follow the style guide and conventions, you will be able to write concise, elegant code in Perl. Hmmm... It's like English. You can write poetry or you can spew out illiterate gibberish. One has to invest some time in studying Perl. It grows on you.
S
Re:That's like eating just one potato chip! (Score:2)
It's sad how many Perl programmers have invested so much precious time learning their way around Perl's fractally complex syntactic surface area and nitpicking legalistic style guides and conventions that they're unwilling to consider learning other languages. Monolinguistic Perl programmers are afraid to learn other languages because they're under the mistaken belief that programming lang
Re:That's like eating just one potato chip! (Score:2)
What's Wrong with Perl (Score:2)
If you're a Perl programmer who doesn't know what Perl's weaknesses are yourself, and you have to ask me to spell them out for you, then you're an Incompetent Perl Programmer. You should have done that research yourself before deciding to use Perl. Shame on you! Put down the crack pipe and step away from the keyboard.
Incompetent Perl programmers who can't see or admit the fl
Re:What's Wrong with Perl (Score:2)
You can find detractors to any and all languages or systems (just noticing the source of the comments above).
I was not asking you to drink someone else's koolaid, but specifically list what you think is wrong with perl and see if it stands up to scrutiny. That is why your post was rated flamebait in the first degree.
Incidentally, if you read some of the quotes above and then read the discussions a little further and try to understand what the original compl
Perl is like DDT, Asbestos and Lead. (Score:2)
If you didn't already know those points I raised about Perl and the fundamental problems of "DWIM" programming languages, then you're simply not a competent programmer. The information is out there, go look it up and learn it yourself.
My point is that there are a lot of incompetent Perl programmers ou
Re:Look: Programming in Perl is Simply Irresponsib (Score:1)
Re:Look: Programming in Perl is Simply Irresponsib (Score:2)
-Don
Re:Look: Programming in Perl is Simply Irresponsib (Score:1, Funny)
It's like masturbating in public and not cleaning up after yourself.
Yes! whenever I masturbate in public I always wipe it up afterwards! The cashier at the supermarket really appreciated that!
Re:Look: Programming in Perl is Simply Irresponsib (Score:2)
Conjoined Fetus Lady [tvtome.com]
Every Perl programmer should switch to Python. (Score:5, Insightful)
At least, for a month or so.
Knowing multiple languages increases your value as a programmer quadratically. I like to think that languages follow a square law. By doubling the number of languages you know, you quadruple your total skill and marketability as a programmer.
I've done significant stuff in both languages and there are definitely tasks where Python is better -- for example, command-and-control, super-high-level types of apps, which coordinate large systems of smaller programs. And Perl is vastly superior in other situations, such as processing enormous wads of data and formatting output. I've even written hybrid programs where Python and Perl code intertwine.
Step outside your box. You don't have to love the language you're learning, but consider it an investment in yourself. Saving money sucks too, but it's still a good idea.
Re:Every Perl programmer should switch to Python. (Score:2, Informative)
Now I use Python's re module, which takes a little getting used to. While I'm at it, for those of you who write complex or deep regular expressions where re craps out, there is a much more stable legacy module called pre that seems to be undocumented. It works exactly like re; the only difference is that it trades speed for stability.
Re:Every Perl programmer should switch to Python. (Score:3, Insightful)
Different syntax and different libraries will open your mind a little. But when a language encourages (or forces) you to think differently, that's where your "square law" starts to kick in.
Re:Every Perl programmer should switch to Python. (Score:2)
If you're a "$LANGUAGE Programmer"... (Score:4, Insightful)
As the parent post says, knowing multiple languages is good. One of my pet annoyances is hearing people describe themselves as a programmer for a specific language -- there are many more out there, and to say you only do one speaks volumes about the lack of breadth of experience you possess.
And don't just stick with imperative object-oriented languages. Try a few declarative languages, like Haskell (functional) or Prolog (logic). Yes, getting your head around them is hard. But you'll be glad you did.
Disclaimer: I'm a student doing an MSc in Computer Science, and by lines of code, most of what I wrote in the last twelve months was Perl, and was completely unrelated to my thesis
Re:If you're a "$LANGUAGE Programmer"... (Score:2, Interesting)
You're a little bit off there. Haskell is not declarative. It's just functional, as you said. Prolog is declarative though.
If you want to try a nice variety of languages here's what I suggest:
C - 'nuf said.
Java or C# - Good application programming languages. Similar enough for learning purposes that it doesn't matter which you use. If you want to get into the inner workings, both are very interesting systems to learn about.
Pe
Re:Every Perl programmer should switch to Python. (Score:1)
It doesn't take much googling to figure out that a growing number of very experienced programmers are "discovering" Python, and the most common comment is something similar to "...fun to program again..."

Python is a mainstay at my workplace. A language that doesn't get in your way; you can just solve problems and create solutions.
But, perl isn't going away. It's simply magic for one liners.
Have you tried Ruby? (Score:4, Insightful)
Ruby [ruby-lang.org] to Python. No indentation hassles with Ruby, for example. You'll also like the way Ruby does OO compared to Perl OO. More [rubyforge.org] Rubilicious [ruby-doc.org] links... [rubygarden.org]
Also, The Pragmatic Programmers [pragmaticprogrammer.com] have released a new edition of Programming Ruby that's a great intro and reference to the language - go buy it from their website.
Ruby: Because I can't wait around for Perl 6 to get finished
Re:Have you tried Ruby? (Score:5, Informative)
As a background to my choice, here's what I use it for:
I tend to write primarily for the Win32 platform, and most of my applications have GUI front-ends; they speak to MySQL databases and often also control third-party applications via COM. Aside from the COM stuff (the apps I'm controlling are only available for Win32 anyway), my software is fully cross-platform, which is desirable. I love and use GNU/Linux extensively, and am starting to see an interest from the SME market, which is encouraging.
I've used Python a lot and Perl a fair bit, plus I've looked at and thoroughly expected to fall in love with Ruby and Lua. I didn't.
I've realised that all four languages are so similar in many respects, that it's very difficult to convince a person using one to convert to another unless they have a very specific need. So it's just not worth trying.
If the language you are using does the job for you, then stick with it. Once you know the work-arounds for its deficiencies (and they all have them) then there is even less reason to change.
Trying to be objective, here's how I find each of the languages:
Python - Extremely easy to pick up, which is actually good for experienced programmers as well, but at the same time very flexible and powerful. Very readable and easily maintainable code. Good range of libraries (but nowhere near as many as Perl) which all stick closely to a well-established "pythonic" way of doing things. You don't have to choose from a dozen different libraries that all claim to do the same job. The interactive shell is also remarkably useful for experimentation and debugging. Most good programmers indent their code anyway, and I don't know anyone who found the forced indentation a problem unless they were deliberately being argumentative. The concept of packages is very simple and neat - you don't need to do anything special to allow importing of your code. Object orientation is very flexible, straightforward and powerful. There are a large number of precompiled libraries with installers for Win32 platforms - don't ever underestimate how important this is when using scripting languages in the current commercial environment. Extensive and uniform use of dot notation. Good range of freely available cross-platform IDEs. Like most Python bindings, those to GUI libraries are generally much easier to work with than the original C libraries.
Perl - Very powerful, but extensive use of special characters rather than keywords can tend to result in code which needs reading several times to fully comprehend. Having built-in regular expressions is both useful and powerful, but only adds to the problem of making code less readable. The object-oriented aspects of the language are very much bolted on, and far from elegant. Functionally they're quite capable, but certainly not pretty. It's very easy to code in your own style, with several ways of doing the same thing; not necessarily a bad thing, but it does mean there is more to learn of the core language if you want to be confident about being able to maintain code written by others. You do feel that you have flexibility in your choice of coding style, which is always nice. Immense number of additional libraries, available from one source - the wonderful CPAN - but there is also a good deal of duplication, and you need to spend time evaluating the options to find one that has the features you need and works the way you'd like. Packages have to be written, or at least bundled up, as such. That said, it's available by default on *nix systems, and it's also very closely tied into the operating system and shell, which makes OS-related stuff in Perl a breeze. Win32 support is available, but Perl is only truly at home in a *nix environment. The bindings to most cross-platform GUIs are often more complicated and difficult to use than the C equivalents.
Ruby -
Re:Have you tried Ruby? (Score:2)
The new second edition Programming Ruby by Dave Thomas & co. has an excellent section on built-in classes and modules that starts at page 427 and goes to page 777 - and even it is not exhaustive. I've done Ruby programming for pay and I've not found that Ruby was lacking any functionality that I've needed. Sure, Perl's CPAN is bigger than Ruby's RAA, but there's quite a bit of redundancy in the CPAN as well. I suspect that we're ga
Re:Have you tried Ruby? (Score:2)
A nice clean container for publishing objects and frameworks for logging, testing, cron, and remote objects.
Re:Have you tried Ruby? (Score:1)
Re:Have you tried Ruby? (Score:1)
Re:Have you tried Ruby? (Score:2)
Unlike Perl, they are scattered all over the web, and you'll have to google for the module you think you need. There is no equivalent of perl -MCPAN, so installation of the modules can be a pain if there are dependencies.
"You don't have to choose from a dozen different libraries that all claim to do the same job."
This is flat out false. There are lots of SOAP libr
Re:Have you tried Ruby? (Score:1)
Re:Have you tried Ruby? (Score:1)
This is always brought up - oh no, Python enforces indentation. This is not a hassle - five minutes using Python and you won't even notice!
Re:Have you tried Ruby? (Score:2)
Indentation sometimes gets screwed up when you move a chunk of text around. Sometimes you 'fail to proceed' when you run tests and it's because of a screwed up indentation. It's easy enough to diagnose and pretty easy to fix, but it is a hassle.
Also, if you have a crappy text editor (or if you have crap skillz) you can get in trouble when you have to indent a chunk of text. Not a big hassle, just a little one.
Despite these two exceptions, python's me
Re:Look, I program in Perl (Score:3, Insightful)
Re:Look, I program in Perl (Score:2)
Re:Look, I program in Perl (Score:3, Insightful)
Re:Look, I program in Perl (Score:2)
For python code: insanely easy, modules and packages are just files and directories, and your own libraries snap into place just as if they were part of the stdlib.
For C/C++: still rather easy, even if you do it manually; with pyrex or swig, it's even better. The best part is, you call a C extension just like a Python module, there's no difference whatsoever, programmer using the
Re:Look, I program in Perl (Score:2)
Haskell is superb for mathematical problems, partly because the syntax is very mathematical, partly because the compiler implementation is well optimised for that kind of problem. (I often wonder why it doesn't get more use for stuff like encryption). Completely Open Source, of course.
Python is wondrous for network-related stuff - it's a real strength of the language - and also seems to get a lot of use as a language for installations and mods.
Perl i
Re:Look, I program in Perl (Score:3, Interesting)
print "Who are you?"
name = stdin.readline().strip()
print "I'm glad to meet you, %s." % name
Overly verbose and complicated; you can write this in two lines and save an import, as well as the need to know stdin is a file object to boot. And the string formatting operator is not instantly clear, especially to a person who hasn't used printf() - not everyone has a C background.
How about
As the author says... (Score:5, Insightful)
Is it? If you are, as the author says, someone familiar with Python but you have no clue about network concepts or programming, perhaps this book isn't for you. The first 100 pages or so are all intro to networking; after that, you have specific Python networking programming topics. Perhaps you'd be better suited with a networking book and then this book (sans the first 100 pages).
I've read a few books on programming languages and when they decide that the reader needs an intro to something, they usually provide pretty poor coverage of that topic. You end up being lost after you get done with the intro section. I did this when I was learning some encryption programming... before I could start actually writing code that deals with encryption, I needed a solid base. Instead of trying to teach me all I needed to know, the reference I was using pointed me at the industry's best encryption and security books and authors (like Bruce Schneier).
Disclaimer: Not having read this particular book, maybe this one is different. I don't know.
Re:As the author says... (Score:2)
If you are doing your own encryption, it's going to be easily crackable. Encryption is definitely something that needs to be done in a peer-reviewed library.
What's wrong with, say, SSL?
Encryption Programming and Canned Libraries (Score:3, Insightful)
Re:As the author says... (Score:2)
Re:As the author says... (Score:3, Insightful)
Re:As the author says... (Score:5, Insightful)
Having read the book, I understand socket programming, general network programming, and could probably design and implement my own application protocol -- badly, of course, but still... Could I have done this prior to reading this book? No. Did this book make it easy to pick up the necessary background, as well as make it easy to pick up the specifics of network programming in Python? Yes.
This is a great book, and is a must-have for Python programmers.
Python makes Windows fun (Score:4, Interesting)
I used to be a huge Linux buff (and still am when it comes to servers), but intelligent tools like Python make using Windows XP Home a much more fruitful and fun experience as I can actually get stuff done programmatically. Go Python developers and keep up the good work!!!
Re:Python makes Windows fun (Score:2)
Python Web Programming (Score:2, Informative)
It also had a brief Python tutorial in it, but I kind of skipped over that, so I can't vouch for that part. The rest of the book will definitely teach you a bit about network programming, web/database programming, and things of that nature. For most of the
/. programmers it might be pretty old hat since they were doing this stuff in the womb, but for inexperienced programmers such as myself, I found it helpful.
Civ IV moddable with python (Score:4, Interesting)
OT - sig (Score:1)
Re:OT - sig (Score:1)
Re:OT - sig (Score:1)
Re:OT - sig (Score:1)
I never really got into 'hjkl' as navigation keys, as even when playing the ports on my Amiga 500 I had a numberpad to use instead
Re:Civ IV moddable with python (Score:2)
A bunch of friends and I were talking about civ 3 the other day, and how the biggest feature it lacks is some player useable scripting engine. I hope that it will be flexible and allow things such as iterating through all cities and setting production to cavalry where net shields are greater than 15 per turn, except if the city has no barracks, in which case set production to that. It's a pain to do by hand.
Re:Civ IV moddable with python (Score:2)
Google groups posting (Score:1)
/groups?q=Foundations+of+Python+Network+Programming&hl=en&lr=&selm=mailman.3337.1095202643.5135.python-announce-list%40python.org&rnum=1
How many Foundations? (Score:5, Funny)
Re:How many Foundations? (Score:2)
Re:How many Foundations? (Score:1)
Re:How many Foundations? (Score:2)
For a minute there... (Score:2, Funny)
For a minute there I really did think the title "Foundations of Python Network Programming" indicated that a new Python Network was being created for television and that they were laying the foundations and discussing what the programming schedule would be.
Seriously, no joke. And by the way, a Python Network would beat the Game Show Network [gsn.com] hands down !!!
The Agony of SOAP/WSDL (Score:2, Interesting)
The usual way involves a pageful of obscure code, and having to use obtuse WSDL descriptor files and code generators to give you classes.
But, hey, python can generate classes and methods on the fly. So getting the temperature at zip code 90210 becomes a one-liner after some standard imports:
I'm n
Python == good (Score:3, Interesting)
Python caused me to change my layout for code, almost instantly eliminating a big problem with c-like code: the missing brace.
Most code is structured like this: In this small segment, notice that there are two sets of braces - and they don't line up at all. You have to mentally follow the code after "fubar" and after the condition "if (c())" in order to mentally track the state of the braces.
Compare this to
(slashdot's ECODE filter sux0rs)
If you could see it, you'd notice that the braces line up. The opening and closing braces for the condition "if (c()))" are indented one more than the braces for function Fubar() which are indented more than the line "Function fubar()" itself.
Thus, you merely have to follow the indents to match the opening/closing braces. As a result of this change, I spend less than 5 minutes per week matching up braces without the need for an IDE to match them up for me.
Python seems to be a good language (I like that you can compile sections of a Python program in C to improve performance without rewriting the whole program), but its concepts of layout certainly carry beyond Python itself!
Re:Python == good (Score:2)
void function(args) {
    if (<cond>) {
        <body>
    }
}
See how the closing brace for the if lines up with the if itself?
Re:Python == good (Score:1)
FREE BOOK HERE! (Score:2)
Re:Easiest review to skip (Score:5, Insightful)
Re:Easiest review to skip (Score:2)
Re:Easiest review to skip (Score:2)
Re:Easiest review to skip (Score:2)
Re:A better article... (Score:1)
|
http://news.slashdot.org/story/04/10/13/1815209/foundations-of-python-network-programming
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
Now the shooting, killing, racing, jumping and running can finally begin.
Posted by unrealer2 on Aug 23rd, 2011
How can we force something to move or interact with the player? Since we can't just say it out loud, we have to tell the computer what to do in other ways. That is: we have to write a script. We decided to use AngelScript as the language because of its great support and C++-like structure.
There are two kinds of scripts: one is the GlobalScript, and the other one is the ContentMark-Script. The difference between them is that while ContentMark-Scripts are linked to a ContentMark on which they operate, GlobalScripts have no such anchor.

GlobalScripts are meant to be used as scripts which operate on the whole level, or to control the players. You could build the same functionality of a ContentMark-Script into a GlobalScript, but this leads to trouble: you'd have to register every single new instance there. This makes the script unusable for other levels, and it also leads to much more complicated code than simply using a ContentMark-Script. As said above, ContentMark-Scripts are linked to a ContentMark after placing and can be accessed from inside it. This way the same script can be used over and over again.
Now we want to show you, how you could implement a simple CM-Script which just rotates the given ContentMark. To do that, we have to create a new script-file in our browser:
That done, we can double-click our new file and the (rather simple) code-editor will pop up, waiting for us to get filled with code:
#include "EngineScripts.wpk\ContentMark.ws" class SimpleRotation : ContentMark { // Variables and Methods go here! };
This is the basic layout of every Script-File:
Since we already called that script "SimpleRotation" we want it to do exactly that.
We'll surely need the Tick() function, so here we go:
#include "EngineScripts.wpk\ContentMark.ws" class SimpleRotation : ContentMark { void Tick(float DeltaTime) { } };
As always, Tick() gets called every frame of the engine when wanted, which makes it just perfect for animating things fluently. I also have to mention that we are running the script engine on its own thread; that means that rendering speed and script speed aren't the same!
Back to our script now. We want our thing to rotate. To do this, we need to get access to the ContentMark we are linked to, which is done via the GetCM() function.
The call might look like this:
#include "EngineScripts.wpk\ContentMark.ws" class SimpleRotation : ContentMark { void Tick(float DeltaTime) { GetCM().AddRotation(0, DeltaTime, 0); } };
And hey! Our script is ready to get compiled and linked!
Nice :D, i'm going to try to incorporate lua into my engine at some point.
I was thinking about using Lua in [w]tech at first as well. But then it turned out I don't like it that much. So I did some research and found AngelScript as the better alternative for us.
Btw: I didn't know that you are the guy who made Bounce! Nice work! :)
This engine looks better every time i see it! Awesome sauce, having said that AngelScript seems like a good choice
|
http://www.moddb.com/engines/wtech/news/wtech-scripting-support
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
This example shows several functions, including summing all valid values.
#include <boost/circular_buffer.hpp>
#include <numeric>
#include <assert.h>

int main(int /*argc*/, char* /*argv*/[])
{
    // Create a circular buffer of capacity 3.
    boost::circular_buffer<int> cb(3);
    assert(cb.capacity() == 3);

    // Check is empty.
    assert(cb.size() == 0);
    assert(cb.empty());

    // Insert some elements into the circular buffer.
    cb.push_back(1);
    cb.push_back(2);

    // Assertions to check push_backs have expected effect.
    assert(cb[0] == 1);
    assert(cb[1] == 2);
    assert(!cb.full());
    assert(cb.size() == 2);
    assert(cb.capacity() == 3);

    // Insert some other elements.
    cb.push_back(3);
    cb.push_back(4);

    // Evaluate the sum of all elements.
    int sum = std::accumulate(cb.begin(), cb.end(), 0);

    // Assertions to check state.
    assert(sum == 9);
    assert(cb[0] == 2);
    assert(cb[1] == 3);
    assert(cb[2] == 4);
    assert(*cb.begin() == 2);
    assert(cb.front() == 2);
    assert(cb.back() == 4);
    assert(cb.full());
    assert(cb.size() == 3);
    assert(cb.capacity() == 3);

    return 0;
}
The circular_buffer has a capacity of three int. Therefore, the size of the buffer will never exceed three. The std::accumulate algorithm evaluates the sum of the stored elements. The semantics of the circular_buffer can be inferred from the assertions.

You can see the full example code at circular_buffer_sum_example.cpp.
The bounded buffer is normally used in a producer-consumer mode: producer threads produce items and store them in the container, and consumer threads remove these items and process them. The bounded buffer has to guarantee that producers do not insert items into a full buffer and that consumers do not remove items from an empty one; in other words, producers wait while the buffer is full and consumers wait while it is empty.
This example shows how the circular_buffer can be utilized as an underlying container of the bounded buffer.
#include <boost/circular_buffer.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <boost/thread/thread.hpp>
#include <boost/call_traits.hpp>
#include <boost/bind.hpp>
#include <boost/timer/timer.hpp> // for auto_cpu_timer

template <class T>
class bounded_buffer
{
public:
    typedef boost::circular_buffer<T> container_type;
    typedef typename container_type::size_type size_type;
    typedef typename container_type::value_type value_type;
    typedef typename boost::call_traits<value_type>::param_type param_type;

    explicit bounded_buffer(size_type capacity) : m_unread(0), m_container(capacity) {}

    void push_front(typename boost::call_traits<value_type>::param_type item)
    {
        // `param_type` represents the "best" way to pass a parameter of type `value_type` to a method.
        boost::mutex::scoped_lock lock(m_mutex);
        m_not_full.wait(lock, boost::bind(&bounded_buffer<value_type>::is_not_full, this));
        m_container.push_front(item);
        ++m_unread;
        lock.unlock();
        m_not_empty.notify_one();
    }

    void pop_back(value_type* pItem)
    {
        boost::mutex::scoped_lock lock(m_mutex);
        m_not_empty.wait(lock, boost::bind(&bounded_buffer<value_type>::is_not_empty, this));
        *pItem = m_container[--m_unread];
        lock.unlock();
        m_not_full.notify_one();
    }

private:
    bounded_buffer(const bounded_buffer&);              // Disabled copy constructor.
    bounded_buffer& operator = (const bounded_buffer&); // Disabled assign operator.

    bool is_not_empty() const { return m_unread > 0; }
    bool is_not_full() const { return m_unread < m_container.capacity(); }

    size_type m_unread;
    container_type m_container;
    boost::mutex m_mutex;
    boost::condition m_not_empty;
    boost::condition m_not_full;
};
The bounded_buffer relies on Boost.Thread and Boost.Bind libraries and Boost.call_traits utility.
The push_front() method is called by the producer thread in order to insert a new item into the buffer. The method locks the mutex and waits until there is a space for the new item. (The mutex is unlocked during the waiting stage and has to be regained when the condition is met.) If there is a space in the buffer available, the execution continues and the method inserts the item at the end of the circular_buffer. Then it increments the number of unread items and unlocks the mutex (in case an exception is thrown before the mutex is unlocked, the mutex is unlocked automatically by the destructor of the scoped_lock). At last the method notifies one of the consumer threads waiting for a new item to be inserted into the buffer.
The pop_back() method is called by the consumer thread in order to read the next item from the buffer. The method locks the mutex and waits until there is an unread item in the buffer. If there is at least one unread item, the method decrements the number of unread items and reads the next item from the circular_buffer. Then it unlocks the mutex and notifies one of the producer threads waiting for the buffer to free a space for the next item.
The bounded_buffer::pop_back() method does not remove the item; the item is left in the circular_buffer, which then replaces it with a new one (inserted by a producer) when the circular_buffer is full. This technique is more effective than removing the item explicitly by calling the circular_buffer::pop_back() method.
This claim is based on the assumption that an assignment (replacement) of a new item into an old one is more effective than a destruction (removal) of an old item and a consequent inplace construction (insertion) of a new item.
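As a usage sketch (not part of the Boost documentation; the produce and consume helpers are illustrative names), a producer and a consumer thread exchanging integers through the bounded_buffer defined above:

const int total_items = 100;

void produce(bounded_buffer<int>* buffer)
{
    // Blocks in push_front() whenever the buffer is full.
    for (int i = 0; i < total_items; ++i)
        buffer->push_front(i);
}

void consume(bounded_buffer<int>* buffer)
{
    // Blocks in pop_back() whenever the buffer is empty.
    int item;
    for (int i = 0; i < total_items; ++i)
        buffer->pop_back(&item);
}

int main()
{
    bounded_buffer<int> buffer(10);   // room for ten unread items
    boost::thread producer(boost::bind(produce, &buffer));
    boost::thread consumer(boost::bind(consume, &buffer));
    producer.join();
    consumer.join();
    return 0;
}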
For a comparison of bounded buffers based on different containers, compile and run bounded_buffer_comparison.cpp. The test should reveal that the bounded buffer based on the circular_buffer is the most effective, closely followed by the std::deque-based bounded buffer. (In reality, the result may sometimes differ because the test is always affected by external factors such as immediate CPU load.)
You can see the full test code at bounded_buffer_comparison.cpp. An example of the output is:

Description: Autorun "J:\Cpp\Misc\Debug\bounded_buffer_comparison.exe"
bounded_buffer<int>                          5.15 s
bounded_buffer_space_optimized<int>          5.71 s
bounded_buffer_deque_based<int>             15.57 s
bounded_buffer_list_based<int>              17.33 s
bounded_buffer<std::string>                 24.49 s
bounded_buffer_space_optimized<std::string> 28.33 s
bounded_buffer_deque_based<std::string>     29.45 s
bounded_buffer_list_based<std::string>      31.29 s
|
http://www.boost.org/doc/libs/1_59_0/doc/html/circular_buffer/examples.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
scr_dump, scr_restore, scr_init, scr_set - read (write) a curses screen from (to) a file
SYNOPSIS
#include <curses.h>
int scr_dump(const char *filename);
int scr_restore(const char *filename);
int scr_init(const char *filename);
int scr_set(const char *filename);
DESCRIPTION
The scr_dump routine dumps the current contents of the virtual screen to the file filename. The scr_restore routine sets the virtual screen to the contents of filename, which must have been written using scr_dump; the next call to doupdate() restores the screen to the way it looked in the dump file. The scr_init routine reads in the contents of filename and uses them to initialize the curses data structures about what the terminal currently has on its screen. The scr_set routine is a combination of scr_restore and scr_init.

SEE ALSO
curses(3NCURSES), util(3NCURSES).
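For illustration only (this sketch is not part of the original manual text; the dump file path is arbitrary), a minimal round trip through scr_dump and scr_restore:

#include <curses.h>

int main(void)
{
    initscr();
    printw("hello");
    refresh();

    scr_dump("/tmp/screen.dump");    /* save the virtual screen */

    clear();
    refresh();

    scr_restore("/tmp/screen.dump"); /* reload the saved screen... */
    doupdate();                      /* ...and repaint it */

    endwin();
    return 0;
}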
|
http://www.linux-directory.com/man3/scr_set.shtml
|
crawl-003
|
en
|
refinedweb
|
SYNOPSIS
#include <sched.h>
int sched_setscheduler(pid_t pid, int policy,
const struct sched_param *param);
DESCRIPTION
The sched_setscheduler() function shall set the scheduling policy and scheduling parameters of the process specified by pid to policy and to the parameters specified in the sched_param structure pointed to by param, respectively. If the value of pid is negative, the behavior of the sched_setscheduler() function is unspecified.

Additionally, implementation-defined restrictions may apply as to the appropriate privileges required to set a process' own scheduling policy, or another process' scheduling policy, to a particular value.

If the scheduling policy specified by policy is SCHED_SPORADIC, the value specified by the sched_ss_low_priority member of the param argument shall be any integer within the inclusive priority range for the sporadic server policy. The sched_ss_repl_period and sched_ss_init_budget members of the param argument shall represent the time parameters used by the sporadic server scheduling policy for the target process. The sched_ss_max_repl member of the param argument shall represent the maximum number of replenishments that are allowed to be pending simultaneously for the process scheduled under this scheduling policy.

The specified sched_ss_repl_period shall be greater than or equal to the specified sched_ss_init_budget for the function to succeed; if it is not, the function shall fail.
The effect of this function on individual threads is dependent on the scheduling contention scope of the threads:

* For threads with system scheduling contention scope, these functions shall have no effect on their scheduling.

* For threads with process scheduling contention scope, the threads' scheduling policy and associated parameters shall not be affected by these functions.
The underlying kernel-scheduled entities for the process contention scope threads shall have their scheduling policy and associated scheduling parameters changed to the values specified in policy and param, respectively. Kernel-scheduled entities for use by process contention scope threads that are created after this call completes shall inherit their scheduling policy and associated scheduling parameters from the process.

This function is not atomic with respect to other threads in the process. Threads may continue to execute while this function call is in the process of changing the scheduling policy and associated scheduling parameters for the underlying kernel-scheduled entities used by the process contention scope threads.
RETURN VALUE
Upon successful completion, the function shall return the former scheduling policy of the specified process. If the sched_setscheduler() function fails to complete successfully, the policy and scheduling parameters shall remain unchanged, and the function shall return a value of -1 and set errno to indicate the error.
ERRORS
The sched_setscheduler() function shall fail if:

EINVAL The value of the policy parameter is invalid, or one or more of the parameters contained in param is outside the valid range for the specified scheduling policy.
The following sections are informative.
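EXAMPLES
The following sketch (illustrative, not part of the normative text) switches the calling process to SCHED_FIFO at that policy's minimum priority:

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp;

    /* Lowest priority that is valid for SCHED_FIFO. */
    sp.sched_priority = sched_get_priority_min(SCHED_FIFO);

    /* pid 0 designates the calling process; appropriate
       privileges are required, as described above. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    return 0;
}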
SEE ALSO
sched_getparam(), sched_getscheduler(), sched_setparam().
|
http://www.linux-directory.com/man3/sched_setscheduler.shtml
|
crawl-003
|
en
|
refinedweb
|
This service provided by the operating system is fundamental: the supervision of processes' execution. Processes run in a dedicated environment, and losing direct control over their execution leaves the developer with a synchronization problem, summarized by this question: how is it possible to let two independent processes work together?
The problem is more complex than it seems: it is not only a question of synchronisation of the execution of the processes, but also of sharing data, both in read- and in write-mode.
Let's speak about some classical problems of concurrent data access. If two processes read the same dataset, this is obviously not a problem, and the execution is CONSISTENT. Now let one of the two processes modify the dataset: the other one will return different results depending on whether it reads the dataset before or after the writing by the first process. For example: we have two processes "A" and "B" and an integer "d". Process A increases d by 1, and process B prints it out. Writing it in a meta language, we can express it this way:

A { d->d+1 } & B { d->output }

where the "&" identifies a concurrent execution. A first possible execution is

(-) d = 5
(A) d = 6
(B) output = 6

but if process B is executed first we will obtain

(-) d = 5
(B) output = 5
(A) d = 6

You understand immediately how important it is to manage these situations correctly: the risk of INCONSISTENCY of the data is great and unacceptable. Consider that the datasets might represent your bank account, and you will never underestimate this problem.
In the preceding article we already spoke about a first form of synchronization, through the use of the waitpid(2) function, which lets a process wait for the termination of another before going on. In fact, this allows us to solve some of the conflicts around data reads and writes: once the dataset on which a process P1 will work has been defined, a process P2 which works on the same dataset, or on a subset of it, shall wait for the termination of P1 before it can proceed with its own execution.

Clearly this method represents a first solution, but it is far from the best one, because P2 has to stay idle for a time which can be very long, waiting for P1 to terminate, even when P1 is no longer working on the common data. Thus, we must increase the granularity of our control, i.e. regulate the access to individual data items or data sets. The solution to this problem is given by a set of primitives of the standard library known as SysV IPC (System V InterProcess Communication).
Every SysV IPC structure is identified by a key, which can be generated with the ftok(3) function:

key_t ftok(const char *pathname, int proj_id);

which uses the name of an existing file (pathname) and an integer. It is not assured that the key is unique, because the parameters taken from the file (i-node number and device number) can create identical combinations. A good solution is to create a little library which traces the assigned keys and avoids duplicates.
Semaphores can be used to control resource access: the value of the semaphore represents the number of processes which can access the resource; any time a process accesses the resource the value of the semaphore shall be decremented and incremented again when the resource is released. If the resource is exclusive (i.e. only one process can access it) the initial value of the semaphore will be 1.
The semaphore can also accomplish a different task, acting as a resource counter: in this case, its value represents the number of resources available (for example, the number of free memory cells).
Let's consider a practical case, in which the semaphore types will be used: imagine we have a buffer in which several processes S1,...,Sn can write but from which only a process L can read; moreover, operations cannot be accomplished at the same time (i.e. at a given time only one process is operating on the buffer). Obviously S processes can always write except when the buffer is full, while the process L can read only if the buffer is not empty. Thus, we need three semaphores: the first will manage the access to the resource, the second and the third will keep track of how many elements are in the buffer (we will see later why two semaphores are not sufficient).
Considering that the access to the buffer is exclusive the first semaphore will be a binary one (its value will be 0 or 1), while the second and the third will assume values related to the dimension of the buffer.
Let's learn how semaphores are implemented in C using SysV primitives. The function that creates a semaphore is semget(2)
int semget(key_t key, int nsems, int semflg);

where key is an IPC key, nsems is the number of semaphores we want to create, and semflg is the access control, implemented with 12 bits: the first 3 are related to creation policies and the other 9 to read and write access by user, group and other (notice the similarity to the Unix filesystem); for a complete description read the man page of ipc(5). As you can notice, SysV manages sets of semaphores instead of single ones, resulting in more compact code.
Let's create our first semaphore
#include <stdio.h>
#include <stdlib.h>
#include <linux/types.h>
#include <linux/ipc.h>
#include <linux/sem.h>

int main(void)
{
  key_t key;
  int semid;

  key = ftok("/etc/fstab", getpid());

  /* create a semaphore set with only 1 semaphore: */
  semid = semget(key, 1, 0666 | IPC_CREAT);

  return 0;
}

Going further, we have to learn how to manage and remove semaphores; the management of the semaphore is performed by the primitive semctl(2):
int semctl(int semid, int semnum, int cmd, ...)

which operates according to the action identified by cmd on the set semid and (if requested by the action) on the single semaphore semnum. We will introduce some options as we need them, but a complete list can be found on the man page. Depending on the cmd action, it may be necessary to specify another argument to the function, whose type is
union semun {
    int val;                /* value for SETVAL */
    struct semid_ds *buf;   /* buffer for IPC_STAT, IPC_SET */
    unsigned short *array;  /* array for GETALL, SETALL */
    /* Linux specific part: */
    struct seminfo *__buf;  /* buffer for IPC_INFO */
};

To set the value of a semaphore, the SETVAL directive should be used, and the value has to be specified in the union semun. Let's modify the preceding program, setting the semaphore's value to 1:
[...]
/* create a semaphore set with only 1 semaphore */
semid = semget(key, 1, 0666 | IPC_CREAT);

/* set value of semaphore number 0 to 1 */
arg.val = 1;
semctl(semid, 0, SETVAL, arg);
[...]

Then we have to release the semaphore, deallocating the structures used for its management; this task is accomplished by the IPC_RMID directive of semctl. This directive removes the semaphore and sends a message to all the processes waiting to gain access to the resource. A last modification to the program:
[...]
/* set value of semaphore number 0 to 1 */
arg.val = 1;
semctl(semid, 0, SETVAL, arg);

/* deallocate semaphore */
semctl(semid, 0, IPC_RMID);
[...]

As seen before, creating and managing a structure for controlling concurrent execution is not difficult; once we introduce error management, things will become more complex, but only from the point of view of code complexity.
The semaphore can now be used through the function semop(2)
int semop(int semid, struct sembuf *sops, unsigned nsops);

where semid is the set identifier, sops an array containing the operations to be performed, and nsops the number of these operations. Every operation is represented by a sembuf struct:
unsigned short sem_num;
short sem_op;
short sem_flg;

i.e., by the semaphore number in the set (sem_num), the operation (sem_op), and a flag setting the wait policy (sem_flg); for now let sem_flg be 0. The operations we can specify are integer numbers and follow these rules:

- if sem_op is positive, its value is added to the semaphore's value (a release of resources);
- if sem_op is zero, the process waits until the semaphore's value becomes zero;
- if sem_op is negative, the process waits until the semaphore's value is greater than or equal to the absolute value of sem_op, which is then subtracted (an acquisition of resources).
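To make these rules concrete, here is a small illustrative fragment (assuming semid identifies an existing set whose semaphore number 0 is used as a binary lock):

struct sembuf lock   = {0, -1, 0};  /* wait until the value is > 0, then decrement */
struct sembuf unlock = {0,  1, 0};  /* increment the value, waking a waiting process */

semop(semid, &lock, 1);
/* ... exclusive access to the resource ... */
semop(semid, &unlock, 1);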
Reads and writes of the buffer are only virtual: this happens because, as seen in the preceding article, every process has its own memory space and cannot access that of another process. This makes truly shared management of the buffer by the five processes impossible, because each one will see its own copy of the buffer. That will change when we speak about shared memory, but let's learn things step by step.
Why do we need 3 semaphores? The first (number 0) acts as a buffer access lock and has a maximum value of 1, while the other two manage the overflow and underflow conditions. A single semaphore cannot manage both situations, because semop acts one-way.
Let's clarify the matter: start with one semaphore (called O), whose value represents the number of empty spaces in the buffer. Every time an S process puts something in the buffer, it decreases the value of the semaphore by one, until the value reaches zero, i.e. the buffer is full. This semaphore cannot manage the underflow condition: the R process, in fact, can increase its value without limits. We thus need a second semaphore (called U), whose value represents the number of elements in the buffer. Every time a W process puts an element in the buffer, it will also increase the value of the U semaphore and decrease that of the O semaphore. On the contrary, the R process will decrease the value of the U semaphore and increase that of the O semaphore.

The overflow condition is thus identified by the impossibility of decreasing the O semaphore, and the underflow condition by the impossibility of decreasing the U semaphore.
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <unistd.h>
#include <linux/types.h>
#include <linux/ipc.h>
#include <linux/sem.h>

int main(int argc, char *argv[])
{
  /* IPC */
  pid_t pid;
  key_t key;
  int semid;
  union semun arg;
  struct sembuf lock_res = {0, -1, 0};            /* lock the buffer */
  struct sembuf rel_res  = {0,  1, 0};            /* release the buffer */
  struct sembuf push[2]  = {{1, -1, IPC_NOWAIT},  /* one free slot less... */
                            {2,  1, IPC_NOWAIT}}; /* ...one element more */
  struct sembuf pop[2]   = {{1,  1, IPC_NOWAIT},  /* one free slot more... */
                            {2, -1, IPC_NOWAIT}}; /* ...one element less */

  /* Other */
  int i;

  if (argc < 2) {
    printf("Usage: bufdemo [dimension]\n");
    exit(0);
  }

  /* Semaphores */
  key = ftok("/etc/fstab", getpid());

  /* Create a semaphore set with 3 semaphores */
  semid = semget(key, 3, 0666 | IPC_CREAT);

  /* Initialize semaphore #0 (lock) to 1, #1 (free slots) to the
     given buffer length, #2 (stored elements) to 0 */
  arg.val = 1;
  semctl(semid, 0, SETVAL, arg);
  arg.val = atoi(argv[1]);
  semctl(semid, 1, SETVAL, arg);
  arg.val = 0;
  semctl(semid, 2, SETVAL, arg);

  /* Fork the writer processes */
  for (i = 0; i < 5; i++) {
    pid = fork();
    if (!pid) {
      for (i = 0; i < 20; i++) {
        sleep(rand() % 6);
        /* lock the buffer, try to push an element, release */
        semop(semid, &lock_res, 1);
        if (semop(semid, push, 2) != -1)
          printf("---> process %d wrote an element\n", getpid());
        else
          printf("---> process %d: BUFFER FULL\n", getpid());
        semop(semid, &rel_res, 1);
      }
      exit(0);
    }
  }

  /* The parent is the reader */
  for (i = 0; i < 100; i++) {
    sleep(rand() % 3);
    /* lock the buffer, try to pop an element, release */
    semop(semid, &lock_res, 1);
    if (semop(semid, pop, 2) != -1)
      printf("<--- process %d read an element\n", getpid());
    else
      printf("<--- process %d: BUFFER EMPTY\n", getpid());
    semop(semid, &rel_res, 1);
  }

  /* Destroy semaphores */
  semctl(semid, 0, IPC_RMID);

  return 0;
}

Let's comment on the more interesting parts of the code:

struct sembuf lock_res = {0, -1, 0};
struct sembuf rel_res  = {0,  1, 0};
struct sembuf push[2]  = {{1, -1, IPC_NOWAIT}, {2,  1, IPC_NOWAIT}};
struct sembuf pop[2]   = {{1,  1, IPC_NOWAIT}, {2, -1, IPC_NOWAIT}};

These 4 lines are the actions we can perform on our semaphore set: the first two are single actions, while the others are double. The first action, lock_res, tries to lock the resource: it decreases the value of the first semaphore (number 0) by 1 (if the value is not zero), and the policy adopted if the resource is busy is none (i.e., the process waits). The rel_res action is identical to lock_res, but the resource is released (the value is positive).
The push and pop actions are a bit special. They are arrays of two actions, the first on semaphore number 1 and the second on semaphore number 2; while the first is incremented the second is decremented and vice versa, but the policy is no longer to wait: IPC_NOWAIT forces the process to continue execution if the resource is busy.
Here we initialize the values of the semaphores: the first to 1, because it controls access to an exclusive resource; the second to the length of the buffer (given on the command line); and the third to 0, as said before about over- and underflow.
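A minimal sketch of that initialization, reusing the union semun variable from the listing (the original lines were elided):

arg.val = 1;                /* semaphore 0: exclusive access to the buffer */
semctl(semid, 0, SETVAL, arg);

arg.val = atoi(argv[1]);    /* semaphore 1 (O): free slots in the buffer   */
semctl(semid, 1, SETVAL, arg);

arg.val = 0;                /* semaphore 2 (U): elements in the buffer     */
semctl(semid, 2, SETVAL, arg);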
The W process tries to lock the resource through the lock_res action; once this is done, it performs a push and reports it on the standard output; if the operation cannot be performed, it prints that the buffer is full. After that, it releases the resource.
The R process acts more or less like the W process: it locks the resource, performs a pop and releases the resource.
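Assuming the action definitions sketched earlier, one W iteration would look something like this (error handling omitted; the R loop is symmetric, with pop in place of push):

semop(semid, &lock_res, 1);        /* acquire exclusive access            */
if (semop(semid, push, 2) == -1)   /* IPC_NOWAIT: fails if buffer is full */
  printf("%d - buffer full\n", getpid());
else
  printf("%d - pushed an element\n", getpid());
semop(semid, &rel_res, 1);         /* release the resource                */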
In the next article we will speak about message queues, another structure for InterProcess Communication and synchronisation. As always, if you write something simple using what you learned from this article, send it to me with your name and e-mail address; I will be happy to read it. Good work!
http://www.redhat.com/mirrors/LDP/linuxfocus/English/January2003/article281.shtml
#include <vtkOBJExporter.h>
Inheritance diagram for vtkOBJExporter:
vtkOBJExporter is a concrete subclass of vtkExporter that writes wavefront .OBJ files in ASCII form. It also writes out a .mtl file that contains the material properties. The filenames are derived by appending the .obj and .mtl suffix onto the user-specified FilePrefix.
Definition at line 37 of file vtkOBJExporter.h.
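A minimal usage sketch (assuming a vtkRenderWindow pointer named renWin has already been set up; SetRenderWindow and Write come from the vtkExporter base class):

#include <vtkOBJExporter.h>
#include <vtkRenderWindow.h>

// Write renWin's scene to scene.obj plus a scene.mtl material file.
vtkOBJExporter *exporter = vtkOBJExporter::New();
exporter->SetRenderWindow(renWin);
exporter->SetFilePrefix("scene");
exporter->Write();
exporter->Delete();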
http://www.vtk.org/doc/release/5.0/html/a01766.html#w0
The QIconEnginePluginV2 class provides an abstract base for custom QIconEngineV2 plugins. More...
#include <QIconEnginePluginV2>
This class was introduced in Qt 4.3.
The QIconEnginePluginV2 class provides an abstract base for custom QIconEngineV2 plugins.
Icon engine plugins produce QIconEngines for QIcons; an icon engine is used to render the icon. The keys that identify the engines the plugin can create are suffixes of icon filenames; they are returned by keys(). The create() function receives the icon filename to return an engine for; it should return 0 if it cannot produce an engine for the file.
Writing an icon engine plugin is achieved by inheriting QIconEnginePluginV2, reimplementing keys() and create(), and adding the Q_EXPORT_PLUGIN2() macro.
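A minimal sketch of that pattern (the class names, the "mock" key, and the do-nothing engine are all hypothetical):

#include <QIcon>
#include <QIconEnginePluginV2>
#include <QIconEngineV2>
#include <QPainter>
#include <QStringList>

// Hypothetical engine: a real one would render the file's contents.
class MyIconEngine : public QIconEngineV2
{
public:
    MyIconEngine(const QString &file) : m_file(file) {}
    void paint(QPainter *painter, const QRect &rect,
               QIcon::Mode mode, QIcon::State state)
    {
        // Render m_file into rect here.
        Q_UNUSED(painter); Q_UNUSED(rect); Q_UNUSED(mode); Q_UNUSED(state);
    }
private:
    QString m_file;
};

class MyIconEnginePlugin : public QIconEnginePluginV2
{
    Q_OBJECT
public:
    // The filename suffixes this plugin can handle (case insensitive).
    QStringList keys() const
    { return QStringList() << "mock"; }

    // Return an engine for the file, or 0 if we cannot handle it.
    QIconEngineV2 *create(const QString &filename = QString())
    {
        if (filename.endsWith(".mock"))
            return new MyIconEngine(filename);
        return 0;
    }
};

Q_EXPORT_PLUGIN2(myiconengineplugin, MyIconEnginePlugin)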
You should ensure that you do not duplicate keys. Qt will query the plugins for icon engines in the order in which the plugins are found during plugin search (see the plugins overview document).
See also How to Create Qt Plugins.
Constructs an icon engine plugin with the given parent. This is invoked automatically by the Q_EXPORT_PLUGIN2() macro.
Destroys the icon engine plugin.
You never have to call this explicitly. Qt destroys a plugin automatically when it is no longer used.
Creates and returns a QIconEngine object for the icon with the given filename.
Returns a list of icon engine keys that this plugin supports. The keys correspond to the suffix of the file or resource name used when the plugin was created. Keys are case insensitive.
http://doc.trolltech.com/main-snapshot/qiconenginepluginv2.html
ulimit - get and set process limits
Synopsis
Description
Return Value
Errors
Examples
Application Usage
Rationale
Future Directions
See Also
#include <ulimit.h>
long ulimit(int cmd, ...);
The ulimit() function shall control process limits. The process limits that can be controlled by this function include the maximum size of a single file that can be written (this is equivalent to using setrlimit() with RLIMIT_FSIZE). The cmd values, defined in <ulimit.h>, include:

UL_GETFSIZE
Return the file size limit of the process, in units of 512-byte blocks.

UL_SETFSIZE
Set the file size limit for output operations of the process to the value of the second argument, taken as a long, and return the new limit.

The ulimit() function shall not change the setting of errno if successful.
As all return values are permissible in a successful situation, an application wishing to check for error situations should set errno to 0, then call ulimit(), and, if it returns -1, check to see if errno is non-zero.
Upon successful completion, ulimit() shall return the value of the requested limit. Otherwise, -1 shall be returned and errno set to indicate the error.
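A sketch of that errno-clearing pattern (UL_GETFSIZE reports the limit in units of 512-byte blocks):

#include <ulimit.h>
#include <errno.h>
#include <stdio.h>

int main(void)
{
    long limit;

    errno = 0;                      /* -1 is a valid limit, so clear errno */
    limit = ulimit(UL_GETFSIZE);
    if (limit == -1 && errno != 0)
        perror("ulimit");
    else
        printf("file size limit: %ld blocks of 512 bytes\n", limit);
    return 0;
}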
The ulimit() function shall fail and the limit shall be unchanged if:

[EINVAL]
The cmd argument is not valid.

[EPERM]
A process not having appropriate privileges attempts to increase its file size limit.

The following sections are informative.
None.
None.
None.
None.
getrlimit(), setrlimit(), write(), the Base Definitions volume of IEEE Std 1003.1-2001, <ulimit.h>.
http://www.squarebox.co.uk/cgi-squarebox/manServer/usr/share/man/man3p/ulimit.3p
/***************************************************
 Analyzing Census data: Town/City/CDP Count

 Write a program that reads in a datafile from:
 and prints out the number of towns, cities, and CDP's. The user
 should specify the name of the file to be analyzed. This way, if
 you have data files from several states saved, you can analyze
 all of them with the same program.

 Note: If s is an object of type string that contains the name of
 the file you want to open you can't say:

   ifstream fin(s);

 which would seem pretty likely. Instead, you have to say:

   ifstream fin(s.c_str());

 Why and what that means we'll discuss later. For now just keep in
 mind that this is how it works.
***************************************************/
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main()
{
  // Get input file name
  string filename;
  cout << "Enter file name: ";
  cin >> filename;

  // Open input file
  ifstream fin(filename.c_str());
  if (fin)
  {
    /****** INPUT FILE EXISTS ****************/
    // Initialize counts
    int count_t, count_c, count_cdp;
    count_t = count_c = count_cdp = 0;

    // Read in string until file is finished
    string s;
    while(fin >> s)
    {
      if (s == "town") count_t++;
      else if (s == "city") count_c++;
      else if (s == "CDP") count_cdp++;
    }

    // Write results
    cout << "There are " << count_t << " towns" << endl;
    cout << "There are " << count_c << " cities" << endl;
    cout << "There are " << count_cdp << " CDP's" << endl;
  }
  else /****** INPUT FILE DOES NOT EXIST ********/
    cout << "File not found!" << endl;

  return 0;
}
http://www.usna.edu/Users/cs/wcbrown/courses/F04SI204/classes/L10/TE3.html
Suggestions welcome!
There is online documentation (within ANU only) for JOGL and GLUT.
Sample questions are available, and the ANU has copies of exam papers from previous years.
Simple 3D programs like the cube exercise may show multiple images in the window at once. This happens because the nVidia graphics cards are really fast and screen updates are not being limited to one per LCD/CRT frame. You can synchronise drawing by setting an environment variable from your terminal.
$ setenv __GL_SYNC_TO_VBLANK 1
You need to add JOGL to your classpath with
$ setenv CLASSPATH ".:/usr/local/java/jre/lib/jogl.jar"
The R105 server is ephebe.anu.edu.au. You can ssh or scp to ephebe from inside or outside the ANU.
Here is the Java code:

import java.util.*;

// Initialise
static long Start;
Date now = new Date();
Start = now.getTime();
...
// Current time
Date now = new Date();
float seconds;
seconds = (float)(now.getTime() - Start) / 1000.0f;

and the C code:
#include <sys/time.h>
...
/* Initialise */
static long Start;
struct timeval now;
gettimeofday(&now, NULL);
Start = now.tv_sec;
...
/* Current time */
struct timeval now;
float seconds;
gettimeofday(&now, NULL);
seconds = (float)(now.tv_sec - Start) + (float)now.tv_usec / 1000000.0;
Geometry drawings
OpenGL Lighting
Virtual Reality System Development
http://cs.anu.edu.au/student/comp4610.2007/2006/hints/index.html
hw_dataflash.h File Reference
Dataflash HW control routines (interface). More...
#include <cfg/compiler.h>
Go to the source code of this file.
Detailed Description
Dataflash HW control routines (interface).
Definition in file hw_dataflash.h.
Function Documentation
Data flash init function.
This function initializes everything needed to drive a dataflash memory. Generally it needs to init the pins that drive the CS line and the reset line.
Definition at line 56 of file hw_dataflash.c.
Chip Select drive.
This function enables or disables a CS line. You must implement this function in compliance with the dataflash memory datasheet, allowing the driver to enable the memory when the enable flag is true, and disable it when it is false.
Definition at line 81 of file hw_dataflash.c.
Reset data flash memory.
This function sends a reset signal to the dataflash memory. You must implement it in compliance with the dataflash memory datasheet, allowing the driver to assert the reset pin when the enable flag is true, and deassert it when it is false.
Definition at line 108 of file hw_dataflash.c.
http://doc.bertos.org/2.7/hw__dataflash_8h.html
First Experiences with Scalding.
Options for Cluster Processing #
Scalding is used by Big Companies #
Another reason why I’m particularly interested in Scalding is that it is being used in several large companies, e.g. Etsy and Twitter. Twitter runs most of its backend batch tasks using Scalding.
Getting Scalding #
You can get scalding by cloning and building
On the twitter/scalding github page(s) the tutorial uses scald.rb to trigger jobs. Please don’t use it: the code is hideous and it will take you forever to make a simple change. I use a different project instead.
Simple Use Case #
We had an issue where one of the HDFS folders of an external Hive JSON table contained bad / incomplete JSON. Any Hive query on the table would error because of the bad JSON.
Note: This code uses the FieldsAPI which is not typed. It is recommended to use the Typed API
import com.twitter.scalding._

class FindBadJson(args: Args) extends Job(args) {
  TextLine(args("input"))
    .read
    .filter('line) { line: String => line.matches(".*[^}]$") }
    .write(Tsv(args("output")))
}
Then I ran the job from the scalding tutorial directory.
On digging further into the resource manager UI, I found this:
Diagnostics: MAP capability required is more than the supported max container capability in the cluster. Killing the Job. mapResourceReqt: 2048 maxContainerCapability:1222 Job received Kill while in RUNNING state..
ERROR hadoop.HadoopStepStats: unable to get remote counters, no cached values, throwing exception No enum constant org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_MAPS
The MR job keeps chugging and succeeds. Aah..finally some data!!!
But Wait! Since, we didn’t specify a reducer, we have just as many files as the mapper read. Bad MR…. Bad.. The output files are named like part-00001, part-00002, etc. Too much to go through. Time to declare a reducer:
import com.twitter.scalding._

class FindBadJson(args: Args) extends Job(args) {
  TextLine(args("input"))
    .read
    .filter('line) { line: String => line.matches(".*[^}]$") }
    .groupAll { _.size }
    .write(Tsv(args("output")))
}
And Voila! All offenders in one file!
Conclusion #
https://etl.svbtle.com/experiences-with-scalding
Why is foreach iterating with a const reference?
I try to do the following:
QList<QString> a; foreach(QString& s, a) { s += "s"; }
Which looks like it should be legitimate but I end up with an error complaining that it cannot convert from 'const QString' to 'QString &'.
Why is the Qt foreach iterating with a const reference?
Answers
As explained in the Qt Generic Containers documentation, foreach takes a copy of the container before iterating over it.
It makes a copy because you might want to remove an item from the list or add items while you are looping, for example. The downside is that your use case will not work. You will have to iterate over the list instead:
for (QList<QString>::iterator i = a.begin(); i != a.end(); ++i) { (*i) += "s"; }
A little more typing, but not too much more.
or you can use
QList<QString> a; BOOST_FOREACH(QString& s, a) { s += "s"; }
I believe Qt's foreach takes a temporary copy of the original collection before iterating over it; therefore it wouldn't make any sense to have a non-const reference, as modifying the temporary copy would have no effect.
Maybe for your case:
namespace bl = boost::lambda; std::for_each(a.begin(),a.end(),bl::_1 += "s");
With C++11, Qt now encourages the standard range-based for syntax instead of Qt foreach:
QList<QString> a; for(auto& s : a) { s += "s"; }
http://www.brokencontrollers.com/faq/45860226.shtml
Support for interactive validation of form elements, HTML5 specs section 4.10.15.2, is necessary.
Some tips from other bugs this depends on:
1.
Form control elements can be in a "no-validate state" that is controlled by the "novalidate" and "formNoValidate" attributes, plus some more conditions. This snippet could be of help:
bool HTMLFormControlElement::isInNoValidateState() const
{
return (isSuccessfulSubmitButton() && formNoValidate()) || m_form->novalidate();
}
2.
HTMLFormElement::checkValidity() needs to be adapted to deal with "unhandled invalid controls" (as per TODO comment). Actually it just iterates over form elements calling checkValidity() (that fires the invalid event, as per specs), but it must also return a list of invalid form controls that haven't been handled through the invalid event.
The following snippet might be of some help (was part of the proposed patch for bug 27452):
bool checkValidity(Vector<HTMLFormControlElement*>* unhandledInvalidControls = 0);
bool HTMLFormElement::checkValidity(Vector<HTMLFormControlElement*>* unhandledInvalidControls)
{
Vector<HTMLFormControlElement*> invalidControls;
for (unsigned i = 0; i < formElements.size(); ++i) {
HTMLFormControlElement* control = formElements[i];
if (control->willValidate() && !control->validity()->valid())
invalidControls.append(control);
}
if (invalidControls.isEmpty())
return true;
for (unsigned n = 0; n < invalidControls.size(); ++n) {
HTMLFormControlElement* invalidControl = invalidControls[n];
bool eventCanceled = invalidControl->dispatchEvent(eventNames().invalidEvent, false, true);
if (eventCanceled && unhandledInvalidControls)
unhandledInvalidControls->append(invalidControl);
}
return false;
}
I have started implementation.
(In reply to comment #2)
> I have started implementation.
Good! I think you'll be needing support for the validationMessage for interactive validation step 3 (4.10.15.2) [bug 27959], I'm gonna speed it up.
> Good! I think you'll be needing support for the validationMessage for
> interactive validation step 3 (4.10.15.2) [bug 27959], I'm gonna speed it up.
That's right! The implementation requires validationMessage().
Created attachment 38918 [details]
Demo patch
A workable demo patch. This is not ready to be reviewed. It has no tests and it depends on bug#27959 and bug#28868.
I'll use Balloon Tooltip for Windows to show validation messages.
I don't know if Mac OS has a corresponding control.
Created attachment 40698 [details]
Incomplete patch (rev.0)
I'd like to ask comments on the patch though it is incomplete.
The patch will add the following behavior:
- Show/hide a validation message when an invalid form control gets/loses the focus
The way messages are shown depends on the platform.
- Prevent a form submission if the form has invalid controls
TODO:
- Add tests
- Build file changes for other platforms
- Provide ChangeLog.
Comment on attachment 40698 [details]
Incomplete patch (rev.0)
As you say, the patch is incomplete. No ChangeLog. No tests. If you'd like feedback on this patch, I recommend asking the relevant people on IRC or email. Having this patch in the review queue just makes it harder to review complete patches.
I split this to 3 patches. Bug#31716, Bug#31718 and this will have patches.
I think that form validation should be disabled until:
- a solution is found for the compatibility problem with "required" attribute name;
- UI for correcting problems is implemented (as tracked by bug 31718/bug 40747). Currently the user experience is just horrible.
I agree with Alexey. In 2008, I wrote an app to the HTML5 spec, then backported the validation API to JavaScript to support then-existing browsers. I've seen the app break a couple of times due to changes in WebKit (which Tamura is always eager to fix).
If you go to without the WebKit validator, the page scrolls smoothly to take you to an invalid element on form submission. Since the validation constraints have been added, the page abruptly jumps to the first invalid <input/>. I'm sure there are users who don't understand what's going on when this happens. For that matter, I wasn't even sure what was going on when I was debugging #40591.
*** Bug 80419 has been marked as a duplicate of this bug. ***
*** Bug 136595 has been marked as a duplicate of this bug. ***
*** Bug 142817 has been marked as a duplicate of this bug. ***
This ticket was created as an enhancement, but the last few tickets which were marked as duplicate to this are more critical. It seems this problem now affects normal validation and not only the JavaScript API.
IMHO the priority/importance of this ticket should be increased.
This always affected “normal validation”, not just JavaScript API, and that validation is a new feature, and one that is not implemented yet in WebKit.
I understand that you would like to see the feature!
*** Bug 158331 has been marked as a duplicate of this bug. ***
Is anyone currently working on this? It was submitted 7 years ago, and Safari is now the only browser whose current version will submit a form with invalid constraints.
<rdar://problem/28636170>
*** This bug has been marked as a duplicate of bug 164382 ***
(In reply to comment #18)
> Is anyone currently working on this? It was submitted 7 years ago, and
> Safari is now the only browser whose current version will submit a form with
> invalid constraints.
Safari Technology Preview 19 has this enabled:
https://bugs.webkit.org/show_bug.cgi?format=multiple&id=28649
Top 10 Tips for Making the Spark + Alluxio Stack Blazing Fast
Check out these tips for blazing fast performance when running Spark on Alluxio.
The Apache Spark + Alluxio stack is getting quite popular particularly for the unification of data access across S3 and HDFS. In addition, compute and storage are increasingly being separated causing larger latencies for queries. Alluxio is leveraged as compute-side virtual storage to improve performance. But to get the best performance, like any technology stack, you need to follow the best practices. This article provides the top 10 tips for performance tuning for real-world workloads when running Spark on Alluxio with data locality, giving the most bang for the buck.
A Note on Data Locality
High data locality can greatly improve the performance of Spark jobs. When data locality is achieved, Spark tasks can read in-Alluxio data from local Alluxio workers at memory speed (when ramdisk is configured) instead of transferring the data over the network. The first few tips are related to locality.
Check Data Locality Achieved
Alluxio provides the best performance when Spark tasks are running at Spark workers co-located with Alluxio workers and performing short-circuit reads and writes. There are multiple ways to check if I/O requests are served by short-circuit reads/writes:
- Monitor the metrics at the Alluxio metrics UI page for “Short-circuit reads” and “From Remote Instance” while a Spark job is running. Alternatively, monitor the metrics cluster.BytesReadAlluxioThroughput and cluster.BytesReadLocalThroughput. If the local throughput is zero or significantly lower than the total throughput, this job is likely not interfacing with a local Alluxio worker.

- Leverage tools like dstat to detect network traffic, which will allow us to see whether short-circuit reads are happening and what the network traffic throughput looks like. YARN users can also check the logs in /var/log/hadoop-yarn/userlogs to find messages like the one below, and see if there are no remote reads happening, or all short-circuit reads:

INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms

You can enable the collection of these logs by running your Spark job with the appropriate YARN properties, like "--master yarn --num-executors 4 --executor-cores 4".
In case Spark jobs have a small fraction of short-circuiting reads or writes served by Alluxio, read the following few tips to improve the data locality.
Ensure Spark Executor Locality
There are potentially two different levels of data locality for Spark to achieve. The first level is executor locality: when Spark is deployed on resource management frameworks (like Mesos and YARN), Spark executors are assigned by these resource managers to nodes that have Alluxio workers running. Without executor locality, it is impossible for Alluxio to serve Spark jobs with local data. Note that we typically see issues in executor locality when Spark deployment is done by resource management frameworks rather than running Spark Standalone.
While recent versions of Spark support YARN and Mesos to schedule executors while considering data locality, there are other ways to force executors to start on the desired nodes. One simple strategy is to start an executor on every node, so there will always be an executor on at least the desired nodes.
In practice, it may not be applicable to deploy an Alluxio worker on every node in a production environment due to resource constraints. To deploy Alluxio workers together with computation nodes, one can leverage features from resource management frameworks like YARN node labels. By marking NodeManagers with a label "alluxio" (the name of this label is not important), meaning that the machine contains an Alluxio worker, the user can submit their job to the machines carrying that label and launch Spark jobs with executors collocated with Alluxio workers.
Ensure Spark Task Locality
The next level of data locality is task locality, which means that once the executors are started, Spark can schedule tasks to the executors with locality respected (i.e., scheduling tasks on the executors that are local to the data served from Alluxio). To achieve this goal, Spark task scheduler first gathers all the data locations from Alluxio as a list of hostnames of Alluxio workers and then tries to match the executor hostnames for the scheduling. You can find the Alluxio worker hostnames from the Alluxio master web UI, and you can find the Spark executor hostnames from the Spark driver web UI.
Note that, sometimes, due to various networking environments and configurations, the hostnames of the Alluxio workers may not match the hostnames of the executors, even though they are running on the same machines. In this case, the Spark task scheduler will be confused and not able to ensure data locality. Therefore, it is important for the Alluxio worker hostnames to share the same "hostname namespace" as the executor hostnames. One common issue we see is that one uses IP addresses and the other uses hostnames. Once this happens, users can still manually set the Alluxio client property alluxio.user.hostname in Spark and set the same value for alluxio.worker.hostname in the Alluxio worker's site properties; alternatively, users can read the JIRA ticket SPARK-10149 for solutions from the Spark community.
Prioritize Locality for Spark Scheduler
With respect to locality, there are a few Spark client-side configs to tune (see the fragment below):

- spark.locality.wait: how long to wait to launch a data-local task before trying to launch on a less-local node. The same wait is used to step through multiple locality levels (process-local, node-local, rack-local and then any).

- We can also set the wait for a specific locality level using spark.locality.wait.node, which customizes the locality wait for each node.
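For example, a spark-defaults.conf fragment along these lines (the values are illustrative, not recommendations):

spark.locality.wait 3s
spark.locality.wait.node 6s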
Load Balance
Load balancing is also very important to ensure the execution of Spark jobs are distributed uniformly across different nodes available. The next few tips are related to preventing imbalanced load or task schedule due to skew input data distribution in Alluxio.
Use DeterministicHashPolicy to Cold-Read Data From UFS Via Alluxio
It is quite common that multiple tasks of a Spark job read the same input file. When this file is not loaded in Alluxio, by default the Alluxio worker local to each of these tasks will read the same file from the UFS. This can lead to the following consequences:
Multiple Alluxio workers are competing for the Alluxio-UFS connection on the same data. If the connection from Alluxio to UFS is slow, each worker can read slowly due to unnecessary competition.
The same data can be replicated multiple times across Alluxio, evicting other useful data for the subsequent queries.
One way to solve this problem is to set alluxio.user.ufs.block.read.location.policy to DeterministicHashPolicy to coordinate workers to read data from UFS without unnecessary competition — e.g., edit spark/conf/spark-defaults.conf to ensure that at most four (decided by alluxio.user.ufs.block.read.location.policy.deterministic.hash.shards) random Alluxio workers read a given block.
spark.driver.extraJavaOptions=-Dalluxio.user.ufs.block.read.location.policy=alluxio.client.block.policy.DeterministicHashPolicy
spark.executor.extraJavaOptions=-Dalluxio.user.ufs.block.read.location.policy.deterministic.hash.shards=4
Note that a different set of four random workers will be selected for different blocks.
Use Smaller Alluxio Block Size for Higher Parallelism
When the Alluxio block size is large (512MB by default) relative to the file size, there can be only a few Alluxio workers serving the input files. During the “file scan” phase of Spark, tasks may be assigned to only a small set of servers in order to be NODE_LOCAL to the input data, leading to imbalanced load. In this case, caching the input files in Alluxio with smaller blocks (e.g. setting alluxio.user.block.size.bytes.default to 128MB or smaller) allows for better parallelism across servers. Note that customizing the Alluxio block size is not applicable when the UFS is HDFS, where the Alluxio block size is forced to be the HDFS block size.
Tune the Number of Executors and Tasks Per Executor
Running too many tasks in one executor in parallel to read from Alluxio may create resource contention in connections, network bandwidth, and etc. Based on the number of Alluxio workers, one can tune the number of executors and the number of tasks per executor (configurable in Spark) to better distribute the work to more nodes (thus higher bandwidth to Alluxio) and reduce the overhead in resource contention.
Preload Data Into Alluxio Uniformly
Though Alluxio provides transparent and async caching, the first cold-read may still have a performance overhead. To avoid this overhead, users can pre-load the data into Alluxio storage space using CLI:
$ bin/alluxio fs load /path/to/load \
    -Dalluxio.user.ufs.block.read.location.policy=\
    alluxio.client.file.policy.MostAvailableFirstPolicy
Note that this load command simply reads the target file from the under store on this single server to promote data to Alluxio, so the speed of writing to Alluxio is bound by that single server. In Alluxio 2.0, we plan to provide a distributed implementation of load to scale the throughput.
Capacity Management
At a very high level, Alluxio provides a caching service for hot input data. Allocating and managing the caching capacity correctly is also important to achieve good performance.
Extend Alluxio Storage Capacity With SSD or HDD
Alluxio workers can also manage local SSD or hard disk resources as a complement to RAM. To reduce evictions, we recommend putting multiple storage directories in the same tier rather than using a single tier; see the documentation for more details. One caveat: for users running Alluxio workers on AWS EC2 instances, EBS storage mounted as a local disk goes over the network and can be slow.
Prevent Cache Thrashing by Disabling “Passive Caching”
The Alluxio client-side property alluxio.user.file.passive.cache.enabled controls whether to cache additional copies of data in Alluxio. This property is enabled by default (alluxio.user.file.passive.cache.enabled=true), so an Alluxio worker can cache another copy of data already cached on other workers upon client requests. When this property is false, no additional copy is made of any data already in Alluxio.
Note that when this property is enabled, it is possible that the same data blocks are available across multiple workers, reducing the amount of available storage capacity for unique data. Depending on the size of the working set and Alluxio capacity, disabling passive caching can help workloads that have no concept of locality and whose dataset is large relative to Alluxio capacity. Alluxio 2.0 will support fine-grained control on data replications, e.g., setting the minimal and maximal number of copies of a given file in Alluxio.
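Following the extraJavaOptions pattern shown earlier, disabling passive caching from the Spark side might look like this sketch (illustrative, not a blanket recommendation):

spark.driver.extraJavaOptions=-Dalluxio.user.file.passive.cache.enabled=false
spark.executor.extraJavaOptions=-Dalluxio.user.file.passive.cache.enabled=false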
If you have more suggestions on performance tuning Spark on Alluxio, you are welcome to share them on our mailing list.
https://dzone.com/articles/top-10-tips-for-making-the-spark-alluxio-stack-bla?fromrel=true
/jɪəri/ YEER-ee
“Jiri integrates repositories intelligently”
Jiri is a tool for multi-repo development. It supports:
Jiri has an extensible plugin model, making it easy to create new sub-commands.
Jiri is open-source.
We have prebuilts for linux and darwin x86_64 systems. In order to fetch the latest jiri source code and build jiri manually, the latest version of Go should be installed. After installing Go, you can fetch the latest jiri source code by using the command:
git clone
To build (or rebuild) jiri, simply type the following commands:
cd jiri
go install ./cmd/jiri
The binary will be installed to $HOME/go/bin/jiri (or $GOPATH/bin/jiri, if you set GOPATH) and can be copied to any directory in your PATH, as long as it is writable (to support jiri bootstrapping and self-updates).
Jiri organizes a set of GIT repositories on your local filesystem according to a manifest. These repositories are referred to as “projects”, and are all contained within a single directory called the “jiri root”.
Jiri also supports CIPD “packages”, to download potentially large read-only files, like toolchain binaries or test data, into the jiri root.
The manifest file specifies the relative location of each project or package within the jiri root, and also includes other metadata, such as its remote url, the remote branch or revision it should track, and more.
The jiri update command syncs the master branch of all local projects to the revision and remote branch specified in the manifest for each project. Jiri will create the project locally if it does not exist, and if run with the -gc flag, jiri will “garbage collect” any projects that are not listed in the manifest by deleting them locally.
The command will also download, update or remove CIPD packages according to manifest changes, if necessary.
The .jiri_manifest file in the jiri root describes which projects jiri should sync. Typically the .jiri_manifest file will import other manifests, but it can also contain a list of projects.
For example, here is a simple .jiri_manifest with just two projects, “foo” and “bar”, which are hosted on github and bitbucket respectively.
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <projects>
    <project name="foo-project" remote="" path="foo"/>
    <project name="bar" remote="" path="bar"/>
  </projects>
</manifest>
When you run jiri update for the first time, the “foo” and “bar” repos will be cloned into foo and bar respectively, and the repos will be put on a detached HEAD. Running jiri update again will update all the remote refs and rebase your current branch onto its upstream branch.
Note that the project paths do not need to be immediate children of the jiri root. We could have decided to set the path attribute for the “bar” project to “third_party/bar”, or even nest “bar” inside the “foo” project by setting the path to “foo/bar” (assuming no files in the foo repo conflict with bar).
Because manifest files also need to be kept in sync between various team members, it often makes sense to keep your team's manifests in a version controlled repository.
Jiri makes it easy to “import” a remote manifest from your local .jiri_manifest file with the jiri import command. For example, running the following command will create a .jiri_manifest file (or append to an existing one) with an import tag that imports the jiri manifest from the repo.
jiri import -name jiri manifest
The next time you run jiri update, jiri will sync all projects listed in the jiri manifest.
This section explains how to get started with jiri.
First we “bootstrap” jiri so that it can sync and build itself.
Then we create and import a new manifest, which specifies how jiri should manage your projects.
You can get jiri up-and-running in no time with the help of the bootstrap script.
First, pick a jiri root directory. All projects will be synced to subdirectories of the root.
export MY_ROOT=$HOME/myroot
Execute the jiri_bootstrap script, which will fetch and build the jiri tool, and initialize the root directory.
curl -s | base64 --decode | bash -s "$MY_ROOT"
The jiri command line tool will be installed in $MY_ROOT/.jiri_root/bin/jiri, so add that to your PATH.
export PATH="$MY_ROOT"/.jiri_root/bin:$PATH
Next, use the jiri import command to import the “jiri” manifest from the Jiri repo. This manifest includes Jiri's repository.
You can see the jiri manifest here. For more information on manifests, read the manifest docs.
cd "$MY_ROOT" jiri import -name jiri manifest
You should now have a file in the root directory called .jiri_manifest, which will contain a single import.
Finally, run jiri update, which will sync all local projects to the revisions listed in the manifest (which in this case will be HEAD).
jiri update
You should now see the imported project in $MY_ROOT/go/src/fuchsia.googlesource.com/jiri.
Running jiri update again will sync the local repos to the remotes, and update the jiri tool.
Now that jiri is able to sync and build itself, we must tell it how to manage your projects.
In order for jiri to manage a set of projects, those projects must be listed in a manifest, and that manifest must be hosted in a git repo.
If you already have a manifest hosted in a git repo, you can import that manifest the same way we imported the “jiri” manifest.
For example, if your manifest is called “my_manifest” and is in a repo hosted at “”, then you can import that manifest as follows.
jiri import my_manifest
The rest of this section walks through how to create a manifest from scratch, host it from a local git repo, and get jiri to manage it.
Suppose that the project you want jiri to manage is the “Hello-World” repo located at.
First we‘ll create a new git repo to host the manifest we’ll be writing.
mkdir -p /tmp/my_manifest_repo
cd /tmp/my_manifest_repo
git init
Next we'll create a manifest and commit it to the manifest repo.
The manifest file will include the Hello-World repo as well as the manifest repo itself.
cat <<EOF > my_manifest
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <projects>
    <project name="Hello-World" remote="" path="helloworld"/>
    <project name="my_manifest_repo" remote="/tmp/my_manifest_repo" path="my_manifest_repo"/>
  </projects>
</manifest>
EOF
git add my_manifest
git commit -m "Add my_manifest."
This manifest contains a single project with the name “Hello-World” and the remote of the repo. The path attribute tells jiri to sync this repo inside the helloworld directory.
Normally we would want to push this repo to some remote to make it accessible to other users who want to sync the same projects. For now, however, we'll just refer to the repo by its path in the local filesystem.
Now we just need to import that new manifest and run jiri update.
cd "$MY_ROOT" jiri import -name=my_manifest_repo my_manifest /tmp/my_manifest_repo jiri update
You should now see the Hello-World repo in $MY_ROOT/helloworld, and your manifest repo in $MY_ROOT/my_manifest_repo.
The jiri help command will print help documentation about the jiri tool and its subcommands. For general documentation, including a list of subcommands, run jiri help. To find documentation about a specific topic or subcommand, run jiri help <command>.
branch           Show or delete branches
diff             Prints diff between two snapshots
grep             Search across projects
import           Adds imports to .jiri_manifest file
init             Create a new jiri root
patch            Patch in the existing change
project          Manage the jiri projects
project-config   Prints/sets project's local config
run-hooks        Run hooks using local manifest
runp             Run a command in parallel across jiri projects
selfupdate       Update jiri tool
snapshot         Create a new project snapshot
source-manifest  Create a new source-manifest from current checkout
status           Prints status of all the projects
update           Update all jiri projects
upload           Upload a changelist for review
version          Print the jiri version
help             Display help for commands or topics
Run jiri help [command] for command usage.
See the jiri filesystem docs.
See the jiri manifest docs.
TODO(anmittal): Write me.
Gerrit is a collaborative code-review tool used by many open source projects.
One of the peculiarities of Gerrit is that it expects a changelist to be represented by a single commit. This constrains the way developers may use git to work on their changes. In particular, they must use the --amend flag with all but the first git commit operation and they need to use git rebase to sync their pending code change with the remote master. See Android‘s repo command reference or Go’s contributing instructions for examples of how intricate the workflow for resolving conflicts between the pending code change and the remote master is.
The rest of this section describes common development operations using jiri upload.
All development should take place on a non-master “feature” branch. Once the code is reviewed and approved, it is merged into the remote master via the Gerrit code review system. The change can then be merged into the local branches with jiri update -rebase-all.
jiri update
git checkout -b <branch-name> --track origin/master
git add <file1> <file2> ... <fileN>
git commit
jiri update
git checkout <branch-name>
git rebase origin/master
git add <file1> <file2> ... <fileN>
git commit --amend
git checkout <branch-name>
jiri upload
If the CL upload is successful, this will print the URL of the CL hosted on Gerrit. You can add reviewers and comments through the Gerrit web UI at that URL.
Note that there are many useful flags for jiri upload. You can learn about them by running jiri help upload.
git checkout <branch-name>
git add -u
git commit --amend
jiri upload
jiri upload
jiri update
git checkout JIRI_HEAD && git branch -d <branch-name>
If you have changes A and B, and B depends on A, you can still submit distinct CLs for A and B that can be reviewed and submitted independently (although A must be submitted before B).
First, create your feature branch for A, make your change, and upload the CL for review according to the instructions above.
Then, while still on the feature branch for A, create your feature branch for B.
git checkout -b feature-B --track origin/master
Then make your change and upload the CL for review according to the instructions above.
You can respond to review comments by submitting new patch sets as normal.
After the CL for A has been submitted, make sure to clean up A's feature branch and upload a new patch set for feature B.
jiri update                   # fetch update that includes feature A
git checkout feature-B
git rebase -i origin/master   # if you see a commit from A, delete it, then rebase properly
jiri upload                   # send new patch set for feature B
The CL for feature B can now be submitted.
This process can be extended for more than 2 CLs. You must keep two things in mind:
The tool was conceived by engineers working on the Vanadium project to facilitate the multi-repository management needs of the project. At the time, it was called “v23”. It was renamed to “jiri” shortly after its creator (named Jiří) left the project and Google.
Jiří is a very popular boys name in the Czech Republic.
We pronounce “jiri” like “yiree”.
The actual Czech name Jiří is pronounced something like “yirzhee”.
https://fuchsia.googlesource.com/jiri/+/refs/heads/master
A legacy pass for the legacy pass manager that wraps the SROA pass.
This is in the llvm namespace purely to allow it to be a friend of the SROA pass.
Definition at line 4618 of file SROA.cpp.
Definition at line 4625 of file SROA.cpp.
References llvm::PassRegistry::getPassRegistry() and llvm::initializeSROALegacyPassPass().

Definition at line 4639 of file SROA.cpp.
References llvm::AnalysisUsage::addPreserved(), llvm::AnalysisUsage::addRequired(), and llvm::AnalysisUsage::setPreservesCFG().
getPassName - Return a nice clean name for a pass.
This is usually implemented in terms of the name that is registered by one of the Registration templates, but can be overloaded directly.
Reimplemented from llvm::Pass.
Definition at line 4646 of file SROA.cpp.
runOnFunction - Virtual method overridden by subclasses to do the per-function processing of the pass.
Implements llvm::FunctionPass.
Definition at line 4629 of file SROA.cpp.
References llvm::PreservedAnalyses::areAllPreserved().
Definition at line 4623 of file SROA.cpp.
Referenced by getPassName().
http://llvm.org/doxygen/classllvm_1_1sroa_1_1SROALegacyPass.html
Boston Python Workshop 3/Saturday/Twitter
Use the Twitter API to write the basic parts of a Twitter client. See what your friends are tweeting, get trending topics, search tweets, and more.
Setup
See the Friday setup instructions.
Goals
- Have fun playing with data from Twitter.
- See how easy it is to programmatically gather data from social websites that have APIs.
- Get experience with command line option parsing and passing data to a Python script.
- Get experience reading other people's code.
Suggested exercises
- Customize how tweets are displayed. Look at the Status and User classes in the Twitter code for inspiration; options include the URL for the tweet, how many followers the sender has, the location of the sender, and if it was a retweet.
- Write a new function to display tweets from all the trending topics. Add a new command line option for this function.
- The code to display tweets gets re-used several times. De-duplicate the code by moving it into a function and calling that function instead. Example prototype:
def printTweet(tweet):
    """ tweet is an instance of twitter.Status. """
    pass
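One way to fill that in (a sketch assuming python-twitter's Status and User attributes such as text, user.screen_name, and user.followers_count):

def printTweet(tweet):
    """ tweet is an instance of twitter.Status. """
    # sender, tweet text, and the sender's follower count
    print("@" + tweet.user.screen_name + ": " + tweet.text)
    print("  (" + str(tweet.user.followers_count) + " followers)")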
- [Long] A lot of the Twitter API requires that you be authenticated. Examples of actions that require authentication include: posting new tweets, getting a user's followers, getting private tweets from your friends, and following new people. Set up oAuth so you can make authenticated requests. The Twitter developer documentation describes how Twitter uses oAuth, and there are examples of using oAuth authentication to make authenticated Twitter API requests.
« Back to the Saturday project page
https://wiki.openhatch.org/wiki/Boston_Python_Workshop_3/Saturday/Twitter
torch create tensor: Construct a PyTorch Tensor
torch create tensor - Create an uninitialized PyTorch Tensor and an initialized PyTorch Tensor
< > Code:
Transcript:
We import PyTorch.
import torch
We’re going to print the torch version to see what version we’re using.
print(torch.__version__)
We’re using 0.2.0_4.
To construct a PyTorch tensor, we define a variable x and set it equal to torch.Tensor(5,1).
x = torch.Tensor(5, 1)
We can then print that tensor to see that it is a torch.FloatTensor of size 5x1.
print(x)
It is uninitialized.
By default, the PyTorch tensors are created using floats.
We can create a second tensor, y, using torch.Tensor(1,5).
y = torch.Tensor(1, 5)
We can print the y tensor and it is a FloatTensor of size 1x5.
print(y)
It is uninitialized.
Next, we define a tensor z and we set it equal to torch.Tensor(2, 2, 2).
This is going to be a three-dimensional tensor.
z = torch.Tensor(2, 2, 2)
When we print it, we can see that it consists of two 2x2 matrices and that it is uninitialized.
print(z)
Again, it is torch.FloatTensor of size 2x2x2.
Here, we’re going to construct a random tensor variable and we’re going to use torch.rand(3, 3, 3).
What this does is it creates a tensor based on the arguments we passed.
So it’s going to be 3x3x3.
random_tensor = torch.rand(3, 3, 3)
And when we print it, you can see that it has numbers that are all floating numbers.
print(random_tensor)
This rand function in PyTorch gives you a random number pulled from a uniform distribution from 0 to 1.
So you can see that all of the numbers here displayed are between 0 and 1.
https://aiworkbox.com/lessons/construct-a-pytorch-tensor
Definitions: Let P be a point in 3D of coordinates X in the world reference frame (stored in the matrix X). The coordinate vector of P in the camera reference frame is:
\[Xc = R X + T\]
where R is the rotation matrix corresponding to the rotation vector om: R = rodrigues(om); call x, y and z the 3 coordinates of Xc:
\[x = Xc_1 \\ y = Xc_2 \\ z = Xc_3\]
The pinhole projection coordinates of P is [a; b] where
\[a = x / z \quad \text{and} \quad b = y / z \\ r^2 = a^2 + b^2 \\ \theta = \mathrm{atan}(r)\]
Fisheye distortion:
\[\theta_d = \theta (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8)\]
The distorted point coordinates are [x'; y'] where
\[x' = (\theta_d / r) a \\ y' = (\theta_d / r) b \]
Finally, conversion into pixel coordinates: The final pixel coordinates vector [u; v] where:
\[u = f_x (x' + \alpha y') + c_x \\ v = f_y y' + c_y\]
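For reference, here is a direct transcription of the model above into plain code (no OpenCV calls; names follow the symbols in the formulas):

#include <cmath>

struct Pixel { double u, v; };

Pixel fisheyeProject(double x, double y, double z,
                     double k1, double k2, double k3, double k4,
                     double fx, double fy, double cx, double cy,
                     double alpha)
{
    double a = x / z, b = y / z;              /* pinhole projection      */
    double r = std::sqrt(a * a + b * b);
    double theta = std::atan(r);
    double t2 = theta * theta;
    double theta_d = theta * (1 + k1 * t2 + k2 * t2 * t2
                                + k3 * t2 * t2 * t2
                                + k4 * t2 * t2 * t2 * t2);
    double s = (r > 0) ? theta_d / r : 1.0;   /* guard the r == 0 case   */
    double xp = s * a, yp = s * b;            /* distorted coordinates   */
    Pixel p = { fx * (xp + alpha * yp) + cx, fy * yp + cy };
    return p;
}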
#include <opencv2/calib3d.hpp>
Performs camera calibration.
#include <opencv2/calib3d.hpp>
Distorts 2D points using fisheye model.
Note that the function assumes the camera matrix of the undistorted points to be identity. This means if you want to transform back points undistorted with undistortPoints() you have to multiply them with \(P^{-1}\).
#include <opencv2/calib3d.hpp>
Estimates new camera matrix for undistortion or rectification.
#include <opencv2/calib3d.hpp>
Computes undistortion and rectification maps for image transform by cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.
#include <opencv2/calib3d.hpp>
Projects points using the fisheye model.
#include <opencv2/calib3d.hpp>
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
#include <opencv2/calib3d.hpp>
Performs stereo calibration.
#include <opencv2/calib3d.hpp>
Stereo rectification for fisheye camera model.
#include <opencv2/calib3d.hpp>
Transforms an image to compensate for fisheye lens distortion.
See below the results of undistortImage.
Pictures a) and b) are almost the same. But if we consider points of the image located far from the center, we can notice that on image a) these points are distorted.
#include <opencv2/calib3d.hpp>
Undistorts 2D points using fisheye model.
https://docs.opencv.org/3.4/db/d58/group__calib3d__fisheye.html