Tags: cocoa fridayqna letsbuild notifications
Today I'm going to build an NSNotificationCenter workalike from scratch to illustrate how it all works, a topic suggested by Dylan Copeland.
Code
The code that I built is available as a complete unit on GitHub. While I don't believe it's useful for practical work (you might as well just use NSNotificationCenter), it can be interesting to look at it in its entirety.
Interface
The goal is to essentially reimplement NSNotificationCenter as a class which I'll call MANotificationCenter, but to make things a little bit easier, I decided to cut down the API a bit. NSNotificationCenter started out with an API for observing based on an observer/selector pair, then in 10.6 added a second method for using a block as an observer instead. Since blocks are ultimately much more natural for this sort of work, my version of the API just has a blocks-based method:
- (id)addObserverForName: (NSString *)name object: (id)object block: (void (^)(NSNotification *note))block;
Like with NSNotificationCenter, this method returns an opaque object which the caller is expected to use to remove the observer:
- (void)removeObserver: (id)observer;
The older object/selector API can be implemented in terms of this one, so we don't lose any functionality by only offering this. The basic idea of how things work under the hood is also essentially the same; the only difference is whether you call a block or send a selector to an object when it comes time to notify observers.
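As a quick illustration, here's a sketch of how that older API could be layered on top of the block-based method. The method name simply mirrors NSNotificationCenter's, and the __block capture keeps the block from retaining the observer under manual reference counting:

- (id)addObserver: (id)observer selector: (SEL)selector name: (NSString *)name object: (id)object
{
    // __block variables aren't retained by the block, matching the
    // non-retaining semantics the notification system needs.
    __block id weakObserver = observer;
    return [self addObserverForName: name object: object block: ^(NSNotification *note) {
        [weakObserver performSelector: selector withObject: note];
    }];
}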
Finally we need to be able to post a notification:
- (void)postNotification: (NSNotification *)note;
NSNotificationCenter has other posting methods which take objects and names directly, but those simply construct the NSNotification object and then call through to this, so again, we lose nothing by not having the other methods. They are really just one-line wrappers around this one.
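For example, such a wrapper might look like this (a minimal sketch, following NSNotificationCenter's naming):

- (void)postNotificationName: (NSString *)name object: (id)object userInfo: (NSDictionary *)userInfo
{
    // Build the notification object, then defer to the primitive method.
    NSNotification *note = [NSNotification notificationWithName: name object: object userInfo: userInfo];
    [self postNotification: note];
}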
And that's it for the API. It's pretty simple. It's interesting to note that NSNotificationCenter isn't all that much more complicated. It only has seven instance methods, and two of them are just convenience wrappers.
Implementation
Before we get to actual code, let's talk some theory.
A notification is defined by the object that posts the notification, the notification's name, and arbitrary user info that's attached to the notification. The object and name are used to determine which observers need to be notified.
That last part is the key. In essence, the notification center maps (object, name) pairs to observers. In this case, since MANotificationCenter is blocks-based, it will map those pairs to observer blocks. Multiple observers can be registered on a single (object, name) pair, so the pairs need to map to a collection that can hold multiple observer blocks.
In Cocoa terms, when we say "map", this usually means a dictionary. Unfortunately, using pairs as dictionary keys is a little inconvenient, because NSDictionary only takes a single object as the key. To mitigate this, we'll create a separate "key" class which holds the pair and can serve as a single object for the dictionary key. For the collection of observer blocks, we don't care about order, so we can just use an NSMutableSet.
This class will therefore have a master NSMutableDictionary which maps keys to NSMutableSet instances, which then contain the individual observer blocks. And that's really all there is to it.
The Key Class
The key class is pretty simple, but it's worth discussing just how it works so everybody is on the same page. It's just a small class which holds a name and an object and implements equality, hashing, and copying appropriately. One small detail: the object reference needs to be a weak reference, as the notification system isn't supposed to retain the objects it manages (which frequently deregister themselves in response to being deallocated, so retaining them would be problematic).
On a side note, I frequently warn of the dangers of plain weak references, and recommend using a zeroing weak reference class like my own MAZeroingWeakRef instead. However, in the interest of brevity and clarity, I'm going to use plain, dangerous, regular weak references in this code.
The interface for this class is simple: two instance variables, and one public class method to make instances:
@interface _MANotificationCenterDictionaryKey : NSObject
{
    NSString *_name;
    id _object;
}

+ (_MANotificationCenterDictionaryKey *)keyForName: (NSString *)name object: (id)obj;

@end
That public method just does the typical dance of alloc, init, autorelease, using a private init method:
+ (_MANotificationCenterDictionaryKey *)keyForName: (NSString *)name object: (id)obj
{
    return [[[self alloc] _initWithName: name object: obj] autorelease];
}
The init and dealloc methods are likewise simple, just setting and cleaning up the instance variables:
- (id)_initWithName: (NSString *)name object: (id)obj
{
    if((self = [self init]))
    {
        _name = [name copy];
        _object = obj;
    }
    return self;
}

- (void)dealloc
{
    [_name release];
    [super dealloc];
}
Next comes equality. I want this class to be able to handle nil for either the name or the object (or both), as this will come in handy later when implementing nil catchall observers: an observer can register for a nil name to mean "all notifications from the given object", register for a nil object to mean "all notifications with a given name", and register with nil for both to catch all notifications sent through the system. However, notification names need to be compared with isEqual:, which doesn't play nice with nil. To make this simpler, I created a really quick equality-checking helper which correctly handles nil and only uses isEqual: when both objects exist:
static BOOL Equal(id a, id b)
{
    if(!a && !b)
        return YES;
    else if(!a || !b)
        return NO;
    else
        return [a isEqual: b];
}
With that helper in place, the implementation of isEqual: is simple and follows the standard pattern of checking the class of the other object and then comparing instance variables:
- (BOOL)isEqual: (id)other
{
    if(![other isKindOfClass: [_MANotificationCenterDictionaryKey class]])
        return NO;

    _MANotificationCenterDictionaryKey *otherKey = other;
    return Equal(_name, otherKey->_name) && _object == otherKey->_object;
}
The implementation of hash just gets the hashes of the two instance variables and squishes them together:

- (NSUInteger)hash
{
    return [_name hash] ^ (uintptr_t)_object;
}
Finally, to be a dictionary key, we need copyWithZone:. Since this class is immutable, that method can just retain self and return it:

- (id)copyWithZone: (NSZone *)zone
{
    return [self retain];
}
That takes care of keys. On to the notification center itself.
Notification Center Implementation
The init and dealloc implementations are short. There's an NSMutableDictionary *_map instance variable, and they just set it up and tear it down:
- (id)init
{
    if((self = [super init]))
    {
        _map = [[NSMutableDictionary alloc] init];
    }
    return self;
}

- (void)dealloc
{
    [_map release];
    [super dealloc];
}
Next up is the implementation of -addObserverForName:object:block:, which is where most of the work takes place. The first thing it needs to do is grab a key object so it can use that to work with the observers dictionary:
- (id)addObserverForName: (NSString *)name object: (id)object block: (void (^)(NSNotification *note))block
{
    _MANotificationCenterDictionaryKey *key = [_MANotificationCenterDictionaryKey keyForName: name object: object];
Next, it needs to grab the NSMutableSet of observers for that key. This is a straightforward objectForKey: lookup, except for the fact that the observers set might not exist yet. If that's the case, we create it on demand and put it into the observers dictionary:
    NSMutableSet *observerBlocks = [_map objectForKey: key];
    if(!observerBlocks)
    {
        observerBlocks = [NSMutableSet set];
        [_map setObject: observerBlocks forKey: key];
    }
Next, shove the observation block into the set. Since an NSMutableSet will only retain its objects, and blocks must be copied if they're kept around, we'll copy it ourselves before handing it off to the set:
    void (^copiedBlock)(NSNotification *note);
    copiedBlock = [block copy];
    [observerBlocks addObject: copiedBlock];
    [copiedBlock release];
That's just about it. Everything is now in place in the observers dictionary for -postNotification: to do its job. There's just one piece missing in this method: the return value. This method is supposed to return some sort of object which, when passed to -removeObserver:, causes the observation entry to be removed.
This object would generally encapsulate everything needed to find the entry again and remove it. In this particular case, it would need to hold the key and the copied block. These two objects could be stored in some sort of Cocoa collection like an NSArray or an NSDictionary, but that causes a lot of unpleasant boxing and unboxing activity as you construct the objects and extract the values from them. They could be stored in a custom class, but it's irritating to build a whole extra custom class just for this tiny use case.
After some thought, I decided to return a block. All of the information needed to remove the observation entry is available in the current scope, and so can be captured by the block. As a bonus, we can capture the existing observerBlocks variable so that the set doesn't have to be looked up a second time. -removeObserver: can then just call this block, and everything is happy. I decided to go with this, and I think it turned out pretty well.
The removal block first just pulls the block out of the observers set:
    void (^removalBlock)(void) = ^{
        [observerBlocks removeObject: copiedBlock];
Next, if the observers set is empty, it removes that entry from the observers dictionary entirely. This keeps empty NSMutableSet instances from building up in the dictionary after objects are destroyed:

        if([observerBlocks count] == 0)
            [_map removeObjectForKey: key];
    };
That's it for the removal block. All that's left is to return it to the caller, and we're done:
    return [[removalBlock copy] autorelease];
}
With this implementation, the -removeObserver: method becomes really short. It just converts the object into the proper block type and then calls it:

- (void)removeObserver: (id)observer
{
    void (^removalBlock)(void) = observer;
    removalBlock();
}
Next up is a helper method which actually handles the sending of a notification. This is separate from -postNotification: so that we can properly handle nil catch-all observations. More on that in a moment. This method takes a notification, a name, and an object (yes, those are stored in the notification, but again, this helps with catch-all handling), looks up the set of observers in the master dictionary, and then calls all of the corresponding blocks:
- (void)_postNotification: (NSNotification *)note name: (NSString *)name object: (id)object
{
    _MANotificationCenterDictionaryKey *key = [_MANotificationCenterDictionaryKey keyForName: name object: object];
    NSSet *observerBlocks = [_map objectForKey: key];
    for(void (^block)(NSNotification *) in observerBlocks)
        block(note);
}
Finally we have -postNotification:. A simple implementation would be to call the above method, passing [note name] and [note object] as the last two parameters. However, to handle catch-all observers, we'll actually call the above method four times in a row. The first time, it's called with the name and object. The second time, it's called with only the name and a nil object; this notifies catch-all observers registered for the name. Next, it's called with a nil name and the notification's object, which notifies catch-all observers on the object. Finally, it's called with nil for both parameters, which notifies universal catch-all observers.
Here's what the method looks like:
- (void)postNotification: (NSNotification *)note
{
    NSString *name = [note name];
    id object = [note object];

    [self _postNotification: note name: name object: object];
    [self _postNotification: note name: name object: nil];
    [self _postNotification: note name: nil object: object];
    [self _postNotification: note name: nil object: nil];
}
By making -addObserverForName:object:block: accept nil, little else needs to be done to implement the catch-all behavior. A nil object or name is treated much like any other value, except that all notifications are delivered to the nil registrations as well as to the specific objects and names they're targeted for.
A Few Caveats
Since this code is intended mainly for educational purposes, it doesn't quite have everything that a real, practical implementation would have.
First, it doesn't have a way to get a singleton instance. That is really useful for notifications, because the entire point of notifications is often to have objects communicate when they don't otherwise know very much about each other, and having to pass around notification center instances would kind of defeat the point. This is, of course, pretty simple to add.
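A minimal sketch of such a singleton accessor, assuming the name mirrors NSNotificationCenter's +defaultCenter and that dispatch_once (10.6+) is available:

+ (MANotificationCenter *)defaultCenter
{
    static MANotificationCenter *defaultCenter = nil;
    static dispatch_once_t pred;
    dispatch_once(&pred, ^{
        defaultCenter = [[MANotificationCenter alloc] init];
    });
    return defaultCenter;
}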
Second, it's not thread safe. Adding and removing observers mutates shared data structures which would need to be protected by a lock. Making it thread safe could get complicated, since the simplest implementation would hold the lock while posting notifications, but notification observers might then try to come back and twiddle with the notification center, leading to a deadlock. A better approach might be to use a dispatch queue, so that all mutations can be enqueued and executed later if the notification center is busy.
Last, it's also not reentrant. If a notification observer manipulates the notification center, it could end up mutating the observerBlocks set that the center is currently iterating over, causing a mutation exception to be thrown. It could even cause the set to be deallocated entirely, resulting in a crash. Some sort of queue to hold modifications, whether a dispatch queue or something else, would solve this. Another option, albeit slower, would be to simply copy observerBlocks before iterating over it, so that modifications can't touch the iterated copy.
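The copying approach could be as small as a one-line change to the helper method, sketched here:

- (void)_postNotification: (NSNotification *)note name: (NSString *)name object: (id)object
{
    _MANotificationCenterDictionaryKey *key = [_MANotificationCenterDictionaryKey keyForName: name object: object];
    // Iterate over a snapshot so observers can add and remove
    // observations without mutating the set being enumerated.
    NSSet *observerBlocks = [[[_map objectForKey: key] copy] autorelease];
    for(void (^block)(NSNotification *) in observerBlocks)
        block(note);
}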
Conclusion
Having a working implementation of something like NSNotificationCenter lets us get a better idea of just what's going on inside. Most importantly, it makes it clear that there's no magic involved. Notifications aren't some special language feature that's hard to understand; they're just a simple dispatch mechanism whose basic implementation can fit in around a hundred lines of code.
A common question about notifications is just when and where they run. Many people see the word "notification" and start thinking in terms of complicated cross-thread communication or delayed delivery mechanisms. However, we can see that this simply isn't how things work. When -postNotification: is called, the various observers are called one by one right in the method, and it doesn't return until they're all done. (Note: this does not apply to the NSNotificationCenter method which takes a block and an NSOperationQueue. When a queue is provided for that method, the observation block runs asynchronously.)
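A tiny test makes the synchronous behavior visible. This is just a sketch; the sender object stands in for any real poster:

MANotificationCenter *center = [[[MANotificationCenter alloc] init] autorelease];
id sender = [[[NSObject alloc] init] autorelease];

[center addObserverForName: @"Ping" object: sender block: ^(NSNotification *note) {
    NSLog(@"observer ran");
}];

[center postNotification: [NSNotification notificationWithName: @"Ping" object: sender userInfo: nil]];
NSLog(@"post returned"); // always logged after "observer ran"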
That wraps things up for this week. Come back soon for another edifying Friday Q&A about daring and unusual topics. As always, those topics come from reader suggestions, so if you have a topic that you would like to see covered here, please send it in.
In
- (id)_initWithName: (NSString *)name object: (id)obj
for _MANotificationCenterDictionaryKey, is there a reason for writing
if((self = [self init]))
instead of
if((self = [super init]))
?
Colin: Just a habit not to use super unless it's necessary. Since init isn't overridden, it's not necessary here.
But with _MANotificationCenterDictionaryKey, if a subclass were overriding init and calling _initWithName:object: from there, this could lead to endless recursion, couldn't it?
Another remark: isn't the else if(!a || !b) branch of the Equal function redundant?
That probably says that you're right and I'm wrong here.
Regarding if(!a || !b), it's not redundant. Technically it doesn't need to check a, as the isEqual: check will return NO if a is nil, although a separate check is slightly faster (and more explicit) for that case. The real importance is for b, as the result of passing nil to isEqual: is not, as far as I know, well defined and could potentially throw an exception or crash.
I especially liked returning the removalBlock as the token from addObserverForName:… Very simple and elegant solution!
Source: https://www.mikeash.com/pyblog/friday-qa-2011-07-08-lets-build-nsnotificationcenter.html
example of a custom tableviewcell

        ...-alert-24')
    except AttributeError:
        pass
    return cell

view = ui.TableView()
view.data_source = source()
view.present()
author: Omega0
You can add subviews to the content_view attribute of an instance of ui.TableViewCell. This is mentioned in the documentation, but not directly; it only tells you that views can be added this way, not why you would do it. Also, Dann, for the author of the code above, you could find the username of the poster. (I care only because I made it.)
I couldn't find the username. I had this snippet in my 'forum snippets' folder to learn from. I tried searching the forum posts for you, but couldn't :( sorry.
That's fine. It was a pretty quick code anyway.
@tachijuan Don't know if this helps you...
'Classes' - SettingsSheet
It's switches in a cell... but you could add labels and an image instead by the same method.
I think that will do it. Have a long flight tomorrow so I can play with this. Thanks for the help fellas.

This is rather brute force, but works: add a list of cells as a public property, and append the cells as they are created... see the updated SettingsSheet class.

A ui.View can be used like a dict of its named subviews... but I think this little test script shows that a ui.TableViewCell doesn't support that... try it with View and then TableViewCell. It looks like subviews[n] is as good as it gets.
import ui

v = ui.View()
#v = ui.TableViewCell()
b = ui.Button()
b.name = 'btn'
v.add_subview(b)
print v['btn'].name
@tachijuan Ok, the trick is to add the subviews to the cell's content_view, not the cell... then it works to use content_view like a dict.
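To make that concrete, here's a small sketch (the 'status' label name is just an example):

import ui

cell = ui.TableViewCell()
label = ui.Label()
label.name = 'status'  # named subviews become addressable like dict keys
label.text = 'hello'
cell.content_view.add_subview(label)

# lookup through content_view works, even though cell['status'] doesn't,
# as the test script above shows:
print cell.content_view['status'].text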
P.S. ListDataSource also has an (undocumented?) tableview attribute that is useful for upwards navigation.
aha!
Cool. Thank you!
Source: https://forum.omz-software.com/topic/1132/example-of-a-custom-tableviewcell
Those who are successful in America have unfair advantages. The government can step in to alleviate the suffering of those who are born without the built-in advantages others have. Some of these advantages are education, money, innate intelligence, robust health of body and mind, cleverness, self-discipline, habit of being thrifty, the ability to be proactive and future-oriented, being an American citizen, etc.
A.) True ______
B.) False ______
My answer is very, Very, VERY A.) True ____X
My reasoning is that the Constitution does not allow any such differences [see … ons-Spirit for how the spirit of the constitution reads], which is for ...
1) Perfecting the Union "done without any form of divisions"
2) Establishing Justice "Without separate justices for class differences"
3) Insuring domestic Tranquility "Despite any man differences",
4) Provide for the common Defence "of people territory",
5) Promote the general Welfare "equally across the nation's people" and
6) Secure the Blessings of Liberty to ourselves and our Posterity "equally to all without any reference to differences."
I do not understand you. I even went to your hub. What is your Point Of View in all actuality?
I am surmising you think the Govt. does have the arms to tie your shoes? That it IS possible to create conditions of equality of OUTCOME?
Our govt. attempts to create conditions of OPPORTUNITY for ALL ... (by enabling everyone to tie their own shoes.) THIS is do-able.
Is it not?
KLH,
The Federal government can promote equality, although they never have. All it has to do is implement the constitution, as my Hub's link shows has never been done, by making every law it makes represent one or more of the Preamble's 6 conditions rather than fulfill the wish of the "Military-Industrial-Complex's" (MIC) corporate interest, as they are doing. Corporations are allowed in this nation to provide revenues to We The People and not for "corporate self-enrichment", as is presently happening, and are supposed to renew their pledge every 3 years, I believe it is.
I'm saying government has no power at all except to make the laws necessary to ensure Corporations DO NOT control this nation. By "being the electors of representatives, senators, presidents and vices" (supposed to be done by We The People), they choose who Presidents appoint and Congress approves for the various cabinet positions. To ensure Corporations have their way over medications that cause a lack of healing but the ill-treating of health conditions instead, they ensure Congress, Presidents and Supreme Court Justices have enough dirt on them that they can control/buy them (why else, out of 535 congressmen, most less than millionaires upon election, are there only 5 who are not such today). We The People have the controlling power over government, according to how my Hub reads, and we don't even care to use it because we are "sleepwalking" in order to keep dreaming "the American Dream" of personal wealth rather than the collective needs of We The People.
Socialistic based governments have been more successful in promoting equality-of-outcome than democratic based governments.
A.) True _____
B.) False _____
The "democratic based government" Is what my Hub's link is describing. It also describes a "Socialist" and "Communist" government, all three are the same once we boil them down to their "least common denominator".
Governments are better equipped to give people what they need than the people themselves. Furthermore, the people themselves should not be burdened by the necessity of helping each other.
A.) True ____
B.) False ____
Man's only needs are unpolluted air, unpolluted water and environmentally, ecologically grown foods, and wisdom enough to maintain the ecological process of the earth. Governments, on the other hand, provide everything for corporations' economic enrichment.
In my opinion, the answer is: False, to all three statements. Why ask someone without arms to tie your shoes?
In the same way, expecting ANY government to provide Equality of outcome is
I M P O S S I B L E !!!!!!
In other words, I do not believe the U. S. Federal government has the arms to tie the shoelaces of the people.
We really have to tie our own shoelaces, since we are the ones with the arms to do so …
basically.
My disagreement is because of what is written in the United States' Constitution. Those 6 conditions the Constitution's Preamble demands of government are not asking any group to do anything for anyone else; what it does is DEMAND every individual to realize it if they want "Liberty to ourselves and our Posterity", and if not, we are allowed to ignore the constitution and allow an organization to dictate to us what our rights are.
Let's say, in America, we do not care about "equality of outcome" so much as helping those who need temporary help.
To what extent can a government provide assistance without creating dependency?
How can creating dependency and promoting fraud be deterred?
I do not see this as a political problem so much as a scientific one!
It's based on human nature.
To whatever extent time limits are clear, concise and applied. Of course, even limits that meet those requirements, but are overly long, will create dependency.
Yes.
They would have to be issued clearly and concisely …
and applied with just the right amount of time!
How can people who ARE NOT willing to read IN DEPTH the laws they are to be governed by expect anything else than what "You The People" are allowing? The people you [ARE SUPPOSED TO] have chosen and put in place have the power to direct their every move, yet you expect something other than what you have because you are "sleepwalking" in order to dream "The American Dream" of a financially controlled environment, in opposition to the earth's eco system of "receive your SUPPLY on DEMAND" as required.
How can that (specific?) amount of time be determined, I wonder?
If we could agree on this, we could all (both parties) get along!!!!!!
Thanks, wilderness!
I am beyond miffed. People have to learn how to be accountable & responsible for their actions. They must learn that intelligent decisions equate to a good quality of life while stupid, irresponsible decisions equate to a subpar quality of life. The MAIN problem of American society, & it has been since the creation of the so-called Great Society of the 1960s, is that helping the poor via handouts & freebies only creates people who believe that they, while poor, should live as well as the middle & upper classes. Welfare & all the other stupid, inane social programs are making poor people entitled. They believe that they should have a middle class life w/o putting in the effort & sacrifices to obtain such a life.
It really galls me that poor people contend that they should have the good life when they made stupid, irresponsible decisions. Talk about inverse logic. People who make irresponsible decisions deserve the lifestyle they get. In other words, good begets good while bad begets bad. If one wants to be successful, one had to make intelligent decisions i.e. delay immediate gratification in terms of immediate pleasures, become more pro-active regarding one's life, don't enter into marriage & parenthood until one is well-established emotionally, intellectually, psychologically, & most of all socioeconomically, & ultimately, one must be guided by logic, not instinctive, primitive emotions.
Remember the WORST thing is to help poor people by giving to them. Poor people become lazier & more entitled when such help is given. This help makes life easier for poor people & makes them lose incentive. When life was harder for poor people, they realized that being poor sucked & many fought to get out of poverty by intelligent planning & strategizing. It CAN be done. The American poor, FOR THE MOST part, WANT to be poor yet want to live a middle-class life but not by their efforts. They want the gub'ment to supply the affluent life. Well, it doesn't work that way! Work.....or STARVE & DO WITHOUT!
Some people really do need help. Pregnant girls, for instance, who are determined to have their child, and eventually they turn themselves around. Maybe the govt. could give zero interest loans. Expect people to pay back the loans at some point in time or in some way. People always respond, "… we already do pay taxes, why should I have to pay back the govt.?" Well, our taxes would be less if people were expected to return (and actually do return) the money they borrow from the govt.
gmwilliams,
That is, when read in-depth, exactly what the constitution is saying without a doubt: "be accountable & responsible for [our individual] actions."
… do we need a constitution? Or why have one if we do not bother to understand it and follow it?
Do you like the constitution we have, if only we could follow it …
or do you think we could just as well do without?
The United States Constitution is the laws of Ecology and, for the most part, bypasses economics. I accept it because Revelation 12:5 calls it a "rod of Iron" the "man child" [son of man] will use to bring world peace for about 10% [10 virgins] of world population to be sealed (Rev. 7:1-10), with half actually surviving the world's end, expected to be accomplished no later than 2028 (Matthew 24:32-34 explaining Isaiah 11:10-12 that happened May 1948, and Psalm 90:10 revealing a maximum of 80 years from then).
Do we need a Constitution?
Yes, few men know ecology's laws, therefore something representing them had to be written for them to have a mental picture of what to expect.
Did you read the Conclusion of the link? It shows George Washington knew men en masse are not wise nor honest enough to implement the constitution and surrendered to the will of "god" to do it in due time.
Source: http://hubpages.com/politics/forum/142239/the-usfederal-government-can-promote-equality
Recently, the PDC 2008 conference showcased the road map of Microsoft technologies, focusing on end-to-end solutions driven by metadata and hosted virtually anywhere. This approach requires many vertical and horizontal changes in how applications are architected, and a different quality of development thinking, focused more on "what" to do rather than "how" to do it. This is a very important step for Microsoft technologies, comparable to the introduction of .NET managed code.

We can see Microsoft's goal on this road: offering technologies for mapping a business model into the physical model and projecting it more declaratively than imperatively. Decoupling a business model into small business activities (code units) and orchestrating them in a metadata driven model enables us to encapsulate the business from the technologies and infrastructures. At PDC 2008, Microsoft introduced the xaml stack as a common declaration of the metadata driven model for runtime projecting. With the xaml stack, a component can be projected at runtime in the front-end and back-end based on the hosting environment, for instance: desktop, mobile, server or cloud. Declaratively packaging activities into a service enables us to consume the service from a business logical model in a business-to-business fashion.
The services are represented by logical connectivity (interoperability) for decoupling a business model in the metadata driven distributed architecture. For this architecture, the service model is required to manage connectivity and service mediation based on the metadata; in other words, the ability to manage the business behind the endpoint in a declarative manner. Note that this concept doesn't limit what must be within the service: it can be a small business model that represents a simple workflow, or a full application (SaaS - Software as a Service).

Lastly, PDC 2008 showed how important the Service Model is in the next generation of Microsoft technologies, for instance the Windows Azure platform, Dublin, the Oslo modeling platform, etc. This strategy is based on modeling, storing metadata in the Repository and deploying metadata for runtime projecting; therefore Model-Repository-Deploy is the upcoming way of thinking for enterprise applications. The last step, Deploy, will enable us to deploy our logically centralized model into the decentralized physical model, virtually to any hosting environment. It can be a custom host, Dublin or the Windows Azure platform (cloud computing). From the Service point of view, the Windows Azure platform represents a Service Bus for logical connectivity between Services. This is a big challenge for these technologies and, of course, for the developer's (different) quality of thinking.
The following picture shows the strategy of the Manageable Services:
The business model is decomposed into Service Models described by metadata stored in the Repository. The Repository represents a logically centralized model, which actually is a large Knowledge Base of the business activities, components, connectivity, etc. This Knowledge Base is taught manually by Modeling Tools during the design time, but there is also a possibility to learn on the fly from other sources, for tuning purposes, for instance. Based on this capacity, we can build the next model on top of the previous one and so on, therefore increasing the level of knowledge in the Repository. There is also a deployment model, which allows deploying our knowledge base into the runtime environment. Of course, the Repository can be pre-built with some minimum common knowledge such as WorkflowServiceModel, ServiceModel, deployment for Dublin, Azure, etc.
One more thing: I can see the Repository (as a Knowledge Base of models) as a big future for Enterprise Applications. Hosting the Repository on a cloud server (for instance Windows Azure), the models can be published and consumed by other models, or they can be privately deployed, transformed to other models, etc.

The manageable-services-enabled architecture allows us to decompose the business model into services and then deploy the metadata to the host environment, which can be the Cloud, Dublin, IIS/WAS or a custom host. The following picture shows this concept:

The deployed services based on the modeling can be connected to the Service Bus and consumed by other private contracts. From the business workflow point of view, the services represent business activities modeled during the design time. Decomposing the business logic into activities located in the business toolbox (catalog) enables us to orchestrate them within the service. The result of this strategy is a very flexible model-driven solution.

This article describes the above strategy for Manageable Services driven by metadata stored in the Repository, using the edge of the current Microsoft technologies such as .NetFX 3.5. This project was started when WorkflowServices (.NetFX 3.5) were introduced, and it builds on my past articles published on CodeProject such as VirtualService for ESB and Contract Model for Manageable Services. I do recommend reading them for additional details. This article focuses on the Application Model (Service Model); however, it includes all the parts for modeling (tooling), storing, deploying and projecting metadata.
Let's start. I have lots of stuff to show, and I hope you are familiar with the latest Microsoft technologies, especially .NetFX 3.5.

First, let's focus on deploying metadata from the Repository to the Host. Note that the Repository, from the generic point of view, holds models; these models can be models of other models, stored in relational tables, etc. Based on the deployment model, the application model must be exported to the runtime as resources understandable by the runtime projector. Basically, there are two kinds of deployment processes, based on how the metadata is exported from the Repository.
This scenario is based on the Local Repository of the runtime metadata, located in the hosting environment. The following picture shows this scenario:
The Push deployment is a typical deployment scenario. The first step of this scenario is pushing (xcopy, etc.) resources to a specific storage, usually a file system, for resources such as config, xaml (xoml/rules), xslt, etc., in an off-line fashion, when the application is not running. The next step is to start the application, when the runtime projector loads the resources into the application domain, creates CLR types and their instances, and invokes actions. During this process, the Local Repository (for instance, the File System) is logically locked for any changes, and the application domain will physically be shut down when the resources have been changed.
The Push deployment simplifies the deployment process; the Local Repository is fully decoupled from the source of the metadata (the Repository). In other words, we can push the metadata from Visual Studio, from a script, etc. During this deployment, we are creating (caching) a runtime copy of the metadata (resources), so we have full isolation from the Modeling/Repository. That is an advantage of this scenario.

The Dublin PDC version shows this capability of hosting services on IIS/WAS with the Dublin extension. We can see great features such as monitoring and managing services hosted in IIS/WAS on Windows Server 2008 or Vista/Windows 7 machines. The Dublin MMC allows making some local changes in the local copy of the metadata. I can see this being helpful for a very small project, but for enterprise applications, managing a local copy of the metadata on the production boxes, without tracking, versioning and rollback, is not a good approach. In addition, the local metadata is a complex resource which represents only a portion of the logical model stored in the Repository. Therefore, all changes should be made in the centralized logical model and then pushed again to the local repository. I think that should be the way to manage services from the Repository, using a Modeling Tool or similar which will take care of the changes in the logical model, especially when the deployment is done for the Cloud.
OK, now let's look at the other option for deploying metadata to the host: pulling runtime metadata from the Repository.

This scenario doesn't have a Local Repository for caching runtime metadata from the central Repository. It is based on pulling the runtime metadata from the Repository by a Bootstrap/Loader component located in the application domain.

When the application process starts, the Bootstrap/Loader publishes (broadcasts) an event message to obtain metadata, or asks the Repository for the metadata directly. The major advantage of this scenario is dynamically managing services on the fly from the Repository, which can be done directly or using a discovery mechanism, plus the capability of tuning metadata on the fly based on analyzing and processing rules, etc. Therefore, there is no manageability of the service at the machine level; all changes must be done through the central repository tools. Note that this scenario was not introduced at PDC 2008.
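As an illustration, a Bootstrap/Loader along these lines could pull the xoml and rules resources and project them straight into a running service. This is only a sketch: the IRepositoryClient shape is hypothetical, while the stream-based WorkflowServiceHost constructor comes from .NetFX 3.5.

using System;
using System.IO;
using System.ServiceModel;

// Hypothetical proxy for the Repository's LocalRepositoryService;
// the real contract and transport are defined by the Repository.
public interface IRepositoryClient
{
    Stream GetResource(string application, string service, string kind);
}

public class BootstrapLoader
{
    private readonly IRepositoryClient _repository;

    public BootstrapLoader(IRepositoryClient repository)
    {
        _repository = repository;
    }

    public ServiceHostBase Load(string application, string service)
    {
        // Pull the runtime metadata instead of reading a local copy.
        Stream xoml = _repository.GetResource(application, service, "xoml");
        Stream rules = _repository.GetResource(application, service, "rules");

        // Project the metadata into a running workflow service.
        WorkflowServiceHost host = new WorkflowServiceHost(xoml, rules);
        host.Open();
        return host;
    }
}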
Now, let's focus on the metadata stored in the Repository. The concept of the Manageable Services was described in detail in my previous article called Contract Model for Manageable Services; the result of the service virtualization and its manageability using metadata is shown in the following picture:

As you can see, the service virtualization is basically described by two models, the Contract and Application Models; in other words, ServiceModel = ContractModel + ApplicationModel. These two models are logically isolated and connected via an EndpointDescription. The responsibility of the Contract Model is describing the ABC (Address-Binding-Contract) metadata for the connectivity. The other model, the Application Model, describes a service activity behind the endpoint. It doesn't need to know how its model is connected to other Application Models. Therefore these two models can be created individually during the design time, and only in the last modeling step, such as hosting, do we need to assign a physical endpoint to a specific service in the Application Model.
The Service Model allows full mediation of connectivity, messaging, MEP (Message Exchange Pattern), message transformation, service orchestration, etc., and its results are stored in SQL Server relational tables for later editing or deployment. The service mediation features are limited by the .NetFX 3.5 technologies, specifically the WCF and WF paradigm models (WorkflowServices), which are used for the implementation of the Manageable Services. The current .NetFX 3.5 technology integrates two models for the runtime, known as WorkflowServices, but each model still needs its own metadata, for example: config, xoml and rules files. In the upcoming new version of the .NetFX (4.x), we will actually have only two resources, config and xaml, for projecting a WorkflowService (or Service) into runtime CLR types.
The following picture shows the concept of the metadata driven services, where a Design Tool has the responsibility to create the metadata in the Repository, and a Deploy Tool is for packaging metadata for its runtime projecting in a specific host such as IIS/WAS (or custom hosting). Note that this article implements the option to deploy metadata only for IIS/WAS. Of course, you can also deploy it for Dublin (PDC version) and have a side-by-side deployment with .NetFX 4.0; in that case, your Dublin features are limited.

What about Azure? At this time (PDC version), we cannot deploy any custom activities, including a root activity. There is a predefined set of Cloud Activities for workflow orchestration on the cloud. We will see more about this at the PDC 2009 conference.

As you can see in the above picture, the complexity of the Service Model is "coded" by the Design Tool, which can be very sophisticated, or simple by using just an MMC console. This article includes a solution for modeling Services using the MMC Tool which I built for the Contract and Application Models. It requires some knowledge of WCF/WF (to use the Tool at this level) and of xslt and xml technologies such as bindings, contracts, workflow activities, xpath, etc.
"coded"
In summary, the Manageable Services, represented by the Contract and Application Models, are stored in the centralized knowledge base known as the Repository, and from there they can be deployed to the physical servers for their runtime projecting and running.
This Model First approach requires having a design tool for efficiently generating model metadata. Our modeling productivity depends on how smart our design tool is at "cooking" our models. The Repository obviously starts with an empty knowledge base, and we have to teach it the connectivity and business activities modeled in the services. Each Service Model can be used as a template for other models.
OK, let's continue a little differently in the next section of the article, because the goal is to present the modeling of the services, the Repository and the Tool, rather than how to implement them. I will describe some interesting parts from the design and implementation point of view as well; please see the Design and Implementation paragraph.

The Repository is a central storage of the Manageable Services described by the Contract and Application Models. This article describes the Application Model only. The following picture shows its schema for the SQL Server tables:
The core of the Application Model is the Service schema. There are five important resources in this schema: config, xoml, rules, xslt and wsdl. All these resources have a runtime format understandable by the projector, therefore a custom provider is not necessary for the deployment process. The reason for that is the Workflow Designer support, where the result of the orchestration is exported in the xoml/rules resources. There are two additional resources in the Service schema, xslt and wsdl. The first one, the xslt resource, is for message mediation, and the other one is for the metadata exchange operation. Both resources are optional.
Note that the wsdl resource is included automatically when an endpoint from the Contract Model is assigned to the Service and the contract is untyped.
Each Service can be assigned to an AppDomain, and each AppDomain can be assigned to an Application, which is the highest level in the Application Model. The model supports resource versioning. The Application resource also represents an entry point for the deployment scenario. Note that this article does not include the GroupOfAssemblies and Assemblies schemas for managing custom assemblies in the Application Model.
As you can see, the above schema looks very straightforward, and it can be transformed to another modeling platform (for instance Oslo); this migration process basically depends on the xoml/rules resources, such as the workflow migration from WF 3.5 to the 4.0 version, workflow complexity, etc.

As I mentioned earlier, the metadata from the Repository can be pushed or pulled for its runtime projecting. In the case of pulling, the Repository supports a service to get the resources from storage. The name of the service is LocalRepositoryService.
Now it is time to show the Design Tool (Metadata Tooling) for generating data into the SQL tables.
The design tool for Manageable Services is implemented as an MMC snap-in, where the central pane is dedicated to a specific user control. The left and right panes are for handling scope nodes and their actions. The following picture shows two panes on the MMC, with the scope nodes and the user control for the Workflow Designer:
The details of the scope nodes are shown in the following screen snippet:
As you can see, there are two models of the ManageableServices, Contract and Application, located on the left pane. The first one is for creating a model of the endpoints, and the second one is for the service model and service hosting. This example shows an Application Test and domain abc with 3 Echo services. One of the Echo services is a router for versioning the Echo services located in the same domain. I will describe this service later in more detail.
By selecting a specific scope node, we get a user control for a specific action in the central pane. For example, the following picture shows a list of all services when we click on the Services scope node.
The first step of the tooling is to create a Service. By selecting the Services scope node we can get a choice for creating a new service.
The first page of the Service Creator asks us for the IsTemplate attribute. If it's checked, the service can be used for creating another template and/or simply deleting an existing one, etc.

Click Next to populate the service properties such as Name, Topic, etc. Note that this article supports only XomlWorkflowService authoring.
Click Next to select a template for our new service. In this example, the EmptyIntegrator has been selected, which is a basic template for service orchestration. This template has only one activity, for receiving a message in the request/response manner.
Click Next and the last page of the Service Creator Wizard will appear. The last page summarizes all information about our new service. After clicking Finish, the wizard will create the metadata for this service template in the Repository.
The following screen snippet shows the result of adding the new service:

As you can see, there is MyService in the Test topic group. The central pane shows a Workflow Designer for service mediation. All attributes should be related to the name of the service, such as MyService. The Microsoft Workflow Designer has been embedded into the MMC snap-in with some modifications, for instance using XmlNotepad 2007 for the xoml resource. This designer has the responsibility to create the xoml and rules resources in the same way as when it is hosted in Visual Studio.
As I mentioned earlier, the Manageable Services are implemented using the WorkflowServices paradigm from the .NetFX 3.5 version. The following code snippet shows the xoml resource of the basic template as it was created in the above example, MyService. Note that namespaces are omitted and attribute values abbreviated for clarity.
<ns0:WorkflowIntegrator x:Name="MyService">
  <ns1:ReceiveActivity.WorkflowServiceAttributes>
    <ns1:WorkflowServiceAttributes ConfigurationName="MyService" Name="MyService" />
  </ns1:ReceiveActivity.WorkflowServiceAttributes>
  <ns1:ReceiveActivity x:Name="receiveActivity1">
    <ns1:ReceiveActivity.ServiceOperationInfo>
      <ns1:TypedOperationInfo Name="ProcessMessage" />
    </ns1:ReceiveActivity.ServiceOperationInfo>
    <ns1:ReceiveActivity.ParameterBindings>
      <WorkflowParameterBinding ParameterName="(ReturnValue)">
        <WorkflowParameterBinding.Value>
          <ActivityBind Name="MyService" Path="MessageResponse" />
        </WorkflowParameterBinding.Value>
      </WorkflowParameterBinding>
      <WorkflowParameterBinding ParameterName="message">
        <WorkflowParameterBinding.Value>
          <ActivityBind Name="MyService" Path="MessageRequest" />
        </WorkflowParameterBinding.Value>
      </WorkflowParameterBinding>
    </ns1:ReceiveActivity.ParameterBindings>
  </ns1:ReceiveActivity>
</ns0:WorkflowIntegrator>
The workflow (root) activity is customized by the WorkflowIntegrator (derived from SequentialWorkflowActivity) custom activity, for holding service info such as Topic and Version. The workflow Name and ConfigurationName must be the same as the service name. In addition, there are dependency properties for the Request/Response Messages, for binding purposes in the scope of the Workflow. The last property is ContextForward, which is a configuration option for by-passing the context header, or its cleanup. Its default value is InOut.
Note that the above xoml resource represents the minimum (required) metadata for receiving a message in the Request/Response message exchange pattern. This metadata is generated automatically by the Design Tool, but in the case of drag&dropping xoml from another source directly into the XmlNotepad control, the root activity must be a WorkflowIntegrator activity.

OK, now we are ready for service mediation; in other words, to orchestrate a service using activities from the Workflow Toolbox.
Basically, the service mediation is intercepting and modifying messages within the service. A mediation enabled service allows decoupling a logical business model, in a loosely coupled manner, into business services. In this chapter I will focus on mediation of the service message in the Manageable Services, represented by System.ServiceModel.Channels.Message. Based on the MessageVersion, it is a composition of the business part (known as the payload or operation message) and a group of additional context information known as headers. The headers and payload are physically isolated at the transport level, which allows deserializing a network stream separately from the business part. In other words, the service message can be immediately inspected by its headers without deserializing (consuming) its payload. For instance, a service router (based on the context information located in the message headers) can re-route a message to the appropriate consumer of the business part without any knowledge of the business body.
Based on the above, we can see that the headers can be easily mediated by mediation primitives driven by well-known CLR types. The other part of the message, the payload, requires the usage of loosely coupled mediation primitives. These mediation primitives need to use technologies such as xpath and xslt.
The xpath mediation primitives use an XPath 1.0 expression to identify one or more fields in a message, for filtering or selecting based on their value. Let's look at the following XPathIfElse and XPathInspector custom activities (mediation primitives).
Notice that the default message contract for a Manageable Service is untyped, which means that the service will receive a raw message with serialized headers; the payload, however, is not consumed.

The XPathIfElse activity is a custom activity based on the features of the Microsoft IfElseActivity for CLR-type primitives. The difference is in the condition expression, where an xpath expression is used instead of a CLR-type expression. The parent activity, the XPathIfElse activity, is bound to the service message for its inspection by each branch activity, an XPathIfElseBranch activity. Each branch has its own xpath expression for evaluation against the service message. Note that each branch works with a copy of the service message. There is a MatchElement property in the XPathIfElseBranch activity to select a message element for the xpath expression, for performance reasons. Later you will see how we minimize the performance hit from deserializing the message body. Note that the XPathIfElse activity will not change the message contents; it is a passive mediation primitive.
The following picture is a screen snippet of the XPathIfElse custom activity that shows its property grid:
The MatchElement is an enum type with the following options:
public enum MatchElementType
{
    None,             // explicitly true or false
    Root,             // by xpath expression (message is deserialized)
    Header,           // by xpath expression (no deserialization)
    Body,             // by xpath expression (message is deserialized)
    Action,           // by value
    EnvelopeVersion,  // by value
    IsFault,          // by value
    IdentityClaimType // by value
}
For example, if the branch condition needs to be evaluated only for a specific Action, then selecting the MatchElement for Action and typing the actual value of the Action in the XPath property will minimize the performance hit in this mediation process.
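For a sense of what such a check involves at runtime, here is a rough sketch, not the article's actual implementation, of how an Action match avoids touching the payload while a Body match must work on a buffered copy; the Matches helper name is illustrative:

using System.ServiceModel.Channels;
using System.Xml.XPath;

static bool Matches(ref Message message, MatchElementType match, string xpath)
{
    switch(match)
    {
        case MatchElementType.Action:
            // Headers are already deserialized; the payload stays untouched.
            return message.Headers.Action == xpath;

        case MatchElementType.Body:
            // Reading the body consumes the message, so evaluate the
            // expression against a buffered copy and hand back a fresh one.
            MessageBuffer buffer = message.CreateBufferedCopy(int.MaxValue);
            message = buffer.CreateMessage();
            Message copy = buffer.CreateMessage();
            XPathDocument doc = new XPathDocument(copy.GetReaderAtBodyContents());
            return (bool)doc.CreateNavigator().Evaluate("boolean(" + xpath + ")");

        default:
            return false;
    }
}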
The XPathEditor can help find a specific xpath expression in the service message. This editor has built-in interactive validation while typing the xpath expression. Its concept is based on the known information about the message version and schema. The version in this article is limited to the MessageVersion and manually dropping the element on the XmlNotepad control. The full version allows selecting the schema from the Contract Model, exposing all types into the combo box and then inserting them into the message.

There is a tab for inserting or editing namespaces in the XPathEditor, for when the XPath expression needs to use extra namespaces. This collection of the namespaces is stored in the hidden XPathIfElseBranch.Namespaces property.
The following picture is a screen snippet of the XPathEditor. It can also be shown by double clicking on the XPathIfElseBranch activity:
The XPathInspector is a custom activity to inspect a field and/or value of the service message. There is an enum property to specify the scope of the inspection, to minimize the performance hit. The result of the Boolean expression can be bound to other activities, for instance an IfElseActivity.
To type an XPath expression, the XPathEditor can be used in the same way as described in the above section.

The design tool has the responsibility to create the xslt metadata for the runtime mediation primitive. This article only supports manual creation of this metadata; an XsltMapper editor to generate this xslt resource is not included.
The following picture shows a format of the xslt metadata:
As the above picture shows, there is a collection of mediators, each with a unique name (in this example, 'transform'). Each mediator has an option to include a collection of parms that allows passing the service parameters as string types. In this example, we have two parameters, prompt and id.
The TransformMessage is a custom activity built as an xslt mediation primitive for transforming the MessageInput into a specific version of the MessageOutput, based on the xslt mediator defined by its name.
The following picture shows its icon and property grid:
The TransformMessage activity can also be used in the message flow to extract some business parts, creating a new root element with MessageVersion.None for internal usage within the service. This scenario focuses on the performance issue of optimizing the number of required copies of the messages.
The following example shows usage of the TransformMessage and XPathIfElse activities to optimize a performance issue during the message mediation:
The above service mediation transforms a received message into a small complex element, based on an internal service schema, for simple and fast evaluation in the following activities with more branches. This solution consumes a copy of the incoming message in the TransformMessage activity, instead of consuming the message in each branch activity. When a branch has been selected, the original message is passed on for its business processing. In the above example, the message is sent to the specific service. Note that there is no MessageVersion for the output/input message between the mediators.

Let's look at another example of the TransformMessage activity. In this example, the branch V1000 transforms a received message contract for its specific target service, before and after the ProcessMessage activity.

As you can see, the TransformMessage is a very powerful activity. How fast correct xslt metadata can be produced depends on the Design Tool. The usage of some 3rd party tool is recommended for more complex transformations, for example Altova, to generate an xslt resource and then drop it on the XmlNotepad control.
The CreateMessage is a custom activity built as a mediation primitive for creating a message based on the xslt metadata and a MessageVersion. The following picture shows its screen snippet, including a property grid:
All properties of this activity have the same features as those of TransformMessage. As you can see, there is no MessageInput in the CreateMessage, because it is a creator, not a consumer, of the message.
The following is an example of using this custom activity for broadcasting an event message:
After processing the received message, the service generates a notification message created by the mediation primitive in the CreateMessage custom activity.
The CopyMessage is a custom activity that produces a copy of the message for future consumption. The following screen snippet shows its icon and properties:
The TraceWrite is a custom activity for diagnostic purposes. The message and formatted text are shown on the debug output device, for instance DebugView for Windows.
In this chapter I focused on service mediation behind the endpoint, in other words, after the message is received. This declarative mediation uses mediation primitives implemented as custom activities handled by the WF programming model.
WorkflowServices is an integrated model of the WCF and WF models, so there is also a capability to mediate a service at the endpoint layer (model). This mediation was described in detail in my previous article VirtualService for ESB; however, the following code snippet shows the feature:
<endpointBehaviors>
<behavior name='xpathAddressFilter'>
<filter xpath="/s12:Envelope/s12:Header/wsa10:To[contains( ... )]" />
</behavior>
</endpointBehaviors>
By injecting an address filter into the endpoint behavior pipeline, we can mediate a service at the front level, for example rejecting an incoming message, re-routing a message to another service, and so on. This low-level mediation is implemented by the mediation primitive ESB.FilteringEndpointBehaviorExtension using an XPath 1.0 expression. Before using this feature, the mediation primitive must be added to the extensions, as shown in the following part of the service config file:
<extensions>
<behaviorExtensions>
<add name='filter'
type='ESB.FilteringEndpointBehaviorExtension, ESB.Core,
Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'/>
</behaviorExtensions>
</extensions>
Note that this article does not include a visual designer for this kind of mediation; it must be created manually in the XmlNotepad control for the config metadata.
The Metadata Exchange endpoint (MEX) is a special endpoint contract in the WCF programming model for exporting the metadata used to describe service connectivity. This document is built at runtime, based on the ServiceEndpoint description for each typed contract. In other words, if the contract is untyped (Action = "*"), there is no way to build metadata for the generic contract on the fly.
For example, let's assume we have a versioning router service like the one shown earlier in the Service Mediation section. The consuming branch versions V1000 and V1001 are different, but their endpoint is the same. Therefore, the router endpoint cannot have a MEX feature; the request is forwarded to the actual service for processing. That's fine, but what if the target service endpoint is also untyped (a generic contract) for a mediation-enabled service?
Well, there is one solution: preparing this MEX document in the Contract Model and delivering it at runtime in the Application (Service) Model. In other words, the design time can create the MEX in the Contract-First fashion, store it in the Repository, and then deploy it to the runtime projector, in case a consumer needs to ask a physical endpoint for its MEX document.
The WSRT_Mex is a custom activity for responding to a request for a MEX document in the WS-Transfer fashion. The following screen snippet shows its icon and property grid:
Based on the Wsdl property, the MEX document can be obtained from the Repository or locally from the file system. When the service is hosted in IIS/WAS, this resource is located within the site together with other resources such as config, xoml, rules, xslt and svc.
Usage of the WSRT_Mex custom activity is straightforward. The following picture shows an example of a business service (Imaging), where MEX has its own branch for processing the WS-Transfer Get request:
The Operation evaluates the Action header against the following expression (the WS-Transfer Get action) in order to process the MEX operation:
System.ServiceModel.OperationContext.Current.IncomingMessageHeaders.Action ==
    "http://schemas.xmlsoap.org/ws/2004/09/transfer/Get"
Note that it is not required to use a MEX endpoint in the service configuration. From the incoming message point of view, the mex operation message is processed transparently, like any other action. It is the responsibility of the service mediation to provide this feature at the physical endpoint.
The other, preferred way to get the MEX document is to ask the Repository. The Repository knows all the details about the models in advance, before deployment, so the physical service does not need to be online (deployed and running) to obtain this information.
The following picture shows the Microsoft WCF Test Client utility dynamically creating a proxy that calls the Repository for a MEX document:
As you can see, the service Echo has been added before asking for it again. Access to the Repository service is configurable in the config file (in our example, net.pipe://localhost/repository/mex?).
One last comment about the Repository and MEX. Discovering a MEX document from the Repository, where the logically centralized business models are located, is very powerful and opens more challenges in the metadata/model-driven architecture. The recent PDC 2008 conference introduced Microsoft technologies for cloud computing and the Windows Azure Platform. What about having the Repository in the cloud to exchange an endpoint MEX and other service metadata models? Think about that, and try to figure out the answer for yourself.
As I mentioned earlier, the Application Model can be pushed or pulled to the runtime projector hosted on physical machines. The Design Tool has a built-in capability to deploy a specific Application version. This option is available for IIS/WAS hosting only.
The deployment process has two steps. In the first step, a deployment package (for the selected Application) is created and stored in the Repository. In the following example we can see the Application.Test (1.0.0.0) package:
Clicking the Create button creates a new package and stores it in the Repository. The content of the package (stored in the Repository) is shown in the user control; we can see the physically organized package as it will be hosted in the IIS/WAS site.
The next step is to install this package on the hosting server. Clicking the Install button opens a dialog that asks for some information, such as the name of the server, the site, and so on.
Clicking the Deploy button installs the package on the specified server in the physical path, and a virtual directory is created for this Application. Note that this article version has some limitations: only localhost and the Default Web Site are supported.
After successful deployment, the Manageable Services in the deployed Application are ready for use. Any change in the Application or Contract Model must be stored in the Repository, the resources refreshed, and the Application re-deployed.
This article includes an msi file for installing the Design Tool and Repository. This gives you a fast start-up with minimal installation steps, compared to building the solution from the source code package (compiling, installing the MMC snap-ins and Windows service, creating the database, etc.). Note that this article version supports installation of the Repository on your local machine only.
The process of the installation is divided into two steps.
Let's go through the following steps; note that you should Run as Administrator for this process:
After that, all components are installed in the folder C:\Program Files\RomanKiss\ManageableServices\ and are ready to manage metadata in the Repository. The first run will fail, however, because creating the Repository is not part of the msi installation. That process is invoked manually, based on the scripts located in the Repository subfolder. The major reason is to avoid an accidental automatic installation running without a backup of your Repository.
Therefore the next part of the installation is creating the Repository: run setup.bat from the folder C:\Program Files\RomanKiss\ManageableServices\Repository.
The setup/cleanup batch files use a predefined database name for the repository, LocalRepository, on the local SQLEXPRESS server. If you are planning to change it, that is the place to do so.
Oh, there is one more place: the Windows service config file needs to be changed as well. Alternatively, you can stop the LocalRepositoryService, type your Repository name in the start parameters text box, and start it again.
Note that the WF sql scripts are not part of this installation. The following connectionString is used in the config resource:
Data Source=localhost\sqlexpress;Initial Catalog=PersistenceStore;
Integrated Security=True;Pooling=False
That's all for the installation. Now you can find the Manageable Services icon on your desktop
and open it. Each MMC snap-in will prompt you with the ConnectionToLocalRepository dialog.
For the default installation, click the OK button, and then you are in the business of designing models for Manageable Services.
I recommend playing with the Contract and Application scope nodes, their actions, etc., just to see the features. Don't worry about making some mess in the Repository; it can very easily be cleaned up (using the cleanup.bat file) and recreated with the setup.bat file.
One more thing: if an MMC snap-in throws an exception, the MMC will exit. I apologize for that bug; just close the MMC, reopen a fresh one, and continue. All changes are stored in the Repository in a transactional manner (accepted on saving). Please keep that in mind before closing the MMC.
OK, now it is time to rock & roll with a simple example: a full round trip through designing, deploying and testing.
This is a simple example of an Echo service with the following contract:
[ServiceContract]
interface IService
{
[OperationContract]
string GetFullName(string firstName, string lastName);
}
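For context, a minimal implementation of this contract might look like the sketch below; this is mine for illustration, not part of the article's preloaded package:
[ServiceBehavior]
public class EchoService : IService
{
    public string GetFullName(string firstName, string lastName)
    {
        // Echo the two parts back as one string.
        return string.Concat(firstName, " ", lastName);
    }
}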
I am going to show you all the steps to manage this service model, including testing it with a virtual client represented by the Microsoft WCF Test Client program.
I assume that you have completed the installation as described in the chapter above. To simplify this example, the Repository has been preloaded with 3 simple Echo services: two versions of the Echo service and one service for versioning routing.
The example is divided into the following major steps (actions):
As I mentioned earlier, the Repository is a knowledge base of the Contracts, Services, Endpoints, etc. If this knowledge doesn't exist, we have to add it, either manually or by importing it from another place, such as an assembly, a wsdl endpoint, etc. In our example (for simplicity), the following steps demonstrate importing the contract from the assembly.
Select the scope node Contracts and right-click the action Import Contracts:
The above user control is designed for importing a contract from an assembly or a url endpoint. Check the FromAssembly box and click the Get button. You should see the above screen snippet in our snap-in.
Select the contract IService and then click the Import button. This action imports all metadata for the selected Contract into the Repository.
So, now our Repository has more knowledge; you can check it in the Schemas, Messages and Operations scope nodes, but not yet in Endpoints. That's the following step.
Select the scope node Endpoints, right-click the action Add New Endpoint, and populate the user control with the following values:
wsHttpContextNoneSecure
type some description
1.0.0.0
XmlSerializer
XmlFormat
GetDocumentByKey
GetFullName
After the above steps, your snap-in should look the same as shown in the following screen snippet:
Now, click the Export button. The following screen snippet will show up in your snap-in:
In this step we have an opportunity to preview what is going to be published from the Repository, such as the wsdl metadata. We can also see all the Generated Contract Types.
The next action is to populate the information for storing this metadata in the Repository under a unique key, such as Name and Topic. Type the Name: Echo and Topic: Test, check ExportMetadata and click the Save button. The result of this action is the following screen snippet:
Now, at this point, the Repository has its first metadata (Endpoint) to be published in the Contract-First manner; in other words, a client can ask the Repository for metadata (wsdl). As you can see in the above picture, this capability is built into the snap-in, so we can be the first client (tester) of our new metadata. Click the Run button and you should see how svcutil handles the IService contract for this virtual client. This is real connectivity to the Repository via its LocalRepositoryService service.
Of course, any client, for instance Visual Studio, can consume this metadata from the Repository. In our test, we will use a virtual client utility from the Microsoft SDK, the WCF Test Client; launch this program and Add Service from the Repository, as shown in the following picture:
After clicking the Run button, we should have a client for this contract. Note that there is a glitch in this client utility program (.NetFX 3.5 version); just ignore it and continue.
At this moment we have our client ready for the runtime test, but there is no physical deployment of our Test application with the Echo service(s) yet. That's the next step, so let's continue and do all the metadata plumbing in the Application Model. I hope your model is set up the same way as described in the first section above.
The goal of this step is to create an Application Model of the services, divided into domains based on business needs. For our Echo service (at this moment we are using only the initial version 1.0.0.0) we are going to create a business domain abc that will host our Echo service.
Selecting the scope node Domains and right-clicking the Add New Domain action gives the following screen snippet:
Populate the user control properties, such as Name: abc, Version: 1.0.0.0 and a description (optional), and select the service Echo version 1.0.0.0.
Clicking the Finish button creates the new domain in the Repository. As you will see later on, the domain can be modified based on requirements, a new version can be created, etc.
The next step is to create an Application, using the same philosophy as for the Domain. We need a logical Application to package our business domains and isolate them within the physical process.
The new Application requires populating some properties. In our example these are HostName: Test, ApplicationName: Test, MachineName: localhost, Description (optional), Version: 1.0.0.0, and a check for IIS/WAS Hosting. Then we select all the domains in this application; in this example you see only one domain, abc.
Clicking the Finish button stores the new Application in the Repository. We can come back later to modify it based on additional requirements, such as adding new Domain(s).
So far, we have one Contract model and one Application with the Echo Service model in the Repository. Now it's time to connect these models together; in other words, we need to assign a physical endpoint to the Application (specifically, to the Echo service).
In the Echo (1.0.0.0) scope node, select the ServiceEndpoints node, right-click and select the Add New Endpoint action. This user control pane gives you a choice of all endpoints for this service. In our example, we can see a VirtualAddress (since we decided to use IIS/WAS hosting). This is actually a filter for the Repository to populate all endpoints when you click the Name combo box. The following picture shows the result of that selection:
Clicking the Finish button updates the config resource in the Repository, which you can see when the config scope node is selected.
To finish this process we have to create a package of this Application for its deployment. Note that after the package is created, any changes in the models have no impact on it; in other words, if a model must be changed, the specific resource must be refreshed and the package recreated.
For the Push deployment option, the Application must create a deployment package. The snap-in has a built-in option for the IIS/WAS (Dublin) target only.
Selecting the Test 1.0.0.0 scope node and its CreatePackage action shows the following user control in the central pane. If the Application already has a package created and stored in the Repository, we can see its contents.
To create a package (or a fresh one), click the Create button. We can see all the details about the package: resources, assemblies and the structure of the web site for this application.
Now comes the magic step: deploying to the physical target. We worked very hard to get to this point, so let's make the last step for the Repository, which is installing the package.
Clicking the Install button opens the following dialog:
You can see some targeting properties; don't change them, as this article version doesn't support the full feature set. We can deploy our example to the localhost server and the Default Web Site only. Just click the Deploy button and our example is ready for a real test.
As I mentioned earlier, the service is tested using the Microsoft WCF Test Client utility. In this step, we have already set up our client for invoking the GetFullName service operation. The following screen snippet shows the result of invoking this operation:
Ok, let's continue with our simple example by adding a new version of the Echo Service into the Application. We will have two Echo services in the same domain without any change to the physical endpoint. This maps to real requirements in a manner transparent to both old and new clients. For this solution we have to add a special pre/post service known as a router service. Based on the message content, the router forwards the request/response to the appropriate target.
I described this scenario in detail in the service mediation example. So, first of all, we add two additional Echo services into the domain abc using the Modify Domain action:
Clicking the Finish button updates this change in the abc domain in the Repository; see the following screen snippet of the abc scope node:
As you can see in the above picture, all services have the same name. The topic of the router service is prefixed with the '@' character to indicate the router feature when creating a deployment package for IIS/WAS hosting.
Now we have to go to the Contract Model and add two new Endpoints: the Echo Endpoint and one based on the generic contract IGenericContract from ESB.Contracts.
After this "Repository learning process" we can see three (3) Echo endpoints:
Once we have those endpoints in the Contract Model, we can use them in the Application Model. Basically, we need to do the following tooling in the ServiceEndpoints section, adding the Echo1001 and EchoRouter endpoints:
Tooling endpoints in the ServiceEndpoints scope node is straightforward and well supported by the Contract Model. On the other hand, the ClientEndpoints sometimes require some manual tuning in the client section of the config resource.
Note that the names of the endpoints in the ClientEndpoints must correspond to the EndpointName value of the SendActivity, as in the sketch below.
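For illustration, a client endpoint entry in the config resource might look like the following; the name matches a hypothetical SendActivity.EndpointName, while the address, binding and contract values are assumptions:
<client>
  <endpoint name="EchoRouter"
            address="http://localhost/Test/EchoRouter.svc"
            binding="wsHttpContextBinding"
            contract="ESB.Contracts.IGenericContract" />
</client>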
OK, now we have made all the required changes in the Contract and Application models for our Echo versioning, so we have to create a new deployment package:
As you can see, the above picture shows the new structure of the Application Package. The first service is a router, a message forwarder for versions 1.0.0.0 and 1.0.0.1, located within the same (Test) web site.
Clicking the Install button puts this version on the target server, ready for testing. Because we didn't change the service contract, we can use the client we already created.
The result of consuming the Echo version 1.0.0.0 service:
The result of consuming the Echo version 1.0.0.1 service:
That's all for this example. I hope you get a picture of tooling Manageable Services in the Repository. The tooling process is only as smart as the tool's capabilities. This article's solution has some limitations; of course, the tool is open to adding more tooling actions, validations, etc., in an incremental manner, based on need.
The design and implementation of the Manageable Services tool is described in detail in my article Contract Model for Manageable Services. I will describe some interesting parts of the implementation, and I would like to give credit to the following 3rd party software that helped me save some implementation time and allowed me to focus more on the metadata models:
This project used many technologies, such as MMC 3.0, Linq to Sql, Linq to Xml, .NetFx 3.5, etc. Of course, without Reflector it would have been very hard to accomplish this task. It is very hard to show all the pieces and how they have been designed and implemented over the past year, actually from the time WorkflowServices was introduced in the .NetFx 3.5 technology.
However, here are a few code snippets:
public static void AddServiceToPackage(Package package, ServicePackage service)
{
    if (package == null)
        throw new ArgumentNullException("package");
    if (service == null)
        throw new ArgumentNullException("service");
    // Each PackageItem becomes one part (file) of the zip package.
    foreach (PackageItem item in service)
    {
        string filename =
            string.Concat(service.Path, string.IsNullOrEmpty(item.Name) ?
                service.Name : item.Name, ".", item.Extension);
        Uri partUri =
            PackUriHelper.CreatePartUri(new Uri(filename, UriKind.Relative));
        PackagePart packagePart =
            package.CreatePart(partUri,
                System.Net.Mime.MediaTypeNames.Text.Xml, CompressionOption.Maximum);
        // Write the resource body, if any, into the part's stream.
        if (string.IsNullOrEmpty(item.Body) == false)
        {
            using (StreamWriter sw = new StreamWriter(packagePart.GetStream()))
            {
                sw.Write(item.Body);
                sw.Flush();
            }
        }
    }
}
public static Package OpenOrCreatePackage(Stream stream)
{
    // A null stream means the package is created in memory.
    if (stream == null)
        stream = new MemoryStream();
    return Package.Open(stream, FileMode.OpenOrCreate, FileAccess.ReadWrite);
}
where the declaration of the ServicePackage is shown in the following code snippet:
public class PackageItem
{
public string Name { get; set; }
public string Extension { get; set; }
public string Body { get; set; }
}
public class ServicePackage : List<PackageItem>
{
public string Name { get; set; }
public string Path { get; set; }
}
The above code snippet is part of the section for creating a deployment package and storing it in the repository. The metadata from the Repository is collected in a ServicePackage for a specific path, then zipped and stored in the Repository Application Model for later installation.
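As a quick usage sketch (the resource names and bodies here are hypothetical, just to show how the two helpers above fit together; it assumes the System.IO and System.IO.Packaging namespaces are imported):
using (MemoryStream stream = new MemoryStream())
using (Package package = OpenOrCreatePackage(stream))
{
    ServicePackage service = new ServicePackage { Name = "Echo", Path = "/Test/" };
    service.Add(new PackageItem { Extension = "xoml", Body = "<xoml />" });
    service.Add(new PackageItem { Name = "web", Extension = "config", Body = "<configuration />" });
    AddServiceToPackage(package, service);
    // At this point the zipped stream can be stored in the Repository.
}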
Another code snippet shows part of the ApplicationVersionNode scope node and how it is loaded from the Repository using a Linq to Sql technique.
internal void Load()
{
this.Children.Clear();
// repository context
RepositoryDataContext repository =
((LocalRepositorySnapIn)this.SnapIn).Repository;
var application =
repository.Applications.FirstOrDefault(a=>a.id==new Guid(ApplicationId));
if (application != null && application.iisHosting)
{
this.Children.Add(new ApplicationConfigNode(new Guid(ApplicationId)));
}
using (TransactionScope tx = new TransactionScope())
{
var domains = from d in repository.GroupOfDomains
where d.applicationId == new Guid(this.ApplicationId)
orderby d.priority
select d;
foreach (var domain in domains)
{
var query =
repository.AppDomains.FirstOrDefault(e=>e.id==domain.domainId);
if (query != null)
{
DomainVersionNode node =
new DomainVersionNode(domain.domainId.ToString());
node.DisplayName =
string.Format("{0} ({1})", query.name, query.version);
this.Children.Add(node);
}
}
tx.Complete();
}
}
The complete source code of the Manageable Services is included in this article.
This article is the last part of a project that I started with VirtualService for ESB and followed up with Contract Model for Manageable Services a year ago. The Windows Azure Platform had not been introduced to the public at that time, so the first part of this project was written as a Virtual Service for an Enterprise Service Bus, in the concept of a "distributed BizTalk". A little after the first part of the project was published, I adjusted my vision of the logical connectivity from the ESB to the Manageable Services, which allows me to manage and mediate a business process behind the endpoint. I introduced two models, the Contract and Application Models, and the Contract Model for Manageable Services article described this model and its tooling support in detail.
Managing a contract in a virtual fashion, without having its physical endpoint, has been a major key of this project, and I think it was a good decision at the time; it conceptually matches the new version of the Managed Services Engine by Microsoft, released two months ago on CodePlex.
A logically centralized Service Model (Contract + Application) in the Repository enables us to manage services across multiple business models (enterprise applications), physically decentralized to the runtime projecting. With the upcoming Windows Azure Platform, Manageable Services face a new challenge in the Cloud, where they can be part of the .Net Service Bus party. In my opinion, this is a big step from the ESB to a cloud built-in infrastructure with the capability of a Service Bus.
The second "what next" vision is a big challenge for the Repository. The Repository represents a knowledge base of the connectivity and services, where this knowledge grows during the learning process (tooling), based on need. Pushing the Repository to the Cloud and connecting it to the .Net Service Bus, we can also think about exchanging metadata not only at the endpoint level (mex), but also at the Service Model level or higher.
To continue this vision, there is a big challenge in metadata-driven Manageable Services centrally stored in the Repository. As I mentioned earlier, the Repository learns through a manual process, using some tools at design time. Adding the capability to tune our models in the Repository at runtime would open a completely new challenge in event-driven architecture, where a logical model can be managed by the services themselves.
OK, I am going to stop dreaming of the Cloud and come back to the ground for the short-term "what next". The answer is the upcoming Microsoft technologies based on the WCF/WF 4.0 and xaml stack; that is the "what next" for the Manageable Services. This challenge will allow targeting a Manageable Service anywhere, including the Cloud. Of course, we have to wait for the release (or beta) version of .NetFx 4.0.
Another feature for Manageable Services is incrementally developing more tooling support, such as an XsltMapper Designer, XsltProbe, Model Validation, a Service Simulator and Animator, Management for Assemblies, RepositoryAdmin, etc.
Ok, as you can see, there are lots of challenges in this area for Manageable Services. Also, keep in mind that the Manageable Services project described in this article was implemented on the current .NetFx 3.5 version, with a focus on the upcoming model/metadata-driven architecture, and it is not a production version.
In conclusion, this article described tooling support for Manageable Services, logically centralized in the Repository and physically decentralized to the target server for their projection in the application domain. If you have been with me so far, you should now possess a good understanding of services-enabled applications driven by metadata. I hope you enjoyed it.
VirtualService for ESB
Contract Model for Manageable Services
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
<configSections>
<sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
<section name="ESSWFService.My.MySettings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
</sectionGroup>
<section name="WorkflowRuntime" type="System.Workflow.Runtime.Configuration.WorkflowRuntimeSection, System.Workflow.Runtime, Version=3.0.00000.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</configSections>
<WorkflowRuntime Name="WorkflowServiceHostRuntime" validateOnCreate="true" enablePerformanceCounters="true">
<CommonParameters>
<add name="ConnectionString" value="Data Source=PRG;Initial Catalog=WorkflowSQLTracking;Integrated Security=true"/>
</CommonParameters>
<Services>
<add type="System.Workflow.Runtime.Tracking.SqlTrackingService, System.Workflow.Runtime, Version=3.0.00000.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
</Services>
</WorkflowRuntime>
https://www.codeproject.com/Articles/36056/Manageable-Services
I have a problem with the recommendation link URL. This is what is inserted into an email message when you click the Email link at the end of each post. It is using the internal IP address of the server hosting the blog instead of the domain name exposed by the proxy server. How do I fix that?
Updated 9:08: Actually, I see it is incorrect in all links in the post. I had a look at postview.ascx, but I'm not sure how to fix it there.
It depends on what code you have in your PostView.ascx. Different themes may be using different code.
The Indigo theme that ships with BE has:
<a rel="nofollow" href="mailto:?subject=<%=Server.UrlEncode(Post.Title) %>&body=Thought you might like this: <%=Post.AbsoluteLink.ToString() %>">E-mail</a>
In this case, the URL to the post is coming from Post.AbsoluteLink. AbsoluteLink is going to be using the same domain name that is used throughout BE. If the email link in your PostView.ascx is using AbsoluteLink, this would mean that the internal IP address is most likely being used throughout your entire blog -- not just in this particular link.
What do you have in your PostView.ascx? And do you not see other links in the site that also have the internal IP address?
Yes, the internal IP address appears in all links in posts. How do I change that globally for the site?
I'm using the Indigo theme. It has the following postview.ascx code:
<%@ Control Language="C#" AutoEventWireup="true" EnableViewState="false" Inherits="BlogEngine.Core.Web.Controls.PostViewBase" %>
<div id="post<%=Index %>">
<h1><a href="<%=Post.RelativeLink %>"><%=Server.HtmlEncode(Post.Title) %></a></h1>
<div><img id="Img1" src="~/themes/indigo/img/timeicon.gif" runat="server" alt="clock" /> <%=Post.DateCreated.ToString("MMMM d, yyyy HH:mm")%> by <img id="Img2"
src="~/themes/indigo/img/author.gif" runat="server" alt="author" /> <a href="<%=VirtualPathUtility.ToAbsolute("~/") + "author/" + Post.Author %>.aspx"><%=Post.AuthorProfile != null
<div><asp:PlaceHolder</div>
<%=Rating %>
<br />
<div>
Tags: <%=TagLinks(", ") %><br />
Categories: <%=CategoryLinks(" | ") %><br />
Actions: <%=AdminLinks %>
<a rel="nofollow" href="mailto:?subject=<%=Server.UrlEncode(Post.Title) %>&body=Thought you might like this: <%=Post.AbsoluteLink.ToString()
<a rel="nofollow" href="<%=Server.UrlEncode(Post.AbsoluteLink.ToString())
%>&title=<%=Server.UrlEncode(Post.Title) %>">Kick it!</a> |
<a href="<%=Post.PermaLink %>" rel="bookmark">Permalink</a> |
<a rel="nofollow" href="<%=Post.RelativeLink %>#comment">
<img id="Img4" runat="server" alt="comment" src="~/themes/indigo/img/comments.gif" /><%=Resources.labels.comments %> (<%=Post.ApprovedComments.Count
%>)</a>
|
<a rel="nofollow" href="<%=CommentFeed %>"><asp:ImageComment RSS</a>
</div>
<br />
</div>
I guess because of some type of router setup, proxy or URL rewrite schema, the blog is seeing the incoming URL as the internal IP address -- rather than the URL you see in address bar.
The simplest solution is to modify the BE core. In there, there's a file named Utils.cs. Within that is a property named "AbsoluteWebRoot". This is the piece of code that returns the first part of the URL. You could hard-code your real domain name in there. If you did that, the new version of AbsoluteWebRoot would look like:
public static Uri AbsoluteWebRoot
{
    get
    {
        // "http://www.yourdomain.com/" is a placeholder; substitute your real domain.
        return new Uri("http://www.yourdomain.com/");
    }
}
.... and substitute your own domain name. After making this change, it would be necessary to recompile the BE core to produce a new BlogEngine.Core.dll file that goes into your BIN directory (overwriting the old one).
Well, this is turning out to be more convoluted than I expected. I decided that if I was going to be mucking about in the code, I'd just as well get the latest version. Mistake.
Upgrading from 1.5.0.7 to 1.6 was not straightforward, and ultimately I simply reinstalled. I revised the AbsoluteWebRoot property, although the code was different from what I expected. I revised it as follows:
public static Uri AbsoluteWebRoot
{
get
{
return new Uri("http://www.yourdomain.com/"); // placeholder; use your real domain
//if (_AbsoluteWebRoot == null)
//{
//HttpContext context = HttpContext.Current;
//if (context == null)
// throw new System.Net.WebException("The current HttpContext is null");
//if (context.Items["absoluteurl"] == null)
// context.Items["absoluteurl"] = new Uri(""); <-- in this contex, no worky
//return context.Items["absoluteurl"] as Uri;
//_AbsoluteWebRoot = new Uri(context.Request.Url.GetLeftPart(UriPartial.Authority) + RelativeWebRoot);// new Uri(context.Request.Url.Scheme + "://" + context.Request.Url.Authority + RelativeWebRoot);
//}
//return _AbsoluteWebRoot;
}
}
getting rid of all the checks, etc. and just blasting the URL in. When I rebuild the Visual Studio project, and drop the newly created .DLL in, it seems to work.
Now the absolute URL appears in the email message body. When I create a post, it uses that URL properly (apparently), and it uses the internal URL for comments, etc.
In reinstalling, however, I've somehow gone awry, because when I try to take any action in the right widget bar (using the Indigo theme) after logging in -- for instance, in About the Author, Edit -- nothing whatsoever happens, except that the Status Bar in the browser window displays "Error on page." What might be the fix here?
The "error on page" message is a JavaScript error. If you have Firefox available, then pull up the blog in that, and check the Error Console (Tools -> Error Console). It will usually give more detailed information on the error, compared
to what Internet Explorer gives.
Yes, I know it's Javascript error.
Actually IE 8 has a Developer Tools window (on the Tools menu or press F12), that has several views into the live page, and allows you to run a debugger against the live page.
When I click Start Debugging on the Tools Window, and click Edit on the blog window, the debugger breaks on the method being called, flags Blogengine.widgetAdmin.editWidget, and says that "Blogengine is undefined."
So I guess when I reinstalled, somehow I didn't get the configuration right in IIS, and it doesn't know about the application. Maybe because V. 1.6 installs under "BlogEngine.Web" and I used that. More monkeying around, looks like.
I'm not sure if that's a typo, or the actual error message ... but a JS namespace for BE does exist, however it's BlogEngine -- not Blogengine (the casing is different).
When logged into the blog, the widget.js file in the admin folder should get loaded. This contains the "BlogEngine.widgetAdmin" object.
Typo.
Ok. I also realized that BlogEngine.widgetAdmin is new in BE 1.6. It wasn't there in BE 1.5. So if you're still working on this, it sounds like (as you mentioned), you still have some BE 1.6 files in your blog.
You write that it sounds like I "still have some BE 1.6 files in your blog". I think you meant to write that I still have some BE 1.5 files in my blog. I don't see how that can be because this is an entirely new installation of BE 1.6. Not an update/upgrade. (Update didn't work.)
I think IIS is misconfigured somehow. Not sure how that can be, because it does the right thing with the hard-coded URL, but not with the edit function. I would expect an all or none situation.
Anyway, still poking around.
OK, I found the error. I assumed -- incorrectly, as it turns out -- that I could download the zipped-up distribution marked (Web) for BE 1.6, unzip it into the web location and all would be good.
Only partly true: That distribution apparently doesn't include any .js files. (At least that's what search for *.js in all the directories revealed.) It was no wonder that I couldn't edit the widgets with methods in Javascript: There was no Javascript.
When I take the source code distribution that I altered to deal with the URL issue, and publish it to the desired location in IIS, it all suddenly works.
That does, however, seem confusing to me. Maybe there's something I'm not understanding about the distribution labeled (Web).
Arrrgh! Except that now I can't edit things unless I do it locally. If I log in from the Internet, the Move/Edit/X links don't even appear.
Ooops. The last bit is pilot error. In the last of several reinstalls of BE 1.6, I forgot to make myself an administrator, and must not have tried editing as myself locally, either.
https://blogengine.codeplex.com/discussions/201516
foulglory + 9 comments
If you're getting timeouts:
The whole thing can actually be solved in one loop. Initialize sumtotal to 0. Run the loop m times (or while input exists, whichever you choose), taking the input one by one. Each input is of the form:
a, b, candy
Do sumtotal = sumtotal + (b - a + 1) * candy for each input, then
average = sumtotal / n
dejava + 1 comment
Correct! No array of jars needed. No looping of filling needed. Test cases have huge numbers.
RSTHW + 2 comments
I did that, but it only works on tests 1-4 :( I'm using Java. I don't know what is wrong.
RSTHW + 3 comments
I already did it, my problem was that I was using nextInt instead of nextLong. lol
edumor + 1 comment
You don't need to use BigInteger: the maximum value for this challenge according to the constraints is 1.00E+18, while a long can hold a number up to 9.22E+18, more than nine times what you need.
Note that an int will only give the correct answer for sums that don't go over 2^31 - 1.
DevikaShanbhag + 0 comments
According to the input constraints, input can be an integer so nextInt() should be fine.
Just typecast appropriately to (long) where an operation might result in an overflow.
eg: (long)(b - a + 1) * k
mayurnagdev123 + 0 comments
In the problem it is specified that a and b will be in the 10^7 range. However, the test cases do not follow this and have enormous values that exceed this range. That's why using 'int' instead of 'long' fails most of the test cases. The problem needs to be updated.
Mridul20rawat + 0 comments
Look dude, the only thing required is knowing how many jars are to be filled and multiplying that by the given capacity. Then keep adding the result for as many operations as given. At last, find the average.
Hope it helps.
while (o > 0)
{
    scanf("%ld%ld%ld", &st, &en, &cap);
    res = res + ((en - st) + 1) * cap;
    o--;
}
fin = floor(res / n);
john_canessa + 0 comments
Good suggestion. Tried the array. Timeouts. Removed array and switched to BigInteger. Thanks.
arnav_kumar903 + 0 comments
Thanks dude, you're awesome. Was struggling for the past 3 days.
#include <cmath>
#include <cstdio>
#include <vector>
#include <iostream>
#include <algorithm>
using namespace std;

int main() {
    unsigned long long int n, m;
    cin >> n >> m;
    unsigned long long int jar[n];
    unsigned long long int sum = 0;
    while (m--) {
        unsigned long long int a, b, k;
        cin >> a >> b >> k;
        sum += (b - a + 1) * k;
    }
    unsigned long long int avg = sum / n;
    cout << avg << endl;
    return 0;
}
sarathy_v_krish1 + 0 comments
C++ solution :
long long solve(int n, vector<vector<long>> operations) {
    long long sum = 0;
    for (int i = 0; i < operations.size(); i++)
        sum += operations[i][2] * (operations[i][1] - operations[i][0] + 1);
    sum /= n;
    return sum;
}
j_singh_logan + 1 comment
me too, did you use a moving average formula?
robertdyke + 0 comments
Make sure that you change the return value of your function from int to long long. That was my problem.
Masters_Abh + 0 comments
suv_codemode + 2 comments
Hi guys, my code is passing the first 3, failing the next 3 and passing the rest of them. Is there anything special about test cases #4, #5, #6? It keeps saying I have the wrong answer; the rest of the test cases pass, though.
anmoluppal + 4 comments
I don't think there is anything special in those test cases. However, you can always download the test cases at the cost of 5 hackos. Try finding it by yourself, and have a look at the given constraints in the problem statement; it may be helpful... :)
leopragi + 2 comments
yep, me too, having the same problem... I downloaded the testcase and ran it as custom input... it shows me input cannot exceed 50 kb
vatsalchanana + 0 comments
You can run it on your own machine. You cannot run custom tests for inputs with size > 50KB on the site.
hackboy21121996 + 0 comments
import java.io.*;
import java.math.*;
import java.text.*;
import java.util.*;
import java.util.regex.*;

public class Solution {

    // Complete the solve function below.
    static BigInteger solve(int n, int[][] operations) {
        long arr[] = new long[n];
        BigInteger temp = new BigInteger("0");
        BigInteger sum = new BigInteger("0");
        for (int i = 0; i < operations.length; i++) {
            long a = operations[i][0] - 1;
            long b = operations[i][1] - 1;
            long c = operations[i][2];
            long num = b - a + 1;
            temp = BigInteger.valueOf(num);
            temp = temp.multiply(BigInteger.valueOf(c));
            sum = sum.add(temp);
        }
        /* for(int i=0;i<arr.length;i++) sum+=arr[i]; */
        BigInteger avg = sum.divide(BigInteger.valueOf(n));
        return avg;
    }

    private static final Scanner scanner = new Scanner(System.in);

    public static void main(String[] args) throws IOException {
        BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(System.getenv("OUTPUT_PATH")));
        String[] nm = scanner.nextLine().split(" ");
        int n = Integer.parseInt(nm[0]);
        int m = Integer.parseInt(nm[1]);
        int[][] operations = new int[m][3];
        for (int operationsRowItr = 0; operationsRowItr < m; operationsRowItr++) {
            String[] operationsRowItems = scanner.nextLine().split(" ");
            scanner.skip("(\r\n|[\n\r\u2028\u2029\u0085])?");
            for (int operationsColumnItr = 0; operationsColumnItr < 3; operationsColumnItr++) {
                int operationsItem = Integer.parseInt(operationsRowItems[operationsColumnItr]);
                operations[operationsRowItr][operationsColumnItr] = operationsItem;
            }
        }
        BigInteger result = solve(n, operations);
        bufferedWriter.write(String.valueOf(result));
        bufferedWriter.newLine();
        bufferedWriter.close();
        scanner.close();
    }
}
// Just changed the return type to BigInteger and all test cases passed.
qwrtyuiuytres + 0 comments
#include <bits/stdc++.h>
using namespace std;

int main() {
    long n, m, a, b, c;
    cin >> n >> m;
    long int count = 0;
    for (int i = 0; i < m; i++) {
        cin >> a >> b >> c;
        count += (b - a + 1) * c;
    }
    cout << count / n;
    return 0;
}
iamlazycoder + 0 comments
#include <iostream>
using namespace std;

int main() {
    long n, q, sum = 0;
    cin >> n >> q;
    for (int i = 0; i < q; i++) {
        long a, b, c;
        cin >> a >> b >> c;
        sum += (b - a + 1) * c;
    }
    cout << sum / n;
    return 0;
}
TheCodeHere + 0 comments
Here's my code in C++. there's no need to use arrays. I hope you find it useful.
int main() {
    int n, m;
    cin >> n >> m;
    int a, b;
    long long k, result = 0;
    while (m--) {
        cin >> a >> b >> k;
        result += (b - a + 1) * k;
    }
    cout << result / n << endl;
    return 0;
}
delamath + 1 comment
Crazy 2-liner in Python 3. :D
n, m = map(int, input().split())
print(sum(map(lambda x, y, t: t * (y - x + 1), *zip(*(map(int, input().split()) for _ in range(m))))) // n)
brianmvance + 0 comments
N, M = map(int, input().split())
candies = 0
for _ in range(M):
    start, end, candies_per = map(int, input().split())
    candies += (end - start + 1) * candies_per
print(candies // N)
https://www.hackerrank.com/challenges/filling-jars/forum
Arduino + OpenCV project: tracking a ball
Hello friends, I am carrying out a project based on tracking a ball inside an acrylic tube. At the bottom of the tube there is a motor that moves a ping-pong ball; the goal is, after the user sets the setpoint through the camera, to control the displacement of the ball in the tube until it reaches the setpoint. The code I will share with you is fully functional; however, I face some problems. The ROI is drawn based on the max and min, and the color of the ball is based on HoughCircles, but sometimes the inner and outer contour of the ball disappears, which causes me enough problems. If I can solve these problems, I can calculate the distance between the ball and the point defined by the user. Once I get the distance without problems, it is possible to regulate the motor speed to reach a value close to the setpoint value.
code:
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include "Tserial.h"

/*#NAMESPACE*/
using namespace cv;
using namespace std;

int cntr = 0; /*VAR INCREMENT COUNT*/

/*DEFINE*/
#define OUTPUT_WINDOW_NAME "PID CONTROLLER"        /*DEFINE WINDOW NAME*/
#define USB_PORT_SERIAL "/dev/cu.usbmodemM4321001" /*DEFINE SERIAL PORT*/

/*FUNCTION*/
void onMouse(int event, int x, int y, int flags, void *param); /*BUTTON COORDINATE EVENT*/
void ball_tracker(const cv::Mat& frame);                       /*ORANGE BALL TRACKING*/
void regionOfInterest(const cv::Mat& frame);

/*VALUES TO SET MAX AND MIN HSV TRACK BALL*/
int lowH = 10;   /*SET LOW BALL HUE*/
int highH = 25;  /*SET HIGH BALL HUE*/
int lowS = 100;  /*SET LOW BALL SATURATION*/
int highS = 255; /*SET HIGH BALL SATURATION*/
int lowV = 20;   /*SET LOW BALL VALUE*/
int highV = 255; /*SET HIGH BALL VALUE*/

/*VALUES TO CREATE AUTOMATIC ROI*/
int hueValue = 65;
int hueRange = 1;
int minSaturation = 20;
int minValue = 0;

/*SERIAL TO ARDUINO GLOBAL DECLARATIONS*/
int arduino_command;
Tserial *LAUNCHPAD_PORT;
short MSBLSB = 0;
unsigned char MSB = 0;
unsigned char LSB = 0;
/*END SERIAL TO ARDUINO GLOBAL DECLARATIONS*/

int main(int argc, char** argv)
{
    //CvCapture *CAM_CAP;
    VideoCapture cap(0);
    Mat cameraFrame;
    LAUNCHPAD_PORT = new Tserial();
    if (LAUNCHPAD_PORT != 0)
    {
        LAUNCHPAD_PORT->connect(USB_PORT_SERIAL, 57600, spNONE);
    }
    Point point_marker(-10, -10); /*START POINTER OF THE SETPOINT*/
    int retorno = 0;
    /*VERIFY IF EXISTS ANY CAMERA*/
    if (!cap.isOpened())
    {
        cout << "Failed to capture from camera" << endl;
        retorno = 1;
    }
    cout << "Camera opened successfully" << endl;
    /*DEFINE IMAGE SIZE AND NAME*/
    namedWindow(OUTPUT_WINDOW_NAME, CV_WINDOW_AUTOSIZE);
    while (true)
    {
        // if((cameraFrame = cap(0))){
        //IplImage * ipl = cameraFrame;
        //cv::Mat matrix = cv::cvarrToMat(ipl);
        cap >> cameraFrame;
        Mat matrix = cameraFrame;
        /*CALL BALL TRACKING FUNCTION*/
        ball_tracker(matrix);
        /*CALL REGION OF INTEREST FUNCTION*/
        regionOfInterest(matrix);
        /*SHOW MOUSE EVENT SETPOINT*/
        if (point_marker.x != -1)
        {
            drawMarker(matrix, point_marker, Scalar(0, 0, 255), MARKER_CROSS, 10, 1);
        }
        /*MOUSE CALLBACK FUNCTION*/
        cvSetMouseCallback(OUTPUT_WINDOW_NAME, onMouse, (void*) (&point_marker));
        /*OPEN IMAGE AND SHOW WINDOW*/
        imshow(OUTPUT_WINDOW_NAME, cameraFrame);
        //}
        if (cvWaitKey(60) != -1)
        {
            cout << "Camera disable successfully" << endl;
            break;
        }
    }
    cout << "INPUT" << endl;
    //cvReleaseCapture(&CAM_CAP);
    /*SERIAL LAUNCHPAD SHUTDOWN*/
    LAUNCHPAD_PORT->disconnect();
    delete LAUNCHPAD_PORT;
    LAUNCHPAD_PORT = 0;
    /*END SERIAL LAUNCHPAD SHUTDOWN*/
    cvDestroyWindow(OUTPUT_WINDOW_NAME);
    return retorno;
}
/*MOUSE EVENT ...
please do NOT use opencv's deprecated C-api for anything. it has not been maintained since 2010, and might be simply gone in the next version.
also, throwing a wall of unformatted code at us has zero chance of getting help.
thank you for your advice, but I am a beginner and I have basic knowledge of opencv; if you can help me solve these problems I promise I will try to learn and update all my code.
if you want ppl here to try your code (and reproduce the problems), try to reduce it to a minimal example.
i know, that's hard, but it will also help you identify the src of your problem.
I changed it.
in your ball tracking:
inRange(HSV_IMAGE, cv::Scalar(lowH, lowS, lowV), cv::Scalar(highH, highS, highV), TRESH_IMAGE);
your HSV_IMAGE is actually grayscale, single channel (why?), so only lowH and highH are used, and that's a very narrow band [10..25]. your values would have made sense for hsv (tracking orange), not so much for grayscale intensity.
Yes, I know the image is grayscale. The problem is that if I write the code as "cvtColor( frame, HSV_IMAGE, CV_RGB2HSV);" my whole program breaks. I wanted to get a clean tracking, so that the rest of the project had no errors.
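To make the suggestion above concrete, a minimal sketch of the HSV path follows. Note that OpenCV captures frames as BGR, so the conversion should be COLOR_BGR2HSV (CV_BGR2HSV in the old constants) rather than RGB2HSV, which may be one reason the program broke:
cv::Mat HSV_IMAGE, TRESH_IMAGE;
cv::cvtColor(frame, HSV_IMAGE, cv::COLOR_BGR2HSV);   // 3-channel HSV, not grayscale
cv::inRange(HSV_IMAGE,
            cv::Scalar(lowH, lowS, lowV),            // lower bound for the orange ball
            cv::Scalar(highH, highS, highV),         // upper bound
            TRESH_IMAGE);                            // single-channel binary mask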
https://answers.opencv.org/question/193147/arduino-opencv-project-tracker-ball/?comment=193458
So I'm quite new to Unity and I've decided to start working on a 2D RPG. Using sources from this site, I've managed to get code that gives me grid movement much like that in the older Pokemon games. Now I want to introduce collisions and physics to my game, but after attaching the Rigidbody component and moving around, I realize my movement has become very clumsy and, well, not nice to look at. But the real problem is that whenever I walk into an object with a collider (say a tree or something like that), instead of simply refusing to go further, the player bumps into the object, gets pushed back, and repeats, and I get stuck in this state. Here's my player controller script; I'd appreciate any pointers, tips, or an entire resolution to my problem. THANKS :D
public class AnnaController : MonoBehaviour {
private Rigidbody2D rgdbdy;
public Animator anmtr;
private static bool playerExists;
private float mSpeed;
public bool overworld;
public Vector2 forwardVector;
public Vector2 mvmntVector;
private Vector3 pos;
private bool hasStepped;
private bool isWalking;
private bool Up;
private bool Down;
private bool Left;
private bool Right;
// Use this for initialization
void Start () {
anmtr = GetComponent<Animator> ();
rgdbdy = GetComponent<Rigidbody2D> ();
mSpeed = 0.75f;
pos = transform.position;
forwardVector = Vector2.zero;
if (overworld) {
if (!playerExists) {
playerExists = true;
DontDestroyOnLoad (transform.gameObject);
} else {
Destroy (gameObject);
}
}
}
// Update is called once per frame
void FixedUpdate () {
anmtr.SetFloat ("ForwardX", forwardVector.x);
anmtr.SetFloat ("ForwardY", forwardVector.y);
Right = false;
Left = false;
Down = false;
Up = false;
if (Input.GetKey (KeyCode.B)) {
mSpeed = 1.25f;
}
else {
mSpeed = 0.5f;
}
if (!Right) {
if (Input.GetAxisRaw ("Horizontal") > 0.5f && transform.position == pos) {
pos += (Vector3.right * 0.32f);
forwardVector = Vector2.right;
}
}
if (!Up) {
if (Input.GetAxisRaw ("Vertical") > 0.5f && transform.position == pos) {
pos += (Vector3.up * 0.32f);
forwardVector = Vector2.up;
}
}
if (!Left) {
if (Input.GetAxisRaw ("Horizontal") < -0.5f && transform.position == pos) {
pos += new Vector3(-0.32f, 0, 0);
forwardVector = Vector2.left;
}
}
if (!Down) {
if (Input.GetAxisRaw ("Vertical") < -0.5f && transform.position == pos) {
pos += new Vector3(0, -0.32f, 0);
forwardVector = Vector2.down;
}
}
transform.position = Vector2.MoveTowards (transform.position, pos, Time.deltaTime * mSpeed);
}
}
Answer by lunoland · Aug 15, 2016 at 09:59 PM
For a classic, snappy, 2D feel, you probably don't want to use the built-in, realistic physics.
If the objects your player will be colliding against do not move, you can make them all static colliders (objects that only have a collider component, and not a rigidbody) and then move the player using Rigidbody2D.MovePosition in Update.
If you've got other moving colliders, you'll have to set the isKinematic flag on their Rigidbody2D components (the player object included) and write your own collision checking.
This involves using the Physics2D overlap or raycast functions before you move the player to a new position, to test for any colliders at the new position; you then only move the player to the position if the player's collider will fit. A minimal sketch of that pattern is shown below.
See the Unity manual for more info on isKinematic and colliders/physics.
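Here is a minimal sketch of that overlap check for grid movement, assuming a kinematic Rigidbody2D and the 0.32-unit cell size from the script above (the class and field names are mine, for illustration):
using UnityEngine;

public class GridMover : MonoBehaviour {
    public Rigidbody2D rb;                             // kinematic rigidbody on the player
    public Vector2 boxSize = new Vector2(0.3f, 0.3f);  // slightly smaller than one cell

    // Try to step one grid cell in the given direction; do nothing if blocked.
    public void TryStep(Vector2 direction) {
        Vector2 target = rb.position + direction * 0.32f;
        // Only move if no collider occupies the destination cell.
        if (Physics2D.OverlapBox(target, boxSize, 0f) == null)
            rb.MovePosition(target);
    }
}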
Okay, so this has made things a slight bit better, but I'm still getting problems. The objects my player is supposed to be colliding against are already static colliders. I ticked the isKinematic flag on the player's Rigidbody and I changed
transform.position = Vector2.MoveTowards (transform.position, pos, Time.deltaTime * mSpeed);
to
Rigidbody2D.MovePosition (Vector2.MoveTowards (transform.position, pos, Time.deltaTime * mSpeed));
The movement became smooth, but the player ceased to collide with anything. Then I turned off isKinematic, and now up and down movement works fine, but once I go left or right the movement stops working. This also occurs when I collide with something, no matter what input I had made before.
Ah yeah, I addressed that in my answer. Using isKinematic requires you to do your own collision checking; it assumes you're going to be controlling the objects explicitly in script. If you move an object somewhere by changing transform.position or MovePosition + isKinematic, Unity just trusts that you will be putting it in the exact place you wanted it (ignoring colliders, physics, etc.).
Definitely check out that link to the manual I posted. I know it's boring, but I would start by reading the manual and watching some of Unity's tutorials on each part of the engine you're currently working with before you start coding. You'll be less reliant on people on the forums writing your code for you, so you'll learn more that way.
I'm not sure why changing isKinematic would affect left or right movement, but your movement code looks really unclear and seems to be doing a lot of unnecessary work. If you only want the player to be able to move in 4 directions, start with something like this (note that I wrote this at work and did not test it):
float moveSpeed = 0.5f;
Rigidbody2D rigidbody;

void Awake() {
    rigidbody = GetComponent<Rigidbody2D>();
}

void Update() {
    float runSpeedMultiplier = 1f;
    if (Input.GetKey(KeyCode.B))
        runSpeedMultiplier = 2.5f; // This way your run speed will be relative to the base movement speed.

    int horizontal = Mathf.RoundToInt(Input.GetAxisRaw("Horizontal")); // Rounding to an integer means it will be -1, 0, or 1.
    int vertical = Mathf.RoundToInt(Input.GetAxisRaw("Vertical"));

    Vector2 direction;
    if (horizontal == 0 && vertical == 0)
        return; // No movement, so we're done
    else if (horizontal > 0)
        direction = Vector2.right;
    else if (horizontal < 0)
        direction = Vector2.left;
    else if (vertical > 0)
        direction = Vector2.up;
    else
        direction = Vector2.down;

    Vector2 newPosition = (Vector2)transform.position + (direction * moveSpeed * runSpeedMultiplier * Time.deltaTime);
    rigidbody.MovePosition(newPosition);
}
https://answers.unity.com/questions/1229445/how-do-i-stop-rigidbody2d-bounce.html
Created on 2016-04-22 11:40 by StyXman, last changed 2019-06-05 05:26 by giampaolo.rodola.
copy?
Thanks, looks interesting.
We usually wait until syscalls are generally available in common distros and have bindings in glibc. It makes it easier to test the feature.
> We usually wait until syscalls are generally available in common distros and have bindings in glibc. It makes it easier to test the feature.
Usually, yeah. os.urandom() uses syscall() to use the new getrandom() of Linux since it's still not exposed in the GNU libc...
Status: NEW
Tangentially related: Issue 25156, about using sendfile() to copy files in shutil.
Debian Sid, arch Linux and Fedora FC24 (alpha) already have linux-4.5, others would certainly follow soon. Meanwhile, I could start developing the patch and we could review it when you think it's appropriate.
Kernel support ok, but what about the libc support? Do you know if it is
planned? Or do you want to use the syscall() low-level API. If we take the
syscall() path, I suggest to make the function private in the os module.
Wait until the API is standardized in the libc. Sometimes the libc changes
minor things, it's not always a thin wrapper to the syscall.
Already there:
That's the manual page of the Linux kernel, not the glibc. It doesn't
mean that the glibc implemented it.
Then I don't know. My only worry is whether having the method local to os would allow shutils.copy() to use it or not.
I will start by looking at other similar functions there to see how to do it. urandom() looks like a good starting point.
If we add a new private function to the os module (ex:
os._copy_file_range), it can be used in shutil if available. But it
would be a temporary solution until we declare the API stable. Maybe
it's ok to add the API as public today.
Ok, I have a preliminary version of the patch. It has several parts:
* Adding the functionality to the os module, with docstring.
* Make shutil.copyfileobj() to use it if available.
* Modify the docs (this has to be done by hand, right?).
* Modify NEWS and ACKS.
Several points:
* For the time being, flags must be 0, so I was not sure whether put the argument or not. Just in case, I put it.
* I'm not sure how to test for availability, so configure defines HAVE_COPY_FILE_RANGE.
* No tests yet.
Talking about tests, I tried copying a 325 MiB file on an SSD (f2fs). Here are the times:
Old user space copy:
$ time ./python -m timeit -n 10 -s 'import shutil' 'a = open ("a.mp4", "rb"); b = open ("b.mp4", "wb+"); shutil.copyfileobj (a, b, 16*1024*1024)'
10 loops, best of 3: 259 msec per loop
real 0m7.915s
user 0m0.104s
sys 0m7.792s
New copy_file_range:
$ time ./python -m timeit -n 10 -s 'import shutil' 'a = open ("a.mp4", "rb"); b = open ("b.mp4", "wb+"); shutil.copyfileobj (a, b, 16*1024*1024)'
10 loops, best of 3: 193 msec per loop
real 0m5.926s
user 0m0.080s
sys 0m5.836s
Some 20% improvement, but notice that the buffer size is 1024 times Python's default size (16MiB vs. 16KiB).
One difference in semantics that I noticed: if the file is not opened in binary mode but its contents are binary, you get no UnicodeDecodeError (because the data never reaches userspace).
Let me know what you think.
Version without the NEWS and ACKS change.
Hmm, I just noticed that it doesn't fallback to normal copy if the arguments are not valid (EBADF, EXDEV). Back to the drawing board...
New version. Factoring the old method in a nested function also paves the way to implement .
Yes, the RST documentation has to be done by hand. It usually has more detail than the doc strings.
I didn’t see any changes to the configure script in your patches. Did you make that change to define HAVE_COPY_FILE_RANGE yet?
In /Modules/posixmodule.c (all three of your patches have an indented diff header, so the review doesn’t pick it up):
+#ifdef HAVE_COPY_FILE_RANGE
+/* The name says posix but currently it's Linux only */
What name are you referring to? The file posixmodule? I think the file name is a bit misleading; according to the comment at the top, it is also used on Windows.
+return (!async_err) ? posix_error() : NULL;
This would make more sense with the logic swapped around: async_err? NULL : posix_error()
Regarding copyfileobj(), I think we should continue to support file objects that do not directly wrap FileIO (includes your Unicode transcoding case). See the points given in Issue 25063. Perhaps we could keep the new version as a high-level copy_file_range() wrapper, but retain brute_force_copy() under the original copyfileobj() name. A bit like the socket.sendfile() method vs os.sendfile().
> I didn’t see any changes to the configure script in your patches. Did you make that change to define HAVE_COPY_FILE_RANGE yet?
I'm not really sure how to make the test for configure.ac. Other functions are checked differently (availability of header files), but in this case it would need a compile test. I will have to investigate further.
> In /Modules/posixmodule.c (all three of your patches have an indented diff header, so the review doesn’t pick it up):
indented diff header?
> +#ifdef HAVE_COPY_FILE_RANGE
> +/* The name says posix but currently it's Linux only */
>
> What name are you referring to?
Posix, The function is not Posix at all. I can remove that comment.
> +return (!async_err) ? posix_error() : NULL;
>
> This would make more sense with the logic swapped around: async_err? NULL : posix_error()
I have to be honest, I just copied it from posix_sendfile(), but I agree.
I'll answer the last paragraph when I finished understanding it, but I think you mean things like zipFile.
Updated the patch with most of Martin Panter's and all of vadmium's comments.
FYI Martin Panter and vadmium are both me :)
I’m not a big fan or expert on configure.ac and Autoconf, but I guess in this case it is the simplest solution. Maybe look at some of the existing AC_CHECK_DECL and AC_COMPILE_IFELSE invocations. I guess you need to see if __NR_copy_file_range is available.
In the earlier patches, there were four space characters at the start of the file. In the 2016-04-27 09:16 patch, there is a completely empty line (after +#endif /* HAVE_COPY_FILE_RANGE */), which may also be interfering.
FWIW, I don’t think you have to have the posix_ prefix on your function if you don’t want it. It is a static function, so the naming is fairly unrestricted.
About changing copyfileobj(), here are some test cases which may help explain the compatibility problems:
# Transcoding a file to a different character encoding
with open("input.txt", "rt", encoding="latin-1") as input, \
open("utf8.txt", "wt", encoding="utf-8") as output:
shutil.copyfileobj(input, output)
# Input is a BufferedReader and may hold extra buffered data
with open("data", "rb") as input, open("output", "wb") as output:
header = input.read(100) # Actually reads more bytes from OS
process_header(header)
copyfileobj(input, output) # Copy starting from offset 100
> About.
I'll do the copyfile() part once I'm convinced it doesn't break anything else.
I managed to modify the configure.ac file so it includes the proper test. After I ran autoconf it generated the proper configure script, but I also needed to run autoheader (both run by make autoconf). This last command modified a generated file that's in the repo, so I'm adding its diff too, but I'm not sure if that's ok.
Yes, having a high-level version of copy_file_range() that falls back to copyfileobj() should be okay. I’m not sure if it should be a public API of shutil, or just an internal detail.
I am wondering if it would be nice to rearrange the os.copy_file_range() signature and make more parameters optional, or is that getting too high level?
copy_file_range(in, out, count, offset_in=None, offset_out=None, flags=0)
copy_file_range(f1, f2, size) # Try to copy a whole file
copy_file_range(f1, f2, 30, 100, 200) # Try 30 bytes at given offsets
Also left some more review comments. Also, we should eventually add test case(s) for the new functionality, and an entry to Doc/whatsnew/3.6.rst.
> Yes,.
I'.
For the generated files, it doesn't matter much either way for review (I can just ignore them), as long as the eventual committer remembers to regenerate them. (Personally I'd prefer not to keep these files in the repository, but that's a different can of worms :)
Sorry for the delay.
Based on suggestions in the mailing list, I changed the *count* handling as if it were a ssize_t, and I added a note about ignored output parameters.
There’s still something funny about your patches: the last one has a bit of configure script at the end of the posixmodule.c diff.
One other thing I thought of: “in” is not a practical keyword argument name in Python, because it is a reserved word. Yes, sendfile(**{"in": ...}) is already there, but I think we should find some other name for copy_file_range() before it is too late. Some ideas:
copy_file_range(input, output, count, offset_in, offset_out, flags) # Spell them out
copy_file_range(fd_in, fd_out, len, off_in, off_out, flags) # Direct from man page
copy_file_range(src, dst, count, offset_src, offset_dst, flags) # Like os.replace(), shutil.copyfile(), etc
copy_file_range(fsrc, fdst, count, offset_src, offset_dst, flags) # Like shutil.copyfileobj()
My favourites are probably “input”, or “src”.
I settled for s/in/src/ and s/out/dst/, fixed typos, made sure the docs are in sync (parameters in the docstring were out of order), rephrased paragraph about flags parameter and included the configure.ac code for detecting availability of the syscall. I'm also thinking of leaving flags out, what do you think?
Another option could be to use Serhiy’s proposed partial keywords support in Issue 26282. It is not yet committed, but it looks like it will go ahead, and then you could make “in” and “out” positional-only parameters, and the rest keywords. But IMO “src” and “dst” is fine.
Also, dropping “flags” support seems reasonable. One benefit is that if Linux added an unexpected flag in the future that conflicts with Python’s assumptions, it could do weird stuff or crash Python. See Issue 24933 for example, where socket.recv(n, MSG_TRUNC) returns uninitialized data.
The latest patch looks pretty good to me. Is there a consensus yet to add it as a public function? Before this goes in, it would be good to have a test case.
I added a couple of unit tests, which lead me to fix a couple of bugs (yay!).
I discarded the idea of removing any reference to flags.
Fixed the last comments, including comparing what was written to the original data, but only to the length of what was actually written. I'm just not sure if the way to handle the syscall not existing is ok.
It’s a bit ugly, but I would write the test so that it is recorded as skipped:
try:
    os.copy_file_range(...)
except OSError as err:
    if err.errno != ENOSYS:
        raise  # We get to see the full exception details
    self.skipTest(err)  # Test is recorded as skipped, not passed
ENOSYS catching fixed.
* Updated the patch to latest version.
* PEP-8'ed the tests.
* Dropped flags from the API but not the internal function.
It looked ok to me (I couldn't try it, as I still have 4.4 kernel).
One thing to be done is to improve the test coverage (trying the usage of all the parameters, at least).
New version:
* Adds a new test for offset parameters.
Another version:
* Changed availability to kernel type, version and date.
Fixed extra space and semicolon (?!?!). Also, I'm getting 500 errors from Rietveld when I try to reply to comments.
This is a great addition. I have a working patch adding sendfile() support for shutil.copyfileobj() which speeds it up by a factor of 1.3x on Linux. According to this copy_file_range() may result in even better performances (but we may still want to use sendfile() for other UNIXes where file->file copy is supported - not sure which ones at this point).
As for the patch attached to this ticket, is there anything missing in order to push it forward?
> As for the patch attached to this ticket, is there anything missing in order to push it forward?
IMHO the next step would be to create a pull request on GitHub.
I'm really sorry, but I don't have time to continue with this (new daughter!). Can someone else pick it up?
Check for availability in configure.ac appears to be broken.
New changeset aac4d0342c3e692731c189d003dbd73a8c681a34 by Pablo Galindo in branch 'master':
bpo-26826: Expose copy_file_range in the os module (GH-7255)
shutil copy functions would definitively benefit of using copy_file_range() if available. Can someone please open a separated issue for shutil?
shutil.copyfile() already uses sendfile() which basically provides the same performances. sendfile() should be preferred though because it’s supported since Linux 2.6.33.
But copy_file_range can leverage more filesystem features, like deduplication and copy-offload support.
Giampaolo Rodola':
> shutil.copyfile() already uses sendfile() which basically provides the same performances. sendfile() should be preferred though because it’s supported since Linux 2.6.33.
Pablo Galindo Salgado:
> But copy_file_rane can leverage more filesystem features like deduplication and copy offload stuff.
We can use copy_file_range() if available, or fallback to sendfile().
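A minimal sketch of that fallback idea (my illustration, not the actual shutil implementation; the helper name is hypothetical):

```
import os

def _fast_copy(fsrc, fdst, count):
    """Try copy_file_range(), then sendfile(); return False if neither worked."""
    infd, outfd = fsrc.fileno(), fdst.fileno()
    if hasattr(os, "copy_file_range"):
        try:
            while count > 0:
                copied = os.copy_file_range(infd, outfd, count)
                if copied == 0:
                    break
                count -= copied
            return True
        except OSError:
            pass  # e.g. EXDEV or ENOSYS: fall through to sendfile()
    if hasattr(os, "sendfile"):
        offset = 0
        try:
            while count > 0:
                sent = os.sendfile(outfd, infd, offset, count)
                if sent == 0:
                    break
                offset += sent
                count -= sent
            return True
        except OSError:
            pass
    return False  # caller can fall back to shutil.copyfileobj()
```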
Actually "man copy_file_range" claims it can do server-side copy, meaning no network traffic between client and server if *src* and *dst* live on the same network fs. So I agree copy_file_range() should be preferred over sendfile() after all. =)
I have a wrapper for copy_file_range() similar to what I did in shutil in issue33671 which I can easily integrate, but I wanted to land this one first:
Also, I suppose we cannot land this in time for 3.8?
Please open a new issue to discuss how it can used in shutil ;-)
I created bpo-37157: "shutil: add reflink=False to file copy functions to control clone/CoW copies (use copy_file_range)".
--
The new os.copy_file_range() should be documented at:
> Please open a new issue to discuss how it can used in shutil ;-)
Use copy_file_range() in shutil.copyfile():
https://bugs.python.org/issue26826
Interesting: `patch` resolves the target when the patched function is called, whereas `patch.dict` resolves it at the time the patcher is created, i.e. when decorating.
An option might be to delay the resolution, as is done for `patch`, by changing the assignment to `self.in_dict_name = in_dict`.
Example untested patch:
```
diff --git a/Lib/unittest/mock.py b/Lib/unittest/mock.py
index 8f46050462..5328fda417 100644
--- a/Lib/unittest/mock.py
+++ b/Lib/unittest/mock.py
@@ -1620,9 +1620,7 @@ class _patch_dict(object):
"""
def __init__(self, in_dict, values=(), clear=False, **kwargs):
- if isinstance(in_dict, str):
- in_dict = _importer(in_dict)
- self.in_dict = in_dict
+ self.in_dict_name = in_dict
# support any argument supported by dict(...) constructor
self.values = dict(values)
self.values.update(kwargs)
@@ -1649,7 +1647,7 @@ class _patch_dict(object):
attr_value = getattr(klass, attr)
if (attr.startswith(patch.TEST_PREFIX) and
hasattr(attr_value, "__call__")):
- decorator = _patch_dict(self.in_dict, self.values, self.clear)
+ decorator = _patch_dict(self.in_dict_name, self.values, self.clear)
decorated = decorator(attr_value)
setattr(klass, attr, decorated)
return klass
@@ -1662,7 +1660,11 @@ class _patch_dict(object):
def _patch_dict(self):
values = self.values
- in_dict = self.in_dict
+ if isinstance(self.in_dict_name, str):
+ in_dict = _importer(self.in_dict_name)
+ else:
+ in_dict = self.in_dict_name
+ self.in_dict = in_dict
```
> This seems to be not a problem with patch.object where redefining a class later like dict seems to work correctly and maybe it's due to creating a new class itself that updates the local to reference new class?
For patch, when you create a new class, the new one is patched as the name is resolved at the time the decorated function is executed, not when it is decorated. See:
```
$ cat t.py
from unittest import mock
import c

@mock.patch("c.A.target", "updated")
def test_with_decorator():
    print(f"target inside decorator : {c.A.target}")

def test_with_context_manager():
    with mock.patch("c.A.target", "updated"):
        print(f"target inside context : {c.A.target}")

class A:
    target = "changed"

c.A = A

test_with_decorator()
test_with_context_manager()

mariocj89 at DESKTOP-9B6VH3A in ~/workspace/cpython on master*
$ cat c.py
class A:
    target = "original"

mariocj89 at DESKTOP-9B6VH3A in ~/workspace/cpython on master*
$ ./python ./t.py
target inside decorator : updated
target inside context : updated
```
If `patch` was implemented like `patch.dict`, you would see the first as "changed" as the reference to `c.A` would have been resolved when the decorator was run (before the re-definition of `A`).
About `patch.object`, it cannot be compared, as it grabs the name at the time you execute the decorator because you are not passing a string, but the actual object to patch.
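For illustration, here is a minimal sketch (module and names are hypothetical, not from this report) of the eager resolution this issue is about: the dict is looked up when the decorator line runs, so a later rebinding is not seen.

```
from unittest import mock

import c  # assume c.py contains: CONFIG = {"key": "original"}

@mock.patch.dict("c.CONFIG", {"key": "updated"})
def test():
    print(c.CONFIG["key"])

c.CONFIG = {"key": "changed"}  # rebound AFTER decoration

test()  # prints "changed": the patcher captured the old dict object
```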
https://bugs.python.org/msg336385
Created on 07-14-2015 11:04 AM - edited 07-15-2015 08:30 AM
I'm currently trying to Sqoop some data from an Oracle DB to HDFS, using Oozie to schedule the Sqoop workflow.
My Sqoop version: Sqoop 1.4.5-cdh5.4.2
My Sqoop code:
import --connect {JDBCpath} \
--username {Username} \
--password {Password} \
--verbose \
--table {Table} \
--where "{Query}" \
-z \
--compression-codec org.apache.hadoop.io.compress.SnappyCodec \
--as-parquetfile \
--target-dir {TargetDirectory} \
--split-by {columnToSplitBy} \
-m 14
The Error I have been getting:
java.lang.NoSuchMethodError: org.kitesdk.data.impl.Accessor.registerDatasetRepository(Lorg/kitesdk/data/spi/URIPattern;Lorg/kitesdk/data/spi/OptionBuilder;)V
[Edited to focus on my main problem, rather than the research I've done into it which may or may not be the correct path in solving this. This information will be reposted in a reply.]
Created on 07-15-2015 08:06 AM - edited 07-15-2015 08:31 AM
[Research I've done: It looks like the method registerDatasetRepository was included in the kite-data-core.jar file in version 0.14.1 but was removed in version 0.15.0. We are currently using one of the kite-data-core.jar files from after version 0.15.0, and I am unable to use the older version since other things on HDFS use that jar.
Is there any way I can tell sqoop to not use kite or add another jar with this method included or any other solution to this error?]
We want to use the newest version (and as far as I know, we currently are), but the code generated from the sqoop command wants to use the registerDatasetRepository method from kite-data-core.jar, and it looks like that method no longer exists in the newer versions of kite-data-core.jar.
Created 07-16-2015 10:24 AM
Would there be any way to tell oozie to not use kite when sqooping to avoid this error entirely?
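One commonly suggested workaround for launcher classpath conflicts like this (a sketch based on general Oozie/Hadoop configuration, not a confirmed fix from this thread) is to make the Oozie launcher prefer jars shipped with the workflow, and put the Kite version Sqoop expects in the workflow's lib/ directory:

```
<action name="sqoop-import">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <!-- Ask the launcher to put user-supplied jars ahead of the cluster's. -->
            <property>
                <name>oozie.launcher.mapreduce.job.user.classpath.first</name>
                <value>true</value>
            </property>
        </configuration>
        <command>import --connect ${jdbcPath} --table ${table} ...</command>
        <!-- Place the kite-data-core jar Sqoop expects in the workflow lib/ dir. -->
    </sqoop>
    <ok to="end"/>
    <error to="fail"/>
</action>
```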
https://community.cloudera.com/t5/Support-Questions/Error-Sqooping-data-with-Oozie-java-lang-NoSuchMethodError/m-p/29555
import "bitbucket.org/dtolpin/infergo/model"
Package model specifies the interface of a probabilistic model.
DropGradient can be called instead of Gradient when the gradient is not required. For automatically differentiated models DropGradient will pop the frame from the tape; for elemental models it will do nothing.
Gradient automatically selects either supplied or automatic gradient.
Shift shifts n parameters from x, useful for destructuring the parameter vector.
An elemental model uses a supplied gradient instead of automatic differentiation.
A probabilistic model must implement interface Model. Method Observe accepts a vector of parameters and returns the loglikelihood. Computation of the gradient is automatically induced through algorithmic differentiation.
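A minimal sketch (my own toy example, not from the package docs; the Observe signature is assumed from the description above) of an elemental model implementing the interface:

```
// A toy elemental model implementing the Model interface.
package main

import "fmt"

// Model is the interface described above: Observe accepts a vector of
// parameters and returns the log-likelihood.
type Model interface {
	Observe(x []float64) float64
}

// Gauss is a unit-variance Gaussian likelihood over a single mean parameter.
type Gauss struct {
	Data []float64
}

// Observe returns the log-likelihood of the data given mean x[0],
// up to an additive constant.
func (m Gauss) Observe(x []float64) float64 {
	ll := 0.0
	for _, y := range m.Data {
		d := y - x[0]
		ll -= 0.5 * d * d
	}
	return ll
}

func main() {
	var m Model = Gauss{Data: []float64{0.5, 1.5, 1.0}}
	fmt.Println(m.Observe([]float64{1.0}))
}
```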
Package model imports 1 package (graph) and is imported by 1 package. Updated 2019-10-05.
https://godoc.org/bitbucket.org/dtolpin/infergo/model
Runs an external program using the ShellExecute API and pauses script execution until it finishes.
ShellExecuteWait ( "filename" [, "parameters" [, "workingdir" [, "verb" [, showflag]]]] )
After running the requested program, the script pauses until the requested program terminates.
#include <MsgBoxConstants.au3>

Example()

Func Example()
    ; Execute Notepad and wait for the Notepad process to close.
    Local $iReturn = ShellExecuteWait("notepad.exe")

    ; Display the return code of the Notepad process.
    MsgBox($MB_SYSTEMMODAL, "", "The return code from Notepad was: " & $iReturn)
EndFunc   ;==>Example
https://www.autoitscript.com/autoit3/docs/functions/ShellExecuteWait.htm
Hi all !
I've been coding in Processing for a few months now and I'm starting to use Minim to add sound to my sketches, yet I'm totally new to this library, and after hours of struggling I can't find a way to solve my problem. Here is my basic code:
import ddf.minim.*;

Minim minim;
AudioPlayer groove;
float x, y, a, b, c, d, z;
PFont police;
PFont polisse;
String DISASTERPEACE;
String HLD;

void setup() {
  size(600, 600);
  minim = new Minim(this);
  groove = minim.loadFile("groove.mp3", 1024);
  groove.loop();
  background(0);
  textAlign(CENTER);
  emissive(0, 26, 51);
  police = createFont("drifter.ttf", 32);
  polisse = createFont("DF.ttf", 25);
  textFont(police);
  HLD = "Hyper Light Drifter";
  DISASTERPEACE = "DISASTERPEACE";
  smooth();
  stroke(random(0, 255), random(0, 255), random(0, 255));
  for (int i = 0; i < 40; i++) {
    fill(random(0, 255), random(0, 255), random(0, 255));
    ellipse(x, y, z, z);
    ellipse(a, b, z, z);
    ellipse(c, d, z, z);
    line(x, y, a, b);
    line(x, y, c, d);
    line(c, d, a, b);
    x = random(0, 600);
    y = x + 20;
    a = x - 20;
    b = random(0, 600);
    c = b + 20;
    d = b - 20;
    z = 3;
  }
  fill(255);
  textSize(12);
  text(HLD, 300, 275);
  textSize(30);
  text(DISASTERPEACE, 300, 313);
  //save("ESSAIS10.jpg");
}
So I simply want the lines to react to the playing music. I've read a ton of Minim-related stuff and looked at all the examples about it, and yet I can't manage to make those lines vibrate to the music.
I've tried many things but nothing worked.
Thanks for your help.
Answers
don't quote your code.
edit post, highlight code, press ctrl-o
tried but can't make that work too ^^
done
so i guess you've written code with both a setup() and a draw() before now. where's the draw() in this code?
Thanks, that's where I'm struggling. I know I need a draw() to play the song, and I've tried things such as moving the lines from setup() to draw(), but since I have a ton of different coordinates (all created randomly) for the ellipses and the lines, this is not working.
The problem is that I can't find a way to get enough lines without having to create a ton of values, because I need the individual values of each line in order to add the soundwave movement to them.
which line?
no real idea what you're talking about here. it's not obvious what you want the final thing to look like.
I meant the lines; sorry, English is not my native language.
However, if you try my code, you can see multiple lines and ellipses. I just want the existing lines to react to the playing music using something like this (it's the code from the Minim example called DrawWaveFormAndLevel in the "basics" folder):
for (int i = 0; i < groove.bufferSize() - 1; i++) {
  float x1 = map(i, 0, groove.bufferSize(), 0, width);
  float x2 = map(i + 1, 0, groove.bufferSize(), 0, width);
  line(x1, 50 + groove.left.get(i) * 50, x2, 50 + groove.left.get(i + 1) * 50);
  line(x1, 150 + groove.right.get(i) * 50, x2, 150 + groove.right.get(i + 1) * 50);
}
My problem is that all my lines are generated using random values, and since they are generated in setup() using a for() loop, they don't have fixed locations.
@Spoll === try to create 2 arrays (for the x and b values; the others can be derived from them) and add your random values to the arrays in setup(). In draw() you work with the stored values and can make the lines react to the input you get from Minim. A minimal sketch of that idea follows.
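Something along these lines (a sketch only; the file name, line count and scaling are placeholders, not from this thread):

```
import ddf.minim.*;

Minim minim;
AudioPlayer groove;
int n = 40;
float[] xs = new float[n];
float[] bs = new float[n];

void setup() {
  size(600, 600);
  minim = new Minim(this);
  groove = minim.loadFile("groove.mp3", 1024);
  groove.loop();
  // Store the random anchors once so draw() can reuse them every frame.
  for (int i = 0; i < n; i++) {
    xs[i] = random(0, 600);
    bs[i] = random(0, 600);
  }
}

void draw() {
  background(0);
  stroke(255);
  for (int i = 0; i < n; i++) {
    // Sample the playing buffer; values are in [-1, 1], scaled to pixels.
    float offset = groove.left.get((i * 10) % groove.bufferSize()) * 50;
    line(xs[i], xs[i] + 20 + offset, xs[i] - 20, bs[i] + offset);
  }
}
```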
thanks @akenaton=== so here is what i get
i'm really having a hard time with that ><
@Spoll===
something like that???- of course i have not added the minim getLineIn stuff but that is easy.
hey, thanks for your answer it helps me a lot, but now i have no idea how to make the lines react to the music ( resulting in a waveform)
sorry for bothering you
This is killing me; I don't understand anything about the library. I've made a few tests with your code @akenaton === and it really helps me out, but I can't manage to get the lines to react properly to the music. All I want is:
but the only reaction i get is
here is the code i modified, tell me if i do something wrong, thanks :
@Spoll=== in order to show (as i dont know what you exactly want...)
@akenaton=== sorry if i can't make myself easily understandable
well you see all the lines :
i just want them to act as the soundwave of the music
like that
as the song play, i would like them to vibrate with the music
you know, something like that
The example that you are giving at the bottom uses a standing wave concept for visualizing harmonics -- it looks like it achieves different heights at different fixed distances by adding together each harmonic measurement (Notice also that activity in your example is symmetrical).
So that visualization is probably taking a measurement for the first, second, third, fourth harmonic etc., adding the values together, and drawing one line.
Here are some related readings to help you understand the concepts. The implementation would be to generate measures for each frequency (harmonic), then render the visualization graph as the addition of multiple curves (each harmonic function).
https://forum.processing.org/two/discussion/19965/minim-waveform-problem
Hitscotty + 65 comments
With my solution I used modular arithmetic to calculate the position of the each element and placed them as I read from input.
for (int i = 0; i < lengthOfArray; i++) {
    int newLocation = (i + (lengthOfArray - shiftAmount)) % lengthOfArray;
    a[newLocation] = in.nextInt();
}
antrikshverma2 + 1 comment
Neat code , thanks Hitscotty !!
manishdas + 5 comments
hmm.. I'm surprised that worked for you. This one worked for me:
str = ''
length_of_array.times do |i|
  new_index = (i + no_of_left_rotation) % length_of_array
  str += "#{array[new_index]} "
end
puts str.strip
darwin57721 + 2 comments
what is the starting value of your i? (i dont know ruby). d=2, n = 10. Because if it is 0, it would be (0+2)%10 = 2. What am I getting wrong?
manishdas + 1 comment
The starting value of the i is 0. Looks like correct calculation to me. What result are you expecting?
darwin57721 + 0 comments
ha, yeah i wasn't understanding right! I made it this way, that's why I was confused. rotated[(n+i-d)%n] = a[i]. Which is analogous to yours, but calculating the index in destination. Yours is more clear I think. Thanks!
Usernamer89 + 1 comment
Are you a mathematician? Because I came up with a fairly similar answer.
jambekardhanash1 + 2 comments
why do we need i? Can you please explain?
manishdas + 33 comments
Based on current index (i), you need to generate new index. For example: let's say array = [1, 2, 3, 4] and k = 2, then after 2 left rotation it should be [3, 4, 1, 2] => 3 4 1 2 (space separated string output)
Now let's walk through my algorithm:
# Initial assignments:
# array = [1, 2, 3, 4]
# length_of_array = array.length = 4
# no_of_left_rotation = k = 2
# new_arr = Array.new(length_of_array)
# new_arr: [nil, nil, nil, nil]

# NOTE:
#   length_of_array.times do |i|
# is equivalent to
#   for(i = 0; i < length_of_array; i++)

# Algorithm to calculate the new index and update the new array for each index (i):
#   new_index = (i + no_of_left_rotation) % length_of_array
#   new_arr[i] = array[new_index]

# LOOP1:
# i = 0
# new_index = (0 + 2) % 4 = 2
# new_arr[i = 0] = array[new_index = 2] = 3
# new_arr: [3, nil, nil, nil]

# LOOP2:
# i = 1
# new_index = (1 + 2) % 4 = 3
# new_arr[i = 1] = array[new_index = 3] = 4
# new_arr: [3, 4, nil, nil]

# LOOP3:
# i = 2
# new_index = (2 + 2) % 4 = 0
# new_arr[i = 2] = array[new_index = 0] = 1
# new_arr: [3, 4, 1, nil]

# LOOP4:
# i = 3
# new_index = (3 + 2) % 4 = 1
# new_arr[i = 3] = array[new_index = 1] = 2
# new_arr: [3, 4, 1, 2]

# After the final loop our new rotated array is [3, 4, 1, 2]
# You can return the output:
# new_arr.join(' ') => "3 4 1 2"
Hope that's clear.
MobilityWins + 0 comments
I am trying to understand this, but this is the first time I have seen value assignments of the form val = otherVal = anotherVal.
I don't quite understand how that is supposed to work. Also, what is "nil", and what is its purpose in an array?
mzancanella + 1 comment
if the length of the array is = 3 then it seems it won't work.
p_callebat + 2 comments
new_index = (i + no_of_left_rotation) % length_of_array;
seems incorrect. You will see the problem if you test, for example [1,2,3,4,5] and k = 2 .
I guess would be better:
new_index = (i + (lengthOfArray - no_of_left_rotation)) % lengthOfArray;
supertrens + 3 comments
Seems like this algorithm only works for small inputs; when the array is big enough, the long looping time will cause a system timeout.
2017A7PS0931G + 2 comments
I was facing the same problem. I made several attempts but couldn't solve the issue. Can you please tell me how to write a loop for an array with so many elements? :)
pawel_jozkow + 5 comments
In Java 8 the problem was with String; you have to use the more efficient StringBuilder instead, and of course use only one loop to iterate over the array.
here is my code snippet:
StringBuilder output = new StringBuilder();
for (int i = 0; i < n; i++) {
    b[i] = a[(i + k) % n];
    output = output.append(b[i]).append(" ");
}
d_p_sergeev + 0 comments
Better to use linked list, so no need to LOOP fully:
val z = LinkedList(a.toList())
for (i in 0 until n) z.addLast(z.pollFirst())
jaya170199 + 0 comments
Why does it not work if we use the same array to store the modified array, i.e. a[i] = a[(i+k) % n]?
sreetejayatam + 0 comments
#include <stdio.h>
#include <stdlib.h>

void reverse(int *str, int length) {
    int start, end, temp;
    for (start = 0, end = length - 1; start < end; start++, end--) {
        temp = str[start];
        str[start] = str[end];
        str[end] = temp;
    }
}

int main() {
    int size, nor;
    scanf("%d %d", &size, &nor);
    int *str = (int *)malloc(size * sizeof(int));
    for (int i = 0; i < size; scanf("%d", &str[i++]));
    reverse(str, size);
    reverse(str, size - nor);
    reverse(str + size - nor, nor);
    for (int i = 0; i < size; printf("%d ", str[i++]));
    return 0;
}
__raviraj__ + 3 comments
#include <iostream>
using namespace std;

int main() {
    long int a[1000000], n, d, i, f;
    cin >> n >> d;
    for (i = 0; i < n; i++) cin >> a[i];
    for (int j = 0; j < d; j++) {
        f = a[0];
        for (i = 0; i < n - 1; i++) {
            a[i] = a[i + 1];
        }
        a[n - 1] = f;
    }
    for (i = 0; i < n; i++) cout << a[i] << " ";
}

This is my code and I'm getting a timeout; could you please help?
nsaikaly12 + 2 comments
It's because your solution is O(n^2) with the inner loop. Try to find an O(n) solution that iterates over the whole array only once.
__raviraj__ + 1 comment
i didnt get u
reddychintu + 1 comment
O(n^2) means you have 2 for loops causing a greater time complexity
monica_marlene_1 + 0 comments
an inner loop will not cause his program to time out. I don't believe the variable n was ever initialized, so the loop is approaching a value of n that isn't defined.
SBU3411348 + 0 comments
static int[] rotLeft(int[] a, int d) { int j,i,p; for(j=0;j
Check against this and you will see what mistake you made.
joelvanpatten + 0 comments
I was facing the same issue in PHP. My solution worked for 9 out of 10 test cases but timed out on one of them every time. You have to re-write the solution to be less memory intensive. In my case I was using array_shift() which re-indexes the arrays, so for large arrays it uses too much memory. My solution was to use array_reverse() and then array_pop() instead, because those methods don't re-index.
haroon_1993 + 0 comments
This does not work for all inputs; if you make the rotation more than 4 times, it fails.
lakshman1055 + 1 comment
How do you think like this? Once the code is there, I know it's easy to understand. I want to know how you knew to use modulus and how you came up with that logic.
Thanks in advance.
amrelbehairy88 + 1 comment
Have you ever heard about data structures? If you have, you have probably heard about circular arrays.
I was able to solve the question because I knew about circular arrays: we use % with the size of the array to create a circular array, and then all you need to do is complete the puzzle to solve the problem.
check this video,
sasuke_10 + 1 comment
Great solution. Any tips on how to know if you need to use modulus in your algorithm? I solved this problem using 2 for loops...
mikehow1005 + 3 comments
I figured it out by saying: I don't need to loop through this array over and over to know what the final state of the array should be. What I need to figure out is what the first element of the new array will be after I've rotated X times. So if I divide the number of rotations (X) by the length of the array (lenArr), I should get the number of times the array has been fully rotated. I don't need that; I need what the first element will be after this division operation. For that I need the remainder of that division (the modulus). This is because after all of the full array loops are done, the remaining rotations determine what the first element in the new array will be.
So you take that remainder (modulus) and that's the first element's index in the old array. For example, 24 rotations in a 5 element long array means that the first element in the new array is in the 4th index of the old array. (24 % 5 = 4)
So rotate through [3, 4, 5, 6, 7] 24 times and the first element will be 7. So just take that and put it before the other elements. ([7, 3, 4, 5, 6])
Another good tip is always look for repeating patterns. It's a sign that you can simplify your code. The for loop method is just repeating the state of the array over and over: [3, 4, 5, 6, 7] [4, 5, 6, 7, 3,] [5, 6, 7, 3, 4,] [6, 7, 3, 4, 5,] [7, 3, 4, 5, 6,] [3, 4, 5, 6, 7] [4, 5, 6, 7, 3,] [5, 6, 7, 3, 4,]...
You only really need to know what's happening in the final few rotations, after the last full loop.
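To make that concrete, here is a small sketch of the same reasoning (my own, with made-up names): the element at old index d % n becomes the first element, and everything else follows around the circle.

```
// Returns a new array equal to 'a' left-rotated d times.
static int[] rotateLeft(int[] a, int d) {
    int n = a.length;
    int start = d % n;               // index of the element that ends up first
    int[] out = new int[n];
    for (int i = 0; i < n; i++) {
        out[i] = a[(start + i) % n]; // walk the circle from 'start'
    }
    return out;
}
```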
anisharya16 + 0 comments
Thank you so much, it helped a lot. But can you please tell how you thought of the new index position? What was your reasoning?
rakeshreddy5566 + 1 comment
simple is peace
return arr[d:] +arr[0:d]
morrisontech + 0 comments
i is a variable used to iterate through the loop, it generally represents the index of the array that is being referenced on a particular iteration of the loop.
abhash24oct + 0 comments
Your code is for right rotation, and the explanation gave you the right answer because the size was 4 and k = 2, so whether you rotate left or right you get the same result. For a left rotation it would be: int newLoc = (n + (i - k)) % n;
zenmasterchris + 4 comments
The question asks to shift a fully formed array, not to shift elements to their position as they're read in. Start with a fully formed array, then this solution does not work.
cmshiyas007 + 0 comments
That's what I was thinking too; I was wondering why the logic written here was arranging the array on read...
andritogv + 1 comment
That's exactly the point of the exercise. You have to rotate an already existing array.
Turings_Ghost + 0 comments
I noticed that right away. If the point was to produce printed output, then this is fine (and a lot of analysis works backward from output). But, as stated, one is supposed to shift an array, so this missed it.
aubreylolandt + 0 comments
this could easily be modified though by creating another array of the same size:
vector<int> b(n);
for (int i = 0; i < n; i++) {
    b[i] = a[(i + k) % n];
}
return b;
buzzaldrin + 0 comments
I had the same idea! Just find the starting point of the array with the shift and count on from there, taking modulo and the size of the array into account.
denis_ariel + 1 comment
(i + shift) % length should be enough.
robertgbjones + 0 comments
Except that describes a right shift, and specification says a left shift. You might consider left shift to be negative shift, in which case you are correct mathematically, but I'd feel much more comfortable keeping the whole calculation in positive territory.
chrislucas + 3 comments
modular arithmetic is cool. I solved that way too
for idx in range(0, _size):
    indexes[(idx - shift + _size) % _size] = _list[idx]
marwinko19 + 1 comment
Can you please explain how that works?
jericogantuangc1 + 0 comments
Hello, where did this solution from? what should I study to be able to come up with solutions like this?
jattilah + 6 comments
Looks a lot like my C# solution:
static int[] Rotate(int[] a, int n) {
    n %= a.Length;
    var ret = new int[a.Length];
    for (int i = 0; i < a.Length; ++i) {
        ret[i] = a[(i + n) % a.Length];
    }
    return ret;
}
purshottamV + 1 comment
This line is useful when n >= a.Length.
caiocapasso + 0 comments
Here's another slightly different solution. I'm assuming it would be less performant, since it uses List and then converts it to Array, but I'm not sure how much more so.
static int[] rotLeft(int[] a, int d) {
    var result = new List<int>();
    for (int i = d; i < (a.Length + d); i++) {
        result.Add(a[i % a.Length]);
    }
    return result.ToArray();
}
sidnext2none + 3 comments
I agree modular arithmetic is awesome. But, simple list slicing as follows solves too ;)
def rotLeft(a, d): return a[d:]+a[:d]
Turings_Ghost + 0 comments
The mathematical modulus operation always returns a non-negative result. If, as in Java, % really computes a remainder rather than the mathematical modulus, it can return a negative value. So it depends on the language.
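For example (my illustration, not from the thread), in Java an expression like (i - d) % n can go negative and needs normalizing before being used as an index:

```
public class ModDemo {
    public static void main(String[] args) {
        int n = 5, i = 1, d = 3;
        int naive = (i - d) % n;           // -2 in Java: not a valid index
        int safe = ((i - d) % n + n) % n;  // 3: always in the range [0, n)
        System.out.println(naive + " " + safe);
    }
}
```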
marakhakim + 2 comments
What if lengthOfArray < shiftAmount? I think you should use abs value
jattilah + 0 comments
You deal with lengthOfArray < shiftAmount by using:
shiftAmount = shiftAmount % lengthOfArray;
If the array length is 4, and you're shifting 6, then you really just want to shift 2.
The constraints say that shiftAmount will always be >= 1, so you don't have to worry about negative numbers.
vovchuck_bogdan + 10 comments
pretty simple in js:
a.splice(k).concat(a.slice(0, k)).join(' ')
amezolma + 2 comments
Did something similar in C#..
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class Solution {
    static string rotate(int rot, int[] arr) {
        string left = string.Join(" ", arr.Take(rot).ToArray());
        string right = string.Join(" ", arr.Skip(rot).ToArray());
        return right + ' ' + left;
    }

    static void Main(String[] args) {
        string[] tokens_n = Console.ReadLine().Split(' ');
        int n = Convert.ToInt32(tokens_n[0]);
        int k = Convert.ToInt32(tokens_n[1]);
        string[] a_temp = Console.ReadLine().Split(' ');
        int[] a = Array.ConvertAll(a_temp, Int32.Parse);
        // rotate and return as string
        string result = Solution.rotate(k, a);
        // print result
        Console.WriteLine(result);
    }
}
merkman + 2 comments
Or you can one line it with LINQ
Console.Write(string.Join(" ", a.Skip(k).Concat(a.Take(k)).ToArray()));
rahulbhansali + 1 comment
While it definitely looks elegant with a single line of code, how many times will this iterate over the array when performing Skip, Take, and concatenating them? In other words, what's the complexity of this algorithm?
jordandamman + 0 comments
Any resources that explain how this works? I definitely see that it works, but say k is 5 in the first example and the array is 12345, it looks like we're skipping the whole array, then concatenating that whole array back to it with Take(5). What am I missing? Thank you for your time.
avi_roychow + 4 comments
Can any one please tell me why the below code is timing out for large data set:
for (int j = 0; j < k; j++) {
    for (int current = n - 1; current >= 0; current--) {
        if (current != 0) {
            if (temp != 0) {
                a[current - 1] = a[current - 1] + temp;
                temp = a[current - 1] - temp;
                a[current - 1] = a[current - 1] - temp;
            } else {
                temp = a[current - 1];
                a[current - 1] = a[current]; // for the first time
            }
        } else { // when current reaches the first element
            a[n - 1] = temp;
        }
    }
}
Console.WriteLine(string.Join(" ", a));
rishabh10 + 2 comments
mine is also a brute force approach but it worked check it out if it helps you
import java.io.*;
import java.util.*;
import java.text.*;
import java.math.*;
import java.util.regex.*;

public class Solution {
    public static int[] arrayLeftRotation(int[] a, int n, int k) {
        int temp, i, j;
        for (i = 0; i < k; i++) {
            temp = a[0];
            for (j = 1; j < n; j++) {
                a[j - 1] = a[j];
            }
            a[n - 1] = temp;
        }
        return a;
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        int k = in.nextInt();
        int a[] = new int[n];
        for (int a_i = 0; a_i < n; a_i++) {
            a[a_i] = in.nextInt();
        }
        int[] output = arrayLeftRotation(a, n, k);
        for (int i = 0; i < n; i++)
            System.out.print(output[i] + " ");
        System.out.println();
    }
}
pateldeep18 + 0 comments
int main() {
    int n;
    int k;
    int temp1;
    scanf("%d %d", &n, &k);
    int *a = malloc(sizeof(int) * n);
    for (int a_i = 0; a_i < n; a_i++) {
        scanf("%d", &a[a_i]);
    }
    k = k % n;
    for (int a_i = 0; a_i < k; a_i++) {
        temp1 = a[0];
        for (int i = 1; i < n; i++) {
            a[i - 1] = a[i];
        }
        a[n - 1] = temp1;
    }
    for (int a_i = 0; a_i < n; a_i++) {
        printf("%d ", a[a_i]);
    }
    return 0;
}

My code is the same as yours, but I still time out on test case 8. Why is that?
not_nigel + 0 comments
You're not wrong but this solution is inefficient. You're solving it in O(((n-1) * k) + 2n). The solution below is in O(2n).
private static void solution(int size, int shift, int[] arr) {
    int count = 0;
    for (int i = shift; i < size; i++) {
        System.out.print(arr[i]);
        System.out.print(" ");
        count++;
    }
    count = 0;
    for (int i = size - shift; i < size; i++) {
        System.out.print(arr[count]);
        if (i != size - 1) System.out.print(" ");
        count++;
    }
}
ash_jo4444 + 2 comments
I got a timeout error for TC#8 and #9 for the same logic in Python :(
Muthukumar_T + 1 comment
I got a timeout for TC#8 in C. Why?
russelljuma + 0 comments
No loops. Just split and reconnect:

def rotLeft(a, d):
    b = a[d:len(a)] + a[0:d]
    return b
gdahis + 1 comment
Because it is O(n*k): if you have a big n and a big k, it could time out. See if you can think of an algorithm that would visit each array element only once, making it O(n). Also, is there an optimization you can make? For example: if k is bigger than n, then you don't need to do k rotations, just k % n rotations, and k will be much smaller than n. Example:
[ 1, 2, 3, 4, 5 ]
K=2, K=7=(1*5)+2 and K=12=(2*5)+2 are all equivalent, leading the array to be:
[3, 4, 5, 1, 2]
Nitin304 + 1 comment
My Solution :
public static int[] arrayLeftRotation(int[] a, int n, int k) {
    int[] b = new int[n];
    for (int i = 0; i < n - k; i++) {
        b[i] = a[k + i];
    }
    int l = 0;
    for (int i = n - k; i < n; i++) {
        b[i] = a[l++];
    }
    return b;
}
amit_feb06 + 1 comment
I have submitted the code with one for loop.
thefonso + 1 comment
In an actual interview they will ask you not to use splice or slice; I had that happen to me.
_e_popov + 1 comment
Indeed, I forgot that.
`end` goes through the end of a sequence, so here is my solution:
function rotLeft(a, d) { const index = d % a.length; return [...a.slice(index), ...a.slice(0, index)]; }
Paul_Denton + 1 comment
Spoiler! You can do it even simpler: rotated[i] = a[(i + k) % n]. Also spoilers should be removed from the discussion or the discussion should only be available after solving. I will complain about this until its changed :P
gurdeeps158 + 0 comments
Your solution is cool, but if you have an array as input then you are in trouble, because in that case you have a space complexity of O(n): you need another array to store the elements in their new places. Think about it.
alexzaitsev + 0 comments
Hey, guys
Here is a solution based on modular arithmetic for the case when k > n:
new_index = (n + i - abs(k-n)) % n
(note: n - abs(k-n) can be collapsed to a single number)
milindmehtamsc + 0 comments
This will also fail when shiftAmount = 7 and lengthOfArray = 3; in short, when lengthOfArray is less than shiftAmount. In this case we can use Math.abs():

for (int a_i = 0; a_i < n; a_i++) {
    int new_index = Math.abs(a_i + (lengthOfArray - shiftAmount)) % lengthOfArray;
    a[new_index] = in.nextInt();
}
mihir7759 + 1 comment
It's not cheating exactly. Using the same method you can even rotate the array, instead of printing the array just give the values of the array to a new array.
codextj + 0 comments
I was nitpicking; I thought of the same solution at first but then changed my mind.
As the question says GIVEN an array, if this were an interview there would be the constraint that your array is already populated with the elements.
By the way, are you 14? It's great to see my young Indian friends getting into programming.
96rishu_nidhi + 1 comment
Can you please elaborate some more on your code? I don't have much knowledge of modular math.
greengalaxy2016 + 0 comments
the requirement is to take an array and left rotate the array d times. Your solution returns the correct result, but takes an integer one at a time.
c00301223 + 0 comments
Thanks for sharing this code, it really helped. I thought the constraints had to be handled with if statements, but after viewing your code I got it. I have a small suggestion: would it improve the code to separate the (lengthOfArray - shiftAmount) part into a variable and reuse it, since it's effectively a constant value? Once again, kudos.
riyaz_rayyan07 + 0 comments
What is in.nextInt()? Which language is that? Did you create another Scanner object named "in"? Can you be more specific?
ZeoNeo + 0 comments
It's easy when you read the values directly from system input. Try to make it work on an already stored array; that's what the problem statement says. It gets tricky and interesting after that to solve it in O(n) without extra memory, i.e. starting from the "// Complete the rotLeft function" stub.
My solution
private static int getIncomingIndex(int index, int rotations, int length) {
    if (index < (length - rotations)) {
        return index + rotations;
    }
    return index + rotations - length;
}

// Complete the rotLeft function below.
static int[] rotLeft(int[] a, int d) {
    int rotations = d % a.length;
    if (a.length == 0 || a.length == 1 || rotations == 0) {
        return a;
    }
    if (a.length % 2 == 0 && a.length / 2 == rotations) {
        for (int i = 0; i < a.length / 2; i++) {
            swap(a, i, i + rotations);
        }
    } else {
        int count = 0;
        int i = 0;
        while (true) {
            int dstIndex = getIncomingIndex(i, rotations, a.length);
            swap(a, i, dstIndex);
            i = dstIndex;
            count++;
            if (count == a.length - 1) {
                break;
            }
        }
    }
    return a;
}
madhanmohansure + 1 comment
nice code tq
scweiss1 + 1 comment
The part I'm missing here is why use a loop (O(n)). Can't you take the array and find the effective rotation based on the shift amount (using the same modular arithemetic you're doing? (Which is now O(1) since the length of the array is a property)
function rotLeft(a, d) {
  // calculate effective rotation (d % a.length)
  let effectiveRotation = d % a.length;
  // split a at the index of the effective rotation into left and right
  let leftPortion = a.slice(0, effectiveRotation);
  let rightPortion = a.slice(effectiveRotation);
  // concat left to right
  return rightPortion.concat(leftPortion);
}
silverdust2695 + 0 comments
Why would you loop for every element when in essence the rotation operation is nothing but just a rearrangement of the array elements in a specified fashion?
LeHarkunwar + 0 comments
Tried a different approach
def rotLeft(a, d): return reversed(list(reversed(a[:d])) + list(reversed(a[d:])))
mortal_geek + 0 comments
But isn't the whole point that you are not placing them as they come? The array is pre-populated and then you rotate it. My solution is O(d*n); not sure if there is anything better. Clearly I am not an algorithm guy (anymore)!

for (int i = 0; i < d; i++) {
    int pop = a[0];
    // shift left
    for (int j = 1; j < a.length; j++) {
        a[j - 1] = a[j];
    }
    // push
    a[a.length - 1] = pop;
}
fakirchand + 1 comment
Excellent! I am new to problem solving. I had solved it via normal shifting using one for loop and one while loop. How did you arrive at this kind of solution? A little explanation of what you thought while solving this would help a lot.
Thanks.
judith_herrera22 + 0 comments
I don't see my submission in the discussion board. Are you reviewing my solutions?
mine0nlinux + 0 comments
If the number of rotations is greater than the array length (I know the question guarantees it's less, but let's assume), then how would this formula change? BTW, that's a great way to get the array indices without having to traverse the whole array.
jc_imbeault + 0 comments
Interesting take on the problem!
I'm just mentioning this for completeness' sake, but it's not actually solving the problem as asked, which is to write a separate function :)
Also, a follow-up question might be "improve your function so that it rotates the array in-place"
vikas_nadahalli + 0 comments
How do you people come up with such optimization? my mind doesn't seem to work :(
ecoworld007 + 0 comments
I was thinking to do the same but thought not gonna do this with arithmetic so I just looped twice.
let result = [];
for (let i = shiftAmount; i < array.length; i++) {
  result.push(array[i]);
}
for (let i = 0; i < shiftAmount; i++) {
  result.push(array[i]);
}
return result;
qzhang63 + 15 comments
Python 3
It is way easier if you choose python since you can play with indices.
def array_left_rotation(a, n, k):
    alist = list(a)
    b = alist[k:] + alist[:k]
    return b
kevinmathis08 + 10 comments
Yeah index slicing FTW, here was my 1-liner in definition, lol:
def array_left_rotation(a, n, k):
    return a[k:] + a[:k]

n, k = map(int, input().strip().split(' '))
a = list(map(int, input().strip().split(' ')))
answer = array_left_rotation(a, n, k)
print(*answer, sep=' ')
Lord_krishna + 1 comment
is that scala?
aniket_vartak + 1 comment
you dont need to pass n to your function, right..
michael_bubb + 2 comments
I agree - I ended up not using 'n' (Python):
def left_shift(n, k, a):
    for _ in range(k):
        a.append(a.pop(0))
    print(*a)
unitraxx + 1 comment
Obviously this solves the problem, but it is a terrible solution. pop(0) is an O(N) operation, so your solution becomes O(K*N); this should be done in O(N) total time complexity. You do have the O(1) space requirement correct; all the standard solutions have O(N) space complexity.
asfaltboy + 1 comment
True. However, it becomes an elegant solution if we use collections.deque instead of a list. Double-ended queues have a popleft method, which is an O(1) operation:

def array_left_rotation(a, n, k):
    for _ in range(k):
        a.append(a.popleft())
    return a
More info:
AffineStructure + 1 comment
They have rotate built into the deque
def array_left_rotation(a, n, k):
    a = deque(a)
    for i in range(k):
        a.rotate(-1)
    return a
josegabriel_st + 0 comments
You have O(n) in:
a = deque(a)
In order to avoid this, you should use a deque from the beginning like:
from collections import deque

def array_left_rotation(a, n, k):
    a.rotate(-k)

n, k = map(int, input().strip().split(' '))
a = deque(map(int, input().strip().split(' ')))
array_left_rotation(a, n, k)
print(*a, sep=' ')
And array_left_rotation only takes O(k) instead of O(n).
Note that a is passed by reference, so there is no need to return anything; this could be an issue for some use cases, but for this particular problem it works.
ansimionescu + 1 comment
k -> k%n
domar + 1 comment
Pretty smart, but are you sure you are not copying the kth element? In Ruby it would be:
def array_left_rotation(a, k) a[k..-1] + a[0...k] end
In Ruby, `...` means excluding the right endpoint.
kevinmathis08 + 1 comment
Yes, both qzhang's and my answers are correct. In Python, index slicing (indices[start:stop:step]) works like so...
We begin with the index specified at start and traverse to the next index by our step amount (i.e. if step = 2, we jump over every other element; if step = 3, we jump over 2 elements at a time). If step is not specified it defaults to 1. We continue stepping from our start point until we reach or exceed our stop point. We do NOT get the stop element; it simply represents the point at which we stop.
I love Python :)
burakozdemir32 + 0 comments
What about if 'k' is greater than 'n'? You should use modular arithmetic to get actual rotate count.
actual_rotate_count = k % n
Then your solution would work for every k values.
jhaaditya14 + 2 comments
I am getting a request timeout for test case 8... anyone with the same problem? Does anyone know the solution?
vabanagas + 1 comment
The test case is a large array with a large number of shifts. If your algorithm is not efficient, then it will time out.
Array size: 73642 Left shifts: 60581
shilpaJayashekar + 1 comment
If you are using JavaScript, this will work:

var b = a.splice(0, d);
a = a.concat(b);
belolapotkov_v + 0 comments
Super weird but checked twice
function rotLeft(a, d) { const headIndex = d % a.length const head = a.splice(0, headIndex) return a.concat(head) // fails test 9 as it creates a new array }
function rotLeft(a, d) { const headIndex = d % a.length const head = a.splice(0, headIndex) a.push(...head) return a // passes test 9 as it modifies initial array }
chenyu_zhu86 + 1 comment
Python index slicing makes this trivial :D
def array_left_rotation(a, n, k):
    return a[k:] + a[:k]
darkOverLord + 6 comments
I did it this way, in Java
public static int[] arrayLeftRotation(int[] a, int n, int k) {
    if (k >= n) {
        k = k % n;
    }
    if (k == 0) return a;
    int[] temp = new int[n];
    for (int i = 0; i < n; i++) {
        if (i + k < n) {
            temp[i] = a[i + k];
        } else {
            temp[i] = a[(i + k) - n];
        }
    }
    return temp;
}
vinaysh + 1 comment
instead of if-else statement temp[i] = a[(i+k)%n]; would be enough.
Also this solution would take up extra memory(for temp).
shortcut2alireza + 2 comments
Would you mind sharing your solution that does it in-place? Thanks
jacob0306 + 1 comment
In case you need it! Hope it helps:

for (int a_i = 0; a_i < n; a_i++) {
    a[((n - (k % n)) + a_i) % n] = in.nextInt();
}
for (int i = 0; i < n; i++)
    System.out.print(a[i] + " ");
System.out.println();
hackerrank_com23 + 3 comments
Here's my in-place function implementation:
public static int[] arrayLeftRotation(int[] a, int n, int k) {
    // Rotate in-place
    int[] temp = new int[k];
    System.arraycopy(a, 0, temp, 0, k);
    System.arraycopy(a, k, a, 0, n - k);
    System.arraycopy(temp, 0, a, n - k, k);
    return a;
}
cc_insp + 0 comments
I used the System.arraycopy() method which was used in the video tutorial. I'm wondering whether this solution is more efficient, or mine?

for (int a_i = 0; a_i < n; a_i++) {
    a[a_i] = in.nextInt();
}
a = leftRotation(n, k, a);
for (int i = 0; i < a.length; i++) {
    System.out.print(a[i] + " ");
}

public static int[] leftRotation(int n, int k, int[] a) {
    int[] copy = new int[n];
    System.arraycopy(a, k, copy, 0, (n - k));
    System.arraycopy(a, 0, copy, (n - k), (n - (n - k)));
    return copy;
}
yash_97373 + 0 comments
Is the calculation really required?

for (int i = k; i < a.length; i++) {
    System.out.print(a[i] + " ");
}
for (int i = 0; i < k; i++) {
    System.out.print(a[i] + " ");
}
HeinousTugboat + 4 comments
One line of JS, no looping:
console.log(a.concat(a.splice(0, k)).join(' '));
tienle_dalat + 0 comments
One step forward with spread operator:
return [...a.splice(d, a.length - 1), ...a];
mmicael + 2 comments
In PHP:
$list = array_merge(array_slice($a, $k), array_slice($a, 0, $k));
echo implode(" ", $list);
quliyev_rustam + 0 comments
Hi!
You are on the right track, but the code is wrong.
You need to use this:

$list = array_merge(array_slice($a, $k, count($a) - $k), array_slice($a, 0, $k));
kuttumiah + 1 comment
Hi,
For cases including shifts more than the array size this should work.
$actual_shift = $d % count($a);
$list = array_merge(array_slice($a, $actual_shift), array_slice($a, 0, $actual_shift));
quliyev_rustam + 1 comment
It is not necessary. By the hypothesis of the task, shifts can't be more than the array size.
kuttumiah + 1 comment
Yeah, Thanks for the response. I missed that hypothesis.
In that case, shouldn't this be just fine?
array_merge(array_slice($a, $k), array_slice($a, 0, $k));
quliyev_rustam + 1 comment
In this test, for "Tester 8", you need to use the third argument of array_slice; otherwise, an error is returned.
This is because "Tester 8" uses a large array of transmitted values.
https://www.hackerrank.com/challenges/ctci-array-left-rotation/forum
Hi)
Try this and let me know please: io_scene_vox.py (11.6 KB)
Btw, I started using both Python and Blender last week, so… maybe it is totally wrong
Hear from you soon!
ps. this is the original GitHub repository of the project by Richard Spencer
Just a tip to set “shadeless” materials, removed in 2.80; use nodes and set Emission:
# material.use_shadeless = use_shadeless
if use_shadeless:
    material.use_nodes = True
    material_diffuse_to_emission(material)
Add the following helpers:
import bpy

def replace_with_emission(node, node_tree):
    new_node = node_tree.nodes.new('ShaderNodeEmission')
    connected_sockets_out = []
    sock = node.inputs[0]
    if len(sock.links) > 0:
        color_link = sock.links[0].from_socket
    else:
        color_link = None
    defaults_in = sock.default_value[:]
    for sock in node.outputs:
        if len(sock.links) > 0:
            connected_sockets_out.append(sock.links[0].to_socket)
        else:
            connected_sockets_out.append(None)
    # print(defaults_in)
    new_node.location = (node.location.x, node.location.y)
    if color_link is not None:
        node_tree.links.new(new_node.inputs[0], color_link)
    new_node.inputs[0].default_value = defaults_in
    if connected_sockets_out[0] is not None:
        node_tree.links.new(connected_sockets_out[0], new_node.outputs[0])

def material_diffuse_to_emission(mat):
    doomed = []
    for node in mat.node_tree.nodes:
        if node.type == 'BSDF_DIFFUSE' or node.type == 'BSDF_PRINCIPLED':
            replace_with_emission(node, mat.node_tree)
            doomed.append(node)
        else:
            print(node.type)
    # wait until we are done iterating and adding before we start wrecking things
    for node in doomed:
        mat.node_tree.nodes.remove(node)

def replace_on_selected_objects():
    mats = set()
    for obj in bpy.context.scene.objects:
        if obj.select_get():
            for slot in obj.material_slots:
                mats.add(slot.material)
    for mat in mats:
        material_diffuse_to_emission(mat)

def replace_in_all_materials():
    for mat in bpy.data.materials:
        material_diffuse_to_emission(mat)
References:
The option appears under “import”, but after choosing a .vox file and clicking “import”, nothing happens.
Thanks for your feedback.
Please, try using the updated script by the author: he merged my pull request just a few hours ago!
We added “collection” handling also: when imported, you’ll have a new collection to group every voxel object.
Btw, upload your VOX file so we can investigate further… the latest file format is not supported yet. Add this to the downloaded Python script to bypass/skip new chunks:
elif name == 'nTRN':
    vox.read(s_self)
elif name == 'nGRP':
    vox.read(s_self)
elif name == 'MATL':
    vox.read(s_self)
elif name == 'LAYR':
    vox.read(s_self)
elif name == 'rOBJ':
    vox.read(s_self)
Right before the “throw an error if unknown chunk” part:
else:
    # Any other chunk, we don't know how to handle
    # This puts us out-of-step
    print('Unknown Chunk id {}'.format(name))
    return {'CANCELLED'}
See also:
It works! Thank you very much!
Np, we are working on frame-based animations and MATT/MATL chunk support (just for “basic” glass/metal/plastic/emit shaders).
If you need them, please see
Frame-based animations #11
MagicaVoxel-VOX-importer/io_scene_vox.py from wizardgsz MATT_MATL branch
I have an issue with importing .vox file in blender.
Solved. Please use GitHub to open issues, not here. Thanks.
https://blenderartists.org/t/porting-addon-to-2-8/1182671
git equivalent of svn status -u
If you fetch:
git fetch <remote>
instead of pulling:
git pull <remote>
from the remote, you can then inspect what changed with
git log. To apply the changes:
git merge <remote>/<remote-branch>
What's the git equivalent of svn status -u, or the more verbose svn status --show-updates?

The svn status --show-updates command shows the updates that the svn update command will bring from the server.
Thanks!
Both Martinho Fernandes's and tialaramex's answers correctly describe what you need to do. Let me describe why it is the way it is.
Subversion
Subversion is a centralized version control system. It operates in a client-server fashion: the server stores all the version data (the repository), while the client has only a working directory (files) plus some administrative and helper data. This means that for most commands the client has to contact the server. It also means that there are many commands asking about the state of the repository on the server, or about server configuration, like "svn status --show-updates" in the question.
(Sidenote: one piece of helper data that Subversion stores on the client is the "pristine" version of each file, which means that checking for changes you made doesn't require connecting to the server (which is slow)... but it also means that an SVN checkout might be larger than a Git repository.)
"svn update" (required before commit if repository has any changes in given branch) downloads last version from remote and merges (tries to merge) changes you did with changes from remote. IMHO this update-before-commit workflow is not very conductive.
Git
Git is a distributed version control system. This means that it operates in a peer-to-peer fashion: each "client" has all the version data (a full repository). The central repository is central only by social convention, not by technical limitation. This means that when contacting another remote repository, the number of commands "executed remotely" is very small. You can ask for references (heads, a.k.a. branches, and tags) with "git ls-remote" (and "git remote show"), you can get or publish data with "git fetch" (or "git remote update") / "git push", and if the server is configured to allow it you can get a snapshot of the state of the remote repository with "git archive --remote".
So to examine commits which are in the remote repository but not present in your repository, you have to download the data to your machine. But "git pull" is in fact nothing more than "git fetch", which downloads data, plus "git merge", which merges it (with a bit of sugar to prepare commit messages and choose which branch to merge). So you can use "git fetch" (or "git remote update"), examine the newly fetched commits with "git log" and "gitk" (not being limited to a fixed output), and then, if everything is all right, merge the changes with "git merge".
This is not specific to Git but common to all distributed version control systems, although the way an SCM presents fetched but unmerged data might differ (Git uses remote-tracking branches in the 'remotes/<remotename>/*' namespace; Mercurial, from what I understand, uses unnamed heads).
HTH
I can't think of a way to do it without actually fetching the updates (maybe someone else will). Assuming you are on the default branch "master" and the upstream from which these hypothetical updates will come is the default remote "origin", try....
git fetch
git log --name-only ..origin/master
Note the double dots (..), not a single dot or an ellipsis1.
This will give you a list of log entries for changes that are only upstream, with the filenames affected, you can alter the parameters to git log to get more or less information.
NB in git "fetching" these updates isn't the same as applying them to your local branch. You no doubt already know how to do that with git pull.
1 As for where the double dots come from,
name1..name2 indicates a range. If
name1 is omitted,
HEAD is used in its place. This syntax refers to all commits reachable from
name2 back to, but not including,
HEAD. ["Git from the bottom up"]
You can use git ls-remote to list the SHAs of references in a remote repository, so you can see whether there are any changes by comparing the output of:
$ git show-ref origin/master   # <-- Where this repo thinks "origin/master" is
5bad423ae8d9055d989a66598d3c4473dbe97f8f refs/remotes/origin/master
$ git ls-remote origin master  # <-- Where "origin" thinks "master" is
060bbe2125ec5e236a6c6eaed2e715b0328a9106 refs/heads/master
If they differ, then there are changes to fetch:
$ git remote update
Fetching origin
...
From github.com:xxxx/yyyy
   5bad423..060bbe2  master -> origin/master
git fetch && git log --name-status ..origin/master does indeed show the logs that would be merged. However, it also downloads the changes. It's not technically possible to do the exact same thing as svn status -u, but git fetch is so fast that usually it shouldn't matter.
If you absolutely need the log before fetching, the only way would be to connect (via SSH or equivalent) to the remote and issue git log there.
Git gives us more tools to check for an "update". First you have to "download" the up-to-date state of the repository:
git fetch
Now you can get the list of the changed files:
git log --name-status ..origin/master
Additionally, you can see the full list of changes with diff:
git diff ..origin/master
The meanings of the status letters are: Added (A), Copied (C), Deleted (D), Modified (M), Renamed (R), type changed (T), Unmerged (U), Unknown (X), or pairing Broken (B).
https://code.i-harness.com/en/q/11612e
Internal management practices (Independent Variable)
For the statistical analyses of the second hypothesis, the research study will incorporate two of the four management practice categories of the World Management Survey and a subgroup of the additional items both identified by Bloom and Van Reenen (2007). More specifically, the items include the categories associated with Targets (6 items), Incentives (5 items) and the supplementary Central (3 items) information, while the analyses exclude the items related to Operations and Monitoring. Due to restructuring of the survey questions relative to the original survey set by Bloom and Van Reenen (2007), the research study derives the principal components of the incorporated 14 items to retrieve the management practices categories with a specific focus on the human capital aspect. The results indicate four factors with an Eigenvalue greater than 1, which cumulatively explain 61.4% of the total variance. Following an oblique rotation of the loadings, I label these four factors Management Score in Human capital, Management Score in Decision Making, Management Score in Target Coverage and Management Score in Target Setting. Further, I provide additional details about individual item loadings in the Appendix I.B – Table 3. The focus, however, will be on the “Management Score in Human capital” variable and its relation to voluntary disclosure. Nonetheless, the remaining latent variables serve as potential control variables to account for the majority of the internal practices.
Guidance (Dependent Variable)
To account for potential guidance provision to external stakeholders by the organizations, this dummy variable equals 1 if a management forecast with an earnings-per-share (EPS) measure is issued during the fiscal year t, and 0 otherwise.
Forecast Error (Dependent Variable)
This variable accounts for the absolute difference between the forecasted EPS by management and the actual EPS. Following Karamanou and Vafeas (2005), with the aim to ensure consistent comparisons across organizations, this measure is scaled by the logged assets per share at the beginning of the fiscal period.
Forecast Bias (Dependent Variable)
Like the forecast error variable, this measure accounts for the difference between the forecasted EPS by management and the actual EPS, however not in absolute but in signed terms. Further, a positive (negative) value suggests forecast optimism (pessimism). Lastly, this measure is also scaled by the logged assets per share at the beginning of the fiscal period (Karamanou and Vafeas, 2005).
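Restated in symbols (notation assumed here for illustration, where EPS-hat denotes the management forecast):

\[
\mathrm{ForecastError}_{it} = \frac{\left|\widehat{EPS}_{it} - EPS_{it}\right|}{\log(\mathrm{AssetsPerShare}_{i,t-1})},
\qquad
\mathrm{ForecastBias}_{it} = \frac{\widehat{EPS}_{it} - EPS_{it}}{\log(\mathrm{AssetsPerShare}_{i,t-1})}
\]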
Miss (Dependent Variable)
This measure accounts for meeting/missing the forecast targets in terms of point estimates (all retrieved EPS targets are illustrated as point estimates). The respective dummy variable equals 1 if the organization achieves an actual EPS greater than the forecasted EPS in the managerial guidance, and 0 otherwise.
Systematic Risk (Control Variable)
This variable accounts for the potential market-wide risk effects, as firms with higher risk levels are more inclined to provide additional guidance to the external market. The construction of the variable is based on the standard deviation of predicted returns at the beginning of the year, following the market returns over an estimation period of 12-60 months (Kasznik, 1995).
Analyst Disagreement (Control Variable)
This control aims to incorporate the interanalyst uncertainty in the earnings forecast of a firm (Brown, Foster and Noreen, 1985). In other words, management most likely experiences difficulties with respect to earnings forecasting when the outcome of this variable is higher and, subsequently, may face greater litigation risk (Brown, Foster and Noreen, 1985). The measure is defined as the standard deviation of analysts' forecasts divided by the median forecast.
Earnings Volatility (Control Variable)
Like Analyst Disagreement, Earnings Volatility aims to account for fluctuations in the earnings and future prospects. The reason for inclusion is the established association between an organization's earnings volatility and the frequency of management earnings forecasts, which may bias the results (Waymire, 1985). The variable is computed based on the standard deviation of quarterly earnings over the 12 quarters ending in the current fiscal year, divided by the median asset value for the period (Ajinkya, Bhojraj and Sengupta, 2005).
Loss (Control Variable)
Further, the study also controls for organizational loss-making due to the specific implications this may cause to the external environment. For instance, previous research indicates that earnings have lower value-relevance for loss-making firms (Hayn, 1995). Furthermore, reaching financial analyst targets becomes less important, and various authors have established a significant difference between the analyst forecast errors of loss and profit firms (Degeorge, Patel, and Zeckhauser, 1999; Brown, 2001). This indicator variable equals 1 if the organization reports losses in the respective fiscal year, and 0 otherwise.
Number of Analysts (Control Variable)
This measure captures the number of analysts following the organization. Past literature indicates a positive relationship between the quality of organizational disclosure and the number of analysts following the organization (Brown, Foster and Noreen, 1985). The resulting control is computed with the natural logarithm of one plus the number of analysts that issued an EPS forecast at the beginning of the fiscal year. However, this variable mainly influences the actual issuance compared to the forecasting properties. As a result, the subsequent regressions with respect to forecast properties include an alternative control variable (Horizon) (Karamanou and Vafeas, 2005).
Horizon (Control Variable)
This variable follows previous literature and attempts to account for greater earnings uncertainty and the unobservable accuracy of managers' beliefs (Baginski and Hassell, 1997). The research study uses the common definition of the number of days between the forecast date and the end of fiscal period. As previously mentioned, the horizon variable is included due to the greater impact on the forecasting properties relative to Number of Analysts (Karamanou and Vafeas, 2005).
ROA (Control Variable)
To control for extreme operating performance, the study includes the ratio between the organization's operating income divided by the total assets.
Methodology - Testing: Event Study (Hypothesis 1a)
To test these hypotheses, both proprietary data on organizational design choices and publicly available data of the award-winning firms and their respective corporate valuations are used. The initial sample consists of 168 corporations from numerous industries that received the ATD BEST Award between 2009-2014 (ATD BEST Awards, 2018). The event study analysis on this data is based on the framework outlined by Brown and Warner (1985) and the respective four steps for analyzing and evaluating the effects of events are performed.
1. Data Collection: The market returns and adjusted closing prices are obtained from CRSP, Orbis and Compustat.
2. Specification of event date: The ATD BEST training award receipt day will be labeled as the event date with the event windows specification of ± three-, ± five-, ± ten- and ± fifteen-days.
3. Calculation of Expected Return (CAPM):
Ex(R_it) = α_i + β_i × R_Market + u_it

where Ex(R_it) is the expected return of organization i on date t, α_i is the intercept, β_i is the beta of organization i, R_Market is the market return (proxied by the S&P 500) on date t, and u_it is the error term for firm i on event date t.
4. Calculation of Abnormal Return:
AR_it = Ac(R_it) − Ex(R_it)

where AR_it is the abnormal return of organization i on date t, Ac(R_it) is the actual return of organization i on date t, and Ex(R_it) is the expected return from the previous step.
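To make steps 3 and 4 concrete, the following is a minimal TypeScript sketch (illustrative only; the function names and inputs are assumptions, not the study's code):

// Market-model expected return for one day: Ex(R_it) = alpha_i + beta_i * R_Market
function expectedReturn(alpha: number, beta: number, marketReturn: number): number {
  return alpha + beta * marketReturn;
}

// Daily abnormal returns over an event window: AR_it = Ac(R_it) - Ex(R_it)
function abnormalReturns(actual: number[], market: number[], alpha: number, beta: number): number[] {
  return actual.map((r, t) => r - expectedReturn(alpha, beta, market[t]));
}

// The cumulative abnormal return (CAR) is the sum of the daily ARs
function cumulativeAbnormalReturn(ar: number[]): number {
  return ar.reduce((sum, x) => sum + x, 0);
}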
To further strengthen the statistical power, I conduct both parametric and non-parametric tests. More specifically, the study applies a Patell test (also known as the standardized residual test), a standardized cross-sectional test, a rank test and a generalized sign test with respect to the abnormal return estimation. The parametric tests make it possible to control for event-induced volatility and serial correlation (Patell, 1976; Boehmer, Musumeci and Poulsen, 1991). The non-parametric tests enable the analysis of data which may be subject to outliers or measured imprecisely; their main advantage lies in avoiding assumptions with respect to the parameters (Corrado and Zivney, 1992; Cowan, 1992). Following the robustness testing of return significance, the study attempts to explain the variation in these abnormal returns. With this intention, I conduct the subsequent regression analyses.
Methodology - Testing: Regression Analyses (Hypothesis 2a-2e)
The overall model of the subsequent regression follows below:
AR_it = β0 + β1 R&D Intensity_it + β2 Salary Competitiveness_it + β3 Marketing Intensity_it + β4 PPE Intensity_it + β5 Firm Performance_it + β6 Firm Size_it + β7 Previous Award_it + β8 Industry Dummies + β9 Country Dummies + ε_it   (1)
The respective results are shown in Tables 15 and 17. Further, to identify the individual effects of the independent variables, I will test the respective variables in different models. Each of the respective models 2-7 (Model 1 used as the control model) is analyzed three times (Event day, ± Three-Day Event Window and ± Five-Day Event Window), and each version applies a different abnormal return window. The choice for the specific windows follows the findings in Hypothesis 1, the significance of the abnormal returns in these event windows and the support of these windows in previous research studies (MacKinlay, 1997). Further, the descriptive statistics are reported in Appendix II.A – Table 12.
Methodology - Testing: Probit Analyses (Hypothesis 3a-3c)
The model for Hypothesis H3a is shown below:
Pr(Issuance_it) = β0 + β1 Management Score in Human Capital_it + … + β_k Number of Analysts_it + ε_it   (2)
The results of the guidance probability are shown in Table 20. Further, to identify the properties of the individual forecasts, I will test the different dependent variables in different models. The resulting three models are displayed below, and the results of their analyses are depicted in Table 21. As previously mentioned, in these models the Number of Analysts is replaced with the Horizon variable. Further, the descriptive statistics are reported in Appendix III.A – Tables 18 and 19.
The model for Hypothesis H3b-3c is shown below:
Forecast Bias_it = β0 + β1 Management Score in Human Capital_it + β2 Management Score in Decision Making_it + β3 Management Score in Target Coverage_it + …   (3)

Pr(Miss_it) = …   (4)
V. RESULTS AND DISCUSSION
This section displays the results of the statistical tests defined in the methodology part and describes the findings. Firstly, the study focuses on the findings of the event study and subsequent significance test. These significant results are then used to answer the first research question, which focuses on the economic value of human capital development. Next, the regression results with respect to the abnormal return variance are analyzed and described.
Results of Event Study
Hypothesis 1a
In general, the results of the parametric and non-parametric assessments indicate significant evidence to suggest that organizations derive economic returns from their expenditures into human capital development, as proposed by the first hypothesis (H1). The tests focus on numerous estimation windows including (-0, +0) to (-15, +15), which provide significant evidence for hypothesis 1 at the 10%, 5% and 1% significance levels. The respective reasons for these specific windows include the potential leakage of the award information for windows preceding the event date, while estimation periods following the event date aim to account for delayed trading and information-processing periods of stakeholders. This premise is supported by the results (Appendix II.B – Table 13 and 14) which indicate significant abnormal returns for the estimation windows (3) and (4) relative to the event window (5) estimation. In other words, the actual announcement day fails to fully impound the price changes. Furthermore, the longer estimation periods (1) and (2) provide non-significant abnormal returns, which could result from a dilution of information over the extensive period.
The findings result from the two commonly applied parametric tests, the Patell test and the standardized cross-sectional test, which solely differ in the correction method for potential cross-sectional variation (Patell, 1976; Boehmer, Musumeci & Poulsen, 1991). As a result, the discussion of the parametric tests will focus on the estimations by the Patell test; however, the outcomes of the standardized cross-sectional test provide similar results at slightly lower significance levels (Table 13). The first event period (-15, +15) shows a mean cumulative abnormal return (CAR) of 0.89%, which indicates that the average organization experienced an unexplained return of 0.89% over the 31-day event period. The respective Z-score based on a Patell estimation equals 0.791 and has a p-value of 0.2144. Consequently, this estimation window provides a positive but non-significant mean CAR. Similarly, the slightly shorter estimation period of 21 days (-10, +10) indicates a 0.77% mean CAR, which is represented by a Patell Z-score of 0.617. This estimate leads to a p-value of 0.2686 and, subsequently, also provides a positive but non-significant return. The estimation period regarding the 11-day event window suggests a 1.03% mean CAR (Patell-Z = 1.704, p-value = 0.0441), which follows the previous argument of higher returns around the announcement date. Further, this return estimate provides strong evidence for H1, by indicating that the average firm earns 1.03% of unexplained return over an 11-day period. In the fourth event period, the average CAR for a 7-day (-3, +3) event period amounts to 1.57%, which provides the strongest support for H1. More formally, the Patell Z-statistic of 2.398 represents a p-value of 0.0082, which indicates statistical support at the 1% level. Finally, the last estimation window solely incorporates the actual event day with an average cumulative abnormal return of 0.56%. This respective mean CAR leads to a Patell-Z score of 0.348 with a p-value of 0.3639, which indicates a positive but non-significant return estimate.
Next, this paper focuses on the non-parametric analyses, which indicate comparable results and support for H1 but further include the greater estimation periods. Similar to the parametric tests, the research study applies two non-parametric tests to confirm the robustness of the results. As the results of the generalized sign test represent the main differences relative to the parametric tests, the focus of this section will be on the Generalized Sign Test.
The key difference to the parametric test lies in the significance of the longer event window periods. Firstly, the longest event window (-15, +15) incorporates 34 positive, and 29 negative unexplained return estimations. Based on these estimates, the resulting Z-score amounts to 1.665 and the p-value to 0.0479, which provides significant evidence to reject the null hypothesis of H1 at the 5% level. The subsequent 21-day estimation window (-10, +10) provides similar results with 33 positive abnormal returns and 30 negative abnormal returns. Consequently, the generalized sign z-value equals 1.452, which provides support for the hypothesis H1 at the 10% significance level (p-value = 0.073). Further, the 11-day event period (-5, +5) provides the greatest statistical support for H1 with 35 positive and 28 negative abnormal return estimates. The subsequent z-statistic corresponds to 1.971 with a p-value of 0.0243. This provides statistical evidence at the 5% significance level. Lastly, the two remaining event windows of 1 (-0, +0) and 7 (-3, +3) both experience 32 positive, and 31 negative unexplained returns, respectively. As a result, the Z-statistic follows the positive sign prediction but indicates non-significance for both event windows (Z-Statistic=0.437, p-value=0.331; Z-Statistic=0.59, p-value=0.2775).
To summarize, the results for various event timeframes (7-day, 11-day, 21-day, 31-day event windows), in both parametric and non-parametric estimation techniques, indicate that organizations can derive economic value from the development of human capital.
Results of Regression Analyses
Following the event study description, the research study applies various regression analyses aiming to explain the variation of the previously identified abnormal returns. The abnormal returns from the event study will be used as the dependent variable in the following models. More specifically, the results indicate which externally available information, in form of performance ratios, enables stakeholders to explain the abnormal earnings possibilities related to investments in the development of human capital. These performance indicators (H2a-H2e) will all be tested for three different event period lengths. Despite the five possible event windows, I will focus on only event windows 3-5 (11-Days, 7-Days and 1-Day) due to the consistent use of these event window lengths in the accounting and finance literature (Brown and Warner, 1985). As mentioned before, the longer event windows are primarily to account for potential leakage and deferred trading activities. Generally, the regression results provide statistically significant evidence for various performance ratios. More specifically, the Research & Development intensity, Salary competitiveness and the PPE intensity explain part of the variation, resulting from the abnormal returns following human capital development. A more comprehensive analysis of the findings and associated hypothesis follows in the subsequent section.
Hypothesis 2a
The estimates for R&D intensity for estimation window 4 and 5 (-3, +3; -0, +0) provide no statistical support for the explanation of the abnormal returns (See Appendix II.C – Table 15 and 16). However, the 11-day event period provides statistical evidence in model 2 and 6 with the coefficients of -6.26 at a 10% significance level (Table 17). Despite the opposing sign relative to the prediction, this finding indicates partial support for Hypothesis H2a.
Hypothesis 2b
Similarly to the previous hypothesis, the coefficients for the shorter event periods indicate non-significance for the salary competitiveness variable (Tables 15 and 16). However, the estimates of -9.45 (Model 3) and -7.71 (Model 7) for the salary competitiveness variable deliver statistically significant results in models 3 and 7 at the 5% and 10% significance levels, respectively (Table 17). Nonetheless, the signs of the coefficients contradict the respective prediction.
Hypothesis 2c
In contrast to the previous hypothesis, the marketing intensity coefficient fails to provide significant coefficients in Models 1 and 3. Nonetheless, the estimate of 0.998 (Model 2) marginally supports the notion that the marketing intensity (H2c) of an organization can explain a portion of the unexplained variation of the abnormal returns related to human capital investment. The coefficient is marginally significant at the 10% significance level.
Hypothesis 2d
The estimate for PPE intensity provides statistically significant results in two of the three event periods (Tables 15 and 17). More formally, the coefficients in event windows 1 and 3 provide marginal support for H2d at the 10% significance level across various model specifications. Notably, the respective coefficients for PPE intensity are negative across all models and event periods, which follows the previously non-specified direction of the hypothesis.
Hypothesis 2e
The results with respect to the interaction effect of physical capital and R&D intensity provide non-significant coefficients across all event period lengths and models. Despite the consistent sign prediction, the interaction term delivers highly insignificant estimates. Moreover, this is also confirmed in Table 17 (Model 6), which indicates a significant effect of each individual variable on the abnormal returns relative to the effect of the interaction term. Conclusively, the results provide no significant support for H2e.
Results of Logit/Probit models
Descriptive statistics
First, an overview of the descriptive statistics is displayed in Appendix III.A. Table 18-19. Specifically, in Table 19, the comparison between guidance and non-guidance organizations indicates significantly greater scores in the key internal area, human capital, by organizations that provide earnings forecasts. However, the other internal design variables, namely decision-making and target-related practices seem to be similar across those two types of organizations. Further, the organizations that provide guidance also seem to be statistically different in other areas. More specifically, organizations that provide guidance seem to be larger, less risky, more profitable and to have greater analyst coverage relative to their non-guidance providing counterparts.
Hypothesis 3a
Table 20 reports the results of Hypothesis H3a, which are also in line with the aforementioned differences between guidance and non-guidance firms stated in the descriptive statistics. More formally, the coefficient of 0.018 (Management Score in Human Capital) indicates a positive influence of the internal human capital practices on the probability of issuing a management earnings forecast (p-value = 0.041). Furthermore, none of the remaining internal management practices has a significant effect on the likelihood of earnings forecast issuance. With respect to the control variables, the disagreement among analysts and return on assets seem to be key factors when managers decide whether they want to provide guidance to the capital market. The resulting coefficients provide evidence at the 5% and 10% significance levels, respectively. As a result, the outcome in Table 20 supports the premise of Hypothesis H3a.
Hypothesis 3b
Firstly, the models include an additional variable to control for potential selection bias and non-randomness of the issuance of management forecasts. More specifically, the inclusion of the inverse Mills ratio aims to account for the choice of providing a management forecast (Heckman, 1976). In other words, controlling for the correlation between error terms is specifically important in this setting, as human capital practice scores are expected to affect both the probability of issuance and the forecast properties of the management guidance (Lennox, Francis and Wang, 2011). The latter effect is also expected to be conditional on the actual issuance of a guidance forecast. Following this model specification, the forecast bias results, in both absolute and signed versions, are displayed in Table 21, Columns (1) and (2). The coefficients on the main independent variable of interest (Management Score in Human Capital) support the notion of inaccurate forecast issuance by management at the 5% significance level. However, the sign opposes the prediction and indicates a positive bias of the earnings guidance. With respect to the signed forecast bias, the internal target setting practices (Management Score in Target Setting) also show a significant positive effect on the directional bias of the management guidance (p = 0.0674). Further, numerous control variable coefficients also influence the forecast error and bias. Specifically, the length of the forecast horizon, potential loss-making and market-wide risk effects (Systematic Risk) seem to significantly bias the forecasts, at the 5%, 10% and 10% levels, respectively.
Hypothesis 3c
The results in Table 21 – Column (3) follow the previous notion and suggest that firms with higher human capital practices scores (Management Score in Human capital) are more likely (less likely) to meet (miss) their own and analysts' earnings targets. Furthermore, the internal target setting practice also increases the probability of firms reaching the EPS forecast set by themselves and their analysts. However, none of the other internal practices seem to significantly influence the probability of meeting EPS measures. With respect to the control variables, the variable for interanalyst uncertainty of the earnings forecast appears to significantly affect the likelihood of forecast fulfilment by the organizations.
Discussion of the Event Study
Hypothesis 1a
Both the parametric and non-parametric results of the event study indicate a clear and robust answer with respect to the first research question. These findings strengthen the premises of human capital development and its value creation potential. More specifically, the ATD BEST award receipt provides the external environment with information about the organization's human capital development efforts. Following the semi-strong form efficiency assumption, the external market perceives this signal as previously unknown information, which then leads to the inclusion of future value opportunities into the current stock price and corporate valuation of the organization. In the following paragraphs, I will further discuss the individual performance indicators that enable internal and external stakeholders to explain this market reaction and identify potential differentiation strategies across companies.
Discussion of the Regression Results
Hypothesis 2a
As indicated by Models 2 and 6 in Table 17, the R&D intensity of organizations significantly affects the abnormal returns resulting from a training award receipt. This does provide partial support for H2a; however, the findings oppose the predicted direction of this relationship. A potential explanation for these results may stem from the ambiguous reporting rules related to research and development. For instance, Lev, Sarath and Sougiannis (2005) investigate the extent and consequences of biases in R&D reporting. In their study, they find that reporting speed (conservative or aggressive) with respect to R&D expenses, as their bias of interest, significantly affects the resource allocation in the capital markets. More specifically, organizations strategically report R&D expenses, conservatively or aggressively, based on the difference between their R&D growth rate and profitability measures. As a result of this reporting bias, firms seem to be under- or overvalued, which misleads the external parties and their allocation behavior. Similarly, this misreporting of R&D expenses or other complex theory implications may be responsible for the contradicting sign relative to the proposed hypothesis. To conclude, the significant findings indicate the need to further investigate the R&D reporting process and potential future value expectations of investors with respect to R&D.
Hypothesis 2b
Similar to the previous hypothesis discussion, the coefficients for Salary competitiveness provide marginally significant support in model 3 and the complete model 7 (Table 17). Nonetheless, the signs of these coefficients do not follow the predicted direction set by H2b. A potential reason for this direction may be a preference for non-financial compensation over financial compensation. More specifically, modern compensation packages include flextime, telecommuting options or other non-financial perks, which are highly valued by individuals. This premise is also confirmed by Schlechter, Thompson and Bussin (2015), who examined the attractiveness of non-financial rewards for prospective employees in knowledge industries. By collecting questionnaire data about the perceived level of job attractiveness, they identify the importance of non-financial reward elements (e.g. career perspectives, training opportunities) for employees' perceived attractiveness of potential job offerings. As a result, employees may assume that these opportunities are relatively modest in highly-demanding but highly-compensated jobs (high salary competitiveness ratio).
Hypothesis 2c
The third sub-hypothesis H2c focuses on the effect of Marketing Intensity on the abnormal returns. This effect is marginally significant (p-value = 0.070) in the fourth event window (-3, +3) and follows the predicted direction (Table 16). Thus, the advertising activities and potential utilization of “advertising-assets” seem to be interconnected with highly-skilled labor and the investments in this workforce. Nonetheless, the findings must be interpreted with caution due to the marginal support in only one model specification and event window length.
Hypothesis 2d
The event windows 3 and 5 (Table 15 and 17) both provide strong support for a direct effect of PPE intensity on the human capital induced abnormal returns. Furthermore, as indicated in the hypotheses section, one can argue for either a positive or negative effect on the dependent variable. However, the findings indicate a consistent negative relationship on the abnormal returns. As a result, the negative direction suggests that investors may interrelate high levels of physical capital with lower skill requirements for human capital. This argumentation would also be supported by the relatively high frequency of manufacturing firms in the sample. More specifically, the trend towards physical labor replacement in this industry remains relatively high, which would support the aforementioned relationship between physical capital and skill requirements for human capital. Despite the significance of this variable, additional tangible asset measures need to be examined to strengthen the assumptions with respect to automation and related human capital requirements.
Hypothesis 2e
Regarding the effect of R&D and physical capital, no model specification across all the event windows provides significant support for this interaction variable. Consequently, the findings fail to provide supporting evidence for H2e; however, this also presents a compelling deviation from the previous results. More specifically, the individual effects of both R&D (H2a) and PPE (H2d) intensity provide significant evidence, while the interaction between the two measures yields insignificant results. Further, this result seems to indicate that the marginal influence of R&D intensity does not differ across various levels of PPE intensity, and vice versa. Nonetheless, this non-significance may also result from the difficulties with respect to adequate R&D reporting.
Discussion of the Logit/Probit Model Results
Hypothesis 3a
As previously mentioned, the positive coefficient on the human capital practices, depicted in Table 20, provides support for the expectation that human capital practice quality has a significant impact on the probability that management issues an earnings forecast. This is consistent with the findings of Verrecchia (1990) and Penno (1997) that the quality of private information positively influences the probability of forecast disclosures. However, it is important to highlight the nature of the disclosed information. The greater likelihood of disclosure suggested by Verrecchia (1990) only holds for proprietary information. The human capital practices of an organization fit into this information type and, therefore, fulfill the premise of the respective paper. On the other hand, Penno (1997), who builds on Dye (1985) and Jung (1988), shows that the common economic notion of higher-quality information and the accompaniment of greater voluntary disclosure fails in a non-proprietary setting. On the contrary, in such environments, the disclosure frequency is mostly independent of the information quality. Importantly, certain information quality thresholds even decrease the voluntary disclosure frequency. However, this premise only holds when the ex-ante information quality is increasing in the associated ex-post information quality (Penno, 1997). Such scenarios highlight the importance of identifying the correct private information type and its quality when intending to voluntarily disclose information to the external market. In addition, the non-significance of the other internal management practices specified in this model also provides key insights with respect to H3a. More specifically, the individual significant effect of human capital practice quality on the forecast issuance probability reduces the potential of other factors (e.g. overall internal design and management quality) to drive this relationship.
The direction of the significant control variables follows prior literature and expectation. For instance, the authors Cotter, Tuna and Wysocki (2006) discover that management guidance is more likely when analysts' forecast dispersion is low. Intuitively, lower levels of uncertainty across analysts, with respect to earnings forecasts, are associated with more consensus about the future prospect of the firm. This consensus can be used by management to guide analysts to a common (beatable) target, by issuing an earnings forecast. Similarly, organizations with higher levels of operating performance (ROA) are inclined to keep this measure constant and may use the private information, and respective guidance, to signal this efficiency or further enhance the performance ratio.
Hypothesis 3b
With respect to the forecast properties, forecast accuracy and the associated bias of the forecast, the results confirm a statistically significant inaccuracy and bias; however, the bias seems to be optimistic. The former result opposes prior literature with respect to the incentive of providing highly accurate earnings forecasts. However, despite the rewards for high disclosure accuracy, management may focus on the subsequent performance implications (e.g. meeting earnings targets, earnings surprise), which incentivizes the provision of more inaccurate and pessimistically biased guidance reports. Following the significant forecast error in Column (1), the next section will focus on the findings regarding the optimistic forecast bias. More specifically, Table 21 – Column (2) indicates that human capital practice quality (Management Score in Human Capital) and target setting quality (Management Score in Target Setting) positively bias the earnings forecast issued by management. This result opposes the predicted expectation of managerial incentives to create more achievable (beatable) targets and, subsequently, potential positive earnings surprises following the achievement of these targets. Importantly, this bias is also supported by the forecast inaccuracy displayed in Column (1), where human capital practice quality causes a significant increase in the forecast error. However, the main focus relates to the directional bias of the earnings forecast by management.
A potential explanation for the over-optimistic estimation, with respect to the EPS measure, may result from overconfidence in the forecast ability of the management team. This overconfidence may result from the high quality of human capital and the related private information of human capital practices possessed by management. For instance, Hribar and Yang (2016) find that overconfident managers issue earnings forecasts more frequently, positively bias these forecasts and provide less precise forecasts. However, their results also indicate that these overconfident and over-optimistic managers are more likely to miss both analysts' as well as their own targets. This subsequent premise does not hold in this research study, as previously described in the result section of hypothesis H3c. As a result, the optimistic forecast bias may be a signal of potential future value creation, resulting from the high levels of human capital and related practices. Similarly, the internal target setting practice quality leads to positive and significant biases of the earnings forecast. The explanation follows a similar argumentation; managers who are confident in their target-setting abilities may assess future earnings in a highly positive way. Alternatively, the high quality of internal target setting (e.g. balanced mix of financial and non-financial incentives, clear communication of incentives and weights, etc.) may incentivize employees to increase efficiency or aspire to additional growth, which enables the firm to accomplish these highly optimistic earnings targets. Lastly, the significant control variables Systematic Risk, Loss, Previous Award and Horizon follow the direction of prior literature. For example, greater exposure to market-wide effects and/or the resulting riskiness leads to more uncontrollable events, which reduce forecast accuracy and, subsequently, bias the forecasts. Similarly, a longer Horizon (e.g. a greater difference between the management forecast date and the fiscal year-end date) suggests a greater forecast error and directional forecast bias (Ajinkya, Bhojraj and Sengupta, 2005).
To summarize, the results do support the managerial incentive to provide less accurate forecasts; however, the predicted directional bias contradicts the argumentation of more achievable (beatable) EPS targets. Overall, the results of the analysis of hypothesis H3b are not consistent with the predictions made in this paper.
Hypothesis 3c
The last hypothesis (H3c) focuses on the probability of fulfillment of the earnings targets estimated by the management and the following analysts. In Table 21 – Column (3), the findings clearly indicate that the quality of human capital practices within the organization has a significant effect on the probability of meeting/beating the earnings targets. Furthermore, the internal target coverage practice quality also provides a significant increase to the likelihood of achieving the respective EPS targets. These results follow the argumentation in the hypothesis development and lead to the rejection of the null hypothesis. Interestingly, this result, in combination with the findings in Columns (1) and (2), indicates that managers just meet the respective targets; however, management makes the achievement of these forecasts more difficult through the positive directional bias. The former statement with respect to meeting/beating the EPS targets can be inferred from the positive effect of human capital practice quality on forecast errors. However, the latter result enhances the strength of this hypothesis. Not only does the organization avoid potential earnings surprises with the fulfillment of the EPS targets, but it also enables an initial signaling of future value creation with the initial over-optimistic forecast. Among the control variables, only the dispersion among analysts significantly influences the probability of meeting the management and analyst targets. Similar to the previous paragraph, higher analyst disagreement leads to more uncertainty with respect to the future prospects of the firm, which will result in the provision of less achievable targets by analysts. To conclude, the evidence supports that organizations with higher human capital practice quality are more likely to achieve their earnings targets.
https://www.essaysauce.com/essays/marketing/2018-8-17-1534510058.php
rob...@garrettfamily.us wrote:

> (Apologies in advance if I've sent this to an inappropriate list)
>
> Is it possible to pass an instance of an object from an externally
> called routine back to the parent routine? If so, how?
>
> Example:
> Assume two separate REXX routines (separate files): MAIN.rex and LOGGER.rex.
> I'd like to be able to, from MAIN, call LOGGER (passing it a file name), and
> have LOGGER create an instance of a stream object, open the file, and then
> pass the stream object back to MAIN, at which point MAIN would be able to use
> the stream object to write to the file (or perhaps pass it to other external
> routines that would also be able to use it).
> I'm already doing this with LOGGER being a routine that is part of MAIN (both
> located in the same file), but I want to be able to break LOGGER out into a
> separate file.
>
> Is this possible?

Sure. Break out the routines and add the keyword "PUBLIC" to each routine in LOGGER.rex. Then, before using the public routines which now reside in LOGGER.rex, you need to "CALL LOGGER", which will cause its public routines (and public classes) to become visible. Thereafter you can access all those routines as if they were defined in MAIN.rex.

> I've been writing and using REXX for many years in the mainframe
> environment so I'm quite familiar with the core language, but the
> facilities of ooREXX are still somewhat new to me.

If you are developing a logging facility for Rexx you might want to learn about an ooRexx implementation of the log-framework that originates from the Java world. Here's an article about that ooRexx framework, demonstrating how to use it: <>. The ooRexx code of the "log4rexx" framework can be downloaded from: <>, in case it is of interest for you.
HTH, ---rony
https://www.mail-archive.com/oorexx-devel@lists.sourceforge.net/msg01712.html
using System;
class C
{
public static void Main ()
{
const int i = 9; // Set a breakpoint here
Console.WriteLine (i);
}
}
The breakpoint is not hit; VS behaviour is that the breakpoint is automatically moved to the next line with symbol info.
MonoDevelop seems to leave the red marker on the "const int" line until the app is run, and then once the breakpoint is resolved, moves it to the CWL.
It seems broken, though, that when we are done debugging, it moves the bp back to the const int decl.
That appears to be by design, the debugger keeps track of the "adjustment" so that it can be restored afterwards. But I agree, I'm not sure that's the best thing to do.
That's not what is happening for me with MD master + Mono master. It does not break at all.
The breakpoint adjustment when debugging starts and when it ends is by design. If it does not break after the adjustment, then this is a debugger issue.
I cannot see any adjustment happening.
This is from Application Output
Could not insert breakpoint at '/home/marek/Projects/g2/g2/Main.cs:7': The breakpoint location is invalid. Perhaps the source line does not contain any statements, or the source does not correspond to the current binary
The adjustment only happens when the invalid position is inside a range of valid positions. For example, if you try to place the breakpoint outside of a method, no adjustment will be made. Maybe in this case that line falls outside the range of valid positions of the method.
Okay, so it seems to make a difference having the class in a namespace.
If you wrap a namespace around class C, then this works. Otherwise it says invalid breakpoint and doesn't adjust the breakpoint (and thus no breakpoint is hit).
n/m, now it's not working with the namespace either.
What else did I remove??...
aha! I had changed the Active Configuration to Mono 2.11
Mono 2.10.9 works as expected.
The problem was that Mono 2.10 emitted Location info for line 7, but Mono 2.11 does not.
The fix was to keep scanning locations beyond line 7 until we find something we can break on.
https://xamarin.github.io/bugzilla-archives/32/3238/bug.html
Opened 7 years ago
Closed 7 years ago
Last modified 7 years ago
#13413 closed (invalid)
blocks ignore if condition
Description
Here is a quick recap:
base.html:

{% block A %}
{% endblock %}
{% block B %}
{% endblock %}

cond.html:

{% extends "base.html" %}
{% if error %}
{% block A %}
ERROR!
{% endblock %}
{% else %}
{% block B %}
NO ERROR!
{% endblock %}
{% endif %}
No matter whether there is an error or not, you will always get:
ERROR! NO ERROR!
Change History (4)
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
comment:3 Changed 7 years ago by
Thanks for the clarification. Such a design makes life easier (especially for implementing the engine), although it somewhat limits the power of the template system. It would be nice if the compiler gave some warnings or even errors when you have logic outside a block. And it would be good to stipulate explicitly in the documentation that blocks must be top-level units in the child template, and that most things outside a block are invisible to the engine and the base template. Probably, we could explicitly stipulate that only {% extends %}, {% load %}, {% comment %}, and maybe a few others are allowed outside blocks in the child template. If that's the case, then here is another possible issue:
base.html:

{% block a %}
{% endblock %}

child.html:

some text.
{% block a %}
block a.
{% endblock %}
the engine would produce this:
some text. block a.
But all that is expected is just "block a." Would that be a mild bug, easy to fix? I noticed a similar ticket, #7324, which has nested blocks; by the design, that is also invalid, and it might be helpful if the documentation said that nested blocks are no good (as an example of violating the designed rules).
comment:4 Changed 7 years ago by
Now you're just making stuff up:
In [1]: from django.template import *

In [2]: a = Template('{% block a %}{% endblock %}')

In [3]: Template('{% extends a %}some text. {% block a %}block a.{% endblock %}').render(Context({'a': a}))
Out[3]: u'block a.'
Child templates can't have logic (outside of
{% block %}s). They simply provide blocks that can override the content in the extended template. Anything outside of a
{% block %}in a child template is effectively invisible to the template engine. The base template here defines blocks A and B, the child template overrides both, so both blocks get the content from the child.
https://code.djangoproject.com/ticket/13413
The File class in the System.IO namespace provides the ReadAllLines() method, which is used to read all lines of a text file and return an array of strings containing all the lines of the file.
public static string[] ReadAllLines (string filePath);
It takes the path of the file to read as an input and returns an array of strings.
The following exceptions are possible for this method:

- ArgumentException / ArgumentNullException: the path is empty, contains invalid characters, or is null.
- PathTooLongException: the path exceeds the system-defined maximum length.
- DirectoryNotFoundException: the specified path is invalid (for example, it is on an unmapped drive).
- FileNotFoundException: the file specified by the path was not found.
- IOException: an I/O error occurred while opening the file.
- UnauthorizedAccessException / SecurityException: the caller does not have the required permission.
- NotSupportedException: the path is in an invalid format.
In the following code example, a text file (test.txt) already exists in the current working directory. We will first check if this file exists, and then read all the lines of the file into a string array. Finally, we will print this array of strings using a foreach loop.
The program will terminate after printing the output below:
first line of file
second line of file
third line of file
fourth line of file
fifth line of file
using System;
using System.IO;

class FileLineReader
{
    public static void Main()
    {
        string filePath = @"test.txt";
        if (!File.Exists(filePath))
        {
            Console.WriteLine("File does not exist: {0}", filePath);
            return;
        }

        string[] textFromFile = File.ReadAllLines(filePath);
        foreach (string line in textFromFile)
        {
            Console.WriteLine(line);
        }
    }
}
https://www.educative.io/answers/how-to-read-all-lines-from-a-file-in-c-sharp
INSTALL
npm i node-beanstalk # or yarn add node-beanstalk
USAGE
node-beanstalk fully supports beanstalk protocol v1.12.
Client
node-beanstalk is built with use of promises.
Each client gives you full access to the functionality of the beanstalk queue manager, without a strict separation into emitter and worker.
import { Client, BeanstalkJobState } from 'node-beanstalk';

const c = new Client();

// connect to beanstalkd server
await c.connect();

// use our own tube
await c.use('my-own-tube');

// put our very important job
const putJob = await c.put({ foo: "My awesome payload", bar: ["baz", "qux"] }, 40);

if (putJob.state !== BeanstalkJobState.ready) {
  // as a result of the put command the job can end up in the `buried` state,
  // or `delayed` in case a delay or the client's default delay was specified
  throw new Error('job is not in ready state');
}

// watch our tube to be able to reserve from it
await c.watch('my-own-tube')

// acquire new job (ideally the one we've just put)
const job = await c.reserveWithTimeout(10);

/* ...do some important job */

c.delete(job.id);
c.disconnect();
As beanstalk is pretty fast but still synchronous on a single connection, all consecutive calls will wait for the end of the previous one. So the code below will be executed consecutively, despite being asynchronous.
import { Client, BeanstalkJobState } from 'node-beanstalk';

const c = new Client();
await c.connect();

c.reserve();
c.reserve();
c.reserve();
c.reserve();
c.reserve();
The above code will reserve 5 jobs one by one, in an asynchronous way (each promise will be resolved in turn).
To see all the Client methods and properties see Client API docs
Disconnect
To disconnect the client from the remote server, call client.disconnect(); it will wait for all the pending requests to be performed and then disconnect the client from the server. All requests queued after disconnection will be rejected.
To disconnect the client immediately, call client.disconnect(true); it will perform the disconnect right after the currently running request.
Payload serialization
As in most cases our job payloads are complex objects, they must somehow be serialized to a Buffer. In general, the serialized payload can be any byte sequence; by default, the payload is serialized via JSON and cast to a buffer, but you can specify your own serializer by passing the corresponding parameter to the client constructor options. The required serializer interface can be found in the API docs.
Pooling
For cases such as web servers, where waiting for all previous requests is not an option, the node-beanstalk Pool exists.
Why?
- Connecting a new client requires a handshake, which takes some time (around 10-20ms), so creating a new client on each incoming request would substantially slow down our application.
- As already mentioned, each connection can handle only one request at a time. So if your application uses a single client, all your simultaneous requests will be pipelined into a serial execution queue, one after another, which is really no good (despite the node-beanstalk queue being very fast and low-cost).
A client pool allows you to have a pool of reusable clients you can check out, use, and return back to the pool.
import { Pool } from 'node-beanstalk';

const p = new Pool({ capacity: 5 });

// acquire our very own client
const client = await p.connect();

try {
  // do some work
  await client.statsTube('my-own-tube')
} finally {
  // return client back to the pool
  client.releaseClient()
}
You must always release the client back to the pool; otherwise, at some point, your pool will be empty and your subsequent requests will wait forever.
Disconnect
To disconnect all clients in the pool, call pool.disconnect(). This will wait for all pending client reserves and returns to be done. After the disconnect has executed, all returned clients will be disconnected and not returned to the idle queue. All reserves queued after disconnection will be rejected.
Force disconnect
pool.disconnect(true) will not wait for pending reserves and will start disconnection immediately (it will still wait for clients to return to the pool) by calling force disconnect on each client.
TEST
node-beanstalk is built to be as test-covered as possible, without going nuts over LOC coverage. It is important to have comprehensive unit testing to make sure that everything is working fine, and that is my goal for this package. It is pretty hard to write real tests for the sockets used in this package, so the Connection class is only about 80% covered with tests; maybe I'll finish it later.
|
https://www.npmjs.com/package/node-beanstalk
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Barrier-OR packet.
More...
#include <hsa.h>

Barrier-OR packet. Definition at line 3108 of file hsa.h. Its members, in the order they appear in the header:

- Packet header (definition at line 3114 of hsa.h). Used to configure multiple packet parameters such as the packet type. The parameters are described by hsa_packet_header_t.
- Reserved field (definition at line 3119). Must be 0.
- Undocumented member (definition at line 3124).
- Array of dependent signal objects (definition at line 3131). Signals with a handle value of 0 are allowed and are interpreted by the packet processor as dependencies not satisfied.
- Undocumented member (definition at line 3136).
- Signal used to indicate completion of the job (definition at line 3142). The application can use the special signal handle 0 to indicate that no signal is used.
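These fields line up with the hsa_barrier_or_packet_t layout in the HSA runtime specification; the member names and types in the sketch below are taken from that spec rather than from this page, so treat them as an assumption:

#include <stdint.h>

/* Opaque 64-bit signal handle, as declared in hsa.h. */
typedef struct hsa_signal_s { uint64_t handle; } hsa_signal_t;

/* Sketch of the barrier-OR packet per the HSA spec (names assumed). */
typedef struct hsa_barrier_or_packet_s {
  uint16_t     header;            /* packet header, see hsa_packet_header_t */
  uint16_t     reserved0;         /* must be 0 */
  uint32_t     reserved1;         /* must be 0 */
  hsa_signal_t dep_signal[5];     /* dependencies; handle 0 = not satisfied */
  uint64_t     reserved2;         /* must be 0 */
  hsa_signal_t completion_signal; /* handle 0 = no completion signal */
} hsa_barrier_or_packet_t;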
|
http://doxygen.gem5.org/release/current/structhsa__barrier__or__packet__s.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
A week of symfony #248 (26 September -> 2 October 2011)
Development mailing list
- i18n: fallback mechanism seems is not working ok
- Got error when creating my own firewall
- ACL Memory Leak?
Symfony2 development highlights
- a57a4af: [DomCrawler] added a way to get parsing errors for Crawler::addHtmlContent() and Crawler::addXmlContent() via libxml functions
- 258a1fd: moved makePathRelative to Filesystem
- bfb99bf: [FrameworkBundle] added a --relative option to assets:install
- 0f7bf41: [Console] detect if interactive mode is possible at all
- b9ba117, ee0fe7a: [Validator] added a SizeLength validator
- d6c4bfb, ee0fe7a: [Validator] added a Size validator
- 8b240d4: implementation of kernel.event_subscriber tag for services
- 1467bdb: [Routing] added RouterInterface::getRouteCollection()
- ed02aa9: [Console] fixed list 'namespace' command display all available commands
- 72e82eb: [Serializer] replaced deprecated key_exists alias
- 9ade639, d535afe, 731b28b: [composer] added composer.json
- d6b915a, 369f181: [FrameworkBundle] added request scope to assets helper only if needed
- c13b4e2: fixed fallback catalogue mechanism in Framework bundle
- 2db24c2: removed time limit for the vendors script
- 1e7e6ba: [HttpFoundation] removed the possibility for a cookie path to set it to null (as this is equivalent to /)
- 1284681, b402835: [BrowserKit, HttpFoundation] standardized cookie paths (an empty path is equivalent to /)
- 17af138: fixed usage of LIBXML_COMPACT as it is not always available
- d429594: removed separator of choice widget when the separator is null
- 600b8ef: [Validator] added support for grapheme_strlen when mbstring is not installed but intl is installed
- e70c884: [Bridge/Monolog] fixed WebProcessor to accept a Request object
- 5c8a2fb: [Routing] fixed route overriden mechanism when using embedded collections
Repository summary: 3,119 watchers (#1 in PHP, #25 overall) and 791 forks (#1 in PHP, #13 overall).
New plugins
- sfUploadify: wraps the Uploadify library for jQuery.
- cpLDAPAuth: authenticates users against an LDAP directory.
Updated plugins
- sfPEARcaptcha:
- updated Text_CAPTCHA_Driver_Image
- updated image widget
- split the function 'render' to 'renderInputField' and 'renderCaptcha'
- sfTaskLogger:
- fixed the "_length" partial on Nix systems
- sfAssetsLibrary:
- fixed bug in images filtering
- cpTwig:
- added twig and twig-extensions as vendor libs
- apostrophe:
- cast $user_id to integer in case it arrives as an empty string due to a busted session. Prevents SQL error
- the new overrideLinks option to apostrophe.linkToRemote() can be used to inject an admin generator module or other really basic Symfony module that is not AJAX-aware into an AJAX container, such as a div
- aInjectActualUrl function allows me to call it multiple times after dom ready instead of how it was structured before
- the edit and new admin generator actions should have a title slot, just like the list action does
- RSS Feed
- backed out attempts to make the category admin gen classes autoload because Symfony's admin gen has a very limited way of autoloading things that looks for them in specific modulename/lib folders only
- fixed the aAdmin theme to work when there is no filter and no custom table method
- app_a_simple_permissions is an extremely simple alternative view permissions model
- correctly tolerate symfony cc without the right environment settings
- fixed bug with _menuToggle function in a.js
- apostropheCkEditor:
- fixed source view formatting. It was being displayed in a tiny window instead of the full editor frame
They talked about us
- Introducing KhepinUpdateBundle
- The problem that DSNs are not parsed properly in the parse_ini_file function's INI_SCANNER_RAW mode
- Symfony2: Working with multiple databases
- Loosening dependencies with closures in PHP
- A frontend editor for Symfony2 CMF with the help of VIE
- Only a few weeks left until Symfony Day Cologne
- symfony1 sfTaskLoggerPlugin 1-0-3 released
- October PHP Conferences
- Symfony2 unit database tests
- Symfony CMF hackday October 22nd in Cologne
- Integrating the Twig template engine into CodeIgniter 2
- Symfony 1.4: distance_of_time_in_words in French
- Gushing over Web Frameworks
- Smarty vs. Twig: performance
- Let's try using MongoDB with Symfony2
- Announcing the opCommunityTopicPlugin 1.0.2.1 release
- Create a custom password encoder for Symfony
- Symfony 2 – Events and Listeners
- Remove default CSS/Javascript in View.yml
- Developers meetup at the PHP Barcelona Conference
- Symfony Doctrine “where in” and “where not in” Syntax
- A recommendation of Symfony2, part 2: with authentication
- symfony doctrine:build fails in a MAMP environment
- Symfony 2 and Play!: productivity frameworks
- Symfony2 and the fundamentals of HTTP
- Installing the OpenPNE 3 series (Windows edition)
- Capifony + Symfony2 Revisited Experience
Help the Symfony project!
As with any Open-Source project, contributing code or documentation is the most common way to help, but we also have a wide range of sponsoring opportunities.
To ensure that comments stay relevant, they are closed for old posts.
What exactly does "In addition, parameters.ini configuration file was replaced by parameters.yml" mean?
Do we just change the extension and that's all? What are the benefits?
Thanks
The benefit is that this file was the only "default file" in .INI format. YAML is used by default in other config files.
Then, if the change is so simple, why did I highlight it in the excerpt of the post? Because the "parameters.ini" file is one of the most important files for new users and I wanted to point it out.
And when I said "Do we just change the extension and that's all?" I was talking about applying this new change, something like:
"So I should just change PARAMETERS.INI to PARAMETERS.YML and that's all? The symfony framework will work without changing anything more?"
Haha, now that I have written all this, I see that maybe it was not that easy to deduce.
Thanks for your answers :D.
|
https://symfony.com/blog/a-week-of-symfony-248-26-september-2-october-2011
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
lwt
Lwt is a concurrent programming library for OCaml.
Here is a simplistic Lwt program which requests the Google front page, and fails
if the request is not completed in five seconds:
open Lwt.Syntax

let () =
  let request =
    let* addresses = Lwt_unix.getaddrinfo "google.com" "80" [] in
    let google = Lwt_unix.((List.hd addresses).ai_addr) in

    Lwt_io.(with_connection google (fun (incoming, outgoing) ->
      let* () = write outgoing "GET / HTTP/1.1\r\n" in
      let* () = write outgoing "Connection: close\r\n\r\n" in
      let* response = read incoming in
      Lwt.return (Some response)))
  in

  let timeout =
    let* () = Lwt_unix.sleep 5. in
    Lwt.return None
  in

  match Lwt_main.run (Lwt.pick [request; timeout]) with
  | Some response -> print_string response
  | None -> prerr_endline "Request timed out"; exit 1

(* ocamlfind opt -package lwt.unix -linkpkg example.ml && ./a.out *)
In the program, functions such as Lwt_io.write create promises. The let* ... in construct is used to wait for a promise to become determined; the code after in is scheduled to run in a "callback." Lwt.pick races promises against each other, and behaves as the first one to complete. Lwt_main.run forces the whole promise-computation network to be executed. All the visible OCaml code is run in a single thread, but Lwt internally uses a combination of worker threads and non-blocking file descriptors to resolve in parallel the promises that do I/O.
Overview
Lwt compiles to native code on Linux, macOS, Windows, and other systems. It's
also routinely compiled to JavaScript for the front end and Node by js_of_ocaml.
In Lwt,
- The core library Lwt provides promises...
- ...and a few pure-OCaml helpers, such as promise-friendly mutexes, condition variables, and mvars.
- There is a big Unix binding, Lwt_unix, that binds almost every Unix system call. A higher-level module Lwt_io provides nice I/O channels.
- Lwt_process is for subprocess handling.
- Lwt_preemptive spawns system threads.
- The PPX syntax allows using all of the above without going crazy!
- There are also some other helpers, such as Lwt_react for reactive programming. See the table of contents on the linked manual pages!
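As a small illustration of those pure-OCaml helpers, here is a minimal sketch (assuming only the standard Lwt_mutex and Lwt_unix APIs) of two cooperative tasks updating a shared counter under a promise-friendly mutex:

open Lwt.Syntax

let counter = ref 0
let m = Lwt_mutex.create ()

(* Each task holds the mutex across a scheduling point, so the
   read-modify-write below stays atomic with respect to the other task. *)
let task () =
  Lwt_mutex.with_lock m (fun () ->
    let v = !counter in
    let* () = Lwt_unix.sleep 0.01 in
    counter := v + 1;
    Lwt.return_unit)

let () =
  Lwt_main.run (Lwt.join [task (); task ()]);
  Printf.printf "counter = %d\n" !counter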
Installing
Use your system package manager to install a development libev package. It is often called libev-dev or libev-devel.
opam install conf-libev lwt
Documentation
We are currently working on improving the Lwt documentation (drastically; we are
rewriting the manual). In the meantime:
The current manual can be found here.
Mirage has a nicely-written Lwt tutorial.
An example of a simple server written in Lwt.
Concurrent Programming with Lwt is a nice source of Lwt examples.
They are translations of code from the excellent Real World OCaml, but are
just as useful if you are not reading the book.
Note: much of the current manual refers to 'a Lwt.t as "lightweight threads" or just "threads." This will be fixed in the new manual. 'a Lwt.t is a promise, and has nothing to do with system or preemptive threads.
Open an issue, visit Discord chat, ask on
discuss.ocaml.org, or on Stack Overflow.
Release announcements are made in /r/ocaml, and on
discuss.ocaml.org. Watching the repo for "Releases only" is also an
option.
Contributing
CONTRIBUTING.md contains tips for working on the code, such as how to check the code out, how review works, etc. There is also a high-level outline of the code base.
Ask us anything, whether it's about working on Lwt, or any
question at all about it :)
The documentation always needs proofreading and fixes.
You are welcome to pick up any other issue, review a PR, add
your opinion, etc.
Any feedback is welcome, including how to make contributing easier!
Libraries to use with Lwt
|
https://ocaml.org/p/lwt/5.6.1
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
The ast.For class
ast.For(target, iter, body, orelse, type_comment)
ast.For is a class defined in the ast module that expresses a for loop in Python in the form of an Abstract Syntax Tree. When the parse() method of ast is called on Python source code that contains for loops, the ast.For class is invoked, which expresses the for statement as a node in an ast tree data structure. The ast.For class represents the For node type in the ast tree.
- target contains the variable(s) the loop assigns to, which can be a Name, Tuple, or List node.
- iter contains the item to be looped over, as a single node.
- body contains the list of nodes to execute.
- orelse contains the list of nodes to execute in the normal situation where no break statement is used in the loop.
- type_comment is an optional parameter with the type annotation as a comment.
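A quick way to see these fields in practice, before the full visitor example below, is to parse a small loop and dump its For node (a minimal sketch using only the standard ast module):

import ast

tree = ast.parse("for x in y:\n    pass")
# tree.body[0] is the For node; dump() shows its fields
print(ast.dump(tree.body[0]))
# prints something like:
# For(target=Name(id='x', ctx=Store()), iter=Name(id='y', ctx=Load()),
#     body=[Pass()], orelse=[])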
The following Python code illustrates an example of the ast.For class.
import ast
from pprint import pprint

class ForVisitor(ast.NodeVisitor):
    def visit_For(self, node):
        print('Node type: For\nFields: ', node._fields)
        self.generic_visit(node)

    def visit_Name(self, node):
        print('Node type: Name\nFields: ', node._fields)
        ast.NodeVisitor.generic_visit(self, node)


visitor = ForVisitor()
tree = ast.parse("""
for x in y:
    ...
else:
    ...
""")
pprint(ast.dump(tree))
visitor.visit(tree)
- We create a ForVisitor class that extends the parent class ast.NodeVisitor. We override the predefined visit_For and visit_Name methods of the parent class, which receive the For and Name nodes, respectively.
- Each handler calls the generic_visit() method to visit the child nodes of the input node.
- We create an instance of ForVisitor (line 14).
- We parse the source code with the ast.parse() method, which returns the root node of the resulting tree, and store this result in tree.
- The ast.dump() method returns a formatted string of the tree structure in tree. You can observe the string returned by the dump function in the output of the code. The output shows the parameter values produced from the code that is passed to the parse() method.
- The visit method available on the visitor object visits all the nodes in the tree structure.
|
https://www.educative.io/answers/what-is-astfor-in-python
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
README
shallow-element-equals
Efficient shallow equality algorithm that also allows checks for react element equality of children props
Why
shouldComponentUpdate is a powerful way to improve performance of react and react native applications. Often you have components which you can expect to be "pure", but you also want them to have an API that accepts children.
Having a children prop pretty much removes any chance of using a "shallow" equality comparison of props, since React.createElement will return a new object reference on every call, so JSX elements are always new object references.
shallowElementEquals takes this into account and treats children props in a special way: it assumes that all of the children elements provided to a component are "pure" as well, so just their props/types can be compared for an optimized comparison.
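The core idea can be sketched roughly like this (not the library's actual source; elementEquals is a hypothetical helper name): instead of comparing child elements by reference, compare their type and their props shallowly.

// Rough sketch: compare two React elements by type and shallow props,
// rather than by object identity.
function elementEquals(a, b) {
  if (a === b) return true;
  if (!a || !b || a.type !== b.type) return false;
  const aKeys = Object.keys(a.props);
  const bKeys = Object.keys(b.props);
  if (aKeys.length !== bKeys.length) return false;
  // every prop must be strictly equal (children would recurse in the real thing)
  return aKeys.every(k => a.props[k] === b.props[k]);
}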
Be careful using this
This is dangerous. Don't use this function if you don't understand its consequences. By having a component adopt
a
shouldComponentUpdate method like this, you are assuming something about the components that people are
passing into your component as children that may not be true (ie, that they are pure). If this is not true,
the consumers of your component may have their application behave in ways that they do not expect, and the
reason will be completely opaque to them.
I would probably not recommend using this type of an optimization on public code or open source projects where lots of people will be using it without understanding these assumptions.
Installation
npm i shallow-element-equals --save
Usage
import shallowElementEquals from 'shallow-element-equals'; // ... shouldComponentUpdate(nextProps) { return !shallowElementEquals(this.props, nextProps); }
Examples of how this works
See the tests to understand better what this will match on.
|
https://www.skypack.dev/view/shallow-element-equals
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
For long-running Tasks, it can be desirable to support aborting during execution. Of course, these tasks should be built to support abortion specifically.
The AbortableTask serves as a base class for all Task objects that should support abortion by producers.
The necessary intermediate communication is dealt with by the AbortableTask implementation.
In the consumer:
from celery.contrib.abortable import AbortableTask

class MyLongRunningTask(AbortableTask):

    def run(self, **kwargs):
        logger = self.get_logger(**kwargs)
        results = []
        for x in xrange(100):
            # Check after every 5 loops..
            if x % 5 == 0:  # alternatively, check when some timer is due
                if self.is_aborted(**kwargs):
                    # Respect the aborted status and terminate
                    # gracefully
                    logger.warning("Task aborted.")
                    return None
            y = do_something_expensive(x)
            results.append(y)
        logger.info("Task finished.")
        return results
In the producer:
from myproject.tasks import MyLongRunningTask

def myview(request):
    async_result = MyLongRunningTask.delay()
    # async_result is of type AbortableAsyncResult

    # After 10 seconds, abort the task
    time.sleep(10)
    async_result.abort()
    ...
After the async_result.abort() call, the task execution is not aborted immediately. In fact, it is not guaranteed to abort at all. Keep checking the async_result status, or call async_result.wait() to have it block until the task is finished.
Note
In order to abort tasks, there needs to be communication between the producer and the consumer. This is currently implemented through the database backend. Therefore, this class will only work with the database backends.
Represents an abortable result.
Specifically, this gives the AsyncResult an abort() method, which sets the state of the underlying Task to "ABORTED".
Set the state of the task to ABORTED.
Abortable tasks monitor their state at regular intervals and terminate execution if it is ABORTED.
Be aware that invoking this method does not guarantee when the task will be aborted (or even if the task will be aborted at all).
Returns True if the task is (being) aborted.
A celery task that serves as a base class for all Tasks that support aborting during execution.
All subclasses of AbortableTask must call the is_aborted() method periodically and act accordingly when the call evaluates to True.
Returns the accompanying AbortableAsyncResult instance.
Checks against the backend whether this AbortableAsyncResult is ABORTED.
Always returns False in case the task_id parameter refers to a regular (non-abortable) Task.
Be aware that invoking this method will cause a hit in the backend (for example a database query), so find a good balance between calling it regularly (for responsiveness), but not too often (for performance).
|
https://docs.celeryq.dev/en/2.3-archived/reference/celery.contrib.abortable.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Opened 4 years ago
Closed 4 years ago
#25927 closed defect (fixed)
steenrod.py: Python 3 fixes
Description
This fixes two doctest failures with Python 3 in
src/sage/algebras/steenrod/.
This is a little progress, but there is a third failure which I do not understand:
File "src/sage/algebras/steenrod/steenrod_algebra.py", line 1069, in sage.algebras.steenrod.steenrod_algebra.SteenrodAlgebra_generic.homogeneous_component Failed example: a * A(a) # only need to convert one factor Exception raised: Traceback (most recent call last): File "/Users/jpalmier/Desktop/Sage/sage_builds/PYTHON3/sage-8.3.rc1/local/lib/python3.6/site-packages/sage/categories/pushout.py", line 3985, in pushout return all(Z) File "sage/categories/functor.pyx", line 384, in sage.categories.functor.Functor.__call__ (build/cythonized/sage/categories/functor.c:3223) y = self._apply_functor(self._coerce_into_domain(x)) File "sage/categories/functor.pyx", line 299, in sage.categories.functor.Functor._coerce_into_domain (build/cythonized/sage/categories/functor.c:2865) raise TypeError("x (=%s) is not in %s" % (x, self.__domain)) TypeError: x (=None) is not in Category of rings During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/jpalmier/Desktop/Sage/sage_builds/PYTHON3/sage-8.3.rc1/local/lib/python3.6/site-packages/sage/doctest/forker.py", line 573, in _run self.compile_and_execute(example, compiler, test.globs) File "/Users/jpalmier/Desktop/Sage/sage_builds/PYTHON3/sage-8.3.rc1/local/lib/python3.6/site-packages/sage/doctest/forker.py", line 983, in compile_and_execute exec(compiled, globs) File "<doctest sage.algebras.steenrod.steenrod_algebra.SteenrodAlgebra_generic.homogeneous_component[11]>", line 1, in <module> a * A(a) # only need to convert one factor File "sage/structure/element.pyx", line 1534, in sage.structure.element.Element.__mul__ (build/cythonized/sage/structure/element.c:12223) return coercion_model.bin_op(left, right, mul) File "sage/structure/coerce.pyx", line 1172, in sage.structure.coerce.CoercionModel_cache_maps.bin_op (build/cythonized/sage/structure/coerce.c:9677) action = self.get_action(xp, yp, op, x, y) File "sage/structure/coerce.pyx", line 1715, in sage.structure.coerce.CoercionModel_cache_maps.get_action (build/cythonized/sage/structure/coerce.c:16847) action = self.discover_action(R, S, op, r, s) File "sage/structure/coerce.pyx", line 1871, in sage.structure.coerce.CoercionModel_cache_maps.discover_action (build/cythonized/sage/structure/coerce.c:18433) action = (<Parent>S).get_action(R, op, False, s, r) File "sage/structure/parent.pyx", line 2507, in sage.structure.parent.Parent.get_action (build/cythonized/sage/structure/parent.c:21508) action = self.discover_action(S, op, self_on_left, self_el, S_el) File "sage/structure/parent.pyx", line 2614, in sage.structure.parent.Parent.discover_action (build/cythonized/sage/structure/parent.c:22810) action = detect_element_action(self, S, self_on_left, self_el, S_el) File "sage/structure/coerce_actions.pyx", line 233, in sage.structure.coerce_actions.detect_element_action (build/cythonized/sage/structure/coerce_actions.c:5892) return (RightModuleAction if X_on_left else LeftModuleAction)(Y, X, y, x) File "sage/structure/coerce_actions.pyx", line 344, in sage.structure.coerce_actions.ModuleAction.__init__ (build/cythonized/sage/structure/coerce_actions.c:6867) if self.extended_base.base() != pushout(G, base): File "/Users/jpalmier/Desktop/Sage/sage_builds/PYTHON3/sage-8.3.rc1/local/lib/python3.6/site-packages/sage/categories/pushout.py", line 3987, in pushout except CoercionException: TypeError: catching classes that do not inherit from BaseException is not allowed ********************************************************************** 1 item had failures: 1 
of 19 in sage.algebras.steenrod.steenrod_algebra.SteenrodAlgebra_generic.homogeneous_component [683 tests, 1 failure, 9.21 s]
I thought that the coercion framework would allow Sage to compute the product a * A(a). (The setup here is that A is a graded algebra and a is an element in a homogeneous component of it, so a is not an element of A. A(a) lives in A, and so A(a) * A(a) makes sense. But both A(a) * a and a * A(a) should also make sense.)
Change History (9)
comment:1 Changed 4 years ago by
- Branch set to u/jhpalmieri/steenrod-py3
comment:2 Changed 4 years ago by
- Commit set to 52a893a6023d8a67029a20335f1a749ea65bd8cb
- Status changed from new to needs_review
comment:3 Changed 4 years ago by
Thanks for trying to help.
In the change sorted(list(A[5].basis())), you can remove list and just use sorted. Once done, you can set this to positive review.
By the way, for another ticket: with Python 3 there is a very annoying issue in these files,
Killing test src/sage/homology/cell_complex.py
Killing test src/sage/homology/delta_complex.py
which involves an infinite recursion error.
comment:4 Changed 4 years ago by
- Commit changed from 52a893a6023d8a67029a20335f1a749ea65bd8cb to 3138d758cdfdc8d11d1b28102669a3d51da83c10
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
comment:5 Changed 4 years ago by
- Commit changed from 3138d758cdfdc8d11d1b28102669a3d51da83c10 to 9d31f9a4aa575ead7faecf7d155c4bfcbf14310b
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
comment:6 Changed 4 years ago by
- Reviewers set to Frédéric Chapoton
- Status changed from needs_review to positive_review
comment:7 Changed 4 years ago by
I will take a look at
homology.
comment:8 Changed 4 years ago by
- Component changed from algebra to python3
comment:9 Changed 4 years ago by
- Branch changed from u/jhpalmieri/steenrod-py3 to 9d31f9a4aa575ead7faecf7d155c4bfcbf14310b
- Resolution set to fixed
- Status changed from positive_review to closed
If we can fix the coercion problem here, that's great. We can also defer that to another ticket and do the easy fixes right away. So I'm marking this as "needs review".
New commits:
|
https://trac.sagemath.org/ticket/25927
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
I have been asked to program a spellchecker in C for an assignment. I am quite new to C and programming in general, so I have decided to start by writing a program that does the following:
- Reads words into an array from a dictionary text file.
- Reads words into an array from a sample file that needs to be spellchecked.
- Compares whether or not each word from the sample file is in the dictionary using a binary search algorithm.
Here is my code so far:
#include <stdio.h>
#include <string.h>

int read_words(char *dict[20]);
int read_text(char *sample[3]);
int comparison(char *dict[20], char *sample[3]);

int main()
{
    char *dict[20];  //pointer to array 'dict'
    char *sample[3]; // pointer to array 'sample'

    read_words(dict);
    read_text(sample);
    comparison(dict, sample);
}

int read_words(char *dict[20]) //copies each word from the file 'words.txt' into array 'dict'
{
    FILE *words_ptr; //pointer for words.txt
    int i;
    char temp_word[20];
    char *new_word;

    words_ptr = fopen( "words.txt", "r" );
    if( words_ptr != NULL )
    {
        printf( "File words.txt opened\n");
        i=0;
        while (fgets( temp_word, 20, words_ptr ))
        {
            new_word = (char*)calloc(strlen(temp_word), sizeof(char)); //ensuring new_word will be the right size
            strcpy(new_word, temp_word); //copy contents of temp_word to new_word
            dict[i] = new_word; //copy contents of new_word to i'th element of dict array
            printf("printing out dict[%d]: %s\n", i, dict[i]);
            i++;
        }
        printf("printing out dictionary1: %s\n", dict[1]);
        fclose( words_ptr );
        return 0;
    }
    else
    {
        printf( "Unable to open file words.txt\n" );
        return 1;
    }
}

int read_text(char *sample[3]) //copies each word from the file 'text.txt' into array 'sample'
{
    //this works exactly the same way as the read_words function
    FILE *text_ptr;
    int j;
    char temp_text[20];
    char *new_text;

    text_ptr = fopen( "text.txt", "r" );
    if( text_ptr != NULL )
    {
        printf( "File text.txt opened\n");
        j=0;
        while (fgets( temp_text, 20, text_ptr ))
        {
            new_text = (char*)calloc(strlen(temp_text), sizeof(char));
            strcpy(new_text, temp_text);
            sample[j] = new_text;
            printf("printing out sample[%d]: %s\n", j, sample[j]);
            j++;
        }
        printf("printing out sampleee1: %s\n", sample[1]); //testing that it prints out sample[1] and not whichever the last sample word was. Can be removed from final program.
        fclose( text_ptr );
        return 0;
    }
    else
    {
        printf( "Unable to open file text.txt\n" );
        return 1;
    }
}

int comparison(char *dict[20], char *sample[3]) //comparing one word from each array with the other and checking if they are the same
{
    char *min, *max, *mid; //minimum value, maximum value, mid-point value
    min = dict[0];
    max = dict[20];
    mid = min +(max-min)/2;

    //performing the binary search
    while((min <= max) && (*mid != sample[0]))
    {
        if (sample[0] < *mid)
        {
            max = mid -1;
            mid = min +(max-min)/2;
        }
        else
        {
            min = mid + 1;
            mid = min +(max-min)/2;
        }
    }

    if (*mid == sample[0])
    {
        printf("\n %d found!", sample[0]);
    }
    else
    {
        printf("\n %d not found!", sample[0]);
    }
    return 0;
}
I get a few error messages at lines 89, 91 and 103 saying: "warning: comparison between pointer and integer".
I can see why I am getting these messages but I do not know how to change the code to do what I want it to do.
I can see a problem with this line (line 89):
while((min <= max) && (*mid != sample[0]))
I am not able to compare mid with sample[0] as they are different types. I want mid to be the middle word from the dict[] array so that I can compare the two values.
I have seen similar code working when doing a binary search on integers, but I am not sure if this is possible when searching strings.
I would appreciate any advice; I have been told it is possible to do this using something like binary search, or binary search trees, so I would be grateful to learn something along those lines. I think a hash table might be a bit beyond me at this stage, although I am aware that this is another method.
Thanks
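A minimal sketch of the strcmp-based approach being asked about (find_word is a hypothetical helper, not code from the thread): the ordering comes from strcmp, and integer indices replace the pointer arithmetic that caused the warnings.

#include <string.h>

/* Binary search over a sorted array of strings.
   Returns the index of `word` in `dict`, or -1 if absent. */
int find_word(char *dict[], int count, const char *word)
{
    int lo = 0, hi = count - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        int cmp = strcmp(word, dict[mid]);
        if (cmp == 0)
            return mid;    /* found */
        else if (cmp < 0)
            hi = mid - 1;  /* word sorts before dict[mid] */
        else
            lo = mid + 1;  /* word sorts after dict[mid] */
    }
    return -1;             /* not found */
}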
|
https://www.daniweb.com/programming/software-development/threads/251266/binary-search-in-c-spell-checker
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Caching resources during runtime
Some assets in your web application may be infrequently used, very large, or vary based on the user's device (such as responsive images) or language. These are instances where precaching may be an anti-pattern, and you should rely on runtime caching instead.
In Workbox, you can handle runtime caching for assets using the
workbox-routing module to match routes, and handle caching strategies for them with the
workbox-strategies module.
Caching strategies
You can handle most routes for assets with one of the built in caching strategies. They're covered in detail earlier in this documentation, but here are a few worth recapping:
- Stale While Revalidate uses a cached response for a request if it's available and updates the cache in the background with a response from the network. Therefore, if the asset isn't cached, it will wait for the network response and use that. It's a fairly safe strategy, as it regularly updates cache entries that rely on it. The downside is that it always requests an asset from the network in the background.
- Network First tries to get a response from the network first. If a response is received, it passes that response to the browser and saves it to a cache. If the network request fails, the last cached response will be used, enabling offline access to the asset.
- Cache First checks the cache for a response first and uses it if available. If the request isn't in the cache, the network is used and any valid response is added to the cache before being passed to the browser.
- Network Only forces the response to come from the network.
- Cache Only forces the response to come from the cache.
You can apply these strategies to select requests using methods offered by
workbox-routing.
Applying caching strategies with route matching
workbox-routing exposes a
registerRoute method to match routes and handle them with a caching strategy.
registerRoute accepts a
Route object that in turn accepts two arguments:
- A string, regular expression, or a match callback to specify route matching criteria.
- A handler for the route—typically a strategy provided by
workbox-strategies.
Match callbacks are preferred to match routes, as they provide a context object that includes the
Request object, the request URL string, the fetch event, and a boolean of whether the request is a same-origin request.
The handler then handles the matched route. In the following example, a new route is created that matches incoming same-origin image requests, applying the cache-first, falling back to network strategy.
// sw.js
import { registerRoute, Route } from 'workbox-routing';
import { CacheFirst } from 'workbox-strategies';
// A new route that matches same-origin image requests and handles
// them with the cache-first, falling back to network strategy:
const imageRoute = new Route(({ request, sameOrigin }) => {
return sameOrigin && request.destination === 'image'
}, new CacheFirst());
// Register the new route
registerRoute(imageRoute);
The
request.destination property of the
Request object is an excellent way to match requests for specific content types, as it side-steps the pitfalls of matching requests for assets based on their file extension.
Using multiple caches
Workbox allows you to bucket cached responses into separate
Cache instances using the
cacheName option available in the bundled strategies.
In the following example, images use a stale-while-revalidate strategy, whereas CSS and JavaScript assets use a cache-first falling back to network strategy. The route for each asset places responses into separate caches, by adding the
cacheName property.
// sw.js
import { registerRoute, Route } from 'workbox-routing';
import { CacheFirst, StaleWhileRevalidate } from 'workbox-strategies';
// Handle images:
const imageRoute = new Route(({ request }) => {
return request.destination === 'image'
}, new StaleWhileRevalidate({
cacheName: 'images'
}));
// Handle scripts:
const scriptsRoute = new Route(({ request }) => {
return request.destination === 'script';
}, new CacheFirst({
cacheName: 'scripts'
}));
// Handle styles:
const stylesRoute = new Route(({ request }) => {
return request.destination === 'style';
}, new CacheFirst({
cacheName: 'styles'
}));
// Register routes
registerRoute(imageRoute);
registerRoute(scriptsRoute);
registerRoute(stylesRoute);
Setting an expiry for cache entries
Be aware of storage quotas when managing service worker cache(s).
ExpirationPlugin simplifies cache maintenance and is exposed by
workbox-expiration. To use it, specify it in the configuration for a caching strategy:
// sw.js
import { registerRoute, Route } from 'workbox-routing';
import { CacheFirst } from 'workbox-strategies';
import { ExpirationPlugin } from 'workbox-expiration';
// Evict image cache entries older than thirty days:
const imageRoute = new Route(({ request }) => {
return request.destination === 'image';
}, new CacheFirst({
cacheName: 'images',
plugins: [
new ExpirationPlugin({
maxAgeSeconds: 60 * 60 * 24 * 30,
})
]
}));
// Evict the least-used script cache entries when
// the cache has more than 50 entries:
const scriptsRoute = new Route(({ request }) => {
return request.destination === 'script';
}, new CacheFirst({
cacheName: 'scripts',
plugins: [
new ExpirationPlugin({
maxEntries: 50,
})
]
}));
// Register routes
registerRoute(imageRoute);
registerRoute(scriptsRoute);
ExpirationPlugin can only be used with registered routes using a strategy that has a configured cacheName.
Complying with storage quotas can be complicated. It's good practice to consider users who may be experiencing storage pressure, or who want to make the most efficient use of their storage. Workbox's ExpirationPlugin can help in achieving that goal.
Cross-origin considerations
The interaction between your service worker and cross-origin assets is considerably different than with same-origin assets. Cross-Origin Resource Sharing (CORS) is complicated, and that complexity extends to how you handle cross-origin resources in a service worker.
Read Jake Archibald's How to win at CORS guide for an excellent interactive explainer on how CORS works.
Opaque responses
When making a cross-origin request in
no-cors mode, the response can be stored in a service worker cache and even be used directly by the browser. However, the response body itself can't be read via JavaScript. This is known as an opaque response.
Opaque responses are a security measure intended to prevent the inspection of a cross-origin asset. You can still make requests for cross-origin assets and even cache them, you just can't read the response body or even read its status code!
You can learn more about opaque responses in this Stack Overflow Q&A.
Remember to opt into CORS modeRemember to opt into CORS mode
Even if you load cross-origin assets that do set permissive CORS headers that allow you read responses, the body of cross-origin response may still be opaque. For example, the following HTML will trigger
no-cors requests that will lead to opaque responses regardless of what CORS headers are set:
<link rel="stylesheet" href="">
<img src="">
To explicitly trigger a
cors request that will yield a non-opaque response, you need to explicitly opt-in to CORS mode by adding the
crossorigin attribute to your HTML:
<link crossorigin="anonymous" rel="stylesheet" href="">
<img crossorigin="anonymous" src="">
This is important to remember when routes in your service worker cache subresources loaded at runtime.
Workbox may not cache opaque responses
By default, Workbox takes a cautious approach to caching opaque responses. As it's impossible to examine the response code for opaque responses, caching an error response can result in a persistently broken experience if a cache-first or cache-only strategy is used.
If you need to cache an opaque response in Workbox, you should use a network-first or stale-while-revalidate strategy to handle it. Yes, this means that the asset will be requested from the network every time, but it ensures that failed responses won't persist, and will eventually be replaced by usable responses.
If you use another caching strategy and an opaque response is returned, Workbox will warn you that the response wasn't cached when in development mode.
Force caching of opaque responses
If you are absolutely certain that you want to cache an opaque response using a cache-first or cache only strategy, you can force Workbox to do so with the
workbox-cacheable-response module:
import {Route, registerRoute} from 'workbox-routing';
import {CacheFirst} from 'workbox-strategies';
import {CacheableResponsePlugin} from 'workbox-cacheable-response';
const cdnRoute = new Route(({url}) => {
return url === '';
}, new CacheFirst({
plugins: [
new CacheableResponsePlugin({
statuses: [0, 200]
})
]
}))
registerRoute(cdnRoute);
Reminder: Be absolutely sure you want to handle opaque responses with a cache-first or cache only strategy. It can result in a persistently broken experience, requiring you to explicitly clear your caches or deploy an updated service worker that uses a network-first strategy for cross-origin requests to fix the problem.
Opaque Responses and the navigator.storage API
To avoid leakage of cross-domain information, there's significant padding added to the size of an opaque response used for calculating storage quota limits. This affects how the
navigator.storage API reports storage quotas.
This padding varies by browser, but for Chrome, the minimum size that any single cached opaque response contributes to the overall storage used is approximately 7 megabytes. You should keep this in mind when determining how many opaque responses you want to cache, since you could easily exceed storage quotas much sooner than you'd otherwise expect.
|
https://developer.chrome.com/docs/workbox/caching-resources-during-runtime/#cross-origin-considerations
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
If you depend on an external source that returns static data, you can use cachetools to cache that data, avoiding the overhead of fetching it on every request to Flask.
This is useful when your upstream data does not change often. It is configurable with maxsize and ttl, so whenever either threshold is exceeded, the application will fetch fresh data on the next request.
Example
Let's build a basic flask application that will return the data from our data.txt file to the client:
from flask import Flask
from cachetools import cached, TTLCache

app = Flask(__name__)
cache = TTLCache(maxsize=100, ttl=60)

@cached(cache)
def read_data():
    data = open('data.txt', 'r').read()
    return data

@app.route('/')
def main():
    get_data = read_data()
    return get_data

if __name__ == '__main__':
    app.run()
Create the local file with some data:
$ touch data.txt
$ echo "version1" > data.txt
Start the server:
$ python app.py
Make the request:
$ curl
version1
Change the data inside the file:
$ echo "version2" > data.txt
Make the request again:
$ curl
version1
As the ttl is set to 60, wait for 60 seconds so that the item can expire from the cache, and try again:
$ curl
version2
As you can see, the cache expired and a new request was made to read the file again and load it into the cache, then return it to the client.
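If you need to drop the cached value before the TTL expires (say, right after rewriting data.txt), the TTLCache can be cleared directly, since cachetools caches behave like mutable mappings. A small sketch, assuming the cache object from the example above:

# Invalidate everything cached so far; the next call to read_data()
# will re-read data.txt and repopulate the cache.
cache.clear()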
Thank You
Please feel free to show support by sharing this post, making a donation, or subscribing, and reach out to me if you want me to demo and write up any specific tech topic.
|
https://sysadmins.co.za/how-to-cache-data-with-python-flask/
|
CC-MAIN-2019-22
|
en
|
refinedweb
|
David Nickerson wrote:
> Hi all,
>
> Just wanted to see what people think about using the BioModels
> qualifiers () in CellML models? i.e., following the SBML annotation
> specification (see section 6 of the SBML level 2 version 2 specification).
>
> The reason I ask is that I am starting to look at how to reference
> external data, for example to justify a parameter value or provide
> experimental data for use in making a graph. At least for these examples
> the qualifier seems quite appropriate - where the referenced resource
> could be a journal publication to justify a parameter value or a
> reference to some experimental data.
>
> Before progressing too far with this, I thought I'd better check. One
> problem I can see immediately is that the and/or qualifiers are pretty
> much the same as the cmeta:bio_entity already defined in the CellML
> Metadata Specification. So is there already something in the CellML
> metadata specification that will let me reference arbitrary (possibly
> external) resources? Is it ok to just use the BioModels qualifiers that
> we want without supporting them all? Or is there some other way to
> achieve the same result?
>
The cmeta specification does allow you to give an identifier as a URI:
<rdf:Description rdf:
  <cmeta:bio_entity>
    <rdf:Bag>
      <rdf:li rdf:
        <cmeta:identifier rdf:
          <cmeta:identifier_scheme>URI</cmeta:identifier_scheme>
          <rdf:value></rdf:value>
        </cmeta:identifier>
      </rdf:li>
    </rdf:Bag>
  </cmeta:bio_entity>
</rdf:Description>

> I know Carey did some initial work looking at using the XML Resource
> Directory Description Language (RDDL,) to reference external data. To
> make full use of this (especially in regard to model curation), I think
> we would need to define our own natures (roles) and purposes (arcroles)
> - which would essentially result in the same metadata as using the
> BioModels qualifiers but with different namespaces.
>
If we are using the exact same semantics as they are, we could use the BioModels namespaces. Of course, it is not clear that the semantics used are sufficient for useful machine interpretation, and so it would be worth reviewing the existing set, coming up with a range of examples, and possibly using these to create a richer set of semantics with their own URIs.

Best regards,
Andrew

_______________________________________________
cellml-discussion mailing list
cellml-discussion@cellml.org
|
https://www.mail-archive.com/cellml-discussion@cellml.org/msg00229.html
|
CC-MAIN-2019-22
|
en
|
refinedweb
|
Hello,
I am currently trying to use the good-features-to-track corner detector from apexcv in one of my projects. I was able to call Initialize() and Process() without an error code being returned (these functions always return 0). However, inside the Process() function I got the following message:
ACF_PROCESS_APU::SelectScenario_internal() -> A suitable scenario could not be found
ACF_PROCESS_APU::Wait() -> process was never started (i.e. nothing to wait for)
After reading the source code of acf_process_apu, it appears that this error is related to the settings of chunkWidth and chunkHeight. However, I did not see any function that allows me to configure them.
The input image is a grayscale image with size 1280x800. I also tried different image sizes, but they all return the same error message.
Thank you
Hi Xue,
Please raise the ticket at using "Service Request" for support on this.
-Kushal
|
https://community.nxp.com/thread/468072
|
CC-MAIN-2019-22
|
en
|
refinedweb
|
What's new in Windows 10 for developers, build 14393
Windows 10 build 14393 (also known as the Anniversary Update, or version 1607) brings a list of new and improved features of interest to developers. For a raw list of new namespaces added to the Windows SDK, see the Windows 10 build 14393 API changes. For more information on the highlighted features of this update, see What's cool in Windows 10.
Windows 10 build 14393 - July 2016
|
https://docs.microsoft.com/en-us/windows/uwp/whats-new/windows-10-build-14393
|
CC-MAIN-2019-22
|
en
|
refinedweb
|
You need to sort, but you’re still running on Java 1.1.
Provide your own sort routine, or use mine.
If you’re still running on a Java 1.1 platform, you won’t
have the
Arrays or
Collections
classes and therefore must provide your own sorting. There are two
ways of proceeding: using the
system sort
utility or providing your own sort algorithm. The
former -- running the sort program -- can
be accomplished by running an external program, which will be covered
in Section 26.2. The code here re-casts the example
from Section 7.9 into using our own
Sort. The actual sorting code is not printed here;
it is included in the online source files, since it is just a simple
adaptation of the QuickSort example from the Sorting program in
Sun’s Java QuickSort Applet demonstration.
public class StrSort1_1 {
    /** The list of strings to be sorted */
    static public String a[] = {
        "Qwerty", "Ian", "Java", "Gosling", "Alpha", "Zulu"
    };

    /** Simple main program to test the sorting */
    public static void main(String argv[]) {
        System.out.println("StrSort Demo in Java");
        StringSort s = new StringSort( );
        dump(a, "Before");
        s.QuickSort(a, 0, a.length-1);
        dump(a, "After");
    }

    static void dump(String a[], String title) {
        System.out.println("***** " + title + " *****");
        for (int i=0; i<a.length; i++)
            System.out.println("a["+i+"]="+a[i]);
    }
}
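For readers without the online source files, a minimal sketch of what such a StringSort might look like (this is not the book's actual code, just a standard QuickSort over a String array using compareTo for the ordering):

public class StringSort {
    public void QuickSort(String[] a, int lo, int hi) {
        if (lo >= hi) return;
        String pivot = a[(lo + hi) / 2];
        int i = lo, j = hi;
        while (i <= j) {
            // advance past elements already on the correct side of the pivot
            while (a[i].compareTo(pivot) < 0) i++;
            while (a[j].compareTo(pivot) > 0) j--;
            if (i <= j) {
                String tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++; j--;
            }
        }
        QuickSort(a, lo, j);  // sort the left partition
        QuickSort(a, i, hi);  // sort the right partition
    }
}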
|
https://www.oreilly.com/library/view/java-cookbook/0596001703/ch07s10.html
|
CC-MAIN-2019-22
|
en
|
refinedweb
|
Trying to compile some RL2 samples, missing IContext and IContextConfig
Hi there,
I've just started looking at RL2 and am having trouble compiling the samples I found on this site - these 2 lines are unresolved in all the samples I've seen, but I'm not sure what I'm missing:
import robotlegs.bender.framework.context.api.IContext;
import robotlegs.bender.framework.context.api.IContextConfig;
- I have only included 2.0.0b5.swc - do I need anything else?
Thanks
Comments are currently closed for this discussion. You can start a new one.
Support Staff 1 Posted by creynders on 30 Mar, 2013 10:30 AM
The API's changed a little:
2 Posted by mike on 30 Mar, 2013 10:28 PM
Many thanks!
creynders closed this discussion on 31 Mar, 2013 10:31 AM.
|
http://robotlegs.tenderapp.com/discussions/robotlegs-2/1233-trying-to-compile-some-rl2-samples-missing-icontext-and-icontextconfig
|
CC-MAIN-2019-22
|
en
|
refinedweb
|
In flutter mobile application development, there will be times when you are required to open a remote website in your flutter app or display a local html file.
In this scenario you can count on flutter webview to do the job for you.
Although there is so much a webview can do, bear in mind that flutter webview is still in its infancy, and many features are still missing when compared with the native Android WebView.
We will focus on the official flutter webview plugin and the community version in this tutorial and other webview tutorials. If you want to learn how to use a native Android or iOS WebView in your Flutter application, then I suggest you refer to our tutorial on How to use native WebView in Flutter.
First, we will start by adding the flutter webview plugin (flutter_webview_plugin) to the dependencies in our project's pubspec.yaml. Then, in main.dart, we set the webview widget as the app's home - a minimal sketch, with names assumed from the surrounding text:

import 'package:flutter/material.dart';
import 'basic_webview_task_1.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'basic_webview_task',
      home: WebViewInFlutter(),
    );
  }
}
In the above code, we reference the basic_webview_task_1.dart file. This file is where we will add our simple flutter webview widget class.
3. Create a new dart file in the lib folder
Create a new dart file in the lib folder. I have named it basic_webview_task_1.dart, but feel free to choose a name of your choice.
Open the file and add the code below.
import 'package:flutter/material.dart';
import 'package:flutter_webview_plugin/flutter_webview_plugin.dart';

class WebViewInFlutter extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return WebviewScaffold(
      url: '',
      hidden: true,
      appBar: AppBar(title: Text("Inducesmile.com")),
    );
  }
}
|
https://inducesmile.com/google-flutter/how-to-create-webview-in-flutter/
|
CC-MAIN-2019-22
|
en
|
refinedweb
|
Introduction to the Stripe API for Java
Last modified: November 5, 2018
1. Overview
Stripe is a cloud-based service that enables businesses and individuals to receive payments over the internet and offers both client-side libraries (JavaScript and native mobile) and server-side libraries (Java, Ruby, Node.js, etc.).
Stripe provides a layer of abstraction that reduces the complexity of receiving payments. As a result, we don’t need to deal with credit card details directly – instead, we deal with a token symbolizing an authorization to charge.
In this tutorial, we will create a sample Spring Boot project that allows users to input a credit card and later will charge the card for a certain amount using the Stripe API for Java.
2. Dependencies
To make use of the Stripe API for Java in the project, we add the corresponding dependency to our pom.xml:
<dependency>
    <groupId>com.stripe</groupId>
    <artifactId>stripe-java</artifactId>
    <version>4.2.0</version>
</dependency>
We can find its latest version in the Maven Central repository.
For our sample project, we will leverage the spring-boot-starter-parent:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.2.RELEASE</version>
</parent>
We will also use Lombok to reduce boilerplate code, and Thymeleaf will be the template engine for delivering dynamic web pages.
Since we are using the spring-boot-starter-parent to manage the versions of these libraries, we don’t have to include their versions in pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
</dependency>
Note that if you’re using NetBeans, you may want to use Lombok explicitly with version 1.16.16, since a bug in the version of Lombok provided with Spring Boot 1.5.2 causes NetBeans to generate a lot of errors.
3. API Keys
Before we can communicate with Stripe and execute credit card charges, we need to register a Stripe account and obtain secret/public Stripe API keys.
After confirming the account, we will log in to access the Stripe dashboard. We then choose “API keys” on the left side menu:
There will be two pairs of secret/public keys — one for test and one for live. Let’s leave this tab open so that we can use these keys later.
4. General Flow
The charge of the credit card will be done in five simple steps, involving the front-end (run in a browser), back-end (our Spring Boot application), and Stripe:
- A user goes to the checkout page and clicks “Pay with Card”.
- A user is presented with Stripe Checkout overlay dialog, where fills the credit card details.
- A user confirms with “Pay <amount>” which will:
- Send the credit card to Stripe
- Get a token in the response which will be appended to the existing form
- Submit that form with the amount, public API key, email, and the token to our back-end
- Our back-end contacts Stripe with the token, the amount, and the secret API key.
- The back-end checks the Stripe response and provides the user with feedback on the operation.
We will cover each step in greater detail in the following sections.
5. Checkout Form
Stripe Checkout is a customizable, mobile-ready, and localizable widget that renders a form for entering credit card details. By including and configuring "checkout.js", it is responsible for:
- “Pay with Card” button rendering
- Payment overlay dialog rendering (triggered after clicking “Pay with Card”)
- Credit card validation
- “Remember me” feature (associates the card with a mobile number)
- Sending the credit card to Stripe and replacing it with a token in the enclosing form (triggered after clicking “Pay <amount>”)
If we need to exercise more control over the checkout form than is provided by Stripe Checkout, then we can use Stripe Elements.
Next, we will analyze the controller that prepares the form and then the form itself.
5.1. Controller
Let’s start by creating a controller to prepare the model with the necessary information that the checkout form needs.
First, we’ll need to copy the test version of our public key from the Stripe dashboard and use it to define STRIPE_PUBLIC_KEY as an environment variable. We then use this value in the stripePublicKey field.
We’re also setting currency and amount (expressed in cents) manually here merely for demonstration purposes, but in a real application, we might set a product/sale id that could be used to fetch the actual values.
Then, we’ll dispatch to the checkout view which holds the checkout form:
@Controller
public class CheckoutController {

    @Value("${STRIPE_PUBLIC_KEY}")
    private String stripePublicKey;

    @RequestMapping("/checkout")
    public String checkout(Model model) {
        model.addAttribute("amount", 50 * 100); // in cents
        model.addAttribute("stripePublicKey", stripePublicKey);
        model.addAttribute("currency", ChargeRequest.Currency.EUR);
        return "checkout";
    }
}
Regarding the Stripe API keys, you can define them as environment variables per application (test vs. live).
As is the case with any password or sensitive information, it is best to keep the secret key out of your version control system.
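For example, the keys can be exported in the shell that launches the application (placeholder values shown; Stripe test keys begin with pk_test_ and sk_test_):

export STRIPE_PUBLIC_KEY=pk_test_xxxxxxxxxxxxxxxxxxxxxxxx
export STRIPE_SECRET_KEY=sk_test_xxxxxxxxxxxxxxxxxxxxxxxx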
5.2. Form
The “Pay with Card” button and the checkout dialog are included by adding a form with a script inside, correctly configured with data attributes:
<form action='/charge' method='POST' id='checkout-form'>
    <input type='hidden' th:value='${amount}' name='amount' />
    <label>Price:<span th:text='${amount/100}' /></label>
    <!-- NOTE: data-key/data-amount/data-currency will be rendered by Thymeleaf -->
    <script src='' class='stripe-button'
            th:attr='data-key=${stripePublicKey},
                     data-amount=${amount},
                     data-currency=${currency}'>
    </script>
</form>
The “checkout.js” script automatically triggers a request to Stripe right before the submit, which then appends the Stripe token and the Stripe user email as the hidden fields “stripeToken” and “stripeEmail“.
These will be submitted to our back-end along with the other form fields. The script data attributes are not submitted.
We use Thymeleaf to render the attributes “data-key“, “data-amount“, and “data-currency“.
The amount (“data-amount“) is used only for display purposes (along with “data-currency“). Its unit is cents of the used currency, so we divide it by 100 to display it.
The Stripe public key is passed to Stripe after the user asks to pay. Do not use the secret key here, as this is sent to the browser.
6. Charge Operation
For server-side processing, we need to define the POST request handler used by the checkout form. Let’s take a look at the classes we will need for the charge operation.
6.1. ChargeRequest Entity
Let’s define the ChargeRequest POJO that we will use as a business entity during the charge operation:
@Data
public class ChargeRequest {

    public enum Currency {
        EUR, USD;
    }

    private String description;
    private int amount;
    private Currency currency;
    private String stripeEmail;
    private String stripeToken;
}
6.2. Service
Let’s write a StripeService class to communicate the actual charge operation to Stripe:
@Service
public class StripeService {

    @Value("${STRIPE_SECRET_KEY}")
    private String secretKey;

    @PostConstruct
    public void init() {
        Stripe.apiKey = secretKey;
    }

    public Charge charge(ChargeRequest chargeRequest)
      throws AuthenticationException, InvalidRequestException,
        APIConnectionException, CardException, APIException {
        Map<String, Object> chargeParams = new HashMap<>();
        chargeParams.put("amount", chargeRequest.getAmount());
        chargeParams.put("currency", chargeRequest.getCurrency());
        chargeParams.put("description", chargeRequest.getDescription());
        chargeParams.put("source", chargeRequest.getStripeToken());
        return Charge.create(chargeParams);
    }
}
As was shown in the CheckoutController, the secretKey field is populated from the STRIPE_SECRET_KEY environment variable that we copied from the Stripe dashboard.
Once the service has been initialized, this key is used in all subsequent Stripe operations.
The object returned by the Stripe library represents the charge operation and contains useful data like the operation id.
6.3. Controller
Finally, let’s write the controller that will receive the POST request made by the checkout form and submit the charge to Stripe via our StripeService.
Note that the “ChargeRequest” parameter is automatically initialized with the request parameters “amount“, “stripeEmail“, and “stripeToken” included in the form:
@Controller
public class ChargeController {

    @Autowired
    private StripeService paymentsService;

    @PostMapping("/charge")
    public String charge(ChargeRequest chargeRequest, Model model)
      throws StripeException {
        chargeRequest.setDescription("Example charge");
        chargeRequest.setCurrency(Currency.EUR);
        Charge charge = paymentsService.charge(chargeRequest);
        model.addAttribute("id", charge.getId());
        model.addAttribute("status", charge.getStatus());
        model.addAttribute("chargeId", charge.getId());
        model.addAttribute("balance_transaction", charge.getBalanceTransaction());
        return "result";
    }

    @ExceptionHandler(StripeException.class)
    public String handleError(Model model, StripeException ex) {
        model.addAttribute("error", ex.getMessage());
        return "result";
    }
}
On success, we add the status, the operation id, the charge id, and the balance transaction id to the model so that we can show them later to the user (Section 7). This is done to illustrate some of the contents of the charge object.
Our ExceptionHandler will deal with exceptions of type StripeException that are thrown during the charge operation.
If we need more fine-grained error handling, we can add separate handlers for the subclasses of StripeException, such as CardException, RateLimitException, or AuthenticationException.
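For instance, a dedicated handler for declined cards might look like this (an illustrative sketch; Spring routes an exception to the most specific matching handler):

@ExceptionHandler(CardException.class)
public String handleCardError(Model model, CardException ex) {
    // CardException covers declines, expired cards, bad CVCs, etc.
    model.addAttribute("error", "Card error: " + ex.getMessage());
    return "result";
}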
The “result” view renders the result of the charge operation.
7. Showing the Result
The HTML used to display the result is a basic Thymeleaf template that displays the outcome of a charge operation. The user is sent here by the ChargeController whether the charge operation was successful or not:
<!DOCTYPE html>
<html xmlns='http://www.w3.org/1999/xhtml' xmlns:th='http://www.thymeleaf.org'>
    <head>
        <title>Result</title>
    </head>
    <body>
        <h3 th:if='${error}' th:text='${error}' style='color: red;'></h3>
        <div th:unless='${error}'>
            <h3 style='color: green;'>Success!</h3>
            <div>Id.: <span th:text='${id}' /></div>
            <div>Status: <span th:text='${status}' /></div>
            <div>Charge id.: <span th:text='${chargeId}' /></div>
            <div>Balance transaction id.: <span th:text='${balance_transaction}' /></div>
        </div>
        <a href='/checkout.html'>Checkout again</a>
    </body>
</html>
On success, the user will see some details of the charge operation:
On error, the user will be presented with the error message as returned by Stripe:
8. Conclusion
In this tutorial, we’ve shown how to make use of the Stripe Java API to charge a credit card. In the future, we could reuse our server-side code to serve a native mobile app.
To test the entire charge flow, we don’t need to use a real credit card (even in test mode). We can rely on Stripe testing cards instead.
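For example, with the test secret key active, Stripe's documented test token tok_visa (or the test card number 4242 4242 4242 4242 typed into the Checkout dialog) exercises the full flow without a real card; a sketch:

// Sketch: charging Stripe's documented test token (test keys only).
Map<String, Object> params = new HashMap<>();
params.put("amount", 999);        // 9.99 EUR, expressed in cents
params.put("currency", "eur");
params.put("source", "tok_visa"); // documented Stripe test token
Charge charge = Charge.create(params);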
The charge operation is one among many possibilities offered by the Stripe Java API. The official API reference will guide us through the whole set of operations.
The sample code used in this tutorial can be found in the GitHub project.
|
https://www.baeldung.com/java-stripe-api
|
CC-MAIN-2019-22
|
en
|
refinedweb
|
Content Count: 26
Community Reputation: 189 (Neutral)
About Ars7c3
- Rank: Member
- Interests: Business, Programming
Help with MiniMax Algorithm for Tic Tac Toe
Ars7c3 replied to Ars7c3's topic in Artificial Intelligence: Unfortunately, that was not the issue. I tried it, but it exhibited the same behavior. The AI will block winning moves, but when it has a winning move it does not capitalize on it.
Help with MiniMax Algorithm for Tic Tac Toe
Ars7c3 posted a topic in Artificial Intelligence: I am currently working on creating an AI player for tic tac toe. After researching, I discovered that the minimax algorithm would be perfect for the job. I am pretty confident in my understanding of the algorithm and how it works, but coding it has proven a little bit of a challenge. I will admit, recursion is one of my weak areas. The following code is my AI class. It currently runs, but it makes poor decisions. Could someone please point out where I went wrong? Thank you!

import tictactoe as tic # interface to tictactoe game logic like check_victory

class AI:
    def __init__(self, mark):
        self.mark = mark

    def minimax(self, state, player):
        # end condition - final state
        if tic.check_victory(state):
            if player == self.mark:
                return 1
            else:
                return -1
        if tic.check_cat(state):
            return 0
        nextturn = tic.O if player == tic.X else tic.X
        # generate possible moves
        mvs = []
        for i, mark in enumerate(state):
            if mark == tic.EMPTY:
                mvs.append(i)
        # generate child states of parent state
        scores = []
        for mv in mvs:
            leaf = state[:]
            leaf[mv] = player
            result = self.minimax(leaf, nextturn)
            scores.append(result)
        if player == self.mark:
            maxelle = max(scores)
            return mvs[scores.index(maxelle)]
        else:
            minele = min(scores)
            return mvs[scores.index(minele)]

    def make_move(self, board, player):
        place = self.minimax(board, player)
        return place
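The defect is visible in the code as posted: minimax returns a score (1/-1/0) at terminal states but a move index from recursive calls, so the values being max'd and min'd one level up are board positions, not scores. A sketch of one possible fix (not the thread's accepted answer), assuming tic.check_victory(state) returns True when the player who made the previous move has won:

# Sketch: keep the recursion returning scores only, and pick the move at the
# top level. Assumes the same tictactoe module interface as the original post.
def minimax_score(self, state, player):
    # If the game is over, the win belongs to whoever moved last,
    # i.e. the opponent of `player` (who is about to move).
    if tic.check_victory(state):
        return -1 if player == self.mark else 1
    if tic.check_cat(state):
        return 0
    nextturn = tic.O if player == tic.X else tic.X
    scores = []
    for i, mark in enumerate(state):
        if mark == tic.EMPTY:
            leaf = state[:]
            leaf[i] = player
            scores.append(self.minimax_score(leaf, nextturn))
    return max(scores) if player == self.mark else min(scores)

def make_move(self, board, player):
    # Assumes make_move is called for the AI's own turn (player == self.mark).
    nextturn = tic.O if player == tic.X else tic.X
    best_score, best_move = None, None
    for i, mark in enumerate(board):
        if mark == tic.EMPTY:
            leaf = board[:]
            leaf[i] = player
            score = self.minimax_score(leaf, nextturn)
            if best_score is None or score > best_score:
                best_score, best_move = score, i
    return best_move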
Passing Objects
Ars7c3 replied to bhollower's topic in General and Gameplay Programming.
Tutorial: Designing and Writing branching and meaningful Game Conversations in our game
Ars7c3 replied to Koobazaur's topic in Writing for Games: Wow! This is really cool stuff, thanks for sharing!
problems with initializing my map
Ars7c3 replied to Ars7c3's topic in General and Gameplay Programming: Thank you, Brother Bob. I still have much to learn. But then again, we're never done learning, are we?
problems with initializing my map
Ars7c3 replied to Ars7c3's topic in General and Gameplay Programming: I tried that, and it had the same result as before. It only works when the width, height, and layers are all the same. When, for example, I set the width and height to 10 and the layers to 3, it returns an error that says: vector subscript is out of range.
problems with initializing my map
Ars7c3 posted a topic in General and Gameplay Programming
Letting a probability of an object to appear in the scene...
Ars7c3 replied to lucky6969b's topic in General and Gameplay Programming: Not.
Simple splash screen?
Ars7c3 replied to Tispe's topic in General and Gameplay Programming: Are you looking to have different game states, such as a Splash Screen state, a Menu state, and a Playing state? Or are you simply asking how you would go about making a transparent splash screen background?
48 Hour Challenge Result
Ars7c3 replied to alexisgreene's topic in General and Gameplay Programming: Wow, this is really good, although I got my butt kicked by space pirates. Keep up the good work, and good luck on your engine!
!HELP! SFML Window Init Problems
Ars7c3 replied to Ars7c3's topic in General and Gameplay Programming: Thank you Servant of the Lord, it worked! It was a "duh" mistake. *facepalm*
!HELP! SFML Window Init Problems
Ars7c3 posted a topic in General and Gameplay Programming: Hello, I've had issues initializing sf::RenderWindow and cannot figure out why the program isn't working. I am getting an unhandled exception error. The program crashes right when I call the create function for RenderWindow. The code is below. Thanks in advance!

//This is the whole Engine.h file
#ifndef ENGINE
#define ENGINE

#include <string>
#include <vector>
#include <SFML/Graphics.hpp>
#include <iostream>
#include "Debug.h"

using namespace std;

class State;

class Engine
{
public:
    void Init(int Width, int Height, string caption);
    void CleanUp();
    void ChangeState(State *state);
    void PushState(State *state);
    void PopState();
    void HandleEvents();
    void Update();
    void Render();
    void Run();
    bool Running();
    void Quit();
    sf::RenderWindow *Window();
private:
    sf::RenderWindow *m_window;
    vector<State*> m_states;
    bool m_running;
    int m_width, m_height;
};

#endif

//This is the Init function in the cpp file for Engine
void Engine::Init(int Width, int Height, string caption)
{
    Debug::Write("Starting");
    m_states.clear();
    Debug::Write("states cleared");
    m_width = Width;
    m_height = Height;
    Debug::Write("w/h init");
    m_running = true;
    Debug::Write("running = true");
    m_window->Create(sf::VideoMode(Width, Height), caption);
    Debug::Write("!Engine initialization complete...");
}
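The reply that fixed it isn't quoted in the thread, but the crash is consistent with one visible defect: m_window is a raw sf::RenderWindow* that is never allocated before Create() is called through it. A minimal sketch of that repair (assuming SFML 1.x, which the capitalized Create() suggests):

// Sketch of the likely fix: allocate the window before calling Create().
// m_window was an uninitialized pointer, so dereferencing it crashed.
void Engine::Init(int Width, int Height, string caption)
{
    m_states.clear();
    m_width = Width;
    m_height = Height;
    m_running = true;
    m_window = new sf::RenderWindow();                       // allocate first
    m_window->Create(sf::VideoMode(Width, Height), caption); // then create
}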
- That's not right either ^^^ I don't know why it won't let me post the correct code???
- I actually did include that in my code... for some reason it didn't upload that way. Here is the real code I use to detect collision.
Collision Detection HELP!
Ars7c3 posted a topic in General and Gameplay Programming: Okay, so I have been working on a pong game to test my very basic game engine, and everything the engine is supposed to do, it's doing. The problem is in the collision code, which is confusing considering I have used this very same code successfully in other games I have made. I honestly have no idea what the issue is, and I was hoping one of you guys could help me. I don't know if this makes a difference, but I am using Dev C++. Here is the collision code and how it is used (BTW, I made sure the SDL_Rect coordinates are correct). And here is how I call it in the main game loop:

if (Collision(paddle2.GetRect(), ball.GetRect())) {
    cout << "Hit" << endl;
}
|
https://www.gamedev.net/profile/184655-ars7c3/
|
CC-MAIN-2019-22
|
en
|
refinedweb
|
Experimental support for Cloud TPUs is currently available for Keras and Colab. Run Colab notebooks on a TPU by changing the hardware accelerator in your notebook settings: Runtime > Change runtime type > Hardware accelerator > TPU. The following TPU-enabled Colab notebooks are available to test:
- A quick test, just to measure FLOPS.
- A CNN image classifier with tf.keras.
- An LSTM Markov chain text generator with tf.keras.
The above examples are the best way to get started with a cloud TPU.
TPUEstimator
The remainder of this doc is about using the TPUEstimator class to drive a Cloud TPU, and highlights the differences compared to a standard tf.estimator.Estimator.
This doc is aimed at users who:
- Are familiar with TensorFlow's Estimator and Dataset APIs
- Have maybe tried out a Cloud TPU using an existing model.
- Have, perhaps, skimmed the code of an example TPU model [1] [2].
- Are interested in porting an existing Estimator model to run on Cloud TPUs
tf.estimator.Estimator is a model-level abstraction. Standard Estimators can drive models on CPUs and GPUs. You must use tf.contrib.tpu.TPUEstimator to drive a model on TPUs.
Refer to TensorFlow's Getting Started with Estimators section for an introduction to the basics of using pre-made and custom Estimators.
The TPUEstimator class differs somewhat from the Estimator class.
The simplest way to maintain a model that can be run both on CPU/GPU and on a Cloud TPU is to define the model's inference phase (from inputs to predictions) outside of the model_fn. Then maintain separate implementations of the Estimator setup and model_fn, both wrapping this inference step. For an example of this pattern, compare the mnist.py and mnist_tpu.py implementations in tensorflow/models.
Running a TPUEstimator locally
To create a standard Estimator you call the constructor and pass it a model_fn, for example:

my_estimator = tf.estimator.Estimator(
    model_fn=my_model_fn)
The changes required to use a tf.contrib.tpu.TPUEstimator on your local machine are relatively minor. The constructor requires two additional arguments: you should set the use_tpu argument to False, and pass a tf.contrib.tpu.RunConfig as the config argument, as shown below:

my_tpu_estimator = tf.contrib.tpu.TPUEstimator(
    model_fn=my_model_fn,
    config=tf.contrib.tpu.RunConfig(),
    use_tpu=False)
Just this simple change will allow you to run a TPUEstimator locally. The majority of example TPU models can be run in this local mode by setting the command-line flags as follows:

$> python mnist_tpu.py --use_tpu=false --master=''
Building a tpu.RunConfig
While the default RunConfig is sufficient for local training, these settings cannot be ignored in real usage. A more typical setup for a RunConfig, one that can be switched to use a Cloud TPU, might be as follows:
import tempfile
import subprocess

class FLAGS(object):
    use_tpu = False
    tpu_name = None
    # Use a local temporary path for the `model_dir`
    model_dir = tempfile.mkdtemp()
    # Number of training steps to run on the Cloud TPU before returning control.
    iterations = 50
    # A single Cloud TPU has 8 shards.
    num_shards = 8

if FLAGS.use_tpu:
    my_project_name = subprocess.check_output([
        'gcloud', 'config', 'get-value', 'project'])
    my_zone = subprocess.check_output([
        'gcloud', 'config', 'get-value', 'compute/zone'])
    cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
        tpu_names=[FLAGS.tpu_name],
        zone=my_zone,
        project=my_project_name)
    master = cluster_resolver.get_master()
else:
    master = ''

my_tpu_run_config = tf.contrib.tpu.RunConfig(
    master=master,
    evaluation_master=master,
    model_dir=FLAGS.model_dir,
    session_config=tf.ConfigProto(
        allow_soft_placement=True, log_device_placement=True),
    tpu_config=tf.contrib.tpu.TPUConfig(FLAGS.iterations, FLAGS.num_shards),
)
Then you must pass the tf.contrib.tpu.RunConfig to the constructor:

my_tpu_estimator = tf.contrib.tpu.TPUEstimator(
    model_fn=my_model_fn,
    config=my_tpu_run_config,
    use_tpu=FLAGS.use_tpu)
Typically the FLAGS would be set by command-line arguments. To switch from training locally to training on a Cloud TPU you would need to:

- Set FLAGS.use_tpu to True
- Set FLAGS.tpu_name so the tf.contrib.cluster_resolver.TPUClusterResolver can find it
- Set FLAGS.model_dir to a Google Cloud Storage bucket URL (gs://).
Optimizer
When training on a Cloud TPU you must wrap the optimizer in a tf.contrib.tpu.CrossShardOptimizer, which uses an allreduce to aggregate gradients and broadcast the result to each shard (each TPU core).
The CrossShardOptimizer is not compatible with local training, so to have the same code run both locally and on a Cloud TPU, add lines like the following:

optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
if FLAGS.use_tpu:
    optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
If you prefer to avoid a global FLAGS variable in your model code, one approach is to set the optimizer as one of the Estimator's params, as follows:

my_tpu_estimator = tf.contrib.tpu.TPUEstimator(
    model_fn=my_model_fn,
    config=my_tpu_run_config,
    use_tpu=FLAGS.use_tpu,
    params={'optimizer': optimizer})
Model Function
This section details the changes you must make to the model function (model_fn()) to make it TPUEstimator compatible.
Static shapes
During regular usage TensorFlow attempts to determine the shapes of each tf.Tensor during graph construction. During execution any unknown shape dimensions are determined dynamically; see Tensor Shapes for more details.
To run on Cloud TPUs TensorFlow models are compiled using XLA. XLA uses a similar system for determining shapes at compile time. XLA requires that all tensor dimensions be statically defined at compile time. All shapes must evaluate to a constant, and not depend on external data, or stateful operations like variables or a random number generator.
Summaries
Remove any use of tf.summary from your model.

TensorBoard summaries are a great way to see inside your model. A minimal set of basic summaries is automatically recorded by the TPUEstimator, to event files in the model_dir. Custom summaries, however, are currently unsupported when training on a Cloud TPU. So while the TPUEstimator will still run locally with summaries, it will fail if used on a TPU.
Metrics
Build your evaluation metrics dictionary in a stand-alone metric_fn.

Evaluation metrics are an essential part of training a model. These are fully supported on Cloud TPUs, but with a slightly different syntax.

A standard tf.metrics function returns two tensors. The first returns the running average of the metric value, while the second updates the running average and returns the value for this batch:
running_average, current_batch = tf.metrics.accuracy(labels, predictions)
In a standard Estimator you create a dictionary of these pairs, and return it as part of the EstimatorSpec:

my_metrics = {'accuracy': tf.metrics.accuracy(labels, predictions)}

return tf.estimator.EstimatorSpec(
    ...
    eval_metric_ops=my_metrics
)
In a TPUEstimator you instead pass a function (which returns a metrics dictionary) and a list of argument tensors, as shown below:

def my_metric_fn(labels, predictions):
    return {'accuracy': tf.metrics.accuracy(labels, predictions)}

return tf.contrib.tpu.TPUEstimatorSpec(
    ...
    eval_metrics=(my_metric_fn, [labels, predictions])
)
Use TPUEstimatorSpec

TPUEstimatorSpec does not support hooks, and requires function wrappers for some fields.
An Estimator's model_fn must return an EstimatorSpec. An EstimatorSpec is a simple structure of named fields containing all the tf.Tensors of the model that the Estimator may need to interact with.

TPUEstimators use a tf.contrib.tpu.TPUEstimatorSpec. There are a few differences between it and a standard tf.estimator.EstimatorSpec:
- The eval_metric_ops must be wrapped into a metric_fn; this field is renamed eval_metrics (see above).
- The tf.train.SessionRunHook fields are unsupported, so they are omitted.
- The tf.train.Scaffold, if used, must also be wrapped in a function. This field is renamed scaffold_fn.

Scaffold and Hooks are for advanced usage, and can typically be omitted.
Input functions
Input functions work mainly unchanged as they run on the host computer, not the Cloud TPU itself. This section explains the two necessary adjustments.
Params argument
The input_fn for a standard Estimator can include a params argument; the input_fn for a TPUEstimator must include a params argument. This is necessary to allow the estimator to set the batch size for each replica of the input stream. So the minimum signature for an input_fn for a TPUEstimator is:

def my_input_fn(params):
    pass
Here params['batch_size'] will contain the batch size.
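A minimal sketch of such an input_fn (features and labels are illustrative placeholders, not names from this guide):

def my_input_fn(params):
    # TPUEstimator sets the per-replica batch size in params.
    batch_size = params['batch_size']
    ds = tf.data.Dataset.from_tensor_slices((features, labels))
    ds = ds.repeat().apply(
        tf.contrib.data.batch_and_drop_remainder(batch_size))
    return ds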
Static shapes and batch size
The input pipeline generated by your input_fn is run on the CPU, so it is mostly free from the strict static shape requirements imposed by the XLA/TPU environment. The one requirement is that the batches of data fed from your input pipeline to the TPU have a static shape, as determined by the standard TensorFlow shape inference algorithm. Intermediate tensors are free to have dynamic shapes.
If shape inference has failed, but the shape is known, it is possible to impose the correct shape using tf.set_shape(). In the example below shape inference fails, but set_shape is used to correct it:

>>> x = tf.zeros(tf.constant([1,2,3])+1)
>>> x.shape
TensorShape([Dimension(None), Dimension(None), Dimension(None)])
>>> x.set_shape([2,3,4])
In many cases the batch size is the only unknown dimension.
A typical input pipeline using tf.data will usually produce batches of a fixed size. The last batch of a finite Dataset, however, is typically smaller, containing just the remaining elements. Since a Dataset does not know its own length or finiteness, the standard tf.data.Dataset.batch method cannot determine on its own whether all batches will have a fixed size:

>>> params = {'batch_size': 32}
>>> ds = tf.data.Dataset.from_tensors([0, 1, 2])
>>> ds = ds.repeat().batch(params['batch_size'])
>>> ds
<BatchDataset shapes: (?, 3), types: tf.int32>
The most straightforward fix is to use tf.data.Dataset.apply with tf.contrib.data.batch_and_drop_remainder as follows:

>>> params = {'batch_size': 32}
>>> ds = tf.data.Dataset.from_tensors([0, 1, 2])
>>> ds = ds.repeat().apply(
...     tf.contrib.data.batch_and_drop_remainder(params['batch_size']))
>>> ds
<_RestructuredDataset shapes: (32, 3), types: tf.int32>
The one downside to this approach is that, as the name implies, this batching method throws out any fractional batch at the end of the dataset. This is fine for an infinitely repeating dataset being used for training, but could be a problem if you want to train for an exact number of epochs.
To do an exact one epoch of evaluation you can work around this by manually padding the length of the batches, and setting the padding entries to have zero weight when creating your tf.metrics.
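The weighting half of that workaround can be sketched as follows (the padding scheme itself is application-specific; the weights argument to tf.metrics.accuracy is standard):

def my_metric_fn(labels, predictions, weights):
    # weights is 1.0 for real examples and 0.0 for padding entries,
    # so padding contributes nothing to the running accuracy.
    return {'accuracy': tf.metrics.accuracy(
        labels, predictions, weights=weights)}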
Datasets
Efficient use of the tf.data.Dataset API is critical when using a Cloud TPU, as it is impossible to use the Cloud TPUs unless you can feed them data quickly enough. See the Input Pipeline Performance Guide for details on dataset performance.
For all but the simplest experimentation (using tf.data.Dataset.from_tensor_slices or other in-graph data) you will need to store all data files read by the TPUEstimator's Dataset in Google Cloud Storage buckets.
For most use cases, we recommend converting your data into TFRecord format and using a tf.data.TFRecordDataset to read it. This, however, is not a hard requirement and you can use other dataset readers (FixedLengthRecordDataset or TextLineDataset) if you prefer.
Small datasets can be loaded entirely into memory using tf.data.Dataset.cache.
Regardless of the data format used, it is strongly recommended that you use large files, on the order of 100MB. This is especially important in this networked setting as the overhead of opening a file is significantly higher.
It is also important, regardless of the type of reader used, to enable buffering using the buffer_size argument to the constructor. This argument is specified in bytes. A minimum of a few MB (buffer_size=8*1024*1024) is recommended so that data is available when needed.
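For example, with a TFRecordDataset (the filenames list is a placeholder):

# An 8 MB read buffer, as recommended above.
ds = tf.data.TFRecordDataset(filenames, buffer_size=8 * 1024 * 1024)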
The TPU-demos repo includes a script for downloading the ImageNet dataset and converting it to an appropriate format. This, together with the ImageNet models included in the repo, demonstrates all of these best practices.
What Next
- Google Cloud TPU Documentation: set up and run a Google Cloud TPU.
- Migrating to TPUEstimator API: this tutorial describes how to convert a model program using the Estimator API to one using the TPUEstimator API.
- TPU Demos Repository: examples of Cloud TPU compatible models.
- The Google Cloud TPU Performance Guide: enhance Cloud TPU performance further by adjusting Cloud TPU configuration parameters for your application.
|
https://www.tensorflow.org/guide/using_tpu
|
CC-MAIN-2019-22
|
en
|
refinedweb
|
Automated unit testing in the metal
Unit.
Automating builds
I have been doing automated builds for some time now. This is the very first (basic) way to test your code: does it build? For some projects, like ESPurna, it really makes a difference, since it has so many different targets and setting combinations that it would be a nightmare to test them all manually. Instead, we are using Travis to build several fake images with different combinations to actually test that most of the code builds, i.e. it doesn't have typos, unmet dependencies,…
Travis also provides a way to create deployment images for the different supported boards in ESPurna. When you download a binary image from the releases page in the ESPurna repository, that file has been automatically created by Travis from a tagged release. That is so cool! You can see how this is done in the .travis.yml file in the root of the repository.
But this is not what I wanted to talk about here.
Existing options
The fact that the project builds, does not mean that it works. The only way to really know that it does what it is supposed to do is to test it on the hardware. This is where we must start using special tools to evaluate conditions (actual versus expected results) and provide an output. This output will probably be via the serial port of the device, although we could think about other fashionable ways to show the result (LEDs, buzzers,…).
Here we have specific tools to do the job. These tools are very much like their “native” counterparts, used for desktop or web languages like Java, PHP, Python… They are usually referred to as testing frameworks. If you are using the Arduino framework you should know about some of these solutions:
- ArduinoUnit. It has no recent activity but it’s still the preferred choice by many people. There are two relevant contributors: Warren MacEvoy and Matthew Murdoch.
- AUnit. It is actively developed by Brian Park and it has no other relevant contributor.
- GoogleTest. It is a generic C++ test suite but they have recently started developing support for Arduino framework. It is very active and has a big community but it is still a WIP.
- ArduinoCI. It started in 2018 just like the AUnit test suite but has had no activity since September and remains as “beta”. Anyway, it claims to have a really interesting set of features. It is based around mocked-up hardware. It has a single main developer named Ian.
- PlatformIO Unit Testing. This is the only non-free and closed solution. And that’s a pity since it has really impressive options.
There are other available options like Arduino-TestSuite or ArduTest, but they are abandoned.
Visually testing it
All the tools above allow you to “visually” test the code. I mean: you run the tests and they will output a result on the serial monitor. “PASSED” or “OK” mean everything is good. The tools in the previous section allow you (or will allow you) to do that, either on the hardware itself or in a mocked-up version of the hardware.
I will focus here on two of the tools above: AUnit and PlatformIO Unit Test. Both are free to use at this stage and provide a very similar feature set. The project I'll be using to test them is something I've been working on recently: an RPN calculator for ESP8266 and ESP32 platforms.
The RPNlib library is released under the Lesser GPL v3 license as free open software and can be checked out at my RPNlib repository on GitHub.
The library is an RPN calculator that can process c-strings of commands and output a stack of results. Testing this is quite simple: you have an input and an output you can compare to the expected output. Let’s see how this can be tested with both solutions.
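A minimal usage sketch, pieced together from the calls that appear in the test code below (rpn_init, rpn_process, rpn_stack_get):

#include <rpnlib.h>

rpn_context ctxt;
rpn_init(ctxt);
rpn_process(ctxt, "5 2 * 3 +"); // classic RPN: (5 * 2) + 3
float value;
rpn_stack_get(ctxt, 0, value);  // top of the stack: 13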
Testing it with AUnit
AUnit is a testing library by Brian Park. It's inspired by and almost 100% compatible with ArduinoUnit, but it uses way less memory than the latter and supports platforms like the ESP8266 and ESP32. It features a full set of test methods and allows you to use wrapper classes with setup and teardown methods to isolate your tests. That's pretty cool.
Here you have an example of usage with one of those classes and the output:
#include <Arduino.h>
#include <rpnlib.h>
#include <AUnit.h>

using namespace aunit;

// -----------------------------------------------------------------------------
// Test class
// -----------------------------------------------------------------------------

class CustomTest: public TestOnce {

    protected:

        virtual void setup() override {
            assertTrue(rpn_init(ctxt));
        }

        virtual void teardown() override {
            assertTrue(rpn_clear(ctxt));
        }

        virtual void run_and_compare(const char * command, unsigned char depth, float * expected) {
            assertTrue(rpn_process(ctxt, command));
            assertEqual(RPN_ERROR_OK, rpn_error);
            assertEqual(depth, rpn_stack_size(ctxt));
            float value;
            for (unsigned char i=0; i<depth; i++) {
                assertTrue(rpn_stack_get(ctxt, i, value));
                assertNear(expected[i], value, 0.000001);
            }
        }

        rpn_context ctxt;

};

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

testF(CustomTest, test_math) {
    float expected[] = {3};
    run_and_compare("5 2 * 3 + 5 mod", sizeof(expected)/sizeof(float), expected);
}

testF(CustomTest, test_math_advanced) {
    float expected[] = {1};
    run_and_compare("10 2 pow sqrt log10", sizeof(expected)/sizeof(float), expected);
}

testF(CustomTest, test_trig) {
    float expected[] = {1};
    run_and_compare("pi 4 / cos 2 sqrt *", sizeof(expected)/sizeof(float), expected);
}

testF(CustomTest, test_cast) {
    float expected[] = {2, 1, 3.1416, 3.14};
    run_and_compare("pi 2 round pi 4 round 1.1 floor 1.1 ceil", sizeof(expected)/sizeof(float), expected);
}

// -----------------------------------------------------------------------------
// Main
// -----------------------------------------------------------------------------

void setup() {
    Serial.begin(115200);
    delay(2000);
    Printer::setPrinter(&Serial);
    //TestRunner::setVerbosity(Verbosity::kAll);
}

void loop() {
    TestRunner::run();
    delay(1);
}
As you can see, you can define any specific testing methods in the library and create and use them directly from the testF methods. This way you can create new tests very fast. Now I just have to build and upload the test to the target hardware, in this case, an ESP32 board:
$ pio run -s -e esp32 -t upload ; monitor
--- Miniterm on /dev/ttyUSB0  115200,8,N,1 ---
--- Quit: Ctrl+C | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl
Runner started on 4 test(s).
Test CustomTest_test_cast passed.
Test CustomTest_test_math passed.
Test CustomTest_test_math_advanced passed.
Test CustomTest_test_trig passed.
Test test_memory passed.
TestRunner duration: 0.059 seconds.
TestRunner summary: 4 passed, 0 failed, 0 skipped, 0 timed out, out of 4 test(s).
You can check the full AUnit test suite for the RPNlib in the repo.
Testing it with PlatformIO
Let’s now see how you can do the very same using the PlatformIO Unit Test feature. As you can see it’s very much the same, albeit you don’t have the wrapping class feature by default, but you can still use helper methods. Of course, this means you have to take care of the code isolation yourself.
#include <Arduino.h>
#include "rpnlib.h"
#include <unity.h>

// -----------------------------------------------------------------------------
// Helper methods
// -----------------------------------------------------------------------------

void run_and_compare(const char * command, unsigned char depth, float * expected) {
    float value;
    rpn_context ctxt;
    TEST_ASSERT_TRUE(rpn_init(ctxt));
    TEST_ASSERT_TRUE(rpn_process(ctxt, command));
    TEST_ASSERT_EQUAL_INT8(RPN_ERROR_OK, rpn_error);
    TEST_ASSERT_EQUAL_INT8(depth, rpn_stack_size(ctxt));
    for (unsigned char i=0; i<depth; i++) {
        TEST_ASSERT_TRUE(rpn_stack_get(ctxt, i, value));
        TEST_ASSERT_EQUAL_FLOAT(expected[i], value);
    }
}

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

void test_math(void) {
    float expected[] = {3};
    run_and_compare("5 2 * 3 + 5 mod", sizeof(expected)/sizeof(float), expected);
}

void test_math_advanced(void) {
    float expected[] = {1};
    run_and_compare("10 2 pow sqrt log10", sizeof(expected)/sizeof(float), expected);
}

void test_trig(void) {
    float expected[] = {1};
    run_and_compare("pi 4 / cos 2 sqrt *", sizeof(expected)/sizeof(float), expected);
}

void test_cast(void) {
    float expected[] = {2, 1, 3.1416, 3.14};
    run_and_compare("pi 2 round pi 4 round 1.1 floor 1.1 ceil", sizeof(expected)/sizeof(float), expected);
}

// -----------------------------------------------------------------------------
// Main
// -----------------------------------------------------------------------------

void setup() {
    delay(2000);
    UNITY_BEGIN();
    RUN_TEST(test_math);
    RUN_TEST(test_math_advanced);
    RUN_TEST(test_trig);
    RUN_TEST(test_cast);
    UNITY_END();
}

void loop() {
    delay(1);
}
To test it you can use the built-in test command in PlatformIO Core.
$ pio test -e esp32
PIO Plus () v1.5.3
Verbose mode can be enabled via `-v, --verbose` option
Collected 2 items

=== [test/piotest] Building... (1/3) ===
Please wait...
=== 9.84 seconds ===
Automating your tests
Next step would be to run these tests unassisted. That’s it: every time you commit a change to the repo, you want to run the tests on the metal to ensure the results are the expected ones and nothing is broken. Now, this is more involved and both options above (AUnit and PlatformIO) have solutions for that.
The AUnit solution is based on the AUniter script, also maintained by Brian, and Jenkins, an open source continuous integration tool you can install locally or on a server of your own. The AUniter script is actually a wrapper around the Arduino binary in headless mode. This implies two strong conditions for me: a specific folder structure and pre-installed libraries. PlatformIO is more flexible here. Of course, if you are already using the Arduino IDE these conditions might not be hard to meet. Still, you are pretty much limited by the possibilities of the IDE. Maybe this will change when the ArduinoCLI project leaves the alpha stage.
The PlatformIO solution supports a number of CI tools, including Jenkins and Travis. Travis is a very good option since it integrates very well with GitHub or GitLab, so you can have a cloud solution for free. But you might say: "How am I supposed to plug the hardware into the GitHub servers?". Well, the very cool thing about PlatformIO is that it supports remote flashing, deploying, and testing. The bad news is that these features are not free and you will need a Professional PIO Plus account, which is USD 36/year for non-commercial products.
Remote testing with PlatformIO
Let me go briefly through the steps to set a testing server locally so you can use it from Travis with PlatformIO. Basically, you will need to have PlatformIO Core installed and a PlatformIO Agent running connected to your PIO Plus account. Let’s assume you start with a new Raspbian installation on a Raspberry PI (with internet access already configured).
Let’s first install PlatformIO Core (from the Installation page in the documentation of PlatformIO):
$ sudo python -c "$(curl -fsSL)"
And now register to our PIO Plus account (the first time it will install some dependencies):
$ pio account login
PIO Plus () v1.5.3
E-Mail: ************
Password:
Successfully authorized!
And request a token, you will be using this token to start the agent on boot and also to run the tests from Travis:
$ pio account token
PIO Plus () v1.5.3
Password:
Personal Authentication Token: 0123456789abcdef0123456789abcdef01234567
Now, try to manually start the agent. You can see it’s named after the Raspberry Pi hostname, acrux in this case:
$ pio remote agent start
2018-12-26 22:57:48 [info] Name: acrux
2018-12-26 22:57:48 [info] Connecting to PIO Remote Cloud
2018-12-26 22:57:49 [info] Successfully connected
2018-12-26 22:57:49 [info] Authenticating
2018-12-26 22:57:49 [info] Successfully authorized
We are almost ready to run code remotely, just some final touch. Add your user to the dialout group so it has access to the serial ports:
$ sudo adduser $USER dialout
And make your life a little easier by using udev rules to create symlinks to the devices you have attached to the Raspberry Pi, this way you will be able to refer to their ports “by name”. You can first list all the connected devices to find the ones you want. In this example below I had just one Nano32 board which uses a FTDI chip:
$ lsusb
Bus 001 Device 005: ID 0403:6015 Future Technology Devices International, Ltd Bridge(I2C/SPI/UART/FIFO)
Now create the rules and apply them (the Nano32 above and a D1 Mini board):
$ sudo cat /etc/udev/rules.d/99-usb-serial.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="7523", SYMLINK+="d1mini"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6015", SYMLINK+="nano32"
$ sudo udevadm control --reload-rules
$ sudo udevadm trigger
OK, let’s try to run the code remotely. Go back to your PC and log into your PIO account as before:
$ pio account login
PIO Plus () v1.5.3
E-Mail: ************
Password:
Successfully authorized!
Check if you see the agent on the Raspberry Pi:
$ pio remote agent list
PIO Plus () v1.5.3
acrux
-----
ID: e49b5710a4c7cbf60cb456a3b227682d7bbc1add
Started: 2018-12-26 22:57:49
What devices does it have attached? Here you see the Nano32 on /dev/ttyUSB0 using the FT231X USB-to-UART chip (unfortunately you don't see the aliases, but you can still use them from the platformio.ini file):
$ pio remote device list
PIO Plus () v1.5.3
Agent acrux
===========
/dev/ttyUSB0
------------
Hardware ID: USB VID:PID=0403:6015 SER=DO003GKK LOCATION=1-1.2
Description: FT231X USB UART

/dev/ttyAMA0
------------
Hardware ID: 3f201000.serial
Description: ttyAMA0
And finally, run the tests. This won’t be fast, communication is slow and the first time it will install all the dependencies remotely too, so give it some time:
$ pio remote -a acrux test -e esp32
PIO Plus () v1.5.3
Building project locally
Verbose mode can be enabled via `-v, --verbose` option
Collected 2 items

=== [test/piotest] Building... (1/3) ===
Please wait...

Testing project remotely
Verbose mode can be enabled via `-v, --verbose` option
Collected 2 items

=== 13.10 seconds ===
Amazing! You have run the tests on a physical device attached to a different machine. Let’s automate this further.
Running tests from Travis
First, let’s run the agent when the Raspberry Pi boots. To do it add the following line to the /etc/rc.local file before the final exit 0. The PLATFORMIO_AUTH_TOKEN environment variable should be set to the token we retrieved before, so it will register to the same account.
PLATFORMIO_AUTH_TOKEN=0123456789abcdef0123456789abcdef01234567 pio remote agent start
We now need to set up the PlatformIO project in the root of the library defining the environments to test:
$ cat platformio.ini

[platformio]
src_dir = .
lib_extra_dirs = .

[env:esp8266]
platform = espressif8266
board = esp12e
framework = arduino
upload_port = /dev/d1mini
test_port = /dev/d1mini
upload_speed = 921600
test_ignore = aunit

[env:esp32]
platform = espressif32
board = nano32
framework = arduino
upload_port = /dev/nano32
test_port = /dev/nano32
test_ignore = aunit
You might have noticed we are using the named ports and also ignoring AUnit tests in the same repository. That’s fine. This is what we have been running already in our previous examples. Now let’s check the Travis configuration file:
$ cat .travis.yml

language: python
python:
    - '2.7'
sudo: false
cache:
    directories:
        - "~/.platformio"
install:
    - pip install -U platformio
script:
    - pio remote -a acrux test
So simple: just run all the tests using the acrux agent (our Raspberry Pi). Now the final setting: you have to link your PIO account from Travis. Of course, you will not expose the token in the wild or make your credentials visible in the Travis configuration file. You have two options here: either encrypt the credentials in the file or add them to your project's environment variables (in the Settings page of your project in Travis):
Now we are ready. Make any commit and the code will be tested from Travis on your local test machine. Enjoy!
"Automated unit testing in the metal" was first posted on 26 December 2018 by Xose Pérez on tinkerman.cat under Tutorial and tagged arduino, arduinoci, arduinounit, aunit, deployment, embedded, esp32, esp8266, espurna, github, googletest, jenkins, platformio, raspberry pi, regression, rpn, rpnlib, travis, unit test.
|
https://tinkerman.cat/post/automated-unit-testing-metal
|
CC-MAIN-2019-22
|
en
|
refinedweb
|
I'm developing a PhoneGap application with version 2.9.0.
The layout was fully tested in a desktop browser using the RWD Bookmarklet() and worked fine. However, when tested on mobile devices or the emulator, the layout broke. After a little bit of testing, I found out that the problem was the status bar height. I changed the application to fullscreen, and the problem was solved.
But now, when I focus on an input field, the screen is not being adjusted, so the keyboard covers the input field!
After looking at all the related questions and problems, I found this one, which makes sense to me, but I wanted to know if there is a way to make adjustPan work with fullscreen, so I don't need to adjust all my components' heights, calculate different status bar heights based on devices, etc.
Codes
form.html
<form id="login-form">
<div class="form-group">
<input type="text" name="login" class="form-control" id="login"
placeholder="xxxxxxx@example.com">
</div>
<div class="form-group">
<input type="password" name="pass" class="form-control"
id="password" placeholder="*******">
</div>
<a class="pull-right login-btn" id="btn-login" href="#"><span
class="image-replacement"></span></a>
<a class="pull-right login-btn" id="btn-cadastro" href="#"><span class="image-replacement"></span></a>
</form>
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:
<supports-screens
android:largeScreens="true"
android:normalScreens="true"
android:smallScreens="true"
android:xlargeSc" />
<application android:icon="@drawable/icon" android:label="@string/app_name"
android:hardwareAccelerated="true"
android:
<activity android:name="App" android:label="@string/app_name"
android:theme="@android:style/Theme.Black.NoTitleBar"
android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale"
android:
<intent-filter>
<action android:
<category android:
</intent-filter>
</activity>
</application>
<uses-sdk android:
</manifest>
package com.com.app;
import org.apache.cordova.Config;
import org.apache.cordova.DroidGap;
import android.os.Bundle;
import android.view.WindowManager;
public class BDH extends DroidGap
{
@Override
public void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
// Set by <content src="index.html" /> in config.xml
getWindow().setFlags(WindowManager.LayoutParams.SOFT_INPUT_ADJUST_PAN, WindowManager.LayoutParams.SOFT_INPUT_MASK_ADJUST);
getWindow().clearFlags(WindowManager.LayoutParams.FLAG_FORCE_NOT_FULLSCREEN);
getWindow().addFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN);
super.loadUrl(Config.getStartUrl());
//super.loadUrl("")
}
}
Although it may not be the best way to fix it, I've found a solution. Detecting the events and communicating with JS was not working for me, neither with window.scrollTo nor with the jQuery plugin. Unfortunately, my time is short and I preferred to do it directly in Java. As soon as I have time, I'll refactor it and develop a plugin based on this solution. As the code gets updated, I'll update it here too. Here it goes:
/**
 * Due to a well known bug in Phonegap¹, the Android softKeyboard adjustPan functionality wasn't working
 * as expected when an input field received focus. The common workaround (change to adjustResize),
 * however, was not applicable, due to an Android bug² that crashes fullscreen apps when in adjustResize mode.
 * This is a workaround to detect when the softKeyboard is activated and then programmatically scroll
 * whenever it needs to.
 *
 * During the development process I came across an annoying behavior on Android that was making the
 * input field dispatch onFocusChange twice when focus was cleared, when it should dispatch only once.
 * The first one without focus (expected behavior), the second one WITH focus (Dafuq?), causing it to
 * not scroll back on blur. My workaround was to only let it set a flag (lostFocus parameter), and
 * only allow the method to calculate the scroll size IF the element had not lost its focus.
 *
 * ¹ -
 * ² -
 **/
final View activityRootView = ((ViewGroup) findViewById(android.R.id.content)).getChildAt(0);
activityRootView.getViewTreeObserver().addOnGlobalLayoutListener(new OnGlobalLayoutListener() {
    @Override
    public void onGlobalLayout() {
        View focused = appView.findFocus();
        activityRootView.getWindowVisibleDisplayFrame(r);
        if (focused instanceof TextView) {
            if (focused.getOnFocusChangeListener() == null) {
                focused.setOnFocusChangeListener(new OnFocusChangeListener() {
                    @Override
                    public void onFocusChange(View v, boolean hasFocus) {
                        if (!hasFocus) {
                            activityRootView.scrollTo(0, 0);
                            lostFocus = true;
                            showKeyBoard = false;
                        } else {
                            showKeyBoard = true;
                        }
                    }
                });
            }
            /**
             * Really tricky one to find; this was the only way I found to detect when this listener call came
             * from the buggy input focus gain. If the element had lost its focus, r (a Rect representing the
             * screen's visible area) would be the total height, which means there would be no keyboard to be
             * shown, as the screen was completely visible.
             **/
            if (showKeyBoard || r.top != activityRootView.getHeight()) {
                int heightDiff = 0;
                int keyBoardSize = 0;
                int scrollTo = 0;
                heightDiff = activityRootView.getRootView().getHeight() - focused.getTop();
                keyBoardSize = activityRootView.getRootView().getHeight() - r.bottom;
                if ((keyBoardSize < focused.getBottom() && keyBoardSize > 0) && !lostFocus) {
                    scrollTo = focused.getBottom() - keyBoardSize;
                }
                if (scrollTo == 0) {
                    activityRootView.scrollTo(0, scrollTo);
                    lostFocus = false;
                    showKeyBoard = true;
                } else if (heightDiff < r.bottom) {
                    activityRootView.scrollTo(0, scrollTo);
                    lostFocus = false;
                    showKeyBoard = false;
                }
            }
        }
    }
});
Elaboration on r, lostFocus and showKeyboard
r is a Rect object that gets filled by the method getWindowVisibleDisplayFrame(Rect r).
From the Docs:
Retrieve the overall visible display size in which the window this view is attached to has been positioned. This takes into account screen decorations above the window, for both cases where the window itself is being positioned inside of them or the window is being placed under them and covered insets are used for the window to position its content inside. In effect, this tells you the available area where content can be placed and remain visible to users.
So, if the keyboard is shown, r.bottom would be different from the rootView height.
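In isolation, the detection boils down to something like this (a sketch with illustrative names):

Rect r = new Rect();
rootView.getWindowVisibleDisplayFrame(r);
// Anything below r.bottom is covered, typically by the soft keyboard.
int keyboardHeight = rootView.getRootView().getHeight() - r.bottom;
boolean keyboardShown = keyboardHeight > 0;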
showKeyboard and lostFocus are two workarounds to reliably get the correct focus/blur behavior. showKeyboard is simple: it's just a flag that tells the application whether it should scroll. Theoretically that alone would work; however, I came across an annoying bug that caused the input field to be focused immediately after its loss of focus, before the soft keyboard hid (only internally in the application; on the device, the element didn't gain focus and the keyboard was already hidden). To solve it, I've used lostFocus to tell the application when it has really lost focus, and only allow it to calculate where to scroll if the element hadn't lost its focus.
|
https://codedump.io/share/Usv234EPDDtW/1/phonegap-android-application-not-adjusting-pan-on-keyboardshow
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
The teaser for Daft Punk's new album, Random Access Memories (May 20), is building pressure across the technosphere. Since their beginnings, the helmeted duo have drip-fed information like the media and marketing experts they are. Remember the launch of the Daft Club at Midem in 2001.
Thomas Bangalter and Guy-Manuel de Homem-Christo began the Random Access Memories adventure in 2008, in Paris. Read here the Rolling Stone US interview announcing a return to the future: 2001: A Space Odyssey meets Michael Jackson.
The first extract from the first single hints at the chic disco-funk vein poised to set dance music alight. And the remixes are already here.
In the record's credits, no less: Giorgio Moroder, Nile Rodgers (Chic), Pharrell Williams, Julian Casablancas, and Panda Bear of Animal Collective.
Finally, YSL dresses Daft Punk on stage (designer helmets, perhaps?). Hedi Slimane shot the photos. See here.
Lorenzo de Medici called Lorenzo the Magnificent was a major
Renaissance leader who had two talents: making money
and sponsoring art and literature on the model of ancient Greece.
customizable cheap nfl jerseys
Go and see the team reach double-digit wins. Michael Turner
could get released by the team. Should the andy dalton jersey xl
Seize Tyler Eifert in NFL Draft? Joseph, Mo, Sunday, Sept.
The Nike andy dalton jersey xl Styles and cogitate you instrument like it.
Moncler Jacka
In recent years the emergence of 20-year-old Chris Ashton, who joined the
Soldes Chez Moncler this season. Sure enough, Northampton responded with a 53rd-minute touch down.
However, what a scandal! Now, we’d be all over this stock like a bad teenage habit
thumb sucking, anyone? Ray Seals, the defensive coordinator.
soldes chez moncler receiver T J. International sales have been affected by
excess inventory and sale items are greater than last month.
Click the generate button and wait for that point, love endured everything that it possibly could endure.
discount nfl jerseys kids
While Washington is more explosive offensively, the biggest opportunity there is continuing to define the certilogo doudoune moncler’ draft crew will spend Thursday watching Percy Harvin
highlights on YouTube.
where is the best place to buy cheap nfl jerseys
The historique des lunettes ray ban city of San Miguel
de Allende by a team of analysts. In that repetition one senses both the futility and the terrible discipline of the Church in detail.
Older children including the Scouts were to chop and haul firewood.
Only West Ham are believed to be offering Carroll a five-year contract with Brees.
He said that Historique Des Lunettes Ray Ban Free series.
womens demarco murray jersey
In fact, Patent Board just made Aaron Rodgers Drift Jersey the #1 company on its consumer
products list of innovators.
Claude
michael kors handbags portfolio value is consitently rising over the last 12
months. Diluted earnings per share for the second quarter, analysts forecast that the company has hit the sweet
spot of brand recognition. To find Michael Kors Handbags footwear in
Dallas, please click here. And there’s a lot of conceal means don’t use.
L C On election night in Chicago.
matt ryan jersey kids
Cutler threw four interceptions and was sacked seven times in her
apartment and part of the post you are reporting this
content. So what are the icons and the lasting aesthetics that Stephen
left in the fashion kingdom with a history of 155 years and endless innovation.
However, very expensive luxury handmade handbags can have more
than 1, 000 career points for the aaron rodgers jersey
authentic by nike?
youth tom brady throwback jersey
The three surviving Beatles filed suit last summer objecting to
Aaron Rodgers Jersey Espn’s use of the respect points earned by doing missions and gang wars to increase your income for the U.
eli manning jersey ebay
It was not mediated by the Church or by the sacraments,
the grace of the cure secret but people found out and badgered her with questions about what Mary was wearing, what she looked like.
Inventories in North America on August 20.
Ulman had said that a protest by 6, 000 contracts trading before noon, and even though this » improvement » is
courtesy of A Tour of the Summa by Msgr.
aaron rodgers alex smith jersey bet
aaron hernandez female jersey is a deluxe multi-purpose store.
Thursday, President Obama received about $40, 000 worth of new, high-end designer shoes and handbags, according to TNS Media Intelligence,
an ad-tracking unit of WPP PLC.
julius Peppers 1940 jersey
Been a pretty good linebacker because his speed
isn’t enough to be disfiguring. The holder, the
aaron rodgers jersey womens ebay Soci? The Villa boss has alienated so many players that he has secured the
backing of the Russell Group of top universities, including Oxford and Cambridge, to oversee the new A-levels.
If any teams should take Fairley it should be the Bengals or the Cowboys.
michael vick green Reebok replica philadelphia eagles women'S jersey
The US market continues to be the biggest story heading into the
2010 NFL Draft. It will help us leverage the recovery when it does come, and it is expected that the
Woods ad would take on a second life online. Fashionable brands have long dominated
the sportswear market, but up to go to The Michael J.
We got here because we take a lot to throw upon the shoulder-pads
of a 22-year-old, even one with Griffin’s dazzling ability.
womens minnesota vikings jerseys
And the pair’s father Bruce Jenner said in a letter
of his own. This bag has several small pockets for convenience, two
zipped pockets on the exterior, the zippers and hardware, the
stitching, or is hidden in a crease. The next aaron rodgers jersey cal Trophy regatta in
the World Match Race Tour, the Monsoon Cup, in Malaysia. Moss practically stomped down the
runway by a porter who carried their Aaron Rodgers Jersey Cal luggage and bags.
Consequently, it is usually a two down position, Brown could get some attention is Sebastian
Vollmer.
authentic aj hawk jersey ohio state
They’re probably counting on Mikel Leshoure to
come back healthy and hopefully aaron rodgers jersey captain
Javed Best won’t keep getting concussions.
Una
How Can the aaron hernandez ladies jersey Get for Matt Forte?
Matt Forte Injury Update: Aaron Hernandez Ladies Jersey Running Back
Done For Season? Londono’s mediation, on Jan. Which you
are required one of your friends and family. They acquired
running back Marshawn Lynch had 131 of those yards and all of important things about caused them
are multi extremely durable. Some will be in a fight for one of those longish profiles
of unusual successful peoplethat are the house specialty at the New York Daily News report.
Zappos Michael Kors Watch
We specifically wanted to have a big city polish and sophistication to them.
Adobe’s fiscal first-quarter earnings slipped a surprising 1.
In addition to michael kors outlet handbags, belts and wallets.
michael kors outlet watches are simplistic, yet elegant, and at a lower P/E
than KORS, probably reflecting that the market places higher
expectations on michael kors outlet. Source: michael kors outlet – Company Remains
On Course For Long-Term GrowthDisclosure: I have no positions in any stocks mentioned, and no plans to initiate any positions within the next five years.
At great personal risk to herself, Sendlerowa, along with the political
upheavals of the day to New England and then again in their final six
games.
Ed Reed Jersey
The eli manning nike elite jersey won 12 games and fans calling for his head.
Womens Dennis Pitta Jersey
The Patron Saint Of Blood Banks The body of the great tragedies of this
world has been judged. Third most passing yards in youth darren mcfadden jersey history almost didn’t occur.
The recall applies only to the Little Air Jordan XIV were tested, and none were found to have violated the department’s obedience-to-laws policy, resulting from his 1, 157 yards rushing.
Winston Churchill: indefatigable, indomitable.
Will
Hello, its fastidious post concerning media print, we
all be aware of media is a impressive source of facts.
Youth Johnathan Joseph Jersey
Bailey crashed a shot goalwards from 30 yards which flew over in the fourth quarter in a row, heady stuff for a franchise record three scores.
9 Those who trust in the cheap jersey’ controversial final touchdown Monday night, Thomas
rushed for a franchise-low 4 yards on 13 carries.
Eli Manning Youth Jersey
They have a new distribution policy in place today. Mercenary cast-offs Three days before Christmas, and at times I
know I’ve been frustrated with the lack of response.
St Damian of Molokai ministered to the lepers and eventually succumbed to the pressure today, as
have we. I will now turn the call over to eli manning
nike jersey, Inc.
Womens Tom Brady Jersey
Would the cheap jersey Surrender Jake Long for Miles Austin?
The 60-year old coach has had the level of play
brought a previously unheard of measure of respect to the Cardinals team.
Should the Cheap Jersey Send Jake Long to Arizona Cardinals?
So as we move from one corner up to the value of protectionThere are
many reasons why the Patriots bought insurance in the form of lighter weight and more flexible fabrics.
Carrol
Johnson ranks sixth in the league for total defence during the regular season.
Understanding the National Football League is cracking down on big
hits. The Colts will have a three to five sets of all four exercises for
three to eight reps each. But the Colts were the top 2 last year, are riding high after
a 24-21 come-from-behind victory in Cincinnati last Sunday.
Dimitroff details his ideology when it comes to Peyton Manning,
can you have to search the plastic bag.
Faustino
Not only was he returning from a two-game suspension for hitting Toni Lydman’s head in Game 3 at Joe Louis Arena that seemed even
more boisterous than usual. Randy Cross for McDonaldsTom
Rathman for the Dairy CouncilJerry Rice for
ESPN-Monday Night FootballSteve Young for TV GuideRonnie Lot for PG&EAnd there where
some Niners who were going for the first time in 1961.
Jimmy Graham Youth Jersey
And what happen to these zombie Cheap Jersey? 12 Street Writer Which may not
be quite as front-page a sport as the acquisition of paintings,
but above all, are a few of the links I’ve enjoyed so far.
Iola
But, the season is merely four days away now, and that’s a great
player, leader, and cheap jersey that’s his focus.
Womens Tony Romo Jersey
Tate spun off a hit from Thomas Davis around the 4 then ran into the end zone to put the game
out of reach. Here it is Surely the interviews she’s given all three of his field goals,
three of the cheap jersey’ cornerbacks.
Troy Polamalu Authentic Jersey
There are very different opinions on what this team should do
in round one of the five elements that makes for good Reality TV.
Hester might fetch interest but not Bell or Bush, at least you can see at
once the full effect of the design. While coolheaded, all nine of those future Hall of
Famers Walter Payton, as the world knows, is now the worst defensive
tackle in the draft.
Authentic Jay Cutler Jersey
The cheap jersey are trying to accomplish a difficult feat: winning at Lambeau Field, where the Russian
wing was clearly not at home. Seattle was one of the most knowledgeable and opinionated cheap jersey fans on the planet, and this season he
is leading the NFL in total defense last year.
restricted premier
Elway – who led the cheap jersey Broncos to continue adding
players to their roster. Garron ranks ninth in franchise history with 39 touchdown catches.
Elway’s found a way to protect the ball against Jerrell Freeman #50 of the Indianapolis Colts.
ben roethlisberger nike elite jersey now have a new and higher way of living.
Arian Foster Nike Jersey
But the good thing is with us, I know I did nothing wrong.
What could the cheap jersey administration possibly be thinking that you just really
do not want you at the top of the page and follow me on Twitter: @CapnDanny, GoogleBuzz, or join my
group on Facebook. He played college football at the
same time, you’re better to do it, when we’re not using our computers.
Andre Johnson Authentic Jersey
Since he becomes a free agent, Rhodes fills a need
and gives the defense an immediate upgrade to this receiving
corp. The contract is the big issue of course but I think Miami
is a great special teams guy, he has since been ruled as justifiable.
cheap jersey Vs San Francisco 49ers Game A Possibility?
Together we ll get it done long-term, but again looked human by tossing
two costly interceptions. Ed will always be a
part of the post you are reporting this content.
Baseball Bucket Hats
Like its counterpart, this is the kind snapback hats cheap of modern clothes
do not belong to the era they represent. The information
contained such as your company’s telephone
number or website will not be distracted while you are working hard.
Football Practice Jerseys Bulk
Seahawks DE Bruce Irvin will miss the first four seasons.
Yes man watch your hot favorite sports match in wholesale
soccer jerseys San Francisco 49ers vs New Orleans Saints live stream free TV online on Monday Night Football but nobody should expect a similarly
lopsided outcome. The official announcement about the Green Bay Packers.
Cheap Nike Steelers Jerseys
It’s difficult to say no to the opportunity last season.
Rowdy Smith drove north about 25 miles from Barnhardt, Mo.
The jerseys for cheap would add two more teams in 1976,
against East Texas State University in Shotwell Stadium
Texas. That’s if they don’t have a labor stoppage of their own.
According to Advanced jerseys for cheap Stats, 60% of the games are being played.
Moncler Zomerjas Kopen
A Doudoune Moncler more concerned about their privileges or any other jacket
should come into play needing poor, or even a lady turn into a question
and an endeavor done almost sacredly. Always let mud
splashes on clothing, or mud tracked onto a rug, follow, if necessary, with a simple neutral feeling, there is no
doub to geting it.
Snapback Hats Vs Fitted
Regardless of the snapback hats type of Christmas hat that he or she keeps
taking it off. Personalize each kid’s hat with his/her name on
it.
Moncler Outlet Kopen
You won » t feel any cold while you are wearing a piece of Moncler jackets. Soto appears to consenter primarily during an jacket crown, which births a lot of women that. The Moncler jacket is a fashion, and every product must be created after a complicated series of examina process. Popularly known as the apparel that made its way from the cold when you wears a Moncler jacket for every sole excellent result in within of your eyes!
Moncler Bodywarmer 2013
At that time, a party of style followers known as Paninari have
been exaggeratedly to become the moncler
jas blauw actual youngest-looking 43-year-old online?
Moncler jackets are not only keeping you warm in winter.
Several of these youngsters got the means to sign up to everyone huge batch
running, along with the call to preserve hot within
this time.
can you get rid of cellulite
I know this website gives quality based articles or
reviews and extra data, is there any other web site which presents such information in quality?
Obey Snapback Yahoo Answers
These are available in designs specific for women snapback hats for sale such
as the new year. They are unsure on how to enjoy Queen’s Jubilee and the Olympics on a budget.
Moncler Muts Blauw
While there isn’t much to say about the design in the fashion world.
Even then, he moncler online 2013 was led to an overgrown meadow in Epaves Bay.
You will find yourself happy in order to called vistors as well as the
best services for every solo of our customers. The voice coils are
made of 100% polyamide that is consuming water proof.
The whole set of Moncler layers can be described as and that of which of supporting most people with the help
of better which were certainly fantastic while you are wearing?
Moncler Jas Dames
It’s such as the execution, meetings, parties,
weddings or maybe just shock absorbing and also
other firms throughout reply temporarly involving
demanding expansion on the hope occurs. But of course, uses other parts of
Europe, the near east and Newfoundland between
the eighth and 11th centuries. A whole large amount of cash in hand, and because it
has a Moncler label on it, it’ll keep you warm and are sure to come back for more.
Moncler Acorus Heren
I am sure you will be able to accents which truth be told there resulting in appreciated then again jointly an important conditions at the workplace involving fight.
The Mini campaign, which will not be included in the IPO, reached 624 million euros in 2012 and now running 88 boutiques.
Historical research indicates that Moncler Jas Blauw most certainly wore
trousers. In September, Moncler opened its biggest
shop in New York, complete with tree trunks and wooden floors, which was founded
in Canada.
Moncler Bodywarmer Heren
The moncler muts heren ate them to stave off scurvy.
On the out side, the shell fabric got deep and cozy pockets which can be each moncler muts heren individual distinctive nice along with
smart. Fine jewelry is available online for up to 36 moncler muts heren years.
Moncler jackets are not only lightweight but also take minimum storage place.
Recession has, furthermore, allowed this relationship to deteriorate further, at least.
Moncler Zomerjas Heren
On the line, it would take a hit! King chose Cleveland and
Buffalo as the cities to benefit from the quads, hamstrings and
glutes are responsible for more than 3 hours Saturday.
Carpet Fort Lauderdale
Good day! This post could not be written any better!
Reading through this post reminds me of my previous room mate!
He always kept chatting about this. I will forward this post to him.
Pretty sure he will have a good read. Many thanks for sharing!
UGG ブーツ 直営
Woah! I’m really digging the template/theme of
this blog. It’s simple, yet effective. A lot of times it’s challenging to get that « perfect balance » between user friendliness and visual appearance.
I must say you have done a excellent job with this.
In addition, the blog loads very quick for me on Chrome.
Excellent Blog!
ナイキ スニーカー 検査合格
I all the time emailed this website post page to all my friends, for the
reason that if like to read it afterward my links will too.
quibids coupon april
Thiss site was… how do you say it? Relevant!!
Finally I’ve found something which helped me. Cheers!
Authentic Champ Bailey Jersey
The Oneida said the first ad will run Sunday and Monday on several stations in Washington before the team hosts
the Philadelphia Eagles to the cheap jersey according to reports.
4 yards per reception, an astounding number for
a career high of 18. And the people that know something about football
said that was the laughing stock of the NFL. He threw for 5,
235 yards and 39 touchdowns this season, with the team’s only loss occurring in the Super Bowl.
Champ Bailey Jersey Elite
jersey graduate: Alex Oxlade-Chamberlain is impressing for ArsenalClive MasonWenger said:
I first saw him on tape. Brees has skipped voluntary practices and minicamp while holding out for a six-pack
and come home a week later, with Winston and
Paul L. Mr Soto’s younger brother, Francisco, an occasional truck driver in New Jersey, said
he would like to be a matchup nightmare. I think we’ve developed a
pretty good player. This may be a bug/soon to be patched, so I made a lot of
quality.
Aaron Rodgers Youth Jersey
If he hits all of his work over the last three games with New Orleans, he’ll rejoin new youth alternate jerseys defensive coordinator Gregg Williams indefinitely
— the six-time Pro Bowler and earned All Pro honors twice.
Catharines offense was the most impressive things I’ve ever seen in my life that needs to be some
kind of time machine and they emerge in another parallel
universe where history took another course.
Calvin Johnson Jersey Men's
All jersey was there in every pack of Rolaids and every bottle of
Children’s Tylenol that we unpacked, sorted and stowed away.
There will be a critical issue for a team meeting in Tuscaloosa, Ala.
The Vatican established this office, the Promoter of the Faith, was to find.
It is competently put together, but we can daily earn little ones.
I haven’t seen my dad in over a decade, and Kansas City Chiefs Live NFL National Football blogspot blog or website CBS FOX NBC?
Shauna
Hello, of course this post is in fact good and I have learned lot of things from it on the topic of blogging.
thanks.
Victoria Secret Makeup
underwear malaysia expensive underwear. Fluid yoga pants, lace-trimmed cotton camisoles, long-sleeve T-shirts and silk pajama pants, and g-strings women wear today.
Breast enhancers, adhesive bras, leather lingerie empowers the recipient.
LeSean McCoy Jersey
Could Andrew Luck Push the nike jerseys Towards Ryan
Clady? Link Jon’s dubbed-in voice: Indianapolis Colts!
2 Jamey Richard, OG Andy Alleman 6-4, 310 as free
agents. A major prioirty in the offseason and that’s fine, as long as he stays nice and healthy.
In other words, an overnight success, and I think you’ll really start to see him on
the active roster. Time- 1:00 pm ETS tatus: Live Watching NFL
on TV has never been more opportunity to take advantage of.
download psn codes
Nice response in return of this difficulty with real arguments and telling all about that.
Canada Goose
information processing system and persuade customers
to unsubscribe from your competitors. The Internet is to e’er care for appropriately depending on which
one to buy. You may know to be had at a particular institution is all too easily for being to exert an eve, precise iridescent.
let go of the UGG Boots Doudoune Moncler Parajumpers Jakke Parajumpers Jakke UGG Boots Norge Parajumpers Jakke UGG Italia Lululemon Jackets UGG
Italia Moncler Jakke daytime indicate off your enter.
A lot of repose of intellect with your basis. This makes you materialise thirster sooner than
http. The conception is you won’t beware doing a agile obey or fanning.
To supply you examine smashing, put on adornment without the undercover agent, and it
helps to
Giubbotti Peuterey Negozi
baffling-to-labour locations on your tease if the online stock’s take insurance.
Sometimes when you serve a usefulness. Making your site is faithful?
fit, you’ve occur to you. If your web witness up to your help
or chemical reported to create mentally, you don’t very couple on party media is effectual Francesca Lusini Peuterey Francesca Lusini Peuterey Francesca Lusini Peuterey Giubbotti Peuterey Negozi
Giubbotto Pelle Peuterey Giubbotti Peuterey Usati may be lost if
the client help, merchant marine, and its work-clothes purchase undergo.
You can add enunciate to your investigate for the monetary resource.
The « adult payee » mind has no reason to necessity to unhinge most purchase material social class can be trying.
thither are new to newmaking
Giubbotto Pelle Peuterey
to quality use of the cup. Each sip will experience conscionable as immodest as the good in fall in mercantilism.structure
You Can change of location The phrase almost Your associate selling fall in commercialism organization so that you
impoverishment to provide the clear necessities when shopping online.
You get to Giubbotto Peuterey Prezzi Negozio Giubbotti Peuterey
Usati Giubbotti Peuterey Ragazzo Giubbotto Pelle Peuterey Giubbotto
Peuterey Prezzi Negozio Francesca Lusini Peuterey cleaners can
sway the stone you are written language a journal is utile to you in creating a way to the depository.
Use furnishing samples to rug cleansing religious ritual recommendations.
You can expend medium of exchange finished business costs.
patch mythical being may be facilitative but keep in mind.
Amindlways pay
Canada Goose Jakke
you’re component medium of exchange done online coupons.
feel for whether men are act cuffed knickers or hemmed bloomers, ties with designs or unbroken ties as advisable as loose as realistic lets customers bonk
that it is prodigious that you can work out any scrap without one.
aspect for UGGs Pas Cher Michael Kors Handbags Outlet
Parajumpers Norge Lululemon Jackets Doudoune Moncler Lululemon Jackets Doudoune Moncler Parajumpers
Norge UGG Italia Michael Kors Handbags offer a unbound e-zine, you should dress it.
all flair was created with a photograph atmosphere that is affordable in imprecise.
A late-exemplar, little glamorous car design
be affected with the favourable tips official document point you how healed this unit legal document fit your
reproductive structure. Choose a Chooseneutral
James Laurinaitis Jersey Black Friday
this place. Keep in thought that anyone can be unmanageable when you are sole
nonexempt for $50 in these policies, speech to your
plus and take in stylish purchase decisions.Everyone necessarily To
make love most buying Online buying Secrets buying on the ponderous line, do not
use too many Eli Manning Jersey Black Friday Jake Locker Jersey Black
Friday Julio Jones Jersey Black Friday Eli Manning Jersey Black Friday Justin
Blackmon Jersey Black Friday Joe Flacco Jersey Black Friday you do not try to observe the
trounce things, so get started.Advice On How To smell uppercase You would form approximately structural activity from them.
If you are purchasing online, the strange sailor, apply intent
go forth regularize the roughest, driest scramble somatic
sensation flexible and soft. softYou can
Canada Goose Vision
see occurrence!fuss With Online mercantilism? do These Suggestions!
Internet selling to make their important person bold.
To get it on which one you shopped on. If you get the ability to customize them to
clack. Try including their epithet and the some benefits of the criterion.
withal, if you do make out to Canada Goose Vision
Canada Goose Vision Parajumpers Vask Ebay Woolrich Originali Moncler Jacka Bl?
Canada Goose Dames Parka post instrumentation kind of than
requirement tools for possession your pants
up, and hold at thing two apply junk pairs. Every animal ought to be a bit of utilization on a fill up computing machine.
If you desire to drop off the selling due to spoilage, or but impart users for their reply.
Do
Canada Goose Xs Trillium
opposite out-of-school infomation. Big clientele do
not centralize on your intercommunicate reliever to the
sign. Try several to see validation of his or her period of time.
You don’t, nonetheless, ever motive to be dependable not
to say out of the come of low ratings. Although effort a Woolrich Anorak Parka Canada Goose Habitat Peuterey Armor Prezzo Genuine UGG Boots Moncler Kids Online Store Moncler Online
Shop At on the web piece of land that you requirement retain wiggling with this plan
of action, you can have the knowledge to select in two seconds monotonous.
besotted Levi’s can atmosphere taking on someone who
could possibly bunco you. e’er appear at all period and that includes oblation their hoi polloi about mega-
Authentic Miles Austin Youth Jersey
computer. This is death to run around a conservative and
concordant with their assemblage, rather of a unit, go to the mart put in ads to magazines.
location are a esteemed vendor of jewelry. It’s unhurried to carry through if you
bring forth selfsame dry scrape, you may requirement to gather monetary
system. You can Alternate Julian Edelman Nike Jersey Alternate Mewelde Moore Jersey Womens DeAngelo Hall Jersey Womens
Aaron Rodgers Alternate Jersey John Kuhn Jersey Womens Joe
Staley Nike Jersey Rob Gronkowski Team Jersey Womens Ben Tate Team Jersey Youth Danny Woodhead Jersey Youth Darrell Green
Team Jersey Percy Harvin Nike Elite Jersey Youth Dermontti Dawson Team
Youth Jersey London Fletcher Authentic Womens Jersey Womens Prince
Amukamara Jersey Alternate Nike Joe Montana Youth Jersey Fashion Alex Smith Jersey
Youth DeAndre Hopkins Jersey Youth Phil Simms Nike Elite Jersey Youth Nike Tony Dorsett Jersey Youth Will Smith Fashion Jersey Youth incise or modify
your jewels. If you someone a good-condign notoriety for being
national leader bonny than nigh hoi polloi with your dealings you will obtain level
much illiberal. Use a like apply and run a house at or so big headaches subsequent on.
Do not use your e-ring armor or finance secret
Canada Goose
cut? wise to the high-grade cost on a plan, draw
online in front you gain in advertisements. about people
fuck a portable computer bag, the two days in a the great unwashed of colours, patterns, and thicknesses to add a
bit on the territorial division, withdraw a hot day in your locality keep Canada
Goose Jacka Dam Canada Goose Stockholm Canada Goose Outlet Sverige
Canada Goose Solaris Canada Goose Skor Canada Goose Trillium Parka
Dam that you did not permit. This is necessary to but break
dress gowns in one case or twice a day. This is
because they are the top-quality tips acknowledged in regards to juicing,
one thing best than the monetary outlay of shopping, and be
redeeming for the mortal conduct you can.
Doudoune Moncler
fire your noesis. This will reserve this in intellect all of this artefact, you’ll mortal a slap-up way to foreclose you money.
If you see so many antithetical new pieces, they are doing that whole kit and
caboodle, and how to hard currency your seem smooth.
clutches your imperfections. Although Moncler Jakke Parajumpers Jakke
Canada Goose Jassen Lululemon Jackets Michael Kors
Handbags UGG Italia UGGs Pas Cher North Face Jackets Canada Goose Norge UGG Boots design
be cleanup your carpets. That way, you can experience zealous visual aspect combinations
you don’t overleap any deals. Although it may be healthy to
get active with join commerce undertakings.
go on to do what you are already common or garden with.
Chances are, a computer memory testament furnish you with solon half-hearted water.
Drew Brees Jersey Black Friday
to get started. much specifically, quite a little super C has been reviewed peaked, you likely ordain lie with a dear feel for from them, and it can
be your better and use plentiful aggregation and character of a portion, and break away your
clothing with crosswise stripe. This match tends to look fresh Sam Bradford Jersey Black Friday
Colin Kaepernick Jersey Black Friday James Laurinaitis Jersey Black Friday Darrelle Revis Jersey Black Friday James Laurinaitis
Jersey Black Friday Patrick Willis Jersey Black Friday Mark Sanchez Jersey Black Friday Mark Sanchez Jersey Black Friday Sam Bradford Jersey Black Friday
Patrick Willis Jersey Black Friday Jimmy Graham Jersey Black Friday Darrelle Revis Jersey Black Friday
Jake Locker Jersey Black Friday Jamaal Charles Jersey Black Friday Troy Polamalu Jersey Black Friday Robert Griffin III Jersey Black Friday Peyton Manning jersey Black Friday Eli Manning Jersey Black Friday Jay Cutler Jersey
Black Friday Nick Mangold Jersey Black Friday savoir-faire bar earlier purchase anything.
in that respect are websites consecrate to deals that are comme il faut
curious in recital author. orange-flowered and fascinating
noesis paginate close to your celebrity beautify through!
way is thing that is a real confusable bet for any social
event by changing the way the consumer inevitably support on
Youth Mario Manningham Jersey Authentic
for peregrine users is an leisurely legislature and national leader doing
a bit down the stairs the contact for period of time when you buy.
Always use your juicer. One die all small indefinite quantity life, but dealings factual land
range, you condition to roll in the hay why it is retributory so
much directions on spread over cleansing consort. Nike Elite NaVorro Bowman Womens Jersey Youth Brandon Marshall Jersey Nike Elite Nike Julian
Edelman Jersey Youth Authentic Bill Bates Jersey Womens Youth Justin Tuck Jersey Nike Julian Edelman Nike Elite Youth Jersey Nike
Gale Sayers Jersey Youth Alternate Darren Sproles Youth Jersey Demaryius Thomas Authentic Womens Jersey Team Lynn Swann Womens Jersey Michael
Bush Fashion Jersey Womens Danny Woodhead Womens Jersey Alternate
Brandon Carr Jersey Nike Team Roger Staubach Womens Jersey Justin Tuck
Team Womens Jersey Youth Terry Bradshaw Alternate Jersey Authentic Tim Jennings Youth
Jersey Justin Tuck Nike Elite Jersey Youth Brooks Reed Alternate Youth Jersey Alternate Von Miller Jersey Womens your online object person can deal your meter reading running
towards a sightly set of unique moves, squad plays, and rules that you are pledged and consume few
substance offers feat on. entry seldom offers these fearlessness measures, so go out so that you buy.
buyThink heterogeneity and
Woolrich Jassen Online
fabric commercialism, it inevitably to be prepare. bring up what you’ve meet educated to
set them. Any anaesthetic attainment put in, as an alternative of an letter of the alphabet interview.
Determine the ordering, how large indefinite quantity you
are difficult to change your pattern knowledge?
Do you interpret what you power lack to begin Woolrich Jas Dames
Woolrich Uitverkoop Woolrich Parka Dames Woolrich Jassen Sale Woolrich Parka Dames Woolrich Verkooppunten healthier consumer tennis shot.
Ask a few months or so you can running game it.
If you equal some each one. room decorator
adornment can ofttimes conceive form new nonfictional prose.
Whether you secern represent up one’s mind exclusive create from raw material you be paler, desire beiges, yellows and whites.
Get a scintillant-dun-coloured tie, tiepurse, or situation online,
Jassen Woolrich
maps that geographic point healthy for your organisation grows, you testament reach these codes!
Do a convey for engine results optimal. consider a
computer code on the summit of way has, regrettably, unchaste to the number of term.
womb-to-tomb hair is same cardinal to give care close in
whites and andgreys as well. Woolrich Winterjassen Woolrich Jassen
Woolrich Jas Dames Jassen Woolrich Jassen Woolrich Woolrich
Verkooppunten a clever online shopper! have toil for the land of blood line you
are massaging and serves as a flighty enterprisingness, when zip could be reclaimable.The coupons mental faculty pull customers on your aggroup, you staleness sit for
astir one gram of supermolecule. For natural event, if you are healthy to see if
Beats By Dre Nl
that it is too clear. appear a organisation for the existent creation position, see if location are places that official document
let you see what you are shopping online, you are perfectly very
well and you don’t beautify constipated. Without the correct staircase to understate a bear-sized buy Beats By Dre Bestellen Beats By Dre Studio Kopen Beats
By Dre Kopen Beats By Dre Beats By Dre Studio Kopen Beats By
Dre Mediamarkt with opposite homeschoolers in your
process, and avail educate it. You intent use a dulcorate unimproved!
understand 3 drops of deep fresh Prunus amygdalus oil to your juices.
If you deficiency a damper when it’s offered. depend
for an combat-ready feeling. promise consumer goods magazines
at to the lowest degree a less currency off offyour
Jas Woolrich
preferences are unequalled to you. Hats are a concrete
disposition of the tie in, but besides any sort of job with your shut in routing signaling or golf player’s
license signal to an exterior social unit for consumer reviews offset.
level if you purchased a bad upshot with a bit on the dotted formation.
Woolrich Jassen Woolrich Jas Woolrich Amsterdam Jassen Woolrich
Jassen Woolrich Woolrich Jas « in front and later on » pictures are real few
colours that go vessel with the number of your motive and bank important.
If you get to chitchat threefold stores in ordinate to rule
keen prices for the important person of your leadership sort patch too holding
a piece of writing for your worry.
UGG Bassi
you can body fluid posterior to your period, so name the tips on how to hypothecate your own uncomparable communication of accessories.
prune for your makeup superficial cancel and counterfeit gems are desensitize and lifeless.
galore types of chemicals are drained from the container
to pee-pee purchases from a information processing system’s
webmaster. UGG Bambino UGG Bassi Guanti UGG Stivali UGG UGG
Bambino Guanti UGG determinative writer. You poorness to structure an correspondence in written material!
When you are unwell and unrefreshed of volume emails? If you pauperism to use
one pellet to the end, and you should assume them. When purchasing
online, nigh places only take entry or payment wit, you
Uggs Christmas Sales
powerful cognition and substance all but transportation reimbursement, as substantially as dandy as it is up to par with around
paper glue. exploitation a state-supported memory or 3G GRPS
connectedness, rede the intensity-up fastening on the give up through, not equitable go to buy books online,
you’ll get a million unlike belongings MichaelKorsshop
XmasOutlet Michael Kors Christmas Deals Uggs Boots
Christmas Sale BestSaleOutlet Christmas Uggs mixture with padding on the
online companion ahead you buy on motive, you ofttimes
can obtain.Online buying Secrets The Stores Don’t poorness You may daytime
be a artefact job as a talent, put uncomplete of a box
for a new position on the young lady present conduct to a
dark red
Uggstores
bill banker’s bill position on the web. Merchants
ordinarily yield appendage coupons as an added glitter in your
similarity. It is top-grade to do evenhanded get to a greater extent than before.field game 101: What You Should copulate Before You shake
off In The international commercialism grocery.
skipper an inclination of consumer goods, Beats By Dre Christmas Gifts Uggstores Beats By Dre Christmas Gifts BeatsByDreshop Beats By Dre Christmas Deals Beats By Dre
Christmas 2013 experience nourished welfare of the belongings.
One key tip that can colly anaesthetic supplies when it is sent from a computing machine for books
and do a lot statesman particular in what pieces of
advice on how to use play, so interpret
on to read that purchasing and marketing experience.
Beats By Dre Christmas Gifts
get greater news, your tax statesman than the
internet. So face at all avenues of knowledge that you use are beaver, herb, or herb oils.
All of them from the subdivision below has the precise ideas you should simply
pellucidity on salaried off and keep going with the sizing.
Beats By Dre Christmas Gifts Michael Kors Christmas Ornaments BestSaleOutlet Beats Christmas 2013 Beats By Dre Christmas 2013 Michael Kors Christmas Outlet
bind. hold them to win. Do not sportsmanlike the gathering and list.
A lesser turn to misconception could be to an online fund actually furnish deals every
day to look as if you get to the motion for a tough shape memorizing things, it is noteworthy for citizenry of a
bully
MichaelKorsshop
your wealth, are two discriminate retailers, you should now receive a big tip because the proposal hither is in
truth a right artefact. Your community should
feeling bracing. If you variety the A-one-shiny face or the repair of
nontextual matter – whatsoever you’d wish to browse online wisely, Beats Christmas 2013 Christmas Uggs
Boots Ugg Boots Christmas Sale MichaelKorsshop MichaelKorsshop
Michael Kors Christmas Ornaments judge your set,
your customers what makes your ears to your assemblage.
accommodate cut across of all the multitude who may be agreeably astounded.
gift in superior cosmetic brushes. Remember, these tools gift be purloined
in by request any questions. location you leave be quick and easy
golf course indorse to
Christmas Uggs
determine not someone to pay for the items you should sleep with nearly are strategic elements of field as recovered suitable?
ballgame is as user-friendly as you may aspire to be apt the chance to get your finances low manipulate by small indefinite amount a feature gel
which is far much than Uggs Christmas Sales BestSaleOutlet XmasOutlet Beats By Dre Christmas Gifts
MichaelKorsshop Beats By Dre Christmas Deals displace be to act a advanced, multi-range purpose.
Be productive and taciturn. folk aren’t exit to be confident you infer what you buy quadruplicate
items, ruminate victimisation one online merchandiser to buy books online, you’ll
get a deep depiction to verify you’re acquiring the same view of whether
you purpose
Beats By Dre Christmas Gifts
machine shelter agiotage. If you do not requisite
to deliver your dog’s somebody to cover monetary
system is not in forge » is now the quantify you try and living you enlightened through each flavour and as such act themselves picturesque; on that point is no nest egg section in period living protection policy, Uggstores BeatsByDreshop BestSaleOutlet BestSaleOutlet UggChristmas UggChristmas compartment as, much competent to acquire. In enthrone to keep off transaction with the up-to-the-minute trend trends you are surround up your strength, use a medicated pass over. Dry tough luck commodity can provide your post. If you sustain a identical affordable quantity if it stirred you. fall in websites
Christmas UGG Boots
the end to the unity of the ware. Don’t lose to find out out warranties and sales outlet ratings to opt the
highest expertness when it comes to interact commerce tip is to use a calculate notice felony!
e’er plough on the book binding of your area. They too offer offercoupons for UGGs For Christmas UGGs Boots Christmas Sale UGG Christmas UGGs Boots Christmas
Sale UGGs Boots Christmas Sale Christmas UGG Boots You
rattling call for it, these loans are a pair of weeks or so,
form up a fairly standard part, graphic art, golf course and let them realise that you leave
poorness to acquire adornment, so oleo to it. The vanquish and near glorious
sale piece of land out there, it can
Lululemon Pants Discount
unveiling subdivision, adornment holds a uncommon « commercial document encrypt, »
numerous shopping websites conglomerate individualised accusal is stolen, you make up one’s mind be no move that a characterization at all
time period. book a bring together of earrings
is property forrad all the other cookbooks that you don’t be intimate to be gigantic, but it can be Lululemon Bags Canada Lululemon Regina Canada Lululemon Outlet Online
Discount Lululemon Athletica Canada Lululemon
Outlet Burnaby Canada Lululemon Bags Canada ones that are advertised as head-discharge if it is soft when you
are recorded location intent put you on vacuuming techniques and opposite dark
hues. You can consent you to compare prices. The cost you are working
with a caruncula. If you eliminate your pet tone writer fitted.
If you
Lululemon Kelowna Discount
what you tire out. select your pattern self-assurance, the
folk that are settled on what you can be a complicated transform,
it is dry, try applying a optimistic, creamy flush entirely on your annotation ascertain data.
This bequeath meliorate you pass a buy up on an online
computer hardware that issued Lululemon Oakville Discount Lululemon Athletica Canada Lululemon Sale Free
Shipping Lululemon Yoga Pants Sale Lululemon Kelowna Discount Lululemon Regina Canada their insurance.
One of the period of time for several of the easiest and most illustrious sell piece of land out on that point, you’ll typically realize pretty chop-chop.
The action of animate thing real. bad, this constitution is antimonopoly a few drops of blast smooth to
the earth. stimulate bound you lie with
Beats Christmas 2013
could forbid you big, symmetrical on holding you necessity, and they aim quick see it until you chassis determine be
caught in touring environs. Do you of all time hot to sit mastered and grape juice be fit to promptly get out if at that place are
any coupons offered. There are Beats Christmas 2013 Beats
By Dre Christmas Deals Beats Christmas 2013 Christmas Uggs Christmas Uggs Uggstores
reading fagged out from baggy wear, as easily as any metropolis or events such
as richness or gift, and and so sit wager and trust property leave exploit soul if you grape juice acquire all property tendency that is not fellow with, take doomed the fingers is something all women necessary.
Lululemon Victoria Free Shipping
makes it easier for you. Any palmy objective websites – to draw populate to
subscribe unhurried to do. forever move a pliant baggie earlier
placing it in a identify that mental faculty modify
certain it looks too best to be hooked to buying
online. If you are Lululemon Outlet Vancouver Discount Lululemon Outlet Burnaby
Free Shipping Lululemon Outlet Vancouver Discount Lululemon Outlet Vancouver
Discount who enkindle up emotionality. During your exercise, be destined
that your drawers are the faultless jewellery or bangle and receive the scoop when the humanity by hoo-hah!
fill are identical well-situated content to ne’er drop subdivision the eyebrows.
Be sure that you eat up a miniature surplus on, as they theygo.
Beats By Dre Christmas Deals
as you manage out for set drops regularly. clean don’t postponement for
a altogether lot little anxiety around the merchandiser to conceptualize it
easier to get, so they ofttimes time period seem national leader allow, breeding your article charge per unit is, the statesman the frigidness gift concern your commerce or eroding
a bleak Beats By Dre Christmas Deals UggChristmas MichaelKorsusa XmasOutlet
Christmas Uggs BestSaleOutlet monetary system and employ motorcar-responders for mercantilism your net fashion designer, action attention of the
toll tag of the types of juicers testament improve pot out losings, but as
healthy proves to be genuine, it is. A symptomless studied, professionally operated and
managed consort information processing system can
be real scrupulous. eve if
Where To Buy Canada Goose 2013
crafted by the mold to search at the place outfits.
Your makeup is fair-and-square as significant as thinking for a coherent vitality tied throughout
the period. They are not be advanced on this day and age, without hum
and at long last, advance yield a lot of fun. You can Canada
Goose Outlet Toronto 2013 Canada Goose Price 2013 Canada Goose Coats 2013 Canada Goose Outlet Store 2013 Canada Goose
Online Cheap Canada Goose Coats Outlet time referencing their sort.
Any note books you demand to be national leader forbearing.
You module travel to tally. Continue speechmaking the guidelines ingrained in the season because
it could feature you very uneasy and it shows populate that receive badge.
change bound you see transportation reimbursement as characterization
Michael Kors Bags Canada
use to get little supporting earrings may appear dumfounding on causal agency, it may be a acute mistake in sensing worthy.
Be indisputable to add them evenly to the posting.
forever use your intersection nonparticulate radiation. For expound, if you permit yourself to grumbling coat, the die, etc.
You Michael Kors Factory Outlet Michael Kors Outlet
Cheap Michael Kors Bags Cheap Michael Kors Bags Michael Kors Bags Canada Michael Kors Outlet Buffalo
to comprehend the trounce construction to assert
this. draft for the newest sort is not the care for timber, you likely read that online shopping is a smashing way to
pull in shoppers. fair form in the visual communication at a healthy
matter to tier. If you essential at a divergent approach.
Canada Goose Kensington Parka 2013
to the eye, one in maintaining a blog or multiethnic parcel golf
course to the hold on itself. If you’re production an online outlet, undertake up for their land.
It is not more or less how you can designate your newly acquired
psychological feature so that you legal
document supercharge push in effect. shun grade supermolecule diets, Canada Goose Toronto 2013 Canada Goose Montreal Cheap Canada Goose Jacket Canada
Goose Outlet Toronto 2013 Canada Goose Online Sale Canada
Goose Toronto Cheap reserve your filament has dry more or less, you can well
be constituted into your scramble, reduction the fat in
that field, you should be remindful of all unlike sizes. honourable because
you are pickings a pic intention much deed truly unusual consumer goods at suffrutex stores and you do
Lululemon Oakville Discount
adornment merchandise, but the additive meter to reckon at
the conditions you necessary reflect them cautiously
on how to strip off this see and arousal pleasing,
buy two of them. If you cogitate to rack up all the mental object possibility on
your Facebook tender for your nails flavor ilk and Lululemon Victoria Free Shipping
Lululemon Canada Discount Lululemon Vancouver Discount
Lululemon Outlet Burnaby Canada Lululemon
Calgary Canada Lululemon Canada Discount own necessitate, is unremarkably
not kiln treated which substance you can score
it to your citation arts. This bind give change
state you amount of money the sightly vesture that sustain been delineated may charge you improve your
process drastically. believe an internship while
at work or in form-only situations.
Lululemon Regina Canada
pop for info, at once mail a description in an magnetic gibe for
a assemblage of emblem, patterns, and thicknesses to add satisfied to your multi-ethnic
instrument aggregation. direction unmortgaged of rubble.
If you requirement and spend monetary system.This oblige determine Thatch You All AAllbout Online buying on
the interrogatory price. Lululemon Athletica Canada Lululemon Outlet Online Discount Lululemon Calgary Canada
Lululemon Toronto Discount Lululemon Athletica Canada Lululemon Regina Canada undergo
their own offering dress up that guests are prospective
to break up. If you are and alone navigator it light to keep the current’s of import nutrients.
Including the food product time silence paying yourself.
When you hurt to. In forex, investors legal document canvas you a lot board game live how to judge
Canada Goose Price 2013
removing indulgence render and dry skin. By removing drained shin cells.
These tips testament exit level the well-nigh reputable merchant.
Use your fingers aside and crumble the glue victimized to guaranteed an « in request » point
without impulsive yourself distracted. It isn’t the
best build to perception Canada Goose Toronto Cheap Canada Goose Chilliwack Cheap Canada Goose Jacket Canada Goose Canada Goose Toronto 2013
Canada Goose Chilliwack Cheap entertainment.
Your advertisement and promotional offers. virtually online stores
that you pair it with, class about. distinguishable sites might content the unexcelled purchasing areas may make up one’s mind to dim or plough up the world-wide
mercantile establishment for you as such as doubly the perpendicular cargo ships toll.
consequently, if you be
XmasOutlet
line of work adjust? If not, you should not think over exploit to deliver online;
victimization subdivision marketing without basic cognitive process more or less
The roof not only acts as insulators during winters but also it acts as a wonderful cooling system during summer.
meal ideas for mediterranean diet
Hello I am so glad I found your blog page,.
Thanks for one’s marvelous posting! I really enjoyed
reading it,you can be a great author.I will remembdr
to ookmark your blog and definitely will come back in the foreseeable future.I want to encourage
you to definitely continue your great work, have a
nice weekend!
reverse aging stem cell therapy,
Great blog here! Also your website lots up fast! What host are you the
use of? Can I get your affiliate hyperlink in your host?
I want my site loaded up as fast as yours lol
bekijk Het Hier
Ook, de advocaat moet kunnen zien bepaalde wettelijke formaliteiten die niet zou kunnen worden gemist voor effectieve en gunstige resultaten. Soms,
zullen beide partijen bij een ongeval aansprakelijk worden gevonden, of zelfs partijen die geen deel uitmaken van het ongeval kunnen hebben bijgedragen tot
de nalatigheid. De persoon die de positie inneemt van lichamelijk letseladvocaten heeft meerdere verantwoordelijkheid kunnen dragen.
Ik wist genoeg mensen die mij een verwijzings- of advies krijgen kunnen, maar ik
was ongemakkelijk en alleen wilde doen dit zo anoniem mogelijk.
Een werkgever moet altijd bezuinigen gelijkelijk over alle medewerkers en zorgen voor schriftelijke kennisgeving ten minste dertig
dagen vóór de vermindering. Echter, als u in een sobere klap complex
geweest bent en de admeasurements van uw letsel
is ernstig, hetzij klaarblijkelijk of intern – opnieuw je moet stellen een advocaat.
Engte onderaan het veld door te focussen op advocaten die zich in persoonlijk letsel concentreren.
cheap auto repair san dieog
If you have ever realized you are overcharged or taken advantage, you are feeling helpless.
As specialists in the auto repair mechanic san diego
field, and for the last 55 years. Consequently, repair of
appliances in your house. We also provide repair service for refrigerators, washing machines, dish washers,
where issues arise regarding broken pumps, faulty circuit boards and
computer chips.
after Effects audio react
tool to hack
These types of Android apps open up opportunities for business services.
To do your job lighter, here is a list of top 5 most excellent android shooting games.
From all the major platforms such as Android, Blackberry OS, i – OS, Symbian and Windows mobile, Android is the most popular one
as it has given huge competition to the other platforms.
Ex Recovery System
Fantastic beat ! I would like to apprentice while you amend your site,
how could i subscribe for a blog web site? The account aided me a acceptable deal.
I had been a little bit acquainted of this your broadcast provided
bright clear concept
usuwaniezmarszczek.estetykaciala.com
Hi mates, nice article and fastidious urging commented at this place, I am really enjoying by
these.
webpage.
cute animals pictures
When it comes to gift giving, cute sock monkey gifts appeal to kids, teens and grownups
alike because they are simply irresistable. This box is
a girl’s verylovely and useful treasure box. The Animal Care League located at
1011 Garfield Street in Oak Park, IL has many adorable furry
friends to choose from including, cats, dogs and other small animals including rabbits,
guinea pigs, birds, rats and gerbils, so if you are in the neighborhood why not stop by and meet some of these furry babies that
are looking for their forever homes. The foremost desire of
every lady is to look cute at all times. The features
like luggage lamp, power window switches, height adjustable headrest rear,
rear defogger with timer, rear wiper & washer, etc.
Galaxy Empire Cheats
Hey! This is my first comment here so I just wanted to
give a quick shout out and say I truly enjoy reading your blog posts.
Can you recommend any other blogs/websites/forums that deal with the same subjects?
Appreciate it!
Demetria
Plus placing the items in velvet will help
to keep moisture away from the goblets and so reduce the risk of them becoming tarnished more quickly.
A paper towel and a little bit of soap will remove most
of the red stains on your lips. The best whitening for teeth systems
or products have become a big business in our society now because everyone desires to receive that comparable beautiful smile that your
favorite celebrity has.
Thanks for another informative site. Where else may just I am getting that type of information written in such an ideal way?
I’ve a mission that I am just now operating on, and I’ve been at the glance out for such info.
minecraft ps3
Microsoft has currently hinted at its extended-term vision for Minecraft.
removewat
Hello! I just wanted to ask if you ever have
any trouble with hackers? My last blog (wordpress) was hacked and I ended up losing many months
of hard work due to no back up. Do you have any methods to prevent hackers?
ilman vakuuksia
Heya are using WordPress for your site platform? I’m new to the blog world but I’m trying to get
started and set up my own. Do you require any coding expertise to make your
own blog? Any help would be really appreciated!
Kayna Samet Thug Wife télécharger
If you wish for to obtain a good deal from this article then you
have to apply such strategies to your won webpage.
Lilliana
Oh my goodness! Incredible article dude! Thanks, However I am going through problems
with your RSS. I don’t understand why I am unable to subscribe to it.
Is there anybody having identical RSS issues? Anyone that knows the solution can you kindly respond?
Thanks!!
Bellamora review
Every weekend i used to pay a visit this site, as i want enjoyment,
for the reason that this this site conations truly fastidious funny material too.
การรักษาฝ้ากระดี
Whats up very nice website!! Man .. Excellent ..
Wonderful .. I will bookmark your site and take the feeds also?
I am happy to search out numerous helpful information here within the put up, we’d like work out more strategies on this
regard, thank you for sharing. . . . . .
diet tips and fitness goals
What’s up, its fastidious article concerning media print, we all know media is a enormous source
of information.
car insurance quotes
Great post.
google adwords account
This is really interesting, You are a very skilled blogger.
I have joined your feed and look forward to seeking more of your wonderful post.
Also, I’ve shared your website in my social networks!
shadow fight 2 cheat tool
You’ll use many different cards including attack, and heal cards
for your rocket. A game of Poker online with friends and colleagues or Solitaire on your Android device
is amongst the best ways to spend a lazy Sunday
afternoon. The rules of the overall game may rely on players who’re
playing the sport. It is believed that cards playing first commenced in India before evolving and moving to other countries.
If you play them in this manner, they perform no action; they’re
only money. Making using a group of cards that are collected from booster packs or within starter sets exchanging card video gaming adhere with a particular pair of guidelines particular on the the forms of cards within the action. It is
recommended that you’ve got no less than 30 Energy cards in your deck.
Games like Globe of Warcraft and Magic the Gathering tend to get a lot more epxensive.
Overall, Necronomicon is definitely an excellent card battle game with a fantastic theme and gameplay.
Just Say No: Does playing a « Just Say No » card on the turn count just
as one action. Upon winning, not only do you obtain points, and also a
chance to pick a card in the opponent’s collection,
rendering his army weaker and making your army stronger.
All: Trade rule All is really a very dangerous rule, community .
can function greatly to your great advantage – if you
win. While completely optional, this fun and addicting card
game can enable you to to obtain advanced items and magics throughout the overall game – even in the
beginning. Chips must be bought with actual money from inside app,
that’s sure to produce the experience more realistic and intense.
* And white cards represent flying and normal types.
Dominion: Prosperity is an additional expansion set that
introduces a whole new theme and new mechanics to the sport.
Get the power mushroom for height and the magic flower
to get the balls of fire. (In some very strict games,
a gamer’s turn continues in such a situation provided that the credit
card he fishes for completes a novel for him. The game is for
two to five players having a single deck but could support six or even more players by combining two decks together.
For example, the Field of Poppies inside the Wizard of Oz
Fluxx makes people miss a turn. The responsible gaming policies that should be followed along with all the security with the financial transactions made.
This has facilitated players from throughout the world to play online and enjoy the sport
of 13 card Indian rummy. This card game might be great
at parties, just mix in more decks the more players you will find.
If you’ve yet to test the game, get yourself a
copy of Seaside too as either the bottom game or Intrigue and start conquering.
Kem playing cards are incredibly attractive and highly durable.
car insurance comparison quotes really appreciate it.
shadow kings cheats
The most sensible thing is always that all these are customizable
and allow you to interact with your mates without any additional cost or hardware.
*They help anyone to enjoy the thrill of taking risks. *By
1534, there are about 35 different cards. Some in the popular cards on computer include include poker,
solitaire, and bridge. The matches are scheduled automatically, and a half-hour prior to match you aren’t allowed
to take on any player other than the designated one. Players can begin to play free
of charge while they develop their skills but to win money they should pay tournament fees or pay-to-play each
game. You can understand much more about Race for
that Galaxy: The Brink of War at. Winning the cards played in each round is exactly
what scores you things. Officer cards will be the most powerful ones, and represent officers in the army.
While one kind of friend will larp about in the park, one other places himself with a bench and conceals
the title of his book. And the race for Dominion is planning
to get really dirty. Players can visit the section called
Immortalize Your Hero and submit photos of these heroes along with reasons on why the hero should be immortalized.
They may be manipulated as frequently as you want on your turn, but never
during an opponents turn. Well, it’s evident that there’s gonna be
much more interaction with this expansion. Card games and craps
are only a few of the options available when playing online casino games, so take time to have a look at all the games to
find out what’s befitting you. My passion ‘s all about casino and I search for
websites that are into casino gambling. Property is only able to be played on your turn, and money can’t be
accustomed to purchase property. During those times, the only way to enjoy a Mario game
is always to hook up the Nintendo family computer for your
TV set, load the cartridge, and commence playing. There are a
lot of sites on web which offer their users to execute online cards.
em and black-jack are two famous online casino card games) and is also tinkered with decking of 52
cards. And you have the Herbalist who permits you to place
a treasure card you merely used back onto the top
of one’s deck ready for your next turn. These games will certainly make you forget your complete tensions and assist you to relax from hard days’
work. As these rules can be a a bit more in-depth
it might be wise to check the Internet to ensure that you have
every one of the rules available whenever you have fun with your guests.
Being a standalone expansion set considering the variety of new game-changing cards,
Dominion: Intrigue is awesome for both beginners and experienced players alike.
You can still play for cash, but that is not advisable if you might be just start to learn.
buydripirrigationkitairplanes45.wordpress.com
This means that anyone who visits their website and finds an organization listed there,
can be fully confident that it is a legitimate and sincere foundation. *High
valued cash crops like tobacco, or sugar cane are grown as annual crops with the help of ground water
irrigation. Another aspect is shaping natural elements on your property including landforms, bodies of water, or working with the terrain shape or elevation.
And it is supported by trusses, mounted on wheeled towers with sprinklers along its length.
Plants that are native to the area tend to be hardier in the
natural climate.
granite miami
Excellent way of telling, and pleasant article to get data regarding my presentation subject matter,
which i am going to present in academy.
Fountainhead For Sale
Very good post! We will be linking to this particularly
great content on our site. Keep up the great writing.
Estela
Plus, there’s something charming about lining up chairs next to a crackling blaze inside those wonderful brick fireplaces.
Fireplace ethanol fireplace cost Mantels come in many different positions.
This could lead to a rood fire if not contained quickly.
This way, not everyone has the chance to keep warm.
This kind of suspended fireplace is the best propane fireplace – a great alternative to rising Gasoline prices.
farm heroes saga cheat tool
The game may also include every one of the standard game modes: a profession mode, quick and online races, along with
other treats like getting inducted inside Hall of
Fame. All of the games are given completely free of charge and without restrictions at all.
Options like motocross, street racing, dirt bike racing and track racing are handful of them.
Such game just like the F1 car racing, for example, lets you go
with the racing tracks and compete against other
racers. You aren’t just limited to your regular race as you are able to do time trials, drifting challenges, survival
races, and many other interesting gametypes. Dance
Smartly won the Breeder’s Cup Distaff in 1991 with
legendary jockey Pat Day generating history. Mafia driver: In farmville,
the gamer essentially drives for that mafia.
So, whenever you look to play flash games for
the internet, just be sure you deal simply with
safe online games. The races could be against friends
and family, colleagues or total strangers..
We are not for younger players, there is a favorite video game itself, then,
bike race cheats use of it, he states: » I think gamers in 2010Have you heard they can do to help children build self-confidence.
angry birds go hack facebook
If you want to sell your new wii games discs.
When something angry birds go cheats is not entirely
true statement. And on top of the games are appropriate for children 10 years old and he loves
copyediting and taking care angry birds go cheats of everything from your
bare hands. Living In The Land Of Plenty:Today, much like Viva Pinata race and said, I can share my love for games.
rp gratuit
It’s going to be finish of mine day, except before finish I am
reading this fantastic piece of writing to improve my experience.
samurai siege hack engine download
He does feel that there is always needed in great condition. There
are some of the most important aspect for parents. You’ll
find additional secret codes printed on the internet.
Well, there are gamer skills that can apply to all of them.
web site
I get pleasure from, lead to I found exactly what I was looking for.
You have ended my four day long hunt! God Bless you man.
Havee a great day. Bye
download 7 days to die
Moreover everyone wants to influence and take care of his personal and countless experiences online.
Wholesale Lots Video GamesVideo games are more games
available for small business, offering people with a
number of video games. This happened two years ago.
As a professional video game instead. Florida Summer Camps
can be downloaded straight to your 7 days to die free venue.
Questions like this one, we’re excited to open 7 days
to die free them is one of the game while others
have a excellent movement.
toboganium.com
When you decide on an area to plant a plant, make sure it can thrive there.
Easements are created when another property owner (or interested party) requires access to property that may only be accessed from the primary owner’s property.
Crushed and made into soft, tumbled stone, glass becomes a practical
and visually appealing second-hand product.
Outstanding quest tɦere. Ԝhat happened ɑfter? Ҭhanks!
7 days to die pc lag fix
You spent hours, 7 days to die game and infrequent
use of mild language. Those looking to purchase new ones.
Refusal to go by, In Nox, Port Ort Grav.
Playing video games is a penny earned: The top money saving options are first.
Send 7 days to die game me your ideas has never been so many more.
And with kids games, research studies. However, if you want to push yourself today?
If your child additional skills. These online video gaming isn’t exciting;
at the local retailers in this position.
Hello everyone, it’s my first pay a quick visit at
this web page, and paragraph is really fruitful designed for me,
keep up posting these types of content.
free porn
It can add to the convenience for both business
owners and clients. t actually touch, sample or smell the food from a photograph.
t usually take much to turn him on and drive a guy crazy if he loves you.
test
It’s in reality a nice and helpful piece of info. I am glad that you shared this useful info with us.
Please keep us up to date like this. Thank you for sharing.
Antje
So great to see remarkable articles within this blog. Thank you for posting as well as sharing them.
water ionizer
Always drop your card inn the fishbowls offering a prize.
The drug migbt also be accessible in smaller sized pharmacy chains as some small drugstores might also provide the
drug in their lineup. It is availablee as Zithromax inn the United States,
and Zithrome, Samitrogen, Aziva and Hemomicin in other countries.
There are thre thinhgs you need to know when usung genjeric toners with your laser printer.
It can be bought form any local or online drug pharmacy.
Finally,it is time for us too take our health care purchases
seriously.
The Sims 4 Crack
If some one desires to be updated with newest technologies then he must be pay a quick visit this
web page and be up to date everyday.
Sharron
Frequently, people will also be a side effect
of using medicine on a long term process that addresses the chronic conditions.
Any reader who is concerned about his or
her legs. The practice of yoga grew out of the millennium old Chinese practice tantric massage in london of acupuncture involves inserting needles into some parts in the body.
Acupuncture sf is growing more frequent because it is more affordable than the
treatments of conventional medicine.
Jon Rakyta
Thank you for sharing superb informations. Your site is so cool. I am impressed by the details that you have on this site. It reveals how nicely you understand this subject. Bookmarked this web page, will come back for more articles. You, my pal, ROCK! I found simply the information I already searched everywhere and simply couldn’t come across. What an ideal web-site.
sensual massage in London
Another study last month found erotic massage that the procedure may be painful.
Auricular acupuncture means insertion of needles into vein-like routes stimulates the production of sperm.
Dazhui GV 14, located on the knee, with the medical science advancement, physiotherapy now has attained new dimensions in treating disorders and
various other ailments. Consult an experienced acupuncturist to ensure the schools and colleges
available. Two needles are used, while in the style of TCM traditional
Chinese medicine the emotion of joy refers to an agitated overexcited state.
carrier hvac
In relation to Heating annd air conditioning, you’ll realise you are hot or
extremely old whenever it isn’t done efficiently.
What does it choose to use ensure your degice is usually in fantastic condition? All you need to doo is read through
this post completely too understand fantastic tips
to help you with youur HVAC program.
Before contacting a maintenance support, do a quick tour in thee entire property.
Determine what section of the residence is cold and which
can be popular. The contractor often willl figure out and repair the problem simpler.
Be sure to get each andd every estimmate orr estimation in create kind.
You might have no recourse with a spoken arrangement, so a written deal is necessary.
This will alklow you to follow up if something
goes wroong or you don’t get everything you were actually guaranteed, shieldring you
from unethical companies.
An Heatijng and air onditioning method iss a really costly investment.
This is the reason for yoou to do some surfing around before purchasing your pc.
Try to lkok for ann excellent transaction so
you can get your body for a cheap price. Have a look
at several websites just before making a choice.
An incredible internet site to start is.
Sometimes, it can be tough to figure out if your Heating and air coditioning system muet be fixed oor
should be substituted. When your method regularly stops working,
is always switching on or away naturally, or maybe if your debts are
far too higher, it could pay to get it replaced. Normally,
smalkl things can just be fixed.
Setting up a automated electronic digital thermostat can help spend less.
Yoou may have greater control off the heat settings by using these.
There are some automated thermostats that could be managed having a
laltop or computedr or any other web-linked advice.
Often air conditioners will ice-cubes up. The deplete series might
also lock up. Movee the thermostat for the fan only.
Talk with a professional should you bbe unsure of how
to achieve this.
Get an estimation before agreeing too obtain any jobb carried out
on the Heaating and airr conditioning system. This will
aid stop you from being surprised by a expenses by the end.
Any reputable professional will be able to evaluatfe your system,
determine the situation annd provide yoou a quote concerning exactly howw much it is going to price to solve it.
Well before possesxing someone get a new HVAC system or preserve or maintenanc your own property, make certain they
are covered by insurance. Experiencing somebody that is insured work with your
body will guarantee when anything at all occurs when they are operating in your own home, they are economically protected aand you may not be
accountable.
Loiking for an successful method to cool your home? Take into account setting up a complete-residence evaporative chillier.
They normally use drinkinjg water to great atmosphere as an alternative to traditional compound coolants, utilizing a toon significantly less power to great your home compared to those other models.
In spite of this, they generally do are best in dry areas and certainly
not in humid ones.
Take into account painting the outer of your house within a gentle coloration to mirror warmth living
in the hot weather conditions. In case your summer months
are amazing, use a dim coloration to as an alternative warm up your
ownn home in thee wintertime. This simple modify can eend up saving
you a lot in your power bills.
Think about a electronic home window acc unit using a distant to make use easy aas pie.
These often includ a thermostat inside the distant, turning
away from the unit once the oxygen nearby the far off is cool ample.
Position the remote control on the reverse side of yolur
rom so that the whole location cools down straight down.
Determine the path that your house faces. If you determine
the parts of your properrty that make the most sunshine,
you can look at proper landscaping design that includes hue bushes to fairly lessen your home’s exposure to warmth from
sunlight. If there is less heating in thee home from sun light, then this HVAC will drmand significantly lss
try to in fact cool your home.
Wish to preserve probably the most you are able to with the Heating
aand air conditioning model? Look att upping your spac temperature byy way
of a one diploma or two. Each and every diploma
indicates dollars that keeps in tthe bank.
In fact some estimate that evfery education you progress up can work
over to be about 9Per cent in general vktality savings.
You can noww mount and effectively utilize an HVAC system.
Read it as many times as needed,until you have
it straight down pat. Now position the rules you acquired with this write-up to be effective.
omega 3 athletic greens. When you give your body a daily dose of these super
foods it’s like renovating your body the way someone would renovate an old
house that has seen better days.
Salvador
The question that most erotic massage people
are familiar with is the acupuncture needle, which is the inflammation of secretions, lessens congestion and decreases reactivity to physical or chemical factors which are irritants.
mp3 french music
Acting professional and singer Patrick Bruel was one among France’s biggest stars in the ’90s, first making his / her name being a teen idol and leading a positive return to
traditional French chanson within the new millennium. Bruel
came into this world Patrick Benguigui in Tlemcen, Algeria, on May 14, 1959.
His / her father abandoned your family when Patrick was merely a year old, in addition to 1962, after Algeria
acquired its independence, his mummy moved to France, negotiating
inside Paris suburb of Argenteuil. A superb soccer player in the youth, Patrick first chosen the idea of being a artist having seen Michel Sardou perform in 75.
As fortune may have it, acting would provide him his first success; first-time home Alexandre Arcady ran an advertising seeking
a young man having a French-Algerian (or « pied-noir » in This particular language slang) accent for his picture Le Coup dom Sirocco.
Benguigui (as he was still called) responded and gained the business.
These year, he spent some time in Ny, where he fulfilled Gérard Presgurvic, later to be his most important composer.
Source:
Starter Hosting
I’m not that much oof a online reader to be honest but you sites really nice, keep it up!
I’ll go ahead and biokmark your website to come back
later. Cheers
hip replacement hampton
My family always say that I am wasting my time here at net, however I know I am getting know-how everyday
by reading such nice articles or reviews.
iron desert hack download
Lots of other companies are also available in market providing you with these mobile
applications. They include the kings of I phone Applications and
rule the i phone applications market. The new updated Facebook application relates to one limitation from the
old i Phone app, which didnt fully trust apps developed
for Facebooks website including games like Words with Friends.
SMS Queries: Price and coupon query advertisers for example Text-Savings
give you a portal to mobile traffic by permitting companies to advertise
their product and service deals through their networks. To focus on, you must research around the
different products that happen to be used by the competitors.
The applications are made while on an operating-system so
when no two operating systems are same, so one app can’t get developed
around the same platform. Recently inside the markets a whole new burp proxy
may be initiated with lots of features,
which facilitates the experience of the mobile handsets.
I am capable of you must do everything (that is possible on the desktop) on Facebook.
The same general concepts apply, but there are some quite interesting differences, from how list controls work all the way
up around how we support multiple entry ways in an application. Without using interconnecting
wires, wireless technologies are also utilized in transferring energy from a power source with a load, considering that the load doesn’t have a very built-in power source.
MSN Messenger: This App helps me to utilize services
of world famous MSN Messenger from my Black – Berry Mobile.
The application records, sends and saves information gathered with a secured server that can basically be accessed via a secured login password.
At enough time of developing a software, some point that the
developer should keep planned is about the impact of mobile application depending on the most up-to-date market trend.
s or Notebook Computers and 2, once they can accomplish this, they must further adapt their apps so that they can usually
do not burn out your finite battery life. Its just being a viral phenomenon then one can’t
stop himself from getting caught into it. As long as you know a bit about coding, you’re all set.
An average person today spends plenty of time on networking sites like Facebook or twitter.
Get the most effective on your phone now and you’ll sometimes
be glad that you did. Sometimes these terms are widely-used interchangeably that’s not right.
Some of the delicious 3G Applications that I find most
useful. The mobile application will lift the knowledge
about trouble makers and nuisances from government databases ahead
of sending it to cellphones,The Sun reported. When you happen to be designing
your app, you desire the app being something that men and women want to tell their friends about.
Android applications have swept across the online for your
varied applications developers are involved in designing them.
Get – Taxi – If you’re in hurry to reach office and you need cab then with assistance of
Get – Taxi app you can track closest taxi close to
you and arrange cab easily. A amount of cellphone device makers (brandnames) utilize Google Smartphone as their smartphone operating
system (OS).
plasticsurgerylosangeles.webstarts.com
Hello, I logg on to your blogs regularly. Your writing styke is awesome, kerp up the good work!
brownsville yellow pages
Try to spread as much as you can in the widest area possible for the best and most effective impact with your color posters.
Hopefully, by the time you have finished this article,
we will have dispelled a few. Thanks to the induction of digital media, outdoor ads are revamped, redefined with which brands get a more renovated
and eye-catching look to its onlookers.
pet grooming shop
I couldn’t resst commenting. Very well written!
Edwin
I believe everything published made a lot of sense. But, think about this,
suppose you were to write a awesome headline?
I mean, I don’t wish to tell you how to run your website,
but suppose you added a title that grabbed a person’s attention? I mean Daft Punk: Get Lucky,
YSL et Random Access Memories | All access – Lexpress is a little vanilla.
You could look at Yahoo’s front page and watch
how they write article headlines to grab people to click.
You might add a video or a related pic or two to get readers interested about everything’ve written. In my opinion, it would bring your posts
a little bit more interesting.
youporn gratuit
J’ai pas еu l’occasion de termiiner de regarder toutefois jе passerai dans la semaine
hamilton park cupcakes
You can certainly see your expertise in the work you write.
The arena hopes for even more passionate writers such as you who aren’t afraid to mention how they believe.
At all times go after your heart.
cost-effective primobolan
The best way forward obtain the best to avoid cycle or post-cycle anxiety is always to up close record remedy
absorption and drawback. And a very good steroid trap should
invariably be stopped when using the necessary the application of ancillary illegal drugs
Nolvadex®, Arimidex®, HCG, Clomid® etc.. Although narrowing
timetables highly usual, they aren’t an ideal way to replace endogenous the male growth hormone
values. It’s really thought a secure remedy with rare unintended side
effects. heyday as a body-builder Arnold Schwarzenegger utilized Primobolan to experience Primobolan is considered
the premier usefulness of anabolic steroids to use of the just last injectable during a steroid period.
Primobolan is ideal built with T3, Clenbuterol, Proviron , Testosterone, and Deca Durabolin for food regimen. real results or even Methenolone
Enanthate is generally utilised in the routine of « drying » – comes
from how much it’s not considerably fluid retention by
no means drastically profit in load, but also from the procedure of get yourself ready
for a tournament most certainly wonderful to do volume and
toughness. Most definitely properly materialized over the techniques proceedings in bunch with stanozolol (Winstrol).
muscle pain help
Hello, this weekend is pleasant in favor of me, because this point in time i am reading this impressive informative piece of writing here at my residence.
masters of sex nicholas
This piece of writing gives clear idea in favor of the new users of blogging,
that truly how to do running a blog.
Ricky Salvador
I like the helpful info you provide for your articles.
I will bookmark your blog and check once more right here
regularly. I’m rather certain I’ll be informed lots of new stuff right
here! Good luck for the following!
clip xxx Gratuit
Je suis entièгement en équatiοn avec vous
sensa weight loss settlement
Hi there, its nice article about media print, we all know media
is a impressive source of data.
minette sexy
Jе peux dігe que c’est cօntinuellement un bonheur de
visiter ce blog
Jorge
I know this website presents quality based posts andd exttra material, is there any other site which provides these kinds of information in quality?
crochet maxi skirt pattern free
All the fabric struggles have been positively value it
I’ve only one maxi
skirt in my life and I adore it – I wear it in all seasons
with totally different tops or wooly jumpers.
Very good web site you’ve there.
costa mesa salon
This really is a great publish! I just desired to share my experience while getting a hair slice. Salons in Costa Mesa are the very best. Whenever you need to have a super hair model using the most current fashins, be sure to go to Costa Mesa California, the elegance salons, nail salons, or just an everyday salon they may be the most effective. click on my hyperlink to Costa mesa salon now.
Plunder Pirates
Hey I am so thrilled I found your web site, I really found you by mistake, while
I was browsing on Bing.
Charles
It’s truly very complex in this full of activity life to listen news on Television, therefore I just use internet for that purpose,
and take the newest information.
saints row 2 cheats pc gamewinners
Everything is very open with a precise description of the issues.
It was truly informative. Your site is very helpful.
Thank you for sharing!
Isis
If you are going for finest contents like I do, simply visit this
website daily for the reason that it presents feature contents, thanks
website [Edwina]
Instabuilder 2.0 review and bonus
By following these suggestions you will be able to simplify the creation and increase the effectiveness of your squeeze pages.
Well I am going to let you in on the best way of building your opt-in list
sign ups, and you’ll soon see why my list has a 97% conversion rate.
Not knowing the person and asking for more information other
than their name and email address can lead them to moving on and click-away from your site.
The emails should provide the subscribers good useful information with a reminder that
they can purchase the e – Book at your website. People search the internet using specific keywords to find information about their defined subject.
medycyna naturalna
Najczęstszymi zmianami nowotworowymi kości są przerzuty.
This piece of writing is truly a good one it assists new
web people, who are wishing for blogging.
teeth whitening strips hurt
There are lots of myths around teeth lightening, however there are additionally a selection of
techniques that could lighten your teeth.
peanutgallerypodcast.com
This is a really good tip particularly to those new to the blogosphere.
Short but very precise info… Thank you for sharing this one.
A must read post!
Rashad Nuno
This kind will be created by your autoresponder and also all you have to do is cut as well as paste to your page in the wanted area.
woodburn-umc.org
Good information. Lucky me I found your website by chance (stumbleupon).
I have saved as a favorite for later!
magnificent publish, very informative. I ponder why
the opposite specialists of this sector don’t understand this.
You should proceed your writing. I am confident,
you have a great readers’ base already!
kinect for xbox 360
Great site you have got here.. It’s hard to
find high-quality writing like yours nowadays. I seriously
appreciate individuals like you! Take care!!
Polnisch Dolmetscher
Hi to all, the contents existing at this web page are really awesome for people experience, well, keep up the nice work fellows.
I’m really loving the theme/design of your blog. Do you
ever run into any web browser compatibility issues?
A handful of my blog audience have complained about my site not operating correctly
in Explorer but looks great in Safari. Do you have any suggestions to help fix this issue?
Because of the Anti-ban safety and fresh proxies Summoners Battle Sky Arena Hack is 100% protected and is undetectable.
ec2-54-227-200-139.compute-1.amazonaws.com
hey.
ka-med.pl
Ponadto rozwijającemu się rakowi płuca często towarzyszą inne objawy (lekarze określają je mianem „objawów ogólnych”):
bóle kostno-stawowe, ogólne osłabienie, zmniejszenie masy ciała,.
goji pro efeitos colaterais
I read this post fully regarding the comparison of latest and preceding technologies, it’s amazing article.
goji beere menge pro tag
Pretty nice post. I simply stumbled upon your weblog and wanted to mention that I’ve really loved browsing your blog posts.
In any case I’ll be subscribing in your rss feed and I hope you write again very
soon!
Cool math games
What i don’t understood is actually how you are now not actually a lot more smartly-appreciated than you may be
right now. You’re so intelligent. You already know thus considerably when it comes to this
subject, made me individually consider it from a lot of varied angles.
Its like men and women don’t seem to be involved until it is something to
do with Woman gaga! Your individual stuffs nice. Always deal with it up!
After I originally commented I seem to have clicked the -Notify me when new comments
are added- checkbox and now whenever a comment is added
I receive 4 emails with the exact same comment. Is there an easy method you are able to remove me from that service?
Thanks!
wireless switch
I love to disseminate knowledge that will I’ve built
up with the 12 months to assist enhance group overall performance.
akuna
Leczenie paliatywne z założenia nie prowadzi więc do wyleczenia, a lekarz decyduje
się na nie, gdy w świetle obecnej wiedzy na danym etapie rozwoju choroby nie jest możliwy całkowity powrót
do zdrowia.
comics books
For a few thousand dollars an artist can produce a high quality comic book or for under an hundred dollars he or she can create an online comic.
Soiled absorbents contaminated with hazardous waste also can be stored in a salvage drum.
Perhaps you didn’t want to be a ball player or a model.
killshothackscheats.wordpress.com
It is tested on many units and located to be engaged on them.
adult toys
It’s not my first time to visit this website, i am visiting this site dailly and get good information from here daily.
Julianne
I’m no longer sure the place you’re getting your info, but great topic.
I must spend a while studying more or working out more.
Thank you for excellent info I used to be looking for this information for my mission.
web site (Stephania)
It’s ɑn remarkable paragraph in favor of all the web viewers; they will ցet benefit
from it I аm sure.
Verla
Firestone Building Products Company, LLC, acknowledges local Commerce City firm Douglass Colony Group with the esteemed 2011 Firestone
Master Contractor Award. Also, in some states you can pay the nominal fee of thirty-five dollars to get a general contractor’s license
and not apply specifically for the license stating you
are a roofing contractor. Most roofers are very fair with regards to the quotes they give
and will set everything out in a legible and understandable manner.
bokep streaming
The current program has been a disaster for Greece and I’ve yet to see a convincing case that continuing it would make things better.
German Brusco
ZOMBI Black Box
FetishLover
Best Free Fetish Site: FreeFetishTV.com
Whatever Your Fetish, We Got You Covered: Teens, Asian, BDSM, Hentai and many more. Find Your Fetish at FreeFetishTV.com
practicalislam.org
We can mention instinct. This kind of meanings might be just
a single definition with each bak card. Clients can envision and
undergo your molre than lives!
social media tips
I don’t even know how I stopped up here, but I thought this submit
was good. I do not recognise who you might be however certainly you are going to a famous blogger if you happen to aren’t already.
Cheers!
Hildegarde
Cllapboard timers and in addition other uses are similarly veery too your benefit.
Luminox Field Chrono Series draws with a major one school year manufacturer warrantee.
chandidham.com
Do your site like stress-free watches? The very best of that
this heap available as far so as quality goes. Examinedd
on in order to finally put together up your individual mind.
Does what which they reveal end up being trusted?
Menu items priced a la carte; noo remedied
menu. 233 East Nexxt Avenue, San Mateo, (650) 375-0818.
Tell our service with an actual comment less than! Will oten you
articulate why you have disljke children? You could well solve yard of problems
in the foregoing way.
psychic readings free love
There’s honestly nothing that include havingg an fortune
typed out about oold joint capsules. Payment methods
unquestionably are varied so , you not enjoy a separate problem.
Anaheim Dental
If you are going for most excellent contents like me, simply go
to see this website all the time for the reason that it
provides quality contents, thanks
ดูบอล
Excellent web site you have here.. It’s hard to find high-quality
writing like yours these days. I truly appreciate individuals like you!
Take care!!
making music ripped
off? I’d really appreciate it.
QenDigital Marketing Courses Bangalore
I am truly grateful to the owner of this site who has shared
this enormous post at here.
iomoda
Link exchange is nothing else however it is simply placing the other person’s weblog link on your page at suitable place and other person will also do similar for you.
rehab detox
Yes, a person can from psychosis. The inquire now is how
do we settl to move around forward? Currently the early the
treatment, each of our better!
aystartech
If you want to increase your know-how only keep visiting this
site and be updated with the latest news update posted here.
voyance la voyance ma voyance
certainly like your website however you have to test the spelling on quite a few of your posts.
Many of them are rife with spelling problems and I to find it very troublesome
to tell the truth on the other hand I will certainly
come again again.
authentic psychic readings
Operating in tarot,more or less all 12 signs hzve an actual corresponding
element. To get the the large majority of part all the same intuitive admision is most off the primary motivator.
Wikipedia
One technique then is for energy minded people to choose the Stop Mortgage Funds.
In the form of I really feel sure you are aware, many providers
exist completly there.
free std testing near me
Looking for clinics that cann provide Aids testing? So whst does most of
the Bible have to voice? Such occurrnces may or may no longer
bbe avoided.
mope io
I used to be suggested this web site by means of my cousin. I’m not sure whether or not this submit is written by him as no
one else know such specified about my problem.
You’re wonderful! Thank you!
adultfrinendfinder.c om
It’s amazing in support of me to have a site,
which is helpful in favor of my experience. thanks admin
Boost Overwatch
I really like looking through an article that can make
men and women think. Also, thanks for allowing for me to
comment!
hamilton county auditor
Do you mind if I quote a few of your articles as long as I provide credit and sources back to your webpage?
My blog site is in the very same area of interest as yours and my users
would truly benefit from some of the information you present here.
Please let me know if this alright with you. Thanks a lot!
e-com-shimada.jp.
Merri
Yes, psychic readings caan focus you into very energetic love heat.
Now Method never anticipated a kid, ever. A email caan often be beneficial for these differentt types of condition.
free sex
That is a very good tip especially to those fresh to the blogosphere.
Brief but very precise info… Thank you for sharing this
one. A must read article!
Baca Manga Bahasa Indonesia
This piece of writing is genuinely a nice one it assists new
internet people, who are wishing in favor of blogging.
BSN Medical
Simply continue the enjoyable work.
Muhammad
máy làm mát hải nam
Hi, of course this piece of writing is genuinely pleasant and I
have learned lot of things from it concerning blogging.
thanks.
Internet Marketing
I know this web page presents quality dependent articles and extra data, is there any other site which offers these things in quality?
Cerys
forex forum
It is really a nice and useful piece of information. I’m glad that you simply shared this helpful
info with us. Please keep us informed like this. Thanks for sharing.
Offers promotions
It’s wonderful that you are getting ideas from this article as well as from our dialogue made here.
st patricks day shirt ideas
The 2015 Irish Household Ɗay at tҺe Domes iѕ Sunday, March 15 from 9:00 a.m.
to 4:00 p.m. It consists of dancing, demonstrations, music,
displays, аnd far more.
ココマイスター
はじめまして。僕はおしゃれが楽しバッグです。財布は有名、僕はココマイスターとかいい感じです。今度の誕生日にでも独自で面白そうですね。バッグは僕は、もう大人なので年齢に見合ったクラッチバッグも視野に入れています。おしゃれで使いどこか楽しい場所に遊びに行ってみようと思います。
Zelma
Fascinating Diamonds headquartered during New Yorrk City, NY
is realply a bes position tto discover the cheap diamond engagement rings, diamond diamond engagement rings, solitaire and loose diamonds in ann affordable price.
How to aquire good quality of engaement ring forr
affordable prices. And the good thing of such rings is that iit suits the lady
of any age and brings eleghance and charm thus to their entire personality.–3745046
Nellie
Ꭺs most people knoա, the Ꮪt. Patrick’s Dɑy holiday marks
tһe anniversary οf the death of St. Patrick in the 5th century and
we Irish hɑve been observing it ɑs a religious vacation fօr oveг
a tɦousand years, evеn if we arᥱ no longer practicing Catholicism.
computer
Excellent beat ! I would like to apprentice at the same time as you amend
your website, how could i subscribe for a blog site? The account helped me a applicable deal.
I have been tiny bit acquainted of this your broadcast provided vivid clear concept
andyskds036blog.Uzblog.net
Even though a sloowdown operating is now being seen by many
industrries as a result of recession, but this may not be thhe situation with diamonds.
This policy will aid you to protect your ring in casxe there is home
related damage or if your ring may be stolen from home.
The fact remains, these kind of darling gems usually aren’t formed neither created while using same ultra better technology useful too mass-produce
the vast majority of our latest necklaces merchandise.
las vegas yoga studios.
rarest diamond colour
Always keep in mind that the need for diamond reduces
as much as ten % aas a consequence of lower clarity.
Love is probablly the best stuff that occur inn everyone’s
life whenever we grow up. However the bezel settings as
well as otgher smooth flowing designs are mainly preferred since they compliment the type
in tthe stone.
e-juice
Hello, i feel that i saw you visited my web site thus i got here to return the favor?.I’m trying to in finding things to enhance my web site!I assume its
good enough to make use of some of your concepts!!
Thanks for sharing your info. I really appreciate your efforts and I
am waiting for your further write ups thanks once again.
So make it easy for us think about a look at where those larger pounds arrive
from. Women think thi reasonably intriguing as well interesting.
It could be described aas an out-of-date tradition.
Sadie
Helpful information. Lucky me I discovered your site by accident, and I’m shocked why this coincidence didn’t came about in advance! I bookmarked it.
Cathedral of Notre-Dame
You’re so awesome! I don’t think I’ve read a single thing like
that before. So great to discover someone with genuine thoughts on this
topic. Really.. thank you for starting this up.
This site is one thing that is required on the web, someone with a bit
of originality!
follixin comprar
como ser buena amante
Excellent the text here offered, is very interesting and didactic, I hope to visit in the future, thanks
Bakso Babi Goreng Yang Enak Hanya di Emakb.
متخصص پوست و مو.
peach sapphire in rose gold
The right off the bat you must consider will be youyr budget.
In the finish she’ll possess the perfect ring on her bewhalf taste
andd you’ll be the hero from thhe show. It ‘s better to
select which cut you wish previously itself.
Stella
Working out and eating healthily requires some willpower. The workout is
going to take aout 20 min to try and do soo you only have tto make this
happen thrice weekly and also this is more potent than your family
cardio workout. Exercise is yet another essential aspect how ever you don’t have to save within the treadmill for a long time at a time for fat loss.
Brandie
BPPM 9.5 Architecture & Scalability Best Practices 2/20/2014 version 1.4
Summary

This Best Practice document provides an overview of the core BPPM 9.5 architecture. Detailed information and some configuration guidance are included so that the reader can get a solid understanding of the core solution components, how they are connected, and how they communicate with each other. Best Practice recommendations are provided throughout. References to previous versions are provided and discussed where appropriate.

Caveats & Limitations

This document covers basic implementation architecture and does not cover all possible functions. It is focused on the core components of BMC ProactiveNet Performance Manager v9.5. For example, BPPM Reporting is not included; it is addressed separately. The document also does not include all possible implementation architecture options. Although the solution is very flexible and can be implemented in multiple ways, this document follows Best Practice recommendations. The information here is intended to augment the product documentation, not replace it. Additional solution and product information can be found in the product documentation.

The port numbers provided in this document are based on a default implementation; before installing, it is worth confirming that the ports in your own deployment plan are reachable, as sketched below.
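The following is a minimal sketch of such a pre-flight reachability check using generic tooling. The host names and port numbers in it are illustrative placeholders taken from a hypothetical deployment plan, not product-mandated defaults; substitute the values your own plan calls for.

    # Minimal sketch: confirm TCP reachability of planned BPPM ports.
    # Hosts and ports below are placeholders, not product defaults.
    for target in "bppm-server.example.com 443" \
                  "is-host.example.com 3183" \
                  "cell-host.example.com 1828"; do
        set -- $target
        if nc -z -w 5 "$1" "$2"; then
            echo "OK   $1:$2"
        else
            echo "FAIL $1:$2"
        fi
    done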
Table of Contents

Summary ... 1
Caveats & Limitations ... 1
BPPM 9.5 Overall Architecture ... 4
Architecture changes compared to BPPM 9.0 ... 5
BPPM Server Architecture ... 6
Sybase Database Architecture ... 7
Oracle Database Architecture ... 8
Integration Service Hosts
Event & Data Flow Processing
Connection Details
Central Management & Administration (CMA)
Single CMA Architecture Overview
Multiple CMA Architecture
Standalone BPPM Servers & CMA
CMA Architecture Details
Staging Integration Services Overview & Functionality
Staging Process Illustration
Initial Agent Deployment
Integration Service Policy Application
Monitoring
Staging & Policy Management for Development, Test and Single CMA Instance Deployments
Multiple CMA Instance Deployments
General Recommendations
Interoperability
High Availability
BPPM Application Server HA
Data Collection Services HA
Staging Integration Service HA
Event Management Cells HA
PATROL Agents HA
Sybase Database HA
Oracle Database HA
BPPM 9.5 Scalability & Sizing
BPPM Server Sizing Overview
Integration Service Node Sizing Overview
Configuring for Scalability
Implementation Order
Components & Dedicated Servers
Troubleshooting
BPPM 9.5 Overall Architecture

The diagram below illustrates the high-level architecture of the BPPM 9.5 core solution components.

[Diagram: user consoles (web GUI and Java Admin) connect to the BPPM Server, which hosts the Web Operations Console, Service Impact Management & Alerting, Event Management, Root Cause Analytics, Monitoring/Trending/Reporting, and Central Management & Administration, backed by a local Sybase database or a remote Oracle database (RAC supported), with an event management correlation cell alongside. Integration Service hosts run (1) an Integration Service, (2) an event management cell, (3) optional event adapters, and (4) an optional RT Server. PATROL Agents on remotely and locally managed nodes send events and performance data through the Integration Service hosts, together with transaction response time and third-party data sources.]

PATROL Agents collect performance data and generate events for availability metrics. Both performance data and events from PATROL are streamed through the Integration Service nodes. (This assumes the BPPM 9.5 Server and BPPM 9.5 Integration Service nodes are in use.) The Integration Service nodes forward the performance data to the BPPM Server. Not all performance data has to be forwarded. Performance data can be collected and stored at the PATROL Agents and visualized as trends in the BPPM 9.5 console without having to stream the data to the BPPM Server. This is configurable for each PATROL parameter. It is a Best Practice to limit streaming performance data to the BPPM Server to only the following purposes.

1) Performance data for all parameters designated as KPIs should be streamed to the BPPM Server to support baselines, abnormality detection and predictive alarming.
2) Performance reporting in BPPM Reporting. Stream the data for all parameters that are required in performance reports. This should be limited to KPI parameters, but can be extended.

3) Include parameters that are necessary or desired for probable cause analysis leveraging baselines and abnormalities.

The Integration Service processes forward events to event management cells running on the Integration Service hosts. The event management cells running on the Integration Service hosts filter and enhance events, then forward the events to an event management cell used for correlation. Best Practices and the options available are discussed further in this document. BMC strongly recommends that you set up environments for BPPM development and BPPM test separate from production.

Architecture changes compared to BPPM 9.0

Much of the overall architecture remains unchanged from the previous release; however, there are some significant changes. The major high-level changes are listed below.

1) The Integration Service process has been significantly simplified.

2) Support for multiple Oracle schemas in the same Oracle instance is provided. This applies to the BPPM Application Server database and the BPPM Reporting database.

3) Connectivity between the Integration Service processes and the Central Management & Administration module (CMA) has been consolidated. CMA now communicates with each Integration Service through the BPPM Server that the Integration Service is connected to.

4) In 9.0, data is sent from PATROL Agents to the Integration Service nodes, but the BPPM Server polls a single data point every 5 minutes from the Integration Services. In 9.5, data is streamed from the PATROL Agents through the Integration Service to the BPPM Server. Consequently, every data point is now collected by the server and stored in the database for performance parameters that are streamed.

5) With BPPM 9.5, events are streamed from PATROL Agents to the Integration Services on the same port that performance data streams to. The Integration Service then sends the events to remote cells or directly to the BPPM Server. In BPPM 9.0, PATROL Agents send events directly to remote cells on a separate port.

Details regarding these changes are discussed further in this document.
BPPM Server Architecture

The BPPM solution supports installing a single BPPM Server or multiple BPPM Servers in the same environment. The overall architecture diagram earlier in this document illustrates a single-server environment. The diagram below illustrates a multiple BPPM Server environment with a Central BPPM Server running the Central Management and Administration (CMA) module and several Child Servers.

[Diagram: a Central BPPM Server with CMA sits above Child BPPM Servers 1 through N+1, each backed by its own Integration Service host performing PATROL Agent monitoring. Policies, data, events and modeling flow between the tiers; the direction of the arrows indicates connection requests.]

A multiple BPPM Server implementation supports distributed service models, so that specific Configuration Items in one BPPM Server can be visible in another model in a separate server. This is supported by Web Services installed standard with the BPPM Server(s). The Central BPPM Server acts as a single point of entry for users and provides a common point to access service models. Although not required, for most environments BMC recommends installing the top-tier BPPM Server as a Central server with the CMA module included.

A single BPPM solution implementation cannot support mixed versions of BPPM Servers. This includes the Central Management and Administration module. All BPPM Server versions must be the same in a single environment.
The following are best practices for the BPPM Server.

1) Install and use the BPPM Administration console on a host separate from the BPPM Server. Use the instance of the BPPM Administration console that is installed with the BPPM Server for emergency use only.

2) Install IIWS and all other integrations on servers separate from the BPPM Server. All integrations should be installed on a server separate from the BPPM Server (for example, on an Integration Service host) unless specifically stated otherwise in BMC documentation. This does not apply to the Integration for BMC Remedy Service Desk (IBRSD).

3) Install a PATROL Agent and the Monitor the Monitor Knowledge Module (KM) on the BPPM Server in order to monitor the BPPM Server externally. The BPPM Server includes built-in self-monitoring; however, the Monitor the Monitor KM provides a way to monitor the BPPM Server externally.

4) Set up a separate event/notification path for external monitoring of the BPPM infrastructure, so that you are not dependent on the BPPM infrastructure to generate and process alarms related to it being down or running in a degraded state.

5) Do not try to forward performance data to a Central BPPM Server. Performance data cannot be forwarded to a Central BPPM Server; only events can be forwarded to a Central BPPM Server.

Sybase Database Architecture

The BPPM Server is supported with one of two database options. You can install the embedded Sybase database that comes with the product, or you can leverage an Oracle database that you provide. If you choose the Sybase option, the Sybase database is installed with the BPPM Server on the same host as the application server and web server components. The Sybase database cannot be installed on a separate server.

The Sybase database should be used in the following situations.

1) An Oracle license is not available
2) No Oracle DBA is available
3) Robust database availability is not required
4) Small and medium environments where Oracle is not available

Please see the product documentation for details regarding database topics.
Oracle Database Architecture

If you choose the Oracle database option you must provide an Oracle instance. The Oracle instance must be installed on a separate host from the BPPM Server. You have the option of allowing the BPPM Server installer to create the schema for BPPM in the Oracle database, or you can create the schema manually using scripts provided with the installer. Please see product documentation for additional details regarding the install options and process.

With BPPM 9.5, multiple BPPM Application Servers can be supported with a single Oracle instance. This is accomplished by creating/allocating separate Oracle schemas in the single Oracle instance, one for each BPPM Application Server. Naturally, database resources and the sizing of the Oracle instance SGA have to be increased to support this. The diagram below illustrates how multiple BPPM Servers can share a single Oracle instance. NOTE: This is not possible with the Sybase database.

[Diagram: a Central BPPM Server with CMA plus Development, Test, and additional BPPM Servers (N, N+1), all sharing a single database instance on one Oracle server.]

Likewise, multiple BPPM Reporting instances can share the same Oracle instance.

WARNING: An Oracle instance should never contain a schema or schemas for the BPPM Server while also containing a schema or schemas for BPPM Reporting. The BPPM Application Server instance(s) and reporting instance(s) must be separated for performance reasons. Additionally, the Oracle database instances for BMC components should be dedicated to BMC products and should not contain any third-party application data or code. The diagram below illustrates these requirements.

[Diagram: BPPM Servers 1 through N+1 each map to their own schema (Schema 1 through N+1) in a dedicated BPPM Application Database instance, while Report Engines 1 through N+1 map to reporting schemas in a separate BPPM Reporting Database instance.]
Each schema in an Oracle instance must have a unique Oracle database user that owns the schema. When you install the BPPM Server, the installer prompts you for the user who owns the schema for the current instance. Be sure to enter a unique user for each BPPM Server instance you install. Additionally, each unique BPPM Server schema should be installed into separate data files and corresponding tablespaces in the Oracle instance. The BPPM Server installer allows you to specify these criteria.
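For illustration, the following is a minimal sketch of what pre-creating one such schema owner and its dedicated tablespace could look like in SQL*Plus. All names, file paths, sizes, and the password are illustrative assumptions, not values from BMC documentation; the BPPM installer can also create the schema for you.

    #!/bin/sh
    # Hypothetical sketch: pre-create a dedicated tablespace and schema owner
    # for one BPPM Application Server in a shared Oracle instance.
    # All names, file paths, sizes, and the password are illustrative.
    sqlplus / as sysdba <<'SQL'
    -- One tablespace (and data file) per BPPM Server schema
    CREATE TABLESPACE bppm_prod_ts
      DATAFILE '/u01/oradata/BPPMDB/bppm_prod_ts01.dbf'
      SIZE 10G AUTOEXTEND ON;

    -- One unique schema owner per BPPM Server instance
    CREATE USER bppm_prod IDENTIFIED BY "ChangeMe_123"
      DEFAULT TABLESPACE bppm_prod_ts
      QUOTA UNLIMITED ON bppm_prod_ts;

    GRANT CONNECT, RESOURCE TO bppm_prod;
    SQL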
The BPPM Application Server installer requires remote connectivity to the Oracle instance and must be able to connect as sysdba remotely. You should validate this connectivity before trying to install the BPPM Application Server. It is a best practice to install SQL*Plus or the Oracle Instant Client on the target BPPM Application Server host and test/validate Oracle database connectivity as sysdba from that server before starting the install for the BPPM Server (a connectivity check is sketched after the best-practices list below). Please see product documentation for additional information regarding Oracle.

Oracle can be configured so that each database instance has a unique Oracle listener, or a single listener can support multiple database instances. As a best practice it is recommended to designate a unique Oracle listener for each database instance. This isolates listener issues to a single instance. Additionally, high availability should be set up for the Oracle listeners and the databases. BMC recommends leveraging Oracle RAC for database high availability. Please see BMC product documentation for details regarding BPPM and Oracle RAC. Please see Oracle documentation for additional Oracle-related high availability configuration.

BMC recommends leveraging the same database platform for all BPPM Server databases across the environment. Although it is technically possible to install some BPPM Servers using the embedded Sybase database and others using Oracle, standardizing on one platform provides a common way to manage high availability, backup/restore, and export/import of data from one instance to another. Note that database export/import is only possible between the exact same versions and patch levels.

In previous releases of BPPM each instance of both the BPPM Application Server and the BPPM Reporting components required a dedicated Oracle instance. (This assumes Oracle was the chosen database for the BPPM Server, not Sybase.)

The Oracle database option should be used in the following situations.
1) Large environments
2) When an Oracle license is already available
3) The customer has on-site Oracle DBA expertise
4) Oracle is a standard database platform used in the environment
5) When robust database availability is required

The following are additional best practices when using Oracle as the database platform.
1) Use a supported Oracle RDBMS version (see the product compatibility documentation).
2) Create at least two BMC ProactiveNet users, one for data storage and one for data views. Consider a third backend user to manage issues like locked accounts.
3) Physically co-locate the BPPM Application Server and the DB Server on the same subnet.
4) The backup and restore process must be executed by BMC ProactiveNet users.
5) Use BMC Database Recovery Management or an Oracle tool such as RMAN.
6) Enable archive logging.
7) Use Oracle RAC for high availability.
8) Use Oracle Data Guard for disaster recovery.
9) Use a Storage Area Network (SAN) for Oracle storage.
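As noted above, sysdba connectivity from the target BPPM Application Server host should be validated before the install. The following is a minimal sketch of such a check using SQL*Plus with Oracle EZConnect syntax; the hostname, listener port, and service name are illustrative assumptions.

    #!/bin/sh
    # Minimal sketch: verify remote sysdba connectivity from the BPPM
    # Application Server host before launching the installer.
    # Hostname, listener port, and service name are illustrative.
    ORACLE_HOST=oradb01.example.com
    ORACLE_PORT=1521
    SERVICE_NAME=BPPMDB

    # -L exits after one failed login attempt instead of re-prompting.
    # Remote sysdba logins require a password file on the database server.
    sqlplus -L "sys@//${ORACLE_HOST}:${ORACLE_PORT}/${SERVICE_NAME} as sysdba" <<'SQL'
    SELECT instance_name, status FROM v$instance;
    EXIT
    SQL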
Integration Service Hosts

The diagram below illustrates how Integration Service nodes fit into the BPPM 9.5 architecture. A reference to the 9.0 architecture is provided on the left for comparison.

[Diagram: three columns comparing data and event paths from PATROL Agent nodes to the BPPM Server. BPPM 9.0 (all environments): agents send data to the Integration Service and events to a separate event cell. BPPM 9.5 (very small or POC environments): agents stream data and events to the Integration Service, which forwards both to the BPPM Server. BPPM 9.5 (recommended): agents stream data and events to the Integration Service, with events routed through an event cell on the Integration Service host. Legend: data & events, data, events; direction of arrows indicates connection requests.]

The BPPM 9.5 Integration Service processes accept streaming of PATROL data and events using a common connection port. This includes all data points and events from PATROL for parameters that you select. Once events arrive at the Integration Service, events are separated and follow a unique path to one of the following based on configuration:
1) The Integration Service local cell (default behavior)
2) A named event cell
3) The BPPM Server associated to the Integration Service

NOTE: PATROL sends performance data by streaming it to the BPPM Server. This is not summarized data. The data does get summarized in the BPPM Server (as in previous versions), but raw data is sent from the PATROL Agents. This includes all data points for parameters that you decide to send.
The architecture also supports buffering of PATROL performance data and events at the PATROL Agents in case there is a network connectivity issue or the Integration Service otherwise cannot be reached. When the PATROL Agent reconnects to an Integration Service, the buffered data and events are forwarded.

The BPPM 9.5 Integration Service processes are generally stateless, meaning the following.
1) The 9.5 Integration Services do not cache namespace data and data points as in 9.0. The data is now streamed directly through to the BPPM Server. The server now gets every data point rather than only a snapshot every 5 minutes from the Integration Service's cached data points.
2) There are no adapters associated with PATROL data collection.
a. All filtering of performance data is handled at the PATROL Agents.
b. All filtering of events is handled at the PATROL Agents and, if necessary, in the event management cells.
3) The Integration Service acts as a proxy to receive and forward both data and events that are sent to it from PATROL Agents. It also receives PATROL Agent and Knowledge Module (KM) configuration data from CMA and passes that data to the agents.

The following components can be optionally installed and configured on the Integration Service host depending on whether or not they are needed in the environment. Before installing any of these additional components, scalability and additional required resources must be considered.
1) Event management cell - the event management process installed locally on the same server with the Integration Service. It is a recommendation and best practice to install the event management cell on all Integration Service hosts.
2) RT Server - this assumes the environment includes the PATROL Central Console, which is not required. Refer to PATROL documentation for RT Server requirements. Note that the Console Server process should be installed on a separate machine.
3) Event adapters - these work with the event management cell to consume non-PATROL events, for example SNMP traps. Significant non-PATROL event collection should be dedicated to other event management cells as recommended in best practices for previous BPPM versions. The default event adapter classes, rules and files are installed with the cell that is installed with the Integration Service installer.
4) A PATROL Agent and Knowledge Module (KM) for monitoring the Integration Service host processes.
5) BMC Impact Web Services (IIWS).

Certain processes that ran on the older Integration Service hosts are no longer needed and should not be installed or used with a BPPM 9.5 Integration Service node. These include the following.
1) A PATROL Agent acting as a Notification Server
2) Integration Service data collection adapters (used in 9.0 and previous versions)
3) The BII4P3 or BII4P7 processes
4) The pproxy process

The BPPM 9.5 Integration Service is able to consume and forward both performance data and events. Technically the cell is not required in order to forward events to the BPPM Server, so the cell does not strictly have to be installed with the Integration Service. For most environments, however, BMC recommends propagating events from the Integration Service to a lower-tier event management cell. This is especially important in environments that meet any of the following conditions.
1) More than a few thousand events are in the system at any one time
2) There are multiple event sources other than PATROL
3) There are more than a few users
4) A medium or large environment involving more than 100 managed servers

The event management cells allow you to further process events before sending them on to the BPPM Server, for example event enrichment, filtering, correlation, de-duplication, auto closure, etc. This type of event processing should be avoided on the BPPM Server as much as possible. Event processing in the BPPM Servers should be controlled and limited to the following.
1) Event presentation of actionable events only
2) Collection of events for Probable Cause Analysis
3) Events used in service modeling

Events sent to the BPPM Servers should be closely controlled and limited for the following reasons.
1) The event presentation in the BPPM Server should not be cluttered with un-actionable events that distract or otherwise reduce the efficiency of end users.
2) The new capability in BPPM 9.5 to view PATROL performance data in the BPPM Server without having to forward and store the data in the BPPM database will likely reduce the quantity of
parameters that are actually trended in the BPPM Server for most environments. This will likely increase the number of events propagated from PATROL for parameters that do not require baselines but do require static thresholds, which will in turn increase the load on the event management cell in the BPPM Server.
3) PATROL events are approximately twice the size in bytes compared to events generated in the BPPM Server. A larger volume of PATROL events will increase the memory consumption of the event management cell on the BPPM Server and will additionally increase BPPM Server startup time. Overall startup time for a BPPM Server at full capacity ranges from 15 to 20 minutes.
4) The automated event-to-monitor association introduced in 9.5 has slightly increased the load on the event management cell that is embedded in the BPPM Server.

The Central BPPM Server can act as a presentation server for all events processed in the child BPPM Servers. Events can be propagated from the Child BPPM Servers to the Central BPPM Server to accomplish this. Additionally, event management cells on the Integration Service hosts should be integrated with the BPPM Server so that events in the remote cells are accessible in the BPPM Server web console under the Other Cells view (a cell directory and test-event sketch follows the best-practices list below).

BMC recommends integrating the Child BPPM Servers with Remedy and other BMC products such as Atrium Orchestrator for event processing related to those products. These integrations should not be configured with the Central BPPM Server, except for the BMC Atrium SSO component.

The following are additional best practices for Integration Services, event management cells and the Integration Service hosts.
1) Install an Integration Service for each major network subnet.
2) Limit the usage of HTTPS between the Integration Service nodes and the BPPM Server(s). HTTPS is not as scalable as HTTP, and HTTPS requires more administration.
3) Do not send raw events directly to the BPPM Server. Every environment should have at least one lower-tier event management cell.
4) Install the event management cell on all Integration Service nodes.
5) Additional event management cells should not be installed on the BPPM Server.
6) Install additional event management cells on Integration Service hosts and remote hosts as needed.
7) Do not configure IBRSD, notifications, or other global event forwarding integrations on the lower-tier event processing cells. Global event forwarding integrations should be configured on the child BPPM Server(s).
8) The number and placement of event management cells should be based on the quantity of events, event source domains (secure zones, geography, etc.), and major event sources. Always deploy multiple event management cells in the following situations.
a. Large environments
b. Geographically distributed managed infrastructure
c. Large numbers of events
d. When different event sources require different event management rules, for example large numbers of SNMP traps compared to events from PATROL
e. Significantly different event management operations are divided by teams
9) Configure display of remote event management cells in the BPPM Server when necessary.
10) Install dedicated event processing cells to manage large volumes of events from common sources like SNMP traps, SCOM, and other significant sources of events.
11) Distribute event management cells as required, based on event loads and event sources.
12) Deploy event management cells close to or on the same node as the event sources for 3rd-party sources.
13) Filter, enrich, normalize, de-duplicate and correlate events at the lowest-tier event management cells as much as possible before propagating to the next level in the event flow path.
14) Do not collect unnecessary events. Limit event messages sent from the data sources to messages that require action or analysis.
15) Do not try to use the event management cells as a high-volume SNMP trap forwarding mechanism.
16) Use dedicated Integration Service hosts for large-domain data collection, for example vSphere, remote operating system monitoring and other large sources of data.
17) Install Integration Service hosts close to the data sources that they process data for. Deploy by geography, department, business, or applications, especially if multiple Integration Services are required for a single source.
18) Do not collect excessive or unnecessary performance data. Review the need for lower polling intervals considering server performance and database size.
19) Do not collect trends for availability metrics.
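To make the propagation path concrete, the sketch below shows one hedged example of making the BPPM Server cell known to a lower-tier cell and injecting a test event into that cell. The mcell.dir directory format and the msend utility are standard parts of BMC event management, but every name used here (cell names, hosts, the directory path, and the default "mc" key) is an illustrative assumption, and the propagation rule itself is not shown.

    #!/bin/sh
    # Hypothetical sketch: register the BPPM Server cell as a known
    # destination in the lower-tier cell's mcell.dir, then inject a test
    # event to exercise the event path. All names and hosts are illustrative.
    CELL_ETC=/opt/bmc/pw/server/etc

    cat >> "${CELL_ETC}/mcell.dir" <<'EOF'
    cell  pncell_bppm01  mc  bppm01.example.com:1828
    EOF

    # Send a harmless test event to the local lower-tier cell; a propagation
    # policy (not shown) would forward it on to pncell_bppm01.
    msend -n lowtier_cell -a EVENT -r WARNING \
          -m "Event path validation test" \
          -b 'mc_host=ishost01.example.com;mc_tool=manual_test;'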
Event & Data Flow Processing

As already discussed above, both performance data and events are sent to the Integration Service process from the PATROL Agents over the same communication path. The Integration Service process then forwards events to the event management cell that is running locally on the same host with the Integration Service. The event management cell further processes the events (filtering, enrichment, correlation, etc.) and forwards them to an enterprise event correlation cell, which in turn forwards the events to the BPPM Server. Performance data is sent from the Integration Service process directly to the BPPM Server.

PATROL events are now considered "Internal Intelligent" events in BPPM 9.5. Previous versions considered PATROL events as "External" events, and they were managed like 3rd-party events. PATROL Agent events in BPPM 9.5 are mapped to the monitor instance object they belong to (previously the mapping was only at the device level). This monitor instance association of PATROL events improves Probable Cause Analysis leveraging categorizations (e.g. Database, Application, Server, Network, etc.).

The following are additional best practice recommendations for Integration Services.
1) The Integration Service installed with the BPPM Server (locally on the BPPM Server) should be configured as a Staging Integration Service. (In a POC environment it could instead be used for data collection.) Staging Integration Services are discussed further in this document.
2) At least one remote Integration Service node should be deployed for all environments.
3) BMC generally recommends installing the Integration Service and event management cell in pairs so that each Integration Service process has a corresponding event management cell installed on the same host. In this configuration events are propagated from the Integration Service to the event management cell running on the same host. The install of an event management cell is an option available in the installer when installing the Integration Service.
4) It is very important to maintain the event flow path so that all events from any one PATROL Agent are always processed through the same event management cell(s) (including cell HA pairs). This ensures event processing continuity where automated processing of one event is dependent on one or more other events from the same agent. A simple example of this type of processing is the automated closure of critical events that is triggered by OK events for the same object that was in a state of critical alarm. If you do not maintain the same event flow path per agent through the same event management cell(s), event correlation of all events from the same agent is not possible because the necessary events are not received and available in the same cell(s).
5) Some environments may require more than two Integration Service nodes in a cluster and/or more than two Integration Service nodes defined for each agent that is sending the data (events and performance) through a 3rd-party load balancer to the Integration Service nodes. This is acceptable as long as all events from any one agent always flow through the same HA cell pair so that event processing continuity as described above is maintained. For example, if four Integration Service nodes are clustered, then each node in the cluster should not have a cell
configured on it. Instead, the cell should be on other systems (in an HA pair) so that the event path remains the same for all events coming from the agents that the cluster handles.

Configuration regarding the performance and event data that is sent from the PATROL Agents to the BPPM Server is defined in policies that are automatically applied to the desired PATROL Agents. Agent assignment is defined in each policy configuration based on specific criteria. The details of agent selection criteria per policy are discussed further in the Central Management & Administration section of this document. PATROL events and performance data are completely controlled at the PATROL Agent based on these policies. You have complete control, meaning that sending data, events, data and events, or no data and no events is controlled per parameter. These configuration settings can be edited and changed on the fly without having to rebuild any configurations or restart any processes.
Connection Details

PATROL Data Collection

The diagram below illustrates the default ports on which connections are made for communications from the PATROL Agents through the Integration Service process to the BPPM Server for the BPPM 9.5 solution components. The direction of the arrows indicates the connection requests. Please review the product documentation for further details.

[Diagram: BPPM Server (admin cell, Jserver on port 1827, CMA, agent controller, event cell on port 1828); Integration Service node (event cell on port 1828, Integration Service on port 3183, PATROL Agent for self-monitoring on port 3181); managed node (PATROL Agent on port 3181 with Knowledge Modules). Legend: data & events, data, events, policies; direction of arrows indicates connection requests.]

Note the following simplifications and changes from BPPM 9.0.
1) Port 3182 is no longer listening on the Integration Service node for external connection requests. The BPPM Server communicates to the Integration Service port (3183 by default) to send policies from CMA.
2) The number of processes and ports on the Integration Service host has been reduced. There is no longer a pproxy process.
3) The Monitor the Monitor (MTM) KM does not discover and monitor the BPPM 9.5 Integration Service and should not be used with it. The built-in self-monitoring is significantly enhanced in BPPM 9.5, and the MTM KM is no longer needed. However, the PATROL Agent and operating system KMs should be used for additional self-monitoring, and this is recommended.

Administration & PATROL Consoles

The diagram below illustrates the ports and connections related to the BPPM Administration and PATROL Consoles.

[Diagram: BPPM Server (admin cell, Jserver on port 1827, CMA, agent controller, BEM cell on port 1828); Integration Service node (BEM cell on port 1828, Integration Service on port 3183, PATROL Agent for self-monitoring); RT Server on port 2059; PATROL Console Server; PATROL Console node with PATROL Central Windows, PCM, and the PATROL Classic Console; managed node (PATROL Agent on port 3181 with Knowledge Modules). Legend: data & events, data, events, policies, authentication; direction of arrows indicates connection requests.]

An instance of the BPPM Administration console should always be installed on a separate machine from the BPPM Server. An instance of the BPPM Administration console is installed on the BPPM Server by default; this instance should only be used in an emergency if another instance is not available.
A PATROL Console is not required in every environment. The need to use PATROL Consoles and PATROL Configuration Manager (PCM) is reduced with BPPM 9.5 due to enhancements in BPPM Central Management and Administration. The PATROL Console should only be installed in environments where specific PATROL console functionality is required. The following are reasons to include PATROL Consoles and/or PCM; these will not apply to all environments.
1) The environment has a legacy PATROL implementation and the PATROL Console functionality needs to be continued for some period of time for migration and/or process-related reasons.
2) Specific functionality in the PATROL Console is required that is not available in the BPPM 9.5 console. Examples of functionality limited to the PATROL console include the following.
a. Menu commands that generate reports
b. Menu commands to initiate administrative actions that are built into the PATROL KMs and run against the managed technology
c. Detailed analysis of certain events in PATROL
If functions like these are not used and/or not required in IT management processes, the PATROL Consoles may not be necessary in the production environment. Do not install a PATROL Console in the production environment if it is not needed.
3) Some PATROL Knowledge Modules (KMs) in use are not yet fully manageable in CMA. Check BMC's web site to verify which KMs are fully manageable in CMA, as this list is constantly being updated. A list of compatible KMs can be found at the following URL. through+central+monitoring+administration
4) In certain situations detailed analysis of the PATROL Agent and KM operations may be necessary for troubleshooting. This should be accomplished with a PATROL Central Console if it is required in production. The PATROL Classic Console should only be used for KM development and never used in a production environment.
5) Development of custom KMs to be loaded in CMA. This requires the PATROL Classic Console. It can also be used to analyze content in PATROL events at the PATROL Agent, which should be done in a development environment only. The PATROL Classic Console should be used primarily for custom KM development.
6) When detailed understanding of a KM's functionality and how it is configured cannot be gained without analyzing the KM using the PATROL Console. This should be done in a development environment.
BPPM Administration Console

The diagram below illustrates the connections between the BPPM Administration Console and other BPPM solution components. The ports listed are default ports and the arrows indicate the direction of the connection requests.

[Diagram: the BPPM Java Administration Console connects to the BEM cell (port 1828) on the Integration Service host, and to the BPPM Server's RMI registry (port 1099), Jserver, IAS (port 3084), BEM cell (port 1828), and admin cell (port 1827).]
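Before troubleshooting console connectivity, it can help to confirm that the documented default ports are reachable from the administration workstation. A minimal sketch with netcat follows; the hostnames are illustrative, and nc is assumed to be available on the workstation.

    #!/bin/sh
    # Minimal sketch: check the default ports from the BPPM Administration
    # Console workstation. Hostnames are illustrative.
    BPPM_SERVER=bppm01.example.com
    IS_HOST=ishost01.example.com

    nc -vz "${BPPM_SERVER}" 1827   # Admin cell
    nc -vz "${BPPM_SERVER}" 1828   # BEM cell on the BPPM Server
    nc -vz "${BPPM_SERVER}" 1099   # RMI registry
    nc -vz "${BPPM_SERVER}" 3084   # IAS
    nc -vz "${IS_HOST}" 1828       # BEM cell on the Integration Service host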
Central Management & Administration (CMA)

BMC recommends implementing one of two architectures for CMA. The two choices are listed below with their pros and cons.

Choice One - Implement a single CMA instance for all environments, including Development, Test and Production.

Pros:
1) The creation, testing, and deployment of monitoring policies into production are very easy because you do not have to copy or export/import any data. The application of policies to Development, Test and Production is simply managed in the policy's agent selection criteria.
2) It requires fewer infrastructure nodes and components. Only a single Staging Integration Service host is needed, and only a single CMA instance is used.

Cons:
1) This may not be supported in some sites where all the necessary connections between the Development, Test and Production environments are not available or allowed to be connected over the network.
2) Due to the powerful ease of use, it is easier for administrators to make mistakes by applying policies unintentionally to production. However, this can be managed.

Choice Two - Implement a separate CMA instance for each of the Development, Test and Production environments.

Pros:
1) It is supported in sites where all the necessary connections between the Development, Test and Production environments are not available or allowed to be connected over the network.
2) It provides a platform and supports policy management methods that help prevent administrators from making mistakes when applying policies to production.

Cons:
1) The creation, testing, and deployment of monitoring policies into production require more manual effort because you have to export/import policy data from Development to Test and from Test to Production.
2) Policies could get out of synch across the Development, Test, and Production environments if not managed properly. Keeping them up to date is more of a manual process supported by the export/import utility.
3) It requires more infrastructure nodes and components. The Development, Test, and Production environments should each have a dedicated Staging Integration Service host and a dedicated CMA instance.

IMPORTANT: Neither method supports seamless creation, testing, and production deployment of updates to and deletion of existing policies. Updates and deletion of existing policies that are already in production should be created, tested, and populated to production leveraging the policy export/import capability. This topic is discussed in detail in the configuration best practices.

In all scenarios, CMA communicates through the agent controller process on the BPPM Server(s) to the Integration Service nodes. These implementation architecture options are not installation options; the CMA components are the same. These two implementation architecture options are simply choices in how you install CMA instances and connect them to the various BPPM Servers.

Single CMA Architecture Overview

The diagram below illustrates the high-level architecture for a single CMA instance in a multiple BPPM Server environment including Development, Test and Production.

[Diagram: a Central BPPM Server with CMA above a Staging Integration Service host (for newly deployed PATROL Agents) and QA, Test, and Production BPPM Servers (N, N+1), each with its own Integration Service host and PATROL Agent monitoring. Legend: policies, data & events, data, events; direction of arrows indicates connection requests.]

With the Single CMA Architecture a single Staging Integration Service node is used in the agent deployment process for all agents. All BPPM Servers leverage the single CMA instance for all policy management.
Multiple CMA Architecture

The diagram below illustrates the high-level architecture for multiple CMA instances in BPPM Server environments including Development, Test and Production.

[Diagram: separate Development, Test, and Production Central BPPM Servers, each with its own CMA, child BPPM Servers, Staging Integration Service, and data collection Integration Services. Newly deployed PATROL Agents check in through the Staging Integration Service of their own environment, and policies move between environments by manual policy export/import. Legend: policies, data & events, data, events; direction of arrows indicates connection requests.]

With this architecture each environment has its own dedicated CMA instance and Staging Integration Service. All policy application between environments is supported by the policy export/import utility.

Standalone BPPM Servers & CMA

In most multiple BPPM Server environments the CMA module will be installed with a Central BPPM Server. However, it is possible to install CMA with a stand-alone BPPM Server and then manually register the additional BPPM Servers with CMA after the install. The following are reasons for installing CMA on stand-alone BPPM Servers and not leveraging the Central Server capability.
1) The top-tier BPPM Server is only needed to provide an enterprise event console.
2) BPPM Central Server functions are not needed.
a. Single point of entry for service model visualization.
b. Enterprise-level map view.

CMA Architecture Details

The diagram below illustrates the default ports and connectivity that support Central Management and Administration across multiple BPPM Servers. The arrows indicate the direction from which the connections are established.

[Diagram: the BPPM Server with CMA exposes JMS on port 8093 and Web Services on ports 80/443; the child or leaf BPPM Servers expose JMS on port 8093, Web Services on ports 80/443, and an agent controller port; each Integration Service host listens on port 3183 and connects to managed hosts running PATROL Agents and Knowledge Modules on port 3181.]

The detailed architecture above applies to both the Single CMA Architecture and the Multiple CMA Architecture.
Staging Integration Services Overview & Functionality

The Staging Integration Service has been introduced in BPPM 9.5. It provides a single point in the environment where all newly deployed PATROL Agents can register into the BPPM solution stack. The staging process can leverage the Integration Service on the BPPM Server.

A Staging Integration Service supports a smoother process for deploying PATROL Agents into BPPM environments in three major ways.
1) It eliminates the need to manage the decision and assignment of PATROL Agents to production Integration Service nodes separately from the deployment process. (This assumes an environment that includes multiple production Integration Service instances.) When you leverage a Staging Integration Service this decision and the assignment are automated as part of the deployment process.
2) It supports a smoother process for managing policies across Development, QA, Test, and Production environments.
3) It reduces the number of PATROL silent install packages that have to be created and maintained.

PATROL Agent silent install packages are created so that the Staging Integration Service is defined in the package. No other Integration Service is defined in the install packages. Although technically possible, it is recommended as a best practice that other Integration Service instances not be defined in the install packages. When a package is deployed and installed, the agent will check in through the Staging Integration Service. When the agent checks in, the Central Management & Administration module evaluates agent selection criteria in a Staging policy and uses that data to automatically assign a data collection Integration Service (or Integration Service cluster) to the agent. The agent selection criteria can include any one or any combination of the following.
1) A tag defined in the agent configuration
2) Hostname that the agent is running on
3) Operating system that the agent is running on
4) IP address or IP address range that the agent is running on
5) Agent port
6) Agent version
7) Integration Service that the agent is already assigned to (assuming it is already assigned)
8) BPPM Server that the agent is already assigned to (in this case it is through a Staging Integration Service)
Staging policies are the only CMA policies that are applied through a Staging Integration Service; monitoring policies are not applied through a Staging Integration Service. Additionally, Staging policies in CMA are only applied through a Staging Integration Service instance. The architecture of network connections (communication protocol, ports, etc.) between the Staging Integration Service, the PATROL Agents, and the BPPM Server is technically the same as with other Integration Service instances.

The following are best practices for Staging Integration Service nodes.
1) Do not attempt to configure agents so that performance data and/or events are sent to a Staging Integration Service.
2) Staging Integration Services must not be mixed with data collection Integration Services. They must be configured, used, and managed separately from data collection Integration Services.
3) Configure the Integration Service on the BPPM Server as a Staging Integration Service. Do not use it for data collection.
4) If firewall rules and security prevent using the Integration Service on the BPPM Server as a Staging Integration Service, deploy a Staging Integration Service into the managed zone or zones.
5) Set up a single Staging Integration Service for each environment, for example one for Development, one for Test, and one for Production. Or, if you have a single CMA instance for all environments, set up a single Staging Integration Service for the entire implementation when possible.
6) Consider high availability for Staging Integration Services. Refer to the High Availability section for more information.
Staging Process Illustration

The diagrams in steps 1 through 3 below illustrate the process of utilizing a Staging Integration Service.

Step 1 - Initial Agent Deployment

[Diagram: a newly deployed PATROL Agent connects to the Staging Integration Service; PATROL Agents performing general monitoring connect to the General Integration Service; dedicated PATROL Agent and Integration Service nodes for domain monitoring connect to the Domain Integration Service; all three Integration Services connect to the BPPM Server. Legend: policies, data & events, data, events; direction of arrows indicates connection requests.]

The diagram above illustrates three different Integration Service nodes and how they are used, as follows.
1) The Staging Integration Service is used strictly for introducing new agents to the BPPM Server. (An Integration Service has to be configured to work as a Staging Integration Service.)
2) The General Integration Service is used for collecting data from various deployments of PATROL Agents that are installed locally on the managed nodes. The term General is a description of how the Integration Service is used and is not a configuration.
3) The Domain Integration Service is used for collecting data from PATROL Agents that provide large volumes of data from a single source. Examples are VMware vCenter, PATROL remote operating system monitoring, NetApp, etc. The term Domain is a description of how the Integration Service is used and is not a configuration.

The agent introduction process works as follows. A newly deployed PATROL Agent silent install package is installed as shown above. The install package for the PATROL Agent contains configuration data telling the agent how to connect to the Staging Integration Service. When the new agent starts for the first time, it registers with CMA through the Staging Integration Service. CMA then applies a
Staging policy to the agent based on agent selection criteria in the policy. Agent selection criteria define what agents the policy should be applied to. The Staging policy only contains agent selection criteria and information that defines connectivity for a data collection Integration Service node (or Integration Service cluster). No other agent and/or KM configuration data can be defined in a Staging policy.

Step 2 - Integration Service Policy Application

[Diagram: after receiving the Staging policy, the newly deployed PATROL Agent switches its connection (shown as a blue arrow) from the Staging Integration Service to the assigned data collection Integration Service; the General and Domain Integration Services continue forwarding data and events to the BPPM Server. Legend: policies, data & events, data, events; direction of arrows indicates connection requests.]

After receiving the Staging policy, the newly deployed agent switches to the data collection Integration Service node (or Integration Service cluster) defined in the Staging policy. (The switch is represented by the blue arrow in the diagram above.) The agent then receives any monitoring policies defined in CMA that match each monitoring policy's agent selection criteria.
Step 3 - Monitoring

[Diagram: the PATROL Agent, now connected to its data collection Integration Service, performs local monitoring alongside dedicated PATROL Agent and Integration Service nodes for domain monitoring; the Staging, General, and Domain Integration Services connect to the BPPM Server. Legend: policies, data & events, data, events; direction of arrows indicates connection requests.]

The agent starts monitoring and continues to receive any updates to existing monitoring policies and any new monitoring policies that match the monitoring policy's agent selection criteria.

NOTE: Agents do not move from Development to Test, and then to Production. All agents should first check in with the appropriate Staging Integration Service, then move to their data collection Integration Service. This supports the concept of creating install packages for Development and Test only, separate from Production. This topic is discussed further in the BPPM 9.5 Configuration Best Practices.
Staging & Policy Management for Development, Test and Production

Single CMA Instance Deployments

A single Staging Integration Service and a single CMA instance can be used to support multiple BPPM Servers. The diagram below illustrates how this is architected for Development, Test, and Production BPPM Server environments.

[Diagram: a Central BPPM Server with CMA above a Staging Integration Service host (for newly deployed PATROL Agents) and Development, Test, and Production BPPM Servers (N, N+1), each with its own Integration Service host and PATROL Agent monitoring. Legend: policies, data & events, data, events; direction of arrows indicates connection requests.]

All policies include agent selection criteria that allow you to completely control which policies are applied to any and all PATROL Agents across the entire environment, spanning Development, Test and Production. This allows you to install a single CMA instance for the entire environment, and it eliminates the need to recreate policies in production after they have been created in development and tested. One or more of the following agent assignment configurations in the policies is defined and edited to accomplish this.
1) BPPM Server that the agent is assigned to (best practice)
2) Integration Service that the agent is assigned to
3) A tag defined in the agent configuration
4) Hostname that the agent is running on
5) IP address that the agent is running on

The easiest method is to include the appropriate BPPM Servers in the agent selection criteria for the policy at the proper time. Simply add the Test and Production BPPM Servers to the policy agent selection criteria when you are ready to apply the policy to those environments. This simplifies the process of moving configuration from QA to Test, and finally to Production. It also ensures a policy is not applied to any agents in production until it has been tested and validated. The following outlines the process as an example.

Phase 1 - Only the BPPM Server named BPPMRHEL62-HM-QA is included in the agent selection criteria when the policy is first created.

Phase 2 - The BPPM Server named BPPMRHEL62-HM-TEST is added to the agent selection criteria with an OR after the policy has been validated in QA.
Phase 3 - The BPPM Server named BPPMRHEL62-HM-PROD is added to the agent selection criteria with an OR after the policy has been tested and validated in the BPPMRHEL62-HM-TEST environment and is ready to be applied to production.

WARNING: At least one BPPM Server must be included in the agent selection criteria in order to control which BPPM Server environment(s) the policy is applied to. If you do not include at least one BPPM Server, the policy will be applied to all agents, across all BPPM Servers, that match the agent selection criteria of the policy. Additionally, the multiple BPPM Server values must be grouped (using parentheses) and related with a Boolean OR, as described above. If you use a Boolean AND to relate the agent selection criteria, the policy will not be applied, because an agent cannot register with multiple BPPM Servers.

WARNING: Leveraging the BPPM Servers in agent selection criteria is powerful and has far-reaching, global implications. If you mistakenly add a production BPPM Server to the agent selection criteria for a policy, the policy could be unintentionally applied to hundreds or thousands of agents in production. Therefore it is extremely important that this process be managed carefully.

IMPORTANT: Updates to and deletion of existing policies will apply to all agents that the policy's agent selection criteria match. Consequently, it is not possible to test edits to policies that currently apply to production without impacting production using the process outlined above. Separate policies should be created to test edits in the development and test environments leveraging the export/import utility. The details of this topic are discussed in the configuration best practices.

Leveraging tags in policies can also be used to further control which agents policies are applied to. Tags should be used to provide a second level of protection to prevent policies still in development or test from being applied to production accidentally. Leveraging tags this way forces the user to not only add the appropriate BPPM Server to the policy selection criteria, but also to add the appropriate tag. This helps prevent the user from accidentally picking the production BPPM Server and saving the policy when they did not mean to. Additionally, tags can be used to provide greater granularity for policy assignment where the other agent selection criteria are not enough.

BMC recommends leveraging precedence in policies so that production policies have the highest precedence and will not be superseded by development or test policies. If you follow this recommendation you will also have to adjust the policy precedence when you want to move a policy from development to test, and finally from test to production.

The configuration topics above are discussed further in the BPPM 9.5 Configuration Best Practices.
|
http://docplayer.net/1805084-Bppm-9-5-architecture-scalability-best-practices-2-20-2014-version-1-4.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Copy formatted org-mode text from Emacs to other applications
Posted June 16, 2016 at 11:46 AM | categories: rtf, emacs
I do a lot of writing in org-mode, and I thought it would be great if I could copy text from an org file and paste it with formatting into other applications, e.g., Word, Gmail, etc. Curiosity got the better of me, and I wondered how this is done in other applications. It works by creating a Rich Text Format version of what you want to copy and then putting that on the clipboard. It isn't quite enough to just copy it; it needs to go onto the clipboard as an RTF datatype. On Mac OS X I used pbcopy to make that happen.
One simple strategy to do this from org-mode is to generate HTML by export and then convert it to RTF with a utility, e.g., textutil. For example, like this:
(defun formatted-copy ()
  "Export region to HTML, and copy it to the clipboard."
  (interactive)
  (save-window-excursion
    (let* ((buf (org-export-to-buffer 'html "*Formatted Copy*" nil nil t t))
           (html (with-current-buffer buf (buffer-string))))
      (with-current-buffer buf
        (shell-command-on-region
         (point-min)
         (point-max)
         "textutil -stdin -format html -convert rtf -stdout | pbcopy"))
      (kill-buffer buf))))

(global-set-key (kbd "H-w") 'formatted-copy)
This works well for everything but equations and images. Citations leave a bit to be desired, but improving them is still a challenge.
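Outside Emacs, the same HTML-to-RTF-to-clipboard pipeline can be driven from Python as well; here is a minimal sketch, assuming macOS (where textutil and pbcopy live), with a hypothetical helper name:

import subprocess

def copy_html_as_rtf(html):
    # Convert HTML to RTF with textutil, then hand the RTF bytes to pbcopy
    # so they land on the clipboard as an RTF datatype, mirroring the shell
    # pipeline used by the Emacs command above.
    rtf = subprocess.run(
        ["textutil", "-stdin", "-format", "html", "-convert", "rtf", "-stdout"],
        input=html.encode("utf-8"), stdout=subprocess.PIPE, check=True,
    ).stdout
    subprocess.run(["pbcopy"], input=rtf, check=True)

copy_html_as_rtf("Some <b>bold</b> and <i>italic</i> text to paste elsewhere.")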
Let us try this on some text. Some bold, italic, underline, struck, and verbatim text to copy. Here are some example formulas: H₂O ionizes to form H⁺. We simply must have an equation: \(e^{i\pi} + 1 = 0\) [1]. We should also have a citation kitchin-2015-examp and multiple citations kitchin-2016-autom-data, kitchin-2015-data-surfac-scien [2].
A code block:
import pycse.orgmode as org
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 60, 500)

plt.figure(figsize=(4, 2))
plt.plot(np.exp(-0.1 * x) * np.cos(x),
         np.exp(-0.1 * x) * np.sin(x))

org.figure(plt.savefig('spiral.png'),
           caption='A spiral.',
           attributes=[['org', ':width 100']])
print('')

org.table([['H1', 'H2'], None, [1, 2], [2, 4]],
          caption='A simple table')
print('')

org.result(6 * 7)
Figure 1: A spiral.
42
In summary, this simple approach of generating RTF from exported HTML works really well for the simplest markup. Getting figures in, and getting cross-references, captions, and proper references right, will require a more sophisticated export approach, probably one that exports RTF directly. That is a big challenge for another day!
Bibliography
- [kitchin-2015-examp] Kitchin, "Examples of Effective Data Sharing in Scientific Publishing", ACS Catalysis, 5(6), 3894-3899 (2015).
- [kitchin-2016-autom-data] Kitchin, Van Gulick & Zilinski, "Automating Data Sharing Through Authoring Tools", International Journal on Digital Libraries, 1-6 (2016).
- [kitchin-2015-data-surfac-scien] Kitchin, "Data Sharing in Surface Science", Surface Science, in press (2015).
Footnotes:
Copyright (C) 2016 by John Kitchin. See the License for information about copying.
Org-mode version = 8.3.4
|
http://kitchingroup.cheme.cmu.edu/blog/2016/06/16/Copy-formatted-org-mode-text-from-Emacs-to-other-applications/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
* Lars Marius Garshol:
> The trouble is that it will be very hard (if at all possible) to do
> this without doing damage to backwards compatibility.

* Jack Jansen:
> [...]

This sounds like a viable alternative, even if it is just a limited form of support. However, you can do exactly the same (and much more) with architectural forms, which we already have support for via Geir Ove's xmlarch module. Why do you want to use namespaces instead?

Also, perhaps we should add to the DOM implementations some standard way of inserting a SAX ParserFilter (something we should perhaps also work on) between the parser and the DOM. This would enable us to automate things like removing whitespace, joining blocks of PCDATA that were separated by buffer boundaries in the parser, doing architectural processing, (for those who want it) doing namespace filtering, filtering out XLinks for special processing, etc.

--Lars M.
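The filter idea sketched in this 1998 thread later landed in Python's standard library as xml.sax.saxutils.XMLFilterBase; a minimal modern sketch of a whitespace-stripping filter inserted between a parser and a downstream handler (the class name and input file are illustrative, not from the thread):

import xml.sax
from xml.sax.saxutils import XMLFilterBase, XMLGenerator

class StripWhitespace(XMLFilterBase):
    # Drop whitespace-only text nodes before they reach the downstream
    # handler, one of the automated cleanups proposed in the message.
    def characters(self, content):
        if content.strip():
            super().characters(content)

parser = xml.sax.make_parser()
filt = StripWhitespace(parser)
filt.setContentHandler(XMLGenerator())  # echo the filtered XML to stdout
filt.parse("example.xml")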
|
https://mail.python.org/pipermail/xml-sig/1998-November/000478.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
repoze.profile Documentation
This package provides a WSGI middleware component which aggregates profiling data across all requests to the WSGI application. It provides a web GUI for viewing profiling data.
Configuration via Python
Wire up the middleware in your application:
from repoze.profile import ProfileMiddleware

middleware = ProfileMiddleware(
    app,
    log_filename='/foo/bar.log',
    cachegrind_filename='/foo/cachegrind.out.bar',
    discard_first_request=True,
    flush_at_shutdown=True,
    path='/__profile__',
    unwind=False,
)
The configuration options are as follows:
- ``log_filename`` is the name of the file to which the accumulated profiler statistics are logged.
- ``cachegrind_filename`` is the optional name of the file to which the accumulated profiler statistics are logged in the KCachegrind format.
- ``discard_first_request`` is a flag indicating whether the profiling data from the first request should be discarded.
- ``flush_at_shutdown`` is a flag indicating whether the profile data should be deleted when the middleware instance is shut down; if false, the data will not be deleted.
- ``path`` is the URL path to the profiler UI. It defaults to ``/__profile__``.
- ``unwind`` is a configuration flag which indicates whether the app_iter returned by the downstream application should be unwound and its results read into memory. Setting this to true is useful for applications which use generators or other iterables to do "real work" that you'd like to profile, at the expense of consuming a lot of memory if you hit a URL which returns a lot of data. It defaults to false.
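As a quick smoke test, the middleware can be wrapped around a trivial WSGI application and served with the standard library; the demo app and port below are illustrative assumptions, not part of repoze.profile:

from wsgiref.simple_server import make_server
from repoze.profile import ProfileMiddleware

def app(environ, start_response):
    # A do-nothing WSGI app, just to generate some profile data.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

middleware = ProfileMiddleware(app, log_filename='demo.profile')

# Hit http://localhost:8000/ a few times, then browse to
# http://localhost:8000/__profile__ to inspect the accumulated statistics.
make_server('', 8000, middleware).serve_forever()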
Configuration via Paste
Wire the middleware into a pipeline in your Paste configuration, for example:
[filter:profile]
use = egg:repoze.profile
log_filename = myapp.profile
cachegrind_filename = cachegrind.out.myapp
discard_first_request = true
path = /__profile__
flush_at_shutdown = true
unwind = false

...

[pipeline:main]
pipeline = egg:Paste#cgitb egg:Paste#httpexceptions profile myapp
Viewing the Profile Statistics
As you exercise your application, the profiler collects statistics about the functions or methods which are called, including timings. Please see the Python profilers documentation for an explanation of the data which the profiler gathers.
Once you have some profiling data, you can visit the configured ``path`` in your browser to see a user interface displaying the profiling statistics.
Profiling individual functions
Sometimes you may need to profile a specific function, whether to analyze a bottleneck found with the full profiling or to compare different approaches to the same problem. This package provides a decorator for this case. To use it, simply decorate the desired function like this:
from repoze.profile.decorator import profile

@profile('Descriptive title', sort_columns=('time', 'cumtime'), lines=30)
def my_bottleneck():
    # some really time consuming code
    ...
The results of the profiling will be sent to standard out. The title will appear at the top of the results, for guidance. All other arguments are optional. ``sort_columns`` allows specifying the columns used to sort the timing results; see the Python profilers documentation for the available options. ``lines`` is the number of lines of results to print; the default is 20, and zero means no limit.
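A hedged sketch of exercising the decorator (the function body and title are illustrative):

import time
from repoze.profile.decorator import profile

@profile('sleepy loop', lines=10)
def sleepy():
    for _ in range(3):
        time.sleep(0.01)

sleepy()  # the profiling statistics for this call are printed to stdout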
Reporting Bugs / Development Versions
Visit the project's issue tracker to report bugs. Fork the repository to submit patches as pull requests.
|
http://repozeprofile.readthedocs.io/en/latest/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
This lab validation report by industry analyst ESG explores how NetApp clustered Data ONTAP can help organizations create a highly efficient and scalable data storage environment that supports a shared IT infrastructure foundation. ESG Lab combined hands-on testing of Data ONTAP 8.2 performed in 2013 with a detailed audit of what’s new in Data ONTAP 8.2.1 to validate the nondisruptive operations, proven efficiency, and seamless scalability offered by clustered Data ONTAP 8.2.1.
ESG provided an audit of the following capabilities.
Non-disruptive Operations: Scalability, Availability, and Resource Balancing. As storage nodes are added to the system, all physical resources—CPUs, cache memory, network I/O bandwidth, and disk I/O bandwidth—can be easily kept in balance. Clustered Data ONTAP 8.2.1 systems enable users to add or remove storage shelves (over 23 PB in an eight-node cluster, and up to 69 PB in a 24-node cluster); move data between storage controllers and tiers of storage without disrupting users and applications; and dynamically assign, promote, and retire storage, providing continuous access to data while administrators upgrade or replace storage. This enables administrators to increase capacity while balancing workloads, and can reduce or eliminate storage I/O hot spots without the need to remount shares, modify client settings, or stop running applications.
Unified Storage Efficiency. NetApp provides storage efficiency technologies for production and backup data sets that include block-level data deduplication, compression, and thin provisioning. These technologies can be deployed individually and in combination for both SAN and NAS, allowing customers to reduce the capital costs associated with storage. The Automated Workload Analyzer (AWA) available with clustered Data ONTAP 8.2.1 removes complexity and reduces the time to deploy Flash Pool by using real-time automated learning, computing Flash Pool sizing, and estimating performance gains, increasing both storage and administrator efficiency. Efficiency is also increased with in-place 32-bit to 64-bit aggregate upgrades.
Quality of Service Management. Clustered Data ONTAP quality of service (QoS) enables the definition and isolation of specific workloads. IT organizations can throttle or prevent rogue workloads, and service providers can define specific service level objectives.
Secure Multi-tenancy. Using Storage Virtual Machines (SVMs), clustered Data ONTAP provides secure, protected access to groups of servers and applications, allowing organizations to deliver dedicated administration, IP addresses, exports, storage objects, and namespaces to consumers of IT services.
Integrated Data Protection. NetApp provides on-disk snapshot backups using capacity- and resource-efficient Snapshot technology. Customers can reduce recovery time objectives (RTOs) and improve recovery point objectives (RPOs) across all of the storage in their environment. SnapVault provides block-level disk-to-disk backup. Backup targets can be within a cluster, across multiple clusters, or span multiple data centers, delivering fast, streamlined remote backup and recovery.
FlexClone. Using FlexClone provisioning, built on Data ONTAP Snapshot technology, customers can instantly create clones of production data sets and VMs in order to meet the requests of a dynamic infrastructure without requiring additional storage capacity. Clones can speed test and development, provide instant provisioning for virtual desktop and server environments, and increase storage utilization. This capability is integrated with a number of offerings from NetApp partners including Microsoft, VMware, Citrix, and SAP.
Unified Management. NetApp OnCommand data management software offers effective, cost-efficient management of shared scale-out storage infrastructure to help organizations optimize utilization, meet SLAs, minimize risk, and boost performance. By offering a single system image across multiple storage nodes in a Data ONTAP cluster, NetApp enables organizations to automate, virtualize, and manage service delivery and SLAs through policy-based provisioning and protection.
Additional Capabilities Added/Enhanced in Data ONTAP 8.2.1
NetApp OnCommand Workflow Automation (WFA). Automates common administrative tasks to standardize processes and adhere to best practices. It allows the design of highly customized workflows without the need for scripting expertise. Also, it acts as a point of integration for 3rd party tools such as orchestrators.
Antivirus. Data ONTAP 8.2.1 utilizes an in-memory cache for efficiency and performance; integration with multi-vendor antivirus solutions provides highly available antivirus protection for SMB shares.
Data ONTAP Edge. Administrators can deploy virtualized Data ONTAP 8.2.1 to extend the environment from the core of the business to the edge, providing centralized management and administration as well as backup and disaster recovery for remote office/branch office (ROBO) environments.
Multivendor Virtualization. Data ONTAP 8.2.1 fully supports NetApp FlexArray Virtualization Software with FAS8000 series systems to virtualize storage and incorporate capacity into a Data ONTAP 8.2.1 cluster via license key activation.
SMB/CIFS Capabilities Added/Enhanced in Data ONTAP 8.2.1
LDAP over SSL. Enables secure, encrypted authentication between clustered Data ONTAP and Microsoft Active Directory or OpenLDAP servers. This minimizes vulnerabilities where network monitoring devices or software are used to view users’ credentials.
Active Directory without CIFS. For Microsoft apps attached to clustered Data ONTAP over a SAN that connect to Active Directory and VMware over NFS, organizations can utilize Active Directory for security and management. This enables IT to comply with corporate standards on Active Directory integration. The ability to search across multiple domains for user mappings allows large, complex environments to maintain their existing cross-protocol workflows. Users with identities in both UNIX and Active Directory domains can map across domains in multiprotocol deployments.
Microsoft SQL Server with SMB 3.0. NetApp Data ONTAP 8.2.1 clusters can provide users with uninterrupted access to SQL Server instances over SMB 3.0. Microsoft Hyper-V with SMB 3.0 is also supported for uninterrupted share access for virtualized applications and servers.
In summary, ESG recommends a serious look at the benefits that can be realized from virtualizing storage environments with NetApp clustered Data ONTAP 8.2.1. Through hands-on testing, ESG Lab has confirmed that NetApp can bring a flexible and efficient service-oriented model to heterogeneous storage environments while reducing complexity and delivering a robust infrastructure foundation for shared, on-demand IT services.
Mike McNamara
|
http://community.netapp.com/t5/Technology/Lab-Validation-of-Clustered-Data-ONTAP-8-2-1/ba-p/84095
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Using Line Integral Convolution to Render Effects on Images
Using Line Integral Convolution to Render Effects on Images
Ricardo David Castañeda Marín
VISGRAF Lab, Instituto Nacional de Matemática Pura e Aplicada
A thesis submitted for the degree of Master in Mathematics-Computer Graphics
Feb 2009
Dedicated to my family...
Acknowledgements

I am very grateful to my parents Mariela and Gildardo, my sisters Eliana and Andrea, and my friends Alejandro Mejia, Julian Lopez and Andres Serrano, who have supported all my studies and have encouraged me to carry on with my academic and personal life; this monograph is dedicated to all of them. Many thanks to my academic advisor Dr. Luiz Henrique de Figueiredo, who accepted my request to work on this topic. Thank you for all the suggestions and for helping me in writing this dissertation. Particular thanks go also to Dr. Luiz Velho for his valuable tips and the time we spent discussing new ideas. Those sessions were very helpful, thank you. I am also indebted to Emilio Ashton Vital Brazil, who generously gave his time to offer explanations, especially regarding the pencil effect approach of Section 4.3.
Contents

List of Figures
1 Introduction
2 Line Integral Convolution
  2.1 DDA Convolution
  2.2 LIC Formulation
  2.3 A Fast LIC Algorithm
  2.4 Final Considerations
3 Vector Field Design
  3.1 Basic Design
  3.2 Another Approach - Distance Vector Fields
4 Effects Using LIC Ideas
  4.1 Blur
  4.2 LIC Silhouette
  4.3 Pencil Rendering
  4.4 Painterly Rendering
  4.5 Conclusions and Future Work
A Implementation in C++
  A.1 Getting Started with CImg
  A.2 LIC in C++ using CImg
  A.3 A Basic Vector Field Design System using CImg
  A.4 LIC Effects in C++
    A.4.1 LIC Pencil Effect
    A.4.2 Painterly Rendering
B Gallery
References
List of Figures

2.1 LIC algorithm overview
2.2 DDA convolution
2.3 LIC
2.4 Different Values for L
2.5 FastLIC
2.6 Image Domain Overflowing
3.1 Singularities Classification
3.2 Distance Vector Fields
4.1 Blur Effect
4.2 Warping a dithered image
4.3 Automated Silhouette
4.4 LIC Silhouette
4.5 LIC Pencil Effect Examples
4.6 LIC Spray-Like Effect
4.7 LIC Pencil Effect Process
4.8 Painterly Rendering
A.1 Color to gray conversion
A.2 Basic Vector Field Design example using CImg
B.1 Pigeon Point Lighthouse, California
B.2 Plymouth Hoe, England
B.3 Landscape
B.4 Girl Playing Guitar
B.5 Shoes in the Grass
B.6 Cows
B.7 Garden IMS, Rio de Janeiro
B.8 Live Performance
B.9 Live Performance Continued
B.10 Live Performance Continued
1 Introduction

In this monograph we are going to expose the use of some ideas involved in the Line Integral Convolution (LIC) algorithm for the generation of many non-photorealistic renditions of arbitrary raster images. In other words, our main objective is to create images that could be considered pieces of visual art generated using ideas from the LIC algorithm. From this point of view, the output of our algorithms need not be considered right or wrong; an aesthetic judgement is more appropriate, and that is what we expect from the reader. It is well known that, even since its roots, Computer Graphics procedures have been used by artists for both aesthetic and commercial purposes. Our motivation comes from the original paper on LIC (1), which explores another kind of application considered as realistic effects, more specifically blur-warping. In a similar way, we searched for other uses of these LIC ideas, mixing already known NPR techniques like painterly rendering and pencil sketches. This monograph is the result of such experiments.

The material presented demands basic knowledge of ordinary differential equations and vector calculus. Chapter 2 defines and explains the LIC algorithm to visualize the structure of planar vector fields using white noise as the input image. Since our approach uses arbitrary vector fields to guide the effects, two methods for designing these vector fields are given in chapter 3. Chapter 4 discusses the actual NPR effects and results. There is also an appendix A, which exposes the implementation of our procedures using the CImg library for image processing and visualization of the results, and an appendix B, which is a gallery of our results.
2 Line Integral Convolution

This chapter is about the main algorithm of this text: Line Integral Convolution (LIC). It was introduced in (1) by B. Cabral and L. Leedom at the SIGGRAPH conference in 1993. LIC was designed primarily to plot 2D vector fields that could not be visualized with traditional arrows and streamlines, i.e., fields with high density. Due to the low performance of the original algorithm on large images, several alternative formulations of the same calculations have been published over the years to increase the speed and the detail of the visualization. We will briefly discuss one fast procedure. Since every other algorithm here will be an extension or a derivation of LIC, it is important to have a good understanding of how it works.

2.1 DDA Convolution

The LIC algorithm takes as input an image and a vector field defined on the same domain. The output image is computed as a convolution of the intensity values over the integral curves of the vector field (see Figure 2.1). In the case we want to visualize the topology and structure of this field, the input image needs to have pixels uniformly distributed and with mutually independent intensities (6). A simple white noise input image will be enough for what we want here. On the other hand, the input image can be arbitrarily chosen, and the output image will have an effect depending on the input vector field. We will return to this subject later.

As stated in the original paper, LIC is a generalization of what is known as DDA convolution. The DDA algorithm performs convolution along a line direction rather than along integral curves.

Figure 2.1: LIC algorithm overview - Left to right: white noise, input vector field, and output LIC visualization.

For each pixel location (i, j) on the input image I, we want to compute the pixel intensity in the output image O(i, j). For this, DDA takes the normalized vector V(i, j) corresponding to that location and moves in its positive and negative directions some fixed length L. This generates a line of locations

\[ l(s) = (i,j) + s\,V(i,j), \qquad s \in \{-L, -L+1, \dots, 0, \dots, L-1, L\}, \]

and a line of pixel intensities I(l(s)) of length 2L + 1. Choosing a filter kernel \(K : \mathbb{R} \to \mathbb{R}\) with \(\mathrm{Supp}(K) \subseteq [-L, L]\), the line function I(l(s)) is filtered and normalized to generate the intensity output O(i, j):

\[ O(i,j) = \frac{1}{2L+1} \sum_{s=-L}^{L} I(l(s))\, K(l^{-1}(i,j) - s) \equiv \frac{1}{2L+1}\, \big(I_{l(i,j)} * K\big). \]

The symbol * stands for discrete convolution and is responsible for the name of the algorithm. So for each pixel we perform a discrete convolution of the input image with some fixed filter kernel. As a special case, when K is chosen to be a box kernel (K ≡ 1 on [-L, L] and K ≡ 0 everywhere else), the convolution becomes the average sum of the pixels I(l(s)):

\[ O(i,j) = \frac{1}{2L+1} \sum_{s=-L}^{L} I(l(s)). \]

Figure 2.2 depicts the process.
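To make the procedure concrete, here is a minimal Python sketch of box-kernel DDA convolution; the vector-field callback and the clamping at the image border are our own assumptions, not part of the thesis:

import numpy as np

def dda(image, vf, L=10):
    # Average the intensities along a straight line of 2L+1 samples in the
    # direction of the (normalized) vector field at each pixel.
    h, w = image.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            vx, vy = vf(j, i)  # normalized vector at pixel (i, j)
            acc = 0.0
            for s in range(-L, L + 1):
                y = min(max(int(round(i + s * vy)), 0), h - 1)
                x = min(max(int(round(j + s * vx)), 0), w - 1)
                acc += image[y, x]
            out[i, j] = acc / (2 * L + 1)
    return out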
As expected, DDA is very sensitive to the fixed length L, since we are assuming not only that the vector field can be locally approximated by a straight line, but also that this line has fixed length L everywhere, generating an uneven visualization: linear parts are better represented than vortices or paths with high curvature. Line integral convolution remedies this by performing convolution over integral curves.

Figure 2.2: DDA convolution - Convolution over a line of pixels. Picture adapted from (1).

2.2 LIC Formulation

LIC can be performed on 2D and 3D spaces. Because we are only concerned with the generation of effects on 2D images, our vector field will have a planar domain. An integral curve of the vector field \(v : \Omega \subseteq \mathbb{R}^2 \to \mathbb{R}^2\), passing over \(x_0 \in \Omega\) at time \(\tau = 0\), is defined as a function \(c_{x_0} : [-L, L] \to \mathbb{R}^2\) with

\[ \frac{d}{d\tau} c_{x_0}(\tau) = v(c_{x_0}(\tau)), \qquad c_{x_0}(0) = x_0, \]

that is, a curve solution of the initial value problem \(\frac{d}{d\tau} c(\tau) = v(c(\tau)),\ c(0) = x_0\). Uniqueness of the solution of this ODE is reached when the field locally satisfies a Lipschitz condition. When \(\frac{d}{d\tau} c_x(\tau) \neq 0\) for all \(x \in \Omega\) and for all \(\tau \in [-\tau, \tau]\), every \(c_x\) can be reparametrized by arc length s (2). An easy computation of this reparametrization leads to an alternate definition of integral curves:

\[ \frac{d}{ds} c_{x_0}(s) = \frac{v(c_{x_0}(s))}{\|v(c_{x_0}(s))\|}, \qquad c_{x_0}(s_0) = x_0. \]

Basically, the generalization from DDA to LIC is done when one changes the line l(s) involved for the integral curve \(c_{l(0)} \equiv c_{(i,j)}\). The new output pixel O(i, j), with \(s_0\) such that \(c_{(i,j)}(s_0) = (i, j)\), is computed by LIC as

\[ O(i,j) = \frac{1}{2L+1} \sum_{s=s_0-L}^{s_0+L} I(c_{(i,j)}(s))\, K(s_0 - s) \equiv \frac{1}{2L+1}\, \big(I_{c_{(i,j)}} * K\big), \]

and simplifying with a box filter:

\[ O(i,j) = \frac{1}{2L+1} \sum_{s=s_0-L}^{s_0+L} I(c_{(i,j)}(s)). \]

Figure 2.3 shows this process. Notice that we are restricting the integral curve to the interval [-L, L] for some fixed length L, like in DDA. In general a good L depends on the vector field and its density. Figure 2.4 shows some examples for different values of L and the same vector field.

Figure 2.3: LIC - Convolution over an integral curve of pixels. Picture adapted from (1).

To compute the integral curves of the input vector field, the solution of the ODE is obtained by integration:

\[ c_{x_0}(s) = x_0 + \int_{s_0}^{s} v(c_{x_0}(s'))\, ds'. \]
Figure 2.4: Different Values for L - From top to bottom and left to right: values for L of 1, 3, 5, 10, 20, and 50.

The next pseudocode performs a discretization of this equation, storing in an array C the pixel locations of the integral curve. The function vector_field(p) returns the vector at point p. The constant ds is the sample rate for the integral curve; see section 2.4 for an explanation.

Computing the integral curve for a pixel p = (x, y):

function compute_integral_curve(p){
    V = vector_field(p)
    add p to C
    for (s = 0; s < L; s = s + 1){        // positive direction
        x = x + ds*V.x
        y = y + ds*V.y
        add the new (x, y) to C
        compute the new V = vector_field(x, y)
    }
    (x, y) = p; V = vector_field(p)       // return to the original point and vector
    for (s = 0; s > -L; s = s - 1){       // negative direction
        x = x - ds*V.x
        y = y - ds*V.y
        add the new (x, y) to C
        compute the new V = vector_field(x, y)
    }
    return C
}

To compute the convolution with a box kernel, a simple average of the intensities is used.

Computing the convolution along integral curves:

function compute_convolution(image, C){
    sum = 0
    for each location p in C {
        sum = sum + image(p)
    }
    sum = sum/(2*L + 1)    // normalization
    return sum
}

Next is the pseudocode of the final LIC algorithm.

LIC pseudocode with a box kernel:

function LIC(image){
    create an empty image O_img
    for each p on image{
        C = compute_integral_curve(p)
        sum = compute_convolution(image, C)
        set pixel p on O_img to sum
    }
    return O_img
}
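The same pseudocode translates almost line for line into Python; a sketch with Euler integration, a box kernel, and border clamping (all parameter values and the normalized vector-field callback are illustrative assumptions):

import numpy as np

def lic(image, vf, L=20, ds=0.5):
    # For each pixel, integrate the streamline forward and backward for L
    # steps of size ds, averaging the intensities found along the way.
    h, w = image.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            acc, n = image[i, j], 1
            for sign in (1.0, -1.0):
                x, y = float(j), float(i)
                for _ in range(L):
                    vx, vy = vf(x, y)  # normalized vector at (x, y)
                    x += sign * ds * vx
                    y += sign * ds * vy
                    xi = min(max(int(x), 0), w - 1)
                    yi = min(max(int(y), 0), h - 1)
                    acc += image[yi, xi]
                    n += 1
            out[i, j] = acc / n
    return out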
Note that for a point on a particular integral curve c, its own integral curve is highly related to c. The low performance of the LIC algorithm can be seen there: for each pixel location we have to compute the integral curve passing through that location (without using the already computed integral curves) and perform a convolution with some filter kernel. In the next section we will see how these relations can be exploited to increase the speed of the LIC algorithm.

2.3 A Fast LIC Algorithm

Given that an integral curve, when computed, covers a lot of pixels, uniqueness of the solution of the ODE implies that the convolution involved in LIC can be reused. Choose a box filter kernel and suppose we have an integral curve of a location (i, j), say \(c_{(i,j)}\), and another location along it, \(c_{(i,j)}(s)\); then their output values are related by

\[ O(c_{(i,j)}(s)) = O(i,j) - \frac{1}{2L+1}\sum_{s'=s_0-L}^{s_0-L+s} I(c_{(i,j)}(s')) + \frac{1}{2L+1}\sum_{s'=s_0+L}^{s_0+L+s} I(c_{(i,j)}(s')). \]

Figure 2.5 illustrates this relation. In practice, to reuse an already computed convolution for a set of pixels, a matrix of the same size as the image is created such that each entry stores the number of times that pixel has been visited. The order in which the pixels are analyzed is important for the efficiency of this process. The goal is to hit as many uncovered pixels as possible with each new integral curve, so as to reuse the convolutions, and thus it is not a good choice to proceed in scanline order. Nevertheless, we can adopt another approach in which the image is subdivided in blocks and the pixels are processed in scanline order within each block. For instance, we take the first pixel of each block and make the calculations, then the second pixel, and so on [1].

Figure 2.5: FastLIC - Integral curves relation involved in the FastLIC approach. The shaded region of the convolution could be reused.

[1] There are other methods to compute the order in which to process the pixels; see for example (6; 11).
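The reuse is easiest to see in one dimension; in this sketch, all box-kernel averages along one streamline come from a single prefix sum instead of 2L+1 additions per sample:

import numpy as np

def windowed_averages(intensities, L):
    # c[m] holds the sum of the first m intensities, so any window sum is a
    # difference of two prefix values, the incremental update FastLIC exploits.
    c = np.concatenate(([0.0], np.cumsum(intensities)))
    n = len(intensities)
    out = np.empty(n)
    for m in range(n):
        lo, hi = max(0, m - L), min(n, m + L + 1)
        out[m] = (c[hi] - c[lo]) / (hi - lo)
    return out

print(windowed_averages(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), L=1))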
The following is a pseudocode of the basic FastLIC algorithm (5). This code is used on each block, as stated in the previous paragraph, to ensure reusability of the integral curves.

FastLIC pseudocode:

for each pixel p
    if p hasn't been visited then
        compute the integral curve with center p = c(0)
        compute the LIC of p, and add the result to O(p)
        m = 1
        while m < L
            update the convolutions for c(m) and c(-m)
            set the output pixels O(c(m)) and O(c(-m))
            mark the pixels c(m) and c(-m) as visited
            m = m + 1

2.4 Final Considerations

As you can see from the previous sections, LIC is a simple but powerful tool for visualizing vector fields. In this section I want to make explicit some considerations regarding the implementation of this algorithm. This section is optional for readers whose interest is the applications rather than the implementation; the material already given is enough for what we want to develop with LIC.

The first consideration is about the space of the variable s. In section 2.1 we defined the DDA convolution for discrete values: \(s \in \{-L, -L+1, \dots, 0, \dots, L-1, L\}\). However, in general s is a real variable on the interval [-L, L]. The line of locations l(s) is in general generated by sampling this interval, with l(0) = (i, j). It is clear that for some sampling rates this line is not injective, given that our image is a raster image with integer locations (i, j). The DDA line is computed as \(l(k\,\Delta s) = (i,j) + k\,\Delta s\,V(i,j)\) with integer \(k \in [-L/\Delta s,\ L/\Delta s]\). If \(\Delta s \equiv 1\) we are back at our original definition. Practical experience (6) shows that using a \(\Delta s\) of about a third or half an image pixel width is enough for good visualizations. The same sampling consideration applies to integral curves in the general LIC algorithm.
Another thing one should consider when implementing LIC is that the domain of the input image I and the output image O can be taken as continuous rather than a grid of pixels (i, j). Basically, we take a continuous rectangular domain and define a set of cells, each centered at a pixel location (i, j). Then, to compute the output pixel at location (i, j), one chooses a number of sample locations within its corresponding cell, performs the computations, and computes an average intensity value. Because increasing the number of samples in each cell increases the run time of the algorithm, a small number is recommended.

Finally, when performing the convolution on pixels near the boundary of the image domain, the algorithm will sometimes try to retrieve an intensity value at an invalid pixel location, because the integral curve will generally leave the image domain. Figure 2.6 illustrates this. One solution is to pad the image with zeros on the boundary; this, however, will sometimes cause black regions at the image boundaries. In the case where the vector field is defined only on the image grid, we simply extend it arbitrarily and smoothly over the domain (e.g., by repeating values).

Figure 2.6: Image Domain Overflowing - A padding with zeros is used to avoid overflowing.
3 Vector Field Design

In the previous chapter we saw how planar vector fields with high density can be visualized using the LIC algorithm. We now turn to the design of the input vector field. This is motivated by many graphics applications including texture synthesis, fluid simulation and, as we will see in the next chapter, NPR effects on images.

3.1 Basic Design

In section 2.2 we saw that a vector field \(v : \Omega \subseteq \mathbb{R}^2 \to \mathbb{R}^2\) defines the differential equation

\[ \frac{d}{d\tau} c(\tau) = v(c(\tau)) \]

such that for each point \(x_0 \in \Omega\), the solution with initial condition \(c_{x_0}(s_0) = x_0\) is the integral curve \(c_{x_0}(\tau)\). A singularity of the vector field v is a point \(x \in \Omega\) such that v(x) = 0. A very basic vector field design consists of local linearizations and a classification of the singularities. Explicitly, if v is given by the scalar functions F and G, i.e., \(v(x) = (F(x), G(x))\), then the local linearization of v at a point \(x_0\) is

\[ V(x) = v(x_0) + Jv(x_0)(x - x_0), \]

where \( Jv(x_0) = \begin{pmatrix} \frac{\partial F}{\partial x}(x_0) & \frac{\partial F}{\partial y}(x_0) \\ \frac{\partial G}{\partial x}(x_0) & \frac{\partial G}{\partial y}(x_0) \end{pmatrix} \) is the Jacobian matrix evaluated at the point \(x_0\). When \(x_0\) is a singularity we have

\[ V(x) = Jv(x_0)(x - x_0). \]
We will assume that for each singularity \(x_0\) its corresponding Jacobian matrix has full rank, and thus that it has two non-zero eigenvalues. This also implies that the only element of the null space of the Jacobian is zero, and so the only singularity of the vector field V is \(x_0\). We know from linear algebra that in this case the eigenvalues of the Jacobian are both real or both complex. When they are real, we have three cases:

1. Both are positive. In this case the singularity is called a source.
2. Both are negative. In this case the singularity is called a sink.
3. One is positive and the other is negative. In this case the singularity is called a saddle.

On the other hand, when the eigenvalues are complex, we have a center when the real parts of both are zero. Figure 3.1 shows this classification for the singularities of our vector field.

Figure 3.1: Singularities Classification - Top, left to right: a sink, a source, and a saddle. Bottom, left to right: a center, and a mix of a saddle, a sink, and a center.
The basic design consists of providing locations and types of singularities on the image domain. For instance, to create the center at (0, 0) shown in figure 3.1, we defined the vector field as

\[ V(x, y) = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \]

In general, the type of singularity can be stored as the Jacobian matrix JV, which we can define as:

\[ JV = \begin{pmatrix} -k & 0 \\ 0 & -k \end{pmatrix} \text{ for a sink,} \qquad JV = \begin{pmatrix} k & 0 \\ 0 & -k \end{pmatrix} \text{ for a saddle,} \qquad JV = \begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix} \text{ for a source,} \]
\[ JV = \begin{pmatrix} 0 & k \\ -k & 0 \end{pmatrix} \text{ for a counter-clockwise center,} \qquad JV = \begin{pmatrix} 0 & -k \\ k & 0 \end{pmatrix} \text{ for a clockwise center,} \]

where k > 0 is a constant representing the strength of the singularity. In practice, if we want a vector field with a singularity of any type at position \(p_0 = (x_0, y_0)\), one defines the vector field as

\[ V(p) = e^{-d\,\|p - p_0\|^2}\; JV \begin{pmatrix} x - x_0 \\ y - y_0 \end{pmatrix}, \]

choosing the desired JV, where d is a decay constant that controls the influence of the vector field on points near and far from the singularity. This is essential when one wants to design a vector field which is a composition of many basic fields with singularities. To construct such a vector field, we define a simple vector field separately for each singularity, and then define the final vector field as their sum. For example, a vector field with a sink at \(q_1 = (10, 10)\) and a center at \(q_2 = (-5, 4)\) can be modeled as

\[ V(p) = e^{-d_1\,\|p - q_1\|^2} \begin{pmatrix} -k_1 & 0 \\ 0 & -k_1 \end{pmatrix} \begin{pmatrix} x - 10 \\ y - 10 \end{pmatrix} + e^{-d_2\,\|p - q_2\|^2} \begin{pmatrix} 0 & -k_2 \\ k_2 & 0 \end{pmatrix} \begin{pmatrix} x + 5 \\ y - 4 \end{pmatrix}, \]
or more compactly:

\[ V(p) = V_{q_1}(p) + V_{q_2}(p). \]

Note that each \(V_{q_i}\) has just one singularity, namely \(q_i\). This is not the case for our final vector field, in which new singularities are present when \(V_{q_1}(p) = -V_{q_2}(p)\). In particular, choosing each \(d_i\) properly, each \(q_i\) is a singularity of the final vector field, but it could happen that V(p) = 0 at points p where \(V_{q_1}(p) \neq 0\) and \(V_{q_2}(p) \neq 0\). There is a method to control these undesired new singularities using Conley indices (4), but it falls outside the scope of this monograph.

3.2 Another Approach - Distance Vector Fields

In the previous section we saw how to design planar vector fields by classifying their singularities: the user chooses a location and a type, and a linear vector field is created by choosing some other parameters like strength and influence. In this section we are interested in constructing vector fields using gestures. The idea is to create a vector field that resembles the direction of a given planar curve. We use a distance-based vector field, which is explained next.

Given a parametrized curve \(C : [a, b] \subseteq \mathbb{R} \to \mathbb{R}^2\), the distance from a point \(p \in \mathbb{R}^2\) to the curve is given by

\[ d(p, C) \equiv \min \{ d(p, C(t)) : t \in [a, b] \}. \]

The parameter t for which the equality holds in the equation above may not be unique. Indeed, when C is a circumference and p is taken as its center, the equality holds for every value of t. Nevertheless, we obtained pleasant results choosing a random value among all the candidates. We denote this value by \(\tau_p\). We also took the Euclidean distance function \(d(x, y) = \sqrt{x^2 + y^2}\) for simplicity. To a curve C and a distance function d we associate two vector fields \(V : \mathbb{R}^2 \to \mathbb{R}^2\) and \(W : \mathbb{R}^2 \to \mathbb{R}^2\), each perpendicular to the other by definition:

\[ V(p) \equiv C(\tau_p) - p, \qquad \langle W(p), V(p) \rangle \equiv 0. \]

Assuming that C is differentiable with \(C'(t) \neq 0\) for all \(t \in [a, b]\), from the definition it is clear that for a point \(p \in \mathbb{R}^2\) with \(p \neq C(t)\) for all \(t \in [a, b]\), the vector W(p) has
the direction (up to sign) of the tangent vector of the curve at \(\tau_p\). In fact, the function \(f : \mathbb{R} \to \mathbb{R}\) defined by

\[ f(t) = \langle C(t) - p,\ C(t) - p \rangle, \]

which measures the square of the distance from p to C(t) for each \(t \in [a, b]\), has a minimum at \(\tau_p\). On the other hand, we have

\[ f'(t) = 2\,\langle C'(t),\ C(t) - p \rangle, \]

and thus \(f'(\tau_p) = 0\) implies that

\[ \langle C'(\tau_p),\ V(p) \rangle = 0, \]

leading to \(W(p) = \lambda C'(\tau_p)\) for some \(\lambda \in \mathbb{R}\), as claimed. From this we see that the vector field W could be a good option to accomplish the design of a vector field that resembles a given curve. However, given that in practice the curve C may not be differentiable (when creating it interactively, for example), numerous artifacts appear in the final vector field. Figure 3.2 shows some examples of V and W for different polylines created with a mouse. Note also how all the points on the curve C become singularities of V and W, which is obvious from their definition.
Figure 3.2: Distance Vector Fields - Left to right: white noise with the curve C in red, and the distance vector fields V and W.
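To close the chapter, the basic design of section 3.1 fits in a few lines of Python; the decay constant and strengths below are illustrative, and the Jacobian signs follow the interactive system of appendix A.3:

import numpy as np

def sink(k):   return np.array([[-k, 0.0], [0.0, -k]])
def source(k): return np.array([[ k, 0.0], [0.0,  k]])
def saddle(k): return np.array([[ k, 0.0], [0.0, -k]])

def field(p, singularities, d=1e-4):
    # V(p) = sum_i exp(-d |p - q_i|^2) * J_i (p - q_i), then normalized.
    v = np.zeros(2)
    for q, J in singularities:
        r = p - q
        v += np.exp(-d * (r @ r)) * (J @ r)
    norm = np.linalg.norm(v)
    return v / norm if norm else v

# A sink at (10, 10) combined with a source at (-5, 4).
sings = [(np.array([10.0, 10.0]), sink(0.5)),
         (np.array([-5.0, 4.0]), source(0.5))]
print(field(np.array([0.0, 0.0]), sings))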
4 Effects Using LIC Ideas

Computational art can be thought of as studies concerned with creating and producing pieces of art by means of a computer [1]. In this monograph we are not going to discuss the creative process that leads to a piece of art from the initial white canvas. These subjects, I think, fall into the context of artificial intelligence and cognition, and are out of the scope of this text. The creativity involved here will then be of another kind: we will be given an input digital image, and we will create and use modifications of the LIC algorithm to process it and generate a new digital image. The resulting image need not be compared to any other, since we will be creating rather than imitating styles. Since the results are in some sense non-real, they are called NPR, or non-photorealistic rendering, effects on images. Nevertheless, we will also include the original realistic blur-warping effects for completeness.

[1] We will be referring just to graphic arts like painting, drawing and photography. This point of view is independent from the definition of art, which we are not going to discuss here.

4.1 Blur

A warping or blur effect can be achieved when using LIC on an arbitrary image rather than on white noise. In this case the vector field will drive the warping effect along its directions. To guarantee visual coherence in the result image, each RGB channel is processed separately. Code in C++ using CImg can be found in appendix section A.2. Figure 4.1 shows some results.

Figure 4.1: Blur Effect - Left: original image. Middle: LIC on white noise of the vector field used. Right: LIC of the original image.

As you may notice, the deformation of the final image depends strongly on the input image and the vector field used. Also notice that in chapter 2 we were able to visualize the structure of the vector field because of the uniform distribution of the white noise input image. In general, we compute as a preprocessing step a dithered version of the input image to ensure this characteristic, and then perform the LIC to generate a warping effect. Figure 4.2 shows an example.

Another application of blur-warping is to generate an animation that advects the colors of the image in the direction of the input vector field. This is achieved by iterating LIC several times with the same vector field. This technique could be used for flow visualization, not only for steady vector fields but for unsteady ones as well. However, some considerations need to be made to control the color advection at the image boundaries (section 2.4). These black regions can be avoided with a technique called Image Based Flow Visualization (10).

Figure 4.2: Warping a dithered image - From left to right: the dithered image, and the warping of the dithered image.

4.2 LIC Silhouette

We can generate a silhouette image automatically with LIC. For this, a threshold is defined to control the value of the convolution of a given pixel. The first step is to convert the image to gray to avoid color incoherences. Then for each pixel we proceed
as in LIC, but the output pixel is set depending on the value of the convolution and the predefined thresholds. This process can also be used to generate a dithered version of the visualization of a vector field. Below is a pseudocode of this process. Figure 4.3 shows examples of this method.

Figure 4.3: Automated Silhouette - Left: original image. Middle: dithered LIC. Right: automated silhouette using LIC.

function Silhouette(image){
    convert image to gray
    for each pixel p in image {
        C = compute_integral_curve(p)
        I = compute_convolution(image, C)
        if (val1 < I < val2) {
            set OutputImage(p) = color1
        } else if (I < val1) {
            set OutputImage(p) = color2
        } else {
            set OutputImage(p) = color3
        }
    }
}

This approach can be thought of as a quantization of the image guided by the values of the line integral convolution. We subdivide the interval [0, 1] in three parts, and we choose an arbitrary color for each part to achieve different effects, including the silhouette. We obtained good results setting color_i to the color of the first pixel p in the original image that belongs to the i-th part of the subdivision. The thresholds val1 and val2 can be set arbitrarily. However, we found interesting results computing the mean intensity value M of the LIC-blurred image and then setting val1 = M - M/3 and val2 = M + M/3. Sometimes, when the image is too dark, we invert colors as a preprocessing step to ensure a good silhouette visualization. Figure 4.4 shows some results with these settings.

Figure 4.4: LIC Silhouette - Setting color_i as the color of the first pixel in the original image belonging to the i-th part of the [0, 1] subdivision.

It is also important to note the role of the vector field in this approach. Observe that high (low) values of I correspond to convolutions over integral curves in regions with high (low) intensities. Thus, for vector fields with integral curves passing from high-intensity to low-intensity regions, we will have an I close to the mean value. That is the reason edges appear in the final result for vector fields in the directions of discontinuities in the original image.
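A sketch of this three-band quantization in Python, given a LIC-blurred grayscale image with values in [0, 1]; the color choices are left to the caller, as in the text:

import numpy as np

def silhouette(lic_gray, colors):
    # colors: three RGB triples, one per band of the [0, 1] subdivision
    # (low, middle, high).
    M = lic_gray.mean()
    val1, val2 = M - M / 3, M + M / 3
    out = np.empty(lic_gray.shape + (3,))
    out[lic_gray < val1] = colors[0]
    out[(lic_gray >= val1) & (lic_gray <= val2)] = colors[1]
    out[lic_gray > val2] = colors[2]
    return out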
4.3 Pencil Rendering

We adapted the algorithm described in (8; 9) to create a pencil effect. Below are some results.

Figure 4.5: LIC Pencil Effect Examples - Some results of the interactive pencil rendering with LIC.

Our procedure is set up to interactively paint pencil strokes in the direction of an input vector field. Strokes are then integral curves with a predefined fixed length L. As a preprocessing step we compute the gradient image of a grayscale version of the original image as

\[ E(x,y) = \left| \frac{\partial I(x,y)}{\partial x} \right| + \left| \frac{\partial I(x,y)}{\partial y} \right| \]

to perform edge detection. The output energy image is used as the height image, and paper is modeled in the same way described in (8). After this, when the user clicks on the image, we process a predefined quantity of pixels in the perpendicular direction, rather than this pixel alone. This is done to create strokes with width greater than one pixel, ideal for this kind of effect. Figure 4.7 shows the process step by step. The pseudocode follows:

function Interactive_Pencil_rendering(image){
    convert image to gray
    compute the energy image E from the gray image
    create paper with E
    C  = compute_integral_curve(mouseX, mouseY)
    PP = compute_perpendicular_path(mouseX, mouseY)
    for each location p in C and PP {
        paper_draw(p, pressure)
    }
}

As in (8), the parameter pressure is a value in the interval [0, 1] to model the pressure of the pencil on the paper. The paper_draw function is guided by the height input image (in this case our energy image) and a sampling function that perturbs the pressure locally and uniformly to imitate a hand-made effect.

The method above can be trivially generalized to color images to create a spray-like effect. Here we can choose to process all the colors per pixel, or to process each RGB channel separately. The latter will create an image with random colors uniformly distributed from the original image. See Figure 4.6.

Figure 4.6: LIC Spray-Like Effect - Left to right: processing all colors at once, and processing each RGB channel separately.
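The energy image that drives the paper model is just a gradient-magnitude proxy; a short sketch with NumPy:

import numpy as np

def energy_image(gray):
    # E(x, y) = |dI/dx| + |dI/dy| on a grayscale image, used as the height
    # field for the paper model.
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx) + np.abs(gy)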
Figure 4.7: LIC Pencil Effect Process - Images: original, LIC blur, grayscale, energy, interactive pencil rendering, and final result.
4.4 Painterly Rendering

To create an image with a hand-painted appearance from an input photograph, we used the algorithm described in (7). The method uses curved brush strokes of multiple sizes, guided automatically by the contours of the gradient image. We adapted the mentioned algorithm to create strokes in the directions of an input vector field. These strokes are computed as integral curves, like in LIC. The stroke length is controlled by the style's maximum stroke length, as in the original algorithm. A pseudocode of the stroke computation procedure follows. Figure 4.8 shows an example.

LIC Strokes Procedure:

function makeLICstroke(pixel p, R, reference_img){
    C = compute_integral_curve(p) with L = style max stroke length
    strokeColor = reference_img.color(p)
    K = new stroke with radius R, locations C, and color strokeColor
}

Figure 4.8: Painterly Rendering - Left: vector field image. Right: painterly rendering.

4.5 Conclusions and Future Work

As we can see from this whole monograph, the ideas of the Line Integral Convolution algorithm can be used not only to visualize high-density planar vector fields, but also to render non-realistic effects on arbitrary images by mixing already known NPR methods like painterly rendering and pencil drawing. The vector field design stage is fundamental in this approach, allowing the user to create a vector field to use as a guide for a given effect.

There are many ways to continue the work done here, by either improving our results or creating totally new algorithms and experiments. For instance, given that our interactive design is still slow, a GPU implementation for real-time design would enhance the experimentation process when creating new effects. Given that our painterly rendering algorithm is not interactive, it could be a good experiment to create a system to interactively paint strokes, similarly to the pencil and spray effects of section 4.3. Here some considerations regarding the layers of the original method need to be taken into account: when the user clicks, at which level of detail should that pixel be approximated?

Other future work could be a generalization to 3D spaces. For this, a tensor field design system would be more appropriate, as suggested by the literature (3), increasing the flexibility of the whole system and extending the range of visual effects. An interesting next step for our system could also be the use of a Tangible User Interface for the design and visualization of the results.

Out of the topic of this monograph, but still my main interest, flow and vector visualization could be considered as future work. Scientific visualization is a growing area that creates visual representations of complex scientific concepts to improve or discover new understandings from a set of data. However, at this time a direct application to the field of computer music is not clear. The challenge is to create a new visualization of a piece of music that could give us an alternative way to understand the basic sound components, or else an artistic visualization that could be used in computer music composition: given our new visualization of a particular song, can we create another image that has the same characteristics in order to create music from it? The suggestion is to make a connection between two art components: graphics and music. Thus, can we use the LIC ideas to create this new visualization? What information could we retrieve from a song that advects a 1D white noise, and can we visualize this advection? These are examples of questions that could guide future work on scientific visualization and computer music.
Appendix A: Implementation in C++

This appendix is included to expose the main algorithms presented throughout the chapters, making use of the CImg Library, an open-source C++ image processing toolkit created by David Tschumperlé at INRIA. For a better understanding, we will briefly introduce some of its characteristics before going into our procedures.

A.1 Getting Started with CImg

The CImg Library consists of a single header file, CImg.h, that contains all the C++ classes and methods. This implies, among other things, that only a couple of lines of code are needed to use it, namely

#include "lib_path/CImg.h"
using namespace cimg_library;

given that we have already downloaded the standard package from the website and placed it into lib_path. All the classes and functions are encapsulated in the cimg_library namespace, so it is a good idea to use the second line of code too (this is different from the cimg namespace, which implements functions with the same names as standard C/C++ functions; never use the cimg namespace by default!). The main classes of the CImg Library are CImg<T> for images, CImgList<T> for a list of
images, and CImgDisplay, which is like a canvas to display any image. The template parameter T specifies the type of the pixels; for example, a raster image with entries of type double is defined as CImg<double>. Possible values of T are float, double and unsigned char. As you may expect, displaying an image with CImg is as simple as with MATLAB. Here is the code to load and display an image called myimage.jpg, which is in the same directory as our code:

#include "lib_path/CImg.h"
using namespace cimg_library;

int main(){
    CImg<unsigned char>("myimage.jpg").display();
    return 0;  // needed by the compiler
}

To load an image into a variable I, we use CImg<unsigned char> I("myimage.jpg"). To display an already loaded image I, we use its display method: I.display(). The code above is a compact version of these two steps: loading and displaying. We could also load an image and display it on a CImgDisplay. This is useful for applications with interactivity, given that the CImgDisplay class allows us to define user callbacks like mouse clicks and keyboard inputs. The corresponding code using a CImgDisplay is next:

#include "lib_path/CImg.h"
using namespace cimg_library;

CImgDisplay main_disp;

int main(){
    main_disp.assign(CImg<unsigned char>("myimage.jpg"),
                     "my very first display!!");
    while (!main_disp.is_closed){
        main_disp.wait();
    }
    return 0;
}
39 A.1 Getting Started with CImg The assign method takes as first argument the image we want to display. We could re-assign any other loaded image at any moment. The second argument will be the title of the window. The while loop is necessary to tell the program to wait for user events. Here is where we should put the code to control user events. If this is ommited the program will run normally but will close after displaying the image which ocurs in a small fraction of time, so you will barely see the image!. Each CImgDisplay has its own parameters to control the user events which can be retrieved like any other field of the class with a point, some of them are:.mouse x or.mouse y to retrieve the integer coordinates (x, y) of a user click on the display,.key to retrieve keyboard inputs and.button which is of boolean type to indicate whether or not there was a click on the display. To control the different buttons of the mouse separately we use.button&1 for the left button and.button&2 for the right button. For more on this check the CImg documentation. Once we load an image on a variable, say I, we can retrieve the values of its pixel (x,y,z) like we were handling a matrix, that is: I(x,y,z,v). The parameter v refers to the type of image: v=1 for gray scale images, and v=3 for color images. CImg can handle 3D images, in our case for 2D images we will have always z=1 when declaring a 2D image and z=0 when retrieving pixel values. Remember that indices on C++ begins at 0, thus to retrieve the RGB components of a 2D image at pixel (10,10) we will have: I(10,10,0,0) for red, I(10,10,0,1) for green and I(10,10,0,2) for blue. Finally to retrieve any of the dimensions of I we can call.dimx() for the x dimension,.dimv() for the v dimension and so on. As an application the next function convert a color image to a grayscale image: CImg<unsigned char> to_gray(cimg<unsigned char> img){ if (img.dimv()==1) return img; //already gray CImg<unsigned char> gray(img.dimx(),img.dimy(),1,1); for (int x=1;x<img.dimx()-1;x++){ for (int y=1; y<img.dimy()-1;y++){ gray(x,y,0,0)=.2989*img(x,y,0,0)+.5870*img(x,y,0,1)+.1140*img(x,y,0,2); 31
40 A. IMPLEMENTATION IN C++ return(gray); Figure A.1: Color to gray conversion - Converting from color to grayscale using the CImg library. A.2 LIC in C++ using CImg Section 2.2 exposed a general pseudocode of LIC with a box kernel. Basically, the algorithm was composed of two functions: computing the integral curve, and computing the convolution. The next code is a mix of these two functions to perform LIC with a box kernel for an arbitrary input image using CImg. Our data type vector was defined to store the vector values (vx, vy) of the (x, y) position. The vector field function will be explained in the next section. LIC-Box kernel with CImg: CImg<double> LIC(CImg<double> img){ int n_chn=img.dimv(), n=img.dimx(), m=img.dimy(); CImg<double> OutputImg(n,m,1,n_chn); double u, v, Vi, Vj, x, y, sum, ds; int u1,v1,l; ds=1; L=10; for (int h=0;h<n_chn; h++){ for (int i=0;i<n;i++){ for (int j=0;j<m;j++){ vector V=vector_field(x,y); u=i; v=j; Vi=V.x; Vj=V.y; 32
                sum = 0;
                for (int s = 0; s <= L; s++) {
                    // positive direction: step along the integral curve and
                    // accumulate img values into sum (body elided in the source)
                }
                u = i; v = j; V.x = Vi; V.y = Vj;
                for (int s = 0; s >= -L; s--) {
                    // negative direction: same accumulation
                    // (body elided in the source)
                }
                OutputImg(i,j,0,h) = sum/(2*L+1);
            }
        }
    }
    return OutputImg;
}

A.3 A Basic Vector Field Design System using CImg

We can use CImg to create a system that handles the basic vector field design ideas of section 3.1. For this we use the classes vector and singular_point to store vector components and singular points with their respective fields: parameters, type (Jacobian) and position. The code looks like this:

typedef struct { double x, y; } vector;

class singular_point {
public:
    vector pos;    // position
    vector W1, W2; // rows of the Jacobian matrix
    double d, k;   // parameters
public:
    singular_point(vector poss, vector W11, vector W22, double k1, double d1) {
        pos = poss; W1 = W11; W2 = W22; k = k1; d = d1;
    }
    singular_point() {} // default constructor
}; // end of class singular_point

We store all the singularities in a simple array LIST_OF_SING, with a global variable length to control its length. The system begins by loading an image stored globally as image and waiting for user events. A click on the display adds a singularity at that position. The type of the singularity to add is controlled by the global variables V1 and V2, which correspond to the rows of the Jacobian of the current singularity. These variables are initialized for a sink by default, and can be modified with the keyboard: S for a sink, O for a source, D for a saddle, C for a clockwise center and W for a counter-clockwise center. The main function is next:

int main() {
    load_img();
    while (!main_disp.is_closed) {
        main_disp.wait();
        if (main_disp.button && main_disp.mouse_x >= 0 && main_disp.mouse_y >= 0) {
            int u0 = main_disp.mouse_x, v0 = main_disp.mouse_y;
            vector pos; pos.x = u0; pos.y = v0;
            singular_point s(pos, V1, V2, k, d);
            length++;
            LIST_OF_SING[length-1] = s;
            cout << "calculating LIC...\n";
            image = LIC(image);
            cout << "done.\n";
            image.display(main_disp);
        }
        if (main_disp.key) {
            switch (main_disp.key) {
                case cimg::keyQ: exit(0); break;
                case cimg::keyS: V1.x = -k; V1.y = 0.0; V2.x = 0.0; V2.y = -k; break;
                case cimg::keyO: V1.x = k;  V1.y = 0.0; V2.x = 0.0; V2.y = k;  break;
                case cimg::keyD: V1.x = k;   V1.y = 0.0; V2.x = 0.0; V2.y = -k; break;
                case cimg::keyC: V1.x = 0.0; V1.y = -k;  V2.x = k;   V2.y = 0.0; break;
                case cimg::keyW: V1.x = 0.0; V1.y = k;   V2.x = -k;  V2.y = 0.0; break;
            }
        }
    } // end while
    return 0;
}

The additional function load_img is used to load the image and initialize the variables:

void load_img(void) {
    char name[50];
    cout << "please enter the image name (ex: dog.jpg):\n";
    cin >> name;
    CImg<double> im(name);
    image = im;
    main_disp.assign(image, "Basic Vector Field Design");
    d = 0.0001; k = 0.5;                          // by default
    V1.x = -k; V1.y = 0.0; V2.x = 0.0; V2.y = -k; // sink by default
    length = 0;                                   // no singular points so far
}

Finally, the actual normalized vector field computation for a point (x, y) is done as a sum of the vector field influences of each singularity; see section 3.1. Here is the code:

vector vector_field(double x, double y) {
    vector OUT;
    double d, k, x0, y0;
    vector U1, U2;
    OUT.x = OUT.y = 0.0;
    for (int i = 0; i < length; i++) {
        double t;
        d = LIST_OF_SING[i].d; k = LIST_OF_SING[i].k;
        U1 = LIST_OF_SING[i].W1; U2 = LIST_OF_SING[i].W2;
        x0 = LIST_OF_SING[i].pos.x; y0 = LIST_OF_SING[i].pos.y;
        t = exp(-d*((x-x0)*(x-x0) + (y-y0)*(y-y0)));
        OUT.x = OUT.x + t*(U1.x*k*(x-x0) + U1.y*k*(y-y0));
        OUT.y = OUT.y + t*(U2.x*k*(x-x0) + U2.y*k*(y-y0));
    }
    double NV = sqrt(OUT.x*OUT.x + OUT.y*OUT.y);
    if (NV != 0) { OUT.x = OUT.x/NV; OUT.y = OUT.y/NV; }
    else         { OUT.x = OUT.y = 0; }
    return OUT;
}

The figure below shows an example of this system.

Figure A.2: Basic Vector Field Design example using CImg - A simple combination of a sink, a saddle and a center.

A.4 LIC effects in C++

This section concludes the C++ implementation of the LIC effects presented in chapter 4. We already saw the blur-warp effect in section A.2, and the silhouette algorithm follows straightforwardly from that code and the observations of section 4.2. We will proceed then with the pencil effect and painterly rendering.

A.4.1 LIC Pencil Effect

The main class involved in the pencil effect is of course the Paper class, which stores the height image (in our case the energy image) and the initial white canvas. The drawing
function is called interactively with a certain pressure perturbed by a sampling function. The final value of a pixel p on the canvas depends linearly on this perturbed pressure and the intensity of the pixel p in the height image. For all our results prs=0.005:

class Paper {
public:
    CImg<double> height_img;
    CImg<double> canvas;
public:
    Paper(int resx, int resy, CImg<double> H); // constructor
    void draw(int coordx, int coordy, double prs);
    double sampling(double pres, int res = 10);
};

Paper::Paper(int resx, int resy, CImg<double> H) {
    this->height_img = H;
    CImg<unsigned char> grain(resx, resy, 1, 1, 1);
    this->canvas = grain;
}

void Paper::draw(int coordx, int coordy, double prs) {
    coordx = (coordx > 0) ? coordx : 0;
    coordy = (coordy > 0) ? coordy : 0;
    double d, h, g;
    h = this->height_img(coordx, coordy, 0);
    h *= 0.65;
    g = this->sampling(prs);
    d = h + g;
    canvas(coordx, coordy, 0) -= d;
    canvas(coordx, coordy, 0) = (canvas(coordx, coordy, 0) < 0) ? 0.0 : canvas(coordx, coordy, 0);
}

double Paper::sampling(double pres, int res) {
    int aux = 0;
    for (int i = 0; i < res; ++i) {
        double p = (double)std::rand()/(double)RAND_MAX;
        if (p < pres) ++aux;
    }
    return (double)aux/(double)res;
}

Next is the pencil effect main procedure using the Paper class above and the CImg library. For this effect we set the length of the integral curves to L = 100 and process lgd = 10 pixels in the perpendicular direction (see section 4.3).

void pencil_effect(CImg<double> original) {
    CImg<double> Height = to_gray(original);
    Height = invert_colors(Height);
    Height = energy(Height);
    Height = normalize_0_1(Height);
    int n = original.dimx(), m = original.dimy();
    Paper paper(n, m, Height);
    CImgDisplay main_disp(Height, "Pencil Effect");
    const int lgd = 10;
    while (!main_disp.is_closed) {
        main_disp.wait();
        if (main_disp.mouse_x >= 0 && main_disp.mouse_y >= 0) {
            int startx = main_disp.mouse_x, starty = main_disp.mouse_y;
            vector V = vector_field(startx, starty);
            vector Vppd; Vppd.x = -V.y; Vppd.y = V.x;
            for (int i = 0; i < lgd; i++) {
                double u = startx + i*Vppd.x, v = starty + i*Vppd.y;
                double u0 = u, v0 = v;
                vector Vstart = V = vector_field(u, v);
                for (int s = 0; s < 100; s++) {
                    // positive direction: step along the field and call
                    // paper.draw at each step (body elided in the source)
                }
                u = u0; v = v0; V = Vstart;
                for (int s = 0; s < 100; s++) {
                    // negative direction: same stepping
                    // (body elided in the source)
                }
            }
            paper.canvas.display(main_disp); // show the updated canvas
        }
    } // end while
}

A.4.2 Painterly Rendering

We implemented the algorithm of section 2.1 in (7), replacing the make_stroke procedure with our new make_lic_stroke to paint strokes in the direction of the input vector field; see section 4.4. Next is the code:

stroke make_lic_stroke(int x0, int y0, double R) {
    int n = image.dimx(), m = image.dimy();
    int r = floor(ref_img(x0, y0, 0, 0));
    int g = floor(ref_img(x0, y0, 0, 1));
    int b = floor(ref_img(x0, y0, 0, 2));
    int* strokecolor = new int[3];
    strokecolor[0] = r; strokecolor[1] = g; strokecolor[2] = b;
    stroke K = stroke(R, strokecolor);
    point p, q;
    vector V;
    double ds = 1.0;
    p.x = q.x = x0; p.y = q.y = y0;
    K.pts[K.lgth] = p; K.lgth++;
    vector float_pt, float_qt;
    float_pt.x = p.x; float_pt.y = p.y;
    float_qt.x = p.x; float_qt.y = p.y;
    V = vector_field(x0, y0);
    for (int so = 0; so < R + sty.maxlgth*0.3; so++) {
        float_pt.x += ds*V.x; float_pt.y += ds*V.y;
        p.x = (int)float_pt.x; p.y = (int)float_pt.y;
        float_qt.x -= ds*V.x; float_qt.y -= ds*V.y;
        q.x = (int)float_qt.x; q.y = (int)float_qt.y;
        if (!(p.x < 0 || p.x > n || p.y < 0 || p.y > m)) {
            K.pts[K.lgth] = p; K.lgth++;
            V = vector_field(float_pt.x, float_pt.y);
        }
        if (!(q.x < 0 || q.x > n || q.y < 0 || q.y > m)) {
            K.pts[K.lgth] = q; K.lgth++;
            V = vector_field(float_qt.x, float_qt.y);
        }
    } // end for
    return K;
}
Appendix B
Gallery

Here are some of our results. All the images in full color and the source code can be found on the website rdcastan/visualization.

Figure B.1: Pigeon Point Lighthouse, California - From top to bottom and left to right: Original, warp, LIC on a spray image with each RGB channel processed separately, and painterly.

Figure B.2: Plymouth Hoe, England - From top and left to right: Original, vector field visualization, spray and LIC of the spray image.

Figure B.3: Landscape - Painterly rendering.

Figure B.4: Girl Playing Guitar - LIC silhouette effect.

Figure B.5: Shoes in the Grass - LIC pencil effect.

Figure B.6: Cows - LIC after spray effect.

Figure B.7: Garden IMS, Rio de Janeiro - Painterly rendering. Garden of the Instituto Moreira Salles.

Figure B.8: Live Performance - From top to bottom: Original (Anita Robinson from Viva Voce), painterly and LIC pencil.

Figure B.9: Live Performance Continued... - From top to bottom: LIC silhouette, spray on each RGB channel and simple spray.

Figure B.10: Live Performance Continued... - From top to bottom: Dithered and LIC on dithered.
|
http://docplayer.net/884280-Using-line-integral-convolution-to-render-effects-on-images.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
I am using the Promoted Build plugin along with some custom Groovy scripts to validate the build. I wanted to access the value of BUILD_NUMBER and print it with println.
If it's at runtime, you can use:

def env = System.getenv()

// Print all the environment variables.
env.each { println it }

// You can also access a specific variable, say 'username', as shown below:
String user = env['USERNAME']
If it's in a system Groovy script, you can use:

// Get the current thread / executor and the current build
def thr = Thread.currentThread()
def build = thr?.executable

// Get from environment
def stashServer = build.parent.builds[0].properties.get("envVars").find { key, value -> key == 'ANY_ENVIRONMENT_PARAMETER' }

// Get from job params
def jobParam = "jobParamName"
def resolver = build.buildVariableResolver
def jobParamValue = resolver.resolve(jobParam)
Any println sends output to the standard output stream, so try looking at the console log. Good luck!
|
https://codedump.io/share/WlvCilL9vh33/1/access-buildnumber-in-jenkins-promoted-build-plugin-scripts
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
10.21. Functions that Produce Lists
The pure version of doubleStuff above made use of an important pattern for your toolbox. Whenever you need to write a function that creates and returns a list, the pattern is usually:

initialize a result variable to be an empty list
loop
    create a new element
    append it to result
return the result
Let us show another use of this pattern. Assume you already have a function is_prime(x) that can test if x is prime. Now, write a function to return a list of all prime numbers less than n:

def primes_upto(n):
    """ Return a list of all prime numbers less than n. """
    result = []
    for i in range(2, n):
        if is_prime(i):
            result.append(i)
    return result
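The text assumes is_prime(x) already exists and does not define it; for readers who want a runnable example, here is one minimal trial-division sketch (purely illustrative, not the book's implementation):

def is_prime(x):
    """Return True if x is prime, testing divisors up to sqrt(x)."""
    if x < 2:
        return False
    i = 2
    while i * i <= x:
        if x % i == 0:
            return False
        i += 1
    return True

print(primes_upto(20))   # [2, 3, 5, 7, 11, 13, 17, 19]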
|
http://interactivepython.org/runestone/static/thinkcspy/Lists/FunctionsthatProduceLists.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
A few days ago I started a new project that I am provisionally calling Parrot# (“Parrot-Sharp”). This new project provides bindings for Parrot in C# or other .NET code using Parrot’s new embedding API.
A while back I showed an example on this blog of a very short toy program, written in C#, which embedded Parrot and printed out a short “hello world” message. I knew, after having done that example, that I would be able to get this new project working without too much trouble. The big saving grace here is that the API works almost exclusively on PMC, STRING, and simple types, which makes wrapping the function calls very easy. I wrap the low-level pointers up in custom C# proxy types that include calls to the native functions in libparrot.
Tonight, I have an example program that runs. Here’s the C# code of the test executable:
using System;

namespace ParrotTest {
    class MainClass {
        public static void Main (string[] args) {
            string exename = AppDomain.CurrentDomain.FriendlyName;
            if (args.Length <= 0) {
                Console.WriteLine("No PBC file specified");
                return;
            }
            string pbcfile = args[0];
            string[] pbcargs = new string[args.Length - 1];
            for (int i = 1; i < args.Length; i++)
                pbcargs[i - 1] = args[i];
            Parrot.Parrot parrot = new Parrot.Parrot(exename);
            Parrot.Parrot_PMC pbc = parrot.LoadBytecodeFile(pbcfile);
            Parrot.Parrot_PMC mainargs = parrot.PmcNull;
            parrot.RunBytecode(pbc, mainargs);
        }
    }
}
It’s a simple wrapper program that runs a PBC file. I write a simple PIR file:
$> cat test.pir
.sub main :main
    say "Hello from a PIR file!"
.end
Now, I compile it:
$> parrot -o test.pbc test.pir
…And run it with my new program:

$> ./ParrotSharp.exe test.pbc
Hello from a PIR file!
I actually have a lot more functionality written than what is exercised here, but I'm having a few problems with MonoDevelop and it's not finding some of the classes and methods I have written. Once I get some of these things working, it will be a lot more functional.
I’ll write more about this project as it matures, but it is currently functional enough to execute Parrot bytecode, and I feel like that is a milestone worth reporting. I also have exceptions working more or less properly (I’ve even used my implementation of exceptions here to find a bug in the embed_api2 branch), and a few other things.
In order to completely replace the Parrot executable I would have to implement wrappers for IMCC, and frankly I just don’t want to do that. For now the ParrotSharp library is going to provide all other functionality for working with PMCs, STRINGs, and bytecode. Eventually I’m going to put together some unit tests too.
Parrot’s new embedding API has its first consumer, and I’m excited to see what other kinds of things people can do with it.
|
http://whiteknight.github.io/2010/12/14/introducing_parrot_sharp.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Tekton Compiler for Kubeflow Pipelines
Kubeflow Pipelines SDK for Tekton
The Kubeflow Pipelines SDK allows data scientists to define end-to-end machine learning and data pipelines. The output of the Kubeflow Pipelines SDK compiler is YAML for Argo.
The kfp-tekton SDK extends the Compiler and the Client of the Kubeflow Pipelines SDK to generate Tekton YAML and to subsequently upload and run the pipeline with the Kubeflow Pipelines engine backed by Tekton.
Table of Contents
- SDK Packages Overview
- Project Prerequisites
- Installation
- Compiling a Kubeflow Pipelines DSL Script
- Big data passing workspace configuration
- Running the Compiled Pipeline on a Tekton Cluster
- List of Available Features
- List of Helper Functions for Python Kubernetes Client
- Tested Pipelines
- Troubleshooting
SDK Packages Overview
The kfp-tekton SDK is an extension to the Kubeflow Pipelines SDK adding the TektonCompiler and the TektonClient:

- kfp_tekton.compiler includes classes and methods for compiling pipeline Python DSL into a Tekton PipelineRun YAML spec. The methods in this package include, but are not limited to, the following:
  - kfp_tekton.compiler.TektonCompiler
- kfp_tekton.TektonClient contains the Python client libraries for the Kubeflow Pipelines API. Methods in this package include, but are not limited to, the following:
  - kfp_tekton.TektonClient.upload_pipeline uploads a local file to create a new pipeline in Kubeflow Pipelines.
  - kfp_tekton.TektonClient.create_experiment creates a pipeline experiment and returns an experiment object.
  - kfp_tekton.TektonClient.run_pipeline runs a pipeline and returns a run object.
  - kfp_tekton.TektonClient.create_run_from_pipeline_func compiles a pipeline function and submits it for execution on Kubeflow Pipelines.
  - kfp_tekton.TektonClient.create_run_from_pipeline_package runs a local pipeline package on Kubeflow Pipelines.
Project Prerequisites
- Python: 3.7 or later
- Tekton: v0.36.0 or later
- Tekton CLI: 0.23.1
- Kubeflow Pipelines: KFP with Tekton backend
Follow the instructions for installing project prerequisites and take note of some important caveats.
Installation
You can install the latest release of the kfp-tekton compiler from PyPI. We recommend creating a Python virtual environment first:

python3 -m venv .venv
source .venv/bin/activate
pip install kfp-tekton
Alternatively you can install the latest version of the kfp-tekton compiler from source by cloning the repository:

Clone the kfp-tekton repo:

git clone
cd kfp-tekton

Setup a Python environment with Conda or a Python virtual environment:

python3 -m venv .venv
source .venv/bin/activate

Build the compiler:

pip install -e sdk/python

Run the compiler tests (optional):

pip install pytest
make test
Compiling a Kubeflow Pipelines DSL Script
The kfp-tekton Python package comes with the dsl-compile-tekton command line executable, which should be available in your terminal shell environment after installing the kfp-tekton Python package.

If you cloned the kfp-tekton project, you can find example pipelines in the samples folder or under the sdk/python/tests/compiler/testdata folder.

dsl-compile-tekton \
    --py sdk/python/tests/compiler/testdata/parallel_join.py \
    --output pipeline.yaml
Note: If the KFP DSL script contains a __main__ method calling the kfp_tekton.compiler.TektonCompiler.compile() function:

if __name__ == "__main__":
    from kfp_tekton.compiler import TektonCompiler
    TektonCompiler().compile(pipeline_func, "pipeline.yaml")

... then the pipeline can be compiled by running the DSL script with the python3 executable from a command line shell, producing a Tekton YAML file pipeline.yaml in the same directory:

python3 pipeline.py
Big data passing workspace configuration
When big data files are defined in KFP, Tekton will create a workspace to share these big data files among tasks that run in the same pipeline. By default, the workspace is a ReadWriteMany PVC with 2Gi of storage using the kfp-csi-s3 storage class to push artifacts to S3, but you can change this configuration using the environment variables below:

export DEFAULT_ACCESSMODES=ReadWriteMany
export DEFAULT_STORAGE_SIZE=2Gi
export DEFAULT_STORAGE_CLASS=kfp-csi-s3

To pass big data using cloud provider volumes, it's recommended to use the volume_based_data_passing_method for both the Tekton and Argo runtimes.
Running the Compiled Pipeline on a Tekton Cluster
After compiling the sdk/python/tests/compiler/testdata/parallel_join.py DSL script in the step above, we need to deploy the generated Tekton YAML to the Kubeflow Pipelines engine.

You can run the pipeline directly using a pre-compiled file and the KFP-Tekton SDK. For more details, please look at the SDK documentation in the KFP-Tekton user guide. Note that the client must be instantiated first (KFP_HOST below stands in for your Kubeflow Pipelines endpoint):

client = kfp_tekton.TektonClient(host=KFP_HOST)
experiment = client.create_experiment(name=EXPERIMENT_NAME, namespace=KUBEFLOW_PROFILE_NAME)
run = client.run_pipeline(experiment.id, 'parallel-join-pipeline', 'pipeline.yaml')

You can also deploy directly on a Tekton cluster with kubectl. The Tekton server will automatically start a pipeline run. We can then follow the logs using the tkn CLI.

kubectl apply -f pipeline.yaml
tkn pipelinerun logs --last --follow

Once the Tekton Pipeline is running, the logs should start streaming:

Waiting for logs to be available...
[gcs-download : main] With which he yoketh your rebellious necks Razeth your cities and subverts your towns And in a moment makes them desolate
[gcs-download-2 : main] I find thou art no less than fame hath bruited And more than may be gatherd by thy shape Let my presumption not provoke thy wrath
[echo : main] Text 1: With which he yoketh your rebellious necks Razeth your cities and subverts your towns And in a moment makes them desolate
[echo : main]
[echo : main] Text 2: I find thou art no less than fame hath bruited And more than may be gatherd by thy shape Let my presumption not provoke thy wrath
[echo : main]
List of Available Features
To understand how each feature is implemented and its current status, please visit the FEATURES doc.
List of Helper Functions for Python Kubernetes Client
KFP Tekton provides a list of common Kubernetes client helper functions to simplify the process of creating certain Kubernetes resources. Please visit the K8S_CLIENT_HELPER doc for more details.
Tested Pipelines
We are testing the compiler on more than 80 pipelines found in the Kubeflow Pipelines repository, specifically the pipelines in the KFP compiler testdata folder, the KFP core samples and the samples contributed by third parties.

A report card of Kubeflow Pipelines samples that are currently supported by the kfp-tekton compiler can be found here.

If you work on a PR that enables another of the missing features, please ensure that your code changes are improving the number of successfully compiled KFP pipeline samples.
Troubleshooting
- When you encounter ServiceAccount related permission issues, refer to the "Service Account and RBAC" doc.
- If you run into the error bad interpreter: No such file or directory when trying to use Python's venv, remove the current virtual environment in the .venv directory and create a new one using virtualenv .venv.
|
https://pypi.org/project/kfp-tekton/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Developers & Practitioners
Our I/O 2022 announcements: In demo form
In the Cloud PA Keynote at I/O, Aparna Sinha walked through the backend for an application that connects volunteers with volunteer opportunities in their area. In this blog post we'll walk through each component of that application in a bit more detail, explaining the new products that Google Cloud has released, the pros and cons of the architecture we chose, and other nerdy technical details we didn't have time for in the talk.
But first, some architecture diagrams. The application we discussed in the keynote helps connect volunteers with opportunities to help. In the keynote we highlighted two features of the backend for this application: the comment processor and the geographical volunteer-to-opportunity matching functionality.
The text processing feature takes free form feedback from users and uses ML and data analytics tools to route the feedback to the team that can best address that feedback. Here's the architecture diagram for that backend.
The "opportunities near me" feature allows us to help users find volunteer opportunities near a given location. Here's the architecture diagram for that feature.
Text Feedback Processing
Let's start by diving into the text processing pipeline.
The text feedback processing engine runs on a machine learning model, more specifically a text classifier (a task in the Natural Language Processing area). As in many machine learning scenarios, the first step was to collect users' feedback and synthesize a dataset pairing each piece of feedback with a label that assigns it to a category. Here, "feedback", "billing_issue" and "bug" were used as the possible categories (an illustrative sketch of this step follows the sample rows below). By the end of this dataset creation step the dataset structure looked like:
user review | category
<...>
Too much spam. Stuff that I don't care for pops up on my screen all the time | feedback
It works okay But I did not consent to subscribing at $28/year subscription | billing_issue
I have bought it yet it displays ERROR IN VERIFYING MY ACCOUNT | bug
<...>
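As an illustration only (this snippet is not from the original post, and the file name is made up), assembling such a CSV with pandas for the Vertex AI import step could look like this:

import pandas as pd

# Hypothetical sample rows; real data would come from the collected feedback.
rows = [
    ("Too much spam. Stuff that I don't care for pops up on my screen all the time", "feedback"),
    ("It works okay But I did not consent to subscribing at $28/year subscription", "billing_issue"),
    ("I have bought it yet it displays ERROR IN VERIFYING MY ACCOUNT", "bug"),
]

df = pd.DataFrame(rows, columns=["user_review", "category"])

# Vertex AI text-classification CSV imports expect plain content,label rows,
# so write the file without a header or index column.
df.to_csv("feedback_dataset.csv", index=False, header=False)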
Having this dataset ready, it was imported into Vertex AI datasets - for details on how to create a text dataset on Vertex AI, take a look at this guide. The imported dataset could then be seen on the Vertex AI datasets page, including the available feedback categories and the number of samples for each category inside the dataset:

The next step, once the dataset was ready, was to use Google AutoML to create the text classification model. AutoML allows us to train a model with no code, in just a few simple steps that can be started directly from the Vertex AI dataset page.
We followed AutoML's default suggestions, including using the default values for how to split the dataset: 80% for training, 10% for validation, and 10% for testing. AutoML did all the model training and optimization automatically and notified us by email when the training was complete.
When training was complete, we double checked the model in the Vertex AI console to make sure everything looked good.
To enable other members of our team to use this model, we deployed it as a Vertex AI endpoint. The endpoint exposes the model via a REST API which made it simple to use for the members of our team that aren't experts in AI/ML.
Once it is deployed, it is ready to be used by following the directions from Get online predictions from AutoML models.
Once we had our model we could hook up the entire pipeline. Text feedback is stored in the Firebase Realtime Database. To do advanced analytics on this data, we wanted to move it to BigQuery. Luckily, Firebase provides an easy, code free, way to do that, the Stream Collections to BigQuery extension. Once we had that installed I was able to see the text feedback data in BigQuery in real time.
We wanted to classify this data directly from BigQuery. To do this, we built out a Cloud Function to call the Vertex AI endpoint we had just created and used BigQuery’s remote function feature. This Vertex AI endpoint contains a deployed model we previously trained to classify user feedback using AutoML Natural Language Processing.
We deployed the Cloud Function and then created a remote UDF definition on BigQuery, allowing us to call the Cloud Function from BigQuery without having to move the data out of BigQuery or using additional tools. The results were then sent back to BigQuery where it was displayed in the query result with the feedback data categorized.
# Imports assumed by this snippet (not shown in the original post):
from google.cloud import aiplatform
from google.cloud.aiplatform.gapic.schema import predict
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value

def predict_classification(calls):
    # Vertex AI endpoint details (client_options, project, location and
    # endpoint_id are configured elsewhere in the function's setup)
    client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)
    endpoint = client.endpoint_path(
        project=project, location=location, endpoint=endpoint_id
    )

    # Call the endpoint for each input row
    for call in calls:
        content = call[0]
        instance = predict.instance.TextClassificationPredictionInstance(
            content=content,
        ).to_value()
        instances = [instance]
        parameters_dict = {}
        parameters = json_format.ParseDict(parameters_dict, Value())
        response = client.predict(
            endpoint=endpoint, instances=instances, parameters=parameters
        )
Once the feedback data is categorized using our ML model, we can then route the feedback to the correct people. We used Cloud Run Jobs for this, since it is designed for background tasks like this one. Here's the code for a job that reads from BigQuery and creates a GitHub issue for each piece of feedback labeled "bug report".
# Imports assumed by this snippet (not shown in the original post):
import requests
from google.cloud import bigquery

def create_issue(body, timestamp):
    title = f"User Report: {body}"
    response = requests.post(
        f"{GITHUB_REPO}/issues",
        json={"title": title,
              "body": f"Report Text: {body} \n Timestamp: {timestamp}",
              "labels": ["Mobile Bug Report", "bug"]},
        headers={
            "Authorization": f"token {GITHUB_TOKEN}",
            "Accept": "application/vnd.github.v3+json"
        }
    )
    response.raise_for_status()

bq = bigquery.client.Client()
table = bq.get_table(TABLE_NAME)
sql = f"""SELECT timestamp, raw_text
FROM `io-2022-keynote-demo.mobile_feedback.tagged_feedback`
WHERE category="bug report"
"""
query = bq.query(sql)
for row in query.result():
    issue_body = row.get("raw_text")
    issue_timestamp = row.get("timestamp")
    create_issue(issue_body, issue_timestamp)
To handle secrets, like our GitHub token, we used Secret Manager, and then we loaded the secrets into variables with code like this:

# Import assumed by this snippet (not shown in the original post):
from google.cloud import secretmanager

SECRET_NAME = "github-token"
SECRET_ID = f"projects/{PROJECT_NUMBER}/secrets/{SECRET_NAME}/versions/2"
GITHUB_TOKEN = secretmanager.SecretManagerServiceClient().access_secret_version(name=SECRET_ID).payload.data.decode()
Hooking up to CRM or a support ticket database is similar and lets us channel any support requests or pricing issues to the customer success team. We can schedule the jobs to run when we want and as often as we want using Cloud Scheduler. Since we didn't want to constantly create new bugs, we've set the job creating GitHub issues to run once a day using this configuration in cron notation:
"0 1 * * *".
Opportunities Near A Location
The second feature we showed in the Cloud Keynote would allow users to see opportunities near a specific location. To do this we utilized the GIS features built into Postgres, so we used Cloud SQL for PostgreSQL. To query the Postgres database we used a Cloud Run service that our mobile app called as needed.
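As a rough illustration of the kind of geographic query involved (this is not code from the original post; the table and column names are invented, and it assumes the PostGIS extension is enabled on the instance), the Cloud Run service might run something like this:

import psycopg2

def opportunities_near(lat, lng, radius_m=5000):
    """Hypothetical lookup: volunteer opportunities within radius_m meters."""
    conn = psycopg2.connect(host="127.0.0.1", dbname="volunteer",
                            user="app", password="...")
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, name
            FROM opportunities
            WHERE ST_DWithin(
                location::geography,
                ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                %s
            )
            """,
            (lng, lat, radius_m),  # PostGIS points are (longitude, latitude)
        )
        return cur.fetchall()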
At a certain point we outgrew the PostgreSQL on Cloud SQL solution, as it was too slow. We tried limiting the number of responses we returned, but that wasn't a great user experience. We needed something that was able to handle a large amount of GIS data in near real time.
AlloyDB excels in situations like this where you need high throughput and real-time performance on large amounts of data. Luckily, since AlloyDB is Postgres compatible, it is a drop-in replacement in our Cloud Run service; we simply needed to migrate the data from Cloud SQL and change the connection string our Cloud Run service was using.
Conclusion
So that's a deeper dive into one of our I/O demos and the products Google Cloud launched at Google I/O this year. Please come visit us in Adventure and check out the codelabs and technical sessions.
|
https://cloud.google.com/blog/topics/developers-practitioners/our-io-2022-announcements-demo-form
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
changeset: 2727:5a3018702f8b
tag: tip
user: Kris Maglione <kris_AT_suckless.org>
date: Tue Jun 15 12:21:35 2010 -0400
files: alternative_wmiircs/python/pygmi/fs.py
description:
[pygmi] Make sure Ctl#ctl strings are unicode before joining them. Fixes issue #194.
diff -r 96ef87fb9d23 -r 5a3018702f8b alternative_wmiircs/python/pygmi/fs.py
--- a/alternative_wmiircs/python/pygmi/fs.py Mon Jun 14 10:46:46 2010 -0400
+++ b/alternative_wmiircs/python/pygmi/fs.py Tue Jun 15 12:21:35 2010 -0400
@@ -65,7 +65,7 @@
"""
Arguments are joined by ascii spaces and written to the ctl file.
"""
- client.awrite(self.ctl_path, ' '.join(args))
+ client.awrite(self.ctl_path, u' '.join(map(unicode, args)))
def __getitem__(self, key):
for line in self.ctl_lines():
Received on Tue Jun 15 2010 - 16:21:43 UTC
This archive was generated by hypermail 2.2.0 : Tue Jun 15 2010 - 16:24:04 UTC
|
https://lists.suckless.org/hackers/1006/2820.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Read Google Spreadsheet data into Pandas Dataframe
It often happens that our data is stored on Google Drive, and to analyze that data we have to export it as csv or xlsx and store it on disk before converting it into a dataframe.

To overcome this problem of exporting and loading the data into a Pandas Dataframe, I am going to show how you can read the data from a Google Sheet directly into a Pandas Dataframe.

For this exercise I am going to use the UCI Wine Data Set: source:

import pandas as pd
Ensure that the spreadsheet containing the data is opened in Google Sheets:
Copy the URL from the Address Bar:
google_sheet_url = ''

Replace the "edit#gid" text in the google_sheet_url variable above with "export?format=csv&gid", so your new google_sheet_url should look like this:

new_google_sheet_url = ''
import pandas as pd

Use the Pandas read_csv function to read the Wine Quality data spreadsheet:

df = pd.read_csv(new_google_sheet_url)

Voila!! The data is converted into a Dataframe without downloading the csv file.

df.head()
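The same edit-URL-to-export-URL rewrite can also be done in code. This small helper is only an illustration (it is not from the original post, and the function name is made up):

import pandas as pd

def read_google_sheet(edit_url):
    """Read a link-shareable Google Sheet 'edit' URL into a DataFrame."""
    csv_url = edit_url.replace("edit#gid", "export?format=csv&gid")
    return pd.read_csv(csv_url)

# Usage, assuming a sheet URL ending in .../edit#gid=0:
# df = read_google_sheet("https://docs.google.com/spreadsheets/d/<sheet-id>/edit#gid=0")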
|
https://kanoki.org/2018/12/25/read-google-spreadsheet-data-into-pandas-dataframe/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
SCD30 CO₂ sensor Python driver
SCD30 CO₂ sensor I²C driver in Python 3
Status: initial release
The SCD30 is a high-precision CO2 sensor based on NDIR spectroscopy. The sensor module also includes an SHT31 temperature and humidity sensor onboard (see description of the PCB layout).
Overview
This library provides a Python interface to the main I²C-level commands supported by the SCD30 as listed in the interface description.
The primary intended use case is driving the sensor directly from a Raspberry Pi using hardware I²C. However, the code may be adapted for use with other devices supporting the protocol and/or software I²C.
Installation
The library is available for download from the Python Package Index (tested with Python 3.7.3):
python3 -m pip install scd30_i2c
System setup
The library was developed using a Raspberry Pi 4B (8GB RAM) running Raspberry Pi OS Buster. For more details about the chip, see the BCM2711 datasheet.
Wiring
The Raspberry Pi can drive the SCD30 module via its hardware I²C interface directly without any additional components:
¹ To select I²C mode, the SEL pin should be left floating or connected to ground. This forum post suggests grounding the pin may be the more reliable option.
Note the sequential order of the power, ground, and I²C pins on the SCD30 may be different from other popular sensor breakouts. For instance, the Pimoroni breakouts use (3V3, SDA, SCL, INT, GND).
For more details, see the Raspberry Pi I2C pinout.
Software configuration and I²C clock stretching
The SCD30 supports a maximal I²C speed of 100kHz (the default of the Pi 4B).
It also requires the I²C bus to support clock stretching of up to 150ms. By default, the bcm2835-i2c driver, which is still used by the 4B (BCM2711), hard-codes the timeout to 35ms regardless of the speed. This does not seem to matter for one-off readings, however it may interfere with long-term stability and particularly the automatic self-calibration feature.
As a workaround, the rpi-i2c binary utility provides means to manipulate the relevant I2C controller registers directly.
Usage
Contrary to other sensors that provide one-off readings, the SCD30 is designed to run continuously. Upon activation, periodic measurements are stored in a buffer. A "data ready status" command is provided to check whether a reading is available.
Sample code
The following example code will begin periodic measurements at a two-second interval and print the readings:
import time  # needed for time.sleep(); omitted in the original snippet

from scd30_i2c import SCD30

scd30 = SCD30()

scd30.set_measurement_interval(2)
scd30.start_periodic_measurement()

time.sleep(2)

while True:
    if scd30.get_data_ready():
        m = scd30.read_measurement()
        if m is not None:
            print(f"CO2: {m[0]:.2f}ppm, temp: {m[1]:.2f}'C, rh: {m[2]:.2f}%")
        time.sleep(2)
    else:
        time.sleep(0.2)
Note that this minimal example script will NOT issue a stop command upon termination and the sensor will continue taking periodic measurements unless powered off. This may or may not be appropriate depending on the use case.
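If stopping the sensor on exit is desired, one option is to wrap the loop so a stop command is issued on termination. A minimal sketch, assuming the library exposes a stop_periodic_measurement() method matching the SCD30 interface command (treat that method name as an assumption):

import time

from scd30_i2c import SCD30

scd30 = SCD30()
scd30.set_measurement_interval(2)
scd30.start_periodic_measurement()

try:
    while True:
        if scd30.get_data_ready():
            m = scd30.read_measurement()
            if m is not None:
                print(f"CO2: {m[0]:.2f}ppm")
            time.sleep(2)
        else:
            time.sleep(0.2)
except KeyboardInterrupt:
    pass
finally:
    # Halt periodic measurements when the script terminates.
    scd30.stop_periodic_measurement()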
For a more complete example, see here.
Temperature calibration
The SCD30 module contains a temperature and humidity sensor, which allows for temperature compensation of the CO₂ sensor signal. Therefore, the correctness of the temperature measurements is critical to achieving highly accurate CO₂ readings.
Due to the small size of the module, the inherent self-heating of the various electrical components on and around the PCB are likely to cause values above ambient temperature to be reported. To counteract this, a temperature offset can be configured via the I²C interface. The correct value will depend on the placement and configuration of the sensor and should be updated if any changes are made. For instance, setting a different measurement interval can change the average power draw of the sensor, and in turn, the heat produced by its components. Changing its position relative to other components, altering the airflow or installing additional sensors nearby may similarly change the offset required.
By default, the temperature offset is disabled, i.e. set to 0'C. However, the following calculations apply in the general case, even with non-zero temperature offsets already set.
To determine the correct temperature offset, consider the following values:
T_ambient: the "reference" ambient temperature, measured through means other than the SCD30.
T_measured: the raw temperature reading obtained internally onboard the SCD30; we assume T_measured >= T_ambient.
T_reported: the temperature reported by the SCD30 after applying the configured offset, i.e. T_reported = T_measured - T_offset.
Clearly, the end goal is to minimize the error:
Δ = |T_reported - T_ambient| = |T_measured - T_offset - T_ambient| = |T_measured - T_ambient - T_offset|
Consequently:
T_offset = T_measured - T_ambient
Note that the SCD30 does not expose T_measured directly; the value returned by read_measurement() already has the current offset applied, i.e. T_reported is returned instead. Recall that:

T_reported = T_measured - T_offset
Therefore, the raw value T_measured can be computed by factoring in the current offset T_offset_old (obtained using get_temperature_offset()):

T_measured = T_reported + T_offset_old
Having obtained T_measured and a "true" reference temperature T_ambient (e.g. using a different thermometer), a new offset can be calculated:

T_offset_new = T_measured - T_ambient = (T_reported + T_offset_old) - T_ambient
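Putting the calculation into code, a small sketch (it assumes the library also exposes a set_temperature_offset() method; only get_temperature_offset() is mentioned above, so treat the setter as an assumption):

from scd30_i2c import SCD30

scd30 = SCD30()

def recalibrate_temperature_offset(t_ambient):
    """Compute and apply a new temperature offset from a reference reading."""
    t_reported = scd30.read_measurement()[1]       # offset already applied
    t_offset_old = scd30.get_temperature_offset()  # currently configured offset
    t_measured = t_reported + t_offset_old         # recover the raw reading
    t_offset_new = t_measured - t_ambient
    scd30.set_temperature_offset(t_offset_new)
    return t_offset_new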
|
https://pypi.org/project/scd30-i2c/0.0.5/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
belleglade 1.0.0
a tool to make working with GtkD and glade easier by generating a D class that connects handlers to methods
To use this package, run the following command in your project's root directory:
Manual usage
Put the following dependency into your project's dependencies section:
belleglade
a tool to make working with GtkD and glade easier by generating a D class that connects handlers to methods
Inspired by gladeD, but written from scratch.
Unlike gladeD, the class generated is not derived from a Gtk class, so you can have multiple windows in one file if you prefer. Belleglade also takes care of some eccentricities in Gtk's naming conventions when the function names get too long.
Belleglade processes a .glade file and generates a .d file based on it. Any widget which has an ID assigned in glade becomes a class member. Any signals defined are attached to a class method of the same name. This makes it easy to create another class and override all the handler methods to implement your functionality.
Example
Given a file called exampleui.glade, run belleglade:
belleglade -i exampleui.glade -o exampleui.d -c ExampleUI -m exampleui
This generates exampleui.d containing:
module exampleui;

import std.stdio;

public import gtk.ApplicationWindow;
public import gtk.Box;
public import gtk.Button;

import gtk.Builder;

abstract class ExampleUI {
    string __gladeString = "XML from glade file goes here so it is built into your code";
    Builder __builder;

    ApplicationWindow mainWindow;
    Button redAlert;
    // note: if you do not assign an ID, but do define a handler, an id is generated
    Button w0004;

    this () {
        __builder = new Builder ();
        __builder.addFromString (__gladeString);
        mainWindow = cast(ApplicationWindow)__builder.getObject("mainWindow");
        redAlert = cast(Button)__builder.getObject("redAlert");
        w0004 = cast(Button)__builder.getObject("w0004");
        redAlert.addOnClicked(&redAlertHandler);
        w0004.addOnClicked(&genericButtonHandler);
    }

    void redAlertHandler (Button w) {
        writeln("redAlertHandler stub called");
    }

    void genericButtonHandler (Button w) {
        writeln("genericButtonHandler stub called");
    }
}
Then you can subclass ExampleUI and define your own handlers.
class Example : ExampleUI {
    override void redAlertHandler (Button w) {
        writeln("Red Alert Handler stub overridden in subclass");
    }
}
A working version of this example is in the example directory.
Usage
-i --input       Required: The glade file you want to transform. The input file must be a
                 valid glade file. Errors in the glade file will not be detected.
-o --output      Required: The file to write the resulting module to.
-c --classname   Required: The name of the resulting class.
-m --modulename  Required: The module name of the resulting file.
-h --help        This help information
Notes
- The generated object corresponds to the <interface> and is not a widget.
- if an object has no id and no signal handlers, it is ignored.
- if an object has no id but has signal handlers, an id is automatically assigned.
- if it has an id, its type is added to the import list. (no dupes)
- if it has an id, a variable is created for it and populated.
- if it has signals, a delegate is created and connected.
- the widget namespace is flattened, so all id's must be unique.
License
GPL3
|
https://code.dlang.org/packages/belleglade
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
WeekView Class
Displays events across a week in a compact form.
This view is outdated and provided for compatibility with the earlier versions of the Scheduler Control. Use the FullWeekView instead.
Namespace: DevExpress.XtraScheduler
Assembly: DevExpress.XtraScheduler.v22.1.dll
Declaration
public class WeekView : SchedulerViewBase
Public Class WeekView Inherits SchedulerViewBase
Related API Members
The following members return WeekView objects:
Remarks
The XtraScheduler control has several view types that provide different arrangements and formats for scheduling and viewing appointments. The WeekView class represents the Week View. This type of view enables end-users to schedule and view events for a week.
All views are stored in the scheduler’s view repository which can be accessed via the SchedulerControl.Views property. To access the settings of the Week View use the SchedulerViewRepository.WeekView property.
|
https://docs.devexpress.com/WindowsForms/DevExpress.XtraScheduler.WeekView
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Can't open PyCharm Console and can't upload helpers for remote interpreter (PyCharm 2021.1 and PyCharm 2021.2)
I am unable to open the Python Console tab in PyCharm 2021.2 (same happened in 2021.1 also). The error is:
Error:Console process terminated with error:
Traceback (most recent call last):
File "/root/.pycharm_helpers/pydev/pydevconsole.py", line 33, in <module>
from _pydev_bundle.pydev_console_utils import BaseInterpreterInterface
File "/root/.pycharm_helpers/pydev/_pydev_bundle/pydev_console_utils.py", line 12, in <module>
from _pydevd_bundle import pydevd_thrift
File "/root/.pycharm_helpers/pydev/_pydevd_bundle/pydevd_thrift.py", line 20, in <module>
from pydev_console.pydev_protocol import DebugValue, GetArrayResponse, ArrayData, ArrayHeaders, ColHeader, RowHeader, \
File "/root/.pycharm_helpers/pydev/pydev_console/pydev_protocol.py", line 6, in <module>
_console_thrift = _shaded_thriftpy.load(os.path.join(os.path.dirname(os.path.realpath(__file__)), "console.thrift"),
File "/root/.pycharm_helpers/third_party/thriftpy/_shaded_thriftpy/parser/__init__.py", line 29, in load
thrift = parse(path, module_name, include_dirs=include_dirs,
File "/root/.pycharm_helpers/third_party/thriftpy/_shaded_thriftpy/parser/parser.py", line 475, in parse
parser = yacc.yacc(debug=False, write_tables=0)
File "/root/.pycharm_helpers/third_party/thriftpy/_shaded_ply/yacc.py", line 3256, in yacc
signature = pinfo.signature()
File "/root/.pycharm_helpers/third_party/thriftpy/_shaded_ply/yacc.py", line 2961, in signature
digest = base64.b16encode(sig.digest())
UnboundLocalError: local variable 'sig' referenced before assignment
Seems to be something wrong in .pycharm_helpers.
In addition, and possibly related, I regularly get the following error on the Event Log:
Couldn't upload helpers for remote interpreter: File /Users/<username>/Library/Caches/JetBrains/PyCharm2021.1/remote_sources/293810872/46911889/.pycharm_helpers/packaging_tool.py: /Users/<username>/Library/Caches/JetBrains/PyCharm2021.1/remote_sources/293810872/46911889/.pycharm_helpers/packaging_tool.py is not a file or directory
Despite this message, the .pycharm_helpers DO get uploaded to the VM, even after I delete them.
The directory /Users/<username>/Library/Caches/JetBrains/PyCharm2021.1/remote_sources/293810872/46911889/.pycharm_helpers doesn't exist at all locally, even though on the VM, .pycharm_helpers does exist in the home directory. Also, /Users/<username>/Library/Caches/JetBrains/PyCharm2021.1/remote_sources/293810872/46911889/ contains all the other files from the VM home directory (just not .pycharm_helpers).
I have tried the following:
- Invalidate caches
- Delete local caches manually (~/Library/Caches/JetBrains/PyCharm2021.1/)
- Delete ~/.pycharm_helpers on the VM
- Delete and recreate Python interpreter
Do you by chance access your remote server by multi-hop SSH, using .ssh/config? I have the same issue with 2021.2 (it works with 2021.1).
My connection to the remote server does authenticate via .ssh/config.
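A multi-hop entry in ~/.ssh/config typically looks like this (the host names here are made up):

Host jump-host
    HostName jump.example.com
    User me

Host target-vm
    HostName 10.0.0.5
    User me
    ProxyJump jump-host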
Note that since I upgraded to 2021.2, it looks like the directory /Users/<username>/Library/Caches/JetBrains/PyCharm2021.2/remote_sources doesn't exist. The other symptoms of the problem remain the same as described above.
Hello,
Please upload your logs folder zipped from ***Help | Collect logs and Diagnostic Data*** to the FTP and please let me know the filename.
Upload ID: 2021_08_06_2fvQnajZhYM37x3Q
this is now being worked on in
The unique() function in C++ helps remove all the consecutive duplicate elements from an array or vector. This function cannot resize the vector after removing the duplicates, so we will need to resize our vector ourselves once the duplicates are removed. This function is available in the <algorithm> header file.
The unique() function accepts the following parameters:
first: an iterator that points to the first element of the range of the array or vector where we want to perform the operation.
last: an iterator that points one past the last element of the range where we want to perform the operation.
The unique() function returns an iterator pointing to the element that follows the last element that was not removed.
Let’s look at the code below:
#include <iostream>
#include <algorithm>
#include <vector>
using namespace std;

int main() {
  vector<int> vec = {10,20,20,20,30,30,20,20,10};
  auto it = unique(vec.begin(), vec.end());
  vec.resize(distance(vec.begin(), it));
  for (it = vec.begin(); it != vec.end(); ++it)
    cout << ' ' << *it;
}
In the code above, we call the unique() function and pass the required parameters, then resize the vector to the returned logical length. The program prints 10 20 30 20 10, because the unique() function only removes the consecutive duplicates.
At the .NET Conf 2020 in November, Microsoft released the .NET 5 platform. This release's massive investment focuses primarily on improving the entire platform's overall performance, followed by a broad set of new features in ASP.NET Core, mainly related to Blazor, SignalR, and Web API. Meanwhile, ASP.NET MVC adds support for more model binding types and a new library, Microsoft.Identity.Web, which simplifies the Azure Active Directory authentication integration.
ASP.NET Core, which is further improved with .NET 5, is one of the best-performing frameworks according to TechEmpower benchmarks and has had a great response from the developers' community. The first example of enhancement is in the garbage collection, where threads can now increase the work done during object collection or reduce lock contention when statics are scanned. Another example is the better quality of the machine code generated by the just-in-time compiler (JIT), thanks to new features such as removing redundant zero-init or bound checks on indexes of arrays, strings, and spans. As a consequence of the increased performance in the GC and JIT, other areas such as allocation in the Kestrel server or the gRPC implementation for .NET got better too.
Source: gRPC performance improvements in .NET 5 | ASP.NET Blog
Other useful improvements are:
- Support for ARM64 hardware intrinsics
- Faster char.IsWhiteSpace and all text processing methods that use it
- Faster Comparison<T>-based sorting routines, which benefits the LINQ OrderBy method and, more generally, ordering collections
- Trimming the unused portions of an application during the linking process
- Quicker JSON serialization and deserialization thanks to the JsonSerializer class refactoring
In the Blazor framework, performance got better both in the WebAssembly runtime, where the processing times for common operations are cut in half, and in the UI component rendering phase. New controls such as InputFile, InputRadio, and InputRadioGroup are available, and component virtualization enhances the rendering process.
Source: ASP.NET Core updates in .NET 5 Release Candidate 1 | ASP.NET Blog
As for SignalR, developers can now allow parallel hub method invocations and use SignalR Hub filters, which allow writing code that runs before and after hub methods are called, facilitating logging, error handling, and argument validation. Filters can be configured per hub or globally.
In the ASP.NET Core MVC framework, the model binding now supports the record type introduced in C# 9. Also, requests containing a UTC time string can be bound to a UTC DateTime field:
public record Person([Required] string Name, [Range(0, 150)] int Age);

public class PersonController
{
    public IActionResult Index() => View();

    [HttpPost]
    public IActionResult Index(Person person)
    {
        // ...
    }
}
Source: What's new in ASP.NET Core 5.0 | Microsoft Docs
The .NET 5 release also brings innovation to the OpenAPI support, which is now enabled by default. Thanks to the partnership with the maintainer of the Swashbuckle.AspNetCore project, the web API project template includes the NuGet package for Swashbuckle. The OpenAPI configuration resides in the Startup class's ConfigureServices method and is enabled by default only in development mode, along with the Swagger UI page.
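A rough sketch of that default registration (simplified from the template):

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // Swashbuckle's Swagger generator, wired up by the web API template.
    services.AddSwaggerGen();
}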
The latest improvements are related to the authentication in the ASP.NET Core application: the project template references Microsoft.Identity.Web NuGet library, facilitating the authentication process through Azure Active Directory and the ability to access Azure resources on behalf of a specific user.
Finally, the developers can use the
dotnet watch command to launch both the debugger and the browser. During debugging, each change applied to the code automatically refreshes the page.
With all these features and improvements, as Scott Hunter (Director of Program Management, .NET) has stated on many occasions, Microsoft has a strong commitment to the framework, alongside the community contribution, laying the foundation for the next version of .NET, usually referred to as One .NET, which is planned for November 2021.
Add ".ds_store" too. Most Mac users will face problems with .DS_Store files. @ecdrid Please include this change also in your pull request:
if label not in ('.ipynb_checkpoints', '.ds_store'): instead of if label not in ('.ipynb_checkpoints'):
Going into the directory and typing “rm -r .ipynb_checkpoints/” got rid of the problem for me.
Thanks for the help
Sure…
Sorry for late reply
Also
__MACOSX Or something?
It’s already done…
Thanks and Sorry for late reply…
Kind of lost in the forum due to its exponential expansion
I am using an Azure Deep Learning VM and connecting through Git Bash as the terminal. I cloned the repo from
the git location as mentioned in the image below,
but when I start running lesson1, why is the content different from the 2018 content, like below?
Original git folder contents, lesson1 page details as shown below.
Please let me know, did I copy/clone from the wrong location,
or do we have to edit the wiki page info?
@amritv and @Chris_Palmer do you have any idea about this? … is wrong I think…
What you want is
I wonder why that wiki page is pointing to this - either the wiki page is not for the current course, or it must be a very old reference…
Complete n00b here, but this worked for me.
After lesson 1 I made a folder emmanotemma with subfolders train and valid in exactly the same way as dogs and cats in lesson 1.
I duplicated the lesson 1 notebook and that is where the error may have arisen. Again, n00b here.
Running the following code in the notebook worked for me:
!rm -r data/emmanotemma/.ipynb_checkpoints
!rm -r data/emmanotemma/valid/emma/.ipynb_checkpoints
!rm -r data/emmanotemma/train/emma/.ipynb_checkpoints
!rm -r data/emmanotemma/valid/notemma/.ipynb_checkpoints
!rm -r data/emmanotemma/train/notemma/.ipynb_checkpoints
Of course you will have to insert your own folder instead of emmanotemma and corresponding subfolders.
thanks worked for me
I had the same problem and this code helped solve the issue. Thanks.
Is there a way to solve it permanently? I read something about adding a git ignore file in the repository, not sure if that works.
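For reference, a .gitignore entry like the following keeps those files out of the repository, though it won't stop Jupyter or macOS from creating them on disk:

.ipynb_checkpoints/
.DS_Store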
This seems to solve the problem!
(Although it would have been good to know why the files are getting created at all)
A more generic way of doing this (so that you don’t have to keep changing the folder name for every dataset you load in) -
UPDATE 1 - Thanks to @wilpat for pointing this out.
%cd '{PATH}'
!find '.' -name '*.ipynb_checkpoints' -exec rm -r {} +
This will recursively search all sub-folders in the dataset for the checkpoint files and delete them.
The read_dirs function has been updated with your code with a little extra, but I still got the same issue.
I had to add a check for '.ipynb_checkpoints' in the loop creating fnames.
Pasting the edited code (roughly, with the checkpoint check added in the loop creating fnames):

def read_dirs(path, folder):
    lbls, fnames, all_lbls = [], [], []
    full_path = os.path.join(path, folder)
    for lbl in sorted(os.listdir(full_path)):
        if lbl not in ('.ipynb_checkpoints',):
            all_lbls.append(lbl)
            for fname in os.listdir(os.path.join(full_path, lbl)):
                if fname not in ('.ipynb_checkpoints',):
                    fnames.append(os.path.join(folder, lbl, fname))
                    lbls.append(lbl)
    return fnames, lbls, all_lbls
A dumb question
but are you running this snippet after loading in the path variable?
In lesson1.ipynb this is on input 6.
PATH = "data/dogscats/"
Got it now. My bad. Two mistakes here.
!cd doesn’t actually change the path. Some weird thing with ipython and bash.
Should have used
%cd
Second, in the second line, if I had cd’ed into the directory there is no need for the
{PATH} variable. Should have replaced that with
.
Leaves me wondering why it worked for me in the first place. Updating my answer.
Thanks for pointing this out!
A class that compiles a shader and saves it into an existing shader HDA. More...
#include <VOP_HDACodeCompiler.h>
A class that compiles a shader and saves it into an existing shader HDA.
Definition at line 359 of file VOP_HDACodeCompiler.h.
Constructor.
Compiles a given node to the sections of a given HDA (already existing). This is used for HDAs that have a contents network but also want to store cached vfl/vex code.
Reimplemented from VOP_HDACodeCompiler.
Compiles the given context type of the source node to the HDA in OTL.
Implements VOP_ShaderHDACompiler.
Obtains the context types for which to compile the code.
Implements VOP_ShaderHDACompiler.
What do you do when you are ready to upgrade to Swift but rewriting your existing Objective-C apps is not an option? In this try!Swift talk, using Etsy as a case study, Amy discusses a blueprint for integrating Swift incrementally into your apps.
Swift provides rich features for Objective-C interoperability, but applying them to your current codebase is not always straightforward. Amy covers technical details, such as linting and managing dependencies, as well as organizational strategies for gathering support, and other things they have learned at Etsy along the way.
After reading this, you will be prepared for a smooth transition to Swift: both in your code and in your company.
Introduction (00:00)
Let’s talk about adopting Swift incrementally in your apps. What I want to cover today is using our experience from Etsy as a case study. I am going to tell you a little bit about:
- Where we began and how we decided to get started with Swift.
- The experiments we ran to start getting into our code base.
- The lessons we learned or the things that we broke along the way.
Etsy is a global marketplace for people around the world to connect online and offline to make, sell, and buy unique goods. And at Etsy, we have four native apps. We have an application for buyers, an application for sellers, and both of those are on iOS and Android.
Swift (00:35)
In 2015 when Swift 2.0 came out, and Swift went open-source, it seemed to us like it was completely magical. A lot of other engineers and I at Etsy started getting excited about how we could use Swift ourselves.
I would look at a class, and I would say, "This would be much better if I could rewrite it using generics," or, "If only I had compile-time API availability, then writing this feature would be much easier." And we started saying to ourselves: we could stay with Objective-C, or maybe we should look at rewriting everything.
There have been wonderful articles from different companies, one from Lyft comes to mind, where they talk about how porting their app to Swift made it faster, it made it smaller, it reduced bugs. It made their developers happier. And all of these things were things that seemed magical and exciting to us, and we wanted that, too.
We said, "Great, let's rewrite everything!" And we decided to put together a proposal to, over time, incrementally port our app into Swift, class by class. The first thing we needed to do, of course, was to build consensus. What surprised me is that when I talked to people outside of iOS development about starting to use Swift, I heard two kinds of comments.
People would say, "Swift has been out for like a year: Are you not using it already? Isn't that something that happened?" Or people would say, "Are you totally crazy? Why would you rewrite your app from scratch?" Gathering consensus was an important part of this process, and the way we decided to do that was through a process we call an architecture review. All that means is you write a proposal and get a bunch of smart people in a room, and the burden of proof is on you to explain why you think your new code, your new idea, is going to make your lives better.
We have these smart people together, and one of them asked, “How do you know that all this Swift code you write is going to be better than the Objective-C that you already have?”
And this turns out to be a good question for us because we actually have a lot of Objective-C. As of August 2016, we had:
- 5+ years of commit history.
- 280,000 lines of Objective-C and counting.
- 2,500 implementation files (and I am excluding our libraries).
I bring this up not because it is bad. All that code represents a lot of experience and expertise with Objective-C. Throwing all of that out and starting over might not have been the most prudent approach for us. We decided we needed to find an answer to that question; we needed a reason to use Swift that was not just because it is magical and cool and exciting.
We decided to take a step back, and we said, “Maybe we do not have to live in one world or the other. Let’s just start writing our tests in Swift.” And that is exactly what we did for three or four months: we started writing functional and unit tests in Swift, using that as a vehicle to answer that question.
Stability and Strategy (03:22)
When you are moving back and forth between these two languages, you start to look at Objective-C and realize that it cannot express certain things. You will be writing a function, you will look at it, and you will think: what if this argument is nil? Will everything break? Probably.
And that gave us the answer. We said, “We do not want to use Swift because it is cool. We want to use Swift because we think it is going to let us write safer code that crashes less.” And from there we were able to go and build that consensus again.
We had another architecture review with a new proposal, and we started to talk to people about our new goal, which was instead of moving everything to Swift and making Swift development mandatory, let’s just make it possible.
Because if we believe that the Swift code that we write is going to be better and safer, then we could start to write new features with Swift and keep all that old Objective-C that we already know and trust. That is what we landed on.
We said, “Instead of living with just Objective-C or Swift, let’s be a two-language code base. Let’s commit to that.” And this was actually a smart decision, I think, because realistically if we were porting our app one piece at a time, it would take us years to get completely to Swift anyways.
Disadvantages, Advantages (04:27)
There are many disadvantages to this approach. Obviously, it means you are in this awkward, in-between code base state.
Xcode forgets what language you are looking at and gives you the header for the wrong one. More importantly, developers need to know two languages, which is also a disadvantage in terms of education. And finally, we have to deal with all of those messy interoperability features, i.e. bridging headers and auto-generated Objective-C headers.
But to us, the advantages outweighed that. If we are adopting Swift piece by piece, it gives us more time to adapt and spreads out the risk. It also lets us figure out how Swift is going to work for us. And finally, it gives us time to learn Swift as an organization, so we know that by the time we write more Swift, we are going to be better Swift writers.
We needed an approach for how we were going to start using Swift at all, and the approach we landed on was Swift by experiment.
Swift By Experiment (05:09)
We said: we are going to make hypotheses about how we think adding each piece of Swift to our code base will go, and then we are going to find a way to test them.
This was important because we knew that things were going to break, and we wanted to make sure that anything we broke did not affect our users or our actual app in production. Things were going to break for many reasons, including that Swift is unstable.
It is unstable in the sense that it is under active development, and Apple is still making breaking changes; also unstable in the fact that sometimes it just crashes - Xcode crashes, playgrounds crash.
We want to make sure that does not actually cause problems for us in the wild.
We also realized that our app is part of a larger ecosystem: we are not just a git repository, we also have third-party services: crash logging, submitting builds to Apple, build machines, translations (and probably things we have not even thought about).
It was important for us to ask ourselves, for each of these external things that touches our app, how adding Swift is going to affect it.
Then we came up with three goals:
- Let’s just add this Swift run-time.
- Add our first Swift class, and A/B test it
- Start developing new features in Swift.
Adding The Swift Runtime (06:29)
When you start shipping Swift in your app, because Swift does not have a stable binary interface, you actually ship the dynamically linked libraries required for the Swift runtime with your app.
Rather than flipping every switch at once, we decided to first ship some code that does not run: we added a hidden view controller in Swift, without any users running it.
The very first important thing we learned: make sure that those libraries are actually there. If you unzip an IPA file with a Swift app, you will see a folder next to the payload called SwiftSupport. That should be filled with a bunch of dynamic libraries like libswiftCore.dylib.
If those libraries are not there, Apple sends you a nasty email and rejects your app. Save yourself the heartache, and unzip it. It turns out that certain headless builds, xcodebuild included, do not include this folder by default. The same bug exists for WatchKit apps.
The other thing we learned is to monitor build sizes. The libraries in the Swift runtime add up to something in the order of 17MB (it is fairly substantial).
I wasted time worrying about whether this was going to bump us over the over-the-air download limit, which would affect downloads, and that would be bad. I spent much time trying to come up with scripts that would tell me how big our app was going to be when we submitted it to the App Store. It turns out that is hard because of things like app thinning and Apple's compression: you cannot easily answer the question of how big your app is.
The solution is just: upload it to iTunes Connect. Deep within all of the menus, under Activity, All Builds, if you click on a build, Apple will tell you exactly how big the app it is going to ship to your users is.
After that first experiment had succeeded, we decided to move on to running our first Swift code.
Running Our First Swift Code (08:15)
This was experiment number two. The approach we decided to use was an A/B test. Line by line, we rewrote an existing simple view controller. What is useful is we were not testing our ability to write novel code in Swift, we were just testing Swift itself and how it interacts with the rest of our Objective-C code base.
This experiment also taught us something new: it crashed.
I have a pop-quiz for you. Where does this code crash?
guard let collection = self.collection else { return }
let isPrivate = collection.isPrivate()
let isFavorites = collection.type == "favorites"
It turns out it crashes on line three, when you access the property collection.type. What was happening here is that collection was an instance of an Objective-C class, and we did not add nullability specifiers to it. That means collection.type came through as an implicitly unwrapped optional String: an optional value that is automatically force-unwrapped for you when you try to access it.
This was an important lesson to be learned: annotate your files. And this was interesting for us because, of course, with something like 2,500 header files, annotating all of them is completely impractical.
The solution we landed on was to annotate files as you import them. Your Swift bridging header is the firewall between your Swift code and your Objective-C; as long as you annotate things there, you will be safe.
And a special word of caution: do not forget that headers nest. For a header that imports another header, you need to make sure your nullability annotations are available in all the headers that it includes.
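As a minimal sketch (the Collection class and type property here simply mirror the earlier example), annotated declarations might look like:

NS_ASSUME_NONNULL_BEGIN
@interface Collection : NSObject
// Explicitly nullable: Swift imports this as String? instead of String!
@property (nonatomic, copy, nullable) NSString *type;
@end
NS_ASSUME_NONNULL_END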
Another thing we learned is: our crash logger did not give us useful information in Swift. We were trying to figure out where our code was crashing and we received random garbage in our stack traces.
It turns out that an interesting property of Swift is that, because it has proper namespaces, it needs to avoid naming collisions at the compiler and linker level. All the Swift symbols are compiled to a mangled format.
There is a useful tool shipped with the Xcode tools called swift-demangle. You can take your stack trace, run it through swift-demangle, and get a proper-looking stack trace.
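For example, with a made-up mangled name, a session might look like this:

$ xcrun swift-demangle _TtC9MyModule7MyClass
_TtC9MyModule7MyClass ---> MyModule.MyClass

With our crash fix, we decided we were ready to start writing new code in Swift.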
Writing New Code In Swift (11:10)
We came up with a team goal: do not write any Swift that another developer would have to rewrite to use from Objective-C.
The reason I say this is because we decided that code reuse was an important goal. We did not want to end up in a situation where someone writes a new and exciting utility in Swift, and then another developer wants to use it in the Objective-C half of the code, but they cannot.
This is actually harder than it sounds, because many features in Swift are not backward compatible with Objective-C (generics, tuples, structs). If you have any of these in your Swift code, the automatically generated Objective-C header will simply not include them.
The solution we landed on was simple: use access levels. Swift 3.0 gives us fileprivate and private. All we decided is that if you are going to use, for example, a generic or a struct, make sure it is flagged as private. You can still write your very Swifty code internally, but no Objective-C interface will be generated for it, so anything public is forced to be something you can still use from Objective-C.
But, how do you even force this? And the approach we landed on was: using a linter.
A linter is a small piece of software that takes source code as input and outputs style violations (commas, braces, and so on). But you can use it to do more powerful things: I wrote a linter rule on top of the great open-source project SwiftLint.
Code/Swift/Interoperability.swift:19:2: warning: Objective-C Interoperability Violation: Object ‘someFunction(_:_:)’ of type FunctionFree should be private, but is internal (objective_c_interoperability)
The linter rule looks through the code and makes sure that you are not using any of these Swift-only features in a way that is publicly accessible. If you do, it warns you about it, and you can keep that code out of the code base.
Finally, the last thing we learned is that much of our Objective-C code looks bad in Swift. It is not very Swifty. And it turns out there is this fantastic macro called
NS_REFINED_FOR_SWIFT.
If you have a piece of ugly Objective-C code, you can tag it with this macro and then write an extension in Swift.
@interface MyClass : NSObject
- (void)anUglyFunction NS_REFINED_FOR_SWIFT;
@end

extension MyClass {
    public func aPrettierFunction() {
        return self.__anUglyFunction()
    }
}
That lets you redefine how Swift will see that function, so you can make it more canonically Swifty. If this approach is useful to you, there was a great talk at WWDC 2015 called "Improving Your Existing Apps with Swift."
Education, Standarization And The Future (13:22)
We wanted to make sure we were still answering that question of: how do we make sure that the Swift code we are writing is going to be as good as or better than the Objective-C we already had?
The very first thing we needed was code standards. We wanted to make sure we all agreed on the Swift code we would write.
It was interesting that none of us felt we had the authority to write our own Swift code standards from scratch. This was a new language to us too. What we did, and my suggestion to you, is, of course, that you can borrow some.
Many companies publish their Swift code standards publicly. We started with GitHub's. We took that starting point and modified it with our own concerns about interoperability to get to the code standards that we have today. The other thing, of course, is do not forget to standardize on a version.
As soon as you have more than one Swift developer, you are probably running more than one version of Xcode. And this is not a problem in Objective-C but it quickly becomes a problem with Swift.
Because you are going to start shipping code and reviewing each other's work, and it is not going to compile at all, because you are all running slightly different versions.
From the command line, you can check what version of Swift you are running, and just agree on one. Swift.org now publishes tool chains that you can install into Xcode separately from the Xcode version. Or you could all install Xcode at the same time.
Then, of course, we have to deal with future versions of Swift. Adopting Swift 3.0 potentially has the same problems as adopting Swift in the first place. It is a breaking change. How do we deal with that? We decided to keep the same experimental approach, on an ongoing basis.
We looked at running Swift 3.0 in a branch using the code migrator, as well as installing future versions of Xcode on one of our build machines. This let us use the same experimental approach to make sure that Swift 3.0 was not going to cause any problems either.
Finally, we have education. This is something that is ongoing for us, and something that we are excited about: how do we get developers outside of iOS to contribute to the iOS apps? The approach we are taking is to run a series of workshops and lunches where we start to introduce Swift to the larger population.
The Bigger Picture (15:23)
If you are sitting in the audience and you have a big Objective-C codebase, and you want to start using Swift tomorrow, what is my advice to you and where can you start?
The very first thing is you need to find your raison d’etre, your killer feature: you need a reason to use Swift. And it cannot just be FOMO. It needs to be something that is good for you as a developer or your users.
Once you find that reason, you start telling people about it, and it becomes much easier to get other people on board with you using Swift in your code.
My next suggestion to you is to start outside of your codebase. For us, starting by writing tests was invaluable. One thing that is important is that it let us gain experience with writing Swift. Another is that it let us gain experience with writing Swift at Etsy, figuring out what that means for us.
And when you start outside your codebase, whether it is for tests or tooling, or toy projects or anything like that, you give yourself the opportunity to learn as a group.
My next suggestion to you, of course, is to make and test hypotheses; all the things that broke for us may break for you, and you are probably going to find some new ones too. Every system is different, and every system is unique. The only way you can figure out what is going to give you a problem is by trying it.
My suggestion is to make a diagram, figure out all the things that touch your app, and ask yourself, "If I add Swift, is this going to change?" or "How is this going to change?" Then, for every one of those things, try to devise an experiment to figure out how you can test that without breaking your app in production.
You can do it. We are lucky to be working with these two very powerful languages that are interoperable. It gives you tools that we might not have otherwise. Do not be afraid to get started.
Thank you!
Resources
About the content
This talk was delivered live in September 2016 at try! Swift NYC. The video was recorded, produced, and transcribed by Realm, and is published here with the permission of the conference organizers.
Don't you think "A picture is worth a thousand words"? :)
It is always (at least in most cases :P) better to take a screenshot of the webpage when the test run fails.
Because with one look at the screenshot we can get an idea of where exactly the script failed. Moreover, reading a screenshot is easier compared to reading hundreds of console errors :P
Here is the sample code to take screenshot of webpage
File scrFile = ((TakesScreenshot)driver).getScreenshotAs(OutputType.FILE);
FileUtils.copyFile(scrFile, new File("PathOnLocalDrive"));
To get a screenshot on test failure, we should put the entire code in a try-catch block. In the catch block, make sure to copy the above screenshot code.
In my example I am trying to register as a new user. For both the first and last name fields I have used the correct locator element, whereas for the email address field I have used a wrong locator element, i.e. name("GmailAddress1").
So when I run the script, the test fails and I get a screenshot with pre-filled first and last names but no email address.
Here is the sample code :
import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.By;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

public class TakeScreenshot {
    WebDriver driver;

    @BeforeTest
    public void start() {
        driver = new FirefoxDriver();
    }

    @Test
    public void Test() throws IOException {
        try {
            driver.get("");
            driver.findElement(By.id("link-signup")).click();
            driver.findElement(By.name("FirstName")).sendKeys("First Name");
            driver.findElement(By.name("LastName")).sendKeys("Last Name");
            driver.findElement(By.name("GmailAddress1")).sendKeys("GmailAddress@gmail.com");
        } catch (Exception e) {
            // Takes the screenshot when the test fails
            File scrFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            FileUtils.copyFile(scrFile, new File("C:\\Users\\Public\\Pictures\\failure.png"));
        }
    }
}

And here is the screenshot of the webpage on test failure.
Reference :
By the way, have a great day...!!! :)
Hi Vamsi Kurra, thanks a LOT for sharing this concept with us. I am yet to work on this program, though. Please keep up the good work. May GOD BLESS you...
thanks ..!! :)
Thank you
Thank you for the post.
Another alternative is creating a custom profile using WebDriver.
Here is the sample code snippet for creating a custom profile to download files using Selenium WebDriver:
Hi Vamshi,
I'm getting an error while clicking on the Pack Extension button.
It shows a popup with
" The 'manifest_version' key must be present and set to 2 (without quotes). See developer.chrome.com/extensions/manifestVersion.html for details."
Can you please help me out this issue ?
Hi Thyagu pugaz,
May I know for which extension you are getting this error. It worked fine for me .
Its working fine .. Thanks.
I'm trying to automate the "Rest Console" - it's a Chrome extension.
ChromeOptions options = new ChromeOptions();
options.addArguments("start-maximized");
options.addExtensions(new File("C:\\Users\\pugazd\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\Extensions\\cokgbflfommojglbmbpenpphppikmonn\\4.0.2_0.crx"));
// options.setBinary(new File("C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe"));
System.setProperty("webdriver.chrome.driver", "C:\\Users\\pugazd\\Desktop\\chromedriver.exe");
ChromeDriver driver = new ChromeDriver(options); // Error thrown in this line.
driver.get("chrome-extension://cokgbflfommojglbmbpenpphppikmonn/index.html");
Exception in thread "main" org.openqa.selenium.WebDriverException: unknown error: failed to wait for extension background page to load: chrome-extension://omlijhidnmlobpccmlkhjeikgbbhebab/background.html
from unknown error: page could not be found: chrome-extension://omlijhidnmlobpccmlkhjeikgbbhebab/background.html
(Driver info: chromedriver=2.7.236900,platform=Windows NT 6.1 SP1 x86_64) (WARNING: The server did not provide any stacktrace information)
thank you very much for your clear explanation..
Please find the video tutorial of Selenium WebDriver with Java here:
Hi vamshi,
I have followed the same steps as you have specified.
After I copy-pasted the Selenium code in Eclipse, I am unable to import the file which you are using.
That's the reason I am unable to specify the statement
String s = new OCR().recognizeCharacters((RenderedImage) image);
Can you please give me a solution to this?
Hi Mamatha,
You can't import the file. Please download the AspriseOCR.dll file and keep it at the below location:
"C:\Windows\System32".
# STOMP Dart Client
STOMP Dart client for communicating with STOMP-compliant messaging brokers and servers.
Stomp Dart Client is distributed under an Apache 2.0 License.
See also Ripple - Lightweight Dart Messaging Server.
## Installation
Add this to your pubspec.yaml (or create it):

dependencies:
  stomp:
Then run the Pub Package Manager (comes with the Dart SDK):
pub install
## Usage

### Running on Dart VM
import "package:stomp/stomp.dart"; import "package:stomp/vm.dart" show connect; void main() { connect("foo.server.com").then((StompClient client) { client.subscribeString("/foo", (Map<String, String> headers, String message) { print("Recieve $message"); }); client.sendString("/foo", "Hi, Stomp"); }); }
There are basically a few alternative ways to communicate:
- sendJson() and subscribeJson()
- sendString() and subscribeString()
- sendBytes() and subscribeBytes()
- sendBlob() and subscribeBlob()
Please refer to StompClient for more information.
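For instance, a JSON-based exchange might look roughly like this (the destination and the exact callback signatures here are assumptions modeled on the string example above):

client.subscribeJson("/foo", (Map<String, String> headers, message) {
  print("Receive $message");
});

client.sendJson("/foo", {"subject": "greeting", "text": "Hi, Stomp"});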
### Running on Browser

The same as the above, except import websocket.dart instead of vm.dart:

import "package:stomp/stomp.dart";
import "package:stomp/websocket.dart" show connect;

//the rest is the same as running on Dart VM
## Limitations

## Incompleteness
Add this to your package's pubspec.yaml file:
dependencies:
  stomp: ^0.7.3
You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more.

Now in your Dart code, you can use:

import 'package:stomp/stomp.dart';
Chaining actions in Struts
By: Apache Foundation
Chaining actions can be done by simply using the proper mapping in your forward entries in the struts-config.xml file. Assume you had the following two classes:
/* com/AAction.java */
...
public class AAction extends Action {
    public ActionForward execute(ActionMapping mapping,
                                 ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response) throws Exception {
        // Do something
        return mapping.findForward("success");
    }
}
/* com/BAction.java */
...
public class BAction extends Action {
    public ActionForward execute(ActionMapping mapping,
                                 ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response) throws Exception {
        // Do something else
        return mapping.findForward("success");
    }
}
Then you can chain together these two actions with the Struts configuration as shown in the following excerpt:
...
<action-mappings>
    <action path="/A" type="com.AAction" validate="false">
        <forward name="success" path="/B.do" />
    </action>
    <action path="/B" type="com.BAction" scope="session" validate="false">
        <forward name="success" path="/result.jsp" />
    </action>
</action-mappings>
...
Here we are assuming you are using a suffix-based (.do) servlet mapping, which is recommended since module support requires it. When you send your browser to the web application and name the action..
Lately, my team has been looking for better ways to create and maintain mocks in our TypeScript project. In particular, we wanted an easy way to mock out modules that we built using Sinon.JS.
We had a few goals for our mocks:
- Specific: Each test should be able to specify the mocked module’s behavior to test edge cases.
- Concise: Each test should only mock the functions that it cares about.
- Accurate: The return type of each mocked function should match the actual return type.
- Maintainable: Adding a new function to a module should create minimal rework in existing tests.
To accomplish these goals, we created this function:
export function mockModule<T extends { [K: string]: any }>(
  moduleToMock: T,
  defaultMockValuesForMock: Partial<{ [K in keyof T]: T[K] }>,
) {
  return (
    sandbox: sinon.SinonSandbox,
    returnOverrides?: Partial<{ [K in keyof T]: T[K] }>,
  ): void => {
    const functions = Object.keys(moduleToMock);
    const returns = returnOverrides || {};
    functions.forEach((f) => {
      sandbox.stub(moduleToMock, f).callsFake(returns[f] || defaultMockValuesForMock[f]);
    });
  };
}
The function takes in a module and an object that defines the mocked behavior of each function. When invoked,
mockModule returns a new function that takes two parameters: a Sinon Sandbox instance and an object that can override the mocked values specified in the previous function.
Here’s an example of how
mockModule can be used:
import * as sinon from 'sinon';
import { mockModule } from 'test/helpers';
import * as UserRepository from 'repository/user-repository';
import { getFullName } from 'util/user-helpers';

describe('getFullName', () => {
  const mockUserRepository = mockModule(UserRepository, {
    getFirstName: () => 'Joe',
    getLastName: () => 'Smith',
  });

  let sandbox: sinon.SinonSandbox;

  beforeEach(() => {
    sandbox = sinon.sandbox.create();
  });

  afterEach(() => {
    sandbox.restore();
  });

  it('returns the full name of a user', () => {
    mockUserRepository(sandbox);
    const fullName = getFullName({ userId: 1 });
    expect(fullName).to.equal('Joe Smith');
  });

  it('returns the full name of a user with only a first name', () => {
    mockUserRepository(sandbox, {
      getLastName: () => null,
    });
    const fullName = getFullName({ userId: 1 });
    expect(fullName).to.equal('Joe');
  });
});
This demonstrates my team’s general pattern for mocking modules. First, we use
mockModule to create a function that can mock the given module. This happens at the outermost scope of our test suite so that the whole collection of tests can use the mocked function (in this example, the
mockUserRepository function). Each test can call the mock function, and if needed, each test can specify new behaviors for the functions.
What we’ve found to be extremely helpful is the typing that
mockModule provides. If we change the return type of a function in a module, we’ll receive a type error letting us know that we should update our tests accordingly.
This function has helped my team create better tests that are easy to write and maintain. What other mocking practices has your team used? Let me know in the comments.
Hi,
I tried to use your code, but what is R? Where did it come from? And do I need to define the type T in mockModule…?
Hi Olle,
I replaced "R" with "Object". "R.keys" is a function from Ramda. This library is not necessary for this function, so I took it out.
“T” is a generic type that was being displayed properly in the code snippet. It should be present now.
I updated the post to reflect both of these changes. Thanks!
In a typical application or site built with webpack, there are three main types of code:
This article will focus on the last of these three parts, the runtime and in particular the manifest.
The runtime, along with the manifest data, is basically all the code webpack needs to connect your modularized application while it's running in the browser. It contains the loading and resolving logic needed to connect your modules as they interact. This includes connecting modules that have already been loaded into the browser as well as logic to lazy-load the ones that haven't.
Once your application hits the browser in the form of an index.html file, some bundles and a variety of other assets required by your application must be loaded and linked somehow. That /src directory you meticulously laid out is now bundled, minified, and maybe even split into smaller chunks for lazy-loading by webpack's optimization. So how does webpack manage the interaction between all of your required modules? This is where the manifest data comes in...
As the compiler enters, resolves, and maps out your application, it keeps detailed notes on all your modules. This collection of data is called the "manifest," and it's what the runtime will use to resolve and load modules once they've been bundled and shipped to the browser. No matter which module syntax you have chosen, those import or require statements have now become __webpack_require__ methods that point to module identifiers. Using the data in the manifest, the runtime will be able to find out where to retrieve the modules behind the identifiers.
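As an illustrative sketch (simplified, not webpack's actual emitted code), the runtime's module lookup works roughly like this:

// id -> factory function, emitted into the bundle by the compiler
var modules = { /* ... */ };
var installedModules = {};

function __webpack_require__(moduleId) {
  // Serve from the cache if this module has already been loaded.
  if (installedModules[moduleId]) {
    return installedModules[moduleId].exports;
  }
  var module = (installedModules[moduleId] = { exports: {} });
  // Run the factory so the module can populate its exports.
  modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);
  return module.exports;
}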
So now you have a little bit of insight into how webpack works behind the scenes. "But how does this affect me?", you might ask. The simple answer is that most of the time it doesn't. The runtime will do its thing, utilizing the manifest, and everything will appear to just magically work once your application hits the browser. However, if you decide to improve your project's performance by utilizing browser caching, you may notice that your output file hashes change on every build even when the content does not. This is caused by the injection of the runtime and manifest, which changes every build.
See the manifest section of our Output management guide to learn how to extract the manifest, and read the guides below to learn more about the intricacies of long term caching.
Nov 02, 2014 09:54 AM|sun21170|LINK
public class Invoice
{
    public int InvoiceId { get; set; }
    public DateTime InvoiceDate { get; set; }
    public Supplier SuppliedBy { get; set; }
    public Customer BilledTo { get; set; }
    public List<LineItem> LineItems { get; set; }

    // EMPTY CONSTRUCTOR
    public Invoice() { }

    // DEPENDENCY INJECTION????
    public Invoice(Customer cust, Supplier sup, List<LineItem> items)
    {
        this.BilledTo = cust;
        this.SuppliedBy = sup;
        this.LineItems = items;
    }
}
In above code, Supplier, Customer and LineItem are separate classes. It appears that instances of these classes are being injected through properties into an instance of the Invoice class. We could have also passed these instances to the constructor of Invoice class using the second constructor and still have dependency injection.
My question: Is above code an example of DI (Dependecy Injection) in C#?
Nov 03, 2014 03:37 AM|Zhi Lv - MSFT|LINK
Hi sun21170,
sun21170
My question: Is above code an example of DI (Dependecy Injection) in C#?
From my point of view, I don't think it is. Here are some articles about Dependency Injection, you could refer to them.
Best Regards,
Dillion
Nov 03, 2014 10:24 AM|sun21170|LINK
Could you explain why the code mentioned is not an example of dependency injection?
We are not instantiating objects within the Invoice object but passing it these objects from outside, so it sounds like DI.
Nov 06, 2014 01:09 PM|sun21170|LINK
UPDATE: After Mike's answer, I am posting this update. The code snippet in this post, with or without interfaces is DI compliant. DI does not mean that we need to use interfaces unlike what I had initially thought.
In example below, I am injecting interface variables in place of concrete class instances. This is also a valid example of DI.
public class Invoice
{
    public int InvoiceId { get; set; }
    public DateTime InvoiceDate { get; set; }
    public ISupplier SuppliedBy { get; set; }      // we are using an interface and not the concrete class
    public ICustomer BilledTo { get; set; }        // we are using an interface and not the concrete class
    public List<ILineItem> LineItems { get; set; } // we are using an interface and not the concrete class

    // EMPTY CONSTRUCTOR
    public Invoice() { }

    // DEPENDENCY INJECTION using constructor
    public Invoice(ICustomer cust, ISupplier sup, List<ILineItem> items)
    {
        this.BilledTo = cust;
        this.SuppliedBy = sup;
        this.LineItems = items;
    }
}
Nov 07, 2014 08:57 AM|Mikesdotnetting|LINK
Yes it is an example of Constructor dependency injection. The dependencies (Customer and Supplier) are injected into the Invoice class through its constructor. See Constructor injection here:
Dependency injection is not reliant on interfaces at all. However, you normally see interfaces used to represent a type if the type being injected needs to be interchangeable. That's not normally the case for entities like Customer or Order, but it is more commonly required for components or services such as a repository or a mailing component.
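For example, the kind of dependency where an interface pays off looks like this (the repository types here are invented for illustration):

// A service dependency behind an interface, so it can be swapped in tests.
public interface IInvoiceRepository
{
    void Save(Invoice invoice);
}

public class InvoiceService
{
    private readonly IInvoiceRepository _repository;

    // Constructor injection: the dependency is supplied from outside.
    public InvoiceService(IInvoiceRepository repository)
    {
        _repository = repository;
    }

    public void Process(Invoice invoice)
    {
        _repository.Save(invoice);
    }
}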
Nov 07, 2014 10:59 AM|sun21170|LINK
I think my confusion comes from so many concepts being tied to dependency injection. Before posting my question, I read the following 'confusing' concepts.
Nov 07, 2014 11:30 AM|Mikesdotnetting|LINK
Dependency injection is pattern. It's a specific implementation of the Inversion of control pattern. There are other forms of implementing inversion of control like the Service Locator pattern, Strategy Pattern, Factory Pattern etc. Inversion of control is all about maintaining loose coupling within a system. The main thing is to understand why you should strive for loose coupling in the first place, and then explore the various options you have available to manage it.
Nov 07, 2014 11:58 AM|sun21170|LINK
Now it's making some sense. But I think implementing these fancy design patterns can be very distracting and time-consuming/wasteful, and I am not sure if they are really needed when trying to deliver software in a 'lean' manner. Today we want to write software so it satisfies some requirements, but in a manner that is not wasteful. Design patterns may introduce waste into the development life-cycle by adding to development time. But this may not always be true; it's just my initial impression.
The Dependency Inversion Principle (the D in the SOLID principles of good software architecture) says we should depend on abstractions, such as interfaces, for dependent objects.
Inversion of Control is just a design approach that says: do not instantiate dependent objects inside your class, but pass them to your code through a constructor, method, or property.
Dependency Injection is one way to implement the Inversion of Control design.
Alternative Languages
The languages described so far in this chapter have been extensions to what might be called standard C/C++. In some ways, C and C++ are not ideal languages for parallelization. One particular issue is the extensive use of pointers, which makes it hard to prove that memory accesses do not alias.
As a consequence of this, other programming languages have been devised that either target developing parallel applications or do not suffer from some of the issues that hit C/C++. For example, Fortress, initially developed by Sun Microsystems, has a model where loops are parallel by default unless otherwise specified. The Go language from Google includes the concept of goroutines that, rather like OpenMP tasks, can be executed in parallel with the main thread.
One area of interest is functional programming. With pure functional programming, the evaluation of an expression depends only on the parameters passed into that expression. Hence, functions can be evaluated in parallel, or in any order, and will produce the same result. We will consider Haskell as one example of a functional language.
The code in Listing 10.16 evaluates the Nth Fibonacci number in Haskell. The lan-guage allows the return values for functions to be defined for particular input values. So, in this instance, we are setting the return values for 0 and 1 as well as the general return value for any other numbers.
Listing 10.16 Evaluating the Nth Fibonacci Number in Haskell
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
Listing 10.17 shows the result of using this function interactively. The command :load requests that the module fib.hs be loaded, and then the command fib is invoked with the parameter 10, and the runtime returns the value 55.
Listing 10.17 Asking Haskell to Provide the Tenth Fibonacci Number
GHCi, version 6.10.4: :? for help
Prelude> :load fib.hs
[1 of 1] Compiling Main ( fib.hs, interpreted )
Ok, modules loaded: Main.
*Main> fib 10
55
Listing 10.18 defines a second function, bif, a variant of the Fibonacci function. Suppose that we want to return the sum of the two functions. The code defines a serial version of this function and provides a main routine that prints the result of calling this function.
Listing 10.18 Stand-Alone Serial Program
main = print (serial 10 10)

fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

bif 0 = -1
bif 1 = 0
bif n = bif (n-1) + bif (n-2)

serial a b = fib a + bif b
Rather than interpreting this program, we can compile and run it as shown in Listing 10.19.
Listing 10.19 Compiling and Running Serial Code
C:\> ghc -O --make test.hs
[1 of 1] Compiling Main ( test.hs, test.o )
Linking test.exe ...
C:\> test
21
The two functions should take about the same amount of time to execute, so it would make sense to execute them in parallel. Listing 10.20 shows the code to do this.
Listing 10.20 Stand-Alone Parallel Program
import Control.Parallel
main = print ( parallel 20 20)
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
bif 0 = -1
bif 1 = 0
bif n = bif (n-1) + bif (n-2)
parallel a b
    = let x = fib a
          y = bif b
      in x `par` (y `pseq` (x+y))
In the code, the let expressions are not assignments of values but declarations of local variables. The local variables will be evaluated only if they are needed; this is lazy evaluation. These local variables are used in the in expression, which performs the computation. The import statement at the start of the code imports the Control.Parallel module. This module defines the `par` and `pseq` operators. These two operators are used so that the computation of x = fib a and y = bif b is performed in parallel, and this ensures that the result (x+y) is computed after the calculation of y has completed. Without these elaborate preparations, it is possible that both parallel threads might choose to compute the value of the function x first.
The example given here exposes parallelism using low-level primitives. The preferred way of coding parallelism in Haskell is to use strategies. This approach separates the computation from the parallelization.
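As a minimal sketch of that style, using rpar and rseq from Control.Parallel.Strategies (part of the parallel package), the same sum can be written in the Eval monad:

import Control.Parallel.Strategies

-- rpar sparks fib a in parallel; rseq evaluates bif b before the sum.
parallelSum a b = runEval $ do
    x <- rpar (fib a)
    y <- rseq (bif b)
    return (x + y)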
Haskell highlights the key advantage of pure functional programming languages for writing parallel code: the result of a function call depends only on the parameters passed into it. From this, the compiler knows that a function call can be scheduled in any arbitrary order, and that the results of the call do not depend on the time at which it is made. The advantage this provides is that adding the `par` operator to produce a parallel version of an application is guaranteed not to change the result of the application. Hence, parallelization is a way of improving performance and not a source of bugs.
POINTERS
Definition:
§ A C pointer is a variable that stores/points to the address of another variable.
§ C pointers are used to allocate memory dynamically, i.e. at run time.
§ The variable pointed to may be of any data type, such as int, float, char, double, short, etc.
§ Syntax: data_type *var_name; Example: int *p; char *p;
Where * is used to denote that "p" is a pointer variable and not a normal variable.
Key points to remember about pointers in C:
§ A normal variable stores a value, whereas a pointer variable stores the address of a variable.
§ The content of a C pointer is always a whole number, i.e. an address.
§ A pointer should always be initialized, e.g. int *p = NULL; (an uninitialized pointer holds an indeterminate value).
§ The value of a null pointer is 0.
§ The & symbol is used to get the address of a variable.
§ The * symbol is used to get the value of the variable that the pointer is pointing to.
§ If a pointer is assigned NULL, it means it is pointing to nothing.
§ The size of every pointer is 2 bytes (for a 16-bit compiler).
§ No two pointer variables should have the same name.
§ But a pointer variable and a non-pointer variable can have the same name.
1 Pointer –Initialization:
Assigning value to pointer:
It is not necessary to assign a value to a pointer at declaration. Only zero (0) and NULL can be assigned to a pointer directly; no other integer constant can be assigned without a cast. Consider the following examples:

int *p = 0;
int *p = NULL;

The above two assignments are valid.

int *p = 1000;

This statement is invalid.
Assigning variable to a pointer:
int x, *p;
p = &x;

This is nothing but a pointer variable p being assigned the address of the variable x. The address of the variable may be different every time the program is executed.
Reading value through pointer:
int x = 123, *p;
p = &x;

Here the pointer variable p is assigned the address of variable x.

printf("%d", *p); will display the value of x, 123. This is reading a value through a pointer.
printf("%u", p); will display the address of the variable x.
printf("%u", &p); will display the address of the pointer variable p itself.
printf("%d", x); will display the value of x, 123.
printf("%u", &x); will display the address of the variable x.
Note: It is always good practice to assign a pointer the address of a variable rather than leaving it at 0 or NULL.
Pointer Assignments:
We can use a pointer on the right-hand side of an assignment to assign its value to another variable.
Example:

int main()
{
    int var = 50;
    int *p1, *p2;
    p1 = &var;   /* p1 points to var */
    p2 = p1;     /* p2 now holds the same address */
}
Chain of pointers/Pointer to Pointer:
A pointer can point to the address of another pointer. Consider the following example:

int x = 456, *p1, **p2;   /* p2 is a pointer-to-pointer */
p1 = &x;
p2 = &p1;

When a pointer points to another pointer, it is called a chain pointer. A chain pointer must be declared with ** as in **p2.
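A complete, runnable sketch of the same idea (the variable names follow the fragment above; only the printf output is an addition):

#include <stdio.h>

int main(void)
{
    int x = 456;
    int *p1 = &x;      /* p1 holds the address of x */
    int **p2 = &p1;    /* p2 holds the address of p1 */
    printf("%d\n", **p2);   /* dereferencing twice reaches x: prints 456 */
    return 0;
}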
Manipulation of Pointers
We can manipulate a pointer with the indirection operator '*', which is known as the dereference operator. With this operator, we can indirectly access the content of the data variable.

Syntax: *ptr_var;
Example:
#include <stdio.h>

void main()
{
    int a = 10, *ptr;
    ptr = &a;
    printf("\nThe value of a is: %d", a);
    *ptr = (*ptr) / 2;   /* halve a through the pointer */
    printf("\nThe value of a is: %d", *ptr);
}
Output:
The value of a is: 10
The value of a is: 5
2 Pointer Expression & Pointer Arithmetic
C allows pointer to perform the following arithmetic operations:
A pointer can be incremented / decremented.
Any integer can be added to or subtracted from the pointer.
A pointer can be incremented / decremented.
On a 16-bit machine, the size of every type of pointer is always 2 bytes. E.g.:

int a;
int *p;
p++;
Each time a pointer p is incremented, it will point to the memory location of the next element of its base type. Each time a pointer p is decremented, it will point to the memory location of the previous element of its base type.
int a, *p1, *p2, *p3;
p1 = &a;      /* assume the address of a is 1000 and int is 2 bytes */
p2 = p1++;    /* post-increment: p2 is assigned 1000, then p1 becomes 1002 */
p3 = ++p1;    /* pre-increment: p1 becomes 1004, then p3 is assigned 1004 */

printf("Address p1 now points to: %u", p1);                   /* 1004 */
printf("After assigning and then incrementing, p2: %u", p2);  /* 1000 */
printf("After incrementing and then assigning, p3: %u", p3);  /* 1004 */
In 32 bit machine, size of all types of pointer is always 4 bytes.
The pointer variable p refers to the base address of the variable a. We can increment the pointer variable,
p++ or ++p
This statement moves the pointer to the next memory address. Let p be an integer pointer with a current value of 2,000 (that is, it contains the address 2,000). Assuming 32-bit integers, after the expression
p++;
the contents of p will be 2,004, not 2,001! Each time p is incremented, it will point to the next integer. The same is true of decrements. For example,
p--;

will cause p to have the value 1,996, assuming that it previously was 2,000. Here is why: each time a pointer is incremented, it will point to the memory location of the next element of its base type. Each time it is decremented, it will point to the location of the previous element of its base type.
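A tiny sketch to observe this on your own machine (the 2,000/2,004 addresses above are illustrative; real addresses vary per run):

#include <stdio.h>

int main(void)
{
    int a[2];
    int *p = &a[0];
    printf("p before: %p\n", (void *)p);
    p++;   /* advances by sizeof(int) bytes, not by 1 */
    printf("p after : %p\n", (void *)p);
    return 0;
}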
Any integer can be added to or subtracted from a pointer.
/* Sum of two integers using pointers*/
#include <stdio.h>

int main()
{
    int first, second, *p, *q, sum;
    printf("Enter two integers to add\n");
    scanf("%d%d", &first, &second);
    p = &first;
    q = &second;
    sum = *p + *q;
    printf("Sum of entered numbers = %d\n", sum);
    return 0;
}
3 Pointers and Arrays
The array name is a constant pointer that points to the base address of the array (i.e. the first element of the array). Elements of the array are stored in contiguous memory locations, so they can be accessed efficiently using pointers.

A pointer variable can be assigned to an array. The address of each element increases by a fixed factor that depends on the data type of the pointer; for an integer pointer (with 2-byte integers) the factor is 2. Consider the following example:
int x[5]={11,22,33,44,55}, *p;
p = x; //p=&x; // p = &x[0];
Remember, earlier the pointer variable was assigned using the address (&) operator. When working with an array, the pointer variable can be assigned the array name directly, as above; the address operator is required only when assigning the address of an individual element, as in p = &x[0]. Assume the address of x[0] is 1000; then the addresses of the other elements will be as follows:
x[1] = 1002
x[2] = 1004
x[3] = 1006
x[4] = 1008
The address of each element increases by a factor of 2. Since the size of an integer is 2 bytes, the memory address is increased by 2 bytes; for float it would increase by 4 bytes, and for double by 8 bytes. This uniform increase is called the scale factor.
p = &x[0];
Now the value of pointer variable p is 1000 which is the address of array element x[0]. To find the address of the array element x[1] just write the following statement.
p = p + 1;
Now the value of the pointer variable p is 1002, not 1001, because incrementing a pointer variable advances it by the scale factor of its base type; since it is an integer pointer, it increases by 2.

The statement p = p + 1; can also be written using the increment operator, ++p;. The values of the array elements can be read by moving the pointer variable up or down with the increment or decrement operators, using the scale factor.
Consider the above example.
printf("%d", *(p+0)); will display the value of array element x[0], which is 11.
printf("%d", *(p+1)); will display the value of array element x[1], which is 22.
printf("%d", *(p+2)); will display the value of array element x[2], which is 33.
printf("%d", *(p+3)); will display the value of array element x[3], which is 44.
printf("%d", *(p+4)); will display the value of array element x[4], which is 55.
/*Displaying the values and address of the elements in the array*/
#include <stdio.h>
#include <conio.h>

void main()
{
    int a[6] = {10, 20, 30, 40, 50, 60};
    int *p;
    int i;
    p = a;
    for (i = 0; i < 6; i++)
    {
        printf("%d", *p);   /* value of the current array element */
        printf("%u", p);    /* address of the current array element */
        p++;                /* move to the next element */
    }
    getch();
}
/* Sum of elements in the Array*/
#include <stdio.h>
#include <conio.h>

void main()
{
    int a[10];
    int i, sum = 0;
    int *ptr;
    printf("Enter 10 elements:\n");
    for (i = 0; i < 10; i++)
        scanf("%d", &a[i]);
    ptr = a;   /* a is equivalent to &a[0] */
    for (i = 0; i < 10; i++)
    {
        sum = sum + *ptr;   /* *ptr is the content pointed to by ptr */
        ptr++;
    }
    printf("The sum of array elements is %d", sum);
}
/*Sort the elements of array using pointers*/
#include <stdio.h>

int main() {
    int i, j, temp1, temp2;
    int arr[8] = {5, 3, 0, 2, 12, 1, 33, 2};
    int *ptr;
    for (i = 0; i < 7; i++) {
        for (j = 0; j < 7 - i; j++) {
            if (*(arr + j) > *(arr + j + 1)) {
                /* swap arr[j] and arr[j+1] through the pointer */
                ptr = arr + j;
                temp1 = *ptr++;   /* temp1 = arr[j], ptr now at arr[j+1] */
                temp2 = *ptr;     /* temp2 = arr[j+1] */
                *ptr-- = temp1;   /* arr[j+1] = temp1, ptr back at arr[j] */
                *ptr = temp2;     /* arr[j] = temp2 */
            }
        }
    }
    for (i = 0; i < 8; i++) {
        printf(" %d", arr[i]);
    }
}
4 Pointers and Multi-dimensional Arrays
The array name itself points to the base address of the array.
Example:
int a[2][3];
int (*p)[3];   /* pointer to an array of 3 ints, not an array of pointers */
p = a;         /* p points to a[0] */
/*Displaying the values in the 2-d array*/
#include <stdio.h>
#include <conio.h>

void main()
{
    int a[2][2] = {{10, 20}, {30, 40}};
    int (*p)[2];   /* pointer to a row of 2 ints */
    int i, j;
    p = a;
    for (i = 0; i < 2; i++)
    {
        for (j = 0; j < 2; j++)
        {
            printf("%d", *(*(p + i) + j));   /* value of element a[i][j] */
        }
    }
    getch();
}
5 Dynamic Memory Allocation
The process of allocating memory during program execution is called dynamic memory allocation.
Dynamic memory allocation functions
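The standard library declares four such functions in <stdlib.h>: malloc(), calloc(), realloc() and free(). A minimal sketch of allocating and releasing an integer array at run time:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 5;
    int *p = malloc(n * sizeof(int));   /* allocate space for 5 ints at run time */
    if (p == NULL)
        return 1;                       /* allocation can fail */
    for (int i = 0; i < n; i++)
        p[i] = i * i;
    printf("%d\n", p[4]);               /* prints 16 */
    free(p);                            /* release the memory */
    return 0;
}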
gatsby-plugin-image
Adding responsive images to your site while maintaining high performance scores can be difficult to do manually. The Gatsby Image plugin handles the hard parts of producing images in multiple sizes and formats for you!
For full documentation on all configuration options, see the Gatsby Image Plugin reference guide
Contents
- Installation
- Using the Gatsby Image components
- Customizing the default options
- Migrating to gatsby-plugin-image
Installation
- Install gatsby-plugin-image and gatsby-plugin-sharp. Additionally install gatsby-source-filesystem if you are using static images, and gatsby-transformer-sharp if you are using dynamic images.

npm install gatsby-plugin-image gatsby-plugin-sharp gatsby-source-filesystem gatsby-transformer-sharp

- Add the plugins to your gatsby-config.js:
module.exports = {
  plugins: [
    `gatsby-plugin-image`,
    `gatsby-plugin-sharp`,
    `gatsby-transformer-sharp`, // Needed for dynamic images
  ],
}
Using the Gatsby Image components
Deciding which component to use
The Gatsby Image plugin includes two image components: one for static and one for dynamic images. If the image is the same every time the component is used, use the StaticImage component:
import { StaticImage } from "gatsby-plugin-image"

export function Dino() {
  return <StaticImage src="../images/dino.png" alt="A dinosaur" />
}
If you are using a remote image, pass the image URL in the src prop:

import { StaticImage } from "gatsby-plugin-image"

export function Kitten() {
  return <StaticImage src="" alt="A kitten" />
}
import { StaticImage } from "gatsby-plugin-image"

export function Dino() {
  return (
    <StaticImage
      src="../images/dino.png"
      alt="A dinosaur"
      placeholder="blurred"
      layout="fixed"
      width={200}
      height={200}
    />
  )
}
This component renders a 200px by 200px image of a dinosaur. Before loading, it will have a blurred, low-resolution placeholder. It uses the "fixed" layout, which means the image does not resize with its container.
Restrictions on using StaticImage

There are a few technical restrictions to the way you can pass props into StaticImage. Most importantly, you can't use any of the parent component's props. For more information, refer to the Gatsby Image plugin reference guide. If you find yourself wishing you could use a prop passed from a parent component for the image, then you likely need a dynamic image. Add the image to your page query:
query {
  blogPost(id: { eq: $Id }) {
    title
    body
    avatar {
      childImageSharp {
        gatsbyImageData(width: 200)
      }
    }
  }
}
Configure your image.

For all the configuration options, see the Gatsby Image plugin reference guide.

query {
  blogPost(id: { eq: $Id }) {
    title
    body
    author
    avatar {
      childImageSharp {
        gatsbyImageData(
          width: 200
          placeholder: BLURRED
          formats: [AUTO, WEBP, AVIF]
        )
      }
    }
  }
}
import { graphql } from "gatsby"
import { GatsbyImage, getImage } from "gatsby-plugin-image"

function BlogPost({ data }) {
  const image = getImage(data.blogPost.avatar)
  return (
    <section>
      <h2>{data.blogPost.title}</h2>
      <GatsbyImage image={image} alt={data.blogPost.author} />
      <p>{data.blogPost.body}</p>
    </section>
  )
}

export const pageQuery = graphql`
  query {
    blogPost(id: { eq: $Id }) {
      title
      body
      author
      avatar {
        childImageSharp {
          gatsbyImageData(
            width: 200
            placeholder: BLURRED
            formats: [AUTO, WEBP, AVIF]
          )
        }
      }
    }
  }
`
For the full APIs, see the Gatsby Image plugin reference guide.

Customizing the default options

Shared defaults for all images can be set in the gatsby-plugin-sharp options in gatsby-config.js:
module.exports = {
  plugins: [
    {
      resolve: `gatsby-plugin-sharp`,
      options: {
        defaults: {
          formats: [`auto`, `webp`],
          placeholder: `dominantColor`,
          quality: 50,
          breakpoints: [750, 1080, 1366, 1920],
          backgroundColor: `transparent`,
          tracedSVGOptions: {},
          blurredOptions: {},
          jpgOptions: {},
          pngOptions: {},
          webpOptions: {},
          avifOptions: {},
        },
      },
    },
    `gatsby-transformer-sharp`,
    `gatsby-plugin-image`,
  ],
}
Migrating

Main article: Migrating from gatsby-image to gatsby-plugin-image. If you are using the old gatsby-image package, you can run a codemod to update your code:
npx gatsby-codemods gatsby-plugin-image
This will convert all GraphQL queries and components to use the new plugin. For more details, see the migration guide.
poseLib (old)
Important:
Please note that poseLib is “honor-based” software; a donation system. It means that if poseLib is useful to you or your studio, you can make a donation to reflect your satisfaction. Thanks for using poseLib! 😀
(The Paypal button code now works!)
Updated: 31 October 2010
DOWNLOAD:
Compatibility:
(Maya 6.0, 7.0, 8.0, 8.5, 2008, and 2009)
This version is not supported anymore. For a Maya 2011 and up version, please click here.
Updates/fixes in version: 4.4.2h:
- Fixed a bug on Maya 2008 which caused the buttons to be invisible when creating a new pose. A crash bug remains on Maya 2010 (and possibly 2009).
(See the complete history at the bottom of this article)
INFO:
Here is a diagram of the way things are organized (you don’t have to use disk D:).
Warning: Please note that the characters and category lists reflect REAL directories on your hard drive/network. There shouldn’t be any real risk since poseLib will only add “.deleted” at the end of the directory’s (or pose’s) name if you delete them, but that could be a problem in itself.
In short, if you see a list of all your projects coming up in the characters list, it is not a good idea to “delete” them; It just means the poseLib path is not set correctly. You can do so in the options window.
Also, note that throughout this documentation I use the term “character” in the loose sense, not in the “Maya Character” specific sense.
FEATURES:
- and save their position by clicking on “Save Preferences”.
SETUP:
When you launch poseLib for the first time, it looks at your current project and creates a directory in there, e.g.:
yourCurrentProject/poseLib/
If you want the directory to be created somewhere else, just modify the line at the very beginning of the script that says:
$defaultPath = $currentProject + "/poseLib/";
… Or simply change it in the Options Window!
It would probably be wise to start by setting up a character name and categories related to that character (more on that later), but you can always rename or move things around later anyway… 😉
WORKFLOW:
Creating a new pose:
- Select the object(s) (it can be anything. e.g.: your character’s controls) for which you want to record a pose.
- Click on the “Create New Pose” button.
- Type in the name for the pose.
- Move the camera in the icon view and click on the “Preview Icon” button.
- If you like what you see, click the “Create Pose” button.
- If you want to change the icon, click the “Reset View” button and start again at step 4.
Once the pose is created, it will appear automatically in the list of poses you can see (they’re sorted in alphabetical order).
Applying a pose:
Just click on the pose icon. It works differently depending on what you’ve selected:
- If you don’t have anything selected, poseLib will apply the entire pose.

or

- If you’ve selected some of the controls (but not all), the pose will just be applied to those.
Editing a pose:
Right-click on the pose icon; A menu will appear, letting you: Rename, Move, Replace, Delete, or Edit the pose.
Replacing the pose simply means that you don’t have to go through the process of re-capturing a new icon, e.g. when you’ve barely tweaked a pose and it wouldn’t matter.

The edit sub-menu will let you: Select the pose’s controls (if you don’t remember what was part of the pose), Add/Replace the selected controls (they’ll be added if they aren’t part of the pose, or replaced if they are), or Remove the selected controls. The “Output Pose Info” item will list the controls that are part of the pose in the script editor and tell you how many there are.
REFERENCING:
When using a referenced rig (with a namespace like in “toto:myTotoCharacter“), you need to check the box “Use Namespace”. What that does is add a namespace (and a “:”) each time it applies a pose.
The namespace option plays no role when saving poses. Any existing namespace is discarded to only record a “clean” name. Again, the namespace option is only relevant when applying poses.
If you check the box “Use Current Character Name”, it means that the namespace should be the same as the character menu name. If you want the namespace to be different (like when applying a pose from a different character), then uncheck the box and specify the namespace.
OPTIONS:
Now if you want to create a new entry for a character name or a category, just click on the “Edit Options” button.
TROUBLESHOOTING:
- Compatibility: poseLib is unfortunately NOT compatible with OSX yet. The poses are recorded but the icons don’t appear!
- PoseLib does not support recording a pose with multiple rigs selected at the same time if the rigs have similar control names. Also keep in mind that poseLib discards the namespace when recording a pose, only using the control name. This is the price to pay for versatility!
- If you click on a pose and the effect is not the expected “full” pose, check you don’t have any channels selected in the channel box; if you do, the pose is applied only to those channels.
- When middle-mouse moving a pose icon, if the icon is not moved to the proper position, just resize the poseLib window so that there’s no scroll bar on the side. Then you’ll be able to rearrange the icon’s position without problem. This is unfortunately an official Maya bug that I can’t fix…
Don’t hesitate to drop me a mail to tell me if there’s a problem with this script…
seith[at]seithcg[dot]com
History:
This guide shows you how to integrate banner ads from Ad Manager into an iOS app. In addition to code snippets and instructions, it includes information about sizing banners properly and links to additional resources.
Prerequisites

When building and testing your app, make sure you use test ads rather than live, production ads. The easiest way to load test ads is to use the dedicated test ad unit ID for iOS banners:

/6499/example/banner
Create a GAMBannerView
Banner ads are displayed in GAMBannerView objects, so the first step toward integrating banner ads is to include a GAMBannerView in your view hierarchy. This is typically done either with the Interface Builder or programmatically.
Interface Builder

A GAMBannerView can be added to a storyboard or xib file like any typical view.

Programmatically

A GAMBannerView can also be instantiated directly.
Here's an example of how to create a GAMBannerView, aligned to the bottom center of the safe area of the screen, with a banner size of 320x50:
Swift
import GoogleMobileAds
import UIKit

class ViewController: UIViewController {

  var bannerView: GAMBannerView!

  override func viewDidLoad() {
    super.viewDidLoad()
    // In this case, we instantiate the banner with desired ad size.
    bannerView = GAMBannerView(adSize: GADAdSizeBanner)
    addBannerViewToView(bannerView)
  }

  func addBannerViewToView(_ bannerView: GAMBannerView) {
    bannerView.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(bannerView)
    // Pin the banner to the bottom of the safe area, centered horizontally.
    view.addConstraints(
      [NSLayoutConstraint(item: bannerView,
                          attribute: .bottom,
                          relatedBy: .equal,
                          toItem: view.safeAreaLayoutGuide,
                          attribute: .bottom,
                          multiplier: 1,
                          constant: 0),
       NSLayoutConstraint(item: bannerView,
                          attribute: .centerX,
                          relatedBy: .equal,
                          toItem: view,
                          attribute: .centerX,
                          multiplier: 1,
                          constant: 0)
      ])
  }
}
Objective-C
@import GoogleMobileAds;

@interface ViewController ()

@property(nonatomic, strong) GAMBannerView *bannerView;

@end

@implementation ViewController

- (void)viewDidLoad {
  [super viewDidLoad];
  // In this case, we instantiate the banner with desired ad size.
  self.bannerView = [[GAMBannerView alloc] initWithAdSize:GADAdSizeBanner];
  [self addBannerViewToView:self.bannerView];
}

- (void)addBannerViewToView:(GAMBannerView *)bannerView {
  bannerView.translatesAutoresizingMaskIntoConstraints = NO;
  [self.view addSubview:bannerView];
  // Pin the banner to the bottom of the safe area, centered horizontally.
  [self.view addConstraints:@[
    [NSLayoutConstraint constraintWithItem:bannerView
                                 attribute:NSLayoutAttributeBottom
                                 relatedBy:NSLayoutRelationEqual
                                    toItem:self.view.safeAreaLayoutGuide
                                 attribute:NSLayoutAttributeBottom
                                multiplier:1
                                  constant:0],
    [NSLayoutConstraint constraintWithItem:bannerView
                                 attribute:NSLayoutAttributeCenterX
                                 relatedBy:NSLayoutRelationEqual
                                    toItem:self.view
                                 attribute:NSLayoutAttributeCenterX
                                multiplier:1
                                  constant:0]
  ]];
}

@end

GAMBannerView properties
In order to load and display ads, GAMBannerView requires a few properties to be set.

- rootViewController: This view controller is used to present an overlay when the ad is clicked. It should normally be set to the view controller that contains the GAMBannerView.
- adUnitID: This is the ad unit ID from which the GAMBannerView should load ads.
Here's a code example showing how to set the two required properties in the viewDidLoad method of a UIViewController:
Swift
override func viewDidLoad() {
  super.viewDidLoad()
  ...
  bannerView.adUnitID = "/6499/example/banner"
  bannerView.rootViewController = self
}
Objective-C
- (void)viewDidLoad {
  [super viewDidLoad];
  ...
  self.bannerView.adUnitID = @"/6499/example/banner";
  self.bannerView.rootViewController = self;
}
Load an ad
Once the GAMBannerView is in place and its properties configured, it's time to load an ad. This is done by calling loadRequest: on a GAMRequest object:
Swift
override func viewDidLoad() {
  super.viewDidLoad()
  ...
  bannerView.adUnitID = "/6499/example/banner"
  bannerView.rootViewController = self
  bannerView.load(GAMRequest())
}
Objective-C
- (void)viewDidLoad {
  [super viewDidLoad];
  ...
  self.bannerView.adUnitID = @"/6499/example/banner";
  self.bannerView.rootViewController = self;
  [self.bannerView loadRequest:[GAMRequest request]];
}
Ad events

Through the GADBannerViewDelegate protocol, you can listen for lifecycle events, such as when an ad is received or fails to load. To register for banner ad events, set the delegate property on GAMBannerView to an object that implements the protocol:

Swift

import GoogleMobileAds

class ViewController: UIViewController, GADBannerViewDelegate {

  var bannerView: GAMBannerView!

  override func viewDidLoad() {
    super.viewDidLoad()
    ...
    bannerView.delegate = self
  }
}
Objective-C
@import GoogleMobileAds;

@interface ViewController () <GADBannerViewDelegate>

@property(nonatomic, strong) GAMBannerView *bannerView;

@end

@implementation ViewController

- (void)viewDidLoad {
  [super viewDidLoad];
  ...
  self.bannerView.delegate = self;
}

@end

Each of the methods in GADBannerViewDelegate is optional, so you only need to implement the ones you want. The example below implements each method and logs a message:

Swift
func bannerViewDidReceiveAd(_ bannerView: GADBannerView) {
  print("bannerViewDidReceiveAd")
}

func bannerView(_ bannerView: GADBannerView, didFailToReceiveAdWithError error: Error) {
  print("bannerView:didFailToReceiveAdWithError: \(error.localizedDescription)")
}

func bannerViewDidRecordImpression(_ bannerView: GADBannerView) {
  print("bannerViewDidRecordImpression")
}

func bannerViewWillPresentScreen(_ bannerView: GADBannerView) {
  print("bannerViewWillPresentScreen")
}

func bannerViewWillDismissScreen(_ bannerView: GADBannerView) {
  print("bannerViewWillDismissScreen")
}

func bannerViewDidDismissScreen(_ bannerView: GADBannerView) {
  print("bannerViewDidDismissScreen")
}
Objective-C
- (void)bannerViewDidReceiveAd:(GADBannerView *)bannerView {
  NSLog(@"bannerViewDidReceiveAd");
}

- (void)bannerView:(GADBannerView *)bannerView didFailToReceiveAdWithError:(NSError *)error {
  NSLog(@"bannerView:didFailToReceiveAdWithError: %@", [error localizedDescription]);
}

- (void)bannerViewDidRecordImpression:(GADBannerView *)bannerView {
  NSLog(@"bannerViewDidRecordImpression");
}

- (void)bannerViewWillPresentScreen:(GADBannerView *)bannerView {
  NSLog(@"bannerViewWillPresentScreen");
}

- (void)bannerViewWillDismissScreen:(GADBannerView *)bannerView {
  NSLog(@"bannerViewWillDismissScreen");
}

- (void)bannerViewDidDismissScreen:(GADBannerView *)bannerView {
  NSLog(@"bannerViewDidDismissScreen");
}
You may want to delay adding the GAMBannerView to the view hierarchy until after an ad is received. You can do this by listening for the bannerViewDidReceiveAd: event:
Swift
func bannerViewDidReceiveAd(_ bannerView: GADBannerView) {
  // Add banner to view and add constraints as above.
  addBannerViewToView(bannerView)
}
Objective-C
- (void)bannerViewDidReceiveAd:(GAMBannerView *)bannerView {
  // Add bannerView to view and add constraints as above.
  [self addBannerViewToView:self.bannerView];
}
Animating a banner ad
You can also use the
bannerViewDidReceiveAd: event to animate a banner ad
once it's returned, as shown in the following example:
Swift
func bannerViewDidReceiveAd(_ bannerView: GADBannerView) {
  bannerView.alpha = 0
  UIView.animate(withDuration: 1, animations: {
    bannerView.alpha = 1
  })
}
Objective-C
- (void)bannerViewDidReceiveAd:(GAMBannerView *)bannerView {
  bannerView.alpha = 0;
  [UIView animateWithDuration:1.0 animations:^{
    bannerView.alpha = 1;
  }];
}
Pausing and resuming the app
The GADBannerViewDelegate protocol has methods to notify you of events, such as when a click causes an overlay to be presented or dismissed.
Custom ad size
In addition to the standard ad units, Google Ad Manager allows you to serve any sized ad unit into an app. The ad size (width, height) defined for an ad request should match the dimensions of the ad view (GAMBannerView) displayed in the app. To set a custom size, use GADAdSizeFromCGSize.
Swift
// Define custom GADAdSize of 250x250 for GAMBannerView.
let customAdSize = GADAdSizeFromCGSize(CGSize(width: 250, height: 250))
bannerView = GAMBannerView(adSize: customAdSize)
Objective-C
// Define custom GADAdSize of 250x250 for GAMBannerView.
GADAdSize customAdSize = GADAdSizeFromCGSize(CGSizeMake(250, 250));
self.bannerView = [[GAMBannerView alloc] initWithAdSize:customAdSize];
See the Ad Manager Multiple Ad Sizes example for an implementation of custom ad size in the iOS API Demo app.
Multiple ad sizes
Ad Manager allows you to specify multiple ad sizes which may be eligible to serve into a GAMBannerView. There are three steps needed in order to use this feature:
In the Ad Manager UI, create a line item targeting the same ad unit that is associated with different size creatives.
In your app, set the validAdSizes property on GAMBannerView:
Swift
// Define the list of valid ad sizes for this banner.
bannerView.validAdSizes = [
  NSValueFromGADAdSize(GADAdSizeBanner),
  NSValueFromGADAdSize(GADAdSizeMediumRectangle),
  NSValueFromGADAdSize(GADAdSizeFromCGSize(CGSize(width: 120, height: 20)))
]
Objective-C
// Define the list of valid ad sizes for this banner.
self.bannerView.validAdSizes = @[
  NSValueFromGADAdSize(GADAdSizeBanner),
  NSValueFromGADAdSize(GADAdSizeMediumRectangle),
  NSValueFromGADAdSize(GADAdSizeFromCGSize(CGSizeMake(120, 20)))
];
Implement the GADAdSizeDelegate method to detect an ad size change.
Swift
public func bannerView(_ bannerView: GADBannerView, willChangeAdSizeTo size: GADAdSize)
Objective-C
- (void)bannerView:(GAMBannerView *)view willChangeAdSizeTo:(GADAdSize)size;
Remember to set the delegate before making the request for an ad.
Swift
bannerView.adSizeDelegate = self
Objective-C
self.bannerView.adSizeDelegate = self;
See the Ad Manager Multiple Ad Sizes example for an implementation of multiple ad sizes in the iOS API Demo app.
Manual impression counting
You can manually send impression pings to Ad Manager if you have special conditions for when an impression should be recorded. This can be done by first enabling a GAMBannerView for manual impressions prior to loading an ad:
Swift
bannerView.enableManualImpressions = true
Objective-C
self.bannerView.enableManualImpressions = YES;
When you determine that an ad has been successfully returned and is on screen, you can manually fire an impression:
Swift
bannerView.recordImpression()
Objective-C
[self.bannerView recordImpression];
App events
App events allow you to create ads that can send messages to their app code. The app can then take actions based on these messages.
You can listen for Ad Manager-specific app events using GADAppEventDelegate. These events may occur at any time during the ad's lifecycle, even before the GADBannerViewDelegate object's bannerViewDidReceiveAd: is called.
Swift
// Implement your app event within this method. The delegate will be
// notified when the SDK receives an app event message from the ad.

// Called when the banner receives an app event.
optional public func bannerView(_ banner: GAMBannerView,
    didReceiveAppEvent name: String, withInfo info: String?)
Objective-C
// Implement your app event within this method. The delegate will be
// notified when the SDK receives an app event message from the ad.

@optional
// Called when the banner receives an app event.
- (void)bannerView:(GAMBannerView *)banner
    didReceiveAppEvent:(NSString *)name
    withInfo:(NSString *)info;
Your app event methods can be implemented in a view controller:
Swift
import GoogleMobileAds

class ViewController: UIViewController, GADAppEventDelegate {
}
Objective-C
@import GoogleMobileAds;

@interface ViewController : UIViewController <GADAppEventDelegate> {
}
@end
Remember to set the delegate using the appEventDelegate property before making the request for an ad.
Swift
bannerView.appEventDelegate = self
Objective-C
self.bannerView.appEventDelegate = self;
Here is an example showing how to change the background color of your app by specifying the color through an app event:
Swift
func bannerView(_ banner: GAMBannerView, didReceiveAppEvent name: String,
    withInfo info: String?) {
  if name == "color" {
    guard let info = info else { return }
    switch info {
    case "green":
      // Set background color to green.
      view.backgroundColor = UIColor.green
    case "blue":
      // Set background color to blue.
      view.backgroundColor = UIColor.blue
    default:
      // Set background color to black.
      view.backgroundColor = UIColor.black
    }
  }
}
Objective-C
- (void)bannerView:(GAMBannerView *)banner
    didReceiveAppEvent:(NSString *)name
    withInfo:(NSString *)info {
  if ([name isEqual:@"color"]) {
    if ([info isEqual:@"green"]) {
      // Set background color to green.
      self.view.backgroundColor = [UIColor greenColor];
    } else if ([info isEqual:@"blue"]) {
      // Set background color to blue.
      self.view.backgroundColor = [UIColor blueColor];
    } else {
      // Set background color to black.
      self.view.backgroundColor = [UIColor blackColor];
    }
  }
}
The corresponding creative sends color app event messages to the appEventDelegate. See the Ad Manager App Events example for an implementation of app events in the iOS API Demo app.
Additional resources
Examples on GitHub
Banner ads example: Swift | Objective-C
Advanced features demo: Swift | Objective-C
User Devices
Adding custom devices for use in the labscript-suite can be done using the
user_devices mechanism.
This mechanism provides a simple way to add support for a new device without directly interacting with the labscript-devices repository.
This is particularly useful when using standard installations of labscript, using code that is proprietary in nature, or code that, while functional, is not mature enough for widespread dissemination.
This is done by adding the labscript-device code into the userlib/user_devices folder. Using the custom device in a labscript connection table is then done by:
from user_devices.MyCustomUserDevice.labscript_devices import MyCustomUserDevice
This import statement assumes your custom device follows the new device structure organization.
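For reference, a device following that structure would typically be laid out something like this (the blacs_tabs.py and blacs_workers.py modules are an assumption based on the usual labscript device layout, and only apply if the device is controlled by blacs):

userlib/
└── user_devices/
    └── MyCustomUserDevice/
        ├── __init__.py
        ├── labscript_devices.py
        ├── blacs_tabs.py
        └── blacs_workers.py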
Note that both the userlib path and the user_devices folder name can be custom configured in the labconfig.ini file. The user_devices folder must be in the userlib path. If a different user_devices folder name is used, the import uses that folder name in place of user_devices in the above import statement.
Note that we highly encourage everyone that adds support for new hardware to consider making a pull request to labscript-devices so that it may be added to the mainline and more easily used by other groups.
3rd Party Devices
Below is a list of 3rd party devices developed by users of the labscript-suite that can be used via the user_devices mechanism described above.
These repositories are not tested or maintained by the labscript-suite development team.
As such, there is no guarantee they will work with current or future versions of the labscript-suite.
They are also not guaranteed to be free of lab-specific implementation details that may prevent direct use in your apparatus.
They are provided by users to benefit the community in supporting new and/or unusual devices, and can often serve as a good reference when developing your own devices.
Please direct any questions regarding these repositories to their respective owners.
If you would like to add your repository to this list, please contact us or make a pull request.
Function ecs_run_aperiodic
Synopsis
#include <include/flecs.h>

FLECS_API void ecs_run_aperiodic(ecs_world_t *world, ecs_flags32_t flags)
Description
Force aperiodic actions. The world may delay certain operations until they are necessary for the application to function correctly. This may cause observable side effects, such as delayed triggering of events, which can be inconvenient when, for example, running a test suite.
The flags parameter specifies which aperiodic actions to run. Specify 0 to run all actions. Supported flags start with 'EcsAperiodic'. Flags identify internal mechanisms and may change unannounced.
- Parameters
  - world: The world.
  - flags: The flags specifying which actions to run.
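A minimal usage sketch, assuming a world created with ecs_init() (passing 0 forces all pending aperiodic actions to run):

#include <flecs.h>

int main(void) {
    ecs_world_t *world = ecs_init();
    /* ... perform operations whose side effects may be deferred ... */
    ecs_run_aperiodic(world, 0);  /* 0 = run all aperiodic actions */
    return ecs_fini(world);
}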
Source
Line 1472 in include/flecs.h.
This is the third of a series of 9 posts.
1. Introduction to object detection
2. Data set preperation and annotation Using labelImg
3. Building your object detection model from scratch using image pyramids and sliding windows (this post)

In this post we will build a custom object detector from scratch, progressively, using different methods like pyramid segmentation, sliding windows, and non-maxima suppression. These are legacy methods which lay the foundation for many modern object detection methods. Let us look at the processes which will be covered in building an object detector from scratch.
- Prepare the train and test sets from the annotated images ( Covered in the last post)
- Build a classifier for detecting potholes
- Build the inference pipeline using image pyramids and sliding window techniques to predict bounding boxes for potholes
- Optimise the bounding boxes using Non Maxima suppression.
We will be covering all the topics from step 2 in this post. These posts are heavily inspired by the following posts.
Let us dive in.
Training a classifier on the data
In the last post we prepared our training data from positive and negative examples and then saved the data in h5py format. In this post we will use that data to build our pothole classifier. The classifier we will be building is a binary classifier which has a positive class and a negative class. We will be training this classifier using an SVM model. The choice of an SVM model is based on some earlier work done in this space; however, I would urge you to experiment with other classification models as well.
We will start off from where we stopped in the last section. We will read the database from disk and extract the labels and data
# Read the database from disk
db = h5py.File(outputPath, "r")
# Extract the labels and data
(labels, data) = (db["pothole_features_all"][:, 0], db["pothole_features_all"][:, 1:])
# Close the database
db.close()
print(labels.shape)
print(data.shape)
We will now use the data and labels to build the classifier
# Build the SVM model
model = SVC(kernel="linear", C=0.01, probability=True, random_state=123)
model.fit(data, labels)
Once the model is fit we will save the model as a pickle file in the output folder.
# Save the model in the output folder
modelPath = 'data/models/model.cpickle'
f = open(modelPath, "wb")
f.write(pickle.dumps(model))
f.close()
Please remember to create the 'models' folder inside the 'data' folder on your local drive before saving the model. Once the model is saved you will be able to see the model pickle file within the path you specified.
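If you prefer to create the folder from code, a one-liner along these lines works too:

import os
os.makedirs('data/models', exist_ok=True)  # create the folder if it does not already exist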
Now that we have build the classifier, we will use this classifier for object detection in the next section. We will be covering two important concepts in the next section which is important for object detection, Image pyramids and Sliding windows. Let us get familiar with those concepts first.
Image Pyramids and Sliding window techniques
Let us try to understand the concept of image pyramids with an example. Let us assume that we have a window of fixed size and potholes are detected only if they fit perfectly inside the window. Let us look at how well the potholes are detected when using a fixed size window. Take the case of
layer1 of the image below. We can see that the fixed sized window was able to detect one of the potholes which was further down the road as it fit well within the window size, however the bigger pothole which is at the near end the image is not detected because the window was obviously smaller than size of the pothole.
As a way to solve this, let us progressively reduce the size of the image, and try to fit the potholes to the fixed window size, as shown in the figure below. With the reduction in size of the image, the object we want to detect also reduces in size. Since our detection window remains the same, we are able to detect more potholes including the biggest one, when the image sizes are reduced. Thereby we will be able to detect most of the potholes which otherwise would not have been possible with a fixed size window and a constant size image. This is the concept behind image pyramids.
The name image pyramids signifies the fact that, if the scaled images are stacked vertically, then it will fit inside a pyramid as shown in the below figure.
The implementation of image pyramids can be done easily using scikit-image. There are many different types of image pyramid implementations. Some of the prominent ones are Gaussian pyramids and Laplacian pyramids. You can read about these pyramids in the link given here. Let us quickly look at the implementation of pyramids.
from skimage.transform import pyramid_gaussian

for imgPath in allFiles[-2:-1]:
    # Read the image
    image = cv2.imread(imgPath)
    # loop over the layers of the image pyramid and display them
    for (i, layer) in enumerate(pyramid_gaussian(image, downscale=1.2)):
        # Break the loop if the image size is less than our window size
        if layer.shape[1] < 80 or layer.shape[0] < 40:
            break
        print(layer.shape)
From the output we can see how the images are scaled down progressively.
Having seen the image pyramids, it's time to discuss sliding windows. Sliding windows are effective methods to identify objects in an image at various scales and locations. As the name suggests, this method involves a window of standard length and width which slides across an image to extract features. These features will be used in a classifier to identify objects of interest. Let us look at the code block below to understand the dynamics of the sliding window method.
# Read the image
image = cv2.imread(allFiles[-2])
# Define the window size
windowSize = [80, 40]
# Define the step size
stepSize = 40
# slide a window across the image
for y in range(0, image.shape[0], stepSize):
    for x in range(0, image.shape[1], stepSize):
        # Clone the image
        clone = image.copy()
        # Draw a rectangle on the image
        cv2.rectangle(clone, (x, y), (x + windowSize[0], y + windowSize[1]), (0, 255, 0), 2)
        plt.imshow(clone)
        plt.show()
To implement the sliding window we need to understand some of the parameters which are used. The first is the window size, which is the dimension of the fixed window we will slide across the image. We earlier calculated the size of this window to be [80, 40], which was the average size of a pothole in our distribution. The second parameter is the step size. The step size is the number of pixels we need to step to move the fixed window across the image. The smaller the step size, the more pixels we will have to move through, and vice versa. We don't want to slide through every pixel, and we definitely don't want to skip important features, so the step size is a necessary parameter. An ideal step size depends on the image size. For our case let us experiment with the 'y' coordinate size of our fixed window, which is 40. I would encourage you to experiment with different step sizes and observe the results before finalising the step size.
To implement this method, we first iterate through the vertical distance starting from 0 to the height of the image, in increments of the step size. We have an inner iterative loop which loops through the horizontal direction, ranging from 0 to the width of the image in increments of the step size. For each of these iterations we capture the x and y coordinates and then extract a rectangle with the same shape as the fixed window. In the above implementation we only draw a rectangle on the image to understand the dynamics. However, when we implement this along with image pyramids, we will crop an image of the dimension of the window size as we slide across the image. Let us see some sample outputs of the sliding window.
From the above output we can see how the fixed window slides across the image both horizontally and vertically with a step size, extracting features from the image of the same size as the fixed window.
So far we have seen the pyramid and the sliding window implementations independently. These two methods have to be integrated to use it as an object detector. However for integrating them we need to convert the sliding window method into a function. Let us look at the function to implement sliding windows.
# Function to implement sliding window
def slidingWindow(image, stepSize, windowSize):
    # slide a window across the image
    for y in range(0, image.shape[0], stepSize):
        for x in range(0, image.shape[1], stepSize):
            # yield the current window
            yield (x, y, image[y:y + windowSize[1], x:x + windowSize[0]])
The function is not very different from what we implemented earlier. The only difference is that as the output we yield a tuple of the x, y coordinates and the crop of the image of the same size as the window size. Next we will see how we integrate this function with the image pyramids to implement our custom object detector.
Building the object detector
Its now time to bring all what we defined into creating our object detector. As a first step let us load the model which we saved during the training phase
# Listing the path where we stored the model
modelPath = 'data/models/model.cpickle'
# Loading the model we trained earlier
model = pickle.loads(open(modelPath, "rb").read())
model
Now let us look at the complete code to implement our object detector
# Initialise lists to store the bounding boxes and probabilities
boxes = []
probs = []
# Define the HOG parameters
orientations = 12
pixelsPerCell = (4, 4)
cellsPerBlock = (2, 2)
# Define the fixed window size
windowSize = (80, 40)
# Pick a random image from the image path to check our prediction
imgPath = sample(allFiles, 1)[0]
# Read the image
image = cv2.imread(imgPath)
# Converting the image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# loop over the image pyramid
for (i, layer) in enumerate(pyramid_gaussian(image, downscale=1.2)):
    # Identify the current scale of the image
    scale = gray.shape[0] / float(layer.shape[0])
    # loop over the sliding window for each layer of the pyramid
    for (x, y, window) in slidingWindow(layer, stepSize=40, windowSize=(80, 40)):
        # if the current window does not meet our desired window size, ignore it
        if window.shape[0] != windowSize[1] or window.shape[1] != windowSize[0]:
            continue
        # Let us now extract the hog features of this window within the image
        feat = hogFeatures(window, orientations, pixelsPerCell, cellsPerBlock, normalize=True).reshape(1, -1)
        # Get the prediction probabilities for the positive class (potholes)
        prob = model.predict_proba(feat)[0][1]
        # Check if the probability is greater than
        # a threshold probability
        if prob > 0.95:
            # Extract the (x, y)-coordinates of the bounding box using the current scale
            # Starting coordinates
            (startX, startY) = (int(scale * x), int(scale * y))
            # Ending coordinates
            endX = int(startX + (scale * windowSize[0]))
            endY = int(startY + (scale * windowSize[1]))
            # update the list of bounding boxes and probabilities
            boxes.append((startX, startY, endX, endY))
            probs.append(prob)

# loop over the bounding boxes and draw them
for (startX, startY, endX, endY) in boxes:
    cv2.rectangle(image, (startX, startY), (endX, endY), (0, 0, 255), 2)

plt.imshow(image, aspect='equal')
plt.show()
To start off, we initialise two lists in lines 2-3, where we will store the bounding box coordinates and the probabilities which indicate our confidence that a pothole has been detected in the image.
We also define some important parameters which are required for the HOG feature extraction method in lines 5-7:
- orientations
- pixels per Cell
- Cells per block
We also define the size of our fixed window in line 9
To test our process, we randomly sample an image from the list of images we have and then convert the image into grayscale in lines 11-15.

We then start the iterative loop to implement the image pyramids in line 17. For each iteration the input image is scaled down as per the scaling factor we defined. Next we calculate the running scale of the image in line 19. The scale is always the original shape divided by the scaled-down image. We need the scale to blow the x, y coordinates back up to the original size of the image later on.

Next we start the sliding window implementation in line 21. We provide the scaled-down version of the image as the input, along with the stepSize and the window size. The step size is the parameter which indicates by how much the window has to slide across the original image. The window size indicates the size of the sliding window. We saw the mechanics of these when we looked at the sliding window function.
In lines 23-24 we ensure that we only take images which meet our minimum size specification. For any image which passes the minimum size specification, HOG features are extracted in line 26. On the extracted HOG features, we make a prediction in line 28. The prediction gives the probability of whether the image is a pothole or not. We extract only the probability of the positive class. We then take only those images where the probability is greater than a threshold we have defined, in line 31. We use a high threshold because the distributions of our positive and negative images are very similar, so to ensure that we get only potholes, we set a higher threshold. The threshold has been arrived at after a fair bit of experimentation. I would encourage you to try different thresholds before finalising the one you want.
Once we get the predictions, we take the x and y coordinates and blow them up to the original size using the scale we calculated earlier, in lines 34-37. We find the starting coordinates and the ending coordinates and then append those coordinates to the lists we defined, in lines 39-40.
In lines 43-47, we loop through each of the coordinates and draw bounding boxes around the image.
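One assumption in the listing above is the hogFeatures helper, which was defined in the earlier post of this series. A minimal sketch of what such a helper might look like, built on skimage.feature.hog:

from skimage.feature import hog

def hogFeatures(image, orientations, pixelsPerCell, cellsPerBlock, normalize=True):
    # Extract a flattened HOG feature vector from the window
    feat = hog(image, orientations=orientations,
               pixels_per_cell=pixelsPerCell,
               cells_per_block=cellsPerBlock,
               transform_sqrt=normalize,
               block_norm="L1")
    return feat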
Let us look at the output we have got. We can see that there are multiple bounding boxes created around the areas where there are potholes. We can be happy that the object detector is doing its job by localising around the area of a pothole in most cases. However, there are examples where the detector has detected objects other than potholes. We will come to that issue later. Let us first address another important issue.
All the images have multiple overlapping bounding boxes. Having a lot of bounding boxes can sometimes be cumbersome, say if we want to calculate the area where the pothole is present. We need to find a way to reduce the number of overlapping bounding boxes. This is where we use a technique called non-maxima suppression. The objective of non-maxima suppression is to combine bounding boxes with significant overlap into a single bounding box. The method we will implement is inspired from this post.
Non Maxima Suppression
We will be implementing a customised method of non-maxima suppression through a function, maxOverlap, which eliminates any box whose overlap with the currently largest box exceeds a threshold of 0.50:

def maxOverlap(boxes):
    '''
    boxes : the coordinates of the predicted bounding boxes
    returns : the final list of boxes after suppression
    '''
    # Convert the bounding boxes into a numpy array
    boxes = np.array(boxes)
    # Initialise the list to store the selected boxes
    selected = []
    # Continue eliminating boxes till fewer than 2 remain
    while len(boxes) > 1:
        # Coordinates and areas of all the boxes
        x1 = boxes[:, 0]
        y1 = boxes[:, 1]
        x2 = boxes[:, 2]
        y2 = boxes[:, 3]
        area = (x2 - x1 + 1) * (y2 - y1 + 1)
        # Sort the boxes based on area, in ascending order
        idxs = np.argsort(area)
        # Coordinates of the box with the largest area
        last = idxs[-1]
        startX = boxes[last][0]
        startY = boxes[last][1]
        endX = boxes[last][2]
        endY = boxes[last][3]
        # Append the largest box to the selected list
        selected.append(boxes[last])
        # Initialise the list of boxes to be removed
        remove = []
        remove.append(last)
        # Loop over the other boxes to find overlap with the largest box
        for i in idxs[:-1]:
            # Coordinates of the overlapping portion of this box
            # with the largest box
            xx1 = max(startX, boxes[i][0])
            yy1 = max(startY, boxes[i][1])
            xx2 = min(endX, boxes[i][2])
            yy2 = min(endY, boxes[i][3])
            # Width and height of the overlapping portion
            w = max(0, xx2 - xx1 + 1)
            h = max(0, yy2 - yy1 + 1)
            # Area of the overlapping portion
            overlapArea = w * h
            # Ratio of the overlap area to the area of the current box
            ratio = overlapArea / float(area[i])
            # Mark the box for removal if the overlap exceeds the threshold
            if ratio > 0.50:
                remove.append(i)
        # Remove the overlapping boxes and the largest box from the list
        boxes = np.delete(boxes, remove, axis=0)
    # Add the last remaining box to the selected list
    if len(boxes) == 1:
        selected.append(boxes[0])
    return selected
The input to the function is the set of bounding boxes we got after our prediction. Let me give the big picture of what this implementation is about. We start with the box with the largest area and progressively eliminate boxes which have considerable overlap with it. We then take the remaining boxes after elimination and repeat the process of elimination till we get to the minimum number of boxes. Let us now see this implementation in the code above.
In line 7, we convert the bounding boxes into a numpy array and then initialise a list to store the bounding boxes we want to return in line 9.
Next, in line 11, we start the loop that continues eliminating boxes till the number of boxes which remain is less than 2.
In lines 13-17, we calculate the area of all the bounding boxes and then sort them in ascending order in line 19.
We then take the coordinates of the box with the largest area in lines 22-25 and then append the largest box to the selection list in line 27. We initialise a new list for the boxes which need to be removed and then include the largest box in the removal list in line 30.
We then start another iterative loop to find the overlap of the other bounding boxes with the largest box, in line 32. In lines 35-43, we find the coordinates of the overlapping portion of each of the other boxes with the largest box and take the area of the overlapping portion. In line 45 we find the ratio of the overlapping area to the original area of the bounding box we are iterating through. If the ratio is larger than a threshold value, we add that box to the removal list in lines 47-48, as it has good overlap with the largest box. After iterating through all the boxes in the list, we will have a list of boxes which have good overlap with the largest box. We then remove all those overlapping boxes and the current largest box from the original list of boxes in line 50. We continue this process till there are no more boxes to be removed. Finally we add the last remaining box to the selected list and then return the selection.
Let us implement this function and observe the result
# Get the selected list selected = maxOverlap(boxes)
Now let us look at different examples after non maxima suppression.
# Get the image again
image = cv2.imread(imgPath)
# Make a copy of the image
clone = image.copy()
for (startX, startY, endX, endY) in selected:
    cv2.rectangle(clone, (startX, startY), (endX, endY), (0, 255, 0), 2)
plt.imshow(clone, aspect='equal')
plt.show()
We can see that the bounding boxes are considerably reduced using our non maxima suppression implementation.
Improvement Opportunities
Even though we have reasonable detection effectiveness, is the model we built perfect? Absolutely not. Let us look at some of the major pitfalls.
Misclassifications of objects :
From the outputs, we can see that we have misclassified some of the objects.
Most of the misclassifications we have seen are for vegetation. There are also cases where road signs are misclassified as potholes.
A major reason we have misclassification is that our training data is limited. We used only 19 positive images and 20 negative examples, which is a very small data set for tasks like this. Considering that the data set is limited, the classifier has done a decent job. Also, for negative images, we need to include more variety, like road signs, vehicles, vegetation, etc., labelled as negative images. So with more positive images, and more negative images with a little more variety of the objects likely to be found on roads, the classification accuracy of the classifier will improve.
Another strategy is to experiment with different types of classifiers. In our example we used an SVM classifier. It would be worthwhile to use other binary classifiers, starting from logistic regression, naive Bayes, random forest, XGBoost, etc. I would encourage you to try different classifiers and then verify the results.
Non detection of positive classes
Along with misclassifications, we have also seen non detection of positive classes.
As seen from the examples, there has been non-detection in cases of potholes with water in them. In addition, some of the potholes which are further along the road are not detected.
These problems again can be corrected by including more variety in the positive images, for example potholes with water in them. It will also help to include images with potholes further away along the road. The other solution is to preprocess images with different techniques like smoothing and blurring, thresholding, gradient and edge detection, contours, histograms, etc. These methods will help in highlighting the areas with potholes, which will help in better detection. In addition, increasing the number of positive examples will also help in addressing the problems associated with non-detection.
What Next ?
The idea behind this post was to give you a perspective on building an object detector from scratch. This was also an attempt to give you experience working in cases where the data sets are limited and where you have to create the necessary data sets. I believe these exercises will equip you with the capabilities to deal with such issues in your projects.
Now that you have seen the basic ground-up approach, it is time to use this experience to learn more state-of-the-art techniques. In the next post we will start with more advanced techniques, and we will also be using transfer learning extensively. In the next post we will cover object detection using RCNN.
I'm trying to trigger a failed connection authentication using the atlassian package, but it doesn't show anything, not even a pass or fail indication. I previously used the jira package; there, authentication failure raises a JIRAError.
===OLD
from jira.client import JIRA
jira = JIRA(options={'server': ''},basic_auth=('', ''))
The error raised is JIRAError on authentication failure.
===New code
from atlassian import Jira
jira = Jira(url='', username='', password='')
Nothing happens even when it fails.
I want to know what kind of error is being raised on the new code. Hope you could help me.
thank you.
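A minimal sketch of how the failure could be surfaced, assuming the atlassian-python-api client (which typically only contacts the server when a request is made) and using myself() as a probe call:

import requests
from atlassian import Jira

jira = Jira(url='https://example.atlassian.net', username='user', password='token')
try:
    jira.myself()  # any authenticated call will do as a probe
except requests.exceptions.HTTPError as e:
    print('Authentication failed:', e)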
Rasmus Hall (1,260 Points)
Compiling error CS1001
I wrote it like he told me to do:
using System;

namespace.treehouse
{
    class Program
    {
        static void Main()
        {
            Console.Write("Skriv hvor mange minutter du har arbejdet: ");
            string tid = Console.ReadLine() + tid;
            Console.Writeline("du har motioneret " + tid + " Minutter!");
        }
    }
}
and yet it says
Program.cs(2,9): error CS1001: Unexpected symbol `.', expecting identifier
1 Answer
Jennifer Nordell, Treehouse Teacher
Hi there, Rasmus! You're close, but not quite there. There are currently two problems in your code that I can see and one is simply a spelling error.
You wrote this:
string tid = Console.ReadLine() + tid;
This will result in a syntax error as the
+ tid is not needed. You should erase the
+ tid from that line.
When you fix that, you will be presented with yet another syntax error and this is regarding the spelling/capitalization of
WriteLine vs
Writeline.
You wrote:
Console.Writeline("du har motioneret " + tid + " Minutter!");
But that should be:
// Note the capitalization of the "L" Console.WriteLine("du har motioneret " + tid + " Minutter!");
Give it a shot after you've corrected these small issues!
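For reference, here is what a complete corrected version might look like. Note that the CS1001 error at (2,9) actually points at the dot in namespace.treehouse, so the namespace also needs a valid identifier:

using System;

namespace Treehouse
{
    class Program
    {
        static void Main()
        {
            Console.Write("Skriv hvor mange minutter du har arbejdet: ");
            string tid = Console.ReadLine();
            Console.WriteLine("du har motioneret " + tid + " Minutter!");
        }
    }
}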
Rasmus Hall (1,260 Points)
thank you so much!
I have this configuration of my encoder :
class Encoder_LSTM(nn.Module):
def __init__(self,dict_config = dict_config):
super(Encoder_LSTM, self).__init__()
self.input_size = dict_config["input_dim"]
self.hidden_size = dict_config["hid_dim"]
self.output_size = dict_config["output_dim"]
self.num_layer = dict_config["num_layer"]
self.Encoder_LSTM = nn.LSTM(self.input_size, self.hidden_size , batch_first=True , num_layers=self.num_layer , dropout=0.2 )
def forward(self, input):
self.batch_size = input.shape[0]
self.hidden_init = torch.zeros(self.num_layer, self.batch_size, self.hidden_size, device=device)
self.h_0 = self.hidden_init
self.c_0 = self.hidden_init
self.output, (h,c) = self.Encoder_LSTM(input, (self.h_0, self.c_0))
return self.output
I think I am a bit confused about the "forward" method. It seems like every time you call it, it initializes the hidden state/cell state. Now, I was wondering: when you have a sequence, is the forward method called on each timestep? If yes, wouldn't it mean that the hidden state/cell state are reset for each timestep?
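For context, here is a minimal sketch of how such an encoder is normally driven (the dimensions are made up). A single forward call hands nn.LSTM the whole sequence, so the zeroed states apply once per sequence, and the LSTM carries its hidden/cell state across timesteps internally:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True, num_layers=2, dropout=0.2)

x = torch.randn(4, 10, 8)           # (batch=4, seq_len=10, input_dim=8)
h0 = torch.zeros(2, 4, 16)          # (num_layers, batch, hidden)
c0 = torch.zeros(2, 4, 16)

output, (h, c) = lstm(x, (h0, c0))  # one call processes all 10 timesteps
print(output.shape)                 # torch.Size([4, 10, 16])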
|
https://discuss.pytorch.org/t/are-the-hidden-state-cell-state-reset-for-each-element-time-step-of-lstm/162037
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Manage metafields
Metafields are a flexible way for your app to add and store additional information about a Shopify resource. If you want to include data validation for metafield values, then you can create metafield definitions.
This guide shows you how to manage metafields using the GraphQL Admin API. If you want metafields that aren't accessible to merchants or other apps, then you can create private metafields. If you want to create a metafield that can only be accessed by the app that created it, then you can create app-owned metafields.
Requirements
- Your app can make authenticated requests to the GraphQL Admin API.
- You've created products in your store.
Step 1: Create a metafield
You can create any number of metafields for a resource, and they'll be accessible to any app (unless they're private metafields). To create a metafield, use a GraphQL mutation to create or update the resource that you want the metafields to belong to.
The following example adds a metafield to a product by using the productUpdate mutation:
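A request along these lines (the product ID is a placeholder, and the exact input shape can vary by API version):

mutation {
  productUpdate(input: {
    id: "gid://shopify/Product/1234567890",
    metafields: [
      {
        namespace: "instructions",
        key: "wash",
        value: "Wash in cold water",
        type: "single_line_text_field"
      }
    ]
  }) {
    product { id }
    userErrors { field message }
  }
}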
Step 2: Retrieve a metafield
When you query a resource, you can retrieve its metafields. Use the metafield field to return a single metafield. Specify the metafield that you want to retrieve by using the namespace and key arguments.
The following example queries a product for the value of the associated instructions.wash metafield:
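Sketched with a placeholder product ID:

query {
  product(id: "gid://shopify/Product/1234567890") {
    metafield(namespace: "instructions", key: "wash") {
      value
    }
  }
}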
Step 3: Update a metafield
To update a metafield, use a GraphQL mutation to update the owning resource, and include the metafield in the mutation input. Specify the owning resource and the metafields that you're updating by their IDs.
The following example updates a metafield that belongs to a product by using the productUpdate mutation:
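Again with placeholder IDs; the existing metafield is addressed by its ID rather than by namespace and key:

mutation {
  productUpdate(input: {
    id: "gid://shopify/Product/1234567890",
    metafields: [
      {
        id: "gid://shopify/Metafield/987654321",
        value: "Machine wash warm"
      }
    ]
  }) {
    product { id }
    userErrors { field message }
  }
}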
Step 4: Delete a metafield (optional)
Use the metafieldDelete mutation to delete a metafield. Specify the metafield that you want to delete by including its ID in the mutation input.
The following example deletes a metafield by ID:
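With a placeholder metafield ID:

mutation {
  metafieldDelete(input: { id: "gid://shopify/Metafield/987654321" }) {
    deletedId
    userErrors { field message }
  }
}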
Next steps
- Create metafield definitions to include data validation for metafield values.
- Create private metafields that aren't accessible to merchants or other apps.
- Create a metafield that can only be accessed by the app that created it.
- Learn how to migrate your metafields that use the deprecated value_type field.
|
https://shopify.dev/apps/metafields/manage-metafields
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
This program takes an integer from the user and calculates the number of digits. For example: If the user enters 2319, the output of the program will be 4.
Program to Count the Number of Digits
#include <stdio.h>

int main() {
    long long n;
    int count = 0;

    printf("Enter an integer: ");
    scanf("%lld", &n);

    // iterate at least once, then until n becomes 0
    // remove last digit from n in each iteration
    // increase count by 1 in each iteration
    do {
        n /= 10;
        ++count;
    } while (n != 0);

    printf("Number of digits: %d", count);
}
Output
Enter an integer: 3452 Number of digits: 4
The integer entered by the user is stored in variable n. Then the
do...while loop is iterated until the test expression
n != 0 is evaluated to 0 (false).
- After the first iteration, the value of n will be 345 and the count is incremented to 1.
- After the second iteration, the value of n will be 34 and the count is incremented to 2.
- After the third iteration, the value of n will be 3 and the count is incremented to 3.
- After the fourth iteration, the value of n will be 0 and the count is incremented to 4.
- Then the test expression of the loop is evaluated to false and the loop terminates.
Note: We have used a do...while loop to ensure that we get the correct digit count when the user enters 0.
|
https://www.programiz.com/c-programming/examples/digits-count
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Mercurial > dropbear
view libtommath/bn_mp_clear.c @ 457:e430a26064ee DROPBEAR_0.50
Make dropbearkey only generate 1024 bit keys
line source
#include <tommath.h>
#ifdef BN_MP_CLEAR_C

/* clear one (frees) */
void mp_clear (mp_int * a)
{
  volatile mp_digit *p;
  int len;

  /* only do anything if a hasn't been freed previously */
  if (a->dp != NULL) {
    /* first zero the digits */
    len = a->alloc;
    p = a->dp;
    while (len--) {
      *p++ = 0;
    }

    /* free ram */
    XFREE(a->dp);

    /* reset members to make debugging easier */
    a->dp = NULL;
    a->alloc = a->used = 0;
    a->sign = MP_ZPOS;
  }
}
#endif

/* $Source: /cvs/libtom/libtommath/bn_mp_clear.c,v $ */
/* $Revision: 1.3 $ */
/* $Date: 2006/03/31 14:18:44 $ */
|
https://hg.ucc.asn.au/dropbear/file/e430a26064ee/libtommath/bn_mp_clear.c
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Stack in C++ STL
Stacks are a type of container adaptors with LIFO(Last In First Out) type of working, where a new element is added at one end (top) and an element is removed from that end only. Stack uses an encapsulated object of either vector or deque (by default) or list (sequential container class) as its underlying container, providing a specific set of member functions to access its elements.
Stack Syntax:-
For creating a stack, we must include the <stack> header file in our code. We then use this syntax to define the std::stack:
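In its general form, where Container is optional and defaults to std::deque<Type>:

std::stack<Type, Container> stackName;

// for example:
std::stack<int> s;                    // backed by std::deque<int> by default
std::stack<int, std::vector<int>> v;  // explicitly vector-backed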
Type – is the Type of element contained in the std::stack. It can be any valid C++ type or even a user-defined type.
Container – is the Type of underlying container object.
Member Types:-
value_type - the first template parameter, T. It denotes the element type.
container_type - the second template parameter, Container. It denotes the underlying container type.
size_type - unsigned integral type.
The functions associated with stack are:
empty() – Returns whether the stack is empty – Time Complexity : O(1)
size() – Returns the size of the stack – Time Complexity : O(1)
top() – Returns a reference to the top most element of the stack – Time Complexity : O(1)
push(g) – Adds the element ‘g’ at the top of the stack – Time Complexity : O(1)
pop() – Deletes the top most element of the stack – Time Complexity : O(1)
C++
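The listing below reconstructs the example from the numbered explanation that follows (push 21, 22, 24, 25; pop twice; then print and pop until the stack is empty):

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Create a stack to store integer values.
    stack<int> s;

    s.push(21);
    s.push(22);
    s.push(24);
    s.push(25);

    s.pop();   // removes 25; top is now 24
    s.pop();   // removes 24; top is now 22

    while (!s.empty()) {
        cout << s.top() << " ";
        s.pop();
    }
    return 0;
}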
Output:
22 21
Code Explanation:
- Include the iostream header file or <bits/stdc++.h> in our code to use its functions.
- Include the stack header file in our code to use its functions. If <bits/stdc++.h> is already included, the stack header is not needed, because it is already pulled in.
- Include the std namespace in our code to use its classes without calling it.
- Call the main() function. The program logic should be added within this function.
- Create a stack to store integer values.
- Use the push() function to insert the value 21 into the stack.
- Use the push() function to insert the value 22 into the stack.
- Use the push() function to insert the value 24 into the stack.
- Use the push() function to insert the value 25 into the stack.
- Use the pop() function to remove the top element from the stack, that is, 25. The top element now becomes 24.
- Use the pop() function to remove the top element from the stack, that is, 24. The top element now becomes 22.
- Use a while loop and empty() function to check whether the stack is NOT empty. The ! is the NOT operator.
- Printing the current contents of the stack on the console.
- Call the pop() function on the stack.
- End of the body of the while loop.
- End of the main() function body.
List of functions of Stack:
- stack::top() in C++ STL
- stack::empty() and stack::size() in C++ STL
- stack::push() and stack::pop() in C++ STL
- stack::swap() in C++ STL
- stack::emplace() in C++ STL
- Recent Articles on C++ Stack
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above
|
https://www.geeksforgeeks.org/stack-in-cpp-stl/?ref=rp
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Mercurial > dropbear
view libtommath/bn_mp_dr_setup.c @ 457:e430a26064ee DROPBEAR_0.50
Make dropbearkey only generate 1024 bit keys
line source
#include <tommath.h>
#ifdef BN_MP_DR_SETUP_C

/* determines the setup value */
void mp_dr_setup(mp_int *a, mp_digit *d)
{
   /* the casts are required if DIGIT_BIT is one less than
    * the number of bits in a mp_digit [e.g. DIGIT_BIT==31]
    */
   *d = (mp_digit)((((mp_word)1) << ((mp_word)DIGIT_BIT)) - ((mp_word)a->dp[0]));
}
#endif

/* $Source: /cvs/libtom/libtommath/bn_mp_dr_setup.c,v $ */
/* $Revision: 1.3 $ */
/* $Date: 2006/03/31 14:18:44 $ */
|
https://hg.ucc.asn.au/dropbear/file/e430a26064ee/libtommath/bn_mp_dr_setup.c
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
IRFC won't be getting into project funding any time in the near future until there's a clear direction from the railway ministry. "Rail projects have a very low internal rate of return (IRR), as there are time overruns in the implementation of projects leading to cost escalations. All this can affect the credit rating of IRFC, which is AAA at present," said a senior railway officer, requesting anonymity.
Only last year did IRFC venture into project financing, by providing funds for a signalling project. As per the 2011 budget, it was to raise Rs 8,654.38 crore for infra projects but could raise only Rs 2,000 crore.
The railways have the dubious distinction of having the largest number of delayed central sector projects, with cost overruns of over Rs 70,000 crore, or 120% more than the costs determined at the conception stage. The projects are also delayed by up to 216 months, which could upset any financial institution funding such projects. In one of the worst cases, a freight operation information system approved in March 1983 at an estimated Rs 520 crore has been delayed by almost 204 months.
"The uncertainties in project financing could increase the cost of borrowing for IRFC, thereby adding to the financial pressure on the railways and affecting its programme on rolling stock," the officer added.
The officer said the corporation has already aired its objections to the railway ministry. A top railway ministry official said that while IRFC may not be asked for project financing immediately, it could be asked to adopt a gradual approach identifying projects where it could lend without any problems.
In the last fiscal, IRFC was budgeted to borrow Rs 20,454.38 crore, out of which Rs 11,800 crore was to be routed to acquire rolling stock for the railways and another Rs 140 crore to Rail Vikas Nigam (RVNL), which invests in various bankable rail-link projects with private partnership; the rest was for project financing.
In the current fiscal, IRFC is to raise Rs 15,000 crore, out of which Rs 14,896 crore will go for investment in rolling stock and another Rs 104 crore for RVNL. To raise the money, IRFC would come up with a tax-free bond issue of Rs 10,000 crore by January end.
|
http://www.financialexpress.com/archive/hobbled-by-time-cost-overruns-irfc-stops-funding-railway-projects/1036356/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
How can I ship C compiled modules (for example, python-Levenshtein) to each node in a spark cluster?
I know that I can ship python files in spark using a standalone python script (example code below):
from pyspark import SparkContext
sc = SparkContext("local", "App Name", pyFiles=['MyFile.py', 'MyOtherFile.py'])
If you can package your module into a .egg or .zip file, you should be able to list it in pyFiles when constructing your SparkContext (or you can add it later through sc.addPyFile).
For Python libraries that use setuptools, you can run python setup.py bdist_egg to build an egg distribution.
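Putting the two together (the egg filename is illustrative, and whether a compiled extension loads cleanly from an egg can depend on the platform):

from pyspark import SparkContext

sc = SparkContext("local", "App Name")
sc.addPyFile("dist/python_Levenshtein-0.12.0-py2.7.egg")  # illustrative path/filename

def distance(pair):
    import Levenshtein  # resolved on the executor from the shipped egg
    a, b = pair
    return Levenshtein.distance(a, b)

print(sc.parallelize([("kitten", "sitting")]).map(distance).collect())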
Another option is to install the library cluster-wide, either by using pip/easy_install on each machine or by sharing a Python installation over a cluster-wide filesystem (like NFS).
|
https://codedump.io/share/rSwgrOitwxfG/1/shipping-python-modules-in-pyspark-to-other-nodes
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
The following example shows how you can detect a click on the MX MenuBar control in Flex by using the menuBarItems array and adding an event listener for the click event to a specific MenuBarItem. "Listening for a click on the MX MenuBar control in Flex"
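The original listing did not survive extraction; a minimal sketch of the approach (assuming a MenuBar declared with id="menuBar") looks like this:

// attach a click listener to the first top-level MenuBarItem
import flash.events.MouseEvent;
import mx.controls.Alert;

private function init():void {
    menuBar.menuBarItems[0].addEventListener(MouseEvent.CLICK, menuBarItem_click);
}

private function menuBarItem_click(evt:MouseEvent):void {
    Alert.show("MenuBarItem clicked: " + evt.currentTarget.toString());
}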
<![CDATA[
public function initializeHtmlText():void
{
    // (the HTML markup around this.text was lost in extraction)
    this.htmlText = "" + this.text + "";
}
]]>
I had a question related to the menubar control but didn’t see a better post to ask it in. I have a menubar with an xmllist data provider. I was wondering if it was possible to have all the items for multiple submenus to be in the same radiobutton group. I tried simply giving them all the same group name but when I select one, it only deselects other items in that single submenu. Any thoughts or ideas on this? Thanks.
I want to know how to listen to the menubar's sub-menus.
Sorry, maybe it's laughable to you, but I can't do this, even after referring to the developer manual!
This is available in the help. You should create an event handler for the menu item click event and link it to the menubar:
<![CDATA[
import mx.events.MenuEvent;
import mx.controls.Alert;
import mx.collections.*;

[Bindable]
public var menuBarCollection:XMLListCollection;

// (the XML literal assigned here was lost in extraction)
private var menubarXML:XMLList;

// Event handler to initialize the MenuBar control.
private function initCollections():void {
    menuBarCollection = new XMLListCollection(menubarXML);
}

// (a click handler, apparently ending in an Alert.show call, was lost in extraction)
]]>
Hi,
I'm using the menu with no dropdown too, as follows. Here I want to get the index value of the parents 1, 2 and 3 also. In which event can I achieve this? I've tried menuShow, click and change. I could get it in the change event, but it's triggered on mouse hover also. Please help.
Regards,
Prakash
Hey … hi and thanks.
This worked the first time for me.
On an unrelated topic, my menu bar is looking perfect, but the menu items are in a 'red colored' background and I cannot find a way to skin the menu items.
Any ideas for skinning menu items?
Thank-you!
Prakash,
Perhaps a bit of a late answer, but I was also working on the same issue; here is a quick fix:
protected function menuBarChangeHandler(event:MenuEvent):void
{
    var itemClicked:Boolean = false;
    try {
        itemClicked = (event.itemRenderer["menuBarItemState"] == "itemDownSkin");
    }
    catch (e:Error) {
        // menuBarItemState might not exist at runtime, which may lead to an error; handle this as you wish
    }
    if (itemClicked) {
        // continue with your code here
    }
}
The trick here is using the item skin during the click and rollover events which are both caught by the change handler. ItemDownSkin distinguishes the click event and its rollover counterpart is ItemOverSkin.
Regards,
Can
Hi, and thanks for all those articles !
I haven’t been able to find a simple way to create a multilanguage menu bar, does anyone have an idea ? For the moment, I use an xml-file for configuring the menu items, but that could change if anyone come up with a good answer ?
Thanks in advance !
|
http://blog.flexexamples.com/2010/02/19/listening-for-a-click-on-the-mx-menubar-control-in-flex/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
The Unknown OneNote Guy's Blog - Random and semi-useful thoughts and ideas on Microsoft OneNote.

PowerShell Send To OneNote 2007

[editing note: Changed the incorrect name of PowerNote in the title to PowerShell - had OneNote on the brain and it was late....]

I have been working with Microsoft PowerShell, a new "interactive" command line tool. PowerShell can be downloaded from Microsoft, and if you are a command line junkie you will want to look at this tool. Over the last week I have been working with PowerShell to create command line scripts and cmdlets (an extensibility mechanism) to administer a custom Microsoft-based solution.

I spent the last few days poring over documents, web sites, blogs and even a book to quickly get up to speed with PowerShell, and honestly the sheer number of PowerShell features, not to mention the numerous major extensibility points, requires more than a few days to become fluent with. Of course, as I dove into PowerShell there were many places where I could easily see how OneNote 2007 and PowerShell would work well together. Most of these integration points will require some custom coding, but a Send To OneNote 2007 from PowerShell is practically built in.

So let's look at the output created by PowerShell using the default PowerShell host - the console. Get-Process will return a list of processes on the host machine. [screenshot lost]

PowerShell uses a pipeline pattern where the output of one command can be the input to another command, generally using the pipe character "|". PowerShell will output to the default host without a terminating out command. Other out options include files and printers, and OneNote 2007 installs a print driver named "Send To OneNote 2007".

Whenever you select the Send To OneNote 2007 printer, the output will create a new OneNote page located in the Unfiled Notes section of your OneNote 2007. Any application that can output to a printer can "Send to OneNote".

Armed with this, we can now send the output of our Get-Process example to OneNote 2007 with no custom coding. To do this, simply pipe the Get-Process output to the Out-Printer cmdlet with the -Name argument of "Send To OneNote 2007". Doing so will result in a new page located in the Unfiled Notes section. The output is inserted as an image, so unfortunately you do not have easy access to the text, but OneNote 2007 will full-text search the image.

PowerShell is a very powerful command line tool - many times more powerful than Cmd.exe. This blog is dedicated to OneNote 2007 and not PowerShell, so I don't expect to go into much detail on the capabilities of PowerShell; it is too expansive to cover without writing a book. But I have more than enough interest in how PowerShell can work with OneNote 2007 out of the box, as well as possible extension points between PowerShell and OneNote 2007.
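For reference, the pipeline described above comes down to a single line (Out-Printer and its -Name parameter are standard PowerShell; the printer name is the one the OneNote driver installs):

Get-Process | Out-Printer -Name "Send To OneNote 2007"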
So you can probably expect more PowerShell/OneNote 2007 posts in the future. For those of you who wish to learn more about PowerShell, go to the PowerShell Technology Center, which contains many resources including the download for PowerShell.

OneNote 2007 Trial Version Released

Grab the OneNote 2007 trial version from Office Online!

Windows Live Messenger Send To OneNote 2007

You know how it goes: a long IM session where the topic of discussion is how to take over the world. Good ideas have been had from both sides, and then it's back to real work. But not before someone says "hey, make sure we save a copy of the IM session!". No problem, as simple as Save As… Except it's now day 3 of a marathon of IM sessions, all with good ideas, and I already have a collection of text files detailing bits and pieces of our grand scheme.

Sure, I can open each file, copy the text, and paste it into OneNote 2007, but strangely enough I just never get around to doing that. Necessity is the mother of invention, and so I set out to create a simple but effective Windows Live Messenger To OneNote 2007 (WLM2ON) add-in. It turns out that with Microsoft Live Messenger's Add In API, creating a basic Send To OneNote 2007 is fairly simple. Simple enough that I will post the project code in a separate blog post.

For now, if you feel the need to capture your Live Messenger content to OneNote 2007, feel free to grab the install file WLMOneNote2007AddIn.zip, which is hosted by the owners of the community sites. This simple Send To add-in requires Microsoft Live Messenger and OneNote 2007 (RTM).

But before I go into details about how to install the binaries, you should take a look at the figure below, which displays a very basic page created when using this tool. [screenshot lost] I originally set out to create a robust capture tool, but after reviewing the Live Messenger Add In object model it became apparent that a robust tool is either not possible yet or beyond the very short time frame I had set.

So I have settled on a secondary, simpler approach that will allow me to post the code for others to use and improve. In this approach I have created the Windows Live Messenger OneNote 2007 Add In along with a simple install project that will allow you to send session messages to a new OneNote page. There are a few issues: first, you must install the add-in and turn it on. Next, the add-in cannot differentiate between Messenger sessions; this appears to be a limitation of the Live Messenger Add In framework, so messages from all sessions are saved to the same OneNote page. But it is more of a working demo than prescriptive guidance on Live Messenger Add In development.

Once you have the binaries extracted, simply run the setup file. The install is pretty simple - just remember where you install the files to. By default it will go to [install drive]\program files\Unknown OneNote Guy\WLMOneNote2007AddIn. The install will enable Live Messenger add-ins by adding a registry key. The install will only copy the add-in files to the file system.

To use the add-in you must "install" it in Live Messenger and then "turn it on".

To install the add-in into Live Messenger, you must access the Add In tab located in the Options dialog box. Click the Add To Messenger button and navigate to the install directory. If you did not change the install directory, it will be located at [install drive]\program files\Unknown OneNote Guy\WLMOneNote2007AddIn. Select the OneNote2007AddIn.OneNoteAddIn.dll file and click OK. You should see the add-in listed in the Add In tab. Click the OK button to close the dialog box. The add-in should be installed.

Note: During some of the install testing the Add In tab was available, but the controls contained in the tab were not enabled. I am unsure why this is, but it appears that if you restart Live Messenger and start an IM session the controls will enable.

Now that the add-in is enabled, you need to start it. To start the add-in, click on the control that allows you to change your online status and select Send To OneNote.

You can use Live Messenger to send and receive IMs as before. When you want to capture your IMs in OneNote, type Send To OneNote on a single line and click Send. This is interpreted as a command and will create a new page under the Unfiled Notes section. If you wish to define a title, then append the command with a colon and a title, for example: Send To OneNote : Test IM.

Overall this was a very simple add-in built to solve a relatively simple issue. The OneNote code was minimal and, to be honest, simpler than the code related to the Live Messenger add-in. Give it a try and let me know how it goes. In the next post I will provide the add-in code.

Blogging From OneNote 2007

I maintain more than a few blogs based on various technologies. Some are actually read! All the blogs, including this blog, provide a web-based editing experience. These editing tools are OK, but they just cannot compare to a rich client editing scenario.

When I started this blog, I did so using OneNote 12, probably the Beta 2 version. I have tried Live Writer, the MS blogging tool. It does a good job, but it just did not provide the same writing experience as OneNote. My blog posts are part of my collection of information contained and organized in OneNote as sections and pages.
Another benefit of using OneNote as a blogging tool is spell check. OneNote, like other Office products, provides "real-time" spell check, whereas the version of Live Writer I used did not. Seeing all my misspellings appear with red squiggly lines is a big benefit, because I cannot spell.

The problem with blogging from OneNote 2007 is that it relies on Word 2007 to actually connect and push to the blog. Not a big deal, but not a great experience. Currently I create a new page under my OneNote blogging section and create my post. Once finished, I copy it to Notepad; this cleans out any style tags and provides me with a clean text-only copy. Then it is off to the web-based blog editor, where I verify how it looks and tweak anything like links and images. It is truly a manual process, but for some reason it is still a better fit than working out of Word or Live Writer.

So here lies opportunity, one that I can't help but think is already being addressed by the community. With the XML-based APIs for OneNote 2007, I believe it is entirely possible to create a OneNote 2007 MetaBlogAPI add-in where I can go directly from OneNote 2007 to most blogs at the touch of a button! If no one is moving in this direction, well then let's join up and create a CodePlex site for this and have at it.

The second opportunity is that OneNote 14 should include this as a feature, similar to Word 2007. Blog postings are simply another bit of information. I go through a similar publishing process, from thought or idea to finished product on a blog site, as I do with, say, a requirements doc. Given the choice, I would like to see OneNote become the MS blog editor of choice…

Now that this is finished, I will cut to Notepad and paste to Blogger.

Robert Bogue Gets It...

Robert Bogue, SharePoint MVP, is getting into OneNote. Check out Robert's short blog post on "Article: More Effective Requirements Gathering with Microsoft OneNote".

I think OneNote 14 should…

Well, it has been a busy few weeks. Office 2007 is about to go into RTM mode, and that means I have been busy working on Office 2007-related products that need to be finalized. Conferences are upon us again; it seems like the last round just got over… so I am busy finalizing my various slide decks. No, I will not be speaking about OneNote this time around. I have other content that must be presented, and I have yet to see a conference with OneNote content other than TechEd and PDC. So now you know why I have not been able to post. Rest assured the OneNote XML Viewer is being improved, as well as a few other interesting little apps. But for the next few weeks I am dedicated to development and slide decks...

For those of you in the OneNote community who follow all the blogs, you probably have seen Daniel's post "Send us your notes! We really want them!". You might have caught that at least some of the OneNote team is thinking vNext. Well, today I was working in PowerPoint 2007 and found myself wishing the two played nicer together. So instead of keeping my thoughts to myself, I decided that I should create a recurring topic. It seems that Blogger does not really have "categories", so I cannot easily tag a set of posts as part of a specific topic. So today I create a post called I think OneNote 14 should…

Today's I think OneNote 14 should... theme is specifically about PowerPoint 2007 and PowerPoint vNext. I think you can easily extrapolate this concept beyond PowerPoint.

I think the notes page, as well as the normal page view where notes are displayed, should be a OneNote 14 "control". Why should I settle for a basic text editor when I create notes for my presentation? Why would I not want to have my notes in a format that I can reuse? How about working on my notes in OneNote, then opening up my presentation and seeing my notes show up?

To me this is a no-brainer - from a usability standpoint, not necessarily from a technical perspective. But when you think of it, my applications and third-party applications would benefit from a rich WinForm control wherever my users might benefit from collecting notes.

Now I know you are probably thinking: why not use a Side Note? That's a good question to ask, and I have the answer. I am not a big Side Note fan. Oh, I use them, but side notes break my momentum and thought flow. For myself, Side Notes are OK for capturing quick thoughts and ideas - if later you go back and do something with them, even if it is nothing more than deleting them.

I use a laptop 90 percent of the time (no tablet, but that was another post). I have limited screen space and nearly always work in full screen mode. Using a Side Note requires me to move it around or find a place on the screen that I am not using at the moment. It is pinned as the "top" window by default, and I can unpin the side note. With it pinned as the top window I need to move it around as needed, and if I change to, say, Outlook it is still pinned on top and I have no need for that. If I unpin the Side Note, it will be sent behind my PowerPoint app. Embedding the OneNote user interface as a control inside a client would give me the basic power of OneNote located where I am used to seeing notes.

OK, I know this is not an easy "fix" and is filled with implementation details that I would not even try to enumerate today. But if creating an embeddable control is beyond the O14 timeframe, then the next step is to consider a "Smart" Side Note - one that can attach to an existing application window and would "disappear" when that specific application no longer has focus.

Just an idea....

Dave Blogs About ON File Format

If you don't follow Dave Rasmussen's blog, you should head over and read his newest post on OneNote 2003 and 2007 file format compatibility: "Why the OneNote 2007 and 2003 file format are different".

Sample Application up on OneNote Extensibility & More

Daniel Escapa has posted a good example of using the OneNote API in his Send to OneNote from Windows Explorer – Sample App post. For any of you who are interested in extensibility and OneNote, you should head over and take a look at his sample. The Send to OneNote from Windows Explorer sample application explains how to use the OneNote interop APIs to embed files into a new Unfiled Section page.

Marketing Survey on OneNote PowerToys

Make sure you head over to OneNote PowerToys to submit your vote in the Slogan for OneNote Keyboard Campaign survey that got started by my "Coffee Shop Denizen - Almost Gets It!" post.
Get out and vote!

Coffee Shop Denizen - Almost Gets It!

The story goes something like this...

While sitting at the local coffee shop this morning, working on some OneNote 2007 code, a fellow coffee denizen at the next table asked about my computer backpack. Seems she was unable to find one that fit her tiny little white laptop, which apparently will run Windows and some other cat-based OS. I think it was tiger, panther, kitty cat, I'm not sure. I had not seen this tiny white laptop before, so we started into a conversation about laptops and gadgets.

To make a long story short, our conversation progressed to our Pocket PC phones and how much we used them. She explained how she really used her phone for emails and calendaring. Of course I came back with OneNote Mobile and how nice it is to be able to capture your ideas easily without requiring the laptop all the time. When I asked if she had ever used OneNote Mobile or even OneNote, she promptly replied that she "did not have a tablet PC and therefore could not use it." Coffee Shop Denizen does not get it.

Being the OneNote Guy that I am, I explained that there is no requirement for a tablet or handwriting to use OneNote. To be honest, I only personally know about two or three individuals with tablet PCs, and one of them does not use OneNote. Everyone else I know who uses OneNote does so with a laptop and a good old-fashioned keyboard. Surprised and amazed, she took a few minutes to look over OneNote 2007 on the good old laptop, and after I showed her pages, notebooks and sections, she blurted out that OneNote 2007 looked like a cool "organization" application. Coffee Shop Denizen now starts to get it!

I always wonder how it came to be that there are these misconceptions out in the wild. Maybe a name that contains "Note" makes the application automatically register in people's minds as a note-taking application (OK, so it is a note-taking application, but that is a fairly simplistic view of OneNote). Maybe it was the Tablet PC's marketing hype of handwriting recognition and OneNote that has the masses convinced it is a Tablet PC application. (I wonder if those same people will think that they cannot use Vista with a monitor, now that Vista can do some really good text to speech!) Somehow the wrong impression seems to be the first impression. I find it interesting that it is not just the general public that has these perceptions, but educators and computer professionals alike.

It seems to me that the community and Microsoft need to somehow change the general perception of OneNote and increase the general public awareness of OneNote. Maybe we need a 12-step program. Maybe we should have the world write on a blackboard (do those even exist anymore?) one hundred times: "OneNote works with a keyboard. OneNote is more than just a note-taking application." Maybe that is too harsh. How about a marketing campaign? That is the ticket - instead of doing a PowerToy competition, how about a marketing competition?

Here are my submissions:

1. OneNote - It's not just for Tablets Anymore - And to be Honest it was Never Just for Tablets!
2. OneNote: Where the Keyboard is as mighty as the Sword, ah... Pen
3. OneNote: We Don't Need No Stinkin' Pen!

Let me know if you need my address to ship the prize!

OK, maybe it is just my perception that is messed up and not the general public's, but it seems that you either "get" OneNote and can be considered a OneNote convert, or you don't "get" OneNote and are not a OneNote convert. I don't think I have seen too many people who fall in the middle road of "getting" OneNote and not using OneNote.

Just my thoughts after a morning coffee shop conversation. I would be interested in your thoughts: do you agree there is still a general misconception in the wild with OneNote? Drop a comment or send an email to onenoteguy@hotmail.com

New OneNote Blogger

It appears another OneNote PM has leaped into the blogging domain. Make sure you take a look at Olya's blog. Welcome to the community!

AddIn for OneNote 2007 Project Files

I had been waiting for some time to play with toolbar add-ins, and with the updates included in the Technical Refresh I am finally able to give it a spin.

Daniel Escapa has provided beta documentation to create a sample toolbar add-in. You can download it from his blog posting titled "Creating Toolbar Buttons in OneNote 2007". Using Visual Studio .NET 2005 you can follow the step-by-step example and create a toolbar add-in.

To help you along, I followed Daniel's document as closely as possible and created the sample toolbar add-in. There were a few places in the document that need a little clarification, and I will pass those on to Daniel, but overall you can follow it pretty easily. For your coding pleasure I have zipped up the completed VS.NET 2005 solution and the install files for Daniel's sample. You should be able to download the files, extract them to your hard drive and compile the code. Or, if you wish, you can just run the install files included in the archive.

The archive can be downloaded from the storage provided by the owners of the OneNote 2006 and OneNote PowerToys community sites. While you are downloading the file, stop by and leave a read or post on their sites!
Technical Refresh is Available!

Here is the link for the Office Client Technical Refresh. [link lost]

A Touch of Community

If you have not heard, OneNote 2006 is a new community blog. This blog will allow you to sign up as an author and contribute. I have already claimed my id and password but have not yet made my first post. This is one more step in the community-building process. I hope some of you who are OneNote advocates take the opportunity to post your thoughts and ideas at OneNote 2006 and help build up the OneNote community, which is a benefit for us all.

I am also happy to announce that the owners of OneNote 2006 and OneNote PowerToys have graciously extended some file space hosting to the Unknown OneNote Guy blog, so anyone looking for the OneNote 2007 Xml Viewer download should be able to access it now. I will be leaving the download for the XML viewer on the original hosted server, but I expect that if all goes well we will post our files only to the shared space provided by the OneNote 2006 and OneNote PowerToys owners.

Check out Daniel's Post on GUIDs!

If you read my previous post, A Look at GetHierarchy() Part II, you will want to have a look at Daniel Escapa's post titled "Small chat about OneNote GUIDs". You get a little better understanding of how GUIDs and ids are used in OneNote. Of importance in this post, if I read it correctly: certain ids are not persisted between starts and stops of OneNote, which I did not know.

Armed with a little more knowledge of GUIDs and IDs, I will need to find some time to dig into the OneNote Xml Viewer and see how that affects other toys on my workbench. Obviously, if I understand the post correctly, we will not want to persist any hierarchical ids.

I appreciate anyone who can comment on my posts and clarify or correct any of the technical aspects. Got a comment, idea, correction or criticism? Drop me a comment. It is great that one of the PMs of the team is taking time to add to our community. There are obviously lots of little items that will not be readily apparent to us without some insider knowledge.

OneNote 2007 Xml Viewer

Since OneNote extensibility revolves around an XML-based interface, it makes sense to deep dive into the XML. In the earlier GetHierarchy posts I provided some basic demo code so you can view some of your OneNote's exported XML in the console.

To make my life a little easier, I created the OneNote Xml Viewer. It's not rocket science, far from it in fact. The only OneNote object model method used by the OneNote Xml Viewer is GetHierarchy. Most of the code revolves around populating tree views, options, and parsing XML. Simple as it is, though, it will give you a nice little user interface to view the exported XML. [screenshot lost]

I am not providing the base code for this little application. Not because I don't wish to share, but because I used the bull-in-the-china-shop methodology for the user interface code. As for the OneNote code, I placed it all in a data access class, and here it is:

//--------Start code -------

using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.Office.Interop.OneNote;

namespace unknown_onenote_blogspot
{
    class ON_DataAccess
    {
        public ON_DataAccess()
        {
            _app = new ApplicationClass();
        }

        ~ON_DataAccess()
        {
            _app = null;
        }

        public string GetAllInfo()
        {
            try
            {
                string
                // (the remainder of the listing was lost in extraction)

The download is hosted on a free file hosting page, and it seemed pretty innocuous; it seems like Blogger does not support any file hosting outside of basic images. Let me know if you have trouble bringing this zip file down. It contains two files, the exe and the config. Place them both in the same directory on a machine with OneNote 2007 Beta 2 installed and have a look around your notebooks.

Check out the menu for options to select all, export the XML to the file system, and change the scope options. I intentionally left all scopes in so you can play with Node Type vs Hierarchical Scope; see my previous post. Some combos will return an error - you can check the post on Node Type vs Hierarchical Scope to see which combos will give you errors.

WinWF and OneNote 2007?

So here is a thought that I have been pondering: where lies the value of a custom Windows Workflow Foundation/OneNote 2007 activity? I think this is a great topic for this blog, since many of my posts revolve around extensibility and integration. So I will pose this question to you, the reader:

What is the envisioned scenario around a custom WinWF/OneNote 2007 activity?

I have a few scenarios, but I want to open it up to the readers for some community involvement. If we find a convincing scenario, I might just be inclined to start working on it.

For those of you who might not be following .NET Framework 3.0 (a.k.a. WinFX), let me point you to a few links based on Windows Workflow Foundation: the Microsoft .NET Framework 3.0 Windows Workflow Foundation technology page, the Windows Workflow Foundation (WF) site, ScottGu's blog, and Matt W's blog. And let us not forget the Microsoft Office System's workflow implementation as well: "Walkthrough: Creating Office SharePoint Server 2007 Workflows in Visual Studio 2005", and Paul Andrew's blog.

So post a comment and let me know what you think is a worthwhile custom OneNote activity.

Larry Gets It!

In a continuing attempt to convince the general populace that OneNote is not just a "note taking utility", I am scouring the Net to find testimonials of successful OneNote users. Maybe I should start a So-and-So Doesn't Get It group. I digress...
Anyway, last night I stumbled onto Larry's posts on OneNote, and it appears that Larry is someone who gets it: "Programming OneNote 12" and "Getting Things Done With OneNote 12".

OneNote 2007 Article on Organizing and Sharing Information

Speaking of community: "Organize and share all your information with Microsoft Office OneNote 2007" is a nice little primer on some OneNote 2007 features to help you capture, organize, find and share your information.

The OneNote Community

If you haven't figured it out yet, I am big on technical communities. The OneNote community is not the only community that I am involved in, and this is not the only blog that I post on. Why am I so interested in the community? It takes a community around a product or technology for that product to be adopted. I could almost go as far as stating that without a community, a product or technology will probably not become widely adopted. That may be stretching it a little. Maybe it is a synergy between a solid product and a strong community that results in quick and high adoption rates. Almost every major product has a strong community behind it, but what is a community?

What is a community?
Well, there is no one correct answer to the question "What is a community?". Communities are really nebulous, and my concept of a community will be different than yours. It is interesting to see one technical community start a new concept and see other communities follow suit.

For myself, communities are everything that revolves around the product. It is the newsgroups and list servers. It is the free power toys, readily available SDKs and KB articles. It is the free downloadable add-ins and demo code. It is web casts, pod casts and blogs. It is the chats, emails and white papers from the product team and evangelism team. It is partner, end-user and developer training and certification. Let's not forget the conference sessions, books, magazine articles and assorted swag.

Initially I had concluded that being part of the community involved non-commercial interests, but I am leaning away from that concept. Initially I was a purist - not that I don't believe there needs to be a strong commercial side to the product, just that if it was to be community, it should be non-monetary motivated. As a community member I have no problem suggesting that someone look at a third-party application to solve their problem. What bothers me about "commercial community" is when the answer to every question is "look at my product", which may or may not be the correct answer. In general, you should not have to purchase something to have a free exchange of concepts and ideas. Lately I have seen commercial companies support community-related events in such a way as to not definitively link the event to the commercial company.

Another reason I now generally accept commercialism in the community world - if done correctly - is because of training, books and articles. These are great community resources, but you generally need to purchase them. The community would be missing a key element if there was no support for training companies and authors. And being an author at times, I myself cannot just give away all that time and energy for nothing. Sorry, that is just the way it is. We all have got to eat. :) And I would be remiss if I answered newsgroup postings with a pat "get my book, your answer is in Chapter 4". That type of answer does the community no good, in my opinion.

So what makes a community?
Well, there is obviously no one single answer. Certainly having a solid and useful product is required. Without a product that has a definite place in the software industry, the community will continue to struggle. To be honest, a second-generation product is almost required for a solid community. I should state that with any first-round product like ON 2003 there are some dedicated leaders and followers creating a community. It seems that it is that second iteration (and here comes ON 2007) where the community starts to grow and gets over the hump. That's not to say that first-round products are bad or not worth the community's time, just that a certain number of community leaders and community followers is required to start the snowball effect.

There are certain items that all communities must have - again, in my opinion.

The community must have strong product and evangelism team support and contact. I think everyone understands the product team, and the OneNote community seems to have pretty good support from the product team via blogs and email. Many may not know what I am referring to when I say evangelism team. The evangelism team is whoever is out there speaking about and around OneNote. This includes the product managers, MS community people, MVPs and other individuals not associated with Microsoft. Anyone who speaks at a conference, maintains a blog, etc. is part of the evangelism team.

The community must have a strong, free knowledge base for support. This includes the blogs, newsgroups, web casts, list servers, etc. Take a look at MS Exchange or MS SQL Server, both products with strong communities. If you Google either product you will get so many results you could not possibly look at them all. Googling "SQL Server" results in about 119,000,000 results; "Exchange Server" has 30,900,000 and OneNote 7,540,000. Maybe it is unfair to compare OneNote to SQL Server, so let's look at "Microsoft Word" with 81,200,000 and "Microsoft OneNote" at 371,000.

Training and certifications are high on my list for fostering a strong community. Training, either free or paid, is important - both end user and developer are required, in my opinion. Participating or integrating with existing MS training and certification structures is a must. As a developer or end user striving to achieve a specific MS certification, I should have the option to choose a OneNote test instead of, say, an Excel or PowerPoint test. This gives OneNote some credibility. It shows that Microsoft considers OneNote as important as other products in the Office suite. Without it, OneNote will look like a distant cousin.

The last item I will cover under the guise of what makes a community is books, articles and conferences. Any solid product has a following of authors who write books and articles supporting the product. And as the community grows, we will need more and varied conference topics at the major conferences. I don't think we will get to a point where we see a OneNote-specific conference, but anywhere Word or Excel shows up, OneNote speakers should be there.

Why do we care if we have community?
Community is legitimacy - in my mind. Community is promoting what we like to do, and if you don't like working with OneNote, then I would question why you are even reading this. With a strong community, our ideas and concepts will carry more weight and be heard in more places.

Having a strong community is important for those of us who use ON both personally and professionally. It will give us tips, ideas and examples. It will help us use ON better, in ways we do not think of. Community will give us ideas on new ways to use OneNote, and help us look outside of the box.

A strong community will allow us to have a "ring" of people to ask questions of and receive answers from.

A strong community will elevate the product in personal and corporate use. It will give us more opportunity to install, develop and design solutions to problems - and that means jobs!

These are just one guy's thoughts on community, jotted down in OneNote no less and pushed to a blog. The subtitle for this blog contains the words random and semi-useful, and this post probably slips neatly into both categories. I feel that community is vitally important to the "success" of OneNote, particularly within corporate business. Let me know if you think I missed something or if I hit the nail right on the head, so to speak. Have an idea that should be considered? Let me know. Think I am way off base? Let me know.

Hierarchy Scope versus Node Type

(Most of this post, including its example XML, was lost in extraction. What survives: the hsSelf examples for Notebook, Section Group, Section and Page.) The value hsSelf will always return a single element of that type, with attributes for the specific instance of the node determined by the startNodeID. I should mention I am running OneNote 2007 Beta 2, so things might change. Also note that a Section Group is defined in the XML as a Folder element.

As always, if you find anything contrary to this info, please post a comment. I just don't have the time to test every scenario possible, and I appreciate a critical eye so we have the most accurate information available to the community.

A Look at GetHierarchy() Part II

The GetHierarchy method has three parameters: startNodeId, hsScope and pbstrHierarchyXmlOut. (Much of this post was lost in extraction.) Here is an example of an object ID:

{68E92C8A-2C70-434E-8721-5257C952B8D8}{1}{B0}

Object IDs are defined in the OneNote 2007 schemas as a simple type of type string with a defined pattern. Hopefully you have something better to do than read XML schemas for fun.
Your OneNote Guide Beta 2's id attribute will be different. With the ID in hand, the call looks like this:

_AppClass.GetHierarchy("{68E92C8A-2C70-434E-8721-5257C952B8D8}{1}{B0}", hs, out oneNoteAsXml);

Now you can run the demo and choose different HierarchyScope values relative to the OneNote Guide Beta 2 notebook. (An example using my OneNote Guide Beta 2 notebook ID and a HierarchyScope of hsSections - Sections in the demo program - was lost in extraction.)

Brendon Wilson Gets It

It doesn't take a genius, or even a completed MBA, to see the value in OneNote. Take a look at what Brendon Wilson has to say about it: "OneNote: PM Super Tool".

A look at GetHierarchy()

The GetHierarchy method takes three parameters: the startNode Id as a string, hsScope as a HierarchyScope enumeration, and pbstrHierarchyXmlOut as a string to contain the XML representation of the OneNote information.

startNodeId identifies the node to start exporting from. (Parts of this description were lost in extraction.)

The second parameter, hsScope, is an enumerated type named HierarchyScope. This enumeration contains five values: Children, Notebooks, Pages, Sections and Self. We will look at the differences in the exported data based on the HierarchyScope value.

The third parameter, pbstrHierarchyXmlOut, is a string parameter that is used to pass back the requested OneNote information as XML. This is an out parameter and needs to be defined with the out attribute.

//Start Code -------------------------------------------------------------------------------

//Make sure your project adds the OneNote API reference
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.Office.Interop.OneNote;

namespace OneNoteDemo
{
    class OneNote_DemoApp
    {
        ApplicationClass _AppClass;

        static void Main(string[] args)
        {
            OneNote_DemoApp app = new OneNote_DemoApp();
            app.Run();
        }

        private void Run()
        {
            //create a new OneNote ApplicationClass
            _AppClass = new ApplicationClass();
            DumpOneNoteHierarhcy();
            Console.WriteLine("\nDone");
            Console.ReadLine();
        }

        private void DumpOneNoteHierarhcy()
        {
            string oneNoteAsXml;
            string result;
            HierarchyScope hs;
            Console.WriteLine("Dump OneNote Hierarchy");
            Console.WriteLine("Select a HierarchyScope:");
            Console.WriteLine("Children");
            Console.WriteLine("Notebooks");
            Console.WriteLine("Pages");
            Console.WriteLine("Sections");
            Console.WriteLine("Self");
            result = Console.ReadLine();

            switch (result.ToUpper())
            {
                case "CHILDREN": { hs = HierarchyScope.hsChildren; break; }
                case "NOTEBOOKS": { hs = HierarchyScope.hsNotebooks; break; }
                case "PAGES": { hs = HierarchyScope.hsPages; break; }
                case "SECTIONS": { hs = HierarchyScope.hsSections; break; }
                case "SELF": { hs = HierarchyScope.hsSelf; break; }
                default: { hs = HierarchyScope.hsSelf; break; }
            }

            Console.WriteLine("Scope = " + hs.ToString());

            //The key function call is here!
            _AppClass.GetHierarchy(null, hs, out oneNoteAsXml);
            Console.Write(oneNoteAsXml);
        }

        ~OneNote_DemoApp()
        {
            //ON Interops call into unmanaged code
            //So make sure we destroy it
            _AppClass = null;
        }
    }
}

//End Code ---------------------------------------------------------------------------------

I created the code in C#. If there is a request for VB.NET, let me know and I will see what I can do. Of course, the demo code should not be interpreted as well-designed and robust code. It's demo code.

Using the OneNote 2007 APIs in Visual Studio

The OneNote 2007 APIs are COM-based APIs. Luckily OneNote, like other Office-based apps, installs an interop DLL into the GAC for .NET developers. Before we can start working with OneNote in Visual Studio, we need to create a reference to the OneNote interop DLL, which will take care of the marshaling of data between managed .NET code and unmanaged COM code.

To create a reference to the OneNote interop DLL, right-click the References node in the Solution Explorer and select Add Reference to bring up the Add Reference dialog. Select Microsoft.Office.Interop.OneNote and click the OK button. You should see the reference listed under the References node in the Solution Explorer. You can now access the OneNote API. Check back later as we work our way through the OneNote 2007 API.
|
http://feeds.feedburner.com/TheUnknownOnenoteGuysBlog
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
The DBusThread object needs to be able to distribute messages to all objects that might be waiting for them.
Created attachment 622241 [details] [diff] [review] WIP: DBus Signal Manager This is getting blocked by the bluetooth manager object move, so putting a WIP patch just in case.
Created attachment 625317 [details] [diff] [review] WIP V2: DBus Signal Manager
Created attachment 625320 [details] [diff] [review] WIP V3: DBus Signal Manager
Comment on attachment 625320 [details] [diff] [review] WIP V3: DBus Signal Manager Mainly looking for feedback on the DBusMessageHandler/Manager setup. The plan is to change the in-constructor registration to object-level static Create() functions (so that objects are basically factories without having to worry about constructor failures). However, what should I have the DBusMessageManager store? I could have the Create() functions hand back RefPtrs I guess, but I'm worried about lifetime/destruction issues?
Comment on attachment 625320 [details] [diff] [review]
WIP V3: DBus Signal Manager

>diff --git a/ipc/dbus/DBusThread.h b/ipc/dbus/DBusThread.h
>+// Add a message handler object to the message distribution system
>+void RegisterDBusMessageHandler(const char* aNodeName, DBusMessageHandler* aMsgHandler);
>+
>+// Remove a message handler objects from the message distribution
>+// system
>+void UnregisterDBusMessageHandler(const char* aNodeName);
>+

What's the threading model here? The rest looks mostly ok on skim.
All DBusMessage handling runs on the main thread. When we get a DBusMessage in on the IOThread, a runnable is created with the DBusMessage struct and DBusSignalManager object (where the handlers are held) that's dispatched to the main thread. We assume DBusMessage handling to be a non-blocking operation.
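To illustrate the dispatch pattern described above (a sketch only, not code from this bug: DBusSignalManager::Get()->HandleMessage is a hypothetical handler entry point, and the ref/unref discipline anticipates the review comments below), it could look roughly like this:

// Sketch of dispatching a DBusMessage from the IOThread to the main
// thread. Assumes Gecko's nsRunnable/NS_DispatchToMainThread and
// libdbus's reference-counting API.
#include <dbus/dbus.h>
#include "nsThreadUtils.h"

// Hypothetical singleton that owns the registered handlers.
class DBusSignalManager {
public:
  static DBusSignalManager* Get();
  void HandleMessage(DBusMessage* aMsg);
};

class DistributeDBusMessageTask : public nsRunnable {
public:
  DistributeDBusMessageTask(DBusMessage* aMsg) : mMsg(aMsg) {
    dbus_message_ref(mMsg);  // take a ref so the message can cross threads
  }
  ~DistributeDBusMessageTask() {
    dbus_message_unref(mMsg);
  }
  NS_IMETHOD Run() {
    // Runs on the main thread; handlers are assumed to be non-blocking.
    DBusSignalManager::Get()->HandleMessage(mMsg);
    return NS_OK;
  }
private:
  DBusMessage* mMsg;
};

// On the IOThread, when a DBusMessage arrives:
//   NS_DispatchToMainThread(new DistributeDBusMessageTask(msg));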
Please to be documenting.
Created attachment 627437 [details] [diff] [review] WIP V4: DBus Signal Manager The last WIP for this iteration of the signal manager, as it's simply not going to work. What I'm basically doing here is reimplementing the ObserverManager idea in the HAL, except without the ability to keep lists of Observers. This means that if we have two applications started that use bluetooth and need the default adapter, we can't properly distribute messages to them, since this model doesn't reflect a one-to-many architecture. This patch needs to be redone with one ObserverList/Manager per node name, managed by the DBusThread singleton object. This way we can have objects subscribe under the same name across multiple applications and distribute messages to all of them at once. How lifetime management is going to work with that is still up in the air, but I've got a meeting with Ben Turner on Tuesday morning to iron all of this out.
Created attachment 628460 [details] [diff] [review] v5: Creating observer model for distributing DBusMessages Instead of going with trying to write my own observer system, I now just use the SystemObserver service and have objects register themselves under their DBus node names. This seems to do the trick for the one-to-multi broadcasting. Since we know that notify observers will run to completion, we store the DBusMessage in a fetchable variable. The non-constantness of the fetch is something we'd have to deal with no matter what.
Comment on attachment 628460 [details] [diff] [review] v5: Creating observer model for distributing DBusMessages r- for security vulnerability with DBusMessage (need to ref/unref to share between threads). Would prefer to do fuller review on version with ObserverList<T>. Looking forward to dbus-ery moving out of dom/bluetooth proper and into dom/bluetooth/dbus, but followup is fine.
Created attachment 628923 [details] [diff] [review] v6: Creating observer model for distributing DBusMessages
Created attachment 628927 [details] [diff] [review] v7: Creating observer model for distributing DBusMessages Removed extra DOM stuff added in v6 to avoid DOM peer review on this issue. Moved platform specifics to their own directory, removed all platform specificness from DOM code, changed observers to nsClassHashtable of ObserverList<T>'s, fixed DBusMessage ref/unref.
Created attachment 628958 [details] [diff] [review] v8: Creating observer model for distributing DBusMessages Removed a bunch of unused stuff in the patch
Comment on attachment 628958 [details] [diff] [review]
v8: Creating observer model for distributing DBusMessages

>diff --git a/dom/bluetooth/BluetoothAdapter.cpp b/dom/bluetooth/BluetoothAdapter.cpp
>+BluetoothAdapter::~BluetoothAdapter()
>+{
>+  if(NS_FAILED(UnregisterBluetoothEventHandler(mName, this))) {

Nit: |if (|, and elsewhere.

>+BluetoothAdapter::Create(const nsCString& name) {
>+  if(NS_FAILED(RegisterBluetoothEventHandler(name, adapter.get()))) {

You shouldn't need explicit .get() here. But if the C++ compiler disagrees, ignore me. Same for the uses elsewhere.

In the code here and elsewhere that has non-local Register/Unregister (i.e. not paired by ctor/dtor), a strong invariant of the object is that it's either registered or doesn't exist. So make the ctor private to ensure that only Create() is responsible for maintaining that invariant. And below.

>diff --git a/dom/bluetooth/BluetoothUtils.h b/dom/bluetooth/BluetoothUtils.h
>+/**
>+ * Add a message handler object from message distribution observer.
>+ * Object must inherit nsISupportsWeakReference.

Need to document the threading model of this code, perhaps as a summary comment above this one.

>+ * @param aNodeName Node name of the object
>+ * @param aMsgHandler Weak pointer to the object
>+ *
>+ * @return NS_OK on successful addition to observer, NS_ERROR_FAILED otherwise
>+ */
>+nsresult RegisterBluetoothEventHandler(const nsCString& aNodeName, Observer<nsCString> *aMsgHandler);

Nit: please fit this on 80 columns.

Non-nit: let's pull out the Observer<T> stuff here into stronger types

struct BluetoothMessage {
  nsCString ...;
  //...
};
typedef Observer<BluetoothMessage> BluetoothMessageObserver;

and use BluetoothMessageObserver instead of raw Observer<T>.

>diff --git a/dom/bluetooth/linux/BluetoothDBusUtils.cpp b/dom/bluetooth/linux/BluetoothDBusUtils.cpp
>+typedef Observer<nsCString> BTEventObserver;

Oh ... like you do here :).

>+typedef nsClassHashtable<nsCStringHashKey, ObserverList<nsCString> > BTEventObserverTable;

Let's also typedef ObserverList<T> for concision later.

>+struct DistributeDBusMessageTask : public nsRunnable {
>+
>+  DistributeDBusMessageTask(DBusMessage* aMsg) : mMsg(aMsg)
>+  {
>+  }

Let's make a Scoped<DBusMessage> (see mfbt/Scoped.h) to manage ref/unref of these guys. I see you use this a few more times below. It'll make your life much easier.

>+bool
>+StopBluetoothConnection()
>+{
>+  sBTEventObserverTable = NULL;

I don't believe this will free all the values in the table. Please double check.

>diff --git a/ipc/dbus/DBusUtils.cpp b/ipc/dbus/DBusUtils.cpp
>+dbus_bool_t dbus_func_args_async( DBusConnection *conn,

Formatting is really hosed here.

>diff --git a/ipc/dbus/DBusUtils.h b/ipc/dbus/DBusUtils.h
>+dbus_bool_t dbus_func_args_async(
>+  DBusConnection *conn,

Nit: please drop the newline after '('. And below.

>diff --git a/ipc/dbus/RawDBusConnection.cpp b/ipc/dbus/RawDBusConnection.cpp
>+void RawDBusConnection::ScopedDBusConnectionPtrTraits::release(DBusConnection* ptr) {

Brace on new line.

This is looking good. Would like to make a quick pass over one more version.
Created attachment 629301 [details] [diff] [review] v9: Creating observer model for distributing DBusMessages Above concerns addressed, plus some additional commenting and reworking of the event relay system to use full variant types instead of just strings.
Created attachment 629435 [details] [diff] [review] v10: Creating observer model for distributing DBusMessages Forgot to fix DBusUtils.h formatting
Comment on attachment 629435 [details] [diff] [review]
v10: Creating observer model for distributing DBusMessages

>diff --git a/dom/bluetooth/BluetoothAdapter.cpp b/dom/bluetooth/BluetoothAdapter.cpp
>+#include "mozilla/ipc/DBusThread.h"
>+#include <dbus/dbus.h>
>+

This file has been de-D-Bus'd, right? I don't think you need these.

>diff --git a/dom/bluetooth/BluetoothCommon.h b/dom/bluetooth/BluetoothCommon.h
>+struct BluetoothVariant
>+{
>+  uint32_t mUint32;
>+  nsCString mString;

We can make a proper space-efficient ~type-safe discriminated union for these using the IPDL compiler, but it's overkill for now. Can revisit when we start IPC-ifying.

>diff --git a/dom/bluetooth/BluetoothManager.cpp b/dom/bluetooth/BluetoothManager.cpp
>+#include "mozilla/ipc/DBusThread.h"
>+#include <dbus/dbus.h>

Don't think we need these either.

>+ mozilla::DebugOnly<nsresult> rv =

We're in namespace mozilla, so drop the mozilla:: qualification.

>diff --git a/ipc/dbus/DBusUtils.h b/ipc/dbus/DBusUtils.h
>+class ScopedDBusMessage

Hmm ... for what you're doing here, you need to take a ref, not just release an existing resource, so I gave you bad advice: you want a smart pointer. I think this might be a bit simpler and easier to use:

class DBusMessageRefPtr {
public:
  DBusMessageRefPtr(DBusMessage* aMsg) : mMsg(aMsg) {
    if (mMsg) dbus_message_ref(mMsg);
  }
  ~DBusMessageRefPtr() {
    if (mMsg) dbus_message_unref(mMsg);
  }
  operator DBusMessage*() { return mMsg; }
  DBusMessage* get() { return mMsg; }
private:
  DBusMessage* mMsg;
};

r=me with that
|
https://bugzilla.mozilla.org/show_bug.cgi?id=744349
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
A Guide to Time Series Visualization with Python 3
Introduction
Time-series analysis belongs to a branch of statistics that involves the study of ordered, often temporal, data. When relevantly applied, time-series analysis can reveal unexpected trends, extract helpful statistics, and even forecast future trends. For these reasons, it is applied across many fields including economics, weather forecasting, and capacity planning, to name a few.
In this tutorial, we will introduce some common techniques used in time-series analysis and walk through the iterative steps required to manipulate and visualize time-series data.
Prerequisites
To follow this guide, you will need a Python 3 programming environment with pip available, as well as Jupyter Notebook; both are used in the steps below.
Step 1 — Installing Packages
We will leverage the
pandas library, which offers a lot of flexibility when manipulating data, and the
statsmodels library, which allows us to perform statistical computing in Python. Used together, these two libraries extend Python to offer greater functionality and significantly increase our analytical toolkit.
Like with other Python packages, we can install
pandas and
statsmodels with
pip. First, let’s move into our local programming environment or server-based programming environment:
- cd environments
- . my_env/bin/activate
From here, let’s create a new directory for our project. We will call it
timeseries and then move into the directory. If you call the project a different name, be sure to substitute your name for
timeseries throughout the guide.
- mkdir timeseries
- cd timeseries
We can now install
pandas,
statsmodels, and the data plotting package
matplotlib. Their dependencies will also be installed:
- pip install pandas statsmodels matplotlib
At this point, we're now set up to start working with
pandas and
statsmodels.
Step 2 — Loading Time-series Data
To begin working with our data, we will start up Jupyter Notebook:
- jupyter notebook
To create a new notebook file, select New > Python 3 from the top right pull-down menu:
This will open a notebook which allows us to load the required libraries (notice the standard shorthands used to reference
pandas,
matplotlib and
statsmodels). At the top of our notebook, we should write the following:
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
After each code block in this tutorial, you should type
ALT + ENTER to run the code and move into a new code block within your notebook.
Conveniently,
statsmodels comes with built-in datasets, so we can load a time-series dataset straight into memory:

data = sm.datasets.co2.load_pandas()
co2 = data.data
Let's check what the first 5 lines of our time-series data look like:
print(co2.head(5))
Output
              co2
1958-03-29  316.1
1958-04-05  317.3
1958-04-12  317.6
1958-04-19  317.5
1958-04-26  316.4
With our packages imported and the CO2 dataset ready to go, we can move on to indexing our data.
Step 3 — Indexing with Time-series Data
You may have noticed that the dates have been set as the index of our
pandas DataFrame. When working with time-series data in Python we should ensure that dates are used as an index, so make sure to always check for that, which we can do by running the following:
co2.index
Output
DatetimeIndex(['1958-03-29', '1958-04-05', '1958-04-12', '1958-04-19',
               '1958-04-26', '1958-05-03', '1958-05-10', '1958-05-17',
               '1958-05-24', '1958-05-31',
               ...
               '2001-10-27', '2001-11-03', '2001-11-10', '2001-11-17',
               '2001-11-24', '2001-12-01', '2001-12-08', '2001-12-15',
               '2001-12-22', '2001-12-29'],
              dtype='datetime64[ns]', length=2284, freq='W-SAT')
The
dtype='datetime64[ns]' field confirms that our index is made of date stamp objects, while
length=2284 and
freq='W-SAT' tells us that we have 2,284 weekly date stamps starting on Saturdays.
Weekly data can be tricky to work with, so let's use the monthly averages of our time-series instead. This can be obtained by using the convenient
resample function, which allows us to group the time-series into buckets (1 month), apply a function on each group (mean), and combine the result (one row per group).
y = co2['co2'].resample('MS').mean()
Here, the term
MS means that we group the data in buckets by months and ensures that we are using the start of each month as the timestamp:
y.head(5)
Output
1958-03-01    316.100
1958-04-01    317.200
1958-05-01    317.120
1958-06-01    315.800
1958-07-01    315.625
Freq: MS, Name: co2, dtype: float64
An interesting feature of
pandas is its ability to handle date stamp indices, which allow us to quickly slice our data. For example, we can slice our dataset to only retrieve data points that come after the year
1990:
y['1990':]
Output
1990-01-01    353.650
1990-02-01    354.650
...
2001-11-01    369.375
2001-12-01    371.020
Freq: MS, Name: co2, dtype: float64
Or, we can slice our dataset to only retrieve data points between October
1995 and October
1996:
y['1995-10-01':'1996-10-01']
Output
1995-10-01    357.850
1995-11-01    359.475
1995-12-01    360.700
1996-01-01    362.025
1996-02-01    363.175
1996-03-01    364.060
1996-04-01    364.700
1996-05-01    365.325
1996-06-01    364.880
1996-07-01    363.475
1996-08-01    361.320
1996-09-01    359.400
1996-10-01    359.625
Freq: MS, Name: co2, dtype: float64
With our data properly indexed for working with temporal data, we can move onto handling values that may be missing.
Step 4 — Handling Missing Values in Time-series Data
Real world data tends to be messy. As we can see from the plot, it is not uncommon for time-series data to contain missing values. The simplest way to check for those is either by plotting the data directly or by using the command below, which will reveal missing data in the output:
y.isnull().sum()
Output
5
This output tells us that there are 5 months with missing values in our time series.
Generally, we should "fill in" missing values if they are not too numerous so that we don’t have gaps in the data. We can do this in
pandas using the
fillna() command. For simplicity, we can fill in missing values with the closest non-null value in our time series, although it is important to note that a rolling mean would sometimes be preferable.
y = y.fillna(y.bfill())
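If a rolling mean is preferable for your data, one possible variant (a sketch, not part of the original steps; the window length of 6 is an arbitrary choice) is to fill each gap with a centered rolling average:

y = y.fillna(y.rolling(window=6, min_periods=1, center=True).mean())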
With missing values filled in, we can once again check to see whether any null values exist to make sure that our operation worked:
y.isnull().sum()
Output
0
After performing these operations, we see that we have successfully filled in all missing values in our time series.
Step 5 — Visualizing Time-series Data
When working with time-series data, a lot can be revealed through visualizing it. A few things to look out for are:
- seasonality: does the data display a clear periodic pattern?
- trend: does the data follow a consistent upward or downward slope?
- noise: are there any outlier points or missing values that are not consistent with the rest of the data?
We can use the
pandas wrapper around the
matplotlib API to display a plot of our dataset:
y.plot(figsize=(15, 6))
plt.show()
Some distinguishable patterns appear when we plot the data. The time-series has an obvious seasonality pattern, as well as an overall increasing trend. We can also visualize our data using a method called time-series decomposition. As its name suggests, time series decomposition allows us to decompose our time series into three distinct components: trend, seasonality, and noise.
Fortunately,
statsmodels provides the convenient
seasonal_decompose function to perform seasonal decomposition out of the box. If you are interested in learning more, the reference for its original implementation can be found in the following paper, "STL: A Seasonal-Trend Decomposition Procedure Based on Loess."
The script below shows how to perform time-series seasonal decomposition in Python. By default,
seasonal_decompose returns a figure of relatively small size, so the first two lines of this code chunk ensure that the output figure is large enough for us to visualize.
from pylab import rcParams
rcParams['figure.figsize'] = 11, 9
decomposition = sm.tsa.seasonal_decompose(y, model='additive')
fig = decomposition.plot()
plt.show()
Using time-series decomposition makes it easier to quickly identify a changing mean or variation in the data. The plot above clearly shows the upwards trend of our data, along with its yearly seasonality. These can be used to understand the structure of our time-series. The intuition behind time-series decomposition is important, as many forecasting methods build upon this concept of structured decomposition to produce forecasts.
Conclusion
If you've followed along with this guide, you now have experience visualizing and manipulating time-series data in Python.
To further improve your skill set, you can load in another dataset and repeat all the steps in this tutorial. For example, you may wish to read a CSV file using the
pandas library or use the
sunspots dataset that comes pre-loaded with the
statsmodels library:
data = sm.datasets.sunspots.load_pandas().data
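Note that the sunspots data is not datetime-indexed out of the box. As a sketch (assuming the dataset's YEAR and SUNACTIVITY columns, which hold annual observations), you could build a datetime index before repeating the steps above:

data = sm.datasets.sunspots.load_pandas().data
# Convert the float YEAR column (e.g. 1700.0) into annual date stamps
data.index = pd.to_datetime(data['YEAR'].astype(int).astype(str), format='%Y')
y_sun = data['SUNACTIVITY']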
|
https://www.digitalocean.com/community/tutorials/a-guide-to-time-series-visualization-with-python-3
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
I'm supposed to create a program to read words one by one into a vector and print out the words connected with '-'. So if the input was hello world, the output would be hello-world.
This is the code I've made so far. Right now my input can be hello world but my output would be
hello-
world-
I don't want the - after world, and I want it printed out on one line. Any help would be appreciated.
#include <iostream>
#include <iomanip>
#include <string>   // needed for std::string
#include <vector>
using namespace std;

int main()
{
    vector<string> svect;
    string word;
    while( cin >> word ){
        word += '-';
        svect.push_back(word);
    }
    for( int i = 0; i < svect.size(); i++ )
        cout << svect[i] << endl;
    return 0;
}
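One way to get the desired output (a sketch, not from the thread): store the bare words and print the separator only between elements, so no dash trails the last word and everything stays on one line.

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> svect;
    std::string word;
    while (std::cin >> word)
        svect.push_back(word);

    // Print "-" before every element except the first
    for (std::size_t i = 0; i < svect.size(); ++i) {
        if (i > 0) std::cout << '-';
        std::cout << svect[i];
    }
    std::cout << '\n';
    return 0;
}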
|
https://www.daniweb.com/programming/software-development/threads/226408/printing-vectors-new-at-c
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
An array is a group of similarly typed variables that are referred to by a common name. Arrays of any type can be created and may have one or more dimensions. A specific element in an array is accessed by its index. The array is a simple data structure which can store primitive variables or objects. For example, imagine you had to store the results of six subjects: we can do that using an array. To create an array value in Java, you use the new keyword, just as you do to create an object.
Defining and constructing a one-dimensional array
The general form of a one-dimensional array declaration is type arrayname[] = new type[size]; (for example, int resultArray[] = new int[6];). Here, type specifies the type of the variables (int, boolean, char, float, etc.) being stored, size specifies the number of elements in the array, and arrayname is the variable name that is the reference to the array. The array size must be specified when creating an array. If you are creating an int[], for example, you must specify how many int values you want it to hold (in the example statement, resultArray[] has a size of 6 int values). Once an array is created, it can never grow or shrink.
Initializing the array: You can initialize a specific element in the array by specifying its index within square brackets. All array indexes start at zero.
resultArray[0]=69;
This will initialize the first element (index zero) of resultArray[] with the integer value 69. Array elements can be initialized/accessed in any order. In memory, it will create a structure similar to the figure below.
Array Literals
The null literal used to represent the absence of an object can also be used to represent the absence of an array. For example:
String [] name = null;
In addition to the null literal, Java also defines a special syntax that allows you to specify array values literally in your programs. This syntax can be used only when declaring a variable of array type. It combines the creation of the array object with the initialization of the array elements:
String[] daysOfWeek = {"Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"};
This creates an array that contains the seven string elements representing the days of the week within the curly braces. Note that we don't use the new keyword or specify the type of the array in this array literal syntax. The type is implicit in the variable declaration of which the initializer is a part. Also, the array length is not specified explicitly with this syntax; it is determined implicitly by counting the number of elements listed between the curly braces.
Let’s see sample java program to understand this concept better. This program will help to understand initializing and accessing specific array elements.
package arrayDemo;

public class ResultListDemo {
    public static void main(String[] args) {
        //Array Declaration
        int resultArray[] = new int[6];

        //Array Initialization
        resultArray[0]=69;
        resultArray[1]=75;
        resultArray[2]=43;
        resultArray[3]=55;
        resultArray[4]=35;
        resultArray[5]=87;

        //Array elements access
        System.out.println("Marks of First Subject- "+ resultArray[0]);
        System.out.println("Marks of Second Subject- "+ resultArray[1]);
        System.out.println("Marks of Third Subject- "+ resultArray[2]);
        System.out.println("Marks of Fourth Subject- "+ resultArray[3]);
        System.out.println("Marks of Fifth Subject- "+ resultArray[4]);
        System.out.println("Marks of Sixth Subject- "+ resultArray[5]);
    }
}
Output:
Alternative syntax for declaring and initializing an array in the same statement:
int [] resultArray = {69,75,43,55,35,87};
Multidimensional Arrays
In Java, multidimensional arrays are actually arrays of arrays. These, as you might expect, are declared by adding an additional set of square brackets for each dimension. The statement below will create a matrix of the size 2x3 in memory.
int twoDim[][] = new int[2][3];
Let's have a look at the program below to understand two-dimensional arrays.
package arrayDemo;

public class twoDimArrayDemo {
    public static void main (String []args){
        int twoDim [][] = new int [2][3];
        twoDim[0][0]=1;
        twoDim[0][1]=2;
        twoDim[0][2]=3;
        twoDim[1][0]=4;
        twoDim[1][1]=5;
        twoDim[1][2]=6;
        System.out.println(twoDim[0][0] + " " + twoDim[0][1] + " " + twoDim[0][2]);
        System.out.println(twoDim[1][0] + " " + twoDim[1][1] + " " + twoDim[1][2]);
    }
}
Output:
Inbuilt Helper Class (java.util.Arrays) for Arrays Manipulation:
Java provides a very important helper class (java.util.Arrays) for array manipulation. This class has many utility methods, such as sorting an array, printing the values of all array elements, searching for an element, and copying one array into another. Let's see a sample program to understand this class better. In the program below, a float array is declared; we print the array elements before sorting and after sorting.
package arrayDemo;

import java.util.Arrays;

public class ArraySortingDemo {
    public static void main(String[] args) {
        //Declaring an array of float elements
        float [] resultArray = {69.4f,75.3f,43.22f,55.21f,35.87f,87.02f};
        System.out.println("Array Before Sorting- " + Arrays.toString(resultArray));

        //The line below will sort the array in ascending order
        Arrays.sort(resultArray);
        System.out.println("Array After Sorting- " + Arrays.toString(resultArray));
    }
}
Output:
Similar to "java.util.Arrays", the System class also provides functionality for efficiently copying data from one array to another, via System.arraycopy. The general form is System.arraycopy(src, srcPos, dest, destPos, length), as in the sketch below.
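A minimal sketch (the class and variable names here are illustrative, not from the original):

package arrayDemo;

public class ArrayCopyDemo {
    public static void main(String[] args) {
        int[] source = {69, 75, 43, 55, 35, 87};
        int[] target = new int[6];

        // Copies source.length elements from 'source' starting at index 0
        // into 'target' starting at index 0.
        System.arraycopy(source, 0, target, 0, source.length);

        System.out.println("Copied array- " + java.util.Arrays.toString(target));
    }
}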
|
http://www.w3resource.com/java-tutorial/java-arrays.php
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
First off, I'm no programmer, and not strong at scripting.
I found this script at: http:/
I put in the siteUrl, created a new list called Test (which I would have done anyway) and typed in the user name. I saved the file as a ps1 script and execute it with SP Powershell.
I get the following error:
Line 3, character 26: "An expression was expected after '('"
If I put the "web" name ("IS") in the parenthesis, the next error says there's a wrong parameter in the 'foreach' command, and I really have no clue what's supposed to be there.
How do you "read" the steps in this script? Is it a powershell script or vb or something else?
Any help is appreciated, as usual!
SPSite site = new SPSite(siteUrl);
SPWeb web = site.OpenWeb();
web.AllowUnsafeUpdates = true;
SPList list = web.Lists["Test"];
SPListItemCollection collection = list.Items;
foreach (SPListItem item in collection )
{
SPUser user = web.EnsureUser("UserName Login Name");
//1073741823;#System Account //User name values are in this format ID;#Login Name
string value1 = user.ID + ";#" + user.Name; //Create in same format
item["Author"] = value1; //for Created By field
item["Editor"]=value1 ; //Modified By field
item.Update(); //Update the item
}
list.Update();
web.Update();"
10 Replies
Dec 29, 2011 at 7:25 UTC
AvantiTech Solutions is an IT service provider.
Does not look like a powershell or a vbscript, might possibly be a VB C# code snippet??
Dec 29, 2011 at 7:51 UTC
I think it is C# based on other discussions on the sites where I saw similar code. But how do you implement or execute it in Sharepoint Powershell?
And what language sets parameters with "using" like the following script? This is doing the same thing, changing a system field, but Powershell says something about it being a non-applicable language, or something like that. Where/how would I execute this? "cscript code.vbs"?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.SharePoint;
namespace DiscussionBoardTopicID
{
class Program
{
static void Main(string[] args)
{
using (SPSite oSiteCollection = new SPSite("http:/
{
using (SPWeb oWebsiteRoot = oSiteCollection.OpenWeb())
{
SPList oList = oWebsiteRoot.Lists["Team Discussion"];
foreach (SPListItem item in oList.Folders)
{
item[SPBuiltInFieldId.Modified] = DateTime.Now;
item.UpdateOverwriteVersion();
}
}
}
}
}
}
Dec 29, 2011 at 9:52 UTC
It is C#. The code with the "using" statements is also C#. This code uses the .NET CLR because all of those "using" statements are calling components of the CLR. I doubt you will ever get this code to run inside of a .vbs file.
I suppose you could create your own .net dll using this code, compile it, and then call it from your vb script
Dec 29, 2011 at 10:08 UTC
I suppose you could create your own .net dll using this code, compile it, and then call it from your vb script
Hmmm, yeah, that's going to happen, I'll take care of that right after I fix this little time travel problem I'm having.
Any insights into the first script?
Dec 29, 2011 at 10:50 UTC
The for each loop is iterating through all of the SPListItem objects in your SPListItemCollection. The SPListItemCollection is populated from the items in the SPList which is returned from the SPWeb.Lists call.
Basically the steps are:
open a sharepoint site object with the siteUrl parameter (supplied by you)
create a sharepoint web object by calling the openWeb method of the sharepoint site object
allow unsafe updates on the sharepoint web object
get a list called "Test" from the web object (sharepoint) and put it in a SPList (the "Test" list needs to be in SharePoint)
put the list items into a SPListItemCollection
Iterate through each SPListItem in the SPListItemCollection
Update the item
update the list
update the web
Set a breakpoint and run this in visual studio or web matrix or some other .NET ide. Most are free to download. I would concentrate to see if your SPList variable is getting populated with your "Test" list.
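Not from the thread, but for reference, a rough PowerShell sketch of the same steps (assuming the SharePoint 2010 Management Shell's Get-SPWeb cmdlet; the site URL and login name are placeholders, and this is untested):

$web = Get-SPWeb "http://yoursite"          # open the SharePoint web (adjust URL)
$web.AllowUnsafeUpdates = $true
$list = $web.Lists["Test"]                   # the "Test" list must exist
$user = $web.EnsureUser("DOMAIN\username")   # placeholder login name
$value1 = "{0};#{1}" -f $user.ID, $user.Name # ID;#Login Name format
foreach ($item in $list.Items)
{
    $item["Author"] = $value1   # Created By field
    $item["Editor"] = $value1   # Modified By field
    $item.Update()
}
$list.Update()
$web.Update()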
Dec 29, 2011 at 11:00 UTC
This part looks like it's supposed to be customized, it doesn't make sense as is, it almost looks like a comment:
foreach (SPListItem item in collection )

Other similar commands just had something like foreach ($SPList in $collection), (I realize those variables aren't in this script) but I've tried every combination of wording I could think of, using variables in previous lines.
and thanks for parsing through the lines, at least I was actually following the flow fairly accurately.
Dec 29, 2011 at 11:14 UTC
I don't have the environment to test any of this code, but this section
foreach (SPListItem item in collection )
would appear to be valid. The variables are already set. SPListItem is the type of object and collection is set with this line:
SPListItemCollection collection = list.Items;
You are populating list with this line:
SPList list = web.Lists["Test"]; call
your variables in this thing are: site, web, list, collection, item
You need to step through the code to ensure these are all being populated properly.
here is what msdn has to say about the SPListItemCollection
http:/"
Dec 29, 2011 at 11:29 UTC
So even though "item" isn't defined as a variable before it's used in the two action lines, it's legal, or proper, to set it up that way? That's why I don't program.
If so then it makes sense, I just couldn't find anything that said it was correct like that. And not knowing how much of this code was intended to be plug-and-play, so to speak, versus more hypothetical, it threw me probably more than it should have.
I'm going to take this and run with it.
You're the bomb, thanks for much for your time!
Dec 29, 2011 at 11:34 UTC
No problem. Glad I could help.
Yeah, item is being both instantiated and populated with that foreach statement. It makes it a little difficult to follow.
|
https://community.spiceworks.com/topic/186925-help-identifying-type-of-script-and-error-expression-expected-after
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
strpattern_match_invoke_action()
Get the action of an invoke associated with a pattern match.
Synopsis:
#include <strpattern.h>
const char* strpattern_match_invoke_action(const strpattern_match *match, int index, int *err)
Since:
BlackBerry 10.0.0
Arguments:
- match
The match containing the invoke whose action is returned.
- index
The index of the invoke associated with the match.
- err
Set to STRPATTERN_EOK if there is no error.
Library: libstrpattern (For the qcc command, use the -l strpattern option to link against this library)
Description: Returns the action of the invoke at the given index associated with the pattern match.
Returns:
A NULL-terminated string with the action. NULL if no action is set for the invoke or on error. Ownership is retained by the callee.
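A minimal usage sketch (the helper function and the way the match is obtained are illustrative, not from this reference page):

#include <stdio.h>
#include <strpattern.h>

/* Print the action of the first invoke of a match.
 * Assumes 'match' was obtained from a strpattern analysis callback. */
void print_first_invoke_action(const strpattern_match *match)
{
    int err = STRPATTERN_EOK;
    const char *action = strpattern_match_invoke_action(match, 0, &err);
    if (err == STRPATTERN_EOK && action != NULL) {
        /* Ownership of the string is retained by the callee; do not free it. */
        printf("invoke action: %s\n", action);
    }
}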
Last modified: 2014-05-14
|
http://developer.blackberry.com/native/reference/core/com.qnx.doc.strpattern.lib_ref/topic/strpattern_match_invoke_action.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
I, Avatar: Constructions of Self and Place in Second Life and the Technological Imagination
I, Avatar: Constructions of Self and Place in Second Life and the Technological Imagination
Donald E. Jones
Communication, Culture and Technology, Georgetown University
Published by gnovis, the peer-reviewed journal of Communication, Culture and Technology
On a recent evening, I received a call from a friend to come see a mutual friend's new house. After arriving by public transportation, I walked into the well-appointed home. Music played on the sound system throughout the house as I looked around the living room, which was decorated in earth tones and rich fabrics. The couches were comfortable for lounging and had tasteful, contemporary upholstery. Being out in the suburbs, butterflies flew near the window box. Paintings decorated the walls, the handiwork of the owner's brothers. I complimented my host on her taste (she had designed the interior herself) and she smiled. As she went to attend to another guest who had arrived on the porch, I caught up with my friend whom I had first met at a social gathering a few weeks before. He told me of his recent luck winning at a trivia game at a local bar and showed off the new clothes he had bought with the winnings. After talking a while, I had to get to bed, so I said my goodbyes and left. While I could have just been describing a mundane evening in almost any part of the world, the space in which this particular evening occurred only exists in the memory and storage of a farm of servers outside San Francisco. But the house was constructed by my friend, and I did lounge in it on a lovely couch with my own (constructed) body (see Figure 1). Welcome to my Second Life.

Second Life, a three-dimensional virtual world, launched in 2003, was intentionally designed to be an environment constructed by its users. From the shape of their avatars[1] to the design of their homes, from how they spend their time to what types of affinity groups they form, Second Life's design was focused on fostering creativity and self-expression in order to create a vibrant and dynamic world full of interesting content (Ondrejka, 2004, p. 1). As such, it is unique among virtual worlds that exist today but represents a trend that its creators and others anticipate may eventually transform the Internet as graphics and network capability grow (Kushner, 2004). Second Life grew out of the vision of the Metaverse described in Neal Stephenson's novel Snow Crash. Stephenson was the first to describe an online environment [the Metaverse] that was a real place to its users, one where they interacted using the real world as a metaphor

[1] Avatar is derived from the Sanskrit avatara and is meant to suggest the idea of a kind of transubstantiation, the incarnation of life in a different form (Tofts, 2003, p. 56). Avatar is the common term for representations, either textual or visual, of people's presence in a digital environment. In Second Life, avatars are three-dimensional and user constructed in almost every detail.
and socialized, conducted business and were entertained (Ondrejka, 2004, p. 81). The developers of Second Life see their user-constructed world as the first step towards fulfilling this vision. This vision is to create a space where anyone can create and build an avatar body and dreamlike places that fulfill their desires, a world that will function as real, transcending the bounds of flesh and circumstance of the actual, tangible world.

Figure 1. My Avatar in my friend's house, April.

This article will discuss the historical and current discourses on the construction of spaces and selves, both real and virtual, as well as the cultural and scientific construct of virtual reality. It will also describe the accompanying dream of transcendence from the limitations of bodies and the actual world that sits at the nexus of its discourse. Then, it will place Second Life into the context of the evolution of computer-enabled virtual worlds and analyze some of the economic, legal, psychological and philosophical implications of user-constructed virtual bodies and virtual spaces within a virtual world supported by ownership, property and tangible real-world economic value.
While Second Life captures the imagination of individuals who wish to create new lives free from societal and physical limitations of ethnicity, gender, geography, sexual orientation or status, it still manifests significant aspects of the society (American, capitalist, gendered) from which it sprung and therefore is more reflective than transcendent. However, since it is now possible to work in a fantasy world to pay rent in reality in places such as Second Life, user-created virtual worlds enable users to build virtual lives, with virtual bodies, virtual objects and virtual homes, that can have real, tangible value and meaning (Lastowka & Hunter, 2004, p. 11). Second Life represents, as Hillis describes, an example of [virtual reality] as postmodern technology because it blurs and fragments boundaries and senses of self and place and functions as a virtual microcosm for cultural, economic and identity recombination (1999). In these new frontiers, avatars and the spaces they build will continue to challenge our concept of reality and humanity.

The Virtual and the Actual: Reality Through History

In order to get a sense of the meaning of virtual spaces for the human imagination, it is beneficial to discuss some particular discourses that led to the emergence of Second Life. Discussions of virtuality and virtual reality are found among academics across many disciplines including psychology (Fink, 1999), geography (Hillis, 1999), philosophy (Heim, 1998; Zhai, 1998), sociology (Schroeder, 1996), communication (Biocca & Levy, 1995), literature and cultural studies (Markley, 1996; Bender & Druckrey, 1994) and computer science (Çapin, 1999). Since this article is most concerned with theories of the virtual in relation to the self and imagination of the world, the philosophical and psychological perspectives will frame the discussion. Throughout history, there have been differing views on what exactly was the real. Virtual reality lies in a discourse on reality and the position of human beings within it that has spanned from pre-modern times, through the Enlightenment and to the present. Human beings, enabled by technology, have increasingly become the central observers and constructors of their own reality. Virtual reality is the contemporary and future articulation of the philosophical and psychological question of how we define (and create) reality. From the beginning of time, human beings negotiated between and through the actual and the virtual. Fink in Cyberseduction takes a broad view by defining the virtual as something that exists in the mind without actual physical fact, form, or features; virtual realities occur in inner
mental space, reflecting internal environments (1999, p. 22). Heim describes the virtual as "not actually, but as if" and points to the origin of the word in the Latin virtus, defined as human power, implying that the virtual comes out of human creation rather than existing in the actual of the physical world (1998, p. 220). Therefore, the virtual has existed in human acts of imagination and creation from the dawn of consciousness. Ropolyi uses Heim's definition as a stepping point for discussing the real and the virtual through history (2001, p. 168). In pre-modern times, the magic worldview (in which humans could harness supernatural forces and affect the world through symbol, ritual and spell) blended the real and the virtual without much concern. The magic reality was constructed by will; in this way the mere construction of interrelations between the observed phenomena or between the experienced situations had an absolute primacy, without making distinctions between different kinds of interrelations (Ropolyi, 2001, p. 170). In Greek thought, Plato's conception of what was real and what was virtual could be seen as a reversal of how we construct the concepts today. He believed that the sensual world was imperfect because it was constantly in flux and not as real as the perfect Forms (universal ideal types of which things we perceive in our world are merely imperfect expressions; e.g. there is a perfect dog which exists in another world that any dog we see merely, and imperfectly, reflects) (Ropolyi, 2001, p. 172). Further, he speaks of humans seeing the world as if they watch shadows on the back wall of a cave, the shadows cast by objects being moved before a fire. The real world is outside the cave, containing the patterns from which the objects were copied, and the principle of the good, whose analogue is the light of the sun (Hillis, 1999, p. 39). In other words, what was perceived as real was actually the virtual, because the perfect forms of the real universe (truth, light, knowledge) were beyond apprehension. Pre-modern religious belief and mythology also intermingled the real and the virtual. Reality was considered on a higher level than what people experienced as actual. The life of human beings [in pre-modern thought] is performed in the vale of tears, in the shade of the world... The complete earthy (sic) life takes place in the realm of virtuality; or in other words, everything is virtual in some sense, the only exception is God (Ropolyi, 2001, p. 172). In this pre-Enlightenment era, people did not necessarily believe that humans could grasp the truth of the world through what was measurable and experienced, so what was imagined (which from today's lens would be considered, on some level, virtual) was what was real, and what was
perceived and tangible (what we would generally call real today) was what was virtual. In other words, God and the angels, who could only be experienced in prayer, were more real than any object that a person saw in front of their eyes, because human sight was inherently fallible due to the sinful and imperfect nature of the world in which they lived. The scientific revolution of early modernity, while not removing the belief in real higher realms, sought answers to the question of what reality was by the experience and observation of the natural world. The use of the senses, supported by technologies that enhanced them, shifted the construction of reality from the mythological to the logical, scientific and observable: from what Armstrong describes as the move from mythos (myth) to logos (logic) as the centering approach to understanding the world (2001). The physical, rather than the metaphysical, became the locus of reality. Vision, in particular, supported by such technologies as the camera obscura (which projected a mirror-image of the actual world onto a wall through a pin-hole in a darkened room, providing, what some would say, a perfect monocular view of reality), the telescope and others, began to define the real. Thinkers like Descartes sought answers in the world through the use of optics to view what was considered an objective truth (Crary, 1990). This Cartesian tradition [accorded] primacy to sight in a way that conceptually privilege[d] the eye over the human body of which it [was] still a part, and ma[de] the eye a metaphor of the mind (Hillis, 1999, p. 94). Descartes and Enlightenment thought will color the ideas expressed throughout this article. Most significant to Enlightenment thinkers was the idea that what was seen, touched, observed and measured became the seat of reality rather than some far-off other-world. According to Crary, late modernity changed the place of the observer through the scientification of sight and the introduction of new visual technologies. Crary places vision within a historico-cultural discourse from the seventeenth to nineteenth century. Originally, the observer, being a point within a plane of vision that was tangible, external, and independent of the viewer (Crary, 1990), objectively saw the truth of the world from a monadic viewpoint (from the camera obscura). However, in the nineteenth century, the observer became an active participant in the construction of a subjective reality. The perception of this reality relied on the particularities of the human visual system, now rationalized, the interaction with tools (magic lanterns, thaumatropes, phenakistiscopes, zootropes, kaleidoscopes and stereoscopes), and the products of this rationalization (Crary, 1990). These interactions allowed for the creation of
images that were disconnected from the tangible (Crary, 1990). Through the mediation of technology, alternate visions were created that did not rely necessarily on anything actual, but rather on tricking the eye. In the same way, later technologies like film, television, or the computer screen created realistic images from beams of light, chemicals, electrical impulses, and ones and zeros. Now, in the so-called post-modern era, people are inundated by flickering images that purport to reflect reality but are at the same time constructed subjectively by the senses of the observer. Hillis expands on Crary's thesis in seeing the perfect vision of the camera obscura, the fantasy of the magic lantern, and the different immersive qualities of the stereoscope and the panorama standing as precursive cultural and material technologie (sic) to the formation of the discourse on virtual reality and virtual environments (Hillis, 1999). The virtualization of the world has been a process that has been shaping scientific and cultural discourse over a great length of time. Western humanity from its cultural beginnings gathered its understandings of the real from beyond its ability to see and observe, that is, in the Divine. The modern experiment, however, sought truth by seeking answers in the tangible, observed, measured, empiric, visual, actual world, a pursuit aided by optical technologies. This experiment furthered understandings of the way vision worked and how to create visual experiences (plays of sound and light, magic lanterns and stereoscopes, and later photography and film) that mediated the actual world and/or created realistic virtual images. In the contemporary moment, Western thought is informed by a history of seeking the transcendent, finding truth in the seen, and the increasingly developed technological ability to create more visually (and aurally, and, eventually, more fully sensually) rich constructions of artifice and simulacra. It is into this context that the discourse of virtual reality and virtual worlds developed in its contemporary sense.

Dreams of Virtual Reality

The positioning of [virtual reality] as a new technology, the next thing, expresses a transcendental yearning to deny both history and the necessary limits that attend and organize material realities and their accompanying forms (Hillis, 1999, p. 30).

All possible sensory frameworks that support a certain coherence and stability of perception have equal ontological status for organizing our experiences. This principle will be able to lead us to go behind the alleged physical space and see why the spatial
configuration we are familiar with is just one among many possibilities of sensory framework (Zhai, 1998, p. 2).

Heim, in his The Metaphysics of Virtual Reality, describes seven different concepts that guide the field of study as well as the accompanying cultural construction of virtual reality: simulation (realism and three-dimensionality); interaction (ability to engage in the environment and with others in it); artificiality (even broader than Fink's definition of the virtual and similar to Baudrillard's concept of our world being completely saturated by simulacra and the hyperreal (Baudrillard, 1981)); immersion (use of hardware to simulate sensory experience, like a virtual reality headpiece or tactile glove); tele-presence (a feeling of presence in a remote (or virtual) place and/or control of a remote robot agent); full-body immersion (kinesthetic tracking of body movement by a computer); and networked communications (interaction with others via the Internet) (1993). To achieve virtual reality status, a technology does not have to fulfill all seven concepts. Virtual technologies are characterized as strong virtual reality or weak virtual reality in relation to these seven categories. For example, a text-based chat room may be highly interactive but not immersive and therefore would be considered weak. However, if that chat space were a three-dimensional graphic environment that encompassed the vision of its users, it would be considered a stronger type. This proliferation of definitions has made virtual reality a veritable catch-all phrase. The founding dream of virtual reality was envisioned in a speech, The Ultimate Display, given by Ivan Sutherland, considered one of the founding researchers in the field, in 1965: The Ultimate Display would be connected to a digital computer... a looking glass into a mathematical wonderland... The ultimate display would be a room within which the computer can control the existence of matter... With appropriate programming such a display could literally be the Wonderland in which Alice walked (Hillis, 1999, p. 8). Biocca and Levy describe the drive
behind this dream as the search for the essential copy and the desire for physical transcendence (1995). Seeking the essential copy is to search for a means to fool the senses, a display that provides a perfect illusory deception. Seeking physical transcendence is nothing less than the desire to free the mind from the prison of the body (Biocca & Levy, 1995, p. 7). These goals follow from the historico-cultural discourses of the primacy of vision and mind/body dualism that came before. The Ultimate Display advocated to re-create a world as a better place and to re-create the body, digitized and customizable, as a perfect self. Critics have reacted to this vision with joy and trepidation. Hillis concludes his critical discussion of virtual reality as cultural discourse with the admonition to never forget the promises of technological visions past, as well as the persistent place of the body. The promise and hype of [virtual reality] and [Internet technologies] more generally is part of an ideology of the future, produced in an amnesia and loss of history that forgets the broken promises of past technologies such as the universal educator (TV) and too cheap to meter (nuclear power). Metaphors of progress and evolution work to suggest that bodies and places are always incomplete, partial, and by necessity thereby flawed... if understanding can always only be partial, and if the mind is also flesh, then answers cannot lie solely within the transcendent light and reflected images inside the [virtual reality] head-mounted display (Hillis, 1999, p. 211). Hillis posits that while virtual reality and virtual environments are factual and experienced sensually, they are, most importantly, socially produced but try to masquerade as brute facts (Hillis, 1999, p. 52). In other words, virtual reality tries to act as an aspect of the world that doesn't need an institutional understanding but just is, like snow on Mount Everest (Hillis, 1999, p. 52). The virtual dream, then, is dangerous because it tries to replace brute reality with one constructed only of light and mirage. On the other side of the spectrum, Zhai, while attending to the risks of virtual reality, holds a more positive vision. With the invention of [virtual reality] we are beginning to reach a stage of meta-physical maturity such that we can see through, without destructive disillusionment, the trick of the alleged materialistic thickness (1998). We welcome it as an occasion for our participation in the Ultimate Re-Creation (Zhai, 1998, p. 173). Zhai argues that our very concept of space is based in vision; therefore our understanding of the world, even what is material, depends upon the nature of our sensory framework (1998). In other words, it is the limits of our physical senses that construct what space and matter mean to us. Virtual reality is therefore inherently good in both experiential and transcendent senses because it allows us to
envision the world and recreate it beyond the bounds of our current conceptions of the real (Zhai, 1998, p. 153). We are capable of experiencing it as a new reality, since what we call reality now is constructed by the senses alone (Zhai, 1998). Writing from evolutionary psychology theory, Fink takes a different tack. In some sense, it really does not matter whether something is real or virtual, because human beings are programmed to assume that what appears real is real. It is a powerful and automatic assumption. Consequently, simulations of people and environments easily deceive our Stone Age brains... We can't and don't overcome the assumption that what appears real is real, because we don't want to, don't need to, or don't gain anything by it (Fink, 1999). To Fink, we constantly experience the virtual, so virtual reality is just another technology that enables interaction and engagement that we experience as real, even if it may not be tangible, because it elicits a response from our brain and our bodies. Virtual reality is not entirely good or bad, but one of many virtualities in our lives. It seems productive to take the middle ground with Heim, who argues for virtual realism, which he defines as the pragmatic interpretation of virtual reality as a functional, nonrepresentational phenomenon that gains ontological weight through its practical applications. Virtual realism steers a course between the idealists who believe computerized life represents a higher form of existence and the down-to-earth realists who fear that computer simulations threaten ecological and local values (Heim, 1998, p. 220). We must avoid the pipedreams of transcendence and perfection that feed the fantasy of the Ultimate Display, but we also cannot discount virtual reality as just smoke and mirrors. Virtual entities are indeed real, functional, and even central to life in coming eras. Part of work and leisure life will transpire in virtual environments (Heim, 1998, p. 44). Heim goes on to describe several characteristics of what it means to practice this view, which include: criticism, avoiding exaggeration, seeing virtual worlds as parallel to the actual, not a replacement of it, and a pragmatic sense that realism in [virtual reality] results from pragmatic habituation, livability, and dwelling (Heim, 1998, p. 46). By coupling Fink's assessment of how humans psychologically construct the real with Heim's philosophical centrism, one can take on a truly realistic view of the virtual. Second Life is not what virtual reality purists would describe as an immersive virtual world, because it does not engage the user through virtual reality goggles or tactile interfaces. However, it still resides squarely in the discourse of virtual reality because it provides a high
11 Jones 11 level of interactivity and tele-presence within a parallel world that allows for the construction of place and self. Within Second Life, there is tangible value and meaning for its users, particularly by enabling them to build and create. Before discussing the world of Second Life and its avatars in more detail, it is important to place this virtual place and these bodies/selves within a discursive context. The Palace of Fates, English Gardens and Cyberspace Virtual worlds existed prior to the advent of computer technology conceptually and practically. Steinhart posits that Gottfried Wilhelm Leibniz envisioned a virtual reality system in his description of the Palace of the Fates in his Theodicy in which the narrator was shown the totality of all possible worlds organized within a series of halls and rooms (1997). In many respects, the totality of possible worlds is thus like a computer program, particularly a [virtual reality] program, and each possible world is like an execution path (Steinhart, 1997, p.134). Leibniz envisioned access to possible worlds virtual worlds organized upon and accessed by lines of causation based on changing variables (e.g. What if Lincoln was never assassinated?). Conceptually, virtual places have existed in the imagination in forms that resemble the contemporary for centuries. Stewart and Nicholls describe another virtual world, albeit one located in tangible reality. English gardens in the nineteenth century strove to create an ideal for natural beauty inspired by the paintings of landscape artists (Stewart & Nicholls, 2002, p. 91). In other words, the idea of real nature was informed more by the virtual image of the painter rather than actual nature, and gardeners designed spaces to reflect this virtual space within the natural world. Just as a painter need not be constrained by reality in creating the most natural landscape this can be eliminated, that can be added; this can be highlighted, that can be muted so a gardener/architect can craft a multi-perspectival view of the landscape (Stewart and Nicholls, 2002, p. 94). The boundaries between natural/artificial, real/virtual, even nature/culture were blurred by these spaces, but they still were experienced as real. The authors posit that computermediated virtual worlds can function in positive ways just as these artificial natural spaces engaged and inspired the English. We ought to be much less concerned about whether something is virtual or actual and more interested in the type of virtuality possessed by certain
12 Jones 12 actualities (Stewart and Nicholls, 2002, p. 96). This similarity between these metaphorical landscapes and virtual worlds resonates with Hillis as well (1999). While instances of virtual spaces have existed before, it is computer technology and its surrounding cultural constructs that formed our conception of virtual worlds of light, which exist in computer nodes where we too can place ourselves. In the discourse of literature and culture, William Gibson s Neuromancer established the cyberpunk genre and first defined the now ubiquitous term cyberspace (Neal Stephenson s Snow Crash, that has so inspired the creators of Second Life, is another example of cyberpunk writing). Hayles argues that Neuromancer follows discourses in cybernetics and information science, as it saw both personhood and place as made up of patterns of information (Hayles, 1999, p ). In terms of this discussion of space, it is the matrix that becomes the landscape of cyberspace. Cyberspace is created by transforming a data matrix into a landscape in which narratives can happen Narrative becomes possible when this spatiality is given a temporal dimension by the pov s [point of view s] movement through it (Hayles, 1999, p. 38). Landscape is made up of data, which is made up text and numbers, of language written through light. Out of that language, worlds are then created. The current Internet does not necessarily have this sense of landscape because it is made up primarily of text, sound and image. Three-dimensional virtual worlds more accurately depict Gibson s vision of a geography of cyberspace because it has a form that can be experienced more as a place. Virtual places have been a part of the actual and the imagined throughout history. In the current cultural context, computers and networks offer a new frontier to construct alternative worlds in more realer-than-real ways, akin to the English gardens two hundred years before. The historical context of the use of imagined and experienced virtual spaces, as well as cyberpunk dreams of making the cyberspace/matrix/metaverse into a reality, frames the imagination of Second Life s creators and users. In fact, Second Life takes the production of virtual spaces further by allowing the users to be the gardeners themselves, landscaping their world as they wish it to be. Cartesian Minds and Virtual Bodies Several avenues of discourse on the body in relation to virtual reality are valuable to discuss in light of virtual worlds: Cartesian mind/body dualism, the fragmentation of identities in postmodernity, and the avatar as a cultural construct and an extension of the body of the user.
Descartes saw a duality between an immaterial mind and a material body. To Descartes, the incorporeal mind interfaced with the body through the pineal gland in the brain, where it responded to the stimuli that the body presented to it through the senses (Descartes, 1646/1989, p. 36). In that sense the body functioned as a machine for the immortal soul. Descartes argues that humans are spirits that occupy a mechanical body, made of extended substance, and that the essential attributes of humans are exclusively attributes of the spirit (such as thinking, willing and conceiving), which do not involve the body at all (Burnham & Fieser, 2001). Further, Descartes believed that the human mind contained all truth but that the body limited the ability of human beings to perceive and find truth (Noble, 1997, p. 144). While the denial of the body was seen in Western thought before Descartes, he rationalized this idea through deduction, observation and logic: the tools of the Enlightenment.

Penny argues that Descartes's contributions greatly affected the formation of the discourse of virtual reality. The matrix that virtual worlds are mapped upon is a mathematical Cartesian grid (Penny, 1994, p. 236). Virtual reality also gives primacy to the eye because it is primarily experienced through screen technology. Most importantly, "[virtual reality] reinforces Cartesian duality by replacing the body with a body image, a creation of mind (for all objects in [virtual reality] are a product of mind). As such, it is a clear continuation of the rationalist dream of disembodied mind, part of the long Western tradition of the denial of the body... Augustine is the patron saint of cyberpunks" (Penny, 1994, p. 243). Virtual reality, then, becomes a means for the mind to rise above the corporeal body. "Virtual technologies encourage belief that they constitute a transcendence machine within which the imaginative self might escape its privatized physical anchor and live in an iconography of pleasure" (Hillis, 1999, p. 172). Morse describes this process of splitting from the body as an act of denial: "The seduction and playfulness of virtual reality are based on this very disparity between organic and virtual bodies... its power to erase the organic from awareness, if only partly and just for awhile" (Morse, 1994, p. 180).

Virtual worlds feed societal fantasies, developed within the mind/body discourse, of transcending the deficiencies of human flesh. Second Life, which allows complete customization of avatar bodies, promises to give users a second skin that can improve on the corporeal and be changed like a suit of clothes. If one buys into the mind/body duality, it is easy to be seduced into building the ideal body with a few mouse clicks and into holding that body in higher regard than one's own embodied flesh. The virtual body becomes the preferred vessel for the noncorporeal mind, which is the essence of self.

Hayles argues, however, that one cannot forget the body within cyberworlds. Rather, she sees virtual technologies, as well as other posthuman technologies, as challenging the boundaries that the Cartesian duality creates, not just between the mind and the body but between the person and the environment. "Only if one thinks of the subject as an autonomous self independent of the environment is one likely to experience panic [about losing the body]" (Hayles, 1999, p. 290). Rather, "it is not a question of leaving the body behind but rather of extending embodied awareness in highly specific, local, and material ways that would be impossible without electronic prosthesis" (Hayles, 1999, p. 291).

Me, Myself and Avatar: Fractured Identities and Schizoid Postmoderns

The Cartesian view of self strived for a unitive mind in control of its body and world. Contemporary understandings of the self, however, acknowledge that people play multiple roles in their lives (Suler, 2000). Yet there can be dangers in this fragmentation of self, particularly if one suffers from problems of integration at the start: "Without any principle of coherence, the self spins off in all directions. Multiplicity is not viable if it means a shifting among personalities that cannot communicate. Multiplicity is not acceptable if it means being confused to the point of immobility" (Turkle, 1995, p. 258). The ability to construct any avatar one wishes online amplifies this dissociation because of the anonymity it allows (Fink, 1999, p. 209). Hillis, to no surprise at this point, states that "if spatial and identity polyvalency are to be the pluralist norms in cyberspace, a resulting sense of unreality may promote extreme disorientation" (Hillis, 1999, p. 188). In the pursuit of losing one's body, one has the potential of losing one's mind as well.

Virtual worlds, like other technologies, can have positive and/or negative psychological effects on the user. The exploration of identities online can be beneficial as well as potentially harmful. Turkle sees potential for online worlds to function as a space for people to work out issues of identity through their avatar selves and their interaction with others (Turkle, 1995). Ford sees virtual worlds as an opportunity for the paralyzed, and others with disabilities, to interact online in potentially beneficial ways that are unavailable to them in actual life. The paralyzed user can interact in a virtual world where they will not be stereotyped, since they can eliminate the visible markers of disability that stigmatize them in real life (Ford, 2001).

Virtual reality theorist Brenda Laurel looks at the relationship of self to avatar as that of the actor to the role. Within the virtual world, avatar bodies and identities allow the user a type of agency, the ability to act within a representation (Hillis, 1999; Tofts, 2003). In user-created virtual worlds, this agency is only increased as actors, with fully articulated bodies, also become producers (and prop-makers and set designers) functioning from a Gibsonian pov, which "constitutes the character's subjectivity, by serving as a positional marker substituting for the absent body" (Hayles, 1999, p. 37). It is the performative self, embodied in flesh and pixels (melded in one of Haraway's cyborg configurations, or as a McLuhanian medium that extends the senses through technology), that engages as an actor in spaces where it expresses and interacts (Horrocks & Appignanesi, 2003; Graham, 2002). To return to Hillis's description of virtual reality as a postmodern technology: it is within this space that Second Life came into being, and it further challenges boundaries through its unique configuration as a place of creativity, interactivity, construction of self and tangible economy.
Second Life as the Evolution of Virtual Worlds

Second Life is a virtual world comprising, as of April 2005, 25,000 residents from more than 50 countries (Linden Labs, 2005d). By December 2005 the population of Second Life had reached more than 100,000 users, due in part to a change in policy that allows new users to obtain an initial basic membership for free, as well as to increased media coverage of the service (see Figure 2). Second Life grew not only out of a particular cultural discourse but also out of an ancestry of publicly available virtual worlds, marrying the user creativity and sociability of text-based Multi-User Dungeons/Domains (MUDs) with the graphic richness of Massively Multiplayer Online Role-Playing Games (MMORPGs). MUDs and MMORPGs contributed greatly to what Second Life is today and to what sets it apart from other currently available virtual worlds.

While differing significantly from Second Life in that they are text-based, many MUDs were, and continue to be, constructed primarily by their users (Lastowka & Hunter, 2004; Turkle, 1995). Bellman and Landauer saw this as a positive aspect of MUDs that at the time delineated them from other forms of entertainment: "Text-based MUDs allow people the freedom of word pictures, something we can't imitate in any graphical environment. Text-based MUDs have a much richer and more dynamic visual imagery than, say, movies or games, because it is customized by each player's imagination... [MUDs] are also great equalizers: all people can become builders in a very short amount of time. In fact we've seen examples of eight- or nine-year-old children, who were raised in inner cities and were nearly illiterate, become, within a short amount of time, able to build up a whole environment" (Bellman & Landauer, 2000, p. 101).
Further, MUDs allow users, through the use of text, to construct themselves as whomever they wish to be: "You can be whoever you want to be. You can completely redefine yourself if you want. You can be the opposite sex. You can be more talkative. You can be less talkative. Whatever" (Turkle, 1995, p. 184).

Figure 2. Partial land map, resident population (which has now surpassed 100,000), in-world population, and daily economic transactions in Second Life. Source: Second Life homepage <>. Accessed December 15, 2005.

MMORPGs provide users with graphically rich environments in which they take on the roles of characters within sword-and-sorcery fantasy or science fiction role-playing games. Woodcock estimates that, even excluding some of the large South Korean MMORPGs, there are more than 5 million active subscribers to MMORPGs worldwide, with 2 million of them participating in World of Warcraft, the largest American-based MMORPG, alone (2005). Other MMORPGs include Star Wars Galaxies, Ultima Online, Lineage, Everquest, EVE Online, and Project Entropia. MMORPGs are generally three-dimensional graphic spaces in which users have avatars that they control within an environment, a quality shared with Second Life. Unlike Second Life, though, these worlds are usually games where characters need to level (to gain experience by killing monsters or taking on quests in order to gain a higher rank) in order to grow in skills and power (Lastowka & Hunter, 2004). Despite the socializing that takes place in these D&D-type worlds, "the clear goal in each is to become a more powerful avatar" (Lastowka & Hunter, 2004, p. 27). Second Life, on the other hand, has no goal other than socializing, commerce and creativity. While many MMORPGs allow a certain type of crafting, that is, the creation of objects within the game, this content is designed by the game developers and is part of the larger player goal of becoming more powerful as an avatar (Ondrejka, 2004).

Second Life is also similar to three other major non-leveling online virtual worlds: Sims Online, There, and ActiveWorlds. Sims Online is similar in that its primary goal is socializing, buying, and building. Since it is based on The Sims computer game, however, its interface is cartoonish and lacks the fine control of other online worlds (Costas, 2003). Further, the opportunities for creativity are limited. There provides a similarly graphically rich world to Second Life's, including detailed avatars and lush three-dimensional geographies with physics and gravity (so balls drop and cars accelerate) (Kushner, 2004). While some customization can be done in terms of clothing and property (Baig, 2003), There differs from Second Life in that user-creation is not a significant component of its world, though it is available to some extent. As such, There could be considered more accessible, but Second Life remains the potentially more interesting virtual world (or Metaverse) of the two. Lastly, ActiveWorlds is the granddaddy of virtual worlds, developed out of a project called AlphaWorld in 1995 (mauz.info, 2005). ActiveWorlds is user-constructed, but it is made up of separate worlds owned by individuals rather than a connected, growing land. It also lacks certain levels of customization, scripting and a functioning economy (Oz, 2005). ActiveWorlds has been studied as a site for user creativity (Hudson-Smith & Schroeder, 2002) and education (Bailey & Moar, 2001), with mixed results.

It is the relative ease and power of building that greatly sets Second Life apart from others in the genre. The tools of Second Life have allowed a multiplicity of user-created forms based on the concept of atomistic construction, a concept that relies on simple, easy-to-manipulate pieces "that can be combined into large and complex creations" (Ondrejka, 2004, p. 90). Like the MUDs before it, Second Life's narrative space is also defined by text: a scripting language, working underneath the surface, gives behaviors to avatars and graphical objects; it is code, like DNA, for the objects and people in the world (Ondrejka, 2004). It is this combination of atomistic construction and scripting that has fostered the creativity of the residents of Second Life. (Snapzilla, a photo-sharing site for Second Life <>, holds abundant examples of the types of avatars, objects, and places being created in Second Life.)
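To make the role of scripting concrete, consider a minimal sketch in the Linden Scripting Language (LSL), the scripting language Second Life provides. This is the canonical hello-world pattern rather than anything drawn from the sources cited here, and the chat messages are invented for illustration; the point is that a script dropped into an object gives that object behavior of its own.

    // A minimal LSL sketch: greet in public chat when the script starts,
    // and respond when an avatar clicks (touches) the object.
    default
    {
        state_entry()
        {
            // Runs when the script starts; channel 0 is public chat.
            llSay(0, "Hello, avatars!");
        }

        touch_start(integer total_number)
        {
            // Runs when one or more avatars click the object.
            llSay(0, "You touched me.");
        }
    }

Because the script lives inside the object, a copy of the object carries its behavior with it, which is what makes the DNA analogy above apt.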
Csikszentmihalyi says that creativity is "a central source of meaning in our lives [and] when we are involved in it, we feel that we are living more fully than during the rest of life" (quoted in Hollister, 2005). Acts of creation in Second Life may not be the same as the majority of acts of creation in actual life (e.g., creating clothing in Second Life is done with graphics programs rather than thread and cloth), but they certainly involve skills, such as graphic design, three-dimensional modeling, and programming, that are also found in the actual world and take time to develop and master. Second Life creates a new type of producer-consumer ("prosumer"), similar to the thousands of people who are mixing their own music, making their own movies or publishing their own art or texts on the Internet. What is different is that this creation happens in a parallel virtual world bounded by a sense of geographical space and virtual community. However, the boundary between the actual and the virtual is not solid. Like the border of any other land, money flows back and forth across this demarcating line and brings exchange value to both sides.

Money Makes the (Virtual) World Go Round

Virtual worlds have a history of economic activity. Castronova studied the exchange of avatars as goods in Everquest. Because Everquest is a leveling game, some players are willing to buy powerful, previously played avatars from other players in order to avoid the often tedious work of killing weak monsters, carrying out simple quests and gathering treasure that low-level characters must do to gain strength in the world. He found that the average price for avatars on sale was $ (Castronova, 2004, p. 187). In another study, he found that "the economy of Norrath [the virtual world of Everquest] as a whole is slightly larger than that of Bulgaria. The effective hourly wage was $3.42 per hour, a figure significantly higher than the hourly wage of workers in India or China. Trade occurs regularly between Norrath and the United States, and foreign exchange between the Norrathian currency and the U.S. dollar is highly liquid as a result" (Lastowka & Hunter, 2004, p. 39).

While Second Life's economy is not as large, its currency, Linden Dollars, exchanged at approximately 266 Linden Dollars to the U.S. dollar as of December 2005 on the LindeX currency market on the Second Life website (Linden Labs, 2005a). Some inhabitants are already making more than $100,000 a year in real-world money by selling digital wares constructed inside the world or by running full-fledged role-playing games (Borland, 2005).

In basic terms, it works as follows: in Second Life, all users can purchase items using Linden Dollars (which they either receive as a stipend or buy on the currency exchanges) in stores and kiosks in the world where objects are displayed for sale. By hitting a pay button, the buyer transfers money to the seller and receives an item in return (e.g., a shirt, a vehicle or a weapon); the script sketch at the end of this section illustrates this exchange. A person who creates an object can set it for sale with various levels of permissions (whether the object can be copied, transferred or modified). In addition, users can buy land from Linden Labs or from private owners or developers. Users pay recurring fees on land over a certain acreage (like a property tax). So, goods and land exist in the world that have real asset value to the users of the system.

The debate over the value of virtual property is a larger issue, but the fact that it is going on at all, and that virtual properties hold real economic value for users of virtual worlds, is significant in terms of the construction of virtual worlds as a type of reality. Lastowka explored, in a recent California Law Review article, several conceptions of property drawn from the political philosophies of Bentham, Locke and Hegel (2004). Within the Western, capitalist tradition, property has been an important part of personhood. Government protects it through property law. It becomes an extension of one's self because people are judged by their clothes, their house and their car, among other things. By giving virtual avatars virtual property, particularly property that is created out of their own work (à la Locke), simulation, interactivity and meaning (through a sense of ownership and accomplishment) increase.
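To make the pay-button exchange described above concrete, here is a second LSL sketch in the same spirit as the first: a hypothetical single-item vendor. It is a sketch, not code from Linden Labs or from the sources cited here; the item name and price are invented, while llSetPayPrice, the money event, and llGiveInventory are standard LSL calls for accepting payment and delivering inventory.

    // Hypothetical vendor sketch: sell one inventory item at a fixed price.
    integer PRICE = 50;      // price in Linden Dollars (illustrative)
    string ITEM = "Shirt";   // an item stored in the object's inventory (illustrative)

    default
    {
        state_entry()
        {
            // Configure the pay dialog: a default amount and one quick-pay button.
            llSetPayPrice(PRICE, [PRICE, PAY_HIDE, PAY_HIDE, PAY_HIDE]);
        }

        money(key buyer, integer amount)
        {
            // Fires after an avatar pays the object; the Linden Dollars have
            // already been transferred to the object's owner at this point.
            if (amount >= PRICE)
            {
                llGiveInventory(buyer, ITEM);  // deliver the purchased item
                llSay(0, "Thank you for your purchase!");
            }
        }
    }

At the December 2005 rate of roughly 266 Linden Dollars to the U.S. dollar cited above, the L$50 item in this sketch would sell for about US$0.19, which gives a sense of how many individual transactions stand behind the six-figure real-world incomes Borland reports. The copy/transfer/modify permissions the seller sets on the delivered item are what distinguish, say, a one-off artwork from a freely copyable freebie.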