text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
What does AWS-SDK v3 mean for node.js Lambda? Best practice is about to get easier. Yesterday I found out about @aws-sdk — the Javascript version of the AWS-SDK — is hitting version 3! I’m a bit behind the curve as this has been in developer preview since the end of November. But you’d have to be some sort of mad man to keep track of every AWS release note… One of the things you find out when developing serverless functions is that you want to keep the deployment size small. You also learn that the aws-sdk is “just there” when you deploy a node.js function, so you can save some space by not bundling it. A little bit later you learn that the aws-sdk available to you is version 2.290.0, and that using library from August 2018 (and circa 200 versions behind the latest) will lead to missing features and unexpected behaviours. And so the “best practice” has been to deploy with the version (hopefully latest!) of the aws-sdk that you developed, built and tested your function against. The snag is that the aws-sdk is not all that light. The third iteration of the sdk aims to fix this by making the services truly modular. And in the spirit of adventure, I thought I’d check out just how well this was working. I’ve created a benchmarking repo (you can find it here:). Effectively it’s a simple “get from dynamo” lambda function, that bundles with webpack (so we can enjoy the wonders of tree shaking). v2/External: 743b (1x) This version “mocks” the aws-sdk, which would be the same as relying on the external (2.290.0) version that node.js lambda functions deploy with. This version of the function deploys in an impressive 743 bytes zip file. v2/Import: 573kb (771x) In this version we deploy with the full aws-sdk, relying on webpack’s treeshaking to get the deployment size down. It does ok, but inflates our function by a factor of 771, taking us up to 573 kilobytes. I also tried the import AWS from 'aws-sdk' variation of this to see if it would make much of a difference, but it turns out webpack was one step ahead of me and it’s exactly the same size. v2/Direct: 90kb (121x) It’s worth noting that version 2 of the SDK already supports importing specific clients directly. In the above approach I take the guesswork out for webpack and point it directly at the client I want to use. For my efforts I get the file down to a respectable 90 kilobytes (or a 121-fold increase from external). v3: 31kb (41x) Version 3 of the AWS-SDK (which apparently is packaged as the “v2 client”- it’s not up to us to understand the naming), defaults to modularised packages. However it also makes a modern node.js specific package available, which allows for additional optimisations. All this pays off when we get our package down to a very nice 31 kilobytes, just 41x the code size of nothing-at-all! Using the external (lit. built-in) aws-sdk in your javascript lambda functions can be very tempting. However StackOverflow is filled with the bones of those who have lost many an hour to debugging as a consequence of mismatched versions and missing features. Best practice therefore dictates that you should package the aws-sdk which you used to build and test the function. Previously people would baulk at including a 3mb library with every deployment. But thanks to the fantastic efforts of the JavaScript AWS-SDK team, this is becoming a much more palatable option. The lesson: Using AWS-SDK v3 allows you to significantly reduce the size of your node.js lambda functions. ed. after writing this I discovered that Yan Cui has done some much better research showing the time-cost of deploying with the aws-sdk, which is definitely worth a read too:
https://medium.com/tomincode/what-does-aws-sdk-v3-mean-for-node-js-lambda-67e249e52b53?utm_source=newsletter&utm_medium=email&utm_content=offbynone&utm_campaign=Off-by-none%3A%20Issue%20%2331
CC-MAIN-2019-18
en
refinedweb
Swift for TensorFlow Deep Learning Library Get a taste of protocol-oriented differentiable programming. This repository hosts Swift for TensorFlow's deep learning library, available both as a part of Swift for TensorFlow toolchains and as a Swift package. Usage This library is being automatically integrated in Swift for TensorFlow toolchains. You do not need to add this library as a Swift Package Manager dependency. Use Google Colaboratory Open an empty Colaboratory now to try out Swift, TensorFlow, differentiable programming, and deep learning. For detailed usage and troubleshooting, see Usage on the Swift for TensorFlow project homepage. Define a model Simply import TensorFlow to get the full power of TensorFlow. import TensorFlow let hiddenSize: Int = 10 struct Model:, activation: identity) @differentiable func applied(to input: Tensor<Float>) -> Tensor<Float> { return input.sequenced(through: layer1, layer2, layer3) } } Initialize a model and an optimizer let optimizer = SGD(for: model, learningRate: 0.02) var classifier = Model() Context.local.learningPhase = .training let x: Tensor<Float> = ... let y: Tensor<Float> = ... Run a training loop One way to define a training epoch is to use the Differentiable.gradient(in:) method. for _ in 0..<1000 { let 𝛁model = classifier.gradient { classifier -> Tensor<Float> in let ŷ = classifier.applied(to: x) let loss = softmaxCrossEntropy(logits: ŷ, labels: y) print("Loss: \(loss)") return loss } optimizer.update(&classifier.allDifferentiableVariables, along: 𝛁model) } Another way is to make use of methods on Differentiable or Layer that produce a backpropagation function. This allows you to compose your derivative computation with great flexibility. for _ in 0..<1000 { let (ŷ, backprop) = classifier.appliedForBackpropagation(to: x) let (loss, 𝛁ŷ) = ŷ.valueWithGradient { ŷ in softmaxCrossEntropy(logits: ŷ, labels: y) } print("Model output: \(ŷ), Loss: \(loss)") let 𝛁model = backprop(𝛁ŷ) optimizer.update(&classifier.allDifferentiableVariables, along: 𝛁model) } For more models, go to tensorflow/swift-models. Development Requirements - Swift for TensorFlow toolchain. - An environment that can run the Swift for TensorFlow toolchains: Linux 18.04 or macOS with Xcode 10. Building and testing $ swift build $ swift test Bugs Please report bugs and feature requests using GitHub issues in this repository. Community Discussion about Swift for TensorFlow happens on the swift@tensorflow.org mailing list. Contributing We welcome contributions: please read the Contributor Guide to get started. It's always a good idea to discuss your plans on the mailing list before making any major submissions. Code of Conduct. The Swift for TensorFlow community is guided by our Code of Conduct, which we encourage everybody to read before participating. Github Help us keep the lights on Dependencies Used By Total: 0
https://swiftpack.co/package/tensorflow/swift-apis
CC-MAIN-2019-18
en
refinedweb
Apollo Cache A guide to customizing and directly accessing your Apollo cache InMemoryCache apollo-cache-inmemory is the default. import { InMemoryCache } from 'apollo-cache-inmemory'; import { Apollo } from 'apollo-angular'; import { HttpLink } from 'apollo-angular-link-http'; @NgModule({ ... }) class AppModule { constructor( apollo: Apollo, httpLink: HttpLink ) { const cache = new InMemoryCache(); apollo.create({ link: httpLink.create(),]. - cacheResolves: A map of custom ways to resolve data from other parts of the cache. = new InMemoryCache({ dataIdFromObject: object => object.key }); This also allows you to use different unique identifiers for different data types by keying off of the __typename property attached to every object typed by GraphQL. For example: const cache = new InMemoryCache({ dataIdFromObject: object => { switch (object.__typename) { case 'foo': return object.key; // use `key` as the primary key case 'bar': return object.blah; // use `blah` as the priamry key default: return object.id || object._id; // fall back to `id` and `_id` for all other types } } }); Direct Cache AccessTo interact directly with your cache, you can use the Apollo Client class methods readQuery, readFragment, writeQuery, and writeFragment. These methods are available to us via the [`DataProxy` interface](). An instance of ApolloClient can be accessed by `getClient()` method of `Apollo` Service.: @Component({ ... }) class AppComponent { constructor(apollo: Apollo) { const { todo } = apollo.getClient().readQuery({ query: gql` query ReadTodo { todo(id: 5) { id text completed } } `, }); } }. @Component({ ... }) class AppComponent { constructor(apollo: Apollo) { const { todo } = apollo.getClient().readQuery({ query: gql` query ReadTodo($id: Int!) { todo(id: $id) { id text completed } } `, variables: { id: 5, }, }); } }: @Component({ ... }) class AppComponent { constructor(apollo: Apollo) { const todo = apollo.getClient().readFragment({ id: ..., // `id` is any id that could be returned by `dataIdFromObject`. fragment: gql` fragment myTodo on Todo { id text completed } `, }); } }: @NgModule({ ... }) class AppModule { constructor(apollo: Apollo) { apollo.create({ ..., // other options dataIdFromObject: object => object.id, }); } } ...and you requested a todo before with an id of 5, then you can read that todo out of your cache with the following: @Component({ ... }) class AppComponent { constructor(apollo: Apollo) { const todo = apollo.getClient().readFragment({ id: '5', fragment: gql` fragment myTodo on Todo { id text completed } `, }); } }: @Component({ ... }) class AppComponent { constructor(apollo: Apollo) { apollo.getClient().writeFragment({ id: '5', fragment: gql` fragment myTodo on Todo { completed } `, data: { completed: true, }, }); } } text completed } } `; @Component({ ... }) class AppComponent { constructor(apollo: Apollo) { const data = apollo.getClient().readQuery({ query }); const myNewTodo = { id: '6', text: 'Start using Apollo Client.', completed: false, }; apollo.getClient()! Server side rendering If you would like to learn more about server side rendering, please check our our more in depth guide here.
https://www.apollographql.com/docs/angular/basics/caching/
CC-MAIN-2019-18
en
refinedweb
Debugging tips Programs are like kids. You're at your home/office playing around, see how your baby grows, does all these nice things you taught it to do, and feel so proud when it succeeds. It reflects you so well, for the better & the worst. True, somethings it misbehave and make you regret that late drunken night it all started at, but at the end you're creation is all grown up and needs to go out to the world. Well all was nice & cozy back home, but its a jungle out there! Bullies will try to hurt it, viruses will try to kill it, and all those damn users just don't understand how to tread it right... So, when you release your baby out to the world, make sure its damn ready. And when it comes back home all banged up - patch it up fast all send it out there again, because mammy & daddy are working on a new one now and don't have time for this kind of crap. The Debug process The basic steps in debugging are: 1. Recognize that a bug exists 2. Isolate the source of the bug 3. Identify the cause of the bug 4. Determine a fix for the bug 5. Apply the fix and test it Pretty simple ahh? But we all know its not as easy as it sounds. The hardest parts are 2 & 3. This is where you bang your head at the table and regret the day you choose to be a programmer. Luckily, there are ways to soften that processes. At this doc, we'll go over some. Tools The difference between a pro and an amateur is that a pro knows how to use his tools. Its easier to kill a bug with an exterminator that with a flip-flop. But keep in mind its easier to kill it with the flip-flop than drive it over with a truck. And if you aim is off, non of them will help you. Choose you tools wisely, learn how to use them correctly. When used right, tools will help use complete the job faster & better. Loggers allow us to to see what a program is doing just by observing from outside. When logs are kept & well maintained, it will also allow us to see what the system did in the past. When something is logged, it is written into the corresponding log file, if the log level of the message is equal or higher than the configured log level. Most common log levels (ordered by severity) are :debug, :info, :warn, :error, :fatal, but most loggers will allow you to add your own levels & tags. Loggers affect programs performance in two aspects: - IO - Logs are written to disk. When IO is massive, it can become a bottle neck at your OS/Network, which can impact your program performance. - CPU/RAM - Strings are written to the log. If we convert complex objects to strings, it takes its toll on the OS. Reading a log should give you a clear indication of what is going on at the system. A log entry should be informative. It should be easy to understand: - Where in the code this log entry was created at. - Which system/module/process generated it. - When & in what order the events happen. - What exactly happened. You can achieve most of it by - Add the Module & Method name to the log entry - Print parameters & variables - Use log levels wisely. In order to minimize the Logger impact on your system: - Avoid logs in a loop - Avoid logs that require stringify/parsing of big data structures Lets consider the following code module MyMath def add(x, y) return (x.to_i + y.to_i) end end module MyCalc def awesome_calc(args) ... MyMath.add(x, y) ... end end Our program which uses the MyMath module, is misbehaving. Some users complained that sometimes the program returns the wrong result. After more questioning, we know the exact input the user entered. We are able reproduce the bug, but see no errors/exceptions raised. All we can to do is start probing around at the code, while running manual tests. How can we prevent, or at least reduce the debug cycle of such scenarios? Methods have input, and output. If we know what goes in & out of them, we can tell if they work properly. This is why often while coding & debugging we find ourselves adding prints to the code like so: # coverts 2 strings to integers and performs addition. def add(x, y) puts "x=#{x}" puts "y=#{y}" puts "x+y=#{(x.to_i + y.to_i)}" return (x.to_i + y.to_i) end Those kind of prints are very helpful while developing/debugging, but pretty annoying for when comes in masses. This is why prints are required to be deleted - in order to keep our system logs clean. Using loggers debug mode allows us to eat the cake and leave it whole - use our prints, but only when needed. def add(x, y) @logger.debug("MyMath.add(#{x}, #{y})") res = (x.to_i + y.to_i) @logger.debug("MyMath.add: res = #{res}") return res end Now by simply viewing the log we can pin point the error, looks like someone is misusing the the MyMath.add method and feeding it the wrong input. Apr 24 04:01:20 [debug]: MyMath.add(3, _2) Apr 24 04:01:20 [debug]: MyMath.add: res = 0 We can even improve that by adding a warn message def add(x, y) @logger.debug("MyMath.add(#{x}, #{y})") @logger.warn("MyMath.add: invalid input x=#{x}") unless valid_arg(x) @logger.warn("MyMath.add: invalid input y=#{y}") unless valid_arg(y) res = (x.to_i + y.to_i) @logger.debug("MyMath.add: res = #{res}") return res end Now by just viewing the logs, we can see the issue. Apr 24 04:01:20 [debug]: MyMath.add(3, _2) Apr 24 04:01:20 [warn]: MyMath.add: invalid input y=_2 Apr 24 04:01:20 [debug]: MyMath.add: res = 0 Debuggers allows us to see what a programs is doing from the inside. Different debugger have different capabilities but usually you can find the same basic features at each. Breakpoint allows stopping or pausing place in a program. A breakpoint consists of one or more conditions that determine when a program's execution should be interrupted. the basic condition is line number, but we can add more sophisticated conditions. once the programs is paused, we can follow each line/frame/block of code and see exactly whats going on. We can evaluate different variables, run queries, check performance... all in order to find out what our program state. Byebug, for example, is a great debugger for Ruby. see byebud_demo.rb for a live demo. require 'pry-byebug' def count_down(x) p "#{x}..." return true if x <= 0 count_down(x - 1) end byebug bomb = { sound: 'BOOM' } count_down(5) p bomb[:sound] Linters are programs that analyze code for potential errors. Simply put, a parser parses your code and looks for mistakes. It saves time & maintain code quality/safety. Most editors will allow us to integrate linters as plugins, but we can use them as stand alone to automate scripts of our own. Linters have a wide range of abilities: - Detect logical errors (such as missing a comma or misspell a variable name) - Detect code anti-patterns. - Detect syntax error. - Detect deprecated code usage. - Suggest code optimizations. - Some Linters will even do those fixes for you. For example, a linter will prevent the classic mistake of global variables at JS function (){ i=0; ... } or useless assignments at ruby obj = do_some_heavy_calc() ... obj = [] A validator is a program that can checks your web pages against the web standards. It is easy to miss a closing tag or misspell attribute, and very hard to find once its there. Most of the time it leads to visual bugs. Visual bugs can be hurt UX so badly that it will affect you app drastically (for example, a user that can't click a submit button of a form). HTML will work, somethings even look pretty good, when there are some major issues with the page structure. To make things even more challenging, different browsers deal with such invalid structure differently, so you page can look great on one, but totally deformed on other. Plus, Visual bugs are the ones makes you look bad & unprofessional just because there very easy to spot by the end user. The user doesn't thinks 'boy, I shouldn't view this site with Opera 9, maybe I'll switch to chrome 47'. He will probably think 'Man those guys are idiots over there' and move on to our competitors. Formatters are programs that analyze code, and fix them automatically according to style conventions. Formatters will remove all the annoying 'ugly code' for you. don't spend time on fixing indentations, white spaces, and match brackets. Focus on what you write, not who it looks. Since Linters warn about style issues, formatters help reduce lint errors too. Bugs are 99.99999% human error. someone wrote a piece of code that is misbehaving. Version Control keeps our code history. By reviewing the history we can tell who changed what & when. git commit In order to make the best of it, first we should be descriptive at our commits messages. Commit messages like git commit -am 'fix' are pretty useless... Messages should be informative: - What feature does this commit relate to? - Is there a task id that we use to manage our work? include that too. - Describe what changes does this commit bring to the project. - Make sure your username & email are set correctly git config -l - Keep commits small. commits that includes many code changes are hard to follow/describe. This way, our commit will look more like commit 0196feb6de1e75f44ca05ebb6f25bf235acc21b8 Author: Guy Yogev <[email protected]> Date: Wed May 4 12:17:42 2016 +0300 TASK-87 - Fix download buttun action. At User page, `download` button didn't work due to of missing 'id' parameter at request params. added user params to request. Now that our commits contains some actual useful data, how can we find the relevant commit? git log Displays latest commits to the repo. git blame Shows who was the last to make changes to the viewed code, and at which commit. git bisect Does a binary search on the commits. - git bisect start - start the bisect - git bisect bad - marks the current commit as bad, meaning the bug exists. - git bisect good <COMMIT_ID> - makes the commit as bug free, (for example the last stable version that was marked with the git tag cmd) - git bisect check out to the middle commit. Keep marking commits as good/bad till you find the commit that produced the bug. - remmber to use git bisect reset to go back to the starting point, or you'll leave the repo in a weird state. git diff Displays the diffrance between 2 commits / branches / files Use the --name-only option to list affected files instead of files content. git show When a lot of changes where done between the two targets, git diff can be too overwhelming. git show displays only changes from the given commit. --name-only works here too. Browsers dev tools All major browsers have developer-tools built-in. Its a wide range of tools such as debuggers, analyzers, recorders, viewers. diving into each one is beyond the scope of this doc. Generally speaking, those tools provide web developers deep access into the internals of the browser and their web application. It helps us efficiently - Track down layout issues - Debug JavaScript breakpoints - Get insights for code optimization via audits. Culture A single developer can work very fast & efficiently, but there is a limit. At some point the project is simply too big/complicated for a single developer to manage & support. Working as a team requires good communication & following agreed guidelines. As our software gets bigger & bigger, the code base increases. Vendors code is integrated & open source code is added. As the code evolves, it accumulate more and more bugs. - When a bug is found, minor or major, make sure it is tracked. - Minor bugs can be ignored for a while, but can also mutate into a more serious issue. - Open a task. Write everything you found relevant during your investigations for future usage. - Keep vendor/open source code updated. - Keep track on your vendors blogs / press releases. - View the change log of new releases. - Review the external projects open/close issues. - Upgrades can contain bug fixes. - Upgrades can introduce new bugs too, run sanity tests before committing code upgrades. Ok, we use all those awesome tools now know where the bug is generated. Time to pin point the issue and find a solution. Debugging is hard. We need to keep our mind sharp. - Take breaks - Find a quite place to work. - Ask not to be disturbed. In order to find a solution, we first need to define the problem. We all had that experience when a solution emerges while you try to explain the problem to someone else. You don't need their input, just someone to talk to aloud while you gather your thoughts. Studies that objects can be as effective as a humans for that purpose, So... why not talk to a rubber duck or a teddy bear? Well, even your duck wasn't very much helpful... Time to leverage other people knowledge & experience. - Decide for how long you're going to pursue leads on your own. - Keep in mind that others are busy too. Try to find leads at forums / blogs before approaching a co-worker. I find the 'Try everything from the first Google page' rule pretty useful. - Once you got someone attention, let them find their own way. - Explain what is wrong, not what you did. Let others find their own way. - Be patient, nobody likes helping a jackass. Bugs appear when the programs is in some state - Every time that state applies, the bug will appear. Sometimes it requires a few simple steps to put the system in the buggy state. A click of a button, or a flag change and you're done. In other cases, getting there can be quite a journey. You'll need to repeat multiple changes every time in order to see & test the bug. This is just white noise for the debugging process. Plus, when we have multiple actions to complete each run, it is easy to miss one, and get to a totally different state. Automation helps us to remain focused on the stuff that matter. We can reproduce/test our bug fast. - Write a script that will change your programs state. - Write an automated test for that bug. - Automate casual state changing tasks such as db seeds/resets/cleanups/backups. - Automate devOps tasks such as deployment, e2e/smoke tests. Code is almost never written just once. Most of the time, someone (maybe even you) will need to work on that piece of code at some point..” But why should you care? What’s wrong with code that just works? - 'Clean code' contains less bugs, easier to understand & debug. - 'Dirty code' is hard to handle & maintain, and in time will 'rot' due to abandonment. More tips: - The five-minute rule - if you find an ugly piece of code that can be fixed very fast, do it. - Always improve yourself. if you are not happy with your current work, improve it, or at least understand what is wrong. next time, do it better. Programs are divided into modules. Every module has its role and purpose. It helps you keep DRY principle and code quality. Some tasks can be quite complex, and won't be completed in one coding session. We want to make sure that if & when we drop or hand over a task to someone els, it will be easy to pickup. Overall, sound architecture also prevents bugs. When program parts are not well defined/written we start seeing mutant modules that does many things, code duplication, and spaghetti code. Before racing ahead and writing hundreds of lines of code, take some time an ponder about the task at hand. - How does it fit the 'big picture'. - What are the feature requirements exactly? - Do I have sufficient knowledge in order to complete the task, or need more time for research. - What are the feature components and how they interact with each other. - Can I reuse/expand any other modules? - Break the tasks into a road-map. Define were are the main break points. When you reach a break point, ask your self if you have the time to reach the next one or not. Code review is systematic examination of the source code. Its main goals are find developers mistakes before the code is deployed and mutate into a full grown bug. There are a few type of code reviews - Over-the-shoulder – one developer looks over the author's shoulder as the latter walks through the code. - Email pass-around – source code management system emails code to reviewers automatically after commits is made. - Pair programming – two authors develop code together at the same workstation, as it is common in Extreme Programming. - Tool-assisted code review – authors and reviewers use software tools specialized for peer code review. Studies show that lightweight reviews uncovered as many bugs as formal reviews, but were faster and more cost-effective. - Use tools (like version control) to see code changes. - Review the code while its fresh. - Sessions should be brief, no more than an hour. - Cover 200-400 line per session. - Cover everything. Configs, unit tests are code too. There are lots of reasons why we should automate tests. You can read more about it at a previous post here The main argument you hear against it is - 'it takes too much time'. Unit tests are usually an 'easy written' code, therefore, it is written much faster than production code. Studies show that the overhead on on the development process is around 10%. Well, let me assure you - debugging takes much longer. Resources
https://www.spectory.com/blog/Debugging%20tips
CC-MAIN-2019-18
en
refinedweb
BetweenBetween Between is a library for working with (time) intervals and the relations between them. It takes as a basis the thirteen relations of Allen's Interval Algebra. This is a system for reasoning about (temporal) intervals as described in the paper Maintaining Knowledge about Temporal Intervals. InstallationInstallation Between is published for Scala 2.13. To start using it add the following to your build.sbt: libraryDependencies += "nl.gn0s1s" %% "between" % "0.4.2" Example usageExample usage When the endpoints of an interval are known, the Interval[T] case class is available for testing all possible relations between intervals. It needs two values T which reflect the (inclusive) endpoints of an interval. For the type T there needs to be an implicit Ordering trait available. Additionally the endpoint -and +of type -needs to be smaller than the endpoint +. The example below shows two examples for Double and java.time.Instant: import nl.gn0s1s.between._ val i = Interval[Int](1, 2) // i: nl.gn0s1s.between.Interval[Int] = Interval(1,2) val j = Interval[Int](2, 3) // j: nl.gn0s1s.between.Interval[Int] = Interval(2,3) i meets j // res0: Boolean = true j metBy i // res1: Boolean = true val k = Interval[java.time.Instant](java.time.Instant.ofEpochSecond(1000L), java.time.Instant.ofEpochSecond(2000L)) // k: nl.gn0s1s.between.Interval[java.time.Instant] = Interval(1970-01-01T00:16:40Z,1970-01-01T00:33:20Z) val l = Interval[java.time.Instant](java.time.Instant.ofEpochSecond(1500L), java.time.Instant.ofEpochSecond(2500L)) // l: nl.gn0s1s.between.Interval[java.time.Instant] = Interval(1970-01-01T00:25:00Z,1970-01-01T00:41:40Z) k overlaps l // res2: Boolean = true l overlappedBy k // res3: Boolean = true RelationsRelations Given two intervals there is always only one of the following thirteen defined relations true: beforeand afterare also available as precedesand precededBy, respectively. finishesand finishedByare also available as endsand endedBy. There's a findRelation method which can be used to find out which relation exists between two intervals. The Relation has an inverse method implemented, which gives the inverse of a relation. import nl.gn0s1s.between._ val i = Interval[Int](1, 2) // i: nl.gn0s1s.between.Interval[Int] = Interval(1,2) val j = Interval[Int](2, 3) // j: nl.gn0s1s.between.Interval[Int] = Interval(2,3) val relationBetweenIAndJ = i.findRelation(j) // relationBetweenIAndJ: nl.gn0s1s.between.Relation = m relationBetweenIAndJ.inverse // res0: nl.gn0s1s.between.Relation = mi Additional methodsAdditional methods A number of additional methods are availabe on the Interval[T] case class, some of which may be familiar for users of the ThreeTen-Extra Interval class. abuts, checks if the interval abuts the supplied interval encloses, checks if the interval encloses the supplied interval enclosedBy, checks if the interval is enclosed by the supplied interval gap, returns the interval that is between this interval and the supplied interval intersection, returns the intersection of this interval and the supplied interval minus, returns the result of subtracting the supplied interval from this interval span, returns the smallest interval that contains this interval and the supplied interval union, returns the union of this interval and the supplied interval Some point related methods are: after, checks if the interval is after the supplied point before, checks if the interval is before the supplied point chop, chops this interval into two intervals that meet at the supplied point clamp, clamps a supplied point within the interval contains, checks if supplied point is within the interval endsAt, checks if the interval ends at the supplied point startsAt, checks if the interval starts at the supplied point with-, returns a copy of this interval with the supplied with+, returns a copy of this interval with the supplied ReasoningReasoning I got inspired to write this library during Eric Evans' talk at the Domain-Driven Design Europe 2018 conference. I started writing it in the train on my way back from the conference, this can be represented like this: write lib <-(o)- - train - -(>, mi)-> DDD Europe - -(di)-> EE talk <-(d) - - inspired Since the composition table of relations and the constraints method are implemented we can find out what the possible relations between write lib and DDD Europe are: import nl.gn0s1s.between._ Relation.constraints(Set(o), Set(<, m)) // res0: Set[nl.gn0s1s.between.Relation] = Set(<) ResourcesResources Allen's Interval Algebra: - Maintaining Knowledge about Temporal Intervals - Wikipedia entry - Thomas A. Alspaugh's Foundations Material on Allen's Interval Algebra - Moments and Points in an Interval-Based Temporal Logic Related links: - A Modal Logic for Chopping Intervals - SOWL QL: Querying Spatio - Temporal Ontologies in OWL - AsterixDB Temporal Functions: Allen’s Relations - Haskell package that does something similar for Haskell - LicenseLicense The code is available under the Mozilla Public License, version 2.0.
https://index.scala-lang.org/philippus/between
CC-MAIN-2022-21
en
refinedweb
I’ve been putting off writing about this for a while now, mostly because it’s such a huge topic. I’m not going to try to give more than a brief introduction to it here – don’t expect to be able to whip up your own LINQ to SQL implementation afterwards – but it’s worth at least having an idea of what happens when you use something like LINQ to SQL, NHibernate or the Entity Framework. Just as LINQ to Objects is primarily interested in IEnumerable<T> and the static Enumerable class, so out-of-process LINQ is primarily interested in IQueryable<T> and the static Queryable class… but before we get to them, we need to talk about expression trees. Expression Trees To put it in a nutshell, expression trees encapsulate logic in data instead of code. While you can introspect .NET code via MethodBase.GetMethodBody and then MethodBody.GetILAsByteArray, that’s not really a practical approach. The types in the System.Linq.Expressions define expressions in an easier-to-process manner. When expression trees were introduced in .NET 3.5, they were strictly for expressions, but the Dynamic Language Runtime uses expression trees to represent operations, and the range of logic represented had to expand accordingly, to include things like blocks. While you certainly can build expression trees yourself (usually via the factory methods on the nongeneric Expression class), and it’s fun to do so at times, the most common way of creating them is to use the C# compiler’s support for them via lambda expressions. So far we’ve always seen a lambda expression being converted to a delegate, but it can also convert lambdas to instances of Expression<TDelegate>, where TDelegate is a delegate type which is compatible with the lambda expression. A concrete example will help here. The statement: will be compiled into code which is effectively something like this: var one = Expression.Constant(1, typeof(int)); var addition = Expression.Add(parameter, one); var addOne = Expression.Lambda<Func<int, int>>(addition, new ParameterExpression[] { parameter }); The compiler has some tricks up its sleeves which allow it to refer to methods, events and the like in a simpler way than we can from code, but largely you can regard the transformation as just a way of making life a lot simpler than if you had to build the expression trees yourself every time. IQueryable, IQueryable<T> and IQueryProvider Now that we’ve got the idea of being able to inspect logic relatively easily at execution time, let’s see how it applies to LINQ. There are three interfaces to introduce, and it’s probably easiest to start with how they appear in a class diagram: Most of the time, queries are represented using the generic IQueryable<T> interface, but this doesn’t actually add much over the nongeneric IQueryable interface it extends, other than also extending IEnumerable<T> – so you can iterate over the contents of an IQueryable<T> just as with any other sequence. IQueryable contains the interesting bits, in the form of three properties: ElementType which indicates the type of the elements within the query (in other words, a dynamic form of the T from IQueryable<T>), Expression returns the expression tree for the query so far, and Provider returns the query provider which is responsible for creating new queries and executing the existing one. We won’t need to use the ElementType property ourselves, but we’ll need both the Provider and Expression properties. The static Queryable class We’re not going to implement any of the interfaces ourselves, but I’ve got a small sample program to demonstrate how they all work, imagining we were implementing most of Queryable ourselves. This static class contains extension methods for IQueryable<T> just as Enumerable does for IEnumerable<T>. Most of the query operators from LINQ to Objects appear in Queryable as well, but there are a few notable omissions, such as the To{Lookup, Array, List, Dictionary} methods. If you call one of those on an IQueryable<T>, the Enumerable implementations will be used instead. (IQueryable<T> extends IEnumerable<T>, so the extension methods in Enumerable are applicable to IQueryable<T> sequences as well.) The big difference between the Queryable and Enumerable methods in terms of their declarations is in the parameters: - The "source" parameter in Queryable is always of type IQueryable<TSource> instead of IEnumerable<TSource>. (Other sequence parameters such as the sequence to concatenate for Queryable.Concat are expressed as IEnumerable<T>, interestingly enough. This allows you to express a SQL query using "local" data as well; the query methods work out whether the sequence is actually an IQueryable<T> and act accordingly.) - Any parameters which were delegates in Enumerable are expression trees in Queryable; so while the selector parameter in Enumerable.Select is of type Func<TSource, TResult>, the equivalent in Queryable.Select is of type Expression<Func<TSource, TResult>> The big difference between the methods in terms of what they do is that whereas the Enumerable methods actually do the work (eventually – possibly after deferred execution of course), the Queryable methods themselves really don’t do any work: they just ask the query provider to build up a query indicating that they’ve been called. Let’s have a look at Where for example. If we wanted to implement Queryable.Where, we would have to: - Perform argument checking - Get the "current" query’s Expression - Build a new expression representing a call to Queryable.Where using the current expression as the source, and the predicate expression as the predicate - Ask the current query’s provider to build a new IQueryable<T> based on that call expression, and return it. It all sounds a bit recursive, I realize – the Where call needs to record that a Where call has happened… but that’s all. You may very well wonder where all the work is happening. We’ll come to that. Now building a call expression is slightly tedious because you need to have the right MethodInfo – and as Where is overloaded, that means distinguishing between the two Where methods, which is easier said than done. I’ve actually used a LINQ query to find the right overload – the one where the predicate parameter Expression<Func<T, bool>> rather than Expression<Func<T, int, bool>>. In the .NET implementation, methods can use MethodBase.GetCurrentMethod() instead… although equally they could have created a bunch of static variables computed at class initialization time. We can’t use GetCurrentMethod() for experimentation purposes, because the query provider is likely to expect the exact correct method from System.Linq.Queryable in the System.Core assembly. Here’s our sample implementation, broken up quite a lot to make it easier to understand: this IQueryable<TSource> source, Expression<Func<TSource, bool>> predicate) { if (source == null) { throw new ArgumentNullException("source"); } if (predicate == null) { throw new ArgumentNullException("predicate"); } Expression sourceExpression = source.Expression; Expression quotedPredicate = Expression.Quote(predicate); // This gets the "open" method, without specific type arguments. The second parameter // of the method we want is of type Expression<Func<TSource, bool>>, so the sole generic // type argument to Expression<T> itself has two generic type arguments. // Let’s face it, reflection on generic methods is a mess. MethodInfo method = typeof(Queryable).GetMethods() .Where(m => m.Name == "Where") .Where(m => m.GetParameters()[1] .ParameterType .GetGenericArguments()[0] .GetGenericArguments().Length == 2) .First(); // This gets the method with the same type arguments as ours MethodInfo closedMethod = method.MakeGenericMethod(new Type[] { typeof(TSource) }); // Now we can create a *representation* of this exact method call Expression methodCall = Expression.Call(closedMethod, sourceExpression, quotedPredicate); // … and ask our query provider to create a query for it return source.Provider.CreateQuery<TSource>(methodCall); } There’s only one part of this code that I don’t really understand the need for, and that’s the call to Expression.Quote on the predicate expression tree. I’m sure there’s a good reason for it, but this particular example would work without it, as far as I can see. The real implementation uses it though, so dare say it’s required in some way. EDIT: Daniel’s comment has made this somewhat clearer to me. Each of the arguments to Expression.Call after the MethodInfo itself is meant to be an expression which represents the argument to the method call. In our example we need an expression which represents an argument of type Expression<Func<TSource, bool>>. We already have the value, but we need to provide the layer of wrapping… just as we did with Expression.Constant in the very first expression tree I showed at the top. To wrap the expression value we’ve got, we use Expression.Quote. It’s still not clear to me exactly why we can use Expression.Quote but not Expression.Constant, but at least it’s clearer why we need something… EDIT: I’m gradually getting there. This Stack Overflow answer from Eric Lippert has much to say on the topic. I’m still trying to get my head round it, but I’m sure when I’ve read Eric’s answer several times, I’ll get there. We can even test that this works, by using the Queryable.AsQueryable method from the real .NET implementation. This creates an IQuerable<T> from any IEnumerable<T> using a built-in query provider. Here’s the test program, where FakeQueryable is a static class containing the extension method above: using System.Collections.Generic; using System.Linq; class Test { static void Main() { List<int> list = new List<int> { 3, 5, 1 }; IQueryable<int> source = list.AsQueryable(); IQueryable<int> query = FakeQueryable.Where(source, x => x > 2); foreach (int value in query) { Console.WriteLine(value); } } } This works, printing just 3 and 5, filtering out the 1. Yay! (I’m explicitly calling FakeQueryable.Where rather than letting extension method resolution find it, just to make things clearer.) Um, but what’s doing the actual work? We’ve implemented the Where clause without providing any filtering ourselves. It’s really the query provider which has built an appropriate IQueryable<T> implementation. When we call GetEnumerator() implicitly in the foreach loop, the query can examine everything that’s built up in the expression tree (which could contain multiple operators – it’s nesting queries within queries, essentially) and work out what to do. In the case of our IQueryable<T> built from a list, it just does the filtering in-process… but if we were using LINQ to SQL, that’s when the SQL would be generated. The provider recognizes the specific methods from Queryable, and applies filters, projections etc. That’s why it was important that our demo Where method pretended that the real Queryable.Where had been called – otherwise the query provider wouldn’t know what the call expression Just to hammer the point home even further… Queryable itself neither knows nor cares what kind of data source you’re using. Its job is not to perform any query operations itself; its job is to record the requested query operations in a source-agnostic manner, and let the source provider handle them when it needs to. Immediate execution with IQueryProvider.Execute All the operators using deferred execution in Queryable are implemented in much the same way as our demo Where method. However, that doesn’t cover the situation where we need to execute the query now, because it has to return a value directly instead of another query. This time I’m going to use ElementAt as the sample, simply because it’s only got one overload, which makes it very easy to grab the relevant MethodInfo. The general procedure is exactly the same as building a new query, except that this time we call the provider’s Execute method instead of CreateQuery. { if (source == null) { throw new ArgumentNullException("source"); } Expression sourceExpression = source.Expression; Expression indexExpression = Expression.Constant(index); MethodInfo method = typeof(Queryable).GetMethod("ElementAt"); MethodInfo closedMethod = method.MakeGenericMethod(new Type[] { typeof(TSource) }); // Now we can create a *representation* of this exact method call Expression methodCall = Expression.Call(closedMethod, sourceExpression, indexExpression); // … and ask our query provider to execute it return source.Provider.Execute<TSource>(methodCall); } The type argument we provide to Execute is the desired return type – so for Count, we’d call Execute<int> for example. Again, it’s up to the query provider to work out what the call actually means. It’s worth mentioning that both CreateQuery and Execute have generic and non-generic overloads. I haven’t personally encountered a use for the non-generic ones, but I gather they’re useful for various situations in generated code, particularly if you really don’t know the element type – or at least only know it dynamically, and don’t want to have to use reflection to generate an appropriate generic method call. Transparent support in source code One of the aspects of LINQ which raises it to the "genius" status (and "slightly scary" at the same time) is that most of the time, most developers don’t need to make any changes to their source code in order to use Enumerable or Queryable. Take this query expression and its translation: where person.LastName == "Skeet" select person.FirstName; // Translation var query = family.Where(person => person.LastName == "Skeet") .Select(person => person.FirstName); Which set of query methods will that use? It entirely depends on the compile-time type of the "family" variable. If that’s a type which implements IQueryable<T>, it will use the extension methods in Queryable, the lambda expression will be converted into expression trees, and the type of "query" will be IQueryable<string>. Otherwise (and assuming the type implements IEnumerable<T> isn’t some other interesting type such as ParallelEnumerable) it will use the extension methods in Enumerable, the lambda expressions will be converted into delgeates, and the type of "query" will be IEnumerable<string>. The query expression translation part of the specification has no need to care about this, because it’s simply translating into a form which uses lambda expressions – the rest of overload resolution and lambda expression conversion deals with the details. Genius… although it does mean you need to be careful that really you know where your query evaluation is going to take place – you don’t want to accidentally end up performing your whole query in-process having shipped the entire contents of a database across a network connection… Conclusion This was really a whistlestop tour of the "other" side of LINQ – and without going into any of the details of the real providers such as LINQ to SQL. However, I hope it’s given you enough of a flavour for what’s going on to appreciate the general design. Highlights: - Expression trees are used to capture logic in a data structure which can be examined relatively easily at execution time - Lambda expressions can be converted into expression trees as well as delegates - IQueryable<T> and IQueryable form a sort of parallel interface hierarchy to IEnumerable<T> and IEnumerable – although the queryable forms extend the enumerable forms - IQueryProvider enables one query to be built based on another, or executed immediately where appropriate - Queryable provides equivalent extension methods to most of the Enumerable LINQ operators, except that it uses IQueryable<T> sources and expression trees instead of delegates - Queryable doesn’t handle the queries itself at all; it simply records what’s been called and delegates the real processing to the query provider I think I’ve now covered most of the topics I wanted to mention after finishing the actual Edulinq implementation. Next up I’ll talk about some of the thorny design issues (most of which I’ve already mentioned, but which bear repeating) and then I’ll write a brief "series conclusion" post with a list of links to all the other parts. 9 thoughts on “Reimplementing LINQ to Objects: Part 43 – Out-of-process queries with IQueryable” The idea is that the static methods on Queryable record an expression tree that represents how they were called. An expression tree normally represents a delegate. But Queryable.Where() wasn’t called with a delegate – it was called with an expression tree. Thus we need to go to the meta-meta-level: build an expression tree that describes an expression tree that describes the lambda. This is what the Expression.Quote() operator is doing: it goes one meta-level higher. A normal expression tree is “code as data”. A quoted expression tree is “‘code as data’ as data”. @Daniel: I think I see what you mean. I’ve added an edit (find “EDIT:” in the text after reloading) to explain this as best I can. See if it sounds like what you meant :) I had some problems understanding Quote too, here is a little example using curryfication: 1- Simple expression: Exp<Func<int, Func>> exp = a => b => a + b; if you compile it should create this: Func<int, Func> f = exp.Compile(); f(2)(3) -> 5 2.- Quoting expression: Exp<Func<int, Exp<Func>>> exp = a => b => a + b; When you write this the compiler has to notate somehow that the second lambda is an expression, by quoting (like scheme quoting) Exp<Func<int, Exp<Func>>> exp2 = a => ‘b => a + b’; if you compile it should create this: Func<int, Expression<Func>> f2 = exp2.Compile(); f(2) –> returns an expression like ‘b=>2 + b’ and then you could do: f(2).Compile()(3) -> then you have the 5 again. 3.- Why LINQ needs this: When you call Expression.Lambda method the Expression returned has TDelegate as its Type (expression Type), in order to make it Expression you need to quote, otherwise it will fail creating the Expression.Call. 4.- Why it didn’t fail to Jon Skeet then? Because the private method Expression.ValidateAccessorArgumentTypes makes the quoting for you in case you forget. I think this is what makes Quoting harder to understand! Hope it helped. Hi Jon, Just a minor thing… In the comment that reads: ‘This gets the “open” method’ I think it should say ‘This gets the “where” method’. @Matt: No, the idea is that it gets the “open” definition of the Where method, in terms of open/closed generics. Jon: I think this StackOverflow question has the explanation you’re looking for: – it has a long detailed answer by Eric that pretty much explains the difference between Constant and Quote. However I think his summary is enough to understand it: Quotes and constants have different meanings and therefore have different representations in an expression tree. Having the same representation for two very different things is extremely confusing and bug prone. @configurator: Thanks, I’ve added another edit to mention that answer within the post. I just love LINQ, IQueryable and the expression infrastructure that allows creation of dynamic delegates at runtime; once you master this part of the .NET framework, a whole new world of possibilities becomes possible. High performance cached delegates crafted for specific type scenarios? Check. Writing a LINQ provider, spending hours looking at expression trees etc. is definitely one of the most interesting and rewarding experiences I’ve had with the .NET framework in the past years. For those interested in seeing a complete implementation of a query provider I highly recommend this series by Matt Warren: The entire series goes through all of the nitty gritty details of implementing a provider. Enjoy.
https://codeblog.jonskeet.uk/2011/02/20/reimplementing-linq-to-objects-part-43-out-of-process-queries-with-iqueryable/
CC-MAIN-2020-05
en
refinedweb
Exporting Variations MOSS 2007 had an OOB capability of exporting a publishing page to a .cmp file to enable offline translations scenarios. In case your translation team does an offline translation of the WCM site, the exported .cmp contained the required xmls for translation and you could import the translated xmls back to the destination variation. This was done through the “Export Variation” – “Import Variation” option under Site content and Structure. From my initial analysis I did not find the same UI in SharePoint Server 2010 to export variations, however this can be achieved by using the following web service. Code extract from MSDN. using System; using System.IO; using System.Collections.Generic; using System.Text; using System.Net; //Replace macro with "(your assembly name).(name of your Web reference)" using PublishingServiceClient = Microsoft.SDK.SharePoint.Server.Samples.PublishingServiceClient; namespace Microsoft.SDK.SharePoint.Server.Samples { class Program { static void Main(string[] args) { // Create a Web reference (named "PublishingServiceClient" here)which generates a SOAP Client class of the same name. // URL to the Web service is given by "" // Access the Web service methods using objects of this class. PublishingServiceClient.PublishingService publishingServiceClient = new PublishingServiceClient.PublishingService(); //Create credentials for the user accessing the Web service. The export requires site administrator privileges. //Use default credentials if the user running client has required rights at the server. //Otherwise, explicitly create Credentials using new System.Net.Credentials("username","passwd", "domain"). publishingServiceClient.Credentials = System.Net.CredentialCache.DefaultCredentials; //Replace webUrl with the url of the site you want to export. string webUrl = " Home/Variation Site"; if (!string.IsNullOrEmpty(webUrl)) { //Invoke the SOAP Client Method. byte[] compressedFileContent = publishingServiceClient.ExportObjects(webUrl); File.WriteAllBytes(Path.GetTempFileName(), compressedFileContent); //Uncompress the file for translation. } ...
https://docs.microsoft.com/en-us/archive/blogs/jojok/exporting-variations
CC-MAIN-2020-05
en
refinedweb
src-d/jgit-spark-connector[web search] README.md engine engine is a library for running scalable data retrieval pipelines that process any number of Git repositories for source code analysis. It is written in Scala and built on top of Apache Spark to enable rapid construction of custom analysis pipelines and processing large number of Git repositories stored in HDFS in Siva file format. It is accessible both via Scala and Python Spark APIs, and capable of running on large-scale distributed clusters. Current implementation combines: - src-d/enry to detect programming language of every file - bblfsh/client-scala to parse every file to UAST - src-d/siva-java for reading Siva files in JVM - apache/spark to extend DataFrame API - eclipse/jgit for working with Git .pack files Quick-start First, you need to download Apache Spark somewhere on your machine: $ cd /tmp && wget "" -O spark-2.2.1-bin-hadoop2.7.tgz The Apache Software Foundation suggests you the better mirror where you can download Spark from. If you wish to take a look and find the best option in your case, you can do it here. Then you must extract Spark from the downloaded tar file: $ tar -C ~/ -xvzf spark-2.2.1-bin-hadoop2.7.tgz Binaries and scripts to run Spark are located in spark-2.2.1-bin-hadoop2.7/bin, so should set PATH and SPARK_HOME to point to this directory. It's advised to add this to your shell profile: $ export SPARK_HOME=$HOME/spark-2.2.1-bin-hadoop2.7 $ export PATH=$PATH:$SPARK_HOME/bin Look for the latest engine version, and then replace in the command where [version] is showed: $ spark-shell --packages "tech.sourced:engine:[version]" # or $ pyspark --packages "tech.sourced:engine:[version]" Run bblfsh daemon. You can start it easily in a container following its quick start guide. If you run engine in an UNIX like environment, you should set the LANG variable properly: export LANG="en_US.UTF-8" The rationale behind this is that UNIX file systems don't keep the encoding for each file name, they are just plain bytes, so the Java API for FS looks for the LANG environment variable to apply certain encoding. Either in case the LANG variable wouldn't be set to a UTF-8 encoding or it wouldn't be set at all (which results in handle encoding in C locale) you could get an exception during the engine execution similar to java.nio.file.InvalidPathException: Malformed input or input contains unmappable characters. Pre-requisites - Scala 2.11.x - Apache Spark Installation 2.2.x or 2.3.x - bblfsh >= 2.5.0: Used for UAST extraction Python pre-requisites: - Python >= 3.4.x (engine is tested with Python 3.4, 3.5 and 3.6 and these are the supported versions, even if it might still work with previous ones) libxml2-devinstalled python3-devinstalled g++installed Examples of engine usage engine is available on maven central. To add it to your project as a dependency, For projects managed by maven add the following to your pom.xml: <dependency> <groupId>tech.sourced</groupId> <artifactId>engine</artifactId> <version>[version]</version> </dependency> For sbt managed projects add the dependency: libraryDependencies += "tech.sourced" % "engine" % "[version]" In both cases, replace [version] with the latest engine version Usage in applications as a dependency The default jar published is a fatjar containing all the dependencies required by the engine. It's meant to be used directly as a jar or through --packages for Spark usage. If you want to use it in an application and built a fatjar with that you need to follow these steps to use what we call the "slim" jar: With maven: <dependency> <groupId>tech.sourced</groupId> <artifactId>engine</artifactId> <version>[version]</version> <classifier>slim</classifier> </dependency> Or (for sbt): libraryDependencies += "tech.sourced" % "engine" % "[version]" % Compile classifier "slim" If you run into problems with io.netty.versions.properties on sbt, you can add the following snippet to solve it: In sbt: assemblyMergeStrategy in assembly := { case "META-INF/io.netty.versions.properties" => MergeStrategy.last case x => val oldStrategy = (assemblyMergeStrategy in assembly).value oldStrategy(x) } pyspark Local mode Install python-wrappers is necessary to use engine from pyspark: $ pip install sourced-engine Then you should provide the engine's maven coordinates to the pyspark's shell: $ $SPARK_HOME/bin/pyspark --packages "tech.sourced:engine:[version]" Replace [version] with the latest engine version Cluster mode Install engine wrappers as in local mode: $ pip install -e sourced-engine Then you should package and compress with zip the python wrappers to provide pyspark with it. It's required to distribute the code among the nodes of the cluster. $ zip <path-to-installed-package> ./sourced-engine.zip $ $SPARK_HOME/bin/pyspark <same-args-as-local-plus> --py-files ./sourced-engine.zip pyspark API usage Run pyspark as explained before to start using the engine, replacing [version] with the latest engine version: $ $SPARK_HOME/bin/pyspark --packages "tech.sourced:engine:[version]" Welcome to spark version 2.2.1 Using Python version 3.6.2 (default, Jul 20 2017 03:52:27) SparkSession available as 'spark'. >>> from sourced.engine import Engine >>> engine = Engine(spark, '/path/to/siva/files', 'siva') >>> engine.repositories.filter('id = "github.com/mingrammer/funmath.git"').references.filter("name = 'refs/heads/HEAD'").show() +--------------------+---------------+--------------------+ | repository_id| name| hash| +--------------------+---------------+--------------------+ |github.com/mingra...|refs/heads/HEAD|290440b64a73f5c7e...| +--------------------+---------------+--------------------+ Scala API usage You must provide engine as a dependency in the following way, replacing [version] with the latest engine version: $ spark-shell --packages "tech.sourced:engine:[version]" To start using engine from the shell you must import everything inside the tech.sourced.engine package (or, if you prefer, just import Engine and EngineDataFrame classes): scala> import tech.sourced.engine._ import tech.sourced.engine._ Now, you need to create an instance of Engine and give it the spark session and the path of the directory containing the siva files: scala> val engine = Engine(spark, "/path/to/siva-files", "siva") Then, you will be able to perform queries over the repositories: scala> engine.getRepositories.filter('id === "github.com/mawag/faq-xiyoulinux"). | getReferences.filter('name === "refs/heads/HEAD"). | getAllReferenceCommits.filter('message.contains("Initial")). | select('repository_id, 'hash, 'message). | show +--------------------------------+-------------------------------+--------------------+ | repository_id| hash| message| +--------------------------------+-------------------------------+--------------------+ |github.com/mawag/...|fff7062de8474d10a...|Initial commit| +--------------------------------+-------------------------------+--------------------+ Supported repository formats As you might have seen, you need to provide the repository format you will be reading when you create the Engine instance. Although the documentation always uses the siva format, there are more repository formats available. These are all the supported formats at the moment: siva: rooted repositories packed in a single .sivafile. standard: regular git repositories with a .gitfolder. Each in a folder of their own under the given repository path. bare: git bare repositories. Each in a folder of their own under the given repository path. Processing local repositories with the engine There are some design decisions that may surprise the user when processing local repositories, instead of siva files. This is the list of things you should take into account when doing so: - All local branches will belong to a repository whose id is. So, if you clone /home/foo/bar, you will see two repositories github.com/foo/bar, even if you only have one. - Remote branches are transformed from refs/remote/$REMOTE_NAME/$BRANCH_NAMEto refs/heads/$BRANCH_NAMEas they will only belong to the repository id of their corresponding remote. So refs/remote/origin/HEADbecomes refs/heads/HEAD. Playing around with engine on Jupyter You can launch our docker container which contains some Notebooks examples just running: docker run --name engine-jupyter --rm -it -p 8080:8080 -v $(pwd)/path/to/siva-files:/repositories --link bblfshd:bblfshd srcd/engine-jupyter You must have some siva files in local to mount them on the container replacing the path $(pwd)/path/to/siva-files. You can get some siva-files from the project here. You should have a bblfsh daemon container running to link the jupyter container (see Pre-requisites). When the engine-jupyter container starts it will show you an URL that you can open in your browser. Using engine directly from Python If you are using engine directly from Python and are unable to modify the PYTHON_SUBMIT_ARGS you can copy the engine jar to the pyspark jars to make it available there. cp engine.jar "$(python -c 'import pyspark; print(pyspark.__path__[0])')/jars" This way, you can use it in the following way: import sys pyspark_path = "/path/to/pyspark/python" sys.path.append(pyspark_path) from pyspark.sql import SparkSession from sourced.engine import Engine siva_folder = "/path/to/siva-files" spark = SparkSession.builder.appName("test").master("local[*]").getOrCreate() engine = Engine(spark, siva_folder, 'siva') Development Build fatjar Build the fatjar is needed to build the docker image that contains the jupyter server, or test changes in spark-shell just passing the jar with --jars flag: $ make build It leaves the fatjar in target/scala-2.11/engine-uber.jar Build and run docker to get a Jupyter server To build an image with the last built of the project: $ make docker-build Notebooks under examples folder will be included on the image. To run a container with the Jupyter server: $ make docker-run Before run the jupyter container you must run a bblfsh daemon: $ make docker-bblfsh If it's the first time you run the bblfsh daemon, you must install the drivers: $ make docker-bblfsh-install-drivers To see installed drivers: $ make docker-bblfsh-list-drivers To remove the development jupyter image generated: $ make docker-clean Run tests engine uses bblfsh so you need an instance of a bblfsh server running: $ make docker-bblfsh To run tests: $ make test To run tests for python wrapper: $ cd python $ make test Windows support There is no windows support in enry-java or bblfsh's client-scala right now, so all the language detection and UAST features are not available for the windows platform. Code of Conduct License Apache License Version 2.0, see LICENSE
https://jaytaylor.com/notes/node/1534532202000.html
CC-MAIN-2020-05
en
refinedweb
Multi-byte String manipulation functions. More... #include "config.h" #include <stddef.h> #include <ctype.h> #include <stdbool.h> #include <wchar.h> #include <wctype.h> Go to the source code of this file. Multi-byte String manipulationbyte.h. Count the bytes in a (multibyte) character. Definition at line 55 of file mbyte.c. Replace unprintable characters. Unprintable characters will be replaced with ReplacementChar. Definition at line 424 of file mbyte.c. Turn a name into initials. Take a name, e.g. "John F. Kennedy" and reduce it to initials "JFK". The function saves the first character from each word. Words are delimited by whitespace, or hyphens (so "Jean-Pierre" becomes "JP"). Definition at line 84 of file mbyte.c. Will this character corrupt the display? Definition at line 390 of file mbyte.c. Does a multi-byte string contain only lowercase characters? Non-alphabetic characters are considered lowercase. Definition at line 358 of file mbyte.c. Is character not typically part of a pathname. Definition at line 344 of file mbyte.c. Convert a string from multibyte to wide characters. Definition at line 295 of file mbyte.c. Convert a string from wide to multibyte characters. Definition at line 237 of file mbyte.c. Measure the screen width of a string. Definition at line 196 of file mbyte.c. Measure the screen width of a character. Definition at line 178 of file mbyte.c. Measure a string's display width (in screen columns) This is like wcwidth(), but gets const char* not wchar_t*. Definition at line 139 of file mbyte.c. Keep the end of the string on-screen. Given a string and a width, determine how many characters from the beginning of the string should be skipped so that the string fits. Definition at line 217 of file mbyte.c.
https://neomutt.org/code/mbyte_8h.html
CC-MAIN-2020-05
en
refinedweb
ASP.NET MVC - Passing. 1. public class Record 2. { 3. public int Id { get; set; } 4. public string RecordName { get; set; } 5. public string RecordDetail { get; set; } 6. 7. }. 1. public ActionResult Index() 2. { 3. Record rec = new Record 4. { 5. Id = 101, 6. RecordName = "Bouchers", 7. RecordDetail = "The basic stocks" 8. }; 9. ViewBag.Message = rec; 10. return View(); 11. } Add a View for an Index Action by right clicking on it. Give a name to it and select Add button. First of all import the model class. Assign viewbag into a variable and all the properties will be in place, using the variable and Razor block. 1. @using PassDatainMVC.Models 2. 3. @{ 4. ViewBag.Title = "Index"; 5. } 6. 7. <h3>Passing Data From Controller to View using ViewBag</h3> 8. @{ 9. var data = ViewBag.Message; 10. } 11. <h3>Id: @data.Id</h3> 12. <h3>RecordName: @data.RecordName</h3> 13. <h3>RecordDetail: @data.RecordDetail</h3> Build and run your Application. You will get ViewBag Data. The other way of passing the data from Controller to View is ViewData. Also, a dictionary type object is similar to ViewBag. There are no huge changes in Controller and ViewData contains key-value pairs. 1. public ActionResult Index() 2. { 3. Record rec = new Record 4. { 5. Id = 101, 6. RecordName = "Bouchers", 7. RecordDetail = "The basic stocks" 8. }; 9. ViewData["Message"] = rec; 10. return View(); 11. } Access your model class when you are using ViewData, as shown below. 1. @using PassDatainMVC.Models 2. @{ 3. ViewBag.Title = "Index"; 4. } 5. <h3>Passing Data From Controller To View using ViewData</h3> 6. @{ 7. var data = (Record)ViewData["Message"]; 8. } 9. <h3>Id: @data.Id</h3> 10. <h3>RecordName: @data.RecordName</h3> 11. <h3>RecordDetail: @data.RecordDetail</h3>. 1. public ActionResult Index() 2. { 3. Record rec = new Record 4. { 5. Id = 101, 6. RecordName = "Bouchers", 7. RecordDetail = "The basic stocks" 8. }; 9. return View(rec); 10. } Import the binding object of model class at the top of Index View and access the properties by @Model. 1. @using PassDatainMVC.Models 2. @model PassDatainMVC.Models.Record 3. @{ 4. ViewBag.Title = "Index"; 5. } 6. <h3>Passing Data From Controller To View using Model Class Object</h3> 7. 8. <h3>Id: @Model.Id</h3> 9. <h3>RecordName: @Model.RecordName</h3> 10. <h3>RecordDetail: @Model.RecordDetail</h3>. 1. public ActionResult CheckTempData() 2. { 3. TempData["data"] = "I'm temprory data to used in subsequent request"; 4. return RedirectToAction("Index"); 5. } Acccess TempData in Index.Chtml view is given. 1. <h3>Hey! @TempData["data"]</h3> Run the Application and call the respective action method. TempData uses an internal session to store the data. I hope, you liked this article. Stay tuned with me for more on ASP.NET MVC, Web API and Microsoft Azure.
https://tutorialslink.com/Articles/ASPNET-MVC-Passing-Data-From-Controller-To-View/953
CC-MAIN-2020-05
en
refinedweb
Contextual Menu Actions in the Design ModeEdit online The contextual menu of the Design mode includes the following actions: Go to Definition () - Shows the definition for the current selected component. For references, this action is available by clicking the arrow displayed in its bottom right corner. Open Schema () - Opens the selected schema. This action is available for xsd:import, xsd:includeand xsd:redefineelements. If the file you try to open does not exist, a warning message is displayed and you have the possibility to create the file. Edit Attributes () - Allows you to edit the attributes of the selected component in a small in-place editor that presents the same attributes as in the Attributes view and the Facets view. The actions that can be performed on attributes in this dialog box are the same actions presented in the two views. - Append child - Offers a list of valid components, depending on the context, and appends your selection as a child of the currently selected component. You can set a name for a named component after it has been added in the diagram. - Insert before - Offers a list of valid components, depending on the context, and inserts your selection before the selected component, as a sibling. You can set a name for a named component after it has been added in the diagram. - Insert after - Offers a list of valid components, depending on the context, and inserts your selection after the selected component, as a sibling. You can set a name for a named component after it has been added in the diagram. - New global - Inserts a global component in the schema diagram. This action does not depend on the current context. If you choose to insert an import you have to specify the URL of the imported file, the target namespace and the import ID. The same information, excluding the target namespace, is requested for an xsd:includeor xsd:redefineelement.Note: If the imported file has declared a target namespace, the field Namespace is completed automatically. - Edit Schema Namespaces - When performed on the schema root, it allows you to edit the schema target namespace and namespace mappings. You can also invoke the action by double-clicking the target namespace property from Attributes view for the schema or by double-clicking the schema component. - Allows you to edit the annotation for the selected schema component in the Edit Annotations dialog box. You can perform the following operations in the dialog box: Annotations are rendered by default under the graphical representation of the component. When you have a reference to a component with annotations, these annotations are also presented in the diagram below the referenced component. To edit the annotations, use the Edit Annotations action from the contextual menu. If the reference component does not have annotations, you can edit the annotations of the referenced component by double-clicking the annotations area. Otherwise, you can edit the referenced component annotations only if you go to the definition of the component.Note: For imported/included components that do not belong to the currently edited schema, the Edit Annotations dialog box presents the annotation as read-only. To edit its annotation, open the schema where the component is defined. - Edit all appinfo/documentationitems for a specific annotation - All appinfo/documentationitems for a specific annotation are presented in a table and can be easily edited. Information about an annotation item includes: type (documentation/appinfo), content, source (optional, specify the source of the documentation/appinfoelement) and xml:lang. The content of a documentation/appinfoitem can be edited in the Content area below the table. - Insert/Insert before/Remove documentation/appinfo. The Add button allows you to insert a new annotation item ( documentation/appinfo). You can add a new item before the item selected in table by pressing the Insert Before button. Also, you can delete the selected item using the Remove button. - Move items up/down - to do this use the Move up and Move down buttons. - Insert/Insert before/Remove annotation - Available for components that allow multiple annotations such as schemas or redefines. - Specify an ID for the component annotation. An optional identifier for the annotation. - Extract Global Element Action that is available for local elements. A local element is made global and is replaced with a reference to the global element. The local element properties that are also valid for the global element declaration are kept. Figure 1: Extracting a Global Element If you use the Extract Global Element action on a nameelement, the result is: Figure 2: Extracting a Global Element on a nameElement - Extract Global Attribute - Action available for local attributes. A local attribute is made global and replaced with a reference to the global attribute. The properties of local attribute that are also valid in the global attribute declaration are kept. Figure 3: Extracting a Global Attribute If you use the Extract Global Attribute action on a noteattribute, the result is: Figure 4: Extracting a Global Attribute on a noteAttribute - Extract Global Group - Action available for compositors (sequence, choice, all). This action extracts a global group and makes a reference to it. The action is available only if the parent of the compositor is not a group. Figure 5: Extracting a Global Group If you use the Extract Global Group action on the sequenceelement, the Extract Global Component dialog box is displayed and you can choose a name for the group. If you type personGroup, the result is: Figure 6: Extracting a Global Group on a sequenceElement - Extract Global Type - Action used to extract an anonymous simple type or an anonymous complex type as global. For anonymous complex types, the action is available on the parent element. Figure 7: Extracting a Global Simple Type If you use the action on the unioncomponent and choose numericSTfor the new global simple type name, the result is: Figure 8: Extracting a Global Simple Type on a unionComponent Figure 9: Extracting a Global Complex Type If you use the action on a personelement and choose person_typefor the new complex type name, the result is: Figure 10: Extracting a Global Complex Type on a personElement Rename Component in - Rename the selected component. Cut - Cut the selected component(s). Copy - Copy the selected component(s). Copy XPath This action copies an XPath expression that identifies the selected element or attribute in an instance XML document of the edited schema and places it in the clipboard. Paste - Paste the component(s) from the clipboard as children of the selected component. - Paste as Reference - Create references to the copied component(s). If not possible a warning message is displayed. - Remove (Delete) - Remove the selected component(s). - Override component - Copies the overridden component in the current XML Schema. This option is available for xs:overridecomponents. - Redefine component - The referenced component is added in the current XML Schema. This option is available for xs:redefinecomponents. - Optional - Can be performed on element/attribute/group references, local attributes, elements, compositors, and element wildcards. The minOccursproperty is set to 0 and the useproperty for attributes is set to optional. - Unbounded - Can be performed on element/attribute/group references, local attributes, elements, compositors, and element wildcards. The maxOccursproperty is set to unboundedand the useproperty for attributes is set to required. - Search - Can be performed on local elements or attributes. This action makes a reference to a global element or attribute. Search References - Searches all references of the item found at current cursor position in the defined scope if any. - Search References in - Searches all references of the item found at current cursor position in the specified scope. - Search Occurrences in File - Searches all occurrences of the item found at current cursor position in the current file. Component Dependencies - Opens the Component Dependencies view that allows you to see the dependencies for the current selected component. - Resource Hierarchy - Opens the Resource Hierarchy / Dependencies view that allows you to see the hierarchy for the current selected resource. - Flatten Schema - Recursively adds the components of included Schema files to the main one. It also flattens every imported XML Schema from the hierarchy. - Resource Dependencies - Allows you to see the dependencies for the current selected resource. Expand All - Recursively expands all sub-components of the selected component. Collapse All - Recursively collapses all sub-components of the selected component. - Save as Image - Save the diagram as image, in JPEG, BMP, SVG or PNG format. Generate Sample XML Files - Generate XML files using the current opened schema. The selected component is the XML document root. See more in the Generate Sample XML Files section. Options - Show the Schema preferences panel.
https://www.oxygenxml.com/doc/versions/21.1/ug-editor/topics/contextual-menu-actions.html
CC-MAIN-2020-05
en
refinedweb
Block until a thread terminates #include <sys/neutrino.h> int ThreadJoin( int tid, void** status ); int ThreadJoin_r( int tid, void** status ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The ThreadJoin() and ThreadJoin_r() kernel calls block until the thread specified by tid terminates. If status isn't NULL, the functions save the thread's exit status in the area pointed to by status. If the thread tid has already terminated, the functions immediately return with success and the status, if requested. These functions are identical except in the way they indicate errors. See the Returns section for details. When ThreadJoin() returns successfully, the target thread has been successfully terminated. Until this occurs, the thread ID tid isn't reused and a small kernel resource (a thread object) is retained. You can't join a thread that's detached (see ThreadCreate() and ThreadDetach()). The target thread must be joinable. Multiple pthread_join(), pthread_timedjoin(), ThreadJoin(), and ThreadJoin_r() calls on the same target thread aren't allowed. Blocking states The only difference between these functions is the way they indicate errors:
http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.neutrino.lib_ref/topic/t/threadjoin.html
CC-MAIN-2020-05
en
refinedweb
Android Developers Docs Reference androidx.test.annotation Kotlin |Java Annotations Beta Signifies that a public API (public class, method or field) is subject to incompatible changes, or even removal, in a future release. UiThreadTest Methods annotated with this annotation will be executed on the application's UI thread (or main thread). Annotations Beta UiThreadTest
https://developer.android.com/reference/androidx/test/annotation/package-summary.html?authuser=0
CC-MAIN-2020-05
en
refinedweb
sealed_generator 1.0.1 ᶘ ᵒᴥᵒᶅ SealedCodegen - 'when' operator generator for @sealed classes # Getting started # - Add these dependencies dev_dependencies: build_runner: ^1.0.0 sealed_generator: 1.0.1 - Mark any class you want with @sealed annotation (meta: ^1.1.7) and add part ' import 'package:meta/meta.dart'; part 'result.g.dart'; @sealed class Result<T> with SealedResult<T> {} class Success<T> extends Result<T> { T value; Success(this.value); } - run: flutter packages pub run build_runner build Generator will create a class OriginalClassNameSealed for you to use class SealedResult<T> { R when<R>({ @required R Function(Success<T>) success, @required R Function(Failure<T>) failure, }) { if (this is Success<T>) { return success(this as Success<T>); } if (this is Failure<T>) { return failure(this as Failure<T>); } throw new Exception( 'If you got here, probably you forgot to regenerate the classes? Try running flutter packages pub run build_runner build'); } } - Add with(or extends) to your sealed class, for e.g. class Result extends(with) ResultSealed Using # Just create an instance of you sealed class and call when on it, for example: var resultWidget = result.when( success: (event) => Text(event.value), failure: (event) => Text("Failure"), idle: (event) => Text("idle"), ); And that's it, you are ready to use sealed classses with some sort of when It is a very early version of the library, mostly a proof of concept, so contributions are highly welcomed 1.0.1 # - Fix version resolving 1.0.0 # - Initial release import 'dart:math'; import 'package:flutter/material.dart'; import 'package:sealed_demo/result> { Result<String> result = Idle(); void _changeState() { setState(() { var value = Random().nextInt(3); if (value == 0) { result = Failure(); } else if (value == 1) { result = Success<String>("Value from success"); } else if (value == 2) { result = Idle(); } }); } @override Widget build(BuildContext context) { var resultWidget = result.when( success: (event) => Text(event.value), failure: (event) => Text("Failure"), idle: (event) => Text("idle"), ); return Scaffold( appBar: AppBar( title: Text(widget.title), ), body: Center( child: Column( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ resultWidget, ], ), ), floatingActionButton: FloatingActionButton( onPressed: _changeState, tooltip: 'Increment', child: Icon(Icons.add), ), // This trailing comma makes auto-formatting nicer for build methods. ); } } Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: sealed_generator: ^1_generator/sealed_generator) 2 out of 2 API elements have no dartdoc comment.Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API. Maintenance issues and suggestions No valid SDK. (-20 points) The analysis could not detect a valid SDK that can use this package. Support latest dependencies. (-20 points) The version constraint in pubspec.yaml does not support the latest published versions for 2 dependencies ( analyzer, recase).
https://pub.dev/packages/sealed_generator
CC-MAIN-2020-05
en
refinedweb
28 of file io/CStream.h. #include <mrpt/io/CStream.h> Used in CStream::Seek. Definition at line 32 of file io/CStream.h. Reads from the stream until a ' ' character is found ('' characters are ignored). Definition at line 69 of file CStream.cpp. Method for getting the current cursor position, where 0 is the first byte and TotalBytesCount-1 the last one.OutputStream, mrpt::io::CFileGZInputStream, mrpt::io::CFileGZOutputStream, and mrpt::io::CFileInputStream. Returns the total amount of bytes in the stream.GZOutputStream, mrpt::io::CFileOutputStream, mrpt::io::CFileGZInputStream, and mrpt::io::CFileInputStream. Writes a string to the stream in a textual form. Definition at line 30 of file CStream.cpp. References MRPT_END, MRPT_START, and mrpt::system::os::vsnprintf(). Referenced by mrpt::hmtslam::CTopLCDetector_GridMatching::computeTopologicalObservationModel(), mrpt::apps::MonteCarloLocalization_Base::do_pf_localization(), printf_vector(), mrpt::apps::CGridMapAlignerApp::run(), mrpt::apps::RBPF_SLAM_App_Base::run(), and mrpt::apps::ICP_SLAM_App_Base::run(). Prints a vector in the format [A,B,C,...] using CStream::printf, and the fmt string for each vector element T. Definition at line 102GZInputStream, mrpt::io::CFileOutputStream, mrpt::io::CFileGZOutputStream, mrpt::io::CFileInputStream, mrpt::comms::CClientTCPSocket, mrpt::io::CFileStream, and mrpt::io::CMemoryStream. Referenced by mrpt::io::zip::decompress(), mrpt::maps::CPointsMapXYZI::loadFromKittiVelodyneFile(), and ReadBufferImmediate()..GZInputStream, mrpt::io::CFileOutputStream, mrpt::io::CFileGZOutputStream, mrpt::comms::CClientTCPSocket, mrpt::io::CFileInputStream, mrpt::io::CFileStream, and mrpt::io::CMemoryStream.
https://docs.mrpt.org/reference/devel/classmrpt_1_1io_1_1_c_stream.html
CC-MAIN-2020-05
en
refinedweb
Hello, I know this is being asked a lot and there is some code out there but It doesn't seem to work for me. It works on one page but doesn't work on the results page with the same search bar. The search bar is in the header that is repeated throughout the site. This is the code I have on the home page.. import {local} from 'wix-storage'; import wixLocation from 'wix-location'; $w.onReady(function () { }); export function searchButton_click() { let word = $w("#searchBar").value; local.setItem("searchWord", word); wixLocation.to(`/results`); } This is what I have on the results page that has a search bar as well.() { search(); } function search() { wixData.query('AdvertisingServices') .contains('service', $w("#searchBar").value) .or(wixData.query('AdvertisingServices').contains('title', $w("#searchBar").value)) .find() .then(res => { $w('#allServices').data = res.items; }); } Thank you so much for your help.
https://www.wix.com/corvid/forum/community-discussion/search-on-enter-key-press
CC-MAIN-2020-05
en
refinedweb
Home of Joel Cochran and Jim Burnett. The reality of .NET is that with the thousands of classes available, there is simply too much to know. No one can be an expert in everything, so we frequently hit the Search Engines looking for help and solutions to our problems. Hopefully, this blog can help with that. Topics will be simple solutions to common problems, some of them will be cries for help themselves. In either case, everything posted here will be from Joel and Jim’s real world experiences. (Recent Post: Upgrade your C# Skills part 3 – Lambda Expressions)!] [Update: The C# articles from this blog have moved to C# 411.] ) Thanks! 😀 Great List… thanks for sharing… […] …and a few non-C# blogs courtesy of the folks at DevTopics. […] […] Best C# Blogs (via DotNetKicks) […] […] DevTopics links us to some of the Best C# Blogs. […] […] programming magazines, blogs and web […] There are some other blogs also have look and post your feedback […] I posted a list of the Best C# Blogs. Today we recognize the best C# Web […] […] Best C# blogs. These icons link to social bookmarking sites where readers can share and discover new web pages. […] […] the folks who brought you Best C# Blogs, a List of best C# Web Sites. By Tim […] The Official C# Online.NET Weblog () is the blog for a unique wiki based resource for C# developers. Check it out! I learned more about .NET and C# from Ayende’s blog than from any other single source that I know of. Here is my new blog about .NET, C#, user controls reviews, rendering and other .NET stuff. It could be useful… Great blogs for techies !!! Thanks. I think that you forgot a blog : C# 411 I found it today and I think it is a good explained blog. C# sucks;vb rocks:) good Great, I love it Very good summary, I found a couple of new blogs I haven’t read 🙂 Wow….nice Collection 😉 […] advance in their careers. So naturally .NET programmers will gravitate toward .NET magazines and websites, etc. And so a generalist magazine like Dr. Dobbs will have little use in our daily work […] Excellent collection of blogs for C# developers. { CommissionedEmployee[] salespeople = {new CommissionedEmployee(“Bob”), new CommissionedEmployee(“Ted”), new CommissionedEmployee(“Sally”)}; Employee[] employees = (Employee[])salespeople.Clone(); foreach (Employee person in employees) { person.Pay(); } } } public class Employee { public Employee(string name) { m_Name = name; } public virtual void Pay() { Console.WriteLine(“Paying {0}”, m_Name); } private string m_Name; } public class CommissionedEmployee : Employee { public CommissionedEmployee(string name) : base(name) { } public override void Pay() { base.Pay(); Console.WriteLine(“Paying commissions”); plese help me how to make console project C# about “Hashing” Closed / Chained Addressing (use array not linked list) please…please… […] de programacion Java API’s de Java El API de la version 6 SE de Java C# Los mejores blogs de C# … en construcción …. This entry was posted in bases de datos, manuales and […] I acquired most of my C# knowledge by reading through groups, blogs, wiki @ public class Building { } public class Home : Building { } public class Testing { Building _home = new Home(); } Can any one explain what is Building _home = new Home(); in detailed ,regarding memory and creation of object after this statement ie Building _home = new Home(); Thanx in advance @Skr – Um, wrong place to ask! But a short answer is memory consumption = Home. But when you use _home you can only use methods from Building (at least without re-casting). Nice list. I frequent coding horror and Scott Gu’s. I have my own blog (who doesn’t it seems) take a look. I just create new blog for my career softwaredeveloperhouse.blogspot.com feel free to view 🙂 Sadly Charlie is no longer a Community Manager at Microsoft and his blog is no longer updated. I got a chance to meet Charlie in the flesh before he retired from Microsoft. His presence is really missed amongst the community. Maybe it is time to update this post. Hopefully OmegaMan’s musings will still make the cut. 😉 Anyone is expert on auto poster? Hi I got an error in this manner with a scrolbar control.. System.ArgumentOutOfRangeException: Value of ‘-1978350742’ is not valid for ‘Value’. ‘Value’ should be between ‘minimum’ and ‘maximum’. When i am scrolling with keyboard controls and after that when i am clicking in the rigt side of the control got this type error…Can anyone say why it is happend? VB sucks;C# rocks 🙂 I have three gridview with different no of columns like 8,6 and 2 !! so when i export them to excel the column width is not showing proper format !! plz help Thanks for the list of blogs, they were good. Oh! But you forgot to mention mine! Thanks for the post! C# Programming
http://www.csharp411.com/best-c-blogs/
CC-MAIN-2020-05
en
refinedweb
KHTML DOM::CSSException Class ReferenceThis exception is raised when a specific CSS operation is impossible to perform. More... #include <css_stylesheet.h> Detailed DescriptionThis exception is raised when a specific CSS operation is impossible to perform. Definition at line 173 of file css_stylesheet.h. Member Enumeration Documentation - Enumerator: - Definition at line 189 of file css_stylesheet.h. Constructor & Destructor Documentation Definition at line 176 of file css_stylesheet.h. Definition at line 177 of file css_stylesheet.h. Definition at line 182 of file css_stylesheet.h. Member Function Documentation Definition at line 179 of file css_stylesheet.h. Member Data Documentation An integer indicating the type of error generated. Definition at line 187 of file css_stylesheet.h. The documentation for this class was generated from the following file:
https://api.kde.org/3.5-api/kdelibs-apidocs/khtml/html/classDOM_1_1CSSException.html
CC-MAIN-2020-05
en
refinedweb
Django Lookup Dict is a django app that enables you use a django model the Python dict way Project description A django app that allows you to use a djanog model created by the app with a python dict-like operators. Useful for storing configuration variables Hello, world Simple Demo: from django_lookup_dict import LookupDict lookup = LookupDict() lookup['hello'] = 'world' print "Lookup for Hello is : {0}".format(lookup['hello']) How To: Setting a value, regular assigningment with the square bracket operator [ ]: lookup['hello'] = 'world' Retrieving a value, using square bracket operator [ ]: lookup['hello'] Key count: len(lookup) Deletin, using the del and using square bracket operators: del lookup['hello'] Deleting certain keys: # lookup.delete(*args) lookup.delete('key1', 'key2', 'key3') Installation Automatic installation: pip install django_lookup_dict Manual installation: Download the latest source from GitHub. tar xvzf django_lookup_dict-[VERSION].tar.gz cd django_lookup_dict-[VERSION] python setup.py build sudo python setup.py install After Instalation: - Add ‘django_lookup_dict’ to INSTALLED_APPS in your django project’s settings. - Run ‘python manage.py syncdb’ in order to create the data storage for the model. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/django_lookup_dict/
CC-MAIN-2020-05
en
refinedweb
How Can I Use Laravel Envoy or Deployer with SemaphoreCI? This article was peer reviewed by Wern Ancheta and Viraj Khatavkar. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!. We will be using SemaphoreCI for continuous delivery and Deployer to push our code to the DigitalOcean production server. If you’re not familiar with Deployer, we recommend you check out this introduction. Demo Application We’ll be using a 500px application that loads photos from the marketplace. It was built using Laravel and you can read the full article about its building process here, and find the repo on GitHub. Creating a Deployer Script The way Deployer works is by us defining servers, and then creating tasks that handle the process of deploying the application to those servers. Our deploy.php script looks like this: <?php require_once "recipe/common.php"; set('ssh_type', 'native'); set('default_stage', 'staging'); env('deploy_path', '/var/www'); env('composer_options', 'install --no-dev --prefer-dist --optimize-autoloader --no-progress --no-interaction');'); server('digitalocean', '174.138.78.215') ->identityFile() ->user('root') ->stage('staging'); task('deploy:upload', function() { $files = get('copy_dirs'); $releasePath = env('release_path'); foreach ($files as $file) { upload($file, "{$releasePath}/{$file}"); } }); task('deploy:staging', [ 'deploy:prepare', 'deploy:release', 'deploy:upload', 'deploy:shared', 'deploy:writable', 'deploy:symlink', 'deploy:vendors', 'current',// print current release number ])->desc('Deploy application to staging.'); after('deploy:staging', 'success'); You should read the Deployer article if you’d like to learn more about what this specific script does. Our next step is to set up a SemaphoreCI project. Please read the crash course article if you’ve never tried SemaphoreCI before, and do that. Setting up Deployment To configure the deployment strategy, we need to go to the project’s page and click Set Up Deployment. Next, we select the generic deployment option, so that SemaphoreCI gives us the freedom to add manual configuration. After selecting automatic deployment, SemaphoreCI will give us the ability to specify deployment commands. The difference between manual and automatic, is that automatic deployment is triggered after every successful test, while manual will let us deploy any successful commit. We can choose to include the deployer.phar in our repo as a PHAR file or require it using Composer. Either way, the commands will be similar. If we chose to deploy the application using SSH, SemaphoreCI gives us the ability to store our SSH private key on their servers and make it available in the deployment phase. Note: SemaphoreCI recommends that we create a new SSH key specifically for the deployment process. In case someone stole our keys or something, we can easily revoke it. The key will also be encrypted before storing it on their end. The key will be available under ~/.ssh/id_rsa, so the identityFile() can be left at the default. Push to Deploy Now that everything is set up, we need to commit some changes to the repository to trigger the integration and deployment process. // Edit something git add . git commit -am "Updated deploy" git push origin master If something went wrong, we can click on the failed deploy process and see the logs to investigate the problem further. The above screenshot is a failed commit due to the php artisan clear-compiled command returning an error because the mcrypt extension wasn’t enabled. Note: Another neat trick that SemaphoreCI provides is SSHing to the build server to see what went wrong. Other Deployment Tools The same process we used here may be applied to any other deployment tool. Laravel Envoy, for example, might be configured like this: @servers(['web' => 'root@ip-address']) @task('deploy', ['on' => 'web']) cd /var/www @if($new) {{-- If this is the first deployment --}} git init git remote add origin repo@github.git @endif git reset --hard git pull origin master composer update composer dumpautoload -o @if($new) chmod -R 755 storage php artisan storage:link php artisan key:generate @endif php artisan migrate --force php artisan config:clear php artisan route:clear php artisan optimize php artisan config:cache php artisan route:cache php artisan view:clear @endtask And in the deployment command step, we would install and run Envoy: cd /var/www composer global require "laravel/envoy=~1.0" envoy run deploy That’s it! Envoy will now authenticate with the key we’ve added and run the update command we specified. Conclusion CI/CD tools are a great improvement to a developer’s workflow, and certainly help teams integrate new code into production systems. SemaphoreCI is a great choice that I recommend for its easy to use interface and its wonderful support. If you have any comments or questions, please post them below!
https://www.sitepoint.com/how-can-i-use-laravel-envoy-or-deployer-with-semaphoreci/?utm_source=rss
CC-MAIN-2020-05
en
refinedweb
Hide Forgot From Bugzilla Helper: User-Agent: Mozilla/5.0 Galeon/1.2.5 (X11; Linux i686; U;) Gecko/20020713 Description of problem: The following package in the url is the package. If you go to the jpackage.org site and examine the spec file you will notice they have two packages. ant-optional and ant-optional-full. The ant-optional-full contains the same libraries as ant-optional full, different filenames, but the full has more compiled into it. Since jpackage is providing the same package as ant-optional in there rpm namespace with ant-optional-full they put the provides: ant-optional tag. Make sense. Now you don't want to have them both install at the same time so they put in a Conflicts: ant-optional. When you go to do an install you get the following : [pearcec@mp3 a]$ sudo rpm -Uvh ant-optional-full-1.5-4jpp.noarch.rpm error: failed dependencies: ant-optional conflicts with ant-optional-full-1.5-4jpp rpm should smart enough to allow this during the install. Some flag in the code. If the package installing provides and conflicts the same package it should allow it to install. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: The description above should be enough to understan the problem. Additional info: Yup, that's the way conflicts is supposed to work. Some means other than "smarter rpm" and Conflicts: is needed to package the minimal -> full upgrade path as described. In fact, I suspect that adding Obsoletes: to each package to eliminate the other is closer to the desired behavior.
https://bugzilla.redhat.com/show_bug.cgi?id=72689
CC-MAIN-2019-09
en
refinedweb
When programs are written, they commonly require the assistance of libraries which contain part of the functionality they require to run. Programs could, in principle, be written without invoking functions from other libraries, but that would dramatically increase the amount of source code for even the simplest programs as they would need to contain their own copies of all the necessary basic functions which are readily available in libraries provided by either the operational system or by third parties. This redundancy would also have the negative effect of forcing the developers responsible for a given project to update their code whenever bugs are found on these commonly used functions. When a program is compiled, it can use functions present in a given available library by linking this library directly to itself either statically or dynamically. When a library is statically linked to a program, its binary contents are incorporated into that program during compilation time. In other words, the library becomes part of the binary version of the program. The linking process is done by a program called "linker" (on Linux, that program is usually ld). This post focuses on the case where a library is only dynamically linked to a program. In this case, the contents of the linked library will not become part of the program. Instead, when the program is compiled, a table containing the required symbols (e.g. function names) which it needs to run is created and stored on the compiled version of the program (the "executable"). This library is called the "dynamic symbol table" of the program. When the program is executed, a dynamic linker is invoked to link the program to the dynamic (or "shared") libraries which contain the definitions of these symbols. On Linux, the dynamic linker which does this job is ld-linux.so. When a program is executed, ld-linux.so is first loaded inside the address space of the process and then it loads all the dynamic libraries required to run the program (I will not describe the process in detail, but the more curious reader can find lots of relevant information about how this happens in this page). It is only after the required dynamic libraries are loaded that the program actually starts running. When a program is compiled, the path to the dynamic linker (the "interpreter") it requires to run is added to its .interp section (a description of each ELF section of a program can be found here). To make this clear, compile this very simple C program: #include <stdio.h> int main () { printf("Hello, world!\n"); return 0; } with the command: gcc main.c -o main Now get the contents of the .interp section of the executable main: readelf -p .interp main The output should be similar to this: String dump of section '.interp': [ 0] /lib64/ld-linux-x86-64.so.2 In my system, /lib64/ld-linux-x86-64.so.2 is a symbolic link to the executable file /lib/x86_64-linux-gnu/ld-2.19.so. For the curious reader, I recommend you execute the equivalent file in your system and read what it displays. Having an idea of how the dynamic libraries are loaded, the question which comes to mind is: what are the symbols which a program requires from dynamically linked libraries to run? The answer can be obtained in many different ways. One common way to get that information is through objdump: objdump -T <program-name> For the executable main from above, the output should be similar to this: main: file format elf64-x86-64 DYNAMIC SYMBOL TABLE: 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 puts 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main 0000000000000000 w D *UND* 0000000000000000 __gmon_start__ The output above shows a very curious fact: even to print a simple string "Hello, world!", a dynamic library is necessary, namely the GNU C Library (glibc), since the definition of the functions puts and __libc_start_main are needed. Actually, even if you comment out the "Hello, world!" line, the program will still need a definition of __libc_start_main from glibc. NOTE: the command nm -D main is equivalent to objdump -T main; see the manual of nm for more details. One way to get a list with the dynamic libraries which a program needs to run is to use ldd: ldd -v <program-name> For the program above, this is the what the output should look like: linux-vdso.so.1 => (0x00007fffcfdfe000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f264e47d000) /lib64/ld-linux-x86-64.so.2 (0x00007f264e85f000) Version information: ./main: libc.so.6 (GLIBC_2.2.5) => /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu/libc.so.6: ld-linux-x86-64.so.2 (GLIBC_2.3) => /lib64/ld-linux-x86-64.so.2 ld-linux-x86-64.so.2 (GLIBC_PRIVATE) => /lib64/ld-linux-x86-64.so.2 This output is very informative: it tells us that main needs libc.so.6 (glibc) to run, and libc.so.6 needs ld-linux-x86-64.so.2 (the dynamic linker) to be loaded. ldconfig So far we know that ld-linux.so is responsible for loading the dynamic libraries which a program needs to run, but how does it know where to find them? This is where ldconfig enters the scene. The ldconfig utility scans the directories where the dynamic libraries are commonly found (/lib and /usr/lib) as well as the directories specified in /etc/ld.so.conf and creates both symbolic links to these libraries and a cache (stored on /etc/ld.so.cache) containing their locations so that ld-linux.so can quickly find them whenever necessary. This is done when you run ldconfig without any arguments (you can also add the -v option to see the scanned directories and the created symbolic links): sudo ldconfig You can list the contents of the created cache with the -p option: ldconfig -p The command above will show you a comprehensive list with all the dynamic libraries discovered on the scanned directories. You can also use this command to get the version of a dynamic library on your system. For example, to get the installed version of the X11 library, you can run: ldconfig -p | grep libX11 This is the output I obtain on my laptop (running Xubuntu 14.04; notice that dynamic library names are usually in the format <library-name>.so.<version>): libX11.so.6 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libX11.so.6 libX11.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libX11.so libX11-xcb.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libX11-xcb.so.1 libX11-xcb.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libX11-xcb.so In words, the output above states, for example, that the symbols required from libX11.so can be found at the dynamic library /usr/lib/x86_64-linux-gnu/libX11.so. Since the latter might be a symbolic link to the actual shared object file (i.e., the dynamic library), we can get its actual location with readlink: readlink -f /usr/lib/x86_64-linux-gnu/libX11.so In my system, both libX11.so and libX11.so.6 are symbolic links to the same shared object file: /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0 These symbolic links are also created by ldconfig. If you wish to only create the symbolic links but not the cache, run ldconfig with the -N option; to only create the cache but not the symbolic links, use the -X option. As a final note on ldconfig, notice that on Ubuntu/Debian, whenever you install a (dynamic) library using apt-get, ldconfig is automatically executed at the end to update the dynamic library cache. You can confirm this fact by grepping the output of ldconfig -p for some library which is not installed in your system, then installing that library and grepping again. Seeing ld-linux.so in action You can see the dynamic libraries being loaded when a program is executed using the strace command: strace ./main The output should be similar to the one shown below (the highlighted lines show the most interesting parts; I omitted some of the output for brevity): execve("./main", ["./main"], [/* 68 vars */]) = 0 brk(0) = 0x1d9b000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3bf95c7000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=103686, ...}) = 0 mmap(NULL, 103686, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f3bf95ad0003bf8fe1000 mprotect(0x7f3bf919d000, 2093056, PROT_NONE) = 0 mmap(0x7f3bf939c000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bb000) = 0x7f3bf939c000 mmap(0x7f3bf93a2000, 17088, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f3bf93a2000 close(3) = 0 ... exit_group(0) = ? +++ exited with 0 +++
https://diego.assencio.com/?index=a500ab0fa6037fc2dc20224e7505b82f
CC-MAIN-2019-09
en
refinedweb
#include <wx/gdicmn.h> A wxRealPoint is a useful data structure for graphics operations. It contains floating point x and y members. See wxPoint for an integer version. Note that the coordinates stored inside a wxRealPoint object may be negative and that wxRealPoint functions do not perform any check against negative values. Initializes to zero the x and y members. Initializes the point with the given coordinates. Converts the given wxPoint (with integer coordinates) to a wxRealPoint. X coordinate of this point. Y coordinate of this point.
https://docs.wxwidgets.org/3.1.0/classwx_real_point.html
CC-MAIN-2019-09
en
refinedweb
Numbers Useful tricks with numbers. Get maximum, minimum, and total of an array of numbers (here integers) let addTwo: (Int, Int) -> Int = { x, y in x + y } let sortedNumbers = copiedNumbers.sorted(by: <) let theMin = sortedNumbers.first! let theMax = sortedNumbers.last! let theTotal = sortedNumbers.reduce(0, addTwo) Date formats Using Date (NSDate) and DateFormatter. Use DateFormatter let dateFormatter = DateFormatter() dateFormatter.dateFormat = "yyyy-MM-dd HH:mm:ss.SSSSSSZZZ" tempDate = dateFormatter.date(from: dateStampString) using correct formatting string. Get the period in decimal seconds since the present time let timeSinceLast = theEventTime.timeIntervalSinceNow Get the period in decimal seconds between firstDate and anotherDate let timeSince = firstDate.timeIntervalSince(anotherDate) Be careful with the sign of the result. Get a date and time far into the future neverNever = NSDate.distantFuture Make an empty array of dates var startBU = [Date]() then at start of code to use startBU = [Date]() Unified log Using Sierra’s new unified log. Implement as a class with static strings. By default, any mutable strings are written out to log as <private>, which is unhelpful. The public formatting override bypasses that, but only when used as shown, not as %{public}s. import os.log public class Demo { static let gen_log = OSLog(subsystem: "co.mycompany.appname", category: "general") public init() { } func writeLogEntry(type: OSLogType) { os_log("Any static string", log: Demo.gen_log, type: type) } func writeLogEntry(type: OSLogType, number: Int) { os_log("Here's an integer: %d", log: Demo.gen_log, type: type, number) } func writeLogEntry(type: OSLogType, string: String) { os_log("%{public}@", log: Demo.gen_log, type: type, string) } } Call as let myDemo = Demo() myDemo.writeLogEntry(type: .debug, number: intParam!) myDemo.writeLogEntry(type: .debug, string: anyString) also simple static string entry os_log("This is the message.") Objective-C Working with old Objective-C and C interfaces. Don’t literally try passing NULL. Use e.g. let theChain: SecKeychain? = nil let theAccess: UnsafeRawPointer? = nil let theResult2 = SecKeychainUnlock(theChain, 0, theAccess, false) When a call returns a CFString? e.g. let theReserved: UnsafeMutableRawPointer? = nil let theSctring = SecCopyErrorMessageString(theResult2, theReserved) theResString = theSctring! as String and you can then print(theResString) Passing a buffer into which a null-terminated C string will be written { theDefKeychainPath = FileManager().string(withFileSystemRepresentation: theStrBuff, length: Int(theLength)) let theStr = String(theDefKeychainPath) } } TheStrBuff does not need to be explicitly deallocated, and theStr is a Swift String. Swift Snippets 0: Introduction and Contents (Updated 2 July 2017.)
https://eclecticlight.co/2017/06/28/swift-snippets-4-numbers-dates-unified-log-objective-c/
CC-MAIN-2019-18
en
refinedweb
Hi, after I upgraded to 1.5.1 and getting material design in place, my app is working and looking great in Android. HOWEVER the view when focussing or unfoccusing the keyboard does not work as expected. I have a Grid with icons at the bottom of my ContentPage. So when the keyboard expands the Grid should be sitting on top of the keyboard and the Editor shrinking in height to accomodate. As mentioned this worked before the Material Design update. I am using the IosKeyboardFixPageRenderer from for iOS and it works great. How do I fix this on Android with Material Design? Thank you After calling base.OnCreate in your FormsAppCompatActivity subclass, call this: Window.SetSoftInputMode (SoftInput.AdjustResize); It will restore the old behavior if you depended on it. Thanks, but it only work on 4.x Android, not 5+ oh right I forgot this is disabled by the fullscreen flag... hmmmmmmmmmmmmm we might need to add a config option to FormsAppCompatActivity. If you can that will be great. My app is quite unusable in 5+ with material design as key buttons are hidden. There's also some full screen quirks, modal not going up the full way, etc. I can give you access to a private GitHub repo so you can download and run the code if you want, you'll see exactly what the issues are. Send me a private message with your GitHub username An update on this, since my app was unusable in Android 5+ I did some more digging and finally found a solution that seems to be working in the meantime. I blogged about it here: link Below is the code in MainActivity.cs // Fix the keyboard so it doesn't overlap the grid icons above keyboard etc if ((int)Build.VERSION.SdkInt >= Build.VERSION_CODES.L) { // Bug in Android 5+, this is an adequate workaround AndroidBug5497WorkaroundForXamarinAndroid.assistActivity (this, WindowManager); } else { // Only works in Android 4.x Window.SetSoftInputMode (SoftInput.AdjustResize); } And the AndroidBug5497WorkaroundForXamarinAndroid class implementation with thanks from these StackOverflow posts link and link `using System; using Android.App; using Android.Widget; using Android.Views; using Android.Graphics; using Android.OS; using Android.Util; namespace MyNamespace.Droid { public class AndroidBug5497WorkaroundForXamarinAndroid { private readonly View mChildOfContent; private int usableHeightPrevious; private FrameLayout.LayoutParams frameLayoutParams; }` This very nearly works - only issue I have is that sometimes Xamarin.Forms does not seem to redraw the bottom part of the screen when you press the back button. The part under the keyboard shows up as blank. Did you see the same behavior? Any ideas? @RezaMousavi, I have the same issue here. Strange part here for me is that if you add a breakpoint at the first line in possiblyResizeChildOfContent method the issue is fixed, layout is fully redrawn. It's really looks like a bug After Xamarin Forms version 2.3.3.168 update, and with the new ones versions, the AndroidBug5497WorkaroundForXamarinAndroid solution no longer works, even it is causing an extra scroll out of screen bounderies, leaving a white space between the bottom of the view and the soft-keyboard without any scrolling option to get the view right. Could you please help us with any solution for this? Hi @DiegoVarela my app heavily relies on AndroidBug5497WorkaroundForXamarinAndroid, I haven't released an update to my app with Xamarin Forms 2.3.3, but I just quickly updated it to Xamarin Forms 2.3.3.180 and in the VS Android emulator, everything with the soft keyboard kept working as expected. I did make some minor tweaks to AndroidBug5497WorkaroundForXamarinAndroid though, not sure if that's what kept it working... can share my tweaks if you still have issues. Hi @MichaelDimoudis, maybe your tweaks prevent the wrong behavior that I am getting. Could you please share it? Thanks Sorry for delay @DiegoVarela, here is my file. `using System; using Android.App; using Android.Widget; using Android.Views; using Android.Graphics; using Android.OS; using Android.Util; namespace ContinuousFeedback.Droid { /// /// Android bug5497 workaround for xamarin android. /// Answer from /// /// For more information, see /// To use this class, simply invoke assistActivity() on an Activity that already has its content view set. /// /// CREDIT TO Joseph Johnson () for publishing the original Android solution on stackoverflow.com /// public class AndroidBug5497WorkaroundForXamarinAndroid { private readonly View mChildOfContent; private int usableHeightPrevious; private FrameLayout.LayoutParams frameLayoutParams; }` Also this is what I have in my MainActivity.cs inside OnCreate() Window.SetSoftInputMode (SoftInput.AdjustResize); if (Build.VERSION.SdkInt >= BuildVersionCodes.Lollipop) { AndroidBug5497WorkaroundForXamarinAndroid.assistActivity (this, WindowManager); } I have had this same issue and through exploring other posts, I discovered this thread: As someone mentioned earlier, the problem originates from Xamarin's switch to FormsAppCompatActivity from FormsApplicationActivity in MainActivity.cs. A solution was posted in that thread that simplifies the workaround posted by MichaelDimoudis. However, the solution is still broken when using the back arrow. Just kidding. When I implemented the solution from the other thread, I guess the effects of the solution in this thread still stuck around. Therefore, while the other thread has good information, it didn't really solve this problem - as far as I can tell. As a functional alternative, you could manually downgrade your app to use FormsApplicationActivity instead of FormsAppCompatActivity. Doing so would make the keyboard interact with the pages correctly. To do that (which I haven't) you would have to adjust some files such as MainActivity.cs, App.cs, and styles.xml. At the same time that I asked in this thread, I also file a bug to Xamarin in: when today this marked solved using Platform Specifics features in this way in PCL project: using Xamarin.Forms.PlatformConfiguration.AndroidSpecific; Application.Current.On<Xamarin.Forms.PlatformConfiguration.Android>().UseWindowSoftInputModeAdjust(WindowSoftInputModeAdjust.Resize); All the documentation and sample code is in:. I would try this in my project to fix it. Thanks a lot @MichaelDimoudis and @ConnorSchmidt for your feedback and help. Hi, I have tried this solution it works fine with nuget android.support.v4 version 23.3.0 but as i have updated packages to 27 it is not working as expected. Can you tell me why? Any information will be helpful.
https://forums.xamarin.com/discussion/comment/244852/
CC-MAIN-2019-18
en
refinedweb
The ActivityManager class provides details about activities, services and the containing process of Android. Some methods of the class are meant to be used for debugging or information, so they shouldn't be used to affect any behaviour of your application during the runtime. If your code uses this class, the exception on the IDE "Cannot resolve symbol 'Activity Manager'" will appear due to the absence of importing the related class. This class can be imported in your class with the following line at the top of your class: import android.app.ActivityManager; android.app.ActivityManager namespace at the beginning of your Java code. Happy coding ! Become a more social person
https://ourcodeworld.com/articles/read/898/how-to-resolve-android-studio-error-cannot-resolve-symbol-activity-manager
CC-MAIN-2019-18
en
refinedweb
Controller Stability Analysis The purpose of controller stability analysis is to determine the range of controller gains between lower `K_{cL}` and upper `K_{cU}` limits that lead to a stable controller. $$K_{cL} \le K_c \le K_{cU}$$ The principles of stability analysis presented here are general for any linear time-invariant system whether it is for controller design or for analysis of system dynamics. Several characteristics of a system in the Laplace domain can be deduced without transforming a system signal or transfer function back into the time domain. Some of the analysis relies on the roots of the transfer function denominator, also known as poles. The roots of the numerator, also known as zeros, do not affect the stability directly but can potentially cancel an unstable pole to create an overall stable system. Converge or Diverge A first point of analysis is whether the system converges or diverges. This is determined by analyzing the roots of the denominator of the transfer function. If any of the real parts of the roots of the denominator are positive then the system is unstable. A simple rule to determine whether there are positive real roots is to examine the signs of the polynomial. If there are mixed signs (+ or -) then the system will be unstable because there is at least one positive real root. Before modern computational methods, there were several methods devised to determine the stability of a system. One such approach is the Routh-Hurwitz stability criterion. The leading left edge of a table determines whether the system is stable for or any nth-degree polynomial $$a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0$$ The coefficients of the polynomial are placed into tabular form and additional coefficients b and c are computed from higher rows. The terms b and c are: $$b_i=\frac{a_{n-1}\times{a_{n-2i}}-a_n\times{a_{n-2i-1}}}{a_{n-1}}$$ $$c_i=\frac{b_1\times{a_{n-2i-1}}-a_{n-1}\times{b_{i+1}}}{b_1}$$ A changing sign (+ or -) of the leading left edge `a_n`, `a_{n-1}`, `b_{1}`, `c_{1}` indicates that the system is unstable. Several additional methods can be used to determine stability as summarized below. In addition to analysis in the Laplace domain, stability can be determined from a model in state space form. $$\dot x = A x + B u$$ $$y = C x + D u$$ A state space model is stable when the eigenvalues of the A matrix have negative real parts. Oscillatory or Smooth A second point of analysis is whether the system exhibits oscillatory or smooth behavior. If any of the roots of the denominator have an imaginary component then the system has oscillations. Imaginary roots always come in pairs with the same positive and negative imaginary values. Final Value Theorem The Final Value Theorem (FVT) gives the steady state gain `K_p` of a transfer function `G(s)` by taking the limit as `s \to 0` $$K_p = \lim_{s \to 0}G(s)$$ The FVT also determines the final signal value `y_\infty` for a stable system with output `Y(s)`. Note that the Laplace variable `s` is multiplied by the signal `Y(s)` before the limit is taken. $$y_\infty = \lim_{s \to 0} s \, Y(s)$$ The FVT may give misleading results if applied to an unstable system. It is only applicable to stable systems or signals. Initial Value Theorem The Initial Value Theorem (IVT) gives an initial condition of a signal by taking the limit as `s \to \infty`. Like the FVT, the Laplace variable `s` is multiplied by the signal `Y(s)` before the limit is taken. $$y_0 = \lim_{s \to \infty} s \, Y(s)$$ Controller Stability Controller stability analysis is finding the range of controller gains that lead to a stabilizing controller. There are multiple methods to compute this range between a lower limit `K_{cL}` and an upper limit `K_{cL}`. $$K_{cL} \le K_c \le K_{cU}$$ This range is important to know the range of tuning values that will not lead to a destabilizing controller. With modern computational tools and powerful computers, the simulation based option is frequently used for complex systems. Exercise Consider a feedback control system that has the following open loop transfer function. $$G(s) = \frac{4K_c}{(s+1)(s+2)(s+3)}$$ Determine the values of `K_c` that keep the closed loop system response stable. Solution Routh Array The leading edge cannot change signs for the system to be stable. Therefore, the following conditions must be met: $$a_n=1 > 0$$ $$a_{n-1}=6 > 0$$ $$b_{1}=\frac{66 - 6 - 4 K_c}{6} > 0$$ $$c_{1}=6+4 K_c > 0$$ The positive constraint on `b_1` leads to `K_c<15`. The positive constraint on `c_1` means that `K_c > -1.5`. Therefore the following ranges are acceptable for the controller stability. $$-1.5 < K_c < 15$$ This is a more comprehensive solution than the other methods shown below because it also includes a lower bound on the controller stability limit (if a direct acting controller were inadvertently used). Root Locus Plot Determine where the real portion of the roots crosses to the right hand side of the plane. In this case, the real part of two roots becomes positive at `K_c=15`. Bode Plot Determine the gain margin at -180o phase. The magnitude at -180o phase is about -23 dB. With `-23 = 20 log_{10} (AR)`, the gain margin is `1/{AR}` and approximately equal to 15. This is the upper bound on the controller gain to keep the system stable. This answer agrees with the root locus plot solution. from scipy import signal import matplotlib.pyplot as plt # open loop num = [4.0] den = [1.0,6.0,11.0,6.0] sys = signal.TransferFunction(num, den) t1,y1 = signal.step(sys) # closed loop Kc = 1.0 num = [4.0*Kc] den = [1.0,6.0,11.0,4.0*Kc+6.0] sys2 = signal.TransferFunction(num, den) t2,y2 = signal.step(sys2) plt.figure(1) plt.subplot(2,1,1) plt.plot(t1,y1,'k-') plt.legend(['Open Loop'],loc='best') plt.subplot(2,1,2) plt.plot(t2,y2,'r--') plt.legend(['Closed Loop'],loc='best') plt.xlabel('Time') # root locus plot import numpy.polynomial.polynomial as poly n = 1000 # number of points to plot nr = 3 # number of roots rs = np.zeros((n,2*nr)) # store results Kc = np.logspace(-2,2,n) # Kc values for i in range(n): # cycle through n times den = [1.0,6.0,11.0,4.0*Kc[i]+6.0] # polynomial roots = poly.polyroots(den) # find roots for j in range(nr): # store roots rs[i,j] = roots[j].real # store real rs[i,j+nr] = roots[j].imag # store imaginary plt.figure(2) plt.subplot(2,1,1) plt.xlabel('Root (real)') plt.ylabel('Root (imag)') plt.grid(b=True, which='major', color='b', linestyle='-') plt.grid(b=True, which='minor', color='r', linestyle='--') for i in range(nr): plt.plot(rs[:,i],rs[:,i+nr],'.') plt.subplot(2,1,2) plt.plot([Kc[0],Kc[-1]],[0,0],'k-') for i in range(3): plt.plot(Kc,rs[:,i],'.') plt.ylabel('Root (real part)') plt.xlabel('Controller Gain (Kc)') # bode plot w,mag,phase = signal.bode(sys) plt.figure(3) plt.subplot(2,1,1) plt.semilogx(w,mag,'k-',linewidth=3) plt.grid(b=True, which='major', color='b', linestyle='-') plt.grid(b=True, which='minor', color='r', linestyle='--') plt.ylabel('Magnitude') plt.subplot(2,1,2) plt.semilogx(w,phase,'k-',linewidth=3) plt.grid(b=True, which='major', color='b', linestyle='-') plt.grid(b=True, which='minor', color='r', linestyle='--') plt.ylabel('Phase') plt.xlabel('Frequency') plt.show() Assignment See Stability Analysis Exercises
http://apmonitor.com/pdc/index.php/Main/StabilityAnalysis
CC-MAIN-2019-18
en
refinedweb
D Front End for GCC WWW: No installation instructions: this port has been deleted. The package name of this deleted port was: gdc gdc PKGNAME: gdc ONLY_FOR_ARCHS: i386 amd64 distinfo: There is no distinfo for this port. NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered. This port is required by: No options to configure Number of commits found: 46 2 days deprecation was a bit short Deprecate and set expiration date for ports broken for more than 6 month - Mark broken. Assume maintainership of this port. - remove MD5 - Switch SourceForge ports to the new File Release System: categories starting with H,I,J,K,L Utilize %%SITE_PERL%% and %%PERL_ARCH%% in pkg-plists PR: ports/136771 Exp Run by: pav Approved by: portmgr (pav) - Fix on CURRENT. The failure reason is that broken libphobos was produced, as the library used non-existing symbols from libc, namely tgammal, lgammal, erfcl, erfl, cbrtl, log1pl, expm1l. This somehow was not triggered before rev. 181074. So to fix this, add an extra patch to remove unimplemented math functions from libphobos. - Mark broken on 8.x - gdc fails to link anything May be tested on simple hello world: --- import std.stdio; int main() { std.stdio.writefln("Hello World!"); std.stdio.readln; return 0; } --- gdc test.d -o test --- /usr/local/lib/gcc/i386-portbld-freebsd8.0/../../libgphobos.a(math.o)(.text+0xa45): In function `_D3std4math6tgammaFeZe': ../.././../gcc-4.1-20080428/libphobos/std/math.d:1136: undefined reference to `tgammal' ... - Bring back --disable-shared, as removing it was not needed to fix exceptions and it also broke gdc on 6.x Approved by: miwi (mentor implicit) - Add patch to fix exceptions on FreeBSD (throwing exception from D will no longer lead straight to abort()) Obtained from: Thanks to: David Friedman Approved by: miwi (mentor implicit) - Update to newer gcc snapshot - Fix socket problem - Add missing USE_ICONV - dirrmtry include/d as it may be used by other ports PRs: ports/124437 [1], ports/124567 [2] Submitted by: kevin <kevinxlinuz at 163 dot com> [1], myself [2] Approved by: miwi (mentor) - Reset maintainship under maintainer's request PR: 125000 Submitted by: Masanori OZAWA <ozawa@ongs.co.jp> (maintainer) - Update 4.1 target to 20071105 PR: ports/118492 Submitted by: Masanori OZAWA <ozawa@ongs.co.jp> (maintainer) lang/gdc: link error fixed - GDC gets a fail of link with GCC42(on 8-current/7-PR1) because libstdc++ is not in the default link target where libphobos of GDC needs it. To fix that issue, I have added a patch that make libstdc++ a link tareget of GDC. Sometimes 6-stable says that it is a overplus link but it is no problem. PR: ports/117318 Submitted by: Masanori OZAWA <ozawa@ongs.co.jp> (maintainer) Migration from bison 1.x to 2.x PR: 117086 Tested by: -exp runs [PATCH]: lang/gdc: update to 0.24 - Update to 0.24 - Removed support for GCC 3.3.x - Changed the GCC_MASTER_SITE_SUBDIR for GCC 4.1.x PR: ports/116350 Submitted by: Masanori OZAWA <ozawa@ongs.co.jp> Use GCC 4.1.x on AMD64 to fix compilation errors. PR: 111021 Submitted by: maintainer lang/gdc: update to 0.23 - updated to 0.23 - added amd64 arch support - added gcc41 for build PR: ports/110953 Submitted by: Masanori OZAWA (maintainer) - Update to 0.21 - Update to GCC 4.0.x to 4.0.4-20061228 - Add documenta PR: ports/107521 Submitted by: Armin Pirkovitsch <a.pirko at inode.at> Approved by: Masanori OZAWA <ozawa at ongs.co.jp> (maintainer) - Fix threading support, favor pthread Since this port supports 5.x and later, we don't need to consider the 4.x case (-lc_r). Moreover, gdc uses ld as linker when compiling D source files, so PTHREAD_LIBS is not applicable here. PR: ports/107437 Submitted by: Jason DiCioccio <jd at ods.org> Approved by: Masanori OZAWA <ozawa at ongs.co.jp> (maintainer) fixes: lang/gdc: add the "dmd wrapper script" - add the "dmd wrapper script" PR: misc/102725 Submitted by: maintainer - Update to 0.19 - Update WWW PR: ports/101163 Submitted by: Masanori OZAWA <ozawa at ongs.co.jp> (maintainer) Prune an empty sub-directory. Submitted by: Masanori OZAWA <ozawa (at) ongs.co.jp> (maintainer) Upgrade to 0.18. PR: ports/98527 Submitted by: Masanori OZAWA <ozawa (at) ongs.co.jp> (maintainer) lang/gdc: depend gcc update from 3.4.6 to 4.0.4 PR: ports/96138 Submitted by: maintainer Update to 0.17 Add knob to build with GCC 4.0.x Add SHA256 PR: 89928 Submitted by: Masanori OZAWA <ozawa@ongs.co.jp> (maintainer) Update to 0.16 PR: 88597 Submitted by: Masanori OZAWA <ozawa@ongs.co.jp> (maintainer) - Remove alpha from list of supported architectures. The compiler will compile and run, but produce 32-bit code. - Use same gcc as gcc34 port - Move library test to check: target - Cleanup PR: ports/87690 Submitted by: Alejandro Pulver <alejandro@varnet.biz> Approved by: Masanori OZAWA <ozawa@ongs.co.jp> (maintainer) Fix default include path. PR: ports/84046 Submitted by: Masanori OZAWA <ozawa@ongs.co.jp> (maintainer) Approved by: flz (mentor) - Update to 0.15 PR: ports/83829 Submitted by: Masanori OZAWA <ozawa@ongs.co.jp> (maintainer) Approved by: flz (mentor) lang/gdc: update to 0.14 PR: ports/82716 Submitted by: Masanori OZAWA <ozawa@ongs.co.jp> (maintainer) lang/gdc: update to 0.13 - Update to 0.13 - Update gcc target to 3.4.5-20050607 PR: ports/82196 Submitted by: maintainer - update to 0.12 - update gcc target to 3.4.5-20050524 - build fail fix on current PR: ports/81750 Pointed out by: pointyhat via kris Submitted by: maintainer change maintainership (pre-commit is not enough :( - fix build fail - change maintainership Pointed out by: pointyhat via kris Reviewed by: ozawa@ongs.co.jp lang/gdc update to 0.11 PR: ports/81043 Submitted by: Masanori OZAWA <ozawa@ongs.co.jp> Update GCC version to 20050211 from 20050107. Submitted by: Masanori OZAWA <ozawa@ongs.co.jp> amd64 build is broken now. Submitted by: ozawa@ongs.net o update to 0.10 o build fail problem fixed Submitted by: ozawa@ongs.co.jp - update to 0.9 - support to latest gcc34 Submitted by: ozawa@ongs.co.jp o support for latest gcc34 o not depend on lang/boehm-gc port Submitted by: ozawa@ongs.co.jp Makefile contains an erroneous NUL (ascii \000) character PR: ports/74222 Submitted by: Conrad J. Sabatier <conrads@cox.net> Add gdc 0.8, D Front End for GCC. PR: ports/74072 Submitted by: Masanori OZAWA (ozawa@ongs.co.jp) Servers and bandwidth provided byNew York Internet, SuperNews, and RootBSD 13 vulnerabilities affecting 73 ports have been reported in the past 14 days * - modified, not new All vulnerabilities
http://www.freshports.org/lang/gdc/
CC-MAIN-2017-26
en
refinedweb
I am getting a couple of errors when I compile this program and all I seem to be doing today is making them worse. This program is a homework program that I did before and it worked, but my program structure was totally different. I am re-writing it again as a class(we are working about classes , structs, inheiritance etc., and I wanted to try out this new material on a program that I had done before), but have screwed up in my functions and am just confused now. I know this is a case of messing with it for too long and the more I try to correct it the more I am messing it up. Specifically, I am getting error code C2601 on line string reverse(string word) and bool pal(string word).Specifically, I am getting error code C2601 on line string reverse(string word) and bool pal(string word).Code:#include <string> #include <iostream> using namespace std; class PString : public std::string { public: PString( const std::string &aString ); bool isPalindrome() const; }; PString::PString( const std::string &aString ) : std::string( aString ) { } bool PString::isPalindrome() const { string reverse(string word) { string reverse; int length=word.length(); for(int i=length;i>=0;i--) { reverse=reverse+word[i]; } return reverse; } bool pal(string word) { bool pal=true; if(word!=reverse(word)) { pal=false; } return pal; } return true; } int main() { std::string str; std::cout << "This program is a palindrome-testing program. Enter a string to test:\n"; std::cin >> str; // Create a PString object that will check strings PString s(str); // Check string and print output if (s.isPalindrome()) { std::cout << s << " is a palindrome"; } else { std::cout << s << " is not a palindrome"; } std::cout << std::endl; system("pause"); return 0; } I get error C2780 two lines below that line and it tells me there is a problem in the void function- but I have no void function. What do I need to look at to get this to work? (Besides aspirin and a stiff drink!)
https://cboard.cprogramming.com/cplusplus-programming/136264-errors-i-cannot-figure-out.html
CC-MAIN-2017-26
en
refinedweb
Hello, I have a text file that is formatted as such: Volkswagen, 547, 9.78, 2 Mercedes, 985, 45.77, 35 ... I am trying to figure out how use the Scanner to read from the text file and store the information into an ArrayList of objects. ArrayList<Car> cars = new ArrayList<Car>(); I would then like to be able to use each element of the arraylist individually. For example: cars.getName(); cars.getSerial(); ... Here is what I have so far: import java.io.File; import java.io.FileNotFoundException; import java.util.Scanner; public class ScannerTest { public static void main(String[] args) { ArrayList<Car> cars = new ArrayList<Car>(); File file = new File("car.csv"); try { Scanner scanner = new Scanner(file).useDelimiter(","); while (scanner.hasNextLine()) { String line = scanner.nextLine(); String name = scanner.next(); int serial = scanner.nextInt(); double itemCost = scanner.nextDouble(); int itemCode = scanner.nextInt(); System.out.println(name); }//while } //try catch (FileNotFoundException e) { e.printStackTrace(); } } } For some reason, I am getting a Mismatch input error and the items are not being added to an arraylist for proper usage. Is there a better method to using the file elements individually? Any assitance will appreciated. Thank you
https://www.daniweb.com/programming/software-development/threads/207767/scanner-and-arraylist-of-objects
CC-MAIN-2017-26
en
refinedweb
Hi, I need to copy a file from one location say c:\a\abc.txt to c:\b\abc.txt. I used the following below code #include <stdio.h> #include<fstream> int main() { if ( rename("c:\a\abc.txt","c:\b\abc.txt") perror( NULL ); system("pause"); return 0; } when i ran this code, the original file in the folder "a" gets deleted. I do not want the original file to be deleted. what should i modify in this code so the original file remains undeleted. and my next question is, i want to copy the original file 3 times and paste it in the new location 3 times, and each time while pasting i want the file name like this "abc1.txt", "abc2.txt", "abc3.txt"...and so on till i give a condition, so how to implement this? Thanks Priya
https://www.daniweb.com/programming/software-development/threads/229164/copying-file-from-one-location-to-another-location-and-should-not-delete-the-original
CC-MAIN-2017-26
en
refinedweb
Connect to Azure Event Hubs to send and receive events. You can perform operations such as send an event to an Event Hub and receive events from an Event Hub. To use any connector, you first need to create a logic app. You can get started by creating a logic app now. Connect to Event Bus Before your logic app can access any service, you first need to create a connection to the service. A connection provides connectivity between a logic app and another service. Prerequisites You must have an Event Hubs account. Before you can use your Azure Event Hubs account in a logic app, you must authorize the logic app to connect to your Event Hubs account. Fortunately, you can do this easily from within your logic app on the Azure portal. Here are the steps to authorize your logic app to connect to your Event Hubs account: - To create a connection to Event Hubs, in the logic app designer, select Show Microsoft managed APIs in the drop-down list. Then enter event hubs in the search box. Select the trigger or action you want to use. - If you haven’t created any connections to Event Hubs before, you’ll be prompted to provide your Event Hubs credentials. These credentials are used to authorize your logic app to connect to and access your Event Hubs’ data. The Event Hubs connector needs the connection string for the Event Hubs namespace. It also requires Manage permissions. A good way to know if your connection string is for the namespace or a specific entity is if it contains the EntityPath parameter. If it does, it is not the right connection string for a logic app. - After you have received the connection string for the namespace, you can use it for the API connection in Logic Apps. - Notice the connection has been created, and you are now free to proceed with the other steps in your logic app. Use an Event Hubs trigger A trigger is an event that can be used to start the workflow defined in a logic app. Learn more about triggers. Here’s how to use the Event Hubs – When events are available in Event Hub trigger to initiate a logic app workflow when new events are sent to an Event Hub. Note You will be prompted to sign in with your Event Hubs connection string if you have not already created a connection to Event Hubs. - In the search box on the logic apps designer, enter event hubs. Then select the Event Hubs – When events are available in an Event Hub - The When an event in available in an Event Hub dialog box is displayed. - Enter the name of the Event Hub you would like the trigger to monitor. Optionally, you can also select a consumer group. At this point, your logic app has been configured with a trigger. When new events are available in the Event Hub you selected, the trigger will begin a run of the other triggers and actions in the workflow. Use an Event Hubs action An action is an operation carried out by the workflow defined in a logic app. Learn more about actions. Now that you have added a trigger, it’s time to do something interesting with the data that’s generated by the trigger. Follow these steps to add the Event Hubs – Send event action. This action sends an event to an Event Hub. Follow these steps to create the send event action: - Select+ New step to add the action. - Select Add an action. This opens a search box, where you can search for any action you would like to take. For this example, Event Hubs’ actions are of interest. - Enter event hubs. - Select Event Hubs – Send event as the action to take - Enter the content for the event. This is required. - Enter the event hub name to which the event will be sent. This is also required. - Provide other details about the event. This is optional. - Save the changes to your workflow.
https://blogs.msdn.microsoft.com/vinaysin/2017/03/25/get-started-with-the-azure-event-hubs-connector/
CC-MAIN-2017-26
en
refinedweb
Get-FsrmEffectiveNamespace Get-FsrmEffectiveNamespace Syntax Detailed Description The Get-FsrmEffectiveNamespace cmdlet gets a list of paths that match the static namespaces in the input list and any namespaces that have a Folder Usage property set that have a value in the input list.. -Namespace<String[]> Specifies an array of namespaces. Each string must be either a value of the FolderType property on the server, the string "All Shares", or a static path. The FolderType properties must be in the format [<Folder type property name>=: Gets paths that match static namespaces This command gets a list of paths that have their Folder Usage property set to User Data and C:\data.
https://technet.microsoft.com/en-us/library/jj900614(v=wps.630).aspx
CC-MAIN-2017-26
en
refinedweb
API Usage Tips¶ Below is a list of helpful tips when using the Shotgun API. We have tried to make the API very simple to use with predictable results while remaining a powerful tool to integrate with your pipeline. However, there’s always a couple of things that crop up that our users might not be aware of. Those are the types of things you’ll find below. We’ll be adding to this document over time as new questions come up from our users that exhibit these types of cases. Importing¶ We strongly recommend you import the entire shotgun_api3 module instead of just importing the shotgun_api3.Shotgun class from the module. There is other important functionality that is managed at the module level which may not work as expected if you only import the shotgun_api3.Shotgun object. Do: import shotgun_api3 Don’t: from shotgun_api3 import Shotgun Multi-threading¶ The Shotgun API is not thread-safe. If you want to do threading we strongly suggest that you use one connection object per thread and not share the connection. Entity Fields¶ When you do a find() call that returns a field of type entity or multi-entity (for example the ‘assets’ column on Shot), the entities are returned in a standard dictionary: {'type': 'Asset', 'name': 'redBall', 'id': 1} For each entity returned, you will get a type, name, and id key. This does not mean there are fields named type and name on the Asset. These are only used to provide a consistent way to represent entities returned via the API. type: the entity type (CamelCase) name: the display name of the entity. For most entity types this is the value of the codefield but not always. For example, on the Ticket and Delivery entities the namekey would contain the value of the titlefield. CustomEntities¶ Entity types are always referenced by their original names. So if you enable CustomEntity01 and call it Widget. When you access it via the API, you’ll still use CustomEntity01 as the entity_type. If you want to be able to remember what all of your CustomEntities represent in a way where you don’t need to go look it up all the time when you’re writing a new script, we’d suggest creating a mapping table or something similar and dumping it in a shared module that your studio uses. Something like the following: # studio_globals.py entity_type_map = { 'Widget': 'CustomEntity01', 'Foobar': 'CustomEntity02', 'Baz': 'CustomNonProjectEntity01, } # or even simpler, you could use a global like this ENTITY_WIDGET = 'CustomEntity01' ENTITY_FOOBAR = 'CustomEntity02' ENTITY_BAZ = 'CustomNonProjectEntity01' Then when you’re writing scripts, you don’t need to worry about remembering which Custom Entity “Foobars” are, you just use your global: import shotgun_api3 import studio_globals sg = Shotgun('', 'script_name', '0123456789abcdef0123456789abcdef0123456') result = sg.find(studio_globals.ENTITY_WIDGET, filters=[['sg_status_list', 'is', 'ip']], fields=['code', 'sg_shot']) ConnectionEntities¶ Connection entities exist behind the scenes for any many-to-many relationship. Most of the time you won’t need to pay any attention to them. But in some cases, you may need to track information on the instance of one entity’s relationship to another. For example, when viewing a list of Versions on a Playlist, the Sort Order ( sg_sort_order) field is an example of a field that resides on the connection entity between Playlists and Versions. This connection entity is appropriately called PlaylistVersionConnection. Because any Version can exist in multiple Playlists, the sort order isn’t specific to the Version, it’s specific to each _instance_ of the Version in a Playlist. These instances are tracked using connection entities in Shtogun and are accessible just like any other entity type in Shotgun. To find information about your Versions in the Playlist “Director Review” (let’s say it has an id of 4). We’d run a query like so: filters = [['playlist', 'is', {'type':'Playlist', 'id':4}]] fields = ['playlist.Playlist.code', 'sg_sort_order', 'version.Version.code', 'version.Version.user', 'version.Version.entity'] order=[{'column':'sg_sort_order','direction':'asc'}] result = sg.find('PlaylistVersionConnection', filters, fields, order) Which returns the following: [{'id': 28, 'playlist.Playlist.code': 'Director Review', 'sg_sort_order': 1.0, 'type': 'PlaylistVersionConnection', 'version.Version.code': 'bunny_020_0010_comp_v003', 'version.Version.entity': {'id': 880, 'name': 'bunny_020_0010', 'type': 'Shot'}, 'version.Version.user': {'id': 19, 'name': 'Artist 1', 'type': 'HumanUser'}}, {'id': 29, 'playlist.Playlist.code': 'Director Review', 'sg_sort_order': 2.0, 'type': 'PlaylistVersionConnection', 'version.Version.code': 'bunny_020_0020_comp_v003', 'version.Version.entity': {'id': 881, 'name': 'bunny_020_0020', 'type': 'Shot'}, 'version.Version.user': {'id': 12, 'name': 'Artist 8', 'type': 'HumanUser'}}, {'id': 30, 'playlist.Playlist.code': 'Director Review', 'sg_sort_order': 3.0, 'type': 'PlaylistVersionConnection', 'version.Version.code': 'bunny_020_0030_comp_v003', 'version.Version.entity': {'id': 882, 'name': 'bunny_020_0030', 'type': 'Shot'}, 'version.Version.user': {'id': 33, 'name': 'Admin 5', 'type': 'HumanUser'}}, {'id': 31, 'playlist.Playlist.code': 'Director Review', 'sg_sort_order': 4.0, 'type': 'PlaylistVersionConnection', 'version.Version.code': 'bunny_020_0040_comp_v003', 'version.Version.entity': {'id': 883, 'name': 'bunny_020_0040', 'type': 'Shot'}, 'version.Version.user': {'id': 18, 'name': 'Artist 2', 'type': 'HumanUser'}}, {'id': 32, 'playlist.Playlist.code': 'Director Review', 'sg_sort_order': 5.0, 'type': 'PlaylistVersionConnection', 'version.Version.code': 'bunny_020_0050_comp_v003', 'version.Version.entity': {'id': 884, 'name': 'bunny_020_0050', 'type': 'Shot'}, 'version.Version.user': {'id': 15, 'name': 'Artist 5', 'type': 'HumanUser'}}] versionis the Version record for this connection instance. playlistis the Playlist record for this connection instance. sg_sort_orderis the sort order field on the connection instance. We can pull in field values from the linked Playlist and Version entities using dot notation like version.Version.code. The syntax is fieldname.EntityType.fieldname. In this example, PlaylistVersionConnection has a field named version. That field contains a Version entity. The field we are interested on the Version is code. Put those together with our f riend the dot and we have version.Version.code. Shotgun UI fields not available via the API¶ Summary type fields like Query Fields and Pipeline Step summary fields are currently only available via the UI. Some other fields may not work as expected through the API because they are “display only” fields made available for convenience and are only available in the browser UI. Shot¶ Smart Cut Fields: These fields are available only in the browser UI. You can read more about smart cut fields and the API in the Smart Cut Fields doc: smart_cut_in smart_cut_out smart_cut_duration smart_cut_summary_display smart_duration_summary_display smart_head_in smart_head_out smart_head_duration smart_tail_in smart_tail_out smart_tail_duration smart_working_duration Pipeline Step summary fields on entities¶ The Pipeline Step summary fields on entities that have Tasks aren’t currently available via the API and are calculated on the client side in the UI. These fields are like step_0, or step_13. Note that the Pipeline Step entity itself is available via the API as the entity type Step. Audit Fields¶ You can set the created_by and created_at fields via the API at creation time. This is often useful for when you’re importing or migrating data from another source and want to keep the history in tact. However, you cannot set the updated_by and updated_at fields. These are automatically set whenever an entity is created or updated. Logging Messages from the API¶ The API uses standard python logging but does not define a handler. To see the logging output in stdout, define a streamhandler in your script: import logging import shotgun_api3 as shotgun logging.basicConfig(level=logging.DEBUG) To write logging output from the shotgun API to a file, define a file handler in your script: import logging import shotgun_api3 as shotgun logging.basicConfig(level=logging.DEBUG, filename='/path/to/your/log') To suppress the logging output from the API in a script which uses logging, set the level of the Shotgun logger to a higher level: import logging import shotgun_api3 as shotgun sg_log = logging.getLogger('shotgun_api3') sg_log.setLevel(logging.ERROR)
http://developer.shotgunsoftware.com/python-api/cookbook/usage_tips.html
CC-MAIN-2017-26
en
refinedweb
06 December 2006 12:04 [Source: ICIS news] TOKYO (ICIS news)--Japanese conglomerate Marubeni Corp is considering building a plant to produce propylene in an oil-rich country as it expects supply to be tight, a company spokesman said on Wednesday. ?xml:namespace> He said the company was yet to decide on the finer details such as the location and capacity, adding that the proposed unit could be downstream of its refinery joint venture project in ?xml:namespace> Marubeni, Idemitsu Kosan, Cosmo Oil and Mitsui & Co reached an agreement in late November with Qatar Petroleum on an equity participation of 29% in Laffan Refinery Co, Marubeni said, for which the company has a 4.5% stake. Laffan was constructing a condensate refinery with a capacity of 146,000 bbd/day in The refinery plans to produce Naphtha, Kero/Jet, Gas Oil and LPG by refining condensate produced at Qatar North Field, the company added. The spokesman said that even though there was no specific plan for the refinery to produce petrochemicals, there was a possibility that the company may venture into the production of propylene or other petrochemicals in the future as the refinery will produce naphtha. However, the spokesman added that the project would not be with the current
http://www.icis.com/Articles/2006/12/06/1112255/Marubeni-considers-building-new-propylene-plant.html
CC-MAIN-2014-52
en
refinedweb
Hi, I have a C program I wrote for Unix that uses some time libraries such as: #include <sys/time.h> #include <sys/types.h> #include <sys/stat.h> #include <sys/sysinfo.h> and functions like: ascftime() and strptime() to manipulate date strings and formats. I am trying to compile this code under windows (using LLC compiler) but it looks that it does not contain the libraries/functions to support this coding. Does anyone know either where I can dowwnload such lubraries (for Windows) or what are the ANSI-C equivalent commands under Windows OS? Appreciated vmn_3k
http://cboard.cprogramming.com/c-programming/43209-time-libraries-funtions-windows-c-compiling.html
CC-MAIN-2014-52
en
refinedweb
Hi Diego, On Mon, 2006-08-07 at 17:14 +0200, Diego Biurrun wrote: [...] > > > Careful, svn:externals only works for directories, not files. > > Ok... So, we will probably need to have a copy of those two files in the > > repository for some time :( > > Why not put them in libavutil (which is mandatory for MPlayer) then? Well, they will stay in the repository only for a limited time, until a proper solution will be implemented. asmalign.h maybe can be put in libavutil, but I think img_format.h should not go in libavutil (it has nothing to do with ffmpeg). > > --- Makefile (revision 5944) > > +++ Makefile (working copy) > > @@ -11,6 +11,10 @@ > > > > +ifeq ($(CONFIG_SWSCALER),yes) > > +CFLAGS := -I$(SRC_PATH)/libswscale $(CFLAGS) > > +endif > > Why not simply += here? Since I removed the "#include_next", "-I$(SRC_PATH)/libswscale" must go before all the other "-I". += would put it after them. An alternative could be ifeq ($(CONFIG_SWSCALER),yes) CFLAGS=-I$(SRC_PATH)/libswscale endif CFLAGS+=$(OPTFLAGS) -I. -I$(SRC_PATH) -I$(SRC_PATH)/libavutil \ -I$(SRC_PATH)/libavcodec -I$(SRC_PATH)/libavformat \ -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_ISOC9X_SOURCE but I did not like it. Anyway, I can change it if needed. Thanks, Luca -- _____________________________________________________________________________ Copy this in your signature, if you think it is important: N O W A R ! ! !
http://ffmpeg.org/pipermail/ffmpeg-devel/2006-August/007127.html
CC-MAIN-2014-52
en
refinedweb
31 December 2009 16:23 [Source: ICIS news] HOUSTON (ICIS news)--The US Chemical Safety Board (CSB) has a detailed watch list for 2010 and beyond for improving the safety of the chemical industry. But to tackle those issues in the depth it wants, the CSB may need additional funding help from US legislators. The CSB was not able to investigate a number of late-year 2009 incidents - including the American Acryl 9 December explosion in Seabrook, Texas - because allocated 2009 budget money had mostly run dry, said CSB chairman John Bresland. In recent years, the Washington, DC-based CSB received funds for a second office in ?xml:namespace> Bresland said the CSB also has interest in opening a “People within the oil industry have told us that we give the best value for taxpayer dollar of any agency in the government,” Bresland said. “$10m (€7m) is what we spend, but in terms of accident prevention, that money is returned many times over.” For 2010, that budget is likely to be similar to 2009, which could leave the CSB stretched thin. Any increase would either come as a supplemental increase at some point during the year, or more than likely for the 2011 budget, which the CSB probably wouldn’t see until late next year, Bresland said. “We operate on the previous year’s budget until we actually get the money from Congress,” he said. So for now, the CSB’s priorities are shaped and somewhat limited in scope to a few target areas. One such area remains the 2009 investigation of Bayer CropScience and resulting legislation aimed at stopping chemical companies from inappropriately using sensitive security information (SSI) labels to impede safety probes. But going forward, the question lingers as to whether the legislation will have its intended effect. In December, a CSB report on a Citgo fire in However, the CSB received affirmation from the US Department of Homeland Security (DHS) that the video did not meet those qualifications. The CSB said it found Citgo’s efforts “disturbing and contrary to the intent” of the law. “Maybe in this case, they weren’t aware of the legislation,” Bresland speculated. “But they’re a big company; they need to be knowledgeable. “We hope that companies as time goes on will be much more aware of this legislation, and not be tempted to use SSI as an excuse for not giving us essential information,” he added. In addition to the SSI issue, the fire also caused concern over the potential release of hydrogen fluoride into the surrounding community. Incidents such as that, and particularly the Silver Eagle Refining fire and explosion in Woods Cross, “I think there are a lot of implications for facility siting in these cases,” Bresland said. “Just how close should a residential or commercial community be to refineries or chemical plants? That’s something we’re going to have to look at the future and see if there are recommendations we could or should make.” The CSB does not have the authority to implement legal sanctions, but regularly advises US government policymakers. Another priority for the agency is fires at gasoline storage facilities, such as the spilled gasoline that vaporised and caused a late October fire and blast that destroyed more than 10 petroleum storage tanks owned by Caribbean Petroleum in That was the third of three very similar incidents in the past few years, Bresland said. “There is a factor at play that appears puzzling to experts in this area,” Bresland said. “If you get a tank that overflows with gasoline, normally you would expect a fire, but in all three cases, there was a significant explosion that did considerable off-site damage. “We are working with investigators to see what commonality is there,” he added. Bresland said that issue also ties into the debate on whether residential communities should be built so closely to refineries and other chemical facilities. In each of those areas, the CSB said has received “very good feedback” for its work, Bresland said. What remains to be seen is whether a Congress embroiled in debate over health care reform has had the time to notice. (
http://www.icis.com/Articles/2009/12/31/9321017/outlook-10-us-chem-safety-board-hopes-to-expand-seeks-funding.html
CC-MAIN-2014-52
en
refinedweb
Google Translate on Windows Phone Windows Phone 8 Windows Phone 7.5 This article shows how to use the Google translate web service in Windows Phone applications. Introduction I have been browsing the web to find a free translator service, that I can use. I have finally found a solution and implemented it in Windows Phone 8. (I haven't tested on WP7 devices). It uses Yahoo Query Language to access Google translate service . The limit of using this service is 20000 queries/IP address/hour. No registration is required, as it's a public service. See the google.translate table on. Note: Google Translate API is available as a paid service if used directly. However, through YQL it is freely available as mentioned above. Implementation You'll need to add two files to your project. The main class and an event argument class. Additionally, Json.NET is requied for the class to work. It's a free package. Translator.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Net; using Newtonsoft.Json; using YourNameSpace; public delegate void TranslatedString(TranslatedStringEventArgs e); namespace YourNameSpace { class Translator { public event TranslatedString TranslatedString; public string translatingString; // Supporting function to make the URI generation simpler. private Uri constructUri(string to, string text) { string url = @"" + Uri.EscapeDataString("select * from google.translate where q=\"" + text + "\" and target=\"" + to + "\";") + "&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback="; return new Uri(url, UriKind.Absolute); } public void TranslateString(string to, string text) { // getting the translation via YQL, setting up a WebClient for this WebClient wc = new WebClient(); wc.OpenReadCompleted += wc_OpenReadCompleted; wc.OpenReadAsync(constructUri(to, text)); translatingString = text; } void wc_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e) { // getting a new class to return and filling in the inital translate string TranslatedStringEventArgs tsea = new TranslatedStringEventArgs(); tsea.initialString = translatingString; // checking if the translation succeeded if (e.Error == null) { // setting the return values tsea.Error = false; tsea.ErrorMessage = ""; //helper variables for converting string resultString = ""; byte[] byteArrayForResultString = new byte[e.Result.Length]; //converting the returned value to string - that's what Json.NET eats e.Result.Read(byteArrayForResultString, 0, Convert.ToInt32(e.Result.Length)); resultString = UTF8Encoding.UTF8.GetString(byteArrayForResultString, 0, byteArrayForResultString.Length); // try to parse the results try { // doing the actual work Newtonsoft.Json.Linq.JObject obj = (Newtonsoft.Json.Linq.JObject)JsonConvert.DeserializeObject(resultString); // Since everything is called "json" in the json array (pretty straightforward, but not that practical if you ask me), // we have to navigate to our string manually. tsea.translatedString = ((((((((((((((((Newtonsoft.Json.Linq.JContainer)(obj)).First).First).Last).First).First).First).First).First).First).First).First).First).First).First).ToString(); } // handle the exceptions if there are catch (Exception serializer_exception) { tsea.Error = true; tsea.ErrorMessage = "Error in JSON Serializing: " + serializer_exception.Message + Environment.NewLine + resultString; } } else { tsea.Error = true; tsea.ErrorMessage = "Error in WebClient: " + e.Error.Message; } TranslatedString(tsea); } } } TranslatedStringEventArgs.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace YourNameSpace { public class TranslatedStringEventArgs:EventArgs { public string initialString { get; set; } public string translatedString {get; set; } public bool Error { get; set; } public string ErrorMessage { get; set; } } } Usage The usage is very simple. As shown below, instantiate the Translator object t and pass the language and string to be translated as arguments to TranslateString. Here is the translated event handler, void t_TranstlatedString(TranslatedStringEventArgs e) { if (e.Error == false) { MessageBox.Show(e.translatedString); } else { MessageBox.Show(e.ErrorMessage, ":(", MessageBoxButton.OK); } } Source code You can download the source code for the example here: File:TranslatorExample.zip Croozeus - Some comments Hi molbal Thanks for the article. I've tidied up the article and sub-edited it. As a comment in general, I think it's worth a mention why YQL was used here, and Google translate APIs weren't used directly - perhaps because the Google translate APIs doesn't support both JSON and XML output? If there's no reason in particular, we can mention the possibility of using directly google APIs too! I think the introduction is the right place to mention this. I did couple of edits, please check if OK and here are few comments: Pankaj Regards croozeus 07:48, 12 February 2013 (EET) Molbal - Thanks for tidying it up Hi, Thank you for making the article more professional. I'm going to check the code on WP7, too. I'l also add, that "Newtonsoft.Json" is requied. I choosed YQL, because it provides free access too the Google translate service, while Google's own APIs are paid only. Of course I'm going to pack it up and upload an example. Thanks, Regards,Balint molbal 16:07, 12 February 2013 (EET) Croozeus - Cool! Hi Balint, Thanks for the explanation re Google translate vs YQL APIs. I've added a note to the introduction to reflect this. Please check if OK. Let me know after you've tested it on WP7, I'll rename the article. And would be great to have the buildable example uploaded for others to try. RegardsPankaj croozeus 11:05, 13 February 2013 (EET) Molbal - Added requied changes Hi, I have added an exmaple, that shows the usage on Windows Phone 7. It has a minor modification, "System.Threading.Tasks" is no longer used, since it's only supported in WP8. I've also uploaded a WP7 project, and it works well. I forgot I used Json.Net (A free package available), so I placed a similar note block as You did with the YQL explanation. Could you please move the download link in an appropriate place and rename the article? Thank you! Regards,Bálint molbal 12:00, 13 February 2013 (EET) Croozeus - Thank you! Thanks for checking it for WP7. The Json.Net information is useful, thanks for adding. Just moved it to implementation section where we mention what files are required. I renamed the article to "Using Google Translate on Windows Phone". Regarding the source code, I checked your Skydrive link - it has only a Speech Example. Did you upload the sample code there yet? You can directly upload the zip to our wiki, so that you don't have to maintain it on your skydrive forever. Here's the upload link, RegardsPankaj. croozeus 12:20, 13 February 2013 (EET) Molbal - My bad! I screwed it up. I am working on a speech-to-speech translation and made this class for that project. It was a reflex I gave that name insted of Translation example. That's the corrent file, I'll rename everything in it and then upload it to the wiki. (I didn't know about it yet) Thanks for the link and taking care of my clumsiness. Regards,Bálint molbal 12:27, 13 February 2013 (EET) Molbal - Fixed Hi Pankaj, I have fixed the errors :) Regards,Bálint molbal 16:43, 13 February 2013 (EET) Croozeus - Awesome! Hi Bálint, That's great. Thanks very much for adding it. I added source-code link it to the ArticleMetaData as well. Looking forward to your other contributions to the Wiki! RegardsPankaj croozeus 08:26, 14 February 2013 (EET) Takacs.albert - Code snippet for more than one sentences Hi All, Please find below the snippet for the case when multiple sentence translation is needed. Please replace the following line found in the original post: tsea.translatedString = ((((((((((((((((Newtonsoft.Json.Linq.JContainer)(obj)).First).First).Last).First).First).First).First).First).First).First).First).First).First).First).ToString(); To these lines: var sentences = ((((((((((((Newtonsoft.Json.Linq.JContainer)(obj)).First).First).Last).First).First).First).First).First).First).First).First; StringBuilder sb = new StringBuilder(); foreach (var item in sentences.Children()) { } tsea.translatedString = sb.ToString();Hope this helps, takacs.albert 21:46, 5 June 2013 (EEST) Molbal - It helps for sure Hi Albert,It surely helps a lot! Thanks, I'll upgrade the article snippet soon. molbal 21:54, 5 June 2013 (EEST) Takacs.albert - Cool!Cool! takacs.albert 21:56, 5 June 2013 (EEST) Hamishwillee - Let us know when done Hi Bálint That will be great. Please add a note when you're done and I'll remove the comments next time I come round. RegardsHamish hamishwillee 09:48, 7 June 2013 (EEST) Molbal - On Monday Hi Hamish, I'll update the article on Monday evening. (I have an exam where I have to battle the Devil itself: Theory of computing) :) Regards,bálint molbal 14:14, 7 June 2013 (EEST) Hamishwillee - Thank you.There is no urgency - just want to know when I can clean up. Good luck with your battle with evil :-) hamishwillee 08:08, 10 June 2013 (EEST) Paulo.morgado - Ading async/await support Hi Bálint, As I had posted in your other article I’ve played with your code and added async/await capabilities to it. Instead of using the WebClient class, I’ll be using the HttpClient class that you can find in the Microsoft HTTP Client Libraries NuGet package. I won’t be changing the meaning of your code as I don’t know the API you’re using. By using async/await for writing the code, the code becomes easier to understand and maintain, as you can see: Noticed the .ConfigureAwait(continueOnCapturedContext: false)? This will instruct the compiler generated state machine to not post the continuation into the current SynchronizationContext, if one exists. This way, if the calling code is on the UI thread And it becomes also easier to consume: One of the reasons I used the HttpClient class instead of the WebClient class is because it’s easier to handle cancellation: And if you’d like to set a timeout for the call: paulo.morgado (talk) 12:47, 21 August 2013 (EEST) Hamishwillee - Great comment Hi Balint, Paulo Paulo's comment looks like a good approach to me. Only issue I guess would be that for this still to work on WP7 you'd need to add the libraries (can't remember which) that add await/async to WP7. If it were me (and I hope you will restructure this) I'd probably redo using structure suggested by Paulo, then have a WP7 section. In this section you could explain that the above code will work with addition of libraries, X/Y, but that if you don't want that dependency then you can use old-style event handling (and then just have your current text). Thoughts? RegardsH hamishwillee (talk) 02:49, 22 August 2013 (EEST) Paulo.morgado - WP7 It should work with WP7. The documentation states that the Microsoft HTTP Client Libraries package is supported for: It doesn't declare a dependency on the Async for .NET Framework 4, Silverlight 4 and 5, and Windows Phone 7.5 and 8 package, but it can be added if needed. Cheers,Paulo paulo.morgado (talk) 02:59, 22 August 2013 (EEST) Takacs.albert - Translation service is failing Is it possible that this service is not working anymore? I keep getting the following response string everytime: "{"query":{"count":0,"created":"2013-09-13T21:55:35Z","lang":"en-US","results":null}}" Please help me out as this feature is in production and my users are not able to translate... Thanks,Albert takacs.albert (talk) 00:49, 14 September 2013 (EEST) Molbal - the service changed Hello,I've been inactive for a while, I'll read the comments soon and then edit this comment :) molbal (talk) 10:42, 21 February 2014 (EET)
http://developer.nokia.com/community/wiki/Using_Google_Translate_on_Windows_Phone
CC-MAIN-2014-52
en
refinedweb
Test driven development thrives on a tight feedback loop. However,: describe RecipientInterceptor do it 'overrides to/cc/bcc fields' do Mail.register_interceptor RecipientInterceptor.new(recipient_string) response = deliver_mail expect(response.to).to eq [recipient_string] expect(response.cc).to eq [] expect(response.bcc).to eq [] end end Type <Leader>s: rspec spec/recipient_interceptor_spec.rb:4 Run options: include {:locations=>{"spec/recipient_interceptor_spec.rb"=>[4]}} . Finished in 0.03059 seconds 1 example, 0 failures The screen is overtaken by a shell that runs just: def delivering_email(message) add_custom_headers message add_subject_prefix message message.to = @recipients message.cc = [] message.bcc = [] end Run <Leader>l without having to switch back to the spec: rspec spec/recipient_interceptor_spec.rb ...... Finished in 0.17752 seconds 6 examples, 0 failures These tight feedback loops make TDD easier by eliminating the switching cost between editor to the shell when running specs. What’s next? If you found this useful, you might also enjoy: - Running Specs from Vim, Sent to tmux Via Tslime - Use RSpec.vim with tmux and Dispatch - Destroy All Software for screencasts showing this and other productivity techniques with specs and Vim
http://robots.thoughtbot.com/running-specs-from-vim
CC-MAIN-2014-52
en
refinedweb
Type: Posts; User: howardcartter here is the code import javax.vecmath.*; import com.sun.j3d.utils.universe.*; import javax.media.j3d.*; import com.sun.j3d.utils.behaviors.vp.*; import javax.swing.JFrame; import... can someone help me get this obj file working This is the code called "Phong.java" that I am trying to run with the icosahedron object file. I needed to use Phong.java and the teapot input to...
http://forums.codeguru.com/search.php?s=3693c7c6f62bde52b2ff370c06dbe290&searchid=5796371
CC-MAIN-2014-52
en
refinedweb
10 Jul 01:53 2010 Re: gtkpod 1.0 beta 2 Leandro Lucarella <luca <at> llucax.com.ar> 2010-07-09 23:53:09 GMT 2010-07-09 23:53:09 GMT Leandro Lucarella, el 9 de julio a las 18:42 me escribiste: > I tried to break tm_add_track_to_track_model() using GDB to see how long > took each song to be scanned and it seems to be instantly really, so > I guess there is something else going on. I'm doing some profiling and, correct me if I'm wrong, but gtkpod load all the songs at the beginning of the program, and not each time one clicks on the iPod playlist, so I that is correct, an I/O problem involving the device is discarded. It looks like it's GTK which somehow is slow. Manipulating the TreeViews seems to be what's taking so long. I've compiled gtkpod setting the macro DEBUG_TIMING and added a couple more prints and this is the result: pm_selection_changed_cb enter: 2098.201221 sec pm_selection_changed_cb before listing: 2098.214338 sec pm_selection_changed_cb after listing: 2150.997785 sec pm_selection_changed_cb exit: 2151.172406 sec st_selection_changed_cb enter (inst: 0): 2151.504683 sec st_selection_changed_cb after st_init: 2178.739254 sec st_selection_changed_cb before loading tracks: 2178.739273 sec st_selection_changed_cb after loading tracks: 2232.149276 sec st_selection_changed_cb exit: 2232.376281 sec st_selection_changed_cb enter (inst: 1): 2232.376300 sec st_selection_changed_cb after st_init: 2260.392100 sec st_selection_changed_cb before loading tracks: 2260.392119 sec st_selection_changed_cb after loading tracks: 2315.961767 sec st_selection_changed_cb exit: 2316.018309 sec st_selection_changed_cb enter (inst: 1): 2316.116774 sec st_selection_changed_cb after st_init: 2343.740650 sec st_selection_changed_cb before loading tracks: 2343.740677 sec st_selection_changed_cb after loading tracks: 2399.858041 sec st_selection_changed_cb exit: 2399.920906 sec st_selection_changed_cb enter (inst: 1): 2399.920933 sec st_selection_changed_cb after st_init: 2428.241284 sec st_selection_changed_cb before loading tracks: 2428.241303 sec st_selection_changed_cb after loading tracks: 2483.904216 sec st_selection_changed_cb exit: 2483.962219 sec The after and before listing/loading tracks was added like this: #if DEBUG_TIMING g_get_current_time (&time); printf ("pm_selection_changed_cb before listing: %ld.%06ld sec\n", time.tv_sec % 3600, time.tv_usec); #endif for (gl=new_playlist->members; gl; gl=gl->next) { /* add all tracks to sort tab 0 */ Track *track = gl->data; st_add_track (track, FALSE, TRUE, 0); } #if DEBUG_TIMING g_get_current_time (&time); printf ("pm_selection_changed_cb after listing: %ld.%06ld sec\n", time.tv_sec % 3600, time.tv_usec); #endif iAt display_playlists.c:1518 and: #if DEBUG_TIMING || DEBUG_CB_INIT g_get_current_time (&time); printf ("st_selection_changed_cb before loading tracks: %ld.%06ld sec\n", time.tv_sec % 3600, time.tv_usec); #endif for (gl = new_entry->members; gl; gl = gl->next) { /* add all member tracks to next instance */ Track *track = gl->data; st_add_track(track, FALSE, TRUE, inst+1); } #if DEBUG_TIMING || DEBUG_CB_INIT g_get_current_time (&time); printf ("st_selection_changed_cb after loading tracks: %ld.%06ld sec\n", time.tv_sec % 3600, time.tv_usec); #endif (at display_sorttabs.c:~1930) "the after st_init()" was added after any st_init() call in st_selection_changed_cb(). have no idea why this is happening though, all I can say is that I'm using other GTK applications that make heavy use of TreeView; like gmpc, where I can list about 25k songs (much more than I have in the iPod) in a fraction of a second. To makes things worse, libgtk-2.0 is the same version in the Ubuntu box where it's slow than in the box where it works fine (which BTW is a Pentium M 1.7GHz, much less processing power than the box where it's incredibly slow). Any ideas or suggestions are welcome. -- -- Leandro Lucarella (AKA luca) ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Y tuve amores, que fue uno sólo El que me dejó de a pie y me enseñó todo... ------------------------------------------------------------------------------ This SF.net email is sponsored by Sprint What will you do first with EVO, the first 4G phone? Visit sprint.com/first -- _______________________________________________ Gtkpod-questions mailing list Gtkpod-questions <at> lists.sourceforge.net
http://permalink.gmane.org/gmane.comp.ipod.gtkpod.user/2033
CC-MAIN-2014-52
en
refinedweb
I have successfully embedded the Perl interpreter, and it is working great. The next step is to provide the ability for the user to make calls into my C program to get certain information, take certain actions, etc. I have written an extension using SWIG, and built that into my program as well. After constructing my perl interpreter, with the appropriate xs_init function that adds my DynTrans as a static XSUB, I immediately use: eval_pv("use DynTrans;", FALSE); [download] So, the problem: In order for this to work, I have to have DynTrans.pm in the directory where I run my application. I want to remove this requirement, I want the entire application to be completely self-contained. I have gone so far as to modify my code like this: perl_setup_module(); eval_pv("use DynTrans;", FALSE); perl_cleanup_module(); [download] So, the question: Is there a way that I can make XSUBs availiable to my perl interpreter without having to have the .pm file around at all? I tried building the entire .pm file into my application with a series of eval_pv() for every line, but of course that didn't work. What I would really like is a programatic interface into whatever the use DynTrans; perl stuff does. I have read and re-read perlembed, perlxs, perlguts, perlapi, perlcall, several perl books, forums, SuperSearch, etc. and I cannot find a way to do this. Can anyone save me from the dreaded .pm file? Thanks in advance. What happens if you put the whole text of the module in a single string and pass that to eval_pv()? I haven't tried, but I don't see any reason why that shouldn't work. It seems that Perl isn't executing the .pm file directly as Perl code, at least not in the main interpreter namespace, but is rather pulling it in as part of the module loading/initialization process. I.E. the eval_pv() method that I tried would be akin to writing a perl script and beginning it with: (this is the .pm file that SWIG generated) package DynTrans; require Exporter; @ISA = qw(Exporter); package DynTrans; boot_DynTrans(); package DynTrans; @EXPORT = qw( GetTableName GetAction ); . . . User script here ... [download] Does this make sense? where. Thanks for the suggestions, though. I would post this to the Perl-Xs list. List is low traffic, but you will get some good suggestions. Hell yes! Definitely not I guess so I guess not Results (49 votes), past polls
http://www.perlmonks.org/?node_id=428832
CC-MAIN-2014-52
en
refinedweb
by Ethan Wilansky March 2007 Summary: System.DirectoryServices.Protocols (S.DS.P), first introduced in the .NET Framework 2.0, is a powerful namespace that brings LDAP programming to managed code developers. This paper provides you with an introduction to programming with S.DS.P by describing common directory management tasks and how you code those tasks using this namespace. (48 printed pages) Contents Introduction What to Expect How to Prepare for Running the Code What Not to Expect System.DirectoryServices.Protocols Architecture Common Patterns Establishing an LDAP Connection Request and Response Classes Management Tasks LDAP Directory Management Tasks Search Operations Performing a Simple Search Returning Attribute Values Running a Paged Search Running an Asynchronous Search Creating an Attribute Scoped Query (ASQ) Creating a Virtual List View Advanced LDAP Server Connection and Session Options Binding over a TLS/SSL Encrypted Connection Performing Fast Concurrent Bind Operations Leveraging Transport Layer Security Performing Certificate-Based Authentication References Conclusion In the first white paper of this series, I explored System.DirectoryServices.ActiveDirectory (S.DS.AD) to help you understand how you can use this namespace for administering Active Directory and ADAM instances. S.DS.AD is a specialized namespace specifically targeted at programmers working with Microsoft directory services. In contrast, System.DirectoryServices.Protocols (S.DS.P) is a namespace designed for LDAP programming in general. In addition, it provides capabilities that were previously unavailable to managed code programmers. This paper provides you with an introduction to S.DS.P by describing common tasks and how you code those tasks with this namespace first introduced in the .NET Framework 2.0: for example, how to perform simple tasks, like creating a user account, to more complex tasks, such as running an attribute scoped query against an LDAP directory or binding to a directory server using X.509 client and server certificates. While some of these types of programming tasks are possible to complete with S.DS alone, Microsoft has opened the full power of directory programming to managed code developers through S.DS.P. If you are already familiar with directory services programming using managed code, you will immediately see that many of the directory services programming tasks I introduce you to in this paper can be completed using the System.DirectoryServices namespace. However, these examples are a great way to get familiarized with common S.DS.P programming patterns. In addition, S.DS.P provides raw LDAP access, meaning that it is designed specifically to reach beyond Active Directory and ADAM to other LDAP compliant directories. Therefore, if you plan to use .NET managed code against other LDAP directories, a great place to focus is on S.DS.P. The example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious. No association with any real company, organization, product, domain name, email address, logo, person, places, or events is intended or should be inferred. This paper includes copious code examples and an associated code download so that you can test everything explored here. The examples are intentionally simple so that you can quickly grasp their purpose and begin to detect patterns that repeat throughout this namespace exploration. For example, by the time you complete this introduction, you'll have an excellent understanding of how to connect to a directory using the LdapConnection class and how the various classes derived from the DirectoryRequest and DirectoryResponse base classes allow you to interact with an LDAP directory. The code download is a console application solution containing the DirectoryServices.Protocols in this paper. To run these examples, have at least one Active Directory domain available for testing. Optionally, have an ADAM instance available and know the designated port numbers applied to the instance. The advanced authentication operations also require a valid SSL certificate and one example requires a client certificate. The Advanced LDAP Server Connection and Session Options section includes reference information to help you configure certificates. After compiling the solution, your output will be the DS.P program. Typing the program name at the command line will return a list of available commands and their parameters, as shown here: CreateUsers server_or_domain_name targetOu numUsers AddObject server_or_domain_name dn dirClassType AddAttribute server_or_domain_name dn attributeName attributeValue AddAttribute2 server_or_domain_name dn attributeName attributeValue AddAttributeUri server_or_domain_name dn attributeName attributeUriValue AddMVAttribStrings server_or_domain_name dn attribName "attribVal1,...attribValN" DeleteAttribute server_or_domain_name dn attributeName EnableAccount server_or_domain_name dn DeleteObject server_or_domain_name dn MoveRenameObject server_or_domain_name originalDn newParentDn objectName SimpleSearch server_or_domain_name startingDn AttributeSearch server_or_domain_name startingDn "attribName1,...attribNameN" TokenGroupsSearch server_or_domain_name DnofUserAccount PagedSearch server_or_domain_name startingDn numbericPageSize AsyncSearch server_or_domain_name startingDn Asq server_or_domain_name groupDn Vlv server_or_domain_name startingDn maxNumberOfEntries nameToSearch Sslbind fullyQualifiedHostName:sslPort userName password FastConBind server_or_domain_name user1 pword1 user2 pword2 domainName Tls fullyQualifiedHostName_or_domainName userName password domainName cert fullyQualifiedHostName:sslPort clientCert certPassword Be sure that you install the .NET Framework 2.0 or later wherever you are going to compile and run the sample, and also be sure to reference the System.DirectoryServices.Protocols assembly. S.DS.P is a robust namespace and while I attempt to provide you with useful introductory examples, it is not a complete survey of its members. For example, I don't explore how this namespace provides members to perform Directory Services Markup Language (DSML) operations. I also do not provide guidance on best practices for directory services programming or guidance on best practices for configuration settings in ADAM or Active Directory. The .NET Developer's Guide to Directory Services Programming by Joe Kaplan and Ryan Dunn will provide the directory programming best practices not included in this introduction and I'll reference a number of good online resources for configuring Active Directory. Also, I'm hoping to write about DSML programming with S.DS.P in the future. Finally, all the examples are in C#. Even if C# isn't your language of choice, I think you will find the examples simple enough to rewrite/convert into your preferred managed code language. S.DS.P is one of three namespaces Microsoft has created for directory services programming in managed code. Unlike the other two namespaces, System.DirectoryServices and System.DirectoryServices.ActiveDirectory, S.DS.P provides raw access to underlying LDAP-based directories, such as Active Directory and ADAM. The darker boxes in Figure 1 show the essential S.DS.P components. The lighter boxes provide a relative mapping to the other components associated with directory services programming. Why bother showing the lighter boxes? To demonstrate exactly where S.DS.P resides in the hierarchy of directory services programming namespaces. S.DS.P exclusively relies on the LDAP APIs in wldap32 to access underlying LDAP directories. Figure 1. S.DS.P Architectural Block Diagram Preparing to Use S.DS.P As the architectural diagram depicts, the S.DS.P namespace is distinct from S.DS and S.DS.AD and resides in its own assembly, System.DirectoryServices.Protocols.dll. Thus, to use S.DS.P, you must reference the System.DirectoryServices.Protocols.dll assembly in your project. In contrast, System.DirectoryServices.dll contains both S.DS and S.DS.AD and referencing this single assembly gives you access to both namespaces. What I've come to appreciate with .NET programming is the patterns that emerge when you work with a namespace for a while. Calling out those patterns is really helpful when you begin coding in a namespace. In S.DS.P, you will almost always begin a code task by establishing a connection and eventually sending a request through that connection to a directory server and receiving a response. I outline how to complete these common tasks here and you will see them repeated throughout this paper and in the code download. The first step in all of the code examples in this paper and the associated code download is making an initial connection to a directory server. Making a connection does not bind to objects in the directory. Binding either occurs automatically as a result of a directory service operation that requires it, or you can perform an explicit bind operation by calling the Bind method from your code. In either case, binding to a directory sends credentials to a directory server. The following key classes are involved in making an initial connection to a directory server: For example, the following code snippet creates a connection to an available directory server using the default connection port, which is 389: LdapConnection connection = new LdapConnection("fabrikam.com"); For example, the following code snippet creates a credential object with a username of user1, a password of password1 to a domain named fabrikam: NetworkCredential credential = new NetworkCredential("user1", "password1", "fabrikam"); You then set the Credential property of the connection equal to the credential object, like so: connection.Credential = credential These credentials are not sent to a directory server until you bind to a directory. If you don't specify credentials, when a bind occurs, the current user's credentials are sent. A number of code examples in this paper and in the code download use this class. Therefore, I won't show an example of using it here. For example, the following code snippet creates an identifier object to a directory server named sea-dc-02.fabrikam.com using the Active Directory SSL port: LdapDirectoryIdentifier identifier = new LdapDirectoryIdentifier("sea-dc-02.fabricom.com:636"); You then pass the identifier to the connection object when you create the connection, like so: LdapConnection connection = new LdapConnection (identifier); I use this object in a single code example, but it can be useful if you want to establish a connection over UDP or separate the identifying information about a connection from the creation of the LdapConnection object. A fundamental part of interacting with a directory service via LDAP is creating and sending requests and receiving responses. The synchronous S.DS.P method for sending a request is SendRequest. A directory server then returns a response that you can cast into the appropriate response object. When you call the SendRequest method of an LdapConnection, the method ships an LDAP operation to a directory server and the server returns a DirectoryResponse object. The object returned aligns in structure with the type of request. For example, if you supply the SendRequest method with an AddRequest object, the directory server returns a DirectoryResponse object that is structurally equivalent to an AddResponse object. You must then cast the returned DirectoryResponse base class into an AddResponse object before you inspect the response. The pattern for this is: DirectoryRequestType request = new DirectoryRequestType(parameters…); DirectoryResponseType response = (DirectoryResponseType)connection.SendRequest(request); The following code snippet demonstrates how to implement this pattern using the AddRequest and AddResponse objects. The values of the dn and dirClassType are defined elsewhere and are not shown here to avoid obscuring the pattern: // build an addrequest object AddRequest addRequest = new AddRequest(dn, dirClassType); // cast the response into an AddResponse object to get the response AddResponse addResponse = (AddResponse)connection.SendRequest(addRequest); The following request classes map to the listed response classes appearing in Table 1: Table 1. DirectoryRequest and Corresponding DirectoryResponse Classes The .NET Framework SDK Class Library Reference describes the purpose of each request and response class. In addition, I demonstrate how to use all of these request objects except the last two DSML request objects. For more information on S.DS.P architecture, see "System.DirectoryServices.Protocols Architecture" at. Common directory services management tasks include creating, adding, moving, modifying and deleting directory objects. While S.DS provides all of these capabilities, S.DS.P allows you to use common LDAP programming constructs to perform the same tasks. S.DS is easier for these code tasks, but seeing how to complete these familiar tasks with S.DS.P is a great way to introduce key members of this namespace. Code examples in this section will build on one another to familiarize you with the common patterns. For instance, the first example will show you how to create 100 user accounts in just a few lines of code by using the AddRequest object, but it won't show you how to get a response back about the task from a directory server. The next example returns to the essence of the first create users task by demonstrating how to add any valid object to the directory, and it also shows how to get a response back about the task. A later example introduces you to the ModifyRequest object for managing an attribute, but it doesn't demonstrate how to get a response back about whether the attribute was successfully modified. Immediately following that example, I introduce the ModifyResponse object. This incremental approach, I believe, will help you better understand how to build on the examples to create more complex and useful code. Creating Users Accounts A classic initial demonstration of directory services programming techniques often involves generating many user accounts with only a few lines of code. As S.DS.P is arguably the most radical departure from traditional directory services coding in the .NET Framework, I think a multi-user creation example is a good starting point. I think you would agree that it's more useful than writing Hello World to an attribute! The following code example demonstrates how to create 100 user accounts in just a few lines of code: In an Active Directory domain, the Locator service provides the host name of a domain controller in the specified domain for the connection. You pass a directory request (in this case an AddRequest) to the SendRequest method. The SendRequest method then automatically binds to a domain controller in the targeted domain using the current user's credentials. Example 1. Creating 100 user accounts LdapConnection connection = new LdapConnection("fabrikam.com"); for (int i = 1; i <= 101; i++) { string dn = "cn=user" + i + ",ou=UserAccounts,dc=fabrikam,dc=com"; connection.SendRequest(new AddRequest(dn, "user")); } If you were to run this code, you wouldn't get any return results and the user accounts created in the fabrikam.com Active Directory domain would be disabled. Obviously, this is a pedantic example, but it effectively demonstrates that even a namespace as sophisticated as S.DS.P provides a simple and elegant model to complete significant directory management tasks. Adding an Object to a Directory As you saw in the previous create user example, when you call the SendRequest method, you pass the method an AddRequest object to create a user by the specified name. The second parameter in the AddRequest can either be an array of attributes to assign to the object or the lDAPDisplayName of the class schema object from which the object should be derived. In order to get a response from a directory server about the success or failure of the requested operation, you cast the returned DirectoryResponse base class into the proper response type based on the type of DirectoryRequest object you pass to the SendRequest method. The following code example demonstrates how to add a directory object named Seasoned derived from the organizationalUnit class schema object to the directory below the techwriters ou in the fabrikam.com domain: The corresponding code download allows you to pass these and other values in as command line arguments. Because a specific domain controller was not declared for the hostordomainName variable, the Active Directory Locator will find an available domain controller for the binding operation. This is referred to as serverless binding. An implicit bind occurs here. The AddResponse class contains an ErrorMessage response property that you can display for more information on any error that might be returned from the directory server. Example 2. Adding an OrganizationalUnit object string hostOrDomainName = "fabrikam.com"; string dn = "ou=Seasoned,ou=techwriters,dc=fabrikam,dc=com"; string dirClasstype = "organizationalUnit"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); try { // create an addrequest object AddRequest addRequest = new AddRequest(dn, dirClassType); // cast the returned DirectoryResponse as an AddResponse object AddResponse addResponse = (AddResponse)connection.SendRequest(addRequest); Console.WriteLine("A {0} with a dn of\n {1} was added successfully " + "The server response was {2}", dirClassType, dn, addResponse.ResultCode); } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } Adding an Object to a Directory Using a Different AddRequest Constructor Before delving into another code example, let's step back for a moment and consider how the definition of class schema objects plays an important role in directory object creation. This is essential to understand before you try to use the alternative AddRequest constructor to create directory objects. The attributes of a class schema object define the object. A key part of that definition is the attributes that the object must or may contain. When you instantiate a directory object from a class schema object, you or the directory service must provide values for any attributes that the directory object must contain (mandatory attributes) when it is created. In the prior code example (Example 2), I demonstrate how to add an OrganizationalUnit object to the directory by using the AddRequest constructor. In that case, the constructor takes the distinguishedName of the object to create and the type of object class from which the object is derived. If you take a close look at the organizationalUnit class schema object in an Active Directory or ADAM schema, you will see that the instanceType, objectCategory, nTSecurityDescriptor, objectClass and ou attributes must be defined for the object in order for it to be created. The organizationalUnit class inherits from the Top schema class object, which defines the first four of those attributes as mandatory and the organizationalUnit class defines the ou attribute as mandatory. You must provide values for the ou attribute and the objectClass attribute, and directory services takes care of providing the other values. Now that you know the mandatory attributes and who has to set what, you can make use of the AddRequest constructor that takes the distinguished name of the object you want to create and an array of DirectoryAttribute objects. The array of objects must include values for any mandatory attributes that directory services will not set for you or that are not defined as part of the distinguished name of the new object. Considering the previous organizationalUnit example, the following code snippet shows how you can define the one required directory attribute (objectClass) by creating a DirectoryAttribute object: DirectoryAttribute objectClass = new DirectoryAttribute("objectClass", "organizationalUnit"); You can then pass that to the AddRequest object, like so: addRequest = new AddRequest(dn, objectClass); You might notice that this doesn't add much to the prior code example (Error! Reference source not found.). It gets more interesting when you encounter a directory object that contains more mandatory attributes, such as an object derived from the User class schema object, which also requires other mandatory attributes (i.e., sAMAccountName), or you want to add additional optional attributes to an object when it's created. In the following code snippet I define two optional attributes, the city ("l") directory attribute and description directory attribute, and pass those along with the objectClass mandatory directory attribute when I call the AddRequest constructor to create an OU: DirectoryAttribute l = new DirectoryAttribute("l", "Redmond"); DirectoryAttribute description = new DirectoryAttribute("description", "Writers with 3 years of experience"); DirectoryAttribute objectClass = new DirectoryAttribute("objectClass", "organizationalUnit"); // create a DirectoryAttribute array and pass in three directory attributes DirectoryAttribute[] dirAttribs = new DirectoryAttribute[3]; dirAttribs[0] = l; dirAttribs[1] = description; dirAttribs[2] = objectClass; // create an addrequest object addRequest = new AddRequest(dn, dirAttribs); Note that there is not a corresponding code sample with the code download for this variation on creating an AddRequest object. Start with the AddObject method in the code sample and this information to create a method that uses this AddRequest constructor. Adding an Attribute to a Directory Object After creating an object in a directory, you might want to add optional attributes to it. For all attributes that an object may contain (optional attributes), you can add them using the ModifyRequest object. To add an attribute to an existing directory object, create a ModifyRequest object and in that object specify the distinguished name of the object you want to modify along with the Add value of the DirectoryAttributeOperation enumeration, the attribute name and value to add. The DirectoryAttributeOperation enumeration contains three values: Add, Delete and Replace. If an attribute already exists in an object, specifying an Add DirectoryAttributeOperation will throw a DirectoryOperationException error. Therefore, if you want to update an existing attribute, use the Replace DirectoryAttributeOperation value instead. The following code sample demonstrates how to add a department attribute and value of Human Resources to a user account object named John Doe in the TechWriters OU of the fabrikam.com domain: If the attribute has not been assigned to the user account, the send request will succeed. Otherwise, a DirectoryOperationException will be thrown. An additional try catch block appears inside the request to modify an existing attribute in case this attempt also throws a DirectoryOperationException error. This result might not be correct, as the code does not consult the server to verify that the LDAP operation was successful or not. The next section explores how to get a response back from a directory server. Example 3. Adding or replacing the department attribute of a user account); try { ModifyRequest modRequest = new ModifyRequest( dn, DirectoryAttributeOperation.Add, attributeName, attributeValue); // example of modifyrequest not using the response object... connection.SendRequest(modRequest); Console.WriteLine("{0} of {1} added successfully.", attributeName, attributeValue); } catch (DirectoryOperationException) { try { ModifyRequest modRequest = new ModifyRequest( dn, DirectoryAttributeOperation.Replace, attributeName, attributeValue); connection.SendRequest(modRequest); Console.WriteLine("The {0} attribute in:\n{1}\nreplaced " + "successfully with a value of {2}", attributeName, dn, attributeValue); } catch (DirectoryOperationException e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } Important Consider using the code example appearing next (Example 4) as a starting point for building robust code for adding or replacing attributes. That example uses the directory response object to determine if the attribute has already been set and to check if the directory operation was successful. Getting Feedback from a Directory Server from an Object Modify Request The example appearing in Example 3 does not demonstrate the pairing of the ModifyRequest and ModifyResponse classes or how you can leverage the DirectoryOperationException class to determine more about an error response. While the code catches errors, it doesn't directly display responses from a directory server as a result of modifying an object. The pattern for using a ModifyResponse object to properly cast a returned directory response from a ModifyRequest is identical to the pattern I demonstrated for casting a directory response from an AddRequest into an AddResponse. The code download with this article contains the AddAttribute2 method so that you have a complete example using the ModifyRequest and ModifyResponse classes. The following code snippet shows how you use the ModifyResponse object in an example similar to Example 3: // build a modifyrequest object ModifyRequest modRequest = new ModifyRequest(dn, DirectoryAttributeOperation.Add, attributeName, attributeValue); // cast the returned directory response into a ModifyResponse type named modResponse ModifyResponse modResponse = (ModifyResponse)connection.SendRequest(modRequest); Console.WriteLine("The {0} attribute in {1} added successfully " + "with a value of {2}. The server response was {3}", attributeName, dn, attributeValue, modResponse.ResultCode); When an add operation fails, you can determine why by examining the server's directory response more closely. The SendRequest method throws a DirectoryOperationException if the directory server returns a DirectoryResponse object containing an error. This directory response is packaged in the Response property of the exception. The ResultCode of the directory response returns a value contained in the ResultCode enumeration. This enumeration is rich with a plethora of error values. For example, an error equivalent to the AttributeOrValueExists value is returned if an attribute is assigned to an object. The following code example demonstrates how to use the ModifyResponse class to verify a directory object modification and how to use a directory response object containing an error code to handle a DirectoryOperationException. This code sample is similar to Error! Reference source not found., but provides a better starting point for building code that adds or replaces an attribute value: These two objects are declared here because they could be used within two try catch blocks. This is more efficient than the code example appearing in Error! Reference source not found. where there is a potential of creating two ModifyRequest objects, one for the attempted add operation and another for the replace operation. If the attribute has not been assigned to the user account, the send request will succeed. Otherwise, the SendRequest throws a DirectoryOperationException. The Response property of the DirectoryOperationException object named doe contains the directory response object. An additional try catch block appears inside the request to modify an existing attribute in case other errors are thrown. However, you can more elegantly handle errors using other values of the ResultCode enumeration. For example, if the object specified, cn=john doe,ou=techwriters,dc=fabrikam,dc=com in this example does not exist, the directory response will be equivalent to the NoSuchObject value of the ResultCode enumeration. Example 4. A more robust example demonstrating how to add or replace an attribute of a directory object); // declare the request and response objects here // they are used in two blocks ModifyRequest modRequest; ModifyResponse modResponse; try { // initialize the modRequest object modRequest = new ModifyRequest(dn, DirectoryAttributeOperation.Add, attributeName, attributeValue); // cast the returned directory response into a ModifyResponse type // named modResponse modResponse = (ModifyResponse)connection.SendRequest(modRequest); Console.WriteLine("The {0} attribute of {1} added successfully " + "with a value of {2}. The server response was {3}", attributeName, dn, attributeValue, modResponse.ResultCode); } // if the code enters this catch block, it might be // caused by the presence of the specified attribute. // The DirectoryAttributeOperation.Add enumeration fails // if the attribute is already present. catch (DirectoryOperationException doe) { // The resultcode from the error message states that // the attribute already exists if (doe.Response.ResultCode == ResultCode.AttributeOrValueExists) { try { modRequest = new ModifyRequest( dn, DirectoryAttributeOperation.Replace, attributeName, attributeValue); modResponse = (ModifyResponse)connection.SendRequest(modRequest); Console.WriteLine("The {0} attribute of {1} replaced " + "successfully with a value of {2}. The server " + "response was {3}", attributeName, dn, attributeValue, modResponse.ResultCode); } // this catch block will handle other errors that you could // more elegantly handle with other values in the // ResultCode enumeration. catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } } } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } To keep the remaining code examples as simple as possible, I show just a few of the most common interrogations of the ResultCode property in a response object. In production code, you will want to examine many more result codes contained in a DirectoryOperationException. Use the examples I show as a starting point for handling other directory response result codes. Carefully review the ResultCode enumeration for other common directory responses. In addition, the prior code example can be simplified with the PermissiveModifyControl directory control, which is explored in the next section. Adding Values to a Multi-Valued Attribute Many attributes can hold more than one value. The classic example of this is the member attribute of a group. In the prior examples of ModifyRequest, I show you how to add an attribute or replace a value in an existing attribute of an object. If you review the ModifyRequest constructor in the .NET Class Library or through Visual Studio, you'll notice that the fourth parameter of the constructor I use in the previous examples is actually an object array that can take a variable number of arguments. Therefore, you can pass an array of values to populate a multi-valued attribute with many entries. Another important detail about using the ModifyRequest constructor with a multi-valued attribute is that the second parameter, the DirectoryAttributeOperation enumeration, behaves differently based on whether you are working with a single or multi-valued attribute. The following table describes how this enumeration behaves for single-valued and multi-valued attributes. Table 2. How the DirectoryAttributeOperation Enumeration Operations Interact With Single and Multi-Valued Attributes This is a lot to keep in mind and leads to writing lengthy error handling code. To more gracefully modify both single- and multi-valued attributes, you can add the PermissiveModifyControl directory control to the Controls collection of the modify request. An LDAP modify request will normally fail if it attempts to add an attribute that already exists or if it attempts to delete an attribute that does not exist. With this control the modify operation succeeds without throwing a DirectoryOperationException error. The only negative consequence of using this directory control is that you won't be able to tell whether the attribute or value being modified was present or not. Later in the search examples, I demonstrate more examples of directory controls. The following code snippet demonstrates how to add this control to a ModifyRequest object named modRequest before calling the SendRequest method: // create the PermissiveModifyControl to better control modification behavior PermissiveModifyControl permissiveModify = new PermissiveModifyControl(); // add the directory control to the modifyRequest modRequest.Controls.Add(permissiveModify); // cast the returned directory response into a ModifyResponse and // store the response in the modResponse object modResponse = (ModifyResponse)connection.SendRequest(modRequest); The following code example demonstrates how to pass a string array to populate the Url multi-valued attribute of a user account object: Whether or not the attribute and value has been assigned to the user account, the send request will succeed with the help of the permissiveModify directory control object. Example 5. Adding or replacing the Url multi-valued attribute of a user account string hostOrDomainName = "fabrikam.com"; string dn = "cn=john doe,ou=techwriters,dc=fabrikam,dc=com"; string attributeName = "url"; String[] attribVals = new String[2]; attribVals[0] = ""; attribVals[1] = "msdn.microsoft.com"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); try { // initialize the modifyrequest object. Note the fourth // parameter is a string array in this instance ModifyRequest modRequest = new ModifyRequest( dn, DirectoryAttributeOperation.Add, attributeName, attribVals); // create the PermissiveModify control // to better control modification behavior. PermissiveModifyControl permissiveModify = new PermissiveModifyControl(); // add the directory control to the modifyRequest modRequest.Controls.Add(permissiveModify); // cast the returned directory response into a ModifyResponse // object named modResponse ModifyResponse modResponse = (ModifyResponse)connection.SendRequest(modRequest); Console.WriteLine("The {0} attribute of {1} added successfully\n" + "The server response was {2}. The following values were added:", attributeName, dn, modResponse.ResultCode); foreach (string attribVal in attribVals) { Console.WriteLine(attribVal); } } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } Considering the classic example, managing the member attribute of a group, the only changes you would have to make in the previous code example (Example 5) would be: For instance, the following code snippet shows how you would change the variable declarations in the previous code example to add the John Doe user account to the administrators group: string hostOrDomainName = "fabrikam.com"; string dn = "cn=administrators,cn=builtin,dc=fabrikam,dc=com"; string attributeName = "member"; String[] attribVals = new String[1]; attribVals[0] = "cn=john doe,ou=techwriters,dc=fabrikam,dc=com"; The code download with this article does not include an example of setting the member attribute as it is very similar to Example 5. However, it does include an example of using the Uri data type with the ModifyRequest constructor. You will typically use this data type when you are performing DSML operations with S.DS.P. Deleting Attributes Programming directory services delete operations using ADSI or managed code namespaces like S.DS and S.DS.AD are relatively simple coding tasks and deleting attributes within objects using S.DS.P is no different. You use the ModifyRequest class and pass it a directory object, the Delete value of the DirectoryAttributeOperation enumeration and the name of the attribute you want to delete. The following code example demonstrates how to delete the Url attribute from a user account object: If the attribute has been assigned to the user account, the send request will succeed. Otherwise, a DirectoryOperationException will be thrown. If a DirectoryOperationException occurs, it's probably because the attribute value has not been assigned to the object. Example 6. Deleting the Url attribute from a user account object string hostOrDomainName = "fabrikam.com"; string dn = "cn=john doe,ou=techwriters,dc=fabrikam,dc=com"; string attributeName = "url"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); try { ModifyRequest modRequest = new ModifyRequest( dn, DirectoryAttributeOperation.Delete, attributeName); ModifyResponse modResponse = (ModifyResponse)connection.SendRequest(modRequest); Console.WriteLine("{1} delete operation sent successfully. " + "The directory server reports {1}", attributeName, modResponse.ResultCode); } catch (DirectoryOperationException doe) { if (doe.Response.ResultCode == ResultCode.NoSuchAttribute) Console.WriteLine("{0} is not assigned to this object.", attributeName); } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } Final Observations about ModifyRequest As I mentioned earlier, the fourth parameter of ModifyRequest takes an object[] to contain one or more values. An object array can accommodate the wide variety of data types that directory attributes store for directory objects. It's your responsibility to specify the proper data type for an attribute value when you declare it. For an excellent examination of Active Directory syntax and corresponding .NET data types, see The .NET Developer's Guide to Directory Services Programming by Joe Kaplan and Ryan Dunn. The ModifyRequest class also includes a constructor that allows you to package a series of attribute operations for an object using the DirectoryAttributeModification class. Using this approach, you could add, replace and delete a variety of attributes of an object in a single send request. The following example demonstrates how you could package an attribute replace operation of a user's givenName and an attribute add operation of a user's Url multi-valued attribute. I've left out all error checking so that you can easily see how to use this ModifyRequest constructor. For each DirectoryAttributeModification object you must specify the type of operation: Add, Replace or Delete for the Operation property; the name of the attribute for the Name property; and the Add or AddRange method for a single or an array of values respectively. Example 7. Using the DirectoryAttributeModification class to send multiple attribute changes to a directory string hostOrDomainName = "fabrikam.com"; string dn = "cn=john doe,ou=techwriters,dc=fabrikam,dc=com"; string attributeUrl = "url"; string[] attribVals = new String[2]; attribVals[0] = ""; attribVals[1] = "msdn.microsoft.com"; string attributeGn = "givenName"; string attribVal = "John"; // create a DirectoryAttributeModification object for // adding the url values to the url attribute DirectoryAttributeModification mod1 = new DirectoryAttributeModification(); mod1.Operation = DirectoryAttributeOperation.Add; mod1.Name = attributeUrl; mod1.AddRange(attribVals); // create a DirectoryAttributeModificaton object for // replacing the first name stored in the givenName attribute DirectoryAttributeModification mod2 = new DirectoryAttributeModification(); mod2.Operation = DirectoryAttributeOperation.Replace; mod2.Name = attributeGn; mod2.Add(attribVal); // create a DirectoryAttributeModification array to hold the // DirectoryAttributeModification objects DirectoryAttributeModification[] mods = new DirectoryAttributeModification[2]; // add each DirectoryAttributeModification object to the array mods[0] = mod1; mods[1] = mod2; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); // pass the DirectoryAttributeModification array as the second parameter // of the ModifyRequest object ModifyRequest modRequest = new ModifyRequest(dn, mods); // cast the directory response as a ModifyResponse object named modResponse ModifyResponse modResponse = (ModifyResponse)connection.SendRequest(modRequest); Console.WriteLine("The result was: {0}", modResponse.ResultCode); Note that this code example is exceptionally brittle as there is no error checking included. In addition, because you are packaging multiple directory attribute operations, the conditions in the directory must be just right or the entire send request will fail. For example, if the Url attribute contains one of the two values that the add operation is attempting to insert, the replace operation on the givenName and the add operation on the Url will fail. You can also make the code less likely to fail by adding the PermissiveModifyControl to the DirectoryControlCollection of the modRequest object. See Example 5 for an example of using this control. An effective code example using the DirectoryAttributeModification class would require a significant number of command line parameters so I have not included an example of this in the code download. Use the previous code example (Example 7) as a starting point for packaging multiple attribute change operations against a directory object. Examining Attribute Values Some of the previous examples of managing attributes make assumptions about the state of the attribute, for example, whether a certain value is present or not. One way to avoid making this assumption is by inspecting an attribute to determine its value. The CompareRequest and CompareResponse classes provide this facility by allowing you to compare an attribute's value to a value you supply. Using the directory response returned from a CompareRequest operation, you can then make decisions on how you might want to manipulate the attribute. The following code example demonstrates how to use the CompareRequest and CompareResponse classes to compare the userAccountControl attribute to a given value before attempting to modify the attribute value. The second bit of the userAccountControl attribute should be off to enable the user account. A value of 546 is the default value for the userAccountControl attribute immediately after a user account is created using the CreateUsers method in the code download. The CreateUsers method is similar to the code appearing in Example 1. A value of 546 means that the account is disabled because the second bit is on. Following this code example, I'll explore a more accurate way to represent the underlying data type of the userAccountControl attribute. In this example, it's represented as a string, but it's stored in the directory as an integer. For now, representing it as a string simplifies the code example. If the test is true (CompareTrue) then the userAccountControl attribute in the tested user account is equal to 546. Example 8. Enabling a disabled user account string hostOrDomainName = "fabrikam.com"; string dn = "cn=john doe,ou=techwriters,dc=fabrikam,dc=com"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); // create a DirectoryAttribute object for the // userAccontControl attribute DirectoryAttribute userAccountControl = new DirectoryAttribute("userAccountControl", "546"); try { // create a CompareRequest object and pass it the distinguished name // of a directory object and the attribute to compare CompareRequest compRequest = new CompareRequest(dn, userAccountControl); // cast the returned directory response into a CompareResponse object CompareResponse compResponse = (CompareResponse)connection.SendRequest(compRequest); if (compResponse.ResultCode.Equals(ResultCode.CompareTrue)) { Console.WriteLine("The account is currently disabled." + " The result code is: {0}", compResponse.ResultCode); ModifyRequest modRequest = new ModifyRequest( dn, DirectoryAttributeOperation.Replace, "userAccountControl", "544"); ModifyResponse modResponse = (ModifyResponse)connection.SendRequest(modRequest); Console.WriteLine("Modification of userAccountControl for" + " user{0} returned {1}\n" + "The account is now enabled.", dn, modResponse.ResultCode); } else if (compResponse.ResultCode.Equals(ResultCode.CompareFalse)) { Console.WriteLine("The account is already enabled." + " The result code is: {0}", compResponse.ResultCode); } else { Console.WriteLine("The directory server reported {0}", compResponse.ResultCode); } } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } As I mentioned in the code walkthrough for the CompareRequest/CompareResponse example, representing userAccountControl as a string isn't an accurate depiction of the attribute syntax in the directory. The userAccountControl attribute in the directory is stored as an integer. The integer is evaluated by Active Directory as a bit mask for configuring various settings of a user account, including whether the account is enabled or disabled. Therefore, while the previous example demonstrates how simple it is to compare and set the value as a string, it's more accurate to represent the code value as an integer. The following code snippet shows how to use an integer variable to represent the userAccountControl attribute and then perform a bitwise AND to set the second bit of the variable to 0: // create a 32 bit integer containing the default value of the userAccountControl attribute of a // user account when it is initially created with the LDAPManagement.CreateUsers method uint uAC = 546; // store the starting value of userAccountControl as a string array because the attribute // compare operation expects to compare a string value with the value in the underlying directory string[] uACStartingVal = new string[] { uAC.ToString() }; // clear bit 2 of uAC to represent an enabled user account uAC &= 0xFFFFFFFD; // make a string array to store the value, which is what SendRequest // transmits to the directory server string[] uACEndVal = new string[] { uAC.ToString() }; The code download shows a complete example using this code snippet for manipulating the userAccountControl attribute. If you've worked with ADSI methods (i.e., Get and GetEx in the IADs core interface) and DS properties (i.e., Contains in the PropertyCollection class), you know that these approaches allow you to quickly determine whether a value or values are present in an attribute. In both cases, ADSI runs a search operation to return attribute values. Later in this paper, I'll demonstrate how you can perform a similar operation using the S.DS.P SearchRequest, SearchResponse and SearchResultEntry classes. Deleting a directory object is even simpler than deleting an attribute of an object. S.DS.P includes the DeleteRequest and DeleteResponse classes to perform and report on the result of a delete operation. The following example demonstrates how to delete a user account object from a directory: Example 9. Deleting a user account object from a directory string hostOrDomainName = "fabrikam.com"; string dn = "cn=john doe,ou=techwriters,dc=fabrikam,dc=com"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); try { // create a deleterequest object DeleteRequest delRequest = new DeleteRequest(dn); // cast the returned directory response into a DeleteResponse object DeleteResonse delResponse = (DeleteResponse)connection.SendRequest(delRequest); // display the result of the delete operation Console.WriteLine("The request to delete {0} was sent" + " successfully.\nThe server response was {1}", dn, delResponse.ResultCode); } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } Moving and/or Renaming a Directory Object Performing move or rename operations using S.DS.P is almost as simple as an object delete operation. In this case you use the ModifyDNRequest and ModifyDNResponse classes for performing and reporting on the requested operation. The ModifyDNRequest object takes the distinguished name of the object, the distinguished name of the container where you want to move the object and a new object name. If you aren't moving the object, the distinguished name of the parent container is the current name. For example, if a user account named User10 resides in the TechWriters OU and you want to rename the user account to User11 but keep the user account in the TechWriters OU, then the second parameter in the ModifyDNRequest is the distinguished name of the TechWriters OU while the third parameter is the new relative distinguished name (RDN) you want to assign the user. Similarly, if your goal is to move the user account without renaming it, the second parameter specifies the new OU and the third parameter contains the current relative distinguished name of the user. The following code example demonstrates how to move and rename a user account: Example 10. Moving and renaming a user account string hostOrDomainName = "fabrikam.com"; string dn = "cn=user10,ou=techwriters,dc=fabrikam,dc=com"; string newParentDn = "ou=techeditors,dc=fabrikam,dc=com"; string newObjectName = "cn=user11"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); try { // create a ModifyDNRequest object ModifyDNRequest modDnRequest = new ModifyDNRequest(dn, newParentDn, newObjectName); // cast the returned directory response into a ModifyDNResponse object ModifyDNResponse modDnResponse = (ModifyDNResponse)connection.SendRequest(modDnRequest); Console.WriteLine("The {0} was moved successfully to\n" + "{1} with the object name of {2}.\n" + "The server response was {3}", dn, newParentDn, newObjectName, modDnResponse.ResultCode); } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } While the patterns you saw in the Management Tasks section of this paper repeat themselves in this section, search operations are so fundamental to directory services programming that they warrant the beginning of a new section. To further emphasize their importance, the directory services programming team has provided search capabilities using S.DS.P that were previously unavailable via COM automation interfaces. Note that all of these operations can be performed using S.DS, but S.DS.P gives you more control over complex operations such as asynchronous search queries. If you're already familiar with writing search operations using the DirectorySearcher class in S.DS, using ADSI OLEDB or by using the IDirectorySearch interface from C/C++ you will quickly see a familiar approach to running a search. In essence, you bind to a directory location, specify a base distinguished name for a search, create an LDAP search filter (note that a search filter isn't necessary when using the IDirectorySearch interface), scope the query and run it. When the results are returned, you enumerate the result set and perform some action against one or more items in the enumeration, such as displaying results or leveraging other classes to modify items in the result set. There are three search classes involved in a simple search operation: SearchRequest, SearchResponse and SearchResultEntry. To run a search, you pass the SendRequest method of the connection a SearchRequest object. Next, you cast the returned result as a SearchResponse object. The Entries property of the SearchResponse object contains a SearchResultEntryCollection object. You enumerate this object using the SearchResultEntry class. The following example demonstrates how to search the Builtin container to return the distinguished names of all objects in the container: While it might seem unnecessary to initialize a search filter that doesn't actually filter anything (objectClass=*), it is necessary. Passing the SearchRequest class a null value causes the compile to fail and while you can declare an empty string for a search filter, the directory server will throw an LdapException stating that the search filter is invalid. If you aren't familiar with the LDAP dialect for creating a search filter, refer to the "LDAP Dialect" topic and the "Search Filter Syntax" topic in the Directory Services SDK (part of the platform SDK). SQL search Syntax is not supported in S.DS.P. The Entries property contains a SearchResultEntryCollection object. Example 11. Performing a simple search to return the distinguished name of each object in the Builtin container string hostOrDomainName = "fabrikam.com"; string targetOu = "cn=builtin,dc=fabrikam,dc=com"; // create a search filter to find all objects string ldapSearchFilter = "(objectClass=*)"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); Console.WriteLine("\r\nPerforming a simple search ..."); try { SearchRequest searchRequest = new SearchRequest (targetOu, ldapSearchFilter, SearchScope.OneLevel, null); // cast the returned directory response as a SearchResponse object SearchResponse searchResponse = (SearchResponse)connection.SendRequest(searchRequest); Console.WriteLine("\r\nSearch Response Entries:{0}", searchResponse.Entries.Count); // enumerate the entries in the search response foreach (SearchResultEntry entry in searchResponse.Entries) { Console.WriteLine("{0}:{1}", searchResponse.Entries.IndexOf(entry), entry.DistinguishedName); } } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } Each SearchResultEntry contains a number of properties. In the previous code example, I only show how to display the DistinguishedName property of the object. However, using the Attributes property of the SearchResultEntry, you can return one or more values of other attributes in the directory object. By passing null as the fourth parameter in the construction of a SearchRequest object, as shown in Example 11, each SearchResultEntry contains all non-constructed attributes of the returned objects. Constructed attributes are not stored in the directory, but are calculated by directory servers when requested. For performance and code clarity, you will usually want to limit the number of attributes returned. To do this, create a string array containing the lDAPDisplayNames of the attributes you want returned for each SearchResultEntry and then pass that into the SearchRequest constructor, as shown in this code snippet: // specify attributes to return string[] attributesToReturn = new string[] { "givenName", "sn", "description", "cn", "objectClass" }; // create a SearchRequest object and specify baseDn, ldap search filter, attributes to return and // search scope. Note, the targetOu and ldapSearchFilter variables are strings whose values do // not appear in this code example. SearchRequest searchRequest = new SearchRequest(targetOu, ldapSearchFilter, SearchScope.Subtree, attributesToReturn); Once you have defined which attributes you want to return, you then have to work with the return types contained in the Values property of the SearchResultAttributeCollection. The two recommended ways to do this is either use the DirectoryAttribute indexer or the GetValues method of a DirectoryAttribute to return a specific type. Even though attributes are stored in a variety of types referred to as attribute syntax, S.DS.P always attempts to convert each value it retrieves into a string; otherwise it returns a byte array. Therefore, if you know the return type of a specific attribute value you want to return, use GetValues. If not, use the indexer and test the return type prior to displaying the values. If you're not familiar with attribute syntax, you can get a handle on it by reading The .NET Developer's Guide to Directory Services Programming by Joe Kaplan and Ryan Dunn. In addition, review the ActiveDirectorySyntax enumeration in the .NET Framework Class Library. The code download with this paper includes an example of how to selectively return and display attribute names and values. The key differences between the code download and the prior code example (Example 11) are: The following code example is part of the AttributeSearch method in the code download. It emphasizes how you return attribute values using the DirectoryAttribute indexer once you have obtained a set of objects using the Entries property of the SearchResultEntryCollection: The attribute.Count property contains a count of all values with an attribute. A single-valued attibute will contain exactly one entry while a multi-valued attribute can contain one or more entries. Using this count, the code then uses the DirectoryAttribute indexer to return the value. Notice that if the count is equivalent to 1, then the attribute is written to the screen immediately following the attribute name. Otherwise, tabs are added to move the multiple values off the console's left margin. If this is the first time through the loop, display a message indicating that the returned values are byte arrays; otherwise just display the byte array data. This helper method appears in the code download within the LDAPCommon class. This is not part of S.DS.P but is useful when you need to convert a byte array, such as a SID value, to a hex string. Example 12. How to display values of attributes returned in a SearchResultEntryCollection foreach (SearchResultEntry entry in searchResponse.Entries) { Console.WriteLine("\n{0}:{1}", searchResponse.Entries.IndexOf(entry), entry.DistinguishedName); SearchResultAttributeCollection attributes = entry.Attributes; foreach (DirectoryAttribute attribute in attributes.Values) { Console.Write("{0} ", attribute.Name); if (attribute.Count != 1) { Console.WriteLine(); } // used to track where we are in the loop when displyaing // byte arrays stored in multi-valued attributes int innerCount = 0; // count the number of values associated with this attribute for (int i = 0; i < attribute.Count; i++) { if (attribute[i] is string) { if (attribute.Count == 1) { Console.WriteLine("{0}", attribute[i]); } else { Console.WriteLine("\t\t{0}", attribute[i]); } } else if (attribute[i] is byte[]) { if (innerCount == 0) { Console.WriteLine("is a byte array. " + "Converting value to a hex string."); } if (attribute.Count == 1) { Console.WriteLine( LDAPCommon.ToHexString((byte[])attribute[i])); } else { Console.WriteLine( LDAPCommon.ToHexString((byte[])attribute[i])); innerCount++; } } else { Console.WriteLine("Unexpected type for attribute value:{0}", attribute[i].GetType().Name); } } } } You might ask, "Why bother using the GetValues method to return values when I can simply use the DirectoryAttribute indexer shown in the previous example?" As I mentioned earlier in this section, you can use the GetValues method when you know the type of data you want to retrieve, either string or byte array. There are two distinct advantages GetValues gives you: To reinforce the second advantage of the GetValues method, consider the tokenGroups attribute of a user account. If you were to use the DirectoryAttribute indexer to return the tokenGroups value of a user account, you could see output similar to the following: This output shows that S.DS.P always attempts to convert values to strings when a conversion is possible. When it isn't possible, byte values are returned. Specifically, S.DS.P successfully returned the first three objectSID values in the tokenGroups attribute as strings while the remaining two were returned as byte arrays. Note that if you want to test these results using the AttributeSearch method in the code download, you must set the SearchScope to Base to successfully return the tokenGroups attribute. The following code example shows how you use the GetValues method to return the tokenGroups attribute of the user account: To successfully return the tokenGroups attribute from the search, you must set the SearchScope to Base. To instruct the GetValues to return a byte array, you pass the method the type you want returned. You achieve this by calling the GetType method of the Type class and pass the method the name of the type you want returned. In this case, you want to return a byte array, so the code specifies the System.Byte[] type. Example 13. Performing a search to return the tokenGroups attribute using the GetValues method string hostOrDomainName = "fabrikam.com"; string targetUser = "cn=Jane,ou=TechWriters,dc=fabrikam,dc=com"; // create a search filter to limit the results to user account objects string ldapSearchFilter = "(&(objectCategory=person)(objectClass=user))"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); Console.WriteLine("\r\nPerforming an attribute search of tokenGroups..."); // create a SearchRequest object and specify the dn of the target user account, // ldap search filter, a Base search scope, and the tokenGroups attribute SearchRequest searchRequest = new SearchRequest (targetUser, ldapSearchFilter, SearchScope.Base, "tokenGroups"); // cast the returned directory response as a SearchResponse object SearchResponse searchResponse = (SearchResponse)connection.SendRequest(searchRequest); Console.Write("\ntokenGroups in: "); foreach (SearchResultEntry entry in searchResponse.Entries) { Console.WriteLine("{0}", entry.DistinguishedName); SearchResultAttributeCollection attributes = entry.Attributes; foreach (DirectoryAttribute attribute in entry.Attributes.Values) { object[] values = attribute.GetValues(Type.GetType("System.Byte[]")); for (int i = 0; i < values.Length; i++) { Console.WriteLine( LDAPCommon.ToHexString((byte[])values[i])); } } } In contrast to the earlier example showing how the tokenGroups attribute is displayed using the DirectoryAttribute indexer, the following output shows how GetValues returns the results in a readable format: Note that to make this example more useful, you should search for the corresponding friendly name of the returned groups or use the DsCrackNames API that Ryan Dunn describes in his blog at. Ryan and Joe also cover approaches for converting values in the tokenGroups attribute into user friendly names in The .NET Developer's Guide to Directory Services Programming. Search operations that return a large result set in a single response can consume a lot of memory on the directory server responding to the request and will fail if the size of the requested result set is larger than the size limit configured for the responding directory server. If a search request is processed successfully by the server, sending the results to the client making the request can cause a spike in network utilization. Another negative consequence is poor response time on the client awaiting the large result set. In order to throttle both directory server memory utilization and network bandwidth and ensure that you don't exceed server-side result set size limits, you can code a search operation to return results in chunks or pages. The key classes for running a paged search operation are: To enable paging using S.DS.P, you must assign a PageResultRequestControl object to a SearchRequest object. After creating a SearchRequest object, you create a PageResultRequestControl object and add it to the directory control collection of a SearchRequest object, as this code snippet demonstrates: PageResultRequestControl pageRequest = new PageResultRequestControl(pageSize); searchRequest.Controls.Add(pageRequest); You are then ready to send the search to the server using the connection object's SendRequest method. After casting the returned directory response into a SearchResponse, you should verify that the directory server can support paging. This is important because not all directory servers support paging. If paging is supported, the directory control object will be contained in the directory control array of the SearchResponse and will contain exactly one control. The following conditional check will verify that the PageResultResponseControl object is present. If not, the code will exit: if (searchResponse.Controls.Length != 1 || !(searchResponse.Controls[0] is PageResultResponseControl)) { Console.WriteLine("The server cannot page the result set"); return; } If a server can return results in pages, the next step is to cast the returned directory control into a PageResultResponseControl directory control type. This is similar to the pattern you follow to cast a directory response into a specific type of response object. PageResultResponseControl pageResponse = (PageResultResponseControl)searchResponse.Controls[0]; The PageResultResponseControl object contains an opaque cookie used by the DirectoryServer to determine which page of data needs to be returned to the client. After returning the first page of data, you set the PageResultRequestControl Cookie property equal to the value of the PageResultResponse.Cookie, as shown: pageRequest.Cookie = pageResponse.Cookie; You then initiate another send request operation with the search request object containing an updated pageRequest. SearchResponse searchResponse = (SearchResponse)connection.SendRequest(searchRequest); This instructs the server to request the next page of results in the search request operation. If you're familiar with using cursors to move from record to record in a database, this is similar to how the directory server uses the response cookie to retrieve the next page in the result set. As I walk through the code, I'll explicitly call out where the search request is being updated. The following example demonstrates how to run a paged search operation to return up to 5 distinguishedName entries per page of the objects inside of an OU: Later in this code example, pageSize is passed to the PageResultRequestControl object to instruct the directory server to return up to 5 results per page. Unless the entire result set is divisible by 5, the final page will contain fewer than 5 entries. Later in this code example, pageCount tracks which page was returned. Just as you would for the prior search operations explained in this section of the paper, you construct a SearchRequest object by passing the starting distinguished name (base DN) for the search, an LDAP search filter and a search scope. While not necessary for this example, you can also specify one or more attributes to return. This control allows you to enable or disable two aspects of a search request: referral chasing and whether to allow subordinate referrals in a search. For more information on either of these topics, see the Referrals topic and the ADS_CHASE_REFERRALS_ENUM in the Directory Services Platform SDK. Instead of using the SearchOptionsControl, you can use the ReferralChasing property of the LdapSessionOptions class, as the following code snippet shows: LdapSessionOptions options = connection.SessionOptions; options.ReferralChasing = ReferralChasingOptions.None; If you choose this alternative approach, you can remove the code that creates and adds the SearchOptionsControl object to the Controls collection of the searchRequest object. The while loop will be true as long as there are remaining pages for the server to return. As you'll see later in the code, this will be true until the length of the pageResponse cookie is 0. The pageCount variable is initially set to 0. Each time the code returns a page, pageCount displays the page number. This test determines whether the directory server responding to the request can actually return results in pages. If either condition is true, then the directory server does not support paging. As you'll see in a later example, conditional tests similar to this one are important for other advanced search operations. In addition, you might have noticed that two directory controls were added to the searchRequest: PageResultRequestControl and the SearchOptionsControl. However, the searchResponse only contains one response control. This is because there is no reason to return a directory response control for the SearchOptionsControl directory request control. Use the IndexOf method of the SearchResultEntry to display the index of the returned result. I add 1 to the displayed value since this is a zero-based index. The cookie in the pageResponse is an internal data structure the server uses to determine the next page to return as a result of a search request. When the code begins the next loop, the searchRequest object passed to the SendRequest method now contains the value of the cookie passed from the pageResponse object to the pageRequest object. Example 14. Performing a paged search of the TechWriters OU to return up to 5 entries within each page string hostOrDomainName = "fabrikam.com"; string startingDn = "ou=techwriters,dc=fabrikam,dc=com"; // for returning up to 5 entries in each page int pageSize = 5; // for tracking the pages returned by the search request int pageCount = 0; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); try { Console.WriteLine("\nPerforming a paged search ..."); // this search filter does not limit the returned results string ldapSearchFilter = "(objectClass=*)"; // create a SearchRequest object SearchRequest searchRequest = new SearchRequest (startingDn, ldapSearchFilter, SearchScope.Subtree, null); // create the PageResultRequestControl object // pass it the size of each page. PageResultRequestControl pageRequest = new PageResultRequestControl(pageSize); // add the PageResultRequestControl object to the // SearchRequest object's directory control collection // to enable a paged search request searchRequest.Controls.Add(pageRequest); // turn off referral chasing so that data from other partitions is // not returned. This is necessary when scoping a search // to a single naming context, such as a domain or the // configuration container SearchOptionsControl searchOptions = new SearchOptionsControl(SearchOption.DomainScope); // add the SearchOptionsControl object to the // SearchRequest object's directory control collection // to disable referral chasing searchRequest.Controls.Add(searchOptions) // loop through the pages until there are no more // to retrieve while (true) { // increment the pageCount by 1 pageCount++; // cast the directory response into a // SearchResponse object SearchResponse searchResponse = (SearchResponse)connection.SendRequest(searchRequest); // verify support for this advanced search operation if (searchResponse.Controls.Length != 1 || !(searchResponse.Controls[0] is PageResultResponseControl)) { Console.WriteLine("The server cannot page the result set"); return; } // cast the diretory control into // a PageResultResponseControl object. PageResultResponseControl pageResponse = (PageResultResponseControl)searchResponse.Controls[0]; // display the retrieved page number and the number of // directory entries in the retrieved page Console.WriteLine("\nPage:{0} contains {1} response entries", pageCount, searchResponse.Entries.Count); // display the entries within this page foreach (SearchResultEntry entry in searchResponse.Entries) { Console.WriteLine("{0}:{1}", searchResponse.Entries.IndexOf(entry) + 1, entry.DistinguishedName); } // if this is true, there // are no more pages to request if (pageResponse.Cookie.Length == 0) break; // set the cookie of the pageRequest equal to the cookie // of the pageResponse to request the next page of data // in the send request pageRequest.Cookie = pageResponse.Cookie; } Console.WriteLine("\nPaged search completed."); } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } Returning search results in pages is an effective way to improve the performance of search operations containing large result sets. The server can more effectively handle other tasks while addressing the relatively small search page requests. Client-side, results display much more quickly in pages than when awaiting a large result set from a single send request operation. However, while a synchronous method runs on the client, the code blocks until the method completes. By running search requests asynchronously, long running segments of code within a method do not block other methods from running. To run an asynchronous search operation, you create a SearchRequest object just as you've seen in previous examples. However, instead of calling the SendRequest method of the connection object, you call the BeginSendRequest method to perform an asynchronous request to a directory server responding to the request. Here's an example of how you code a BeginSendRequest: IAsyncResult asyncResult = connection.BeginSendRequest( searchRequest, PartialResultProcessing.ReturnPartialResultsAndNotifyCallback, RunAsyncSearch, null); The BeginSendRequest method is overloaded. Both constructors require that you provide a DirectoryRequest object. In the case of an asynchronous search, you pass BeginSendRequest method a SearchRequest object (called searchRequest in the code snippet). You also specify a value for the PartialResultProcessing enumeration when calling BeginSendRequest. This enumeration describes how results should be returned, either with no support for returning partial results (NoPartialResultSupport value) or by returning partial results (the ReturnPartialResults and ReturnPartialResultsAndNotifyCallback values). As the .NET Framework Class Library recommends, you should use NoPartialResultSupport for performance and scalability of most asynchronous search operations. The documentation mentions that partial result support is particularly useful when a search operation takes a long time to complete. For example, by performing search operations that use the DirectoryNotificationControl to return changes in the directory while a search operation is running. I demonstrate in the code download and here how to return partial results using an asynchronous callback mechanism. Note that using ReturnPartialResultsAndNotifyCallback can cause high CPU utilization. This issue is explained in the following KB: 918995 at. There is a fix referenced in the KB if you experience this issue. To perform an asynchronous send request, you must specify a callback delegate to handle the search request (called RunAsyncSearch in the previous code snippet). This delegate runs the search operation. In the code, you pass the IAsyncResult interface to the callback delegate. Within the delegate, you use this interface to determine whether or not the asynchronous operation has completed or not. The last parameter of both BeginSendRequest methods is an object that contains state information for the operation. You can use this parameter to distinguish this asynchronous request from other requests that might be running. Finally, one overload of BeginSendRequest contains a request time out parameter. By default, the request times out in 2 minutes unless you change the request time out. If you want the connection to run for longer, you can either specify the request timeout in your call to BeginSendRequest or in the TimeOut property of the connection, as I show here: connection.Timeout = new TimeSpan(0, 3, 30); This value specifies that the connection shouldn't time out for 3½ minutes. This setting is important for long running asynchronous search operations. The following code example demonstrates how to set up the search request and call the BeginSendRequest operation. It uses a DirectoryNotificationControl to track changes occurring in a directory. Following this code example, I explore the asynchronous callback delegate: Notice that the startingDn is to an OU in the fabrikam domain. You could start at the root of the domain if you're interested in seeing all directory changes occurring at the root of the domain and below. Because the search scope is Subtree, the search operation returns changes starting at the specified startingDn and below. This control instructs the directory server to watch for changes in the directory. When a change is detected, the server returns a search result. While this particular control isn't necessary to run most asynchronous searches, it's really useful here so that you can see how to track changes to a directory asynchronously. Typical search requests return results as soon as possible. In contrast, the search runs as long as the connection timeout hasn't been reached. As you'll see later, to test this example you make a change, such as disabling a user account, and the search will return the distinguished name of the disabled directory object. The Timeout property is a TimeSpan type. Example 15. Setting up an asynchronous search operation using the BeginSendRequest method string hostOrDomainName = "fabrikam.com"; string startingDn = "ou=techwriters,dc=fabrikam,dc=com"; string ldapSearchFilter = "(objectClass=*)"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); SearchRequest searchRequest = new SearchRequest( startingDn, ldapSearchFilter, SearchScope.Subtree, null); // this directory control allows the server to watch for changes to // objects in the directory searchRequest.Controls.Add(new DirectoryNotificationControl()); // increase the connection timeout to 3½ minutes connection.Timeout = new TimeSpan(0, 3, 30); IAsyncResult asyncResult = connection.BeginSendRequest( searchRequest, PartialResultProcessing. ReturnPartialResultsAndNotifyCallback, RunAsyncSearch, null); The asynchronous search request is then handed off to the delegate named RunAsyncSearch and the process is free to perform other operations until the server has data to return. In the following example, I show the entire delegate so that you can see how the asynchronous callback delegate (RunAsynchSearch) is constructed and passed into the BeginSendRequest method. The RunAsynchSearch delegate receives the asyncResult interface from the BeginSendRequest method. Each time the server has a search result for the method, RunAsynchSearch runs and displays results to the console. The following code example demonstrates how to create the RunAsynchSearch delegate to display partial or all results from the notifications received from the directory server: The asyncResult interface contains the state data returned by the directory server. Notice that you must cast each partialResult to a SearchResultEntry in order to get to the common properties and methods of a directory entry object. Notice that in this case you cast the directory response as a SearchResponse object as you've seen in previous examples. The entries within a search response object are SearchResultEntry objects. Example 16. The RunAsyncSearch delegate for the asynchronous search operation // execute the search when the server has data to return) static void RunAsyncSearch(IAsyncResult asyncResult) { Console.WriteLine("Asynchronous search operation called."); if (!asyncResult.IsCompleted) { Console.WriteLine("Getting a partial result"); PartialResultsCollection result = null; try { result = connection.GetPartialResults(asyncResult); } catch (Exception e) { Console.WriteLine(e.Message); } if (result != null) { for (int i = 0; i < result.Count; i++) { if (result[i] is SearchResultEntry) { Console.WriteLine("A changed just occured to: {0}", ((SearchResultEntry)result[i]).DistinguishedName); } } } else Console.WriteLine("Search result is null"); } else { Console.WriteLine("The search operation has been completed."); try { // end the send request search operation SearchResponse response = (SearchResponse)connection.EndSendRequest(asyncResult); foreach (SearchResultEntry entry in response.Entries) { Console.WriteLine("{0}:{1}", response.Entries.IndexOf(entry), entry.DistinguishedName); } } // in case of some directory operation exception // return whatever data has been processed catch (DirectoryOperationException e) { Console.WriteLine(e.Message); SearchResponse response = (SearchResponse)e.Response; foreach (SearchResultEntry entry in response.Entries) { Console.WriteLine("{0}:{1}", response.Entries.IndexOf(entry), entry.DistinguishedName); } } catch (LdapException e) { Console.WriteLine(e.Message); } } } Common search operations return directory objects that are derived from class schema objects. In contrast, an attribute scoped query (ASQ) allows you to search for values within an attribute. The most common use of this feature is to search for members contained in the member attribute of a group object. To perform an ASQ, you must set your search scope to Base and you must pass a valid attribute name to an AsqRequestControl object. If you don't follow these two rules, an ASQ search operation will return a DirectoryOperationException error informing you that the server doesn't support the control even when this might not be the case. The following code example demonstrates how to use an ASQ to return the values stored in the member attribute of the Domain Users group: Notice that the search filter doesn't limit the returned results. You could refine the filter further by returning just group objects with the following search filter: string ldapSearchFilter = "(objectClass=group)"; To limit the results to group, user and foreignSecurityPrincipal objects, use the following filter: string ldapSearchFilter = "(|(|(objectClass=group)" + "(objectClass=foreignSecurityPrincipal)" + "(objectClass=user)))"; Notice that the targetGroupObject variable specifies the distinguished name of a group. This is equivalent to the startingDN that I show in earlier code examples. In this case, the query is scoped to a leaf object rather than a container object, such as an OU or the root of a domain. Notice the Base value of the SearchScope enumeration. You must scope an ASQ to Base because you are searching an attribute within a directory object. This is the same pattern I demonstrated for the earlier paged search example (Example 14). Example 17. Performing an attribute scoped query to list the members of the Users group string hostOrDomainName = "fabrikam.com"; string startingDn = "cn=users,dc=fabrikam,dc=com"; // create an open search filter string ldapSearchFilter = "(objectClass=*)"; // specify a target directory object. This is equivalent to the starting // distinguished name of a typical search operation string targetGroupObject = "cn=users,cn=builtin,dc=fabrikam,dc=com"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); // perform a search operation to return // the members attribute of the specified group object try { Console.WriteLine("\nPerforming an attribute scoped query"); // create a SearchRequest object SearchRequest searchRequest = new SearchRequest( targetGroupObject, ldapSearchFilter, SearchScope.Base, null); // create the AsqRequestControl object // and specify the attribute to query AsqRequestControl asqRequest = new AsqRequestControl("member"); // add the AsqRequestControl object to // searchReuest directory control collection. searchRequest.Controls.Add(asqRequest); // cast the returned directory response // as a SearchResponse object SearchResponse searchResponse = (SearchResponse)connection.SendRequest(searchRequest); // verify that the server supports an attribte scoped query if (searchResponse.Controls.Length != 1 || !(searchResponse.Controls[0] is AsqResponseControl)) { Console.WriteLine("The server cannot return ASQ results"); return; } // cast the diretory control into // a AsqResponseControl object. AsqResponseControl asqResponse = (AsqResponseControl)searchResponse.Controls[0]; Console.WriteLine("\nSearch Response Entries:{0}", searchResponse.Entries.Count); // list the entries in this page foreach (SearchResultEntry entry in searchResponse.Entries) { Console.WriteLine("{0}:{1}", searchResponse.Entries.IndexOf(entry) + 1, entry.DistinguishedName); } } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } Thus far, you have seen a number of interesting ways to optimize search operations, by returning results in pages or running a search asynchronously. While these search operations allow you to create high-performance searches, neither approach provides an easy way to return a subset of values from a search operation. Imagine for a moment if every time you ran a search on the Internet, the search engine attempted to return all results to your browser. On a search phrase with a lot of matches, you would likely grind a search engine to a halt and your browser would become non-response. This is an extreme case, but it provides an excellent backdrop for introducing the virtual list view (VLV) search. VLV allows you to control how many results from a search should be returned to the client. An address book application is a common directory services use for this capability. A user might be interested in returning just the first 10 matches on a last name search or they might want to move through name matches in small chunks. The VlvRequestControl and VlvResponseControl objects provide a structured way to code this kind of operation. Because server-side sorting is required for a VLV search to succeed, you must also make use of the SortRequestControl and SortResponseControl objects. The trickiest part of getting a VLV search to work is properly setting the parameters you pass to the VlvRequestControl control. The VlvRequestControl constructor's first parameter is the before count, which represents the number of entries to send back before the current entry. The second parameter is the after count, which is the total number of entries to return (zero-based) that match the search request. The third parameter is the target value, which is what you are trying to match for the search. For this parameter, you can pass either a byte array, 32-bit integer or string value. When you run a search for a target value, you might get back values that don't appear to match. Suppose, for example, you are searching for a last name (surname) of smith and you want to return 10 values. The last few values might not appear to match because they don't begin with smith. However, the results are correct. Results are sorted by the sort control, which returns all values that are greater than or equal to smith. Therefore, even if the next entry in the sorted list doesn't contain a last name starting with smith, it will be returned as the next item in sort order. There is a lot more to the three VlvRequestControl parameters. For more in-depth coverage, refer to the ADSI SDK (netdir.chm) and The .NET Developer's Guide to Directory Services Programming. Both of these sources have excellent VLV search examples, the former using the LDAP VLV control and the latter using the S.DS DirectoryVirtualListView class. The explanation for the parameters involved in setting up a VLV-based search is perfectly relevant to this example. The following code example demonstrates how to find the eight closest matches for user accounts with a last name starting with smith. The code uses the valueToSearch variable to specify the target search string for this search operation. The code uses the maxNumberofEntries value to limit the returned results to 10 entries. The attribs array is passed to the SearchRequest in the next line of code. While this isn't necessary, it's a good idea to limit the attributes returned by the server to the list you will display in the results. This is true for all search operations, not just this one. Notice that attribs is the fourth parameter in this search request. All of the directory controls shown thus far contain a ServerSide property. Even though this is the default setting, you might want to explicitly set this property to emphasize that these are server side controls. The first parameter is the before count, which is the number of entries to return before the first matching entry in the sorted list. This value must be greater than or equal to 0. The second parameter is the number of entries to return. The numEntries variable was declared as 10 earlier in the code so the code will return a total of 10 results. The third parameter is the target value for the search. The valueToSearch variable was declared as smith* earlier in the code. This will return any last names that start with smith. However, it will not return any values equal to smith. If you increase the before count value then you are likely to return some values equal to smith. The sortRequest and vlvRequest controls were added to the Controls collection of the searchRequest and were sent to the directory server in the send request. If the server does not return two response controls, or if the two response controls are not the corresponding response directory controls, then exit the code. The example demonstrates how to test a directory server for multiple directory control support. In this case, both the sort and vlv controls are necessary to support a VLV search operation. All but the cn attribute are optional for a user account object. If the code attempts to return a non-existent attribute, it throws a NullReferenceException. In a production code example, you should handle missing attributes more gracefully by testing the presence of the attribute before attempting to return a value from it. Example 18. Performing a vlv search to return 10 entries starting with a last name (sn) of smith string hostOrDomainName = "fabrikam.com"; string startingDn = "cn=users,dc=fabrikam,dc=com"; // create a search filter to find all user accounts that are // security principals. string ldapSearchFilter = "(&(objectClass=user)(objectCategory=person))"; // specify the target value for the VLV search request string valueToSearch = "smith*"; // specify the maximum number of entries to return from the search string maxNumberofEntries = 10; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); try { Console.WriteLine("\r\nPerforming a VLV search operation ..."); // create a string array to hold the attribute names // to be passed to the SearchRequest string[] attribs = {"cn", "sn", "givenName", "telephoneNumber"}; // create a SearchRequest object SearchRequest searchRequest = new SearchRequest (startingDn, ldapSearchFilter, SearchScope.Subtree, attribs); // create a sortRequest directory control since // VLV requires server-side sorting. // Set the name of the attribute for sorting and set // the sort order to ascending SortRequestControl sortRequest = new SortRequestControl("sn", false); // add the sort request to the searchRequest object searchRequest.Controls.Add(sortRequest); // create VlvRequestControl object named vlvRequest // the first parameter is the before count (the number of // entries to send back before the current entry), the // second paramter is the after count (the total number of // entries to return (zero-based) that match the // search request. The third parameter is the target value // (nameToSearch), which is what you are trying to match for // this search operation. VlvRequestControl vlvRequest = new VlvRequestControl(0, numEntries, valueToSearch); // add the vlv request to the searchRequest object searchRequest.Controls.Add(vlvRequest); // cast the the directory response as a SearchResponse SearchResponse searchResponse = (SearchResponse)connection.SendRequest(searchRequest); // verify that there are two controls added to the // searchResponse object's Controls collection // then verify that the first control is a // SortResponseControl and the second is a // VlvResponseControl if (searchResponse.Controls.Length != 2 || !(searchResponse.Controls[0] is SortResponseControl) & !(searchResponse.Controls[1] is VlvResponseControl)) { Console.WriteLine("The server does not support VLV"); return; } // cast the first directory control as a // SortResponseControl object SortResponseControl sortResponse = (SortResponseControl)searchResponse.Controls[0]; // cast the second directory control as a // VlvResponseControl object VlvResponseControl vlvResponse = (VlvResponseControl)searchResponse.Controls[1]; Console.WriteLine("\nSearch Response Entries: {0}", searchResponse.Entries.Count); // Display the entries foreach (SearchResultEntry entry in searchResponse.Entries) { Console.WriteLine("\nEntry {0}: {1}", searchResponse.Entries.IndexOf(entry), entry.DistinguishedName); try { Console.WriteLine("\tfirstname:\t{0}" + "\n\tlastname:\t{1}" + "\n\taccount name:\t{2}" + "\n\ttelephone:\t{3}", entry.Attributes["givenName"][0], entry.Attributes["sn"][0], entry.Attributes["cn"][0], entry.Attributes["telephoneNumber"][0]); } catch (NullReferenceException) { Console.WriteLine("name: {0}\n" + "either the first name," + "last name or phone number isn't available", entry.Attributes["cn"][0]); } } } catch (Exception e) { Console.WriteLine("\nUnexpected exception occured:\n\t{0}: {1}", e.GetType().Name, e.Message); } The last two sections explored management and search tasks that you can perform using S.DS.P. Another powerful capability of this namespace that isn't available via COM automation or S.DS is the fine-grained control it provides you in performing advanced authentication and session operations. For example, using S.DS.P and Windows Server 2003 or later, you can perform fast concurrent bind operations. You can also dynamically communicate with a variety of directory servers using transport layer security where part of the communication is encrypted and other parts are not. Finally, S.DS.P allows you to perform certificate-based authentication using client and server certificates. This section explores all of these capabilities. While the typical forms of authentication Kerberos or NTLM are the preferred approaches for Intranet-based authentication in a Windows network, oftentimes it's necessary to use Basic authentication, especially when servicing external authentication requests. Unlike Kerberos or NTLM authentication, Basic authentication is not inherently secure unless the channel is encrypted via the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) authentication protocol. You can read more about this authentication protocol and lots of other really useful information about security in The .NET Developer's Guide to Identity by Keith Brown at. SSL/TLS is also the only approach for authenticating securely to ADAM user accounts in ADAM or ADAM SP1. ADAM in Server 2003 R2 also supports digest authentication but SSL/TLS remains the most common form of secure authentication whether you are running ADAM that supports digest authentication or not. One way to send data over an encrypted connection to a directory server is by installing a valid certificate so that the directory server can receive the secured transmission over a server specified SSL port. The default SSL port for an Active Directory server is 636. ADAM can use any valid available port that you designate for SSL communication during the installation of an ADAM instance. You can generate a certificate request using IIS. This request will then be used to generate a certificate. It's important that the host name you provide in the request matches the host name of the directory server responding to the request. For example, if you connect to a directory server named sea-dc-02.fabrikam.com, then the host name in the certificate must match this name. The SSL certificate also specifies a valid from and valid to date. You must be using the SSL certificate in that time period for it to be considered valid. An invalid certificate will cause any code attempting to connect to a directory server over an SSL port to fail. It's easy to tell if you're using an invalid certificate in IIS because you can evaluate the returned certificate in most Web browsers when you attempt to connect to the Web server using the https moniker. However, it's much harder to tell when an invalid certificate is causing a TLS/SSL connection to a directory to fail. Here are some general approaches for troubleshooting failed connection and binding attempts with certificates: You will also see other useful answers to FAQs at this URL if you are working with ADAM. In addition, review the ADAM Help chm included with an ADAM installation. While this is by no means complete, it should give you some troubleshooting techniques to get TLS/SSL connect and bind operations working. Note that you can use SSL wildcard certificates as well. A wildcard certificate is valid for any subdomains of a given domain name. This is particularly useful when you have directory servers behind a load balancer, such as the Microsoft Network Load Balancing (NLB) service. If you need to generate a valid server certificate for your testing, you can either request an SSL certificate from one of the many third-party certificate providers or by using an existing Public key infrastructure (PKI). Microsoft includes certificate services that you can install in Windows 2000 Server or later to generate certificates. In either case, the easiest way to generate a certificate request is via IIS. You can read a really useful reference for creating a valid certificate by visiting. This URL contains a reference to Chapter 6 - Managing Microsoft Certificate Services and SSL from the Microsoft® Windows® 2000 and IIS 5.0 Administrator's Pocket Consultant. In addition, Joe Kaplan's blog (September 2006) contains a comment from Tomasz Onyszko that briefly mentions the Microsoft certificate services auto-enrollment feature that can provide certificates to all domain controllers. Finally, I recommend "Configuring SSL/TLS, Securing your Web traffic isn't a trivial task" by Jan De Clercq at for more details on creating and installing SSL certificates. The following code example demonstrates how to securely bind to an ADAM instance named ap1 as an ADAM user account using basic authentication over TLS/SSL: Notice that the hostNameAndSSLPort value contains both a host name and a port value of 50001. This is a custom SSL port that I configured for an ADAM instance upon installation. In addition, a valid SSL server certificate for the host name is installed on the directory server. The userName variable specifies an ADAM user account for this simple bind operation. The code uses the options object to configure the connection for SSL binding. This is critical to ensure that passwords are not sent over the network as clear text. For a simple bind operation you do not pass in a domain name when you create a NetworkCredential object. Otherwise, the bind operation will fail with an invalid credential error message. Example 19. Binding to an ADAM instance on secure port 50001 using Basic authentication and SSL/TLS string hostNameAndSSLPort = "sea-dc-02.fabrikam.com:50001"; string userName = "cn=User1,cn=AdamUsers,cn=ap1,dc=fabrikam,dc=com"; string password = "adamPassword01!"; // establish a connection LdapConnection connection = new LdapConnection(hostNameAndSSLPort); // create an LdapSessionOptions object to configure session // settings on the connection. LdapSessionOptions options = connection.SessionOptions; options.ProtocolVersion = 3; options.SecureSocketLayer = true; connection.AuthType = AuthType.Basic; NetworkCredential credential = new NetworkCredential(userName, password); connection.Credential = credential; try { connection.Bind(); Console.WriteLine("\nUser account {0} validated using " + "ssl.", userName); if (options.SecureSocketLayer == true) { Console.WriteLine("SSL for encryption is enabled\nSSL information:\n" + "\tcipher strength: {0}\n" + "\texchange strength: {1}\n" + "\tprotocol: {2}\n" + "\thash strength: {3}\n" + "\talgorithm: {4}\n", options.SslInformation.CipherStrength, options.SslInformation.ExchangeStrength, options.SslInformation.Protocol, options.SslInformation.HashStrength, options.SslInformation.AlgorithmIdentifier); } } catch (LdapException e) { Console.WriteLine("\nCredential validation for User " + "account {0} using ssl failed\n" + "LdapException: {1}", userName, e.Message); } catch (DirectoryOperationException e) { Console.WriteLine("\nCredential validation for User " + "account {0} using ssl failed\n" + "DirectoryOperationException: {1}", userName, e.Message); } A common requirement in single sign on (SSO) or Web site authentication scenarios is high-performance authentication. Sometimes the goal is to simply verify that a user can authenticate to a directory. Using the fast concurrent bind feature available in S.DS.P, you can establish a single, anonymous LDAP connection, and perform multiple binding (authentication) operations over the connection or open channel. Unlike typical bind operations, fast concurrent bind does not create a security token as a result of a bind request so the connection cannot be used for further operations with the provided credentials. This lightweight bind is therefore significantly faster (approximately three to five times faster) then a typical bind and is ideal when a system must perform many bind requests (some possibly concurrently) in a short period of time but perform no other directory related tasks that require credentials. Fast concurrent bind in S.DS.P works only against directory servers running ADAM or Windows Server 2003 or later. This is true both on the client running the code and the server responding to the fast concurrent bind operation. Therefore, be sure to run your code on the directory server or a Windows Server 2003 (or later) client. To use a fast concurrent bind, you first create an LdapConnection object, set authentication to Basic and connect to a directory server with the specified credentials. You then create an LdapSessionOptions object, set the ProtocolVersion property to 3 and then call the FastConcurrentBind method. Once these options are specified and the method is called, you call the Bind method of the connection to complete the first authentication attempt. You can then repeatedly pass in new credentials to the connection object and call the Bind method each time. As I mentioned earlier, fast concurrent bind requires that you use Basic authentication. The FastConcurrentBind method binds to a directory server anonymously. However, subsequent calls to the Bind method pass credentials to the directory server. Because Basic authentication transmits credentials to the directory server as plain text, for security it's important to encrypt the data prior to sending it. See the previous section, Binding over a TLS/SSL Encrypted Connection, for information on creating a certificate for secure authentication. Another important requirement is setting the ProtocolVersion property to 3 to provide LDAP version 3 support. If you don't do this, then when you call the FastConcurrentBind method it will automatically set the protocol version to 3 for you. Without LDAP version 3, a fast concurrent bind operation will fail. I have provided some pointers to more information about LDAP v3 in the References section of this paper. The following example demonstrates how to perform a fast concurrent bind over a secure connection. Initially the code binds to the directory server as user1 it then uses the same connection to bind to the server as user2. Notice that the hostNameAndSSLPort value contains both a host name and a port value of 636. This is the default SSL port for an Active Directory server. In addition, a valid server certificate for the host name must be installed on the directory server. The code uses the options object to configure the connection for fast concurrent binding. This is critical to ensure that credentials are not sent over the network as clear text. Notice that the code calls the FastConcurrentBind method inside of a try catch block. The only error being tested here is an LDAPException. If this exception is thrown, it's likely that an attempt was made to connect over an unencrypted connection. In that case, the exception is handled and the code terminates. An LdapException error occurs if the credentials are invalid. Also, a DirectoryOperationException occurs if the directory server is unable to complete the binding operation. Example 20. Performing a fast concurrent bind operation over a secure connection first as user1 and then as user2 string hostNameAndSSLPort = "sea-dc-02.fabrikam.com:636"; string domain = "fabrikam"; string userName1 = "user1"; string password1 = "password01!"; string userName2 = "user2"; string password1 = "password02!"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostNameAndSSLPort); // reset authentication type to basic to support // concurrent binding. The default is Negotiate connection.AuthType = AuthType.Basic; Console.WriteLine("Authentication type reset to {0}", connection.AuthType); // create an LdapSessionOptions object to configure session // settings on the connection LdapSessionOptions options = connection.SessionOptions; // set to LDAP version 3, to support enhanced authentication // if you don't explicitly set this, when you call the // FastConcurrentBind method, protocol version 3 is set // for you options.ProtocolVersion = 3; // if an attempt is made to bind over a non-ssl connection, adding this // property will prevent the bind from succeeding options.SecureSocketLayer = true; try { // call fast concurrent bind for this connection options.FastConcurrentBind(); } catch (LdapException) { Console.WriteLine("\nYou did not connect to an SSL" + " port.\nThis connection is unsafe and is being terminated."); return; } NetworkCredential credential = new NetworkCredential(userName1, password1, domain); connection.Credential = credential; // send the first credential try { connection.Bind(); Console.WriteLine("\nUser account {0} validated using " + "fast concurrent bind.", userName1); } catch (LdapException e) { Console.WriteLine("\nCredential validation for User " + "account {0} using fast concurrent bind failed\n" + "LdapException: {1}", userName1, e.Message); } catch (DirectoryOperationException e) { Console.WriteLine("\nCredential validation for User " + "account {0} using fast concurrent bind failed\n" + "DirectoryOperationException: {1}", userName1, e.Message); } // send the second credential using the same connection try { credential = new NetworkCredential(userName2, password2, domain); connection.Credential = credential; connection.Bind(); Console.WriteLine("\nUser account {0} validated using " + "fast concurrent bind.", userName2); } catch (LdapException e) { Console.WriteLine("\nCredential validation for User " + "account {0} using fast concurrent bind failed\n" + "LdapException: {1}", userName2, e.Message); } catch (DirectoryOperationException e) { Console.WriteLine("\nCredential validation for User " + "account {0} using fast concurrent bind failed\n" + "DirectoryOperationException: {1}", userName2, e.Message); } The FastConcurrentBind method in the code download contains additional error checking to determine whether you are attempting to bind over an unencrypted connection. Sometimes you might need to send just some data to a directory server securely via an encrypted connection while other operations don't require this level of protection. Common examples of when security is paramount is authenticating or sending credit card data to a Web server over the Internet. Other operations, such as reviewing or selecting products online might not require an encrypted connection. While it's possible to keep all communications encrypted, it places an unnecessary burden on both the server and client to encrypt and decrypt the data. The end result is non-scalable and poor performance solutions. Ideally, you use encrypted communications for data that requires it and unencrypted communications for everything else. S.DS.P provides this capability via transport layer security (TLS). Using TLS, you can be selective about what data is sent over an encrypted connection. Note that if your communication to a directory server uses the default negotiate (Kerberos or NTLM) authentication mechanism, it's really not necessary to use TLS for authentication. If, however, you are binding using an exposed authentication mechanism, such as basic authentication, TLS comes in handy. TLS requires LDAP v3. You enable TLS by calling the StartTransportLayerSecurity method of the LdapSessionOptions class. You can pass this method any directory controls that you want sent to the server for enhanced operations. See the advanced search operation code samples earlier in this paper if you are not familiar with using directory controls. If you aren't using any directory controls, you simply pass null to the StartTransportLayerSecurity method. Once you are finished with TLS, you stop it by calling the StopTransportLayerSecurity method. The following code example first demonstrates how to start transport layer security, bind over the secure connection using basic authentication and complete an additional task. Second, it demonstrates how to stop TLS, rebind using an inherently secure authentication mechanism and perform a task over the connection. Note that a valid SSL certificate must be installed on the responding directory server. Note that no server name is provided in this example. If you do not have SSL certificates on all of your domain controllers, you should specify the fully qualified domain name of a server containing an SSL certificate for the hostOrDomainName variable. Otherwise, the code will fail whenever the Locator service directs the client running the code to connect to a directory server not containing a valid SSL certificate. Unlike the previous Fast Concurrent Bind example, setting the AuthType property to Basic is not required. I'm showing it here to demonstrate how you can use TLS to encrypt the bind operation for an authentication method that is not inherently secure. The code uses the options object to configure the connection for TLS and call the start and stop TLS methods. Calling this method sets the SecureSocketLayer property of the options object to True. The TestTask method is a simple search operation that I don't show in this code example. However, the code download includes this TestTask so that you can successfully run the TLS sample in the code download. Calling this method sets the SecureSocketLayer property of the options object to False. Because the binding operation occurred using basic authentication over the encrypted connection, you must rebind to the directory server. If you bind securely from the beginning using the Negotiate authentication type, there is no need to start TLS until after you complete the bind. As a result, you will not need to rebind to the directory after stopping TLS. Example 21. How to use TLS to authenticate and perform a task string hostOrDomainName = "fabrikam.com"; string userName = "user1"; string password = "password1"; // establish a connection to the directory LdapConnection connection = new LdapConnection(hostOrDomainName); NetworkCredential credential = new NetworkCredential(userName, password, domainName); connection.Credential = credential; connection.AuthType = AuthType.Basic; LdapSessionOptions options = connection.SessionOptions; options.ProtocolVersion = 3; try { options.StartTransportLayerSecurity(null); Console.WriteLine("TLS started.\n"); } catch (Exception e) { Console.WriteLine("Start TLS failed with {0}", e.Message); return; } try { connection.Bind(); Console.WriteLine("Bind succeeded using basic " + "authentication and SSL.\n"); Console.WriteLine("Complete another task over " + "this SSL connection"); TestTask(hostName); } catch (LdapException e) { Console.WriteLine(e.Message); } try { options.StopTransportLayerSecurity(); Console.WriteLine("Stop TLS succeeded\n"); } catch (Exception e) { Console.WriteLine("Stop TLS failed with {0}", e.Message); } Console.WriteLine("Switching to negotiate auth type"); connection.AuthType = AuthType.Negotiate; Console.WriteLine("\nRe-binding to the directory"); connection.Bind(); // complete some action over this non-SSL connection // note, because Negotiate was used, the bind request // is secure. // run a task using this new binding TestTask(hostName); In the last three examples, I demonstrated how you can use an SSL certificate on a directory server to encrypt communications between a client and a server. S.DS.P also allows you to use client and server certificates to validate both sides of a connection during an authentication attempt. During a bind operation, you can selectively import a client certificate, inspect both the client and server certificate and verify the server certificate before completing the bind operation. You use the following properties of the LdapSessionOptions class support certificate-based authentication : X509Certificate cert = new X509Certificate(); // select a certificate to import cert.Import( @"c:\cert\cert1.pfx", "password1", X509KeyStorageFlags.DefaultKeySet); // add the certificate to the connection connection.ClientCertificates.Add(cert); You must then write the client and server methods that are called by their respective delegates. You are also able to inspect details of both the client and server certificates so that you can decide whether to continue the bind operation. While this isn't necessary, it's useful to increase your confidence in the authenticity of both the client and the server certificate. Certificate-based authentication requires that you assign a client certificate to an existing user account before performing tasks against the directory. Otherwise, following certificate authentication, the directory server will deny any operation that requires a user principal to perform a task. Here are the required steps for successfully running the associated certificate-based authentication code sample: If you don't have a client certificate available or have certificate services already implemented in your instance of Active Directory, you can generate one using Microsoft Certificate Services. After you install certificate services, submit an advanced certificate request to create a client authentication certificate. If you're not familiar with this task, read the certificate services documentation, which is accessible from the home page of the CertSrv site that Certificate Services creates in IIS. You will assign a password to the certificate when you export it as a .pfx file. You must know the certificate file location and password to successfully run the certificate authentication example in the code download. You can verify that a valid certificate is installed and working properly from LDP by attempting to connect to the server over the Active Directory SSL connection port 636. If you successfully ran the TLS example in the previous section, then a valid server authentication certificate is installed. The following code example demonstrates how to create a certificate routine that calls the two methods to complete a certificate-based authentication operation. Following this code example, I show the methods called by the delegates and a certificate inspection method to increase your confidence in the certificates involved in the authentication operation: Notice that the hostNameAndSSLPort value contains both a host name and a port value of 636. This is the default SSL port for an Active Directory server. In addition, a valid certificate for the host name must be installed on the directory server. This is an important step because the default authentication type is Negotiate. Even though you intend on using certificate authentication, the code will try Kerberos and then NTLM authentication rather than certificate authentication unless you explicitly specify an External authorization type. You don't have to set these explicitly because if the certificates are valid and accepted, these properties will be set to True and 3 respectively prior to completing the bind operation. However, it's useful to be clear in your code about critical connection settings. The code examples following this one explore the methods passed to QueryClientCallback and VerifyServerCertificateCallback. When this task needs to bind to the directory, the code calls the ClientCertificateRoutine and the ServerCertificateRoutine methods. This TestTask method adds and deletes a user account using the AddResponse and DeleteResponse classes I explored earlier in this paper. It also demonstrates how to return the rootDSE object from Active Directory via LDAP calls. The code download includes this TestTask method so that you can successfully run the certificate sample from the code download. However, I don't show it here. Example 22. Creating a client and server certificate-based authentication operation string hostNameAndSSLPort = "sea-dc-02.fabrikam.com:636"; // establish an SSL connection to the directory LdapConnection connection = new LdapConnection(hostNameAndSSLPort); Console.WriteLine("initial connection succeeded\n"); // set the authentication type to external since the code does not // rely on built-in Windows authentication mechanisms connection.AuthType = AuthType.External; LdapSessionOptions options = connection.SessionOptions; options.SecureSocketLayer = true; options.ProtocolVersion = 3; options.QueryClientCertificate = new QueryClientCertificateCallback(ClientCertificateRoutine); options.VerifyServerCertificate = new VerifyServerCertificateCallback(ServerCertificateRoutine); try //perform a task over client/server certificate authentication { TestTask(connection, hostName); } catch (Exception e) { Console.WriteLine("bind with certificate failed with {0} {1}", e.Message, e.InnerException); if (e.Message == "The LDAP server is unavailable.") Console.WriteLine("You might not have specified an " + "SSL port for this connection."); Console.WriteLine("Press \"y\" to exit or anything else to continue"); ConsoleKeyInfo key = Console.ReadKey(false); if (key.KeyChar.ToString().ToLower() == "y") { Console.WriteLine("\noperation terminated"); return; } Console.WriteLine("Attempting to continue the operation " + "without certificate authentication"); } The really interesting part of certificate verification appears in the methods called by the delegates, ClientCertificateRoutine and ServerCertificateRoutine. These two method calls appear in the previous code example when the two corresponding callback objects are created. I've also added the GetCertInfo method to demonstrate how you can inspect the certificates before continuing with certificate-based authentication. In the next three code examples, I show the method signatures because they are called from the code in Example 22. The following code example shows the ClientCertificateRoutine and how you import a certificate in this routine. The corresponding code download does not require that you hard code the certificate or password as I show in this code example. This delegate provides a facility for loading a client certificate during a particular LDAP session. Therefore, it makes sense to allow the user to specify a certificate when running the code. You use this cert object to import, inspect and add the certificate to the LDAP connection. The X509KeyStorage enumeration allows you to control exactly how the certificate key is handled following an import operation. The DefaultKeySet value specifies that the default private key should be used for the import. I created this method to demonstrate how to return some certificate information. The code for this method appears in Example 25. This QueryClientCertificateCallback delegate requires that you return an X509Certificate from the method call. However, since this delegate is called during the bind operation and the bind operation doesn't return anything, there is no need to return the certificate. If you were using the delegate from your own method, you could potentially make use of the returned certificate. Example 23. The ClientCertificateRoutine called by QueryClientCertificateCallback during a bind operation private static X509Certificate ClientCertificateRoutine( LdapConnection connection, byte[][] auth) { string certFilewithPathSpec = @"c:\certs\myCert.pfx"; string certPassword = "myCertPassword"; Console.WriteLine("Inside ClientCertificateRoutine"); X509Certificate cert = new X509Certificate(); // select a certificate to import cert.Import( certFilewithPathSpec, certPassword, X509KeyStorageFlags.DefaultKeySet); GetCertInfo(cert); // add the certificate to the connection connection.ClientCertificates.Add(cert); return null; } The following code example shows the ServerCertificateRoutine and how you use it to verify the server certificate: I created this method to demonstrate how to return key certificate information. The code for this method appears in Example 25. This line of code is the only required line in this method. If the method runs successfully, a value of True is passed back to the calling method — the Bind method in this case. If this method returns False, then the bind operation raises an error and the server cannot complete the authentication request. Example 24. The ServerCertificateRoutine called by VerifyServerCertificateCallback during a bind operation private static bool ServerCertificateRoutine(LdapConnection connection, X509Certificate cert) { // client can verify the server certificate here Console.WriteLine("\nVerifying the server certificate in the ServerCertificateRoutine"); GetCertInfo(cert); // by returning true, the server certificate is validated return true; } Another powerful part of client and server certificate verification is having an operator inspect the certificate to enhance the verification process. Once the certificate is exposed in the last two methods, you can run code to inspect the data in the certificate, such as the date and time of certificate validity and details about the certificates issuer and subject. The following code example shows the GetCertInfo method and how you use it to inspect the client and server certificates: Figure 25. The GetCertInfo routine called by the ClientCertificateRoutine and ServerCertificateRoutine delegates private static void GetCertInfo(X509Certificate cert) { // return some information about this certificate Console.WriteLine("Valid from {0} to {1}", cert.GetEffectiveDateString(), cert.GetExpirationDateString()); Console.WriteLine("subject: {0}", cert.Subject); Console.WriteLine("issuer: {0}", cert.Issuer); } For information on when S.DS.P is the right choice, see "A Tale of Two LDAP Stacks":. This is Joe Kaplan's blog. This is a great place to go if you are digging deeply into MS LDAP programming. For information about ADAM, see the ADAM Resource Site at and read the introductory reviews on this technology. To see key advantages for using LDAP directly, see:. To read about Lightweight Directory Access Protocol (v3), see the Extension for Transport Layer Security rfc:. To read the Lightweight Directory Access Protocol (v3) rfc, go to. For general information about LDAP v3, see:. For information about LDAP authentication mechanisms, see. For information on Identity, see the The .NET Developer's Guide to Identity:. As you can see, S.DS.P opens a whole world of possibilities for performing advanced LDAP programming tasks against directory servers. This namespace exposes capabilities that were previously unavailable to managed code programmers. Hopefully, this information helps you enhance or build new powerful directory services solutions for your customers. About the author Ethan Wilansky is a contributing editor for Windows IT Pro, an enterprise architect for EDS in its Innovation Engineering practice, and a Microsoft MVP. He has authored or coauthored more than a dozen books for Microsoft and more than 70 articles. ![CDATA[ Third party scripts and code linked to or referenced from this website are licensed to you by the parties that own such code, not by Microsoft. See ASP.NET Ajax CDN Terms of Use –. ]]>
http://msdn.microsoft.com/en-us/bb332056
CC-MAIN-2014-52
en
refinedweb
Connecting to databases with JDBC Posted on March 1st, 2001 with JDBC It has been estimated that half of all software development involves client/server operations. A great promise of Java has been the ability to build platform-independent client/server database applications. In Java 1.1 this has come to fruition with Java DataBase Connectivity (JDBC). One of the major problems with databases has been the feature wars between the database companies. There is a “standard” database language, Structured Query Language (SQL-92), but usually you must( ). - That you’re using JDBC with “jdbc” - The “subprotocol”: the name of the driver or the name of a database connectivity mechanism. Since the design of JDBC was inspired by ODBC, the first subprotocol available is the “jdbc-odbc bridge,” specified by “odbc” - The database identifier. This varies with the database driver used, but it generally provides a logical name that is mapped by the database administration software to a physical directory where the database tables are located. For your database identifier to have any meaning, you must register the name using your database administration software. (The process of registration varies from platform to platform.) All this information is combined into one string, the “database URL.” For example, to connect through the ODBC subprotocol to a database identified as “people,” the database URL could be: String dbUrl = "jdbc:odbc:people"; If you’re connecting across a network, the database URL will also contain the information identifying the remote machine. When you’re ready to connect to the database, you call the static method DriverManager.getConnection( ), passing it the database URL, the user name, and a password to get into the database. You get back a Connection object that you can then use to query and manipulate the database. The following example opens a database of contact information and looks for a person’s last name as given on the command line. It selects only the names of people that have email addresses, then prints out all the ones that match the given last name: //: Lookup.java // Looks up email addresses in a // local database using JDBC import java.sql.*; public class Lookup { public static void main(String[] args) { String dbUrl = "jdbc:odbc:people"; String user = ""; String password = ""; try { // } catch(Exception e) { e.printStackTrace(); } } } ///:~ You can see the creation of the database URL as previously described. In this example, there is no password protection on the database so the user name and password are empty strings..) The executeQuery( ) method returns a ResultSet object, which is quite a bit like an iterator: the next( ) method moves the iterator to the next record in the statement, or returns null. Getting the example to work. Of course, this process can vary radically from machine to machine, but the process I used to make it work under 32-bit Windows might give you clues to help you attack your own situation.Step 1: Find the JDBC Driver JDK 1.1).Step 2: Configure the database Again, this is specific to 32-bit Windows; you might need to do some research to figure it out for your own platform. First, open the control panel. You might find two icons that say “ODBC.” You must use the one that says “32bit ODBC,” since the other one is for backwards “File DSN” section I chose “Add,” chose the text driver to handle my comma-separated ASCII file, and then un-checked “use current directory” to allow me to specify the directory where I exported the data file. this one) is usually called a flat-file database . Most problems that go beyond the simple storage and retrieval of data generally require multiple tables that must be related by joins to produce the desired results, and these are called relational databases.Step 3: Test the configuration.” Once you’ve done this, you will see that your database is available when you create a new query using your query tool.Step 4: Generate your SQL query, ‘Eckel’. I also wanted to display only those names that had email addresses associated with them. The steps I took to create this query were: - Start a new query and use the Query Wizard. Select the “people” database. (This is the equivalent of opening the database connection using the appropriate database URL.) - Select the “people” table within the database. From within the table, choose the columns FIRST, LAST, and EMAIL. - Under “Filter Data,” choose LAST and select “equals” with an argument of Eckel. Click the “And” radio button. - Choose EMAIL and select “Is not Null.” - Under “Sort By,” choose FIRST. The result of this query will show you whether you’re getting what you want. With more complicated queries it’s easy to get things wrong, but with a query tool you can interactively test your queries and automatically generate the correct code. It’s hard to argue the case for doing this by hand.Step 5: Modify and paste in your query. You can see from this example that by using the tools currently available – in particular the query-building tool – database programming with SQL and JDBC can be quite straightforward. A GUI version of the lookup program: //: VLookup.java // GUI version of Lookup.java import java.awt.*; import java.awt.event.*; import java.applet.*; import java.sql.*; public class VLookup extends Applet { String dbUrl = "jdbc:odbc:people"; String user = ""; String password = ""; Statement s; TextField searchFor = new TextField(20); Label completion = new Label(" "); TextArea results = new TextArea(40, 20); public void init() { searchFor.addTextListener(new SearchForL()); Panel p = new Panel(); p.add(new Label("Last name to search for:")); p.add(searchFor); p.add(completion); setLayout(new BorderLayout()); add(p, BorderLayout.NORTH);.getMessage()); } } class SearchForL implements TextListener { public void textValueChanged(TextEvent te) {.getMessage()); return; } results.setText(""); try { while(r.next()) { results.append( r.getString("Last") + ", " + r.getString("fIRST") + ": " + r.getString("EMAIL") + "\n"); } } catch(Exception e) { results.setText(e.getMessage()); } } } public static void main(String[] args) { VLookup applet = new VLookup(); Frame aFrame = new Frame("Email lookup"); aFrame.addWindowListener( new WindowAdapter() { public void windowClosing(WindowEvent e) { System.exit(0); } }); aFrame.add(applet, BorderLayout.CENTER); aFrame.setSize(500,200); applet.init(); applet.start(); aFrame.setVisible(true); } } ///:~ Much of the database logic is the same, but you can see that a TextListener is added to listen to the TextField, so that whenever you type a new character it first tries to do a name completion by looking up the last name in the database and using the first one that shows up. (It places it in the completion Label, and uses that as the lookup text.) This way, as soon as you’ve typed enough characters for the program to uniquely find the name you’re looking for, you can stop. Why the JDBC API. This, of course, is not Java’s fault. The discrepancies between database products are just something that JDBC tries to help compensate for. But bear in mind that your life will be easier if you can either write generic queries and not worry too much about performance, or, if you must tune for performance, know the platform you’re writing for so you don’t need to write all that investigation code. There is more JDBC information available in the electronic documents that come as part of the Java 1.1 distribution from Sun. In addition, you can find more in the book JDBC Database Access with Java (Hamilton, Cattel, and Fisher, Addison-Wesley 1997). Other JDBC books are appearing regularly. There are no comments yet. Be the first to comment!
http://www.codeguru.com/java/tij/tij0169.shtml
CC-MAIN-2014-52
en
refinedweb
Originally posted by Swati Singhal: Hi, I am a lil confused with this question. Appreciate if somebody could clarify. Which of the following constructors must exist in Parent class? public class Child extends Parent{ public Child(int i){ } public Child(int i, int j){ super(i,j); } public void Child(int i, int j, int k) { } } a. public Parent(){} b. public Parent(int i){} c. public Parent(int i, int j){} d. public Parent(int i, int j, int k){} I thought the answer would be c, but it is both a & c.
http://www.coderanch.com/t/245060/java-programmer-SCJP/certification/Inheritance
CC-MAIN-2014-52
en
refinedweb
Collection classes are used for data storage and manipulate (sort, insert, remove etc) the data. Most of the collection classes implement the same interfaces, and these interfaces may be inherited to create new collection classes on the basis of more specialized data. These collection classes are defined in System.Collections.Generic. Main collection classes which are used in c# as · ArrayList Class · HashTable Class · Stack Class and Queue class etc. The main properties of the collection classes are · Collection classes are defined as part of the System.Collection or System.Collections.Generic namespace. · Most collection classes derive from the interfaces ICollection, IComparer, IEnumerable, IList, IDictionary, and IDictionaryEnumerator and their generic equivalents. · Using generic collection classes provides increased type-safety and in some cases can provide better performance, especially when storing value types. The following generic types correspond to existing collection types: · List is the generic class corresponding to ArrayList. · Dictionary is the generic class corresponding to Hashtable. · Collection is the generic class corresponding to collectionBase.Collection can be used as a base class, but unlike CollectionBase it is not abstract, making it much easier to use. · ReadOnlyCollection is the generic class corresponding to ReadonlyCollection. ReadOnlyCollection is not abstract, and has a constructor that makes it easy to expose an existing List as a read-only collection. · The Queue, stack and SortedList generic classes correspond to the respective nongeneric classes with the same names. List Generic Class or ArrayList The List class is the generic equivalent of the ArrayList class. It implements the IList generic interface using an array whose size is dynamically increased as required. ArrayList is the part of datastructure.it show the simple list value. ArrayList class contain the Add, Insert Remove, RemoveAt and Sort method and main properties like Capacity, Count etc. It is the part of System.Collection. Syntax for creating a ArrayList: ArrayList name = new ArrayList(); Syntax for creating a List<T>: List<Type> name=new List<Type>(); There are some basic properties and methods of List<T> 1. Adding item to list 2. Removing item to list 3. Sort the list 4. Insert the item into the list etc. Example: //make object of ArrayList class like countryList ArrayList countryList = new ArrayList(); //Add the country in the countrylist countryList.Add("India"); countryList.Add("SriLanka"); countryList.Add("SouthAfrica"); countryList.Add("Australia"); countryList.Add("England"); //Show the countryList Response.Write("<b><u>Country List:</u></b><br/>"); foreach (string country in countryList) Response.Write(country + "<br/>"); Example: //define the List here List<string> countryList = new List<string>(); //use the add method to add the element in List countryList.Add("Rusea"); countryList.Add("GreenLand"); countryList.Add("India"); countryList.Add("Pakistan"); countryList.Add("US"); //print the data on web page Response.Write("<b><u>Country List:</u></b><br/>"); foreach (string country in countryList) Response.Write(country +"<br/>"); Country List: India SriLanka SouthAfrica Australia England List have following remove properties: Remove(), RemoveAt(), RemoveAll(), RemoveRange(). Example: countryList.Remove("Pakistan"); Insert method is used for Insert item into the List at any index of the list. Example countryList.Insert(2,"Pakistan"); Example: countryList.Sort (); Note: Other method which is used in List<T> IndexOf(),Contains(),TrimExcess(),Clear() etc. The SortedList object contains items in key/value pairs. SortedList objects automatically sort the items in alphabetic or numeric order. Main method of sorted list are Add(), Remove(), IndexOfKey(), IndexOfValue(), GetKeyList(), GetKeyValue() etc. and to object key and values. Example: //make object of Sorted List class like countryTable SortedList countrySList = new SortedList(); //Add the country in the hashtable.Add(Object key,Object value) countrySList.Add(1, "india"); countrySList.Add(2, "England"); //Find the key and value by using DictionaryEntry foreach (DictionaryEntry country in countrySList) Response.Write(country.Key + " : " + country.Value + "<br/>"); Note: Another Example related to SortedList check this link How use the GetKeyList() and GetKeyValue() method Example IList countryKey = countrySList.GetKeyList(); foreach(Int32 country in countryKey) Response.Write(country +"<br/>"); Where IList is the interface of System.Collection.IList IList countryValue = countrySList.GetValueList(); foreach (Int32 country in countryKey) Response.Write(country + "<br/>"); Hashtable in C# represents a collection of key/value pairs which maps keys to value. Any non-null object can be used as a key but a value can. We can retrieve items from hashTable to provide the key. Both keys and values are Objects. The main properties of HashTable are Key and Values and methods add (), remove (), contains () etc. Example: //make object of HashTable class like countryTable Hashtable countryTable = new Hashtable(); //Add the country in the hashtable.Add(Object key,Object value) countryTable.Add(1, "India"); countryTable.Add(2, "Srilanka"); countryTable.Add(3, "England"); //Find the key and value by using DictionaryEntry class foreach (DictionaryEntry country in countryTable) Response.Write(country.Key + " : " + country.Value +"<br/>"); Country List: 3 : England 2 : Srilanka 1 : India Detailed concept on hash table sees this link: It is worked as Last in first out (LIFO) when making the object of the stack class. Stack follow the two important method Push () and Pop (). Push () method is used for inserting the item and pop () method is used for the removing the data. Push () method Example: //make object of Stack class Stack countryStack = new Stack(); //Insert the item by push method countryStack.Push("India"); countryStack.Push("England"); //show the element in the stack foreach(string country in countryStack) Response.Write(country + "<br/>"); England India <![if !supportLineBreakNewLine]> Pop () method: //Remove the item from the list countryStack.Pop(); Queue work as first in first out (FIFO). Queue class has main method enqueue() and dequeue().Objects stored in a Queue are inserted at one end and removed from the other. The Queue provides additional insertion, extraction, and inspection operations. We can Enqueue (add) items in Queue and we can Dequeue (remove from Queue) or we can Peek (that is we will get the reference of first item) item from Queue. Queue accepts null reference as a valid value and allows duplicate elements. The main method and properties of the queue class are Enqueue(), Dequeue() and Peek() etc. Example: //make object of the queue class Queue countryQueue = new Queue(); //insert the item in queue by Enqueue method countryQueue.Enqueue("India"); countryQueue.Enqueue("England"); //remove the item in queue by Dequeue method countryQueue.Dequeue(); foreach (string country in countryQueue) Response.Write(country + "<br/>"); Its main properties are Next and Previous so it is allow the forward and reverse traversal by these properties and its main methods AddAfter(), AddFirst(), AddBefore(), AddHead(), AddLast() and AddTail.
http://www.mindstick.com/Articles/62f1e1b7-4f54-4d29-8c59-aa08d1190db1/Collection%20and%20Generic%20Collection%20Classes%20in%20C%20NET
CC-MAIN-2014-52
en
refinedweb
Validator A widget that is used to validate the associated DevExtreme editors against the defined validation rules. Validator widget using every supported library and framework. For more details on working with widgets in these libraries and frameworks, see the Widget Basics topic for jQuery, Angular, AngularJS, Knockout or ASP.NET MVC. jQuery $(function() { $("#textBox1").dxTextBox({ }) .dxValidator({ validationRules: [ // ... ] }); }); <div id="textBox1"></div> Angular <dx-text-box> <dx-validator> <dxi-validation-rule</dxi-validation-rule> </dx-validator> </dx-text-box> import { DxValidatorModule, DxTextBoxModule } from "devextreme-angular" // ... export class AppComponent { // ... } @NgModule({ imports: [ // ... DxValidatorModule, DxTextBoxModule ], // ... }) AngularJS <div dx- </div> Knockout <div data- </div> See Also The learn the validation rules that can be defined using the Validator widget for an editor, refer to the Validation Rules section. The editors that are associated with the Validator widgets are automatically validated against the specified rules each time the event assigned to the editor's valueChangeEvent option occurs. In addition, several editors can be validated at once. To learn how to do this, refer to the Validate Several Editor Values topic. See Also Configuration Events Validation Rules Validation Result If you have technical questions, please create a support ticket in the DevExpress Support Center. We appreciate your feedback.
https://js.devexpress.com/Documentation/18_2/ApiReference/UI_Widgets/dxValidator/
CC-MAIN-2021-49
en
refinedweb
From: João Abecasis (jpabecasis_at_[hidden]) Date: 2006-04-19 09:21:08 Kevin Wheatley wrote: > João Abecasis wrote: >> What do others think about this? I'd like to hear your comments on the >> approach and implementation, as well as bug reports ;-) > > In general I like the idea as a feature for my own use too. > > Bug? When you have 0 found headers in a directory though you get an > error relating to the lack of tests... > *** argument error > * rule test-suite ( suite-name : tests + ) > * called with: ( repository.headers : ) > * missing argument tests Aha! In this case, I'm not sure it is a bug. test-suite is part of Boost.Build-testing and it provides a way of grouping together a set of tests under a virtual target name. Hmmm... Then again, perhaps I should provide my own virtual name for the headers tests as well. I'll look into this. Thanks for reporting! In the meantime the workaround is to skip the test-suite and directly use, import headers ; headers dirs ; > Not sure about the the weird escaped values for the test > objects/executables/etc. looks odd. > > MkDir1 bin/directory%2finclude%2fsomeincludefile%2eh.test Right. I also find these odd. The problem is I need to give each header test a unique name. By default this would be the filename minus path and file-extension which is prone to collisions. I sticked to this scheme only because it is simple and reversible, although other escaping mechanisms are possible. Another option would be to use a non-descriptive sequential naming scheme: header-1 header-2 ... , possibly outputting the corresponding header paths to a separate file. I've tried both approaches locally. Or maybe I can skip naming the tests... I'll look into this possibility, which would be tied to the fix for the issue above. Thanks for your comments! Best regards, João Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/boost-build/2006/04/13521.php
CC-MAIN-2021-49
en
refinedweb
How to plot.¶ There exist multiple ways to plot using Oríon. We will start with the most convenient one, the experiment object plot accessor. Accessor experiment.plot¶ Note It only supports single experiment plots. It does not support regrets and averages. It is possible to render most plots directly with the ExperimentClient object. The accessor ExperimentClient.plot can be used to plot the results of the experiment. from orion.client import get_experiment # Specify the database where the experiments are stored. We use a local PickleDB here. storage = dict(type="legacy", database=dict(type="pickleddb", host="../../db.pkl")) # Load the data for the specified experiment experiment = get_experiment("2-dim-exp", storage=storage) fig = experiment.plot.regret() fig.show() Module plotting¶ The plotting module can used as well to plot experiments. import orion.plotting.base as plot fig = plot.regret(experiment) fig.show() The advantage of the plotting module is that it can render plots using multiple examples as shown in the example below. fig = plot.regret(experiment) fig.show() Command orion plot¶ Note The plotting command line utility does not support arguments for now. Contributions are welcome! :) Note Only supports single experiment plots. It does not support regrets and averages. This section refers to the use of the orion plot ... subcommand.. Web API¶ Note Only supports single experiment plots. It does not support regrets and averages. The Web API supports queries for single experiment plots. See documentation for all queries. The JSON output is generated automatically according to the Plotly.js schema reference. Total running time of the script: ( 0 minutes 0.000 seconds) Gallery generated by Sphinx-Gallery
https://orion.readthedocs.io/en/v0.1.16/auto_examples/how-tos/code_1_how_to_plot.html
CC-MAIN-2021-49
en
refinedweb
Learn Python the Hard Way with Pythonista Hi, I’ve bought the book for learning Python and I’m starting to have issues in some exercises when I try to run the scrips in Pythonista. I’m convinced that the problem could be on file structure or the way I put the arguments because in a regular computer the scripts run ok. I paste the script hoping for anyone that can give me a hand. Thanks. from sys import argv from os.path import exists script, from_file, to_file = argv print(f"Copying from {from_file} to {to_file}") # we could do these two on one line, how? in_file = open(from_file) indata = in_file.read() print(f"the input file is {len(indata)} bytes long") print(f"Does the output file exist? {exists(to_file)}") print("Ready, hit RETURN to continue, CTRL-C to abort.") input() out_file = open(to_file, 'w') out_file.write(indata) print("Alright, all done.") out_file.close() in_file.close() . :) Hi , I´m with problems again, sorry to bother you. This script runs ok in the author´s video but it´s no giving any result in Pythonista, this time is not a matter of arguments. Hope someone could give me a hand. # this one is like your scripts with argv def print_two(*args): arg1, arg2 = args print(f"arg1: {arg1}, arg2: {arg2}") # ok, that *args is actually pointless, we can just do this def print_two_again(arg1, arg2): print(f"arg1: {arg1}, arg2: {arg2}") # this just takes one argument def print_one(arg1): print(f"arg1: {arg1}") # this one takes no argumentss def print_none(): print("I got nothing.") print_two("Zed", "Shaw") print_two_again("Zed", "Shaw") print_one("First!") print_none()```.
https://forum.omz-software.com/topic/4614/learn-python-the-hard-way-with-pythonista/12
CC-MAIN-2021-49
en
refinedweb
How to install discord.py on pythonista. This post is deleted!last edited by @ccc I need some assistance on something I am almost done installing all the modules but I am confused on this. It says module "attr" has no attribute 's' Please provide the full error message, not just the last line. Try import attr print(attr.__file__) @ccc here You can see the whole thing on this I was asking for the text that is under the box with the two arrows in the upper right corner of the window. You can run @JonB's code in the Python REPL. We suspect that it will show you the path to a file in the local directory called attr.pyand its presence is preventing Python from finding a second file with the same name in the package. If this is the case, then rename the local file and try again.
https://forum.omz-software.com/topic/7296/how-to-install-discord-py-on-pythonista
CC-MAIN-2021-49
en
refinedweb
Adding Semantic to Base Types Parameters in Scala Here is another "fun" dive in the complex type system of Scala. This was inspired by Eric Torreborre's post that gives a very good use case for what I am going to show, but I wanted to share another one. I am giving a simple example here, it's a bit naive and there would be other ways to implement it, but I hope you will still understand the overarching concept: being able to annotate the meaning of basic type parameters (and variables) such as Int, List[String], etc. This will help the compiler can automatically detect more errors and a code reviewer to understand better what's going on. A Use Case Imagine that you are trying to get users to score products. Each product can be scored along different dimensions: design, usability, durability. You could represent a score with a simple object: case class User(id: Long) case class Product(id: Long) case class Scoring(user: User, product: Product, design: Int, usability: Int, durability: Int) Now, you can create Scoring instances in your code Scoring(u1, p1, 10, 15, 20). You will probably end up creating such objects in different parts of your code, from raw Ints coming from diverse sources, such as a REST request, or from parsing a line from your database. The issue here is that the Scoring constructor is simple, but not very safe. If you are like me, you will have to go check in the documentation to know in what order to set the parameters. For instance, you might write something like this, to create an instance with Anorm from a database query: def parseProduct(row: Row): Product = … def parseUser(row: Row): User = … def parseScoring(row: Row): Scoring = Scoring( parseProduct(row), parseUser(row), row[Int]("design"), row[Int]("durability"), row[Int]("usability") ) There are two errors in this code: - the scala compiler performing type check would straightaway flag the inverted product/user parameters as a compilation error: type mismatch; found : Product required: User, - however, the second error, inverting the durabilityscore and the usabilityscore will not be spotted by the compiler and even a code reviewer would have to be very concentrated to spot such a trivial error. Wrappers The simple solution would be to create wrapper classes for each type of score: case class Design(val score: Int) extends AnyVal case class Usability(val score: Int) extends AnyVal case class Durability(val score: Int) extends AnyVal case class Scoring(user: User, product: Product, design: Design, usability: Usability, durability: Durability) def parseScoring(row: Row): Scoring = Scoring( parseUser(row), parseProduct(row), Design(row[Int]("design")), Usability(row[Int]("usability")), Durability(row[Int]("durability")) ) This makes a bit more code to write, but it explicitly marks the semantic of the Ints that you are passing in; making the code easier to understand and the scala compiler will now complain when you create Scoring with the scores in the wrong order. If you instantiate a Usability score with the "durability" column, the compiler won't know the different, but an attentive reviewer would quickly spot the confusion. Note the AnyVal extension on the case classes, this is a new feature of scala 2.10, called value classes. This tells the compiler that these types are just wrappers for typechecking, but that they can be optimized at runtime to their inner value, without the expensive creation of the wrapper object. This is already quite a good solution and it will go a long way. But imagine that you now want to compute the average of all the scores of a product, you might have a list of Scoring for a product and will foldLeft on it to sum up every scores. However, because you have included new types, you can't just sum prod1.design + prod2.design as the operation + is not defined on the Design class. You end up having to either write: Design(prod1.design.score + prod2.design.score) which makes your code more cumbersome than it should be. You could also implement a proxy + operator in all new wrapper case class and replicate all other operations you might need within the wrapper. When it's just one operation, for three case classes, that's fine. But if you deal with more complex basic types, such as String or composed types, such as List[Int] or Future[Int], you might end up having to proxy a lot of operations in your wrappers. If you have loads of such wrappers, it might be a lot of extra code. Unboxed Tagged Types The idea that I have decided to follow, inspired by Eric's article, is to use "Unboxed Tagged Types". The idea is that you can "tag" types (and not just basic types) with a keyword, without loosing all its operations as you would with a simple wrapper. The principle is simple and it is basic object oriented inheritance: to retain all the features of Int you can define a class: class DesignScore extends Int However, we use a scala trick to make this definition easier and to avoid actually creating new objects: type aliases. It takes a very simple code scaffold: //code from Eric's article type Tagged[U] = { type Tag = U } type @@[T, U] = T with Tagged[U] Tagged[U]defines a new type, which just adds a type property containing the parameter type U. I.E it's a type that has an annotation U. So if you wrote type TaggedInt = Int with Tagged[Design], you would just define a new type alias, which extends Intand tags it with the new inner type property type Tag = Design. @@[T, U]is only a convenient shortcut to extend types with the Tagged interface. So you could write the previous example with: type TaggedInt = Int @@ Design You can add this snippet to your own code or use the implementation provided by scalaz, which is pretty much equivalent. Right, this is still pretty abstract; so don't worry if you are not getting it yet. Just see it as a tool to annotate types. Let's see how to use it now with our previous example. First, we define "tags" that will allow us to annotate the Int basic type with some contextual information: trait Design trait Usability trait Durability These traits don't do anything except existing as a known type for the compiler, you can see them as constants, or values of an enum, at the type level. We can now use them to define our tagged types: type DesignScore = Int @@ Design type UsabilityScore = Int @@ Usability type DurabilityScore = Int @@ Durability These type aliases just represent an extension of the type Int with an embedded type parameter Tag. We can use these as we would use any other type, so as for our previous example: case class Scoring(user: User, product: Product, design: DesignScore, usability: UsabilityScore, durability: DurabilityScore) Because DesignScore is just a subclass of Int, it inherits all its operations, and we can now use the sum operator: prod1.design+prod2.design. However, we are still missing something: scala doesn't know how to create a DesignScore, as when you write val score = 1, you get an Int and scala doesn't know apriori how to convert this to a subtype of Int. This actually plays in our favor, as it will force us to explicitely write the conversion when we need to assign an integer to something that needs a DesignScore. Let's write an extension of Int that knows how to do this conversion: implicit class TaggedInt(val i: Int) extends AnyVal { def design = i.asInstanceOf[DesignScore] def usability = i.asInstanceOf[UsabilityScore] def durability = i.asInstanceOf[DurabilityScore] } Note a few things: - we use a cast to transform a superclass ( Int) in a subclass ( DesignScore), it might seem dirty, but it's safe as we will never encounter a case where we cannot perform the case, - we are using a new feature of scala 2.10: an implicit class that extends Intwith the new methods, - we shouldn't use a direct implicit conversion between Intand DesignScoreas we would loose the fact that now developers have to be explicit about how they want to use the Inttype. Nothing would then stop you from using the Usability Int in a DesignScore slot. - while I like the postfix operator style, you could just define unary operators if you prefer, without using an implicit class at all: def design(i: Int): DesignScore = i.asInstanceOf[DesignScore]. We can now create the Scoring object in the following manner Scoring(u1, p1, 10.design, 15.usability, 20.durability). And in our previous database parsing example: def parseScoring(row: Row): Scoring = Scoring( parseUser(row), parseProduct(row), row[Int]("design").design, row[Int]("usability").durability, row[Int]("durability").usability ) As you can see, it's already a bit easier to spot the error, and if you don't the compiler will complain anyway. I hope that through this simplistic example, you have been able to understand how Unboxed Tagged Types can help. Explicitly annotating the use of non semantic basic types ( Int, String, List[Int], …) in your code will have two benefits: - in many cases, the compiler will warm you when you are using a value in the wrong slot. - by annotating method signatures (in and out types), and the uses of raw types in your code, this one will be easier to understand and you will hopefully make less errors and quickly spot bugs. Parameter Validation As George Leontiev points out, you can also use this feature to perform validation. In our example, the scores are unbounded, you could pass in any value without anyone complaining. Ideally, you would probably want to have them in a bounded range, let's say between 0 and 5. The usual approach would be to validate the input when the user inputs a value. But then, in the rest of the code, there is nothing stopping you from making bad operations on the values. Clearly, this is not something you can deal with at compile time as you do not know what user inputs you will get or what the values in the database will be. However, you can build in some gards to warn you when something is wrong. As we have seen, we cannot build the tagged types without going through the explicit call to the design, durability or usability functions. So we can just extend these functions to add checks on the ranges: implicit class TaggedInt(val i: Int) extends AnyVal { def design = { require(i >= 0 && i <= 5, "the design score has to be between 0 and 5") i.asInstanceOf[DesignScore] } ... } Now, the validation of your parameters is built-in the "type" that defines it, if you call Scoring(u1, p1, 10.design, 15.usability, 20.durability), an IllegalArgumentException will be thrown. If you check for such exception when receiving a score value, you can deal with it properly, report an error to the user, etc. If you prefer to use the validation pattern instead of catching exceptions, you can use scalaz Validation monad, or scala default Option or Either constructs: implicit class TaggedInt(val i: Int) extends AnyVal { def design: Either[String, DesignScore] = { if(i >= 0 && i <= 5){ Right(i.asInstanceOf[DesignScore]) } else { Left("the design score has to be between 0 and 5") } } ... } Written by Pierre Andrews Related protips Scala fold, foldLeft, and foldRight 1 Response Add your response Which version of scala were you using to test this. Use of tagged types and case classes still does not seem to work (as of 10.2 which I am using) : Have a fresh tip? Share with Coderwall community!
https://coderwall.com/p/l-plmq/adding-semantic-to-base-types-parameters-in-scala
CC-MAIN-2021-49
en
refinedweb
table of contents NAME¶ CURLOPT_PROXYUSERPWD - user name and password to use for proxy authentication SYNOPSIS¶ #include <curl/curl.h> CURLcode curl_easy_setopt(CURL *handle, CURLOPT_PROXYUSERPWD, char *userpwd); DESCRIPTION¶(3) is used - beware.) Use CURLOPT_PROXYAUTH(3) to specify the authentication method. The application does not have to keep the string around after setting this option. DEFAULT¶ This is NULL by default. PROTOCOLS¶ Used with all protocols that can use a proxy EXAMPLE¶¶ Always RETURN VALUE¶ Returns CURLE_OK if proxies are supported, CURLE_UNKNOWN_OPTION if not, or CURLE_OUT_OF_MEMORY if there was insufficient heap space. SEE ALSO¶ CURLOPT_PROXY(3), CURLOPT_PROXYTYPE(3),
https://manpages.debian.org/bullseye/libcurl4-doc/CURLOPT_PROXYUSERPWD.3.en.html
CC-MAIN-2021-49
en
refinedweb
table of contents NAME¶ storage.conf - Syntax of Container Storage configuration file DESCRIPTION¶ The STORAGE configuration file specifies all of the available container storage options for tools using shared container storage, but in a TOML format that can be more easily modified and versioned. FORMAT¶ The [TOML format][toml] is used as the encoding of the configuration file. Every option and subtable listed here is nested under a global "storage" table. No bare options are used. The format of TOML can be simplified to: [table] option = value [table.subtable1] option = value [table.subtable2] option = value STORAGE TABLE¶ The storage table supports the following options: driver="" container storage driver Default Copy On Write (COW) container storage driver. Valid drivers are "overlay", "vfs", "devmapper", "aufs", "btrfs", and "zfs". Some drivers (for example, "zfs", "btrfs", and "aufs") may not work if your kernel lacks support for the filesystem. This field is required to guarantee proper operation. Valid rootless drivers are "btrfs", "overlay", and "vfs". Rootless users default to the driver defined in the system configuration when possible. When the system configuration uses an unsupported rootless driver, rootless users default to "overlay" if available, otherwise "vfs". graphroot="" container storage graph dir (default: "/var/lib/containers/storage") Default directory to store all writable content created by container storage programs. The rootless graphroot path supports environment variable substitutions (ie. $HOME/containers/storage) environment variable substitutions (ie. $HOME/containers/storage) A common use case for this field is to provide a local storage directory when user home directories are NFS-mounted (podman does not support container storage over NFS). runroot="" container storage run dir (default: "/run/containers/storage") Default directory to store all temporary writable content created by container storage programs. The rootless runroot path supports environment variable substitutions (ie. $HOME/containers/storage) STORAGE OPTIONS TABLE¶ The storage.options table supports the following options: additionalimagestores=[] Paths to additional container image stores. Usually these are read/only and stored on remote network shares. remap-uids="" remap-gids="" Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of a container, to the UIDs/GIDs. Example remap-uids = 0:1668442479:65536 remap-gids = 0:1668442479:65536 These mappings tell the container engines to map UID 0 inside of the container to UID 1668442479 outside. UID 1 will be mapped to 1668442480. UID 2 will be mapped to 1668442481, etc, for the next 65533 UIDs in succession. remap-user="" remap-group="" Remap-User/Group is a user name which can be used to look up one or more UID/GID ranges in the /etc/subuid or /etc/subgid file. Mappings are set up starting with an in-container ID of 0 and then a host-level ID taken from the lowest range that matches the specified name, and using the length of that range. Additional ranges are then assigned, using the ranges which specify the lowest host-level IDs first, to the lowest not-yet-mapped in-container ID, until all of the entries have been used for maps. Example remap-user = "containers" remap-group = "containers" root-auto-userns-user="" Root-auto-userns-user is a user name which can be used to look up one or more UID/GID ranges in the /etc/subuid and /etc/subgid file. These ranges will be partitioned to containers configured to create automatically a user namespace. Containers configured to automatically create a user namespace can still overlap with containers having an explicit mapping set. This setting is ignored when running as rootless. auto-userns-min-size=1024 Auto-userns-min-size is the minimum size for a user namespace created automatically. auto-userns-max-size=65536 Auto-userns-max-size is the maximum size for a user namespace created automatically. disable-volatile=true If disable-volatile is set, then the "volatile" mount optimization is disabled for all the containers. STORAGE OPTIONS FOR AUFS TABLE¶ The storage.options.aufs table supports the following options: mountopt="" Comma separated list of default options to be used to mount container images. Suggested value "nodev". Mount options are documented in the mount(8) man page. STORAGE OPTIONS FOR BTRFS TABLE¶ The storage.options.btrfs table supports the following options: min_space="" Specifies the min space in a btrfs volume. size="" Maximum size of a container image. This flag can be used to set quota on the size of container images. (format: [], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes)) STORAGE OPTIONS FOR THINPOOL (devicemapper) TABLE¶ The storage.options.thinpool table supports the following options for the devicemapper driver: autoextend_percent="" Tells the thinpool driver the amount by which the thinpool needs to be grown. This is specified in terms of % of pool size. So a value of 20 means that when threshold is hit, pool will be grown by 20% of existing pool size. (default: 20%) autoextend_threshold="" Tells the driver the thinpool extension threshold in terms of percentage of pool size. For example, if threshold is 60, that means when pool is 60% full, threshold has been hit. (default: 80%) basesize="" Specifies the size to use when creating the base device, which limits the size of images and containers. (default: 10g) blocksize="" Specifies a custom blocksize to use for the thin pool. (default: 64k) directlvm_device="" Specifies a custom block storage device to use for the thin pool. Required for using graphdriver devicemapper. directlvm_device_force="" Tells driver to wipe device (directlvm_device) even if device already has a filesystem. (default: false) fs="xfs" Specifies the filesystem type to use for the base device. (default: xfs) log_level="" Sets the log level of devicemapper. 0: LogLevelSuppress 0 (default) 2: LogLevelFatal 3: LogLevelErr 4: LogLevelWarn 5: LogLevelNotice 6: LogLevelInfo 7: LogLevelDebug metadata_size="" metadata_size is used to set the pvcreate --metadatasize options when creating thin devices. (Default 128k) min_free_space="" Specifies the min free space percent in a thin pool required for new device creation to succeed. Valid values are from 0% - 99%. Value 0% disables. (default: 10%) mkfsarg="" Specifies extra mkfs arguments to be used when creating the base device. mountopt="" Comma separated list of default options to be used to mount container images. Suggested value "nodev". Mount options are documented in the mount(8) man page. size="" Maximum size of a container image. This flag can be used to set quota on the size of container images. (format: [], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes)) use_deferred_deletion="" Marks thinpool device for deferred deletion. If the thinpool is in use when the driver attempts to delete it, the driver will attempt to delete device every 30 seconds until successful, or when it restarts. Deferred deletion permanently deletes the device and all data stored in the device will be lost. (default: true). use_deferred_removal="" Marks devicemapper block device for deferred removal. If the device is in use when its driver attempts to remove it, the driver tells the kernel to remove the device as soon as possible. Note this does not free up the disk space, use deferred deletion to fully remove the thinpool. (default: true). xfs_nospace_max_retries="" Specifies the maximum number of retries XFS should attempt to complete IO when ENOSPC (no space) error is returned by underlying storage device. (default: 0, which means to try continuously.) STORAGE OPTIONS FOR OVERLAY TABLE¶ The storage.options.overlay) inodes="" Maximum inodes in a read/write layer. This flag can be used to set a quota on the inodes allocated for a read/write layer of a container. force_mask = "0000|shared|private" ForceMask specifies the permissions mask that is used for new files and directories. The values "shared" and "private" are accepted. (default: ""). Octal permission masks are also accepted. ``: Not set All files/directories, get set with the permissions identified within the image. private: it is equivalent to 0700. All files/directories get set with 0700 permissions. The owner has rwx access to the files. No other users on the system can access the files. This setting could be used with networked based home directories. shared: it is equivalent to 0755. The owner has rwx access to the files and everyone else can read, access and execute them. This setting is useful for sharing containers storage with other users. For instance, a storage owned by root could be shared to rootless users as an additional store. NOTE: All files within the image are made readable and executable by any user on the system. Even /etc/shadow within your image is now readable by any user. OCTAL: Users can experiment with other OCTAL Permissions. Note: The force_mask Flag is an experimental feature, it could change in the future. When "force_mask" is set the original permission mask is stored in the "user.containers.override_stat" xattr and the "mount_program" option must be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the extended attribute permissions to processes within containers rather then the "force_mask" permissions. mount_program="" Specifies the path to a custom program to use instead of using kernel defaults for mounting the file system. In rootless mode, without the CAP_SYS_ADMIN capability, many kernels prevent mounting of overlay file systems, requiring you to specify a mount_program. The mount_program option is also required on systems where the underlying storage is btrfs, aufs, zfs, overlay, or ecryptfs based file systems. mount_program = "/usr/bin/fuse-overlayfs" mountopt="" Comma separated list of default options to be used to mount container images. Suggested value "nodev". Mount options are documented in the mount(8) man page. size="" Maximum size of a read/write layer. This flag can be used to set quota on the size of a read/write layer of a container. (format: [], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes)) STORAGE OPTIONS FOR VFS TABLE¶ The storage.options.vfs) STORAGE OPTIONS FOR ZFS TABLE¶ The storage.options.zfs table supports the following options: fsname="" File System name for the zfs driver mountopt="" Comma separated list of default options to be used to mount container images. Suggested value "nodev". Mount options are documented in the mount(8) man page. skip_mount_home="" Tell storage drivers to not create a PRIVATE bind mount on their home directory. size="" Maximum size of a container image. This flag can be used to set quota on the size of container images. (format: [], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes)) SELINUX LABELING¶ When running on an SELinux system, if you move the containers storage graphroot directory, you must make sure the labeling is correct. Tell SELinux about the new containers storage by setting up an equivalence record. This tells SELinux to label content under the new path, as if it was stored under /var/lib/containers/storage. semanage fcontext -a -e /var/lib/containers NEWSTORAGEPATH restorecon -R -v NEWSTORAGEPATH The semanage command above tells SELinux to setup the default labeling of NEWSTORAGEPATH to match /var/lib/containers. The restorecon command tells SELinux to apply the labels to the actual content. Now all new content created in these directories will automatically be created with the correct label. QUOTAS¶ Container storage implements XFS project quota controls for overlay storage containers and volumes. The directory used to store the containers must be an XFS file system and be mounted with the pquota option. Example /etc/fstab entry: /dev/podman/podman-var /var xfs defaults,x-systemd.device-timeout=0,pquota 1 2 Container storage generates project ids for each container and builtin volume, but these project ids need to be unique for the XFS file system. The xfs_quota tool can be used to assign a project id to the storage driver directory, e.g.: echo 100000:/var/lib/containers/storage/overlay >> /etc/projects echo 200000:/var/lib/containers/storage/volumes >> /etc/projects echo storage:100000 >> /etc/projid echo volumes:200000 >> /etc/projid xfs_quota -x -c 'project -s storage volumes' /<xfs mount point> In the example above, the storage directory project id will be used as a "start offset" and all containers will be assigned larger project ids (e.g. >= 100000). Then the volumes directory project id will be used as a "start offset" and all volumes will be assigned larger project ids (e.g. >= 200000). This is a way to prevent xfs_quota management from conflicting with containers/storage. FILES¶ Distributions often provide a /usr/share/containers/storage.conf file to define default storage configuration. Administrators can override this file by creating /etc/containers/storage.conf to specify their own configuration. The storage.conf file for rootless users is stored in the $XDG_CONFIG_HOME/containers/storage.conf file. If $XDG_CONFIG_HOME is not set then the file $HOME/.config/containers/storage.conf is used. /etc/projects - XFS persistent project root definition /etc/projid - XFS project name mapping file SEE ALSO¶ semanage(8), restorecon(8), mount(8), fuse-overlayfs(1), xfs_quota(8), projects(5), projid(5) HISTORY¶ May 2017, Originally compiled by Dan Walsh dwalsh@redhat.com ⟨mailto:dwalsh@redhat.com⟩ Format copied from crio.conf man page created by Aleksa Sarai asarai@suse.de ⟨mailto:asarai@suse.de⟩
https://manpages.debian.org/unstable/containers-storage/containers-storage.conf.5.en.html
CC-MAIN-2021-49
en
refinedweb
Checkpointing trials¶ Hint In short, you should use “{experiment.working_dir}/{trial.hash_params}” to set the path of the checkpointing file. When using multi-fidelity algorithms such as Hyperband it is preferable to checkpoint the trials to avoid starting training from scratch when resuming a trial. In this tutorial for instance, hyperband will train VGG11 for 1 epoch, pick the best candidates and train them for 7 more epochs, doing the same again for 30 epoch, and then 120 epochs. We want to resume training at last epoch instead of starting from scratch. Oríon provides a unique hash for trials that can be used to define the unique checkpoint file path: trial.hash_params. This can be used with the Python API as demonstrated in this example or with Command-line templating. With command line¶ The example below is based on the Python API solely. It is also possible to do checkpointing using the command line API. To this end, your script should accept an argument for the checkpoint file path. Suppose this argument is --checkpoint, you should call your script with the following template. orion hunt -n <exp name> ./your_script.sh --checkpoint '{experiment.working_dir}/{trial.hash_params}' Your script is reponsible to take this checkpoint path, resume from checkpoints or same checkpoints. We will demonstrate below how this can be done with PyTorch, but using Oríon’s Python API. Training code¶ We will first go through the training code piece by piece before tackling the hyperparameter optimization. First things first, the imports. import numpy import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import SubsetRandomSampler import torchvision import torchvision.models as models import torchvision.transforms as transforms import os import argparse We will use the data SubsetRandomSampler data loader from PyTorch to split the training set into a training and validation sets. We include test set here for completeness but won’t use it in this example as we only need the training data and the validation data for the hyperparameter optimization. We use torchvision’s transformers to apply the standard transformations on CIFAR10 images, that is, random cropping, random horizontal flipping and normalization. def build_data_loaders(batch_size, split_seed=1): normalize = [ transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ] augment = [ transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), ] train_set = torchvision.datasets.CIFAR10( root="./data", train=True, download=True, transform=transforms.Compose(augment + normalize), ) valid_set = torchvision.datasets.CIFAR10( root="./data", train=True, download=True, transform=transforms.Compose(normalize), ) test_set = torchvision.datasets.CIFAR10( root="./data", train=False, download=True, transform=transforms.Compose(normalize), ) num_train = 45000 # num_valid = 5000 indices = numpy.arange(num_train) numpy.random.RandomState(split_seed).shuffle(indices) train_idx, valid_idx = indices[:num_train], indices[num_train:] train_sampler = SubsetRandomSampler(train_idx) valid_sampler = SubsetRandomSampler(valid_idx) train_loader = torch.utils.data.DataLoader( train_set, batch_size=batch_size, sampler=train_sampler, num_workers=5 ) valid_loader = torch.utils.data.DataLoader( train_set, batch_size=1000, sampler=train_sampler, num_workers=5 ) test_loader = torch.utils.data.DataLoader( test_set, batch_size=1000, shuffle=False, num_workers=5 ) return train_loader, valid_loader, test_loader Next, we write the function to save checkpoints. It is important to include not only the model in the checkpoint, but also the optimizer and the learning rate schedule when using one. In this example we will use the exponential learning rate schedule, so we checkpoint it. We save the current epoch as well so that we now where we resume from. def save_checkpoint(checkpoint, model, optimizer, lr_scheduler, epoch): if checkpoint is None: return state = { "model": model.state_dict(), "optimizer": optimizer.state_dict(), "lr_scheduler": lr_scheduler.state_dict(), "epoch": epoch, } torch.save(state, f"{checkpoint}/checkpoint.pth") To resume from checkpoints, we simply restore the states of the model, optimizer and learning rate schedules based on the checkpoint file. If there is no checkpoint path or if the file does not exist, we return epoch 1 so that training starts from scratch. Otherwise we return the last trained epoch number found in checkpoint file. def resume_from_checkpoint(checkpoint, model, optimizer, lr_scheduler): if checkpoint is None: return 1 try: state_dict = torch.load(f"{checkpoint}/checkpoint.pth") except FileNotFoundError: return 1 model.load_state_dict(state_dict["model"]) optimizer.load_state_dict(state_dict["optimizer"]) lr_scheduler.load_state_dict(state_dict["lr_scheduler"]) return state_dict["epoch"] + 1 # Start from next epoch Then comes the training loop for one epoch. def train(loader, device, model, optimizer, lr_scheduler, criterion): model.train() for batch_idx, (inputs, targets) in enumerate(loader): inputs, targets = inputs.to(device), targets.to(device) optimizer.zero_grad() outputs = model(inputs) loss = criterion(outputs, targets) loss.backward() optimizer.step() lr_scheduler.step() Finally the validation loop to compute the validation error rate. def valid(loader, device, model): model.eval() correct = 0 total = 0 with torch.no_grad(): for batch_idx, (inputs, targets) in enumerate(loader): inputs, targets = inputs.to(device), targets.to(device) outputs = model(inputs) _, predicted = outputs.max(1) total += targets.size(0) correct += predicted.eq(targets).sum().item() return 100.0 * (1 - correct / total) We combine all these functions into a main function for the whole training pipeline. Note We set batch_size to 1024 by default, you may need to reduce it depending on your GPU. def main( epochs=120, learning_rate=0.1, momentum=0.9, weight_decay=0, batch_size=1024, gamma=0.97, checkpoint=None, ): # We create the checkpointing folder if it does not exist. if checkpoint and not os.path.isdir(checkpoint): os.makedirs(checkpoint) device = "cuda" if torch.cuda.is_available() else "cpu" model = models.vgg11() model = model.to(device) # We define the training criterion, optimizer and learning rate scheduler criterion = nn.CrossEntropyLoss() optimizer = optim.SGD( model.parameters(), lr=learning_rate, momentum=momentum, weight_decay=weight_decay, ) lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma) # We restore the states of model, optimizer and learning rate scheduler if a checkpoint file is # available. This will return the last epoch number of the checkpoint or 1 if no checkpoint. start_epoch = resume_from_checkpoint(checkpoint, model, optimizer, lr_scheduler) # We build the data loaders. test_loader is here for completeness but won't be used. train_loader, valid_loader, test_loader = build_data_loaders(batch_size=batch_size) # If no training needed, because the trial was resumed from an epoch equal or greater to number # of epochs requested here (``epochs``). if start_epoch >= epochs + 1: return valid(valid_loader, device, model) # Training from last epoch until ``epochs + 1``, checkpointing at end of each epoch. for epoch in range(start_epoch, epochs + 1): print("epoch", epoch) train(train_loader, device, model, optimizer, lr_scheduler, criterion) valid_error_rate = valid(valid_loader, device, model) save_checkpoint(checkpoint, model, optimizer, lr_scheduler, epoch) return valid_error_rate You can test the training pipeline before working with the hyperparameter optimization. main(epochs=4) HPO code¶ We finally implement the hyperparameter optimization loop. We will use Hyperband with the number of epochs as the fidelity, using the prior fidelity(1, 120, base=4). Hyperband will thus train VGG11 for 1, 7, 30 and 120 epochs. To explore enough candidates at 120 epochs, we set Hyperband with 5 repetitions. In the optimization loop ( while not experiment.is_done), we ask Oríon to suggest a new trial and then pass the hyperparameter values **trial.params to main(), specifying the checkpoint file with f"{experiment.working_dir}/{trial.hash_params}". from orion.client import build_experiment def run_hpo(): # Specify the database where the experiments are stored. We use a local PickleDB here. storage = { "type": "legacy", "database": { "type": "pickleddb", "host": "./db.pkl", }, } # Load the data for the specified experiment experiment = build_experiment( "hyperband-cifar10", space={ "epochs": "fidelity(1, 120, base=4)", "learning_rate": "loguniform(1e-5, 0.1)", "momentum": "uniform(0, 0.9)", "weight_decay": "loguniform(1e-10, 1e-2)", "gamma": "loguniform(0.97, 1)", }, algorithms={ "hyperband": { "seed": 1, "repetitions": 5, }, }, storage=storage, ) trials = 1 while not experiment.is_done: print("trial", trials) trial = experiment.suggest() if trial is None and experiment.is_done: break valid_error_rate = main( **trial.params, checkpoint=f"{experiment.working_dir}/{trial.hash_params}" ) experiment.observe(trial, valid_error_rate, name="valid_error_rate") trials += 1 Let’s run the optimization now. You may want to reduce the maximum number of epochs in fidelity(1, 120, base=4) and set the number of repetitions to 1 to get results more quickly. With current configuration, this example takes 2 days to run on a Titan RTX. experiment = run_hpo() Analysis¶ That is all for the checkpointing example. We should nevertheless analyse the results before wrapping up this tutorial. We should first look at the Regret curves to verify the optimization with Hyperband. fig = experiment.plot.regret() fig.show()
https://orion.readthedocs.io/en/v0.1.16/auto_tutorials/code_2_hyperband_checkpoint.html
CC-MAIN-2021-49
en
refinedweb
In this question, we will learn how to calculate the Sum of numbers in String Problem Statement In the “Calculate Sum of all Numbers Present in a String” problem we have given a string “s”. This string contains some alphanumeric numbers and some English lowercase characters. Write a program that will calculate all the numbers present in that string and print the final answer. Input Format The first and only one line containing a string “s”. Output Format The first and only one line containing an integer value N which represents the sum of all numbers present in the string. Constraints - 1<=|s|<=10^6 - s[i] must be a lower case English alphabet or a digit from 0 to 9(inclusive). - The number present in the given string “s” is not greater than 10^9 Example a123b12c1d 136 Explanation: Here the number present in the given string “a123b12c1d” are 123, 12, 1. So, the sum of these numbers is 136. Algorithm The only tricky part in this question is that multiple consecutive digits are considered as one number. The idea is very simple. We scan each character of the input string and if a number is formed by consecutive characters of the string, we increment the result by that amount. - Set “ans” to zero and take input string from the user in “s”. - Traverse the string character by character. - If the current character is a lower case English alphabet then move to the next character. - Else update ans by that amount only. - Print the final answer which is stored in “ans”. Implementation C++ Program to Calculate Sum of all Numbers Present in a String #include <bits/stdc++.h> using namespace std; int main() { string s; cin>>s; int ans=0; string t=""; for(char ch : s) { if(ch>='0' && ch<='9') { t+=ch; } else { if(t.length()>0) ans+=stoi(t); t=""; } } if(t.length()>0) ans+=stoi(t); cout<<ans<<endl; return 0; } Java Program to find Sum of numbers in String import java.util.Scanner; import java.util.Vector; class sum { public static void main(String[] args) { Scanner sr = new Scanner(System.in); String s = sr.next(); int ans=0; String t=""; for(int i=0;i<s.length();i++) { if(s.charAt(i)>='0' && s.charAt(i)<='9') { t+=s.charAt(i); } else { if(t.length()>0) ans+=Integer.parseInt(t); t=""; } } if(t.length()>0) ans+=Integer.parseInt(t); System.out.println(ans); } } 123a21bc1sqvaus 145 Complexity Analysis to calculate Sum of numbers in String Time Complexity O(n) where n is the length of the given string “s”. Here we visit the whole string char by char and perform the operation in constant time. Space Complexity O(1) because we only store the sum of the number present in the given string.
https://www.tutorialcup.com/interview/string/calculate-sum-of-all-numbers-present-in-a-string.htm
CC-MAIN-2021-49
en
refinedweb
Iterables and iterators Looping is one of the most impressive features of Python. We can find it almost everywhere. We can loop over the built-in types such as dictionary, list, tuple, string or even custom object. For loop syntax is slightly different from many other languages. It does not require any initial value or conditional expression indicating when to stop. Instead, it gathers next element from an iterable in every loop iteration. Iterable is an object that implements the iterator protocol. It simply means that the object should be of a class that contains two magic methods: __next__() and __iter__(). - __iter__() – returns an iterator object. Usually it returns ‘self’ indicating that the class is iterable. However, it can be any other iterator (or generator). - __next__() – returns next value of an iterator. Usually it raises StopIteration exception that indicates that there is no next value and looping over the object should stop. If __iter__() returns generator, there is no need to implement this method. Let’s consider the following example of iterating over Fibonacci numbers: class Fibonacci: def __init__(self, numbers): self._numbers = numbers def __iter__(self): self._x = 1 self._y = 1 self._counter = 0 return self def __next__(self): current_number = self._x self._counter += 1 if self._counter > self._numbers: raise StopIteration self._x, self._y = self._y, self._x + self._y return current_number for fib_number in Fibonacci(10): print(fib_number) In the above example our ‘for’ loop invoke __iter__() method of the class when we create an object. Without this method, the TypeError exception would be raised. Then on every iteration the __next__() method will be called to get the next value. This proceeds until StopIteration is raised to indicate there is no next value. Iterators allow us to create more readable code and treat any object as iterable that can be used in a loop. Moreover, it helps us to save memory usage. Typically when we want to loop over build-in collections such as dictionary, list or tuple we need to have the whole collection in a memory, while iterators require only one element in a given iteration. To sum up, iterators should be used as a good practice to increase readability of the code and to save the memory usage. However, there is a lot of boilerplate and if there is no need to create the whole class to iterate over we should create generator instead. Generators Generator is another Python construct that makes looping over specific elements even simpler than iterators. It is a function with a yield keyword instead of return statement. It is enough for Python interpreter to know that the function is a generator. Generators differ from ordinary functions in the way on how they work. When we invoke a function, it executes until the body ends or the return statement is met. On the other hand, if we invoke generator, we get the generator object in idle state i.e. no code is executed yet. At every iteration the code starts to execute until it meets the yield keyword. It returns the current value and become idle again. It proceeds until particular condition is met (the generator body ends without yield statement or StopIteration exception is raised). Let’s compare the conventional way of implementing the Fibonacci function and the generator one. def fibonacci_func(numbers): results = [] current_number = 0 x, y = 1, 1 while current_number < numbers: current_element = x x, y = y, x+y current_number += 1 results.append(current_element) return results for fib_number in fibonacci_func(10000): print(fib_number) This function returns the list of elements that we iterate over. The similar example with generator will be as follows: def fibonacci_gen(numbers): current_number = 0 x, y = 1, 1 while current_number < numbers: current_element = x x, y = y, x+y current_number += 1 yield current_element for fib_number in fibonacci_gen(10000): print(fib_number) As a result both solutions do exactly the same but they differ with the memory usage. Similarly to iterators, generators do not need to load the whole collection into the memory. We can check this with the getsizeof() function: from sys import getsizeof print(getsizeof(fib_gen)) # 48 print(getsizeof(fibonacci_func)) # 4516 As we can see, Generators in usage are very similar to Iterators. Both of them are a great option for working with vast collections or resources to save memory usage. However, there are some advantages of using first over the second: - Reduced boilerplate - Creating generators are simpler and more readable Generators can also be used in the iterator protocol. Instead of implementing both __iter__() and __next__() methods, we can make __iter__() to be a generator. With combining those two constructs together we can rewrite the Fibonacci class to look like this: class Fibonacci: def __init__(self, numbers): self._numbers = numbers def __iter__(self): current_number = 0 x, y = 1, 1 while current_number < self._numbers: current_element = x x, y = y, x+y current_number += 1 yield current_element for fib_number in Fibonacci(10): print(fib_number) In conclusion, a good practice is to use generators every time we want to loop over vast number of elements or resources if we want to save memory space. Moreover, we should use generators instead of iterators to reduce boilerplate if we do not need to create the whole class. Otherwise we can combine the usage of iterators and generators. Comprehensions Comprehension is another Python construct that allows us to create collections in a more concise way. It is driven by one of Python design principles saying that “Flat is better then nested”. We can use comprehensions for lists, sets or even dictionaries. Basically it uses generator expression to create generator object and unpack it to the collection we want to. Generator expressions are slightly different from generators. They are mostly one-line expressions with implicit yield statement e.g. gen_expr = (x for x in [1, 2, 3]) The above line created a generator object that yields every element of the provided collection. This generator can be unpacked to the list in the following way: list_ = list(gen_expr) Now let’s consider the situation when we want to create a list of elements multiplied by 2 from another list. The conventional way of doing this would be as follows: numbers = [1, 2, 3, 4, 5] multiplied = [] for number in numbers: multiplied.append(number*2) Comprehensions use generator expressions to create collections on-the-fly (list, dictionary etc.). The code snippet can be rewritten with this construct: multiplied = [number*2 for number in numbers] As we can see it is a great way to create collections in a very concise way. Comprehensions are not limited to lists. This way we can create dictionaries and even immutable tuples. Let’s say we have a dictionary of countries population and we want to create another dictionary with countries with over 1 million population. country_population = { "Afghanistan": 22720000, "Albania": 3401200, "Andorra": 78000, "Luxembourg": 435700, "Montserrat": 11000, "United Kingdom": 59623400, "United States": 278357000, "Zimbabwe": 11669000 } over_mln_population = {country: population for country, population in country_population.items() if population > 1000000} Or create immutable tuple with the names of these countries e.g.: over_mln_population = tuple(country for country, population in country_population.items() if population > 1000000) Comprehensions may be used in a more complex way to combine multiple collections into one. Here’s the example: characters_per_serie = [ { 'serie': 'How I Met Your Mother', 'characters': ['Barney', 'Ted', 'Marshall', 'Robin', 'Lily'] }, { 'serie': 'Friends', 'characters': ['Ross', 'Rachel', 'Phoebe', 'Monica', 'Joey', 'Chandler'] }, { 'serie': 'The Big Bang Theory', 'characters': ['Sheldon', 'Penny', 'Leonard', 'Rajesh', 'Howard', 'Amy', 'Bernadette'] } ] actresses = { 'Rachel': 'Jennifer Aniston', 'Monica': 'Courteney Cox', 'Phoebe': 'Lisa Kudrow', 'Penny': 'Kaley Cuoco', 'Bernadette': 'Melissa Rauch', 'Amy': 'Mayim Bialik', 'Robin': 'Cobie Smulders', 'Lily': 'Alyson Hannigan' } actresses_per_serie = {actresses[character]: characters_per_serie_['serie'] for characters_per_serie_ in characters_per_serie for character in characters_per_serie_['characters'] if character in actresses.keys()} As a result we got the dictionary where keys are actresses and values are series they played in. Comprehensions give us the possibility to create any collection from another in a very simple, concise and readable way with significant code reduction. In conclusion, it is a good practice to use comprehensions to create collections of items on-the-fly to avoid nested blocks, reduce code volume and increase its readability. Context managers Context Manager is another construct that allows to write code in a safer and more readable way. Working with resources e.g. files is the most common usage of this construct. Let’s consider the conventional way of opening and closing the file: file = open('filepath', 'r') file.close() However, consider the following code: file = open('filepath', 'r') raise SomeException('Something went terribly wrong') file.close() As we can assume, the file is not being closed. To overcome this, the code can be surrounded with a try … except block. Nevertheless, Context Managers can be used instead: with open('filepath', 'r'): raise SomeException('Something went terribly wrong') With this construct we are assured that the file will have been closed before program is interrupted. - Creating Context Managers Context Managers are not limited to resources. We can create custom ones. It can be done in various ways. Consider the object that is expected to perform some action at the beginning and in the end of some particular operation. class CtxMngr: def do_something(self, raise_exception): print('Before exception') if raise_exception: raise Exception() print('After exception') def __enter__(self): print('Enter') return self def __exit__(self, exc_type, exc_val, exc_tb): print('Exit') with CtxMngr() as ctx_mngr: ctx_mngr.do_something(raise_exception=True) When the program starts, we can see that ‘After exception’ is not printed out but ‘Exit’ is. This makes us be sure that something will be done before leaving the Context Manager block. Another way to create context managers is to use contextlib module that contains contextmanager decorator. It should decorate a generator that has one yield statement. Everything before it will be assumed as __enter__ block and after will be the equivalence to __exit__. Here is an example of contextmanager decorator usage: import contextlib @contextlib.contextmanager def ctx_mngr(): print('Enter') yield print('Exit') with ctx_mngr() as ctx_mngr: raise Exception() Context Managers should be used as a good practice when we want to be ensured that something will be always done at the beginning and in the end of some operation. They are invaluable while working with resources to assure that unexpected behavior will not result in resource leak. Decorators Decorators are another Python construct. They behave like one of the design patterns with the same name. They are functions that extend functionality of another one. Python has some useful built-in decorators such as @staticmethod, @classmethod or @property but we can create a custom one. Let’s say we want to benchmark function execution: def print_fibonacci(length): start = time.time() for number in fibonacci_gen(length): print(number) end = time.time() print('Execution time: {}'.format(end - start)) print_fibonacci(5000) Now imagine that we want to benchmark more than one function. The best way would be to create a separate function and reuse it. Python allows to pass function as a parameter to another one. Function can be also returned from another. With this features we can write a decorate design pattern as follows: def benchmark(func): def inner_function(*args, **kwargs): start = time.time() func(*args, **kwargs) end = time.time() print('Execution time: {}'.format(end - start)) return inner_function def print_fibonacci(length): for number in fibonacci_gen(length): print(number) decorated_function = benchmark(print_fibonacci) decorated_function(1000) The functionality of print_fibonacci() function was extended by the benchmark(). However, Python has a special sign ‘@’ that simplifies this: def benczmark(func): def inner_function(*args, **kwargs): start = time.time() func(*args, **kwargs) end = time.time() print('Execution time: {}'.format(end - start)) return inner_function @benchmark def print_fibonacci(length): for number in fibonacci_gen(length): print(number) print_fibonacci(1000) The both solutions do exactly the same but we moved the decoration where the function is implemented rather than executed. With one symbol we are able to reuse the code wherever we want to. Decorators are chaining functions together. Knowing this, we can decorate function with multiple decorators. Moreover we can pass arguments to decorators: def benchmark(func): def inner_function(*args, **kwargs): start = time.time() func(*args, **kwargs) end = time.time() print('Execution time: {}'.format(end - start)) return inner_function def check_input(input_type): def decorator(func): def inner_function(*args, **kwargs): if not all(isinstance(arg, input_type) for arg in args + tuple(kwargs.values())): print('Input should be integer') return func(*args, **kwargs) return inner_function return decorator @benchmark @check_input(int) def print_fibonacci(length): """ Printing fibonacci numbers """ for number in fibonacci_gen(length): print(number) print_fibonacci(length='1') As we can see, with decorators we can simply move reusable implementation to other place and extend functionalities of any function. But one thing need to be remembered. When we decorate functions as above, we are loosing information about the original function e.g.: if we call: print(print_fibonacci.__name__) print(print_fibonacci.__doc__) We are getting that the name of the function is ‘inner_function’ without docstring that is not expected but in fact is true. To keep the information about the function we should always use wraps decorator from functools module. from functools import wraps def benczmark(func): @wraps(func) def inner_function(*args, **kwargs): start = time.time() func(*args, **kwargs) end = time.time() print('Execution time: {}'.format(end - start)) return inner_function def check_input(input_type): def decorator(func): @wraps(func) def inner_function(*args, **kwargs): if not all(isinstance(arg, input_type) for arg in args + tuple(kwargs.values())): print('Input should be integer') return func(*args, **kwargs) return inner_function return decorator Now we keep all the original information about wrapped functions. In conclusion, good practice is to use decorators when we want to separate reusable code or extend the functionality of one function without modifying it. Summary Python has many specific constructs that can make our code faster, more readable and simpler. Their usage depends on the goal we want to achieve but definitely they are a great and powerful tools to make our code better. Python is very simple and easy language but it always depends on the programmer how its code looks like.
https://blog.j-labs.pl/2019/03/Python-Good-Practices-Part-1-Python-constructs
CC-MAIN-2021-49
en
refinedweb
SET (ObjectScript) Synopsis SET:pc setargument,... S:pc setargument,... where setargument can be: variable=value (variable-list)=value Arguments Description The SET command assigns a value to a variable. It can set a single variable, or set multiple variables using any combination of two syntactic forms. It can assign values to variables by specifying a comma-separated list of variable=value pairs. For example: SET a=1,b=2,c=3 WRITE a,b,c There is no restriction in the number of assignments you can perform with a single invocation of SET a=value,b=value,c=value,.... If a specified variable does not exist, SET creates it and assigns the value. If a specified variable exists, SET replaces the previous value with the specified value. Because SET executes in left-to-right order, you can assign a value to a variable, then assign that variable to another variable: SET a=1,b=a WRITE a,b A value can be a string, a numeric, a JSON object, JSON array, or an expression that evaluates to one of these values. To define an “empty” variable, you can set the variable to the empty string ("") value. Setting Multiple Variables to the Same Value You can use SET to assign the same value to multiple variables by specifying a comma-separated list of variables enclosed in parentheses. For example: SET (a,b,c)=1 WRITE a,b,c You can combine the two SET syntactic forms in any combination. For example: SET (a,b)=1,c=2,(d,e,f)=3 WRITE a,b,c,d,e,f The maximum number of assignments you can perform with a single invocation of SET (a,b,c,...)=value is 128. Exceeding this number results in a <SYNTAX> error. Restrictions on Setting Multiple Variables $LIST: You cannot use SET (a,b,c,...)=value syntax to assign a value to a $LIST function on the left side of the equal sign. Attempting to do so results in a <SYNTAX> error. You must use SET a=value,$LIST(mylist,n)=value,c=value,... syntax when using $LIST to set one of the items. $EXTRACT and $PIECE: You cannot use SET (a,b,c,...)=value syntax to assign a value to an $EXTRACT or $PIECE function on the left side of the equal sign if that function uses relative offset syntax. In relative offset syntax an asterisk represents the end of a string, and *-n and *+n represent a relative offset from the end of the string. For example, SET (x,$PIECE(mylist,"^",3))=123 is valid, but SET (x,$PIECE(mylist,"^",*))=123 results in an <UNIMPLEMENTED> error. You must use SET a=value,b=value,c=value,... syntax when setting one of these functions using relative offset. Object Property: You cannot use SET (a,b,c,...)=value syntax to assign a value to an object property on the left side of the equal sign. Attempting to do so results in an <OBJECT DISPATCH> error with a message such as the following: Set property MyProp of class MyPackage.MyClass is not a direct reference and may not be multiple SET arg. You must use SET a=value,oref.MyProp=value,c=value,... syntax when setting an object property. SET and Subscripts You can set individual subscripted values (array nodes) for a local variable, process-private global, or a global. You can set subscripts in any order. If the variable subscript level does not already exist, SET creates it and then assigns the value. Each subscript level is treated as an independent variable; only those subscript levels set are defined. For example: KILL myarray SET myarray(1,1,1)="Cambridge" WRITE !,myarray(1,1,1) SET myarray(1)="address" WRITE !,myarray(1) In this example, the variables myarray(1,1,1) and myarray(1) are defined and contain values. However, the variables myarray and myarray(1,1) are not defined, and return an <UNDEFINED> error when invoked. By default, you cannot set a null subscript. For example, SET ^x("")=123 results in a <SUBSCRIPT> error. However you can set %SYSTEM.Process.NullSubscripts() method to allow null subscripts for global and process-private global variables. You cannot set a null subscript for a local variable. The maximum length of a subscript is 511 characters. Exceeding this length results in a <SUBSCRIPT> error. The maximum number of subscript levels for a local variable is 255. The maximum number of subscript levels for a global variable depends on the subscript level names, and may exceed 255 levels. Attempting to set a local variable to more than 255 subscript levels (either directly or by indirection) results in a <SYNTAX> error. For further information on subscripted variables, refer to Global Structure in Using Globals.. variable If the target variable does not already exist, SET creates it and then assigns the value. If it does exist, SET replaces the existing value with the assigned value. The variable to receive the value resulting from the evaluation of value. It can be a local variable, a process-private global, a global variable. A local variable, process-private global, or global variable can be either subscripted or unsubscripted (see SET and Subscripts for further details). A global variable can be specified with extended global reference (see Global Structure in Using Globals). You can specify certain special variables, including $ECODE, $ETRAP, $DEVICE, $KEY, $TEST, $X, and $Y. Local variables, process-private globals, and special variables are specific to the current process; they are mapped to be accessible from all namespaces. A global variable persists after the process that created it terminates. A global is specific to the namespace in which it was created. By default, a SET assigns a global in the current namespace. You can use SET to define a global (^myglobal) in another namespace by using syntax such as the following: SET ^["Samples"]myglobal="Ansel Adams".\. A variable can be a piece or segment of a variable as specified in the argument of a $PIECE or $EXTRACT function. A variable can be represented as an object property using obj.property or .. property syntax, or by using the $PROPERTY function. You can set an i%property instance variable reference using the following syntax: SET i%propname = "abc" SET accepts a variable name of any length, but it truncates a long variable name to 31 characters before assigning it a value. If a variable name is not unique within the first 31 characters this name truncation can cause unintended overwriting of variable values, as shown in the following example: SET abcdefghijklmnopqrstuvwxyz2abc="30 characters" SET abcdefghijklmnopqrstuvwxyz2abcd="31 characters" SET abcdefghijklmnopqrstuvwxyz2abcde="32 characters" SET abcdefghijklmnopqrstuvwxyz2abcdef="33 characters" WRITE !,abcdefghijklmnopqrstuvwxyz2abc // returns "30 characters" WRITE !,abcdefghijklmnopqrstuvwxyz2abcd // returns "33 characters" WRITE !,abcdefghijklmnopqrstuvwxyz2abcde // returns "33 characters" WRITE !,abcdefghijklmnopqrstuvwxyz2abcdef // returns "33 characters" Special variables are, by definition, set by system events. You can use SET to assign a value to certain special variables. However, most special variables cannot be assigned a value using SET. See the reference pages for individual special variables for further details. Refer to the “Variables” chapter of Using ObjectScript for further details on variable types and naming conventions. value A literal value or any valid ObjectScript expression. Usually a value is a numeric or string expression. A value can be a JSON object or JSON array. A numeric value is converted to canonical form before assignment: leading and trailing zeros, a plus sign or a trailing decimal point are removed. Conversion from scientific notation and evaluation of arithmetic operations are performed. A string value is enclosed in quotation marks. A string is assigned unchanged, except that doubled quotation marks within the string are converted to a single quotation mark. The null string ("") is a valid value. A numeric value enclosed in quotation marks is not converted to canonical form and no arithmetic operations are performed before assignment. If a relational or logical expression is used, InterSystems IRIS assigns the truth value (0 or 1) resulting from the expression. Object properties and object methods that return a value are valid expressions. Use the relative dot syntax (..) for assigning a property or method value to a variable. JSON Values You can use the SET command to set a variable to a JSON object or a JSON array. For a JSON object, the value is a JSON object delimited by curly braces. For a JSON array, the value is a JSON array delimited by square brackets. Within these delimiters, the literal values are JSON literals, not ObjectScript literals. An invalid JSON literal generates a <SYNTAX> error. String literal: You must enclose a JSON string in double quotes. To specify certain characters as literals within a JSON string, you must specify the \ escape character, followed by the literal. If a JSON string contains a double-quote literal character this character is written as \". JSON string syntax provides escapes for double quote (\"), backslash (\\), and slash (\/). Line space characters can also be escaped: backspace (\b), formfeed (\f), newline (\n), carriage return (\r), and tab (\t). Any Unicode character can be represented by a six character sequence: a backslash, followed by lowercase letter u, followed by four hexadecimal digits. For example, \u0022 specifies a literal double quote character; \u03BC specifies the Greek lowercase letter Mu. Numeric literal: JSON does not convert numbers to ObjectScript canonical form. JSON has its own conversion and validation rules: Only a single leading minus sign is permitted; a leading plus sign is not permitted, multiple leading signs are not permitted. The “E” scientific notation character is permitted, but not evaluated. Leading zeros are not permitted; trailing zeros are preserved. A decimal separator must have a digit character on both sides of it. Therefore, the JSON numerics 0, 0.0, 0.4, and 0.400 are valid. A negative sign on a zero value is preserved. For IEEE floating-point numbers, additional rules apply. Refer to the $DOUBLE function for details. JSON fractional numbers are stored in a different format than ObjectScript numbers. ObjectScript floating point fractional numbers are rounded when they reach their maximum precision, and trailing zeros are removed. JSON packed BCD fractional numbers allow for greater precision, and trailing zeros are retained. This is shown in the following example: SET jarray=[1.23456789123456789876000,(1.23456789123456789876000)] WRITE jarray.%ToJSON() Special values: JSON supports the following special values: true, false, and null. These are literal values that must be specified as an unquoted literal in lowercase letters. These JSON special values cannot be specified using a variable, or specified in an ObjectScript expression. ObjectScript: To include an ObjectScript literal or expression within a JSON array element or a JSON object value, you must enclose the entire string in parentheses. You cannot specify ObjectScript in a JSON object key. ObjectScript and JSON use different escape sequence conventions. To escape a double quote character in ObjectScript, you double it. In the following example, a JSON string literal and an ObjectScript string literal are specified in a JSON array: SET jarray=["This is a \"good\" JSON string",("This is a ""good"" ObjectScript string")] WRITE jarray.%ToJSON() The following JSON array example specifies an ObjectScript local variable and performs ObjectScript numeric conversion to canonical form: SET str="This is a string" SET jarray=[(str),(--0007.000)] WRITE jarray.%ToJSON() The following example specifies an ObjectScript function in a JSON object value: SET jobj={"firstname":"Fred","namelen":($LENGTH("Fred"))} WRITE jobj.%ToJSON() JSON Object A value can be a JSON object delimited by curly braces. The variable is set to an OREF, such as the following: 3@%Library.DynamicObject. You can use the ZWRITE command with a specified local variable name to display the JSON value: SET jobj={"inventory123":"Fred's \"special\" bowling ball"} ZWRITE jobj You can use the %Get() method to retrieve the value of a specified key using the OREF. You can resolve the OREF to the full JSON object value using the %ToJSON() method. This is shown in the following example: SET jobj={"inventory123":"Fred's \"special\" bowling ball"} WRITE "JSON object reference = ",jobj,! WRITE jobj.%Get("inventory123")," (data value in ObjectScript format)",! WRITE jobj.%Get("inventory123",,"json")," (data value in JSON format)",! WRITE jobj.%ToJSON()," (key and data value in JSON format)" A valid JSON object has the following format: Begins with an open curly brace, ends with a close curly brace. The empty object {} is a valid JSON object. Within the curly braces, a key:value pair or a comma-separated list of key:value pairs. Both the key and the value components are JSON literals, not ObjectScript literals. The key component must be a JSON quoted string literal. It cannot be an ObjectScript literal or expression enclosed in parentheses. The value component can be a JSON string or a JSON numeric literal. These JSON literals follow JSON validation criteria. A value component can be an ObjectScript literal or expression enclosed in parentheses. The value component can be specified as a defined variable specifying a string, a numeric, a JSON object, or a JSON array. The value component can contain nested JSON objects or JSON arrays. A value component can also be one of the following three JSON special values: true, false, null, specified as an unquoted literal in lowercase letters; these JSON special values cannot be specified using a variable. The following are all valid JSON objects: {"name":"Fred"}, {"name":"Fred","city":"Bedrock"}, {"bool":true}, {"1":true,"0":false,"Else":null}, {"name":{"fname":"Fred","lname":"Flintstone"},"city":"Bedrock"}, {"name":["Fred","Wilma","Barney"],"city":"Bedrock"}. A JSON object can specify a null property name and assign it a value, as shown in the following example: SET jobj={} SET jobj.""="This is the ""null"" property value" WRITE jobj.%Get(""),! WRITE "JSON null property object value = ",jobj.%ToJSON() Note that the returned JSON string uses the JSON escape sequence (\") for a literal double quote character. You can use the %Set() method to add a key:value pair to a JSON object. You can use the %Get() method to return the value of a specified key in various formats. The syntax is: jobj.%Get(keyname,default,format) The default argument is the value returned if keyname does not exist. The format argument specifies the format for the returned value. If no format is specified, the value is returned in ObjectScript format; if format="json", the value is returned in JSON format; if format="string", all string and numeric values are returned in ObjectScript format, but the JSON true and false special values are returned as JSON alphabetic strings rather than boolean integers; the JSON null special value is returned in ObjectScript format as a zero-length null string. This is shown in the following example: SET x={"yep":true,"nil":null} WRITE "IRIS: ",x.%Get("yep")," JSON: ",x.%Get("yep",,"json")," STRING: ",x.%Get("yep",,"string"),! /* IRIS: 1 JSON: true STRING: true */ WRITE "IRIS: ",x.%Get("nil")," JSON: ",x.%Get("nil",,"json")," STRING: ",x.%Get("nil",,"string") /* IRIS: JSON: null STRING: */ For further details, see Using JSON. JSON Array A value can be a JSON array delimited by square brackets. The variable is set to an OREF, such as the following: 1@%Library.DynamicArray. You can use the ZWRITE command with a specified local variable name to display the JSON value: SET jary=["Fred","Wilma","Barney"] ZWRITE jary You can use the %Get() method to retrieve the value of a specified array element (counting from 0) using the OREF: %Get(n) returns the ObjectScript value; %Get(n,,”json”) returns the JSON value. %Get(n,”no such element”,”json”) specifies a default value to return if the specified array element does not exist. You can resolve the OREF to the full JSON array value using the %ToJSON() function. This is shown in the following example: SET jary=["Fred","Wilma","Barney"] WRITE "JSON array reference = ",jary,! WRITE jary.%Get(1)," (array element value in ObjectScript format)",! WRITE jary.%Get(1,,"json")," (array element value in JSON format)",! WRITE jary.%ToJSON()," (array values in JSON format)" A valid JSON array has the following format: Begins with an open square bracket, ends with a close square bracket. The empty array [] is a valid JSON array. Within the square brackets, an element or a comma-separated list of elements. Each array element can be a JSON string or JSON numeric literal. These JSON literals follow JSON validation criteria. An array element can be an ObjectScript literal or expression enclosed in parentheses. An array element can be specified as a defined variable specifying a string, a numeric, a JSON object, or a JSON array. An array element can contain one or more JSON objects or JSON arrays. An array element can also be one of the following three JSON special values: true, false, null, specified as an unquoted literal in lowercase letters; these JSON special values cannot be specified using a variable. The following are all valid JSON arrays: [1], [5,7,11,13,17], ["Fred","Wilma","Barney"], [true,false], ["Bedrock",["Fred","Wilma","Barney"]], [{"name":"Fred"},{"name":"Wilma"}], [{"name":"Fred","city":"Bedrock"},{"name":"Wilma","city":"Bedrock"}], [{"names":["Fred","Wilma","Barney"]}]. You can use the %Push() method to add a new element to the end of the array. You can use the %Set() method to add a new array element or update an existing array element by position. For further details, see Using JSON. SET Command with Objects The following example contains three SET commands: the first sets a variable to an OREF (object reference); the second sets a variable to the value of an object property; the third sets an object property to a value: SET myobj=##class(%SQL.Statement).%New() SET dmode=myobj.%SelectMode WRITE "Default select mode=",dmode,! SET myobj.%SelectMode=2 WRITE "Newly set select mode=",myobj.%SelectMode Note that dot syntax is used in object expressions; a dot is placed between the object reference and the object property name or object method name. To set a variable with an object property or object method value for the current object, use the double-dot syntax: SET x=..LastName If the specified object property does not exist, InterSystems IRIS issues a <PROPERTY DOES NOT EXIST> error. If you use double-dot syntax and the current object has not been defined, InterSystems IRIS issues a <NO CURRENT OBJECT> error. For further details, refer to Object-Specific ObjectScript Features in Defining and Using Classes. The following command sets x to the value returned by the GetNodeName() method: SET x=##class(%SYS.System).GetNodeName() WRITE "the current system node is: ",x A SET command for objects can take an expression with cascading dot syntax, as shown in the following examples: SET x=patient.Doctor.Hospital.Name In this example, the patient.Doctor object property references the Hospital object, which contains the Name property. Thus, this command sets x to the name of the hospital affiliated with the doctor of the specified patient. The same cascading dot syntax can be used with object methods. A SET command for objects can be used with system-level methods, such as the following data type property method: SET x=patient.NameIsValid(Name) In this example, the NameIsValid() method returns its result for the current patient object. NameIsValid() is a boolean method generated for data type validation of the Name property. Thus, this command sets x to 1 if the specified name is a valid name, and sets x to 0 if the specified name is not a valid name. SET Using an Object Method You can specify an object method on the left side of a SET expression. The following example specifies the %Get() method: SET obj=##class(test).%New() // Where test is class with a multidimensional property md SET myarray=[(obj)] SET index=0,subscript=2 SET myarray.%Get(index).md(subscript)="value" IF obj.md(2)="value" {WRITE "success"} ELSE {WRITE "failure"} Setting a List of Variables to an Object When using SET with objects, multiple assignments set all of the variables in a list to the same OREF, as shown in the following examples: SET (a,b,c)=##class(Sample.Person).%New() SET (dyna1,dyna2,dyn3) = ["default","default"] To assign each variable a separate OREF, issue a separate SET command for each assignment, as shown in the following examples: SET a=##class(Sample.Person).%New() SET b=##class(Sample.Person).%New() SET c=##class(Sample.Person).%New() SET dyna1 = ["default","default"] SET dyna2 = ["default","default"] SET dyna3 = ["default","default"] You can also use the #Dim preprocessor directive to assign all of the variables in a list to individual OREFs, as shown in the following examples: #Dim a,b,c As %ClassDefinition = ##class(Sample.Person).%New() #Dim dyn1,dyn2,dyn3 As %DynamicArray = ["default","default"] Examples The following example specifies multiple arguments for the same SET command. Specifically, the command assigns values to three variables. Note that arguments are evaluated in left-to-right order. SET var1=12,var2=var1*3,var3=var1+var2 WRITE "var1=",var1,!,"var2=",var2,!,"var3=",var3 The following example shows the (variable-list)=value form of the SET command. It shows how to assign the same value to multiple variables. Specifically, the command assigns the value 0 to three variables. SET (sum,count,average)=0 WRITE "sum=",sum,!,"count=",count,!,"average=",average The following example sets a subscripted global variable in a different namespace using extended global reference. NEW $NAMESPACE SET $NAMESPACE="%SYS" SET ^["user"]nametest(1)="fred" NEW $NAMESPACE SET $NAMESPACE="USER" WRITE ^nametest(1) KILL ^nametest Order of Evaluation InterSystems IRIS evaluates the arguments of the SET command in strict left-to-right order. For each argument, it performs the evaluation in the following sequence: Evaluates occurrences of indirection or subscripts to the left of the equal sign in a left-to-right order to determine the variable name(s). For more information, refer to Indirection in Using ObjectScript. Evaluates the expression to the right of the equal sign. Assigns the expression to the right of the equal sign to the variable name or references to the left of the equal sign. Transaction Processing A SET of a global variable is journaled as part of the current transaction; this global variable assignment is rolled back during transaction rollback. A SET of a local variable or a process-private global variable is not journaled, and thus this assignment is unaffected by a transaction rollback. Defined and Undefined Variables Most ObjectScript commands and functions require that a variable be defined before it is referenced. By default, attempting to reference an undefined variable generates an <UNDEFINED> error. Attempting to reference an undefined object generates a <PROPERTY DOES NOT EXIST> or <METHOD DOES NOT EXIST> error. Refer to $ZERROR for further details on these error codes. You can change InterSystems IRIS behavior when referencing an undefined variable by setting the %SYSTEM.Process.Undefined() method. The READ command and the $INCREMENT function can reference an undefined variable and assign a value to it. The $DATA function can take an undefined or defined variable and return its status. The $GET function returns the value of a defined variable; optionally, it can also assign a value to an undefined variable. SET with $PIECE and $EXTRACT You can use the $PIECE and $EXTRACT functions with SET on either side of the equals sign. For detailed descriptions, refer to $PIECE and $EXTRACT. When used on the right side of the equals sign, $PIECE and $EXTRACT extract a substring from a variable and assign its value to the specified variable(s) on the left side of the equals sign. $PIECE extracts a substring using a specified delimiter, and $EXTRACT extracts a substring using a character count. For example, assume that variable x contains the string "HELLO WORLD". The following commands extract the substring "HELLO" and assign it to variables y and z, respectively: SET x="HELLO WORLD" SET y=$PIECE(x," ",1) SET z=$EXTRACT(x,1,5) WRITE "x=",x,!,"y=",y,!,"z=",z When used on the left side of the equals sign, $PIECE and $EXTRACT insert the value from the expression on the right side of the equals sign into the specified portion of the target variable. Any existing value in the specified portion of the target variable is replaced by the inserted value. For example, assume that variable x contains the string "HELLO WORLD" and that variable y contains the string "HI THERE". In the command: SET x="HELLO WORLD" SET y="HI THERE" SET $PIECE(x," ",2)=$EXTRACT(y,4,9) WRITE "x=",x The $EXTRACT function extracts the string "THERE" from variable y and the $PIECE function inserts it into variable x at the second field position, replacing the existing string "WORLD". Variable x now contains the string "HELLO THERE". If the target variable does not exist, the system creates it and pads it with delimiters (in the case of $PIECE) or with spaces (in the case of $EXTRACT) as needed. In the following example, SET $EXTRACT is used to insert the value of z into strings x and y, overwriting the existing values: SET x="HELLO WORLD" SET y="OVER EASY" SET z="THERE" SET $EXTRACT(x,7,11)=z SET $EXTRACT(y,*-3,*)=z WRITE "edited x=",x,! WRITE "edited y=",y Variable x now contains the string "HELLO THERE" and y contains the string "OVER THERE". Note that because one of the SET $EXTRACT operations in this example uses a negative offset (*-3) these operations must be done as separate sets. You cannot set multiple variables with a single SET using enclosing parentheses if any of the variables uses negative offset. In the following example, assume that the global array ^client is structured so that the root node contains the client’s name, with subordinate nodes containing the street address and city. For example, ^client(2,1,1) would contain the city address for the second client stored in the array. Assume further that the city node (x,1,1) contains field values identifying the city, state abbreviation, and ZIP code (postal code), with the comma as the field separator. For example, a typical city node value might be "Cambridge,MA,02142". The three SET commands in the following code each use the $PIECE function to assign a specific portion of the array node value to the appropriate local variable. Note that in each case $PIECE references the comma (",") as the string separator. ADDRESSPIECE SET ^client(2,1,1)="Cambridge,MA,02142" SET city=$PIECE(^client(2,1,1),",",1) SET state=$PIECE(^client(2,1,1),",",2) SET zip=$PIECE(^client(2,1,1),",",3) WRITE "City is ",city,!, "State or Province is ",state,! ,"Postal code is ",zip QUIT The $EXTRACT function could be used to perform the same operation, but only if the fields were fixed length and the lengths were known. For example, if the city field was known to contain only up to 9 characters and the state and ZIP fields were known to contain only 2 and 5 characters, respectively, the SET commands could be coded with the $EXTRACT function as follows: ADDRESSEXTRACT SET ^client(2,1,1)="Cambridge,MA,02142" SET city=$EXTRACT(^client(2,1,1),1,9) SET state=$EXTRACT(^client(2,1,1),11,12) SET zip=$EXTRACT(^client(2,1,1),14,18) WRITE "City is ",city,!, "State or Province is ",state,!, "Postal code is ",zip QUIT Notice the gaps between 9 and 11 and 12 and 14 to accommodate the comma field separators. The following example replaces the first substring in A (originally set to 1) with the string "abc". StringPiece SET A="1^2^3^4^5^6^7^8^9" SET $PIECE(A,"^")="abc" WRITE !,"A=",A QUIT A="abc^2^3^4^5^6^7^8^9" The following example uses $EXTRACT to replace the first character in A (again, a 1) with the string "abc". StringExtract SET A="123456789" SET $EXTRACT(A)="abc" WRITE !,"A=",A QUIT A="abc23456789" The following example replaces the third through sixth pieces of A with the string "abc" and replaces the first character in the variable B with the string "abc". StringInsert SET A="1^2^3^4^5^6^7^8^9" SET B="123" SET ($PIECE(A,"^",3,6),$EXTRACT(B))="abc" WRITE !,"A=",A,!,"B=",B QUIT A="1^2^abc^7^8^9" B="abc23" The following example sets $X, $Y, $KEY, and the fourth piece of a previously undefined local variable, A, to the value of 20. It also sets the local variable K to the current value of $KEY. A includes the previous three pieces and their caret delimiter (^). SetVars SET ($X,$Y,$KEY,$PIECE(A,"^",4))=20,X=$X,Y=$Y,K=$KEY WRITE !,"A=",A,!,"K=",K,!,"X=",X,!,"Y=",Y QUIT A="^^^20" K="20" X=20 Y=20 SET with $LIST and $LISTBUILD The $LIST functions create and manipulate lists. They encode the length (and type) of each element within the list, rather than using an element delimiter. They then use the encoded length specifications to extract specified list elements during list manipulation. Because the $LIST functions do not use delimiter characters, the lists created using these functions should not be input to $PIECE or other character-delimiter functions. When used on the right side of the equal sign, these functions return the following: $LIST returns the specified element of the specified list. $LISTBUILD returns a list containing one element for each argument given. When used on the left side of the equal sign, in a SET argument, these functions perform the following tasks: SET $LIST replaces the specified element(s) with the value given on the right side of the equal sign. SET A=$LISTBUILD("red","blue","green","white") WRITE "Created list A=",$LISTTOSTRING(A),! SET $LIST(A,2)="yellow" WRITE "Edited list A=",$LISTTOSTRING(A) SET A=$LISTBUILD("red","blue","green","white") WRITE "Created list A=",$LISTTOSTRING(A),! SET $LIST(A,*-1,*)=$LISTBUILD("yellow") WRITE "Edited list A=",$LISTTOSTRING(A) You cannot use parentheses with SET $LIST to assign the same value to multiple variables. SET $LISTBUILD extracts several elements of a list in a single operation. The arguments of $LISTBUILD are variables, each of which receives an element of the list corresponding to their position in the $LISTBUILD parameter list. Variable names may be omitted for positions that are not of interest. In the following example, $LISTBUILD (on the right side of the equal sign) is first used to return a list. Then $LISTBUILD (on the left side of the equal sign) is used to extract two items from that list and set the appropriate variables. SetListBuild SET J=$LISTBUILD("red","blue","green","white") SET $LISTBUILD(A,,B)=J WRITE "A=",A,!,"B=",B In this example, A="red" and B="green". See Also - $LISTBUILD function - - - -
https://docs.intersystems.com/healthconnectlatest/csp/docbook/stubcanonicalbaseurl/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_cset
CC-MAIN-2021-49
en
refinedweb
This tutorial will teach you on how to print a “Hello World” text in Java. Before you start, be sure that you already install the JDK (Java Development Kit) and Netbeans IDE(Integrated Development Environment). Printing a line of text is the first step of Java program especially if you are a newbie. The first step is you need to understand the structure code of java. Java program compose of Classes, Methods and statements. What is a Class? A Java class is a blueprint from which individual objects are created. What is a Method? A method is a group of Java statement that performs some operation on some data and may or may not return result. Methods must be located inside java Class. What is a Statement? A group of statements that perform operation located inside a Java Method. Printing Hello World Text in Java Steps 1.Using your Netbeans IDE, go to “File” and Select “New Project”. A dialog box appears and click “next”. 2.Insert your desire “Project Name” and click “Finish” button. A java class will automatically create. 3.Insert the following codes below inside your class. [java]public static void main(String args[]){ System.out.println(“Hello World!”); }[/java] 4.Run your program and the result should look like the image below. 5. Complete source codes. [java]public class PrintHelloWorld { public static void main(String args[]){ System.out.println(“Hello World!”); } }[/java] About How to Print Hello World Text In Java If you have any comment or suggestions about on How to Print Hello World text in Java, feel free to leave your comment below, use the contact page of this website or use my contact information.
https://itsourcecode.com/free-projects/java-projects/print-hello-world-text-in-java/
CC-MAIN-2021-49
en
refinedweb
We're currently porting our games to Android, and would like to be able to release a universal build for tablets and phones. Is there a way to determine the actual screen size or DPI in order to adjust the size of interface elements accordingly? Answer by Daniel-Brauer · Aug 31, 2011 at 07:44 PM It took a fair bit of research, but in the end it was pretty easy. Hopefully this serves as a reasonable example of how to interact with Android's Java environment from within Unity. using UnityEngine; public class DisplayMetricsAndroid { // The logical density of the display public static float Density { get; protected set; } // The screen density expressed as dots-per-inch public static int DensityDPI { get; protected set; } // The absolute height of the display in pixels public static int HeightPixels { get; protected set; } // The absolute width of the display in pixels public static int WidthPixels { get; protected set; } // A scaling factor for fonts displayed on the display public static float ScaledDensity { get; protected set; } // The exact physical pixels per inch of the screen in the X dimension public static float XDPI { get; protected set; } // The exact physical pixels per inch of the screen in the Y dimension public static float YDPI { get; protected set; } static DisplayMetricsAndroid() { // Early out if we're not on an Android device if (Application.platform != RuntimePlatform.Android) { return; } // The following is equivalent to this Java code: // // metricsInstance = new DisplayMetrics(); // UnityPlayer.currentActivity.getWindowManager().getDefaultDisplay().getMetrics(metricsInstance); // // ... which is pretty much equivalent to the code on this page: // using ( AndroidJavaClass unityPlayerClass = new AndroidJavaClass("com.unity3d.player.UnityPlayer"), metricsClass = new AndroidJavaClass("android.util.DisplayMetrics") ) { using ( AndroidJavaObject metricsInstance = new AndroidJavaObject("android.util.DisplayMetrics"), activityInstance = unityPlayerClass.GetStatic<AndroidJavaObject>("currentActivity"), windowManagerInstance = activityInstance.Call<AndroidJavaObject>("getWindowManager"), displayInstance = windowManagerInstance.Call<AndroidJavaObject>("getDefaultDisplay") ) { displayInstance.Call("getMetrics", metricsInstance); Density = metricsInstance.Get<float>("density"); DensityDPI = metricsInstance.Get<int>("densityDpi"); HeightPixels = metricsInstance.Get<int>("heightPixels"); WidthPixels = metricsInstance.Get<int>("widthPixels"); ScaledDensity = metricsInstance.Get<float>("scaledDensity"); XDPI = metricsInstance.Get<float>("xdpi"); YDPI = metricsInstance.Get<float>("ydpi"); } } } } Brilliant. Thank you. I already try to display each of the variables, but the variables always return 0. I don't know what's wrong. Can anyone help me?? Thank you. Useful for all the other bits of information but if you just want the usable screen width and height in pixels why can't you use Screen.width & Screen.height? Great! (and still works in 2016) Trying to figure out how to use this. Do i treat it like a static? do I add monodeveloper and add it to an object? Answer by Waz · Jul 27, 2012 at 12:15 AM var widthInInches = Screen.width / Screen.dpi; Using Screen.dpi is much simpler and works across platforms as well. I have a Samsung I9001 Galaxy S Plus, and its dpi is 233,33 for sure, but Screen.dpi returns 160.2105. This value is far away from reality. I'll try the DisplayMetricsAndroid function... @krifton Unfortunately the Unity developers do not appear very familiar with Android intricacies (in my experience, anyway) and appear to have assigned density to Screen.dpi, which is based on a hardcoded value of 160. In all reality, this is only useful in determining the conversion from pixels to Android's scaled values, such as dp or sp, which are also based on a value of 160. Android added densityDPI to attempt some flexibility for varying device DPI, but it is easier to retrieve the values of getMetrics and getRealMetrics when accessing DisplayMetrics than try to perform any calculations with Unity's crippled versions. Screen.dpi dp sp getMetrics getRealMetrics DisplayMetrics Answer by ViicEsquivel · Aug 05, 2015 at 09:56 PM My C# solution float inch = Mathf.Sqrt((Screen.width * Screen.width) + (Screen.height * Screen.height)); inch = inch/Screen.dpi; Answer by HarleysZoonBox · Oct 31, 2013 at 03:30 AM unity gives us most of the basic resolutions and there is no problem for me when i use Screen.width or height i check them against variables and adjust accordingly no java needed... just use stored variable which by the way uses less overhead and adjust accordingly (check it in the Awake Function) to set before. Random X posistion between left and right edges of screen 2D 1 Answer How to adjust the screen for many types of screens in Android? 1 Answer How can i run a Unity Build on my PC, but use my phone as second screen, (Cheapskate Oculus-Like) 2 Answers After turning off Android phone screen, screen turns itself back on after a minute or so 0 Answers 2 Different Device Screens 2 Answers
https://answers.unity.com/questions/161281/is-there-a-way-to-android-physical-screen-size.html
CC-MAIN-2020-29
en
refinedweb
I'm making a 2d top down shooter like Galaga. I want the projectile to shoot from the ship and move up the screen with a sine wave pattern. This is all I got and I'm only able to move the projectile straight public class Projectile : MonoBehaviour { // Used to control how fast the game object moves public float MoveSpeed = 5.0f; // Use this for initialization void Start () { DestroyObject(gameObject, 1.0f); } // Update is called once per frame void Update () { transform.position += transform.up * Time.deltaTime * MoveSpeed; } } Answer by robertbu · Oct 05, 2014 at 04:26 PM using UnityEngine; using System.Collections; public class Projectile : MonoBehaviour { public float MoveSpeed = 5.0f; public float frequency = 20.0f; // Speed of sine movement public float magnitude = 0.5f; // Size of sine movement private Vector3 axis; private Vector3 pos; void Start () { pos = transform.position; DestroyObject(gameObject, 1.0f); axis = transform.right; // May or may not be the axis you want } void Update () { pos += transform.up * Time.deltaTime * MoveSpeed; transform.position = pos + axis * Mathf.Sin (Time.time * frequency) * magnitude; } } Thanks a lot :) thanks for the temp code, i do believe this should be very similar in cos wave, thanks again for the post =) Nice script. Thanks a lot :) Thank do I bring an instantiated projectile back towards the player if the player tells it to? Similar to a boomerang. 0 Answers 2d cannon projectile not following expected path 1 Answer 2D Projectile Script Not Working 1 Answer Spin Projectile? 0 Answers Projectiles aren't shooting the direction they're supposed to with transform.up 0 Answers
https://answers.unity.com/questions/803434/how-to-make-projectile-to-shoot-in-a-sine-wave-pat.html
CC-MAIN-2020-29
en
refinedweb
Azure SignalR is a fully managed service that makes it easy to add highly-scalable real-time messaging to any application using WebSockets and other protocols. SignalR Service has integrations for ASP.NET and Azure Functions. Other backend apps can use the service's RESTful HTTP API. In this article, we'll look at the benefits of using Azure SignalR Service for real-time communication and how to integrate it with a Java Spring Boot chat application using the service's HTTP API. Azure SignalR Service overview While many libraries or frameworks support WebSockets, properly scaling out a real-time application is not a trivial task; it typically requires setting up Redis or other infrastructure to act as a backplane. Azure SignalR Service does all the work for managing client connections and scale-out. We can integrate with it using a simplified API. SignalR Service uses the SignalR real-time protocol that was popularized by ASP.NET. It provides a programming model that abstracts away the underlying communication channels. Instead of managing individual WebSocket connections ourselves, we can send messages to everyone, a single user, or arbitrary groups of connections with a single API call. SignalR also negotiates the best protocol for each connection. It prefers to use WebSockets, but if it is not available for a given connection, it will automatically fall back to server-sent events or long-polling. There are many SignalR client SDKs for connecting to Azure SignalR Service. They're available in .NET, JavaScript/TypeScript, and Java. There are also third-party open source clients for languages like Swift and Python. Azure SignalR Service RESTful HTTP API Server applications, like a Java Spring app, can use an HTTP API to send messages from SignalR Service to its connected clients. There are also APIs for managing group membership; we can place users into arbitrary groups and send messages to a group of connections. The API documentation can be found on GitHub. We'll be using these APIs in the rest of this article. Integrating SignalR Service with Java There are four main steps to integrating SignalR Service with an application. - Create an Azure SignalR Service instance - Add an API endpoint ( /negotiate) in our Java app for SignalR clients to retrieve a token for connecting to SignalR Service - Create a connection with a SignalR client SDK (we'll be using a JavaScript app in a browser) - Send messages from our Java app How it works - The SignalR client SDK requests a SignalR Service URL and access token using the /negotiateendpoint - The client SDK automatically uses that information establish a connection to SignalR Service - The Java app uses SignalR Service's RESTful APIs to send messages to connected clients Create a SignalR Service instance We can create a free instance of SignalR Service using the Azure CLI or the Azure portal. To work with the REST API, configure it to use the Serverless mode. For more information on how to create an SignalR Service instance, check out the docs. Add a "negotiate" endpoint SignalR Service is secured with a key. We never want to expose this key to our clients. Instead, in our backend application, we generate a JSON web token (JWT) that is signed with this key for each client that wants to connect. A SignalR client sends a request to an HTTP endpoint we define in our application to retrieve this JWT. This is the negotiate endpoint in our Spring Boot app. It generates a token and returns it to the caller. We can (optionally) embed a user ID into the token so we can send messages targetted to that user. @PostMapping("/signalr/negotiate") public SignalRConnectionInfo negotiate() { String hubUrl = signalRServiceBaseEndpoint + "/client/?hub=" + hubName; String userId = "12345"; // optional String accessKey = generateJwt(hubUrl, userId); return new SignalRConnectionInfo(hubUrl, accessKey); } Notice that the route ends in /negotiate. This is a requirement as it is a convention used by the SignalR clients. The method for generating a JWT uses the Java JWT (jjwt) library and signs it with the SignalR Service key. Notice we set the audience to the hub URL. A hub is a virtual namespace for our messages. We can have more than one hub in a single SignalR Service. For instance, we can use a hub for chat messages and another for notifications. private String generateJwt(String audience, String userId) { long nowMillis = System.currentTimeMillis(); Date now = new Date(nowMillis); long expMillis = nowMillis + (30 * 60 * 1000); Date exp = new Date(expMillis); byte[] apiKeySecretBytes = signalRServiceKey.getBytes(StandardCharsets.UTF_8); SignatureAlgorithm signatureAlgorithm = SignatureAlgorithm.HS256; Key signingKey = new SecretKeySpec(apiKeySecretBytes, signatureAlgorithm.getJcaName()); JwtBuilder builder = Jwts.builder() .setAudience(audience) .setIssuedAt(now) .setExpiration(exp) .signWith(signingKey); if (userId != null) { builder.claim("nameid", userId); } return builder.compact(); } Create a client connection On our web page, we bring in the SignalR JavaScript SDK and create a connection. We add one or more event listeners that will be invoked when a message is received from the server. Lastly, we start the connection. <script src=""></script> const connection = new signalR.HubConnectionBuilder() .withUrl(`/signalr`) .withAutomaticReconnect() .build() connection.on('newMessage', function(message) { // do something with the message }) connection.start() .then(() => data.ready = true) .catch(console.error) Notice that we used the negotiate URL without the /negotiate segment. The SignalR client SDK automatically attempts the negotiation be appending /negotiate to the URL. When we start the application and open our web page, we should see a successful connection in the browser console. Send messages from the Java app Now that our clients are connected to SignalR Service, we can send them messages. Our sample is a chat app, so we have an endpoint that our frontend app will call to send messages. We use a similar method as the /negotiate endpoint to generate a JWT. This time, the JWT is used as a bearer token in our HTTP request to the service to send a message. @PostMapping("/api/messages") public void sendMessage(@RequestBody ChatMessage message) { String hubUrl = signalRServiceBaseEndpoint + "/api/v1/hubs/" + hubName; String accessKey = generateJwt(hubUrl, null); Unirest.post(hubUrl) .header("Content-Type", "application/json") .header("Authorization", "Bearer " + accessKey) .body(new SignalRMessage("newMessage", new Object[] { message })) .asEmpty(); } And now our app should be working! To support hundreds of thousands of connections, we simply have to go into the Azure portal and increase the number of connection units with a slider. Resources Posted on Oct 14 '19 by: Anthony Chu I'm a Cloud Developer Advocate at Microsoft focusing on serverless, containers, JavaScript, and .NET. Discussion If I'm currently communicating with service workers using a push notification, at what point/scale etc should I consider switching to signalR? Push notifications are great for infrequent messages. WebSocket based solutions like SignalR and socket.io can be used for high throughput scenarios when the client (e.g., browser) is open. An example is a rideshare app. When you see your driver’s location in real-time on a map on their way to pick you up, that’s a good use of WebSockets. When the driver arrives and your phone gets a notification (even when the app is closed), push notifications is great for that.
https://dev.to/azure/add-real-time-to-your-java-app-with-azure-signalr-service-3p8
CC-MAIN-2020-29
en
refinedweb
Java File class represents the path of directories and files. It provides the methods for renaming, deleting, and obtaining the properties of a file or directory. The File class is the wrapper class for the file name and its directory path. Java File Class The File class is Java’s representation of the file or directory pathname. Because file and directory names have different formats on different platforms, a simple string is not adequate to name them. A File class contains several methods for working with the pathname, deleting and renaming files, creating new directories, listing the contents of an index, and determining several common attributes of files and directories. - It is an abstract representation of file and directory pathnames. - The pathname, whether abstract or in the string form can be either absolute or relative. The parent of the abstract pathname may be obtained by invoking a getParent() method of this class. - First of all, we should create a File class object by passing a filename or directory name to it. The file system may implement restrictions to certain operations on the actual file-system object, such as the reading, writing, and executing. These restrictions are collectively known as access permissions. - Instances of a File class are immutable; that is, once created, the abstract pathname represented by the File object will never change. The pathname can be absolute or relative. #Absolute name It contains the full path and drives letter, i.e., it is the full name of the path. For example, C:\Documents\TextFiles\sample.txt #Relative name It is the file name/path concerning the current working directory. For example, TextFiles\sample.txt (C:\Documents being the current working directory) See the following image. #How to create a File Object in Java A File object is created by passing in a string that represents the name of a file, or a String or another File object. For example, File f = new File("/usr/local/bin/hello"); It defines an abstract file name for the hello file in directory /usr/local/bin. This is an absolute abstract file name. #Creating objects for files and directories The File class objects can be created by passing the file name or directory name in the string format. - new File(“C:\\Documents\\TextFiles\\sample.txt”) – This creates the object for the file sample.txt - new File(“C:\\Documents”) – This creates the File object for the directory C:\Documents The File class does not provide the methods for reading and writing the file contents. Instances of the File class are immutable, which means the path names represented cannot be changed once created. #Constructors of File Class #File(String pathname) It creates a File object for the specified pathname for a file or directory. #File(File parentpath, String childpath) It creates a File object from an existing file object with its child file/directories pathname. #File(String parentpath, String childpath) It creates a File object with the specified parent directory’s pathname and child file/directory pathname. #File(URI uri) It creates a File object from a Uniform Resource Identifier. #Methods of File Class boolean isFile(): Returns true if the object represents the path of a file. boolean isDirectory(): Returns true if the object represents the path of a directory. boolean isHidden() Returns true if a file or directory is hidden. boolean exists(): Returns true if such a file/ directory exists. boolean canRead(): Returns true if the read permission of the file is on. boolean canWrite(): Returns true if the write permission of the file is on. boolean canExecute(): Tells whether the file is executable or not. String getName(): Returns the name of the file or directory. String getPath(): Returns the formatted string path of the file or directory. String getAbsolutePath(): It returns the absolute path of the file/directory. long lastModified(): Returns the date when the file was last modified (in milliseconds), this value can be converted into dd-MM-yyyy HH:mm:sss format using SimpleDateFormat class. long length(): It will give the length of the file. boolean delete(): Deletes the file or directory. boolean renameTo(File f): Renames the file with the given abstract pathname. File [] listFiles(): Returns the array of File objects of all files contained in the directory specified. int compareTo(File pathname): It compares the pathnames of two files. boolean createNewFile(): It creates a new and empty file having the pathname specified in the constructor. boolean equals(Object obj): Tests whether the specified abstract pathname and the object are equal or not. long getFreeSpace(): It returns the number of unallocated bytes, i.e. the free space in the specified partition. String getParent(): It returns the parent directory pathname (string formatted) of the specified file/directory. File getParentFile(): It creates the File object of the parent directory of the specified file/directory. String[] list(): Returns the array of strings containing the name of the files and directories in the specified directory. boolean mkdir(): Creates the new directory with the specified pathname. boolean setExecutable(boolean exe): Changes the permission of the file to executable and set it true for the owner. boolean setReadable(boolean read): Changes the read permission of the file and set it true for owner. boolean setReadable(boolean read, boolean own): Sets the read permission for either owner or everyone. boolean setReadOnly(): Sets only the read permissions, disabling all other operations. String toString(): Returns the formatted string path of the abstract pathname of the specified file/directory. boolean setWritable(boolean canWriteIt): Sets the write permission for the owner. URI toURI(): It will return a file URI representing the file/directory abstract pathname. The following program checks whether a file exists or not, whether it is a file or a directory and also checks all the permissions the file has. import java.io.File; class Example1 { public static void main(String [] args) { File sample=new File("Demofile.txt"); if(sample.exists()==true) { System.out.println("The file: 'Demofile.txt' exists."); //checking whether it is a file or directory System.out.print("Is it a file or directory: "); if(sample.isFile()==true) System.out.println("It is a file."); else if(sample.isDirectory()==true) System.out.println("It is a directory."); //checking different kinds of permissions System.out.println("The file is readable: " + sample.canRead()); System.out.println("The file is writable: " + sample.canWrite()); System.out.println("The file is executable: " + sample.canExecute()); } else System.out.println("The file: 'Demofile.txt' does not exist"); } } See the output. Here’s another program as an example. The following program lists all the files and directories in a certain directory along with the length of the file and whether it is a file or directory. import java.io.File; class Example2 { public static void main(String [] args) { //creating the File object for directory Books File dir=new File("Books"); if(dir.exists()) { if(dir.isFile()) System.out.println("The given is a file"); else { System.out.println("The given is a directory"); //creating an array of File objects for files and directories in the given directory File[] listOfFiles=dir.listFiles(); //traversing the array of files for(int i=0;i<listOfFiles.length;i++) { String fileOrDir=""; //checking if it is file or directory if(listOfFiles[i].isFile()) fileOrDir="file"; else if(listOfFiles[i].isDirectory()) fileOrDir="directory"; //finding the size of the file long len=listOfFiles[i].length(); System.out.println("name: "+listOfFiles[i].getName()+"\nfile or directory: "+fileOrDir+"\nsize (bytes): "+len+"\n"); } } } } } See the following output. The following program creates a new file that does not exist in memory and setting the readable permission of the file. import java.io.File; class Example3 { public static void main(String [] args) { File sample=new File("Demofile3.txt"); //checking if the file exists if(sample.exists()) System.out.println("THE FILE ALREADY EXISTS."); else { System.out.println("THE FILE DOES NOT EXIST."); //creating the non existing file System.out.println("creating new file..."); try { sample.createNewFile(); Thread.sleep(1000); System.out.println("File created..."); } catch(Exception e) { System.out.println("Exception..."); } //set only readable permission sample.setReadOnly(); //checking all permission System.out.println("Permissions of the file:"); System.out.println("The file is readable: " + sample.canRead()); System.out.println("The file is writable: " + sample.canWrite()); } } } See the following output. Finally, Java File Class Tutorial is over.
https://appdividend.com/2019/06/28/java-file-class-tutorial-java-io-file-class-in-java-example/
CC-MAIN-2020-29
en
refinedweb
State machine advent: One event, two possible state transitions (15/24) Mikey Stengel Updated on ・2 min read 24 days to learn statecharts #devadvent (25 Part Series) Conditional logic is everywhere. While state machines reduce conditional logic by eliminating impossible states, there is some conditional logic we want to have within our machines. In particular, when one or the other action should be executed or multiple state transitions exist. We can define such conditional logic using the very concept we've learned yesterday, guards. By providing an array of possible state transitions, the state transition with the first guard that evaluates to true will determine the next state of our machine. Let's say we want our thermostat to distinctively express whether it is cold or warm. If the temperature is below 18°C, it should go into the cold state and above, transition to the warm state. import { Machine, assign } = 'xstate'; const thermostatMachine = Machine({ id: 'thermostat', initial: 'inactive', context: { temperature: 20, }, states: { inactive: { on: { POWER_TOGGLE: 'active' } }, active: { initial: 'warm', states: { cold: {}, warm: {}, }, on: { POWER_TOGGLE: { target: 'inactive', }, SET_TEMPERATURE: [ { target: '.cold', cond: (context, event) => event.temperature < 18, actions: assign({ temperature: (context, event) => event.temperature, }), }, { // transition without a guard as a fallback. target: '.warm', actions: assign({ temperature: (context, event) => event.temperature, }), }, ] } }, } }); Think of the state transition array as a switch case to determine the next state of a machine. The default keyword can be expressed as a state transition without a guard as seen in the example above. Notice how we had to duplicate the action to assign the temperature. Similar to when machines transition from one state to another, actions are only executed if no guard is defined, or when a guard evaluates to true. To give one more example of this behavior, the code below will never call the 'log' action. [ { target: 'cold', cond: () => false, actions: 'log', }, { target: 'warm', }, ] Tomorrow we'll refactor the thermostatMachine so that we don't have to define the same action twice.. 24 days to learn statecharts #devadvent (25 Part Series) So meetup.com is going to charge attendees in future - what's next for event organizers? So meetup.com is going to charge attendees in future - what's next for event organizers? A personal use I found for JReply's soon to come _.define() feature. PDS OWNER CALIN (Calin Baenen) - Discussion: What is The Best Hosting Out There? And What is Your Favorite? Yuli - Is there a way to freeze objects only to outside functions? PDS OWNER CALIN (Calin Baenen) - Hi Mikey, really enjoying this series as a thorough introduction to XState. Not having the ability to run some experiments myself right now, I'm curious why you set target: '. warm'rather than just warm. Does it carry a meaning? Hey Joel, glad you are enjoying the series. 😊 I mostly use target as I like being explicit and it makes adding a guard or action easier. You can totally omit it if you prefer the shorthand notation. Right, I'm referring in particular to the prefix dot though, didn't see that syntax explained anywhere. Is .warmdifferent to warmin any meaningful way? Ah right. The dot is added (and needed) for the transition to be recognized as an internal transition. We don't want the machine to leave the activestate. Instead, we are specifying a relative target (e.g .warm) to transition to the active.coldor active.warmstate. Without the relative target, the machine could think that we are transitioning to a warmstate by assuming a state structure like the following and would then fail since the state node does not exist. Makes sense! Wasn't trivial to figure out without XState experience though :) I'm glad you asked! I couldn't figure out a good way to include it in the post as I found the transitionpages of docs the most difficult to grasp and a bit discouraging for beginners. On day 15 I was also still naive enough to believe I can get away with explaining one concept per day. 😁 Towards the end, I had to ramp up and explain 2-3 things per post to write about most XState features I wanted to cover. Either way, really happy to have come across your calendar, will make a good basis for me to dive into XState myself! Thank you. It's a high reward decision for sure! Let me know if you are struggling with anything and feel free to send me machines for feedback.
https://dev.to/codingdive/state-machine-advent-one-event-two-possible-state-transitions-15-24-588k
CC-MAIN-2020-10
en
refinedweb
Hide Forgot Description of problem: Attempting to rebuild gcc-python-plugin, I ran into this issue on ppc64 and ppc64le: checking for gcc-plugin.h... not found Test compilation failed with exit code 1 The command was: gcc -c -o config-tests/00001-checking-for-gcc-plugin.h/feature-test.o -I/usr/lib/gcc/ppc64le-redhat-linux/8/plugin/include -x c++ config-tests/00001-checking-for-gcc-plugin.h/feature-test.c The source was: (in config-tests/00001-checking-for-gcc-plugin.h/feature-test.c) #include <gcc-plugin.h> The stderr was: In file included from /usr/lib/gcc/ppc64le-redhat-linux/8/plugin/include/tm.h:26, from /usr/lib/gcc/ppc64le-redhat-linux/8/plugin/include/backend.h:28, from /usr/lib/gcc/ppc64le-redhat-linux/8/plugin/include/gcc-plugin.h:30, from config-tests/00001-checking-for-gcc-plugin.h/feature-test.c:1: /usr/lib/gcc/ppc64le-redhat-linux/8/plugin/include/config/rs6000/rs6000.h:35:10: fatal error: config/rs6000/rs6000-modes.h: No such file or directory #include "config/rs6000/rs6000-modes.h" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. make: *** [Makefile:177: autogenerated-config.h] Error 1 I'm guessing it's an unpackaged header file. Failed build on ppc64le: and on ppc64: Version-Release number of selected component (if applicable): gcc-plugin-devel.ppc64 8.1.1-3.fc29 gcc-plugin-devel.ppc64le 8.1.1-3.fc29 For reference, Jakub fixed this upstream (on trunk) here:;a=commitdiff;h=c335f36328f1d6928bbb1496c022bd297836f02a Fixed. gcc-8.1.1-5.fc28 has been submitted as an update to Fedora 28. gcc-8.1.1-5.fc28 has been pushed to the Fedora 28 testing repository. If problems still persist, please make note of it in this bug report. See for instructions on how to install test updates. You can provide feedback for this update here: gcc-8.1.1-5.fc28 has been pushed to the Fedora 28 stable repository. If problems still persist, please make note of it in this bug report.
https://partner-bugzilla.redhat.com/show_bug.cgi?id=1596407
CC-MAIN-2020-10
en
refinedweb
- 7.1 Role of a Bootloader - 7.2 Bootloader Challenges - 7.3 A Universal Bootloader: Das U-Boot - 7.4 Porting U-Boot - 7.5 Other Bootloaders - 7.6 Chapter Summary 7.2 Bootloader Challenges Even a simple “Hello World” program written in C requires significant hardware and software resources. The application developer does not need to know or care much about these details because the C runtime environment transparently provides this infrastructure. A bootloader developer has no such luxury. Every resource that a bootloader requires must be carefully initialized and allocated before it is used. One of the most visible examples of this is Dynamic Random Access Memory (DRAM). 7.2.1 DRAM Controller DRAM chips cannot be directly read from or written to like other microprocessor bus resources. They require specialized hardware controllers to enable read and write cycles. To further complicate matters, DRAM must be constantly refreshed or the data contained within will be lost. Refresh is accomplished by sequentially reading each location in DRAM in a systematic manner and within the timing specifications set forth by the DRAM manufacturer. Modern DRAM chips support many modes of operation, such as burst mode and dual data rate for high-performance applications. It is the DRAM controller’s responsibility to configure DRAM, keep it refreshed within the manufacturer’s timing specifications, and respond to the various read and write commands from the processor. Setting up a DRAM controller is the source of much frustration for the newcomer to embedded development. It requires detailed knowledge of DRAM architecture, the controller itself, the specific DRAM chips being used, and the overall hardware design. Though this is beyond the scope of this book, the interested reader can learn more about this important concept by referring to the references at the end of this chapter. Appendix D, “SDRAM Interface Considerations,” provides more background on this important topic. Very little can happen in an embedded system until the DRAM controller and DRAM itself have been properly initialized. One of the first things a bootloader must do is to enable the memory subsystem. After it is initialized, memory can be used as a resource. In fact, one of the first actions many bootloaders perform after memory initialization is to copy themselves into DRAM for faster execution. 7.2.2 Flash Versus RAM Another complexity inherent in bootloaders is that they are required to be stored in nonvolatile storage but are usually loaded into RAM for execution. Again, the complexity arises from the level of resources available for the bootloader to rely on. In a fully operational computer system running an operating system such as Linux, it is relatively easy to compile a program and invoke it from nonvolatile storage. The runtime libraries, operating system, and compiler work together to create the infrastructure necessary to load a program from nonvolatile storage into memory and pass control to it. The aforementioned “Hello World” program is a perfect example. When compiled, it can be loaded into memory and executed simply by typing the name of the executable (hello) on the command line (assuming, of course, that the executable exists somewhere on your PATH). This infrastructure does not exist when a bootloader gains control upon power-on. Instead, the bootloader must create its own operational context and move itself, if required, to a suitable location in RAM. Furthermore, additional complexity is introduced by the requirement to execute from a read-only medium. 7.2.3 Image Complexity As application developers, we do not need to concern ourselves with the layout of a binary executable file when we develop applications for our favorite platform. The compiler and binary utilities are preconfigured to build a binary executable image containing the proper components needed for a given architecture. The linker places startup (prologue) and shutdown (epilogue) code into the image. These objects set up the proper execution context for your application, which typically starts at main() in your application. This is absolutely not the case with a typical bootloader. When the bootloader gets control, there is no context or prior execution environment. In a typical system, there might not be any DRAM until the bootloader initializes the processor and related hardware. Consider what this means. In a typical C function, any local variables are stored on the stack, so a simple function like the one in Listing 7-1 is unusable. Listing 7-1. Simple C function int setup_memory_controller(board_info_t *p) { unsigned int *dram_controller_register = p->dc_reg; ... When a bootloader gains control on power-on, there is no stack and no stack pointer. Therefore, a simple C function similar to Listing 7-1 will likely crash the processor because the compiler will generate code to create and initialize the pointer dram_controller_register on the stack, which does not yet exist. The bootloader must create this execution context before any C functions are called. When the bootloader is compiled and linked, the developer must exercise complete control over how the image is constructed and linked. This is especially true if the bootloader is to relocate itself from Flash to RAM. The compiler and linker must be passed a handful of parameters defining the characteristics and layout of the final executable image. Two primary characteristics conspire to add complexity to the final binary executable image. The first characteristic that presents complexity is the need to organize the startup code in a format compatible with the processor’s boot sequence. The first bytes of executable code must be at a predefined location in Flash, depending on the processor and hardware architecture. For example, the AMCC PowerPC 405GP processor seeks its first machine instructions from a hard-coded address of 0xFFFF_FFFC. Other processors use similar methods with different details. Some processors are configurable at power-on to seek code from one of several predefined locations, depending on hardware configuration signals. How does a developer specify the layout of a binary image? The linker is passed a linker description file, also called a linker command script. This special file can be thought of as a recipe for constructing a binary executable image. Listing 7-2 contains a snippet from an existing linker description file in use in a popular bootloader, which we discuss shortly. Listing 7-2. Linker Command Script—Reset Vector Placement SECTIONS { .resetvec 0xFFFFFFFC : { *(.resetvec) } = 0xffff ... A complete description of linker command scripts syntax is beyond the scope of this book. The interested reader is directed to the GNU LD manual referenced at the end of this chapter. Looking at Listing 7-2, we see the beginning of the definition for the output section of the binary ELF image. It directs the linker to place the section of code called .resetvec at a fixed address in the output image, starting at location 0xFFFF_FFFC. Furthermore, it specifies that the rest of this section shall be filled with all ones (0xFFFF.) This is because an erased Flash memory array contains all ones. This technique not only saves wear and tear on the Flash memory, but it also significantly speeds up programming of that sector. Listing 7-3 is the complete assembly language file from a recent U-Boot distribution that defines the .resetvec code section. It is contained in an assembly language file called .../cpu/ppc4xx/resetvec.S. Notice that this code section cannot exceed 4 bytes in length in a machine with only 32 address bits. This is because only a single instruction is defined in this section, no matter what configuration options are present. Listing 7-3. Source Definition of .resetvec /* Copyright MontaVista Software Incorporated, 2000 */ #include <config.h> .section .resetvec, "ax" #if defined(CONFIG_440) b _start_440 #else #if defined(CONFIG_BOOT_PCI) && defined(CONFIG_MIP405) b _start_pci #else b _start #endif #endif This assembly language file is very easy to understand, even if you have no assembly language programming experience. Depending on the particular configuration (as specified by the CONFIG_* macros), an unconditional branch instruction (b in PowerPC assembler syntax) is generated to the appropriate start location in the main body of code. This branch location is a 4-byte PowerPC instruction, and as we saw in the snippet from the linker command script in Listing 7-2, this simple branch instruction is placed in the absolute Flash address of 0xFFFF_FFFC in the output image. As mentioned earlier, the PPC 405GP processor fetches its first instruction from this hard-coded address. This is how the first sequence of code is defined and provided by the developer for this particular architecture and processor combination. 7.2.4 Execution Context The other primary reason for bootloader image complexity is the lack of execution context. When the sequence of instructions from Listing 7-3 starts executing (recall that these are the first machine instructions after power-on), the resources available to the running program are nearly zero. Default values designed into the hardware ensure that fetches from Flash memory work properly and that the system clock has some default values, but little else can be assumed.2 The reset state of each processor is usually well defined by the manufacturer, but the reset state of a board is defined by the hardware designers. Indeed, most processors have no DRAM available at startup for temporary storage of variables or, worse, for a stack that is required to use C program calling conventions. If you were forced to write a “Hello World” program with no DRAM and, therefore, no stack, it would be quite different from the traditional “Hello World” example. This limitation places significant challenges on the initial body of code designed to initialize the hardware. As a result, one of the first tasks the bootloader performs on startup is to configure enough of the hardware to enable at least some minimal amount of RAM. Some processors designed for embedded use have small amounts of on-chip static RAM available. This is the case with the PPC 405GP we’ve been discussing. When RAM is available, a stack can be allocated using part of that RAM, and a proper context can be constructed to run higher-level languages such as C. This allows the rest of the processor and platform initialization to be written in something other than assembly language.
http://www.informit.com/articles/article.aspx?p=674698&seqNum=2
CC-MAIN-2020-10
en
refinedweb
From: Steven Watanabe (steven_at_[hidden]) Date: 2007-12-20 11:13:21 AMDG Ion Gaztañaga <igaztanaga <at> gmail.com> writes: > > Hi all, > > The formal review of the Unordered library started December 7, we have a > few nice reviews, but they are not enough. If you are interested in the > library (and I *know* you are), please take some time to review it. Sorry I didn't do a full review earlier. I just have a few implementation comments. allocator.hpp line 118: typedef typename Allocator::value_type value_type; Is there a reason you're not using allocator_value_type? lines 165 and 217: reset(ptr_); I don't think you want ADL here. hash_table.hpp: line 66: float_to_size_t: I don't think the test used is correct. The following program prints "0" under msvc 8.0: #include <limits> #include <iostream> int main() { std::cout << static_cast<std::size_t>( static_cast<float>(std::numeric_limits<std::size_t>::max())) << std::endl; } hash_table_impl.hpp lines 137-140: Do you want ADL for hash_swap? line 865: Is there a reason not to use cached_begin_bucket_? line 1282: return float_to_size_t(ceil( should qualify ceil with std:: line 1361: rehash_impl(static_cast<size_type>(floor(n / mlf_ * 1.25)) + 1); *std::*floor? The implementation files use BOOST_DEDUCED_TYPENAME but unordered_map and unordered_set use typename. Could you make it consistant? I'm getting a lot of warnings on the tests from msvc 8.0 with /W4 because minimal::ptr/const_ptr only defines operator+(int) and the internals call operator+ with a size_t. Is + required to work for the size_type or should it be cast to the difference_type explicitly? Also, minimal::ptr should use std::ptrdiff_t rather than int. In Christ, Steven Watanabe Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2007/12/131817.php
CC-MAIN-2020-10
en
refinedweb
Introduction # Some of the unique concepts in TypeScript describe the shape of JavaScript objects at the type level. One example that is especially unique to TypeScript is the concept of ‘declaration merging’. Understanding this concept will give you an advantage when working with existing JavaScript. It also opens the door to more advanced abstraction concepts. For the purposes of this article, “declaration merging” means that the compiler merges two separate declarations declared with the same name into a single definition. This merged definition has the features of both of the original declarations. Any number of declarations can be merged; it’s not limited to just two declarations. Basic Concepts # In TypeScript, a declaration creates entities in at least one of three groups: namespace, type, or value. Namespace-creating declarations create a namespace, which contains names that are accessed using a dotted notation. Type-creating declarations do just that: they create a type that is visible with the declared shape and bound to the given name. Lastly, value-creating declarations create values that are visible in the output JavaScript. Understanding what is created with each declaration will help you understand what is merged when you perform a declaration merge. Merging Interfaces # The simplest, and perhaps most common, type of declaration merging is interface merging. At the most basic level, the merge mechanically joins the members of both declarations into a single interface with the same name. interface Box { height: number; width: number; } interface Box { scale: number; } let box: Box = {height: 5, width: 6, scale: 10}; Non-function members of the interfaces should be unique. If they are not unique, they must be of the same type. The compiler will issue an error if the interfaces both declare a non-function member of the same name, but of different types. For function members, each function member of the same name is treated as describing an overload of the same function. Of note, too, is that in the case of interface A merging with later interface A, the second interface will have a higher precedence than the first. That is, in the example: interface Cloner { clone(animal: Animal): Animal; } interface Cloner { clone(animal: Sheep): Sheep; } interface Cloner { clone(animal: Dog): Dog; clone(animal: Cat): Cat; } The three interfaces will merge to create a single declaration as so: interface Cloner { clone(animal: Dog): Dog; clone(animal: Cat): Cat; clone(animal: Sheep): Sheep; clone(animal: Animal): Animal; } Notice that the elements of each group maintains the same order, but the groups themselves are merged with later overload sets ordered first. One exception to this rule is specialized signatures. If a signature has a parameter whose type is a single string literal type (e.g. not a union of string literals), then it will be bubbled toward the top of its merged overload list. For instance, the following interfaces will merge together: interface Document { createElement(tagName: any): Element; } interface Document { createElement(tagName: "div"): HTMLDivElement; createElement(tagName: "span"): HTMLSpanElement; } interface Document { createElement(tagName: string): HTMLElement; createElement(tagName: "canvas"): HTMLCanvasElement; } The resulting merged declaration of Document will be the following: interface Document { createElement(tagName: "canvas"): HTMLCanvasElement; createElement(tagName: "div"): HTMLDivElement; createElement(tagName: "span"): HTMLSpanElement; createElement(tagName: string): HTMLElement; createElement(tagName: any): Element; } Merging Namespaces # Similarly to interfaces, namespaces of the same name will also merge their members. Since namespaces create both a namespace and a value, we need to understand how both merge. To merge the namespaces, type definitions from exported interfaces declared in each namespace are themselves merged, forming a single namespace with merged interface definitions inside. To merge the namespace value, at each declaration site, if a namespace already exists with the given name, it is further extended by taking the existing namespace and adding the exported members of the second namespace to the first. The declaration merge of Animals in this example: namespace Animals { export class Zebra { } } namespace Animals { export interface Legged { numberOfLegs: number; } export class Dog { } } is equivalent to: namespace Animals { export interface Legged { numberOfLegs: number; } export class Zebra { } export class Dog { } } This model of namespace merging is a helpful starting place, but we also need to understand what happens with non-exported members. Non-exported members are only visible in the original (un-merged) namespace. This means that after merging, merged members that came from other declarations cannot see non-exported members. We can see this more clearly in this example: namespace Animal { let haveMuscles = true; export function animalsHaveMuscles() { return haveMuscles; } } namespace Animal { export function doAnimalsHaveMuscles() { return haveMuscles; // Error, because haveMuscles is not accessible here } } Because haveMuscles is not exported, only the animalsHaveMuscles function that shares the same un-merged namespace can see the symbol. The doAnimalsHaveMuscles function, even though it’s part of the merged Animal namespace can not see this un-exported member. Merging Namespaces with Classes, Functions, and Enums # Namespaces are flexible enough to also merge with other types of declarations. To do so, the namespace declaration must follow the declaration it will merge with. The resulting declaration has properties of both declaration types. TypeScript uses this capability to model some of the patterns in JavaScript as well as other programming languages. Merging Namespaces with Classes # This gives the user a way of describing inner classes. class Album { label: Album.AlbumLabel; } namespace Album { export class AlbumLabel { } } The visibility rules for merged members is the same as described in the ‘Merging Namespaces’ section, so we must export the AlbumLabel class for the merged class to see it. The end result is a class managed inside of another class. You can also use namespaces to add more static members to an existing class. In addition to the pattern of inner classes, you may also be familiar with the JavaScript practice of creating a function and then extending the function further by adding properties onto the function. TypeScript uses declaration merging to build up definitions like this in a type-safe way. function buildLabel(name: string): string { return buildLabel.prefix + name + buildLabel.suffix; } namespace buildLabel { export let suffix = ""; export let prefix = "Hello, "; } console.log(buildLabel("Sam Smith")); Similarly, namespaces can be used to extend enums with static members: enum Color { red = 1, green = 2, blue = 4 } namespace Color { export function mixColor(colorName: string) { if (colorName == "yellow") { return Color.red + Color.green; } else if (colorName == "white") { return Color.red + Color.green + Color.blue; } else if (colorName == "magenta") { return Color.red + Color.blue; } else if (colorName == "cyan") { return Color.green + Color.blue; } } } Disallowed Merges # Not all merges are allowed in TypeScript. Currently, classes can not merge with other classes or with variables. For information on mimicking class merging, see the Mixins in TypeScript section. Module Augmentation # Although JavaScript modules do not support merging, you can patch existing objects by importing and then updating them. Let’s look at a toy Observable example: // observable.ts export class Observable<T> { // ... implementation left as an exercise for the reader ... } // map.ts import { Observable } from "./observable"; Observable.prototype.map = function (f) { // ... another exercise for the reader } This works fine in TypeScript too, but the compiler doesn’t know about Observable.prototype.map. You can use module augmentation to tell the compiler about it: // observable.ts export class Observable<T> { // ... implementation left as an exercise for the reader ... } // map.ts import { Observable } from "./observable"; declare module "./observable" { interface Observable<T> { map<U>(f: (x: T) => U): Observable<U>; } } Observable.prototype.map = function (f) { // ... another exercise for the reader } // consumer.ts import { Observable } from "./observable"; import "./map"; let o: Observable<number>; o.map(x => x.toFixed()); The module name is resolved the same way as module specifiers in import/ export. See Modules for more information. Then the declarations in an augmentation are merged as if they were declared in the same file as the original. However, there are two limitations to keep in mind: - You can’t declare new top-level declarations in the augmentation – just patches to existing declarations. - Default exports also cannot be augmented, only named exports (since you need to augment an export by its exported name, and defaultis a reserved word - see #14080 for details) Global augmentation # You can also add declarations to the global scope from inside a module: // observable.ts export class Observable<T> { // ... still no implementation ... } declare global { interface Array<T> { toObservable(): Observable<T>; } } Array.prototype.toObservable = function () { // ... } Global augmentations have the same behavior and limits as module augmentations.
https://www.typescriptlang.org/docs/handbook/declaration-merging.html
CC-MAIN-2020-10
en
refinedweb
Tutorial: Categorize iris flowers using k-means clustering with ML.NET version 15.6 or later with which group each flower belongs to, you choose the unsupervised machine learning task. To divide a data set in groups in such a way that elements in the same group are more similar to each other than to those in other groups, use a clustering machine learning task. Create a console application Open Visual Studio. Select File > New > Project from the menu bar. In the New Project dialog, select the Visual C# node followed by the .NET Core node. Then select the Console App (.NET Core) project template. In the Name text box, type "IrisFlowerClustering" and then select the OK button. Create a directory named Data in your project to store the data set and model files: In Solution Explorer, right-click the project and select Add > New Folder. Type "Data" and hit Enter. Install the Microsoft.ML NuGet package: In Solution Explorer, right-click the project and select Manage NuGet Packages. Choose "nuget.org" as the Package source, select the Browse tab, search for Microsoft.ML and select the Install button. Select the OK button on the Preview Changes dialog and then select the I Accept button on the License Acceptance dialog if you agree with the license terms for the packages listed. Prepare the data Download the iris.data data set and save it to the Data folder you've created at the previous step. For more information about the iris data set, see the Iris flower data set Wikipedia page and the Iris Data Set page, which is the source of the data set. In Solution Explorer, right-click the iris.data file and select Properties. Under Advanced, change the value of Copy to Output Directory to Copy if newer. The iris.data file contains five columns that represent: - sepal length in centimetres - sepal width in centimetres - petal length in centimetres - petal width in centimetres - type of iris flower For the sake of the clustering example, this tutorial ignores the last column. Create data classes Create classes for the input data and the predictions: In Solution Explorer, right-click the project, and then select Add > New Item. In the Add New Item dialog box, select Class and change the Name field to IrisData.cs. Then, select the Add button. Add the following usingdirective to the new file: using Microsoft.ML.Data; Remove the existing class definition and add the following code, which defines the classes IrisData and ClusterPrediction, to the IrisData.cs file: public class IrisData { [LoadColumn(0)] public float SepalLength; [LoadColumn(1)] public float SepalWidth; [LoadColumn(2)] public float PetalLength; [LoadColumn(3)] public float PetalWidth; } public class ClusterPrediction { [ColumnName("PredictedLabel")] public uint PredictedClusterId; [ColumnName("Score")] public float[] Distances; } IrisData is the input data class and has definitions for each feature from the data set. Use the LoadColumn attribute to specify the indices of the source columns in the data set file. The ClusterPrediction class represents the output of the clustering model applied to an IrisData instance. Use the ColumnName attribute to bind the PredictedClusterId and Distances fields to the PredictedLabel and Score columns respectively. In case of the clustering task those columns have the following meaning: - PredictedLabel column contains the ID of the predicted cluster. - Score column contains an array with squared Euclidean distances to the cluster centroids. The array length is equal to the number of clusters. Note Use the float type to represent floating-point values in the input and prediction data classes. Define data and model paths Go back to the Program.cs file and add two fields to hold the paths to the data set file and to the file to save the model: _dataPathcontains the path to the file with the data set used to train the model. _modelPathcontains the path to the file where the trained model is stored. Add the following code right above the Main method to specify those paths: static readonly string _dataPath = Path.Combine(Environment.CurrentDirectory, "Data", "iris.data"); static readonly string _modelPath = Path.Combine(Environment.CurrentDirectory, "Data", "IrisClusteringModel.zip"); To make the preceding code compile, add the following using directives at the top of the Program.cs file: using System; using System.IO; Create ML context Add the following additional using directives to the top of the Program.cs file: using Microsoft.ML; using Microsoft.ML.Data; In the Main method, replace the Console.WriteLine("Hello World!"); line with the following code: var mlContext = new MLContext(seed: 0); The Microsoft.ML.MLContext class represents the machine learning environment and provides mechanisms for logging and entry points for data loading, model training, prediction, and other tasks. This is comparable conceptually to using DbContext in Entity Framework. Set up data loading Add the following code to the Main method to set up the way to load data: IDataView dataView = mlContext.Data.LoadFromTextFile<IrisData>(_dataPath, hasHeader: false, separatorChar: ','); The generic MLContext.Data.LoadFromTextFile extension method infers the data set schema from the provided IrisData type and returns IDataView which can be used as input for transformers. Create a learning pipeline For this tutorial, the learning pipeline of the clustering task comprises two following steps: - concatenate loaded columns into one Features column, which is used by a clustering trainer; - use a KMeansTrainer trainer to train the model using the k-means++ clustering algorithm. Add the following code to the Main method: string featuresColumnName = "Features"; var pipeline = mlContext.Transforms .Concatenate(featuresColumnName, "SepalLength", "SepalWidth", "PetalLength", "PetalWidth") .Append(mlContext.Clustering.Trainers.KMeans(featuresColumnName, numberOfClusters: 3)); The code specifies that the data set should be split in three clusters. Train the model The steps added in the preceding sections prepared the pipeline for training, however, none have been executed. Add the following line to the Main method to perform data loading and model training: var model = pipeline.Fit(dataView); Save the model At this point, you have a model that can be integrated into any of your existing or new .NET applications. To save your model to a .zip file, add the following code to the Main method: using (var fileStream = new FileStream(_modelPath, FileMode.Create, FileAccess.Write, FileShare.Write)) { mlContext.Model.Save(model, dataView.Schema, fileStream); } Use the model for predictions To make predictions, use the PredictionEngine<TSrc,TDst> class that takes instances of the input type through the transformer pipeline and produces instances of the output type. Add the following line to the Main method to create an instance of that class: var predictor = mlContext.Model.CreatePredictionEngine<IrisData, ClusterPrediction>(model); The PredictionEngine is a convenience API, which allows you to perform a prediction on a single instance of data. PredictionEngine is not thread-safe. It's acceptable to use in single-threaded or prototype environments. For improved performance and thread safety in production environments, use the PredictionEnginePool service, which creates an ObjectPool of PredictionEngine objects for use throughout your application. See this guide on how to use PredictionEnginePool in an ASP.NET Core Web API. Note PredictionEnginePool service extension is currently in preview. Create the TestIrisData class to house test data instances: In Solution Explorer, right-click the project, and then select Add > New Item. In the Add New Item dialog box, select Class and change the Name field to TestIrisData.cs. Then, select the Add button. Modify the class to be static like in the following example: static class TestIrisData This tutorial introduces one iris data instance within this class. You can add other scenarios to experiment with the model. Add the following code into the TestIrisData class: internal static readonly IrisData Setosa = new IrisData { SepalLength = 5.1f, SepalWidth = 3.5f, PetalLength = 1.4f, PetalWidth = 0.2f }; To find out the cluster to which the specified item belongs to, go back to the Program.cs file and add the following code into the Main method: var prediction = predictor.Predict(TestIrisData.Setosa); Console.WriteLine($"Cluster: {prediction.PredictedClusterId}"); Console.WriteLine($"Distances: {string.Join(" ", prediction.Distances)}"); Run the program to see which cluster contains the specified data instance and squared distances from that instance to the cluster centroids. Your results should be similar to the following: Cluster: 2 Distances: 11.69127 0.02159119 25.59896 Congratulations! You've now successfully built a machine learning model for iris clustering and used it to make predictions. You can find the source code for this tutorial at the dotnet/samples GitHub repository. Next steps In this tutorial, you learned how to: - Understand the problem - Select the appropriate machine learning task - Prepare the data - Load and transform the data - Choose a learning algorithm - Train the model - Use the model for predictions Check out our GitHub repository to continue learning and find more samples. Feedback
https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/iris-clustering
CC-MAIN-2020-10
en
refinedweb
public class RecursionCutoffPoint extends java.lang.Object RecursionCutoffPointrepresents one such cut off query composition. When the compilation of the recursive query finishes and the compiled form becomes available, the RecursionCutoffPointhas to be signaled to update parent traces and recipes of the recursive call. equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait public RecursionCutoffPoint(PQuery query) public void mend(CompiledQuery finalCompiledForm) public CompiledQuery getCompiledQuery() public ProductionRecipe getRecipe()
https://www.eclipse.org/viatra/javadoc/releases/incquery-1.1.0/org/eclipse/incquery/runtime/rete/construction/plancompiler/RecursionCutoffPoint.html
CC-MAIN-2020-10
en
refinedweb
libkdegames #include <KgDifficulty> Detailed Description KgDifficulty manages difficulty levels of a game in a standard way. The difficulty can be a type of game (like in KMines: small or big field) or the AI skills (like in Bovo: how deep should the computer search to find the best move) or a combination of both of them. On the user point of view, it's not really different: either is the game easy or hard to play. KgDifficulty contains a list of KgDifficultyLevel instances. One of these levels is selected; this selection will be recorded when the application is closed. A set of standard difficulty levels is provided by KgDifficultyLevel, but custom levels can be defined at the same time. Definition at line 96 of file kgdifficulty.h. Constructor & Destructor Documentation Definition at line 167 of file kgdifficulty.cpp. Destroys this instance and all DifficultyLevel instances in it. Definition at line 175 of file kgdifficulty.cpp. Member Function Documentation Adds a difficulty level to this instance. This will not affect the currentLevel() if there is one. Definition at line 180 of file kgdifficulty.cpp. A shortcut for addLevel(new KgDifficultyLevel(level)). Definition at line 202 of file kgdifficulty.cpp. This convenience method adds a range of standard levels to this instance (including the boundaries). For example: This adds the levels "Easy", "Medium", "Hard" and "Very hard". Definition at line 207 of file kgdifficulty.cpp. This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. This overload allows to specify a defaultLevel. Definition at line 213 of file kgdifficulty.cpp. - Returns - the current difficulty level After the KgDifficulty object has been created, the current difficulty level will not be determined until this method is called for the first time. This allows the application developer to set up the difficulty levels before KgDifficulty retrieves the last selected level from the configuration file. Emitted when a new difficulty level has been selected. Emitted when the editability changes. - See also - setEditable Emitted when a running game has been marked or unmarked. - See also - setGameRunning - Returns - whether the difficulty level selection may be edited Definition at line 272 of file kgdifficulty.cpp. - Returns - whether a running game has been marked - See also - setGameRunning Definition at line 287 of file kgdifficulty.cpp. - Returns - a list of all difficulty levels, sorted by hardness Definition at line 238 of file kgdifficulty.cpp. Select a new difficulty level. The given level must already have been added to this instance. - Note - This does nothing if isEditable() is false. If a game is running (according to setGameRunning()), the user will be asked for confirmation before the new difficulty level is selected. Definition at line 302 of file kgdifficulty.cpp. Set whether the difficulty level selection may be edited. The default value is true. Definition at line 277 of file kgdifficulty.cpp. KgDifficulty has optional protection against changing the difficulty level while a game is running. If setGameRunning(true) has been called, and select() is called to select a new difficulty level, the user will be asked for confirmation. Definition at line 292 of file kgdifficulty.cpp. Property Documentation Definition at line 101 of file kgdifficulty.h. Definition at line 103 of file kgdifficulty.h. Definition at line 104 of file kgdifficulty.h. Definition at line 102 of file kgdifficulty.h. The documentation for this class was generated from the following files: Documentation copyright © 1996-2020 The KDE developers. Generated on Mon Feb 17 2020 03:37:09 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
https://api.kde.org/4.x-api/kdegames-apidocs/libkdegames/html/classKgDifficulty.html
CC-MAIN-2020-10
en
refinedweb
iofunc_pathconf() Support pathconf() requests Synopsis: #include <sys/iofunc.h> int iofunc_pathconf( resmgr_context_t *ctp, io_pathconf_pathconf ) io_pathconf_t structure: -
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/i/iofunc_pathconf.html
CC-MAIN-2020-10
en
refinedweb
SYNOPSIS #include <nng/protocol/bus0/bus.h> DESCRIPTION The bus protocol provides for building mesh networks where every peer is connected to every other peer. In this protocol, each message sent by a node is sent to every one of its directly connected peers. All message delivery in this pattern is best-effort, which means that peers may not receive messages. Furthermore, delivery may occur to some, all, or none of the directly connected peers. (Messages are not delivered when peer nodes are unable to receive.) Hence, send operations will never block; instead if the message cannot be delivered for any reason it is discarded. Socket Operations The nng_bus0_open() functions create a bus socket. This socket may be used to send and receive messages. Sending messages will attempt to deliver to each directly connected peer. Protocol Versions Only version 0 of this protocol is supported. (At the time of writing, no other versions of this protocol have been defined.) Protocol Options The bus protocol has no protocol-specific options. Protocol Headers When using a “raw” bus socket, received messages will contain the incoming pipe ID as the sole element in the header. If a message containing such a header is sent using a raw bus socket, then, the message will be delivered to all connected pipes except the one identified in the header. This behavior is intended for use with device configurations consisting of just a single socket. Such configurations are useful in the creation of rebroadcasters, and this capability prevents a message from being routed back to its source. If no header is present, then a message is sent to all connected pipes. When using “cooked” bus sockets, no message headers are present.
https://nng.nanomsg.org/man/v1.2.2/nng_bus.7.html
CC-MAIN-2020-10
en
refinedweb
Unit Tests There are many schools of thought on how, what, and when to test. This is a very sensitive subject for many people. As such, I will simply give an overview of the basic tools available for testing and leave it up to you to decide how and when to use them. The Test API Clojure provides built-in support for testing via the clojure.test namespace. When a new project is created, a test package is generated along with it. Let’s take a quick look at what the clojure.test API looks like and how to work with it. The simplest way to write tests is to create assertions using the is macro. The following are a few examples of how it works: Get Web Development with Clojure, 2nd Edition now with O’Reilly online learning. O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers.
https://www.oreilly.com/library/view/web-development-with/9781680502152/f_0059.xhtml
CC-MAIN-2020-10
en
refinedweb
Christopher James JepsonMember Content count35 Joined Last visited Community Reputation229 Neutral About Christopher James Jepson - RankMember Raising Money For my Game? Christopher James Jepson replied to BeoGames's topic in Games Business and Law. Raising Money For my Game? Christopher James Jepson replied to BeoGames's topic in Games Business and LawTo insure that the words "copy" and "introspecting" are not being confused with each other, I pose the following question to you: Could you please clarify this, specifically what Blender is referring to as a "standalone"? No assumptions please. Per blender.org: I'm assuming you are incorrect. However, if you are indeed correct, you'll be helping me a great deal in the long run. Please provide reliable citations to the answer regarding the question I pose to you. Thank you. Raising Money For my Game? Christopher James Jepson replied to BeoGames's topic in Games Business and LawRemoved/Edited due to errors found in my statement. Raising Money For my Game? Christopher James Jepson replied to BeoGames's topic in Games Business and LawRemoved/edited due to errors found in my statement. How much planning do game programmer before writing a single line of code and how do they plan it out Christopher James Jepson replied to Nicholas Kong's topic in General and Gameplay ProgrammingCorrect. This is why almost all internships do diagrams and charts rather then pure coding. Conceptual understanding is critical, or you're just coding blindly. It's also an easy way to spot a very obvious bug that could be over looked in development. What saddens me is when people bash charts and diagrams, claiming it to be useless. For some, such things may be a hindrance. But here's my outlook on it: 1 great programmer can not program nearly as good as a 100 moderately good programmers working as a team. While it's critical to be confident, its crucial to identify when confidence is tainted by belligerence and ignorance. I would much rather work with the person that needs the flow chart and communicates well with me than someone who is too good for anyone else. So yes, communicating to the point of what is needed (typically in a chart or diagram) can be of a invaluable asset for leadership and direction, which means more man power to complete the larger task at hand. How much planning do game programmer before writing a single line of code and how do they plan it out Christopher James Jepson replied to Nicholas Kong's topic in General and Gameplay ProgrammingTo answer your question, it is best that you become familiar with the SDLC (Software Design Life Cycle) and the AGILE-Scrum process of this cycle, including what stories, sprints and backlogs are. There is something called the Technical Design Specification. It is a book, created by the designers and in part, the developers of the SDLC. It encompasses just about every diagram you can think of: Use Case UML, High Level Class Diagram UML, Database Design, ERD, Dataflow Diagram - Flow Charts. Before a single code is worked on, the Technical Design Specification should be completed. The reason is because of the first page of this book, being the contents regarding its Purpose and Description of the Requirements. In a nut shell, if either the Purpose or the Requirements are breached or not met, then you either have one really passive and generous client or... you can't adequately define your sprints, which results in project failures (and people start losing their jobs and companies start losing a lot of money). All the pages there after the first page, break down what the program should do. And that breakdown, allows a sprint log to be created, which allows management for the overall project. As a one man programmer, yes, you can get away with not using this. As a two or three man programmer, you might be able to get away with not using this. But as the numbers increase, so does the unlikeliness of succession from not using the Technical Design Specification. Hope this helps. OOP is so confusing[wrong question] Christopher James Jepson replied to KuraiTsubasa's topic in For BeginnersTo understand OOP (Object Oriented Programming), it's likely best to understand what an Object is when referring to programming. Consider the following: Scenario #1 You are a car mechanic. A car is delivered to you, and it requires a tune up. You find that several nuts and bolts are lose on examination. You search for and acquire an adjustable wrench. Using the adjustable wrench, you tighten the nuts and bolts. The result of using the adjustable wrench returns a task completed. Since there are no other tasks associated with this tune up, the tuneup is completed. Scenario #2 A second car comes in. The car requires a replacement radiator. On examination of the radiator, you determine that an array of tools are required, including an adjustable wrench. You search for and acquire an adjustable wrench. You use the adjustable wrench to loosen bolts. You use other tools to further allow the removal and replacement of the radiator. A new radiator is installed. You use a combination of tools to again mount the radiator. You again use the adjustable wrench to tighten the bolts. The result of using the adjustable wrench returns a completed task (as with the other tools). Since there are no other tasks associated with this radiator replacement, the replacement is completed. An object seen in the two examples above would be the adjustable wrench. In programming, a object is essentially a class that gets instantiated, meets a criteria, and can be reused. For proper object creation, it must meet these four requirements: It must support polymorphism. It must support overloading. It should support overriding (for consistency). It should support encapsulation (for security). Much like the adjustable wrench in the above example, the adjustable wrench was performed on both a tune up and a radiator replacement. It essentially was used to remove and tighten bolts and nuts in two different scenarios. The nuts and bolts between the two cars were different, but were classified as the same type of property of which an adjustable wrench be used with. Likewise, two different sets of code may require the use of a class for different reasons, but pass a similar type of peropty and require a similar return value. This is essentially polymorphism, because the same object was used for the same reason with a different set or properties in two different scenarios (you had to adjust the wrench size for different size nuts/bolts two separate jobs, but got the job done with one adjustable wrench). Polymorphism The biggest challenge in OOP is thinking outside of primitives. As novice programmers, we tend to think int, float, double... and the associated values to such. But OOP is both similar and different. An object is a container of values/properties. The easiest way to truly understand an object is to understand a structure in C. You have an entity, and that entity has properties. When you use that entity, you can call one property, a set of properties, or all properties stored. If you want to get more complicated, you can stored objects within objects, and would have to iterate through the first object to access the nested objects, for the values stored in the desired object. Tricky, huh? To further understand this, I would highly recommend studying data structures. Overloading Overload is hard to explain without confusing people. I would recommend researching it. In simplest definition, overloading is when you call the same function/method of a class, pass a different set of parameters. The parameters define which of the identically named function/methods you are calling. Overriding Overriding is when you are quite literately calling a method of identical name and parameter, but of a child class, which invokes the use of that method over the identical method of a super class. A clean example of this is when you define a default constructor which is required in instantiating a class (for Java and C#). Another way to look at this is saying you have two classes, class 1, and class 2. You have a function/method called public int GetMe(int i) in both. To get to class 2, you must first instantiate class 1, then using that object, instantiate class 2. But instead of using GetMe(int i) in class 1, you use it in Class 2. You are overriding the GetMe(int i) method of class 1 with the GetMe(int i) method of class 2. Encapsulation Encapsulation is when you use access modifiers like private, protected, etc to hide variables from super classes, but give access to change such values through methods like setters/getters. Again, if you truly want to understand OOP, study polymorphism, overloading, overriding and encapsulation. I would also advise studying further into containers and data structures, and learn how to use iterators. You'll never ask this question again if you know these key aspects. Best of luck to you. Unity Best Laptop for Game Development Christopher James Jepson replied to Carbon101's topic in General and Gameplay ProgrammingTo help provide guidance: The only time a GPU becomes essentially important is when you are considering cross platform support or are pushing graphics to the very limit of it's hardware capabilities. For example, when tackling code in a Mac/Linux/UNIX environment, OpenGL will be used, over DirectX, Direct3D which is exclusive for Windows. This fundamental difference would decide whether or not it would be safe to go ATI or nVidia alone. When pushing hard on graphic hardware, that'll narrow the choice of the GPU even further. However, for a lesser hardware critical game, it's better to go toward a middle ground, such as an Intel or nVidia simply for the sake of compatibility. Computer animators on the other hand, have a vigorous requirement, but that's a different subject altogether. Now, while a GPU can be the most important feature for gamers, and are important for programmers, it is not the most important role for developers. In fact, if we had to talk about hardware at such a low level, I would say, in that low level requirement, a CPU and associated BUS on the mother board would hold a greater importance because of the compatibility needs of the compiler. But frankly, even that's not true for a higher level design paradigm because essentially, the OS takes care of almost all of that for you, even on Linux. To the point at hand: What I would do is focus on something with a strong processor, that has as many threads as you can get, and has a proven track record for reliability. Games are typically designed to be thread dependent. I would suggest an Intel Processor within the i series for the sake of running fast compilations and executions. You want loads of RAM and a fast hard drive because nothing sucks more than a IDE that crawls and crashes. Don't worry too much about HD space though, as you should get in the habit of backing up your data on portable media, a CVS or SVN regularly, all of which should not be on your system to begin with. For a GPU, I would highly suggest staying away from anything that is ATI simply due to cross compatibility reasons. While nVidia is a wise choice for a GPU brand, adding a reasonably good nVidia GPU along side with a i series CPU on any laptop usually brings the cost fairly high because the i series CPU's have built in graphics support, essentially having 2 GPU's in one system (a luxury or in your case, a feature as a programmer depending on your role and needs). For more information on nVidia GPU's, I would suggest visiting this site, and narrowing your search down further to what you need: Remember, you want to cross check the GPU you are interested in with any platform you may be interested in focusing on. A simple Google search for a GPU type and a Linux Distribution can bring up a lot of information for you just for developing on Linux alone. For CPU, I recommend an Intel iSeries. However, you can review a list of CPU's and get a genuine idea of which one ideally works for you by going here: I don't do game design, but I do play games, and I do develop code in Java. I wanted a cheap laptop that was a good middle ground for all of this. I found that a AMD A8 Quad Core with a ATI Radeon GPU stood on equal ground for gaming to a i3 and some i5's, met my requirements for coding, and dropped the cost by $300. Plus, I got even more than I could have expected. I honestly thought League of Legends wouldn't run on it, but I was proven horrible wrong when it rocked the game. As for laptop brands, that could be important as well due to driver support and general reliability. In such cases I would suggest a Lenovo, HP or in some cases a Dell. I personally got a Samsung, but my focus is not game development so I can't suggest that. Unfortunately, I can not give you any recommendations on sound support, as that is way outside my support and interest scopes. Hope this helps, best of luck. Unity Suggestion for a cross-platform C++ 3D game engine/framework Christopher James Jepson replied to skwee's topic in General and Gameplay ProgrammingThat is a seriously good question in my opinion, one of which I would like an answer to as well. My programming experience doesn't fall within games, but I am very much interested in knowing if this does or does not work as a suggestion, and why. Skwee seems to know his stuff, and what he's looking for, as well as why. Getting some feed back from him, if not someone who has worked with Unreal Engine would be nice. Per wiki comments: The current release is Unreal Engine 3, designed for Microsoft'sJavaScript/WebGL (for HTML5). Source: License Proprietary; UDK free for noncommercial us Written in C++ , UnrealScript EDIT: Whoops, looks like I didn't see their were 2 additional pages full of comments after Godmil's post. Sorry about that. Help me with code ideas please? Christopher James Jepson replied to mepis's topic in General and Gameplay ProgrammingGlad to be of some help. Best practice is to always put code in a separate object (that meaning not in the main class). That however does not necessarily mean it will provide the best result though; that being speed and only speed. An example of jamming stuff up in the main class is if you were creating a POJO (Plain Old Java Object) to substitute a output only BASH script in Linux/UNIX. Remember, the String[] args of the main parameter refer to the arguments taken in at the command prompt when typing in java someJavaMainClass. If let's say you created a POJO that was designed to run a sequence of Linux/UNIX commands using the getRunTime() method of the Runtime class (), it may be practical to do so only in the main class. However, it is actually more efficient to do this in a language designed to work at a lower level of support, such as PERL, C/C++. as you are taking a unnecessary leap into a higher level language with the use of the JVM, which is developed largely in C++. It's like avoiding C++ to use C++, which simply doesn't make any sense and uses more memory in the process. The rule of thumb is, if you need to instantiate even one object, then you are using Object Oriented Design, and it is best practice to adhere to it. If you're not, then it's kinda fruitless to use Java unless your environmental situation mandates it. AWT, Swing or Java FX? You should really jump on Java FX if your not currently using it. Java FX will at some point deprecate Swing, and AWT is mostly deprecated already. You should really learn how to use either Swing or Java FX from the source up, but if you ever needed to cheat: Ever hear of Java FX Scenebuilder (). This is typically a debate also found in C++. Both in C++ and in Java, the answer is the same; reader's simplicity. The moment you are referring to a part of your code as a separate English subject, that subject should be in a different class. For Example, if I was coding a game about cats, dogs, birds, and fish, I would have 6 classes: Main (instantiates cat, dog and bird through animal) Animal (abstract) Cat (extends animal) Dog (extends animal) Bird (extends animal) Fish (extends animal) Methods (otherwise referred to as functions in C++) would refer to the characteristics of a class. Here is an example: Cat (extends animal) Sound() (makes a unique sound pertaining to the characteristics of the class defined) Move() (moves in a unique way pertaining to the characteristics of the class defined) Eat() (Eats unique food in a unique way to satisfy its appetite and physical needs) Dog (extends animal) Sound() (makes a unique sound pertaining to the characteristics of the class defined) Move() (moves in a unique way pertaining to the characteristics of the class defined) Eat() (Eats unique food in a unique way to satisfy its appetite and physical needs) Bird (extends animal) Sound() (makes a unique sound pertaining to the characteristics of the class defined) Move() (moves in a unique way pertaining to the characteristics of the class defined) Eat() (Eats unique food in a unique way to satisfy its appetite and physical needs) Fish (extends animal) Sound() (makes a unique sound pertaining to the characteristics of the class defined) Move() (moves in a unique way pertaining to the characteristics of the class defined) Eat() (Eats unique food in a unique way to satisfy its appetite and physical needs) This is essentially important because, let us say that you now needed to consider which animal was a mammal (Dog, Cat), an aves (bird), and a paraphyletic (fish). This would be done using interface as you can extend only once from an abstract. In a more gamer sense, you could see it this way: You have a mage (aka, wizard). You create an abstract of their AD&D class (necromancer, conjurer, enchanter, etc). However, their sortie of spells depend on an alignment, so you also create an alignments (evil, natural, good). You would abstract the wizard's AD&D class, and interface their alignment. The sortie of spells they use could then be considered based off of both the abstraction and inclusion of their associations (e.g., David, the Evil Necromancer [spells control the undead and unleash disease on helpless saps], John, the Good Necromancer [spells control the undead and allow him to siphon his own life force to heal a helpless sap], Brian, the Evil Wizard [spells that control the weather and use it to devastate the land because helpless saps should just be vaporized]). Regardless if the project is large or small, if you don't look at it for a year, or someone else looks at it for you, it becomes extremely difficult to understand it quickly. You are correct. Their is a way to compromise between what you're trying to do and what Java is designed to do. I would recommend researching Singleton, and practicing on how to develop it and around it. For a small application, it's possible to get away with using a C++ practice in Java, but as projects get bigger, you will need to consider object management. A Few Notes: Since we are talking about objects and memory: Since you are creating a GUI: If you create a program that uses String, you are essentially creating an object. Most people know this. However, what most people do not know is every time you change the value of your String object, you are throwing away an object and creating a new String object. To get around this, you can use either StringBuffer (is thread safe) or StringBuilder (is not thread safe) as the object is unchanged when the value changes. If you set an object to NULL and then declare System.gc(), you are likely to have that object destroyed much quicker than if it was just simply left as is. This is again, a good reason to use the try/catch/finally exception handling. In finally, you could create a clean up procedure, and later declare a System.gc() to schedule the destruction of that object you cleaned up. An example of where this could be used in your code is as follows: After the last use of Random rand, use: rand = null; System.gc(); Over all, I see you doing a great job. For the size of your program, I'll assume it's fine enough to remain in the main class. I just don't want to see you get into the bad habit of using object evasion over object management. Hope this helps further. Also, thanks for the feedback. How to install GCC Christopher James Jepson replied to mousetail's topic in Engines and MiddlewareI. Help me with code ideas please? Christopher James Jepson replied to mepis's topic in General and Gameplay ProgrammingNice code Mepis. The first thing I see where you could improve on is putting this code in a class, not in a main method. the mole is correct. But this is also more the reason why it should not be in main though. Their is something called the default constructor, which the JVM (Java Virtual Machine) creates for you when instantiating classes. It is used to assign all primitives with a 0, 0.0 or null value. However, if you needed to override or overload the default constructor, you can. An example would be like this: File 1. public class Main{ public static void main(String[] args){ ClassExample ce1; = new ClassExample(); // Uses default constructor. int[][] maze = new int[40][40]; ClassExample ce2; = new ClassExample(0, 0, 0, 0, maze); // Overloads default constructor. } } File 2 import java.util.Random; public class ClassExample { // Your variables should look like this. private final int ROWSIZE = 40; private final int COLUMNSIZE = 40; private int[][] maze; private int randX; private int randY; private int choke; private int checkSurroundings; /*If you want to define a default value on creation of the class, you then want to override the default constructor like this. */ public ClassExample() { randX = 0; randY = 0; choke = 0; checkSurroundings = 0; maze = new int[ROWSIZE][COLUMNSIZE]; // Will create instantiate on instantiate. } /*This will allow you to do the same thing as above but also allow you to override the default values assigned in the default constructor for reasons relating to making the code scalable. Remember, if you want to overload the default constructor, you need to override the default constructor first! */ public ClassExample(int x, int y, int choke, int cS, int[][] maze){ this.randX = x; this.randY = y; this.choke = choke; this.checkSurroundings = cS; this.maze = maze; } // <-- Put your logic methods here. // Setters/Getters for all private class primatives. public int[][] getMaze() { return maze; } public void setMaze(int[][] maze) { this.maze = maze; } public int getRandX() { return randX; } public void setRandX(int randX) { this.randX = randX; } public int getRandY() { return randY; } public void setRandY(int randY) { this.randY = randY; } public int getChoke() { return choke; } public void setChoke(int choke) { this.choke = choke; } public int getCheckSurroundings() { return checkSurroundings; } public void setCheckSurroundings(int checkSurroundings) { this.checkSurroundings = checkSurroundings; } // Getters only for final primatives public int getROWSIZE() { return ROWSIZE; } public int getCOLUMNSIZE() { return COLUMNSIZE; } } Also, remember that static means global. You must assign a static value to any variable called outside of a method in the main class because of runtime limitations. This is useful for some reasons, but not in the way you are using it. If you create those variables in a sub class to main, they do not need to be static as seen in the example above. Additionally, remember that you can use getters for final (constant) variables too, you just can't use setters because they are final. Another good reason for using a subclass like this for the sake of being able to instantiate it multiple times. For example, if you had a game with two players, and it was a race to see who could get through the maze faster, but one of the players was very experienced with your game, you may need to create a handicap for the more experienced. In this instance, you may want to overload the default constructor to make the maze size larger, thus creating two maze objects; one for each player with the larger maze going to the experienced player. Additionally, if you have to reuse the values stored in the instantiated objects, the values could remain the same since you could overload the default constructor. Or, if you needed to reset their values, you could also do this by instantiating with the default constructor. And finally, because the primitives are not static, their's less chance of exploitation. The above example are key points of overloading, overriding, encapsulation and polymorphism. One more note: Try to get in the habit of using try/catch/finally. Here's an example: public boolean Game(){ try { // <-- Put your logic here. // Example if only if(true){ System.out.println(maze[500][500]); // Will create a null exception! } return true; } catch (Exception e) { // Will attempt to catch exception and keep application running. e.printStackTrace(); // Will tell you what happened. return false; } } The key reasons for using try/catch/finally is it keeps the application running even if their is an exception, and it also makes your life easier to track down the bug. This greatly improves on quality assurance. Hope this helps. Please feel free to add on to this in any way needed, or correct anywhere I am wrong. Thanks! Does anyone try to Break their Game? Christopher James Jepson replied to Tutorial Doctor's topic in General and Gameplay ProgrammingLOL I was curious about that. It's cool. Thanks much for letting me know. At least you got what you needed, so I'm happy about that. Java Android Application Direction Requested... Christopher James Jepson posted a topic in Networking and MultiplayerHello everyone. Thank you for taking the time and interest in reading this thread. I've broken down the opening topic of this thread for easier reading. Before I get started, let me give you a very brief background about myself. About me: I'm a Java Application Developer. I focus primarily on the business logic of a multitier (client-server) architecture: About what I'm doing which is related to this thread: I'm gearing up to make a cross platform Instant Messenger Client & Server. This is purely for educational reasons. I am confident I have what I need to commit myself for this project on a Windows/Linux/Mac environment. Where I could use your help: I would like (preferably from someone who's already done it) to give me an idea of what I will need to successfully create a client to server application on an Android mobile device. There are some requirements, so in finer clarity: The APIs must be scalable with JOGL for later projects. The APIs must be easily integrated with Eclipse IDE (no Netbeans IDE please). I want to avoid Game Engines like jMonkey as these remove the education value of what I'm aiming for. I would like something that's standard for businesses, like Java ME for starters. I need to know the business model just like mine listed above for multitier architecture. Conclusion: If anyone could answer these questions, you would be doing me a great service, and saving me hours if not days of doing the research myself. If I'm going to recreate the wheal, I really want to confine it in the development, and not the designing aspect of SDLC. Many thanks in advance. Oh, and please... while I have respect for other languages, please do not make comments about using a different language. Thank you! NOTE: I am not in any rush for an answer. With that said, if someone is trying to research the design aspect of this request, and than implement it to verify and for their own learning before answering this inquiry, than kudos to you. Does anyone try to Break their Game? Christopher James Jepson replied to Tutorial Doctor's topic in General and Gameplay ProgrammingGlad.
https://www.gamedev.net/profile/208143-subtle-wonders/?tab=topics
CC-MAIN-2018-05
en
refinedweb
What's New in Windows 10 for developers, build 16299 Windows 10 build 16299 (also known as the Fall Creators Update or version 17 collection of new and improved features and guidance of interest to Windows developers in this release. For a full list of new namespaces added to the Windows SDK, see the Windows 10 build 16299 API changes. For more information on the highlighted features of Windows 10, see What's cool in Windows 10. In addition, see Windows Developer Platform features for a high-level overview of both past and future additions to the Windows platform. Design & UI Gaming Develop Windows apps Publish & Monetize Windows apps The features in this section have been added since the release of the previous version of Windows, 1703. They are available to all Windows developers and do not require the updated SDK. Samples Lunch Scheduler The Lunch Scheduler sample schedules lunches with your friends and coworkers. You create a lunch, invite friends to a restaurant of interest, and the app takes care of the lunch management for all involved parties. This app highlights the following: - Demonstrates integration with services like Facebook, Microsoft Graph for authentication, graph-based operations, and friends discovery. - Works with Yelp and Bing maps for restaurant recommendations. - Incorporates elements of the Fluent Design System in a UWP app including acrylic, reveal, and connected animations. Quiz Game The Quiz Game App (Remote System Sessions API) sample demonstrates how to use the Remote System Sessions API in the context of a quiz game scenario. A host sends the questions to the proximal devices and the participants the answer the questions on their own devices. The Remote System Sessions API allows a device to host a session that is discoverable by other devices that are nearby. They can then join this session, and send messages to the host and other participants.
https://docs.microsoft.com/en-us/windows/uwp/whats-new/windows-10-build-16299
CC-MAIN-2018-05
en
refinedweb
Free for PREMIUM members Submit Want to protect your cyber security and still get fast solutions? Ask a secure question today.Go Premium #include <wchar.h> #include <malloc.h> #include <locale.h> #include <stdlib.h> #include <stdio.h> int main() { _wsetlocale(LC_ALL, L"arabic"); const char* text = "Arabic text"; wchar_t* p = (wchar_t*)malloc(200); p[0] = 0; mbstowcs(p, text, 200); _wsetlocale(LC_ALL, L""); free(p); return 0; } Select all Open in new window Modern healthcare requires a modern cloud. View this brief video to understand how the Concerto Cloud for Healthcare can help your organization. Sign up to receive Decoded, a new monthly digest with product updates, feature release info, continuing education opportunities, and more. A2T depends on the project settings: if there is Unicode character set selected, this A2T will convert the text to the wide characters. Windows likes only Unicode (or their version of Unicode). Better to keep all translations/localization in Unicode and do not play with these A2W, mbstowcs, MultiByteToWideChar,... Open in new window Modern healthcare requires a modern cloud. View this brief video to understand how the Concerto Cloud for Healthcare can help your organization. the encode that come from internet page have windows-1256 encoding pgnatyuk: i used in the properties of the project unicode Anyway A2T is not a solution. You can try to use your A2T with #pragma setlocale("arabic"), but I don't think it will be an acceptable solution for you. Ok, that seems to imply the narrow encoding is ANSI not UTF8 so I would follow what pgnatyuk is suggesting. Just because something is wide does not mean it is Unicode. Just because something is narrow don't mean it is not Unicode! UTF8 and UTF16 are both Unicode transformation formats and both represent Unicode, the former is narrow and the latter is wide. hen Microsoft speak of Unicode they are referring to UTF16 and when they speak of non-Unicode they are (usually) referring to ANSI. If you want to convert from narrow to wide you MUST know what the encoding of the narrow form is otherwise your conversion will not behave as you expect. This was the reason for my original question. This is slightly off topic but I get very vexed at how Microsoft confuse people with their redefinition of terminology! :) This multibyte character set from Microsoft is totally useless. They can rename it in simplified English. :) i used setlocal but it still not ok you can see the image to see the problem in arabic text thank you asas.png If you already took one line, why you cannot apply wcstombs? How I see it should be something like _wsetlocale(LC_ALL, L"arabic"); TCHAR* urlcnw = (wchar_t*)malloc(1024); urlcnw[0] = 0; mbstowcs(urlscnw, urlcn, 1000); Do not forget to delete this urlcnw when you will not need it. It is correct for your code with A2T too. At least this system font used in the VS debugger can show the Arabic text.
https://www.experts-exchange.com/questions/26414400/when-i-use-this-function-in-c-the-text-have-arbic-it-destroied.html
CC-MAIN-2018-05
en
refinedweb
#include <hallo.h> * Jon Dowland [Thu, Oct 28 2004, 12:33:47PM]: > > In case you don't need to run 3d programs, using nv will be fine for you > > ...provided you have a reasonably modern computer or a small > resolution display. On my AMD K6-2 450mhz, the nv driver is unbearably > slow at 1280x1024. I don't think the CPU is your problem. Do you have a TNT-2 or older Nvidia chip? The XVideo support for them is not complete, which means taht watching videos becomes _very_ CPU intensive. Although, the nvidia drivers does support XVideo at TNT-2. With modern Nvidia cards, I did not notice any big difference except of GLX support. And the NV driver is less buggy WRT power saving system modes. Regards, Eduard. -- <martoss> hmm, naja, aber kde 2 ist nicht wirklich prickelnd <weasel> martoss: wenn du was prickelndes willst, trink mineralwasser <martoss> weasel: :-), ich weiss, aber es muss halt auch ein bischen was fürs auge da sein... <youam> martoss: dann tu nen strohhalm rein! :)
https://lists.debian.org/debian-user/2004/10/msg03099.html
CC-MAIN-2018-05
en
refinedweb
NaroCAD 1.0 Released NaroCAD opensource project has reached version 1.0 The most important technological point of view of NaroCAD is that is built on top of C#/.NET and uses OpenCascade 6.3 for all work involved in visualization, modelling. The OCAF like layer is rewritten in C# for maintenance reasons as is pretty hard to work with two debuggers, one to debug C++ OpenCascade code and the other side the C#. Also we would like to thanks to OpenCascade team, elsewhere this project would not be ever possible! Release announcement: is there any kind of documentation or support forum? because after working with open source for some time i understood one simple idea: any even the most cool project is useless as a naked piece of code! (im looking on the project from the developer's point if view, not just as user) The documentation is generated from Doxygene, but most topics are discussed on blog as topic item (even are not related with daily subject). We are open to both way contributions. Entery needs to add STEP support to wrappers (we wrap only 2200 classes from around 5000 !? of OpenCascade) as he need it, we help him and we integrate in oficial Naro wrappers. At least when the interest is in both ways (like: we integrate IronPython scripting, but we don't expose all API to the script code, so if you will need this support in your NaroCAD based project/extension, etc.) we will like to help you as long is somehow integrated. There is SourceForge forum, but as far as it is, we had only one post as long as it seem. So for any specific developer question we are happy to answer. Also, if you think that a part should be improved, we welcome very much this kind of feedback. So, put any answer on blog. I will want to rephrase the last words: - every time we implement a new feature (or we work on it) we write on blog. So you can get an insight (high level) of what was happening. If you combine with SVN log, you may get an idea what was happening for real. Also you can post your questions on blog and we will answer as we can. But we will prefer the ones that really contribute in a way to NaroCAD. Because we are two developers and we do in our short time, is hard to make everything and to answer to any question. So we may prefer specific questions for the same reason. We cannot answer OpenCascade issues for example, this forum is much useful for this. - if you are interested in specific questions and the documentation that comes with project is not good for you, please feedback on blog. At least in this way we can know what are your specific needs and we can settle them - NaroCAD works on top of OpenCascade, and for specific tasks you will need still to know OpenCascade. But a much fewer things than regular programming with it. Most programming use cases, like adding a new shape to document (to "OCAF"), layers, etc. were made hopefully to be in a C# way, and with clean design. So feel free to ask on presented blog for NaroCAD specific questions and still remain loyal to OCC forums for your problems with OpenCascade. Hi Ciprian, I just downloaded and tested your first release of NaroCAD. Congratulations, it's an impressive work. I've however failed to run and test the scripting part of the software. The blank text zone doesn't answer to any keyboard entry/mouse click, and open/save/execute buttons seems disabled. Did I miss something? Thomas Hi Thomas, It works, but there was a bug in layout that was fixed right away. The edit text window have integrated IronPython code. To find the box enough large, you should press some enters to see actually that this edit box works. To see that it really works, write this code and press execute: clr.AddReference("System.Windows.Forms") from System.Windows.Forms import * MessageBox.Show("Hello from NaroCAD") The full topic is here: Now is version 1.2 It expose more of OpenCascade functionality. You can download it from here: The version was bumped to 1.5.1 and it is a fairly stable release. You can look more about announcement from here:
https://www.opencascade.com/content/narocad-10-released
CC-MAIN-2018-05
en
refinedweb
#include <sysc/kernel/sc_spawn.h> This templated helper class allows an object to provide the execution semantics for a process via its () operator. An instance of the supplied execution object will be kept to provide the semantics when the process is scheduled for execution. The () operator does not return a value. An example of an object that might be used for this helper function would be void SC_BOOST bound function or method. This class is derived from sc_process_host and overloads sc_process_host::semantics to provide the actual semantic content. sc_spawn_object(T object, const char* name, const sc_spawn_options* opt_p) This is the object instance constructor for this class. It makes a copy of the supplied object. The tp_call constructor is called with an indication that this object instance should be reclaimed when execution completes. object = object whose () operator will be called to provide the process semantics. name_p = optional name for object instance, or zero. opt_p -> spawn options or zero. virtual void semantics() This virtual method provides the execution semantics for its process. It performs a () operation on m_object. Definition at line 75 of file sc_spawn.h. Definition at line 77 of file sc_spawn.h. Definition at line 81 of file sc_spawn.h. Definition at line 87 of file sc_spawn.h.
http://www.cecs.uci.edu/~doemer/risc/v030/html_oopsc/a00198.html
CC-MAIN-2018-05
en
refinedweb
interface for antenna radiation pattern models More... #include "antenna-model.h" interface for antenna radiation pattern models Introspection did not find any typical Config paths. This class provides an interface for the definition of antenna radiation pattern models. This interface is based on the use of spherical coordinates, in particular of the azimuth and inclination angles. This choice is the one proposed "Antenna Theory - Analysis and Design", C.A. Balanis, Wiley, 2nd Ed., see in particular section 2.2 "Radiation pattern". No Attributes are defined for this type. No TraceSources are defined for this type. Size of this type is 32 bytes (on a 64-bit architecture). Definition at line 44 of file antenna-model.h. Definition at line 34 of file antenna-model.cc. Definition at line 38 of file antenna-model.cc. this method is expected to be re-implemented by each antenna model Implemented in ns3::ParabolicAntennaModel, ns3::CosineAntennaModel, and ns3::IsotropicAntennaModel. Definition at line 43 of file antenna-model.cc. References ns3::TypeId::SetParent().
https://www.nsnam.org/docs/release/3.27/doxygen/classns3_1_1_antenna_model.html
CC-MAIN-2018-05
en
refinedweb
LeakDB 0.1 LeakDB is a very simple and fast key value store for Python Why ? For the fun o/ Overview LeakDB is a very simple and fast key value store for Python. All data is stored in memory and the persistence is defined by the user. A max queue size can be defined for a auto-flush. API >>> from leakdb import PersistentQueueStorage >>> leak = PersistentQueueStorage(filename='/tmp/foobar.db') # set the value of a key >>> leak.set('bar', {'foo': 'bar'}) >>> leak.set('foo', 2, key_prefix='bar_') # increment a key >>> leak.incr(key='bar_foo', delta=5) 7 >>> leak.incr(key='foobar', initial_value=1000) 1000 # looks up multiple keys >>> leak.get_multi(keys=['bar', 'foobar']) {u'foobar': 1000, u'bar': {u'foo': u'bar'}} # ensure changes are sent to disk >>> print leak /tmp/foobar.db 12288 bytes :: 3 items in queue :: 3 items in storage memory >>> leak.flush(force=True) /tmp/foobar.db 12338 bytes :: 0 items in queue :: 3 items in storage memory >>> leak.close() STORAGE - DefaultStorage :: The default storage, all API operations are implemented set set_multi incr decr get_multi delete - QueueStorage :: Use the DefaultStorage with a queue. You can override the QueueStorage.worker_process method and make what you want when the flush method is called. from leakdb import QueueStorage class MyQueueStorage(QueueStorage): def worker_process(self, item): """ Default action execute by each worker. Must return a True statement to remove the item, otherwise the worker put the item into the queue. """ logger.info('process item :: {}'.format(item)) return True - PersistentStorage :: Use the DefaultStorage, otherwise each operation is stored through the shelve module. - PersistentQueueStorage :: Use the QueueStorage and the PersistentStorage. # see also the API part from leakdb import PersistentQueueStorage storage = PersistentQueueStorage(filename="/tmp/foobar.db", maxsize=1, workers=1) # the queue is auto-flush, each operations check the queue size storage.set('foo', 1) TODO - finish the transport layer through zeroMQ - cleanup the code - write the unittests - write a CLI - benchmark each storage - Downloads (All Versions): - 1 downloads in the last day - 55 downloads in the last week - 210 downloads in the last month - Author: Lujeni - Categories - Package Index Owner: lujeni - DOAP record: LeakDB-0.1.xml
https://pypi.python.org/pypi/LeakDB/0.1
CC-MAIN-2015-40
en
refinedweb
Here's the discussion on this topic in the mailing list: I've checked the behavior of Toolkit.getLockingKeyState()/setLockingState() on Windows XP on RI and found that those methods work weirdly. Here's a simple test demonstrating RI behavior: import java.awt.Toolkit; import java.awt.event.KeyEvent; public class Test { public static void main(String[] args) { try catch (Throwable e){ e.printStackTrace(); } } } If CapsLock was OFF at the application start, the output is as follows (comments denote what happens at that point on the keyboard): false true false 1. Turning CapsLock ON // Light goes ON false 2. Turning CapsLock OFF false 3. Turning CapsLock ON // Light goes OFF false 4. Turning CapsLock OFF false If start CapsLock was ON at the application, the ouput is as follows: true true false 1. Turning CapsLock ON // Light goes OFF true 2. Turning CapsLock OFF true 3. Turning CapsLock ON // Light goes ON true 4. Turning CapsLock OFF true In other words, the following statements seem true about the RI operation: - getLockingKeyState() returns the state of the key at the application start, calls to setLockingKeyState() and pressing the key on the keyboard do not affect it. - If the key was OFF at the application start, setLockingKeyState(false) does nothing, and setLockingKeyState(true) toggles the actual state (light on the keyboard changes state and typing (in other window) indicates the state changes immediately). - If the key was ON at the application start, setLockingKeyState(true) does nothing, and setLockingKeyState(false) toggles the actual state. This behavior clearly contradicts the specification, but I suspect it's grounded by some Windows API peculiarities. I'm not sure if we should follow specification or RI behavior, but investigation of Windows API capabilities in this area is surely required before we could make the right decision. As far as I understood setLockingState works OK on RI but getLocking state returns the key state at the start up. Right? I've tried the test on Linux, it seems to throw UnsupportedOperationException on any call to getLockingKeyState()/setLockingState() - in other words, it seems RI doesn't support this functionality on Linux. In fact, the problem turns out to be much more complex. I searched the Java.Sun.Com site for related information and found a number of them that are useful. General idea is investigating the RI behavior to provide the proper compatibility in this area is a separate non-simple task. We can implement the spec. This is not complicated on Windows. I've not investigated the issue on Linux.. So what are the problems from Windows API and Linux API sides which are prevents us from getting the state of CAPS, NUM, SCROLL and KANA LOCK keys?. Here is Toolkit.getLockingKeyState/setLockingKeyState implementation for Windows. Toolkit class uses static LockingState.getInstance() method to get platform-specific successor of base class LockingState. Windows successor WinLockingState is implemented; LinuxLockingState is added also but its methods do nothing. Note: keybd_event function is used to generate keyboard events for toggling keys. MSDN says that SendInput should be used instead, but using SendInput involves raising minimal Windows version to 0x0500 and requires changes in build files. Patch was updated: removed some copy-paste errors from LinuxLockingState class. Some improvements were made in suggested patch. Calls to keybd_event were replaces with calls to obtained pointers Attached 4423_win_nosearch.patch - it's another version of patch in which native code doesn't look for Win32 API functions through user32 library, but relies on natural dynamic linking. Wow, Ilya, this patch is great! It seems to work even better than RI does! Thank you! Just a small note - when throwing UnsupportedOperationException (when Kana key is absent, for example), RI adds a detalization message, like this: java.lang.UnsupportedOperationException: Keyboard doesn't have requested key Probably we should do the same. Vasily, I thought about it, but as far as I can see AWT usually uses strings like "Messages.getString("awt.XXX"))" for exceptions. Unfortunately, I don't know where to add appropriate string. If so, and if you know where to add string into string table, you can prepare additional patch for Toolkit.java, which will apply over both '4423_win.patch' and '4423_win_nosearch.patch'. If adding simple string right into java code is acceptable, I'll update patches. Ilya, Messages.getString("awt.XXX")) reads the messages from awt/src/main/java/common/org/apache/harmony/awt/internal/nls/messages.properties. You can put all messages you need to this file. Attaching updated patches with added message in UnsupportedOperationException. The message added is awt.29A="Keyboard doesn't have KANA key", because this exception is thrown for KANA only. Ilya, please note that UnsupportedOperationException is also thrown if totally wrong key number is provided or the code is run in Headless mode. Probably in these cases the diagnostics should be different. Vasily, thanks. I'm going to provide updated patch for Windows in next few hours, I'll take your note into account With new improved implementation I've got wrong behavior So now I'm trying to locate an issue. Regarding exception throwing: by the spec, for wrong key these methods should throw IllegalArgumentException, and for headless mode they should throw HeadlessException. So I'll leave exception message as it, and I'll use string in code (not from table), because UnsupportedOperationException will be thrown from native code. Oh, I see, then it's fine. The only thing, after the patch is complete, we should not forget to file any non-bug differences our implementation would have with RI. Latest "4423_win.patch" is implementation for Windows with several performance and structure improvements. Ilya, thanks for the patch. I've slightly modified your patch... I've made WinWTK.getLockingState and WinWTK.setLockingState methods native. Vasily, please verify that the patch works as expected.. With the patch above applied, the additional workaround patch to java.awt.Toolkit that is used in jEdit automated GUI tests (see HARMONY-3633) becomes useless and may be removed. The attached Harmony-4423-jEdit.patch patch removes all extra stuff needed for that workaround from jEdit tests framework. The patch removes the bt-2/tests/jedit_test/src/patches/harmony directory and updates build.xml, build.properties and readme.txt accordingly. Oh, I've found that similar issue for Linux is filed as HARMONY-4636. As of obsolete workaround in jEdit automated GUI tests, I've created a new JIRA, HARMONY-4659, and moved the patch there. Now this issue may be closed. The attached patch unconditionally returns false, instead of throwing exception.
https://issues.apache.org/jira/browse/HARMONY-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
CC-MAIN-2015-40
en
refinedweb
Details Description Issue Links Activity - All - Work Log - History - Activity - Transitions Refactoring the XmlUpdateRequestHandler to use constant variables that can be reused by the Stax implementation. Adding a stax implementation for the XmlUpdateRequestHandler. Till now I get an error about missing content stream. NOTE: To make the version compile you need to download the JSR 173 API from and copy it to $SOLR_HOME/lib/. It seems the diff does not show the other libs you need to compile. You can download them from: Fixing bugs from first version. Adding workaround for problem with direct use of the handler (never gets a stream). by patching the SolrUpdateServlet Please test, it works fine for me. @Larrea 1) standards-based 2) agree 3) agree 4) agree StAX is become a standard. Not as fast as SAX but nearly. IMO the StAX implementation is as easy to follow as the xpp, personally I think even easier. Thorsten - this looks good. I cleaned it up a bit and modified it to use SOLR-139. The big changes I made are: - It uses two spaces (not tabs or 4 spaces) - It overwrites the existing XmlUpdateRequestHandler rather then adding a parallel one. (We should either use StAX or XPP, but not both) - It breaks out the xml parsing so that parsing a single document is an easily testable chunk: SolrDocument readDoc(XMLStreamReader parser) - It adds a test to make sure it reads documents correctly - Since it is the XmlUpdateRequestHandler all the other tests that insert documents use it.. fixed the document parser to handle fields with CDATA. switch (event) { // Add everything to the text case XMLStreamConstants.SPACE: case XMLStreamConstants.CDATA: case XMLStreamConstants.CHARACTERS: text.append( parser.getText() ); break; ... What is missing with this issue, where can I give a helping had. >> Solr should assume UTF-8 encoding unless the contentType says otherwise. > > In general yes (when Solr is asked for a Reader). > For XML, we should probably give the parser an InputStream. > > Extracts the request parsing and update handling into two parts. This adds an "UpdateRequestProcessor" that handles the actual updating. This offers a good place for authentication / document transformation etc. This can all be reuse if we have a JSONUpdate handler. The UpdateRequestProcessor can be changed using an init param in solrconfig,xml: <requestHandler name="/update" class="solr.XmlUpdateRequestHandler" > <str name="update.processor.class">org.apache.solr.handler.UpdateRequestProcessor</str> </requestHandler> Moved the XPP version to XppUpdateRequestHandler and mapped it to: <requestHandler name="/update/xpp" class="solr.XppUpdateRequestHandler" /> My initial (not accurate) tests don't show any significant time difference between the two – we should keep both in the code until we are confident the new one is stable. - - - - - Thorsten - can you check if the STAX includes are all in good shape? Is it ok to use: import javanet.staxutils.BaseXMLInputFactory; dooh – wrong issue this is the default implementation since r552198 It would be useful if there first were some consensus as to what the goals are for making a change to the XML Update Handler; some possibilities I can think of include: 1) To use standards-based rather than non-standards-based technologies as much as possible 2) To use as few different XML technologies (and coding styles related to the technology) as possible 3) To reduce as much as possible the complexity of code needed for interpreting XML command and/or configuration streams 4) To lower resource consumption and limitations for XML handling, e.g. stream-based rather than random-access By all means add to that list, prioritize, and remove goals which are not seen as important. Then it seems to me the question would be how many of those goals are addressed by changing XML Update Handler to stAX, vs. other technologies. One might at the same time also want to look at other places where SOLR decodes XML such as config files, to see if there can be more commonality rather than continued isolation.
https://issues.apache.org/jira/browse/SOLR-133?focusedCommentId=12486195&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2015-40
en
refinedweb
XmlArrayAttribute.ElementName Property Silverlight Gets or sets the XML element name given to the serialized array. Namespace: System.Xml.SerializationNamespace: System.Xml.Serialization information aboutusing namespaces and creating prefixed names in the XML document, see XmlSerializerNamespaces. For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers. Show:
https://msdn.microsoft.com/en-us/library/system.xml.serialization.xmlarrayattribute.elementname(v=vs.95)
CC-MAIN-2015-40
en
refinedweb
ichael Fitzgerald is the author of Learning XSLT. I know what you're up against. You've just inherited a new project at work that requires you to learn XSLT, but you don't have a clue where to start. If that's your problem, this article should give you a leg up over the wall. It will quickly cover five basics of XSLT found in the first chapter of Learning XSLT, O'Reilly's new hands-on guide to get you using XSLT with XPath by close of business today. Extensible Stylesheet Language Transformations or XSLT is a language that allows you to transform XML documents into XML, HTML, XHTML, or plain text documents. It relies on a companion technology called XPath. XPath helps XSLT identify and find nodes in XML documents; nodes are things like elements, attributes, and other objects in XML. With XSLT and XPath, you can do things like transform an XML document into HTML or XHTML so it will easily display in a web browser; convert from one XML markup vocabulary to another, such as from Docbook to XHTML (see); extract plain text out of an XML document for use in some other application, like a text editor; or build new Spanish language document by pulling and repurposing all the Spanish text from a multilingual XML document. This is only a start of what you can do with XSLT. Now that you know what it is, it's time to learn how it works. The quickest way to get you acquainted with how XSLT works is through a simple example. Consider this ridiculously brief XML document contained in a file I'll call msg.xml: <msg/> There isn't much to this document, but it's legal, well-formed XML: just a single, empty element tag with no content (that is, nothing between a pair of tags). For our purposes, it's the source document for the XSLT processing we'll do in a minute. Now you can use the very simple XSLT stylesheet msg.xsl to transform msg.xml: <stylesheet version="1.0" xmlns=""> <output method="text"/> <template match="msg">Found it!</template> </stylesheet> You'll notice that XSLT is written in XML. This allows you to use some of the same tools to process XSLT stylesheets that you would use to process other XML documents. Nice. The first element (start tag, really) in msg.xsl is <stylesheet version="1.0" xmlns=""> This is the document element for stylesheet, one of two possible document elements in XSLT. The other possible document element is transform, which is actually just a synonym for stylesheet. You can use one or the other. The version attribute in stylesheet is required, along with its value of 1.0. (We're only dealing with version 1.0 of XSLT here.) transform stylesheet version The attribute xmlns on stylesheet is a special attribute for declaring a namespace. It's value is, which is the official namespace for XSLT. An XSLT stylesheet must always have such a namespace declaration in order for it to work. (XSLT stylesheets usually use the xsl prefix, as in xsl:stylesheet, but I am setting the prefix aside for simplicity at the moment. You'll want to use xsl when your stylesheets get only slightly more complex.) xmlns xsl prefix xsl:stylesheet xsl The stylesheet element is followed by the output element which is optional. The value text for the method attribute signals that you want the output of the stylesheet to just be plain text: output text method <output method="text"/> Two other possible values for method in XSLT 1.0 are xml and html. (The output element actually has ten attributes, all of which are optional.) xml html The next element in msg.xsl is the template element. This element is at the heart of what XSLT does. A template rule consists of two parts: a pattern, such as an XML element in the source document that you're trying to match, and a sequence of instructions. The match attribute of template contains a pattern, a location path in XPath. The pattern in this example is the name of the msg element: template match msg <template match="msg">Found it!</template> XPath syntax always appears in attribute values, as in the value of match. The sequence of instructions (sometimes called a sequence constructor) contains only the literal text Found it!. Sequence instructions tells an XSLT processor what you want to have happen when the pattern is found in the source. Using this stylesheet, when msg is found in the source by an XSLT processor, it will output the text Found it!. When a template executes its instructions, that template is said to be instantiated. To make this happen, you need an XSLT processor. Found it! An XSLT processor processes a source document with an XSLT stylesheet, producing an output or result. There are lots of free XSLT processors available for download on the web. I'll mention a couple. Michael Kay's free Instant Saxon (saxon.exe) runs on the Windows command line. Download it from prdownloads.sourceforge.net/saxon/instant_saxon6_5_3.zip. (If the link fails, just try saxon.sourceforge.net). Unzip the file in some directory on your Windows box. Assuming that you have created and saved the files msg.xml and msg.xsl discussed earlier in the same spot that you unzipped saxon.exe, you can run Instant Saxon from the Windows command line like this: saxon msg.xml msg.xsl This command will process msg.xml against the stylesheet msg.xsl and produce the simple result: If you prefer a graphical application, Architag offers a free, graphical XML editor with XSLT processing capability called xRay2. It is available for download from. Like Instant Saxon, xRay2 runs only on the Windows platform. Assuming that you have successfully downloaded and installed it, launch xRay2 and open the file msg.xml and then open the file msg.xsl. Now select New XSLT Transform from the File menu. In the XML Document pull-down menu, select msg.xml, and in the XSLT Program pull-down menu, select msg.xsl (if it is not already checked, check Auto-update). The result of the transformation should appear in the transform window of the application. If you are using the Linux operating system or some other Unix flavor, you can run Apache's XSLT processor Xalan C++ (works on Windows, too). In order to run Xalan, you also need the C++ version of Xerces, Apache's XML parser. You can find both Xalan C++ and Xerces C++ on xml.apache.org. After downloading and installing them (follow instructions on the Apache site), you need to make sure that Xalan and Xerces are in your execution path. Now type the following line in a Unix shell window or at a Windows command prompt: xalan msg.xml msg.xsl If successful, the following results should be printed on your screen: An XSLT processor is probably readily available to you on your computer desktop in the form of a web browser: Microsoft Internet Explorer (IE) Version 6, Netscape Navigator (Netscape) Version 7.1, Mozilla Version 1.4, or Mozilla Firebird 0.7. Each of these browsers has client-side XSLT processing ability already built into them. The way to apply an XSLT stylesheet like msg.xsl to the document msg.xml in a browser is by using a processing instruction. A processing instruction (PI) allows you to include instructions for an application in an XML document. You can see a processing instruction in a slightly altered version of msg.xml, which I call msg-pi.xml: <?xml-stylesheet href="msg.xsl" type="text/xsl"?> <msg/> The XML stylesheet PI should always come before the first element in the document (the document element msg in msg-pi.xml). The purpose of this PI is similar to one of the purposes of the link tag in HTML, that is, to associate a stylesheet with the document. Save msg-pi.xml in a text file with the other files. If you open msg-pi.xml in one of the browsers I mentioned, the built-in XSLT processor in the browser will write the string Found it! on the browser's canvas or rendering space. link XSLT has a hobgoblin of sorts. It's a feature know as built-in templates. Built-in templates automatically find nodes that are not specifically matched by a template rule, so you can sometimes get results from an XSLT stylesheet that you're not expecting. These built-in templates automatically find text (among other things) in the XML source when no explicit template matches that text. This can rattle your nerves at first, but you'll get comfortable with them soon enough. I'll illustrate an instance where the built-in template matches text in an XML document. The file hobgoblin.xml contains a bit of text in the element msg: <msg>Spooky!</msg> To trigger the built-in template for text, the dull-witted stylesheet hobgoblin.xsl will do the trick: <stylesheet version="1.0" xmlns=""> <output method="text"/> </stylesheet> Apply hobgoblin.xsl to hobgoblin.xml with Instant Saxon using this command: saxon hobgoblin.xml hobgoblin.xsl And you will get the following result: Spooky! Even though hobgoblin.xsl does not contain a template rule, Instant Saxon found the text Spooky! in the msg element by default using a built-in template rule. That covers five basics of XSLT 1.0. This article is only a starting point to get you rolling. There is much, much more to learn about XSLT. Of course, Learning XSLT can help you out there. For resources and news for XSLT from the W3C, go to. If you're brave enough to read the specs, go to and to learn more about XSLT 1.0 and XPath 1.0. (Versions 2.0 of these specs are in the last stages of development and are found at and.) You can search the archives of XSL-List (an XSLT mail list hosted by Mulberry Technologies, Inc.) at or join the list at. Wherever you go with XSLT, or wherever it takes you, best of luck. O'Reilly & Associates recently released (November 2003) Learning XSLT. Sample Chapter 2, Building New Documents with XSLT , is available free online. You can also look at the Table of Contents, the Index, and the Full Description of the book. For more information, or to order the book, © , O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.xml.com/pub/a/2003/11/26/learnXSLT.html
CC-MAIN-2015-40
en
refinedweb
CodeChef submission 788939 (C++ 4.3.2) plaintext list. Status: WA, problem LUCKY3, contest JAN12. By scientist1642 (scientist1642), 2012-01-11 14:00:43.#include <algorithm>#include <iostream>#include <sstream>#include <string>#include <vector>#include <queue>#include <set>#include <map>#include <cstdio>#include <cstdlib>#include <cctype>#include <cmath>using namespace std;const int INF = 1000000009;const double PI = acos(-1.0);const double eps = 1e-8;const int MAXN =0;const int MAXM =0; int T, n, i, j, k, mask,p;long long dp[52][1 << 11];int nmask[52];string num[52];long long ans;string s;vector<string > G;void rec(string s){ G.push_back(s); if (s.size() == 9) return; rec(s+"7"); rec(s+"4");}void count(){ for (p = 0; p < G.size(); p++) { s = G[p]; reverse(s.begin(),s.end()); int l = s.length(), pos; for (pos = 0; pos <= n; pos ++) { for (i = 0; i < (1 << 9); i++) dp[pos][i] =0; } dp[0][0] = 1; bool flag; int k = 0, finalMask =0; for (i = 0; i < n; i++) { flag = true; if (num[i].size() > s.size()) continue; nmask[k] = 0; for (pos = 0; pos <num[i].size(); pos ++) { if (num[i][pos] == s[pos]) nmask[k] = nmask[k] | (1 << pos); if (num[i][pos] > s[pos]) flag = false; } if (flag) k++; } dp[0][0] = 1; for (i = 0; i < k; i++) for (mask = 0; mask < (1 <<l); mask++) { dp[i+1][nmask[i] | mask] +=dp[i][mask]; dp[i+1][mask] += dp[i][mask]; } ans += dp[k][(1<<l)-1]; }}int main(){ // freopen("input.txt","r",stdin); //freopen("output.txt","w",stdout); cin >> T; rec("4"); rec("7"); while (T --) { ans = 0; cin >> n; for (i = 0; i < n; i ++) cin >> num[i]; count(); cout << ans << endl; }.
https://www.codechef.com/viewsolution/788939
CC-MAIN-2015-40
en
refinedweb
Building your own Windows Live Messenger Events Agent - Posted: Mar 08, 2007 at 5:06AM - 1,337 views - 5 comments ![if gt IE 8]> <![endif]> Something went wrong getting user information from Channel 9 Something went wrong getting user information from MSDN Something went wrong getting the Visual Studio Achievements Summary The guide takes you on your new text-based adventure, as you'll learn to build a little gadget that will answer questions instead of you. This pretty piece of software is nothing more than a Windows Live Messenger Add-In that will register people for events that are tracked using Windows SharePoint Services. In the end, you'll be able to activate an agent on your messenger that will interact with your friends in live scenarios using text messages. Introduction Microsoft Windows SharePoint Services is a versatile technology that people can use to increase the efficiency of business processes and improve team productivity. Let's see how we can extend this wonderful technology: you are probably spending almost all your time online and using Windows Live Messenger to talk to your friends; more, you may be a very organized person that keeps track of every event (e.g. movie nights, snowboarding weekends etc.), so why not let your friends register for your events in a totally unique way. Sounds good? You'll even be able to supervise the conversation and supply answers to questions that your agent is not prepared to answer. Prerequisites Please download Windows Live Messenger and install it on your computer if you have not done so already. With Windows Live Messenger version 8, you can even talk with your Yahoo! friends; we'll talk about Windows Live Messenger and Yahoo! Messenger later on. I hope you are already using Visual Studio Express C# or VB since you're on the Coding4Fun web site (you're ok with any choice, as the sample is available is both languages). You'll also need a place to set up your Windows SharePoint Services; depending on your mood, two options are available: I personally tried them both and the project worked properly, so it's up to you. Now that you have everything up and ready, let's get started: it's Coding4Fun time! Getting Started with Windows Live Messenger Add-Ins From a developer's perspective, Windows Live Messenger delivers a great instant messaging platform that provides support for custom add-ins through its Messenger Add-In API. However, this is not available out of the box; to take advantage of this feature, we must explicitly tell Windows Live Messenger that we want to use add-ins by tweaking a registry key. Open the Registry Editor and set HKEY_CURRENT_USER\Software\Microsoft\MSNMessenger\AddInFeatureEnabled to a DWORD entry with the value equal to 1. It's time to check whether the change in the registry caused the desired effect in your Personal Settings, as illustrated in [Fig. 1]. A new tab called Add-ins should be visible to you. [Figure 1] Getting Started with Windows SharePoint Services Here, I'll try to make you understand how Windows SharePoint Services works. Please excuse me in case you find these obvious, or I'm repeating myself; it's better to clear things out from the early stages than getting into code and not understanding the basic concepts. Windows SharePoint Services is designed as a platform to support different types of sites, also named templates. Natively installed unique types in Windows SharePoint Services include the STS type, which defines the Team Site, Blank Site, and Document Workspace configurations, and the MPS type, which defines the Basic Meeting Workspace, Blank Meeting Workspace, Decision Meeting Workspace, Social Meeting Workspace, and Multipage Meeting Workspace configurations; throughout the tutorial, we are going to work only with Team Web Site and Basic Meeting Workspace. Let's organize your default Team Web Site to be ready for use in the upcoming custom add-in; navigate to your default web site location, as shown in [Fig. 2]. Notice that the page contains several panels named Announcements, Events and Links; in SharePoint terms, these are called lists. We are going to work only with the Events list; you are not constrained to work only with these predefined lists, you can create your own or use other predefined lists at anytime; just needs a little more investigation from your side. Adding a New Event to the Events List It's simple to add a new event; just click Add new event below the Events list. You'll have to complete a three-page process in order to create a valid event. That's it! A new event that uses a Meeting Workspace should take you to the new workspace web site, like in [Fig. 4]. Again, you can see that the page contains several panels named Objectives, Attendees, Agenda and Document Library; these again are lists. At this point you got the idea: Windows SharePoint Services is made up from different types of sites (or sub-sites), and that each site (or sub-site) contains different types of lists depending on the template it's using. The Attendees list already contains the email of the user who created it (that's me). When a friend of yours registers for an event (e.g. Microsoft Academic Tour), you should be able to see its email together with its response here. Every event has its own Meeting Workspace, so don't expect to see emails for all events here. Return to the Team Web Site Home (by clicking the Up to Team Web Site link located in the top-right corner) and repeat these steps as many times it's necessary. For example, you should add movie nights and snowboarding weekends as events; be sure to set up the occurrence date correctly as this is quite important when querying for events that occur within a week, month or year time. Building the Windows Live Messenger Add-In The current version of messenger provides only text message support for add-ins, although you have probably already seen that it's more than capable of handling sound and video when communicating with your friends; nevertheless it's enough for what we intend to build. Fire up your code editor, we're a step closer from writing code! The add-in will be able to: Several basic requirements need to be met in order for an add-in to function as expected. Please note that they are already done and included in the sample. However, it's better for you to understand how it actually works and not just find them somewhere in the sample and wonder why did I do that: Exchanging Text Messages By this time, it's clear that we are going to work with many text messages; it's always better to keep text that you're probably going to modify in the near future (you may even want to localize the messages in your language) in resources. We have several types of text messages, which all are located in Resources.resx: The following rule applies to all text messages that are sent in and out the network: a text message can have only 400 characters; in case a reply exceeds this limit, it won't send the text message, and you'll just wonder what happened. Take into account that a large number of events may increase the size of a query reply considerably. In case you have many events, you'll have to create a paging mechanism that meets your needs. Consuming Windows SharePoint Services Web Services There are many web services that can be used to work remotely with a deployment of Windows SharePoint Services: Administration, Alerts, Document Workspace, Forms, Imaging, List Data Retrieval, Lists, Meetings, Permissions etc; we are going to learn how to use Lists and Meetings web services, since these are the only ones we need. When adding web references to the web services, make sure you enter the URLs like these: and. I know that you are eager to write some code; hang on, we are just one paragraph away from coding. Looking at the Windows Live Messenger Add-In Code I thought I would never reach the coding stuff... we'll start by looking at the way the add-in comes to life. In MessengerAddIn.cs or MessengerAddIn.vb, the class MessengerAddIn (to remind you, the assembly name is EventsAgent.MessengerAddIn.dll) is implementing the Microsoft.Messenger.IMessengerAddIn. Visual C# 14 public class MessengerAddIn : IMessengerAddIn 15 { 16 private MessengerClient MeClient; 17 18 private System.Collections.Generic.Dictionary<String, Conversation> People = 19 new Dictionary<string, Conversation>(); Visual Basic 14 Public Class MessengerAddIn 15 Implements IMessengerAddIn 16 17 Private MeClient As MessengerClient 18 Private People As Dictionary(Of String, Conversation) = New Dictionary(Of String, Conversation)() Any class that implements this interface must also implement its Initialize method. The Initialize method is the first one which gets called when you click Turn on "Events Agent", as in [Fig. 5]; and receives the Windows Live Messenger client as an argument, which is saved in a local variable for use later on. One should use the initialization to properly change the friendly name, description and personal status message. The project only uses one messenger event, which is fired up each time a new text message arrives; in case you decide to use other events, just uncomment them. Visual C# 25 public void Initialize(MessengerClient messenger) 26 { 27 MeClient = messenger; 28 29 //Messenger Add-In friendly texts 30 MeClient.AddInProperties.FriendlyName = Resources.MessengerAddIn_FriendlyName; 31 MeClient.AddInProperties.Description = Resources.MessengerAddIn_Description; 32 MeClient.AddInProperties.PersonalStatusMessage = Resources.MessengerAddIn_PersonalStatusMessage; 33 34 //Messenger text events 35 MeClient.IncomingTextMessage += 36 new EventHandler<IncomingTextMessageEventArgs>(this.IncomingTextMessage); 37 //MeClient.OutgoingTextMessage += 38 // new EventHandler<OutgoingTextMessageEventArgs>(this.OutgoingTextMessage); 39 //MeClient.ShowOptionsDialog += new EventHandler(this.ShowOptionsDialog); 40 } Visual Basic 24 Public Sub Initialize(ByVal messenger As MessengerClient) Implements IMessengerAddIn.Initialize 25 MeClient = messenger 26 27 'Messenger Add-In friendly texts 28 MeClient.AddInProperties.FriendlyName = My.Resources.MessengerAddIn_FriendlyName 29 MeClient.AddInProperties.Description = My.Resources.MessengerAddIn_Description 30 MeClient.AddInProperties.PersonalStatusMessage = My.Resources.MessengerAddIn_PersonalStatusMessage 31 32 'Messenger text events 33 AddHandler MeClient.IncomingTextMessage, AddressOf Me.IncomingTextMessage 34 'AddHandler MeClient.OutgoingTextMessage, AddressOf Me.OutgoingTextMessage 35 'AddHandler MeClient.ShowOptionsDialog, AddressOf Me.ShowOptionsDialog 36 End Sub For handling incoming text messages, I have written and registered my own event handler. The event is raised regardless who sent the text message, so we need a way to track users: have we talked with a user before, what kind of query has the user previously made etc.; this is achieved using a dictionary having the user's unique id as key, and an instance of the Conversation class as value. On line 58 (in C#) or 52 (in VB) you can see how easy it is to detect the user's status - nothing complicated. Take care at: Windows Live Messenger Add-Ins ca only send a (one) text message as a reply to a (one) incoming text message, that's why I always use a return statement after I send a message (an exception is raised when you try to send multiple messages). I don't like this issue, you don't like it, but it keeps me and you secure from malicious add-ins. Visual C# 47 private void IncomingTextMessage(object sender, IncomingTextMessageEventArgs e) 48 { 49 // Check if the text message comes from a new user; 50 // with whom we have not talked before. 51 if (!People.ContainsKey(e.UserFrom.UniqueId)) 52 { 53 People.Add(e.UserFrom.UniqueId, new Conversation()); 54 string message = String.Format(CultureInfo.CurrentCulture, 55 Resources.OTextMessage_Welcome, e.UserFrom.FriendlyName); 56 57 // Verify the status of the user. 58 if (e.UserFrom.Status != UserStatus.Busy) 59 message += Resources.OTextMessage_EndWelcome; 60 61 MeClient.SendTextMessage(message, e.UserFrom); 62 return; 63 } 64 65 // Pull the previous conversation with the user. 66 Conversation person = null; 67 People.TryGetValue(e.UserFrom.UniqueId, out person); Visual Basic 43 Private Sub IncomingTextMessage(ByVal sender As Object, ByVal e As IncomingTextMessageEventArgs) 44 ' Check if the text message comes from a new user; 45 ' with whom we have not talked before. 46 If Not People.ContainsKey(e.UserFrom.UniqueId) Then 47 People.Add(e.UserFrom.UniqueId, New Conversation) 48 Dim message As String = String.Format(CultureInfo.CurrentCulture, _ 49 My.Resources.OTextMessage_Welcome, e.UserFrom.FriendlyName) 50 51 ' Verify the status of the user. 52 If (e.UserFrom.Status <> UserStatus.Busy) Then 53 message += My.Resources.OTextMessage_EndWelcome 54 End If 55 56 MeClient.SendTextMessage(message, e.UserFrom) 57 Return 58 End If 59 60 ' Pull the previous conversation with the user. 61 Dim person As Conversation = Nothing 62 People.TryGetValue(e.UserFrom.UniqueId, person) What happens when a friend of yours sends you a show events text message and the add-in intercepts it? When a text message validates the condition from line 78-79 (in C#) or 72-73 (in VB), SharePointWrapper.GetEvents (we'll talk about this later on) is called and used to complete the request; plus, a friendly text message that reflects the previous call is built and sent back. You must always send the reply to the user from which the request came from; otherwise, an exception will be thrown. Visual C# 78 if (e.TextMessage.StartsWith(Resources.ITextMessage_AllEvents, 79 true, CultureInfo.CurrentCulture)) 80 { 81 try 82 { 83 person.Events = SharePointWrapper.GetEvents( 84 new Uri(Resources.SharePoint_ServerName), SharePointWrapper.QuerySpan.All); 85 86 MeClient.SendTextMessage(ReplyGetEvents(person.Events), e.UserFrom); 87 return; 88 } 89 catch (WebException) 90 { 91 MeClient.SendTextMessage(Resources.OTextMessage_Exception, e.UserFrom); 92 return; 93 } 94 } Visual Basic 72 If e.TextMessage.StartsWith(My.Resources.ITextMessage_AllEvents, _ 73 True, CultureInfo.CurrentCulture) Then 74 Try 75 person.Events = SharePointWrapper.GetEvents( _ 76 New Uri(My.Resources.SharePoint_ServerName), SharePointWrapper.QuerySpan.All) 77 78 MeClient.SendTextMessage(ReplyGetEvents(person.Events), e.UserFrom) 79 Return 80 Catch ex As WebException 81 MeClient.SendTextMessage(My.Resources.OTextMessage_Exception, e.UserFrom) 82 Return 83 End Try 84 End If The code that is executed when a user requests help, show events that occur in a week, month or year time is either similar with the previous code, or too simple and I'm not going to display it here. Instead, I'll show you what happens when a user requests to register for an event. A text message that would trigger the code looks like register for event 2; all I did was get the id from the incoming message and processed the request; the rest you'll understand as it looks almost the same. When a user wants to perform an action on an invalid event, the add-in notifies him or her of the mistake. Visual C# 184 if (e.TextMessage.StartsWith(Resources.ITextMessage_RegisterForEvent, 185 true, CultureInfo.CurrentCulture)) 186 { 187 try 188 { 189 int id = Convert.ToInt32(e.TextMessage.ToLower().Replace( 190 Resources.ITextMessage_RegisterForEvent, "").Trim(), CultureInfo.CurrentCulture); 191 192 MeClient.SendTextMessage(ReplyRegisterForEvent( 193 person.Events.Data.GetListItem(id), e.UserFrom.Email), e.UserFrom); 194 return; 195 } 196 catch (ArgumentNullException) 197 { 198 MeClient.SendTextMessage(Resources.OTextMessage_InvalidEvent, e.UserFrom); 199 return; 200 } 201 catch (WebException) 202 { 203 MeClient.SendTextMessage(Resources.OTextMessage_Exception, e.UserFrom); 204 return; 205 } 206 catch (Exception) 207 { 208 MeClient.SendTextMessage(Resources.OTextMessage_Exception, e.UserFrom); 209 return; 210 } 211 } Visual Basic 153 If e.TextMessage.StartsWith(My.Resources.ITextMessage_RegisterForEvent, _ 154 True, CultureInfo.CurrentCulture) Then 155 Try 156 Dim id As Integer = Convert.ToInt32(e.TextMessage.ToLower().Replace( _ 157 My.Resources.ITextMessage_RegisterForEvent, "").Trim, CultureInfo.CurrentCulture) 158 159 MeClient.SendTextMessage(ReplyRegisterForEvent( _ 160 person.Events.Data.GetListItem(id), e.UserFrom.Email), e.UserFrom) 161 Return 162 Catch ex As ArgumentNullException 163 MeClient.SendTextMessage(My.Resources.OTextMessage_InvalidEvent, e.UserFrom) 164 Return 165 Catch ex As WebException 166 MeClient.SendTextMessage(My.Resources.OTextMessage_Exception, e.UserFrom) 167 Return 168 Catch ex As Exception 169 MeClient.SendTextMessage(My.Resources.OTextMessage_Exception, e.UserFrom) 170 Return 171 End Try 172 End If Requests to unregister for events and show details for an event are similar and there is no reason to talk about them. Each task (request) uses its own method to create its reply - that's were many conditions are checked and a proper response is built. Be sure to download the sample, which is available in both languages and discover the hidden aspects for yourself. Looking at the Windows SharePoint Services Wrapper Code Most Windows SharePoint Services web services use as arguments Collaborative Application Markup Language (CAML), so they're quite flexible. In case you want to do more with SharePoint than I have done here, you'll have to learn CAML. It's important to notice that the authentication to the web service, located on line 46 (in C#) or 43 (in VB), is made through the GetSharePointCredentials method; don't forget to supply your credentials as described within the body of the method. Depending on the time span, the appropriate CAML query is built - it checks for a greater date than now and a less date than the one specified in the query span. Right now I can think of three methods to process the response from the web services: navigate your way using xpaths, work with datasets or... deserialize the response into objects. Although you have to write more code, deserialization brings a huge benefit - it's easier to work with later on; three classes were used to achieve that: SharePointListRoot, SharePointListData and SharePointListItem. Visual C# 37 public static SharePointListRoot GetEvents(Uri websiteUri, QuerySpan span) 38 { 39 if (websiteUri == null) 40 throw new ArgumentNullException("websiteUri"); 41 try 42 { 43 // Work with the Lists web service 44 Lists wsLists = new Lists(); 45 wsLists.Url = websiteUri.ToString() + "/_vti_bin/Lists.asmx"; 46 wsLists.Credentials = GetSharePointCredentials(); 47 48 XmlDocument xDocument = new XmlDocument(); 49 XmlNode xQuery = xDocument.CreateNode(XmlNodeType.Element, "Query", ""); 50 XmlNode xViewFields = xDocument.CreateNode(XmlNodeType.Element, "ViewFields", ""); 51 XmlNode xQueryOptions = xDocument.CreateNode(XmlNodeType.Element, "QueryOptions", ""); 52 string viewName = ""; 53 string rowLimit = "0"; 54 55 // CAML (Collaborative Application Markup Language) 56 switch (span) 57 { 58 case QuerySpan.All: 59 xQuery.InnerXml = 60 "<Where>" + 61 "<Gt><FieldRef Name='EventDate'/><Value Type='DateTime'>" + 62 DateTime.UtcNow.ToString("s", CultureInfo.InvariantCulture) + "</Value></Gt>" + 63 "</Where>"; 64 break; 65 case QuerySpan.InAYear: 66 xQuery.InnerXml = 67 "<Where><And>" + 68 "<Gt><FieldRef Name='EventDate'/><Value Type='DateTime'>" + 69 DateTime.UtcNow.ToString("s", CultureInfo.InvariantCulture) + "</Value></Gt>" + 70 "<Lt><FieldRef Name='EventDate'/><Value Type='DateTime'>" + 71 DateTime.UtcNow.AddYears(1).ToString("s", CultureInfo.InvariantCulture) + "</Value></Lt>" + 72 "</And></Where>"; 73 break; 74 case QuerySpan.InAMonth: 75 xQuery.InnerXml = 76 "<Where><And>" + 77 "<Gt><FieldRef Name='EventDate'/><Value Type='DateTime'>" + 78 DateTime.UtcNow.ToString("s", CultureInfo.InvariantCulture) + "</Value></Gt>" + 79 "<Lt><FieldRef Name='EventDate'/><Value Type='DateTime'>" + 80 DateTime.UtcNow.AddMonths(1).ToString("s", CultureInfo.InvariantCulture) + "</Value></Lt>" + 81 "</And></Where>"; 82 break; 83 case QuerySpan.InAWeek: 84 xQuery.InnerXml = 85 "<Where><And>" + 86 "<Gt><FieldRef Name='EventDate'/><Value Type='DateTime'>" + 87 DateTime.UtcNow.ToString("s", CultureInfo.InvariantCulture) + "</Value></Gt>" + 88 "<Lt><FieldRef Name='EventDate'/><Value Type='DateTime'>" + 89 DateTime.UtcNow.AddDays(7).ToString("s", CultureInfo.InvariantCulture) + "</Value></Lt>" + 90 "</And></Where>"; 91 break; 92 }; 93 94 XmlNode xResult = wsLists.GetListItems(Resources.SharePoint_EventsListName, 95 viewName, xQuery, xViewFields, rowLimit, xQueryOptions); 96 97 // Create an SharePointListRoot object from the web response. 98 XmlTextReader xReader = new XmlTextReader(xResult.OuterXml, XmlNodeType.Element, null); 99 XmlSerializer xSerializer = new XmlSerializer(new SharePointListRoot().GetType()); 100 101 return (SharePointListRoot)xSerializer.Deserialize(xReader); 102 } 103 catch (WebException) 104 { 105 throw; 106 } 107 } Visual Basic 35 Public Shared Function GetEvents(ByVal websiteUri As Uri, ByVal span As QuerySpan) As SharePointListRoot 36 If (websiteUri Is Nothing) Then 37 Throw New ArgumentNullException("websiteUri") 38 End If 39 Try 40 ' Work with the Lists web service 41 Dim wsLists As Lists = New Lists() 42 wsLists.Url = websiteUri.ToString() & "/_vti_bin/Lists.asmx" 43 wsLists.Credentials = GetSharePointCredentials() 44 45 Dim xDocument As XmlDocument = New XmlDocument() 46 Dim xQuery As XmlNode = xDocument.CreateNode(XmlNodeType.Element, "Query", "") 47 Dim xViewFields As XmlNode = xDocument.CreateNode(XmlNodeType.Element, "ViewFields", "") 48 Dim xQueryOptions As XmlNode = xDocument.CreateNode(XmlNodeType.Element, "QueryOptions", "") 49 Dim viewName As String = "" 50 Dim rowLimit As String = "0" 51 52 ' CAML (Collaborative Application Markup Language) 53 Select Case (span) 54 Case QuerySpan.All 55 xQuery.InnerXml = _ 56 "<Where>" & _ 57 "<Gt><FieldRef Name='EventDate'/><Value Type='DateTime'>" & _ 58 DateTime.UtcNow.ToString("s", CultureInfo.InvariantCulture) & "</Value></Gt>" & _ 59 "</Where>" 60 Case QuerySpan.InAYear 61 xQuery.InnerXml = _ 62 "<Where><And>" & _ 63 "<Gt><FieldRef Name='EventDate'/><Value Type='DateTime'>" & _ 64 DateTime.UtcNow.ToString("s", CultureInfo.InvariantCulture) & "</Value></Gt>" & _ 65 "<Lt><FieldRef Name='EventDate'/><Value Type='DateTime'>" & _ 66 DateTime.UtcNow.AddYears(1).ToString("s", CultureInfo.InvariantCulture) & "</Value></Lt>" & _ 67 "</And></Where>" 68 Case QuerySpan.InAMonth 69 xQuery.InnerXml = _ 70 "<Where><And>" & _ 71 "<Gt><FieldRef Name='EventDate'/><Value Type='DateTime'>" & _ 72 DateTime.UtcNow.ToString("s", CultureInfo.InvariantCulture) & "</Value></Gt>" & _ 73 "<Lt><FieldRef Name='EventDate'/><Value Type='DateTime'>" & _ 74 DateTime.UtcNow.AddMonths(1).ToString("s", CultureInfo.InvariantCulture) & "</Value></Lt>" & _ 75 "</And></Where>" 76 Case QuerySpan.InAWeek 77 xQuery.InnerXml = _ 78 "<Where><And>" & _ 79 "<Gt><FieldRef Name='EventDate'/><Value Type='DateTime'>" & _ 80 DateTime.UtcNow.ToString("s", CultureInfo.InvariantCulture) & "</Value></Gt>" & _ 81 "<Lt><FieldRef Name='EventDate'/><Value Type='DateTime'>" & _ 82 DateTime.UtcNow.AddDays(7).ToString("s", CultureInfo.InvariantCulture) & "</Value></Lt>" & _ 83 "</And></Where>" 84 End Select 85 86 Dim xResult As XmlNode = wsLists.GetListItems(My.Resources.SharePoint_EventsListName, _ 87 viewName, xQuery, xViewFields, rowLimit, xQueryOptions) 88 89 ' Create an SharePointListRoot object from the web response. 90 Dim xReader As XmlTextReader = New XmlTextReader(xResult.OuterXml, XmlNodeType.Element, Nothing) 91 Dim xSerializer As XmlSerializer = New XmlSerializer(New SharePointListRoot().GetType()) 92 93 Return CType(xSerializer.Deserialize(xReader), SharePointListRoot) 94 Catch ex As WebException 95 Throw 96 End Try 97 End Function When you want to add a new attendee, you basically need to add a new item to the Attendees list. A trick to notice: every site has its own web services, so you'll need to change the URL of the web service to target the web service of the workspace site that contains the Attendees list you want to modify; otherwise, a null reference exception will be raised. For example, a Team Web Site is not a Meeting Workspace and does not have a meetings web service. Always take care at this issue when you work with SharePoint web services. Visual C# 115 public static bool AddAttendee(Uri workspaceUri, string email) 116 { 117 if (workspaceUri == null) 118 throw new ArgumentNullException("workspaceUri"); 119 try 120 { 121 // Work with the Lists web service 122 Lists wsLists = new Lists(); 123 wsLists.Url = workspaceUri.GetLeftPart(UriPartial.Path) + "/_vti_bin/Lists.asmx"; 124 wsLists.Credentials = GetSharePointCredentials(); 125 126 XmlDocument xDocument = new XmlDocument(); 127 XmlNode xUpdates = xDocument.CreateNode(XmlNodeType.Element, "Batch", ""); 128 129 // CAML (Collaborative Application Markup Language) 130 xUpdates.InnerXml = 131 "<Method ID='1' Cmd='New'>" + 132 "<Field Name='Title'>" + email + "</Field>" + 133 "<Field Name='Status'>Accepted</Field>" + 134 "<Field Name='Attendance'>Optional</Field>" + 135 "</Method>"; 136 137 XmlNode xResult = wsLists.UpdateListItems(Resources.SharePoint_AttendeesListName, 138 xUpdates); 139 140 // Since we are performing only one action (a new), we get as response only one node; 141 // we check to see if there was an error completing the task. 142 return (xResult.ChildNodes[0].ChildNodes[0].InnerText == "0x00000000"); 143 } 144 catch (WebException) 145 { 146 throw; 147 } 148 } Visual Basic 105 Public Shared Function AddAttendee(ByVal workspaceUri As Uri, ByVal email As String) As Boolean 106 If (workspaceUri Is Nothing) Then 107 Throw New ArgumentNullException("workspaceUri") 108 End If 109 Try 110 ' Work with the Lists web service 111 Dim wsLists As Lists = New Lists() 112 wsLists.Url = workspaceUri.GetLeftPart(UriPartial.Path) & "/_vti_bin/Lists.asmx" 113 wsLists.Credentials = GetSharePointCredentials() 114 115 Dim xDocument As XmlDocument = New XmlDocument() 116 Dim xUpdates As XmlNode = xDocument.CreateNode(XmlNodeType.Element, "Batch", "") 117 118 ' CAML (Collaborative Application Markup Language) 119 xUpdates.InnerXml = _ 120 "<Method ID='1' Cmd='New'>" & _ 121 "<Field Name='Title'>" & email & "</Field>" & _ 122 "<Field Name='Status'>Accepted</Field>" & _ 123 "<Field Name='Attendance'>Optional</Field>" & _ 124 "</Method>" 125 126 Dim xResult As XmlNode = wsLists.UpdateListItems(My.Resources.SharePoint_AttendeesListName, _ 127 xUpdates) 128 129 ' Since we are performing only one action (a new), we get as response only one node; 130 ' we check to see if there was an error completing the task. 131 Return (xResult.ChildNodes(0).ChildNodes(0).InnerText = "0x00000000") 132 Catch ex As WebException 133 Throw 134 End Try 135 End Function When a user wants to re-register for an event, we won't add it again to the Attendees list, because we would have duplicate items; instead, we are going to use the Meetings web service to set its response back from Declined to Accepted, or from Accepted to Declined, in the unregister case. Again, the URLs targeting the web services are set as they should be. This is one of the places you need to modify in order to make it work with non-recurrent events; check the comments within the code for more details. Visual C# 198 public static void SetAttendee(Uri workspaceUri, string email, AttendeeResponse response) 199 { 200 if (workspaceUri == null) 201 throw new ArgumentNullException("workspaceUri"); 202 try 203 { 204 // Work with the Lists web service 205 Lists wsLists = new Lists(); 206 wsLists.Url = workspaceUri.GetLeftPart(UriPartial.Path) + "/_vti_bin/Lists.asmx"; 207 wsLists.Credentials = GetSharePointCredentials(); 208 209 // Also work with the Meetings web service 210 Meetings wsMeetings = new Meetings(); 211 wsMeetings.Url = workspaceUri.GetLeftPart(UriPartial.Path) + "/_vti_bin/Meetings.asmx"; 212 wsMeetings.Credentials = GetSharePointCredentials(); 213 214 XmlDocument xDocument = new XmlDocument(); 215 XmlNode xQuery = xDocument.CreateNode(XmlNodeType.Element, "Query", ""); 216 XmlNode xViewFields = xDocument.CreateNode(XmlNodeType.Element, "ViewFields", ""); 217 XmlNode xQueryOptions = xDocument.CreateNode(XmlNodeType.Element, "QueryOptions", ""); 218 string viewName = ""; 219 string rowLimit = "0"; 220 221 // For non-recurring events, SharePoint sets the default id to 1; 222 // so it's safe to query only for the meeting with the instance id equal to 1. 223 xQuery.InnerXml = 224 "<Where>" + 225 "<Eq><FieldRef Name='ID'/><Value Type='Counter'>" + 1 + "</Value></Eq>" + 226 "</Where>"; 227 228 XmlNode xResult = wsLists.GetListItems(Resources.SharePoint_MeetingSeriesListName, 229 viewName, xQuery, xViewFields, rowLimit, xQueryOptions); 230 231 // We know the instance id, but we need the unique id of the meeting. 232 string uid = xResult.ChildNodes[1].ChildNodes[1].Attributes["ows_EventUID"].InnerText; 233 234 wsMeetings.SetAttendeeResponse(email, 0, uid, 1, DateTime.UtcNow, DateTime.UtcNow, response); 235 } 236 catch (WebException) 237 { 238 throw; 239 } 240 } Visual Basic 182 Public Shared Sub SetAttendee(ByVal workspaceUri As Uri, ByVal email As String, ByVal response As AttendeeResponse) 183 If (workspaceUri Is Nothing) Then 184 Throw New ArgumentNullException("workspaceUri") 185 End If 186 Try 187 ' Work with the Lists web service 188 Dim wsLists As Lists = New Lists() 189 wsLists.Url = workspaceUri.GetLeftPart(UriPartial.Path) + "/_vti_bin/Lists.asmx" 190 wsLists.Credentials = GetSharePointCredentials() 191 192 ' Also work with the Meetings web service 193 Dim wsMeetings As Meetings = New Meetings() 194 wsMeetings.Url = workspaceUri.GetLeftPart(UriPartial.Path) + "/_vti_bin/Meetings.asmx" 195 wsMeetings.Credentials = GetSharePointCredentials() 196 197 Dim xDocument As XmlDocument = New XmlDocument() 198 Dim xQuery As XmlNode = xDocument.CreateNode(XmlNodeType.Element, "Query", "") 199 Dim xViewFields As XmlNode = xDocument.CreateNode(XmlNodeType.Element, "ViewFields", "") 200 Dim xQueryOptions As XmlNode = xDocument.CreateNode(XmlNodeType.Element, "QueryOptions", "") 201 Dim viewName As String = "" 202 Dim rowLimit As String = "0" 203 204 ' For non-recurring events, SharePoint sets the default id to 1; 205 ' so it's safe to query only for the meeting with the instance id equal to 1. 206 xQuery.InnerXml = _ 207 "<Where>" & _ 208 "<Eq><FieldRef Name='ID'/><Value Type='Counter'>" & 1 & "</Value></Eq>" & _ 209 "</Where>" 210 211 Dim xResult As XmlNode = wsLists.GetListItems(My.Resources.SharePoint_MeetingSeriesListName, _ 212 viewName, xQuery, xViewFields, rowLimit, xQueryOptions) 213 214 ' We know the instance id, but we need the unique id of the meeting. 215 Dim uid As String = xResult.ChildNodes(1).ChildNodes(1).Attributes("ows_EventUID").InnerText 216 217 wsMeetings.SetAttendeeResponse(email, 0, uid, 1, DateTime.UtcNow, DateTime.UtcNow, response) 218 Catch ex As WebException 219 Throw 220 End Try 221 End Sub This concludes our overview of the code. Not every line of code is displayed here, but the number preceding each line of code is the actual line number from the sample code files; so, if you have any questions about the code you'll be able to find it in a snap. It Works on Windows Live Messenger Network Although you can talk with your Yahoo! friends using text messages, Windows Live Messenger Add-Ins do not work with Yahoo! contacts; in other words, the add-in won't receive the text messages from them, although you see them, so it does not have how to send text messages back. You'll have to find another suitable alternative for them to register for your events. The usual way of testing a messenger add-in is to ask a friend of yours to send you text messages. Of course, you can have multiple Windows Live Ids and use Virtual PC to install and run more messengers; but hey, I did not want to do that. So, at one moment I had an interesting idea: why not make Encarta Instant Answers (which is a BOT) make it send me some text messages that matched my patterns (e.g. show events). In case you say to Encarta Instant Answers show events, it will respond with something like: Why must I show events? (the response you get may differ). It should have worked, but unfortunately it's the same as in the Yahoo! case. The good news is that it works with your friends that are connected to the Windows Live Messenger Network using Pocket MSN. I was curious and tried MSN Messenger on a device running Windows Mobile 5.1 Pocket PC Phone Edition and it worked fine; I was able to interact with the add-in. Do not get me wrong, add-ins can be loaded only on a Windows Live Messenger, and not on a Pocket MSN. And a dummy remark: people who use Windows Live Messenger have no problems using Windows Live Messenger Add-Ins. Demo In the end, I just want to show you how it works, because so far we have only talked around the subject without seeing it in action. Return to your Personal Settings > Add-ins, as illustrated in [Fig. 1] and click Add to Messenger; navigate to your sample installation folder and select the EventsAgent.MessengerAddIn.dll assembly. After completing this, the combo should have the Events Agent value; optionally, you may want to check the Automatically turn on this add-in when my status is anything other than Online or Appear Offline box. By clicking Turn on "Events Agent", like in [Fig. 6] you are activating your add-in and allowing it to interact with your friends, and this is what we expect it to do! Before I give it a spin, please check my sample events that will be used to demonstrate the project. In [Fig. 6] you can see that I am using three events that occur at different time intervals, and all of them are using meeting workspaces to allow people to enroll in events as attendees. It's spinning... in [Fig. 7] you can watch my conversation with one of my friends. A golden bar tells us that we have almost reached our goal; you'll be able to recognize text messages sent by the add-in when a message begins with a Paul-Valentin Borza's add-in "Events Agent" says: text. Oh, you're right it's not my conversation - I haven't written a single word, or character: our project, the Windows Live Messenger Add-In Events Agent has done all the work for me. And that's great because that means my work here is done! Don't forget to look at the result: there are two attendees now. Conclusion I'm happy that you learned how to use and combine two powerful technologies: Windows Live Messenger Add-Ins and Windows SharePoint Services. You have built your own events agent that will serve you every time you start your favorite messenger. You'll now be able to show it to your friends while talking to them, from the same windows you have always did before! I'm sure they'll be quite surprised... mine were. You can try my events agent on my Windows Live Messenger Id windowslive@borza.ro; you'll even be able to talk with me in case you need any assistance. Being at my second Coding4Fun article, I feel great when I can help people achieve a greater potential that has always been within their reach; and all of these happening while simply coding for fun. Thanks to the Microsoft Academic Program Team Romania for support. Improvements I encourage you to extend my work by: second (2nd!) Coding4Fun article just got published on MSDN Coding4Fun! I want to thank Todi Pruteanu... Could u tell me how to change the nickname? i couldn't reach the : HKEY_CURRENT_USER\Software\Microsoft\MSNMessenger\AddInFeatureEnabled by the way i m using WLM 8.5 any idea???? And why don't some channels show up, like mine!? , I cant find any MessengerClient.dll file on my PC. I am using Windows Live Messenger V14. Are these addons still supported? Remove this comment Remove this threadClose
https://channel9.msdn.com/coding4fun/articles/Building-your-own-Windows-Live-Messenger-Events-Agent
CC-MAIN-2015-40
en
refinedweb
ChangeProposals/fixedprefixlikexml. An extension specification SHOULD NOT define a new element type Add text like the following: After the following paragraph: In HTML documents, elements in the HTML namespace may have an xmlns attribute specified, if, and only if, it has the exact value "". This does not apply to XML documents. Add a line like the following:). -:
http://www.w3.org/html/wg/wiki/ChangeProposals/fixedprefixlikexml
CC-MAIN-2015-40
en
refinedweb
This is another homework problem. I thought it was easier, and I wouldn't need help, but I keep getting a segmentation fault when running the program. The assignment is to use a recursive function to output the amount of candybars we can buy with a user-inputted amount of money. Each candybar costs $1, and each candybar gives a coupon. 7 coupons can be redeemed for an additional candybar. The example he gave us is: "For example, if we have $20 dollars then we can initially buy 20 candy bars. This gives us 20 coupons. We can redeem 14 coupons for 2 additional candy bars. These two additional candy bars have 2 more coupons, so we now have a total of 8 coupons when added to the 6 left over from the original purchase. This gives us enough to redeem for 1 more candy bar. As a result we now have 23 candy bars and 2 left over coupons." Here is the code I have. When I don't get a segmentation fault, I get it returning zero. //File Name: assg3.cpp #include <iostream> using namespace std; int candybars=0; int recursivecandy(int A, int B) //A is money, B is coupons { do { candybars = candybars + A +(B/7); B = A + B/7, B%7; A = 0; return recursivecandy(A, B); }while (B>=7); } int main() { int money; int coupons=0; cout << "Please enter the number of dollars you have: $"; cin >> money; cout << "With " << money << " dollars, you can buy " << recursivecandy(money, coupons) << " candy bars.\n"; return 0; } Fairly straightforward, I use a do-while loop to run inside the recursive function because I want it run at least once (you start with no coupons, which is my stopping case). Does anyone see anything wrong? Any help would be greatly appreciated.
https://www.daniweb.com/programming/software-development/threads/351555/recursive-design-to-input-coupon-and-money-
CC-MAIN-2015-40
en
refinedweb
Event Based Programming in JavaFX Old Song, New World I decided to try my hand at some JavaFX programming to see what the language had to offer. Two of the key features of JavaFX are its ability to bind to data, and its access to all Java libraries. I used that to see how it handles for event-based programming. I built this minesweeper game: As the World Turns: Reactive Data Models JavaFX let me build reactive data models using bind and on replace. When some piece of state changes, the change propagates through based on code right of the declarations. These keywords shrink the boilerplate down to a few readable characters. Here's a piece of code from TileControl.fx that uses both: package class TileControl { ... var tileNode : TileNode; //View of the tile public-init var cell: HexBoards.ClientCell; //Model of the tile def cellState = bind cell.state on replace oldCellState { if(tileNode != null) { tileNode.update(cellState); } }; ...} I'm not completely convinced that TileControl -- and MVC-- is worth the extra class. I could have bound cell.state directly to a field in tileNode. It does prevent these few important lines from being lost in a sea of graphics code, and keeps the model from leaking into the verbose TileNode graphics code. More importantly, it lets a model of several layers, say rules for a more complex board game or some obscure business logic, propagate based on their declarations. An outer layer can define its own dependencies on the inner layer, so the system stays very clean. Old World Meets New: Event-based Programming and Clean Code I like event-based programming. It tends to keep class structures shallow and clean, and separates a program into understandable parts. When I throw in a way to distribute the events I can get multiple machines to form a coherent system, usually fairly painlessly. That minesweeper game shows the idea in JavaFX on a small scale. I used JMS to separate a Server, which knows where all the mines are, from a Client, which only knows what the player has uncovered. The client and server have no direct access to each other's objects; they are loosely coupled via JMS events. It's overkill for this little project, with one player, no reward (not even bragging rights) in the game, and client and server collocated in a single process. However, it'd make creating a distributed multiplayer game, or any other distributed system very easy. (To save me having to work with network connections on your web page, I've used SomnifugiJMS and colocated the Server and Client in the applet. It needs your permission to read system properties and to use JMX.) I set up some simple wrapper classes to handle the JMS calls. Nothing to write home about, but it does bundle up the boiler plate neatly. JavaFX doesn't do much with exception handling. I haven't spotted where uncaught exceptions go yet. (Maybe another blog there...) In any case, here's one of the four helper classes: package class Publisher { def connection = SomniJNDIBypass.IT.getTopicConnectionFactory().createTopicConnection(); def session = connection.createTopicSession(false,Session.AUTO_ACKNOWLEDGE); public-init var topicName : String; var publisher : TopicPublisher; init { def topic = SomniJNDIBypass.IT.getTopic(topicName); publisher = session.createPublisher(topic); connection.start(); } package function publishObject(object : Serializable) { var message = session.createObjectMessage(); message.setObject(object); publisher.publish(message); } package function close() { publisher.close(); session.close(); connection.close(); } } Earth To Mars Once I'd typed the boilerplate, publishing events when something changed was easy with on replace. Here's what happens in the server after a client finds a safe cell: package class Game { ... var safeTestedAddresses = [] on replace oldValue { def address = safeTestedAddresses[sizeof safeTestedAddresses - 1] as Address2D; def cell : HexBoards.ServerSafeCell = board.getCell(address) as HexBoards.ServerSafeCell; def event = Events.SafeCellTestedEvent { address: address; mineNeighborCount: cell.minesTouched; } publisher.publishObject(event); }; ... } Receiving Events... "Oh, Crap... Alien Thread" Inbound messages seemed like they'd be just as easy. They kind of worked in JavaFX 1.1, although I saw some screen twitching that reminded me of trying to run Swing-based code on the wrong thread. JavaFX 1.2 seems to spike the whole works and just did nothing -- no error message, just not responsive. I asked Josh for some help, and he sent this reply: All JavaFX stuff happens on the GUI thread by default. The exceptions are APIs which do threading for you, such as loading an image in the background. If you create your own (Java) Thread then you are on your own. We won't stop you but if you touch some JavaFX structures some weird things may happen. If you need to do some non GUI work in a different thread (talk to the network, compute some calculation, etc.) then you should do it in Java and use a callback to get back into the JavaFX side. You can either use the usual Swing way, SwingUtilities.invokeLater(), or use the new FX.deferLater function. Since we have function references in JavaFX this sort of callback works quite well. Just before I got that response, I found this two-year-old email from Tom Ball: Part 2 is to come up with a replacement for "do later". The canonical use case for "do later" is "oh, crap, I got called back in some other thread that isn't the EDT, get me to the EDT!" This comes about because you may implement an interface that represents a callback, and the callback happens in the wrong thread. In that case, the body of "do later" should really be the whole method, since you don't want to be touching any data from the alien thread. Aliens Among Us I normally prefer receive()s in my own threads to MessageListeners, but I didn't see a good way to use receive() or even receiveNoWait() without either polling or locking down the graphics thread with a blocking call. Using FX.deferAction() inside a MessageListener was pretty easy, and everything flowed from there: package class TestMessageListener extends MessageListener { var board : HexBoards.ClientMineBoard; //On a JMS Thread. Oh crap. override function onMessage(message : Message) { //Get back to the GUI thread before something bad happens. FX.deferAction(function() { def event : Events.CellTestedEvent = (message as ObjectMessage).getObject() as Events.CellTestedEvent; board.processEvent(event); }); } } The World Is Not Enough JavaFX is already doing some event-based programming in the background, single-threaded, on the graphics thread, using its single queue. The reactive data model is great, so long as it can live on the graphics thread along with everything else, without bogging things down. But bogging down the graphics thread was always one of the risks in AWT and Swing. JavaFX doesn't save us from that. Simon Morris posted an approach for building very clean parsers in JavaFX. If the program is only about parsing, that should work well. However, if you need the graphics thread for graphics, your JavaFX program might sputter or jam during the parse, or any other big computation or big i/o operation. World on a Thread Osvaldo Pinali posted a blog with a postscript about the power of automatic propagation through bind. Fabrizio Giudici's concerns about encapsulation I think are misplaced.* The great thing about bind is that when you create your objects' code, you don't have to predict how those objects will be used and build the corresponding boilerplate. Someone later uses bind when they want an update, binding to the fields they care about. It's getting back to OO's forgotten roots in message-passing, and taking a step beyond. Instead of being limited to API provided by a developer, you ask an object to send a message when something you care about changes. Osvaldo talks about his days in constraint programming. Propagation in constraint programming was tricky to get right. Mixing concurrency and propagation is even tricker. JavaFX solves this problem by only propagating changes on the graphics thread, alongside all the other graphics work. It can't take advantage of multiple threads and multiple cores; it can't dedicate one core to keep graphics responsive and use the rest for computation and i/o. The tail end of Tom Ball's email lays out a long term goal: Part 3 (to be deferred for a while) involves creating a functional subset of FX that can be safely invoked in threads other than the EDT. I hold out some hope that the "valueof" operator discussed this week (in the context of holding some variables constant in bind expressions) would provide the key: that an "async closure" would be a closure which could not have the side effect of reading or writing FX attributes. Instead, at the time the closure was created, the appropriate values would have to be copied with "valueof", so that the closure was operating on local copies. The goal is to create FX code that can't touch arbitrary application state, but instead copies what it needs. Josh says, We have basically done parts 1 and 2 of Tom's plan. ... Part 3, a threadsafe functional subset of the language hasn't been done yet." Tom's description of where they're going implies that the graphics thread is going to control all the data and hand copies off to other threads via some programming construct. It'd be better, but will still be limited by flows in and out of a single thread. * Fabrizio has a solid practical point, though. His example shows that some part of control flow and mutability is out of kilter. I'll keep my binds on defs, one-way only, for now. World of Tomorrow Osvaldo Pinali's blog's main point was to open a discussion about what we need next in JavaFX. I think the ability to use JavaFX for big jobs beyond user interface work should be high on the list. FX.deferAction() is already using the graphics event queue; one queue already exists. One easy way to gain some concurrency is with events flowing into multiple queues from wherever, processed by a thread dedicated to each queue. The complexity comes in when figuring out which objects live on which queues. JavaFX right now makes an easy choice; there's only one queue for one world of data structures. The other extreme, one thread per object, is too resource-heavy to sustain. I'd like the power to segregate my objects into groups that I define. For example, I'd like to put the user interface of a game on one thread, the game's logic on a second thread, and large computes and i/o operations on other threads. That would give JavaFX unique power in two domains: user interfaces and scalable propagation. - Login or register to post comments - Printer-friendly version - dwalend's blog - 6408 reads by fabriziogiudici - 2009-07-05 23:54It took a while for me to understand what's happening, but it looks like FireFox 3.5 is pretty bad screwed out. While the first time I accessed this blog was with 3.0, and I could see the applet, with 3.5 I can't see the applet, and even worse the whole navigator is broken (can't edit the URL in the navigation bar, can't write anything in this comment box; I have been forced to use Safari). I think it's the fault of FireFox 3.5, since it is plagued by a high number of bugs. Back to the topic. "Osvaldo Pinali posted a blog with a postscript about the power of automatic propagation through bind. Fabrizio Giudici's concerns about encapsulation I think are misplaced.*" Well, it depends. Seeing binding as message passing is a great idea as you can define the bindable structures as messages independent of the internal state of the object. Binding to internal state can be a trouble, and that's where my concern is. As usual, we have to distinguish from the binding feature (a powerful tool) and the use people make of it. by dwalend - 2009-07-04 08:09... queue.put() might be easy to do... I'll try the experiment. by dwalend - 2009-07-03 19:34Hi aleixmr, I'm not that bothered by the asynchrony. The user is always on his own thread, so asynchrony is part of the UI puzzle. I like the "one thread for pixels" approach (although "one thread per display" might be better). Driving via interrupts (pre-Mac-OSX for example) I found much harder to get my head around. I think the FX.deferAction is clear, clean and very compact. I just wish we had queue.put(function) instead. My long-lived complaint about having to use the graphics event thread was that the parts we have to use looked nothing like the rest of the system. There's no non-graphics event queue, non-graphics worker, or non-graphics invokeLater(), so the other concurrency puzzles get solved differently from Swing's. Foxtrot standardizes some other options for Swing UIs, but (1) you still have to learn all of Swing's rules to use it, (2) you have to learn Foxtrot's additional rules, and (3) it's still UI-only. JavaFX brings us this very profound feature -- easy automatic propagation -- but the feature seems pinned to the graphics thread. Tantalizing. by aleixmr - 2009-07-03 10:56Hey !! that's pretty crazy, still 10 years and we need to do things on invokelater !!! I don't like that asynchronous solution coz it gets your code messy and difficult to read ! I like the foxtrot aproach I use it for swing and works like a charm, synchronous solution !!! (Please don't get me wrong, asynchronous callback is needed to !) by dwalend - 2009-07-02 05:43Now working on firefox - windows. Looks like the problem with linux is that the OS thinks its still March and I signed the .jars a few days ago. Dave by dwalend - 2009-07-02 05:10whp, Which browser? Which OS? Which versions? I've seen a lot of variability by OS and browser. Works for me on Safari - MacOS, Firefox - MacOS (asks for a password), IE - Windows. Haven't seen it work on firefox - windows or anything - linux yet. Does it work now? (I updated the jnlp. It had a file URL. Now points somewhere more reasonable. Maybe a "feature" for the NetBeans plugin to add.) Thanks, Dave by whp - 2009-07-02 03:31Exception: java.io.FileNotFoundException: JNLP file error:. Please make sure the file exists and check if "codebase" and "href" in the JNLP file are correct. JavaFX deployment bites again. by dwalend - 2009-07-06 17:52Fabrizio, Thanks for the report on firefox. I've realized "applets in Java in N browsers on M operating systems" is too many wheels in wheels to rely on. I'll try jnlp next time so my work only has to spin on top of Java and the OS. var, def, public-init and bind play together in some interesting ways. bind with def is OK. It seems not so much encapsulation as clashing mutators. I haven't thought through two-way binds or bound functions yet. (Heck, this is my first javafx project.) Do you see something similar or something else?
https://weblogs.java.net/node/242548/atom/feed
CC-MAIN-2015-40
en
refinedweb
Agenda See also: IRC log <trackbot> Date: 04 May 2010 trackbot, start meeting <trackbot> Meeting: XML Security Working Group Teleconference <trackbot> Date: 04 May 2010 <scribe> Scribe: tlr <esimon2> My phone does not seem to be working; might be chat only for me. <fjh> Privacy Workshop <fjh> fjh: privacy workshop -- privacy on the web, apis, etc, 12/13 July in London <fjh> Widget Signature proposed updates (Last Call review) fjh: widget signature; MArcos did revision based on test work <fjh> fjh: some changes to normative requirements <fjh> fjh: there'll be a decision on the Widgets call on Thursday <fjh> <esimon2> * Ed now has a phone connection <fjh> Approve 27 April minutes <Cynthia> <fjh> RESOLUTION: 27 April minutes approved fjh: pratik made some changes; later in the agenda <fjh> fjh: made the request <fjh> tlr: it's on the agenda fjh: sent some comments, magnus looked <fjh> <fjh> fjh: confused about ??info magnus: if provide one value, need to provide the other one, too <fjh> PartyVInfo="" should have value in example <fjh> actoin: magnus to update XML Encryption 1.1 with changes from Frederick and his proposed changes, , also to give value to PartyVInfo="" should have value in example <fjh> ACTION: magnus to update XML Encryption 1.1 with changes from Frederick and his proposed changes, , also to give value to PartyVInfo="" should have value in example [recorded in] <trackbot> Created ACTION-566 - Update XML Encryption 1.1 with changes from Frederick and his proposed changes, , also to give value to PartyVInfo="" should have value in example [on Magnus Nystrom - due 2010-05-11]. fjh: would like to get closure on 1.1 soon-ish <fjh> fjh: there were some places where derived keys could be used; Magnus looked at that as well, agreed <fjh> fjh: chatted about shortname and namespace ... namespace in /2009/? tlr: doesn't really matter, either fjh: hearing no objection, leave them as they are? edsimon: associate with a dated specification tlr: if there are implementations around and make breaking changes to spec, then makes sense to change namespaces magnus: /2009/gh -> /2010/ghc? fjh: implementations? -- silence -- <scribe> ACTION: magnus to change generic hybrid ciphers namespace to /2010/ghc [recorded in] <trackbot> Created ACTION-567 - Change generic hybrid ciphers namespace to /2010/ghc [on Magnus Nystrom - due 2010-05-11]. fjh: EC note? magnus: spec is optiona tlr: well, optional to implementers of signature, but there are implementation requirements in here <fjh> <Cynthia> please add some of this discussion to the meeting minutes tlr: so, does this spec need to include a mandatory set of algorithms? magnus: nice security properties; future-proofing. <Cynthia> I think it may fjh: would like to see this move forward tlr: so, in Enc and Sig, we're talking about specific curves. Here we don't. bal: we're indeed not talking about curves here. <fjh> is ghc simply an application of the ecc in dsig? bal: it's a pre-stage to EC, right? <fjh> or xenc bal: defining encapsulation in a way that key derivation functions get applied <fjh> so this may be a non-issue for IPR tlr: so, we have had concerns about what's in Encryption and Signature. If this either simply uses what's in those specs, or is entirely different, and no concerns come up, then don't see why we can't move ahead. <scribe> ACTION: magnus and bal to review relationship between ghc and material in encryption [recorded in] <trackbot> Created ACTION-568 - And bal to review relationship between ghc and material in encryption [on Magnus Nystrom - due 2010-05-11]. fjh: in encryption, section 6 marked as informative, schema example as non-normative magnus: there was one other suggestion -- versioning section of the doc says that the namespace URI is used as a prefix for identifiers <fjh> ACTION: implement changes to Generic Hybrid Ciphers as proposed by Frederick, and revisions from Magnus, [recorded in] <trackbot> Sorry, couldn't find user - implement <fjh> ACTION: magnus implement changes to Generic Hybrid Ciphers as proposed by Frederick, and revisions from Magnus, [recorded in] <trackbot> Created ACTION-569 - Implement changes to Generic Hybrid Ciphers as proposed by Frederick, and revisions from Magnus, [on Magnus Nystrom - due 2010-05-11]. <fjh> Replace "This namespace is also used as the prefix for identifiers defined by this <fjh> > specification." <fjh> The use of the gh prefix in this document is an editorial convention <fjh> > - other prefix values could be associated with the namespace <scribe> ACTION: thomas to revise gh namespace section [recorded in] <trackbot> Created ACTION-570 - Revise gh namespace section [on Thomas Roessler - due 2010-05-11]. ACTION-570 due today <trackbot> ACTION-570 Revise gh namespace section due date now today <fjh> Last Call planning - Errata Status fjh: looks like they're all in the doc <fjh> <fjh> <fjh> ISSUE-91? <trackbot> ISSUE-91 -- ECC can't be REQUIRED -- OPEN <trackbot> ISSUE-178? <trackbot> ISSUE-178 -- Highlight additional text constraints on XSD schema as such. -- OPEN <trackbot> <fjh> suggest we only apply this to new 2.0 work fjh: what do people think? RESOLUTION: will not address issue-178 in 1.1 work <fjh> issue=180? <fjh> issue-180? <trackbot> ISSUE-180 -- Section 8 identifies Joseph Reagle as the contact for the XML Encryption media type. This needs to be updated, perhaps to a generic identity? -- OPEN <trackbot> ISSUE-178: will only address in 2.0 work <trackbot> ISSUE-178 Highlight additional text constraints on XSD schema as such. notes added <fjh> tlr notes that changes will not invalidate any review, so should not be an issue for going to Last Call <fjh> action-11? <trackbot> ACTION-11 -- Frederick Hirsch to ask for XPath 2.0 presentation to group -- due 2008-07-24 -- CLOSED <trackbot> ACTION-511? <trackbot> ACTION-511 -- Thomas Roessler to propose next steps on media type registration (ISSUE-180) -- due 2010-04-30 -- OPEN <trackbot> <fjh> action-511? <trackbot> ACTION-511 -- Thomas Roessler to propose next steps on media type registration (ISSUE-180) -- due 2010-04-30 -- OPEN <trackbot> <fjh> ISSUE-192? <trackbot> ISSUE-192 -- Namespaces for DerivedKey and pbkdf2 outside of xenc11 namespace -- OPEN <trackbot> issue-192? <trackbot> ISSUE-192 -- Namespaces for DerivedKey and pbkdf2 outside of xenc11 namespace -- OPEN <trackbot> <fjh> issue-192 closed <trackbot> ISSUE-192 Namespaces for DerivedKey and pbkdf2 outside of xenc11 namespace closed <fjh> issue-194? <trackbot> ISSUE-194 -- Is "the ECPublicKey element" in Encryption 1.1 and Signature 1.1 actually the ECKeyValue element? -- OPEN <trackbot> <fjh> issue-194 closed <trackbot> ISSUE-194 Is "the ECPublicKey element" in Encryption 1.1 and Signature 1.1 actually the ECKeyValue element? closed <fjh> issue-194 ECPublicKey element change to ECKeyValue in document <fjh> issue-194: ECPublicKey element change to ECKeyValue in document <trackbot> ISSUE-194 Is "the ECPublicKey element" in Encryption 1.1 and Signature 1.1 actually the ECKeyValue element? notes added issue-138? <trackbot> ISSUE-138 -- What interoperability and security issues arise out of schema validation behavior? -- OPEN <trackbot> scantor: people have run into issues with well-formedness ... lost namespaces etc ... but think schema validation problems are sort of a superset ... doesn't have the same infoset modifying issues which are what comes up in signature esimon2: that issue is mine <fjh> note issue-138 only applies to xml signature esimon2: this is a long-term issue <fjh> suggest this is a 2.0 issue tlr: associate with signature 2.0? esimon2: would associate with any version tlr: if this is an issue against 1.1, then shouldn't have gone to last call fjh: is this an issue that blocks 1.1? ... I'd argue that it isn't esimon2: right tlr: so we track this as an issue against 2.0? esimon: all theoretical at this point, haven't proven at this point ... similar to the namespace questions ... made some progress thanks to Meiko and others fjh: so, we deal with this in 2.0, *if* we deal with it, which depends on us figuring it out scantor: neither spec addresses schema validation, therefore out of scope esimon2: somewhat agree fjh: you don't mind 1.1 becoming Rec without addressing this? esimon2: I don't object against that, right PROPOSED RESOLUTION: Won't address ISSUE-138 in XML Signature 1.1 work RESOLUTION: We won't address ISSUE-138 in XML Signature 1.1 work issue-138? <trackbot> ISSUE-138 -- What interoperability and security issues arise out of schema validation behavior? -- OPEN <trackbot> action-280? <trackbot> ACTION-280 -- Magnus Nyström to produce test cases for derived keys -- due 2009-05-19 -- OPEN <trackbot> fjh: this isn't critical for LC. But what's the status? magnus: developed test cases, but didn't develop code to build actual data fjh: shouldn't hold up last call; keeping open magnus: shouldn't affect LC; not sure when I'll get to it ACTION-452? <trackbot> ACTION-452 -- Scott Cantor to review the XML ENC v1.1 document -- due 2009-11-24 -- OPEN <trackbot> scantor: was supposed to review EXI changes and application implications. Think events have moved on <fjh> action-452: focused on EXI <trackbot> ACTION-452 Review the XML ENC v1.1 document notes added ACTION-452 closed <trackbot> ACTION-452 Review the XML ENC v1.1 document closed ACTION-238? <trackbot> ACTION-238 -- Thomas Roessler to update the proposal associated with ACTION-222 and send to list. -- due 2010-05-31 -- OPEN <trackbot> <fjh> action-222? <trackbot> ACTION-222 -- Konrad Lanz to make proposal RIPE algorithms -- due 2009-03-03 -- CLOSED <trackbot> <fjh> action-238: for possible algorithms RFC, not for XML Encryption specification <trackbot> ACTION-238 Update the proposal associated with ACTION-222 and send to list. notes added ACTION-515? <trackbot> ACTION-515 -- Aldrin J D'Souza to propose the schema addition for issue-186 -- due 2010-02-23 -- OPEN <trackbot> action-515 closed <trackbot> ACTION-515 Propose the schema addition for issue-186 closed> <fjh>> ACTION-515: <trackbot> ACTION-515 Propose the schema addition for issue-186 notes added tlr: the text proposed in ACTION-515 indeed has been added to the spec, so ACTION-533 done ACTION-515 closed <trackbot> ACTION-515 Propose the schema addition for issue-186 closed ACTION-533 closed <trackbot> ACTION-533 Implement proposed change to XML Encryption 1.1 per proposal to resolve ISSUE-186 closed ISSUE-186 closed <trackbot> ISSUE-186 What is the normative content of section 5.4.2? (PBKDF2) closed fjh: there are some more things about test cases and interop; not critical path ... only actions that are relevant now are the three on magnus tlr: ... and the one on me to fix the namespace piece fjh: another item, references. ... anybody have a moment to scan through references and see if they're still up to date -- silence -- <scribe> ACTION: fjh to review references in generic hybrids and encryption 1.1 [recorded in] <trackbot> Created ACTION-571 - Review references in generic hybrids and encryption 1.1 [on Frederick Hirsch - due 2010-05-11]. <scribe> ACTION: frederick to get encryption 1.1 and GHC pubrules-ready [recorded in] <trackbot> Created ACTION-572 - Get encryption 1.1 and GHC pubrules-ready [on Frederick Hirsch - due 2010-05-11]. <fjh> plan to Agree to Last Call at next week's call, 11 May fjh: plan is to resolve on Last Call next week ... publish next Thursday ... have four week last call tlr: checking IETF meeting dates -- late July ... that's much later; suggest not to wait fjh: send note to EXI chairs on timing <fjh> fjh: any worries about the schema files? tlr: well, if we make a change to the GH namespace, that also needs to happen in any schema files where it shows up fjh: mentioned Pratik's editorial update in beginning of call <fjh> fjh: pratik, did you want to walk us through that? action-550? <trackbot> ACTION-550 -- Pratik Datta to implement editorial changes from scott and ed -- due 2010-04-20 -- PENDINGREVIEW <trackbot> action-554? <trackbot> ACTION-554 -- Pratik Datta to review c14n comments from meiko, incorporate into doc, flagging with email any concerns for discussion -- due 2010-04-27 -- PENDINGREVIEW <trackbot> action-550 closed <trackbot> ACTION-550 Implement editorial changes from scott and ed closed action-554 closed <trackbot> ACTION-554 Review c14n comments from meiko, incorporate into doc, flagging with email any concerns for discussion closed <fjh> ACTION-561 closed <trackbot> ACTION-561 Review ISSUE-196 closed tlr: see e-mail. Don't think it's worth conf call time. RESOLUTION: XML/EXI URIs from ACTION-561 accepted <esimon2> As per ACTION-559, I checked and XML Schema does NOT support entities like DTDs do. If you want to use entities as one did with DTDs, you have to use a DTD. pratik: ignoreDTD parameter action-559? <trackbot> ACTION-559 -- Ed Simon to investigate schema vs. DTD -- due 2010-05-04 -- OPEN <trackbot> pdatta: ?? <fjh> Ed notes XML Schemas does not support entities like DTDs did, so if you want them you need DTDs <fjh> discussion on ignoreDTD parameter and whether we need another name... esimon2: may need to rename this. But need to be clearer what this parameter is about. ... is this about entity processing only? scantor: right, because defaults come into play as well ... i.e., processing default attributes <fjh> is parameter really about entities? scantor: strongly in favor of leaving schema out of this discussion ... some discussion about merging these concepts, and that would be a mistake ... is this about ignoring the DTD, or just about entities ... parser settings that one can mix and match <fjh> processEntities? processDTDEntities? scantor: look at use cases, would be useful to know whether old c14n algs do anything with issue ... is DTD processing just implied now issue-183? <trackbot> ISSUE-183 -- Constrain 2.0 SignedInfo canonicalization choice for 2.0 model? -- OPEN <trackbot> esimon: this might also apply to issue-183? pdatta: c14n does process DTD scantor: treat as ignoring DTD? esimon: think I disagree about importance of processing schema ... security concern scantor: was saying it isn't a good idea to merge schema processing behavior into DTD stuff ... leave this as DTD centric option <fjh> tlr notes entities are defined in core XML spec <fjh> tlr advocates c14n2 defined on infoset <fjh> pratik notes parameter is included to address security concerns scantor: original c14N talks about entity references s/tlr notes they do not appear in infoset/tlr: do they even make an appearance in the infoset?/ <Zakim> fjh, you wanted to ask if we are talking about separate parameter for entity processing only scantor: for simplicity reasons, would be happy to treat ignore DTD as just ignore DTD, don't go deeper ... reason for the feature are entities ... but defaults in DTDs cause problems similar to schema correction: entities *do* show up in the infoset. I was wrong. fjh: we want those parameters, but don't like admitting? <fjh> pratik notes two parameters, ignoreDTD , expandEntities <scantor> and c14n 1.x says "Character and parsed entity references are replaced", as Pratik was saying pratik: default values, entity expansion. If only two parameters, it's fine to have two separate ones <fjh> pratik agrees two parameters, one to ignore entities, one to ignore default values <fjh> tlr says that if ignoring entities, then not validating so default values ignored <fjh> others note assumptions are risky and confusing <scribe> ACTION: scantor to create issue on DTDs, entities, defaulting, schema validation for C14N 2.0 [recorded in] <trackbot> Created ACTION-573 - Create issue on DTDs, entities, defaulting, schema validation for C14N 2.0 [on Scott Cantor - due 2010-05-11]. <fjh> please include in issue summary of issue and proposed resolution ACTION-573: review of XML parsing, XML Infoset, C14N 1.1 needed <trackbot> ACTION-573 Create issue on DTDs, entities, defaulting, schema validation for C14N 2.0 notes added pdatta: also, note ID attribute problems edsimon: how does that affect c14? ... that sounds like validation aspect scantor: affects signature, not c14n esimon: schema validation in XML? scantor: no such thing in core XML 1.0 + XML Namespaces + ... esimon: But XML Schema defines validation ... but XML signature we use XML schema to define what an XML Signature looks like scantor: yes. But schema validation isn't part of the processing model. esimon: but the schema is normative part of the spec scantor: only to define the grammar esimon: umh fjh: back to Pratik's e-mail <fjh> <scantor> proposal on last call to combine the xml:* attribute options in c14n2 <scantor> fine with me <fjh> <esimon2> * Can Zakim distinguish between who is making noise and who is talking (where hopefully the two are not the same)? Anyway, the noise I'm hearing does not seem to originate from my office which is quite quiet. tlr: so, this is about having one parameter for the *heritable* xml: ... parameters? fjh: yes scantor: issue that had been raising was use case for treating them differently <fjh> <fjh> from pratik email: <fjh> . >> Would it be clean enough (and simpler) to collapse the the xmlXAncestors parameters into a single parameter and just apply "combine" to only xml:base? Is there a need to use different rules for different attributes? <fjh> Seems like the various "modes" sort of go together given how the earlier algorithms work. <fjh> How about a new parameter "xmlAncestors" whose values can be <fjh> "inheritAll" : Simulate Canonical XML 1.0 behavior, which inherits all the attributes <fjh> "inherit" : Simulate the Canonical XML 1.1 behavior, where you inherit the inheritable attributes and combine the xml:base <fjh> "none" : Simulate Exc Canonical xML 1.0 behavior tlr: realization during c14n 1.1 work was that xml:base is heritable, but more difficult than just copying what's in the serialization ... why is the 1.0 behavior useful? ... we know it's wrong fjh: for backward compatibility scantor: goal to rewrite all existing canonicalization into a version of 2.0 tlr: Eeeek! pdatta: yes, that's the idea scantor: idea to use 2.0 implementations for everything <scantor> the intent of the full range of options was to enable a 2.0 impl to be given options that allow for output equivalent to before tlr: I can see how the exclusive 1.0 behavior is sane. The inclusive 1.1 behavior is ok. The inclusive 1.0 behavior is broken. Now we're adding an option that must be supported and forces everybody to implement the inclusive 1.0 behavior?! pdatta: convinced that we should remove inheritAll ... maybe something an implementation might do internally ... remove inheritAll <fjh> suggest pratik summarize proposal in email pdatta: qnames *noise* <fjh> scott plans to provide proposal meiko: this is what we talked about in the context of the include/excludeXPath expressions scantor: think there is a general solution to the harder problem ... problem with qnames in documents is mostly qname-valued nodes ... rare that they show up in the middle of text <fjh> <fjh> ACTION-562? <trackbot> ACTION-562 -- Meiko Jensen to provide streaming-canonicalization proposal -- due 2010-05-04 -- PENDINGREVIEW <trackbot> action-562 closed <trackbot> ACTION-562 Provide streaming-canonicalization proposal closed meiko: named parameter sets; strawman; without deeper investigation of issues, it's not worth discussing in 10 minutes ... if somebody wants to review, that would be great; otherwise, suggest stream based parameter set be put in spec fjh: so this is a proposal to pick and choose parameters for streaming <Cynthia1> I haven't read it yet, but am interested in the specific applications for this tlr: interested in Pratik's review of this proposal pdatta: should propose trimTextNodes in streaming mode, too meiko: pdatta suspects trimTextNodes true might be more work. Will follow up on list, since we're running out of time. pdatta: +1 meiko: digest-based prefix rewriting -- base64 might contain '=', which isn't allowed for namespaces <fjh> tlr notes could use a different encoding meiko: you'd have to escape tlr: ... or use different encoding -- base32 or so scantor: would be nice to simplify digesting mechanism, including encoding ... would like to avoid having ton of steps that people might screw up ... I had proposed the hex-encoded digest ... probably with underscore to get around leading digits <scribe> ACTION: scantor to send his proposal on prefix rewriting to the list [recorded in] <trackbot> Created ACTION-574 - Send his proposal on prefix rewriting to the list [on Scott Cantor - due 2010-05-11]. -- adjourned --
http://www.w3.org/2010/05/04-xmlsec-minutes.html
CC-MAIN-2015-40
en
refinedweb
C# Interfaces, what are they and why use them? What is an Interface First. An interface is a contract between itself and any class that implements it. This contract states that any class that implements the interface will implement the interface's properties, methods and/or events. An interface contains no implementation, only the signatures of the functionality the interface provides. An interface can contain signatures of methods, properties, indexers & events. You can think of an interface as an abstract class with the implementation stripped out. An interface doesn't actually do anything, like a class or abstract class, it merely defines what a class that implements it will do. An interface can also inherit/implement other interfaces. Why use interfaces So if an interface implements no functionality then why should we use them? Using interface based design concept provides loose coupling, component-based programming, easier maintainability, makes your code base more scalable and makes code reuse much more accessible because implementation is separated from the interface. Interfaces add a plug and play like architecture into your applications. Interfaces help define a contract (agreement or blueprint, however you chose to define it), between your application and other objects. This indicates what sort of methods, properties and events are exposed by an object. For example let's take a vehicle. All vehicles have similar items, but are different enough that we could design an interface that holds all the common items of a vehicle. Some vehicles have 2 wheels, some have 4 wheels and can even have 1 wheel, though these are differences they have something in common, they're all movable, they all have some sort of engine, they all have doors, but each of these items may vary. So we can create an interface of a vehicle that has these properties, then we inherit from that interface to implement it. While wheels, doors and engines are different they all rely on the same interface (I sure hope this is making sense). Interfaces allow us to create nice layouts for what a class is going to implement. Because of the guarantee the interface gives us, when many components use the same interface it allows us to easily interchange one component for another which is using the same interface. Dynamic programs begin to form easily from this. An interface is a contract that defines the signature of some piece of functionality. So here's a simple example of an interface and implementing it. From the above example we're created a IVehicle interface that looks like this namespace InterfaceExample{ public interface IVehicle { int Doors { get; set; } int Wheels { get; set; } Color VehicleColor { get; set; } int TopSpeed { get; set; } int Cylinders { get; set; } int CurrentSpeed { get; } string DisplayTopSpeed(); void Accelerate(int step); }} Now we have our vehicle blueprint, and all classes that implement it must implement the items in our interface, whether it be a motorcycle, car, or truck class we know that all will contain the same functionality. Now for a sample implementation, in this example we'll create a motorcycle class that implements our IVehicle class. This class will contains everything we have defined in our interface namespace InterfaceExample{ public class Motorcycle : IVehicle { private int _currentSpeed = 0; public int Doors { get; set; } public int Wheels { get; set; } public Color VehicleColor { get; set; } public int TopSpeed { get; set; } public int HorsePower { get; set; } public int Cylinders { get; set; } public int CurrentSpeed { get { return _currentSpeed; } } public Motorcycle(int doors, int wheels, Color color, int topSpeed, int horsePower, int cylinders, int currentSpeed) { this.Doors = doors; this.Wheels = wheels; this.VehicleColor = color; this.TopSpeed = topSpeed; this.HorsePower = horsePower; this.Cylinders = cylinders; this._currentSpeed = currentSpeed; } public string DisplayTopSpeed() { return "Top speed is: " + this.TopSpeed; } public void Accelerate(int step) { this._currentSpeed += step; } } Now in the same application we could interchange our Motorcycle class with a Truck class or a Car class and they will all have the same base functionality, that of a IVehicle. So as you can see interface based development can make a developers life much easier, and our applications much cleaner, maintainable and extensible.
https://dzone.com/articles/c-interfaces-what-are-they-and
CC-MAIN-2015-40
en
refinedweb
Mysql & java - JDBC to connect to mysql 5.1 using java. But it shows error about: Class.forName...; String url = "jdbc:mysql://localhost:3306/"; String dbName... on JDBC visit to : java error - JDBC ? import java.sql.*; public class MysqlConnect{ public static void main(String[] args) { System.out.println("MySQL Connect Example."); Connection conn = null; String url = "jdbc:mysql://localhost:3306/"; String dbName class not found error - JDBC class not found error thanks for your response. please clarify the following doubts. i am having the specified mysql connector jar file. where that jar file has to be placed. also does the jdbc driver need to be installed jdbc - JDBC jdbc import java.sql.*; public class MysqlConnect{ public static void main(String[] args) { System.out.println("MySQL Connect Example."); Connection conn = null; String url = "jdbc:mysql://localhost:3306 error - JDBC conn = null; String url = "jdbc:oracle:thin:@localhost:1521:xe"; String...(); } } } i wrote any jdbc program .it won't work in my system. but it is complied i...; Hi friend, Please add mysql connector jar file. Read for more JDBC - JDBC JDBC JDBC driver class not found:com.mysql.jdbc.Driver..... Am getting an error like this...... i have added the jar files for mysql inside...:// Thanks Java jdbc class.forName error - JDBC Java jdbc class.forName error HI: how is possible that my JDBC connection to an MYSQL instance works inside the IDE (Netbeans) and not when I start it from the jar file? Could some please give some pointers please? Higly jdbc - JDBC ("com.mysql.jdbc.Driver"); Connection con=DriverManager.getConnection("jdbc:mysql://localhost... con=DriverManager.getConnection("jdbc:mysql://localhost:3306/ram","root","root... MySql visit to : Thanks ClassNotFound error - JDBC it shows an exception.... Can i kno y is dat error comes??? Is i want 2 download anything else than mysql n jdk1.6.... Clarify ma doubt soon.... Hi friend, I think mysql connector problem. please add mysql java runtime error - JDBC java runtime error when i m running my jdbc program it is giving... is not in your lib "mysql-connector-java-5.0.6-bin.jar".Plz check it. For more information on JDBC visit to : Thanks MySQL connectivity - JDBC MySQL connectivity hi all, i am not able to connect Mysql to java ..as i connect it ..i am getting the error ..saying classNotFoundException....... thank u.. Hi friend, This error occur due Oracle Database error - JDBC = DriverManager.getConnection("jdbc:mysql://localhost:3306/register", "root", "root"); Statement compilation error - JDBC jdbc compilation error java.lang.Exception: Problem in call_select when i am executing the program i am getting the above error. how can i.... Read for more information. Thanks regarding jdbc - JDBC ").newInstance(); con = DriverManager.getConnection("jdbc:mysql:///test... on JDBC visit to :...("Successfully connected to MySQL server..."); } catch(Exception e MYSQL and SERVLETS - JDBC MYSQL and SERVLETS I did addition ,deletion of data in mysql using... program .When I click this add menu it has to be added the data in table .when i click the delete menu ,it has to be deleted the data from table .Anyone help me problem - JDBC mysql problem hai friends i have some problem with image storing in mysql. i.e while i am using image(blob) for insert the image it says out of bound of size error. please provide the logic for storing normal maxi size. SERVLETS AND MYSQL - JDBC -application.shtml.This link has the same program .But it has edit and delete button in the table... the table has to displayed.Help me .Awaiting for ur reply jdbc jdbc I already create the connection.Again it will show the same error. Hi, Let's know what error you are getting? Thanks JDBC = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "root jdbc ("sun.jdbc.odbc.JdbcOdbcDriver"); ^ I got this error.can any one tell me,why this error...{ Connection con; con=DriverManager.getConnection("jdbc:odbc:student JDBC ("com.mysql.jdbc.Driver"); Connection con = DriverManager.getConnection("jdbc:mysql jdbc = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "root mysql - JDBC =DriverManager.getConnection("jdbc:mysql://localhost:3306/ram","root","root...jdbc mysql import java.sql.*; public class AllTableName... be problem in your mySql, the above code is working fine. first you check JDBC Components below describes how to run the JDBC program with MySql. JDBCExample.java...; Connection conn = null; String url = "jdbc:mysql... the JDBC drivers. Each driver has to be registered with this class. 4. getConnection jdbc - JDBC at Thanks Hi, You..."); Read at Thanks... management so i need how i can connect the pgm to database by using jdbc Error - JDBC "); Connection con=DriverManager.getConnection("jdbc:odbc:nico;integrated:Runtime error - Java Beginners : ClassNotFoundException com.mysql.jdbc.Driver SQLException No suitable driver found for jdbc:mysql...()); } con = DriverManager.getConnection(strURL, strUser, strPasswd);//"jdbc:mysql...JDBC:Runtime error Dear sir/madam i am facing a problem with jdbc void main(String[] args) { System.out.println("Inserting values in Mysql database table!"); Connection con = null; String url = "jdbc:mysql... implementing class. Hi friend, Example of JDBC Connection with Statement JDBC Training, Learn JDBC yourself with MySQL JDBC MySQL Tutorial JDBC Tutorials with MySQL Database... JDBC Connectivity Code in Java. JDBC Drive For Mysql... JDBC Drive for Mysql. JDBC Execute Query JDBC:Runtime error - Java Beginners = DriverManager.getConnection(strURL, strUser, strPasswd);//"jdbc:mysql://localhost...JDBC:Runtime error Dear sir/madam, thanks to respond for my request... found for mysql. I copied the mysql-connector-jave-5.0.8-bin.jar, mysql-connector JDBC - JDBC "); con = DriverManager.getConnection("jdbc:mysql://192.168.10.211...:// jdbc - JDBC in a database System.out.println("MySQL Connect Example."); Connection conn = null; String url = "jdbc:mysql://localhost:3306/"; String dbName jdbc - JDBC main(String[]args){ try{ Connection con = null; String url = "jdbc:mysql...(); Connection con = DriverManager.getConnection( "jdbc:mysql://localhost:3306/test...:// Thanks jdbc - JDBC = DriverManager.getConnection( "jdbc:mysql://localhost:3306/test", "root", "root...: Retrieve Image using Java jdbc - JDBC = null; String url = "jdbc:mysql://localhost:3306/"; String dbName.... Thanks java runtime error: JDBC code - Java Beginners java runtime error: JDBC code Hi i want to insert data into mysql...: Driver:org.gjt.mm.mysql.Driver url: jdbc:mysql://localhost... the problem and visit to : jdbc - JDBC Example!"); Connection con = null; String url = "jdbc:mysql://localhost...; String url = "jdbc:mysql://192.168.10.211:3306/amar"; String driver...(); } } } --------------------------------------------- Visit for more information: ("jdbc:mysql://localhost:3306/ram","root","root"); System.out.println("Connect... information. Thanks mysql jdbc connectivity mysql jdbc connectivity i want to connect retrieve data from mysql using jdbc run time error - JDBC jdbc run time error i m creating the table using thin driver it is showing the following run time error: Exception in thread "main...=DriverManager.getConnection("jdbc:opracle:thin:@ptpl:1521:oracle","scott","tiger java - JDBC = "com.mysql.jdbc.Driver"; 3.Creating a jdbc Connection String url = "jdbc:mysql://localhost:3306/"; String username = "root"; String password.... Creating a jdbc Statement object Statement st = conn.createStatement(); 5 ResultSetMetaData - JDBC !"); Connection con = null; String url = "jdbc:mysql://localhost:3306... in Database!"); Connection con = null; String url = "jdbc:mysql...(); } } } For more information on JDBC-Mysql visit to : JDBC CONNECTIVITY "); con = DriverManager.getConnection("jdbc:mysql://localhost:3306/abhi...JDBC CONNECTIVITY import java.sql.Connection; import... but when we run application i have an error message like this..." (""""""Java Tools connectivity - JDBC connectivity I hav MySQL 5.0, JDK 1.5, Tomcat 5.0 in my system when... error () that prevented it from fulfilling this request. exception..., Please check the mysql-connector-java-5.0.6-bin.jar file for Connection java - JDBC = "jdbc:mysql://localhost:3306/"; String dbName = "register"...Connection Database extends HTTP Servlets In my code.. an error occurs while creating Connection Database extends HTTP Servlets hi servlet-jdbc "); PrintWriter pw = response.getWriter(); String connectionURL = "jdbc:odbc:shweta...("Record has been inserted"); } else{ pw.println("failed..._info. i am getting 404 error that is requested resource not available Prepared statement JDBC MYSQL Prepared statement JDBC MYSQL How to create a prepared statement in JDBC using MYSQL? Actually, I am looking for an example of prepared statement. Selecting records using prepared statement in JDBC java.Sql - JDBC and check for any data in the data base. But after some time it shows the error that. Too many connections to the MySql database. what should i do...:// Thanks Hi friend, Read for more JDBC connection and SQL Query - JDBC to make it. even the code i wrote below has some error with quotation marks...JDBC connection and SQL Query Hi, I'm reading a all files one after the other in a directory in java. storing the values in an array of string database connectivity - JDBC ."); Connection conn = null; String url = "jdbc:mysql://localhost:3306...database connectivity example java code for connecting Mysql database using java Hi friend, Code for connecting Mysql database using Connectivity with sql in detail - JDBC ; String url = "jdbc:mysql://localhost:3306/"; String dbName... the following link:.... Thankyou. Hi Friend, Put mysql-connector insertuploadimahe - JDBC script below.(this script may vary if you use other data base such as oracle ,mysql..."); con = DriverManager.getConnection("jdbc:edb://192.168.1.136:5444/testhr...-specific error occurs * @throws IOException if an I/O error occurs java - JDBC ; String url = "jdbc:mysql://192.168.10.211:3306/"; String db = "amar...(); con.close(); } catch(SQLException sqe) { System.out.println("SQl error"); } catch(ClassNotFoundException cnf) { System.out.println("Class not found error Accessing database with JDBC via Java ."); Connection connnection = null; String url = "jdbc:mysql://localhost:3306...Accessing database with JDBC via Java How to access a database with JDBC via Java application? Accessing database with JDBC through maven mysql jdbc driver maven mysql jdbc driver How to add the maven mysql jdbc driver dependency in the pom.xml file of a maven based application? Hi, Its...;groupId>mysql</groupId> <artifactId>mysql-connector-java<   Error! .bigRed {font how to resolve this JDBC Error? how to resolve this JDBC Error? i am trying to Exceute this code... = "oracle.jdbc.driver.OracleDriver"; String dburl = "jdbc:oracle:oci:@localhost... but when i am executing this code i am getting this Error: Exception in thread exception at runtime - JDBC java.sql.*; public class MysqlConnect { public static void main(String args[]) { System.out.println("MySQL Connect Example."); Connection con = null; String url = "jdbc:mysql://localhost:3306/"; String dbName = "bank"; String java runtime exception - JDBC java.sql.*; public class MysqlConnect { public static void main(String args[]) { System.out.println("MySQL Connect Example."); Connection con = null; String url = "jdbc:mysql://localhost:3306 Programming error - JDBC =DriverManager.getConnection("jdbc:odbc:Odsn"); Statement s=con.createStatement(); int x...) { out.println("Database Error :"+se.getMessage()); } catch(Exception e) { out.println(e.toString()); } } } Please reply me the error in this program because i java runtime error - JDBC java runtime error when i am running my jdbc program using thin driver this error is coming at runtime: Exception in thread "main..., give me the suggesion yo solve this problem This kind of error due mysql problem - JDBC = "jdbc:mysql://localhost:3306/test"; Connection con=null; try...mysql problem hai friends please tell me how to store the videos in mysql plese help me as soon as possible thanks in advance   how to ckeck whether the date has expired - JDBC how to ckeck whether the date has expired i have maintained a table... licenced copy has expired. how to write the query for that?? or is their any... licenced copy has expired.your query would be: select *from tableName where MySql ClassNotFoundException - JDBC MySql ClassNotFoundException Dear sir, i am working in Linux platform with MySQL database , actually i finished all installation in MySQL... install in linux any software making connection between java and MySQL. Or how can i java runtime error - JDBC java runtime error when i m running my program using type1 driver it is showing a runtime error of SQLException of unable to create the connection object.please give the solution JAVA & MYSQL - JDBC JAVA & MYSQL How can we take backup of MySQL 5.0 database by using...;Hi Friend, Please visit the following page for working example of MySQL backup. This may help you in solving your problem. JDBC Tutorial - Writing first JDBC example and running in Eclipse of creating project in Eclipse, downloading JDBC driver for MySQL, adding JDBC driver...; MySQL JDBC driver (to be downloaded from MySQL website) JDBC... in the video above. Step 2: Download the JDBC driver for MySQL. You can jsp error - JDBC of that a error is like-----------"java.sql.SQLException: [Microsoft][ODBC Driver Manager... the line is not getting incremented u r getting that error just do this change java database error - JDBC ("sun.jdbc.odbc.JdbcOdbcDriver"); Connection connect =DriverManager.getConnection("jdbc:odbc Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/87818
CC-MAIN-2015-40
en
refinedweb
risu wrote: Upon further discussion it is not intended for every connection to call the ReadFile function when it's created. I was kinda afraid of this (are loacal file reads for every connection object neccessarily bad?). Instead they will have a top level Module (not class or form) firing off the overall program. The module will read the local file (through whatever method is decided upon) and fill a global variable that will be used by all forms called subsequently. Thoughts on this? Well.. File System I/O has always been saught at as expensive.. Path Lookups, Authorization Checks, etc... By what you have said so far, it seems as though your settings arent going to change that frequently anyway, so reading it in once would be sufficient. The System.Configuration namespace is wonderful at those types of applications... If you were to think on an object oriented level, you would have your object that handles data, read the config file on construction... or if you read it in at an application level, its already in memory so it would just have to be passed or referenced.. Jake
https://channel9.msdn.com/Forums/Coffeehouse/6237-Reading-local-files-Win-APIs-Vs-Framework/2e7f3a768135494d80d69dea0119b7f8
CC-MAIN-2015-40
en
refinedweb
Answered by: Whats wrong with my code? C# using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; using OutLook = Microsoft.Office.Interop.Outlook; namespace Access_Honeywell_V8 { public partial class email : Form { public email(TextBox t) { InitializeComponent(); this.mailto.Text = t.Text; mailto.Focus(); } public email() { InitializeComponent(); } private string ReadSignature() { string appDataDir = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData) + "\\Microsoft\\Signatures"; string signature = string.Empty; DirectoryInfo diInfo = new DirectoryInfo(appDataDir); if (diInfo.Exists) { FileInfo[] fiSignature = diInfo.GetFiles("*.htm"); if (fiSignature.Length > 0) { StreamReader sr = new StreamReader(fiSignature[0].FullName, Encoding.Default); signature = sr.ReadToEnd(); if (!string.IsNullOrEmpty(signature)) { string fileName = fiSignature[0].Name.Replace(fiSignature[0].Extension, string.Empty); signature = signature.Replace(fileName + "_files/", appDataDir + "/" + fileName + "_files/"); } } } return signature; } private void button1_Click(object sender, EventArgs e) { OutLook.Application mailApp = new OutLook.Application(); OutLook.NameSpace myNam = mailApp.GetNamespace("MAPI"); myNam.Logon(null, null, true, true); OutLook.MAPIFolder ofold = myNam.GetDefaultFolder(OutLook.OlDefaultFolders.olFolderSentMail); OutLook._MailItem mi = (OutLook._MailItem)mailApp.CreateItem(OutLook.OlItemType.olMailItem); mi.To = mailto.Text; mi.CC = emailcc.Text; mi.CC = emailcc.Text; mi.CC = emailcc.Text; mi.CC = emailcc.Text; mi.CC = emailcc.Text; mi.CC = emailcc.Text; mi.CC = emailcc.Text; mi.CC = emailcc.Text; mi.SentOnBehalfOfName = "mycompany.com"; mi.Subject = subject.Text; mi.HTMLBody = body.Text; mi.Display(true); mi.SaveSentMessageFolder = ofold; mi.Send(); } Im trying to add a signature to an outlook email.....(ive asked a thousand times in the outlook forum...and no answer) Please just take a look...i can send the mail just fine...and all works great...just no signature!! HELP Please this one has been eating me for a week!! Loving life since 1981 Preston Lambeth Question Answers All replies Please Debug the ReadSignature() method and check if its returning Signature in expected format. Secondly... I dont see a Call to this ReadSignature method any where in the posted code... Check if you missed it. .NET Maniac -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- - YOU HAVE NO IDEA HOW MUCH YOU HAVE HELPED ME OUT!!!! Its amazaing how much a second set of eyes can help....wow is all i can say...thank you so much for helping me out with the problem that has been killing me for weeks!!!!! It was hard enough to get the the files with the signature...then emplimenting it was a whole other story......thank you!!!!!!! Loving life since 1981 Preston Lambeth By the way, depending on exactly how you're emailing this, another "gotcha" with sending e-mails can be virus scan software. I see you're using Outlook here, so that shouldn't be a problem, but keep in mind that if you ever change architectures and try to access the e-mail port directly to send outgoing e-mail that most virus scan software will prevent that from functioning unless you give your application "permission" to use that port. (I learned this the hard way; I had an application that would periodically generate and send e-mails - couldn't get it to work, even though the code looked OK - turned out that the problem was with my virus scan software). I obviously don't have enough information to determine if this will be an issue for your particular application, but it's definitely worth taking the time to make sure you're not going to accidentally run afoul of some kind of security measure (e.g. firewall, OS security measures, Outlook security measures, virus scan/anti-malware software, etc.).
https://social.msdn.microsoft.com/Forums/vstudio/en-US/a2432f7f-7854-4a68-8011-da9d243bc1f9/whats-wrong-with-my-code-c?forum=csharpgeneral
CC-MAIN-2015-40
en
refinedweb
Is it a terminology issue, or a deeper problem, that we have both When a _foreign element_ has one of the namespaced attributes given by the local name and namespace of the first and second cells of a row from the following table, it must be written using the name given by the third cell from the same row. . . xml:base . . . xml:space I think they should be allowed on HTML elements as well. They can be set by script, and xml:base might well be useful. This bug was cloned to create bug 17890
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17745
CC-MAIN-2015-40
en
refinedweb
Tutorial 3: Using Matrices The Matrices tutorial project introduces the concept of matrices and shows how to use them. Matrices are used to transform the coordinates of vertices and to set up cameras and viewports. Path Source location: (SDK root)\Samples\Managed\Direct3D\Tutorials\Tutorial3 Procedure Tutorial 2: Rendering Vertices rendered 2-D vertices to draw a triangle. This tutorial adds to the code of Tutorial 2 to rotate the triangle using 3-D vertex transformations. Because this project will apply transformations to the triangle object, instead of using already-transformed 2-D window coordinates as in Tutorial 2, the vertex buffer is initialized with the CustomVertex.CustomVertex.PositionColored structure, as in the following code fragment. In addition, within the private Render method, the device vertex format is initialized to be the CustomVertex.PositionColored format, as shown in the following code. Before geometry is rendered, the application-defined SetupMatrices method, which creates and sets the 3-D matrix transformations of the triangle object, is called from Render. [C#] private void Render() { . . . // Set up the world, view, and projection matrices. SetupMatrices(); device.SetStreamSource(0, vertexBuffer, 0); device.VertexFormat = CustomVertex.PositionColored.Format; device.DrawPrimitives(PrimitiveType.TriangleList, 0, 1); // End the scene. device.EndScene(); device.Present(); } Typically, three types of transformation are set for a 3-D scene. The transformations are all defined as properties of the Transforms object, accessed from the Device.Transform property. All use a left-handed coordinate system typical of Direct3D; see 3-D Coordinate Systems. - World Transformation Matrix: In this case, the triangle is rotated around the y-axis by calling the Matrix.RotateY method, as shown in the following code sample. Note that Matrix is part of the general-purpose Microsoft.DirectX namespace. This call uses the system Environment .TickCount method, divided by a scaling value, to provide the RotateY argument in radians. This procedure yields a smoothly-varying rotation about the y-axis. - View Transformation Matrix: The view transformation matrix yields the camera view of the scene, in this sample code by calling the Matrix.RotateY method. Three Vector3 vectors form the arguments for the LookAtLH method, which builds a left-handed (LH) look-at matrix. The three vectors represent respectively the eye location, the camera look-at target (in this case the origin), and the current world's up-direction. - Projection Transformation Matrix: The projection transformation matrix defines how geometry is transformed from 3-D view space to 2-D viewport space. In this sample code it is formed from the matrix returned by the left-handed PerspectiveFovLH method. Arguments to the method are the field of view in radians, the aspect ratio (view space height divided by width), the near clipping plane distance, and the far clipping plane distance. The order in which these transformation matrices are created does not affect the layout of the objects in a scene. However, Direct3D applies the matrices to the scene in the above order. [C#] private void SetupMatrices() { // For our world matrix, we will just rotate the object about the y-axis. // Set up the rotation matrix to generate 1 full rotation (2*PI radians) // every 1000 ms. To avoid the loss of precision inherent in very high // floating point numbers, the system time is modulated by the rotation // period before conversion to a radian angle. int iTime = Environment.TickCount % 1000; float fAngle = iTime * (2.0f * (float)Math.PI) / 1000.0f; device.Transform.World = Matrix.RotationY( fAngle ); // Set up our view matrix. A view matrix can be defined given an eye // point, a point to lookat, and a direction for which way is up. Here, // we set the eye five units back along the z-axis and up three units, // look at the origin, and define "up" to be in the y-direction. device.Transform.View = Matrix.LookAtLH( new Vector3( 0.0f, 3.0f,-5.0f ), new Vector3( 0.0f, 0.0f, 0.0f ), new Vector3( 0.0f, 1.0f, 0.0f ) ); // For the projection matrix, we set up a perspective transform (which // transforms geometry from 3D view space to 2D viewport space, with // a perspective divide making objects smaller in the distance). To build // a perpsective transform, we need the field of view (1/4 pi is common), // the aspect ratio, and the near and far clipping planes (which define // at what distances geometry should be no longer be rendered). device.Transform.Projection = Matrix.PerspectiveFovLH( (float)Math.PI / 4, 1.0f, 1.0f, 100.0f ); } The character of rendering is controlled by setting properties of the RenderStateManager class. This is done in the OnResetDevice application-defined method, as shown in the following code fragment. [C#] public void OnResetDevice(object sender, EventArgs e) { Device dev = (Device)sender; // Turn off culling, so the user sees the front and back of the triangle dev.RenderState.CullMode = Cull.None; // Turn off Direct3D lighting, since object provides its own vertex colors dev.RenderState.Lighting = false; } In this case, culling beyond a backplane and Direct3D lighting are both turned off. These settings allow the full depth of the 3-D object to be viewed and the object to provide its own colors.
https://msdn.microsoft.com/en-us/library/bb153260(v=vs.85).aspx
CC-MAIN-2015-40
en
refinedweb
Using IntelliSense with Exchange Web Services Topic Last Modified: 2009-07-15 You can use Microsoft Visual Studio 2005 or Visual Studio 2008 or Visual Studio 2008 IDE. In many ways, it is better to use wsdl.exe to generate proxies than to use the Add Web Reference wizard in Visual Studio 2005 or Visual Studio 2008. or Visual Studio 2008, open a Command Prompt Window. Run wsdl.exe with the following suggested arguments: - /namespace:ExchangeWebServices - /out:EWS.cs - The URL to the Exchange Web Services endpoint The following is an example of the full command:, or in a Visual Studio 2008 Command Prompt Window, run the Microsoft Visual C# 2008 Compiler version 3.5.21022.8, with the following suggested arguments: - /out:EWS_E2K7_release.dll - /target:library - The file name of the code file to compile. This will be the output source code file that is generated by using wsdl.exe; for example, EWS.cs. The following is an example of the full command: or Visual Studio 2008 or Visual Studio 2008. Add a reference to the library that you created in step 3. You can now view IntelliSense information in the Object Browser or the text editor while you are instantiating a new class.
https://msdn.microsoft.com/en-us/library/bb629923(v=exchg.80).aspx
CC-MAIN-2015-40
en
refinedweb