Download this release

You can download the WebJobs SDK from the NuGet gallery. You can install or update these packages through the NuGet gallery using the NuGet Package Manager Console, like this:

Install-Package Microsoft.Azure.WebJobs -Pre

If you want to use Microsoft Azure Service Bus triggers, install the following package:

Install-Package Microsoft.Azure.WebJobs.ServiceBus -Pre

Since the package names have changed from 0.3.0-beta, we have uploaded redirection packages which will help you update to the latest version:

Update-Package Microsoft.Azure.Jobs.Core -Pre
Update-Package Microsoft.Azure.Jobs
Async support

You can use async/await in your functions, and your functions can return Task. Distinct functions within a single JobHost are executed in parallel, so if you have two functions listening on different queues, they will be executed in parallel. The following code shows how you can use async/await and return Task from your function. This function triggers on a new message on an Azure queue called inputqueue and writes the message to a blob.
class Program
{
    static void Main()
    {
        JobHost host = new JobHost();
        host.RunAndBlock();
    }

    public async static Task HelloWorldFunctionAsync(
        [QueueTrigger("inputqueue")] string inputText,
        [Blob("output/output.txt")] TextWriter output)
    {
        await output.WriteAsync(inputText);
    }
}

You can optionally take a CancellationToken as an argument to your functions. For example, the following function is triggered when a new blob is detected in the container called "input". You can pass the CancellationToken to the CopyToAsync call. This function also shows how the SDK binds the file name and extension and gives you easy access to them.
class HelloWorldAsyncCancellationToken
{
    static void Main()
    {
        JobHost host = new JobHost();
        host.RunAndBlock();
    }

    public async static Task ProcessBlob(
        [BlobTrigger("input/{name}.{extension}")] Stream input,
        string name,        // The SDK binds the name of the file
        string extension,   // The SDK binds the extension of the file
        [Blob("output/{name}.{extension}", FileAccess.Write)] Stream output,
        CancellationToken token)
    {
        await input.CopyToAsync(output, 4096, token);
    }
}

You can also invoke functions explicitly. This is useful if you are running a WebJob on a schedule using the Azure Scheduler, or if you just want to invoke functions yourself. This approach still gives you the benefits of the SDK around diagnostics, cancelling long-running functions, and so on.
class Program
{
    static void Main()
    {
        JobHost host = new JobHost();
        Task callTask = host.CallAsync(typeof(Program).GetMethod("ManualTrigger"), new { value = 20 });

        Console.WriteLine("Waiting for async operation...");
        callTask.Wait();
        Console.WriteLine("Task completed: " + callTask.Status);
    }

    [NoAutomaticTrigger]
    public static void ManualTrigger(
        TextWriter log,
        int value,
        [Queue("outputqueue")] out string message)
    {
        log.WriteLine("Function is invoked with value={0}", value);
        message = value.ToString();
        log.WriteLine("Following message will be written on the Queue={0}", message);
    }
}
Handling poison messages in Azure Queues

In 0.3.0-beta the SDK gave you the option of binding to the DequeueCount property of queue messages. In this release we are adding support for automatically moving a message to a poison queue. You can now process poison messages in your application code, for example by logging them for investigation: just bind a function to QueueTrigger("queuename-poison"). The following code processes the queue message. When a function is bound to a queue and an exception is thrown while processing a message, the SDK will retry the message 5 times (the default) before marking it as poisoned and moving it to a separate poison queue.
class ProcessPoisonMessages
{
    static void Main()
    {
        JobHost host = new JobHost();
        host.RunAndBlock();
    }

    public async static Task ProcessQueue(
        [QueueTrigger("inputqueue")] string inputText,
        [Blob("output/output.txt")] TextWriter output)
    {
        await output.WriteAsync(inputText);
    }

    public static void ProcessPoisonQueue(
        [QueueTrigger("inputqueue-poison")] string inputText)
    {
        // Process the poison message and log it or send a notification
    }
}
Better polling logic for Azure Queues

This release has a new polling strategy. The SDK now implements a random exponential back-off algorithm to reduce the effect of idle-queue polling on storage transaction costs.

Fast path notifications for Queues

The SDK exposes a few knobs that let you configure the queue polling behavior:

- MaxPollingInterval: when a queue remains empty, the longest period of time to wait before checking for a message again. The default is 10 minutes.
- MaxDequeueCount: the number of times a message is retried before it is moved to a poison queue. The default is 5.
static void Main()
{
    JobHostConfiguration config = new JobHostConfiguration();
    config.Queues.MaxDequeueCount = 3;
    config.Queues.MaxPollingInterval = TimeSpan.FromMinutes(20);

    JobHost host = new JobHost(config);
    host.RunAndBlock();
}
Package/namespace changes

We are changing the package name to avoid ambiguity with the generic term "Job", which can be confusing and is sometimes difficult to search for. You will have to recompile your existing apps and update your connection strings to incorporate these changes.
Faster dashboard index processing

Dashboard performance is improved when it shows all the WebJobs and the function details for a WebJob.
Dashboard data out-of-date warning

The dashboard now processes host data in the background and shows a warning if there is a lot of work left to do.
Dashboard indexing errors

The "About" page of the dashboard now shows indexing errors, if any. These are useful because if the dashboard fails to index any logs you can check this page to find out whether there were errors.
Bug fixes

This release has lots of bug fixes. We prioritized the bugs reported on the forums and Stack Overflow.
Existing features of the SDK

The following feature set was supported in 0.3.0-beta and continues to be supported in this release.
Azure usage

The SDK adds triggers and bindings for Azure Blobs, Queues, Tables, and Service Bus.
Triggers

Functions are executed when a new input is detected on a queue or a blob. For more details on triggers, please see this post. You can look at the samples listed below for more information.
Hosting

A JobHost is an execution container that knows what functions you have in your program. The JobHost class lives in the Microsoft.Azure.WebJobs namespace.
using Microsoft.Azure.WebJobs;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Helpers;

namespace ImageResizeAndWaterMark
{
    class ImageProcessingFunctions
    {
        public static void Resize(
            [BlobTrigger(@"images-input/{name}")] WebImage input,
            [Blob(@"images2-output/{name}")] out WebImage output)
        {
            var width = 80;
            var height = 80;
            output = input.Resize(width, height);
        }

        public static void WaterMark(
            [BlobTrigger(@"images2-output/{name}")] WebImage input,
            [Blob(@"images2-newoutput/{name}")] out WebImage output)
        {
            output = input.AddTextWatermark("WebJobs is now awesome!!!!", fontSize: 6);
        }
    }

    public class WebImageBinder : ICloudBlobStreamBinder<WebImage>
    {
        public Task<WebImage> ReadFromStreamAsync(Stream input, CancellationToken cancellationToken)
        {
            return Task.FromResult(new WebImage(input));
        }

        public async Task WriteToStreamAsync(WebImage result, Stream output, CancellationToken cancellationToken)
        {
            var bytes = result.GetBytes();
            await output.WriteAsync(bytes, 0, bytes.Length, cancellationToken);
        }
    }
}
Source: https://azure.microsoft.com/cs-cz/blog/announcing-the-0-4-0-beta-preview-of-microsoft-azure-webjobs-sdk/
Thanks mate I got it to work!
Type: Posts; User: brendanc19
Thanks mate I got it to work!
Yes, mate. But after reading that I still am not sure how I can get this to work in my program.
Not quite understanding the while loop thing. Any chance you can write it for me in the code, so I can understand its relation to my code?
Thanks man
Could you please explain to me how I would do that?
I need to know how to make my code repeat if the user types in NEW at the end of the program. Anybody know how? Please help, it is for a class.
Code:
import java.util.Scanner;
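The "while loop thing" mentioned above presumably means wrapping the body of the program in a loop that repeats until the user stops typing NEW; a minimal sketch of that pattern (not from the original thread):

import java.util.Scanner;

public class RepeatDemo {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        String again;
        do {
            // ... the body of the existing program goes here ...
            System.out.print("Type NEW to run again, anything else to quit: ");
            again = in.next();
        } while (again.equalsIgnoreCase("NEW"));
    }
}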
Source: http://www.javaprogrammingforums.com/search.php?s=94c930f975b8cb38dcb5ea2a07f966b4&searchid=1312200
When developing web applications, we sometimes need to push server events down to connected clients. However, HTTP was not designed to allow this. A client opens a connection to a server and requests data; a server does not open a connection to a client and push data.
HTML5 to the Rescue!
WebSockets provide full-duplex communication over a single connection between the browser and the server. They do not have the overhead of HTTP and allow the server to push messages to the client in real time.
The WebSocket API is actually quite simple: create a WebSocket object, attach event listeners, and send messages. Here is an example:
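(A minimal sketch; the endpoint path matches the Spring configuration shown below, and the host and port are assumed.)

var ws = new WebSocket('ws://localhost:8080/my-websocket-endpoint');

ws.onopen = function () {
    // Connection is open; send something to the server
    ws.send('Hello from the browser');
};

ws.onmessage = function (event) {
    var message = event.data;
    console.log('Message received: ' + message);
};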
Spring Boot
Spring has excellent support for interfacing with WebSockets.
First, we need to create a class that extends the Spring class
TextWebSocketHandler.
public class MyMessageHandler extends TextWebSocketHandler {

    @Override
    public void afterConnectionClosed(WebSocketSession session, CloseStatus status) throws Exception {
        // The WebSocket has been closed
    }

    @Override
    public void afterConnectionEstablished(WebSocketSession session) throws Exception {
        // The WebSocket has been opened
        // I might save this session object so that I can send messages to it outside of this method

        // Let's send the first message
        session.sendMessage(new TextMessage("You are now connected to the server. This is the first message."));
    }

    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage textMessage) throws Exception {
        // A message has been received
        System.out.println("Message received: " + textMessage.getPayload());
    }
}
Next, we need to configure our
WebSocket endpoint.
@Configuration
@EnableWebSocket
public class WebsocketConfig implements WebSocketConfigurer {

    @Bean
    public WebSocketHandler myMessageHandler() {
        return new MyMessageHandler();
    }

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        registry.addHandler(myMessageHandler(), "/my-websocket-endpoint");
    }
}
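Because the handler can save the WebSocketSession (as noted in the comment above), pushing a server-side event later is just a matter of calling sendMessage on the stored session. A minimal sketch, with a made-up ServerPushExample class:

import java.io.IOException;
import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;

public class ServerPushExample {

    private final WebSocketSession session; // saved in afterConnectionEstablished

    public ServerPushExample(WebSocketSession session) {
        this.session = session;
    }

    public void push(String payload) throws IOException {
        // Only push if the client is still connected
        if (session.isOpen()) {
            session.sendMessage(new TextMessage(payload));
        }
    }
}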
Since the WebSockets API is pure JavaScript, you should be able to use it in most front-end frameworks. This includes Angular, since you can include JavaScript right alongside the TypeScript.
Final Thoughts
Pretty simple, and it solves a big headache with regard to pushing data between the server and client in real time. Spring Boot makes it even easier.
Want to see Websockets in action? At Keyhole, we have built an open source tool Trouble Maker that injects failures into our platform so that we can exercise and test the recovery mechanisms that make the platform resilient. Trouble Maker has an Angular front end and utilizes WebSockets for some real-time communication. Check out the Github Repo to try it in action.
Source: https://keyholesoftware.com/2017/04/10/websockets-with-spring-boot/
Python and other scripting languages are sometimes dismissed because of their inefficiency compared to compiled languages like C. For example here are implementations of the fibonacci sequence in C and Python:
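For reference, the two recursive implementations being compared look roughly like this (a sketch consistent with the fib(40) call and the Python version shown further down; the exact benchmark code may have differed):

/* fib.c */
#include <stdio.h>

int fib(int n)
{
    if (n < 2)
        return n;
    return fib(n - 1) + fib(n - 2);
}

int main(void)
{
    printf("%d\n", fib(40));
    return 0;
}

# fib.py
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(40)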
And here are the execution times:
$ time ./fib
3.099s
$ time python fib.py
16.655s
As expected C has a much faster execution time - 5x faster in this case.
In the context of web scraping, executing instructions is less important because the bottleneck is I/O - downloading the webpages. But I use Python in other contexts too so let’s see if we can do better.
First install psyco. On Linux this is just:
sudo apt-get install python-psyco
Then modify the Python script to call psyco:
import psyco
psyco.full()

def fib(n):
    if n < 2:
        return n
    else:
        return fib(n - 1) + fib(n - 2)

fib(40)
And here is the updated execution time:
$ time python fib.py
3.190s
Just 3 seconds - with psyco the execution time is now equivalent to the C example! Psyco achieves this by compiling code on the fly to avoid interpreting each line.
I now add the below snippet to most of my Python scripts to take advantage of psyco when installed:
try:
    import psyco
    psyco.full()
except ImportError:
    pass  # psyco not installed so continue as usual
Source: https://webscraping.com/blog/How-to-make-python-faster/
Created on 2013-05-10 06:50 by ethan.furman, last changed 2013-06-15 00:57 by ncoghlan. This issue is now closed.
PEP-0435 has been approved!
Now for lots of code review.
Can you upload the patch as a diff against the CPython repo?
Here it is (I hope ;) .
I don't see docs in the patch, but perhaps that should be a separate patch to keep reviewing easier.
Also, Ethan, number the patch files in some way (like pep-435.1.patch, pep-435.N.patch) as they go through rounds of reviews.
OK, I sent another batch of reviews through the code review system - Ethan you should've gotten an email about it.
Regarding module vs. module_name for the extra param in the functional API, Guido rightly points out in issue 17941 that __module__ is the class attribute, so "module" is a consistent choice.
Incorporated comments.
More adjustments due to review.
Round 4. I wonder if I should have used two digits to number the patches. ;)
By the way, if anyone is willing to take on the documentation, that could be of great help. It shouldn't be very hard since PEP 435 is extremely detailed. The stdlib .rst doc would be the descriptions from PEP 435 plus a reference of the exposed classes which is pretty minimal.
We'll need the documentation patch available by the time the code is done being reviewed.
On Fri, May 10, 2013 at 8:09 PM, Nick Coghlan <report@bugs.python.org> wrote:
>
> Nick Coghlan added the comment:
>
> [...]
>
Strong -1 from me on this. The PEP has been accepted. Feel free to raise
this for discussion if you want to change the decision. With all due
respect to IronPython, most users can write simpler code and have pickling
work just fine.
The accepted PEP states that the frame hack should be removed: "To support pickling of these enums, the module name can be specified using the module keyword-only argument."
Ergo, if you don't specify it, they cannot be pickled. Explicit is better than implicit, and this hack should not propagate beyond namedtuple (it should actually be deprecated in namedtuple as well).
As far as the second point goes, I had already reviewed the PEP implementation before making the comment (that's why I am reasonably sure you can already do it just by overriding _get_mixins). I see it as similar to the changes that were already made to support autonumbered subtypes.
However, also note that I said we should wait before doing anything about providing a supported mechanism for that customisation. I've now created issue 17954 to cover a possible refactoring and documentation of that part of the implementation.
Sorry everyone, the frame hack needs to stay. This is not negotiable.
Why do we need two ways to do it? If "module=__name__" is available, what's the rationale for providing the option of leaving it out when pickling support is required? (The only times the frame hack will work to enable pickling, the explicit fix will also work).
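For illustration, the explicit fix being discussed looks like this with the functional API (a minimal sketch):

from enum import Enum
import pickle

# Passing module explicitly lets pickle locate the class at load time
Animal = Enum('Animal', 'ant bee cat dog', module=__name__)

assert pickle.loads(pickle.dumps(Animal.ant)) is Animal.ant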
Ethan, something is wrong with _StealthProperty. Well, two things. First, it's way too general and can be cut to just what Enum needs. Second, this is enough to make the tests pass:
class _StealthProperty():
    """
    Returns the value in the instance, or the virtual attribute on the class.
    A virtual attribute is one that is looked up by __getattr__, as opposed to
    one that lives in __class__.__dict__
    """
    def __init__(self, fget):
        self.fget = fget
    def __get__(self, obj, objtype=None):
        return self.fget(obj)
    def __set__(self, obj, value):
        raise AttributeError("can't set attribute")
    def __delete__(self, obj):
        raise AttributeError("can't delete attribute")
Now this is fishy because __get__ gets obj=None when called on a class and therefore self.fget(obj) raises an exception. But this gets caught somewhere in your code or tests and ignored. However, the right thing is still returned. I didn't have more time to investigate, but this must be cleaned out.
Also, your test must run within the regrtest framework. Currently I get:
./python -mtest.regrtest test_enum
[1/1] test_enum
test test_enum failed -- Traceback (most recent call last):
File "/home/eliben/python-src/default/Lib/test/test_enum.py", line 245, in test_pickle_enum_function_with_module
self.assertIs(Question.who, loads(dumps(Question.who)))
_pickle.PicklingError: Can't pickle <enum 'Question'>: attribute lookup __main__.Question failed
1 test failed:
test_enum
Eli,
The original _StealthProperty checked to see if it was being called on instance or class, and if it was the class it invoked __getattr__ to attempt a lookup for an enum member. Your version does not check, but, ironically, the exception raised is AttributeError, and so Python is calling __getattr__ anyway and so finds the virtual enum member.
While this is cool, and it works, I think it's much less mysterious, magical, and downright confusing to keep the original behavior and call __getattr__ from _StealthProperty. On the other hand, it might make somebody think, and that's always good, so I'm happy to leave it your way.
regrtest framework now supported (had to change the module name I was passing in from '__main__' to __name__).
Guido has promised an explanation for why he wants to keep the frame hack once he is back at work next week. To help him target that reply appropriately for my concrete objections to retaining it, I'd like to explain a bit more about why I think it's fundamentally impossible to create a robust stack inspection mechanism for implicit pickling support. Even if a dedicated mechanism for it is added, the information the interpreter needs to make a sensible reliable decision simply isn't there.
Consider the following:
# In utils.py
def enum_helper(name, prefix, fields):
    if isinstance(fields, str):
        fields = fields.split()
    e = Enum(name, (prefix + field for field in fields))
    return e

# In some other module
from utils import enum_helper
MyEnum = enum_helper("MyEnum", "st_", "mtime atime ctime")
This breaks the frame hack, but would work correctly if enum_helper was redesigned to accept an explicit module name, or if we adopted something like the "name binding protocol" discussed on Python ideas.
If we adopted a simplistic rule of ignoring function scopes and only look at module and class scopes, then we break the semantics of the global keyword.
I consider the frame hack fundamentally broken, since there's currently no way the interpreter can distinguish between an incidental assignment (like the "e = Enum..." call in util.py) and a destination assignment (like the "MyEnum = enum_helper..." call), and if we *do* add a different syntax for "this supports pickling" assignments (like the def name = expr syntax on Python ideas), then a name binding protocol makes more sense than implicit contextual magic.
I also created issue 17959 to track a legitimate concern with the current aliasing design, without impacting incorporation of the accepted PEP.
Another post-incorporation proposal (issue 17961) relating to the values used for the functional API.
After more thought, I think we should leave the shortened version of _StealthProperty as is.
The reasoning being that if _StealthProperty does the __getattr__ lookup itself, and fails, it will return an AttributeError... and then Python will use __getattr__ again to try and find the attribute.
So I'll add a comment into the shortened version and leave it at that.
Nick, could you open a separate issue for the frame hack discussion, like you did for the other things? You can make this one depend on it, if you feel it's a "blocker", but let's please not intermix more discussion here.
Ethan, I pasted the minimized version to point out the problem; I fully agree it should do the getattr lookup because otherwise it's very difficult to understand what's going on. Let's keep exceptions for actual exceptional situations and explicitly code the normal path.
Also, _MemberOrProperty is probably a more descriptive name.
Eli: done, created as #17963
Here's the latest patch.
Note that the functional API portion is broken, and I'll get back to that this evening. Please only comment on the working code. ;)
I'll make that _MemberOrProperty name change then as well.
Here's the promised explanation why I want to keep the getframe hack. I'm sure it won't satisfy everyone, but this will have to do.
There are two parts to my argument. TL;DR: (a) by implementing this hack, we will maximize user happiness; (b) I expect that all Python implementations can provide the functionality needed.
So why does this maximize user happiness? First of all, enums are such simple objects that it's a disgrace to have an enum that isn't picklable. But most users are lazy and lack imagination, so if they find they have to do extra work to ensure that something is picklable, they will make a judgement call -- is the extra work to make it picklable worth it? Unfortunately they will try to make this judgment call in a fraction of a second, and they limited imagination they may well say "I can't imagine ever pickling this", save themselves the work, and move on. If you think more than a second about the decision, you've wasted more time than it takes to type "module=__name__", so it's going to be a split-second decision. But nevertheless, having to think about it and typing it is a distraction, and when you're "in the zone" you may prefer to spend your time thinking about the perfect names for your enum class and values rather than the red tape of making in picklable.
So, in my mind, it's a given that there will be many enums that are missing the module=__name__ keyword argument. Moreover, it's likely that most of the time this will be fine -- you can get through most days just fine without ever reading or writing a pickle. (For example, I very much doubt that I've ever pickled one of the flag values used by the socket module to define e.g. the address family, socket type, or protocol.)
But now you enter a different phase of your project, or one of your collaborators does, or perhaps you've released your code on PyPI and one of your users does. So someone tries to pickle some class instance that happens to contain an unpicklable enum. That's not a great experience. Pickling and unpickling errors are often remarkably hard to debug. (Especially the latter, so I have privately admonished Ethan to ensure that if the getframe hack doesn't work, the pickle failure should happen at pickling time, not at unpickle time.) Once you've tracked down the source, you have to figure out the fix -- hopefully just typing the error message into Google will link back to a StackOverflow answer explaining the need to say "module=__name__". But the damage is done, especially if the person encountering the pickling error is not the original author of the code defining the enum. (Often the person downloading and using a package from PyPI has less advanced Python knowledge than the package author, so they may have a hard time debugging the situation.)
You can see how having the getframe hack in place makes life more pleasant for many people -- the package user won't have to debug the pickling problem, and the package author won't have to deal with the bug report and fix.
But what about other Python implementations? Well, TBH, they have plenty of other issues. The reality is that you can't take a large package that hasn't been tested on Jython, or IronPython, or PyPy, and expect it to just work on any of those. Sure, things are getting better. But there are still tons of differences between the various Python implementations (as there are between different versions of CPython), and whether you like it or not, CPython is still the Python version of choice for most people. The long and short of it is that porting any significant package to another implementation is a bit of work, and keeping the port working probably requires adding some set of additional line items to the style guide used by its developers -- don't use feature X, don't depend on behavior Y, always use pattern Z...
However, I don't expect that "always pass module=__name__ when using the enum convenience API" will have to be added to that list. sys._getframe() is way more powerful than what's needed. Even on platforms where sys._getframe() is unimplementable (or disabled by default in favor of a 10% speedup), it should still be possible to provide an API that *just* gets the module name of the caller, at least in case the call site is top-level module code (and for anything else, the getframe hack doesn't work anyway). After all, we're talking about a full Python implementation, right? That's a dynamic language with lots of introspection APIs, and any implementation worth its salt will have to deal with that, even if full-fledged sys._getframe() is explicitly excluded.
So I propose that we use sys._getframe() for the time being, and the authors of Jython, IronPython and PyPy can get together and figure out how to implement something like sys.get_calling_module_name(). They have plenty of time -- at least until they pledge support for Python 3.4. To make it easy for them we should probably add that API to CPython 3.4. And it's fine with me if the function only works if the caller is top-level code in a module.
Which reminds me. Nick offered another use case where using sys._getframe() breaks down: a wrapper function that constructs an enum for its caller. First of all, I think this is a pretty rare use case. But additionally, that wrapper could just use sys.get_calling_module_name(), and everything would be fine.
PS. Whoever adds sys.get_calling_module_name() to CPython, please pick a shorter name. :-)
I've come across something in the implementation here that I'd like some clarification on. What is the purpose of overriding __dir__ in Enum and EnumMeta? It doesn't change any behavior that I'm aware of, just makes things look a little nicer when someone calls dir() on their Enum. And, in fact, it can make things a little confusing. For example:
>>> class Test(enum.Enum):
... foo = 1
... bar = 2
... baz = 3
...
>>> dir(Test)
['__class__', '__doc__', '__members__', 'bar', 'baz', 'foo']
>>> Test.mro
<built-in method mro of EnumMeta object at 0x01D94D20>
This brings up another interesting case:
>>> class Test2(enum.Enum):
... mro = 1
... _create = 2
...
>>> dir(Test2)
['__class__', '__doc__', '__members__', '_create', 'mro']
>>> Test2.__members__
mappingproxy(OrderedDict([('mro', <Test2.mro: 1>), ('_create', <Test2._create: 2>)]))
>>> Test2['mro']
<Test2.mro: 1>
>>> Test2.mro
<built-in method mro of EnumMeta object at 0x01D90210>
>>> Test2._create
<bound method type._create of <class 'enum.EnumMeta'>>
>>>
From using "mro" or "_create", I would have expected either ValueError or for them to work properly. I don't know whether this should be fixed (one way or the other), documented, or just left alone; those kind of names really shouldn't ever be used anyway. It's something I stumbled across, though, and I just wanted to make sure that those who do have opinions that matter are aware of it :)
Thanks Guido, now that I fully understand your reasoning, I can accept that
this is a valid "practicality beats purity" situation.
Got the pickle issues worked out. Added super to the metaclass' __new__. Checking for illegal names of members and raising ValueError if any are found (I know, I know, safety checks! But such an enum is broken from the getgo so I see no reason to allow those names through).
Thanks everyone for the excellent feed back. I really appreciate it. Hopefully we're almost done! :)
Small nitpick, weakref is imported but not used in the latest patch.
Ethan, just a reminder to write that documentation...
It's basically a stripped down version of PEP 435 (leave all the philosophy and history out), with a few concrete "reference" sections explaining the API precisely.
I tweaked the code a bit (no functionality changes, mostly cleanups and a bit of refactoring). Figured it will be easier to just send an updated patch than another review. The diff from patch 06 can be seen via the Rietveld interface.
Also, after reading the code more carefully, I think we're making a mistake by over-complicating it for the sake of custom enum metaclasses and over-customization (like auto numbering). The original point Guido raised against auto-numbering was too much magic in the implementation. Well, we already have that in Lib/enum.py - the code is so complex it seems fragile because of the tight coupling with many class and metaclass related protocols. Just defining a wholly new enum implementation that does something very specific seems simpler than customizing the existing one.
I'd suggest we stick to the existing Enum + IntEnum, giving up the more complex customizations for now. It can always be added in the future if we see it's very important.
Wow. I definitely felt like an apprentice after reading the changes. Thanks, Eli, that looks worlds better!
Thanks Ethan :)
From my point of view this is LGTM, as long as:
* There's ReST documentation
* You remove the code to support extensions and customizations not mandated by PEP 435. As I mentioned before, this seems to be a YAGNI that complicates the code needlessly. It's fine to keep the changes in some external repo and at a later point discuss their gradual addition similarly to the way Nick has been pushing enhancements through additional issues.
We can always add more capabilities to Enum. We can "never" take them away once added, and this complicated code will remain with us forever even if no one ends up using it.
Supporting extensions was one of the things that got Ethan's version
through review. So -1 on going back on our promise to support those
variants. They have been reviewed and tested just as thoroughly as the rest
of the design.
Also, we know for a fact that people plan to use the customisation features
- it was making their code work that drove the current extension design.
I'm not sure which promises you're referring to Nick, and to whom they were made; the only formal promise we made is PEP 435 - and it doesn't mention this extensibility.
I won't argue beyond this comment, since I know I'm part of the minority opinion here. However, I still think this is a mistake.
The most important original goal of Enum (as discussed during the language summit) was to replace all the custom enum implementations by one that is standard. A full-fledged extension mechanism will just make it so we'll have a fleet of bastardized "extended enums", each with its own capabilities, each different from the others. With one standard Enum, when you're reading someone's code and you see:
class Foo(Enum):
...
You know very well what Foo is. Restricted extensions like IntEnum and even your @enum.unique are still tolerable because they're explicit:
# enum.unique is standard and says what it is explicitly
@enum.unique
class Foo(Enum):
...
But if we open the gates on customization, we'll have:
class Foo(AutoEnum):
Red, White, Black
And:
class Bar(SomeOtherAutoEnum):
Red = ...
White = ...
Black = ...
And:
class Baz(SomeEvenOtherMagicEnum):
... # whatever goes here
And we're back to square 1, because these Enums are not standard, and each framework will have its own clever customization one will need to understand in order to read code with Enums.
Exposing and documenting the metaclass and customizations of __new__ is a whole coffin for the "there is only one way to do it" decision of stdlib's Enum. It might have been better to just define AutoNumberedEnum, BitfieldEnum and Magic42Enum as part of the enum package in stdlib and be over with it; but this was strongly rejected by others and particularly Guido during the summit and later. Now we're just creating a back-door to get into the same situation.
Eli, what's wrong with having a backdoor? Python is literally *full* of backdoors. I have a feeling that somehow you are trying to build an Enum class that is unpythonic in its desire to enforce some kind of "ideal enum" behavior.
Guido, IMHO back-doors are fine in many cases, just not this one. The way I see it, our main goal here is to collect a bunch of custom implementations of enums under a single umbrella. This is not very different from what was done with OrderedDict and namedtuple at some point. There were probably a bunch of custom implementations, along with more and less commonly used recipes. At some point a single implementation was added to the stdlib, without (AFAICS) major back-doors.
Yes, the Enum case is vastly more complex than either OrderedDict or namedtuple, and there is a multitude of different behaviors that can be anticipated (as the lengthy discussions leading to the acceptance of PEP 435 demonstrated). And yet, I was also hoping to have a single canonical implementation, so that people eventually accept it as "the one". Stdlib modules tend to win over in the long run.
The other point is that I think the implementation could be much simpler without having these back doors. As it stands now, the code is complex and hence brittle. Any change will be difficult to do because we're locked down very strictly by a set of intrusive and deep, yet externally "promised" interfaces. The same can be said, again, about OrderedDict and namedtuple, the code of which is very straightforward.
Maybe I'm blowing this out of proportions, maybe not. I'm not sure. As I said, I don't want to strongly argue about this. If both you and Nick are OK with keeping the customization mechanisms in, I defer to your judgment.
Eli, remember that TOOWTDI stands for "There's one *obvious* way to do it" rather than "There's *only* one way to do it". The latter interpretation leads to insanely complex APIs that attempt to solve everyone's problems, while the former favours 80% solutions that cover most use cases, with extension hooks that let people handle the other 20% as they see fit.
The point of the enum standardisation is to have a conventional way that enums *behave*, and then allow variations on that theme for those cases where the stdlib implementation is "close, but not quite what I need or want".
The whole metaclass machinery is built around this concept of letting people create domain specific behaviour, that is still somewhat unified due to conventions like the descriptor protocol. You can do a *lot* with just descriptors, so if you don't need a custom metaclass, you shouldn't use one.
PEP 422's class initialisation hook is aimed specifically at certain cases that currently need a metaclass and providing a simpler way to do them that lets you just use "type" as the metaclass instead.
It's the same with enums - if you don't need to customise the metaclass, you shouldn't. But there are some use cases (such as syncing the Python level enum definition with a database level one) where additional customisation will be needed. We also want to give people the freedom they need to experiment with different forms of definition time syntactic sugar to see if they can come up with one we like enough to add to the standard library in 3.5.
Does documenting these definition time extension points constrain what we're allowed to do in the future? Yes, it does. But, at the same time, it takes a lot of pressure off us to add more features to the standard enum type over time - if people have niche use cases that aren't handled well by the standard solution (and we already know they do), we can point them at the supported extension interface and say "go for it". For the majority of users though, the standard enum type will work fine, just as ordinary classes are adequate for the vast majority of object oriented code.
Somewhat related, I *know* you've read type.__new__. Compared to that, enum.EnumMeta.__new__ is still pretty straightforward ;)
Working on documentation...
Hopefully the final bit of code, plus docs.
Code changes:
_names_ are reserved
Doc changes (different from the PEP):
examples of AutoEnum, UniqueEnum, and OrderedEnum
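(For context, the AutoEnum-style example mentioned here is in the spirit of the auto-numbering recipe that ended up in the module documentation; a minimal sketch:)

from enum import Enum

class AutoNumber(Enum):
    def __new__(cls):
        value = len(cls.__members__) + 1  # next ordinal value
        obj = object.__new__(cls)
        obj._value_ = value
        return obj

class Color(AutoNumber):
    red = ()
    green = ()
    blue = ()

# Color.green.value == 2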
Apologies for the noise -- was having trouble getting the correct patch attached. :/
Nick prudently moved the unique discussion to its own issue - 18042. Let's get the initial implementation & docs committed first (without unique in the implementation, although it's fine to have it as an example in the docs for now), close this issue, and then discuss in 18042 whether unique should be part of the stdlib-provided API or not.
Good idea, thanks.
I sent a fresh review - nothing major; it's very near commit readiness now. Additional changes can be done after the initial commit. We have time until 3.4 beta (November 2013) to tweak stuff (and the documentation whenever...)
Doc updates are in.
I removed the 'unique, constant' from the first line of the intro, as neither of those things are necessarily true.
Hmm -- I was confusing member names with member values; I'll put 'unique' back in.
Hopefully the last update. :)
LGTM.
I suggest you wait for a couple of days to see if others have any critical comments and then commit. :)
Final (hopefully! ;) patch.
I stuck the toc reference in datatypes.
If no more edits are necessary I'll commit on Friday (three days from now).
New changeset fae92309c3be by Ethan Furman in branch 'default':
Closes issue 17947. Adds PEP-0435 (Enum, IntEnum) to the stdlib.
That commit looks just a touch incomplete...
Ethan, did you forget to "hg add" ?
On Fri, Jun 14, 2013 at 12:44 AM, Nick Coghlan <report@bugs.python.org>wrote:
>
> Nick Coghlan added the comment:
>
> That commit looks just a touch incomplete...
>
> ----------
> resolution: fixed ->
> stage: committed/rejected -> commit review
> status: closed -> open
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________
>
Well, that made me laugh first thing in the morning!
I had nuked and redone my clone, and yeah, forgot to re-add the files. :/
Trying again...
Commit message was okay?
New changeset e7a01c7f69fe by Ethan Furman in branch 'default':
Closes issue 17947. Adds PEP-0435 (Adding an Enum type to the Python standard library).
> New changeset e7a01c7f69fe by Ethan Furman in branch 'default':
> Closes issue 17947. Adds PEP-0435 (Adding an Enum type to the Python standard library).
>
Great job :-)
Nicely done - you can also mark the PEP as Final now :)
Source: http://bugs.python.org/issue17947
I am trying to brush up on C after about 12+ years, which was two semesters at college. This program works, but I am poor on recursion and wonder if there are some tips that could be offered. Using Borland Turbo C++ 4.5 (yes, old) on win XP pro and win7 pro 32 bit machines.
Code:
/* 9.8 Mod listing 6.18 to all cases of integer powers using recursion */
#include <stdio.h>

double power(double a, int b);   /* function prototype */

int main (void)
{
    double x, xpow;
    int n;

    printf("Enter a number and the integer power to which\n");
    printf("the number will be raised. Enter q to quit.\n");
    while (scanf("%lf%d", &x, &n) == 2)
    {
        xpow = power(x, n);
        printf("%.3e to the power %d is %.3e\n", x, n, xpow);
    }
    return 0;
}

double power(double a, int b)    /* POWER function */
{
    double pow = 1;
    int b1;

    if (b == 0)           /* case of any number raised to 0 is 1 */
        return 1.0;
    else if (a == 0.0)    /* case of 0 to any power but 0 is 0 */
        return 0.0;
    else                  /* do non 0 pos and neg powers and numbers */
    {
        if (b > 0)        /* normalize the power to a positive number */
            b1 = b;       /* since fractions are not as accurate */
        else
            b1 = -1 * b;

        pow = a;          /* calculate power doing recursive call */
        pow *= power(pow, b1 - 1);
        if (b < 0.0)      /* case of negative exponents */
            pow = 1.0 / pow;
        return pow;
    }
}
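One way this recursion is often tightened (a sketch, not from the original listing; it still assumes the base is non-zero for negative powers and that b is not INT_MIN) is to handle the sign once and let each call contribute a single multiply:

double power2(double a, int b)
{
    if (b == 0)
        return 1.0;               /* anything to the 0 is 1 */
    if (b < 0)
        return 1.0 / power2(a, -b);  /* negative exponent: invert once */
    return a * power2(a, b - 1);  /* one multiply per level of recursion */
}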
Source: http://cboard.cprogramming.com/c-programming/139747-simple-calculate-integer-power-double-using-recursion-program-critique-printable-thread.html
There is a bug in the pinout for the Olimex USB-Serial-Cable in the section BeagleBone_Black_Serial
Fragment of a discussion from User talk:Wmat
It is protected, tried just now.
ProtoAES256 (talk)
You're correct, any pages in the Beagleboard: namespace are indeed protected. I've gone ahead and corrected the error.
Still can't
ProtoAES256 (talk)
Source: http://www.elinux.org/Thread:User_talk:Wmat/There_is_a_bug_in_the_pinout_for_the_Olimex_USB-Serial-Cable_in_the_section_BeagleBone_Black_Serial/reply_(4)
mvscanw, mvwscanw, scanw, wscanw - convert formatted input from a window
#include <curses.h>

int mvscanw(int y, int x, char *fmt, ...);
int mvwscanw(WINDOW *win, int y, int x, char *fmt, ...);
int scanw(char *fmt, ...);
int wscanw(WINDOW *win, char *fmt, ...);
These functions are similar to scanf(). Their effect is as though mvwgetstr() were called to get a multi-byte character string from the current or specified window at the current or specified cursor position, and then sscanf() were used to interpret and convert that string.
Upon successful completion, these functions return OK. Otherwise, they return ERR.
No errors are defined.
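For illustration, a minimal program using scanw() might look like this (a sketch, assuming a standard curses implementation):

#include <curses.h>

int main(void)
{
    int n = 0;

    initscr();
    printw("Enter a number: ");
    refresh();
    scanw("%d", &n);               /* reads and converts input from stdscr */
    printw("You entered %d\n", n);
    refresh();
    getch();
    endwin();
    return 0;
}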
getnstr(), printw(), fscanf() (in the XSH specification), wcstombs() (in the XSH specification), <curses.h>.
Source: http://pubs.opengroup.org/onlinepubs/007908775/xcurses/mvwscanw.html
The square brackets define an array. The array can hold 10 elements, indexed from 0 to 9.The statement says nothing about any pin numbers.
I would start at the line 2 output; the "Found 8 devices" text indicates this routine is broken. I can suggest debugging the 'find' routine, as a good place to start. (I do not see that source code.)

This is what I've been advised...! Any ideas??? I have changed the array to [12] now as well....
Well I'm actually registering 8 sensors now but basically I'm running the provided code by Nick Gammon that streamlines the serial print process. I've had a look at the array in the code but I still can't work out why it isn't registering the last 3 sensors....
I would start at the line 2 output; the "Found 8 devices" text indicates this routine is broken.
I just had a look at the code for the DallasTemperature library. I strongly recommend against using requestTemperaturesByIndex for the simple reason that it does a search of the entire bus, every time! So to get index 0 it searches the bus until it finds the first match, and then queries its temperature by address. Then for index 1 it searches the bus again, this time stopping after finding the second match, and then queries by address. So even a briefly malfunctioning sensor will make it disappear from the index list, giving you the reading for sensor 9 where you expect sensor 8. Plus, it would be quite slow. What you need to do is find out the hex address of each sensor (e.g., plugging one in at a time) and from then on query by address.
Are you saying if you have 8 working, and you add a 9th, only the first 8 register? What if you remove the 9th and swap with one of the others? Are you absolutely sure the wiring for the 9th/10th/11th is correct?
Not if there's a wiring problem it doesn't. I don't have 11 sensors here so I can't test that one way or the other, but it seems to me the whole design idea is to run a lot of sensors over a long piece of wire. If you could only have 8 and the wire could only be a few meters long, well, who would bother with it?
I.
Quote from: seanlangford on Jan 28, 2012, 12:22 am: "I."
With what code? Without understanding the code, it's impossible to know what that means.
I replaced all of the wiring that supplies the last 3 sensor (GD, DQ & VDD).
I'm using Nick's example sketch now but I need to amend it lol...
#include <OneWire.h>
#include <DallasTemperature.h>
#include <Streaming.h>

// Data wire is plugged into port 10 on the Arduino
const byte ONE_WIRE_BUS = 10;
const byte TEMPERATURE_PRECISION = 10;

// OneWire bus instance and the DallasTemperature wrapper ("sensors") used below
OneWire oneWire(ONE_WIRE_BUS);
DallasTemperature sensors(&oneWire);

// addresses
DeviceAddress myThermometer [12];
int deviceCount;

void setup (void)
{
  // start serial port
  Serial.begin (115200);
  Serial << "Dallas Temperature IC Control Library Demo" << endl;

  // Start up the library
  sensors.begin();

  // locate devices on the bus
  Serial << "Locating devices..." << endl;
  deviceCount = sensors.getDeviceCount();
  Serial << "Found " << deviceCount << " devices." << endl;

  // report parasite power requirements
  Serial << "Parasite power is: " << (sensors.isParasitePowerMode() ? "ON" : "OFF") << endl;

  // method 1: by index
  for (int i = 0; i < deviceCount; i++)
  {
    if (sensors.getAddress(myThermometer [i], i))
    {
      Serial << "Device " << i << " Address: ";
      printAddress(myThermometer [i]);
      Serial << endl;
      sensors.setResolution(myThermometer [i], TEMPERATURE_PRECISION);
    }
    else
      Serial << "Unable to find address for Device " << i << endl;
  }  // end of for
}  // end of setup

// function to print a device address
void printAddress(DeviceAddress deviceAddress)
{
  for (uint8_t i = 0; i < 8; i++)
  {
    // zero pad the address if necessary
    if (deviceAddress[i] < 16) Serial.print("0");
    Serial.print(deviceAddress[i], HEX);
  }
}  // end of printAddress

void loop(void)
{
  // call sensors.requestTemperatures() to issue a global temperature
  // request to all devices on the bus
  sensors.requestTemperatures();

  // now get all temperatures
  for (int i = 0; i < deviceCount; i++)
  {
    float tempC = sensors.getTempC(myThermometer [i]);
    if (tempC > -80)
    {
      Serial << "Device " << i << " Temperature: " << tempC << endl;
    }
    else
      Serial << "Unable to find temperature for Device " << i << endl;
  }  // end of for loop

  delay (1000);
}  // end of loop
Source: http://forum.arduino.cc/index.php?topic=87919.msg670646
cout not printing anything?wait I understood now.. I wrote the wrong sign. lol, thanks
cout not printing anything?no, I doubt it's the problem, since if I write something in the console I can see it which means it ...
cout not printing anything? #include <iostream>
using namespace std;
int main () {
for (int x=1;x>9;x++) {
...
For loop reads second parameter as first along with first parameter...Thanks!
For loop reads second parameter as first along with first parameter...So I did a test code for changing colors and stuff, and it compiles right, but I'm having a problem,...
Source: http://www.cplusplus.com/user/cplusplus123/
java.lang.Object
  org.jboss.cache.statetransfer.StateTransferManager
public class StateTransferManager
protected static final org.apache.commons.logging.Log log
public static final NodeData STREAMING_DELIMITER_NODE
public static final String PARTIAL_STATE_DELIMITER
public StateTransferManager()
public StateTransferManager(CacheSPI cache)
public void getState(ObjectOutputStream out, Fqn fqn, long timeout, boolean force, boolean suppressErrors) throws Throwable

Writes the state for the portion of the tree rooted at fqn to the provided OutputStream.

Parameters:
out - stream to write state to
fqn - Fqn indicating the uppermost node in the portion of the tree whose state should be returned
timeout - max number of ms this method should wait to acquire a read lock on the nodes being transferred
force - if a read lock cannot be acquired after timeout ms, should the lock acquisition be forced, and any existing transactions holding locks on the nodes be rolled back? NOTE: In release 1.2.4, this parameter has no effect.
suppressErrors - should any Throwable thrown be suppressed?
Throws:
Throwable - in event of error

public void setState(ObjectInputStream in, Fqn targetRoot) throws Exception

Integrates the state read from the input stream into the node named by targetRoot.

Parameters:
in - an input stream containing the state
targetRoot - fqn of the node into which the state should be integrated
Throws:
Exception - in event of error
protected void acquireLocksForStateTransfer(NodeSPI root, Object lockOwner, long timeout, boolean lockChildren, boolean force) throws Exception
Exception
protected void releaseStateTransferLocks(NodeSPI root, Object lockOwner, boolean childrenLocked)
acquireLocksForStateTransfer(org.jboss.cache.NodeSPI, java.lang.Object, long, boolean, boolean)
protected StateTransferGenerator getStateTransferGenerator()
protected StateTransferIntegrator getStateTransferIntegrator(ObjectInputStream istream, Fqn fqn) throws Exception
Exception
Source: http://docs.jboss.org/jbosscache/2.1.1.GA/apidocs/org/jboss/cache/statetransfer/StateTransferManager.html
Internet Engineering Task Force (IETF) Q. Wu, Ed.
Request for Comments: 6792 Huawei
Category: Informational G. Hunt
ISSN: 2070-1721 Unaffiliated
P. Arden
BT
November 2012
Guidelines for Use of the RTP Monitoring Framework
Abstract
This memo proposes an extensible Real-time Transport Protocol (RTP)
monitoring framework for extending the RTP Control Protocol (RTCP)
with a new RTCP Extended Reports (XR) block type to report new
metrics regarding media transmission or reception quality. In this
framework, a new XR block should contain a single metric or a small
number of metrics relevant to a single parameter of interest or
concern, rather than containing a number of metrics that attempt to
provide full coverage of all those parameters of concern to a
specific application. Applications may then "mix and match" to
create a set of blocks that cover their set of concerns. Where
possible, a specific block should be designed to be reusable across
more than one application, for example, for all of voice, streaming
audio, and video.

This memo describes an extensible RTP monitoring framework to provide a small number of reusable Quality of Service (QoS) / QoE metrics that facilitate reduced implementation costs and help maximize interoperability. "Guidelines for Extending
the RTP Control Protocol (RTCP)" [RFC5968] has stated that where RTCP
is to be extended with a new metric, the preferred mechanism is by
the addition of a new RTCP XR [RFC3611] block. This memo assumes
that all the guidelines from RFC 5968 must apply on top of the
guidelines in this document. Guidelines for developing new
performance metrics are specified in [RFC6390]. New RTCP XR report
block definitions should not define new performance metrics but
should rather refer to metrics defined elsewhere.

End-system metrics
Metrics that are usually measured at the user endpoint. One example of such metrics is the QoE metric as specified in the QoE Metrics Report Block; see [QOE_BLOCK].
Interval metrics
Metrics measured over the course of a single reporting interval
between two successive report blocks. This may be the most recent
RTCP reporting interval ([RFC3550], Section 6.2) or some other
interval.

Cumulative metrics
Metrics accumulated over more than one reporting interval, potentially the whole RTP session. An example cumulative metric is the total number of RTP packets lost since the start of the RTP session.
Sampled metrics
Metrics measured at a particular time instant and sampled from the
values of a continuously measured or calculated metric within a
reporting interval (generally, the value of some measurement as
taken at the end of the reporting interval). An example is the
inter-arrival jitter reported in RTCP SR and RR packets, which is
continually updated as each RTP data packet arrives but is only
reported based on a snapshot of the value that is sampled at the
instant the reporting interval ends.
3. RTP Monitoring Framework
There are many ways in which the performance of an RTP session can be
monitored. These include RTP-based mechanisms such as the RTP MIB
module [RFC2959]; or the Session Initiation Protocol (SIP) event
package for RTCP summary reports [RFC6035]; or non-RTP mechanisms
such as generic MIBs, NetFlow [RFC3954], IP Flow Information Export
(IPFIX) [RFC5101] [RFC5102], and others. The following sections review the RTP monitoring framework and give guidance for using and extending RTCP for monitoring RTP sessions.

A third-party entity can play the role of the monitor within the RTP monitoring framework. As
shown in Figure 1, the third-party monitor can be a passive monitor
that sees the RTP/RTCP stream pass it, or a system that gets sent
RTCP reports but not RTP and uses that to collect information. The
third-party monitor should be placed on the RTP/RTCP path between the
sender, the intermediate system, and the receiver.
The RTP Metrics Block (MB) conveys real-time application QoS/QoE
metric information and is used by the monitor to exchange information. This is distinct from RTCP feedback, such as an RTCP NACK [RFC4585]
that provides feedback on the RTP sequence numbers for a subset of
the lost packets or all the currently lost packets. Ultimately, the
metric information collected by monitors within the RTP monitoring
framework may go to the network management tools beyond the RTP
monitoring framework; e.g., as shown in Figure 1, the monitors may
export the metric information derived from the RTP monitoring
framework to the management system using non-RTP means.
+-----------+ +----------+
|Third-Party| |Management|
| Monitor | >>>>>>>>| System |<<<<<
+-----------+ ^ +----------+ ^
: ^ ^ ^
: | ^ ^
+---------------+ : | +-------------+ +-------------+
| +-----------+ | : | |+-----------+| |+-----------+|
| | Monitor | |..:...|.......|| Monitor ||........|| Monitor ||
| +-----------+ | | |+-----------+| |+-----------+|
| |------+------>| |------->| |
| RTP Sender | |RTP Mixer or | |RTP Receiver |
| | |Translator | | |
+---------------+ +-------------+ +-------------+
----> RTP media traffic
..... RTCP control channel
>>>>> Non-RTP/RTCP management flows
Figure 1: Example Showing the Components
of the RTP Monitoring Framework
RTP may be used with multicast groups: both Any-Source Multicast
(ASM) and Source-Specific Multicast (SSM). These groups can be
monitored using RTCP. In the ASM case, the monitor is a member of
the multicast group and listens to the RTCP reports sent by the other group members. Not every RTP session will include monitoring, and
those sessions that are monitored will not all include each type of
monitor. The performance metrics collected by monitors can be
divided into end-system metrics, application-level metrics, and
transport-level metrics. Some of these metrics may be specific to
the measurement point of the monitor, but some application-level metrics (i.e., quality of
experience (QoE) metrics) may only be applicable for user-facing end
systems.
RTP sessions can include intermediate systems that are an active part
of the system. These intermediate systems include RTP mixers and
translators, Multipoint Control Units (MCUs), retransmission servers,
etc. If the intermediate system establishes separate RTP sessions to
the other participants, then it must act as an end system in each of
those separate RTP sessions for the purposes of monitoring. If a
single RTP session traverses the intermediate system, then the
intermediate system can be assigned a synchronization source (SSRC)
in that session, which it can use for its reports. Transport-level
metrics may be collected at such an intermediate system.
Third-party monitors may be deployed that passively monitor RTP
sessions for network management purposes. Third-party monitors often
do not send reports into the RTP session being monitored but instead
collect transport-level metrics, end-system metrics, and application-
level metrics. A compromise between
processing overhead and reliability should be taken into account.
4.4. Consumption of XR Block Code Points
The RTCP XR block namespace is limited by the 8-bit block type field
in the RTCP XR header. Space exhaustion may be a concern in the
future, especially if many different larger blocks are defined.

The Measurement Information Block enables an RTCP sender to convey the
common time period and the number of packets sent during this period.
If the measurement interval for a metric is different from the RTCP
reporting interval, then this measurement duration in the Measurement
Information Block should be used to specify the interval. Where there are several possible approaches, a translator may choose its behavior based on local policy, which might differ between
different interfaces of the same translator.
7.2. Applicability to MCUs

Acknowledgements

The authors would like to thank Colin Perkins, Charles Eckel, Robert
Sparks, Salvatore Loreto, Graeme Gibbs, Debbie Greenstreet, Keith
Drage, Dan Romascanu, Ali C. Begen, Roni Even, Magnus Westerlund,
Meral Shirazipour, Tina Tsou, Barry Leiba, Benoit Claise, Russ
Housley, and Stephen Farrell for their valuable comments and
suggestions on early versions of this document.

References
[RFC2959] Baugher, M., Strahm, B., and I. Suconick, "Real-Time
Transport Protocol Management Information Base",
RFC 2959, October 2000.
[RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay
Variation Metric for IP Performance Metrics (IPPM)",
RFC 3393, November 2002.
[RFC3954] Claise, B., "Cisco Systems NetFlow Services Export
Version 9", RFC 3954, October 2004.
[RFC5117] Westerlund, M. and S. Wenger, "RTP Topologies",
RFC 5117, January 2008.
[RFC5760] Ott, J., Chesterfield, J., and E. Schooler, "RTP Control
Protocol (RTCP) Extensions for Single-Source Multicast
Sessions with Unicast Feedback", RFC 5760,
February 2010.
.
.
Authors' Addresses
Qin Wu (editor)
Huawei
101 Software Avenue, Yuhua District
Nanjing, Jiangsu 210012
China
Geoff Hunt
Unaffiliated
Philip Arden
BT
Orion 3/7 PP4
Adastral Park
Martlesham Heath
Ipswich, Suffolk IP5 3RE
United Kingdom
Phone: +44 1473 644192
|
http://www.faqs.org/rfcs/rfc6792.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
Re: Java, Ruby, JRuby, JRubify some Java?
- From: Axel Etzold <AEtzold@xxxxxx>
- Date: Wed, 23 Sep 2009 03:32:14 -0500
-------- Original-Nachricht --------
Datum: Wed, 23 Sep 2009 15:25:11 +0900
Von: Audrey A Lee <audrey.lee.is.me@xxxxxxxxx>
An: ruby-talk@xxxxxxxxxxxxx
Betreff: Java, Ruby, JRuby, JRubify some Java?
Hello JRuby People,
I'm not quite ready to JRubyify yet but,
I'm working on a mini-project which requires that I screen-capture a
portion of my x-display on a linux box.
It looks like I can use a class in Java named "Robot" to do this:
-
I figure any class (even if it is a Java class) named "Robot" deserves
my attention.
So I ran this query:
-
And this page looks good:
-
I see this example:
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.Rectangle;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.ImageIO;

class ScreenCapture {
    public static void main(String args[]) throws AWTException, IOException {
        // capture the whole screen
        BufferedImage screencapture = new Robot().createScreenCapture(
                new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
        // Save as JPEG
        File file = new File("screencapture.jpg");
        ImageIO.write(screencapture, "jpg", file);
        // Save as PNG
        // File file = new File("screencapture.png");
        // ImageIO.write(screencapture, "png", file);
    }
}
My question:
Is it possible to transform the above Java-syntax into Ruby-syntax
which could be interpreted by JRuby?
Or I could ask it this way:
How do I transform the above Java-syntax into JRuby-syntax?
--Audrey
Dear Audrey,
you can use Java classes in JRuby straight away:
For Linux automation, you might want to look at (the non-Java)
xdotool and its Ruby gem binding xdo:
You might combine that with one of the many ways to take screenshots
in Linux:
Best regards,
Axel
--
- Follow-Ups:
- Re: Java, Ruby, JRuby, JRubify some Java?
- From: Ilan Berci
- References:
- Java, Ruby, JRuby, JRubify some Java?
- From: Audrey A Lee
|
http://newsgroups.derkeiler.com/Archive/Comp/comp.lang.ruby/2009-09/msg01658.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
#include <sys/stream.h>
size_t msgdsize(mblk_t *mp);
Architecture independent level 1 (DDI/DKI).
Message to be evaluated.
The msgdsize() function counts the number of bytes in a data message. Only bytes included in the data blocks of type M_DATA are included in the count.
The number of data bytes in a message, expressed as an integer.
The msgdsize() function can be called from user, interrupt, or kernel context.
See bufcall(9F) for an example that uses msgdsize().
Writing Device Drivers for Oracle Solaris 11.2
STREAMS Programming Guide
|
http://docs.oracle.com/cd/E36784_01/html/E36886/msgdsize-9f.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
At long last I have managed to get libical 0.46 built on windoze using
VC8. I had to hack a couple of libical source files. I reckon these will
need to go into the official source at some point.
First, I changed vsnprintf.h after the #include <stdarg.h> statement (line
8). I added
#if !defined(_MSC_VER)
and added the closing #endif after the snprintf decl at the end.
This is necessary because VC8 does not define the __STDC__ macro as it
should. One cannot define it in the CMake because this makes other things
fail. Other VC headers expect it to be not defined basically and it causes
problems for some of the macros in fcntl.h.
It may be that at some version of Visual Studio this macro gets defined
correctly. I don't know. VC8 is all I have so I cannot check this easily.
A full fix might involve checking the value of _MSC_VER.
Then I had to edit ical.def. I removed all the occurrences of
simple_str_to_float, a routine that seems to no longer exist. This made
libical build. But the test program regression failed to link due to
missing symbol _icaltimezone_set_tzid_prefix. I added
icaltimezone_set_tzid_prefix to the end of ical.def and rebuilt and this
made everything build ok.
I am puzzled by the reports of everyone else being able to build libical
on windoze ok, given the problems I have found above. Maybe these problems
are peculiar to VC8. Hmmm..
|
http://sourceforge.net/p/freeassociation/mailman/freeassociation-libical/thread/OF33B5F44C.89DF1370-ON80257797.0037E7D5-80257797.0038947E@bnpparibas.com/
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
CompletedEventArgs Class
.NET Framework 1.1
Holds event data for the Completed event.
For a list of all members of this type, see CompletedEventArgs Members.
System.Object
System.EventArgs
System.Management.ManagementEventArgs
System.Management.CompletedEventArgs
[Visual Basic] Public Class CompletedEventArgs Inherits ManagementEventArgs
[C#] public class CompletedEventArgs : ManagementEventArgs
[C++] public __gc class CompletedEventArgs : public ManagementEventArgs
[JScript] public class Completed
CompletedEventArgs Members | System.Management Namespace
|
http://msdn.microsoft.com/en-us/library/system.management.completedeventargs(v=vs.71).aspx
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
/*
* JBox2D - A Java Port of Erin Catto's Box2D
*
* JBox2D homepage:
* Box2D homepage:
*
* org.jbox2d.collision;
//Updated to rev 139 of b2Collision.h
/** A few static final variables that don't fit anywhere else (globals in C++ code). */
public class Collision {
public static final int NULL_FEATURE = Integer.MAX_VALUE;
}
|
http://www.java2s.com/Open-Source/Java/Game/JBox2D-2.0.1/org/jbox2d/collision/Collision.java.htm
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Most of the time using this stream is sufficient, but sometimes, for example when working on a windows application, you don't have access to the console. Rather than change the code that relies on std::cout it is best to redirect its output. Luckily ostream has been designed to have its stream buffer changed. A simple approach is to redirect the output to a file. For example:
// print to the console
std::cout << "out to console" << std::endl;
// open a file
std::ofstream file("redirect.txt");
// replace the buffer in cout, remember the old one.
std::streambuf *old_buffer = std::cout.rdbuf(file.rdbuf());
// log to cout which now redirects to the file
std::cout << "to file" << std::endl;
// and restore the old buffer
std::cout.rdbuf(old_buffer);
file.close();
Sometimes redirecting to a file is inconvenient. To write to something else we need to create a new stream buffer. As I don't want to create too many dependencies, and I'd like the streambuf to be a bit easier to use, I've created a simple wrapper (it's inline for brevity).
#include <streambuf>

class BufferedStringBuf : public std::streambuf
{
public:
    BufferedStringBuf(int bufferSize)
    {
        if (bufferSize)
        {
            char *ptr = new char[bufferSize];
            setp(ptr, ptr + bufferSize);
        }
        else
            setp(0, 0);
    }

    virtual ~BufferedStringBuf()
    {
        sync();
        delete[] pbase();
    }

    virtual void writeString(const std::string &str) = 0;

private:
    int overflow(int c)
    {
        sync();
        if (c != EOF)
        {
            if (pbase() == epptr())
            {
                std::string temp;
                temp += char(c);
                writeString(temp);
            }
            else
                sputc(c);
        }
        return 0;
    }

    int sync()
    {
        if (pbase() != pptr())
        {
            int len = int(pptr() - pbase());
            std::string temp(pbase(), len);
            writeString(temp);
            setp(pbase(), epptr());
        }
        return 0;
    }
};
This class creates a buffer for storing streamed output. The class is abstract; virtual void writeString(const std::string &str) = 0; must be overridden. writeString should implement your output operation; this could write the text to an overlay on screen, perhaps some kind of console, or whatever you want.
At the end of a line, or when the ostream receives a flush, sync is called. This causes the current buffer to be passed as a string to writeString().
When the buffer is filled, the overridden function overflow is called. This causes the stream to flush and attempts to store the character that has overflowed.
The buffer is optional; when bufferSize is zero it constantly overflows. This is a less than optimal way of operating, as it has to output each character individually, but it doesn't take any extra memory.
This wrapper has been written with simplicity in mind; it might not be the most robust, but it only depends on the STL and it's pretty short.
So enough of the details, let's make it do stuff! Here are some examples:
#include "BufferedStringBuf.h" #include <windows.h> const int LineSize = 256; class DebugBuf : public BufferedStringBuf { public: DebugBuf() : BufferedStringBuf(LineSize) {} virtual void writeString(const std::string &str) { OutputDebugString(str.c_str()); } };
The above class implements a log to visual studio's Output/Debug window.
To use it, take a similar approach to redirecting to a file.
std::cout << "out to console" << std::endl; // replace the buffer DebugBuf debug_buffer; std::streambuf *old_buffer = std::cout.rdbuf(&debug_buffer); std::cout << "to visual studio debug output window" << std::endl; // restore the old buffer std::cout.rdbuf(old_buffer);
Here is another example buffer:
class MessageBoxBuf : public BufferedStringBuf
{
public:
    MessageBoxBuf() : BufferedStringBuf(LineSize) {}

    virtual void writeString(const std::string &str)
    {
        if ( str.size() > 1 ) // message box doesn't care about single characters
            MessageBox(NULL, str.c_str(), "Error", MB_OK|MB_ICONERROR);
    }
};
This one displays a message box when flushed.
It's probably not such a good one if you've got lots of text, but it just shows what can be done.
Finally, here is a useful buffer for chaining these buffers together.
class DupBuf : public BufferedStringBuf
{
public:
    DupBuf(std::ostream *stream1, std::ostream *stream2)
        : BufferedStringBuf(LineSize),
          buffer1(stream1->rdbuf()),
          buffer2(stream2->rdbuf())
    {
    }

    virtual void writeString(const std::string &str)
    {
        const char *ptr = str.c_str();
        std::streamsize size = std::streamsize(str.size());
        buffer1->sputn(ptr, size);
        buffer2->sputn(ptr, size);
        buffer1->pubsync();
        buffer2->pubsync();
    }

private:
    std::streambuf *buffer1;
    std::streambuf *buffer2;
};
This stores characters till the buffer is filled or flushed then it puts the characters in the other buffers and forces them to sync.
Used like this:
DebugBuf debug_window_buf;
std::ostream debug_stream(&debug_window_buf);
DupBuf dup_buf(&std::cout, &debug_stream);
std::streambuf *old_buff = std::cout.rdbuf(&dup_buf);
std::cout << "Print to debug output and stdio" << std::endl;
It will print to both the debug window and to std::cout. Again this is written for simplicity not performance.
Feel free to pick this code apart. I welcome constructive criticism.
Cheers
Dave
|
http://devmaster.net/forums/topic/5721-ostream-is-easy/page__pid__34990
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
The class for DII reply handler. More...
#include <DII_Reply_Handler.h>
The class for DII reply handler.
Provides a way to create requests and populate it with parameters for use in the Dynamic Invocation Interface.
Handle a location forward message. This one has a default method supplied that simply forwards to the handle_response, since that is what the legacy code did. This way we maintain backwards compatibility.
Callback method for asynchronous requests.
|
http://www.dre.vanderbilt.edu/Doxygen/5.8.1/html/tao/dynamicinterface/a00025.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
by chombee » Tue Mar 13, 2007 6:57 am
Path Following: I'm not sure how you would want to implement this. I'd probably pass a list of waypoints to the behavior. Then, pass a variable for whether or not to loop the waypoints or finish at the last one. Once the vehicle got within a certain distance that waypoint could be declared "cleared" and the target could move to the next one. This could be fairly easy. Use "seek" and "obstacle avoid" and cycle through waypoints as the targets.
Proper leader following: You've got most of this coded already. The separation is the only thing that really needs to be added. Separation would probably involve having a larger collision sphere and looking for collisions then applying a force normal to the collision surface.
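Here's a rough, library-agnostic sketch of the waypoint-cycling idea (plain Python; the vehicle object with a position tuple and a seek() behaviour is assumed for illustration, it's not from the code posted in this thread):

import math

class PathFollower:
    """Cycle through a list of (x, y) waypoints, steering via vehicle.seek()."""

    def __init__(self, waypoints, loop=True, clear_distance=2.0):
        self.waypoints = list(waypoints)
        self.loop = loop
        self.clear_distance = clear_distance
        self.index = 0

    def update(self, vehicle):
        # Called once per frame.
        if self.index >= len(self.waypoints):
            return                               # finished (non-looping path)
        tx, ty = self.waypoints[self.index]
        px, py = vehicle.position
        if math.hypot(tx - px, ty - py) < self.clear_distance:
            self.index += 1                      # waypoint "cleared"
            if self.loop and self.index == len(self.waypoints):
                self.index = 0                   # wrap around and keep going
            return
        vehicle.seek((tx, ty))                   # delegate to the existing seek behaviour

Obstacle avoidance would just run alongside this, the same way it does for a single seek target.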
by mavasher » Tue Mar 13, 2007 6:16 pm
by ynjh_jo » Mon Mar 19, 2007 9:00 am
chombee wrote: Worse, if the target is just in front of an obstacle, the character may arrive at the target and stop, but then the CollisionTube may be colliding with the obstacle, which will cause the character to run around needlessly trying to avoid the obstacle. The best way to fix this I think would be to vary the length of the character's CollisionTube frame-by-frame according to the character's speed, so that when the character comes to rest at a target location the CollisionTube will shrink to a sphere around the character and not collide with any obstacles beyond the target. I had trouble implementing this though and have given up for now. There is the issue of the Character's CollisionTube varying in length frame by frame depending on the Character's speed; that should be possible with CollisionTube.setPointB but I haven't got it right yet. That would be a good fix.
tube = self.tubenp.node().getSolid(0)
tube.setPointB(Point3(tube.getPointA() + Point3(0, 15. * self._velocity.length(), 0)))
chombee wrote:collisions seem to occur fairly often with a lot of characters in a small space. Is it possible to collide two CollisionTubes against each other? If so, the characters CollisionTubes could be used to detect character-character collisions also.
for v in self.plugins[self.plugin].vehicles: v.neighbors = None
by chombee » Mon Mar 19, 2007 1:27 pm
Since the tubes' lengths are not uniform anymore, it looks more natural. The slow char wouldn't steer away excessively before reaching a sensible distance to the obstacle.
by ynjh_jo » Mon Mar 19, 2007 2:27 pm
by chombee » Tue Mar 27, 2007 1:31 pm
by ynjh_jo » Tue Mar 27, 2007 2:49 pm
by chombee » Fri Apr 27, 2007 10:05 am
by ynjh_jo » Tue May 01, 2007 11:47 am
by chombee » Wed May 02, 2007 6:50 am
by chombee » Thu May 03, 2007 8:37 am
by enn0x » Thu May 03, 2007 10:57 am
Or can anyone point me to a good implementation of A* in Python?
by chombee » Thu May 03, 2007 11:27 am
by chombee » Fri May 04, 2007 8:50 am
by ynjh_jo » Fri May 04, 2007 1:37 pm
by chombee » Fri May 11, 2007 1:11 pm
Sometimes none of the trees seem to get added to the collision detection system. No collisions between characters' collision tubes and trees' collision spheres occur; characters walk right through trees. Framerate is much better :) Sometimes only _some_ of the trees get added to the collision system. Is this something to do with flattenStrong?
by ynjh_jo » Sun May 13, 2007 6:30 am
Sometimes none of the trees seem to get added to the collision detection system. Sometimes only _some_ of the trees get added to the collision system. Is this something to do with flattenStrong?
Sometimes character models fall under the terrain model. Could it be that the CollisionRay used with CollisionHandlerFloor is on rare occasions shooting right through a tiny gap in the terrain model? Though I don't see why there should be gaps in the terrain model, this problem did occur with placing trees, leading to code that 'wiggles' a tree until its ray hits some terrain. Maybe try using the characters collision sphere to detect the height of the terrain and set the Z accordingly? (i.e. don't use collisionhandlerfloor)
by chombee » Mon May 14, 2007 8:10 am
Description: Calculates the minimum and maximum vertices of all Geoms at this NodePath's bottom node and below. This is a tight bounding box; it will generally be tighter than the bounding volume returned by get_bounds() (but it is more expensive to compute). The return value is true if any points are within the bounding volume, or false if none are.
by Liquid7800 » Mon May 14, 2007 8:59 am
Interesting, I don't find this NodePath.getTightBounds function that you used anywhere in the online documentation on this site, or by running Python's help function on NodePath.
getTightBounds is a method in Panda; find it in pandac\libpandaModules.py. getTightBounds returns the tight bounds of a geometry under the given node, by iterating through the vertices to find the outermost vertices. It respects the node's transformation, and the result is in render coordinate space. For example, say you have a cube of 1,1,1 in size. If you change its heading to 45, you would get (1.41421, 1.41421, 1) bounds, the AABB.
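Quick sanity check of those numbers (plain Python, just the trigonometry; rotated_aabb_extent is a throwaway helper for this post, not a Panda call):

import math

def rotated_aabb_extent(width, depth, heading_degrees):
    # Axis-aligned extent of a width x depth rectangle rotated about the vertical axis.
    h = math.radians(heading_degrees)
    x = abs(width * math.cos(h)) + abs(depth * math.sin(h))
    y = abs(width * math.sin(h)) + abs(depth * math.cos(h))
    return x, y

print(rotated_aabb_extent(1, 1, 45))   # -> (1.4142..., 1.4142...), matching the quoted bounds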
by ynjh_jo » Mon May 14, 2007 11:47 am
chombee wrote:I wonder why when I scale the model to a different size the bounding sphere doesn't end up a different size as well.
if models.has_key(path):
    model = models[path].copyTo(parent)
    scale = 0.75 + random.random() / 2
    model.setScale(scale)
    model.setPythonTag('radius', model.getPythonTag('radius') * scale)
by chombee » Mon May 14, 2007 3:45 pm
by chombee » Tue May 15, 2007 12:58 pm
The stored radius in the Python tag is based on the bounds of the default scale
bounds=model.getTightBounds()
model.setScale(scale)
models = {}  # dict of {pathToModel: model}

def setRadiusTag(model):
    """Set a tag on the given model storing the model's XY radius (ignoring
    how short or tall it is in the Z dimension). This radius tag can be
    retrieved later to (for example) create a collision sphere that will
    bound the model.

    Arguments:
    model -- the model to tag

    """
    # Get the two points (in a list) that define the axis-aligned bounding
    # box of the scaled model.
    bounds = model.getTightBounds()
    # Now subtract the two, giving a vector from one to the other, the
    # length of which is the diameter of the aabb
    vector = (bounds[1] - bounds[0])
    # Take the greatest of the X and Y components of this vector, ignoring
    # the Z component, and halve it to get the X/Y radius of the aabb
    radius = max(vector[0], vector[1]) * .5
    # Store the radius in a tag attached to the model
    model.setPythonTag('radius', radius)

def loadModel(path, parent):
    """Load a model from the file given by path, parent it to the given
    parent node and return the NodePath to the newly loaded model.

    Maintains a global dictionary of loaded models; if it is called twice
    for the same model the model is instanced instead of being loaded again.

    """
    global models
    if models.has_key(path):
        model = models[path].copyTo(parent)
        scale = 0.75 + random.random() / 2
        model.setScale(scale)
        setRadiusTag(model)
    else:
        modelRoot = loader.loadModelCopy(path)
        # New models are loaded in a non-standard way to allow flattenStrong
        # to work effectively over them.
        model = P.NodePath('model')
        scale = 0.75 + random.random() / 2
        model.setScale(scale)
        modelRoot.getChildren().reparentTo(model)
        model.reparentTo(parent)
        setRadiusTag(model)
        models[path] = model
    return model
by mavasher » Tue May 15, 2007 7:08 pm
by ynjh_jo » Tue May 15, 2007 8:19 pm
chombee wrote:ynjh_jo: I've been thinking about removing the efficient scene graph structuring you built into the terrain generator and instead just saving the whole terrain, trees and collision spheres and all, to an egg, then running it through the egg-octree function someone posted on this forum. Think it would help?
by chombee » Thu May 17, 2007 9:29 am
This is coming together nicely. I've been having the same fps problems you describe. I think it is the Actor class. Could be wrong though.
by mavasher » Thu May 17, 2007 2:44 pm
by chombee » Thu May 17, 2007 3:19 pm
by mavasher » Thu May 17, 2007 3:58 pm
def Elevation(self, objectx, objecty):
    ele = self.mHeightFieldTesselator.getElevation(objectx / self.scale, -objecty / self.scale)
    return ele
by madprocessor » Wed Jun 06, 2007 2:54 pm
by chombee » Thu Jun 07, 2007 4:21 am
|
http://www.panda3d.org/forums/viewtopic.php?p=13210
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Contents
Abstract
The iterator protocol in Python 2.x consists of two methods: __iter__() called on an iterable object to yield an iterator, and next() called on an iterator object to yield the next item in the sequence. Using a for loop to iterate over an iterable object implicitly calls both of these methods. This PEP proposes that the next method be renamed to __next__, consistent with all the other protocols in Python in which a method is implicitly called as part of a language-level protocol, and that a built-in function named next be introduced to invoke __next__ method, consistent with the manner in which other protocols are explicitly invoked.
Names With Double Underscores
In Python, double underscores before and after a name are used to distinguish names that belong to the language itself. Attributes and methods that are implicitly used or created by the interpreter employ this naming convention; some examples are:
- __file__ - an attribute automatically created by the interpreter
- __dict__ - an attribute with special meaning to the interpreter
- __init__ - a method implicitly called by the interpreter
Note that this convention applies to methods such as __init__ that are explicitly defined by the programmer, as well as attributes such as __file__ that can only be accessed by naming them explicitly, so it includes names that are used or created by the interpreter.
(Not all things that are called "protocols" are made of methods with double-underscore names. For example, the __contains__ method has double underscores because the language construct x in y implicitly calls __contains__. But even though the read method is part of the file protocol, it does not have double underscores because there is no language construct that implicitly invokes x.read().)
The use of double underscores creates a separate namespace for names that are part of the Python language definition, so that programmers are free to create variables, attributes, and methods that start with letters, without fear of silently colliding with names that have a language-defined purpose. (Colliding with reserved keywords is still a concern, but at least this will immediately yield a syntax error.)
The naming of the next method on iterators is an exception to this convention. Code that nowhere contains an explicit call to a next method can nonetheless be silently affected by the presence of such a method. Therefore, this PEP proposes that iterators should have a __next__ method instead of a next method (with no change in semantics).
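To make the proposal concrete, here is a small sketch (a hypothetical example, not taken from the PEP itself) of an iterator written with the proposed __next__ method and driven through the proposed next() built-in:

class CountDown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

it = CountDown(3)
print(next(it), next(it), next(it))   # prints: 3 2 1

Under the proposal, client code never spells it.__next__() directly, just as it rarely spells x.__len__() instead of len(x).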
Double-Underscore Methods and Built-In Functions
The Python language defines several protocols that are implemented or customized by defining methods with double-underscore names. In each case, the protocol is provided by an internal method implemented as a C function in the interpreter. For objects defined in Python, this C function supports customization by implicitly invoking a Python method with a double-underscore name (it often does a little bit of additional work beyond just calling the Python method.)
Sometimes the protocol is invoked by a syntactic construct:
- x[y] --> internal tp_getitem --> x.__getitem__(y)
- x + y --> internal nb_add --> x.__add__(y)
- -x --> internal nb_negative --> x.__neg__()
Sometimes there is no syntactic construct, but it is still useful to be able to explicitly invoke the protocol. For such cases Python offers a built-in function of the same name but without the double underscores.
- len(x) --> internal sq_length --> x.__len__()
- hash(x) --> internal tp_hash --> x.__hash__()
- iter(x) --> internal tp_iter --> x.__iter__()
Following this pattern, the natural way to handle next is to add a next built-in function that behaves in exactly the same fashion.
- next(x) --> internal tp_iternext --> x.__next__()
Further, it is proposed that the next built-in function accept a sentinel value as an optional second argument, following the style of the getattr and iter built-in functions. When called with two arguments, next catches the StopIteration exception and returns the sentinel value instead of propagating the exception. This creates a nice duality between iter and next:
iter(function, sentinel) <--> next(iterator, sentinel)
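A short, hypothetical illustration of that duality (the names are invented for the example):

values = iter([1, 2])
print(next(values, "done"))   # 1
print(next(values, "done"))   # 2
print(next(values, "done"))   # "done" is returned instead of raising StopIteration

import io
chunks = io.StringIO("abcdef")
for piece in iter(lambda: chunks.read(2), ""):   # call until the sentinel "" appears
    print(piece)                                 # ab, cd, ef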
Previous Proposals
This proposal is not a new idea. The idea proposed here was supported by the BDFL on python-dev [1] and is even mentioned in the original iterator PEP, PEP 234:
(In retrospect, it might have been better to go for __next__() and have a new built-in, next(it), which calls it.__next__(). But alas, it's too late; this has been deployed in Python 2.2 since December 2001.)
Objections
There have been a few objections to the addition of more built-ins. In particular, Martin von Loewis writes [2]:
I dislike the introduction of more builtins unless they have a true generality (i.e. are likely to be needed in many programs). For this one, I think the normal usage of __next__ will be with a for loop, so I don't think one would often need an explicit next() invocation. It is also not true that most protocols are explicitly invoked through builtin functions. Instead, most protocols are can be explicitly invoked through methods in the operator module. So following tradition, it should be operator.next. ... As an alternative, I propose that object grows a .next() method, which calls __next__ by default.
Transition Plan
Two additional transformations will be added to the 2to3 translation tool [3]:
- Method definitions named next will be renamed to __next__.
- Explicit calls to the next method will be replaced with calls to the built-in next function. For example, x.next() will become next(x).
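As a hypothetical before/after sketch of these two rewrites (the class and names are invented for illustration):

# Before (Python 2 style): the method is named next and is called directly.
#
#     class Squares:
#         def __init__(self):
#             self.n = 0
#         def __iter__(self):
#             return self
#         def next(self):
#             self.n += 1
#             return self.n ** 2
#
#     s = Squares()
#     s.next()
#
# After translation: the method is renamed and the call goes through the built-in.

class Squares:
    def __init__(self):
        self.n = 0

    def __iter__(self):
        return self

    def __next__(self):
        self.n += 1
        return self.n ** 2

s = Squares()
print(next(s))   # 1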
Collin Winter looked into the possibility of automatically deciding whether to perform the second transformation depending on the presence of a module-level binding to next [4] and found that it would be "ugly and slow". Instead, the translation tool will emit warnings upon detecting such a binding. Collin has proposed warnings for the following conditions [5]:
- Module-level assignments to next.
- Module-level definitions of a function named next.
- Module-level imports of the name next.
- Assignments to __builtin__.next.
Implementation
A patch with the necessary changes (except the 2to3 tool) was written by Georg Brandl and committed as revision 54910.
|
http://www.python.org/dev/peps/pep-3114/
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Windows 2000: WMI does not enforce the use of a non-blank password for the Computer A account, but a blank password on an administrator group account is not recommended.
Set the authentication level to RPC_C_AUTHN_LEVEL_PKT_PRIVACY or 6 if the namespace to which you are connecting on the remote computer requires an encrypted connection before it will return data. You can also use this authentication level, even if the namespace does not require it. This ensures that data is encrypted as it crosses the network. If you attempt to set a lower authentication level than is allowed, an access denied message will be returned. For more information, see Requiring an Encrypted Connection to a Namespace.
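For comparison, a rough sketch of the same kind of connection from Python, assuming the third-party pywin32 package is installed; the computer name, namespace, and credentials below are placeholders:

import win32com.client

locator = win32com.client.Dispatch("WbemScripting.SWbemLocator")
services = locator.ConnectServer(
    "RemoteComputer",            # placeholder remote computer name
    r"root\cimv2",
    r"DOMAIN\User",              # placeholder credentials
    "password")

# Request packet privacy (RPC_C_AUTHN_LEVEL_PKT_PRIVACY = 6) so that data is
# encrypted as it crosses the network.
services.Security_.AuthenticationLevel = 6

for process in services.ExecQuery("Select Name from Win32_Process"):
    print(process.Name)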
The following VBScript code example connects to a group of remote computers in the same domain by creating an array of remote computer names and then displaying names of the Plug and Play devices—instances of Win32_PnPEntity—on each computer. To run the script below, you must be an administrator on the remote computers. Note that the "\\" required before the remote computer name is added by the script following the impersonation level setting. For more information about WMI paths, see Describing the Location of a WMI Object.
On Error Resume Next
arrComputers = Array("Computer1","Computer2","Computer3")
' Connect to each computer with impersonation and query its Plug and Play devices.
For Each strComputer In arrComputers
    Set objWMIService = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
    Set colItems = objWMIService.ExecQuery("Select * from Win32_PnPEntity",,48)
    For Each objItem in colItems
        Wscript.Echo "-----------------------------------"
        Wscript.Echo "Win32_PnPEntity instance"
        Wscript.Echo "-----------------------------------"
        Wscript.Echo "Name: " & objItem.Name
        Wscript.Echo "Status: " & objItem.Status
    Next
Next
The following VBScript code example enables you to connect to a remote computer using different credentials. For example, a remote computer in a different domain or connecting to a remote computer requiring a different user name and password. In this case, use the SWbemLocator.ConnectServer connection.
' Full Computer Name
' can be found by right-clicking My Computer,
' then click Properties, then click the Computer Name tab,
' or use the computer's IP address
strComputer = "FullComputerName"
strDomain = "DOMAIN"
Wscript.StdOut.Write "Please enter your user name:"
strUser = Wscript.StdIn.ReadLine
Set objPassword = CreateObject("ScriptPW.Password")
Wscript.StdOut.Write "Please enter your password:"
strPassword = objPassword.GetPassword()
Set objSWbemLocator = CreateObject("WbemScripting.SWbemLocator")
Set objSWbemServices = objSWbemLocator.ConnectServer(strComputer, _
    "root\cimv2", _
    strUser, _
    strPassword, _
    "MS_409", _
    "ntlmdomain:" + strDomain)
Set colSwbemObjectSet = _
    objSWbemServices.ExecQuery("Select * From Win32_Process")
For Each objProcess in colSWbemObjectSet
    Wscript.Echo "Process Name: " & objProcess.Name
Next
|
http://msdn.microsoft.com/en-us/library/aa389290
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
UPDATE 26-06-2012: Non-Windows Users
As an alternative to Bloom for non-Windows users, I have uploaded a Processing sketch, named SensorMonkeySerialNet, to our GitHub account. This sketch is a serial-to-network proxy that also serves Flash Socket Policy files inline. It can be used instead of Bloom in Step 3 for users running Mac OS or Linux.
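If you only need the serial-to-network part, something along these lines works as a starting point (a rough Python sketch, not the actual SensorMonkeySerialNet code; it assumes the pyserial package, does not serve Flash socket policy files, and the port names and numbers are placeholders):

import socket
import serial  # pip install pyserial

SERIAL_PORT = "/dev/ttyUSB0"   # placeholder: adjust for your system
BAUD_RATE = 9600               # placeholder: must match the Arduino sketch
TCP_PORT = 5331                # placeholder: any free port

def run_proxy():
    ser = serial.Serial(SERIAL_PORT, BAUD_RATE, timeout=0.1)
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", TCP_PORT))
    server.listen(1)
    print("Waiting for a client on port", TCP_PORT)
    client, addr = server.accept()
    print("Client connected from", addr)
    try:
        while True:
            data = ser.read(64)        # read whatever bytes are available
            if data:
                client.sendall(data)   # forward them unchanged to the client
    finally:
        client.close()
        server.close()
        ser.close()

if __name__ == "__main__":
    run_proxy()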
Step 1: Gathering Materials
Can you check that you are not receiving any JavaScript errors inside your browser? Can you also verify that you can see your stream in the 'Remote Sensors' tab in the SensorMonkey Control Panel (this will tell you whether your data is being streamed correctly).
If you're not receiving any errors and your data is being streamed correctly then double check that you have entered the correct namespace/streamname details in your HTML viewer page (as described in Step 6). These are unique for every user.
I got Bloom to connect and the baud rate is correct; it's just that the graph is all messed up. How do I set up the Format Description File to receive decimal numbers from the PICAXE?
SensorMonkey only supports binary data encodings. So, when sending the decimal number 72, for example, you can't send it as two separate characters, i.e. "72" where '7' and '2' are the characters. Instead, you have to encode it as an 8-bit binary number, i.e. a single byte containing a bit sequence of 01001000. You do this by writing the raw value to the serial port directly.
Basically, if you are using the PICAXE function sertxd, make sure you DO NOT put a # symbol in front of the variable or value that you are trying to send. Then, in your format description file, you can use an unsigned 8-bit integer (i.e. u8) to read and graph the data in the SensorMonkey control panel.
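To see the difference between the two encodings, here is a small illustration in Python (not PICAXE BASIC) for the value 72:

import struct

value = 72

as_text = str(value).encode("ascii")   # b'72' -> two bytes: 0x37 ('7') and 0x32 ('2')
as_u8 = struct.pack("B", value)        # b'H'  -> one byte: 0x48 (binary 01001000)

print(list(as_text))   # [55, 50]
print(list(as_u8))     # [72]

The u8 field in the format description file expects the second form: one raw byte per reading.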
Some suggestions to make this more accessible:
1) It should be possible to create an account on SensorMonkey.com
without Facebook. Facebook-centric is a definite disadvantage.
2) Linux support would be extremely helpful. Bloom or equivalent
is probably not necessary on a Linux machine.
Thanks for the comments and apologies for the belated reply.
In response to point (1), we are currently assessing alternatives to Facebook for user logins.
As regards point (2), I have uploaded a Processing sketch to our GitHub account. The sketch is named SensorMonkeySerialNet and runs on Mac OS and Linux. It's not as full-featured as Bloom, but should enable users to connect their Arduinos to SensorMonkey without needing Windows.
Yeah NO facebook
And of course anything Linux is good; maybe it'll work with Wine?
HANKIENSTIEN
rduino.com
|
http://www.instructables.com/id/Drive-a-webpage-in-real-time-using-Arduino-Sensor/?utm_medium=twitter&utm_source=twitterfeed
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
26 June 2008 07:41 [Source: ICIS news]
SINGAPORE (ICIS news)--Northeast Asian caprolactam contracts for June were mainly settled $60-65/tonne higher at $2,560-2,575/tonne on the back of rising raw material costs, producers and end-users said on Thursday.
The settlements were agreed on a CFR (cost and freight) NE Asia (northeast Asia) basis.
Producers said they planned to raise prices further in July on cost pressures, although no fresh nominations were heard.
May contracts were rolled over from the previous month at $2,500-2,510/tonne amid weaker-than-expected demand from the downstream nylon market.
In the spot market, caprolactam spot prices were traded at $2,560-2,590/tonne on Wednesday, up $10/tonne at the lower end of the range from a week earlier, according to global chemical market intelligence service ICIS pricing.
Key suppliers to.
|
http://www.icis.com/Articles/2008/06/26/9135526/Asia-June-capro-contracts-settled-up-60-65tonne.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
16 November 2010 06:18 [Source: ICIS news]
SHANGHAI (ICIS)--China has extended antidumping (ADD) duties on imports of monoethanolamine and diethanolamine from the US, Japan, Malaysia and Taiwan for another five years, the country's Ministry of Commerce said on Tuesday.
The measure came into effect from 14 November this year after one-year final review investigation indicated that ceasing of the ADD would harm the domestic ethanolamine sector, the Ministry of Commerce said.
Under the latest decision, the imports from
The ADD levied in the above four countries ranged from 5.3%-74%, the ministry.
|
http://www.icis.com/Articles/2010/11/16/9410715/china+extends+add+on+ethanolamine+imports+for+five+years.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
07 May 2012 02:00 [Source: ICIS news]
SINGAPORE (ICIS)--Here are some of the top stories from ICIS Asia and the
(Please click on the link to read the full text)
Focus -
Focus - Asia BR downtrend to persist on weak demand, plunging BD prices
Focus -
Spot toluene prices in eastern
Focus - Weak nylon sector to temper
A weakness in nylon chips (polyamide) sector may temper efforts by producers and traders of feedstock caprolactam to hike prices, market players said on Thursday.
Focus -
Asia’s soap noodles prices are expected to remain firm because of rising feedstock costs, but demand at the higher prices continues to be slow, market sources said on Thursday.
Focus - Asia’s hydrous ethanol prices to be stable-to-firm on low supply
Asia’s hydrous ethanol prices are expected to remain at their current levels with some firming up expected in the third quarter because of tightening supply from Brazil, market sources said Wednesday.
Focus - Asia BD may extend falls after 14% slump on weak demand
Spot butadiene (BD) prices in Asia look set to continue falling, after shedding 14% last week to below $3,000/tonne (€2,280/tonne), as demand weakened because of shutdowns at downstream plants, industry sources said on Tuesday.
Focus - Refining loss to persist for China’s Sinopec in '12 - analysts
China’s Sinopec is expected to continue incurring losses from its refining operations throughout the year, with earnings from its chemicals segment expected to fall, as oil product prices remained regulated in the world’s second biggest economy, analysts said on Monday.
Qatar's QAPCO on track to start up LDPE-3 unit by end May
Qatar Petrochemical Co (QAPCO) is on track to start up its new 300,000 tonne/year low density polyethylene (LDPE) facility, or LDPE-3, at Mesaieed in Qatar by the end of May.
|
http://www.icis.com/Articles/2012/05/07/9557047/asia+top+stories+-+weekly+summary.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Timeline
02/29/08:
- 23:20 Ticket #13498 (xephem 3.7 will not build on MacPorts 1.5.2 on Leopard) closed by
- fixed: Update committed in r34642.
- 23:19 Changeset [34642] by
- xephem: update to 3.7.2. Fixes #13498.
- 22:06 Ticket #13494 (Error building libidl) closed by
- worksforme: No response from reporter, and it builds fine for me on Leopard/x86. …
- 19:09 Changeset [34641] by
- Fix descriptions.
- 19:03 Ticket #14530 (python25: pydoc2.5 searching for docs in the wrong directory) created by
- Please review and committ the attached file. Changes made: * change …
- 18:47 Changeset [34640] by
- Changing share/doc/$name/Documentation to share/doc/$name
- 18:33 Changeset [34639] by
- New port python30-doc as motivated by #8488.
- 18:15 Ticket #14529 (python30: pydoc3.0 searching for docs in the wrong directory) created by
- Please review and committ the attached file. Changes made: * lint (patch …
- 18:04 Ticket #13487 (new port: py25-utidylib) closed by
- fixed: Thanks, committed in r34638 with some small corrections. I listed you as …
- 18:02 Changeset [34638] by
- new port: py25-utidylib (py25 version of py-utidylib). Closes #13487.
- 17:22 Ticket #13486 (new port: py25-ctypes) closed by
- invalid: As it says on the home page, ctypes is included with Python 2.5, and 'from …
- 17:19 Ticket #14527 (python25: hashlib module broken) closed by
- fixed: Just found .. …
- 16:54 Ticket #14528 (bittorrent-5.2.0 Update to the last version.) created by
- I'm new user and this my first ticket for contribute with this project. …
- 16:53 Ticket #14527 (python25: hashlib module broken) created by
- g4:~ thomas$ python2.5 Python 2.5.2 (r252:60911, Feb 29 2008, 16:18:01) …
- 15:56 Ticket #14526 (JRuby Portfile) created by
- Here's a very basic JRuby Portfile for version 1.0.3 which works on my …
- 15:27 Changeset [34637] by
- New port python25-doc as motivated by #8488.
- 15:06 Changeset [34636] by
- lint
- 13:57 Changeset [34635] by
- lint
- 13:56 Changeset [34634] by
- lint
- 13:55 Changeset [34633] by
- lint
- 13:52 Ticket #14360 (py-ipython conflicts with py25-ipython) closed by
- fixed: Fixed in r34629, r34630, r34631, r34632
- 13:51 Changeset [34632] by
- update according to #14360
- 13:50 Changeset [34631] by
- add python-version to installed binary
- 13:48 Changeset [34630] by
- apply patch from #14360
- 13:48 Changeset [34629] by
- apply patch from #14360
- 13:13 Changeset [34628] by
- follow upstream steatlh upgrade, remove unnecessary patch to molden.f
- 12:47 Changeset [34627] by
- Total number of ports parsed: 4532 Ports successfully parsed: 4532 …
- 12:43 Changeset [34626] by
- minor patch for ANSI wxWidgets, such as the old wxgtk port
- 12:38 Changeset [34625] by
- net/zabbix: Add variant descriptions
- 12:32 Ticket #14525 (Patch: add +sqlite3 variant to net/zabbix) closed by
- fixed: Done. r34624 I'll also document the other variants and add the larger …
- 12:31 Changeset [34624] by
- net/zabbix: Add a sqlite3 variant for the server, minor wordsmithing
- 12:19 Changeset [34623] by
- version bump to 1.91
- 12:15 Ticket #14525 (Patch: add +sqlite3 variant to net/zabbix) created by
- Zabbix can also use sqlite3 for its database backend, this is a minimal …
- 11:34 Changeset [34622] by
- lang/python24-doc: Set svn:keywords and svn:eol-style for Portfile. Please …
- 11:19 Changeset [34621] by
- python24-doc: cleanups. Remove unused variable definitions, remove extra …
- 10:49 Ticket #14502 (lyx-1.4.0pre3 fetch failure) closed by
- duplicate: I'll call this a duplicate of #11163 since the fetch failure of …
- 10:24 Ticket #14384 (Error: checksum mismatch for ImageMagick) closed by
- invalid: This was fixed in r34267, also the ImageMagick port has now been updated …
- 10:09 Changeset [34620] by
- New upstream scala-2.7.0-RC3 release of Scala.
- 10:07 Ticket #13449 (pdb crashes python24 2.4.4 on Leopard) closed by
- fixed: Thanks. Marking fixed.
- 08:37 Ticket #14524 (ffmpeg 0.4.9-pre1 +universal Configure error - build failure) created by
- {{{$ port install ffmpeg +universal ... ---> Configuring ffmpeg Error: …
- 07:26 Ticket #14523 (sqlite3 fails to build on new macos/macports installation (NAWK variable ...) created by
- I just installed macports for the first time on a new mac system, and when …
- 07:16 Changeset [34619] by
- Correcting descriptions
- 07:11 Changeset [34618] by
- Set revision to 0 as this is the initial version
- 07:05 Ticket #14464 (py-sqlalchemy, py25-sqlalchemy don't include doc/examples) closed by
- fixed: Fixed in r34617
- 07:04 Changeset [34617] by
- Update py-sqlachemy and py25-sqlalchemy to install docs/examples …
- 06:46 Ticket #13985 (RFE: Move select files from python_select to appropriate python ports) closed by
- fixed: Committed for python23 in r34437 and r34438. Committed for python30 in …
- 06:45 Changeset [34616] by
- lang/python30: Add select file for python_select with permission from …
- 06:39 Ticket #14373 (Upgrade py-yaml to 3.05) closed by
- fixed: Fixed in r34615
- 06:37 Changeset [34615] by
- Update py-yaml to v3.05 and add corresponding py25-yaml (maintainer …
- 06:29 Ticket #14341 (Update py-setuptools, py25-setuptools to 0.6c8) closed by
- fixed: Ok, fixed in r34614
- 06:29 Changeset [34614] by
- Update py-setuptools and py25-setuptools to v0.6c8. Fixes #14341
- 06:07 Ticket #14372 (NEW: py-daemon, py25-daemon) closed by
- fixed: Fixed in r34613
- 06:07 Changeset [34613] by
- New ports: py-daemon and py25-daemon. Fixes #14372
- 06:05 Ticket #14357 (NEW: sqlalchemy-migrate) closed by
- fixed: Ok, fixed in r34612.
- 06:05 Changeset [34612] by
- New ports: py-sqlalchemy-migrate and py25-sqlalchemy-migrate. Fixes #14357
- 06:00 Ticket #14356 (NEW: py-py py25-py) closed by
- fixed: Committed in r34611 with some minor changes.
- 05:59 Changeset [34611] by
- New ports: py-py and py25-py. Fixes #14356
- 05:54 Ticket #14522 (New port mcabber) created by
- mcabber is a small Jabber console client. It includes features like SSL, …
- 04:33 Ticket #8488 (python24: pydoc cannot find topics) closed by
- fixed: Fixed by creating a python24-doc package in r34610.
- 04:32 Changeset [34610] by
- Initial commit of a python24 Doc package. Fixes #8488. If by any chance …
- 04:25 Changeset [34609] by
- deluge: update to 0.5.8.5
- 03:12 Ticket #14521 (aquaterm-1.0.1 build failure) created by
- Hi... I have some problems installing the port aquaterm. Every time I get …
- 03:07 Ticket #14520 (libsdl_mixer-framework-1.2.8 build failure) created by
- Hi... I have some problems installing the port libsdl_mixer-framework. …
- 03:02 Ticket #14519 (libfuse 2.7.1-3 build error: fuse_param.h: No such file or directory) created by
- When building libfuse 2.7.1-3 on Mac OS 10.5.2 (Intel): […]
- 02:16 Changeset [34608] by
- claws-mail: update to 3.3.1
- 02:14 Changeset [34607] by
- Change maintainer to sbranzo@… as he wishes by private email.
- 00:49 Ticket #14518 (libpython3.0.dylib not being built) created by
- With MacPorts 1.600 the python30 port does not build its shared libpython.
- 00:45 Changeset [34606] by
- Total number of ports parsed: 4524 Ports successfully parsed: 4524 …
- 00:42 Changeset [34605] by
- update snapshot
- 00:08 Changeset [34604] by
- bump revision
02/28/08:
- 23:16 Ticket #13460 (gnome-keyring: parse error before "size_t") closed by
- duplicate
- 23:03 Ticket #13441 (gunits fails to build on OS X 10.5) closed by
- fixed: Fixed in r34603. The configure flags really are needed to build on 10.5 …
- 22:57 Changeset [34603] by
- gunits: fix issues introduced in r34595.
- 21:00 Ticket #12281 (BUG: p5-mac-apps-launch broken dependancy) closed by
- fixed: ricci fixed this in r34589.
- 20:54 Ticket #13441 (gunits fails to build on OS X 10.5) reopened by
- Bad. You've caused two problems: 1. Before r34595, the port correctly …
- 20:08 Ticket #13455 (mp3info doesn't compile on Leopard PPC) closed by
- fixed: Committed in r34602. Thanks!
- 20:07 Changeset [34602] by
- mp3info: fix building on Leopard. Closes #13455.
- 19:38 Ticket #14513 (Submission of new portfile for radlib) closed by
- fixed: committed in r34601 with some whitespace changes, thanks
- 19:37 Changeset [34601] by
- devel/radlib: new port, closes #14513
- 18:19 Ticket #14516 (New portfile for wview) closed by
- invalid: Dup of #14517
- 18:18 Ticket #14515 (New portfile for wview) closed by
- invalid: Dup of #14517
- 18:18 Ticket #14514 (New portfile for wview) closed by
- invalid: Dup of #14517
- 18:09 Ticket #14517 (New port submission for wview v3.6.0) created by
- New portfile for wview3.6.0
- 18:08 Ticket #14516 (New portfile for wview) created by
- New portfile for wview v3.6.0
- 18:07 Ticket #14515 (New portfile for wview) created by
- New portfile for wview v3.6.0
- 18:07 Ticket #14514 (New portfile for wview) created by
- New portfile for wview v3.6.0
- 18:04 Ticket #14513 (Submission of new portfile for radlib) created by
- New portfile for radlib (Rapid Application Development Library). This is …
- 18:02 Ticket #12026 (BUG: xmlrpc-c-1.06.11 - bad checksum) closed by
- fixed: Adjusted in r34600.
- 18:02 Changeset [34600] by
- xmlrpc-c: checksums updated for stealth update; Closes #12026
- 17:56 Ticket #14467 (aircrack-ng 0.9.3 is out) closed by
- fixed: Committed in r34598. Changed maintainer to nomaintainer as no reply to …
- 17:55 Changeset [34599] by
- maintainer => nomaintainer
- 17:54 Changeset [34598] by
- new version 0.9.3
- 17:43 Ticket #11589 (BUG: gdk-pixbuf puts its docs under $prefix/doc) closed by
- fixed: Fixed in r34596
- 17:42 Changeset [34597] by
- patch file naming
- 17:41 Changeset [34596] by
- Fixes #11589
- 17:37 Ticket #13441 (gunits fails to build on OS X 10.5) closed by
- fixed: Thanks, committed in r34595.
- 17:35 Changeset [34595] by
- gunits: fix building on Leopard. Closes #13441.
- 17:23 Ticket #14507 (UPDATE: version 0.9 of editors/ed) closed by
- fixed: committed in r34594, thanks
- 17:23 Changeset [34594] by
- editors/ed: update to 0.9, closes #14507
- 17:11 Ticket #7103 (BUG: unix2dos-2.2 master_site gone) closed by
- fixed: Fixed by r31021
- 16:33 Ticket #13386 (upgrade ploticus to 2.33 plus 8 patches) closed by
- fixed: Thanks, updated to version 2.40 in r34593.
- 16:26 Changeset [34593] by
- ploticus: update to version 2.40. Closes #13386.
- 15:20 Changeset [34592] by
- science/libframe: no need to escape "-" in regular expression
- 14:59 Changeset [34591] by
- science/libframe: use release directory for livecheck
- 14:09 Ticket #14512 (UPDATE: redland-1.0.7) created by
- Update the redland port to version 1.0.7.
- 13:42 Changeset [34590] by
- transmission-x11: update to 1.06
- 13:41 Changeset [34589] by
- make port lint happy reduce maintainer info fix dependency …
- 13:08 Ticket #14511 (UPDATE: new version 0.8.1 of ruby/rb-rake) created by
- New version 0.8.1 of ruby/rb-rake available
- 12:52 Changeset [34588] by
- whitespace
- 12:46 Changeset [34587] by
- Total number of ports parsed: 4523 Ports successfully parsed: 4523 …
- 11:49 Ticket #14510 (add sourceforge as a master site for lcms) closed by
- fixed: added in r34585, thanks
- 11:45 Changeset [34586] by
- graphics/lcms: fix lint warnings
- 11:45 Changeset [34585] by
- graphics/lcms: add sourceforge to master_sites
- 11:32 Ticket #14510 (add sourceforge as a master site for lcms) created by
- Currently the source site for lcms is down () …
- 11:24 Changeset [34584] by
- update and fix qt4-mac
- 11:23 Changeset [34583] by
- Update to version 0.2.3
- 11:00 Ticket #13434 (gnubg won't build in Leopard (OS X 10.5)) closed by
- fixed: Fixed in r34582.
- 10:59 Changeset [34582] by
- gnubg: fix building on Leopard. Closes #13434.
- 10:39 Ticket #14433 (ossp-uuid 1.6.0_0+universal - build failure) closed by
- fixed: Fixed, r34581.
- 10:38 Changeset [34581] by
- add missing LDFLAGS when linking lib (#14433)
- 10:18 Changeset [34580] by
- devel/bzrtools: no need to escape . outside of regular expression
- 10:14 Changeset [34579] by
- sysutils/duplicity: correct livecheck regular expression
- 10:14 Changeset [34578] by
- science/metaio: correct livecheck regular expression
- 10:14 Changeset [34577] by
- science/libframe: use correct livecheck regular expression
- 10:14 Changeset [34576] by
- python/py25-baz: correct livecheck regular expression
- 10:14 Changeset [34575] by
- py-paramiko/py25-paramiko: correct livecheck regular expression
- 10:14 Changeset [34574] by
- py-dateutil/py25-dateutil: update to 1.4
- 10:14 Changeset [34573] by
- math/fftw-3-single: correct livecheck regular expression
- 10:13 Changeset [34572] by
- devel/bzrtools: correct livecheck regular expression
- 10:13 Changeset [34571] by
- devel/bzr-gtk: correct livecheck regular expression
- 10:13 Changeset [34570] by
- devel/bzr: correct livecheck regular expression
- 10:13 Changeset [34569] by
- audio/vorbis-tools: correce livecheck regular expression
- 10:11 Ticket #14463 (Mode line should be in a line) closed by
- fixed: Fixed, r34568.
- 10:10 Changeset [34568] by
- modeline warts need to be on one looong line (#14463)
- 10:06 Ticket #13964 (Routine to remove MacPorts should recommend a shell) closed by
- fixed: Fixed, now says you need bash.
- 10:06 FAQ edited by
- (diff)
- 10:04 FAQ edited by
- Ticket #13964 (diff)
- 09:09 Changeset [34567] by
- games/glob2: Updated to version 0.9.2.
- 08:36 Ticket #14509 (ffmpeg won't build on x86 / 10.4.11) created by
- Attempting to install ffmpeg via {{{sudo port -dv install ffmpeg}} on an …
- 08:35 Changeset [34566] by
- version bump to 5.2.6RC1
- 06:33 Changeset [34565] by
- update patch naming convention per current lint prefs
- 06:32 Changeset [34564] by
- update to 4.7.4
- 06:21 Ticket #14508 (UPDATE: SQLObject 0.10.0b3) created by
- py-sqlobject is a bit dated. Please update to the latest. and add the port …
- 06:21 Changeset [34563] by
- devel/bzr: depend on py25-curl and py25-docutils
- 05:56 Ticket #13500 (install of sqlite3 fails on Leopard) closed by
- worksforme: gawk does the job perfectly […] and building sqlite3 yields […] I …
- 05:50 Ticket #14129 (tk: configure error - /opt/local/lib directory doesn't contain ...) closed by
- invalid
- 03:53 Ticket #13678 (tk 8.5 doesn't build if 8.4.x is already installed) closed by
- fixed: I've changed the include-flags so this _probably_ doesn't occur anymore …
- 03:49 Ticket #8956 (XEmacs port missing base packages) closed by
- fixed: I'm using xemacs on a daily basis (21.4.21, revision 1) and it works …
- 03:38 Ticket #12417 (python 2.5 missing features) closed by
- worksforme: …
- 03:33 Ticket #14505 (py-curl: install docs to ${prefix}/share/doc/${name}) closed by
- fixed: thanks, commited! (incl. revision inc.)
- 03:32 Changeset [34562] by
- fix docdir, inc. revision; #14505; thanks to ram@…!
- 01:57 Ticket #14507 (UPDATE: version 0.9 of editors/ed) created by
- New version 0.9 of editors/ed is available.
- 01:44 Ticket #14506 (port lint should detect improper openmaintainer nomaintainer usage) closed by
- fixed: Added, r34561.
- 01:40 Changeset [34561] by
- lint: issue error when using nomaintainer together with other or …
- 00:45 Changeset [34560] by
- Total number of ports parsed: 4523 Ports successfully parsed: 4523 …
- 00:36 Changeset [34559] by
- Setting svn:keywords to Id on all portfiles, per current guidelines
- 00:03 Changeset [34558] by
- Adding newline after port group as instructed by portlint email.
02/27/08:
- 23:38 Changeset [34557] by
- Setting svn:eol-style to native on all portfiles, per current guidelines
- 23:35 Ticket #13391 (BUG: Ruby fails to compile on 10.3 with pthreads enabled) closed by
- fixed: Fixed in r34556.
- 23:34 Changeset [34556] by
- lang/ruby: add support for 10.3 (untested)
- 23:33 Ticket #13391 (BUG: Ruby fails to compile on 10.3 with pthreads enabled) reopened by
- Oops, I didn't see the patch.
- 23:32 Ticket #13391 (BUG: Ruby fails to compile on 10.3 with pthreads enabled) closed by
- wontfix: I don't have any access to a 10.3 box and we're not supposed to support …
- 23:24 Changeset [34555] by
- ImageMagick: update to 6.3.9-0 All 696 tests behaved as expected (33 …
- 23:15 Changeset [34554] by
- winetricks: update to 20080227
- 23:07 Changeset [34553] by
- glib2-devel: update to 2.15.6
- 23:06 Changeset [34552] by
- replace value of MDT environment value instead of appending to it, to …
- 22:44 Changeset [34551] by
- graphviz: fix livecheck so it once again only finds stable versions, …
- 21:39 Ticket #14457 (impossible to install ffmpeg) closed by
- duplicate: Uninstall the old ffmpeg first. This is a duplicate of #13984.
- 21:08 Ticket #14506 (port lint should detect improper openmaintainer nomaintainer usage) created by
- It would be good if port lint would warn when a port * lists …
- 21:02 Ticket #13415 (Update to phpmyadmin Portfile to bring it to latest version (2.11.2.2) as ...) closed by
- fixed: I committed an updated to version 2.11.4 in r34550.
- 21:02 Changeset [34550] by
- phpmyadmin: update to version 2.11.4. Closes #13415.
- 20:24 Ticket #13414 (Portfile upgrade for mediawiki to version 1.11) closed by
- fixed: I committed an update to version 1.11.1 (without the misuse of epoch and …
- 20:22 Changeset [34549] by
- mediawiki: update to version 1.11.1. Closes #13414.
- 20:20 ram edited by
- (diff)
- 20:16 Ticket #14505 (py-curl: install docs to ${prefix}/share/doc/${name}) created by
- The attached patch moves the docs to ${prefix}/share/doc/${name} as to be …
- 20:13 Changeset [34548] by
- python/py25-curl: new port
- 19:19 Changeset [34547] by
- devel/git-core: Add a +gitweb variant
- 19:09 Changeset [34546] by
- port/port.tcl: Make a difference between: * No port specified on command …
- 19:08 Changeset [34545] by
- bump to 0.99.8
- 18:15 Changeset [34544] by
- audio/flac: fix lint warnings
- 15:54 Ticket #14504 (gsed+with_default_names fails post-destroot) created by
- Hi there, brand new MBP, fresh install of Leopard + updates, fresh install …
- 14:41 Changeset [34543] by
- mutt-devel: restore 'trash' variant - from #13363.
- 14:18 Changeset [34542] by
- mutt-devel: changing to nomaintainer at former maintainer's request.
- 14:09 Ticket #13365 (Update py-pyrex to 0.9.6.3 and add new py25-pyrex) closed by
- fixed: Bumped to version 0.9.6.4, which fixes the setup.py problem, and committed …
- 14:05 Changeset [34541] by
- use current Mac OS X major version for MACOSX_DEPLOYMENT_TARGET - let's …
- 14:04 Changeset [34540] by
- New port: py25-pyrex - from #13365.
- 13:58 Changeset [34539] by
- lang/kaffe: Whitespace only changes.
- 13:51 Changeset [34538] by
- calculate Mac OS X version from Darwin version
- 13:51 Changeset [34537] by
- py-pyrex: update to version 0.9.6.4 - from #13365.
- 13:41 Changeset [34536] by
- Changing openmaintainer to nomaintainer for all three ports
- 13:30 Changeset [34535] by
- science/jmol: Updated to version 11.4.RC7.
- 13:14 Changeset [34534] by
- science/jmol: Updated to 11.4.RC6.
- 13:00 Changeset [34533] by
- fix livecheck
- 12:51 Changeset [34532] by
- tin: bump port revision so everyone gets the fix from r34526; see #13250
- 12:45 Changeset [34531] by
- Total number of ports parsed: 4521 Ports successfully parsed: 4521 …
- 12:23 Ticket #14503 (libtool: upgrade to 1.5.26, add patch to avoid -flat_namespace) created by
- libtool 1.5.26 plus the attached patch will properly avoid the use of …
- 11:54 Ticket #13364 (gimp-lqr-plugin fails to build on Leopard/PowerPC) closed by
- invalid: Marking invalid per comment:2.
- 11:53 Ticket #7510 (xpdf build failure: gcc does not see Xm/XmAll.h) closed by
- worksforme: All is well. xpdf builds successfully against openmotif. Closing this bug. …
- 11:36 Ticket #13357 ([UPGRADE] slrn-devel 0.9.9pre-60) closed by
- fixed: Closing, since a newer version went in in r31449 and subsequent commits.
- 11:30 Ticket #13325 (GNUCash Crash when Adding Transactions) closed by
- duplicate: Marking as duplicate since #13472 has a lot more info.
- 11:26 Ticket #14502 (lyx-1.4.0pre3 fetch failure) created by
- […]
- 10:52 Changeset [34530] by
- lint happy
- 10:50 Ticket #1435 (NEW: apan -Unfinished port) closed by
- fixed: Committed in r34529. Thanks for waiting so long. Kind regards Thomas
- 10:50 Changeset [34529] by
- Initial commit. Fixes #1435
- 10:39 Ticket #1663 (NEW: pwlib -unfinished port) closed by
- fixed: pwlib committed in r34528. Thanks for waiting so long. Kind regards …
- 10:38 Changeset [34528] by
- Initial commit. Fixing #1663
- 10:15 Changeset [34527] by
- version 2.5.35
- 10:04 Ticket #14491 (p5-text-csv_xs 0.31 fetch failure) closed by
- duplicate: Why didn't you just attach it to this ticket? :)
- 10:01 Ticket #13250 (news/tin fails to handle SIGWINCH) closed by
- fixed: Thanks, committed in r34526.
- 09:59 Changeset [34526] by
- tin: handle SIGWINCH. Closes #13250.
- 09:27 Changeset [34525] by
- add 'macports-gcc-4.4' as compiler option
- 09:24 Changeset [34524] by
- new port gcc44 (BETA snapshot!)
- 09:22 Ticket #14501 (run check of fftw-3 during test phase, not build) created by
- The attached patch uses the test phase to run the test suite, not during …
- 09:20 Ticket #14500 (fftw-3: add livecheck) created by
- The attached patch fixes livecheck for fftw-3
- 09:20 Ticket #14499 (libgda should depend on postgresql82 at least instead of postgresql80) created by
- here is the diff: […] Include defs are correct, btw. Then we have 8.3 …
- 08:21 Ticket #14498 (mod_perl2 not building (MacOS X 10.4, 10.5)) created by
- I am trying to install mod_perl2 on my 10.4 (server) machine and I am …
- 08:07 Ticket #14497 (Gnome, gnome-session fails to install (gnome-settings-daemon package not ...) created by
- tsia amgines-macbook:~ amgine$ sudo port install gnome-session …
- 06:55 Ticket #1595 (NEW: aide -Unfinished port) closed by
- fixed: Committed in r34522 and r34523. Thanks for waiting so long. Kind regards …
- 06:54 Changeset [34523] by
- Files for port:aide
- 06:53 Changeset [34522] by
- New port. Fixing #1595
- 05:11 Ticket #12660 (BUG: fftw-3-single install failure on Intel MacBook) closed by
- worksforme: No response, closing as worksforme. Reopen if problem persists.
- 05:08 Changeset [34521] by
- Additional file to #12851
- 05:06 Ticket #12851 (libwww fails to build) closed by
- fixed: Resolved in r34520. Thanks for you help and kind regards Thomas
- 05:05 Changeset [34520] by
- Fixes #12851
- 03:32 Ticket #12302 (RFE: "platform" info target) closed by
- fixed: Added, r32724.
- 03:26 Ticket #12861 (port can't find package Pextlib 1.0) closed by
- worksforme: (jmpp forgot to close)
- 03:20 Ticket #14380 (port lint should warn if depending on nonexistent port) closed by
- fixed: And and a small bugfix in r34516.
- 03:17 Changeset [34519] by
- update changelog, #13458 #14380
- 03:00 Changeset [34518] by
- add default description for global variants like +universal
- 02:52 Changeset [34517] by
- python/py-psycopg: Add +postgresql82 and +postgresql83
- 02:43 Changeset [34516] by
- actually get all dependencies, and not just the last one
- 02:24 Ticket #13458 (Lint and Livecheck targets should not require root access) closed by
- fixed: Use /tmp for livecheck in r34515.
- 02:20 Changeset [34515] by
- port1.0/portlivecheck.tcl: Use a path in /tmp to avoid issues when …
- 02:13 Changeset [34514] by
- avoid double free, thanks to Raim
- 02:00 Changeset [34513] by
- add target_state variable patch from raimue, for targets that don't need …
- 01:36 Changeset [34512] by
- lint: check that all dependencies actually exist (#14380)
- 01:06 Ticket #14496 (UPDATE: p5-text-csv_xs-0.34) created by
- p5-text-csv_xs-0.34 Find the Portfile ATTACHED. Description: Perl module …
- 00:51 Ticket #14476 (asciidoc uses wrong Python binary) closed by
- fixed: "${prefix}/bin/python" used to be a symlink to python2.4 but was deleted …
- 00:50 Changeset [34511] by
- use versioned python program (#14476)
- 00:47 Ticket #14493 (ffmpeg fails to install) closed by
- duplicate: Dupe, #14492
- 00:46 Changeset [34510] by
- Total number of ports parsed: 4517 Ports successfully parsed: 4517 …
- 00:44 Ticket #13175 (madplay-0.15.2b fails to build on Tiger or Leopard Intel H/W) closed by
- fixed: Commited, r34509
- 00:43 Changeset [34509] by
- fix build on Leopard Intel / needs SSE (#13175)
- 00:39 Ticket #14495 (madplay-0.15.2b build error on 10.5.2) closed by
- duplicate: Dupe, #13175
02/26/08:
- 23:57 Changeset [34508] by
- fix build with gcc 4.2
- 23:38 Ticket #13333 (xfig 3.2.5_1 won't build with XFree86, builds with Apple X11) closed by
- wontfix: I'm not sure why we should support XFree86 nowadays.
- 23:37 Ticket #14116 (ENHANCEMENT: graphics/xfig) closed by
- fixed: Fixed in r34507.
- 23:37 Changeset [34507] by
- graphics/xfig: removed ugly hack (#14116) and fixed destroot violation
- 22:40 Ticket #14495 (madplay-0.15.2b build error on 10.5.2) created by
- Build log: […]
- 22:26 Changeset [34506] by
- audio/vorbis-tools: fix livecheck
- 22:26 Changeset [34505] by
- audio/vorbis-tools: update homepage
- 22:26 Changeset [34504] by
- science/libframe: fix livecheck
- 21:17 Ticket #14494 (pgplot install failure) created by
- When trying to install pgplot I get the following errors {{{mtroy source> …
- 20:30 Changeset [34503] by
- math/fftw-3-single: fix livecheck
- 20:23 Changeset [34502] by
- devel/bzrtools: fix livecheck
- 20:23 Changeset [34501] by
- devel/bzr-gtk: fix livecheck
- 20:04 Changeset [34500] by
- py-gnupg/py25-gnupg: fix livecheck
- 19:24 Changeset [34499] by
- devel/bzr: fix livecheck
- 18:59 Changeset [34498] by
- py-numpy/py25-numpy: fix livecheck
- 18:59 Changeset [34497] by
- py-paramiko/py25-paramiko: fix livecheck
- 18:46 Changeset [34496] by
- devel/git-core: fix port lint warnings
- 17:34 Ticket #14493 (ffmpeg fails to install) created by
- When trying to install the ffmpeg port I get the following: Error: Target …
- 17:33 Ticket #14492 (ffmpeg staging error: usage: install [-bCcpSsv] [-B suffix] [-f flags] [-g ...) created by
- When trying to install the ffmpeg port I get the following: Error: Target …
- 16:13 Ticket #14491 (p5-text-csv_xs 0.31 fetch failure) created by
- Christopher Lamey …
- 16:12 Ticket #14490 (gimp installation hangs) created by
- I have done the Ticket 13742 fix for leopard install, thanks. The port …
- 15:20 Changeset [34495] by
- version 1.5.0
- 15:18 Ticket #14489 (macports installation problem) closed by
- duplicate: There is a known bug in the installer for Leopard. Please either see …
- 15:09 Ticket #14489 (macports installation problem) created by
- I recently upgraded to leopard. I installed X11 and the X11 Xcode v3.0. I …
- 14:47 Changeset [34494] by
- winetricks: new port
- 14:15 Changeset [34493] by
- Updated to 0.0900.
- 14:08 Changeset [34492] by
- Updated to 1.01.
- 14:05 Changeset [34491] by
- Updated to 0.23.
- 13:50 Ticket #13199 (poppler fails to build on Mac OS X 10.5 Leopard) closed by
- fixed: No response from the reporter, and it builds fine for me now on Leopard, …
- 12:49 Changeset [34490] by
- Fix incorrect conf file and icons path for the macport_apache2 variant.
- 12:46 Changeset [34489] by
- Total number of ports parsed: 4516 Ports successfully parsed: 4516 …
- 12:27 Changeset [34488] by
- Update to 1.23 to fix broken fetch.
- 12:26 Ticket #13167 (BUG: glitz-0.5.6 fails on Leopard) closed by
- fixed: Committed in r34487. Thanks!
- 12:24 Changeset [34487] by
- glitz: fix build on Leopard. Closes #13167.
- 12:07 Ticket #13165 (pdksh 5.2.14 fails to build on OS X 10.5 Leopard) closed by
- fixed: Committed in r34483, r34484, and r34486.
- 12:05 Changeset [34486] by
- pdksh: fix building on Leopard. Closes #13165.
- 11:59 Ticket #14293 (Privoxy 3.0.8 socks5 variant is failing during the patch phase) closed by
- fixed: Closed in r34485 Kind regards Thomas
- 11:59 Changeset [34485] by
- Closing #14293
- 11:58 Changeset [34484] by
- pdksh: add additional master_sites (from #13165)
- 11:54 Changeset [34483] by
- pdksh: update maintainer address (from #13165)
- 11:24 Changeset [34482] by
- version 0.19.1
- 11:23 Changeset [34481] by
- version 0.18.2
- 11:18 Ticket #13130 (slrn-devel doesn't build) closed by
- fixed: Fixed in r30820.
- 11:06 Ticket #14484 (separate rdesktop into two Portfiles) closed by
- fixed: Committed in r34480. Thanks and kind regards Thomas
- 11:06 Changeset [34480] by
- Closing 14484, there is port:rdesktop-devel now.
- 10:35 Ticket #14488 (blt not building) created by
- I'm trying to build blt but I'm getting an error. Here's the output: …
- 10:22 Ticket #13086 (p5-log-dispatch fails to fetch source tarball) closed by
- fixed: Version 2.12 is still available on some mirrors (I could install it just …
- 10:21 Changeset [34479] by
- ddrescue: Install a man page in addition to info file
- 10:20 Changeset [34478] by
- p5-log-dispatch: update to version 2.21
- 10:13 Changeset [34477] by
- ddrescue: Update to version 1.8
- 10:07 Ticket #14487 (cppunit @1.12.0 doesn't build with +universal variant) created by
- […]
- 10:04 Changeset [34476] by
- hdhomerun: update software/firmware to 20080212 version
- 10:01 Ticket #14486 (PATCH: cyrus-sasl2 ldap variant) created by
- Attached is a patch to add an ldap variant for cyrus-sasl.
- 09:42 Changeset [34475] by
- py-vorbis: update homepage/master_sites
- 09:38 Changeset [34474] by
- py-ogg: update homepage/master_sites
- 08:06 Ticket #14481 (RFE: subversion: Install some more tools into ${prefix}/bin) closed by
- fixed: Committed the mucc change in r34473. I don't really like the idea of …
- 08:04 Changeset [34473] by
- Only build mucc when it is going to be installed (this doesn't change what …
- 07:33 Ticket #14485 (Warnings when installing iftop) created by
- I installed iftop today and received some warnings that I thought I'd pass …
- 07:26 Ticket #13376 (rdesktop fails to compile) closed by
- worksforme: Resolving WFM as per comment:5.
- 03:49 Ticket #14484 (separate rdesktop into two Portfiles) created by
- I added port:rdesktop-devel as its own port in r34472 Therefore …
- 03:36 Changeset [34472] by
- Fix Portfile
- 03:32 Changeset [34471] by
- Initial commit
- 01:26 Ticket #14483 (ticket status summaries on #macports) created by
- Please make trac to xmlrpc macports trac ticket status changes like new …
- 00:45 Changeset [34470] by
- Total number of ports parsed: 4515 Ports successfully parsed: 4515 …
Note: See TracTimeline for information about the timeline view.
|
http://trac.macports.org/timeline?from=2008-02-29T07%3A05%3A02-0800&precision=second
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
1 Python Line for ELMo Word Embeddings and t-SNE plots with John Snow Labs’ NLU
Including Part of Speech, Named Entity Recognition, Emotion, and Sentiment Classification in the same line! With Bonus t-SNE plots and comparison of various ELMo output layers!
0. Introduction
0.1 What is NLU?
John Snow Labs NLU library gives you 350+ NLP models and 100+ Word Embeddings and infinite possibilities to explore your data and gain insights.
In this tutorial, we will cover how to get the powerful ELMo Embeddings with 1 line of NLU code and then how to visualize them with t-SNE. We will compare Sentiment with Sarcasm and Emotions!
0.2 What is t-SNE?
t-SNE (t-distributed Stochastic Neighbor Embedding) is a dimensionality-reduction technique that projects high-dimensional vectors, such as word embeddings, down to 2 or 3 dimensions so they can be plotted and visually compared.
0.3 How does ELMo differ from past approaches?
ELMo, created by AllenNLP, broke the state of the art (SOTA) in many NLP tasks upon release. Together with ULMFiT and OpenAI, ELMo brought upon us NLP's breakthrough ImageNet moment. These embedding techniques were a great step forward, delivering better results than older methods like word2vec or GloVe.
0.4 Let's get started
Now that we've got the intro out of the way, let's get started with some coding!
1. How to get ELMo embeddings in 1 line?
nlu.load('elmo').predict(yourData)
That's all you need! Just make sure you have previously run:
pip install nlu
Since adding additional classifiers and getting their predictions is so easy in NLU, we will extend our NLU pipeline with a POS, Emotion, and Sentiment classifier which all achieve results close to the state of the art.
Those extra predictions will also come in handy when plotting our results.
pipe = nlu.load('pos sentiment elmo emotion')
predictions = pipe.predict(df)
2. Prepare data for T-SNE
We prepare the data for the t-SNE algorithm by collecting the ELMo embeddings into a matrix.
import numpy as np
mat = np.matrix([x for x in predictions.elmo_embeddings])
3. Fit T-SNE
Finally, we fit the t-SNE algorithm and get our 2-dimensional representation of our ELMo word embeddings
from sklearn.manifold import TSNE
model = TSNE(n_components=2)
low_dim_data = model.fit_transform(mat)
print('Lower dim data has shape',low_dim_data.shape)
4. Plot the ELMo embeddings
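The plotting snippet itself did not survive in this copy of the post, so here is a minimal sketch of what such a plot could look like. It assumes predictions is the pandas DataFrame returned by .predict(), that low_dim_data comes from the previous step, and that the sentiment predictions sit in a column literally named 'sentiment'; the actual column names in your NLU output may differ.
import matplotlib.pyplot as plt
import seaborn as sns

# Attach the 2-D t-SNE coordinates to the predictions DataFrame
predictions['x'] = low_dim_data[:, 0]
predictions['y'] = low_dim_data[:, 1]

# Colour each token by its predicted sentiment (any other prediction column works the same way)
plt.figure(figsize=(10, 8))
sns.scatterplot(data=predictions, x='x', y='y', hue='sentiment', s=15)
plt.title('ELMo embeddings projected to 2D with t-SNE')
plt.show()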
5. Plot emotional distribution
Since we added emotion classification, why not plot the distribution of it in 1 line quickly
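A sketch of what that one line could look like, assuming the emotion predictions land in a column literally named 'emotion' (the actual column name in your output may differ); pandas' built-in plotting does the rest:
predictions['emotion'].value_counts().plot(kind='bar')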
6. Try out Elmo’s different output pooling layers
Elmo has been released with 4 different output layers accessible to us. Each of them encodes tokens and their contextual meaning differently. It can be very interesting to experiment with them and compare their different t-SNE embeddings and how they perform in various NLP downstream tasks.
- See the NLU documentation for more specific info about the pooling layers.
The following code snippet will print every component in our NLU pipeline and also copy-pastable code we can use to configure our model.
pipe.print_info()
Will print:
Change ELMo's output layer
We can just copy-paste the .setPoolingLayer() line, pass 'elmo' or any of the other 4 layers as the parameter, and then predict with the configured pipe.
pipe['elmo'].setPoolingLayer('elmo')
predictions = pipe.predict(df)
Afterward, we can run the plotting code again, which is quite short, for the newly configured ELMo layer.
I had some fun with this and ran all layers and plotted them with a different hue (Part of Speech, Emotion, Sentiment, Sarcasm) enjoy!
ElMo t-SNE plots for Part of Speech coloring
In case you are curious about what each of the Part of Speech tags in the plot legend stands for, you can find every POS tag described, with an example, in the NLU docs.
All ELMo word embedding layer plots together
All ELMo ‘elmo’ layer plots together
All plots for ELMo output layer LSTM1 together
All ELMo plots for output layer LSTM2 together
What’s the full code to generate the t-SNE plots?
You really just need 1 line of NLU code and a few sprinkles of plotting and t-SNE code, showcased in the following code segment:
nlu.load('elmo').predict(yourData)
model = TSNE(n_components=2)
What if I want to work with terabytes of big data?
We had to limit ourselves to a subsection of the dataset because our RAM is sadly limited with just one machine.
With Spark NLP you could take exactly the same models and run them in a scalable fashion inside of a Spark cluster on terabytes of data, because NLU is using Spark NLP under the hood to generate its predictions!
Talks
- NLP Summit 2020: John Snow Labs NLU: The simplicity of Python, the power of Spark NLP
- John Snow Labs NLU: Become a Data Science Superhero with One Line of Python code Watch Live: Nov 12 at 2pm EST
|
https://medium.com/spark-nlp/1-python-line-for-elmo-word-embeddings-with-john-snow-labs-nlu-628e9b924a3
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
NOTE: Since writing this article we've started using Behaviour Designer. It's expensive (for the asset store) but it is really good and I highly recommend it.
Do you need an economical and effective way of using behavior trees? The fluent behavior trees API allows the coder-come-game-designer to have many of the benefits of traditional behavior trees with much less development time.
For many years I've been interested in behavior trees. They are an effective method for creating AI and game-logic. Many game or AI designers from the professional industry use a behavior tree editor to create behaviors.
Indie developers (or professional developers with budget constraints!) may not be able afford to buy or build a (good) behavior tree editor. In many cases indie developers wear many hats and must stretch themselves across a number of roles. If this is your situation then you probably aren't a specialised game designer with a tools team at your disposal.
If this sounds familiar then you should consider fluent behavior trees. Your code editor becomes your behavior tree editor. You can achieve much of the power of behavior trees without having an editor or without needing specialist designers.
This article documents the technique and the open-source library. I present some theory and then practical examples. In addition I contrast against traditional behavior trees and other techniques.
Contents
Table of Contents generated with DocToc
- Contents
- Introduction
- Getting the code
- Theory
- Practice
- Conclusion
Introduction
Behavior trees have been in our consciousness for over 10 years now. They entered the games industry around 2005 thanks to the GDC talk on AI in Halo 2. Since then they have been popularized by multiple sources including the influential AiGameDev.com.
The technique presented here is a combination of a fluent API with the power of behavior trees. This is available in an open-source C# library. I personally have used the technique and the library for vehicle AI in a commercial driving simulator project that was built on Unity. The code, however, is general C# and is not limited to Unity. For instance you could easily use it with MonoGame. A similar library would work in C++ and other languages if anyone cares to port the code.
This article is for game developers who are looking for a cheap, yet expressive and robust method of building AI. It's cheap because you don't need to build an editor (your usual IDE will do fine) and because you don't then need to hire a game designer to use the editor. This methodology will suit indie devs who typically do a bit of everything: coding, art, design, etc.
If you are a professional game dev working for a company that has a tools or AI department, then it's possible this article won't help you. Those that can afford to build an editor and hire game designers, please do that. However fluent behavior trees could still be useful to you in other ways. For one thing they can allow you to experiment with behavior trees and test the waters before committing to building a fully data-driven behavior trees system. Fluent behavior trees can provide a cheap way of getting into behavior trees, before committing to the full-blown expensive system.
The idea for fluent behavior trees came to me during my work on promises for game development. After we discovered promises at Real Serious Games our use of them snowballed, eventually leading to a large article on the topic. We pushed promises far indeed, to the point where I realized that our promises were looking remarkably similar to behavior trees, as mentioned in the aforementioned article. This led me to think that a fluent API for behavior trees was viable and more appropriate than promises in certain situations. Of course when a developer starts using fluent APIs they naturally start to see many opportunities to make use of them and I am no exception. I know this sounds a bit like if the only tool you have is a hammer, then every problem looks like a nail, but trust me I have many other tools. And fluent behavior trees are another tool in the toolbox that can be bought out at appropriate times.
Getting the code
This C# library implements by the book behavior trees. That is to say: I did my research and implemented standard game dev behavior trees. The implementation is very simple and there is little in the way of additional embellishments. This library should work in any .Net 3.5 code-base. The reason it is .Net 3.5 is for compatibility with Unity. If needed the library can easily be upgraded to a higher version of .Net. The library has only been tested on Windows, however given how simple it is I expect it will also work on mobile platforms. If anyone has problems please log an issue on github.
The code for the library is shared via github:
From github you can download a zip of the code. To contribute please fork the code, make your changes and submit a pull-request.
You can build the code in Visual Studio and then move the DLL to your Unity project. Alternately you can simply put the code in a sub-directory of your Unity project.
A pre-compiled version the library is available on NuGet:
Theory
Before getting into the practical details of using fluent behavior trees I'll cover some brief theory and provide some resources for understanding behavior trees in more depth.
If you already have a good understanding of behavior trees please skip or skim these sections.
Behavior trees
Behavior trees are a fantastic way to construct and manage modular and reusable AI and logic. In the professional industry designers will use an editor to build, tweak and debug AI. During development they build a library of reusable and plugable behaviors. Over time this makes it quicker and easier to build AI for new entities, as the process of building AI becomes gluing together various pre-made behaviors then tweaking their properties. The editor makes it easy to quickly rewire AI, thus we can iterate faster to improve gameplay more quickly.
A behavior tree represents the AI or logic that makes an entity think. The behavior tree itself is kind of stateless in that it relies on the entity or the environment for storing state. Each update re-evaluates the behavior tree against the state of the entity and the environment. Each update the behavior tree picks up from where it left off last update. It must figure out what state it is in and what it should now be doing.
Behavior trees are scalable to massive AIs. They are a tree and trees can be nested hierarchically to an infinite level. This means they can represent arbitrarily complex and deeply nested AI. They are constructed from modular components so very large trees are manageable. The size of a mind that can be built with a behavior tree is only limited by the size of the tree that can be handled by our tools and our PCs.
There is much information and many tutorials online about behavior trees: how they work, how they are structured and how they compare to other forms of AI. I've just covered the basics here. Here are some links for more in-depth learning:
Fluent vs traditional
Traditional behavior trees are loaded from data. They are constructed and edited with a visual editor. They are stored in the file-system or a database. In contrast fluent behavior trees are constructed in code via an API. We can say they are code-driven rather than data-driven.
Why adopt this approach?
For a start it's cheap and you still get many of the same benefits as traditional behavior trees. A usable first version of the fluent behavior tree library was constructed in a day. Contrast that to the many weeks it would take to build a functioning behavior tree editor. And you are a coder, right? (I've made that assumption already.) So why would you even want an editor like that when instead you can deal directly with a fluent API? You could buy an editor and there are options, especially on the Unity asset store. Buying still costs though... maybe not much, but consider that you then have to invest time to learn that particular package. Not just how to use the editor, but also how to use its API to load and run the behavior tree.
I have found that fluent APIs make for a pleasant coding experience. They fit well with intellisense, where the compiler understands what your options are and can auto-complete expressions and statements for you.
Defining behavior trees in code gives you a structured way to hack together behaviors. That's right, I said hack together. Have you ever been in that situation where you designed something awesome, clean and elegant, only to find out, in the months before completion, that you had to hack the hell out of it because your original design couldn't cope with the changing requirements of the game? I've learned that it is very important to have an architecture that can cope with the hacking you must inevitably do at some point to make a great game. Finding a modular architectural mechanism that supports this increases our ability to work fast and adapt (we can hack when we need to) and ultimately it does improve the design of our code (because we can chop out and rewrite the hacked-up modules if need be). I've heard this called designing for disposability, where we plan our system in a modular fashion with the understanding that some of those modules will end up as a complete mess and we might want to throw them out. Fluent behavior trees support modular and compartmentalized behaviors, so I believe they help support the principle of disposability.
In any case, there is no single best approach to anything. You'll have to figure out the right way for your own project and situation, sometimes that will work out for you. Other times you'll find out the hard way how wrong you were. The benefit of trying fluent behavior trees is that you won't sink a lot of time into them. The time investment is minimal, but the potential return on the time is large. For that reason behavior trees can be an attractive option.
Ultimately there is nothing to stop you mixing and matching these approaches. Use the fluent API for fast prototyping while coding to test an idea. Then use a good editor when you need to build and deploy loads of
behaviors into production.
For efficient turnaround time with fluent behavior trees you need a fast build time. You might want to work in a smaller testbed rather than working in your full game. Ultimately if you have a slow build time it's going to slow you down when coding and testing changes to fluent behavior trees. If that's going to be a problem then traditional behavior trees might be more suitable for you, but probably only if you can hot-load them into a running game. So if you are buying a behavior tree editor/system you should check for that feature!
Promises vs behavior trees
It would be remiss if I didn't spend some time on promises.
In the promises article we mentioned that we had pushed promises into the realm of behavior trees. We were already using promises to manage asynchronous operations such as loading levels, assets and data from the database. Of course, this is what promises are intended for and what they excel at.
It wasn't a huge leap for us to then start using promises to manage other operations such as video and sound playback. After all these are operations that also happen asynchronously over a period of time. When they complete they generally trigger some kind of callback that we handle to sequence the next operation (eg the next video or sound). Promises worked really well for this because they are great for sequencing chains of asynchronous operations.
The epiphany came when we started seeing the entire game as interwoven chains of asynchronous operations. After all, what is a game if not a complex web of logic-chains that endure over many frames. We extended promises to support composition of game logic. During this we realized that behavior trees may have been a better fit for some of what we were doing. However it was late in that project and at the time we didn't have a library for behavior trees. So we pushed on and completed the project.
We got a lot of mileage out of promises, although it would have been better if we could have combined promises and behavior trees.
We were able to do many behavior tree-like things with promises, but there is a problem in doing that with promises that is solved by the nature of behavior trees. Behavior trees are similar to promises in that they allow you to compose or chain logic that happens (asynchronously) over many frames or iterations of the game-loop. The main difference is in how they are ticked or updated, and that behavior trees are generally more suited to the step-by-step nature of the game-loop. Behavior trees are only ticked (or moved along step-by-step) each iteration of the game-loop. When the behavior tree is no longer being ticked, the logic of the behavior tree is stopped and does not progress. So it's incredibly easy to cancel or otherwise back out of a running behavior tree: you simply stop updating it.
A running chain of promises isn't quite as easy to get out of as a behavior tree. They are designed to represent in-flight asynchronous operations that just happen without being ticked or updated. This means you have little ongoing control over the operation until it either completes or errors. For example downloading a file from the internet... it's going to continue until it completes or errors. Aborting a chain of promises often involves injecting another promise specifically to throw an exception in the event that we need to reject the entire chain of promises. This can be achieved by racing the abortable-promise against the promise (or promises) that might need to be aborted. If this sounds complicated and painful, that's because it is. Alarm bells should be ringing.
Since then I've used fluent behavior trees in production and have also theorized that promises and behavior trees can easily be used in combination to achieve the benefits of both. Further on I'll show examples of how that might look.
Practice
The section describes how to use the fluent behavior tree library.
Behavior tree status
Behavior tree nodes may return the following status codes:
- Success: The node has finished what it was doing and succeeded.
- Failure: The node has finished, but failed.
- Running: The node is still working on something.
Basic usage
A behavior tree is created through BehaviourTreeBuilder. The root node for the completed tree is returned when the Build function is called:
using FluentBehaviourTree;

...

IBehaviourTreeNode tree;

public void Startup()
{
    var builder = new BehaviourTreeBuilder();
    this.tree = builder
        .Sequence("my-sequence")
            .Do("action1", t =>
            {
                // .. do something ...
                return BehaviourTreeStatus.Success;
            })
            .Do("action2", t =>
            {
                // .. do something after ...
                return BehaviourTreeStatus.Success;
            })
        .End()
        .Build();
}
To move the behavior tree forward in time it must be ticked on each iteration of the game loop:
public void Update(float deltaTime)
{
    this.tree.Tick(new TimeData(deltaTime));
}
Node names
Note the names that are specified when creating nodes:
this.tree = builder
    .Sequence("my-sequence") // The node is named 'my-sequence'.
    ... etc ...
    .End()
    .Build()
These names are purely for testing and debugging purposes. They allow a visualisation of the tree to be rendered so we can more easily see the state of our AI. Debug visualisation is very important for understanding and debugging what our games are doing.
Node types
The following types of behavior tree nodes are supported.
Action / Leaf node
Call the Do function to create an action node at the leaves of the behavior tree. The return value (Success, Failure or Running) specifies the current status of the node.
.Do("action", t =>
{
    // ... do something ...
    // ... query the entity, query the environment then take some action ...
    // Return a status code to indicate if the action is
    // successful, failed or ongoing.
    return BehaviourTreeStatus.Success;
});
Sequence
Runs each child node in sequence. Fails for the first child node that fails. Moves to the next child when the current running child succeeds. Stays on the current child node while it returns running. Succeeds when all child nodes have succeeded.
.Sequence("my-sequence")
    .Do("action1", t =>
    {
        // Sequential action 1.
        return BehaviourTreeStatus.Success; // Run this.
    })
    .Do("action2", t =>
    {
        // Sequential action 2.
        return BehaviourTreeStatus.Success; // Then run this.
    })
.End()
Parallel
Runs all child nodes in parallel. Continues to run until a required number of child nodes have either failed or succeeded.
int numRequiredToFail = 2;
int numRequiredToSucceed = 2;

.Parallel("my-parallel", numRequiredToFail, numRequiredToSucceed)
    .Do("action1", t =>
    {
        // Parallel action 1.
        return BehaviourTreeStatus.Running;
    })
    .Do("action2", t =>
    {
        // Parallel action 2.
        return BehaviourTreeStatus.Running;
    })
.End()
Selector
Runs child nodes in sequence until it finds one that succeeds. Succeeds when it finds the first child that succeeds. For child nodes that fail it moves forward to the next child node. While a child is running it stays on that child node without moving forward.
.Selector("my-selector")
    .Do("action1", t =>
    {
        // Action 1.
        return BehaviourTreeStatus.Failure; // Fail, move onto next child.
    })
    .Do("action2", t =>
    {
        // Action 2.
        return BehaviourTreeStatus.Success; // Success, stop here.
    })
    .Do("action3", t =>
    {
        // Action 3.
        return BehaviourTreeStatus.Success; // Doesn't get this far.
    })
.End()
Condition
The condition function is syntactic sugar for the Do function. It allows return of a boolean value that is then converted to a success or failure. It is intended to be used with Selector.
.Selector("my-selector")
    // Predicate that returns *true* or *false*.
    .Condition("condition", t => SomeBooleanCondition())
    // Action to run if the predicate evaluates to *true*.
    .Do("action", t => SomeAction())
.End()
Inverter
Inverts the success or failure of the child node. Continues running while the child node is running.
.Inverter("inverter")
    // *Success* will be inverted to *failure*.
    .Do("action", t => BehaviourTreeStatus.Success)
.End()

.Inverter("inverter")
    // *Failure* will be inverted to *success*.
    .Do("action", t => BehaviourTreeStatus.Failure)
.End()
Nesting behavior trees
Behavior trees can be nested to any depth, for example:
.Selector("parent")
    .Sequence("child-1")
        ...
        .Parallel("grand-child")
            ...
        .End()
        ...
    .End()
    .Sequence("child-2")
        ...
    .End()
.End()
Behavior reuse
Separately created sub-trees can be spliced into parent trees. This makes it easy to build behavior trees from reusable functions.
private IBehaviourTreeNode CreateSubTree()
{
    var builder = new BehaviourTreeBuilder();
    return builder
        .Sequence("my-sub-tree")
            .Do("action1", t =>
            {
                // Action 1.
                return BehaviourTreeStatus.Success;
            })
            .Do("action2", t =>
            {
                // Action 2.
                return BehaviourTreeStatus.Success;
            })
        .End()
        .Build();
}

public void Startup()
{
    var builder = new BehaviourTreeBuilder();
    this.tree = builder
        .Sequence("my-parent-sequence")
            .Splice(CreateSubTree()) // Splice the child tree in.
            .Splice(CreateSubTree()) // Splice again.
        .End()
        .Build();
}
Promises + behavior trees
Following are some theoretical examples of how to combine the power of fluent behavior trees and promises. They are integrated via the promise timer.
PromiseTimer promiseTimer = new PromiseTimer();

public void Update(float deltaTime)
{
    promiseTimer.Update(deltaTime);
}

public IPromise StartActivity()
{
    IBehaviourTreeNode behaviorTree = ... create your behavior tree ...

    return promiseTimer.WaitUntil(t =>
        behaviorTree.Update(t.elapsedTime) == BehaviourTreeStatus.Success
    );
}
The StartActivity function starts an activity that is represented by a behavior tree. We use the promise timer's WaitUntil function to resolve the promise once the behavior tree has completed. This is a simple way to combine promises and behavior trees and have them work together.
This could be improved slightly with an overloaded WaitUntil that is specific to behavior trees:
public IPromise StartActivity()
{
    IBehaviourTreeNode behaviorTree = ... create your behavior tree ...

    return promiseTimer.WaitUntil(
        behaviorTree,
        BehaviourTreeStatus.Success
    );
}
A real example
Now I want to show a real world example. The code examples here are from the the driving simulator project. This is a larger example, but in the scheme of things it is actually quite a simple use of behavior trees. The vehicle AI in the driving sim was just complex enough that structuring it as a behavior tree made it much more manageable.
The behavior tree presented here makes use of many helper functions. So much of the actual work of querying and updating the entity and the environment is delegated to the helper functions. I won't show the detail of the helper functions; this means I can show the higher-level logic of the behavior tree without getting overwhelmed by the details.
This code builds the vehicle AI behavior tree:
behaviourTree = builder
    .Parallel("All", 20, 20)
        .Do("ComputeWaypointDist", t => ComputeWaypointDist())
        // Always try to go at the speed limit.
        .Do("SpeedLimit", t => SpeedLimit())
        .Sequence("Cornering")
            .Condition("IsApproachingCorner", t => IsApproachingCorner())
            // Slow down vehicles that are approaching a corner.
            .Do("Cornering", t => ApplyCornering())
        .End()
        // Always attempt to detect other vehicles.
        .Do("Detect", t => DetectOtherVehicles())
        .Sequence("React to blockage")
            .Condition("Approaching Vehicle", t => IsApproachingVehicle())
            // Always attempt to match speed with the vehicle in front.
            .Do("Match Speed", t => MatchSpeed())
        .End()
        .Selector("Stuff")
            // Slow down for give way or stop sign.
            .Sequence("Traffic Light")
                .Condition("IsApproaching", t => IsApproachingSignal())
                // Slow down for the stop sign.
                .Do("ApproachStopSign", t => ApproachStopSign())
                // Wait for complete stop.
                .Do("WaitForSpeedZero", t => WhenSpeedIsZero())
                // Wait at stop sign until the way is clear.
                .Do("WaitForGreenSignal", t => WaitForGreenSignal())
                .Selector("NextWaypoint Or Recycle")
                    .Condition("SelectNextWaypoint", t => TargetNextWaypoint())
                    // If selection of waypoint fails, recycle vehicle.
                    .Do("Recycle", t => RecycleVehicle())
                .End()
            .End()
            // Slow down for give way or stop sign.
            .Sequence("Stop Sign")
                .Condition("IsApproaching", t => IsApproachingStopSign())
                // Slow down for the stop sign.
                .Do("ApproachStopSign", t => ApproachStopSign())
                // Wait for complete stop.
                .Do("WaitForSpeedZero", t => WhenSpeedIsZero())
                // Wait at stop sign until the way is clear.
                .Do("WaitForClearAwareness", t => WaitForClearAwarenessZone())
                .Selector("NextWaypoint Or Recycle")
                    .Condition("SelectNextWaypoint", t => TargetNextWaypoint())
                    // If selection of waypoint fails, recycle vehicle.
                    .Do("Recycle", t => RecycleVehicle())
                .End()
            .End()
            .Sequence("Follow path then recycle")
                .Do("ApproachWaypoint", t => ApproachWaypoint())
                .Selector("NextWaypoint Or Recycle")
                    .Condition("SelectNextWaypoint", t => TargetNextWaypoint())
                    // If selection of waypoint fails, recycle vehicle.
                    .Do("Recycle", t => RecycleVehicle())
                .End()
            .End()
        .End()
        // Drive the vehicle based on desired speed and direction.
        .Do("Drive", t => DriveVehicle())
    .End()
    .Build();
This diagram describes the structure of the vehicle AI behavior tree:
Here is the expanded sub-tree for the traffic light logic:
Here is the expanded sub-tree for the follow path logic:
In this example I've glossed over a lot of the details. I wanted to show (at a high-level) a real world example and using the diagrams I have illustrated the tree structure of the code.
Conclusion
In this article I've explained how to create behavior trees in code through a fluent API. I've focused on my open-source C# library, which the examples in this article are built on. I've used these techniques in one commercial Unity product (the driving simulator). I look forward to finding future opportunities to use fluent behavior trees and continuing to build on the ideas presented here.
If you have ideas for improvements, please fork the GitHub repo and start contributing! If you use the library and find problems, please log an issue.
Have fun!
|
https://codecapers.com.au/fluent-behavior-trees-for-ai-and-game-logic/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
... the Boost Preprocessor library which can be space separated as a single argument. The need to space separate the preprocessor data occurs when consecutive tokens are VMD identifiers. The limitation of this consecutive series of data is that each top-level part of the data of this series is of some VMD data type. Furthermore if you use generic macros to parse a sequence, any top-level VMD type which is an identifier must be a specific identifier, meaning it must be an identifier that has been registered.
What the VMD sequence functionality means is that if some input consists of a series of data types it is possible to extract the data for each data type in that series.
In practicality what this means is that, given the examples just above, if 'areturn', 'afunction', and 'data' are specific identifiers, then generically we are really speaking of VMD macros parsing a sequence. A sequence can therefore also be part of a Boost PP composite data type, or variadic data, and VMD can still parse such an embedded sequence if asked to do so.
A sequence that consists of 0 elements is an empty sequence, and we have already seen that emptiness can be tested with the specific macro BOOST_VMD_IS_EMPTY. A sequence which consists of a single element can be tested using the various identifying macros which test for "specific" VMD data types. But whether a sequence consists of 0 elements, one element, or more than one element, we can also use "generic" macros, which will be further described, to parse a sequence.
Working with a sequence is equivalent to using VMD macros 'generically', whereas previously we had documented using VMD macros "specifically" with identifying macros that tested each type of VMD data.
Before I give an explanation of how to use a sequence using VMD generic functionality I would like to make two points:
The constraints in a sequence are that the top-level must consist of VMD data types, i.e. preprocessor tokens which VMD understands, and that all top-level VMD identifiers must be specific identifiers when the sequence is parsed generically. You can use the BOOST_VMD_IS_EMPTY identifying macro to test if the sequence is an empty sequence.
#include <boost/vmd/is_empty.hpp>

#define AN_EMPTY_SEQUENCE
#define A_NON_EMPTY_SEQUENCE 23 ( some_identifier )

BOOST_VMD_IS_EMPTY(AN_EMPTY_SEQUENCE) will return 1
BOOST_VMD_IS_EMPTY(A_NON_EMPTY_SEQUENCE) will return 0
The type of an empty sequence is BOOST_VMD_TYPE_EMPTY. You can test for an empty sequence when a top-level identifier is not a specific identifier and the macro will correctly return 0. This is because BOOST_VMD_IS_EMPTY, which has been previously documented, is really an identifying macro which also works to parse any sequence and determine emptiness or not.
A single element sequence is a single VMD data type. This is what we have been previously discussing as data which VMD can parse in this documentation with our identifying macros. You can use the BOOST_VMD_IS_UNARY generic macro to test if the sequence is a single element sequence.
#include <boost/vmd/is_unary.hpp>

#define AN_EMPTY_SEQUENCE
#define A_SINGLE_ELEMENT_SEQUENCE (1,2)
#define NOT_A_SINGLE_ELEMENT_SEQUENCE (1,2) (45)(name)

BOOST_VMD_IS_UNARY(A_SINGLE_ELEMENT_SEQUENCE) will return 1
BOOST_VMD_IS_UNARY(AN_EMPTY_SEQUENCE) will return 0
BOOST_VMD_IS_UNARY(NOT_A_SINGLE_ELEMENT_SEQUENCE) will return 0

A multi-element sequence consists of more than one top-level data type. You can use the BOOST_VMD_IS_MULTI generic macro to test if the sequence is a multi-element sequence.

#define AN_EMPTY_SEQUENCE
#define A_SINGLE_ELEMENT_SEQUENCE (1,2)
#define A_MULTI_ELEMENT_SEQUENCE (1,2) (1)(2) 45
The A_MULTI_ELEMENT_SEQUENCE consists of a tuple followed by a seq followed by a number.
#include <boost/vmd/is_multi.hpp>

BOOST_VMD_IS_MULTI(A_MULTI_ELEMENT_SEQUENCE) will return 1
BOOST_VMD_IS_MULTI(AN_EMPTY_SEQUENCE) will return 0
BOOST_VMD_IS_MULTI(A_SINGLE_ELEMENT_SEQUENCE) will return 0
The type of a multi-element sequence is always BOOST_VMD_TYPE_SEQUENCE.
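As a quick sketch of the generic type query (the get_type header and macro shown here are my assumption based on Boost VMD and are not part of the excerpt above):

#include <boost/vmd/get_type.hpp>

#define A_MULTI_ELEMENT_SEQUENCE (1,2) (1)(2) 45

BOOST_VMD_GET_TYPE(A_MULTI_ELEMENT_SEQUENCE) will return BOOST_VMD_TYPE_SEQUENCE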
The type of any sequence can be obtained generically, and the number of elements in any sequence can be obtained with the BOOST_VMD_SIZE macro:

#include <boost/vmd/size.hpp>

#define AN_EMPTY_SEQUENCE
#define A_SINGLE_ELEMENT_SEQUENCE (1,2)
#define A_MULTI_ELEMENT_SEQUENCE (1,2) (1)(2) 45

BOOST_VMD_SIZE(AN_EMPTY_SEQUENCE) will return 0
BOOST_VMD_SIZE(A_SINGLE_ELEMENT_SEQUENCE) will return 1
BOOST_VMD_SIZE(A_MULTI_ELEMENT_SEQUENCE) will return 3
As has previously been mentioned, a single element tuple is also a one element seq, so parsing a sequence which has seqs and tuples in it might be a problem as far as identifying each element of the sequence. In a multi-element sequence, if the data consists of a mixture of seqs and tuples consecutively, we need to distinguish how VMD parses the data. The rule is that VMD always parses a single element tuple as a tuple unless it is followed by one or more single element tuples, in which case it is a seq. Here are some examples showing how the rule is applied.
#define ST_DATA (somedata)(element1,element2)
VMD parses the above data as 2 consecutive tuples. The first tuple is the single element tuple '(somedata)' and the second tuple is the multi element tuple '(element1,element2)'.
#define ST_DATA (element1,element2)(somedata)
VMD parses the above data as 2 consecutive tuples. The first tuple is the multi element tuple '(element1,element2)' and the second tuple is the single element tuple '(somedata)'.
#define ST_DATA (somedata)(some_other_data)(element1,element2)
VMD parses the above data as a seq followed by a tuple. The seq is '(somedata) (some_other_data)' and the tuple is '(element1,element2)'.
|
https://www.boost.org/doc/libs/1_77_0/libs/vmd/doc/html/variadic_macro_data/vmd_specific_generic/vmd_generic.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
If I do something like this:
using UnityEngine;
using UnityEditor;

public class FixFBXOptions : AssetPostprocessor
{
public void OnPostprocessModel(GameObject g)
{
// Never import materials by default.
var modelImporter = assetImporter as ModelImporter;
modelImporter.importMaterials = false;
}
}
... on adding an FBX to the project, it'll have no materials. That's fine and what I want. But if I then click 'import materials' and apply, the checkbox just gets un-checked and no materials are imported. The same thing happens using OnPreprocessModel.
What I really want is a way to run some code to set import options once, on a project member adding the asset to the project for the first time, but for that code to never run again and to let people change the options afterwards. Is that possible using AssetPostprocessor? I don't want to have to write a custom 'add asset to project' dialogue - I imagine that will end up being rather hacky.
Note: this isn't the same as when an instance of Unity sees the file for the first time, so the trick I've seen of checking if the asset database can load the file (from "How do I mark an asset as processed in AssetPostprocessor class") isn't quite what I need.
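For what it's worth, one workaround people reach for (not from this thread) is to record a marker in the importer's userData the first time the postprocessor touches the asset, and skip the defaulting logic on later imports. This is only a sketch: the marker string is made up, and it assumes userData written during preprocessing persists to the asset's .meta file.

using UnityEditor;

public class FixFBXOptionsOnce : AssetPostprocessor
{
    // Hypothetical marker; any unique token would do.
    const string Marker = "materials-defaulted";

    void OnPreprocessModel()
    {
        var modelImporter = (ModelImporter)assetImporter;

        // Apply the project default only the first time this asset is imported.
        if (!(modelImporter.userData ?? string.Empty).Contains(Marker))
        {
            modelImporter.importMaterials = false;
            modelImporter.userData += Marker;
        }
    }
}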
0 Answers
|
https://answers.unity.com/questions/1251171/can-assetpostprocessor-change-defaults-just-for-fi.html?sort=oldest
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
If you want to develop a search service utilizing the power of Google search, you can do so using the google module in Python. You can use this to develop a backend service for a desktop application or implement a website search or app search with the python code running on your server.
If you are a Python beginner, then you can learn Python from Studytonight.
You can do the same using the BeautifulSoup module too which is used for web scraping in Python. But the google module makes it super simple to implement search.
To install the google module, we can use the pip package installer.
pip install google
This will install the google module, along with its other dependencies. The name of the module installed is googlesearch.
We will be using the search() function from the googlesearch module.
search(query, tld='co.in', lang='en', num=10, start=0, stop=None, pause=2)
query: This is the text that you want to search for.
tld: This refers to the top level domain value like co.in or com which will specify which Google website we want to use.
lang: This parameter stands for language.
num: This is used to specify the number of results we want.
start: This is to specify from where to start the results. We should keep it 0 to begin from the very start.
stop: The last result to retrieve. Use None to keep searching forever.
pause: This parameter is used to specify the number of seconds to pause between consecutive HTTP requests, because if we send too many requests too quickly, Google can block our IP address.
The above function will return a python generator (iterator) which has the search result URLs.
Now let's use the google module to perform search.
from googlesearch import search

query = "studytonight"

for i in search(query, tld="co.in", num=10, stop=10, pause=2):
    print(i)
In the above output you can see the links that will be shown on Google search if you open the Google search website and search for "studytonight" text.
Similarly, you can search for any text, and can even change the tld parameter to search for results in different Google websites.
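For example, here is a quick sketch of the same kind of search against google.com instead of google.co.in (the query and numbers are just illustrative):

for url in search("studytonight", tld="com", num=5, stop=5, pause=2):
    print(url)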
Well, that's it for this tutorial. Try using this Python code to get search results from Google programmatically. While using this code, do not name your Python file googlesearch.py, as that can cause a conflict with the module.
|
https://www.studytonight.com/post/how-to-perform-google-search-using-python
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Moya Template Language
Moya's template language is a spiritual successor to Django, and Jinja templates. It borrows a number of constructs from both, and adds a few of its own.
There is a slight difference in the syntax in that variables are substituted with
${variable} rather than
{{ variable }}, but tags still use the familiar
{% tag %} syntax.
Template languages are generally quite extensible, so there is probably nothing that Moya templates can do that Django / Jinja can't (via a plugin or extension), but there are a few things which are I think are more elegant in Moya's templates language (and built-in). I'm going to talk about a few of them in this post.
The Attrib Tag
How often have you written code like this?
<div id="{{ article_id }}" class="{% if type %}{{ type }}{% endif %}{% if active %} active{% endif %}{% if highlight %} highlight{% endif %}">
    {{ content }}
</div>
This code renders a div with an id attribute and a few optional classes. It is easy to follow, but verbose. It's also a little error prone to write; we have to be careful about the whitespace between the classes, or we could end up generating the nonsense class
activehighlight.
Moya offers the the
{% attrib %} template tag which is a shortcut to generate a sequence of attributes. Here is the equivalent Moya code using
{% attrib %}:
<div {% attrib (id=article_id, class=[type, active and 'active', highlight and 'highlight']) %}>
    ${content}
</div>
The attrib tag takes a dictionary and generates attributes based the keys and values. If the value is a list, Moya joins strings with a space, and ignores any value that evaluates to false. It will also omit any attributes that would evaluate to an empty string (compared to the original which could potentially render a superfluous
class="").
The attrib tag is also faster, since there is less template logic to run.
The Markup Tag
It is not uncommon to have to write copy in templates. But writing copy in HTML is a pain, and somewhat error prone. You can easily break the entire page with a typo.
Moya has a
{% markup-block %} tag which renders the enclosed text in the markup of your choice, so you can embed markdown (for example) directly in to your template. Here's an example:
{% markup-block as 'markdown' %}
# Our Coffees

All our coffees are *organically produced* and from [sustainable sources](/where-we-get-our-coffee/) only.
{% end-markup-block %}
This produces the following markup:
<h1>Our Coffees</h1>
<p>All our coffees are <em>organically produced</em> and from <a href="/where-we-get-our-coffee/">sustainable sources</a> only.</p>
Another way of rendering markups is with the
{% markup %} tag which renders a template variable rather than enclosed text. This is more suitable for rendering content stored in the database. Here's an example:
{% markup post.content as 'markdown' %}
This will render the string in the content attribute of an object called post, for example a blog post loaded from the database.
The Sanitize Tag
Web developers will no-doubt be familiar with the dangers of rendering untrusted markup (i.e. from comments). Since Moya has a batteries included attitude to templates, there is a built-in tag for this.
The
{% sanitize %} tag escapes any potentially dangerous markup based on a set of rules. The defaults are quite conservative and only permit basic formatting markup. Here's an example:
{% sanitize %}
My <b>exploit</b>!
<script>alert('how annoying')</script>
{% end-sanitize %}
This generates the following markup:
My <b>exploit</b>!
&lt;script&gt;alert('how annoying')&lt;/script&gt;
The script tag has been escaped, so it will display as text (you can also opt to remove untrusted markup entirely).
This tag uses the excellent bleach library to do the sanitizing.
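If you are curious what bleach itself does underneath, here is a tiny plain-Python sketch; the whitelist of tags shown is my own illustration, not necessarily Moya's default rule set:

import bleach

dirty = "My <b>exploit</b>! <script>alert('how annoying')</script>"

# Tags outside the whitelist are escaped rather than removed when strip=False
print(bleach.clean(dirty, tags=['b', 'em', 'i', 'strong'], strip=False))
# Roughly: My <b>exploit</b>! &lt;script&gt;alert('how annoying')&lt;/script&gt;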
More Information
To read more about the Moya template language, see the docs. The docs are currently in a catch-up phase after 3 months of development, so are missing a few recently added template tags. Feel free to ask me about those, but I will be writing them up in the coming weeks.
Very cool! Looking forward to checking this out!
|
https://www.willmcgugan.com/blog/tech/post/moya-template-language/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Barcode Software
how to generate barcode in c# net with example
Configuring DNS Servers and Clients in .NET
Deploy barcode standards 128 in .NET Configuring DNS Servers and Clients
1. What attributes did you add to your assembly 2. What command did you use to view the token of the public key in your assembly 3. What command did you use to verify that your assembly does not yet have a strong name
java barcode reader sample code
using bitmaps j2ee to develop bar code for asp.net web,windows application
BusinessRefinery.com/barcode
how to print barcode in rdlc report
generate, create barcode full none on .net projects
BusinessRefinery.com/barcode
Exam Highlights
use j2ee barcode encoder to paint bar code on java unity
BusinessRefinery.com/ bar code
use sql server 2005 reporting services bar code encoder to incoporate barcodes for .net list
BusinessRefinery.com/barcode
Table 9-1 Supported SMTP Commands
generate, create barcode product none for java projects
BusinessRefinery.com/ barcodes
use sql database bar code drawer to produce bar code in visual basic script
BusinessRefinery.com/ bar code
Occurs after the return values are serialized into XML, but before they are sent across the network to the client.
ssrs 2016 qr code
using barcode encoder for sql 2008 control to generate, create qr image in sql 2008 applications. documentation
BusinessRefinery.com/Quick Response Code
java qr code generator example
use jar quick response code printing to deploy denso qr bar code for java picture
BusinessRefinery.com/qr-codes
Table 18-3
qrcode image developers in word microsoft
BusinessRefinery.com/QR Code JIS X 0510
qr code font for crystal reports free download
use visual .net denso qr bar code creation to create qrcode for .net column,
BusinessRefinery.com/QR Code
Interviews
to encode denso qr bar code and qr data, size, image with office excel barcode sdk manage
BusinessRefinery.com/QR Code JIS X 0510
qr code 2d barcode image syntax with .net
BusinessRefinery.com/qr codes
Before you can create a table, you need a schema in which to create the table .A schema is similar to a namespace in many other programming languages; however, there can be only one level of schemas (that is, schemas cannot reside in other schemas). There are already several schemas that exist in a newly created database: the dbo, sys, and information_schema schemas. The dbo schema is the default schema for new objects, while the sys and information_schema schemas are used by different system objects.. Before SQL Server 2005, schemas did not exist. Instead of the object residing in a schema the object was owned by a database user (however, the syntax was the same: <owner>.<object>) In these versions, dbo was recommended to own all objects, but this is not true anymore. Starting with SQL Server 2005, all objects should be created within a user-defined schema. Schemas are created using the CREATE SCHEMA statement, as shown in the following example of creating a schema and a table within that schema:
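The example itself did not survive in this copy; a minimal sketch of what such a statement could look like (the schema, table, and column names are illustrative only):

CREATE SCHEMA Sales;
GO

CREATE TABLE Sales.Orders
(
    OrderID int PRIMARY KEY,
    OrderDate datetime NOT NULL
);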
ssrs pdf 417
use reporting services 2008 pdf 417 maker to create pdf-417 2d barcode for .net email
BusinessRefinery.com/PDF 417
.net code 128 reader
Using Barcode decoder for function VS .NET Control to read, scan read, scan image in VS .NET applications.
BusinessRefinery.com/barcode standards 128
21
use excel barcode 3 of 9 drawer to insert bar code 39 for excel restore
BusinessRefinery.com/USS Code 39
code 39 barcode font crystal reports
use .net vs 2010 crystal report 3 of 9 generator to insert barcode code39 in .net abstract
BusinessRefinery.com/Code 3 of 9
End Sub
rdlc data matrix
using customized rdlc report to print data matrix on asp.net web,windows application
BusinessRefinery.com/datamatrix 2d barcode
.net data matrix reader
Using Barcode decoder for softwares Visual Studio .NET Control to read, scan read, scan image in Visual Studio .NET applications.
BusinessRefinery.com/2d Data Matrix barcode
Estimated lesson time: 10 minutes
vb.net data matrix generator vb.net
using barcode integrated for .net vs 2010 control to generate, create data matrix ecc200 image in .net vs 2010 applications. bit
BusinessRefinery.com/Data Matrix 2d barcode
vb.net pdf417
using button .net to create pdf 417 for asp.net web,windows application
BusinessRefinery.com/pdf417 2d barcode
Case Scenario 1: Validating Input
As you learned in 1, Active Directory is the tool used to manage, organize, and locate resources on your network. DNS Server service is integrated into the design and implementation of Active Directory, making them a perfect match.
SPatiaL MetHODS
" Display the value returned from the Web service to
Lesson Review
Step 1: Confirm Group Membership
rithm s key pair. Pass true to this method to export both the private and public key, or pass false to export only the public key.
understanding SSAS Processing options
Prompt for a username and password
B. Alex can e-mail a remote assistance invitation to the administrator using the
4. Other Log Shipping Monitor Settings options include history retention, which determines how long the log shipping configuration will retain history information about the task, and the name and schedule for the alert job that raises an alert if there are problems in any log shipping jobs. You should use the same schedule as the schedule for the log shipping backup task.
Subnet mask (required)
ChAPTER 3
There is also the option to make a folder private. When you make a folder private, only the owner of the folder can access its contents. You can make folders private only if they are in the user's personal user profile (and only if the disk is formatted with NTFS, the native file system for Windows XP; you will learn more about NTFS in Chapter 11). A personal user profile defines customized desktop environments, display settings, and network and printer connections, among other things. Personal user profile folders include My Documents and its subfolders, Desktop, Start Menu, Cookies, and Favorites. To locate the list of local user profiles, right-click My Computer, select Properties, and, from the Advanced tab, in the User Profiles section, select Settings. To view a personal user profile, browse to C:\Documents And Settings\User Name, as shown in Figure 9-3.
|
http://www.businessrefinery.com/yc2/284/74/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Question 4 is somewhat misleading imo…asking what happens when you call report…
time = "3pm"
mood = "good"

def report():
    print("The current time is " + time)
    print("The mood is " + mood)

print("Beginning of report")
Anyways, the call stack implies that "Beginning of report" gets printed to the terminal when report() is called, due to the execution flow. Yet the answer only expects the print statements inside report()… this is misleading, and it isn't really right either. I could see this being useful maybe in Java…
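To illustrate the distinction the quiz seems to be testing, here is a minimal sketch (not the actual quiz code): module-level statements run as soon as the script is executed, while a function body runs only when the function is called.

def report():
    print("inside report()")   # runs only when report() is called

print("module level")   # runs as soon as the script is executed
report()                 # only now is "inside report()" printed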
|
https://discuss.codecademy.com/t/this-quesiton-isnt-too-good-imo/630848
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Smart Plugins¶
Pyrogram embeds a smart, lightweight yet powerful plugin system that is meant to further simplify the organization of large projects and to provide a way for creating pluggable (modular) components that can be easily shared across different Pyrogram applications with minimal boilerplate code.
Tip
Smart Plugins are completely optional and disabled by default.
Contents
Introduction¶
Prior to the Smart Plugin system, pluggable handlers were already possible. For example, if you wanted to modularize your applications, you had to put your function definitions in separate files and register them inside your main script after importing your modules, like this:
Note
This is an example application that replies in private chats with two messages: one containing the same text message you sent and the other containing the reversed text message.
Example: “Pyrogram” replies with “Pyrogram” and “margoryP”
myproject/
    config.ini
    handlers.py
    main.py
handlers.py
def echo(client, message):
    message.reply(message.text)


def echo_reversed(client, message):
    message.reply(message.text[::-1])
main.py
from pyrogram import Client, filters
from pyrogram.handlers import MessageHandler

from handlers import echo, echo_reversed

app = Client("my_account")

app.add_handler(
    MessageHandler(
        echo,
        filters.text & filters.private))

app.add_handler(
    MessageHandler(
        echo_reversed,
        filters.text & filters.private),
    group=1)

app.run()
This is already nice and doesn't add too much boilerplate code, but things can still get tedious: you have to manually import, manually add_handler() and manually instantiate each MessageHandler object, because you can't use those cool decorators for your functions. So, what if you could? Smart Plugins solve this issue by taking care of handler registration automatically.
Using Smart Plugins¶
Setting up your Pyrogram project to accommodate Smart Plugins is pretty straightforward:
Create a new folder to store all the plugins (e.g.: “plugins”, “handlers”, …).
Put your python files full of plugins inside. Organize them as you wish.
Enable plugins in your Client or via the config.ini file.
Note
This is the same example application as shown above, written using the Smart Plugin system.
myproject/
    plugins/
        handlers.py
    config.ini
    main.py
plugins/handlers.py
from pyrogram import Client, filters


@Client.on_message(filters.text & filters.private)
def echo(client, message):
    message.reply(message.text)


@Client.on_message(filters.text & filters.private, group=1)
def echo_reversed(client, message):
    message.reply(message.text[::-1])
config.ini
[plugins]
root = plugins
main.py
from pyrogram import Client

Client("my_account").run()
Alternatively, without using the config.ini file:
from pyrogram import Client

plugins = dict(root="plugins")

Client("my_account", plugins=plugins).run()
The first important thing to note is the new plugins folder. You can put any python file in any subfolder and each file can contain any decorated function (handlers), with one limitation: within a single module (file) you must use different names for each decorated function.
The second thing is telling Pyrogram where to look for your plugins: you can either use the config.ini file or the Client parameter “plugins”; the root value must match the name of your plugins root folder. Your Pyrogram Client instance will automatically scan the folder upon starting to search for valid handlers and register them for you.
Then you'll notice you can now use decorators. That's right, you can apply the usual decorators to your callback functions in a static way, i.e. without having the Client instance around: simply use @Client (the Client class) instead of the usual @app (the Client instance) and things will work just the same.
Specifying the Plugins to include¶
By default, if you don’t explicitly supply a list of plugins, every valid one found inside your plugins root folder will be included by following the alphabetical order of the directory structure (files and subfolders); the single handlers found inside each module will be, instead, loaded in the order they are defined, from top to bottom.
Note
Remember: there can be at most one handler, within a group, dealing with a specific update. Plugins with overlapping filters included a second time will not work. Learn more at More on Updates.
This default loading behaviour is usually enough, but sometimes you want to have more control on what to include (or exclude) and in which exact order to load plugins. The way to do this is to make use of the include and exclude directives, either in the config.ini file or in the dictionary passed as Client argument. Here's how they work:
- If both include and exclude are omitted, all plugins are loaded as described above.
- If include is given, only the specified plugins will be loaded, in the order they are passed.
- If exclude is given, the plugins specified here will be unloaded.
The include and exclude value is a list of strings. Each string contains the path of the module relative to the plugins root folder, in Python notation (dots instead of slashes).

E.g.: subfolder.module refers to plugins/subfolder/module.py, with root="plugins".
You can also choose the order in which the single handlers inside a module are loaded, thus overriding the default top-to-bottom loading policy. You can do this by appending the name of the functions to the module path, each one separated by a blank space.
E.g.: subfolder.module fn2 fn1 fn3 will load fn2, fn1 and fn3 from subfolder.module, in this order.
Examples¶
Given this plugins folder structure with three modules, each containing their own handlers (fn1, fn2, etc…), which are also organized in subfolders:
myproject/
    plugins/
        subfolder1/
            plugins1.py
                - fn1
                - fn2
                - fn3
        subfolder2/
            plugins2.py
                ...
        plugins0.py
            ...
    ...
Load every handler from every module, namely plugins0.py, plugins1.py and plugins2.py in alphabetical order (files) and definition order (handlers inside files):
Using config.ini file:
[plugins]
root = plugins
Using Client’s parameter:
plugins = dict(root="plugins")

Client("my_account", plugins=plugins).run()
Load only handlers defined inside plugins2.py and plugins0.py, in this order:
Using config.ini file:
[plugins]
root = plugins
include =
    subfolder2.plugins2
    plugins0
Using Client’s parameter:
plugins = dict(
    root="plugins",
    include=[
        "subfolder2.plugins2",
        "plugins0"
    ]
)

Client("my_account", plugins=plugins).run()
Load everything except the handlers inside plugins2.py:
Using config.ini file:
[plugins]
root = plugins
exclude =
    subfolder2.plugins2
Using Client’s parameter:
plugins = dict(
    root="plugins",
    exclude=["subfolder2.plugins2"]
)

Client("my_account", plugins=plugins).run()
Load only fn3, fn1 and fn2 (in this order) from plugins1.py:
Using config.ini file:
[plugins]
root = plugins
include =
    subfolder1.plugins1 fn3 fn1 fn2
Using Client’s parameter:
plugins = dict(
    root="plugins",
    include=["subfolder1.plugins1 fn3 fn1 fn2"]
)

Client("my_account", plugins=plugins).run()
Load/Unload Plugins at Runtime¶
In the previous section we’ve explained how to specify which plugins to load and which to ignore before your Client starts. Here we’ll show, instead, how to unload and load again a previously registered plugin at runtime.
Each function decorated with the usual on_message decorator (or any other decorator that deals with Telegram updates) will be modified in such a way that a special handler attribute, pointing to a tuple of (handler: Handler, group: int), is attached to the function object itself.
plugins/handlers.py
@Client.on_message(filters.text & filters.private)
def echo(client, message):
    message.reply(message.text)


print(echo)
print(echo.handler)
Printing echo will show something like <function echo at 0x10e3b6598>. Printing echo.handler will reveal the handler, that is, a tuple containing the actual handler and the group it was registered on: (<MessageHandler object at 0x10e3abc50>, 0).
Unloading¶
In order to unload a plugin, all you need to do is obtain a reference to it by importing the relevant module and call the Client's remove_handler() method with your function's handler special attribute, preceded by the star * operator, as argument. Example:
main.py
from plugins.handlers import echo

...

app.remove_handler(*echo.handler)
The star * operator is used to unpack the tuple into positional arguments so that remove_handler will receive exactly what is needed. The same could have been achieved with:

handler, group = echo.handler
app.remove_handler(handler, group)
Similarly to the unloading process, in order to load again a previously unloaded plugin you do the same, but this time using add_handler() instead. Example:
main.py
from plugins.handlers import echo

...

app.add_handler(*echo.handler)
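Putting the two together, here is a minimal sketch (assuming the same plugins/handlers.py module shown above) of unloading the echo plugin at runtime and registering it again later:

from pyrogram import Client
from plugins.handlers import echo

app = Client("my_account", plugins=dict(root="plugins"))
app.start()

# temporarily disable the echo plugin
app.remove_handler(*echo.handler)

# ... do whatever needs to happen while it is unloaded ...

# register it again
app.add_handler(*echo.handler)

app.stop()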
|
https://docs.pyrogram.org/topics/smart-plugins
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
README
Logline

Logline is a lightweight, useful log agent for the front-end, running on the client side.

Why is positioning problems difficult for the front-end?

Most front-end developers have had a similar experience: once code is deployed to production, it runs on countless clients. In most cases we can only guess at the problems, especially occasional, hard-to-see ones, because we have no idea what our users' actual operations were, so it's hard to reproduce the scene. At moments like this, a detailed, classified log agent, just like the back end has, would be a great help.

Application scenarios
Reproduce user operations
In a production environment, users' operations are unpredictable, and even they cannot remember the details themselves. With logs, we can reproduce their operation paths and running state.
Monitoring core processes
In our products' core processes, we can upload logs proactively, so that we can focus on users' problems and count occurrences quickly.

Proactively fetch users' logs and analyze user activity

Don't rely on users to cooperate with you. We can design a strategy instead: for example, deploy a JSON file online that contains a list of target users; when our product is opened on their client, it downloads this JSON file after a certain delay (to avoid affecting the performance of core processes), and if the user's id is in the list, the logs are uploaded proactively.

Count errors and help with analysis

We can make use of Logline to count JS errors. With the error stack, we can speed up the analysis.
Features
- No extra dependencies
- Client-side (logs are fetched only when actually needed, saving bandwidth and traffic)
- Multiple filter dimensions (namespace, degree and keyword)
- Multiple stores (IndexedDB, WebSQL, localStorage)
- Cleanable (in case logs take up too much user space)
Quick to get started

1. Installation

With Bower

bower install logline-web

*Sadly, the package name 'logline' is already taken; any suggestions are welcome in the issues.*

Download archive

Select the version you want from the release downloads.

2. Import to your project

Logline is a UMD-ready module; import it however your project requires. CMD is evil and is not supported; wrap it yourself if you really need it.
// using a <script> element
<script src="./mod/logline.min.js"></script>

// using an AMD loader
var Logline = require('./mod/logline.min');
3. Choose a log protocol

Logline implements three protocols, all of them mounted on the Logline object for convenient access and better semantics:

- websql: Logline.PROTOCOL.WEBSQL
- indexeddb: Logline.PROTOCOL.INDEXEDDB
- localstorage: Logline.PROTOCOL.LOCALSTORAGE

You can use the using method to specify a protocol.
Logline.using(Logline.PROTOCOL.WEBSQL);
If you call Logline-related APIs without specifying a protocol in advance, Logline will choose an available protocol automatically, respecting the priority defined by the configuration parameters used during the compile process.

For example, if your compile command is npm run configure -- --with-indexeddb --with-websql --with-localstorage, then the indexeddb protocol will be chosen automatically if it is available; otherwise, if indexeddb is not available but websql is, the websql protocol will be chosen, and so on.
If none of the compiled protocols are available, an error will be thrown.
4. Record logs

var spaLog = new Logline('spa'),
    sdkLog = new Logline('sdk');

// with description, without extra data
spaLog.info('init.succeed');

// with description and extra data
spaLog.error('init.failed', {
    retcode: 'EINIT',
    retmsg: 'invalid signature'
});

// with description, without extra data
sdkLog.warning('outdated');

// with description and extra data
sdkLog.critical('system.vanish', {
    // debug infos here
});

5. Read logs

// collect all logs
Logline.all(function(logs) {
    // process logs here
});

// collect logs within the last 0.3 days
Logline.get('.3d', function(logs) {
    // process logs here
});

// collect logs from 3 days ago, but earlier than 1 day ago
Logline.get('3d', '1d', function(logs) {
    // process logs here
});

6. Clean logs

Logline.keep(.5);  // keep logs within half a day; if `.5` is not provided, all logs are cleaned up
Logline.clean();   // clean all logs and delete the database
Custom database name

Because indexeddb, websql and localstorage are all domain-shared storage, the default database name logline may already be taken. You can specify a custom database name in either of two ways:

// pass a second parameter when calling the `using` API
Logline.using(Logline.PROTOCOL.WEBSQL, 'newlogline');

// or call the `database` API
Logline.database('newlogline');
Custom Compile

Logline implements the localstorage, websql and indexeddb protocols, and all of them are compiled in by default. If you don't need all of them, you can use npm run configure and npm run build to produce a custom build with only some of the protocols packed. This helps to reduce the package size.

// pack all protocols (no parameters)
npm run configure

// pack only the protocols you want; remove the corresponding --with-xx
npm run configure -- --with-localstorage --with-websql

// re-compile
npm run build

// find the custom build in the dist folder
ls -l dist/
FAQ

How to upload logs

Since v1.0.1, the log upload ability has been removed, because upload procedures vary between projects and we want Logline to focus on log recording and maintenance. You can still use Logline.all and Logline.get to fetch the logs and implement your own upload procedure.

How to analyze logs

The format Logline produces is standard and readable, so you can read the logs in a terminal or in a text editor. We also provide Logline-viewer to help you do so.
|
https://www.skypack.dev/view/logline-web
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
by Gil
Hello, in this article, I’ll try to explain what Photon-pump is, and write an easy example so you can start using it for your own projects.
Photon-pump [] is a client for Event Store [] that we developed at made.com []. It's the little brother of atomic puppy [] (another Event Store client); it's async-first and works over TCP, so it's also faster (atomicpuppy uses HTTP).

I won't talk about event sourcing, since it has been covered in previous posts, so this will be just a very simple and silly example of event sourcing.

So, let's say we have a game. For a game to happen we need players, so we need to create them. We're going to pretend we have an application that creates players and then creates an event, placing it in the appropriate stream of Event Store.

This is an example of the "player created" event; it's a JSON blob:

{"name": "Gil"}

Now we also need to pick a stream, which is just a string representing the "bucket" where the event will be put. We'll use "adventure", which is the name of our imaginary game; not very creative, but it's better than "game".

An event also has a type, which is like a sub-category inside the stream. This is what the event looks like:

Event(
    stream="adventure",
    type="player_created",
    data=json.dumps({"name": "Gil"})
)
So how would we add this event into Event Store using Photon-pump in a single python script?
writer.py
import asyncio
import photonpump
async def write_event(conn):
    await conn.publish_event(
        'adventure',
        'player_created',
        body={'name': 'Gil'}
    )


async def run():
    async with photonpump.connect('localhost') as conn:
        await write_event(conn)


if __name__ == '__main__':
    event_loop = asyncio.get_event_loop()
    event_loop.run_until_complete(run())
So, line by line: we have an async function called write_event which, as the name states, writes the event into Event Store using the Photon-pump connection passed in as an argument.

Next, we have the run function, which simply creates the connection and passes it to write_event.

Finally, the ugly if __name… block both creates the event_loop and runs it synchronously.

Now, if you have Event Store running locally (if you don't, change the address in the script), go to this url: and you should see the new event there.
Now that we have an event there, let’s move on to the second part: reading the events from python, and doing something with them. For this post, we’ll just stick with a simple print.
Since we wrote the event in the adventure stream, we want to read the events from that stream in a separate script.
Here is all the code we need:
reader.py
import asyncio
import photonpump
async def read_an_event(conn):
    for event_record in await conn.get('adventure'):
        print(event_record.event.type, event_record.event.json())


async def run():
    async with photonpump.connect('localhost') as conn:
        await read_an_event(conn)


if __name__ == '__main__':
    event_loop = asyncio.get_event_loop()
    event_loop.run_until_complete(run())
Ignoring run and if __name…, the read_an_event function uses Photon-pump's get method to collect all the events, using it like an iterator and printing each of the events. We get event_records, each of which contains the event, so we can print out the type and the data.
This was just a very simple example that I came up with, but if you want to make it more like the real world, how about trying to follow the previous posts about CQRS using Photon-pump to store and read the events.
Stay tuned for the next part where we will talk about subscriptions.
BONUS: If you want to replicate this code, you will need Python 3.6+ (remember to install Photon-pump: pip install photon-pump) and Docker or Event Store installed on your machine. Simply start Event Store in Docker (docker run -p 1113:1113 -p 2113:2113 eventstore/eventstore) and run the Python scripts (writer.py and reader.py) in sequence to see it work.

tags: python - open-source - photonpump
|
https://io.made.com/blog/2018-08-24-how-to-use-photon-pump.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Simple interface for working with intents and chatbots.
Project description
neuralintents
Still in a buggy alpha state.
Setting Up A Basic Assistant
from neuralintents import GenericAssistant

assistant = GenericAssistant('intents.json', model_name="test_model")

assistant.train_model()
assistant.save_model()

done = False

while not done:
    message = input("Enter a message: ")
    if message == "STOP":
        done = True
    else:
        assistant.request(message)
Binding Functions To Requests
from neuralintents import GenericAssistant

def function_for_greetings():
    print("You triggered the greetings intent!")
    # Some action you want to take

def function_for_stocks():
    print("You triggered the stocks intent!")
    # Some action you want to take

mappings = {'greeting': function_for_greetings, 'stocks': function_for_stocks}

assistant = GenericAssistant('intents.json', intent_methods=mappings, model_name="test_model")

assistant.train_model()
assistant.save_model()

done = False

while not done:
    message = input("Enter a message: ")
    if message == "STOP":
        done = True
    else:
        assistant.request(message)
Sample intents.json File
{"intents": [ {"tag": "greeting", "patterns": ["Hi", "How are you", "Is anyone there?", "Hello", "Good day", "Whats up", "Hey", "greetings"], "responses": ["Hello!", "Good to see you again!", "Hi there, how can I help?"], "context_set": "" }, {"tag": "goodbye", "patterns": ["cya", "See you later", "Goodbye", "I am Leaving", "Have a Good day", "bye", "cao", "see ya"], "responses": ["Sad to see you go :(", "Talk to you later", "Goodbye!"], "context_set": "" }, {"tag": "stocks", "patterns": ["what stocks do I own?", "how are my shares?", "what companies am I investing in?", "what am I doing in the markets?"], "responses": ["You own the following shares: ABBV, AAPL, FB, NVDA and an ETF of the S&P 500 Index!"], "context_set": "" } ] }
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/neuralintents/0.0.4/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
SUI-Autocompleted
React component that shows a list of suggestions under an input field when you start to write something.
Usage
import 'babel/polyfill';
import React from 'react';
import ReactDom from 'react-dom';
import AutocompletedContainer from './autocompleted-container';
import AutocompletedGithubUserContainer from './autocompleted-githubUsers-container';
import AutocompletedComponentContainer from './autocompleted-component-container';
import './style.scss';
import '../src/index.scss';

ReactDom.render(<AutocompletedContainer />, document.getElementById('languages'));
ReactDom.render(<AutocompletedGithubUserContainer />, document.getElementById('github-users'));
ReactDom.render(<AutocompletedComponentContainer />, document.getElementById('component-container'));
Component Properties
The component exposes the following props:
placeholder (String): Optional. Default text shown in the input field when nothing has been typed (placeholder value).

suggests (Array): Required. Array of SuggestionObjects. The array contains the suggestions to show. If you don't want to show anything you have to send an empty array.

handleChange (Function): Required. This function is called every time the user changes the input field value.

const handleChange = function( inputFileValue ){ ... }

handleSelect (Function): Required. This function is called when a suggestion is selected (via click or by pressing Enter).

const handleSelect = function( suggestionValue ){ ... }

handleBlur (Function): This function is called every time the user leaves the input.

handleFocus (Function): This function is called every time the user focuses the input.

selectFirstByDefault (Boolean): Optional. Sets the first position as the autocomplete's default active option. Defaults to true.

focus (Boolean): Optional. Triggers focus on the input. Defaults to false.

You then have to create containers, each of which sets these properties on the sui-autocompleted component. You can view an example of this kind of container in the doc folder.
SuggestObject
A SuggestObject is a plain JS object with these special keys:

{
  'id': [Unique id for the suggestion],
  'value': [Value to be passed to the handleSelect callback function],
  'content': [React Component] or [Text to be shown in the UI],
  'literal': [String] This key is REQUIRED only if you are using a React component as the content. It is used to decide
             which text is put in the input when this suggestion is selected; otherwise content will be used
}
Theme
There are several classes in order to apply a theme to the component:
- sui-autocompleted
- sui-autocompleted-input
- sui-autocompleted-clear
- sui-autocompleted-results
- sui-autocompleted-item
- sui-autocompleted-item--active
The component exports a basic CSS that you can include from the package in the node_modules.
Installation
To run the component and play with the examples you have to:
Download files from GitHub repo.
$ git clone
$ cd sui-autocompleted
Install dependencies.
$ npm install  // Install npm dependencies from package.json
Launch the development environment.
$ npm run dev  // Run development environment
- Go to localhost:8080
Bundle
In order to generate the bundle including all React dependencies and the component logic we need to bundle a single JS file running the following command:
$ npm run build
JS Testing
Execute a complete test by running one of the following:

- Single mode: `$ npm test`
- Watch mode: `$ npm run test:watch`

Lint Testing

In addition, you can run specific tests for linting JS and SASS:

SASS (linting rules specified in the .scss-lint.yml file):

$ npm run lint:sass
NPM
The SUI-Autocompleted component is available as an NPM package here:

npm install @schibstedspain/sui-autocompleted
|
https://reactjsexample.com/react-component-that-shows-a-list-of-suggestions-under-an-input-file/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
The http_poller input will add metadata about the HTTP connection itself to each event.
When ECS compatibility is disabled, this metadata is added to a variety of non-standard top-level fields, which has the potential to create confusion and schema conflicts downstream.
With ECS Compatibility Mode, we can ensure a pipeline maintains access to this metadata throughout the event’s lifecycle without polluting the top-level namespace.
Here’s how ECS compatibility mode affects output.
This plugin supports the following configuration options plus the Common Options described later.
Also see Common Options for a list of options supported by all input plugins.
Controls this plugin’s compatibility with the Elastic Common Schema (ECS). See Event Metadata and the Elastic Common Schema (ECS) for detailed information.
Example output:
Sample output: ECS disabled
{ "http_poller_data" => { "@version" => "1", "@timestamp" => 2021-01-01T00:43:22.388Z, "status" => "UP" }, "@version" => "1", "@timestamp" => 2021-01-01T00:43:22.389Z, }
Sample output: ECS enabled
{ "http_poller_data" => { "status" => "UP", "@version" => "1", "event" => { "original" => "{\"status\":\"UP\"}" }, "@timestamp" => 2021-01-01T00:40:59.558Z }, "@version" => "1", "@timestamp" => 2021-01-01T00:40:59.559Z }
Sample error output: ECS enabled
{ "@timestamp" => 2021-07-09T09:53:48.721Z, "@version" => "1", "host" => { "hostname" => "MacBook-Pro" }, "http" => { "request" => { "method" => "get" } }, "event" => { "duration" => 259019 }, "error" => { "stack_trace" => nil, "message" => "Connection refused (Connection refused)" }, "url" => { "full" => "" }, "tags" => [ [0] "_http_request_failure" ] }.
Password to be used in conjunction with user for HTTP authentication.

When ECS is enabled, set target in the codec (if the codec has a target option).

Example:

codec => json { target => "TARGET_FIELD_NAME" }
|
https://www.elastic.co/guide/en/logstash-versioned-plugins/current/v5.1.0-plugins-inputs-http_poller.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Hooks are a feature introduced in React 16.8. They let you use state and other React features without writing a class.
When to use Hooks:
If you write a function component and then want to add some state to it, you previously had to convert it to a class. Now you can do it by using a Hook inside the existing function component.
Rules to use Hooks:
- Only call Hooks from React functions,
- Only Call Hooks at the Top Level.
- Hooks can call other Hooks.
Don’t call Hooks from regular JavaScript functions. Instead, you can:
- Call Hooks from React function components.
- Call Hooks from custom Hooks.
Hook State:
Hook state is the new way of declaring state in a React app. The useState() Hook is used for setting and retrieving state.
Hook Effects:
The Effect Hook lets us perform side effects in function components. It does not use the component lifecycle methods that are available in class components; in other words, the Effect Hook covers what componentDidMount(), componentDidUpdate(), and componentWillUnmount() do.
useState example:

import React, { useState } from 'react';

function Demo1() {
  const [count, setCount] = useState(0);
  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}

export default Demo1;
useEffect example:

import React, { useState, useEffect } from 'react';

function Demo2() {
  const [count, setCount] = useState(0);

  useEffect(() => {
    document.title = `You clicked ${count} times`;
  });

  return (
    <div>
      <p>You clicked {count} times</p>
      <button onClick={() => setCount(count + 1)}>Click me</button>
    </div>
  );
}
useContext example:

const TestContext = React.createContext();

function Display() {
  const value = useContext(TestContext);
  return <div>{value}</div>;
}

function App() {
  return (
    <TestContext.Provider value="Hello from context">
      <Display />
    </TestContext.Provider>
  );
}
useRef example:

function App() {
  let [name, setName] = useState("Nate");
  let nameRef = useRef();

  const submitButton = () => {
    setName(nameRef.current.value);
  };

  return (
    <div>
      <p>{name}</p>
      <div>
        <input ref={nameRef} />
        <button type="button" onClick={submitButton}>
          Submit
        </button>
      </div>
    </div>
  );
}
More advanced hooks:
The 3 hooks mentioned above are considered to be the basic hooks. It’s possible to write entire applications using only useState, useEffect, and useContext, you could get away with just the first two. The hooks that follow offer optimizations and increasingly niche utility that you may never encounter in your applications.
useCallback:
React has a number of optimizations that rely on props remaining the same across renders. One of the simplest ways to break this is by defining callback functions inline. That’s not to say that defining functions inline will cause performance problems — in many cases, it has no impact. However, as you begin to optimize and identify what’s causing frequent re-renders, you may find inline function definitions to be the cause of many of your unnecessary prop change.
import doSomething from "./doSomething";

const FrequentlyRerenders = ({ id }) => {
  return (
    <ChildComponent
      onEvent={useCallback(() => doSomething(id), [id])}
    />
  );
};
useMemo:
It's closely related to useCallback, but for optimizing data processing. It has the same API for defining what values it depends on as useEffect and useCallback.
const ExpensiveComputation = ({ data, sortComparator, filterPredicate }) => {
  const transformedData = useMemo(() => {
    return data
      .filter(filterPredicate)
      .sort(sortComparator);
  }, [data, sortComparator, filterPredicate]);

  return <ChildComponent data={transformedData} />;
};
useRef:
useRef provides a mechanism for these cases. It creates an object that exists for as long as the component is mounted, exposing the value assigned as a .current property.
// DOM node ref example
function TextInputWithFocusButton() {
  const inputEl = useRef(null);
  const onButtonClick = () => {
    // `current` points to the mounted text input element
    inputEl.current.focus();
  };
  return (
    <>
      <input ref={inputEl} type="text" />
      <button onClick={onButtonClick}>Focus the input</button>
    </>
  );
}

// An arbitrary instance property
function Timer() {
  const intervalRef = useRef();

  useEffect(() => {
    const id = setInterval(() => {
      // ...
    });
    intervalRef.current = id;

    return () => {
      clearInterval(intervalRef.current);
    };
  });
}
useReducer:
This hook has interesting implications for the ecosystem. The reducer/action pattern is one of the most powerful benefits of Redux. It encourages modeling UI as a state machine, with clearly defined states and transitions. One of the challenges to using Redux, however, is gluing it all together. Action creators, which components to connect(), mapStateToProps, using selectors, coordinating asynchronous behavior.
Rarely used hooks:
useLayoutEffect: If I use any of these 3, I anticipate it will be useLayoutEffect. This is the hook recommended when you need to read computed styles after the DOM has been mutated, but before the browser has painted the new layout. This gives you an opportunity to apply animations with the least chance of visual artifacts or browser rendering performance problems. This is the method currently used by react-flip-move.

useMutationEffect: This is the hook I'm having the hardest time wrapping my head around. It's run immediately before React mutates the DOM with the results from render, but useLayoutEffect is the better choice when you have to read computed styles. The docs specify that it runs before sibling components are updated and that it should be used to perform custom DOM mutations. This is the only hook that I can't picture a use case for, but it might be useful for cases like when you want a different tool (like D3, or perhaps a canvas or WebGL renderer).
React Hooks Tutorial for Beginners: setting up the project
npx create-react-app exploring-hooks
(You should have one of the latest version of Node.js for running npx).
In React component, there are two types of side effects:
1.Effects Without Cleanup
2.Effects With Cleanup
Advantage of React.js :
- Easy to Learn and Use
- Creating Dynamic Web Applications Becomes Easier
- Reusable Components
- Performance Enhancement
- The Support of Handy Tools
- Known to be SEO Friendly
- The Benefit of Having JavaScript Library
- Scope for Testing the Codes
Disadvantage of React.js
- The high pace of development
- Poor Documentation
- View Part
- JSX as a barrier
In conclusion
Hooks have me excited about the future of React all over again. I’ve been using this tool since 2014, and it has continually introduced new changes that convince me that it’s the future of web development. These hooks are no different, and yet again substantially raise the bar for developer experience, enabling me to write durable code, and improve my productivity by extracting reused functionality.
I expect that React applications will set a new bar for end-user experience and code stability.
Questions:
Q. Which versions of React include Hooks?
Starting with 16.8.0, React includes a stable implementation of React Hooks for:
* React DOM
* React Native
* React DOM Server
* React Test Renderer
* React Shallow Renderer
Q. Do I need to rewrite all my class components?
No. There are no plans to remove classes from React.
Q. What can I do with Hooks that I couldn't with classes?
Hooks offer a powerful and expressive new way to reuse functionality between components.
Q..
Q. How to test components that use Hooks?
From React point of view, a component using Hooks is just a regular component. If your testing solution doesn't rely on React internals, testing components with Hooks shouldn't be different from how you normally test components.
------Thanks for the read.---------
Discussion (1)
One of the best blog i ever read. Thanks for sharing sir
|
https://dev.to/smileyshivam/react-hooks-basics-3n71
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
std::filesystem::remove, std::filesystem::remove_all
From cppreference.com
< cpp | filesystem
1) The file or empty directory identified by the path p is deleted as if by the POSIX remove. Symlinks are not followed (the symlink is removed, not its target).
2) Deletes the contents of p (if it is a directory) and the contents of all its subdirectories, recursively, then deletes p itself as if by repeatedly applying the POSIX remove. Symlinks are not followed (the symlink is removed, not its target).
Parameters
Return value
1) true if the file was deleted, false if it did not exist. The overload that takes an error_code& argument returns false on errors.
2) Returns the number of files and directories that were deleted (which may be zero if p did not exist to begin with). The overload that takes an error_code& argument returns static_cast<std::uintmax_t>(-1) on error.
On POSIX systems, this function typically calls unlink and rmdir as needed; on Windows, RemoveDirectoryW and DeleteFileW.
Example
Run this code
#include <iostream>
#include <cstdint>
#include <filesystem>
namespace fs = std::filesystem;

int main()
{
    fs::path dir = fs::temp_directory_path();
    fs::create_directories(dir / "abcdef/example");
    std::uintmax_t n = fs::remove_all(dir / "abcdef");
    std::cout << "Deleted " << n << " files or directories\n";
}
Possible output:
Deleted 2 files or directories
|
http://asasni.cs.up.ac.za/docs/cpp/cpp/filesystem/remove.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
The Grace Standard Prelude
Draft Specification Version 0.7.0
This is a specification of the Grace Standard Prelude. Grace programs run in this dialect unless they nominate a different dialect via the 'dialect' statement. The Standard Prelude provides a range of methods and types and libraries for general purpose programming.. This specification is notably incomplete, and everything is subject to change. should, which is described in the language specification. Iterables and for loops.
Unbounded loops execute a block of code repeatedly, so long as some condition is satisfied. They terminate when the condition ceases to hold. They are useful when the number of iterations to be executed is not known in advance.
Matching blocks and self-matching objects can be conveniently used in the match(_)case(_)... family of methods to support multiway branching.

Grace's valueOf allows a statement list where an expression is required.

def constant = valueOf {
    def local1 = ...
    def local2 = ...
    complicated expression involving locals
}
Grace supports built-in objects with types
Object,
Number,
Boolean, and
String.
All Grace objects (except
done) understand the methods in type
Object. These methods will often be omitted when other types are described.
type Object = { == (other: Object) -> Boolean // true if other is equal to self != (other: Object) -> Boolean // the inverse of ==. There are unicode aliases for this opreator. hash -> Number // the hash code of self, a Number in the range 0 .. 2^32 match (other: Object) -> SucccessfulMatch | FailedMatch // returns a SuccessfulMatch if self "matches" other // returns FailedMatch otherwise. // The exact meaning of "matches" depends on self. asString -> String // a string describing self asDebugString -> String // a string describing the internals of self :: (other:Object) -> Binding // a Binding object with self as key and other as value. }
Number describes all numeric values in minigrace, including integers and numbers with decimal fractions. (Thus, minigrace
Numbers are what some other languages call floating point numbers, floats or double-precision).
Numbers are represented with a precision of approximately 51 bits.
type Number = { + ") truncated -> Number // number obtained by throwing away self's fractional part rounded -> Number // whole number closest to self floor -> Number // largest whole number less than or equal to self ceiling -> Number // smallest number greater than or equal to self abs -> Number // the absolute value of self isNan -> Boolean // true if this Number is a NaN }
String constructors are written surrounded by double quote characters. There are three commonly-used escape characters:
\n means the newline character
\\ means a single backslash character
\" means a double quote character.
There are also escapes for a few other characters and for arbitrary Unicode codepoints; for more information, see the Grace language specification.
String constructors can also contain simple Grace expressions1) -> Number // returns true if other is a substring of self endsWith (possibleSuffix: String) // true if self ends with possibleSuffix filter (predicate: Block1[[String,Boolean]]) -> String // returns the String containing those characters of self for which predicate returns true fold[[U]] (binaryFunction: Block2[[U,String,U]]) (pattern:String) ifAbsent (absent:Block0[[W]]) -> Number | W // returns the leftmost index at which pattern appears in self; applies absent if it is not there. indexOf (pattern:String) startingAt (offset) -> Number // like indexOf(pattern), except that it returns the first index ≥ offset, or 0 if pattern is not found. indexOf[[W]] (pattern:String) startingAt(offset) ifAbsent (action:Block0[[W]]) -> Number | W // like the above, except that it answers the result of applying action if there is no such index. indices -> Sequence // an object representing the range of indices of self (1..self.size) isEmpty -> Boolean // true if self is the empty string iterator -> Iterator[[String]] // an iterator over the characters of self lastIndexOf (sub:String) -> Number // returns the rightmost index at which sub appears in self, or 0 if it is not there. lastIndexOf[[W]] (sub:String) ifAbsent (absent:Block0[[W]]) -> Number | W // returns the rightmost index at which sub appears in self; applies absent if it is not there. lastIndexOf[[W]] (pattern:String) startingAt (offset) ifAbsent (action:Block0[[W]]) -> Number | W // like the above, except that it returns the rightmost index ≤ offset. map[[U]] (function:Block[[String,U]]) -> Iterable[[U]] // returns an Iterable object. }
The Boolean literals are
true and
false.
type Boolean = { not -> Boolean prefix ! -> Boolean // the negation of self && (other: BlockOrBoolean) -> Boolean // return true when self and other are both true || (other: BlockOrBoolean) -> Boolean // return true when either self or other (or both) are true }
In conditions in
if statements, and in the operators
&& and
||, a Block returning a boolean may be used instead of a Boolean.
This means that
&& and
|| can be used as “shortcircuit”, also known as “non-commutative”, operators: they will evaluate their argument only if necessary.
type BlockBoolean = { apply -> Boolean } type BlockOrBoolean = BlockBoolean | Boolean
Blocks are anonymous functions that take zero or more arguments and return one result. There is a family of Block types that describe block objects.

type Block0[[R]] = type { apply -> R }
type Block1[[T,R]] = type { apply(a:T) -> R }
type Block2[[S,T,R]] = type { apply(a:S, b:T) -> R }
Points can be thought of as locations in the cartesian plane, or as 2-dimensional vectors from the origin to a specified location. Points are created from Numbers using the
@ infix operator. Thus,
3 @ 4 represents the point with coordinates (3, 4).
type Point = { x -> Number // the x-coordinates of self y -> Number // the y-coordinate) / (divisor:Number) -> Point // this point scaled by 1/factor, i.e. (self.x/divisor) @ (self.y/divisor) length -> Number // distance from self to the origin distanceTo(other:Point) -> Number // distance from self to other }
A binding is an immutable pair comprising a
key and a
value. Bindings are created with the infix
:: operator, as in
k::v, or by requesting
binding.key(k) value(v).
type Binding[[K, T]] = { key -> K // returns the key value -> T // returns the value }
The objects described in this section are made available to all standard Grace programs. (This means that they are defined as part of the standardGrace dialect.) As is natural for collections, the types are parameterized by the types of the elements of the collection. Type arguments are enclosed in [[ and ]] used as brackets. This enables us to distinguish, for example, between Set[[Number]] and Set[[String]]. In Grace programs, type arguments and their brackets can be omitted; this is equivalent to using Unknown as the argument, which says that the programmer either does not know, or does not care to state, the type.
The major kinds of collection are
sequence,
list,
set and
dictionary. Although these objects differ in their details, they share many common methods, which are defined in a hierarchy of types, each extending the one above it in the hierarchy. The simplest is the type
Iterable<T>, which captures the idea of a (potentially unordered) collection of elements, each of type
T, over which a client can iterate:
type Iterable[[T]] = type { iterator -> Iterator[[T]] // Returns an iterator over my elements. It is an error to modify self while iterating over it. // Note: all other methods can be defined using iterator. isEmpty -> Boolean // True if self has no elements size -> Number // The number of elements in self; raises SizeUnknown if size is not known. sizeIfUnknown(action: Block0[[Number]]) -> Number // The number of elements in self; if size is not known, then action is evaluated and its value returned. first -> T // The first element of self; raises BoundsError if there is none. do(action: Block1[[T,Unknown]]) -> Done // Applies action yo each element of self. do(action:Block1[[T, Unknown]]) separatedBy(sep:Block0[[Unknown]]) -> Done // applies action to each element of self, and applies sep (to no arguments) in between. map[[R]](unaryFunction:Block1[[T, R]]) -> Self // returns a new collection whose elements are obtained by applying unaryFunction to // each element of self. If self is ordered, then the result is ordered. fold[[R]](binaryFunction:Block2[[R, T, R]]) startingWith(initial:R) -> R // folds binaryFunction over self, starting with initial. If self is ordered, this is // the left fold. For example, fold {a, b -> a + b} startingWith 0 // will compute the sum, and fold {a, b -> a * b} startingWith 1 the product. filter(condition:Block1[[T, Boolean]]) -> Self // returns a new collection containing only those elements of self for which // condition holds. The result is ordered if self is ordered. ++(other: Iterable[[T]]) -> Self // returns a new object whose elements include those of self and those of other. }
The type
Collection adds some conversion methods to
Iterable:
type Collection[[T]] = Iterable[[T]] & type { asList -> List[[T]] // returns a (mutable) list containing my elements. asSequence -> Sequence[[T]] // returns a sequence containing my elements. asSet -> Set[[T]] // returns a (mutable) Set containing my elements, with duplicates eliminated. // The == operation on my elements is used to identify duplicates. }
Additional methods are available in the type
Enumerable; an
Enumerable is like a
Sequence, but where the elements must be enumerated one by one, in order, using a computational process, rather than being stored explicitly. For this reason, operations that require access to all of the elements at one time are not supported, except for conversion to other collections that store their elements. The key difference between an
Iterable and an
Enumerable is that
Enumerables have a natural order, so lists are
Enumerable, whereas sets are just
Iterable.
type Enumerable[[T]] = Collection[[T]] & type { values -> Enumerable[[T]] // an enumeration of my values: the elements in the case of sequence or list, // the values the case of a dictionary. asDictionary -> Dictionary[[Number, T]] // returns a dictionary containing my indices as keys and my elements as values, so that // my i^th element is self.asDictionary.at(i). keysAndValuesDo (action:Block2[[Number, T, Object]]) -> Done // applies action, in sequence, to each of my keys and the corresponding element. into(existing:Collection[[T]]) -> Collection[[T]] // adds my elements to existing, and returns existing. sorted -> List[[T]] // returns a new List containing all of my elements, but sorted by their < and == operations. sortedBy(sortBlock:Block2[[T, T, Number]]) -> Sequence[[T]] // returns a new List containing all of my elements, but sorted according to the ordering // established by sortBlock, which should return -1 if its first argument is less than its second // argument, 0 if they are equal, and +1 otherwise. }
The Grace language uses brackets as a syntax for constructing
lineup objects.
[2, 3, 4] is a lineup containing the three numbers 2, 3 and 4.
[ ] constructs the empty lineup.
Lineup objects have type
Iterable. They are not indexable, so can’t be used like arrays or lists. They are primarily intended for initializing more capable collections, as in
list [2, 3, 4], which creates a list, or
set ["red", "green", "yellow"], which creates a set. Notice that a space must separate the name of the method from the lineup.
The type
Sequence[[T]] describes sequences of values of type
T. Sequence objects are immutable; they can be constructed either explicitly, using a request such as
sequence [1, 3, 5, 7], or as ranges such as
1..10.
type Sequence[[T]] = Enumerable[[T]] & type { at(ix:Number) -> T // returns my x^th element, provided ix is integral and l ≤ \leq ix ≤ size first -> T // returns my first element second -> T // returns my second element third -> T // returns my third element fourth -> T // returns my fourth element fifth -> T // returns my fifth element last -> T // returns my last element indices -> Sequence[[Number]] // returns the sequence of my indices. keys -> Sequence[[Number]] // same as indices; the name keys is for compatibility with dictionaries. indexOf(sought:T) -> Number // returns the index of my first element v such that v == sought. Raises NoSuchObject if there is none. indexOf[[W]](sought:T) ifAbsent(action:Block0[[W]]) -> Number | W // returns the index of the first element v such that v == sought. Performs action if there is no such element. reversed -> Sequence[[T]] // returns a Sequence containing my values, but in the reverse order. contains(sought:T) -> Boolean // returns true if I contain an element v such that v == sought }
.. operation on Numbers can also be conveniently used to create ranges. Thus,
3..9 is the same as
range.from 3 to 9, and
(3..9).reversed is the same as
range.from 9 downTo 3.
The type
List[[T]] describes objects that are mutable lists of elements that have type
T. Like sets and sequences, list objects can be constructed using the
list request, as in
list[[T]] [ ],
list[[T]] [a, b, c], or
list (existingCollection).
type List[[T]] = Sequence[[T]] & type { at(n: Number) put(new:T) -> List[[T]] // updates self so that my n^th element is new. Returns self. // Requires 1 ≤ n ≤ size+1; when n = size+1, equivalent to addLast(new). add(new:T) -> List[[T]] addLast(new:T) -> List[[T]] // adds new to end of self. (The first form can be also be applied to sets, which are not Indexable.) addFirst(new:T) -> List[[T]] // adds new as the first element(s) of self. Changes the index of all of the existing elements. addAllFirst(news: Iterable[[T]]) -> List<T> // adds news as the first elements of self. Changes the index of all of the existing elements. removeFirst -> T // removes and returns first element of self. Changes the index of the remaining elements. removeLast -> T // remove and return last element of self. removeAt(n:Number) -> T // removes and returns n^th element of self remove(element:T) -> List[[T]] // removes element from self. Raises NoSuchObject if not.self.contains(element). // Returns self remove(element:T) ifAbsent(action:Block0[[Unknown]]) -> List[[T]] // removes element from self; executes action if it is not contained in self. Returns self removeAll(elements:Collection[[T]]) -> List[[T]] // removes elements from self. Raises a NoSuchObject exception if any one of // them is not contained in self. Returns self removeAll(elements:Collection[[T]]) ifAbsent(action:Block0[[Unknown]]) -> List[[T]] // removes elements from self; executes action if any of them is not contained in self. Returns self ++ (other:List[[T]]) -> List[[T]] // returns a new list formed by concatenating self and other addAll(extension:List[[T]]) -> List[[T]] // extends self by appending extension; returns self. contains(sought:T) -> Boolean // returns true when sought is an element of self. == (other: Object) -> Boolean // returns true when other is a Sequence of the same size as self, containing the same elements // in the same order. sort -> List[[T]] // sorts self, using the < and == operations on my elements. Returns self. // Compare with sorted, which constructs a new list. sortBy(sortBlock:Block2[[T, T, Number]]) -> List[[T]] // sorts self according to the ordering determined by sortBlock, which should return -1 if its first // argument is less than its second argument, 0 if they are equal, and +1 otherwise. Returns self. // Compare with sortedBy, which constructs a new list. copy -> List[[T]] // returns a list that is a (shallow) copy of self reverse -> List[[T]] // mutates self in-place so that its elements are in the reverse order. Returns self. // Compare with reversed, which creates a new collection. }
Sets are unordered collections of elements without duplicates. The
== method on the elements is used to detect and eliminate duplicates; it must be symmetric.
type Set[[T]] = Collection[[T]] & type {
    size -> Number
        // the number of elements in self.
    add(element:T) -> Set[[T]]
        // adds element to self. Returns self.
    addAll(elements:Collection[[T]]) -> Set[[T]]
        // adds elements to self. Returns self.
    remove(element: T) -> Set[[T]]
        // removes element from self. It is an error if element is not present. Returns self.
    remove(element: T) ifAbsent(action: Block0[[Done]]) -> Set[[T]]
        // removes element from self. Executes action if element is not present. Returns self.
    removeAll(elems:Collection[[T]]) -> Set[[T]]
        // removes elems from self. It is an error if any of the elems is not present. Returns self.
    removeAll(elems:Collection[[T]]) ifAbsent(action:Block0[[Done]]) -> Set[[T]]
        // removes elems from self. Executes action if any of elems is not present. Returns self.
    contains(elem:T) -> Boolean
        // true if self contains elem
    includes(predicate: Block1[[T,Boolean]]) -> Boolean
        // true if predicate holds for any of the elements of self
    find(predicate: Block1[[T,Boolean]]) ifNone(notFoundBlock: Block0[[T]]) -> T
        // returns an element of self for which predicate holds, or the result of applying notFoundBlock if there is none.
    copy -> Set[[T]]
        // returns a copy of self
    ** (other:Set[[T]]) -> Set[[T]]
        // set intersection; returns a new set that is the intersection of self and other
    -- (other:Set[[T]]) -> Set[[T]]
        // set difference (relative complement); the result contains all of my elements that are not also in other.
    ++ (other:Set[[T]]) -> Set[[T]]
        // set union; the result contains elements that were in self or in other (or in both).
    into(existing:Collection[[T]]) -> Collection[[T]]
        // adds my elements to existing, and returns existing.
}
The type Dictionary[[K, T]] describes objects that are mappings from keys of type K to values of type T. Like sets and sequences, dictionary objects can be constructed using the class dictionary, but the argument to dictionary must be of type Iterable[[Binding]]. This means that each element of the argument must have methods key and value. Bindings can be conveniently created using the infix :: operator, as in dictionary[[K, T]] [k::v, m::w, n::x, ...].
type Dictionary[[K, T]] = Collection[[T]] & type {
    size -> Number
        // the number of key::value bindings in self
    at(key:K) put(value:T) -> Dictionary[[K, T]]
        // puts value at key; returns self
    at(k:K) -> T
        // returns my value at key k; raises NoSuchObject if there is none.
    at(k:K) ifAbsent(action:Block0[[T]]) -> T
        // returns my value at key k; returns the result of applying action if there is none.
    containsKey(k:K) -> Boolean
        // returns true if one of my keys == k
    contains(v:T) -> Boolean
    containsValue(v:T) -> Boolean
        // both return true if one of my values == v
    removeAllKeys(keys: Iterable[[K]]) -> Dictionary[[K, T]]
        // removes all of the keys from self, along with the corresponding values. Returns self.
    removeKey(key: K) -> Dictionary[[K, T]]
        // removes key from self, along with the corresponding value. Returns self.
    removeAllValues(removals: Iterable[[T]]) -> Dictionary[[K, T]]
        // removes from self all of the values in removals, along with the corresponding keys. Returns self.
    removeValue(removal:T) -> Dictionary[[K, T]]
        // removes from self the value removal, along with the corresponding key. Returns self.
    keys -> Iterable[[K]]
        // returns my keys as a lazy sequence in arbitrary order
    values -> Iterable[[T]]
        // returns my values as a lazy sequence in arbitrary order
    bindings -> Iterable[[ Binding[[K, T]] ]]
        // returns my bindings as a lazy sequence
    keysAndValuesDo(action:Block2[[K, T, Object]]) -> Done
        // applies action, in arbitrary order, to each of my keys and the corresponding value.
    keysDo(action:Block1[[K, Object]]) -> Done
        // applies action, in arbitrary order, to each of my keys.
    valuesDo(action:Block1[[T, Object]]) -> Done
    do(action:Block1[[T, Object]]) -> Done
        // both apply action, in arbitrary order, to each of my values.
    copy -> Dictionary[[K, T]]
        // returns a new dictionary that is a (shallow) copy of self
    asDictionary -> Dictionary[[K, T]]
        // returns self
    ++ (other:Dictionary[[K, T]]) -> Dictionary[[K, T]]
        // returns a new dictionary that merges the entries from self and other.
        // A value in other at key k overrides the value in self at key k.
    -- (other:Dictionary[[K, T]]) -> Dictionary[[K, T]]
        // returns a new dictionary that contains all of my entries except for those whose keys are in other
}
Collections that implement the type Iterable[[T]] (defined in Section [type:Iterable]) implement the internal and external iterator patterns, which provide for iteration through a collection of elements of type T, one element at a time. The method do() and its variant do()separatedBy() implement internal iterators, and iterator returns an external iterator object, with the following interface:
type Iterator[[T]] = type {
    hasNext -> Boolean
        // true if I have more elements
    next -> T
        // returns my next element; raises the Exhausted exception if there is none.
}
Iterable objects are provided by standard Grace. The method for()do() takes two arguments, an Iterable and a block of one parameter, and applies the block to each element of the Iterable in turn. The following example uses external iterators to merge two sorted Iterables into a sorted list:
method merge (cs) and (ds) -> List {
    // set-up and loop header reconstructed (assumed); they were missing in this copy
    def result = list [ ]
    def cIter = cs.iterator
    def dIter = ds.iterator
    var c := cIter.next
    var d := dIter.next
    while {cIter.hasNext && dIter.hasNext} do {
        if (c ≤ d) then {
            result.addLast(c)
            c := cIter.next
        } else {
            result.addLast(d)
            d := dIter.next
        }
    }
    if (c ≤ d) then {
        result.addLast(c, d)
    } else {
        result.addLast(d, c)
    }
    while {cIter.hasNext} do { result.addLast(cIter.next) }
    while {dIter.hasNext} do { result.addLast(dIter.next) }
    result
}
A fixed-size array type (whose name is missing from this copy) provides the following interface:
[[T]] = {
    size -> Number
        // return the number of elements in self
    at(index: Number) -> T
        // both of the above return the element of array at index
    at(index: Number) put (newValue: T) -> Done
        // update element of list at given index to newValue
    sortInitial(n:Number) by(sortBlock:block2[[T, T, Number]]) -> Boolean
        // sorts elements 0..n. The ordering is determined by sortBlock, which should return -1
        // if its first argument is less than its second argument, 0 if they are equal, and +1 otherwise.
    iterator -> Iterator[[T]]
        // returns an iterator through the elements of self. It is an error to modify the array while
        // iterating through it.
}
The math module object can be imported using import "math" as m, for any identifier of your choice, e.g. m. The object m responds to the following methods.
sin(θ: Number) -> Number     // trigonometric sine (θ in radians)
cos(θ: Number) -> Number     // cosine (θ in radians)
tan(θ: Number) -> Number     // tangent (θ in radians)
asin(r: Number) -> Number    // arcsine (result in radians)
acos(r: Number) -> Number    // arccosine (result in radians)
atan(r: Number) -> Number    // arctangent (result in radians)
pi -> Number
π -> Number                  // 3.14159265...
abs(r: Number) -> Number     // absolute value
lg(n: Number) -> Number      // log base 2 of n
ln(n: Number) -> Number      // natural logarithm (base e) of n
exp(n: Number) -> Number     // e raised to the power n
log10(n: Number) -> Number   // log base 10 of n
The random module object can be imported using import "random" as rand, for any identifier of your choice, e.g. rand. The object rand responds to the following methods.
between0And1 -> Number
    // a pseudo-random number in the interval [0..1)
between (m: Number) and (n: Number) -> Number
    // a pseudo-random number in the interval [m..n)
integerIn (m: Number) to (n: Number) -> Number
    // a pseudo-random integer in the interval [m..n]
The sys module object can be imported using import "sys" as system, for any identifier of your choice, e.g. system. The object system responds to the following methods.
type Environment = type {
    at(key:String) -> String
    at(key:String) put(value:String) -> Boolean
    contains(key:String) -> Boolean
}
argv -> Sequence[[String]]
    // the command-line arguments to this program
elapsedTime -> Number
    // the time in seconds, since an arbitrary epoch. Take the difference of two elapsedTime
    // values to measure a duration.
exit(exitCode:Number) -> Done
    // terminates the whole program, with exitCode.
execPath -> String
    // the directory in which the currently-running executable was found.
environ -> Environment
    // the current environment.
|
http://gracelang.org/documents/grace-prelude-0.7.0.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Today I needed to test a method which writes to the Console and to validate the output. It is not hard to change the default console output and check the result; however, you may forget to restore the original output at the end. So let's take a look at my solution.
Let's say we have the following class we want to test:
using System;

namespace ConsoleLogger
{
    public class DummyClass
    {
        public void WriteToConsole(string text)
        {
            Console.Write(text);
        }
    }
}
I have created a small helper class to redirect the output to a StringWriter:
using System;
using System.IO;

namespace ConsoleLogger.Tests
{
    public class ConsoleOutput : IDisposable
    {
        private StringWriter stringWriter;
        private TextWriter originalOutput;

        public ConsoleOutput()
        {
            stringWriter = new StringWriter();
            originalOutput = Console.Out;
            Console.SetOut(stringWriter);
        }

        public string GetOuput()
        {
            return stringWriter.ToString();
        }

        public void Dispose()
        {
            Console.SetOut(originalOutput);
            stringWriter.Dispose();
        }
    }
}
Now let's write the unit test:
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace ConsoleLogger.Tests
{
    [TestClass]
    public class DummyClassTest
    {
        [TestMethod]
        public void WriteToConsoleTest()
        {
            var currentConsoleOut = Console.Out;

            DummyClass target = new DummyClass();
            string text = "Hello";

            using (var consoleOutput = new ConsoleOutput())
            {
                target.WriteToConsole(text);
                Assert.AreEqual(text, consoleOutput.GetOuput());
            }

            Assert.AreEqual(currentConsoleOut, Console.Out);
        }
    }
}
This way we are sure that the original output will be restored, and it's easy to get the output from the console.
You can find the sample here ConsoleLogger.zip.
|
http://www.vtrifonov.com/2012/11/getting-console-output-within-unit-test.html
|
CC-MAIN-2019-13
|
en
|
refinedweb
|
Components and supplies
Apps and online services
About this project
Note: Pulse width modulation (PWM) is an important component of servo operation. Please read this article if you're unfamiliar with PWM before proceeding.
Introduction to Servos
The term servo originates from servomechanism, which refers to any system that uses negative feedback to correct its position or speed. A negative feedback system feeds its output into the input in order to provide error correction or speed limitation. The first example of a mechanical negative feedback system was an 18th-century steam engine governor, which used the engine speed as an input in a mechanical system to cap the engine speed at a safe rate. A cruise control system is a modern example of a servomechanism because the speed of the car influences the throttle position, which is adjusted to maintain the desired speed.
Servomotors are a subcategory of servomechanisms, and are usually what people are referring to when they say servo. Servomotors use digital encoders or potentiometers to determine the position of the drive shaft. They are used in a wide array of industrial and personal applications such as valve control, robotic parts, and actuators for hobby projects.
This write-up will be focused on electronic servos used in remote control applications, since these are most commonly used in hobbyist projects. R/C servo operational principles will be discussed, followed by a demonstration of how to characterize a servo using an OpenScope device.
Feedback System
R/C servos, like their industrial counterparts, have a feedback system that allows for precise position control. If something tries to move the DC motor from a particular position, say a rudder encountering air resistance on an R/C airplane, the servo will apply additional force to recover and maintain the position.
The error correction system is comprised of a potentiometer, which is attached to the motor's output shaft, and an error amplifier. The error amplifier has two input signals that it is comparing: the current position of the motor, provided by the potentiometer, and the desired position of the motor, provided by the controller signal. When the signals don't match, the output from the amplifier drives the motor to make the inputs match.
In order to change the position of the servo, the error amplifier needs to have a variable analog voltage applied to the controller pin. Generating a precise analog voltage requires integrated circuits that can be quite expensive, so many microcontrollers don't have this ability, and instead rely on PWM to simulate analog voltages.
As the article about PWM (linked above) discusses, a capacitor receiving a PWM signal will tend to average out the signal voltage. For example, a 50% duty cycle with a 5V peak will cause the capacitor to hold a voltage equal to approximately half the signal's max, or 2.5V. By varying the amount of time the PWM signal is high, the capacitor charge can be varied, thus creating an analog voltage. The screenshots below demonstrate this concept with a PWM signal being fed into a resistor-capacitor circuit.
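In other words, the capacitor settles near the duty cycle multiplied by the peak voltage: V_avg ≈ D × V_peak. For the example above, 0.5 × 5 V ≈ 2.5 V; a 20% duty cycle with the same peak would give roughly 1 V.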
Servos use a similar concept to set the position of the motor. As the charge on the capacitor varies with the amount of time it receives a high signal, the servo's error amplifier will see a difference between the potentiometer and the capacitor. The output will then cause the motor to move to a position that will match the potentiometer's voltage to the capacitor's voltage. Of course, a lot more hardware goes into making the servo operate correctly, but these are the basic concepts.
Most servos have a range of motion limited to 180°, but some are designed for a smaller or larger range, or are able to rotate continuously (in which case the speed, not the position, is controlled by the pulse width).
R/C servos vary by design, but a pulse width of 1.5ms will cause the servo to move to the neutral position, halfway through its range of motion, or stop in the case of continuously rotating units. The pulse widths needed to move the servo to the two extremes (or the maximum forward/backward speed) depend on the servo itself, but all R/C servos have the same neutral position pulse width (1.5ms).
Arduino Servo Library
Arduino comes with a library designed for operating servos. While controlling a servo directly via PWM is entirely possible, it's much easier to use the Servo library. Including "Servo.h" at the beginning of the code is all that's needed to start using the library.
It might be necessary to change the control signal pulse width limits to use the servo's full range of motion. By default, these limits are set to 544 and 2,400 microseconds. The servo used in this demonstration required the limits to be set to 550 and 2,650 microseconds to achieve 180° of motion.
Servo Characterization with OpenScope
OpenScope along with WaveForms Live can be used to characterize the servo and find its pulse width limits.
A potentiometer is used to select the servo position in this demonstration, as seen in the schematic below. Use the code attached at the end to operate the servo.
From the OpenScope, attach the orange oscilloscope cable to PWM pin 9, and the blue oscilloscope cable to the analog input pin A0. The first cable will monitor the signals being sent to the servo, and the second cable will monitor the voltage level in the potentiometer used to set the servo position. Ensure that the two oscilloscope ground cables are attached to the Arduino's ground.
Initially, move the potentiometer from one extreme to the other to see if the servo is able to achieve its full range of motion. If it isn't, the limits are too tight and need to be expanded. Even if the servo is able to achieve its full range of motion it is necessary to check that the limits are not set too far, which would cause a dead zone in the input (something that might be undesirable if the servo is being used as a gauge, for example).
Connect the OpenScope to WaveForms Live and perform the following steps to configure the display to show information relevant to the characterization:
- At the top of the screen, in the Time option type "1ms".
- Under the Trigger section click the RisingEdgeTrigger button (hover the mouse over each button to see its name). For Source select "Osc Ch 1" and leave Level at "500mV".
- Under both the OscCh1 and the OscCh2 sections, change Volts to "2V". Leave all the other settings at their defaults.
- At the bottom of the WaveForms Live screen is a button labeled CURSORS. Press it and in the dialog select the following: under Type select "Time", under Cursor1Channel select "Osc 1", and under Cursor2Channel select "Osc 2". An orange and blue cursor should appear on the display. Use the arrows at the top of the cursors to drag them around. Place the blue cursor on the left side of the green line, and leave the yellow one alone for now.
In the setup() function of the code, the attach() call sets pin 9 as the servo control pin, while its second and third arguments set the pulse width limits in microseconds. The time values will need to be changed if the default ones do not allow full range of motion or are causing dead zones. For now, leave the values as they are (544 and 2400) and upload the code to the Arduino Uno.
Press the Run button in WaveForms Live, and set the potentiometer back to one of its extremes. At this point the orange line (oscilloscope Channel 1) in WaveForms Live should be showing up as a square pulse, and the blue line (Channel 2) should either be at a constant 0V or 5V, depending on which direction the potentiometer was connected. The value of Channel 2 can be seen at the bottom of the screen, after the "2:...". If the value is near zero (or a few hundred uV), set the potentiometer to the other extreme so that the value is near 5V. At this point, the WaveForms Live screen should look similar to the one below.
If the full range of motion is not available, it's likely a result of the larger pulse width value being too small. The shorter pulse width is probably within an acceptable range, as making the pulse width much shorter caused the demonstration servo to behave erratically instead of extending the rotation range. Set the third value in the attach() function to something large, like 2800, and after uploading the code, check to see if the full range is being used. Otherwise, try increasing the value again.
Once the full range of motion has been established, it's necessary to trim down the pulse width to remove any dead zone that might exist at the extreme positions of the potentiometer. To detect the dead zone, place the potentiometer in the position that causes Channel 2 to show 5V and keep WaveForms Live in Run mode. Hold the servo between two fingers, and slowly turn the knob back on the potentiometer. As soon as vibrations from the servo are felt, stop turning the knob. This is the DC motor in the servo engaging, which means it is responding to changes in the pulse width. Ensure that the servo is right on the cusp of engaging by turning the knob slightly up and down to find the exact spot (this doesn't have to be perfect, but closer is better).
Alternately, it's possible to detect servo engagement by watching the Channel 2 signal for a dip immediately following the signal pulse end on Channel 1. This is caused by the DC motor drawing current and slightly pulling down the 5V supply voltage. See the image below to see the dip.
In WaveForms Live, press Stop and drag the Channel 1 cursor so that it intersects with the orange line right at the point where the pulse drops to 0. Then look at the value at the bottom of the screen, after the "1:...". The voltage should be in the milli- or micro-volt range. The blue line should be slightly down from 5V, unless the servo responded immediately to the knob rotation, in which case the pulse width is correct. See the image below for clarification.
In the cursor measurements at the bottom of the screen, the time value expressed in milliseconds is the maximum pulse width the servo will respond to. Use this time in the attach() function as the third value. Note that the attach() function accepts pulse width values in microseconds, while the value in WaveForms Live will be in milliseconds, so the value will need to be multiplied by 1000 before placing it in the code.
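For example, a cursor reading of 2.65 ms corresponds to 2.65 × 1000 = 2650 µs, which matches the 2,650 microsecond limit mentioned earlier for the demonstration servo.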
To check that the pulse width is correct, upload the code with the updated value, and see if the servo responds immediately to the knob being moved. Also ensure the servo still has its full range of motion.
The servo code is now ready to be integrated into a larger project, and will allow the full range of the servo to be utilized without any dead zones.
Code
Servo Characterization (Arduino)
#include <Servo.h>

// create a servo object
Servo servo;

// analog input from the potentiometer
int potPos = A0;

void setup() {
    // link the servo to pin 9, and set the pulse width limits (544 and 2400 microseconds in this case)
    servo.attach(9, 544, 2400);

    // set the analog pin as an input
    pinMode(potPos, INPUT);
}

void loop() {
    // store the potentiometer position as a float
    float level = analogRead(potPos);

    // calculate analog data as a voltage
    float voltage = 5 * level / 1024;

    // make sure the voltage isn't outside the acceptable range
    if (voltage < 0) {
        voltage = 0;
    }
    if (voltage > 5) {
        voltage = 5;
    }

    // scale voltage (0-5 V) to 180 degrees
    servo.write(36 * voltage);

    // give the servo time to move to new position
    delay(15);
}
Team members
Boris Leonov
- 9 projects
- 12 followers
Sam Kristoff
- 9 projects
- 25 followers
Arthur Brown
- 9 projects
- 11 followers
Published on July 12, 2018
|
https://create.arduino.cc/projecthub/104085/servo-signals-and-characterization-dad271
|
CC-MAIN-2019-13
|
en
|
refinedweb
|
Details
Description
Currently the listener registered on Controls is string based.
While simple to create, it is harder to maintain because refactoring cannot be applied when renaming the method.
An alternative to the string-based listener is to introduce an ActionListener object that encapsulates the callback, and a method, AbstractControl.setListener(ActionListener), which registers the listener on the control.
public interface ActionListener {
    public boolean onAction(Control control);
}
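For illustration, registration with such a listener might look like the following sketch (the control name, the handler body, and the exact string-based signature shown for comparison are assumptions, not part of this issue):
// Hypothetical object-based registration using the proposed API.
submitButton.setListener(new ActionListener() {
    public boolean onAction(Control source) {
        // handle the action; return true to continue processing the request
        return true;
    }
});

// Roughly equivalent string-based registration, which refactoring tools cannot track.
submitButton.setListener(this, "onSubmitClick");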
|
https://issues.apache.org/jira/browse/CLK-369
|
CC-MAIN-2019-13
|
en
|
refinedweb
|
Map of Players:
Map<String, Player> players = new HashMap<>();
players.put("1", new Player(false, "Rafael Nadal", 28, new Date()));
players.put("2", new Player(true, "Novak Djokovic", 27, new Date()));
players.put("3", new Player(true, "Andy Murray", 27, new Date()));
Now we can encode the players map into a JSON object:
import org.omnifaces.util.Json;
...
String jsonPlayers = Json.encode(players);
JSON-encoded representation of the given object will be:
{"1":{"age":28,"birthdate":"Thu, 23 Apr 2015 13:09:02 GMT","name":"Rafael Nadal","righthanded":false},"2":{"age":27,"birthdate":"Thu, 23 Apr 2015 13:09:02 GMT","name":"Novak Djokovic","righthanded":true},"3":{"age":27,"birthdate":"Thu, 23 Apr 2015 13:09:02 GMT","name":"Andy Murray","righthanded":true}}
Or, nicely formatted:
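{
  "1": {
    "age": 28,
    "birthdate": "Thu, 23 Apr 2015 13:09:02 GMT",
    "name": "Rafael Nadal",
    "righthanded": false
  },
  "2": {
    "age": 27,
    "birthdate": "Thu, 23 Apr 2015 13:09:02 GMT",
    "name": "Novak Djokovic",
    "righthanded": true
  },
  "3": {
    "age": 27,
    "birthdate": "Thu, 23 Apr 2015 13:09:02 GMT",
    "name": "Andy Murray",
    "righthanded": true
  }
}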
Note: Behind the scenes, Json#encode() will use:
· Json#encodeMap() - since players is a Map
· Json#encodeBean() - since players contains instances of the Player bean
· the supported Java standard types - since Player bean contains properties of Java standard types
|
http://omnifaces-fans.blogspot.com/2015/04/omnifaces-utilities-20-encode-given-map.html
|
CC-MAIN-2019-13
|
en
|
refinedweb
|
The mocking examples below will provide mocks for this interface:
public interface Filter<T> {
    public boolean evaluate(T item);
    public boolean isEnabled();
}
Mocking in Groovy by map coercion and Expando
Groovy provides the ability to coerce a map. The following provides a filter where both method always return true:
def filter = [ evaluate : {true}, isEnabled: {true}] as Filter
Groovy’s Expando allows methods to seemingly be added and provides another way of mocking:
def filter = new Expando()
filter.evaluate = {true}
filter.isEnabled = {true}
Compare this to using the Mockito mocking framework:
Filter<? super Object> filter = Mockito.mock(Filter.class);
Mockito.when(filter.evaluate(Mockito.any(Object.class))).thenReturn(true);
Mockito.when(filter.isEnabled()).thenReturn(true);
You can clearly see that Groovy allows us to write the mocks in a much more concise manner!
Mocking in Groovy using MockFor and StubFor
Groovy also provide MockFor and StubFor for mocking objects. In the following code, the filter is mocked to pass only the value of 2:
// Using MockFor:
MockFor filterMock = new MockFor(Filter)
filterMock.ignore.isEnabled {true}
filterMock.ignore.evaluate {it == 2}

// Mocked instance from MockFor
def mockForFilterInstance = filterMock.proxyDelegateInstance()

// Using StubFor:
StubFor filterStub = new StubFor(Filter)
filterStub.ignore.evaluate {it == 2}
filterStub.ignore.isEnabled {true}

// Mocked instance from StubFor
def stubForFilterInstance = filterStub.proxyDelegateInstance()
Verifying Method Invocations
MockFor and StubFor can also verify that methods are invoked. Methods whose invocation needs to be verified are added to “demand”, and others can be added to “ignore”. For example, the following will verify that isEnabled was invoked on the mocked object (filter):
MockFor filterMock = new MockFor(Filter)

// Method invocations that need to be verified should be added to demand.
// For example, this will make "verify" check that "isEnabled" was invoked.
filterMock.demand.isEnabled {true}

// Use ignore if the invocation does NOT need to be verified. When
// "verify" is called later on the mock, invocation of this method
// will NOT be checked.
filterMock.ignore.evaluate {it == 2}

def filter = filterMock.proxyDelegateInstance()

// This is the expected invocation.
filter.isEnabled()

filterMock.verify(filter)
Mocks created using MockFor will expect the methods to be invoked in the order that they were mocked in.
On the other hand, with StubFor the order does not matter. A range can also be provided to indicate the number of times that the method is expected to be invoked (thanks to Steinar for pointing this out to me):
// Expects isEnabled to be called exactly 2 times.
mockContext.demand.isEnabled(2) {true}

// Expects isEnabled to never be called.
mockContext.demand.isEnabled(0) {true}

// Expects to be called at least 2 times.
mockContext.demand.isEnabled(2..Integer.MAX_VALUE) {true}

// Expects to be called at most 2 times.
mockContext.demand.isEnabled(0..2) {true}

// Expects isEnabled to be called between 2 and 4 times.
mockContext.demand.isEnabled(2..4) {true}
In comparison, using Mockito:
Filter<? super Object> filter = mock(Filter.class);

// Expects isEnabled to be called exactly 2 times.
Mockito.verify(filter, Mockito.times(2)).isEnabled();

// Expects isEnabled to never be called.
Mockito.verify(filter, Mockito.never()).isEnabled();

// Expects isEnabled to be called at least 2 times.
Mockito.verify(filter, Mockito.atLeast(2)).isEnabled();

// Expects isEnabled to be called at most 2 times.
Mockito.verify(filter, Mockito.atMost(2)).isEnabled();

// Expects isEnabled to be called between 2 and 4 times.
Mockito.verify(filter, Mockito.atLeast(2)).isEnabled();
Mockito.verify(filter, Mockito.atMost(4)).isEnabled();
One area where I feel Mockito is better is verifying that the methods were invoked with the correct arguments, especially in a case where the method may be invoked multiple times with different arguments:
// Expect evaluate to be called with argument of 2 twice and
// argument of 3, 3 times (in any order).
Mockito.verify(filter, Mockito.times(2)).evaluate(2);
Mockito.verify(filter, Mockito.times(3)).evaluate(3);
One way of doing this in Groovy is to store the count in a map and then asserting on the count:
// This makes a map that provides a default value of 0
// for keys that are not set.
def argCount = [:].withDefault{0}

def filter = [evaluate: {
    argCount[it] += 1
    return true
}] as Filter

// After the code under test has invoked filter.evaluate(...),
// the per-argument counts can be asserted:
assert argCount == [2:2, 3:3]
In this case, the Mockito method has allowed us to express that the test is only interested in the method being called the right number of times with the right arguments. Fortunately, Groovy seamlessly integrates with Java and allows us to use the best of both worlds!
|
https://kahdev.wordpress.com/2013/02/14/mocking-java-classes-in-groovy-vs-mockito/
|
CC-MAIN-2019-13
|
en
|
refinedweb
|
You can build, deploy, and manage your OpenShift Java applications right from within the Eclipse IDE using the JBoss Tools OpenShift plugin. This blog will guide you through installation, setup, application creation, and managing your application from within Eclipse. In this post, we will develop a Java EE 6 PostgreSQL 9.2 application and deploy it on the JBoss EAP 6 (JBoss Enterprise Application Platform 6) application server running on OpenShift.
OpenShift has best-in-class Java support, with Tomcat 6, Tomcat 7, JBoss AS7, and JBoss EAP 6 servers bundled with it. You can also run Jetty or GlassFish servers on it using DIY cartridges. OpenShift also provides support for the Jenkins continuous integration server.
In this blog, we will use four Java EE 6 specifications — JPA , Bean Validation , CDI , and JAX-RS to build a todo application.
Prerequisite
- Basic Java knowledge is required.
Step 1 : Install JBoss Developer Tools, and you will get a list of plugins which you can install. As the purpose of this blog is to demonstrate OpenShift, we will only choose “JBoss OpenShift Tools” as shown below. After selecting “JBoss OpenShift Tools” press the “Confirm” button.
Next you will be asked to accept the license. Click “I accept the terms of the license agreement” radio button and press the Finish button as shown below.
Eclipse will next show security warning as plugin is unsigned. Press OK button and you will be asked to restart the Eclipse so that changes can be applied. Press Yes to restart Eclipse.
Step 2 : Create SSH Keys
OpenShift requires SSH for :
- Performing Git operations.
- Remote access your application gear.
So we need to create a RSA key to deploy the work with OpenShift. Eclipse makes it very easy to create RSA keys. To create keys, follow the steps mentioned below.
- Access the menu: Window> Preferences.
- With the preferences window still open, go to: General> Network Connection> SSH2
- Click on Tab Key Management and then the button Generate RSA Key …
- Copy the code key
- Finally, click Save Private Key and then OK, as shown in the image below
Step 3: Upload the Generated SSH key to OpenShift
After creating the SSH keys in the previous step, we need to upload the public key to OpenShift. Go to the Settings page of the OpenShift web console and add a new SSH key as shown below. You can find the public key in the .ssh folder under your user home directory; the file will be named id_rsa.pub. You can add multiple keys, for example one for your office and one for your home.
Step 4 : Create OpenShift Application
After we have uploaded the SSH keys to the OpenShift account, we are ready to create OpenShift applications using JBoss Tools OpenShift support. In Eclipse, click File > New > Other > OpenShift Application and then click Next, as shown below.
After pressing the ‘Next’ button, you will be asked to provide your OpenShift account credentials. If you have not signed up for OpenShift account, you can click the sign up here link on the wizard to create your OpenShift account. Enter your OpenShift account username and password. Check the ‘Save password’ checkbox so that you don’t have to enter password with every command and click ‘Next’.
Next you will be asked to create an OpenShift domain name. Every account needs to have one domain name, which should be unique among all OpenShift users. One account can have only one domain name. The domain name forms part of the URL that OpenShift assigns to an application. For example, if your application name is awesomeapp and namespace is onopenshiftcloud, then the URL of the application will be http://awesomeapp-onopenshiftcloud.rhcloud.com. Enter your unique domain name and press Finish.
After the domain is created, you will be directed to the application creation wizard. You will be asked to enter the details required to create an application, such as the application name, the jbosseap-6 application type, and the postgresql-9.2 cartridge.
Next you will be asked to set up todoapp and configure server adapter settings. Choose the default and click next as shown below.
The next screen will ask you to specify the location where you want to clone the git repository and name of the git remote. This is shown below.
Finally, press the finish button and you are done. This will create an application container for us, called a gear, and setup all of the required SELinux policies and cgroup configuration. OpenShift will install the PostgreSQL 9.2 cartridge on the application gear and JBoss tools OpenShift plugin will show an information box with details as shown below.
OpenShift will also setup a private git repository for you and clone the repository to your local system. Next, OpenShift will propagate the DNS to the outside world. Finally, the project will be imported in your eclipse workspace as shown below.
You can view the application running online by going to the following URL: http://todoapp-{domain-name}.rhcloud.com. Please replace {domain-name} with your OpenShift account domain name.
Step 5 : Look at Generated Code
Now we will take a look at the template project created by OpenShift. Below is the listing of all the content in todoapp project created by OpenShift.
$ cd todoapp
$ ls -la
drwxr-xr-x 11 shekhargulati staff  374 Aug 28 14:02 .git
-rw-r--r--  1 shekhargulati staff   69 Aug 28 14:02 .gitignore
drwxr-xr-x  6 shekhargulati staff  204 Aug 28 14:02 .openshift
-rw-r--r--  1 shekhargulati staff  179 Aug 28 14:02 README.md
drwxr-xr-x  3 shekhargulati staff  102 Aug 28 14:02 deployments
-rwxr--r--  1 shekhargulati staff 2152 Aug 28 14:02 pom.xml
drwxr-xr-x  3 shekhargulati staff  102 Aug 28 14:02 src
Lets take a look them :
1 .git is your local git repository, which contains all the information required by git. Git does not create a .git folder in every sub-directory the way svn does. You can refer to this article for more information about what’s in the .git directory.
2 .gitignore is a file used to specify which all files or file types you want to ignore so that they are not committed to the git repository. This can include files like .classpath, .project, etc.
3 .openshift is an OpenShift specific folder which exists in every OpenShift application. This folder has four sub-directories — action_hooks,config, cron, and markers as shown below.
$ ls -la .openshift/
drwxr-xr-x 3 shekhargulati staff 102 Aug 28 14:02 action_hooks
drwxr-xr-x 4 shekhargulati staff 136 Aug 28 14:02 config
drwxr-xr-x 8 shekhargulati staff 272 Aug 28 14:02 cron
drwxr-xr-x 3 shekhargulati staff 102 Aug 28 14:02 markers
The action_hooks folder hosts scripts which developer can use to hook into application life cycle and do things like create a database before starting the application, expose an environment variable, or install a Maven dependency etc. The config folder contains a JBoss configuration file called standalone.xml and modules folder to put your own jars and property files in the classpath. The cron folder contains files which can be used to run cron jobs on application gear. The cron functionality is only enabled when you embed Cron cartridge in to the application. The markers folder is used to define marker files. Currently you can have the following files in the markers folder. Adding marker files with the following names to this directory will have the following effects:
- enable_jpda – Will enable the JPDA socket based transport on the java virtual machine running the JBoss AS 7 application server. This enables you to remotely debug code running inside the JBoss AS 7 application server.
- skip_maven_build – Maven build step will be skipped
- force_clean_build – Will start the build process by removing all non-essential Maven dependencies. Any current dependencies specified in your pom.xml file will then be re-downloaded.
- hot_deploy – Will prevent a JBoss container restart during build/deployment. Newly built archives will be re-deployed automatically by the JBoss HDScanner component.
- java7 – Will run JBoss EAP with Java7 if present. If no marker is present then the baseline Java version will be used (currently Java6)
4 README.md file contains information about the project layout. You can over-write this file with useful information about your project.
5 deployments folder is for binary deployment i.e. war or ear file instead of source deployment. So, if you want to deploy war rather than pushing source code, then add war or ear file in this folder, add it to git repository, commit it, and then finally push the war file.
6 pom.xml is a standard Maven project object model file. The pom.xml included in the template project includes the Java EE 6 dependencies. This makes writing Java EE 6 applications very easy. The most important thing specified in pom.xml is a Maven profile named “openshift”. This is the profile which is invoked when you deploy the code to OpenShift.
<profiles> <profile> <id>openshift</id> <build> <finalName>todoapp</finalName> <plugins> <plugin> <artifactId>maven-war-plugin</artifactId> <version>2.1.1</version> <configuration> <outputDirectory>deployments</outputDirectory> <warName>ROOT</warName> </configuration> </plugin> </plugins> </build> </profile> </profiles>
7 src folder is where you will write your application source code. It follows the same convention as followed by every maven project.
Step 6: Make your First Change
Let’s make our first change to the application in order to understand development workflow with with OpenShift. Open index.html and change heading as shown below.
<h1> Welcome to TodoApp -- Your Online Todo Manager </h1>
Go to servers view , then right click on todoapp OpenShift server, and click Publish as shown below.
When we clicked Publish, it performed a couple of different tasks. First, it committed the changes to the local git repository and then pushed the changes to the todoapp application gear. After the changes have been pushed, OpenShift invokes the Maven build by executing the “mvn -e clean package -Popenshift -DskipTests” command. This builds a war file named ROOT and deploys the WAR on JBoss EAP 6. You can view the change by opening a browser and visiting http://todoapp-{domain-name}.rhcloud.com. Please replace {domain-name} with your OpenShift account domain name.
The disadvantage of using Publish is that you will not be able to give your commit messages. It will use “Commit from JBoss Tools” as commit message. It is a quick way to test your changes.
Step 7 : View Logs
Logs are very useful for debugging errors in case something goes wrong on the server. OpenShift Eclipse tooling makes it very easy to tail the log files. In the server view, right click on the todoapp server , and then go to OpenShift Tail files.
You can view the logs in console view.
Step 8 : Define Todo Model class
We will start developing our application by creating the domain model. The application will have a single entity called Todo. Create a new package (File > New > Other > Package) named com.todoapp.domain under the src/main/java source folder. Next, create a new class (File > New > Other > Class) called Todo in the com.todoapp.domain package. Each entity class is annotated with @Entity and needs a no-argument constructor and an identity field, as shown below.
package com.todoapp.domain; import java.util.Date; import java.util.List; import javax.persistence.CollectionTable; import javax.persistence.Column; import javax.persistence.ElementCollection; import javax.persistence.Entity; import javax.persistence.FetchType; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.JoinColumn; import javax.validation.constraints.NotNull; import javax.validation.constraints.Size; @Entity public class Todo { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @NotNull @Size(min = 10, max = 40) private String todo; @ElementCollection(fetch=FetchType.EAGER) @CollectionTable(name = "Tags", joinColumns = @JoinColumn(name = "todo_id")) @Column(name = "tag") @NotNull private List<String> tags; @NotNull private Date createdOn = new Date(); public Todo(String todo) { this.todo = todo; } public Todo() { } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getTodo() { return todo; } public void setTodo(String todo) { this.todo = todo; } public Date getCreatedOn() { return createdOn; } public void setCreatedOn(Date createdOn) { this.createdOn = createdOn; } public void setTags(List<String> tags) { this.tags = tags; } public List<String> getTags() { return tags; } @Override public String toString() { return "Todo [id=" + id + ", todo=" + todo + ", tags=" + tags + ", createdOn=" + createdOn + "]"; } }
The code shown above:
- The entity class shown above has a public no-arg constructor.
- The Todo class is annotated with the @Entity annotation and has four instance variables. The identity is defined by the id field, which is annotated with @Id.
- The @NotNull and @Size are bean validation annotation. They make sure that values are not null and size of todo field is between 10 and 40 characters.
- The @ElementCollection annotation signifies that the tags field is stored in a separate table. This annotation defines a collection of instances of a basic type or embeddable class.
Step 9 : Create persistence.xml file
In JPA, entities are managed within a persistence context. Within the persistence context, entities are managed by an entity manager. In Java EE, the entity manager is managed by the Java EE container. The configuration of the entity manager is defined in an XML file called persistence.xml.
So we have to create persistence.xml file. The persistence.xml file is a standard configuration file in JPA. It needs to be included in the META-INF directory inside the JAR file that contains the entity beans. The persistence.xml file must define a persistence-unit with a unique name. Create a META-INF folder under src/main/resources and then create a persistence.xml file as shown below.
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="todos" transaction-type="JTA">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <!-- The JNDI name below is assumed; use the data source configured for your gear's PostgreSQL cartridge. -->
        <jta-data-source>java:jboss/datasources/PostgreSQLDS</jta-data-source>
        <class>com.todoapp.domain.Todo</class>
        <properties>
            <property name="hibernate.show_sql" value="true" />
            <property name="hibernate.hbm2ddl.auto" value="create" />
        </properties>
    </persistence-unit>
</persistence>
In the xml shown above :
- The name of persistence unit is todos.
- The transaction type attribute is JTA. This means JTA datasource will be used. The JTA datasource will be defined using jta-data-source element.
- The provider element specifies the name of persistence provider. We will be using hibernate as persistence provider.
- The class element specifies the entity class to be managed. To specify multiple entity classes you can use multiple class elements.
- The properties element is used to specify standard JPA and vendor specific properties.
Step 10 : Create TodoService
Next we will create a TodoService class in com.todoapp.service package which will perform CRUD operation using EntityManager as shown below.
package com.todoapp.service;

import java.util.List;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import com.todoapp.domain.Todo;

@Stateless
public class TodoService {

    @PersistenceContext
    private EntityManager entityManager;

    public Todo create(Todo todo) {
        entityManager.persist(todo);
        return todo;
    }

    public Todo find(Long id) {
        Todo todo = entityManager.find(Todo.class, id);
        return todo;
    }
}
In the code shown above :
- The TodoService class is a Stateless session bean as it defines @Stateless annotation. Stateless session bean does not contain any conversational state.
- The container managed EntityManager is injected into the service class using @PersistenceContext annotation.
- A new entity is persisted in the database using EntityManager persist method. The entity is persisted into database at transaction commit.
- The EntityManager find method is used to find the entity by id.
Step 11 : Enable CDI
CDI or Context and Dependency injection is a Java EE 6 specification which enables dependency injection in a Java EE 6 project. CDI defines type-safe dependency injection mechanism in Java EE. Almost any POJO can be injected as a CDI bean.
To enable CDI in your project, create a beans.xml file in src/main/webapp/WEB-INF folder.
<beans xmlns="" xmlns: </beans>
Step 12 : Expose RESTful web service
JAX-RS defines annotation-driven API for writing RESTful services.
Before exposing a RESTful web service for the Todo entity, we have to activate JAX-RS in our application. To enable JAX-RS, create a class which extends javax.ws.rs.core.Application and specify the application path using the javax.ws.rs.ApplicationPath annotation, as shown below.
package com.todoapp.rest;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("/rest")
public class JaxRsActivator extends Application {
    /* class body intentionally left blank */
}
Next we will create TodoRestService class which will expose two methods to create and read a Todo object. The service will consume and produce JSON.
package com.todoapp.rest;

import javax.inject.Inject;
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriBuilder;

import com.todoapp.domain.Todo;
import com.todoapp.service.TodoService;

@Path("/todos")
public class TodoRestService {

    @Inject
    private TodoService todoService;

    @POST
    @Consumes("application/json")
    public Response create(Todo entity) {
        todoService.create(entity);
        return Response.created(
                UriBuilder.fromResource(TodoRestService.class)
                        .path(String.valueOf(entity.getId())).build()).build();
    }

    @GET
    @Path("/{id:[0-9][0-9]*}")
    @Produces(MediaType.APPLICATION_JSON)
    public Todo lookupTodoById(@PathParam("id") long id) {
        Todo todo = todoService.find(id);
        if (todo == null) {
            throw new WebApplicationException(Response.Status.NOT_FOUND);
        }
        return todo;
    }
}
In the code shown above :
- We annotated TodoRestService with @Path annotation. TodoRestService is a POJO class and is published at todos path by adding @Path annotation.
- Next we injected TodoService using @Inject annotation. This is enabled by CDI.
- The create method is annotated with the @POST annotation. This method will be invoked when an HTTP POST request is made to the /todos URL. The @Consumes annotation means that this method will consume data in JSON format. The container will automatically unmarshal the JSON into a Todo object.
- The lookupTodoById method is annotated with the @GET annotation. This method will be invoked when an HTTP GET request is made to /todos/{id}, where id can be any number. The method finds the todo item using the TodoService find method and returns the entity. The @Produces annotation makes sure that the container will marshal the Java object into JSON.
Step 13 : Commit and Push changes to OpenShift
Now we need to commit the changes to local git repository and publish changes to application gear. To do that, go to Window > Show View > Other > Git > Git Staging as show below.
You will see a new view called “Git Staging” in the bottom.
Select the todoapp project and you will see unstaged changes for todoapp project.
Select all unstaged changes and drag them to staged changes.
Add a commit message and press “Commit and Push” button.
Finally, we can test our REST web service using curl. To create a todo, execute the curl command shown below.
$ curl -i -X POST -H "Content-Type: application/json" -d '{"todo":"Learning OpenShift","tags":["openshift","cloud","paas"]}' http://todoapp-{domain-name}.rhcloud.com/rest/todos

HTTP/1.1 201 Created
Date: Wed, 23 Jan 2013 14:16:50 GMT
Server: Apache-Coyote/1.1
Location:
Content-Length: 0
Strict-Transport-Security: max-age=15768000, includeSubDomains
To read the Todo entity execute the curl command as shown below
$ curl -i -H "Accept: application/json"-{domain-name}.rhcloud.com/rest/todos/1 HTTP/1.1 200 OK Date: Wed, 23 Jan 2013 14:17:23 GMT Server: Apache-Coyote/1.1 Content-Type: application/json Vary: Accept-Encoding Strict-Transport-Security: max-age=15768000, includeSubDomains Transfer-Encoding: chunked {"id":1,"todo":"Learning OpenShift","tags":["openshift","cloud","paas"],"createdOn":1359541249762}
Conclusion
In this blog we covered how you can use the Eclipse IDE and the OpenShift JBoss tooling support to build Java EE applications. The OpenShift Eclipse plugin makes it very easy to work with OpenShift. So, if you are a Java (EE) developer, give it a try.
|
https://blog.openshift.com/build-and-deploy-cloud-aware-java-ee-6-postgresql-92-applications-inside-eclipse/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
The V8.5.5.Next Alpha (February update) for WebSphere Application Server and WebSphere Application Server Developer Tools for Eclipse is now available. Here are a few of the things we love about it.
New in V8.5.5.Next Alpha
- Feature and configuration management
- Developer experience
- Java EE Concurrency API
- JCA Outbound Resource Adapter support
- WebSocket support
- February update: New stories added and defects (bugs) fixed
Feature and configuration management
Automatically add features in the Server Configuration editor
When you add a new element to the server configuration and the required feature is not already enabled, the Server Configuration editor will now, optionally, add the feature to the server for you.
Automatically update server.xml with an include from the server tools
The Create SSL Certificate, Create Collective Controller and Join Collective tools now provide the option to generate an include file from the server configuration for you. The tools then automatically add the new include to the server.xml.
Developer experience
Support for Eclipse Kepler
The developer tools now support Eclipse Kepler as well as Eclipse Juno. Eclipse Kepler improves performance so that the editors open, close, and switch much faster. Eclipse Kepler has also addressed a number of memory leaks so that long-running applications don’t run out of memory.
Developer tools download and installation
You can download and install WebSphere Application Server Liberty Profile V8.5.5.Next Alpha from within in the developer tools.
Java EE Concurrency API
Configure and use server-managed executors, scheduled executors, thread factories, and the thread context service. Thread context capture and propagation includes the classloader context, the Java EE metadata context, and the security context.
Here is some sample application code that uses the default managed scheduled executor to schedule a task that refreshes some cached data from the database every 15 minutes, running for the first time 5 minutes after being scheduled.
Example configuration of the feature in server.xml:
<server> <featureManager> <feature>servlet-3.0</feature> <feature>concurrent-1.0</feature> </featureManager> </server>
Example application code that uses a managed scheduled executor:
public class ConcurrencyExampleServlet extends HttpServlet {

    @Resource(lookup = "java:comp/DefaultManagedScheduledExecutorService")
    ManagedScheduledExecutorService executor;

    public void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        Runnable task = new Runnable() {
            public void run() {
                try {
                    DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/MyDataSourceRef");
                    Connection con = ds.getConnection();
                    try {
                        // ... refresh cached info from database
                    } finally {
                        con.close();
                    }
                } catch (Exception x) {
                    x.printStackTrace(System.out);
                }
            }
        };
        executor.scheduleAtFixedRate(task, 5, 15, TimeUnit.MINUTES);
    }
}
JCA Outbound Resource Adapter support
Configure and use server-managed connection factories from outbound resource adapters and administered objects. Here is some sample application code that uses a JCA connection factory and administered objects.
Example configuration in server.xml:
<server>
    <featureManager>
        <feature>servlet-3.0</feature>
        <feature>jca-1.6</feature>
    </featureManager>

    <resourceAdapter id="myAdapter" location="C:/connectors/MyAdapter/MyResourceAdapter.rar"/>

    <connectionFactory jndiName="eis/connectionFactory1">
        <properties.myAdapter/>
    </connectionFactory>

    <adminObject jndiName="eis/interactionSpec1">
        <properties.myAdapter/>
    </adminObject>

    <adminObject jndiName="eis/interactionSpec2">
        <properties.myAdapter/>
    </adminObject>
</server>
Example application code that uses a JCA connection factory and administered objects:
public class ExampleServlet extends HttpServlet {

    @Resource(lookup = "eis/connectionFactory1")
    ConnectionFactory connectionFactory;

    @Resource(lookup = "eis/interactionSpec1")
    InteractionSpec ispec_Add;

    @Resource(lookup = "eis/interactionSpec2")
    InteractionSpec ispec_Remove;

    public void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        try {
            MappedRecord record = connectionFactory.getRecordFactory().createMappedRecord("PARTS");
            record.put("PARTNUM", "A368006U");
            record.put("PRICE", 102.49f);
            record.put("SUPPLIER", "IBM");

            Connection con = connectionFactory.getConnection();
            try {
                Interaction interaction = con.createInteraction();
                interaction.execute(ispec_Add, record);
                interaction.close();
            } finally {
                con.close();
            }
        } catch (Exception x) { // closing braces restored; error handling assumed
            x.printStackTrace(System.out);
        }
    }
}
WebSocket support
There is now partial support for both the WebSocket Protocol and the Java API for WebSockets. This includes connection initiation, message reading and writing, ping, pong, most annotations, and message decoding and encoding.
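As an illustration, a minimal annotated endpoint using the standard Java API for WebSockets might look like the sketch below. This is not taken from the alpha's samples, and since support is partial, behavior of individual annotations may vary in this release.
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Opened session " + session.getId());
    }

    @OnMessage
    public String onMessage(String message) {
        // Returning a value sends it back to the client over the same connection.
        return "echo: " + message;
    }

    @OnClose
    public void onClose(Session session) {
        System.out.println("Closed session " + session.getId());
    }
}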
February update
Stories added in the February update
- Create non-version specific server and runtime type ID for Liberty server
- Provide RESTHandler framework
- Default change: error on misconfigured JPA data source rather than ignoring
- Loose config support for OSGi applications in tools
- Support relaxed rules for session interfaces
- Schema needs to support an unknown nested child element
- Add internal libraries and custom SSL Trust Manager to enable Remote Server Connectivity
- WebSockets: Remove restrictions on onOpen, on Close and onMessage methods
- WebSockets: Support Session.getOpenSessions
- Complete JCA 1.6 support
- WDT in-workspace installer to discover and install optional features from the WDT update site
- Need option to remove WS-Security policy from service/client WSDL
Defects (bugs) fixed in the February update
- @ConfigProperty for a Boolean valued field, has the value of the type element as String.class
- Cannot enable timedOperations report dynamically
- Exception during createResource of a ResourceFactory resulting in null being returned to the application
- Bus Name should be ignored for messaging flows within liberty server
- Invalid Export-Package header in com.ibm.ws.javaee.jsf.tld.2.0 jar and com.ibm.ws.javaee.jsp.tld.2.2 jar
- Utility jar class not found when referenced by OSGi Web Bundle in loose config mode
- NPE from security collaborator when using WAB with cdi feature
- websocket sample code can not work on latest Liberty build
- OSGi RegionManager increases memory by 1.4 MB by loading MBeans during startup
- ClassLoadingService can try to create the same gateway bundle on two threads concurrently and BOOM!
- lWAS:SVT: JSF can not find WebSockets classes when destroying the context.
- New servlet added to a war within an ear through the tools fails to load
- Unable to generate Web service client of Liberty project from Services view
Plus another ~330 minor defects too.
I’m using websphere application server 8.5.5.5 release. Is there websocket support in this version (not liberty profile).
No – only the Liberty profile has WebSocket support in v8.5.5.5. We have stated our intent to deliver full Java EE 7 capabilities (including WebSockets) in the WebSphere Application Server full profile but we have not announced when this support will be released.
Hi,
Any ideas if and when the concurrency feature concurrent-1.0 will be included in an upcoming official release of Websphere Liberty profile?
Hi Owen – we have a post here: detailing the features included in our upcoming GA release. At this time there are no additional announcements on GA releases after the December release.
Hello, about Java EE Concurrency API
in wlp-developers-runtime-2013.11.jar and wlp-developers-extended-2013.11.jar
there are no definitions for the javax.enterprise.concurrent package.
Not in the ./dev or ./lib folders.
In ./lib there is some IBM stuff (the implementation), which relies on the API, but there is no API package.
How can I use the new functionality?
By the way, in 8.5.5.1 there is a similar problem: ManagedExecutorService is declared, even in the infocenter, but there are no declarations in the .jar.
thank you for the answer.
Hi Yan, the missing API package is a bug in the alpha driver - thank you for pointing it out! We will look into getting it corrected.
Regarding 8.5.5.1, it is correct to exclude the Java EE Concurrency API because 8.5.5.1 does not implement that specification, and provides only a Java SE ExecutorService, which is managed by the app server.
Thank you very much for the answer.
|
https://developer.ibm.com/wasdev/docs/new-in-v8-5-5-next-alpha/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Data binding depends on two classes in the System.Data namespace: DataView and DataViewManager. These classes provide an important layer of indirection between your data and its display format, allowing you to apply sorts and filter rows without modifying the underlying information; that is, to have different views on the same data. ADO.NET binding is always provided through one of these objects.
The DataView class acts as a view onto a single DataTable. When creating a DataView object, you specify the underlying DataTable in the constructor:
// Create a new DataView for the Customers table.
DataView view = new DataView(ds.Tables["Customers"]);
Every DataTable also provides a default DataView through the DataTable.DefaultView property:
// Obtain a reference to the default DataView for the Customers table.
DataView view = ds.Tables["Customers"].DefaultView;
The DataViewManager represents a view of an entire DataSet. As with the DataView, you can create a DataViewManager manually, passing in a reference to a DataSet as a constructor argument, or you can use the default DataViewManager provided through the DataSet.DefaultViewManager property.
The DataView and DataViewManager provide three key features:
Sorting based on any column criteria
Filtering based on any combination of column values
Filtering based on the row state (such as deleted, inserted, and unchanged)
To make all this a little clearer, it helps to consider a simple example with the Windows DataGrid control. In Example 12-1, three tables are queried and added to a DataSet. By setting its DataSource property, the Customers table is then bound to the DataGrid in a single highlighted line.
private void DataTest_Load(object sender, System.EventArgs e)
{
    string connectionString = "Data Source=localhost;" +
        "Initial Catalog=Northwind;Integrated Security=SSPI";
    string SQL = "SELECT * FROM Customers";

    // Create ADO.NET objects.
    SqlConnection con = new SqlConnection(connectionString);
    SqlCommand com = new SqlCommand(SQL, con);
    SqlDataAdapter adapter = new SqlDataAdapter(com);
    DataSet ds = new DataSet("Northwind");

    // Execute the command.
    try
    {
        con.Open();
        adapter.Fill(ds, "Customers");
        com.CommandText = "SELECT * FROM Products";
        adapter.Fill(ds, "Products");
        com.CommandText = "SELECT * FROM Suppliers";
        adapter.Fill(ds, "Suppliers");
    }
    catch (Exception err)
    {
        MessageBox.Show(err.ToString());
    }
    finally
    {
        con.Close();
    }

    // Show the customers table in the grid.
    dataGrid1.DataSource = ds.Tables["Customers"];
}
On the surface, it looks as though this code is binding the grid directly to a DataTable object. However, behind the scenes, .NET retrieves the corresponding DataTable.DefaultView and uses that. You can replace the highlighted line with the following equivalent syntax:
dataGrid1.DataSource = ds.Tables["Customers"].DefaultView;
Similarly, you can create a new DataView object and use it for the binding:
DataView view = new DataView(ds.Tables["Customers"]);
dataGrid1.DataSource = view;
This technique is particularly useful if you want to display different views of the same data in multiple controls. Figure 12-1 shows the result of binding the view. The DataGrid automatically creates a column for each field in the table and displays all the data in the order it was retrieved from the data source. By default, every column is the same width, and the columns are arranged according to the order of fields in the SELECT statement; you'll learn how to customize the view later in this chapter.
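As a quick sketch of that idea, two views over the same Customers table can feed two different grids; the filter and sort expressions, and the second grid (dataGrid2), are only illustrative here:

// Two views over the same underlying DataTable.
DataView londonView = new DataView(ds.Tables["Customers"]);
londonView.RowFilter = "City = 'London'";
londonView.Sort = "ContactName ASC";

DataView allView = new DataView(ds.Tables["Customers"]);
allView.Sort = "CompanyName ASC";

// Each grid shows a different slice of the same data.
dataGrid1.DataSource = londonView;
dataGrid2.DataSource = allView;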
There is one reason why you should bind directly to the view rather than use the table name. If you specify an invalid table name when binding directly to a table, you don't receive an error; the DataGrid just appears empty. However, if you make the same mistake when binding to a view, you receive a more informative NullReferenceException.
The DataGrid is the only Windows Forms control that supports binding to an entire DataSet as well as a single table, although many other third-party controls follow suit. When binding a DataSet, .NET automatically uses the corresponding DataViewManager provided through the DataSet.DefaultViewManager property:
// Bind to the DefaultViewManager explicitly.
dataGrid1.DataSource = ds.DefaultViewManager;

// Bind to the DefaultViewManager implicitly. This code is equivalent.
dataGrid1.DataSource = ds;

// Bind to an identical DataViewManager you create manually.
// This code has the same effect, but isn't exactly the same
// (because it creates a new object).
dataGrid1.DataSource = new DataViewManager(ds);
Figure 12-2 shows the initial appearance of a DataGrid when bound to a DataSet. A separate navigational link is provided for every table in the DataSet. When the user clicks on one of these links, the corresponding table is shown, as in Figure 12-2.
There is one important difference between the DataViewManager and the DataView approach, however. When you use the navigational links to display a table, .NET doesn't use the DefaultView to configure the appearance of that table. Instead, every DataViewManager provides a collection of DataViewSetting objects. When the user navigates to a table through a DataViewManager, a new DataView is created according to the settings in the corresponding DataViewSetting object.
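For example, a small sketch of adjusting those settings before binding (the sort and filter expressions here are only illustrative):

DataViewManager dvm = ds.DefaultViewManager;
dvm.DataViewSettings["Customers"].Sort = "CompanyName ASC";
dvm.DataViewSettings["Customers"].RowFilter = "Country = 'UK'";
dataGrid1.DataSource = dvm;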
|
http://etutorials.org/Programming/ado+net/Part+I+ADO.NET+Tutorial/Chapter+12.+DataViews+and+Data+Binding/12.1+The+DataView+and+DataViewManager/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
The team has been quietly working on a number of improvements to the entity designer that we would like to show off and get feedback on. This particular set of features is targeted at making modeling more productive and scalable, and models themselves more comprehensible.
We’ll start with the small, fun things, and build up to a grand finale.
1. Association Highlighting
Let’s start with this. Consider the following model:
We would like to rename the navigation properties to something meaningful, as well as the foreign keys. But which navigation property is related to which foreign key? And to which association connector?
So, now you can select an association or navigation property and find out. Here, we select the “Address.Person” navigation property, and the designer highlights the two entity types involved in the association, the association connector, the navigation property on the other side, as well as the foreign keys that hold the navigation information.
This makes it easier for us to rename things to:
2. Property Reordering
Imagine that you want to add a “Name” property to the “Property” type above. You do this and end up with a type that looks like this:
But of course, you want this new property to be the second one in the type, below “Id”. Well, now you can select this property and, for example, press Alt + Home, which will move that property to the top of the type, then on Alt + Down Arrow to move it down one slot. Here is a screenshot of the type, along with the new “Move Properties” menu:
You can now select multiple properties at the same time and move them. For example:
3. Entity Shape Coloring
Imagine, now, that our model has gotten more complex, and viewing it all in one window renders areas of the diagram hard to distinguish:
Entity shape coloring allows us to visually divide a single diagram into multiple meaningful areas by color coding them. You can select one or more entities and in their property sheet, change their color, which lets us do something like this:
4. Multiple Diagrams!
But what if you just want to look at subject areas within a diagram? Say, the product catalog entities, or the sales types. In that case, you can select any number of entities and move them into a new diagram by right-clicking and selecting the new “Move to new Diagram” menu item. If we do this to our “Property” type, we’ll see it appear in a new diagram. And, if we make the model browser tool window visible, we’ll see this:
Note that we now have a new “Diagrams” folder in the model browser, and that the new diagram opens in its own tab, allowing you to have multiple windows open at once against the same model. This diagram is a little sparse. Let’s bring into it any types related to “Property” by right-clicking it and selecting “Include Related”:
Now, color the “Person” type green, then bring its related type, and see how they pick up the original type’s color:
(Note that this model is not a real-world one and is only intended for visualization purposes.)
Some additional notes:
– You can also drag and drop associations, types, and entity sets from the model browser.
– You can cut or copy and paste objects from one diagram to another.
– Deleting objects from a diagram will no longer delete them from the model, but only from the diagram.
– You can delete objects from the model by using Shift+Del, right clicking and selecting “Delete from Model”, or deleting them from the model browser.
– Diagrams are stored in a child file of the edmx for new edmx files. For backwards compatibility, models created in previous versions of visual studio will keep the diagram information in the EDMX file itself. However, you can move the diagram information from the EDMX to a child file by right clicking and selecting the “Move Diagrams to Separate File” context menu.
We look forward to your feedback!
Noam
Hi Noam,
Looks good, particularly the highlighting of each association when dealing with multiple associations between related entities, I had to do this manually not 20 mins ago 🙂
Also the copy and paste between diagrams would be very useful especially since I tend to keep one edmx per functionally area and so often need to recreate overlapping areas eg the overlap between Accounts Receivable and Accounts Payable edmxs. Entity shape colouring goes some way in this area as well.
Another time consuming designer task is renaming from Db field names to Object Property names. It would be great if when you create a new edmx with overlapping tables that you could 'scavenge' mapping names from other edmx files in the solution. eg how many times I might have to rename blnAct to IsActive etc.
It would be great if the existing mappings were treated as a reusable asset in this respect, maybe exportable and importable in some key-value format?
BTW Are there improvements on the way to composite key handling (primary and foreign), or FKs that reference unique keys rather than the primary key?
One last thought: Another major time consumer is the round tripping in the model first/db first journey. Currently if I make any non trivial changes to the model I have to create a new blank database from the 'Generate Database from Model' tool then use a third party tool to diff the existing and new dbs or just figure out the changes manually. A more comprehensive path to update the model and produce a db change script would be a huge step forward.
Finally is there any way to enjoy any of these new features out-of-band ie: before the next VS version, if so how?
Cheers
Simon
Great! When will this functionality be published?
This looks great! When can we have it? 🙂
Simon, a lot of the time consuming problems you mention have already been solved by other O/RM designers. LLBLGen Pro () for instance, even lets you generate code for EF.
It's great Noam and we already spoke many times about the importance of the designer (for my customers particularly). However, what about complex types on the designer, what about Horizontal Entity Splitting, what about TPC?
Matthieu
Love the improvements! Much needed on my project. Will this be deployed in SP1?
Looks awesome, particularly like the multiple diagrams!
Please please please please ProviderManifestToken in the designer so that we can SET it. It's a real PITA right now to develop for SQL Server 2005 when we have SQL Server 2008 DBs here.
+1 for ProviderManifestToken. Hit me many times before.
The idea of multiple diagrams in theory should solve performance issues. Let's hope so…
When can we get these bits to play with?
@Gary, thanks looks interesting and very comprehensive. I guess I am just hoping to get some of the core offerings from MS improved 🙂
Thanks for the feedback! Yes, we are working on the unique key handling feature, and are looking at a few different ways of making the model-first feature migrate, rather than recreate, your database.
I'll work with the team to see if we can push ProviderManifestToken handling into the next release!
It must be possible to highlight properties based on some condition. For example, I want to highlight nullable ones in gray, keys in orange, and numeric in fuchsia.
Also move for properties is ok, but what about sort (alphabetically, or by types, or some other way) ?
One other thing that I seem to spend a bit of time doing but have to leave the designer to do is to manually add the DefaultValue attribute to the various not nullable storage entities that are not mapped to conceptual entities. It would be great if the model generator could read this info from the db and automatically apply these attributes to the storage and/or conceptual models.
1. We could not get these improvements shipped in SP1. We'll ship them as soon as we can after that release.
2. The diagrams can be stored in a separate file from the EDMX, which should alleviate some source merging issues: For backwards compatibility, existing EDMX files will keep the diagrams in the EDMX file, so you can open them in older versions of VS. New EDMX files will store the diagrams in a separate file. For existing EDMX files, there is a right-click context menu item called "Move Diagrams to Separate Files" that you can use if you do not need/want backwards compat.
Like the improvements too! Multiple Diagrams will be very useful feature. Great job!
@Noam, looks great!
@Simon, if you want to add incremental SQL-DDL generation to avoid having to regenerate the db, take a look at my 'model comparer' feature for EFv4. It is part of my add-in Huagati DBML/EDMX Tools for VS2010, and gives you more granular control over DB <=> SSDL <=> CSDL differences and allow you to bring individual changes across without touching unrelated areas of the model.
I have posted a couple of screencasts showing what it can do here:
huagati.blogspot.com/…/introducing-model-comparer-for-entity.html
huagati.blogspot.com/…/using-model-comparer-to-generate.html
…and if you want to take it for a test-spin you can download the add-in and get a trial license at
Nice, real nice. Bring it on I guess.
awesome work. cant wait to get my hands on the new features and new designer.
OMG, I'm absolutely gurgling with glee with these designer improvements! In particular, multiple diagrams are going to be an absolute god send. It's a total nightmare at the moment when we have multiple devs trying to edit the same diagrams.
Will we be able to include the same entity in multiple diagrams though? From the "move to new diagram" option, it sounds like an entity can only exist in one diagram at a time. Quite often I have entities that relate to lots of other enties in different ways, and it would be great to be able to include them in each sub-diagram. Of course, if you make changes to an entity in one diagram, it would have to also keep any other instances in sync too.
Hopefully we won't have to wait too long for these enhancements. The EF really needs to maintain a fast pace – being tied in with the monolithic Visual Studio releases is too slow.
I second the request for the designer to read defaults from the database schema for non-nullable fields, this is a tedious and time-consuming job to do each time we create a model. Would be really good to include niladic function defaults here as well, especially getdate() and suser_sname(), we use these extensively in our databases for logging etc.
Also, same question as Daniel Smith, will we be able to include the same entity in more than one diagram (like we can with SQL Server database diagrams) ? This will make it an awful lot easier to work on larger models with multiple developers.
Great new features! When can we have it as a CTP or a Power Pack or something or anything?!? 🙂 It seems really complete…
Have you seen this site for customer feedback on Entity Framework? data.uservoice.com/…/72025-ado-net-entity-framework-ef-feature-suggestions
This looks great… though I will probably maintain my current method of splitting my database tables into groups and therefore edmx models. This lends itself to an improved security model and pluggable data providers; I can develop one area of the system's data model with no effect on the others, and I can then sell this as a plug-in add-on.
This gives rise to JasonBSteele's similar problem of version and change control on edmx models.
Another important step would be 'using' … one edmx could import and use the objects of another that is imported, making them readonly.
@Kristofer thank you, that looks very impressive. Great to see designer features that deal with SSDL, and the SQL diff script looks like just what I was looking for. Will definitely give it a try.
Great news !
Almost all these improvements were features I've been missing !
Nice work
1. second the shared entity across multiple diagrams
2. What about basic documentation and annotation for the diagrams ? ala SQL Diagrams – Ability to document and annotate the models with text, labels, boxes etc that help describe the intent of the model right on the design surface.
One very useful feature would be scripting or macros in the designer so I can script everything I do to a model and then blow away the model, recreate it from an updated db and then rerun my scripted changes.
It would be nice to be able to specify extended properties on a table/column/…. in the designer.
It would be nice to be able to delete something from the model and then re-add it without having to worry about messing up the mapping and storage parts of the edmx and being forced to hack it.
It would be nice to be able to set a primary key guid column to identity without having to hack the edmx.
It would be nice to be able to specify default values for columns right in the Designer
it would be nice to import the nullability straight from the database.
Some kind of working sync tool so I can sync an edited model with an edited db and pick and choose how the sync happens, and then have everything work without having to redo all my changes or hack the edmx file.
Some kind of edmx merger to allow multiple developers to edit the model and then merge the changes without having to parse the edmx file and merge it in their heads.
It seems like you guys are working on fluff rather than some of the basics.
I liked this new feature very much.
I would like to vote for another feature, using the database table schema as part of generated class namespace. (I don't know if this doable in the t4 templates)
when this new designer enhancement will be available (weeks or months)?
Thanks
Looks good, nice work! My little wish list:
– extending properties and the availability of these extensions in T4
– more business rules in the model (data annotations, inter entity rules and inter record rules)
– a solution for having a framework with edmx and a project with edmx so framework model and data code can be resused between projects within 1 context (f.e. merging context or using an edmx in another edmx).
Any idea when these improvements will become mainstream?
@Kendall Morley,
Many of the features you're asking for are available in the Model Comparer for EFv4 that is part of this Visual Studio add-in:
It allows you to bring selective changes across, it uses documentation from extended properties, it allows you to set up rules as for what db-side default constraints should be treated as identity/computed, it syncs nullability/storegenerated/default value between CSDL and SSDL, and it allows you to select exactly what changes you want to bring across between the layers _without_ touching any unchanged portions of the model. Download it and try it out and I think you will find that it complements the EFv4 designer in VS2010 with a lot of the things you're asking for.
Great stuff!
Wondering if you can continue to dodge =P the question of a release schedule for these changes?
my 0.02pecs worth
Multiple diagrams are great.
Will there be support for multiple namespaces?
(With the possibility to associate 2 entities from different namespaces, this could simplify big models)
zano04 – Multiple namespaces are not on the docket for the next release, they'll have to wait for a future one…
One thing I'd really need at the moment is the ability to populate the list of options presented to the user for custom properties added via a Visual Studio extension and a EntityDesignerExtendedProperty. The enums are nice for things that are static, but annoyingly restrictive. For example if you need the developer to be able to specify an entity as the value of the custom property, all you can do is to make the property a string and add a validation into its setter. No way to give him a list.
I'd also like to add a property that'd let me specify what permissions must a user have to be able to edit something (I'll script it out as an attribute then and use it in a custom model binder and in html helpers), but again, how do I give the developers the list of permissions? In this case as the VS extension is in-house and so far used only for one project, I can copy the enum definition into the extension's sources, but that's rather … suboptimal.
.NET Links of the Week #40
Most of these features have been implemented in the Devart Entity Developer. It is a powerful modeling tool that allows you to build models for LINQ to SQL and Entity Framework. Some of Entity Developer's features are presented below:
* You can use Model-First and Database-First approaches
* You can highlight associations
* You can drag-n-drop properties inside entity.
Read more about Entity Developer features on our web-site –.
4 months on…..any update on a release of these features even in a CTP? Would love to see more out-of-band releases of this nature rather than ~2 yearly versions :).
Any news? Makes you wonder how committed Microsoft are to Entity Framework; even the EF design blog only has two entries since this one!
Hi,
is there any news on when to expect a release??
Coloring and multiple diagrams are desperately needed !!
Great work… judging by your screen-shots :))
A month on from Simon asking and almost a month from me asking… I guess no one at Microsoft is reading this any more. Either that or they are no longer interested in EF!
Hi all —
Thanks for all your suggestions and excitement over the features Noam had laid out here. We had a transition of work, apologies for the lack of responses on the blog during that.
We are very interested in EF and are working on even new features for EF and the Entity Designer to get out to you along with these ones. We are also working on faster releases to get these features to you all as soon as possible without you having to wait the long times between major product releases.
More soon!
Sarah McDevitt
It has indeed been a long time since this was announced, and we still have no idea when we will be able to use these interesting improvements!
I got really excited about this, but 6 months on, and still no sniff of a release date. Is it any surprise people find alternatives such as Devart, LLBLGen Pro, nHibernate or DevForce?
The serious shortfalls of edmx (that have seemingly been overcome) at least warrant a CTP, PowerPack or Plugin. I wish I had customers whom I could show them a pretty picture of how something might look, then sit on it for 6 months without a peep.
I can only assume (like all others on this thread) that EF just isn’t a priority.
Sarah McDevitt said "More soon!" – any idea when soon is? It's been a couple of weeks and still no more posts from Microsoft
We are really waiting for it, NOT more diagrams was the last obstacle for us.
Hello Microsoft developers,
can you give a date when this is available for us?
Thank you.
Please provide a better way to automatically lay out the diagram.
Almost a month since I asked Sarah when "soon" is and no reply from her so I guess we can just assume that no news = we don't care so go away and find something else to use?
OMG!!!! SO NICE!!!! This is SO BADLY NEEDED!!! Thx MSFT!
Soooo. what's the status about this.. Is this actually w.i.p.? Or are we waiting for nothing?
I've just installed SP1 hoping this feature would be inside (posted 9 months ago!) but there is nothing new in SP1…
Please give us just a date or a release number where this feature will be include.
Thanks
When will we see this. Another few weeks have passed…
blogs.msdn.com/…/announcing-the-microsoft-entity-framework-june-2011-ctp.aspx
Is there somewhere I can download and test these new features? I need the multiple diagrams pretty badly.
@Shaun, click the link in the comment previous to yours (for the June 2011 CTP).
|
https://blogs.msdn.microsoft.com/efdesign/2010/10/11/entity-designer-improvements-preview/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
I noticed in ST3, begin_edit requires two arguments: cmd and args. What are those attributes supposed to receive?
OK, I got this working. When you have a TextCommand, instead of creating an edit yourself with begin_edit(), you receive an additional edit parameter that is prepended to the usual args in ST2. On to the next challenge: async processing.
I have the same issue with begin_edit, except that i was using for an output panel :
def print_to_panel_mainthread(self, cmd, str_a):
    win = sublime.active_window()
    strtxt = u"".join(str(line) for line in str_a)
    if(len(str_a)):
        v = win.create_output_panel('hg_out')
        if cmd == "diff":
            v.set_syntax_file('Packages/Diff/Diff.tmLanguage')
        v.settings().set('word_wrap', self.view.settings().get('word_wrap'))
        edit = v.begin_edit(None, None)  ##### ISSUE HERE #####
        v.insert(edit, 0, strtxt)
        v.end_edit(edit)
        win.run_command("show_panel", {"panel": "output.hg_out"})
To have the code compile i tried None and 0, but the issue is that now my panel shows up empty. Any idea ?
I don't mind using sublime.py and sublime_plugin.py as "documentation" for the API, but slightly more comments (just a bit more than none) would help a lot, at least for the stuff that changed compared to ST2.
I take it you haven't read the porting guide.
I did read it, but I'm talking about the .py files that jon suggests using as a reference for the new API functions. Putting some comments about the parameters expected by each function could help plugin devs a lot (and could even be used to automatically generate a basic API reference).
Anyway, in my case I found a way to make it work: replacing the whole part between begin_edit and end_edit with a v.run_command('append', {'characters': strtxt}).
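So the relevant part of the function above now looks roughly like this (same hg_out panel as before, just without the begin_edit/end_edit pair):

v = win.create_output_panel('hg_out')
if cmd == "diff":
    v.set_syntax_file('Packages/Diff/Diff.tmLanguage')
v.settings().set('word_wrap', self.view.settings().get('word_wrap'))
v.run_command('append', {'characters': strtxt})  # replaces begin_edit/insert/end_edit
win.run_command("show_panel", {"panel": "output.hg_out"})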
I was trying to update the excellent plugin AlignTab for ST3 and i had an issue around this edit parameter.
After modification to remove begin_edit/end_edit, and using directly the edit from the TextCommand, I get:
[code]class AlignTabCommand(sublime_plugin.TextCommand):

    def run(self, edit, user_input=None):
        self.edit = edit
        print("Called with params " + str(user_input))
        if not user_input:
            v = self.view.window().show_input_panel('Align with regex:', '',
                self.align_tab, None, None)
            # print os.getcwd()
            v.set_syntax_file('Packages/AlignTab/AlignTab.tmLanguage')
            v.settings().set('gutter', False)
            v.settings().set('rulers', [])
        else:
            self.align_tab(user_input)[/code]
If the command is called directly with some parameters everything works fine. But if there are no parameters, it shows the panel and then the text is not aligned (the function is correctly called though, I can see some changes in selection and stuff).
I managed to work around the issue (instead of calling align_tab when show_input_panel is done, I do a lambda x: self.view.run_command("align_tab", {"user_input": x})), but I still don't understand what is wrong in the initial code. Is it a bug, or am I missing something?
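For reference, the workaround in code (same run method as in my earlier post):

if not user_input:
    v = self.view.window().show_input_panel(
        'Align with regex:', '',
        lambda x: self.view.run_command("align_tab", {"user_input": x}),
        None, None)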
It's async code, so after showing the input panel the command finishes, and that original edit object (the one you are stowing away as self.edit) has expired.
Ok, I understand now. So the way I handle it is fine, I guess.
Please help me and share some examples. Thank you...
The args attribute receives a numerical value corresponding to the command that is passed to the cmd attribute. This is how it usually works. But I have heard that there are some differences in ST3. So just run through the guide once again to make sure the attributes go as mentioned above.
|
https://forum.sublimetext.com/t/new-begin-edit-signature-in-st3/8611/8
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Sat, Jan 12, 2013 at 4:21 AM, giorgik <ggiorgio63@...> wrote:
> The problem is that it crashes when you run the statement that I have
> indicated. There seems to be something wrong with the tesselation. A dialog
> send error to Microsoft regarding glu32.dll. Enabling debugging
> #define DEBUG 1
> not provide additional information.
> If you isolate the whole body of the function BuildDisplayList the rest of
> the program works, and I finally displays the window with the plane can be
> rotated etc. ...
> Obviously do not see any object. You can see it using the executable in the
> folder with the name Release QStage.exe which I have put in the annex
>
>
Have you tried running the program in the debugger, and verified that all of the data you are feeding to the function in question is correct?
On Wed, Jan 16, 2013 at 8:17 AM, Dagna Bieda <dagna.bieda@...> wrote:
Player is compatible with OpenCV 2.4. We've had it in Fedora since July
with no issues. The old style C API still exists, the cv namespace is for
the new C++ API that's been introduced in parallel.
> When making player-3.0.2 I get following errors:
> libplayerdrivers/libplayerdrivers.so.3.0.2: undefined reference to
> `cvCreateImage'
> libplayerdrivers/libplayerdrivers.so.3.0.2: undefined reference to
> `cvLaplace'
>
>
The fact that you're seeing "undefined reference" errors means that Player
has already gotten past the step where it compiled all of its source
files. That rules out any API issues (e.g. the cvCreateImage function
still exists in a header file somewhere.) It's now trying to link against
OpenCV's libraries (libcv.so and friends) and failing. This may be due to
having OpenCV on a non-standard path (like /usr/local) without properly
setting LD_LIBRARY_PATH. Did you install OpenCV yourself, or did you use
packages provided by your distribution?
To debug further, we need:
* The location of libcv.so and the rest of the OpenCV libraries on your
system
* The output of CMake when you run it to build Player
* The full compiler command line that was run which produced the error
("make VERBOSE=1" will reveal it)
Rich.
When making player-3.0.2 I get following errors:
libplayerdrivers/libplayerdrivers.so.3.0.2: undefined reference to
`cvCreateImage'
libplayerdrivers/libplayerdrivers.so.3.0.2: undefined reference to
`cvLaplace'
I'd like to ask if any version of Player/Stage you're working on will be
compatible with C++ OpenCV ? When it will be released?
I need Player compatible with newest OpenCV, cause I've been working whole
semester on some project of image processing for my master thesis and now I
want to implement it on PeopleBot ( I know it's possible to move PeopleBot
via Player ), but all the work I've done is already in OpenCV2.4.1.
Please let me know is there a chance to make it working together,
Thank you in advance!
Dagna
|
https://sourceforge.net/p/playerstage/mailman/playerstage-users/?viewmonth=201301&viewday=16
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Refactor That Code! Get those database calls out of your controllers
If y’all recall, when we started to build YouReallyNeedABudget in the aspnetcore series, we went for speed over quality, and even though our API controllers were slim (mostly because they didn’t do very much), they still have a lot that can be improved upon.
We injected our Entity Framework DbContext directly into the controllers and did all of our database IO right in the controller methods themselves.
By doing that, we have created a direct dependency on Entity Framework by the API. That’s not actually a bad thing if we KNOW we will always use EF, but in general, it is good to program to an interface to abstract away the database technology and allow for future changes, test mocking, and a more readable API.
Plus, I already have an urge to swap in Dapper instead :)
This post will walk through the refactoring of one of our controllers. We will:
- Move the database code out of the controller and into a repository class
- Let aspnetcore handle injecting the concrete repository class into the controller
- Program to the repository interface
Now, about that Repository Pattern…
I know, I know… Many of you quiver when Entity Framework and Repository Pattern are mentioned in the same sentence. We’re going to do it anyway because in our case, we don’t need the advanced features that you might lose.
Let’s take a look at the current state of the controller…
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using YouReallyNeedABudget.DataAccess;
using YouReallyNeedABudget.Models;
using AutoMapper;

namespace YouReallyNeedABudget.WebApi.Controllers
{
    [Route("api/[controller]")]
    public class TransactionsController : Controller
    {
        private readonly BudgetContext _dbContext;
        private readonly IMapper _mapper;

        public TransactionsController(BudgetContext dbContext, IMapper mapper)
        {
            _dbContext = dbContext;
            _mapper = mapper;
        }

        [HttpPost]
        public IActionResult Post([FromBody]DTO.Transaction transactionDTO)
        {
            var newTransaction = _mapper.Map<Transaction>(transactionDTO);
            _dbContext.Transactions.Add(newTransaction);
            _dbContext.SaveChanges();
            return Created(string.Format("/api/transaction/{0}", newTransaction.ID),
                _mapper.Map<DTO.Transaction>(newTransaction));
        }

        [HttpDelete("{id}")]
        public IActionResult Delete(int id)
        {
            _dbContext.Transactions.Remove(_dbContext.Transactions.Where(tx => tx.ID == id).SingleOrDefault());
            _dbContext.SaveChanges();
            return NoContent();
        }
    }
}
Note: For the purposes of this post, we aren’t going to worry about exception handling.
Refactoring
We have two endpoints, one for creating a new transaction, and one for deleting. And you'll notice the Entity Framework code does just that, adds and removes. That maps nicely to Add(transaction) and Remove(id) repository methods, so let's first create an interface to program to instead.
using YouReallyNeedABudget.Models;

namespace YouReallyNeedABudget.DataAccess
{
    public interface ITransactionRepository
    {
        void Add(Transaction transaction);
        void Remove(int id);
    }
}
How lovely…
And for the concrete version…
using System.Linq;
using YouReallyNeedABudget.Models;

namespace YouReallyNeedABudget.DataAccess
{
    public class TransactionRepository : ITransactionRepository
    {
        BudgetContext _dbContext;

        public TransactionRepository(BudgetContext dbContext)
        {
            _dbContext = dbContext;
        }

        public void Add(Transaction transaction)
        {
            if (string.IsNullOrEmpty(transaction.PayeeName) == false)
            {
                if (_dbContext.Payees.SingleOrDefault(p => p.Name == transaction.PayeeName) == null)
                {
                    _dbContext.Payees.Add(new Payee { Name = transaction.PayeeName });
                }
            }
            _dbContext.Transactions.Add(transaction);
            _dbContext.SaveChanges();
        }

        public void Remove(int id)
        {
            _dbContext.Transactions.Remove(_dbContext.Transactions.Where(tx => tx.ID == id).SingleOrDefault());
            _dbContext.SaveChanges();
        }
    }
}
Note: these will live in our data access layer…
Fortunately, and with great pleasure, aspnetcore will handle injecting the DbContext into the constructor just like it did when we used it in the controller.
Also, notice the add-if-not-exists payee clause in the Add method. In this application, payees are just a list of names that serve very little purpose. We could name the method something like AddTransactionAndAddPayeeIfNotExists() to be more clear, but I won't here. We also could have created a separate payee repository and implemented that logic in the controller or in a service method, but again, payees are not important enough to justify it.
Injecting the repository interface
To use our new repository we have to tell the dependency injection framework that when it needs to fulfill an ITransactionRepository reference, it should use a specific concrete implementation - the one we created above.
To do this, add this to your Startup.cs ConfigureServices() method:
services.AddScoped<ITransactionRepository, TransactionRepository>();
It's worth pointing out that if you were to implement the interface using another ORM like Dapper, a mock repository for testing, or no ORM at all, this is where you would specify which implementation to use.
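For example, here's a rough sketch of what a Dapper-based implementation could look like; the connection string handling, the table name, and the Amount column are assumptions for illustration, not part of the real project:

using System.Data.SqlClient;
using Dapper;
using YouReallyNeedABudget.Models;

namespace YouReallyNeedABudget.DataAccess
{
    public class DapperTransactionRepository : ITransactionRepository
    {
        private readonly string _connectionString;

        public DapperTransactionRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        public void Add(Transaction transaction)
        {
            using (var con = new SqlConnection(_connectionString))
            {
                // Assumes a Transactions table with matching column names.
                con.Execute(
                    "INSERT INTO Transactions (Amount, PayeeName) VALUES (@Amount, @PayeeName)",
                    transaction);
            }
        }

        public void Remove(int id)
        {
            using (var con = new SqlConnection(_connectionString))
            {
                con.Execute("DELETE FROM Transactions WHERE ID = @id", new { id });
            }
        }
    }
}

You would then register that implementation in ConfigureServices() instead of the Entity Framework one, and the controller would not change at all.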
Piece of cake!
Now we just need to make use of it in our original controller method:
using Microsoft.AspNetCore.Mvc;
using YouReallyNeedABudget.DataAccess;
using YouReallyNeedABudget.Models;
using AutoMapper;

namespace YouReallyNeedABudget.WebApi.Controllers
{
    [Route("api/[controller]")]
    public class TransactionsController : Controller
    {
        private readonly ITransactionRepository _transactionRepo;
        private readonly IMapper _mapper;

        public TransactionsController(ITransactionRepository repo, IMapper mapper)
        {
            _transactionRepo = repo;
            _mapper = mapper;
        }

        [HttpPost]
        public IActionResult Post([FromBody]DTO.Transaction transactionDTO)
        {
            var newTransaction = _mapper.Map<Transaction>(transactionDTO);
            _transactionRepo.Add(newTransaction);
            return Created(string.Format("/api/transaction/{0}", newTransaction.ID),
                _mapper.Map<DTO.Transaction>(newTransaction));
        }

        [HttpDelete("{id}")]
        public IActionResult Delete(int id)
        {
            _transactionRepo.Remove(id);
            return NoContent();
        }
    }
}
And that’s all she wrote.
Still lots to do, but this is a great start. Let me know if that was useful or if you have any suggestions. :)
Comment on the reddit post.
|
http://miniml.ist/dotnet/refactoring-db-calls-out-of-controllers/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Revision history for Dancer::Plugin::Cache::CHI

1.4.0 2013-01-07
 [ENHANCEMENTS]
   - Now Dancer 2 compatible.
 [STATISTICS]
   - code churn: 5 files changed, 146 insertions(+), 56 deletions(-)

1.3.1 2012-09-26
 [BUG FIXES]
   - working around regression in Dancer 1.31
 [STATISTICS]
   - code churn: 3 files changed, 15 insertions(+), 4 deletions(-)

1.3.0 2011-12-06
 [BUG FIXES]
   - doc typo corrected (thanks Perlover!) [GH#7]
 [ENHANCEMENTS]
   - cache namespaces support now provided. [GH#8] Patch by Perlover.

1.2.0 2011-10-15
 [BUG FIXES]
   - add test for 'cache_remove'
   - Change log was messed up by dzil plugin
 [ENHANCEMENTS]
   - add 'before_create_cache' hook
   - now http headers of cached response are saved as well, and we're not using 'halt' anymore. (patch by Perlover)
   - new configuration option 'honor_no_cache' for client-side 'Cache-Control: no-cache' or 'Pragma: no-cache'. (original patch and idea by Perlover)
   - cache keys used by 'cache_page' can be customized via 'cache_page_key_generator'.

1.1.0 2011-08-24
 [ENHANCEMENTS]
   - add 'cache_remove' helper function. Patch by David Precious. [GH#1]

1.0.2 2011-07-17
 [BUG FIXES]
   - fix 'check_page_cache'. Looks like the behavior of 'halt' changed since the last release of D::P::C::CHI.

1.0.1 2011-04-04
   - report-version seems to be causing problems.

1.0.0 2011-04-02
   - Re-brand the dist as Dancer::Plugin::Cache::CHI.

0.2.2 2011-04-02
   - Dancer::Plugin::Cache deprecated in favor of Dancer::Plugin::Cache::CHI

0.2.1 2011-03-20
   - Requires Dancer > 1.150

0.2.0 2011-03-19
   - Add cache helper functions. Thanks to David Precious for the suggestion. [RT#66714]

0.1.0 2011-03-16
   - Initial release, unleashed unto an unsuspecting world.
|
https://metacpan.org/changes/distribution/Dancer-Plugin-Cache-CHI
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
class Person {
    int mAge;
public:
    Person() {
        mAge = 10;
    }
    void incrementAge() {
        mAge = mAge + 1;
    }
};
It has an incrementAge() member function that modifies its state, i.e. increments the mAge data member. Now create a function that will return a Person object as an rvalue, i.e.
Person getPerson() {
    return Person();
}
Now getPerson() is an rvalue and we cannot take its address, i.e.
Person * personPtr = &getPerson(); // COMPILE ERROR
But we can modify this rvalue because it is of a user-defined data type (the Person class), i.e.
getPerson().incrementAge();
#include <iostream>

class Person {
    int mAge;
public:
    Person() {
        mAge = 10;
    }
    void incrementAge() {
        mAge = mAge + 1;
    }
};

Person getPerson() {
    return Person();
}

int main() {
    // Person * personPtr = &getPerson(); // COMPILE ERROR
    getPerson().incrementAge();
    return 0;
}
In the next article we will discuss why an rvalue is immutable in C++.
|
http://www.shellsec.com/news/27282.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
that you are willing to distribute, because that is a practical way of suggesting improvements to the standard version.
MULDIS and MULDIS MULTIVERSE OF DISCOURSE are trademarks of Muldis Data Systems, Inc. (). The trademarks apply to computer database software and related services. See for the full written details of Muldis Data Systems' trademark policy.
The word MULDIS is intended to be used as the distinguishing brand name for all the products and services of Muldis Data Systems. So we would greatly appreciate it if in general you do not incorporate the word MULDIS into the name or logo of your website, business, product or service, but rather use your own distinct name (exceptions appear below). It is, however, always okay to use the word MULDIS only in descriptions of your website, business, product or service to provide accurate information to the public about yourself.
If you do incorporate the word MULDIS into your names anyway, either because you have permission from us or you have some other good reason, then: You must make clear that you are not Muldis Data Systems and that you do not represent Muldis Data Systems. A simple or conspicuous disclaimer on your home page and product or service documentation is an excellent way of doing that.
Please respect the conventions of the Perl community by not using the namespace Muldis:: at all for your own works, unless you have explicit permission to do so from Muldis Data Systems; that namespace is mainly just for our official works. You can always use either the MuldisX:: namespace for related unofficial works, or some other namespace that is completely different.
Also as per conventions, it's fine to use Muldis within a Perl package name where that word is nested under some other project-specific namespace (for example, Foo::Storage::Muldis_Rosetta or Bar::Interface::Muldis_Rosetta), and the package serves to interact with a Muldis Data Systems work or service.
If you have made a language variant or extension based on the Muldis D language, then please follow the naming conventions described in the VERSIONING ("VERSIONING" in Muldis::D) documentation of the official Muldis D language spec.
If you would like to use (or have already used) the word MULDIS for any use that ought to require permission, please contact Muldis Data Systems and we'll discuss a way to make that happen.
None yet.
Several public email-based forums exist whose main topic is the Muldis D language and its implementations; they are the place for non-implementers to get help in using said implementations.
An official IRC channel for Muldis D and its implementations is also intended, but not yet started.
Alternately, you can purchase more advanced commercial support for various Muldis D implementations, particularly Muldis Rosetta, from its author by way of Muldis Data Systems; see for details.
|
http://search.cpan.org/~duncand/Muldis-D-Manual-0.9.0/lib/Muldis/D/Manual.pm
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Talk:Proposed features/Dress code
From OpenStreetMap Wiki
Keep it simple
I would recommend to keep the scheme simple. No dress:* namespace with a plurality of subkeys. Just one key, e.g. dress_code and a small set of standard values. Don't try to cover every possible situation. My take on it:
- dress_code=formal: Suit, jacket and tie or equivalent required
- dress_code=smart_casual: No jeans or shorts allowed
- dress_code=casual: Default value, no special requirements to dress code
- dress_code=decent: No bare shoulders allowed, trousers or skirts of "appropriate length" required. Otherwise casual dress code
- dress_code=no_sportswear: No sportswear allowed, otherwise casual dress code
- dress_code=no_beachwear: No beachwear allowed, otherwise casual dress code
- dress_code=nude: Required "dress" code for a nudist beach/resort.
These values should cover common situations in restaurants, bars, clubs, casinos, tourist attractions etc. --polderrunner (talk) 17:43, 23 October 2013 (UTC)
- I like your scheme. And I would recommend both of the schemes. Use your simple scheme as base; then, if required, specify deviations from the base with dress:* namespace. --Surly (talk) 04:01, 24 October 2013 (UTC)
Keep it even simpler
I would recommend to keep the scheme even simpler. No namespace, no attempt to formalize each aspect of a complex and ever shifting attribute. Just tag a freeform description of the dress code, in one or more languages:
- dress_code:en='no beachwear except on Friday evenings before 3pm. Happy joe hats are encouraged.'
And note that nudity is already covered with separate tags. Brycenesbitt (talk) 22:30, 23 October 2013 (UTC)
- This idea seems to be great as it allows users to put the verbatim dress code into OSM. Also, because the dress code is not rendered, there is no need for anything more rigid than this idea. I would, however, say that the value should either be a commonly known dress code (e.g. dress_code=formal) or a freeform description; perhaps we use Western dress codes for the generic descriptions --DCTrans (talk) 01:23, 1 December 2015 (UTC)
|
http://wiki.openstreetmap.org/wiki/Talk:Proposed_features/Dress_code
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
clientsocket = None
after the close.
return getattr(self._sock,name)(*args) socket.error: [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
from socket import *
import thread, ssl

def handler(clientsocket, clientaddr):
    print "Accepted connection from: ", clientaddr
    while 1:
        data = clientsocket.recv(1024)
        i = 0
        while (i <= 0):
            if data == "DesktopClient1":
                print data
                i = 1 + 1
                print 1
                print "Desktop Verified"
                clientsocket.send("Mobile Verified")
            elif data == "MobileClient2":
                print data
                i = 1 + 1
                print i
                print "Mobile Verified"
                clientsocket.send("Mobile Verified")
            else:
                break()
|
https://www.experts-exchange.com/questions/28474904/python-scripting.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
15 July 2008 08:30 [Source: ICIS news]
By Prema Viswanathan
SINGAPORE (ICIS news)--Egyptian Petrochemicals (EChem) plans to expand the capacity of its 80,000 tonne/year polyvinyl chloride (PVC) plant in Alexandria by 50% to 120,000 tonnes/year by the end of 2008, a company source said on Tuesday.
EChem’s vinyl chloride monomer (VCM) and ethylene dichloride (EDC) capacities would be expanded by 50% from the current 100,000 tonnes/year for VCM and from 140,000-145,000 tonnes/year for EDC, the source added.
“We expect the expansions to be completed as scheduled, provided we secure enough ethylene feedstock,” he said.
Ethylene for the vinyls expansion would be obtained from Sidi Kerir Petrochemicals' (Sidpec) 300,000 tonne/year cracker in Ameriya, near Alexandria.
Sidpec officials were not available for comment.
For more on PVC visit ICIS chemical intelligence
|
http://www.icis.com/Articles/2008/07/15/9140124/echem-plans-to-expand-vinyls-capacity-by-half.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
It is often useful to have a property that maps Strings to other components. In Nucleus, properties of type atg.nucleus.ServiceMap are assumed to perform this mapping. For example, a cities property might map a city name to the weather component monitoring it:
import atg.nucleus.*;
public ServiceMap getCities ();
public void setCities (ServiceMap cities);
The corresponding properties file might initialize the cities property as follows:
cities=\ atlanta=/services/weather/cities/atlanta,\ boston=/services/weather/cities/boston,\ tampa=/services/weather/cities/tampa,\ phoenix=/services/weather/cities/phoenix
The ServiceMap class is a subclass of java.util.Hashtable, so you can access it with all the normal Hashtable methods such as get, keys, size, and so on. In this case, the key atlanta maps to the component found at /services/weather/cities/atlanta, and so on for the other cities. The following code accesses the Weather component for a particular city:
Weather w = (Weather) (getCities ().get ("tampa"));
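As a further (hypothetical) illustration, the usual Hashtable methods let you walk every entry in the map; Enumeration here is java.util.Enumeration:

// Iterate over every city in the ServiceMap.
Enumeration cityNames = getCities().keys();
while (cityNames.hasMoreElements()) {
    String city = (String) cityNames.nextElement();
    Weather w = (Weather) getCities().get(city);
    System.out.println(city + " -> " + w);
}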
|
http://docs.oracle.com/cd/E23095_01/Platform.93/ATGProgGuide/html/s0204servicemapproperties01.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Hello,
So I make an instance of a display object defined in an external .swf that has a TextField. I set that TextField to a new format using setTextFormat(format) and then defaultTextFormat = format. When I set the TextField's htmlText property, the defaultTextFormat is suddenly reset to what it was originally.
So something like:
var view:MovieClip = new externalDefinition();
var label:TextField = view.label;
trace(label.defaultTextFormat.fontName)
// Arial
var format:TextFormat = label.getTextFormat();
format.fontName = "Century Gothic";
label.setTextFormat(format);
label.defaultTextFormat = format;
trace(label.defaultTextFormat.fontName)
// Century Gothic
trace(label.htmlText)
// <TEXTFORMAT LEADING="-1"><P ALIGN="LEFT"><FONT FACE="Century Gothic" SIZE="18" COLOR="#271F60" LETTERSPACING="0" KERNING="1">label</FONT></P></TEXTFORMAT>
label.htmlText = "<b>Hello!</b>"
trace(label.defaultTextFormat.fontName)
// Arial
trace(label.htmlText);
// <TEXTFORMAT LEADING="-1"><P ALIGN="LEFT"><FONT FACE="Arial" SIZE="18" COLOR="#271F60" LETTERSPACING="0" KERNING="0"><B>Hello!</B></FONT></P></TEXTFORMAT>
So why does it do this? I understand why setting htmlText might reset the formatting to the default formatting but why does it change defaultTextFormat? Why doesn't it use the defaultTextFormat formatting I changed? Why does it use the original formatting the text field had when it was defined in asset library? How can I tell flash to use my formatting when I change htmlText? Where does it keep the original information stored and how does it revert back to it?
Thanks,
Erik
It does seem to revert to the prior default when setting htmlText - I was able to quickly duplicate this problem. However, it seems to revert to the font the TextField was set to in the .FLA. Is there any reason you are trying to change this at runtime? Maybe you can just switch the font in the external .swf file since this does seem to be a Flash Player bug? Alternatively - just update the TextFormat whenever you set the htmlText property (that's a pain, I know - but just make a function to update the text and it won't be that bad)
With that code you should get a bunch of errors.
Neither TextFormat nor DefaultTextFormat has a Property fontName
only Font has a property fontName.
Hello,
I quickly sketched up an example. Yes, I mean the font property. Sorry. I didn't compile the example, I am using it to demonstrate the problem I am having (partly to make it clear, partly because I was developing on a different computer and could not copy/paste). It seems that Nabren has confirmed the issue however.
And no, I cannot change the font in the .fla, I need to set it at runtime. And yes, I know I can change the font back afterwards, but that is a terrible solution. It is a large framework I am working on and it affects many assets, and it would be terrible to require anyone who uses the framework to do this themselves. If there is no other way I might have to use some kind of scheme for doing this automatically, but I would like to avoid this.
I want to know why it reverts the TextField property and if there is any way that I can prevent it from doing so in the first place. Where is the original font even stored in the TextField? How does it even know what the original font is? If it is somehow possible to change the original data at runtime then that would be very good.
One more thing, I cannot use StyleSheets for these textfields easily. Maybe if there is no other solution I could try to refactor things and get this to work but right now it won't. So my question is specifically about setting the htmlText property in conjunction with defaultTextFormat.
Thanks
According to the API this behaviour is - sad for you, I know - intended; that's at least what I read from this warning in the documentation:
Use the TextField.defaultTextFormat property to apply formatting BEFORE you add text to the TextField, and the setTextFormat() method to add formatting AFTER you add text to the TextField.
Maybe the adding/changing of text triggers the reset and the losing of your defaultTextFormat. Maybe somehow you can empty the TextField, store its content in a var, set the defaultTextFormat, set it to htmlText, and when you're done fill the htmlText with your var?
I've read that and I am pretty sure that is not what is meant in the documentation.
They mean, if you change defaultTextFormat, then any new text added after it is set uses the defaultTextFormat, however text already set does not use it. If you use setTextFormat, current text is set to that new format, but any text added later still uses defaultTextFormat.
Basically setTextFormat is a one time thing, it applies to text that is currently in the TextField, and defaultTextFormat is, as the name implies, the default text formatting for any new text added.
That is what that warning is saying, and yes, that is intended behavior. But that is not the problem for me. I am applying the defaultTextFormat before I am changing the text, not after I am changing the text. However defaultTextFormat is being reset on the condition a) htmlText is used and b) the TextField came from a symbol in Flash Professional layout.
I think this is a bug and unrelated to that warning. I am always wary of calling things a bug in Flash but that appears to be what this is. :/
Thanks
I tested this and can't reproduce your problem:
import flash.text.TextField;
import flash.text.TextFormat;
var tf:TextField = new TextField();
addChild(tf);
//These two are Times New Roman (The system default)
tf.text = "HELLO WORLD";
tf.htmlText = "<b>HELLO WORLD</b>";
//All the following stay Arial, it never reverts back to Times New Roman
var arial:TextFormat = new TextFormat("Arial");
tf.defaultTextFormat = arial;
tf.text = "HELLO WORLD";
tf.htmlText = "<b>HELLO WORLD</b>"
//According to you at this point it should reset to Times New Roman, but it never does
tf.setTextFormat(tf.defaultTextFormat);
What should I change to see the "wrong" defaultTextFormat resetting?
It seems the TextField must be created in the timeline - not ActionScript - for this bug to happen.
It happens when you are using a TextField that is part of an exported library symbol from Flash Professional.
If you want to reproduce the problem, you must create a new FLA file in Flash Professional. Make a library symbol. Add a TextField inside that library symbol. Give the text field an instance name like 'label', and some text. I am using _sans as the font face. Export the library symbol for actionscript in the symbol properties, give it a class name like "LabelExample". In the file's Publish Settings, set it to publish a .swc file. Publish it.
Now include the .swc file in a new project. You should be able to make an instance of LabelExample. Make a new instance, like:
var clip:MovieClip = new LabelExample();
var textField:TextField = clip.label;
Now apply a new defaultTextFormat to this text field. It will work as long as you only change the text property. Then try to change the htmlText property. Unlike a normal new instance of TextField, which continues to apply your defaultTextFormat even when htmlText is set, this one will reset its defaultTextFormat to whatever it was when the symbol was published and ignore the defaultTextFormat you set. I don't have any idea why it would do this; it is inconsistent with how a TextField normally behaves and, by all indications, is supposed to behave. I believe this is a bug in the Flash framework. :/
Thanks
|
http://forums.adobe.com/thread/1152689
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
29 March 2011 09:41 [Source: ICIS news]
SINGAPORE (ICIS)--Taiwan’s Formosa Petrochemical Corp (FPCC) had a small fire at its 1.03m tonne/year No 2 naphtha cracker in Mailiao at 10:00 hours local time (02:00 hours GMT) which was put out quickly but the unit was not shut, a source close to the company said on Tuesday.
“The fire was put out very quickly. Local government officials came to check on the plant, but the plant was not shut,” a source at Formosa Plastics, a sister company to the FPCC, told ICIS.
Earlier a trader said the plant had been shut down.
No further details were immediately available.
|
http://www.icis.com/Articles/2011/03/29/9447820/fire-at-taiwans-fpcc-no-2-cracker-put-out-unit-not-shut-source.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
public class ContextSingletonBeanFactoryLocator extends SingletonBeanFactoryLocator
Variant of SingletonBeanFactoryLocator which creates its internal bean factory reference as an ApplicationContext instead of SingletonBeanFactoryLocator's simple BeanFactory. For almost all usage scenarios, this will not make a difference, since within that ApplicationContext or BeanFactory you are still free to define either BeanFactory or ApplicationContext instances.
The main reason one would need to use this class is if bean post-processing (or other ApplicationContext-specific features) are needed in the bean reference definition itself.
Note: This class uses classpath*:beanRefContext.xml as the default resource location for the bean factory reference definition files. It is neither possible nor legal to share definitions with SingletonBeanFactoryLocator at the same time.
See Also: SingletonBeanFactoryLocator, DefaultLocatorFactory
Fields inherited from SingletonBeanFactoryLocator: logger
Methods inherited from SingletonBeanFactoryLocator: useBeanFactory
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait
protected ContextSingletonBeanFactoryLocator(java.lang.String resourceLocation)
Parameters: resourceLocation - the Spring resource location to use (either a URL or a "classpath:" / "classpath*:" pseudo URL)
public static BeanFactoryLocator getInstance() throws BeansException
Returns an instance which uses the default resource location, classpath*:beanRefContext.xml. All resources returned by the class loader's getResources method with this name will be combined to create a definition, which is just a BeanFactory.
Throws: BeansException - in case of factory loading failure
public static BeanFactoryLocator getInstance(java.lang.String selector) throws BeansException
Returns an instance which uses the specified selector as the resource name. The class loader's getResources method will be called with this value to get all resources having that name. These resources will then be combined to form a definition. In the case where the name uses a Spring "classpath:" prefix, or a standard URL prefix, only one resource file will be loaded as the definition.
Parameters: selector - the location of the resource(s) which will be read and combined to form the definition for the BeanFactoryLocator instance. Any such files must form a valid ApplicationContext definition.
Throws: BeansException - in case of factory loading failure
protected BeanFactory createDefinition(java.lang.String resourceLocation, java.lang.String factoryKey)
The default implementation simply builds a ClassPathXmlApplicationContext.
Overrides: createDefinition in class SingletonBeanFactoryLocator
Parameters: resourceLocation - the resource location for this factory group; factoryKey - the bean name of the factory to obtain
protected void initializeDefinition(BeanFactory groupDef)
Overrides the default method to refresh the ApplicationContext, invoking ConfigurableApplicationContext.refresh().
Overrides: initializeDefinition in class SingletonBeanFactoryLocator
Parameters: groupDef - the factory returned by createDefinition()
protected void destroyDefinition(BeanFactory groupDef, java.lang.String selector)
Overrides the default method to operate on an ApplicationContext, invoking ConfigurableApplicationContext.close().
Overrides: destroyDefinition in class SingletonBeanFactoryLocator
Parameters: groupDef - the factory returned by createDefinition(); selector - the resource location for this factory group
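To make the class description above more concrete, here is a minimal usage sketch (the factory key "mainContext" and the bean name "myService" are hypothetical names that would have to be defined in a beanRefContext.xml file on the classpath):

import org.springframework.beans.factory.access.BeanFactoryLocator;
import org.springframework.beans.factory.access.BeanFactoryReference;
import org.springframework.context.ApplicationContext;
import org.springframework.context.access.ContextSingletonBeanFactoryLocator;

public class LocatorClient {
    public void run() {
        // Uses the default classpath*:beanRefContext.xml definition files.
        BeanFactoryLocator locator = ContextSingletonBeanFactoryLocator.getInstance();
        BeanFactoryReference ref = locator.useBeanFactory("mainContext");
        try {
            // The factory created by this locator is actually an ApplicationContext.
            ApplicationContext context = (ApplicationContext) ref.getFactory();
            Object service = context.getBean("myService");
            // ... use the bean ...
        } finally {
            // Release the reference so the locator can track usage of the shared context.
            ref.release();
        }
    }
}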
|
http://docs.spring.io/spring-framework/docs/3.2.0.RC2/api/org/springframework/context/access/ContextSingletonBeanFactoryLocator.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
23 September 2009 18:04 [Source: ICIS news]
By Linda Naylor
LONDON (ICIS news)--European polypropylene (PP) buyers are sitting tight in the second half of September as lower spot offers enter the market, while producers wait for the October propylene settlement before indicating next month's business, market sources said on Wednesday.
“It’s going down,” said a medium-sized PP buyer in the commodity market. “Well, it’s not actually going down, but it will go down.”
The buyer paid a hefty hike for the small quantities of material he had bought directly from a European producer earlier in September, but reported increased amounts of spot offers for the second half of the month.
“I can feel the nervousness and the restlessness in the market. Traders are looking to have zero stock at the end of the month, and they are offering at lower prices,” added the buyer.
Buyers expected the market to ease after a serious hike in September which lifted European PP prices by €90-100/tonne ($132-147/tonne). Homopolymer injection prices traded above €1,000/tonne FD (free delivered) NWE (northwest Europe).
Net prices have now eased below €950/tonne FD, but these offers came only from traders, not producers.
“They (producers) will hold out as long as possible, that’s only natural,” said a trader, “but they shouldn’t have pushed so hard earlier. They pushed it to a level where they were saying “don’t buy” to their customers.”
Product has tightened, mainly due to cracker cutbacks, restricting propylene availability. This has led to producers hiking prices on buyers in September, forcing market demand to quieten by the middle of the month.
“I have bought most of my October needs already,” said another buyer.
Producers still felt that October would remain firm.
“Stocks are very tight everywhere. Nobody has any spare product so even if demand is not strong, our stock position can take it,” said one.
Some sources expressed concern that high European PP prices would attract imports.
The new, long-awaited Middle Eastern capacities have yet to make a dent in European markets, but buyers felt the current disparity between regions could lead to new imports.
“They are going to end up by coming to
“The natural destination for the new material is to
“It’s a bit tricky at the moment,” admitted another producer, “but we are clearly not interested in reducing prices.”
European PP demand was currently 10% below 2008 levels, but one producer estimated this was closer to minus 5% if exports were taken into account. Exports were particularly strong in May to June of 2008, but there were fewer possibilities to export PP.
“We can still send some specialities abroad,” said the producer, “but you can forget exporting commodities at present.”
Despite talk of lower PP prices in the coming weeks, even the keenest buyers did not expect a rerun of 2008 when net prices more than halved during the last part of the year. The price of homopolymer injection dropped from €1,250/tonne FD NWE at the end of August, to just €600/tonne FD by the year end.
“We won’t see prices collapse like last year,” said a trader. “Brent and naphtha are still relatively strong and we don’t expect to see the distressed sales of last year. The whole system has shown resilience this year.”
To avoid oversupply, PP production was cut back for several months. Cutbacks were also made at the cracker level, mainly to minimise ethylene output, further restricting by-product propylene production.
October propylene was expected to be settled by the end of September. Sources generally expected anything between a rollover and small increase from the current September contract of €778/tonne FD NWE.
PP is a versatile polymer, used in the automotive industry, food packaging, health and hygiene products and carpet manufacture.
PP producers in
($1 = €0.68)
For more on polypropylene
|
http://www.icis.com/Articles/2009/09/23/9249813/europe-polypropylene-on-hold-ahead-of-c3-settlement.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Important: Versions 1 and 2 of the Google Documents List API have been officially deprecated as of April 20, 2012. They will continue to work as per our deprecation policy, but we encourage you to move to the Google Drive API.
In addition to providing some background on the capabilities of the Documents List Data API, this guide provides examples for interacting with the API using the Java client library. For help setting up the client library, see Getting Started with the Google Data Java Client Library. If you're interested in understanding more about the underlying protocol used by the Java client library to interact with the Documents List Data API, please see the protocol tab.
Contents
Audience
This document is intended for developers who want to write client applications using the Google Data Java client library that can interact with Google Documents.
Getting started
Google Documents uses Google Accounts for authentication, so if you have a Google account you are all set. Otherwise, you can create a new account.
For help setting up the client library, see Getting Started with the Google Data Java Client Library. To use the Java client library, you must be running Java 1.5 or higher and include the jar files listed in the Dependencies wiki page. You'll find the classes you need to get started in the
java/lib/gdata-document-1.0.jar
and
java/lib/gdataclient-1.0.jar jar files.
After downloading the client library, you'll find the sample explained in this guide in the
sample/docs subdirectory of the distribution.
A full working copy of this sample is available in the Google Data Java Client Library project in the project hosting section of code.google.com. The sample is located at trunk/java/sample/docs/DocumentListDemo.java in the SVN repository accessible from the Source tab.
The sample allows the user to perform a number of operations which demonstrate how to use the Documents List feed.
To compile the examples in this guide into your own code, you'll need to use the following import statements:
import sample.util.SimpleCommandLineParser; import com.google.gdata.data.BaseEntry; import com.google.gdata.util.AuthenticationException; import com.google.gdata.client.docs.DocsService; import com.google.gdata.util.ServiceException; import com.google.gdata.data.docs.DocumentListEntry; import com.google.gdata.data.docs.DocumentEntry; import com.google.gdata.data.docs.DocumentListFeed; import com.google.gdata.data.PlainTextConstruct; import com.google.gdata.client.DocumentQuery; import java.io.File; import java.io.PrintStream; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.net.URL; import java.net.MalformedURLException; import java.util.List;
The DocsService class represents a client connection (with authentication) to the Documents List service. To use it, you generally follow these steps:
- Create a new DocsService instance, setting your application's name (in the form companyName-applicationName-versionID).
- Set the appropriate credentials.
- Call a method to send the request and receive any results.
Authenticating to the Documents service
The Java client library can be used to work with either public or private feeds. The Documents List feed is private and requires authentication, so you will need to authenticate before performing operations. This can be done via ClientLogin username/password authentication or AuthSub proxy authentication. At this time, Google Documents only offers a private feed for Documents List.
Please see the authentication documentation for more information on AuthSub and ClientLogin.
ClientLogin for "installed" applications
To use ClientLogin (also called "Authentication for Installed
Applications"), invoke the
setUserCredentials
method of
DocsService inherited from
GoogleService, specifying the ID and password of the user on whose behalf your client is sending the query. For example:
DocsService service = new DocsService("Document List Demo"); service.setUserCredentials("jo@gmail.com", password);
For more information about authentication systems, see the Google Account Authentication documentation.
AuthSub for web applications
AuthSub proxy authentication is used by web applications which need to authenticate their users to Google accounts. The website operator does not need access to the username and password for the Google Documents user - only special AuthSub tokens are required.
To acquire an AuthSub token for a given Google Documents user, your application must redirect the user to the AuthSubRequest URL, which prompts them to log into their Google account. You may use the
getRequestUrl method on the
AuthSubUtil object to create this URL:
String requestUrl = AuthSubUtil.getRequestUrl("", "", false, true);
The
getRequestUrl method takes several parameters (corresponding to the query parameters used by the AuthSubRequest handler):
- the next URL — the URL that Google will redirect to after the user logs into their account and grants access; an empty string in the example above
- the scope — the scope of the data access being requested; an empty string in the example above
- a boolean to indicate whether the token will be used in registered mode or not;
false in the example above
- a second boolean to indicate whether the token will later be exchanged for a session token or not;
true in the example above
After constructing the "next" URL, your server-side code redirects the user to the AuthSubRequest URL so that they can log into their Google Docs account and grant access. Google then redirects the user back to the "next" URL with a single-use token appended to the query string.
Your application can recognize which user has authenticated with Google's servers using AuthSub by setting an authentication cookie prior to having them click the AuthSub link, then reading this cookie after the user has been authenticated and returned to your webpage.
The token you retrieve with
getTokenFromReply is always a one-time use token. You can exchange this token for a session token using the AuthSubSessionToken URL, as described in the AuthSub Authentication for Web Applications. For more information about registered applications and private keys, refer to the Using AuthSub with the Google Data API Client Libraries documentation.
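As a rough sketch of those two steps (the httpServletRequest object is a hypothetical servlet request, and null is passed because this sketch assumes unregistered mode with no private key):

// The query string of the URL that Google redirected the user back to.
String queryString = httpServletRequest.getQueryString();

// Extract the single-use token from the AuthSub reply.
String onetimeToken = AuthSubUtil.getTokenFromReply(queryString);

// Exchange the single-use token for a long-lived session token.
String sessionToken = AuthSubUtil.exchangeForSessionToken(onetimeToken, null);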
You can use the session token to authenticate requests to the server by placing the token in the Authorization header, as described in the AuthSub Authentication for Web Applications documentation. To tell the Java client library to automatically send the Authorization header (containing the session token) with each request, you call the
DocsService object's
setAuthSubToken method:
DocsService myService = new DocsService("exampleCo-exampleApp-1"); service.setAuthSubToken(sessionToken, null);
If you're using registered mode, then you provide your private key instead of
null.
After you've called
setAuthSubToken, you can use the standard Data API client library calls to interact with the service, without having to think about the token.
If you want to retrieve information about a session token, you can call the AuthSubUtil.getTokenInfo method.
When your client is done using the session token, it can revoke the token using the AuthSubRevokeToken handler, as described in the AuthSub Authentication for Web Applications documentation.
See Using AuthSub with the Google Data API Client Libraries for more detailed information on using AuthSub in the Java client library.
Retrieving a list of documents
You can get a feed containing a list of the currently authenticated user's documents by sending an authenticated
GET request to the following URL:
The result is a "meta-feed," a feed that lists all of that user's documents; each entry in the feed represents a document (spreadsheet or word processor document) associated with the user. This feed is only accessible using an authentication token.
You can print out a list of the user's documents with the following two functions:
public void showAllDocs() throws IOException, ServiceException { DocumentListFeed feed = service.getFeed(documentListFeedUrl, DocumentListFeed.class); for (DocumentListEntry entry : feed.getEntries()) { printDocumentEntry(entry); } } public void printDocumentEntry(DocumentListEntry doc) { String shortId = doc.getId().substring(doc.getId().lastIndexOf('/') + 1); System.out.println(" -- Document(" + shortId + "/" + doc.getTitle().getPlainText() + ")"); }
The resulting DocumentListFeed object feed represents a response from the server. Among other things, this feed contains a list of DocumentListEntry objects (feed.getEntries()), each of which represents a single document. DocumentListEntry encapsulates the information shown in the protocol document.
Uploading documents
To upload a document to the server you can attach the file to the new DocumentEntry using the setFile method. Once the file is associated with this entry, it's contents will be sent to the server when you insert the DocumentEntry. The example below shows how to upload a file when given the absolute path of a file.
public void uploadFile(String filePath) throws IOException, ServiceException { DocumentEntry newDocument = new DocumentEntry(); File documentFile = new File(filePath); newDocument.setFile(documentFile); // Set the title for the new document. For this example we just use the // filename of the uploaded file. newDocument.setTitle(new PlainTextConstruct(documentFile.getName())); DocumentListEntry uploaded = service.insert(documentListFeedUrl, newDocument); printDocumentEntry(uploaded); }
Note that the above sample sets the name of the document to the file name, but you are free to choose a different name by passing in a string to the
PlainTextConstruct constructor.
Uploading a word processor document
To upload a word processing document, you can use the example code above and specify a word processing file for the
filePath.
Note: The client code uses the file's extension to determine its type.
Uploading a spreadsheet
Uploading a spreadsheet is done the same way as uploading a word processing document. If the file's extension indicates that it is a spreadsheet file, then it will be created as a spreadsheet.
Trashing a document
To delete a document, call the
delete() method on the retrieved
DocumentListEntry:
documentToBeDeleted.delete().
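Putting it together, a small sketch that trashes every document whose title matches a given string might look like this (the title comparison is only for illustration):

public void trashDocumentsByTitle(String title) throws IOException, ServiceException {
    DocumentListFeed feed = service.getFeed(documentListFeedUrl, DocumentListFeed.class);
    for (DocumentListEntry entry : feed.getEntries()) {
        if (entry.getTitle().getPlainText().equals(title)) {
            // delete() moves the document to the trash.
            entry.delete();
        }
    }
}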
In the Java client library, a
DocumentQuery object can be used to construct queries for the Documents List feed. The following code is used in all of the examples below to print out the feed results to the command line.
for (DocumentListEntry entry : feed.getEntries()) { printDocumentEntry(entry); }
Here is the definition for the method
printDocumentEntry:
public void printDocumentEntry(DocumentListEntry doc) { String shortId = doc.getId().substring(doc.getId().lastIndexOf('/') + 1); System.out.println(" -- Document(" + shortId + "/" + doc.getTitle().getPlainText() + ")"); }
Retrieving all word processor documents
A list of only word processor documents can be retrieved by appending the
document category to the Documents List Feed URL as shown below:
DocumentListFeed feed = service.getFeed(new URL(documentListFeedUrl.toString() + "/-/document"), DocumentListFeed.class);
Retrieving all spreadsheets
A list of only spreadsheets can be retrieved by using the
spreadsheet category as follows:
DocumentListFeed feed = service.getFeed(new URL(documentListFeedUrl.toString() + "/-/spreadsheet"), DocumentListFeed.class);
Performing a text query
You can search the content of documents by using a
DocumentQuery in your
query request. A
DocumentQuery object can be used to construct the query URI, with the search term being passed in as a parameter. Here is an example method which queries the documents list for documents which contain the search string:
public void search(String fullTextSearchString) throws IOException, ServiceException { DocumentQuery query = new DocumentQuery(documentListFeedUrl); query.setFullTextQuery(fullTextSearchString); DocumentListFeed feed = service.query(query, DocumentListFeed.class); System.out.println("Results for [" + fullTextSearchString + "]"); for (DocumentListEntry entry : feed.getEntries()) { printDocumentEntry(entry); } }
A more detailed description of how to use URL parameters to build complex queries can be found on the Google Data APIs Reference Guide.
|
https://developers.google.com/google-apps/documents-list/v1/developers_guide_java
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Starting the Synth Project
We’re going to build this from scratch, so in order to not spend much time with UI issues, we are going to make use of AudioKit’s built-in UI elements that are normally used in playgrounds and our example apps.
From within Xcode, create a new project with the single view application template. Give it a product name of “SenderSynth” (no spaces) and make it a Universal Swift application as shown below:
Since Audiobus is most easily installed using Cocoapods, we could also use Cocoapods to install AudioKit, and eventually this tutorial will be updated to do so, but for now, add AudioKit's iOS project from the develop branch as a subproject.
Set Up the Synth (easy!)
Inside the ViewController.swift, first import AudioKit:
import AudioKit
Create the oscillator by adding it as an instance variable,
class ViewController: UIViewController { let oscillator = AKOscillatorBank()
and then use oscillator as AudioKit’s output and start things up:
override func viewDidLoad() { super.viewDidLoad() AudioKit.output = oscillator AudioKit.start() }
User Interface
This tutorial will not use storyboards because they require too much mouse activity to describe, so instead we’ll build the UI programmatically.
Next, build the views:
override func viewDidLoad() { super.viewDidLoad() AudioKit.output = oscillator AudioKit.start() setupUI() } func setupUI() { let stackView = UIStackView() stackView.axis = .vertical stackView.distribution = .fillEqually stackView.alignment = .fill stackView.translatesAutoresizingMaskIntoConstraints = false let adsrView = AKADSRView() stackView.addArrangedSubview(adsrView) let keyboardView = AKKeyboardView() stackView.addArrangedSubview(keyboardView) view.addSubview(stackView) stackView.widthAnchor.constraint(equalToConstant: view.frame.width).isActive = true stackView.heightAnchor.constraint(equalToConstant: view.frame.height).isActive = true stackView.centerXAnchor.constraint(equalTo: self.view.centerXAnchor).isActive = true stackView.centerYAnchor.constraint(equalTo: self.view.centerYAnchor).isActive = true }
While this may seem like a lot of code, it's a lot more reliable than describing how to do this with storyboards.
The last step is to hook up controls. For the keyboard, make the view controller conform to the AKKeyboardDelegate protocol:
class ViewController: UIViewController, AKKeyboardDelegate {
and add these functions:
func noteOn(note: MIDINoteNumber) { oscillator.play(noteNumber: note, velocity: 80) } func noteOff(note: MIDINoteNumber) { oscillator.stop(noteNumber: note) }
and make the view controller the delegate for the keyboard inside viewDidLoad, right after the instantiation:
let keyboardView = AKKeyboardView() keyboardView.delegate = self
If you run your app now, it will respond to the keys, but the ADSR envelope won’t do anything. Replace the ADSR creation step with this one defining a code block:
let adsrView = AKADSRView() { att, dec, sus, rel in self.oscillator.attackDuration = att self.oscillator.decayDuration = dec self.oscillator.sustainLevel = sus self.oscillator.releaseDuration = rel }
Now you’re really done with all the AudioKit stuff. From here, it’s all inter-app audio.
Installing Audiobus
You will need Cocoapods to do this step. Close the project you created, open up a terminal, go to the project's folder, and type:
pod init
Add a pod ‘Audiobus’ line to the Podfile that was just created in this folder:
# Uncomment the next line to define a global platform for your project # platform :ios, '9.0' target 'SenderSynth' do # Comment the next line if you're not using Swift and don't want to use dynamic frameworks use_frameworks! # Pods for SenderSynth pod 'Audiobus' end
Back on the commandline,
> pod install
This should work as follows:
Analyzing dependencies Downloading dependencies Installing Audiobus (2.3.1) Generating Pods project Integrating client project
It may also produce some warning messages which can be ignored for now. There are alternative installation instructions on the Audiobus integration page if you do not want to use Cocoapods.
From now on, we will be working with the SenderSynth.xcworkspace file instead of the project file, so open that in Xcode now.
Add the Audiobus Files
Since Audiobus is not a Swift framework, we need to import the Audiobus header into a bridging header. There are a few ways to create a bridging header, but the way I recommend is to go to your app’s target Build Settings tab and search for “Bridging”. All of the settings will be filtered and you’ll be left with one remaining “Objective-C Bridging Header” setting in which you can paste “$(SRCROOT)/SenderSynth/SenderSynth-BridgingHeader.h” so that it looks like the following screenshot.
Then create a new file, of type “Header File”, name it “SenderSynth-BridgingHeader.h” and add the import line so that it looks like:
#ifndef SenderSynth_BridgingHeader_h #define SenderSynth_BridgingHeader_h #import "Audiobus.h" #endif /* SenderSynth_BridgingHeader_h */
Next grab the Audiobus.swift file from the AudioKit repository and place it in your project, creating a copy.
Back in your ViewController.swift file:
AudioKit.output = oscillator AudioKit.start() Audiobus.start()
Project Settings
You need to enable background audio and inter-app audio. Follow these steps to do so:
Open your app target screen within Xcode by selecting your project entry at the top of Xcode’s Project Navigator, and selecting your app from under the “TARGETS” heading.
Select the “Capabilities” tab.
Underneath the “Background Modes” section, make sure you have “Audio, AirPlay, and Picture in Picture” ticked.
To the right of the “Inter-App Audio” title, turn the switch to the “ON” position – this will cause Xcode to update your App ID with Apple’s “Certificates, Identifiers & Profiles” portal, and create or update an Entitlements file.
Next, set up a launch URL:
Open your app target screen within Xcode by selecting your project entry at the top of Xcode’s Project Navigator, and selecting your app from under the “TARGETS” heading.
Select the “Info” tab.
Open the “URL types” group at the bottom.
Click the “Add” button at the bottom left. Then enter this identifier for the URL: io.audiokit.sendersynth
Enter the new Audiobus URL scheme for your app, generally the name of the app, a dash, and then a version number: “SenderSynth-1.0.audiobus”.
Of course when you do all this for a new app, you’ll need to have your new app’s name in these fields.
Here is one step that is not documented on the Audiobus web site:
- Give your app a bundle name. In the Info tab, you might see grayed out default text in the Identity section’s display name field. Go ahead and type or re-type the app’s name. Here I just added a space to call the app “Sender Synth”.
More Project Settings (for Sender apps)
Create your sender port by adding an “AudioComponents” array to your target's Info.plist (on the Info tab), containing a dictionary with the following entries:
“type” (of type String): set this to “aurg”, which means a “Remote Generator” unit.
“subtype” (of type String): set this to “sndx”, which just means “Sender Example”.
“name” (of type String): set this to “AudioKit: Sender”
“version” (of type Number): set this to an integer. “1” is a good place to start.
In the end your Info.plist should now have the following:
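The original screenshot is not reproduced here, but the resulting entry should look roughly like the sketch below (the exact layout is an assumption, and the manufacturer value "XXXX" is a placeholder for your own four-character manufacturer code):

<key>AudioComponents</key>
<array>
    <dict>
        <key>manufacturer</key>
        <string>XXXX</string> <!-- placeholder: your own four-character code -->
        <key>type</key>
        <string>aurg</string>
        <key>subtype</key>
        <string>sndx</string>
        <key>name</key>
        <string>AudioKit: Sender</string>
        <key>version</key>
        <integer>1</integer>
    </dict>
</array>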
Audiobus and Registration
Perhaps it goes without saying, but you need to have the Audiobus application installed on your device. Next, you'll need to create a user at developer.audiob.us. Then, back in Xcode, build the SenderSynth project, right click on the app in the Products directory, and choose "Show in Finder". In the Finder, right click on the app and "Show Package Contents". Using a web browser, go to the Audiobus Temporary Registration page and drag the Info.plist file from this directory into the web page.
Complete the temporary registration by choosing the SDK version you’re using, adding an icon to the sender port, and adding a title as shown:
You will be given an API Key that will be good for 14 days. Copy the text of the key and create a new document of type “Other / Empty” and call it “Audiobus.txt”. Paste the API Key in that file.
You should also click the “email this to me” link on the Audiobus registration page so that you can open up the email on your device and tap the link to add an entry to your local Audiobus app for the Sender Synth.
When you're ready to submit to the App Store and you have an App Store ID, make sure you get a new, permanent registration with Audiobus.
Build the app to your device
This is pretty straightforward, but you do need to make sure to give your app app icons.
Conclusion (for Sender Apps)
There, that wasn’t so hard was it? The next example is a Filter Effects app. I recommend that you work through that as well, even if you’re not going to build an effects/filter app, just to solidify some of the concepts.
|
http://audiokit.io/audiobus/sender-synth/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
As mentioned, a variable declared as a reference type serves as a reference to an object located on the heap. As illustrated in Figure 3-3, multiple reference variables can be attached to a single object, and some reference variables might not be attached to any object. When a reference type variable is declared without assigning it to an object, its default value is null.
In Visual C#, objects are always created using the new keyword, but the objects aren’t explicitly released as they are in C or C++. The .NET Framework’s garbage collector will automatically free the memory used by unreferenced objects. In some situations, however, you must take explicit action when you’ve finished using an object—for example, when objects hold scarce resources and you can’t wait for the garbage collector to act. Such situations have implications for how you write code, as you’ll see later in this chapter, in the section “Reference Type Lifetime and Garbage Collection.”
An array is a reference type that contains a sequence of variables of a specific type. An array is declared by including index brackets between the type and the name of the array variable, as shown here:
int [] ages;
This example declares a variable named ages that’s an array of int, but it doesn’t attach that reference variable to an actual array object. To do so requires that the array be initialized, as shown here:
int [] ages = {5, 8, 39};
Arrays are reference types that the Visual C# .NET compiler automatically subclasses from the System.Array class. When an array contains value types, the space for the types is allocated as part of the array. When an array contains reference elements, the array contains only references—the objects are allocated elsewhere on the managed heap, as shown in Figure 3-4.
The individual elements of an array are accessed through an index, with 0 always referring to the first element in the array, as follows:
int currentAge = ages[0];
You can determine the number of elements in an array by using the Length property:
int elements = nameArray.Length;
An array can be cloned with the Clone method, which returns a new copy of the array. Because Clone is declared as returning object, you must explicitly cast to the type of the new array, as follows:
string [] secondArray = (string[])nameArray.Clone();
Cloning an array creates a shallow copy. All the array elements are copied into a new array; the objects referenced by array elements aren’t copied.
Clear is a static method in the Array class that removes one or more of the array elements by setting the removed array elements to 0 (for value types) or null (for reference types). The array to be cleared is passed as the first parameter, along with the index of the first element to clear and the number of elements to be removed. To eliminate all the elements of the array, pass 0 as the start element and the array length as the third parameter, as shown here:
Array.Clear(nameArray, 0, nameArray.Length);
Reverse is a static method in the Array class that reverses the order of array elements, operating on either the complete array or just a subset of elements. To reverse an entire array, simply pass the array to the static method, as shown here:
Array.Reverse(nameArray);
To reverse a range within the array, pass the array along with the start element and the number of items to be reversed.
Array.Reverse(nameArray, 0, nameArray.Length);
Sort is a static method that sorts an array. There are several versions of Sort; the simplest version accepts an array as its only parameter and sorts the elements in ascending order.
Array.Sort(nameArray);
Other overloads of the Sort method allow you to exercise more control over the sorting process. The interfaces and methods used when sorting are discussed in more detail in Chapter 8.
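As one sketch of such an overload (the DescendingStringComparer class is just an illustration, not part of the .NET Framework), an IComparer implementation can be passed to Sort to invert the default ordering:

using System;
using System.Collections;

class DescendingStringComparer : IComparer
{
    public int Compare(object x, object y)
    {
        // Swap the arguments to invert the normal ascending string comparison.
        return String.Compare((string)y, (string)x);
    }
}

Passing an instance of the comparer then sorts nameArray from Z to A instead of A to Z:

Array.Sort(nameArray, new DescendingStringComparer());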
The following example manipulates an array containing the names of the month. The array is examined, reversed, sorted, cloned, and finally cleared.
using System; namespace MSPress.CSharpCoreRef.ArrayExample { class ArrayExampleApp { static void Main(string[] args) { string [] months = { "January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"}; Console.WriteLine("The array has a rank of {0}.", months.Rank); int elements = months.Length; Console.WriteLine("There are {0} elements in the array.", elements); Console.WriteLine("Reversing..."); Array.Reverse(months); PrintArray(months); Console.WriteLine("Sorting..."); Array.Sort(months); PrintArray(months); string [] secondArray = (string[])months.Clone(); Console.WriteLine("Cloned Array..."); PrintArray(months); Console.WriteLine("Clearing..."); Array.Clear(months, 0, months.Length); PrintArray(months); } /// <summary> /// Print each element in the names array. /// </summary> static void PrintArray(string[] names) { foreach(string name in names) { Console.WriteLine(name); } } } }
This example uses the foreach statement to iterate over each of the array elements. You can use this type of statement in Visual C# to simplify loop programming when working with arrays. The foreach statement allows you to declare a variable that’s used to represent the currently indexed element in the array. This element will be updated for every loop iteration, with bounds checking performed automatically. Use of the foreach statement with arrays and other types is discussed in more detail in Chapter 7.
Arrays can be multidimensional, meaning that instead of extending in a simple sequence, they extend in multiple dimensions. A multidimensional array can be either rectangular, meaning that array dimensions are consistent, or jagged, meaning that array dimensions can have varying lengths. Some examples of different array types are shown in Figure 3-5.
To create a rectangular multidimensional array, the array declaration simply separates each dimension with commas, as shown here:
string [,] location = new string[5,2]; // 2-dimensional array string [,,] locationsWithZipCode; // 3-dimensional array
If the Length property is used with a multidimensional array, it returns the number of elements in the entire array, not just one of the dimensions. To determine the number of elements in one dimension of a multidimensional array, use the GetLength method, passing the dimension that you want tested, as follows:
int locations = names.GetLength(0); // Length of first dimension
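For example, a short sketch that uses GetLength on each dimension to walk the two-dimensional location array declared above:

for(int row = 0; row < location.GetLength(0); row++)
{
    for(int col = 0; col < location.GetLength(1); col++)
    {
        // Elements of a rectangular array are indexed with a single set of brackets.
        Console.WriteLine(location[row, col]);
    }
}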
A jagged array is declared using multiple index brackets:
double[][] polygons;
Dimensions within a jagged array are allocated just like new arrays of simpler rank.
double[][] shapes = new double[4][]; shapes[0] = new double[1] {10}; // Circle shapes[1] = new double[4] {3, 4, 3, 4}; // Quadrilateral shapes[2] = new double[3] {3, 4, 5}; // Triangle shapes[3] = new double[5] {5, 5, 5, 5, 5}; // Pentagon
Jagged arrays are more flexible because any dimension can be made up of arrays with differing lengths, but allocating and navigating jagged arrays is potentially more difficult. Each edge in a jagged array must be tested for its length and navigated separately. The foreach statement can be used to simplify iteration over a jagged array, as shown here:
static void DisplayShapeInfo(double[][] shapes) { int rankNumber = 0; foreach(double[] shape in shapes) { double totalLength = 0; foreach(double side in shape) { totalLength += side; } Console.WriteLine("Shape {0} perimeter length is {1}", rankNumber, totalLength); ++rankNumber; } }
This example uses the foreach statement to iterate over each dimension of the array. Even though the second dimension of the array is jagged, the foreach statement allows you to write a simpler loop than is possible with other loop constructions.
Visual C# includes a built-in string reference type that simplifies string manipulation. Strings can be created with the new operator. However, because strings are built-in types, the compiler also allows a simpler syntax, in which a string literal provides an initial value to a string reference variable, as shown here:
string name = "Mickey";
String literals are always enclosed within double quotation marks. Backslashes are escaped with a preceding backslash, so the string literal with the value C:\Windows would be written as follows:
string path = "C:\\Windows";
Alternatively, the @ operator can be used to indicate to the compiler that a string literal is to be evaluated without escaping, allowing you to use a simpler syntax.
string path = @"C:\Windows";
Conceptually, a string is an array of characters, so the string class allows you to access individual characters as if you are accessing an array element. This example assigns the third character from name to the char variable c:
char c = name[2];
If name is empty or doesn’t have at least three characters, an IndexOutOfRangeException exception will be thrown.
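For example, a simple length check (assuming name holds a string as in the previous snippet) avoids the exception:

char c = ' ';
if(name != null && name.Length > 2)
{
    // Safe: the string has at least three characters.
    c = name[2];
}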
When testing strings for equality, you have two options: value comparisons and reference comparisons. To test strings for value equality, simply use the Visual C# equality operator (==), as shown here:
if(name1 == name2) { // Strings match. }
This code performs a case-sensitive test for string equality; two string variables that are set to null will always test as equivalent. Strings can also be relatively compared using the relational operators, which will be discussed in detail in Chapter 4.
To determine whether two string references point to the same object, you must explicitly test for equality using the object base class, as shown here:
if((object)path == (object)name) { // The path and name variables refer to the same string object. }
Strings are concatenated using the addition operator (+), just like other built-in types. However, Visual C# strings are immutable—once a string is created, its value never changes. The simple act of concatenating a string results in the creation of a new string object, as shown here:
path += fileName; // Concatenation string fullPath = path + fileName; // Addition
In both examples, adding fileName to path causes a new string to be created. In the first line, the new string object is assigned to the path string reference. In the second example, the path string reference isn’t modified and the new string is assigned to the fullPath string reference.
As you've seen in Chapter 2, Visual C# uses the new keyword to create new instances of reference types. You usually don't take any action to explicitly release an object because the .NET Framework uses a process known as garbage collection to automatically free objects that are no longer in use. For the most part, this process occurs automatically, and you'll rarely be aware of it. However, garbage collection is such a fundamental part of the .NET Framework that understanding the mechanics of garbage collection will help you to write much more efficient code.
The basic ideas behind garbage collection are simple enough, and many types of systems today use some form of garbage collection. All garbage collection mechanisms have one thing in common: they absolve the programmer from the responsibility of tracking memory usage. Although most garbage collectors require that applications occasionally pause to reclaim memory that’s no longer used, the garbage collector used to manage memory in the .NET Framework is highly efficient.
The garbage collector in .NET is known as a generational garbage collector, meaning that allocated objects are sorted into three groups, or generations. Objects that have been most recently allocated are placed in generation zero. Generation zero provides fast access to its objects because the size of generation zero is small enough to fit into the processor’s L2 cache. Objects in generation zero that survive a garbage collection pass are moved into generation one. Objects in generation one that survive a collection pass are moved into generation two. Generation two contains long-lived objects that have survived at least two collection passes.
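As a small illustrative sketch (the exact output depends on the runtime and is not guaranteed), you can watch an object move between generations by querying GC.GetGeneration before and after forced collections:

using System;

class GenerationDemo
{
    static void Main()
    {
        object tracked = new object();
        // Newly allocated objects start in generation zero.
        Console.WriteLine("After allocation: generation {0}", GC.GetGeneration(tracked));

        // Survivors of a generation zero collection are promoted to generation one.
        GC.Collect(0);
        Console.WriteLine("After one collection: generation {0}", GC.GetGeneration(tracked));

        // Survivors of a generation one collection are promoted to generation two.
        GC.Collect(1);
        Console.WriteLine("After two collections: generation {0}", GC.GetGeneration(tracked));
    }
}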
The details of the garbage collection pass will be discussed in the next section, but the basic idea is simple: look for unused objects, remove them from memory, and compact the managed heap to recover the space that the unused objects were occupying. After the heap is compacted, all object references are adjusted to point to the new object locations.
When an object is allocated by a Visual C# program, the managed heap will almost instantly return the memory required for the new object. The reason for the extremely fast allocation performance is that the managed heap is not a complex data structure. The managed heap resembles a simple byte array, with a pointer to the first available memory location, as shown in Figure 3-6.
When a block of memory is requested for an object, the value of the pointer is returned to the caller, and the pointer is adjusted to point to the next available memory location. Allocation of a managed block of memory is only slightly more complex than simply incrementing a pointer. This is one of the performance wins that the managed heap offers you. In an application that doesn’t require much garbage collection, the managed heap will outperform a traditional heap.
Due to this linear allocation method, objects that are allocated together in a Visual C# application tend to be allocated near each other on the managed heap. This arrangement differs significantly from traditional heap allocation, in which memory blocks are allocated based on their size. For example, two objects allocated at the same time might be allocated far apart from each other on the heap, reducing cache performance.
So allocation is very fast, but in a nontrivial program, the memory available in generation zero will eventually be exhausted. Remember, generation zero fits inside the L2 cache and no unused memory is being spontaneously returned. Until now, you’ve seen only how .NET simply increments a pointer when a program needs more memory. Although this approach is very efficient, it obviously can’t continue forever.
When no more memory can be allocated in generation zero, a garbage collection pass will be initiated on generation zero, which removes any objects that are no longer referenced and moves currently used objects into generation one. Promoting referenced objects into generation one frees up generation zero for new allocations. A collection pass for generation zero is the most common type of collection, and it’s very fast. A generation one collection pass is performed if a generation zero collection pass isn’t sufficient to reclaim memory. As a last resort, a generation two collection pass is performed only when collections on generations zero and one haven’t freed enough memory. If no memory is available after a complete collection pass of all generations, an OutOfMemoryException is thrown.
A class can expose a finalizer that executes when the object is destroyed, subject to conditions that we’ll look at later in this section. In Visual C#, the finalizer is a protected method named Finalize, as shown here:
protected void Finalize() { base.Finalize(); // Clean up external resources. }
If you implement a finalizer, you should always declare it as protected. Never expose your finalizer as a public method because it is called only by the .NET Framework. In your finalizer, you must follow a pattern whereby you call the finalizer for your base class before executing any of your own code, as shown in the previous example.
The Visual C# .NET compiler will generate code equivalent to a well-formed finalizer if you declare a destructor, as shown here:
~ResourceConnector() { // Clean up external resources. }
Attempting to declare a destructor and a Finalize method in the same class will result in an error.
Keep in mind that finalizers, and therefore Visual C# destructors, aren’t guaranteed to execute at any specific time, and they might not even execute at all in some circumstances. The .NET Framework can’t guarantee that it will call an object’s destructor or finalizer in a timely fashion because of the way it executes the finalization process. When an object with a finalizer is collected, it’s not immediately removed from memory. Instead, a reference to the object is placed in a special queue that contains objects waiting for finalization.
A dedicated thread is responsible for executing the finalizer for each object in the finalization queue. This thread then marks the object as no longer requiring finalization and removes the object from the finalization queue. Until finalization is complete, the queue’s reference to the object is sufficient to keep the object alive. After finalization has been completed, the object will be reclaimed during the next garbage collection pass.
There’s no guaranteed order for finalization. When an object is finalized, other objects that it refers to might have already been finalized. During finalization, you can safely free external resources such as operating system handles or database connections, but objects on the managed heap shouldn’t be referenced.
Avoid creating finalizers whenever possible. Objects with finalizers are more costly to the .NET Framework than objects without finalizers. They also maintain their existence through at least two garbage collection passes, increasing the memory pressure on the runtime.
Instead of including a finalizer, consider exposing a Dispose method that can be called to properly free your object’s resources, as shown in the following code. Classes that handle files and connections often name this method Close. You can use this method to free any resources that you’re holding, including any managed object references.
public void Dispose() { // Clean up owned resources. }
If you clean up your object using a Dispose or Close method, you should indicate to the runtime that your object no longer requires finalization by calling GC.SuppressFinalize, as shown here:
public void Dispose() { tools.Dispose(); statusBar.Dispose(); dbConnection.Dispose(); GC.SuppressFinalize(this); }
If you’re creating and using objects that have Dispose or Close methods, you should call these methods when you’ve finished using the objects. A good place to make these calls is in a finally clause, which guarantees that the objects are properly handled even if an exception is thrown.
If you implement a Dispose or Close method, you still need to implement a finalizer if your class has external resources that aren’t allocated from the managed heap. In the ideal case, your public Dispose method will properly clean up resources and suppress finalization, resulting in efficient cleanup of your object. If a user of your class forgets to call Dispose, the finalizer might be called and will act as a safety net to ensure that your external resources will be freed.
Creating and properly disposing of an object requires several lines of correctly written code. A mistake in implementing this code can cause errors that are difficult to trace, so the Visual C# language offers a more automated solution based on using the IDisposable interface.
IDisposable defines one method: Dispose. Implementing this interface is the preferred way for a class to advertise that it’s exposing a method for proper object cleanup. A typical implementation of IDisposable is shown here:
public class ResourceConnector: IDisposable { ~ResourceConnector() { Dispose(false); } public void Dispose() { Dispose(true); } protected void Dispose(bool disposing) { if(disposing) { GC.SuppressFinalize(this); // Dispose of managed objects if disposing. } // Release our external resources here. } }
When the Dispose method is called, the object is being properly freed by a client. External resources are freed, and GC.SuppressFinalize is called as an optimization step to prevent finalization. When the object is disposed of explicitly in this way, it's also appropriate for it to dispose of any managed objects it owns.
If the finalizer is called by the .NET Framework, the call to GC.SuppressFinalize isn’t needed because the object is already being finalized. In addition, it’s not appropriate to reference any managed objects because these objects may have been finalized or even collected already.
Classes that implement IDisposable can take advantage of a Visual C# language feature that assists in proper disposal. The using statement works with the IDisposable interface to simplify the process of writing client code that correctly cleans up objects that require finalization. The using statement guarantees that the Dispose method is called even if exceptions occur. Consider the following code:
using(ResourceConnector rc = new ResourceConnector()) { rc.UseResource(); // rc.Dispose called automatically. }
The using statement has two sections: the allocation expression is located between the parentheses, and the code block that follows provides scoping. After the code block has finished executing, the Dispose method will be called for the allocated object.
The Visual C# .NET compiler will generate code equivalent to the following code written without the using statement:
ResourceConnector rc = null; try { rc = new ResourceConnector(); // Use rc here. } finally { if(rc != null) { IDisposable disp = rc as IDisposable; disp.Dispose(); } }
As you can see, the code written with the using statement is more clear and less error-prone.
Multiple objects of a single type can be allocated in a single using expression by simply separating the allocation expressions with commas, as shown here:
using(SolidBrush greenBrush = new SolidBrush(Color.Green), redBrush = new SolidBrush(Color.Red)) { }
The System.GC class contains static methods that are used to interact with the garbage collection mechanism, including methods to initiate a garbage collection pass, to determine an object’s current generation, and to determine the amount of allocated memory.
The most frequently used System.GC method is SuppressFinalize, shown in the following code.
GC.SuppressFinalize(this);
The SuppressFinalize method prevents an object from being finalized and optimizes the performance of the garbage collector. You should call this method when your object is disposed of via a Dispose or Close method.
Collect is used to programmatically initiate a garbage collection pass. There are two versions of Collect. The version with no parameters, shown here, performs a full collection:
GC.Collect();
The more useful version of Collect allows you to specify the generation to be collected. This flexibility enables you to quickly reclaim generation zero if you’ve recently used and freed a number of temporary objects:
GC.Collect(0);
A generation one or two collection pass always includes any lower generations, so calling Collect(2) will cause a full garbage collection pass.
The GetGeneration method will return the current generation of an object passed as a parameter:
int myGeneration = GC.GetGeneration(this);
GetGeneration is useful for tracking objects as they interact with the garbage collector and for auditing memory usage. Like the GetTotalMemory method, however, GetGeneration has limited value in code not dedicated to tracing or debugging.
GetTotalMemory returns the amount of memory allocated on the managed heap. Depending on the parameter you pass to the function, the function’s return value might not be a precise number, due to the way the managed heap works. If unreferenced objects that the garbage collector hasn’t yet reclaimed exist on the heap, GetTotalMemory might return a number that’s larger than the number of currently allocated bytes. To get a more exact number, this method allows you to pass a bool parameter that specifies whether a collection is to be initiated before the measurement, as follows:
long totalMemory = GC.GetTotalMemory(true);
Passing true as a parameter causes a full garbage collection pass before the managed heap’s size is calculated. Passing false simply returns the size of the heap without attempting to collect or compact any unused space.
|
http://etutorials.org/Programming/visual-c-sharp/Part+I+Introducing+Microsoft+Visual+C+.NET/Chapter+3+Value+Types+and+Reference+Types/Understanding+Reference+Types/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Before we can start the discussion of why this exception occurs, it is necessary to understand a little bit about how Windows works with regard to interacting with devices. When a device requires the attention of the processor for the system, it generates an interrupt that causes the processor to give the device attention and handle the device's request. The Windows hardware abstraction layer (HAL) maps the hardware interrupt numbers to software interrupt request levels (IRQLs). IRQLs provide a mechanism that allows the system to prioritize interrupts, where the higher numbered interrupts are processed first (and preempt processing at all lower IRQLs). After the interrupt is handled, the processor returns to the previous (lower) IRQL.
The IRQLs are defined in the wdm.h file in the Windows Driver Development Kit (for me this is \WinDDK\7600.16385.1\inc\ddk\wdm.h).
#if defined(_X86_) // // Interrupt Request Level definitions // #define PASSIVE_LEVEL 0 // Passive release level #define LOW_LEVEL 0 // Lowest interrupt level #define APC_LEVEL 1 // APC interrupt level #define DISPATCH_LEVEL 2 // Dispatcher level #define CMCI_LEVEL 5 // CMCI handler level #define PROFILE_LEVEL 27 // timer used for profiling. #define CLOCK1_LEVEL 28 // Interval clock 1 level - Not used on x86 #define CLOCK2_LEVEL 28 // Interval clock 2 level #define IPI_LEVEL 29 // Interprocessor interrupt level #define POWER_LEVEL 30 // Power failure level #define HIGH_LEVEL 31 // Highest interrupt level #define CLOCK_LEVEL (CLOCK2_LEVEL) #endif #if defined(_AMD64_) // // Interrupt Request Level definitions // #define PASSIVE_LEVEL 0 // Passive release level #define LOW_LEVEL 0 // Lowest interrupt level #define APC_LEVEL 1 // APC interrupt level #define DISPATCH_LEVEL 2 // Dispatcher level #define CMCI_LEVEL 5 // CMCI handler level #define CLOCK_LEVEL 13 // Interval clock level #define IPI_LEVEL 14 // Interprocessor interrupt level #define DRS_LEVEL 14 // Deferred Recovery Service level #define POWER_LEVEL 14 // Power failure level #define PROFILE_LEVEL 15 // timer used for profiling. #define HIGH_LEVEL 15 // Highest interrupt level #endifThere are 3 sets of IRQLs defined (x86, x64, and ia64). I focus on x86 and x64 because these platforms comprise the vast majority of systems. Maintaining an IRQL of 0 (PASSIVE_LEVEL) is one of the main goals of the device drivers and the system because all user mode code is executed at the passive level. The thread scheduler for the system operates at IRQL 2 (DISPATCH_LEVEL) and generates interrupts to change the currently executing thread. Device interrupts occur at level 3 and above (and thus prevent the scheduler from switching threads). A direct implication of this interrupt behavior is that device drivers operating at or above DISPATCH_LEVEL cannot access paged memory (due to the context switch required for the file system driver to pull the memory page from disk) and can only use memory from the non-paged pool.
Bugcheck code 0x0000000A (10 in decimal) occurs when a driver attempts to perform a task that can only be performed at a lower IRQL, such as reading paged memory or making a call that the thread scheduler can preempt. Since the system is at or above dispatch (DPC) level, the thread scheduler cannot force the required context switch, and the system is crashed through a call to KeBugCheckEx (this raises an interrupt at HIGH_LEVEL, 31 on x86 and 15 on x64, and prevents any other device interrupts while crash information is saved to the hard drive and the system is brought down safely). The call to KeBugCheckEx results in a blue screen of death (BSOD).
IRQL_NOT_LESS_OR_EQUAL can often be resolved by updating (or, in some cases, downgrading) the driver that caused the crash. (Note that this error is also very similar to 0xD1, DRIVER_IRQL_NOT_LESS_OR_EQUAL.) In some cases the BIOS may also need to be updated. The following is an example process for debugging this issue.
Note that this is only an example, the driver causing your error will likely be different.
First, open the crash dump with WinDbg. Click here for instructions on opening a crash dump.
Next, execute the !analyze -v debugger command. This outputs information including the bug check code, the stack trace, and the suspected driver. Details on each part of the analyze output are discussed below.
The !analyze -v output starts out with a description of the parameters passed to KeBugCheckEx. In this case, this crash was caused by the driver attempting to read invalid (or paged) memory at DISPATCH_LEVEL:

bitfield :
 bit 0 : value 0 = read operation, 1 = write operation
 bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
Arg4: 83227b06, address which referenced memory

Debugging Details:
------------------

READ_ADDRESS: GetPointerFromAddress: unable to read from 82f7b718
Unable to read MiSystemVaType memory at 82f5b160
 00000004

CURRENT_IRQL: 2

FAULTING_IP:
hal!HalPutScatterGatherList+a
83227b06 8b4104 mov eax,dword ptr [ecx+4]

CUSTOMER_CRASH_COUNT: 1

DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT

BUGCHECK_STR: 0xA

PROCESS_NAME: System

Next we see the address of the trap frame, registers, and stack trace at the time of the crash. For this error, this is less relevant because the driver is well identified. In some cases it may be necessary to dig in further using the driver verifier or by following the trap frames in a full or kernel memory dump to fully rebuild the call stack.
TRAP_FRAME: b8732af0 -- (.trap 0xffffffffb8732af0)
ErrCode = 00000000
eax=88b81740 ebx=88caf280 ecx=00000000 edx=00000000 esi=89cbc420 edi=88a4b5f8
eip=83227b06 esp=b8732b64 ebp=b8732b6c iopl=0 nv up ei pl zr na pe nc
cs=0008 ss=0010 ds=0023 es=0023 fs=0030 gs=0000 efl=00010246
hal!HalPutScatterGatherList+0xa:
83227b06 8b4104 mov eax,dword ptr [ecx+4] ds:0023:00000004=????????
Resetting default scope

LAST_CONTROL_TRANSFER: from 83227b06 to 82e5982b

STACK_TEXT:
b8732af0 83227b06 badb0d00 00000000 8a3fc504 nt!KiTrap0E+0x2cf
b8732b6c 8c80e653 88b81740 00000000 00000000 hal!HalPutScatterGatherList+0xa
b8732b88 92530159 88a4b5f8 00000000 89cbc420 ndis!NdisMFreeNetBufferSGList+0x27
WARNING: Stack unwind information not available. Following frames may be wrong.
b8732be8 9252ca0e 88ca6000 88caf280 0000000a e1k6232+0x16159
b8732c58 9252b093 88ca6000 00000000 b8732ca0 e1k6232+0x12a0e
b8732c74 8c860309 88ca6000 00000000 b8732ca0 e1k6232+0x11093
b8732cb0 8c8416b2 88a4b67c 00a4b668 00000000 ndis!ndisMiniportDpc+0xe2
b8732d10 8c828976 88a4b7d4 00000000 8b43f0e8 ndis!ndisQueuedMiniportDpcWorkItem+0xd0
b8732d50 830216d3 00000002 9f4c557c 00000000 ndis!ndisReceiveWorkerThread+0xeb
b8732d90 82ed30f9 8c82888b 00000002 00000000 nt!PspSystemThreadStartup+0x9e
00000000 00000000 00000000 00000000 00000000 nt!KiThreadStartup+0x19

STACK_COMMAND: kb

Finally, we get some information on the symbols and what the debugger suspects the faulting module is. In this case, it is related to the Intel Wireless card in this laptop (using the driver e1k6232.sys).
FOLLOWUP_IP:
e1k6232+16159
92530159 ?? ???

SYMBOL_STACK_INDEX: 3
SYMBOL_NAME: e1k6232+16159
FOLLOWUP_NAME: MachineOwner
MODULE_NAME: e1k6232
IMAGE_NAME: e1k6232.sys
DEBUG_FLR_IMAGE_TIMESTAMP: 4bbae470
FAILURE_BUCKET_ID: 0xA_e1k6232+16159
BUCKET_ID: 0xA_e1k6232+16159
Followup: MachineOwner
---------

In some cases, it may be desirable to check the BIOS version. Use the !sysinfo machineid debugger command to get information about the BIOS and the make/model of the machine that generated the dump. This dump came from a Dell Latitude E6410 running a BIOS from 2010. In this case, the BIOS is out of date and an update may help resolve the issue that caused this crash.
2: kd> !sysinfo machineid
Machine ID Information [From Smbios 2.6, DMIVersion 38, Size=3634]
BiosMajorRelease = 4
BiosMinorRelease = 6
BiosVendor = Dell Inc.
BiosVersion = A05
BiosReleaseDate = 08/10/2010
SystemManufacturer = Dell Inc.
SystemProductName = Latitude E6410
SystemVersion = 0001
SystemSKU =
BaseBoardManufacturer = Dell Inc.
BaseBoardProduct = 0667CC
BaseBoardVersion = A01

The dates of all of the drivers loaded at the time of the crash can be determined using the lm n t debugger command. More information about a specific driver can be gained using the lm vm drivername command. This can be helpful to identify whether an old antivirus or an older driver might be contributing to the crash.
2: kd> lm n t
start    end        module name
80bac000 80bb4000   kdcom     kdcom.dll    Mon Jul 13 19:08:58 2009 (4A5BDAAA)
82e13000 83223000   nt        ntkrpamp.exe Fri Jun 18 21:55:24 2010 (4C1C3FAC)
83223000 8325a000   hal       halmacpi.dll Mon Jul 13 17:11:03 2009 (4A5BBF07)
...
924e1000 9251a000   dxgmms1   dxgmms1.sys  Mon Nov 01 20:37:04 2010 (4CCF7950)
9251a000 92553000   e1k6232   e1k6232.sys  Tue Apr 06 01:36:16 2010 (4BBAE470)
92553000 9259e000   USBPORT   USBPORT.SYS  Mon Jul 13 17:51:13 2009 (4A5BC871)
...
2: kd> lmvm usbport
start    end        module name
92553000 9259e000   USBPORT    (deferred)
    Mapped memory image file: c:\symbols\USBPORT.SYS\4A5BC8714b000\USBPORT.SYS
    Image path: \SystemRoot\system32\DRIVERS\USBPORT.SYS
    Image name: USBPORT.SYS
    Timestamp:        Mon Jul 13 17:51:13 2009 (4A5BC871)
    CheckSum:         0004BC3B
    ImageSize:        0004B000
    File version:     6.1.7600.16385
    Product version:  6.1.7600.16385
    FileVersion:      6.1.7600.16385 (win7_rtm.090713-1255)
    FileDescription:  USB 1.1 & 2.0 Port Driver
    LegalCopyright:   © Microsoft Corporation. All rights reserved.

Getting further help
If the debugger output references the NT kernel (ntoskrnl.exe, ntkrnlpa.exe, ntkrnlmp.exe, and ntkrnlpamp.exe), the driver verifier may be necessary to further pinpoint the problem.
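For reference, Driver Verifier is started from an elevated command prompt; a typical invocation looks something like the following (the driver name here is just the one from this example, so substitute the module you suspect), and the second command turns verification back off after testing. A reboot is required for the settings to take effect.

verifier /standard /driver e1k6232.sys
verifier /reset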
After analyzing the dump, if you have not been able to solve your issue, you can seek help from the hardware vendor, the forums, or directly from Microsoft. The hardware vendor is the preferred choice of the three. If the vendor determines that there is a bug in the driver, they may ask for a kernel/full memory dump to help them analyze the problem.
If you seek help in the forums, then be sure to upload the dumps for your system in an accessible location and post a link to the thread that you create. See this post for more details. Users in the forums can rarely tell you more information than is in this post.
Microsoft support may not be helpful unless the problem is related to a Microsoft device driver or a kernel bug, and in most cases they will simply tell you that it is not a Microsoft bug. Microsoft support is also relatively expensive.
Best of luck!
Have an idea for something that you'd like to see explored? Leave a comment or send an e-mail to razorbackx_at_gmail<dot>com
References:
Mark Russinovich, David Solomon, and Alex Ionescu. Windows Internals: Covering Windows Server 2008 and Windows Vista. 5th edition. Microsoft Press
Bug Check 0xA: IRQL_NOT_LESS_OR_EQUAL
Microsoft Windows Driver Development Kit
Can you please make also a version for ppl with less IQ xD I dont really get it, please just write it like:
1. Download bla bla bla
2. Run it
3. Open this file
Etc. with pics please
Ive got the bluescreen problem to, but mine closes when i try to update the game called
"aion".
Hope for more help please :)
I agree with Ranger.
I just got this error yesterday, and now it won't stop. Every time my computer starts, in safe or normal mode, I get this screen. I tried several different ways to try to fix it, and then finally turned to the internet when none of those worked.
I'm sure this page would be helpful if I knew more about computers, but I'm not getting much out of it. It seems to be over my head.
Received a BSOD and found your website. It's awesome! I agree with the other posters that you go deep into detail. Thank you for that! If we really want to solve a BSOD on our own you make it much more feasible.
|
http://mikemstech.blogspot.com/2011/11/how-to-troubleshoot-blue-screen-0xa.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
I've been stuck on an error that I'm not completely sure how to solve.
My application is made in Angular2 and runs completely in a webworker largely based on this tutorial
My first feature was an implementation of socket.io which is working perfectly (also with observables etc.), but now I want to use the Http service of Angular2, and I get the following error:
My code for the service is shown below, and the error arises when I call validateAccessToken (I have to add the .js to my imports, otherwise I get a 404 on the files within the webworker):
import { Injectable } from '@angular/core';
import { Http, Headers, RequestOptions, Response } from "@angular/http";
import { environment } from "../../../environments/environment.js";
import { Observable } from "rxjs/Observable.js";
import 'rxjs/add/operator/toPromise.js';
import 'rxjs/add/operator/map.js';
@Injectable()
export class AuthService {
headers: Headers;
options: RequestOptions;
url: string;
constructor(private http:Http) {
this.url = environment.authServerUrl;
}
validateAccessToken(token) {
return this.http.get(this.url)
.map(this.extractData)
.catch(this.handleError);
};
extractData(response: Response) {...}
handleError(error: any) {...}
}
CookieXSRFStrategy is enabled by default by Angular2 and is used by Http. The webworker does not have DOM access, so it cannot read the cookie that should be inserted into the HTTP headers, and therefore it throws the error Uncaught (in promise): not implemented.
You should implement your own XSRFStrategy replacement which at least does not throw this error ;)
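A minimal sketch of such a replacement (class and module names here are illustrative, not from the original question) is to provide a no-op XSRFStrategy so that no cookie or DOM access is attempted inside the webworker:

import { NgModule } from '@angular/core';
import { HttpModule, XSRFStrategy, Request } from '@angular/http';

// No-op strategy: leaves the request untouched, so no cookie lookup happens
export class NoopXsrfStrategy implements XSRFStrategy {
  configureRequest(req: Request): void { /* intentionally empty */ }
}

@NgModule({
  imports: [HttpModule],
  providers: [
    // Overrides the CookieXSRFStrategy registered by HttpModule
    { provide: XSRFStrategy, useValue: new NoopXsrfStrategy() }
  ]
})
export class WorkerHttpModule {}

If your API does rely on XSRF tokens, you would instead implement configureRequest to set the header from a value passed into the worker.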
|
https://codedump.io/share/RmhrUl5DaP1Q/1/angular-2-webworkers-http-uncaught-in-promise-not-implemented
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
" Vim syntax file " Language: Java Properties resource file (*.properties[_*]) " Maintainer: Simon Baldwin <simonb@sco.com> " Last change: 26th Mar 2000 " ============================================================================= " Optional and tuning variables: " jproperties_lines " ----------------- " Set a value for the sync block that we use to find long continuation lines " in properties; the value is already large - if you have larger continuation " sets you may need to increase it further - if not, and you find editing is " slow, reduce the value of jproperties_lines. if !exists("jproperties_lines") let jproperties_lines = 256 endif " jproperties_strict_syntax " ------------------------- " Most properties files assign values with "id=value" or "id:value". But, " strictly, the Java properties parser also allows "id value", "id", and " even more bizarrely "=value", ":value", " value", and so on. These latter " ones, however, are rarely used, if ever, and handling them in the high- " lighting can obscure errors in the more normal forms. So, in practice " we take special efforts to pick out only "id=value" and "id:value" forms " by default. If you want strict compliance, set jproperties_strict_syntax " to non-zero (and good luck). if !exists("jproperties_strict_syntax") let jproperties_strict_syntax = 0 endif " jproperties_show_messages " ------------------------- " If this properties file contains messages for use with MessageFormat, " setting a non-zero value will highlight them. Messages are of the form " "{...}". Highlighting doesn't go to the pains of picking apart what is " in the format itself - just the basics for now. if !exists("jproperties_show_messages") let jproperties_show_messages = 0 endif " ============================================================================= " For version 5.x: Clear all syntax items " For version 6.x: Quit when a syntax file was already loaded if version < 600 syntax clear elseif exists("b:current_syntax") finish endif " switch case sensitivity off syn case ignore " set the block exec "syn sync lines=" . jproperties_lines " switch between 'normal' and 'strict' syntax if jproperties_strict_syntax != 0 " an assignment is pretty much any non-empty line at this point, " trying to not think about continuation lines syn match jpropertiesAssignment "^\s*[^[:space:]]\+.*$" contains=jpropertiesIdentifier " an identifier is anything not a space character, pretty much; it's " followed by = or :, or space or tab. Or end-of-line. 
syn match jpropertiesIdentifier "[^=:[:space:]]*" contained nextgroup=jpropertiesDelimiter " treat the delimiter specially to get colours right syn match jpropertiesDelimiter "\s*[=:[:space:]]\s*" contained nextgroup=jpropertiesString " catch the bizarre case of no identifier; a special case of delimiter syn match jpropertiesEmptyIdentifier "^\s*[=:]\s*" nextgroup=jpropertiesString else " here an assignment is id=value or id:value, and we conveniently " ignore continuation lines for the present syn match jpropertiesAssignment "^\s*[^=:[:space:]]\+\s*[=:].*$" contains=jpropertiesIdentifier " an identifier is anything not a space character, pretty much; it's " always followed by = or :, and we find it in an assignment syn match jpropertiesIdentifier "[^=:[:space:]]\+" contained nextgroup=jpropertiesDelimiter " treat the delimiter specially to get colours right; this time the " delimiter must contain = or : syn match jpropertiesDelimiter "\s*[=:]\s*" contained nextgroup=jpropertiesString endif " a definition is all up to the last non-\-terminated line; strictly, Java " properties tend to ignore leading whitespace on all lines of a multi-line " definition, but we don't look for that here (because it's a major hassle) syn region jpropertiesString start="" skip="\\$" end="$" contained contains=jpropertiesSpecialChar,jpropertiesError,jpropertiesSpecial " {...} is a Java Message formatter - add a minimal recognition of these " if required if jproperties_show_messages != 0 syn match jpropertiesSpecial "{[^}]*}\{-1,\}" contained syn match jpropertiesSpecial "'{" contained syn match jpropertiesSpecial "''" contained endif " \uABCD are unicode special characters syn match jpropertiesSpecialChar "\\u\x\{1,4}" contained " ...and \u not followed by a hex digit is an error, though the properties " file parser won't issue an error on it, just set something wacky like zero syn match jpropertiesError "\\u\X\{1,4}" contained syn match jpropertiesError "\\u$"me=e-1 contained " other things of note are the \t,r,n,\, and the \ preceding line end syn match jpropertiesSpecial "\\[trn\\]" contained syn match jpropertiesSpecial "\\\s" contained syn match jpropertiesSpecial "\\$" contained " comments begin with # or !, and persist to end of line; put here since " they may have been caught by patterns above us syn match jpropertiesComment "^\s*[#!].*$" contains=jpropertiesTODO syn keyword jpropertiesTodo TODO FIXME XXX contained " Define the default highlighting. " For version 5.7 and earlier: only when not done already " For version 5.8 and later: only when an item doesn't have highlighting yet if version >= 508 || !exists("did_jproperties_syntax_inits") if version < 508 let did_jproperties_syntax_inits = 1 command -nargs=+ HiLink hi link <args> else command -nargs=+ HiLink hi def link <args> endif HiLink jpropertiesComment Comment HiLink jpropertiesTodo Todo HiLink jpropertiesIdentifier Identifier HiLink jpropertiesString String HiLink jpropertiesExtendString String HiLink jpropertiesCharacter Character HiLink jpropertiesSpecial Special HiLink jpropertiesSpecialChar SpecialChar HiLink jpropertiesError Error delcommand HiLink endif let b:current_syntax = "jproperties" " vim:ts=8
|
http://opensource.apple.com/source/vim/vim-44/runtime/syntax/jproperties.vim
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
In this section we will discuss about the PipedReader in Java.
java.io.PipedReader reads characters from a pipe. java.io.PipedWriter writes characters to the output stream, which are then read by the PipedReader class. So, the input for the PipedReader is a stream which is written by the PipedWriter. To read characters from the pipe, the PipedReader is required to connect with the PipedWriter. A connection can be established by the connect() method provided in this class, or by the constructors PipedReader(PipedWriter pw) or PipedReader(PipedWriter pw, int pipeSize). Details of the constructors and methods are given below.
Constructors of PipedReader
Methods in PipedReader
Example
This is a very simple example which demonstrates how to use the PipedReader class to read a stream from a pipe. As discussed above, to read from the pipe the characters must first be available in it, so I have first used the PipedWriter class to write a stream into the pipe. Then I have read the characters from the piped stream using the read() method.
Source Code
PipedReaderExample.java
import java.io.PipedReader; import java.io.PipedWriter; import java.io.IOException; public class PipedReaderExample { public static void main(String[] args) { PipedWriter pw = null; PipedReader pr = null; try { pw = new PipedWriter(); pr = new PipedReader(); pr.connect(pw); System.out.println(); // Write by the PipedWriter pw.write(82); pw.write(79); pw.write(83); pw.write(69); pw.write(73); pw.write(78); pw.write(68); pw.write(73); pw.write(65); // read from the PipedReader int c; while((c = pr.read() ) != -1) { System.out.print((char) c); } } catch (IOException ioe) { System.out.println(ioe); } finally { if(pr != null) { try { pr.close(); } catch(Exception e) { System.out.println(e); } } if(pw != null) { try { pw.close(); } catch(Exception e) { System.out.println(e); } } }// close finally }// close main }// close class
Output
When you execute the above example, you will get output as follows:
|
http://www.roseindia.net/java/example/java/io/pipedreader.shtml
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
Walkthrough: Creating an Explorer Style Interface with the ListView and TreeView Controls Using the Designer
One of the benefits of Visual Studio 2005 is the ability to create professional-looking Windows Forms applications in a short amount of time. A common scenario is creating a user interface (UI) with ListView and TreeView controls that resembles the Windows Explorer feature of Windows operating systems. Windows Explorer displays a hierarchical structure of the files and folders on a user's computer.
To create the form containing a ListView and TreeView control
On the File menu, point to New, and then click Project.
In the New Project dialog box, do the following:
In the Project Types pane, choose either Visual Basic Projects or Visual C# Projects.
In the Templates pane, choose Windows Application.
Click OK. A new Windows Forms project is created.
Add a SplitContainer control to the form and set its Dock property to Fill.
Add an ImageList named imageList1 to the form and use the property browser to add two images: a folder and a document image, in that order.
Add a TreeView control named treeview1 to the form, and position it on the left side of the SplitContainer control. In the property browser for treeView1 do the following:
Add a ListView control named listView1 to the form, and position it on the right side of the SplitContainer control. In the property browser for listview1 do the following:
Set the Dock property to Fill.
Set the View property to Details.
Open the ColumnHeader Collection Editor by clicking the ellipses (
) in the Columns property. Add three columns and set their Text property to Name, Type, and Last Modified, respectively. Click OK to close the dialog box.
Set the SmallImageList property to imageList1.
Implement the code to populate the TreeView with nodes and subnodes. The example code reads from the file system and requires the existence of two icons, folder.ico and doc.ico that were previously added to imageList1.
private void PopulateTreeView()
{
    TreeNode rootNode;

    DirectoryInfo info = new DirectoryInfo(@"C:\Documents and Settings");
    if (info.Exists)
    {
        rootNode = new TreeNode(info.Name);
        rootNode.Tag = info;
        GetDirectories(info.GetDirectories(), rootNode);
        treeView1.Nodes.Add(rootNode);
    }
}

private void GetDirectories(DirectoryInfo[] subDirs, TreeNode nodeToAddTo)
{
    TreeNode aNode;
    DirectoryInfo[] subSubDirs;
    foreach (DirectoryInfo subDir in subDirs)
    {
        // Create a node for each subdirectory and recurse into it
        aNode = new TreeNode(subDir.Name, 0, 0);
        aNode.Tag = subDir;
        subSubDirs = subDir.GetDirectories();
        if (subSubDirs.Length != 0)
        {
            GetDirectories(subSubDirs, aNode);
        }
        nodeToAddTo.Nodes.Add(aNode);
    }
}
Since the previous code uses the System.IO namespace, add the appropriate using or import statement at the top of the form.
Call the set-up method from the previous step in the form's constructor or Load event-handling method.
Handle the NodeMouseClick event for treeview1, and implement the code to populate listview1 with a node's contents when a node is clicked.
void treeView1_NodeMouseClick(object sender, TreeNodeMouseClickEventArgs e) { TreeNode newSelected = e.Node; listView1.Items.Clear(); DirectoryInfo nodeDirInfo = (DirectoryInfo)newSelected.Tag; ListViewItem.ListViewSubItem[] subItems; ListViewItem item = null; foreach (DirectoryInfo dir in nodeDirInfo.GetDirectories()) { item = new ListViewItem(dir.Name, 0); subItems = new ListViewItem.ListViewSubItem[] {new ListViewItem.ListViewSubItem(item, "Directory"), new ListViewItem.ListViewSubItem(item, dir.LastAccessTime.ToShortDateString())}; item.SubItems.AddRange(subItems); listView1.Items.Add(item); } foreach (FileInfo file in nodeDirInfo.GetFiles()) { item = new ListViewItem(file.Name, 1); subItems = new ListViewItem.ListViewSubItem[] { new ListViewItem.ListViewSubItem(item, "File"), new ListViewItem.ListViewSubItem(item, file.LastAccessTime.ToShortDateString())}; item.SubItems.AddRange(subItems); listView1.Items.Add(item); } listView1.AutoResizeColumns(ColumnHeaderAutoResizeStyle.HeaderSize); }
If you are using C#, make sure you have the NodeMouseClick event associated with its event-handling method.
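For reference, the association is normally generated by the designer; done by hand in the form's constructor it would look roughly like this (handler name matching the method above):

this.treeView1.NodeMouseClick +=
    new System.Windows.Forms.TreeNodeMouseClickEventHandler(this.treeView1_NodeMouseClick);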
You can now test the form to make sure it behaves as expected.
To test the form
Press F5 to run the application.
You will see a split form containing a TreeView control that displays a directory labeled c:\Documents and Settings on the left side, and a ListView control on the right side with three columns. You can traverse the TreeView by selecting directory nodes, and the ListView is populated with the contents of the selected directory.
This application gives you an example of a way you can use TreeView and ListView controls together. For more information on these controls, see the following topics:
|
http://msdn.microsoft.com/en-us/library/ms171645(v=vs.80).aspx
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
07 December 2011 13:00 [Source: ICIS news]
LONDON (ICIS)--Germany’s chemical production is forecast to increase 1.0% year on year in 2012 as growth weakens compared with 2011, the country’s chemical producers’ trade group, Verband der Chemischen Industrie (VCI), said on Wednesday.
In 2011,
“It is difficult to make an accurate forecast for the coming 12 months,” said Klaus Engel, the president of VCI and CEO of specialty chemicals major Evonik.
Engel pointed to the unresolved government debt crises in the eurozone and the
VCI hopes a planned summit of EU leaders in
Meanwhile, Germany-based chemicals producers are facing additional uncertainties from rising electricity costs because of the country’s renewable energy law, the Erneuerbare-Energien-Gesetz (EEG), and emissions trading.
In 2011 alone, the chemical industry’s costs from the EEG and related legislation added up to €1.3bn ($1.7bn), Engel said.
A further challenge is
Engel said that, over the winter months,
Nevertheless, Engel said he would not suggest there was a “crisis mood” in the chemical industry.
“[
Companies’ assessment of their overall business situation was on a level with the strong years of 2006 and 2007, he added.
“There are no recognisable signs in the real economy, that would, from our perspective, justify a crisis scenario,” said Engel.
As for sales and prices in 2012, VCI expects prices to rise 1.0% in 2012 and sales to increase 2.0%, he said.
In 2011,
VCI said naphtha prices should “remain largely stable” in 2012, given the moderate growth forecasts for the global economy in coming months.
The group expects oil prices to range between $100/bbl and $120/bbl in 2012.
(
|
http://www.icis.com/Articles/2011/12/07/9514583/germany-vci-forecasts-2012-chemical-production-growth-at.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
25 May 2012 16:26 [Source: ICIS news]
LONDON (ICIS)--BP is no longer considering the full-scale 4,000km Nabucco pipeline as an option for transporting gas from
Only the scaled-down 1,300-km Nabucco West pipeline, which would run from
The original EU-backed Nabucco project had been ruled out because there was no clear prospect of it achieving economic viability by securing gas supplies other than what was available from Shah Deniz II, said BP, which leads the Shah Deniz II consortium.
In mid-May, the Turkish energy and natural resources ministry suggested that Nabucco West would be a success if it linked up with the Trans-Anatolian Pipeline (TANAP) to be built by
The European Commission said it believed the Nabucco consortium was keeping its original proposal on the table for further consideration.
The EU has backed Nabucco as an important part of its plan to reduce member states' reliance on Russian gas.
However, the consortium has not yet been able to announce any deals for gas
|
http://www.icis.com/Articles/2012/05/25/9564248/bp-rules-out-the-full-scale-nabucco-pipeline-proposal.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
11 October 2012 04:26 [Source: ICIS news]
SINGAPORE (ICIS)--
“The cracker got restarted today, just this morning. We plan to run the cracker to a maximum rate,” said the source, without specifying the exact rate.
JX Nippon previously planned to restart the cracker on around 8 or 10 October. The cracker was initially scheduled to resume operations on 28 September, after scheduled maintenance that started on 13 August.
Separately, JX Nippon has no plans to conduct a cracker turnaround next year following the recent major maintenance,
|
http://www.icis.com/Articles/2012/10/11/9602919/japans-jx-nippon-oil-restarts-460000-tonneyear-naphtha-cracker.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
d succeed but winds up being a NOP because UFS incorrectly believes that the case represents renaming a file to itself when it doesn't. Both files remain in existence when the source file should have been removed. Detect the condition and issue VOP_NREMOVE instead of VOP_NRENAME when the source and target represent different namespaces but wind up pointing to the same physical vnode. Reported-by: Tomaž Borštnar <tomaz.borstnar@xxxxxxxx> Revision Changes Path 1.69 +15 -3 src/sys/kern/vfs_syscalls.c
|
http://leaf.dragonflybsd.org/mailarchive/commits/2005-08/msg00253.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
This section describes the authentication options of ApacheDS 1.5. Anonymous and simple binds are supported, as well as SASL mechanisms. Configuring and using the first two of them is described below with the help of examples.
Authentication is the process of determining whether someone (or something) in fact is what he/she/it asserts to be.
Within ApacheDS you will likely want to authenticate clients in order to check whether they are allowed to read, add or manipulate certain data stored within the directory. The latter, i.e. whether an authenticated client is permitted to do something, is deduced during authorization.
Quite often, the process of authentication is delegated to a directory service by other software components. Because in doing so, authentication data (e.g. username, password) and authorization data (e.g. group relationships) are stored and managed centrally in the directory, and all connected software solutions benefit from it. The integration sections of this guide provide examples for Apache Tomcat, Apache HTTP servers, and others.
ApacheDS 1.5 supports simple authentication and anonymous binds while storing passwords within userPassword attributes in user entries. Passwords can be stored in clear text or one-way encrypted with a hash algorithm like MD5 or SHA1. Since version 1.5.1, SASL mechanisms are supported as well. We start with simple binds.
Authentication via simple bind is widely used. The method is supported by ApacheDS 1.5 for all person entries stored within any partition, if they contain a password attribute. How does it work? An LDAP client provides the DN of a user entry and a password to the server, the parameters of the bind operation. ApacheDS checks whether the given password is the same as the one stored in the userpassword attribute of the given entry. If not, the bind operation fails (LDAP error code 49, LDAP_INVALID_CREDENTIALS), and the user is not authenticated.
Assume this entry from the Seven Seas partition is stored within the directory (only a fragment with the relevant attributes is shown).
dn: cn=Horatio Hornblower,ou=people,o=sevenSeas objectclass: person objectclass: organizationalPerson cn: Horatio Hornblower sn: Hornblower userpassword: pass ...
In the following search command, a user tries to bind with the given DN (option -D) but a wrong password (option -w). The bind fails and the command terminates without performing the search.
$ ldapsearch -h zanzibar -p 10389 -D "cn=Horatio Hornblower,ou=people,o=sevenSeas" \\ -w wrong -b "ou=people,o=sevenSeas" -s base "(objectclass=*)" ldap_simple_bind: Invalid credentials ldap_simple_bind: additional info: Bind failed: null
If the user provides the correct password during the call of the ldapsearch command, the bind operation succeeds and the seach operation is performed afterwards.
$
Using JNDI, authentication via simple binds is accomplished by appropriate configuration. One option is to provide the parameters in a Hashtable object like this
import java.util.Hashtable; import javax.naming.Context; import javax.naming.InitialContext; import javax.naming.NamingEnumeration; import javax.naming.NamingException; public class SimpleBindDemo { public static void main(String[] args) throws NamingException { if (args.length < 2) { System.err.println("Usage: java SimpleBindDemo <userDN> <password>"); System.exit(1); } Hashtable env = new Hashtable(); env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory"); env.put(Context.PROVIDER_URL, "ldap://zanzibar:10389/o=sevenSeas"); env.put(Context.SECURITY_AUTHENTICATION, "simple"); env.put(Context.SECURITY_PRINCIPAL, args[0]); env.put(Context.SECURITY_CREDENTIALS, args[1]); try { Context ctx = new InitialContext(env); NamingEnumeration enm = ctx.list(""); while (enm.hasMore()) { System.out.println(enm.next()); } ctx.close(); } catch (NamingException e) { System.out.println(e.getMessage()); } } }
If the DN of a user entry and the fitting password are provided as command line arguments, the program binds successfully and performs a search:
$ java SimpleBindDemo "cn=Horatio Hornblower,ou=people,o=sevenSeas" pass ou=people: javax.naming.directory.DirContext ou=groups: javax.naming.directory.DirContext
On the other hand, providing an incorrect password results in a failed bind operation. JNDI maps it to a NamingException:
$ java SimpleBindDemo "cn=Horatio Hornblower,ou=people,o=sevenSeas" quatsch [LDAP: error code 49 - Bind failed: null]
In real life, you obviously want to separate most of the configuration data from the source code, for instance with the help of the jndi.properties file.
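As an illustration (the file contents below are an assumption derived from the Hashtable values above, not part of the original guide), a jndi.properties file on the classpath could contain:

java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory
java.naming.provider.url=ldap://zanzibar:10389/o=sevenSeas
java.naming.security.authentication=simple

The principal and credentials would then be the only values supplied programmatically.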
If passwords are stored in the directory in clear like above, the administrator (uid=admin,ou=system) is able to read them. This holds true even if authorization is enabled. The passwords would also be visible in exported LDIF files. This is often unacceptable.
ApacheDS does also support simple binds, if user passwords are stored one-way encrypted. An LDAP client, which creates user entries, applies a hash-function (SHA for instance) to the user passwords beforehand, and stores the users with these fingerprints as userpassword values (instead of the clear text values), for instance:
dn: cn=Horatio Hornblower,ou=people,o=sevenSeas objectclass: person objectclass: organizationalPerson cn: Horatio Hornblower sn: Hornblower userpassword: {SHA}nU4eI71bcnBGqeO0t9tXvY1u5oQ= ...
The value "{SHA}nU4eI71bcnBGqeO0t9tXvY1u5oQ=" means that SHA (Secure Hash Algorithm) was applied to the password, and "nU4eI71bcnBGqeO0t9tXvY1u5oQ=" was the result (Base-64 encoded). Please note that it is not possible to calculate the source ("pass" in our case) back from the result. This is why it is called one-way encrypted – it is rather difficult to decrypt it. One may guess many times, calculate the hash values (the algorithms are public) and compare the result. But this would take a long time, especially if you choose a more complex password than we did ("pass").
With some lines of code, it is quite easy to accomplish this task programmatically in Java:
import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import sun.misc.BASE64Encoder; public class DigestDemo { public static void main(String[] args) throws NoSuchAlgorithmException { String password = "pass"; String algorithm = "SHA"; // Calculate hash value MessageDigest md = MessageDigest.getInstance(algorithm); md.update(password.getBytes()); byte[] bytes = md.digest(); // Print out value in Base64 encoding BASE64Encoder base64encoder = new BASE64Encoder(); String hash = base64encoder.encode(bytes); System.out.println('{'+algorithm+'}'+hash); } }
The output is "{SHA}nU4eI71bcnBGqeO0t9tXvY1u5oQ=".
Another option is to use command line tools to calculate the hash value; the OpenSSL project provides such stuff. Furthermore many UI LDAP tools allow you to store passwords automatically encrypted with the hash algorithm of your choice. See below Apache Directory Studio as an example. The dialog automatically shows up if a userPassword attribute is to be manipulated (added, changed).
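For instance, assuming the OpenSSL command line tools are installed, a Base64-encoded SHA-1 digest can be produced roughly like this (a sketch; the resulting value matches the one used above):

$ echo -n "pass" | openssl dgst -sha1 -binary | openssl base64
nU4eI71bcnBGqeO0t9tXvY1u5oQ=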
From an LDAP client point of view, the behavior during authentication is the same as with passwords stored in clear. During a simple bind, a client sends DN and password (unencrypted, i.e. no hash algorithm applied) to the server. If ApacheDS detects, that the user password for the given DN is stored in the directory with a hash function applied, it calculates the hash value of the given password with the appropriate algorithm (this is why the algorithm is stored together with the hashed password). Afterwards it compares the result with the stored attribute value. In case of a match, the bind operation ends successfully:
$
Providing the hashed value of the userPassword attribute instead of the original value will be rejected by ApacheDS:
$ ldapsearch -h zanzibar -p 10389 -D "cn=Horatio Hornblower,ou=people,o=sevenSeas" \\ -w "{SHA}nU4eI71bcnBGqeO0t9tXvY1u5oQ=" -b "ou=people,o=sevenSeas" -s base "(objectclass=*)" ldap_simple_bind: Invalid credentials ldap_simple_bind: additional info: Bind failed: null
This is intended. If someone was able to catch this value (from an LDIF export for instance), s/he must still provide the password itself in order to get authenticated.
In some occasions it is appropriate to allow LDAP clients to permit operations without authentication. If data managed by the directory service is well known by all clients, it is not uncommon to allow search operations (not manipulation) within this data to all clients – without providing credentials. An example for this are enterprise wide telephone books, if clients access the directory service from the intranet.
Anonymous access is enabled by default. Changing this is one of the basic configuration tasks. If you use the server standalone configured with a server.xml file, you can enable/disable it by changing the value for property allowAnonymousAccess in the Spring bean definition for bean defaultDirectoryService, as depicted in the following fragment:
<defaultDirectoryService id="directoryService" instanceId="default" ... allowAnonymousAccess="false" ...>
A restart of the server is necessary for this change to take effect.
Assume anonymous binds are disabled and our sample partition Seven Seas is present in the server. Here is an example with a search operation performed by a command line tool as a client. It tries to connect anonymously (no DN and password given, i.e. options -D and -w missing) to the server. Afterwards the entry ou=people,o=sevenSeas should be displayed, but instead the request is rejected: the server answers that it failed on the search operation because anonymous binds have been disabled!
Now the same command performed against ApacheDS 1.5 with anonymous access enabled as described above. The behavior is different – the entry is visible.
$ ldapsearch -h zanzibar -p 10389 -b "ou=people,o=sevenSeas" -s base "(objectclass=*)" version: 1 dn: ou=people,o=sevenSeas ou: people description: Contains entries which describe persons (seamen) objectclass: organizationalUnit objectclass: top
The examples above have used a command line tool. Of course graphical tools and programmatical access (JNDI etc.) allow anonymous binds as well. Below is a screen shot from the configuration dialog of Apache Directory Studio as an example. During configuration of the connection data ("New LDAP Connection", for instance), the option Anonymous Authentication leads to anonymous binds. Other UI tools offer this feature as well.
If you want to use simple binds with user DN and password within a Java component in order to authenticate users programmatically, one problem arises in practice: most users do not know their DN. Therefore they will not be able to enter it. And even if they knew it, it would frequently be very laborious to type due to the length of the DN. It would be easier for a user if s/he only had to provide a short, unique ID and the password, like in this web form
Usually the ID is an attribute within the user's entry. In our sample data (Seven Seas), each user entry contains the uid attribute, for instance uid=hhornblo for Captain Hornblower:
dn: cn=Horatio Hornblower,ou=people,o=sevenSeas objectclass: person objectclass: organizationalPerson objectclass: inetOrgPerson objectclass: top cn: Horatio Hornblower description: Capt. Horatio Hornblower, R.N givenname: Horatio sn: Hornblower uid: hhornblo mail: hhornblo@royalnavy.mod.uk?
In order to accomplish this task programmatically, one option is to perform the following steps: first bind to the directory anonymously (or with a technical user), then search for the user entry whose uid attribute matches the given ID in order to determine its DN, and finally bind again with the DN that was found and the password the user provided.
The algorithm described above is implemented by many software solutions which are able to integrate LDAP directories. You will learn more about some of them and their configuration options within a later section of this guide.
For illustration purposes, here is a simple Java program which performs the steps with the help of JNDI. It uses anonymous bind for the first step, hence it must be enabled (replace with a technical user, if it better meets your requirements).
import java.util.Hashtable; import javax.naming.Context; import javax.naming.NamingEnumeration; import javax.naming.NamingException; import javax.naming.directory.DirContext; import javax.naming.directory.InitialDirContext; import javax.naming.directory.SearchControls; import javax.naming.directory.SearchResult; public class AdvancedBindDemo { public static void main(String[] args) throws NamingException { if (args.length < 2) { System.err.println("Usage: java AdvancedBindDemo <uid> <password>"); System.exit(1); } Hashtable env = new Hashtable(); env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory"); env.put(Context.PROVIDER_URL, "ldap://zanzibar:10389/"); env.put(Context.SECURITY_AUTHENTICATION, "simple"); String uid = args[0]; String password = args[1]; DirContext ctx = null; try { // Step 1: Bind anonymously ctx = new InitialDirContext(env); // Step 2: Search the directory String base = "o=sevenSeas"; String filter = "(&(objectClass=inetOrgPerson)(uid={0}))"; SearchControls ctls = new SearchControls(); ctls.setSearchScope(SearchControls.SUBTREE_SCOPE); ctls.setReturningAttributes(new String[0]); ctls.setReturningObjFlag(true); NamingEnumeration enm = ctx.search(base, filter, new String[] { uid }, ctls); String dn = null; if (enm.hasMore()) { SearchResult result = (SearchResult) enm.next(); dn = result.getNameInNamespace(); System.out.println("dn: "+dn); } if (dn == null || enm.hasMore()) { // uid not found or not unique throw new NamingException("Authentication failed"); } // Step 3: Bind with found DN and given password ctx.addToEnvironment(Context.SECURITY_PRINCIPAL, dn); ctx.addToEnvironment(Context.SECURITY_CREDENTIALS, password); // Perform a lookup in order to force a bind operation with JNDI ctx.lookup(dn); System.out.println("Authentication successful"); } catch (NamingException e) { System.out.println(e.getMessage()); } finally { ctx.close(); } } }
Some example calls:
$ java AdvancedBindDemo unknown sailor
Authentication failed

$ java AdvancedBindDemo hhornblo pass
dn: cn=Horatio Hornblower,ou=people,o=sevenSeas
Authentication successful

$ java AdvancedBindDemo hhornblo quatsch
dn: cn=Horatio Hornblower,ou=people,o=sevenSeas
[LDAP: error code 49 - Bind failed: null]
The examples consist of an unknown user (an inetOrgPerson entry with uid=unknown does not exist), a successful authentication, and an attempt with an existing uid but a wrong password.
|
http://mail-archives.apache.org/mod_mbox/directory-commits/200910.mbox/raw/%3C471169021.767.1256759101394.JavaMail.www-data@brutus%3E/
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
Enhancement for connection pool in DBAcess.java
Enhancement for connection pool in DBAcess.java package spo.db;
import com.javaexchange.dbConnectionBroker.DbConnectionBroker;
import java.io....;
protected static Properties prop;
}
The above code are from DBAcess.java. I
Further advice needed on my last question - JSP-Servlet
Further advice needed on my last question Dear Experts,
I refer...:\Documents and Settingsapache\jsp\enter_jsp.java:132: ')' expected
} catch... and Settingsbuild\generated\src\org\apache\jsp\enter_jsp.java:132: ';' expected
further clarification needed - Java Beginners
further clarification needed Dear Experts,
I refer to the last answer you have written to me.
The good news is that there's no error already but the code is still not working for me. As Tsuch, I hope to clarify about
Jsp code for disabling record.
Jsp code for disabling record. I want a Jsp and servlet code for the mentioned scenario.
Q. A cross sign appears in front of each record, click... will not appear for further selection. ???
Can anyone please help me out ???
Thanks
jsp code
jsp code what are the jsp code for view page in online journal
JSP and servlet code - Design concepts & design patterns
JSP and servlet code I have created a JSP page for login and a servlet page which takes the parameters and performs the required processing. I'am... carry out session tracking from this page to the page directed and the further Create a html reader JSP tag that read the html page from a link and will display the contents on the JSP. Do not use include directive sample code to create hyperlink within hyperlink
example:
reservation:
train:
A/C department
non A/c Department Everybody,
can anyone help me to findout the modules as i am developing a whiteboard application using jsp?
this application is my dream application.
Thank you Hi,
Do we have a datagrid equivalent concept in JSP...,
Please visit the following links:
Thanks
jsp code - Java Beginners
JSP code and Example JSP Code Example hello frns
i want to display image from the database along... from database in Jsp to visit....
Thanks
insert code jsp to access
insert code jsp to access insert code jsp to access
Searching for Code - JSP-Servlet
JSP, Servlet Searching for Code Hi, i am looking for a jsp servlet code examples for the search function in my application.Thanks
jsp code - Java Beginners
jsp code hello sir
i have a problem in in loop while(itr.hasNext... to retrieve the value of staxapp when i wana use it in entire jsp.
thanks
code - JSP-Servlet
code hi can any one tell me how to create a menu which includes 5 fields using jsp,it's urgent Hi friend,
Plz give details with full source code where you having the problem.
Thanks
Code Works - JSP-Servlet
Code Works Hi
The code provided is working fine along with the pagination . i edited the queries and that makes difference..
here is the code.
Thank you
Regards
Eswaramoorthy
Pagination of JSP page
code - JSP-Servlet
code hi sir
my question is i have created a html ang jsp code....
how to write the code to accept an existing id and password. Hi Friend,
Try the following code:
1)login.html:
code for JSP and Servlet - JSP-Servlet
code for JSP and Servlet i have to create a jsp page that contains username and password,
so how to code servlet according to it? Hi...
--------------------
loginaction
javacode.LoginAction
loginaction
/jsp/LoginAction
JSP code problem - JSP-Servlet
JSP code problem Hi friends,
I used the following code... is the code:
Display file upload form to the user...:
<%
//to get the content type information from JSP
source code in jsp
source code in jsp sir...i need the code for inserting images into the product catalog page and code to display it to the customers when they login to site..pls help me
logout code in JSP
logout code in JSP im using session.invalidate() for logout but its not working
java code - JSP-Servlet
java code Code to send SMS through mobile to a web--
">
source code - JSP-Servlet
source code I want source code for Online shopping Cart..., and Cost.
Customers can only view the bill details.
Deliverables
HTML/ JSP... Rakesh,
I am sending a link where u will found complete source code
Jsp Code - Java Beginners
Jsp Code Hi,
I am new to java programming & as per the requirement, i need to implement a 'SEARCH' functionality which will search the database & should display a unique record.
The design contains the 4 input boxes
ajax code for jsp
ajax code for jsp How to write ajax code to retrieve information on to particular part of webpage when we select option from drop down box - Java Beginners
JSP Code Hi frnds,
This is reference to solution which u have provided for 'Limiting the Number of Record Display in a table'.
With full...:
Pagination of JSP page
Roll No
Name
Marks
Grade
|
http://roseindia.net/tutorialhelp/comment/84270
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
Configure E-Mail Address Policy for an Exchange 2007 Hybrid Deployment
Applies to: Exchange Server 2010 SP1
Topic Last Modified: 2012-07-23
Estimated time to complete: 5 minutes
The Exchange 2010 hybrid server that you're introducing to your Exchange 2007 for an Exchange 2007 Hybrid Deployment
You need to be assigned permissions before you can perform this procedure. To configure e-mail address policies in Exchange 2007, you must be a member of the Exchange Organization Administrators group.
You can update your existing default recipient e-mail address policy using the Exchange Management Console (EMC) on either your Exchange 2007 server or Exchange 2010 hybrid server. We recommend updating the default recipient e-mail address policy using the EMC on your Exchange 2007 server.
In the console tree, navigate to Organization Configuration > Hub Transport on your Exchange 2007 server.
In the result pane, click the E-mail Address Policies tab, and then select the default recipient e-mail address policy.
In the action pane, click Edit.
On the Introduction page, click Next.
On the Conditions page, click Next.
On the E-Mail Addresses page, select Add to enter an e-mail address for your service-routing namespace.
On the SMTP E-Mail Address dialog, select the E-mail address local part check box and select Use alias. Additionally, select Select the accepted domain for the e-mail address and then browse to select the FQDN of your service-routing namespace from a list of accepted domains. For example, service.contoso.com. Click OK after selecting the service-routing namespace in the Select Accepted Domain dialog and then click OK to continue.
Click Next to continue.
On the Schedule page, select Immediately in the Apply the e-mail address policy section.
Click Next to continue.
On the Edit E-Mail Address Policy page, review your configuration settings. Click Edit to apply your changes to the e-mail address policy. Click Back to make any changes.
|
http://technet.microsoft.com/en-us/library/gg981504.aspx
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
Hello geeks and welcome! In this article, we will cover NumPy fliplr. Along with that, we will also look at its syntax and parameters. To make the topic crystal clear, we will also look at a couple of examples. But first, let us try to understand numpy.fliplr() through its definition. Suppose you have a matrix and you need to flip the entries in each row, i.e. reverse the order of the columns, while the shape of the array is preserved. This function comes in handy in that situation, and you can achieve the result with just a single line of code. Now, moving ahead, we will look at its syntax, followed by its parameters.
Syntax Of Numpy Fliplr()
Given below is the general syntax for this function
numpy.fliplr(m)
A relatively simple syntax when compared to other functions of NumPy. It has only one parameter associated with it that we will be discussing next.
Parameters Of Numpy Fliplr()
M: array_like
It represents the input array over which the operation needs to be performed. The input array must be at least 2-dimensional for this function to work. For a 1-d array, you can use another function called numpy.flipud().
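For instance, a quick sketch of flipping a 1-d array with numpy.flipud() looks like this:

#input
import numpy as ppool
b = [1, 2, 3, 4]
print(ppool.flipud(b))   # prints [4 3 2 1]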
Return
f:ndarray
It returns a view of the input array with the columns reversed. Since a view is returned, this operation is O(1).
Now that we are done with the syntax and parameters, let us look at a couple of examples that will help us understand the concept better.
Examples
#input import numpy as ppool a=[[1,23], [34,6]] print(ppool.fliplr(a))
Output:
[[23 1] [ 6 34]]
Above, we can see a straightforward example of NumPy fliplr. In the above example, we have first imported the NumPy module. Following this, we have defined a 2-dimensional array. In the last line of our code, we have used the print statement. Our output justifies our input and our function. We get [23, 1], which is the reverse of [1, 23], and [6, 34], which is the reverse of [34, 6]. Here we can see that the elements of each row are flipped while the shape of the array is preserved.
Now let us look at another example with a larger 3×3 matrix.
#input import numpy as ppool a=[[1,2,3], [4,5,6], [7,8,9]] print(ppool.fliplr(a))
Output:
[[3 2 1] [6 5 4] [9 8 7]]
In the above example, the only difference is that we have used a 3×3 array. We have first imported the NumPy module, following which we have defined our array and then used a print statement to get our desired output. Here we can see that, again, the values in each row are switched from left to right, while the shape of the array is still maintained, and the middle element of each row keeps its position.
Numpy.fliplr() Vs Flipud()
In this section, we will compare two of the NumPy functions and point out the basic difference between them. As stated above, the fliplr() function flips the array from left to right, i.e. it reverses the columns. In contrast, the flipud() function flips the array from top to bottom, i.e. it reverses the rows.
Now let us look at the effect of the function on a similar array.
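As an illustration (this snippet is not from the original article), applying both functions to the same 3×3 array makes the difference clear:

#input
import numpy as ppool
a=[[1,2,3],
   [4,5,6],
   [7,8,9]]
print(ppool.fliplr(a))   # left to right: [[3 2 1] [6 5 4] [9 8 7]]
print(ppool.flipud(a))   # up to down:    [[7 8 9] [4 5 6] [1 2 3]]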
From the above comparison, we can draw a conclusion and spot the basic difference between the two functions.
Is there a Fliplr function for 3D arrays in Numpy?
np.fliplr() itself requires an array with at least two dimensions, but it also accepts 3D arrays: in that case it flips along the second axis (axis 1). You can express the same operation explicitly with slicing, which also makes it easy to flip along other axes. The following example will help you flip the 2D sub-arrays of a 3D array –
Code:
import numpy as np
a = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]])
print(a[:,::-1,...])
Output:
[[[ 3 4] [ 1 2]] [[ 7 8] [ 5 6]] [[11 12] [ 9 10]]]
Explanation:
This way you can flip each of the 2D arrays contained in the 3D array. Moreover, you can tweak the : slicing operators to create the result you want.
Conclusion
In this article, we covered NumPy fliplr. Along with that, we looked at its syntax and parameters. For a better understanding, we looked at a couple of examples. In the end, we can conclude that NumPy.fliplr() is used to shift the array left to right with preserving the shape of the array. I hope this article can clear all of your doubts and make this topic crystal clear to you. If you still have any unsolved queries or doubts, feel free to write them below in the comment section. Done reading this, why not read NumPy vstack next.
|
https://www.pythonpool.com/numpy-fliplr/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Warning: You are browsing the documentation for Symfony 4.3, which is no longer maintained.
Read the updated version of this page for Symfony 5.3 (the current stable version).
Console
Commands are defined in classes extending Symfony\Component\Console\Command\Command. For example, you may
want a command to create a user:
// src/Command/CreateUserCommand.php namespace App
You can optionally define a description, help message and the input options and arguments:
// ... protected function execute(InputInterface $input, OutputInterface $output) { // outputs multiple lines to the console (adding "\n" at the end of each line) $output->writeln([ 'User Creator', '============', '', ]); // the value returned by someMethod() can be an iterator () // that generates and returns the messages with the 'yield' PHP keyword $output->writeln($this->someMethod()); // outputs a message followed by a "\n" $output->writeln('Whoa!'); // outputs a message without adding a "\n" at the end of the line $output->write('You are about to '); $output->write('create a user.'); }
The console output can also be divided into multiple independent regions, called output sections, represented by Symfony\Component\Console\Output\ConsoleSectionOutput:
class MyCommand extends Command { protected function execute(InputInterface $input, OutputInterface $output) { $section1 = $output->section(); $section2 = $output->section(); $section1->writeln('Hello'); $section2->writeln('World!'); // Output displays "Hello\nWorld!\n" // overwrite() replaces all the existing section contents with the given content $section1->overwrite('Goodbye'); // Output now displays "Goodbye\nWorld!\n" // clear() deletes all the section contents... $section2->clear(); // Output now displays "Goodbye\n" // ...but you can also delete a given number of lines // (this example deletes the last two lines of the section) $section1->clear(2); // Output is now completely empty! } }:/Command/CreateUserCommandTest.php namespace App\Tests\Command;\ApplicationTester instead..
|
https://symfony.com/index.php/doc/4.3/console.html
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Hi Paul,
Thanks. I tried that and I get the same results as you — running “./version.py” returns 2.7.1 but “python version.py” returns 2.7.2. I think it’s because the python installed at /usr/bin/python is the one that came with the OS; when I installed python
2.7.2 using macports, it installed it to /usr/bin/python2.7, then set the environment to point to that. So, if I change the first line from #! /usr/bin/python to #! /usr/bin/env python, I get the same result when I run version.py either way.
Anyway, are you saying that having the two pythons installed is affecting pylab.show()? If so, how can I fix it?
Thanks,
Collin
···
On Nov 18, 2011, at 10:54 AM, Paul de Beurs wrote:
Collin,
I had the same kind of trouble. OSX Lion comes with Python 2.7.1.
I have this little script named ‘version.py’:
#!/usr/bin/python
import sys
print ‘Python’, sys.version
In a Terminal you can start version.py in two different ways.
when I typed: python version.py
I got:
Python 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 14:13:39)
[GCC 4.0.1 (Apple Inc. build 5493)]
when I typed: version.py
I got:
Python 2.7.1 (r271:86832, Jun 16 2011, 16:59:05)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)]
When I change the first line in version.py in #!/Library/Frameworks/Python.framework/Versions/2.7/bin/python
the results of the two test are the same.
Maybe this helps you. Please give me your results.
Paul
2011/11/16 Collin Capano <cdcapano@…3861…>
Hi,
I’ve installed matplotlib on a new computer running OSX Lion 10.7.2 (Xcode version 4.2). When I open ipython and try to run:
In [1]: import pylab
In [2]: pylab.figure(); pylab.plot([0,1],[2,2]); pylab.show()
nothing happens. I can, however, save the plot using pylab.savefig. I am using matplotlib version 1.1.0, with Python 2.7.2 and ipython version 0.11. I installed all of these using MacPorts (specifically, the python27, py27-matplotlib, and py27-ipython ports).
Any help would be greatly appreciated, as interactive plotting is important for my work.
Thanks,
Collin
Matplotlib-users mailing list
Matplotlib-users@lists.sourceforge.net
|
https://discourse.matplotlib.org/t/pylab-show-does-nothing-osx-lion/16270
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
EnableLanBroadcast & Example
The LAN Broadcast procedure is an interesting way to publish game sessions only on your LAN, making it possible to match players that are on the same network. The session broadcast is most useful if you want to join players locally but do not want to share the game info publicly, or, if you are using the Pro version, if you don't want to type the server IP to connect, for example.
In order to publish the game session using the built-in broadcaster and listen for sessions that come from any active server, you only need to call the utility method
BoltNetwork.EnableLanBroadcast() (API link). It will initialize the internal broadcaster, so you just need to keep the server running or check for sessions on the client.
Photon Bolt Free
The code below, using Photon Bolt Free, shows how you can create a Photon session privately and publish it via LAN. This is done by making an invisible room and enabling the LAN broadcast. There are two main aspects in this case: (i) even if the room is invisible, any client with its ID can join the session normally, including clients outside of your LAN, as the session is always published to the Cloud service, and (ii) as you are using the Free version, an internet connection is always required, even if the players are only local.
The script shows a simple
Menu UI with buttons to start the peer as the Server or Client.
Once started, the server will create a new Photon Session, observe that we set it as invisible (
props.IsVisible = false;), and publish it to the Photon Cloud.
In both cases, BoltNetwork.EnableLanBroadcast() is invoked at the end of BoltStartDone(), which will trigger the session broadcaster.
Eventually, you will see the client sending broadcast searches to the broadcast port of your router.
If you have an active server running on the same LAN, that server should respond to the search with its own session information.
The UI will be populated with a
Join button, that initiates the connection with the game server.
At this moment, the connection occurs in the exact same way as when you join a public Photon Session.
So, once again, you need to be connected to the internet in order to enter the room.
using System; using Bolt.Matchmaking; using Bolt.Photon; using UdpKit; using UdpKit.Platform; using UnityEngine; public class MenuLanBroadcast : Bolt.GlobalEventListener { private Rect _labelRoom = new Rect(0, 0, 140, 75); private GUIStyle _labelRoomStyle; private void Awake() { Application.targetFrameRate = 60; BoltLauncher.SetUdpPlatform(new PhotonPlatform()); _labelRoomStyle = new GUIStyle() { fontSize = 20, fontStyle = FontStyle.Bold, normal = { textColor = Color.white } }; } public override void BoltStartBegin() { BoltNetwork.RegisterTokenClass<PhotonRoomProperties>(); } public override void BoltStartDone() { if (BoltNetwork.IsServer) { string matchName = Guid.NewGuid().ToString(); var props = new PhotonRoomProperties(); props.IsOpen = true; props.IsVisible = false; // Make the session invisible props["type"] = "game01"; props["map"] = "Tutorial1"; BoltMatchmaking.CreateSession( sessionID: matchName, sceneToLoad: "Tutorial1", token: props ); } // Broadcast and Listen for LAN Sessions BoltNetwork.EnableLanBroadcast(); } public override void SessionListUpdated(Map<Guid, UdpSession> sessionList) { BoltLog.Info("Session list updated: {0} total sessions", sessionList.Count); } // GUI public void OnGUI() { GUILayout.BeginArea(new Rect(10, 10, Screen.width - 20, Screen.height - 20)); if (BoltNetwork.IsRunning == false) { if (ExpandButton("Start Server")) { BoltLauncher.StartServer(); } if (ExpandButton("Start Client")) { BoltLauncher.StartClient(); } } else if (BoltNetwork.IsClient) { SelectRoom(); } GUILayout.EndArea(); } private void SelectRoom() { GUI.Label(_labelRoom, "Looking for rooms:", _labelRoomStyle); if (BoltNetwork.SessionList.Count > 0) { GUILayout.BeginVertical(); GUILayout.Space(30); foreach (var session in BoltNetwork.SessionList) { UdpSession udpSession = session.Value; var label = string.Format("Join: {0} | {1}", udpSession.HostName, udpSession.Source); if (ExpandButton(label)) { BoltMatchmaking.JoinSession(udpSession); } } GUILayout.EndVertical(); } } private bool ExpandButton(string text) { return GUILayout.Button(text, GUILayout.ExpandWidth(true), GUILayout.ExpandHeight(true)); } }
Photon Bolt Pro
It's also possible to accomplish a similar behavior using Photon Bolt Pro. Below you can see a script very similar to the one shown earlier, but this time using the direct connection capabilities of the Pro version. In this case, you will also listen for Game Session data, but it will be available only on the LAN. We've removed most of the duplicated code and highlight only the main changes; the rest of the script is the same.
public class MenuLanBroadcastPro : Bolt.GlobalEventListener { private void Awake() { //... // Change the target Udp Platform BoltLauncher.SetUdpPlatform(new DotNetPlatform()); // ... } public override void BoltStartDone() { if (BoltNetwork.IsServer) { // There is no need to setup the properties of the Game Session // just create one normally string matchName = Guid.NewGuid().ToString(); BoltMatchmaking.CreateSession( sessionID: matchName, sceneToLoad: "Tutorial1" ); } // Broadcast and Listen for LAN Sessions BoltNetwork.EnableLanBroadcast(); } // ... }
Extra Notes
If you are working with Android devices, by default the OS will not let you send and listen for broadcast messages, so you need to make sure to add this permission to the Android manifest file:
<uses-permission android:.
|
https://doc.photonengine.com/en-US/bolt/current/in-depth/enablelanbroadcast
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
We’ll start with a basic overview to see what working with GHCi is like: how to run it, how to use commands, and how to read its output.
Starting GHCi
Assuming you already have a working installation of GHC on your machine, you should be able to open a GHCi session by typing
ghci on the command line from any directory that can access GHC. Both of the major build tools,
cabal and
stack, have their own commands to open project-aware REPLs, but using those will not be our focus here. Further documentation is available here for
cabal users and here for
stack users. One benefit of using these is that opening a GHCi session from within a project directory using, e.g.,
stack repl, may load your
Main module automatically and also make the GHCi session aware of the project’s dependencies.
$ ghci
Haskell expressions can then be typed directly at the prompt and immediately evaluated.
λ> 5 + 5
10
λ> "hello" ++ " world"
"hello world"
λ> let x = 5+5 ; y = 7 in (x * y)
70
GHCi will interpret the line as a complete expression. If you wish to enter multi-line expressions, you can do so with some special syntax.
λ> :{
> let x = 5+5; y = 7
> in
> (x * y)
> :}
70
Invoking GHCi with options
You can open GHCi with a file loaded from the command line by passing the filename as an argument. For example, this command loads the file
fractal.hs into a new GHCi session:
$ ghci fractal.hs
If that module has dependencies other than the
base library, though, they won’t be loaded automatically. We’ll cover bringing those into scope in a separate section, below.
You can also open GHCi with a language extension already turned on. For example, by default:
$ ghci
<elided opening text>
λ> :type "julie"
"julie" :: [Char]
But if you pass it the
-XOverloadedStrings flag, then that language extension will be enabled for that session.
$ ghci -XOverloadedStrings
<elided opening text>
λ> :type "julie"
"julie" :: Data.String.IsString p => p
A great number of other GHC flags can be passed as arguments to
ghci in this fashion. For the most part, we will cover those as they come up in other contexts, rather than attempting to list them all here.
Your GHCi configuration, if you have one, will be loaded by default when you invoke
ghci. You can disable that with a flag:
$ ghci -ignore-dot-ghci
Packages
There are a few ways to bring modules and packages into scope in GHCi. One is using
stack or
cabal to open a project-aware REPL, which usually works well to bring the appropriate dependencies into scope. However, there are a few other options.
You can import modules directly in GHCi using
import, just as you do at the top of a file if your GHCi is already aware of the package the module comes from.
λ> :type bool
<interactive>:1:1: error: Variable not in scope: bool
λ> import Data.Bool
λ> :type bool
bool :: a -> a -> Bool -> a
All modules in
base are fair game for importing, as are modules in a project-aware GHCi session or a GHCi session that has been invoked with the
-package flag, thus loading the package into the session.
All the same import syntax is available for this, such as
hiding and
qualified.
The
base package is always loaded by default into a GHCi session (as is the
Prelude module, unless you have disabled that), but that’s not, of course, the case for many packages. However, your GHC installation came with a few packages that are available but not automatically loaded into new GHCi sessions. You can find out what you have available by running
$ ghc-pkg list
on the command line. You should see a list of all the packages that are installed and available. Modules from any of those listed packages can be directly imported, just as if they were in
base. So, assuming your list includes
containers, you can type
import Data.Map or the like directly into your GHCi session, regardless of whether it’s one of your project’s dependencies – or if you even have a project going.
If you are using a
stack or
cabal REPL, then there may be many more packages in their local package lists that are available to new GHCi sessions. If you have previously opened a GHCi session with something like
$ stack repl --package QuickCheck
then
stack will have installed
QuickCheck and, if in the future you open a
stack repl session but forget to pass the
--package flag and then suddenly you realize you want to make
QuickCheck available in this GHCi session, you can
:set -package within GHCi to bring it into scope:
λ> :type property
<interactive>:1:1: error: Variable not in scope: property
λ> :set -package QuickCheck
package flags have changed, resetting and loading new packages...
λ> import Test.QuickCheck
λ> :type property
property :: Testable prop => prop -> Property
It’s really handy not to have to restart a GHCi session just to load a package you use frequently!
Commands
First let’s start with a couple of basics: you can use the up-arrow, down-arrow, and tab-complete in GHCi, so if you are already comfortable with these from your bash shell or what have you, you’ll enjoy this.
Furthermore, if you are in a GHCi session, shell commands can be made available using the
:! GHCi command. For example, let’s say we’ve forgotten what directory we’re in and what files are in this directory, but we don’t want to quit GHCi to find out. No problem!
λ> :! pwd
/home/jmo
λ> :cd /home/jmo/haskell-projects
λ> :! ls
emily fractal-sets haskell-fractal life shu-thing web-lesson4
Notice the
:cd command doesn’t need the
:!.
To quit a GHCi session, use
:quit or
:q.
GHCi commands (much of this course will be about GHCi commands; List of commands gives an overview of all of them, and several other lessons elaborate on specific commands of particular importance) all start with a colon (except
import). They may all be abbreviated to just their first letter; however, if there is more than one command that starts with the same letter, such as
:main and
:module, it will default to reading that as whichever is more commonly used. When in doubt, type it out.
You can type
:? for a complete listing of the GHCi commands.
What is it
GHCi assigns the name
it to the last-evaluated expression. If you aren’t using
:set +t (we will see :set +t and other uses of :set later, in the page on the GHCi :set command) to automatically display types for expressions entered into or evaluated in GHCi, then you might not notice
it until you see an error message that mentions
it such as this one:
λ> max 5 _ <interactive>:76:7: error: • Found hole: _ :: a Where: ‘a’ is a rigid type variable bound by the inferred type of it :: (Ord a, Num a) => a ----------------------------------- ^^ at <interactive>:76:1-7 • In the second argument of ‘max’, namely ‘_’ In the expression: max 5 _ In an equation for ‘it’: it = max 5 _------------------------- ^^
It isn’t always important or useful to recognize that GHCi has named the expression, but there’s at least one interesting thing about
it that you may find useful. Here’s a clue:
λ> max 5 <interactive>:75:1: error: • No instance for (Show (Integer -> Integer)) arising from a use of ‘print’ (maybe you haven't applied a function to enough arguments?) • In a stmt of an interactive GHCi command: print it---------------------------------------------- ^^^^^ ^^
GHCi is always implicitly running print it on the expression you enter. But it's not only GHCi that can pass
it as an argument to functions – you can, too!
λ> sum [1..500]
125250
λ> it / 15
8350.0
λ> it * 2
16700.0
|
https://typeclasses.com/ghci/intro
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
boost/beast/websocket/rfc6455.hpp
// // Copyright (c) 2016-2019 Vinnie Falco (vinnie dot falco at gmail dot com) // // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at) // // Official repository: // #ifndef BOOST_BEAST_WEBSOCKET_RFC6455_HPP #define BOOST_BEAST_WEBSOCKET_RFC6455_HPP #include <boost/beast/core/detail/config.hpp> #include <boost/beast/core/static_string.hpp> #include <boost/beast/core/string.hpp> #include <boost/beast/http/empty_body.hpp> #include <boost/beast/http/message.hpp> #include <boost/beast/http/string_body.hpp> #include <array> #include <cstdint> namespace boost { namespace beast { namespace websocket { /// The type of object holding HTTP Upgrade requests using request_type = http::request<http::empty_body>; /// The type of object holding HTTP Upgrade responses using response_type = http::response<http::string_body>; /** Returns `true` if the specified HTTP request is a WebSocket Upgrade. This function returns `true` when the passed HTTP Request indicates a WebSocket Upgrade. It does not validate the contents of the fields: it just trivially accepts requests which could only possibly be a valid or invalid WebSocket Upgrade message. Callers who wish to manually read HTTP requests in their server implementation can use this function to determine if the request should be routed to an instance of @ref websocket::stream. @par Example @code void handle_connection(net::ip::tcp::socket& sock) { boost::beast::flat_buffer buffer; boost::beast::http::request<boost::beast::http::string_body> req; boost::beast::http::read(sock, buffer, req); if(boost::beast::websocket::is_upgrade(req)) { boost::beast::websocket::stream<decltype(sock)> ws{std::move(sock)}; ws.accept(req); } } @endcode @param req The HTTP Request object to check. @return `true` if the request is a WebSocket Upgrade. */ template<class Allocator> bool is_upgrade(beast::http::header<true, http::basic_fields<Allocator>> const& req); /** Close status codes. These codes accompany close frames. @see <a href="">RFC 6455 7.4.1 Defined Status Codes</a> */ enum close_code : std::uint16_t { /// Normal closure; the connection successfully completed whatever purpose for which it was created. normal = 1000, /// The endpoint is going away, either because of a server failure or because the browser is navigating away from the page that opened the connection. going_away = 1001, /// The endpoint is terminating the connection due to a protocol error. protocol_error = 1002, /// The connection is being terminated because the endpoint received data of a type it cannot accept (for example, a text-only endpoint received binary data). unknown_data = 1003, /// The endpoint is terminating the connection because a message was received that contained inconsistent data (e.g., non-UTF-8 data within a text message). bad_payload = 1007, /// The endpoint is terminating the connection because it received a message that violates its policy. This is a generic status code, used when codes 1003 and 1009 are not suitable. policy_error = 1008, /// The endpoint is terminating the connection because a data frame was received that is too large. too_big = 1009, /// The client is terminating the connection because it expected the server to negotiate one or more extension, but the server didn't. needs_extension = 1010, /// The server is terminating the connection because it encountered an unexpected condition that prevented it from fulfilling the request. 
internal_error = 1011, /// The server is terminating the connection because it is restarting. service_restart = 1012, /// The server is terminating the connection due to a temporary condition, e.g. it is overloaded and is casting off some of its clients. try_again_later = 1013, //---- // // The following are illegal on the wire // /** Used internally to mean "no error" This code is reserved and may not be sent. */ none = 0, /** Reserved for future use by the WebSocket standard. This code is reserved and may not be sent. */ reserved1 = 1004, /** No status code was provided even though one was expected. This code is reserved and may not be sent. */ no_status = 1005, /** Connection was closed without receiving a close frame This code is reserved and may not be sent. */ abnormal = 1006, /** Reserved for future use by the WebSocket standard. This code is reserved and may not be sent. */ reserved2 = 1014, /** Reserved for future use by the WebSocket standard. This code is reserved and may not be sent. */ reserved3 = 1015 // //---- //last = 5000 // satisfy warnings }; /// The type representing the reason string in a close frame. using reason_string = static_string<123, char>; /// The type representing the payload of ping and pong messages. using ping_data = static_string<125, char>; /** Description of the close reason. This object stores the close code (if any) and the optional utf-8 encoded implementation defined reason string. */ struct close_reason { /// The close code. std::uint16_t code = close_code::none; /// The optional utf8-encoded reason string. reason_string reason; /** Default constructor. The code will be none. Default constructed objects will explicitly convert to bool as `false`. */ close_reason() = default; /// Construct from a code. close_reason(std::uint16_t code_) : code(code_) { } /// Construct from a reason string. code is @ref close_code::normal. close_reason(string_view s) : code(close_code::normal) , reason(s) { } /// Construct from a reason string literal. code is @ref close_code::normal. close_reason(char const* s) : code(close_code::normal) , reason(s) { } /// Construct from a close code and reason string. close_reason(close_code code_, string_view s) : code(code_) , reason(s) { } /// Returns `true` if a code was specified operator bool() const { return code != close_code::none; } }; } // websocket } // beast } // boost #include <boost/beast/websocket/impl/rfc6455.hpp> #endif
|
https://www.boost.org/doc/libs/develop/boost/beast/websocket/rfc6455.hpp
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
In this article, you will learn basic interview tips in C#.
In this article, I am going to explain the following C# basics concepts.
I will be explaining all the topics mentioned above in depth, but the explanation will be focused on interview preparation, i.e., I will explain all those things in the context of an interview. In the recent past, I have gone through dozens of interviews, and I would like to share that experience with other developers.
I am going to start with the very basic concepts, and the purpose of this article is to help its readers prepare for C# interviews.
Value types & reference types
In C#, types are divided into two categories: value types and reference types. A value type stores its data directly in the stack portion of RAM, whereas a reference type stores a reference to the data on the stack and the actual data on the heap. However, this is a half-truth, and I will explain later why.
Value types
A value type contains its data directly, and it is stored on the stack of memory. Examples of built-in value types are the numeric types (sbyte, byte, short, int, long, float, double, decimal), char, bool, IntPtr, and date; struct and enum types are also value types.
If I need to create a new value type, I can use a struct. I can also use the enum keyword to create an enum value type.
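For illustration, here is a minimal sketch of a custom value type created with struct and one created with enum (the Point and Color names are just examples for this sketch, not types used later in the article):
// A custom value type created with the struct keyword
public struct Point
{
    public int X;
    public int Y;
}

// A custom value type created with the enum keyword
public enum Color
{
    Red,
    Green,
    Blue
}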
Reference types
Reference types store the address of their data, i.e. a pointer, on the stack, and the actual data is stored in the heap area of memory. The heap memory is managed by the garbage collector.
As I said, a reference type does not store its data directly, so assigning a reference type value to another reference type variable copies the reference address (pointer) and assigns it to the second variable. Both reference variables then hold the same address, pointing to the same data stored in the heap. If I make a change through one variable, that change will also be reflected in the other variable.
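A minimal sketch of this behavior is shown below (the Customer class and its Name field are illustrative names assumed only for this sketch):
public class Customer
{
    public string Name;
}

// somewhere inside a method:
Customer c1 = new Customer();
c1.Name = "John";

// Assignment copies the reference (pointer), not the object itself
Customer c2 = c1;

// The change made through c2 is visible through c1,
// because both variables point to the same object on the heap
c2.Name = "Smith";
Console.WriteLine(c1.Name);   // prints "Smith"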
In some cases this does not hold, for example when memory is reallocated: only the variable for which the new memory allocation has been done is updated.
Suppose you pass an object of the customer class, let's say "objCustomer", into a method "Modify(…)" and then reallocate the memory inside the "Modify(…)" method and set its property values. In that case, the customer object "objCustomer" inside "Modify(…)" will have different values, and those changes will not be available to the "objCustomer" variable accessed outside of "Modify(…)".
You can also take the example of the String class. 'string' is the best example of this scenario because, in the case of a string, new memory is allocated each time it is modified.
I will explain about all those scenarios in depth in “Usage of ‘ref’ & ‘out’ keyword in C#” & “Understanding the behavior of ‘string’ in C#” section of this article.
Understanding Classes and other types
A table is given below to help you understand classes and the other types. If you are already aware of all the behaviors of classes and the other types, you can skip this section and move to the next one. I will explain all of these things one by one for developers who are not aware of them.
Note
You can see in the table whether a static class can inherit a class: I have written 'No' but used the * notation with it. It means a static class cannot inherit any other class but must inherit System.Object. Thus, we could also write the System.Object class name in the inheritance list, but there is no need to write System.Object explicitly, because all classes inherit from System.Object implicitly.
What are the different types of classes we can have?
We can have an abstract class, partial class, static class, sealed class, nested class & concrete class (simple class).
Static Class
Abstract Class
abstract class Student
Sealed Class
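For the class types listed above, a minimal hedged sketch of static, abstract, and sealed class declarations could look like this (the member names are illustrative only):
// Static class: cannot be instantiated or inherited; all members must be static
public static class MathHelper
{
    public static int Square(int x) { return x * x; }
}

// Abstract class: cannot be instantiated directly; may contain abstract members
abstract class Student
{
    public abstract void PrintDetails();   // must be implemented by a derived class
}

// Sealed class: cannot be inherited by any other class
public sealed class FinalStudent : Student
{
    public override void PrintDetails() { Console.WriteLine("Final student"); }
}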
So, you can see that we can have a partial interface, partial class, and partial struct, but a partial method can only be written inside a partial struct or a partial class.
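As a short, hedged illustration of the partial keyword (the Order class, the file names, and the OnCreated method are assumptions made for this sketch):
// File: Order.Part1.cs
public partial class Order
{
    // A partial method: only the declaration lives here;
    // the body may optionally be provided in another part of the class
    partial void OnCreated();

    public Order() { OnCreated(); }
}

// File: Order.Part2.cs
public partial class Order
{
    // Optional implementation of the partial method
    partial void OnCreated() { Console.WriteLine("Order created"); }
}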
Structure (struct) in C#
In C#, we can create a structure or struct, using the keyword struct. As explained earlier, a struct is a value type and it is directly stored in the stack of the memory.
Create a struct
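A minimal sketch of defining and using a struct could look like this (the Rectangle type and its members are illustrative names, not taken from the original sample):
public struct Rectangle
{
    public int Width;
    public int Height;

    public Rectangle(int width, int height)
    {
        Width = width;
        Height = height;
    }

    public int Area() { return Width * Height; }
}

// somewhere inside a method:
// A struct is a value type, so assignment copies the whole value
Rectangle r1 = new Rectangle(2, 3);
Rectangle r2 = r1;
r2.Width = 10;
Console.WriteLine(r1.Width);   // still 2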
Interface
An interface can be created using the keyword 'interface' in C#. An interface is a contract, and that's why all the methods, properties, indexers, and events that are part of the interface must be implemented by the class or struct that implements the interface. An interface can contain only the signatures of methods, properties, indexers, and events.
Following is an example of an Interface.
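A minimal hedged sketch of an interface and an implementing class (IShape and Circle are illustrative names assumed for this sketch):
public interface IShape
{
    // Only signatures are allowed in an interface
    double Area();
    string Name { get; }
}

public class Circle : IShape
{
    public double Radius;

    public Circle(double radius) { Radius = radius; }

    // Every member declared in the interface must be implemented
    public double Area() { return 3.14159 * Radius * Radius; }
    public string Name { get { return "Circle"; } }
}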
Access Modifiers in C#
What are the access modifiers allowed with Class?
What is the use of ‘typeof’ keyword in C#?
It is used to get the System.Type object for a type, and using it we can do a lot of reflection work without complex logic; a few examples are shown below.
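As a hedged sketch of what typeof makes possible (the Student type is only an illustrative example):
// somewhere inside a method:
Type t = typeof(Student);

Console.WriteLine(t.FullName);      // the fully qualified type name
Console.WriteLine(t.IsClass);       // true for a class
Console.WriteLine(t.IsValueType);   // false for a class, true for a struct

// List the public methods of the type via reflection
foreach (var method in t.GetMethods())
{
    Console.WriteLine(method.Name);
}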
Condition 4
What is the difference between the following 2 code snippets?
Code Snippet1
What is the use of 'extern' keyword in C#?
We use 'extern' keyword with a method to indicate that the method has been implemented externally i.e. out of your current C# code. Below is a code sample from MSDN, which describes the use of extern
[DllImport("User32.dll")]
public static extern int MessageBox(int h, string m, string c, int type);
NOTE
While using [DllImport("User32.dll")], you need to include the namespace System.Runtime.InteropServices;
for complete details about extern visit here.
Sequence of Modifiers and other attributes with class and method
In C#, the sequence of modifiers sometimes matters, and you should be aware of that if you are going for an interview.
e.g. public abstract partial class Student or public partial abstract class Student
You might be wondering why I am explaining such micro-level questions. I am explaining them because in some cases you will face such questions, and you will feel a bit irritated or upset if you are not aware of these things.
Sometimes the sequence matters and sometimes it doesn't, but we should keep some best practices in mind so that we write error-free code.
Class
[modifiers] [partial] className [:] [inheritance list]
Method
[modifiers] [partial] returnType methodName([parameters])
Method Modifiers List.
Passing Value type variable without ‘ref’ keyword
In the case mentioned above, the variable x has an initial value of 20, and its value, i.e. 20, is passed as a parameter to the method Modify(). Inside Modify() it is modified by adding 50 to its previous value, so it becomes 70, but the value of the variable 'x' outside the Modify() method is still 20, because the method modified a copy of the value, not the original variable.
Below is the complete code
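A minimal sketch consistent with the description above (the Modify name follows the article's wording; the rest of the structure is an assumption):
class Program
{
    static void Main()
    {
        int x = 20;
        Modify(x);                 // x is passed by value (a copy), no 'ref' keyword
        Console.WriteLine(x);      // prints 20; the original variable is unchanged
    }

    static void Modify(int value)
    {
        value = value + 50;        // becomes 70, but only inside this method
        Console.WriteLine(value);  // prints 70
    }
}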
|
https://www.c-sharpcorner.com/article/basic-interview-tips-in-c-sharp/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Manage the Skill Session and Session Attributes
Your skill can keep the skill session open to conduct a back-and-forth interaction with the user. While the session is open, the user does not need to use your invocation name to talk to your skill.
- Lifecycle of a skill session
- How devices with screens affect the skill session
- Save data during the session
- Save data between sessions
- Related topics
Lifecycle of a skill session
A skill session begins when a user invokes your skill and Alexa sends your skill a request. The request contains a session object that uses a Boolean value called
new to indicate that this is a new session.
Your skill receives the request and returns a response. Alexa speaks the response and, because she expects the user to respond, keeps the microphone open for a few seconds. If the user's response maps to your interaction model, a new intent is sent to the skill and the process goes back to step 2.
However, if a few seconds elapse without a response from the user, Alexa closes the microphone. If the skill specified a reprompt, Alexa reprompts the user to speak and opens the microphone for a few more seconds. If the user still does not respond, the session normally ends.
The session may remain open for a short time with the microphone closed if the skill is used on a device with a screen as described in How devices with screens affect the skill session.
One exception that overrides this: the directives to start the purchase flow for in-skill purchasing automatically end the session, regardless of the
shouldEndSession value. You need to use persistent storage to resume the skill once the purchase flow completes. See Add ISP Support to Your Skill Code.
undefined (not set or
null) – The session's behavior depends on the type of Echo device. If the device has a screen and the skill response includes screen content, the session stays open for up to 30 more seconds, without opening the microphone to prompt the user for input. For details, see How devices with screens affect the skill session. If the user speaks and precedes their request with "Alexa," Alexa sends the request to the skill. Otherwise, Alexa ignores the user's speech. If an Alexa Gadgets event handler is active, the session continues to stay open until the skill calls
CustomInterfaceController.StopEventHandler or the event handler expires.
How devices with screens affect the skill session
When the user invokes a skill on a device with a screen, the session can remain open for up to 30 additional seconds with the microphone closed. The user continues to see content related to the skill on the screen. To continue interacting with the skill, the user can use the wake word to speak to Alexa, followed by an utterance that maps to the skill's interaction model.
This extended session occurs only when all of the following conditions are true:
- The user invokes the skill with a device with a screen (such as an Echo Show).
- The skill is configured to support devices with screens. This means that one of these options on the Build > Interfaces page in the developer console is enabled:
- Display Interface, or
- Alexa Presentation Language
- The skill's response includes content to display on the screen. This can be one of the following:
- A display template, displayed when you return the
Display.RenderTemplate directive in your skill's response.
- An Alexa Presentation Language document, displayed when you return the
Alexa.Presentation.APL.RenderDocument directive in your skill's response.
- An Alexa app card, displayed when you include a
card object in your skill's standard response. Although cards are intended for the Alexa app, card content is displayed on screen devices if you don't provide any other screen content.
- The
shouldEndSession value in the response is either
false or
undefined (not set).
- When
undefined, the extended session remains open for approximately 30 seconds.
- When
false, the extended session occurs after the user has failed to respond to the reprompt. In this case, the session remains open for between 20 and 30 seconds.
For example, a skill configured to support screen devices and display content on the screen could have this interaction on a device with a screen:
User: Open Spacey.
Skill gets a
LaunchRequest, then responds with the text to speak and
shouldEndSession set to
false.
Alexa: Welcome to spacey. I know facts about space, how long it takes to travel between two planets and I even know a joke. What do you want to know?
The screen displays content related to Spacey, such as a display template sent via the
Display directive.
User: … (User says nothing. A few seconds elapse.)
Alexa: Try asking me to tell you something about space. (Alexa speaks the
reprompt provided with the response.)
User: Um…
More time passes.
Microphone closes, but the session remains open. Content about Spacey continues to display.
User: Alexa, tell me about Mars. (User interacts with the skill without using the invocation name, since the skill session is still active.)
Alexa sends the skill a
PlanetFacts intent. The request shows that this is a continuing session, not a new session.
Alexa: On Mars…
Save data during the session
When you need to retain data during the session, use session attributes. Create a map of key/value pairs with the data you need to save. Include this map in the
sessionAttributes property of your response. When Alexa sends the next request as part of the same session, the same map is included in the
session.attributes property of the request. Use the keys you defined to retrieve the data from the map.
For example:
…earlier utterances to start this interaction.
User: My favorite color is blue
Alexa sends the skill the
FavoriteColorIntent with the
favoriteColor slot set to "blue". The skill responds with text to speak and a session attribute with the key
favoriteColor and the value "blue".
Alexa: I now know your favorite color is blue.
User: What was that color again?
Alexa sends the skill the
WhatsMyColorIntent intent. The request includes the
favoriteColor session attribute. The skill retrieves this attribute to build the response.
Alexa: Your favorite color is blue.
The Alexa Skills Kit SDKs provide an
AttributesManager to add session attributes to your response, and then retrieve the attributes from an incoming request.
This example shows how an intent handler can save data into the session attributes. In this example, the
FavoriteColorIntent has a single, required slot called
favoriteColor. The intent is configured with auto-delegation, so Alexa prompts the user to fill the slot if the user does not provide a value initially.
This sample code uses the Alexa Skills Kit SDK for Node.js (v2).
const FavoriteColorIntentHandler = { canHandle(handlerInput) { getRequestType(handlerInput.requestEnvelope) === 'IntentRequest' && getIntentName(handlerInput.requestEnvelope) === 'FavoriteColorIntent' && getDialogState(handlerInput.requestEnvelope) === 'COMPLETED'; }, handle(handlerInput) { const sessionAttributes = handlerInput.attributesManager.getSessionAttributes(); const favoriteColor = getSlotValue(handlerInput.requestEnvelope, 'favoriteColor') sessionAttributes.favoriteColor = favoriteColor; handlerInput.attributesManager.setSessionAttributes(sessionAttributes); const speechText = `I saved the value ${favoriteColor} in the session attributes. Ask me for your favorite color to demonstrate retrieving the attributes.`; const repromptText = `You can ask me, what's my favorite color?`; return handlerInput.responseBuilder .speak(speechText) .reprompt(repromptText) .getResponse(); } };
This sample code uses the Alexa Skills Kit SDK for Python.
from ask_sdk_core.dispatch_components import AbstractRequestHandler from ask_sdk_core.utils import is_intent_name, get_dialog_state, get_slot_value from ask_sdk_core.handler_input import HandlerInput from ask_sdk_model import Response, DialogState class FavoriteColorIntentHandler(AbstractRequestHandler): """Handler for FavoriteColorIntent.""" def can_handle(self, handler_input): # type: (HandlerInput) -> bool return is_intent_name("FavoriteColorIntent")( handler_input) and get_dialog_state( handler_input=handler_input) == DialogState.COMPLETED def handle(self, handler_input): # type: (HandlerInput) -> Response # Get any existing attributes from the incoming request session_attr = handler_input.attributes_manager.session_attributes # Get the slot value from the request and add it to the session # attributes dictionary. Because of the dialog model and dialog # delegation, this code only ever runs when the favoriteColor slot # contains a value, so a null check is not necessary. fav_color = get_slot_value("favoriteColor") session_attr["favoriteColor"] = fav_color # The SDK automatically saves the attributes to the session, # so that the value is available to the next intent speech_text = ("I saved the value {} in the session attributes. " "Ask me for your favorite color to demonstrate " "retrieving the attributes.").format(fav_color) reprompt_text = "You can ask me, what's my favorite color?" return handler_input.response_builder.speak(speech_text).ask( reprompt_text).DialogState; import com.amazon.ask.model.IntentRequest; import com.amazon.ask.model.Response; import com.amazon.ask.request.RequestHelper; public class FavoriteColorIntentHandler implements IntentRequestHandler { @Override public boolean canHandle(HandlerInput handlerInput, IntentRequest intentRequest) { // This intent is configured with required slots and auto-delegation, // so the handler only needs to handle completed dialogs. return handlerInput.matches(intentName("FavoriteColorIntent")) && intentRequest.getDialogState() == DialogState.COMPLETED; } @Override public Optional<Response> handle(HandlerInput handlerInput, IntentRequest intentRequest) { RequestHelper requestHelper = RequestHelper.forHandlerInput(handlerInput); // Get any existing attributes from the incoming request AttributesManager attributesManager = handlerInput.getAttributesManager(); Map<String,Object> attributes = attributesManager.getSessionAttributes(); // Get the slot value from the request and add to a map for the attributes. // Because of the dialog model and dialog delegation, this code only ever // runs when the favoriteColor slot contains a value, so a null check // is not necessary. Optional<String> favoriteColor = requestHelper.getSlotValue("favoriteColor"); attributes.put("favoriteColor", favoriteColor.get()); // This saves the attributes to the session, so the value is available // to the next intent. attributesManager.setSessionAttributes(attributes); // Include a reprompt in the response to automatically set // shouldEndSession to false. return handlerInput.getResponseBuilder() .withSpeech("I saved the value " + favoriteColor.get() + " in the session attributes. Ask me for your favorite color" + " to demonstrate retrieving the attributes." ) .withReprompt("You can ask me, what's my favorite color?") .build(); } }
This is the JSON response sent by the
FavoriteColorIntentHandler. Note that the
sessionAttributes object includes the
favoriteColor attribute:
{ "version": "1.0", "sessionAttributes": { "favoriteColor": "blue" }, "response": { "outputSpeech": { "type": "SSML", "ssml": "<speak>I saved the value blue in the session attributes. Ask me for your favorite color to demonstrate retrieving the attributes.</speak>" }, "reprompt": { "outputSpeech": { "type": "SSML", "ssml": "<speak>You can ask me, what's my favorite color?</speak>" } }, "shouldEndSession": false } }
This example shows how an intent handler can access data from the session attributes. In this example, the
WhatsMyColorIntent has no slots. It retrieves previously set data from the session attributes to respond. If the data doesn't yet exist (because the user invoked this intent before invoking
FavoriteColorIntent), the handler uses the
Dialog.ElicitSlot directive to invoke
FavoriteColorIntent and prompt for the missing data.
This sample code uses the Alexa Skills Kit SDK for Node.js (v2).
const WhatsMyColorIntentHandler = { canHandle(handlerInput) { getRequestType(handlerInput.requestEnvelope) === 'IntentRequest' && getIntentName(handlerInput.requestEnvelope) === 'WhatsMyColorIntent'; }, handle(handlerInput) { const sessionAttributes = handlerInput.attributesManager.getSessionAttributes(); if (sessionAttributes.favoriteColor) { return handlerInput.responseBuilder .speak(`Your favorite color is ${sessionAttributes.favoriteColor}`) .getResponse(); } else { return handlerInput.responseBuilder .speak('You need to tell me your favorite color first.') .reprompt('Please tell me your favorite color.') .addElicitSlotDirective('favoriteColor') .getResponse(); } } }
This sample code uses the Alexa Skills Kit SDK for Python.
from ask_sdk_core.dispatch_components import AbstractRequestHandler from ask_sdk_core.utils import is_intent_name, get_slot_value from ask_sdk_core.handler_input import HandlerInput from ask_sdk_model import Response, Intent from ask_sdk_model.dialog import ElicitSlotDirective class WhatsMyColorIntentHandler(AbstractRequestHandler): """Handler for WhatsMyColorIntent.""" def can_handle(self, handler_input): # type: (HandlerInput) -> bool return is_intent_name("WhatsMyColorIntent")(handler_input) def handle(self, handler_input): # type: (HandlerInput) -> Response session_attr = handler_input.attributes_manager.session_attributes # The user could invoke this intent before they set their favorite # color, so check for the session attribute first. if "favoriteColor" in session_attr: fav_color = session_attr["favoriteColor"] return handler_input.response_builder.speak( "Your favorite color is {}. Goodbye".format( fav_color)).set_should_end_session(True).response else: # The user must have invoked this intent before they set their color. # Trigger the FavoriteColorIntent and ask the user to fill in the # favoriteColor slot. Note that the skill must have a *dialog model* # to use the ElicitSlot Directive. return handler_input.response_builder.speak( "You need to tell me your favorite color first.").ask( "please tell me your favorite color.").add_directive( directive=ElicitSlotDirective( updated_intent=Intent( name="FavoriteColorIntent"), slot_to_elicit="favoriteColor")).Intent; import com.amazon.ask.model.IntentRequest; import com.amazon.ask.model.Response; public class WhatsMyColorIntentHandler implements IntentRequestHandler { @Override public boolean canHandle(HandlerInput handlerInput, IntentRequest intentRequest) { return handlerInput.matches(intentName("WhatsMyColorIntent")); } @Override public Optional<Response> handle(HandlerInput handlerInput, IntentRequest intentRequest) { AttributesManager attributesManager = handlerInput.getAttributesManager(); Map <String,Object> attributes = attributesManager.getSessionAttributes(); // The user could invoke this intent before they set their favorite // color, so check for the session attribute first. if (attributes.containsKey("favoriteColor")){ String favoriteColor = attributes.get("favoriteColor").toString(); return handlerInput.getResponseBuilder() .withSpeech("Your favorite color is " + favoriteColor + ". Goodbye") .withShouldEndSession(true) .build(); } else { // The user must have invoked this intent before they set their color. // Trigger the FavoriteColorIntent and ask the user to fill in the // favoriteColor slot. Note that the skill must have a *dialog model* // to use the ElicitSlot directive. // Create the intent. Intent intent = Intent.builder() .withName("FavoriteColorIntent") .build(); return handlerInput.getResponseBuilder() .withSpeech("You need to tell me your favorite color first.") .withReprompt("Please tell me your favorite color.") .addElicitSlotDirective("favoriteColor", intent) .build(); } } }
This JSON shows the incoming
IntentRequest when the user has already provided the name of a color. Note the values in
session.attributes.
{ "version": "1.0", "session": { "new": false, "sessionId": "amzn1.echo-api.session.1", "application": { "applicationId": "amzn1.ask.skill.1" }, "attributes": { "favoriteColor": "blue" }, "user": { "userId": "amzn1.ask.account.1" } }, "context": {}, "request": { "type": "IntentRequest", "requestId": "amzn1.echo-api.request.1", "timestamp": "2019-02-13T04:22:36Z", "locale": "en-US", "intent": { "name": "WhatsMyColorIntent", "confirmationStatus": "NONE" } } }
Session attributes are useful for several scenarios:
- Keep track of state data to handle different skill states, such as a
state attribute to indicate whether the user is already in a game or ready to start a new game. Use the attribute as criteria in the code that determines whether a handler can handle a particular request. This can be especially useful if your skill asks the user multiple yes/no questions, since the response "yes" or "no" may have a different meaning depending on skill state.
- Keep track of game scores and counters.
- Save slot values that the user has provided as you continue to prompt for additional values. Note that as an alternative, you can use a dialog model and delegate the dialog to accomplish this without session attributes. See Delegate the Dialog to Alexa.
Structure of session attributes
You can pass along more complex data in the session attributes if necessary, as long as you can structure the data into a key/value map. For example, a quiz game skill might need attributes to keep track of the current state of the game, the user's current score, and the correct answer to the question that was just asked. In this example, the
quizitem attribute represents the correct answers to the current question.
{ "sessionAttributes": { "quizscore": 0, "quizproperty": "STATEHOOD_YEAR", "response": "OK. I will ask you 10 questions about the United States. ", "state": "_QUIZ", "counter": 1, "quizitem": { "name": "Nevada", "abbreviation": "NV", "capital": "Carson City", "statehoodYear": "1864", "statehoodOrder": "36" } } }
For a full sample skill that illustrates more complex session attributes for game state, see Quiz game sample skill.
Save data between sessions
Session attributes exist while the session is open. Once the session ends, any attributes associated with that session are lost. If your skill needs to remember data across multiple sessions, you need to save that data in persistent storage such as DynamoDB or S3..
|
https://developer.amazon.com/de-DE/docs/alexa/custom-skills/manage-skill-session-and-session-attributes.html
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
SparkR function spark.getSparkFiles fails when it is called on executors. For example, the following R code will fail. (See error logs in attachment.)
spark.addFile("./README.md")
seq <- seq(from = 1, to = 10, length.out = 5)
train <- function(seq) {
  path <- spark.getSparkFiles("README.md")
  print(path)
}
spark.lapply(seq, train)
However, we can run successfully with Scala API:
import org.apache.spark.SparkFiles
sc.addFile("./README.md")
sc.parallelize(Seq(0)).map{ _ => SparkFiles.get("README.md")}.first()
and also successfully with Python API:
from pyspark import SparkFiles
sc.addFile("./README.md")
sc.parallelize(range(1)).map(lambda x: SparkFiles.get("README.md")).first()
|
https://issues.apache.org/jira/browse/SPARK-19925?attachmentSortBy=fileName
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
5 powerful ThinkOrSwim scripts (screeners) for the earnings season 🔥
Hi people. The earnings season has already started, which means it’s time to make money! I have prepared 5 powerful scripts for the ThinkOrSwim trading platform (TOS), which have repeatedly helped me prepare for the market and regularly make my profit!
📈 So, the earnings season is a great time to find companies to invest in.
During the earnings period, trading volumes in shares increase and significant movements in securities occur, which is why the reporting period is characterized by enormous volatility. This is the best time for short-term traders, and for making money not on the fundamental strength of the reports themselves, but on the volatility.
💡To get started, I will give a couple of tips for traders and investors so that your hopes don't get dashed, and then let's get to the main thing!
- Carefully study the reporting information, follow the news in the TOS trading platform, carefully analyze the balance sheets of enterprises.
- To understand the specifics of the reporting season, you need to know that there is really no relationship between a company’s profit/loss and the movement of its stock price. A profitable report is not a guarantee that the stock price will necessarily go up immediately.
- Be on the alert and be extremely careful, as in the short-term framework, many factors can influence the share price: the company’s performance compared to competitors in the same industry, analysts’ forecasts in relation to real company reports, support and resistance zones on the price chart, etc. etc.
1. Scanner for the selection of stocks with earnings for the ThinkOrSwim trading platform 🔥
Stocks with earnings that were reported yesterday after the market close or today before the market open.📈
I will give several modifications for the TOS account at once. Enable or disable an option by uncommenting one of the bottom lines (remove the "#" symbol).
The fact is that it often happens that a piece of paper works well on the second day after the report, so, depending on the situation and market activity, I use either the first or the second option.
You can test the reportable stock selection scanner script for the TOS trading platform right now. ⬇️
#Earnings (today+yesterday)
#by thetrader.top
def isBefore = HasEarnings(EarningTime.BEFORE_MARKET);
def isAfter = HasEarnings(EarningTime.AFTER_MARKET);
def isDuringOrUnspecified = HasEarnings() and !isBefore and !isAfter;
def r = isBefore or isDuringOrUnspecified or isAfter[1];
plot a = r; #Variant 1 Earnings for today
#plot a = r or r[1]; #Variant 2 + 2 day after earnings
2. Scanner with the choice of shares according to the ATR and the average volume 📉
My second scanner can be called fundamental. It selects stocks by parameters that are important to me: the minimum ATR (by default set at half a dollar) and the average volume (500k shares per day). I recommend that you tune these parameters for yourself in the TOS account.
⚙️ Thinkscript has it all. And I recommend combining these two scanners for a smaller sample. A smaller sample means fewer signals, fewer signals means better results. In any case, this is my opinion.
💡What is ATR in TOS? In simple words, it is how much the price moves per day on average.
#Filter:Fundamental
#by thetrader.top
input MinATR = 0.5;
input MinAvgVolume = 500000;
# — — — — — — — —
def ATR = Average(TrueRange(high, close, low),20)[1];
def AvgVolume = Average(Volume, 65)[1];
plot Signal = ATR >= MinATR and AvgVolume >= MinAvgVolume;
3. Scanner: Change From Open in ThinkOrSwim 📉
These were all scanners for preparing for a trading session in TOS.
⚙️ The next two I already run during the session, exactly 20 minutes after opening, to choose the most delicious.
The first filter looks for stocks that made a 0.9% move after the open. If they moved like that, then there is a major player involved, which means there may be something to profit from. All that remains is to choose a good entry point❗️
I recommend testing the Change From Open scanner script in Thinkorswim right now. ⬇️
#Filter:ChangeFromOpen
#by thetrader.top
input MinChangeFromOpen = 0.9; #Change from open, %
def ChangeFromOpen = Max((High-Open)/Open*100,(Open-Low)/Open*100);
plot Signal = ChangeFromOpen >= MinChangeFromOpen;
4. Scanner: Stocks with increased volume in the TOS trading platform 📈
It is better to run this scanner in conjunction with the previous one or separately. It also looks for stocks that have a big player, not by price movement, but by volume.
⚙️ Let me remind you that you need to launch it 20–30 minutes after the open. If a stock made 120% of its average daily volume in the first 20 minutes of trading, then this is probably no accident. I put it on my watchlist and look at the picture. If something interesting is being drawn, I wait for the entry point.
Download the script for Thinkorswim here ⬇️
#Filter:VolPlay
#by thetrader.top
input VolPlay = 1.2; #+20%
input Length = 65;
def AvgVolume = Average(Volume, Length)[1];
plot Signal = (Volume >= AvgVolume*VolPlay);
5. Column: Spread in TOS 📊
And finally, the script for a watchlist column in Thinkorswim, which shows the current spread in stocks and highlights in red the ones where the spread is more than 6 cents and the risks are very high.
⚙️ Customize for yourself, and without hesitation, remove the piece of paper if the spread is very large for you.
ThinkScript columns: Spread for TOS account ⬇️
#Colume:Spread
#by thetrader.top
def Spread1= (ASK-BID)*100;
AddLabel(yes, AsText(Spread1, "%1$.0f"));
AssignBackgroundColor (if (Spread1 > 6) then Color.red else Color.black);
As you can see, all the scripts are not difficult to use, and most importantly, they are extremely clear.
💡Key point: select 10 stocks per session. There should not be many of them, otherwise, there is a possibility of missing the entry point already in the process.
The magic is that you stay focused and don't spread yourself too thin.
And remember: don't be greedy. If you don't find a good entry point, remove the stocks from the list and go home. In trading, perhaps the most important thing is to avoid making unnecessary moves just because you feel you have to do something.
Be sure to leave a comment under the article ⬇️ and click Claps 👏 and I will write about 6 indicators that will make you a huge profit in the earnings season.
**
If you do not know where to get a free TOS terminal without delay (and for Europe, it is not easy) or how to configure scanners or scripts, please contact us …😊
|
https://thinkorswim-europe.medium.com/5-powerful-thinkorswim-scripts-screeners-for-the-earnings-season-1e1d8b027cc?source=post_internal_links---------4----------------------------
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
YARD support in RubyMine
YARD is a popular Ruby documentation generation tool that is used in multiple libraries for documenting code. RubyMine helps you to work with YARD tags and documentation in various ways, for example, you can view the documentation using Quick Documentation Lookup, create missing YARD tags, and check the validity of a YARD tag. RubyMine can also utilize the YARD annotations for better code insight, it uses them to help suggest more relevant results in code completion and parameter hints for methods.
In this blog post, we’ll remind ourselves about the existing capabilities available in RubyMine for YARD and look at the new ones we’ve added.
View documentation
First of all, RubyMine allows you to display the documentation in a popup for methods, classes, etc. To do this, place the caret on the required object and press Ctrl+Q / F1 (View | Quick Documentation).
You can invoke this popup not only from the editor but from completion results, too.
The Quick Documentation popup includes links to the referenced types. This means you can follow the hyperlinks to view the related documentation.
Add and fix YARD tags
Now let’s look at how RubyMine can help us to add new tags. For example, let’s annotate the parameters of the initialize method in the Song class using @param.
To do this, place the caret on the initialize method, press Alt+Enter and select add @param tag. RubyMine will add the corresponding comment above the method and it will suggest that you specify the type of each parameter value.
As you can see, the IDE completes a parameter type when we start typing String or Integer. Note that with this release we have added completion for the Boolean type; it does not exist in Ruby, but it is used in YARD to represent both the TrueClass and FalseClass types.
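For illustration, the generated annotation for the Song example could end up looking roughly like this (the parameter names are assumptions based on the description above):
class Song
  # @param [String] name
  # @param [Integer] duration
  def initialize(name, duration)
    @name = name
    @duration = duration
  end
end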
When you edit YARD tags, RubyMine checks whether or not any duplicated or wrong tags exist, and suggests that you remove any such tags.
By default, RubyMine generates the parameter name after the parameter type (for example,
@param [String] name). To change this behavior and set the parameter name before its type (
@param name [String]), go to Settings/Preferences (⌘+, / Ctrl+Alt+S), open the Editor | Inspections page, find the Add @param tag inspection in the Ruby group, and specify the desired order.
YARD for code insight
Return and parameter types
One of the most powerful features is that RubyMine can use YARD type annotations to determine an object type. For example, in the following code, we cannot determine the type of size.
def size
  @file.size
end
If we annotate the size method with the @return tag, RubyMine will know the size type in all the places it is called. To try this, place the caret on the size method call and press Ctrl+Shift+P / ⌃⇧P (View | Type Info).
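A sketch of what that annotation could look like, assuming the file size is an Integer:
# @return [Integer] the size of the underlying file in bytes
def size
  @file.size
end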
If the method return type does not match the @return type, the editor will show you a warning.
Moreover, RubyMine will suggest completion results corresponding to the specified method.
Of course, type detection works for method parameters, too. If you pass the parameter of incorrect types (String instead of Symbol and Integer in the animation below), the editor will show a corresponding hint.
RubyMine checks the parameter types even for overridden methods whose signature matches the signature of a parent method.
We have introduced parameter and return type checking in this release. You can control it by using the YARD param type match and YARD return type match inspections (Settings/Preferences, the Editor | Inspections page).
Method Overloads
Starting with this release, RubyMine understands the @overload tag and will suggest to you all the declared overloads when calling the method.
For each method overload call, RubyMine can determine the annotated return type.
Yield parameters
Another useful feature we’ve added in this EAP release is support for the @yieldparam tag. This allows the IDE to infer block parameter types.
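As a hedged sketch of both tags (the method names and types here are illustrative, not taken from the post):
# @overload find(id)
#   @param [Integer] id
#   @return [Song]
# @overload find(name, artist)
#   @param [String] name
#   @param [String] artist
#   @return [Song]
def find(*args)
  # ...
end

# @yieldparam [Song] song each song in the playlist
def each_song
  @songs.each { |song| yield song }
end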
Try out the new features and let us know about any issues you come across in the comments section. Please also feel free to submit an issue or feature suggestion to YouTrack. Thank you!
Download RubyMine 2019.2 EAP
Cheers,
Your RubyMine Team
2 Responses to YARD support in RubyMine
Alexander Malfait says: June 25, 2019
Would be nice to be able to YARD annotate variables in HAML (view) files, especially partials
Something like:
# @var person [Person]
person.(smart autocomplete here)
Andrey Aksenov says: June 28, 2019
Hi, Alexander, thank you for the suggestion. We'll consider the possibility of using the @type tag inside view files.
https://blog.jetbrains.com/ruby/2019/06/yard-support-in-rubymine/
Hi, Tom Lane [2007-11-07 13:49 -0500]: >.

I tried this on a Debian Alpha porter box (thanks, Steve, for pointing me at it) with Debian's gcc 4.2.2. Latest sid indeed still has this bug (the floor() one is confirmed fixed), not only on Alpha, but also on sparc.

Since the simple test case did not reproduce the error, I tried to make a more sophisticated one which resembles more closely what PostgreSQL does (sigsetjmp/siglongjmp instead of exit(), some macros, etc.). Unfortunately in vain, since the test case still works perfectly with both no compiler options and also the ones used for PostgreSQL. I attach it here nevertheless just in case someone has more luck than me.

So I tried to approach it from the other side: building postgresql with CFLAGS="-O0 -g" or "-O1 -g" works correctly, but with "-O2 -g" I get the above bug. So I guess I'll build with -O1 for the time being on sparc and alpha to get correct binaries until this is sorted out.

Any idea what else I could try?

Thanks, Martin

--
Martin Pitt
Ubuntu Developer
Debian Developer
#include <stdio.h>
#include <stdlib.h>
#include <setjmp.h>

#define ERROR 20

#define ereport(elevel, rest) \
    (errstart(elevel, __FILE__, __LINE__, __func__) ? \
     (errfinish rest) : (void) 0)

#define PG_RE_THROW() \
    siglongjmp(PG_exception_stack, 1)

sigjmp_buf PG_exception_stack;

int errstart(int elevel, const char *filename, int lineno, const char *funcname)
{
    printf("error: level %i %s:%i function %s\n", elevel, filename, lineno, funcname);
    return 1;
}

void errfinish(int dummy, const char* msg)
{
    puts(msg);
    PG_RE_THROW();
}

int do_div(char** argv)
{
    int arg1 = atoi(argv[1]);
    int arg2 = atoi(argv[2]);
    int result;

    if (arg2 == 0)
        ereport(ERROR, (1, "division by zero"));
    result = arg1 / arg2;
    return result;
}

int main(int argc, char **argv)
{
    if (sigsetjmp(PG_exception_stack, 0) == 0) {
        int result = do_div(argv);
        printf("%d\n", result);
    } else {
        printf("caught error, aborting\n");
        return 1;
    }
    return 0;
}
Attachment:
signature.asc
Description: Digital signature
https://lists.debian.org/debian-alpha/2007/12/msg00002.html
Units¶
There are several places in Toyplot where you will need to specify quantities with real-world units, including canvas dimensions, font sizes, and target dimensions for document-oriented backends such as toyplot.pdf. For example, when creating a canvas, whether explicitly or implicitly through the Convenience API, you could specify its width and height in inches:
[1]:
import numpy
x = numpy.linspace(0, 1)
y = x ** 2
[2]:
import toyplot
toyplot.plot(x, y, width="3in", height="2in");
You can also specify the quantity and units separately:
[3]:
toyplot.plot(x, y, width=(3, "in"), height=(2, "in"));
If you rendered either plot using the PDF backend, the resulting document size would be 3 inches × 2 inches.
If you don’t specify any units, the canvas assumes a default unit of CSS pixels:
[4]:
toyplot.plot(x, y, width=600, height=400);
Note: You're probably used to treating pixels as dimensionless; however, CSS pixels are always 1/96th of an inch. Thus, the above example would produce a PDF document that is 6.25 inches × roughly 4.17 inches.
If you rendered the same canvas using the PNG or MP4 backends, it would produce 600 × 400 pixel images / movies. Put another way, the backends that produce raster images always assume 96 DPI, unless overridden by the caller.
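For instance, a minimal sketch of rendering the same canvas with both kinds of backends (the file names are arbitrary, and the PNG backend may require extra rasterization dependencies in your environment):

import toyplot.pdf
import toyplot.png

canvas, axes, mark = toyplot.plot(x, y, width=600, height=400)
toyplot.pdf.render(canvas, "figure.pdf")  # 600 x 400 CSS pixels -> 6.25in x ~4.17in page
toyplot.png.render(canvas, "figure.png")  # 600 x 400 raster pixels at the assumed 96 DPI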
Allowed Units¶
The units and abbreviations currently understood by Toyplot are as follows:
- centimeters - “cm”, “centimeter”, “centimeters”
- decimeters - “dm”, “decimeter”, “decimeters”
- inches - “in”, “inch”, “inches”
- meters - “m”, “meter”, “meters”
- millimeters - “mm”, “millimeter”, “millimeters”
- picas (1/6th of an inch) - “pc”, “pica”, “picas”
- pixels (1/96th of an inch) - “px”, “pixel”, “pixels”
- points (1/72nd of an inch) - “pt”, “point”, “points”
Functions that accept quantities with units as parameters will always accept them in either of two forms:
- A string that combines the value and unit abbreviations: “5in”, “12px”, “25.4mm”.
- A 2-tuple containing a number and string unit abbreviation: (5, “in”), (12, “px”), (25.4, “mm”).
In addition, some functions will also accept a single numeric value, with a documented default unit of measure (such as the canvas width and height discussed above).
Further, some functions may accept quantities with “%” as the units. In this case, the quantity will be relative to some other documented value.
Style Units¶
Toyplot style parameters always explicitly follow the CSS standard. As such, they support a subset of unit abbreviations including “cm”, “in”, “mm”, “pc”, “px”, and “pt”. Although CSS provides additional units for relative dimensioning, they assume that the caller understands their relationship to the underlying Document Object Model (DOM). Because Toyplot does not expose the DOM to callers and may change it at any time, these units are not supported.
https://toyplot.readthedocs.io/en/latest/units.html
Resource allocation strategy used by hardware scheduler resources. More...
#include "llvm/MCA/HardwareUnits/ResourceManager.h"
Resource allocation strategy used by hardware scheduler resources.
Definition at line 47 of file ResourceManager.h.
Definition at line 52 of file ResourceManager.h.
References select(), and ~ResourceStrategy().
Referenced by ResourceStrategy().
Selects a processor resource unit from a ReadyMask.
Implemented in llvm::mca::DefaultResourceStrategy.
Referenced by llvm::mca::DefaultResourceStrategy::DefaultResourceStrategy(), and ResourceStrategy().
Called by the ResourceManager when a processor resource group, or a processor resource with multiple units, has become unavailable.
The default strategy uses this information to bias its selection logic.
Reimplemented in llvm::mca::DefaultResourceStrategy.
Definition at line 62 of file ResourceManager.h.
Referenced by llvm::mca::DefaultResourceStrategy::DefaultResourceStrategy().
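As a rough sketch of how a custom strategy might be written (assuming the interface is essentially the pure virtual select(ReadyMask) plus the optional used() callback described above; this is illustrative, not the DefaultResourceStrategy implementation):

#include "llvm/MCA/HardwareUnits/ResourceManager.h"

namespace {
// Hypothetical strategy: always pick the lowest-numbered ready resource unit.
class PickLowestStrategy : public llvm::mca::ResourceStrategy {
public:
  uint64_t select(uint64_t ReadyMask) override {
    // Isolate the lowest set bit of the ready mask.
    return ReadyMask & (~ReadyMask + 1);
  }
  void used(uint64_t ResourceMask) override {
    // A smarter strategy could use this information to bias future selections;
    // this sketch simply ignores it.
  }
};
} // namespace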
https://llvm.org/doxygen/classllvm_1_1mca_1_1ResourceStrategy.html
While some statistics show that utility-based CSS is more popular (according to this link), some people still prefer to use a framework based on Material Design, Bootstrap, or others to stay aligned with their UX designers. In this post the focus is on Vuetify, specifically using it with JSS.
Vuetify is based on Material Design
If you use Vue and prefer to use Material Design, then maybe go for Vuetify!
Vuetify is the component library for Vue.js and has been in active development since 2016. The goal of the project is to provide users with everything that is needed to build rich and engaging web applications using the Material Design specification.
They also provide support, so if you are really stuck you can book a consultant online and discuss your problem.
Vuetify Version 2
Version 2 has some breaking changes compared to version 1 that you should consider during installation.
Steps to set up Vuetify in a JSS Vue-based app
- Install the vuetify library
- Install sass-loader and vuetify-loader
- Inject vuetify into the Vue instance
- Override vuetify variables and brand colors
- Add v-app as a required element
- Configure sass-loader to inject variables once Vuetify is loaded
- Use VuetifyLoaderPlugin for tree-shaking
Install vuetify and all loaders
Basically, you can either install a plugin that takes care of a few things for you, or install the libraries manually. To use the plugin, add vue-cli-plugin-vuetify to your Vue app; you can run the Vue CLI UI and add the plugin through the graphical UI.
If you are installing it manually, run these commands or look at the Vuetify docs:
npm install vuetify (version 2+)
npm install sass sass-loader vuetify-loader
Introduce Vuetify to your app
You can create a folder for Vuetify and, inside its index.js file, import the Vuetify library. You can also define your brand colors in a separate js file and inject them into Vuetify.
import Vue from 'vue';
import Vuetify from 'vuetify/lib';
import colors from "./colors";

Vue.use(Vuetify);

const opts = {
  theme: {
    themes: colors
  }
}

export default new Vuetify(opts);
A sample color.js file looks like this:
const colors = {
  "light-gray": "#F5F5F5",
  "green": "#48A23F",
  "red": "#D32F2F",
};

export default colors;
Then you can inject it into the Vue instance while you are instantiating it. In a pure Vue app built by Vue CLI you can do this in the main.js file, but when you build your app with JSS, use the CreateApp.js file:
import vuetify from './vuetify';

export function createApp(initialState, i18n) {
  Vue.config.productionTip = false;

  const router = createRouter();
  const graphQLProvider = createGraphQLProvider(initialState);

  const vueOptions = {
    apolloProvider: graphQLProvider,
    router,
    vuetify,
    render: (createElement) => createElement(AppRoot),
  };

  // conditionally add i18n to the Vue instance if it is defined
  if (i18n) {
    vueOptions.i18n = i18n;
  }

  const app = new Vue(vueOptions);

  // if there is an initial state defined, push it into the store, where it can be referenced by interested components.
  if (initialState) {
    app.$jss.store.setSitecoreData(initialState);
  }

  return { app, router, graphQLProvider };
}
Kick off Vuetify by adding v-app
v-app is a required element, and you need to add it at the first level of the Vue template. In a pure Vue application just add it to App.vue, but if you use JSS, add it to your Layout.vue file.
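A rough sketch of the idea, assuming Vuetify 2's v-app and v-content components; the inner markup is simply whatever your Layout.vue already renders:

<template>
  <v-app>
    <!-- wrap the existing JSS layout (placeholders, router view, etc.) inside v-app -->
    <v-content>
      <!-- existing Layout.vue content goes here -->
    </v-content>
  </v-app>
</template>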
Override default Vuetify variables
Vuetify version 1 used to work with Stylus, which was a bit of a problem; however, in version 2 they moved to Sass, which is more comfortable and consistent for me.
You need to build your own Styles folder and define the variable.scss file.
Then you need to inject it into Vuetify while it is loading, which requires vue.config.js (or webpack.js if you use webpack directly).
In vue.config.js add these lines:
let vueConfig = {};

// Inject the custom scss file into vuetify to override default variables
vueConfig.css = {
  loaderOptions: {
    scss: {
      data: "@import '@/Styles/variable.scss'"
    }
  }
};
Tree shaking
If you want to drop unused components from your build, use the vuetify-loader/lib/plugin plugin.
const VuetifyLoaderPlugin = require('vuetify-loader/lib/plugin');

vueConfig.configureWebpack = (config) => {
  config.plugins.push(new VuetifyLoaderPlugin());
}

module.exports = vueConfig;
https://www.nellysattari.com/jss-part3/
I wrote a C++ plugin that does some stuff with FMOD. The plugin itself seems to work just fine, and I can use it in a Windows standalone build when I copy the fmodex.dll next to the executable. However, if I run the game in the editor I always get a DllNotFoundException.
My understanding of DLLs on Windows is that Windows will look up the DLL in the system32 folder (XP), then in the application folder and the current working directory. So I printed out the current working directory, which - no surprise - is the project's root folder. However, copying the fmodex.dll to that folder doesn't get rid of the exception.
I have another project where I did something very similar (only with another dll) and this works. So any ideas what could go wrong here?
Answer by StephanK · Jan 20, 2010 at 12:00 PM
I think I finally have a clue what's going on. Since Unity 2.6 switched to using FMOD as its audio core (which is a great idea!), there is a "fmodex.dll" in the editor application directory. Since my plugin was probably built using a different version of the fmodex lib, it seems to find the one in the app directory and tries to load that. But since they are different versions, the loader is not satisfied and I get the DllNotFoundException. At least that's what I think is going on.
Answer by Brian-Kehrer · Jan 17, 2010 at 03:19 PM
At least on the Mac, you are supposed to create a folder titled 'Plugins' inside the project view (assets directory). All plugin DLLs are to reside within that folder.
Try the same thing on windows.
From Unity Manual on Plugins
and an excerpt:
Once you have built your bundle you have to copy it to Assets->Plugins folder. Unity will then find it by its name when you define a function like this:
[DllImport ("PluginName")]
private static extern float FooPluginFunction ();
Please note that PluginName should not include the extension of the filename. Be aware that whenever you change code in the Plugin you will have to recompile scripts in your project or else the plugin will not have the latest compiled code.
I did that. The problem is not that it simply not finds the plugin dll, but rather it does find it and then tries to find the fmodex.dll it depends on. I've put the dll in the project root, assets and assets/plugins folder just to be sure, but it just doesn't find it. The strange thing is, that when I build the project and start it standalone it DOES find the fmodex.dll which is right next to the executable...
I understand now. Unfortunately I don't have an answer, I haven't tried interdependent plugins.
Normally this works as long as the dependencies live in the projects root folder. I think my problem is a bit more obscure as fmodex.dll seems to somehow depend on the dreaded dwmapi.dll and I'm on XP. Only thing I really don't get is why it's working in a standalone build, as the dll search paths should be the same for both.
Answer by JC · Feb 11, 2010 at 11:06 AM
I got the same problem here when trying to load FModEx.dll + FMod_Event.dll in the Editor (build works fine).
The conclusion I reached was that the Unity editor's FModEx.dll (the one Unity uses since 2.6.1) is causing "DLL Hell": since the editor process binds its own DLL, we can't load our version. FMod_Event.dll has a dependency on FModEx.dll, but it does not recognize Unity's version. The result is that you get a "DllNotFoundException" when FMod_Event tries to find FModEx.
The solution I found was renaming my version, not unity's, of FModEx.dll to FModE2.dll and changing every occurrence of that name inside both dlls, Ex and Event, with a HexEditor. Now everything works fine. :-D
When building for Mac you will probably have to make your wrapper point towards the regular dylibs. Something like this:
public class VERSION
{
    public const int number = 0x00042808;
#if MAC
    public const string dll = "libfmodex.dylib";
#else
    public const string dll = "fmode2.dll";
#endif
}
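Presumably the wrapper's P/Invoke declarations then reference that constant, along these lines (a sketch; the imported function is made up for illustration, not a real FMOD export):

using System.Runtime.InteropServices;

public class MyFmodWrapper
{
    // The native library name comes from the VERSION class above, so the
    // renamed fmode2.dll only needs to be referenced in one place.
    [DllImport(VERSION.dll)]
    private static extern float FooPluginFunction();
}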
Hm, this doesn't work for me. Did you change it inside the corresponding libfmod_event.dylib and libfmodex.dll as well?
@marco: Well, that's a good question. I don't recall changing the dylibs for the Mac build, but I'm sure I had to build another wrapper pointing to them instead of the regular DLLs.
ah, that makes sense, I will try that.
Answer by Graeme · Jun 10, 2010 at 05:13 PM
[Removed.
https://answers.unity.com/questions/10265/dll-loading-in-editor-on-windows.html?childToView=11371
Archived | Access a local development Hyperledger Composer REST server on the internet
Access a local Hyperledger Composer REST server using Secure Gateway
Archive date: 2019-05-01. This content is no longer being updated or maintained. The content is provided “as is.” Given the rapid evolution of technology, some content, steps, or illustrations may have changed.
Note: IBM Blockchain Platform now utilizes Hyperledger Fabric for end-to-end development. Users may use Hyperledger Composer at their choice, but IBM will not provide support for it.
Developers who want to get a fast start in developing a blockchain solution can quickly begin by using Hyperledger Composer on a local development system. After a business network is created and deployed locally, it’s also possible to deploy a REST server that exposes the business network for easy access by a front-end application.
But what happens when the target front-end application is for a mobile system or is running on a cloud runtime environment, such as a Cloud Foundry application or a Docker container, and the app needs to access the local blockchain business network? Or in general, you might be looking for a way to establish a connection to a network service that is running on a host that has outbound internet access, but it doesn’t support inbound access. The network service might be behind a firewall or on a dynamic IP address.
You can solve these problems by creating an internet-based proxy that accepts network connections and then forwards them to the service of interest. In this tutorial, you learn how to create this proxy for a Hyperledger Composer REST server by using the IBM Secure Gateway service.
Learning objectives
Complete this tutorial to understand how to create a REST server for a Blockchain business network and how to make it available on the internet. The tutorial shows how to configure a simple business network using Hyperledger Composer running on a local virtual machine. Then, you use the IBM Secure Gateway service to provide an internet-reachable network service that proxies connections to the REST server on the virtual machine.
Prerequisites
To complete this tutorial, you need:
- Vagrant
- VirtualBox
- An IBM Cloud pay-as-you-go, subscription or trial account.
This tutorial does not cover the development of blockchain business networks using Hyperledger Composer. For more information about developing those blockchain business networks, see the Hyperledger Composer tutorials.
Estimated time
The steps in this tutorial take about 30-45 minutes to complete. Add time to create the desired business network, if you are creating one from scratch.
Steps
Complete the following steps to create a local virtual machine (VM) that is capable of serving a Composer Business Network as a REST API endpoint. First, you use Vagrant to configure a VM with Docker support. After the VM is configured, continue by following the Hyperledger Composer set-up steps for a local environment at Installing the development environment. Finally, after you have the Composer REST server running locally, configure a Secure Gateway instance to expose the API on the IBM Cloud.
Configure a VM with Docker support
Create a directory for the project:
mkdir composer
Copy the contents of the Vagrantfile into the directory.
Start the Vagrant image from the directory (this might take a little while):
vagrant up
After the VM is up, log in to start configuring Hyperledger Fabric:
vagrant ssh
Set up Hyperledger Composer
Follow the pre-requisite setup steps for a local Hyperledger Composer environment for Ubuntu at Installing prerequisites. Complete these steps as an ordinary user and not a root user on the VM. Log out from vagrant with exit and reconnect with vagrant ssh when prompted.
curl -O
chmod u+x prereqs-ubuntu.sh
./prereqs-ubuntu.sh
After you finish installing pre-requisites, set up the Hyperledger Fabric local development environment as described at Installing the development environment, starting with the CLI tools.
npm install -g composer-cli@0.20
npm install -g composer-rest-server@0.20
npm install -g generator-hyperledger-composer@0.20
npm install -g yo
Install Composer Playground.
npm install -g composer-playground@0.20
Optional: Follow the IDE setup steps in Step 3 of Installing the development environment.
Complete Step 4 from the set-up instructions to get Hyperledger Fabric docker images installed.
mkdir ~/fabric-dev-servers && cd ~/fabric-dev-servers
curl -O
tar -xvf fabric-dev-servers.tar.gz
cd ~/fabric-dev-servers
export FABRIC_VERSION=hlfv12
./downloadFabric.sh
Proceed with the steps under “Controlling your dev environment” to start the development fabric and create the PeerAdmin card:
cd ~/fabric-dev-servers
export FABRIC_VERSION=hlfv12
./startFabric.sh
./createPeerAdminCard.sh
Start the web app for Composer (“Playground”). Note: Starting the web app does not start up a browser session automatically as described in the documentation, because the command is running inside the VM instead of on the workstation.
composer-playground
After the service starts, navigate in a browser tab to the Composer Playground port on localhost (this local port is mapped by the Vagrantfile configuration to the VM).
Develop a business network and test in the Composer Playground as usual. If you’ve never used composer playground, the Playground Tutorial is a good place to start.
After you have completed testing the intended business network, deploy the Composer REST server, providing the card for the network owner (admin@marbles-network in this example). See Step 5 from Developer tutorial for creating a Hyperledger Composer solution for explanations of the responses to the input prompts. The Secure Gateway connectivity steps in this tutorial were tested with the following options.
composer-rest-server
? Enter the name of the business network card to use: admin@marbles-network
? Specify if you want namespaces in the generated REST API: always use namespaces
? Specify if you want to use an API key to secure the REST API: No
? Specify if you want to enable authentication for the REST API using Passport: No
? Specify if you want to enable the explorer test interface: Yes
? Specify a key if you want to enable dynamic logging:
? Specify if you want to enable event publication over WebSockets: Yes
? Specify if you want to enable TLS security for the REST API: No
To restart the REST server using the same options, issue the following command:
composer-rest-server -c admin@marbles-network -n always -u true -w true
Discovering types from business network definition ...
Discovering the Returning Transactions..
Keep the REST server running in the terminal. When finished with the REST API server, you can use Ctrl-C in the terminal to terminate the server.
Test the REST API server by opening a browser to the REST server port (3000) on localhost.
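As a quick sanity check from a terminal inside the VM (assuming the default port 3000 and the standard Composer system endpoint), a request like this should return identity and version information from the REST server:

curl http://localhost:3000/api/system/ping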
Configure a Secure Gateway instance to expose the API on the cloud
Open the IBM Cloud catalog entry for Secure Gateway to create a Secure Gateway instance in your IBM Cloud account. You need either a paid account or Trial promo code. The Essentials service plan is sufficient for implementing traffic forwarding for a development hyperledger fabric network with a capacity of 500 MB/month of data transfer. Verify that this plan is selected, and click on Create.
Click Add Gateway in the Secure Gateway Service Details panel. Enter a name in the panel, for example: “Blockchain”. Keep the other gateway default settings of Require security token to connect clients and Token expiration before 90 days. Click the Add Gateway button to create the gateway.
Click the Connect Client button on the Secure Gateway Service Details panel to begin setting up the client that runs on the VM and connect to the Secure Gateway service.
Choose Docker as the option to connect the client and copy the provided docker run command with the Gateway ID and security token.
Open a new local terminal window, change directory to the folder with the Vagrantfile, and then connect to the VM using vagrant ssh. Paste the docker run command shown into this terminal to start the Secure Gateway client and leave a CLI running in the terminal. Do not close this terminal. After the container starts, you see messages like the following example, indicating a successful connection:
[2018-10-20 18:34:01.451] [INFO] (Client ID 1) No password provided. The UI will not require a password for access
[2018-10-20 18:34:01.462] [WARN] (Client ID 1) UI Server started. The UI is not currently password protected
[2018-10-20 18:34:01.463] [INFO] (Client ID 1) Visit localhost:9003/dashboard to view the UI.
[2018-10-20 18:34:01.760] [INFO] (Client ID 11) Setting log level to INFO
[2018-10-20 18:34:02.153] [INFO] (Client ID 11) The Secure Gateway tunnel is connected
[2018-10-20 18:34:02.304] [INFO] (Client ID HxzoYUW6z74_PZ9) Your Client ID is HxzoYUW6z74_PZ9
HxzoYUW6z74_PZ9>
After the client has started, close the web ui panel to display the Secure Gateway service details.
On another terminal on the vagrant VM, use the ip address show command to find the IP address of the VM. Many interfaces are listed. Select the one that begins with enp or eth. In the examples that follow, the VM IP address is 10.0.2.15.
Return to the terminal for the Secure Gateway client Docker container and create an ACL entry that allows traffic to the Composer REST API server running on port 3000:
acl allow 10.0.2.15:3000 1
Define a basic http connection through the Secure Gateway service to the Composer REST API server. For more advanced security settings refer to the Secure Gateway documentation. Click on the Destinations tab in the Secure Gateway service details. Next, click on the “+” icon to open the Add Destination wizard. Select the Guided Setup option.
For the “Where is your resource located?” item, select On-premises and then click on Next.
For “What is the host and port of your destination?”, put in the VM IP address found earlier as the hostname and 3000 as the port. Then click on Next.
For the connection protocol, select HTTP and then click on Next.
For the destination authentication, select None and then click on Next.
Skip entry of the IP address and ports for the options to “… make your destination private, add IP table rules below” step and click on Next.
Enter a name like Composer REST server for the name of the destination and click on Add Destination.
Click on the gear icon for the tile of the destination that was just created to display the details. Copy the Cloud Host : Port, which looks something like cap-sg-prd-2.integration.ibmcloud.com:17870. This host and port is the cloud endpoint that can be accessed. Traffic is forwarded by the Secure Gateway service to the running Composer REST server.
Append /explorer after the host and port and open this URL in a web browser. For the example, the final URL would be the cloud host and port above followed by /explorer.
Summary
At this point you should be able to access the Composer REST server to perform actions in the deployed business network, using the host name and the port from the Secure Gateway destination. This server is reachable from any system with access to the internet and is best suited to development and testing, and not production use.
You can develop the application locally on the host (instead of within the vagrant VM) without going out to the cloud endpoint. The Vagrantfile maps the local port 3000 to the Composer REST server. This mapping allows you to use the localhost endpoint when developing your application locally. When deploying to the cloud (as a Cloud Foundry application or Docker container), switch the endpoint to the cloud URL (for example, the Secure Gateway destination host and port).
The Hyperledger Composer can generate a basic Angular interface to the business network. This step is described in Writing Web Applications.
To see how to deploy this Angular application to Cloud Foundry using DevOps, check out the Continuously deploy your Angular application tutorial. There are two changes to the tutorial for the generated Angular application. First, use the full project contents by leaving the Build Archive Directory empty in the Delivery Pipeline Build stage. Second, the application reads the REST API server endpoint from the environment; set this in the Delivery Pipeline Deploy stage by adding an environment property of REST_SERVER_URL with a value of the cloud URL.
https://developer.ibm.com/tutorials/access-local-hyperledger-composer-rest-server-secure-gateway/
Sparse tensors, used during the assembly. More...
#include "gmm/gmm_except.h"
#include "bgeot_config.h"
#include "dal_bit_vector.h"
#include <iostream>
#include <bitset>
Go to the source code of this file.
Sparse tensors, used during the assembly.
As an example, let's say that we have a tensor t(i,j,k,l) of dimensions 4x2x3x3, with t(i,j,k,l!=k) == 0.
Then the tensor shape will be represented by a set of 3 objects of type 'tensor_mask':
mask1: {i}, "1111"
mask2: {j}, "11"
mask3: {k,l}, "100" "010" "001"
They contain a binary tensor indicating the non-null elements.
The set of these three masks define the shape of the tensor (class tensor_shape)
If we add information about the location of the non-null elements (by means of strides), then we have an object of type 'tensor_ref'.
Iteration on the data of one or more tensor should be done via the 'multi_tensor_iterator', which can iterate over common non-null elements of a set of tensors.
maximum (virtual) number of elements in a tensor: 2^31
maximum number of dimensions: 254
"ought to be enough for anybody"
Definition in file bgeot_sparse_tensors.h.
http://download-mirror.savannah.gnu.org/releases/getfem/doc/getfem_reference/bgeot__sparse__tensors_8h.html